Data Augmentation


1. Subspace Learning: Augment along the feature dimension

1.1. SVD: Frequency domain (eigen)space basis vectors

1.1.1. Same dimension for all utterances. The dimension can be set to the number of filter banks (plus first-order derivatives); see the SVD sketch after this branch

1.2. SVD: Tempo domain (eigen)space basis vectors

1.2.1. Dynamic Time Warping to get the same length

1.2.2. Encoder - Decoder to get the same length
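
A minimal sketch of the frequency-domain subspace idea under stated assumptions: stack one fixed-dimension feature vector per utterance (e.g. mean log-Mel filter banks plus first-order deltas), take the SVD to obtain basis vectors, and perturb utterances along the leading directions. The feature preparation and the scale factor are illustrative assumptions, not a specific published recipe.

```python
import numpy as np

def svd_feature_augment(features, n_basis=10, scale=0.1, seed=0):
    """features: (num_utterances, feat_dim) matrix, one row per utterance."""
    rng = np.random.default_rng(seed)
    mean = features.mean(axis=0, keepdims=True)
    # SVD of the mean-removed matrix: the rows of vt are the (eigen)space basis
    # vectors; the leading ones capture the main spectral variation.
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    basis = vt[:n_basis]                                   # (n_basis, feat_dim)
    # Perturb each utterance with a random combination of the basis vectors.
    coeffs = rng.normal(scale=scale, size=(features.shape[0], n_basis))
    return features + coeffs @ basis

# Stand-in features: 200 utterances, 80 filter banks plus first-order deltas.
augmented = svd_feature_augment(np.random.randn(200, 160))
```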

2. Semi-Supervised Training: Collect more unlabelled audio

2.1. Use the labelled data to train a bootstrap recogniser, decode the unlabelled data with it, and attach a confidence estimate to each decoding output. Set a threshold to filter out hypotheses with low confidence, then add the remaining data with their unsupervised labels to the training set and re-train the system (sketched below).

2.2. May not be suitable for disordered speech recognition, since the unlabelled audio is not available
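
A hedged sketch of the bootstrap-and-filter loop in 2.1. The recogniser interface (decode_with_confidence) and the threshold value are assumptions made for illustration.

```python
def semi_supervised_augment(bootstrap_asr, unlabelled_audio, threshold=0.9):
    """Decode unlabelled audio with a bootstrap recogniser and keep only
    hypotheses whose confidence estimate clears the threshold."""
    pseudo_labelled = []
    for audio in unlabelled_audio:
        hypothesis, confidence = bootstrap_asr.decode_with_confidence(audio)
        if confidence >= threshold:        # drop low-confidence decodes
            pseudo_labelled.append((audio, hypothesis))
    # Add these pseudo-labelled pairs to the training set and re-train.
    return pseudo_labelled
```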

3. Augmentation Policy Learning

3.1. Reinforcement Learning: Learning policies (for image) and possibly policy sequences for speech
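
A rough sketch of what a learned augmentation policy can look like, in the spirit of AutoAugment-style search: a policy is a sequence of (operation, probability, magnitude) triples, and the search procedure (reinforcement learning or otherwise) scores candidate policies by validation performance. The operation names and values here are placeholders.

```python
import random

# Hypothetical policy: each triple is (operation name, apply probability, magnitude).
POLICY = [("speed_perturb", 0.5, 1.1), ("add_noise", 0.3, 0.05)]

def apply_policy(audio, policy, ops):
    """ops maps an operation name to a function f(audio, magnitude)."""
    for name, prob, magnitude in policy:
        if random.random() < prob:
            audio = ops[name](audio, magnitude)
    return audio
```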

4. Traditional Approach: Augment on the data quantity

4.1. Time Domain

4.1.1. Speech Rate (Tempo) Perturbation

4.1.1.1. WSOLA (Waveform Similarity Overlap and Add)

4.1.1.2. PSOLA (Pitch Synchronous Overlap and Add)
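
A minimal tempo-perturbation sketch. SoX's tempo effect is WSOLA-based, so it changes duration without shifting pitch; whether the SoX bindings are available depends on the torchaudio build, so treat this as illustrative only.

```python
import torchaudio

def tempo_perturb(path, factor=1.1):
    """Time-scale an utterance by `factor` while keeping its pitch."""
    waveform, sample_rate = torchaudio.load(path)
    effects = [["tempo", str(factor)]]     # WSOLA time-scale modification in SoX
    perturbed, sample_rate = torchaudio.sox_effects.apply_effects_tensor(
        waveform, sample_rate, effects)
    return perturbed, sample_rate
```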

4.1.2. Speed Perturbation

4.1.2.1. Resample in Time Domain

4.1.2.1.1. Leads to changes in both the audio length and the spectral envelope

4.1.2.2. Perturbation & Interpolation on frame-level Spectrum

4.1.2.2.1. Theoretically the same effect as above, just a different implementation
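
A sketch of speed perturbation by resampling, in the style of the common 0.9 / 1.0 / 1.1 recipe: reinterpreting the sample rate and resampling back changes both the duration and the spectral envelope, as noted in 4.1.2.1.1.

```python
import librosa

def speed_perturb(y, sr, factor=1.1):
    """Resample so that playback at the original rate is `factor` times faster."""
    return librosa.resample(y, orig_sr=int(sr * factor), target_sr=sr)
```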

4.1.3. Add noise

4.1.4. Add reverberation
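
Hedged sketches of the additive-noise and reverberation items above. The signal-to-noise-ratio handling and convolution with a measured room impulse response follow the usual recipe; the inputs are assumed to be float waveforms at a common sampling rate.

```python
import numpy as np

def add_noise(clean, noise, snr_db):
    """Mix noise into clean speech at the requested signal-to-noise ratio (dB)."""
    noise = np.resize(noise, clean.shape)          # loop or trim noise to length
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + gain * noise

def add_reverb(clean, rir):
    """Convolve speech with a room impulse response, keeping the original length."""
    rir = rir / (np.max(np.abs(rir)) + 1e-12)      # normalise the impulse response
    return np.convolve(clean, rir)[: len(clean)]
```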

4.2. Frequency Domain

4.2.1. Vocal Tract Length Perturbation (VTLP)

4.2.1.1. Mimics differences in vocal tract length (mainly linear frequency warping)
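
A simplified VTLP sketch: warp the frequency axis of a spectrogram by a factor drawn from, say, [0.9, 1.1] to approximate differences in vocal tract length. Published VTLP uses a piecewise-linear warp with a fixed upper band; a plain linear warp is used here for brevity.

```python
import numpy as np

def vtlp(spectrogram, alpha):
    """spectrogram: (num_freq_bins, num_frames); alpha: warp factor."""
    num_bins = spectrogram.shape[0]
    bins = np.arange(num_bins)
    source_bins = np.clip(bins / alpha, 0, num_bins - 1)  # where each output bin reads from
    warped = np.empty_like(spectrogram)
    for t in range(spectrogram.shape[1]):
        warped[:, t] = np.interp(source_bins, bins, spectrogram[:, t])
    return warped

warped = vtlp(np.abs(np.random.randn(257, 100)), alpha=np.random.uniform(0.9, 1.1))
```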

4.2.2. Stochastic Feature Mapping

4.2.2.1. Estimate a maximum-likelihood linear transformation of the source speaker's features against the speaker-dependent model of the target speaker (a statistical method)

4.2.3. Perturbation on (log mel) spectrogram (SpecAugment)

4.2.3.1. Time warping: deformation of the time-series in the time direction

4.2.3.2. Time masking: mask a block of consecutive time steps

4.2.3.3. Frequency masking: mask a block of consecutive mel frequency channels
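
A minimal SpecAugment-style sketch covering the two masking operations on a log-Mel spectrogram; time warping is left out because it needs an interpolation backend. Mask counts and maximum widths are illustrative hyper-parameters.

```python
import numpy as np

def spec_augment(log_mel, num_freq_masks=2, max_f=8, num_time_masks=2, max_t=20, seed=0):
    """log_mel: (num_mel_channels, num_frames) log-Mel spectrogram."""
    rng = np.random.default_rng(seed)
    out = log_mel.copy()
    num_mels, num_frames = out.shape
    for _ in range(num_freq_masks):            # mask consecutive mel channels
        f = rng.integers(0, max_f + 1)
        f0 = rng.integers(0, max(1, num_mels - f))
        out[f0:f0 + f, :] = 0.0
    for _ in range(num_time_masks):            # mask consecutive time steps
        t = rng.integers(0, max_t + 1)
        t0 = rng.integers(0, max(1, num_frames - t))
        out[:, t0:t0 + t] = 0.0
    return out
```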

4.2.4. Generative Adversarial Network (GAN)

4.2.4.1. Offers more variation than VTLP in terms of frequency warping

4.2.4.2. Spectrogram approach: requires aligning the audio lengths of paired (control, dysarthric) utterances

4.2.4.2.1. Zero-pad the shorter audio, but this may introduce background noise into the final generated audio

4.2.4.3. (Frame level) Spectrum approach: generate the audio frame by frame based on spectrum
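
A bare-bones skeleton of the frame-level spectrum GAN idea: a generator maps control-speech spectrum frames to dysarthric-like frames and a discriminator scores real versus generated frames. The frame dimension, layer sizes, and the missing training loop are all assumptions; this only illustrates the model shape.

```python
import torch.nn as nn

FRAME_DIM = 257   # assumed number of spectrum bins per frame

# Generator: control-speech frame -> dysarthric-like frame.
generator = nn.Sequential(
    nn.Linear(FRAME_DIM, 512), nn.ReLU(),
    nn.Linear(512, FRAME_DIM))

# Discriminator: frame -> real/fake logit.
discriminator = nn.Sequential(
    nn.Linear(FRAME_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1))
```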

5. End-to-End Approach: Perturb the latent variables / vectors

5.1. Speed Perturbation (directly manipulate the time series of frequency vectors)

5.2. SpecAugment

5.3. Sub-sequence Sampling (with constraints, such as the length of the sub-sequence is greater than half of the original sequence)
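
A sketch of constrained sub-sequence sampling on a latent frame sequence, keeping the sampled window at least half of the original length as stated in 5.3.

```python
import numpy as np

def sample_subsequence(frames, rng=None):
    """frames: (num_frames, dim) sequence of latent vectors."""
    rng = rng if rng is not None else np.random.default_rng()
    num_frames = frames.shape[0]
    min_len = (num_frames + 1) // 2            # constraint: at least half the original
    length = rng.integers(min_len, num_frames + 1)
    start = rng.integers(0, num_frames - length + 1)
    return frames[start:start + length]
```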