In speech analysis, synthesis, and voice conversion, the first step is usually to extract speech feature parameters. When machine learning methods are applied to speech, the Mel spectrogram is one of the most commonly used representations. This article describes how to extract a Mel spectrogram from an audio file, and how to reconstruct an audio waveform from a Mel spectrogram.

Extracting a Mel spectrogram from an audio waveform:

* Pre-emphasize the audio signal, then frame and window it
* Apply the short-time Fourier transform (STFT) to each frame to obtain the short-time magnitude spectrum
* Pass the magnitude spectrum through a Mel filter bank to obtain the Mel spectrogram

Reconstructing the audio waveform from a Mel spectrogram:

* Convert the Mel spectrogram back to a linear magnitude spectrum
* Reconstruct the waveform with the Griffin-Lim vocoder algorithm
* Apply de-emphasis

There are many kinds of vocoders, such as WORLD and STRAIGHT, but Griffin-Lim is special: it does not need phase information to reconstruct the waveform. Instead, it iteratively estimates the phase from the relationship between frames. The synthesized audio quality is reasonably high, and the code is relatively simple.

Audio waveform to mel-spectrogram:

```python
import copy

import librosa
import numpy as np
from scipy import signal

sr = 24000            # Sample rate.
n_fft = 2048          # FFT points (samples).
frame_shift = 0.0125  # Seconds.
frame_length = 0.05   # Seconds.
hop_length = int(sr * frame_shift)   # Samples.
win_length = int(sr * frame_length)  # Samples.
n_mels = 512          # Number of Mel banks to generate.
power = 1.2           # Exponent for amplifying the predicted magnitude.
n_iter = 100          # Number of inversion iterations.
preemphasis = .97     # Or None.
max_db = 100
ref_db = 20
top_db = 15

def get_spectrograms(fpath):
    '''Returns normalized log(melspectrogram) and log(magnitude) from `fpath`.
    Args:
      fpath: A string. The full path of a sound file.
    Returns:
      mel: A 2d array of shape (T, n_mels) <- Transposed
      mag: A 2d array of shape (T, 1+n_fft/2) <- Transposed
    '''
    # Loading sound file
    y, _ = librosa.load(fpath, sr=sr)
    # Trimming
    y, _ = librosa.effects.trim(y, top_db=top_db)
    # Preemphasis
    y = np.append(y[0], y[1:] - preemphasis * y[:-1])
    # stft
    linear = librosa.stft(y=y, n_fft=n_fft, hop_length=hop_length,
                          win_length=win_length)
    # magnitude spectrogram
    mag = np.abs(linear)  # (1+n_fft//2, T)
    # mel spectrogram
    mel_basis = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)  # (n_mels, 1+n_fft//2)
    mel = np.dot(mel_basis, mag)  # (n_mels, T)
    # to decibel
    mel = 20 * np.log10(np.maximum(1e-5, mel))
    mag = 20 * np.log10(np.maximum(1e-5, mag))
    # normalize
    mel = np.clip((mel - ref_db + max_db) / max_db, 1e-8, 1)
    mag = np.clip((mag - ref_db + max_db) / max_db, 1e-8, 1)
    # Transpose
    mel = mel.T.astype(np.float32)  # (T, n_mels)
    mag = mag.T.astype(np.float32)  # (T, 1+n_fft//2)
    return mel, mag
```
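The decibel conversion and normalization used above are invertible (within the clipping range). A minimal self-contained sketch, using the same `max_db`/`ref_db` constants, checking that the normalize/de-normalize pair round-trips a magnitude value:

```python
import numpy as np

max_db, ref_db = 100, 20

def normalize(mag):
    """Amplitude -> dB -> [0, 1] range, as in get_spectrograms."""
    db = 20 * np.log10(np.maximum(1e-5, mag))
    return np.clip((db - ref_db + max_db) / max_db, 1e-8, 1)

def denormalize(x):
    """[0, 1] range -> dB -> amplitude, the inverse used at synthesis time."""
    db = (np.clip(x, 0, 1) * max_db) - max_db + ref_db
    return np.power(10.0, db * 0.05)

mag = np.array([0.01, 0.1, 1.0])
print(np.allclose(denormalize(normalize(mag)), mag, rtol=1e-6))  # True
```

Values below 10^((ref_db - max_db)/20) or above 10^(ref_db/20) are clipped, so only magnitudes inside that range survive the round trip exactly.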
mel-spectrogram to audio waveform:

```python
def melspectrogram2wav(mel):
    '''Generate a waveform from a mel spectrogram.'''
    # transpose
    mel = mel.T
    # de-normalize
    mel = (np.clip(mel, 0, 1) * max_db) - max_db + ref_db
    # to amplitude
    mel = np.power(10.0, mel * 0.05)
    m = _mel_to_linear_matrix(sr, n_fft, n_mels)
    mag = np.dot(m, mel)
    # wav reconstruction
    wav = griffin_lim(mag)
    # de-preemphasis
    wav = signal.lfilter([1], [1, -preemphasis], wav)
    # trim
    wav, _ = librosa.effects.trim(wav)
    return wav.astype(np.float32)

def spectrogram2wav(mag):
    '''Generate a waveform from a linear magnitude spectrogram.'''
    # transpose
    mag = mag.T
    # de-normalize
    mag = (np.clip(mag, 0, 1) * max_db) - max_db + ref_db
    # to amplitude
    mag = np.power(10.0, mag * 0.05)
    # wav reconstruction
    wav = griffin_lim(mag)
    # de-preemphasis
    wav = signal.lfilter([1], [1, -preemphasis], wav)
    # trim
    wav, _ = librosa.effects.trim(wav)
    return wav.astype(np.float32)
```
Several auxiliary functions:

```python
def _mel_to_linear_matrix(sr, n_fft, n_mels):
    '''Approximate pseudo-inverse of the Mel filter bank (Mel bins -> linear bins).'''
    m = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
    m_t = np.transpose(m)
    p = np.matmul(m, m_t)
    d = [1.0 / x if np.abs(x) > 1.0e-8 else x for x in np.sum(p, axis=0)]
    return np.matmul(m_t, np.diag(d))

def griffin_lim(spectrogram):
    '''Applies the Griffin-Lim algorithm.'''
    X_best = copy.deepcopy(spectrogram)
    for i in range(n_iter):
        X_t = invert_spectrogram(X_best)
        est = librosa.stft(X_t, n_fft=n_fft, hop_length=hop_length,
                           win_length=win_length)
        # Keep the estimated phase, discard the estimated magnitude.
        phase = est / np.maximum(1e-8, np.abs(est))
        X_best = spectrogram * phase
    X_t = invert_spectrogram(X_best)
    y = np.real(X_t)
    return y

def invert_spectrogram(spectrogram):
    '''spectrogram: [f, t]'''
    return librosa.istft(spectrogram, hop_length=hop_length,
                         win_length=win_length, window="hann")
```
Pre-emphasis:

The average power spectrum of a speech signal is shaped by glottal excitation and oral-nasal radiation: above roughly 800 Hz, the high-frequency end falls off at about 6 dB per octave. The purpose of pre-emphasis is to boost the high-frequency components and flatten the signal spectrum, which facilitates spectral analysis and vocal-tract parameter analysis.
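Concretely, the pre-emphasis used in the code above is the first-order filter y[n] = x[n] - 0.97·x[n-1], and de-emphasis is its inverse IIR filter applied with `scipy.signal.lfilter`. A minimal sketch, independent of the functions above, verifying that the pair cancels exactly:

```python
import numpy as np
from scipy import signal

preemphasis = 0.97

def pre_emphasize(x):
    """y[n] = x[n] - preemphasis * x[n-1], with y[0] = x[0]."""
    return np.append(x[0], x[1:] - preemphasis * x[:-1])

def de_emphasize(y):
    """Inverse IIR filter: x[n] = y[n] + preemphasis * x[n-1]."""
    return signal.lfilter([1], [1, -preemphasis], y)

x = np.sin(2 * np.pi * 440 * np.arange(1000) / 24000)  # 440 Hz tone at 24 kHz
print(np.allclose(de_emphasize(pre_emphasize(x)), x))  # True
```

Because de-emphasis runs the exact inverse recursion from the same initial sample, the round trip is lossless; in the full pipeline it is applied as the last step, after Griffin-Lim reconstruction.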
