Hello, this is my first time working with audio, so I'm probably missing something.
I have a model that predicts guitar chords, and I'm implementing a simple Streamlit dashboard to record audio and send it for prediction. This is the code I'm using, based on this repo:
```python
import numpy as np
import streamlit as st
from st_audiorec import st_audiorec

wav_audio_data = st_audiorec()
if wav_audio_data is not None:
    audio = st.audio(wav_audio_data, format='audio/wav')
    data_s16 = np.frombuffer(wav_audio_data, dtype=np.int16, count=len(wav_audio_data)//2, offset=0)
```
My questions:
- Is it possible to directly retrieve the audio as a numpy array? I realize that `audio` in the code is a DeltaGenerator object, but I don't really know how to use it, so I called `np.frombuffer` on `wav_audio_data` instead. I'm not sure whether that is appropriate (see the sketch after this list for what I have in mind).
- Is it possible to increase the quality of the recorded audio? When I record something directly on my computer the sound is clear, but recordings made in the dashboard come out low quality.
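For reference, here is a minimal sketch of the kind of decoding I have in mind, assuming `st_audiorec()` returns the bytes of a complete WAV file (header included). If that assumption holds, parsing the bytes with `scipy.io.wavfile` should give me the sample rate and a numpy array without having to guess header offsets myself:

```python
import io

import numpy as np
from scipy.io import wavfile


def wav_bytes_to_array(wav_audio_data: bytes):
    """Parse WAV bytes into (sample_rate, samples) without manual header handling."""
    # wavfile.read accepts a file-like object, so wrap the raw bytes in BytesIO
    sample_rate, samples = wavfile.read(io.BytesIO(wav_audio_data))
    # Normalize int16 PCM samples to float32 in [-1, 1] for the model
    if samples.dtype == np.int16:
        samples = samples.astype(np.float32) / 32768.0
    # Collapse stereo to mono by averaging channels, if needed
    if samples.ndim > 1:
        samples = samples.mean(axis=1)
    return sample_rate, samples
```

This would also sidestep the offset bookkeeping of `np.frombuffer`, which as written interprets the WAV header bytes as samples too. Is this a reasonable approach, or does the component offer a more direct way?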
Thank you in advance