Support Streaming Audio Data via recognizeUsingWebSocket #1000
Comments
This issue has been automatically marked as stale because it has had no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@repjarms is this feature still on the roadmap?
@digitallysavvy Yes, the plan now is to get it out in a feature-level release next week. While I am working on it, can you please provide some more details about the type of interface you are looking for? For example, are you expecting to send an array of bytes, or to configure AVAudioSession in a particular way? Any information about how this feature will be used will help me make sure I am delivering something that addresses your needs.
@repjarms Great news!
@repjarms We're expecting to send an array of bytes. Agora's SDK passes a raw audio buffer, which I'm converting into a Data object.
@repjarms when @zontan was testing, he was able to send the Data in bursts, but it was not a continuous stream (socket) in the way the WatsonMic implementation works.
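For reference, here is a minimal sketch of the kind of conversion being described: wrapping a raw PCM buffer in a Data value. The callback name, its parameters, and audioChunkHandler are hypothetical stand-ins, not the actual Agora SDK API:

```swift
import Foundation

// Placeholder for whatever forwards each chunk to the speech service;
// with the current SDK, each chunk would need its own one-shot
// recognize call, which is the limitation this issue describes.
var audioChunkHandler: (Data) -> Void = { _ in }

// Hypothetical Agora-style callback delivering a raw buffer of
// interleaved 16-bit PCM samples (not the real Agora signature).
func onAudioFrame(buffer: UnsafeMutableRawPointer, samples: Int, channels: Int) {
    let byteCount = samples * channels * MemoryLayout<Int16>.size
    // Copy the raw bytes into a Data value the SDK can accept.
    let chunk = Data(bytes: buffer, count: byteCount)
    audioChunkHandler(chunk)
}
```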
@repjarms I realize there probably isn't an update on this, but I'm leaving a comment to keep the issue from closing.
Still active |
@mediumTaj can we keep this active?
Is there still a chance to get access to the mic buffer?
When you open an issue for a feature request, please add as much detail as possible:

Currently, the interface exposed in SpeechToTextV1/SpeechToText+Recognize.swift only leaves a SpeechToTextSession alive for the time that it takes to transcribe a Data blob. We should add support for sending smaller chunks of data in real time as part of one session, to support streaming audio applications that are not driven via the microphone.
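As a rough illustration of the requested behavior, here is a sketch of what session-level streaming could look like. It assumes SpeechToTextSession exposes the persistent WebSocket interface (connect, startRequest, sendAudio, stopRequest) documented for earlier SDK releases; the initializer and exact signatures are assumptions and may differ from whatever ships:

```swift
import Foundation
import SpeechToTextV1  // Watson Swift SDK

// Sketch only: API names follow the WebSocket interface documented
// for earlier SDK releases and may differ in the shipping release.
let session = SpeechToTextSession(apiKey: "your-api-key")  // initializer is an assumption

// Print each interim or final transcript as it arrives.
session.onResults = { results in
    print(results.bestTranscript)
}

// Raw 16-bit, 16 kHz mono PCM, matching the chunks built in the
// conversion sketch earlier in this thread.
var settings = RecognitionSettings(contentType: "audio/l16;rate=16000;channels=1")
settings.interimResults = true

session.connect()
session.startRequest(settings: settings)

// Feed chunks as they arrive from the external source (e.g. the
// hypothetical audioChunkHandler above), instead of passing one
// complete Data blob to a single recognize call.
func stream(_ chunk: Data) {
    session.sendAudio(audio: chunk)
}

// When the external stream ends:
// session.stopRequest()
// session.disconnect()
```

The point of the sketch is the shape of the interface: the session outlives any single Data blob, and callers push arbitrary chunks for as long as the external audio source keeps producing them.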