From d3e1617034b41596cf7fbd35d27449c9b8279bc5 Mon Sep 17 00:00:00 2001
From: tanuprasad530
Date: Wed, 31 Dec 2025 16:49:16 +0530
Subject: [PATCH] Update Bhashini Integrations.md

Changed the screenshot and step in the 'Using `open_ai` for TTS' section to use the correct speech-engine syntax. Added a line clarifying the current limitation that this can only be used when the source and target languages are the same.
---
 docs/5. Integrations/Bhashini Integrations.md | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/docs/5. Integrations/Bhashini Integrations.md b/docs/5. Integrations/Bhashini Integrations.md
index 916c4cf0f..4f4a91108 100644
--- a/docs/5. Integrations/Bhashini Integrations.md
+++ b/docs/5. Integrations/Bhashini Integrations.md
@@ -136,10 +136,13 @@ Please note: In order to get the voice notes as outputs, the Glific instance mus
 
 Apart from Bhashini, the OpenAI speech engine can also be used to generate text-to-speech (TTS) responses. Since we are also experimenting with Bhashini, the response quality may sometimes be inconsistent or unreliable in a few languages. This is another alternative; users can try both options to see which gives better results for their audience and language preferences.
 
 ### How to configure:
-- In the `Function Body`, set the speech engine to `open-ai`.
-- Keep the remaining steps the same as those mentioned in the Speech-to-Text section above.
+- In the `Function Body`, set the speech engine to `open_ai`.
+
+Please note that this alternative is currently supported only when both **the source and target languages are the same**.
+
+- Keep the remaining steps the same as those mentioned in the Text-to-Speech section above.
 
-Screenshot 2025-09-25 at 1 00 36 AM
+Screenshot 2025-12-31 at 4 42 02 PM
---