# Convai-Web-SDK: Interact with your favorite characters from the web browser

## Get started

The following examples use the TypeScript bindings.

```typescript
import { useRef } from 'react';
import { ConvaiClient } from 'convai-web-sdk';
import { GetResponseResponse } from 'convai-web-sdk/dist/_proto/service/service_pb';

// Initialize the Convai client.
const convaiClient = useRef(null);
convaiClient.current = new ConvaiClient({
  apiKey: string, // Enter your API key here.
  characterId: string, // Enter your character ID.
  enableAudio: boolean, // If false, character audio is still generated but not played.
  sessionId: string, // Current conversation session; can be used to retrieve chat history.
  languageCode?: string,
  textOnlyResponse?: boolean, // Optional: for chat-only applications (no audio response from the character).
  micUsage?: boolean, // Optional: disable microphone usage and access.
  enableFacialData?: boolean, // Optional: generate viseme data used for lipsync and expressions.
  faceModel?: 3,
  narrativeTemplateKeysMap: Map<string, string>, // Dynamically pass variables to the Narrative Design section and triggers.
});
```

```typescript
// Set a response callback. This may fire multiple times, as the response
// can arrive in multiple parts.
convaiClient.setResponseCallback((response: GetResponseResponse) => {
  // Live transcript, only available in audio mode.
  if (response.hasUserQuery()) {
    const transcript = response.getUserQuery();
    const isFinal = response.getIsFinal();
  }
  if (response.hasAudioResponse()) {
    const audioResponse = response.getAudioResponse();
    if (audioResponse.hasTextData()) {
      // Response text.
      console.log(audioResponse.getTextData());
    }
    if (audioResponse.hasAudioData()) {
      // Play or process the audio response.
      const audioByteArray: Uint8Array = audioResponse.getAudioData_asU8();
    }
  }

  // Actions coming soon!
});
```
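Because the callback can fire several times per utterance, interim transcript chunks need to be merged before they are displayed. One way to do that is sketched below; the `TranscriptBuffer` helper is our own, not part of the SDK, and assumes the common speech-to-text convention that interim chunks replace each other until a final chunk commits.

```typescript
// Hypothetical helper (not part of the SDK): interim chunks overwrite the
// previous interim text; a final chunk is committed permanently.
class TranscriptBuffer {
  private committed = "";
  private interim = "";

  push(text: string, isFinal: boolean): void {
    if (isFinal) {
      // Commit the finalized text and clear the interim portion.
      this.committed += (this.committed ? " " : "") + text;
      this.interim = "";
    } else {
      // Interim results replace one another until finalized.
      this.interim = text;
    }
  }

  // The full transcript as it should be rendered right now.
  current(): string {
    if (!this.interim) return this.committed;
    return this.committed ? this.committed + " " + this.interim : this.interim;
  }
}
```

Inside the callback, you would call `buffer.push(transcript, isFinal)` whenever `response.hasUserQuery()` is true, and render `buffer.current()`.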

```typescript
// Send text input.
const text = "How are you?";
convaiClient.sendTextChunk(text);
```

```typescript
// Send audio chunks.
// Starts audio recording using the default microphone.
convaiClient.startAudioChunk();

// Stop recording and finish submitting the input.
convaiClient.endAudioChunk();
```

```typescript
// End or reset a conversation session.
convaiClient.resetSession();
```
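For text chat, it is common to trim user input and skip empty messages before calling `sendTextChunk`. A minimal sketch of such a guard follows; the `TextClient` interface is our own stand-in for `ConvaiClient` so the example stays self-contained, and `sendUserMessage` is a hypothetical helper, not an SDK function.

```typescript
// Our own minimal interface standing in for ConvaiClient.
interface TextClient {
  sendTextChunk(text: string): void;
}

// Hypothetical helper: sends the message only if it is non-empty after
// trimming; returns whether anything was actually sent.
function sendUserMessage(client: TextClient, raw: string): boolean {
  const text = raw.trim();
  if (text.length === 0) return false; // ignore blank input
  client.sendTextChunk(text);
  return true;
}
```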

## Facial Expressions

To enable facial expression functionality, initialize the ConvaiClient with the necessary parameters. The enableFacialData flag must be set to true for facial expression data to be generated.

```typescript
convaiClient.current = new ConvaiClient({
  apiKey: '<apiKey>',
  characterId: '<characterId>',
  enableAudio: true,
  enableFacialData: true,
  faceModel: 3, // OVR lipsync
});
```
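With `faceModel: 3`, the generated lipsync data follows the standard set of 15 OVR (Oculus) visemes, delivered as per-frame weight arrays. How you extract those frames from the response depends on the SDK's protobuf accessors, so the sketch below only shows the engine-side half: mapping one frame of weights onto morph-target influences. The `viseme_*` morph-target naming is an assumption (it follows a common Ready Player Me / glTF convention) and `applyVisemeFrame` is our own helper, not part of the SDK.

```typescript
// The 15 OVR lipsync visemes, in their standard order.
const OVR_VISEMES = [
  "sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
  "nn", "RR", "aa", "E", "ih", "oh", "ou",
] as const;

// Hypothetical helper: write one frame of viseme weights into a
// morph-target influence map (e.g. for a Three.js avatar mesh).
function applyVisemeFrame(
  weights: readonly number[],
  morphTargets: Record<string, number>,
): void {
  OVR_VISEMES.forEach((name, i) => {
    // Missing trailing weights default to 0 (mouth at rest).
    morphTargets[`viseme_${name}`] = weights[i] ?? 0;
  });
}
```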

## Further Documentation

### Reference Videos

- Convai-Npc World (React Three Fiber)
- Real Time Lipsync with Reallusion Characters

## NPM

`convai-web-sdk`