Interface WhisperClient

interface WhisperClient {
    detectLanguage: (
        filePath: string,
    ) => Promise<undefined | LanguageDetectionResult>;
    transcribe: (
        filePath: string,
        options?: WhisperOptions,
    ) => Promise<TranscriptLine[]>;
    translate: (
        filePath: string,
        options?: WhisperOptions,
    ) => Promise<TranscriptLine[]>;
}

Properties

detectLanguage: (
    filePath: string,
) => Promise<undefined | LanguageDetectionResult>

Detects the language of the audio file.

Type declaration

    • (filePath: string): Promise<undefined | LanguageDetectionResult>
    • Parameters

      • filePath: string

        The audio file to detect the language of.

      Returns Promise<undefined | LanguageDetectionResult>

      undefined if there was an error or the language could not be detected.

The audio file must be in a processable format, just like when using translate.

Throws

If the model provided at createWhisperClient is not found.
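A minimal sketch of calling detectLanguage and handling the undefined case. The LanguageDetectionResult fields and the stub client below are illustrative assumptions; a real client comes from createWhisperClient.

```typescript
// Illustrative assumption: the real LanguageDetectionResult type
// comes from the library and may have a different shape.
interface LanguageDetectionResult {
    language: string;
    probability: number;
}

// Stub standing in for a client returned by createWhisperClient.
const client = {
    detectLanguage: async (
        filePath: string,
    ): Promise<undefined | LanguageDetectionResult> =>
        filePath.includes("noise")
            ? undefined // error, or the language could not be detected
            : { language: "en", probability: 0.98 },
};

async function main(): Promise<void> {
    const result = await client.detectLanguage("speech.wav");
    if (result === undefined) {
        console.log("Language could not be detected.");
        return;
    }
    console.log(`Detected ${result.language} (p=${result.probability})`);
}

main(); // prints "Detected en (p=0.98)"
```

Checking for undefined before reading the result is the key point: the promise resolves to undefined rather than rejecting when detection fails.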

transcribe: (
    filePath: string,
    options?: WhisperOptions,
) => Promise<TranscriptLine[]>

Type declaration

    • (filePath: string, options?: WhisperOptions): Promise<TranscriptLine[]>
    • Parameters

      • filePath: string

        The audio file to transcribe. Audio file must be in a processable format.

      • Optional options: WhisperOptions

      Returns Promise<TranscriptLine[]>

      Transcription of the audio file.

Throws

If the model provided at createWhisperClient is not found.
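A sketch of a transcribe call iterating over the returned transcript lines. The WhisperOptions field, the TranscriptLine shape, and the stub client are illustrative assumptions, not the library's actual definitions.

```typescript
// Illustrative assumptions: the real WhisperOptions and TranscriptLine
// types come from the library and may differ.
interface WhisperOptions {
    language?: string; // hypothetical option name
}

interface TranscriptLine {
    start: string;
    end: string;
    speech: string;
}

// Stub standing in for a client returned by createWhisperClient.
const client = {
    transcribe: async (
        filePath: string,
        options?: WhisperOptions,
    ): Promise<TranscriptLine[]> => [
        { start: "00:00:00.000", end: "00:00:02.500", speech: "Hello world." },
    ],
};

client.transcribe("speech.wav", { language: "en" }).then((lines) => {
    for (const line of lines) {
        console.log(`[${line.start} --> ${line.end}] ${line.speech}`);
    }
});
```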

translate: (
    filePath: string,
    options?: WhisperOptions,
) => Promise<TranscriptLine[]>

Type declaration

    • (filePath: string, options?: WhisperOptions): Promise<TranscriptLine[]>
    • Parameters

      • filePath: string

        The audio file to translate. Audio file must be in a processable format.

      • Optional options: WhisperOptions

      Returns Promise<TranscriptLine[]>

      English translation of the audio file. If the audio is already in English, the result is a transcription.

Throws

If the model provided at createWhisperClient is not found.
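A sketch of translate, which has the same signature as transcribe but always produces English output. The TranscriptLine shape and the stub client are illustrative assumptions.

```typescript
// Illustrative TranscriptLine shape; the real type comes from the library.
interface TranscriptLine {
    start: string;
    end: string;
    speech: string;
}

// Stub standing in for a client returned by createWhisperClient. A real
// client runs Whisper's translate task, which emits English text; for
// English input the result is effectively a transcription.
const client = {
    translate: async (filePath: string): Promise<TranscriptLine[]> => [
        { start: "00:00:00.000", end: "00:00:03.000", speech: "Good morning." },
    ],
};

client.translate("german-speech.wav").then((lines) => {
    console.log(lines.map((l) => l.speech).join(" ")); // prints "Good morning."
});
```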