Node.js bindings for OpenAI's Whisper. Hard-fork of whisper-node.
Requirements: make and everything else listed as required to compile whisper.cpp.

Install the package:

npm install @pr0gramm/fluester

Then download a model and compile whisper.cpp:
npx --package @pr0gramm/fluester download-model
npx --package @pr0gramm/fluester compile-whisper
Important: The API only supports WAV files (just like the original whisper.cpp). You need to convert any other format beforehand. You can do this using ffmpeg (example taken from the whisper project):
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
Or use the provided helper to convert the audio file:
import { convertFileToProcessableFile } from "@pr0gramm/fluester";

const inputFile = "input.mp3";
const outputFile = "output.wav";

// Converts the input to the 16 kHz mono 16-bit PCM WAV that whisper.cpp expects.
await convertFileToProcessableFile(inputFile, outputFile);
Basic usage:

import { createWhisperClient } from "@pr0gramm/fluester";

const client = createWhisperClient({
  modelName: "base",
});
const transcript = await client.translate("example/sample.wav");
console.log(transcript); // output: [ {start,end,speech} ]
[
  {
    "start": "00:00:14.310", // timestamp start
    "end": "00:00:16.480",   // timestamp end
    "speech": "howdy"        // transcription
  }
]
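Since every entry carries start and end timestamps plus the spoken text, the output maps directly onto subtitle formats. A minimal sketch (the toSrt helper is our own illustration, not part of the library) that renders the array above as SRT:

interface TranscriptLine {
  start: string; // "HH:MM:SS.mmm"
  end: string;
  speech: string;
}

// Render fluester's output as an SRT subtitle file.
// SRT uses a comma as the millisecond separator, hence the replace().
function toSrt(lines: TranscriptLine[]): string {
  return lines
    .map((line, index) =>
      [
        index + 1,
        `${line.start.replace(".", ",")} --> ${line.end.replace(".", ",")}`,
        line.speech,
        "",
      ].join("\n"),
    )
    .join("\n");
}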
Language detection:

import { createWhisperClient } from "@pr0gramm/fluester";

const client = createWhisperClient({
  modelName: "base",
});
const result = await client.detectLanguage("example/sample.wav");
if (result) {
  console.log(`Detected: ${result.language} with probability ${result.probability}`);
} else {
  console.log("Did not detect anything :(");
}
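Language detection pairs naturally with translation, for example to skip the model run when the audio is already in the target language. A sketch assuming the detected language is reported as a short code such as "en" (the exact value format is an assumption; check what your model actually returns):

import { createWhisperClient } from "@pr0gramm/fluester";

const client = createWhisperClient({
  modelName: "base",
});

const detection = await client.detectLanguage("example/sample.wav");
// Assumption: language codes look like "en", "de", ...
if (detection && detection.language !== "en") {
  // Only run the (comparatively expensive) translation for non-English audio.
  const transcript = await client.translate("example/sample.wav");
  console.log(transcript);
}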
This library is designed to work well in dockerized environments.
Compiling whisper.cpp and downloading the model are independent steps, so they can run in separate stages of a multi-stage Docker build.
# Stage 1: install dependencies, compile whisper.cpp and download a model
FROM node:latest AS dependencies
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
RUN npx --package @pr0gramm/fluester compile-whisper
RUN npx --package @pr0gramm/fluester download-model tiny

# Stage 2: copy the prepared node_modules (compiled binary and model included)
FROM node:latest
WORKDIR /app
COPY --from=dependencies /app/node_modules /app/node_modules
COPY ./ ./
This bakes the model into the image. If you want to keep your image small, you can instead download the model in your entrypoint using the commands above.
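For example, a Node.js entrypoint could fetch the model on startup before creating the client; a minimal sketch using the download command from above (the choice of the tiny model is just an example):

import { execFileSync } from "node:child_process";

// Download the model at container start instead of baking it into the image.
// This trades image size for startup time and a network dependency.
execFileSync(
  "npx",
  ["--package", "@pr0gramm/fluester", "download-model", "tiny"],
  { stdio: "inherit" },
);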