Genkit
overview
Genkit allows us to interact with different LLM providers through a single, unified interface. It lets us compose prompts together within a flow, and it ships a developer UI to test and debug prompts and flows.
packages
genkit
genkitx-openai
dotenv # to load API keys from a .env file
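These can be installed in one step (assuming an npm-based project):

```shell
npm install genkit genkitx-openai dotenv
```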
quick setup
get an ai instance with genkit()
import "dotenv/config"
import { genkit } from "genkit"
import openAI, { gpt4o } from "genkitx-openai"
const ai = genkit({
plugins: [openAI({ apiKey: process.env.OPENAI_API_KEY })],
model: gpt4o,
})
quick request with an inline prompt
The inline prompt may be used for simple use cases or for prototyping.
trigger the request with generate() or generateStream(). Both expect an arguments object with a prompt property, plus other optional properties
const { text } = await ai.generate({
prompt: `Pick a number between 222 and 333`,
})
return text
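Among the optional properties, system and config are commonly useful; a sketch (the instruction text and temperature value below are illustrative, not from the original notes):

```typescript
const { text } = await ai.generate({
  system: "You answer with a number only, no prose.", // system instructions for the model
  prompt: `Pick a number between 222 and 333`,
  config: { temperature: 0.3 }, // provider-specific config, e.g. sampling temperature
})
```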
stream the response
We call generateStream(). stream is an async iterable, so we may iterate over it with for await.
const { response, stream } = ai.generateStream({
prompt: `Pick a number between 222 and 333`,
})
for await (const chunk of stream) {
process.stdout.write(chunk.text) // debug in stdout
}
// we may still use the complete response
const completeText = (await response).text
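The overview mentions composing prompts within a flow; a minimal sketch, reusing the ai instance from the setup above (the flow name and schemas are illustrative assumptions, using the z re-exported by genkit):

```typescript
import { z } from "genkit"

// a flow wraps one or more generate() calls behind typed input/output,
// and appears in the Genkit developer UI for testing and debugging
const pickNumberFlow = ai.defineFlow(
  {
    name: "pickNumberFlow", // illustrative name
    inputSchema: z.object({ min: z.number(), max: z.number() }),
    outputSchema: z.string(),
  },
  async ({ min, max }) => {
    const { text } = await ai.generate({
      prompt: `Pick a number between ${min} and ${max}`,
    })
    return text
  }
)

// call it like a regular async function
const answer = await pickNumberFlow({ min: 222, max: 333 })
```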