Quick Start

A 5-minute guide to get started with NodeLLM. Install, configure, and run your first chat, image generation, and embedding scripts.

Table of contents

  1. Installation
  2. Configuration
  3. Quick Start Examples
    1. Chat
    2. Generate Images
    3. Create Embeddings
    4. Streaming
  4. Next Steps


Installation

npm install @node-llm/core
# or
pnpm add @node-llm/core

Configuration

NodeLLM reads your API keys and the active provider from environment variables, so no configuration file is required. For production apps, explicit initialization is recommended:

import "dotenv/config";
import { createLLM } from "@node-llm/core";

// Explicit initialization is recommended for production apps
const llm = createLLM({ provider: "openai" });

Alternatively, use the Zero-Config singleton for rapid prototyping; it picks up the same environment variables automatically:

import { NodeLLM } from "@node-llm/core";
const llm = NodeLLM; // Exported singleton
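
With dotenv loaded, both styles read their settings from a .env file. The variable names below are illustrative assumptions (OPENAI_API_KEY follows the common OpenAI convention; NODE_LLM_PROVIDER is hypothetical), so check the NodeLLM configuration reference for the exact keys your version expects:

```shell
# .env — variable names are illustrative; consult the NodeLLM
# configuration docs for the exact keys.
NODE_LLM_PROVIDER=openai   # hypothetical: selects the active provider
OPENAI_API_KEY=sk-...      # conventional OpenAI key variable
```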

Quick Start Examples

Chat

const chat = llm.chat(); // Uses default model
const response = await chat.ask("Explain quantum computing in 5 words.");
console.log(response.content);
// => "Computing using quantum mechanical phenomena."

Generate Images

const image = await llm.paint("A cyberpunk city with neon rain");
console.log(image.url);

Create Embeddings

const embedding = await llm.embed("Semantic search is powerful.");
console.log(`Vector dimensions: ${embedding.dimensions}`);
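
A common next step is comparing two embedding vectors with cosine similarity. The helper below is plain TypeScript with no NodeLLM dependency; how you extract the raw `number[]` vector from the embedding result is an assumption and may differ in practice:

```typescript
// Cosine similarity: 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("Vector dimensions must match");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Rank documents by their similarity to a query embedding and you have the core of a semantic search.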

Streaming

Real-time responses are essential for good UX. Stream the response token by token as it is generated:

const chat = llm.chat();
for await (const chunk of chat.stream("Write a poem")) {
  process.stdout.write(chunk.content);
}

Next Steps