MCP Server Documentation

Connect coding agents to NotebookLM

The NotebookLM MCP (Model Context Protocol) server allows third-party coding agents like Claude, Kiro, and Cursor to verify code, access GitHub repositories, manage implementation plans, and save sources to your notebooks. This enables seamless integration between your AI coding workflow and research management.

  • Secure Authentication: Personal API tokens with SHA-256 hashing
  • Code Verification: Syntax, security, and best practices checks
  • GitHub Integration: Access repos, files, and create issues
  • AI Code Analysis: Automatic quality analysis for code sources

Quick Start

Step 1: Generate a Personal API Token

Before setting up the MCP server, generate a personal API token from the NotebookLM app:

  1. Open the NotebookLM app
  2. Go to Settings → Agent Connections
  3. In the API Tokens section, click Generate New Token
  4. Enter a name for your token (e.g., "Kiro Coding Agent")
  5. Optionally set an expiration date
  6. Click Generate and copy the token immediately
⚠️ Important: The token is only displayed once. If you lose it, you'll need to generate a new one.

Step 2: Install the MCP Server

Run the install script for your platform:

Windows (PowerShell):

powershell
irm https://raw.githubusercontent.com/cmgzone/notebookllmmcp/main/install.ps1 | iex

Mac/Linux:

bash
curl -fsSL https://raw.githubusercontent.com/cmgzone/notebookllmmcp/main/install.sh | bash

The script will download the MCP server and show you the configuration to add.

Step 3: Configure Your MCP Client

Add the configuration shown by the install script to your MCP config file:

json
{
  "mcpServers": {
    "notebookllm": {
      "command": "node",
      "args": ["~/.notebookllm-mcp/index.js"],
      "env": {
        "BACKEND_URL": "https://notebookllm-ufj7.onrender.com",
        "CODING_AGENT_API_KEY": "nllm_your-token-here"
      }
    }
  }
}

Step 4: Start Using the Tools

Once configured, your coding agent can use the MCP tools to verify code and save it to your notebooks. The server will automatically connect to the NotebookLM backend.
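
Most agents drive these tools automatically once the config is in place. If you are building your own integration, the sketch below shows roughly what a call looks like from a generic MCP client. It assumes the official @modelcontextprotocol/sdk TypeScript package, the verify_code tool named in the Architecture section, and argument names (code, language) borrowed from the HTTP example under Authentication; the server's actual tool schema may differ.

typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the locally installed server over stdio, mirroring the config above.
// Node does not expand "~", so the home directory is resolved explicitly.
const transport = new StdioClientTransport({
  command: "node",
  args: [`${process.env.HOME}/.notebookllm-mcp/index.js`],
  env: {
    BACKEND_URL: "https://notebookllm-ufj7.onrender.com",
    CODING_AGENT_API_KEY: process.env.CODING_AGENT_API_KEY ?? "nllm_your-token-here",
  },
});

const client = new Client({ name: "example-agent", version: "1.0.0" }, { capabilities: {} });
await client.connect(transport);

// Discover the available tools, then ask the server to verify a snippet.
console.log((await client.listTools()).tools.map((t) => t.name));

const result = await client.callTool({
  name: "verify_code",
  arguments: {
    code: "function add(a, b) { return a + b; }",
    language: "javascript",
  },
});
console.log(result);

await client.close();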

Authentication

Token Format

Personal API tokens use a specific format for easy identification:

text
nllm_[43 characters of random data]

Example: nllm_a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0u1v
  • Prefix: nllm_ (5 characters)
  • Random part: 43 characters (32 bytes base64url encoded)
  • Total length: 48 characters
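
For illustration only (this is not necessarily how the backend mints tokens), 32 random bytes encoded as base64url produce exactly the 43-character random part described above:

typescript
import { randomBytes } from "node:crypto";

// 32 random bytes -> 43 base64url characters (no "=" padding),
// which plus the 5-character "nllm_" prefix gives a 48-character token.
function generateToken(): string {
  return `nllm_${randomBytes(32).toString("base64url")}`;
}

const token = generateToken();
console.log(token.length); // 48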

Using the Token

Include the token in the Authorization header for API requests:

bash
curl -X POST http://localhost:3000/api/coding-agent/verify-and-save \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer nllm_your-token-here" \
  -d '{
    "code": "function add(a, b) { return a + b; }",
    "language": "javascript",
    "title": "Add Function"
  }'

Security Features

  • SHA-256 Hashing: Tokens are hashed before storage; the original token is never stored (see the sketch below).
  • Rate Limiting: At most 5 new tokens per hour and 10 active tokens per user.
  • Instant Revocation: Revoked tokens are immediately invalidated across all services.
  • Usage Logging: All token usage is logged for security auditing.
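
The hash-then-compare pattern behind the SHA-256 Hashing item can be sketched with Node's built-in crypto module. This illustrates the technique only, not the backend's actual code:

typescript
import { createHash, timingSafeEqual } from "node:crypto";

// At creation time, hash the raw token and store only the hex digest.
function hashToken(token: string): string {
  return createHash("sha256").update(token).digest("hex");
}

// At request time, hash the presented token and compare it to the stored digest.
function tokenMatches(presented: string, storedHash: string): boolean {
  const a = Buffer.from(hashToken(presented), "hex");
  const b = Buffer.from(storedHash, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}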

MCP Tools

GitHub Integration

Connect your GitHub account to access repositories, files, and create issues directly from coding agents. Files added as sources are automatically analyzed by AI.

Planning Mode

Create and manage implementation plans with structured requirements, design notes, and tasks. Perfect for spec-driven development workflows.

Code Analysis

When GitHub files are added as sources, they are automatically analyzed by AI to provide deep insights that improve fact-checking results.

Analysis Features

  • Quality Rating (1-10): Overall code quality score with a detailed explanation
  • Quality Metrics: Readability, maintainability, testability, documentation, error handling
  • Architecture Analysis: Detected patterns, design patterns, separation of concerns
  • Recommendations: Strengths, improvements, and security notes

Analysis Tools

get_source_analysis

Get the AI-generated code analysis for a GitHub source.

json
{ "sourceId": "source-uuid-here" }
reanalyze_source

Re-analyze a GitHub source to get fresh code analysis.

json
{ "sourceId": "source-uuid-here" }

User Settings

Configure code analysis settings in the MCP Dashboard → Settings tab:

  • Enable/Disable Analysis: Toggle automatic code analysis on or off
  • AI Model Selection: Choose which AI model to use for analysis
      • Auto (Recommended): Automatically selects the best available model with fallback
Tip: Visit /dashboard/mcp to configure your settings.

Supported Languages

JavaScript, TypeScript, Python, Dart, Java, Kotlin, Swift, Go, Rust, C/C++, C#, Ruby, PHP

Configuration

Recommended: Install from GitHub

Install the MCP server directly from our public GitHub repository:

Windows (PowerShell):

powershell
irm https://raw.githubusercontent.com/cmgzone/notebookllmmcp/main/install.ps1 | iex

Mac/Linux:

bash
curl -fsSL https://raw.githubusercontent.com/cmgzone/notebookllmmcp/main/install.sh | bash

Kiro Configuration

Add to .kiro/settings/mcp.json (after running the install script):

json
{
  "mcpServers": {
    "notebookllm": {
      "command": "node",
      "args": ["C:/Users/YourName/.notebookllm-mcp/index.js"],
      "env": {
        "BACKEND_URL": "https://notebookllm-ufj7.onrender.com",
        "CODING_AGENT_API_KEY": "nllm_your-personal-api-token-here"
      }
    }
  }
}

Claude Desktop Configuration

Add to claude_desktop_config.json:

json
{
  "mcpServers": {
    "notebookllm": {
      "command": "node",
      "args": ["~/.notebookllm-mcp/index.js"],
      "env": {
        "BACKEND_URL": "https://notebookllm-ufj7.onrender.com",
        "CODING_AGENT_API_KEY": "nllm_your-personal-api-token-here"
      }
    }
  }
}

Manual Download

If you prefer to download manually from GitHub:

bash
git clone https://github.com/cmgzone/notebookllmmcp.git ~/.notebookllm-mcp
cd ~/.notebookllm-mcp
npm install --production

Then configure your MCP client with the path to ~/.notebookllm-mcp/dist/index.js.

Alternative: Backend API Install

You can also install from our backend API:

bash
# Get configuration template
curl https://notebookllm-ufj7.onrender.com/api/mcp/config

# Or use the backend install scripts
# Windows: irm https://notebookllm-ufj7.onrender.com/api/mcp/install.ps1 | iex
# Mac/Linux: curl -fsSL https://notebookllm-ufj7.onrender.com/api/mcp/install.sh | bash

Token Management

Viewing Your Tokens

In the NotebookLM app, go to Settings → Agent Connections to see all your active tokens:

  • Token name and description
  • Creation date
  • Last used date (updated each time the token is used)
  • Partial token display (last 4 characters for identification)

Revoking a Token

If a token is compromised or no longer needed:

  1. Go to Settings → Agent Connections
  2. Find the token in the list
  3. Click the Revoke button
  4. Confirm the revocation
Note: Revoked tokens are immediately invalidated. Any MCP server using that token will receive authentication errors.
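
If you call the backend API directly rather than through the MCP server, it is worth surfacing this case explicitly. A small sketch, assuming revoked or invalid tokens are rejected with HTTP 401 (the status code and response handling are assumptions):

typescript
// Wrap backend calls so a revoked token produces a clear, actionable error.
async function callBackend(path: string, body: unknown): Promise<unknown> {
  const response = await fetch(`https://notebookllm-ufj7.onrender.com${path}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.CODING_AGENT_API_KEY}`,
    },
    body: JSON.stringify(body),
  });

  // Assumed behavior: revoked or expired tokens come back as 401 Unauthorized.
  if (response.status === 401) {
    throw new Error(
      "NotebookLM rejected the API token. Generate a new one under Settings → Agent Connections and update CODING_AGENT_API_KEY.",
    );
  }
  return response.json();
}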

Token Limits

  • Max active tokens: 10
  • New tokens per hour: 5
  • Token expiration: optional

Architecture

┌─────────────────────────────────────────────────────────────┐
│                     Third-Party Agents                      │
│                 (Claude, Kiro, Cursor, etc.)                 │
└─────────────────────┬───────────────────────────────────────┘
                      │ MCP Protocol (stdio)
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                   Coding Agent MCP Server                   │
│               backend/mcp-server/src/index.ts               │
│                                                             │
│  Tools:                                                     │
│  • verify_code          - Check code correctness            │
│  • verify_and_save      - Verify & save as source           │
│  • batch_verify         - Verify multiple snippets          │
│  • analyze_code         - Deep analysis                     │
│  • get_verified_sources - Retrieve saved sources            │
└─────────────────────┬───────────────────────────────────────┘
                      │ HTTP API + Bearer Token
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                         Backend API                         │
│                     /api/coding-agent/*                     │
│                                                             │
│  Authentication:                                            │
│  • Personal API tokens (nllm_xxx format)                    │
│  • SHA-256 hashed storage                                   │
│  • Usage logging & rate limiting                            │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                  Code Verification Service                  │
│       backend/src/services/codeVerificationService.ts       │
│                                                             │
│  Features:                                                  │
│  • Syntax validation (JS/TS, Python, Dart, JSON)            │
│  • Security scanning (XSS, SQL injection, secrets)          │
│  • AI-powered analysis (Gemini)                             │
│  • Best practices checking                                  │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                           Database                          │
│               sources table + api_tokens table              │
│                                                             │
│  Stores:                                                    │
│  • Verified code with metadata                              │
│  • Token hashes and usage logs                              │
│  • User/notebook associations                               │
└─────────────────────────────────────────────────────────────┘

Verification Scoring

Code must score ≥ 60 to be saved as a source. Scoring is based on:

Severity | Error Impact | Warning Impact
---------|--------------|---------------
Critical | -25 points   | -10 points
High     | -15 points   | -5 points
Medium   | -10 points   | -3 points
Low      | -5 points    | -1 point
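
For illustration, here is one way such a score could be computed. The penalties match the table above, but the base score of 100 and the issue shape are assumptions, not documented behavior:

typescript
type Severity = "critical" | "high" | "medium" | "low";

interface Issue {
  severity: Severity;
  kind: "error" | "warning";
}

// Penalty points per the table: errors cost more than warnings at every severity.
const PENALTIES: Record<Severity, { error: number; warning: number }> = {
  critical: { error: 25, warning: 10 },
  high: { error: 15, warning: 5 },
  medium: { error: 10, warning: 3 },
  low: { error: 5, warning: 1 },
};

// Subtract each issue's penalty from an assumed base of 100;
// code is saved as a source only if the result is at least 60.
function verificationScore(issues: Issue[]): number {
  const score = issues.reduce((s, i) => s - PENALTIES[i.severity][i.kind], 100);
  return Math.max(0, score);
}

console.log(verificationScore([{ severity: "high", kind: "error" }]) >= 60); // true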

Supported Languages (Verification)

JavaScript, TypeScript, Python, Dart, JSON, Generic