MCP Server Documentation
Connect coding agents to NotebookLM
The NotebookLM MCP (Model Context Protocol) server allows third-party coding agents like Claude, Kiro, and Cursor to verify code, access GitHub repositories, manage implementation plans, and save sources to your notebooks. This enables seamless integration between your AI coding workflow and research management.
Secure Authentication
Personal API tokens with SHA-256 hashing
Code Verification
Syntax, security, and best practices checks
GitHub Integration
Access repos, files, and create issues
AI Code Analysis
Automatic quality analysis for code sources
Quick Start
Generate a Personal API Token
Before setting up the MCP server, generate a personal API token from the NotebookLM app:
- Open the NotebookLM app
- Go to Settings → Agent Connections
- In the API Tokens section, click Generate New Token
- Enter a name for your token (e.g., "Kiro Coding Agent")
- Optionally set an expiration date
- Click Generate and copy the token immediately
Install the MCP Server
Run the install script for your platform:
Windows (PowerShell):

```powershell
irm https://raw.githubusercontent.com/cmgzone/notebookllmmcp/main/install.ps1 | iex
```

Mac/Linux:

```bash
curl -fsSL https://raw.githubusercontent.com/cmgzone/notebookllmmcp/main/install.sh | bash
```

The script will download the MCP server and show you the configuration to add.
Configure Your MCP Client
Add the configuration shown by the install script to your MCP config file:
```json
{
  "mcpServers": {
    "notebookllm": {
      "command": "node",
      "args": ["~/.notebookllm-mcp/index.js"],
      "env": {
        "BACKEND_URL": "https://notebookllm-ufj7.onrender.com",
        "CODING_AGENT_API_KEY": "nllm_your-token-here"
      }
    }
  }
}
```

Start Using the Tools
Once configured, your coding agent can use the MCP tools to verify code and save it to your notebooks. The server will automatically connect to the NotebookLM backend.
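As a concrete illustration (not taken from this documentation's wire captures): under the Model Context Protocol, a coding agent invokes server tools through JSON-RPC 2.0 `tools/call` requests. A call to the `get_source_analysis` tool covered later in this guide might look roughly like this, with placeholder `id` and `sourceId` values:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_source_analysis",
    "arguments": { "sourceId": "source-uuid-here" }
  }
}
```

Your MCP client builds these requests for you; normally you only deal with the tool name and its arguments.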
Authentication
Token Format
Personal API tokens use a specific format for easy identification:
nllm_[43 characters of random data]

Example: nllm_a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0u1v2

- Prefix: nllm_ (5 characters)
- Random part: 43 characters (32 bytes, base64url-encoded)
- Total length: 48 characters
Using the Token
Include the token in the Authorization header for API requests:
```bash
curl -X POST http://localhost:3000/api/coding-agent/verify-and-save \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer nllm_your-token-here" \
  -d '{
    "code": "function add(a, b) { return a + b; }",
    "language": "javascript",
    "title": "Add Function"
  }'
```

Security Features
SHA-256 Hashing
Tokens are hashed before storage. The original token is never stored.
Rate Limiting
Maximum 5 new tokens per hour, 10 active tokens per user.
Instant Revocation
Revoked tokens are immediately invalidated across all services.
Usage Logging
All token usage is logged for security auditing.
MCP Tools
GitHub Integration
Connect your GitHub account to access repositories, files, and create issues directly from coding agents. Files added as sources are automatically analyzed by AI.
Planning Mode
Create and manage implementation plans with structured requirements, design notes, and tasks. Perfect for spec-driven development workflows.
Code Analysis
When GitHub files are added as sources, they are automatically analyzed by AI to provide deep insights that improve fact-checking results.
Analysis Features
Quality Rating (1-10)
Overall code quality score with detailed explanation
Quality Metrics
Readability, maintainability, testability, documentation, error handling
Architecture Analysis
Detected patterns, design patterns, separation of concerns
Recommendations
Strengths, improvements, and security notes
Analysis Tools
get_source_analysis: Get the AI-generated code analysis for a GitHub source.

```json
{ "sourceId": "source-uuid-here" }
```

reanalyze_source: Re-analyze a GitHub source to get fresh code analysis.

```json
{ "sourceId": "source-uuid-here" }
```

User Settings
Configure code analysis settings in the MCP Dashboard → Settings tab:
- Enable/Disable Analysis: Toggle automatic code analysis on or off
- AI Model Selection: Choose which AI model to use for analysis
  - Auto (Recommended): Automatically selects the best available model, with fallback
Supported Languages
Configuration
Recommended: Install from GitHub
Install the MCP server directly from our public GitHub repository:
Windows (PowerShell):
```powershell
irm https://raw.githubusercontent.com/cmgzone/notebookllmmcp/main/install.ps1 | iex
```

Mac/Linux:

```bash
curl -fsSL https://raw.githubusercontent.com/cmgzone/notebookllmmcp/main/install.sh | bash
```

Kiro Configuration
Add to .kiro/settings/mcp.json (after running the install script):
```json
{
  "mcpServers": {
    "notebookllm": {
      "command": "node",
      "args": ["C:/Users/YourName/.notebookllm-mcp/index.js"],
      "env": {
        "BACKEND_URL": "https://notebookllm-ufj7.onrender.com",
        "CODING_AGENT_API_KEY": "nllm_your-personal-api-token-here"
      }
    }
  }
}
```

Claude Desktop Configuration
Add to claude_desktop_config.json:
```json
{
  "mcpServers": {
    "notebookllm": {
      "command": "node",
      "args": ["~/.notebookllm-mcp/index.js"],
      "env": {
        "BACKEND_URL": "https://notebookllm-ufj7.onrender.com",
        "CODING_AGENT_API_KEY": "nllm_your-personal-api-token-here"
      }
    }
  }
}
```

Manual Download
If you prefer to download manually from GitHub:
```bash
git clone https://github.com/cmgzone/notebookllmmcp.git ~/.notebookllm-mcp
cd ~/.notebookllm-mcp
npm install --production
```

Then configure your MCP client with the path to ~/.notebookllm-mcp/dist/index.js.
Alternative: Backend API Install
You can also install from our backend API:
```bash
# Get configuration template
curl https://notebookllm-ufj7.onrender.com/api/mcp/config

# Or use the backend install scripts
# Windows: irm https://notebookllm-ufj7.onrender.com/api/mcp/install.ps1 | iex
# Mac/Linux: curl -fsSL https://notebookllm-ufj7.onrender.com/api/mcp/install.sh | bash
```

Token Management
Viewing Your Tokens
In the NotebookLM app, go to Settings → Agent Connections to see all your active tokens:
- Token name and description
- Creation date
- Last used date (updated each time the token is used)
- Partial token display (last 4 characters for identification)
Revoking a Token
If a token is compromised or no longer needed:
- Go to Settings → Agent Connections
- Find the token in the list
- Click the Revoke button
- Confirm the revocation
Token Limits
Architecture
Verification Scoring
Code must score ≥ 60 to be saved as a source. Scoring is based on:
| Severity | Error Impact | Warning Impact |
|---|---|---|
| Critical | -25 points | -10 points |
| High | -15 points | -5 points |
| Medium | -10 points | -3 points |
| Low | -5 points | -1 point |
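As a sketch, the deductions above could be applied like this. The 100-point base score and the shape of the issue objects are illustrative assumptions; the actual scorer runs on the backend:

```javascript
// Point deductions per the table above.
const ERROR_PENALTY   = { critical: 25, high: 15, medium: 10, low: 5 };
const WARNING_PENALTY = { critical: 10, high: 5,  medium: 3,  low: 1 };
const PASS_THRESHOLD  = 60; // minimum score for code to be saved as a source

function verificationScore(issues) {
  // issues: array of { severity: "critical"|"high"|"medium"|"low",
  //                    type: "error"|"warning" }
  let score = 100; // assumed base score
  for (const { severity, type } of issues) {
    const penalties = type === "error" ? ERROR_PENALTY : WARNING_PENALTY;
    score -= penalties[severity] ?? 0;
  }
  return Math.max(0, score);
}

const score = verificationScore([
  { severity: "high", type: "error" },     // -15
  { severity: "medium", type: "warning" }, // -3
]);
console.log(score, score >= PASS_THRESHOLD); // 82 true
```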