
Welcome to Nebula
Nebula is the memory layer for AI applications. It ingests documents, conversations, files, and connected sources, then builds a vector graph your AI can search and reason over. Retrieval returns entities, facts, and source utterances instead of flat text chunks.
Quick Example
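A minimal store-then-search sketch. The `Nebula` client class, the `store` and `search` methods, and the result fields are illustrative assumptions rather than the SDK's confirmed API; see the Python SDK page below for exact signatures.

```python
# Hypothetical quick-start sketch -- the client class, `store`, `search`,
# and result attributes are assumed names, not the confirmed Nebula API.
from nebula import Nebula  # assumed package and client name

client = Nebula(api_key="YOUR_API_KEY")

# Store a memory.
client.store(content="Acme signed a three-year deal with Globex in March 2024.")

# Search the vector graph; retrieval returns entities, facts, and
# source utterances rather than flat text chunks.
results = client.search("What is Acme's relationship with Globex?")
for fact in results.facts:
    print(fact.text)
```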
How It Works
Ingest
Store documents, conversations, or files. Connect external sources like Google Drive, Gmail, Notion, and Slack. Nebula parses 40+ file formats including PDFs, images, and audio.
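As a rough sketch of the ingest step, assuming the same hypothetical Python client plus a `store_file` helper for uploads (neither name is confirmed):

```python
# Hypothetical ingestion sketch -- `store` / `store_file` and their
# parameters are assumed names used for illustration only.
from nebula import Nebula

client = Nebula(api_key="YOUR_API_KEY")

# Store raw text (a document or a note) into a collection.
client.store(
    content="Quarterly revenue grew 12% year over year.",
    collection="reports",
)

# Upload a local file; Nebula is described as parsing 40+ formats,
# including PDFs, images, and audio.
with open("q3-earnings-call.pdf", "rb") as f:
    client.store_file(file=f, collection="reports")
```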
Extract
Nebula automatically builds a vector graph from your content, linking entities and relationships with temporal awareness and fact corroboration across sources.
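Extraction happens automatically on ingest, so there is no extraction call to make; the sketch below only illustrates how the resulting graph might surface at query time. The `entities`, `facts`, and `utterances` attributes are assumptions that mirror the three retrieval layers listed under Key Features.

```python
# Hypothetical sketch of inspecting the extracted graph at query time;
# attribute names (entities, facts, utterances) are illustrative only.
from nebula import Nebula

client = Nebula(api_key="YOUR_API_KEY")
results = client.search("Who does Acme work with?")

for entity in results.entities:       # extracted entities
    print("entity:", entity.name)
for fact in results.facts:            # facts corroborated across sources
    print("fact:", fact.text)
for utterance in results.utterances:  # original source utterances
    print("utterance:", utterance.text)
```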
Key Features
- Vector Graph - Automatic entity and relationship extraction from raw content
- Hierarchical Retrieval - Three layers: entities, facts, and source utterances
- Multimodal Ingestion - 40+ file formats including PDFs, images, audio, and code
- Connectors - Sync from Google Drive, Gmail, Notion, and Slack via OAuth (see the sketch after this list)
- Conversation Memory - Multi-turn context with speaker identity tracking
- Multi-Language SDKs - Python, JavaScript/TypeScript, REST API, and MCP
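A hedged sketch of the connector flow, assuming a `connectors.create` helper and an `auth_url` field that are not confirmed API; see the Connectors page under Core Concepts for the actual setup.

```python
# Hypothetical connector sketch -- the `connectors` helper, provider name,
# and returned fields are assumed for illustration; the real SDK may differ.
from nebula import Nebula

client = Nebula(api_key="YOUR_API_KEY")

# Start an OAuth-based sync with an external source such as Google Drive.
connector = client.connectors.create(provider="google_drive", collection="work-docs")

# The user completes OAuth in the browser; afterwards Nebula keeps the
# collection in sync with the connected source.
print("Authorize here:", connector.auth_url)
```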
Core Concepts
Memory Operations
Store, retrieve, and manage documents, conversations, and files
Collections
Group related memories with scoped access control
Search
Vector graph traversal with effort levels and collection scoping
Conversations
Build multi-turn chat applications with persistent context (a conversation sketch follows these concepts)
Connectors
OAuth-based sync for Google Drive, Gmail, Notion, and Slack
Architecture
Memories, chunks, and the vector graph
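A sketch of conversation memory under the same assumptions; the message format, the `conversation_id` parameter, and the result fields are illustrative only, not documented API.

```python
# Hypothetical conversation-memory sketch -- parameter and field names
# are assumed for illustration.
from nebula import Nebula

client = Nebula(api_key="YOUR_API_KEY")

# Record multi-turn messages with speaker identity.
client.store(
    conversation_id="support-session-42",  # hypothetical identifier
    messages=[
        {"role": "user", "name": "Dana", "content": "My invoice total looks wrong."},
        {"role": "assistant", "content": "Which invoice number is it?"},
        {"role": "user", "name": "Dana", "content": "INV-1042, from last week."},
    ],
)

# Later turns can be answered against the persisted context.
results = client.search("Which invoice was Dana asking about?")
print(results.facts[0].text if results.facts else "no facts found")
```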
Client Libraries
Python SDK
Sync and async client for Python 3.8+
JavaScript SDK
TypeScript SDK with full type definitions
MCP Integration
Connect AI assistants via Model Context Protocol