
Welcome to Nebula

Nebula is the memory layer for AI applications. It ingests documents, conversations, files, and connected sources, then builds a vector graph your AI can search and reason over. Retrieval returns entities, facts, and source utterances instead of flat text chunks.

Quick Example

from nebula import Nebula, Memory

nebula = Nebula()

# Create a collection
collection = nebula.create_collection(name="engineering_team")

# Store a memory
nebula.store_memory(
    Memory(
        collection_id=collection.id,
        content="Sarah led the migration from PostgreSQL to Aurora last quarter. The project reduced our p99 latency from 200ms to 45ms.",
        metadata={"source": "standup_notes"}
    )
)

# Search
results = nebula.search(
    query="What has Sarah worked on?",
    collection_ids=[collection.id]
)
Get your API key and start building →

How It Works

1. Ingest

Store documents, conversations, or files. Connect external sources like Google Drive, Gmail, Notion, and Slack. Nebula parses 40+ file formats including PDFs, images, and audio.

2. Extract

Nebula automatically builds a vector graph from your content, linking entities and relationships with temporal awareness and fact corroboration across sources.

3. Recall

Search returns three layers of memory: entities (who and what), facts (assertions about them), and utterances (the original source text).
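The three layers returned by Recall can be pictured with a small data model. This is an illustrative sketch in plain Python, not the Nebula SDK's actual result types; the class names and fields here are assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical model of the three retrieval layers.
# The real SDK's result objects may be shaped differently.

@dataclass
class Utterance:
    """The original source text a fact was extracted from."""
    text: str
    source: str

@dataclass
class Fact:
    """An assertion about an entity, backed by source utterances."""
    statement: str
    utterances: list

@dataclass
class Entity:
    """A who-or-what node in the vector graph."""
    name: str
    facts: list = field(default_factory=list)

# The memory stored in the Quick Example above, layered:
utterance = Utterance(
    text="Sarah led the migration from PostgreSQL to Aurora last quarter.",
    source="standup_notes",
)
sarah = Entity(name="Sarah")
sarah.facts.append(Fact(
    statement="Led the PostgreSQL-to-Aurora migration",
    utterances=[utterance],
))

# A query like "What has Sarah worked on?" can walk
# entity -> facts -> utterances to answer with provenance.
for fact in sarah.facts:
    print(f"{fact.statement} (source: {fact.utterances[0].source})")
```

Keeping provenance at every layer is what lets a retrieved fact be traced back to the exact source text it came from.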

Key Features

  • Vector Graph - Automatic entity and relationship extraction from raw content
  • Hierarchical Retrieval - Three layers: entities, facts, and source utterances
  • Multimodal Ingestion - 40+ file formats including PDFs, images, audio, and code
  • Connectors - Sync from Google Drive, Gmail, Notion, and Slack via OAuth
  • Conversation Memory - Multi-turn context with speaker identity tracking
  • Multi-Language SDKs - Python, JavaScript/TypeScript, REST API, and MCP
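The Conversation Memory feature above (multi-turn context with speaker identity tracking) can be sketched as a toy in-memory store. The ConversationMemory class and its methods are invented for illustration and are not the SDK's API:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Turn:
    """One utterance in a multi-turn conversation."""
    speaker: str
    text: str

class ConversationMemory:
    """Toy multi-turn store that indexes turns by speaker identity."""

    def __init__(self):
        self.turns: list[Turn] = []
        self.by_speaker: dict[str, list[Turn]] = defaultdict(list)

    def add_turn(self, speaker: str, text: str) -> None:
        """Append a turn and index it under its speaker."""
        turn = Turn(speaker=speaker, text=text)
        self.turns.append(turn)
        self.by_speaker[speaker].append(turn)

    def context_for(self, speaker: str) -> list[str]:
        """Everything a given speaker has said, in order."""
        return [t.text for t in self.by_speaker[speaker]]

memory = ConversationMemory()
memory.add_turn("sarah", "The Aurora migration is done.")
memory.add_turn("alex", "Nice. What did p99 latency land at?")
memory.add_turn("sarah", "45ms, down from 200ms.")

print(memory.context_for("sarah"))
```

Tracking speaker identity alongside the transcript is what allows retrieval to answer "what did Sarah say?" rather than only searching the conversation as one undifferentiated text.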

Support

Need help? Reach out to support@trynebula.ai