Documentation Index

Fetch the complete documentation index at: https://docs.trynebula.ai/llms.txt

Use this file to discover all available pages before exploring further.

Memories and Sources

In Nebula, everything is stored as a memory. A memory is a container with a unique memory_id that holds one or more sources:
  • Document Memory: An entire document, automatically split into sources for processing. You can upload text, pre-chunked text, or a file. A short piece of text (like a user preference) is a document memory with a single source.
  • Conversation Memory: A full conversation where each message is a source. Set engram_type="conversation" and pass a messages list of {role, content} objects to create one.
Each source has a unique source ID for read-only provenance in memory and search responses. Sources cannot be edited in place: to change stored content, update memory-level properties or append replacement content as a new memory entry.
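
As a mental model (these are illustrative local classes, not the SDK's actual types), a memory can be pictured as a container of read-only sources:

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class Source:
    # Each source keeps its own ID for read-only provenance
    source_id: str
    content: str

@dataclass
class Memory:
    memory_id: str = field(default_factory=lambda: str(uuid4()))
    engram_type: str = "document"
    sources: list[Source] = field(default_factory=list)

# A short note is a document memory with a single source
note = Memory(sources=[Source(source_id=str(uuid4()), content="User prefers dark mode")])

# A conversation memory holds one source per message
chat = Memory(
    engram_type="conversation",
    sources=[
        Source(source_id=str(uuid4()), content="Hello! How can I help?"),
        Source(source_id=str(uuid4()), content="I need help"),
    ],
)
```

The key point is the two-level addressing: memory_id names the container, while each source inside it stays individually traceable.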

Document Memories

from nebula import Nebula

nebula = Nebula()
collection = nebula.collections.create(name="research_papers").results

# Store document text
created = nebula.memories.create(
    collection_id=collection.id,
    raw_text="Full document content...",
    engram_type="document",
    metadata={"title": "Research Paper"},
).results
doc_id = created.id

# Or store a short piece of text
nebula.memories.create(
    collection_id=collection.id,
    raw_text="User prefers dark mode",
    metadata={"user_id": "user_789"},
)

Conversation Memories

Create a conversation by passing engram_type="conversation" and an initial messages list. Append more messages by calling memories.append() with the conversation’s memory_id.
collection = nebula.collections.create(name="support_chats").results

# Create a conversation memory
conv = nebula.memories.create(
    collection_id=collection.id,
    engram_type="conversation",
    messages=[{"content": "Hello! How can I help?", "role": "assistant"}],
).results
conv_id = conv.id

# Add messages to the same conversation
nebula.memories.append(
    conv_id,
    collection_id=collection.id,
    messages=[
        {"content": "I need help", "role": "user"},
        {"content": "I'll help you", "role": "assistant"},
    ],
)
See Conversations Guide for multi-turn patterns.

The Vector Graph

When you store memories, Nebula automatically extracts structured knowledge and builds a graph of entities and relationships. When you search, the response contains four layers of memory:
  • Semantics: Subject-predicate-value assertions (e.g., “Sarah - led - the Aurora migration”), with confidence that grows through corroboration across sources
  • Procedures: User preferences and behavioral patterns (e.g., “Prefers dark mode”)
  • Episodes: Temporally clustered events with timestamps
  • Sources: The original source text that grounds each assertion, providing provenance back to the stored memory
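To illustrate how an assertion's confidence can grow through corroboration across sources, here is a simplified toy update rule (an illustration only, not Nebula's actual scoring algorithm):

```python
def corroborate(confidence: float, boost: float = 0.5) -> float:
    """Move confidence toward 1.0 each time another source corroborates the assertion."""
    return confidence + (1.0 - confidence) * boost

c = 0.6            # confidence after a single source asserts "Sarah - led - the Aurora migration"
c = corroborate(c)  # a second source agrees -> 0.8
c = corroborate(c)  # a third source agrees -> 0.9
```

Each corroborating source closes part of the remaining gap to certainty, so confidence rises monotonically but never exceeds 1.0.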
Relying on metadata to carry content is discouraged: Nebula already consolidates the semantics from your content automatically.

Memory Lifecycle

  1. Creation: memories.create() creates a new memory
  2. Expansion: memories.append(memory_id, ...) adds content to an existing memory
  3. Extraction: Nebula extracts entities, facts, and relationships into the vector graph
  4. Retrieval: memories.search() returns semantics, procedures, episodes, and sources; memories.retrieve(id) returns the raw memory with all sources
  5. Source Provenance: Use source IDs to trace retrieval results back to stored memory content
  6. Deletion: memories.delete(id) removes an entire memory and all its sources
Use memory_id to keep a complete unit — an entire conversation or document — together in one container, and group related memories into collections.
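
The lifecycle steps above can be sketched with a toy in-memory store (a local illustration of the create/append/retrieve/delete semantics, not the Nebula backend):

```python
from uuid import uuid4

class ToyMemoryStore:
    """Minimal model of the memory lifecycle: one container, many sources."""

    def __init__(self) -> None:
        self._memories: dict[str, list[str]] = {}

    def create(self, raw_text: str) -> str:
        memory_id = str(uuid4())
        self._memories[memory_id] = [raw_text]      # 1. Creation: new container, first source
        return memory_id

    def append(self, memory_id: str, raw_text: str) -> None:
        self._memories[memory_id].append(raw_text)  # 2. Expansion: new source, same container

    def retrieve(self, memory_id: str) -> list[str]:
        return self._memories[memory_id]            # 4. Retrieval: raw memory with all sources

    def delete(self, memory_id: str) -> None:
        del self._memories[memory_id]               # 6. Deletion: removes memory and all sources

store = ToyMemoryStore()
mid = store.create("First chunk")
store.append(mid, "Second chunk")
assert store.retrieve(mid) == ["First chunk", "Second chunk"]
store.delete(mid)
```

Extraction into the vector graph (step 3) and source-level provenance (step 5) happen server-side in Nebula and are omitted here.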

Next Steps