New to Nebula? Start with Core Concepts to understand the architecture.

Storing Memories

from nebula import Nebula, Memory
nebula = Nebula()

# Single memory
memory_id = nebula.store_memory(
    Memory(
        collection_id="research",
        content="Machine learning automates model building",
        metadata={"topic": "AI"}
    )
)

# Also accepts a plain dict
memory_id = nebula.store_memory({
    "collection_id": "research",
    "content": "Machine learning automates model building",
})

# Batch storage
memory_ids = nebula.store_memories([
    Memory(
        collection_id="research",
        content="Neural networks...",
        metadata={"type": "concept"}
    ),
    Memory(
        collection_id="research",
        content="Deep learning...",
        metadata={"type": "concept"}
    ),
])
Metadata is rarely needed for search: Nebula extracts semantic meaning from your content automatically. Reserve metadata for structured attributes you want to filter on exactly (for example, a status, type, or title).
Memory creation is asynchronous. Content becomes searchable after Nebula finishes extraction (typically a few seconds). The API returns a 202 status with the memory ID immediately.
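Because the API returns 202 before extraction completes, code that queries immediately after storing may not see the new memory yet. A generic polling sketch can bridge the gap; the readiness check you pass in is application-specific, and the commented usage line assumes a search call that is not documented on this page:

```python
import time

def wait_for(predicate, timeout=10.0, interval=0.5):
    """Poll `predicate` until it returns a truthy value or `timeout` elapses.

    Returns the truthy value, or None if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    return None

# Hypothetical usage -- retry until the freshly stored memory turns up in
# search results (`nebula.search` is an assumption; see the Search Guide):
# hit = wait_for(lambda: nebula.search("model building", limit=1), timeout=30)
```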

Conversation Messages

# Create conversation
conv_id = nebula.store_memory(
    Memory(
        collection_id="support",
        content="Hello! How can I help?",
        role="assistant"
    )
)

# Add to same conversation
nebula.store_memory(
    Memory(
        memory_id=conv_id,
        collection_id="support",
        content="I need help with my account",
        role="user"
    )
)
See the Conversations Guide for multi-turn patterns.

Document Upload

Upload a document as raw text, pre-chunked text, or a file. File processing (OCR/transcription/text extraction) happens automatically.
from nebula import Nebula, Memory

nebula = Nebula()
collection = nebula.get_collection_by_name("my-collection")

# Upload text
doc_id = nebula.create_document_text(
    collection_id=collection.id,
    raw_text="Machine learning is a subset of AI...",
    metadata={"title": "ML Intro"}
)

# Upload pre-chunked content
doc_id = nebula.store_memory(
    Memory(
        collection_id=collection.id,
        content=["Chapter 1...", "Chapter 2..."],
        metadata={"title": "My Doc"}
    )
)

# Upload a file
doc_id = nebula.store_memory(
    Memory.from_file(
        "document.pdf",
        collection_id=collection.id,
        metadata={"title": "Research Paper"}
    )
)
Inline base64 uploads are limited to ~5MB per file part; larger files use a presigned upload flow (max 100MB).
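A client-side pre-check can reject oversized files before an upload is attempted. The thresholds below come from the limits just stated; the helper itself is illustrative, since the SDK presumably selects the upload flow for you:

```python
import os

INLINE_LIMIT = 5 * 1024 * 1024        # ~5 MB inline base64 limit per file part
PRESIGNED_LIMIT = 100 * 1024 * 1024   # 100 MB presigned-upload ceiling

def upload_strategy(size_bytes):
    """Classify a file size against the documented upload limits."""
    if size_bytes > PRESIGNED_LIMIT:
        raise ValueError(f"file is {size_bytes} bytes; the documented max is 100 MB")
    return "inline" if size_bytes <= INLINE_LIMIT else "presigned"

# e.g. upload_strategy(os.path.getsize("document.pdf"))
```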

Batch Storage

Store multiple memories at once with store_memories(). Conversation messages with the same collection_id are grouped into a single conversation automatically.
from nebula import Nebula, Memory

nebula = Nebula()
collection = nebula.get_collection_by_name("my-collection")

# Batch store documents
ids = nebula.store_memories([
    Memory(collection_id=collection.id, content="First document"),
    Memory(collection_id=collection.id, content="Second document"),
    Memory(collection_id=collection.id, content="Third document"),
])

# Batch store conversation messages (grouped into one conversation)
ids = nebula.store_memories([
    Memory(collection_id=collection.id, content="Hello!", role="user"),
    Memory(collection_id=collection.id, content="Hi there!", role="assistant"),
])
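The grouping rule above can be pictured as a simple partition of the batch by collection_id. This sketch only illustrates the documented behavior; it is not the SDK's internal implementation:

```python
from collections import defaultdict

def group_by_collection(messages):
    """Partition message dicts by collection_id, preserving arrival order."""
    groups = defaultdict(list)
    for msg in messages:
        groups[msg["collection_id"]].append(msg["content"])
    return dict(groups)
```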

Retrieving Memories

# Get by ID
memory = nebula.get_memory(memory_id)

# List memories in collection
memories = nebula.list_memories(collection_ids=["research"], limit=50)
For semantic search (finding by meaning), see the Search Guide.

Deleting Memories

# Delete single
nebula.delete(memory_id)

# Delete multiple
nebula.delete([memory_id_1, memory_id_2, memory_id_3])
Deletion is permanent and cannot be undone.

Bulk Operations

Use list_memories() with metadata_filters to target a set, then delete or update metadata in batches.
memories = nebula.list_memories(
    collection_ids=["docs"],
    metadata_filters={"metadata.status": {"$eq": "archived"}},
    limit=1000
)

# Bulk delete
nebula.delete([m.id for m in memories])

# Bulk metadata update
for m in memories:
    nebula.update_memory(
        memory_id=m.id,
        metadata={"archived": True},
        merge_metadata=True
    )
Use store_memory() to append content. update_memory() only updates name, metadata, or collection associations.
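When the filtered set is large, it can help to process it in fixed-size batches rather than one giant call or a per-item loop. A minimal chunking helper (the batch size of 100 in the commented usage is an arbitrary choice, not a documented limit):

```python
def chunked(items, size):
    """Yield successive `size`-length slices of a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Hypothetical usage -- delete the archived memories 100 IDs at a time:
# for batch in chunked([m.id for m in memories], 100):
#     nebula.delete(batch)
```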

Source Operations

Memories contain sources (messages in conversations, sections in documents). Each source has a unique ID.
memory = nebula.get_memory(memory_id)
for source in memory.sources:
    print(f"Source {source.id}: {source.content[:50]}...")

# Delete a specific source
source_id = memory.sources[0].id
nebula.delete_source(source_id)

# Update a source
nebula.update_source(
    source_id=source_id,
    content="Updated content",
    metadata={"edited": True},
)

Next Steps

  • Search - Semantic search and filtering
  • Collections - Organize memories into collections