New to Nebula? Start with Core Concepts to understand the architecture.

Storing Memories

from nebula import Nebula
nebula = Nebula()
collection = nebula.collections.retrieve_by_name("research").results

# Single memory
created = nebula.memories.create(
    collection_id=collection.id,
    raw_text="Machine learning automates model building",
    metadata={"topic": "AI"},
).results
memory_id = created.id

# Multiple memories: the SDK has no batch create endpoint, so each item is a separate create call
memory_ids = []
for item in [
    {"raw_text": "Neural networks...", "metadata": {"type": "concept"}},
    {"raw_text": "Deep learning...", "metadata": {"type": "concept"}},
]:
    res = nebula.memories.create(collection_id=collection.id, **item).results
    memory_ids.append(res.id)
The use of metadata is highly discouraged; Nebula already consolidates the semantics of your content automatically.
Memory creation is asynchronous: the API returns a 202 status with the memory ID immediately, and content becomes searchable once Nebula finishes extraction (typically a few seconds).
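
Because creation returns before extraction completes, a retrieval or search issued immediately after create may not see the new content yet. Below is a minimal wait-then-check sketch; it assumes that chunks on the retrieve response are only populated once extraction has finished, which is an assumption rather than a documented guarantee.
import time

def wait_until_processed(nebula, memory_id, timeout=30.0, interval=1.0):
    # Poll retrieve until chunks appear (assumed to signal finished extraction).
    deadline = time.time() + timeout
    while time.time() < deadline:
        memory = nebula.memories.retrieve(memory_id).results
        if memory.chunks:
            return memory
        time.sleep(interval)
    raise TimeoutError(f"memory {memory_id} not processed after {timeout}s")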

Conversation Messages

support = nebula.collections.retrieve_by_name("support").results

# Create conversation
conv = nebula.memories.create(
    collection_id=support.id,
    engram_type="conversation",
    messages=[{"content": "Hello! How can I help?", "role": "assistant"}],
).results
conv_id = conv.id

# Add to same conversation
nebula.memories.append(
    conv_id,
    collection_id=support.id,
    messages=[{"content": "I need help with my account", "role": "user"}],
)
See Conversations Guide for multi-turn patterns.
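
For simple multi-turn flows you can keep calling memories.append() on the same conversation ID as each turn arrives. A small sketch using only the calls shown above; the record_turn helper name is illustrative, not part of the SDK.
def record_turn(nebula, conv_id, collection_id, role, content):
    # Append one turn (user or assistant) to an existing conversation memory.
    nebula.memories.append(
        conv_id,
        collection_id=collection_id,
        messages=[{"content": content, "role": role}],
    )

record_turn(nebula, conv_id, support.id, "user", "My invoice looks wrong")
record_turn(nebula, conv_id, support.id, "assistant", "Let me take a look at that.")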

Document Upload

Upload a document as raw text, pre-chunked text, or a file. File processing (OCR/transcription/text extraction) happens automatically.
from nebula import Nebula

nebula = Nebula()
collection = nebula.collections.retrieve_by_name("my-collection").results

# Upload text
doc = nebula.memories.create(
    collection_id=collection.id,
    raw_text="Machine learning is a subset of AI...",
    engram_type="document",
    metadata={"title": "ML Intro"},
).results
doc_id = doc.id

# Upload pre-chunked content
doc = nebula.memories.create(
    collection_id=collection.id,
    chunks=["Chapter 1...", "Chapter 2..."],
    metadata={"title": "My Doc"},
).results
doc_id = doc.id
Inline base64 uploads are limited to ~5MB per file part; larger files use a presigned upload flow (max 100MB).
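
If you want to decide client-side which path a file will take, a rough size pre-check can fail fast before you build the request. The sketch below only restates the limits quoted above; the actual upload call is omitted because the file-upload parameters are not covered in this guide.
import os

INLINE_LIMIT = 5 * 1024 * 1024       # ~5MB per inline base64 file part
PRESIGNED_LIMIT = 100 * 1024 * 1024  # 100MB ceiling for the presigned flow

def choose_upload_path(path):
    size = os.path.getsize(path)
    if size > PRESIGNED_LIMIT:
        raise ValueError(f"{path} is {size} bytes, over the 100MB limit")
    # Small files can be sent inline as base64; larger ones should go
    # through the presigned upload flow.
    return "inline" if size <= INLINE_LIMIT else "presigned"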

Multiple Documents

The Python SDK creates one memory per memories.create() call, so loop to store multiple documents. To group conversation turns, create one memory with engram_type="conversation" and a messages list.
from nebula import Nebula
nebula = Nebula()
collection = nebula.collections.retrieve_by_name("my-collection").results

# Store multiple documents
ids = []
for text in ["First document", "Second document", "Third document"]:
    res = nebula.memories.create(collection_id=collection.id, raw_text=text).results
    ids.append(res.id)

# Store an entire conversation in one memory
conv = nebula.memories.create(
    collection_id=collection.id,
    engram_type="conversation",
    messages=[
        {"content": "Hello!", "role": "user"},
        {"content": "Hi there!", "role": "assistant"},
    ],
).results

Retrieving Memories

# Get by ID
memory = nebula.memories.retrieve(memory_id).results

# List memories in collection
memories = nebula.memories.list(collection_ids=[collection.id], limit=50).results
For semantic search (finding by meaning), see the Search Guide.
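
To walk an entire collection rather than a single page, page by offset until every row has been seen. This sketch reuses the limit, offset, and total_entries fields that also appear in the Bulk Operations section below.
def iter_memories(nebula, collection_id, page_size=100):
    # Yield every memory in a collection, one page at a time.
    offset = 0
    while True:
        page = nebula.memories.list(
            collection_ids=[collection_id],
            limit=page_size,
            offset=offset,
        )
        for m in page.results:
            yield m
        offset += len(page.results)
        if not page.results or offset >= page.total_entries:
            return

for m in iter_memories(nebula, collection.id):
    print(m.id)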

Deleting Memories

# Delete single
nebula.memories.delete(memory_id)

# Delete multiple
nebula.memories.delete_many(body=[memory_id_1, memory_id_2, memory_id_3])
Deletion is permanent and cannot be undone.

Bulk Operations

Use memories.list() with metadata_filters to target a set, then delete or update metadata in batches. The endpoint caps limit at 1000 per request, so larger result sets need pagination.
For high-fan-out client work in the TypeScript SDK, throttle with chunked Promise.all (e.g., p-limit) instead of one unbounded call. The TS SDK already retries 429s with backoff, but bounded concurrency keeps socket pressure and tail latency reasonable; a Python counterpart is sketched after the example below.
import json

docs = nebula.collections.retrieve_by_name("docs").results
filters = json.dumps({"metadata.status": {"$eq": "archived"}})
PAGE = 1000  # endpoint maximum

# Bulk delete: re-list with offset=0 each iteration since the previous
# page is gone after delete_many.
while True:
    page = nebula.memories.list(
        collection_ids=[docs.id],
        metadata_filters=filters,
        limit=PAGE,
    )
    if not page.results:
        break
    nebula.memories.delete_many(body=[m.id for m in page.results])
    if len(page.results) < PAGE:
        break

# Bulk metadata update: rows aren't removed, so walk by offset until
# total_entries is covered.
offset = 0
while True:
    page = nebula.memories.list(
        collection_ids=[docs.id],
        metadata_filters=filters,
        limit=PAGE,
        offset=offset,
    )
    for m in page.results:
        nebula.memories.update(
            m.id,
            metadata={"archived": True},
            merge_metadata=True,
        )
    offset += len(page.results)
    if not page.results or offset >= page.total_entries:
        break
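
The update loop above runs serially. As a Python counterpart to the chunked Promise.all advice, a thread pool with a small worker count bounds concurrency; treat this as a sketch, and it assumes the client is safe to share across threads (otherwise create one per worker).
from concurrent.futures import ThreadPoolExecutor

def update_page(nebula, memories, workers=8):
    # At most `workers` update calls in flight at once.
    def mark_archived(m):
        nebula.memories.update(m.id, metadata={"archived": True}, merge_metadata=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(mark_archived, memories))  # list() surfaces any exceptions

# Inside the offset loop above, replace the inner for-loop with:
# update_page(nebula, page.results)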
Use memories.append() to append content. memories.update() only updates name, metadata, or collection associations.
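
To make the distinction concrete, here is a minimal side-by-side built only from calls already shown in this guide:
# update() edits attributes of an existing memory (here, its metadata)...
nebula.memories.update(memory_id, metadata={"reviewed": True}, merge_metadata=True)

# ...while append() adds new content to it (here, another conversation turn).
nebula.memories.append(
    conv_id,
    collection_id=support.id,
    messages=[{"content": "Anything else I can help with?", "role": "assistant"}],
)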

Source Operations

Memories contain sources (messages in conversations, sections in documents). The retrieve response exposes them on chunks as read-only provenance — opaque records you can inspect but not edit in place. To change stored content, update the memory metadata or append replacement content as a new memory entry.
memory = nebula.memories.retrieve(memory_id).results
# `chunks` is typed `List[object]`; on retrieve each chunk is the chunk text string.
for chunk in memory.chunks or []:
    print(chunk)

Next Steps

  • Search - Semantic search and filtering
  • Collections - Organize memories into collections