# Memory Operations
Core operations for working with memories: storing, retrieving, and deleting.
New to Nebula? Start with Core Concepts to understand the architecture.
## Storing Memories
```python
from nebula import Nebula

nebula = Nebula(api_key="your-api-key")

# Single memory
memory_id = nebula.store_memory({
    "collection_id": "research",
    "content": "Machine learning automates model building",
    "metadata": {"topic": "AI"}
})

# Batch storage (more efficient)
memory_ids = nebula.store_memories([
    {"collection_id": "research", "content": "Neural networks...", "metadata": {"type": "concept"}},
    {"collection_id": "research", "content": "Deep learning...", "metadata": {"type": "concept"}}
])
```
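For large imports, it can help to cap the size of each request rather than sending one giant batch. A minimal sketch, assuming `store_memories()` accepts any list of the dicts shown above; the batch size of 100 is an arbitrary choice, not a documented limit:

```python
def store_in_batches(nebula, records, batch_size=100):
    """Store a large list of memory dicts in fixed-size batches."""
    all_ids = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        all_ids.extend(nebula.store_memories(batch))
    return all_ids

records = [
    {"collection_id": "research", "content": f"Note {i}", "metadata": {"type": "note"}}
    for i in range(1000)
]
memory_ids = store_in_batches(nebula, records)
```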
### Conversation Messages
```python
# Create conversation
conv_id = nebula.store_memory({
    "collection_id": "support",
    "content": "Hello! How can I help?",
    "role": "assistant"
})

# Add to same conversation
nebula.store_memory({
    "memory_id": conv_id,
    "collection_id": "support",
    "content": "I need help with my account",
    "role": "user"
})
```
See the Conversations Guide for multi-turn patterns.
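Putting the two calls together: a multi-turn exchange creates the conversation on the first call and reuses its ID for every later turn. A minimal sketch of that loop, assuming `store_memory()` returns the conversation ID on the first call, as in the snippet above:

```python
# First turn creates the conversation; later turns pass its memory_id.
turns = [
    ("assistant", "Hello! How can I help?"),
    ("user", "I need help with my account"),
    ("assistant", "Sure - what seems to be the problem?"),
]

conv_id = None
for role, content in turns:
    record = {"collection_id": "support", "content": content, "role": role}
    if conv_id is not None:
        record["memory_id"] = conv_id  # append to the existing conversation
    result = nebula.store_memory(record)
    if conv_id is None:
        conv_id = result  # first call returns the conversation's ID
```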
## Document Upload
Upload a document as raw text, pre-chunked text, or a file. File processing (OCR/transcription/text extraction) happens automatically.
```python
from nebula import Nebula, Memory

nebula = Nebula(api_key="your-api-key")
collection = nebula.get_collection_by_name("my-collection")

# Upload text
doc_id = nebula.create_document_text(
    collection_id=collection.id,
    raw_text="Machine learning is a subset of AI...",
    metadata={"title": "ML Intro"}
)

# Upload pre-chunked
doc_id = nebula.create_document_chunks(
    collection_id=collection.id,
    chunks=["Chapter 1...", "Chapter 2..."],
    metadata={"title": "My Doc"}
)

# Upload a file
doc_id = nebula.store_memory(
    Memory.from_file("document.pdf", collection_id=collection.id, metadata={"title": "Research Paper"})
)
```
Inline base64 uploads are limited to ~5MB per file part; larger files use a presigned upload flow (max 100MB).
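If the SDK routes oversized files to the presigned flow automatically (as the note above suggests), the only check worth doing yourself is rejecting files over the 100MB ceiling before uploading. A minimal sketch using the `collection` and `Memory` objects from the snippet above:

```python
import os

MAX_UPLOAD_BYTES = 100 * 1024 * 1024  # 100MB presigned-upload ceiling

path = "document.pdf"
size = os.path.getsize(path)
if size > MAX_UPLOAD_BYTES:
    raise ValueError(f"{path} is {size / 1e6:.1f}MB; split or reduce it before uploading")

doc_id = nebula.store_memory(
    Memory.from_file(path, collection_id=collection.id, metadata={"title": "Research Paper"})
)
```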
## Retrieving Memories
```python
# Get by ID
memory = nebula.get_memory(memory_id)

# List memories in collection
memories = nebula.list_memories(collection_ids=["research"], limit=50)
```
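`list_memories()` returns the stored records, so lightweight organization can happen client-side. A sketch that groups results by a metadata key; the `.metadata` attribute on each result is an assumption based on the fields stored earlier:

```python
from collections import defaultdict

memories = nebula.list_memories(collection_ids=["research"], limit=50)

by_topic = defaultdict(list)
for m in memories:
    # Fall back to "untagged" when a memory has no topic metadata.
    topic = (m.metadata or {}).get("topic", "untagged")
    by_topic[topic].append(m)

for topic, items in sorted(by_topic.items()):
    print(f"{topic}: {len(items)} memories")
```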
For semantic search (finding by meaning), see the Search Guide.
## Deleting Memories
```python
# Delete a single memory
nebula.delete(memory_id)

# Delete multiple memories
nebula.delete([memory_id_1, memory_id_2, memory_id_3])
```
Deletion is permanent and cannot be undone.
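Because deletion is irreversible, one defensive pattern is a two-phase delete: flag the memory as archived first (using `update_memory()`, covered under Bulk Operations below) and hard-delete only after a review window. A sketch of that pattern:

```python
import datetime

# Phase 1: flag instead of deleting, keeping the memory recoverable.
nebula.update_memory(
    memory_id=memory_id,
    metadata={"status": "archived", "archived_at": datetime.date.today().isoformat()},
    merge_metadata=True,
)

# Phase 2 (later, after review): hard-delete the flagged memory.
nebula.delete(memory_id)
```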
## Bulk Operations
Use `list_memories()` with `metadata_filters` to target a set, then delete or update metadata in batches.
```python
memories = nebula.list_memories(
    collection_ids=["docs"],
    metadata_filters={"metadata.status": {"$eq": "archived"}},
    limit=1000
)

# Bulk delete
nebula.delete([m.id for m in memories])

# Bulk metadata update
for m in memories:
    nebula.update_memory(
        memory_id=m.id,
        metadata={"archived": True},
        merge_metadata=True
    )
```
Use `store_memory()` to append content; `update_memory()` only updates name, metadata, or collection associations.
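For very large filtered sets, it may be worth deleting in fixed-size batches rather than one huge request. A sketch reusing the `memories` list above; the batch size is an arbitrary choice, not a documented limit:

```python
ids = [m.id for m in memories]

BATCH = 200  # arbitrary; tune to your workload
for start in range(0, len(ids), BATCH):
    nebula.delete(ids[start:start + BATCH])
```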
## Chunk Operations
Memories contain chunks (messages in conversations, sections in documents). Each chunk has a unique ID.
```python
memory = nebula.get_memory(memory_id)
for chunk in memory.chunks:
    print(f"Chunk {chunk.id}: {chunk.content[:50]}...")

chunk_id = memory.chunks[0].id  # pick any chunk ID from the loop above

# Update a chunk
nebula.update_chunk(chunk_id=chunk_id, content="Updated content", metadata={"edited": True})

# Delete a specific chunk
nebula.delete_chunk(chunk_id)
```
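One practical use of chunk IDs is pruning: walk a memory's chunks and delete the ones that no longer belong. A minimal sketch that removes empty chunks, assuming `chunk.content` is a plain string as in the loop above:

```python
memory = nebula.get_memory(memory_id)
for chunk in memory.chunks:
    if not chunk.content.strip():  # empty or whitespace-only chunk
        nebula.delete_chunk(chunk.id)
```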
## Best Practices
- **Use batch operations** - `store_memories()` is faster than multiple single calls
- **Add descriptive metadata** - makes filtering and organization easier
- **Organize by collections** - group related memories logically
- **Handle errors gracefully** - wrap API calls in try/except (see the sketch below)
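For the last point, a minimal retry sketch. The bare `except Exception` is an assumption; substitute the SDK's specific error class if one is available:

```python
import time

def store_with_retry(nebula, record, attempts=3, backoff=1.0):
    """Store a memory, retrying transient failures with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return nebula.store_memory(record)
        except Exception:  # replace with the SDK's error class if available
            if attempt == attempts:
                raise
            time.sleep(backoff * 2 ** (attempt - 1))

memory_id = store_with_retry(nebula, {
    "collection_id": "research",
    "content": "Machine learning automates model building",
})
```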
## Next Steps