Memory Operations
Learn the core operations for working with memories in Nebula: storing, retrieving, and deleting.
New to Nebula? Start with Core Concepts to understand the architecture.
Storing Memories
Single Memory
from nebula import Nebula
nebula = Nebula(api_key="your-api-key")
memory_id = nebula.store_memory({
"collection_id": "research-collection",
"content": "Machine learning automates analytical model building",
"metadata": {"topic": "AI", "difficulty": "intermediate"}
})
print(f"Stored memory: {memory_id}")
Returns: Memory ID - use this to reference the memory later.
Batch Storage
Store multiple memories at once for better performance:
memories = [
{
"collection_id": "research-collection",
"content": "Supervised learning uses labeled training data",
"metadata": {"type": "definition"}
},
{
"collection_id": "research-collection",
"content": "Neural networks are inspired by biological systems",
"metadata": {"type": "concept"}
}
]
memory_ids = nebula.store_memories(memories)
print(f"Stored {len(memory_ids)} memories")
Batch operations are more efficient than storing memories one at a time.
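For large imports, you can split the list into fixed-size batches. The sketch below assumes a batch size of 100, which is illustrative rather than a documented limit:
BATCH_SIZE = 100  # illustrative; check your plan's actual limits

all_ids = []
for i in range(0, len(memories), BATCH_SIZE):
    batch = memories[i:i + BATCH_SIZE]
    all_ids.extend(nebula.store_memories(batch))
print(f"Stored {len(all_ids)} memories in total")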
Conversation Messages
Store conversational exchanges with roles:
# Initial message
conversation_id = nebula.store_memory({
"collection_id": "support-collection",
"content": "Hello! How can I help you?",
"role": "assistant",
"metadata": {"session_id": "session_123"}
})
# Follow-up messages
nebula.store_memory({
"memory_id": conversation_id, # Add to same conversation
"collection_id": "support-collection",
"content": "I need help with my account",
"role": "user"
})
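Later turns follow the same shape, so a whole exchange can be appended in a loop. A minimal sketch using only the fields shown above:
# Append several turns to the same conversation
turns = [
    ("user", "I can't log into my account"),
    ("assistant", "Let's try resetting your password.")
]
for role, content in turns:
    nebula.store_memory({
        "memory_id": conversation_id,
        "collection_id": "support-collection",
        "content": content,
        "role": role
    })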
See Conversations Guide for multi-turn conversation patterns.
Document Upload
# Upload text
doc_id = nebula.create_document_text(
collection_ref="my-collection",
raw_text="Machine learning is a subset of AI...",
metadata={"title": "ML Intro"}
)
# Upload from file
with open("doc.txt", "r") as f:
    doc_id = nebula.create_document_text(
        collection_ref="my-collection",
        raw_text=f.read(),
        metadata={"filename": "doc.txt"}
    )
# Upload pre-chunked
doc_id = nebula.create_document_chunks(
collection_ref="my-collection",
chunks=["Intro...", "Chapter 1...", "Chapter 2..."],
metadata={"title": "My Doc"}
)
Ingestion modes: fast (default), hi-res (better quality), custom
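The calls above do not show how a mode is selected. The sketch below assumes an ingestion_mode keyword argument; that parameter name is an assumption, so check the SDK reference for the actual spelling:
# ingestion_mode is an assumed parameter name; verify against the SDK
doc_id = nebula.create_document_text(
    collection_ref="my-collection",
    raw_text="Machine learning is a subset of AI...",
    metadata={"title": "ML Intro"},
    ingestion_mode="hi-res"
)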
Web UI: Click “Add Memory” → “Upload File” → Select .txt/.md/.json/.csv/.log file
Retrieving Memories
Get by ID
memory = nebula.get_memory(memory_id)
print(f"Content: {memory.content}")
print(f"Metadata: {memory.metadata}")
print(f"Created: {memory.created_at}")
List Memories
Retrieve all memories in a collection:
memories = nebula.list_memories(
collection_ids=["research-collection"],
limit=50,
offset=0
)
for memory in memories:
    print(f"{memory.content[:50]}...")
For semantic search (finding memories by meaning), see the Search Guide.
Deleting Memories
Delete Single Memory
result = nebula.delete(memory_id)
print(f"Deleted: {result}")
Batch Delete
# Delete multiple memories
result = nebula.delete([memory_id_1, memory_id_2, memory_id_3])
print(f"Deleted {result['deleted_count']} memories")
Deletion is permanent and cannot be undone.
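Because deletes are irreversible, it can help to confirm what you are removing first. A minimal sketch; the broad Exception catch is a placeholder, so substitute the SDK's specific error class if it exposes one:
# Inspect before an irreversible delete
memory = nebula.get_memory(memory_id)
print(f"About to delete: {memory.content[:50]}...")
try:
    nebula.delete(memory_id)
except Exception as exc:  # placeholder; use the SDK's error type if available
    print(f"Delete failed, memory retained: {exc}")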
Chunk Operations
Memories can contain multiple chunks (for documents) or messages (for conversations). Each chunk has a unique ID for granular operations.
Get Chunks
When retrieving a memory, chunks include their IDs:
memory = nebula.get_memory(memory_id)
# Access chunks with IDs
for chunk in memory.chunks:
    print(f"Chunk {chunk.id}: {chunk.content[:50]}...")
    if chunk.role:  # For conversation messages
        print(f"Role: {chunk.role}")
Delete Chunk
Delete a specific chunk or message:
# Delete a specific message in a conversation
nebula.delete_chunk(chunk_id)
Update Chunk
Update content or metadata of a specific chunk:
# Update chunk content
nebula.update_chunk(
chunk_id=chunk_id,
content="Updated content here",
metadata={"edited": True}
)
Chunk IDs are automatically generated when storing memories. You only need them for granular operations.
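Putting the pieces together, you can fetch a memory, locate a chunk by role or content, and edit it in place. A short sketch using only the calls shown above:
# Find and annotate a specific message in a conversation
memory = nebula.get_memory(conversation_id)
for chunk in memory.chunks:
    if chunk.role == "user" and "account" in chunk.content:
        nebula.update_chunk(
            chunk_id=chunk.id,
            metadata={"reviewed": True}
        )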
Complete Example
Here’s a full workflow demonstrating all operations:
from nebula import Nebula
nebula = Nebula(api_key="your-api-key")
# 1. Create collection
collection = nebula.create_collection(name="demo", description="Example collection")
# 2. Store memories
memories = [
{
"collection_id": collection.id,
"content": "Python is a high-level programming language",
"metadata": {"language": "python"}
},
{
"collection_id": collection.id,
"content": "JavaScript is used for web development",
"metadata": {"language": "javascript"}
}
]
memory_ids = nebula.store_memories(memories)
print(f"Stored {len(memory_ids)} memories")
# 3. Retrieve
memory = nebula.get_memory(memory_ids[0])
print(f"Retrieved: {memory.content}")
# 4. Search
results = nebula.search(
query="programming languages",
collection_ids=[collection.id],
limit=5
)
print(f"Found {len(results)} results")
# 5. Delete
nebula.delete(memory_ids)
print("Deleted all memories")
Best Practices
- Use batch operations - store_memories() is faster than multiple store_memory() calls
- Add descriptive metadata - Makes filtering and organization easier
- Organize by collections - Group related memories for better access control
- Validate before storing - Check that required fields (collection_id, content) are present
- Handle errors gracefully - Use try/except for API operations (see the sketch below)
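A small wrapper that applies the last two practices, validating required fields and catching API errors. A sketch only; the broad Exception catch stands in for the SDK's own error class:
def store_safely(nebula, memory):
    # Validate required fields before calling the API
    for field in ("collection_id", "content"):
        if not memory.get(field):
            raise ValueError(f"Missing required field: {field}")
    try:
        return nebula.store_memory(memory)
    except Exception as exc:  # placeholder; use the SDK's error type if available
        print(f"Store failed: {exc}")
        return None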
Rich metadata improves search quality. Include fields like type, category, priority, or tags.
Good metadata structure:
{
"collection_id": "support-collection",
"content": "User reported login issue",
"metadata": {
"type": "support_ticket",
"priority": "high",
"user_id": "user_123",
"category": "authentication",
"tags": ["login", "bug"],
"created_by": "agent_001"
}
}
This enables powerful filtering:
- Find all high-priority tickets
- Search tickets for specific users
- Filter by category or tags
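As a sketch only, a filtered search might look like the following; the filters parameter and its keys are assumptions, and the guide linked below documents the real syntax:
# Hypothetical filter syntax; see Metadata Filtering for the actual API
results = nebula.search(
    query="login problems",
    collection_ids=["support-collection"],
    filters={"priority": "high", "category": "authentication"},  # assumed
    limit=10
)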
Learn more in Metadata Filtering.
Next Steps