Summary
When using Memory.delete(memory_id) to delete individual memories, the operation only removes data from the vector store (Qdrant) and adds a history record, but fails to remove corresponding nodes and relationships from the Neo4j graph store. This leads to orphaned graph data that accumulates over time.
Environment
- mem0ai version: 0.1.115
- Graph store: Neo4j 5.23
- Vector store: Qdrant
- Configuration: Standard mem0 configuration with both vector and graph stores enabled
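For reference, a configuration along these lines reproduces the setup. This is a sketch, not the exact config from our deployment: the connection details and credentials are placeholders.

```python
# Hypothetical mem0 configuration with both stores enabled.
# Hosts, ports, and the password are placeholders.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "neo4j",
            "password": "<password>",
        },
    },
}

# from mem0 import Memory
# memory = Memory.from_config(config)
```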
Problem Description
Current Behavior
The _delete_memory() method in mem0/memory/main.py only performs cleanup on:
- Vector store (Qdrant) - ✅ Working correctly
- History database - ✅ Working correctly
- Neo4j graph store - ❌ Missing cleanup
Code Analysis
Problematic _delete_memory() method:
```python
def _delete_memory(self, memory_id):
    logger.info(f"Deleting memory with {memory_id=}")
    existing_memory = self.vector_store.get(vector_id=memory_id)
    prev_value = existing_memory.payload["data"]

    # ✅ Removes from vector store
    self.vector_store.delete(vector_id=memory_id)

    # ✅ Adds to history
    self.db.add_history(
        memory_id,
        prev_value,
        None,
        "DELETE",
        actor_id=existing_memory.payload.get("actor_id"),
        role=existing_memory.payload.get("role"),
        is_deleted=1,
    )

    # ❌ MISSING: No graph store cleanup!
    # Should call: self.graph.delete(memory_id, filters) or similar

    capture_event("mem0._delete_memory", self, {"memory_id": memory_id, "sync_type": "sync"})
    return memory_id
```
Inconsistent Behavior
Working correctly in add() method:
```python
def add(self, messages, **kwargs):
    # ✅ Adds to vector store
    vector_store_result = self._add_to_vector_store(messages, metadata, filters, infer)

    # ✅ Adds to graph store
    if self.enable_graph:
        graph_result = self._add_to_graph_store(messages, filters)

    return {"results": vector_store_result, "relations": graph_result}
```
Partial fix in delete_all() method:
```python
def delete_all(self, **filters):
    memories = self.vector_store.list(filters=filters)[0]

    # ❌ Individual deletions don't clean the graph
    for memory in memories:
        self._delete_memory(memory.id)  # Missing graph cleanup

    # ✅ But then deletes the entire matching graph section
    if self.enable_graph:
        self.graph.delete_all(filters)  # Too broad - deletes all matching filters
```
Impact
Data Consistency Issues
- Orphaned nodes: Memory nodes remain in Neo4j after vector deletion
- Stale relationships: Graph relationships point to non-existent memories
- Storage bloat: Graph database grows indefinitely without cleanup
- Query inconsistencies: Graph queries return deleted memories
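The drift between the two stores can be quantified by diffing their ID sets. A minimal sketch, assuming you can export memory IDs from both Qdrant and Neo4j (the example IDs are illustrative):

```python
def find_orphaned_ids(graph_ids, vector_ids):
    """Return memory IDs present in the graph store but absent from the vector store."""
    return set(graph_ids) - set(vector_ids)


# Example: "abc123" was deleted from Qdrant, but its node survives in Neo4j
orphans = find_orphaned_ids(
    graph_ids=["abc123", "def456", "ghi789"],
    vector_ids=["def456", "ghi789"],
)
# orphans == {"abc123"}
```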
Real-World Example
In our environment:
- Qdrant: 2600 active memories
- Neo4j: 2700+ nodes/relationships (including orphaned data from deleted memories)
- Memory Graph visualization: Shows connections to memories that no longer exist in vector store
Log Evidence
```
# Memory deletion log - only shows vector store deletion
2025-07-29 20:12:36 - mem0_client - INFO - Memory deleted from Mem0 with project scope
memory_id=abc123, project_id=xyz, user_id=user1
```

```cypher
// Neo4j still contains relationships to the deleted memory
MATCH (n) WHERE n.id = 'abc123' RETURN n  // Returns orphaned node
```
Expected Behavior
The _delete_memory() method should clean up all three storage systems:
- ✅ Vector store (current behavior)
- ✅ History database (current behavior)
- ❌ Graph store (missing)
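Whatever shape the fix takes, it needs to rebuild the scoping filters (user_id/agent_id/run_id) from the stored payload so the graph delete hits only the right session. That step can be factored into a small, independently testable helper; the function name here is illustrative, not an existing mem0 API:

```python
def build_graph_filters(payload: dict) -> dict:
    """Extract session-scoping filters from a vector store payload, dropping absent keys."""
    keys = ("user_id", "agent_id", "run_id")
    return {k: payload[k] for k in keys if payload.get(k) is not None}
```

Keeping this separate also makes it easy to assert that None values never leak into the graph query parameters.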
Proposed Solution
Option 1: Add graph cleanup to _delete_memory()
```python
def _delete_memory(self, memory_id):
    logger.info(f"Deleting memory with {memory_id=}")
    existing_memory = self.vector_store.get(vector_id=memory_id)
    prev_value = existing_memory.payload["data"]

    # Extract filters from the existing memory payload
    filters = {
        "user_id": existing_memory.payload.get("user_id"),
        "agent_id": existing_memory.payload.get("agent_id"),
        "run_id": existing_memory.payload.get("run_id"),
    }
    # Remove None values
    filters = {k: v for k, v in filters.items() if v is not None}

    # Existing cleanup
    self.vector_store.delete(vector_id=memory_id)
    self.db.add_history(...)

    # NEW: Add graph cleanup
    if self.enable_graph:
        self.graph.delete(memory_id, filters)  # Need to implement this method

    return memory_id
```
Option 2: Implement graph-specific delete method
Add a new method to the graph interface:
```python
# In graph store classes
def delete_memory_node(self, memory_id: str, filters: dict):
    """Delete a specific memory node and its relationships."""
    query = """
    MATCH (n {id: $memory_id})
    WHERE n.user_id = $user_id
    DETACH DELETE n
    """
    self.execute(query, {"memory_id": memory_id, "user_id": filters.get("user_id")})
```
Steps to Reproduce
- Initialize mem0 with both vector and graph stores enabled
- Add memories using memory.add(messages, user_id="test")
- Verify the data exists in both Qdrant and Neo4j
- Delete an individual memory using memory.delete(memory_id)
- Check Qdrant: the memory is deleted ✅
- Check Neo4j: the memory node and its relationships still exist ❌
Verification Query
```cypher
// Check for orphaned nodes in Neo4j
MATCH (n)
WHERE NOT EXISTS {
  // This would need to be adapted based on your vector store integration
  // The point is to find nodes that exist in the graph but not in the vector store
}
RETURN count(n) AS orphaned_nodes
```
Related Issues
This issue affects:
- Memory consistency across storage systems
- Graph visualization accuracy
- Storage optimization
- Long-term system maintenance