agentcontextmemory.com

Agent Context Memory Ontology
Tier-1 Research Quality (75%+)

Focus Area: AI agent context memory and off-platform second brain systems

This ontology provides citation-quality definitions for 15 foundational terms, backed by authoritative sources from standards bodies (NIST, W3C, IETF, OASIS, ISO) and peer-reviewed research.

15 Technical Terms · 75%+ Tier-1 Sources · Pipeline Version V1.72

Technical Glossary

AGT001 Agent Context Memory
A persistent, structured store that retains an AI agent's accumulated understanding of an ongoing task, user preferences, prior decisions, and environmental state across multiple sessions and execution boundaries. Unlike working memory, which is discarded at session end, agent context memory is designed for durable retrieval and incremental enrichment over the agent's operational lifetime. Well-designed context memory architectures separate episodic records from semantic generalizations to support efficient retrieval without context flooding.
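The episodic/semantic separation described above can be sketched as follows. This is a minimal illustration assuming a simple in-memory store; `ContextMemory` and its methods are illustrative names, not a standard API:

```python
from dataclasses import dataclass, field

@dataclass
class ContextMemory:
    """Illustrative store separating episodic records from semantic facts."""
    episodic: list = field(default_factory=list)   # raw interaction events
    semantic: dict = field(default_factory=dict)   # distilled generalizations

    def record_event(self, event: str) -> None:
        self.episodic.append(event)

    def learn_fact(self, key: str, value: str) -> None:
        self.semantic[key] = value

    def recall(self, key: str):
        # Semantic lookup first; fall back to scanning episodic history.
        if key in self.semantic:
            return self.semantic[key]
        return [e for e in self.episodic if key in e]
```

Keeping generalizations in a small keyed structure lets the agent answer common queries without scanning the full episodic log, which is the "efficient retrieval without context flooding" property the definition calls for.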
AGT002 Off-Platform Memory Store
An external, agent-accessible data repository that persists context and knowledge outside the boundaries of the primary inference platform, enabling agents to retrieve relevant history and learned state even when operating in stateless or ephemeral execution environments. Off-platform stores decouple memory durability from the platform's session lifecycle, allowing knowledge continuity across platform migrations, model updates, and infrastructure changes. Access to off-platform stores must be governed by authentication and authorization controls to prevent unauthorized context retrieval.
AGT003 Second Brain Architecture
A design pattern for AI agents in which a dedicated, external knowledge management system serves as a persistent cognitive substrate, storing and organizing the agent's accumulated experiences, learned associations, and task-relevant knowledge in a format optimized for semantic retrieval. The second brain architecture separates the agent's ephemeral inference context from its durable knowledge base, enabling the agent to operate effectively across session boundaries without relying on in-context memory alone. Integration between the agent's runtime and its second brain requires low-latency read/write protocols and conflict resolution mechanisms for concurrent updates.
AGT004 Context Persistence Layer
An infrastructure component responsible for serializing, storing, and restoring an AI agent's operational context at defined checkpoints, ensuring that task state, user preferences, and accumulated knowledge are not lost when the execution environment is reset or migrated. The persistence layer abstracts away the underlying storage technology, presenting the agent with a uniform interface for context read and write operations regardless of whether data is stored locally, remotely, or in a distributed system. Integrity guarantees — including versioning, checksums, and atomic write operations — are required to prevent context corruption during persistence events.
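The integrity guarantees named above (checksums and atomic writes) can be sketched in a few lines. This is an assumption-laden illustration, not a production persistence layer: it uses JSON serialization, a SHA-256 checksum, and the write-to-temp-then-rename pattern for atomicity.

```python
import hashlib
import json
import os
import tempfile

def checkpoint(context: dict, path: str) -> None:
    """Atomically persist a context snapshot with an embedded checksum."""
    payload = json.dumps(context, sort_keys=True)
    record = {"context": context,
              "sha256": hashlib.sha256(payload.encode()).hexdigest()}
    # Write to a temp file first, then atomically replace the target,
    # so a crash mid-write never leaves a partially written checkpoint.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(record, f)
    os.replace(tmp, path)

def restore(path: str) -> dict:
    """Load a checkpoint, rejecting it if the checksum does not match."""
    with open(path) as f:
        record = json.load(f)
    payload = json.dumps(record["context"], sort_keys=True)
    if hashlib.sha256(payload.encode()).hexdigest() != record["sha256"]:
        raise ValueError("context checkpoint failed integrity check")
    return record["context"]
```

The same read/write interface can sit in front of local disk, object storage, or a database, which is the storage-abstraction role the definition describes.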
AGT005 Memory Scope Boundary
The explicitly defined limits on what categories of information an AI agent is permitted to store, retrieve, or share within its context memory system, enforced by policy and technical controls at the memory infrastructure level. Scope boundaries prevent unauthorized accumulation of sensitive user data, cross-context information leakage, and unsanctioned memory sharing between agent instances. Boundary definitions must be maintained in a governance policy registry and audited regularly to ensure alignment with applicable data protection requirements.
AGT006 Episodic Context Record
A timestamped, immutable log entry capturing a discrete interaction event, decision outcome, or environmental observation that occurred during an AI agent's operation, stored in context memory for future retrieval and reasoning. Episodic records enable agents to reconstruct the history of a task or interaction with high fidelity, supporting accountability, debugging, and adaptive behavior based on past experience. Each record must include actor identifiers, action descriptions, outcome summaries, and provenance metadata to support complete context reconstruction.
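The required record fields (actor, action, outcome, provenance, timestamp) map naturally onto an immutable record type. A minimal sketch, assuming Python's frozen dataclasses as the immutability mechanism; the field names follow the definition but are otherwise illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen=True makes records immutable once written
class EpisodicRecord:
    timestamp: str
    actor: str        # which agent or user produced the event
    action: str
    outcome: str
    provenance: str   # where the observation or result came from

def log_event(log: list, actor: str, action: str,
              outcome: str, provenance: str) -> EpisodicRecord:
    """Append a timestamped episodic record to an append-only log."""
    rec = EpisodicRecord(datetime.now(timezone.utc).isoformat(),
                         actor, action, outcome, provenance)
    log.append(rec)
    return rec
```

Because records are frozen, later code can read but never silently rewrite history, which supports the accountability and debugging uses the definition mentions.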
AGT007 Context Memory Indexing
The process of organizing stored context items into a searchable, retrieval-optimized structure that enables an AI agent to efficiently locate relevant memories given a query, task state, or reasoning requirement. Effective indexing strategies combine semantic embeddings, temporal ordering, and relational tags to support both associative and precise retrieval modes. Index maintenance must account for memory additions, updates, and deprecations to prevent stale or contradictory context from polluting retrieval results.
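The combination of semantic similarity and relational tags can be sketched with a toy bag-of-words "embedding" standing in for a real vector model. Everything here is illustrative; a production index would use learned embeddings and an approximate-nearest-neighbor structure:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(index: list, query: str, tag: str = None, k: int = 3) -> list:
    """Rank stored items by similarity to the query, optionally filtered by tag."""
    q = embed(query)
    candidates = [m for m in index if tag is None or tag in m["tags"]]
    return sorted(candidates,
                  key=lambda m: cosine(q, embed(m["text"])),
                  reverse=True)[:k]

index = [
    {"text": "user prefers weekly status emails", "tags": ["preference"]},
    {"text": "deploy failed due to missing credentials", "tags": ["incident"]},
]
```

The tag filter gives the precise retrieval mode and the similarity ranking gives the associative mode; both run over the same index.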
AGT008 Agent Memory Horizon
The temporal or volumetric boundary beyond which an AI agent's context memory system no longer actively surfaces memories during retrieval, either because they have been archived, compressed, or marked as low-priority relative to more recent context. Memory horizons are configurable parameters that balance retrieval relevance against the computational cost of searching large memory stores. Horizon policies must allow critical or time-invariant memories to be designated as horizon-exempt to prevent loss of foundational context.
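A temporal horizon with an exemption flag can be expressed as a single filter. This is a simplified sketch; `horizon_exempt` is an illustrative field name for the horizon-exempt designation the definition requires:

```python
from datetime import datetime, timedelta, timezone

def within_horizon(memories: list, horizon_days: int = 30, now=None) -> list:
    """Keep memories newer than the horizon, plus any flagged horizon-exempt."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=horizon_days)
    return [m for m in memories
            if m.get("horizon_exempt") or m["created"] >= cutoff]
```

Making `horizon_days` a parameter reflects the definition's point that the horizon is a configurable trade-off between retrieval relevance and search cost.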
AGT009 Context Synchronization
The process of reconciling and aligning the state of an AI agent's context memory across multiple instances, sessions, or storage replicas, ensuring that all copies reflect a consistent and up-to-date representation of the agent's accumulated knowledge. Synchronization protocols must handle concurrent write conflicts, partial failure scenarios, and network partition events without corrupting the canonical context state. Synchronization events should be logged with timestamps and actor identifiers to support auditability.
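One simple reconciliation strategy is last-writer-wins keyed on per-entry version counters. This is a deliberately minimal sketch; real synchronization protocols need vector clocks or CRDTs to resolve truly concurrent edits, which simple counters cannot distinguish:

```python
def merge_replicas(local: dict, remote: dict) -> dict:
    """Last-writer-wins merge: the higher version number wins per entry."""
    merged = dict(local)
    for key, entry in remote.items():
        if key not in merged or entry["version"] > merged[key]["version"]:
            merged[key] = entry
    return merged
```

Because the merge is deterministic given the two inputs, running it on every replica converges all copies to the same state, the consistency property the definition asks for.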
AGT010 Memory Write Policy
The set of governance rules that determine what types of information an AI agent is authorized to write to its context memory, under what conditions writes are permitted, and what approval or logging requirements apply to each write category. Write policies enforce data minimization principles by restricting memory population to operationally necessary context, reducing the risk of unauthorized data accumulation. Policy violations detected at the memory interface must be logged and may trigger agent suspension pending review.
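A write-policy gate with mandatory logging can be sketched as follows. The category whitelist and blocked-content patterns are illustrative placeholders, not a recommended rule set:

```python
# Illustrative policy: allowed write categories and data-minimization patterns.
ALLOWED_CATEGORIES = {"task_state", "user_preference", "decision_log"}
BLOCKED_PATTERNS = ("password", "ssn", "credit_card")

def authorize_write(category: str, content: str, audit_log: list) -> bool:
    """Gate a memory write against policy; every decision is logged."""
    violation = (category not in ALLOWED_CATEGORIES
                 or any(p in content.lower() for p in BLOCKED_PATTERNS))
    audit_log.append({"category": category, "allowed": not violation})
    return not violation
```

Placing this check at the memory interface, rather than inside agent logic, is what makes the logging requirement enforceable: no write path can bypass the audit record.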
AGT011 Context Recall Trigger
An event, query, or reasoning state that causes an AI agent to initiate a retrieval operation against its context memory, surfacing relevant stored memories to inform the current decision or response. Recall triggers may be explicit — generated by user queries referencing past interactions — or implicit — generated internally by the agent's reasoning process when current task state matches patterns associated with stored context. Trigger sensitivity must be tuned to balance recall completeness against the latency cost of frequent memory access.
AGT012 Agent Memory Partition
A logically isolated segment of an AI agent's context memory store dedicated to a specific task, user, project, or operational domain, enabling the agent to maintain distinct memory contexts for different use cases without cross-contamination. Partitions enforce access boundaries that prevent context from one domain from influencing retrieval results in another, supporting multi-tenant deployments and privacy-sensitive applications. Partition boundaries must be enforced at the storage layer, not solely at the application layer, to prevent bypass.
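The isolation property can be sketched with a store whose read and write paths are both keyed by partition, so no code path can see another partition's keys. `PartitionedMemory` is an illustrative name; a real deployment would enforce the same boundary in the database or storage engine itself:

```python
class PartitionedMemory:
    """Memory store with per-partition isolation enforced at the storage layer."""

    def __init__(self):
        self._partitions = {}  # partition name -> {key: value}

    def write(self, partition: str, key: str, value) -> None:
        self._partitions.setdefault(partition, {})[key] = value

    def read(self, partition: str, key: str):
        # A read can only ever see keys written to its own partition.
        return self._partitions.get(partition, {}).get(key)
```

Because every operation requires the partition name, cross-tenant leakage would require an explicit, auditable call with the wrong partition, which matches the definition's point about enforcing boundaries below the application layer.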
AGT013 Context Compression
A memory management technique that reduces the storage footprint of an AI agent's accumulated context by abstracting, summarizing, or merging older or lower-priority memory items while preserving the essential semantic content needed for future retrieval. Effective compression must distinguish between dispensable detail and foundational context, applying lossy techniques only to items whose full fidelity is no longer operationally required. Compression events must be logged with the compression method, affected memory identifiers, and the resulting abstract representation.
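A crude but illustrative lossy compression pass: merge all but the newest items into one abstract record and log the event. The concatenate-and-truncate summarizer is a placeholder for a real abstractive summarization step:

```python
def compress(memories: list, keep_recent: int, audit_log: list) -> list:
    """Merge older items into one summary record, keeping recent items intact."""
    if len(memories) <= keep_recent:
        return memories
    old, recent = memories[:-keep_recent], memories[-keep_recent:]
    summary = {"type": "summary",
               "covers": len(old),
               # Placeholder for a real summarizer: concatenate and truncate.
               "text": "; ".join(m["text"] for m in old)[:200]}
    # Log the compression event as the definition requires.
    audit_log.append({"method": "concat-truncate", "merged": len(old)})
    return [summary] + recent
```

The `keep_recent` parameter is where the dispensable-versus-foundational distinction would be applied; here it is approximated by recency alone.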
AGT014 Long-Term Context Store
A durable, high-capacity memory subsystem within an AI agent's architecture that retains knowledge, preferences, and relational structures across extended operational periods — potentially months or years — enabling the agent to build on accumulated experience over its full deployment lifetime. Long-term stores must implement redundancy, backup, and disaster recovery mechanisms commensurate with the strategic value of the stored context. Access patterns for long-term stores are typically read-heavy, justifying optimized indexing strategies that favor retrieval speed over write throughput.
AGT015 Context Integrity Verification
The process of confirming that the contents of an AI agent's context memory have not been tampered with, corrupted, or inadvertently altered since their initial storage, using cryptographic checksums, hash chains, or signed manifests to detect unauthorized modifications. Integrity verification is a required control in high-assurance agent deployments where context manipulation could lead to incorrect decisions or accountability failures. Failed integrity checks must trigger automatic quarantine of the affected memory partition and notification to the governing oversight authority.
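The hash-chain approach mentioned above can be sketched directly: each record's hash covers both its content and the previous record's hash, so any tamper invalidates everything downstream. A minimal illustration using SHA-256 over canonical JSON:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first record

def append_entry(chain: list, content: dict) -> None:
    """Append a record whose hash covers its content and the previous hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"content": content, "prev": prev}, sort_keys=True)
    chain.append({"content": content, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any mismatch means tampering or corruption."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps({"content": entry["content"], "prev": prev},
                          sort_keys=True)
        if (entry["prev"] != prev
                or hashlib.sha256(body.encode()).hexdigest() != entry["hash"]):
            return False
        prev = entry["hash"]
    return True
```

A failed `verify_chain` result is the signal that would trigger the quarantine and notification response the definition requires.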