Preprint · Independent Research · 2026

Weighted Developmental
Memory Architecture


A theoretical framework for salience-gated memory encoding in artificial agents — proposing that experience should be converted into training signal through jolt-driven encoding, thermodynamic decay, and developmental plasticity modulation, rather than treated as a flat retrieval index.

Saurav Das · Independent Researcher · saurav12das@gmail.com · 2026
Abstract

The Memory Problem in Artificial Agents

Current large language model (LLM) architectures treat memory as a retrieval problem: given a query, find the most semantically similar stored context and inject it. This framing is productive but incomplete. It asks only what to retrieve, not what to learn from, when to encode, or when to forget. The result is agents that accumulate context but do not develop — that remember everything with equal weight, let stale beliefs persist against corrective evidence, and have no mechanism for distinguishing formative experiences from noise.

The Weighted Developmental Memory Architecture (WDMA) proposes a different framing: memory as a generator of training signal. Under WDMA, experiences are encoded selectively through salience-gated "jolts," organized into a six-tier expansion taxonomy (D0–D5), managed through a three-regime thermodynamic decay model, and modulated by a developmental plasticity curve that governs how receptive the system is to new learning at each stage of its operational lifetime. A promotion utility function governs which memories warrant long-term retention; a supersession protocol governs the replacement of outdated beliefs.

This paper formalizes the WDMA framework, characterizes its failure modes, and situates it against current memory augmentation approaches including Retrieval-Augmented Generation, MemGPT, and episodic replay buffers. We argue that the framework addresses a genuine gap in the current literature: the absence of a principled developmental account of artificial memory.

At a glance: 6 memory expansion tiers (D0–D5) · 3 thermodynamic decay regimes · 5 promotion utility parameters · 3 primary failure modes characterized.
§1 — Motivation

Why Retrieval Is Not Enough

The dominant paradigm in LLM memory augmentation is Retrieval-Augmented Generation (RAG): encode documents into embedding space, retrieve top-k by cosine similarity at inference time, and inject retrieved text into context. This works well for knowledge lookup. It is poorly suited for agents that need to develop over time — to revise prior beliefs in light of new evidence, to generalize from experience, and to have a sense of what they have learned versus what they have merely stored.

Three empirical observations motivate the WDMA framework:

Problem 1 — Equal-weight encoding creates stale belief persistence

RAG and flat episodic stores encode all experiences at equal weight. An agent that learned something incorrect six months ago and was corrected last week will, under cosine-similarity retrieval, surface the incorrect belief whenever the query is closer to the original context than to the correction. There is no mechanism for supersession.

Problem 2 — Memory accumulation ≠ learning

Biological memory systems do not store all experiences equally. The hippocampus-neocortical consolidation system preferentially encodes experiences that generate prediction error — the biological analogue of what WDMA calls a "jolt." Experience that confirms existing models is processed but not deeply encoded. Current artificial systems have no equivalent selectivity mechanism.

Problem 3 — No account of developmental stage

A newly instantiated agent and a long-running agent with thousands of interaction-hours are treated identically by current architectures. There is no equivalent of the developmental plasticity curve — the observation from cognitive neuroscience that learning rate is highest in early developmental periods and decreases with experience, while remaining episodically reactivatable by high-salience events.
Central thesis

A memory system should not only remember what happened. It should convert what happened into more learnable data — distinguishing formative experience from noise, encoding the former deeply, allowing the latter to decay, and generating structured training signal from the gap between expectation and outcome.

§2 — Core Mechanism

Salience-Gated Encoding: The Jolt

The fundamental encoding unit in WDMA is the jolt — a salience event that gates whether an experience is deeply encoded, shallowly buffered, or discarded. A jolt occurs when an experience generates a prediction error exceeding a dynamic threshold θ(t). Formally:

jolt(e, t) = 1   if Δ(e, M_t) ≥ θ(t)
           = 0   otherwise

where:
  e        = incoming experience
  M_t      = current memory state at time t
  Δ(e, M)  = salience signal: prediction error + novelty + consequence magnitude
  θ(t)     = dynamic jolt threshold (see §2.1)

The salience signal Δ(e, M) is a composite of three components: prediction error (how much the experience deviated from what the agent expected), novelty (how semantically distant the experience is from existing memory representations), and consequence magnitude (the downstream impact of the experience on the agent's goals or state).

2.1 Dynamic Jolt Threshold

A fixed salience threshold would cause early-stage agents to jolt on everything (depleting encoding capacity) and late-stage agents to jolt on nothing (precluding further learning). WDMA uses a dynamic threshold that rises with cumulative experience density:

θ(t) = θ_base · (1 + β · log(1 + N_t))

where:
  θ_base = baseline threshold (hyperparameter)
  β      = experience sensitivity (hyperparameter)
  N_t    = number of prior jolts experienced

This formulation produces a threshold that rises only logarithmically with jolt count — ensuring that genuinely novel or high-consequence experiences can still generate jolts in experienced agents, while filtering out the routine experiences that constitute the majority of an agent's inputs. The logarithmic term prevents the threshold from rising so fast that the agent becomes unjoltable.
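The jolt gate and its dynamic threshold can be sketched together. This is a minimal illustration, assuming an unweighted sum for the salience composite and illustrative hyperparameter values; none of these choices are specified by the framework itself.

```python
import math
from dataclasses import dataclass

@dataclass
class JoltGate:
    """Salience-gated encoding (§2) with a dynamic threshold (§2.1)."""
    theta_base: float = 0.5   # baseline threshold (hyperparameter, assumed)
    beta: float = 0.2         # experience sensitivity (hyperparameter, assumed)
    n_jolts: int = 0          # N_t: number of prior jolts experienced

    def threshold(self) -> float:
        # theta(t) = theta_base * (1 + beta * log(1 + N_t))
        return self.theta_base * (1.0 + self.beta * math.log(1.0 + self.n_jolts))

    def salience(self, prediction_error: float, novelty: float,
                 consequence: float) -> float:
        # Delta(e, M) as an unweighted sum of the three components;
        # any monotone combination would fit the framework equally well.
        return prediction_error + novelty + consequence

    def jolt(self, prediction_error: float, novelty: float,
             consequence: float) -> bool:
        fired = self.salience(prediction_error, novelty, consequence) >= self.threshold()
        if fired:
            self.n_jolts += 1  # each jolt raises theta(t) for future experiences
        return fired

gate = JoltGate()
print(gate.jolt(0.4, 0.2, 0.1))   # high-salience early experience fires
print(gate.jolt(0.1, 0.05, 0.0))  # routine experience is filtered
```

Note that the threshold rises as a side effect of each fired jolt, so the same experience can gate differently at different points in the agent's lifetime — the core property the formula is designed to produce.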

2.2 Jolt Reactivation

One of the non-obvious predictions of the jolt model is threshold reactivation: when an agent encounters an environment with a substantially different distribution from its training context, the effective prediction error of routine experiences rises, causing the dynamic threshold to be exceeded by inputs that would have been filtered in the prior environment. This is the WDMA account of why agents that are highly capable in their trained domain can still exhibit rapid adaptation when transferred — they are experiencing high salience relative to their existing model, not relative to an absolute scale.

§3 — Expansion Taxonomy

The D0–D5 Memory Expansion Layers

Once an experience clears the jolt threshold, WDMA proposes that it should be expanded into multiple distinct representational forms — each serving a different function in the agent's learning process. This is the D0–D5 taxonomy: six tiers of memory representation, from raw episodic replay to calibration signal generation.

The key architectural claim is that a single jolt event should automatically generate representations at all applicable tiers, with the depth of expansion modulated by the magnitude of the salience signal and the current developmental plasticity (see §5). Higher-magnitude jolts warrant deeper expansion; lower-magnitude jolts may only generate D0 and D1 representations.

Tier | Name                      | Representation Type                                          | Primary Function
D0   | Episodic Replay           | Raw experience buffer; exact context reproduction            | Direct recall; few-shot context injection
D1   | Contrastive Pair          | Correct/incorrect outcome pairs derived from jolt event      | Supervised fine-tuning signal; error correction
D2   | Counterfactual (One-Knob) | Single-variable perturbations of the original experience     | Causal structure learning; generalization
D3   | Repair Trajectory         | Sequence of correction steps from error state to resolution  | Recovery policy learning; meta-cognitive training
D4   | Abstracted Rule           | Generalized principle extracted from the jolt event          | Systematic knowledge; transferable inference
D5   | Calibration Signal        | Confidence-outcome gap record for the jolt experience        | Metacognitive training; uncertainty calibration
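The salience-modulated expansion depth described above can be sketched as follows. The magnitude cut-offs and the plasticity scaling are illustrative assumptions, not part of the taxonomy itself.

```python
TIERS = ["D0", "D1", "D2", "D3", "D4", "D5"]

def expansion_tiers(salience_magnitude: float, plasticity: float) -> list[str]:
    """Return which D0-D5 representations a jolt should generate (§3).

    Effective depth grows with salience magnitude, scaled by the current
    developmental plasticity P(t) in [0, 1]. Low-magnitude jolts produce
    only D0/D1; the highest-magnitude jolts produce all six tiers.
    The cut-off values below are assumed for illustration.
    """
    effective = salience_magnitude * plasticity
    if effective < 0.2:
        depth = 2   # D0 + D1 only
    elif effective < 0.5:
        depth = 4   # up to repair trajectories (D3)
    else:
        depth = 6   # full expansion including D4 rules and D5 calibration
    return TIERS[:depth]

print(expansion_tiers(0.3, 0.4))  # low effective salience: shallow expansion
print(expansion_tiers(0.9, 0.8))  # high salience, plastic agent: all six tiers
```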

3.1 The One-Knob Perturbation Principle (D2)

The D2 counterfactual layer deserves special attention because it imposes a design discipline that distinguishes WDMA from arbitrary data augmentation schemes. The one-knob perturbation principle specifies that counterfactuals generated at D2 must vary exactly one causal variable from the original experience:

D2(e) = { e' : e' differs from e in exactly one causal variable v_i, all other causal variables v_{j≠i} held constant }

This constraint is motivated by causal legibility: when an agent later retrieves a counterfactual pair, the difference in outcome between e and e' can be unambiguously attributed to the single varied variable. Multi-variable perturbations generate confounded training signal — the agent cannot determine which change caused the outcome difference, degrading the causal structure it is trying to learn.

The practical implication is that D2 generation should be conservative: better to generate fewer, cleanly causal counterfactuals than to flood the training buffer with confounded variations. This is a design constraint on the generation process, not merely a recommendation.
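A minimal sketch of a one-knob D2 generator, assuming experiences are represented as dictionaries of causal variables and that the alternative values for each variable are supplied externally; both representations are illustrative choices, not framework requirements.

```python
def one_knob_counterfactuals(experience: dict, alternatives: dict) -> list[dict]:
    """Generate D2 counterfactuals, each varying exactly one causal
    variable of the original experience (§3.1)."""
    out = []
    for var, alts in alternatives.items():
        for alt in alts:
            if alt == experience[var]:
                continue  # an identical value is not a perturbation
            e_prime = dict(experience)  # all other variables held constant
            e_prime[var] = alt
            out.append(e_prime)
    return out

# Hypothetical experience with three causal variables:
e = {"retry_count": 1, "timeout_s": 30, "endpoint": "primary"}
cfs = one_knob_counterfactuals(e, {"retry_count": [0, 3], "endpoint": ["fallback"]})

# The one-knob invariant: every counterfactual differs in exactly one variable.
for cf in cfs:
    assert sum(1 for k in e if cf[k] != e[k]) == 1
print(len(cfs))  # 3
```

Enforcing the invariant with an assertion at generation time, as above, is one way to make the one-knob principle a hard design constraint rather than a recommendation.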

3.2 Calibration Signal Generation (D5)

The D5 layer is the metacognitive layer of the taxonomy. Each jolt event that clears the encoding threshold implies that the agent's prior model was insufficient — it was surprised. The D5 representation records the agent's confidence prior to the outcome, the realized outcome, and the resulting confidence-outcome gap for the jolt experience.

Accumulated D5 records form a calibration dataset: a structured log of where and by how much the agent's confidence diverged from its realized accuracy. This dataset directly enables the training of calibrated uncertainty estimates — addressing one of the most persistent failure modes in deployed LLM agents.
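A hedged sketch of what a D5 record and one aggregate over the calibration dataset might look like, assuming each record reduces to a (confidence, outcome) pair; a real implementation would likely attach richer contextual metadata.

```python
from dataclasses import dataclass

@dataclass
class CalibrationRecord:
    """One D5 entry (§3.2): fields are an assumption consistent with the
    'confidence-outcome gap' description in the taxonomy."""
    predicted_confidence: float  # agent's stated confidence before the outcome
    outcome_correct: bool        # realized correctness of the prediction

def calibration_gap(records: list[CalibrationRecord]) -> float:
    """Mean gap between stated confidence and realized accuracy — one
    scalar summary of the accumulated D5 dataset."""
    if not records:
        return 0.0
    accuracy = sum(r.outcome_correct for r in records) / len(records)
    mean_conf = sum(r.predicted_confidence for r in records) / len(records)
    return mean_conf - accuracy  # positive => overconfident agent

log = [CalibrationRecord(0.9, False), CalibrationRecord(0.8, True),
       CalibrationRecord(0.95, True), CalibrationRecord(0.7, False)]
print(round(calibration_gap(log), 4))  # 0.3375: a substantially overconfident log
```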

§4 — Retention Policy

The Promotion Utility Function

Not all jolt-encoded memories warrant long-term retention. WDMA uses a promotion utility function U(m) to determine whether a memory should be promoted from short-term encoding buffer to long-term storage:

U(m) = αR + βN + γC + δV − λA

where:
  R = Relevance — semantic proximity to current task distribution
  N = Novelty — distance from existing long-term memory representations
  C = Correction value — degree to which m updates a prior belief
  V = Verification — strength of evidence supporting m's validity
  A = Age — time elapsed since encoding (decay penalty)
  α, β, γ, δ, λ = learned or calibrated weights, subject to α + β + γ + δ = 1

A memory m is promoted to long-term storage if and only if U(m) ≥ τ_promote, where τ_promote is a promotion threshold parameter. Memories below threshold remain in the short-term buffer and decay under the short-term regime (see §5).
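The promotion decision transcribes directly from the formula. The weight and threshold values below are illustrative placeholders; as the design note in this section observes, they need empirical calibration before meaning anything.

```python
from dataclasses import dataclass

@dataclass
class MemoryScores:
    relevance: float     # R: proximity to current task distribution
    novelty: float       # N: distance from existing long-term memory
    correction: float    # C: degree to which m updates a prior belief
    verification: float  # V: evidence strength for m's validity
    age: float           # A: time since encoding (decay penalty)

def promotion_utility(m: MemoryScores,
                      alpha=0.3, beta=0.2, gamma=0.3, delta=0.2,
                      lam=0.1) -> float:
    # U(m) = aR + bN + cC + dV - lA, subject to a + b + c + d = 1
    assert abs(alpha + beta + gamma + delta - 1.0) < 1e-9
    return (alpha * m.relevance + beta * m.novelty + gamma * m.correction
            + delta * m.verification - lam * m.age)

def promote(m: MemoryScores, tau_promote: float = 0.4) -> bool:
    """Promote to long-term storage iff U(m) >= tau_promote (§4)."""
    return promotion_utility(m) >= tau_promote

m = MemoryScores(relevance=0.8, novelty=0.6, correction=0.9,
                 verification=0.7, age=2.0)
print(round(promotion_utility(m), 2), promote(m))  # 0.57 True
```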

Design note on the weight parameters

The α, β, γ, δ, λ weights in U(m) are the most underconstrained element of the current framework. The function is deliberately linear for interpretability — but the correct functional form is almost certainly nonlinear, with interaction effects between novelty and relevance, and between correction value and verification strength. The current formulation should be treated as a first-order approximation, not a final specification. Empirical ablations on specific task domains are needed to calibrate these weights meaningfully.

4.1 Supersession Logic

A promoted memory m_new may supersede an existing long-term memory m_old when the following conditions are jointly satisfied:

supersede(m_new, m_old) = True iff:
  (1) semantic_overlap(m_new, m_old) ≥ σ_threshold
  (2) C(m_new | m_old) ≥ γ_min          [correction value exceeds minimum]
  (3) V(m_new) ≥ V(m_old) − ε           [new memory is at least as verified]
  (4) U(m_new) > U(m_old)               [utility comparison]

When supersession is triggered, m_old is not deleted — it is demoted to an archival tier with a flag indicating it was superseded by m_new. This preserves the history of belief revision for audit and for the D5 calibration record, while preventing the superseded belief from influencing future inference at the same weight as current beliefs.

This is the WDMA solution to the stale-memory contamination failure mode (FM-2, §7): rather than allowing retrieval to surface old and new beliefs with equal probability, supersession creates a directed graph of belief revisions that retrieval can traverse correctly.
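The four joint conditions can be transcribed as a predicate. The threshold values are illustrative, and the semantic overlap score is assumed to be supplied externally (for example, embedding cosine similarity).

```python
def supersedes(overlap: float, correction_value: float,
               v_new: float, v_old: float,
               u_new: float, u_old: float,
               sigma_threshold: float = 0.7,
               gamma_min: float = 0.3,
               epsilon: float = 0.05) -> bool:
    """All four conditions of §4.1 must hold jointly for m_new to
    supersede m_old. Threshold defaults are illustrative assumptions."""
    return (overlap >= sigma_threshold          # (1) same belief territory
            and correction_value >= gamma_min   # (2) meaningful correction
            and v_new >= v_old - epsilon        # (3) at least as verified
            and u_new > u_old)                  # (4) higher promotion utility

# A well-verified correction to an overlapping prior belief supersedes it:
print(supersedes(overlap=0.85, correction_value=0.6,
                 v_new=0.8, v_old=0.7, u_new=0.65, u_old=0.5))  # True
# An unrelated memory (low overlap) never supersedes, however strong:
print(supersedes(overlap=0.2, correction_value=0.9,
                 v_new=0.9, v_old=0.1, u_new=0.9, u_old=0.1))   # False
```

On a True result, the caller would demote m_old to the archival tier with a superseded-by flag rather than deleting it, per the protocol above.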

§5 — Thermodynamic Decay

Three-Regime Decay Architecture

WDMA adopts a thermodynamic metaphor for memory decay: memories have an energy cost to maintain, and that cost must be justified by the utility they provide. Memories that consume maintenance cost without providing retrieval value should decay toward a low-energy state — not deletion, but reduced accessibility and lower retrieval weight.

Three distinct decay regimes govern different tiers of the memory system:

[Figure: Memory retention vs. time under the three decay regimes — short-term buffer (rapid exponential), long-term promoted (power-law), residual skill (asymptotic, near-permanent). Illustrative; exact rates are domain-dependent.]

Regime I — Short-Term Buffer: Rapid Exponential Decay

Experiences that do not clear the jolt threshold, or that clear it but fail the promotion utility test, are held in a short-term buffer with rapid exponential decay:

M_s(t) = M_0 · e^{−λ_s · t} [λ_s ~ 0.3–0.5 per interaction-hour, domain-dependent]

This regime corresponds to working memory in cognitive models — high accessibility for a short window, rapid decay thereafter. The decay rate λ_s should be calibrated to the agent's interaction tempo: faster for high-frequency interaction agents, slower for agents with long inter-interaction gaps.

Regime II — Long-Term Promoted: Power-Law Decay

Promoted memories decay more slowly, following a power-law that asymptotically approaches — but never reaches — zero:

M_l(t) = M_0 · (1 + t)^{−α_l} [α_l ~ 0.1–0.3; lower for higher-utility memories]

The power-law decay model is motivated by the Ebbinghaus forgetting curve and its successors in human memory research, which consistently find that long-term memory retention follows a power function rather than an exponential. The key property of power-law decay is that it is heavy-tailed: very old memories decay slowly relative to recently-formed ones of equal initial strength, which is the correct behavior for foundational knowledge.

Regime III — Residual Skill Memory: Asymptotic Near-Zero Decay

A subset of promoted memories — particularly those corresponding to well-practiced procedural or structural knowledge — transition to a third regime with near-zero decay rate:

M_r(t) = M_∞ + (M_0 − M_∞) · e^{−λ_r · t} [M_∞ > 0; λ_r → 0]

This regime is the WDMA account of residual skill memory — the finding that even after extended periods without access, deeply learned procedural knowledge shows near-perfect retention on reactivation (the "savings" effect in memory research). In artificial agents, this corresponds to structural knowledge that has been consolidated through sufficient repetition across D0–D3 layers.
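The three decay curves transcribe directly into code; the parameter values below are taken from the illustrative ranges given in the regime descriptions above.

```python
import math

def regime_I(t: float, m0: float = 1.0, lam_s: float = 0.4) -> float:
    """Short-term buffer: rapid exponential decay, M_s(t) = M_0 * e^(-lam_s * t)."""
    return m0 * math.exp(-lam_s * t)

def regime_II(t: float, m0: float = 1.0, alpha_l: float = 0.2) -> float:
    """Long-term promoted: heavy-tailed power-law, M_l(t) = M_0 * (1 + t)^(-alpha_l)."""
    return m0 * (1.0 + t) ** (-alpha_l)

def regime_III(t: float, m0: float = 1.0,
               m_inf: float = 0.8, lam_r: float = 0.01) -> float:
    """Residual skill: asymptotic decay toward a positive floor M_inf,
    M_r(t) = M_inf + (M_0 - M_inf) * e^(-lam_r * t)."""
    return m_inf + (m0 - m_inf) * math.exp(-lam_r * t)

# After 100 interaction-hours the regimes diverge sharply: the short-term
# buffer is effectively empty, the power-law retains a substantial fraction,
# and the residual-skill regime sits near its floor.
for name, f in [("I", regime_I), ("II", regime_II), ("III", regime_III)]:
    print(name, round(f(100.0), 3))
```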

Architectural implication

The three-regime model implies that the memory system must maintain a classifier that assigns each memory to a decay regime and updates that assignment as the memory's retrieval history accumulates. A memory that is initially in Regime II may graduate to Regime III through repeated retrieval and confirmation; a memory that is demoted by supersession should be moved toward faster decay regardless of its prior regime assignment.
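The regime-assignment update described above can be sketched as a small state machine. The graduation criteria (a retrieval count and a confirmation flag) are illustrative assumptions; the framework only specifies the direction of the transitions.

```python
def assign_regime(current: int, retrievals: int, confirmed: bool,
                  superseded: bool) -> int:
    """Return a memory's decay regime (1, 2, or 3) after an update (§5).

    - Supersession demotes toward the fastest decay regardless of prior regime.
    - Repeated retrieval plus confirmation graduates Regime II to Regime III.
    - The retrieval threshold of 10 is an assumed placeholder.
    """
    if superseded:
        return 1
    if current == 2 and retrievals >= 10 and confirmed:
        return 3
    return current

print(assign_regime(2, retrievals=12, confirmed=True, superseded=False))  # graduates to 3
print(assign_regime(3, retrievals=50, confirmed=True, superseded=True))   # demoted to 1
```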

§6 — Developmental Dynamics

The Developmental Plasticity Curve

Cognitive neuroscience has established that learning rate is not constant across an organism's lifespan. Critical periods — windows of high neuroplasticity — exist early in development for language, sensory processing, and social cognition. Plasticity then declines, but remains episodically reactivatable by high-salience events.

WDMA incorporates this finding as a developmental plasticity curve P(t) that modulates the encoding depth of jolt events across the agent's operational lifetime:

P(t) = P_max · (e^{−α·t} + β · jolt_density(t, window))

where:
  P_max           = maximum plasticity (early-stage agents)
  α               = developmental decay rate
  β               = jolt reactivation coefficient
  jolt_density(t) = moving-average jolt rate over a recent window
[Figure: Developmental plasticity P(t), schematic — base developmental decay with jolt-reactivated plasticity spikes.]

The plasticity curve has three practical implications for WDMA implementation. First, early-stage agents should use lower jolt thresholds — their high plasticity means they can afford to encode more, and the encoding depth is correspondingly richer. Second, jolt events in late-stage agents should trigger temporary plasticity reactivation — the β term — allowing experienced agents to still learn deeply from genuinely novel or high-consequence experiences. Third, the plasticity curve provides a principled account of why fine-tuning at different stages of an agent's lifetime should use different learning rates: this is the architectural equivalent of a schedule, grounded in the developmental model.
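P(t) can be sketched with a simple trailing-window jolt density; the window mechanics and all parameter values here are illustrative assumptions.

```python
import math

def plasticity(t: float, recent_jolt_times: list[float],
               p_max: float = 1.0, alpha: float = 0.05,
               beta: float = 0.3, window: float = 10.0) -> float:
    """P(t) = P_max * (e^(-alpha*t) + beta * jolt_density) (§6).

    jolt_density is computed as jolts per unit time over the trailing
    window — one plausible reading of the moving-average term.
    """
    recent = [j for j in recent_jolt_times if t - window <= j <= t]
    density = len(recent) / window
    return p_max * (math.exp(-alpha * t) + beta * density)

# Early agent: high base plasticity even with no jolts at all.
print(plasticity(1.0, []))
# Late agent: base plasticity has decayed to near zero...
print(plasticity(100.0, []))
# ...but a recent burst of jolts reactivates it (the beta term).
print(plasticity(100.0, [93, 95, 97, 99]))
```

This is the quantitative form of the second implication above: the reactivation spike exists only while the jolt burst remains inside the window, making deep encoding in late-stage agents a temporary, salience-triggered state.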

§7 — Failure Mode Analysis

When WDMA-Like Systems Break

A theoretical framework is strengthened by honest characterization of its failure modes. We identify three primary failure modes, plus a fourth structural one, in WDMA-like memory architectures, all observable in current systems that approximate some but not all of WDMA's mechanisms:

FM-1 — Self-Confirmation Loops

When the jolt mechanism is poorly calibrated, high-confidence beliefs generate low salience signals even when incorrect — because the agent's model predicts the wrong answer confidently. Retrieval then surfaces the incorrect belief, which reinforces confidence, which further reduces the salience of corrective signals. The loop is self-sealing. Mitigation: the D5 calibration record provides an out-of-band signal that should be checked against retrieved beliefs before inference.

FM-2 — Stale-Memory Contamination

Without supersession logic, retrieval systems surface outdated beliefs alongside current ones with equal probability. In domains with rapid knowledge evolution — medical guidelines, regulatory frameworks, current events — this produces systematically incorrect outputs that appear authoritative. The power-law decay of Regime II partially mitigates this, but does not eliminate it. Explicit supersession (§4.1) is the primary defense.

FM-3 — False Causal Attribution

D2 counterfactual generation, if implemented with multi-variable perturbations (violating the one-knob principle), produces confounded training signal. The agent encodes a causal model in which multiple variables are simultaneously credited or blamed for an outcome. This degrades generalization — the agent applies the wrong causal inference to new situations that share surface features but differ in the actual causal variable. Strict enforcement of the one-knob constraint is the only known mitigation.

FM-4 (Structural) — Verification Gate Failure

The entire D0–D5 expansion pipeline presupposes that a verification function V(m) exists that can assess the validity of encoded memories before promotion. In open-ended conversational or planning domains, such a verifier is difficult to specify. Without reliable verification, the D1 contrastive pair and D2 counterfactual layers may encode false corrections and confounded counterfactuals, propagating error rather than correcting it. This is the most significant open problem in the WDMA framework.
Open problem: the verification gap

The verification gate V(m) is the most load-bearing and least specified component of WDMA. For narrow domains with ground-truth oracles (code execution, mathematical verification, factual lookup), the verifier is straightforward to implement. For open-ended domains — creative tasks, multi-step planning, social reasoning — no reliable verifier currently exists. WDMA's guarantees degrade in proportion to the reliability of the verification function. Future work must address this gap before the framework can be considered empirically validated in general-purpose agents.

§8 — Architecture Summary

The Full WDMA Pipeline

WDMA — Complete Processing Pipeline

1. Incoming experience e — raw interaction, observation, or outcome.
2. Jolt gate: Δ(e, M_t) ≥ θ(t)? Below threshold → discard or shallow buffer only.
3. Expansion (jolt = 1): D0 episodic replay (raw buffer encoding) · D1 contrastive pair (correct/incorrect outcome) · D2 counterfactual (one-knob perturbation) · D3 repair trajectory (error→resolution sequence) · D4 abstracted rule · D5 calibration signal (confidence-outcome gap).
4. Verification gate: V(m) ≥ τ_verify filters invalid expansions.
5. Plasticity modulation: encoding depth scaled by P(t).
6. Promotion test: U(m) = αR + βN + γC + δV − λA ≥ τ_promote? Below threshold → short-term buffer, Regime I decay.
7. Supersession check (promoted memories): replaces conflicting prior beliefs per §4.1.
8. Decay regime assignment: Regime II (power-law) or Regime III (asymptotic).
9. Long-term memory store: available for retrieval, decaying under its regime assignment, with superseded memories archived.
§9 — Related Work

Positioning Against Current Approaches

WDMA does not exist in isolation. The following comparison situates the framework against the most relevant current approaches to LLM memory augmentation:

System               | Salience gating        | Decay model         | Belief supersession | Calibration signal   | Developmental stage
Flat RAG             | —                      | —                   | —                   | —                    | —
MemGPT               | ~ Rule-based           | ~ LRU eviction      | —                   | —                    | —
Episodic Replay (RL) | ~ Priority replay      | Buffer limits       | —                   | —                    | —
Generative Agents    | Importance score       | ~ Recency weighting | —                   | —                    | —
Self-RAG             | ~ Reflection tokens    | —                   | ~ Partial           | ~ Implicit           | —
WDMA (proposed)      | Dynamic jolt threshold | Three-regime model  | Formal supersession | D5 calibration layer | Plasticity curve

(~ = partial or implicit support; — = absent)

The most closely related framework is the Generative Agents architecture (Park et al., 2023), which introduces an importance score for memory events and a recency weighting for retrieval. WDMA extends this in four directions: replacing the static importance score with a dynamic jolt threshold modulated by developmental stage; replacing simple recency weighting with a three-regime decay model; adding formal supersession logic for belief revision; and adding the D5 calibration layer for metacognitive signal generation. The combination of these extensions constitutes the distinctive contribution of the WDMA framework.

§10 — Future Work

Open Problems and Research Agenda

§11 — Conclusion

Toward a Developmental Account of Artificial Memory

The central claim of this paper is simple: artificial agents need memory systems that do more than retrieve. They need systems that develop — that distinguish formative experience from noise, encode the former deeply, generate structured learning signal from prediction errors, manage the temporal decay of stored beliefs, and revise outdated knowledge in light of corrective evidence.

The Weighted Developmental Memory Architecture formalizes this claim into a set of tractable mechanisms: the jolt gate, the D0–D5 expansion taxonomy, the promotion utility function, the three-regime decay model, the developmental plasticity curve, and the supersession protocol. Each mechanism addresses a specific gap in current approaches. Together, they constitute the outline of a developmental theory of artificial memory.

The framework is not complete. The verification gate remains the most significant open problem, and the promotion utility weights are underconstrained without empirical calibration. But the framework is specific enough to generate testable predictions and to guide implementation decisions in ways that current approaches cannot.

The core prediction

An agent implementing full WDMA — jolt-gated encoding, D5 calibration, formal supersession, and three-regime decay — will exhibit significantly lower rates of stale-memory contamination, self-confirmation looping, and false causal attribution than agents using flat retrieval architectures, across any domain with non-stationary knowledge and significant prediction error frequency.

This prediction is testable. Making it the subject of controlled empirical evaluation is the next step.

References

Selected Prior Work

Park, J.S. et al. (2023). Generative Agents: Interactive Simulacra of Human Behavior. UIST 2023.
Packer, C. et al. (2023). MemGPT: Towards LLMs as Operating Systems. arXiv:2310.08560.
Asai, A. et al. (2023). Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection. ICLR 2024.
Lewis, P. et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. NeurIPS 2020.
Schaul, T. et al. (2016). Prioritized Experience Replay. ICLR 2016.
Ebbinghaus, H. (1885). Über das Gedächtnis. Duncker & Humblot, Leipzig.
Squire, L.R. & Alvarez, P. (1995). Retrograde amnesia and memory consolidation. Current Opinion in Neurobiology.
McClelland, J.L. et al. (1995). Why there are complementary learning systems in the hippocampus and neocortex. Psychological Review 102(3).
Hensch, T.K. (2004). Critical period regulation. Annual Review of Neuroscience 27.
Wixted, J.T. (2004). The psychology and neuroscience of forgetting. Annual Review of Psychology 55.