memory infrastructure for ai
Neuro-Symbolic
Associative Memory
Where AI agents feel, remember, and understand

Not a vector store.
A living mind.

Every concept connects to every other with weight, valence, and stability. The network thinks — it propagates, inhibits, decays, and learns.

01
Spreading Activation
Activate one concept and the signal propagates outward through weighted synapses. Each hop attenuates — weak signals die, strong ones cascade. Two hops from dog reaches danger via fear.
02
Lateral Inhibition
When a concept fires strongly it suppresses its competitors. safe and danger cannot coexist. The network resolves toward a single dominant emotional interpretation — the winner takes all.
03
Hebbian Learning
Neurons that fire together wire together. Every call to reinforce_association creates bidirectional synapses. Repeated co-activation strengthens the bond. Stability grows 1.5× per reinforcement.
04
Ebbinghaus Forgetting
Flat decay is wrong. Each reinforcement builds stability, slowing decay exponentially. Emotional valence adds a second multiplier — a traumatic memory (valence −1.0) decays 3× slower than a neutral fact.
05
Homeostasis
The brain does synaptic scaling. After repeated reinforcement, apply_homeostasis normalises all weights toward a target mean — preventing runaway drift while preserving relative association strengths.
06
Episodic Sequences
Order matters. Sequences capture narratives — home → walk → dog → fear. predict_next learns what typically follows a concept across all recorded episodes.
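The spreading-activation mechanics above can be sketched in a few lines of Python. The toy synapse graph is chosen so the numbers line up with the phobia example later on this page; it is illustrative only, since the real engine stores synapses in Postgres:

```python
# Toy synapse graph; weights chosen to reproduce the phobia example.
synapses = {
    "dog":  [("fear", 1.0)],
    "fear": [("danger", 1.0), ("alert", 0.9), ("escape", 0.8)],
}

def propagate(seed, strength, hops):
    """Signal fans outward hop by hop; each hop multiplies by the
    synapse weight, so weak paths attenuate and strong ones cascade."""
    activations = {seed: strength}
    frontier = {seed: strength}
    for _ in range(hops):
        nxt = {}
        for concept, signal in frontier.items():
            for target, weight in synapses.get(concept, []):
                nxt[target] = nxt.get(target, 0.0) + signal * weight
        for concept, delta in nxt.items():
            activations[concept] = activations.get(concept, 0.0) + delta
        frontier = nxt
    return activations

acts = propagate("dog", strength=0.9, hops=2)
# fear reaches 0.9 at hop one; danger, alert, escape cascade at hop two
```

Two hops from dog yields danger at 0.9, alert at 0.81, and escape at 0.72, matching the response shown in the phobia scenario below.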

From stimulus
to emotional state

A single function call activates a concept. The network does the rest — propagating signal, resolving conflict, and surfacing the emotional context that shapes how the agent responds.

01
Seed long-term memory
Call reinforce_association with a source, target, importance weight, and emotional valence. The system writes bidirectional synapses. Negative importance creates inhibitory connections.
02
Activate a concept
Call propagate_activation with a concept and hop depth. Signal travels outward through synapses, attenuating at each step. Negative synapses suppress — the network computes the full emotional cascade.
03
Resolve emotional conflicts
Call apply_lateral_inhibition. Dominant concepts suppress their competitors. The network converges to a coherent emotional state — not a muddy average of everything.
04
Read the hedonic state
Call hedonic_state to get a scalar read on the current emotional valence across all active concepts. Positive = the agent is in a good state. Negative = distress, fear, or tension is dominant.
05
Let working memory fade
Call decay_activations with your chosen rate. Working memory fades naturally. Long-term synapses are untouched. The slate clears, ready for the next stimulus.
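Steps 03 and 04 can be sketched as follows. Both the inhibition rule (the dominant concept suppresses competitors of opposing valence) and the activation-weighted hedonic average are illustrative assumptions, not the engine's exact formulas:

```python
def apply_lateral_inhibition(activations, valences, strength=1.0):
    # The dominant concept suppresses competitors whose valence opposes
    # its own, in proportion to its activation (illustrative rule).
    winner = max(activations, key=activations.get)
    w_val = valences.get(winner, 0.0)
    out = {}
    for concept, act in activations.items():
        opposing = valences.get(concept, 0.0) * w_val < 0
        out[concept] = act - strength * activations[winner] if opposing else act
    return out

def hedonic_state(activations, valences):
    # Scalar emotional read: activation-weighted average valence over
    # positively active concepts (one plausible formulation).
    active = {c: a for c, a in activations.items() if a > 0}
    total = sum(active.values())
    if total == 0:
        return 0.0
    return sum(a * valences.get(c, 0.0) for c, a in active.items()) / total

valences = {"fear": -1.0, "danger": -0.9, "safe": 0.8}
acts = apply_lateral_inhibition(
    {"fear": 0.9, "danger": 0.9, "safe": 0.72}, valences)
mood = hedonic_state(acts, valences)
# fear dominates, safe is pushed negative, and mood comes out negative
```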
phobia_scenario · json-rpc
// teach the association
[
  { "method": "reinforce_association",
    "params": { "src": "dog", "tgt": "fear",
                "importance": 1.0, "valence": -1.0 } },
  { "method": "reinforce_association",
    "params": { "src": "home", "tgt": "fear",
                "importance": -0.7, "valence": -0.6 } }
]

// encounter a dog
[
  { "method": "propagate_activation",
    "params": { "concept": "dog",
                "strength": 0.9, "hops": 2 } },
  { "method": "apply_lateral_inhibition" },
  { "method": "get_activations" }
]

// response
{ "fear": 0.900, "danger": 0.900,
  "alert": 0.810, "escape": 0.720,
  "safe": -0.720 }  // suppressed
context_matters · json-rpc
// same dog, safe context — fear is lower
[
  { "method": "propagate_activation",
    "params": { "concept": "home",
                "strength": 0.8, "hops": 1 } },
  { "method": "propagate_activation",
    "params": { "concept": "dog",
                "strength": 0.9, "hops": 2 } },
  { "method": "apply_lateral_inhibition" }
]

// response — fear dampened by context
{ "fear": 0.180,   // ← was 0.900 without home
  "safe": 0.720 }  // ← home pre-activated safe

Bring your own embeddings.
We handle canonicalization.

Different surface forms of the same idea collapse to a single canonical concept. Your embedding model decides similarity. We store the registry, run the lookup, and cache every resolution so repeat queries are a single B-tree read.

anxiety           → anxiety            canonical
anxious feeling   → anxiety            0.8122 similarity
anxiousness       → anxiety            0.8543 similarity
feeling anxious   → anxiety            alias stored
joy               → joy                distinct · 0.4197
volcano eruption  → volcano eruption   distinct · 0.2283
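The collapse decision is plain cosine similarity against a threshold. A minimal sketch, where the 0.80 threshold, the tiny 3-d vectors, and the `resolve` helper are all illustrative assumptions (in practice you would tune the threshold with probe_similarity):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def resolve(name, embedding, registry, threshold=0.80):
    """Collapse a surface form onto the nearest canonical concept when
    similarity clears the threshold; otherwise register it as new."""
    best, best_sim = None, -1.0
    for canonical, vec in registry.items():
        sim = cosine(embedding, vec)
        if sim > best_sim:
            best, best_sim = canonical, sim
    if best is not None and best_sim >= threshold:
        return best            # alias stored → existing canonical concept
    registry[name] = embedding
    return name                # distinct → new canonical concept

registry = {"anxiety": [0.9, 0.1, 0.0]}
canon = resolve("anxious feeling", [0.88, 0.15, 0.02], registry)
# collapses to "anxiety"; a dissimilar vector would register as distinct
```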

Use any embedding provider. We accept vectors — you decide the model.

OpenAI · 1536d
Cohere · 1024d
MiniLM · 384d
Nomic · 768d
Gemini · 768d
Custom · any
probe before you commit · json-rpc
{ "method": "probe_similarity",
  "params": {
    "name_a":      "anxiety",
    "embedding_a": [...384 floats...],
    "name_b":      "anxious feeling",
    "embedding_b": [...384 floats...]
  }
}

// response
{ "similarity":         0.8122,
  "would_collapse":     true,
  "recommendation":    "Close — will collapse at moderate threshold" }

Fear outlasts facts.
By design.

The Ebbinghaus forgetting curve, implemented in Postgres. Each reinforcement builds stability. Emotional valence adds a second multiplier. Traumatic memories are near-permanent. Neutral facts fade fast.

[chart: synapse weight over time — traumatic (valence −1.0, high stability) decays slowest, reinforced (valence 0.0, high stability) next, neutral fact (valence 0.0, low stability) fades fastest]

Each reinforcement multiplies stability by 1.5×. A concept seen 5 times has stability ≈ 7.6 — its decay rate is divided by that factor every cycle.

High absolute valence amplifies the denominator of the decay formula. Valence −1.0 makes a synapse decay 3× slower than a neutral equivalent with identical stability.

decay formula
weight ×= (1.0 −
  decay_rate
  ÷ max(stability, 1.0)
  ÷ (1.0 + abs(valence) × 2.0)
)

// traumatic  · valence −1.0 → divisor 3.0×
// reinforced · stability 7.6 → divisor 7.6×
// neutral    · stability 1.0 → divisor 1.0×
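The numbers in those comments follow directly from the formula. A quick check in Python (the decay_rate value is arbitrary):

```python
def divisor(stability, valence):
    # Combined slowdown on decay_rate: stability term × valence term,
    # per the decay formula above.
    return max(stability, 1.0) * (1.0 + abs(valence) * 2.0)

def decay_step(weight, decay_rate, stability, valence):
    # One decay cycle.
    return weight * (1.0 - decay_rate / divisor(stability, valence))

stability_after_5 = 1.5 ** 5              # 7.59..., the "≈ 7.6" quoted above

traumatic  = divisor(1.0, -1.0)           # 3.0  → decays 3× slower
reinforced = divisor(stability_after_5, 0.0)  # ≈ 7.59 → 7.6× slower
neutral    = divisor(1.0, 0.0)            # 1.0  → baseline decay
```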

Built for agents that
know their users.

Any AI that interacts with a person over time can use associative memory to model that person's emotional landscape and reason about it.

therapy & wellbeing
Emotional context that persists across sessions
A therapeutic agent remembers that hospitals trigger anxiety for this user. That airports connect to grief from a past loss. That mornings are associated with calm. Every session starts with a full emotional map — not a blank slate.
fear cascades · valence tracking · safe context · hedonic state
companion agents
Relationships that deepen over time
Synapses strengthen with each interaction. The agent learns what topics energise this user, what drains them, what they associate with home and family and loss. Memory is not retrieval — it is the shape of a relationship.
hebbian learning · stability growth · episodic memory · predict_next
personalisation
Content that lands differently for every person
Activate a topic and read the emotional cascade before surfacing content. A product associated with stress for one user might be associated with joy for another. Context-sensitive retrieval means the same network produces different outputs per person.
propagation · lateral inhibition · emotional charge · per-user isolation
agent memory
Long-term memory that forgets the right things
Emotionally charged associations persist. Irrelevant facts decay. The agent does not accumulate noise — it accumulates meaning. Amnesia is available when a memory wipe is appropriate. Forgetting strategy is caller-controlled.
ebbinghaus decay · homeostasis · amnesia · decay_rate param

A database of functions.
Not a database of rows.

You call named functions over JSON-RPC. Batched. Authenticated with an API key. The Go middleware verifies your token, enforces quota, and routes to Postgres. You never touch SQL directly.

write · reinforce_association · teach a connection
write · propagate_activation · fire a concept
write · apply_lateral_inhibition · resolve conflict
read · get_activations · read working memory
read · hedonic_state · scalar emotional read
query · register_concept · canonicalize a concept
query · probe_similarity · tune your threshold
query · predict_next · episodic prediction
admin · decay_activations · your forgetting strategy
admin · amnesia · full cognitive wipe
request
{
  "jsonrpc": "2.0",
  "method":  "reinforce_association",
  "id":      "req_01",
  "params": {
    "src":       "dog",
    "tgt":       "fear",
    "importance": 1.0,
    "user_id":   "usr_...",
    "valence":   -1.0
  }
}
batched
[
  { "method": "propagate_activation",
    "params": { "concept": "dog",
                "strength": 0.9, "hops": 2 } },
  { "method": "apply_lateral_inhibition",
    "params": {} },
  { "method": "hedonic_state",
    "params": {} }
]
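A batch like the one above is plain JSON, so assembling it client-side is a few lines of Python. The `batch` helper, the id scheme, and the endpoint/auth details in the comment are assumptions, not part of the documented API:

```python
import json

def batch(calls, start_id=1):
    # Wrap (method, params) pairs into a JSON-RPC 2.0 batch payload.
    return [
        {"jsonrpc": "2.0", "id": f"req_{start_id + i:02d}",
         "method": method, "params": params}
        for i, (method, params) in enumerate(calls)
    ]

payload = batch([
    ("propagate_activation", {"concept": "dog", "strength": 0.9, "hops": 2}),
    ("apply_lateral_inhibition", {}),
    ("hedonic_state", {}),
])
body = json.dumps(payload)
# POST body to the service endpoint with your API key in whatever auth
# header the service docs specify, e.g. via requests.post(...).
```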