Every concept connects to every other with weight, valence, and stability. The network thinks — it propagates, inhibits, decays, and learns.
reinforce_association creates bidirectional synapses. Repeated co-activation strengthens the bond. Stability grows 1.5× per reinforcement.

apply_homeostasis normalises all weights toward a target mean, preventing runaway drift while preserving relative association strengths.

predict_next learns what typically follows a concept across all recorded episodes.

A single function call activates a concept. The network does the rest: propagating signal, resolving conflict, and surfacing the emotional context that shapes how the agent responds.
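The reinforcement and homeostasis bookkeeping can be sketched in a few lines. This is a minimal local model, not the service's implementation: it assumes stability starts at 1.0 and that apply_homeostasis rescales weights so their mean hits a configurable target (the docs state only the 1.5× multiplier and the "normalise toward a target mean" behaviour).

```python
# Sketch of the stability/homeostasis behaviour described above.
# Assumptions: stability starts at 1.0; target_mean is configurable.

def reinforce(stability: float) -> float:
    """Each reinforcement multiplies stability by 1.5x."""
    return stability * 1.5

def apply_homeostasis(weights: dict, target_mean: float = 0.5) -> dict:
    """Rescale all weights toward a target mean, preserving ratios."""
    mean = sum(weights.values()) / len(weights)
    scale = target_mean / mean
    return {k: w * scale for k, w in weights.items()}

s = 1.0
for _ in range(5):
    s = reinforce(s)
# a concept reinforced 5 times: stability 1.5**5, roughly 7.6
```

Because homeostasis multiplies every weight by the same factor, relative association strengths survive the normalisation intact.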
reinforce_association with a source, target, importance weight, and emotional valence. The system writes bidirectional synapses. Negative importance creates inhibitory connections.

propagate_activation with a concept and hop depth. Signal travels outward through synapses, attenuating at each step. Negative synapses suppress — the network computes the full emotional cascade.

apply_lateral_inhibition. Dominant concepts suppress their competitors. The network converges to a coherent emotional state — not a muddy average of everything.

hedonic_state to get a scalar read on the current emotional valence across all active concepts. Positive = the agent is in a good state. Negative = distress, fear, or tension is dominant.

decay_activations with your chosen rate. Working memory fades naturally. Long-term synapses are untouched. The slate clears, ready for the next stimulus.

// teach the association
[
  { "method": "reinforce_association",
    "params": { "src": "dog", "tgt": "fear", "importance": 1.0, "valence": -1.0 } },
  { "method": "reinforce_association",
    "params": { "src": "home", "tgt": "fear", "importance": -0.7, "valence": -0.6 } }
]

// encounter a dog
[
  { "method": "propagate_activation",
    "params": { "concept": "dog", "strength": 0.9, "hops": 2 } },
  { "method": "apply_lateral_inhibition" },
  { "method": "get_activations" }
]

// response
{
  "fear":   0.900,
  "danger": 0.900,
  "alert":  0.810,
  "escape": 0.720,
  "safe":  -0.720   // suppressed
}
// same dog, safe context — fear is lower
[
  { "method": "propagate_activation",
    "params": { "concept": "home", "strength": 0.8, "hops": 1 } },
  { "method": "propagate_activation",
    "params": { "concept": "dog", "strength": 0.9, "hops": 2 } },
  { "method": "apply_lateral_inhibition" }
]

// response — fear dampened by context
{
  "fear": 0.180,   // ← was 0.900 without home
  "safe": 0.720    // ← home pre-activated safe
}
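The propagation mechanics behind these examples can be approximated locally. This toy sketch makes two assumptions the docs do not confirm: a fixed per-hop attenuation factor (0.9 here), and additive accumulation of signal at each node. Negative synapse weights carry suppression exactly as the excitatory signal travels.

```python
# Toy multi-hop spreading activation with per-hop attenuation.
# ASSUMPTIONS: 0.9 attenuation per hop and additive accumulation
# are illustrative choices, not the service's documented constants.

from collections import defaultdict

def propagate(synapses, concept, strength, hops, attenuation=0.9):
    acts = defaultdict(float)
    frontier = {concept: strength}
    for _ in range(hops):
        nxt = defaultdict(float)
        for src, s in frontier.items():
            for tgt, w in synapses.get(src, {}).items():
                signal = s * w * attenuation   # negative w suppresses
                acts[tgt] += signal
                nxt[tgt] += signal
        frontier = nxt
    return dict(acts)

synapses = {
    "dog":  {"fear": 1.0, "safe": -0.8},   # learned associations
    "fear": {"escape": 0.9},               # second-hop cascade
}
acts = propagate(synapses, "dog", strength=0.9, hops=2)
# "fear" is excited, "safe" is driven negative, "escape" arrives on hop 2
```

Context dampening falls out of the same arithmetic: pre-activating "home" pushes positive signal into "safe" before the "dog" cascade arrives, so inhibition has less fear to amplify.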
Different surface forms of the same idea collapse to a single canonical concept. Your embedding model decides similarity. We store the registry, run the lookup, and cache every resolution so repeat queries are a single B-tree read.
Use any embedding provider. We accept vectors — you decide the model.
{ "method": "probe_similarity",
"params": {
"name_a": "anxiety",
"embedding_a": [...384 floats...],
"name_b": "anxious feeling",
"embedding_b": [...384 floats...]
}
}
// response
{ "similarity": 0.8122,
"would_collapse": true,
"recommendation": "Close — will collapse at moderate threshold" }
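Under the hood, probe_similarity presumably reduces to a cosine-similarity comparison of the two vectors you supply. Here is a local equivalent you can use to sanity-check scores before calling the API; the 0.8 collapse threshold is an assumption for illustration, not the service's documented default.

```python
# Local sketch of a cosine-similarity probe. The 0.8 collapse
# threshold is an ASSUMPTION; the service's threshold may differ.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def would_collapse(a, b, threshold=0.8):
    return cosine(a, b) >= threshold

# 3-dimensional stand-ins for real 384-float embeddings
a = [0.9, 0.1, 0.4]
b = [0.8, 0.2, 0.5]
print(round(cosine(a, b), 4), would_collapse(a, b))
```

Because the service caches every resolution, you only pay the embedding comparison once per surface form; repeats hit the cached canonical concept.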
The Ebbinghaus forgetting curve, implemented in Postgres. Each reinforcement builds stability. Emotional valence adds a second multiplier. Traumatic memories are near-permanent. Neutral facts fade fast.
Each reinforcement multiplies stability by 1.5×. A concept seen 5 times has stability ≈ 7.6 — its decay rate is divided by that factor every cycle.
High absolute valence amplifies the denominator of the decay formula. Valence −1.0 makes a synapse decay 3× slower than a neutral equivalent with identical stability.
weight ×= (1.0 − decay_rate
            ÷ max(stability, 1.0)
            ÷ (1.0 + abs(valence) × 2.0))

// traumatic  · valence −1.0 → divisor 3.0×
// reinforced · stability 7.6 → divisor 7.6×
// neutral    · stability 1.0 → divisor 1.0×
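The decay update transcribes directly into code. The only assumption below is the base decay_rate value (0.10, i.e. 10% per cycle) chosen to make the three worked cases concrete.

```python
# The forgetting-curve update above, transcribed directly.
# ASSUMPTION: decay_rate = 0.10 is an illustrative per-cycle rate.

def decay(weight, decay_rate, stability, valence):
    divisor = max(stability, 1.0) * (1.0 + abs(valence) * 2.0)
    return weight * (1.0 - decay_rate / divisor)

# neutral fact: full decay applies          -> weight drops to 0.90
neutral = decay(1.0, 0.10, stability=1.0, valence=0.0)
# traumatic: valence -1.0, divisor 3.0      -> decays 3x slower
trauma  = decay(1.0, 0.10, stability=1.0, valence=-1.0)
# reinforced 5x: stability 1.5**5 (~7.6)    -> decays 7.6x slower
strong  = decay(1.0, 0.10, stability=1.5**5, valence=0.0)
```

The two multipliers compound: a synapse that is both reinforced and emotionally charged divides its decay rate by stability and the valence factor together.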
Any AI that interacts with a person over time can use associative memory to model that person's emotional landscape and reason about it.
You call named functions over JSON-RPC. Batched. Authenticated with an API key. The Go middleware verifies your token, enforces quota, and routes to Postgres. You never touch SQL directly.
{
"jsonrpc": "2.0",
"method": "reinforce_association",
"id": "req_01",
"params": {
"src": "dog",
"tgt": "fear",
"importance": 1.0,
"user_id": "usr_...",
"valence": -1.0
}
}
[
{ "method": "propagate_activation",
"params": { "concept": "dog",
"strength": 0.9, "hops": 2 } },
{ "method": "apply_lateral_inhibition",
"params": {} },
{ "method": "hedonic_state",
"params": {} }
]
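Wiring a batch like the one above into a client takes only the standard library. The endpoint URL and auth header format below are placeholders, not documented values; substitute whatever your deployment provides.

```python
# Minimal JSON-RPC 2.0 batch client sketch.
# PLACEHOLDERS: API_URL and the Bearer auth scheme are assumptions;
# use the endpoint and header your deployment documents.

import json
import urllib.request

API_URL = "https://api.example.com/rpc"   # placeholder endpoint
API_KEY = "sk_..."                        # placeholder key

def build_batch(calls):
    """Stamp jsonrpc version and sequential ids onto each call."""
    return [
        {"jsonrpc": "2.0", "id": f"req_{i:02d}", **call}
        for i, call in enumerate(calls, start=1)
    ]

def rpc_batch(calls):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_batch(calls)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# rpc_batch([
#     {"method": "propagate_activation",
#      "params": {"concept": "dog", "strength": 0.9, "hops": 2}},
#     {"method": "apply_lateral_inhibition", "params": {}},
#     {"method": "hedonic_state", "params": {}},
# ])
```

Batching keeps the propagate → inhibit → read sequence in one round trip, so the activations you read reflect exactly the stimulus you just sent.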