⁄ an open associative-memory standard · v0.1.0-draft · MMXXVI
Storage that remembers like you do.
Smritidb is a biology-inspired associative-memory layer for every platform. It treats meaning as a first-class addressing primitive — recall by partial cue, degrade like a hologram, consolidate the way the cortex consolidates while you sleep. An open standard for the missing layer in the storage stack.
§ 1. Three properties. All required.
- i. Fuzzy content-addressing
The brain's cue-based recall — a smell, a glimpse, the whole memory comes back.
Look up data by similarity, not by exact hash. Partial cues, near matches, semantic queries — all native.
- ii. Holographic distribution
Cortical memory — each item spread across many synapses, no single address.
Lose a chunk of the substrate; lose no specific item. Everything degrades a little, together. The math fades like a hologram, not like a disk.
- iii. Hebbian consolidation
Hippocampus to cortex during sleep. Items that fire together, bind together.
Frequently co-accessed items get pulled closer. Cold items summarize. The index reshapes itself by how you actually use it.
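The consolidation idea can be pictured with a toy sketch. This is purely illustrative — the document does not specify smritidb's actual consolidation algorithm, and every name here (`randomHV`, `consolidate`, the `rate` parameter) is hypothetical: each co-access reconciles a small fraction of the bit positions where two items disagree, pulling their hypervectors closer together.

```typescript
// Toy Hebbian-style consolidation — illustrative only, not the smritidb
// algorithm. Each co-access resolves a small fraction of the bits where
// two items disagree, so repeated co-access pulls them together.
const DIM = 4096;

const randomHV = (): Uint8Array =>
  Uint8Array.from({ length: DIM }, () => (Math.random() < 0.5 ? 1 : 0));

// Fraction of matching bits: 1.0 identical, ~0.5 for unrelated vectors.
const similarity = (a: Uint8Array, b: Uint8Array): number =>
  a.reduce((n, x, i) => n + (x === b[i] ? 1 : 0), 0) / DIM;

// rate = fraction of disagreeing bits reconciled per co-access (assumed).
function consolidate(a: Uint8Array, b: Uint8Array, rate = 0.05): void {
  for (let i = 0; i < DIM; i++) {
    if (a[i] !== b[i] && Math.random() < rate) {
      if (Math.random() < 0.5) a[i] = b[i];
      else b[i] = a[i];
    }
  }
}

const x = randomHV();
const y = randomHV();
const before = similarity(x, y);                // ≈ 0.5: unrelated items
for (let t = 0; t < 20; t++) consolidate(x, y); // simulate repeated co-access
const after = similarity(x, y);                 // noticeably higher
```

Because `consolidate` only ever removes disagreements, similarity is monotonically non-decreasing under co-access — a crude analogue of "items that fire together, bind together."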
§ 2. The math is real.
This demo runs the reference TypeScript implementation directly in your browser — no server, no mock. The same package you can npm install.
a. Bind a single role to a single filler
Type a role and a filler. Each is encoded into a 4096-bit hypervector; we bind them by XOR and unbind by XORing again with the role. The recovered filler should match the original at similarity 1.0, exactly.
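In code, the bind/unbind round trip looks roughly like the following. This is a self-contained sketch of the XOR scheme described above, not the smritidb package API — `randomHV`, `bind`, and `similarity` are illustrative names (one bit per byte for clarity rather than a packed representation).

```typescript
// Sketch of XOR bind/unbind over 4096-bit hypervectors.
// Illustrative names; not the smritidb package API.
const DIM = 4096;

function randomHV(): Uint8Array {
  const v = new Uint8Array(DIM); // one bit per byte, for clarity
  for (let i = 0; i < DIM; i++) v[i] = Math.random() < 0.5 ? 1 : 0;
  return v;
}

function bind(a: Uint8Array, b: Uint8Array): Uint8Array {
  const out = new Uint8Array(DIM);
  for (let i = 0; i < DIM; i++) out[i] = a[i] ^ b[i];
  return out;
}

// Fraction of matching bits: 1.0 identical, ~0.5 for unrelated vectors.
function similarity(a: Uint8Array, b: Uint8Array): number {
  let same = 0;
  for (let i = 0; i < DIM; i++) if (a[i] === b[i]) same++;
  return same / DIM;
}

const role = randomHV();
const filler = randomHV();
const bound = bind(role, filler);    // role ⊕ filler
const recovered = bind(bound, role); // (role ⊕ filler) ⊕ role = filler

console.log(similarity(recovered, filler)); // 1 — XOR is its own inverse
```

The exact similarity of 1.0 falls out of the algebra: XOR is self-inverse, so unbinding with the same role reproduces the filler bit for bit.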
b. A record bundled from three pairs
Bundle three role-filler bindings into one hypervector. Query it by role; the cleanup memory recovers the correct filler from a fixed candidate set, above the ~0.5 random-pair baseline.
| rank | candidate | similarity |
|------|-----------|------------|
| 1 | alice | 0.7490 |
| 2 | carol | 0.5110 |
| 3 | thirty | 0.5049 |
| 4 | chicago | 0.4995 |
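The bundling demo can be sketched as follows, assuming the XOR-bind scheme plus a bitwise majority vote for bundling (a standard choice for binary hypervectors; the helper names and the specific role/filler labels are illustrative, not the smritidb API).

```typescript
// Sketch of bundling three role–filler pairs and cleanup by nearest
// candidate. Illustrative names; not the smritidb package API.
const DIM = 4096;

const randomHV = (): Uint8Array =>
  Uint8Array.from({ length: DIM }, () => (Math.random() < 0.5 ? 1 : 0));

const bind = (a: Uint8Array, b: Uint8Array): Uint8Array =>
  a.map((x, i) => x ^ b[i]);

const similarity = (a: Uint8Array, b: Uint8Array): number =>
  a.reduce((n, x, i) => n + (x === b[i] ? 1 : 0), 0) / DIM;

// Bundle by bitwise majority vote (odd count, so no ties).
const bundle = (vs: Uint8Array[]): Uint8Array => {
  const out = new Uint8Array(DIM);
  for (let i = 0; i < DIM; i++) {
    let ones = 0;
    for (const v of vs) ones += v[i];
    out[i] = ones * 2 > vs.length ? 1 : 0;
  }
  return out;
};

const roles = { name: randomHV(), age: randomHV(), city: randomHV() };
const fillers = {
  alice: randomHV(), carol: randomHV(),
  thirty: randomHV(), chicago: randomHV(),
};

const record = bundle([
  bind(roles.name, fillers.alice),
  bind(roles.age, fillers.thirty),
  bind(roles.city, fillers.chicago),
]);

// Query by role: unbinding yields a noisy filler; the cleanup memory
// picks the nearest candidate from the fixed set.
const noisy = bind(record, roles.name);
const ranked = Object.entries(fillers)
  .map(([name, hv]) => ({ name, sim: similarity(noisy, hv) }))
  .sort((a, b) => b.sim - a.sim);

console.log(ranked[0].name); // "alice", at similarity ≈ 0.75
```

With three bundled pairs, the queried filler survives the majority vote at ≈ 0.75 similarity while every other candidate sits at the ≈ 0.5 random-pair baseline — the same spread the table above shows.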
§ 3. A position.
on what this is for
Every storage system you have used is built on the same unstated assumption: the question you ask later will look exactly like the address you wrote earlier. Filesystems, key-value stores, relational databases, object stores, content-addressed stores — every layer of the modern stack inherits that fifty-year-old bet.
The data we store now is meaning-shaped. The questions we ask are partial, fuzzy, compositional, time-decaying. We have spent a decade gluing vector indexes onto byte-addressed storage and calling it AI infrastructure.
It works. It is also the least native abstraction we could have picked.
Read the manifesto — or skim the specification.