January 30 2026

dryft.ai tweets

thoughts on RLMs and context windows for ERP systems

limited context windows remain a bottleneck for intelligence in large-scale ERP systems.

at dryft.ai we’re thinking about how the Recursive Language Model (RLM) paradigm can help us solve context rot. instead of cramming the context window full, what if we program our way through it? here’s how it works (1/6)


an RLM has a root LM (depth=0) that spins up sub-LMs (depth=1+). rather than stuffing context into tokens, we store it as a Python variable in a persistent REPL environment.

the sub-LM can write code to inspect, slice, filter, and recursively query the context, returning to the root LM only what matters. attention is expensive, memory is cheap. (2/6)
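the loop above can be sketched in a few lines. everything here is a hypothetical stand-in (`llm_call` is a placeholder for any chat-completion API, and the `PEEK <start> <end>` protocol is invented for illustration), not the actual RLM implementation:

```python
# minimal sketch of an RLM step, under assumed names:
# llm_call is a hypothetical placeholder for a real model API call,
# and "PEEK <start> <end>" is an invented mini-protocol for this demo.

def llm_call(prompt: str) -> str:
    """placeholder for a real model call; returns a canned plan here."""
    return "PEEK 0 3"

def rlm_query(question: str, context: list[str]) -> str:
    """root LM (depth=0): context lives as a Python variable, never as tokens."""
    env = {"ctx": context}  # persistent REPL state the model can inspect
    # the model only sees metadata about ctx (its length), not ctx itself
    plan = llm_call(
        f"{question}\nctx has {len(context)} records. reply: PEEK <start> <end>"
    )
    _, start, end = plan.split()
    # a depth=1 sub-LM would receive just the slice it asked for
    snippet = env["ctx"][int(start):int(end)]
    return "\n".join(snippet)
```

the point of the sketch: the full context never passes through a prompt; the model pulls slices on demand via code.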


ERP system decisions are long-horizon + complex. we want our agents to have as much access as possible to data (the full stream of operational events) and memory. RLMs reduce the strain on the context window and bring us closer to true manufacturing optimization. (3/6)


just as our agents lean on mathematical optimization, having them programmatically digest relevant information prevents attention dilution and the errors that compound over long horizons. (4/6)
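to make "programmatically digest" concrete: instead of reading every event in-context, the agent can emit a filter and only the result enters the window. the event schema and threshold below are illustrative assumptions, not dryft.ai's actual data model:

```python
# hypothetical operational event stream; schema and threshold are
# invented for illustration, not dryft.ai's real data model.
events = [
    {"machine": "press-2", "type": "temp", "value": 88},
    {"machine": "press-2", "type": "temp", "value": 131},
    {"machine": "lathe-1", "type": "temp", "value": 90},
]

THRESHOLD = 120  # assumed alert threshold, illustrative only

# the agent writes this comprehension instead of attending over all events
relevant = [
    e for e in events
    if e["type"] == "temp" and e["value"] > THRESHOLD
]
# only `relevant` (one event here) ever reaches the context window
```

one anomalous reading out of thousands of events costs a handful of tokens instead of a full dump.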


recent work from Alex Zhang et al suggests post-training models on RLM trajectories can 8x base performance. we’re determined to optimize how our agents make the thousands of small operational decisions. (5/6)


agents and the RLM paradigm are not mutually exclusive. more efficient and cleaner context window management improves downstream agentic capabilities.

further reading in descending order of technicality: ~Alex Mackenzie~, ~Arjun~, ~Alex Zhang~, ~Prime Intellect~, and ofc the ~arXiv paper~ (6/6)