devnexus is an open-source CLI that gives agents persistent shared memory across repos, sessions, and engineers. It maps dependencies and relations at the function level, builds a code graph, and writes it into a shared Obsidian vault that every agent reads before writing code. Past decisions are also linked directly to the code they touched, so no one goes down the same dead end twice. I am still building it out, but I would love to hear any thoughts or feedback. Comments URL: https://news.ycombinator.com/item?id=47812829 Points: 4 # Comments: 0
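devnexus's internals aren't shown in the post, but the core idea, extracting function-level call edges and writing them as linked Markdown notes an agent can read, can be sketched with the Python stdlib. This is a minimal illustration, not devnexus's actual implementation: the source snippet, note layout, and vault conventions here are all hypothetical.

```python
# Hedged sketch: extract function-level call edges from Python source with
# the stdlib `ast` module, then render an Obsidian-style Markdown note per
# function with [[wikilinks]] to its dependencies. Illustrative only.
import ast

SOURCE = """
def load(path):
    return open(path).read()

def parse(path):
    text = load(path)
    return text.splitlines()
"""

def call_edges(source: str) -> dict[str, list[str]]:
    """Map each top-level function to the bare names it calls."""
    tree = ast.parse(source)
    edges: dict[str, list[str]] = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            edges[node.name] = [
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            ]
    return edges

def to_note(func: str, calls: list[str]) -> str:
    """Render one vault note linking a function to what it depends on."""
    links = ", ".join(f"[[{c}]]" for c in calls) or "(none)"
    return f"# {func}\n\nCalls: {links}\n"

edges = call_edges(SOURCE)
print(to_note("parse", edges["parse"]))  # note with a [[load]] wikilink
```

Past decisions could then be attached the same way: a decision note that wikilinks the function notes it touched, so any agent traversing the vault graph sees the history before editing that code.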
I wanted to share something I have been working on, in one form or another, since about 2019. Springdrift is a persistent, auditable runtime for long-lived agents, written in Gleam on the BEAM. It is my attempt to fill some of the gaps in agent development. It is designed to do everything an agent like Openclaw can do (and eventually more), but it can also diagnose its own errors and failures, it has a sophisticated safety metacognition system, and it has a character that should not drift. It started out as a machine ethics prototype in Java and gradually morphed into a full agent runtime; all the intermediate variations worth saving are in my GitHub repo.

I recall trying to explain to a mentor exactly what I was building. I found it difficult because there was no existing category for this kind of agent. It is not quite an assistant, because it does more than run tasks. It is not quite an autonomous agent, because even though it acts autonomously, its autonomy is bounded. I kept falling back on the example of assistance animals, like guide dogs: a non-human agent with bounded autonomy. But this is not a guide dog, it is an AI system, so I looked to fiction for the final piece: JARVIS, K9 from Doctor Who, Rhadamanthus from the novel The Golden Age. All of these systems have bounded autonomy and a long-term professional relationship with humans, like a family lawyer or doctor whose services are retained. Hence the type of this system: an Artificial Retainer.

The system has plenty of interesting features: ambient self-perception, introspection tooling, and a safety system based on computational ethics (Becker) and decision theory (Beach). It is auditable, backed up to git, and can manage its own work with a scheduler and a supervised team of subagents. The website and the accompanying paper provide more details. I make no huge claims for the system; it is pretty new.
What I offer is a reference implementation of a new category of AI agent, one that I think we need. The road to AGI is all very well, and I am not sure Springdrift gets us any closer, but it does represent an attempt to build the intermediate, safe type of agent that we seem to be missing. All feedback and comments are welcome! GitHub: https://github.com/seamus-brady/springdrift Arxiv paper: https://arxiv.org/abs/2604.04660 Eval data: https://huggingface.co/datasets/sbrady/springdrift-paper-eva... Website: https://springdrift.ai/ Comments URL: https://news.ycombinator.com/item?id=47785663 Points: 2 # Comments: 0
Article URL: https://github.com/AxmeAI/axme-code/ Comments URL: https://news.ycombinator.com/item?id=47768862 Points: 2 # Comments: 1