智能助手网
Tag aggregation: Multi


hnrss.org · 2026-04-17 22:36:11+08:00 · tech

Hi HN — we're Cem and Oguzhan. Today we are releasing Egregore ( https://github.com/egregore-labs/egregore ) as an open-source shared memory and coordination substrate for teams using Claude Code. MIT-licensed, runs locally: `npx create-egregore@latest --open`. Here's a 90-second walkthrough: https://shorturl.at/e1RX2

We were originally working on dynamic ontology systems (how to organize unstructured data dynamically, given context and starting conditions). Since December, the new capabilities that arrived with Opus 4.5 completely altered the nature of our work. But despite the radical increase in the speed and ambition of our experiments, we were visibly diverging from a common vision. Once we acknowledged that together, we decided to build infrastructure mechanisms for convergence. First it was integrating the GitHub organization and repos we were working on; later we developed custom commands to share research, request code reviews, and even send each other spam through a Telegram connector we built for notifications :) https://imgur.com/a/egregore-telegram-notification-LM0ldPV

We then realized we could integrate the agentic knowledge-graph infrastructure we were building for the dynamic ontology research. Our stack emerged very rapidly, and one week in it was glaringly obvious that we should stop everything and work exclusively on this. The speed at which multi-agent context accumulated, coupled with frontier model capabilities, yielded wow moments one after another. For two months we implemented key features and improved core functionality. Given our background in organization design and distributed systems, we knew this was a primitive that should be spread widely and experimented with, since there is no single right answer to how an organization should be designed.
But it was clear to us that using the agent-harness form factor as an organization-design substrate is incredibly promising, given the speed of context accumulation, the continuity (handoffs) it enables between individuals, and the adaptability of the organization to changing internal and external conditions. Anyway, after a couple of months of wow-ing at our own creation, we knew we had to get this out and face the moment of truth: did we actually build something interesting, or are we getting high on our own supply? So that's what we are doing today. We created a version of Egregore — slash commands, git-backed markdown memory, Claude Code hooks — that is self-hosted and dependency-free (no knowledge graph, etc.), so that we can release it as a viable open-source primitive anyone can use and mutate according to their needs with their friends and colleagues.

What you can do with it today:
- `/invite` — onboard a teammate with the full accumulated context already in place
- `/handoff` — pass a session on with reasoning and context preserved; the next person, future-you, or their agent picks up the thread without restarting
- `/activity` — see what the team has actually been doing, synthesized from real sessions
- `/deep-reflect` — cross-reference an insight against the organization's full memory
- `/view` — render a shareable artifact from any slice of what the team knows

Excited to have you all try it, and to get your feedback. Comments URL: https://news.ycombinator.com/item?id=47806427 Points: 3 # Comments: 2

hnrss.org · 2026-04-17 00:58:29+08:00 · tech

Mulligan Labs is a browser-based playtester for Magic: The Gathering. No account or install needed. Just create a room, share the link, import a decklist from Archidekt or Moxfield, and play with mouse and keyboard (mobile support is not great right now). Stack: SvelteKit on Cloudflare Workers, PartyKit (Durable Objects) for the authoritative game server. Clients propose actions over WebSocket; the server validates and broadcasts state. My background is networking and my cofounder's is industrial design. Neither of us had shipped a codebase like this before. We built it over the last 5 months with heavy Claude assistance. Happy to get into what that actually looked like in the comments. It's rough in places (the deck builder is just ok right now) but the core multiplayer loop is solid and we have played a ton of games on it with our Commander pod. We'd love feedback, especially from anyone who's played Cockatrice/XMage/Untap and has opinions on what a browser-native version should feel like. Comments URL: https://news.ycombinator.com/item?id=47796266 Points: 3 # Comments: 4
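The "clients propose actions, the server validates and broadcasts state" loop described above can be sketched roughly like this. This is a minimal illustration of the pattern, not Mulligan Labs' actual code; all type and function names here are invented.

```typescript
// Authoritative-server pattern: the server keeps the only trusted copy
// of game state. Clients send proposed actions; the server applies a
// pure validator/reducer and broadcasts the result (or rejects).

type Zone = "library" | "hand" | "battlefield";

interface GameState {
  turn: number;
  cards: Record<string, { owner: string; zone: Zone }>;
}

type Action =
  | { kind: "draw"; player: string; cardId: string }
  | { kind: "play"; player: string; cardId: string };

// Returns the next state, or null if the proposal is illegal
// (e.g. playing a card that isn't in your hand).
function applyAction(state: GameState, action: Action): GameState | null {
  const card = state.cards[action.cardId];
  if (!card || card.owner !== action.player) return null;
  if (action.kind === "draw" && card.zone !== "library") return null;
  if (action.kind === "play" && card.zone !== "hand") return null;
  const nextZone: Zone = action.kind === "draw" ? "hand" : "battlefield";
  return {
    ...state,
    cards: { ...state.cards, [action.cardId]: { ...card, zone: nextZone } },
  };
}

// Demo: a client draws and then plays a card.
let state: GameState = {
  turn: 1,
  cards: { c1: { owner: "alice", zone: "library" } },
};
const proposals: Action[] = [
  { kind: "draw", player: "alice", cardId: "c1" },
  { kind: "play", player: "alice", cardId: "c1" },
];
for (const action of proposals) {
  const next = applyAction(state, action);
  if (next) state = next; // in a Durable Object, broadcast here
}
console.log(state.cards["c1"].zone); // battlefield
```

The key property is that clients never mutate state directly, so a buggy or malicious client can at worst propose actions the server refuses.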

hnrss.org · 2026-04-16 22:11:37+08:00 · tech

I started EDDI in 2006 as a rule-based dialog engine. Back then it was pattern matching and state machines. When LLMs showed up, the interesting question wasn't "how do I call GPT" but "how do I keep control over what the AI does in production?" My answer was: agent logic belongs in JSON configs, not code. You describe what an agent should do, which LLM to use, what tools it can call, and how it should behave. The engine reads that config and runs it. No dynamic code execution, ever. The LLM cannot run arbitrary code by design. The engine is strict so the AI can be creative. v6 is the version where this actually became practical. You can have groups of agents debating a topic in five different orchestration styles (round table, peer review, devil's advocate...). Each agent can use a different model. A cascading system tries cheap models first and only escalates to expensive ones when confidence is low. EDDI also implements MCP as both server and client, so you can control it from Claude Desktop or Cursor, and it supports Google's A2A protocol so agents can discover each other across platforms. The whole thing runs on Java 25 with Quarkus, ships as a single Docker image, and installs with one command. Open source since 2017, Apache 2.0. Would love to hear thoughts on the architecture and feature set. And if you have ideas for what's missing or what you'd want from a system like this, I'm all ears. Always looking for good input on the roadmap. Comments URL: https://news.ycombinator.com/item?id=47793245 Points: 2 # Comments: 0
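A declarative agent config of the kind described might look something like this. To be clear, this is an invented illustration of the idea ("describe the agent, its model, its tools, its behavior; the engine runs it"); the field names are my guesses, not EDDI's actual schema:

```json
{
  "agent": "support-triage",
  "model": "gpt-4o-mini",
  "behavior": "Classify incoming tickets and draft a first reply.",
  "tools": ["searchKnowledgeBase", "createTicket"],
  "cascade": {
    "escalateTo": "claude-sonnet",
    "whenConfidenceBelow": 0.6
  }
}
```

Because the agent is data rather than code, the engine can enforce the tool allowlist and refuse anything outside it, which is the "no dynamic code execution" guarantee the post describes.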

hnrss.org · 2026-04-16 20:42:49+08:00 · tech

Witchcraft is a from-scratch Rust reimplementation of Stanford's XTR-Warp (SIGIR '25, https://arxiv.org/abs/2501.17788 ) multi-vector semantic search engine. Witchcraft runs out of a single SQLite database, is blazing fast (21 ms p95 end-to-end search latency on NFCorpus on a MacBook Pro), accurate (33% NDCG@10), and easy to deploy in your own apps. The Witchcraft repo also ships with Pickbrain, a sample app and agent skill you can use to instantly query across all your Claude Code and Codex CLI sessions, effectively giving your agents global long-term memory. Please see the GitHub page for more details, and feel free to ask questions here; I'll try to answer them. Comments URL: https://news.ycombinator.com/item?id=47792166 Points: 3 # Comments: 0
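For readers unfamiliar with multi-vector ("late interaction") retrieval: instead of one embedding per text, each text gets one vector per token, and a document is scored by summing, over the query's vectors, the maximum similarity against any document vector. A toy exact version of that scoring (not Witchcraft's code; XTR/WARP-style engines approximate and prune this to get the latencies quoted above):

```typescript
// Naive MaxSim scorer: score(Q, D) = sum over query vectors q of
// max over document vectors d of dot(q, d).

function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function maxSim(query: number[][], doc: number[][]): number {
  let score = 0;
  for (const q of query) {
    let best = -Infinity;
    for (const d of doc) best = Math.max(best, dot(q, d));
    score += best; // each query token matches its best document token
  }
  return score;
}

// Two "token embeddings" per text, 2-d for readability.
const query = [[1, 0], [0, 1]];
const docA = [[1, 0], [0.9, 0.1]];   // strong match for one query token
const docB = [[0.5, 0.5], [0.4, 0.4]]; // middling match for both
console.log(maxSim(query, docA) > maxSim(query, docB)); // true
```

The per-token matching is what lets multi-vector search beat single-vector search on fine-grained relevance, at the cost of storing and scanning many more vectors, which is exactly the engineering problem XTR-Warp addresses.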

hnrss.org · 2026-04-16 20:37:56+08:00 · tech

Multi-tier exact-match cache for AI agents, backed by Valkey or Redis. LLM responses, tool results, and session state behind one connection. Framework adapters for LangChain, LangGraph, and the Vercel AI SDK. OpenTelemetry and Prometheus built in. No modules required: it works on vanilla Valkey 7+ and Redis 6.2+. Shipped v0.1.0 yesterday, v0.2.0 today with cluster mode; streaming support is coming next. Existing options lock you into one tier (LangChain = LLM only, LangGraph = state only) or one framework; this solves both. npm: https://www.npmjs.com/package/@betterdb/agent-cache Docs: https://docs.betterdb.com/packages/agent-cache.html Examples: https://valkeyforai.com/cookbooks/betterdb/ GitHub: https://github.com/BetterDB-inc/monitor/tree/master/packages... Happy to answer questions. Comments URL: https://news.ycombinator.com/item?id=47792122 Points: 4 # Comments: 0
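"Exact-match" caching means the key is derived from the full request, so only byte-identical requests hit the cache. A minimal in-memory sketch of the idea (my own illustration, not the `@betterdb/agent-cache` API; in the real package the store would be Valkey/Redis with TTLs rather than a `Map`):

```typescript
import { createHash } from "node:crypto";

// Exact-match cache: identical (model, prompt) pairs share a key;
// any difference in the request, even whitespace, is a miss.
class ExactMatchCache {
  private store = new Map<string, string>();

  private key(model: string, prompt: string): string {
    // Hash with a separator so ("a", "bc") and ("ab", "c") differ.
    return createHash("sha256").update(`${model}\u0000${prompt}`).digest("hex");
  }

  get(model: string, prompt: string): string | undefined {
    return this.store.get(this.key(model, prompt));
  }

  set(model: string, prompt: string, response: string): void {
    this.store.set(this.key(model, prompt), response);
  }
}

const cache = new ExactMatchCache();
cache.set("gpt-4o", "What is 2+2?", "4");
console.log(cache.get("gpt-4o", "What is 2+2?"));   // "4"
console.log(cache.get("gpt-4o", "What is 2 + 2?")); // undefined (not exact)
```

The trade-off versus semantic caching is zero risk of serving a wrong answer, at the cost of missing near-duplicate prompts.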

hnrss.org · 2026-04-15 13:31:28+08:00 · tech

I run multiple Claude Code sessions on my Mac at home and control them from my iPhone or iPad while I'm out. I like this because it gives me maximum power and flexibility (I can use any plugin or CLI as if I were sitting there locally). The setup is cmux for the terminal multiplexer UI, tmux to keep sessions alive between connections, Tailscale for zero-config encrypted networking, and the Echo app as the iOS SSH client, with optional Mosh for auto-reconnect when you switch networks. The gist includes a `ccode` shell function that handles session naming (so it doesn't conflict with `claude` if I ever need to run that directly), tmux lifecycle, continue/skip-permissions flags, and a pre-flight check so you don't get a blank window when there's nothing to continue.
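The gist itself isn't reproduced here, but the shape of such a wrapper is roughly the following. This is my own sketch, not the author's exact function; in particular, the `cc-` naming convention and the `.claude` directory pre-flight check are assumptions.

```shell
# Sketch of a tmux wrapper for Claude Code sessions: derive a
# per-project session name, attach to a live session if one exists,
# and only pass --continue when there is history to resume.

session_name_for() {
  # e.g. /Users/me/src/my-app -> cc-my-app
  printf 'cc-%s\n' "$(basename "$1")"
}

ccode() {
  local dir=${1:-$PWD}
  local name
  name=$(session_name_for "$dir")
  local flags=""
  # Pre-flight check (assumed heuristic): only resume if a previous
  # session left local state behind, so we never open a blank window.
  [ -d "$dir/.claude" ] && flags="--continue"
  if tmux has-session -t "$name" 2>/dev/null; then
    tmux attach -t "$name"          # reconnecting from the phone
  else
    tmux new-session -s "$name" -c "$dir" "claude $flags"
  fi
}
```

Because the session lives in tmux on the Mac, dropping the SSH connection from iOS leaves Claude Code running; reattaching picks it up mid-task.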

linux.do · 2026-04-15 09:37:56+08:00 · tech

Here's what happened: over the last two days my computer would suddenly become extremely slow, recover after two or three seconds, and then repeat the cycle. At first I assumed I just had too many processes open, plus the laptop wasn't plugged in. This morning something felt off, so I checked process usage. No process had especially high CPU, but then I spotted one with a little dinosaur icon. It looked cute, but its memory usage was surprisingly high. I copied its English name, searched the web, and sure enough: a crypto miner. Asking the knowledgeable folks here: How do I remove this thing completely? How can I do a full scan to check whether there are any other miner programs on the machine? Thinking back, the only thing I downloaded in the last two days was a screen-mirroring app. I was in a hurry to use it, and it bundled Kingsoft Antivirus (金山毒霸), which I uninstalled right away; the miner presumably slipped in during that process. If you're curious, test it in a virtual machine. Download page: MAXHUB传屏助手下载2026最新电脑版-MAXHUB传屏助手官方PC版免费下载-天极下载 2 posts - 2 participants. Read the full topic.

linux.do · 2026-04-13 16:18:38+08:00 · tech

This post uses the community open-source promotion template and complies with its requirements. I declare and will follow the community requirements below:
- My post carries the 开源推广 (open-source promotion) tag: yes
- My open-source project is fully open source, with no closed components: yes
- My open-source project links to and acknowledges the LINUX DO community: yes
- Screenshots of the AI-generated or AI-polished parts of the project description in this post have been published: yes
- I promise the above holds permanently and accept oversight from the community and fellow members: yes

Why I built Multi CLI Studio

Hi everyone, I'm the developer of this project. This isn't a formal product pitch; it's more of an honest explanation of why I built it, what problem it tries to solve, and where I think its current value lies.

The conclusion first: I didn't build Multi CLI Studio because I wanted to make yet another "AI wrapper". The problem I actually want to solve is that local AI coding workflows are too fragmented. You might use Codex, Claude, and Gemini at the same time, across a CLI, a web page, an editor, and a terminal. Each tool has its strengths, but they are usually disconnected from each other: context breaks, state breaks, operations break, and in the end you spend a lot of energy on switching tools and rebuilding context. What I want is a more unified desktop workbench where these things can at least happen in one place.

The real problems I ran into:
1. Different CLIs genuinely have different strengths. Some are better at editing code directly, some at planning and explaining, some at UI work, reasoning, or long documents. The question isn't "which one is best"; it's that you often need several tools working together, and if every switch means a new terminal, page, and context, productivity drops fast.
2. Sessions and project state are always scattered. The terminal has its context, the web chat has its own, automation scripts keep their own state, and Git changes live somewhere else again. All of it revolves around the same project, yet in practice the pieces rarely know what the others are doing.
3. Automation and everyday conversation are disconnected. Many repetitive tasks should be captured as "flows" or task templates instead of rewriting the prompt each time, but in existing tools chat, command execution, and automation orchestration are three separate systems that don't connect naturally.

So what I set out to build is a local desktop app that puts these capabilities into one reasonably unified workspace: CLI terminal interaction, model chat, provider management, automation tasks, workflow orchestration, and local state persistence. That is what Multi CLI Studio does today.

Stated more directly, the project tries to solve:
1. Lowering the cost of multi-tool switching: not forcing users to pick one AI tool, but accepting that you already use several and reducing the friction.
2. Keeping context in the local workspace: terminal sessions, model chats, automation run logs, and workflow state are managed together in one desktop app as much as possible.
3. Making model switching a normal operation rather than a restart: for example, switching models per turn within the same conversation, instead of opening a new page and a fresh context for every model change.
4. Letting repetitive AI collaboration tasks accumulate: recurring work shouldn't stay at the "resend the prompt" stage forever; it should gradually become tasks, workflows, and automation.

What I think is genuinely interesting right now (without overselling):
1. Cross-CLI support is the core idea, not an add-on. The project doesn't start from "support one CLI first, then bolt on compatibility with the others". I assumed from the start that real development already spans multiple CLIs, so much of the design centers on preserving continuity across tools.
2. Terminal, chat, and automation are designed as one product. These three are usually separate, but I think they should connect tightly: a judgment formed in chat may become a terminal operation; repetitive terminal work may become automation; automation results should flow back into the conversation and the workspace.
3. Local-first. The completeness of the local workflow matters to me. The project doesn't depend on a cloud backend to function; most state is stored directly on the local machine, including SQLite and JSON data. That makes it more like a real desktop development tool than a web shell.
4. Not just "able to chat", but closer to "able to work". I don't want it to be a pure chat tool. What I care about is whether the tool can operate around a project workspace, whether it is aware of the terminal, Git, tasks, and workflows, and whether repeated AI collaboration can gradually settle into stable processes.

Who it suits today: people who already use multiple AI coding tools at once; people who want CLI workflows and model chat integrated in one place; people who want to turn repetitive AI tasks into automation step by step; people who prefer a local desktop workflow. If all you want is a very simple, lightweight single-model chat box, this project may not be the best fit. But if you've already felt the chaos of multi-tool collaboration, you'll probably understand quickly why I built it.

It isn't perfect yet, and I should be honest about that. It's still iterating quickly, and a lot is being polished: interaction details, model and CLI compatibility details, the automation and workflow experience, and stability across platforms. I won't pretend it's "very mature", but it is already good enough to pull my previously scattered workflows back into one comfortable desktop environment, and that alone is significant value to me.

What I hope it eventually becomes, put plainly: a local AI workbench genuinely suited to daily development; not a tool that just displays answers, but one that works with the project, the terminal, the models, the automation, and the context together.

Finally: if you also juggle several AI CLIs, or you also feel today's AI development workflow is too scattered, I'd welcome you to take a look. My starting point is simple: not to make a fashionable-looking AI product, but to seriously solve a problem I run into every day. If it happens to solve yours too, even better.

Some screenshots of the project's details:
1. Home page: the usual dashboard data.
2. Terminal interaction, the core feature of the system: cross-CLI execution, structured output of each CLI's message types, and support for the usual slash commands ($-skill, @file references, mode switching, etc.).
3. Simple chat with different model providers.
4. Model provider management (in the style of CC Switch).
5. CLI automation tasks: the goal is to run scheduled tasks daily, set a task objective, check the delivery against the expected result, customize execution details, and get notified of the outcome.
6. Automation workflows: you can configure a workflow in which each node uses a different CLI, and since all three major CLIs now support session resume, this is very workflow-friendly. I often queue some tasks before leaving work, let it run on its own, and review the results the next morning.
7. Configuration details: mainly for viewing each CLI's information and email notification settings.

Links
- GitHub: GitHub - Austin-Patrician/multi-cli-studio · GitHub
- Download: Release Optimize chat page · Austin-Patrician/multi-cli-studio · GitHub (on macOS you need to build and package locally)

Come try it together, and a Star would be appreciated. 6 posts - 4 participants. Read the full topic.
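A workflow of the kind described in item 6, with each node running on a different CLI and resuming its own session, could be expressed as something like the following. This is an invented shape purely for illustration; Multi CLI Studio's real configuration format may look quite different.

```json
{
  "workflow": "nightly-refactor",
  "schedule": "0 19 * * 1-5",
  "nodes": [
    { "id": "plan",   "cli": "gemini", "prompt": "Draft a refactor plan for src/", "resumeSession": false },
    { "id": "edit",   "cli": "claude", "prompt": "Apply the plan from the previous node", "resumeSession": true },
    { "id": "review", "cli": "codex",  "prompt": "Review the resulting diff and summarize risks", "resumeSession": true }
  ],
  "notify": { "email": "me@example.com", "onCompletion": true }
}
```

The session-resume flag is what makes the "queue it before leaving work, check it the next morning" pattern possible: each node picks up the context its CLI accumulated in earlier runs instead of starting cold.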

hnrss.org · 2026-04-12 22:08:46+08:00 · tech

I built NeZha, a tool to run multiple AI coding agents in parallel across projects. NeZha is a character from Chinese mythology, famous for having three heads and six arms — which feels like the perfect metaphor for running AI coding agents across multiple projects at once. Claude Code + Codex + Git + editor, all in one place. I built it because managing sessions in the terminal was getting painful, especially across projects. Comments URL: https://news.ycombinator.com/item?id=47739860 Points: 2 # Comments: 0