Read an article about analyzing Garmin data with AI. Sounded great — except I didn't want to send my health data to any cloud service. So I asked Claude to write me 2-3 scripts and a dashboard. This escalated a bit. 30 days and $20 later I have this: a local-first Garmin archive with interactive HTML dashboards, Excel exports, weather and pollen context, AES-256 encrypted token storage, and a self-healing data pipeline with 515 automated tests. Windows desktop app, no terminal needed. Nothing leaves your machine. I never wrote a line of Python. I understood the problems and made the architectural decisions. Claude wrote everything else. GitHub: github.com/Wewoc/Garmin_Local_Archive Comments URL: https://news.ycombinator.com/item?id=47814208 Points: 3 # Comments: 1
Article URL: https://writememaybe.com/ Comments URL: https://news.ycombinator.com/item?id=47805671 Points: 2 # Comments: 1
Hi HN, This is Tudor from Xata. You can think of Xata as an open-source, self-hosted alternative to Aurora/Neon. Highlight features are: - Fast copy-on-write branching. - Automatic scale-to-zero and wake-up on new connections. - 100% vanilla Postgres. We run upstream Postgres, no modifications. - Production grade: high availability, read replicas, automatic failover/switchover, upgrades, backups with PITR, IP filtering, etc. You can self-host it, or you can use our [cloud service]( https://xata.io ). Background story: we've existed as a company for almost 5 years, have offered a Postgres service from the start, and have launched several different products and open-source projects here on HN before, including pgroll and pgstream. About a year and a half ago, we started to rearchitect our core platform from scratch. It has been running in production for almost a year now, serving customers of all sizes, including many multi-TB databases. One of our goals in designing the new platform was to make it cloud-independent, with a careful selection of dependencies. Part of the reason was to be able to offer it in any cloud; the other part is the subject of today's announcement: we wanted it to be open source and self-hostable. Use cases: We think Xata OSS is appropriate for two use cases: - quickly spinning up preview / testing / dev / ephemeral environments with realistic data. We think for many companies this is a better alternative to seed or synthetic data, and it lets you catch more classes of bugs. Combined with anonymization, especially in the world of coding agents, this is an important safety and productivity enabler. - offering an internal PGaaS. The alternative we usually see at customers is a Kubernetes operator, but there's more to a Postgres platform than just the operator. Xata is more opinionated and comes with APIs and a CLI. Technical details: We wanted from the start to offer CoW branching and vanilla Postgres.
This basically meant doing CoW at the storage layer, under Postgres. We tested a bunch of storage systems for performance and reliability and ultimately landed on OpenEBS. OpenEBS is an umbrella project covering multiple storage engines for Kubernetes; the one we use is the replicated storage engine (aka Mayastor). Small side note on separating storage from compute: since the introduction of PlanetScale Metal, there has been a lot of discussion about the performance of local storage. We had these discussions internally as well, and what's nice about OpenEBS is that it actually supports both: there are local storage engines and over-the-network storage engines. For our purpose of running CoW branches, however, the advantages of the separation are pretty clear: it allows spreading the compute across multiple nodes while keeping the storage volumes colocated, which is needed for CoW. So for now the Xata platform is focused on this, but it's entirely possible to run Xata with local storage: it's basically a storage-class change away. Another small side note: while Mayastor is serving us well, and it's what we recommend for OSS installations, we have been working on our own storage engine in parallel (called Xatastor). It is the key to sub-second branching and wake-up times, and we'll release it in a couple of weeks. For the compute layer, we build on top of CloudNativePG. It's a stable and battle-tested operator covering all the production-grade concerns. We did add quite a lot of services around it, though: our custom SQL gateway, a "branch" operator, control plane and authentication services, etc. The end result is what we think is an opinionated but flexible Postgres platform: more high-level and easier to use than a K8s operator, with a lot of batteries-included goodies. Let us know if you have any questions! Comments URL: https://news.ycombinator.com/item?id=47803480 Points: 2 # Comments: 0
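To make the copy-on-write idea concrete: the real mechanism lives in the block-storage layer (OpenEBS/Mayastor), not in Python, but the semantics can be sketched as a page map that shares pages with its parent branch until the first write. Everything here (the `Branch` class and its methods) is an illustrative model, not Xata's actual implementation:

```python
class Branch:
    """Conceptual copy-on-write branch: pages are shared with the
    parent until first write, so creating a branch copies no data."""

    def __init__(self, parent=None):
        self.pages = {}       # pages owned by this branch only
        self.parent = parent  # reads fall through to the parent chain

    def read(self, page_no):
        node = self
        while node is not None:
            if page_no in node.pages:
                return node.pages[page_no]
            node = node.parent
        return b"\x00"  # unallocated page

    def write(self, page_no, data):
        # Copy-on-write: the page is materialized in this branch only,
        # leaving the parent's copy untouched.
        self.pages[page_no] = data

main = Branch()
main.write(0, b"row v1")
dev = Branch(parent=main)  # instant branch: O(1), nothing copied
dev.write(0, b"row v2")    # only dev sees the new version
```

This is why branch creation can be near-instant regardless of database size, and why compute can be spread across nodes as long as the shared storage volumes stay colocated.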
IT之家 reports on April 16 that the Steam multiplayer party game Write Warz will release its 3.0 full version on April 17, ending its Early Access phase and switching from free-to-play to a paid model priced at $9.99 (IT之家 note: roughly 68.3 CNY at current exchange rates). Players can add the game to their library for free for a limited time today; the store page is at https://store.steampowered.com/app/2477650/Write_Warz/ . The developers specifically note that merely wishlisting the game may not be enough to keep it permanently; they recommend launching it and playing at least one round to ensure it remains free to play after the switch to paid. Write Warz is a multiplayer party game built around collaborative story-writing, currently with an English-only interface. Each round, players write a line of story on a given theme, then everyone votes for the best one; the winning line becomes the story's next development, eventually stitching together a complete story full of twists and humor. The game currently offers horror, pirate, fantasy, apocalypse, sci-fi, western, and other themes, each with unique mechanics: horror mode adds jump-scare events to disrupt the pacing, while pirate mode lets players buy items, fire cannonballs at opponents, and win by accumulating gold. Overall it is well suited to get-togethers or livestream interaction. With the 3.0 release, the game also adds a new theme, "Elves vs Samurai": players split into elf and samurai factions and compete through cooperative writing, score accumulation, and duel mechanics, making story creation more adversarial.
Article URL: https://aftersession.ai/ Comments URL: https://news.ycombinator.com/item?id=47767833 Points: 2 # Comments: 1
Article URL: https://github.com/ory/dockertest/tree/v4 Comments URL: https://news.ycombinator.com/item?id=47765234 Points: 3 # Comments: 0
We run superglue, an OSS agentic integration platform. Last week I talked to a founder of another YC startup. She found a use case for our CLI that we hadn't officially launched yet. Her problem: customers wanted to create Opportunities ("Opps") in Salesforce from inside the chat in her app. We kept seeing this pattern: teams build agents, and their users can perfectly describe what they want: "pull these three objects from Salesforce and push to nCino when X condition is true". But translating that into a generalized hard-coded tool the agent can call is a lot of work and does not scale, since the logic is different for every user. What superglue CLI does: you point it at any API, and your agent gets the ability to reason over that API at runtime. No pre-built tools. The agent reads the spec, plans the calls, executes them. The founder using this in production described it like this: she gave the CLI to her agent with an instruction set and told it not to build tools, just run against the API. It handled multi-step Salesforce object creation correctly, including per-user field logic and record type templates. Concretely: instead of writing a createSalesforceOpp tool that handles contact -> account -> Opp creation with all the conditional logic, you write a skill doc and let the agent figure out which endpoints to hit and in what order. The tradeoff: you're giving the agent more autonomy over what API calls it makes, which requires good instructions and some guardrails. But for long-tail, user-specific connectors, it's a lot more practical than building a tool for every case. Happy to discuss. Curious if others have run into the "pre-defined tool" ceiling with MCP-based connectors and how you've worked around it. Docs: https://docs.superglue.cloud/getting-started/cli-skills Repo: https://github.com/superglue-ai/superglue Comments URL: https://news.ycombinator.com/item?id=47762951 Points: 5 # Comments: 3
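The shape of "no pre-built tools, the agent plans the calls" can be sketched as a generic plan executor: the agent emits an ordered list of API steps (here hand-written; in practice produced by the LLM at runtime), and a single primitive executes them, resolving references to earlier results. All names here (`run_plan`, the `$step.field` reference syntax, the fake transport) are illustrative assumptions, not superglue's actual API:

```python
def resolve(value, results):
    # Resolve "$step.field" references to fields of earlier step results.
    if isinstance(value, str) and value.startswith("$"):
        step, field = value[1:].split(".", 1)
        return results[step][field]
    return value

def run_plan(plan, transport):
    """Execute an agent-produced plan: an ordered list of API steps.
    `transport(method, path, body)` performs one HTTP request."""
    results = {}
    for step in plan:
        body = {k: resolve(v, results) for k, v in step["body"].items()}
        results[step["name"]] = transport(step["method"], step["path"], body)
    return results

# A plan the agent might produce for "create an Opp for Acme":
plan = [
    {"name": "account", "method": "POST",
     "path": "/sobjects/Account", "body": {"Name": "Acme"}},
    {"name": "opp", "method": "POST",
     "path": "/sobjects/Opportunity",
     "body": {"Name": "Acme deal", "AccountId": "$account.id"}},
]

calls = []
def fake_transport(method, path, body):
    # Stand-in for a real HTTP client: record the call, fabricate an id.
    calls.append((method, path, body))
    return {"id": f"id-{len(calls)}", **body}

out = run_plan(plan, fake_transport)
```

The per-user conditional logic then lives in the plan the agent produces from the skill doc, not in hard-coded tool code, which is what makes the long tail tractable.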
A clock radio from the 1950s played music and showed the time. I wanted to see what happens when you combine an old idea with a new one — the music itself tells you the time. It's also hilarious. This is a fully automated YouTube live stream. Every few minutes, a new AI-generated song plays. Every song is in a different genre. Every song's lyrics are about what time it is right now. The system generates them ahead of time using Suno's API, builds video segments with FFmpeg, and streams continuously via RTMP. There's no human in the loop and no pre-recorded content. The core challenge is a scheduling problem: generation takes ~3 minutes, song durations are variable (2-4 min), and the lyric timestamp has to match the actual start time. The orchestrator tracks a rolling average of API latency and adjusts its lookahead window accordingly. Stack: Python, Suno API, Pillow, FFmpeg. Comments URL: https://news.ycombinator.com/item?id=47759535 Points: 5 # Comments: 3
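The lookahead logic described (track a rolling average of generation latency and start that far ahead of the next slot) might look something like the sketch below. The class name, window size, and safety margin are my assumptions, not the actual orchestrator:

```python
from collections import deque

class LookaheadScheduler:
    """Decide when to kick off generation so a song whose lyrics name
    time T is actually playing at T. Tracks a rolling mean of latency."""

    def __init__(self, window=10, margin_s=30.0):
        self.latencies = deque(maxlen=window)  # recent generation times
        self.margin_s = margin_s               # fixed safety buffer

    def record(self, latency_s):
        self.latencies.append(latency_s)

    def lookahead_s(self):
        # Start generating this many seconds before the slot: average
        # observed latency plus the margin; fall back to ~3 min when
        # no samples have been collected yet.
        if not self.latencies:
            return 180.0 + self.margin_s
        return sum(self.latencies) / len(self.latencies) + self.margin_s

    def start_time(self, slot_epoch_s):
        return slot_epoch_s - self.lookahead_s()

sched = LookaheadScheduler(window=5)
for latency in (170.0, 190.0, 180.0):
    sched.record(latency)
```

With those three samples the rolling average is 180 s, so generation for a slot would start 210 s ahead of it. The bounded deque means a one-off slow API call stops influencing the schedule after `window` more samples.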
As AI agents autonomously write and deploy code, there's no standard for verifying that what they shipped actually satisfies business requirements. OQP is an attempt to define that standard. It's MCP-compatible and defines four core endpoints: - GET /capabilities — what can this agent verify? - GET /context/workflows — what are the business rules for this workflow? - POST /verification/execute — run a verification workflow - POST /verification/assess-risk — what is the risk of this change? The analogy we keep coming back to: what OpenAPI did for REST APIs, OQP does for agentic software verification. Early contributors include Philip Lew (XBOSoft) and Benjamin Young (W3C JSON-LD Working Group). Looking for feedback from engineers building on top of MCP, agent orchestration frameworks, or anyone who has felt the pain of "the agent shipped something wrong and we had no way to catch it." Repo: github.com/OranproAi/open-qa-protocol Comments URL: https://news.ycombinator.com/item?id=47758801 Points: 4 # Comments: 0
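From a client's perspective, the four endpoints listed above could be wrapped like this. The method/path pairs come from the post; everything else (class name, parameter names, request-body shapes) is an illustrative assumption, with a pluggable transport so nothing is claimed about the actual server:

```python
class OQPClient:
    """Thin client over the four OQP endpoints described in the post.
    `transport(method, path, body)` performs the actual HTTP call."""

    def __init__(self, transport):
        self.transport = transport

    def capabilities(self):
        # GET /capabilities — what can this agent verify?
        return self.transport("GET", "/capabilities", None)

    def workflows(self):
        # GET /context/workflows — business rules for the workflow
        return self.transport("GET", "/context/workflows", None)

    def execute(self, workflow_id, target):
        # POST /verification/execute — run a verification workflow
        return self.transport("POST", "/verification/execute",
                              {"workflow": workflow_id, "target": target})

    def assess_risk(self, change):
        # POST /verification/assess-risk — risk of this change
        return self.transport("POST", "/verification/assess-risk",
                              {"change": change})

# Stub transport that just logs requests, for illustration.
log = []
client = OQPClient(lambda m, p, b: log.append((m, p, b)) or {"ok": True})
client.capabilities()
client.execute("checkout-flow", "deploy-123")
```

Keeping the client this thin is the point of a protocol-first design: any MCP-compatible agent or orchestrator can target the same four endpoints without knowing the verifier's internals.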
Grateful is a gratitude app with a simple social layer. You write a short entry, keep it private or share it to a circle. A circle is a small private group of your own making — family, close friends, whoever you'd actually want to hear from. It shows you the most recent post first. People in the circle can react or leave a comment. There's also a daily notification that sends you something you were grateful for in the past. Try it out on both iOS and Android. Go to grateful.so Comments URL: https://news.ycombinator.com/item?id=47745300 Points: 5 # Comments: 2
Hi HN, I'm Antoni. I built SpecSource to automate the tedious research you do for every new Linear ticket - matching with Sentry errors, finding the relevant code, searching Slack for context, checking for duplicates in Linear. You connect your ticketing system like Linear to tools like Sentry, GitHub, Slack, and when a new ticket comes in, an AI agent cross-references everything and writes a structured specification. Takes about 30 seconds. The output works well as input for AI coding agents too. Free tier: 100 credits/month (enough for ~20 tickets in my experience). Pro: $10/month for 1,000 credits. Happy to answer questions about the approach or architecture :) Comments URL: https://news.ycombinator.com/item?id=47739077 Points: 2 # Comments: 2