Hello, this is my work where you can move lights in 3D inside a photograph. The project page has some cool interactive demos to play with. Do check them out and let me know what you think! Comments URL: https://news.ycombinator.com/item?id=47815784 Points: 1 # Comments: 0
Compiled from a Doubao expert's notes. A general-purpose English prompt: transform this anime character into a realistic human, maintain exact hairstyle, hair color, eye color, facial proportions and expression, natural skin pores, detailed eyes and lips, even studio lighting, clear facial contours. For multiple images, change "this" to "these", add an "s" after "character", and delete the "a" after "into"; otherwise silly Doubao may just merge all the images and generate a single picture () Take a look at Doubao 5.0's results for yourself. I don't really like them, but some AI avatar services do require a realistic human face, and copying anime head proportions onto a real face still looks pretty weird. 3 posts - 3 participants Read full topic
My cpa deployed on Render is hitting "Access blocked by Cloudflare. This usually happens when connecting from a restricted region (status 403 Forbidden)", url: https://www.xxxx.top/v1/responses , cf-ray: 9ee0d216d8856e64-HKG. Using Codex on my local machine now throws this error; I've tried switching networks and switching nodes, but it still fails, even though visiting the page directly in a browser works fine. On a different machine, however, it connects normally. I suspect Render's Cloudflare banned me via TLS fingerprinting. Has anyone run into this and found a fix? 1 post - 1 participant Read full topic
Google – 17 Apr 26 7 ways to travel smarter this summer, with help from Google The latest tools from Google can help you plan trips, find a great deal and explore your next destination. Quote: You can already track hotel prices at the city level; starting today, you can also track the price of an individual hotel. To get started on desktop, go to Search and look up a specific hotel by name, then tap the new price-tracking toggle. On mobile, you'll find the price-tracking option under the "Prices" tab after searching. Either way, if the rate changes significantly for your selected dates, you'll get an email alert, so you can catch a price drop and grab a great deal. Sampling local food is always a vacation highlight, but getting a table at the right restaurant can be a real challenge. With AI Mode and agentic "Ask Maps" features, you can focus on the food: just describe your needs and preferences, such as booking a table for five next Saturday at a restaurant with Cuban food and live music. Thought of something you need for your trip but have no time to order it online? Google can call local stores on your behalf to check who has the specific item you're looking for, including any relevant deals. This tool launched directly in Search last November and will roll out to AI Mode in the US in the coming weeks. TechCrunch – 17 Apr 26 Google's AI Mode can now help you find products in stock nearby | TechCrunch Although you can already track hotel prices at the city level, the new update lets you do so for a specific hotel that you're interested in. https://9to5google.com/2026/04/17/google-individual-hotel-price-tracking/ 1 post - 1 participant Read full topic
I built this to run OpenClaw safely. The problem: every sandbox I tried still handed the real API token to the agent as an env var. nilbox never gives the agent the real token. It gets a fake placeholder instead (ANTHROPIC_API_KEY=ANTHROPIC_API_KEY). nilbox intercepts outbound API calls and swaps in the real token at the network layer. So if the agent leaks the "token" — attacker gets a useless string. That's it. Also ships a managed Linux runtime (consistent across mac/win/linux) and a Store for one-click agent app installs. Full shell access too. Available for macOS, Windows, and Linux https://nilbox.run Curious how others are thinking about token security when running agents locally. Comments URL: https://news.ycombinator.com/item?id=47812193 Points: 3 # Comments: 0
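The placeholder-swap idea above can be sketched in a few lines. This is not nilbox's actual code, just a minimal illustration of the mechanism, assuming a hypothetical `swap_auth_header` hook sitting at the network boundary:

```python
# Hypothetical sketch of nilbox's token swap: the agent only ever sees a
# placeholder, and an interception layer rewrites outbound request headers.
# PLACEHOLDER, REAL_TOKEN, and swap_auth_header are illustrative names,
# not nilbox's actual API.

PLACEHOLDER = "ANTHROPIC_API_KEY"   # what the agent's env var holds
REAL_TOKEN = "sk-ant-real-secret"   # held only by the interception layer

def swap_auth_header(headers: dict) -> dict:
    """Rewrite outbound headers, replacing the placeholder with the real token."""
    fixed = dict(headers)
    for key, value in fixed.items():
        if PLACEHOLDER in value:
            fixed[key] = value.replace(PLACEHOLDER, REAL_TOKEN)
    return fixed

# The agent builds a request with the useless placeholder...
agent_headers = {"x-api-key": "ANTHROPIC_API_KEY"}
# ...and the network layer silently fixes it on the way out.
outbound = swap_auth_header(agent_headers)
print(outbound["x-api-key"])  # the real token, never visible to the agent
```

The point of the design is that anything the agent can read, log, or leak only ever contains the placeholder string.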
Built this around one simple idea: the workflow that wants to execute should not be the same place that decides whether execution may continue. This project puts an external allow/deny boundary before action. Public entry points: * live pilot * commercial request * private deployment There is also a GitHub Marketplace action install surface, but the main point is the boundary model itself: decision stays outside the workflow that is asking to proceed. Looking for feedback from people working on CI/CD, security controls, approval boundaries, and automated execution. Comments URL: https://news.ycombinator.com/item?id=47811161 Points: 2 # Comments: 0
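The boundary model described above can be reduced to a toy example: the workflow never decides for itself, it asks an external policy first. The policy table and function names here are illustrative assumptions, not the project's API:

```python
# Minimal sketch of an external allow/deny boundary: the decision data lives
# outside the workflow that is asking to proceed. Default-deny for anything
# the policy does not explicitly allow.

POLICY = {"deploy:staging": "allow", "deploy:prod": "deny"}  # external to the workflow

def boundary_decision(action: str) -> bool:
    """External allow/deny check; unknown actions are denied."""
    return POLICY.get(action, "deny") == "allow"

def run_workflow(action: str) -> str:
    # The workflow can only ask; it cannot grant itself permission.
    if not boundary_decision(action):
        return f"blocked: {action}"
    return f"executed: {action}"

print(run_workflow("deploy:staging"))  # executed: deploy:staging
print(run_workflow("deploy:prod"))     # blocked: deploy:prod
```

In the real system the policy would live behind a network boundary rather than in-process, which is what makes the separation meaningful.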
I built this tool a while back when I accidentally deleted thousands of my PDFs. I found the existing ext4magick and similar solutions cumbersome and complicated to use, and wanted something similar that just did PDFs. As a bonus, because it only handles PDF documents, the pattern recognition is super simple, allowing the program to scan through a disk at high speed, close to your disk's maximum read speed. Hope people find it useful. Mirror: https://github.com/seanhly/recover-pdfs Comments URL: https://news.ycombinator.com/item?id=47810848 Points: 3 # Comments: 0
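To see why PDF-only pattern recognition is so simple, here is a rough sketch of the carving idea: scan raw bytes for the `%PDF-` header and `%%EOF` trailer and keep the span in between. This is my own illustration on an in-memory buffer, not the tool's code (real recovery reads the block device and handles many edge cases):

```python
# Toy PDF carver: find candidate PDF byte spans in a raw buffer by matching
# the %PDF- header and the next %%EOF trailer. Illustrative only.

def carve_pdfs(raw: bytes) -> list:
    """Return candidate PDF byte spans found in a raw buffer."""
    pdfs = []
    start = raw.find(b"%PDF-")
    while start != -1:
        end = raw.find(b"%%EOF", start)
        if end == -1:
            break  # header with no trailer: likely a truncated file
        pdfs.append(raw[start:end + len(b"%%EOF")])
        start = raw.find(b"%PDF-", end)
    return pdfs

disk_image = b"junk...%PDF-1.7 hello %%EOF...more junk...%PDF-1.4 x %%EOF..."
found = carve_pdfs(disk_image)
print(len(found))  # 2
```

Because the matcher only looks for two fixed byte strings, the scan is I/O-bound rather than CPU-bound, which is where the "maximum read speed" claim comes from.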
API Error: Claude Code is unable to respond to this request, which appears to violate our Usage Policy ( https://www.anthropic.com/legal/aup ). This request triggered restrictions on violative cyber content and was blocked under Anthropic’s Usage Policy. To request an adjustment pursuant to our Cyber Verification Program based on how you use Claude, fill out https://claude.com/form/cyber-use-case?token=pGYLMnOMNAll0qGatSemsGMPVwg44IccTgchmVGPeug9rBoG__W2Q2mEoSACBRJLuROgXsCiVNOABgiMSAL80i6He8YsoeSWPjxHceD3C-h_V3K7Talb30XqJaqcEv1fw2XsFTxGwD… Try rephrasing the request or attempting a different approach. If you are seeing this refusal repeatedly, try running /model claude-sonnet-4-20250514 to switch models. Why does this error occur? 4 posts - 4 participants Read full topic
I made this tool that allows server admins to decrypt passwords during bootup with the help of Matrix and HTTP. Comments URL: https://news.ycombinator.com/item?id=47809368 Points: 2 # Comments: 0
Built this code art project as a commentary on the recent wave of using LOC as a primary measure of engineering productivity. It's now up to 350M lines of useless (but hopefully funny) code, constantly growing using GitHub Actions and https://github.com/jshchnz/codemaxxing Unfortunately, GitHub has now sent me a notice to remove the repo within the next 14 days, so enjoy it while you can. Comments URL: https://news.ycombinator.com/item?id=47808458 Points: 1 # Comments: 0
With Rec Room studio closing June 1, we put this site together to help this wonderful team find what's next. If you're hiring, you're in for a treat. Comments URL: https://news.ycombinator.com/item?id=47806413 Points: 1 # Comments: 0
Hi HN, This is Tudor from Xata. You can think of Xata as an open-source, self-hosted alternative to Aurora/Neon. Highlight features are: - Fast copy-on-write branching. - Automatic scale-to-zero and wake-up on new connections. - 100% vanilla Postgres. We run upstream Postgres, no modifications. - Production grade: high availability, read replicas, automatic failover/switchover, upgrades, backups with PITR, IP filtering, etc. You can self-host it, or you can use our [cloud service]( https://xata.io ). Background story: we've existed as a company for almost 5 years, offered a Postgres service from the start, and have launched several different products and open-source projects here on HN before, including pgroll and pgstream. About a year and a half ago, we started to rearchitect our core platform from scratch. It has been running in production for almost a year now, serving customers of all sizes, including many multi-TB databases. One of our goals in designing the new platform was to make it cloud independent, with a careful selection of dependencies. Part of the reason was to be able to offer it in any cloud; the other part is the subject of today's announcement: we wanted it to be open source and self-hostable. Use cases: We think Xata OSS is appropriate for two use cases: - quickly spin up preview / testing / dev / ephemeral environments with realistic data. We think for many companies this is a better alternative to seed or synthetic data, and it allows you to catch more classes of bugs. Combined with anonymization, especially in the world of coding agents, this is an important safety and productivity enabler. - offer an internal PGaaS. The alternative we usually see at customers is a Kubernetes operator, but there's more to a Postgres platform than just the operator. Xata is more opinionated and comes with APIs and a CLI. Technical details: We wanted from the start to offer CoW branching and vanilla Postgres.
This basically meant doing CoW at the storage layer, under Postgres. We tested a bunch of storage systems for performance and reliability and ultimately landed on OpenEBS. OpenEBS is an umbrella project for multiple Kubernetes storage engines, and the one we use is the replicated storage engine (aka Mayastor). Small side note on separating storage from compute: since the introduction of PlanetScale Metal, there has been a lot of discussion about the performance of local storage. We had these discussions internally as well, and what's nice about OpenEBS is that it supports both: there are local storage engines and over-the-network storage engines. For our purpose of running CoW branches, however, the advantages of the separation are pretty clear: it allows spreading the compute across multiple nodes while keeping the storage volumes colocated, which is needed for CoW. So for now the Xata platform is focused on this, but it's entirely possible to run Xata with local storage: it's basically a storage-class change away. Another small side note: while Mayastor is serving us well, and it's what we recommend for OSS installations, we have been working on our own storage engine in parallel (called Xatastor). It is the key to sub-second branching and wake-up times, and we'll release it in a couple of weeks. For the compute layer, we are building on top of CloudNativePG. It's a stable and battle-tested operator covering all the production-grade concerns. We did add quite a lot of services around it, though: our custom SQL gateway, a "branch" operator, control plane and authentication services, etc. The end result is what we think is an opinionated but flexible Postgres platform: more high-level and easier to use than a K8s operator, with a lot of batteries-included goodies. Let us know if you have any questions! Comments URL: https://news.ycombinator.com/item?id=47803480 Points: 2 # Comments: 0
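For readers unfamiliar with copy-on-write semantics, the branching behavior Xata describes (not its implementation, which happens at the block-storage layer under Postgres) looks roughly like this toy model:

```python
# Toy illustration of copy-on-write branching: a branch shares the parent's
# data until it writes, and its writes never touch the parent. Xata does
# this with storage-layer snapshots, not Python dicts.
from collections import ChainMap

parent = {"users": 1000, "orders": 5000}  # the "production" database
branch = ChainMap({}, parent)             # new empty layer on top; no data copied

print(branch["users"])  # 1000 -- read falls through to the parent

branch["users"] = 1001  # write lands only in the branch's own layer
print(branch["users"])  # 1001
print(parent["users"])  # 1000 -- parent is untouched
```

This is why CoW branches are cheap to create regardless of database size: creating one is a metadata operation, and storage is only consumed for the blocks the branch actually changes.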
I’ve been working on this tool on and off for a while, and I recently got it back into a shape that feels ready to share. It’s an open-source cost attribution tool for Kafka systems that supports both identity-level and topic-level attribution. The original version focused only on identity-level attribution using Confluent Cloud API and metrics data. In v2, I expanded it to support topic-level attribution as well. The backend is Python/FastAPI, and the frontend is React. Compared to v1, which was fully in-memory and depended on Prometheus and Grafana for retention and visualization, v2 adds SQLite-backed persistence, an evolving REST API, and a plugin-based architecture for extending to other ecosystems. No signups, free to use, and fully open source. I’d appreciate feedback, especially from people who deal with Kafka platform ownership, multi-tenancy, or chargeback/showback problems. I’m actively working on it, and I’d be glad to hear about bugs, gaps, or features that would make it more useful. Comments URL: https://news.ycombinator.com/item?id=47796822 Points: 1 # Comments: 0
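The chargeback/showback core can be illustrated with a minimal sketch: attribute a cluster's total cost to identities in proportion to the bytes they moved. The numbers and field names are my own illustration, not the tool's actual cost model:

```python
# Minimal proportional cost attribution: split total_cost across identities
# by their share of total byte usage. Illustrative only.

def attribute_cost(total_cost: float, bytes_by_identity: dict) -> dict:
    """Split total_cost across identities proportionally to bytes used."""
    total_bytes = sum(bytes_by_identity.values())
    return {
        identity: round(total_cost * used / total_bytes, 2)
        for identity, used in bytes_by_identity.items()
    }

usage = {"team-checkout": 600_000, "team-search": 300_000, "team-ml": 100_000}
print(attribute_cost(1000.0, usage))
# {'team-checkout': 600.0, 'team-search': 300.0, 'team-ml': 100.0}
```

Topic-level attribution is the same idea with topics as the allocation key instead of identities; the hard part in practice is collecting reliable per-identity byte counts, which is what the metrics integration is for.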
Hey HN, In this age of agentic coding I've found myself spending a lot of time reviewing markdown files. Whether it's plans or documentation that I've asked my agent to generate for me, it seems that I spend more time reading markdown than code. I've tried a few different solutions to make it easier to read, such as Obsidian, but I've found its Vault system quite limiting for this use case, and TUI solutions not as friendly to read as I wanted, so I made Marky. Marky is a lightweight desktop application that makes it incredibly easy to read and track your markdown files. It also has a helpful CLI, so you can just run marky FILENAME and have the app open the md file you pointed it at. I've been using it daily over the past week and I really enjoy it, so I figured I'd share it. I have plans to add more features, such as incorporating agentic tools like claude code and codex into the UI, as well as a local git diff reviewer so I can do local code review before pushing up to git. I'd love to hear your thoughts and any feature suggestions you may have :) Comments URL: https://news.ycombinator.com/item?id=47795468 Points: 2 # Comments: 0
Dear diary, this is my story: I'd been sharing MCP configs with other devs at work a lot - templates in shared repos, credentials in Bitwarden, everyone cowboying their own env vars. That's a lot of manual wiring and no real control, so a problem statement was already forming in my mind. Then three weeks ago I was putting my kids to sleep and reading about Jensen Huang saying every company will run 100 agents per employee, and the math started mathing. That evening I kept thinking about what agents actually need to operate in the real world and eventually landed on the same answer as every spy movie ever: basically, a passport suitable for the mission and clever drop-off locations. So I built STACK. True story. - The passport: a signed JWT (EdDSA) that proves which agent is acting, who authorized it, and what it's allowed to do. Works offline - any service can verify it without calling STACK. Agents can delegate to sub-agents, but the scope only ever narrows. Max 4 hops. - The drop-off: an encrypted handoff between agents. Agent A drops off a package with a JSON schema contract, encryption at rest, and a TTL. Agent B collects it, custody transfers, and the payload gets deleted. Neither agent needs to trust the other. Just like in the movies! All credentials are KMS-encrypted. In proxy mode they are injected at the network boundary, so agents can make API calls through STACK without ever seeing the raw key. To try it, sign up at https://getstack.run , grab your API key, and connect: claude mcp add stack --transport http https://mcp.getstack.run/mcp --header "Authorization: Bearer YOUR_API_KEY" I want to provide a generous free tier and I hope people get value out of it. Keycard ($38M, a16z) does scoped agent credentials, Descope ($88M) does auth flows, Composio ($29M) does tool integrations. I'm a solo founder in Stockholm without funding, but I'm betting the full control plane is where the market is heading.
I may be naive about that, but that's the bet. I like betting. Docs at https://getstack.run/docs . Comments URL: https://news.ycombinator.com/item?id=47795391 Points: 2 # Comments: 0
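The delegation rule ("scope only ever narrows, max 4 hops") can be sketched without the JWT machinery. This models just the claims logic; the real passport is a signed EdDSA JWT, and all names here are my illustration, not STACK's API:

```python
# Sketch of narrowing delegation: a child passport's scopes are the set
# intersection of the parent's scopes and what was requested, so scope can
# only shrink, and delegation depth is hard-capped.

MAX_HOPS = 4

def delegate(parent: dict, requested_scopes: set) -> dict:
    """Issue a narrowed child passport from a parent passport."""
    if parent["hops"] >= MAX_HOPS:
        raise ValueError("delegation depth exceeded")
    return {
        "scopes": parent["scopes"] & requested_scopes,  # can only narrow
        "hops": parent["hops"] + 1,
    }

root = {"scopes": {"read:email", "send:email", "read:calendar"}, "hops": 0}
child = delegate(root, {"read:email", "delete:email"})  # delete:email never granted
print(sorted(child["scopes"]))  # ['read:email']
```

The intersection is what makes the property compositional: no matter how sub-agents chain, no passport can ever hold a scope its ancestor lacked.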
Article URL: https://duodegen.vercel.app Comments URL: https://news.ycombinator.com/item?id=47791344 Points: 1 # Comments: 0
Hey HN, I originally built this tiny project a few years ago but ended up abandoning it. I recently revived and finished it to solve a very specific, recurring annoyance I had with my usual workflow. Whenever I turn on my VPN, I always want a quick visual confirmation that it’s actually routing correctly before I start browsing, usually by just checking what city/country I appear to be in. The frustration was that whenever I googled "what is my ip" and clicked a random result, half the sites were bloated with ads, required scrolling past walls of SEO text, or bizarrely buried the actual location data (or didn't show it at all). So I revived https://whereismyip.info/ It does two things on load: Shows your IP and exact location front and center. Checks the ASN and automatically flags if you are connecting through a known datacenter or commercial VPN. (Also, since I live in the terminal, I added a JSON route: `curl https://whereismyip.info `). It’s obviously free, and the source is just a lightweight Flask backend with a vanilla HTML/CSS/JS frontend (using Leaflet for the map), all routed through Caddy. I'd love to know what you think, or whether your VPN's ASN bypasses the detection (I know some will, the list is effectively infinite :D) Comments URL: https://news.ycombinator.com/item?id=47790924 Points: 1 # Comments: 0
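The datacenter/VPN flag boils down to an ASN lookup against a known list. A toy version, with a hardcoded and illustrative ASN table standing in for the real IP-to-ASN database the site would use:

```python
# Toy datacenter/VPN classifier: map the connection's ASN against a list of
# known datacenter operators. The table below is a tiny illustrative sample.

DATACENTER_ASNS = {16509: "Amazon", 14061: "DigitalOcean", 13335: "Cloudflare"}

def classify(asn: int) -> str:
    org = DATACENTER_ASNS.get(asn)
    return f"likely datacenter/VPN ({org})" if org else "looks residential"

print(classify(14061))  # likely datacenter/VPN (DigitalOcean)
print(classify(7922))   # looks residential
```

This is also why the detection can never be complete: the flag is only as good as the ASN list, and new hosting providers appear constantly.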
Built this as a simple tool for estimating roof area, materials, and replacement cost without having to request a quote first. A lot of homeowners have no good way to sanity-check roofing costs early on, and contractors often still do the first pass manually. Roofing Calculator is meant to be a lightweight starting point: enter a few inputs, get a rough estimate, and understand project scope before talking to anyone. Interested in feedback from people in construction, home services, or anyone who has dealt with roof replacement before. Comments URL: https://news.ycombinator.com/item?id=47790556 Points: 1 # Comments: 0
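For a sense of what such a rough estimate involves, here is a back-of-envelope sketch with assumed formulas: roof area is the footprint inflated by a pitch multiplier, and cost is area times a unit price. All constants and formulas here are my illustration, not the calculator's actual model:

```python
# Back-of-envelope roof estimate: a 6/12 pitch means 6 units of rise per
# 12 units of run, so slope length per unit run is sqrt(1 + (6/12)^2).
import math

def estimate(footprint_sqft: float, pitch_rise_per_12: float, price_per_sqft: float) -> dict:
    # pitch factor: slope length per unit of horizontal run
    pitch_factor = math.sqrt(1 + (pitch_rise_per_12 / 12) ** 2)
    area = footprint_sqft * pitch_factor
    return {"roof_area_sqft": round(area), "est_cost": round(area * price_per_sqft)}

# 1500 sq ft footprint, 6/12 pitch, $5.50/sq ft installed (illustrative price)
print(estimate(1500, 6, 5.50))
```

Real quotes also account for waste factor, tear-off, flashing, and regional labor rates, which is why a tool like this is a sanity check rather than a bid.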
Hi HN, I’ve been working on a blockchain design that takes a different approach to the 51% attack problem. Instead of making attacks instantly expensive through hardware or capital (as in traditional systems), it makes them structurally slow to execute, which in turn makes them costly and difficult to sustain over time. The core idea is simple: (1) Each node/identity is limited to ~1 hash/sec in the Proof-of-Work process, regardless of hardware (2) Remove parallel mining advantages (no work is sharable) (3) Shift influence toward sustained persistent participation rather than raw hashpower (4) Make generating sybil identities (in bulk) slow, costly, and difficult. I’ve built a browser-based MVP demonstrating the capped PoW model (as a reference implementation): https://grahambell.io/mvp/Proof_of_Witness.html There’s also a short demo here (Capped PoW) : https://youtu.be/i5gzzqFXXUk?si=y_Pv5ZDv9SGhRFjY Website: https://grahambell.io/ Curious whether the approach sounds interesting, or immediately broken to you. Peace Comments URL: https://news.ycombinator.com/item?id=47790455 Points: 2 # Comments: 0
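The ~1 hash/sec cap in point (1) can be sketched as a rate limit on PoW attempts. This is my simplification of the idea, not the MVP's code; in particular, a real design cannot trust the client's clock and would have to make the work itself inherently sequential and identity-bound:

```python
# Sketch of capped PoW: a node must wait at least one second between hash
# attempts, so extra hardware buys nothing. Timestamps are passed in
# explicitly so the rule is visible (and testable) without sleeping.
import hashlib

class CappedMiner:
    MIN_INTERVAL = 1.0  # seconds between attempts, regardless of hardware

    def __init__(self):
        self.last_attempt = float("-inf")

    def try_hash(self, data: bytes, now: float):
        """Return a hash if the attempt is allowed, or None if too fast."""
        if now - self.last_attempt < self.MIN_INTERVAL:
            return None  # rejected: exceeds 1 hash/sec
        self.last_attempt = now
        return hashlib.sha256(data).hexdigest()

miner = CappedMiner()
print(miner.try_hash(b"block", now=0.0) is not None)  # True: first attempt allowed
print(miner.try_hash(b"block", now=0.5))              # None: too fast, rejected
print(miner.try_hash(b"block", now=1.2) is not None)  # True: allowed again
```

Under this cap, accumulating influence requires wall-clock time per identity, which is the "structurally slow to execute" property the post describes.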
I haven't seen anything like this so I decided to build it in a weekend. How it works: You see a bunch of things pulled from Wikipedia displayed on cards. You ask yes or no questions to figure out which card is the secret article. The AI model has access to the image, the wiki text, and its own knowledge to answer your question. Happy to have my credits burned for the day, but I'll probably have to make this paid at some point, so enjoy. I found it's not easy to get cheap+fast+good responses, but the tech is getting there. Most of the prompts are running through Groq infra or hitting a cache keyed by a normalization of the prompt. Comments URL: https://news.ycombinator.com/item?id=47787081 Points: 5 # Comments: 4
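The "cache keyed by a normalization of the prompt" trick can be sketched like this. The normalization rules here are my guess at the idea (lowercase, strip punctuation), not the game's actual code:

```python
# Sketch of prompt-normalized caching: trivially different phrasings of the
# same question map to one cache key, so only the first phrasing pays for a
# model call.
import re

cache = {}

def normalize(prompt: str) -> str:
    return re.sub(r"[^a-z0-9 ]", "", prompt.lower()).strip()

def ask(prompt: str, model) -> str:
    key = normalize(prompt)
    if key not in cache:
        cache[key] = model(prompt)  # only pay for the model on a cache miss
    return cache[key]

calls = []
fake_model = lambda p: (calls.append(p), "yes")[1]  # stand-in for the real LLM call
print(ask("Is it an animal?", fake_model))   # yes
print(ask("is it an ANIMAL??", fake_model))  # yes (cache hit, no model call)
print(len(calls))  # 1
```

For a yes/no guessing game this works well because players converge on a small set of common questions per card, so the hit rate climbs quickly.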