智能助手网
Tag aggregation: an

/tag/an

www.ithome.com · 2026-04-18 22:25:08+08:00 · tech

IT Home reported on April 18 that Lenovo's official website published details of the ThinkPad T16 Gen 5 (Intel) on April 10. The machine is already on pre-sale or general sale overseas, with the Chinese-market version expected shortly; the Hong Kong version (Ultra 5 332 vPro) is listed at HK$14,990 (roughly RMB 13,083 at current exchange rates). The ThinkPad T16 Gen 5 is the first in the line to adopt Intel's Panther Lake platform together with LPCAMM2 memory, making it one of the few machines on the market that combines a high-performance integrated GPU with user-replaceable memory. The model debuted at Mobile World Congress (MWC) 2026, where Lenovo said only that it would launch first in Europe in April, starting at around EUR 1,500 (IT Home note: roughly RMB 12,069 at current rates). Its biggest upgrade is Intel's latest third-generation Core Ultra processor (codename Panther Lake), configurable up to a 16-core CPU (4 performance cores, 8 efficient cores, and 4 low-power efficient cores) with an integrated Arc B390 GPU carrying 12 Xe3 cores; NPU throughput reaches up to 50 TOPS, and total platform AI throughput comfortably exceeds 100 TOPS. Lenovo also highlights an upgraded cooling system: the Panther Lake CPU's TDP is set at 30 W, above the base TDP of older T-series models, supporting sustained high performance. The machine uses the new pluggable LPCAMM2 memory standard, supporting up to 64 GB of LPDDR5X-8533, retaining low-power characteristics while giving users the freedom to upgrade themselves and breaking the old business-laptop limitation of soldered, non-upgradable memory. The design keeps the classic business look, upgrades the A-cover to aluminum alloy, and adopts a wider bar hinge that allows one-handed opening. Display options include a 16-inch WUXGA (1920x1200) IPS panel (400 nits) and a 2.8K (2880x1800) OLED panel with a 120 Hz variable refresh rate and DisplayHDR True Black 600; a privacy-screen variant remains available. For ports, the left side carries 2x Thunderbolt 4, 2x USB-A 5 Gbps (one supporting powered-off charging), HDMI 2.1, and an RJ45 gigabit Ethernet jack; the right side keeps a 3.5 mm headset jack, an optional Nano-SIM slot, and a card reader. Elsewhere, the battery grows to 75 Wh, and the machine ships with a dTPM 2.0 security chip, a fingerprint power button, and an IR camera (Windows Hello), plus computer-vision-based Human Presence Detection. Some configurations support the Intel vPro Enterprise platform for remote IT management and maintenance. The laptop also makes heavy use of recycled materials (PCC): the keyboard bracket is 90% recycled magnesium alloy and the keycaps 85% recycled plastic, with EPEAT Gold and TCO 10 certifications. Repairability is further strengthened: the internal battery pops out after releasing two latches, and the USB-C charging port is modular and user-replaceable, cutting maintenance costs and extending the device's service life.

www.ithome.com · 2026-04-18 22:07:29+08:00 · tech

IT Home reported on April 18 that security research firm OX Security disclosed this week (April 15) that MCP (IT Home note: Model Context Protocol), the industry-standard AI communication protocol created and maintained by Anthropic, has a design flaw that can lead a server to be induced into arbitrary remote code execution (RCE). According to the report, the vulnerability is not an ordinary oversight but an architecture-level design flaw present in the official MCP SDKs. It affects every supported language, including Python, TypeScript, Java, and Rust, which effectively means any project built on MCP carries the risk. The researchers identified four main attack paths (unauthenticated UI injection, security-hardening bypass, prompt injection, and malicious plugin distribution) and successfully exploited the flaw in several real production environments. The firm has so far found critical vulnerabilities in mainstream projects including LiteLLM, LangChain, and IBM LangFlow; 10 CVE identifiers have been assigned, all rated "critical", with more still being issued. The team says it contacted Anthropic repeatedly in hopes of getting the flaw fixed, but the company declined to change the architecture, calling the behavior "intended design". The team then informed the company it would publish its findings; Anthropic raised no objection. The researchers advise that users never expose LLMs or AI tooling to the public internet, treat all MCP input as untrusted data to guard against prompt injection, run services in sandboxed environments, keep software up to date, and lock down permissions.
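The mitigation advice above (treat MCP input as untrusted data) can be sketched as a small guard layer. This is a minimal illustration, not part of the MCP SDK; the function and pattern list are assumptions for the example:

```python
import re

# Illustrative guard for untrusted tool output (NOT an MCP SDK API):
# wrap the text in clear delimiters and redact sequences that look like
# injected instructions before the text reaches a model prompt.
SUSPICIOUS = re.compile(r"(?i)(ignore (all )?previous instructions|system:)")

def wrap_untrusted(text: str, max_len: int = 4000) -> str:
    text = text[:max_len]  # cap length so one tool cannot flood the prompt
    if SUSPICIOUS.search(text):
        # Redact rather than silently pass instruction-like payloads.
        text = "[POSSIBLE PROMPT INJECTION REDACTED]"
    return f"<untrusted-tool-output>\n{text}\n</untrusted-tool-output>"
```

A real deployment would combine this with sandboxing and least-privilege permissions, as the researchers recommend; pattern matching alone is not a complete defense.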

hnrss.org · 2026-04-18 21:10:24+08:00 · tech

Readox is a Chrome extension that reads web pages, PDFs, selections, text files (e.g. Markdown), and typed-in notes aloud with TTS. You can just play the current page, or save things to a library and queue them up. It highlights as it reads, keeps your place across items, and can OCR scanned PDFs. TTS (for text and PDF) and OCR run on-device, and it works offline after setup. I'm interested in feedback on the library collections as "playlist-like" functionality, or whether most people just want a play button for the current page. Also interested in anything that feels missing or awkward. Thanks! Comments URL: https://news.ycombinator.com/item?id=47815638 Points: 2 # Comments: 0

linux.do · 2026-04-18 21:09:24+08:00 · tech

new_api_panic: Panic detected, error: runtime error: invalid memory address or nil pointer dereference. Please submit an issue here: GitHub - QuantumNous/new-api: A unified AI model hub for aggregation & distribution. It supports cross-converting various LLMs into OpenAI-compatible, Claude-compatible, or Gemini-compatible formats. A centralized gateway for personal and enterprise model management. | Upstream: {"error":{"message":"Panic detected, error: runtime error: invalid memory address or nil pointer dereference. Please submit a issue here: https://github.com/Calcium-Ion/new-api","type":"new_api_panic"}} 2 posts - 2 participants · Read full topic

linux.do · 2026-04-18 20:58:44+08:00 · tech

At a critical stage of its IPO push, OpenAI lost three core executives in a single day: Sora lead Bill Peebles, VP of AI for Science Kevin Weil, and enterprise-applications CTO Srinivas Narayanan have all publicly announced their departures. The personnel changes come alongside a business contraction: the Sora project has been shut down and its team redirected, the AI science tooling and research team has been split up and folded into internal projects, and the overall strategic focus has shifted to coding models (Codex) and the core product line. Teams tied to GPT-4o had already seen turnover, signaling a large-scale change of direction. Meanwhile, management turbulence is intensifying: several senior leaders, including the head of AGI deployment, the COO, and the CMO, have recently been on leave or reassigned, and several infrastructure project leads have left together for outside startups, raising concern about organizational stability. On governance, according to the WSJ, some shareholders are voicing concern about potential conflicts of interest between CEO Sam Altman's decisions and his personal investments. The disputes involve fusion company Helion, brain-computer-interface company Merge Labs, and space company Stoke Space, investments seen as straying from the company's core business. People familiar with the matter say some shareholders have privately discussed the possibility of board chair Bret Taylor replacing Altman as CEO. With OpenAI sprinting toward an IPO at a valuation of roughly $850 billion, uncertainty over its internal power structure and strategic direction is becoming a focus for capital markets. Notably, this is not the first leadership crisis: in 2023 Altman was briefly ousted by the board before being quickly reinstated, and the market sees the current upheaval as governance tensions resurfacing. wallstreetcn.com: "The father of Sora walks! OpenAI loses three executives in one day as investors quietly plot to replace Altman." On the eve of the IPO, OpenAI lost three core executives in a single day: Sora's creator, the AI science VP, and the enterprise CTO resigned publicly on the same day, bringing the Sora project to a close. Meanwhile, the Wall Street Journal revealed that Altman repeatedly pushed personal investments in fusion, brain-computer interfaces, and more into company decisions, and some shareholders have privately discussed replacing him with board chair Bret Taylor. 4 posts - 3 participants · Read full topic

linux.do · 2026-04-18 20:40:28+08:00 · tech

Because the any router has some parameter-compatibility issues with Claude Code, we can stand up a simple local gateway that intercepts requests, patches the parameters locally, and forwards them on to any, working around those small compatibility problems. Works with the latest Claude Code version; no rollback needed.

What this local gateway does:
- Listens locally on 127.0.0.1:1998.
- Forwards Claude Code's requests to the upstream any endpoint.
- Fills in auth headers automatically (Authorization / x-api-key).
- Extra fix-up for haiku requests: appends context-1m-2025-08-07 and adds thinking.budget_tokens=1024.
- Writes requests and responses to gateway_requests.jsonl for easier debugging.

Minimal startup steps. Start the gateway first:

export ANTHROPIC_BASE_URL="<any upstream address>"
export ANTHROPIC_AUTH_TOKEN="<your real token>"
python3 /Users/Apple/Desktop/code/claude_gateway.py

Then open a new terminal and start Claude Code:

ANTHROPIC_BASE_URL="http://127.0.0.1:1998" claude --enable-auto-mode

Screenshot as proof: [screenshot omitted]

Gateway code below (vibe-coded, plenty of redundancy; feel free to clean it up):

```python
#!/usr/bin/env python3
import base64
import json
import os
import threading
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.error import HTTPError, URLError
from urllib.parse import urlsplit, urlunsplit
from urllib.request import Request, urlopen

LISTEN_HOST = os.getenv("CLAUDE_GATEWAY_HOST", "127.0.0.1")
LISTEN_PORT = int(os.getenv("CLAUDE_GATEWAY_PORT", "1998"))
UPSTREAM_BASE_URL = os.getenv(
    "ANTHROPIC_BASE_URL", "https://a-ocnfniawgw.cn-shanghai.fcapp.run"
)
UPSTREAM_AUTH_TOKEN = os.getenv("ANTHROPIC_AUTH_TOKEN", "")
UPSTREAM_TIMEOUT = float(os.getenv("CLAUDE_GATEWAY_TIMEOUT", "120"))
LOG_PATH = os.getenv(
    "CLAUDE_GATEWAY_LOG",
    os.path.join(os.path.dirname(__file__), "gateway_requests.jsonl"),
)
LOG_LOCK = threading.Lock()


def utc_now_iso() -> str:
    return datetime.now(timezone.utc).isoformat()


def ensure_log_parent_exists() -> None:
    parent = os.path.dirname(LOG_PATH)
    if parent:
        os.makedirs(parent, exist_ok=True)


def decode_body_for_log(body: bytes) -> dict:
    if not body:
        return {"encoding": "utf-8", "text": ""}
    try:
        return {"encoding": "utf-8", "text": body.decode("utf-8")}
    except UnicodeDecodeError:
        return {"encoding": "base64", "text": base64.b64encode(body).decode("ascii")}


def append_log(record: dict) -> None:
    ensure_log_parent_exists()
    line = json.dumps(record, ensure_ascii=False)
    with LOG_LOCK:
        with open(LOG_PATH, "a", encoding="utf-8") as f:
            f.write(line + "\n")


def build_upstream_url(base_url: str, incoming_path: str) -> str:
    base = urlsplit(base_url)
    incoming = urlsplit(incoming_path)
    incoming_path_only = incoming.path or "/"
    base_path = base.path.rstrip("/")
    if base_path:
        merged_path = f"{base_path}{incoming_path_only}"
    else:
        merged_path = incoming_path_only
    return urlunsplit((base.scheme, base.netloc, merged_path, incoming.query, ""))


def rewrite_request_headers(headers: dict, path: str) -> dict:
    rewritten = dict(headers)
    if UPSTREAM_AUTH_TOKEN:
        has_x_api_key = any(k.lower() == "x-api-key" for k in rewritten)
        has_authorization = any(k.lower() == "authorization" for k in rewritten)
        if not has_x_api_key:
            rewritten["x-api-key"] = UPSTREAM_AUTH_TOKEN
        if not has_authorization:
            rewritten["Authorization"] = f"Bearer {UPSTREAM_AUTH_TOKEN}"
    # Skeleton for now; overwrite further per any's rules as needed.
    return rewritten


def rewrite_request_body(body: bytes, headers: dict, path: str) -> bytes:
    if not body:
        return body
    content_type = ""
    for k, v in headers.items():
        if k.lower() == "content-type":
            content_type = v
            break
    if "application/json" not in content_type.lower():
        return body
    try:
        payload = json.loads(body.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError):
        return body
    model = str(payload.get("model", "")).lower()
    if not model.startswith("claude-haiku"):
        return body
    beta_key = None
    for k in headers.keys():
        if k.lower() == "anthropic-beta":
            beta_key = k
            break
    raw_beta = headers.get(beta_key, "") if beta_key else ""
    beta_features = [item.strip() for item in raw_beta.split(",") if item.strip()]
    if "context-1m-2025-08-07" not in beta_features:
        beta_features.append("context-1m-2025-08-07")
    merged_beta = ",".join(beta_features)
    if beta_key:
        headers[beta_key] = merged_beta
    else:
        headers["anthropic-beta"] = merged_beta
    payload["thinking"] = {"type": "enabled", "budget_tokens": 1024}
    return json.dumps(payload, ensure_ascii=False, separators=(",", ":")).encode("utf-8")


class ClaudeGatewayHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        self._handle_proxy()

    def do_POST(self):
        self._handle_proxy()

    def do_PUT(self):
        self._handle_proxy()

    def do_PATCH(self):
        self._handle_proxy()

    def do_DELETE(self):
        self._handle_proxy()

    def do_OPTIONS(self):
        self._handle_proxy()

    def do_HEAD(self):
        self._handle_proxy()

    def log_message(self, fmt, *args):
        return

    def _read_request_body(self) -> bytes:
        content_length = int(self.headers.get("Content-Length", "0") or "0")
        if content_length <= 0:
            return b""
        return self.rfile.read(content_length)

    def _copy_request_headers(self) -> dict:
        headers = {}
        for key, value in self.headers.items():
            key_l = key.lower()
            if key_l in {"host", "content-length", "connection", "accept-encoding"}:
                continue
            headers[key] = value
        return headers

    def _send_response(self, status: int, headers: dict, body: bytes) -> None:
        self.send_response(status)
        ignored = {"transfer-encoding", "content-length", "connection"}
        for k, v in headers.items():
            if k.lower() in ignored:
                continue
            self.send_header(k, v)
        self.send_header("Content-Length", str(len(body)))
        self.send_header("Connection", "close")
        self.end_headers()
        if self.command != "HEAD" and body:
            self.wfile.write(body)

    def _handle_proxy(self):
        req_body = self._read_request_body()
        req_headers = self._copy_request_headers()
        req_headers = rewrite_request_headers(req_headers, self.path)
        req_body = rewrite_request_body(req_body, req_headers, self.path)
        upstream_url = build_upstream_url(UPSTREAM_BASE_URL, self.path)
        request_log = {
            "timestamp": utc_now_iso(),
            "client_ip": self.client_address[0] if self.client_address else "",
            "method": self.command,
            "path": self.path,
            "upstream_url": upstream_url,
            "headers": dict(self.headers.items()),
            "body": decode_body_for_log(req_body),
            "body_length": len(req_body),
        }
        try:
            upstream_req = Request(
                url=upstream_url,
                data=req_body if req_body else None,
                headers=req_headers,
                method=self.command,
            )
            with urlopen(upstream_req, timeout=UPSTREAM_TIMEOUT) as resp:
                resp_status = resp.getcode()
                resp_headers = dict(resp.headers.items())
                resp_body = resp.read()
            request_log["response"] = {
                "status": resp_status,
                "headers": resp_headers,
                "body": decode_body_for_log(resp_body),
                "body_length": len(resp_body),
            }
            append_log(request_log)
            self._send_response(resp_status, resp_headers, resp_body)
            return
        except HTTPError as e:
            err_body = e.read() if hasattr(e, "read") else b""
            err_headers = dict(e.headers.items()) if getattr(e, "headers", None) else {}
            request_log["response"] = {
                "status": e.code,
                "headers": err_headers,
                "body": decode_body_for_log(err_body),
                "body_length": len(err_body),
            }
            append_log(request_log)
            self._send_response(e.code, err_headers, err_body)
            return
        except (URLError, TimeoutError, Exception) as e:
            error_payload = {
                "error": "gateway_upstream_error",
                "message": str(e),
            }
            error_body = json.dumps(error_payload, ensure_ascii=False).encode("utf-8")
            request_log["response"] = {
                "status": 502,
                "headers": {"Content-Type": "application/json; charset=utf-8"},
                "body": {"encoding": "utf-8", "text": error_body.decode("utf-8")},
                "body_length": len(error_body),
            }
            append_log(request_log)
            self._send_response(
                502,
                {"Content-Type": "application/json; charset=utf-8"},
                error_body,
            )


def main():
    server = ThreadingHTTPServer((LISTEN_HOST, LISTEN_PORT), ClaudeGatewayHandler)
    print(f"[gateway] listening on http://{LISTEN_HOST}:{LISTEN_PORT}")
    print(f"[gateway] upstream: {UPSTREAM_BASE_URL}")
    print(f"[gateway] auth token configured: {bool(UPSTREAM_AUTH_TOKEN)}")
    print(f"[gateway] log file: {LOG_PATH}")
    server.serve_forever()


if __name__ == "__main__":
    main()
```

3 posts - 2 participants · Read full topic
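The haiku-specific rewrite in the gateway above boils down to a de-duplicating merge of the anthropic-beta header. A standalone sketch of just that merge step (function name is mine, for illustration):

```python
def merge_beta(raw_beta: str, feature: str = "context-1m-2025-08-07") -> str:
    # Split the comma-separated beta list, drop blanks, and append the
    # required feature only if it is not already present.
    features = [item.strip() for item in raw_beta.split(",") if item.strip()]
    if feature not in features:
        features.append(feature)
    return ",".join(features)
```

Pulling the merge out like this makes the idempotence easy to check: running it twice on the same header produces the same value, so repeated proxy passes will not duplicate the feature flag.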

hnrss.org · 2026-04-18 20:30:16+08:00 · tech

I built LogsGo as a learning project to explore log ingestion, querying, and storage tradeoffs. It’s a small Go-based system where logs come in over gRPC, land in memory first, then flush into local storage and optionally S3-compatible object storage. I also added a simple query language plus a small UI to inspect log occurrences over time. This wasn’t built because I think the world needed “another logging system” or because I’m an expert here. I mostly wanted to learn by building something end to end: ingestion paths, storage layering, querying, retention, auth/TLS, and some UI work. Repo: https://github.com/Saumya40-codes/LogsGO I’d genuinely appreciate feedback, including “this design is wrong for X reason” type feedback. If parts of it feel overengineered / naive / badly thought through, that’s useful for me too. Comments URL: https://news.ycombinator.com/item?id=47815402 Points: 1 # Comments: 0
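The memory-first, flush-to-storage layering described above is a common ingestion pattern. LogsGo itself is Go; this is a minimal Python sketch of the idea, with names that are illustrative rather than taken from the repo:

```python
class LogBuffer:
    """Illustrative memory-first buffer that flushes batches to a slower
    sink (local file, object storage, ...) once a threshold is reached."""

    def __init__(self, sink, max_entries: int = 3):
        self.sink = sink              # callable that persists a batch
        self.max_entries = max_entries
        self.entries = []

    def ingest(self, line: str) -> None:
        # Logs land in memory first; flushing happens on the threshold.
        self.entries.append(line)
        if len(self.entries) >= self.max_entries:
            self.flush()

    def flush(self) -> None:
        if self.entries:
            self.sink(list(self.entries))  # hand off a batch copy
            self.entries.clear()
```

A real system adds what the post mentions on top of this core: durability on crash, retention policies, and a query path over the flushed segments.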

linux.do · 2026-04-18 20:26:12+08:00 · tech

The current conversation keeps hitting retries; opening a new session works fine, but switching back to the original conversation still retries. Is this an any round-robin issue, or the new version of Claude Code? It also feels like any is retrying noticeably more often lately. My sense is that it is also related to context: the longer the context, the more likely a retry. The error is mostly: API Error: 503 {"error":{"message":"Service Unavailable","type":"error"},"type":"error"} · check status.claude.com. It worked great just a week ago. 2 posts - 2 participants · Read full topic
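Transient 503s like the one above are typically handled client-side with exponential backoff. A minimal sketch of that policy; the retry parameters here are assumptions for illustration, not Claude Code's actual behavior:

```python
import time

def call_with_backoff(request_fn, max_attempts: int = 4, base_delay: float = 0.5):
    """Retry request_fn (returning (status, body)) on 503 responses,
    doubling the delay between attempts."""
    for attempt in range(max_attempts):
        status, body = request_fn()
        if status != 503:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return status, body  # last 503 after exhausting attempts
```

Backoff helps with momentary overload, but if retries correlate with context length as the poster suspects, trimming or compacting the conversation is the more direct fix.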

hnrss.org · 2026-04-18 20:17:13+08:00 · tech

150 applications. One offer. Each application took 5+ manual steps. Separate tools, separate tabs, separate sites — none of them talking to each other. Generic output. Over an hour per application. Paste a job description — or pull it from any job site with the Chrome extension — and five AI agents run an orchestrated pipeline in under 30 seconds: analyzing the role, scoring your fit, researching the company, writing a targeted cover letter, and tailoring your resume to the role. Sequential where it needs to be, parallel where it can be, each agent's output feeding the next. Also includes a dashboard to track every application. And tools for everything around it: interview prep with mock sessions, salary negotiation, job comparison, follow-ups, thank you notes, and references. Runs on your machine. No subscriptions, no data stored on our servers — just your own Gemini API key connecting directly to Google. Comments URL: https://news.ycombinator.com/item?id=47815326 Points: 1 # Comments: 0
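The "sequential where it needs to be, parallel where it can be" orchestration described above maps naturally onto asyncio: independent agents run concurrently, dependent ones await their inputs. The agent functions below are placeholders of my own, not the product's internals:

```python
import asyncio

# Placeholder "agents"; each returns a string tagged with its inputs.
async def analyze_role(jd):      return f"analysis({jd})"
async def research_company(jd):  return f"research({jd})"
async def score_fit(analysis):   return f"fit({analysis})"
async def write_cover_letter(analysis, research):
    return f"letter({analysis},{research})"

async def pipeline(jd: str) -> str:
    # Independent steps run in parallel; dependent steps run in sequence,
    # each agent's output feeding the next.
    analysis, research = await asyncio.gather(
        analyze_role(jd), research_company(jd)
    )
    fit = await score_fit(analysis)                        # needs analysis
    letter = await write_cover_letter(analysis, research)  # needs both
    return f"{fit}|{letter}"
```

With real LLM calls substituted in, the gather step is where the sub-30-second wall-clock figure would come from: the slow, independent calls overlap instead of queueing.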