智能助手网
Tag aggregation: and


hnrss.org · 2026-04-18 16:53:41+08:00 · tech

Hello, I built chlibc, a Linux tool that changes the system interp and glibc of a program to your custom one. Normally, to run an ELF against a different glibc, you'd use LD_LIBRARY_PATH and patchelf, or reach for chroot/docker. chlibc lets you change the dynamic linker of a process on the fly, without binary patching and without root access. Key features: - Zero disk modification: no need for patchelf --set-interpreter. - No root required: works entirely in user space via ptrace. - Multi-arch: native support for x86_64, AArch64, and RISC-V. - Lightweight: unlike PRoot, which intercepts every syscall to translate paths, chlibc only intervenes during the initial execve() phase. Once the loader is swapped, there is almost no runtime overhead. I’d love to hear your thoughts about this tool, thanks! Comments URL: https://news.ycombinator.com/item?id=47814330 Points: 2 # Comments: 0

linux.do · 2026-04-18 13:53:38+08:00 · tech

I found that the latest Claude Code can no longer be installed and used directly via npm in Android Termux; it errors out. It's almost certainly a Termux compatibility issue, since Termux is not a standard Linux environment. How to fix it:

1. Non-proot approach (recommended)

#!/data/data/com.termux/files/usr/bin/bash
set -euo pipefail

readonly SCRIPT_NAME="$(basename "$0")"
readonly PREFIX_DIR="${PREFIX:-/data/data/com.termux/files/usr}"
readonly STATE_DIR="${CLAUDE_CODE_HOME:-$HOME/.claude-code-termux}"
readonly NODE_DIR="$STATE_DIR/node"
readonly WRAPPER_BIN_DIR="$STATE_DIR/bin"
readonly PATCH_DIR="$STATE_DIR/patches"
readonly GLOBAL_PREFIX_DIR="$STATE_DIR/npm-global"
readonly GLOBAL_BIN_DIR="$GLOBAL_PREFIX_DIR/bin"
readonly NPM_CACHE_DIR="$STATE_DIR/npm-cache"
readonly TMP_ROOT_DIR="${TMPDIR:-$PREFIX_DIR/tmp}"
readonly GLIBC_LDSO="$PREFIX_DIR/glibc/lib/ld-linux-aarch64.so.1"
readonly GLIBC_RUNNER_BIN="$PREFIX_DIR/bin/grun"
readonly GLIBC_MARKER="$STATE_DIR/.glibc-arch"
readonly HOST_CLAUDE_PATH="$PREFIX_DIR/bin/claude"
readonly BACKUP_DIR="$STATE_DIR/backups"
readonly CLAUDE_PACKAGE_NAME="@anthropic-ai/claude-code"
readonly CLAUDE_PACKAGE_VERSION="${CLAUDE_CODE_VERSION:-latest}"
readonly NODE_VERSION="${CLAUDE_CODE_NODE_VERSION:-22.22.0}"
readonly NODE_TARBALL="node-v${NODE_VERSION}-linux-arm64.tar.xz"
readonly NODE_URL="https://nodejs.org/dist/v${NODE_VERSION}/${NODE_TARBALL}"
readonly COMPAT_PATCH_PATH="$PATCH_DIR/claude-glibc-compat.js"
readonly CLAUDE_EXE_PATH="$GLOBAL_PREFIX_DIR/lib/node_modules/@anthropic-ai/claude-code/bin/claude.exe"
readonly HOST_WRAPPER_MARKER="# claude-code-termux-nonproot-wrapper"
readonly C_BOLD_BLUE="\033[1;34m"
readonly C_BOLD_GREEN="\033[1;32m"
readonly C_BOLD_YELLOW="\033[1;33m"
readonly C_BOLD_RED="\033[1;31m"
readonly C_RESET="\033[0m"

info() {
  printf '%b[INFO]%b %s\n' "$C_BOLD_BLUE" "$C_RESET" "$*"
}

success() {
  printf '%b[ OK ]%b %s\n' "$C_BOLD_GREEN" "$C_RESET" "$*"
}

warn() {
  printf '%b[WARN]%b %s\n' "$C_BOLD_YELLOW" "$C_RESET" "$*" >&2
}

die() {
  printf '%b[ERR ]%b %s\n' "$C_BOLD_RED" "$C_RESET" "$*" >&2
  exit 1
}

usage() {
  cat <<EOF
Usage: bash $SCRIPT_NAME

What it does:
  1. Installs Termux dependencies needed for a glibc-based Node runtime.
  2. Installs glibc-runner through pacman (no proot distro).
  3. Downloads official Node.js ${NODE_VERSION} linux-arm64.
  4. Wraps node/npm with ld.so so they run on Termux.
  5. Installs ${CLAUDE_PACKAGE_NAME} and exposes it as: $HOST_CLAUDE_PATH

Environment overrides:
  CLAUDE_CODE_HOME          install state dir, default: $STATE_DIR
  CLAUDE_CODE_VERSION       npm package version/tag, default: $CLAUDE_PACKAGE_VERSION
  CLAUDE_CODE_NODE_VERSION  Node.js linux-arm64 version, default: $NODE_VERSION

Notes:
  - This follows the non-proot glibc-wrapper approach used by openclaw-android.
  - Only aarch64 Termux is supported.
  - Existing $HOST_CLAUDE_PATH will be backed up if it is not already managed.
EOF
}

command_exists() {
  command -v "$1" >/dev/null 2>&1
}

require_termux() {
  [ -d "$PREFIX_DIR" ] || die "This script must run in Termux."
  command_exists pkg || die "pkg not found. This script must run in Termux."
}

ensure_tmp_root() {
  mkdir -p "$TMP_ROOT_DIR"
}

ensure_state_dirs() {
  mkdir -p "$STATE_DIR" "$WRAPPER_BIN_DIR" "$PATCH_DIR" "$GLOBAL_PREFIX_DIR" \
    "$GLOBAL_BIN_DIR" "$NPM_CACHE_DIR" "$BACKUP_DIR"
}

ensure_termux_package() {
  local package_name="$1"
  if dpkg -s "$package_name" >/dev/null 2>&1; then
    success "Termux package already installed: $package_name"
    return 0
  fi
  info "Installing Termux package: $package_name"
  pkg install -y "$package_name"
  success "Installed Termux package: $package_name"
}

ensure_glibc_runner() {
  local arch
  local pacman_conf
  local siglevel_patched=0
  arch="$(uname -m)"
  [ "$arch" = "aarch64" ] || die "glibc mode only supports aarch64, got: $arch"
  if [ -f "$GLIBC_MARKER" ] && [ -x "$GLIBC_LDSO" ]; then
    success "glibc-runner already available"
    return 0
  fi
  ensure_termux_package "pacman"
  pacman_conf="$PREFIX_DIR/etc/pacman.conf"
  info "Initializing pacman for glibc-runner"
  if [ -f "$pacman_conf" ] && ! grep -q '^SigLevel = Never' "$pacman_conf"; then
    cp "$pacman_conf" "${pacman_conf}.bak"
    sed -i 's/^SigLevel\s*=.*/SigLevel = Never/' "$pacman_conf"
    siglevel_patched=1
    warn "Applied temporary pacman SigLevel workaround"
  fi
  pacman-key --init 2>/dev/null || true
  pacman-key --populate 2>/dev/null || true
  info "Installing glibc-runner"
  if ! pacman -Sy glibc-runner --noconfirm --assume-installed bash,patchelf,resolv-conf; then
    if [ "$siglevel_patched" -eq 1 ] && [ -f "${pacman_conf}.bak" ]; then
      mv "${pacman_conf}.bak" "$pacman_conf"
    fi
    die "Failed to install glibc-runner"
  fi
  if [ "$siglevel_patched" -eq 1 ] && [ -f "${pacman_conf}.bak" ]; then
    mv "${pacman_conf}.bak" "$pacman_conf"
    success "Restored pacman SigLevel"
  fi
  [ -x "$GLIBC_LDSO" ] || die "glibc dynamic linker not found at $GLIBC_LDSO"
  touch "$GLIBC_MARKER"
  success "glibc-runner is ready"
}

write_compat_patch() {
  info "Writing Node compatibility patch"
  cat >"$COMPAT_PATCH_PATH" <<'EOF'
'use strict';
const childProcess = require('child_process');
const dns = require('dns');
const fs = require('fs');
const os = require('os');
const path = require('path');

const prefix = process.env.PREFIX || '/data/data/com.termux/files/usr';
const home = process.env.HOME || '/data/data/com.termux/files/home';
const wrapperPath = process.env._CLAUDE_WRAPPER_PATH ||
  path.join(home, '.claude-code-termux', 'bin', 'node');
const termuxExec = path.join(prefix, 'lib', 'libtermux-exec-ld-preload.so');
const termuxShell = path.join(prefix, 'bin', 'sh');

try {
  if (fs.existsSync(wrapperPath)) {
    Object.defineProperty(process, 'execPath', {
      value: wrapperPath,
      writable: true,
      configurable: true,
    });
  }
} catch {}

if (process.env._CLAUDE_ORIG_LD_PRELOAD) {
  process.env.LD_PRELOAD = process.env._CLAUDE_ORIG_LD_PRELOAD;
  delete process.env._CLAUDE_ORIG_LD_PRELOAD;
} else if (!process.env.LD_PRELOAD) {
  try {
    if (fs.existsSync(termuxExec)) {
      process.env.LD_PRELOAD = termuxExec;
    }
  } catch {}
}

const originalCpus = os.cpus;
os.cpus = function cpus() {
  try {
    const result = originalCpus.call(os);
    if (Array.isArray(result) && result.length > 0) {
      return result;
    }
  } catch {}
  return [{
    model: 'unknown',
    speed: 0,
    times: { user: 0, nice: 0, sys: 0, idle: 0, irq: 0 },
  }];
};

const originalNetworkInterfaces = os.networkInterfaces;
os.networkInterfaces = function networkInterfaces() {
  try {
    return originalNetworkInterfaces.call(os);
  } catch {
    return {
      lo: [{
        address: '127.0.0.1',
        netmask: '255.0.0.0',
        family: 'IPv4',
        mac: '00:00:00:00:00:00',
        internal: true,
        cidr: '127.0.0.1/8',
      }],
    };
  }
};

if (!fs.existsSync('/bin/sh') && fs.existsSync(termuxShell)) {
  const originalExec = childProcess.exec;
  const originalExecSync = childProcess.execSync;
  childProcess.exec = function exec(command, options, callback) {
    if (typeof options === 'function') {
      callback = options;
      options = {};
    }
    options = options || {};
    if (!options.shell) {
      options.shell = termuxShell;
    }
    return originalExec.call(childProcess, command, options, callback);
  };
  childProcess.execSync = function execSync(command, options) {
    options = options || {};
    if (!options.shell) {
      options.shell = termuxShell;
    }
    return originalExecSync.call(childProcess, command, options);
  };
}

try {
  let dnsServers = ['8.8.8.8', '8.8.4.4'];
  try {
    const resolvConf = fs.readFileSync(path.join(prefix, 'etc', 'resolv.conf'), 'utf8');
    const matches = resolvConf.match(/^nameserver\s+(.+)$/gm);
    if (matches && matches.length > 0) {
      dnsServers = matches.map((line) => line.replace(/^nameserver\s+/, '').trim());
    }
  } catch {}
  try {
    dns.setServers(dnsServers);
  } catch {}
  const originalLookup = dns.lookup;
  const originalLookupPromise = dns.promises.lookup;
  dns.lookup = function lookup(hostname, options, callback) {
    if (typeof options === 'function') {
      callback = options;
      options = {};
    }
    const originalOptions = options;
    const opts = typeof options === 'number' ? { family: options } : (options || {});
    const wantAll = opts.all === true;
    const family = opts.family || 0;
    const resolveWith = (fam, done) => {
      const resolver = fam === 6 ? dns.resolve6 : dns.resolve4;
      resolver(hostname, done);
    };
    const tryResolve = (fam) => {
      resolveWith(fam, (error, addresses) => {
        if (!error && Array.isArray(addresses) && addresses.length > 0) {
          const resolvedFamily = fam === 6 ? 6 : 4;
          if (wantAll) {
            callback(null, addresses.map((address) => ({
              address,
              family: resolvedFamily,
            })));
            return;
          }
          callback(null, addresses[0], resolvedFamily);
          return;
        }
        if (family === 0 && fam === 4) {
          tryResolve(6);
          return;
        }
        originalLookup.call(dns, hostname, originalOptions, callback);
      });
    };
    tryResolve(family === 6 ? 6 : 4);
  };
  dns.promises.lookup = async function lookup(hostname, options) {
    const opts = typeof options === 'number' ? { family: options } : (options || {});
    const wantAll = opts.all === true;
    const family = opts.family || 0;
    const resolveWith = family === 6 ? dns.promises.resolve6 : dns.promises.resolve4;
    try {
      const addresses = await resolveWith(hostname);
      if (addresses.length > 0) {
        const resolvedFamily = family === 6 ? 6 : 4;
        if (wantAll) {
          return addresses.map((address) => ({ address, family: resolvedFamily }));
        }
        return { address: addresses[0], family: resolvedFamily };
      }
    } catch {}
    if (family === 0) {
      try {
        const addresses = await dns.promises.resolve6(hostname);
        if (addresses.length > 0) {
          if (wantAll) {
            return addresses.map((address) => ({ address, family: 6 }));
          }
          return { address: addresses[0], family: 6 };
        }
      } catch {}
    }
    return originalLookupPromise.call(dns.promises, hostname, options);
  };
} catch {}
EOF
  success "Compatibility patch written to $COMPAT_PATCH_PATH"
}

write_node_wrappers() {
  local node_bin_path
  local node_real_path
  node_bin_path="$NODE_DIR/bin/node"
  node_real_path="$NODE_DIR/bin/node.real"
  if [ -f "$node_real_path" ]; then
    :
  elif [ -f "$node_bin_path" ]; then
    mv "$node_bin_path" "$node_real_path"
  else
    die "Node binary missing at $node_bin_path"
  fi
  info "Writing node/npm wrappers"
  cat >"$WRAPPER_BIN_DIR/node" <<EOF
#!$PREFIX_DIR/bin/bash
[ -n "\${LD_PRELOAD:-}" ] && export _CLAUDE_ORIG_LD_PRELOAD="\$LD_PRELOAD"
unset LD_PRELOAD
export _CLAUDE_WRAPPER_PATH="$WRAPPER_BIN_DIR/node"
export TMPDIR="\${TMPDIR:-$TMP_ROOT_DIR}"
_CLAUDE_COMPAT="$COMPAT_PATCH_PATH"
if [ -f "\$_CLAUDE_COMPAT" ]; then
  case "\${NODE_OPTIONS:-}" in
    *"\$_CLAUDE_COMPAT"*) ;;
    *) export NODE_OPTIONS="\${NODE_OPTIONS:+\$NODE_OPTIONS }-r \$_CLAUDE_COMPAT" ;;
  esac
fi
exec "$GLIBC_LDSO" --library-path "$PREFIX_DIR/glibc/lib" "$NODE_DIR/bin/node.real" "\$@"
EOF
  cat >"$WRAPPER_BIN_DIR/npm" <<EOF
#!$PREFIX_DIR/bin/bash
export PATH="$WRAPPER_BIN_DIR:$NODE_DIR/bin:\$PATH"
export TMPDIR="\${TMPDIR:-$TMP_ROOT_DIR}"
export NPM_CONFIG_PREFIX="$GLOBAL_PREFIX_DIR"
export npm_config_prefix="$GLOBAL_PREFIX_DIR"
export NPM_CONFIG_CACHE="$NPM_CACHE_DIR"
export npm_config_cache="$NPM_CACHE_DIR"
export NPM_CONFIG_SCRIPT_SHELL="$PREFIX_DIR/bin/sh"
export npm_config_script_shell="$PREFIX_DIR/bin/sh"
exec "$WRAPPER_BIN_DIR/node" "$NODE_DIR/lib/node_modules/npm/bin/npm-cli.js" "\$@"
EOF
  cat >"$WRAPPER_BIN_DIR/npx" <<EOF
#!$PREFIX_DIR/bin/bash
export PATH="$WRAPPER_BIN_DIR:$NODE_DIR/bin:\$PATH"
export TMPDIR="\${TMPDIR:-$TMP_ROOT_DIR}"
export NPM_CONFIG_PREFIX="$GLOBAL_PREFIX_DIR"
export npm_config_prefix="$GLOBAL_PREFIX_DIR"
export NPM_CONFIG_CACHE="$NPM_CACHE_DIR"
export npm_config_cache="$NPM_CACHE_DIR"
export NPM_CONFIG_SCRIPT_SHELL="$PREFIX_DIR/bin/sh"
export npm_config_script_shell="$PREFIX_DIR/bin/sh"
exec "$WRAPPER_BIN_DIR/node" "$NODE_DIR/lib/node_modules/npm/bin/npx-cli.js" "\$@"
EOF
  chmod 755 "$WRAPPER_BIN_DIR/node" "$WRAPPER_BIN_DIR/npm" "$WRAPPER_BIN_DIR/npx"
  success "node/npm wrappers are ready"
}

install_node_runtime() {
  local installed_version
  local tmp_dir
  local extract_dir
  local fresh_dir
  ensure_termux_package "curl"
  ensure_termux_package "xz-utils"
  if [ -x "$WRAPPER_BIN_DIR/node" ]; then
    installed_version="$("$WRAPPER_BIN_DIR/node" --version 2>/dev/null | sed 's/^v//')"
    if [ "$installed_version" = "$NODE_VERSION" ]; then
      success "Node.js already installed: v$installed_version"
      write_compat_patch
      write_node_wrappers
      return 0
    fi
  fi
  info "Downloading official Node.js ${NODE_VERSION} linux-arm64"
  tmp_dir="$(mktemp -d "$TMP_ROOT_DIR/claude-node.XXXXXX")"
  curl -fL --max-time 300 "$NODE_URL" -o "$tmp_dir/$NODE_TARBALL"
  success "Downloaded $NODE_TARBALL"
  extract_dir="$tmp_dir/extract"
  fresh_dir="$tmp_dir/node-fresh"
  mkdir -p "$extract_dir" "$fresh_dir"
  tar -xJf "$tmp_dir/$NODE_TARBALL" -C "$extract_dir"
  mv "$extract_dir"/node-v"${NODE_VERSION}"-linux-arm64/* "$fresh_dir"/
  rm -rf "$NODE_DIR"
  mkdir -p "$(dirname "$NODE_DIR")"
  mv "$fresh_dir" "$NODE_DIR"
  write_compat_patch
  write_node_wrappers
  rm -rf "$tmp_dir"
  success "Node.js runtime installed in $NODE_DIR"
}

install_claude_package() {
  local package_spec
  package_spec="$CLAUDE_PACKAGE_NAME"
  if [ "$CLAUDE_PACKAGE_VERSION" != "latest" ]; then
    package_spec="${CLAUDE_PACKAGE_NAME}@${CLAUDE_PACKAGE_VERSION}"
  fi
  info "Installing $package_spec"
  PATH="$WRAPPER_BIN_DIR:$GLOBAL_BIN_DIR:$PATH" "$WRAPPER_BIN_DIR/npm" install -g "$package_spec"
  [ -e "$GLOBAL_BIN_DIR/claude" ] || die "npm install completed, but $GLOBAL_BIN_DIR/claude was not created"
  [ -x "$CLAUDE_EXE_PATH" ] || die "Claude native binary missing at $CLAUDE_EXE_PATH"
  success "Claude Code is installed under $GLOBAL_PREFIX_DIR"
}

backup_existing_launcher() {
  local backup_path
  if [ ! -e "$HOST_CLAUDE_PATH" ]; then
    return 0
  fi
  if grep -Fq "$HOST_WRAPPER_MARKER" "$HOST_CLAUDE_PATH" 2>/dev/null; then
    success "Managed host launcher already present"
    return 0
  fi
  backup_path="$BACKUP_DIR/claude.host-backup.$(date +%Y%m%d_%H%M%S)"
  cp "$HOST_CLAUDE_PATH" "$backup_path"
  success "Backed up existing launcher to $backup_path"
}

install_host_wrapper() {
  local tmp_wrapper
  tmp_wrapper="$(mktemp "$TMP_ROOT_DIR/claude-wrapper.XXXXXX")"
  cat >"$tmp_wrapper" <<EOF
#!$PREFIX_DIR/bin/bash
$HOST_WRAPPER_MARKER
export PATH="$WRAPPER_BIN_DIR:$GLOBAL_BIN_DIR:\$PATH"
export TMPDIR="\${TMPDIR:-$TMP_ROOT_DIR}"
exec "$GLIBC_RUNNER_BIN" -t "$CLAUDE_EXE_PATH" "\$@"
EOF
  chmod 755 "$tmp_wrapper"
  cp "$tmp_wrapper" "$HOST_CLAUDE_PATH"
  chmod 755 "$HOST_CLAUDE_PATH"
  rm -f "$tmp_wrapper"
  success "Installed host launcher: $HOST_CLAUDE_PATH"
}

verify_install() {
  info "Verifying Node wrapper"
  "$WRAPPER_BIN_DIR/node" --version
  info "Verifying npm wrapper"
  "$WRAPPER_BIN_DIR/npm" --version
  info "Verifying Claude Code launcher"
  "$HOST_CLAUDE_PATH" --version
  success "Non-proot Claude Code setup completed"
}

main() {
  if [ "${1:-}" = "-h" ] || [ "${1:-}" = "--help" ]; then
    usage
    exit 0
  fi
  require_termux
  ensure_tmp_root
  ensure_state_dirs
  ensure_glibc_runner
  install_node_runtime
  install_claude_package
  backup_existing_launcher
  install_host_wrapper
  verify_install
  cat <<EOF

Run Claude Code with:
  claude

Current configuration:
  state dir:       $STATE_DIR
  node version:    $NODE_VERSION
  package version: $CLAUDE_PACKAGE_VERSION
  launcher:        $HOST_CLAUDE_PATH
EOF
}

main "$@"

2. proot approach (it will lag; lag is expected, so just wait when it does)

#!/data/data/com.termux/files/usr/bin/bash
set -euo pipefail

readonly SCRIPT_NAME="$(basename "$0")"
readonly DISTRO_NAME="${CLAUDE_CODE_DISTRO:-debian}"
readonly CLAUDE_PACKAGE_NAME="@anthropic-ai/claude-code"
readonly CLAUDE_PACKAGE_VERSION="${CLAUDE_CODE_VERSION:-latest}"
readonly PREFIX_DIR="${PREFIX:-/data/data/com.termux/files/usr}"
readonly HOST_CLAUDE_PATH="$PREFIX_DIR/bin/claude"
readonly PROOT_ROOT_DIR="$PREFIX_DIR/var/lib/proot-distro/installed-rootfs"
readonly BACKUP_DIR="$HOME/.codex/tmp"
readonly WRAPPER_MARKER="# claude-code-termux-wrapper"
readonly C_BOLD_BLUE="\033[1;34m"
readonly C_BOLD_GREEN="\033[1;32m"
readonly C_BOLD_YELLOW="\033[1;33m"
readonly C_BOLD_RED="\033[1;31m"
readonly C_RESET="\033[0m"

info() {
  printf '%b[INFO]%b %s\n' "$C_BOLD_BLUE" "$C_RESET" "$*"
}

success() {
  printf '%b[ OK ]%b %s\n' "$C_BOLD_GREEN" "$C_RESET" "$*"
}

warn() {
  printf '%b[WARN]%b %s\n' "$C_BOLD_YELLOW" "$C_RESET" "$*" >&2
}

die() {
  printf '%b[ERR ]%b %s\n' "$C_BOLD_RED" "$C_RESET" "$*" >&2
  exit 1
}

usage() {
  cat <<EOF
Usage: bash $SCRIPT_NAME

What it does:
  1. Installs proot-distro in Termux if needed.
  2. Installs Debian userspace if needed.
  3. Installs nodejs + npm inside Debian.
  4. Installs ${CLAUDE_PACKAGE_NAME} inside Debian.
  5. Replaces Termux's claude launcher with a wrapper that forwards into Debian.

Environment overrides:
  CLAUDE_CODE_DISTRO   proot distro alias, default: ${DISTRO_NAME}
  CLAUDE_CODE_VERSION  npm package version/tag, default: ${CLAUDE_PACKAGE_VERSION}

Notes:
  - Official Claude Code npm binaries do not support Termux's android-arm64 host.
  - This script uses Debian in proot as the supported Linux runtime.
EOF
}

command_exists() {
  command -v "$1" >/dev/null 2>&1
}

require_termux() {
  [ -d "$PREFIX_DIR" ] || die "This script must run in Termux."
  command_exists pkg || die "pkg not found. This script must run in Termux."
}

ensure_termux_package() {
  local package_name="$1"
  if dpkg -s "$package_name" >/dev/null 2>&1; then
    success "Termux package already installed: $package_name"
    return 0
  fi
  info "Installing Termux package: $package_name"
  pkg install -y "$package_name"
  success "Installed Termux package: $package_name"
}

ensure_distro() {
  if [ -d "$PROOT_ROOT_DIR/$DISTRO_NAME" ]; then
    success "proot distro already installed: $DISTRO_NAME"
    return 0
  fi
  info "Installing proot distro: $DISTRO_NAME"
  proot-distro install "$DISTRO_NAME"
  success "Installed proot distro: $DISTRO_NAME"
}

run_in_distro() {
  local command_text="$1"
  proot-distro login "$DISTRO_NAME" -- bash -lc "$command_text"
}

ensure_distro_packages() {
  info "Updating apt metadata inside $DISTRO_NAME"
  run_in_distro "env DEBIAN_FRONTEND=noninteractive apt-get update"
  info "Installing nodejs and npm inside $DISTRO_NAME"
  run_in_distro "env DEBIAN_FRONTEND=noninteractive apt-get install -y nodejs npm"
  success "nodejs and npm are ready inside $DISTRO_NAME"
}

install_claude_in_distro() {
  local package_spec="$CLAUDE_PACKAGE_NAME"
  if [ "$CLAUDE_PACKAGE_VERSION" != "latest" ]; then
    package_spec="${CLAUDE_PACKAGE_NAME}@${CLAUDE_PACKAGE_VERSION}"
  fi
  info "Installing ${package_spec} inside $DISTRO_NAME"
  run_in_distro "npm install -g ${package_spec@Q}"
  success "Claude Code is installed inside $DISTRO_NAME"
}

backup_existing_launcher() {
  local backup_path
  mkdir -p "$BACKUP_DIR"
  if [ ! -e "$HOST_CLAUDE_PATH" ]; then
    return 0
  fi
  if grep -Fq "$WRAPPER_MARKER" "$HOST_CLAUDE_PATH" 2>/dev/null; then
    success "Managed Termux launcher already present"
    return 0
  fi
  backup_path="$BACKUP_DIR/claude.host-backup.$(date +%Y%m%d_%H%M%S)"
  cp "$HOST_CLAUDE_PATH" "$backup_path"
  success "Backed up existing launcher to $backup_path"
}

install_host_wrapper() {
  local tmp_wrapper
  tmp_wrapper="$(mktemp "${TMPDIR:-/tmp}/claude-wrapper.XXXXXX")"
  cat >"$tmp_wrapper" <<EOF
#!/data/data/com.termux/files/usr/bin/sh
$WRAPPER_MARKER
work_dir=\$PWD
if [ ! -d "\$work_dir" ]; then
  work_dir=/root
fi
exec proot-distro login --shared-tmp --work-dir "\$work_dir" $DISTRO_NAME -- /usr/local/bin/claude "\$@"
EOF
  chmod 755 "$tmp_wrapper"
  cp "$tmp_wrapper" "$HOST_CLAUDE_PATH"
  chmod 755 "$HOST_CLAUDE_PATH"
  rm -f "$tmp_wrapper"
  success "Installed Termux launcher: $HOST_CLAUDE_PATH"
}

verify_install() {
  info "Verifying Claude inside $DISTRO_NAME"
  run_in_distro "claude --version"
  info "Verifying Termux launcher"
  "$HOST_CLAUDE_PATH" --version
  success "Claude Code setup completed"
}

main() {
  if [ "${1:-}" = "-h" ] || [ "${1:-}" = "--help" ]; then
    usage
    exit 0
  fi
  require_termux
  ensure_termux_package "proot-distro"
  ensure_distro
  ensure_distro_packages
  install_claude_in_distro
  backup_existing_launcher
  install_host_wrapper
  verify_install
  cat <<EOF

Run Claude Code with:
  claude

Current configuration:
  distro:        $DISTRO_NAME
  host launcher: $HOST_CLAUDE_PATH
EOF
}

main "$@"

The non-proot approach follows GitHub - AidanPark/openclaw-android: Run OpenClaw on Android with a single command — no proot, no Linux. 1 post - 1 participant. Read full topic.

hnrss.org · 2026-04-18 13:30:28+08:00 · tech

Built InGaming as a frontend MVP for the iGaming space, with two main parts: 1. a back-office admin for casino / gambling operations 2. a multi-brand storefront system for casino sites The starting point was simple: in all the years I’ve worked around iGaming, I haven’t seen a single admin panel I’d call genuinely good. Most of them are either bloated, awkward to use, or patched together from disconnected tools that don’t really fit how teams work day to day. So I decided to build the kind of admin I actually wish existed. On the admin side, the MVP includes flows for players, payments and transactions, casino and sportsbook operations, content and static pages, reporting, and website configuration. On the storefront side, I built a separate frontend repo to validate a multi-brand setup on shared foundations. Right now that includes two demo brands: BetStake and ShipBet. Important caveat: this is frontend work and product validation, not a finished end-to-end platform. I’m treating it as an expanded MVP that is already strong enough to be a serious starting point, rather than just another rough concept. What I’m really trying to figure out now is whether this is already valuable as a project someone could buy as a head start. My view is that it could save a lot of time for a team that would otherwise need to build the admin, storefront layer, and overall product structure from scratch. Happy to answer product or frontend architecture questions. Comments URL: https://news.ycombinator.com/item?id=47813386 Points: 1 # Comments: 0

hnrss.org · 2026-04-18 11:09:05+08:00 · tech

devnexus is an open-source CLI that gives agents persistent shared memory across repos, sessions, and engineers. It maps out dependencies and relations at the function level, builds a code graph, and writes it into a shared Obsidian vault that every agent reads before writing code. Past decisions are also linked directly to the code they touched, so no one goes down the same dead end twice. Still building it out, but I would love to hear any thoughts/feedback. Comments URL: https://news.ycombinator.com/item?id=47812829 Points: 4 # Comments: 0
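The function-level dependency mapping devnexus describes can be illustrated with a small sketch using Python's ast module. This is not devnexus's actual implementation; the function name call_edges is invented here, and the sketch only handles direct calls between top-level functions in a single module:

```python
import ast

def call_edges(source: str) -> set[tuple[str, str]]:
    """Extract (caller, callee) edges between top-level functions in one module."""
    tree = ast.parse(source)
    defs = {n.name for n in tree.body if isinstance(n, ast.FunctionDef)}
    edges = set()
    for fn in tree.body:
        if not isinstance(fn, ast.FunctionDef):
            continue
        # Walk the function body and record calls to other known functions.
        for node in ast.walk(fn):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id in defs):
                edges.add((fn.name, node.func.id))
    return edges

src = """
def parse(x): return x.strip()
def load(path): return parse(open(path).read())
def main(): return load("cfg.txt")
"""
print(sorted(call_edges(src)))  # [('load', 'parse'), ('main', 'load')]
```

A real tool would also need to resolve imports, methods, and cross-repo references, but the edge set above is the kind of raw material a code graph is built from.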

linux.do · 2026-04-18 10:03:18+08:00 · tech

github.com — GitHub - anthropics/claude-desktop-buddy: Reference and an example for the Bluetooth API for makers in Claude Cowork & Claude Code Desktop. Quoted from the repo (translated): "Claude for macOS and Windows can connect Claude Cowork and Claude Code to maker devices over BLE, so developers and makers can build hardware that displays permission prompts, recent messages, and other interactions. We have been impressed by the creativity of the maker community around Claude. Offering a lightweight, opt-in API is our way of making it easier to build fun little hardware devices that integrate with Claude. For example, we built a desk pet on an ESP32 that reacts to permission approvals and interactions with Claude. It sleeps when nothing is happening, wakes up when a session starts, grows visibly impatient while waiting for an approval prompt, and lets you approve or deny directly on the device." 3 posts - 2 participants. Read full topic.

hnrss.org · 2026-04-18 05:50:03+08:00 · tech

I’ve been working on Pyra for the past few months and wanted to start sharing it in public. Right now it’s focused on the core package/project management workflow: Python installs, init, add/remove, lockfiles, env sync, and running commands in the managed env. The bigger thing I’m exploring is whether Python could eventually support a more cohesive toolchain story overall, more in the direction of Bun: not just packaging, but maybe over time testing, tasks, notebooks, and other common workflow tools feeling like one system instead of a bunch of separate pieces. It’s still early, and I’m definitely not claiming it’s as mature as uv. I’m mostly sharing it now because I want honest feedback on whether the direction feels interesting or misguided. Comments URL: https://news.ycombinator.com/item?id=47810994 Points: 6 # Comments: 1

hnrss.org · 2026-04-18 04:02:01+08:00 · tech

Hunk-by-hunk and line-by-line staging for git, designed for building clean commit history! Writing code is messy. Git history doesn't have to be. During development we experiment, refactor, backtrack, and fix mistakes. If every step ends up as a commit, the history becomes noise. A curated history turns that process into a clear sequence of logical changes. git-stage-batch helps you build that history incrementally by letting you stage changes hunk-by-hunk or line-by-line, shaping commits around meaning instead of the order the edits happened. Comments URL: https://news.ycombinator.com/item?id=47809951 Points: 6 # Comments: 1

hnrss.org · 2026-04-18 04:00:04+08:00 · tech

Paper Lantern is an MCP server that lets coding agents ask for personalized techniques / ideas from 2M+ CS research papers. Your coding agent tells PL what problem it is working on --> PL finds the most relevant ideas from 100+ research papers for you --> gives them to your coding agent, including trade-offs and implementation instructions. We had previously shown that this helps research work, and we want to understand whether it helps everyday software engineering tasks. We built out 9 tasks to measure this and compared using only a coding agent (Opus 4.6) as the baseline vs. the coding agent + Paper Lantern access. (Blog post with full breakdown: https://www.paperlantern.ai/blog/coding-agent-benchmarks ) Some interesting results: 1. We asked the agent to write tests that maximize mutation score (fraction of injected bugs caught). The baseline caught 63% of injected bugs. Baseline + Paper Lantern found mutation-aware prompting from recent research (MuTAP, Aug 2023; MUTGEN, Jun 2025), which suggested enumerating every possible mutation via AST analysis and then writing tests to target each one. This caught 87%. 2. Extracting legal clauses from 50 contracts. The baseline sent the full document to the LLM and correctly extracted 44% of clauses. Baseline + Paper Lantern found two papers from March 2026 (BEAVER for section-level relevance scoring, PAVE for post-extraction validation). Accuracy jumped to 76%. Five of nine tasks improved by 30-80%. The difference was technique selection. 10 of the 15 most-cited papers across all experiments were published in 2025 or later. Everything is open source: https://github.com/paperlantern-ai/paper-lantern-challenges Each experiment has its own README with detailed results and an approach.md showing exactly what Paper Lantern surfaced and how the agent used it. Quick setup: `npx paperlantern@latest` Comments URL: https://news.ycombinator.com/item?id=47809920 Points: 3 # Comments: 4

hnrss.org · 2026-04-18 03:18:52+08:00 · tech

Hey HN, Sjoerd de Vries here. I have worked on Seamless for nearly 10 years now. It has been used in my lab, but I was always around for troubleshooting. This is the first time that I think it's ready to stand on its own. I would love to hear your thoughts about it. It started as a hobby project: I had an itch about programming not being at-your-fingertips enough. Then I applied it to my work as a bioinformatics research engineer. The early versions focused on interactive workflows. After a year or two I realized that to make interactivity work properly, you need really good DAG tracking, so checksums were added everywhere. My lab built a collaborative web server with it that we published. More recently I've rebuilt it around the command line, persistent caching, and remote deployment. It's still in alpha, but the core is usable. Core idea: same code + same inputs = same result, identified by checksum. If you've already computed it, you don't compute it again. Two entry points.

Python:

from seamless.transformer import direct

@direct
def add(a, b):
    import time
    time.sleep(5)
    return a + b

add(2, 3)  # runs, caches result
add(2, 3)  # cache hit — instant

Bash:

seamless-run 'seq 1 10 | tac && sleep 5'  # runs, caches result
seamless-run 'seq 1 10 | tac && sleep 5'  # cache hit — instant

With persistent caching enabled, results are stored as checksum-to-checksum mappings in a small SQLite database that can be shared with collaborators, so that they get cache hits too. Execution scales by changing config, not code: in-process, spawned workers, or a Dask-backed HPC cluster. Remote execution also doubles as a reproducibility test. If your code produces the same result on a clean worker, it's reproducible. If not, Seamless helped you find the problem, whether it's a missing dependency, an undeclared input, or a platform sensitivity. Built for scientific computing and data pipelines, but works for anything pipeline-shaped. Comments URL: https://news.ycombinator.com/item?id=47809537 Points: 1 # Comments: 0
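The checksum-to-checksum cache idea can be sketched in plain Python. This is an illustrative toy, not Seamless's actual schema: the names memoized and store are invented, fn.__code__.co_code is a crude stand-in for real code checksumming, and the SQLite table is in-memory instead of a shared file:

```python
import hashlib
import pickle
import sqlite3

db = sqlite3.connect(":memory:")  # Seamless persists this; in-memory for the sketch
db.execute("CREATE TABLE IF NOT EXISTS cache (task TEXT PRIMARY KEY, result TEXT)")
store = {}  # result checksum -> pickled value, standing in for a content-addressed store

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def memoized(fn):
    """Same code + same inputs => same task checksum; look the result up first."""
    def wrapper(*args):
        task = checksum(fn.__code__.co_code + pickle.dumps(args))
        row = db.execute("SELECT result FROM cache WHERE task = ?", (task,)).fetchone()
        if row:
            return pickle.loads(store[row[0]])  # cache hit: no recomputation
        value = fn(*args)
        blob = pickle.dumps(value)
        key = checksum(blob)
        store[key] = blob
        db.execute("INSERT INTO cache VALUES (?, ?)", (task, key))
        return value
    return wrapper

calls = 0

@memoized
def add(a, b):
    global calls
    calls += 1
    return a + b

add(2, 3)
add(2, 3)
print(calls)  # 1 — the second call is a cache hit, so the body ran only once
```

Because the cache table only maps checksums to checksums, sharing it is cheap, which is what lets collaborators get cache hits from each other's runs.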

hnrss.org · 2026-04-18 03:11:06+08:00 · tech

I've grown increasingly skeptical that public coding benchmarks tell me much about which model is actually worth paying for, and I worry that as demand continues to spike, model providers will silently drop performance. I did a few manual analyses but found it non-trivial to compare across models due to differences in token caching and tool-use efficiency, so I wanted a tool for repeatable evaluations. The goal was an OSS tool to get data to help answer questions like: "Would Sonnet have solved most of the issues we gave Opus?" "How much would that have actually saved?" "What about OSS models like Kimi K2.5 or GLM-1?" "The vibes are off: did model performance just regress from last month?" Right now the project is a bit medium-rare, but it works end-to-end. I've run it successfully against itself, and I'm waiting for my token limits to reset so I can add support for more languages and do a broader run. I'm already seeing a few cases where I could've used 5.4-mini instead of 5.4 for some parts of implementation. I'd love any feedback, criticism, and ideas. I am especially interested in whether this is something you might pay for as a managed service, or whether you would contribute your private test cases to a shared hold-out commons to hold AI providers a bit more accountable. https://repogauge.org [email protected] https://github.com/s1liconcow/repogauge Thanks! David Comments URL: https://news.ycombinator.com/item?id=47809457 Points: 1 # Comments: 0

hnrss.org · 2026-04-18 01:45:05+08:00 · tech

Small CLI program I made to convert and modify bookmark files. Supports converting between JSON and the Netscape bookmark file format (the default format exported by Chrome/Firefox). I created this because I have a lot of bookmarks across devices that I want to batch edit/delete, and I can't always just directly modify the local browser DB. Not many filters are implemented so far, but I made it easy to add filters; see: https://github.com/ediw8311xht/cl-bookmark-tool/blob/main/sr... Comments URL: https://news.ycombinator.com/item?id=47808543 Points: 1 # Comments: 0
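The Netscape bookmark format the post mentions is HTML-like, so parsing it to JSON is straightforward with a standard HTML parser. A minimal sketch (not cl-bookmark-tool's implementation; the BookmarkParser class is invented here, and folder nesting via <DL>/<H3> is ignored):

```python
import json
from html.parser import HTMLParser

class BookmarkParser(HTMLParser):
    """Collect <A> entries from a Netscape bookmark file as dicts."""
    def __init__(self):
        super().__init__()
        self.bookmarks = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":  # HTMLParser lowercases tag and attribute names
            self._current = dict(attrs)
            self._current["title"] = ""

    def handle_data(self, data):
        if self._current is not None:
            self._current["title"] += data

    def handle_endtag(self, tag):
        if tag == "a" and self._current is not None:
            self.bookmarks.append(self._current)
            self._current = None

sample = """<!DOCTYPE NETSCAPE-Bookmark-file-1>
<DL><p>
    <DT><A HREF="https://example.com" ADD_DATE="1700000000">Example</A>
</DL><p>
"""
parser = BookmarkParser()
parser.feed(sample)
print(json.dumps(parser.bookmarks, indent=2))
```

Going the other way (JSON back to Netscape HTML) is mostly string templating around the same fields.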

hnrss.org · 2026-04-18 01:12:17+08:00 · tech

A terminal UI (TUI) for exploring, auditing, and cleaning developer cache directories on macOS and Linux. Scan cached packages for known CVEs, find outdated dependencies, and reclaim disk space — all from one tool. Developer machines accumulate tens of gigabytes of invisible cache data — ML models, package archives, build artifacts, downloaded bottles. ccmd makes it all visible, scannable for vulnerabilities, and safely deletable. Install via Homebrew: brew tap juliensimon/tap && brew install ccmd, or via cargo: cargo binstall ccmd. Comments URL: https://news.ycombinator.com/item?id=47808194 Points: 2 # Comments: 0
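The disk-accounting half of a tool like this is essentially du over known cache roots. A minimal Python sketch (not ccmd's implementation, which is in Rust; dir_size is an invented helper, and the demo builds a throwaway tree instead of touching real caches):

```python
import os
import tempfile
from pathlib import Path

def dir_size(root: Path) -> int:
    """Total size in bytes of all regular files under `root` (symlinks skipped)."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            p = Path(dirpath) / name
            if not p.is_symlink():
                total += p.stat().st_size
    return total

# Demo on a fake cache tree, mimicking something like ~/.cache/pip.
with tempfile.TemporaryDirectory() as tmp:
    cache = Path(tmp) / "pip" / "wheels"
    cache.mkdir(parents=True)
    (cache / "pkg.whl").write_bytes(b"x" * 1024)
    print(dir_size(Path(tmp)))  # 1024
```

The interesting parts of ccmd sit on top of this: knowing which directories are caches for which tools, and cross-referencing cached package versions against CVE data before offering deletion.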