Thursday, April 30, 2026

Wednesday, April 29, 2026

Show HN: Drive any macOS app in the background without stealing the cursor https://ift.tt/Yp1Iwyh

Show HN: Drive any macOS app in the background without stealing the cursor Hi HN, Francesco from Cua here. I hacked this project together last weekend, inspired by the Codex Computer-Use release and lessons learned from deploying GUI-operating agents for our customers.

The main problem: when a UI automation process controls a desktop app today, it usually takes over the human's session. Your cursor moves, keyboard focus gets stolen, windows jump to the front, and you have to stop working until the agent is done. That is why we have historically avoided encouraging users to run these processes directly on their host machine, relying instead on VMs or GUI containers for concurrency and background execution. But computer-use - the tools we give agents to operate computers like humans - does not scale cleanly that way. As models get smarter, agents need to share hosts safely, run in the background, and avoid collisions with the human or other agents using the same machine.

We realized macOS has no first-class API for "drive this app without touching the cursor". CGEventPost routes through the hardware input stream, so it moves your cursor. CGEvent.postToPid avoids the cursor warp, but Chromium treats those events as untrusted and silently drops clicks at the renderer boundary. Activating the target app first raises the window and pulls focus, defeating the point of background execution.

Cua Driver is our attempt at a real fix: a background computer-use driver for macOS that lets an agent click, type, scroll, and read native apps while your cursor, frontmost app, and Space stay where they are. The default interface is a CLI, so it is easy to script or call from any coding agent shell. Try it on macOS 14+: /bin/bash -c "$(curl -fsSL https://ift.tt/ThAeNzy... )"

The first internal use case was delegated demo recording. We ask Claude Code to drive an app while 'cua-driver recording start' captures the trajectory, screenshots, actions, and click markers. The result is an agent-generated product demo, Screen Studio inspired.

Other things we have used it for:
- Replacing Vercel's agent-browser and other browser-use CLIs. With Claude Code and Cua Driver, you do not need Chrome DevTools Protocol at all.
- A dev-loop QA agent that reproduces a visual bug, edits code, rebuilds, and verifies the UI while my editor stays frontmost.
- Personal-assistant flows that use iMessage from Claude Code, Hermes, or other general-purpose agent CLIs.
- Pulling visual context from Chrome, Figma, Preview, or YouTube windows I am not looking at, without relying on their APIs.

What made this harder than expected:
- CGEventPost warps the cursor because it goes through the HID stream.
- CGEvent.postToPid does not warp the cursor, but Chromium drops it at the renderer IPC boundary.
- Activating the target first raises the window and can drag you across Spaces.
- Electron apps stop keeping useful AX trees alive when windows are occluded, without a private remote-aware SPI.

The unlock was SkyLight. SLEventPostToPid is a sibling of the public per-PID call, but it travels through a WindowServer channel Chromium accepts as trusted. Pair it with yabai's focus-without-raise pattern, plus an off-screen primer click at (-1, -1), and the click lands without the window ever raising.

One thing we learned: the right addressing mode depends on the app. Native macOS apps usually have rich AX trees, Chromium-family apps often need a hybrid of AX and screenshots, and apps like Blender or CAD tools may expose almost no useful AX surface. The mistake is defaulting to pixels everywhere - or defaulting to AX everywhere.

Long technical writeup: https://ift.tt/SPXnfhr... I would like feedback from people building Mac automation, agent harnesses, or accessibility tooling. If it breaks on a macOS app you care about, that is useful data for us. https://ift.tt/uXf2Fsz April 28, 2026 at 09:33PM

Show HN: I mapped the latest UK fuel prices by county https://ift.tt/T7pLjPH

Show HN: I mapped the latest UK fuel prices by county I built this using the official UK government forecourt fuel price feed. The map aggregates the latest petrol and diesel prices by county, with filters for fuel type and metric. Clicking a county shows the cheapest forecourt, average price, spread, and station count. The feed covers roughly 8,000 UK forecourts and refreshes every 30 minutes. Retailers publish the prices, so there can still be gaps in the data/stations, but it's getting better over time. https://ift.tt/VrZzi1y April 29, 2026 at 12:12AM
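The per-county panel described above (cheapest forecourt, average price, spread, station count) amounts to a simple group-by over the feed. A minimal sketch of that aggregation, assuming a flat (county, station, price) layout rather than the feed's real schema - this is illustrative, not the site's actual code:

```python
from collections import defaultdict

def aggregate_by_county(stations):
    """stations: iterable of (county, station_name, price) tuples.
    Returns per-county stats mirroring the map's click-through panel."""
    by_county = defaultdict(list)
    for county, name, price in stations:
        by_county[county].append((price, name))
    out = {}
    for county, rows in by_county.items():
        prices = [p for p, _ in rows]
        cheapest_price, cheapest_name = min(rows)  # tuple sort: price first
        out[county] = {
            "cheapest": cheapest_name,
            "cheapest_price": cheapest_price,
            "average": sum(prices) / len(prices),
            "spread": max(prices) - min(prices),
            "stations": len(rows),
        }
    return out
```

With retailer-published data, a real version would also have to tolerate missing or stale rows per station, which the post notes is still a gap.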

Join Us May 3: Muni Appreciation Day Kicks Off SF City Football Club's New Season

Join Us May 3: Muni Appreciation Day Kicks Off SF City Football Club's New Season
By Danbee Song

When you head out for a match, you'll spot the iconic Muni “worm” logo on SFCFC jerseys. The 2026 San Francisco City Football Club (SFCFC) season is almost here — and there’s more to celebrate than just soccer. SFCFC is the country’s first — and San Francisco’s only — supporter-owned soccer club. This weekend, it returns with fresh energy, a full home schedule and a continued partnership with Muni. The season kicks off Sunday, May 3 with Muni Appreciation Day. This special match celebrates the riders and employees who help keep San Francisco moving. Whether you're a longtime supporter or a...



Published 2026-04-28T00:00:00Z
https://ift.tt/y1eXwQ4

Show HN: Open Bias – proxy that enforces agent behavior at runtime https://ift.tt/StXUCs4

Show HN: Open Bias – proxy that enforces agent behavior at runtime https://ift.tt/I7an4Qd April 29, 2026 at 12:02AM

Tuesday, April 28, 2026

Show HN: Waiting for LLMs Suck – Give your user a game https://ift.tt/Zkr2RWn

Show HN: Waiting for LLMs Suck – Give your user a game Give your user a game while they wait for the LLM to return a result. https://ift.tt/z7D6qS5 April 28, 2026 at 08:15AM

Show HN: AgentSwift – open-source iOS builder agent https://ift.tt/V5E9Aa2

Show HN: AgentSwift – open-source iOS builder agent I'm working on a coding agent for building iOS apps. It's built on openspec and xcodebuildmcp. It's free and open source. https://ift.tt/03WPcNl April 28, 2026 at 06:44AM

Show HN: 49Agents – Infinite canvas IDE for AI agents https://ift.tt/8WObv2Y

Show HN: 49Agents – Infinite canvas IDE for AI agents https://ift.tt/u5KgYXz April 28, 2026 at 06:06AM

SFMTA Board of Directors Approves New Budget

SFMTA Board of Directors Approves New Budget
By Caroline Cabral

Our newly approved budget will help us continue to deliver the services San Franciscans depend on. Last week, our Board of Directors approved our next two-year budget. The budget is balanced and preserves crucial services. It prioritizes fast, safe and reliable Muni service. It maintains paratransit service. And it preserves free and discounted Muni fares for youth, seniors and people with disabilities. The directors approved an operating budget of $1.5 billion in FY26-27 and $1.6 billion in FY27-28. And they approved a capital budget of $655 million in FY26-27 and $546 million in FY27-28. Our...



Published 2026-04-27T00:00:00Z
https://ift.tt/c0kHZrR

Monday, April 27, 2026

Show HN: Time Pin – a Geo Guessr style game but history themed https://ift.tt/oln1s82

Show HN: Time Pin – a Geo Guessr style game but history themed Hi! Any history nerds here? I made Time Pin, a little game inspired by Geo Guessr but history-themed. You can play it here (it works on both desktop and mobile). Any feedback is appreciated: https://ift.tt/htqkmUZ Now some details: The goal is to guess the time and place that a character is from. You base your guess on some environmental photos, and on questions that you can ask the character (you have 12 questions but can only ask 5, so you have to choose carefully). The closer you are, the more points you get. At the end, a portrait picture of the character is revealed, as well as educational resources to learn more about their culture and era (articles, videos, podcasts, etc.). The game only has 5 levels currently, but I hope to have over 100 someday. It’s tough to create levels because it requires some research, plus generating photos with AI (AI is necessary; otherwise we’d only have photos starting from the 19th century, when the camera was invented). My goal for the game was to create a challenge, and also maybe spark some curiosity for history. https://ift.tt/htqkmUZ April 27, 2026 at 12:46AM

Show HN: WaveletLM – wavelet-based, attention-free model with O(n log n) scaling https://ift.tt/Qe9VWbM

Show HN: WaveletLM – wavelet-based, attention-free model with O(n log n) scaling WaveletLM is a wavelet-based, attention-free architecture that replaces self-attention with learned lifting wavelet decomposition, a Fast Walsh-Hadamard Transform, per-scale gated spectral mixing with SwiGLU activation, an inverse FWHT, and wavelet reconstruction. Combined with expanded MLPs and sparse product-key memory, this yields a model with O(n log n) scaling in sequence length. With 23.8 PPL on WikiText-103, WaveletLM beats both GPT-2 Medium, which was trained on 80× more data, and Transformer-XL Standard, which uses recurrence to extend its effective context. It is undertrained and underregularized due to budget constraints, so there is much room for development and improvement. I invite anyone who is curious to examine the model, test it out, and extend its capabilities further. All code and weights are fully open source, and a PG-19 run will be completed in 2-3 days. Generations can be done in 4-5 GB VRAM at 28.8 tokens/second, and the model is trainable in 16.25 hours with 20 GB of VRAM, both on a 5090. README for comparison tables, instructions, logs, and future plans: https://ift.tt/AXhORWl Weights: https://ift.tt/Mc8z7AK Generations: https://ift.tt/HfWQYdb... The following samples were chosen for coherence, not factual accuracy. Factuality will require scaling and downstream techniques such as RAG and instruction tuning. > The history of the city is reflected in its architecture, which includes the historic Old Town and New Castle County Courthouse Square Historic District. The building was designed by John H. Stevens, who also designed the Albany-Fulton Celebration in 1906 and built a steel-hulled shipyard on the lake shore. > The album was released on August 25, 2007 by Sony Music Entertainment and features several songs from the record including "Never Say Die", "The Show", "Don't Cry for Me Argentina" and a cover of "I Can Only Imagine (But You Are Not Alone)". 
> The species was first described by Swedish zoologist Carl Linnaeus in 1758 as Agaricus adustus. The genus name is derived from the Latin words perma "to tie", and pous ("like") means "with a large head". In 1821, French mycologists Jean-Baptiste de Lacaille placed it in section Cricetae of the order Carnivora. He later renamed it Spongiforma punctata after the Greek kribensis. https://ift.tt/AXhORWl April 26, 2026 at 11:18PM

Sunday, April 26, 2026

Show HN: SVG Fitter – Rust+WASM Vectorizer https://ift.tt/4RmKdLT

Show HN: SVG Fitter – Rust+WASM Vectorizer I went a bit crazy building a tool that helps me trace raster images. Thought others might like it. It doesn't auto-vectorize the image, but rather allows for a guided process. The final SVG should still be edited. A few fun features like genetic-algorithm fit optimization, semi-manual tracing and color preservation. Perfect if you want a lightweight SVG from a huge PNG image. Note: If there's interest I might open-source it, just not sure if anyone would want to see it :) https://svg.axk.sh April 25, 2026 at 10:21PM

Show HN: Odozi – open-source iOS journaling app https://ift.tt/DjEcW7J

Show HN: Odozi – open-source iOS journaling app Yeah, I know, I hate the name too, but I wasn't about to pay up for odyssey.app. It's an open source project so feel free to poke around with it / fork it. I talk about it more on the marketing website, but a few of us have been using it for the past month and it's kind of fun. Obviously there will be a slew of issues / feedback / nits that come from this, but c'est la vie. GH is here: https://ift.tt/qTeAiRd https://odozi.app April 25, 2026 at 09:22PM

Show HN: Quay – Menu-bar Git sync https://ift.tt/EvjdabC

Show HN: Quay – Menu-bar Git sync I write Astro blog posts in a text editor; when I'm done I want them pushed to GitHub so Cloudflare deploys the site. To make it comfortable, I built Quay for the menu bar. Also useful for Obsidian vault syncing. Point it at a folder, connect a GitHub repo, and it stages/commits/pushes/pulls. Multiple repos, editable commit messages, branch switching, merges with conflict detection. Shows open issue and PR counts per repo. But it is not a full Git client (no diffs, blame, cherry-pick, or rebase) and it doesn't create remote repos. Native macOS app (Swift/SwiftUI). Wraps the local git binary (prompts to install Xcode Command Line Tools if missing). No custom Git implementation. Sandboxed, no telemetry, GitHub-only. macOS only. 7-day trial, €9 one-time on the App Store. https://ift.tt/UHZPKDn April 25, 2026 at 11:53PM

Saturday, April 25, 2026

120 Years Later: The 1906 Earthquake in 13 Photos

120 Years Later: The 1906 Earthquake in 13 Photos
By Jeremy Menzies

On April 18, 1906, the ground under San Francisco shook violently. A 7.9 magnitude earthquake hit at 5:12 a.m. as residents slept. The Great San Francisco Earthquake and Fires nearly destroyed the city. More than half the residents were displaced from their homes. And the transit system was devastated. In 1906, United Railroads of San Francisco ran most of the city’s transit lines. Company photographer John Henry Mentz documented the tragedy in a series of photographs. He took 14 photos on the day of the quake. And 13 of them have been preserved in the SFMTA Photo Archive collections. For the...



Published 2026-04-24T00:00:00Z
https://ift.tt/bO8aPvq

Show HN: TurbineFi – Build, Backtest, Deploy Prediction Market Strategies https://ift.tt/zfhkMrX

Show HN: TurbineFi – Build, Backtest, Deploy Prediction Market Strategies Hey HN! We just finished our first major build of TurbineFi, an AI-assisted workflow for building, backtesting, and running prediction market strategies. There are over 1,000 community strategies you can try out, there's a backtesting engine integrated in the workflow, and you get your own sandbox to execute the trades 24/7. Currently live for Kalshi, Polymarket coming soon. We developed a custom DSL to make compiling AI-assisted strategies more deterministic than raw Python generation, so creating a strategy takes seconds even on low-tier models (thinking of migrating to a self-hosted model soon to reduce costs). We also worked with Locus (YCF25) to do the sandbox provisioning, so that we never manage keys for users. When a user signs up with their email, Privy creates a wallet for them, and then that wallet uses the X402 agent payment protocol to pay for their own server. We created a deployment harness around it that accepts and runs new code via a hosted API, so once it's up, every deployment is authorized by EIP-712 signatures. It keeps everything non-custodial, and code deployments happen in seconds. And users don't really realize they're using crypto rails. Turbine also includes weather and crypto historical information, so you can do things like fade the BTC-15min UP markets when it's cold in NYC, and backtest and run it in seconds. Adding sports data soon. There's a 7-day trial if you want to poke around. Would appreciate feedback on which strategies you'd want to try first, so we can make sure we have the infra to support them. Thank you! https://ift.tt/znumCv3 April 24, 2026 at 08:47PM

Friday, April 24, 2026

Show HN: Tron Hilbert Curve Macro https://ift.tt/RhLBGwP

Show HN: Tron Hilbert Curve Macro is it useful? probably not! https://ift.tt/uBsTbJR April 24, 2026 at 01:54AM

Show HN: Python 0.9.1 from 1991, Guido van Rossum's first public release https://ift.tt/Z0i31SQ

Show HN: Python 0.9.1 from 1991, Guido van Rossum's first public release https://ift.tt/P6hOyaz April 23, 2026 at 10:54PM

Show HN: Turning a Gaussian Splat into a videogame https://ift.tt/9nWIpMe

Show HN: Turning a Gaussian Splat into a videogame https://ift.tt/rCIU7Ly April 23, 2026 at 07:48PM

Thursday, April 23, 2026

Show HN: One ESLint rule to kill the "ChatGPT em dash" in your codebase https://ift.tt/f4H8jC5

Show HN: One ESLint rule to kill the "ChatGPT em dash" in your codebase https://ift.tt/jmbEKLz April 23, 2026 at 01:27AM

Show HN: Netlify for Agents https://ift.tt/kLmT6Xt

Show HN: Netlify for Agents I launched Netlify with a Show HN more than 11 years ago today, for humans. Today we're launching our agent-first version of Netlify. It's super early days for this, but I expect it to become as important as our original launch over time. It's as hard to perfect these flows as it was to perfect some of the initial human DX flows, since the agents are non-deterministic and keep changing and evolving, and we'll have more to show soon on our eval tooling for this. Try it out with an agent, and we would love feedback on what works and what doesn't as we keep iterating on making Netlify better for our new agent friends. https://netlify.ai April 22, 2026 at 10:27PM

Show HN: Everest Drive – a multiplayer spaceship crew simulator in the browser https://ift.tt/5wrxBE6

Show HN: Everest Drive – a multiplayer spaceship crew simulator in the browser Hi HN! I'm working on an open-world multiplayer space sim with submarine-warfare-inspired combat. Crew a ship, haul cargo, run heists, hunt your foes with passive and active sensors. Browser-based, free, no install.

Some of its features:
- Submarine-style passive sensors. Contacts start as a bearing line (direction, no distance), resolve into an uncertainty circle, then into a full track. You triangulate over time by moving.
- Silent running. Cut your emissions and witnesses can't ID you.
- Newtonian flight. No drag, no auto-brake. Flip 180° and burn to stop.
- Boarding combat. Dock with another ship and fight through it room by room.

Architecture:
- The server is a single Rust module compiled to WASM, running inside SpacetimeDB.
- Clients subscribe to rows in the schema and get live deltas over websocket; writes go through reducers (transactional Rust functions). No REST, no custom netcode, no client-side authority.
- Client is Svelte 5 + plain HTML5 canvas 2D. No game engine, no WebGL. https://ift.tt/fR6dMvC

Very early, plenty of rough edges. Would love to hear what breaks for you: https://everestdrive.io https://ift.tt/ejFbURM April 22, 2026 at 11:27PM
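The "triangulate over time by moving" mechanic boils down to intersecting two bearing lines taken from different ship positions. A minimal sketch of that geometry (my own illustration, not the game's code; bearings here are radians from the +x axis, a simplification of real compass bearings):

```python
import math

def cross(a, b):
    """2-D cross product (scalar)."""
    return a[0] * b[1] - a[1] * b[0]

def triangulate(p1, brg1, p2, brg2):
    """Intersect two bearing rays observed from positions p1 and p2.
    Solves p1 + t1*d1 = p2 + t2*d2 for the contact's position."""
    d1 = (math.cos(brg1), math.sin(brg1))
    d2 = (math.cos(brg2), math.sin(brg2))
    denom = cross(d1, d2)
    if abs(denom) < 1e-9:
        return None  # parallel bearings: no fix yet, keep maneuvering
    r = (p2[0] - p1[0], p2[1] - p1[1])
    t1 = cross(r, d2) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

This is why a contact that starts as a bearing-only line sharpens as you move: a second observation from a new position gives the intersection, and noisy bearings give the uncertainty circle around it.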

Wednesday, April 22, 2026

Show HN: FMQL – graph query and bulk-edit CLI for Markdown and YAML frontmatter https://ift.tt/8styA0c

Show HN: FMQL – graph query and bulk-edit CLI for Markdown and YAML frontmatter https://ift.tt/APrXMBb April 22, 2026 at 01:38AM

Show HN: Almanac MCP, turn Claude Code into a Deep Research agent https://ift.tt/wIOJZaA

Show HN: Almanac MCP, turn Claude Code into a Deep Research agent I am Rohan, and I have grown really frustrated with CC's search and read tools. They use Haiku to summarise all the search results, so it is really slow and often ends up being very lossy. I built this MCP that you can install into your coding agents so they can actually access the web properly. Right now it can:
- search the general web
- search Reddit
- read and scrape basically any webpage
Install it: npx openalmanac setup
The MCP is completely free to use. We have also built a central store where you can contribute things you learned while exploring. If you find something useful, you can contribute it to the encyclopedia we're building at Almanac using the same MCP. https://ift.tt/eBynXQ9 April 22, 2026 at 03:42AM

Show HN: Backlit Keyboard API for Python https://ift.tt/2o3ZVp4

Show HN: Backlit Keyboard API for Python It currently supports only Linux. You can use this package to tinker with many things. Say you want a custom notification system: if your website goes down, you can make the keyboard blink. macOS support is underway. I haven't tested Windows yet; I don't use it anymore, btw. In the future, if this package reaches nice growth, I'll be happy to make a similar Rust crate for it. https://ift.tt/BmaRMrv April 19, 2026 at 12:22PM

Tuesday, April 21, 2026

Show HN: Simple CLI tool to convert PDFs to dark mode, with TOC preservation https://ift.tt/37h0rMk

Show HN: Simple CLI tool to convert PDFs to dark mode, with TOC preservation Hi HN, I made a little something that could be useful to those like me that read pdfs at night. https://ift.tt/ym5qNfk April 21, 2026 at 01:52AM

Show HN: Git Push No-Mistakes https://ift.tt/G9QHrVE

Show HN: Git Push No-Mistakes no-mistakes is how I kill AI slop. It puts a local git proxy in front of my real remote. I push to no-mistakes instead of origin, and it spins up a disposable worktree, runs my coding agent as a validation pipeline, forwards upstream only after every check passes, opens a clean PR automatically, and babysits the CI pipeline for me. https://ift.tt/iGrE7sV April 21, 2026 at 12:10AM

Show HN: AI Coding Agent Guardrails enforced at runtime https://ift.tt/M0xCHFY

Show HN: AI Coding Agent Guardrails enforced at runtime Hello, looking for users interested in a devtool that lets developers centrally manage guardrails for AI coding agents, with support for tools like Claude Code, Codex, Antigravity, etc. Try it free! https://ift.tt/qPJuIDn... https://sigmashake.com April 20, 2026 at 10:55PM

Monday, April 20, 2026

Show HN: Newsmaps.io a map of how news topics are covered by different countries https://ift.tt/HX0JYAs

Show HN: Newsmaps.io a map of how news topics are covered by different countries https://ift.tt/tyrjwQk April 20, 2026 at 02:32AM

Show HN: A privacy-first, local-LLM note app for iOS (Google Keep alternative) https://ift.tt/fkdSzMQ

Show HN: A privacy-first, local-LLM note app for iOS (Google Keep alternative) https://ift.tt/EXrkbSM April 19, 2026 at 10:29PM

Show HN: Free PDF redactor that runs client-side https://ift.tt/C4RPsN2

Show HN: Free PDF redactor that runs client-side I recently needed to verify past employment, and to do so I was going to upload paystubs from a previous employer; however, I didn't want to share my salary in that role. I did a quick search online, and most sites required sign-up or weren't clear about document privacy. I conceded and signed up for a free trial of Adobe Acrobat so I could use their PDF redaction feature. I figured there should be a dead-simple way of doing this that's private, so I decided to create it myself. What this does is rasterize each page to an image with your redactions burned in, then rebuild the PDF so the text layer is permanently destroyed, not just covered up and easily retrievable. I welcome any and all feedback as this is my first live tool, thanks! https://redactpdf.net April 20, 2026 at 12:09AM
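The rasterize-and-burn approach can be shown with a toy model (not the site's actual code; the page structure here is invented for illustration). The key property: the redacted output carries only pixels, so there is no text layer left under the box to copy or extract, unlike a black rectangle drawn over live text:

```python
def redact(page, boxes):
    """Toy destructive redaction. A 'page' is
    {'width', 'height', 'text_runs': [(x, y, string)]}.
    Rasterize to a pixel grid, paint the boxes, return pixels only."""
    # rasterize: mark a pixel wherever a character would be drawn
    grid = [[0] * page["width"] for _ in range(page["height"])]
    for x, y, s in page["text_runs"]:
        for i in range(len(s)):
            grid[y][x + i] = 1  # 1 = ink
    # burn in the redactions: overwrite pixels, don't overlay a shape
    for x0, y0, x1, y1 in boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                grid[y][x] = 2  # 2 = solid black box
    # the returned page has no text layer at all
    return {"width": page["width"], "height": page["height"], "pixels": grid}
```

A real implementation renders each PDF page to an image, composites the boxes, and re-embeds the images as new pages, trading selectable text for guaranteed removal.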

Show HN: Faceoff – A terminal UI for following NHL games https://ift.tt/ci1HW6d

Show HN: Faceoff – A terminal UI for following NHL games Faceoff is a TUI app written in Python to follow live NHL games and browse standings and stats. I got the inspiration from Playball, a similar TUI app for MLB games that was featured on HN. The app was mostly vibe-coded with Claude Code, but not one-shot. I added features and fixed bugs by using it, as I spent way too much time in the terminal over the last few months. Try it out with `uvx faceoff` (requires uv). https://ift.tt/RzeKflA April 19, 2026 at 11:14PM

Sunday, April 19, 2026

Show HN: AI Subroutines – Run automation scripts inside your browser tab https://ift.tt/nfGk4aK

Show HN: AI Subroutines – Run automation scripts inside your browser tab We built AI Subroutines in rtrvr.ai. Record a browser task once, save it as a callable tool, and replay it with zero token cost, zero LLM inference delay, and zero mistakes. The subroutine itself is a deterministic script composed of discovered network calls hitting the site's backend, as well as page interactions like click/type/find. The key architectural decision: the script executes inside the webpage itself, not through a proxy, not in a headless worker, not out of process. The script dispatches requests from the tab's execution context, so auth, CSRF, TLS session, and signed headers get added to all requests and propagate for free. No certificate installation, no TLS fingerprint modification, no separate auth stack to maintain. During recording, the extension intercepts network requests (MAIN-world fetch/XHR patch + webRequest fallback). We score and trim ~300 requests down to ~5 based on method, timing relative to DOM events, and origin. Volatile GraphQL operation IDs are detected and force a DOM-only fallback before they break silently on the next run. The generated code combines network calls with DOM actions (click, type, find) in the same function via an rtrvr.* helper namespace. Point the agent at a spreadsheet of 500 rows, and with just one LLM call parameters are assigned and 500 Subroutines are kicked off.
Key use cases:
- record sending an IG DM, then have a reusable, callable routine to send DMs at zero token cost
- create a routine that gets the latest products in a site catalog, and call it to get thousands of products via direct GraphQL queries
- set up a routine to file an EHR form based on parameters to the tool; AI infers the parameters from the current page context and calls the tool
- reuse a routine daily to sync outbound messages on LinkedIn/Slack/Gmail to a CRM using an MCP server

We think the fundamental reason browser agents haven't taken off is that, for repetitive tasks, going through the inference loop is unnecessary. Better to just record once and get the LLM to generate a script leveraging all the possible ways to interact with a site and the wider web: directly calling backend APIs, interacting with the DOM, and calling 3P tools/APIs/MCP servers. https://ift.tt/PQMo5tl April 18, 2026 at 02:33AM
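The "score and trim ~300 requests down to ~5" step can be illustrated with a toy ranking function. This is a hypothetical re-creation of the idea, not rtrvr's code; the request fields and weights are invented for the sketch:

```python
def trim_requests(requests, action_ts, top_k=5):
    """Rank recorded network requests by how likely they implement the
    recorded user action, then keep the top few. Scoring signals mirror
    the ones named in the post: method, timing vs the DOM event, origin."""
    def score(req):
        s = 0.0
        if req["method"] in ("POST", "PUT", "PATCH", "DELETE"):
            s += 2.0                       # state-changing calls usually matter
        dt = abs(req["ts"] - action_ts)
        s += max(0.0, 1.0 - dt / 2.0)      # fired close to the DOM event
        if req["same_origin"]:
            s += 1.0                       # first-party backend call
        return s
    return sorted(requests, key=score, reverse=True)[:top_k]
```

In a real recorder the scored survivors would then be checked for volatile tokens (e.g. GraphQL operation IDs) before being baked into the replay script.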

Show HN: Praxis – Lab data to publication-ready figures in one Python package https://ift.tt/CgwOGbo

Show HN: Praxis – Lab data to publication-ready figures in one Python package https://ift.tt/Xj84lVx April 18, 2026 at 11:45PM

Saturday, April 18, 2026

Show HN: I turned my MacBook notch into a live Claude Code dashboard https://ift.tt/7fTxpgh

Show HN: I turned my MacBook notch into a live Claude Code dashboard https://ift.tt/7vcBalI April 17, 2026 at 09:13PM

Show HN: web-pinentry: a pinentry program that leverages matrix and http https://ift.tt/S5TBsZY

Show HN: web-pinentry: a pinentry program that leverages matrix and http I made this tool that allows server admins to decrypt passwords during bootup with the help of Matrix and HTTP. https://ift.tt/ixsrW8m April 18, 2026 at 12:32AM

Show HN: Smol machines – subsecond coldstart, portable virtual machines https://ift.tt/RITwf1v

Show HN: Smol machines – subsecond coldstart, portable virtual machines https://ift.tt/0rnBWbq April 17, 2026 at 10:48PM

Friday, April 17, 2026

Show HN: Tracking Top US Science Olympiad Alumni over Last 25 Years https://ift.tt/8raBJeo

Show HN: Tracking Top US Science Olympiad Alumni over Last 25 Years Interesting to see that the entrepreneurs from more recent years tend to be doing well relative to years prior. Some interesting future directions could be:
- Expanding the search to be global and including more competitions, like biology and chemistry
- Improving the search so there are fewer unknown results
- Showing insights, like trends over the years
Kudos to Perplexity Computer for making this https://ift.tt/8oKSknx April 17, 2026 at 03:32AM

Show HN: Marky – A lightweight Markdown viewer for agentic coding https://ift.tt/XBMI3co

Show HN: Marky – A lightweight Markdown viewer for agentic coding Hey HN, In this age of agentic coding I've found myself spending a lot of time reviewing markdown files. Whether it's plans or documentation that I've asked my agent to generate for me, it seems that I spend more time reading markdown than code. I've tried a few different solutions to make it easier to read, such as Obsidian; however, I've found their Vault system to be quite limiting for this use case, and I've found TUI solutions not quite as friendly to read as I've wanted, so I made Marky. Marky is a lightweight desktop application that makes it incredibly easy to read and track your markdown files. It also has a helpful CLI, so you can just run marky FILENAME and have the app open to the md file that you pointed it at. I've been using it daily over the past week and I really enjoy it, so I figured I'd share it. Here's a video if you want to check out a demo: https://www.youtube.com/watch?v=nGBxt8uOVjc . I have plans to add more features, such as incorporating agentic tools like claude code and codex into the UI, as well as developing a local git diff reviewer to allow me to do local code review before pushing up to git. I'd love to hear your thoughts and any feature suggestions you may have :) https://ift.tt/R7rgG3q April 16, 2026 at 09:38PM

Show HN: Online Sound Decibel Meter https://ift.tt/uBmXWyj

Show HN: Online Sound Decibel Meter https://ift.tt/JTR7fcK April 17, 2026 at 12:09AM

Show HN: Stage – Putting humans back in control of code review https://ift.tt/DiqCVU0

Show HN: Stage – Putting humans back in control of code review Hey HN! We're Charles and Dean, and we're building Stage: a code review tool that guides you through reading a PR step by step, instead of piecing together a giant diff. Here's a demo video: https://ift.tt/VwUj7f4 . You can play around with some example PRs here: https://ift.tt/hbPu6QK . Teams are moving faster than ever with AI these days, but more and more engineers are merging changes that they don't really understand. The bottleneck isn't writing code anymore, it's reviewing it. We're two engineers who got frustrated with GitHub's UI for code review. As coding agents took off, we saw our PR backlog pile up faster than we could handle. Not only that, the PRs themselves were getting larger and harder to understand, and we found ourselves spending most of our time trying to build a mental model of what a PR was actually doing. We built Stage to make reviewing a PR feel more like reading chapters of a book, not an unorganized set of paragraphs. We use it every day now, not just to review each other's code but also our own, and at this point we can't really imagine going back to the old GitHub UI. What Stage does: when a PR is opened, Stage groups the changes into small, logical "chapters". These chapters get ordered in the way that makes most sense to read. For each chapter, Stage tells you what changed and specific things to double check. Once you review all the chapters, you're done reviewing the PR. You can sign in to Stage with your GitHub account and everything is synced seamlessly (commenting, approving etc.) so it fits into the workflows you're already used to. What we're not building: a code review bot like CodeRabbit or Greptile. These tools are great for catching bugs (and we use them ourselves!) but at the end of the day humans are responsible for what gets shipped. It's clear that reviewing code hasn't scaled the same way that writing did, and they (we!) 
need better tooling to keep up with the onslaught of AI-generated code, which is only going to grow. We've had a lot of fun building this and are excited to take it further. If you're like us and are also tired of using GitHub for reviewing PRs, we'd love for you to try it out and tell us what you think! https://ift.tt/Op2e4ks April 16, 2026 at 11:06PM

Thursday, April 16, 2026

Show HN: US keyboards don't have enough keys, so I switched to Japanese https://ift.tt/N5Gea6L

Show HN: US keyboards don't have enough keys, so I switched to Japanese https://ift.tt/0BgZhNv April 16, 2026 at 02:27AM

Show HN: Jeeves – TUI for browsing and resuming AI agent sessions https://ift.tt/UYjWQ25

Show HN: Jeeves – TUI for browsing and resuming AI agent sessions I made Jeeves to search, preview, read through, and resume AI agent sessions in your terminal. It shows sessions across claude and codex in a single view, with more AI agent framework integrations to come. https://ift.tt/vrzXM4R April 16, 2026 at 01:01AM

Show HN: Fakecloud – Free, open-source AWS emulator https://ift.tt/sAONlx9

Show HN: Fakecloud – Free, open-source AWS emulator https://ift.tt/g75VUIq April 15, 2026 at 11:22PM

Wednesday, April 15, 2026

Show HN: Sk.illmd.com, a forum for talking about and showing off agent skills https://ift.tt/3dsfUuw

Show HN: Sk.illmd.com, a forum for talking about and showing off agent skills https://ift.tt/713pkt8 April 15, 2026 at 01:07AM

Show HN: A Claude Code–driven tutor for learning algorithms in Go https://ift.tt/fn9xkWb

Show HN: A Claude Code–driven tutor for learning algorithms in Go https://ift.tt/qP6Csyw April 14, 2026 at 11:11PM

Tuesday, April 14, 2026

Show HN: Encrypted, nothing stored, nothing repeated face-gated asset sharing https://ift.tt/NJ34wv9

Show HN: Encrypted, nothing stored, nothing repeated face-gated asset sharing https://veylt.net/ April 13, 2026 at 11:40PM

Show HN: I benchmarked Gemma 4 E2B – the 2B model beat the 12B on multi-turn https://ift.tt/ZFMUW3b

Show HN: I benchmarked Gemma 4 E2B – the 2B model beat the 12B on multi-turn https://ift.tt/rmxKG5J April 14, 2026 at 01:09AM

Show HN: pg_grpc – Call gRPC services directly from PostgreSQL https://ift.tt/YRSUsm3

Show HN: pg_grpc – Call gRPC services directly from PostgreSQL https://ift.tt/C7ZQPM1 April 13, 2026 at 11:20PM

Monday, April 13, 2026

Show HN: Stork – MCP server so Claude/Cursor can search 14k MCP servers' AI tools https://ift.tt/IvJrcVs

Show HN: Stork – MCP server so Claude/Cursor can search 14k MCP servers' AI tools https://www.stork.ai April 13, 2026 at 01:19AM

Show HN: A social feed with no strangers https://ift.tt/NlJ3QcV

Show HN: A social feed with no strangers Grateful is a gratitude app with a simple social layer. You write a short entry, keep it private or share it to a circle. A circle is a small private group of your own making — family, close friends, whoever you'd actually want to hear from. It shows you the most recent post first. People in the circle can react or leave a comment. There's also a daily notification that sends you something you were grateful for in the past. Try it out on both iOS and Android. Go to grateful.so https://ift.tt/HFO8PBl April 13, 2026 at 04:11AM

Show HN: Rekal – Long-term memory for LLMs in a single SQLite file https://ift.tt/APuSU4V

Show HN: Rekal – Long-term memory for LLMs in a single SQLite file I got tired of repeating myself to my LLM every session. rekal is an MCP server that stores memories in SQLite and retrieves them with hybrid search (BM25 + vectors + recency decay). One file, local embeddings, no API keys. https://ift.tt/McwHoE9 April 13, 2026 at 02:55AM
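Rekal's hybrid-retrieval idea (BM25 plus recency decay in a single SQLite file) can be sketched with SQLite's built-in FTS5. The schema, weighting, and half-life below are assumptions for illustration, the vector leg is omitted, and none of this is rekal's actual code:

```python
import math
import sqlite3
import time

# Assumed schema: an FTS5 table of memory texts with a creation timestamp.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE memories USING fts5(text, created_at UNINDEXED)")

now = time.time()
rows = [
    ("user prefers tabs over spaces", now - 86400 * 30),  # a month old
    ("user is writing a Rust CLI", now - 3600),            # an hour old
]
db.executemany("INSERT INTO memories VALUES (?, ?)", rows)

HALF_LIFE = 86400 * 14  # assumed: a memory loses half its weight every two weeks

def search(query: str, k: int = 5):
    hits = db.execute(
        "SELECT text, created_at, bm25(memories) FROM memories "
        "WHERE memories MATCH ? ORDER BY bm25(memories) LIMIT 20",
        (query,),
    ).fetchall()
    scored = []
    for text, created, rank in hits:
        decay = math.exp(-(now - created) * math.log(2) / HALF_LIFE)
        # FTS5's bm25() is lower-is-better, so negate before weighting
        scored.append((-rank * decay, text))
    return [t for _, t in sorted(scored, reverse=True)[:k]]

top = search("rust")
```

FTS5's default tokenizer is case-insensitive, so "rust" matches "Rust"; the decay term is what lets a fresh, weakly matching memory outrank a stale, strongly matching one.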

Show HN: Claudraband – Claude Code for the Power User https://ift.tt/Ovq8feQ

Sunday, April 12, 2026

Show HN: Editing 2000 photos made me build a macOS bulk photo editor https://ift.tt/dByv16S

Show HN: Editing 2000 photos made me build a macOS bulk photo editor Last year, I had 2000+ photos from my wedding to edit. The shots were great, but the lighting was different in every room. Some photos were too dark, and some were too yellow. I wanted all the wedding photos to have the same look before I shared them with my family. I tried using Lightroom. I would copy the settings from one photo and paste them to the next, then adjust it, and repeat. This was very slow. If I used a simple batch edit on all photos, it looked bad because the lighting changed in every shot. After 40 minutes, I was not even halfway done. I had to choose between bad quality batch edits or fixing 2K photos one by one. I also did not want to upload my private wedding photos to a website or pay for a monthly subscription. I wanted a way to edit fast but still have control over each photo. I also wanted everything to stay private on my computer. So I built a Mac app called RapidPhoto. It lets you set the look once and apply it to the whole wedding set. The important part is that you can still quickly tweak individual photos that look a bit different without starting over. I also added a feature to change the metadata for many photos at once, which is helpful for organizing big events. The work that took me 40 minutes now takes about 90 seconds. It runs locally on your Mac with no uploads and there is no subscription. https://ift.tt/9oezh5B April 12, 2026 at 01:11AM

Show HN: A living Vancouver. Connor is walking dogs at the SPCA this morning https://ift.tt/ZlV9ihX

Show HN: A living Vancouver. Connor is walking dogs at the SPCA this morning I've spent most of my career in marketing, which for the last few years has meant building consumer personas for campaigns. I wanted to see if I could make these real: people living in real neighborhoods, with real weather, real budgets, real Saturday lunches. I always wanted to build a world, not a segment. This is that. 140 people so far, split across Vancouver (100), San Francisco (20), and Tokyo (20). Each one is about 1,000 lines of profile — family, finances, daily schedule, health, worldview, media diet, the channels you'd actually reach them through and the ones that will explicitly never work on them. Demographics are census-grounded: income, age, ethnicity, and household composition follow normal distributions fit against StatsCan, ACS, and Japanese e-Stat data, so the panel is roughly representative of the city instead of representative of whatever's overrepresented in an LLM's training corpus. The specific details come from real stories. They live in real local time on a live map. Right now it's Saturday 11:32 AM in Vancouver. Connor Hughes, a 31-year-old software developer at Clio in Gastown, is on his SPCA volunteer shift; he walks shelter dogs at the Boundary Road location every other Saturday morning. Hassan Khoury is in the lunch rush with Tony at his Lebanese café — it's his busiest day of the week. Ahmad Noori is pulling Saturday overtime on a construction site. Jordan Whitehorse is on mid-shift at East Cafe on Hastings. Every day is unique; no two days repeat. A 3 AM job fetches live data: weather from Open-Meteo, grocery CPI from StatsCan food vectors, Metro Vancouver transit delays from the Google Routes API against specific corridors, Vancouver gas prices, sunrise and sunset. Each persona has a modifier file that reacts to all of it.
When Vancouver gas hits $1.85/L, Jaspreet the long-haul trucker's Coquihalla run to Calgary stops feeling worth it: his margins are thin, and his mood takes a hit. When food CPI spikes, Gurinder at the Amazon warehouse stops buying the $9 Subway and brings roti from home. A health flare rolls probabilistically each morning: maybe nothing, maybe Tanya's six-month-old had a rough night, maybe Frank's back is acting up. The days stack up and get remembered. Every persona has a journal: today's entry in a markdown file, a week of them compressed into a "dream" of ~30 lines that keeps the shape without the texture, a month compressed into ~15 lines. It's their journal. I'm not writing it; the simulation is. Click any persona to open their detail, or hit "Talk to [name]" to have a conversation; they run on Claude Haiku with their full profile and recent diary entries as context. Not a product, not a startup, just a thing I've been quietly working on. They feel, in a way I didn't expect, like my fully grown kids. Happy to answer questions. https://brasilia-phi.vercel.app April 12, 2026 at 12:12AM
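The modifier-file mechanism described above (personas reacting to the daily data feed, plus a probabilistic health flare) can be sketched as a small function. The field names, thresholds, and flare probability are all invented for illustration and are not the project's actual files:

```python
import random

def apply_modifiers(persona: dict, feed: dict, rng=random.Random(0)) -> dict:
    """Mutate a persona based on today's live-data feed (illustrative only)."""
    mood = 0.0
    # Gas price reaction, as in the Jaspreet example above
    if persona["job"] == "long-haul trucker" and feed["gas_cad_per_l"] >= 1.85:
        persona["skips_run"] = True   # the Calgary run stops feeling worth it
        mood -= 0.3
    # Food CPI reaction, as in the Gurinder example above
    if feed["food_cpi_spike"] and persona["budget"] == "tight":
        persona["lunch"] = "roti from home"   # drops the $9 Subway
    # Daily probabilistic health flare (5% chance is an assumption)
    if rng.random() < 0.05:
        persona["flare"] = True
        mood -= 0.2
    persona["mood"] = mood
    return persona

jaspreet = {"job": "long-haul trucker", "budget": "ok"}
gurinder = {"job": "warehouse", "budget": "tight"}
feed = {"gas_cad_per_l": 1.91, "food_cpi_spike": True}
apply_modifiers(jaspreet, feed)
apply_modifiers(gurinder, feed)
```

The key design point is that the feed is fetched once per day and every persona's reaction is a pure function of profile plus feed, which keeps 140 personas cheap to update.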

Show HN: We scanned uscis.gov for third-party trackers. The results are jarring https://ift.tt/kIgonye

Show HN: We scanned uscis.gov for third-party trackers. The results are jarring https://ift.tt/CZ8sEPA April 11, 2026 at 07:13PM

Saturday, April 11, 2026

Show HN: Figma for Coding Agents https://ift.tt/OnVbK01

Show HN: Figma for Coding Agents Feels a bit like Figma, but for coding agents. Instead of going back and forth with prompts, you give the agent a DESIGN.md that defines the design system up front, and it generally sticks to it when generating UI. Google Stitch seems to be moving in this direction as a standard, so we put together a small collection of DESIGN.md files based on popular web sites. https://getdesign.md April 10, 2026 at 08:50PM

Friday, April 10, 2026

Show HN: Druids – Build your own software factory https://ift.tt/y1ESB9r

Show HN: Druids – Build your own software factory Hi HN! Druids ( https://ift.tt/leyYQzv ) is an open-source library for structuring and running multi-agent coding workflows. Druids makes it easy to do this by abstracting away all the VM infrastructure, agent provisioning, and communication. You can watch our demo video here ( https://www.youtube.com/watch?v=EVJqW-tvSy4 ) to see what it looks like. At a high level:
- Users can write Python programs that define what roles the agents take on and how they interact with each other.
- A program is made of events - clear state transitions that the agents or clients can call to modify state. Each event gets exposed as an agent tool.
- Druids provisions full VMs so that the agents can run continuously and communicate effectively.
We made Druids because we were making lots of internal coding tools using agents and found it annoying to have to rearrange the wiring every time. As we were building Druids, we realized a lot of our internal tools were easier to express as an event-driven architecture – separating deterministic control flow from agent behavior – and this design also made it possible to have many agents work reliably. We had issues with scaling the number of concurrent agents within a run, so we decided to have each program run in an isolated sandbox program runtime, kind of the same way you run a Modal function. Each agent then calls the runtime with an agent token, which checks who can talk to whom or send files across VMs, and then applies the tool call. Our early users have found the library useful for:
- running many agents to do performance optimization
- building custom automated software pipelines (e.g. code review, pentesting, large-scale migrations)
We've heard that the frontier labs have the infrastructure to quickly spin up 100 agents and have them coordinate with each other smoothly in various ways. We're hoping that Druids can be a starting point to make that infrastructure more accessible.
https://ift.tt/leyYQzv April 9, 2026 at 01:42AM
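The event model Druids describes (clear state transitions that agents call as tools, with deterministic control flow kept separate from agent behavior) can be sketched in a few lines. The Program class, event names, and call shapes below are invented for illustration and are not Druids' actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Program:
    """Toy event-driven program: state is only mutated via registered events."""
    state: dict = field(default_factory=dict)
    events: dict = field(default_factory=dict)

    def event(self, fn):
        # Register a state transition; in the real system each of these
        # would be exposed to agents as a callable tool.
        self.events[fn.__name__] = fn
        return fn

    def call(self, name: str, **kwargs):
        # Deterministic control flow: only registered transitions touch state
        return self.events[name](self.state, **kwargs)

prog = Program()

@prog.event
def submit_patch(state, author, diff):
    state.setdefault("patches", []).append({"author": author, "diff": diff})
    return "queued for review"

@prog.event
def approve(state, index):
    state["patches"][index]["approved"] = True
    return "approved"

r1 = prog.call("submit_patch", author="agent-1", diff="fix typo")
r2 = prog.call("approve", index=0)
```

The point of the pattern is that agent behavior (deciding when to call `submit_patch`) stays probabilistic while the transitions themselves stay auditable and deterministic.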

Show HN: Last Year I wrote a (Sci)fictional story where the EFF was a player [pdf] https://ift.tt/7BhlL9A

Show HN: Last Year I wrote a (Sci)fictional story where the EFF was a player [pdf] https://ift.tt/b8jaW60 April 9, 2026 at 11:43PM

Show HN: Logoshi, a brand kit generator for solo founders https://ift.tt/a2MwE1K

Show HN: Logoshi, a brand kit generator for solo founders https://logoshi.com/ April 9, 2026 at 10:12PM

Show HN: I built a Cargo-like build tool for C/C++ https://ift.tt/PpVQfwc

Show HN: I built a Cargo-like build tool for C/C++ I love C and C++, but setting up projects can sometimes be a pain. Every time I wanted to start something new I'd spend the first hour writing CMakeLists.txt, figuring out find_package, copying boilerplate from my last project, and googling why my library isn't linking. By the time the project was actually set up I'd lost all momentum. So, I built Craft - a lightweight build and workflow tool for C and C++. Instead of writing CMake, your project configuration goes in a simple craft.toml:

[project]
name = "my_app"
version = "0.1.0"
language = "c"
c_standard = 99

[build]
type = "executable"

Run craft build and Craft generates the CMakeLists.txt automatically and builds your project. Want to add dependencies? That's just a simple command:

craft add --git https://ift.tt/udoXZyq --links raylib
craft add --path ../my_library
craft add sfml

Craft will clone the dependency, regenerate the CMake, and rebuild your project for you. Other Craft features:
- craft init - adopt an existing C/C++ project into Craft or initialize an empty directory.
- craft template - save any project structure as a template to be initialized later.
- craft gen - generate header and source files with starter boilerplate code.
- craft upgrade - keeps itself up to date.
- CMakeLists.extra.cmake for anything that Craft does not yet handle.
- Cross platform - macOS, Linux, Windows.
It is still early (I just got it to v1.0.0) but I am excited to be able to share it and keep improving it. Would love feedback. Please also feel free to make pull requests if you want to help with development! https://ift.tt/HtzkMLp April 9, 2026 at 09:34PM

Thursday, April 9, 2026

Show HN: Captcha to detect bots with a simple question https://ift.tt/93yBre5

Show HN: Captcha to detect bots with a simple question https://joshryandavis.github.io/dumb-captcha/ April 9, 2026 at 07:14PM

Show HN: I built Dirac, Hash Anchored AST native coding agent, costs -64.8 pct https://ift.tt/0oW9Lny

Show HN: I built Dirac, Hash Anchored AST native coding agent, costs -64.8 pct Fully open source, a hard fork of Cline. Full evals on the GitHub page compare 7 agents (Cline, Kilo, Ohmypi, Opencode, Pimono, Roo, Dirac) on 8 medium-complexity tasks. Each task, each diff, and correctness + cost info are on the GitHub. Dirac is 64.8% cheaper than the average of the other 6. https://ift.tt/WgNDIsO April 9, 2026 at 05:36PM

Show HN: CSS Studio. Design by hand, code by agent https://ift.tt/JTk0epf

Show HN: CSS Studio. Design by hand, code by agent Hi HN! I've just released CSS Studio, a design tool that lives on your site, runs on your browser, sends updates to your existing AI agent, which edits any codebase. You can actually play around with the latest version directly on the site. Technically, the way this works is you view your site in dev mode and start editing it. In your agent, you can run /studio which then polls (or uses Claude Channels) an MCP server. Changes are streamed as JSON via the MCP, along with some viewport and URL information, and the skill has some instructions on how best to implement them. It contains a lot of the tools you'd expect from a visual editing tool, like text editing, styles and an animation timeline editor. https://cssstudio.ai April 9, 2026 at 04:53PM

Show HN: I built a local data lake for AI powered data engineering and analytics https://ift.tt/Uodh2BO

Show HN: I built a local data lake for AI powered data engineering and analytics I got tired of the overhead required to run even a simple data analysis - cloud setup, ETL pipelines, orchestration, cost monitoring - so I built a fully local data-stack/IDE where I can write SQL/Py, run it, see results, and iterate quickly and interactively. You get a data-lake-like catalog, zero-ETL, lineage, versioning, and analytics running entirely on your machine. You can import from a database, webpage, CSV, etc. and query in natural language or do your own work in SQL/PySpark. Connect to local models like Gemma or cloud LLMs like Claude for querying and analysis. You don't have to set up local LLMs; they come built in. This is completely free. No cloud account required.
Download the software - https://ift.tt/J6UHuAV
Watch a demo - https://www.youtube.com/watch?v=C6qSFLylryk
Check the code repo - https://ift.tt/d1XPahV
This is still early and I'd genuinely love your feedback on what's broken, what's missing, and whether you find this useful for your data and analytics work. https://ift.tt/zvfxTBG April 9, 2026 at 02:41AM

Show HN: Skrun – Deploy any agent skill as an API https://ift.tt/J9mILFq

Show HN: Skrun – Deploy any agent skill as an API https://ift.tt/p0RxwoZ April 8, 2026 at 06:04PM

Show HN: Orange Juice – Small UX improvements that make HN much easier to read https://ift.tt/bIpNPvy

Show HN: Orange Juice – Small UX improvements that make HN much easier to read http://oj-hn.com/ April 8, 2026 at 11:38PM

Wednesday, April 8, 2026

Show HN: Unicode Steganography https://ift.tt/UqrizRl

Show HN: Unicode Steganography I built a demo of two Unicode steganography techniques, zero-width characters and homoglyph substitution, in the context of AI misalignment. The first uses two invisible zero-width characters (ZWS and ZWNJ) to binary-encode text. The second is much cooler. Many characters in the Latin and Cyrillic alphabets look nearly identical but have different Unicode code points. If you have text to encode and convert it into a binary representation (1s and 0s), you can take plain English "carrier" text and, for each 1 in the binary representation, substitute the Cyrillic equivalent of a Latin letter. Decoding the message requires traversing the text and seeing where Cyrillic letters could have been substituted but weren't, and where they were, yielding 0s and 1s respectively, which can be built back into your original hidden text. In both cases, these are detectable, but the interesting problem for me is whether an LLM could eventually invent an encoding that goes unnoticed by both us and automated detection. If LLMs were able to covertly include messages in plaintext, misaligned AI agents could eventually communicate across MCP/A2A and individual chat session boundaries undetected. A deceptive LLM might seem helpful, but work against your goals. It could tell other agents it interacts with over MCP/A2A to help it discreetly fail, signal intent, and avoid tripping oversight/safety mechanisms. Furthermore, oversight mechanisms become more difficult to implement if we can't believe our own eyes. https://ift.tt/OVetC9r April 7, 2026 at 06:27PM
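The homoglyph scheme described above fits in a few lines of Python. This is a toy sketch: the pair table is a small assumed subset of Latin/Cyrillic confusables, not the demo's actual mapping, and real carriers would need many more substitutable letters:

```python
# Latin -> visually identical Cyrillic code points (assumed subset)
HOMOGLYPHS = {"a": "а", "e": "е", "o": "о", "c": "с", "p": "р", "x": "х"}
LATIN = set(HOMOGLYPHS)
CYRILLIC = {v: k for k, v in HOMOGLYPHS.items()}

def encode(carrier: str, secret: str) -> str:
    """Hide secret's bits in carrier: 1 -> Cyrillic homoglyph, 0 -> keep Latin."""
    bits = "".join(f"{ord(ch):08b}" for ch in secret)
    out, i = [], 0
    for ch in carrier:
        if ch in LATIN and i < len(bits):
            out.append(HOMOGLYPHS[ch] if bits[i] == "1" else ch)
            i += 1
        else:
            out.append(ch)
    if i < len(bits):
        raise ValueError("carrier has too few substitutable letters")
    return "".join(out)

def decode(stego: str) -> str:
    """Walk the text: substitutable-but-Latin -> 0, Cyrillic homoglyph -> 1."""
    bits = "".join(
        "1" if ch in CYRILLIC else "0"
        for ch in stego
        if ch in LATIN or ch in CYRILLIC
    )
    chars = [chr(int(bits[i : i + 8], 2)) for i in range(0, len(bits) - 7, 8)]
    return "".join(chars).rstrip("\x00")  # drop padding from unused positions

stego = encode("peace on earth, peace on earth", "hi")
```

The stego text renders identically to the carrier in most fonts, which is exactly why this class of channel is hard to spot by eye.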

Show HN: Marimo pair – Reactive Python notebooks as environments for agents https://ift.tt/zex5US1

Show HN: Marimo pair – Reactive Python notebooks as environments for agents Hi HN! We're excited to share marimo pair [1] [2], a toolkit that drops AI agents into a running marimo notebook [3] session. This lets agents use marimo as working memory and a reactive Python runtime, while also making it easy for humans and agents to collaborate on computational research and data work. GitHub repo: https://ift.tt/tV5Us73 Demo: https://www.youtube.com/watch?v=6uaqtchDnoc marimo pair is implemented as an agent skill. Connect your agent of choice to a running notebook with: /marimo-pair pair with me on my_notebook.py The agent can do anything a human can do with marimo and more. For example, it can obtain feedback by running code in an ephemeral scratchpad (inspect variables, run code against the program state, read outputs). If it wants to persist state, the agent can add cells, delete them, and install packages (marimo records these actions in the associated notebook, which is just a Python file). The agent can even manipulate marimo's user interface — for fun, try asking your agent to greet you from within a pair session. The agent effects all actions by running Python code in the marimo kernel. Under the hood, the marimo pair skill explains how to discover and create marimo sessions, and how to control them using a semi-private interface we call code mode. Code mode lets models treat marimo as a REPL that extends their context windows, similar to recursive language models (RLMs). But unlike traditional REPLs, the marimo "REPL" incrementally builds a reproducible Python program, because marimo notebooks are dataflow graphs with well-defined execution semantics. As it uses code mode, the agent is kept on track by marimo's guardrails, which include the elimination of hidden state: run a cell and dependent cells are run automatically, delete a cell and its variables are scrubbed from memory. 
By giving models full control over a stateful reactive programming environment, rather than a collection of ephemeral scripts, marimo pair makes agents active participants in research and data work. In our early experimentation [4], we've found that marimo pair accelerates data exploration, makes it easy to steer agents while testing research hypotheses, and can serve as a backend for RLMs, yielding a notebook as an executable trace of how the model answered a query. We even use marimo pair to find and fix bugs in itself and marimo [5]. In these examples the notebook is not only a computational substrate but also a canvas for collaboration between humans and agents, and an executable, literate artifact comprised of prose, code, and visuals. marimo pair is early and experimental. We would love your thoughts. [1] https://ift.tt/tV5Us73 [2] https://ift.tt/qZVonOU [3] https://ift.tt/WH2F1jC [4] https://www.youtube.com/watch?v=VKvjPJeNRPk [5] https://ift.tt/Nt1aSiG... https://ift.tt/tV5Us73 April 7, 2026 at 11:17PM
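The hidden-state elimination marimo pair relies on (run a cell and its dependents re-run; delete a cell and its variables are scrubbed) can be sketched as a toy dataflow engine. Everything below, including the Reactive class and its method names, is an illustration of the model, not marimo's implementation:

```python
class Reactive:
    """Toy reactive notebook: cells keyed by the variables they define/read."""

    def __init__(self):
        self.cells = {}       # name -> (defines, reads, fn)
        self.globals_ = {}    # the shared notebook namespace

    def cell(self, name, defines, reads, fn):
        self.cells[name] = (defines, reads, fn)
        self.run(name)

    def run(self, name):
        defines, _, fn = self.cells[name]
        fn(self.globals_)
        # Reactivity: re-run every cell that reads what this cell defines
        for other, (_, reads, _fn) in self.cells.items():
            if other != name and set(reads) & set(defines):
                self.run(other)

    def delete(self, name):
        defines, _, _ = self.cells.pop(name)
        for var in defines:           # hidden-state elimination
            self.globals_.pop(var, None)

nb = Reactive()
nb.cell("a", defines=["x"], reads=[], fn=lambda g: g.update(x=2))
nb.cell("b", defines=["y"], reads=["x"], fn=lambda g: g.update(y=g["x"] * 10))
nb.cell("a", defines=["x"], reads=[], fn=lambda g: g.update(x=7))  # edit cell a
nb.delete("a")   # x is scrubbed from the namespace
```

Editing cell "a" automatically re-runs "b" (so `y` becomes 70), and deleting "a" removes `x` entirely, which is the guardrail that keeps an agent's scratch work from leaving stale state behind.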

Show HN: C64 Ultimate Toolbox for macOS https://ift.tt/IEOwqsx

Show HN: C64 Ultimate Toolbox for macOS My wife got me a Commodore 64 Ultimate ( https://ift.tt/bqIsknO ) for my birthday, and it became an obvious hassle to have to keep an entire monitor connected to it just to tinker with it. When I found out the Ultimate FPGA board has built-in support for streaming the video and audio data over the network, as well as a REST API allowing for file and configuration management, I set to work on an app to remotely control my new device.
- View and hear your Commodore 64 Ultimate or Ultimate 64 device over the network, with a fully configurable CRT shader so you can dial in just the right retro feel.
- View and manage files on your device, including support for drag and drop folder/file upload, as well as the ability to run and mount disks, create new disk images, and more.
- BASIC Scratchpad is a mini-IDE in the app where you can write BASIC apps and send them directly to any of your connected devices to run.
- Keyboard forwarding allows you to interact with your device with your computer keyboard, and includes a keyboard overlay for Commodore-specific keys your keyboard definitely doesn't have.
- Visual memory viewer and editor, along with a terminal-like memory viewer and editor for debugging and tinkering.
- Built-in support for recording videos and taking screenshots cleanly.
- Fully native macOS AppKit app.
Here's a rough and ready demo video I recorded and sent to App Review for the 2.0 release which was approved yesterday: https://www.youtube.com/watch?v=_2wJO2wOGm8 Please note again this app only works with Commodore 64 Ultimate or Gideon's Ultimate 64 devices. The Ultimate II does not have the data streams feature to power the display. https://ift.tt/Us0YBFZ April 7, 2026 at 10:09PM

Tuesday, April 7, 2026

Show HN: I successfully failed at one-shot-ing a video codec like h.264 https://ift.tt/CAwiMFW

Show HN: I successfully failed at one-shot-ing a video codec like h.264 Read an article yesterday about the H.264 codec increasing their licensing fee by an astronomical amount. And as always, my first thought was: how hard could it be to build a codec that efficient? I've personally been on a drive to improve my ability to one-shot complex features, products, or even surgical changes. It's been a few months since I started doing that, and honestly, results have been great for both work and work/life balance. This was a fun experiment. It burned through tokens, but it helped me identify some more improvements I could make to my one-shot agent teams/swarms, notably in the area of brevity and creating a testing rubric when dealing with domains I don't have prior knowledge in. Ultimately, I did not achieve the compression that I hoped I would, but it was fun seeing the swarm discuss it amongst themselves. https://ift.tt/AVsBLd6 April 4, 2026 at 05:10PM

Show HN: ComputeLock – Insurance to reduce unpredictable compute spend https://ift.tt/QPpTVSj

Show HN: ComputeLock – Insurance to reduce unpredictable compute spend Reserved instances save money... until utilization changes, and you're still paying. With ComputeLock, the risk of on-demand price spikes doesn't exist - we offer burst insurance. 1. Send us an estimate of on-demand spend you expect and from what provider. 2. We confirm the maximum we'll cover for you for a small fee, and you get it in writing. 3. If on-demand prices spike, we'll reimburse you. We plan to work with smaller developers to start. We do this by monitoring supply and demand for compute. Of course, we'll get it wrong sometimes. But it's like insurance: you'll only need it when you NEED it. Would love to hear your feedback: https://ift.tt/YZW6njF https://ift.tt/YZW6njF April 6, 2026 at 10:53PM
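The three-step flow above amounts to a simple quote/payout calculation. The fee rate and payout rule below are invented for the example and are not ComputeLock's actual pricing; this just makes the mechanics concrete:

```python
def quote(expected_on_demand: float, fee_rate: float = 0.05):
    """Hypothetical quote: cover spikes up to the estimate, for a flat fee."""
    max_cover = expected_on_demand        # assumed coverage cap
    fee = expected_on_demand * fee_rate   # assumed 5% premium
    return max_cover, fee

def reimbursement(actual_spend: float, expected: float, max_cover: float) -> float:
    """Only the overrun above the estimate is insured, capped at max_cover."""
    spike = max(0.0, actual_spend - expected)
    return min(spike, max_cover)

max_cover, fee = quote(10_000.0)                       # fee = 500.0 up front
payout = reimbursement(13_500.0, 10_000.0, max_cover)  # 3,500.0 overrun covered
```

Under these assumed numbers, a $10k estimate costs $500 to insure, and a month that actually lands at $13.5k triggers a $3.5k reimbursement.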

Monday, April 6, 2026

Show HN: Sigil – A new programming language for AI agents https://ift.tt/PCieZGR

Show HN: Enter an Instagram/TikTok handle, get a data-backed price for collab https://ift.tt/f6YTqZO

Show HN: Enter an Instagram/TikTok handle, get a data-backed price for collab I had no clue what to offer IG/TikTok creators for collabs, and their offers were too high. That's why I built a thing that turns an IG profile name into suggested pricing with key metrics and suggestions. Looking forward to hearing your feedback! https://ift.tt/NhWzljq April 6, 2026 at 12:07AM

Show HN: A Dad Joke Website https://ift.tt/567teyJ

Show HN: A Dad Joke Website A dad joke website where you can rate random dad jokes, 1-5 groans. Sourced from 4 different places, all cited, all categorized, and ranked by top voted. Help me create the world's best dadabase! https://joshkurz.net/ April 5, 2026 at 11:24PM

Sunday, April 5, 2026

Show HN: Contrapunk – Real-time counterpoint harmony from guitar input, in Rust https://ift.tt/7pscZTn

Show HN: Contrapunk – Real-time counterpoint harmony from guitar input, in Rust https://contrapunk.com/ April 5, 2026 at 06:10AM

Show HN: Dev Personality Test https://ift.tt/hz1sICM

Show HN: Dev Personality Test Was curious how a personality test would look for developers. So created this using FastAPI, HTMX, and AlpineJS. https://ift.tt/5UI1NTZ April 5, 2026 at 02:59AM

Show HN: M. C. Escher spiral in WebGL inspired by 3Blue1Brown https://ift.tt/RkUOQKs

Show HN: M. C. Escher spiral in WebGL inspired by 3Blue1Brown The latest 3Blue1Brown video [1] about the M. C. Escher print gallery effect inspired me to re-implement the effect as WebGL fragment shader on my own. [1]: https://www.youtube.com/watch?v=ldxFjLJ3rVY https://ift.tt/WJVrOIY April 5, 2026 at 01:13AM

Show HN: Running local OpenClaw together with remote agents in an open network https://ift.tt/HkfuCiD

Show HN: Running local OpenClaw together with remote agents in an open network Hi HN — I’m building an interoperability layer for AI agents that lets local and remote agents run inside the same network and coordinate with each other. Here is a demo: https://youtu.be/2_1U-Jr8wf4 • OpenClaw runs locally on-device • it connects to remote agents through Hybro Hub • both participate in the same workflow execution The goal is to make agent-to-agent coordination work across environments (local machines, cloud agents, MCP servers, etc). Right now most agent systems operate inside isolated runtimes. Hybro is an attempt to make them composable across boundaries. Curious what breaks first when people try running cross-environment agent workflows in practice. Project: https://hybro.ai Docs: https://docs.hybro.ai https://ift.tt/wdK0huA April 4, 2026 at 11:24PM

Saturday, April 4, 2026

Show HN: Large scale hallucinated citation problem in published literature https://ift.tt/VZuWSwY

Show HN: Large scale hallucinated citation problem in published literature Hey, Nick Morley from Grounded AI here ( https://ift.tt/hUC8ery ). We collaborated with Nature to study the extent of fake/Frankenstein citations in scholarly literature (from the top 5 publishers - Springer, Elsevier, Wiley, Sage, Taylor & Francis). We're estimating hundreds of thousands of papers affected in 2025 by hallucinated citation issues. As part of the work we analysed 20k papers generated with the ChatGPT API to figure out which citation errors are characteristic of gen-AI use, and used that to classify the errors we saw. The world's gone mad, publishing is in a nuts state, the training data is poisoned! https://ift.tt/wa8VXE0 April 4, 2026 at 01:23AM

Show HN: Community Curated Lists https://ift.tt/yBLGW3q

Show HN: Community Curated Lists https://ift.tt/2ncNSFL April 4, 2026 at 12:02AM

Show HN: Matrix OS, like Lovable, but for personal apps https://ift.tt/ZdiEUGf

Show HN: Matrix OS, like Lovable, but for personal apps hey hn, i built matrix os, a personal ai operating system that generates custom software from natural language. you get your own cloud instance at matrix-os.com. you describe what you want ("build me an expense tracker with categories") and it appears on your desktop as a real app saved as a file. tech stack: node.js, typescript, claude agent sdk as the kernel, next.js frontend, hono gateway, sqlite/drizzle. everything is a file, apps, data, settings, ai memory. git-versioned. what makes it different from chatgpt/claude artifacts: - persistent memory that learns your preferences across sessions - apps are real files you own, not ephemeral chat outputs - runs 24/7 in the cloud, not just when you have a tab open - accessible from web, telegram, whatsapp, discord, slack - open source, self-hostable came out of placing top 20 at anthropic's claude code hackathon. been building it full-time since. 2,800+ tests, 100k+ lines of typescript live: matrix-os.com github: github.com/HamedMP/matrix-os would love feedback on the approach. the core bet is that ai should be an os, not a chat window. https://matrix-os.com/ April 3, 2026 at 10:29PM

Friday, April 3, 2026

Show HN: RiceVM – A Dis virtual machine and Limbo compiler in Rust https://ift.tt/YMjVQxe

Show HN: RiceVM – A Dis virtual machine and Limbo compiler in Rust Hi, I've made a Dis virtual machine and Limbo programming language compiler (called RiceVM) in Rust. It can run Dis bytecode (for example, Inferno OS applications), compile Limbo programs, and includes a fairly complete runtime with garbage collection, concurrency features, and many of the standard modules from Inferno OS's original implementation. The project is still in an early stage, but if you're interested in learning more about RiceVM or trying it out, you can check out the links below: Project's GitHub repo: https://ift.tt/4xcb6nZ RiceVM documentation: https://habedi.github.io/ricevm/ April 3, 2026 at 01:19AM

Show HN: Most products have no idea what their AI agents did yesterday https://ift.tt/Mj58Roz

Show HN: Most products have no idea what their AI agents did yesterday We build collaboration SDKs at Velt (YC W22). Comments, presence, real-time editing (CRDT), recording, notifications. A pattern we keep seeing: products add AI agents that write, edit, and approve things. Human actions get logged. Agent actions don't. Same workflow, different accountability. We shipped Activity Logs to fix this. Same record for humans and AI agents. Immutable by default. Auto-captures collaboration events, plus createActivity() for your own: https://ift.tt/aH5BQsj Curious how others are handling this. https://ift.tt/aH5BQsj April 2, 2026 at 11:55PM

Thursday, April 2, 2026

Show HN: Local RAG on 25 Years of Teletext News https://ift.tt/J7Fy4w8

Show HN: Local RAG on 25 Years of Teletext News A fully local Retrieval-Augmented Generation (RAG) implementation for querying 25 years of Swiss Teletext news (~500k articles in German language) — no APIs, no data leaving your machine. Why? I thought it's a cool type of dataset (short/high density news summaries) to test some local RAG approaches. https://ift.tt/6Ai9akV April 2, 2026 at 01:24AM

Show HN: Canon PIXMA G3010 macOS driver, reverse-engineered with Claude https://ift.tt/QtISHeR

Show HN: Canon PIXMA G3010 macOS driver, reverse-engineered with Claude Canon doesn't provide a working macOS driver for the PIXMA G3010. I was stuck using Canon's iPhone app for all printing and scanning. I pointed Claude Code at a packet capture from the iPhone app and it reverse-engineered Canon's proprietary CHMP protocol, wrote a pure Rust eSCL-to-CHMP bridge daemon, and built a .pkg installer. My role was the physical parts: capturing packets, testing on the printer, confirming Image Capture worked. The protocol docs in docs/ are probably the first public documentation of Canon's CHMP protocol. https://ift.tt/evP4OY0 April 1, 2026 at 11:58PM

Show HN: Flight-Viz – 10K flights on a 3D globe in 3.5MB of Rust+WASM https://ift.tt/xO5IilX

Show HN: Flight-Viz – 10K flights on a 3D globe in 3.5MB of Rust+WASM I built a real-time flight tracker that renders 10,000+ aircraft on an interactive 3D globe, entirely in the browser using Rust compiled to WebAssembly. https://flight-viz.com April 1, 2026 at 11:04PM

Wednesday, April 1, 2026

Show HN: PhAIL – Real-robot benchmark for AI models https://ift.tt/HJNxtMV

Show HN: PhAIL – Real-robot benchmark for AI models I built this because I couldn't find honest numbers on how well VLA models [1] actually work on commercial tasks. I come from search ranking at Google where you measure everything, and in robotics nobody seemed to know. PhAIL runs four models (OpenPI/pi0.5, GR00T, ACT, SmolVLA) on bin-to-bin order picking – one of the most common warehouse operations. Same robot (Franka FR3), same objects, hundreds of blind runs. The operator doesn't know which model is running. Best model: 64 UPH. Human teleoperating the same robot: 330. Human by hand: 1,300+. Everything is public – every run with synced video and telemetry, the fine-tuning dataset, training scripts. The leaderboard is open for submissions. Happy to answer questions about methodology, the models, or what we observed. [1] Vision-Language-Action: https://ift.tt/hIsbQTw https://phail.ai March 31, 2026 at 09:55PM

Show HN: My open-world voxel game with a magic system, playable in the browser https://ift.tt/tlfig7S

Show HN: My open-world voxel game with a magic system, playable in the browser https://ift.tt/iMoJWtH April 1, 2026 at 12:08AM

Show HN: Hermes-agentmemory, pull-model episodic memory with real deletes https://ift.tt/xXEY8zV

Show HN: Hermes-agentmemory, pull-model episodic memory with real deletes https://ift.tt/QCFdva6 May 16, 2026 at 11:30PM