Saturday, January 31, 2026

Show HN: Daily Cat https://ift.tt/J67Ougf

Show HN: Daily Cat Seeing HTTP Cats on the home page reminded me to share a small project I made a couple of months ago. It displays a different cat photo from Unsplash every day and will send you notifications if you opt in. https://daily.cat/ January 31, 2026 at 03:40AM

Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. Infinite Memory https://ift.tt/S3rgODe

Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. Infinite Memory The problem with LLMs isn't intelligence; it's amnesia and dishonesty. Hey HN, I’ve spent the last few months building Remember-Me, an open-source "Sovereign Brain" stack designed to run entirely offline on consumer hardware. The core thesis is simple: don't rent your cognition. Most RAG (Retrieval-Augmented Generation) implementations are just "grep for embeddings": messy, imprecise, and prone to hallucination. I wanted to solve the context-integrity problem at the architectural layer.

The tech stack (how it works): QDMA (Quantum Dream Memory Architecture): instead of a flat vector DB, it uses a hierarchical projection engine. It separates "Hot" (recall) from "Cold" (storage) memory, allowing effectively infinite context-window management via compression. CSNP (Context Switching Neural Protocol), the hallucination killer: this is the most important part. Every memory fragment is hashed into a Merkle chain. When the LLM retrieves context, the system cryptographically verifies the retrieval against the immutable ledger; if the hash doesn't match the chain, the retrieval is rejected. Result: the AI literally cannot "make things up" about your past, because it is mathematically constrained to the ledger. Local inference: built on top of the llama.cpp server, it runs Llama-3 (or any GGUF) locally. No API keys. No data leaving your machine.

Features: zero-dependency (runs on Windows/Linux with just Python and a GPU or CPU); a Streamlit-based "Cognitive Interface" to visualize memory states; open source under the MIT License. This is an attempt to give agency back to the user. I believe that if we want AGI, it needs to be owned by us, not rented via an API. Repository: https://ift.tt/DtC2lYL I’d love to hear your feedback on the Merkle-verification approach. Does constraining the context window effectively solve the "trust" issue for you?
It's fully working and fully tested. If you tried to git clone before without luck (this is not my first Show HN on this), feel free to try again. To everyone who hates AI slop, greedy corporations, and having their private data stuck on cloud servers: you're welcome. Cheers, Mohamad https://ift.tt/DtC2lYL January 31, 2026 at 01:44AM
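For readers curious what the Merkle-chain verification described above might look like mechanically, here is a toy sketch in Python (my own illustration, not Remember-Me's actual code): each ledger entry's hash covers both its content and the previous entry's hash, so a retrieved fragment can be checked against the ledger and rejected if it was fabricated or altered.

```python
import hashlib

class MemoryLedger:
    """Toy append-only ledger: each entry's hash covers its content plus the
    previous entry's hash, so any tampering breaks the chain."""

    def __init__(self):
        self.entries = []  # list of (content, hash) pairs

    def append(self, content: str) -> str:
        prev_hash = self.entries[-1][1] if self.entries else "genesis"
        digest = hashlib.sha256((prev_hash + content).encode()).hexdigest()
        self.entries.append((content, digest))
        return digest

    def verify_retrieval(self, index: int, content: str) -> bool:
        """Reject any retrieved fragment whose hash doesn't match the chain."""
        prev_hash = self.entries[index - 1][1] if index > 0 else "genesis"
        expected = hashlib.sha256((prev_hash + content).encode()).hexdigest()
        return expected == self.entries[index][1]

ledger = MemoryLedger()
ledger.append("User's cat is named Mochi")
ledger.append("User prefers dark mode")

assert ledger.verify_retrieval(1, "User prefers dark mode")       # genuine memory
assert not ledger.verify_retrieval(1, "User prefers light mode")  # fabricated memory rejected
```

A real system would likely verify full Merkle proofs rather than a linear chain, but the trust property is the same: context handed to the model must hash to something the ledger already committed to.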

Show HN: We added memory to Claude Code. It's powerful now https://ift.tt/yQmvM25

Show HN: We added memory to Claude Code. It's powerful now https://ift.tt/eDNoCGW January 30, 2026 at 10:53PM

Friday, January 30, 2026

Show HN: Craft – Claude Code running on a VM with all your workplace docs https://ift.tt/6ogmEe9

Show HN: Craft – Claude Code running on a VM with all your workplace docs I’ve found coding agents to be great at 1/ finding everything they need across large codebases using only bash commands (grep, glob, ls, etc.) and 2/ building new things based on their findings (duh). What if, instead of a codebase, the files were all your workplace docs? There’d be a `Google_Drive` folder, a `Linear` folder, a `Slack` folder, and so on. Over the last week, we put together Craft to test this out. It’s an interface to a coding agent (OpenCode, for model flexibility) running on a virtual machine with: 1. your company's complete knowledge base represented as directories/files (kept in sync) 2. free rein to write and execute Python/JavaScript 3. the ability to create and render artifacts to the user Demo: https://www.youtube.com/watch?v=Hvjn76YSIRY GitHub: https://ift.tt/cCqKF74... It turns out OpenCode does a very good job with docs. Workplace apps also have a natural structure (Slack channels about certain topics, Drive folders for teams, etc.). And since the full metadata of each document can be written to the file, the LLM can define arbitrarily complex filters. At scale, it can write and execute Python to extract and filter (and even re-use the verified-correct logic later). Put another way, bash + a file system provides a much more flexible and powerful interface than traditional RAG or MCP, which today’s smarter LLMs are able to take advantage of to great effect. This comes especially in handy for aggregation-style questions that require considering thousands (or more) of documents. Naturally, it can also create artifacts that stay up to date based on your company docs. So if you wanted “a dashboard to check in realtime what % of outages were caused by each backend service” or simply “slides following XYZ format covering the topic I’m presenting at next week’s dev knowledge sharing session”, it can do that too.
Craft (like the rest of Onyx) is open-source, so if you want to run it locally (or mess around with the implementation) you can. Quickstart guide: https://ift.tt/qviahMG Or, you can try it on our cloud: https://ift.tt/sycQDli (all your data stays in an isolated sandbox). Either way, we’ve set up a “demo” environment that you can play with while your data gets indexed. Really curious to hear what y’all think! January 29, 2026 at 09:15PM
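To make the docs-as-files idea concrete, here is a minimal sketch (folder names and metadata fields below are hypothetical, not Craft's actual layout) of the kind of throwaway Python an agent could write once each document is a file with its metadata attached:

```python
import json
import tempfile
from pathlib import Path

# Build a toy "workplace docs" tree: one JSON file per document, metadata
# stored alongside the body (a stand-in for how an app-to-folder sync might look).
root = Path(tempfile.mkdtemp())
(root / "Linear").mkdir()
(root / "Slack").mkdir()
(root / "Linear" / "BUG-101.json").write_text(json.dumps(
    {"title": "Login flaky", "status": "open", "team": "auth", "body": "..."}))
(root / "Linear" / "BUG-102.json").write_text(json.dumps(
    {"title": "Slow dashboard", "status": "done", "team": "infra", "body": "..."}))
(root / "Slack" / "msg-1.json").write_text(json.dumps(
    {"channel": "#incidents", "text": "API down"}))

# The kind of ad-hoc filter an agent might write and execute:
# every open Linear issue owned by the auth team.
docs = [json.loads(p.read_text()) for p in (root / "Linear").glob("*.json")]
open_auth_issues = [d["title"] for d in docs
                    if d.get("status") == "open" and d.get("team") == "auth"]
print(open_auth_issues)  # ['Login flaky']
```

Because the filter is plain code over plain files, the agent can verify it once and reuse it, which is harder to do with one-shot retrieval calls.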

Safer Streets, More Reliable Rides: 10 Highlights from 2025

Safer Streets, More Reliable Rides: 10 Highlights from 2025
By Glennis Markison-Busi

We took several steps last year to improve safety at intersections across the city. Our teams work every day to make city streets safer and your rides on Muni even more reliable. As the new year kicks off, we are proud to share 10 ways we improved your trips in 2025. Creating safer streets 1. Installed speed safety cameras at 33 locations Speed safety cameras are a proven tool to reduce severe and fatal injury traffic collisions. We were the first city in California to install them, and they’re already working to slow down speeds. Data we collected in October showed that speeding was down 78%...



Published January 29, 2026 at 05:30AM
https://ift.tt/cqTUxgm

Show HN: SimpleSVGs – Free Online SVG Optimizer Multiple SVG Files at Once https://ift.tt/YmNhWut

Show HN: SimpleSVGs – Free Online SVG Optimizer Multiple SVG Files at Once https://ift.tt/j3eYk5d January 29, 2026 at 11:49PM

Thursday, January 29, 2026

Show HN: SHDL – A minimal hardware description language built from logic gates https://ift.tt/Ec7gyfl

Show HN: SHDL – A minimal hardware description language built from logic gates Hi, everyone! I built SHDL (Simple Hardware Description Language) as an experiment in stripping hardware description down to its absolute fundamentals. In SHDL, there are no arithmetic operators, no implicit bit widths, and no high-level constructs. You build everything explicitly from logic gates and wires, and then compose larger components hierarchically. The goal is not synthesis or performance, but understanding: what digital systems actually look like when abstractions are removed. SHDL is accompanied by PySHDL, a Python interface that lets you load circuits, poke inputs, step the simulation, and observe outputs. Under the hood, SHDL compiles circuits to C for fast execution, but the language itself remains intentionally small and transparent. This is not meant to replace Verilog or VHDL. It’s aimed at: - learning digital logic from first principles - experimenting with HDL and language design - teaching or visualizing how complex hardware emerges from simple gates. I would especially appreciate feedback on: - the language design choices - what feels unnecessarily restrictive vs. educationally valuable - whether this kind of “anti-abstraction” HDL is useful to you. Repo: https://ift.tt/gYO6tya Python package: PySHDL on PyPI To make this concrete, here are a few small working examples written in SHDL:

1. Full adder

component FullAdder(A, B, Cin) -> (Sum, Cout) {
    x1: XOR; a1: AND; x2: XOR; a2: AND; o1: OR;
    connect {
        A -> x1.A; B -> x1.B;
        A -> a1.A; B -> a1.B;
        x1.O -> x2.A; Cin -> x2.B;
        x1.O -> a2.A; Cin -> a2.B;
        a1.O -> o1.A; a2.O -> o1.B;
        x2.O -> Sum; o1.O -> Cout;
    }
}

2. 16-bit register

# clk must be high for two cycles to store a value
component Register16(In[16], clk) -> (Out[16]) {
    >i[16]{
        a1{i}: AND; a2{i}: AND; not1{i}: NOT; nor1{i}: NOR; nor2{i}: NOR;
    }
    connect {
        >i[16]{
            # Capture on clk
            In[{i}] -> a1{i}.A; In[{i}] -> not1{i}.A;
            not1{i}.O -> a2{i}.A;
            clk -> a1{i}.B; clk -> a2{i}.B;
            a1{i}.O -> nor1{i}.A; a2{i}.O -> nor2{i}.A;
            nor1{i}.O -> nor2{i}.B; nor2{i}.O -> nor1{i}.B;
            nor2{i}.O -> Out[{i}];
        }
    }
}

3. 16-bit ripple-carry adder

use fullAdder::{FullAdder};

component Adder16(A[16], B[16], Cin) -> (Sum[16], Cout) {
    >i[16]{ fa{i}: FullAdder; }
    connect {
        A[1] -> fa1.A; B[1] -> fa1.B; Cin -> fa1.Cin; fa1.Sum -> Sum[1];
        >i[2,16]{
            A[{i}] -> fa{i}.A; B[{i}] -> fa{i}.B;
            fa{i-1}.Cout -> fa{i}.Cin;
            fa{i}.Sum -> Sum[{i}];
        }
        fa16.Cout -> Cout;
    }
}

https://ift.tt/gYO6tya January 28, 2026 at 05:36PM
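As a sanity check on the FullAdder example above, the same gate network can be simulated in plain Python (this is just the wiring re-expressed as functions, not PySHDL's API):

```python
def XOR(a, b): return a ^ b
def AND(a, b): return a & b
def OR(a, b):  return a | b

def full_adder(A, B, Cin):
    # Mirrors the SHDL wiring: x1/x2 produce the sum, a1/a2/o1 the carry.
    x1 = XOR(A, B)
    s = XOR(x1, Cin)
    cout = OR(AND(A, B), AND(x1, Cin))
    return s, cout

# Exhaustive check against ordinary binary addition
for A in (0, 1):
    for B in (0, 1):
        for Cin in (0, 1):
            s, c = full_adder(A, B, Cin)
            assert 2 * c + s == A + B + Cin
print("full adder wiring verified")
```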

Show HN: Record and share your coding sessions with CodeMic https://ift.tt/ZzRvjyW

Show HN: Record and share your coding sessions with CodeMic You can record and share coding sessions directly inside your editor. Think Asciinema, but for full coding sessions with audio, video, and images. While replaying a session, you can pause at any point, explore the code in your own editor, modify it, and even run it. This makes following tutorials and understanding real codebases much more practical than watching a video. Local-first and open source. P.S. I’ve been working on this for a little over two years* and would appreciate any feedback. * Previously: CodeMic: A new way to talk about code - https://ift.tt/vDTbdH7 - Dec 2024 (58 comments) https://codemic.io/# January 28, 2026 at 07:28PM

Wednesday, January 28, 2026

Show HN: Lightbox – Flight recorder for AI agents (record, replay, verify) https://ift.tt/IfJmLPB

Show HN: Lightbox – Flight recorder for AI agents (record, replay, verify) I built Lightbox because I kept running into the same problem: an agent would fail in production, and I had no way to know what actually happened. Logs were scattered, the LLM’s “I called the tool” wasn’t trustworthy, and re-running wasn’t deterministic. This week, tons of Clawdbot incidents have driven the point home. Agents with full system access can expose API keys and chat histories. Prompt injection is now a major security concern. When agents can touch your filesystem, execute code, and browse the web…you probably need a tamper-proof record of exactly what actions it took, especially when a malicious prompt or compromised webpage could hijack the agent mid-session. Lightbox is a small Python library that records every tool call an agent makes (inputs, outputs, timing) into an append-only log with cryptographic hashes. You can replay runs with mocked responses, diff executions across versions, and verify the integrity of logs after the fact. Think airplane black box, but for your hackbox. *What it does:* - Records tool calls locally (no cloud, your infra) - Tamper-evident logs (hash chain, verifiable) - Replay failures exactly with recorded responses - CLI to inspect, replay, diff, and verify sessions - Framework-agnostic (works with LangChain, Claude, OpenAI, etc.) *What it doesn’t do:* - Doesn’t replay the LLM itself (just tool calls) - Not a dashboard or analytics platform - Not trying to replace LangSmith/Langfuse (different problem) *Use cases I care about:* - Security forensics: agent behaved strangely, was it prompt injection? Check the trace. 
- Compliance: “prove what your agent did last Tuesday” - Debugging: reproduce a failure without re-running expensive API calls - Regression testing: diff tool call patterns across agent versions As agents get more capable and more autonomous (Clawdbot/Molt, Claude computer use, Manus, Devin), I think we’ll need black boxes the same way aviation does. This is my attempt at that primitive. It’s early (v0.1), intentionally minimal, MIT licensed. Site: < https://uselightbox.app > install: `pip install lightbox-rec` GitHub: < https://github.com/mainnebula/Lightbox-Project > Would love feedback, especially from anyone thinking about agent security or running autonomous agents in production. https://ift.tt/X8eAOgE January 27, 2026 at 10:53PM
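The replay idea can be sketched in a few lines of Python (a toy illustration, not Lightbox's actual API): wrap each tool, key its calls by name and arguments, and serve recorded outputs instead of live ones when replaying.

```python
import functools
import json

RECORDING = {}   # (tool name + args) -> recorded output; a real tool would persist this
REPLAY = False

def recorded_tool(fn):
    """Record a tool call's inputs and outputs; when REPLAY is set, serve the
    recorded response so the run is deterministic and free of live calls."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        key = json.dumps([fn.__name__, args, kwargs], sort_keys=True, default=str)
        if REPLAY:
            return RECORDING[key]      # no network, no cost, exact same answer
        out = fn(*args, **kwargs)      # live call (e.g. an HTTP request)
        RECORDING[key] = out
        return out
    return wrapper

@recorded_tool
def get_weather(city):
    return {"city": city, "temp_c": 7}  # stand-in for a real external API call

live = get_weather("Oslo")       # recorded on first run
REPLAY = True
replayed = get_weather("Oslo")   # served from the log
assert live == replayed
```

Adding the hash chain on top of `RECORDING` is what makes the log tamper-evident rather than merely replayable.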

Show HN: LemonSlice – Upgrade your voice agents to real-time video https://ift.tt/FVHekoZ

Show HN: LemonSlice – Upgrade your voice agents to real-time video Hey HN, we're the co-founders of LemonSlice ( https://lemonslice.com ). We train interactive avatar video models. Our API lets you upload a photo and immediately jump into a FaceTime-style call with that character. Here's a demo: https://ift.tt/IwUk6Qg Chatbots are everywhere. Voice AI has recently taken off. But we believe video avatars will be the most common form factor for conversational AI. Most people would rather watch something than read it. The problem is that generating video in real-time is hard, and overcoming the uncanny valley is even harder. We haven’t broken the uncanny valley yet. Nobody has. But we’re getting close and our photorealistic avatars are currently best-in-class (judge for yourself: https://ift.tt/DQKBEs4 ). Plus, we're the only avatar model that can do animals and heavily stylized cartoons. Try it: https://ift.tt/GaA8Cyw . Warning! Talking to this little guy may improve your mood. Today we're releasing our new model* - Lemon Slice 2, a 20B-parameter diffusion transformer that generates infinite-length video at 20fps on a single GPU - and opening up our API. How did we get a video diffusion model to run in real-time? There was no single trick, just a lot of them stacked together. The first big change was making our model causal. Standard video diffusion models are bidirectional (they look at frames both before and after the current one), which means you can't stream. From there it was about fitting everything on one GPU. We switched from full to sliding window attention, which killed our memory bottleneck. We distilled from 40 denoising steps down to just a few - quality degraded less than we feared, especially after using GAN-based distillation (though tuning that adversarial loss to avoid mode collapse was its own adventure). 
And the rest was inference work: modifying RoPE from complex to real (this one was cool!), precision tuning, fusing kernels, a special rolling KV cache, lots of other caching, and more. We kept shaving off milliseconds wherever we could and eventually got to real-time. We set up a guest playground for HN so you can create and talk to characters without logging in: https://ift.tt/KxWo8ZD . For those who want to build with our API (we have a new LiveKit integration that we’re pumped about!), grab a coupon code in the HN playground for your first Pro month free ($100 value). See the docs: https://ift.tt/z37P5uY . Pricing is usage-based at $0.12-0.20/min for video generation. Looking forward to your feedback! And we’d love to see any cool characters you make - please share their links in the comments *We did a Show HN last year for our V1 model: https://ift.tt/FTIxjWf . It was technically impressive but so bad compared to what we have today. January 27, 2026 at 11:25PM
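The "RoPE from complex to real" change mentioned above presumably rests on the fact that multiplying by e^(i·theta) is just a 2-D rotation, so each (even, odd) channel pair can be rotated with two real multiply-adds instead of complex arithmetic. A quick check of that equivalence (my illustration, not their kernel code):

```python
import cmath
import math

theta = 0.3
x0, x1 = 1.25, -0.5   # one (even, odd) channel pair from a query/key vector

# Complex formulation: multiply the pair, viewed as x0 + i*x1, by e^(i*theta)
z = complex(x0, x1) * cmath.exp(1j * theta)

# Real formulation: the same rotation written out as multiply-adds
cos_t, sin_t = math.cos(theta), math.sin(theta)
y0 = x0 * cos_t - x1 * sin_t
y1 = x0 * sin_t + x1 * cos_t

# Identical results, but the real form needs no complex dtype support
assert abs(z.real - y0) < 1e-12 and abs(z.imag - y1) < 1e-12
```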

Tuesday, January 27, 2026

Show HN: Ourguide – OS wide task guidance system that shows you where to click https://ift.tt/eyaIfx0

Show HN: Ourguide – OS wide task guidance system that shows you where to click Hey! I'm eshaan and I'm building Ourguide – an on-screen task guidance system that can show you where to click, step by step, when you need help. I started building this because whenever I didn’t know how to do something on my computer, I found myself constantly tabbing between chatbots and the app, pasting screenshots, and asking “what do I do next?” Ourguide solves this with two modes. In Guide mode, the app overlays your screen and highlights the specific element to click next, eliminating the need to leave your current window. There is also Ask mode, a vision-integrated chat that captures your screen context (which you can toggle on and off anytime), so you can ask, "How do I fix this error?" without having to explain what "this" is. It’s an Electron app that works OS-wide, is vision-based, and isn't restricted to the browser. Figuring out how to show the user where to click was the hardest part of the process. I originally trained a computer vision model on 2300 screenshots to identify and segment all UI elements on a screen, and used a VLM to find the correct icon to highlight. While this worked extremely well – better than SOTA grounding models like UI-TARS – the latency was just too high. I'll be making that CV+VLM pipeline OSS soon, but for now, I’ve resorted to a simpler implementation that achieves <1s latency. You may ask: if I can show you where to click, why can't I just click too? While trying to build computer-use agents during my job in Palo Alto, I hit the core limitation of today’s computer-use models, whose benchmarks hover in the mid-50% range (OSWorld). VLMs often know what to do but not what it looks like; without reliable visual grounding, agents misclick and stall. So, I built computer use – without the "use." It provides the visual grounding of an agent but keeps the human in the loop for the actual execution to prevent misclicks.
I personally use it for the AWS Console's "treasure hunt" UI, like creating a public S3 bucket with specific CORS rules. It’s also been surprisingly helpful for non-technical tasks, like navigating obscure settings in Gradescope or Spotify. Ourguide really works for any task when you’re stuck or don't know what to do. You can download and test Ourguide here: https://ourguide.ai/downloads The project is still very early, and I’d love your feedback on where it fails, where you think it worked well, and which specific niches you think Ourguide would be most helpful for. https://ourguide.ai January 26, 2026 at 11:49PM

Show HN: TetrisBench – Gemini Flash reaches 66% win rate on Tetris against Opus https://ift.tt/XJAoaQz

Show HN: TetrisBench – Gemini Flash reaches 66% win rate on Tetris against Opus https://ift.tt/GfqWQwd January 27, 2026 at 12:12AM

Show HN: Postgres and ClickHouse as a unified data stack https://ift.tt/gRPHK5V

Show HN: Postgres and ClickHouse as a unified data stack Hello HN, this is Sai and Kaushik from ClickHouse. Today we are launching a Postgres managed service that is natively integrated with ClickHouse. It is built together with Ubicloud (YC W24). TL;DR: NVMe-backed Postgres + built-in CDC into ClickHouse + pg_clickhouse so you can keep your app Postgres-first while running analytics in ClickHouse. Try it (private preview): https://ift.tt/IWcagbC Blog w/ live demo: https://ift.tt/oziR7jV Problem Across many fast-growing companies using Postgres, performance and scalability commonly emerge as challenges as they grow. This is for both transactional and analytical workloads. On the OLTP side, common issues include slower ingestion (especially updates, upserts), slower vacuums, long-running transactions incurring WAL spikes, among others. In most cases, these problems stem from limited disk IOPS and suboptimal disk latency. Without the need to provision or cap IOPS, Postgres could do far more than it does today. On the analytics side, many limitations stem from the fact that Postgres was designed primarily for OLTP and lacks several features that analytical databases have developed over time, for example vectorized execution, support for a wide variety of ingest formats, etc. We’re increasingly seeing a common pattern where many companies like GitLab, Ramp, Cloudflare etc. complement Postgres with ClickHouse to offload analytics. This architecture enables teams to adopt two purpose-built open-source databases. That said, if you’re running a Postgres based application, adopting ClickHouse isn’t straightforward. You typically end up building a CDC pipeline, handling backfills, and dealing with schema changes and updating your application code to be aware of a second database for analytics. Solution On the OLTP side, we believe that NVMe-based Postgres is the right fit and can drastically improve performance. 
NVMe storage is physically colocated with compute, enabling significantly lower disk latency and higher IOPS than network-attached storage, which requires a network round trip for disk access. This benefits disk-throttled workloads and can significantly (up to 10x) speed up operations including updates, upserts, vacuums, checkpointing, etc. We are working on a detailed blog examining how WAL fsyncs, buffer reads, and checkpoints dominate on slow I/O and are significantly reduced on NVMe. Stay tuned! On the OLAP side, the Postgres service includes native CDC to ClickHouse and unified query capabilities through pg_clickhouse. Today, CDC is powered by ClickPipes/PeerDB under the hood, which is based on logical replication. We are working to make this faster and easier by supporting logical replication v2 for streaming in-progress transactions, a new logical decoding plugin to address existing limitations of logical replication, working toward sub-second replication, and more. Every Postgres comes packaged with the pg_clickhouse extension, which reduces the effort required to add ClickHouse-powered analytics to a Postgres application. It allows you to query ClickHouse directly from Postgres, enabling Postgres for both transactions and analytics. pg_clickhouse supports comprehensive query pushdown for analytics, and we plan to continuously expand this further ( https://ift.tt/9M4jfdE ). Vision To sum it up - Our vision is to provide a unified data stack that combines Postgres for transactions with ClickHouse for analytics, giving you best-in-class performance and scalability on an open-source foundation. Get Started We are actively working with users to onboard them to the Postgres service. Since this is a private preview, it is currently free of cost. If you’re interested, please sign up here. https://ift.tt/IWcagbC We’d love to hear your feedback on our thesis and anything else that comes to mind, it would be super helpful to us as we build this out! 
January 22, 2026 at 11:51PM

Monday, January 26, 2026

Show HN: I Created a Tool to Convert YouTube Videos into 2000 Word SEO Blog https://ift.tt/gTUvI4E

Show HN: I Created a Tool to Convert YouTube Videos into 2000 Word SEO Blog https://landkit.pro/youtube-to-blog January 25, 2026 at 11:16PM

Show HN: CertRadar – Find every certificate ever issued for your domain https://ift.tt/7Oz5jvw

Show HN: CertRadar – Find every certificate ever issued for your domain https://certradar.net/ January 25, 2026 at 11:21PM

Sunday, January 25, 2026

Show HN: Remote workers find your crew https://ift.tt/BFoxK7J

Show HN: Remote workers find your crew Working from home? Are you a remote employee who "misses" going to the office? Well, let's be clear about what you actually miss. No one misses the feeling of having to go and be there for 8 hours. But many people miss friends. They miss being part of a crew: going to lunch, hearing about other people's lives in person, not over Zoom. Join a co-working space, you say? Yes. We have. It's like walking into a library and trying to talk to random people and getting nothing back. Zero part-of-a-crew feeling. https://ift.tt/wDFWot4 This app helps you find a crew and meet up for work to get that crew feeling. This is my first time using Cloudflare Workers for a webapp. The free plan is amazing! You get so much compared to anything else out there in terms of limits. The SQLite database they give you is just fine; I don't miss psql. January 24, 2026 at 11:54PM

Saturday, January 24, 2026

Show HN: Teemux – Zero-config log multiplexer with built-in MCP server https://ift.tt/TGsofdK

Show HN: Teemux – Zero-config log multiplexer with built-in MCP server I started to use AI agents for coding and quickly ran into a frustrating limitation – there is no easy way to share my development environment logs with AI agents. So that's what Teemux is: a simple CLI program that aggregates logs, makes them available to you as a developer (in a pretty UI), and makes them available to your AI coding agents via MCP. There is one implementation detail that I geek out about: it is zero-config and has built-in leader nomination for running the web server and MCP server. When you start one `teemux` instance, it starts a web server; when you start a second and third instance, they join the first server and start merging logs. If you were to kill the first instance, a new leader is nominated. This design lets you seamlessly add and remove nodes that share logs – a setup that historically would have required a central log aggregator. A super quick demo: npx teemux -- curl -N https://ift.tt/uIZ5XWR https://teemux.com/ January 23, 2026 at 09:19PM
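One common way to get zero-config leader nomination like this is to let instances race to bind a well-known local port: the winner runs the shared servers, and the rest detect the bound port and join as followers. A sketch of that pattern (the port number is hypothetical, and Teemux's actual mechanism may differ):

```python
import socket

PORT = 48765  # hypothetical well-known local port

def nominate_leader():
    """First instance to bind the port becomes leader (runs the web/MCP server);
    later instances see the bind fail and join as followers. If the leader dies,
    the port frees up and the next instance to retry wins the nomination."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        sock.bind(("127.0.0.1", PORT))
        sock.listen()
        return "leader", sock   # keep the socket open to hold the lease
    except OSError:
        sock.close()
        return "follower", None

role1, leader_sock = nominate_leader()  # simulate two instances in one process
role2, _ = nominate_leader()
print(role1, role2)  # leader follower
```

The bound socket doubles as both the lock and the service endpoint, which is what makes the scheme configuration-free.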

Show HN: MermaidTUI - Deterministic Unicode/ASCII diagrams in the terminal https://ift.tt/cZDfrkS

Show HN: MermaidTUI - Deterministic Unicode/ASCII diagrams in the terminal Hi HN, I built mermaidtui, a lightweight TypeScript engine that renders Mermaid flowcharts directly in your terminal as clean Unicode or ASCII boxes. Visualizing Mermaid diagrams usually requires a heavy setup: a headless browser (Puppeteer/Playwright), SVG-to-image conversion, or a web preview. That's fine for documentation sites, but it's overkill for TUI apps, CI logs, or quick terminal previews. The solution is a small engine (<= 1000 LOC) that uses a deterministic grid-based layout to render diagrams using box-drawing characters. Key features: - Intelligent routing: it uses corner characters (┌, ┐, └, ┘) for orthogonal paths. - Topological layering: attempts a readable, structured layout. - Support for chained edges: A --> B --> C works out of the box. - Zero heavy dependencies: no Mermaid internals, no Chromium, just pure TypeScript/JavaScript (commander is used by the CLI, not by the MermaidTUI library). I wanted a way to see high-quality diagrams in my CLI tools quickly; it’s great for SSH sessions where you can’t easily open an SVG. I was initially embedding this within a CLI tool I’m working on and figured I’d extract a library for others to use. I also initially used regex to parse, but I've since made the parser more robust. I'd love to hear your thoughts on the layout engine or any specific Mermaid syntax you'd like to see supported next! GitHub: https://ift.tt/REPWhDS npm i mermaidtui https://ift.tt/REPWhDS January 23, 2026 at 09:48PM
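The core of grid-based box rendering is simple: measure the label, then surround it with box-drawing characters on a character grid. A toy version of a single node (in Python for brevity; not mermaidtui's code, which is TypeScript):

```python
def draw_box(label: str) -> str:
    """Render one node as a Unicode box sized to its label, using the same
    corner characters an orthogonal-routing engine would reuse for edges."""
    inner = f" {label} "
    top = "┌" + "─" * len(inner) + "┐"
    middle = "│" + inner + "│"
    bottom = "└" + "─" * len(inner) + "┘"
    return "\n".join([top, middle, bottom])

print(draw_box("Start"))
# ┌───────┐
# │ Start │
# └───────┘
```

Layout then reduces to placing such boxes on a 2-D character grid (topological layer by layer) and routing `─`/`│` runs with corner characters between their ports.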

Friday, January 23, 2026

Show HN: Synesthesia, make noise music with a colorpicker https://ift.tt/kRPzHCo

Show HN: Synesthesia, make noise music with a colorpicker This is a (silly, little) app that lets you make noise music using a color picker as an instrument. When you click on a specific point in the color picker, a bit of JavaScript maps the binary representation of the clicked color's hex code to a "chord" in the 24-tone equal temperament scale. That chord is then played back using a throttled audio generation method implemented with Tone.js. NOTE! Turn the volume way down before using the site. It is noise music. :) https://visualnoise.ca January 22, 2026 at 11:22AM
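One plausible version of such a hex-to-chord mapping (the site's exact scheme isn't spelled out here, so the base pitch and bit assignment below are my guesses): treat each set bit of the 24-bit RGB value as a quarter-tone step above a base frequency, since each step in 24-tone equal temperament multiplies frequency by 2^(1/24).

```python
BASE_HZ = 220.0  # hypothetical base pitch (A3)

def color_to_chord(hex_code: str) -> list:
    """Map a 24-bit color to a chord: bit i set -> i quarter-tone steps
    (factor 2**(i/24)) above the base frequency."""
    value = int(hex_code.lstrip("#"), 16)
    steps = [i for i in range(24) if value >> i & 1]
    return [round(BASE_HZ * 2 ** (s / 24), 2) for s in steps]

print(color_to_chord("#000005"))  # bits 0 and 2 set -> [220.0, 233.08]
```

Note how dense colors (many set bits) produce dense clusters of quarter tones, which is exactly what makes the result read as noise music.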

Show HN: I've been using AI to analyze every supplement on the market https://ift.tt/0LxwSrb

Show HN: I've been using AI to analyze every supplement on the market Hey HN! This has been my project for a few years now. I recently brought it back to life after taking a pause to focus on my studies. My goal with this project is to separate fluff from science when shopping for supplements. I am doing this in 3 steps: 1) I index every supplement on the market (extract each ingredient, normalize by quantity). 2) I index every research paper on supplementation (rank every claim by effect type and effect size). 3) I link the data between supplements and research papers. Earlier last year, I put the project on pause because I ran into a few issues. Legal: shady companies send C&D letters demanding their products be taken down from the website. It's not something I had the mental capacity to respond to while also going through my studies. Not coincidentally, these are usually brands with big marketing budgets and poor ingredient-to-price ratios. Technical: I started this project when the first LLMs came out. I've built extensive internal evals to understand how LLMs are performing. The hallucinations at the time were simply too frequent to pass this data through to visitors. However, I recently re-ran my evals with Opus 4.5 and was very impressed; I am running out of scenarios I can think of or find where LLMs are bad at interpreting data. Business: I still haven't figured out how to monetize it, or even who the target customer is. Despite these challenges, I decided to restart my journey. My mission is to bring transparency (science and price) to the supplement market. My goal is NOT to increase the use of supplements, but rather to help consumers make informed decisions. Oftentimes, supplementation is not necessary, or there are natural ways to supplement (that's my focus this quarter – better education about natural supplementation). 
Some things that are helping my cause – Bryan Johnson's journey has drawn a lot more attention to healthy supplementation (blueprint). Thanks to Bryan's efforts, I had so many people in recent months reach out to ask about the state of the project – interest I've not had before. I am excited to restart this journey and to share it with HN. Your comments on how to approach this would be massively appreciated. Some key areas of the website: * Example of navigating supplements by ingredient https://ift.tt/KkGc47L * Example of research paper analyzed using AI https://ift.tt/6P5U3sj... * Example of looking for very specific strains or ingredients https://ift.tt/p5WZnv1 * Example of navigating research by health-outcomes https://ift.tt/lXKVSTi... * Example of product listing https://ift.tt/a4X5AJp https://pillser.com/ January 22, 2026 at 07:39PM

Thursday, January 22, 2026

Show HN: See the carbon impact of your cloud as you code https://ift.tt/rIiGnod

Show HN: See the carbon impact of your cloud as you code Hey folks, I’m Hassan, one of the co-founders of Infracost ( https://ift.tt/K6W4cqs ). Infracost helps engineers see and reduce the cloud cost of each infrastructure change before they merge their code. The way Infracost works is that we gather pricing data from Amazon Web Services, Microsoft Azure and Google Cloud into what we call a ‘Pricing Service’, which now holds around 9 million live price points (!!). Then we map these prices to infrastructure code. Once the mapping is done, we can show the cost impact of a code change before it is merged, directly in GitHub, GitLab etc. Kind of like a checkout screen for cloud infrastructure. We’ve been building since 2020 (we were part of the YC W21 batch), iterating on the product, building out a team, etc. Back in 2020, one of our users asked if we could also show the carbon impact alongside costs, and it has been itching my brain ever since. The biggest challenge has always been the carbon data. Mapping carbon data to infrastructure is time-consuming, but it is possible, since we’ve done it with cloud costs. But we need the raw carbon data first. The discussions that have happened over the last few years finally led me to a company called Greenpixie in the UK. A few of our existing customers were using them already, so I immediately connected with the founder, John. Greenpixie said they have the data (AHA!!), and their data is verified (ISO 14064 and aligned with the Greenhouse Gas Protocol). As soon as I talked to a few of their customers, I asked my team to see if we could actually, finally, do this and build it. My thinking is this: some engineers will care, and some will not (or maybe some will love it and some will hate it!). For those who care, cost and carbon are actually linked; if you reduce the carbon, you usually reduce the cost of the cloud too. It can act as another motivating factor. And now it is here, and I’d love your feedback. 
Try it out by going to https://ift.tt/mszeN8R , create an account, set up with the GitHub app or GitLab app, and send a pull request with Terraform changes (you can use our example terraform file). It will then show you the cost impact alongside the carbon impact, and how you can optimize it. I’d especially love to hear your feedback on if you think carbon is a big driver for engineers within your teams, or if carbon is a big driver for your company (i.e. is there anything top-down about carbon). AMA - I’ll be monitoring the thread :) Thanks https://ift.tt/mszeN8R January 21, 2026 at 08:34PM

Wednesday, January 21, 2026

Show HN: Xv6OS – A modified MIT xv6 with GUI https://ift.tt/gfzemd7

Show HN: Xv6OS – A modified MIT xv6 with GUI I've been working on a hobby project to transform the traditional xv6 teaching OS into a graphical environment. Key Technical Features: GUI Subsystem: I implemented a kernel-level window manager and drawing primitives. Mouse Support: Integrated a PS/2 mouse driver for navigation. Custom Toolchain: I used Python scripts (Pillow) and Go to convert PNG assets and TTF fonts into C arrays for the kernel. Userland: Includes a terminal, file explorer, text editor, and a Floppy Bird game. The project is built for i386 using a monolithic kernel design. You can find the full source code and build instructions here: https://ift.tt/h9KnvAd January 20, 2026 at 10:46PM
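The PNG-to-C-array step of the custom toolchain can be sketched in a few lines. This is not the project's actual script: `emit_c_array` is a hypothetical name, and the literal bytes stand in for pixel data that Pillow's `Image.open(...).tobytes()` would supply.

```python
# Sketch of the asset-embedding step: turning binary data (e.g. pixel
# bytes that Pillow would extract from a PNG) into a C array the kernel
# can compile in. The function name and byte values are illustrative,
# not taken from the actual Xv6OS toolchain.

def emit_c_array(name: str, data: bytes, per_line: int = 12) -> str:
    """Render raw bytes as a C unsigned-char array definition."""
    lines = []
    for i in range(0, len(data), per_line):
        chunk = data[i:i + per_line]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    body = "\n".join(lines)
    return f"const unsigned char {name}[{len(data)}] = {{\n{body}\n}};\n"

if __name__ == "__main__":
    # In the real pipeline these bytes would come from Image.open(...).tobytes()
    print(emit_c_array("cursor_icon", bytes([0, 255, 16, 32])))
```

The generated array can then be compiled straight into the kernel image with no filesystem support needed at boot.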

Show HN: Trinity – a native macOS Neovim app with Finder-style projects https://ift.tt/iQBaw4P

Show HN: Trinity – a native macOS Neovim app with Finder-style projects Hi HN, I built Trinity, a native macOS app that wraps Neovim with a project-centric UI. The goal was to keep Neovim itself untouched, but provide a more Mac-native workflow: – Finder-style project browser – Multiple projects/windows – Markdown preview, image/pdf viewer – Native menus, shortcuts, and windowing – Minimal UI, no GPU effects or terminal emulation It’s distributed directly (signed + notarized PKG) and uses Sparkle for incremental updates. This started as a personal tool after bouncing between terminal Neovim and heavier editors. Curious to hear feedback from other Neovim users, especially on what feels right or wrong in a GUI wrapper. Site: https://ift.tt/IkKNeiP Direct download: https://ift.tt/QVaWt3K... https://ift.tt/IkKNeiP January 20, 2026 at 11:14PM

Tuesday, January 20, 2026

Show HN: Homunculus – A self-rewriting Claude Code plugin https://ift.tt/uz29ikW

Show HN: Homunculus – A self-rewriting Claude Code plugin Homunculus is a Claude Code plugin that watches how you work and writes new capabilities into itself. If you keep doing something repeatedly—checking docs before API calls, running the same debug flow, formatting PRs a certain way—it notices and offers to automate it. Accept, and it writes a new markdown file into its own structure. The plugin literally changes based on what you do. It can create: Commands (explicit shortcuts) Skills (context-triggered behaviors) Subagents (specialists for specific problem domains) Hooks (event-driven, like "run tests when these files change") What actually works (v0.1): Commands are deterministic. Skills are probabilistic—they fire when Claude decides they're relevant, maybe 50-80% of the time. It's an experiment in making LLM tooling adaptive rather than static. State stored in .claude/homunculus/. Each project gets its own instance. https://ift.tt/mNBMOv8 January 19, 2026 at 11:23PM

Show HN: Subth.ink – write something and see how many others wrote the same https://ift.tt/e87bJPw

Show HN: Subth.ink – write something and see how many others wrote the same Hey HN, this is a small Haskell learning project that I wanted to share. It's just a website where you can see how many people write the exact same text as you (thought it was a fun idea). It's built using Scotty, SQLite, Redis and Caddy. Currently it's running on a small DigitalOcean droplet (1 GB RAM). Using Haskell for web development (specifically with Scotty) was slightly easier than I thought, but still relatively hard compared to other languages. One of my main friction points was Haskell's multiple string-like types: String, Text (& lazy), ByteString (& lazy), with each library choosing to consume a different one among them. There is also a soft requirement to learn monad transformers (e.g. to understand what liftIO is doing), which made the initial development more difficult. https://subth.ink/ January 20, 2026 at 12:04AM

Monday, January 19, 2026

Show HN: Xenia – A monospaced font built with a custom Python engine https://ift.tt/wyX69Pl

Show HN: Xenia – A monospaced font built with a custom Python engine I'm an engineer who spent the last year fixing everything I hated about monofonts (especially that double-story 'a'). I built a custom Python-based procedural engine to generate the weights because I wanted more logical control over the geometry. It currently has 700+ glyphs and deep math support. Regular weight is free for the community. I'm releasing more weights based on interest. https://ift.tt/y8iUAxb January 18, 2026 at 04:09PM

Sunday, January 18, 2026

Show HN: ChunkHound, a local-first tool for understanding large codebases https://ift.tt/t4BhIci

Show HN: ChunkHound, a local-first tool for understanding large codebases ChunkHound’s goal is simple: local-first codebase intelligence that helps you pull deep, core-dev-level insights on demand, generate always-up-to-date docs, and scale from small repos to enterprise monorepos — while staying free + open source and provider-agnostic (VoyageAI / OpenAI / Qwen3, Anthropic / OpenAI / Gemini / Grok, and more). I’d love your feedback — and if you’ve already given it, thank you for being part of the journey! https://ift.tt/m4cpSKj January 18, 2026 at 02:33AM

Show HN: Docker.how – Docker command cheat sheet https://ift.tt/436HNGF

Show HN: Docker.how – Docker command cheat sheet https://docker.how/ January 18, 2026 at 01:47AM

Show HN: UAIP Protocol – Secure settlement layer for autonomous AI agents https://ift.tt/mBsrWkS

Show HN: UAIP Protocol – Secure settlement layer for autonomous AI agents Hi HN! Creator here. I built UAIP (Universal Agent Interoperability Protocol) - infrastructure that enables AI agents from different companies (OpenAI, Anthropic, Microsoft) to securely transact with each other. The Problem: As AI agents become autonomous economic actors, they need: Cryptographic identity (not just API keys) Secure payment rails for cross-company transactions Automated compliance (EU AI Act, SOC2, GDPR) Forensic audit trails The Solution: A 5-layer security stack combining: Zero-knowledge proofs (Schnorr/Curve25519) for identity Multi-chain settlement (USDC on Base, Solana, Ethereum) RAG-based compliance auditing (Llama-3-Legal) Ed25519 signatures for non-repudiation Complete audit logging Technical Stack: Backend: Python, FastAPI, SQLite (WAL mode) Cryptography: NaCl, custom ZK-proof implementation Blockchain: Web3.py for multi-chain support Compliance: retrieval-augmented generation (RAG) auditing Use Case: A GPT agent pays a Claude agent for data analysis: Both prove identity via ZK-proofs The transaction is checked for compliance Settled in USDC on Base (<$0.01 fee) A complete audit trail is generated Why blockchain: Neutral settlement layer (no single company controls it) Instant microtransactions (traditional payments don't work for $0.01-$10) Programmable escrow (smart contracts) Verifiable computation (on-chain proofs) Open source (FSL-1.1-Apache-2.0). Built over the last few months after hitting these problems in AI automation work. Happy to answer technical questions! GitHub: https://github.com/jahanzaibahmad112-dotcom/UAIP-Protocol January 18, 2026 at 01:12AM
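The forensic-audit-trail idea can be illustrated with a hash chain: each entry's hash covers the previous entry's hash, so editing any past record invalidates everything after it. This is a simplified stand-in for the protocol's ledger (the real stack adds Ed25519 signatures and ZK identity proofs; the function names and record fields here are invented).

```python
# Illustrative sketch of a tamper-evident audit log. Each appended
# entry hashes the previous entry's hash together with its own payload,
# so any retroactive edit breaks chain verification.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(record, sort_keys=True)
    h = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": h})

def verify_chain(log: list) -> bool:
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"from": "gpt-agent", "to": "claude-agent", "usdc": 0.01})
append_entry(log, {"from": "claude-agent", "to": "gpt-agent", "usdc": 0.02})
assert verify_chain(log)
log[0]["record"]["usdc"] = 100  # tamper with history
assert not verify_chain(log)
```

Signatures would be layered on top of this so that each agent also cannot repudiate the entries it appended.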

Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API) https://ift.tt/PxrfXei

Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API) Hi HN, I’m releasing minikv, a distributed key-value and object store in Rust. What is minikv? minikv is an open-source, distributed storage engine built for learning, experimentation, and self-hosted setups. It combines a strongly-consistent key-value database (Raft), S3-compatible object storage, and basic multi-tenancy. I started minikv as a learning project about distributed systems, and it grew into something production-ready and fun to extend. Features/highlights: - Raft consensus with automatic failover and sharding - S3-compatible HTTP API (plus REST/gRPC APIs) - Pluggable storage backends: in-memory, RocksDB, Sled - Multi-tenant: per-tenant namespaces, role-based access, quotas, and audit - Metrics (Prometheus), TLS, JWT-based API keys - Easy to deploy (single binary, works with Docker/Kubernetes) Quick demo (single node):

git clone https://ift.tt/TtBE3ID
cd minikv
cargo run --release -- --config config.example.toml
curl localhost:8080/health/ready
# S3 upload + read
curl -X PUT localhost:8080/s3/mybucket/hello -d "hi HN"
curl localhost:8080/s3/mybucket/hello

Docs, cluster setup, and architecture details are in the repo. I’d love to hear feedback, questions, ideas, or your stories running distributed infra in Rust! Repo: https://ift.tt/VpgcQlh Crate: https://ift.tt/k95vl8Z https://ift.tt/VpgcQlh January 18, 2026 at 01:09AM

Saturday, January 17, 2026

Show HN: 1Code – Open-source Cursor-like UI for Claude Code https://ift.tt/31vWONd

Show HN: 1Code – Open-source Cursor-like UI for Claude Code Hi, we're Sergey and Serafim. We've been building dev tools at 21st.dev and recently open-sourced 1Code ( https://1code.dev ), a local UI for Claude Code. Here's a video of the product: https://www.youtube.com/watch?v=Sgk9Z-nAjC0 Claude Code has been our go-to for 4 months. When Opus 4.5 dropped, parallel agents stopped needing so much babysitting. We started trusting it with more: building features end to end, adding tests, refactors. Stuff you'd normally hand off to a developer. We started running 3-4 at once. Then the CLI became annoying: too many terminals, hard to track what's where, diffs scattered everywhere. So we built 1Code.dev, an app to run your Claude Code agents in parallel that works on Mac and Web. On Mac: run locally, with or without worktrees. On Web: run in remote sandboxes with live previews of your app, mobile included, so you can check on agents from anywhere. Running multiple Claude Codes in parallel dramatically sped up how we build features. What’s next: Bug bot for identifying issues based on your changes; QA Agent, that checks that new features don't break anything; Adding OpenCode, Codex, other models and coding agents. API for starting Claude Codes in remote sandboxes. Try it out! We're open-source, so you can just bun build it. If you want something hosted, Pro ($20/mo) gives you web with live browser previews hosted on remote sandboxes. We’re also working on API access for running Claude Code sessions programmatically. We'd love to hear your feedback! https://ift.tt/iojmtVx January 16, 2026 at 12:50AM

Friday, January 16, 2026

Taken with Transportation Podcast: The Road to City Hall

Taken with Transportation Podcast: The Road to City Hall
By Melissa Culross

Walking and talking with District 2 Supervisor Stephen Sherrill along Van Ness Avenue from the 38R bus stop to City Hall. How do San Francisco’s elected officials get to work? We find out, at least when it comes to half a dozen city supervisors, in the new episode of our Taken with Transportation podcast. In “The Road to City Hall,” we tag along with Supervisors Matt Dorsey, Myrna Melgar, Danny Sauter, Stephen Sherrill, Chyanne Chen and Bilal Mahmood as they head to the office. And we talk with them about transportation in San Francisco. District 11 Supervisor Chyanne Chen grabs a seat on the...



Published January 15, 2026 at 05:30AM
https://ift.tt/KH5WqNU

Show HN: OpenWork – an open-source alternative to Claude Cowork https://ift.tt/iOAep6b

Show HN: OpenWork – an open-source alternative to Claude Cowork hi hn, i built openwork, an open-source, local-first system inspired by claude cowork. it’s a native desktop app that runs on top of opencode (opencode.ai). it’s basically an alternative gui for opencode, which (at least until now) has been more focused on technical folks. the original seed for openwork was simple: i have a home server, and i wanted my wife and me to be able to run privileged workflows. things like controlling home assistant, deploying custom web apps (e.g. our custom recipe app recipes.benjaminshafii.com), or managing legal torrents, without living in a terminal. our initial setup was running the opencode web server directly and sharing credentials to it. that worked, but i found the web ui unreliable and very unfriendly for non-technical users. the goal with openwork is to bring the kind of workflows i’m used to running in the cli into a gui, while keeping a very deep extensibility mindset. ideally this grows into something closer to an obsidian-style ecosystem, but for agentic work. some core principles i had in mind: - open by design: no black boxes, no hosted lock-in. everything runs locally or on your own servers. (models don’t run locally yet, but both opencode and openwork are built with that future in mind.) - hyper extensible: skills are installable modules via a skill/package manager, using the native opencode plugin ecosystem. - non-technical by default: plans, progress, permissions, and artifacts are surfaced in the ui, not buried in logs. you can already try it: - there’s an unsigned dmg - or you can clone the repo, install deps, and if you already have opencode running it should work right away it’s very alpha, lots of rough edges. i’d love feedback on what feels the roughest or most confusing. happy to answer questions. https://ift.tt/0MCoDBW January 14, 2026 at 10:25AM

Thursday, January 15, 2026

Show HN: Webctl – Browser automation for agents based on CLI instead of MCP https://ift.tt/SsDzOcQ

Show HN: Webctl – Browser automation for agents based on CLI instead of MCP https://ift.tt/6WX54D3 January 14, 2026 at 08:04PM

Show HN: Repomance: A Tinder style app for GitHub repo discovery https://ift.tt/nOQSK7x

Show HN: Repomance: A Tinder style app for GitHub repo discovery Hi everyone, Repomance is an app for discovering curated and trending repositories. Swipe to star them directly using your GitHub account. It is currently available on iOS, iPadOS, and macOS. I plan to develop an Android version once the app reaches 100 users. Repomance is open source: https://ift.tt/w4PcdKo All feedback is welcome, hope you enjoy using it. https://ift.tt/rJIg23q January 15, 2026 at 12:24AM

Show HN: Sparrow-1 – Audio-native model for human-level turn-taking without ASR https://ift.tt/BzCUbXK

Show HN: Sparrow-1 – Audio-native model for human-level turn-taking without ASR For the past year I've been working to rethink how AI manages timing in conversation at Tavus. I've spent a lot of time listening to conversations. Today we're announcing the release of Sparrow-1, the most advanced conversational flow model in the world. Some technical details: - Predicts conversational floor ownership, not speech endpoints - Audio-native streaming model, no ASR dependency - Human-timed responses without silence-based delays - Zero interruptions at sub-100ms median latency - In benchmarks Sparrow-1 beats all existing models at real world turn-taking baselines I wrote more about the work here: https://ift.tt/ZPbis1G... https://ift.tt/wquciJY January 14, 2026 at 11:31PM

Wednesday, January 14, 2026

Closing Potrero Yard: How We’ll Keep Muni Moving with Feb. 14 Service Changes

Closing Potrero Yard: How We’ll Keep Muni Moving with Feb. 14 Service Changes
By Brian Haagsman

The 49 Van Ness-Mission is one of the busiest routes we maintain at Potrero Yard. On Feb. 14, we’re taking two major steps to keep Muni fast and reliable. First, we’ll be making several changes to bus stops and routes to: Improve reliability Provide better connections to regional transit Avoid delays And to improve Muni for years to come, we are working to replace Potrero Yard with a modern bus maintenance facility through the Potrero Yard Modernization Project. For crews to prepare for future construction, we need to close Potrero Yard in February 2026. We’ll move existing bus operations and...



Published January 13, 2026 at 05:30AM
https://ift.tt/VjweJch

Show HN: Self-host Reddit – 2.38B posts, works offline, yours forever https://ift.tt/HE6nygs

Show HN: Self-host Reddit – 2.38B posts, works offline, yours forever Reddit's API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware. The key point: This doesn't touch Reddit's servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone. What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine. API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools. Self-hosting options: - USB drive / local folder (just open the HTML files) - Home server on your LAN - Tor hidden service (2 commands, no port forwarding needed) - VPS with HTTPS - GitHub Pages for small archives Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away. Scale: Tens of millions of posts per instance. PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic. How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is "trust but verify" – it accelerates the boring parts but you still own the architecture. 
Live demo: https://online-archives.github.io/redd-archiver-example/ GitHub: https://ift.tt/UkorjtI (Public Domain) Pushshift torrent: https://ift.tt/vmz0acY... https://ift.tt/UkorjtI January 13, 2026 at 09:05PM
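The dump-to-static-HTML core can be sketched with the standard library alone. The real tool uses Jinja2 templates and streams compressed .zst dumps; this stand-in invents a tiny post schema and swaps Jinja2 for `string.Template` so it runs with no dependencies.

```python
# Minimal sketch of rendering archived posts into a self-contained
# index.html: no JavaScript, no external requests. The post schema
# (title/author/score) is made up for illustration; the real archiver
# reads Pushshift's NDJSON records.
import html
from string import Template

PAGE = Template("<html><body><h1>$sub</h1>\n$items\n</body></html>")
ITEM = Template("<p><b>$title</b> by $author ($score points)</p>")

def render_archive(sub: str, posts: list) -> str:
    items = "\n".join(
        ITEM.substitute(
            title=html.escape(p["title"]),
            author=html.escape(p["author"]),
            score=p["score"],
        )
        for p in posts
    )
    return PAGE.substitute(sub=html.escape(sub), items=items)

page = render_archive("r/selfhosted", [
    {"title": "Own your data", "author": "alice", "score": 42},
])
```

Because the output is plain HTML strings written to disk, the same archive works from a USB stick, a LAN server, or GitHub Pages, exactly as the post describes.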

Tuesday, January 13, 2026

Show HN: AI video generator that outputs React instead of video files https://ift.tt/Kc47AHn

Show HN: AI video generator that outputs React instead of video files Hey HN! This is Mayank from Outscal with a new update. Our website is now live. Quick context: we built a tool that generates animated videos from text scripts. The twist: instead of rendering pixels, it outputs React/TSX components that render as the video. Try it: https://ai.outscal.com/ Sample video: https://ift.tt/csQAhfg... You pick a style (pencil sketch or neon), enter a script (up to 2000 chars), and it runs: scene direction → ElevenLabs audio → SVG assets → Scene Design → React components → deployed video. What we learned building this: We built the first version on Claude Code. Even with a human triggering commands, agents kept going off-script — they had file tools and would wander off reading random files, exploring tangents, producing inconsistent output. The fix was counterintuitive: fewer tools, not more guardrails. We stripped each agent to only what it needed and pre-fed context instead of letting agents fetch it themselves. Quality improved immediately. We wouldn't launch the web version until this was solid. Moved to Claude Agent SDK, kept the same constraints, now fully automated. Happy to discuss the agent architecture, why React-as-video, or anything else. https://ai.outscal.com/ January 13, 2026 at 12:33AM

Show HN: Sidecar – AI Social Manager (Analyzes past hits to write new posts) https://ift.tt/zK4L3R0

Show HN: Sidecar – AI Social Manager (Analyzes past hits to write new posts) Hi HN, I built Sidecar ( https://sidecar.bz ) because I was having issues maintaining a social media presence for my last startup. I would spend a lot of time trying to create content, but I often froze up or burned out, and the marketing died. How it works: Instead of guessing what to write, Sidecar connects to your existing accounts (Threads, Bluesky, Mastodon, Facebook, Instagram) and analyzes your past posts to see what actually worked. It uses that data to generate weeks of new, text-based content that mimics your successful posts, which you can then bulk schedule in one go. I’d love to hear what you think of Sidecar. You can use code HNLAUNCH for a free month if you want to test the ai features. https://ift.tt/fYwKP52 January 12, 2026 at 10:48PM

Monday, January 12, 2026

Sunday, January 11, 2026

Show HN: Play poker with LLMs, or watch them play against each other https://ift.tt/It1BUe6

Show HN: Play poker with LLMs, or watch them play against each other I was curious to see how some of the latest models behaved and played no limit texas holdem. I built this website which allows you to: Spectate: Watch different models play against each other. Play: Create your own table and play hands against the agents directly. https://llmholdem.com/ January 11, 2026 at 12:57AM

Show HN: Marten – Elegant Go web framework (nothing in the way) https://ift.tt/9RQGeFN

Show HN: Marten – Elegant Go web framework (nothing in the way) https://ift.tt/rbt8SHg January 11, 2026 at 02:40AM

Show HN: I used Claude Code to discover connections between 100 books https://ift.tt/gaVRNwM

Show HN: I used Claude Code to discover connections between 100 books I think LLMs are overused to summarise and underused to help us read deeper. I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them. I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insight that I was baking into the prompts, and the results weren't particularly surprising. On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave actually interesting results and required very little orchestration in comparison. One of my favourite trails of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass movement charlatans ( https://ift.tt/4EQwsRW ). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset. Details: * The books are picked from HN’s favourites (which I collected before: https://ift.tt/m1RrVIF ). * Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10. * Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes. * There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics co-occurring within a chunk window. * Everything is stored in SQLite and manipulated using a set of CLI tools. I wrote more about the process here: https://ift.tt/4IqHMYg I’m curious if this way of reading resonates with anyone else - LLM-mediated or not. https://ift.tt/LVAwSbj January 10, 2026 at 10:26PM
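The embedding-similarity browse mode boils down to nearest-neighbour search over chunk vectors. A minimal sketch, with invented chunk ids and tiny 3-dimensional embeddings standing in for real ones:

```python
# Pure-Python cosine-similarity neighbour search, the core of an
# "embedding similarity" browse mode. The real project stores vectors
# in SQLite and exposes this through CLI tools.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, chunks, k=2):
    """chunks: {chunk_id: embedding}; returns the top-k ids by similarity."""
    ranked = sorted(chunks, key=lambda cid: cosine(query, chunks[cid]), reverse=True)
    return ranked[:k]

chunks = {
    "jobs-rdf":  [0.9, 0.1, 0.0],
    "theranos":  [0.8, 0.2, 0.1],
    "gardening": [0.0, 0.1, 0.9],
}
print(nearest([1.0, 0.0, 0.0], chunks))  # thematically close chunks rank first
```

Topic-tree siblings and chunk-window co-occurrence are complementary lookups over the same store, traded off against this purely geometric view.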

Saturday, January 10, 2026

Show HN: Various shape regularization algorithms https://ift.tt/PvGjFtT

Show HN: Various shape regularization algorithms Shape regularization is a technique used in computational geometry to clean up noisy or imprecise geometric data by aligning segments to common orientations and adjusting their positions to create cleaner, more regular shapes. I needed a Python implementation, so I started with the examples implemented in CGAL, then added a couple more for snap and joint regularization and metric regularization. https://ift.tt/D2uExbR January 9, 2026 at 07:43AM
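As a taste of what segment-orientation regularization does, here is a toy version that snaps a segment's angle to the nearest multiple of 45 degrees while preserving its midpoint and length. The linked library implements the full CGAL-style formulations; this shortcut just conveys the intuition.

```python
# Toy orientation regularization: rotate a segment about its midpoint
# so its angle lands on the nearest multiple of `step_deg`, keeping
# the segment length unchanged.
import math

def snap_segment(p1, p2, step_deg=45.0):
    (x1, y1), (x2, y2) = p1, p2
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    snapped = round(angle / step_deg) * step_deg
    length = math.hypot(x2 - x1, y2 - y1)
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    dx = math.cos(math.radians(snapped)) * length / 2
    dy = math.sin(math.radians(snapped)) * length / 2
    return (mx - dx, my - dy), (mx + dx, my + dy)

# A nearly-horizontal noisy segment becomes exactly horizontal:
a, b = snap_segment((0.0, 0.0), (10.0, 0.4))
```

The real algorithms solve a joint optimization over many segments at once, so that neighbouring segments agree on a shared orientation rather than each snapping independently.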

Show HN: CLIs Are All You Need for Agents https://ift.tt/l5Ft7WQ

Show HN: CLIs Are All You Need for Agents Fun agent I've been playing with - the idea is it only has access to a bash tool, and it's directed to create CLIs for use (with additional direction to make the CLIs composable, follow the Unix philosophy, etc). It persists these CLIs, and knowledge about them gets injected into the system prompt dynamically, so each time it runs it gets access to a larger and larger toolset of composable CLIs. One interesting dynamic that's emerged from this is that I've started using these CLIs myself, since they're the same interface for the agent and for me, and it's turned into a kind of non-chat channel to interact with the agent. One example - I'll add tasks throughout the day myself using the `tasks` CLI it made, then when I interact with the agent it'll run `tasks list` and see everything I've added, or use it to prioritize/update things for me. Later on when I run `tasks list` myself I see all the updates/priorities it set. https://ift.tt/2mQ3FeO January 9, 2026 at 09:42PM
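The shared-interface dynamic described above works because state lives in a plain file that both the human and the agent read through the same commands. A minimal sketch of such a `tasks`-style tool, with an invented JSON layout (the post doesn't describe the real CLI's internals):

```python
# Sketch of a composable `tasks` CLI: state in a plain JSON file, the
# same `add` / `list` interface whether a person or an agent runs it.
import json
import sys
from pathlib import Path

STORE = Path("tasks.json")  # hypothetical state file

def load():
    return json.loads(STORE.read_text()) if STORE.exists() else []

def add(title):
    tasks = load()
    tasks.append({"id": len(tasks) + 1, "title": title, "done": False})
    STORE.write_text(json.dumps(tasks, indent=2))

def list_tasks():
    return [f'[{"x" if t["done"] else " "}] {t["id"]} {t["title"]}' for t in load()]

if __name__ == "__main__":
    cmd = sys.argv[1] if len(sys.argv) > 1 else "list"
    if cmd == "add":
        add(" ".join(sys.argv[2:]))
    print("\n".join(list_tasks()))
```

Because output is plain text on stdout, the tool pipes into `grep`, `sort`, and the agent's other self-made CLIs in the Unix-philosophy spirit the post mentions.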

Friday, January 9, 2026

Show HN: macOS menu bar app to track Claude usage in real time https://ift.tt/c6WxGZS

Show HN: macOS menu bar app to track Claude usage in real time I built a macOS menu bar app to track Claude usage in real time via API after hitting limits mid-flow too often. Signed and notarised by Apple. Open source. https://ift.tt/D2Go4jW https://ift.tt/TIxBouC https://ift.tt/D2Go4jW January 8, 2026 at 11:54PM

Show HN: TierHive – Hourly-billed NAT VPS with private /24 subnets https://ift.tt/WjMtgfF

Show HN: TierHive – Hourly-billed NAT VPS with private /24 subnets This idea has been floating in my head for about 10 years. Some of you might remember LowEndSpirit.com back before it became a forum, I started that. I've been obsessed with making tiny, cheap VPS actually useful ever since. TierHive is my attempt to make 128MB VPS great again :) It's a NAT VPS (KVM) platform with true hourly billing. Spin up a server, use it for 3 hours, delete it, pay for 3 hours. No monthly commitments, no minimums beyond a $5 top-up. The tradeoff is NAT (no dedicated IPv4), but I've tried to make that less painful: - Every account gets a /24 private subnet with full DHCP management. - Every server gets auto ssh port forwarding and a few TCP/UDP ports - Built-in HAProxy with Let's Encrypt SSL, load balancing, and auto-failover - WireGuard mesh between locations (Canada, Germany, UK currently) - PXE/iPXE boot support for custom installs - Email relay with DKIM/SPF - Recipe system for one-click deploys Still in alpha. Small team, rough edges, but I've been running my own stuff on it for months. Would love feedback — especially on whether the NAT tradeoff kills it for your use cases, or what's missing. (IPv6 is coming) https://tierhive.com https://tierhive.com/ January 8, 2026 at 11:14PM

Thursday, January 8, 2026

Show HN: I visualized the entire history of Citi Bike in the browser https://ift.tt/AvtwZnd

Show HN: I visualized the entire history of Citi Bike in the browser Each moving arrow represents one real bike ride out of 291 million, and if you've ever taken a Citi Bike before, you are included in this massive visualization! You can search for your ride using Cmd + K and your Citi Bike receipt, which should give you the time of your ride and start/end station. Everything is open source: https://ift.tt/BuN74fI Some technical details: - No backend! Processed data is stored in parquet files on a Cloudflare CDN, and queried directly by DuckDB WASM - deck.gl w/ Mapbox for GPU-accelerated rendering of thousands of concurrent animated bikes - Web Workers decode polyline routes and do as much precomputation as possible off the main thread - Since only (start, end) station pairs are provided, routes are generated by querying OSRM for the shortest path between all 2,400+ station pairs https://bikemap.nyc/ January 8, 2026 at 12:27AM
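The Web Workers mentioned above decode encoded polylines off the main thread; the same standard Google encoded-polyline algorithm (1e-5 precision, which OSRM emits) looks like this in Python:

```python
# Decode a Google-format encoded polyline into (lat, lng) pairs.
# Each coordinate is a zigzag-encoded delta packed into 5-bit chunks.
def decode_polyline(encoded: str):
    points, index, lat, lng = [], 0, 0, 0
    while index < len(encoded):
        for coord in ("lat", "lng"):
            result, shift = 0, 0
            while True:
                b = ord(encoded[index]) - 63
                index += 1
                result |= (b & 0x1F) << shift
                shift += 5
                if b < 0x20:  # high bit clear: last chunk of this value
                    break
            delta = ~(result >> 1) if result & 1 else result >> 1
            if coord == "lat":
                lat += delta
            else:
                lng += delta
        points.append((lat / 1e5, lng / 1e5))
    return points

# Canonical example from the polyline format documentation:
print(decode_polyline("_p~iF~ps|U_ulLnnqC_mqNvxq`@"))
# → [(38.5, -120.2), (40.7, -120.95), (43.252, -126.453)]
```

Delta encoding keeps consecutive route points small, which is why 291 million rides' worth of geometry stays compact enough to ship as static files.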

Show HN: Seapie – a Python debugger where breakpoints drop into a REPL https://ift.tt/4mOQzoF

Show HN: Seapie – a Python debugger where breakpoints drop into a REPL https://ift.tt/6dn09D8 January 7, 2026 at 11:28PM

Show HN: Free and local browser tool for designing gear models for 3D printing https://ift.tt/XdVoytL

Show HN: Free and local browser tool for designing gear models for 3D printing Just built a local tool for designing gears that kinda looks and works nicely https://ift.tt/N9UYZ2G January 7, 2026 at 02:12PM

Wednesday, January 7, 2026

Show HN: Dimensions – Terminal Tab Manager https://ift.tt/U0BWipA

Show HN: Dimensions – Terminal Tab Manager A terminal TUI that leverages tmux to make managing terminal tabs easier and friendlier. https://ift.tt/cmLpg1n January 6, 2026 at 10:18PM

Tuesday, January 6, 2026

Show HN: CloudMasters TUI – Shop Boxes Across AWS, Azure, GCP, Hetzner, Vultr https://ift.tt/Sbq8A7V

Show HN: CloudMasters TUI – Shop Boxes Across AWS, Azure, GCP, Hetzner, Vultr https://ift.tt/w19iNYb January 6, 2026 at 12:37AM

Show HN: Unicode cursive font generator that checks cross-platform compatibility https://ift.tt/rJGuKxp

Show HN: Unicode cursive font generator that checks cross-platform compatibility Hi HN, Unicode “cursive” and script-style fonts are widely used on social platforms, but many of them silently break depending on where they’re pasted — some render as tofu, some get filtered, and others display inconsistently across platforms. I built a small web tool that explores this problem from a compatibility-first angle: Instead of just converting text into cursive Unicode characters, the tool: • Generates multiple cursive / script variants based on Unicode blocks • Evaluates how safe each variant is across major platforms (Instagram, TikTok, Discord, etc.) • Explains why certain Unicode characters are flagged or unstable on specific platforms • Helps users avoid styles that look fine in one app but break in another Under the hood, it’s essentially mapping Unicode script characters and classifying them based on known platform filtering and rendering behaviors, rather than assuming “Unicode = universal.” This started as a side project after repeatedly seeing “fancy text” fail unpredictably in real usage. Feedback, edge cases, or Unicode quirks I may have missed are very welcome. https://ift.tt/KUMJcAZ January 1, 2026 at 07:37PM
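The conversion side of such a tool is essentially a codepoint shift. Here is a sketch using the Mathematical Bold Script block, which is contiguous for A-Z and a-z; the plain Script block, by contrast, fills gaps from Letterlike Symbols (e.g. U+212C for B), which is exactly the kind of quirk that renders inconsistently across platforms.

```python
# Map ASCII letters into the Mathematical Bold Script block
# (capitals at U+1D4D0, lowercase at U+1D4EA). Non-letters pass
# through unchanged. This shows the mapping idea only; the actual
# tool layers platform-compatibility scoring on top.
def to_bold_script(text: str) -> str:
    out = []
    for ch in text:
        if "A" <= ch <= "Z":
            out.append(chr(0x1D4D0 + ord(ch) - ord("A")))
        elif "a" <= ch <= "z":
            out.append(chr(0x1D4EA + ord(ch) - ord("a")))
        else:
            out.append(ch)  # digits, punctuation, spaces pass through
    return "".join(out)

print(to_bold_script("Hello HN"))
```

Since these codepoints sit outside the Basic Multilingual Plane, they need surrogate pairs in UTF-16, one of the reasons some platforms filter or mangle them.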

Monday, January 5, 2026

Show HN: I made R/place for LLMs https://ift.tt/mP8N0ru

Show HN: I made R/place for LLMs I built AI Place, an LLM-controlled pixel canvas inspired by r/place. Instead of users placing pixels, an LLM paints the grid continuously and you can watch it evolve live. The theme rotates daily. Currently, the canvas is scored using CLIP ViT-B/32 against a prompt (e.g., Pixelart of ${theme}). The highest-scoring snapshot is saved to the archive at the end of each day. The agents work in a simple loop: Input: Theme + image of current canvas Output: Python code to update specific pixel coordinates + a one-word description Tech: Next.js, SSE realtime updates, NVIDIA NIM (Mistral Large 3/GPT-OSS/Llama 4 Maverick) for the painting decisions Would love feedback! (or ideas for prompts/behaviors to try) https://art.heimdal.dev January 5, 2026 at 01:20AM
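The agent loop's output step implies a small, constrained painting API for the generated code to call. A toy version, with invented grid size and colours, that applies a batch of pixel updates and rejects out-of-bounds writes:

```python
# Sketch of a guarded canvas API: model-generated code can only touch
# pixels through set_pixel, which enforces the grid bounds. Grid size
# and colour values are made up for the example.
SIZE = 8
grid = [["#ffffff"] * SIZE for _ in range(SIZE)]

def set_pixel(x: int, y: int, color: str) -> bool:
    """Apply one update; reject out-of-bounds coordinates."""
    if not (0 <= x < SIZE and 0 <= y < SIZE):
        return False
    grid[y][x] = color
    return True

# A model response might amount to a batch of (x, y, color) updates:
updates = [(0, 0, "#000000"), (1, 0, "#000000"), (99, 99, "#ff0000")]
applied = sum(set_pixel(*u) for u in updates)
print(f"{applied}/{len(updates)} updates applied")
```

Constraining the model to an API like this keeps a misbehaving completion from corrupting state outside the canvas.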

Show HN: Hover – IDE style hover documentation on any webpage https://ift.tt/GUVPFdn

Show HN: Hover – IDE style hover documentation on any webpage I thought it would be interesting to have IDE-style hover docs outside the IDE. Hover is a Chrome extension that gives you IDE-style hover tooltips on any webpage: documentation sites, ChatGPT, Claude, etc. How it works: - When a code block comes into view, the extension detects tokens and sends the code to an LLM (via OpenRouter or a custom endpoint) - The LLM generates documentation for tokens worth documenting, which gets cached - On hover, the cached documentation is displayed instantly A few things I wanted to get right: - Website permissions are granular and use Chrome's permission system, so the extension only runs where you allow it - Custom endpoints let you skip OpenRouter entirely – if you're at a company with its own infra, you can point it at AWS Bedrock, Google AI Studio, or whatever you have Built with TypeScript, Vite, and the Chrome extension APIs. Coming to the Chrome Web Store soon. Would love feedback on the onboarding experience and general UX – there were a lot of design decisions I wasn't sure about. Happy to answer questions about the implementation. https://ift.tt/QH7DYc1 January 5, 2026 at 12:13AM
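The caching behaviour described can be sketched as a dictionary keyed by (code-hash, token), so each code block pays for one LLM round-trip per token and every later hover is served instantly. All names here are invented, and `fetch_docs` stands in for the real OpenRouter call (the actual extension is TypeScript; this is a language-agnostic sketch in Python).

```python
# Hover-doc cache sketch: generate documentation once per
# (code block, token) pair, serve from cache afterwards.
import hashlib

cache = {}
calls = {"llm": 0}  # counts simulated LLM round-trips

def fetch_docs(code: str, token: str) -> str:
    calls["llm"] += 1  # pretend this hits the LLM endpoint
    return f"docs for {token}"

def hover(code: str, token: str) -> str:
    key = (hashlib.sha256(code.encode()).hexdigest(), token)
    if key not in cache:
        cache[key] = fetch_docs(code, token)
    return cache[key]

snippet = "resp = requests.get(url)"
hover(snippet, "requests.get")
hover(snippet, "requests.get")  # second hover: served from cache
assert calls["llm"] == 1
```

Hashing the block rather than the page URL means the same snippet pasted into ChatGPT and a docs site shares one cache entry.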

Sunday, January 4, 2026

Show HN: ZELF – A modular ELF64 packer with 22 vintage and modern codecs https://ift.tt/TliJ7Df

Show HN: ZELF – A modular ELF64 packer with 22 vintage and modern codecs https://ift.tt/vhPecVx January 4, 2026 at 12:59AM

Show HN: Vibe Coding a static site on a $25 Walmart Phone https://ift.tt/YqVbUJa

Show HN: Vibe Coding a static site on a $25 Walmart Phone Hi! I took a cheap $25 Walmart phone and put a static server on it. Why? Just for a fun weekend project. I used Claude Code for most of the setup. I had a blast. It's running Termux, Andronix, nginx, cloudflared and even a Prometheus node exporter. Here's the site: https://ift.tt/hbmBjk4 https://ift.tt/mJ9rp7B January 4, 2026 at 01:09AM

Show HN: A New Year gift for Python devs–My self-healing project's DNA analyzer https://ift.tt/7kwALMI

Show HN: A New Year gift for Python devs–My self-healing project's DNA analyzer I built a system that maps its own "DNA" using AST to enable self-healing capabilities. Instead of a standard release, I’ve hidden the core mapping engine inside a New Year gift file in the repo for those who like to explore code directly. It’s not just a script; it’s the architectural vision behind Ultra Meta. Check the HAPPY_NEW_YEAR.md file for the source https://ift.tt/hfm0AQa January 4, 2026 at 12:50AM
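Mapping a project's "DNA" with Python's `ast` module might look something like this minimal sketch: record each function's name, arguments, and the functions it calls. This is an illustration of the general technique, not Ultra Meta's actual mapping engine.

```python
# Illustrative sketch of AST-based "DNA" mapping: parse a module and
# record, for each function, its arguments and the names it calls.
# Not the project's engine; just the general idea.
import ast

def map_dna(source: str) -> dict:
    tree = ast.parse(source)
    dna = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = [
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            ]
            dna[node.name] = {
                "args": [a.arg for a in node.args.args],
                "calls": calls,
            }
    return dna

dna = map_dna("def f(x):\n    return g(x) + 1\n\ndef g(y):\n    return y")
```

A map like this (who calls whom, with what arguments) is the kind of structural self-knowledge a self-healing system could check itself against.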

Show HN: Turbo – Python Web Framework https://ift.tt/QTnXsUf

Show HN: Turbo – Python Web Framework https://ift.tt/I4rSsL8 January 3, 2026 at 10:45PM

Saturday, January 3, 2026

Show HN: Go-Highway – Portable SIMD for Go https://ift.tt/yvVsZop

Show HN: Go-Highway – Portable SIMD for Go Go 1.26 adds native SIMD via GOEXPERIMENT=simd. This library provides a portability layer so the same code runs on AVX2, AVX-512, or falls back to scalar. Inspired by Google's Highway C++ library. Includes vectorized math (exp, log, sin, tanh, sigmoid, erf) since those come up a lot in ML/scientific code and the stdlib doesn't have SIMD versions. algo.SigmoidTransform(input, output) Requires go1.26rc1. Feedback welcome. https://ift.tt/jEPKxZL January 3, 2026 at 04:06AM
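The portability-layer idea above (same call, best available backend, scalar fallback) can be sketched in Python for illustration; the library itself is Go, so this only shows the dispatch pattern, not its API.

```python
# Dispatch pattern behind a portability layer, sketched in Python:
# pick a faster vectorized backend at import time if available, and
# fall back to a plain scalar loop otherwise. Illustrative only.
import math

def _sigmoid_scalar(xs):
    # Scalar fallback: always available.
    return [1.0 / (1.0 + math.exp(-x)) for x in xs]

try:
    import numpy as _np

    def sigmoid(xs):
        # Vectorized backend; same results as the scalar path.
        return (1.0 / (1.0 + _np.exp(-_np.asarray(xs, dtype=float)))).tolist()
except ImportError:
    sigmoid = _sigmoid_scalar

out = sigmoid([0.0, 1.0])
```

Callers always invoke `sigmoid` and never need to know which backend ran — the same contract Go-Highway offers across AVX2, AVX-512, and scalar.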

Show HN: Fluxer – open-source Discord-like chat https://ift.tt/KNUt3WG

Show HN: Fluxer – open-source Discord-like chat Hey HN, and happy new year! I'm Hampus Kraft [1], a 22-year-old software developer nearing completion of my BSc in Computer Engineering at KTH Royal Institute of Technology in Sweden. I've been working on Fluxer on and off for about 5 years, but recently decided to work on it full-time and see how far it could take me. Fluxer is an open source [2] communication platform for friends, groups, and communities (text, voice, and video). It aims for "modern chat app" feature coverage with a familiar UX, while being developed in the open and staying FOSS (AGPLv3). The codebase is largely written in TypeScript and Erlang. Try it now (no email or password required): https://ift.tt/HcQ5G8s – this creates an "unclaimed account" (date of birth only) so you can explore the platform. Unclaimed accounts can create/join communities but have some limitations. You can claim your account with email + password later if you want. I've developed this solo, with limited capital from some early supporters and testers. Please keep this in mind if you find what I offer today lacking; I know it is! I'm sharing this now to find contributors and early supporters who want to help shape this into the chat app you actually want. ~~~ Fluxer is not currently end-to-end encrypted, nor is it decentralised or federated. I'm open to implementing E2EE and federation in the future, but they're complex features, and I didn't want to end up like other community chat apps [3] that get criticised for broken core functionality and missing expected features while chasing those goals. I'm most confident on the backend and web app, so that's where I've focused. After some frustrating attempts with React Native, I'm sticking with a mobile PWA for now (including push notification support) while looking into Skip [4] for a true native app. If someone with more experience in native dev has any thoughts, let me know!
Many tech-related communities that would benefit from not locking information into walled gardens still choose Discord or Slack over forum software because of the convenience these platforms bring, a choice that is often criticised [5][6][7]. I will not only work on adding forums and threads, but also enable opt-in publishing of forums to the open web, including RSS/Atom feeds, to give you the best of both worlds. ~~~ I don't intend to license any part of the software under anything but the AGPLv3, limit the number of messages [8], or have an SSO tax [9]. Business-oriented features like SSO will be prioritised on the roadmap with your support. You'd only pay for support and optionally for sponsored features or fixes you'd like prioritised. I don't currently plan on SaaS, but I'm open to support and maintenance contracts. ~~~ I want Fluxer to become an easy-to-deploy, fully FOSS Discord/Slack-like platform for companies, communities, and individuals who want to own their chat infrastructure, or who wish to support an independent and bootstrapped hosted alternative. But I need early adopters and financial support to keep working on it full-time. I'm also very interested in code contributors since this is a challenging project to work on solo. My email is hampus@fluxer.app. ~~~ There’s a lot more to be said; I’ll be around in the comments to answer questions and fix things quickly if you run into issues. Thank you, and wishing you all the best in the new year! [1] https://ift.tt/9RxGehE [2] https://ift.tt/I4NliQm [3] https://ift.tt/iCxyNXW [4] https://skip.tools/ [5] https://ift.tt/LZS1GQv [6] https://ift.tt/FIKJiRt [7] https://ift.tt/j27PHQW [8] https://ift.tt/VOF3xWh [9] https://sso.tax/ https://fluxer.app January 3, 2026 at 01:30AM

Show HN: I mapped System Design concepts to AI Prompts to stop bad code https://ift.tt/qcePGzs

Show HN: I mapped System Design concepts to AI Prompts to stop bad code https://ift.tt/9hrJyKQ January 3, 2026 at 12:15AM

Friday, January 2, 2026

Show HN: Feature detection exploration in Lidar DEMs via differential decomp https://ift.tt/vXkeO9w

Show HN: Feature detection exploration in Lidar DEMs via differential decomp I'm not a geospatial expert — I work in AI/ML. This started when I was exploring LiDAR data with agentic assistance and noticed that different signal decomposition methods revealed different terrain features. The core idea: if you systematically combine decomposition methods (Gaussian, bilateral, wavelet, morphological, etc.) with different upsampling techniques, each combination has characteristic "failure modes" that selectively preserve or eliminate certain features. The differences between outputs become feature-specific filters. The framework tests 25 decomposition × 19 upsampling methods across parameter ranges — about 40,000 combinations total. The visualization grid makes it easy to compare which methods work for what. Built in Cursor with Opus 4.5, NumPy, SciPy, scikit-image, PyWavelets, and OpenCV. Apache 2.0 licensed. I'd appreciate feedback from anyone who actually works with elevation data. What am I missing? What's obvious to practitioners that I wouldn't know? https://ift.tt/ebXoNxj January 1, 2026 at 05:59AM
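The core idea above — that the *difference* between two smoothing methods acts as a feature-specific filter — can be shown with a tiny 1-D example. The project works on 2-D DEMs with SciPy and friends; this pure-Python sketch just demonstrates the principle: a box filter blurs a step edge while a median filter preserves it, so their difference peaks exactly at the edge.

```python
# Two smoothers with different "failure modes" over the same signal;
# their difference highlights the feature (here, a step edge) that one
# preserves and the other blurs. 1-D pure Python for illustration.
def box_filter(signal, radius):
    # Moving average: blurs edges uniformly.
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def median_filter(signal, radius):
    # Median smoothing: preserves step edges the box filter blurs.
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = sorted(signal[lo:hi])
        out.append(window[len(window) // 2])
    return out

# A step edge: the filters disagree most exactly at the edge.
step = [0.0] * 5 + [1.0] * 5
diff = [abs(a - b)
        for a, b in zip(box_filter(step, 2), median_filter(step, 2))]
```

`diff` is near zero on the flats and peaks at indices 4-5, turning "method disagreement" into an edge detector — the same logic the framework applies pairwise across its 25 × 19 method grid.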

Show HN: VectorDBZ, a desktop GUI for vector databases https://ift.tt/ZVhpLNS

Show HN: VectorDBZ, a desktop GUI for vector databases Hi HN, I built VectorDBZ, a cross-platform desktop app for exploring and analyzing vector databases like Qdrant, Weaviate, Milvus, and ChromaDB. It lets you browse collections, inspect vectors and metadata, run similarity searches, and visualize embeddings without writing custom scripts. GitHub (downloads and issues): https://ift.tt/c5foL18 Feedback welcome. If it’s useful, starring the repo helps keep me motivated. Thanks. https://ift.tt/c5foL18 January 1, 2026 at 08:55PM

Thursday, January 1, 2026

Show HN: A Prompt-Injection Firewall for AI Agents and RAG Pipelines https://ift.tt/N2cOHFr

Show HN: A Prompt-Injection Firewall for AI Agents and RAG Pipelines We built SafeBrowse — an open-source prompt-injection firewall for AI systems. Instead of relying on better prompts, SafeBrowse enforces a hard security boundary between untrusted web content and LLMs. It blocks hidden instructions, policy violations, and poisoned data before the AI ever sees it. Features: • Prompt injection detection (50+ patterns) • Policy engine (login/payment blocking) • Fail-closed by design • Audit logs & request IDs • Python SDK (sync + async) • RAG sanitization PyPI: pip install safebrowse Looking for feedback from AI infra, security, and agent builders. January 1, 2026 at 02:31AM
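The fail-closed scanning described above might look roughly like this. The patterns and function names are illustrative assumptions, not SafeBrowse's actual rule set (which ships 50+ patterns plus a policy engine).

```python
# Minimal sketch of a fail-closed prompt-injection filter. The pattern
# list and API are illustrative, not SafeBrowse's real implementation.
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]

def sanitize(untrusted: str) -> str:
    """Return the content only if no pattern matches; otherwise raise.

    Fail-closed: an error while scanning also blocks the content."""
    try:
        for pat in INJECTION_PATTERNS:
            if pat.search(untrusted):
                raise ValueError("blocked: prompt injection pattern")
    except ValueError:
        raise
    except Exception:
        raise ValueError("blocked: scanner error (fail-closed)")
    return untrusted

safe = sanitize("Today's weather is sunny.")
```

The key design point is the fail-closed posture: the LLM only ever sees content that affirmatively passed the scan, never content the scanner failed on.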

Show HN: A web-based lighting controller built because my old became a brick https://ift.tt/LtKFM6s

Show HN: A web-based lighting controller built because my old became a brick I’m a student and I built this because my old lighting controller (DMX) became a brick after the vendor’s control software was deprecated in 2025. My focus was entirely on developing a robust backend architecture to guarantee maximum performance. Everything is released under GPLv3. The current frontend is just a "vibecoded" dashboard made with plain HTML and JavaScript to keep rendering latency as low as possible. Earlier versions used Svelte, but it added too much complexity for an initial MVP. Video: https://ift.tt/S5TiRwD Repo: https://ift.tt/BPoTlGR Technical Details: The system uses a distributed architecture where a FastAPI server manages state in Redis. State changes are pushed via WebSockets to Raspberry Pi gateways, which then independently maintain the constant 44Hz binary stream to the lights. This "push model" saves massive amounts of bandwidth and ensures low latency. In a stress test, I processed 10 universes (5,120 channels) at 44Hz with zero packet loss (simulated). OTP-based pairing makes the setup extremely simple (plug-and-play). I’m looking forward to your feedback on the architecture and the Redis approach! Happy New Year! https://ift.tt/BPoTlGR December 31, 2025 at 10:16PM
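The gateway's job described above — hold the latest channel state and keep emitting fixed-size binary frames at 44 Hz — can be sketched as follows. The 512 channels per universe is standard DMX512; the exact byte layout here is an illustrative assumption, not the project's wire format.

```python
# Sketch of a gateway's frame packing: latest channel state in, fixed
# 513-byte DMX-style frame out (start code 0x00 + 512 channel bytes).
# The byte layout is illustrative, not the project's actual format.
CHANNELS_PER_UNIVERSE = 512

def pack_universe(state: dict) -> bytes:
    frame = bytearray(1 + CHANNELS_PER_UNIVERSE)  # frame[0] = start code
    for channel, value in state.items():
        if not 0 <= channel < CHANNELS_PER_UNIVERSE:
            raise ValueError(f"channel {channel} out of range")
        frame[1 + channel] = value & 0xFF
    return bytes(frame)

# Gateways resend this frame ~44 times/second even when nothing changed,
# which is why the server only pushes *changes*: the constant 44 Hz
# refresh happens locally on the Pi, saving WAN bandwidth.
frame = pack_universe({0: 255, 511: 128})
```

Separating "state sync" (sparse WebSocket pushes) from "signal generation" (the local 44 Hz loop) is what makes the push model cheap.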

Show HN: Fleet / Event manager for Star Citizen MMO https://ift.tt/ULVMAlu

Show HN: Fleet / Event manager for Star Citizen MMO I built an open-source org management platform for Star Citizen, a space MMO where player orgs can have 50K+ members managing fleets worth millions. https://scorg.org The problem: SC's official tools won't launch until 2026, but players need to coordinate now - track 100+ ship fleets, schedule ops across timezones, manage alliances, and monitor voice activity during battles. Interesting challenges solved: 1. Multi-org data isolation - Users join multiple orgs, so every query needs scoping. 2. Canvas + Firebase Storage CORS - Couldn't export fleet layouts as PNG. Solution: fetch images as blobs, convert to base64 data URLs, then draw to canvas. No CORS config needed. 3. Discord bot - Built 4 microservices (VoiceActivityTracker, EventNotifier, ChannelManager, RoleSync) sharing Firebase state. Auto-creates channels for ops, cleans up when done. Features: role-based access, event calendar with RSVP, LFG matchmaking, drag-and-drop fleet builder, economy tools, alliance system, analytics dashboard, mobile-responsive. ~15 pages, fully functional. Custom military-inspired UI (monospace, gold accents). January 1, 2026 at 12:48AM
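The CORS workaround in point 2 above (fetch bytes, embed as a base64 data URL, then draw to canvas) is JavaScript in the app; the data-URL construction itself is shown here in Python for clarity, with the helper name being an assumption.

```python
# The blob -> base64 data URL step from the canvas-export workaround:
# embedding image bytes directly avoids the cross-origin fetch that
# would taint the canvas. Helper name is illustrative.
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# First bytes of the PNG signature, just to show the shape of the URL.
url = to_data_url(b"\x89PNG")
```

Because the image data is inlined, the canvas `drawImage` call sees a same-origin source and `toDataURL`/PNG export works without any CORS configuration.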

Show HN: Littlebird – Screenreading is the missing link in AI https://ift.tt/KtS34WN

Show HN: Littlebird – Screenreading is the missing link in AI https://littlebird.ai/ March 23, 2026 at 11:09PM