Saturday, January 24, 2026

Show HN: MermaidTUI - Deterministic Unicode/ASCII diagrams in the terminal https://ift.tt/cZDfrkS

Show HN: MermaidTUI - Deterministic Unicode/ASCII diagrams in the terminal

Hi HN, I built mermaidtui, a lightweight TypeScript engine that renders Mermaid flowcharts directly in your terminal as clean Unicode or ASCII boxes.

Visualizing Mermaid diagrams usually requires a heavy setup: a headless browser (Puppeteer/Playwright), SVG-to-image conversion, or a web preview. That's fine for documentation sites, but it's overkill for TUI apps, CI logs, or quick terminal previews. The solution is a small engine (<= 1000 LOC) that uses a deterministic grid-based layout to render diagrams with box-drawing characters.

Key features:
- Intelligent routing: uses corner characters (┌, ┐, └, ┘) for orthogonal paths.
- Topological layering: attempts a readable, structured layout (see the sketch after this post).
- Support for chained edges: A --> B --> C works out of the box.
- Zero heavy dependencies: no Mermaid internals, no Chromium, just pure TypeScript/JavaScript (commander is used for the CLI only, not the MermaidTUI library).

I wanted a way to see high-quality diagrams in my CLI tools quickly; it's great for SSH sessions where you can't easily open an SVG. I was originally embedding this in a CLI tool I'm working on and figured I'd extract it as a library for others to use. The parser also started out as regex, but I've since made it a bit more robust.

I'd love to hear your thoughts on the layout engine or any specific Mermaid syntax you'd like to see supported next!

GitHub: https://ift.tt/REPWhDS
npm i mermaidtui
https://ift.tt/REPWhDS
January 23, 2026 at 09:48PM
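The "topological layering" step is the heart of a deterministic grid layout. As a rough, illustrative sketch of the general idea (this is not the mermaidtui API; the `Edge` type and `assignLayers` function are made up), a Kahn-style topological pass with sorted tie-breaking gives every node a stable layer, which can then become a row of boxes in the grid:

```typescript
// Illustrative only: one way a deterministic grid layout can assign
// flowchart nodes to layers. Not mermaidtui's actual implementation.
type Edge = [from: string, to: string];

function assignLayers(edges: Edge[]): Map<string, number> {
  const nodes = [...new Set(edges.flat())];
  const indegree = new Map(nodes.map((n) => [n, 0] as [string, number]));
  for (const [, to] of edges) indegree.set(to, (indegree.get(to) ?? 0) + 1);

  const layer = new Map<string, number>();
  let frontier = nodes.filter((n) => indegree.get(n) === 0).sort();
  for (let depth = 0; frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const node of frontier) {
      layer.set(node, depth);
      for (const [from, to] of edges) {
        if (from !== node) continue;
        indegree.set(to, indegree.get(to)! - 1);
        if (indegree.get(to) === 0) next.push(to);
      }
    }
    frontier = next.sort(); // sorting keeps the result deterministic
  }
  return layer;
}

// A chained edge like "A --> B --> C" parses into two edges and lands
// on layers 0, 1 and 2, one row per layer.
console.log(assignLayers([["A", "B"], ["B", "C"]]));
```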

Friday, January 23, 2026

Show HN: Synesthesia, make noise music with a colorpicker https://ift.tt/kRPzHCo

Show HN: Synesthesia, make noise music with a colorpicker

This is a (silly, little) app that lets you make noise music using a color picker as an instrument. When you click on a specific point in the color picker, a bit of JavaScript maps the binary representation of the clicked color's hex code to a "chord" in the 24-tone equal temperament scale. That chord is then played back through a throttled audio generator implemented with Tone.js.

NOTE! Turn the volume way down before using the site. It is noise music. :)

https://visualnoise.ca
January 22, 2026 at 11:22AM
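The color-to-chord mapping is the interesting bit. The post doesn't spell out exactly how the hex code's bits become notes, so the following is only one plausible reading of it: the byte-per-note split and the A4 = 440 Hz anchor are my assumptions, not the site's actual code, and playback in the real app goes through Tone.js rather than the bare console output here.

```typescript
// Hedged sketch, not the site's actual mapping: assume each byte of the hex
// color picks one note of a three-note "chord" in 24-tone equal temperament,
// anchored at A4 = 440 Hz (both choices are assumptions for illustration).
const A4 = 440;

function hexToChord(hex: string): number[] {
  const clean = hex.replace("#", "");
  const bytes = [0, 2, 4].map((i) => parseInt(clean.slice(i, i + 2), 16));
  // Fold each byte (0-255) onto two octaves of 24-TET steps around A4:
  // frequency = 440 * 2^(steps / 24).
  return bytes.map((b) => A4 * 2 ** (((b % 48) - 24) / 24));
}

// "#ff8800" -> three frequencies; feed them to any oscillator (the app uses
// a throttled Tone.js generator) to hear the cluster.
console.log(hexToChord("#ff8800").map((f) => f.toFixed(1) + " Hz"));
```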

Show HN: I've been using AI to analyze every supplement on the market https://ift.tt/0LxwSrb

Show HN: I've been using AI to analyze every supplement on the market

Hey HN! This has been my project for a few years now. I recently brought it back to life after taking a pause to focus on my studies. My goal with this project is to separate fluff from science when shopping for supplements. I am doing this in three steps (a rough data-model sketch follows after this post):

1.) I index every supplement on the market (extract each ingredient, normalize by quantity).
2.) I index every research paper on supplementation (rank every claim by effect type and effect size).
3.) I link the data between supplements and research papers.

Earlier last year, I put the project on pause because I had run into a few issues:

Legal: Shady companies are sending cease-and-desist letters demanding their products be taken down from the website. It was not something I had the mental capacity to respond to while also going through my studies. Not coincidentally, these are usually brands with big marketing budgets and a poor ingredient-to-price ratio.

Technical: I started this project when the first LLMs came out. I've built extensive internal evals to understand how the LLMs are performing. The hallucinations at the time were simply too frequent to pass this data through to visitors. However, I recently re-ran my evals with Opus 4.5 and was very impressed. I am running out of scenarios I can think of (or find) where LLMs are bad at interpreting the data.

Business: I still haven't figured out how to monetize it, or even who the target customer is.

Despite these challenges, I decided to restart my journey. My mission is to bring transparency (science and price) to the supplement market. My goal is NOT to increase the use of supplements, but rather to help consumers make informed decisions. Oftentimes supplementation is not necessary, or there are natural ways to supplement (that's my focus this quarter: better education about natural supplementation).

Some things are helping my cause: Bryan Johnson's journey (Blueprint) has drawn a lot more attention to healthy supplementation. Thanks to Bryan's efforts, so many people have reached out in recent months to ask about the state of the project, interest I've not had before.

I am excited to restart this journey and to share it with HN. Your comments on how to approach this would be massively appreciated.

Some key areas of the website:
* Example of navigating supplements by ingredient: https://ift.tt/KkGc47L
* Example of a research paper analyzed using AI: https://ift.tt/6P5U3sj...
* Example of looking for very specific strains or ingredients: https://ift.tt/p5WZnv1
* Example of navigating research by health outcomes: https://ift.tt/lXKVSTi...
* Example of a product listing: https://ift.tt/a4X5AJp

https://pillser.com/
January 22, 2026 at 07:39PM
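To make the three-step pipeline above concrete, here is a hypothetical sketch of the data model. The interface names and fields are illustrative guesses, not pillser.com's actual schema.

```typescript
// Hypothetical sketch of the three-step data model described above; the
// names and fields are illustrative, not the site's real schema.
interface Ingredient {
  name: string;       // normalized ingredient or strain name
  quantityMg: number; // normalized per-serving quantity
}

interface Supplement {
  brand: string;
  product: string;
  ingredients: Ingredient[]; // step 1: indexed from the label
}

interface ResearchClaim {
  paperDoi: string;
  ingredient: string;                          // step 2: claim extracted per ingredient
  effectType: "benefit" | "harm" | "no-effect";
  effectSize: number;                          // step 2: ranked effect size
}

// Step 3: link a product to the claims that mention its ingredients.
function linkClaims(s: Supplement, claims: ResearchClaim[]): ResearchClaim[] {
  const names = new Set(s.ingredients.map((i) => i.name.toLowerCase()));
  return claims.filter((c) => names.has(c.ingredient.toLowerCase()));
}
```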

Thursday, January 22, 2026

Show HN: See the carbon impact of your cloud as you code https://ift.tt/rIiGnod

Show HN: See the carbon impact of your cloud as you code

Hey folks, I'm Hassan, one of the co-founders of Infracost ( https://ift.tt/K6W4cqs ). Infracost helps engineers see and reduce the cloud cost of each infrastructure change before they merge their code.

The way Infracost works is that we gather pricing data from Amazon Web Services, Microsoft Azure and Google Cloud into what we call a 'Pricing Service', which now holds around 9 million live price points (!!). We then map these prices to infrastructure code. Once the mapping is done, it lets us show the cost impact of a code change before it is merged, directly in GitHub, GitLab etc. Kind of like a checkout screen for cloud infrastructure (a rough sketch of the idea follows after this post).

We've been building since 2020 (we were part of the YC W21 batch), iterating on the product, building out a team, etc. However, back in 2020 one of our users asked if we could also show the carbon impact alongside costs. It has been itching my brain ever since.

The biggest challenge has always been the carbon data. Mapping carbon data to infrastructure is time consuming, but it is possible, since we've done it with cloud costs; we needed the raw carbon data first. The discussions of the last few years finally led me to a company called Greenpixie in the UK. A few of our existing customers were using them already, so I immediately connected with the founder, John. Greenpixie said they have the data (AHA!!), and their data is verified (ISO 14064) and aligned with the Greenhouse Gas Protocol. As soon as I had talked to a few of their customers, I asked my team to see if we could actually, finally do this and build it.

My thinking is this: some engineers will care, and some will not (or maybe some will love it and some will hate it!). For those who care, cost and carbon are actually linked; if you reduce the carbon, you usually reduce the cost of the cloud too. It can act as another motivating factor.

And now it is here, and I'd love your feedback. Try it out by going to https://ift.tt/mszeN8R , create an account, set it up with the GitHub app or GitLab app, and send a pull request with Terraform changes (you can use our example Terraform file). It will then show you the cost impact alongside the carbon impact, and how you can optimize it.

I'd especially love to hear your feedback on whether carbon is a big driver for engineers within your teams, or whether carbon is a big driver for your company (i.e. is there anything top-down about carbon). AMA - I'll be monitoring the thread :)

Thanks
https://ift.tt/mszeN8R
January 21, 2026 at 08:34PM
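To make the "checkout screen" idea concrete, here is a rough sketch of what a per-change cost-plus-carbon diff could look like. The types, the gramsCO2ePerHour field, and the function names are my assumptions for illustration, not Infracost's actual data model or API.

```typescript
// Illustrative sketch only (not Infracost's actual API): map each changed
// resource to a unit price and a carbon intensity, then diff before vs after.
interface ResourceUsage {
  type: string;              // e.g. "aws_instance"
  hoursPerMonth: number;
  pricePerHour: number;      // from the pricing data
  gramsCO2ePerHour: number;  // from the carbon data (hypothetical field)
}

function monthlyImpact(r: ResourceUsage) {
  return {
    costUSD: r.hoursPerMonth * r.pricePerHour,
    kgCO2e: (r.hoursPerMonth * r.gramsCO2ePerHour) / 1000,
  };
}

// The diff shown on a pull request: after minus before.
function diffImpact(before: ResourceUsage, after: ResourceUsage) {
  const a = monthlyImpact(after);
  const b = monthlyImpact(before);
  return { costUSD: a.costUSD - b.costUSD, kgCO2e: a.kgCO2e - b.kgCO2e };
}
```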

Wednesday, January 21, 2026

Show HN: Xv6OS – A modified MIT xv6 with GUI https://ift.tt/gfzemd7

Show HN: Xv6OS – A modified MIT xv6 with GUI

I've been working on a hobby project to transform the traditional xv6 teaching OS into a graphical environment.

Key technical features:
- GUI subsystem: I implemented a kernel-level window manager and drawing primitives.
- Mouse support: Integrated a PS/2 mouse driver for navigation.
- Custom toolchain: I used Python scripts (Pillow) and Go to convert PNG assets and TTF fonts into C arrays for the kernel (sketched after this post).
- Userland: Includes a terminal, file explorer, text editor, and a Floppy Bird game.

The project is built for i386 using a monolithic kernel design. You can find the full source code and build instructions here:
https://ift.tt/h9KnvAd
January 20, 2026 at 10:46PM
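The custom toolchain step, converting PNG assets and TTF fonts into C arrays so they can be compiled straight into the kernel, is worth illustrating. The author's converter uses Python (Pillow) and Go; the snippet below only sketches the general idea in TypeScript, with made-up 2x2 RGBA pixel data standing in for a decoded PNG.

```typescript
// Sketch only: the real toolchain uses Python (Pillow) and Go. This shows the
// general idea of emitting image bytes as a C array the kernel can compile in.
function pixelsToCArray(name: string, width: number, height: number, rgba: Uint8Array): string {
  const body = Array.from(rgba)
    .map((byte) => "0x" + byte.toString(16).padStart(2, "0"))
    .join(", ");
  return [
    `// ${width}x${height} RGBA image, ${rgba.length} bytes`,
    `static const unsigned char ${name}[] = { ${body} };`,
  ].join("\n");
}

// A made-up 2x2 cursor bitmap (white, black, black, white) as RGBA bytes.
const cursor = new Uint8Array([
  255, 255, 255, 255,   0, 0, 0, 255,
  0, 0, 0, 255,         255, 255, 255, 255,
]);
console.log(pixelsToCArray("cursor_2x2", 2, 2, cursor));
```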

Show HN: Trinity – a native macOS Neovim app with Finder-style projects https://ift.tt/iQBaw4P

Show HN: Trinity – a native macOS Neovim app with Finder-style projects

Hi HN, I built Trinity, a native macOS app that wraps Neovim with a project-centric UI. The goal was to keep Neovim itself untouched, but provide a more Mac-native workflow:

– Finder-style project browser
– Multiple projects/windows
– Markdown preview, image/PDF viewer
– Native menus, shortcuts, and windowing
– Minimal UI, no GPU effects or terminal emulation

It's distributed directly (signed + notarized PKG) and uses Sparkle for incremental updates. This started as a personal tool after bouncing between terminal Neovim and heavier editors. Curious to hear feedback from other Neovim users, especially on what feels right or wrong in a GUI wrapper.

Site: https://ift.tt/IkKNeiP
Direct download: https://ift.tt/QVaWt3K...
https://ift.tt/IkKNeiP
January 20, 2026 at 11:14PM

Tuesday, January 20, 2026

Show HN: Homunculus – A self-rewriting Claude Code plugin https://ift.tt/uz29ikW

Show HN: Homunculus – A self-rewriting Claude Code plugin

Homunculus is a Claude Code plugin that watches how you work and writes new capabilities into itself. If you keep doing something repeatedly (checking docs before API calls, running the same debug flow, formatting PRs a certain way), it notices and offers to automate it. Accept, and it writes a new markdown file into its own structure. The plugin literally changes based on what you do.

It can create:
- Commands (explicit shortcuts)
- Skills (context-triggered behaviors)
- Subagents (specialists for specific problem domains)
- Hooks (event-driven, like "run tests when these files change")

What actually works (v0.1): Commands are deterministic. Skills are probabilistic; they fire when Claude decides they're relevant, maybe 50-80% of the time.

It's an experiment in making LLM tooling adaptive rather than static. State is stored in .claude/homunculus/. Each project gets its own instance.

https://ift.tt/mNBMOv8
January 19, 2026 at 11:23PM

Show HN: Burn, baby, burn (those tokens) https://ift.tt/NdoFK1y

Show HN: Burn, baby, burn (those tokens)

https://ift.tt/gtzW1ea
May 15, 2026 at 10:50PM