Thursday, January 15, 2026

Show HN: Sparrow-1 – Audio-native model for human-level turn-taking without ASR https://ift.tt/BzCUbXK

For the past year at Tavus I've been working to rethink how AI manages timing in conversation, and I've spent a lot of time listening to conversations. Today we're announcing the release of Sparrow-1, the most advanced conversational flow model in the world.

Some technical details:
- Predicts conversational floor ownership, not speech endpoints
- Audio-native streaming model with no ASR dependency
- Human-timed responses without silence-based delays
- Zero interruptions at sub-100ms median latency
- Beats all existing models on real-world turn-taking benchmarks

I wrote more about the work here: https://ift.tt/ZPbis1G... https://ift.tt/wquciJY
January 14, 2026 at 11:31PM
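Sparrow-1 itself isn't public, so purely as a sketch of the problem shape — consume streaming audio frames, emit a probability that the user still holds the conversational floor, and respond the moment ownership is predicted to transfer rather than waiting out a fixed silence timeout — here is a toy loop. The scorer below is a crude energy heuristic standing in for a learned audio-native model; all names and thresholds are illustrative, not Tavus's.

```python
from collections import deque

FRAME_MS = 20
RELEASE_THRESHOLD = 0.35   # hypothetical: below this, the floor is being released
HOLD_FRAMES = 5            # require a stable prediction for ~100 ms

def floor_probability(frame: bytes) -> float:
    """Placeholder scorer: frame energy as a crude proxy for 'still talking'.
    A real audio-native model would use prosody and context, not raw energy."""
    if not frame:
        return 0.0
    energy = sum(frame) / (len(frame) * 255)
    return min(1.0, energy * 2)

def should_respond(frames) -> bool:
    """True once P(floor held) stays under threshold for HOLD_FRAMES frames,
    i.e. floor ownership is predicted to transfer -- not a silence timeout."""
    window = deque(maxlen=HOLD_FRAMES)
    for frame in frames:
        window.append(floor_probability(frame))
        if len(window) == HOLD_FRAMES and all(p < RELEASE_THRESHOLD for p in window):
            return True
    return False

# Loud frames (user talking) followed by trailing-off frames:
stream = [bytes([200] * 320)] * 10 + [bytes([10] * 320)] * 6
print(should_respond(stream))  # True: ownership predicted to transfer
```

The point of the sketch is the decision criterion: it fires on a predicted handover of the floor, which can happen well before a conventional end-of-speech silence threshold would.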

Wednesday, January 14, 2026

Closing Potrero Yard: How We’ll Keep Muni Moving with Feb. 14 Service Changes

Closing Potrero Yard: How We’ll Keep Muni Moving with Feb. 14 Service Changes
By Brian Haagsman

The 49 Van Ness-Mission is one of the busiest routes we maintain at Potrero Yard. On Feb. 14, we're taking two major steps to keep Muni fast and reliable. First, we'll be making several changes to bus stops and routes to:
- Improve reliability
- Provide better connections to regional transit
- Avoid delays
And to improve Muni for years to come, we are working to replace Potrero Yard with a modern bus maintenance facility through the Potrero Yard Modernization Project. For crews to prepare for future construction, we need to close Potrero Yard in February 2026. We'll move existing bus operations and...

Published January 13, 2026 at 05:30AM
https://ift.tt/VjweJch

Show HN: Self-host Reddit – 2.38B posts, works offline, yours forever https://ift.tt/HE6nygs

Reddit's API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.

The key point: this doesn't touch Reddit's servers. Ever. Download the Pushshift dataset, run my tool locally, and get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.

What it does: takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine.

API & AI integration: full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.

Self-hosting options:
- USB drive / local folder (just open the HTML files)
- Home server on your LAN
- Tor hidden service (2 commands, no port forwarding needed)
- VPS with HTTPS
- GitHub Pages for small archives

Why this matters: once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.

Scale: tens of millions of posts per instance. The PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B-post dataset, run multiple instances split by topic.

How I built it: Python, PostgreSQL, Jinja2 templates, Docker. I used Claude Code throughout as an experiment in AI-assisted development and learned that the workflow is "trust but verify" – it accelerates the boring parts, but you still own the architecture.
Live demo: https://online-archives.github.io/redd-archiver-example/
GitHub: https://ift.tt/UkorjtI (Public Domain)
Pushshift torrent: https://ift.tt/vmz0acY... https://ift.tt/UkorjtI
January 13, 2026 at 09:05PM
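The core dump-to-static-HTML idea is simple to sketch. The Pushshift dumps are zstd-compressed NDJSON (one JSON object per submission); the real tool decompresses with the zstandard library and renders with Jinja2 and PostgreSQL, but this stdlib-only toy — which assumes the lines are already decompressed and uses assumed field names like `selftext` — shows the shape: one self-contained page per post, no JavaScript, no external requests.

```python
import json, html, pathlib

def render_posts(ndjson_lines, out_dir="archive"):
    """Render decompressed NDJSON submission lines into a static HTML archive."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    index_items = []
    for line in ndjson_lines:
        post = json.loads(line)
        page = f"{post['id']}.html"
        title = html.escape(post.get("title", "(untitled)"))
        body = html.escape(post.get("selftext", ""))
        # One self-contained page per post: no JS, no external requests.
        (out / page).write_text(
            f"<!doctype html><title>{title}</title><h1>{title}</h1><p>{body}</p>",
            encoding="utf-8",
        )
        index_items.append(f'<li><a href="{page}">{title}</a></li>')
    (out / "index.html").write_text(
        "<!doctype html><title>Archive</title><ul>" + "".join(index_items) + "</ul>",
        encoding="utf-8",
    )
    return len(index_items)

lines = [
    '{"id": "abc1", "title": "First post", "selftext": "hello"}',
    '{"id": "abc2", "title": "Second post", "selftext": "world"}',
]
print(render_posts(lines))  # 2 pages written, plus index.html
```

Because the output is plain files, the resulting archive works anywhere a browser can open `index.html` — a USB drive, a LAN server, or GitHub Pages.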

Tuesday, January 13, 2026

Show HN: AI video generator that outputs React instead of video files https://ift.tt/Kc47AHn

Hey HN! This is Mayank from Outscal with a new update: our website is now live.

Quick context: we built a tool that generates animated videos from text scripts. The twist: instead of rendering pixels, it outputs React/TSX components that render as the video.

Try it: https://ai.outscal.com/
Sample video: https://ift.tt/csQAhfg...

You pick a style (pencil sketch or neon), enter a script (up to 2000 chars), and it runs: scene direction → ElevenLabs audio → SVG assets → scene design → React components → deployed video.

What we learned building this: we built the first version on Claude Code. Even with a human triggering commands, agents kept going off-script — they had file tools and would wander off reading random files, exploring tangents, and producing inconsistent output. The fix was counterintuitive: fewer tools, not more guardrails. We stripped each agent down to only what it needed and pre-fed context instead of letting agents fetch it themselves. Quality improved immediately.

We wouldn't launch the web version until this was solid. We moved to the Claude Agent SDK, kept the same constraints, and it's now fully automated.

Happy to discuss the agent architecture, why React-as-video, or anything else. https://ai.outscal.com/
January 13, 2026 at 12:33AM
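The "fewer tools, pre-fed context" fix described above can be sketched as a tool dispatcher: each agent is handed a whitelist and its context up front, so it physically cannot wander off reading files. All names here are mine for illustration, not Outscal's actual architecture or the Claude Agent SDK API.

```python
# Hypothetical sketch of the "fewer tools, pre-fed context" pattern.
# Tool and function names are illustrative only.
TOOLS = {
    "write_scene": lambda args, ctx: f"<Scene voice={args['voice']!r} />",
    "read_file":   lambda args, ctx: "...",  # deliberately NOT whitelisted below
}

def run_agent(allowed_tools, context, tool_call):
    """Dispatch one tool call. The agent may only use whitelisted tools and
    must work from the pre-fed context instead of exploring the filesystem."""
    name, args = tool_call
    if name not in allowed_tools:
        raise PermissionError(f"agent may not call {name!r}")
    return TOOLS[name](args, context)

# The scene-design agent gets exactly one tool, plus its script as context:
out = run_agent({"write_scene"}, {"script": "intro"}, ("write_scene", {"voice": "neon"}))
print(out)  # <Scene voice='neon' />

# An off-script file read is rejected at the dispatch layer:
try:
    run_agent({"write_scene"}, {"script": "intro"}, ("read_file", {"path": "x"}))
except PermissionError as e:
    print(e)
```

The design point is that the constraint lives in the dispatcher, not in the prompt: an agent can't be talked into exploring tangents if the tool simply isn't there.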

Show HN: Sidecar – AI Social Manager (Analyzes past hits to write new posts) https://ift.tt/zK4L3R0

Hi HN, I built Sidecar ( https://sidecar.bz ) because I had trouble maintaining a social media presence for my last startup. I would spend a lot of time trying to create content, but I often froze up or burned out, and the marketing died.

How it works: instead of guessing what to write, Sidecar connects to your existing accounts (Threads, Bluesky, Mastodon, Facebook, Instagram) and analyzes your past posts to see what actually worked. It uses that data to generate weeks of new, text-based content that mimics your successful posts, which you can then bulk-schedule in one go.

I'd love to hear what you think of Sidecar. You can use code HNLAUNCH for a free month if you want to test the AI features. https://ift.tt/fYwKP52
January 12, 2026 at 10:48PM
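Sidecar's code isn't public, but the "analyze your past posts to see what actually worked" step can be sketched as a simple engagement ranking whose top results seed generation. The field names and weights below are assumptions for illustration, not Sidecar's actual scoring.

```python
# Toy sketch of the "analyze past hits" step: rank past posts by a simple
# engagement score and keep the top ones as exemplars for generation.
# Field names and weights are assumptions; Sidecar's scoring is not public.
def top_posts(posts, k=2):
    def score(p):
        # Hypothetical weighting: replies and reposts count more than likes.
        return p["likes"] + 2 * p["reposts"] + 3 * p["replies"]
    return sorted(posts, key=score, reverse=True)[:k]

history = [
    {"text": "launch day!",       "likes": 50, "reposts": 10, "replies": 5},
    {"text": "weekly update",     "likes": 5,  "reposts": 0,  "replies": 1},
    {"text": "behind the scenes", "likes": 20, "reposts": 8,  "replies": 9},
]
for p in top_posts(history):
    print(p["text"])
# launch day!
# behind the scenes
```

The exemplars selected this way would then be fed to the generator as style and topic references for the new batch of posts.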

Monday, January 12, 2026

Sunday, January 11, 2026

Show HN: Play poker with LLMs, or watch them play against each other https://ift.tt/It1BUe6

I was curious to see how some of the latest models behave when playing no-limit Texas hold'em, so I built this website, which lets you:
- Spectate: watch different models play against each other.
- Play: create your own table and play hands against the agents directly.
https://llmholdem.com/
January 11, 2026 at 12:57AM

Show HN: 3D-Agent – AI that edits Blender scenes through the Python API https://ift.tt/K8jQOZb

https://ift.tt/qVL1uH2 May 14, 2026 at 08:17PM