Friday, April 18, 2025

Show HN: HN Watercooler – listen to HN threads as an audio conversation https://ift.tt/t4zL3j9

Show HN: HN Watercooler – listen to HN threads as an audio conversation

Hi HN, here's something fun to play with. It takes any HN thread and turns it into an audio conversation so you can listen to the thread while doing other things.

I've seen many previous attempts to turn HN threads into podcasts, but they all shared a common issue IMO: they reduce the very rich back-and-forth into a boring single-thread, single-reader podcast. Instead, I wanted to hear the actual debate from the actual thread! So I asked Claude 3.7 to build this for me as a browser-only app. It just needs a thread URL and an ElevenLabs API key (this all stays in your browser, you can check the source code, it's only 3 files, and there is no server-side storage of anything).

To make the resulting audio experience as natural as possible, each commenter has a different voice. Commenters who appear multiple times in the thread keep the same voice, and introduce themselves. A bit of context is also added when coming back "up" from deeply nested comments. You can play the resulting audio or download it for later listening.

I'm planning to later add the ability to load multiple threads so I can have a playlist generated for listening in the gym! Any comments or improvement suggestions are appreciated!

https://ift.tt/qzjdOG9 April 18, 2025 at 12:24AM
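As background on how a stable per-commenter voice mapping could work in a browser-only app, here is a minimal TypeScript sketch. The hash, the placeholder voice IDs, and the speak() helper are illustrative assumptions rather than HN Watercooler's actual code; the ElevenLabs text-to-speech endpoint and xi-api-key header shown are the publicly documented ones.

```typescript
// Illustrative sketch only: map each HN commenter to a stable ElevenLabs voice.
// Voice IDs and the hashing scheme are assumptions, not HN Watercooler's code.

const VOICE_IDS = ["voiceA", "voiceB", "voiceC", "voiceD"]; // placeholder ElevenLabs voice IDs

// Deterministically pick a voice from a username, so a commenter who appears
// several times in the thread keeps the same voice throughout.
function voiceFor(username: string): string {
  let hash = 0;
  for (const ch of username) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return VOICE_IDS[hash % VOICE_IDS.length];
}

// Hypothetical text-to-speech call against the ElevenLabs HTTP API; the key
// never leaves the browser since the request is made client-side.
async function speak(username: string, text: string, apiKey: string): Promise<ArrayBuffer> {
  const voiceId = voiceFor(username);
  const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
    method: "POST",
    headers: { "xi-api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  return res.arrayBuffer(); // audio bytes to play or concatenate client-side
}
```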

Show HN: Image2video.app – Transform Ghibli-style images into videos using AI https://ift.tt/t7P9FSE

Show HN: Image2video.app – Transform Ghibli-style images into videos using AI Hi HN, I’ve developed a web app that lets you turn static Ghibli-style images into animated videos. It uses AI to create smooth, dynamic animations from your uploads. I will also add Ghibli-style image generation to the app soon. I’d love to hear your feedback and see what you create with it :) Check it out at https://ift.tt/9WhdBjN . https://ift.tt/9WhdBjN April 17, 2025 at 09:02PM

Show HN: Zuni (YC S24) – AI Copilot for the Browser https://ift.tt/B6XdeQN

Show HN: Zuni (YC S24) – AI Copilot for the Browser

Hi HN, I'm Will. My co-founder George and I have built Zuni ( https://zuni.app ) - a browser extension that adds contextual AI capabilities to your browser. It understands what you're reading and working on, whether that's email, research, or anything else in your tabs.

We started out building a full email client with AI built in (you might have seen that version showcased in YC's AI Design Review - https://www.youtube.com/watch?v=DBhSfROq3wU&t=1601s ), but learned that people don't actually want to leave their existing tools - they just want them to work better. Gmail might be frustrating, but it has years of features people rely on. So we pivoted to enhancing the tools people already use rather than replacing them entirely.

Some specific things Zuni does today:

- Analyzes emails as you read them in Gmail, identifying action items and suggesting possible responses
- Lets you discuss how to handle tricky emails, almost like having a thought partner
- Maintains context across your browsing session so you can ask follow-up questions naturally
- Runs locally first for speed and privacy
- Doesn't store chats, emails, or anything sensitive in the cloud

We're still early and focusing on getting the core experience right before adding more integrations. The goal is to make AI actually useful in your daily work, rather than just another "AI feature" checkbox.

We'd genuinely love feedback from the HN community - what would make this truly useful for your workflow? What are we missing? Happy to answer any questions about the technical implementation too.

https://zuni.app April 17, 2025 at 08:45PM
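Zuni's internals aren't shown in the post; purely as an illustration of the "local-first context across your browsing session" idea, here is a hypothetical Manifest V3 content-script sketch in TypeScript. All names, limits, and storage choices below are assumptions, not Zuni's code.

```typescript
// Hypothetical sketch (not Zuni's code): keep a small, local rolling context of
// recently viewed pages so follow-up questions can be answered without sending
// or storing anything in the cloud. Assumes a Manifest V3 extension and
// @types/chrome; chrome.storage.session is cleared when the browser closes.

type PageContext = { url: string; title: string; excerpt: string; seenAt: number };

async function captureContext(): Promise<void> {
  const ctx: PageContext = {
    url: location.href,
    title: document.title,
    excerpt: document.body.innerText.slice(0, 2000), // trimmed to keep storage small
    seenAt: Date.now(),
  };
  const { recent = [] } = await chrome.storage.session.get("recent");
  // Keep only the last few pages; this local buffer is what a "thought partner"
  // prompt could draw on for follow-up questions.
  await chrome.storage.session.set({ recent: [...recent, ctx].slice(-10) });
}

captureContext();
```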

Thursday, April 17, 2025

Show HN: We made a VS Code extension to recreate a debugger experience from logs https://ift.tt/KhSBWF3

Show HN: We made a VS Code extension to recreate a debugger experience from logs

A month ago [1], we made an MCP server so Cursor can debug Node.js on its own. We emailed every person who starred our repository [2] and learnt that frontend devs really want to give Cursor access to browser logs, and that backend devs (our intended audience) do not use debuggers nearly as much as we thought.

We interviewed friends across startups and discovered that they use logs to debug, because they can't run services locally on their machines. The services (1) require too much disk, RAM, or CPU to run locally, (2) have too many service dependencies (think microservices), or (3) are a faff to instantiate locally with a debugger. Instead, our friends instrument their services, deploy them to staging environments via Kubernetes, and then query the logs via data stores (think Grafana, Axiom.co, Google Cloud Logging, etc.) or directly (think Kubernetes logs).

We thought: "What if we could recreate a debugger-like experience from logs?" That would save them from browsing logs and trying to make sense of them outside the context of the code base. We looked into it and made a VS Code extension that lets you (1) import logs, (2) go to the line of code associated with a log, and (3) navigate up/down the probable call stack associated with a log.

It's a prototype, but if you're interested in trying it out, we'd love some feedback!

GitHub: github.com/hyperdrive-eng/traceback

---

References:

[1]: https://ift.tt/Zycr0ok

[2]: 140 GitHub stars, 69 emails sent (the rest were bots), 19 responses received (= 28% response conversion), 4 meetings held (= 21% meeting conversion).

https://ift.tt/1cJgvuE April 17, 2025 at 04:37AM
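The extension's matching logic isn't described in detail in the post. One naive way to "go to the line of code associated with a log" is to score source lines by the distinctive keywords they share with the log message, roughly like the TypeScript sketch below; this is a heuristic illustration, not the actual implementation.

```typescript
// Illustrative heuristic only (not traceback's actual code): guess the logging
// call site for a captured log line by scoring source lines on how many
// distinctive words from the log message they contain.

import * as fs from "fs";
import * as path from "path";

interface LogLocation { file: string; line: number; score: number }

// Keep words that are likely part of the static format string rather than
// interpolated runtime values (ids, durations, quoted payloads).
function keywords(logLine: string): string[] {
  return logLine.split(/\W+/).filter((w) => w.length > 3 && !/\d/.test(w));
}

function locate(logLine: string, rootDir: string): LogLocation | undefined {
  const words = keywords(logLine);
  let best: LogLocation | undefined;
  // Note: recursive readdirSync needs Node 20+ (or 18.17+).
  for (const entry of fs.readdirSync(rootDir, { recursive: true })) {
    const file = path.join(rootDir, String(entry));
    if (!fs.statSync(file).isFile() || !/\.(ts|js)$/.test(file)) continue;
    fs.readFileSync(file, "utf8").split("\n").forEach((text, i) => {
      const score = words.filter((w) => text.includes(w)).length;
      if (score > 0 && (!best || score > best.score)) {
        best = { file, line: i + 1, score };
      }
    });
  }
  return best; // a VS Code extension would then reveal this file:line in the editor
}
```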

Show HN: logidiff – determine if two or more logical statements are equivalent https://ift.tt/4NJT6a0

Show HN: logidiff – determine if two or more logical statements are equivalent https://ift.tt/U8gcHoF April 17, 2025 at 02:29AM
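The post doesn't say how logidiff works. For background, the textbook brute-force check for equivalence of propositional statements is to compare them on every truth assignment of their variables, as in this small TypeScript sketch (an illustration of the concept, not logidiff's implementation):

```typescript
// Background sketch, not logidiff's code: two propositional formulas are
// equivalent iff they agree on every assignment of their variables.

type Formula = (env: Record<string, boolean>) => boolean;

function equivalent(vars: string[], a: Formula, b: Formula): boolean {
  for (let mask = 0; mask < 1 << vars.length; mask++) {
    const env: Record<string, boolean> = {};
    vars.forEach((v, i) => (env[v] = Boolean(mask & (1 << i))));
    if (a(env) !== b(env)) return false; // found a distinguishing assignment
  }
  return true;
}

// Example: "not (p and not q)" is equivalent to "not p or q".
const first: Formula = (e) => !(e.p && !e.q);
const second: Formula = (e) => !e.p || e.q;
console.log(equivalent(["p", "q"], first, second)); // true
```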

Show HN: Milter in Rust to Add Headers https://ift.tt/XywOTtI

Show HN: Milter in Rust to Add Headers Here's a milter in Rust that adds List-Unsubscribe headers. It creates a URL that encodes the email-from and rcpt-to addresses along with an HMAC-SHA-256 verification hash computed with a shared secret key. It may improve delivery of newsletters and transactional emails. https://ift.tt/pGz8aqK April 17, 2025 at 02:22AM
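The milter itself is written in Rust; as an illustration of the described URL scheme (encoded email-from and rcpt-to plus an HMAC-SHA-256 signature under a shared secret), here is a TypeScript sketch. The parameter names and URL layout are assumptions, not necessarily what the project uses.

```typescript
// Illustration of the described scheme (the real project is a Rust milter;
// parameter names and URL layout here are assumptions).
import { createHmac } from "node:crypto";

function unsubscribeUrl(baseUrl: string, mailFrom: string, rcptTo: string, secret: string): string {
  const params = new URLSearchParams({ from: mailFrom, to: rcptTo });
  // The HMAC-SHA-256 over the encoded parameters lets the unsubscribe endpoint
  // verify that the link was generated by a holder of the shared secret.
  const sig = createHmac("sha256", secret).update(params.toString()).digest("hex");
  params.set("sig", sig);
  return `${baseUrl}?${params.toString()}`;
}

// A milter would then add a header such as:
//   List-Unsubscribe: <https://example.com/unsub?from=...&to=...&sig=...>
console.log(unsubscribeUrl("https://example.com/unsub", "news@example.com", "user@example.org", "shared-secret"));
```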

Show HN: Plandex v2 – open source AI coding agent for large projects and tasks https://ift.tt/8b6arfS

Show HN: Plandex v2 – open source AI coding agent for large projects and tasks

Hey HN! I’m Dane, the creator of Plandex ( https://ift.tt/DX97RW3 ), an open source AI coding agent focused especially on tackling large tasks in real world software projects.

You can watch a 2 minute demo of Plandex in action here: https://www.youtube.com/watch?v=SFSu2vNmlLk

And here’s more of a tutorial style demo showing how Plandex can automatically debug a browser application: https://www.youtube.com/watch?v=VCegxOCAPq0

I launched Plandex v1 here on HN a little less than a year ago ( https://ift.tt/mlYbwcD ). Now I’m launching a major update, Plandex v2, which is the result of 8 months of heads down work, and is in effect a whole new project/product.

In short, Plandex is now a top-tier coding agent with fully autonomous capabilities. It combines models from Anthropic, OpenAI, and Google to achieve better results, more reliable agent behavior, better cost efficiency, and better performance than is possible by using only a single provider’s models. I believe it is now one of the best tools available for working on large tasks in real world codebases with AI.

It has an effective context window of 2M tokens, and can index projects of 20M tokens and beyond using tree-sitter project maps (30+ languages are supported). It can effectively find relevant context in massive million-line projects like SQLite, Redis, and Git.

A bit more on some of Plandex’s key features:

- Plandex has a built-in diff review sandbox that helps you get the benefits of AI without leaving behind a mess in your project. By default, all changes accumulate in the sandbox until you approve them. The sandbox is version-controlled. You can rewind it to any previous point, and you can also create branches to try out alternative approaches.

- It offers a ‘full auto mode’ that can complete large tasks autonomously end-to-end, including high level planning, context loading, detailed planning, implementation, command execution (for dependencies, builds, tests, etc.), and debugging.

- The autonomy level is highly configurable. You can move up and down the ladder of autonomy depending on the task, your comfort level, and how you weigh cost optimization vs. effort and results.

- Models and model settings are also very configurable. There are built-in models and model packs for different use cases. You can also add custom models and model packs, and customize model settings like temperature or top-p. All model changes are version controlled, so you can use branches to try out the same task with different models. The newly released OpenAI models and the paid Gemini 2.5 Pro model will be integrated in the default model pack soon.

- It can be easily self-hosted, including a ‘local mode’ for a very fast local single-user setup with Docker.

- Cloud hosting is also available for added convenience with a couple of subscription tiers: an ‘Integrated Models’ mode that requires no other accounts or API keys and allows you to manage billing/budgeting/spending alerts and track usage centrally, and a ‘BYO API Key’ mode that allows you to use your own OpenAI/OpenRouter accounts.

I’d love to get more HNers in the Plandex Discord ( https://ift.tt/lVIwM4K ). Please join and say hi! And of course I’d love to hear your feedback, whether positive or negative. Thanks so much!

https://ift.tt/DX97RW3 April 17, 2025 at 02:56AM
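Plandex is written in Go and its exact map format isn't spelled out in the post, but the general tree-sitter "project map" idea can be sketched as follows: extract only top-level definitions per file, so a model can see the shape of a very large repo without loading full file contents. The TypeScript sketch below assumes the node-tree-sitter packages and is only an illustration of the technique, not Plandex's implementation.

```typescript
// Sketch of a tree-sitter "project map" (illustration only, not Plandex's code):
// list top-level definitions per file so an agent can pick which files to load
// in full, keeping the context needed for a huge repo small.
import * as fs from "fs";
import Parser from "tree-sitter";               // assumed: node-tree-sitter
import JavaScript from "tree-sitter-javascript"; // assumed: JS grammar package

function mapFile(file: string): string[] {
  const parser = new Parser();
  parser.setLanguage(JavaScript);
  const tree = parser.parse(fs.readFileSync(file, "utf8"));
  const entries: string[] = [];
  for (const node of tree.rootNode.children) {
    if (node.type === "function_declaration" || node.type === "class_declaration") {
      const name = node.childForFieldName("name")?.text ?? "<anonymous>";
      entries.push(`${file}:${node.startPosition.row + 1} ${node.type} ${name}`);
    }
  }
  return entries;
}

// A map like this stays tiny even for million-line projects, which is what makes
// a large "effective context window" practical: only symbols that look relevant
// get their full source loaded into the model's context.
console.log(mapFile("src/index.js").join("\n"));
```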

Show HN: Do You Know RGB? https://ift.tt/t8kUpbO

Show HN: Do You Know RGB? https://ift.tt/OWhvmMT June 24, 2025 at 01:49PM