Friday, August 15, 2025

Show HN: Happy Coder – End-to-End Encrypted Mobile Client for Claude Code https://ift.tt/vt1BkI0

Show HN: Happy Coder – End-to-End Encrypted Mobile Client for Claude Code Hey all! A few weeks ago we realized AI models are now so good you don't need to babysit them anymore. You can kick off a coding task at lunch and Claude Code just... works. But then you're stuck at your desk steering it. We were joking around - wouldn't it be cool to grab coffee and keep chatting with Claude from your phone? Next thing you know, four of us are hacking on weekends to make it happen. Dead simple to try: "npm install -g happy-coder", then run "happy" instead of "claude". That's it. We had three goals:

* Don't break anyone's flow - Use Claude Code normally at your desk, pick up your phone when you leave. Nothing changes, nothing breaks.
* Actually private - Full E2E encryption, no regular accounts. Your encryption keys are created on your phone and securely paired with your terminal (see the pairing sketch below). We protect our infra, not your data (because we literally can't see it).
* Hands-free is the future - This was the fun one. We hooked up 11Labs' new realtime SDK so you can literally talk to Claude Code through GPT-4.1 while walking around. We picked 11Labs because we can configure it not to store audio or transcripts.

The mobile experience turned out pretty great - fast chat, works on everything (iPads, foldables, whatever), and there's a web version too. It's free! The app and chat are completely free. Down the road we'll probably charge for voice inference or let you run it client-side with your own API keys.

Links to apps:
iOS: https://ift.tt/0vskapO...
Android (just released): https://ift.tt/6290jMS...
Web: https://ift.tt/VGrQ8bW

Would love to hear what you think! https://ift.tt/pe72EgO August 15, 2025 at 12:11AM
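
To make the "keys created on your phone, paired with your terminal" idea concrete, here is a minimal sketch of that general style of end-to-end encryption using PyNaCl public-key boxes. This is only an illustration of the pattern under assumed names and flow, not Happy Coder's actual protocol.

```python
# Illustrative only: a NaCl-style pairing between "phone" and "terminal".
# This is NOT Happy Coder's actual protocol; names and flow are hypothetical.
from nacl.public import PrivateKey, Box

# Each side generates its own keypair locally; private keys never leave the device.
phone_sk = PrivateKey.generate()
terminal_sk = PrivateKey.generate()

# During pairing, only the *public* keys are exchanged (e.g. via a QR code scan).
phone_pk = phone_sk.public_key
terminal_pk = terminal_sk.public_key

# The terminal encrypts session output so that only the phone can read it.
to_phone = Box(terminal_sk, phone_pk)
ciphertext = to_phone.encrypt(b"Claude Code session output")

# Any relay in between only ever sees `ciphertext`; the phone decrypts it.
from_terminal = Box(phone_sk, terminal_pk)
assert from_terminal.decrypt(ciphertext) == b"Claude Code session output"
```

In this pattern the server can route ciphertext between paired devices but never holds a key, which matches the "we protect our infra, not your data" framing above.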

Show HN: OWhisper – Ollama for realtime speech-to-text https://ift.tt/zHWu6BJ

Show HN: OWhisper – Ollama for realtime speech-to-text Hello everyone. This is Yujong from the Hyprnote team ( https://ift.tt/FzXaKW7 ). We built OWhisper for two reasons (also outlined in https://ift.tt/9CJXpFS ):

(1) While working with on-device, realtime speech-to-text, we found there isn't tooling for downloading and running the models in a practical way.
(2) We also got frequent requests for a way to plug custom STT endpoints into the Hyprnote desktop app, just like you can with OpenAI-compatible LLM endpoints.

Part (2) is still somewhat WIP, but we spent some time writing docs, so you'll get a good idea of what it will look like if you skim through them. Part (1) you can try now ( https://ift.tt/IqncR3Y ):

```bash
brew tap fastrepl/hyprnote && brew install owhisper
owhisper pull whisper-cpp-base-q8-en
owhisper run whisper-cpp-base-q8-en
```

If you're tired of Whisper, we also support Moonshine :) Give it a shot (owhisper pull moonshine-onnx-base-q8). We're here and looking forward to your comments! https://ift.tt/9CJXpFS August 14, 2025 at 09:17PM
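
As a rough illustration of what talking to a locally running realtime STT endpoint could look like, here is a hypothetical Python client. The URL, port, and reply-per-chunk framing are assumptions made up for this sketch, not OWhisper's documented interface; check the linked docs for the real API.

```python
# Hypothetical streaming client: the URL, port, and message framing below are
# assumptions for illustration only -- see the OWhisper docs for the real API.
import asyncio
import wave

import websockets  # pip install websockets


async def stream_file(path: str, url: str = "ws://localhost:8080/listen"):
    with wave.open(path, "rb") as wav:
        frames_per_chunk = wav.getframerate() // 10  # ~100 ms of audio per chunk
        async with websockets.connect(url) as ws:
            while True:
                chunk = wav.readframes(frames_per_chunk)
                if not chunk:
                    break
                await ws.send(chunk)       # raw PCM chunk
                print(await ws.recv())     # partial transcript (assumed reply-per-chunk)


asyncio.run(stream_file("sample.wav"))
```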

Thursday, August 14, 2025

Show HN: Yet Another Memory System for LLM's https://ift.tt/0oZIwAv

Show HN: Yet Another Memory System for LLM's Built this for my LLM workflows - needed searchable, persistent memory that wouldn't blow up storage costs. I also wanted to use it locally for my research. It's a content-addressed storage system with block-level deduplication (saves 30-40% on typical codebases). I have integrated the CLI tool into most of my workflows in Zed, Claude Code, and Cursor, and I provide the prompt I'm currently using in the repo. The project is in C++ and the build system is rough around the edges but is tested on macOS and Ubuntu 24.04. https://ift.tt/VtTDysh August 14, 2025 at 09:04AM
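
To make the content-addressed, block-level deduplication idea concrete, here is a minimal concept sketch in Python (not the project's C++ implementation; block size and layout are illustrative): files are split into fixed-size blocks, each unique block is stored once under its SHA-256 hash, and a file becomes just a list of block hashes.

```python
# Concept sketch of content-addressed, block-level deduplication.
# Not the project's implementation -- block size and storage layout are illustrative.
import hashlib

BLOCK_SIZE = 4096
block_store: dict[str, bytes] = {}   # hash -> block contents (stored once)


def put_file(data: bytes) -> list[str]:
    """Split data into blocks, store each unique block once, return the manifest."""
    manifest = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)   # dedup: identical blocks share one entry
        manifest.append(digest)
    return manifest


def get_file(manifest: list[str]) -> bytes:
    """Reassemble a file from its list of block hashes."""
    return b"".join(block_store[d] for d in manifest)


# Two files sharing a common prefix only store the shared blocks once.
a = put_file(b"x" * 8192 + b"unique-a")
b = put_file(b"x" * 8192 + b"unique-b")
assert len(block_store) == 3   # one shared "x" block + two distinct tails
```

The real system layers persistence, indexing, and search on top, but the dedup principle is the same: identical blocks hash to the same key and are stored exactly once.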

Show HN: Real-time privacy protection for smart glasses https://ift.tt/Ex12jqU

Show HN: Real-time privacy protection for smart glasses I built a live video privacy filter that helps smart glasses app developers handle privacy automatically.

How it works: You can replace a raw camera feed with the filtered stream in your app. The filter processes a live video stream, applies privacy protections, and outputs a privacy-compliant stream in real time. You can use this processed stream for AI apps, social apps, or anything else.

Features: Currently, the filter blurs all faces except those of people who have given consent. Consent can be granted verbally by saying something like "I consent to be captured" to the camera. I'll be adding more features, such as detecting and redacting other private information, speech anonymization, and automatic video shut-off in certain locations or situations.

Why I built it: While developing an always-on AI assistant/memory for glasses, I realized privacy concerns would be a critical problem for both bystanders and the wearer. Addressing this involves complex issues like GDPR, CCPA, data deletion requests, and consent management, so I built this privacy layer first for myself and other developers.

Reference app: There's a sample app (./examples/rewind/) that uses the filter. The demo video is in the README, please check it out! The app shows the current camera stream and past recordings, both privacy-protected, and will include AI features using the recordings.

Tech: Runs offline on a laptop. Built with FFmpeg (stream decode/encode), OpenCV (face recognition/blurring), Faster Whisper (voice transcription), and Phi-3.1 Mini (LLM for transcription analysis).

I'd love feedback and ideas for tackling the privacy challenges in wearable camera apps! https://ift.tt/fZz0w6U August 12, 2025 at 01:10AM
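
To illustrate just the face-blurring stage, here is a minimal OpenCV sketch in Python. It uses a stock Haar cascade detector and a webcam as stand-ins; the actual project does face recognition (to honor consent), consent detection via transcription, and FFmpeg stream re-encoding, none of which appears here.

```python
# Minimal face-blurring loop with OpenCV -- an approximation of one stage of the
# pipeline described above (no consent handling, audio, or FFmpeg re-encoding).
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)            # webcam as a stand-in for the glasses feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    cv2.imshow("privacy-filtered", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```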

Show HN: Mock Interviews for Software Engineers https://ift.tt/OoIelBY

Show HN: Mock Interviews for Software Engineers https://ift.tt/hr759Yl August 14, 2025 at 04:32AM

Show HN: Emailcore – write chiptune in plain text in the browser https://ift.tt/8jZWpyE

Show HN: Emailcore – write chiptune in plain text in the browser I tried using the AudioContext API to make the most primitive browser-based multi-voice chiptune tracker conceivable. No frameworks or external dependencies were used, and the page source ought to be very readable. Songs are written in plain, 7-bit safe text. Every line makes a voice/channel. The examples given on the page should hopefully illustrate every feature, but as a quick overview: Sounds are specified using Anglo-style note names, with flat (black) keys being the lowercase version of the white key above so as to maintain one character per note. Hence, a full chromatic scale is AbBCdDeEFgGa. Every note name is interpreted as the closest instance of that note to the preceding one. +- skips up or down an octave, ~ holds the previous note for a beat, . skips a beat, 01234 chooses one of 5 preset timbres, <> makes beats slower or faster (for all channels), () makes the current channel louder or quieter. All other characters are ignored. If you come up with a good tune, please share it in the comments! https://ift.tt/Tw50Vz2 August 14, 2025 at 03:23AM
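
As a worked example of the "closest instance" rule, here is a small Python sketch (not the page's JavaScript source) that turns a single channel's note letters into frequencies, treating A as A4 = 440 Hz and ignoring the timing, timbre, and volume symbols.

```python
# Illustrative re-implementation of the pitch rules described above (Python, not
# the page's JavaScript): each note is placed in the octave closest to the
# previous note. Timing/volume/timbre symbols (~ . <> () 0-4) are ignored here.
SCALE = "AbBCdDeEFgGa"           # ascending chromatic scale, one char per note


def to_frequencies(line: str, a4: float = 440.0) -> list[float]:
    freqs = []
    prev = 0                      # semitones relative to A (assumed A4)
    for ch in line:
        if ch in "+-":
            prev += 12 if ch == "+" else -12   # octave skip before the next note
            continue
        if ch not in SCALE:
            continue              # every other character is ignored
        pc = SCALE.index(ch)      # pitch class, in semitones above A
        # choose the instance of this pitch class closest to the previous note
        candidates = [pc + 12 * k for k in range(-6, 7)]
        prev = min(candidates, key=lambda p: abs(p - prev))
        freqs.append(a4 * 2 ** (prev / 12))
    return freqs


print(to_frequencies("CDEFGa"))   # a rising run starting just above A4
```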

Wednesday, August 13, 2025

Show HN: Nocturne – Your Car Thing's Second Chapter https://ift.tt/Xf2ojAy

Show HN: Nocturne – Your Car Thing's Second Chapter Hello HN! Recently, we released Nocturne 3.0.0, a complete replacement for the (now unusable) Spotify Car Thing stock firmware. We're proud to eliminate more e-waste in the world.

# Changes from v2
- Bluetooth tethering for car use (no more Raspberry Pi in the car)
- Full graphics acceleration
- Native Spotify login (no more client ID/secret)
- Start DJ from the Car Thing
- Podcast support
- Gesture control
- New settings
- Boot to Now Playing
- Spotify Connect device switcher
- Support for Japanese, Simplified Chinese, Traditional Chinese, Korean, Arabic, Devanagari, Hebrew, Bengali, Tamil, Thai, Cyrillic, Vietnamese, and Greek
- Full knob control support
- Local file support
- Preset button support
- Status bar on home (shows time & Bluetooth/Wi-Fi)
- Auto brightness
- Hold settings button for power menu
- Lock screen showing time full screen (press settings button)
- DJ preset binding (hold preset button while DJ is playing in Now Playing)
- Spotify mixes in Radio tab (Discover Weekly, daily mixes, etc.)
- OTA updates
- + MUCH more (this is just the important stuff!)

# Flashing
A guide to flashing Nocturne 3.0.0 is in the README. Bluetooth will work out of the box, or you can choose an alternative in the Setting up Network section. Bluetooth tethering requires hotspot capability on your phone and plan.

# Notes
This wouldn’t be possible without our donors and the rest of the Nocturne Team. We hope you’ll enjoy it, as we've spent thousands of hours working on it! Consider buying the team a coffee if you can: https://ift.tt/ePOBhSC https://ift.tt/FGYQc5P https://usenocturne.com August 12, 2025 at 10:53PM

Show HN: ReadMyMRI DICOM native preprocessor with multi model consensus/ML pipes https://ift.tt/H4txQBC

Show HN: ReadMyMRI DICOM native preprocessor with multi model consensus/ML pipes I'm building ReadMyMRI to solve a problem I kept runnin...