Sunday, January 26, 2025

Show HN: I made an extension that turns Google Sheets into Google Slides https://ift.tt/gMLqxEK

Show HN: I made an extension that turns Google Sheets into Google Slides https://ift.tt/j8bQy2h January 23, 2025 at 07:14PM

Show HN: Freelens OSS Kubernetes IDE https://ift.tt/MD7N3jK

Show HN: Freelens OSS Kubernetes IDE Hello everyone. Disappointed that Open Lens has become closed source, other enthusiasts and I are trying to carry its open-source project forward as Freelens. We hope this helps others who, like us, used Open Lens as a graphical IDE for working with Kubernetes, and keeps giving the community the chance to shape it by contributing directly to it as an open-source project. What do you think? Any feedback or contribution is welcome! Thanks! https://ift.tt/ZNGwj1s January 26, 2025 at 12:50AM

Saturday, January 25, 2025

Show HN: Magenta.nvim – AI coding plugin for Neovim focused on tool use https://ift.tt/f7sQyte

Show HN: Magenta.nvim – AI coding plugin for Neovim focused on tool use I've been developing this on and off for a few weeks. There are a few videos on the README page showing demos of the plugin. I just shipped an update today, which adds:
- inline editing with forced tool use
- better pinned context management
- prompt caching for Anthropic
- a port to Node (from Bun)
Check it out! https://ift.tt/6nmTPby January 21, 2025 at 08:37AM
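As background for the two API features the update mentions (forced tool use and Anthropic prompt caching), here is a minimal Python sketch of how such requests look against the Anthropic Messages API. It is not Magenta's code (the plugin itself runs on Node), and the replace_text tool is an invented example.

```python
# Minimal sketch, not Magenta's actual code: force a specific tool call and
# mark stable context as cacheable with the Anthropic Messages API.
# The "replace_text" tool is an invented example.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are an inline code editor. Apply only the requested edit.",
            # Prompt caching: reuse this block across requests (real prompts must
            # exceed the model's minimum cacheable length for caching to kick in).
            "cache_control": {"type": "ephemeral"},
        }
    ],
    tools=[
        {
            "name": "replace_text",
            "description": "Replace a line range of the current buffer with new text.",
            "input_schema": {
                "type": "object",
                "properties": {
                    "start_line": {"type": "integer"},
                    "end_line": {"type": "integer"},
                    "new_text": {"type": "string"},
                },
                "required": ["start_line", "end_line", "new_text"],
            },
        }
    ],
    # Forced tool use: the model must answer with this tool call, not free text.
    tool_choice={"type": "tool", "name": "replace_text"},
    messages=[
        {"role": "user", "content": "Rename the variable foo to count on lines 10-20."}
    ],
)

# The forced call arrives as a structured tool_use block with parsed arguments.
tool_use = next(block for block in response.content if block.type == "tool_use")
print(tool_use.name, tool_use.input)
```

With tool_choice set this way the reply contains no free-form prose, only the structured edit, which is presumably what makes forced tool use attractive for inline editing: the response is always machine-applicable to the buffer.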

Show HN: Snap Scope – Visualize Lens Focal Length Distribution from EXIF Data https://ift.tt/yrqHZtD

Show HN: Snap Scope – Visualize Lens Focal Length Distribution from EXIF Data Hey HN, I built this tool because I wanted to understand which focal lengths I actually use when taking photos. It's a web app that analyzes EXIF data to visualize focal length distribution patterns. While it's admittedly niche (focused specifically on photography), I think it could be useful for photographers trying to understand their lens usage patterns or making decisions about lens purchases. Features: Client-side EXIF data processing (no server uploads/tracking) / Handles thousands of photos at once / Clean visualization with shareable summaries This tool supports most RAW formats, but you might occasionally encounter files where EXIF extraction fails. In such cases, converting to more common formats like JPEG usually resolves the issue. Try it out: https://ift.tt/dIk6fVo Source: https://ift.tt/SIGFXDP https://ift.tt/dIk6fVo January 24, 2025 at 07:48PM
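Snap Scope itself parses EXIF entirely client-side in the browser; purely as an illustration of the underlying idea, here is a small Python sketch (not the app's code) that reads the FocalLength EXIF tag from a folder of JPEGs with Pillow (9.4+ for ExifTags.IFD) and prints a rough distribution. The "photos" directory is a placeholder.

```python
# Rough illustration only; Snap Scope itself parses EXIF client-side in the
# browser. Read the FocalLength tag from a folder of JPEGs with Pillow and
# print a simple distribution. The "photos" directory is a placeholder.
from collections import Counter
from pathlib import Path

from PIL import ExifTags, Image

FOCAL_LENGTH = 0x920A  # EXIF tag id for FocalLength

counts = Counter()
for path in Path("photos").glob("*.jpg"):
    try:
        exif = Image.open(path).getexif().get_ifd(ExifTags.IFD.Exif)
    except OSError:
        continue  # unreadable or non-image file, skip it
    focal = exif.get(FOCAL_LENGTH)
    if focal:
        counts[round(float(focal))] += 1  # focal length in mm, rounded

for mm, n in sorted(counts.items()):
    print(f"{mm:>4} mm  {'#' * n}")
```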

Friday, January 24, 2025

Show HN: Helicone (YC W23) – OSS LLM Observability and Development Platform https://ift.tt/ErH4FlZ

Show HN: Helicone (YC W23) – OSS LLM Observability and Development Platform Hey HN, we're Justin and Cole, the founders of Helicone ( https://helicone.ai ). Helicone is an open-source platform that helps teams build better LLM applications through a complete development lifecycle of logging, evaluation, experimentation, and release. You can try our free demo by signing up ( https://ift.tt/edJuxpW ) or self-deploy with our new fully open-source helm chart ( https://ift.tt/jQ7e4c1 ).

When we first launched 22 months ago, we focused on providing visibility into LLM applications. With just a single line of code, teams could trace requests and responses, track token usage, and debug production issues. That simple integration has since processed over 2.1B requests and 2.6T tokens, working with teams ranging from startups to Fortune 500 companies.

However, as we scaled and our customers matured, it became clear that logging alone wasn't enough to manage production-grade applications. Teams like Cursor and V0 have shown what peak AI application performance looks like and it's our goal to help teams achieve that quality. From speaking with users, we realized our platform was missing the necessary tools to create an iterative improvement loop - prompt management, evaluations, and experimentation.

Helicone V1: Log → Review → Release (Hope it works)

From talking with our users, we noticed a pattern: while many successfully launch their MVP quickly, the teams that achieve peak performance take a systematic approach to improvement. They identify inconsistent behaviors through evaluation, experiment methodically with prompts, and measure the impact of each change. This observation shaped our new workflow:

Helicone V2: Log → Evaluate → Experiment → Review → Release

It begins with comprehensive logging, capturing the entire context of an LLM application. Not just prompts and responses, but variables, chain steps, embeddings, tool calls, and vector DB interactions ( https://ift.tt/NoE3QUH ).

Yet even with detailed traces, probabilistic systems are notoriously hard to debug at scale. So, we released evaluators (either via LLM-as-judge or custom Python evaluators leveraging the CodeSandbox SDK - https://ift.tt/vBmKMsg ). From there, our users were able to more easily monitor performance and investigate what went wrong. Did the embedding search return poor results? Did a tool call fail? Did the prompt mishandle an edge case?

But teams would still edit prompts in a playground, run a few test cases, and deploy based on intuition. This lacked the systematic testing we're used to in traditional software development. That's why we built experiments (similar to Anthropic's workbench but model-agnostic) ( https://ift.tt/0JY9jun ). For instance, when a prompt generates occasional rude support responses, you can test prompt variations against historical conversations. Each variant runs through your production evaluators, measuring real improvement before deployment. Once deployed, the cycle begins again.

We recognize that Helicone can't solve all of the problems you might face when building an LLM application, but we hope that we can help you bring a better product to your customers through our new workflow.

If you're curious how our infrastructure handled our growth: our initial architecture struggled - synchronous log processing overwhelmed our database and query times went from milliseconds to minutes. We've completely rebuilt our infrastructure with two key changes: 1) using Kafka to decouple log ingestion from processing, and 2) splitting storage by access pattern across S3, Kafka, and ClickHouse. This was a long journey but resulted in zero data loss and fast query times even at billions of records. You can read about that here: https://ift.tt/JZtVr3g...

We'd love your feedback and questions - join us in this HN thread or on Discord ( https://ift.tt/yVRJKjL ). If you're interested in contributing to what we build next, check out our GitHub. https://ift.tt/vb1yuDK January 23, 2025 at 11:28PM
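For readers unfamiliar with the ingestion/processing split described above, here is a generic Python sketch of that pattern using confluent-kafka. The topic name, batch size, and the ClickHouse/S3 sink comment are illustrative assumptions, not Helicone's actual pipeline.

```python
# Generic sketch of the "decouple ingestion from processing" pattern described
# above, using confluent-kafka. The topic name, batch size, and the
# ClickHouse/S3 sink comment are illustrative assumptions, not Helicone's code.
import json

from confluent_kafka import Consumer, Producer

BOOTSTRAP = "localhost:9092"
TOPIC = "llm-request-logs"  # hypothetical topic name

# Ingestion path: enqueue and return immediately, never block on the database.
producer = Producer({"bootstrap.servers": BOOTSTRAP})

def ingest(log: dict) -> None:
    producer.produce(TOPIC, json.dumps(log).encode("utf-8"))
    producer.poll(0)  # serve delivery callbacks without blocking

# Processing path: a separate worker drains the topic and writes in batches.
consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": "log-processors",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

def flush(batch: list) -> None:
    # A real pipeline would bulk-insert into an analytics store (e.g. ClickHouse)
    # and push large payloads to object storage (e.g. S3).
    print(f"flushing {len(batch)} log records")

batch = []
while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    batch.append(json.loads(msg.value()))
    if len(batch) >= 500:
        flush(batch)
        consumer.commit()
        batch.clear()
```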

Show HN: Mixlist https://ift.tt/bGvmg4F

Show HN: Mixlist I built a web app that uses k-means clustering on artist genres (one or multiple) to automatically organize your Spotify Liked Songs into playlists. Clean UI. You might have to click "refresh playlists" a couple of times to get what you want. Comments are appreciated, thanks! https://ift.tt/t6cORzn January 23, 2025 at 11:11PM
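The post doesn't detail the pipeline beyond "k-means clustering on artist genres", so here is a hedged sketch of one way that could work: one-hot encode each track's artist genre tags and cluster the vectors with scikit-learn. The genre data and cluster count below are made-up placeholders; a real implementation would pull genres from the Spotify Web API.

```python
# Sketch of "k-means on artist genres", not Mixlist's actual code: one-hot
# encode each track's artist genre tags and cluster the vectors. The genre
# data and cluster count are made-up placeholders; a real app would pull
# genres from the Spotify Web API.
from sklearn.cluster import KMeans
from sklearn.preprocessing import MultiLabelBinarizer

tracks = {
    "Track A": ["indie rock", "shoegaze"],
    "Track B": ["techno", "minimal techno"],
    "Track C": ["indie rock", "dream pop"],
    "Track D": ["deep house", "techno"],
}

mlb = MultiLabelBinarizer()
X = mlb.fit_transform(tracks.values())  # one row of 0/1 genre flags per track

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Group tracks by cluster label: one candidate playlist per cluster.
playlists = {}
for name, label in zip(tracks, labels):
    playlists.setdefault(int(label), []).append(name)

for label, members in sorted(playlists.items()):
    print(f"Playlist {label}: {members}")
```

Without a fixed random_state, k-means initialization is random, which is one plausible reason repeated "refresh playlists" runs can produce different groupings.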

Thursday, January 23, 2025

Show HN: Stratoshark, a sibling application to Wireshark https://ift.tt/2HMiBJa

Show HN: Stratoshark, a sibling application to Wireshark Hi all, I'm excited to announce Stratoshark, a sibling application to Wireshark that lets you capture and analyze process activity (system calls) and log messages in the same way that Wireshark lets you capture and analyze network packets. If you would like to try it out you can download installers for Windows and macOS and source code for all platforms at https://stratoshark.org. AMA: I'm the goofball whose name is at the top of the "About" box in both applications, and I'll be happy to answer any questions you might have. https://ift.tt/oywhSWG January 22, 2025 at 08:55PM

Show HN: Anti-Cluely – Detect virtual devices and cheating tools on exam systems https://ift.tt/onuTQWR

Show HN: Anti-Cluely – Detect virtual devices and cheating tools on exam systems Anti-Cluely is a lightweight tool designed to detect common...