Saturday, July 20, 2024

Show HN: Spectral – Visualize, explore, and share code in Python/JS/TS https://ift.tt/V2G5PFg

Show HN: Spectral – Visualize, explore, and share code in Python/JS/TS https://ift.tt/YqoK3he July 20, 2024 at 01:22AM

Show HN: I built an app to generate me windows blue screen of death https://ift.tt/xL8C7Yk

Show HN: I built an app to generate me windows blue screen of death Everybody is getting the Windows Blue Screen of Death, so I built myself an app to generate one. https://ift.tt/5UdFxRb July 20, 2024 at 01:24AM

Show HN: 80+ CLI tools to build, browse, and blend your media library https://ift.tt/19W4q8C

Show HN: 80+ CLI tools to build, browse, and blend your media library https://ift.tt/R2CMm8Z July 19, 2024 at 11:01PM

Friday, July 19, 2024

Show HN: ChatGPT Chrome Extension to Keep Temporary Chat Enabled https://ift.tt/74ocb3L

Show HN: ChatGPT Chrome Extension to Keep Temporary Chat Enabled https://ift.tt/n0OYHe9 July 19, 2024 at 09:35AM

Show HN: NetSour, CLI Based Wireshark https://ift.tt/Vo27g6z

Show HN: NetSour, CLI Based Wireshark This code is still in early beta, but I sincerely hope it will become as ubiquitous as Vim on Linux. https://ift.tt/oqJga48 July 19, 2024 at 07:47AM

It’s Getting Easier to Use Parking Meters – Learn How and Explore their History

It’s Getting Easier to Use Parking Meters – Learn How and Explore their History
By Pamela Johnson and Kelley Trahan

Our staff are installing thousands of these new single-space meters across the city. They'll make it easier to pay for parking. San Francisco has 27,000 metered parking spaces, and we're working hard to upgrade every single one. The goal: replace outdated technology with meters that are easier to use. It's all part of our Parking Meter Replacement Project.

We'll share how the upgrades help and look back on the history of parking meters in the city. For even more details, you can check out our Illustrated History of San Francisco's Parking Meters webpage. Upgrading thousands of meters: how the...

Published July 18, 2024 at 05:30AM
https://ift.tt/eYd0j92

Show HN: How we leapfrogged traditional vector based RAG with a 'language map' https://ift.tt/3KFdsy8

Show HN: How we leapfrogged traditional vector based RAG with a 'language map' TL;DR: Vector-based RAG performs poorly for many real-world applications like codebase chats, and you should consider 'language maps'.

Part of our mission at Mutable.ai is to make it much easier for developers to build and understand software. One natural way to do this is to create a codebase chat that answers questions about your repo and helps you build features. It might seem simple to plug your codebase into a state-of-the-art LLM, but LLMs have two limitations that make human-level assistance with code difficult: 1. They currently have context windows that are too small to accommodate most codebases, let alone your entire organization's codebases. 2. They need to answer any question immediately, without thinking through the answer "step by step."

About a year ago we built a chat based on keyword retrieval and vector embeddings. No matter how hard we tried, including training our own dedicated embedding model, we could not get good performance out of the chat. Here is a typical example: https://ift.tt/IeFHCKf... If you asked how to do quantization in llama.cpp, the answers were oddly specific and consistently pulled in the wrong context, especially from tests. We could, of course, take countermeasures, but it felt like a losing battle.

So we went back to step 1: let's understand the code and do our homework. For us, that meant actually writing an understanding of the codebase down in a document, a Wikipedia-style article, called Auto Wiki. The wiki features diagrams and citations to your codebase. Example: https://ift.tt/AcqDyH0 This wiki is useful in and of itself for onboarding and understanding the business logic of a codebase, but one of our hopes in constructing such a document was that it would let us circumvent traditional keyword- and vector-based RAG approaches.
It turns out that using a wiki to find context for an LLM overcomes many of the weaknesses of our previous approach, while still scaling to arbitrarily large codebases: 1. Instead of retrieving context through vectors or keywords, context is retrieved by following the sources that the wiki cites. 2. Answers are based both on the relevant section(s) of the wiki AND the content of the actual code that we put into memory; this functions as a "language map" of the codebase.

See it in action below for the same query as our old codebase chat: https://ift.tt/IeFHCKf... https://ift.tt/IeFHCKf... The answer cites its sources in both the wiki and the actual code, and gives a step-by-step guide to doing quantization with example code. The quality of the answer is dramatically improved: it is more accurate, relevant, and comprehensive. It turns out language models love being given language, not a bunch of text snippets that happen to be nearby in vector space or share certain keywords! We find consistently strong performance across codebases of all sizes.

The results from the chat are so good they even surprised us a little bit. You should check it out on a codebase of your own at https://wiki.mutable.ai , which we are happy to do for free for open source code; private repos start at just $2/mo/repo. We are introducing evals demonstrating how much better our chat is with this approach, but we were so happy with the results that we wanted to share them with the whole community. Thank you! https://twitter.com/mutableai/status/1813815706783490055 July 19, 2024 at 12:10AM
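The retrieval scheme described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Mutable.ai's actual implementation: the `WikiSection`, `relevant_sections`, and `answer_context` names are invented, and the keyword-overlap ranking stands in for whatever scoring the real system uses. The point it demonstrates is the structural difference from vector RAG: the question is matched against wiki prose, and the code context comes from following each matching section's citations into the source files.

```python
# Hypothetical sketch of "language map" retrieval: match the question to
# wiki sections, then pull in the code files those sections cite, instead
# of fetching nearest-neighbor snippets from an embedding index.
from dataclasses import dataclass, field


@dataclass
class WikiSection:
    title: str
    prose: str                                      # human-readable explanation
    citations: list = field(default_factory=list)   # paths into the codebase


def relevant_sections(wiki, question):
    """Rank sections by keyword overlap with the question (a stand-in
    for whatever ranking the real system uses)."""
    words = set(question.lower().split())
    scored = [
        (len(words & set((s.title + " " + s.prose).lower().split())), s)
        for s in wiki
    ]
    return [s for score, s in sorted(scored, key=lambda p: -p[0]) if score > 0]


def answer_context(wiki, codebase, question, max_sections=2):
    """Build the LLM context: relevant wiki prose plus the code it cites."""
    parts = []
    for section in relevant_sections(wiki, question)[:max_sections]:
        parts.append(f"## {section.title}\n{section.prose}")
        for path in section.citations:      # follow citations into the code
            parts.append(f"### {path}\n{codebase.get(path, '')}")
    return "\n\n".join(parts)


# Toy example: the quantization question pulls in the cited file, not
# unrelated sections that merely share surface-level vocabulary.
wiki = [
    WikiSection("Quantization", "How model weights are quantized.",
                ["quantize.py"]),
    WikiSection("Tokenizer", "How input text is tokenized.",
                ["tokenize.py"]),
]
codebase = {
    "quantize.py": "def quantize(w): ...",
    "tokenize.py": "def tokenize(s): ...",
}
ctx = answer_context(wiki, codebase, "explain quantization of weights")
```

Because the retrieved context is already organized as explanatory prose with code attached, the model receives "language" rather than disconnected embedding-space neighbors, which is the behavior the post attributes to the improved answers.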

Show HN: Orca – AI Game Engine https://ift.tt/By5qzel

Show HN: Orca – AI Game Engine https://ift.tt/EnUGtua August 16, 2025 at 02:52AM