Wednesday, March 11, 2026

Show HN: Don't share code. Share the prompt https://ift.tt/IyCw38h

Show HN: Don't share code. Share the prompt Hey HN, I'm Mario. I recently talked to a colleague about AI, agents, and how software development will change in the future. We wondered why we should even share code anymore when AI agents are already really good at implementing software from prompts alone. Why can't everyone get customized software from prompts? "Share the prompt, not the code." Well, I thought, great idea, let's do that. That's why I built Open Prompt Hub: https://ift.tt/M0DVTdb . Think GitHub, just for prompts. The idea is simple: users upload prompts that you and your AI tools can then use to generate a script, app, or web service (or to prime an agent for a certain task). Just paste one into your agent or IDE and watch it build for you. If a prompt doesn't cover your use case 100%, fork it, tweak it, et voilà: tailor-made software, ready to use! The prompts are simple markdown files with a frontmatter block for meta information. (The spec can be found here: https://ift.tt/8sgFVPd ) They are versioned, record which AI models have built them successfully, and include instructions on how the AI agent can test the resulting software. Users can note which models they have successfully or unsuccessfully executed a prompt with (builds or fails). This helps in assessing whether a prompt produces reliable output or not. Want to create an open prompt file? Here is a prompt that will guide you through it: https://ift.tt/fy98pAG Security: always a topic when dealing with AI and prompts. I've added several security checks that inspect every prompt for injections and malicious behavior: statistical analysis, plus two LLM-based checks for behavior classification and prompt-injection detection. It's an MVP for now, but all the features mentioned above are already included. If this sounds good, let me know. Try a prompt, fork it, or tell me what you'd change in the spec or security scanner.
I'm really curious about what would make you trust and reuse prompts. Or if you like the general idea... https://ift.tt/xm0NHRC March 11, 2026 at 12:29AM
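The post links to the spec but doesn't show a prompt file. As a rough sketch of the "markdown plus frontmatter" idea, here is a hypothetical file and a minimal stdlib-only parser; the field names (`title`, `version`, `builds`, `fails`) are my guesses, not taken from the actual spec linked above:

```python
# Hypothetical open-prompt file. Field names are illustrative assumptions,
# not the real spec (see the spec URL in the post).
SAMPLE = """\
---
title: CSV deduplicator
version: 1.2.0
builds: [gpt-4o, claude-sonnet]
fails: [small-local-model]
---
Write a Python CLI that removes duplicate rows from a CSV file...
"""

def parse_open_prompt(text):
    """Split a prompt file into a frontmatter dict and a markdown body.

    Only handles flat 'key: value' lines; a real parser would use YAML.
    """
    _, frontmatter, body = text.split("---\n", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

meta, body = parse_open_prompt(SAMPLE)
```

Splitting on the `---` fences keeps the body untouched, so the same file stays pasteable into an agent as plain markdown.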

Show HN: Satellite imagery object detection using text prompts https://ift.tt/QOCzo1S

Show HN: Satellite imagery object detection using text prompts I built a browser-based tool for detecting objects in satellite imagery using vision-language models (VLMs). You draw a polygon on the map and enter a text prompt such as "swimming pools", "oil tanks", or "buses". The system scans the selected area tile-by-tile and returns detections projected back onto the map as GeoJSON. Pipeline: select area and zoom level, split the region into mercantile tiles, run each tile with the prompt through a VLM, convert predicted bounding boxes to geographic coordinates (WGS84), and render the results back on the map. It works reasonably well for distinct structures in a zero-shot setting; occluded objects are still better handled by specialized detectors like YOLO models. There is a public demo; no login required. I am mainly interested in feedback on detection quality, performance tradeoffs between VLMs and specialized detectors, and potential real-world use cases. https://ift.tt/PgWLdfv March 9, 2026 at 01:22PM
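The tile enumeration and box reprojection in that pipeline come down to standard XYZ ("slippy map") tile arithmetic. A stdlib-only Python sketch of those two steps (my reconstruction, not the author's code; the mercantile library mentioned in the post wraps the same formulas):

```python
import math

TILE_SIZE = 256  # pixels per XYZ tile

def lnglat_to_tile(lon, lat, zoom):
    """Return the (x, y) XYZ tile containing a WGS84 point at this zoom."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def pixel_to_lnglat(zoom, x, y, px, py):
    """Convert a pixel position inside tile (x, y) back to WGS84 lon/lat."""
    n = 2 ** zoom
    xt = x + px / TILE_SIZE  # fractional tile coordinates
    yt = y + py / TILE_SIZE
    lon = xt / n * 360.0 - 180.0
    lat = math.degrees(math.atan(math.sinh(math.pi * (1.0 - 2.0 * yt / n))))
    return lon, lat

def bbox_to_geojson(zoom, x, y, box):
    """Project a detector's pixel box (x0, y0, x1, y1) to a GeoJSON polygon."""
    x0, y0, x1, y1 = box
    nw = pixel_to_lnglat(zoom, x, y, x0, y0)
    se = pixel_to_lnglat(zoom, x, y, x1, y1)
    ring = [nw, (se[0], nw[1]), se, (nw[0], se[1]), nw]  # closed ring
    return {"type": "Polygon", "coordinates": [[list(p) for p in ring]]}
```

Interpolating linearly in tile coordinates is exact here because XYZ tiles are linear in Web Mercator, and the `atan(sinh(...))` step undoes the Mercator latitude stretch.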

Tuesday, March 10, 2026

Show HN: Caloriva – A calorie tracker that actually understands https://ift.tt/xfjaHnw

Show HN: Caloriva – A calorie tracker that actually understands I built Caloriva because I got tired of the "search-select-confirm" loop in every other fitness app. I wanted something where I could just chat and have the data structured automatically. What it does: Parses natural language for both food and exercise. Automatically calculates macros and tracks which muscle groups you've trained. No bloated UI, just a fast way to log and get on with your day. It’s live at https://caloriva.app. I’d love to hear your thoughts on the parsing accuracy and what features would make you actually switch from your current tracker. March 10, 2026 at 12:02AM
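The "chat in, structured data out" idea can be illustrated with a deliberately tiny sketch; this is not Caloriva's implementation (the app presumably uses an LLM for parsing), and the macro numbers are illustrative, but it shows the shape of the problem the app automates:

```python
import re

# Toy per-unit macro table -- illustrative values, not the app's data.
FOODS = {
    "egg": {"kcal": 78},
    "banana": {"kcal": 105},
}

def parse_log(text):
    """Turn a free-text entry like '2 eggs and a banana' into structured rows.

    A crude regex stand-in for the natural-language parsing the app does.
    """
    rows = []
    for qty, word in re.findall(r"(\d+|an|a)\s+([a-z]+)", text.lower()):
        n = 1 if qty in ("a", "an") else int(qty)
        item = word.rstrip("s")  # crude singularization
        if item in FOODS:
            rows.append({"item": item, "qty": n,
                         "kcal": n * FOODS[item]["kcal"]})
    return rows
```

The hard part the app solves is exactly what this sketch dodges: unit ambiguity, portion sizes, and a food database far bigger than two entries.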

Show HN: Colchis Log – cryptographic audit trail for AI systems (Python) https://ift.tt/wntFj16

Show HN: Colchis Log – cryptographic audit trail for AI systems (Python) Built a tamper-proof execution logging library for AI systems. SHA-256 hash chain detects any tampering. Content-addressable payload store. CLI + Web interface. Works fully offline. Ko-fi: https://ift.tt/WqtfJrh https://ift.tt/skCJ64K March 10, 2026 at 12:02AM
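The tamper-detection property of a SHA-256 hash chain is worth spelling out. This is not Colchis Log's actual API, just a minimal stdlib illustration of the mechanism: each entry's hash covers the previous entry's hash, so editing any past entry breaks every later link:

```python
import hashlib
import json

class AuditLog:
    """Append-only log chained by SHA-256; any edit invalidates the chain."""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.entries = []

    def append(self, payload):
        """Append a JSON-serializable payload and return its chained hash."""
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = json.dumps(payload, sort_keys=True)  # canonical serialization
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"prev": prev, "payload": payload, "hash": digest})
        return digest

    def verify(self):
        """Recompute every link; False if any entry was altered."""
        prev = self.GENESIS
        for entry in self.entries:
            body = json.dumps(entry["payload"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

Canonical JSON (`sort_keys=True`) matters: without a stable serialization, an unchanged payload could re-serialize differently and falsely fail verification.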

Show HN: Ratschn – A local Mac dictation app built with Rust, Tauri and CoreML https://ift.tt/WeK3aGb

Show HN: Ratschn – A local Mac dictation app built with Rust, Tauri and CoreML Hi HN, I'm the solo developer behind Ratschn. I type a lot and got extremely frustrated with the current state of Mac dictation tools. Most of them are either heavy Electron wrappers, rely on cloud APIs (a privacy nightmare), or force you into a SaaS subscription for a tool that essentially runs on your own hardware. I wanted something that feels native, respects system resources, and runs entirely offline without forced subscriptions. The stack is Rust, Tauri, and whisper.cpp. Here are the design decisions I made: Model Size vs. Accuracy: Instead of using the smallest possible model just to claim a tiny footprint, the app downloads a ~490MB multi-language Whisper model locally on the first run. I found this to be the sweet spot: accuracy high enough (accents, technical jargon) to drastically reduce text-correction time. Hardware Acceleration: The downloaded model is compiled via CoreML. This allows the transcription to run directly on the Apple Neural Engine (ANE) and Metal on M-series chips, keeping the main CPU largely idle. Memory Footprint: By using Tauri instead of Electron, the UI footprint is negligible. While actively running, the app takes up around 500MB of RAM. This makes perfect technical sense, as it is almost entirely the ~490MB AI model being actively held in memory to ensure instant transcription the millisecond you hit the global shortcut. Input Method: It uses macOS accessibility APIs to type directly into your active window. Business Model & Pricing: I strongly dislike subscription fatigue for local tools. There is a fully functional 7-day free trial (no account required). If you want to keep it, my main focus is a fair one-time purchase (€125 for a lifetime license). However, since I highly value the technical feedback from this community, I generated an exclusive launch code (HN25) that takes 25% off at checkout (dropping it to roughly €93 / ~$100).
Bug Bounty: Since I'm a solo dev, I know I might have missed some edge cases (especially around CoreML compilation on specific M-chips or weird keyboard layouts). If you find a genuine, reproducible bug and take the time to report it here in the thread, I will happily manually upgrade you to a free Lifetime license as a massive thank you for the QA help. I'd love to hear your technical feedback on the Rust/Tauri architecture or how the CoreML compilation performs on your specific Apple Silicon setup. Happy to answer any questions! https://ratschn.com March 9, 2026 at 11:56PM

Show HN: The Mog Programming Language https://ift.tt/BgHhU7i

Show HN: The Mog Programming Language https://moglang.org March 9, 2026 at 11:27PM

Monday, March 9, 2026

Show HN: Proxly – Self-hosted tunneling on your own domain in 60 seconds https://ift.tt/YP0oWht

Show HN: Proxly – Self-hosted tunneling on your own domain in 60 seconds Proxly is a self-hosted tunneling tool that exposes local services through subdomains on your own VPS. npm install -g @a1tem/proxly, run proxly, and the interactive wizard sets up your first tunnel. No bandwidth caps, no session limits. Built it because frp's config is painful and ngrok's free tier is too limited. Open source, MIT licensed. GitHub: https://ift.tt/7KfAd1P March 8, 2026 at 03:34PM
