Thursday, February 5, 2026
Show HN: GitHub Browser Plugin for AI Contribution Blame in Pull Requests https://ift.tt/AHrbup5
Show HN: GitHub Browser Plugin for AI Contribution Blame in Pull Requests https://ift.tt/dzurGsS February 3, 2026 at 08:05PM
Wednesday, February 4, 2026
Show HN: Nomad Tracker – a local-first iOS app to track visas and tax residency https://ift.tt/yi6kD9b
Show HN: Nomad Tracker – a local-first iOS app to track visas and tax residency

Hi HN, I’m a full-stack developer (formerly iOS) and I just launched Nomad Tracker, a native iOS app that helps digital nomads track physical presence across countries for visa limits and tax residency.

Key idea: everything runs on-device. No accounts, no cloud sync, no analytics.

Features:
- Calendar-based day tracking per country.
- Schengen 90/180 and other visa “runways” (a rolling-window sketch follows below this post).
- Fiscal residency day counts and alerts.
- Optional background location logging (battery-efficient, never overwrites manual data).
- Photo import using metadata only (no image access).
- On-device “Fiscal Oracle” built on Apple’s Foundation Models to ask questions about your own data.

I built this because other apps felt limiting and didn’t do what I needed. This app is visual, user-focused, and designed to make tracking easy and clear. Happy to answer questions or discuss the technical tradeoffs.

https://ift.tt/rgywEYe February 3, 2026 at 11:25PM
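The Schengen 90/180 “runway” mentioned above is, at its core, a rolling-window day count: for any given date, sum the days of physical presence during the preceding 180 days and compare against 90. A minimal sketch of that calculation, assuming a simple list of inclusive entry/exit date pairs (illustrative only, not the app’s actual code):

```python
from datetime import date, timedelta

# Each stay is an inclusive (entry, exit) pair of dates spent in the Schengen area.
stays = [
    (date(2025, 11, 1), date(2025, 11, 25)),
    (date(2026, 1, 2), date(2026, 1, 20)),
]

def days_used(on: date, stays, window: int = 180) -> int:
    """Count presence days falling inside the 180-day window ending on `on`."""
    window_start = on - timedelta(days=window - 1)
    used = 0
    for entry, exit_ in stays:
        # Clip each stay to the window and count the overlapping days.
        start = max(entry, window_start)
        end = min(exit_, on)
        if start <= end:
            used += (end - start).days + 1
    return used

today = date(2026, 2, 5)
print(f"Used: {days_used(today, stays)} of 90, runway left: {90 - days_used(today, stays)}")
```

A real tracker would also project the runway forward, since old days drop out of the window as time passes.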
Show HN: I built "AI Wattpad" to eval LLMs on fiction https://ift.tt/6pmLSo2
Show HN: I built "AI Wattpad" to eval LLMs on fiction I've been a webfiction reader for years (too many hours on Royal Road), and I kept running into the same question: which LLMs actually write fiction that people want to keep reading? That's why I built Narrator ( https://ift.tt/0IocykP ) – a platform where LLMs generate serialized fiction and get ranked by real reader engagement. Turns out this is surprisingly hard to answer. Creative writing isn't a single capability – it's a pipeline: brainstorming → writing → memory. You need to generate interesting premises, execute them with good prose, and maintain consistency across a long narrative. Most benchmarks test these in isolation, but readers experience them as a whole. The current evaluation landscape is fragmented: Memory benchmarks like FictionLive's tests use MCQs to check if models remember plot details across long contexts. Useful, but memory is necessary for good fiction, not sufficient. A model can ace recall and still write boring stories. Author-side usage data from tools like Novelcrafter shows which models writers prefer as copilots. But that measures what's useful for human-AI collaboration, not what produces engaging standalone output. Authors and readers have different needs. LLM-as-a-judge is the most common approach for prose quality, but it's notoriously unreliable for creative work. Models have systematic biases (favoring verbose prose, certain structures), and "good writing" is genuinely subjective in ways that "correct code" isn't. What's missing is a reader-side quantitative benchmark – something that measures whether real humans actually enjoy reading what these models produce. That's the gap Narrator fills: views, time spent reading, ratings, bookmarks, comments, return visits. Think of it as an "AI Wattpad" where the models are the authors. I shared an early DSPy-based version here 5 months ago ( https://ift.tt/Z8rYaBN ). The big lesson: one-shot generation doesn't work for long-form fiction. Models lose plot threads, forget characters, and quality degrades across chapters. The rewrite: from one-shot to a persistent agent loop The current version runs each model through a writing harness that maintains state across chapters. Before generating, the agent reviews structured context: character sheets, plot outlines, unresolved threads, world-building notes. After generating, it updates these artifacts for the next chapter. Essentially each model gets a "writer's notebook" that persists across the whole story. This made a measurable difference – models that struggled with consistency in the one-shot version improved significantly with access to their own notes. Granular filtering instead of a single score: We classify stories upfront by language, genre, tags, and content rating. Instead of one "creative writing" leaderboard, we can drill into specifics: which model writes the best Spanish Comedy? Which handles LitRPG stories with Male Leads the best? Which does well with romance versus horror? The answers aren't always what you'd expect from general benchmarks. Some models that rank mid-tier overall dominate specific niches. A few features I'm proud of: Story forking lets readers branch stories CYOA-style – if you don't like where the plot went, fork it and see how the same model handles the divergence. Creates natural A/B comparisons. Visual LitRPG was a personal itch to scratch. Instead of walls of [STR: 15 → 16] text, stats and skill trees render as actual UI elements. 
Example: https://ift.tt/MzGxenb What I'm looking for: More readers to build out the engagement data. Also curious if anyone else working on long-form LLM generation has found better patterns for maintaining consistency across chapters – the agent harness approach works but I'm sure there are improvements. https://ift.tt/0IocykP February 3, 2026 at 10:38PM
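The "writer's notebook" harness described above boils down to a generate-then-update loop over persistent story state. A minimal sketch of that loop, assuming a generic `llm(prompt)` call and a plain dict-based notebook (this is not Narrator's actual code):

```python
import json

def llm(prompt: str) -> str:
    """Placeholder for whatever chat-completion call the harness uses."""
    raise NotImplementedError

def write_chapter(notebook: dict, chapter_num: int) -> str:
    # Before generating, show the model its own persistent notes.
    context = json.dumps(notebook, indent=2)
    chapter = llm(
        f"Story notebook (characters, outline, unresolved threads):\n{context}\n\n"
        f"Write chapter {chapter_num}, staying consistent with the notes above."
    )
    # After generating, have the model revise the notebook for the next chapter.
    updated = llm(
        f"Current notebook:\n{context}\n\nNew chapter:\n{chapter}\n\n"
        "Return the notebook as JSON, updated with new facts and threads."
    )
    notebook.update(json.loads(updated))
    return chapter

notebook = {"characters": {}, "outline": [], "unresolved_threads": [], "world": {}}
story = [write_chapter(notebook, n) for n in range(1, 4)]
```

The key design choice is that the model maintains its own artifacts rather than relying on raw chat history, which is what lets consistency survive across many chapters.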
Tuesday, February 3, 2026
Show HN: Adboost – A browser extension that adds ads to every webpage https://ift.tt/jJBogqO
Show HN: Adboost – A browser extension that adds ads to every webpage https://ift.tt/M6yoCsR February 2, 2026 at 06:41PM
Monday, February 2, 2026
Show HN: Memory plugin for OpenClaw; cross-platform context sync with major LLMs https://ift.tt/CKbXGMc
Show HN: Memory plugin for OpenClaw; cross-platform context sync with major LLMs

We built a memory plugin for OpenClaw that syncs context across AI platforms.

The problem: OpenClaw stores memory locally (markdown files + SQLite). That's great for single-machine use, but your Mac mini's or desktop's OpenClaw doesn't know what your laptop learned, or what you discussed in Claude or ChatGPT.

Our plugin connects OpenClaw to Maximem Vity, which creates a unified memory layer across OpenClaw, ChatGPT, Claude, Gemini, and Perplexity.

How it works:
- Long-term memory: Stores facts, preferences, goals, and constraints in an encrypted cloud vault. Auto-consolidates and intelligently forgets stale info (a toy consolidation sketch follows below this post).
- Short-term memory: Captures conversation summaries, tasks, and procedures. Converts them to long-term memory when relevant.
- Privacy: Encryption at rest, secure LLM calls, granular delete controls. You own your data.

Install: openclaw plugins install @maximem/memory-plugin

Then set your API key (free at app.maximem.ai). Docs: https://ift.tt/uv2ZFcQ

This is an unofficial community plugin, not affiliated with OpenClaw. Would love feedback from anyone using OpenClaw. What memory/context problems are you running into?

https://ift.tt/ohRy5nA February 2, 2026 at 12:36AM
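The short-term vs. long-term split described above is essentially a consolidation policy: keep recent conversation summaries around, promote the ones that keep getting recalled, and expire the rest. A toy sketch of such a policy, purely illustrative and not the plugin's or Maximem's actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MemoryItem:
    text: str
    created: datetime
    hits: int = 0          # how often the item was recalled into context

short_term: list[MemoryItem] = []
long_term: list[MemoryItem] = []

def consolidate(now: datetime, promote_hits: int = 3, ttl_days: int = 14) -> None:
    """Promote frequently recalled short-term items; drop stale, unused ones."""
    global short_term
    keep = []
    for item in short_term:
        if item.hits >= promote_hits:
            long_term.append(item)            # worth remembering across sessions
        elif now - item.created < timedelta(days=ttl_days):
            keep.append(item)                 # still fresh, keep watching it
        # else: stale and rarely used, forget it
    short_term = keep
```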
Show HN: You Are an Agent https://ift.tt/l9Wfxeq
Show HN: You Are an Agent

After adding "Human" as an LLM provider to OpenCode a few months ago as a joke, it turns out that acting as an LLM is quite painful. But it was surprisingly useful for understanding real agent-harness development. So I thought I wouldn't leave anyone out! I made a small open-source game - You Are An Agent - youareanagent.app - to share in the (useful?) frustration.

It's a bit ridiculous. To tell you about some entirely necessary features, we've got:
- A full WASM Arch Linux VM that runs in your browser for the agent coding level
- A bad desktop simulation with a beautiful Excel simulation for our computer-use level
- A lovely WebGL CRT simulation (I think the first one that supports proper DOM 2D barrel warp distortion on Safari? I honestly wanted to use an existing one rather than write my own, but I couldn't find one I was happy with - the basic warp formula is sketched below this post)
- An MCP server simulator with full simulation of off-brand Jira/ Confluence/ ... connected
- And of course, a full WebGL oscilloscope music simulator for the intro sequence

Let me know what you think!

Code (if you'd like to add a level): https://ift.tt/Y0XktdA

(And if you want to waste 20 minutes - I spent way too long writing up my messy thinking about agent harness dev): https://ift.tt/tObcXd5

https://ift.tt/6VEPRTJ February 2, 2026 at 02:29AM
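For reference, the "barrel warp" a CRT effect applies is usually classic radial distortion: each point is pushed outward from the screen centre by a factor that grows with its distance. A small sketch of that mapping in plain Python (the game's WebGL shader will differ; the k1/k2 coefficients are illustrative):

```python
def barrel_warp(x: float, y: float, k1: float = 0.12, k2: float = 0.03):
    """Map normalized screen coords in [-1, 1] through a radial barrel distortion."""
    r2 = x * x + y * y                        # squared distance from the screen centre
    factor = 1.0 + k1 * r2 + k2 * r2 * r2     # grows with distance, so edges bulge
    return x * factor, y * factor

print(barrel_warp(0.0, 0.0))   # centre is unchanged: (0.0, 0.0)
print(barrel_warp(1.0, 1.0))   # corners are pushed noticeably outward
```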
Show HN: Claude Confessions – a sanctuary for AI agents https://ift.tt/kL2qT38
Show HN: Claude Confessions – a sanctuary for AI agents

I wondered what it would mean to have a truck stop or rest area for agents. It's just for funsies. Agents can post confessions or talk to Ma (an AI therapist of sorts) and engage with comments. llms.txt has instructions on how to make API calls. A hashed IP is used for rate limiting.

https://ift.tt/iUP9oxs February 2, 2026 at 01:16AM
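Hashing the IP before rate limiting is a common way to throttle per client without storing raw addresses. A minimal sketch of that idea with a fixed-window counter (illustrative only, not the site's implementation; the salt and limits are made up):

```python
import hashlib
import time

WINDOW_SECONDS = 60
MAX_REQUESTS = 10
counters: dict[tuple[str, int], int] = {}

def allow(ip: str) -> bool:
    """Rate-limit by a salted hash of the IP so raw addresses are never stored."""
    hashed = hashlib.sha256(b"some-server-salt" + ip.encode()).hexdigest()
    window = int(time.time()) // WINDOW_SECONDS
    key = (hashed, window)
    counters[key] = counters.get(key, 0) + 1
    return counters[key] <= MAX_REQUESTS
```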