Monday, February 23, 2026

Sunday, February 22, 2026

Show HN: Winslop – De-Slop Windows https://ift.tt/snKENo5

Show HN: Winslop – De-Slop Windows https://ift.tt/rcJZypb February 22, 2026 at 01:26AM

Show HN: Rigour – Open-source quality gates for AI coding agents https://ift.tt/4cEdAwt

Show HN: Rigour – Open-source quality gates for AI coding agents

Hey HN, I built Rigour, an open-source CLI that catches quality issues AI coding agents introduce. It runs as a quality gate in your workflow: after the agent writes code, before it ships.

v4 adds --deep analysis: AST extracts deterministic facts (line counts, nesting depth, method signatures), an LLM interprets what the patterns mean (god classes, SRP violations, DRY issues), then AST verifies the LLM didn't hallucinate.

I ran it on PicoClaw (open-source AI coding agent, ~50 Go files):

- 202 total findings
- 88 from deep analysis (SOLID violations, god functions, design smells)
- 88/88 AST-verified (zero hallucinations)
- Average confidence: 0.89
- 120 seconds for a full codebase scan

Sample finding: pkg/agent/loop.go, 1,147 lines, 23 functions. Deep analysis identified 5 distinct responsibilities (agent init, execution, tool processing, message handling, state management) and suggested a specific file decomposition. Every finding includes actionable refactoring suggestions, not just "fix this."

The tool is local-first: your code never leaves your machine unless you explicitly opt in with your own API key (--deep -k flag).

Tech: Node.js CLI, AST parsing per language, structured LLM prompts with JSON schema enforcement, AST cross-verification of every LLM claim.

GitHub: https://ift.tt/CiDYj9n

Would love feedback, especially from anyone dealing with AI-generated code quality in production.

https://rigour.run February 21, 2026 at 10:45PM
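As a rough illustration of the three-stage flow the post describes (deterministic AST facts, LLM interpretation, AST cross-verification), here is a minimal TypeScript sketch. All names (FileFacts, LlmFinding, verifyFindings) and the regex-based fact extraction are hypothetical stand-ins, not Rigour's actual code:

```typescript
// Hypothetical sketch of a "deterministic facts -> LLM -> AST verification" gate.
interface FileFacts {
  path: string;
  lineCount: number;
  functionCount: number;
}

interface LlmFinding {
  path: string;
  claim: string;             // e.g. "god file: mixes execution and tool handling"
  citedLineCount: number;    // figures the LLM asserts it saw
  citedFunctionCount: number;
}

// Stage 1: deterministic facts. A real tool would walk a per-language AST;
// a line count and a Go-style `func` regex stand in to keep this self-contained.
function extractFacts(path: string, source: string): FileFacts {
  return {
    path,
    lineCount: source.split("\n").length,
    functionCount: (source.match(/\bfunc\s+\w+/g) ?? []).length,
  };
}

// Stage 3: cross-check every LLM claim against measured facts and drop
// anything the model appears to have hallucinated.
function verifyFindings(findings: LlmFinding[], facts: Map<string, FileFacts>): LlmFinding[] {
  return findings.filter((f) => {
    const measured = facts.get(f.path);
    if (!measured) return false;                                   // file the LLM invented
    const linesOk = Math.abs(measured.lineCount - f.citedLineCount) <= 5;
    const funcsOk = measured.functionCount === f.citedFunctionCount;
    return linesOk && funcsOk;
  });
}

// Usage: stage 2 (the structured, JSON-schema-enforced LLM call) is omitted;
// assume `rawFindings` came back already schema-validated.
const sampleSource = "package agent\n\nfunc Run() {}\nfunc handleTool() {}\n";
const facts = new Map([["loop.go", extractFacts("loop.go", sampleSource)]]);
const rawFindings: LlmFinding[] = [
  { path: "loop.go", claim: "mixes execution and tool handling", citedLineCount: 5, citedFunctionCount: 2 },
];
console.log(verifyFindings(rawFindings, facts));
```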

Saturday, February 21, 2026

Show HN: Manifestinx-verify – offline verifier for evidence bundles (drift) https://ift.tt/9CaKeEi

Show HN: Manifestinx-verify – offline verifier for evidence bundles (drift)

Manifest-InX EBS is a spec + offline verifier + proof kit for tamper-evident evidence bundles.

Non-negotiable alignment:
- Live provider calls are nondeterministic.
- Determinism begins at CAPTURE (pinned artifacts).
- Replay is deterministic offline.
- Drift/tamper is deterministically rejected.

Try it in ~10 minutes (no signup):
1) Run the verifier against the included golden bundle → PASS
2) Tamper an artifact without updating hashes → deterministic drift/tamper rejection

Repo: https://ift.tt/VD8WbK9
Skeptic check: docs/ebs/PROOF_KIT/10_MINUTE_SKEPTIC_CHECK.md
Exit codes: 0=OK, 2=DRIFT/TAMPER, 1=INVALID/ERROR

Boundaries:
- This repo ships the verifier/spec/proof kit only. The Evidence Gateway (capture/emission runtime) is intentionally not included.
- This is not a "model correctness / no hallucinations" claim; it is evidence integrity + deterministic replay/verification from pinned artifacts.

Looking for feedback:
- Does the exit-code model map cleanly to CI gate usage?
- Any spec/report format rough edges that block adoption?

https://ift.tt/VD8WbK9 February 20, 2026 at 11:57PM
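To make the CI-gate question concrete, here is a rough TypeScript sketch of how a hash-pinned verifier could map onto the quoted exit codes; the manifest layout and file names are assumptions for illustration, not the Manifest-InX EBS spec:

```typescript
// Minimal sketch of hash-pinned verification with the exit-code model above
// (0=OK, 2=DRIFT/TAMPER, 1=INVALID/ERROR). Manifest shape is assumed.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

interface Manifest {
  artifacts: { path: string; sha256: string }[]; // pinned artifacts
}

function verifyBundle(bundleDir: string): number {
  let manifest: Manifest;
  try {
    manifest = JSON.parse(readFileSync(`${bundleDir}/manifest.json`, "utf8"));
  } catch {
    return 1; // INVALID/ERROR: manifest missing or unparseable
  }

  for (const a of manifest.artifacts) {
    let bytes: Buffer;
    try {
      bytes = readFileSync(`${bundleDir}/${a.path}`);
    } catch {
      return 1; // INVALID/ERROR: referenced artifact is missing
    }
    const actual = createHash("sha256").update(bytes).digest("hex");
    if (actual !== a.sha256) {
      return 2; // DRIFT/TAMPER: artifact no longer matches its pinned hash
    }
  }
  return 0; // OK: every pinned artifact verified deterministically offline
}

// CI gate usage: fail the pipeline on any non-zero exit code.
process.exit(verifyBundle(process.argv[2] ?? "."));
```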

Show HN: HelixDB Explorer – A macOS GUI for HelixDB https://ift.tt/eROQtKD

Show HN: HelixDB Explorer – A macOS GUI for HelixDB https://ift.tt/RvDpw37 February 20, 2026 at 11:18PM

Friday, February 20, 2026

Show HN: Saga – SQLite project tracker for AI coding agents https://ift.tt/OuTxX79

Show HN: Saga – SQLite project tracker for AI coding agents https://ift.tt/Sdt9acg February 23, 2026 at 12:19AM