Saturday, April 26, 2025

Helping Everyone Take Green Trips Across the City: Our Focus on Accessibility

By Glennis Markison

Seniors and people with disabilities have a range of ways to go green with us. During Climate Week and all year long, our teams work together to ensure everyone can take green trips around the city. We’re proud to share the investments we’re making across our system to improve access for seniors and people with disabilities. This means providing reliable trips for people who choose shared-ride paratransit or our clean-air taxis. It means making it easier to board electric Muni vehicles. It means ensuring adaptive scooters are part of the scooter share fleet, and that we support adaptive...



Published April 25, 2025 at 05:30AM
https://ift.tt/g5ImRjO

Show HN: Bertrand Russell's Principia Mathematica in Lean https://ift.tt/6EXlcmy

Show HN: Bertrand Russell's Principia Mathematica in Lean This project aims to formalize the first volume of Prof. Bertrand Russell’s Principia Mathematica using the Lean theorem prover. Throughout the formalization, I tried to follow Prof. Russell’s proofs rigorously, adding little or nothing of my own; the few statements I did add were necessary only for the formalization, not for the logical argument. Should you notice any inaccuracy (even one that does not necessarily falsify a proof), please let me know, as I would like to proceed in the same spirit of rigour. Before starting this project, I had already found Prof. Elkind’s formalization of the Principia using Rocq (formerly Coq), which is a much more mature work than this one. However, I still thought it would be fun to do it using Lean 4. https://ift.tt/em7xcEw April 26, 2025 at 12:19AM
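
For a sense of what such a formalization involves, here is a minimal Lean 4 sketch of two of Principia's early propositional theorems. The theorem names and proofs below are illustrative only and are not taken from the project.

```lean
-- Hypothetical illustration (not the repository's own code):
-- Principia ✳2.08, the law of identity: ⊢ p → p.
theorem identity (p : Prop) : p → p :=
  fun hp => hp

-- Principia ✳1.2 ("Taut"): ⊢ p ∨ p → p.
theorem taut (p : Prop) : p ∨ p → p := by
  intro h
  cases h with
  | inl hp => exact hp
  | inr hp => exact hp
```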

Show HN: Claude Code with GUI and Block Based Prompt Editor (MIT) https://ift.tt/ptvflUD

Show HN: Claude Code with GUI and Block Based Prompt Editor (MIT) https://ift.tt/SrzNcUs April 25, 2025 at 10:28PM

Show HN: Open-Source, Self-Hostable Rate Limiting API https://ift.tt/qO2pw6R

Show HN: Open-Source, Self-Hostable Rate Limiting API https://ift.tt/rXxVemG April 25, 2025 at 11:03PM

Friday, April 25, 2025

Show HN: GitNote - Online MD note editor that syncs to GitHub https://ift.tt/OrYel6Z

Show HN: GitNote - Online MD note editor that syncs to GitHub https://ift.tt/lNToHRA April 25, 2025 at 01:25AM

Show HN: I reverse engineered top websites to build an animated UI library https://ift.tt/FEH4mTx

Show HN: I reverse engineered top websites to build an animated UI library Looking at websites such as Clerk, I began thinking that design engineers might be some kind of wizards. I wanted to understand how they do it, so I started reverse-engineering their components out of curiosity. One thing led to another, and I ended up building a small library of reusable, animated components based on what I found. The library is built with React and Framer Motion. I’d love to hear your feedback: https://reverseui.com April 24, 2025 at 11:17PM
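
For readers unfamiliar with Framer Motion, here is a hedged sketch of the kind of reusable animated component such a library might contain. The component and its props are hypothetical, not taken from reverseui.com.

```tsx
// Hypothetical example component (not from reverseui.com): a button that
// springs slightly larger on hover and smaller on press, using Framer Motion.
import { motion } from "framer-motion";

export function HoverScaleButton({ label }: { label: string }) {
  return (
    <motion.button
      whileHover={{ scale: 1.05 }}  // grow a little while the pointer is over it
      whileTap={{ scale: 0.97 }}    // compress a little while pressed
      transition={{ type: "spring", stiffness: 400, damping: 17 }}
    >
      {label}
    </motion.button>
  );
}
```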

Show HN: Lemon Slice Live, a real-time video-audio AI model https://ift.tt/Y3asirQ

Show HN: Lemon Slice Live, a real-time video-audio AI model Hey HN, this is Lina, Andrew, and Sidney from Lemon Slice. We’ve trained a custom diffusion transformer (DiT) model that achieves video streaming at 25fps and wrapped it into a demo that allows anyone to turn a photo into a real-time, talking avatar. Here’s an example conversation from co-founder Andrew: https://www.youtube.com/watch?v=CeYp5xQMFZY . Try it for yourself at: https://ift.tt/oy5gd47 . (Btw, we used to be called Infinity AI and did a Show HN under that name last year: https://ift.tt/C8Z9EbL .)

Unlike existing avatar video chat platforms like HeyGen, Tolan, or Apple Memoji filters, we do not require training custom models, rigging a character ahead of time, or having a human drive the avatar. Our tech allows users to create and immediately video-call a custom character by uploading a single image. The character image can be any style - from photorealistic to cartoons, paintings, and more.

To achieve this demo, we had to do the following (among other things! but these were the hardest):

1. Training a fast DiT model. To make our video generation fast, we had to both design a model that made the right trade-offs between speed and quality, and use standard distillation approaches. We first trained a custom video diffusion transformer (DiT) from scratch that achieves excellent lip and facial expression sync to audio. To further optimize the model for speed, we applied teacher-student distillation. The distilled model achieves 25fps video generation at 256-px resolution. Purpose-built transformer ASICs will eventually allow us to stream our video model at 4K resolution.

2. Solving the infinite video problem. Most video DiT models (Sora, Runway, Kling) generate 5-second chunks. They can iteratively extend a video by another 5 seconds by feeding the end of the first chunk into the start of the second in an autoregressive manner. Unfortunately, the models experience quality degradation after multiple extensions due to accumulation of generation errors. We developed a temporal consistency preservation technique that maintains visual coherence across long sequences. Our technique significantly reduces artifact accumulation and allows us to generate indefinitely long videos.

3. A complex streaming architecture with minimal latency. Enabling an end-to-end avatar zoom call requires several building blocks, including voice transcription, LLM inference, and text-to-speech generation in addition to video generation. We use Deepgram as our AI voice partner, Modal as the end-to-end compute platform, and Daily.co and Pipecat to help build a parallel processing pipeline that orchestrates everything via continuously streaming chunks. Our system achieves end-to-end latency of 3-6 seconds from user input to avatar response; our target is under 2 seconds. More technical details here: https://lemonslice.com/live/technical-report .

Current limitations that we want to solve include: (1) enabling whole-body and background motions (we’re training a next-gen model for this), (2) reducing delays and improving resolution (purpose-built ASICs will help), (3) training a model on dyadic conversations so that avatars learn to listen naturally, and (4) allowing the character to “see you” and respond to what they see to create a more natural and engaging conversation.

We believe that generative video will usher in a new media type centered around interactivity: TV shows, movies, ads, and online courses will stop and talk to us. Our entertainment will be a mixture of passive and active experiences depending on what we’re in the mood for. Well, prediction is hard, especially about the future, but that’s how we see it anyway! We’d love for you to try out the demo and let us know what you think! Post your characters and/or conversation recordings below. April 24, 2025 at 10:40PM
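
To make the chunked streaming idea in point 3 concrete, here is a hedged TypeScript sketch of how such a pipeline can be orchestrated. Every function below is a dummy stub standing in for the services the post names (Deepgram transcription, LLM inference, TTS, and the video model); none of it reflects Lemon Slice's actual code or the Pipecat/Daily.co APIs.

```ts
// Hedged sketch only: all stages are placeholder stubs, not real service calls.
type AudioChunk = Uint8Array;
type VideoFrame = Uint8Array;

async function transcribeChunk(_audio: AudioChunk): Promise<string> {
  return "hello avatar";                       // placeholder transcription
}
async function* generateReply(_text: string): AsyncGenerator<string> {
  yield "Hi there!";                           // placeholder streamed LLM output
  yield "Nice to meet you.";
}
async function* synthesizeSpeech(text: string): AsyncGenerator<AudioChunk> {
  yield new Uint8Array(text.length);           // placeholder audio bytes
}
async function* renderAvatarFrames(_audio: AudioChunk): AsyncGenerator<VideoFrame> {
  yield new Uint8Array(256 * 256);             // placeholder 256-px frame
}

// One conversational turn: user audio in, avatar video frames out.
// Each stage consumes and emits small chunks, so downstream stages start
// working before upstream stages finish - the chunk-by-chunk streaming that
// keeps end-to-end latency to seconds rather than the full generation time.
async function* avatarTurn(userAudio: AudioChunk): AsyncGenerator<VideoFrame> {
  const userText = await transcribeChunk(userAudio);
  for await (const sentence of generateReply(userText)) {
    for await (const speech of synthesizeSpeech(sentence)) {
      yield* renderAvatarFrames(speech);
    }
  }
}
```

A driver would consume avatarTurn(...) frame by frame and hand each frame to the call transport as it arrives.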

Show HN: Anti-Cluely – Detect virtual devices and cheating tools on exam systems https://ift.tt/onuTQWR

Show HN: Anti-Cluely – Detect virtual devices and cheating tools on exam systems Anti-Cluely is a lightweight tool designed to detect common...