Saturday, January 18, 2025

Show HN: Compile C to Not Gates https://ift.tt/eTwHNOF

Show HN: Compile C to Not Gates

Hi! I've been working on the FlipJump project, a programming language with one opcode: flip (invert) a bit, then jump (unconditionally). Every instruction is a single bit-flip followed by an unconditional jump, so the whole language is effectively a bunch of NOT gates. As poor as that sounds, this language is rich.

Today I completed my compiler from C to FlipJump. It takes C files and compiles them into FlipJump. I finished testing it all today, and it works!

My key interest in this project is to stretch what we know of computing and to prove that anything can be done, even with minimal power. Thanks for reading my announcement, and I'd be happy to answer questions.

More links:
- The FlipJump language: https://ift.tt/F1efrsU https://ift.tt/e56IC2p
- The c2fj Python package: https://ift.tt/vygJT1S https://ift.tt/Cfhg3jD

January 18, 2025 at 01:06AM
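To make the "one opcode" idea concrete, here is a minimal flip-jump interpreter sketch in Python. This is not the project's actual implementation; it assumes a fixed word width, memory as a flat little-endian bit array, and (as a simplification) treats a jump back to the current instruction as a halt.

```python
def fj_run(mem, w=8, max_steps=10_000):
    """Run a flip-jump program.

    mem: mutable list of bits (0/1). Each instruction is 2*w bits:
    the first word F is the address of the bit to flip, the second
    word J is the address to jump to. Words are little-endian.
    """
    def word(addr):
        return sum(mem[addr + i] << i for i in range(w))

    ip = 0
    for _ in range(max_steps):
        f = word(ip)       # flip target
        j = word(ip + w)   # jump target
        mem[f] ^= 1        # the single "NOT gate" operation
        if j == ip:        # self-jump: treated as halt here (assumption)
            break
        ip = j
    return mem

# One-instruction demo: flip bit 16, then jump to self (halt).
mem = [0] * 24
mem[4] = 1          # F word = 16 (bit 4 set, little-endian); J word = 0
fj_run(mem)
print(mem[16])      # the flipped bit is now 1
```

Everything a compiled C program does has to be expressed through that one flip-then-jump step, which is what makes the c2fj compiler interesting.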

Friday, January 17, 2025

Thursday, January 16, 2025

Show HN: A Common Lisp implementation in development https://ift.tt/mjibfB8

Show HN: A Common Lisp implementation in development I've been working on this for a couple of years. Implementation of the standard is still not complete, but in my opinion breakpoints and stepping work quite well! Support for loading systems with ASDF is near. Let me know if you like it! Support on Patreon or Liberapay is much appreciated. https://ift.tt/KaQFtwj January 16, 2025 at 02:23AM

Show HN: I made a tool to save multimedia from various platforms https://ift.tt/QeNjxbn

Show HN: I made a tool to save multimedia from various platforms https://ift.tt/OabSBUm January 16, 2025 at 02:00AM

Show HN: QwQ-32B APIs – o1 like reasoning at 1% the cost https://ift.tt/zmMy5iO

Show HN: QwQ-32B APIs – o1-like reasoning at 1% the cost

Ubicloud is an open source alternative to AWS. Today, we launched our inference APIs, built with open source AI models. QwQ-32B-Preview is one of those models, and it can provide o1-like reasoning at 1% the cost. QwQ is licensed under Apache 2.0 [1] and Ubicloud under AGPL v3. We deploy open models on a cloud stack that can run anywhere, which allows us to offer great price/performance.

From an accuracy standpoint, QwQ does well in math and coding domains. For example, in the MMLU-Pro Computer Science LLM benchmark, the accuracy rankings are as follows: Claude 3.5 Sonnet (82.5), QwQ-32B-Preview (79.1), and GPT-4o 2024-11-20 (73.1). [2]

You can start evaluating QwQ (and Llama 3B / 70B) by logging into the Ubicloud console: https://ift.tt/xK1JYk4 We also provide an AI chat box for convenience. We price the API endpoints at $0.60 per M tokens, or 100x lower than o1's output token price. Also, when using open models, your first million tokens each month are free, so you can start evaluating these models today.

## OpenAI o1 or QwQ-32B

In math and coding benchmarks, QwQ-32B ties with o1 and outperforms Claude 3.5 Sonnet. In our qualitative tests, we found o1 to perform better. For example, we asked both models to "add a pair of parentheses to the incorrect equation 1 + 2 * 3 + 4 * 5 + 6 * 7 + 8 * 9 = 479, to make the equation true." [3] QwQ's answer shows iterative reasoning steps, where the model enumerates over answers using light heuristics. o1's answer to the same question feels like an iterative deepen-and-test (though not purely depth-first). When we asked the models harder questions, it felt that o1 could understand the question better and employ more complex strategies. [3][4]

Finally, we found that o1's advantage in reasoning compounded with its other advantages. For example, we asked both models to write example Python programs.
Looking at the answers, it became clear that o1 was trained on a larger data set and was aware of Python libraries that QwQ-32B didn't know about. Further, QwQ-32B at times flip-flopped between English and Chinese, making it harder for us to understand the model. [3]

Now, if we think that o1 has these advantages, why the heck are we doing a Show HN on QwQ-32B (and other open-weight models)? Two reasons.

First, QwQ is still comparable to o1, and Ubicloud offers it for 100x less. You can employ a dozen QwQ-32Bs, prompt them with different search strategies, use VMs to verify their results, and still come in under what o1 costs. In the short term, combining these classic AI search strategies with AI models feels much more efficient than trying to "teach" an uber AI model.

Second, we think open source fosters collaboration and trust -- and that is its superpower, one that compounds over time. We foresee a future where open source AI not only delivers top-quality results, but also surpasses proprietary models in some areas. If you believe in that future and are looking for someone to partner with on the infrastructure side, please hit us up at info@ubicloud.com!

[1] https://ift.tt/hDg0bJY [2] https://ift.tt/FPg7rIl... [3] https://ift.tt/8EAUjRp [4] https://ift.tt/A0w5so4

January 15, 2025 at 08:59PM
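The parentheses puzzle the models were asked is also a nice illustration of the "classic AI search strategies" point: a brute-force search over every placement of a single pair of parentheses around whole terms finds an answer immediately, no LLM required. A minimal sketch in Python (using eval for the arithmetic):

```python
def solve(expr="1 + 2 * 3 + 4 * 5 + 6 * 7 + 8 * 9", target=479):
    """Try every placement of one '(' ... ')' pair around whole terms."""
    toks = expr.split(" ")  # number tokens at even indices, operators at odd
    solutions = []
    for i in range(0, len(toks), 2):         # '(' goes before term i
        for j in range(i, len(toks), 2):     # ')' goes after term j
            cand = toks[:]
            cand[i] = "(" + cand[i]
            cand[j] = cand[j] + ")"
            s = " ".join(cand)
            if eval(s) == target:
                solutions.append(s)
    return solutions

print(solve())  # includes "1 + 2 * (3 + 4 * 5 + 6) * 7 + 8 * 9"
```

Since 3 + 4 * 5 + 6 = 29 and 1 + 2 * 29 * 7 + 8 * 9 = 479, the found placement checks out by hand.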
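For readers who want to script against an inference endpoint like the one described above, here is a hedged sketch of building a chat-completions request. The base URL, model name, and auth header are placeholders, not Ubicloud's documented values; the body shape follows the common OpenAI-style format that many inference providers expose. Check the console/docs for the real endpoint and model identifiers.

```python
import json

BASE_URL = "https://api.example-inference-host.com/v1"  # placeholder, not the real endpoint
MODEL = "qwq-32b-preview"                               # placeholder model identifier

def build_chat_request(prompt, model=MODEL, temperature=0.2):
    """Build the JSON body for an OpenAI-style chat-completions call."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("Add one pair of parentheses to make "
                          "1 + 2 * 3 + 4 * 5 + 6 * 7 + 8 * 9 = 479 true.")

# To actually send it (requires the `requests` package and a real API key):
# import requests
# resp = requests.post(f"{BASE_URL}/chat/completions",
#                      headers={"Authorization": "Bearer <API_KEY>"},
#                      data=json.dumps(body))
# print(resp.json()["choices"][0]["message"]["content"])
```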

Celebrating 10 Years and 100+ Internships: How Students Help Us Keep Muni Moving

Celebrating 10 Years and 100+ Internships: How Students Help Us Keep Muni Moving
By Glennis Markison

Nearly 120 student interns from Genesys Works have supported our operations at the SFMTA. Many talented students have helped us keep Muni moving and our streets safe. We’re starting off the new year with gratitude for their hard work!

We recently celebrated 10 years of partnership and more than 100 internships with Genesys Works. See how Genesys Works students have helped our agency and the city – and what they’ve learned along the way.

SFMTA staff who have supervised interns celebrate 10 years of partnership with Genesys Works.

Helping young people build fulfilling careers

Genesys Works...



Published January 15, 2025 at 05:30AM
https://ift.tt/rZnVUdA

Show HN: Do You Know RGB? https://ift.tt/t8kUpbO

Show HN: Do You Know RGB? https://ift.tt/OWhvmMT June 24, 2025 at 01:49PM