Saturday, July 13, 2024

All Aboard the Boat Tram This Summer!

All Aboard the Boat Tram This Summer!
By Madhu Unnikrishnan

Is it a boat or a tram? Find out this summer! Our beloved Boat Tram will make its 2024 debut this Saturday, July 13. It joins a roster of historic vehicles plying the city’s rails as part of Muni’s Summer Heritage Service. We’ll share more about the history of these heritage vehicles. First, let’s cover how to catch a ride.

Muni Summer Heritage Service includes:

Vintage Streetcars on the Embarcadero
Where you can ride: Serving the Embarcadero, the heritage streetcars make stops on the F Market & Wharves Line between the Ferry Building and Pier 39.
When you can ride: Sundays and Mondays through...



Published July 12, 2024 at 05:30AM
https://ift.tt/IcWKYkE

Show HN: Windows 9X – Windows 98 but all of the programs are AI generated https://ift.tt/tKqy7bJ

Show HN: Windows 9X – Windows 98 but all of the programs are AI generated https://ift.tt/t2VUCoR July 12, 2024 at 08:44PM

Friday, July 12, 2024

Show HN: Leaderboard of Top GitHub Repositories Based on Stars https://ift.tt/yQAeniV

Show HN: Leaderboard of Top GitHub Repositories Based on Stars I created a leaderboard showcasing the top 1000 GitHub repositories based on the number of stars. With GitHub hosting over 100 million public repositories, this leaderboard highlights the top 0.001% in terms of the number of stars. Stars might not be the perfect metric for adoption—metrics like the number of monthly downloads could be more accurate—but this list still represents some of the most popular and influential projects in the open-source community. You can check out the leaderboard here: https://ift.tt/aPwCKZe https://ift.tt/aPwCKZe July 12, 2024 at 02:07AM
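The post doesn't say how the leaderboard is built, but as a rough illustration of where such data can come from, here is a minimal sketch (not the author's implementation) that pulls the most-starred public repositories from GitHub's REST search API; the query string and formatting are assumptions.

```python
import json
import urllib.request

# Hypothetical sketch, not the leaderboard's actual code: fetch the 100
# most-starred public repos from GitHub's search API. The Search API returns
# at most 1,000 results per query, paged 100 at a time, so building a full
# top-1000 list requires paginating with the `page` parameter.
URL = ("https://api.github.com/search/repositories"
       "?q=stars:%3E1000&sort=stars&order=desc&per_page=100")

req = urllib.request.Request(URL, headers={"Accept": "application/vnd.github+json"})
with urllib.request.urlopen(req) as resp:
    items = json.load(resp)["items"]

for rank, repo in enumerate(items, start=1):
    print(f"{rank:4d}. {repo['full_name']}  ({repo['stargazers_count']} stars)")
```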

Show HN: Mandala – Automatically save, query and version Python computations https://ift.tt/FQIApHr

Show HN: Mandala – Automatically save, query and version Python computations `mandala` is a framework I wrote to automate tracking ML experiments for my research. It differs from other experiment tracking tools by making persistence, query and versioning logic a generic part of the programming language itself, as opposed to an external logging tool you must learn and adapt to. The goal is to be able to write expressive computational code without thinking about persistence (like in an interactive session), and still have the full benefits of versioned, queryable storage afterwards. Surprisingly, it turns out that this vision can pretty much be achieved with two generic tools:

1. A memoization+versioning decorator, `@op`, which tracks inputs, outputs, code and runtime dependencies (other functions called, or global variables accessed) every time a function is called. Essentially, this makes function calls replace logging: if you want something saved, you write a function that returns it. Using (a lot of) hashing, `@op` ensures that the same version of the function is never executed twice on the same inputs (a toy sketch of this idea follows below). Importantly, the decorator encourages/enforces composition. Before a call, `@op` functions wrap their inputs in special objects, `Ref`s, and return `Ref`s in turn. Furthermore, data structures can be made transparent to `@op`s, so that an `@op` can be called on a list of outputs of other `@op`s, or on an element of the output of another `@op`. This creates an expressive "web" of `@op` calls over time.

2. A data structure, `ComputationFrame`, which can automatically organize any such web of `@op` calls into a high-level view, by grouping calls with a similar role into "operations", and their inputs/outputs into "variables". It can detect "imperative" patterns - like feedback loops, branching/merging, and grouping multiple results in a single object - and surface them in the graph. `ComputationFrame`s are a "synthesis" of computation graphs and relational databases, and can be automatically "exported" as dataframes, where columns are variables and operations in the graph, and rows contain values and calls for (possibly partial) executions of the graph. The upshot is that you can query the relationships between any variables in a project in one line, even in the presence of very heterogeneous patterns in the graph.

I'm very excited about this project - which is still in an alpha version being actively developed - and especially about the `ComputationFrame` data structure. I'd love to hear the feedback of the HN community.

Colab quickstart: https://ift.tt/b7fUM0G...
Blog post introducing `ComputationFrame`s (can be opened in Colab too): https://ift.tt/guEhzXi
Docs: https://ift.tt/9Qw0hE7

https://ift.tt/DhiR9Sj July 12, 2024 at 01:40AM
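To make the memoization+versioning idea concrete, here is a toy sketch of a decorator that keys a cache on a hash of the function's source (a crude stand-in for a version) plus a hash of its inputs. This is not mandala's actual API: real `@op` additionally wraps values in `Ref`s, persists results to storage, and tracks runtime dependencies, and the `train` function here is purely hypothetical.

```python
import functools
import hashlib
import inspect
import pickle

# Toy cache: (hash of function source, hash of inputs) -> stored result.
_CACHE = {}

def op(func):
    """Toy memoization+versioning decorator in the spirit of mandala's @op
    (NOT its real API): the same "version" of a function never recomputes
    the same inputs."""
    version = hashlib.sha256(inspect.getsource(func).encode()).hexdigest()

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        inputs = hashlib.sha256(
            pickle.dumps((args, tuple(sorted(kwargs.items()))))
        ).hexdigest()
        key = (version, inputs)
        if key not in _CACHE:
            _CACHE[key] = func(*args, **kwargs)  # compute once, then reuse
        return _CACHE[key]

    return wrapper

@op
def train(lr: float, epochs: int) -> float:  # hypothetical "expensive" step
    print("running the expensive computation...")
    return lr * epochs

train(0.1, 10)  # computes and caches
train(0.1, 10)  # served from the cache; nothing is printed
```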

Show HN: Upload your photo and generate crazy YouTube Faces for your thumbnail https://ift.tt/OjiaUJg

Show HN: Upload your photo and generate crazy YouTube Faces for your thumbnail Upload your photo and this AI tool generates hundreds of high-conversion YouTube faces. Our AI analyzed millions of viral video thumbnails and found the top-performing YouTube face templates for each niche. It then selects and generates the best-performing faces for your content. It works with both realistic photos and cartoon photos for faceless channels. https://ift.tt/DJwusaT July 12, 2024 at 12:27AM

Show HN: I made an SEO checker to fix frustrating issues in minutes, not hours https://ift.tt/xFBCHWM

Show HN: I made an SEO checker to fix frustrating issues in minutes, not hours If you have any issues optimizing your website, Seototal will help you. A while ago I was trying to improve the SEO of my first startup; that was when I realized how clunky and overcrowded most of the SEO tools I used were, initially Ahrefs and Semrush. I built Seototal to be lightweight and focused on the basics. It checks on-page and technical issues and outputs straightforward reports, backed by quick, helpful knowledge-base articles that help you fix your SEO basics fast. The website is still in its early stages and is actively being improved, so I'm open to hearing any issues or feature recommendations you have. Thank you for your time. https://seototal.xyz July 11, 2024 at 11:40PM

Thursday, July 11, 2024

Show HN: Dut, a fast Linux disk usage calculator https://ift.tt/tR4HhQV

Show HN: Dut, a fast Linux disk usage calculator "dut" is a disk usage calculator that I wrote a couple months ago in C. It is multi-threaded, making it one of the fastest such programs. It beats normal "du" in all cases, and beats all other similar programs when Linux's caches are warm (so, not on the first run). I wrote "dut" as a challenge to beat similar programs that I used a lot, namely pdu[1] and dust[2].

"dut" displays a tree of the biggest things under your current directory, and it also shows the size of hard links under each directory. The hard-link tallying was inspired by ncdu[3], but I don't like how unintuitive the readout is. Anyone have ideas for a better format?

There are installation instructions in the README. dut is a single source file, so you only need to download it, copy-paste the compiler command, and then copy the binary somewhere on your path like /usr/local/bin.

I went through a few different approaches writing it, and you can see most of them in the git history. At the core of the program is a data structure that holds the directories that still need to be traversed, and binary heaps to hold statted files and directories. I had started off using C++ std::queues with mutexes, but the performance was awful, so I took it as a learning opportunity and wrote all the data structures from scratch. That was the hardest part of the program to get right.

These are the other techniques I used to improve performance:

* Using fstatat(2) with the parent directory's fd instead of lstat(2) with an absolute path. (10-15% performance increase; see the sketch after this post)
* Using statx(2) instead of fstatat. (perf showed fstatat running statx code in the kernel). (10% performance increase)
* Using getdents(2) to get directory contents instead of opendir/readdir/closedir. (also around 10%)
* Limiting inter-thread communication. I originally had fs-traversal results accumulated in a shared binary heap, but giving each thread a binary heap and then merging them all at the end was faster.

I couldn't find any information online about fstatat and statx being significantly faster than plain old stat, so maybe this info will help someone in the future.

[1]: https://ift.tt/58Fenog
[2]: https://ift.tt/hI5Owxf
[3]: https://ift.tt/kzeDlxg , see "Shared Links"

https://ift.tt/akTMhFV July 11, 2024 at 04:59AM
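The first bullet (resolving entries relative to the parent directory's fd instead of building an absolute path for every stat) can be illustrated with a minimal, single-threaded Python sketch; this is not the author's C code, and it omits dut's threading, binary heaps, and the statx/getdents optimizations. Python's `os.lstat(..., dir_fd=...)` and `os.open(..., dir_fd=...)` map to fstatat(2) and openat(2) on Linux.

```python
import os
import stat

def du_fd(dirfd: int) -> int:
    """Sum apparent sizes under the directory open at `dirfd`, statting and
    opening each entry relative to that fd rather than via absolute paths."""
    total = 0
    for name in os.listdir(dirfd):             # list by fd (getdents under the hood)
        st = os.lstat(name, dir_fd=dirfd)      # ~ fstatat(dirfd, name, AT_SYMLINK_NOFOLLOW)
        total += st.st_size
        if stat.S_ISDIR(st.st_mode):
            child = os.open(name, os.O_RDONLY | os.O_DIRECTORY | os.O_NOFOLLOW,
                            dir_fd=dirfd)      # ~ openat(dirfd, name, ...)
            try:
                total += du_fd(child)
            finally:
                os.close(child)
    return total

def du(path: str) -> int:
    fd = os.open(path, os.O_RDONLY | os.O_DIRECTORY)
    try:
        return du_fd(fd)
    finally:
        os.close(fd)

if __name__ == "__main__":
    print(du("."))
```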

Show HN: Do You Know RGB? https://ift.tt/t8kUpbO

Show HN: Do You Know RGB? https://ift.tt/OWhvmMT June 24, 2025 at 01:49PM