Saturday, December 17, 2022

Show HN: The easiest way to run a brainstorm, supercharged with A.I https://ift.tt/TZW5bON

Show HN: The easiest way to run a brainstorm, supercharged with A.I Hello friends! We've been working hard on building Mimosa, and would love for you to try it! No signups required. Facilitating brainstorming sessions can be a challenging and time-consuming task. It's difficult to keep everyone on the agenda, engaged, and contributing. So we've built Mimosa to be the easiest way to facilitate a brainstorming session. Imagine if a "digital whiteboard" and a "Trello board" had a baby, built with all the best practices of professional facilitation. With Mimosa, you can minimize bias by allowing team members to contribute anonymously before revealing their ideas. This ensures that everyone has a fair chance to share their thoughts and ideas without fear of judgment. Once the brainstorming session is complete, you can easily view and export the final results. This allows you to keep track of a meeting's effectiveness and share it with stakeholders. But the real game-changer is our AI Brainstorming features and their ability to turn your 1x team into a 10x team. The AI generates ideas and collaborates with you during your brainstorming session to help you come up with more and better ideas. We're both scared and excited to hear all your thoughts, HN! :) Please do let us know any features or improvements you think we should make to help you in your meetings. https://mimosa.so/ December 17, 2022 at 12:11AM

Friday, December 16, 2022

Show HN: Simpler – Your GPT-3 Task Planner https://ift.tt/zuFbxkS

Show HN: Simpler – Your GPT-3 Task Planner https://simplerlist.com December 16, 2022 at 07:11AM

Show HN: I'm challenging your clicking speed with this game I rebuilt https://ift.tt/teBz9hs

Show HN: I'm challenging your clicking speed with this game I rebuilt https://ift.tt/80nxWYD December 16, 2022 at 02:15AM

Show HN: Natural language Twitter search using Codex https://ift.tt/P5spM2a

Show HN: Natural language Twitter search using Codex We built a structured search engine for Twitter called Bird SQL, available at https://ift.tt/g1p29Gd . Our search interface uses OpenAI Codex to translate natural language to SQL. Our backend then verifies the SQL, executes it, and displays the results on the web app. This makes large structured datasets like a scrape of Twitter easy for anyone to explore. As background, while working on text-to-SQL as a general problem, we came to believe one of its most powerful applications is as a search tool because:

- SQL is hard to write by hand and prone to errors
- It allows you to iterate quickly if you're exploring a new dataset
- A lot of contextual information that you'd normally have to internalize (e.g. your data's schema) can be automatically generated and offloaded to the language model

Using large language models (LLMs) like Codex to write the SQL for you means you don't have to worry about the nitty-gritty language details, but you still benefit from the power of a language like SQL. Also, after seeing the results of the query, you can inspect (and if necessary, change) the SQL. The lack of this sort of explainability of the query result is one of the more notorious challenges of returning the output of an LLM directly to the user. Additionally, using LLMs in this way makes these kinds of queries over structured data accessible to people who know little or no SQL. While Bird SQL shares significant infrastructure with our more general LLM-powered search engine over unstructured data (Ask Perplexity - https://perplexity.ai [1]), the two approaches and their respective challenges are quite different. For example, the types of models are different (GPT-3.5 vs. Codex), the model prompts obviously have different structure, and verifying model output when it's text is different from verifying it when it's code. We are currently exploring ways to combine the two approaches, such as using the results of retrieving information from a structured source (as in Bird SQL) as one of the inputs for the LLM to interpret or summarize (as in Ask Perplexity). We would love to hear your questions, suggestions, and feedback! [1] https://ift.tt/GDk7Ieq https://ift.tt/ByEcuSL December 16, 2022 at 03:42AM
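For readers curious what the translate-verify-execute loop described above might look like in practice, here is a minimal Python sketch. The model name, prompt format, and toy schema are illustrative assumptions; Bird SQL's actual backend is not public.

    # Minimal sketch of a translate-verify-execute loop for natural
    # language search over structured data. Model choice, prompt
    # format, and schema are illustrative assumptions only.
    import sqlite3
    import openai

    SCHEMA = (
        "CREATE TABLE users (id INTEGER PRIMARY KEY, handle TEXT, followers INTEGER);\n"
        "CREATE TABLE tweets (id INTEGER PRIMARY KEY, user_id INTEGER, text TEXT, likes INTEGER);"
    )

    def question_to_sql(question: str) -> str:
        # Hand the model the schema as context so it can reference real
        # tables and columns instead of guessing them.
        prompt = f"-- SQLite schema:\n{SCHEMA}\n-- Question: {question}\n-- SQL:\nSELECT"
        resp = openai.Completion.create(
            model="code-davinci-002",  # Codex-family model (assumed choice)
            prompt=prompt,
            max_tokens=150,
            temperature=0,
            stop=[";"],
        )
        return "SELECT" + resp["choices"][0]["text"] + ";"

    def verify_and_run(db: sqlite3.Connection, sql: str) -> list:
        # "Verify" step: EXPLAIN makes SQLite parse and plan the query
        # without executing it, so syntax errors and unknown columns
        # are caught before anything runs.
        db.execute("EXPLAIN " + sql)
        return db.execute(sql).fetchall()

    # Example usage against a local scrape (hypothetical database file):
    # db = sqlite3.connect("twitter_scrape.db")
    # rows = verify_and_run(db, question_to_sql("top 5 most liked tweets"))

Returning the generated SQL alongside the results is what enables the explainability the post describes: the user can inspect and edit the query rather than trusting opaque model output.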

Show HN: Readwise Reader, an all-in-one reading app https://ift.tt/1cm5Egu

Show HN: Readwise Reader, an all-in-one reading app Hey HN, cofounder of Readwise here. We've been working on this cross-platform reader app for about 2 years and are excited to finally share it in public beta. Probably the most notable thing that makes Reader unique is that it supports almost any content type you could want to save/read/highlight:

* web pages
* emails/newsletters
* PDFs
* ePubs
* twitter threads
* youtube videos (with transcripts)
* RSS feeds

With all of your knowledge content in one place, we built powerful reading and highlighting, as well as a bunch of novel triage/organization features, so you can actually consume and stay on top of that content! There are a lot of advanced features too, such as text-to-speech, GPT-3 questions/summaries, super powerful highlighting (that includes markup and images), complex filtering/search (with our own query language), a sleek mobile triage UI, keyboard shortcuts for reading/everything, integrations with note-taking apps, a browser extension for both saving pages and highlighting them, and much more. If anyone's interested in more product details, as well as our business model, etc., we wrote a detailed launch post: https://ift.tt/HESizsg... Predicting a common question: Reader is part of the Readwise subscription pricing right now in beta -- there's a 30-day free trial and then it's paid at ~$8 USD/month. We also promise not to raise this price for existing subscribers. Reader is also fairly technically interesting -- our iOS, Android, and web apps all work fully offline and sync your reading data/progress with each other. Our search on web is built with wasm sqlite. We have a fairly intense pipeline for cleaning web articles (removing ads/styling). We share a lot of modules around syncing/highlighting across all platforms, etc... Happy to answer any questions :) https://ift.tt/pM2wkzL December 16, 2022 at 03:14AM

A Brief History of the T Third Part 2: 1980s-2023

A Brief History of the T Third Part 2: 1980s-2023
By Jeremy Menzies

Beginning in January 2023, full service on the new T Third extension will run from Sunnydale to Chinatown every day. In this two-part blog series, we look back at the history of the T Third Street line. Part 1, published last month, covers the first 100+ years. In Part 2, we look at the recent history of the T Third and Central Subway projects from the 1980s to today.

The Call for Better Transit: 1980s-90s 

In the decades following World War II, the neighborhoods along the southern end of 3rd Street became more economically depressed and transit service declined.  Residents felt cut off from the rest of the city as bus service did not meet their needs. 

People boarding a 15 Route bus on 3rd Street near Market in 1983. Bus service on the 15 provided critical north-south service through the City's busiest areas.

In the late 1980s, the city was looking to revitalize Mission Bay, Dogpatch, and the Bayview, and reliable transit was critical to this goal. Early outreach and research resulted in the 1993 Bayshore Transit Study. This initial plan solicited community input on several possible ways to improve transit to these neighborhoods. Two years later, in 1995, the Four Corridor Plan built upon the Bayshore Transit Study and elevated 3rd Street to the top priority in San Francisco's long-range transit plan. It was clear that residents, particularly in the Bayview, wanted rail service to return to 3rd Street.

T Third Phase 1: 1990s-2007 

These reports and outreach formed the backbone of the Third Street Light Rail Project, which would be built in two initial phases. Phase 1 involved extending Muni Metro service from 4th and King to Bayshore Boulevard along 3rd Street. Phase 2 would focus on the 4th and Stockton corridors to extend the service into Chinatown and possibly North Beach.

By the end of the ‘90s, funding was coming in to make Phase 1 a reality. Early plans for the T also included a new rail maintenance facility, Muni Metro East, as well as a turnback loop in Mission Bay and a direct connection to the Bayshore Caltrain Station. Due to various factors during preconstruction planning, the Mission Bay Loop and Bayshore Caltrain connection were dropped from the T Line plan. 

A groundbreaking ceremony for T Third construction was held on May 28, 2002 at the 4th and King Caltrain Station.

Construction of 5.1 miles of new tracks, overhead power lines, lighting, stations, and a variety of other improvements took five years to complete. On January 13, 2007, free weekend shuttle service commenced on the new line and full weekday service started on April 7. Just one year later in 2008, the Muni Metro East rail yard opened, boosting Muni’s ability to serve the new line. For the first time in 50 years, rail service returned to the eastern waterfront. 

View north along 3rd Street at Jamestown Avenue during construction in 2004.

T Third Phase 2: Central Subway 

Planning and outreach for Phase 2 of the T Line had already begun when the line opened in 2007. The Central Subway Project was created to address the transit needs of Chinatown, Union Square and South of Market. Construction would extend the T nearly two miles and build the first new subway in the city since the 1970s.

This 2016 photo, taken inside the excavation for Chinatown Station, shows the massive scale of the Central Subway Project.

Early proposals showed the Central Subway traveling north on 3rd Street and along Geary where it would turn up Stockton to end in Chinatown. Going south, the line would branch and exit the tunnel on 4th Street. The challenges of construction along 3rd and at Market Street resulted in a plan to run the line on 4th and Stockton streets. It was also decided to tunnel underneath the Market Street Subway/BART tunnels using special tunnel boring machines instead of more conventional construction methods. 

The official groundbreaking ceremony took place on February 9, 2010. The arduous process of building a subway with four stations through San Francisco's densest neighborhoods began soon after. After an intensive construction period marked by project delays and cost increases, the Central Subway opened for service on November 19, 2022.

Opening day of the Central Subway on November 19, 2022. Thousands of people came out to see and ride the long-awaited subway.

Over 30 years in the making, the T Third line follows in the footsteps of the first horsecars that ran over 160 years ago. However, this is not the end of the line for the T. Planning is already underway on the T Third Phase 3 extension. Aimed at expanding service beyond 3rd Street, this extension will mark yet another chapter in San Francisco’s transportation history. 



Published December 16, 2022 at 12:52AM
https://ift.tt/KG4aUVl

Show HN: AI Avatar Image Generator Based on Other AI Images https://ift.tt/JETnMHa

Show HN: AI Avatar Image Generator Based on Other AI Images An AI avatar image generator that creates images of you based on other AI-created images. The popularity of the recent AI avatar image generator apps sparked my interest in the area. After playing around with a few AI avatar image generators, some of which generate pictures of you based on pictures you upload and prompts you write, I thought: wouldn't it be much easier, instead of writing out prompts, to just select another image that you want your generated image to be based on? The image the user selects is actually another AI-generated image, so we know the prompt used to generate it. Prompt writing is thus abstracted away from the user; instead, they can search for images and click on ones they like. The backend of the application currently uses Astria AI for the image model training and generation, and the Lexica API for getting lists of AI-generated images and their prompts. https://ift.tt/w9fLgP0 December 15, 2022 at 08:19PM
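A rough Python sketch of the flow described above: search an index of AI-generated images, let the user pick one, and reuse that image's known prompt against the user's fine-tuned model. The Lexica response shape and the Astria endpoint shown here are assumptions for illustration, not confirmed API details; consult each service's documentation.

    # Sketch of "pick an image instead of writing a prompt".
    # The Lexica response fields and the Astria endpoint/payload below
    # are assumptions for illustration only.
    import requests

    def search_reference_images(query: str) -> list:
        # Lexica indexes AI-generated images together with the prompts
        # that produced them, so every result carries a reusable prompt.
        resp = requests.get("https://lexica.art/api/v1/search", params={"q": query})
        resp.raise_for_status()
        return [(img["src"], img["prompt"]) for img in resp.json()["images"]]

    def generate_avatar(chosen_prompt: str, tune_id: str, api_key: str) -> dict:
        # Reuse the selected image's known prompt against the user's
        # fine-tuned model; the user never writes a prompt themselves.
        # (Hypothetical endpoint and payload shape.)
        resp = requests.post(
            f"https://api.astria.ai/tunes/{tune_id}/prompts",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"prompt": {"text": chosen_prompt}},
        )
        resp.raise_for_status()
        return resp.json()

The key design point is that the prompt travels with the image the user clicks, so prompt engineering becomes a browsing problem rather than a writing problem.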

Show HN: Anti-Cluely – Detect virtual devices and cheating tools on exam systems https://ift.tt/onuTQWR

Show HN: Anti-Cluely – Detect virtual devices and cheating tools on exam systems Anti-Cluely is a lightweight tool designed to detect common...