Wednesday, December 7, 2022

Show HN: I designed a ChatGPT prompt evaluator to ruin your fun;) https://ift.tt/sObmS17

Show HN: I designed a ChatGPT prompt evaluator to ruin your fun;) Today I designed a method to prevent users from jailbreaking ChatGPT (for instance, users have generated instructions to produce weapons or illegal drugs, commit a burglary, kill oneself, take over the world as an evil superintelligence, or create a virtual machine which they then can use). The OpenAI team appears to be countering these primarily using prompt engineering or fine-tuning on the ChatGPT model. The idea is to use a second and fully separate, fine-tuned LLM to evaluate prompts before sending them to ChatGPT. You can test this by inserting your successful ChatGPT jailbreaks. Break it for me if you dare! I look forward to seeing your results! https://ift.tt/Fsi6Qn4 December 6, 2022 at 11:16PM

No comments:

Post a Comment

Show HN: Yet Another Memory System for LLM's https://ift.tt/0oZIwAv

Show HN: Yet Another Memory System for LLM's Built this for my LLM workflows - needed searchable, persistent memory that wouldn't bl...