• 0 Posts
  • 35 Comments
Joined 1 year ago
cake
Cake day: June 16th, 2023

help-circle
  • Eh, that’s not quite true. There is a general alignment tax, meaning aligning the LLM during RLHF lobotomizes it some, but we’re talking about usecase specific bots, e.g. for customer support for specific properties/brands/websites. In those cases, locking them down to specific conversations and topics still gives them a lot of leeway, and their understanding of what the user wants and the ways it can respond are still very good.


  • Depends on the model/provider. If you’re running this in Azure you can use their content filtering which includes jailbreak and prompt exfiltration protection. Otherwise you can strap some heuristics in front or utilize a smaller specialized model that looks at the incoming prompts.

    With stronger models like GPT4 that will adhere to every instruction of the system prompt you can harden it pretty well with instructions alone, GPT3.5 not so much.











  • I mean, we’re talking curl here, I don’t think a lot of suffering of consequences is happening. And man pages are often also not a great resource, throw everything at you, often don’t contain examples. If I’m building an app that integrates curl or libcurl, oh yes I’m reading the doc. If I just need curl to do something quickly, the LLM output is perfectly fine.


  • I recently watched the 3.5-hour workshop Mastering the curl command line by Daniel Stenberg, the author of curl

    I commend the author for watching and subsequently summarizing this into a blog article, but in the age of LLMs like ChatGPT I cannot be arsed anymore to memorize any of the arcane command line arguments of curl, ffmpeg and the like.