- cross-posted to:
- programming@kbin.social
- ai_infosec@infosec.pub
From https://twitter.com/llm_sec/status/1667573374426701824
- People ask LLMs to write code
- LLMs recommend imports that don’t actually exist
- Attackers work out what those hallucinated names are, then create and upload packages under them with malicious payloads
- People using LLM-written code then unwittingly install the malware themselves (see the sketch below)
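One rough mitigation is to check whether the imports an LLM suggests actually resolve on PyPI before installing anything. A minimal sketch, assuming Python and the public PyPI JSON endpoint; the package names below are made up for illustration, and existence on PyPI alone does not prove a package is safe (an attacker may already have squatted the name):

```python
import urllib.request
import urllib.error

def exists_on_pypi(package: str) -> bool:
    """Return True if the package name resolves on the PyPI JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# Imports pulled from an LLM-generated snippet (hypothetical names)
suggested = ["requests", "fastjson_parse_utils"]

for name in suggested:
    if not exists_on_pypi(name):
        print(f"'{name}' is not on PyPI -- the LLM likely hallucinated it.")
        print("If it shows up later, be suspicious: someone may have registered the name.")
    else:
        print(f"'{name}' exists -- still review it before adding it to your project.")
```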
Definitely not. LLMs just make things up that sound right; for anything other than the simplest code, you pretty much always have to fix the output.
LLMs are only useful as rubber ducks for figuring out what might be wrong with your code, and when writing code from scratch it's honestly easier for me to read the documentation or Stack Overflow instead.