With as much as AI like chatGPT are used to aid in programming is anyone working on developing AI as an iterative attack surface to probe networks?
There are some obvious ethical concerns, but I can see a future with AI red and blue team actors playing a substantial role in network security.
You must log in or # to comment.
It’s already being used for security audits, so it is definitely possible to use it that same way in a malicious manner.
Also, there are companies like Lakera (creators of the Gandalf prompt injection challenge) offering products to sanitize and secure LLMs, so there is a market for it, because the risks are definitely there.