Rather than introducing wholly unprecedented threats into society, large language models (LLMs) highlight and stress-test existing vulnerabilities in how organizations govern data, manage access, and configure systems. With care and responsibility, we can respond to what they reveal by engineering solutions that make technology usage more secure and ethical overall.
Specific ways responsible LLM adoption can improve security include:
- Red team penetration testing. Use LLMs to model criminal hacking and fraud to harden defenses.
- Automated vulnerability scanning. Leverage LLMs' conversational abilities to identify flaws in public-facing chat interfaces.
- Anomaly detection. Monitor corporate system logs with LLMs fine-tuned to flag unusual internal events as possible attacks.
- Safety analysis. Stress-test new features through automated conversational exploration of potential abuses.
- Product-security reviews. Use LLMs as team members when designing new products to probe attack possibilities in simulated conversations.
- Threat intelligence. Continuously train LLMs on emerging attack data to profile bad actors and model potential techniques.
- Forensic reconstruction. Assist investigations of past incidents by using LLMs to reconstruct likely criminal conversations and motives.
- Security policy analysis. Check that policies adequately address LLM-relevant risks revealed through conversational probing.
- Security training. Use LLM-generated attack scenarios and incidents to build staff members' defensive skills.
- Bug bounties. Expand the scope of bounty programs to include misuse cases identified through simulated LLM hacking.
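To make the anomaly-detection idea above concrete, here is a minimal Python sketch of the pipeline shape. In practice the scorer would call a fine-tuned LLM; the `score_log_line` function below is a stand-in assumption, stubbed with a trivial keyword heuristic purely to illustrate how flagged events would flow out of the monitor.

```python
# Sketch: routing system log lines through an anomaly scorer.
# score_log_line is a stand-in for a fine-tuned LLM call; the keyword
# heuristic here exists only to make the pipeline shape runnable.

SUSPICIOUS_MARKERS = ("failed login", "privilege escalation", "unknown host")

def score_log_line(line: str) -> float:
    """Stand-in for an LLM call returning an anomaly score in [0, 1]."""
    lowered = line.lower()
    hits = sum(marker in lowered for marker in SUSPICIOUS_MARKERS)
    return min(1.0, hits / 2)

def flag_anomalies(log_lines, threshold=0.4):
    """Return the lines whose anomaly score crosses the threshold."""
    return [line for line in log_lines if score_log_line(line) >= threshold]

logs = [
    "2024-05-01 09:12 user alice logged in from 10.0.0.5",
    "2024-05-01 09:14 failed login for root from unknown host 198.51.100.7",
    "2024-05-01 09:15 nightly backup completed",
]
flagged = flag_anomalies(logs)
```

The threshold and markers are illustrative; the point is that a scoring model, however implemented, slots cleanly into an existing log-monitoring loop.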
With careful design and effective oversight, LLMs can be an ally rather than a liability in securing organizations against modern technological threats. Their partially open nature invites probing for weaknesses in a controlled setting.
LLMs present a further opportunity to improve an organization's information security capability. The practical application of LLMs to business challenges requires creating sophisticated, multistage, software-driven data pipelines. As these pipelines become prevalent, they present an opportunity to design in more effective security protocols from the start.
Various security postures can be applied at different points in the pipeline. For instance, a permissive security posture that allows an LLM to generate the best possible response can be followed by a more restrictive security filter that automatically checks the output for potential data leakage.
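That two-stage posture can be sketched in a few lines of Python. The `generate_response` function below is a hypothetical stand-in for a permissive LLM call; the second stage is a restrictive regex gateway that redacts likely data leakage (email addresses and card-like numbers are used as examples) before the response leaves the pipeline.

```python
import re

# Restrictive output gateway: patterns for data that should not leave
# the pipeline. Emails and 16-digit card-like numbers serve as examples.
LEAK_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){16}\b"),
}

def generate_response(prompt: str) -> str:
    """Hypothetical stand-in for a permissive LLM generation call."""
    return "Contact jane.doe@example.com, card 4111 1111 1111 1111."

def leakage_filter(text: str) -> str:
    """Restrictive second stage: redact anything matching a leak pattern."""
    for label, pattern in LEAK_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def pipeline(prompt: str) -> str:
    # Stage 1: permissive generation. Stage 2: restrictive security filter.
    return leakage_filter(generate_response(prompt))

safe = pipeline("Summarize the customer record.")
```

The design point is the separation of concerns: the generation stage is free to produce the best possible answer, while an independent, auditable gateway enforces the organization's data-leakage policy on every output.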
If we accept that LLM security problems are new manifestations of existing information security challenges (and that human behavior is the biggest cause of security breaches), then automated multistage processes with carefully constructed security gateways can provide a powerful new tool in the toolkit.
[For more from the authors on this topic, see: “LLM Security Concerns Shine a Light on Existing Data Vulnerabilities.”]