
Top 7 GenAI security practices

Below, you can find a brief introduction to seven of the main GenAI security practices for any enterprise organization.

1. Remove shadow AI

To ensure robust AI security, the first step is to achieve full visibility into what you are defending. This means eliminating any unauthorized AI usage within your organization.

Prerequisite: Everybody in your organization should know what they can and cannot do with GenAI. Make sure to add simple-to-follow GenAI security practices to the organization’s general security guide.

Gaining visibility over all GenAI in the organization requires you to:

  • Create an AI-BOM, i.e., a bill of materials cataloging all your AI-related assets, ideally one capable of automatically detecting new AI usage (see the discovery sketch after this list).
  • Set up network controls that permit access only to whitelisted GenAI providers and software, or that block access to blacklisted ones.
  • Foster education and awareness aimed at promoting a security-first mindset.
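As a minimal sketch of what automated detection for an AI-BOM can look like, the script below scans a repository's dependency manifests for packages that indicate GenAI usage. The package list and the requirements*.txt file pattern are illustrative assumptions; a real AI-BOM would also cover model files, SaaS integrations, and infrastructure.

```python
from pathlib import Path

# Illustrative (non-exhaustive) set of packages that indicate GenAI usage.
GENAI_PACKAGES = {"openai", "anthropic", "transformers", "langchain", "llama-cpp-python"}

def scan_manifests(repo_root: str) -> dict:
    """Map each dependency manifest to the GenAI packages it declares."""
    findings = {}
    for manifest in Path(repo_root).rglob("requirements*.txt"):
        hits = set()
        for line in manifest.read_text(errors="ignore").splitlines():
            # Strip version pins to get the bare package name.
            name = line.split("==")[0].split(">=")[0].strip().lower()
            if name in GENAI_PACKAGES:
                hits.add(name)
        if hits:
            findings[str(manifest)] = sorted(hits)
    return findings

if __name__ == "__main__":
    for path, packages in scan_manifests(".").items():
        print(f"{path}: {packages}")
```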

2. Protect your data

Safeguarding sensitive information is crucial to maintaining organizational security and regulatory compliance. No sensitive information should be used in GenAI web and SaaS applications unless it is secured and approved, and no training data should be exposed or accessible through the GenAI model or application.

Prerequisite: Your team should agree with business and technical stakeholders on a definition of what constitutes sensitive information in your organization, possibly with different levels of criticality.

To protect your training and inference data:

  • Discover and classify your data according to its security criticality.
  • Use encryption for data at rest and in transit.
  • Perform data sanitization, such as removing or masking personally identifiable information (PII) in training data sets (see the masking sketch after this list).
  • Configure data loss prevention (DLP) policies to prevent sensitive data from being used in end-user applications.
  • Audit who has access to which data to understand effective access.
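As an illustration of sanitizing training data, here is a minimal masking sketch built on simplified regex patterns. The patterns are assumptions for demonstration only; production pipelines typically rely on dedicated PII detection services with much broader coverage.

```python
import re

# Simplified illustrative patterns; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with a typed placeholder before training."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Contact jane.doe@example.com or 555-867-5309."))
# -> "Contact [EMAIL] or [PHONE]."
```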

3. Secure access to GenAI models

Unauthorized agents that gain access to GenAI models can deploy a variety of techniques to modify and misuse them, such as introducing biases or harmful deceptions.

Prerequisite: A well-defined IAM configuration is a must-have for all assets associated with GenAI deployments and applications, with role-based access control (RBAC) recommended.

Whether the models are internal or external, you can add controls to secure them that allow you to:

  • Set up authentication and rate limiting for API usage (see the sketch after this list).
  • Restrict access to model weights.
  • Allow only required users to kickstart model training and deployment pipelines.
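The sketch below shows one common way to combine authentication with rate limiting in front of a model API: a per-key token bucket. The rate and capacity values are illustrative assumptions; in production this logic usually lives in an API gateway.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    """Per-client bucket: allow `rate` requests/sec, with bursts up to `capacity`."""
    rate: float = 5.0        # sustained requests per second (illustrative)
    capacity: float = 10.0   # maximum burst size (illustrative)
    tokens: float = 10.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict = {}

def authorize(api_key: str, valid_keys: set) -> bool:
    """Authenticate the caller, then apply its rate limit before invoking the model."""
    if api_key not in valid_keys:
        return False  # reject unauthenticated callers outright
    return buckets.setdefault(api_key, TokenBucket()).allow()
```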

4. Use LLM built-in guardrails

In keeping with a multi-layer, security-first mindset, it is best to introduce security at the source by using the built-in guardrails of your GenAI models as security controls.

Prerequisite: Thoroughly review the documentation of your GenAI providers and models to confirm they support the guardrails you need.

Different GenAI providers and solutions offer different built-in security controls which may include:

  • Content filtering to automatically remove or flag inappropriate or harmful content.
  • Abuse detection mechanisms to uncover and mitigate general model misuse.
  • Temperature settings to tune output randomness toward your desired level of predictability (see the sketch below).

By setting up security controls against LLM misuse at the source, you minimize risk for both your organization and your application users.
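As a sketch of wiring two such guardrails together, the snippet below assumes the OpenAI Python SDK (v1.x): it runs a prompt through the provider's moderation endpoint before completion and pins a low temperature for predictable output. The model names are examples, and other providers expose equivalent controls under different names.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def guarded_completion(prompt: str) -> str:
    # Abuse/content check first: refuse input the provider's moderation model flags.
    moderation = client.moderations.create(
        model="omni-moderation-latest", input=prompt
    )
    if moderation.results[0].flagged:
        return "Request declined by content policy."

    # A low temperature trades creativity for predictability in sensitive flows.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return response.choices[0].message.content
```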

5. Detect and remove AI risks and attack paths

Attack path analysis (APA) preemptively identifies end-to-end attack paths in your AI systems: complex chains of exposures and lateral movement opportunities that an attacker could string together.

Prerequisite: End-to-end risk monitoring of your AI infrastructure with clear lineage and full context.

Your attack path analysis solution should:

  • Continuously scan for and identify vulnerabilities in AI models.
  • Verify that all systems and components have the most recent patches to close known vulnerabilities.
  • Scan for malicious models (see the scanning sketch after this list).
  • Assess for AI misconfigurations, effective permissions, network exposures, exposed secrets, and sensitive data to detect attack paths.
  • Regularly audit access controls to guarantee only authorized parties are granted access to critical systems.
  • Provide context around AI risks so that you can proactively remove attack paths to models via remediation guidance.
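One concrete technique for scanning pickle-serialized model files without loading them is to inspect their opcodes, since a handful of opcodes can trigger arbitrary code execution at load time. The sketch below uses Python's standard pickletools module; note that these opcodes also appear in some legitimate pickles, so a hit is a triage signal, not a verdict.

```python
import pickletools

# Opcodes that can invoke arbitrary callables when a pickle is loaded.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST"}

def scan_pickle(path: str) -> list:
    """Report code-execution opcodes in a pickled model file without loading it."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

# Usage: review anything reported before allowing the model into your registry.
# for finding in scan_pickle("model.pkl"):
#     print(finding)
```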

6. Monitor against anomalies

Continuous monitoring can help detect and address unusual activities in your AI systems promptly.

Prerequisite: Put in place a thorough monitoring solution that provides extended detection of suspicious activity in GenAI applications.

Your monitoring solution should:

  • Apply anomaly detection and behavior analytics to both model inputs and outputs.
  • Detect suspicious behavior in AI pipelines.
  • Keep track of unexpected spikes in latency and other system metrics (see the sketch after this list).
  • Support regular security audits and assessments.
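As a minimal sketch of latency anomaly detection, the class below flags samples that exceed a rolling z-score threshold. The window size and threshold are illustrative assumptions; a production system would feed such signals into your SIEM or observability stack.

```python
from collections import deque
from statistics import mean, stdev

class LatencyMonitor:
    """Flag latency samples that deviate sharply from a rolling baseline."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold  # z-score above which a sample is anomalous

    def observe(self, latency_ms: float) -> bool:
        anomalous = False
        if len(self.samples) >= 30:  # establish a baseline before alerting
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and (latency_ms - mu) / sigma > self.threshold:
                anomalous = True
        self.samples.append(latency_ms)
        return anomalous
```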

7. Set up incident response

Preparing a swift incident response plan is critical to minimizing the blast radius of AI-related security incidents.

Prerequisite: An incident response team should be on call for critical AI systems, equipped with security tools designed to make AI threats easy to understand.

Incident response for GenAI systems can involve both automated and manual security controls, which include:

  • Processes for isolation, backup, traffic control, and rollback (see the sketch after this list).
  • Integration with SecOps tools.
  • Availability of an AI-focused incident response plan.
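The skeleton below sketches how such automated first-response steps might be sequenced. The helper functions are hypothetical stand-ins for your platform's actual controls (load balancer APIs, backup tooling, model registry, SecOps integrations).

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-incident-response")

# Hypothetical stand-ins: replace each with your platform's real control.

def isolate_endpoint(endpoint_id: str) -> None:
    log.info("Isolating %s: removing it from the traffic pool", endpoint_id)

def snapshot_state(endpoint_id: str) -> None:
    log.info("Snapshotting %s for forensic analysis", endpoint_id)

def rollback_model(endpoint_id: str, version: str) -> None:
    log.info("Rolling %s back to known-good version %s", endpoint_id, version)

def respond_to_model_incident(endpoint_id: str, last_good_version: str) -> None:
    """Automated first-response sequence: isolate, preserve evidence, roll back."""
    isolate_endpoint(endpoint_id)
    snapshot_state(endpoint_id)
    rollback_model(endpoint_id, last_good_version)
    log.info("Handing off %s to the SecOps incident channel", endpoint_id)

respond_to_model_incident("genai-chat-prod", "v1.4.2")
```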