Understanding Risks in Your Applications
Broken Access Control occurs when users can act outside of their intended permissions, allowing unauthorized access to sensitive data or functionality.
Cryptographic Failures refer to weaknesses in cryptographic implementations leading to sensitive data exposure or system compromise.
Injection attacks occur when untrusted data is sent to an interpreter as part of a command or query, allowing attackers to execute arbitrary commands.
Insecure Design focuses on risks related to design flaws, highlighting the need for secure design patterns and threat modeling.
Security Misconfiguration occurs when security settings are not defined, implemented, or maintained, leading to exposure of sensitive data.
Using components with known vulnerabilities can lead to exploitation; keeping software up to date is essential for security.
Failures in identification and authentication can allow unauthorized users to gain access, compromising security.
Software and Data Integrity Failures occur when assumptions about software updates or data integrity are made without verification, leading to potential exploitation.
Insufficient logging and monitoring can impede incident detection and response, leaving systems vulnerable to attacks.
Server-Side Request Forgery (SSRF) allows attackers to send unauthorized requests from the server, potentially accessing internal resources.
APIs tend to expose endpoints that handle object identifiers, creating a wide attack surface of Object Level Access Control issues.
Authentication mechanisms are often implemented incorrectly, allowing attackers to compromise authentication tokens or exploit implementation flaws to assume other users' identities.
This category focuses on the lack of or improper authorization validation at the object property level, leading to information exposure or manipulation by unauthorized parties.
Successful attacks can lead to Denial of Service or increased operational costs due to unrestricted consumption of resources required to satisfy API requests.
Complex access control policies can lead to authorization flaws, allowing attackers to access other users’ resources or administrative functions.
APIs vulnerable to this risk expose business flows without compensating for excessive automated use, potentially harming the business.
SSRF flaws occur when an API fetches a remote resource without validating the user-supplied URI, allowing attackers to send crafted requests to unexpected destinations.
Complex configurations can lead to security oversights, opening the door for various types of attacks if not properly managed.
A proper inventory of hosts and deployed API versions is crucial to mitigate issues like deprecated API versions and exposed debug endpoints.
Developers often trust data from third-party APIs more than user input, leading to weaker security standards and making APIs vulnerable to attacks.
A Prompt Injection Vulnerability occurs when user prompts alter the intended behavior of the model, potentially leading to unintended actions or outputs.
Sensitive information can affect both the LLM and its application, leading to unauthorized access to private data or operational secrets.
LLM supply chains are susceptible to various vulnerabilities, which can compromise the integrity and security of the models and their outputs.
Data poisoning occurs when pre-training, fine-tuning, or embedding data is intentionally corrupted to manipulate model behavior.
Improper Output Handling refers specifically to insufficient validation, sanitization, and handling of outputs generated by the model.
An LLM-based system is often granted a degree of agency that can lead to unintended consequences if not properly controlled.
The system prompt leakage vulnerability in LLMs refers to the unintentional exposure of internal prompts that can be exploited by attackers.
Vectors and embeddings vulnerabilities present significant security risks in systems relying on LLMs for data representation and processing.
Misinformation from LLMs poses a core vulnerability for applications relying on the accuracy of generated information.
Unbounded Consumption refers to the process where a Large Language Model consumes resources without proper limits, potentially leading to denial of service.