CWE 918 - SSRF
About CWE ID 918
Server-Side Request Forgery (SSRF)
This vulnerability occurs when an attacker can manipulate or influence the requests a server makes to other internal or external resources, causing the server to issue requests of the attacker's choosing on its own network position and credentials.
Impact
Data Exposure
Information Leakage
Network Scanning
Service Disruption
Proxying Attacks
Security Bypass
Remote Code Execution
Example with Code Explanation
Let us consider an example and understand CWE-918 in the context of vulnerable code and mitigated code.
Python
Vulnerable Code
import requests

def fetch_url(url):
    response = requests.get(url)
    return response.text

# This function takes a URL as input from an untrusted source
# and fetches the content from that URL without proper validation.
user_input = input("Enter a URL to fetch: ")
content = fetch_url(user_input)
print("Content from the URL:")
print(content)
In the above example, the code defines a fetch_url function that takes a URL as input and uses the requests library to make an HTTP GET request to that URL. The user is prompted to enter a URL, and the code fetches and displays the content from that URL.
This code is vulnerable to SSRF because it does not properly validate or sanitize the user input. An attacker could provide a malicious URL that targets internal resources or external systems. For example, an attacker could enter a URL like http://internal-server/private-data to access sensitive internal resources that should not be publicly accessible.
Some of the ways the vulnerable code can be mitigated are:
Input Validation:
URL Whitelisting: Maintain a whitelist of allowed domains or resources that the application can access. Validate that user-supplied URLs match the whitelist.
URL Validation: Use a URL parsing library or regular expressions to validate user-supplied URLs. Ensure they follow the expected format.
IP Address Validation: If IP addresses are used, validate them to ensure they are safe and do not point to sensitive internal systems.
Access Controls:
Least Privilege: Restrict the permissions and access level of the server making the requests. Ensure it only has access to resources it needs and nothing more.
Network Segmentation: Isolate the server from sensitive internal networks and resources whenever possible to limit the impact of an SSRF attack.
Content-Type Checks:
Verify that the content type of the response from the requested URL matches the expected type, e.g., if you expect an image, check that it is indeed an image (see the sketch after this list).
Request Sanitization:
Sanitize user-supplied input before making requests. Remove any potentially harmful characters or sequences.
Use Safe Libraries:
If possible, use libraries or functions that have built-in protections against SSRF, or those that allow you to define a white list of allowed resources.
Patch Management:
Keep all libraries, frameworks, and software up to date to ensure that known SSRF vulnerabilities in third-party components are patched.
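As a minimal sketch of the content-type check mentioned above (the expected type and the helper name are illustrative assumptions, not part of the original example), the response header can be inspected before the body is used:

import requests

EXPECTED_CONTENT_TYPE = "image/"  # assumption: the application expects an image back

def fetch_image(url):
    # Stream the response so the body is not downloaded before the header check.
    response = requests.get(url, timeout=5, stream=True)
    content_type = response.headers.get("Content-Type", "")
    # Reject responses whose declared type does not match what we expect.
    if not content_type.startswith(EXPECTED_CONTENT_TYPE):
        raise ValueError(f"Unexpected content type: {content_type}")
    return response.content

Note that this complements, rather than replaces, the URL and destination checks shown in the mitigated code below.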
Mitigated Code
import requests
import validators
from urllib.parse import urlparse
import tldextract  # Install using: pip install tldextract
import ipaddress
import dns.resolver

# Define a whitelist of allowed domains or resources
ALLOWED_DOMAINS = ["example.com", "public-api.com"]

# Define a whitelist of allowed URL schemes
ALLOWED_SCHEMES = ["http", "https"]

def is_valid_url(url):
    # Validate the URL format using a third-party library
    return validators.url(url)

def is_domain_allowed(url):
    # Extract the registered domain and suffix using tldextract
    extracted = tldextract.extract(url)
    domain = extracted.domain
    tld = extracted.suffix
    # Check if the domain and TLD are in the whitelist
    return f"{domain}.{tld}" in ALLOWED_DOMAINS

def is_ip_allowed(url):
    # Resolve the hostname to an IP address using a DNS resolver library
    parsed_url = urlparse(url)
    hostname = parsed_url.hostname
    answer = dns.resolver.resolve(hostname, "A")
    ip = answer[0].to_text()
    # Check if the IP address is private or reserved
    return not ipaddress.ip_address(ip).is_private and not ipaddress.ip_address(ip).is_reserved

def is_scheme_allowed(url):
    # Parse the URL using a built-in library
    parsed_url = urlparse(url)
    # Check if the scheme is in the allowed schemes list
    return parsed_url.scheme in ALLOWED_SCHEMES

def fetch_url(url):
    if not is_valid_url(url):
        raise ValueError("Invalid URL format")
    if not is_domain_allowed(url):
        raise ValueError("Access to this domain is not allowed")
    if not is_ip_allowed(url):
        raise ValueError("Access to this IP address is not allowed")
    if not is_scheme_allowed(url):
        raise ValueError("Unsupported scheme")
    # Set a timeout and verify SSL certificates (verification applies to https URLs)
    response = requests.get(url, timeout=5, verify=True)
    if response.status_code != 200:
        raise Exception("Failed to fetch URL")
    return response.text

try:
    user_input = input("Enter a URL to fetch: ")
    content = fetch_url(user_input)
    print("Content from the URL:")
    print(content)
except Exception as e:
    print(f"Error: {str(e)}")
The Mitigated code does the following:
URL Validation (is_valid_url): The code uses the validators.url function to validate the URL format. This helps prevent basic URL manipulation attacks by ensuring that the input adheres to a valid URL format.
Domain Whitelisting (is_domain_allowed): It extracts the registered domain and suffix using tldextract and checks if the combination is in the ALLOWED_DOMAINS whitelist. This whitelist approach helps prevent access to unauthorized domains, which is a key defense against SSRF attacks.
IP Address Validation (is_ip_allowed): The code also resolves the hostname to an IP address using a DNS resolver (dns.resolver) and checks whether the resulting IP address is private or reserved using the ipaddress module. This is a crucial security measure to prevent SSRF via IP address, ensuring that only public, non-reserved IP addresses are allowed.
Scheme Validation (is_scheme_allowed): It checks if the URL scheme is in the ALLOWED_SCHEMES whitelist, so only http and https URLs are accepted. This prevents unsupported schemes from being used, reducing the attack surface.
SSL Certificate Verification (requests.get): The code verifies SSL certificates when making an HTTPS request, which is a good practice to prevent man-in-the-middle attacks.
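As a quick usage sketch (the example URLs below are assumptions chosen to illustrate the checks), the mitigated fetch_url rejects destinations that fail any of the validations:

# Hypothetical inputs to exercise the checks above.
for candidate in [
    "https://example.com/data",           # passes every check, assuming the domain resolves to a public IP
    "https://not-in-whitelist.org/page",  # rejected: domain is not in ALLOWED_DOMAINS
    "file:///etc/passwd",                 # rejected: scheme/format is not an allowed http(s) URL
]:
    try:
        fetch_url(candidate)
        print(f"Fetched: {candidate}")
    except Exception as e:
        print(f"Blocked {candidate}: {e}")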
Java
Vulnerable Code
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class VulnerableSSRFExample {
    public static void main(String[] args) {
        try {
            BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
            System.out.print("Enter a URL to fetch: ");
            String userInput = reader.readLine();

            // No validation or sanitization is performed on the user input.
            // An attacker can input a malicious URL.
            URL url = new URL(userInput);
            URLConnection connection = url.openConnection();

            BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                System.out.println(inputLine);
            }
            in.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The Vulnerable code does the following:
The program takes user input as a URL without any validation or sanitization.
It directly creates a URL object and establishes a connection to the provided URL.
The content from the URL is then read and printed to the console.
This code is susceptible to CWE-918 because it doesn't perform any validation or whitelisting of the input URL. An attacker can easily manipulate the URL to access internal resources or potentially launch SSRF attacks by pointing the URL to sensitive internal services.
Some of the ways the vulnerable code can be mitigated are:
URL Validation:
Always validate user-provided URLs using a URL validation library or regular expressions to ensure they conform to expected URL formats (e.g., https://example.com, not file:///etc/passwd).
Use the java.net.URL class to parse and validate URLs before making requests. This class provides basic URL validation.
Domain Whitelisting:
Maintain a whitelist of allowed domains or IP addresses that the application is allowed to access. Compare the parsed URL's host against this whitelist.
Implement a domain extraction mechanism to validate the domain against the whitelist, considering variations like subdomains (see the sketch after this list).
IP Address Validation:
If your application needs to work with IP addresses, validate them to ensure they are public and non-reserved. The InetAddress class, together with a CIDR utility library, can help with this validation.
Consider using a dedicated library or function to resolve DNS entries and verify that the resolved IP addresses are not private or reserved.
URL Scheme Validation:
Verify that the URL scheme is one of the expected and safe schemes (e.g., http or https) before making the request.
Request Wrapper:
Implement a request wrapper that restricts the types of URLs that can be accessed. The wrapper should enforce the above validation rules.
Use Libraries and Frameworks:
Consider using security libraries and frameworks that provide URL validation and allow-listing, such as the UrlValidator class from Apache Commons Validator used in the mitigated code below.
Least Privilege Principle:
Ensure that the application's user or service account has the least privilege necessary to perform its tasks. Avoid running the application as a superuser or with excessive privileges.
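A minimal, language-neutral sketch of subdomain-aware host allowlisting, shown in Python for brevity (the allowed parent domain and helper name are hypothetical):

from urllib.parse import urlparse

ALLOWED_PARENT_DOMAINS = {"example.com"}  # hypothetical allowlist

def is_host_allowed(url):
    host = urlparse(url).hostname
    if host is None:
        return False
    host = host.lower().rstrip(".")
    for allowed in ALLOWED_PARENT_DOMAINS:
        # Accept the domain itself or any of its subdomains; matching on the
        # leading dot prevents bypasses such as "evil-example.com".
        if host == allowed or host.endswith("." + allowed):
            return True
    return False

With this check, api.example.com is accepted while evil-example.com is rejected.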
Mitigated Code
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URI;
import java.net.URL;
import org.apache.commons.validator.routines.UrlValidator;

public class MitigatedSSRFExample {
    public static void main(String[] args) {
        try {
            BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
            System.out.print("Enter a URL to fetch: ");
            String userInput = reader.readLine();

            // Validate and sanitize the user input using UrlValidator
            String[] schemes = {"http", "https"}; // Specify the allowed schemes
            UrlValidator urlValidator = new UrlValidator(schemes); // Create a UrlValidator instance

            if (urlValidator.isValid(userInput)) { // Check if the user input is a valid URL
                URI uri = new URI(userInput); // Parse and normalize the URL using the URI class
                String host = uri.getHost(); // Get the host name from the URI
                int port = uri.getPort(); // Get the port number from the URI (-1 if not specified)
                String path = uri.getPath(); // Get the path from the URI

                // Treat a missing port as the scheme's default port (80 for http, 443 for https)
                if (port == -1) {
                    port = "https".equals(uri.getScheme()) ? 443 : 80;
                }

                // Enforce the URL destination with a positive allow list
                // For example, only allow requests to www.example.com on port 80 or 443
                if (host != null && host.equals("www.example.com") && (port == 80 || port == 443)) {
                    URL url = uri.toURL(); // Convert the URI to a URL
                    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
                    connection.setInstanceFollowRedirects(false); // Disable HTTP redirections

                    BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
                    String inputLine;
                    while ((inputLine = in.readLine()) != null) {
                        System.out.println(inputLine);
                    }
                    in.close();
                } else {
                    System.out.println("URL destination is not allowed");
                }
            } else {
                System.out.println("Invalid URL");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
The Mitigated code does the following:
URL Validation (UrlValidator): The code uses the UrlValidator class from the Apache Commons Validator library to validate and sanitize the user input URL. This helps ensure that the input URL adheres to a valid format with allowed schemes (http or https).
URI Parsing and Normalization: After validating the URL with UrlValidator, the code parses and normalizes it using the URI class. This is a good practice to ensure that the URL is well-formed and to handle any encoding or special characters properly.
Host and Port Verification: The code extracts the host name and port number from the URI and checks if they match the allowed destination (www.example.com) and ports (80 or 443). This enforces a positive allow list for the URL destination. If the host and port don't match the allowed values, the code rejects the request.
HTTP Redirections Disabled: The code sets setInstanceFollowRedirects(false) to disable HTTP redirections. This is important because SSRF attacks can sometimes be carried out by tricking the server into making requests to internal resources through redirections. By disabling redirections, the code prevents this type of abuse.
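The same redirect restriction can be applied to the Python example as well; here is a minimal sketch using the requests library (the function name is illustrative, and the URL is assumed to have already passed the validation shown earlier):

import requests

def fetch_without_redirects(url):
    # Refuse to follow redirects so a whitelisted URL cannot bounce the server
    # to an internal or otherwise unauthorized destination.
    response = requests.get(url, timeout=5, allow_redirects=False)
    if response.is_redirect or response.is_permanent_redirect:
        raise ValueError("Redirects are not allowed for fetched URLs")
    return response.text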
Ruby
Vulnerable Code
require 'open-uri'

def fetch_url(url)
  content = open(url).read
  puts "Content from the URL:"
  puts content
rescue StandardError => e
  puts "Error: #{e.message}"
end

print "Enter a URL to fetch: "
user_input = gets.chomp
fetch_url(user_input)
The Vulnerable code does the following:
The program takes user input as a URL without any validation or sanitization.
It directly uses the open method to fetch the content of the URL.
The content from the URL is then read and printed to the console.
This code is susceptible to CWE-918 because it doesn't perform any validation or whitelisting of the input URL. An attacker can easily manipulate the URL to access internal resources or potentially launch SSRF attacks by pointing the URL to sensitive internal services.
Some of the ways the vulnerable code can be mitigated are:
URL Validation:
Use a URL validation library or regular expressions to ensure that user-provided URLs conform to expected URL formats (e.g., https://example.com, not file:///etc/passwd).
Check the scheme of the URL (e.g., http or https) to ensure that it is a valid scheme.
Domain Whitelisting:
Maintain a whitelist of allowed domains or IP addresses that the application is allowed to access. Compare the parsed URL's host against this whitelist.
Implement a domain extraction mechanism to validate the domain against the whitelist, considering variations like subdomains.
URL Parsing and Normalization:
Use a URL parsing library to parse and normalize user-provided URLs. This helps ensure proper handling of special characters and encoding.
Extract the hostname and port from the URL to verify that they match allowed values.
HTTP Redirection Handling:
Implement proper handling of HTTP redirections, for example via open-uri's redirect option or the allow_redirections option provided by the open_uri_redirections gem. Ensure that redirections cannot lead to unexpected or unauthorized destinations.
URI Manipulation Protection:
Protect against URL manipulation by verifying that the path, query parameters, and other components of the URL do not contain dangerous input or payloads.
Logging and Monitoring:
Implement logging to track user input and requests, especially when accessing external resources.
Set up monitoring to detect suspicious activity, such as unusual patterns of URL requests.
Input Sanitization:
Sanitize user input to remove or escape potentially dangerous characters. For example, use functions like CGI.escape to sanitize query parameters.
Mitigated Code
require 'uri'
require 'net/http'
require 'open-uri'
require 'open_uri_redirections' # Provides the allow_redirections option (gem install open_uri_redirections)

# Define a whitelist of allowed domains or resources
ALLOWED_DOMAINS = ['example.com', 'public-api.com']

# Define a whitelist of allowed ports
ALLOWED_PORTS = [80, 443]

# Define a whitelist of allowed schemes
ALLOWED_SCHEMES = ['http', 'https']

def is_valid_url?(url)
  uri = URI.parse(url)

  # Normalize the URL (e.g., lowercase the host) to reduce trivial bypass tricks
  uri = uri.normalize

  # Check if the scheme is in the whitelist
  return false unless ALLOWED_SCHEMES.include?(uri.scheme)

  # Check if the host (domain) is in the whitelist
  return false unless ALLOWED_DOMAINS.include?(uri.host)

  # Check if the port is in the whitelist
  return false unless ALLOWED_PORTS.include?(uri.port)

  # Return true if all checks passed
  true
rescue URI::InvalidURIError
  false
end

def fetch_url(url)
  if is_valid_url?(url)
    begin
      # Only follow safe redirections (HTTP to HTTPS), not arbitrary ones
      open(url, allow_redirections: :safe) do |response|
        content = response.read
        puts 'Content from the URL:'
        puts content
      end
    rescue OpenURI::HTTPError => e
      puts "HTTP Error: #{e.message}"
    rescue StandardError => e
      puts "Error: #{e.message}"
    end
  else
    puts 'Invalid URL format or access not allowed.'
  end
end

print 'Enter a URL to fetch: '
user_input = gets.chomp
fetch_url(user_input)
The Mitigated code does the following:
URL Validation (is_valid_url? function): The code validates the URL format by parsing it with URI.parse(url) and then normalizing it with uri.normalize, which reduces trivial bypass tricks such as unusual casing or encoded characters.
It checks if the URL scheme is in the ALLOWED_SCHEMES whitelist, ensuring that only 'http' and 'https' schemes are allowed.
It verifies if the host (domain) is in the ALLOWED_DOMAINS whitelist, restricting access to specific domains.
It checks if the port is in the ALLOWED_PORTS whitelist, limiting access to specific ports.
If any of these checks fail, the function returns false, and the URL is considered invalid.
HTTP Redirection Handling:
The code uses open-uri with the allow_redirections: :safe option (from the open_uri_redirections gem) so that only safe redirections are followed. This prevents SSRF attacks that abuse redirections to reach arbitrary destinations.
Error Handling:
The code includes error handling for HTTP errors (e.g., 404 Not Found) and general exceptions, which ensures that it gracefully handles unexpected issues without exposing sensitive information.
Whitelists:
The code uses whitelists for allowed domains, ports, and schemes. This is an effective security measure to restrict requests to trusted resources and prevent unauthorized access.
Mitigation
Some common mitigation techniques include:
Input Validation and Sanitization:
Validate and sanitize
all user-provided input, especially URLs, before using them in any HTTP request. Ensure that the input adheres to a safe format.
URL Whitelisting:
Maintain a whitelist of allowed domains, IP addresses, and ports that your application is allowed to access. Only permit requests to resources on this whitelist.
URL Normalization:
Normalize
URLs to a consistent format to prevent attackers from using various tricks like IP addresses, subdomains, or encoded characters to bypass restrictions.
Scheme Validation:
Ensure that the URL scheme is limited to safe schemes like http and https. Reject URLs with unsupported or dangerous schemes.
Restrict External Access:
Design your network architecture to
limit external access
to only the necessary resources. Employ firewalls and network-level access controls.
HTTP Redirect Controls:
Disable or carefully control HTTP redirections
to prevent attackers from tricking the application into making requests to internal or unauthorized resources.
DNS Resolution Safeguards:
When resolving domain names to IP addresses, ensure that the IP addresses are public and not reserved (see the sketch at the end of this list). Implement DNS resolution controls to prevent access to internal resources via DNS.
Use Safe Libraries and Frameworks:
Utilize
security-focused libraries and frameworks
that have built-in protections against SSRF and other common web security issues.
Least Privilege Principle:
Ensure that the service or user account making the requests has
minimal privileges
required to perform its tasks. Avoid running with superuser or excessive permissions.
Logging and Monitoring:
Implement comprehensive logging to
track and monitor
all outbound requests, especially those involving user input. Set up alerting systems to detect and respond to suspicious activities.
Regular Security Audits and Testing:
Conduct regular
security audits, penetration testing, and code reviews
to identify and address SSRF vulnerabilities and other security issues.
Dependency Updates:
Keep all libraries, frameworks, and software components
up to date with security patches to reduce the risk of known vulnerabilities.
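A minimal sketch of the DNS resolution safeguard described above, using only the Python standard library (illustrative rather than a complete defense, and the function name is a hypothetical helper):

import ipaddress
import socket

def resolves_to_public_ips_only(hostname):
    # Resolve every address the name maps to, both IPv4 and IPv6.
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Reject private, loopback, link-local, and otherwise non-global addresses.
        if not ip.is_global:
            return False
    return True

For example, resolves_to_public_ips_only("localhost") returns False, because the name resolves to a loopback address.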
References
Server Side Request Forgery Prevention - OWASP Cheat Sheet Series
SSRF Cheat Sheet & Bypass Techniques
What is SSRF (Server-side request forgery)? Tutorial & Examples | Web Security Academy