Principle and Defense Scheme of SSRF Vulnerability

Recently, while reading up on security topics, I came across a vulnerability class that was new to me: SSRF. Its principle is not complicated, but since this is the first time I have encountered it, I want to record it briefly.

What is SSRF?

Server-Side Request Forgery (SSRF) is an attack in which an attacker who has not gained control of a server abuses a flaw in that server to send crafted requests to the intranet the server sits on. SSRF attacks usually target internal systems that cannot be reached directly from the external network.

Principle

SSRF vulnerabilities arise mainly because the server exposes an interface whose URL parameter specifies what content to fetch, and the URL supplied by the client is not filtered or validated.

Suppose a company's website is deployed on server A. So that users can reach it, server A has its own public IP address (or is exposed through a NAT gateway with a public IP and port). The company also has a server B that stores some important data. B sits on the intranet and has no public IP, so under normal circumstances it cannot be reached from the external network.

Some services on A's website need data from B, so A sends requests to B. Since A knows where B is on the intranet, it can reach B.

In effect, A acts as a reverse proxy for B, but this proxy should only issue the handful of requests that the website on A actually needs.

If the requests A can make are not strictly limited and their parameters can be pieced together arbitrarily, then A effectively opens a door through which attackers can walk straight to server B.

Example

Many web applications provide functionality for fetching data from other servers: given a URL, the application can retrieve images, download files, read file contents, and so on. The essence of SSRF is to use a flawed web application as a proxy to attack remote and local servers. The targets of SSRF attacks are generally internal systems that cannot be reached from the external network; an attacker can exploit an SSRF vulnerability to learn about those systems precisely because the request is initiated on the server side, which can reach internal hosts that are connected to it but isolated from the external network. Most SSRF vulnerabilities arise because the server offers this data-fetching functionality without filtering or restricting the target address.

SSRF Hazards

**Detect any port of an internal host**

This case is more extreme and rarely occurs in practice: A connects to B using the IP and port number taken from the request parameters, i.e. A decides which host and port to contact by parsing them out of the request.

However, once such an interface exists, an attacker can supply arbitrary IPs and ports and, from the differences in the responses, map out the layout of the intranet and which ports are open on each server.
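To make this concrete, here is a hypothetical endpoint of that kind, sketched in PHP (the parameter names and behavior are assumptions for illustration, not taken from a real system): it opens a TCP connection to whatever host and port the client supplies, and the differing success and error messages tell the attacker which intranet ports are open.

```php
<?php
// Hypothetical endpoint that connects to a client-supplied host and port.
// The distinct responses below are exactly what lets an attacker scan the intranet.
$host = $_GET['host'];           // e.g. 192.168.0.5 (attacker-controlled)
$port = (int) $_GET['port'];     // e.g. 22, 3306, 6379 ...

$conn = @fsockopen($host, $port, $errno, $errstr, 3); // 3-second timeout
if ($conn !== false) {
    echo 'connected';            // reveals the port is open
    fclose($conn);
} else {
    echo "error: $errstr";       // a different message suggests closed or filtered
}
```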

**Use SSRF to obtain sensitive intranet file information**

Suppose the server has a page ssrf.php whose job is to take a url parameter and display the content of that URL on the page.

If we visit http://127.0.0.1/ssrf.php?url=http://127.0.0.1/test.php, the page shows the content of test.php.

We can change the url parameter to an intranet address, which leaks information about the server's intranet, or change it to a file:// URL to read local files on the server.
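As a rough sketch, such a page could be as simple as the following (an assumption about what ssrf.php might contain, assuming PHP with allow_url_fopen enabled; it is not taken from any real application):

```php
<?php
// Hypothetical ssrf.php: fetch whatever URL the client passes and print it,
// with no validation at all (assumes allow_url_fopen is enabled).
$url = $_GET['url'];           // e.g. ?url=http://127.0.0.1/test.php
echo file_get_contents($url);  // will just as happily fetch intranet hosts or file:// paths
```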

Defense methods

  1. Filter the returned information. If the web application is meant to fetch a specific type of file, verify that the response actually matches that type before showing it to the user.

  2. Unify error messages so that users cannot infer the port status of remote servers from them.

  3. Limit the ports that may be requested, for example 80, 443, 8080, 8090.

  4. Disable uncommon protocols and allow only http and https requests. This prevents problems caused by file://, gopher://, ftp://, etc.

  5. Use a DNS cache or a host whitelist (a minimal sketch combining points 3, 4, and 5 follows this list).
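Here is a minimal PHP sketch of how points 3, 4, and 5 might be combined. The allowed hosts and ports are made-up examples, not a recommended policy; a real deployment would tailor them to the application.

```php
<?php
// Minimal defensive sketch: whitelist scheme, port, and host before fetching.
// The concrete values below are illustrative assumptions.
function is_safe_url(string $url): bool {
    $allowedSchemes = ['http', 'https'];
    $allowedPorts   = [80, 443, 8080, 8090];
    $allowedHosts   = ['img.example.com', 'files.example.com']; // hypothetical whitelist

    $parts = parse_url($url);
    if ($parts === false || empty($parts['scheme']) || empty($parts['host'])) {
        return false;
    }
    $scheme = strtolower($parts['scheme']);
    if (!in_array($scheme, $allowedSchemes, true)) {
        return false;   // blocks file://, gopher://, ftp://, ...
    }
    $port = $parts['port'] ?? ($scheme === 'https' ? 443 : 80);
    if (!in_array($port, $allowedPorts, true)) {
        return false;   // restrict which ports may be requested
    }
    if (!in_array(strtolower($parts['host']), $allowedHosts, true)) {
        return false;   // only whitelisted hosts may be fetched
    }
    return true;
}

$url = $_GET['url'] ?? '';
if (!is_safe_url($url)) {
    http_response_code(403);
    exit('request rejected');   // uniform error message, no detail leaked
}
echo file_get_contents($url);
```

Note that the rejection path returns a single uniform message, in line with point 2 above.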

Reference links:

https://zhuanlan.zhihu.com/p/91819069

https://www.jianshu.com/p/612c010e588e