
Denial of Service: A Survival Guide

From Anonymous-style SYN flooding to application-layer attacks, denial of service is a subject the general public has often confused with hacking. While your data might not be stolen, the impact on both sales and reputation can be tremendous, especially when the denial of service persists over a long period of time.

When it comes down to it, the amount of data that can be transferred from one point to another, and the processing power available, are always limited at some point. Overload either one and requests get delayed until they are eventually dropped, creating a denial of service. Furthermore, many bottlenecks can be exploited to slow down the whole system while sending far fewer packets, or far less data, than a normal request would require to have the same impact on server load. This is why the battle against denial of service always has been, and always will be, asymmetrical: the resources spent preventing it are much greater than those required by the attacker. Worse, as more systems and processes are added to reduce the risk of DoS, the attack surface expands. We will take a look at some of the measures that can be applied and the common pitfalls to avoid in the process.

There are multiple considerations when building a system that is going to be resistant to these types of attacks. First of all, the application type and its context must be taken into account, as they usually determine the following variables:

  • Typical peak usage
  • Acceptable response time
  • Acceptable down time

Some of the common bottlenecks that can be exploited are as follows:

  • Total bandwidth available (in bytes/s)
  • Disk space available
  • Memory (RAM) available
  • Processing power available
  • Similar resources for backend infrastructure (database, other systems)

In order to have a robust application, the key is to address each of these issues and limit the effect of each bottleneck as much as possible with a good architecture.

Before anything else, you may want to make sure that the server application you are using is not vulnerable to known DoS vulnerabilities. Those can easily be identified using a vulnerability scanner. One of the most common DoS attacks is Slowloris, which affects Apache. The attack consists of sending an HTTP request in fragments and waiting the maximum amount of time before sending the next chunk. By sending these packets at long intervals, it is possible to stall a single Apache thread indefinitely and prevent it from serving other clients. When done concurrently, it is possible to use up all the connections the server allows and thus prevent any legitimate client from connecting. In the case of Slowloris, the core of the issue lies in Apache's design not being asynchronous, and as such the problem has yet to be completely fixed. Patches as well as IDS/IPS rules do exist that mitigate the attack, either by banning IP addresses making suspiciously slow repeat requests or by lowering the timeout value. In most cases, updating your web server as well as your SSL/TLS libraries will fix known vulnerabilities.
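As an illustration of the timeout-lowering mitigation mentioned above, Apache's mod_reqtimeout module can cap how long a client may take to deliver its request. This is a sketch; the exact values are assumptions and should be tuned to your environment:

```
# Abort connections whose request headers have not fully arrived within
# 20 seconds (extended up to 40 seconds as long as data keeps flowing at
# 500 bytes/s or more), and apply a similar limit to the request body.
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
```

Aggressive values limit how long a Slowloris-style client can hold a worker, at the cost of dropping legitimate clients on very slow links.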

You will then want to take steps to move any resources you can to the near-unlimited resources of the cloud. Usually, you will want your static content to be hosted on a CDN such as Akamai (a Trustwave partner) in order to reduce the load on your servers. This often has the side effect of speeding up page load times, thanks to CDNs having servers all around the world. However, directly moving resources that are loaded by your HTML pages does introduce new security risks. For example, it may be possible to move your jQuery library to a CDN, but what would happen if that CDN were compromised? An attacker with control over the content served by the CDN could inject a malicious script into the library, with a result similar to a stored cross-site scripting attack. If resources are moved to the cloud, make sure to always check their integrity on systems which you control before using them.
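One straightforward way to perform that check on a system you control is to compute the digest of your local, trusted copy of the resource and compare it against what the CDN actually serves. A sketch using OpenSSL (the filename is a placeholder):

```shell
# Compute the base64-encoded SHA-384 digest of the local, trusted copy.
# This is the same digest format used by subresource integrity.
openssl dgst -sha384 -binary jquery.min.js | openssl base64 -A
```

The same command run against the file downloaded from the CDN should produce an identical digest; any difference means the served copy has been altered.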

For example, the new subresource integrity standard can be used to mitigate those attacks within the browser. By adding a hash to the HTML file, the browser can verify the integrity of the file before it is loaded. If the hashes do not match, the file is not loaded into the DOM. An example can be found below:
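A minimal sketch of a script tag using subresource integrity. The URL and digest below are illustrative placeholders, not real values; a real digest would be produced by hashing the exact file the tag references:

```html
<!-- The browser recomputes the SHA-384 digest of the fetched file and
     refuses to execute it if the digest does not match the integrity
     attribute. crossorigin is required for cross-origin integrity checks. -->
<script src="https://cdn.example.com/jquery.min.js"
        integrity="sha384-PLACEHOLDERdigestPLACEHOLDERdigestPLACEHOLDERdigestPLACEHOLDER"
        crossorigin="anonymous"></script>
```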