Verizon's Data Breach Investigations Report (DBIR) is the proverbial almanac of the cyber security field. This year's 119-page edition (you can read it here) is packed with rich insights and will easily consume a weekend if you are willing to go over the details with a fine-toothed comb.
With its coverage of 157,525 incidents and 3,950 breaches globally, this year’s report really shines with its data-driven visualisation. Another new addition is its cross-referencing to the CIS Critical Security Controls and the MITRE ATT&CK framework, acknowledging their relevance and growing importance.
With so many data points vying for attention, distilling key insights from the DBIR can be quite a task. Nevertheless, I have attempted to take a stab at it and share my key takeaways.
For almost every significant point, I pose a question or two that we should probably be asking ourselves and taking back to our desks.
For a start, I looked up the 2010 DBIR to check whether anything had fundamentally changed over the past decade. In 2010, 48% of breaches were due to insider abuse, 40% due to hacking and 38% due to malware. In the 2020 DBIR, 45% of breaches featured hacking, followed by 22% caused by human error. Only 17% of breaches were due to malware and 8% were attributed to misuse by authorised users.
Since 2010, breaches due to human error have increased, breaches due to malware have decreased and insider abuse has dropped drastically.
This change in trend could be down to several factors: gradual improvement in internal security controls over time, organizations doing a good job of managing anti-malware controls, and, as a result, threat actors relying less on malware and more on sophisticated fileless attacks and on opportunities presented by human error.
That 22% of breaches occur due to human error is probably an indication that we are either not able to fully grasp the security risks that have emerged from technological complexity (cloud, DevOps, big data etc.) or we are under-estimating the problem.
Here are some other insights from the 2020 report -
- 43% of breaches involved a web application; 37% involved stealing or using stolen credentials; 27% involved ransomware and 22% involved phishing.
Q: Are we allocating adequate investments and attention in the right areas to address these threat vectors?
- Password dumper malware is emerging as a preferred vector in successful breaches.
Q: Does our AV detect and prevent this type of malware and can our SOC detect password dumper usage in our environment?
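To make the detection question concrete, here is a minimal sketch of the kind of rule a SOC might run against Sysmon-style process-access telemetry (Event ID 10 records which process opened a handle to which target). The event shape, process names and allowlist below are purely illustrative assumptions, not a production rule:

```python
# Hypothetical Sysmon ProcessAccess-style events; a real rule would run in the SIEM.
SUSPICIOUS_TARGETS = {"lsass.exe"}               # credential dumpers read LSASS memory
ALLOWLIST = {"MsMpEng.exe", "csrss.exe"}         # assumption: known-good accessors

def flag_credential_dumping(events):
    """Return events where a non-allowlisted process accessed a sensitive target."""
    return [
        e for e in events
        if e["target_image"].lower() in SUSPICIOUS_TARGETS
        and e["source_image"] not in ALLOWLIST
    ]

events = [
    {"source_image": "MsMpEng.exe", "target_image": "lsass.exe"},   # AV, allowlisted
    {"source_image": "mimikatz.exe", "target_image": "lsass.exe"},  # suspicious
    {"source_image": "chrome.exe", "target_image": "chrome.exe"},   # unrelated
]
print([e["source_image"] for e in flag_credential_dumping(events)])  # ['mimikatz.exe']
```

The hard part in practice is tuning the allowlist, since legitimate security tooling also touches LSASS.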
- Nearly 60% of incidents were associated with Denial of Service.
Q: Do we have an anti-DDoS solution (or service) that has been properly configured for all key IP addresses/domains and tested on a regular basis?
- More than 60% of breaches were discovered by a security researcher.
Q: Should we be mainstreaming Red Team / Bug Bounty Programs?
- Over 80% of hacking-related breaches involve brute forcing or the use of stolen credentials.
Q: Does our SIEM detect brute force attempts? Do we have a playbook to respond to this attack? Can we proactively detect stolen credentials? Can we implement two-factor authentication to render such attacks useless? Is it time that regulators mandate two-factor authentication for accessing all internet-facing applications and devices (including cloud apps)?
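For the first of those questions, the core of a brute-force detection rule is simple to sketch: count failed logins per source IP inside a sliding time window. This is an illustrative stand-in for a SIEM correlation rule, assuming authentication events have already been parsed into (timestamp, IP, user, success) tuples:

```python
from collections import defaultdict

def detect_brute_force(events, window=300, threshold=10):
    """Flag source IPs with >= threshold failed logins within `window` seconds.

    events: iterable of (timestamp_seconds, source_ip, user, success) tuples.
    """
    failures = defaultdict(list)  # ip -> timestamps of recent failures
    alerts = set()
    for ts, ip, user, success in sorted(events):
        if success:
            continue
        bucket = failures[ip]
        bucket.append(ts)
        # discard failures that have aged out of the window
        while bucket and ts - bucket[0] > window:
            bucket.pop(0)
        if len(bucket) >= threshold:
            alerts.add(ip)
    return alerts

# 12 failed logins in 12 seconds from one IP, plus one unrelated success
events = [(i, "203.0.113.7", "admin", False) for i in range(12)]
events += [(100, "198.51.100.2", "alice", True)]
print(detect_brute_force(events))  # {'203.0.113.7'}
```

A real deployment would also track per-user counts (to catch password spraying, where the attacker rotates users rather than passwords) and feed alerts into the response playbook.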
- SSH (22) and Telnet (23) are the most targeted protocols.
Q: Are these ports blocked by default on our perimeter or secured with two-factor authentication?
The report also throws up a few other interesting findings –
- The risk of a breach from a vulnerability exploit is very low; it peaked at 5% in 2017. Not every newly published vulnerability makes you more vulnerable.
- Patches that do not get applied within the first quarter of being released, often don’t get applied at all. This gives the adversaries time to build tools that will make it easy even for a novice to attack the infrastructure that remains vulnerable.
- An encouraging finding on social engineering: click rates on phishing emails have fallen to 3.4%, while the reporting rate has risen to 40%.
Q: What is our company’s click rate on internal phishing simulation tests? What % of our users report these phishing emails as suspicious?
- On average, a company lost between $1,240 and $44,000 in a Business Email Compromise incident. 50% of those affected were able to recover 99% of the money, while 9% couldn’t recover any.
Q: Given this scenario, should we be investing in an email security tool that boasts of AI-based BEC detection at a USD 1 million price tag? The ROI doesn’t seem to work out.
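A back-of-the-envelope calculation using the report's loss range makes the ROI point vivid. The incident frequency and the recovery split for the "partial recovery" group below are assumptions for illustration only:

```python
# Report figures: per-incident loss range, recovery outcomes
low_loss, high_loss = 1_240, 44_000
p_full_recovery, p_no_recovery = 0.50, 0.09

# Assumptions (not from the report): 5 BEC incidents/year, and the remaining
# 41% of victims recover half their money on average
expected_incidents_per_year = 5
p_partial, partial_recovery = 0.41, 0.5

avg_gross_loss = (low_loss + high_loss) / 2   # midpoint of the reported range
avg_net_loss = avg_gross_loss * (
    p_full_recovery * 0.01                    # full-recovery group loses only 1%
    + p_no_recovery * 1.0                     # no-recovery group loses everything
    + p_partial * (1 - partial_recovery)      # partial group loses half
)
expected_annual_loss = expected_incidents_per_year * avg_net_loss
tool_cost = 1_000_000

print(round(expected_annual_loss))            # tens of thousands per year
print(expected_annual_loss < tool_cost)       # True: the tool costs far more
```

Even with generous assumptions about incident frequency, the expected annual loss is two orders of magnitude below the tool's price tag.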
- 24% of breaches involved a cloud asset; 73% of cloud breaches affected an app or email and 77% involved breached credentials.
- Just 4% of breaches involved an OT asset – a low figure for now that is sure to climb once IIoT matures.
- The report highlights another important risk: on average, a company has 5 different networks with an internet presence. If it is not fully aware of what assets it has on each of these networks, then it has an asset management problem, which will eventually translate into a vulnerability management problem.
Q: How do we centrally track assets and manage vulnerabilities across different networks – on-premise and cloud? What about shadow IT?
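The starting point for an answer is a simple reconciliation: compare what the CMDB says you own against what external scanning actually finds. Anything visible from the internet but absent from the inventory is a shadow-IT candidate. The addresses below are illustrative placeholders:

```python
# Hypothetical inventories: a CMDB export vs. results of an external discovery scan
cmdb_assets = {"10.0.0.5", "203.0.113.10", "203.0.113.11"}
scanned_assets = {"203.0.113.10", "203.0.113.11", "203.0.113.99"}

# Internet-visible assets the inventory doesn't know about = shadow IT candidates
unknown_assets = scanned_assets - cmdb_assets
# Inventoried assets the scan never saw = possibly decommissioned or mis-recorded
stale_records = cmdb_assets - scanned_assets

print(unknown_assets)  # {'203.0.113.99'}
```

Trivial as it looks, running this diff continuously across all five-plus networks is exactly the discipline the report is nudging us toward.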
- There has been a marked improvement in breach detection, with a rising number of companies discovering a breach in a matter of days (rather than months).
- Misdelivery of data is cited as a major source of data breaches. This is a situation where someone inadvertently sends sensitive data to unintended recipients (the scenarios could be as wild as your HR manager sending payroll data by mistake to the “ALL” distribution list, or an overworked healthcare office staffer sending medical reports to the wrong person in haste).
Q: How do we possibly prevent or deal with such breaches?
Coming to the Industry and Sectoral Insights –
- The arts and entertainment industry experienced the highest number of DDoS attacks.
- The educational sector has seen the highest incident count involving credential dumping attacks (an attack running for 28 days on average). These attacks are mostly against cloud email accounts.
So, educational institutions should be monitoring for brute force or password spraying attacks on cloud email. Students bringing their own infected devices and connecting them to the institutional network endanger the entire organization.
- The information sector has a higher number of web application exploitation attacks than any other industry, but on the bright side, it has the highest percentage of vulnerabilities patched on time: almost 89% of patches were deployed within 75 days of vulnerability discovery.
This could be a good metric to benchmark our patch cadence.
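If we want to benchmark against that 89%-within-75-days figure, the metric itself is easy to compute from vulnerability-management data. A minimal sketch, assuming we can export (discovery date, patch date) pairs from our scanner or ticketing system:

```python
from datetime import date

def pct_patched_within(vulns, days=75):
    """Share (%) of vulnerabilities patched within `days` of discovery.

    vulns: list of (discovered, patched) date pairs; patched is None if unpatched.
    """
    if not vulns:
        return 0.0
    on_time = sum(
        1 for discovered, patched in vulns
        if patched is not None and (patched - discovered).days <= days
    )
    return 100.0 * on_time / len(vulns)

vulns = [
    (date(2020, 1, 1), date(2020, 2, 15)),  # 45 days: on time
    (date(2020, 1, 1), date(2020, 5, 1)),   # 121 days: late
    (date(2020, 2, 1), None),               # never patched
    (date(2020, 3, 1), date(2020, 3, 20)),  # 19 days: on time
]
print(pct_patched_within(vulns))  # 50.0 - well below the 89% benchmark
```

Note that unpatched vulnerabilities count against the metric, which matches the report's warning that patches missed in the first quarter often never get applied.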
- The manufacturing sector is heavily targeted by external actors using password dumper malware and stolen credentials. Nation-state actors were behind 38% of the incidents, with 28% of breaches having espionage as the motive. Internal employees misusing their access to abscond with data are also a concern. Privilege abuse and data mishandling – an employee sending company data to a personal email account or uploading it to cloud drives to work from home – are some of the common misuse scenarios affecting this sector.
- In retail, attacks have migrated from POS terminals to e-commerce applications. Stolen credentials, vulnerability exploits and brute force attacks are the preferred tactics. Credential stuffing is a major cause of concern, as is the lack of timely patching. Only 50% of vulnerabilities are getting patched within 3 months of discovery. SQL, PHP and local file injection are the most common attacks.
Q: Is our WAF configured to detect and block credential stuffing and script-based enumeration attacks? Do we have the guts to enforce rate limiting for suspicious IP addresses during the big billion-dollar sale, or have we decided to capitulate in the face of the “user experience” ruse that the business guys pull on us?
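Rate limiting itself is not exotic; most WAFs and API gateways implement some variant of a per-IP token bucket. As a sketch of the mechanism (the rate and burst values are illustrative, and production systems do this in the gateway, not in application code):

```python
class TokenBucket:
    """Per-IP token bucket: allow `rate` requests/second with bursts up to `burst`."""

    def __init__(self, rate=5.0, burst=10):
        self.rate, self.burst = rate, burst
        self.state = {}  # ip -> (tokens_remaining, last_seen_timestamp)

    def allow(self, ip, now):
        tokens, last = self.state.get(ip, (self.burst, now))
        # refill tokens in proportion to elapsed time, capped at the burst size
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        allowed = tokens >= 1.0
        self.state[ip] = (tokens - 1.0 if allowed else tokens, now)
        return allowed

rl = TokenBucket(rate=1.0, burst=3)
# 5 back-to-back requests from one IP at the same instant:
results = [rl.allow("203.0.113.9", now=100.0) for _ in range(5)]
print(results)  # [True, True, True, False, False] - burst absorbed, then throttled
```

Legitimate shoppers rarely exceed a few requests per second, which is why a well-tuned limit hurts credential stuffers far more than it hurts the sale.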
The report highlights a counter-intuitive finding: the number of breaches that take months or years to discover is greater in large organizations than in small ones. This could be because large organizations have a much larger footprint and are more likely to miss an intrusion on an internet-facing asset they forgot they owned, while small organizations have a reduced attack surface, making a problem easier to spot.
In the end, the report makes a strong case for adopting the CIS Critical Security Controls (CSC) in a phased manner to reduce cyber risk exposure. You can read all about the CIS framework here.
One thing is certainly true in cyber security. The more things change, the more they remain the same.

About the Author
Kaustubh Medhe is an Information Security, Technology Risk and Data Privacy professional with experience in consulting, audit, compliance and cyber security operations.