Monthly Archives: May 2021

The Fourth Annual Global Study on Exchanging Cyber Threat Intelligence: There Has to Be a Better Way

Since Ponemon Institute conducted its first study on threat intelligence sharing in 2014, organizations that use and exchange threat intelligence have improved their security posture and their ability to prevent and mitigate the consequences of a cyberattack. As revealed in The Fourth Annual Study on Exchanging Cyber Threat Intelligence: There Has to Be a Better Way, some 74 percent of respondents whose organizations experienced a cyberattack believe that the availability of timely and accurate threat intelligence could have prevented or mitigated the consequences of the attack.

According to the 1,432 IT and IT security practitioners surveyed in North America, EMEA and Asia Pacific, the consumption and exchange of threat intelligence continue to increase, as shown in Figure 1. Despite this increase, more work needs to be done to improve the timeliness and actionability of threat intelligence.

Following are 11 trends that describe the current state of threat intelligence sharing.

1. Satisfaction with the ability to obtain threat intelligence decreases slightly. This year, 40 percent of respondents say they are very satisfied or satisfied with the way their organizations obtain threat intelligence. This is a slight decrease from 41 percent of respondents in 2017. To increase satisfaction, threat intelligence needs to be more timely, less complex and more actionable.

2. Organizations do not have confidence in free sources of threat intelligence. Organizations pay for threat intelligence both because it has proven effective in stopping security incidents and because they lack confidence in free sources of intelligence.

3. On a positive note, the accuracy of threat intelligence is increasing. However, the majority of organizations believe the timeliness and actionability of threat intelligence are low.

4. The two main metrics used to assess the value of threat intelligence are the ability to prioritize it and its timely delivery. Other metrics are the ability to implement the threat intelligence and the number of false positives.

5. When it comes to measuring the ROI of their threat intelligence, 39 percent of respondents say their organizations calculate the ROI. The top ROI metrics organizations track include a reduction in the dwell time of a breach, a reduction in the number of successful breaches, and faster, more effective incident response.

6. Timeliness of threat intelligence is critical but not achieved. Only 11 percent of respondents say threat intelligence is provided in real time, and only 13 percent say it is provided hourly.

7. Threat indicators provide valuable intelligence. Eighty-five percent of respondents say they use threat indicators. The most valuable types of indicators are malicious IP addresses and malicious URLs.
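To make the idea concrete, the sketch below shows the simplest way such indicators get used in practice: screening log entries against known-bad IP addresses and URLs. The indicator values, feed contents and log format here are illustrative assumptions, not taken from the report.

```python
# Illustrative sketch only: matching log entries against a small indicator set.
# The IPs below are from documentation ranges (RFC 5737), not real threats.

MALICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}
MALICIOUS_URLS = {"http://bad.example/payload"}

def match_indicators(log_entries):
    """Return entries whose source IP or requested URL matches a known indicator."""
    hits = []
    for entry in log_entries:
        if entry.get("src_ip") in MALICIOUS_IPS or entry.get("url") in MALICIOUS_URLS:
            hits.append(entry)
    return hits

logs = [
    {"src_ip": "192.0.2.1", "url": "http://ok.example/"},
    {"src_ip": "203.0.113.7", "url": "http://ok.example/"},
]
hits = match_indicators(logs)  # flags only the second entry
```

Real deployments do this at scale inside an IDS or SIEM rather than in application code, but the matching step is essentially this set lookup.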

8. Most organizations either currently consolidate or plan to consolidate threat intelligence data from multiple solutions. However, 53 percent of respondents say their organizations rely mainly on manual processes to accomplish the consolidation.
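The consolidation the report says is still largely manual is straightforward to automate. The sketch below merges indicator feeds from multiple solutions and deduplicates by indicator type and value; the field names and dates are illustrative assumptions.

```python
# Hypothetical sketch: consolidating indicator feeds from multiple solutions.
# Duplicates are collapsed on (type, value); the earliest sighting wins.

def consolidate(*feeds):
    """Merge feeds; on duplicates, keep the record with the earliest first_seen."""
    merged = {}
    for feed in feeds:
        for ind in feed:
            key = (ind["type"], ind["value"])
            if key not in merged or ind["first_seen"] < merged[key]["first_seen"]:
                merged[key] = ind
    # ISO-8601 date strings sort correctly as plain strings
    return sorted(merged.values(), key=lambda i: i["first_seen"])

feed_a = [{"type": "ip", "value": "203.0.113.7", "first_seen": "2021-05-01"}]
feed_b = [
    {"type": "ip", "value": "203.0.113.7", "first_seen": "2021-04-15"},  # earlier sighting
    {"type": "url", "value": "http://bad.example/", "first_seen": "2021-05-03"},
]
merged = consolidate(feed_a, feed_b)  # two unique indicators survive
```

In practice this is what standardized formats such as STIX and threat intelligence platforms exist to do; the point is only that the dedup logic itself is mechanical, not something that needs manual effort.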

9. With regard to how threat intelligence is used throughout the network, the majority of organizations are using it in intrusion detection systems (IDS). Unified Threat Management (UTM) is usually a single security appliance that provides multiple security functions at a single point on the network. The use of UTMs has increased significantly since 2017.

10. Internal silos prevent more effective collaboration and the exchange of threat intelligence with other organizations. Only 40 percent of respondents say the collaboration between their organization and other companies in the exchange of threat intelligence is either very effective or effective.

11. The use of automated processes to investigate threats is gaining traction. Fifty-four percent of respondents, up from 47 percent, are using automated processes to investigate threats. There also has been a significant increase in the use of machine learning and AI since 2017.

To read the full report, visit the Infoblox website.

Plane crashes are investigated. Computer crashes should be, too

Bob Sullivan

When a plane crashes, a government agency rushes to the scene looking for answers…and lessons that might prevent the next plane crash. When computers crash — and the economy crashes, as we’ve seen this week — there is no such fact-finding mission. There should be. And now, perhaps, there will be.

The National Transportation Safety Board, while imperfect, has a remarkable track record for getting to the bottom of transportation disasters. Air travel is remarkably safe, in no small part because of all the public hearings and final reports issued by the NTSB through the years. Yes, wounds are exposed and companies take it on the chin after a crash. That’s the price of learning. Lives are at stake.

Cybersecurity could benefit dramatically from this kind of soul-searching after major attacks.

This week’s Colonial Pipeline ransomware incident and resulting run on gas stations is just the latest incident that screams for some kind of independent agency devoted to this kind of soul searching. And I do mean “just the latest.” A quick trip down memory lane had me re-reading essay after essay calling for a “Computer Network Safety Board” or a “National Cybersecurity Safety Board.” A 2016 report from a NIST commission cites a 1990(!) publication, Computers at Risk: Safe Computing in the Information Age, which called for the creation of an incident database, saying “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.”

So, this is an idea whose time has come. And now, perhaps, it finally has. In the wake of the pipeline ransomware incident, President Biden issued an executive order this week addressing cybersecurity. These things can seem like pageantry, but they don’t have to be. The list of actions in the order is non-controversial and has been in the works for a while. Things like: raising government security standards, stronger supply chain/vendor oversight, and improved information sharing. But to me, this is the most critical part of the order:

Establish a Cybersecurity Safety Review Board. The Executive Order establishes a Cybersecurity Safety Review Board, co-chaired by government and private sector leads, that may convene following a significant cyber incident to analyze what happened and make concrete recommendations for improving cybersecurity. Too often organizations repeat the mistakes of the past and do not learn lessons from significant cyber incidents. When something goes wrong, the Administration and private sector need to ask the hard questions and make the necessary improvements. This board is modeled after the National Transportation Safety Board, which is used after airplane crashes and other incidents.

Finally.

This…CSRB?…faces a lot of obstacles. Paul Rosenzweig, one of the essayists who has called for such a thing in the past, laid these obstacles out well in his 2018 paper for R Street. There’s (usually) no wreckage after a computer crash, so investigations will be much harder. There are tens of thousands of important computer hacks every year; the board can’t study them all, so how will the CSRB pick which ones to examine? Victim companies are notoriously hesitant to share details after an attack, fearing those details will end up in a lawsuit. Sometimes…often…the investigation will be inconclusive. And finally: the “flaw” found by such an investigation will often be a person, not software or hardware.

Good.

I’ve been to 100 conferences where security professionals spend a week talking about fancy new software and then at a closing address, someone ends by saying, “It all comes down to the human element.” I suspect a CSRB will find *many* incidents come down to a mistake made by a person. That’s a good start. Of course, no one person can really screw up something like this. That person is part of a team. S/he is nearly always overworked, part of a flawed system, walking a tightrope without a net, and acting on the wrong incentives. These are the kinds of real problems that can finally be exposed by CSRB reports.

Having covered this industry for 25 years, I am suspicious of the idea that many investigations will be inconclusive. Yes, there are occasional zero-day hacks and nation-state-sponsored attacks that might elude investigators. But many, many hacks fall into the Equifax camp — they involve a cascade of errors that should have been caught, like a horror movie where the protagonists miss a dozen or more chances to avert the disaster.

Every one of those movies should be made, and studied, by the CSRB.

Perhaps one conclusion might be limitations on workload, the kind that now protect truck drivers, train engineers and pilots. Perhaps other innovative recommendations will arise from shining such a public light on hacking incidents. Perhaps there will be so many that we’ll move past shaming cybersecurity workers to solving the real problem. If we don’t, we’re going to see a lot more gas lines that result from malicious computer code.