
'Your money or your data!' – Most have still never heard of ransomware, while a majority of corporate victims have paid up, IBM says

Bob Sullivan

There’s fresh evidence out Wednesday to show the ransomware epidemic has staying power. Why? Victims are paying ransoms for their data, that’s why.

Madison County, Indiana made headlines last week because it admitted a recent ransomware attack will cost taxpayers there $220,000 — some to the hackers, most for security upgrades.

But Madison County shouldn’t be singled out. Ransomware nightmares — involving malicious software that encrypts victims’ data and won’t “give it back” unless a fee is paid — are playing out everywhere. The Carroll County, Arkansas, sheriff’s department admitted this week it had paid $2,400 to recover data held captive in its law enforcement management system, which holds reports, bookings and other day-to-day operational data, according to Townhall.com.

The hits keep coming because victims keep paying, and victims keep paying because they seem to have no other choice. Obviously, criminals will keep doing what works.

IBM researchers set out recently to understand the prevalence of ransomware. In a report released Wednesday, IBM’s X-Force said that the volume of spam containing ransomware has skyrocketed.  The FBI claims there were an average of 4,000 attacks per day in the first quarter of 2016.

And yet, IBM found that only 31 percent of consumers had even heard the term “ransomware.” Meanwhile, 75 percent said they “are confident they can protect personal data on a computer they own.” Yet 6 out of 10 said they had not taken any action in the past three months to protect themselves from being hacked.

That’s head-in-the-sand stuff, folks. Forward this story to your friends now — but don’t include it as an attachment, please.

Meanwhile, companies seem to be more realistic, and more frightened — 56 percent of companies surveyed by the Ponemon Institute said, in a separate study, they are not ready to deal with ransomware. (I have a business partnership with Larry Ponemon at PonemonSullivanReport.com).

All this matters because a majority of consumers and corporations actually say they’d pay to recover data encrypted by a criminal. Some 54 percent said they’d pay up to $100 to get back financial data, and 55 percent said they’d do so to retrieve lost digital photos. Not surprisingly, parents (71 percent) are much more concerned than non-parents (54 percent) about having family digital photos held for ransom or blocked.

(Back up those family photos, kids!)

Now, for the meat of the report. Many corporations told IBM that they had already paid ransom for data — seven in ten of those with experience of ransomware attacks have done so, with more than half paying over $10,000, IBM said. Some paid much more:

  • 20 percent paid more than $40,000
  • 25 percent paid $20,000 – $40,000
  • 11 percent paid $10,000 – $20,000

“The perception of the value of data, and the corresponding willingness to pay to retrieve it, increases with company size. Sixty percent of all respondents say their businesses would pay some ransom and they’re most willing to pay for financial (62 percent) and customer/sales records,” the report said.

All this paying up flies in the face of law enforcement’s advice, which is to never pay.

“Paying a ransom doesn’t guarantee an organization that it will get its data back,” said FBI Cyber Division Assistant Director James Trainor in a report earlier this year. “We’ve seen cases where organizations never got a decryption key after having paid the ransom. Paying a ransom not only emboldens current cybercriminals to target more organizations; it also offers an incentive for other criminals to get involved in this type of illegal activity. And finally, by paying a ransom, an organization might inadvertently be funding other illicit activity associated with criminals.”

Of course, the FBI is looking at the macro impact, while the victims are looking at a huge, immediate micro problem.

How can you protect yourself? IBM says the main way ransomware arrives is through an unsolicited email with a booby-trapped attachment — usually a Microsoft Office document that asks for macro permissions. Refuse those requests and you’ve gone a long way toward protecting yourself. Here are some other tips from IBM.

Banish unsolicited email: Sending a poisoned attachment is one of the most popular infection methods used by ransomware operators. Be very discerning when it comes to what attachments you open and what links you click in emails.

No macros: Office document macros have been a top choice for ransomware operators in 2016. Opening a document that then requires enabling macros to see its content is a very common sign of malware, and macros from email should be disabled altogether.

Update and patch: Always update your operating system, and ideally have automatic updates enabled. Opt to update any software you use often, and delete applications you rarely access.

Protect: Have up-to-date antivirus and malware detection software on your endpoint. Allow scans to run completely, and update the software as needed. Enable the security offered by default through your operating system, like firewall or spyware detection.

Junk it: Instead of unsubscribing from spam emails, which will confirm to the spammer that your address is live, mark them as junk and set up automatic emptying of the junk folder.
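The attachment advice above can be partly automated at a mail gateway. The sketch below is illustrative only (the extension lists are mine, not IBM’s): it flags attachment names whose extensions indicate macro-enabled Office files or executables, the types most often abused by ransomware operators.

```python
# Sketch: triage email attachments likely to carry macros or executables.
# The extension lists here are illustrative, not exhaustive.
import os

MACRO_ENABLED = {".docm", ".xlsm", ".pptm", ".dotm", ".xlam"}
LEGACY_OFFICE = {".doc", ".xls", ".ppt"}   # legacy formats can embed macros too
EXECUTABLE = {".exe", ".js", ".vbs", ".scr", ".bat"}

def attachment_risk(filename: str) -> str:
    """Classify an attachment name by how likely it is to carry ransomware."""
    ext = os.path.splitext(filename.lower())[1]
    if ext in MACRO_ENABLED or ext in EXECUTABLE:
        return "block"
    if ext in LEGACY_OFFICE:
        return "warn"      # open only if the sender is verified
    return "allow"

print(attachment_risk("Invoice_2016.docm"))  # block
```

A real filter would also inspect file contents, since extensions are trivially spoofed; this is only a first-pass triage of the riskiest attachment types.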

 

The price of the insider threat — negligence more common, criminals more costly

Larry Ponemon


Ponemon Institute is pleased to present the findings of the 2016 Cost of Insider Threats study sponsored by Dtex. The purpose of this benchmark study is to understand the direct and indirect costs that result from insider threats. In the context of this research, insider threats are defined as:

  • A careless or negligent employee or contractor,
  • A criminal or malicious insider or
  • A credential thief.

We interviewed 280 IT and IT security practitioners in 54 organizations from April to July 2016. Each organization experienced one or more material events caused by an insider. These organizations experienced a total of 874 insider incidents over the past 12 months. Our targeted organizations were business organizations with a global headcount of 1,000 or more employees located throughout the United States.

Imposter risk is the most costly

The cost ranges significantly based on the type of incident. If it involves a negligent employee or contractor, the incident costs an average of $206,933. The average cost more than doubles if the incident involves an imposter or thief who steals credentials ($493,093). Criminal and malicious insiders cost the organizations represented in this research an average of $347,130. The activities that drive costs are: monitoring & surveillance, investigation, escalation, incident response, containment, ex-post analysis and remediation.

The negligent insider is the root cause of most incidents

Most incidents in this research were caused by insider negligence. Specifically, the careless employee or contractor was the root cause of 598 of the 874 incidents reported. The most expensive incidents, those caused by imposters stealing credentials, were the least reported, totaling 85 incidents.

Organizational size and industry affect the cost per incident

The cost of incidents varies according to organizational size. Large organizations with a headcount of more than 75,000 spent an average of $7.8 million to resolve the incident. To deal with the consequences of an insider incident, organizations with a headcount between 1,000 and 5,000 spent an average of $2 million. Financial services, retail, industrial and manufacturing spent an average of $5 million.

User behavior analytics combined with other tools reduce the total cost

Using incremental analysis, we recalculated the total cost of insider-related incidents under the condition that a given tool or activity is deployed across the enterprise. Companies that deploy user behavior analytics (UBA) realized an average cost reduction of $1.1 million. The use of threat intelligence systems resulted in a $0.8 million average cost reduction. Similarly, the deployment of data loss prevention (DLP) tools resulted in an average cost reduction of $0.7 million. Companies that deploy user behavior analytics in combination with threat intelligence, employee monitoring and data loss prevention have an average total cost of $2.8 million, which is $1.5 million lower than the overall average.
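As a sanity check, the incident counts and per-incident averages quoted in this study can be combined to recover an overall average cost per organization. This is back-of-the-envelope arithmetic of my own, not a figure from the report; it assumes the criminal/malicious category accounts for all incidents not attributed to negligence or credential theft.

```python
# Rough cross-check of the study's arithmetic (figures from the text above).
incidents = {"negligent": 598, "credential_thief": 85}
incidents["malicious"] = 874 - sum(incidents.values())   # remaining incidents

avg_cost = {                        # average cost per incident, USD
    "negligent": 206_933,
    "credential_thief": 493_093,
    "malicious": 347_130,
}

total = sum(incidents[k] * avg_cost[k] for k in incidents)
per_org = total / 54                # 54 organizations in the benchmark

print(f"malicious incidents: {incidents['malicious']}")         # 191
print(f"annualized cost per organization: ${per_org:,.0f}")
```

The result lands near $4.3 million per organization, consistent with the report’s statement that the $2.8 million figure for companies using the combined tools is $1.5 million below the overall average.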

 Click here to read the rest of the study

 

The hack that might have given Trump the White House

Wikileaks. The alleged email that led to compromise of John Podesta's account.

Bob Sullivan


A simple, decade-old hacker trick likely led to the compromise of key Hillary Clinton staff members’ email accounts. If John Podesta can fall for it, with the presidential election at stake, so can you. So listen up.

I know I sound like a broken record when I warn people to think before they click, and I know most people think they’ll never fall for silly hacker tricks, but hey, this stuff is important.  It very well might have an impact on who gets to be the leader of the free world.

Information continues to trickle out of hacked emails that come from senior officials in Hillary Clinton’s campaign team, including campaign chair John Podesta. This month brought additional evidence describing how it happened.

It was pretty easy.

It appears that Podesta, and hundreds of other Clinton camp workers, received targeted phishing emails telling them they had to change their password immediately. Of course, workers who fell for the email were led to a look-alike page controlled by hackers. Part of the reason the dupe worked involved links that used the URL-shortening service Bitly, which turns long web addresses into short ones for convenience. Bitly also has the terrible quality of completely obscuring where the clicker is actually going until it’s too late. For years, I’ve thought this to be a security flaw inherent in link shorteners, and I believe Bitly and other URL shorteners need to engineer a fix.

In the meantime, you need to know three critical things:

A) Bitly links can’t be trusted; never click on a Bitly link when anything even remotely sensitive is involved

B) Any plea to urgently change your password should be met with serious skepticism. When you decide to do so, always manually type the service’s address into your web browser and navigate to its password update page. NEVER click on a link telling you to do so, even if you are sure it’s legitimate.

C) The presidential election might hang in the balance because of this simple hack. So, yes, anyone can fall for it. You can too.

The Bitly link

Back in June, SecureWorks published a pretty convincing research paper that reconstructed the careful attack on the Hillary Clinton presidential campaign. Analyzing data left publicly available on a Bitly account, it found evidence of thousands of spear-phishing emails targeting campaign officials between March and June of this year. The targets included the national political director, the finance director, the director of strategic communications, and so on.

For example, 213 links were created targeting 108 email addresses at HillaryClinton.com. The hackers succeeded again and again: “20 of the 213 short links have been clicked as of this publication. Eleven of the links were clicked once, four were clicked twice, two were clicked three times, and two were clicked four times,” the report says.

The group also targeted personal Gmail accounts belonging to campaign officials.  This produced plenty of hits, too.

“They include the director of speechwriting for Hillary for America and the deputy director office of the chair at the DNC,” the report says. “(The hackers) created 150 short links targeting this group. As of this publication, 40 of the links have been clicked at least once.”

Clicking on a link does not mean the clicker subsequently entered login information and fell for the scam. But the high click rate certainly suggests some victims did. So does the timing of all this; the DNC hack was revealed in June, weeks after this spear-phishing campaign.

Release last week of what appears to be the actual email that led to the hacking of Podesta’s email on Wikileaks — sorry for the circular reasoning there — seems to confirm SecureWorks’ analysis. An email sent to John.Podesta@gmail.com appears to come from Google and warns that someone located in Ukraine had tried to access his account.

“Google stopped this sign-in attempt. You should change your password immediately,” it says. “CHANGE PASSWORD.”  And there’s a link headed for bit.ly/1PibSU0.

Click on that Bitly link today, and you are brought to a warning page saying there “might be a problem with the requested link.” A bit too late for Podesta and the Clinton campaign.

The ultimate destination for that link appears to be Google, but it’s not. Instead, it sends visitors to a web site at http://myaccount.google.com-securitysettingpage.tk
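That address fools people because browsers determine a site’s owner from the right-hand end of the hostname, not the left. Here is a minimal sketch of that rule; it uses only the last two hostname labels as a crude stand-in for the real Public Suffix List logic browsers rely on.

```python
# Sketch: why "myaccount.google.com-securitysettingpage.tk" is not Google.
# Ownership is resolved from the RIGHT end of the hostname; taking the
# last two labels is a simplification of the real Public Suffix List rules.
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    """Return the last two labels of the hostname, the part that
    actually determines who owns the site (simplified)."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

legit = "https://myaccount.google.com/security"
phish = "http://myaccount.google.com-securitysettingpage.tk"

print(registrable_domain(legit))  # google.com
print(registrable_domain(phish))  # com-securitysettingpage.tk
```

Everything to the left of the registrable domain is decoration the attacker controls, which is why “myaccount.google.com” can appear verbatim inside a hostname that has nothing to do with Google.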

An IT worker for the Clinton campaign ominously comments in the thread posted at Wikileaks that “this is a legitimate email,” though to his credit, he leaves instructions to visit Google at the correct link to change the password.

Then, ironically, he offers this call to action:

“Does JDP (John Podesta) have the 2 step verification or do we need to do with him on the phone? Don’t want to lock him out of his in box!”

If only a locked inbox were the biggest email problem Podesta had.

Innovation vs. security is a tough battle

Larry Ponemon

Ponemon Institute is pleased to present the findings of Global Trends in Identity Governance & Access Management, sponsored by Micro Focus. The purpose of this study is to understand companies’ ability to protect access to sensitive and confidential information and what they believe is necessary to improve the protection.

All participants in this study are involved in providing end users access to information resources in their organizations. In this study, we surveyed 2,580 IT and IT security practitioners in North America, the United Kingdom, Germany, EMEA, Brazil, LATAM and Asia-Pacific. The consolidated findings are presented in this report. The findings for North America, the UK, Germany and Brazil are published in separate reports.

On average, companies represented in this research must provide identity governance and access support to approximately 13,000 internal users (employees) and 191,000 external users (contractors, vendors, business partners, customers and consumers).

All enterprise organizations are under pressure to drive business innovation in order to respond to changes in the competitive landscape, and to meet changing customer expectations. This is fueling a trend toward digitalization as more resources and interaction move online, requiring greater and freer access to online information sources. Yet the survey shows that the security, access management, and governance processes to support this digitalization are not yet in place.

In this study, we have identified the following trends that will have a significant impact on how organizations will manage identity governance and access.

1. Employees are frustrated with access rights processes, and IT security is considered a bottleneck. Sixty-two percent of respondents say IT security is viewed as a bottleneck in the process for assigning and managing access rights to users, and 57 percent of respondents say employees are frustrated with the current process for assigning and managing access rights.

2. Responding to requests for access is considered slow. Only 41 percent of respondents say the function that provides end-user access to information resources is quick to respond to changes such as termination or role changes. These findings may explain why lines of business and application owners are taking charge of access when it comes to the cloud.

3. Control over access management is decentralized. According to 59 percent of respondents, senior leaders prefer each business function to determine what access privileges are required for a given user’s role and function. In the cloud environment, responsibility is more decentralized: 29 percent of respondents say lines of business, and 21 percent say the application owner, decide end-user access in the cloud environment.

4. Certain technologies are considered an important part of meeting identity governance and access management requirements. These are multi-factor authentication (69 percent of respondents), identity and access management (69 percent of respondents), access request systems (67 percent of respondents) and biometric authentication (60 percent of respondents).

5. A single-factor authentication approach is no longer effective. Seventy-five percent of respondents say a single-factor authentication approach, such as username and password, can no longer effectively prevent unauthorized access to information resources.

6. Integration of machine learning within identity governance solutions is critical (64 percent of respondents). Also considered critical are scalability to achieve an effective identity governance process and compliance with leading standards or guidelines, both noted by 63 percent of respondents.
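Point 5 above says single-factor passwords no longer suffice. One common second factor is a time-based one-time password (TOTP, RFC 6238), which proves possession of a shared secret that a phished password alone does not reveal. A minimal sketch using only the Python standard library:

```python
# Sketch of a second factor: a time-based one-time password (TOTP, RFC 6238).
# The 6-8 digit code changes every 30 seconds and is derived from a shared
# secret, so a stolen password alone cannot reproduce it.
import hmac
import struct
import time
from hashlib import sha1

def totp(secret, at=None, digits=6, step=30):
    """Compute a TOTP code for the given Unix time (default: now)."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: SHA-1, secret "12345678901234567890", T=59s, 8 digits
print(totp(b"12345678901234567890", at=59, digits=8))  # 94287082
```

The printed code matches the published RFC 6238 test vector, which is a quick way to verify the dynamic-truncation logic.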

Click here to download and read the full report.

 

Russians attacking U.S. election systems? Here's the real risk, from a man who fought Soviet electronic attacks during the Cold War

Bob Sullivan

With U.S. officials openly blaming Russia for hacker attacks on state election computer systems, and the myriad possibilities for election chaos such attacks raise, it’s important to put them in proper context. I went to Harri Hursti, a globally-known election security consultant, for some answers.  Hursti cut his teeth in the Finnish military fending off electronic attacks, so he has valuable perspective – particularly on a unique part of Russian culture which could explain who is really behind the attacks.  He also explains the potential for psychological warfare in this incident, and why it all feels a bit familiar to his Cold War sensibilities.

Harri Hursti developed the Hursti Hack(s), in which he demonstrated how the voting results produced by Diebold Election Systems voting machines could be altered. HBO turned the Hursti Hack into a documentary called “Hacking Democracy,” which was nominated for an Emmy award for outstanding investigative journalism. Hursti is co-author of several studies on data and election security, and his consultancy, Nordic Innovation Labs, advises governments around the world on election vulnerabilities.

Between 1984 and 1989, Hursti worked for UNESCO and the Finnish military on technology and cyber defense initiatives.

What do you think of the news that a member of Congress says there is “no doubt” that Russia is behind recent attacks on state election systems: (http://www.reuters.com/article/us-usa-election-cyber-idUSKCN1220SL)?

The article makes several dangerous assumptions about the security of elections and election systems. Representative Adam Schiff said he doubted (Russians) could falsify a vote tally in a way that affects the election outcome. He also said outdated election systems make this unlikely, but really, that just makes it easier. The voting machines were designed at a time when security wasn’t considered, included, or part of the specifications at all.

These outdated computers are extremely slow. They don’t have the extra horsepower to do decent security on top of the job they were designed for. Basically, a voting machine is about as powerful as today’s refrigerator or toaster, and some use the same components and logistics. So “outdated” doesn’t mean forgotten and obsolete; it means common, and therefore a lot of people still today know how those systems work and can subvert them. “Outdated” isn’t offering any protection from an attacker; quite the opposite.

So there’s no proof of voter registration tampering?

As with voting machines, registration machines don’t have the capability of logging an alteration, and they are trivially altered themselves. It’s meaningless to claim there’s no evidence, since the systems don’t have the capability to report when they’re altered. These are not standard parts of a database, so there’s no common sense in saying “there’s some sort of feature that would do that, right?” Unless we study the system, we can’t know one way or the other. This isn’t a common-sense claim; this is a claim that would require a forensic investigation.

In addition, the number of vendors and different systems is low, so a skillful attacker doesn’t need to learn hundreds of systems; they only need to know a half dozen to control all of the U.S. election systems. In fact, a skillful attacker only needs to learn one system in order to manipulate enough votes to tilt the election, even if it’s not close to tied. This means the attacker has more places to go to be strategic: instead of going to a big jurisdiction, they’ll go to 10 smaller ones with fewer resources and less attention. If you calculate the gap you need to fill to alter an election, you go to the smaller, underfunded and less technologically savvy districts to own the state.
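Hursti’s “calculate the gap” point can be made concrete with a toy model: given a statewide margin to overcome, a rational attacker sorts jurisdictions by how weakly they are defended and takes the cheapest ones first. All of the numbers below are invented for illustration.

```python
# Toy model of the attacker's arithmetic Hursti describes: cover a statewide
# margin by compromising the least-defended jurisdictions first.
def pick_targets(jurisdictions, margin):
    """Greedily select jurisdictions, weakest defenses first, until the
    swingable votes cover the margin to be overcome."""
    chosen, covered = [], 0
    for name, swingable, defense in sorted(jurisdictions, key=lambda j: j[2]):
        if covered >= margin:
            break
        chosen.append(name)
        covered += swingable
    return chosen, covered

counties = [
    # (name, votes an attacker could plausibly shift, relative defense score)
    ("Big Metro", 40_000, 9),
    ("Suburb A",  12_000, 5),
    ("Rural B",    8_000, 2),
    ("Rural C",    7_000, 1),
    ("Rural D",    6_000, 2),
]

targets, votes = pick_targets(counties, margin=20_000)
print(targets, votes)  # ['Rural C', 'Rural B', 'Rural D'] 21000
```

In this toy run the attacker never touches the big, well-defended metro county; three small rural counties cover the 20,000-vote gap.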

Also, some states have made statewide decisions that one system is used across the state, or jurisdictions are central-count only. So a statement that the U.S. election system is decentralized is a false statement. It’s easy to understand why people think that, but from an attacker’s point of view (the threat model), you could not ask for an easier target. And the diversity between small jurisdictions is limited. An attacker can choose the jurisdictions based on the systems they are best skilled to attack.

How can the US be so sure it’s Russia?

It can’t. It is very hard to find where a network attack is coming from, and it is equally easy to make certain that investigators will find “the trail,” one pointing in the wrong direction. So, under the assumption that you’re dealing with a skillful attacker, any trail found is itself a red flag: there are so many ways to make a trail virtually impossible to find that any conclusive-looking trail that is “found” should be considered suspect. You can only say you suspect someone, and until you get to the real people, to the level of the actual perpetrators’ true identities, you can’t make a conclusion as to where they come from.

Could it have been Russia?

We could use a working hypothesis, or a reasonable suspicion of Russian involvement, but until you’re down to individual people you don’t know who they are. They might even have been based in Russia, but have arrived there as tourists to carry out their attacks. There’s no way to know who the individual attackers are until they’re confronted.

Given your Cold War background, does this feel familiar?

The Cold War was all about ideology, and therefore a large part of it was something we today call hybrid warfare. In that game, the actual technological attacks are just as important as the psychological influencing of the general population with misinformation and misdirection. So this is all very familiar.

Also, something we in the Western world don’t understand is how deeply patriotic Russians are. Individual Russians, and self-organized groups, are willing to go to great lengths on their own initiative if they believe that what they do will benefit Mother Russia, or in the hope and belief that their actions, once known, will be rewarded. So these kinds of self-initiated actions, which do resemble organized operations, are commonplace. Bearing in mind that the self-organized groups can have members whose day jobs are close to the government, the remaining question is: is the government aware of these groups, and if it is, is it encouraging or discouraging them? That is something we cannot know. But the fact of the matter is that Russia is self-organizing and self-providing the capability of plausible deniability, which in many cases can actually be true: the government didn’t know.

Also, it is good to understand how high the level of science education is in Russia and the former Eastern Bloc. When East Germany and West Germany united, they had to tone down the science education in East Germany in order to match the West Germans; science education in East Germany was way higher. The percentage of people in the general population of Russia who possess the relevant skill sets for carrying out this kind of attack is higher than we assume based on Western standards. And that’s not just Russia but the whole Eastern Bloc; it was very high and still is.

Given the number of, say, smartphones and laptops used today, how hard is it to fend off an attack?

In today’s world, where we have “bring your own device” models everywhere, we inherently assume every risk the wireless world brings to us. Our laptops and mobile phones are paired to our home networks and other wireless places we visit. It is still not understood how little security WiFi has and how easy it is, with an “evil” access point, to gain a connection to a target; and once you have a target, you can start to work to gain access.

To mitigate this we have two possible paths. One path is ultra-high security, with all the restrictions that come with it. The alternative is to assume a breach is imminent and utilize experts to put in place an active defense mechanism that catches the breach before the attacker can use it to gain access to valuable information.

What would an appropriate US response be if the U.S. discovered foreign hackers in its election system?

The first action is obviously to secure your home base. Taking into account the difficulty of identifying the actual attacker, a public retaliation towards an assumed attacker may be part of the attacker’s plan and intensify the attack. Hence, public retaliation is not an effective defense. Public disclosure is important, but after the fact and after the situation has been properly handled.

Finally what is the real risk here? Could Russian hacking throw the Nov. 8 result into doubt? Could Trump supporters, should they lose, blame Russia, for example?

There’s a myriad of risks, starting with the simple fact that attacking the voter registration system is a highway to all crimes involving identity theft. Massive breaches of voter registration databases might discourage people from participating in the democratic process and cause them to drop out by ceasing to be registered voters. It also poses a national security-level threat, by allowing malicious actors and adversaries to gain valuable intel, whether for personal-level attacks or for hybrid-warfare psyops.

It is also important to understand that data theft grabs the public interest, but data injection or insertion is far more serious. In such an attack, the U.S. could be set up for later attacks, with false identities planted to be leveraged for multiple purposes in and out of the election space.

For example, a voter registration database interacts with a lot of government databases, such as criminal records. While common sense might say this kind of interaction should be a one-way street, in reality the implementations quite often allow two-way interaction between the data sources. Therefore, from one jurisdiction to another, it should be carefully analyzed what kind of data propagation inserted voter records could lead to. Remember, only U.S. citizens can be voters, so a registered voter is assumed to be a citizen already.

Russians attacking U.S. election systems? Here’s the real risk, from a man who fought Soviet electronic attacks during the Cold War

Bob Sullivan

Bob Sullivan

With U.S. officials openly blaming Russia for hacker attacks on state election computer systems, and the myriad possibilities for election chaos such attacks raise, it’s important to put them in proper context. I went to Harri Hursti, a globally-known election security consultant, for some answers.  Hursti cut his teeth in the Finnish military fending off electronic attacks, so he has valuable perspective – particularly on a unique part of Russian culture which could explain who is really behind the attacks.  He also explains the potential for psychological warfare in this incident, and why it all feels a bit familiar to his Cold War sensibilities.

Harri Hursti developed the Hursti Hack(s), in which he demonstrated how the voting results produced by the Diebold Election Systems voting machines could be altered. HBO turned the Hursti Hack into a documentary called “Hacking Democracy” which was nominated for an Emmy award for outstanding investigative journalism. Hursti is co-author of several studies on data and election security, and his consultancy. Nordic Innovation Labs, advises governments around the world on election vulnerabilities.

Between 1984-1989, Hursti worked for the UNESCO and the Finnish military in technology and cyber defense initiatives.

What do you think of the news that a member of Congress says there is “no doubt” that Russia is behind recent attacks on state election systems: (http://www.reuters.com/article/us-usa-election-cyber-idUSKCN1220SL)?

The article makes several dangerous assumptions about the security of elections and election systems. Representative Adam Schiff said he doubted (Russians) could falsify a vote tally in a way that effects the election outcome. He also said outdated election systems makes this unlikely, but really, it just makes it easier. The voting machines were designed at a time when security wasn’t considered, included, or part of the specifications at all.

These outdated computers are extremely slow. They don’t have the extra horsepower to do decent security on top of the job they were designed for. Basically, the voting machine is as powerful as today’s refrigerator or toaster, but some use the same components and logistics so outdated doesn’t mean it’s forgotten and obsolete, it means that it’s common and therefore a lot of people still today know how those systems work and can subvert them. “Outdated” isn’t offering any protection from an attacker, quite the opposite.

So there’s no proof of voter registration tampering?

As in voting machines, the registration machine don’t have the capability of logging an alteration, and they are trivially altered themselves. It’s meaningless to claim there’s no evidence, since the systems don’t have the capability to report when they’re altered. These are not standard parts of a database so there’s no common sense in saying “there’s some sort of feature that would do that, right?” Unless we study the system we can’t know one way or the other. This isn’t a common sense claim, this is a claim that would require a forensic investigation.

In addition, the number of vendors and different systems is low, so a skillful attacker doesn’t need to learn hundreds of systems; they only need to know a half dozen to control all of the U.S. election systems. But a skillful attacker only needs to learn one system in order to manipulate enough votes to tilt the election, even if it’s not close to tied. This means the attacker has more places to go to be strategic and instead of going to a big jurisdiction, They’ll go to 10 smaller ones with fewer resources and less (attention). But if you calculate the gap you need to fill to alter an election you go to the smaller, underfunded and less technologically savvy districts to own the state.

Also, some states too have made state-wide decisions that one system is used across the state, or jursdictions are central count only. So a statement that the US is decentralized is a false statement. It’s easy to understand why people think that, but from an attacker’s point of view, the threat model, you could not ask for an easier target. And the diversity between small jurisdictions is limited. An attacker can choose the jurisdictions based on the systems they are best skilled to attack.

How can the US be so sure it’s Russia?

It can’t. It is very hard to determine where a network attack is coming from, and it is equally easy to make certain that investigators will find a “trail” pointing in the wrong direction. So, under the assumption that you’re dealing with a skillful attacker, any trail found is itself a red flag: there are so many ways to make a real trail virtually impossible to find that any conclusive-looking trail should be considered suspect. Until you get down to the actual perpetrators and their true identities, you can’t draw a conclusion as to where they come from.

Could it have been Russia?

We can use a working hypothesis, or a reasonable suspicion of Russian involvement, but until you’re down to individual people you don’t know who they are. They might even have been based in Russia, but have arrived there as tourists to carry out their attacks. There’s no way to know who the individual attackers are until they’re confronted.

Given your Cold War background, does this feel familiar?

The Cold War was all about ideology, and a large part of it was what we today call hybrid warfare. In that game the actual technological attacks are just as important as the psychological influencing of the general population with misinformation and misdirection. So yes, this is all very familiar.

Also, something we in the Western world don’t understand is how deeply patriotic Russians are. Individual Russians, and self-organized groups, are willing to go to great lengths on their own initiative if they believe that what they do will benefit Mother Russia, or in the hope and belief that their actions, once known, will be rewarded. Self-initiated actions that resemble organized operations are therefore commonplace. Bearing in mind that these self-organized groups can have members whose day jobs are close to the government, the remaining question is whether the government is aware of these groups, and if so, whether it encourages or discourages them. That we cannot know. But the fact of the matter is that Russia is self-organizing in a way that provides the government plausible deniability, and in many cases it may actually be true that it didn’t know.

It is also worth understanding how high the level of science education is in Russia and the former Eastern Bloc. When East Germany and West Germany united, they had to tone down science education in the East in order to match the West Germans; science education in East Germany was far more advanced. The percentage of people in the general population of Russia who possess the skills needed to carry out this kind of attack is higher than we assume based on Western standards. And that’s not just Russia but the whole Eastern Bloc; it was very high and still is.

Given the number of, say, smartphones and laptops used today how hard is it to fend off an attack?

In today’s world, where “bring your own device” is the model everywhere, we inherently assume every risk the wireless world brings with it. Our laptops and mobile phones are paired with our home networks and every other wireless network we visit. It is still not widely understood how little security WiFi has, and how easy it is for an “evil” access point to gain a connection to a target; once you have a target, you can start working to gain access.

To mitigate this we have two possible paths. One is ultra-high security, with all the restrictions it comes with. The alternative is to assume a breach is imminent and use experts to put an active defense mechanism in place that catches the breach before the attacker can use it to reach valuable information.
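One minimal form of that active defense is continuous comparison of observed activity against a known-good baseline, alerting on anything new rather than trying to make the network impenetrable. The device identifiers below are hypothetical placeholders; a real deployment would draw on actual network telemetry.

```python
# Toy sketch of the "assume breach, detect early" posture:
# maintain a baseline of known devices and flag any newcomer.

# Hypothetical baseline of MAC addresses known to belong to the organization.
KNOWN_DEVICES = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}


def detect_unknown(observed_devices):
    """Return devices seen on the network that are not in the baseline."""
    return set(observed_devices) - KNOWN_DEVICES


# A scan observes one known device and one never-before-seen device.
seen = ["aa:bb:cc:dd:ee:01", "de:ad:be:ef:00:99"]
for device in detect_unknown(seen):
    print(f"ALERT: unknown device on network: {device}")
```

Real active-defense systems baseline far richer signals (processes, logins, data flows), but the principle is the same: detect the deviation before the attacker reaches anything valuable.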

What would an appropriate US response be if the U.S. discovered foreign hackers in its election system?

The first action is obviously to secure your home base. Given the difficulty of identifying the actual attacker, public retaliation against an assumed attacker may be part of the attacker’s plan and may intensify the attack. Hence, public retaliation is not an effective defense. Public disclosure is important, but only after the fact, once the situation has been properly handled.

Finally what is the real risk here? Could Russian hacking throw the Nov. 8 result into doubt? Could Trump supporters, should they lose, blame Russia, for example?

There’s a myriad of risks, starting with the simple fact that attacking the voter registration system is a highway to every crime involving identity theft. Massive breaches of voter registration databases might discourage people from participating in the democratic process, causing them to drop out by ceasing to be registered voters. They also pose national-security-level threats, by allowing malicious actors and adversaries to gain valuable intelligence, whether for personal-level attacks or for hybrid-warfare psyops.

It is also important to understand that data theft is what captures the public’s interest, but data injection or insertion is far more serious. Through such an attack, the U.S. could be set up for later attacks, with false identities planted to be leveraged for multiple purposes in and out of the election space.

For example, a voter registration database interacts with a lot of other government databases, such as criminal records. While common sense might say this kind of interaction should be a one-way street, in reality the implementations quite often allow two-way interaction between the data sources. Therefore, jurisdiction by jurisdiction, it should be carefully analyzed what kind of data propagation inserted voter records could lead to. Remember, only U.S. citizens can be voters, so a registered voter is assumed to already be a citizen.

It’s 10 p.m.: Do you know where your apps are?

Larry Ponemon

Ponemon Institute is pleased to present the results of Application Security in the Changing Risk
Landscape sponsored by F5. The purpose of this study is to understand how today’s security
risks are affecting application security. We surveyed 605 IT and IT security practitioners in the
United States who are involved in their organization’s application security activities.

The majority of respondents (57 percent) say lack of visibility into the application layer is what prevents a strong application security posture. In fact, 63 percent of respondents say attacks at the application layer are harder to detect than at the network layer, and 67 percent of respondents say these attacks are more difficult to contain than at the network layer.

Following are key takeaways from this research.

Lack of visibility in the application layer is the main barrier to achieving a
strong application security posture. Other significant barriers are created by
migration to the cloud (47 percent of respondents), lack of skilled or expert
personnel (45 percent of respondents) and proliferation of mobile devices (43 percent of
respondents).

The frequency and severity of attacks on the application layer are considered greater than at the network layer. Fifty percent of respondents (29 percent + 21 percent) say the application layer is attacked more, and 58 percent of respondents (33 percent + 21 percent) say attacks are more severe than at the network layer. In the past 12 months, the most common security incidents due to insecure applications were SQL injections (29 percent), DDoS (25 percent) and Web fraud (21 percent).

Network security is better funded than application security. On average, 18 percent of the IT security budget is dedicated to application security. More than double that amount (an average of 39 percent) is allocated to network security. As a consequence, only 35 percent of respondents say their organizations have ample resources to detect vulnerabilities in applications, and 30 percent of respondents say they have enough resources to remediate vulnerabilities in applications.

Accountability for the security of applications is in a state of flux. Fifty-six percent of
respondents believe accountability for application security is shifting from IT to the end user or
application owner. However, at this time responsibility for ensuring the security of applications is dispersed throughout the organization. While 21 percent of respondents say the CIO or CTO is accountable, another 20 percent of respondents say no one person or department is responsible.

Twenty percent of respondents say business units are accountable and 19 percent of
respondents say the head of application development is accountable.

Shadow IT affects the security of applications. Respondents estimate that on average their
organizations have 1,175 applications and an average of 33 percent are considered mission
critical. Sixty-six percent of respondents are only somewhat confident (23 percent) or have no
confidence (43 percent) they know all the applications in their organizations. Accordingly, 68
percent of respondents (34 percent + 34 percent) say their IT function does not have visibility into all the applications deployed in their organizations and 65 percent of respondents (32 percent + 33 percent) agree that Shadow IT is a problem.

Mobile and business applications in the cloud are proliferating. An average of 31 percent of
business applications are mobile apps and this will increase to 38 percent in the next 12 months. Today, 37 percent of business applications are in the cloud and this will increase to an average of 46 percent.

The growth in mobile and cloud-based applications is seen as significantly affecting
application security risk. Sixty percent of respondents say mobile apps increase risk (25
percent) or increase risk significantly (35 percent). Fifty-one percent of respondents say cloud-based applications increase risk (25 percent) or increase risk significantly (26 percent).

Hiring and retaining skilled and qualified application developers will improve an
organization’s security posture. Sixty-nine percent of respondents believe the shortage of
skilled and qualified application developers puts their applications at risk. Moreover, 67 percent of respondents say the “rush to release” causes application developers in their organization to
neglect secure coding procedures and processes.

Cyber security threats will weaken application security programs, but new IT security and
privacy compliance requirements will strengthen these programs. Eighty-eight percent of
respondents are concerned that new and emerging cyber security threats will affect the security
of applications. In contrast, 54 percent of respondents say new and emerging IT security and
privacy compliance requirements will help their security programs. According to respondents,
there are more trends expected to weaken application security than will strengthen security.

The responsibility for securing applications will move closer to the application developer.

Sixty percent of respondents anticipate the application developer will assume more responsibility for the security of applications. Testing for vulnerabilities should take place in the design and development phase of the system development life cycle (SDLC). Today, most applications are tested in the launch or post-launch phase (61 percent). In the future, the goal is to perform more testing in the design and development phase (63 percent).

Do secure coding practices affect the application delivery cycle? Fifty percent of respondents say secure coding practices, such as penetration testing, slow down the application delivery cycle within their organizations significantly (12 percent of respondents) or somewhat (38 percent of respondents). However, 44 percent of respondents say there is no slowdown.

How secure coding practices will change. The secure coding practices most often performed
today are: run applications in a safe environment (67 percent of respondents), use automated
scanning tools to test applications for vulnerabilities (49 percent of respondents) and perform
penetration testing procedures (47 percent of respondents). In the next 24 months, the practices most likely to be performed are: run applications in a safe environment (80 percent of respondents) and monitor the runtime behavior of applications to determine if tampering has occurred (65 percent of respondents).


Submarine builder declares 'economic warfare' as plans for ship said to be hacked; now what?

Bob Sullivan

Get used to another term in the world of computer hacking: “economic warfare.”

A French firm building multi-billion-dollar submarines for Australia and several other nations says it was the victim of economic warfare after schematics for similar subs being built for India were released online, allegedly by hackers. The data was published by Australian media.

The firm, DCNS, is currently bidding for military contracts in Poland and Norway. For the India gig, it had beaten out German and Japanese firms.

An embarrassing data leak would obviously hurt the French firm’s bid for more deals — in addition to perhaps imperiling the security of its current projects.

“DCNS has been made aware of articles published in the Australian press related to the leakage of sensitive data about Indian Scorpene,” the firm said on its website. “This serious matter is thoroughly investigated by the proper French national authorities for Defense Security. This investigation will determine the exact nature of the leaked documents, the potential damages to DCNS customers as well as the responsibilities for this leakage.”

Right now, there’s only speculation about how much the allegedly stolen data might impact the security of the ships when they arrive in India — and the security of similar DCNS ships in Malaysia and Chile.

But DCNS immediately suggested that rivals might be to blame for the leak.

“Competition is getting tougher and tougher, and all means can be used in this context,” a company spokesperson said to Reuters. “There is India, Australia and other prospects, and other countries could raise legitimate questions over DCNS. It’s part of the tools in economic warfare.”

It’s clearly too early to know, however, whether simple corporate espionage is to blame, or whether some military advantage might be gained from publication of the documents. Given that the alleged hackers sent the data to a media outlet, it’s also possible their motivation was political.

The incident does highlight the asymmetrical nature of digital “warfare,” however. A billion-dollar project involving thousands of employees can be derailed by a single person with a digital file and the e-mail address of a journalist.

“If this was economic warfare as speculated, we can expect more attacks like this on a global scale,” said Scott Gordon, COO at file security firm FinalCode. “Hacktivists are motivated by reputational, economic and political gains from capitalizing on businesses’ and countries’ inability to secure sensitive, critical documents— tipping the scale in favor of other contenders in future military action and contracting situations.”

It also shows how hard it is to keep data under wraps when multiple third-party contractors have to share information in large projects.

“Sharing files, such as the 22,000-plus pages of blueprints and technical details on DCNS’s Scorpene submarines, is a necessary collaboration between government, contractor and manufacturing entities,” Gordon said. “But the exposure of these Indian naval secrets illustrates how lax file protection has opened a door to new data loss risks—and how even confidential military information can be exfiltrated and exposed by a weak link in the supply chain.”
