Category Archives: Uncategorized

Russians attacking U.S. election systems? Here’s the real risk, from a man who fought Soviet electronic attacks during the Cold War

Bob Sullivan

With U.S. officials openly blaming Russia for hacker attacks on state election computer systems, and the myriad possibilities for election chaos such attacks raise, it’s important to put them in proper context. I went to Harri Hursti, a globally-known election security consultant, for some answers.  Hursti cut his teeth in the Finnish military fending off electronic attacks, so he has valuable perspective – particularly on a unique part of Russian culture which could explain who is really behind the attacks.  He also explains the potential for psychological warfare in this incident, and why it all feels a bit familiar to his Cold War sensibilities.

Harri Hursti developed the Hursti Hacks, in which he demonstrated how the voting results produced by Diebold Election Systems voting machines could be altered. HBO turned the Hursti Hack into a documentary called “Hacking Democracy,” which was nominated for an Emmy award for outstanding investigative journalism. Hursti is co-author of several studies on data and election security, and his consultancy, Nordic Innovation Labs, advises governments around the world on election vulnerabilities.

Between 1984 and 1989, Hursti worked for UNESCO and the Finnish military on technology and cyber defense initiatives.

What do you think of the news that a member of Congress says there is “no doubt” that Russia is behind recent attacks on state election systems: (http://www.reuters.com/article/us-usa-election-cyber-idUSKCN1220SL)?

The article makes several dangerous assumptions about the security of elections and election systems. Representative Adam Schiff said he doubted (Russians) could falsify a vote tally in a way that affects the election outcome. He also said outdated election systems make this unlikely, but really, being outdated just makes it easier. The voting machines were designed at a time when security wasn’t considered, included, or part of the specifications at all.

These outdated computers are extremely slow. They don’t have the extra horsepower to do decent security on top of the job they were designed for. Basically, a voting machine is about as powerful as today’s refrigerator or toaster, and some use the same components and logistics. So “outdated” doesn’t mean forgotten and obsolete; it means common, and therefore a lot of people still know today how those systems work and can subvert them. “Outdated” isn’t offering any protection from an attacker; quite the opposite.

So there’s no proof of voter registration tampering?

As with voting machines, the registration systems don’t have the capability of logging an alteration, and they are trivially altered themselves. It’s meaningless to claim there’s no evidence, since the systems can’t report when they’ve been altered. Tamper logging is not a standard part of a database, so there’s no common sense in assuming “there’s some sort of feature that would do that, right?” Unless we study the system, we can’t know one way or the other. This isn’t a common-sense claim; it’s a claim that would require a forensic investigation.
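Hursti’s point, that these systems cannot report their own alteration, is exactly what tamper-evident logging is designed to address. As a minimal sketch (not a feature of any actual voting or registration product), a hash-chained audit log makes silent edits detectable: each entry commits to the one before it, so changing or removing a record breaks the chain.

```python
import hashlib
import json

def append_entry(log, record):
    """Append a record to a hash-chained log; each entry commits to the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev_hash, "hash": entry_hash})
    return log

def verify(log):
    """Return True only if no entry has been altered or removed mid-chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

# Two legitimate changes, then a silent edit to the first record.
log = []
append_entry(log, {"voter": "A", "action": "register"})
append_entry(log, {"voter": "B", "action": "update"})
intact = verify(log)          # True: chain is consistent
log[0]["record"]["action"] = "delete"
tampered_ok = verify(log)     # False: the altered record no longer matches its hash
```

The point of the sketch is the contrast with the systems Hursti describes: without something like this, “no evidence of tampering” and “no capability to record tampering” are indistinguishable.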

In addition, the number of vendors and distinct systems is low, so a skillful attacker doesn’t need to learn hundreds of systems; they only need to know a half dozen to control all of the U.S. election systems. In fact, a skillful attacker only needs to learn one system in order to manipulate enough votes to tilt the election, even if it’s not close to tied. This means the attacker has more places to go to be strategic: instead of going to a big jurisdiction, they’ll go to 10 smaller ones with fewer resources and less attention. If you calculate the gap you need to fill to alter an election, you go to the smaller, underfunded, less technologically savvy districts to own the state.

Also, some states have made state-wide decisions that one system is used across the state, or that jurisdictions are central-count only. So the statement that the U.S. election system is decentralized is false. It’s easy to understand why people think that, but from an attacker’s point of view (the threat model), you could not ask for an easier target. The diversity between small jurisdictions is limited, and an attacker can choose jurisdictions based on the systems they are best skilled to attack.

How can the US be so sure it’s Russia?

It can’t. It is very hard to find where a network attack is coming from, and it is equally easy to make certain that investigators will find “a trail” pointing in the wrong direction. If you assume you’re dealing with a skillful attacker, any trail found is itself a red flag, because there are so many ways to make the real trail virtually impossible to find. Any conclusive-looking trail that is “found” should be considered suspect. You can only say you suspect someone; until you get down to the actual perpetrators’ true identities, you can’t draw a conclusion about where they came from.

Could it have been Russia?

We could use a working hypothesis, or a reasonable suspicion of Russian involvement, but until you’re down to individual people you don’t know who they are. They might even have been based in Russia, but have arrived there as tourists to carry out their attacks. There’s no way to know who the individual attackers are until they’re confronted.

Given your Cold War background, does this feel familiar?

The Cold War was all about ideology, and a large part of it was something we today call hybrid warfare. In that game, the technological attacks are as important as the psychological influencing of the general population with misinformation and misdirection. So this is all very familiar.

Also, something we in the Western world don’t understand is how deeply patriotic Russians are. Individual Russians, and self-organized groups, are willing to go to great lengths on their own initiative if they believe that what they do will benefit Mother Russia, or in the hope and belief that their actions, once known, will be rewarded. Self-initiated actions that resemble organized operations are commonplace. Bearing in mind that these self-organized groups can have members whose day jobs are close to the government, the remaining question is: is the government aware of these groups, and if so, is it encouraging or discouraging them? That is something we cannot know. But the fact of the matter is that Russia is self-organizing and self-providing the capability of plausible deniability, which in many cases can actually be true: they didn’t know.

Also, it is good to understand how high the level of science education is in Russia and the Eastern Bloc. When East Germany and West Germany reunited, they had to tone down the science education in East Germany to match the West Germans; it had been far stronger. The percentage of people in the general population of Russia who possess the relevant skill sets for carrying out this kind of attack is higher than we assume based on Western standards. And that’s not just Russia but the whole Eastern Bloc; it was very high and still is.

Given the number of, say, smartphones and laptops used today, how hard is it to fend off an attack?

In today’s world, where we have “bring your own device” models everywhere, we inherently assume every risk the wireless world brings. Our laptops and mobile phones are paired to our home networks and the other wireless networks we visit. It is still not understood how little security WiFi has, and how easy it is, with an “evil” access point, to gain a connection to a target; once you have a target, you can start working to gain access.
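The “evil” access point Hursti mentions typically impersonates a network a device already trusts. One common detection heuristic, sketched below purely as an illustration, is to flag a network name (SSID) that suddenly appears under more than one hardware address (BSSID). The function name and the observation format are hypothetical; in practice the (SSID, BSSID) pairs would come from a wireless scanner reading beacon frames.

```python
from collections import defaultdict

def find_suspect_ssids(observations):
    """Flag SSIDs advertised by more than one BSSID (access-point hardware address).

    observations: iterable of (ssid, bssid) pairs, e.g. collected from
    802.11 beacon frames. Multiple BSSIDs per SSID is normal for large
    enterprise networks, but for a home or small-office network it can
    indicate an "evil twin" access point impersonating the real one.
    """
    seen = defaultdict(set)
    for ssid, bssid in observations:
        seen[ssid].add(bssid)
    return {ssid: bssids for ssid, bssids in seen.items() if len(bssids) > 1}

# "HomeNet" is being advertised by two different radios; "CafeWiFi" is not.
obs = [
    ("HomeNet", "aa:bb:cc:00:00:01"),
    ("HomeNet", "de:ad:be:ef:00:02"),
    ("CafeWiFi", "11:22:33:44:55:66"),
]
flagged = find_suspect_ssids(obs)
```

This is only a heuristic: a skilled attacker can clone the legitimate BSSID too, which is part of why Hursti treats WiFi as inherently weak rather than something detection alone can fix.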

To mitigate this, we have two possible paths. One path is ultra-high security, with all the restrictions that come with it. The alternative is to assume a breach is imminent and employ experts to put in place an active defense mechanism that catches the breach before the attacker can use it to gain access to valuable information.

What would an appropriate US response be if the U.S. discovered foreign hackers in its election system?

The first action is obviously to secure your home base. Taking into account the difficulty of identifying the actual attacker, a public retaliation towards an assumed attacker may be part of the attacker’s plan and intensify the attack. Hence, public retaliation is not an effective defense. Public disclosure is important, but after the fact and after the situation has been properly handled.

Finally what is the real risk here? Could Russian hacking throw the Nov. 8 result into doubt? Could Trump supporters, should they lose, blame Russia, for example?

There’s a myriad of risks, starting with the simple fact that attacking the voter registration system is a highway to all crimes involving identity theft. Massive breaches of voter registration databases might discourage people from participating in the democratic process and cause them to drop out by ceasing to be registered voters. It also poses national-security-level threats by allowing malicious actors and adversaries to gain valuable intel, whether for personal-level attacks or for hybrid-warfare psyops.

It is also important to understand that data theft is what captures the public’s interest, but undetected injection or insertion of records is far more serious. Through such an attack, the U.S. could be set up for later attacks, with false identities planted to be leveraged for multiple purposes in and out of the election space.

For example, a voter registration database interacts with a lot of government databases, such as criminal records. While common sense might say this kind of interaction should be a one-way street, in reality the implementations quite often allow two-way interaction between the data sources. Therefore, jurisdiction by jurisdiction, it should be carefully analyzed what kind of data propagation inserted voter records could lead to. Remember, only U.S. citizens can be voters, so a registered voter is assumed to be a citizen already.


It’s 10 p.m.: Do you know where your apps are?

Larry Ponemon

Ponemon Institute is pleased to present the results of Application Security in the Changing Risk
Landscape sponsored by F5. The purpose of this study is to understand how today’s security
risks are affecting application security. We surveyed 605 IT and IT security practitioners in the
United States who are involved in their organization’s application security activities.

The majority of respondents (57 percent) say it is the lack of visibility in the application layer that is preventing a strong application security posture. In fact, 63 percent of respondents say attacks at the application layer are harder to detect than at the network layer, and 67 percent say these attacks are more difficult to contain than at the network layer.

Following are key takeaways from this research.

Lack of visibility in the application layer is the main barrier to achieving a
strong application security posture. Other significant barriers are created by
migration to the cloud (47 percent of respondents), lack of skilled or expert
personnel (45 percent of respondents) and proliferation of mobile devices (43 percent of
respondents).

The frequency and severity of attacks on the application layer are considered greater than at the network layer. Fifty percent of respondents (29 percent + 21 percent) say the application layer is attacked more, and 58 percent (33 percent + 21 percent) say attacks there are more severe than at the network layer. In the past 12 months, the most common security incidents due to insecure applications were SQL injections (29 percent), DDoS (25 percent) and Web fraud (21 percent).
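SQL injection, the most common incident in the survey, is worth a concrete look because the fix is so mechanical. The sketch below (an in-memory SQLite example, not taken from the report) shows the classic failure of concatenating user input into a query, and the parameterized form that closes it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: the input is spliced straight into the SQL text, so the
# attacker's OR clause becomes part of the query and matches every row.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
rows_vulnerable = conn.execute(vulnerable).fetchall()

# Safe: a parameterized query treats the input as data, not SQL, so the
# whole payload is compared literally against the name column.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
```

The vulnerable query returns the admin row despite the bogus name; the parameterized one returns nothing. Automated scanners catch many of these, but as the budget figures below suggest, finding them and having resources to remediate them are different problems.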

Network security is better funded than application security. On average, 18 percent of the IT security budget is dedicated to application security. More than double that amount (an average of 39 percent) is allocated to network security. As a consequence, only 35 percent of respondents say their organizations have ample resources to detect vulnerabilities in applications, and 30 percent of respondents say they have enough resources to remediate vulnerabilities in applications.

Accountability for the security of applications is in a state of flux. Fifty-six percent of
respondents believe accountability for application security is shifting from IT to the end user or
application owner. However, at this time responsibility for ensuring the security of applications is dispersed throughout the organization. While 21 percent of respondents say the CIO or CTO is accountable, another 20 percent of respondents say no one person or department is responsible.

Twenty percent of respondents say business units are accountable and 19 percent of
respondents say the head of application development is accountable.

Shadow IT affects the security of applications. Respondents estimate that on average their
organizations have 1,175 applications and an average of 33 percent are considered mission
critical. Sixty-six percent of respondents are only somewhat confident (23 percent) or have no
confidence (43 percent) they know all the applications in their organizations. Accordingly, 68
percent of respondents (34 percent + 34 percent) say their IT function does not have visibility into all the applications deployed in their organizations and 65 percent of respondents (32 percent + 33 percent) agree that Shadow IT is a problem.

Mobile and business applications in the cloud are proliferating. An average of 31 percent of
business applications are mobile apps and this will increase to 38 percent in the next 12 months. Today, 37 percent of business applications are in the cloud and this will increase to an average of 46 percent.

The growth in mobile and cloud-based applications is seen as significantly affecting application security risk. Sixty percent of respondents say mobile apps increase risk (25 percent) or increase risk significantly (35 percent). Fifty-one percent of respondents say cloud-based applications increase risk (25 percent) or increase risk significantly (26 percent).

Hiring and retaining skilled and qualified application developers will improve an organization’s security posture. Sixty-nine percent of respondents believe the shortage of skilled and qualified application developers puts their applications at risk. Moreover, 67 percent of respondents say the “rush to release” causes application developers in their organization to neglect secure coding procedures and processes.

Cyber security threats will weaken application security programs, but new IT security and
privacy compliance requirements will strengthen these programs. Eighty-eight percent of
respondents are concerned that new and emerging cyber security threats will affect the security
of applications. In contrast, 54 percent of respondents say new and emerging IT security and
privacy compliance requirements will help their security programs. According to respondents,
there are more trends expected to weaken application security than to strengthen it.

The responsibility for securing applications will move closer to the application developer. Sixty percent of respondents anticipate the application developer will assume more responsibility for the security of applications. Testing for vulnerabilities should take place in the design and development phase of the system development life cycle (SDLC). Today, most applications are tested in the launch or post-launch phase (61 percent). In the future, the goal is to perform more testing in the design and development phase (63 percent).

Do secure coding practices affect the application delivery cycle? Fifty percent of
respondents say secure coding practices, such as penetration testing, slow down the application delivery cycle within their organizations significantly (12 percent of respondents) or somewhat (38 percent of respondents). However, 44 percent of respondents say there is no slowdown.

How secure coding practices will change. The secure coding practices most often performed
today are: run applications in a safe environment (67 percent of respondents), use automated
scanning tools to test applications for vulnerabilities (49 percent of respondents) and perform
penetration testing procedures (47 percent of respondents). In the next 24 months, the following practices will most likely be performed: run applications in a safe environment (80 percent of respondents), monitor the runtime behavior of applications to determine if tampering has occurred (65 percent

It's 10 p.m.: Do you know where are your apps are?

Larry Ponemon

Larry Ponemon

Ponemon Institute is pleased to present the results of Application Security in the Changing Risk
Landscape sponsored by F5. The purpose of this study is to understand how today’s security
risks are affecting application security. We surveyed 605 IT and IT security practitioners in the
United States who are involved in their organization’s application security activities.

The majority of respondents (57 percent) say it is the lack of visibility in the
application layer that is preventing a strong application security. In fact, 63 percent of respondents say attacks at the application
layer are harder to detect than at the network layer and 67 percent of
respondents say these attacks are more difficult to contain than at the network
layer.

Following are key takeaways from this research.

Lack of visibility in the application layer is the main barrier to achieving a
strong application security posture. Other significant barriers are created by
migration to the cloud (47 percent of respondents), lack of skilled or expert
personnel (45 percent of respondents) and proliferation of mobile devices (43 percent of
respondents).

The frequency and severity of attacks on the application layer is considered greater than
at the network layer. Fifty percent of respondents (29 percent + 21 percent) say the application is attacked more and 58 percent of respondents (33 percent + 21 percent) say attacks are more severe than at the network layer. In the past 12 months, the most common security incidents due to insecure applications were: SQL injections (29 percent), DDoS (25 percent) and Web fraud (21 percent).

Network security is better funded than application security. On average, 18 percent of the IT security budget is dedicated to application security. More than double that amount (an average of 39 percent) is allocated to network security. As a consequence, only 35 percent of respondents say their organizations have ample resources to detect vulnerabilities in applications, and 30 percent of respondents say they have enough resources to remediate vulnerabilities in applications.

Accountability for the security of applications is in a state of flux. Fifty-six percent of
respondents believe accountability for application security is shifting from IT to the end user or
application owner. However, at this time responsibility for ensuring the security of applications is dispersed throughout the organization. While 21 percent of respondents say the CIO or CTO is accountable, another 20 percent of respondents say no one person or department is responsible.

Twenty percent of respondents say business units are accountable and 19 percent of
respondents say the head of application development is accountable.

Shadow IT affects the security of applications. Respondents estimate that on average their
organizations have 1,175 applications and an average of 33 percent are considered mission
critical. Sixty-six percent of respondents are only somewhat confident (23 percent) or have no
confidence (43 percent) they know all the applications in their organizations. Accordingly, 68
percent of respondents (34 percent + 34 percent) say their IT function does not have visibility into all the applications deployed in their organizations and 65 percent of respondents (32 percent + 33 percent) agree that Shadow IT is a problem.

Mobile and business applications in the cloud are proliferating. An average of 31 percent of
business applications are mobile apps and this will increase to 38 percent in the next 12 months. Today, 37 percent of business applications are in the cloud and this will increase to an average of 46 percent.

The growth in mobile and cloud-based applications is seen as significantly affecting
application security risk. Sixty percent of respondents say mobile apps increase risk (25
percent) or increase risk significantly (35 percent). Fifty-one percent of respondents say cloud based applications increase risk (25 percent) or increase risk significantly (26 percent).
Hiring and retaining skilled and qualified application developers will improve an
organization’s security posture. Sixty-nine percent of respondents believe the shortage of
skilled and qualified application developers puts their applications at risk. Moreover, 67 percent of respondents say the “rush to release” causes application developers in their organization to
neglect secure coding procedures and processes.

Cyber security threats will weaken application security programs, but new IT security and
privacy compliance requirements will strengthen these programs. Eighty-eight percent of
respondents are concerned that new and emerging cyber security threats will affect the security
of applications. In contrast, 54 percent of respondents say new and emerging IT security and
privacy compliance requirements will help their security programs. According to respondents,
there are more trends expected to weaken application security than will strengthen security.
The responsibility for securing applications will move closer to the application developer.

Sixty percent of respondents anticipate the application developer will assume more responsibility for the security of applications. Testing for vulnerabilities should take place in the design and development phase of the system development life cycle (SDLC). Today, most applications are tested in the launch or post-launch phase (61 percent). In the future, the goal is to perform more testing in the design and development phase (63 percent).

Do secure coding practices affect the application delivery cycle? Fifty percent of
respondents say secure coding practices, such as penetration testing, slow down the application delivery cycle within their organizations significantly (12 percent of respondents) or somewhat (38 percent of respondents). However, 44 percent of respondents say there is no slowdown.

How secure coding practices will change. The secure coding practices most often performed
today are: run applications in a safe environment (67 percent of respondents), use automated
scanning tools to test applications for vulnerabilities (49 percent of respondents) and perform
penetration testing procedures (47 percent of respondents). In the next 24 months, the following practices will most likely be performed: run applications in a safe environment (80 percent of respondents) and monitor the runtime behavior of applications to determine if tampering has occurred (65 percent of respondents).

Submarine builder declares ‘economic warfare’ as plans for ship said to be hacked; now what?

Bob Sullivan

Bob Sullivan

Get used to another term in the world of computer hacking: “economic warfare.”

A French firm building multi-billion-dollar submarines for Australia and several other nations says it was the victim of economic warfare after some of its schematics for similar subs being built for India were released online, allegedly by hackers. The data was published by Australian media.

The firm, DCNS, is currently bidding for military contracts in Poland and Norway. For the India gig, it had beaten out German and Japanese firms.

An embarrassing data leak would obviously hurt the French firm’s bid for more deals — in addition to perhaps imperiling the security of its current projects.

“DCNS has been made aware of articles published in the Australian press related to the leakage of sensitive data about Indian Scorpene,” the firm said on its website. “This serious matter is thoroughly investigated by the proper French national authorities for Defense Security. This investigation will determine the exact nature of the leaked documents, the potential damages to DCNS customers as well as the responsibilities for this leakage.”

Right now, there’s only speculation about how much the allegedly stolen data might impact the security of the ships when they arrive in India — and the security of similar DCNS ships in Malaysia and Chile.

But DCNS immediately suggested that rivals might be to blame for the leak.

“Competition is getting tougher and tougher, and all means can be used in this context,” a company spokesperson said to Reuters. “There is India, Australia and other prospects, and other countries could raise legitimate questions over DCNS. It’s part of the tools in economic warfare.”

It’s clearly too early to know, however, if simple corporate espionage is to blame — or whether there might be some military advantage to be gained from publication of the documents. Given that the alleged hackers sent the data to a media outlet, it’s also possible their motivation was political.

The incident does highlight the asymmetrical nature of digital “warfare,” however. A billion-dollar project involving thousands of employees can be derailed by a single person with a digital file and the e-mail address of a journalist.

“If this was economic warfare as speculated, we can expect more attacks like this on a global scale,” said Scott Gordon, COO at file security firm FinalCode. “Hacktivists are motivated by reputational, economic and political gains from capitalizing on businesses’ and countries’ inability to secure sensitive, critical documents— tipping the scale in favor of other contenders in future military action and contracting situations.”

It also shows how hard it is to keep data under wraps when multiple third-party contractors have to share information in large projects.

“Sharing files, such as the 22,000-plus pages of blueprints and technical details on DCNS’s Scorpene submarines, is a necessary collaboration between government, contractor and manufacturing entities,” Gordon said. “But the exposure of these Indian naval secrets illustrates how lax file protection has opened a door to new data loss risks—and how even confidential military information can be exfiltrated and exposed by a weak link in the supply chain.”

From hunted to hunter

Larry Ponemon

Larry Ponemon

The purpose of the “Don’t Wait: The Evolution of Proactive Threat Hunting” survey, sponsored by Raytheon, is to examine how organizations are deploying managed security services to strengthen their security posture. The research also looks at the critical success factors, barriers and challenges to having a successful relationship with managed security services providers.

We surveyed 1,784 chief information security officers and other senior IT security leaders in North America, Europe, Middle East and Asia Pacific[1] who are familiar with their organizations’ managed security service practices. Managed security services providers (MSSPs) are engaged by organizations to manage and strengthen their IT environment’s security by providing services including security information and event management (SIEM), network security management (NSM), endpoint detection and response (EDR), incident response, forensics and more.

Security tools such as anti-virus, firewalls, intrusion detection and sandbox technologies, are built upon the assumption that attackers adhere to a known set of tools and tactics. Today, while a majority of MSSPs focus on these traditional, reactive tools, some provide more advanced, proactive services. Proactive threat hunting services can effectively find sophisticated and damaging threats, including previously undetected attacks, and stop them before businesses suffer damage.

In this study, 56 percent of respondents use an MSSP and 22 percent say they plan to engage an MSSP in the future. Part 2 of this report provides analysis of the 56 percent who are engaged with a provider. In many cases, it is a serious security incident such as a data breach that motivates companies to engage an MSSP to strengthen their security posture.

A key takeaway is that organizations using MSSPs understand the primary benefits of leveraging external expertise. Eighty percent view MSS as essential, very important or important to their overall IT security strategy. Figure 1 shows the primary reason to have an MSSP is to improve security posture (59 percent). This is followed closely by the need to reduce the challenge of recruiting and retaining necessary talent (58 percent) and the lack of in-house security technologies (57 percent).

The following are the seven most salient research findings.

 1. MSSPs help companies achieve a stronger security posture. With evolving cyber threats, organizations face the critical challenge of lack of expertise, personnel and resources. MSSPs are seen as filling these gaps to improve their security.

Many organizations worldwide still typically wait until after a breach before the money is allocated to engage an MSSP. Two-thirds of organizations not currently using an MSSP say that the top trigger would be a significant data loss resulting from an IT security incident.

A breach would confirm that the organization’s risk of compromise is high, so it becomes a priority.

2. A shift from reactive services to proactive services offered by providers and demanded by organizations is occurring but is still in the early stages. The lack of proactive threat hunting services could be contributing to the daily barrage of media headlines about data breaches in organizations worldwide. It highlights a need for organizations to be doing more to protect their networks from the most insidious threats. Currently, MSSPs offer cybersecurity assessment (39 percent), integration services (31 percent) and digital forensics and incident response (DFIR) engineering and/or assessment (28 percent). Only 16 percent say their MSS offers proactive threat hunting to find advanced threats based on behaviors and anomalies.

3. Interoperability with security intelligence tools such as SIEM is essential or very important. When asked what characteristics of MSSPs are essential or very important, the number one feature is high interoperability with the company’s security intelligence tools, such as SIEM (73 percent). Also critical are speedy deployment (65 percent), round-the-clock threat monitoring and management (63 percent), a tried and tested service offering (62 percent) and scalability of services (61 percent). Not as critical are compliance with data protection requirements (52 percent) and indemnification for service failures (36 percent).

Whether organizations use MSSPs or not, interoperability/integration between MSSP and the customer is top priority. Those currently not using one say it is difficult to find MSSPs that would support or integrate with their systems and requirements. Fifty-three percent list difficulty finding vendors strong in interoperability as the reason they choose not to outsource.

4. MSSPs provide insights about security events and a better understanding of the external threat environment. Sixty-five percent of respondents believe their MSSP leverages insight gained from monitoring a large number of security events from a global customer base and 53 percent say the MSSP helps to better understand the external threat environment through the collection and analysis of information on attackers, methods and motives. More than half (51 percent) say it effectively mitigates the risks after they are identified.

5. MSSPs have identified existing software vulnerabilities that are more than three months old. Fifty-four percent of respondents say their MSSPs identified exploits of existing software vulnerabilities greater than three months old, and 45 percent say exploits of existing software vulnerabilities less than three months old have been discovered. They also revealed Web-borne malware attacks (51 percent). New threats are often going undetected because typical providers are not actively identifying new threats but importing threats identified by industry into their toolsets.

6. Responsibility for relationships with MSSPs is shifting. Fifty-nine percent say responsibility for the MSSP is shifting from IT to the lines of business. Today, however, the IT (43 percent) or IT security professional (15 percent) owns their organizations’ relationships with MSSPs. This represents a trend that MSS services are not considered a commodity but a strategic element and competitive advantage companies can foster. One reason for this shift is that in many organizations the CEO and board of directors now have a responsibility to the shareholders to ensure that companies are protected.

7. A lack of visibility into the outsourcer’s IT security infrastructure is a barrier to successful outsourcing of security services. Fifty-one percent say a lack of visibility into the outsourcer’s IT security infrastructure is the main hindrance to a successful approach to outsourcing. Other barriers are inconsistency with the organization’s culture (49 percent) and turf or silo issues between the organization’s IT security operations team and the outsourcer (46 percent).

To read the rest of this report, click here

 

[1] The countries represented in these regions are: United States, Canada, United Kingdom, Denmark, France, Germany, Netherlands, Brunei, Kuwait, Saudi Arabia, Oman, Qatar, UAE, India, Australia, Japan, Singapore and South Korea.

 

New worries about ransomware — attacking smartphones

Bob Sullivan

Bob Sullivan

There’s been a scary increase in successful ransomware attacks against large organizations this year. Specifically, hospitals have found themselves at the mercy of hackers who demand ransom payments to unlock critical system files. Recently, there have been signs that these criminals have moved on to universities, too. The University of Calgary admitted to Canadian media last month that it paid a $20,000 ransom “to address system issues.”

But individuals have something new to worry about. A new report from Kaspersky Lab says its detection rate for mobile ransomware — malicious software targeting smartphones and demanding ransoms — quadrupled in one year.

It’s easy to see why phone ransomware would work. Consumers fly into a panic when their phone battery dies; imagine what it’s like to see a message saying your phone is locked, and a $100 payment is required to unlock it.

Kaspersky says some ransomware criminals simply require that mobile victims type in an iTunes gift card number to free the device. I’ve written recently about the increasing use of Apple card payments for fraud.

A combination of easy, anonymous payments and off-the-shelf copycatting software tools makes mobile ransomware a new and potentially dangerous threat, both to consumers and to the companies that employ them.

The numbers tell the story: From April 2014 to March 2015, Kaspersky Lab security solutions for Android protected 35,413 users from mobile ransomware. A year later the number had increased almost four-fold to 136,532 users.

It’s unclear from the report how users encounter mobile ransomware in the first place, though at least some get it when visiting porn sites and are tricked into downloading and installing malicious software.

“The extortion model is here to stay,” Kaspersky says in its report. “Mobile ransomware emerged as a follow-up to PC ransomware and it is likely that it will be followed-up with malware targeting devices that are very different to a PC or a smartphone. These could be connected devices: like smart watches, smart TVs, and other smart products including home and in-car entertainment systems. There are a few proof-of-concepts for some of these devices, and the appearance of actual malware targeting smart devices is only a question of time.”

Kaspersky offers these tips to consumers:

Back-up is a must. If you ever thought that one day you would finally download and install that strange boring back-up software, today is the day. The sooner back-up becomes yet another rule in your day-to-day PC activity, the sooner you will become invulnerable to any kind of ransomware.

Use a reliable security solution. And when using it do not turn off the advanced security features which it most certainly has. Usually these are features that enable the detection of new ransomware based on its behavior.

Keep the software on your PC up-to-date. Most widely-used programs (Flash, Java, Chrome, Firefox, Internet Explorer, Microsoft Windows and Office) have an automatic updates feature. Keep it turned on, and don’t ignore requests from these applications for the installation of updates.

Keep an eye on files you download from the Internet. Especially from untrusted sources. In other words, if what is supposed to be an mp3 file has an .exe extension, it is definitely not a musical track but malware. The best way to be sure that everything is fine with the downloaded content is to make sure it has the right extension and has successfully passed the checks run by the protection solution on your PC.
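Kaspersky’s extension advice can be sketched as a simple check. This is an illustrative snippet, not code from the report; the function name and filenames are made up, and it only flags the classic double-extension trick (a “song” that is really an executable):

```python
import os

def is_suspicious_download(filename, expected_ext):
    """Return True if the file's actual extension differs from the expected one."""
    # splitext keeps only the final extension, so "track.mp3.exe" yields ".exe"
    _root, actual_ext = os.path.splitext(filename.lower())
    return actual_ext != expected_ext.lower()

print(is_suspicious_download("track.mp3", ".mp3"))      # False: extension matches
print(is_suspicious_download("track.mp3.exe", ".mp3"))  # True: executable disguised as music
```

Note that on Windows, Explorer hides known extensions by default, which is exactly why the double-extension trick works on unwary users.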

Keep yourself informed of the new approaches cyber-crooks use to lure their victims into installing malware.

The state of SMB security: 10 findings

Larry Ponemon

Larry Ponemon

No business is too small to be the target of a cyber attack or data breach. Unfortunately, smaller organizations may not have the budget and in-house expertise to harden their systems and networks against potential threats. In fact, only 14 percent of the companies represented in this study rate their ability to mitigate cyber risks, vulnerabilities and attacks as highly effective. Moreover, the introduction of cloud applications and infrastructure and more mobile devices is creating more security risks that will stretch these companies’ resources.

Ponemon Institute is pleased to present the results of the 2016 State of Cybersecurity in Small and Medium-Sized Business sponsored by Keeper Security. We surveyed 598 individuals in companies with headcounts ranging from fewer than 100 to 1,000 employees.

Some 55 percent of these respondents say their companies have experienced a cyber attack in the past 12 months, and 50 percent report they had data breaches involving customer and employee information in the past 12 months. In the aftermath of these incidents, these companies spent an average of $879,582 because of damage or theft of IT assets. In addition, disruption to normal operations cost an average of $955,429.

The following 10 findings reveal the state of cybersecurity in smaller businesses.

  1. The most prevalent attacks against smaller businesses are Web-based and phishing/social engineering.
  2. Negligent employees or contractors and third parties caused most data breaches. However, almost one-third of companies in this research could not determine the root cause.
  3. Companies are most concerned about the loss or theft of their customers’ information and their intellectual property.
  4. Strong passwords and biometrics are believed to be an essential part of the security defense. However, 59 percent of respondents say they do not have visibility into employees’ password practices, such as the use of unique or strong passwords and the sharing of passwords with others.
  5. Password policies are not strictly enforced. Among companies that have a password policy, 65 percent of respondents say it is not strictly enforced. Moreover, the policy does not require employees to use a password or biometric to secure access to mobile devices.
  6. Current technologies cannot detect and block many cyber attacks. Most exploits have evaded intrusion detection systems and anti-virus solutions.
  7. Personnel, budget and technologies are insufficient to have a strong security posture. As a result, some companies engage managed security service providers to support an average of 34 percent of their IT security operations.
  8. Determination of IT security priorities is not centralized. The two functions most responsible are the chief executive and chief information officer. However, 35 percent of respondents say no one function in their company determines IT security priorities.
  9. Web and intranet servers are considered the most vulnerable endpoints or entry points to networks and enterprise systems. The challenge of not having adequate resources may prevent many companies from investing in the technologies to mitigate these risks. Web application firewalls, SIEM, endpoint management and network traffic intelligence are not considered very important in current security strategies. At a minimum, anti-malware and client firewalls are considered the most important security technologies.
  10. Cloud usage and mobile devices that access business-critical applications and IT infrastructure will increase and threaten the security posture of companies in this study. However, only 18 percent of respondents say their company uses cloud-based IT security services and most password policies do not require employees to use a password or biometric to secure access to their mobile devices.
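A minimal sketch of the kind of server-side password policy check the findings say so often goes unenforced. The specific thresholds and function names below are illustrative assumptions, not figures from the study:

```python
import string

def meets_policy(password, min_length=12):
    """Check a password against a basic length-and-variety policy."""
    if len(password) < min_length:
        return False
    has_lower = any(c in string.ascii_lowercase for c in password)
    has_upper = any(c in string.ascii_uppercase for c in password)
    has_digit = any(c in string.digits for c in password)
    has_symbol = any(c in string.punctuation for c in password)
    return all([has_lower, has_upper, has_digit, has_symbol])

print(meets_policy("Magidson123"))       # False: too short, no symbol
print(meets_policy("c0rrect-Horse-9!"))  # True
```

Even a check this simple, run at password-creation time, would rule out the predictable patterns the rest of this report describes; modern guidance (e.g., NIST SP 800-63B) additionally recommends screening candidates against lists of known-breached passwords.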

 

Download the full report here.

Cardinals' 'hacker' gets nearly four years in jail (for 'cheating' in baseball?) — don't you be next

Bob Sullivan

Bob Sullivan

Baseball has long celebrated cheating, but electronic cheating just sent a former team front-office worker to prison for nearly four years.

Former St. Louis Cardinals scouting director Chris Correa, who earlier pled guilty to using old passwords to access a former team’s scouting database, was sentenced to 46 months in jail on Monday. Correa broke into the Houston Astros’ computer systems repeatedly, stealing data. He had previously worked for the Astros.

Correa has been dubbed a hacker by sports media, but he simply made educated guesses to break into his old team’s computer database, mainly to download scouting intelligence that might help the Cardinals gain insight into players the Astros wanted to draft or trade for.

The long sentence was tied to the economic loss “suffered” by the Astros…and here things get confusing. According to STLToday.com, federal prosecutors essentially calculated how much money the Astros spent developing the data in their player database.

Assistant U.S. Attorney Michael Chu, who handled the hearing, listed the formula used to arrive at $1.7 million.

“But since much of the data that we looked at focused on the 2013 draft, what we did was we took the number of players that he looked at by 200 and we divided that by the number of players that were eligible to be drafted that year, and we multiplied that times the scouting budget of the Astros that year. That comes to $1.7 million,” he said.
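The prosecutor’s formula can be restated plainly: the fraction of the draft class Correa viewed, multiplied by the Astros’ scouting budget. Only the 200-player figure and the $1.7 million result appear in the transcript; the eligible-player count and budget below are hypothetical placeholders chosen solely to show how the arithmetic works:

```python
def estimated_loss(players_viewed, players_eligible, scouting_budget):
    """Prosecutor's formula: fraction of the draft class viewed, times budget."""
    return (players_viewed / players_eligible) * scouting_budget

# Hypothetical inputs: ~1,200 draft-eligible players and a ~$10.2M scouting
# budget would produce the $1.7 million figure cited in court.
loss = estimated_loss(200, 1_200, 10_200_000)
print(round(loss))  # 1700000
```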

That kind of loss meant a sentence of 36-48 months, according to federal guidelines.

That kind of jail time sounds like a lot for what some might consider the equivalent of stealing a third-base coach’s signs…particularly when you hear about rapists getting 6-month sentences…but it is not out of line with many computer criminal punishments.

There has long been debate about fairness in hacker sentencing, a debate that reached fever pitch after Aaron Swartz was prosecuted for “hacking” an academic research archive, faced a potential sentence of decades in prison, and ultimately committed suicide.

Again, Correa is no hacker.  When I talked to Morey Haber, vice president of technology at BeyondTrust, he sharply defended the sentence.

“Yes, there is a certain amount of cheating that goes on (in sports), but that’s during the game,” he said. “This is corporate espionage. It’s no different from hacking a bank…It’s no different than if you went from Lockheed Martin to Northrop Grumman (and hacked into your old employer)….It’s not acceptable and courts are sending a strong message.”

Whatever you feel about Correa’s sentence — and hanging questions about whether or not he could have been the only one who knew about all this — there are three really important lessons to learn from the Cardinals hack.

First, Correa actually told the judge during a hearing that he started breaking into Astros computers because he was afraid they were doing the same thing to him.  That may or may not be true. But “hacking back,” however tempting, is a crime. And it can steal several years from your life.

Second, using an old password to log into your old company — or slight variations of that — might seem like a fairly innocent thing to do. Maybe you forgot a contact phone number, or there’s a document (you wrote!) that you’d like to see one more time.  This kind of “hacking” can feel like no crime at all. It’s just a few keystrokes.

Doing that can also cost you years of your life.

Finally, to you Astros-like companies out there. Passwords can be easily guessed. And they can be really easily guessed by former employees who know the password tendencies of your current employees. Look at this section of the court transcript that describes the ‘hack.’

“It was based on the name of a player who was scrawny and who would not have been thought of to succeed in the major leagues, but through effort and determination he succeeded anyway. So this user of the password just liked that name, so he just kept on using that name over the years. … Kind of like Magidson123… Or Magidson1/2,1/4,1/3.

Have a smarter authentication system than that. At least change the indicator once in a while. (That’s a baseball joke.)
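To see why that transcript is so damning, here is a minimal sketch of how small the search space is when an ex-colleague knows your favorite base word. Everything here is hypothetical — the base word, suffix list and function are illustrative, not how Correa actually worked:

```python
def candidate_passwords(base_word, max_candidates=1000):
    """Enumerate common variants of a known base word: casing plus typical suffixes."""
    suffixes = ["", "1", "123", "2016", "!", "1!", "01"]
    candidates = []
    for word in (base_word, base_word.capitalize(), base_word.lower()):
        for suffix in suffixes:
            variant = word + suffix
            if variant not in candidates:
                candidates.append(variant)
            if len(candidates) >= max_candidates:
                return candidates
    return candidates

# A former employee who knows the base word has only a handful of tries to make.
guesses = candidate_passwords("magidson")
print(len(guesses))                 # a small, quickly exhausted search space
print("Magidson123" in guesses)     # the transcript's pattern falls out immediately
```

Rate-limiting failed logins and disabling accounts (and password reuse) when employees leave defeats exactly this kind of enumeration.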