Russians attacking U.S. election systems? Here's the real risk, from a man who fought Soviet electronic attacks during the Cold War

Bob Sullivan

With U.S. officials openly blaming Russia for hacker attacks on state election computer systems, and the myriad possibilities for election chaos such attacks raise, it’s important to put them in proper context. I went to Harri Hursti, a globally-known election security consultant, for some answers.  Hursti cut his teeth in the Finnish military fending off electronic attacks, so he has valuable perspective – particularly on a unique part of Russian culture which could explain who is really behind the attacks.  He also explains the potential for psychological warfare in this incident, and why it all feels a bit familiar to his Cold War sensibilities.

Harri Hursti developed the Hursti Hack(s), in which he demonstrated how the voting results produced by Diebold Election Systems voting machines could be altered. HBO turned the Hursti Hack into a documentary called “Hacking Democracy,” which was nominated for an Emmy award for outstanding investigative journalism. Hursti is co-author of several studies on data and election security, and his consultancy, Nordic Innovation Labs, advises governments around the world on election vulnerabilities.

Between 1984 and 1989, Hursti worked for UNESCO and the Finnish military in technology and cyber defense initiatives.

What do you think of the news that a member of Congress says there is “no doubt” that Russia is behind recent attacks on state election systems: (http://www.reuters.com/article/us-usa-election-cyber-idUSKCN1220SL)?

The article makes several dangerous assumptions about the security of elections and election systems. Representative Adam Schiff said he doubted (Russians) could falsify a vote tally in a way that affects the election outcome. He also said outdated election systems make this unlikely, but really, it just makes it easier. The voting machines were designed at a time when security wasn’t considered, included, or part of the specifications at all.

These outdated computers are extremely slow. They don’t have the extra horsepower to run decent security on top of the job they were designed for. Basically, a voting machine is about as powerful as today’s refrigerator or toaster, and some use the same components and logistics. So “outdated” doesn’t mean forgotten and obsolete; it means common, and therefore a lot of people still today know how those systems work and can subvert them. “Outdated” offers no protection from an attacker; quite the opposite.

So there’s no proof of voter registration tampering?

As with voting machines, the registration machines don’t have the capability of logging an alteration, and they are trivially altered themselves. It’s meaningless to claim there’s no evidence, since the systems can’t report when they’ve been altered. Tamper logging is not a standard part of a database, so there’s no sense in assuming “there’s some sort of feature that would do that, right?” Unless we study the system, we can’t know one way or the other. This isn’t a common-sense claim; it’s a claim that would require a forensic investigation.
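The missing capability Hursti describes is tamper-evident logging. A standard way to provide it is an append-only log in which each entry commits to the previous one via a hash chain, so that altering any past record breaks verification. This is a minimal illustrative sketch, not any vendor's actual implementation:

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry's hash commits to the previous
    entry, so any after-the-fact alteration breaks the chain."""

    GENESIS = "0" * 64  # placeholder hash before the first entry

    def __init__(self):
        self.entries = []  # list of (record_json, chain_hash) tuples

    def append(self, record):
        prev_hash = self.entries[-1][1] if self.entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True)
        chain_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append((payload, chain_hash))

    def verify(self):
        """Recompute the chain; returns False if any entry was altered."""
        prev_hash = self.GENESIS
        for payload, stored_hash in self.entries:
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if expected != stored_hash:
                return False
            prev_hash = stored_hash
        return True
```

Editing any stored record without recomputing every later hash makes `verify()` return False. Systems without such a mechanism, as Hursti notes, simply cannot testify to their own integrity.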

In addition, the number of vendors and distinct systems is low, so a skillful attacker doesn’t need to learn hundreds of systems; knowing a half dozen covers all of the U.S. election systems. In fact, a skillful attacker only needs to learn one system in order to manipulate enough votes to tilt the election, even one that isn’t close to tied. This gives the attacker room to be strategic: instead of going after a big jurisdiction, they’ll go after 10 smaller ones with fewer resources and less attention. If you calculate the gap you need to fill to alter an election, you go to the smaller, underfunded and less technologically savvy districts to own the state.

Also, some states have made statewide decisions that one system is used across the state, or that jurisdictions are central-count only. So the statement that the U.S. is decentralized is false. It’s easy to understand why people think that, but from an attacker’s point of view, in the threat model, you could not ask for an easier target. And the diversity among small jurisdictions is limited; an attacker can choose jurisdictions based on the systems they are best skilled to attack.

How can the US be so sure it’s Russia?

It can’t. It is very hard to find where a network attack is coming from, and it is equally easy to make certain that investigators will find “a trail” pointing in the wrong direction. So under the assumption that you’re dealing with a skillful attacker, any trail found is itself a red flag, because there are so many ways to make the real trail virtually impossible to find. Any conclusive-looking trail that is “found” should be considered suspect. You can only say we suspect them; until you get down to the real people, to the level of the actual perpetrators’ true identities, you can’t make a conclusion as to where they come from.

Could it have been Russia?

We could use a working hypothesis, or a reasonable suspicion of Russian involvement, but until you’re down to individual people you don’t know who they are. They might even have been based in Russia, but have arrived there as tourists to carry out their attacks. There’s no way to know who the individual attackers are until they’re confronted.

Given your Cold War background, does this feel familiar?

The Cold War was all about ideology, and therefore a large part of it was something that we today call hybrid warfare. In that game, the actual technological attacks are just as important as psychologically influencing the general population with misinformation and misdirection. So this is all very familiar.

Also, something we in the Western world don’t understand is how deeply patriotic Russians are. Individual Russians, and self-organized groups, are willing to go to great lengths on their own initiative if they believe that what they do will benefit Mother Russia, or in the hope and belief that their actions, once known, will be rewarded. So self-initiated actions that resemble organized operations are commonplace. Bearing in mind that these self-organized groups can have members whose day jobs are close to the government, the remaining question is: is the government aware of these groups, and if so, is it encouraging or discouraging them? That is something we cannot know. But the fact of the matter is that Russia is self-organizing and self-providing the capability of plausible deniability, and in many cases it can actually be true that the government didn’t know.

Also, it is good to understand how high the level of science education is in Russia and the Eastern Bloc. When East Germany and West Germany united, they had to tone down the science education in East Germany to match the West; science education in East Germany was far more advanced. The percentage of people in the general population of Russia who possess the relevant skill sets for carrying out this kind of attack is higher than we assume based on Western standards. And that’s not just Russia but the whole Eastern Bloc; it was very high and still is.

Given the number of, say, smartphones and laptops used today, how hard is it to fend off an attack?

In today’s world, where we have “bring your own device” models everywhere, we inherently assume every risk the wireless world brings to us. Our laptops and mobile phones are paired to our home networks and the other wireless networks we visit. It is still not understood how little security WiFi has, and how easy it is, with an “evil” access point, to gain a connection to a target; once you have a target, you can start working to gain access.

To mitigate this, we have two possible paths. One path is ultra-high security, with all the restrictions that come with it. The alternative is to assume a breach is imminent and have experts put in place an active defense mechanism that catches the breach before the attacker can use it to gain access to valuable information.

What would an appropriate US response be if the U.S. discovered foreign hackers in its election system?

The first action is obviously to secure your home base. Taking into account the difficulty of identifying the actual attacker, a public retaliation towards an assumed attacker may be part of the attacker’s plan and intensify the attack. Hence, public retaliation is not an effective defense. Public disclosure is important, but after the fact and after the situation has been properly handled.

Finally, what is the real risk here? Could Russian hacking throw the Nov. 8 result into doubt? Could Trump supporters, should they lose, blame Russia, for example?

There’s a myriad of risks. Just to start from the simple fact that attacking the voter registration system is a highway to all crimes involving identity theft. Massive breaches of voter registration databases might discourage people from participating in the democratic process and cause them to drop out by ceasing to be registered voters. It also poses national-security-level threats, by allowing malicious actors and adversaries to gain valuable intel, whether for personal-level attacks or for hybrid-warfare psyops.

It is also important to understand that data theft gets the public’s attention, but undetected injection or insertion of records is far more serious. With that kind of attack, the U.S. could be set up for later attacks, and false identities could be established to be leveraged for multiple purposes in and out of the election space.

For example, a voter registration database interacts with a lot of government databases, such as criminal records. While common sense might say this kind of interaction should be a one-way street, in reality the implementations quite often allow two-way interaction between the data sources. Therefore, from one jurisdiction to another, it should be carefully analyzed what kind of data propagation inserted voter records could lead to. Remember, only U.S. citizens can be voters, so a registered voter is assumed to already be a citizen.
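The danger of two-way interaction between data sources can be shown with a toy model. This sketch is purely illustrative (the database names and records are invented): a record injected into a downstream system flows back into the voter rolls when synchronization runs in both directions.

```python
def sync_one_way(source, target):
    """Copy records from source into target; nothing flows back."""
    for key, record in source.items():
        target.setdefault(key, record)

def sync_two_way(db_a, db_b):
    """Both directions: a record injected into EITHER side ends up in both."""
    sync_one_way(db_a, db_b)
    sync_one_way(db_b, db_a)

# Hypothetical databases: the voter rolls and another government system
# they exchange data with.
voter_rolls = {"v1": {"name": "Alice", "citizen": True}}
other_db = {}

# An attacker inserts a fabricated record into the downstream system...
other_db["v999"] = {"name": "Fake Voter", "citizen": True}

# ...and the routine two-way sync copies it back into the voter rolls,
# where it now carries an implicit presumption of citizenship.
sync_two_way(voter_rolls, other_db)
```

With one-way sync from the voter rolls outward, the fabricated record would have stayed isolated; the two-way implementation is what turns a single insertion into organization-wide propagation.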
It's 10 p.m.: Do you know where your apps are?

Larry Ponemon

Ponemon Institute is pleased to present the results of Application Security in the Changing Risk Landscape, sponsored by F5. The purpose of this study is to understand how today’s security risks are affecting application security. We surveyed 605 IT and IT security practitioners in the United States who are involved in their organization’s application security activities.

The majority of respondents (57 percent) say it is the lack of visibility in the application layer that is preventing strong application security. In fact, 63 percent of respondents say attacks at the application layer are harder to detect than at the network layer, and 67 percent say these attacks are more difficult to contain than at the network layer.

Following are key takeaways from this research.

Lack of visibility in the application layer is the main barrier to achieving a strong application security posture. Other significant barriers are created by migration to the cloud (47 percent of respondents), lack of skilled or expert personnel (45 percent) and proliferation of mobile devices (43 percent).

The frequency and severity of attacks on the application layer is considered greater than at the network layer. Fifty percent of respondents (29 percent + 21 percent) say the application layer is attacked more, and 58 percent (33 percent + 21 percent) say attacks are more severe than at the network layer. In the past 12 months, the most common security incidents due to insecure applications were SQL injections (29 percent), DDoS (25 percent) and Web fraud (21 percent).

Network security is better funded than application security. On average, 18 percent of the IT security budget is dedicated to application security. More than double that amount (an average of 39 percent) is allocated to network security. As a consequence, only 35 percent of respondents say their organizations have ample resources to detect vulnerabilities in applications, and 30 percent of respondents say they have enough resources to remediate vulnerabilities in applications.

Accountability for the security of applications is in a state of flux. Fifty-six percent of respondents believe accountability for application security is shifting from IT to the end user or application owner. However, at this time responsibility for ensuring the security of applications is dispersed throughout the organization. While 21 percent of respondents say the CIO or CTO is accountable, another 20 percent say no one person or department is responsible. Twenty percent say business units are accountable, and 19 percent say the head of application development is accountable.

Shadow IT affects the security of applications. Respondents estimate that on average their organizations have 1,175 applications, and an average of 33 percent are considered mission critical. Sixty-six percent of respondents are only somewhat confident (23 percent) or have no confidence (43 percent) that they know all the applications in their organizations. Accordingly, 68 percent of respondents (34 percent + 34 percent) say their IT function does not have visibility into all the applications deployed in their organizations, and 65 percent (32 percent + 33 percent) agree that shadow IT is a problem.

Mobile and business applications in the cloud are proliferating. An average of 31 percent of business applications are mobile apps, and this will increase to 38 percent in the next 12 months. Today, 37 percent of business applications are in the cloud, and this will increase to an average of 46 percent.

The growth in mobile and cloud-based applications is seen as significantly affecting application security risk. Sixty percent of respondents say mobile apps increase risk (25 percent) or increase risk significantly (35 percent). Fifty-one percent of respondents say cloud-based applications increase risk (25 percent) or increase risk significantly (26 percent).

Hiring and retaining skilled and qualified application developers will improve an organization’s security posture. Sixty-nine percent of respondents believe the shortage of skilled and qualified application developers puts their applications at risk. Moreover, 67 percent of respondents say the “rush to release” causes application developers in their organization to neglect secure coding procedures and processes.

Cyber security threats will weaken application security programs, but new IT security and privacy compliance requirements will strengthen these programs. Eighty-eight percent of respondents are concerned that new and emerging cyber security threats will affect the security of applications. In contrast, 54 percent of respondents say new and emerging IT security and privacy compliance requirements will help their security programs. According to respondents, more trends are expected to weaken application security than to strengthen it.

The responsibility for securing applications will move closer to the application developer. Sixty percent of respondents anticipate the application developer will assume more responsibility for the security of applications. Testing for vulnerabilities should take place in the design and development phase of the system development life cycle (SDLC). Today, most applications are tested in the launch or post-launch phase (61 percent). In the future, the goal is to perform more testing in the design and development phase (63 percent).

Do secure coding practices affect the application delivery cycle? Fifty percent of respondents say secure coding practices, such as penetration testing, slow down the application delivery cycle within their organizations either significantly (12 percent of respondents) or somewhat (38 percent). However, 44 percent of respondents say there is no slowdown.

How secure coding practices will change. The secure coding practices most often performed today are: running applications in a safe environment (67 percent of respondents), using automated scanning tools to test applications for vulnerabilities (49 percent) and performing penetration testing procedures (47 percent). In the next 24 months, the practices most likely to be performed are: running applications in a safe environment (80 percent of respondents) and monitoring the runtime behavior of applications to determine if tampering has occurred (65 percent).

Submarine builder declares 'economic warfare' as plans for ship said to be hacked; now what?

Bob Sullivan

Get used to another term in world of computer hacking: “economic warfare.”

A French firm building multi-billion-dollar submarines for Australia and several other nations says it was the victim of economic warfare after some of its schematics for similar subs being built for India were released online, allegedly by hackers. The data was published by Australian media.

The firm, DCNS, is currently bidding for military contracts in Poland and Norway. For the India gig, it had beaten out German and Japanese firms.

An embarrassing data leak would obviously hurt the French firm’s bid for more deals — in addition to perhaps imperiling the security of its current projects.

“DCNS has been made aware of articles published in the Australian press related to the leakage of sensitive data about Indian Scorpene,” the firm said on its website. “This serious matter is thoroughly investigated by the proper French national authorities for Defense Security. This investigation will determine the exact nature of the leaked documents, the potential damages to DCNS customers as well as the responsibilities for this leakage.”

Right now, there’s only speculation about how much the allegedly stolen data might impact the security of the ships when they arrive in India — and the security of similar DCNS ships in Malaysia and Chile.

But DCNS immediately suggested that rivals might be to blame for the leak.

“Competition is getting tougher and tougher, and all means can be used in this context,” a company spokesperson said to Reuters. “There is India, Australia and other prospects, and other countries could raise legitimate questions over DCNS. It’s part of the tools in economic warfare.”

It’s clearly too early to know, however, whether simple corporate espionage is to blame, or whether there might be some military advantage to be gained from publication of the documents.  Given that the alleged hackers sent the data to a media outlet, it’s also possible their motivation was political.

The incident does highlight the asymmetrical nature of digital “warfare,” however.  A billion-dollar project involving thousands of employees can be derailed by a single person with a digital file and the e-mail address of a journalist.

“If this was economic warfare as speculated, we can expect more attacks like this on a global scale,” said Scott Gordon, COO at file security firm FinalCode. “Hacktivists are motivated by reputational, economic and political gains from capitalizing on businesses’ and countries’ inability to secure sensitive, critical documents— tipping the scale in favor of other contenders in future military action and contracting situations.”

It also shows how hard it is to keep data under wraps when multiple third-party contractors have to share information in large projects.

“Sharing files, such as the 22,000-plus pages of blueprints and technical details on DCNS’s Scorpene submarines, is a necessary collaboration between government, contractor and manufacturing entities,” Gordon said. “But the exposure of these Indian naval secrets illustrates how lax file protection has opened a door to new data loss risks—and how even confidential military information can be exfiltrated and exposed by a weak link in the supply chain.”

Cardinals' 'hacker' gets nearly four years in jail (for 'cheating' in baseball?) — don't you be next

Bob Sullivan

Baseball has long celebrated cheating, but electronic cheating just sent a former team front-office worker to prison for nearly four years.

Former St. Louis Cardinals scouting director Chris Correa, who earlier pled guilty to using old passwords to access a former team’s scouting database, was sentenced to 46 months in jail on Monday. Correa broke into the Houston Astros’ computer systems repeatedly, stealing data. He had previously worked for the Astros.

Correa has been dubbed a hacker by sports media, but he simply made educated guesses to break into his old team’s computer database, mainly to download scouting intelligence that might help the Cardinals gain insight into players the Astros wanted to draft or trade for.

The long sentence was tied to the economic loss “suffered” by the Astros…and here things get confusing. According to STLToday.com, federal prosecutors essentially calculated how much money the Astros spent developing the data in their player database.

Assistant U.S. Attorney Michael Chu, who handled the hearing, listed the formula used to arrive at $1.7 million.

“But since much of the data that we looked at focused on the 2013 draft, what we did was we took the number of players that he looked at by 200 and we divided that by the number of players that were eligible to be drafted that year, and we multiplied that times the scouting budget of the Astros that year. That comes to $1.7 million,” he said.

That kind of loss meant a sentence of 36-48 months, according to federal guidelines.
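That formula is simple arithmetic. Here is a sketch; only the roughly 200 players examined and the $1.7 million result were reported, so the eligible-player count and scouting budget below are hypothetical stand-ins chosen so the math lands on the reported figure:

```python
# Sketch of the prosecutor's loss formula:
#   loss = (players examined / players eligible that year) * scouting budget
# The eligible-player count and budget are hypothetical stand-ins; only
# the ~200 players examined and the $1.7 million result were reported.
def estimated_loss(players_examined, eligible_players, scouting_budget):
    return (players_examined / eligible_players) * scouting_budget

print(round(estimated_loss(200, 1200, 10_200_000)))  # 1700000
```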

That kind of jail time sounds like a lot for what some might consider the equivalent of stealing a third-base coach’s signs…particularly when you hear about rapists getting 6-month sentences…but it is not out of line with many computer criminal punishments.

There has long been debate about fairness in hacker sentencing, a debate that reached fever pitch after Aaron Swartz was prosecuted for “hacking” academic research downloads, faced the prospect of decades in prison, and ultimately committed suicide.

Again, Correa is no hacker.  When I talked to Morey Haber, vice president of technology at BeyondTrust, he sharply defended the sentence.

“Yes, there is a certain amount of cheating that goes on (in sports), but that’s during the game,” he said. “This is corporate espionage. It’s no different from hacking a bank…It’s no different than if you went from Lockheed Martin to Northrup Grumman (and hacked into your old employer)….It’s not acceptable and courts are sending a strong message.”

Whatever you feel about Correa’s sentence — and hanging questions about whether he could have been the only one who knew about all this — there are three really important lessons to learn from the Cardinals hack.

First, Correa actually told the judge during a hearing that he started breaking into Astros computers because he was afraid they were doing the same thing to him.  That may or may not be true. But “hacking back,” however tempting, is a crime. And it can steal several years from your life.

Second, using an old password to log into your old company — or slight variations of that — might seem like a fairly innocent thing to do. Maybe you forgot a contact phone number, or there’s a document (you wrote!) that you’d like to see one more time.  This kind of “hacking” can feel like no crime at all. It’s just a few keystrokes.

Doing that can also cost you years of your life.

Finally, to you Astros-like companies out there.  Passwords can be easily guessed.  And they can be really easily guessed by former employees who know the password tendencies of your current employees.  Look at this section of the court transcript that describes the ‘hack.’

“It was based on the name of a player who was scrawny and who would not have been thought of to succeed in the major leagues, but through effort and determination he succeeded anyway. So this user of the password just liked that name, so he just kept on using that name over the years. … Kind of like Magidson123… Or Magidson1/2,1/4,1/3.”
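The pattern the transcript describes, a memorable base word plus a trivial suffix, is exactly what an attacker enumerates first. A minimal sketch (the base word and suffix list are hypothetical):

```python
# The reuse pattern from the transcript: one memorable base word plus a
# trivial suffix.  The base word and suffix list here are hypothetical,
# but this is the first thing an ex-colleague (or any attacker) will try.
def password_variants(base):
    suffixes = ["", "1", "123", "2016", "!", "01"]
    for word in (base, base.capitalize()):
        for suffix in suffixes:
            yield word + suffix

guesses = list(password_variants("magidson"))
print("Magidson123" in guesses)  # True
```

A dozen guesses typed by hand defeats this scheme; no tooling required.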

Have a smarter authentication system than that. At least change the indicator once in a while. (That’s a baseball joke.)

State official: Please stop falling for ransomware attacks — you're costing the taxpayers big bucks

Bob Sullivan

How bad has the ransomware problem become?  The state auditor of Ohio held a press conference yesterday because local government agencies keep falling for ransomware attacks. And a firm that tracks domain activity found a 3,500% increase in ransomware-related domain name registrations in the past quarter.  Hackers love to cut and paste, so imitation is the surest sign that something is working.

Recall the high-profile, alarming ransomware attacks earlier this year on hospitals.  These “your money or your data” crimes can do a lot of damage quickly, and confused organizations brought to their knees by missing mission-critical data often pay up.  Of course, smaller organizations with fewer IT resources are at greater risk.

Here’s what’s going on in Ohio.  Auditor of State Dave Yost issued a warning on Thursday to treasurers, fiscal officers and others responsible for spending public money that cybercrimes targeting government are “on the rise.” And he offered these examples.

  • An investigation continues in an eastern Ohio county after the county’s court data was attacked by ransomware on May 31. A virus had encrypted the court’s data and hackers demanded $2,500 for the key to unlock the information. Because a recent copy of the data wasn’t available, the county agreed to pay the $2,500. (Note: Because the transaction is ongoing, we are not identifying the county.)
  • A similar ransomware attempt was made April 5 in Vernon Township (Clinton County). That cyberattack did not result in the payment of any ransom because the township’s data was backed up.
  • In Peru Township (Morrow County), the township fiscal officer’s computer began screeching on March 9 before a notice appeared on the screen advising that a solution was available by calling an 800 number. The township paid $200 to stop the attack.

In separate, non-ransomware incidents,  an employee at Big Walnut Local School District in Delaware County was tricked into issuing a check for $38,520 to a hacker. The money was recovered before it was lost. The Madison County Agricultural Society wasn’t as lucky; it was scammed out of $60,491 through someone posing as the IRS, collecting back taxes.

“We’ve all seen and heard about the criminals who try to steal our personal funds. These scammers would like nothing more than to get their sticky fingers on our tax dollars, too,” Yost said. “We need to be vigilant because they are becoming increasingly sophisticated in how they attempt to steal money through the internet.”

Yost is right.  Network security firm Infoblox reported last week that hackers were falling over each other to set up websites related to ransomware scams.  The firm tracks domain registrations as a way of monitoring the Internet for threats, and it says it found a 35-fold increase in newly observed ransomware domains from the fourth quarter of 2015.

“There is an old adage that success begets success, and it seems to apply to malware as in any other corner of life.
In the first quarter of 2016, there were numerous stories in the news about successful ransomware attacks on both
companies and consumers,” the firm said.  “We believe the larger cybercriminal community has taken notice.”

According to the FBI, ransomware victims reported costs of $209 million in the first quarter, compared to $24 million for all of 2015.

“Unless and until companies figure out how to guard against ransomware – and certainly not reward the attack – we expect it to continue its successful run,” Infoblox said.

Yost said all the crimes began with some variation of phishing, and urged all government employees to be on alert.

“The internet is the tool of choice for criminals, and we need to make it as difficult as possible for thieves to access community treasure chests,” Yost said.

The best way to do that, as Vernon Township showed above, is to keep good backups.

The day my bank, yet again, blocked me from my money for 'security' — and why two-factor tools aren't ready for prime time

Bob Sullivan

How can a bank – or any organization — become less secure in its attempts to become more secure?  Let me tell you how.

Security must do two things: Protect and enable.  If your security doesn’t enable people to do what they have to do, they will inevitably circumvent it, creating all sorts of exception conditions as they do. And that is the path to perdition (and hacking).

Security often fails because people who design security are much better at throwing up roadblocks than they are creating pathways.  Both are equally important if a security scheme is to work.

This month brought yet another story chronicling theft of millions of passwords by hackers, once again highlighting the importance of implementing “not-just-passwords security” at places that really matter.

But I’m about to turn off two-factor authentication at my bank, right at the moment when everyone seems hell bent to turn it on. Why?  Because it doesn’t make me safer if it doesn’t work; it just prevents me from accessing my money.

I’ve run into classic 21st Century Red Tape headaches with my bank recently as I try very hard to use its two-factor authentication scheme.  I often don’t like single-anecdote stories, but occasionally they illuminate larger problems so perfectly they are worth telling. So here goes.

A quick review:  Two-factor authentication adds a strong layer of security to a service by requiring two tests be met by a person seeking access — a debit card and a PIN code, for example, representing something you have and something you know.  Online banks and websites are slowly but surely nudging everyone towards various forms of two-factor authentication, because it really does make life harder for hackers.

Most of these two-factor forms involve use of smartphones, as they have become nearly ubiquitous. Log on to a website at a PC, confirm a code sent to your phone.  Something you have (the phone) and something you know (the password). Simple, but elegant, and far harder for bad guys to crack.

And it’s great, when it works. But what about when it doesn’t work?

Here’s a simple problem. Consumers get new phones all the time. If the code is tied to the physical handset, the code doesn’t work any longer. What then?

Turns out this can be a really vexing problem. (Readers of this column know why I had to get a new smartphone recently.)

I’ve been a USAA banking customer for decades. The financial services firm has ranked atop customer satisfaction surveys seemingly forever, and for good reason:  It really does take good care of members.

At least it did, until it tried to implement two-factor security. I try not to be hypocritical, and follow my own advice, so I turned on USAA’s flavor of two-factor pretty early on. It’s a solid design: A Symantec app loaded onto your smartphone offers a temporary token — a 6-digit code — that changes every 30 seconds. The token is tied to the physical handset. Only a person who knows your PIN and can access the token on that handset can log into the website. You can see all the layers of protection that creates.
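That design is a standard time-based one-time password (TOTP). A minimal sketch of the arithmetic, in generic RFC 6238 style (this is an illustration, not USAA’s or Symantec’s actual code; the secret is made up):

```python
import base64
import hmac
import struct

# Minimal RFC 6238-style TOTP sketch: how a six-digit code that changes
# every 30 seconds is derived from a shared secret.  Generic illustration,
# not USAA's or Symantec's actual implementation; the secret is made up.
def totp(secret_b32: str, at: int, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", at // step)           # current 30-second window
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP", at=1_600_000_000))
```

Because the shared secret is provisioned into one physical handset, a brand-new phone holds no secret and can never show a valid code; that is the root of the lockout this story describes.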

Sure, it’s a tiny hassle to pull out the phone every time you want to log on to the website — a larger hassle if your phone battery is dead. But that’s a fair price to pay for security.

However, the hassle becomes immense when it comes time to change handsets.  So immense that as I type this, I cannot access my bank…and have no idea when I will be able to do so. (UPDATE: I was able to fix my login woes 24 hours later.) And that’s happened twice to me in the past year. Why? Chiefly because USAA is not set up to deal with the problem of new handsets.

To review: When I tried to access the website it demanded a token from my phone — a token that was no longer valid because I had a new phone.  When I tried to use the phone’s app to access my accounts, USAA asked for a password because it didn’t recognize the phone.  I didn’t have a password, I had a token — an invalid token.  You get the picture.

All that is a predictable technology hiccup that’s not the end of the world.  The real problem came next.

A call to customer service seemed to be my last available option, but that was dismal, too.  At various times I wasn’t able to get through to customer service phone lines. What’s much worse, however, is what happened when I did get through.

People change phones roughly every two years, so this new-handset problem must come up often.  Yet it’s obvious to me USAA operators are not ready to handle it when consumers call.  Each time I reached an operator, I had to spend a lot of time explaining the problem — and remember, I do this for a living.  On the first successful call today, the operator merely changed my mobile application login settings after putting me on hold for several minutes.  When I protested, she said she had to transfer me to a special department, and then the phone went dead.

After a second call and wait, the operator was sympathetic, but put me on hold quickly and wasted a lot of time trying to set me up with a new phone number.  It took a while before I could convince her that “new phone” meant “new handset” not “new number,” a mistake I will correct in future calls. We eventually agreed that all I needed was someone to turn off two-factor and issue me a temporary password so I could go in and re-establish the connection between my handset and my account.  But after another long hold, and transfers to two other operators, I was told that, sadly, they were having trouble issuing temporary passwords and asked if I could call back in an hour or so.

I’ve left out many steps in this saga.  At each stage, of course, I was subject to strict authentication questions. That’s fine — I was asking for a new password, after all.  But at the end of my fruitless journey through tech support, when I asked if I could somehow get express treatment when I called back just to find out if I could get a temporary password, I was told, “no.”  So I will have to, once again, convince a primary operator who I am, and that I am having token problems and that I need a temporary password.  There is obviously no “token problem” script ready for my problem.

My experience last time was similar, so I know I am not just the victim of bad luck.

The last time this happened, I was sure to give the operator who finally liberated my account some specific feedback — there needs to be a tidy process for dealing with people who get new handsets.  Obviously, that hasn’t happened. And so, the first thing I will do when I can access my account is disable the token. (I’ll use another form of two-factor.) While I am afraid of hackers, I’m more afraid of not being able to access my money because my bank has poorly implemented a security solution.

When I called USAA as a reporter to discuss my experience, the firm owned up to the challenges of implementing two-factor security.

“You’ve encountered an experience we are aware of,” said Mike Slaugh, Executive Director, Financial Crimes Prevention, at USAA. “What we’re working on here is a way to make that experience better. … Multi-factor authentication for us at USAA and the industry in general, it’s important.  (Making this experience better) is top of mind for us as we work to help members protect  themselves.”

USAA is hardly the only firm having trouble dealing with two-factor issues.  Independent security analyst Harri Hursti told me about the foibles consumers face when dealing with two-factor authentication that relies on text messages.

“The moment you start traveling, all bets are off. Text messages over roaming are far from reliable – they either are never delivered, or they experience regular delivery delays over 10-15 minutes, which are the most typical time-out limits on the websites,” he said. Hursti, who was in Portugal when I interviewed him, said he was late paying an electricity bill this month because of two-factor pain points.  “Basically, in order to do banking when travelling internationally, you need to start that by turning all security off. And yet you are knowingly getting into increased security risk environment.”

Gartner security analyst Avivah Litan says these kinds of implementation and customer service issues not only threaten adoption of two-factor security, they actually create more pathways for hackers.

“Two factor, in this case, actually weakens security – rather than strengthens it,” she said. “I always tell our clients that their security is only as strong as its weakest link and surely, when they disable two factor authentication on the account, they likely ask the account holder to verify their identity by answering those easily compromised challenge questions, which any criminal who can buy data on the dark web has access to.  Therefore this is an easy way for criminals to get access to your account.  So not only does two factor authentication without proper supporting processes serve to annoy and greatly inconvenience good legitimate customers, it also does little to keep the bad guys out for this and other reasons.”

As Litan is fond of saying, there’s a fallacy that “harder is better” in security.  It “doesn’t keep bad guys out, but it annoys good guys.”

Perhaps this problem isn’t *that* common yet, as uptake on two-factor is still relatively small (USAA acknowledged that, and it’s common across the industry). Don’t worry: With each password hack, more and more people will turn on two-factor.  If companies blow the implementation, consumers will just as quickly turn it off again.  And we might lose them for several years.

Protect and enable, or we’re all at greater risk.

Two-thirds of security pros waste a 'significant' amount of time chasing false positives

Larry Ponemon

We are pleased to present the findings of The State of Malware Detection & Prevention sponsored by Cyphort. The study reveals the difficulty of preventing and detecting malware and advanced threats. The IT function also seems to lack the information and intelligence necessary to update senior executives on cybersecurity risks.

We surveyed 597 IT and IT security practitioners in the United States who are responsible for directing cybersecurity activities and/or investments within their organizations. All respondents have a network-based malware detection tool or are familiar with this type of tool.

Getting malware attacks under control continues to be a challenge for companies. Some 68 percent of respondents say their security operations team spends a significant amount of time chasing false positives. However, only 32 percent of respondents say their teams spend a significant amount of time prioritizing alerts that need to be investigated.

Despite such catastrophic data breaches as Target, cyber threats are not getting the appropriate attention from senior leadership they deserve. As shown in the findings of this research, respondents say they do not have the necessary intelligence to make a convincing case to the C-suite about the threats facing their company.

The following findings further reveal the problems IT security faces in safeguarding their companies’ high value and sensitive information.

Companies are ineffective in dealing with malware and advanced threats. Only 39 percent of respondents rate their ability to detect a cyber attack as highly effective, and similarly only 30 percent rate their ability to prevent cyber attacks as highly effective. Respondents also say their organizations are doing poorly in prioritizing alerts and minimizing false positives. As mentioned above, a significant amount of time is spent chasing false positives but not prioritizing alerts.

Most respondents say C-level executives aren’t concerned about cyber threats. Respondents admit they do not have the intelligence and necessary information to effectively update senior executives on cyber threats. If they do meet with senior executives, 70 percent of respondents say they report on these risks to C-level executives only on a need-to-know basis (36 percent of respondents) or never (34 percent of respondents).

Sixty-three percent of respondents say their companies had one or more advanced attacks during the past 12 months. On average, it took 170 days to detect an advanced attack, 39 days to contain it and 43 days to remediate it.

Many of the malware alerts that are investigated turn out to be false positives. On average, 29 percent of all malware alerts received by their security operations team are investigated, and an average of 40 percent are considered to be false positives. Only 18 percent of respondents say their malware detection tool provides a level of risk for each incident.
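To make those percentages concrete, here is one plausible reading of the figures (the 1,000-alert volume is hypothetical, and the survey does not say whether the 40 percent is of all alerts or of investigated ones; this sketch assumes the latter):

```python
# One plausible reading of the survey figures: 29 percent of incoming
# alerts get investigated, and 40 percent of those investigations turn
# out to be false positives.  The 1,000-alert volume is hypothetical.
def alert_breakdown(total_alerts, investigated_rate=0.29, false_positive_rate=0.40):
    investigated = round(total_alerts * investigated_rate)
    false_positives = round(investigated * false_positive_rate)
    return investigated, false_positives

print(alert_breakdown(1000))  # (290, 116)
```

Under that reading, more than a third of the analyst hours spent on investigations produce nothing but a closed false-positive ticket.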

Do organizations reimage endpoints based on malware detected in the network? More than half (51 percent) of respondents say their organization reimages endpoints based on malware detected in the network. An average of 33 percent of endpoint re-images or remediations are performed without knowing whether it was truly infected. The most effective solutions for the remediation of advanced attacks are network-based sandboxing and network behavior anomaly analysis.

Download and read the rest of the report.

Car hacking worries FBI, too; and reports of keyless entry hacking won't go away

Bob Sullivan

We know that Americans are concerned about their cars being hacked.  We also know that some consumers believe criminals are “hacking” into their parked cars and committing “snatch and grab” crimes using devices that simulate newfangled keyless entry systems.

Now, we know the FBI is worried about car hacking, too. The agency, along with the National Highway Traffic Safety Administration, issued a bold warning to consumers and manufacturers last week.

“The FBI and NHTSA are warning the general public and manufacturers – of vehicles, vehicle components, and aftermarket devices – to maintain awareness of potential issues and cybersecurity threats related to connected vehicle technologies in modern vehicles,” the warning says. “While not all hacking incidents may result in a risk to safety – such as an attacker taking control of a vehicle – it is important that consumers take appropriate steps to minimize risk.”

The FBI warning didn’t raise any new concerns; it mainly cites revelations of car hacking from 2015 as impetus for the warning. Still, the notice clearly demonstrates there is a level of activity around car hacking that should have everyone concerned. Drive down the highway sometime (as a passenger) and use your smartphone to see all the cars sending out Bluetooth connections around you, and you’ll get an idea of how connected our vehicles have become.

Meanwhile, consumers continue to report mysterious car break-ins around the country with no signs of forced entry, in situations when they swear their car doors were locked.  In Baltimore, a string of crimes following this pattern frustrated local residents earlier this year.

“What was strange to me was that, while I could tell it was broken into because my jacket was taken and they tossed through the stuff in the car, there were no signs of a breaking. No broken windows or anything,” said one driver. “I called and reported it mostly because I wanted to know how anyone could have gotten in if it was locked and no windows were broken. The officer said people have these things that basically interfere with newer cars electronic/fob locking systems and disable the alarms.”

The reports follow a persistent set of national stories around keyfob break-ins that began with a CNN report two years ago, and was followed by a New York Times story last year that casually suggested drivers store their car fobs in their freezers to keep them safe from hackers. (Notably, the story appeared in the Times’ Style section. The science was a little shallow).

There have also been vague warnings issued by some agencies around the world, like this notice from London Police, or this notice from the National Insurance Crime Bureau:

“The key-less entry feature on newer cars is a popular advancement that lets drivers unlock their cars with the simple click of a button on a key fob using radio frequency transmission. The technology also helps prevent drivers from locking their keys in the vehicle,” it says.  “Not surprisingly, thieves have found a way to partially outwit the new technology using electronic ‘scanner boxes.’ These small, handheld devices can pop some factory-made electronic locks in seconds, allowing thieves to get into the vehicle and steal personal items left inside.”

The existence of such a scanner box is very much in question, as are assertions that such a universal master key can be purchased for as little as $17; so is any notion that the crime is widespread. If any law enforcement agency has seized such a device, we are all waiting for it to be put on display.

How would such a magic device work?  By tricking your car into thinking your key fob is nearby and opening the door in response to a handle jiggle; or perhaps by amplifying the signal it sends out, or by intercepting that signal and copying it somehow. Or, hackers could “guess” the code for opening a car, if the code were poorly constructed. Here’s a great explanation of how it might work, and why it’s a major challenge unlikely to be used by street thugs.
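One of those scenarios, guessing a poorly constructed code, comes down to simple keyspace arithmetic. A back-of-envelope sketch (the try rate is a hypothetical assumption about the attacker’s radio hardware):

```python
# Back-of-envelope look at why code length matters: guessing a fixed
# (non-rolling) entry code is just a keyspace search.  The ten-tries-
# per-second rate is a hypothetical assumption.
def average_guess_time_seconds(code_bits, tries_per_second=10.0):
    keyspace = 2 ** code_bits
    return (keyspace / 2) / tries_per_second  # success expected halfway through

print(average_guess_time_seconds(16))          # 3276.8 -- under an hour for 16 bits
print(average_guess_time_seconds(32) / 86_400) # days needed for a 32-bit code
```

This is why a short fixed code is plausibly attackable on the street, while a properly sized rolling code pushes the same attack into the realm of the impractical.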

*Could* such a hack exist? Well, of course, says embedded device security expert Philip Koopman, a professor at  Carnegie Mellon. Koopman actually worked on earlier generation designs for key fobs.

“I would not at all be surprised if the Bad Guys have figured out that some manufacturer has bad security and how to attack it,” he said. “There is nothing really new here, other than general lack of people to admit that if you cut corners on security you will get burned, and an insistence by manufacturers and suppliers that known bad practices are adequate.”

In a blog post six years ago, he warned about the cost sensitivity for auto manufacturers (“No way could we afford industrial strength crypto.”)

Back to today, he offered this speculation on keyless entry attacks.

“It is (possible) that the manufacturers used bad crypto that is easy to hack, possibly via just listening to transmissions and doing off-line analysis. And it is possible to attack by getting near someone when they aren’t near their car and extracting the secrets from their car keys when it is in their pocket, then using that info to build a fake key. The technology is very similar to the US Passport biometric chips, so all the attacks for those are plausible here as well.”

The FBI offers the following advice to consumers: Keep your car software up to date, as you do with your PC; don’t modify your car software; be careful when connecting your car to third parties; and “be aware of who has physical access to your vehicle.”

That last bit of advice might work for people with long driveways, but the rest of us can’t do much about who might be able to walk by our cars on streets and in parking lots.

“While these tips may seem innocuous, they do show the limitations that law enforcement and consumers have in combating the car hacking threat,” said Tyler Cohen Wood, Cyber Security Advisor of Inspired eLearning.  “With the ever-increasing implementation of Internet of Things devices, including devices installed in newer cars, it’s a real challenge for law enforcement to identify different threat vectors associated with vehicle hacking.  There is no real standard for Internet of Things devices from a vehicle standpoint—each automobile manufacturer offers different types of devices as options in vehicles, from entertainment and navigation systems to remote ignition starting devices.  There is no industry standard for operating systems or security protocols on these devices, so it’s difficult for law enforcement to identify the specific threats that the devices pose to the public.”

So what else should you do?  Putting your car “keys” in the freezer is probably a bad idea; it will likely create more problems than it solves.  You might damage the very expensive key, for example, to mitigate a threat that is still perceived as low. But it wouldn’t hurt to take great care with where you leave the key. If you park directly in front of your front door, perhaps you shouldn’t leave the key right there.  Otherwise, read the local police blotter and talk to neighbors about street crime.

Most of all, make sure you really do lock your car doors.


Anti-encryption opportunists seize on Paris attacks; don't be fooled

Bob Sullivan

It’s natural to look for a scapegoat after something terrible happens, like this: If only we could read encrypted communications, perhaps the Paris terrorist attacks could have been stopped.  It’s natural, but it’s wrong.  Read every story you see about Paris carefully and look for evidence that encryption played a role.

There’s a reason The Patriot Act was passed only a few weeks after 9-11, and it wasn’t because Congress was finally able to act quickly and efficiently on something.  The speed came because many elements of the Patriot Act had already been written, and forces with an agenda were sitting in wait for a disaster so they could push that agenda.  That is wrong.

So here we are now, once again faced with political opportunism after an unthinkable human tragedy, and we must remain strong in the face of it.  There is no simple answer to terrorism, and we should all know this by now.  And so there must be no simple discussion about the use of encryption in the Western world.  The debate requires a bit of thoughtful analysis, and we owe it to everyone who ever died for a free society to have this debate thoughtfully.

The basics are this: Only recently, computing power has become inexpensive enough that ordinary citizens can scramble messages so effectively that even governments with near-infinite resources cannot crack them. Such secret-keeping powers scare government officials, and for good reason.  They can, theoretically, allow criminals and terrorists to communicate with a cloak of invisibility.  Not surprisingly, several government officials have called for a method that would allow law enforcement to crack these codes.  There are many schemes for this, but they all boil down to something akin to creating a master key that would be generated by encryption-making firms and given to government officials, who would use the key only after a judge granted permission.  This is sometimes referred to as creating “backdoors” for law enforcement.

Governments can already listen in on telephone conversations after obtaining the proper court order.  What’s the difference with a master encryption key?

Sadly, it’s not so simple.

For starters, U.S. firms that sell products using encryption would create backdoors, if forced by law.  But products created outside the U.S.?  They’d create backdoors only if their governments required it.  You see where I’m going. There will be no global master key law that all corporations adhere to.  By now I’m sure you’ve realized that such laws would only work to the extent that they are obeyed.  Plenty of companies would create rogue encryption products, now that the market for them would explode.  And of course, terrorists are hard at work creating their own encryption schemes.

There’s also the problem of existing products, created before such a law. These have no backdoors and could still be used. You might think of this as the genie out of the bottle problem, which is real. It’s very,  very hard to undo a technological advance.

Meanwhile, creation of backdoors would make us all less safe.  Would you trust governments to store and protect such a master key?  Managing defense of such a universal secret-killer is the stuff of movie plots.  No, the master key would most likely get out, or the backdoor would be hacked.  That would mean illegal actors would still have encryption that worked, but the rest of us would not. We would be fighting with one hand behind our backs.

In the end, it’s a familiar argument: disabling encryption would only stop people from using it legally. Criminals and terrorists would still use it illegally.

Is there some creative technological solution that might help law enforcement find terrorists without destroying the entire concept of encryption? Perhaps, and I’d be all ears. I haven’t heard it yet.

Only a few weeks after 9-11, a software engineer who told me he was working for the FBI contacted me and told me he was helping create a piece of software called Magic Lantern.  It was a type of computer virus, a Trojan horse keylogger, that could be remotely installed on a target’s computer and steal passphrases used to open up encrypted documents.  The programmer was uncomfortable with the work and wanted to expose it. I wrote the story for msnbc.com, and after denying the existence of Magic Lantern for a while, the FBI ultimately conceded using this strategy.  While we could debate the merits of Magic Lantern, at least it constituted a targeted investigation — something far, far removed from rendering all encryption ineffective.

For a far more detailed examination of these issues, you should read Kim Zetter at Wired, as I always do. Then make up your own mind.

Don’t let a politician or a law enforcement official with an agenda make it for you. Most of all, don’t allow someone who capitalizes on tragedy a mere hours after the first blood is spilled — an act so crass it disqualifies any argument such a person makes — to influence your thinking.