
The Value of Artificial Intelligence in Cybersecurity

Larry Ponemon

Ponemon Institute is pleased to present The Value of Artificial Intelligence in Cybersecurity sponsored by IBM Security. The purpose of this research is to understand trends in the use of artificial intelligence and how to overcome barriers to full adoption.

Ponemon Institute surveyed 603 IT and IT security practitioners in US organizations that have either deployed or plan to deploy AI as part of their cybersecurity program or infrastructure. According to the findings, these participants strongly believe in the importance and value of AI but admit that getting maximum value from these technologies is a challenge.

The adoption of AI can have a very positive impact on an organization’s security posture and bottom line. The biggest benefit is the increase in speed of analyzing threats (69 percent of respondents), followed by an acceleration in the containment of infected endpoints/devices and hosts (64 percent of respondents). Because AI reduces the time to respond to cyber exploits, organizations can potentially save an average of more than $2.5 million in operating costs.

In addition to greater efficiencies in analyzing and containing threats, 60 percent of respondents say AI identifies application security vulnerabilities. In fact, 59 percent of respondents say that AI increases the effectiveness of their organizations’ application security activities.

To improve the effectiveness of AI technologies, organizations should focus on the following three activities.

Attract and retain IT security practitioners with expertise in AI technologies. AI may improve productivity, but it will also increase the need for talented IT security personnel. Fifty-two percent of respondents say AI will increase the need for in-house expertise and dedicated headcount.

Simplify and streamline security architecture. While some complexity in an IT security architecture is expected in order to deal with the many threats facing organizations, too much complexity can impact the effectiveness of AI. Fifty-six percent of respondents say their organizations need to simplify and streamline security architecture to obtain maximum value from AI-based security technologies. Sixty-one percent say it is difficult to integrate AI-based security technologies with legacy systems.

Supplement IT security personnel with outside expertise. Fifty percent of respondents say it requires too much staff to implement and maintain AI-based technologies, and 57 percent of respondents say outside expertise is necessary to maximize the value of AI-based security technologies.

As the adoption of AI technologies matures, organizations become more committed to investing in them.

In this research, 139 respondents of the total sample of 603 respondents self-reported that their organizations have either fully deployed AI (55) or partially deployed AI (84). We refer to these respondents as AI users. We conducted a deeper analysis of how these respondents perceive the benefits and value of AI. Following are some of the most interesting differences between AI users and the overall sample of respondents who are in the planning stages of their deployment of AI. 

  • AI users are more likely to appreciate the benefits of AI technology. Seventy-one percent of AI users vs. 60 percent of the overall sample say an important benefit is the ability of AI to deliver deeper security than if organizations relied exclusively on their IT security staff.
  • AI users are more likely to believe these technologies simplify the process of detecting and responding to application security threats. As a result, AI users are more committed to AI technologies.
  • While AI users are more likely to believe AI will increase the need for in-house expertise and dedicated headcount (60 percent of AI users vs. 52 percent in the overall sample), these respondents are more aware than the overall sample that AI benefits their organization because it increases the productivity of security personnel.
  • AI has reduced application security risk in organizations that have achieved greater deployment of these technologies. When asked about the effectiveness of AI in reducing application security risk, 69 percent of AI users say these technologies have increased or significantly increased the effectiveness of their application security activities, vs. 59 percent of respondents in the overall sample.
  • AI technologies tend to decrease the complexity of organizations’ security architecture. Fifty-six percent of respondents in organizations that have more fully deployed AI report that instead of adding complexity, AI actually decreases it. Only 24 percent of AI users say it increases complexity.
  • As the use of AI increases, IT security staff become more knowledgeable about identifying areas where advanced technologies would be most beneficial. Fifty-six percent of AI users rate their organizations’ ability to accurately identify areas in their security infrastructure where AI and machine learning would create the most value as very high.
  • AI improves the ability to detect previously “undetectable” zero-day exploits. On average, AI users are able to detect 63 percent of previously “undetectable” zero-day exploits. In contrast, respondents in the overall sample say AI can increase detection by an average of 41 percent.

Download the entire report from IBM here. 

The newest, most devastating cyber-weapon: ‘patriotic trolls’

Bob Sullivan

Governments around the world are waging war on a new battleground: Social Media.  Their fighting force is an army of trolls. And if you are reading this story, you’ve probably been drafted.

Troll armies have helped overthrow governments and control populations. The playbook has been repeated in places like Turkey, India, and the Philippines. Once installed, trolls become engines of state propaganda, shouting down and crowding out voices of dissension.

While America is embroiled in an endless back-and-forth about Russian election meddling, a broader development has largely been missed: The 2016 election was just a data point in a much larger, more alarming trend. Trolling has become perhaps the most powerful weapon in 21st-century warfare.

If free speech has a weakness, this is it.  And it’s being used against democratic societies across the globe.

Sometimes called “patriotic trolling,” it’s a stunning reversal from the way dictatorial regimes used to handle the information superhighway — by shutting off the on ramps.  Increasingly, those in power are instead flooding the highway with misinformation, overwhelming it with noisy and malicious traffic.  It’s easier, and far cheaper, to control populations with a hashtag than the barrel of a gun.

The Great Firewall is being replaced by the Great Troll.

“States have realized that the internet offers new and innovative opportunities for propaganda dissemination that, if successful, obviate the need for censorship. This approach is one of ‘speech itself as a censorial weapon,’ ” write authors Carly Nyst and Nicholas Monaco in a chilling new report called “State-Sponsored Trolling: How Governments Are Deploying Disinformation as Part of Broader Digital Harassment Campaigns.”  The report was published by the Institute for the Future, which says it is a non-partisan research group based in Palo Alto, California.   “States are seizing upon declining public trust in traditional media outlets and the proliferation of new media sources and platforms to control information in new ways. States are using the same tools they once perceived as a threat to deploy information technology as a means for power consolidation and social control.”

What does state-sponsored trolling look like? Government officials and political leaders encourage personal attacks on opponents and civil rights groups.  They sow seeds of disbelief around the work of traditional watchdogs, like judges and journalists.  They stoke public vitriol and cynicism among citizens to shield themselves and their policies from traditional scrutiny and debate.

In some cases, professional trolls are hired to sow seeds of doubt and frustration. Other regimes enlist volunteers in an organized “cyber militia” to harass journalists and civil rights groups.  But in many cases, citizens take up the dirty work of trolling with little or no prompting from those in power.

You probably see evidence of this kind of behavior every day on your social media feeds: people lining up to lob personal attacks at those who disagree. That’s low-level trolling, however. The stakes get higher, fast.

Bloomberg recently investigated the phenomenon worldwide and came up with a long list of examples:

“In Venezuela, prospective trolls sign up for Twitter and Instagram accounts at government-sanctioned kiosks in town squares and are rewarded for their participation with access to scarce food coupons, according to Venezuelan researcher Marianne Diaz of the group @DerechosDigitales. A self-described former troll in India says he was given a half-dozen Facebook accounts and eight cell phones after he joined a 300-person team that worked to intimidate opponents of Prime Minister Narendra Modi. And in Ecuador, contracting documents detail government payments to a public relations company that set up and ran a troll farm used to harass political opponents.”

If you are shocked by the spread of conspiracy theories like Pizzagate online — and the emergence of a cottage industry that profits from the spread of such crazy ideas — don’t be. It’s not an accident, the report says.

“The new digital political landscape is one in which the state itself sows seeds of distrust in the media, fertilizes conspiracy theories and untruths, and harvests the resulting disinformation to serve its own ends,” the state-sponsored trolling report says.  “States have shifted from seeking to curtail online activity to attempting to profit from it, motivated by a realization that the data individuals create and disseminate online itself constitutes information translatable into power.”

The authors spent 18 months examining widespread trolling efforts in seven countries around the world: Azerbaijan, Bahrain, Ecuador, the Philippines, Turkey, Venezuela … and yes, the United States.

“Such attacks appear organic by design, both to exacerbate their intimidation effects on the target and to distance the attack from state responsibility,” the report says.  “However, in the cases we studied, attributing trolling attacks to states is not only possible, it is also critical to understanding and reducing the harmful effects of this trend on democratic institutions.”

The report cites multiple examples of government propaganda by trolling:

  • In China, members of the “50 Cent Army” are paid nominal sums to engage in nationalistic propaganda.
  • In Turkey, journalist Ceyda Karan was subjected to a three-day-long trolling campaign in which two high-profile media actors played a key role: pro-Erdoğan journalist Fatih Tezcan, who has more than 560,000 followers, and Bayram Zilan, a self-declared “AKP journalist” with 49,000 followers. Tezcan and Zilan were central players in a campaign that involved 13,723 tweets against Karan sent by 5,800 Twitter users.
  • The Twitter account of Indian prime minister Narendra Modi follows at least twenty-six known troll accounts, and the prime minister has hosted a reception attended by many of the same trolls.
  • Filipino president Rodrigo Duterte has given bloggers active in online harassment campaigns accreditation to cover presidential foreign and local trips. Duterte groomed a cyber militia of around five hundred volunteers during his election campaign, eventually promoting key volunteers to government jobs after his election. (For more on Duterte’s use of trolls, read this Bloomberg story.)
  • The Turkish government maintains a volunteer group of six thousand “social media representatives” spread across Turkey who receive training in Ankara in order to promote party perspectives and monitor online discussion.
  • In Venezuela, former vice president Diosdado Cabello, who currently hosts the TV show Con el Mazo Dando (Hitting with the Sledgehammer) on the Venezuelan state-owned TV channel VTV8, used his TV show and an associated Telegram channel to encourage Twitter attacks on opposition politician Luis Florido using the hashtag #FloridoEresUnPajuo (“Florido, you’re a lying idiot”). Attacks on Florido lasted for days; they were vitriolic and crude and frequently accused him of being a traitor to Venezuela.
  • In Russia, state-sponsored trolling has been professionalized, with “troll farms” operating in a corporatized manner to support government social media campaigns. The most well-known troll farm is the Internet Research Agency (IRA), but there are reportedly scores of such organizations all around the country.

Trolling efforts work in part because the trolls have access to data that helps them game social media algorithms; their posts fool Facebook and Twitter into giving them more prominence. That worked during the U.S. presidential campaign, when the Russian troll group Heart of Texas gained 200,000 likes soon after launch – more than the official state GOP page.

“In one form of algorithm gaming, trolls hijack hashtags in order to drown out legitimate expression,” the report says.

Don’t be part of a troll army

If all this sounds to you like a fairly traditional propaganda campaign, I agree.  It’s just far more targeted, thanks to the information age. And Americans seem particularly vulnerable to propaganda at the moment, for a variety of reasons. But you don’t have to be.

If you don’t want to be part of the troll/propaganda army, what should you do?  Do all the things your high school English teacher said to do. Don’t be a troll.  Don’t say things just to get an emotional reaction, because you like setting people’s hair on fire. Always provide evidence, stick to facts, and don’t be drawn into ad hominem attacks.  Rise above them. When you see a vitriolic post by someone whose Twitter handle includes random strings of numbers, or who otherwise has a thin social media profile, assume you are dealing with a troll – even if the person seems to be on your side. Remember, America’s enemies simply want to sow discord; they don’t really care whose “side” they’re on. At a bare minimum, don’t repeat things you haven’t verified yourself just because you agree with the sentiment expressed.  Read numerous independent sources before passing on information.

Meanwhile, if you see or hear someone dismissing independent media with over-the-top criticisms, question their motives. Debating the facts is healthy. Questioning someone’s integrity and patriotism, or persuading others to ignore an entire group or industry, should be viewed with deep skepticism.

Here’s how you recognize trolling, according to the Institute for the Future report:

  • Accusations of collusion with foreign intelligence agencies.
  • Accusations of treason.
  • Use of violent hate speech as a means of overwhelming and intimidating targets. Every female target of government-backed harassment receives rape threats.
  • Creation of elaborate cartoons and memes.
  • Trolls often accuse targets of the very behaviors the state is engaging in. In numerous countries, for example, trolls make claims that targets are affiliated with Nazism or fascist elements. Politicians and their proxies use claims of “fake news” as a form of dog whistling to state-sponsored trolls.