Medical Device Security: An Industry Under Attack and Unprepared to Defend

Larry Ponemon

Ponemon Institute is pleased to present the findings of Medical Device Security: An Industry Under Attack and Unprepared to Defend, sponsored by Synopsys. (Click here for the full report.) The purpose of this research is to understand the risks that insecure medical devices pose to clinicians and patients. We surveyed both device makers and healthcare delivery organizations (HDOs) to determine whether the two groups are in alignment about the need to address risks to medical devices. To ensure knowledgeable responses, participants in this research have a role in or involvement with the assessment of, and contribution to, the security of medical devices.

Please join us on Wednesday, June 21, at 9 AM PT/12 PM ET to learn more about the findings of this research: https://www.brighttalk.com/webcast/11447/263163

In the context of this research, medical devices are any instrument, apparatus, appliance, or other article, whether used alone or in combination, including the software intended by its manufacturer to be used for diagnostic and/or therapeutic purposes. Medical devices vary according to their intended use. Examples range from simple devices such as medical thermometers to those that connect to the Internet to assist in the conduct of medical testing, implants, and prostheses.

The following medical devices are manufactured or used by the organizations represented in this research: robots, implantable devices, radiation equipment, diagnostic & monitoring equipment, networking equipment designed specifically for medical devices and mobile medical apps.

How vulnerable are these medical devices to attack, and why do both device makers and HDOs lack confidence in their security? Our survey shows that 67 percent of device makers believe an attack on one or more of the medical devices built by their organization is likely, and 56 percent of HDOs believe such an attack is likely. Despite the likelihood of an attack, only 17 percent of device makers and 15 percent of HDOs are taking significant steps to prevent attacks. Further, only 22 percent of HDOs say their organizations have an incident response plan in place in the event of an attack on vulnerable medical devices, and 41 percent of device makers say such a plan is in place.

In fact, patients have already suffered adverse events and attacks. Thirty-one percent of device makers and 40 percent of HDOs represented in this study say they are aware of such incidents. Of these respondents, 38 percent in HDOs say they are aware of inappropriate therapy or treatment delivered to a patient because of an insecure medical device, and 39 percent of device makers confirm that attackers have taken control of medical devices.

The research reveals the following risks to medical devices and explains why clinicians and patients are exposed.

Both device makers and users have little confidence that patients and clinicians are protected. Neither group is confident that the security protocols or architecture built into medical devices provide clinicians and patients with protection. HDOs are more confident than device makers that they can detect security vulnerabilities in medical devices (59 percent vs. 37 percent).

The use of mobile devices is affecting the security risk posture in healthcare organizations. Clinicians depend upon their mobile devices to more efficiently serve patients. However, 60 percent of device makers and 49 percent of HDOs say the use of mobile devices in hospitals and other healthcare organizations is significantly increasing security risks.

Medical devices are very difficult to secure. Eighty percent of medical device manufacturers and users in this study say medical devices are very difficult to secure. Further, only 25 percent of respondents say security protocols or architecture built inside devices adequately protects clinicians and patients.

In many cases, budget increases to improve the security of medical devices would occur only after a serious hacking incident. Respondents believe their organizations would increase the budget only if a potentially life-threatening attack took place. Only 19 percent of HDOs say concern over the potential loss of customers/patients due to a security incident would result in more funds for medical device security.

Medical device security practices in place are not the most effective. Both manufacturers and users rely upon security requirements instead of more thorough practices such as security testing throughout the SDLC, code review and debugging systems, and dynamic application security testing. As a result, both manufacturers and users concur that medical devices contain vulnerable code due to a lack of quality assurance and testing procedures and rush-to-release pressures on the product development team.

Most organizations do not encrypt traffic among IoT devices. Only a third of device makers say their organizations encrypt traffic among IoT devices, and 29 percent of HDOs deploy encryption to protect data transmitted from medical devices. Of these respondents, only 39 percent of device makers and 35 percent of HDOs use key management systems for encrypted traffic.
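
The finding above touches on two distinct practices: encrypting device data in transit and managing the keys used for that encryption rather than baking them into device firmware. As a rough, hypothetical sketch of what that looks like in code (nothing here comes from the study; the fetch_data_key helper and the device identifier are placeholders), a device might authenticate and encrypt each telemetry reading with AES-GCM using a key obtained from a key management service:

```python
# Hypothetical sketch: encrypting a medical-device telemetry reading before
# transmission, with the key sourced from a key management system instead of
# being hard-coded on the device. Not taken from the report.
import os
import json

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography


def fetch_data_key() -> bytes:
    """Placeholder for a key-management lookup (e.g., a KMS or HSM call).
    Generating a throwaway key here just keeps the example self-contained."""
    return AESGCM.generate_key(bit_length=256)


def encrypt_reading(reading: dict, key: bytes, device_id: bytes) -> bytes:
    """Encrypt one reading with AES-256-GCM (authenticated encryption)."""
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)  # must be unique per message for GCM
    ciphertext = aesgcm.encrypt(nonce, json.dumps(reading).encode(), device_id)
    return nonce + ciphertext  # receiver splits the 12-byte nonce off the front


if __name__ == "__main__":
    key = fetch_data_key()
    blob = encrypt_reading({"heart_rate": 72, "ts": "2017-06-01T09:00:00Z"},
                           key, device_id=b"monitor-0042")
    print(f"{len(blob)} bytes ready to transmit")
```

On the receiving side, decrypting with the same key and associated data also verifies integrity, which is why an authenticated mode is generally preferred over bare encryption for device traffic.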

Medical devices contain vulnerable code because of a lack of quality assurance and testing procedures as well as the rush to release. Fifty-three percent of device makers and 58 percent of HDOs say a lack of quality assurance and testing procedures leads to vulnerabilities in medical devices. Device makers also cite rush-to-release pressure on the product development team (50 percent); HDOs cite accidental coding errors (52 percent).

Testing of medical devices rarely occurs. Only 9 percent of manufacturers and 5 percent of users say they test medical devices at least annually. In fact, 53 percent of HDOs either do not test (45 percent) or are unsure whether testing occurs (8 percent), and 43 percent of manufacturers either do not test (36 percent) or are unsure whether testing takes place (7 percent).

Accountability for the security of medical devices manufactured or used is lacking. While 41 percent of HDOs believe they are primarily responsible for the security of medical devices, almost one-third of both device makers and HDOs say no one person or function is primarily responsible.

Manufacturers and users of medical devices are not in alignment about current risks to medical devices. The findings reveal a serious disconnect between the perceptions of device manufacturers and users about the state of medical device security, a disconnect that could prevent collaboration in achieving greater security. Some examples, as detailed in this report, include the following: HDOs are more likely to be concerned about medical device security and to raise concerns about risks. They are also far more concerned about the medical industry’s lack of action to protect patients and users of medical devices.

How effective is FDA guidance on the security of medical devices? Only 44 percent of HDOs follow guidance from the FDA to mitigate or reduce inherent security risks in medical devices, and slightly more than half of device makers (51 percent) follow it. Only 24 percent of device makers have recalled a product because of security vulnerabilities, with or without FDA guidance, and only 19 percent of HDOs have done so.

Most device makers and users do not disclose the privacy and security risks of their medical devices. Sixty percent of device makers and 59 percent of HDOs do not share information about security risks with clinicians and patients. When they do, it is primarily through contractual agreements or policy disclosures. Such disclosures would typically include information about how patient data is collected, stored, and shared and how the security of the device could be affected.

Click here to read the study’s detailed findings. 

Remarkable look inside the underground 'fake news' economy shows how lucrative truth hacking can be

Bob Sullivan

Fake news is the new computer virus.

That’s the conclusion I came to when reading a remarkable new report from computer security firm Trend Micro (PDF). If you doubt the massive efforts of underground “hackers” to influence you, and the massive cash they can make doing so, flip through the pages of this report. A few years ago, it could have been written about the spam, computer virus, or click fraud economies. Today, “news” has been weaponized, both for political gain and for profit.

While Americans bicker over who might have gained the most from hacking in our last presidential campaign, they are missing the larger point: a massive infrastructure has been put in place from China to Russia to India to make money off polarization.  The truth is for sale in a way that most people couldn’t have imagined just a few years ago. As the report crucially notes: there’s no such thing as “moderate” fake news.  Whichever side you are on, if you play in extremism, you are probably helping make these truth hackers rich.

Here are some highlights from the report, but you should really read it yourself.

“(Russian)  forums offer services for each stage of the campaign—from writing press releases, promoting them in news outlets, to sustaining their momentum with positive or negative comments, some of which can be even supplied by the customer in a template. Advertisements for such services are frequently found in both public and private sections of forums, as well as on banner ads on the forums themselves.”

Many services have a crowdsourcing model, meaning users can either buy credits for clicks or “earn” them through participating in others’ campaigns.

“(One service) allows contributors to promote internet sites and pages, flaunting a 500,000-strong registered user base that can provide traffic (and statistics) from real visitors to supported platforms. It uses a coin system, which is also available in the underground.”

A price list claims the service can make a video appear on YouTube’s home page for about $600, or get 10,000 site visitors for less than $20.

Such services aren’t limited to Russia, of course.  According to the report, a Middle Eastern firm offers, “auto-likes on Facebook (for) a monthly subscription of $25; 2,200 auto-likes from Arabic/Middle East based users fetch $150 per month…(another service) has a customizable auto-comment function, with templates of comments customers can choose from. Prices vary, from $45 per month for eight comments per day, to $250 for 1,000 comments in a month.”

In China, the report says, “For … less than $2,600 spent on services in the Chinese underground, a social media profile can easily fetch more than 300,000 followers in a month. ”

It goes on to claim that fake news campaigns have incited riots and caused journalists to be attacked.  Here’s an example of the latter:

“If an attacker aims to silence a journalist from speaking out or publishing a story that can be detrimental to an attacker’s agenda or reputation, he can also be singled out and discredited by mounting campaigns against him.

“An attacker can mount a four-week fake news campaign to defame the journalist using services available in gray or underground marketplaces. Fake news unfavorable to the journalist can be bought once a week, which can be promoted by purchasing 50,000 retweets or likes and 100,000 visits. These cost around $2,700 per week. Another option for the attacker is to buy four related videos and turn them into trending videos on YouTube, each of which can sell for around $2,500 per video.

“The attacker can also buy comments; to create an illusion of believability, the purchase can start with 500 comments, 400 of which can be positive, 80 neutral, and 20 negative. Spending $1,000 for this kind of service will translate to 4,000 comments.

“After establishing an imagined credibility, an attacker can launch his smear campaign against his target.

“Poisoning a Twitter account with 200,000 bot followers will cost $240. Ordering a total of 12,000 comments with most bearing negative sentiment and references/links to fake stories against the journalist will cost around $3,000. Dislikes and negative comments on a journalist’s article, and promoting them with 10,000 retweets or likes and 25,000 visits, can cost $20,400 in the underground.

“The result? For around $55,000, a user who reads, watches, and further searches the campaign’s fake content can be swayed into having a fragmented and negative impression of the journalist. A more daunting consequence would be how the story, exposé or points the journalist wanted to divulge or raise will be drowned out by a sea of noise fabricated by the campaign.”
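
Tallying just the line items quoted above gives a sense of how the report arrives at its “around $55,000” figure; the remainder presumably comes from items not excerpted here. A quick back-of-the-envelope sum:

```python
# Back-of-the-envelope tally of the campaign costs quoted above.
# Figures come only from the excerpt; the report's ~$55,000 total
# evidently includes items not reproduced here.
line_items = {
    "weekly fake story + 50k retweets/likes + 100k visits, 4 weeks @ $2,700": 4 * 2700,
    "four 'trending' YouTube videos @ ~$2,500 each":                          4 * 2500,
    "4,000 seeded comments (mostly positive, for credibility)":               1000,
    "200,000 bot followers on Twitter":                                       240,
    "12,000 mostly negative comments linking to fake stories":                3000,
    "dislikes/negative comments promoted with 10k retweets/likes, 25k visits": 20400,
}

total = sum(line_items.values())
print(f"Quoted line items: ${total:,}")  # $45,440 of the report's ~$55,000
```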

The key for all these attacks, the report notes, is appealing to the more extreme nature of our political discourse today.

“In the realm of political opinion manipulation, this tends to be in the form of highly partisan content. Political fake news tends to align with the extremes of the political spectrum; ‘moderate’ fake news does not really exist.”

The report offers tips for news consumers to avoid being unwitting partners in a fake news campaign. The target of fake news is the general public, the report notes, so “Ultimately, the burden of differentiating the truth from untruth falls on the audience.”

Here are some signs readers can look for to tell whether the news they’re reading is fake:
• Hyperbolic and clickbait headlines
• Suspicious website domains that spoof legitimate news media
• Misspellings in content and awkwardly laid-out websites
• Doctored photos and images
• Absence of publishing timestamps
• Lack of author, sources, and data