Category Archives: Uncategorized

Closing the IT security gap: What are high performers doing differently?

This year, 2023, marks the beginning of a new age of data-driven transformation. Security and IT teams must scale to keep pace with the needs of the business and ensure the protection of any data, anywhere. Modern hybrid cloud landscapes present complex environments and daunting security challenges for the security and IT teams responsible for protecting data, apps and workloads across a heterogeneous landscape of data centers, hybrid clouds and edge computing devices. As the volume of data generated by IoT devices and systems grows exponentially, closing the IT security gap is proving elusive and frustrating.

The 2023 Global Study on Closing the IT Security Gap: Addressing Cybersecurity Gaps from Edge to Cloud, now in its third year, is sponsored by Hewlett Packard Enterprise (HPE) to look deeply into the critical actions needed to close security gaps and protect valuable data. In this year’s research, Ponemon Institute surveyed 2,084 IT and IT security practitioners in North America, the United Kingdom, Germany, Australia, Japan, and for the first time, France. All participants in this research are knowledgeable about their organizations’ IT security and strategy and are involved in decisions related to the investment in technologies.

Security and IT teams face the challenge of trying to manage operational risk without preventing their organizations from growing and being innovative. In this year’s study, only 44 percent of respondents say they are very effective or highly effective in keeping up with a constantly changing threat landscape. However, as shown in this research, there are strategies security and IT teams can implement to defend against threats in complex edge-to-cloud environments.

The IT security gap is not shrinking because of the lack of visibility and control into user and device activities. As the proliferation of IoT devices continues, respondents say identifying and authenticating IoT devices accessing their network is critical to their organizations’ security strategy (67 percent of respondents). However, 63 percent of respondents say their security teams lack visibility and control into all the activity of every user device connected to their IT infrastructure.

How high-performing teams are closing the IT security gap

Twenty percent of respondents self-report that their organizations are highly effective in keeping up with a constantly changing threat landscape and closing their IT security gap (responses of 9 or higher on a scale of 1 = not effective to 10 = highly effective). We refer to these organizations as “high performers.” In this section, we analyze what these organizations are doing differently to achieve a more effective cybersecurity posture and close the IT security gap, as compared to the 80 percent of respondents in the other organizations represented in this research.

As evidence of their effectiveness, high-performing organizations had fewer security breaches in the past 12 months that resulted in data loss or downtime. Almost half of respondents in other organizations (46 percent) say their organizations had between seven and more than 10 incidents in just the past 12 months. In contrast, only 35 percent of high performers say their organizations had between seven and more than 10 security incidents.

High-performing organizations have a larger IT security function. Fifty-four percent of respondents in high-performing organizations say their IT security function has between 21 and more than 50 employees. Only 44 percent of respondents in other organizations report the same range of employees in IT security.

High performers are more likely to control the implementation of zero trust within a Network as a Service (NaaS) deployment. Of those familiar with their organization’s zero-trust strategy, more high performers (36 percent of respondents) than others (28 percent of respondents) say their organization is responsible for implementing zero trust within a NaaS. Only 20 percent of high performers say it is the responsibility of the NaaS provider, and 10 percent say a third-party managed service provider is responsible.

High performers centralize decisions about investments in security solutions and architectures. Sixty percent of high performers say either the network team (30 percent) or the security team (30 percent) is the primary decision maker about security solutions and architectures. Only 15 percent say both functions are responsible.

More high performers have deployed or plan to deploy the SASE architecture. Forty-nine percent of high performers have deployed (32 percent) or plan to deploy (17 percent) the SASE architecture. In contrast, only 39 percent of respondents in the other organizations have deployed (24 percent) or plan to deploy (15 percent) the SASE architecture.

 More high performers have achieved visibility of all users and devices. High performers are slightly more confident (38 percent of respondents) than other respondents (30 percent of respondents) that their organizations know all the users and devices connected to their networks all the time.

 Far more high performers are positive about the use of Network Access Control (NAC) solutions and their importance to proving compliance. These respondents are more likely to use these solutions for IoT security. Fifty-one percent of high performers say NAC solutions are an essential tool for proof of compliance vs. 42 percent of respondents in other organizations. Fifty-five percent of high performers vs. 38 percent of other respondents say NAC solutions are best delivered by the cloud.

 High performers recognize the importance of the integration of NAC functionality with the security stack. Respondents were asked to rate the importance of the integration of NAC functionality with other elements of the security stack on a scale from 1 = not important to 10 = highly important. Sixty-two percent of high performers vs. 54 percent of other respondents say such integration is important.

High performers are more likely to believe continuous monitoring of network traffic and real-time solutions will reduce IoT risks. Sixty-two percent of high performers vs. 52 percent of other respondents say continuous monitoring of network traffic for each IoT device to spot anomalies is required. Forty-seven percent of high performers vs. 38 percent of other respondents say real-time solutions to stop compromised or malicious IoT activity are required.

High performers are more likely to require current security vendors to supply new security solutions as compute and storage move from the data center to the edge. Forty percent of high performers vs. 30 percent of other respondents say their organizations will require current security vendors to supply new security solutions. Respondents in other organizations say their infrastructure providers will be required to supply protection (45 percent vs. 34 percent in high-performing organizations).

High performers are more likely to require servers that leverage security certificates and infrastructures that leverage chips and/or certificates. The research reveals significant differences regarding compute and storage requirements. Specifically, high performers require servers that leverage security certificates to verify that the system has not been compromised during delivery (67 percent vs. 60 percent in other organizations). High performers are more likely to require infrastructure that leverages chips and/or certificates to determine if the system has been compromised during delivery (64 percent vs. 56 percent in other organizations). High performers also are more likely to believe data protection and recovery are key components of their organizations’ security and resiliency strategy (58 percent vs. 50 percent in other organizations).

Conclusion: Recommendations to close the IT security gap

According to the research, the most effective steps to minimize stealthy or hidden threats within the IT infrastructure are the adoption of technologies that automate infrastructure integrity verification and the implementation of network segmentation. The research also reveals growing adoption of zero trust and Secure Access Service Edge (SASE) architectures to manage vulnerabilities and user access. Important activities for achieving a stronger level of IoT security, according to the research, are the continuous monitoring of network traffic for each IoT device to spot anomalies and real-time solutions to stop compromised or malicious IoT activity.
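The continuous, per-device monitoring the research recommends can be illustrated with a minimal sketch (not taken from the report; the function name, baseline figures and threshold are illustrative assumptions): a device is flagged when its current traffic volume deviates sharply from its own historical baseline.

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag traffic volume that deviates more than `threshold`
    standard deviations from a device's historical baseline.
    (Illustrative sketch; real deployments use richer features.)"""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hourly byte counts for one hypothetical IoT device
baseline = [1200, 1150, 1300, 1250, 1180, 1220]

print(is_anomalous(baseline, 1240))    # within the normal range
print(is_anomalous(baseline, 250000))  # sudden spike worth investigating
```

A real-time response, as described above, would pair a detector like this with an enforcement step, such as quarantining the device via NAC policy.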

Other actions to be considered in the coming year include the following:

  • Require servers that leverage security certificates and infrastructures that leverage chips and/or certificates.
  • Invest in having a fully staffed and well-trained IT security function. Such expertise is critical to ensuring data protection and recovery are key components of an organization’s security and resiliency strategy. A lack of skills and expertise is also the primary deterrent to adopting a zero-trust framework.
  • Consider centralizing decisions about investments in security solutions and architectures as high performers in this research tend to do. A concern of respondents is the inability of IT and IT security teams to agree on the activities that should be prioritized to close the IT security gap. This concern is exacerbated by the siloed or point security solutions in organizations.
  • Deploy Network Access Control (NAC) solutions to improve IoT and BYOD security. These solutions support network visibility and access management through policy enforcement for the devices and users of computer networks. NAC solutions can improve visibility and verify the security of all apps and workloads.

Click here to download the full report from Hewlett Packard Enterprise

Hundreds of supplement companies warned about ads; is this any way to protect consumers?

Bob Sullivan

I’m often asked, “Isn’t there a truth in advertising law?!!??” by consumers who feel cheated by a company that embedded a gotcha in its advertisements.  My sad answer is often some variation of “No, not really.” At least that’s been the on-the-ground reality for some time.  There’s a glimmer of hope that things might be changing, however. The Federal Trade Commission recently sent out hundreds of letters warning companies that sell OTC drugs, homeopathic products, or dietary supplements that they’re being watched for potentially bogus ads — which is both a hopeful sign and a demonstration of just how weak consumer protection efforts are in the USA.

First, to get this out of the way, I’m not a lawyer, and there are actually many, many laws that govern advertising — some generic, some very industry specific. But as I say with only a hint of sarcasm, everything is legal until there’s a lawsuit or an arrest, and that’s the reality most consumers face every day.  Basically, TV and radio wouldn’t exist if it weren’t for aggressive snake-oil pitches from companies claiming their lab-tested products will make you younger, or stronger, or more focused — most backed by junk “science,” if at all.  But these firms have been given the tacit green light for decades by understaffed federal agencies that could hardly pick one in 1,000 battles to fight. And even worse, they’ve often seen a wink and a nod from agencies controlled by a hands-off philosophy derived from a perverted notion of how free markets are supposed to operate.

That’s why I’m encouraged by the recent announcement that the FTC had sent out a pile of so-called “Notice of Penalty Offenses” letters about “substantiation of product claims.” The approximately 700 recipients — large and small firms alike — have been put on notice that the FTC is worried they might be making claims that deceive consumers. The letters do not constitute a legal finding, but they do include warnings that should such a finding occur, the penalty could be about $50,000 per incident.  And the letters include reminders of what potential violations look like. Like this:

“Failing to have adequate support for objective product claims; claims relating to the health benefits or safety features of a product; or claims that a product is effective in the cure, mitigation, or treatment of any serious disease. These unlawful acts and practices also include: misrepresenting the level or type of substantiation for a claim, and misrepresenting that a product claim has been scientifically or clinically proven.”

A particular pet peeve of mine in the age of social media is the deceptive use of consumer reviews and other endorsements.  Apparently, that’s a pet peeve of the current FTC too, because the warning letters also include reminders about that:

“Such unlawful acts and practices include: falsely claiming an endorsement by a third party; misrepresenting that an endorsement represents the experience or opinions of product users; misrepresenting that an endorser is an actual, current, or recent user of a product or service; continuing to use an endorsement without good reason to believe that the endorser continues to hold the views presented; using an endorsement to make deceptive performance claims; failing to disclose an unexpected material connection with an endorser; and misrepresenting that the experience of endorsers are typical or ordinary. Note that positive consumer reviews are a type of endorsement, so such reviews can be unlawful if they are fake or if a material connection is not adequately disclosed.”

“Everyone gets sick, and most of us will experience the infirmities that accompany aging,” wrote FTC Commissioner Rebecca Slaughter about the orders. “That shared vulnerability leaves us all susceptible to health-claim scams and to plausible-sounding treatments that promise to alleviate pain, to restore lost virility, or to help cure the most deadly and tragic of illnesses. At best, many of these product claims are unreliable and waste tens of billions of consumer dollars a year, and, even worse, they can cause serious health problems requiring acute medical attention.”

Advertising is a touchy area and a tough business.  There is a centuries-old tradition of sellers doing what they can to get buyers’ attention, with ad-makers walking up to and over the line of what’s deemed legal.  That’s to be expected.  With attention so divided in our time, those lines have become even more blurry, and the attempts to get consumers’ attention even more desperate.  Warning letters sent before dramatic fines certainly seem like a positive way to clean up a murky marketplace before doling out what might be death penalties to smaller companies.

However, the list of warning notice recipients certainly includes companies that could afford to do better research before publishing their ads.  Kellogg, AstraZeneca, BASF and Bausch and Lomb are on the list. So are Amazon, Goop, and Kourtney Kardashian’s Lemme, Inc. Again, there is no finding of illegality in these letters. You can see the list yourself.

This isn’t the first set of such warning notices sent out by the FTC recently.  In October of 2021, a batch of 70 letters went to for-profit colleges focused on allegedly exaggerated claims about the future workplace success of graduates.  And later that month, another 700 letters went to advertising firms about potentially illegal testimonials and endorsements.  And still another 1,000-plus notices went out to companies advertising get-rich-quick offerings to freelancers.

To my knowledge, none of the firms mentioned in the letters have faced fines or penalties, or been found guilty of anything related to the letters.

It might seem uncontroversial to have the nation’s federal watchdog for consumers send out warning letters to companies that could be engaging in deceptive conduct.  After all, I’d sure like a warning letter when I’m illegally parked.  However, all things have a context, and the strategy of FTC notice of penalty offenses has a deep past.

They were added to the FTC’s toolkit in the 1970s in an effort to more swiftly deal with potential consumer harms. Suing a company takes a long time, and the FTC’s authority to obtain penalties from law-breaking companies is severely limited.  In many cases, the FTC can only claw back ill-gotten gains from misbehaving firms — allowing them a so-called first bite of the apple.  In these cases, only after a firm agrees to a settlement with the FTC, then engages in the bad behavior AGAIN, can civil penalties be assessed. In a fast-changing world, this is an ineffective tool for making sure consumer harm is quickly stopped.

Notice of penalty offenses were added to let the FTC skip to that second step. By telling companies that *other* companies had engaged in the same behavior, and been penalized, that one-bite-of-the-apple step could be skipped. The FTC could go after misbehaving companies straight away, after this warning notice, skipping what I think of as the “FTC two-step.”

This effort is not uncontroversial, however. Use of the letters fell out of practice in the 1980s, and FTC lawyers instead used a different legal strategy (the so-called Section 13(b) authority — here’s a history lesson) to obtain penalties or seize and freeze assets belonging to companies engaged in deceptive behavior.  That strategy was challenged by a payday lender, and in 2021 the U.S. Supreme Court sided with the lender, eliminating this route. So FTC staff resurrected the warning letters.

(Again, I’m not a lawyer. For a different version of this history lesson, visit Venable’s website).

It’s not hard to find lawyers who think the FTC is on weak legal ground using the warning letters as this first step in the FTC two-step. Cases cited in some of these letters are decades old.  I don’t think anyone disagrees this is a workaround, and a less ideal solution than a new law passed by Congress that makes clear the FTC can freeze assets and penalize misbehaving companies on the first offense, the treatment that consumers expect from their local police officer.

If you’ve made it this far, you’ve come to understand my first point, which is how convoluted our efforts are to protect consumers in America — and how we still lay out the welcome mat to scammers and deceptive companies.  And I haven’t even delved into all the lame ways advertisers can shield themselves from federal (and state) “truth in advertising” laws.  Like “This product is not intended to diagnose, treat, cure or prevent any disease.” Or by our liberal use of the concept of “puffery,” which is legally protected. (It’s ok to say this is the “world’s favorite blog” but it’s not ok to say “4 out of 5 readers prefer this blog” unless I have something to back up that data.)

As I’m fond of saying, free markets are not free-for-all markets. True free markets require perfect information. We don’t have that. And the more imperfect our information is, the more markets require rules to protect the vulnerable.  Warning letters take us a step closer to that.  Armies of lawyers arguing about Section 13(b) authority for many years do not.

Understanding the Serious Risks to Executives’ Personal Cybersecurity & Digital Lives

Organizations are allocating millions of dollars to protecting their information assets and employees but are neglecting to take steps to safeguard the very vulnerable digital assets and lives of key executives and board members. Sponsored by BlackCloak, Ponemon Institute surveyed 553 IT and IT security practitioners who are knowledgeable about the programs and policies used to prevent cybersecurity threats against executives and their digital assets.

The purpose of this research is to understand the risks created by the cybersecurity gap between the corporate office and executives’ protection at home. According to 42 percent of respondents, their key executives and family members have already experienced at least one attack by a cybercriminal.

In the context of this research, digital executive protection extends cybersecurity to outside the office domain by safeguarding the personal digital lives of company executives, board members and key personnel to mitigate the risks of cybercriminals targeting them for hacking, IP theft, reputational risks, doxxing/swatting and financial attacks.

Digital assets include all aspects of an executive’s personal life: address/cell/emails; personal cell, tablet, computer and accounts (email, social etc.), home network and any scams targeting them (doxxing, swatting, personal exposure etc.).

A key takeaway from this research is that while it is likely that executives’ digital assets and lives will be targeted by cybercriminals, organizations are not responding with much-needed strategies, budget and staff. We found 58 percent of respondents say the prevention of cyberthreats against executives and their digital assets is not covered in their cyber, IT and physical security strategies and budget. Moreover, only 38 percent of respondents say there is a team dedicated to preventing and/or responding to cyber or privacy attacks against executives and their families.

The following findings are evidence of the risk to executives’ physical security and digital assets

Executives are experiencing multiple cyberattacks. According to the research, 42 percent of respondents say their executives and family members were attacked by cybercriminals, and 25 percent of respondents say that in the past two years executives experienced between seven and more than 10 cyberattacks. In addition to doxxing and malware infections, other attacks include personal email attacks or compromises (42 percent) and online impersonation (34 percent).

Attacks against executives have the same serious consequences as a data breach. Cyberattacks against executives resulted in the theft of sensitive financial data (47 percent of respondents), loss of important business partners (45 percent of respondents) and theft of intellectual property/company information (36 percent of respondents). More than one-third of respondents (35 percent of respondents) say the consequence was improper access to the executive’s home network, which is not secured or patched to the level an organization would require in its offices and facilities.

 The finance and marketing departments are most likely to send sensitive data to executives’ personal emails, according to 23 percent and 22 percent of respondents respectively. However, the executive suite (21 percent of respondents) and board members (19 percent of respondents) are also guilty of sending sensitive information to personal emails.

 Staff time and the steps taken to detect, identify and remediate the breach are the most costly following an incident.  Thirty-nine percent of respondents say their organizations measure the potential financial consequences from such an attack. Fifty-nine percent of these respondents say their organizations measure the cost of staff time involved in responding to the attack and 55 percent of respondents say they measure the cost to detect, identify and remediate the breach.

It’s not if but when key executives will be targeted by organized criminals. Sixty-two percent of respondents say attacks against digital assets are highly likely and 50 percent of respondents say future physical threats against executives are highly likely.

Criminals are sophisticated and stealthy when targeting executives and other high-profile individuals. Executives are most likely to unknowingly reuse a compromised password from their personal accounts inside their company (71 percent of respondents) and 67 percent say it is highly likely that an imposter would send a text message to another employee at their company. Fifty-one percent of respondents say it is highly likely that an executive’s significant other or child receives an unsolicited email and clicks on a link taking them to a third-party website.

Organizations are not determining the extent of the threat to executives’ physical safety and security of personal digital devices. Only 41 percent of respondents say their organizations are assessing the physical risk to executives and their families and only 38 percent of respondents say organizations assess the risk to executives’ digital assets.

Executives are the weakest link in the ability to protect their lives and digital assets. Only 16 percent of respondents say their organizations are highly confident that a CEO’s or executive’s personal email or social media accounts are protected with dual factor authentication. The most confidence (48 percent of respondents) is that CEOs and other executives would know how to secure their personal email. Twenty-eight percent of respondents are highly confident that executives would know how to determine if an email is phishing and 26 percent of respondents say they are highly confident that executives would know how to set up their home network securely.

Only 32 percent of respondents say executives take some personal responsibility for the security of their digital assets and safety and only 38 percent of respondents say executives understand the threat to their personal digital assets.

As executives switch to their home networks and personal devices, visibility critical to detecting attacks is diminished. According to the research, it is very difficult to have visibility into the following areas when working outside the office: personal devices (74 percent of respondents), executives’ personal email accounts (66 percent of respondents), the executive’s home network to prevent cyberattacks (64 percent of respondents), executives’ privacy footprint (61 percent of respondents) and password hygiene (57 percent of respondents).

Executives working outside the office increase the attack surface significantly. Fifty-nine percent of respondents say ensuring executive protection is more difficult due to the increasing attack surface. However, only about half of respondents (53 percent) say preventing attacks against the digital assets of executives outside the office domain is as much a priority as preventing such attacks when they are in the office. Only 50 percent of respondents say their organizations track potential attacks against executives, such as doxxing, phishing and malware attempts.

To reduce the risk, executives should be trained to secure their devices and ensure their physical safety.  Most organizations are not doing the basics of enabling executives to protect themselves and their personal digital devices. Training executives to secure devices in and outside the workplace is conducted by only 37 percent and 36 percent of respondents’ organizations, respectively. More organizations (53 percent of respondents) provide self-defense training, but only 42 percent of respondents say their organizations conduct tabletop exercises specific to the threats against executives.

Steps taken to protect executives’ lives and digital devices are ineffective. According to 56 percent of respondents, organizations are mainly focused on updating executives’ personal devices. Fifty-two percent of respondents say their organizations patch vulnerabilities and 51 percent of respondents say they use password managers. Only 45 percent of respondents say they are using dual factor authentication, 39 percent of respondents say they use botnet scanning and 36 percent of respondents say they analyze network connectivity on personal devices to detect malicious WiFi hotspots.

 Read the full white paper at BlackCloak’s website

 

Two-thirds use tech to avoid face-to-face interactions; the truth we don’t want to face

Click to watch this Amazon driver (heroically) deliver packages in the rain

Bob Sullivan

Machines dehumanize people.  I’ve long had a mental experiment in mind that I’d love to pull off one day — force people to walk at a grocery store the way they drive on a highway.  You know: cut each other off, flip the bird, breathe (literally) down someone’s neck on line.  It would all look and feel absurd, at least for most. All this to show people that we do things when we are in control of machines that we’d never do in “real” life. In other words, the machines control us, not the other way ’round.

Another easy thought experiment: a real-life mall where everyone says the things they’ve said (or heard) on Instagram or TikTok comments.   If you don’t know what I’m talking about, consult a woman.

This is bad for our souls.  When you treat another person like an object, you’re a jerk. But I believe it also rebounds into you, and a piece of your humanity dies every time you dehumanize another person, even if it “feels” good at the moment.  And this is how humans lose the robot war, without ever firing a shot.  We just surrender our humanity and take the robots’ side.  So if you are worried about ChatGPT, I think we have a lot more to worry about.

Cars, naturally, were just the beginning of this underhanded “invasion.” Smartphones have become a far more potent weapon in this dehumanization effort.  I don’t have to work hard to make my case — we’ve all seen someone staring down hypnotically at a handheld screen while a store clerk asks, “Can I help you? CAN I HELP YOU!?” a dozen times.

I saw a survey this week that provides a bit more evidence for my concern. It was sponsored by a website named PlayUSA.com, which describes itself as a news service that provides independent information about the legal U.S. gambling industry. The survey was designed to examine the impact of tech products on loneliness and it found:

  • 62% of respondents like that tech is replacing social interactions
  • 60% use self-service kiosks and mobile apps to skip talking with people
  • 75% report a decrease in social skills due to tech
  • 74% made a delivery driver leave food outside even though they could have opened the door to grab the delivery
  • 30% say they give drivers better ratings for not talking

As always, there’s a host of caveats to this survey.  It was conducted online, using Google forms, which does not produce the best random sample. The company told me it conducted four different surveys from four different age groups to ensure balanced generational perspective — so it tried. That doesn’t give you a sample that’s truly as diverse as the U.S. population, of course.  Doing so is tricky even under the best of circumstances.

Still, the results ring true. They do not necessarily prove my thesis — that tech is making us more lonely – or worse, dehumanizing us. After all there are plenty of other explanations for this behavior.  It can feel safer to avoid meeting in person with a delivery driver; plenty of women will tell you chit-chatting with a driver can turn into something more uncomfortable very quickly; and self-checkout is often quicker than waiting for a cashier.  Plenty of people with crippling social anxiety now have an avenue for living that has made their lives infinitely better, and I don’t mean to discount that.

Still, for most, our lives are designed to be full of human interactions large and small, or at least I believe they should be. I’ve written before about Eric Berne’s theory of transactional analysis —  that the sum of your everyday hellos and goodbyes and “how-are-yous” really do add to or subtract from your mental health.  The pandemic severely limited our ability to engage in such daily niceties, and technology is keeping us that way.  There are plenty of studies suggesting younger Americans are suffering from depression and social anxiety at rates we’ve not seen before.  Tech clearly enables isolation.

But I worry about something more.

Tech tends to put a great distance between powerful people and weak people. It enables abuse because it can make abuse invisible. You would never yell at an older person in a grocery store for taking an extra moment to be sure-footed while stepping forward in a line.  You probably wouldn’t hesitate to scream at that same person from behind the wheel of a car.

One more thought experiment: The next time someone drives or cycles dinner to you, imagine whether you would do the same for them. I venture to guess you’d never directly ask someone you knew to cycle in the pouring rain for 15 minutes to bring you ice cream, but it’s sure easy to click “deliver” on an app and have the goodies left by the door.

I’m not saying food delivery is evil, or even bad. But I am saying that it’s unhealthy to avoid looking another human being in the eye when you make them do something for you.  And my real fear about artificial intelligence? It’ll put yet another layer of 1s and 0s between powerful people and weak people. Another victory for robots in this war we are losing.

The Hidden Cybersecurity Threat in Organizations: Nonfederated Applications

Nonfederated applications pose an unseen and severe threat because in most organizations there is a lack of visibility into who has access to what and how accounts are secured. Sponsored by Cerby, Ponemon Institute surveyed 595 IT and IT security practitioners in the United States who are involved in their organization’s identity and access management strategy. The study aims to determine organizations’ level of understanding of the risks created by nonfederated applications and the steps that can be taken to mitigate them.

(Click here to download the full report immediately from Cerby’s website.)

A key takeaway from the research is that organizations don’t know what they don’t know when it comes to nonfederated applications. Less than half (49 percent) of organizations track the number of nonfederated applications they have that are not managed and accessed by their identity provider. Of those respondents who track nonfederated applications, 23 percent say they have between 101 and 250. The average number is 96. Despite efforts to have an accurate inventory, only 21 percent of these respondents are highly confident that they know all the nonfederated applications used throughout the enterprise.

Nonfederated applications are risky because they cannot be centrally managed using the organization’s IdP (59 percent of respondents). Fifty-one percent of respondents say they are risky because they do not support industry identity and security standards such as Security Assertion Markup Language (SAML) for single sign-on or System for Cross-domain Identity Management (SCIM) for user onboarding and offboarding. As defined in this research, nonfederated applications lack support for the security standards organizations need to manage at scale; whether in the cloud or on premises, these applications do not support common industry security standards.

NOTE: An IdP is a service that stores and manages digital identities. The use of an IdP can simplify the process of managing user identities and access, as it allows users to use a single set of credentials across multiple systems and applications. Many organizations use IdPs to manage user access to internal and external systems, such as cloud-based applications or partner networks.
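To make the SCIM gap concrete, here is a minimal, hedged sketch of the automated deprovisioning call an IdP can make against a SCIM 2.0–compliant application (per RFC 7644). The base URL, token, and user ID are placeholder values, not real endpoints:

```python
# Sketch of a SCIM 2.0 (RFC 7644) deprovisioning request. All endpoint
# values below are hypothetical placeholders for illustration only.
import urllib.request

def build_scim_delete(base_url, token, user_id):
    """Build the DELETE request an IdP sends to remove a user's account."""
    return urllib.request.Request(
        f"{base_url}/Users/{user_id}",
        method="DELETE",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_scim_delete("https://app.example.com/scim/v2",
                        "PLACEHOLDER_TOKEN", "2819c223")
# A SCIM-compliant service responds 204 No Content. A nonfederated
# application has no such endpoint, so an admin must log in and remove
# the account by hand instead.
```

The point of the sketch is the asymmetry: with SCIM, offboarding is one automated request per application; without it, every removal is manual work.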

The following findings are evidence of the risk posed by nonfederated applications. 

  • The cost and time of provisioning and deprovisioning access to applications quickly add up. Before analyzing the risks, it is important to understand the costs. Seven hours is the average time spent provisioning access to a standard set of applications for one employee. At an average hourly pay rate of $62.50, the cost is $437.50 per employee. Deprovisioning one employee takes an average of 8 hours, costing $500 per employee. Organizations can use this benchmark to calculate the process’s impact based on the annual turnover of employees and contractors.
  • Salaries also need to be considered. An average of 8 people are involved in the provisioning and deprovisioning process in addition to their other responsibilities. The average annual salary per staff member is $81,000. Consequently, the total annual staff cost amounts to $648,000, with a significant portion allocated to the time-consuming manual work of provisioning and deprovisioning, which could be better utilized elsewhere.
  • The total average annual cost to investigate and remediate cybersecurity incidents involving nonfederated applications is $292,500. This is based on 47 hours each week, or 2,444 annually, spent investigating potential unauthorized access and 43 hours weekly, or 2,236 annually, spent investigating and remediating cybersecurity incidents caused by unauthorized access to nonfederated applications.
  • Nonfederated applications are represented across all application categories and are not limited to a single business unit. As discussed previously, only 49 percent of organizations are tracking the use of nonfederated applications. Only 21 percent of these respondents say their organizations are confident in knowing all the nonfederated applications being used. Nonfederated application use across business units underscores the difficulty in managing them.
  • Fifty-two percent of respondents say their organizations have experienced a cybersecurity incident caused by the inability to secure nonfederated applications. Sixty-three percent of these respondents say their organizations had at least four, and in some cases more than five, such incidents. Loss of customers and loss of business partners are the primary consequences of such an incident, according to 43 percent and 36 percent of respondents respectively.
  • Security and identity teams are often left out of managing and manually controlling access to nonfederated applications. According to the research, shared management of nonfederated applications leads to a decentralized approach. Business units (63 percent of respondents) are most likely to manage these applications followed by IT teams (54 percent of respondents). Only 45 percent of respondents say the security and/or identity teams are responsible for managing these applications. Moreover, 54 percent of respondents say the granting and revoking of access are controlled by business units.
  • Organizations are using inefficient manual processes to grant and revoke access to applications. An average of 84 applications in organizations represented in this research require an admin to manually log in to add, remove or update access, meaning the application doesn’t support SCIM and the organization cannot leverage automation through its IdP. The primary reasons for not automating the process are that SCIM is not supported (33 percent of respondents) and the cost (31 percent of respondents).
  • Organizations rely upon business units to report their use of nonfederated applications. While there are several methods used to collect information about current nonfederated applications, business units are most likely to self-report their use of nonfederated applications (62 percent of respondents) followed by the use of a cloud access security broker (CASB) (48 percent of respondents) and endpoint detection tools (47 percent of respondents). Only 39 percent of respondents say business units complete a form to confirm the nonfederated applications used.
  • An average of more than half of tracked nonfederated applications do not support single sign-on (SSO). As discussed previously, there is an average of 96 nonfederated applications in organizations that track their use, and respondents estimate that an average of 50 of these do not support SSO. As described in the research, the benefit of SSO is that it permits a user to have one set of login credentials (for example, a username and password) to access multiple applications. Thus, SSO eases the management of multiple credentials.
  • Organizations lack an effective process to prevent employees from putting data in nonfederated applications at risk. Few organizations report that they are effective in preventing employees from reusing passwords, from retaining access to critical systems after they leave or change roles, or from disabling MFA.
  • There is a desire to prioritize nonfederated application security, but the risk is underestimated due to a lack of awareness. While only 34 percent of respondents say their organizations do not make the security of nonfederated applications a priority, 44 percent of respondents say management underestimates the cybersecurity risks. When educated on the risks, 82 percent of respondents say the importance of securing nonfederated applications increased.
  • Employees are sharing their account login credentials, making it critical to have the proper security safeguards in place. Seventy-six percent of respondents say employees are sharing account login credentials with both employees and external collaborators (35 percent), sharing account login credentials with other employees (21 percent) and sharing with external collaborators (20 percent).
  • Exposing, failing to rotate passwords and being unable to track who is accessing a shared account are top security concerns. Forty-one percent of respondents say employees or collaborators share accounts without concealing the password and another 41 percent say passwords are not rotated. Reused or weak credentials also create risk (36 percent of respondents).
  • Organizations are not able to reduce the cybersecurity risks caused by shared accounts. Half of respondents (50 percent) say their organizations’ access management strategy enables employees to share login credentials securely when required by the application. However, only 27 percent of respondents say their organizations are very or highly effective in reducing cybersecurity risks from shared accounts. Of those respondents (73 percent) who rank their organization’s effectiveness as low, 56 percent are motivated to reduce the cybersecurity risk.
  • Organizations lack processes and policies to make nonfederated applications secure. Only 41 percent of respondents have a process to make nonfederated applications secure and compliant with their organizations’ policies and only 35 percent of respondents say they have a policy that prevents the trial use of new nonfederated applications. Thirty-nine percent of respondents say the use of nonfederated applications is limited. As shown in this research, organizations do not like to limit the use of nonfederated applications because it can affect employee morale and productivity.
  • The challenge for organizations is that they don’t know what they don’t know. The top two challenges to securing nonfederated applications are the inability to know and manage all nonfederated applications because of the lack of visibility and the absence of an accurate inventory. These are followed by the inefficient use of manual processes to secure nonfederated applications. Budget and in-house expertise are not considered as much of a challenge.
  • Most organizations do not follow up to ensure adherence to password and MFA policies. Fifty-seven percent of respondents say employees are required and reminded to turn on MFA and about half (48 percent of respondents) say employees are required and reminded to rotate passwords regularly. However, only 40 percent of respondents say they follow up with every account to make sure MFA is turned on and passwords are rotated in accordance with their policies.
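The cost figures cited in the first findings above follow from simple arithmetic; a quick sketch, using only numbers reported by the study, shows how an organization could scale the benchmark to its own turnover:

```python
# Reproducing the study's cost arithmetic; all figures come from the
# findings above, not from any new data.
HOURLY_RATE = 62.50                  # average hourly pay rate

provision_cost = 7 * HOURLY_RATE     # 7 hours to provision one employee
deprovision_cost = 8 * HOURLY_RATE   # 8 hours to deprovision one employee

def annual_turnover_cost(employees_turned_over):
    """Benchmark: provisioning + deprovisioning cost scaled by annual turnover."""
    return employees_turned_over * (provision_cost + deprovision_cost)

staff_cost = 8 * 81_000              # 8 staff at an $81,000 average salary
investigation_hours = 47 * 52        # weekly unauthorized-access checks, annualized
remediation_hours = 43 * 52          # weekly incident remediation, annualized

print(provision_cost, deprovision_cost, staff_cost,
      investigation_hours, remediation_hours)
# → 437.5 500.0 648000 2444 2236
```

For example, an organization turning over 200 employees and contractors a year would spend roughly $187,500 on provisioning and deprovisioning alone under this benchmark.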

 To read the full report, visit the Cerby website.

Rules for Whistleblowers: a Handbook for Doing What’s Right

Bob Sullivan

Ever see something at work that you just knew wasn’t right, but felt like there was nothing you could do? Maybe there is something you can do. And maybe you can do it … anonymously.

When whistleblower Francis Haugen came forward and testified before Congress about what she thought was going wrong inside Facebook, she changed big tech forever. Or did she?

I recently talked about this with Stephen Kohn, author of the book Rules for Whistleblowers: A Handbook for Doing What’s Right. He’s also one of the nation’s leading whistleblower attorneys. We discussed the lasting impact Haugen did (or didn’t) have on the tech industry. More important, he offered a roadmap for people who work in tech to come forward if they think something terribly wrong is happening at their company. And he explained how workers can do this without putting their livelihoods at risk.

“What we’ve seen is for every one whistleblower who’s willing to go public and really risk a lot, there’s a thousand who would go non-public and provide supporting information,” he said to me on the Duke Debugger podcast that I host. But those who go public often get “crushed” by well-funded legal teams.

“That’s why Congress in 2010 with the Dodd-Frank Act created these… what I call super anonymity laws. When I discussed those with the Senate banking committee, when the law was being debated …  I’ll never forget it, the Senate staffer said to me, ‘Steve, if Wall Street knows who you are, you will be crushed no matter what, and your career will be destroyed. You know, we have to create procedures to prevent that.’ And I said, ‘Hallelujah!’ ”

Whistleblowers can come forward without making a big public display, and in fact, government investigators often prefer that, he said.

“Anonymous means you don’t have to set your hair on fire. You don’t have to burn your bridges,” he said. “And the government wants you to stay working in the company so you can provide additional information about violations. Once you have filed, sometimes the government agencies will share your information or you’re aware of other agencies that might be interested, and  … say, tell the SEC to share your information. So it begins a process. The bottom line is these laws make it easier to do the right thing to report misconduct and not necessarily lose your job and career.”

Provisions in the Dodd-Frank bill have changed the nature of whistleblowing and they include large financial incentives.

“The SEC alone has paid whistleblowers about $1.5 billion in rewards, and in almost every one of those cases, no one even knows who the whistleblower is. They don’t receive big press reports. It’s almost all under the radar,” Kohn said.

Readers can listen to the entire interview, or read a transcript, at this site. Kohn’s book, Rules for Whistleblowers: A Handbook for Doing What’s Right, will be available at the National Whistleblower Center and bookstores on June 1.

The data is in the cloud, but who’s in control?

Ponemon Institute is pleased to present the findings of the 2022 Global Encryption Trends Study, sponsored by Entrust. We surveyed 6,264 individuals across multiple industry sectors in 17 countries/regions – Australia, Brazil, France, Germany, Hong Kong, Japan, Mexico, the Middle East (which is a combination of the respondents located in Saudi Arabia and the United Arab Emirates), the Netherlands, the Russian Federation, Spain, Southeast Asia, South Korea, Sweden, Taiwan, the United Kingdom, and the United States.

The purpose of this research is to examine how the use of encryption has evolved over the past 17 years and the impact of this technology on the security posture of organizations. The first encryption trends study was conducted in 2005 for a U.S. sample of respondents. Since then we have expanded the scope of the research to include respondents in all regions of the world.

Organizations with an overall encryption strategy increased significantly since last year. Since 2016 the deployment of an overall encryption strategy has steadily increased. This year, 62% of respondents say their organizations have an overall encryption plan that is applied consistently across the entire enterprise, a significant increase from last year. Only 22% of respondents say they have a limited encryption plan or strategy that is applied to certain applications and data types, a significant decrease from last year. The average annual global budget for IT security is $24 million per organization. The countries with the highest average annual budgets are the U.S. ($41 million) and Germany ($28 million).

Following are findings from this year’s research.

Enterprise-wide encryption strategies have continued to increase. Over the 17 years we have conducted this study, there has been a steady increase in organizations with an encryption strategy applied consistently across the entire enterprise. In turn, there has been a steady decline in organizations not having an encryption plan or strategy. In this year’s study, 61% of respondents rate the level of their senior leaders’ support for an enterprise-wide encryption strategy as significant or very significant.

Certain countries/regions have more mature encryption strategies. The prevalence of an enterprise encryption strategy varies among the countries/regions represented in this research. The highest prevalence of an enterprise encryption strategy is reported in the United States, the Netherlands, and Germany. Although respondents in the Russian Federation and Brazil report the lowest adoption of an enterprise encryption strategy, since last year it has increased significantly. The global average of adoption is 62% of organizations represented in this research.

Globally, the IT operations function is the most influential in framing the organization’s encryption strategy. However, in the United States the lines of business are more influential. IT operations are most influential in the Netherlands, Spain, France, Southeast Asia and the United Kingdom.

The use of encryption has increased in most industries. Results suggest a steady increase in most of the 13 industry sectors represented in this research. The most significant increases in extensive encryption usage occur in manufacturing, energy & utilities and the public sector.

Employee mistakes continue to be the most significant threats to sensitive data. In contrast, the least significant threats to the exposure of sensitive or confidential data include government eavesdropping and lawful data requests.

Most organizations have suffered at least one data breach. Seventy-two percent of organizations report having experienced at least one data breach. Twenty-four percent say they have never experienced a breach and 5% are unsure.

The main driver for encryption is the protection of customers’ personal information. Organizations are using encryption to protect customers’ personal information (53% of respondents), to protect information against specific, identified threats (50% of respondents), and to protect enterprise intellectual property (48% of respondents).

A barrier to a successful encryption strategy is the inability to discover where sensitive data resides in the organization. Fifty-five percent of respondents say discovering where sensitive data resides in the organization is the number one challenge and 32% of respondents say budget constraints are a barrier. Thirty percent of all respondents cite initially deploying encryption technology as a significant challenge.

No single encryption technology dominates in organizations. Organizations have very diverse needs for encryption. In this year’s research, backup and archives, internet communications, databases, and internal networks are most likely to be deployed. For the fifth year, the study tracked the deployment of the encryption of Internet of Things (IoT) devices and platforms. Sixty-three percent of respondents say IoT platforms have been at least partially encrypted and 64% of respondents say encryption of IoT devices has been at least partially deployed.

Certain encryption features are considered more critical than others. According to the consolidated findings, system performance and latency, management of keys, and enforcement of policy are the three most important encryption features.

Intellectual property, employee/HR data, and financial records are most likely to be encrypted. The data types least likely to be encrypted are health-related information and non-financial information, which is a surprising result given the sensitivity of health information.

To read the rest of this report, and find out how organizations are using encryption to protect data and workloads across multiple cloud platforms, visit Entrust’s website at this link.

Dealing with Twitter’s 2FA downgrade? Don’t make this mistake

Bob Sullivan

Twitter has followed through with its half-baked plan to turn off two-factor authentication for (millions of?) non-paying users, leaving them half-naked to the vast criminal underground. If that’s you, you’re looking at not-very-good choices right now, but doing nothing might be the worst of all. I’m seeing reports of people getting hacked almost immediately, which you would expect, given the long lead time criminals have had to prepare for this day when many accounts would suddenly be one password away from compromise.

The only practical answer for most people who wish to continue to use Twitter without paying for SMS security is to enable a free token generator tool like Google Authenticator. I recommend you do that, too, rather than remain out there half-naked. Twitter has haphazardly implemented this massive security change in the most unprofessional and ineffective way, putting all the onus on users — messages this week even tell users “you’ve turned off two-factor authentication,” which is quite an abuse of the English language. It would be understandable, even responsible, for these users to rush to install an authenticator. But please take heed of the advice I’m about to give, or else, I promise, sometime in the next 10-500 days you’re going to have a Hellish time recovering from loss of access to your account.

 

In short, if you lose your phone, or it’s damaged, or you lose access to that authentication code for any reason, you may very well lose your Twitter account forever. The only thing standing between you and that very frustrating day would be a massive increase in Twitter customer service spending, and I can just about promise you, that’s not happening.

Many authentication tools have a big implementation flaw: they don’t have a user-friendly failover plan. This is because tokens have a damned-if-you-do, damned-if-you-don’t quality. Google Authenticator does NOT allow you to create backups. Why? Because backups could be accessed by hackers, rendering the entire security protocol insecure.
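For context on why the codes are so tightly bound to one device: authenticator apps typically implement TOTP (RFC 6238), deriving each six-digit code from a shared secret that, in a no-backup app, lives only on your phone. A minimal standard-library sketch:

```python
# Minimal TOTP sketch (RFC 6238, HMAC-SHA1) for illustration. Lose the
# device holding the secret and nothing else can generate these codes.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Return the TOTP code for a base32 secret at a given Unix time."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time 59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # → 94287082
```

The server holds the same secret and runs the same computation, which is why codes work offline — and why there is no “forgot your code” button to press.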

You’ve seen, and used, the “forgot your password?” link many times. It’s a way of dealing with perhaps the most common roadblock on the Internet — users are told not to re-use passwords, so they forget all these newfangled passwords they use. They’re told to use password managers (a good idea!) but then they lose access to that manager or something else goes wrong. No worries: ‘Forgot your password’ usually fixes things quickly. But it’s also the weakest link in many security implementations (Here’s my 15-year-old story about that!). Criminals with just an email address can request a password reset using ‘forgot your password,’ so it creates quite a dilemma for tech companies — how do you service forgetful users without making things easy for criminals?

Authenticator implementations go a new route, effectively eliminating the customer service part of this risk equation.

If you can’t access Google Authenticator…you can’t log in. You can’t write to the app or website and ask for a new authentication code the way you use “forgot your password.” You are…just stuck. If your phone is stolen, you can’t generate the code you need to log in. Period. As I described in my story about recovering Rusty’s Instagram account, you may very well be in for months of frustration trying to recover your account some other way. Some other way, like this “prison photo” I had to take of myself.

Unless you’ve prepared ahead of time. Many sites which use authenticators create their own backup systems — often, one-time codes that the app generates which can be used as a kind of get-out-of-jail-free card. Twitter, at the moment, lets you generate one such code. To find it, for now, go to “Security and Account Access” then “Security” then “Two Factor Authentication” then “Backup Codes.” Then — and this is CRITICAL — take a screenshot of that code or write it down and put it someplace you’ll remember for the inevitable day that you’ll need it.

WARNING: YOU CANNOT GENERATE THIS CODE AFTER YOU’VE LOST ACCESS TO YOUR ACCOUNT!! You MUST take this step RIGHT NOW, as soon as you implement an authenticator app.

As you re-read that section of this story, I’m sure you’ll see this as I do. There are about a zillion ways human beings can get this step wrong, and will get it wrong. I predict Twitter will relatively soon be overwhelmed with account recovery requests that it cannot handle. That’s precisely what happened to Instagram/Facebook with authenticator tools. Desperate Instagram users write to me every day trying to regain access to their accounts. I predict this is going to be a far bigger issue for Twitter than account hacking.

For what it’s worth, in Instagram’s case, I believed I *had* copied the backup codes (three years prior) when I turned on 2FA after a hacking attempt from Russia; the codes I had didn’t work. So I think it’s quite possible that codes which simply fail are another pitfall of this system, beyond consumers who don’t create backup codes, don’t copy them down, or can’t find them the day they need them.

Meanwhile, if you are thinking, “I’m supposed to write down a secret code on a post-it note and leave it where I can find it as a login procedure? Isn’t that what they told me NOT to do 30 years ago?” you aren’t alone.

To be sure, there are *better* ways to implement an authenticator-based two-factor system. After my phone was stolen, Substack had me fill out a form and I engaged with a customer service representative over email who verified my identity manually. That worked just fine within a day or so. Twitter could, in theory, do this. It won’t. It will be too expensive. Far more expensive than the cost of those pesky SMS text messages that Elon just turned off out of spite and desperate penny-pinching.

Were the implementation responsible and well-planned, I would cheer for the end of SMS-based authentication. It’s not particularly safe, though it is far, far safer than a password alone. Switching to a “something you have” model is truly a good long-term goal. But turning off two-factor en masse is crazy, as is hurling a bunch of unprepared people into a token-based authentication world.

BOTTOM LINE: If your two-factor authentication setup has been turned off by Twitter, take 10 minutes to turn it on now, but DON’T sprint past the backup method. I wish I could give you universal instructions to do this. I can’t, really. Everyone’s setup and needs are different. Just ask yourself: What would I do if I lost my phone? For a little more help, here’s a good CNET story about the right way to turn on authenticator on an up-to-date iPhone.

Also, there are alternatives to backup-limited tools like Google Authenticator. Microsoft Authenticator backs up accounts in the cloud — i.e., if you lose access to your phone, you can re-download the authentication generator. I have not used it, so I cannot recommend it. Twitter also recommends Authy, Duo Mobile, and 1Password; each of them has its own backup options and quirks. I’ve linked to their backup explainer pages. But whatever you do, don’t just add an authentication app today and move on. You’ll regret it.

 

The state of supply chain risk in healthcare

Ponemon Institute in collaboration with the Healthcare Sector Coordinating Council conducted a study on the cybersecurity challenges facing the healthcare sector. More than 400 IT and IT security practitioners were surveyed who are involved in their organizations’ supply chain risk management program (SCRM) and familiar with their cybersecurity plans or programs.

A key takeaway is that risks to patients caused by new suppliers are not being evaluated by many healthcare organizations. Only half (50 percent) of respondents say their organizations evaluate the risks impacting patient care outcomes created by new suppliers’ products. Sixty percent of respondents say new suppliers are evaluated to understand whether there would be adverse patient outcomes created by these suppliers. According to the research, pre-existing and legacy suppliers are more likely to be included in the organizational SCRM.

(The Healthcare and Public Health Sector Coordinating Council (HSCC) is a coalition of private-sector, critical healthcare infrastructure entities organized under Presidential Policy Directive 21 and the National Infrastructure Protection Plan to partner with government in the identification and mitigation of strategic threats and vulnerabilities facing the sector’s ability to deliver services and assets to the public.)

The following findings reveal why the supply chain is vulnerable to a cyberattack.

Most organizations are in the dark about potential risks created by suppliers. Only 19 percent of respondents say their organizations have a complete inventory of their suppliers of physical goods, business-critical services and/or third-party information technology.

Business-critical suppliers are not regularly evaluated for their security practices. Forty-four percent of respondents say security evaluations of business-critical suppliers are conducted only on an ad-hoc basis (24 percent) or only when a security incident occurs (20 percent).

Most organizations are not assessing suppliers’ software and technology. Only 43 percent of respondents say their SCRM program assesses the integrity/provenance of suppliers’ software and technology. Forty-three percent of respondents say their organizations will accept certifications such as PCI-DSS, ISO-27001 in lieu of the usual assessment/attestation process for suppliers.

Pre-existing suppliers and not new suppliers are more likely to be included in the scope of an organization’s SCRM. Fifty-four percent of respondents say pre-existing suppliers that have been on-boarded before the establishment of the program are primarily included in the SCRM process. Only 46 percent of respondents say new suppliers are included.

Rarely are suppliers categorized based on their connectivity or network access to the healthcare organization. Only about half (53 percent of respondents) say their organizations categorize suppliers as part of the SCRM program. Of these, 43 percent of respondents say categorization is based on the nature of the products or services and 40 percent of respondents say it is based on the data shared with these suppliers. Only 10 percent of respondents say it is based on connectivity or network access.

There is a lack of integration between procurement and/or contracting departments and the SCRM process that could affect the ability of contracts to ensure the security of the supply chain. Only 41 percent of respondents say the procurement and/or contracting departments are integrated with their organization’s SCRM process. Only 25 percent of respondents say their organizations always add supplier remediations into their contracts if needed.

The lack of standardized language in security contracts and supply chain issues is a deterrent to an effective SCRM program. In addition to the lack of standardized security contractual language in contracts (59 percent of respondents), healthcare SCRM programs are affected by problems with the supply chain. These problems include challenges in identifying critical suppliers as the supplier relationship evolves over time (49 percent of respondents), a lack of risk tiering of suppliers (49 percent of respondents) and a lack of supplier incident or vulnerability notification (45 percent of respondents).

Healthcare organizations face the challenge of having the in-house expertise and senior leadership support needed to have a successful SCRM program. Respondents were asked to select the reasons for not having an effective SCRM program. Fifty-nine percent of respondents say it is the lack of in-house expertise and 55 percent of respondents say it is a lack of senior leadership support.

A lack of cooperation from suppliers and employees is the primary people-related impediment to a successful SCRM program. Fifty-four percent of respondents say the lack of cooperation from suppliers and 43 percent of respondents say it is the lack of inter-departmental cooperation that stands in the way of having an effective program.

Controlling the sprawl of software usage is the number one technology-related impediment to achieving an effective SCRM program. A barrier to an effective SCRM program is managing the sprawl of software usage (i.e., applications, components and cloud services), according to 55 percent of respondents. This is followed by the prompt delivery of software patches from third parties for required upgrades (45 percent of respondents) and the lack of visibility into the cloud environment used by third parties (44 percent of respondents).

To address the supply chain risks discussed above, healthcare organizations are making the following activities a priority.

Improvement of supply chain management is a priority. Sixty-seven percent of respondents say their organizations’ top priority is implementing tools for supplier inventory management. This is followed by 63 percent of respondents who say their organizations will be implementing tools for assessment automation and 45 percent who say their organizations will hire consultants for program and process definition.

Business goals for SCRM are cost, product quality and the supply chain. Respondents were asked to identify the business goals driving the SCRM program. Fifty-nine percent of respondents say their organizations are prioritizing the impact to cost, performance, timing and availability of goods, followed by 56 percent of respondents who say it is to minimize the impact to product quality. Almost half (48 percent of respondents) say it is to understand and improve the cyber-resiliency of their supply chain.

Organizations are focused on tracking direct suppliers and products/services electronically (43 percent of respondents). Other top priorities are to have redundancy across critical suppliers (36 percent of respondents) and to increase reassessments of suppliers (32 percent of respondents).

To read the rest of this study, please visit this link at HealthSectorCouncil.org 

Is Alexa getting between you and your partner?

Bob Sullivan

Filling your home with smart gadgets comes with plenty of risks: your TV might watch you, an angry partner or roommate might spy on you, or the gadgets might dull your mental acuity. These are big, scary threats that you probably think about, then forget about, every time you bring a new WiFi-enabled crock pot into your home.

But tech has smaller, more “everyday” impacts on us, too. If you are constantly asking Alexa for the temperature, does that mean you are losing a chance to chat with a family member? What if one partner loves to geek out, but the other doesn’t want to talk to the lights and the garage door — does that set up a subtle power imbalance that could contribute to domestic strife at some point?  Maybe Amazon Dots make it easy to tell the children it’s dinner time — easier than yelling up the stairs — but is going the Star Trek “comm” route really healthy for families?

Duke University professor Pardis Emami-Naeini has been thinking about these things for a while, and I was glad (and a bit amused) to read this paper she co-authored recently. It’s cleverly titled You, Me, and IoT. I interviewed her for an upcoming “Debugger in 10” podcast (more on that soon) but couldn’t help chatting with her about these small, often overlooked, unintended consequences of technology. (Disclosure: I work at Duke, too)

I know I have a bad habit of looking for broken things; don’t worry, Emami-Naeini takes a highly academic approach in the paper, and her team found plenty of relational benefits to smart homes. Here’s a fascinating list of the good gadgets can do, with some comments cribbed from study participants:

Bonding over tech
“Smart devices make it easier to share music with my siblings, like smart speakers for example. Instead of having to pass someone’s phone or rely on one person connected, we can just tell it to play a song and boom.”
Inter-generational kindness
“We’ve got an Apple TV and my father almost cried because he said he was really curious about [the device] and streaming television, but he felt too out of the loop and overwhelmed to try another giant leap in technology. And he was overjoyed…to have my boyfriend help out with setting it up.”
Enabling communication
“My mother was sick…and before she passed away, it was tougher and tougher for her to use the phone…So what I did was I got an Alexa and I installed it in the house, and then I could just call her and rather than her having to figure out how to answer the phone, she could just hear my voice in the ether.”
Encouraging playfulness
“The main joy that I get from Alexa is overhearing my boyfriend ask her ridiculous things just to see like if she’ll respond, how she’ll respond.”
Easing household task tension

*“With the smart thermostat, we don’t argue about the temp of the house because it’s automatically set…With the doorbells, we don’t have to argue or wonder if it was locked. We can just look on the app…”
*“We don’t have to nag each other to get up and do something. We can ask the device to do it for us.”
*“My partner and I use Amazon Echo to set reminders for each other, which helps with making sure we are both on the same page with groceries and chores.”
Enabling independence
“My wife can now just ask the Google Home for the weather instead of assuming I know what the weather is.”

That last one there caught my attention. I once had a therapist explain to me that small, seemingly annoying requests like, “Can you bring me the newspaper?” can actually be a love language. Hear that question as, “Do you care about me enough to get me the paper?” or even just, “I want to connect in a small way right now” and you hear something very different. So: Do we really want Google Home to sweep away all these small chances to reach out?

Which brings me to the other side of the smart gadget relationship impact discussion: tech-amplified tensions, which the authors tend to call “multi-user tensions.” After all, we are used to treating gadgets as solitary experiences. Many smart gadgets are social, so they introduce group dynamics, which can lead to tensions. These tensions fall into three categories, the authors say: device selection and installation, regular device usage, and when things go wrong. Some examples:

When tech fails us
*”My husband is not as tech savvy as me and gets irritated with me when I can get a device to do something he can’t.”
*”My parents sometimes want things fixed that are beyond my control. We sometimes disagree about what products to purchase and how they would perform on our network.”

Who’s in charge?
*“Our young children ‘fight’ over talking to Alexa. They use Alexa to play songs and will cancel the other one’s music, or ask her to repeat them and use her to insult one another.”

Not everyone is an early adopter
“My husband added smart bulbs and taped over all the light switches and switched us over to using Alexa to turn on and off the lights. I don’t like it because there are times when my young children fall asleep and I want to turn off the lights silently instead of using my voice. My children don’t like it because their pronunciation is not clear and Alexa cannot understand them sometimes when they want the lights on or off. We have argued about it a couple of times but it has been made clear that his excitement for a smart home outweighs the desires of me and our two kids, so now I just deal with it and try to help my kids as much as possible.”

Weaponizing gadgets
*“Any time that we try to have a conversation about not using our phones or anything like that, the biggest thing is that mostly my fiance, he turns on Alexa and asks her to play a song at a really high volume so he can’t hear me talk anymore.”

Obviously, I think a therapist would have a lot to say about those last two comments. Blaming those issues on tech is probably misplaced. And to be fair, I’ve omitted some of the more high-stakes and beautiful ways that smart tech helps families. Like this:

“My youngest son is actually autistic, but he’s very inquisitive in nature and asks me the most intelligent but random questions that we can never really answer. So it’s always like “Go ask Alexa”…It’s almost like having a teacher or an encyclopedia like right on hand at all times, and for his way of living that’s just really helpful for him.”

Still, while we are rightly focused on the high-stakes ways that tech can endanger us, by enabling stalkers and violence, we should not overlook the small ways gadgets change our lives. I think it’s incredibly important to notice and discuss, and I hope to read more from Pardis & Co. on this.

Do any of you care to share the small ways tech has hurt — or helped — your sense of domestic tranquility?