The State of Cybersecurity Risk Management Strategies

What are the cyber-risks — and opportunities — in the age of AI? The purpose of this research is to determine how organizations are preparing for an uncertain future shaped by the ever-changing cybersecurity risks threatening them. Ponemon Institute surveyed 632 IT and IT security professionals in the United States who are involved in their organizations’ cybersecurity risk management strategies and programs. The research was sponsored by Balbix.

Frequently reviewed and updated cybersecurity risk strategies and programs are the foundation of a strong cybersecurity posture. However, in many organizations these strategies and programs are outdated, which jeopardizes the ability to prevent and respond to security incidents and data breaches.

When asked how far in the future their organizations plan their cybersecurity risk strategies and programs, 65 percent of respondents say the plan covers two years (31 percent) or more than two years (34 percent). Only 23 percent of respondents say the strategy covers just one year because of changes in technologies and the threat landscape, and 12 percent of respondents say it covers less than one year.

The following research findings reveal the steps that should be included in cybersecurity risk strategies and programs.

Identify unpatched vulnerabilities and patch them in a timely manner. According to a previous study sponsored by Balbix, only 10 percent of respondents were very confident that their organizations have a vulnerability and risk management program that helps them avoid a data breach. Only 15 percent of respondents rated their organizations’ ability to identify vulnerabilities and patch them in a timely manner as highly effective.

In this year’s study, 54 percent of respondents say unpatched vulnerabilities are the greatest concern to their organizations. This is followed by outdated software (51 percent of respondents) and user error (51 percent of respondents).

Frequent scanning to identify vulnerabilities should be conducted. In the previous Balbix study, only 31 percent of respondents said their organizations scan daily (12 percent) or weekly (19 percent). In this year’s research, scanning has not increased in frequency. Only 38 percent of respondents say their organizations scan for vulnerabilities more than once per day (25 percent) or daily (13 percent).

The prioritization of vulnerabilities should not be limited to a vendor’s vulnerability scoring. Fifty-one percent of respondents say their organizations use a vendor’s vulnerability scoring to prioritize vulnerabilities. Only 33 percent of respondents say their organizations use a risk-based vulnerability management solution, and only 25 percent of respondents say prioritization is based upon a risk scoring system within their vulnerability management tools.
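
As a rough illustration of what risk-based prioritization can look like beyond a raw vendor score, the sketch below blends a CVSS base score with exploit availability and asset context. The field names, weights and sample CVE identifiers are assumptions made for this example, not the scoring model of Balbix or any other vendor.

```python
# A minimal sketch of risk-based vulnerability prioritization.
# The fields and weights below are illustrative assumptions, not any
# vendor's actual scoring model.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss_base: float          # vendor/NVD base score, 0-10
    exploit_available: bool   # known public exploit or active exploitation
    internet_facing: bool     # whether the affected asset is exposed
    asset_criticality: float  # 0-1, business importance of the affected asset

def risk_score(v: Vulnerability) -> float:
    """Blend raw severity with exploitability and business context."""
    score = v.cvss_base / 10.0                  # normalize severity to 0-1
    score *= 1.5 if v.exploit_available else 1.0
    score *= 1.25 if v.internet_facing else 1.0
    score *= 0.5 + v.asset_criticality          # weight by asset importance
    return round(score, 3)

vulns = [
    Vulnerability("CVE-AAAA", cvss_base=9.8, exploit_available=False,
                  internet_facing=False, asset_criticality=0.2),
    Vulnerability("CVE-BBBB", cvss_base=7.5, exploit_available=True,
                  internet_facing=True, asset_criticality=0.9),
]

# The lower-severity CVE-BBBB outranks CVE-AAAA because it is exploitable,
# internet-facing and sits on a critical asset.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v.cve_id, risk_score(v))
```

Even a simple blend like this reorders the remediation queue compared with sorting on the vendor score alone, which is the point of a risk-based approach.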

Take steps to reduce the risks posed by attack vectors. An attack vector is a path or method that a hacker uses to gain unauthorized access to a network or computer to exploit system flaws. The risks of greatest concern are software vulnerabilities (45 percent of respondents), ransomware (37 percent of respondents), poor or missing encryption (36 percent of respondents) and phishing (36 percent of respondents).

Inform the C-suite and board of directors of the threats against the organization to obtain the necessary funding for cybersecurity programs and strategies. The previous Balbix study revealed that the C-suite and IT security functions operate in a communications silo. Only 29 percent of respondents said their organizations’ executives and senior management clearly communicate their business risk management priorities to the IT security leadership, and only 21 percent of respondents said their communications with the C-suite are highly effective. Respondents who said their communications were very effective attributed it to presenting information in an understandable way and keeping leaders up to date on cyber risks rather than waiting until the organization had a data breach or security incident.

In this year’s study, 50 percent of respondents rate their organizations’ effectiveness in communicating the state of their cybersecurity as very low or low. The primary reasons are that negative facts are filtered before being disclosed to senior executives and the CEO (56 percent of respondents), communications are limited to only one department or line of business, creating silos (44 percent of respondents), and the information can be ambiguous, which may lead to poor decisions (41 percent of respondents).

The IT and IT security functions should provide regular briefings on the state of their organizations’ cybersecurity risks. Presentations should be understandable and unambiguous, and briefings should not be limited to occasions when a serious security risk is revealed or when senior management requests them.

To address the challenge of meeting service level agreements (SLAs), organizations need to eliminate the silos that inhibit communication among project teams. Forty-nine percent of respondents say their organizations track SLAs to evaluate their cybersecurity posture. Of these respondents, only 44 percent say their organization is meeting most or all SLAs to support its cybersecurity posture.
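
For organizations that do track SLAs, the sketch below shows one way compliance might be measured: the share of findings remediated within a severity-based window. The SLA windows, dates and sample findings are assumptions for illustration, not figures from the study.

```python
# A minimal sketch of measuring remediation SLA compliance.
# The SLA windows and sample findings are illustrative assumptions.
from datetime import date

SLA_DAYS = {"critical": 15, "high": 30, "medium": 90}  # assumed policy

# (severity, date opened, date closed or None if still open)
findings = [
    ("critical", date(2024, 8, 1), date(2024, 8, 10)),
    ("high",     date(2024, 8, 1), date(2024, 9, 15)),
    ("medium",   date(2024, 7, 1), None),
]

def met_sla(severity, opened, closed, today=date(2024, 9, 30)):
    """A finding meets its SLA if it closed within its window;
    an open finding past its window counts as missed."""
    age_days = ((closed or today) - opened).days
    return age_days <= SLA_DAYS[severity]

met = sum(met_sla(*f) for f in findings)
print(f"SLA compliance: {met}/{len(findings)} = {met / len(findings):.0%}")
```

Reporting a single compliance percentage like this gives the C-suite a trend to follow between briefings, rather than a list of raw ticket counts.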

If AI is adopted as part of a cybersecurity strategy, the risks created by AI need to be managed. Fifty-four percent of respondents say their organizations have fully adopted (26 percent) or partially adopted (28 percent) AI as part of their cybersecurity strategies. Risks include poorly configured or misconfigured systems due to over-reliance on AI for cyber risk management, software vulnerabilities due to AI-generated code, data security risks caused by weak or no encryption, incorrect predictions due to data poisoning and inadvertent infringement of privacy rights due to the leakage of sensitive information.

Steps to reduce these cybersecurity risks include providing regular user training and awareness about the security implications of AI, developing data security programs and practices for AI, identifying and mitigating bias in AI models for safe and responsible use, considering and implementing a tool for software vulnerability management, conducting regular audits and tests to identify vulnerabilities in AI models and infrastructure, deploying risk quantification of AI models and their infrastructure, and considering tools to validate AI prompts and their responses.
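
The last item, validating AI prompts and their responses, can start with something as simple as screening text for patterns that look like sensitive data before it is sent to or returned from a model. The sketch below is a hypothetical, minimal check; the pattern names and regular expressions are assumptions for this example and are no substitute for a dedicated validation tool.

```python
# Minimal, hypothetical prompt/response validation: flag text that
# appears to contain sensitive data before it is sent to or returned
# by an AI service. Real deployments would use a dedicated tool.
import re

SENSITIVE_PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "api_key":     re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def validate_text(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this customer record: SSN 123-45-6789"
issues = validate_text(prompt)
if issues:
    print(f"Blocked prompt; possible sensitive data: {issues}")
else:
    print("Prompt passed validation")
```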

To read more key findings from this research, please visit the Balbix website.

Appeals court rules TikTok could be responsible for algorithm that recommended fatal strangulation game to child

Bob Sullivan

TikTok cannot use federal law “to permit casual indifference to the death of a ten-year-old girl,” a federal judge wrote this week.  And with that, an appeals court has opened a Pandora’s box that might clear the way for Big Tech accountability.

Silicon Valley companies have become rich and powerful in part because federal law has shielded them from liability for many of the terrible things their tools enable and encourage — and, it follows, from the expense of stopping such things. Smart phones have poisoned our children’s brains and turned them into The Anxious Generation; social media and cryptocurrency have enabled a generation of scam criminals to rob billions from our most vulnerable people; advertising algorithms tap into our subconscious in an attempt to destroy our very agency as human beings. To date, tech firms have made only passing attempts to stop such terrible things, emboldened by federal law which has so far shielded them from liability …  even when they “recommend” that kids do things which lead to death.

That’s what happened to 10-year-old Tawainna Anderson, who was served a curated “blackout challenge” video by TikTok on a personalized “For You” page back in 2021. She was among a series of children who took up that challenge and experimented with self-asphyxiation — and died.  When Anderson’s parents tried to sue TikTok, a lower court threw out the case two years ago, saying tech companies enjoy broad immunity because of the 1996 Communications Decency Act and its Section 230.

You’ve probably heard of that. Section 230 has been used as a get-out-of-jail-free card by Big Tech for decades; it’s also been used as an endless source of bar fights among legal scholars.

But now, with very colorful language, a federal appeals court has revived the Anderson family lawsuit and thrown Section 230 protection into doubt.  Third Circuit Judge Paul Matey’s concurring opinion seethes at the idea that tech companies aren’t required to stop awful things from happening on their platforms, even when it’s obvious that they could.  He also takes a shot at those who seem to care more about the scholarly debate than about the clear and present danger facilitated by tech tools. It’s worth reading this part of the ruling in full.

TikTok reads Section 230 of the Communications Decency Act… to permit casual indifference to the death of a ten-year-old girl. It is a position that has become popular among a host of purveyors of pornography, self-mutilation, and exploitation, one that smuggles constitutional conceptions of a “free trade in ideas” into a digital “cauldron of illicit loves” that leap and boil with no oversight, no accountability, no remedy. And a view that has found support in a surprising number of judicial opinions dating from the early days of dialup to the modern era of algorithms, advertising, and apps. But it is not found in the words Congress wrote in Section 230, in the context Congress acted, in the history of common carriage regulations, or in the centuries of tradition informing the limited immunity from liability enjoyed by publishers and distributors of “content.” As best understood, the ordinary meaning of Section 230 provides TikTok immunity from suit for hosting videos created and uploaded by third parties. But it does not shield more, and Anderson’s estate may seek relief for TikTok’s knowing distribution and targeted recommendation of videos it knew could be harmful.

Later on, the opinion says, “The company may decide to curate the content it serves up to children to emphasize the lowest virtues, the basest tastes. But it cannot claim immunity that Congress did not provide.”

The ruling doesn’t tear down all Big Tech immunity. It makes a distinction between TikTok’s algorithm specifically recommending a blackout video to a child after the firm knew, or should have known, that it was dangerous, as opposed to a child seeking out such a video “manually” through a self-directed search.  That kind of distinction has been lost through years of reading Section 230 at its most generous from Big Tech’s point of view.  I think we all know where that has gotten us.

In the simplest of terms, tech companies shouldn’t be held liable for everything their users do, any more than the phone company can be liable for everything callers say on telephone lines — or, as the popular legal analogy goes, a newsstand can’t be liable for the content of magazines it sells.

After all, that newsstand has no editorial control over those magazines.  Back in the 1990s, Section 230 added just a touch of nuance to this concept, which was required because tech companies occasionally dip into their users’ content and restrict it. Tech firms remove illegal drug sales, or child porn, for example.  While that might seem akin to the exercise of editorial control, we want tech companies to do this, so Congress declared that such occasional meddling does not turn a tech firm from a newsstand into a publisher; the firm does not assume additional liability because of such moderation — it enjoys immunity.

This immunity has been used as permission for all kinds of undesirable activity. Using another mildly strained metaphor, a shopping mall would never be allowed to operate if it ignored massive amounts of crime going on in its hallways…let alone supplied a series of tools that enable elder fraud, or impersonation, or money laundering. But tech companies do that all the time. In fact, we know from whistleblowers like Frances Haugen that tech firms are fully aware their tools help connect anxious kids with videos that glorify anorexia.  And they lead lonely and grief-stricken people right to criminals who are expert at stealing hundreds of thousands of dollars from them. And they allow ongoing crimes like identity theft to occur without so much as answering the phone from desperate victims like American service members who must watch as their official uniform portraits are used for romance scams.

Will tech companies have to change their ways now? Will they have to invest real money into customer service to stop such crimes, and to stop their algorithms from recommending terrible things?  You’ll hear that such an investment is an overwhelming demand. Can you imagine if a large social media firm was forced to hire enough customer service agents to deal with fraud in a timely manner? It might put the company out of business.  In my opinion, that means it never had a legitimate business model in the first place.

This week’s ruling draws an appropriate distinction between tech firms that passively host content which is undesirable and firms which actively promote such content via algorithm. In other words, algorithm recommendations are akin to editorial control, and Big Tech must answer for what their algorithms do.  You have to ask: Why wouldn’t these companies welcome that kind of responsibility?

The Section 230 debate will rage on.  Since both political parties have railed against Big Tech, and there is an appetite for change, it does seem like Congress will get involved. Good. Section 230 is desperate for an update.  Just watch carefully to make sure Big Tech doesn’t write its own rules for regulating the next era of the digital age. Because it didn’t do so well with the current era.

If you want to read more, I’d recommend Matt Stoller’s Substack post on the ruling.