Localization Mandates, AI Regs to Pose Major Data Challenges in 2024

Companies should expect to face a trio of trends in 2024 that make data security, protection, and compliance more critical to operations and risk reduction.

Increasingly, governments worldwide are creating laws that govern the handling of data within their borders, with more than three-quarters of countries implementing data localization in some form, according to global consultancy McKinsey & Co. A second trend is the rush to govern the use of data for generative AI (GenAI) models, with the Biden administration’s AI executive order and the European Union’s AI Act likely having the greatest impact in the coming year. Finally, enforcement of data protection regulations will continue to ramp up, affecting a wider variety of companies, according to experts.

For companies, 2024 will be the year they have to become much more cognizant of where their data moves in the cloud, says Troy Leach, chief strategy officer for the Cloud Security Alliance (CSA).

“Companies are often just not aware of how their different departments are using information,” he says. “I think there’s still a growing awareness, but we’re in the early days of privacy laws and efforts of keeping data local. We’re still trying to figure that out, and I don’t think there’s a universal template yet for guidance on what our efforts should include.”

European, Chinese, and US regulators have put an exclamation mark on data security rules with some major fines. In May 2023, the Irish Data Protection Commission fined Meta, the company behind Facebook, €1.2 billion (US$1.3 billion) for violating localization regulations by transferring personal data about European users to the United States. In July 2022, Chinese authorities fined ride-sharing firm Didi Global more than 8 billion yuan (about $1.2 billion) for violating the country’s privacy and data security regulations.

The complexity of the regulatory landscape is growing quickly, says Raghvendra Singh, head of the cloud security practice at Tata Consultancy Services’ cybersecurity arm, TCS Cybersecurity.

“Almost all government [and] regulatory bodies across the globe are either working to define their data privacy and protection policies or advancing to the next level if already defined,” he says. “It is expected that in the next few years, we will see more stringent regulations across the globe.”

Data Localization Takes Off

Following the European Union’s passage of the General Data Protection Regulation (GDPR) in 2016, which replaced the Data Protection Directive of 1995, more nations have focused on keeping data protected under local regulations. Such data localization will continue, and by the end of 2024, three-quarters of the world’s population will have their personal data covered by modern privacy regulations, according to research firm Gartner.

For businesses, such regulations are a burden but also an opportunity, says TCS Cybersecurity’s Singh.

“Varied localization requirements, a need for reinventing operating models, and different standards for technology systems are some of the big complexities that organizations are struggling to navigate through,” he says. “Having said that, organizations that are able to address these challenges will hugely benefit and gain competitive advantage.”

Adapting to the localization regimes in each part of the world will allow companies to offer personalization, reduce the risk of data leaks and breaches, and benefit from a reputation for cybersecurity, says Singh.
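To make that concrete, a residency policy can be enforced in code at the point where data is written. The sketch below is a minimal illustration, not any particular cloud provider’s API: the region mapping, the ResidencyViolation exception, and the store_personal_record helper are all hypothetical placeholders for whatever storage client an organization actually uses.

```python
# Illustrative sketch: enforcing a simple data-residency policy at write time.
# The region mapping and helpers below are hypothetical placeholders, not any
# specific cloud provider's API.

# Hypothetical policy: personal data for users in these jurisdictions must
# stay in a storage region inside that jurisdiction.
RESIDENCY_POLICY = {
    "DE": "eu-central",   # Germany -> EU region
    "FR": "eu-central",
    "CN": "cn-north",     # China -> in-country region
    "US": "us-east",
}

DEFAULT_REGION = "us-east"


class ResidencyViolation(Exception):
    """Raised when a write would move personal data out of its required region."""


def required_region(user_country: str) -> str:
    """Return the storage region mandated for a user's country, if any."""
    return RESIDENCY_POLICY.get(user_country, DEFAULT_REGION)


def store_personal_record(record: dict, target_region: str) -> None:
    """Refuse writes that would land personal data in the wrong region."""
    region = required_region(record["user_country"])
    if target_region != region:
        raise ResidencyViolation(
            f"Record for {record['user_country']} must be stored in "
            f"{region}, not {target_region}"
        )
    # ... hand off to the real storage client for `region` here ...


if __name__ == "__main__":
    record = {"user_id": 42, "user_country": "DE", "email": "user@example.com"}
    store_personal_record(record, "eu-central")   # allowed
    try:
        store_personal_record(record, "us-east")  # blocked
    except ResidencyViolation as err:
        print(err)
```

In practice, the mapping would come from legal review of each jurisdiction’s rules, and the enforcement point would sit in front of every write path that handles personal data.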

AI Concerns Lead to Changing Landscape

While data localization is being woven into the fabric of the cloud, the bigger change in how businesses handle data in the coming year will be the rapid adoption of AI services and nations’ attempts to regulate the new technology.

As fears of being left behind in the innovative landscape mount, companies may not do enough due diligence, says CSA’s Leach. Many organizations, for example, may use a private instance of a GenAI model to protect their data, but the data will still be in the cloud, he says.

“What they don’t realize is that their organization is probably going to be borrowing cloud services within different regions of the world for just the computational requirements,” Leach says. “They’re going to have a really hard time — once that data is trained and put into the model — trying to find and be able to articulate where that data is, and where it resides, and where it has been.”

Fundamentally, though, the fast adoption of AI is problematic because data is the lifeblood of AI models: vast amounts of data will be fed into, consumed by, and output from these machines and services, and the companies behind the top AI products are not transparent about how they train their models and use data.
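One way to keep Leach’s question answerable is to record provenance before data ever reaches a training pipeline. The sketch below is purely illustrative, assuming hypothetical tag_for_training and audit_cross_border helpers: each record gets an envelope noting where it was collected and where it will be processed, so cross-border flows can at least be enumerated later.

```python
# Illustrative sketch of provenance tagging before model ingestion, so an
# organization can later answer "where did this training data come from, and
# where has it been processed?" All names here are hypothetical.
import hashlib
import json
from datetime import datetime, timezone


def tag_for_training(record: str, source_region: str, compute_region: str) -> dict:
    """Wrap a raw training record with an auditable provenance envelope."""
    return {
        "content_sha256": hashlib.sha256(record.encode("utf-8")).hexdigest(),
        "source_region": source_region,    # where the data was collected
        "compute_region": compute_region,  # where training will run
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }


def audit_cross_border(envelopes: list[dict]) -> list[dict]:
    """List records whose training region differs from their source region."""
    return [e for e in envelopes if e["source_region"] != e["compute_region"]]


if __name__ == "__main__":
    log = [
        tag_for_training("customer ticket #1", "eu-central", "us-east"),
        tag_for_training("customer ticket #2", "us-east", "us-east"),
    ]
    for entry in audit_cross_border(log):
        print("cross-border training data:", json.dumps(entry, indent=2))
```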

Analyst firm Forrester Research forecasts major AI-driven changes to the data landscape. Six in 10 employees will bring their own AI tools to work in 2024 in hopes of becoming more productive. At the same time, the technology carries significant legal, privacy, and security risks, with at least three major breaches blamed on AI-generated code in 2024, Forrester analysts predict.

The data privacy and compliance risks of AI topped a listing of 2024 trends for businesses published by Kiteworks, a regulatory compliance software firm. The estimates of how much private data flows into ChatGPT and other GenAI models are pretty consistent: Data security firm Cyberhaven found 4% of monitored workers submitted sensitive data, while Kiteworks cites a study that found 15% of workers have submitted information to ChatGPT, and a quarter of those — about 4% overall — considered the data to be sensitive.

“Most organizations probably don’t have the right mechanisms in place to be tracking [data input into GenAI systems] and controlling those transfers, and maybe they aren’t even aware of it,” says Patrick Spencer, vice president of research at Kiteworks. “You need to be able to control and track what’s loaded into those AI systems to control your risk.”
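A minimal version of the control Spencer describes is a filter that screens outbound prompts before they reach a GenAI endpoint. The patterns and the screen_prompt helper below are illustrative assumptions, not Kiteworks’ product or any real DLP engine; a production deployment would rely on far more robust detection.

```python
# Illustrative sketch of a pre-submission filter: scan outbound GenAI prompts
# for obviously sensitive patterns and report what was found so the transfer
# can be logged for audit. The patterns are simplistic examples only.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}


def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive matches and report which categories were found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings


if __name__ == "__main__":
    raw = "Summarize this: customer SSN 123-45-6789, card 4111 1111 1111 1111"
    safe, findings = screen_prompt(raw)
    print("flagged:", findings)   # e.g. ['ssn', 'credit_card']
    print("sending:", safe)       # only the redacted prompt leaves the network
```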

Increased Regulations, Especially AI-Related

Numerous small breaches have already taken place. About 40% of organizations have suffered an AI-related privacy breach, three-quarters of which were not malicious, according to a Gartner survey.

“Much of the AI running across organizations today is built into larger solutions, with little oversight available to assess the impact to privacy,” said Nader Henein, VP analyst at Gartner, adding that a rush to AI could make things worse. “Once AI regulation becomes more established, it will be nearly impossible to untangle toxic data ingested in the absence of an AI governance program. IT leaders will be left having to rip out systems wholesale, at great expense to their organizations and to their standing.”

Many open questions remain, such as who bears liability when AI models operate across borders, says CSA’s Leach.

“With data localization, because these regulations are newer, there’s fewer cases of enforcement, but I think there’s still a growing awareness of the risks,” he says. “Companies could accidentally put [themselves] at major risk if they’re consuming data from different countries, and then leveraging these large language models in other regions of the world.”


