
Balancing Usability, Data Security in the Age of AI and Regulation

Keeping data both safe and easily accessible has been a challenge for organisations since, well, since the first paper file was stored away. Admittedly, over the last couple of decades, this has become much trickier to navigate – digitisation means the sheer amount of data collected, stored, and used has grown exponentially. And now, we’re seeing another data growth spurt due to widespread AI adoption. 

Meanwhile, governments worldwide are doing their best to keep up, introducing ever more data regulation seemingly every year. This puts organisations under increased pressure to ensure data resilience as they get to grips with this new age of AI. They’ve been left to walk a tightrope: keeping data usable for the business while also keeping it secure and resilient, in line with evolving regulations.

Dude, where’s my data? 

With the widely acclaimed promise of AI, the demands on enterprise data have never been greater – requiring it to be accurate, accessible, and usable at all times. While the initial excitement around generative AI has quietened, organisations are now adopting the technology in earnest to unlock increased business value from all that existing data. According to the latest McKinsey Global Survey on AI, 65% of respondents worldwide reported that their organisations are regularly using AI. But what does this mean for data resilience?  

Well, it’s no secret that AI relies on data. Some would say the more data the better, but the wiser approach is the more accurate and relevant data, the better. While some AI applications might only need to be trained once, most require live access to a data pool to analyse and react to changes in real time. Any inaccuracies or inconsistencies in data across an organisation can quickly render AI’s output useless. As the adage goes: garbage in, garbage out. Of course, it’s important to be careful about what data you feed the beast, namely any sensitive, mission-critical or customer data. There’s very much still a balance to be figured out as more and more organisations embrace AI.
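
To show what that balance can look like in practice, here is a minimal, purely illustrative sketch of screening records before they ever reach an AI pipeline. The field names and rules are hypothetical assumptions for the example, not a recommendation for any particular dataset or tool:

    # Illustrative sketch only: screen records before they reach an AI pipeline.
    # The field names and rules below are hypothetical, not a prescribed standard.
    SENSITIVE_FIELDS = {"customer_email", "card_number", "national_id"}

    def is_clean(record: dict) -> bool:
        """Reject records that are incomplete or contain obviously sensitive fields."""
        if any(field in record for field in SENSITIVE_FIELDS):
            return False
        if any(value in (None, "") for value in record.values()):
            return False
        return True

    def prepare_for_ai(records: list[dict]) -> list[dict]:
        """Keep only complete, non-sensitive records for model consumption."""
        return [r for r in records if is_clean(r)]

    records = [
        {"region": "EMEA", "revenue": 120000},
        {"region": "EMEA", "revenue": None},                    # incomplete: dropped
        {"region": "APAC", "card_number": "4111111111111111"},  # sensitive: dropped
    ]
    print(prepare_for_ai(records))   # [{'region': 'EMEA', 'revenue': 120000}]

The point isn’t the specific checks; it’s that quality and sensitivity filtering happens before the data reaches the model, not after.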

What should help organisations strike this balance is the wave of regulations demanding greater data resilience and responsibility, both in AI and more broadly. These regulations, including NIS2 and the EU AI Act, have placed increased responsibility on organisations to ensure data security, and rightly so. This new wave of data regulation focuses largely on extending the line of custody that organisations have over their data, requiring them to consider how it will be secured when plugged into AI and other new technologies. When data was originally collected and stored, organisations likely didn’t have AI on their radar, let alone consider how their data might be used in such technologies. While these new considerations fall primarily under the responsibility of information governance teams, achieving compliance with AI-related regulations will require effort across the entire organisation. And this is all while ensuring that relevant teams have access to the data they need to innovate and grow.

No need to throw away the key 

So, at the moment, organisations are starting to walk the tightrope between ensuring a suitable speed of access to data while also maintaining data resilience in line with evolving regulations. While this might seem like a herculean task, it is the same problem that organisations have been tackling for years, just with a new set of systems and circumstances. 

This challenge never ends; it just evolves. The principles stay the same, but the technology, the environments, and the scale keep changing. According to the Veeam Data Protection Trends Report 2024, 76% of organisations recognise a ‘Protection Gap’ between how much data they can afford to lose and how often their data is protected. That sounds like a big gap, but it’s been getting smaller in recent years. With AI creating and needing exponentially more data as it evolves, however, this gap could start to widen unless action is taken.
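
To make the ‘Protection Gap’ concrete: if the business can only afford to lose an hour of data but backups run every six hours, up to five hours of work sits at risk. A minimal, purely illustrative sketch of that arithmetic (the figures and function are hypothetical, not drawn from the report):

    # Illustrative sketch only: the "Protection Gap" as the difference between how
    # much data loss the business can tolerate (its recovery point objective, RPO)
    # and how often data is actually protected. The hours below are hypothetical.
    def protection_gap_hours(rpo_hours: float, backup_interval_hours: float) -> float:
        """Worst-case window of data at risk beyond what the business can tolerate."""
        return max(backup_interval_hours - rpo_hours, 0.0)

    rpo = 1.0              # the business can afford to lose at most 1 hour of data
    backup_interval = 6.0  # backups currently run every 6 hours

    gap = protection_gap_hours(rpo, backup_interval)
    print(f"Protection gap: up to {gap:.0f} hours of data at risk")   # up to 5 hours

Closing the gap means either tolerating more loss, which few businesses can, or protecting data more frequently.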

Collaboration between teams – from data governance to security, IT, and production – has always been, and continues to be, essential to staying on top of data resilience. Working together to create a new set of business risk assessments will lead the way forward for organisations working with data in AI models.

Despite the additional work they bring for organisations, these regulations are perfectly timed to coincide with this AI boom, as they demand a re-evaluation of data security practices. But, organisations shouldn’t rely on new regulations to prompt this. Monitoring and adjusting risk levels should be a regular, ongoing process, especially when a new technology such as AI comes into the picture.

Two birds, one backup 

Ultimately, as in so many cases, it comes back to data backups. Already a key aspect of modern data regulation in their own right, backups will play a larger role in AI-specific regulation in the future. They will give the teams developing AI and LLMs a much-needed anchor in a constantly changing environment.

Not only do backups ensure that data remains accurate, secure, and usable at all times, they can also provide a comprehensive record for organisations to prove their adherence to regulations. This is an invaluable source of truth when dealing with AI, as its very nature makes it difficult to account for exactly how it has used the data it’s been fed or trained on. But, by using data backups, organisations can account for the security of their data at any given time, no matter where it’s being used.

Of course, total security can never be fully achieved when dealing with data and there will always be a weighing up of risk and reward for organisations. But, with quality data backups, you can be assured that you’ve got a safety net to, well, fall back on. 
