News and Views
Media Coverage

Cyber Insights 2025: Artificial Intelligence

SecurityWeek

Cyber Insights 2025 examines expert opinions on the expected evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts to gain their insights. Here we discuss what to expect from Artificial Intelligence.

[...]

The idea that current big tech LLMs were built without breaking privacy and copyright laws (by scraping the internet and social media) is a stretch. But it is a perfect illustration of the regulators' dilemma: do you protect the people, or do you protect innovation (and, by extension, the economy)?

Where AI is concerned, the result has been a fudge: the regulators appear to be saying, 'we're not going to look too deeply into whether you have broken the law, but don't break it anymore.' The focus going forward is on copyright, driven by the threat of deepfakes and AI-generated misinformation, and 'watermarking' is the proposed solution.

“In Europe, the EU AI Act encourages watermark labeling to be part of the AI vendor output to address concerns like misinformation and deepfakes,” explains Sharon Klein, a partner at the law firm Blank Rome. “California also recently passed the California AI Transparency Act requiring developers of widely used AI systems to provide certain AI-detection tools and watermarking capabilities to help identify AI-generated content.” The Act was signed into law by Governor Newsom on September 19, 2024, and will come into effect on January 1, 2026.
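
As a purely illustrative aside, not drawn from the article: the 'watermarking' discussed here generally means a statistical signal embedded in AI output that detection tools can later test for. Below is a minimal Python sketch of one common research-style approach, a 'green-list' text watermark detector. The vocabulary size, the green-list fraction, the function names, and the suggested threshold are all hypothetical, and none of this reflects any specific vendor's or regulator's actual scheme.

import hashlib
import math
import random

VOCAB_SIZE = 50_000      # hypothetical vocabulary size
GREEN_FRACTION = 0.25    # hypothetical share of tokens marked "green" at each step

def green_list(prev_token: int) -> set:
    # Derive a pseudorandom "green" subset of the vocabulary,
    # seeded by a hash of the previous token.
    seed = int.from_bytes(hashlib.sha256(str(prev_token).encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def watermark_z_score(tokens: list) -> float:
    # Count how often each token falls in the green list derived from its predecessor,
    # then return a z-score for the excess over what chance alone would predict.
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std

A generator that deliberately biases its sampling toward each step's green list will produce text with a high z-score (say, above 4), while unwatermarked human-written text should score near zero. This is the kind of statistical test an 'AI-detection tool' of the sort the California Act describes might rely on.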

The full article is available on SecurityWeek.

"Cyber Insights 2025: Artificial Intelligence," by Kevin Townsend was published in SecurityWeek on January 29, 2025.