CTOs and CIOs should establish a robust governance framework to ensure compliance, minimize risk and drive responsible AI adoption.
Prioritizing funding for AI research and startups is essential. The European Commission has set a target of 20 billion euros in annual AI investment, but this goal requires significant private-sector participation.
Under the AI Act, some AI uses are prohibited outright, while others are subject to varying degrees of governance, risk-management, and transparency requirements. The banned practices deemed to pose an unacceptable risk include real-time remote biometric identification in publicly accessible spaces, social scoring systems, and manipulative or exploitative techniques.
Understanding the need for regulation to ensure the safe use of AI, the European Union (EU) has introduced the world’s most comprehensive AI legislation to date: the AI Act.
The existential risks of AI are neither inevitable nor purely hypothetical. Though “2001: A Space Odyssey” presented a cautionary tale, real-world AI governance is in our hands. The decisions CIOs make today will determine whether AI remains a powerful ally or becomes a force we struggle to contain.
Europol has conducted a sting operation against a group whose members distributed images of minors generated entirely by artificial intelligence.
U.S. chipmaker Nvidia has sued EU antitrust regulators for accepting an Italian request last year to scrutinise its acquisition of AI startup Run:ai, saying they had flouted an earlier court ruling restricting their merger powers on minor deals.
This blog post provides a brief overview of the impact of Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonized rules on artificial intelligence (“AI Act”) on video game developers. More and more developers are integrating AI systems into their video games, which can bring them within the scope of the new rules.
In a world first, the EU created legislation to regulate artificial intelligence, the AI Act. But the EU now seems to be moving away from effective protection for those harmed by this technology by abandoning the proposed AI Liability Directive.
Why new systems and standards are needed for rights holders to be fairly compensated for, or to opt out of, the use of their works to train AI