Process Innovation
01/20/2025 | Digital Innovation
It is only a few months since the European Union announced, at headline level at least, its landmark EU AI Act, an attempt to lead the world on much-needed regulation of Artificial Intelligence and set a global standard. It came about after years of failed attempts and numerous setbacks, even at the late stages of negotiation last year, a process the Ada Lovelace Institute policy expert Connor Dunlop described as marked by an “unprecedented” level of “drafting and redrafting across all three EU institutions”. Objections to its principles were plentiful, from civil rights campaigners among others, and included whether to cover so-called Foundation Models, a question driven by the sheer pace of their adoption by major players. At times the negotiations were in real danger of failing, which would have opened the door for countries outside Europe to set the global AI agenda.
But on December 8, after a three-day marathon session to get it over the line, negotiators reached the first comprehensive AI-focused agreement, one described by EC President Ursula von der Leyen as opening a new era for European values and one that seeks to establish a risk-based framework for regulating AI products and applications. The Act aims to prioritise human rights in the development and deployment of AI, categorising systems based on the impact they can have on people’s lives, requiring high-risk systems to comply with certain requirements and to be assessed before they are placed on the market as well as during their lifecycle, and imposing heavy fines for non-compliance.
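To make that tiered logic more concrete, here is a minimal, purely illustrative Python sketch. The tier names follow the Act’s widely reported risk categories (unacceptable, high, limited and minimal risk), but the example use cases and the classify_use_case helper are hypothetical simplifications for illustration, not a rendering of the legal text.

    from enum import Enum

    class RiskTier(Enum):
        # Risk tiers as commonly summarised from the EU AI Act.
        UNACCEPTABLE = "prohibited outright"
        HIGH = "conformity assessment before market entry and during lifecycle"
        LIMITED = "transparency obligations, e.g. disclosing AI interaction"
        MINIMAL = "no additional obligations"

    # Hypothetical mapping for illustration only; the Act itself
    # defines these categories in far more detail.
    EXAMPLE_USE_CASES = {
        "social scoring of citizens": RiskTier.UNACCEPTABLE,
        "CV screening for recruitment": RiskTier.HIGH,
        "customer service chatbot": RiskTier.LIMITED,
        "email spam filter": RiskTier.MINIMAL,
    }

    def classify_use_case(description: str) -> RiskTier:
        # Look up an example use case; default to MINIMAL if unknown.
        return EXAMPLE_USE_CASES.get(description, RiskTier.MINIMAL)

    if __name__ == "__main__":
        for case in EXAMPLE_USE_CASES:
            tier = classify_use_case(case)
            print(f"{case}: {tier.name} ({tier.value})")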
Key points revolve around generative AI tools such as ChatGPT, trustworthy AI adoption, and facial recognition and fingerprint scanning systems. Industry reaction was, as expected given the pace of execution and the fact that much detail was still to be finalised, mixed, blending views that ranged from outright criticism of an almost knee-jerk reaction to rapidly moving events to celebration that there is now a line in the sand for future developments. Brussels lobbyist Daniel Friedlaender, who runs the European office of the Computer and Communications Industry Association, was quick to note that the “political deal” marks the beginning of “important and necessary technical work on crucial details”. He went on: “Regrettably, speed seems to have prevailed over quality, with potentially disastrous consequences for the European economy. The negative impact could be felt far beyond the AI sector alone.”
Daniel Castro, vice president of the Information Technology and Innovation Foundation, said: “Given how rapidly AI is developing, EU lawmakers should have hit pause on any legislation until they better understand what exactly it is they are regulating. There is likely an equal, if not greater, risk of unintended consequences from poorly conceived legislation than there is from poorly conceived technology. And unfortunately, fixing technology is usually much easier than fixing bad laws. The EU should focus on winning the innovation race, not the regulation race. AI promises to open a new wave of digital progress in all sectors of the economy. But it is not operating without constraints.”
But Sridhar Iyengar, Managing Director at Zoho Europe, a technology company with roots in Chennai, India, later described it as “a huge milestone in the development of this emerging technology”. He added: “While guidance is needed and welcomed, regulators must also be careful not to stifle innovation. Artificial Intelligence is becoming increasingly integrated into business processes, offering benefits to aid fraud detection, forecasting, sentiment analysis, deep analysis of data and more. However, it’s crucial to manage the use of AI responsibly, with effective guardrails in place to mitigate risks.
“As governments work on guidance and regulations, businesses should prioritise developing their own policies which go further to protect themselves and their customers. This will enable organisations to have more agility to react to market trends and serve their customers effectively, all while maintaining a high level of trust around how data is collected, stored and used, for example. It will be interesting to see how regulation develops further through a global and collaborative approach to support businesses.
“Academia, government, industry experts and businesses must continue to work together to educate and to ensure the safe and successful deployment of AI to potentially deliver significant economic benefit.”
Konstantinos Komaitis, a fellow and senior researcher at the Lisbon Council, noted that, significantly, the agreements reached by the European Commission, the European Council and the European Parliament are political ones, adding: “The political agreements are not necessarily agreements over the expectations citizens have and, to this end, Europe is still quite far from having a comprehensive piece of legislation.”
Graham Brookie, vice president of the Washington-based Atlantic Council’s Digital Forensic Research Lab, said the Act was “important because it is first – not because it is the most comprehensive”.
The Act, he said, will “undoubtedly be designed to build on the EU’s Digital Services Act and Digital Markets Act, which are public and now being implemented. In particular, the transparency and information-sharing standards in the Digital Services Act will likely be the most solid indicator of what the AI Act could eventually do. In the world of AI governance, the White House’s executive order sticks out as more concrete guidance to industry, but it sorely needs a legislative companion in the US Congress”.
Enza Iannopollo, analyst at the research and advisory group Forrester, was more optimistic. “Despite the criticism, this is good news for businesses and society,” she said. “It starts providing companies with a solid framework for the assessment and mitigation of risks that, if unchecked, could hurt customers and curtail businesses’ ability to benefit from their investments in the technology. And for society, it helps protect people from detrimental outcomes.”
Whatever the issues, such developments may be timely, given AI’s impact across the entire industrial landscape as it moves beyond well-known technologies such as big data, robotics and IoT. That is one reason why almost half of companies plan to begin or increase investments to integrate it into their businesses this year.
A common view is that any preconceived assumptions about AI’s prospects are likely to be wrong, especially when it comes to putting an endpoint on its potential.
Its adoption by leading tech players such as Microsoft, Google, Apple and Amazon, which are committing billions to create new products and services, and the growing prominence it is being given in university curricula are helping to ensure that.
Former Google Brain leader and Baidu chief scientist Andrew Ng summed this up when he said: “We may be in the eternal spring of AI.” Nowhere is that better demonstrated than in industry, which has a vested interest in staying ahead:
On the move: everything from self-driving cars to flight forecasting, travel planners and more personalised booking procedures.
In the factory: where developments have come a long way since AI-driven robotic arms, the earliest examples of which date back to the 1960s. The industrial robotics market was estimated at more than €48bn last year.
On the ward: thanks to its big data analysis capabilities, AI is now helping to identify diseases faster and with greater accuracy. Drug discovery is also more streamlined, and virtual nursing assistants are increasingly monitoring patients. And there’s a human impact that goes way beyond the threat to jobs: one that involves filling the creativity gaps that AI can’t, and the need to retrain people out of repetitive jobs and into ones that involve programming and supporting emerging technologies.
One of the leading authorities on the subject, UC Berkeley’s Prof Stuart Russell, one of the signatories of an open letter last year calling for a pause on AI training, gave an interesting take on emerging concerns in an interview with Vox.com’s The Gray Area when he said: “I think it’s important to understand that almost nobody is saying that the state of AI right now is such that we have to worry about AI systems taking over the world, if you want to put it that way. They still exhibit many limitations, and at least the latest generation, the large language models like ChatGPT, don’t exhibit the kinds of decision-making capabilities and all the planning capabilities that you would need to take over the world.
“You can try playing chess with them, for example. They’re pretty hopeless. They pretend well for a few moves, and then they’ll play a move that’s completely illegal because they haven’t actually learned the rules properly. There’s a lot of progress that we still need to make before we reach systems that are fully comparable or better than a human mind.” All this from a man who once told an audience in London that he wouldn’t talk to journalists “unless they agree not to put a Terminator robot in the article”.
The European Parliament approved the Artificial Intelligence Act on March 13, rubber-stamping the regulation agreed in negotiations with member states in December last year. It includes safeguards on general-purpose AI and on the use of biometric identification systems by law enforcement, bans on social scoring and on AI used to manipulate or exploit user vulnerabilities, as well as the right of consumers to launch complaints and receive meaningful explanations. Civil Liberties Committee co-rapporteur Dragos Tudorache of Romania summed it up when he said: “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies.”