The first attempt of the European Commission to regulate AI. Will it help build an ecosystem of trust?

I hope you enjoy reading this blog post. If you are interested in tools that will help you innovate, automate and unlock new opportunities for growth, let’s talk.

Article written by Georgios

  • EC’s White Paper on Artificial Intelligence – A European Approach sets out guidelines to establish trust, promote AI and address high-risk AI applications

  • Proposed legal requirements for the future regulatory framework, applying to high-risk AI applications only, include: training data and record-keeping, human oversight, information provision when interacting with an AI system, and robustness and accuracy

  • EC’s approach is based on 3 pillars: staying ahead of technological developments, anticipating the socio-economic impact, and establishing an ethical and legal framework

On 20 February, European Commission President Ursula von der Leyen announced in her Political Guidelines a coordinated European approach to the human and ethical implications of Artificial Intelligence, as well as a reflection on the better use of big data for innovation.

The paper is seen as the first attempt to regulate Artificial Intelligence and forms part of the European Union’s broader plan for regulating technology over the coming years. The EC points out the risks posed by AI, the existing laws that apply to Artificial Intelligence, and its intention to update those laws to close any gaps that may exist.

The framework would prescribe a number of mandatory legal requirements for “high risk” Artificial Intelligence applications in order to ensure the regulatory intervention is proportionate.

Leverage under-used data in the EU to benefit from the potential of AI

Europe has developed a strong computing infrastructure essential to the functioning of AI. Additionally, Europe holds large volumes of public and industrial data, the potential of which, according to EC’s analysis, is currently under-used. Over half of the top European manufacturers implement at least one instance of AI in manufacturing operations.

Some €3.2 billion were invested in Artificial Intelligence in Europe in 2016, compared to around €12.1 billion in North America and €6.5 billion in Asia

However, investment in research and innovation in Europe is still a fraction of the public and private investment in other regions of the world: some €3.2 billion were invested in AI in Europe in 2016, compared to around €12.1 billion in North America and €6.5 billion in Asia. In response, Europe needs to increase its investment levels significantly. Between 2015 and 2018, EU funding for research and innovation in AI rose to €1.5 billion.

The Coordinated plan on Artificial Intelligence developed with Member States is proving to be a good starting point in building closer cooperation on Artificial Intelligence in Europe and in creating synergies to maximise investment in the AI value chain.

Seize the next data wave

Europe will need to find a way to create its own tech giants and really compete in the global data economy. The volume of data produced in the world is growing rapidly, from 33 zettabytes in 2018 to an expected 175 zettabytes in 2025. Furthermore, the way in which data are stored and processed will change dramatically over the coming five years.

The volume of data produced in the world is growing rapidly, from 33 zettabytes in 2018 to an expected 175 zettabytes in 2025, according to IDC

Today, 80% of data processing and analysis takes place in data centres and centralised computing facilities, and 20% in smart connected objects, such as cars, home appliances or manufacturing robots, and in computing facilities close to the user (“edge computing”). By 2025 these proportions are set to change markedly.
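To put those figures in context, the short Python sketch below works out the compound annual growth rate implied by the 33 ZB (2018) to 175 ZB (2025) forecast, and interpolates the centralised-versus-edge split. The assumption that today’s 80/20 ratio roughly inverts by 2025 is illustrative only; the White Paper says the proportions will “change markedly” without giving exact figures.

```python
# Illustrative arithmetic only, based on the figures quoted above.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

growth = cagr(33, 175, 2025 - 2018)
print(f"Implied data-volume CAGR 2018-2025: {growth:.1%}")  # roughly 27% per year

# Assumed for illustration: a straight-line shift from an 80/20
# centralised/edge split in 2018 towards a 20/80 split by 2025.
for year in range(2018, 2026):
    t = (year - 2018) / (2025 - 2018)
    centralised = 80 - 60 * t
    print(f"{year}: ~{centralised:.0f}% centralised / ~{100 - centralised:.0f}% edge")
```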

Manage safety risks and liability

Artificial Intelligence also poses risks for safety and for the effective functioning of the liability regime. Difficulty in identifying whether AI caused the harm, in whole or in part, “in turn may make it difficult for persons having suffered harm to obtain compensation under the current EU and national liability framework.”

For example, as a result of a flaw in object recognition technology, an autonomous car could wrongly identify an object on the road and cause an accident involving injuries and material damage. As with the risks to fundamental rights, these risks can be caused by flaws in the design of AI technology, by problems with the availability and quality of data, or by other problems stemming from machine learning. While some of these risks are not limited to products and services that rely on AI, the use of Artificial Intelligence may increase or aggravate them.
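As a rough illustration of the kind of robustness safeguard the paper points towards, the sketch below uses a hypothetical `plan_action()` helper (not a real autonomous-driving API) that falls back to a conservative manoeuvre and alerts the driver when the perception model’s confidence is too low to act on.

```python
# Minimal sketch: degrade gracefully when object recognition is uncertain.
# The function names, labels and threshold are assumptions for illustration.

from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.90  # assumed value, not from the White Paper

def plan_action(detections: List[Tuple[str, float]]) -> str:
    """Choose a driving action from (label, confidence) detections."""
    uncertain = [d for d in detections if d[1] < CONFIDENCE_THRESHOLD]
    if uncertain:
        # Low-confidence detections: slow down and hand an alert to the
        # human driver rather than acting on a possible misclassification.
        return "reduce_speed_and_alert_driver"
    if any(label == "pedestrian" for label, _ in detections):
        return "brake"
    return "continue"

# An object the model is unsure about triggers the safe fallback.
print(plan_action([("plastic_bag", 0.55), ("car", 0.97)]))
```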

Scope of Artificial Intelligence regulation: build an ecosystem of trust

The existing provisions of EU law will continue to apply in relation to AI, although certain updates to the framework may be necessary to reflect the digital transformation and the use of Artificial Intelligence.

A risk-based approach is important to ensure regulatory intervention is appropriate

Under the Product Liability Directive, a manufacturer is liable for damage caused by a defective product. However, in the case of an Artificial Intelligence-based system such as autonomous cars, it may be difficult to prove that there is a defect in the product, the damage that has occurred and the causal link between the two. In addition, there is some uncertainty about how and to what extent the Product Liability Directive applies in the case of certain types of defects, for example, if these result from weaknesses in the cyber security of the product.

Europe’s future Regulatory framework for AI

The paper concludes that the current product safety legislation already supports an extended concept of safety protection against all kinds of risks arising from the product according to its use. However, provisions explicitly covering new risks presented by the emerging digital technologies could be introduced to provide more legal certainty.

The characteristics of emerging digital technologies like AI, the IoT and robotics may challenge aspects of the current liability frameworks.

The European Commission suggests legal requirements for a future regulatory framework on high-risk AI applications only:

  1. Training AI systems on accurate and representative data
  2. Keeping records of the data used to train and test the AI systems, the data themselves, and the programming and training methodologies (a minimal sketch of such a record follows this list)
  3. Providing information to individuals so that they know when they are interacting with an AI system
  4. Requiring human oversight of AI systems
  5. Ensuring the robustness and accuracy of AI systems
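What the record-keeping requirement could translate to in practice is sketched below; the field names and JSON serialisation are illustrative assumptions, not a format prescribed by the Commission.

```python
# A minimal, assumed provenance record kept alongside a trained model,
# covering the data, methodology and test results a regulator might ask for.

import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class TrainingRecord:
    model_name: str
    model_version: str
    dataset_name: str
    dataset_version: str
    dataset_description: str               # what the data covers and how it was selected
    preprocessing_steps: List[str] = field(default_factory=list)
    training_methodology: str = ""          # algorithm, hyperparameters, validation setup
    test_results_summary: str = ""          # accuracy / robustness metrics on held-out data

record = TrainingRecord(
    model_name="lane_object_detector",
    model_version="1.4.0",
    dataset_name="eu_road_scenes",
    dataset_version="2020-02",
    dataset_description="Annotated road scenes sampled across weather and lighting conditions",
    preprocessing_steps=["deduplication", "label audit", "class rebalancing"],
    training_methodology="CNN detector, 5-fold cross-validation",
    test_results_summary="mAP 0.81 on held-out test split",
)

# Persist the record so that it can be inspected by auditors later.
print(json.dumps(asdict(record), indent=2))
```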

The European Commission is seeking comments until 19 May 2020. It will then start drafting legislation based on these proposals and the comments received, towards the end of 2020.

Learn more about Regulation in Auto2x’s report

To read more about the current state of the art in regulation, read our report, Regulatory Guide to Autonomous Driving.

This report analyses the regulatory landscape for the transition from Supervised to Unsupervised-Driving (SAE Level 4-5) to allow the deployment of higher levels of autonomy. Since the future is also Secure and Connected, our analysis also provides a regulatory guide on Automotive Cyber Security and V2X (V2V-V2I).


