Ethical Tomorrow: Navigating the Rise of AI (EU AI Act)

February 14, 2024
EU AI Act

Artificial Intelligence (AI) has risen dramatically over the last few decades. With advances in machine learning algorithms, access to vast amounts of data, and affordable computing power, AI systems are becoming increasingly sophisticated and capable, transforming industries and improving everyday life through more efficient processes, enhanced user experiences, and entirely new applications.

This rapid advancement of AI tools has undoubtedly drawn the attention of regulators around the world. As companies accelerate their efforts to create and launch new AI applications that promise to revolutionize businesses in every industry, political bodies and policy-makers have begun accelerating their own efforts to put laws in place that will control the potential risks of AI.

The main question is:

Should technology that possesses near-human intelligence but lacks a moral compass be given unrestricted freedom?

The growing calls for regulation stem from concerns about how AI is being developed and deployed, particularly regarding its potential impacts on society, employment, privacy, and ethics. Given these concerns, many experts believe that more regulation is necessary to ensure that AI development and deployment proceed in a safe, ethical, and socially beneficial manner.

The EU AI Act

In April 2021, the European Commission proposed new legislation to regulate artificial intelligence (AI) systems used within the European Union. The proposal seeks to introduce harmonized rules for the development, deployment, and use of AI systems within the EU, ensure the safety of those systems, and promote respect for fundamental rights.

The EU AI Act applies to providers of artificial intelligence systems that are placed on the market or whose use affects individuals within the European Union. This covers both private and public entities, whether they develop AI systems in-house or purchase and deploy external AI tools.

So how does the EU AI Act categorize AI systems based on their risk level?


Unacceptable risk

A set of highly harmful uses of AI that violate EU values and fundamental rights will be banned outright. These include social scoring for public and private purposes, exploitation of vulnerabilities, certain biometric categorization and remote biometric identification systems, individual predictive policing, emotion recognition in workplaces and educational institutions (except for medical or safety reasons), and untargeted scraping of the internet or CCTV footage for facial images.

High risk

A limited number of AI systems, as defined in the Act, are considered high-risk due to their potential adverse impact on people’s safety or fundamental rights. Examples include self-driving cars, AI for critical infrastructure management, AI tools used in employment, and medical devices or systems used in law enforcement. High-risk systems will have to follow strict rules around risk mitigation practices, data management protocols, transparency standards, and human oversight.

Limited and minimal risk

These categories include all other AI systems, which can be developed and used in accordance with existing legislation. Examples include AI chatbots, virtual assistants, and recommendation algorithms. These systems face only minimal transparency requirements, such as informing users that they are interacting with an automated system.

Furthermore, the AI Act addresses potential risks and outlines specific responsibilities for general-purpose AI models, especially large generative AI models. It’s worth noting that providers of free and open-source models are exempt from most obligations, though this exclusion does not cover providers of general-purpose AI models with systemic risks.

What measures must developers take for accountability and control?

  1. Conduct detailed risk assessments to identify potential hazards associated with intended uses;
  2. Use high-quality, relevant, and error-free datasets for training AI models (see the data-audit sketch after this list);
  3. Provide comprehensive technical documentation that outlines the development, testing, and validation processes of the AI system to minimize risks;
  4. Ensure transparency for users regarding the capabilities and limitations of the AI system;
  5. Undertake rigorous pre-launch assessments to verify compliance with regulations before releasing the product to the market.
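
To make point 2 above concrete, here is a minimal, hypothetical sketch of an automated data-quality audit a team might run before training. The `audit_dataset` helper and the column names are illustrative assumptions, not anything the Act prescribes; the point is that dataset checks should be systematic and leave evidence that can feed the technical documentation required in point 3.

```python
# Minimal sketch of a pre-training dataset audit (illustrative, not a
# regulatory requirement). Column names like "age" and "label" are
# hypothetical placeholders.
import pandas as pd

def audit_dataset(df: pd.DataFrame, label_col: str) -> dict:
    """Collect basic quality metrics that a compliance report could cite."""
    report = {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values_per_column": df.isna().sum().to_dict(),
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
    }
    # Flag obvious problems instead of silently training on bad data.
    report["issues"] = []
    if report["duplicate_rows"] > 0:
        report["issues"].append("dataset contains duplicate rows")
    if any(v > 0 for v in report["missing_values_per_column"].values()):
        report["issues"].append("dataset contains missing values")
    return report

if __name__ == "__main__":
    df = pd.DataFrame(
        {"age": [34, 34, None], "income": [52000, 52000, 61000], "label": [1, 1, 0]}
    )
    print(audit_dataset(df, label_col="label"))
```

On this toy frame the audit flags the duplicate row and the missing age value; in practice, the resulting report would be archived alongside the model’s technical documentation.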

What does the tech industry think?

The tech industry has expressed various views on the impacts of the EU AI Act. Those in favor of the AI Act applaud its efforts to establish clear guidelines and accountability mechanisms for AI systems, particularly high-risk ones. They see it as essential for building trust among users, promoting ethical AI practices, and protecting individuals’ rights and safety.

However, others have raised concerns about the Act’s potential impact on innovation. They argue that overly strict regulations could impede the development and adoption of AI technologies, particularly for startups and smaller companies with limited resources. The general sentiment regarding the compliance burden is that these regulations may pose challenges for companies in terms of time, cost, and expertise.

As AI powerhouses like the US, China, UAE, Saudi Arabia, Australia, and the UK actively invest billions of dollars in quantum and AI development, experts argue that the European AI Act should have been accompanied by a significant announcement of funding for AI research and deployment. In their view, the Act must not only address the challenges posed by AI but also harness its full potential for a progressive, competitive, and responsible digital Europe.

What does the AI Act mean for AI Developers and Experts?

With the AI Act coming into force, demand for qualified talent is expected to surge, and specific skills such as data testing methodologies, algorithmic explainability, privacy-preserving techniques, human-AI interaction design, and model risk management will be in particularly high demand.

Developers with expertise in these areas could benefit from salary increases, more leadership opportunities, and appealing job prospects overall as the compliance needs grow.

So how can developers leverage this moment to uplevel their skills?

  1. Dedicate time to fully understanding the Act’s required accountability mechanisms;
  2. Go beyond just checking boxes; deeply skill up on best practices to make systems trustworthy;
  3. Proactively self-educate on the latest tools and protocols around dataset oversight, model vulnerability assessments, monitoring performance drift, etc. (see the drift-monitoring sketch after this list);
  4. Engage in cross-functional collaboration and communication with legal, risk, and compliance counterparts to translate technical details;
  5. Improve abilities in educating stakeholders, handling ethical uncertainties, and guiding executives in making tough decisions.
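
Point 3 above mentions monitoring performance drift. One widely used (though by no means mandated) technique is the population stability index (PSI), which compares a model’s score distribution in production against the distribution seen at training time. Below is a minimal sketch; the bin count and alert threshold are rule-of-thumb assumptions, not values taken from the Act.

```python
# Minimal sketch of drift monitoring via the population stability index (PSI).
# Thresholds and bin counts are illustrative conventions, not regulatory values.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """Compare a model score distribution at training time vs. in production."""
    # Bin edges are fixed by the reference (training-time) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions; epsilon avoids log(0) and division by zero.
    eps = 1e-6
    exp_pct = exp_counts / max(len(expected), 1) + eps
    act_pct = act_counts / max(len(actual), 1) + eps
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_scores = rng.normal(0.5, 0.1, 10_000)  # reference distribution
    live_scores = rng.normal(0.58, 0.1, 10_000)  # shifted production scores
    psi = population_stability_index(train_scores, live_scores)
    # Common rule of thumb: PSI < 0.1 is stable, PSI > 0.25 warrants investigation.
    print(f"PSI = {psi:.3f}", "-> investigate drift" if psi > 0.25 else "-> stable")
```

A check like this can run on a schedule, with alerts feeding the human-oversight processes the Act expects for high-risk systems.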

Türkiye’s AI Landscape – Room to Catch up with Europe

Türkiye has rapidly scaled its digital economy: the country’s technology sector was valued at an estimated $30 billion in 2021, with projections that it will reach $100 billion by 2028.

Our recent State of AI for Software Developers Report shows that adoption of AI dev tools in Türkiye is very high and that the pool of AI developers is growing steadily, fuelled by increasing interest and investment in artificial intelligence technologies. Many educational institutions offer AI-related programs and courses, producing a steady stream of graduates with expertise in machine learning, deep learning, NLP, and other AI domains.

Türkiye’s tech industry is witnessing a rise in startups and companies focused on AI-driven solutions. These companies attract top talent and provide opportunities for AI developers to work on cutting-edge projects across various sectors, including healthcare, finance, e-commerce, and transportation. Türkiye’s technology sector is poised for growth, and ensuring that this growth happens responsibly could bring even greater prosperity.

The impending EU AI laws serve as a prime case study for Turkish policymakers to model domestic regulation on, while rallying the nation’s innovators to integrate ethical AI practices. Compliance with internationally accepted governance rules could become a competitive advantage for the country’s start-ups and AI talent on the global stage.



Ready to complete your free profile and find your next role in tech? Sign up today!
