
The Artificial Intelligence Bill Is on the Agenda

The Bill on Amendments to the Turkish Penal Code No. 5237 dated 26/9/2004 and Certain Laws (“Bill”), submitted to the Grand National Assembly of Turkey (TBMM) on 23 July 2025 and currently under committee review, introduces a legal definition of artificial intelligence (“AI”) and clarifies the applicable liability framework. In this context, the Bill establishes separate penalties for users and developers, imposes a six-hour intervention requirement and a permanent labelling obligation for deepfake content, classifies the use of discriminatory data sets as a data security breach, and authorizes the Information and Communication Technologies Authority (ICTA) to issue access-blocking orders and impose administrative fines of up to TRY 10 million. It also introduces obligations for AI service providers, including ensuring the transparency of training data sets and implementing content-verification mechanisms. Through these measures, the Bill aims to enact a series of regulatory amendments governing the development and use of AI technologies.

13.11.2025


Introduction

The Bill on Amendments to the Turkish Penal Code No. 5237 dated 26/9/2004 and Certain Laws (“Bill”), submitted to the Grand National Assembly of Turkey (TBMM) on 23 July 2025, aims to introduce comprehensive amendments to primary legislation in order to address legal uncertainties, cybersecurity risks and, in particular, threats to personality rights and public order arising from the rapid proliferation of artificial intelligence (“AI”) systems.

The Bill provides a legal definition of AI and introduces new obligations and sanctions in critical areas such as criminal liability, access-blocking time limits, data security, the prohibition of discrimination, and the labelling of deepfake content.

New Criminal Liability Regime for AI Offences

Through the provisions to be added to the Turkish Penal Code No. 5237 (“TPC”), the Bill separately regulates the liability of users and developers for offences committed through AI systems. Accordingly, the user who directs an AI system to perform an act constituting a crime shall be punished as the principal offender. In addition, where the design or training of the system by the developer enables the commission of the offence, the penalty to be imposed on the developer shall be increased by one-half.

Rapid Intervention Against Deepfake Content and Labelling Obligation

With the article to be added to Law No. 5651 on the Regulation of Publications on the Internet and Combating Crimes Committed Through Such Publications (“Internet Law”), the Bill aims to ensure effective and rapid intervention against AI-generated content that violates personality rights, threatens public security, or qualifies as deepfake content. In this context, the period for implementing access-blocking and content removal decisions shall be reduced to six hours. Content providers and AI developers shall be held jointly liable for compliance with this obligation.

In addition, under another article to be added, it shall be mandatory to label visual, audio or textual deepfake content produced through AI systems with a clear, visible and non-removable statement that the content has been artificially generated, namely: “Generated by Artificial Intelligence.” In the event of non-compliance with this labelling obligation, the Information and Communication Technologies Authority (“ICTA”) may impose an administrative fine ranging from TRY 500,000 to TRY 5,000,000. Where the violation is carried out systematically and intentionally, a decision may be issued to block access to the content provider. ICTA shall have the authority to conduct audits and to use technical monitoring tools for the purposes of this provision.

PDPL Protection Against Discriminatory Data Sets

With the provisions to be added to the Personal Data Protection Law No. 6698 (“PDPL”), the Bill makes it mandatory for data sets used in AI applications to comply with the principles of anonymity, the prohibition of discrimination and lawfulness. The most significant innovation is that the use of discriminatory data sets shall be explicitly deemed a data security breach. This regulation seeks to prevent AI models from being trained on biased data and to safeguard individuals’ fundamental rights.

ICTA’s Expanding Powers

Through amendments to the Cybersecurity Law No. 7545 and the Electronic Communications Law No. 5809, the Bill grants ICTA significant additional powers and imposes extensive obligations on AI service providers. Under the Bill, ICTA will be able to issue urgent access-blocking orders in respect of AI-generated content that threatens public order or election security. In the event of non-compliance with such an order, an administrative fine of up to TRY 10,000,000 may be imposed.

Furthermore, AI service providers will be subject to extensive obligations, including:

  • ensuring the transparency and auditability of training data sets,
  • establishing content verification mechanisms to prevent the generation of manipulative information,
  • implementing algorithmic controls to reduce hallucination risks,
  • setting up human-in-the-loop approval and oversight mechanisms in high-risk areas of use, and
  • conducting security vulnerability tests at regular intervals.

Service providers that fail to fulfil these obligations may be subject to administrative fines of up to TRY 5,000,000. In cases of serious violations that threaten public order, temporary suspension of operations may also be ordered.

Criminal Liability Risks for Social Network Providers

The Bill significantly expands the scope of access-blocking under the Internet Law. In addition to the offences already covered, insult (TPC Article 125), threat (TPC Article 106) and crimes against humanity (TPC Article 77), offences that can easily be committed and disseminated through AI systems, are included among the offences for which content-removal and/or access-blocking decisions may be issued.

Moreover, the Bill makes the provisions of the TPC applicable to social network providers on whose platforms AI operates, thereby significantly increasing those platforms’ legal liability for offences committed through their own AI-based services.

Conclusion

The Bill introduces strong instruments in the field of AI regulation, including increased criminal liability under the TPC, a six-hour mandatory response period for deepfake and similar content, the express recognition of discriminatory data sets as a violation under the PDPL, and ICTA’s authority to impose administrative fines of up to TRY 10,000,000.

As the Bill is still under committee review, it is essential for all companies using AI to review their operational and technical infrastructure for compliance with these new obligations and to take the necessary measures to avoid potential legal and financial sanctions.

The full text of the Bill (available only in Turkish) can be accessed here.