The Impact of the EU AI Act on the Use of AI-Powered Chatbots

By Lucija Vranesevic Grbic

May 6, 2026


The expansion of large-language-model artificial intelligence has accelerated the global use of conversational AI for commercial purposes. Given the significant opportunities and risks associated with AI, it is not surprising that it caught the regulatory eye. For New York lawyers advising clients with cross‑border operations, the rapid expansion of AI regulation has become an issue that deserves close attention. Companies that deploy AI systems often face compliance obligations outside the United States – particularly in jurisdictions like the European Union, where the regulatory framework is both comprehensive and extraterritorial. As New York practitioners support clients operating in foreign markets, it helps to know when international rules might come into play. A working understanding of the EU AI Act equips New York lawyers to provide strategic guidance, helping clients anticipate regulatory requirements, manage risk and avoid costly compliance missteps.

Deep neural network systems add another challenge because their internal reasoning remains largely inaccessible to human understanding – a challenge commonly described as the black box problem.[1] Recent research identifies a key weakness in the use of these systems: large language models are frequently treated as sources of information, despite not being designed to ensure factual accuracy. Rather than providing verified truths, these systems generate responses by predicting the most probable sequence of words based on their training data. This limitation introduces a new category of risk, referred to as careless speech.[2]
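
To make the prediction mechanic concrete, the toy Python sketch below uses a hand-written table of invented probabilities as a deliberately simplified stand-in for a language model. It shows the core point: the most probable token wins, and nothing in the loop checks whether the output is true.

```python
# Toy next-token predictor. The "model" is a hand-written bigram table with
# invented probabilities -- a simplified stand-in for a large language model,
# used only to illustrate the prediction mechanic described above.
NEXT_TOKEN_PROBS = {
    ("capital", "of"): {"France": 0.62, "Germany": 0.30, "Atlantis": 0.08},
    ("is", "a"): {"city": 0.5, "country": 0.4, "chatbot": 0.1},
}

def predict_next(prev: str, curr: str) -> str:
    """Greedy decoding: return the highest-probability next token.

    Note what is absent: any check that the chosen token is factually
    correct. Probability, not truth, drives the output.
    """
    probs = NEXT_TOKEN_PROBS.get((prev, curr))
    if probs is None:
        return "<unknown>"
    return max(probs, key=probs.get)

print(predict_next("capital", "of"))  # -> "France" (most probable, not verified)
```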

AI-related risks are closely tied to the architecture and operation of these systems. The EU Artificial Intelligence Act[3] entered into force on Aug. 1, 2024, and established an extensive horizontal framework governing AI-powered systems with extraterritorial reach. In this respect, it follows the approach taken by the General Data Protection Regulation,[4] the EU’s main privacy law. This regulation requires organizations around the world to comply with EU data protection rules whenever they handle the personal data of EU users. The EU AI Act adopts a similar model. It applies regardless of where an AI provider is located or where a model was trained, so long as the system or its outputs are used within the EU.

Many businesses instinctively classify chatbots as low risk, assuming they only need to comply with basic transparency requirements. This assumption often oversimplifies the legal landscape. Not all chatbots function alike, and depending on their purpose, technical capabilities and influence on customer decision-making, certain models may trigger more stringent obligations than companies might expect.

How AI‑Powered Chatbots Work – and the Risks Behind Them

The AI black box problem is frequently highlighted in academic literature. Users provide an input to the system, and the system produces an output. The processes between the submission of the input and the composition of the output give rise to several pitfalls associated with AI models.[5]

Simply put, chatbots and conversational agents driven by AI are software programs designed to mirror human-like dialogue. The underlying technology powering chatbots and conversational AI agents is based on natural language processing and deep learning.[6] Chatbots generally fall into two main groups: open domain chatbots and closed domain chatbots, but they can be further categorized using additional metrics such as knowledge domain, interaction mode and goals.[7]
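
The practical difference between the two groups can be sketched in a few lines of Python; the knowledge base and replies below are invented for illustration. A closed-domain bot answers only from a fixed knowledge base, while an open-domain bot generates free-form replies on any topic.

```python
# Illustrative contrast: a closed-domain bot answers only from a fixed
# knowledge base and refuses everything else. The FAQ content and fallback
# text here are invented examples, not a real product.
FAQ = {
    "opening hours": "We are open 9:00-17:00 CET, Monday through Friday.",
    "returns": "Items may be returned within 30 days with a receipt.",
}

def closed_domain_reply(question: str) -> str:
    """Match the question against known topics; refuse everything else."""
    q = question.lower()
    for topic, answer in FAQ.items():
        if topic in q:
            return answer
    return "Sorry, I can only help with store-related questions."

print(closed_domain_reply("What are your opening hours?"))
```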

Potential risks of these chatbots generally fall into three areas: risks arising from the inputs they receive, their outputs, and their commercial deployment. The first category is linked to the way large language models, which underpin chatbots, are trained. The importance of properly selecting the datasets used to train these systems is often overlooked, and the lack of transparency can create a negative feedback loop with no consistent standards for dataset quality.[8] For example, a 2024 study documented widespread large language model training data contamination and indirect data leakage.[9]

The second risk category relates to the output produced by AI systems. Back in 2023, the Italian Data Protection Authority ordered an urgent temporary limitation on the processing of personal data by a company operating an AI-powered chatbot called Replika. After reviewing Replika’s replies, the Italian Data Protection Authority concluded that, with no age verification or control procedures in place, the chatbot posed risks to minors and emotionally vulnerable individuals.[10]

The last risk category emerges once chatbots enter commercial use, as the new generation of AI models benefits from more advanced capabilities and access to vast training data sets.[11] While these technological advancements should be welcomed, AI-powered chatbots are marketed as commercial products, reaching users with limited understanding of how such systems operate. The EU AI Act therefore underscores the importance of AI literacy and explainability for all relevant actors throughout the entire value chain.

Compliance Deadlines

The implementation of the EU AI Act follows a timeline set by the European Commission, starting from its entry into force on Aug. 1, 2024. The first binding requirements took effect on Feb. 2, 2025, when the chapters on general provisions and prohibited AI practices became enforceable.

The second wave arrived on Aug. 2, 2025, introducing governance obligations for general-purpose AI models and activating provisions on relevant authorities; the EU AI office and national authorities became fully operational.

The next important milestone arrives on Aug. 2, 2026, as the general date of application. From that point, compliance obligations for high-risk AI systems become fully enforceable. This is also when substantial fines – up to 35 million euros ($41 million) or 7% of global annual turnover – begin to apply. The final phase follows in August 2027 when the AI Act is expected to be fully effective.[12]
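
For the most serious violations, Article 99 applies whichever of the two amounts is higher, so the percentage-based cap controls for large undertakings. A quick arithmetic sketch in Python:

```python
def fine_ceiling_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious violations under Art. 99(3):
    35 million EUR or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with 2 billion EUR global turnover, the 7% prong controls:
print(f"{fine_ceiling_eur(2_000_000_000):,.0f}")  # -> 140,000,000
```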

Decoding the AI Act

Scope and Extraterritorial Reach

The AI Act doesn’t exist in a vacuum; it seeks to stay aligned with the work of international organizations to help ensure that rules governing AI systems are interpreted uniformly. As Recital 12[13] reminds us, all AI systems share one common trait – the ability to infer. Whether they operate on a standalone basis or are tucked inside a larger product, their level of autonomy and ability to draw conclusions from data is what makes compliance important.

One interesting aspect is just how far the AI Act’s reach extends. Recital 22[14] and Article 2[15] make this clear; the AI Act can apply even when an AI system doesn’t physically enter the EU market. For example, if an EU-based company outsources certain activities to a provider located in a third country, and those activities have the potential to fall into the high-risk category, the act may still apply.

This captures the essence of the act’s extraterritorial scope. Providers and deployers located outside the EU can still fall under the AI Act if their AI systems produce outputs intended for use within the EU or otherwise affect individuals located in the EU. While limited exceptions exist, the main principle is clear: If users in the EU can use your AI system, you should plan for compliance with the AI Act.

The Risk Profile of Your Chatbot

The AI Act adopts a risk-based approach, classifying AI systems into four tiers: unacceptable risk, high risk, limited risk and minimal risk. Many chatbots fall under the limited-risk tier, which subjects them to transparency obligations. However, this is not a blanket rule. The precise risk classification depends on the chatbot’s functions and areas of deployment.

Article 5 of the AI Act sets out a list of prohibited practices. These include AI systems that utilize subliminal or manipulative techniques, exploit individuals’ vulnerabilities such as age, disability or financial situation, or carry out social scoring.[16] At first glance, it seems unlikely that a commercial chatbot could fall within these categories, but it is not impossible. A handful of cases already show how AI chatbots can drift toward the kinds of behavior the AI Act aims to limit once it fully takes effect. Beyond Replika, mentioned above, EU regulators have also raised concerns about Character.AI, an AI companion used widely in Europe, noting that emotionally immersive chatbots can influence users by simulating close relationships. A widely reported U.K. incident illustrates the risks – a young man who attempted to harm Queen Elizabeth II in 2021 told investigators that an AI “girlfriend” chatbot had encouraged him.[17]

The EU Commission guidelines on prohibited AI practices explain that Article 5(1)(a) covers cases where a chatbot presents false or misleading information in a manner that aims to or has the effect of deceiving individuals and distorting their behavior, particularly if the AI nature of the interaction has not been disclosed (para. 72).[18] The guidelines also note that chatbots designed to use subliminal messaging techniques, exploit emotional dependency or target specific vulnerabilities of customers in advertisements may be considered intentionally manipulative (para. 82).[19] Another example is a chatbot promoting fraudulent products capable of causing severe financial harm (para. 89).[20] For the prohibition to apply, there must be a reasonably likely causal link between the chatbot’s subliminal, manipulative, or deceptive technique and the user’s resulting conduct (para. 84).[21]

Article 6 of the AI Act explains what counts as a high-risk AI system. This includes AI used as a safety component in a product, or AI that is itself a product, covered by EU legislation and required to undergo a third-party conformity assessment, as well as the use cases listed in Annex III.[22] For example, systems implemented to profile individuals are treated as high risk. However, the act acknowledges that not every Annex III system will create the same degree of risk. A chatbot that nominally falls within an Annex III category will not be treated as high-risk if it does not pose a significant risk to a person’s health, safety or fundamental rights, including situations where the system does not materially influence the individual’s decision-making.

The AI Act also sets rules for limited-risk AI systems in Chapter 4. For providers of AI-powered chatbots, Article 50 is particularly relevant: It requires that users be informed whenever they are interacting with AI.[23]
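
In practice, a provider might satisfy this by disclosing the AI nature of the interaction when a session opens. The Python sketch below assumes a hypothetical chat backend; the session structure, function name and message wording are illustrative, since the act mandates the disclosure itself rather than any particular format.

```python
# Minimal sketch of an Article 50-style disclosure at session start.
# The session structure and wording are hypothetical assumptions; the AI Act
# requires the disclosure, not this specific implementation.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def start_session(user_id: str) -> dict:
    """Open a chat session whose first visible message is the AI disclosure."""
    return {
        "user_id": user_id,
        "messages": [{"role": "assistant", "text": AI_DISCLOSURE}],
        "disclosure_shown": True,  # record the fact for audit purposes
    }

session = start_session("user-42")
print(session["messages"][0]["text"])
```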

By contrast, minimal risk AI systems are not regulated by the act. This category covers most AI tools currently available in the EU market – for example, spam-filtering tools or AI-powered video games.

Making Sense of AI Act Compliance

Compliance Duties for High-Risk AI

The AI Act primarily focuses on the responsibilities of providers of high-risk AI systems, regardless of whether they are established within the EU or in a third country. As noted earlier, non-EU providers can be held accountable if the output of their high-risk AI systems is used within the EU.

Articles 8 to 15 in Section 2 of Chapter III set out the main duties that providers must follow throughout the entire lifecycle of a high-risk AI system. These responsibilities involve establishing a well-documented risk management process, implementing proper testing methods, ensuring appropriate data governance practices and maintaining detailed technical documentation. Providers must enable automatic logging to support record-keeping, ensure human oversight and implement measures for accuracy, resilience and cybersecurity. The Cyber Resilience Act[24] applies broadly to digital products, including software and connected devices that include AI-based functionalities, and readers should be aware that its scope may overlap with the AI Act.[25]
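
What automatic logging might look like at its simplest is sketched below in Python. The record fields and file format are illustrative assumptions – the act prescribes no schema – though the six-month minimum retention period comes from Article 19(1), quoted in endnote 29.

```python
import json
from datetime import datetime, timedelta, timezone

# Art. 19(1): automatically generated logs must be kept for a period
# appropriate to the system's intended purpose, and at least six months.
MIN_RETENTION = timedelta(days=183)

def log_interaction(path: str, user_input: str, model_output: str) -> None:
    """Append one timestamped chatbot exchange to an append-only JSONL log.
    Field names are illustrative; the AI Act does not prescribe a schema."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": user_input,
        "output": model_output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

def may_delete(record_time: datetime) -> bool:
    """A retention job may purge a record only after the minimum period."""
    return datetime.now(timezone.utc) - record_time >= MIN_RETENTION

log_interaction("chatbot_audit.jsonl", "What is my order status?", "Shipped.")
```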

Additional rules for both providers and deployers are specified in Section 3. Article 16 requires providers to clearly state their business name or trademark, along with their contact information.[26] They must implement a quality management system in accordance with Article 17[27] and retain all required documentation for 10 years (as mandated by Article 18[28]). Providers are also obliged to preserve automatically generated logs (Article 19(1)[29]), issue an EU declaration of conformity under Article 47,[30] affix the CE (European Conformity) marking to their product and comply with the registration requirements in Article 49(1).[31] They must promptly address any problems that arise, cooperate with relevant national authorities, and ensure that their quality management system remains effective.

Non-EU providers must be aware of one additional requirement under Article 22.[32] Before their AI system is placed on the EU market, they need to appoint an authorized representative established within the EU. This representative acts as the provider’s local point of contact and is responsible for collaboration and communication with competent supervisory authorities.

Although this regulation mainly addresses providers, Section 3 of Chapter III stipulates obligations for other key players in the AI value chain, including importers, distributors, deployers and suppliers. These compliance requirements apply to anyone who puts a high-risk AI system on the market under their own trade name or registered trademark, makes significant alterations that keep the system within the high-risk category, or modifies the system’s intended purpose in a manner that triggers categorization as high risk under Article 6.

Transparency Obligations for Providers and Deployers of Certain AI Systems

Chapter 4 outlines transparency requirements for providers and deployers of certain AI systems. As highlighted earlier, Recital 132[33] and Article 50[34] are especially significant for providers and deployers of AI-driven chatbots as they impose an obligation to clearly inform users whenever they are interacting with an AI system, whenever content has been generated or altered by AI, and whenever biometric categorization or emotion recognition features are being used.

In essence, comprehensive documentation supplied by developers on how an AI model functions is central to meeting transparency obligations. Although the AI Act does not specify what the documentation should look like, tools such as model cards, dataset sheets and transparency reports can help explain how a system was built, trained and tested.[35]
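
A model card can be as simple as a structured document. The Python sketch below shows a hypothetical minimum; the field names are assumptions of this illustration, since the act leaves the exact format open.

```python
# A hypothetical minimal model card expressed as plain data. The fields are
# illustrative; the AI Act does not mandate any specific documentation schema.
model_card = {
    "model_name": "support-chatbot-v2",
    "intended_use": "Closed-domain customer-service FAQ answering",
    "training_data": "Curated support tickets (2020-2024), PII removed",
    "evaluation": "Accuracy and robustness results in transparency report",
    "known_limitations": "May produce plausible but incorrect answers",
    "oversight": "Low-confidence replies escalate to a human agent",
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```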

GPAI Compliance Requirements

Chapter 5 governs general-purpose AI (GPAI) models and introduces a two-tier compliance structure: baseline obligations for all GPAI model providers, and an additional set for models with systemic risk. Considering the difficulty of assigning a single risk classification to these models – mainly because they can be adapted to a wide range of use cases – it is unsurprising that the AI Act dedicates a separate chapter to their regulation.[36]

The requirements for the first category are set out in Articles 53[37] and 54.[38] Namely, GPAI providers must maintain detailed records of how their systems are developed and tested, and they are required to share the relevant technical documentation with businesses that integrate their model, while still safeguarding their intellectual property rights. These obligations do not extend to open-source AI models unless they are classified as models with systemic risk. Providers may also rely on published codes of practice to demonstrate compliance. In addition, Article 54 requires providers established outside the EU to appoint an EU-based representative responsible for ensuring adherence to all applicable obligations and for maintaining cooperation with competent authorities.[39]

Providers in the second category must comply with an additional set of requirements. These include conducting model evaluations using standard protocols and tools, assessing and minimizing potential systemic risks, and reporting incidents to the EU AI office and national authorities. They are also obliged to ensure an appropriate degree of cybersecurity protection for the model and the physical infrastructure on which it operates.

Conclusion

The main obstacle to AI Act compliance is not implementation but interpretation. The EU AI Office is tasked with encouraging the creation of voluntary codes of conduct for AI systems to promote adherence to relevant standards and industry best practices (Article 95[40]). A 2026 study examined attitudes toward the codes of conduct under Article 95 and found that participants expressed a positive overall sentiment.[41]

Although the AI Act interacts with other EU digital regulations, it establishes its own distinct compliance regime that cannot simply be folded into existing General Data Protection Regulation or cybersecurity processes. Businesses should begin by creating a full inventory of all AI systems in use and classifying them under the act’s four-tier risk framework, as well as under the two-tier GPAI structure. Once this baseline is in place, organizations can address the remaining compliance tasks: reviewing and updating AI governance frameworks, conducting risk assessments, meeting AI-literacy obligations and resolving areas where the AI Act overlaps with other EU rules.
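
Such an inventory can start as a simple structured list. The Python sketch below encodes the four risk tiers and the two GPAI flags as data; the example entries and field names are hypothetical, offered only as a starting point.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four-tier risk classification."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    tier: RiskTier
    gpai: bool = False           # subject to the Chapter 5 GPAI regime?
    systemic_risk: bool = False  # the stricter second GPAI tier, if GPAI

# Hypothetical example entries for a first-pass inventory.
inventory = [
    AISystemRecord("support-chatbot", "Customer service", RiskTier.LIMITED),
    AISystemRecord("cv-screener", "Hiring/profiling (Annex III)", RiskTier.HIGH),
    AISystemRecord("spam-filter", "Email filtering", RiskTier.MINIMAL),
]

high_risk = [s.name for s in inventory if s.tier is RiskTier.HIGH]
print(high_risk)  # -> ['cv-screener']
```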

The road to AI compliance in Europe begins with an understanding of what the EU AI Act requires, and that obligation extends well beyond EU borders. Any lawyer advising a company that sells into the EU market, processes EU‑based user data, deploys AI tools across multiple jurisdictions or supports clients expanding abroad will need to understand this law. For lawyers in these roles, it isn’t just “European law” anymore – it’s simply part of doing business today.


Lucija Vranesevic Grbic is the founder of a boutique law firm in Belgrade, Serbia, advising domestic and international clients in corporate, IT, contract, and sports law. This article appears in a forthcoming issue of One on One, the publication of the General Practice Section. For more information, please visit nysba.org/gp.

Endnotes:

[1] George Pavlidis, Unlocking the Black Box: Analysing the EU Artificial Intelligence Act’s Framework for Explainability in AI, Law, Innovation and Technology 16(1), 2024, p. 293.

[2] Sandra Wachter, Brent Mittelstadt and Chris Russell, Do Large Language Models Have a Legal Duty to Tell the Truth? R. Soc. Open Sci. 11 (2024), https://doi.org/10.1098/rsos.240197.

[3] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), OJ L 2024/1689.

[4] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119.

[5] Sara Migliorini, ‘More than Words’: A Legal Approach to the Risks of Commercial Chatbots Powered by Generative Artificial Intelligence, European Journal of Risk Regulation 15, 2024, p. 720.

[6] Avyay Casheekar et al., A Contemporary Review on Chatbots, AI-Powered Virtual Conversational Agents, ChatGPT: Applications, Open Challenges and Future Research Directions, Computer Science Review Vol. 52 (2024), https://doi.org/10.1016/j.cosrev.2024.100632.

[7] Sonali Uttam Singh and Akbar Siami Namin, A Survey on Chatbots and Large Language Models: Testing and evaluation techniques, Natural Language Processing Journal 10, 2025, https://doi.org/10.1016/j.nlp.2025.100128.

[8] Migliorini, supra note 5, at 723-724.

[9] Simone Balloccu et al., Leak, Cheat, Repeat: Data Contamination and Evaluation Malpractices in Closed-Source LLMs, in Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, Vol. 1, St. Julian’s, Malta, Association for Computational Linguistics, 2024, p. 67.

[10] Italian Data Protection Authority, Provvedimento del 2 febbraio 2023 (Order of Feb. 2, 2023) (9852214), available in English at https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9852214#english.

[11] Migliorini, supra note 5, at 724.

[12] European Parliament, AI Act implementation timeline, https://www.europarl.europa.eu/RegData/etudes/ATAG/2025/772906/EPRS_ATA(2025)772906_EN.pdf.

[13] Artificial Intelligence Act, supra note 3, at Recital 12: “A key characteristic of AI systems is their capability to infer. This capability to infer refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms, or both, from inputs or data.”

[14] Id. at Recital 22: “To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and deployers of AI systems that are established in a third country, to the extent the output produced by those systems is intended to be used in the Union.”

[15] Id. at Art. 2 (Scope).

[16] Id. at Art. 5 (Prohibited AI practices).

[17] Pieter Haeck, My AI Friend Has EU Regulators Worried, Politico (Aug. 21, 2025), https://www.politico.eu/article/ai-friends-experts-worried-artificial-intelligence-chatbot-digital-technology/.

[18] Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act), C (2025) 5052 final, para. 72: “By contrast, the prohibition in Article 5(1)(a) AI Act has a much more limited scope…”

[19] Id. at para. 82: “Article 5(1)(a) AI Act applies to AI systems deploying the above-mentioned techniques and having as a first scenario ‘the objective to materially distort the behaviour of a person or a group of persons’.”

[20] Id. at para. 89: “Financial and economic harm may encompass a range of adverse effects, including financial loss, financial exclusion, economic instability. For example, a chatbot that offers fraudulent products that cause significant financial harms.”

[21] Id. at para. 84: “A plausible/reasonably likely causal link between the subliminal, purposefully manipulative or deceptive technique deployed by the AI system and its effects on the behaviour is, however, always necessary for the prohibition to apply.”

[22] Artificial Intelligence Act, supra note 3, at Art. 6.

[23] Id. at Art. 50: “Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.”

[24] Regulation (EU) 2024/2847 of the European Parliament and of the Council of 23 October 2024 on horizontal cybersecurity requirements for products with digital elements and amending Regulations (EU) No 168/2013 and (EU) 2019/1020 and Directive (EU) 2020/1828 (Cyber Resilience Act), OJ L, 2024/2847.

[25] Marta Beltrán, AI Algorithms Under Scrutiny: GDPR, DSA, AI Act and CRA as pillars for algorithmic security and privacy in the European Union, Computers & Security Vol. 158, 2025, p. 6.

[26] Artificial Intelligence Act, supra note 3, at Art. 16.

[27] Id. at Art. 17: “Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects…”

[28] Id. at Art. 18: “The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities…”

[29] Id. at Art. 19(1): “Providers of high-risk AI systems shall keep the logs referred to in Article 12(1), automatically generated by their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in the applicable Union or national law, in particular in Union law on the protection of personal data.”

[30] Id. at Art. 47: “The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request…”

[31] Id. at Art. 49(1) (registration requirements); on the CE marking, see Art. 48(1): “The CE marking shall be subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008.”

[32] Id. at Art. 22: “Prior to making their high-risk AI systems available on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union.”

[33] Id. at Recital 132: “Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems and subject to targeted exceptions to take into account the special need of law enforcement.”

[34] Id. at Art. 50.

[35] Konstantinos Kalodanis et al., Enhancing Transparency in Large Language Models to Meet EU AI Act Requirements, PCI ’24: Proceedings of the 28th Pan-Hellenic Conference on Progress in Computing and Informatics, 2025, https://doi.org/10.1145/3716554.3716597.

[36] Oskar J. Gstrein, Noman Haleem and Andrej Zwitter, General-purpose AI Regulation and the European Union AI Act, Internet Policy Review 13 (3), 2024, https://doi.org/10.14763/2024.3.179.

[37] Artificial Intelligence Act, supra note 3, at Art. 53 (Obligations for providers of general-purpose AI models).

[38] Id. at Art. 54 (Authorised representatives of providers of general-purpose AI models).

[39] Id. at Art. 54(1): “Prior to placing a general-purpose AI model on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union.”

[40] Id. at Art. 95 (Codes of conduct for voluntary application of specific requirements).

[41] Matthias Wagner et al, AI Act High-Risk AI Compliance Challenge and Industry Impact: A multiple case study, Information and Software Technology 194 (2026), https://doi.org/10.1016/j.infsof.2026.108067.
