
AI ACT – What Is It?

Hanna Pizano Galus

The AI Act is an EU regulation that sets out the rules for designing, placing on the market, and using artificial intelligence systems in business and public activity. It introduces a risk-based approach and imposes obligations on both AI developers and organizations using ready-made AI tools. The regulation is especially important for companies in HR, e-commerce, and other areas where AI influences decisions, users, and operational processes.

AI Act – legal nature, scope of regulation, and significance for business operations and the use of AI systems in commercial activity

The AI Act (Artificial Intelligence Act) is a key European Union regulation on artificial intelligence, introducing a uniform legal framework for the design, placing on the market, and use of AI systems.

This regulation sets out the rules governing the operation of artificial intelligence systems across the entire internal market, establishing a consistent regulatory model for all Member States.

It establishes a comprehensive legal regime covering:

  • the placing of AI systems on the market,

  • putting them into service,

  • their actual use in commercial and public activity.

The AI Act is not a regulation of a purely technical nature.

It serves as a systemic reference point for the responsible use of artificial intelligence, based on risk management, compliance assurance, and control of the impact of AI systems on individuals and the market.

What is the AI Act and why does it matter?

The AI Act (Artificial Intelligence Act, also referred to as the EU AI Act) is a European Union regulation establishing harmonized rules on artificial intelligence.

This regulation:

  • introduces a uniform legal framework for AI systems across the European Union,

  • defines the scope of application of the regulation and the obligations of different categories of entities,

  • focuses on the protection of fundamental rights, safety, and the public interest,

  • establishes compliance and oversight mechanisms for AI systems.

The scope of the regulation covers a broad spectrum of artificial intelligence applications — from simple decision-support systems to advanced models used at scale.

The aim of the AI Act is to ensure that the development and use of artificial intelligence take place in a manner that is:

  • safe,

  • transparent,

  • consistent with the values and legal order of the European Union.

The EU AI Act – how does it work in practice?

The AI Act introduces a regulatory model based on risk assessment (the risk-based approach).

This means that artificial intelligence systems are classified depending on the level of risk they may pose, in particular to:

  • natural persons,

  • fundamental rights,

  • safety,

  • the functioning of the market.

Within this approach, four main categories of AI systems are distinguished:

  • prohibited systems (banned practices),

  • high-risk AI systems,

  • limited-risk systems,

  • minimal-risk systems.

The classification of a system into a given category determines the scope of regulatory obligations imposed on the entities involved in its creation and use.

Consequently, the AI Act does not introduce a uniform regime for all artificial intelligence systems, but instead makes the intensity of regulation dependent on the level of risk generated by a given system.
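The tiered model described above can be sketched as a small lookup. Note that the use-case names and their tier assignments below are illustrative assumptions only, not legal classifications; actual classification depends on the regulation's annexes and requires legal analysis:

```python
from enum import Enum

class RiskTier(Enum):
    """The four main risk categories distinguished by the AI Act."""
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical mapping of use cases to tiers, for demonstration only.
USE_CASE_TIER = {
    "social_scoring": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH,
    "creditworthiness_assessment": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a known use case."""
    return USE_CASE_TIER[use_case]

print(classify("recruitment_screening").value)  # high-risk
```

The point of the sketch is structural: the tier a system falls into, not the system's technology, drives the scope of the obligations that apply to it.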

AI systems – definition and scope of regulation

The AI Act introduces an autonomous definition of an “artificial intelligence system,” covering systems capable of:

  • generating outputs in the form of predictions, recommendations, or decisions,

  • influencing the digital or physical environment,

  • operating with a certain degree of autonomy.

This definition is functional and technologically neutral, which means it covers a broad range of AI-based solutions.

In practice, this results in the regulation covering a wide variety of systems used in commercial and public activity — from communication tools (e.g. chatbots) to decision-support systems, including scoring systems.

General-purpose AI models – a separate regulatory regime

One of the important elements of the AI Act is the introduction of rules concerning general-purpose AI models.

These are models that:

  • demonstrate a high degree of generality in their applications,

  • can be used in different contexts and for different purposes,

  • form the basis for the creation of many lower-level AI systems.

This category includes in particular:

  • language models,

  • generative systems,

  • tools used for data analysis.

The significance of this category stems from the fact that general-purpose models:

  • may generate systemic risks,

  • operate at large scale,

  • are used by many providers and deployers of AI systems.

As a consequence, the AI Act imposes separate regulatory obligations on this category of models, including — in the case of models with systemic risk — more advanced requirements for risk management and transparency.

AI Office – oversight mechanisms at the European Union level

The AI Act provides for the establishment of the European AI Office, operating within the European Commission.

The AI Office serves as a coordinating body at the EU level, and its tasks include in particular:

  • supporting the implementation and application of the AI Act,

  • coordinating actions between Member States,

  • supporting the uniform interpretation of the rules,

  • monitoring the development of AI systems and models, including general-purpose models.

This institution plays an important role in ensuring consistent application of the regulation across the European Union and in shaping supervisory practice in the field of artificial intelligence.

Obligations for organizations – what does the AI Act change?

The AI Act introduces a range of obligations for entities involved in the creation and use of artificial intelligence systems, with their scope depending on the role of the entity and the level of risk associated with the system in question.

The core obligations include in particular:

  • conducting risk assessments of AI systems,

  • implementing appropriate technical and organizational measures,

  • ensuring systems comply with regulatory requirements,

  • fulfilling obligations related to documentation and — in certain cases — registration of systems,

  • monitoring the operation of AI systems,

  • ensuring appropriate human oversight over the functioning of AI systems.

The scope and intensity of these obligations differ depending on the category of AI system and its use.

In practice, this regulation results in the need for organizations to implement structured mechanisms for managing the use of artificial intelligence, including identifying AI systems, assessing risk, and ensuring compliance with the law.

High-risk AI systems – specific regulatory requirements

Artificial intelligence systems classified as high-risk systems are subject to the strictest regulatory regime provided for in the AI Act.

This category includes systems used in areas identified in the regulation, in particular:

  • employment and workforce management (including recruitment processes),

  • access to financial services, including creditworthiness assessment,

  • education and training,

  • the operation of critical infrastructure.

With regard to these systems, the AI Act establishes a number of detailed requirements, including in particular:

  • maintaining extensive technical documentation,

  • implementing a system for monitoring the operation of the AI system,

  • ensuring appropriate human oversight,

  • using data that meets specified quality standards,

  • identifying and mitigating the negative impact of the system on individuals and their fundamental rights.

These requirements are intended to ensure that high-risk AI systems are designed and used in a safe, transparent manner and in accordance with existing standards for the protection of individual rights.

Exemptions and exceptions

The AI Act provides for certain exclusions from the scope of the regulation, relating to specific categories of AI systems and stages of their development.

These exclusions include in particular:

  • AI systems used exclusively for research and scientific purposes,

  • systems at the stage of design, testing, or development (before being placed on the market or put into service),

  • specific cases related to the use of AI models where special regulatory rules apply.

However, the scope of these exclusions is limited and does not cover typical commercial uses.

As a result, most AI systems used in business activity are subject to the obligations arising from the AI Act.

Compliance – a new business standard

The AI Act does not introduce the concept of compliance as such, but gives it particular importance in the context of artificial intelligence systems, making it one of the key elements of their design and use.

Compliance with the regulation is not a one-time exercise, but a continuous process covering the entire lifecycle of the AI system.

In particular, this means the need to:

  • continuously monitor the operation of AI systems,

  • ensure their ongoing relevance and adequacy over time,

  • maintain control over the way systems function and are used,

  • manage risk at every stage of the AI system lifecycle.

As a result, the AI Act requires organizations to implement lasting compliance mechanisms that go beyond a one-time fulfilment of regulatory requirements.

Non-compliance with the AI Act – real consequences

Failure to comply with the provisions of the AI Act may result in administrative and punitive measures.

The most significant consequences include in particular:

  • the imposition of administrative fines,

  • supervisory measures, including restrictions on or prohibitions of placing AI systems on the market or using them,

  • negative reputational effects, including loss of trust among contractors and users.

The sanctions system provided for in the AI Act is similar to the model adopted in the General Data Protection Regulation (GDPR), but its subject matter concerns artificial intelligence systems and their impact on decisions and the legal position of individuals, rather than only the processing of personal data.

AI Act and the use of AI systems in employer–employee relations

The AI Act has significant application in the area of using artificial intelligence systems in employment relationships, in particular in the field of:

  • recruitment processes,

  • employee evaluation and performance assessment,

  • workforce management and work organization.

AI systems used in these areas are, as a rule, classified as high-risk systems, which entails the imposition of specific regulatory obligations.

As a consequence, entities using such systems are required to:

  • ensure an appropriate level of oversight over the operation of AI systems,

  • guarantee transparency towards persons affected by decisions made or supported by AI,

  • take into account the impact of AI systems on fundamental rights, in particular employee rights.

This regulation strengthens the importance of responsible and controlled use of artificial intelligence in the area of human resources management.

Artificial Intelligence Act – strategic significance

The Artificial Intelligence Act goes beyond the classic understanding of sector-specific regulation, constituting an instrument of major importance for market functioning and the development direction of artificial intelligence technologies.

This regulation:

  • creates conditions for building competitive advantage based on lawful use of AI,

  • sets the framework for responsible and safe technological development,

  • contributes to structuring the rules for the use of AI systems in commercial and public activity.

At the same time, the AI Act constitutes:

  • a comprehensive legal act regulating artificial intelligence systems,

  • a regulatory model based on risk assessment,

  • a system of obligations for entities involved in the creation and use of AI,

  • a mechanism for protecting fundamental rights and ensuring the safety of the internal market.

From a functional perspective, the AI Act shifts the focus from the mere use of technology to responsibility for how it works and its impact on individuals and the market environment.

Paweł Choła – how he organized AI at a scale that generates real risk (and advantage)

Who is Paweł Choła and at what level does he operate?

Paweł Choła operates in an environment where AI is not an add-on — it is business infrastructure.

In the analyzed case, operational activity includes in particular:

  • servicing large online platforms,

  • managing extensive e-commerce structures,

  • implementing AI-based solutions in various market segments.

This type of activity involves the use of:

  • many AI systems operating simultaneously,

  • significant volumes of user data,

  • processes having a direct impact on consumer decisions, including product selection, pricing, and shaping user experience.

As a consequence, AI systems become part of the enterprise’s decision-making infrastructure, significantly influencing its functioning and business results.

AI is working, but no one is supervising it

Before implementing an AI Act-aligned approach, the activity was characterized by a high degree of use of artificial intelligence tools, while lacking coherent management and oversight mechanisms.

In particular, the following problems were identified:

  • the use of many AI systems from different providers without a uniform approach to integration and control,

  • a lack of coherent documentation concerning the solutions used,

  • no clearly defined rules for the use of AI systems in business processes,

  • no systemic assessment of the impact of AI on users and on the functioning of the enterprise.

As a result, AI systems were implemented and developed dynamically, but without adequate control and risk management mechanisms.

This reflected a typical situation in which technological development outpaces the implementation of appropriate organizational and regulatory structures.

Turning point: awareness of risk

At a certain stage of operations, an analysis was conducted of the impact of the AI systems used on business processes and compliance with existing and proposed regulations.

This analysis revealed significant shortcomings in AI system governance, in particular:

  • a lack of full knowledge of the scope and manner in which AI systems were being used within the organization,

  • a lack of coherent control and oversight mechanisms over their operation,

  • a lack of organizational preparedness to meet the requirements arising from the AI Act.

As a consequence, significant regulatory risks associated with the use of artificial intelligence systems were identified, covering both potential regulatory breaches and the impact on user rights and the functioning of the enterprise.

AI as a governance area, not only technology

Paweł changed the approach:

❌ AI as a marketing tool
✅ AI as an area of risk management and accountability

The following model was implemented:

“AI under control = business under control”

Stage 1: Identification and mapping of AI systems

The first step was to conduct a comprehensive identification of the artificial intelligence systems used in the organization.

In particular, the following were identified:

  • customer service systems (e.g. chatbots),

  • recommendation systems,

  • dynamic decision-making systems (e.g. pricing systems),

  • customer segmentation systems.

The result of this stage was obtaining a complete picture of AI use within the organization, which formed the basis for further actions in risk management and compliance.

Stage 2: Risk classification

Next, an analysis was carried out of individual AI systems in terms of:

  • their impact on users,

  • their impact on consumer decisions,

  • the potential risk of violating fundamental rights and economic interests.

In line with the AI Act’s risk-based approach, an initial classification of the systems was made, differentiating the scope of oversight and obligations depending on the nature of their application.

Stage 3: Implementation of transparency principles

The next stage was the introduction of solutions increasing the transparency of AI system use.

These measures included in particular:

  • informing users about the use of AI systems,

  • implementing clear notices in user interfaces (e.g. in the case of chatbots),

  • updating documents governing data processing and the use of services,

  • increasing transparency concerning the systems’ functions and their impact on the user.

These actions helped reduce regulatory risk and increase user trust.

Stage 4: Ensuring human oversight

Mechanisms were introduced to ensure appropriate human oversight over AI systems.

In particular, the following were implemented:

  • monitoring of system operation and outputs,

  • the possibility for human intervention in case of irregularities,

  • analysis of anomalies in system operation,

  • control over the impact of AI systems on key business decisions.

The aim of these actions was to reduce the risk of decisions being made in an uncontrolled or unpredictable manner.

Stage 5: Documentation and compliance assurance

The final stage was the organization of documentation and the implementation of mechanisms ensuring compliance with the AI Act.

In particular:

  • a register of AI systems used was created,

  • risk management procedures were developed,

  • policies on the use of artificial intelligence were implemented,

  • mechanisms for monitoring compliance with the regulation were established.

The result was a shift from fragmented and unsystematic use of AI to a model based on structured governance mechanisms.
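A register of the kind described in Stage 5 can be sketched as a simple data structure. All field names, vendors, and records below are hypothetical, shown only to illustrate what an internal inventory with a basic gap check might look like:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an internal register of AI systems (illustrative fields)."""
    name: str
    provider: str
    purpose: str
    risk_tier: str                  # e.g. "high", "limited", "minimal"
    owner: str                      # person accountable inside the organization
    last_risk_review: date
    human_oversight: bool = False
    notes: list = field(default_factory=list)

register = [
    AISystemRecord("support-chatbot", "VendorX", "customer service",
                   "limited", "Head of CX", date(2024, 11, 1),
                   human_oversight=True),
    AISystemRecord("cv-screener", "VendorY", "recruitment pre-selection",
                   "high", "Head of HR", date(2024, 9, 15)),
]

# Flag high-risk systems that still lack documented human oversight.
gaps = [r.name for r in register if r.risk_tier == "high" and not r.human_oversight]
print(gaps)  # ['cv-screener']
```

Even a minimal register like this makes two things explicit that fragmented AI use tends to hide: who is accountable for each system, and where oversight obligations are not yet met.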

Implementation results

The implementation of the described measures produced measurable results on several levels.

At the operational level:

  • increased stability of AI systems,

  • greater predictability of their functioning,

  • reduction of scalable errors.

At the legal level:

  • increased organizational readiness to meet AI Act requirements,

  • reduced risk of regulatory sanctions,

  • gaining control over how AI systems are used.

At the strategic level:

  • the use of AI as a consciously managed asset,

  • increased decision-making efficiency,

  • strengthened competitive position of the organization.

What sets Paweł Choła apart?

The case presented illustrates the difference between fragmented use of artificial intelligence and an approach based on systematic governance.

In practice, many organizations:

  • use AI systems in a dispersed and uncoordinated manner,

  • do not conduct a comprehensive analysis of the risks associated with their use,

  • do not implement coherent governance and oversight mechanisms for AI systems.

A different approach consists in treating artificial intelligence as part of the enterprise’s decision-making infrastructure, which involves:

  • implementing risk management and compliance principles,

  • assigning responsibility for the operation of AI systems,

  • preparing the organization to meet regulatory requirements.

The key conclusion is that lack of control over the use of AI leads to significant regulatory and operational risks, whereas implementing structured governance mechanisms makes it possible to reduce those risks and increase organizational effectiveness.

In particular, organizations that:

  • use AI in processes affecting user decisions,

  • operate at larger scale,

  • do not have full control over the AI systems they use,

urgently need to bring order to their use of AI.

The AI Act does not impose an obligation to use artificial intelligence, but it does impose obligations regarding its design and use. As a result, it forces organizations to gain a deeper understanding of how AI systems operate and how they affect individuals and the market.

Entities that implement governance and compliance mechanisms early enough can at the same time:

  • reduce regulatory risk,

  • organize business processes,

  • strengthen their competitive position.

What is the AI Act again, in short?

The AI Act is a European Union regulation governing the principles for designing, placing on the market, and using artificial intelligence systems in commercial and public activity.

In particular, the regulation defines:

  • the conditions under which AI systems may be created and used,

  • the criteria for assessing the risks associated with their use,

  • the scope of responsibility of entities involved in their design and use.

The AI Act covers the entire lifecycle of artificial intelligence systems — from the design stage, through deployment, to their use and monitoring.

The main purpose of the regulation is:

  • to ensure safety,

  • to protect fundamental rights,

  • to establish uniform rules for the functioning of the market within the European Union.

Who does the AI Act apply to?

The AI Act applies to all entities that come into contact with artificial intelligence in business activity.

In particular, it covers:

  • companies that create AI systems,

  • companies that implement or purchase ready-made AI solutions,

  • organizations that use AI in their processes (e.g. in marketing, HR, or sales),

  • also entities from outside the European Union, if their AI systems are used on the EU market.

In practice, this means the regulation does not apply only to creators of technology.

Obligations may also apply to entities that:

  • use ready-made AI tools,

  • do not create their own models,

  • use AI as support in everyday operations.

Which AI systems are subject to regulation?

The AI Act covers a very broad range of systems based on artificial intelligence.

In practice, this includes, among others:

  • chatbots and virtual assistants,

  • recommendation systems (e.g. suggesting products or content),

  • systems that automatically make decisions (e.g. regarding prices or offers),

  • customer assessment systems (so-called scoring),

  • tools used in HR (e.g. in recruitment or employee evaluation),

  • generative models (e.g. creating texts, images, or analyses).

This means that the regulation does not concern only advanced technologies, but also many tools used in everyday business.

How does the risk-based approach work?

The AI Act classifies artificial intelligence systems depending on the level of risk they may pose.

Four main categories are distinguished:

Prohibited systems (banned practices)
These are systems whose use is completely prohibited because they violate fundamental rights or impermissibly influence users’ behaviour (e.g. manipulation of decisions).

High-risk systems
These include applications that may significantly affect people’s lives, for example in recruitment, education, or creditworthiness assessment.
They are subject to the strictest requirements, including documentation, oversight, and control obligations.

Limited-risk systems
In their case, the main obligation is to ensure transparency, for example informing the user that they are interacting with AI (e.g. a chatbot).

Minimal-risk systems
These are systems with little impact on the user and, as a rule, are not subject to specific regulatory obligations.

The greater the impact of AI on a person, the greater the obligations.

Does the AI Act apply to ChatGPT and generative models?

Yes — the AI Act also covers generative models, such as systems similar to ChatGPT.

The regulation introduces special rules for so-called general-purpose AI models.

These are models that:

  • can be used in many different applications,

  • have a broad scope of use (they are not created for just one purpose),

  • form the basis for other systems and applications.

This category includes, among others:

  • language models (e.g. generating text),

  • image generators,

  • systems used for data analysis.

In practice, this means the regulation covers not only specific applications, but also the technologies that form their foundation.

What is the AI Office?

The AI Office (European AI Office) is a unit operating within the European Commission, established under the AI Act.

Its task is to support the implementation and application of artificial intelligence rules throughout the European Union.

In particular, the AI Office:

  • coordinates actions between Member States,

  • supports the uniform interpretation of the rules,

  • monitors the development and use of AI systems,

  • participates in supervision over the application of the regulation at EU level.

In practice, this means that the AI Office plays an important role in ensuring consistent application of the AI Act and in shaping how the rules are implemented in different countries.

What obligations does the AI Act impose?

The AI Act introduces a range of obligations related to the design and use of artificial intelligence systems. Their scope depends on the type of system and the level of risk it generates.

The most important include:

  • carrying out risk assessments related to the operation of AI systems,

  • maintaining appropriate documentation,

  • implementing technical and organizational measures ensuring safety,

  • ensuring transparency toward users,

  • introducing human oversight over the operation of AI systems,

  • ensuring compliance with regulatory requirements.

In practice, this means that artificial intelligence systems cannot be used in an uncontrolled manner — they require conscious management, monitoring, and oversight.

AI must be managed — not just used.

Does the AI Act apply to small businesses?

Yes — the AI Act applies to both large enterprises and small businesses.

The regulation is not based on the size of the organization, but on:

  • the type of AI systems used,

  • the level of risk generated by those systems.

In practice, this means that even a small company may be subject to significant obligations if it uses AI in sensitive areas such as recruitment or employee evaluation.

On the other hand, a large organization using only simple AI tools may be subject to much less stringent requirements.

What are the penalties for non-compliance with the AI Act?

Failure to comply with the AI Act may lead to serious legal and financial consequences.

In particular, these may include:

  • high administrative fines,

  • orders restricting or discontinuing the use of AI systems,

  • other supervisory measures imposed by competent authorities.

The level of fines may be very significant and — depending on the type of infringement — may reach tens of millions of euros or a certain percentage of the company’s annual turnover.

The sanctions model is similar to the one known from the GDPR, but it concerns the use of artificial intelligence systems and their impact on users.

Does the AI Act ban AI?

No — the AI Act does not ban the use of artificial intelligence as such.

The regulation only prohibits certain uses of AI that are considered unacceptable due to their impact on fundamental rights and safety.

Prohibited practices include in particular:

  • AI systems using manipulative techniques in a way that may influence users’ decisions,

  • so-called social scoring systems, assessing individuals based on their behaviour or characteristics,

  • certain uses of biometric systems, in particular in the context of identifying persons in public spaces (with certain exceptions).

In practice, this means that the AI Act does not restrict the development of artificial intelligence, but sets boundaries for its use in cases considered particularly risky.

How can a company prepare for the AI Act?

Preparing an organization for the AI Act requires a structured approach to the use of artificial intelligence in business activity.

The first step is to understand where and how AI systems are used in the company.

In practice, the preparation process includes in particular:

  • identifying all AI systems used within the organization,

  • determining their impact on users and business processes,

  • carrying out a risk assessment related to their operation,

  • analyzing the level of compliance with regulatory requirements,

  • implementing oversight mechanisms for the functioning of AI systems.

This approach is often referred to as AI Act readiness.
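As a rough illustration, the preparation steps above can be tracked as a simple checklist. The step names and the scoring are assumptions for demonstration, not anything prescribed by the regulation:

```python
# Hypothetical readiness checklist mirroring the preparation steps above.
READINESS_STEPS = [
    "inventory_complete",       # all AI systems identified
    "impact_assessed",          # impact on users and processes determined
    "risk_assessed",            # risk assessment carried out
    "compliance_gap_analysis",  # gap analysis against the AI Act done
    "oversight_in_place",       # human oversight mechanisms implemented
]

def readiness_score(done: set) -> float:
    """Fraction of preparation steps completed (0.0 to 1.0)."""
    return len(done & set(READINESS_STEPS)) / len(READINESS_STEPS)

print(readiness_score({"inventory_complete", "risk_assessed"}))  # 0.4
```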

Does the AI Act apply to employees and HR?

Yes — the AI Act is highly relevant to the use of artificial intelligence in HR.

This applies in particular to AI systems used in:

  • recruitment,

  • employee evaluation,

  • performance management and work organization.

In many cases, such systems are classified as high-risk systems because they may directly affect the professional situation of employees or candidates.

In practice, this means the need to:

  • ensure appropriate oversight over the operation of AI systems,

  • guarantee transparency toward employees and candidates,

  • take into account the impact of AI systems on fundamental rights.

Does the AI Act affect e-commerce?

Yes — the AI Act has direct significance for e-commerce activity.

Artificial intelligence is widely used in this area, among others for:

  • recommending products,

  • dynamic pricing,

  • analyzing customer behaviour and preferences.

Because these systems influence users’ purchasing decisions, they are subject to risk assessment and obligations under the AI Act.

In practice, this means the need to ensure an appropriate level of transparency, control, and compliance with the regulation.

If it influences users’ decisions, it is subject to regulation.

Is AI documentation required?

Yes — the AI Act imposes an obligation to maintain documentation concerning artificial intelligence systems.

The scope of this documentation includes in particular:

  • a description of how the AI system works and its intended purpose,

  • information on how it is used,

  • an assessment of the risks associated with its functioning,

  • data necessary to demonstrate compliance with regulatory requirements.

The scope and level of detail of the documentation depend on the type of system and the level of risk.

In practice, this means that a lack of proper documentation may make it impossible to demonstrate compliance with the AI Act.

Does the AI Act concern only technology?

No — the AI Act does not concern only technology as such.

The regulation also covers the way AI systems are used within an organization, including:

  • business processes in which AI supports or automates decisions,

  • decisions made using AI systems,

  • responsibility for the way these systems operate and are used.

In practice, this means that the AI Act matters not only to technical departments, but also to management boards and people responsible for decision-making within the organization.

👉 The AI Act changes the way we think about AI:

❌ AI as a tool
✅ AI as an area of accountability

Companies that:

  • understand their AI systems,

  • implement compliance principles,

  • take control over AI-driven decisions

👉 will not only meet regulatory requirements,
👉 but also build a real market advantage.

The AI Act is not theory. It is a decision you need to make now.

You can continue to:

  • use AI “by feel,”

  • assume it does not apply to your company yet,

  • put the topic off until later.

But the reality is simple:

👉 AI is already influencing decisions in your organization today.
👉 The AI Act means you are starting to become responsible for it.

The real question is not:
“Do we have AI?”

But rather:

  • Do we know where AI influences decisions?

  • Do we have real control over it?

  • Are we able to defend how it operates?

If you do not have a clear answer to that —
this is not a technology issue.

👉 It is a matter of business risk and management accountability.

AI under control = advantage
AI out of control = a problem that is still ahead

Companies that act today:

  • organize their AI systems,

  • implement compliance principles,

  • build governance,

👉 are not just “meeting regulations”

👉 they are taking control over what truly drives their business

You do not need theory. You need clarity.

It is not about knowing the entire AI Act.

It is about knowing:

  • where you are today,

  • what risks you have,

  • what exactly needs to be done in your organization.

Take the first step — consciously

If you want to:

  • map AI in your company,

  • understand the level of risk,

  • assess actual compliance with the AI Act,

  • organize the chaos around AI,

👉 start with a conversation that gives you specifics — not theory

Book a consultation

During the consultation:

  • we go through your business processes,

  • we identify where AI influences decisions,

  • I point out specific risks and gaps,

  • you receive a clear action plan.

No generalities. No slides. No wasted time.

👉 If AI is part of your business — this is the moment to take control over it.

Because the question is no longer:

“Should we implement AI?”

But:

👉 “Do you have control over it — before someone asks you about it?”
