Editorial illustration of Italy map, EU stars, scales of justice, and AI circuit motif.
Italy becomes the first EU country with a comprehensive AI law.

Italy’s AI law has officially passed, making the country the first in the EU to adopt a comprehensive national framework for artificial intelligence. The legislation aligns with the EU AI Act but adds criminal penalties, child protection measures, and copyright clarifications—marking a landmark moment in Europe’s approach to AI.

The bill cleared parliament this week and now heads to publication in the Official Gazette (Gazzetta Ufficiale della Repubblica Italiana) before entering into force. The government casts it as a “human-centric, transparent and safe” framework that also nurtures innovation and privacy.

This is not just a box-ticking exercise to keep pace with Brussels. Rome’s law introduces jail time for harmful misuse, codifies additional protections around children’s access to AI, and designates national authorities to oversee the entire system—choices that will ripple through hospitals, schools, workplaces, and the media.

How we got here

Lawmakers drafted Italy’s AI law to bridge EU-level rules with domestic contexts. The law follows in the slipstream of the EU AI Act, which was formally published in July 2024 and is set to begin phasing in across the bloc. Italy’s approach aims to dovetail with that regulation, while filling gaps where national practice matters (labour law, education, healthcare).

On the politics: the Senate gave final approval on September 17, 2025, with 77 votes in favour, 55 against and two abstentions—a narrow but decisive green light.

Two agencies take point: the Agency for Digital Italy (AgID) and the National Cybersecurity Agency. Financial and market regulators (e.g., the Bank of Italy and Consob) retain their existing powers, creating a hub-and-spoke model rather than a brand-new super-regulator.

What’s actually in the law

Italy AI law parental consent icon showing child and parent with 14+ lock.
Children under 14 need parental consent to access AI systems.

Italy’s AI law introduces penalties for deepfakes, establishes consent requirements for individuals under 14, and outlines copyright limitations. The Italian parliament has bundled multiple themes—criminal liability, child protection, workplace transparency, and copyright—into a single package.

Each element is designed to reinforce the EU AI Act while plugging national gaps.

Here’s a closer look at the main pillars.

1) Criminal penalties for harmful misuse (deepfakes included).

Unlawful creation or dissemination of AI-generated/manipulated content that causes harm—think malicious deepfakes—can draw one to five years in prison. Penalties rise when AI is used to commit crimes such as fraud or identity theft. That’s a clear signal to would-be abusers that “it was the model” is not a defence.

2) Human oversight in sensitive settings.

Healthcare, education, justice, public administration, sports, and the workplace all fall under strict rules: traceability, transparency, and human-in-the-loop are required.

In workplaces, for example, employers must inform workers when AI is deployed—an essential step in ensuring fairness and accountability that goes beyond generic transparency promises.
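As an illustration only—the implementing decrees will define the actual requirements—an employer's disclosure obligation could be tracked as a structured audit record. Every name and field below is hypothetical:

```python
from dataclasses import dataclass, field, asdict
from datetime import date


@dataclass
class AIDeploymentNotice:
    """Hypothetical record of a worker-facing AI disclosure (illustrative only)."""
    system_name: str
    purpose: str
    affects_evaluation: bool   # does the system assess or monitor workers?
    notified_on: date
    workers_notified: list[str] = field(default_factory=list)

    def to_audit_entry(self) -> dict:
        # Serialise into a plain dict an auditor or works council could inspect.
        entry = asdict(self)
        entry["notified_on"] = self.notified_on.isoformat()
        return entry


notice = AIDeploymentNotice(
    system_name="shift-scheduling-model",
    purpose="Automated rota suggestions reviewed by a human manager",
    affects_evaluation=False,
    notified_on=date(2025, 10, 1),
    workers_notified=["works_council@example.it"],
)
print(notice.to_audit_entry()["notified_on"])  # → 2025-10-01
```

The point is less the data structure than the habit: disclosures that exist only as emails are hard to evidence; timestamped records are audit-ready.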

3) Kids’ access is gated.

Children under 14 will need parental consent to use AI systems and services. It’s a blunt instrument, but lawmakers argue it’s warranted given the speed and scale at which AI tools can shape behaviour, learning and media literacy.

4) Copyright clarifications.

Only AI-assisted works that stem from genuine human intellectual effort receive protection; “push-button” outputs don’t magically create new rights.

Separately, text and data mining with AI is limited to non-copyrighted content or authorised scientific research, thereby trimming some of the ambiguity that creators and platforms have wrestled with.

5) Money on the table.

The law authorises up to €1 billion from a state-backed venture capital fund for equity investments in companies active in AI, cybersecurity, quantum technologies, and telecommunications.

Supporters say it’s a starter boost; sceptics call it modest compared with U.S. and Chinese spending. Both can be true.

What officials are saying

Alessio Butti, the undersecretary for digital transformation, frames the law as aligning innovation with citizen protections:

This (law) brings innovation back within the perimeter of the public interest, steering AI toward growth, rights and full protection of citizens.

Alessio Butti (Undersecretary for Digital Transformation)

Prime Minister Giorgia Meloni has argued that “there can and must be an Italian way” to develop and govern AI—one grounded in ethical rules and respect for people’s rights. That vision now has legislative teeth. In her words:

There can and must be an Italian way when it comes to artificial intelligence, an Italian way to develop artificial intelligence and an Italian way to govern artificial intelligence.

Giorgia Meloni, Prime Minister of Italy

AI can only reach its full potential if it is developed within a framework of ethical rules that focus on people and their rights and needs.

Giorgia Meloni, Prime Minister of Italy

Why this matters (beyond Italy)

For EU compliance teams: Italy’s law effectively adds a national compliance layer on top of the EU AI Act. If you operate in hospitals, schools, HR tech, fintech, or public services, expect documentation, human oversight, and impact assessment demands to tighten—earlier and more concretely than in some other member states.

For platforms and media companies: The deepfake provisions raise the stakes for content provenance and incident response. Policies regarding labelling, takedown speed, and appeals will need to be audit-ready—especially where harm can be demonstrated. Think robust detection pipelines and cooperation with national authorities.
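To make that concrete, here is a minimal sketch of the kind of pre-publication policy check a platform might run; the metadata keys are invented for illustration, and real provenance metadata would follow a standard such as C2PA rather than ad-hoc flags:

```python
def needs_disclosure(meta: dict) -> bool:
    """Illustrative policy check: flag AI-generated media queued for
    publication without a visible AI label. Keys are hypothetical."""
    return bool(meta.get("ai_generated")) and not meta.get("ai_label_shown", False)


# A toy moderation queue: one unlabeled AI clip, one labeled, one human-shot.
queue = [
    {"id": "clip-1", "ai_generated": True},                          # unlabeled: flag it
    {"id": "clip-2", "ai_generated": True, "ai_label_shown": True},  # labeled: ok
    {"id": "clip-3", "ai_generated": False},                         # human footage: ok
]

flagged = [m["id"] for m in queue if needs_disclosure(m)]
print(flagged)  # → ['clip-1']
```

In practice the check would sit alongside detection pipelines and takedown workflows, but the audit question is the same: can you show, per item, that the label decision was made and logged?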

For creators and publishers: The copyright language nudges the ecosystem toward human-in-the-loop creativity. If a work is meaningfully human-directed, it’s protectable; purely machine-spun outputs aren’t. Regarding training data, text-and-data-mining (TDM) limits may prompt firms to opt for licensed corpora and research carve-outs, providing clear paper trails. Expect more contracts that specify data sources and include opt-out options.

For workplaces and unions: Italy is formalising the right to know when AI monitors or assesses you, before European-wide norms take full effect. That will influence vendor selection (explainability beats black boxes), worker councils, and HR analytics tooling.

For families and schools, the under-14 consent rule will require age-gating and parental controls in mainstream AI services, not just children’s apps. Expect clearer UX around consent flows and data use, and a bigger role for schools in digital literacy curricula.
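The under-14 gate reduces, at its core, to a simple age-plus-consent check. The sketch below is a hypothetical illustration of that logic (function names and the fixed reference date are ours, not the law's):

```python
from datetime import date

CONSENT_AGE = 14  # threshold for parental consent under Italy's new law


def age_on(birth: date, today: date) -> int:
    """Full years elapsed between birth date and the given day."""
    had_birthday = (today.month, today.day) >= (birth.month, birth.day)
    return today.year - birth.year - (0 if had_birthday else 1)


def may_access(birth: date, parental_consent: bool, today: date) -> bool:
    """Under-14 users need recorded parental consent; 14 and over pass through."""
    return age_on(birth, today) >= CONSENT_AGE or parental_consent


ref = date(2025, 9, 17)  # fixed reference date for reproducible examples
print(may_access(date(2013, 1, 1), parental_consent=False, today=ref))  # → False (age 12)
print(may_access(date(2013, 1, 1), parental_consent=True, today=ref))   # → True
print(may_access(date(2010, 9, 1), parental_consent=False, today=ref))  # → True (age 15)
```

The hard part for services won't be the arithmetic but verifying the birth date and recording the consent in a way regulators will accept.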

Symbolic image of Italy AI law copyright rules, human creativity with AI assistance
Only human + AI-assisted works receive copyright protection under the new law

The fine print and the timeline

Italy’s measure is a framework (“legge delega”): it sets principles and empowers the government to issue implementing decrees that will outline many operational details.

After publication in the Gazzetta Ufficiale, the law enters into force following a standard 15-day period; key sectoral rules will then be fleshed out through those decrees.

Translation: Some obligations take effect now, while others will be phased in as secondary legislation is implemented.

If you’re mapping regulatory risk, passing the law is only the beginning. Italy has set a staged rollout, and four milestones will define how quickly obligations hit companies, schools, and public bodies:

  1. Official publication date — triggers the 15-day countdown before the law formally takes effect.
  2. Implementing decrees — concrete tests and thresholds for sensitive sectors, including healthcare, education, and employment.
  3. Agency guidance — AgID and the National Cybersecurity Agency will publish technical rules on audits, data governance, and incident reporting.
  4. EU AI Act phasing — Italy’s national rules must stay aligned with the EU timeline to avoid conflicts and duplication.

The big picture

Italy is betting that credible guardrails are built deliberately, not bolted on after the fact. By pairing clear duties (disclosure, documentation, and keeping a human in the loop) with real teeth (criminal penalties) and a funding signal, Rome is setting a template that other member states can localise without colliding with the EU AI Act. For global companies, that means designing governance that scales horizontally across borders and vertically from EU-level rules to national overlays.

There will be test cases—on what counts as “genuine intellectual effort” for copyright, how to measure “harm” in deepfake prosecutions, and where age-gating meets free expression. But as a directional marker, Italy’s move is clear: safety, transparency, and human control are not optional extras. They are the price of operating at scale in Europe.