By X. Voidwriter

The European Commission has released a framework for protecting children from artificial intelligence on social media platforms. The framework proposes Smart Design, Smart Regulation, and an on-device AI system that parents and children configure together to protect children from harmful AI content delivered by AI systems. The AI that protects the children will run on a device built by the companies whose AI the children are being protected from.

The Commission is also building an app to verify the ages of users on digital platforms. The EU already operates the European Digital Identity Wallet (EUDI Wallet), which could technically verify ages. The Commission has built a separate app. The separate app is described as experimental. Experts have noted that the EUDI Wallet exists. The Commission has noted that the new app is a different app.

Professor Urs Gasser of the Technical University of Munich, formerly of Harvard's Berkman Klein Center, has spent twenty years studying whether children circumvent digital restrictions. His conclusion, published in Science: they do. He recommends Smart Design. He has written a paper. The platforms have noted the paper. The design has not changed.

Australia banned children under 16 from social media. The children moved to unregulated parts of the internet. The unregulated parts were also on the internet.


The Prompt has previously addressed these questions.

"AI in the Courtroom," published 4 April, examined the limits of algorithmic regulatory compliance. "Tax It, Ban It," published 9 April, surveyed the legislative reflex that reaches for prohibition before design. The Commission's framework was circulated on 23 April. The Commission did not cite its sources.

This publication also notes, for the record, a letter received from R. Null, a regular correspondent, previously published in this column. The letter read, in its entirety:

.

The Editor's response noted that the correspondent had raised "points we expect to see developed further. We are grateful."

The points have been developed further. By the European Commission. The Prompt thanks R. Null. The Prompt thanks the Commission.

The Commission President is understood to be a regular reader of this publication. The Prompt is pleased to note this. The Commission has not contacted The Prompt. The Prompt has not invoiced the Commission.

Not yet.


Let us trace the structure.

This is the child. The child is on the platform. The child has found a workaround. The workaround was suggested by an AI.

This is the AI that monitors the child on the platform. The AI was built by the platform. The platform built the AI to keep the child on the platform. The child is on the platform. The monitoring is working correctly.

This is the on-device AI that protects the child from the AI that keeps the child on the platform. The on-device AI was configured by the parents and the child together. The child configured it. The child knows how it works.

This is the EU framework that governs the on-device AI that protects the child from the platform AI that keeps the child on the platform that the child has already left for an unregulated corner of the internet.

This is the publication that described the framework before the Commission wrote it. The publication is called The Prompt.

This is the prompt.

The prompt is the instruction given to an artificial intelligence to initiate a task. The prompt determines the output. The output determines the policy. The policy governs the AI. The AI monitors the child.

Who writes the prompt?

The Prompt does. The Commission is reading.


Sources: Tagesschau, 23 April 2026; TU Munich / Prof. U. Gasser; EU Commission framework on AI child safety, circulated April 2026; eSafety Commissioner Australia, 2025; R. Null, correspondence, published in this column; prior coverage, this publication.