In Brussels, two policy tracks are currently in motion.

The first: a European Parliament coalition of conservatives, social democrats, and liberals has proposed a digital services tax for the EU's next multiannual financial framework, beginning in 2028. The argument is straightforward. The EU borrowed heavily during the pandemic. The interest charges are real. Without new revenue, something else must be cut. The technology companies operating in Europe are, by cross-party consensus, making substantial profits here while contributing comparatively little to the public budgets that sustain the market they operate in. A levy of two to three percent on their EU revenues is presented as proportionate.

The second: a cascade of member state proposals to restrict social media access for users below the age of fifteen or sixteen. Spain: under sixteen. Greece: under fifteen, from January 2027. Austria: under fourteen, draft legislation due by June. Denmark: under fifteen, with parental opt-down to thirteen. France has already passed a National Assembly vote at fifteen. Australia introduced an under-sixteen restriction in December 2025 and reported 4.7 million accounts deactivated within weeks. It also reported that circumvention was straightforward: users simply entered a different age.

These two tracks are proceeding simultaneously, in the same legislative environment, without apparent coordination. One is designed to extract revenue from digital platforms. The other is designed to reduce their user base. They have not been introduced to each other.

The Numbers

A digital services tax is levied on revenues. Revenues are a function of users, engagement, and advertiser demand. The fiscal interaction is therefore not difficult to model.

Analysis from Glasskugel Analytics estimates that an age restriction of fifteen or sixteen, applied across major EU member states with moderate enforcement, would reduce the taxable base for a two-to-three percent digital services tax by approximately zero point five to one point five percent at aggregate EU level. Under a strict, well-enforced regime converging toward sixteen across the main markets, the effect could reach two to three percent. Under weak or fragmented implementation -- which the Australian precedent suggests is the likelier outcome -- it would be considerably less.

The reason the number stays modest is structurally significant. Users aged thirteen to fifteen account for a disproportionate share of platform engagement -- estimated at eight to twenty percent of time spent on the major services. Their share of advertising revenue is considerably smaller, in the range of three to eight percent. Advertisers pay less for younger audiences. Targeting is restricted. Conversion rates are lower. The revenue index for the thirteen-to-seventeen cohort runs at approximately thirty to sixty percent of the rate for prime-age adults.
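The arithmetic behind those ranges can be reproduced in a few lines. The cohort revenue shares below are the three-to-eight percent range quoted above; the compliance fractions -- how many affected users actually leave -- are illustrative assumptions, since no one yet knows how well a ban would be enforced:

```python
# Sketch of the fiscal interaction described above: how much of a
# 2-3% digital services tax's taxable base an under-16 ban removes.
# Revenue shares use the quoted 3-8% range; compliance fractions are
# illustrative guesses, not figures from the analysis.

def base_reduction(cohort_revenue_share: float, compliance: float) -> float:
    """Fraction of taxable platform revenue removed by the ban."""
    return cohort_revenue_share * compliance

scenarios = {
    # (cohort revenue share, assumed compliance)
    "weak / fragmented (Australian pattern)": (0.05, 0.05),
    "moderate enforcement":                   (0.05, 0.20),
    "strict, converging on sixteen":          (0.08, 0.35),
}

for name, (share, compliance) in scenarios.items():
    cut = base_reduction(share, compliance)
    print(f"{name:40s} base shrinks by {cut:.2%}")

# The tax rate cancels out in relative terms: a 2-3% levy on a base
# that is ~1% smaller yields ~1% less revenue, whatever the rate.
```

Under these assumptions the moderate scenario lands around one percent and the strict scenario just under three -- consistent with the ranges in the analysis. The engagement share never enters directly: the tax sees only the monetised fraction.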

The platforms are, in other words, extracting engagement from young users at scale while monetising it at a discount. The EU's proposed tax lands on the monetisation. The proposed ban removes the engagement. These are not the same thing.

The Enforcement Problem

Excluding under-sixteens from social media requires verifying their age. This creates an immediate legal difficulty that none of the legislative proposals has resolved.

Effective age verification requires processing personal data: identity documents, biometric readings, or device-level credentials. A legal analysis of compatibility with EU data protection law is unambiguous on the core question: a fully compliant age-verification system that collects no personal data is, in practice, not currently achievable.

The General Data Protection Regulation mandates data minimisation. Age verification by identity document upload collects name, date of birth, document number, and in most implementations a photograph -- all in excess of what is needed to answer a binary question about birth year. Biometric verification engages Article 9, which restricts special category data processing to narrowly defined exceptions. The only architectures that approach compliance -- cryptographic age tokens, zero-knowledge proofs -- operate, in the analysis's assessment, in a grey zone of EU law.
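The age-token architecture mentioned above reduces, in principle, to a trusted issuer attesting a single bit. A toy sketch -- with HMAC standing in for the asymmetric signature a real system would use, and none of the revocation or anti-replay machinery such a system would need -- shows why it approaches data minimisation: the platform learns "over sixteen" and nothing else.

```python
# Toy sketch of the "cryptographic age token" idea: an issuer checks
# an identity document privately, then emits a claim containing only
# a boolean. The platform verifies the attestation without seeing a
# name, birth date, or document number.
# HMAC is used purely to keep this dependency-free; it conflates
# issuer and verifier keys, which a real deployment would separate
# with public-key signatures.

import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the issuer

def issue_token(is_over_16: bool) -> dict:
    """Issuer verifies an ID privately, then emits a minimal claim."""
    claim = json.dumps({"over_16": is_over_16,
                        "nonce": secrets.token_hex(8)})
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_accepts(token: dict) -> bool:
    """Platform learns exactly one bit about the user -- nothing else."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # forged or tampered token
    return json.loads(token["claim"])["over_16"]
```

Even this sketch illustrates the grey zone: someone still processes the identity document -- the minimisation only relocates the collection to the issuer, it does not eliminate it.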

Austria has announced that its system will verify ages without sharing personal data. It has not explained how. Spain, Greece, and Denmark have not addressed the mechanism. No legislative process to resolve the conflict between age verification requirements and the GDPR has been initiated in any of the relevant jurisdictions.

The EU has spent fifteen years constructing a data protection framework that restricts the collection of personal data at scale. It is now designing a child protection regime that requires exactly that. The conflict is not theoretical. It is foundational. It has not been named.

The Evidence

The legislative case for restricting access rests on a documented harm literature: anxiety, sleep disruption, compulsive use, exposure to harmful content, and social comparison. The policy argument treats this literature as settled. It is not.

A review of the evidence base prepared by Nullfield Research Ltd identifies several recurring methodological problems in the harm studies. Self-reported screen time is inaccurate; most research measures association rather than causation; distressed adolescents may increase social media use because they are distressed, which the data cannot distinguish from the reverse. Effect sizes in better-designed studies are typically small at population level.

Benefits are documented alongside harms: social connection for isolated or marginalised young people, access to peer support, identity formation, informal learning. The risk of digital exclusion -- from the peer communication that now largely takes place online -- does not appear in the legislative proposals. The evidence base is, in the review's summary, contested, nuanced, and highly dependent on context. The policy debate has not described it that way.

The Argument

The counter-argument can be assembled from publicly available material into a coherent position: age restrictions are economically marginal, practically unenforceable, and incompatible with existing EU law. A well-designed digital services tax is the correct and proportionate instrument. Investment in digital literacy and platform accountability for design choices -- addictive recommendation systems, infinite scroll, night-time notifications -- addresses the actual harm more directly than a blunt age threshold.

The argument uses child welfare language throughout.

The Endpoint

The EU Parliament's budget coalition wants digital services tax revenue. The member state governments want to remove under-sixteens from the platforms. If both policies succeed fully, the tax raises modestly less than projected. If neither succeeds -- which the enforcement record suggests is more likely -- the tax raises its projected amount and the children remain online.

The scenario that would produce the largest tax yield is the one neither side has proposed: unrestricted access, maximum engagement, full monetisation, levy applied to the revenue.

Someone has modelled the fiscal implications of that position. They did not publish it under that description.


The Prompt asked an AI to draft a five-minute parliamentary speech opposing social media age restrictions while supporting a digital services tax. The conclusion called for "coordinated European action on digital services taxation -- paired with strong commitments to digital literacy and child safety." The children came second.

Sources: Glasskugel Analytics GmbH, fiscal interaction analysis, April 2026. Nullfield Research Ltd, digital policy evidence review, April 2026. Frankfurter Rundschau, EU Parliament digital tax proposal, April 9, 2026. Tagesschau.de, national ban proposals, November 2025 -- April 2026. eSafety Commission (Australia), circumvention assessment, January 2026.