

Over the past few years, there has been a noticeable push towards tighter regulations for online safety across many countries. One of the most prominent examples is the UK’s Online Safety Act, signaling a growing focus on establishing a secure online environment. As these regulations take effect, businesses will be required to conduct more thorough age, income, and identity verification to stay ahead of the curve.
So how do these online safety acts translate into obligations for your business? To learn more about these measures and how to implement them, keep reading!
Key Takeaways
- UK Online Safety Act enforcement is now active, with Ofcom able to fine up to 10% of global turnover or £18M. Businesses must implement robust age assurance for adult content and prepare child protection transparency reports.
- Mandatory age verification under the OSA requires certified third-party checks, liveness detection, or privacy‑friendly facial age estimation to prevent minors from accessing restricted content.
- EU Digital Services Act compliance includes annual risk assessments, cross‑border enforcement powers, and a complete ban on targeted ads to minors since October 2025.
- EU AI Act obligations classify OCR, ID verification, and biometric recognition tools as high‑risk AI, requiring conformity assessments, logging, and post‑market monitoring, plus detectable watermarking for all synthetic media in 2026.
- Klippa DocHorizon helps businesses meet these regulations with automated age and identity verification, fraud detection, and AI governance tools, reducing risk, ensuring compliance, and protecting user trust across jurisdictions.
Key Regulatory Updates (January 2026)
Since 2023, the UK, EU, and other jurisdictions have introduced major online safety regulations. As of January 2026, several of these frameworks have moved from guidance to active enforcement, with substantial financial penalties for breaches. Below, you’ll find the original regulation summaries plus the latest compliance updates.
UK Online Safety Act (OSA)
The Online Safety Act (OSA) is a United Kingdom regulation passed in 2023. It focuses on ensuring a safe environment for user-to-user services, including social media applications, online chat applications, and even gaming applications.
These are the main regulatory aspects behind OSA:
- Internet platforms must put policies in place that not only block harmful content, but also allow users to report it
- Search engine providers must maintain transparency at all times, informing users of the protection systems in place
- Search engine providers and internet platforms must provide additional information on the higher standard of protection applied to children
Essentially, this measure aims to put in place regulations that protect users from harmful content and that foster a safe and positive online environment. This is how the Online Safety Act translates into business implications:
- Enhanced platform security: Setting additional security checks, such as 2FA or document verification, will keep your platform secure from phishing or unauthorized third parties entering your system.
- Ensuring a civil and positive digital environment: Adhering to these measures makes your business responsible for keeping a positive digital environment by reporting and banning harmful or inappropriate content.
Latest Updates – January 2026
Source: Ofcom & UK Government OSA Explainer
Enforcement Powers Active:
Ofcom can impose fines up to 10% of global annual turnover or £18m, and can restrict service access for repeat breaches. Enforcement powers began November 2025.
Mandatory Age Assurance for Adult Content:
All sites hosting or linking to pornography must use “highly effective age assurance” techniques from January 2026. Acceptable methods include certified third-party age verification, document checks with liveness detection, and privacy-compliant facial age estimation.
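To illustrate how these assurance methods can be combined, here is a minimal sketch of a challenge-age policy around facial age estimation: because estimation models carry a margin of error, borderline visitors are escalated to a stronger check (such as a document check with liveness detection) rather than admitted on the estimate alone. The threshold and buffer values below are illustrative assumptions, not figures from Ofcom’s guidance.

```python
CHALLENGE_BUFFER = 7  # hypothetical safety margin above the 18+ threshold

def age_gate_decision(estimated_age: float, threshold: int = 18,
                      buffer: int = CHALLENGE_BUFFER) -> str:
    """Decide how to handle a visitor based on a facial age estimate.

    Anyone estimated below the threshold is denied; anyone comfortably
    above threshold + buffer is allowed; borderline estimates are
    escalated to a certified document check instead of being trusted.
    """
    if estimated_age >= threshold + buffer:
        return "allow"      # confidently above the age threshold
    if estimated_age < threshold:
        return "deny"       # estimated to be a minor
    return "escalate"       # borderline: require a document check
```

The buffer is the key design choice: widening it reduces the chance a minor slips through on a flattering estimate, at the cost of sending more adults through the slower document-check path.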
Transparency Reporting:
Platforms meeting Ofcom’s “Category 1” thresholds must produce public transparency reports. These detail moderation methods, illegal content removal rates, and child safety measures. Rollout of this requirement is phased — large services will first publish in spring 2026.
Risk Assessments for Harmful but Legal Content:
Platforms must conduct and document proportionate risk mitigation for harms like cyberbullying or self-harm encouragement, following Ofcom’s codes of practice.
EU Digital Services Act (DSA)
The EU Digital Services Act (DSA) is an online safety regulation issued by the European Commission, which encourages transparency and accountability in digital services, from social media to e-commerce platforms. The focus of this act is to establish a fair and safe online environment across all EU Member States. The DSA targets:
- Hosting Services, which must report on possible criminal activity at all times
- Online Platforms, which must foster safe online activity across all public accounts and, most importantly, protect underage users and flag any inappropriate content
- Very Large Online Platforms (VLOPs) and Very Large Search Engines (VLOSEs), which must practice due diligence at all times
How does the EU Digital Services Act impact your business?
- Practicing Due Diligence: Your business is asked to provide transparency when processing user data, and to take accountability for algorithm-based recommendations and advertising.
- Risk Management and Crisis Assessment: Being prepared for independent audits and exercising crisis response is also mandatory for businesses subject to the DSA. Your business needs to be prepared at all times for data leaks, breaches, or server failures.
Latest Updates – January 2026
Source: European Commission DSA Enforcement Updates
Annual Risk Reports:
Very Large Online Platforms (VLOPs) and Very Large Search Engines (VLOSEs) submitted their first annual risk assessment reports in Feb 2025. These cover systemic risks (disinformation, illegal content, mental health impacts) and mitigation steps.
Cross-Border Enforcement:
Enforcement is now coordinated across EU Member States. Regulators in one country can act against non-compliant platforms headquartered elsewhere.
Ad Transparency & Minor Protections:
Since Oct 2025, targeted ads toward minors are fully prohibited. All ads now require a visible “paid by” disclosure and a public explanation of targeting criteria.
EU Artificial Intelligence Act (AI Act)
Another legal framework issued by the European Commission, the EU AI Act governs the lawful development, marketing, and use of AI within the European Union. It comes in response to the growing number of cases involving fake news on serious topics, deepfakes, identity theft of both private and public persons, and misinformation.
The EU AI Act stands for:
- Prohibiting the use of deceptive or misleading AI-generated content that can harm users emotionally
- Encouraging the recognition and labeling of AI-generated content by disclosing it whenever applicable
- Promoting transparency when using AI in the online environment, by displaying clear warnings and an adequate code of conduct for every AI model deployed
Since AI has been adopted, in one way or another, by more and more companies in recent years, accountability measures are now a must:
- Transparency of AI Use: The involvement of AI needs to be acknowledged by your business at all times, especially when it is used in connection with personal user data or sensitive business information, such as financial data.
- Reporting of misleading AI-generated content: Businesses are now required to flag and report misleading AI-generated content, thus becoming advocates in reducing deceptive information or content.
Latest Updates – January 2026
Source: EU AI Act Final Text
Final Adoption:
The AI Act was formally adopted Dec 2025. It introduces risk-based regulation:
- Prohibited AI: biometric social scoring, manipulative subliminal techniques
- High-risk AI: OCR, ID verification, biometric recognition, especially where sensitive personal data is handled
- Limited-risk AI: requires transparency warnings
High-Risk AI Obligations:
All high-risk systems must carry out conformity assessments before market release, maintain technical documentation, keep usage logs, and have ongoing post-market monitoring.
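The usage-log obligation above can be approximated in application code. The sketch below wraps a verification step in a decorator that emits a structured audit record for every call; the field names and event types are illustrative assumptions, not a schema mandated by the AI Act.

```python
import json
import logging
import time
from functools import wraps

logger = logging.getLogger("idv.audit")

def audited(event_type: str):
    """Decorator that writes one structured usage-log record per call,
    as one possible way to retain the usage logs the AI Act expects
    from high-risk systems. Field names here are hypothetical."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            started = time.time()
            result = fn(*args, **kwargs)
            logger.info(json.dumps({
                "event": event_type,
                "function": fn.__name__,
                "timestamp": started,
                "duration_s": round(time.time() - started, 4),
                "outcome": str(result),
            }))
            return result
        return wrapper
    return decorator

@audited("id_document_check")
def verify_document(document_id: str) -> bool:
    # Placeholder for the real verification pipeline
    return True
```

In production the log handler would ship these records to tamper-evident storage with a retention policy, since the point of the obligation is that regulators can reconstruct how the system was used after the fact.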
Watermarking of AI-generated media:
From 2026, synthetic audio, video, and images must carry detectable watermarks to indicate AI generation. This is relevant if verification outputs or ID checks generate synthetic images or media.
Compliance Tip:
If your platform operates across the UK and EU markets, you may need to implement:
- Age verification & content reporting tools (OSA)
- Ad targeting transparency reports (DSA)
- AI model documentation & watermarking capabilities (AI Act)
Klippa DocHorizon can assist with age and identity verification, automated content reporting, and AI compliance.
Which Companies Need to Comply with Online Safety Regulations?
With enforcement now active in the UK and EU, it pays to future-proof your business and put these measures in place sooner rather than later. But which industries are most affected? Let’s find out together:
- Companies in the Gaming Industry: To prevent underage users from accessing prohibited content, online safety regulations require organizations in the gaming industry to put additional security measures in place, such as age verification or proof of income. Failure to comply can result in substantial fines.
- E-commerce Companies: If your business focuses on selling goods and services online, then complying with these measures is a must. In this case, document verification and identity proofing are the main requirements your business needs to meet.
- Businesses in the Travel Industry: Whether your business manages online travel purchases, hotel chains, or passenger information, complying with these frameworks is quickly becoming a necessity. For example, adhering to these regulations means ensuring that passengers are who they say they are, and that client information is processed securely and protected from any possible data breach.
In the online environment, there is no such thing as being too cautious. Your business should therefore adhere to these regulations, not only to stay compliant, but also to avert possible incidents before they occur. Let’s look at how you can stay compliant with online safety regulations.
How to Stay Compliant with Online Safety Regulations
There are a few simple but effective ways your business can comply with online safety regulations, ensuring that both your employees and your users can safely navigate the online environment:
- Putting age restrictions in place: Adding an extra layer of age verification to controlled content is a must if you want to foster a safe environment for your users. Before a user logs in to a service or completes a purchase, validate their age to confirm they are entitled to access the service.
- Establish income checks and verifications: Common in the gaming industry, setting clear spending limits for gaming services is a must if you want to create a lawful and compliant online experience. Income verification checks, as well as source-of-funds and source-of-wealth checks, are some of the measures that help your business stay compliant.
- Offer the opportunity to report illegal content: Letting users flag and report suspicious activity on your platform can significantly reduce the risk of fraud. Be it harmful content or illegal activity, such as money laundering, identity theft, or document fraud, it is important that your organization can detect and eliminate it promptly.
Now, this might sound easier said than done, but you shouldn’t be alarmed or overwhelmed. There are many software solutions on the market that help your business comply with various online safety regulations. A prime example is Klippa DocHorizon, which combines many document and identity verification features that help your business stay vigilant in the online environment. Keep reading to find out more!
How to Verify Users in the Online Environment
As just mentioned, you can easily verify users in the online environment by setting up identity verification software, such as Klippa’s Identity Verification Software with AI. With Klippa’s solution, you get a full identity verification solution, as well as an age verification one, depending on your business case and needs. Here are the main features and benefits of our solutions:
Identity Verification


Klippa’s Identity Verification software uses AI technologies to accurately read and capture sensitive information and prepare it for processing. While doing so, the extracted data is protected according to data protection laws, such as the GDPR, HIPAA, or CCPA, and is never stored on Klippa’s servers.
By using our identity verification software, your business:
- Prevents identity document fraud by identifying forgery with AI algorithms
- Matches the person to the ID document through liveness detection
- Ensures highly accurate identity verification with NFC chip verification
- Instantly approves user access with document and identity proofing
- Matches the identity of a user to their document by adding selfie verification
- Enhances background checks with proof of address and income verification
Additionally, Klippa’s identity verification software is able to process multiple identity documents, such as identity cards, passports or drivers’ licenses, from more than 150 countries.
Our solution can easily be implemented in your business via our low to no-code platform, but also via an ID Verification API or Identity Verification SDK, for mobile use.
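As a rough illustration of what an API-based integration involves, the sketch below builds an HTTP request to a hosted verification endpoint. The URL, header names, and payload fields are assumptions made for illustration only; consult the provider’s actual API reference for the real contract.

```python
import json
import urllib.request

API_URL = "https://example.com/v1/identity/verify"  # hypothetical endpoint
API_KEY = "your-api-key"                            # hypothetical credential

def build_verification_request(document_b64: str,
                               selfie_b64: str) -> urllib.request.Request:
    """Assemble a POST request submitting a base64-encoded ID document
    and selfie for verification. All field names are illustrative."""
    payload = json.dumps({
        "document": document_b64,
        "selfie": selfie_b64,
        "checks": ["authenticity", "face_match", "liveness"],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "X-Api-Key": API_KEY},
        method="POST",
    )
```

Sending the request (e.g. with `urllib.request.urlopen`) and parsing the JSON response would complete the flow; keeping request construction separate makes the payload easy to test without network access.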
Age Verification


Klippa’s Intelligent Age Verification solution gives your organization the ability to process users’ sensitive information accurately and, most importantly, safely. Using the latest AI technologies, it reads identity documents and captures essential information, which is then matched against selfie verification or liveness checks.
Here’s what your business complies with by using our age verification solution:
- Rigorous age verification to ensure minors don’t access 18+ category content
- Accurate document verification, with built-in fraud detection features
- Gatekeeping of adult content from minors, by employing liveness checks
- Restriction of access to gaming and gambling activities, by putting rigorous income checks in place
- Additional security layer, by cross-checking identity documents against selfie verification
Online safety is not a gamble; maintaining regulatory compliance with these measures creates a safe digital environment for both current and future users.
Comply with Regulations for Online Safety with Klippa
It’s clear that online security has become increasingly important over the years. This is why you should evaluate whether your organization will be impacted by these new online safety regulations or acts. If the answer is yes, then it’s time to take action.
You can start by contacting our experts for more information or simply filling in the form below to get a demonstration of our solutions!
FAQ
When did the UK Online Safety Act come into force?
Ofcom’s enforcement powers under the Online Safety Act became active in November 2025. Some core obligations, such as mandatory age assurance for adult content, took effect in January 2026.
What are the penalties for non-compliance with the OSA?
Fines can reach 10% of global annual turnover or £18 million, and serious breaches may lead to service access restrictions.
Does my service need age verification under the OSA?
You must have robust age verification in place if your service:
- Hosts or links to online pornography
- Offers gambling services
- Provides other age-restricted content
Acceptable methods include certified third-party providers, document verification, and liveness or facial age estimation systems, according to Ofcom’s Codes of Practice.
What is a Transparency Report, and who must publish one?
A Transparency Report is a public document showing how your platform moderates illegal and harmful content, protects children, and manages recommendation algorithms.
Large UK platforms (“Category 1”), such as major social networks, must begin publishing these reports in spring 2026, with Ofcom specifying the exact content requirements.
What are my obligations under the EU Digital Services Act?
If you operate within the EU or target EU users:
- Very Large Online Platforms (VLOPs) and Search Engines (VLOSEs) must submit annual risk assessment reports.
- All platforms must enable users to flag illegal content and act promptly.
- Targeted advertising to minors has been prohibited across the EU since October 2025, and ads must include a visible “paid by” disclosure.
Are OCR and ID verification tools considered high-risk AI under the EU AI Act?
Yes. OCR and ID verification systems processing sensitive personal data are classified as high-risk AI under the EU AI Act.
From 2026, providers of these systems must carry out conformity assessments, maintain detailed technical documentation, keep usage logs, and implement post-market monitoring.
What does the AI Act require for AI-generated media?
From 2026, the EU AI Act requires that all synthetic audio, images, and video carry detectable watermarks to indicate AI generation. This helps prevent misuse of deepfakes or misleading media. Businesses producing visual verification outputs must implement detectable watermarking.
What happens if my business fails to comply?
Possible consequences include:
- Heavy fines (up to 10% of global turnover in the UK)
- Service suspension or restriction orders
- Reputational damage and loss of user trust
Both UK and EU regulators have active cross-border enforcement powers.
How can my business prepare for these regulations?
Key steps:
1. Implement age verification and identity proofing tools
2. Establish transparent content moderation policies
3. Prepare and document risk assessments for harmful or illegal content
4. Audit AI systems for risk category compliance and watermarking
5. Use trusted solutions like Klippa DocHorizon for automated verification and compliance management
Do these regulations apply only to large platforms?
No. While some obligations apply only to large platforms (e.g., “Category 1” OSA services or VLOPs under the DSA), all businesses offering age-restricted content, targeting EU users, or using high-risk AI must meet the relevant baseline compliance rules.
Sources
- UK Online Safety Act guidance: Ofcom Roadmap to Regulation
- EU Digital Services Act enforcement updates: European Commission DSA Page
- EU AI Act documents: European Commission AI Act Overview