February 16, 2026
The Advertising Association has published a voluntary ‘Best Practice Guide for the Responsible Use of Generative AI in Advertising’.
The Guide was developed by a sub-group of the Government’s Online Advertising Taskforce. It aims to provide practical, high-level guidelines for the use of Generative AI in online advertising, with the goals of protecting consumers, helping organisations identify and manage risks, enabling responsible innovation, and promoting public trust in the advertising industry.
Organisations are first encouraged to establish internal GenAI governance frameworks. Among other things, these should: (1) assign accountability for GenAI systems and outputs; (2) include clear processes for approving GenAI deployment; (3) address how risks will be monitored and managed; and (4) align with an organisation’s values and legal obligations.
The Guide then sets out eight ‘Core Principles’ that should be adopted to ensure that GenAI is employed responsibly:
- Transparency. According to the Guide, “the decision to disclose GenAI use should be proportionate to the potential for consumer harm or misinterpretation. Content that is clearly deceptive or misleading should never be used, whether GenAI use is disclosed or not. Content that could potentially confuse consumers about facts, product capabilities, endorsements, or the reality of depicted events would benefit from clear disclosure. Content that is obviously fictional, fantastical, or impossible does not typically require AI-specific labelling, though standard advertising disclosure rules would still apply”.
- Responsible Data Use. Organisations should comply with data protection and intellectual property law and ensure that third-party models and platforms they use do the same.
- Bias and Fairness. AI systems should be assessed for potentially discriminatory outcomes before deployment, trained on diverse datasets, and monitored for discriminatory patterns after deployment.
- Human Oversight and Accountability. The Guide recommends that organisations implement human-oversight mechanisms proportionate to the level of risk associated with the particular AI application.
- Societal Wellbeing. Organisations are encouraged, for example, to implement content screening to prevent AI from generating misleading, fraudulent, or harmful advertising, and to establish safeguards against targeting vulnerable groups with exploitative content.
- Brand Safety and Reputation. The Guide recommends ‘proactive brand safety measures’ such as content screening and real-time monitoring, together with controls addressing the appropriateness of content, brand voice, cultural sensitivities, and placement safety.
- Environmental Stewardship. Organisations should weigh the environmental implications when deciding whether and how to deploy AI systems.
- Continuous Monitoring. GenAI systems should be monitored continuously to “identify and address issues promptly, maintain compliance with evolving standards, and adapt to changing consumer expectations”.
To read the Guide in full, click here.