Navigating Hong Kong’s New Generative AI Guidelines: Key Considerations for Businesses

On 15 April 2025, Hong Kong’s Digital Policy Office (“DPO”) took a significant step in shaping the future of artificial intelligence (“AI”) governance with the release of the Generative Artificial Intelligence Technical and Application Guideline (the “Guideline”). Developed in collaboration with the Hong Kong Generative AI Research and Development Center, this framework aims to balance innovation with accountability, offering a roadmap for businesses to adopt generative AI responsibly.

Five Dimensions of Governance

The Guideline emphasizes five pillars of ethical AI governance:

  1. Personal Data Privacy: Key aspects of privacy in AI include data collection, accuracy, retention, usage, security, transparency, and access. Ensuring privacy and security throughout the AI lifecycle is crucial for protecting individual rights, maintaining public trust, and supporting the sustainable development of the AI industry.
  2. Intellectual Property Protection: The rapid development of generative AI presents both opportunities and challenges to intellectual property systems, particularly concerning the use of copyrighted materials for AI training.
  3. Crime Prevention: Generative AI enhances crime prevention and control but also introduces governance challenges, such as the misuse of deepfakes for fraud and misinformation. Effective implementation requires ethical considerations, transparency, and public trust to align with societal values.
  4. Reliability and Trustworthiness: The credibility of generative AI hinges on its ability to consistently produce accurate and reliable results, with a robust framework ensuring accountability for developers, operators, and users. However, the complexity and opacity of its technical architecture pose significant challenges to maintaining trustworthiness and effectively addressing issues like algorithmic biases and erroneous outputs.
  5. System Security: System security in generative AI is crucial to prevent unauthorized access and data compromise, but threats such as reverse attacks and data poisoning pose significant risks. Implementing strict data verification, anomaly detection, and secure transmission channels can help mitigate these threats and ensure safe and stable AI operations.

Practical Guide for Stakeholders

The Guideline categorizes obligations for three key groups:

Technology Developers

  • Establish a well-structured generative AI development team, including a data team, an algorithm engineering team, a quality control team, and a compliance team.
  • Develop policies on when to accept AI-generated content, such as requiring users to double-check AI-generated materials, verify references, and ensure correctness before use.
  • Follow higher standards and apply independent evaluation mechanisms from the development stage.

Service Providers

  • Establish a responsible service framework to ensure service compliance, data security, system security and system credibility.
  • Develop responsible processes, including clear financial and service security agreements, comprehensive risk assessments at various stages of service development, small-scale pilot projects before rolling out services on a large scale, continuous service improvement and transparent communication with stakeholders.

Service Users

  • Use generative AI services in a lawful and compliant manner while maintaining independent judgment.
  • Understand responsibilities and obligations such as privacy, security, and legal compliance before engaging with generative AI services. Explicitly indicate whether generative AI has been used in content generation or decision-making to ensure transparency and accountability.
  • Familiarize themselves with the privacy policies of generative AI services regarding data protection, use and sharing before using the services. Assess and be responsible for the content produced by generative AI and disclose its source when making it public.
  • Respect intellectual property rights and apply necessary technical measures to avoid generating content that constitutes the whole or substantial copying of copyrighted works to prevent copyright disputes.

The Guideline aims to strike a balance between fostering AI innovation and ensuring responsible deployment. It establishes a governance framework tailored to Hong Kong’s unique environment, addressing potential risks while encouraging the widespread adoption of generative AI. Businesses operating in Hong Kong should take note of the Guideline to ensure compliance and capitalize on the opportunities presented by AI technologies.

While the Guideline does not have the force of law, it serves as a timely reminder that the deployment and use of AI tools carries a number of legal and ethical risks. In most jurisdictions, AI currently remains lightly regulated or entirely unregulated. As a fast-developing technology with a broad range of potential applications, its risks are difficult to predict. Nonetheless, the Guideline is a useful first step in laying out some fundamental principles to adopt when deploying or using AI tools.

If you need any advice or help with AI regulatory matters, please contact us today.

 
