In the rapidly evolving landscape of artificial intelligence (AI), regulators around the world are working to establish frameworks that promote the ethical and compliant use of this transformative technology. Hong Kong has recently taken a significant step in this direction with the release of its Artificial Intelligence: Model Personal Data Protection Framework (the “Model Framework”).
Earlier this year, the Office of the Privacy Commissioner for Personal Data (“PCPD”) conducted compliance checks on 28 organisations across a number of sectors, including telecommunications, finance, insurance, retail and education, regarding their development or use of AI. The PCPD found that 21 of the 28 local organisations were using AI in their daily operations, and that 10 of them collected personal data in doing so while implementing appropriate security measures.
The Model Framework aims to provide a comprehensive set of best practices for organisations that use AI solutions in their business operations, particularly in the wake of OpenAI’s launch of the hugely popular ChatGPT platform in late 2022.
The Model Framework outlines four key areas for organisations to address when using generative AI:
- Establishing an AI Strategy and Governance: Organisations are encouraged to formulate an internal AI strategy and governance structure for procuring AI solutions, including defining roles, responsibilities and decision-making processes for AI implementation.
- Conducting Risk Assessments with Human Oversight: The Model Framework emphasises the importance of comprehensive risk assessments and tailored risk management approaches for the organisation’s use of AI, including determining the appropriate level of human oversight in automated decision-making.
- Customising AI Models and Managing AI Systems: Organisations must pay careful attention to the preparation and management of data used in AI systems, ensuring data and system security throughout the AI lifecycle.
- Communicating and Engaging with Stakeholders: Transparent communication and collaboration with relevant stakeholders, such as suppliers, customers and regulators, are crucial to promoting trust and accountability in the use of AI.
The underlying theme of the Model Framework is the ethical procurement, implementation and use of AI systems, in compliance with the data protection requirements of the Personal Data (Privacy) Ordinance (“PDPO”). As such, the PDPO’s Data Protection Principles are also outlined in the Model Framework, covering areas such as the purpose and manner of data collection, accuracy and duration of data retention, use of data, data security, openness and transparency, and data subjects’ rights of access and correction.
The release of the Model Framework is a significant step for Hong Kong in providing clarity and guidance to organisations on the responsible use of AI, especially with respect to the use of personal data. As the AI landscape continues to evolve, we can expect more standards and recommendations to emerge from other regulators, with personal data protection remaining a key focus.
By proactively addressing the ethical and regulatory considerations surrounding AI, organisations can unlock the transformative potential of this technology while safeguarding the privacy and rights of individuals.
Contact us today if you would like to know more about what these latest developments mean for your organisation.