Speech by SJ at High-Level Forum on Generative AI Governance and Cultural Co-Creation (English only) (with photo)
					
******************************************************************************************

Provost Professor Guo (Provost of the Hong Kong University of Science and Technology, Professor Guo Yike), Professor Song (Director, Media Intelligence Research Center of the Hong Kong University of Science and Technology, Professor Celine Song), distinguished guests, ladies and gentlemen,
A very good afternoon. It is a great pleasure to address this assembly of distinguished scholars and industry leaders and to kick off the panel discussion on "Credibility and Accountability in the Era of Generative AI".
The choice of topic is most timely, as the Chief Executive announced just last month in this year's Policy Address that the Department of Justice will form an interdepartmental working group to co-ordinate the responsible bureaux to review the legislation needed to complement the wider application of artificial intelligence. This raises an important question: why is there a pressing need for such a legislative review?
While providing convenience and enhancing efficiency, AI, as we all know, is also open to abuse and sometimes, even with the best of intentions, may be misused. This is where the questions of credibility and accountability arise.
Currently, Hong Kong has no bespoke legislation governing AI. To harness the potential benefits this new technology may bring, it is incumbent upon the Government to take the lead in reviewing the relevant law so as to provide a facilitative yet properly controlled legal environment for AI's development. This is not easy as it requires us to strike a balance, even if it is going to be a fine balance, between the need to encourage and promote innovation and technology development on the one hand, and that to ensure credibility and accountability in the use of AI on the other hand. This balancing exercise is highly relevant to the rule of law. Let me explain why.
There are many attempts to identify and articulate what should be regarded as the core or fundamental principles of the rule of law. This is how I would encapsulate the concept: there must be the existence of laws and regulations which can be enforced effectively to govern human activities so as to ensure that such activities will be conducted in a fair and proper manner, without causing any harm to or prejudicing the rights of others. To uphold this principle, it is essential for our legal framework to evolve in a timely manner to protect the legitimate rights and interests of different stakeholders affected by the use of AI.
I now wish to underscore five issues, by no means an exhaustive list, concerning credibility and accountability in the use of AI.
First, AI-generated content depends on the dataset on which the AI tool is trained and the information uploaded to it to obtain a response. Building a reliable dataset and carrying out machine learning may involve the use of copyrighted materials. This raises questions as to whether current copyright law can address new issues involving the relative rights and liabilities among copyright owners, generative AI providers and their users, such as the use of copyrighted materials without authorisation and the ownership of AI-generated works.
Second, as the saying goes, "Garbage in, garbage out." Sometimes, AI may generate false or inaccurate content because the information uploaded is incomplete or flawed. Recently, the High Court of England and Wales warned that GenAI can produce apparently coherent and plausible but entirely incorrect responses to prompts, make confident but untrue assertions, and cite sources that do not exist.
Third, deepfakes generated by AI may present substantial risks by producing highly realistic but in fact fake audio and visual content. The complaint against a law student who used AI to create over 700 pornographic images of women is telling. The implication is that, among other things, in any evidence-based inquiry, what appears to be very compelling may in fact be fabricated.
Fourth, Professor Cecilia Chan, an education academic, has recently said that the rapid advancement of GenAI presents a serious threat to higher education, particularly with the emergence of so-called "AI-giarism", i.e. the misuse of AI tools by students and researchers to present AI-generated work as their own. In the long run, this risks eroding an individual's critical thinking skills: the ability to question assumptions and evaluate information before reaching an independent judgement.
Fifth, UNICEF (United Nations Children's Fund) has expressed concern that AI may pose risks to children, as it can instantly create persuasive disinformation as well as harmful and illegal content. AI may also influence children's perceptions and attributions of intelligence, cognitive development and social behaviour, as the human-like tone of chatbots blurs the line between the animate and the inanimate. Even for mature and seasoned decision-makers, reliance on AI risks systemic discrimination that reinforces existing biases or even amplifies inequalities, prejudices and stereotyping.
The Government has already been putting in much effort to tackle these issues. On the intellectual property front, the Government is introducing, among other things, a new "text and data mining exception" in the Copyright Ordinance (Cap 528), which would allow copyright users to make copies of copyright works for computational data analysis and processing, without a licence from copyright owners.
We also see over time the issuance of guidelines from different quarters. Examples include the "Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff of the Hong Kong Judiciary", the "Hong Kong Generative Artificial Intelligence Technical and Application Guideline" issued by the Government's Digital Policy Office, and those from local universities.
While these guidelines are flexible and sector-specific, they do not have the force of law. Whether they are sufficient and effective to address the above-mentioned issues is yet to be seen and requires further study. In any event, proper co-ordination and overall supervision may well be required to prevent and reconcile possible inconsistent or even conflicting measures on the same or similar issues.
What is happening elsewhere?
On the Chinese Mainland, the "Interim Measures for the Administration of Generative Artificial Intelligence Services", jointly issued by the Cyberspace Administration of China and other authorities, took effect in August 2023. GenAI service providers must now, for example, take effective measures to make GenAI services more transparent and AI-generated content more accurate and reliable, protect users' inputted information and use records, and label AI-generated content. Service providers found in violation upon inspection will be penalised by the supervising authorities. Under a departmental regulatory document effective in September 2025, service providers offering services such as text generation or editing conducted through the simulation of natural persons shall add explicit labels to AI-generated or synthesised content.
The European Artificial Intelligence Act, binding and directly applicable in all EU (European Union) member states, commenced in August 2024, with its requirements applying incrementally. Certain requirements on credibility and accountability will take effect on August 2 next year: providers of GenAI systems shall ensure that their systems' outputs are marked in a machine-readable format and detectable as AI-generated or manipulated. Subject to some exceptions, deployers of an AI system that generates or manipulates content constituting a deepfake shall disclose that the content has been artificially generated or manipulated.
Most recently, in Italy, Law No. 132/2025 took effect on October 10, 2025, the first domestic law in the European Union on the use of AI. The fundamental principles underpinning its legal framework governing the use of AI in different sectors, including healthcare, employment, professional services and intellectual property, are transparency, proportionality, security, protection of data, accountability, gender equality and non-discrimination. That Law also seeks to combat crimes committed with AI support. The unlawful dissemination of AI-generated or altered content (e.g. deepfakes) is a standalone offence, whereas market manipulation offences committed through AI are subject to increased penalties. To protect minors, children under the age of 14 may use AI only with their parents' consent.
For us, Hong Kong must of course design its own approach based on its own circumstances. This requires collective wisdom from all sectors. With this in mind, the Government will engage with all relevant stakeholders to build a legal framework to promote AI in a way which is reasonably flexible to cater for its fast development and wide application, but without compromising the rule of law.
That is why, as I mentioned over lunch in my discussion with Professor Song, I am very excited about the establishment of the new centre. One of the centre's focuses concerns AI governance, and I certainly look forward to more collaboration between the Department of Justice, or perhaps the HKSAR Government as a whole, and the centre and, of course, the university. Indeed, more stakeholders will share a common interest in this very important area.
Before I conclude, I would like to take the opportunity to mention another very important matter: I invite you to exercise your right to vote in the upcoming Legislative Council General Election on December 7. Your active participation is not merely the exercise of an extremely valuable and important constitutional right; it is also a decision that will shape the future well-being of our society. On this note, I wish you fruitful, meaningful and enjoyable discussions and exchanges this afternoon.
Thank you very much.
Ends/Thursday, October 30, 2025
				
Issued at HKT 16:42