This content originally appeared on NRF Connections.
I had the privilege to attend the first two days of plenary sessions of Singapore's ATxSummit 2023, which was held on 6 and 7 June 2023.
It was a first-rate showing by Singapore's Infocomm Media Development Authority (IMDA), which pulled out all the stops for the event and assembled an impressive line-up of speakers: government leaders such as Singapore's Deputy Prime Minister Lawrence Wong, Estonia's Prime Minister Kaja Kallas and the Netherlands' Minister for Digitalisation, Alexandra van Huffelen, as well as business and thought leaders in technology and artificial intelligence (AI) such as Ahmed Mazhari (Microsoft), Keith Strier (Nvidia), Michael Zeller (Temasek), Kay Firth-Butterfield (WEF), Elham Tabassi (NIST), Prof. Yi Zeng (Chinese Academy of Sciences) and many more.
IMDA also launched the AI Verify Foundation to support the development and use of AI Verify in addressing the risks of AI. AI Verify is a testing framework and software toolkit developed by IMDA that is consistent with internationally recognised AI governance principles, such as those from the EU, the OECD and Singapore. By making AI Verify available to the open-source community, the AI Verify Foundation seeks to help shape the future of international AI governance standards through collaboration.
The clear theme across the plenary sessions I attended was how we, as a society, can harness the full benefits of generative artificial intelligence (GenAI) while minimizing its potential harms. This is unsurprising given how OpenAI's ChatGPT burst onto the scene and captured the collective public imagination just over six months ago, in late November 2022.
The sessions left me with much to chew on. Reflecting on what was shared by the speakers, here are some of my immediate takeaways:
- GenAI is still a very new technology, and with such transformative technology, there is always a tendency to overhype the short-term impact and opportunities while underestimating the technology's long-term implications.
- Managing the potential impact of GenAI (and AI in general) cannot rest on regulation alone (although regulation is a critical part of the solution); professional standards and education are also needed. It is also important that tech communities participate by making the technology difficult to misuse and easy to recover from when misuse occurs.
- Trust is paramount in encouraging the adoption, and harnessing the full potential, of GenAI in society - this includes being clear and upfront about the models being used, the underlying datasets used to train them, and when AI is being used to generate outputs or make decisions.
- Precision in describing AI is crucial to ensuring that the technology is properly understood. The tendency to anthropomorphize AI can lead to misconceptions about its nature. For example, it would be erroneous to say that GenAI "hallucinates": when GenAI produces an inaccurate output, the technology has actually worked as intended, i.e., it has given the response it predicts a human being wants. The inaccuracy may instead stem from the underlying data or the user prompt that generated the output.
- There needs to be accountability in the deployment of AI to ensure that it is properly used. Depending on the context, both users and developers should be accountable (and liable).