Generative AI: Q&A with Professor Peter McBurney, Professor of Computer Science, Department of Informatics, King’s College London
Professor Peter McBurney specialises in artificial intelligence (AI) and provides AI consultancy services to Norton Rose Fulbright LLP and our clients. In this article, corporate technology lawyer Sarah Esprit discusses generative AI with Professor McBurney.
Sarah: At a very basic level, generative AI refers to models that are designed to synthesise data or generate new content that is similar to, or resembles, existing examples, but can you explain in a bit more detail what it entails, the training process and how it differs from other types of AI?
Peter: This type of AI arose in the area of language translation. If you speak a foreign language, you know that the meaning of words can depend on the context – you cannot look in a dictionary and say, “this means that” – you need to know the sentence in which a word appears and sometimes the whole paragraph the sentence appears in. People developing automatic machine translation realised this and came up with methods to incorporate context – generative AI arose there.
You train generative AI models with lots of data and examples in the form of text, and the model conducts pattern-matching and builds up probabilities of association. If I say the word “fire”, it tries to predict what words will follow “fire” in an English sentence – for example, “firewall” might follow in, say, 80% of cases. Other words follow “fire”, like “fire water”, an old name for whisky or spirits (water that puts your mouth on fire), but that is much less frequent – maybe 20% of cases. The model uses this data to build up a knowledge graph in which nodes represent words and the links between words carry the probability of one word following another.
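To make the word-association idea concrete, here is a minimal Python sketch that counts which word follows which in a tiny, invented corpus and converts those counts into probabilities. It illustrates the intuition only; real generative models are transformer-based and use far more context than a single preceding word.

```python
from collections import Counter, defaultdict

def build_next_word_probabilities(corpus: str) -> dict[str, dict[str, float]]:
    """Count which word follows which in the corpus and convert the counts
    into probabilities - a toy version of the word-association graph
    described above."""
    words = corpus.lower().split()
    follower_counts: dict[str, Counter] = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        follower_counts[current_word][next_word] += 1

    graph = {}
    for word, counts in follower_counts.items():
        total = sum(counts.values())
        graph[word] = {follower: count / total for follower, count in counts.items()}
    return graph

# Tiny illustrative corpus; in real systems the training text is vast.
corpus = "the fire wall blocked traffic and the fire water warmed the traveller"
graph = build_next_word_probabilities(corpus)
print(graph["fire"])  # e.g. {'wall': 0.5, 'water': 0.5}
```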
If generative AI models are trained with enough data, they turn out to be quite good. However, early models were trained on text, so they are good at things that are text or text-like, such as computer code, and can therefore help with coding. They are not trained on numbers or arithmetic, so they are less good at that unless you add an extra plug-in. If you input “what is two plus two”, the model answers by looking for somewhere in its training text where someone has written this, rather than calculating. The earlier versions of ChatGPT did not have arithmetic calculation capability.
Earlier versions of ChatGPT also included a component that tried to assess the emotional state of its users – if the individual was friendly, it would be friendly in return. Similarly, if an individual was hostile, the answers would be hostile in return. You would not want this in a customer service situation where a customer gets angry. The manner and tone of communication have to change for enterprise apps because of this.
Another aspect of interacting with generative AI is that users may get the impression they are talking to a human, which is false – the model is not thinking; it is just pattern-matching without doing any reasoning.
The models are not inherently good at going step by step through an argument. However, you can prompt or train them to do so – getting a generative AI model to work through a chain of reasoning is called following a “chain of thought”.
For example, if you asked it for a recipe, you would have to ask it to work through the recipe step by step, because these models are not naturally good at constructing arguments or following a chain of thought through. They match patterns.
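As a rough illustration of what “asking step by step” looks like in practice, the snippet below contrasts a direct prompt with a chain-of-thought prompt. The example task, model name and client call are placeholders rather than any particular vendor’s API.

```python
# Two ways of prompting a generative model about the same task.

direct_prompt = (
    "I start with 12 eggs, use 3 for a cake and 2 for breakfast. "
    "How many eggs are left?"
)

# A chain-of-thought prompt asks the model to show its reasoning
# before answering, which tends to help with multi-step tasks.
chain_of_thought_prompt = (
    "I start with 12 eggs, use 3 for a cake and 2 for breakfast. "
    "Work through the problem step by step, showing each subtraction, "
    "and only then give the final answer."
)

# Hypothetical call - substitute whichever client library you actually use:
# answer = client.generate(model="some-llm", prompt=chain_of_thought_prompt)
```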
Sarah: For people whose familiarity with generative AI does not extend much beyond, for example, ChatGPT, could you provide a few examples of the use cases and applications of generative AI for images, text and other types of data?
Peter: As generative AI systems started with text, things that are text-like – such as code – work well when input into the models. Generated code does tend to come back with bugs, but those parts can be deleted so that the generative AI model learns and bugs are reduced over time.
Whatever domain you are asking it to work in, an expert who knows that domain needs to assess the outputs to determine whether a response is correct. For example, there is a lot of talk about not needing junior lawyers anymore because you can get ChatGPT to draft contracts or opinions, but someone with expertise still needs to assess those drafts. You still need people coming through the system to become partners, so work will always be needed at the junior level; otherwise the system would hollow out and human expertise would fail to mature.
A lot of applications have involved matching faces – for example, in a shop, the video cameras may identify certain people as being shoplifters, so that when they come back into a store, the company is alerted. You don’t need generative AI for that, but people are thinking of using it to assist with the matching.
When it comes to using generative AI to create images, earlier versions of these systems used copyrighted images, particularly from Getty (which provides stock photographs bearing the Getty name), which proved problematic. The systems could generate a new, original image, but because they had been trained to expect the word Getty on every image, they would insert the name on the output image. That is an interesting legal issue – the product is not an image that Getty has copyright over, but it bears the Getty name, so Getty will know that the system has been trained on Getty images.
Another big application of this technology is summarising meetings, i.e. producing minutes of meetings. In litigation it can be used for summarising databases of emails – if you are looking into who knew what about something, generative AI can be used to summarise the contents of those big databases.
Sarah: What have been the most recent advancements or breakthroughs in the field of generative AI?
Peter: All of this is recent, from 2017 onwards.
All of these systems go through successive versions, each trying to improve on how the previous version works while making sure not to use copyrighted data in training and deleting material that the developers do not have the rights to use.
The creation of special modules for particular capabilities, such as arithmetic, chain-of-thought reasoning and eXplainable AI (XAI), is another big area of advancement (we will discuss this in more detail later on).
You can also build systems that work in just one subject domain that uses its own terminology. For example, Google has developed specialised generative AI for healthcare, using specialised medical terminology.
Sarah: What do you see as the main benefits of generative AI?
Peter: Firstly, systems are already being used for software coding, assuming the role of an intelligent assistant to a coder.
The second big area, which is under development, is combining facts from internal documents with conversational ability – for example, drawing on company policy documents to give factual answers to questions and then applying a conversational tone to communicate those answers to customers or users, in either text or voice.
This could be helpful in dealing with internal HR queries within a company where, for example, employees ask, “How much holiday am I entitled to?” – there is a factual answer to that factual question, and it should be answerable from the policies of the company. In Britain every company must give its employees at least four weeks’ holiday, but in some countries companies need to provide more.
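As a sketch of how such a system might be wired together – assuming a toy keyword lookup where real deployments would typically use embedding-based retrieval and a generative model to phrase the reply – the snippet below shows the basic shape; the policy text and function names are invented for illustration.

```python
# A minimal sketch of grounding answers in internal policy documents.
# The policy snippets, topics and function names are invented examples.

POLICY_DOCUMENTS = {
    "holiday": "Employees are entitled to at least four weeks' paid holiday per year.",
    "expenses": "Travel expenses must be submitted within 30 days with receipts.",
}

def retrieve_policy(question: str) -> str:
    """Return the policy snippet whose topic keyword appears in the question."""
    for topic, text in POLICY_DOCUMENTS.items():
        if topic in question.lower():
            return text
    return "No matching policy found; escalate to HR."

def build_prompt(question: str) -> str:
    """Combine the retrieved fact with an instruction to answer conversationally."""
    fact = retrieve_policy(question)
    return (
        f"Using only this policy text: '{fact}'\n"
        f"Answer the employee's question in a friendly tone: {question}"
    )

print(build_prompt("How much holiday am I entitled to?"))
```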
The third big benefit is summarising large documents or large databases of data, especially where the data may be in different formats, such as text, graphs, audio and so on. For example, Google’s Word Cloud Generator takes a document and counts how many times each word is used, and the most frequently used words are displayed biggest on the page. That is a type of summary – if you had a legal document to summarise, you could use it to show which words are most common. Generative AI can also generate a narrative summary, which would be useful in law – you can get it to produce one paragraph summarising a 100-page document, a court judgment or the minutes of a board meeting.
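The word-frequency count behind a word cloud is straightforward to compute; the short Python sketch below shows the idea (the stop-word list and sample sentence are invented for illustration). A narrative summary, by contrast, needs a generative model on top.

```python
from collections import Counter
import re

def word_frequencies(document: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Count how often each word appears - the statistic a word cloud
    visualises by drawing the most frequent words largest."""
    words = re.findall(r"[a-z']+", document.lower())
    stop_words = {"the", "and", "of", "to", "a", "in", "that", "is"}  # trim common filler
    counts = Counter(w for w in words if w not in stop_words)
    return counts.most_common(top_n)

sample = "The court held that the contract was void. The contract terms were unclear."
print(word_frequencies(sample, top_n=5))
# e.g. [('contract', 2), ('court', 1), ('held', 1), ('was', 1), ('void', 1)]
```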
Finally, there are lots of commercial apps which leverage generative AI, such as in the insurance industry. When people make a claim, they can submit an audio or video file, and the insurance company can then use generative AI to help assess whether the claim is genuine.
Sarah: There is increasing concern around what happens when you keep feeding synthetic content back to a generative AI model – indeed, the potential ramifications are still poorly understood. Terms like MAD (Model Autophagy Disorder) have been coined, likening the effect of generative AI becoming self-consuming to mad cow disease. Can you expand on some of the challenges and limitations generative AI models face?
Peter: Future versions being trained on synthetic data is a big problem, and the main concern is that no-one has a solution to it yet. People have tried, and succeeded, in subverting generative AI. You can create a strange sequence of words and put it on a webpage – it will get scraped by the next version of a generative AI system, and then if you type in the first part of the sequence, you get the rest back, even if it is the only occurrence on the Web. For example, there could be only one occurrence ever of someone having written “elephants of Argentina are all green” somewhere, but generative AI will nonetheless pull this up as an answer.
Scraping means copying: a company building a generative model copies your webpage (which we call scraping) and uses it for training (see Norton Rose Fulbright’s blogs, UK Government responds on AI and IP rights to allow text and data mining of copyright works in all circumstances and New Singapore Copyright Exception will propel AI revolution).

For example, you could ask a model, “What is the ugliest sounding language?” The concept is nonsensical to linguists – there is no such thing as an ugly or beautiful language. However, someone asked a large language model this question, and the LLM answered with the name of a particular language from the Indian sub-continent. It did so because its training data included a statement on a social media site by someone asserting that that language was ugly. A single instance linking the words “language” and “ugly” was enough for the AI to match patterns and respond with the name of a specific language. An expert human linguist asked the same question would instead have contested the basis of the question.

The vast majority of webpages are created by machines, not humans. When you do a search on Amazon, a human created the layout, but the content is automatically generated by a machine. On social media, until recently, content was overwhelmingly human-created, but there are chatbots that can subvert these systems, so we may get to a point where most content on social media is machine-created rather than human-made – we may already be in that position. It’s quite frightening!
The versions of large language models which are publicly available today have been trained on data from the public Web – blog posts, social media posts, sites such as Reddit – as well as some private data. In English, the vast majority of text on the web is written by humans, mostly white men under 40, so there is a gender and age bias. If your model is trained on that data, it is easy for it to be biased in the same way. There is a lot of bias in generative AI models because of the data used to train them – if you know that, you can work around it, but you have to fix it before deploying these models commercially. (For more information on AI and bias, see Norton Rose Fulbright’s webinars Artificial intelligence and bias and Employment hot topics: AI discrimination and bias.)
Companies can roll out corporate ChatGPT policies which provide that employees cannot use these tools for anything work-related, as there are concerns that the output can be wholly inaccurate (see Norton Rose Fulbright’s publication, Everyone is using ChatGPT – what does my organisation need to watch out for?). When a public version of ChatGPT underwent testing, it got wrong basic things that anyone in a given domain would know. I can think of an example of a partner in a New York law firm who had to present arguments to court and used ChatGPT at the last minute to create the arguments and accompanying citations. He submitted them to court and the judge initially accepted them, before it later transpired that the real cases ChatGPT had cited were not relevant to the arguments, and that it had also made up fictional cases and fictional courts. So it can invent things – a hallucination.
Sarah: Given the challenges, could you explain why, in any event, AI-synthesised data is increasingly used by choice in a wide range of applications?
Peter: Most companies have not yet made the decision to use these systems; they have only made the decision to explore them for possible use. At least for enterprise apps, companies are in the exploration stage.
Many find that the systems do not give factual answers often enough, or that it is hard to make the tone pleasant rather than sarcastic, so after conducting trials they may decide not to proceed. Tech companies are betting that these AI applications will be successful and so are putting money into them.
But most companies haven’t made the decision yet and are still in the exploration stage.
Sarah: Recently we’ve seen Zoom come under fire for updating its Terms and Conditions to state that customers “consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service-Generated Data” for purposes including “machine learning or artificial intelligence (including for training and tuning of algorithms and models).” The company has since updated its Terms and Conditions to say that it will “not use audio, video, or chat customer content to train our artificial intelligence models without your consent.” However, many users will of course click “OK” to the Terms and Conditions without fully realising what they are handing over. How, in your opinion, do we strike the right balance of introducing enough real data (versus synthetic data) for AI models to be trained to avoid a situation where they go MAD? If real data is needed to help improve products that consumers enjoy, should these users be assisting with feeding such data or are there other methods already being employed that achieve this, where consent is given with full awareness?
Peter: Methods are still emerging, and we don’t yet know how to get the balance right.
There is a difference between the attitude to data privacy in North America and in Europe – America is generally more libertarian, and people there do not worry as much about data being taken and used (in relation to the US position, see Norton Rose Fulbright’s article, AI, machine learning and big data laws and regulations 2023). The European Commission is strongly influenced by German attitudes, which are generally more suspicious of the state knowing things about people and more protective of privacy, a sentiment that has fed through into wider Europe.
China has a reputation for the government deciding what data gets used, but Chinese citizens are still concerned about data privacy (see Norton Rose Fulbright’s blog, China finalises its Generative AI Regulation).
Attitudes differ – maybe the balance will end up different in different parts of the world. We don’t know what the right balance is, and we need more public discussion to find it.
In terms of methods that help with feeding in more real data, there are already platforms like Mechanical Turk, an online marketplace where you can pay people (mainly in lower-income countries) to carry out tasks, and the resulting data is used to train models. But even then, sometimes the people carrying out the tasks will use ChatGPT to do them, because there is downward pressure on prices. That is a concern if you are specifically commissioning work to be done by humans, though perhaps not otherwise.
Another example is that the companies behind GPT systems typically employ people in developing countries such as Kenya to moderate outputs and make sure that no explicit content is being generated – English speakers are needed to conduct the review, as most of the content is in English. This is good in that people get paid and the platform is vetted, but it turns out that the psychological effects on those people can be severe, as it is hard looking at brutal images.
Sarah: There are various solutions that have been floated to address the ethical, social and technical challenges of generative AI (e.g. watermarking and authentication, ethical guidelines and regulation and collaborative research). What, in your opinion, do we need to see more of in terms of best practices to limit the negative effects of generative AI?
Peter: I mentioned that there can be issues with watermarking, like the example of the word Getty reappearing on images.
The better answer is that companies need a governance framework for AI systems and to implement it for each system. Most companies do not yet have this; it is what Norton Rose Fulbright partners Marcus Evans and Lara White have been working on – how to create that governance framework (see Norton Rose Fulbright’s publication, Foundations for good AI governance). At the moment, companies do not need to comply with the EU AI Act, as it is not yet in force, but they will in future if they operate in Europe or have European customers or ties to Europe (see Norton Rose Fulbright’s publication, The AI Act: A step closer to the first law on artificial intelligence). So people should think about governance systems now, and design them flexibly but in light of how the regulations are likely to turn out.
Sarah: What discussions, if any, have surfaced concerning the legal implications around using such models?
Peter: There is proposed regulation – the EU AI Act – which will cover AI across all industries to manage the risk of AI systems (see Norton Rose Fulbright’s publication, The AI Act: A step closer to the first law on artificial intelligence).
Under the EU AI Act, companies will have to self-assess their own systems (or, where a system is subject to product safety assessment, have it assessed as part of their product safety procedures), and if a system is deemed high risk, they will need to give that information to designated national regulators. People think that the legislation will be passed in 2024 and come into force by 2025, but that companies will only start reporting in 2026 or 2027.
We will need new governance to deal with this, which countries are now starting to develop (see Norton Rose Fulbright’s publication, Foundations for good AI governance). The proposed legislation tells you what factors to consider when doing a risk assessment of a system, but it does not tell you how to weigh those factors – some trade-offs are required, and how do you make them?
You can only have certainty once cases have gone to the regulators, the regulators have made decisions, those decisions have been challenged in court and the courts have handed down binding rulings. Standards and guidance will also help. So companies and lawyers do not know with certainty right now, but lawyers do have an idea of how the regulators will decide cases (see Norton Rose Fulbright’s webinar, Harnessing the power of AI: How is AI transforming the FinTech and financial services industry?).
Sarah: What are your thoughts on the following statement, “There will soon be more synthetic data than real data on the Internet”? Should we be concerned about the future of generative AI?
Peter: An example from some years ago comes to mind. In 1991, I visited Prodigy Communications Corporation (Prodigy), which in 1984 started the first online service with a graphical user interface and became defunct in 2001. I was working for a German media company that was considering a joint venture with Prodigy, so I went to visit. By 1991 Prodigy already ran a service whereby it would buy news feeds from news companies (Reuters being the biggest) – for example, “inflation in Mexico has gone up 3%” – and automatically turn them into a story, creating a headline along with a graph or images, ready for publication, without any human involvement. And that was in 1991! So we had automated generation of news stories 32 years ago.
We should be very concerned – there are lots of risks. AI can already be used to create false information; it can create a picture of someone in an embarrassing situation which is completely false. We need to figure out ways to detect false information and counter it.
I recently saw an article in the Financial Times about the Swedish government – since Sweden applied to join NATO, someone has been creating false stories about Sweden, claiming that war is coming or has already started, so the government has set up a unit to counter that false information.
This area is very interesting and will be interesting for 50 years to come!
Want more information?
For more information about AI in general, see our Inside Tech Law hub, Artificial intelligence.
For more information on the regulation of AI, see:
- In relation to the EU AI Act:
- Our guide, Artificial Intelligence Regulation.
- Our blog, The AI Act: A step closer to the first law on artificial intelligence.
- In relation to the UK regulatory position, our blog, AI and the UK regulatory framework.
- In relation to the US position, our article, AI, machine learning and big data laws and regulations 2023.
- In relation to Canada, our blog, Canada's artificial intelligence legislation is here.