February 20, 2023
After launching in November 2022, ChatGPT, a generative AI chatbot, quickly became the fastest-growing consumer internet app of all time, reaching 100 million users in just two months and attracting a reported USD 10 billion investment from Microsoft.
The uses for ChatGPT and rival apps are seemingly boundless. The Economist has speculated that AI chatbots will transform internet search, with far-reaching implications for many businesses, and on the face of it, generative AI has some obvious applications for due diligence providers.
In principle, a tool like ChatGPT could scan large datasets, including unstructured sources such as social media and the dark web, to collect targeted information with tremendous efficiency. Hours spent scrolling through media headlines using Boolean searches that may still filter out articles of interest could be replaced with targeted questions such as, “has the subject ever been accused of fraud, corruption or any other wrongdoing in the media?” or, even more succinctly, “have they ever been the subject of adverse media coverage?”
Where the chatbot really comes into its own in this endeavour is with its language capabilities. Presented with a query in the researcher’s own language, ChatGPT can scan material in unfamiliar languages before supplying answers back in the language of the query. This will appeal to due diligence outfits of all sizes, from freelance researchers who can only offer checks in one or two languages, to large consultancies that struggle to cover a full range of jurisdictions in-house.
It is also worth bearing in mind that the technology is still in its relative infancy. ChatGPT has only been online for a matter of months, and future iterations may be able to identify trends, track the movements of individuals and organisations, and highlight connections and relationships that human researchers may otherwise overlook.
Nevertheless, the technology’s flaws cannot be ignored, and the most obvious is its lack of sources. Asked to provide sources for information it gave in relation to a Romanian political scandal currently being researched by CRI, ChatGPT replied that as an AI language model, it didn’t have the ability to browse the internet or to access sources directly. In an industry where corroborating information against dependable, credible sources is paramount, unreferenced information amounts to little more than a lead that takes time to verify.
Moreover, chatbots must grapple with the algorithmic bias inherent to their own make-up. They are trained on a large corpus of text from across the internet - which is increasingly flooded with misinformation, prejudice, inaccuracies and outdated information - and this can produce biased outputs. That is a serious concern in due diligence, where impartiality and objectivity are critical. As companies increasingly seek to monetise their chatbots, whose margins will be slimmer than those of search engines due to high running costs, chatbot providers will inevitably encounter the same challenge faced by all the tech giants: balancing transparency with the whims and desires of their funders.
Not unrelatedly, there are issues to do with accuracy. The Economist likened ChatGPT to a mansplainer, “supremely confident in its answers, regardless of their accuracy”, while Tim Harford in the Financial Times wrote in even less forgiving terms that AI chatbots are poised to “generate bullshit on an undreamt-of scale.” Harford argued that the chatbots deal in plausibility, not in truth. Their answers are based not on a model of the world but on a model of what people tend to write, so while they sound highly plausible, they are often completely false.
When challenged, however, chatbots do appear willing to reassess their answers. After ChatGPT told CRI there was only one major adverse issue involving our Romanian subject, we fed it further detail uncovered during our research, and it apologised before expanding on additional issues.
As a complement to existing research tools, then, AI chatbots may serve a purpose. But beyond the most basic level of background checks, they present a fascinating but currently unreliable alternative to a human researcher for most due diligence projects.