This year (2023) has been a year of significant AI advances, most notably in the form of Large Language Models (LLMs), also known as generative chatbots. The best known is ChatGPT, the impressive generative chatbot backed by significant investment from Microsoft.
Amid the rapid arms race between companies in the tech space (such as Google's Bard and OpenAI's ChatGPT, among others), end users are beginning to understand the nuances of these front-facing platforms. LLMs can generate false information (hallucinations) while delivering it in a convincing manner. Many LLMs are trained to provide a plausible response, not necessarily an accurate one. There have been many stories of LLMs producing fake links and figures when asked to support their statements (example here). Additionally, as the race to build the best LLM continues, documentation of and transparency into how these models were designed and created is becoming more and more shrouded. The XAI Foundation wrote earlier this year about how to avoid some of these pitfalls.
Users and legislative bodies alike are taking note of these risks and of the lack of XAI, and are beginning to approach these platforms with greater caution. Italy has banned ChatGPT, and others may soon follow suit. The lack of XAI at the release of these models has put them "behind the eight ball" on user trust - something that can be difficult to reverse. The XAI Foundation predicts that XAI will become a major topic of focus in the LLM landscape and beyond in the second half of 2023.