The AI landscape is rapidly approaching a fork in the road, with OpenAI's CEO pointing to fast-approaching Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) capabilities.
This critical point in the advancement of AI hinges on whether the primary parties driving AI development prioritize human trust in the models and methods being created. Reports already indicate that many people distrust LLMs. To enable the advancement of more capable models such as AGI and ASI, human trust and model transparency (both technical and ethical) must become top priorities.
There is real potential for major AGI and ASI releases to be met with disinterest at best, or full-blown resistance at worst. These breakthroughs in AI research can deliver great benefits to companies and individuals alike, but not if the models cannot be fundamentally trusted.
XAI Foundation recommends the following key steps to improve the reception and usability of AGI/ASI:
Pause all AGI and ASI research and temporarily shift research priority to model transparency, covering technical details, ethical considerations, and potential immediate uses.
Invest broadly in public education about AI models. Many non-technical users lack a basic understanding of how AI/ML works, even in its simplest forms. Bridging this gap will help users understand the benefits and shortcomings of models across the entire spectrum.
Pursue parallel transparency. Parallel transparency refers to illustrating and communicating how a model is working in real time, such as citing sources in a simple form or translating vector database lookups into plain language. For example, if an AGI model is deployed as a chatbot, the same UI the user interacts with should include a dedicated XAI section, as sketched below.
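A minimal sketch of what parallel transparency could look like at the code level: every answer is returned together with a transparency panel listing the retrieved sources and a plain-language explanation, so the UI can render both side by side. The names here (Source, TransparencyPanel, KeywordRetriever, answer_with_transparency) are illustrative assumptions, not an existing API, and the keyword retriever is only a stand-in for a real vector database.

```python
from dataclasses import dataclass, field

# Illustrative sketch of "parallel transparency": the chatbot returns an
# answer plus a panel describing which sources were retrieved and why.
# All names below are hypothetical; the keyword retriever stands in for a
# vector database.

@dataclass
class Source:
    title: str
    snippet: str
    score: float  # simple relevance score surfaced to the user

@dataclass
class TransparencyPanel:
    retrieved_sources: list[Source] = field(default_factory=list)
    explanation: str = ""

class KeywordRetriever:
    """Stand-in for a vector database: ranks documents by keyword overlap."""

    def __init__(self, documents: dict[str, str]):
        self.documents = documents  # title -> text

    def retrieve(self, query: str, top_k: int = 2) -> list[Source]:
        query_terms = set(query.lower().split())
        scored = []
        for title, text in self.documents.items():
            overlap = query_terms & set(text.lower().split())
            if overlap:
                score = round(len(overlap) / len(query_terms), 2)
                scored.append(Source(title, text[:80], score))
        scored.sort(key=lambda s: s.score, reverse=True)
        return scored[:top_k]

def answer_with_transparency(query: str, retriever: KeywordRetriever):
    """Return (answer, panel) so the UI can show both in parallel."""
    sources = retriever.retrieve(query)
    # The model call is elided; a real system would condition on `sources`.
    answer = f"(model answer to: {query!r})"
    panel = TransparencyPanel(
        retrieved_sources=sources,
        explanation=(
            f"Answer grounded in {len(sources)} retrieved document(s), "
            "ranked by keyword overlap with the query."
        ),
    )
    return answer, panel

if __name__ == "__main__":
    docs = {
        "Model card": "This model was trained on publicly available text data.",
        "Usage policy": "The model must cite sources when answering factual queries.",
    }
    answer, panel = answer_with_transparency(
        "what data was the model trained on", KeywordRetriever(docs)
    )
    print(answer)
    for src in panel.retrieved_sources:
        print(f"- {src.title} (score {src.score}): {src.snippet}")
    print(panel.explanation)
```

The design point is that the explanation is produced in the same call as the answer, rather than bolted on afterward, so the UI never shows a response without its accompanying XAI section.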
AGI/ASI undoubtedly offer tremendous benefits to the world. However, rapidly pursuing the creation of these models would be a flawed approach if full user education and transparency are not prioritized first.