Establishing North Star Principles for Artificial Intelligence

Introduction

From a piece of fiction threatening to enslave or eliminate humanity to a conversational, pervasive reality, artificial intelligence has come a long way, especially with the recent surge in generative AI. That surge has its roots in several converging factors, such as growing computing power and data availability, but the point of import is the rapid expansion of the AI race.

Since ChatGPT was opened to the public, there has been a corporate race to build a better GPT bot and larger, more capable language models in order to gain the upper hand. Nobody can deny the technology's potential, but, like any other race, this one needs a set of guiding principles, a guiding light for the currently chaotic contest large corporations find themselves in.

Why do we need guardrails?

Like all technologies, artificial intelligence, including the much-talked-about generative AI, has its flaws as well as potential adversarial use cases. AI is prone to biases, inaccuracies, hallucinations, Type 1 and Type 2 errors and much more. On the other hand, bad actors eventually find applications for any breakthrough technology. Think of deepfakes combined with generative AI producing convincingly realistic videos of leaders spewing hatred, or AI-generated images that seem legitimate but are actually meant for phishing. The negative use cases for the technology are just as endless.

The True North for AI

To ensure that the race for better artificial intelligence is ethical and has a lasting positive impact on society, below are the key principles AI models should adhere to:

  1. Correctness – One of the key aspects AI has to take into account is being correct, or accurate. Predictive and generative AI are often used for decision-making or presenting content, and humans, in all their flawed glory, do not cross-check the predictions or content generated by AI. The onus of correctness, in the current state of AI, lies on humans; this onus has to shift to the AI models. After all, what is the use of having a GPT generate a polished presentation only to sit and cross-check every graph, every figure and every table in it? Usage of AI becomes a moot point with manual fact-checking in the mix.
  2. Sustainability – With the race for large language models heating up, sustainability is another major variable to be considered. Training LLMs takes a massive amount of computing power, which, in turn, needs massive amounts of energy and water to operate. Even inference with LLMs consumes considerably more energy than conventional prediction models. This essentially means that corporations have to offset the environmental impact of AI with an equivalent positive contribution to environmental goals on other fronts. Nobody really wants to see the climate clock sped up.
  3. Fairness – Humans are flawed, and one of the flaws they pass on to artificial intelligence is bias. The training data that artificial intelligence is given, or has access to, invariably carries biases. It is easier to monitor and offset biases when providing structured, labelled training data to models, but in the current state of generative AI, where models are trained on unstructured data from the internet, controlling the introduction of biases has become much more challenging. There need to be defined steps for monitoring and remediating biases in AI models, whether manually or through introspection processes; a minimal sketch of one such check follows this list.
  4. Reliability – The other big hurdle to pervasive adoption of AI is the trust factor. Humans, more often than not, do not trust the outcome of an AI model, even when it is entirely accurate. The reason is the black-box nature of AI – humans do not know what is going on inside the neural networks or how the AI arrived at an outcome. This is especially true when the outcomes are counter-intuitive or in contrast to human opinion. The way out is to build explainability into AI models and provide interactive visualizations of it. Once a human knows how a result was derived, the outcome becomes more acceptable; a short explainability sketch also follows this list.
  5. Human-centricity – A perhaps more philosophical guiding principle is for all artificial intelligence to be aligned with the betterment of humans; it has to be trained to be human-first. While training and while predicting outcomes, AI models should give higher weightage, where applicable, to outcomes that could positively impact human society. As AI evolves with unsupervised learning, this philosophical guardrail can help guide it to the right crests in the future.
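
The Fairness point above calls for concrete monitoring steps. The sketch below is a minimal, hypothetical example of one such check in Python: it computes the demographic parity difference, the gap in positive-outcome rates between groups, over a model's predictions. The column names, toy data and 0.1 tolerance are assumptions for illustration, not a prescribed standard; a real pipeline would track several metrics across all protected attributes.

```python
# Minimal sketch of a bias-monitoring check: demographic parity difference.
# Column names ("group", "approved"), the toy data and the 0.1 tolerance are
# illustrative assumptions, not a prescribed standard.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  outcome_col: str) -> float:
    """Gap between the highest and lowest positive-outcome rates across
    groups; 0.0 would mean perfectly equal treatment."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    # Toy predictions from a hypothetical loan-approval model.
    predictions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0],
    })
    gap = demographic_parity_difference(predictions, "group", "approved")
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > 0.1:  # illustrative tolerance
        print("Bias alert: review the training data and model for this attribute.")
```

Run periodically against fresh predictions, a check like this turns fairness from a principle into a measurable, monitorable quantity.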

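The Reliability point, in the same spirit, argues for built-in explainability. As one simple, model-agnostic way of showing which inputs drove a model's outcomes, the sketch below uses scikit-learn's permutation importance on a synthetic classifier; the dataset and feature names are assumptions for illustration, and interactive visualizations would sit on top of numbers like these.

```python
# Minimal sketch of explainability via permutation importance: how much does
# shuffling each feature degrade the model's accuracy? The synthetic dataset
# and feature names are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           random_state=0)
feature_names = ["income", "tenure", "age", "region_code"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Higher importance means the model leaned on that feature more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} importance: {score:.3f}")
```

Presented alongside a prediction, even a simple ranking like this makes a counter-intuitive outcome easier for a human to accept.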

Conclusion

The future of the human race is tightly entwined with how artificial intelligence shapes up. We are at the cusp of a new age, an age where AI will augment human life and become an inextricable, intrinsic, pervasive part of it. The difference between AI being a savior and AI being the last nail in the coffin of human civilization is how our generation nips the negatives in the bud. Any guidelines, any guardrails, any frameworks, any legislation on AI have to be brought in now, before all of this spirals out of control and reining it back in becomes an exercise in futility.

Author Details

Pratyush Anand

Pratyush is a Salesforce Techno-Solution Architect with the Enterprise Cloud Application Services (Salesforce) Unit at Infosys. He has more than 13 years of experience in designing and developing Salesforce applications, with a keen eye for innovation, optimization and efficiency within the platform as well as in business processes. He has worked in multiple domains, from banking and insurance to manufacturing and telecommunications. He also has a knack for ideating, designing, developing and documenting reusable assets and bots to aid in the development and implementation of projects. Pratyush has an overwhelming love for the literary arts, and is an author, blogger and poet when not designing disruptive tech solutions.
