Responsible AI – The “much-needed” layer between leap and adoption!

We are living in a world of instant information flow, synthesis, analysis and amplification, enabled by all-pervasive social platforms with high levels of contextualization and sophisticated marketing strategies. Artificial Intelligence has long been thought of as the transformational approach to shaping futuristic needs, or perhaps as the future itself. Over the years, researchers have made steady strides in machine learning, grappling with language models and their complexities. There have been multiple “AI winter” cycles, incubation periods that eventually yielded research advancements and breakthrough outcomes. With large language models (LLMs) and the rapid strides made by ChatGPT (a chatbot from OpenAI built on its Generative Pre-trained Transformer family of large language models), the focus on Generative AI as the “future” has been unprecedented, driving huge interest across domains. Everyone talks about ChatGPT, and user adoption skyrocketed, hitting the 100-million mark within a span of three months!

OpenAI was established in 2015 with the objective of developing safe and beneficial artificial intelligence. This focus from a leading research group underlines the need for a strategic approach to adopting much-needed AI trends in a responsible and ethical manner. The larger questions are: why and how?

Need to be Authentic and Accurate: Once again, let’s talk about the social channels that manage information and the cyber capabilities that enable digital transformation. Banking users are constantly reminded to safeguard sensitive information from potential scamsters. Such concerns are a minuscule representation of today’s problems even before the AI dimension is applied. Now imagine a scenario where these information feeds and scams are powered by “AI bots” and “filters” that behave like humans and have access to personal information and user response patterns.

Learning by itself: There are scenarios and learnings that make humans believe a lie to be a fact and take decisions accordingly. There is a constant buzz about the need for fact checkers to analyze online content and call out fake news; even then, the authenticity of the fact checkers themselves remains an unaddressed problem. How dangerous could it be if an AI-based solution believes a lie, learns from it, takes actions and identifies itself as a fact checker as well!

Determining interactions: AI solutions have the potential to shape the behavior patterns of users through interactions and guidance. For example, they can help form a 10-year-old’s opinion of the socio-cultural world around them and can also influence the spending habits of a salaried professional. Many of these carry social, cultural, economic, legal and defense implications and, beyond that, a major impact on how humans interact, respond and identify with each other!

Biased View?: If AI solutions are trained on data that reflects a biased approach and lacks fairness, the solution’s responses will carry the same bias, and its continuous learning will keep reinforcing it. For example, a model trained on partial data covering only a certain section of society or an organization, yet made to believe that this is representative of the entire population, will continue to reflect that bias in every response and will evolve in the same manner. Like humans, models also need to be trained with data bias and inaccuracies removed. These judgements are value based and will differ across geographies and organizations depending on the context.
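
To make this concrete, here is a minimal, purely illustrative sketch in Python (the records and the “region” attribute are hypothetical) of how a team might check whether every section of a population is actually represented in a training set before a model learns from it:

    # Illustrative only: flag groups that are under-represented in a training set.
    # The records and the "region" attribute are hypothetical examples.
    from collections import Counter

    training_records = [
        {"region": "urban", "label": 1},
        {"region": "urban", "label": 0},
        {"region": "urban", "label": 1},
        {"region": "rural", "label": 0},
    ]

    def representation_report(records, attribute):
        # Return each group's share of the dataset for the given attribute.
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        return {group: n / total for group, n in counts.items()}

    for group, share in representation_report(training_records, "region").items():
        flag = "  <- under-represented" if share < 0.30 else ""
        print(f"{group}: {share:.0%}{flag}")

A skew flagged here would prompt the team to rebalance or broaden the data before training, rather than after the bias has already percolated into responses.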

Transparency and Fairness: We expect fairness and transparency in every interaction. Human errors leading to a lack of fairness need to be course-corrected. However, an AI model that percolates a lack of fairness and transparency through the ecosystem will be a major challenge to deal with. Hence it is critical for AI solutions to let end users determine which data is being used, provide insights into how that data is handled, and include details about the models, the AI engine, data security and management aspects.

It is important to follow AI trends and adopt them in a business-centric pattern while leveraging a human-centric approach. However, there needs to be a strong foundational framework that keeps implementation, adoption and sustenance responsible and ethical – safe, secure, reliable, resilient, scalable, fair, transparent, accountable, explainable and interpretable.

Organizations need to focus on creating foundational frameworks that include the following key tenets to enable critical outcomes –

  • Establish a wide and deep governance framework that addresses key strategic aspects, including technology determination, model adoption, data identification, modelling, training, security, inclusivity, fairness, scalability, performance, control levers and parameters.
  • Build a focused R&D ecosystem that analyzes solution considerations through multiple iterations of analysis, scoping, model-based understanding, outcomes and learning attributes to make informed choices. Reducing hallucinations will be a critical factor in response synthesis.
  • Address privacy concerns with user approvals on data usage, transparency in information management, information about the analytical models, and solution designs that include PII redaction and anonymization to enhance data privacy and prevent misuse (a minimal redaction sketch follows this list).
  • Verify the authenticity of data to ensure that models don’t get trained on inaccurate or biased data, which could be counterproductive.
  • Keep data understandable to humans and provide in-depth visibility to end users to enhance the credibility of the system. Give users the basis and rationale behind the response to a particular query, problem statement or business need.
  • Enhance the variety of data sets and patterns to ensure robustness of the model and to avoid learning bias.
  • Use approved architectures and reference models to govern design and implementation that is framework-based, modularized and well structured.
  • Perform rigorous system testing, test automation and additional tests, including bias testing, to ensure that the system responds with the required degree of responsiveness within the control limits (a simple bias-check sketch also follows this list).
  • Run continuous improvement cycles through proactive monitoring, preventive maintenance, metrics tracking and reporting.
  • Improve existing standards, frameworks, processes and tools through continuous learning, improvement and innovation. These need to keep pace with the advancement of AI solutions to govern their growth and adoption.
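
As a companion to the privacy point above, here is a minimal sketch of the PII redaction idea. It uses only Python’s standard library; the patterns cover just emails and simple phone numbers and are illustrative, not a substitute for a vetted PII-detection capability:

    # Illustrative only: replace detected PII with placeholder tokens before
    # text is logged, shared or used for training. Covers emails and simple
    # phone numbers; real systems need far broader, vetted detection.
    import re

    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s()-]{8,}\d"),
    }

    def redact_pii(text):
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(redact_pii("Reach me at jane.doe@example.com or +1 415 555 0100."))
    # -> Reach me at [EMAIL] or [PHONE].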
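
And as a companion to the testing point, here is one possible bias check – comparing the rate of favorable outcomes across groups (a demographic-parity style test). The predictions, group labels and the 10% threshold are hypothetical; a real test suite would cover more metrics and far larger samples:

    # Illustrative only: a demographic-parity style check on model outputs.
    # predictions: 1 = favorable outcome; groups: hypothetical segment labels.
    def positive_rate(predictions, groups, target_group):
        selected = [p for p, g in zip(predictions, groups) if g == target_group]
        return sum(selected) / len(selected)

    predictions = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    rate_a = positive_rate(predictions, groups, "A")
    rate_b = positive_rate(predictions, groups, "B")
    gap = abs(rate_a - rate_b)

    print(f"Group A: {rate_a:.0%}, Group B: {rate_b:.0%}, gap: {gap:.0%}")
    if gap > 0.10:  # illustrative control limit
        print("Bias check failed: outcome gap exceeds the control limit.")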

Recently, Hollywood was impacted by strikes called by the Writers Guild of America and the Screen Actors Guild, with the application of AI in the film industry being one of the key reasons. Similarly, there is a growing concern among people about AI replacing them on many counts, primarily in jobs and employment. AI (and Gen AI) is a reality and a trend setter. Given the scale at which the technology is evolving, it is essential to democratize AI, allay the fears around its adoption, and adopt responsible practices to leverage the power of the ecosystem for the common good and bring these solutions closer to humans!
