Executive Summary
A global consumer healthcare company partnered with Infosys Consulting to implement an enterprise-wide Responsible Artificial Intelligence (RAI) framework in response to the growing complexity and risks of Artificial Intelligence (AI) adoption across business functions. Over a 3-year engagement, Infosys developed an AI risk management approach aligned with global standards and regulations such as ISO 42001 and the EU AI Act, completing 100+ AI risk assessments and embedding governance across the enterprise. The initiative included automated workflows, third-party accountability, and persona-based training to ensure fairness, transparency, and compliance.
Article:
A 2023 Gartner study found that over 50% of AI projects in the pharmaceutical and healthcare sectors face significant setbacks due to a lack of governance. This client was no exception. With AI rapidly expanding across functions such as R&D, supply chain, marketing, legal, and HR, the company urgently needed a structured, enterprise-wide approach to RAI enablement.
The client, a global consumer healthcare company with a legacy of trusted brands in oral care, pain relief, respiratory, and digestive health, was using AI to transform its operations. With AI becoming increasingly integral to how products are developed, marketed, and delivered, the company recognized the critical need to adopt a RAI framework that ensured fairness, transparency, accountability, and compliance.
Charting the Course: Implementing a Global RAI Framework
To address this, the company partnered with Infosys Consulting on a 3-year journey to structure, implement, and scale a RAI framework that met global standards and was deeply integrated into business operations.
Engineered for Trust: A Client-Centric RAI Framework
Infosys developed and deployed an end-to-end AI risk management framework aligned with ISO 42001, NIST AI RMF, and the EU AI Act, embedding core principles such as explainability, traceability, and human oversight.
An end-to-end analysis of existing and incoming AI applications (in-house and third-party) was conducted to identify and advise on technical and legal risks. Responsible AI usage was further fostered through persona-based training and organization-wide awareness building.
Automated governance was enabled with OneTrust, while risk management processes were streamlined through automated workflows, driving scalable and consistent RAI assessments across the enterprise.
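To make the idea of an automated assessment workflow concrete, the sketch below shows how an AI use case intake record might be triaged into risk tiers before a full assessment is scheduled. This is a minimal, hypothetical Python example: the record fields, tier names, and triage rules are illustrative assumptions, not the OneTrust configuration or the risk methodology used on the engagement.

```python
from dataclasses import dataclass

# Hypothetical intake record for an AI use case; the field names are
# illustrative only, not the schema used on the engagement or in OneTrust.
@dataclass
class AIUseCase:
    name: str
    business_function: str          # e.g. "R&D", "Marketing", "HR"
    third_party: bool               # supplied by a vendor rather than built in-house
    processes_personal_data: bool
    affects_individuals: bool       # e.g. hiring decisions, health recommendations
    human_oversight: bool           # a human reviews outputs before they take effect

def triage_risk_tier(uc: AIUseCase) -> str:
    """Assign an illustrative risk tier that decides how deep the assessment goes.

    The thresholds below are placeholders for demonstration only; they are not
    a legal reading of the EU AI Act or of ISO 42001 control requirements.
    """
    if uc.affects_individuals and not uc.human_oversight:
        return "high"        # route to full RAI assessment plus legal review
    if uc.processes_personal_data or uc.third_party:
        return "limited"     # route to standard assessment plus privacy/TPRM checks
    return "minimal"         # record in the AI inventory, periodic re-screening only

if __name__ == "__main__":
    demo = AIUseCase(
        name="CV screening assistant",
        business_function="HR",
        third_party=True,
        processes_personal_data=True,
        affects_individuals=True,
        human_oversight=False,
    )
    print(demo.name, "->", triage_risk_tier(demo))   # CV screening assistant -> high
```

In practice, triage logic of this kind would live inside the governance platform's intake workflow rather than in standalone code; the sketch only illustrates how consistent, rule-based screening keeps assessments scalable.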
100+ AI risk assessments were completed across eight business functions (R&D, Procurement, Legal, Marketing, Sales, Supply Chain, HR, and IT), covering both in-house and third-party AI systems.
Infosys partnered with Procurement to embed AI-specific clauses into vendor contracts, reinforce third-party accountability, and align supplier systems with internal standards.
Driving Impact: RAI’s Strategic Horizons for 2025
With foundational systems in place, the next phase focuses on advancing capability, integration, and institutionalization:
- Tooling Enhancements: Implement OneTrust RAI Questionnaires and Intelligent Control Recommendations to support contextual, risk-based decision-making.
- Ecosystem Integration: Connect OneTrust with Archer, CMDB, Privacy, and Third-Party Risk Management (TPRM) modules for end-to-end oversight.
- Process Redesign: Redesign TPRM and Software Asset Management workflows to include AI-specific risk screening at the onboarding stage (a simplified sketch of such screening follows this list).
- Community & Governance Forums: Launch RAI Advisory Forums and Drop-In Forums to foster ongoing governance.
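As an illustration of what AI-specific risk screening at onboarding could look like, the following Python sketch encodes a few example checks for a third-party tool. The record fields, check logic, and follow-up actions are assumptions made for demonstration; they do not represent the client's actual TPRM or Software Asset Management configuration.

```python
from dataclasses import dataclass

# Hypothetical onboarding record for a third-party tool; the fields and checks
# are illustrative assumptions, not the client's TPRM or Software Asset
# Management schema.
@dataclass
class OnboardingRequest:
    vendor: str
    product: str
    embeds_ai: bool                  # vendor declares AI/ML functionality
    model_documentation: bool        # model cards / intended-use docs provided
    contract_has_ai_clauses: bool    # AI-specific obligations in the contract
    data_shared_with_vendor: bool    # client data leaves the enterprise boundary

def ai_screening_flags(req: OnboardingRequest) -> list[str]:
    """Return follow-up actions required before onboarding proceeds (illustrative only)."""
    flags: list[str] = []
    if not req.embeds_ai:
        return flags                                  # no AI-specific screening needed
    if not req.model_documentation:
        flags.append("request model documentation / intended-use statement")
    if not req.contract_has_ai_clauses:
        flags.append("route to Procurement to add AI-specific contract clauses")
    if req.data_shared_with_vendor:
        flags.append("trigger privacy and third-party risk (TPRM) review")
    flags.append("register the system in the enterprise AI inventory")
    return flags

if __name__ == "__main__":
    req = OnboardingRequest(
        vendor="ExampleVendor",
        product="Demand-forecasting add-on",
        embeds_ai=True,
        model_documentation=False,
        contract_has_ai_clauses=False,
        data_shared_with_vendor=True,
    )
    for action in ai_screening_flags(req):
        print("-", action)
```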
A responsible AI framework is not just a compliance mandate; it’s a strategic differentiator. By embedding ethical, transparent, and scalable AI practices across the enterprise, the company is not just enabling innovation; it is shaping it responsibly.