Generative AI has created huge excitement in the industry, triggering a race to experiment with and adopt its new capabilities. The technology has the potential to disrupt every aspect of our lives and is seen by many as the next industrial revolution. The pace of development in this space is unprecedented: general-purpose foundation models (e.g., GPT-3.5/4, BERT, LaMDA), proprietary and open-source LLMs (e.g., Dolly, Flamingo), OpenAI's ChatGPT, Google's Bard, and the AI services offered by hyperscaler cloud providers.
Like any emerging technology, AI brings its own set of challenges. These technologies are still at a nascent stage and far from full maturity, with many capabilities entering testing or production before proper governance and regulation can be established around them. Sam Altman, CEO of OpenAI, urged the need for AI regulation at a US Senate hearing: "I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening."
There is thus an urgent need for AI regulations and proper guardrails around AI usage across the whole ecosystem, from producers to consumers, third-party providers, and end users. This article outlines, at a high level, the initial regulatory frameworks now being developed; as this is a complex and evolving topic, the details are subject to change once the frameworks are finalized.
Current State of AI Regulations
Globally, there is no unified effort to regulate AI; current attempts are fragmented and country-specific. The European Union's Artificial Intelligence Act (EU AI Act) is the world's first concrete initiative for regulating AI. Under the Act, the regulatory obligations applied to an AI system increase in proportion to the potential threat it poses to privacy and safety.
In the United States, there is currently no comprehensive federal legislation dedicated solely to AI regulation; instead, individual state regulations and existing laws touch on certain aspects of AI, such as privacy, security, and anti-discrimination. In a landmark initiative, President Biden issued an Executive Order on October 30, 2023, stating the urgent need for new standards for managing the risks of artificial intelligence (AI).
Other AI regulatory initiatives exist around the world. China's approach is characterized by a dual emphasis on promoting AI innovation while ensuring state control over the technology, while Australia has no enforceable AI-specific regulation but applies various existing laws to address some of the risks.
The table below provides a brief, summarized comparison of the EU AI Act and the AI Bill of Rights (US) for quick reference.
| Factors Considered | EU AI Act | AI Bill of Rights (US) |
| --- | --- | --- |
| Release Date | Draft version approved by the EU Parliament on June 14, 2023; earliest adoption, following negotiations, possibly in early 2025 | Introduced in October 2022 |
| Actors | Providers, deployers, distributors, and product manufacturers | Companies involved in the development, deployment, and management of AI technologies |
| Approach Taken | Top-down approach introducing a legislative regulatory process to be followed; the framework focuses on use cases and their risks rather than on regulating AI as a whole | Bottom-up, decentralized, non-regulatory approach focusing on specific applications of AI; applies to automated systems that have the potential to meaningfully impact the American public's rights, opportunities, or access to critical resources or services |
| Goal | Ensure AI systems used in the EU follow practices that are safe, ethical, traceable, and unbiased, and are overseen by people rather than by automation, to prevent harmful outcomes | Ensure fairness, inclusivity, and accountability in AI systems, and advocate for users' rights to know when they are interacting with AI systems and how their personal information is being used |
| Implementation Approach | Provides the following step-wise approach: 1) classify the current software catalog across the enterprise to identify potential threats; 2) identify the risk classification of models, distinguishing risk categories (unacceptable, high, limited) based on use cases; 3) begin early adoption across the organization to comply with the AI Act | Provides a blueprint policy guide built on five key principles: 1) safe and effective systems; 2) algorithmic discrimination protections; 3) data privacy; 4) notice and explanation; 5) human alternatives, consideration, and fallback |
| Infringement Penalties | Depending on the severity of the violation, €10 million to €40 million or 2% to 7% of global annual turnover | None; it lays out a set of voluntary commitments for companies |
Generative AI has immense potential to improve the productivity and efficiency of employees and customers across industries. Given the rapid pace of change in this space, and the current phase of experimentation and early adoption, there is an urgent need for common regulatory standards. Because there has been no consolidated global effort to regulate AI, and current efforts remain country-specific or, in some cases, state/region-specific, enterprises and other organizations should plan to put their own AI governance in place until formal AI regulations mature. This readiness will help safeguard organizations against data privacy issues, copyright infringement, penalties, and lawsuits arising from the use of AI.