India's neutral legal status is essential to the evolution of AI frameworks.

Industry executives predicted that India, as a neutral jurisdiction, would play a crucial role in the development of artificial intelligence frameworks.

They emphasized that building AI development and learning ecosystems that effectively curb the propagation of biases is essential to establishing responsible AI frameworks.

Earlier this month, Nasscom, the IT industry body, released guidelines on the responsible use of AI. For generative AI solutions, the guidelines recommend exercising caution and conducting risk assessments to identify potential harm across the entire lifecycle of a solution. They also call for public disclosure of data and algorithm sources, unless developers can show that such disclosure would jeopardize public safety.
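As a hypothetical illustration of what such a disclosure could look like in practice, the sketch below records data and algorithm sources alongside lifecycle harm assessments. The structure and field names are assumptions made for illustration, not drawn from the Nasscom text.

```python
# A minimal sketch of a disclosure record documenting data and
# algorithm sources plus a lifecycle risk assessment. All field
# names here are hypothetical, not taken from the guidelines.
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    model_name: str
    data_sources: list[str]
    base_algorithm: str
    assessed_harms: dict[str, str] = field(default_factory=dict)
    disclosure_withheld: bool = False   # only where public safety is at risk
    withholding_rationale: str = ""

card = ModelDisclosure(
    model_name="support-chat-v1",
    data_sources=["licensed customer-support transcripts", "public FAQs"],
    base_algorithm="instruction-tuned transformer language model",
    assessed_harms={
        "design": "bias review of training data",
        "deployment": "misinformation red-teaming before release",
    },
)
print(card)
```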

Tejal Patil, General Counsel at Wipro, emphasized the importance of involving people with strong backgrounds in the humanities and social sciences, such as sociology and philosophy, in the development of AI solutions, particularly those with natural language interfaces. Their insights and expertise, she said, are essential to a holistic approach to AI development that accounts for societal and ethical considerations.

“…our traditional methods of software development focus on siloed skills,” said Patil. “However, the role of the developer will expand exponentially to include checking for biases, planning, testing and governance of these solutions and there will be an increasing need to bring additional expertise.”

While different geographies have adopted different approaches to regulating AI, India's role as a neutral jurisdiction will be critical in the evolution of AI frameworks, she said.

The Indian government has so far indicated that it will approach AI regulation primarily from the perspective of mitigating harm to users.

Hasit Trivedi, CTO for digital technologies and global AI initiatives at Tech Mahindra, pointed to a skills gap in software development ecosystems around the legal implications of intellectual property (IP) infringement. The concern has drawn global attention as generative AI platforms increasingly produce visual, software, and textual content without acknowledging or attributing the original creators of the underlying material.

“The cross-functional teams of technology, legal experts, marketing, human resources among others have to come together to ensure that enterprises have the right strategy to deal with such a disruptive technology,” he added.

The Nasscom guidelines identify several significant harms and malpractices, including misinformation, IP infringement, data privacy violations, the propagation of biases, large-scale disruptions to life and livelihoods, environmental degradation, and malicious cyberattacks, and they aim to mitigate these challenges in order to foster responsible and ethical AI practices.

Ashish Aggarwal, Vice President and Head of Public Policy at Nasscom, said that while biases in AI systems may not be intentional, they pose a significant threat to the effectiveness of those systems, and that addressing and mitigating them is essential to the fairness, reliability, and ethical use of AI technologies.

“We believe that bias is a serious problem whether they are introduced to AI systems knowingly or unknowingly,” said Aggarwal. “One of the important ways of addressing this issue during the development of AI solutions is to ensure that businesses work with diverse and inclusive teams that bring in multiple viewpoints and help to call out such biases,” he added.
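As a minimal illustration of the kind of automated check a development team might run alongside diverse human review, the sketch below computes a simple demographic parity gap, the difference in positive-outcome rates across groups, for a model's decisions. The names and threshold are hypothetical, and real bias audits use a much broader battery of metrics.

```python
# A minimal bias check: largest gap in positive-outcome rates across
# groups (demographic parity). Records and threshold are hypothetical.
from collections import defaultdict

def demographic_parity_gap(records):
    """records: iterable of (group, outcome) pairs, outcome 1 or 0.
    Returns (largest rate gap across groups, per-group rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy decisions: group A is approved twice as often as group B.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(decisions)
print(rates)  # per-group positive rates
assert gap < 0.5, "Bias gap above threshold; route for human review"
```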

The Nasscom guidelines also recommend incorporating explainability into the outputs generated by generative AI algorithms, to provide transparency into how those algorithms arrive at their results. They further stress the need for effective grievance redressal mechanisms to address mishaps that arise during the development or use of such solutions, a requirement that is crucial for accountability.
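The guidelines do not prescribe a particular explainability technique. As one common, simple approach, the sketch below uses leave-one-out (occlusion) attribution to estimate how much each input token contributes to a model's output score. The toy score function stands in for a real model and is purely hypothetical.

```python
# Occlusion-style explainability: attribute a model's score to each
# input token by removing the token and measuring the score change.
# `score_fn` stands in for a real model score (e.g., log-probability
# of a generated answer) and is hypothetical.

def occlusion_attributions(tokens, score_fn):
    """Attribute the model score to each token via leave-one-out ablation."""
    base = score_fn(tokens)
    attributions = []
    for i in range(len(tokens)):
        ablated = tokens[:i] + tokens[i + 1:]
        attributions.append((tokens[i], base - score_fn(ablated)))
    return attributions

# Toy score: counts "positive" words, standing in for a real model.
POSITIVE = {"good", "great", "reliable"}
score = lambda toks: sum(t in POSITIVE for t in toks)

for token, weight in occlusion_attributions(
        ["the", "service", "was", "great", "and", "reliable"], score):
    print(f"{token:>10s}: {weight:+d}")
```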
