Interview | Can AGI ever be limited to a threshold?

I'm thrilled to share that I've just completed my first interview discussing my book "Artificial Intelligence Ethics and International Law (2nd edition)."

For those interested in exploring this topic further, the book is available here:

In recent times, the discourse on artificial intelligence ethics, especially the apprehensions surrounding AI usage, has intensified. This surge in concern is partly attributable to the widespread exposure to technologies like ChatGPT.

In India, I've observed a tendency in the technology law, policy, and AI innovation conversations to oscillate between undue hype and fear. Instead, our focus should be on enlightening ourselves and others about the positive and transformative applications of AI, particularly in the realms of government, business, and consumer sectors.

The dialogue often veers towards extremes: some view AI as an impending doom, while others consider it a cure-all, expecting breakthroughs like Artificial General Intelligence (AGI) within a decade. Yet, the reality of AGI, its definition, and its limits remain elusive and open-ended.

I delved into these complex issues with Kushal Mehra on the renowned Carvaka Podcast on YouTube. For an in-depth discussion on this topic, you can watch our conversation here:

Looking forward to continuing these meaningful conversations with you all.

Interview on India’s AI Landscape with The Indic Explorer

I had a fantastic exchange on India's #AI landscape - the good, the bad, and the unprecedented. In this explainer of an interview with The Indic Explorer, you will find a wholesome explanation of how India can do better on AI without hyping and imitating other big tech firms, and how Gen AI products and services need to survive without a sense of hype. Watch the complete discussion here:

Lastly, an Op-Ed on AI Governance for the Indo-Pacific Circle

Folks, I'm glad to share a write-up I contributed to the Indo-Pacific Circle, offering an Indo-Pacific perspective on AI safety.
