Will earth be land of war among ‘gods’, asks UGC ex-VC in ‘Genome to Om’
By: Manoj Anand
Update: 2024-10-20 08:11 GMT
Guwahati: Former vice-chairman of the University Grants Commission (UGC) Bhushan Patwardhan has warned that “the concept of time travel could allow unprecedented exploration of history and the future”, and asked whether the earth will become a land of war among ‘gods’.
He sought a coordinated global response to scientific explorations, including artificial intelligence, to stop humanity from careening into an unregulated and uncontrolled future.
“The concept of time travel is, so far, in the unimagined future of innovations. If, however, this is achieved, it could allow unprecedented exploration of history and the future. For human beings to have the ability to alter history or impact future events raises the most unimaginable concerns: will the earth be the land of war among ‘gods’. Can future scientific innovations and technological advances be woven into a world of humanity,” asked Patwardhan in his new book – Genome to Om: Evolving Journey of Modern Science to Meta-science.
Patwardhan, who is also an advisor to the WHO’s Global Traditional Medicine Centre, has co-authored the book with Indu Ramchandani.
The book claims that “we are possibly standing in a world where thoughts control technology” and cautions that the “science-fiction of today is taking us into an imminent future”.
Delving into the future challenges posed by artificial intelligence (AI), Patwardhan and Ramchandani argued that “the concept of machines having consciousness in itself raises important philosophical and ethical questions about the rights and treatment. If they were to possess consciousness, should they not be entitled to the same rights and considerations as humans?”
The authors warned of extreme social and economic disruption if intelligent machines begin replacing the human workforce within the socio-economic framework. They also warned that unregulated growth of intelligent machines could pose grave security risks without geographical limitations.
“If conscious machines were to gain true autonomy, we could lose control over them. They may act in unpredictable manners or may behave contrary to human interests, potentially risking human safety and well-being,” added the authors, while underlining that “presently, the development of conscious machines is largely speculative and theoretical”.
Yet, they warned, these concerns are not just theoretical. “Recent developments in AI capabilities, such as GANs (generative adversarial networks) and deep learning, demonstrate the technology’s rapid advance towards increasingly complex and autonomous functionalities,” the book stated.
The authors quoted the eminent physicist Stephen Hawking as saying that “the development of full artificial intelligence could spell the end of the human race”, and reiterated his concern that “AI could potentially evolve beyond human control”.
On more immediate concerns, the authors also quoted Sam Altman, former president of Y Combinator, as saying: “AI has the potential to disrupt traditional industries and job markets, leading to widespread job displacement and economic upheaval. Large segments of the population could possibly be left behind with AI replacing repetitive or routine tasks”.
The book also quoted Sam Harris, the neuroscientist, warning that “with the advance of the AI, there will evolve a machine superintelligent with powers that far exceed those of the human mind. This is something that is not merely possible, but rather a matter of inevitability”.
Patwardhan sought a global response to meet the potential challenges posed by “intelligent machines”, warning that the risks also include the uncontrolled spread of viruses and tools of controlled warfare slipping out of human hands.
“The concerns extend to the realms of regulation and ethics, emphasising the need for a proactive and international approach to AI governance. There should be transparency in AI development, mechanisms for accountability, and including diverse stakeholders in decision-making processes to ensure that AI technologies are beneficial to all of humanity,” stressed the authors.
Detailing risks associated with the Mind-Brain Computer Interface (MBCI), commonly described as mind reading, the authors warned that the “possibility of security breaches is extremely dangerous, with hackers gaining access to our most private thoughts and memories”.
“The ability to interface with the brain also raises ethical concerns about manipulation and control. Malicious and deliberate acts could exploit human vulnerabilities with targeted propaganda or even influence emotions and behaviours,” added the authors in the book.
They also warned of a potential deepening of the social divide between those who can afford access to MBCI technology and those who cannot.