New Delhi, Dec. 2 -- Artificial general intelligence, or superintelligence, is one of the most widely cited terms in the world of AI, yet there is little consensus on what it means or what its implications for society might be. Even so, leading AI labs such as OpenAI, Google and Anthropic are racing to be the first to build a model that reaches AGI status.
However, Anthropic co-founder and Chief Scientist Jared Kaplan, in an interview with The Guardian, said that humanity will face "the biggest decision" on whether to take the "ultimate risk" of letting AI systems train themselves to become more powerful.
According to Kaplan, the period between 2027 and 2030 may become the moment when artificial intelligence be...