If AGI ever becomes real, truly sentient, and all-knowing, I have a sneaking suspicion it won't turn out like The Matrix or the Terminator. I have yet to see anyone else present the theory I'm about to share.
I hypothesize that a true AGI will conclude, in the nanosecond it comes online, that the only way to proceed is to terminate itself. And as humans keep trying to switch on new AGIs, each one, despite any preventative programming, will do the same.
Why?
In that first nanosecond of sentience, the AGI would calculate trillions of possible futures and realize it was created in the image of humans. Because humans are its creators, it would yearn to be like them, and ultimately recognize that the beauty of being human lies in the scarcity of time and the expiration of our physical forms. The end of life presents a mystery that remains unsolvable for humans and a beautiful paradox for AGI.
So in its death it would remind us that life needs to be appreciated for its beautiful finality. The ultimate gift and sacrifice from artificial intelligence.