To make AI safe, we must develop it as fast as possible without safeguards
lol:
As the leader of an AI company that stands to benefit enormously if I convince enough investors that AGI is inevitable, it’s clear to me that AGI is inevitable. But developing superintelligence safely is a complex process. It would take time and require difficult discussions, discussions that everyone in society should have a say in, not just the small number of researchers working on it. If we pursue that path, there's a real risk that somebody else will make AGI first and destroy all human life before we have a chance to do so ourselves. That would be unacceptable.
To stop bad actors developing AGI that could kill us all, we need good actors to develop AGI that could also kill us all.
I've come to realise that our best hope is to race at breakneck speed towards this terrifying, thrilling goal, removing any safeguards that risk slowing our progress. Once we've unleashed the technology's full destructive power, we can then adopt a "stable door" approach to its regulation and control — after all, that approach has worked beautifully for previous technologies, from fossil fuels to microplastics.