In March of this year, an open letter warning people of the risks posed by AI was signed by high-profile technologists including Elon Musk.
Several weeks later, Geoffrey Hinton left a research role at Google, citing a desire to bring this message to a wider audience.
At the core of the message is a desire to highlight the realities of developing large-scale AI models. These models have the potential not only to render many human jobs obsolete, but to develop superintelligence – described by Nick Bostrom as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills” – to the point where the AI’s goals become incompatible with human existence.
Speaking at Collision in Toronto, Canada, Geoffrey urged big tech companies to test for and prevent such doomsday scenarios.
Of all the risks that AI poses to humanity, the most overlooked is existential risk – the risk that AI could lead to human extinction – according to Geoffrey, who is often referred to as the Godfather of AI.
“Right now there’s 99 very smart people trying to make [AI] better and one very smart person trying to figure out how to stop it from taking over.”
– Geoffrey Hinton
Although other risks – such as bias, misinformation and job losses – are current concerns, Geoffrey warned that we may not be prepared for superintelligent machines motivated to take control of humanity in the near future.
“Before it’s smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might go wrong … and I think the government could maybe encourage the big companies developing it to put comparable resources [into this],” said Geoffrey.
With the possibility that AI could reach superintelligence, the computer scientist advised researchers and companies in this space to “do empirical work into how it goes wrong, how it tries to get control, whether it tries to get control”.
“Right now there’s 99 very smart people trying to make [AI] better and one very smart person trying to figure out how to stop it from taking over. And maybe you want it more balanced.”
And we’re not taking this seriously enough, apparently.
A recent Nature editorial claimed that “talk of artificial intelligence destroying humanity plays into the tech companies’ agenda, and hinders effective regulation of the societal harms AI is causing right now”.
Geoffrey isn’t convinced: “[The editorial] compared existential risks with actual risks, implying the existential risk wasn’t actual. I think it’s important that people understand it’s not just science fiction; it’s not just fearmongering. It is a real risk that we need to think about. And we need to figure out in advance how to deal with it.”
“The jobs that are going to survive AI for a long time are jobs where you have to be very adaptable and physically skilled, and plumbing’s that kind of a job.”
– Geoffrey Hinton
The Godfather of AI was keen to discuss the current and very real risk of AI and automation replacing certain human jobs.
When asked by the Atlantic CEO Nicholas Thompson what careers younger people should be planning for, given the great leap forward in AI over the past couple of years, Geoffrey gave a one-word answer: plumbing.
Why plumbing?
“I’ll give you a little story about being a carpenter. If you’re a carpenter, it’s fun making furniture, but it’s a complete dead loss because machines can make furniture. … What you’re good for [now] is repairing furniture, or fitting things into awkward spaces in old houses; making shelves in things that aren’t quite square,” explained Geoffrey.
“The jobs that are going to survive AI for a long time are jobs where you have to be very adaptable and physically skilled, and plumbing’s that kind of a job.”
So knowledge workers should retrain to prepare for jobs requiring manual dexterity and the ability to repair machines? Not so fast. The former Google researcher – who left the company on good terms and still has insights into its AI developments – said that multimodal AI would be the next leap forward.
Multimodal models are trained not just on language, but also on vision. With training datasets that include YouTube videos, these AIs could replicate much more than written language, learning how humans interact through voice, body language and more.
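The idea of fusing modalities can be illustrated with a toy sketch. This is not how Gemini or any real multimodal model works – every name, number and dimension below is hypothetical – but it shows the core notion: features from text and from an image are combined into one joint representation, so the output depends on both modalities together.

```python
# Toy illustration of multimodal fusion (hypothetical, not a real model).
# Text and image inputs are each reduced to small feature vectors;
# the vectors are concatenated and passed through one linear layer,
# so the result depends jointly on language and vision.

def linear(x, weights, bias):
    """One fully connected layer: y_i = sum_j(w_ij * x_j) + b_i."""
    return [sum(w * xj for w, xj in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def fuse(text_features, image_features, weights, bias):
    """Concatenate the two modalities, then project them jointly."""
    joint = text_features + image_features  # list concatenation
    return linear(joint, weights, bias)

# Illustrative fixed numbers, standing in for learned parameters.
text_vec = [1.0, 0.0, 0.5, 0.2]   # pretend text embedding
image_vec = [0.3, 0.9, 0.1, 0.4]  # pretend image embedding
W = [[0.1] * 8, [0.2] * 8]        # 2 outputs from 8 fused inputs
b = [0.0, 0.1]

print(fuse(text_vec, image_vec, W, b))
```

In a real system the feature extractors and weights are learned from vast amounts of paired data rather than fixed by hand, but the fusion step – a shared representation built from more than one input type – is the same in spirit.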
The Brain team at Google AI is working with Google DeepMind to create their own multimodal AI: Gemini.
When asked if there was anything that a sufficiently well-trained model could not do in the future, Geoffrey responded: “If the model is also trained on vision, and picking things up and so on, then no.”
“We’re just machines,” Geoffrey continued. “We’re wonderful, incredibly complicated machines, but we’re just a big neural net. And there’s no reason why an artificial neural net shouldn’t be able to do everything we can do.”
Main image of Geoffrey Hinton speaking on stage at Collision 2023: Ramsey Cardy/Web Summit (CC BY 2.0)