The genie escapes: Stanford copies the ChatGPT AI for less than $600

Before too long, you’ll be seeing them in robots

Six months ago, only researchers and boffins were following the development of large language models. But ChatGPT’s launch late last year sent a rocket up humanity’s backside: machines can now communicate in a way pretty much indistinguishable from humans. They can write text, and even programming code, across a dizzying array of subject areas in seconds, often to a very high standard. They’re improving at a meteoric rate, as the launch of GPT-4 illustrates, and they stand to fundamentally transform human society like few other technologies could, by potentially automating a range of job tasks – particularly among white-collar workers – that people might previously have thought impossible to automate.

Many other companies – notably Google, Apple, Meta, Baidu and Amazon, among others – are not too far behind, and their AIs will soon be flooding into the market, attached to every possible application and device. Language models are already in your search engine if you’re a Bing user, and they’ll be in the rest soon enough. They’ll be in your car, your phone, your TV, and waiting on the other end of the line any time you try to phone a company. Before too long, you’ll be seeing them in robots.

One small point of solace is that OpenAI and the rest of these large companies are aware of these machines’ insane potential for spam, misinformation, malware creation, targeted harassment and all sorts of other use cases most folk can agree would make the world a worse place. They spend months working to curtail these capabilities manually before launch. Even so, OpenAI CEO Sam Altman is one of many concerned that governments aren’t moving quickly enough to put fences around AIs in the name of the public good.

Read more: New Atlas