Just a few years ago, Silicon Valley was buzzing with predictions that a grand, all-knowing intelligence could soon emerge from Big Tech's bustling data centers. We might not notice it coming—algorithms could gain a skill here and a talent there, until foom! A world-changing AI would suddenly appear and start outwitting us meatbags left and right. The foom moment, as engineers at the research lab OpenAI were calling it, was what humanity had to prepare for: that tipping point when hyper-smart technology could go rogue.

Of course, it hasn't quite worked out that way. OpenAI's researchers, who made it their mission to steer advanced AI toward positive outcomes, have built one clever system after another. So far none of these algorithms have linked up into a decidedly more capable machine. But in late 2020, as Clive Thompson writes for WIRED, an intriguing hint of this prospect emerged. Developers noticed, with a touch of awe, that the lab's text-generating program, GPT-3, was more than just adept at composing sentences in English; it could also write functioning lines of code. What if this were the beginning of an ultra-smart AI—a linguistically fluent system that also understood basic logic and could crank out programs?

OpenAI's engineers doubled down on teaching their deep learning algorithms to code. The resulting system, called Copilot, isn't going to kick off the apocalypse. It's more of a programmer's little helper, offering to finish lines of code and speed up the mundane parts of the job. But as Thompson writes, "maybe this is what it will be like for people to get AI superpowers," one autocompleted thought at a time.

Sandra Upson | Features Editor