Age of Wonders


We live in an age of wonders. As I write this in mid-July, we’ve just received the first images from the James Webb Space Telescope—a miracle of engineering, now positioned a million miles from Earth. The JWST has an array of 18 hexagonal mirrors, aligned by computers to form the largest optical telescope in space. A five-layer sunshield protects its infrared sensors from the heat of the sun, moon, and Earth, as well as from the heat of the spacecraft itself. Those sensors allow us to see deeper into space than ever before—and as a result, deeper into time.

There are stunning pictures of individual nebulae and galaxies, but the most astounding image is the “deep field.” It covers an area of sky equivalent to a grain of sand held at arm’s length, yet reveals thousands of galaxies. Because light travels at a finite speed, the most distant galaxies in that image appear as they did billions of years ago, perhaps as little as a billion years after the Big Bang. The Hubble Space Telescope had previously imaged this same region of space, taking much longer to do so because of its less-sensitive image sensors. The difference between what the Hubble saw and what the JWST sees is striking; take a look at this A/B comparison: tinyurl.com/22cfcfmg. What’s more, the JWST’s launch and deployment involved 300 single-point failures, any one of which could have prevented the telescope from working at all.

Even more amazing, I don’t have to leave my seat to see these images. They can be viewed on a computer screen, minutes after they become publicly available, and people can read what the experts have to say about the meaning of each image. We have nearly all the world’s knowledge instantly available to us on our laptops and phones.

Another amazing thing is the amount of shared knowledge that is online. Need to fix a broken pipe? YouTube has the answer. Google an error message and you’ll see that you’re not the only one having that problem; if you’re lucky, you might find the solution as well. Of course, there are issues with the currency and correctness of the answers you find online, but BG—Before Google—it was much harder to get any sort of help with a problem.

People wonder what the world will look like in 25 years, assuming we haven’t turned into Gilead, the theocracy envisioned in Margaret Atwood’s dystopian tales. Consider this: Google, social media, and the smartphone did not exist 25 years ago. Today, for better or for worse, we live in a world dominated by these three technologies.

Any number of developments could drastically alter the next 25 years: fully autonomous vehicles; commercial fusion power; a generalized artificial intelligence; practical quantum computing; lab-grown meat; a brain-to-computer interface; or alien contact, friendly or not.

Of course, we might all be wiped out, too. Toby Ord, an Oxford professor of ethics, takes a shot at estimating the odds of “existential catastrophe” in his 2020 book, The Precipice: Existential Risk and the Future of Humanity. Long story short, he puts the total chance of humanity going over the edge within the next 100 years at about 1 in 6, where “over the edge” is broadly defined: total annihilation is the obvious possibility, but he also includes dystopian futures in which we are subjugated by entities more intelligent than ourselves (imagine an AI whose goals are poorly aligned with ours), and scenarios in which our population is so decimated (and our culture so thoroughly obliterated) that recovery to an advanced state is not possible.

Those are actually pretty good odds—more than an 80% chance of humanity making it through the next 100 years. Of course, global suffering is not the same as existential catastrophe, and I’m nearly certain we will see plenty of that, sad to say.

Meanwhile, a researcher at Google was suspended from work because he believes that a learning model for chatbots called LaMDA has become sentient. Having read some of the interactions cited by the researcher, Blake Lemoine, I can understand why he might think that: machine learning models have gotten very good at sounding human. But is “sounding human” the same as being sentient? In 1950, Alan Turing proposed the “imitation game”: if you couldn’t tell—based on interacting via screen and keyboard—which of two entities was a human and which was a machine, then the machine must be displaying human intelligence. I dearly wish that Turing were alive today to see these advances, and to hear what he might think of them.

But no, LaMDA is not sentient. In an age filled with engineering marvels that we take for granted, Mr. Lemoine’s claim is more wishful thinking than concrete fact. AI isn’t going to save us from ourselves.

 
