In the technology industry, 2023 was a year of change.
Buoyed by the success of last year’s breakout tech star, ChatGPT, Silicon Valley giants rushed to turn themselves into artificial intelligence companies, jamming generative AI features into their products and racing to build their own, more powerful AI models. They did all this while navigating an uncertain tech economy, with layoffs and pivots aplenty, and while trying to keep their old business models afloat.
Not all of it went well. There were misbehaving chatbots, crypto foibles and bank failures. And then in November, ChatGPT’s maker, OpenAI, descended into chaos (and quickly righted itself) during a failed boardroom coup, proving once and for all that there’s no such thing in tech as resting on your laurels.
Every December in my Good Tech Awards column, I try to counteract my own negativity bias by highlighting some lesser-known tech projects that I’ve found worthwhile. This year, as you can see, many of the awards have to do with artificial intelligence, but my aim is to avoid polarized debates about whether AI will destroy the world or save it and instead focus on the here and now. What is AI good for today? Whom does it help? What important breakthroughs are already being made with AI as a catalyst?
As always, my award criteria are vague and subjective, and no actual trophies or prizes are involved. These are small, personal blurbs of appreciation for some tech projects that I think have real, obvious value to humanity in 2023.
To Be My Eyes, Apple and researchers at the University of Texas at Austin, for improving accessibility through AI
Accessibility — the term for making tech products easier for people with disabilities to use — has been an underappreciated area of progress this year. Some recent advances in artificial intelligence — such as multimodal AI models that can interpret images and turn text into speech — have made it possible for tech companies to develop new features for users with disabilities. This is, in my opinion, an unequivocally good use of AI, and an area where people’s lives are already improving in significant ways.
I asked Steven Aquino, a freelance journalist specializing in accessible tech, to recommend his top accessibility achievements of 2023. He pointed to Be My Eyes, a company that makes technology for people with visual impairments. In 2023, Be My Eyes announced a feature known as Be My AI, powered by OpenAI’s technology, which allows blind and visually impaired people to point their smartphone camera at an object and have the object described to them in natural language.
Mr. Aquino also told me about Apple’s new Personal Voice feature, which arrived in iOS 17 and uses AI voice-cloning technology to create a synthetic version of a user’s voice. The feature is designed for people at risk of losing the ability to speak, such as those with a recent diagnosis of amyotrophic lateral sclerosis or another degenerative disease, and gives them a way to preserve their speaking voice so that their friends, relatives and loved ones can hear it long into the future.
I’ll posit another promising accessibility breakthrough: A research team at the University of Texas at Austin announced this year that it had used AI to develop a “noninvasive language decoder” that can translate thoughts into speech — essentially, reading people’s minds. This type of technology, which uses an AI language model to decode brain activity from fMRI scans, sounds like science fiction. But it could make it easier for people with speech loss or paralysis to communicate. And it doesn’t require putting an AI chip in your brain, which is an added bonus.
To Vertex Pharmaceuticals and CRISPR Therapeutics, for putting gene editing to good use
When CRISPR, the Nobel Prize-winning gene-editing tool, entered the public consciousness a decade ago, doomsayers predicted that it could lead to a dystopian world of gene-edited “designer babies” and nightmarish eugenics experiments. Instead, the technology has allowed scientists to make steady progress toward treating some devastating diseases.
In December, the Food and Drug Administration approved the first gene-editing therapy for humans — a treatment for sickle cell disease, called Exa-cel, jointly developed by Boston’s Vertex Pharmaceuticals and Switzerland’s CRISPR Therapeutics.
Exa-cel uses CRISPR to edit the gene responsible for sickle cell, a debilitating blood disorder that affects about 100,000 Americans, most of them Black. Although it is very expensive and difficult to administer, the treatment offers new hope to sickle cell patients who have access to it.
To Brent Seales, Nat Friedman and Daniel Gross, for using AI to unlock the secrets of antiquity
One of the most interesting interviews I did on my podcast this year was with Brent Seales, a professor at the University of Kentucky who has spent the past two decades trying to decipher a set of ancient papyrus manuscripts known as the Herculaneum Scrolls. The scrolls, belonging to a library owned by Julius Caesar’s father-in-law, were buried under a mountain of ash in 79 AD during the eruption of Mount Vesuvius. They are so thoroughly carbonized that they cannot be opened without breaking them.
Now, AI has made it possible to read these scrolls without opening them. And this year, Dr. Seales teamed up with two tech investors, Nat Friedman and Daniel Gross, to launch the Vesuvius Challenge — offering prizes of up to $1 million to anyone who successfully deciphers the scrolls.
The grand prize still hasn’t been won. But the competition has sparked strong interest from amateur history buffs, and this year a 21-year-old computer science student, Luke Farritor, won the $40,000 intermediate prize for deciphering a word — “purple” — from one of the scrolls. I love the idea of using AI to unlock wisdom from the ancient past, and I love the public-minded spirit of this competition.
To Waymo, for the slow road to self-driving
I spent a lot of time in 2023 touring San Francisco in self-driving cars. Robot taxis are a controversial technology — and they still have a lot of kinks to work out — but for the most part I buy the idea that self-driving cars will ultimately make our roads safer by replacing errant, distracted human drivers with always-alert AI drivers.
Cruise, one of two companies that have rolled out robot taxis in San Francisco, has come under fire in recent months, after one of its vehicles hit and dragged a woman who had been struck by another car. California regulators said the company misled them about the incident; Cruise pulled its cars off the streets, and its chief executive, Kyle Vogt, stepped down.
But not all self-driving cars are created equal, and this year I’m grateful for the relatively slow, methodical approach taken by Cruise’s competitor, Waymo.
Waymo, which was spun out of Google in 2016, has been logging miles on public roads for more than a decade, and it shows. The half-dozen rides I’ve taken in Waymo’s vehicles this year have felt safer and smoother than the Cruise rides I’ve taken. And Waymo’s safety data is compelling: According to a study the company conducted with Swiss Re, an insurance firm, covering 3.8 million self-driving miles, Waymo’s vehicles were less likely to cause property damage than human-driven vehicles, and resulted in zero bodily injury claims.
I’ll put my cards on the table: I like self-driving cars, and I think society will be better off once they become widespread. But they need to be safe, and Waymo’s slow and steady approach seems better suited to the task.
To the National Institute of Standards and Technology, for managing America’s AI transition
One of the more surprising — and, to my mind, exciting — technology trends of 2023 was seeing governments around the world get involved in the effort to understand and regulate AI.
But all that involvement takes work — and in the United States, much of that work has fallen to the National Institute of Standards and Technology, a small federal agency that used to be better known for things like making sure that clocks and scales are properly calibrated.
The Biden administration’s executive order on artificial intelligence, issued in October, designated NIST as one of the main federal agencies responsible for monitoring AI development and mitigating its risks. The order directs the agency to develop methods of testing AI systems for safety, create exercises to help AI companies identify potentially harmful uses of their products, and produce research and guidelines for watermarking AI-generated content, among other things.
NIST, which employs about 3,400 people and has an annual budget of $1.24 billion, is small compared with other federal agencies that do critical safety work. (For scale: The Department of Homeland Security has an annual budget of nearly $100 billion.) But it’s important for the government to build its own AI expertise if it is to effectively oversee the advances being made by private-sector AI labs, and we will need to invest more in the work being done by NIST and other agencies to give ourselves a fighting chance.
And on that note: Happy holidays, and see you next year!