TAG founder and director, Rabbi Nechemiah Gottlieb, decodes the dangers of artificial intelligence and ChatGPT
On April 30, 1993, a computer program named the World Wide Web was introduced to the public. It was an unfamiliar system and difficult to navigate; some found it useful for sharing information, but most viewed it as largely irrelevant.
That was three decades ago.
Today, that obscure technology has embedded itself into the very fabric of society.
It became progressively more sophisticated, with jolts of innovation over the years: social media, then tablets and smartphones. Eventually we reached a point where nothing could surprise us. The magical capabilities at our fingertips had reached their maximum; there could be little room for advancement.
Or so we thought.
We were wrong.
On November 30, 2022, a computer program called ChatGPT was introduced to the public. This time, its relevance and functionality were immediately perceivable; the instantaneous genius was astonishing.
But as amateurs and professionals alike delight in discovering ChatGPT’s capabilities, can it be that we are blindly wading into quicksand? Are we comprehending the magnitude of this latest development?
As the founder and director of the Technology Awareness Group (TAG), Rabbi Nechemiah Gottlieb has been at the forefront of the effort to guard our community from the dangers the Internet presents, and has guided thousands in navigating the challenges it poses.
From his unique vantage point, Rabbi Gottlieb regards the new developments with a healthy dose of caution, sharing what he knows, and especially what he doesn’t know, about ChatGPT.
There are multiple definitions for AI, but in layman’s terms, it’s “the computer’s ability to teach itself things that we didn’t teach it.”
Until now, a computer’s output was limited to applying specific rules programmed into it by a human being. It could not analyze things on its own. AI refers to types of programs that give the computer a way to learn things by itself and thus arrive at conclusions that are not directly derived from a specific set of instructions.
A common misconception is that AI is Internet based. This isn’t true. AI, on a basic level, has been used for over 50 years. It’s used to calculate payrolls, insurance rates, and credit scores.
What it does, simply put, is process the requested information while comparing it with millions of other similar pieces of information. Using that comparison, it can arrive at a conclusion with unparalleled accuracy. This is a function of AI that we’ve long been accustomed to.
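The compare-with-millions-of-similar-cases idea described above can be sketched in a few lines of code. This is only a toy illustration of the principle, not any real scoring system: the applicants, the field names, and the nearest-neighbor voting rule are all invented assumptions for the sake of the example.

```python
# Toy sketch: judge a new case by comparing it with similar past cases.
# All data and field names below are invented for illustration only.
past_cases = [
    {"income": 30, "debt": 20, "defaulted": True},
    {"income": 80, "debt": 10, "defaulted": False},
    {"income": 55, "debt": 15, "defaulted": False},
    {"income": 25, "debt": 30, "defaulted": True},
]

def distance(a, b):
    # How "similar" two applicants are (smaller = more alike).
    return abs(a["income"] - b["income"]) + abs(a["debt"] - b["debt"])

def predict_default(new_case, k=3):
    # Find the k most similar past cases and take a majority vote.
    nearest = sorted(past_cases, key=lambda c: distance(c, new_case))[:k]
    votes = sum(c["defaulted"] for c in nearest)
    return votes > k / 2

# An applicant resembling the past defaulters is flagged as risky.
print(predict_default({"income": 28, "debt": 25}))
```

No rule anywhere says “an income of 28 means default”; the conclusion emerges entirely from the comparison with prior examples, which is the distinction being drawn here between rule-following and learning.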
Over the years, tech companies have been developing AI’s capabilities, advancing its capacity to think creatively. In 2016, the Google-created AlphaGo played a game of Go (an ancient Chinese strategy board game) against champion Lee Sedol and defeated him with a move that observers described as “human.” The move could not possibly have been the result of preprogramming; rather, it was a creative response. That’s what AI programming does: It enables the computer to express an element of creativity.
Does every AI program operate the same way? Are they all capable of delivering whatever result you desire? What are their limitations?
As of now, AI can only perform within the parameters of the task it was programmed to perform. The AI bot that beat the Go champion was programmed to play Go. There was also an AI bot that beat the chess world champion, but that was programmed to play only chess.
There is a debate among computer scientists whether or not a concept called “artificial general intelligence” is possible. Hypothetically, artificial general intelligence would mean that the AI bot would be able to absorb information about anything, just like a human being. It would be able to become a chess master, a Go master, and everything else as well. It’s unclear if something like that will ever materialize.
If AI has been around for so many years, what has happened recently that has been setting off alarm bells across the world?
Computers are good at picking up on patterns and applying precise rules to huge quantities of data. What they’re not good at is common sense. They lack the ability to understand a circumstance and respond appropriately.
For example, if a human walks into a room and sees a cup of milk spilled over, and then a trail of milk with cat’s paw prints in it, and then a cat sitting on the porch outside, the human will understand what happened: A cat must have spilled the milk. A computer, however, will not be able to make that association.
Computers’ lack of common sense means they struggle greatly with what’s called “natural language processing,” which refers to the ability to understand what people really mean.
Take, for example, the words, “I’m not here.” That phrase can be interpreted literally, to mean, “I’m not physically present,” or it can be referring to a state of mind, meaning “I’m not mentally present.” Based on the context, it can mean so many different things.
Until now, computers had no way of making that distinction. Google may appear to understand your questions, but that isn’t really what’s happening. What Google does is match your words with relevant websites. It has enough sophistication to make certain associations — the word “egg” will get responses that include the word “omelet” — but that’s because its algorithm showed that people searching for “egg” will also search for “omelet.” Siri works the same way; it pulls its information from Google.
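The “egg” and “omelet” association described above can be sketched as simple co-occurrence counting. This is a toy illustration only; the search log is invented, and real search engines are vastly more complex. The point it demonstrates is that such a system ranks related words purely by how often they appear together, with no understanding of what either word means.

```python
# Toy sketch: "egg" suggests "omelet" purely from co-occurrence counts
# in past searches. The search log below is invented for illustration.
from collections import Counter

search_sessions = [
    ["egg", "omelet"],
    ["egg", "omelet", "frying pan"],
    ["egg", "breakfast"],
    ["omelet", "recipe"],
]

def related_terms(word):
    # Count which terms appear in the same sessions as `word`,
    # most frequent first.
    counts = Counter()
    for session in search_sessions:
        if word in session:
            counts.update(t for t in session if t != word)
    return [term for term, _ in counts.most_common()]

print(related_terms("egg"))  # "omelet" ranks first: it co-occurs most
```

Nothing in this code knows that eggs are used to make omelets; the ranking is pure statistics over past behavior, which is why such matching falls short of natural language understanding.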
What has happened within the past year is that a company called OpenAI has perfected a methodology that allows for AI to operate with natural language processing. The results astonished even its creators.
ChatGPT, which has been the focus of so much attention these past few months, is the program released to the public by OpenAI. Eerily enough, computers are now capable of behaving as if they understand what you want and responding as a human would.
What does that look like? As part of our testing of ChatGPT, we asked it this question: “What does football have in common with davening?” It responded instantaneously with the following:
- Discipline: Both football and davening require discipline. Football players need to adhere to strict training regimens, follow team strategies, and maintain physical and mental discipline during matches. Similarly, davening often involves adhering to specific rituals, prayers, and guidelines, requiring discipline and focus.
- Teamwork: Football is a team sport that relies on effective teamwork, communication, and coordination among players to achieve success. Similarly, in some religious communities, davening is a communal activity where individuals come together to pray, creating a sense of unity and shared purpose.
- Ritualistic elements: Both football and davening involve ritualistic elements. In football, there are specific rules, regulations, and traditions that are followed. Similarly, davening often involves following prescribed prayers, gestures, and rituals that have been passed down through religious traditions.
- Focus and concentration: Football players need to maintain focus and concentration on the game, paying attention to their teammates, opponents, and the overall strategy. Similarly, davening often requires individuals to concentrate and focus their minds on the prayers, connecting with the divine or their spiritual beliefs.
- Dedication and commitment: Both football and davening require dedication and commitment. Football players devote significant time and effort to training, improving their skills, and contributing to the success of the team. Similarly, davening often requires regular commitment, attending prayer services, and maintaining a consistent spiritual practice.
A response like this is way beyond the capacity of Google’s programming. It can’t be formulated by matching search words; it requires approximating the results of an actual human’s thought process.
The scope of the technology’s capabilities is mind-blowing. You can ask it for so many things, from a business idea to an advertising concept to code for an app.
So what’s the problem with all this? Why are AI’s upgraded capabilities so concerning?
For the world at large, the problem is primarily economic. Once AI can understand and communicate like a human, millions of workers can be easily replaced.
Another basic concern is the level of misinformation that can be dispensed. Through natural language processing, AI can understand all the nuances of a particular individual’s voice and mannerisms. You can then take that voice and make it say whatever you want. You can generate a video of that person that never existed. Recordings and videos will have no validity anymore. There are cybersecurity, military, espionage and privacy concerns as well.
But from a hashkafic perspective, a lot more has to be considered. We can analyze some of the specifics, but even before that, it’s never wise to look at a situation at surface level only and say, “What’s the problem with this?” Evaluating a threat requires a three-pronged analysis.
The first step is to look back. We’ve been in situations like this before. We’ve seen breakthrough innovations in the past. How did those events unfold?
The second step is the obvious conclusion that, at the very least, whatever happened in the past can happen again.
And finally, we need to bear in mind that in the past there have been results that took us all by surprise. Therefore, we must at least contemplate the possibility that now as well, the new development’s ramifications may take us by surprise.
Applying this analysis to AI, here is how we should be thinking.
We’ve seen monumental technological breakthroughs like this before. When the Internet was first created, it was meant to be used to transmit digital messages and was utilized primarily by government and academia. No one in their right mind would have looked at it and said, “This will soon take over the world.” But that is what happened.
But even after the Internet’s explosion, we still never contemplated the scope of its future impact. Before the iPhone, the Internet was popular but limited to a specific time and space. No one could have imagined that the Internet would become a constant companion, whose use would be required to perform so many basic tasks. But once smartphones became popular, that’s exactly what happened.
And we’re all well aware of how detrimental this has been to our community and our Yiddishkeit. So we’ve seen breakthroughs and we understand their potential — and their danger.
Now, step two of our three-part analysis is to consider that what happened in the past may likely happen again. So if the Internet has become indispensable to daily life, we must consider the possibility that AI will also become an integral part of our lives, along with all the ensuing spiritual repercussions.
And finally, we’ve been surprised before. Even after we were well familiar with the Internet, we’ve been continuously surprised by developing technologies. Therefore, we must assume that AI may also lead to many surprises. What will they be? I don’t know. No one does. But we must be prepared for that to happen.
So you’ve pointed out the alarming potential for AI to transform the world as we know it — just as the Internet did. But do we know why this is dangerous? Are we aware of any specific damage AI can cause on a spiritual level?
Again, we have no idea where this is headed. AI combined with natural language processing is barely in its infancy. But here are a few things that seem apparent even now.
ChatGPT has a function that allows you to call it on a telephone. The voice on the other end is called “Annie.” Currently, you need to register online to use this function, so that limits its reach slightly. But undoubtedly that impediment will eventually be removed, making AI accessible to everyone, even those who have no Internet connection.
Add the fact that when AI is accessed through the phone, it’s entirely unfilterable, and you reach the sobering conclusion that many of our current protections will prove irrelevant to AI.
Another danger is that AI interacts as a quasi-human. A human is defined by the power of speech — the human is a medaber. Until now, the Internet was inanimate, but AI takes the form of a medaber, which allows for the illusion of a real relationship.
Humans are wired to feel empathy for other human-like beings, even when they know they’re not real. If you watch a video and see someone in danger, your heart will start racing, even though you know it’s fake. This will happen even if the images are cartoons.
Once we have a technology that is forever present, forever necessary, and one that has human qualities, our ability to disconnect ourselves from it will be exponentially more difficult.
What should our approach be, then? Should we come out with a sweeping ban against ChatGPT and all similar forms of AI?
Before answering the question, allow me to clarify: If your rav, rosh yeshivah, or rebbe has taken that position, then that is binding. Everyone must work with the guidelines set by the leaders of their own community.
Some leaders within our community, primarily in chassidic groups, have come out with a total ban on AI, so that is the psak for their kehillos. The reason not all rabbanim have signed on to an all-out prohibition is that, once again, we do not know. We don’t yet know how AI will affect our daily lives and how realistic it is to expect a complete dissociation from it.
We have to expect surprises, and we don’t know if the tactics used to combat the Internet will work against AI.
Let’s use the following analogy. Imagine that a group of virologists gets together to formulate a preemptive strategy should another pandemic hit. Someone suggests mass production of masks.
Now, while that might be a good idea, it should only be considered five percent of the strategy, because we don’t know if the next pandemic will be caused by an airborne virus or not. What’s needed is a holistic approach that would be applicable and relevant regardless of the nature of the virus.
The same applies here. We don’t know enough about AI to accurately assess its dangers, so we need to come up with a preventative that can work regardless of the specific perils it can cause.
One thing that we know works is limiting our interaction with technology altogether. We need to get used to carving out spaces in our lives that are not dependent on any sort of technology. Training ourselves to get by without tech is one small way to curtail the effects of what may be waiting for us up ahead.
On a broader level, though, we need to keep a proper perspective. At the Nekadesh event last year (an evening for women to raise awareness about technology), Rav Elya Ber Wachtfogel made the following observation.
If Australia were to report that they are producing atomic energy for peaceful purposes, we would believe them. If Iran reports that they are producing atomic energy for peaceful purposes, we most certainly would not believe them.
In relating to the Internet, he said, we must bear this dynamic in mind. We need to understand that the Internet is not our friend! The amount of spiritual damage it has wreaked on our community is incalculable; it’s the “Iran” in a world of spiritual politics. When the Internet comes along and tells us it’s here for peaceful purposes, don’t believe it.
Now, we’re up against a different threat. On the surface, ChatGPT seems to be an innocent tool that can help us do research or write letters at a meteoric pace. Don’t believe it. It’s not our friend; it’s here to hurt us, and we must realize that before it’s too late.
A few years ago, Rav Mattisyahu Salomon spoke at an event for mechanchos, and his message was, “We must be suspicious of the Internet.” I thought about those words, and this is what I think he meant.
If someone is mugging you, you don’t need to be suspicious of him. You can be certain that he has nefarious intentions. But when someone smiles brightly at you, and you know he isn’t a particular friend of yours, that’s when you need to be suspicious. It’s not the overtly negative aspects of technology that we must be wary of, it’s the ostensibly positive ones.
In the case of AI, this directive is not just a hashkafah, it’s a preventative tool as well. Remember, a part of the problem with AI is its potential to feel like your friend. We have to temper that sense of camaraderie. By constantly being suspicious of the ultimate danger AI presents, we can stave off that destructive sense of attachment.
These solutions are practical in nature. Spiritual dangers require spiritual defenses. What should we be doing in that regard? How can we raise our children b’kedushah v’taharah when such towering challenges abound?
That is the most important part of this discussion. Rav Mattisyahu Salomon has said that ultimately, the real answer is to work on strengthening our yiras Shamayim, to be mechazek our avodas Hashem, and to strive to think in a Torah way.
In regard to raising children, one of the most insightful things I’ve ever heard on the topic of technology was from Rabbi Dr. Abraham J. Twerski a”h when he spoke to a group in Edison, New Jersey.
His message to the audience was cogent. You all grew up in a system and did well, he explained. So your perception is that you can now raise your children in the same system and they, too, will do well. But this is a mistake.
Times have changed. Life has changed. The way we raise our children today must be entirely different from the way we were raised.
The idea that we need to adopt a new perspective on chinuch to relate to life’s new realities has always resonated with me. There may be different chinuch approaches, but, regardless of what our approach is, we need to do something.
And that something must be well beyond what we’re accustomed to. It’s our only defense against a world that is rapidly advancing to unknown, dangerous new frontiers.
(Originally featured in Mishpacha, Issue 965)