I confess that when I see the constant stream of news about artificial intelligence (AI), I tend to glaze over.

That’s because I get bored quickly when told to worry about things I can’t really affect in any way. It’s kind of like reading about leaks on the International Space Station or the price of eggs in Utah.

AI will evolve toward some end, good or ill, whether we like it or not, and probably as a mixture of both. I don’t lose sleep over this because, as a Christian, I believe that God, who created all life, is sovereign and doesn’t miss a thing.

AI is fantastic in many ways, and it could lead to things like curing cancer. However, compared to the intricacies of the universe, both macro and micro, AI is a crude instrument fashioned by a race that only recently invented the wheel.

For perspective, it would take 100 years of Cray supercomputer time to simulate the millions of processes that occur several times per second in the human eye.

For more amazing facts, see Richard Swenson’s book “More than Meets the Eye.”

Trusting the Creator takes away a lot of the worry about where things are going.

This doesn’t mean for a nanosecond that we should be complacent or indifferent. We have it on the utmost authority that we’re here for a purpose, not navel-gazing. Evil will prevail if good people do nothing.

However, when we read about disgusting developments like AI-generated “revenge porn,” or the Chinese communists rushing to utilize AI to rule the world, the wisdom of the Serenity Prayer seems more and more relevant.

Written by Reinhold Niebuhr (1892-1971), who began using it in his messages in 1934, the Serenity Prayer is best-known for its first passage:

“God grant me the serenity to accept the things I cannot change; courage to change the things I can; and wisdom to know the difference.”

Should we be wary about the accelerating adoption of AI into everything? Yes, but we can also celebrate its incredible potential. Think of the diagnostic resources now at your doctor’s fingertips – or even yours via the internet.

As with any tool, AI can be used or misused. C.S. Lewis observed that science is a two-edged sword: “Each new power won by man is a power over man as well. Each advance leaves him weaker as well as stronger.”

We should appreciate those trying to mitigate the possibilities of tyranny that could result from unrestrained AI. That includes our own national security experts and people like Elon Musk.

Musk, whose Grok AI system is in a high-speed chase with rivals such as OpenAI’s ChatGPT, Google’s Gemini, and DeepSeek, has warned about the need to govern the technology.

“It’s important for AI to be maximally truthful, and careful and to love humanity,” Musk told an audience in October 2024.

The trick is figuring out how to do it.

Pontius Pilate famously asked Jesus, “What is truth?” before sending him off to his execution. Pilate wasn’t asking a sincere question but rather dismissing the possibility of finding an answer. He ended up literally washing his hands, a figurative shrug. This made him perhaps the most famous moral relativist in history.

The question persists: What is truth? Jesus answered it like no other person, proclaiming that he was “the way, the truth and the life.” Unlike human-created AI, his kingdom has no bounds.

Generative AI, especially large language models (LLMs) such as ChatGPT, creates new content based on vast troves of digital material vacuumed up from across the internet. Think of the Borg, the cybernetic collective in Star Trek that absorbs and assimilates everything near it, getting bigger and bigger and more powerful. That’s what makes it scary.

The other thing is AI’s soullessness. There is no moral force restraining it, and its ability to “think” has been exaggerated.

“These models aren’t searching for truth through facts and logical arguments – they’re predicting text based on patterns in the vast datasets they’re trained on,” IT management professional Gleb Lisikh wrote recently in The Epoch Times.

“That’s not intelligence – and it isn’t reasoning. And if their ‘training’ data are biased, then we’ve got real problems.”

AI begins with human input, so it has built-in values, biases and flaws. As we’ve seen with Google and other systems, some concepts – and people – are elevated and others suppressed, creating propaganda.

Google’s AI program Gemini was caught last year creating images of America’s Founding Fathers as female and minority figures and the pope as a South Asian woman. It even threw in a black Viking. These came after simple prompts such as one asked by the New York Post to “create an image of a pope.”

This reeks of wokeness, like the 1991 Kevin Costner film “Robin Hood: Prince of Thieves,” with Morgan Freeman playing a Muslim Moor as one of the merry men.

Mr. Freeman is a world-class actor who would grace any production. But I wonder whether AI will always be able to distinguish fact from fiction. I asked ChatGPT about this. It told me:

“Sometimes I generate text that sounds plausible but is inaccurate or made up. … I don’t have an innate sense of truth. … I might treat fiction as fact if prompted in a way that suggests it’s real (e.g., ‘Tell me the history of Atlantis’).”

I could just see a future student doing a paper on medieval England and having a chatbot suggest that Moors and even a black Viking or two constituted the outlaws of Sherwood Forest.

This column was first published at the Washington Times.