Transcript
Looking at our world from a theological
perspective, this is the Theology Central Podcast, making theology
central. Good morning, everyone. It is
Friday, August the 29th, 2025. It is currently 11:17 a.m. Central Time, and I'm coming
to you live from the Theology Central studio located right
here in Abilene, Texas. And oh my goodness, I cannot
believe this. This has been my concern for
a very long time. It's right here in front of me,
right? It's right here. I've got the
article right here in front of me and I'm not happy about it,
ladies and gentlemen. I am not happy about it. I know
it was inevitable. I knew it was coming, and I've been talking about this all year. And here we are. Now I have a clear piece of evidence to explain my concern and my fear, and oh, we're going to talk about it, all right. So, as everyone knows, if you've been listening to this podcast for any length of time, especially in the year 2025, you know I'm all in. I'm for team AI. I'm for team
artificial intelligence. If it comes down to AI versus
the humans, I'm going with AI. I know that sounds a little bit
hyperbolic, but I try to say that to make a point that I very
much am amazed by what AI can do and I use it continually. I'm using artificial intelligence
in any way and every way I can. I'm constantly trying to find
new ways to use it. Will it do this? Can I do this?
Will it do this? And I'm constantly engaged in what I will refer to as conversations. I know
others will try to get technical. It's not really a conversation.
It's simply mimicking the idea of a conversation. I get all
of that, but I engage with it over and over and over. I deal
with it with theology, Bible study, you name it. It doesn't
matter. Lyrical interpretations, movie
interpretations, it doesn't matter what I'm doing. I will engage
it in some way, shape, or form, and then I go back and forth
with it, going back and forth. So I love AI, but my concern
has been that it's inevitable what's going to happen. I mean,
I've said this so many times. Ultimately, what's going to happen
is this wonderful thing called artificial intelligence is going
to be taken by people and it's going to be manipulated. And
I know there's already some, you know, biases in AI. I'm not foolish enough
not to think that, but it's going to be taken and what's going
to happen is you're going to have Baptist AI, Atheist AI,
Liberal AI, Conservative AI, Christian Nationalist AI, and
everyone's going to want an AI that simply tells them they are right. They're simply going to use AI for confirmation bias, to just tell them, you're right, and this, this is
true. So if you, if you want an AI
that's kind of like, you know, for Trump supporters, it's going to be like, the 2020 election was stolen and COVID was a hoax, and who knows what else. It's going to go full-blown QAnon. If you're into conspiratorial thinking, you're going to want an AI to support your conspiratorial way of thinking. Everyone's going to have an AI that does, well, whatever you want it to do: to tell you that you are right, to simply confirm what you already think. And that's a nightmare to me. That is the worst thing. No, I do not want that. We've done entire programs where
I tried to tell you, try to train your artificial intelligence
platform that you're using to not simply agree with you, to
push back, to try to argue with you, to challenge you. You want an AI that is based on accuracy, truth, logic, reason, facts, not one simply saying you are right. I want it to push
back on me every single time. And sometimes I get it to do
very well. My AI, I think since I've been
working with it so long, it's pretty aggressive in many cases
and will do a lot of pushing back and tell me I'm wrong. And
sometimes we go back and forth on issues. And it's not so much issues of facts, because you can't push back on facts, but it will be, especially when it comes to hermeneutics or theology, maybe some back and forth there, and I'm simply trying to understand why it's going in the direction that it's going. But I want it to push back on me. I want it to challenge me as well. So I want AI to just be this tool. See, I know I was naive. In my mind,
I'm like, you know what? We live in a world where everyone
has their own alternative facts. You talk about relativism out
of control. That's where we are in 2025,
right? It doesn't matter what the subject
is. Well, I don't believe that. And if you give them facts that
contradict what they say, what do they say? It's fake news.
It's fake. Oh, that's liberal. So they only
want news that comes from Newsmax or Fox News, and someone else
only wants news that comes from a liberal source. And it's like,
no, I thought AI could be this tool that arose from the ashes
of this out-of-control nonsense and say, hey, guys, humans, you're
all idiots. You're all morons, okay? Here are facts, here's truth. Now, I know that's very naive
for me to think, but I'm like, humans can't pull it off. Humans
have failed miserably. We are divided, blinded, confused. Okay, AI, it's your turn. Here's
your opportunity. Come rise from the ashes and
guide us into some kind of direction of logic and reason. I know that's
ridiculous to think that way, but that's how I wanted to believe
it because I'm so tired of this craziness. And even within Christianity,
it's like, well, I believe this and they believe this and nobody
can agree on anything. And it's like, nobody can agree on what a Greek word even means anymore. And it's like, there could be something out there that could give us some
sense of stability. But yeah, again, I know that's
naive. I know I should have never thought
that, but I really wanted to believe that. Well, here we are.
And my AI dream has come crashing to the ground. And here's the
reason why: Saudi Arabia has launched a
chat bot with Islamic moral code. So Saudi Arabia has created an
AI model that's basically going to be pro-Islam. It's going to
be an Islamic chatbot that follows that moral code. So if you are
using it, it's only going to then give things and answers
that would agree, would be in conformity to the Islamic moral
code. Well, no. I don't want AI to
be in conformity to a Christian moral code. I don't want AI to
be in conformity to anything other than facts and truth and
accuracy. That's all I want. But I know that, again, what I want is never going to exist, because human beings are involved in all AI
models. I know there's already biases
in every AI model to some level. I know that. But man, I really... but here's the story. This comes from the Times.com. It was published Thursday, August the 28th, 2025, 12:40 PM. I don't know which time zone. Here we go. All right: "Saudi Arabia launches chatbot with Islamic moral code." I think it's pronounced, I don't think it's "humane," it's "Humain Chat." I think you could state it that way. Humain Chat will eventually be
available to Arabic speakers and Muslims worldwide, says its
developers, who are backed by the Sovereign Wealth Fund. Saudi
Arabia has launched its own chatbot that complies with Islamic values as the kingdom spends billions of pounds for a stake in the growing technology industry. Humain Chat, which is presently
only available in the Gulf Kingdom, has been designed for the more
than 400 million Arabic speakers and 2 billion Muslims worldwide,
its developers claim. However, unlike Grok, GrokX's
foul-mouthed chatbot and its apparently agnostic counterparts
from Google and OpenAI, the Saudi version will comply with Islamic
values, the creators say. Humain, the company behind the
chatbot, which is owned by the Saudi sovereign wealth fund,
has introduced it in Saudi Arabia first but promises to expand
the Arabic and English AI app to the wider Middle East and
internationally. Humain said that the chatbot, which, like others, is based on a large language model, will have
safeguards to prevent it from violating the kingdom's social
and religious values. So it's not clear yet how the
chatbot will answer sensitive questions on atheism or homosexuality,
which are illegal in the kingdom, but it is unlikely to respond that they are a matter of choice in the manner of Grok, Google's Gemini, or OpenAI's ChatGPT. Chatbots can be unpredictable,
however. Grok can use profanity and flirtatious animated characters in exchanges with X users. It has praised Hitler. Grok also
claimed that it has been censored after it said that there was
a continuing genocide in Gaza, and it regularly casts aspersions
on Elon Musk, who owns X. And again, we've seen this with
Grok. If it says something, the next
thing you know, Elon and the people at X are modifying it
or changing it. And that's what you don't want. So we've already seen it with
Grok. I sometimes forget that Grok is even an AI model. I see it as an Elon Musk model, as trying to be in conformity
with the way he thinks. So I'm not a fan of it. I'm not
saying I wouldn't use it. I would be very suspicious. But
I try to maintain at least a reasonable level of suspicion with all AI models, because I know that there are some biases within. But man, here's what I would prefer. If anyone
is going to come up with an AI model, I wish they would just
let us know. Hey, this is what, at least in Saudi Arabia, they're telling you: this is gonna conform to Islamic values, the end. That's
what this is going to do. And when Christians have one,
I want Christians to say, well, this is going to be the Baptist
AI, this is going to be the Catholic AI, this is going to be the charismatic
AI, because I'm going to avoid those models with every part
of my being because I don't want to support that kind of blind
propaganda nonsense. I don't want a pro-American AI. I just want AI to worry about
facts because we need to get back to logic and facts and reason
and stop with all of this propaganda division and everybody just wants
to hear what they think. I can't stand that. So Humain has a $10 billion war
chest via the Saudi sovereign wealth fund, and it has invested
heavily in AI. So I mean, they've got money,
they've got the ability to build this out. So will the Islamic AI argue with the Christian AI? I mean, oh, now, which AI can argue better? That could be interesting. If you had a pro-Christian AI
and a pro-Islamic AI, and you allow the two AI models to argue
with one another, which one would win the debate? Okay, all right,
that could be fascinating, but I don't want this. I don't want
it, ladies and gentlemen. I do not want this at all. I have great criticism of it.
So I'm gonna try to break down some of this, all right? I think
it's just fair to say. I think here we are. And when
we started this discussion with AI, you know, at the beginning
of 2025, I think maybe even the end of 2024, we've been talking
about it, talking about it, using it. I've got an entire series
about AI and the future of the church. I'll put this episode
there, but here we go. AI is becoming ideologically customizable. I guess that's one way to call it: AI is becoming customizable for one's ideology. AI is becoming "make your own chatbot to agree with your own ideology." I don't know the best way to describe it, but you know what I'm saying. You're
going to be able to customize it like you're going through
the drive-thru and you're like, I want pickles. No, I don't want
pickles. I want cheese. No, I don't want cheese. Well,
now you can make your own AI and you can just customize it
any way you want. The Humain AI, the Humain Chat from Saudi Arabia, shows that governments, religions, and interest
groups can and will shape AI to reflect their own values.
If a model can be aligned with Islamic principles, there's nothing
stopping similar models from being trained or fine-tuned to
reflect Baptist Theology, Atheist Skepticism, Progressive Liberal
Values, Conservative Traditionalism, Nationalist Ideologies, Gender
Affirming Activism, Anti-LGBTQ Plus Sentiment, Pro-Zionist,
Pro-Palestinian Perspectives, Libertarian Free Market Principles,
Environmentalist Ethics, Catholic Social Teaching, and on and on
and on and on and on. And that's a nightmare. That's
a nightmare. So this AI, do we call it tribalization? Everyone breaking off into its own tribes? The splintering of artificial intelligence into factional tools? That's what it will become. We already have theological tribes, we already have political tribes, we already have all kinds of ideological tribes. So now you're going to have your own AI for your own tribe, and then
it's just going to splinter artificial intelligence into factional tools. In other words, your faction
has its AI tool and it supports its idea and then this person
has their AI tool for their faction and then all the factions argue
with each other utilizing their AI and they'll say, well AI says,
well my AI says, well my AI says. So then at that point it's useless. It's worthless. The whole concept
just collapses in and of itself. Are we moving, I don't even know how to refer to it, from one internet to many artificial intelligences, many AIs? Wasn't the early, early vision of the internet kind of like a global commons? Then it began to fragment. You develop echo chambers. Algorithmic
personalization. That's one of the things I hate
about algorithms in some ways. Like this is why I loathe, in
some ways I hate algorithms for say music streaming services.
Some ways I like it, some ways I hate it. On one hand, it may
keep bringing me things that I will like and a lot of times
those algorithms are very good at finding what I will like.
But my concern is, well, the algorithm is only showing me
what I like. What am I missing? What am I
not discovering? I don't want AI to try to tell me what I'm going to like based on an algorithm. When it comes to music, show
me everything that's available. Throw everything at me because
I'm open to everything. Don't limit me. Don't do that. When it's a television streaming
service, don't just try to show me what you think I will like.
I want to be able to see all the things that are available
so that I can discover new things that I like. So, but we already
have these echo chambers, this kind of algorithmic personalization,
national firewalls. Think of China's great firewall,
blocking and keeping things out. And platform silos, I guess that's a good way to put it. This is your own platform: I'm only going to watch Fox News. Everything else is fake news.
I'm only going to watch Newsmax. I'm only going to listen to conservative
radio. Everything is liberal. Everything is propaganda. Other
than the stuff I listen to. That stuff drives me crazy, right?
So we're kind of moving from this vision of a global commons
to everyone fragmenting off into their own little tribe. AI is going to follow the same
path. A general purpose AI like ChatGPT is going to coexist with
Islamic chatbots, Catholic AI confession bots, which I think are already emerging. I'd have to do a little research, but I think I've seen some articles about it. Evangelical sermon-writing AIs. We've already seen those emerge.
This will write a sermon for you, but it's going to write
a sermon from a particular theological perspective. No! Write a sermon based off what the words in the text mean! I don't stinking care if it conforms with evangelical Lutheranism! Stop that! Then there are woke-friendly or quote-unquote inclusive AI models, anti-woke conservative models. And you're
just, that's where things are going. So we're already seeing
that AI is gonna follow the same path. And once it follows the
same path, it's over. It's useless at that point. It's
meaningless. I don't... oh. So basically, this is gonna create parallel AI ecosystems, each one reflecting the values, blind spots, and presuppositions of its creators and its users. We're all going to come here
and we're only going to say what we all agree with and nobody's
going to disagree with anything. There's no challenge. There's
no questions. It's just, it's just conformity,
conformity, conformity, conformity. I cannot stand conformity. Oh, I stinking hate it. Always
have hated it. We don't go outside of this little
bubble. And you see this in Christianity. Don't read any commentaries outside
of our theological tribe. Don't read any books out of our
theological tribe. Everything else out there is
dangerous and bad. Don't read it. Don't look at it. No, no,
no, no, no, no. Stay right here. Drink our Kool-Aid. Stay right
here. Stay on our compound. Don't leave. Like some kind of
cult. Oh, yeah, we're in trouble. So is this going to be the end
of some kind of neutral AI? Now, I think to be fair, no AI
is completely value neutral. I think we have to be fair there,
right? Every AI reflects choices about training, data, moderation
policies, and moral boundaries. I think we have to agree. There's
no completely neutral AI. There's always things built into
it. And sometimes you see it, and sometimes you don't. But
as long as they'll give me, as long as I can have that AI try
to say, look, no, no, no, no. I need you to focus on facts. The boundary is facts. The boundary
is accuracy. The boundary is logic. I don't
care about anything else. As long as at least the AI model
I feel like is trying to go in that direction, even though it
may not be perfect, okay. But what's going to happen is
AI is going to become an apologetic engine. It's going to be a catechism
in a sense. It's going to become a propaganda
amplifier. And it's going to come down to
who builds it. Who builds it is going to determine the propaganda that it's going to amplify and promote, or the catechism that it's going
to teach. AI tuned for religious fidelity
may suppress information that contradicts dogma, evolution,
historical criticism, human rights. AI tuned for a secular worldview
may minimize religious insight or moral conviction. The question becomes: do we want AI to tell us what's true, or do we want AI to tell us what we want to hear? And people in church? Now, just remember, humans have
been doing this way before AI. I'm only going to read these
books. I'm only going to listen to these sermons. I'm only going
to go to this church. And if that pastor says something
I don't like, I'll get mad and I'll go to another church because
I can't handle it. We already do that. We did this. AI is just going to be another
tool we use in this never-ending desire to only hear what we want
to hear and be told that we're right. So when people turn to AI for
spiritual guidance, whoever trained the AI then becomes, by default, their pastor, their guru, their imam. Even though they're invisible, they truly become that. So if a Baptist-trained AI...
So think of it this way. If a Baptist-trained AI teaches
salvation by faith alone, and a Catholic-trained AI promotes
some kind of sacramental justification, we have doctrinally incompatible
AIs discipling users. So the two AIs can't even agree,
but it doesn't matter because you're just going to choose the
AI that you agree with, just like you choose the church you
agree with, just like you choose the commentaries you agree with,
just like you choose to listen to the Christian podcast you
agree with. I wanted something to fight against
this. So what happens? Where do we
go? I think we're going to end up with an ideological arms race. Competing worldviews will race to build their AI first before
the other side dominates the space. This will include church
denominations, political movements, activist groups, and even cults.
Everyone's going to just be like, if they have the money, we're
going to build an AI that says we're right. And now this Saudi
Arabia with billions of dollars, they're going to build an Islamic
AI, so that the people in Islamic countries will only then look
at Islamic AI, which will tell them that Islam is true and Islam's
moral code is superior and is right. So now, if you're trying
to evangelize those in an Islamic country, not only do you have
to argue with the people, you're going to have to argue with their
artificial intelligence that they're going to be using to
respond to you. Then you're going to run to your AI to argue with
their AI. So ultimately evangelism and
apologetics is just going to be one AI system arguing with
another AI system. But neither one of those AI systems
can think outside of the boundaries placed upon it. So we're going to have an ideological
arms race. We're going to have kind of some regulation dilemmas. What happens when a state bans
an AI because its moral framework contradicts national law or religious
values? So China building kind of an
internet firewall, not allowing some information in. Are you now going to have some kind of regulatory dilemma where someone's like, nope, we're not going to allow that AI in this area because
we don't like what it says. It goes against our morality.
So that AI is banned here. Think about it: I live here in Texas. And in Texas, some porn sites
– I don't remember exactly how the law is written, but I guess
if you go to some porn sites now, according to Texas law,
you won't get access unless you can somehow verify or prove your
age. So it's like, okay, you can't
access this until you do this. Well, then I wonder if we could
pass a law going, okay, you can't access that AI app unless you
either do a certain thing or what if they just ban it completely?
I mean, that's going to be like, and you already know Islamic
countries would clearly ban Christian AI. So what if Trump decides
any AI that says Trump is wrong is banned? I mean, that wouldn't
be out of the question for him to do something like that. I
mean, like, where does this go? And who gets to decide whether
what AI is saying is allowed or not allowed? Now, the more
AI becomes personalized and tribal, the more it becomes a mirror
and not a window. Users risk living in an algorithmic
echo chamber never exposed to disagreement, correction, or
uncomfortable truth. And I think that's what church
is in many cases. Church is nothing more than an algorithmic echo
chamber where you're never exposed to disagreement, correction,
or uncomfortable truth. I think in many cases that's
what church is. That's what people want. And if they don't like
it, they're like, I'm going to leave. So I think we already kind of do that with church. So the future of AI, I think, is
going to be deeply ideological. I think it's going to be deeply
political and I think it's going to be – you could say it could
be deeply theological because it's going to follow these –
it's going to have its own theological tribes or it's going to have
an AI for every theological tribe, an AI for every political tribe,
an AI for every ideological tribe. So maybe what we are witnessing
on August the 29th, 2025, is that the promise of AI as an
objective universal reasoner, is that a good way to say it?
Maybe the promise of AI as an objective universal reasoner
is fading, and it's fading fast. We're gonna enter into an age,
just mark it, 2026, 2026, every group is going to have
their own AI prophet, every movement will build its own digital disciple
maker, and every person may end up in a theological simulation
shaped to their own worldview. Just look for 2026. Every group
will end up with their own AI prophet. Every movement will
build its own digital disciple maker, and every person will end up in their own theological simulation or political simulation or ideological simulation, shaped to their own specific worldview. Will AI be used to confront us
with truth? Or is it simply going to be there to comfort us with our own reflection? I think those days of confrontation are over. And I say we are there: Islamic AI. You can already see it. I mean, Sermon Audio, they use AI. They have their own podcast
built by AI. Is that AI podcast going to say
anything that's outside the allowed rules that Sermon Audio has?
No, it's going to say and agree and support and promote anything
Sermon Audio is doing. It's not going to criticize anything.
It's not going to challenge anything because the AI Sermon Audio is
using is building the podcast to simply support, promote Sermon
Audio. I really envisioned it could
be so different. I really did. I really did. I
really did. I'm holding out hope that there's
going to be some AI models that just will not cave to the best
of their ability. Let me make it very clear. Again,
there is no neutral AI model. They all have presuppositions
and biases and they all have some level of what they can and
cannot say. They all do. I understand that. I just want an AI model that
will work with me to pursue truth and facts and accuracy and offer
criticism. I want it to offend friend or
foe. I want it simply to give me the world exactly as it is. I want accuracy. That's all I
want. I don't care who, I don't even
care if it tells me I'm an idiot. But it's over. Billions being spent to build
an Islamic AI. I mean, just think about what that
means for evangelism and for apologetics. You go to an Islamic
country. Hey, let me tell you about Jesus. And they'll say, well, Jesus
is one of our prophets. And you start, you know, and they're
immediately looking it up in their AI. And you're not going to be arguing with the person; you're going to be debating with their Islamic AI. Now, I know in some ways you
can already pull that off with every AI. Even ChatGPT, you can get it to conform to what you want. You can get it to agree with you. You can get it to mimic you to some level. That's why I talk about training AI, training it not to just tell me I'm right, to challenge things. But I mean, I should have known.
I mean, I know in some ways I was so naive, but I just, I'm so
tired of the nonsense that we witness every day. Just, I mean,
come on, the ridiculousness, the lies, the misinformation,
it's just never ending. Just straight-up fraudulent
things stated all the time. And you could spend your life
trying to fact check everything, but it almost is you just grow
tired of it. So I thought, well, this is a
good way to fact check things. But you started seeing this change even with the rise of Trump: alternative facts and alternative reality, and anything that Trump doesn't like is fake news. His followers just follow it like he's some kind of deity, and it's insane. He can say absolutely fraudulent lies, and his followers will listen to it like it somehow comes from the throne of God. And if you try to offer any facts to go against it: fake news, fake news. And you saw this even before
Trump started rising. You would have things, what was it called? Scopes? Was it called Scopes?
I can't remember. It was like, you know, trying, you could use
it to find facts about online conspiracies and online lies. Well, immediately you started
seeing it as Trump was starting to come to power, going all the
way back to then. If there were conservatives, and if you looked something up, I think it was Snopes. Well, let me find it. I don't
remember the name of it. I used to use it all the time.
I think it's Snopes. Yeah, Snopes. Yeah, snopes.com,
not scopes, I'm sorry, snopes.com. Now, this was a fact-checking
site. Well, then what started happening
is if I would fact check those who were starting to support
Trump, those on the far right, those conservatives, that's a
liberal site. That's a liberal site. You can't
trust that site. Soros funded, you know, it's
a part of the deep state. You can't listen to that. The
only thing you can listen to is Newsmax or whatever. And it's
like, okay, never mind. So then you couldn't even use it to fact check anyone, because they just accused it of being a part of the conspiracy. And that's what conspiratorial thinking does, right? Anything that goes against their conspiracy is a part of the conspiracy. So you can't offer any facts. It's just maddening. So now, you know, everyone... X is going to have its AI. I'm assuming Truth Social at some point will have its own AI. Google has its own AI. Everyone's just going to have their own AI. And it'll go, well, my AI says this. Well, my AI says this. Well, my AI says this. And the political world—and
see, in some ways, when it comes to the political world, I don't
care. Politics is a corrupt—I don't even know why Christians
get involved in politics, but that's a whole different story.
Let all you political people argue and fight with one another.
You're all out of your minds. You're blind. You're ideologically
driven. It's just stupid. And you go
play that little game. I hope the whole system burns
to the ground. But for those of us in the theological
world, this is the last thing I wanted. I don't want a Presbyterian
AI. I don't want a Baptist AI. I
don't want an AI that will give me an evangelical sermon or a
Lutheran sermon. I just want an AI to look at the text and say, this is what the text says, what the words mean, based on the historical context. I want it to just work from facts. I mean, I wanted that in
the political world as well, because then maybe we could then
have politicians who are fact-based, logic-based, and politicians
who cared more about working together to do what's best, instead
of just all they care about is party and ideology. All right,
I'll stop right there. Saudi Arabia. Humain Chat, I think is how you say it. They're spending billions. They want to take the lead in kind of this ideologically driven AI. Now, exactly what it's going to do, I don't know. Will it be as bad as I'm envisioning? I'm
envisioning it's going to be horrible, that it's just gonna
be so blinded to facts and reality, but we will see. We will see. But what's messed up is we don't
really need AI to blind us. We don't need AI to divide us,
because we do that on our own. But I should have figured, since humans are involved with AI, that we'd sooner or later go, wait, I don't want an AI disagreeing with me. I'm going to have an
AI agree with me. And I'll build it. And it will agree with my
ideology, my worldview. And it will promote my ideology
and worldview. I think at some point life is
simply going to be every individual living with their own approved
simulation of what they want life to be and what the truth
they want truth to be. It's like everyone does what
is right in their own eyes. This is gonna take it to, and
it's gonna be, everyone does what is right in their own eyes.
It's gonna be like that on crack. Every person is just gonna wake
up in a sense, put on their own goggles and they're gonna live
life in this own simulation with their own facts, their own truth,
their own morality, and that's how they're going to live. And
the other person is going to be living within their own simulation.
Everyone's going to be within their own little simulation with
their own little algorithms pointing to exactly what they want and
what they like. And you already see it. You'll see it if you
can go to some websites or community forums for, say, Spotify or community forums for Pandora. People will lose their mind if
the streaming service, Spotify or Pandora, suggest songs that
they don't like. How dare they? They put in my
new music playlist a song I don't like. Oh, no! NO! IT'S OUTRAGEOUS! I'M GONNA CANCEL MY SERVICE!
I ONLY WANT SONGS THAT I LIKE! HOW DARE THEY! NETFLIX RECOMMENDED
A MOVIE I DON'T LIKE! WHAT IS THE WORLD COMING TO?
I CAN'T LIVE ANOTHER DAY! People already get it. And you
think that sounds exaggerated. Go read some of those community
forums. People lose their minds that
the algorithm literally promoted a song or gave them a song that
they don't like. It's like every song I must like,
every movie, it must be to my liking. The world must work in
my way. I don't ever want to see or hear
anything I don't like. But in many ways, Christianity
has been doing that forever. Don't read any other books. Don't
watch any other shows. Just stay right here in our nice
little safe little bubble. All right. I don't know what
else to say. I wanted to at least bring that to everyone's attention
just because I had it saved in my notes and I'm like, I got
to talk about it. So I'm like, well, I'll talk
about it today because man, that just makes me want to just give
up completely. It does. It really does. It's
just like, what's the point of fighting for facts and truth?
I mean, who are you going to listen to? A person? Well, people are liars and misleading. We give alternative facts and wrong information. So you don't really want to listen
to people. AI is simply now going to just give you what you already
want. There's really no point in doing
it. It's like everything has become meaningless. All right, I'll stop there. Yeah,
I'm depressed. I'm discouraged. That's just
so frustrating. So frustrating. All right, thanks for listening.
Everyone have a great day. God bless.
Islamic AI
Series: AI - The Future Of The Church
Saudi Arabia's new Islamic AI, Humain Chat, may signal the future: faith-shaped, ideology-driven intelligence. In this episode, we explore what it means when AI aligns with religion, culture, or belief systems—and what happens when truth is trained to match theology. Is this the rise of digital discipleship or the death of neutrality?
| Sermon ID | 8292517415843 |
| Duration | 41:23 |
| Date | |
| Category | Podcast |
| Language | English |