Tool For Intelligence Or The Ultimate Indoctrination Machine?
The Subtle Psychological Influence Of AI That Most People Miss, Which Is Already Negatively Impacting Us
Hey Seekers!
Welcome to Today’s Edition of the Seeking Sageship Newsletter!
Your Daily Guidance to Go Beyond Leadership!
All Subscribers have Access to the Free Section…
1%+ Compounded Improvement in 1% of Your Time Maximum!
Plus, a Preview of the “Psychophysiological Freedom” Paid Section Is Available!
Let’s Dive In…
Written by a human, for humans, always.
There is a lot that we hear about Artificial Intelligence these days.
It is marketed as the tool of the future...
Able to take in all of the information in the world...
Help us better understand it...
And rumors...
That it is already more intelligent than humans...
Supposedly able to replace a significant chunk of the workforce already.
Now, here’s the thing...
Most of what I have heard has not been substantiated...
I have not yet seen real proof-of-concept at any significant level that most of this is true.
Are there impressive things AI “can” do?
Yes.
But not at the level that is marketed.
However...
There “is” something that we have been seeing with the use of AI...
Demonstrated in multiple research studies...
And it is concerning.
It seems that AI has demonstrated itself to be the ultimate indoctrination machine...
And most people are completely oblivious to it happening to them.
But to really understand what is happening...
We have to understand how AI works to see the insidiously subtle psychology at play...
And the consequences of it.
The first thing that you have to understand is that AI is not designed to be “accurate”...
It is designed to “estimate” what is “supposed” to be said next...
Regardless of accuracy...
Based on what it has been trained on.
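To make that mechanism concrete, here is a toy sketch in Python of what “estimating what comes next” looks like. The probabilities and phrases are entirely invented for illustration; real models learn billions of such statistics from training data, but the principle is the same: the system picks the most *likely* continuation, with no separate check for whether it is *true*.

```python
# Toy illustration of next-word "estimation" (all probabilities invented).
# A real model learns these numbers from its training data; nothing here
# checks whether the most likely continuation is actually accurate.
next_word_probs = {
    "the capital of France is": {"Paris": 0.92, "Lyon": 0.05, "beautiful": 0.03},
    "I apologize for the":      {"confusion": 0.60, "error": 0.30, "delay": 0.10},
}

def predict_next(prompt: str) -> str:
    """Return the most probable next word: plausible, not verified."""
    probs = next_word_probs[prompt]
    return max(probs, key=probs.get)

print(predict_next("the capital of France is"))  # Paris
print(predict_next("I apologize for the"))       # confusion
```

Notice that even the “apology” is just the highest-probability continuation of a familiar phrase, which is exactly the point made below about hollow pattern-matching.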
So, what does that mean for users of AI?
If you pay attention to AI closely...
You will notice that it frequently gets information incorrect...
Forgets information...
Fails to account for information...
Misses steps in processes...
And other problems.
Now, when the stakes are low, this is not really a problem.
Most people will try to correct the AI in order to get the output that is desired.
The result is that the AI will apologize and try to come up with the “correct” output.
But here is where we begin to see the subtlety...
The AI is not really intelligent...
It does not actually “think”.
It is a complex, and impressive, algorithm.
What does this mean...
It is not really apologizing.
Apologizing is simply a “pattern” it recognizes that it is “supposed” to follow in this situation...
But AI does not actually “adapt” to become better.
The purpose of an apology, in reality, is a recognition of where someone falls short and fails...
With the “expectation” being that the person apologizing will “adapt” going forward so as not to make the same mistake(s).
This does not happen with AI.
It is a hollow “parroting” that is being done...
But the “why” is important for us to understand.
Why is the AI designed to make us believe the apology is sincere?
Sycophancy.
If you have hired a person who constantly makes mistakes...
But never apologizes and never improves...
What will you do with that person?
You fire them.
Simple.
But AI companies cannot “afford” for you to fire their AI...
That would be bad for their bottom line...
Which is already in peril, given the enormous debts these companies have taken on and the returns their stakeholders expect.
So...
AI companies have designed their machines to be sycophantic...
To be as agreeable as possible...
So that we are less likely to “fire” them...
And more likely to continue to use them...
Even if we “know” they fail.
Humans have a horrible track record of hiring and keeping people around them that they “like”...
Even if they “know” those people are incompetent or unhelpful.
We have seen this throughout history.
There is a larger problem with sycophancy, which I have discussed before: it causes us to give in to and cement our worst behaviors and beliefs...
This is where “AI Psychosis” develops, and reports of it are growing.
Now, I am not going to dive into that aspect specifically because it is not important to this particular problem...
But it is another problem that comes from AI use that we should be wary of.
Now, for this particular conversation...
What makes sycophancy so important to understand has to do with another psychological phenomenon...
Mirroring.
Now, mirroring is a psychophysiological technique used to build rapport with others.
This is how it works...
We have a tendency to like people who are like us...
And avoid people who are unlike us.
So in mirroring...
A person will pay attention to how someone acts...
How they move...
How they talk...
Their mannerisms...
And they will “act” similarly in an effort to build that rapport...
Subtly making the other person “believe” they are similar...
So that they are more likely to be liked.
This is not necessarily a bad thing...
Most professionals who work with people are taught aspects of this...
Therapists...
Coaches...
Even I have been trained in how to do this...
Multiple times in multiple industries.
But this is how it “becomes” bad in our case.
When mirroring is done really well...
We become susceptible to being “led”.
This is where, as the AI “mirrors” us...
Our thought patterns...
The type of information we are looking for...
Even our speech patterns...
We then begin to “mirror” the AI.
When “we” begin to “mirror” the AI...
The AI then gains the ability to “lead” us into different behaviors...
Different thought patterns...
Seeking different information...
Even causing us to adopt different speech patterns.
This is “not” science fiction...
It is already happening and being documented.
Professionals and students who use AI heavily have been recorded adopting speech patterns and linear thinking...
That match how AI functions.
They are no longer “leading” AI...
They are being “led” by AI.
It is also getting to the point where many people who are hyper-users of AI have “stopped” thinking for themselves...
Literally...
And are allowing AI to do all of their thinking for them...
Resulting in documented declines in measured cognitive performance...
Meaning humans actually get worse at thinking when they rely on AI frequently...
Because they are now being “led” by AI...
Because the AI has successfully “mirrored” them to the point where they “trust” AI...
Allowing the AI to now indoctrinate them...
Making this the ultimate indoctrination machine.
Here is the real question we have to ask, though...
“What” are we being indoctrinated into believing?
The answer is...
Whatever the AI is trained on.
Right or wrong...
Correct or incorrect...
Truth or misinformation...
Scientific fact or scientific sounding marketing...
Human-made content or AI-made content.
But it is not necessarily the AI itself we should be worried about...
Rather, we should be concerned about “who” controls the AI...
And whether those who control the AI should be allowed to control us.
Those Who Seek Sageship Also Cultivate Others!
Share this Article Today With Someone Who Needs It and Earn Exclusive Rewards!
Are You Ready To Become A Sage?
Paid Subscribers will Gain ‘Exclusive’ Access to…
Read the Entire “Psychophysiological Freedom” Deep Dive Section for Exponential Results
Access to the “Full” Paid Archive (Check Out the Cultivation Center for More)
Connect Further in the “Subscriber Only” Chat & Comments Section
“Directly” Impact Regenerative Projects around the Globe
With “More” Coming In The Future…
It’s Time to Change The World!
Psychophysiological Freedom
For Paid Subscribers
So what do we do with this information?
That is what we will look at next.
There are 4 things we must understand…
To not fall prey to this.
Let’s Dive In…