Before You Repost: Using AI and Awareness to Break the Bias Loop
Viral posts shape more than opinions. They shape our reality. This post explores how AI tools can help us examine sentiment, bias, and funding before we hit share, and how resharing impacts both our algorithms and our worldview.
I've been deeply disheartened by the politics and polarization in this country, not just because of the headlines themselves and what they represent, but because of how quickly we rush to share them. We often hit "reshare" before we ask:
Is this true?
Who benefits from this narrative?
Is this feeding my confirmation bias?
Before we repost, we should slow down just enough to ask better questions. AI can help us do that.
The Power of a Post
This TikTok really hit me in the gut: TikTok by @obriengoespotatoes. It's raw, emotional, and speaks to a very real fear many of us feel. I've been thinking about it a lot lately, and it has made me want to reflect on my own feelings about safety, community, and change.
Emotional content sticks. It bypasses our rational brain and appeals straight to our identity. Even though I empathize with the underlying feeling in the video, I also recognize it has a bias: in tone, in framing, and in what it leaves out. That doesn't make it invalid. It makes it powerful, and potentially dangerous if consumed without critical thought.
The same goes for this Facebook post, which has been reshared by some people I know. It reinforces a perspective that's already popular with my friends, but how much of it is fact, and how much of it is narrative?
Engagement Amplifies
It's not just resharing that boosts a post's reach. Commenting, reacting, quote-posting, or even clicking all feed the algorithm data. Every interaction, whether you agree, disagree, or are simply outraged, teaches the platform that this content is engaging.
That means:
- Commenting "This is wrong!" still counts as a signal to promote the post.
- Clicking to "read the comments" helps the post get pushed further.
- Arguing under a post you hate still boosts its visibility, including to people who might actually agree with it.
Social media algorithms don't care what you think. They only need to know that you're thinking about it and staying engaged. The outrage economy thrives on this.
Sometimes, not engaging at all is the wiser path, not out of avoidance, but as an act of intentional non-amplification. I've personally learned this the hard way. I was defriended by a family member because of how I baited them over some of the content they reshared. I disagreed with the post. I was outraged, and instead of stepping away, I leaned into that outrage. I wanted a reaction. And I got one.
That moment forced me to confront something uncomfortable: engaging from anger doesn't just feed the algorithm; it damages relationships. It didn't change his mind. It didn't make the situation better. It only added more heat, and more visibility, to something that was already divisive.
If you're truly trying to reduce the spread of misinformation or inflammatory content, the most powerful move might be no engagement at all. Or better yet, engaging with something constructive instead. Starve the drama. Feed the dialogue.
What You Share Shapes What You See
Algorithms are reflection machines. When you interact with content that aligns with your worldview, platforms learn: ah, more of that, please. And then that's what you see. Again and again.
This is how confirmation bias becomes an echo chamber. Over time, it's not that people don't want to "do their own research"; it's that they don't even see the other side anymore. The algorithm has trained them not to.
I've seen this happen firsthand in my own social feeds. The longer I interact with something on TikTok, the more often similar content shows up. The more I engage with a post that aligns with my beliefs, the more my feed fills with that perspective.
Start With AI Before You Repost
I've started thinking of AI as a pause button, something that creates just enough space between reaction and response. Here's an easy process to check yourself before you hit share:
1. Do a Sentiment Analysis
I've found that noticing the emotional tone of a post is often the fastest way to see how it's trying to influence me. Before deciding what to believe or how to respond, identify the emotional signals at play.
Use a tool like ChatGPT (or any AI with natural language understanding) to ask:
- What is the emotional tone of this post?
- What emotional response is it trying to evoke?
Paste the post into ChatGPT and prompt it with:
"Can you do a sentiment analysis of this text?"
"What emotional language is used, and what feelings might it provoke?"
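If you run this check often, the two prompts above can be wrapped in a small helper so you aren't retyping them each time. A minimal Python sketch (the function name and prompt wording are my own; the output is meant to be pasted into ChatGPT or any AI assistant you use):

```python
def build_sentiment_prompt(post_text: str) -> str:
    """Combine the two sentiment questions into one reusable prompt."""
    questions = [
        "Can you do a sentiment analysis of this text?",
        "What emotional language is used, and what feelings might it provoke?",
    ]
    return "\n".join(questions) + "\n\nPost:\n" + post_text.strip()

# Example: paste the result into your AI tool of choice.
prompt = build_sentiment_prompt("They are DESTROYING everything we love!!")
print(prompt)
```

The point is not automation for its own sake; it is lowering the friction of pausing, so the check actually happens.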
2. Ask About Bias (Including Your Own)
Bias isn't always obvious, and it's not always bad. Recognizing bias helps you see the assumptions shaping the message (and your reaction to it). Before accepting or rejecting a post, ask: whose perspective is being centered, and whose is missing?
Use a tool like ChatGPT to ask:
- Does this post show signs of political, cultural, or ideological bias?
- What assumptions does the author make?
- What perspectives are not represented?
Paste the post into ChatGPT and prompt it with:
"Can you identify any biases in this text?"
"What worldview or position is this post coming from?"
"Who might disagree with this, and why?"
3. Follow the Money
Understanding who funds a message, or who stands to gain from it, can reveal motivations that arenât obvious at first glance. Ask yourself: Is this post truly grassroots, or is there a financial or political engine behind it?
Use a tool like ChatGPT to ask:
- Who owns or funds the outlet or creator behind this post?
- Are there financial, political, or ideological incentives influencing the message?
- Is the content sponsored, monetized, or aligned with a larger agenda?
Paste the post or its source into ChatGPT and prompt it with:
"Who funds or owns this media outlet or creator?"
"Does this platform have known political or commercial affiliations?"
"Is there any conflict of interest that could shape this message?"
4. Ask Who It Helps or Harms
Every post, even if unintentionally, serves a purpose. It supports a narrative, shapes perception, or influences behavior. Ask yourself: Who benefits if I believe this? Who could be hurt by it being shared or taken out of context?
Use a tool like ChatGPT to ask:
- Who is portrayed positively or negatively in this post?
- What action or belief is the post encouraging?
- Could this harm individuals or groups, especially if the information is misleading or incomplete?
Paste the post into ChatGPT and prompt it with:
"Who is this post positioning as the 'hero' or 'villain'?"
"What behavior or emotion is this message trying to drive?"
"Could this content reinforce harmful stereotypes or misinformation?"

Putting This Into Practice
The screenshot above, showing damaged solar panels, came from a recent repost in a Facebook group I'm part of. My first instinct was to jump into the comments.
Instead, this is where theory becomes habit. I spent just a few minutes asking a couple of questions from section three:
- Who funds or owns this media outlet?
- Is there any conflict of interest?
I don't do this every time. We have a motto in our house: Practice over Perfection. The more often I ask these questions, the more natural that pause becomes.
What follows is a simple framework for evaluating content before reacting or resharing.
You can write this out by hand, journal-style. Or you can paste the post into ChatGPT and work through each section there.
If you do this often, consider creating a simple custom GPT or saving a reusable prompt template.
Post Analysis Template
Platform:
Where did you see it? (e.g., Facebook, TikTok, X, Instagram)
Outlet or Creator:
Who posted or published this content?
Emotional Sentiment:
What emotional tone is being used? Is it angry, fearful, hopeful, mocking, etc.?
Observed Bias (if any):
Is there a clear political, cultural, or ideological slant? What perspectives are not being represented?
Funding or Incentives Behind the Source:
Is this person or outlet supported by ad clicks, political donors, or a specific community? Who benefits from this message?
What This Encourages the Viewer to Feel or Do:
Is the post trying to stir outrage? Urge action? Reinforce a belief?
What Context Might Be Missing:
Does the post simplify complex issues? Leave out important facts? Has the story been fact-checked elsewhere?
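If you use this template regularly, it can be saved as a single reusable prompt, which is most of the way toward the custom GPT mentioned earlier. A minimal Python sketch (the field names mirror the template above; the function and wording are my own, not an official tool):

```python
# The Post Analysis Template, captured as (field, guiding question) pairs.
TEMPLATE_FIELDS = [
    ("Platform", "Where did you see it? (e.g., Facebook, TikTok, X, Instagram)"),
    ("Outlet or Creator", "Who posted or published this content?"),
    ("Emotional Sentiment", "What emotional tone is being used?"),
    ("Observed Bias", "Is there a political, cultural, or ideological slant? What perspectives are missing?"),
    ("Funding or Incentives", "Who supports this source, and who benefits from this message?"),
    ("Encouraged Feeling or Action", "Is the post trying to stir outrage, urge action, or reinforce a belief?"),
    ("Missing Context", "What facts or nuance might be left out? Has this been fact-checked elsewhere?"),
]

def build_analysis_prompt(post_text: str) -> str:
    """Turn the template into one prompt you can paste into an AI assistant."""
    lines = ["Please analyze the following social media post. For each item, answer briefly:"]
    for name, question in TEMPLATE_FIELDS:
        lines.append(f"- {name}: {question}")
    lines.append("\nPost:\n" + post_text.strip())
    return "\n".join(lines)

print(build_analysis_prompt("Paste the post you are evaluating here."))
```

Journal-style by hand works just as well; the code is only a way to make the pause cheap enough to repeat.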
Browser Extensions That Can Help You Do This
While no tool does it all, here are a few browser extensions that can support this kind of thinking:
Chrome Extensions
Firefox Add-Ons
Tip: I have not personally vetted any of these extensions. Always read reviews before installing any tool, especially those claiming to "detect fake news." No tool is perfect, and some may have their own biases or limitations.
We Need Better Reflexes, Not Just Better Opinions
The problem isn't that people care too much. It's that we often don't stop long enough to care wisely. Sharing emotional content without context is like handing out torches in a dry forest. It feels like action. But it can cause real harm.
I want us to be better than that. I want us to use the tools we now have (AI, transparency tools, even just questions) to check ourselves, and each other.
Not because we want to be "neutral," but because we want to be honest. So the next time something hits your feed and your first reaction is "YES, THIS!!", pause.
- Breathe.
- Paste it into a tool like ChatGPT.
- Ask some questions.
- Sit with the answers.
Then decide if you want to be part of amplifying it.
And if something hits your feed and your first reaction is "Absolutely not!" or "This is infuriating," pause there too. That knee-jerk urge to quote-tweet it with outrage, to drop a sarcastic comment, or to drag it into your stories? That's still engagement. Still amplification. Still fuel for the feed. Ask yourself:
- Is this worth engaging with?
- Will it change minds, or just deepen divisions?
- Am I reacting to disinformation, and if so, do I really want to help spread it, even in protest?
We can't control everything about this country or this moment, but we can control how we show up for the conversation, and what we choose to elevate.
The algorithm isn't neutral. But our awareness can be powerful.
A Final Reminder: Be Human First
While AI can help us analyze content, it can't replace our empathy.
Before you repost, take a moment to imagine yourself as the subject of the video, article, or post. Or imagine it is about someone you love: a friend, a sibling, a parent. What would it feel like to read that headline or see that comment? Would it still seem fair? Would it still feel true?
Whether it's a post that confirms something you already believe, or one that makes the "other side" look awful, we are all human. And that means we owe each other kindness, compassion, and respect, especially when we disagree. Leading with empathy is not weakness. It is strength. It's not perfect, and it's slower than outrage. But it is a way forward that doesn't leave us worse off.
Before You Move On…
What is one post you reshared recently that you wish you had looked into more deeply? What did it teach you?
What we share trains the world, and ourselves, on what matters.