The moment I stopped trusting view counts

It started with a YouTube video about Stonehenge. Five million views. Professional narration. Dramatic music. References to real universities — Oxford, Salford, Curtin. The thesis was extraordinary: an AI system had analyzed decades of archaeological data and arrived at a shocking conclusion about Stonehenge's true purpose.

It sounded incredible. And parts of it were genuinely true. That's what made it dangerous.

I decided to do something most people don't: I opened a browser tab and started checking every single claim. What I found wasn't a simple case of "real" or "fake." It was something far more sophisticated — a carefully constructed blend where verified science was stitched together with pure invention, making it nearly impossible for a casual viewer to tell where the facts ended and the fiction began.

Over the following weeks, I applied the same process to a dozen more viral videos and social media posts. The pattern was always the same. And it changed how I consume information online.


Case Study 1: "AI Reveals the Terrifying Truth About Stonehenge"

The video: ~5 million views. Claims that an AI system processed all existing Stonehenge research and discovered it was a "machine that manufactures terror on demand" — a weapon of psychological control built by an ancient ruling elite.

[Image: Stonehenge split comparison — dramatic YouTube portrayal vs calm documentary reality]

What I found when I checked every claim

The real science (it exists)

The Altar Stone discovery is genuine. In 2024, a team led by Anthony Clarke at Curtin University confirmed through U-Pb dating of zircon and rutile minerals that Stonehenge's six-tonne Altar Stone originated in northeast Scotland — not Wales, as previously assumed for over a century. This was published in Nature and is a legitimately groundbreaking finding that rewrites our understanding of Neolithic trade networks across Britain.

The acoustic research is real. Trevor Cox at the University of Salford did build a 1:12 scale model of Stonehenge and studied its acoustic properties. His research confirmed that the stone circle would have created notable sound effects — standing waves, echoes, and amplification that could have enhanced ceremonial experiences. This was published in the Journal of Archaeological Science.

The bluestones do have acoustic properties. Rupert Till, a musicologist at the University of Huddersfield, demonstrated that certain rocks in the Preseli Hills — where the bluestones originate — produce resonant tones when struck. The locals call them "ringing rocks," and this is documented in peer-reviewed literature.

The 2024 Major Lunar Standstill observation happened. English Heritage organized a multi-university observation program with researchers from Oxford, Leicester, and Bournemouth to study how the Station Stones align with extreme lunar positions. This is a real, ongoing research program.

The fabrications (seamlessly mixed in)

"AI analyzed all the data and reached a shocking conclusion." This is the central thesis of the video — and it's entirely invented. No such AI analysis was ever published in any peer-reviewed journal. I searched extensively. The video took four separate, independently conducted studies (acoustics 2020, geology 2024, astronomy 2024, musicology 2013) and fabricated a narrative that AI had "connected them all."

"Stonehenge produces infrasound that causes primal terror." Brutally exaggerated. While infrasound research exists, the claims about "crushing dread," "flickering shadows," and "a primal urge to flee" are dramatic inventions. No published study has demonstrated that Stonehenge produces infrasound at levels that would cause such effects. The video took a real phenomenon (infrasound can cause mild discomfort at certain frequencies) and inflated it into a horror movie.

"Alignment pointing at a mathematically empty region of sky." The video attributed this claim to Giulio Magli, a real archaeoastronomer at Politecnico di Milano. But searching his actual publications reveals nothing about a "mathematically empty region." His name appears to have been attached to a fabricated claim for credibility.

"Scientists fell silent and couldn't speak." Classic YouTube dramatization. None of the researchers cited in the video ever reported this reaction.

The scorecard

Claim | Verdict
Altar Stone from Scotland | ✅ True — Nature 2024
Acoustic research (Cox, Salford) | ✅ True — published in J. Archaeological Science
Ringing bluestones (Till) | ✅ True — documented
Major Lunar Standstill 2024 | ✅ True — English Heritage program
AI connected all the research | ❌ Fabricated — no such study exists
Infrasound causes primal terror | ❌ Grossly exaggerated
Alignment to empty sky region | ❌ No evidence in Magli's publications
"Weapon of psychological control" | ❌ No researcher has published this conclusion

Result: 4 verified facts, 4 fabrications. The facts were the foundation. The fabrications were the story.


Case Study 2: "The Man Who Built a Time Machine and Disappeared for 29 Years"

The video: millions of views. Claims that a man named "Mike Markham" built a time machine, called the Coast to Coast AM radio show in the 1990s, then vanished for 29 years before mysteriously reappearing in 2022 with journals from the future.

[Image: Time machine myth vs reality — dramatic sci-fi attic scene vs mundane police station]

What's real

The man's actual name is Mike Marcum, not Markham; the video distorts even his name. In 1995, he genuinely stole six transformers from an electrical station in King City, Missouri, claiming he was building a time machine. The theft caused a local power outage, and he was arrested and sentenced to 60 days in jail. This is confirmed by police records and local FOX 2 reporting. He did call Art Bell's Coast to Coast AM radio show, which was a real late-night program famous for giving airtime to anyone with an extraordinary claim.

What's fabricated

Everything after 1997 in the video's narrative is fiction. The "29-year disappearance"? Marcum appeared on Art Bell's new show Midnight in the Desert on September 4, 2015 — so he was "missing" for 18 years, not 29, and he hadn't vanished; he had simply dropped out of public life. According to the administrator of a paranormal forum, he was homeless in Hawaii in the late 2010s. In February 2022, Marcum himself confirmed on Reddit that he was alive and well in Hawaii, later moving back to Ohio.

The dramatic elements — a wooden box with journals addressed to future homeowners, a Dr. Harold Voss from a small Oregon university with a fringe science blog, the concept of "unmemory," an email from Hawaii, a body found in a metal tube on a California beach in the 1930s — all of it is invented. Marcum himself debunked the body-in-a-tube claim.

Result: A real 21-year-old stole transformers and called a radio show. The video built an entire sci-fi mythology around that kernel of truth. The story arc was almost too perfect — a mysterious box, journals from the future, a dramatic 29-year return. That perfection should have been the first red flag.

Reality is messy and anticlimactic. Good fiction is suspiciously tidy.


Case Study 3: When There's No Truth at All — Pure Fabrication

[Image: Multiple smartphone screens displaying identical Facebook posts — coordinated disinformation campaign visualization]

Not everything follows the 60/40 rule (more on that pattern below). Some viral content is 100% fabricated — but uses the appearance of specificity to simulate credibility.

A Facebook post circulating in Slovak claimed that Jeffrey Epstein sent an email on March 20, 2015 about pandemic preparation — supposedly proving foreknowledge of COVID-19. The post was shared identically across multiple accounts with sensational captions like "PRAVDA VYŠLA NA VON!" (The truth has come out!).

The fundamental problem: Epstein died in August 2019. The post claimed he died in 2025. This isn't a matter of interpretation or missing nuance — it's a factual impossibility that anyone could verify in ten seconds with a Google search. Yet the post was shared thousands of times, with comment sections full of people treating it as a revelation.

What made it work was structural mimicry of journalism: a specific date (March 20, 2015), a named person (Epstein), a concrete claim (an email about pandemic preparation), and an emotional payload (they knew all along). The specificity creates an illusion of sourcing. Most people don't fact-check dates — they pattern-match. "Specific claim + named person + emotional hook" triggers the same cognitive response as "credible report."

The coordinated sharing across multiple accounts posting identical text reveals the industrial nature of disinformation distribution. This wasn't organic sharing. It was a content campaign designed to exploit the algorithmic reward for engagement.

This case is important precisely because it's so different from the Stonehenge or Time Machine examples. Those required real research to debunk — you needed to find and read actual papers. The Epstein post could be disproven by anyone with a search engine in under a minute. The fact that thousands didn't bother tells you everything about how social proof overrides individual verification.


Case Study 4: The Science-Washing Machine on Social Media

The same pattern appears on Facebook and Instagram, but with a twist: instead of YouTube's elaborate narratives, social media uses real peer-reviewed studies as raw material for misleading posts.

[Image: Smartphone showing Facebook post with sensationalist health claim surrounded by scientific papers with MISSING CONTEXT stamps]

Cannabis protects your brain (it doesn't — that's not what the study says)

A viral Facebook post claimed a "large Danish cohort study" proved that cannabis use prevents cognitive decline and raises IQ. The study is real — Høeg et al. (2024), published in Brain and Behavior, a peer-reviewed Wiley journal, following 5,162 Danish men over decades.

But the post stripped away every caveat. The actual IQ difference? 1.3 points — clinically insignificant, within statistical noise. The researchers explicitly warned that cannabis users in the study already had higher baseline IQ and education levels, meaning the tiny difference likely reflects who chooses to try cannabis, not what cannabis does to your brain. It's an observational study with self-reported use data and significant survivorship bias. The authors never claimed cannabis "protects" cognition.

The Facebook post took a nuanced, cautious study and turned it into a miracle headline. Millions of people saw the post. Virtually none read the actual paper.

Non-hallucinogenic LSD repairs your brain (the study is real, the framing is not)

Another viral post claimed scientists had discovered a version of LSD that repairs brain damage without any psychedelic effects. The underlying research — JRT, a compound developed by David Olson's team at UC Davis, published in PNAS and funded by NIH — is legitimate and genuinely promising.

But the post erased the distance between a laboratory finding and a clinical reality. JRT showed neuroplasticity effects in controlled experiments. That's not "brain repair." It's a preclinical compound that might, after years of clinical trials, lead to therapeutic applications. The post transformed careful early-stage science into a clickable miracle.


The Pattern: Why This Works Every Time

After dissecting over a dozen viral videos and posts, the formula became obvious. It's not random. It's engineered.

The 60/40 Rule

[Image: Minimalist infographic showing 60% verified facts in green and 40% fabricated claims in red]

The most effective misinformation isn't 100% false. It's roughly 60% true and 40% fabricated.

Pure lies are easy to dismiss. But when a video correctly names real researchers, cites real universities, and references real published studies, your brain builds a foundation of trust. By the time the fabrications arrive, your critical filter is already lowered.

This is what makes sciencewashing fundamentally different from old-school conspiracy theories about flat earths or faked moon landings. Those were easy to debunk because they contradicted basic observable reality. Sciencewashing operates within the structure of real science — it just bends the conclusions.

View Count as Social Proof

Here's the psychological trap I fell into initially: a video with five million views feels inherently more credible than one with five hundred. This is a well-documented cognitive bias — social proof. We unconsciously interpret popularity as validation. "Five million people watched this, so there must be something to it."

But view counts measure only one thing: how effectively a thumbnail and title triggered curiosity. They say nothing about accuracy. A video titled "AI Reveals Terrifying Truth About Stonehenge" will always outperform "Interesting Acoustic Properties of Neolithic Stone Circles" — even though the second title is closer to what the research actually shows.

The YouTube Course Pipeline

[Image: Isometric factory illustration — scientific papers enter on conveyor belt, identical sensationalist video thumbnails come out]

This isn't accidental. There's an entire economy behind it. Over the past five years, a booming industry of "YouTube automation" courses ($500-$2,000) has emerged, teaching a specific formula: find a trending topic, research just enough real facts to build credibility, construct a dramatic narrative arc with emotional peaks every 3-4 minutes, use AI voiceover tools, and publish at scale. Some courses teach students to produce 3-10 videos per day. A single viral hit can generate $2,000-$5,000 in ad revenue; a daily-upload channel can reach $10,000-$30,000 per month within a year. The barrier to entry is a laptop and a text-to-speech subscription.

When your metric is views per dollar spent on production, accuracy isn't just irrelevant — it's actively counterproductive.

The result is an industrial pipeline where semi-informed creators take legitimate research and feed it through a dramatization machine. I've seen channels with hundreds of videos across wildly different topics — ancient mysteries, quantum physics, deep sea creatures — all following the exact same structural template. Same pacing. Same dramatic pauses. Same "what scientists found next changed everything." The creator may not even understand the science they're distorting. They just need enough real references to make the narrative feel credible.

Emotional Architecture

Every viral piece I analyzed followed the same emotional arc:

  1. Hook — a provocative claim or question ("What if everything we knew about Stonehenge was wrong?")
  2. Credibility building — real names, real institutions, real studies
  3. Escalation — increasingly dramatic claims, each building on the trust established by the real facts
  4. The reveal — a conclusion so extraordinary it feels like forbidden knowledge
  5. Call to action — subscribe, share, "they don't want you to know this"

Notice what's missing: uncertainty. Nuance. The phrase "we don't know yet." Real science is full of hedging, caveats, and honest admissions of limited data. Sciencewashed content removes all of that, replacing it with false certainty.


The 5-Minute Fact-Check: A Practical Framework

I'm not a journalist. I'm a solutions architect who builds digital systems. But the same debugging mindset that helps me trace errors in complex software applies surprisingly well to information verification.

Here's the process I now use every time something sounds too compelling:

[Image: Five-step fact-checking framework infographic — Identify the Claim, Find the Source, Compare Conclusions, Check for Missing Caveats, Who Benefits?]

Step 1: Identify the core claim (30 seconds)

Strip away the drama. What is the video actually asserting? "Stonehenge was a psychological weapon" is the claim. "Scientists studied acoustic properties" is the evidence. These are different things.

Step 2: Search for the actual source (2 minutes)

If a video mentions a study, find it. Google Scholar, PubMed, or even a regular search with the researcher's name and institution. If you can't find the study, that's your answer. If you can find it, read at least the abstract — it's freely accessible even when the full paper sits behind a paywall.
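
Step 2 is also the easiest one to script. Below is a minimal sketch in Python (assuming the requests library is installed) that queries the Crossref REST API, a free public index of published papers, to check whether a cited study exists at all. The function name and the example query are my own illustration, not a tool used in the fact-checks above.

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"

def find_paper(keywords: str, author: str = "", rows: int = 5) -> None:
    """Ask the Crossref index whether any published paper matches a claimed study."""
    params = {"query.bibliographic": keywords, "rows": rows}
    if author:
        params["query.author"] = author
    resp = requests.get(CROSSREF_API, params=params, timeout=10)
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        print("No matching papers found - that is often your answer.")
        return
    for item in items:
        # Crossref returns titles and journal names as lists; guard against empty ones.
        title = (item.get("title") or ["(untitled)"])[0]
        journal = (item.get("container-title") or ["(no journal listed)"])[0]
        year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
        print(f"{year} | {journal} | {title}")
        print(f"       https://doi.org/{item['DOI']}")

# Example from Case Study 1: does the Salford acoustics study actually exist?
find_paper("Stonehenge acoustics scale model", author="Trevor Cox")
```

If a query like this returns nothing plausible, that's the same signal as an empty Google Scholar search: the study probably doesn't exist as described.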

Step 3: Compare conclusions (1 minute)

What does the original researcher actually conclude? Compare that to what the video claims. In the Stonehenge case, Trevor Cox concluded that the stone circle had interesting acoustic properties that may have enhanced ceremonies. The video concluded that Stonehenge was a "terror machine." The gap between those two statements is where the fabrication lives.

Step 4: Check for the missing caveats (1 minute)

Real science always includes limitations. "This is an observational study." "Sample size was limited." "Further research is needed." If the content you're watching has removed all uncertainty, something has been distorted.

Step 5: Ask "Who benefits?" (30 seconds)

Is this content designed to inform you or to keep you watching? Content that ends with a cliffhanger and a subscribe button has different incentives than content that ends with a bibliography and an honest "we don't fully understand this yet."

AI tools can accelerate parts of this process — searching databases, cross-referencing claims, finding original papers faster than manual searching. But the core skill isn't technical. It's the willingness to spend five minutes checking before you share something with your network.


Why This Matters Beyond YouTube

This isn't just about entertainment content. The same sciencewashing techniques appear everywhere — and the scale is growing.

During my fact-checking sessions, I also encountered a viral story about the "Buga Sphere" — a metallic object found in Colombia that UFO communities claimed contained alien messages. The object was real. The involvement of Jaime Maussan — a Mexican ufologist with a documented history of promoting debunked alien artifacts including fabricated Peruvian "alien mummies" — was also real. But the claims about variable weight, temperature anomalies, and AI-decoded alien messages? Pure speculation layered on top of a genuine archaeological curiosity. Same pattern. Different topic.

The techniques appear in business contexts every day. Vendors citing "studies" to sell software solutions — without linking the actual research. LinkedIn posts claiming "Harvard research proves" something about leadership or productivity — often misrepresenting or inventing the citation. Marketing materials that reference real data points but draw unsupported conclusions. Health influencers taking a single preclinical study and marketing it as a proven treatment.

As someone who builds digital systems, I've learned that the most dangerous failures aren't the obvious ones. They're the subtle corruptions — the API that returns mostly correct data with occasional silent errors, the analytics dashboard that presents real numbers in a misleading context. The same principle applies to information: the most dangerous misinformation isn't the obvious lie. It's the mostly-true narrative with strategically placed fabrications.

The internet didn't create misinformation. But it industrialized the production and distribution pipeline. When anyone with a laptop can produce 10 sciencewashed videos per day, and each video reaches millions of people who use view counts as a credibility signal, we have a systemic problem that individual media literacy alone won't solve.

I don't have a solution for the system. But I have one for myself: I stopped treating popularity as proof. I started treating compelling narratives with the same skepticism I apply to code that "works perfectly on the first try." And I developed a habit that costs me five minutes per claim but has saved me from confidently sharing fabrications dozens of times.

The next time you see a video with millions of views making an extraordinary claim backed by "real science," try the five-minute check. You might find a genuinely fascinating piece of research underneath the dramatization — like the Scottish origins of Stonehenge's Altar Stone, which is a remarkable discovery that needs no embellishment. Or you might find that the entire edifice is built on a foundation of invented AI analyses and misattributed quotes.

Either way, you'll know. And knowing is worth five minutes.

I'm a Digital Solutions Architect based in Slovakia — I build custom platforms and AI integrations. I apply the same verification rigor to building digital products. If you're looking for someone who questions assumptions before writing code, let's talk.