Thursday, February 20, 2025

PolitiFact's first GOP "Pants on Fire" of 2025 is a stinker

Reminder: Just because we've been pressed for time to publish here at PolitiFact Bias doesn't mean PolitiFact has improved one iota.

PolitiFact took an unusually long time to publish a "Pants on Fire" rating targeting a partisan political figure in 2025. Unsurprisingly, the target was a Republican, and it was President Trump.

Also unsurprisingly, it's not a good fact check. As we have previously observed, the mere fact that a fact check has a "False" or "Pants on Fire" rating counts as a good sign that it's a bad fact check. That goes for bad ratings of Democrats as well as Republicans, though of course the latter get hit with more of the unfair ratings.

So, on to business. What have we got on Feb. 19, 2025?


Any time a fact-checked claim contains only one word quoted from the supposed claimant, it's worth looking into whether the fact checker distorted the claim.

PolitiFact does offer some context and a link to a video. Such links should indicate at what point in time the relevant words occur, by the way.

PolitiFact:

"I think I have the power to end this war, and I think it's going very well," Trump said [16:16 -ed] while answering reporters’ questions Feb. 18 at his Mar-a-Lago resort. "But today I heard (from Ukraine), ‘Oh well, we weren't invited.’ Well, you've been there for three years. You should have ended it three years — you should have never started it, you could have made a deal."

PolitiFact spares itself the trouble of reconciling Trump's statement above with his later statement that Putin should not have gone into Ukraine by simply never introducing the latter as relevant context. In other words, if readers don't know Trump said it, then PolitiFact never needs to reconcile the seemingly conflicting statements for them.

17:53 (transcript ours, bold emphasis added):

"Look, you have leadership--and I like him (Zelenskyy) personally--he's fine. But I don't care about personally, I care about getting the job done. You have leadership now that's allowed a war to go on that should've never even happened. Even without the United States. Look, we had a president who was grossly incompetent. He had no idea what he was doing. He said some very stupid things, like going in for portions and all of the things he made a lot of bad statements. But, uh, he's grossly incompetent and I think everyone knew that. But this is something that should've never happened. Would've never happened. And I used to discuss it with Putin. President Putin and I would talk about Ukraine and it was the apple of his eye, I will tell you that, but he never, there was never a chance of him going in, and I told him you better not go in, don't go in. And he understood that, and he understood it fully. But I'm only interested, I want to see if I can save maybe millions of lives, this could even end up in a World War III, I mean to be honest with you. You've been hearing now Europe is saying "Well, I think we're going to go in" and we're going to go. All of a sudden you're going to end up in World War III. Over something that should've never happened."

PolitiFact should have looked for clues in the context to help explain Trump's statement about "you should have never started it." With the added context, it seems plain Trump faults all the leaders involved, including Presidents Biden and Putin. Putin did end up going into Ukraine despite Trump's warnings, after all.

But why let context get in the way of a fact check "Gotcha!" story?

This gotcha story featured two bylines, Louis Jacobson and newbie Claire Cranford. Cranford's another from the Bill Adair-fed Duke pipeline.

Apparently nobody told her to scrub her social media of partisan commentary?


This is what you get when Big Journalism thinks "nonpartisanship" means not explicitly telling people who gets your vote.

Speaking of Bill Adair, he didn't see bias at PolitiFact then, and he doesn't see it now.


They're partisans who allow partisanship to affect their fact-checking.

We love this gem from PolitiFact:

We asked the White House for evidence that Ukraine had started the war and received no response.

Tell me you asked a loaded question in the name of journalism by, uh, telling me you asked a loaded question. That's how you help kill trust in your brand among moderates/independents as well as conservatives.

Here's how an objective journalist might ask about the quotation: "What did you mean when you said 'They started it' while talking about Ukraine wanting a place at the table? Were you saying they started the war?"

Objective journalists do not insert their own opinions into questions they ask of others. They ask questions that free the interviewee to expound.

In this case, Trump had already expounded, though perhaps we could wish for even greater specificity. PolitiFact either didn't look for it or chose to ignore it.


Afters

We also like how PolitiFact identifies "they" as "Ukraine" except in the title of the fact check, where "they" turns into "Volodymyr Zelenskyy."

It's as though Zelenskyy chose "they" as one of his pronouns.

More Afters

It's hilarious what you can find in PolitiFact's work when you dig even a little. PolitiFact has a history of finding distinctly partisan experts to weigh in on its fact checks. Expert Erik Herron didn't have any partisan campaign giving history, but twiX ratted him out for canvassing for Kamala Harris.




Monday, January 13, 2025

PolitiFact's "Pants on Fire" bias in 2024

For years, PolitiFact Bias has tracked the proportion of false ("False" plus "Pants on Fire") statements PolitiFact rated "Pants on Fire." As PolitiFact has never established an objective distinction between the two ratings, we infer that the difference between the two is substantially or wholly subjective. That makes this dividing line perhaps the best means of using PolitiFact's own ratings to measure its political bias.

As we are looking at proportions and not raw numbers for the bias measurement, the results cannot be dismissed on the basis that Republicans supposedly lie more.

The Tale of the Tape in 2024

Graphs-a-plenty this year!

We'll start with the dual graph of the PoF Bias number along with the story selection proportion number. The PoF Bias number could be expressed in either of two ways. Because the numbers have pretty consistently shown an anti-Republican/pro-Democrat bias, we express it so that a number greater than 1 shows anti-Republican bias. By the same token, a PoF Bias number less than 1 shows the PoF bias harmed Democrats. Our chart shows that occurring in four different years (2007, 2011, 2013, 2015). But it's important to point out that the state franchises accounted for the apparent relative evenhandedness in the latter three years. We tracked PolitiFact National separately, and only 2007 and 2017 showed the anti-Democrat bias. The year 2007 counts as a statistical anomaly, we would say; PolitiFact treated the "Pants on Fire" rating as a joke at first.


The chart shows that after 2007 Republicans consistently had more false ratings than Democrats. In 2024 that preference for GOP falsehoods fell just short of the record for 2020. For both years, PolitiFact gave the GOP more than five times the number of false ratings it gave Democrats.

Because Republicans lie more?

Not so fast! Here's where the PoF Bias number shows its value. The PoF Bias number compares the percentages of false statements rated "Pants on Fire" for each party. PolitiFact has never offered an objective means of separating ridiculously false statements from those that are merely false. As the number represents a proportion, it is immune from influence by the sheer number of false ratings. Put another way, it's entirely independent of the Selection Proportion number.

In 2024, PolitiFact was over six times more likely to (subjectively) rate a Republican false claim "Pants on Fire" than a false claim from a Democrat. That figure easily eclipsed the old record of 4.58 times more likely, set in 2020.
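
For readers who want the arithmetic spelled out, here is a minimal sketch of how we compute the two numbers. The counts below are invented purely for illustration; they are not PolitiFact's actual tallies for any year.

# Minimal sketch of the Selection Proportion and PoF Bias calculations.
# All counts are hypothetical, chosen only to show the arithmetic.

def pof_share(pants_on_fire: int, false_only: int) -> float:
    """Share of a party's total false ratings ("False" + "Pants on Fire")
    that received the "Pants on Fire" rating."""
    return pants_on_fire / (pants_on_fire + false_only)

# Hypothetical yearly counts
gop_pof, gop_false_only = 30, 70   # 100 total false ratings for Republicans
dem_pof, dem_false_only = 2, 18    # 20 total false ratings for Democrats

# Selection Proportion: raw ratio of total false ratings, GOP vs. Democrats
selection_proportion = (gop_pof + gop_false_only) / (dem_pof + dem_false_only)

# PoF Bias number: ratio of the two parties' "Pants on Fire" shares.
# Because it compares proportions, it does not change just because one
# party racks up more false ratings overall.
pof_bias = pof_share(gop_pof, gop_false_only) / pof_share(dem_pof, dem_false_only)

print(selection_proportion)  # 5.0 -> GOP drew five times as many false ratings
print(pof_bias)              # 3.0 -> a GOP false claim was three times as likely
                             #        to be deemed "ridiculous"

With those made-up counts, the GOP draws five times the false ratings, yet the PoF Bias number of 3 measures something different: how much more readily a false claim gets bumped up to "ridiculous" when it comes from a Republican.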

Democrats Lie Less?

Interestingly, the recent-year spikes in the PoF Bias number are not driven by increases in "Pants on Fire" ratings given to Republicans. Those have actually moderated. The higher bias number stems from PolitiFact having increasing difficulty bringing itself to rate a Democrat "Pants on Fire." In 2010, PolitiFact meted out 31 "Pants on Fire" ratings to Democrats. That number has shrunk pretty steadily over time, with the Democrats setting a new record for PoF avoidance in 2024. Only one false Democrat claim received a "Pants on Fire" rating, just 4 percent of the total false ratings.




PolitiFact showed a similar distaste for giving Democrats false ratings of any kind. "False" and "Pants on Fire" combined fell from a peak of 135 in 2012 to 25 in 2024. That figure was the lowest for any presidential election year over PolitiFact's entire history.

Republicans Lie More and Less?

Probably thanks to social media dollars drawing fact checkers away from politicians and toward fake news and social media hijinks, PolitiFact is finding fewer false claims from Republicans. No, we're not kidding.



Check the presidential election year peaks.

2008: 55 (PolitiFact's infancy)
2012: 247 (Good ol' Romney)
2016: 237 (Dawn of the Trump)
2020: 253 (Day of the Trump)
2024: 132 (Return of the Trump)

In 2024, PolitiFact gave the GOP barely more than half the false ratings it did in 2020. When former PolitiFact editor Bill Adair runs around in support of his new book decrying an explosion in political falsehoods, what is he talking about? PolitiFact has apparently cut Democrat falsehoods down to almost nothing and cut Republican falsehoods nearly in half.

No, of course we don't believe that. A fool would believe that. We say the "Truth-O-Meter" numbers give us information about PolitiFact, not about the figures on whom PolitiFact stands in judgment.

Saturday, December 28, 2024

Why does PolitiFact struggle with simple stuff?

 PolitiFact Bias started out and continues as an effort to improve PolitiFact.

We understand PolitiFact's liberal bloggers disliking criticism. But c'mon, it's for your own good. And why the struggle with simple stuff?

Moments ago, I was dipping into some search results relating to a potential research project. While reviewing a PolitiFact story I noticed it had an update notice.




"An update," I think. "I wonder what was updated?"

So I looked through the story for the update. Then I looked for the update again.

Then I cheated and tried for an Internet Archive comparison. The oldest archived page was already updated, so that was initially a dead end.

I looked through the story yet again without finding the update.

What does (did) PolitiFact's statement of principles say about updates?

Updates – From time to time, we add additional information to stories and fact-checks after they’ve published, not as a correction but as a service to readers. Examples include a response from the speaker we received after publication (that did not change the conclusion of the report), or breaking news after publication that is relevant to the check. Updates can be made parenthetically within the text with a date, or at the end of the report. Updated fact-checks receive a tag of "Corrections and updates."

The update announcement at the top should have featured the date it was added. And, as PolitiFact's supposed principles state, the story should have had a "Corrections and Updates" tag added. There's no such tag.

My attempt to access PolitiFact's archived statement of principles reminded me that PolitiFact's update to its website might hide older pages from ordinary Internet Archive searches. So I went to PolitiFact's main page as archived on the date of the article. The article was highlighted on the main page, and clicking on it took me to the page as archived on March 29, 2019.

The page had no update announcement. 

Now we're cookin'.

Comparing March 29 to April 1 revealed five added paragraphs from (liberal blogger) Jon Greenberg.

So, PolitiFact updated the story and did not inform its readers on the specifics of the update. This has the effect of a stealth edit, which counts as a no-no in journalism.

This Is So Minor! Who Cares?

Yes, this case is fairly minor. But following published principles, in journalism as in anything else, should count as standard practice. Inconsistent application of principles empties the term "principles" of the meaning it ought to have.

As to who cares, the public ought to care because journalism organizations have principles to establish their trustworthiness. Following the principles provides evidence of trustworthiness. Failing to follow principles offers evidence of untrustworthiness.

The International Fact-Checking Network, in its supposed role in holding fact-checking organizations accountable, also ought to care. But I could send a correction request to PolitiFact asking to have this problem corrected and PolitiFact probably would not bother to fix it. I say that based on past experience. Moreover, after PolitiFact failed to fix the error for weeks, I could send this example to the International Fact-Checking Network as an example of PolitiFact failing to scrupulously follow its corrections policy and the IFCN would ignore it (see here).

Meanwhile, the IFCN (owned, as is PolitiFact, by the Poynter Institute) will continue to assure the public that fact-checking orgs it has "verified," like PolitiFact, scrupulously follow their corrections policies.

These journalists who want our trust are telling us falsehoods.

Why wouldn't it be better to fix stories so that they live up to published principles? If they don't have time to follow principles on corrections and updates (among other things), should we expect them to have time to live up to their principles in reporting and fact-checking?

We believe we haven't been able to help PolitiFact or the IFCN much because they don't want any help.

Thursday, December 19, 2024

The PolitiFact Wisconsin story

This article is a companion to Bryan's forthcoming review of former PolitiFact editor Bill Adair's book, "Beyond the Big Lie."

In my review of Bill Adair's book I refer to the way PolitiFact's state operations like PolitiFact Wisconsin tended to favor Republicans during years Adair excluded from his dataset. Readers of that Substack article may find this explanation helpful.

Research published here at PolitiFact Bias has examined the bias PolitiFact applies in the use of its "Pants on Fire" ratings. The difference between "False" and "Pants on Fire" appears entirely subjective and based squarely on the term "ridiculous." Until PolitiFact defines "ridiculous" in a reasonably objective way, its descriptions to date strongly encourage the view that the term is subjective.

Until 2020, a "Wisconsin" tag on a PolitiFact story dependably indicated that staffers from PolitiFact's affiliate performed the fact checks. We stopped tracking state data after 2020 because the stories could as easily come from PolitiFact National staffers. We also had reason to believe the state affiliates were no longer in charge of determining the "Truth-O-Meter" ratings.

"Pants on Fire" Bias at PolitiFact Wisconsin

Wisconsin was unusually tough on its Democrats compared to most other PolitiFact operations. Whereas PolitiFact National gave Democrats a "Pants on Fire" for about 17 percent of their false statements from 2007 through 2019, PolitiFact Wisconsin gave them over 27 percent, slightly higher than the 27 percent average Republicans received from PolitiFact National.

Raw Numbers at PolitiFact Wisconsin

Adair's claim that Republicans lie more doesn't rest on percentages, though. Adair sticks with raw numbers of disparaging ratings.

There, too, PolitiFact Wisconsin moderated the bias of the larger organization.

Republicans "earned" about 40 percent more "False" plus "Pants on Fire" ratings than did Democrats from PolitiFact Wisconsin. In contrast, PolitiFact National gave Republicans over 300 percent (3x) more such ratings than Democrats.

The tendency in Wisconsin, as this graph helps show, matches that for PolitiFact as a whole. It isn't that Republicans lie more. It's that Democrats lie less and less.


Where did the Democrat lies go? Did PolitiFact and other fact checkers force them to clean up their act? Did fact checkers at long last realize that they had been too tough on Democrats early on?

Did narrative increasingly conquer objectivity?

Saturday, November 2, 2024

An out-of-context and prejudiced "In Context" feature from PolitiFact

PolitiFact advertises its "In Context" feature as a means of allowing readers to decide, with the surrounding context in view, the meaning of a politician's words.

The idea's fine in principle. But it takes principles to pull off an idea that's fine in principle, and PolitiFact has a tough time with that. Consider PolitiFact's Nov. 1, 2024 feature about comments made by former President Donald Trump.


The third paragraph reminds readers of the ostensible purpose of the "In Context" feature:

With widespread interpretations of Trump’s remarks, we’re using our In Context feature to let voters review his comments in their original context and reach their own conclusions. 

We say this "In Context" passes better as a prejudiced anti-Trump editorial.

Why do we say that?

Even the part of the comment taken completely out of context puts PolitiFact's headline in question. 

Trump, appearing with Tucker Carlson (via PolitiFact's story):

Later, Trump added "I don’t blame (Dick Cheney) for sticking with his daughter, but his daughter is a very dumb individual, very dumb. She is a radical war hawk. Let's put her with a rifle standing there with nine barrels shooting at her, OK? Let's see how she feels about it. You know, when the guns are trained on her face."

What would support the conclusion that Trump describes a "firing squad"? Trump, after all, doesn't use that term in the out-of-context quote or elsewhere.

One element of the statement might reasonably support the "firing squad" idea: Trump describes a number of guns (nine barrels) trained on Cheney's face.

What elements of the description fail to support the "firing squad" idea?

  1. Cheney has a gun in Trump's description. What "firing squad" execution offers a gun to the target?
  2. The guns are aimed at Cheney's face. Firing squads traditionally aim at the heart.
  3. The alternative explanation, that Trump was talking about war hawks typically not needing to face battle themselves, has nothing that argues against it unless we count anti-Trump prejudice.

The text of PolitiFact's story links to an out-of-context version of Trump's comments. And though PolitiFact's source list includes a longer clip with the full context, PolitiFact left out what Trump said after "face":

"You know, they're all war hawks when they're sitting in Washington in a nice building saying 'Aw, gee, well, let's send, uh, let's send 10,000 troops right into the mouth of the enemy.' But she's a stupid person." (transcript ours, comments start at 7:36)

Obviously, Trump expressed a common theme among politicians, that war hawks are not the ones facing the bullets.

So, even though PolitiFact gathered no reasonable evidence showing Trump was referring to Cheney facing a "firing squad," the fact checkers (actually liberal bloggers) put their own biased interpretation right in the headline to prejudice their readers.

Headlines often aren't written by the person who wrote the story, but we can blame biased journalist Amy Sherman directly because her story pushes the same conclusion: "Trump’s comments about Liz Cheney and a firing squad drew the most public attention."

There's no solid evidence Trump was referring to a firing squad and plenty to suggest he wasn't.

PolitiFact's story contains obvious signs of liberal bias and fails the supposed objective of the "In Context" feature by pushing a conclusion on readers.

Thursday, October 17, 2024

Bill Adair: "I lied because I was trying to show that we were impartial"

At PolitiFact Bias we never believed that the liberal bloggers at PolitiFact didn't keep score by party. We were willing to believe that PolitiFact didn't keep a physical or digital tally showing the GOP got the worst of the "Truth-O-Meter," but we simply didn't buy the notion that staffers weren't keeping score in their heads. And we would count that mental scorekeeping as a factor likely to feed bias effects such as confirmation bias.

PolitiFact's founding editor Bill Adair has been making the rounds stumping for his new book, and has now admitted he was lying about not keeping score.

We created a shortened clip to emphasize his confession and the context.

 

 Readers may view Adair's full appearance here.

Adair:
Well, I was lying. We did keep score. And, uh, we didn't keep score by party but we kept score and still do, PolitiFact does, um, by individual. So you could easily look through the prominent Republicans and compare them to the prominent Democrats and see that (2012 C-SPAN caller) Brian was right.

Um, but I lied, um, and I lied because I was trying to show that we were impartial.

Of course Adair has also said that part of his original vision for PolitiFact was giving readers the ability to compare politicians' fact-checking records. That accords with what we have said all along here at PolitiFact Bias: PolitiFact treats political fact-checking as anti-Republican editorial. Subjective ratings are editorials, and aggregating the ratings magnifies the editorial effect.

Afters:

I belatedly tried to get through to C-SPAN to ask Adair a question. I wanted his comment on the fairly rapid decline in "Pants on Fire" ratings given to Democrats. Should PolitiFact take credit for making Democrats more honest over time, or is this just one more piece of evidence that PolitiFact fact checkers lean left?



As the chart shows, in 2012 under Adair PolitiFact identified 26 "Pants on Fire" claims from Democrats. In the past five years PolitiFact has only given out 11 total. "Republicans lie more" doesn't explain it. This is Democrats lying less and less according to the "Truth-O-Meter."

Tuesday, October 15, 2024

PolitiFact's founding editor wrote a book

PolitiFact's founding editor, Bill Adair, will soon have a book in stores revealing that Republicans lie more!

What a surprise!

Of course, no Democrat is surprised, for PolitiFact has been implicitly spreading that message for years, as we and figures like Eric Ostermeier have long pointed out.

Republicans and about half of moderates aren't surprised because they do not regard fact checkers like PolitiFact as unbiased actors.

We'll have more to say about the book after we read it, but Adair's pre-publication interviews promise the book will provide great entertainment. Adair apparently bases his claim that Republicans lie more on fact checker ratings, plus anecdotal evidence from various Republicans who have turned on their old party. It's not exactly what one would call a proper fact-checking approach to the question.

The book is called "Beyond the Big Lie." 

Don't bother reading it unless it's your duty as a researcher. We'll get around to pointing out its errors and weaknesses here or at Zebra Fact Check or even my new Substack. Or all three.


Thursday, July 25, 2024

PolitiFact's "border czar" two-step

Unfortunately it surprises us not at all to see faux fact checkers PolitiFact aligning with the media denial that presumptive Democratic presidential nominee and current Vice President Kamala Harris was appointed "border czar" by President Biden.





What is this "border czar" Republicans speak of?

PolitiFact hilariously avers that Republicans invented the title as applied to Harris (bold emphasis added):

Vice President Kamala Harris might soon get a new official title: 2024 Democratic presidential nominee. In the meantime, Republicans have revived a title they gave her in 2021: "border czar."

But it wasn't all that long ago that PolitiFact knew that it was okay to call somebody "czar" regardless of whether the president specifically bequeathed that title. PolitiFact gave Republican John McCain a "True" rating for claiming President Obama had more czars than the Romanovs.








But Obama appointed all those people as czars, right?

Eh, not so much:

So who exactly qualifies as a czar? As best we can tell, it's whenever someone in the media says so. You can identify a guy as "Assistant to the President for Science and Technology, Director of the White House Office of Science and Technology Policy, and Co-Chair of the President’s Council of Advisers on Science and Technology," but it's a lot easier on everyone to just say "Science Czar." And "Special Master" sounds like Richie Rich's best friend.

So the title of czar is largely arbitrary media shorthand for "It's this person's job to make sure (blank) goes right." And we think everyone can agree that "Terrorism Czar" sounds way cooler than "Deputy National Security Adviser for Homeland Security." 

PolitiFact spent several paragraphs explaining how "czar" counts as an imprecise term that the administration tended to avoid, culminating in the two paragraphs we quoted. Back in 2009, a person was a czar if the term was applied, though PolitiFact hints that it's a media prerogative.

Let us circle back and try to find out how Harris ended up with the "czar" title. PolitiFact now says Republicans did it.

BBC (March 24, 2021):

Announcing Ms Harris's appointment as his immigration czar, Mr Biden told reporters and officials at the White House: "She's the most qualified person to do it, to lead our efforts with Mexico and the Northern Triangle [Honduras, Guatemala and El Salvador], and the countries that are going to need help in stemming the movement of so many folks - stemming the migration to our southern border".

The BBC was calling Harris the "border immigration czar" immediately after Mr. Biden put her in charge of "stemming the migration to our southern border." Does PolitiFact have evidence the BBC took its cue from Republicans?

It was natural to link Harris to the "border czar" title. Why? Because Harris' appointment to her role in March 2021 barely preceded the departure of "border czar" Roberta S. Jacobson at the end of April 2021. If the media declined to see Harris as the new "border czar" replacing Jacobson, then the Biden administration was leaving the "border czar" position open while the border problem only worsened.

 This seems like an easy analysis. The "border czar" was going to look bad because the border would not get fixed. So, Harris took on a role fairly called "border czar" but insisted from the first that it wasn't really the "border czar" position.

So far as we can tell, the Biden administration didn't call Jacobson its "border czar" any more than it did Harris. And, though their official titles differed, Jacobson's job description was pretty much the same as Harris' (via Jewish Insider, bold emphasis added):

Now, Jacobson has joined President Joe Biden’s National Security Council as the administration’s “border czar,” tasked with stemming the increasing tide of migrants arriving at the United States’s Southern border. As the special assistant to the president and coordinator for the Southwest border, Jacobson is tasked with engaging diplomatically with Mexico and the “Northern Triangle” nations of Honduras, Guatemala and El Salvador, from where most migrants originate. 

Harris took over the highlighted part of the job the "border czar" was doing. But Harris didn't want the "border czar" title and the baggage that carried. And the media today are eager to oblige, including PolitiFact.

Correction Aug. 3, 2024: In the section quoting the BBC, we stated that BBC called Harris "border czar" when in fact the quotation calls Harris "immigration czar." We've drawn attention to our mistake by using a strikethrough of "border" and adding "immigration." We apologize for the error.