Simanaitis Says

On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff

RESISTING A.I. SLOP—AAAS SCIENCE’S VIEW

H. HOLDEN THORP DESCRIBES “Resisting A.I. Slop,” AAAS Science, January 1, 2026. Given that Thorp is Editor-in-Chief of all Science journals, he’s an excellent source of thoughtful information on this often hyped topic. Here are tidbits gleaned from his Editorial, together with a BBC view and my own A.I. counterstroke. 

AAAS Science’s Guardrails. Dr. Thorp writes, “Science’s most recent policies allow the use of large language models for certain processes without any disclosure, such as editing the text in research papers to improve clarity and readability or assisting in the gathering of references. However, the use of AI beyond that—for example, in drafting manuscript text—must be declared. And the use of AI to create figures is not allowed. All authors must certify and be responsible for all content, including that generated with the aid of AI.”

He continues, “Science also uses AI tools, such as iThenticate and Proofig, to better identify text that has been plagiarized or figures that have been altered.”

What’s more, “Over the past year,” Thorp says, “Science has collaborated with DataSeer to evaluate adherence to its policy mandating the sharing of underlying data and code for all published research articles. The initial results are encouraging in that of 2680 Science papers published between 2021 and 2024, 69% shared data.”

That is, these researchers provided others with sufficient means to assess the reproducibility of claimed results; this, an important feature of legitimate science.

Catching Errors? Losing Jobs? Thorp assesses, “Although AI is helping Science catch errors that can be corrected or elements that are missing from a paper but should be included, such as supporting code or raw data, its use and the evaluation of the output require more human effort, not less.”

“Indeed,” Thorp continues, “AI is allowing Science to identify problems more rigorously than before, but the reports generated by such tools must be assessed by people. Perhaps the panic over AI assuming jobs will be justified in the long run, but I remain skeptical. Most technological advances have not led to catastrophic job losses.”

Online Courses? Online Journals? Thorp observes, “Higher education seemed under threat 15 years ago with predictions that massive open online courses were going to put universities out of business. That didn’t happen, but online courses did become an important element of education and allowed universities to grow, not shrink.”

“The movement of journals to publishing online,” he describes, “provoked a similar result—it increased the size and scale of scholarly publishing. Acceptance of bombastic statements about the impacts of A.I. on scientific literature should wait for verification.” 

“Like many tools,” Thorp concludes, “A.I. will allow the scientific community to do more if it picks the right ways to use it. The community needs to be careful and not be swept up by the hype surrounding every A.I. product.”

Hear! Hear!

BBC’s Identifying A.I. Slop. Amanda Ruggeri offers “The ‘Sift’ Strategy: A Four-Step Method For Spotting Misinformation,” BBC, May 10, 2024. She quotes Marcia McNutt, president of the U.S. National Academy of Sciences (and a predecessor of H. Holden Thorp as Science Editor-in-Chief, 2013–2016): “Misinformation is worse than an epidemic. It spreads at the speed of light throughout the globe and can prove deadly when it reinforces misplaced personal bias against all trustworthy evidence.”

The “Sift” Method. “One of my favourites,” BBC’s Ruggeri describes, “comes with a nifty acronym: the Sift method. Pioneered by digital literacy expert Mike Caulfield, it breaks down into four easy-to-remember steps.” 

S: Stop. Don’t immediately share a post. Don’t comment on it. Instead, move on to the next step.

I: Investigate. Do an independent web search. Ruggeri suggests, “One that fact-checkers often use as a first port of call might surprise you: Wikipedia. While it’s not perfect, it has the benefit of being crowd-sourced, which means that its articles about specific well-known people or organisations often cover aspects like controversies and political biases.” 

I concur.

She also suggests checking the political leanings of sources. I’ve used allsides.com, adfontesmedia.com, and mediabiasfactcheck.com.

Image from adfontesmedia.com

Also, the suffix .edu is useful (though, of course, you need to be aware of school creds as well). 

F: Find Better Coverage. That is, digging a little deeper never hurts. As I’ve noted here, ain’t research fun!

T: Trace The Claim to Its Original Context. This, you’ll note, concurs with Thorp’s stressing the importance of identifying data and methodology. 

My Added Identification of A.I. “Support” Slop. Increasingly, many firms deploy A.I. robots as the first response to phone contact. My cue for recognizing this is my “A” word: “accommodating.” An overly polite attempt at accommodating a request, albeit with no real solution, is a clear giveaway of robotic strategy.

Upon recognizing this, I respond by striving to reach a human. Try “0,” “*,” or “#,” perhaps multiple times. Just wait. Or, if all else fails, try “Cancel my account.” See wikiHow.com.

And good luck. ds

© Dennis Simanaitis, SimanaitisSays.com, 2026

2 comments on “RESISTING A.I. SLOP—AAAS SCIENCE’S VIEW”

  1. ambitiousb408dbb73f
    January 24, 2026

    Dennis, have you heard the term “sludging”? It describes the frustrating rigamarole that some companies’ “customer service” contacts use to avoid handling customer issues. Help lines are not always helpful. They are often designed to make the customer so mad that they just give up, hang up, or “end the chat” and go away. For the business, that’s problem solved.

