From Tea Leaves to AI: Why Today’s High-Tech Predictions Are So Dangerous


Editors’ note: Welcome to CNET’s new series of guest columns called Alt View, a forum for a diverse array of experts and luminaries to share their insights into the rapidly evolving field of artificial intelligence. For more AI coverage, check out CNET’s AI Atlas.


“How are you using AI?” I asked a class full of executives. Some of the answers I had heard before: health professionals using it to read medical images; managers using it to draft emails; a retail company using it to take notes in meetings before giving up on it once they realized the AI confabulated and had no understanding of context. And then, a gem. There’s almost always a gem. 

“I use chatbots as fortune tellers,” said a middle-aged Asian woman in a beige cardigan and white sneakers. I would later learn that she had built a billion-dollar empire. A nervous rustle spread throughout the room as people shifted uncomfortably in their seats. “Just like we used to read tea leaves, you can ask AI about the future, and it can be surprisingly accurate. For example, it recently correctly predicted a 2% rise in the stock market,” the student said, nodding and looking around the room while her classmates avoided eye contact.


Today’s ruling soothsayers are no longer astrologers, astronomers, sociologists or even economists; they are computer scientists, data analysts and engineers. Algorithms are the new tea leaves, animal entrails and stars through which we hope to catch a glimpse of the future. 

We tend to associate predictions with knowledge, but all too often, they are closer to the realm of power. Prophecies are the boxing ring in which fights over the future take place. Our expectations bend the social world toward our predictions. When someone forecasts that the world will be a certain way, they are commanding that others obey their wishes and bring that world about. Even though we have been using predictions for thousands of years to make some of the most important decisions of our lives, we have dedicated remarkably little thought to the deeper questions about prophecy. Thousands of books have been written about how to predict, but none about the ethics of prediction.

Prediction has become a major industry. Take, for instance, platforms like Polymarket, which aggregate public expectations about future events, collecting massive amounts of data and creating influence. If 58% of users believe that the Oklahoma City Thunder are going to win the NBA Championship title, why would you bet against the majority? But the betting on these platforms extends far beyond sports or even reality TV. It has turned political instability, natural disasters and human suffering into a spectacle, dehumanizing the real-life victims, gamifying life.
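The arithmetic behind that intuition is simple. In a minimal sketch (the 58% figure echoes the hypothetical above; the contract structure is a simplified, generic prediction-market design, not any specific platform's), a share price is read as the crowd's probability, and betting against the majority only has positive expected value if you believe the crowd is miscalibrated:

```python
# Hypothetical numbers: how a prediction-market share price is read as a
# crowd probability, and when "betting against the majority" makes sense.

price = 0.58  # a YES share costs $0.58 and pays $1 if the event happens

implied_probability = price  # the market's aggregate belief: 58%

def expected_profit(your_probability, price):
    """Expected profit per YES share, under your own probability estimate."""
    win = your_probability * (1.0 - price)        # gain when the event occurs
    lose = (1.0 - your_probability) * price       # stake lost when it doesn't
    return win - lose

print(expected_profit(0.58, price))  # 0.0 — at the market's own belief, no edge
print(expected_profit(0.70, price))  # positive — you disagree with the crowd
```

The sketch also shows why these markets "create influence": the posted price is itself a forecast that shapes what everyone else believes.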

Today, predictions have evolved into weapons of power that justify value-laden decisions under the pretense of facts, but predictions are never facts. Facts belong to the present and the past. An assertion about the future can be many things — an estimate, a desire, a warning — but never a fact.

What makes the future the future is that it hasn’t yet happened. What hasn’t come to pass doesn’t exist, and there are no facts about what doesn’t exist. Yet we’re using prediction more than ever with AI, prediction markets and experts talking about the future. 

The fantasy of defeating uncertainty

Pierre-Simon Laplace had a dream, often referred to as Laplace’s demon. It occurred to him that, with enough data and compute, it would be possible to achieve complete knowledge. If you knew the exact location and momentum of every particle in the universe, as well as all the laws of nature, then you would be able to predict the future with perfect accuracy. Uncertainty would be defeated at last. As Laplace put it:

Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it — an intelligence sufficiently vast to submit these data to analysis — it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes.

Supporters of AI may not put it in these words, but what they seem to suggest when they enthuse about the power of machine learning plus vast amounts of data is that these technologies are bringing us tantalizingly close to realizing Laplace’s demon. If we can collect every single data point, the thought goes, and we can build enough compute to analyze that data, we can forecast what was previously unforeseeable. Such predictive power promises to revolutionize all fields of knowledge, from medicine to climate change and politics. 
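Laplace's dream can be caricatured in a few lines of code. In a toy deterministic world (free particles, no forces — the masses, velocities and timestep here are all hypothetical), exact knowledge of every position and momentum plus the laws of motion makes the future a matter of mere computation:

```python
# A toy illustration of Laplace's demon: in a purely deterministic system,
# complete knowledge of the initial state plus the laws of motion yields
# the future by forward computation. Free particles only, for simplicity.

def step(positions, velocities, dt):
    """Advance every particle one timestep under its (constant) velocity."""
    return [p + v * dt for p, v in zip(positions, velocities)], velocities

positions = [0.0, 1.0, 4.0]    # exact initial positions
velocities = [1.0, -0.5, 2.0]  # exact initial momenta (unit mass)

for _ in range(100):           # "the future ... would be present to its eyes"
    positions, velocities = step(positions, velocities, dt=0.01)

print(positions)  # entirely determined by the initial state
```

The catch, of course, is the premise: outside toy worlds we never have the exact state of every particle, and chaotic systems amplify even tiny measurement errors.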


Driven by this fantasy, the quantifiers are tracking your every move; recording, tabulating and exhaustively analyzing your pleasures and vices; torturing your data until it screams out in confession. You are being tracked while you drive, search online, do sports, have sex, drink alcohol, do drugs, travel, sleep, talk with your friends and family, spend time on social media, go to the doctor’s office, play online games, read, watch television and breathe.

We manage and discuss our fears in quantified terms: the probability of getting cancer, of getting robbed, of earthquakes happening, of another pandemic, of climate change making our world unlivable, of another world war.

The unbridled optimism about defeating uncertainty through AI is understandable. Computers, data and statistics have brought incredible breakthroughs. The Bombe, an electromechanical machine, broke the Nazis’ Enigma cipher. In medicine, regression analysis was instrumental in identifying risk factors for diseases. Mainframe computers delivered new insights about business; centralized data processing brought real-time transaction processing and scalability. Manufacturing firms gained the ability to monitor production efficiency across entire supply chains, identifying bottlenecks and improving resource allocation. 

Personal computers emerged in the 1980s. The 1990s and 2000s saw the rise of the internet and cloud computing, further increasing data availability and processing power. The 2010s marked a turning point with the practical application of deep learning, fueled by big data and improved hardware like GPUs. Advances in algorithms paved the way for machine learning — prediction machines. 

AI and prediction: a power play

With prediction come all the patterns of prophecy and power that paper our history books. The difference is that AI is prediction on steroids, and we are using it not only on the battlefield and in the doctor’s office but everywhere, from the office to the classroom, the courtroom, our roads, our love lives and beyond. 

Machine learning algorithms are predictive machines. That is all they do, whether they are engaging in regression, classification or language. When a machine learning system translates text, it is predicting the most likely translation based on millions of examples of previous translations. When it recognizes wolves in photos, it does so by predicting the probability that a given image contains a wolf, based on patterns it learned from thousands of images labeled wolf and not-wolf. When a large language model answers a question, it is predicting what a human being would say in its place, based on the statistical analysis of books, online forums, social media and so forth.
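That claim — prediction is all they do — can be made concrete at toy scale. The sketch below is a bigram model, the simplest possible relative of a large language model's next-token prediction: it "answers" purely by counting which word most often followed the current one in its training text (the corpus here is invented for illustration):

```python
# A minimal sketch of "prediction is all they do": a bigram model that
# predicts the next word purely from counts over its training text — the
# same statistical principle, at toy scale, behind next-token prediction.
from collections import Counter, defaultdict

corpus = "the wolf howls . the wolf runs . the dog runs".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def predict(word):
    """Return the statistically most likely next word."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))   # 'wolf' — it followed 'the' twice, vs. 'dog' once
print(predict("wolf"))  # 'howls' or 'runs' — an even split in the data
```

There is no understanding of wolves or dogs anywhere in this code, only frequencies; scale the corpus and the model up by many orders of magnitude and the principle is unchanged.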

It’s no wonder that an “oracle” is a technical term in the context of machine learning. An oracle represents the best possible performance that could be achieved; it’s an idealized function that always provides perfect predictions.

The triumph of machine learning is a corporate victory much more than a scientific one. Idealists might find it anticlimactic, even depressing. Someone wanting to put it crassly might say that we simply threw money at the problem. 

What is most remarkable about the success of machine learning is how unremarkably it came about. “What’s disappointing,” said Michael Wooldridge, professor of AI at Oxford, to a group of my MBA students, “is that it didn’t happen as a result of a scientific breakthrough.” He looked around the room to make sure the weight of his words had landed. 

From the 1960s to the early 2000s, the results from neural networks were not very impressive. The symbolic AI gang was winning the race and the grants — until it wasn’t. Something changed: We got more data and more compute, and machine learning took off. In the span of a few years, automatic translation, for instance, went from being unusable to being comprehensible, then good enough to help clueless tourists find their way with no knowledge of the local language. It’s now good enough that I admit I have sometimes preferred an automatic translation to the suggestions of a professional translator who had a weakness for verbosity. 

The amazing things that machine learning can do didn’t happen because of greater understanding. It didn’t need any genius. The picture is bleaker than an uninspiring lack of creativity. The means through which such brute force in data and compute was acquired involved theft, the exploitation of vulnerable people, a ferocious use of natural resources and building an architecture of mass surveillance, to name but a few sins.

We might be centuries away from the oracles and astrologers who predated algorithms, but prediction is still mostly about power. Power is how you get predictive algorithms, and more power is what they grant you in return.

From Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI by Carissa Véliz. Reprinted by permission of Doubleday, an imprint of the Knopf Doubleday Publishing Group, a division of Penguin Random House LLC. Copyright © 2026 by Carissa Véliz.






Taxpayers often submit refund claims when they discover that they overpaid their taxes. Taxpayers usually do this by submitting a formal refund claim using the IRS’s prescribed forms. But this is not always required.

In many cases, taxpayers will submit so-called “informal refund claims” to the IRS during the course of an IRS audit. The IRS treats these informal claims as refund claims, as if the proper tax forms had been filed. Given that the tax forms are often not used for informal claims, there may be less certainty as to what the taxpayer’s claim entails. The informal claim itself may consist of little more than various business records, calculations or a myriad of other records that the taxpayer submits to the auditor.

This leads to the question as to whether the “variance doctrine,” which can prohibit taxpayers from litigating certain claims in court if they differ substantially from the taxpayer’s position on audit, applies to informal refund claims. The recent Express Scripts, Inc. v. United States, No. 4:21-cv-00035-HEA (E.D. Mo. Feb. 24, 2025) case provides an opportunity to consider this question.

Facts & Procedural History

The taxpayer in this case is a pharmacy benefit manager. It processes prescription drug claims for health plan sponsors and operates mail-order pharmacies.

During an IRS examination, the taxpayer submitted informal claims to the IRS auditor for Section 199 domestic production tax deductions that it omitted from its originally-filed tax returns.

As part of this process, the company provided the IRS with detailed workpapers and memoranda categorizing various revenue streams. These documents specifically identified certain “rebate” revenue and portions of their “mail claims” revenue (those manually entered into their system) as non-qualifying revenue streams that should be excluded from their Domestic Production Gross Receipts (“DPGR”) calculations. The taxpayer took the same positions in the formal administrative refund claims they later filed with the IRS for refunds for the years 2010, 2011, and 2012.

Nearly a decade after the initial claims, the taxpayer determined that both the rebate revenue and the manually entered mail claims qualified for the Section 199 deduction. The taxpayer filed suit seeking refunds of federal income taxes for tax years 2010, 2011, and 2012, claiming it properly qualified for the Section 199 tax deduction for its rebate revenue and manually entered mail claims.

The government moved to dismiss the portions of the refund claims relating to rebate revenue and manually entered mail claims, arguing that the taxpayer was barred by the “substantial variance doctrine” from including revenue streams in tax litigation when they had specifically excluded them during the administrative claims process.

The Framework for Tax Refund Claims

Section 7422(a) permits taxpayers to sue the government for tax refunds once its prerequisites are met. This is one of the permissible means of litigating a tax issue.

Section 7422 states that no suit for tax recovery can be maintained in any court “until a claim for refund or credit has been duly filed with the Secretary, according to the provisions of law in that regard, and the regulations of the Secretary established in pursuance thereof.”

This is the foundation for what courts often call the “pay first, litigate later” system for tax disputes. Under this framework, taxpayers must first pay the disputed tax, then file an administrative refund claim with the IRS, and only afterward can they pursue litigation if the IRS denies their claim or fails to act within six months.

The treasury regulations provide specific requirements for these administrative refund claims. Treasury Regulation § 301.6402-2(b) states that a claim “must set forth in detail each ground upon which a credit or refund is claimed and facts sufficient to apprise the commissioner of the exact basis thereof.” This regulation serves as the foundation for the substantial variance doctrine that limits what taxpayers can argue once they get to court.

What Is the Substantial Variance Doctrine?

The substantial variance doctrine operates as a jurisdictional limitation on tax refund litigation. As articulated in Lockheed Martin Corp. v. United States, 210 F.3d 1366, 1371 (Fed. Cir. 2000), which involved a research tax credit, a taxpayer is barred from presenting claims in a tax refund action that “substantially vary” the legal theories and factual bases set forth in the tax refund claim presented to the IRS.

The doctrine has two distinct branches: one addressing legal theories and another addressing factual bases. For legal theories, the rule states that “any legal theory not expressly or impliedly contained in the application for refund cannot be considered by a court in which a suit for refund is subsequently initiated.” This means taxpayers cannot pursue entirely new legal arguments in court that weren’t presented to the IRS.

The factual variance branch, which was at issue in the Express Scripts case, prohibits taxpayers from substantially varying the factual bases raised in their refund claims. The rule is not absolute: minor factual variations are permitted, but taxpayers cannot introduce entirely new factual elements that the IRS never had an opportunity to consider.

Why Does the Variance Doctrine Exist?

The substantial variance rule serves three primary purposes. First, it gives the IRS notice as to the nature of the claim and the specific facts upon which it is predicated. This notice function ensures that the IRS understands exactly what the taxpayer is claiming and why.

Second, it gives the IRS an opportunity to correct errors administratively. This purpose reflects the preference for resolving tax disputes at the administrative level rather than through costly litigation.

Third, it limits any subsequent litigation to those grounds that the IRS had an opportunity to consider and is willing to defend. This purpose helps ensure that courts aren’t faced with entirely new claims that the IRS never had a chance to review.

These purposes reflect the fundamental principle that tax litigation over refund claims is meant to be a review of the IRS’s administrative determination, not an entirely new proceeding where taxpayers can raise new issues.

Applying the Variance Doctrine to Informal Claims

Most refund claims follow the formal procedures outlined in IRS regulations, typically involving the filing of Forms 1040X for individuals, Forms 1120X for corporations, etc. However, courts have long recognized the “informal claim doctrine,” which allows taxpayers to satisfy the administrative claim requirement through less formal means.

An informal claim can suffice when it puts the IRS on notice that the taxpayer is seeking a refund, describes the legal and factual basis for the refund, and has some written component. IRS audits often provide opportunities for taxpayers to make these informal claims as part of the examination process.

The taxpayer in this case made its initial claims through informal claims during an IRS examination, providing detailed workpapers and memoranda. But does the variance doctrine apply differently to informal claims than to formal ones?

The answer is no. Courts have consistently held that the substantial variance doctrine applies equally to informal claims. In fact, the requirements for specificity can be even more important for informal claims, as the IRS must be able to determine from sometimes less structured submissions exactly what the taxpayer is claiming. This case is an example of the court applying the variance doctrine to informal claims.

Merely Additional Evidence of the Amount

The taxpayer argued that the variance doctrine did not apply as the inclusion of rebates and manually entered pharmacy claims merely represented “additional evidence” of the amount of their Section 199 deduction. They contended that because they were still seeking the same Section 199 deduction, there was no substantial variance in their legal theory.

The court rejected this argument, focusing on the fact that the taxpayer had “specifically excluded these amounts throughout the entire administrative claims period and indeed, through this action until it was asserted in the expert reports.” The court found that the taxpayer’s addition of this revenue “changes the facts upon which the IRS assessed Plaintiffs’ claims.”

The court emphasized that Express Scripts “specifically declined to include these items in its claim. As such, the IRS was not given the opportunity to review whether they were properly designated as gross receipts.” Because the IRS never had the opportunity to consider whether these additional revenue streams qualified for the deduction, the substantial variance doctrine barred their inclusion in the litigation.

What if the IRS Reviews the Position on Audit?

The taxpayer also argued that the IRS had waived the substantial variance doctrine by considering the allocation of DPGR. This approach reflects a strategy sometimes used in tax audits where taxpayers argue that the IRS has effectively waived technical requirements by addressing the merits of a claim.

The court rejected this waiver argument on factual grounds, noting that the taxpayer had “specifically exempted the rebates and manually entered mail pharmacy claims” from consideration, so the IRS “could not have considered the merits of these claims because they were not before the IRS for examination.”

The court’s reasoning highlights a critical point: taxpayers cannot claim waiver based on the IRS’s consideration of issues that were never actually presented to the IRS. The waiver argument can only work when the IRS actually considers facts or theories that were raised in the administrative claim.

The Takeaway

This case shows how important it is to provide clear detail and consistency when submitting tax refund claims to the IRS. This includes informal claims submitted to the IRS on audit. Taxpayers who specifically exclude certain factual bases from their administrative refund claims—whether formal or informal—may not be able to later include those bases in litigation, even if their legal theory remains unchanged. The substantial variance doctrine operates as a jurisdictional bar in these cases, which can serve to deny the taxpayer their day in court.
