Category: AI

  • Why Probabilistic Programming? A Journey Through the Monty Hall Problem

    Even brilliant minds can be led astray by probability puzzles. When presented with the Monty Hall Problem, renowned mathematician Paul Erdős initially rejected the correct solution – and he wasn’t alone. Thousands of readers, including PhDs in mathematics and statistics, wrote angry letters to Marilyn vos Savant when she published the correct solution in Parade magazine. Their passionate resistance reveals something fascinating about how humans reason about uncertainty.

    To explore these ideas hands-on, we’ve created a Jupyter notebook that implements both traditional and probabilistic programming approaches to the Monty Hall Problem. The notebook includes code for simulating the game, modeling player behavior, and analyzing how people learn from experience.
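    The notebook itself is not reproduced here, but the "traditional" simulation approach it describes can be sketched as a short Monte Carlo experiment. The function name, door encoding, and trial count below are illustrative choices, not the notebook's actual code:

```python
import random

def play(switch: bool) -> bool:
    """Simulate one round of the Monty Hall game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host, who knows where the car is, opens a door that is
    # neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
trials = 100_000
stay_wins = sum(play(switch=False) for _ in range(trials)) / trials
switch_wins = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay_wins:.3f}, switch: {switch_wins:.3f}")  # roughly 1/3 vs 2/3
```

    Running enough trials makes the gap between the two strategies hard to argue with, which is exactly the kind of empirical check the notebook pairs with its probabilistic models.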

  • The illusion of homo rationalis: Large language models assume humans are rational decision-makers

    Large language models (LLMs) have shown a remarkable ability to mimic human communication, making them increasingly valuable tools for everything from content creation to customer service. However, recent research reveals something unexpected about these AI systems: they consistently overestimate human rationality. When it comes to predicting human choices, they expect people to be far more logical than we are—a bias that mirrors our tendency to overestimate the rationality of others.

  • The Living Word: Why AI Can’t Walk the Talk

    Three years ago, in a bland office at Google’s headquarters, an engineer became convinced that the company’s chatbot had developed consciousness. Blake Lemoine’s widely publicized claims were swiftly dismissed, yet they exemplify a persistent muddle in society’s thinking about artificial intelligence. As language models like ChatGPT churn out increasingly convincing prose, a crucial question emerges: Have machines finally cracked the code of human language?

    The evidence seems compelling at first glance. Modern AI systems engage in witty banter, write passable poetry, and help craft legal briefs. This has led some tech evangelists to revive a bold claim first made by Chris Anderson, former editor of Wired magazine, in 2008: that with enough data, theory becomes unnecessary. “Correlation supersedes causation,” he declared, suggesting that patterns alone could reveal all there is to know about the world. Applied to language, this thinking suggests that by ingesting enough text, machines could master human communication.

  • When Math Makes Fools of Us All: How a simple game show puzzle reveals the limits of human reason—and how computers might help

    In 1990, a seemingly innocuous puzzle published in Parade magazine sparked what might be called the Great Probability War. Thousands of readers, including doctorate holders in mathematics, bombarded the magazine with angry letters. Their target? Marilyn vos Savant, whose solution to the “Monty Hall Problem” they declared not just wrong, but offensively wrong. Even Paul Erdős, one of the 20th century’s most brilliant mathematicians, initially dismissed her answer. They were all mistaken.

    The puzzle, based on the American game show “Let’s Make a Deal,” seems simple enough. A contestant faces three doors. Behind one is a car, and behind the others are goats. After the contestant picks a door, the host (who knows what lies behind each) opens another to reveal a goat. Should the contestant switch to the remaining door? The counterintuitive answer—that switching doubles one’s chances of winning—has been making heads spin for decades.
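    The claim that switching doubles one's chances can be verified by exact enumeration rather than intuition: since the host always reveals a goat, switching wins precisely when the initial pick was wrong. A minimal sketch of that argument (the code below is illustrative, not drawn from the post):

```python
from fractions import Fraction
from itertools import product

# Enumerate all equally likely (car location, player's pick) combinations.
doors = range(3)
outcomes = list(product(doors, doors))  # 9 outcomes, each with probability 1/9

# Staying wins only when the initial pick already found the car.
stay = Fraction(sum(car == pick for car, pick in outcomes), len(outcomes))
# Switching wins exactly when the initial pick was wrong: the host's
# forced goat reveal leaves the car behind the one remaining door.
switch = Fraction(sum(car != pick for car, pick in outcomes), len(outcomes))

print(stay, switch)  # 1/3 2/3
```

    The exact fractions make the "doubling" claim precise: 2/3 against 1/3, with no simulation noise to argue over.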

  • The consciousness conundrum: From science fiction to silicon minds

    In the climactic scene of the 1982 film Blade Runner, a dying artificial being delivers a soliloquy about the memories he will lose: “All those moments will be lost in time, like tears in rain.” The scene poses a provocative question: Can a machine truly experience loss? Four decades later, as artificial intelligence (AI) systems become increasingly sophisticated, this philosophical puzzle has evolved from science fiction into a pressing technological and ethical challenge.

    The quest to determine machine consciousness has moved from Hollywood to Silicon Valley. As large language models (LLMs) engage in increasingly human-like conversations, they force a reckoning with questions first posed by Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, the novel that inspired “Blade Runner.” The challenge lies not merely in creating machines that can simulate consciousness but in developing reliable methods to detect genuine synthetic sentience.
