Sometimes the smartest thing is to know when *not* to speak. That goes for robots, too. Big Medium’s Josh Clark shares thoughts and links on how a bias toward chat in intelligent interfaces creates experiences that, well, aren’t so smart, and why designers should be more expansive in exploring AI opportunities. At Big Medium, we do exactly that. It’s fun and weird and mind-bending, and ultimately successful: using our Sentient Design framework, we help our clients break past AI clichés to deliver genuinely meaningful, useful features with machine intelligence.
"A machine that talks must be a machine that thinks." It's an assumption and a fallacy that's been with us for the better part of a century, since Alan Turing proposed his imitation game in 1950. The concept was simple: If a machine can fool you into thinking it's human in conversation, it can be considered intelligent. The idea has colored ideas and assumptions about what machine intelligence looks like ever since, from science fiction to Silicon Valley. Two problems fall out of this: 1) Chat is a powerful AI cliche with a gravitational force that pulls designers toward chat solutions before they consider alternatives; and 2) As users, we're often lulled into thinking that smooth-talking interfaces are smarter than they are. Don't get me wrong, chat can be just the right interface/interaction for certain contexts. But designers reach for it too often and often with unintended consequences for the user experience. Sentient Design is about more than prompts and text boxes. And just because a system speaks well doesn't mean it thinks well. (We make the same mistake with people!) A recent essay by Jorge Arango and a WSJ profile of Yann LeCun were good reminders of our faulty assumptions about machines that can talk—and how we might adjust our thinking to better meet reality. There are so many other truly meaningful uses for generative AI models that better fit their actual skills than to treat them as smart, reasoning answer machines (which they are not). "We are used to the idea that people or entities that can express themselves, or manipulate language, are smart—but that’s not true,” says Yann. “You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.” I linked up and shared some thoughts about those pieces here.... "Exploring the AI Solution Space" by Jorge https://lnkd.in/eYn9xgek "This AI Pioneer Thinks AI Is Dumber Than a Cat" about Yann https://lnkd.in/eGAGxA9n