Journalism and AI

Here are my written remarks for a hearing on AI and the future of journalism for the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, on January 10, 2024.

I have been a journalist for fifty years and a journalism professor for the last eighteen.

  1. History

I would like to begin with three lessons on the history of news and copyright, which I learned researching my book, The Gutenberg Parenthesis: The Age of Print and its Lessons for the Age of the Internet (Bloomsbury, 2023):

First, America’s 1790 Copyright Act covered only charts, maps, and books. The New York Times’ suit against OpenAI claims that, “Since our nation’s founding, strong copyright protection has empowered those who gather and report news to secure the fruits of their labor and investment.” In truth, newspapers were not covered in the statute until 1909, and even then, according to Will Slauter, author of Who Owns the News: A History of Copyright (Stanford, 2019), there was debate over whether to include news articles, for they were the products of the institution more than of an individual author. 

Second, the Post Office Act of 1792 allowed newspapers to exchange copies for free, enabling journalists with the literal title of “scissors editor” to copy and reprint each other’s articles, with the explicit intent to create a network for news, and with it a nation. 

Third, exactly a century ago, when print media faced their first competitor — radio — newspapers gave it a hostile reception. Publishers strong-armed broadcasters into signing the 1933 Biltmore Agreement by threatening not to print program listings. The agreement limited radio to two news updates a day, without advertising; required broadcasters to buy their news from newspapers’ wire services; and even forbade on-air commentators from discussing any event until twelve hours afterwards — a so-called “hot news doctrine,” which the Associated Press has since tried to resurrect. Newspapers lobbied to keep radio reporters out of the Congressional press galleries. They also lobbied for radio to be regulated, carving an exception to the First Amendment’s protections of freedom of expression and the press. 

Publishers accused radio — just as they have since accused television and the internet and AI — of stealing “their” content, audience, and revenue, as if each had been granted them by royal privilege. In scholar Gwenyth Jackaway’s words, publishers “warned that the values of democracy and the survival of our political system” would be endangered by radio. That sounds much like the sacred rhetoric in The Times’ OpenAI suit: “Independent journalism is vital to our democracy. It is also increasingly rare and valuable.” 

To this day, journalists — whether on radio or at The New York Times — read, learn from, and repurpose facts and knowledge gained from the work of fellow journalists. Without that assured freedom, newspapers and news on television and radio and online could not function. The real question at hand is whether artificial intelligence should have the same right that journalists and we all have: the right to read, the right to learn, the right to use information once known. If it is deprived of such rights, what might we lose?

  2. Opportunities

Rather than dwelling on a battle of old technology and titans versus new, I prefer to focus here on the good that might come from news collaborating with this new technology. 

First, though, a caveat: I argue it is irresponsible to use large language models where facts matter, for we know that LLMs have no sense of fact; they only predict words. News companies, including CNET, G/O Media, and Gannett, have misstepped, using the technology to manufacture articles at scale, strewn with errors. I covered the show-cause hearing for a New York attorney who (like President Trump’s former counsel, Michael Cohen) used an LLM to list case citations. Federal District Judge P. Kevin Castel made clear that the problem was not the technology but its misuse by humans. Lawyers and journalists alike must exercise caution in using generative AI to do their work. 

Having said that, AI presents many intriguing possibilities for news and media. For example:

AI has proven to be excellent at translation. News organizations could use it to present their news internationally.

Large language models are good at summarizing a limited corpus of text. This is what Google’s NotebookLM does, helping writers organize their research. 

AI can analyze more text than any one reporter. I brainstormed with an editor about having citizens record 100 school-board meetings so the technology could transcribe them and then answer questions about how many boards are discussing, say, banning books. 

I am fascinated with the idea that AI could extend literacy, helping people who are intimidated by writing tell and illustrate their own stories.

A task force of academics from the Modern Language Association concluded AI in the classroom could help students with word play, analyzing writing styles, overcoming writers’ block, and stimulating discussion. 

AI also enables anyone to write computer code. As an AI executive told me in a podcast about AI that I cohost, “English majors are taking the world back… The hottest programming language on planet Earth right now is English.” 

Because LLMs are in essence a concordance of all available language online, I hope to see scholars examine them to study society’s biases and clichés.

And I see opportunities for publishers to put large language models in front of their content to allow readers to enter into dialog with that content, asking their own questions and creating new subscription benefits. I know an entrepreneur who is building such a business. 

Note that in Norway, the country’s largest and most prestigious publisher, Schibsted, is leading the way to build a Norwegian-language large language model and is urging all publishers to contribute content. In the US, Aimee Rinehart, an executive student of mine at CUNY who works on AI at the Associated Press, is also studying the possibility of an LLM for the news industry. 

  3. Risks

All these opportunities and more are put at risk if we fence off the open internet into private fortresses.

Common Crawl is a foundation that for sixteen years has archived the entire web: 250 billion pages, 10 petabytes of text made available to scholars for free, yielding 10,000 research papers. I am disturbed to learn that The New York Times has demanded that the entire history of its content — that which was freely available — be erased. Personally, when I learned that my books were included in the Books3 data set used to train large language models, I was delighted, for I write not only to make money but also to spread ideas. 

What happens to our information ecosystem when all authoritative news retreats behind paywalls, available only to privileged citizens and giant corporations able to pay for it? What happens to our democracy when all that is left out in public for free — to inform both citizens and machines — is propaganda, disinformation, conspiracies, spam, and lies? I well understand the economic plight of my industry, for I direct a Center for Entrepreneurial Journalism. But I also say we must have a discussion about journalism’s moral obligation to an informed society and about the right not only to speak but to learn.

  4. Copyright

And we need to talk about reimagining copyright in this age of change, starting with a discussion about generative AI as fair and transformative use. When the Copyright Office sought opinions on artificial intelligence and copyright (Docket 2023-6), I responded with concern about an idea the Office raised of establishing compulsory licensing schemes for training data. Technology companies already offer simple opt-out mechanisms (see: robots.txt).
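To illustrate the opt-out mechanism: a publisher that wishes to keep its site out of AI training data can add a few lines to the robots.txt file at its site’s root. The user-agent strings below are the ones the crawler operators themselves document — GPTBot for OpenAI’s training crawler and CCBot for Common Crawl — though each publisher would need to check current crawler names:

```text
# Ask OpenAI's training crawler to skip the entire site
User-agent: GPTBot
Disallow: /

# Ask Common Crawl's crawler to skip the entire site
User-agent: CCBot
Disallow: /
```

Compliance is voluntary under the Robots Exclusion Protocol, but the major crawl operators state that they honor these directives — which is why opt-out, rather than compulsory licensing, is already a working mechanism.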

Copyright at its origin in the Statute of Anne of 1710 was enacted not to protect creators, as is commonly asserted. Instead, it was passed at the demand of booksellers and publishers to establish a marketplace for creativity as a tradeable asset. Our concepts of creativity-as-content and content-as-property have their roots in copyright. 

Now along come machines — large language models and generative AI — that manufacture endless content. University of Maryland Professor Matthew Kirschenbaum warns of what he calls “the Textpocalypse.” Artificial intelligence commodifies the idea of content, even devalues it. I welcome this. For I hope it might drive journalists to understand that their value is not in manufacturing the commodity, content. Instead, they must see journalism as a service to help citizens inform public discourse and improve their communities. 

In 2012, I led a series of discussions with multiple stakeholders — media executives, creative artists, policymakers — for a project with the World Economic Forum on rethinking intellectual property and the support of creativity in the digital age. In the safe space of Davos, even media executives would concede that copyright is outmoded. Out of this work, I conceived of a framework I call “creditright,” which I’ve written is “the right to receive credit for contributions to a chain of collaborative inspiration, creation, and recommendation of creative work. Creditright would permit the behaviors we want to encourage to be recognized and rewarded. Those behaviors might include inspiring a work, creating that work, remixing it, collaborating in it, performing it, promoting it. The rewards might be payment or merely credit as its own reward.” It is just one idea, intended to spark discussion. 

Publishers constantly try to extend copyright’s restrictions in their favor, arguing that platforms owe them the advertising revenue they lost when their customers fled for better, competitive deals online. This began in 2013 with German publishers lobbying for a Leistungsschutzrecht, or ancillary copyright, which inspired further protectionist legislation, including Spain’s link tax, articles 15 and 17 of the EU’s Copyright Directive, Australia’s News Media Bargaining Code, and most recently Canada’s Bill C-18, which requires large platforms — namely Google and Facebook — to negotiate with publishers for the right to link to their news. To gain an exemption from the law, Google agreed to pay about $75 million to publishers — generous, but hardly enough to save the industry. Meta decided instead to take down links to news rather than being forced to pay to link. That is Meta’s right under Canada’s Charter of Rights and Freedoms, for compelled speech is not free speech. 

In this process, lobbyists for Canada’s publishers insisted that their headlines were valuable while Meta’s links were not. The nonmarket intervention of C-18 sided with the publishers. But as it turned out, when those links disappeared, Facebook lost no traffic while publishers lost up to a third of theirs. The market spoke: Links are valuable. Legislation to restrict linking would break the internet for all. 

I fear that the proposed Journalism Competition and Preservation Act (JCPA) and the California Journalism Protection Act (CJPA) could have a similar effect here. As a journalist, I must say that I am offended to see publishers lobby for protectionist legislation, trading on the political capital earned through journalism. The news should remain independent of — not beholden to — the public officials it covers. I worry that publishers will attempt to extend copyright to their benefit not only with search and social platforms but now with AI companies, disadvantaging new and small competitors in an act of regulatory capture. 

  5. Support for innovation

The answer for both technology and journalism is to support innovation. That means enabling open-source development, encouraging both AI models and data — such as that offered by Common Crawl — to be shared freely. 

Rather than protecting the big, old newspaper chains — many of them now controlled by hedge funds, which will not invest or innovate in news — it is better to nurture new competition. Take, for example, the 450 members of the New Jersey News Commons, which I helped start a decade ago at Montclair State University; and the 475 members of the Local Independent Online News Publishers; the 425 members of the Institute for Nonprofit News; and the 4,000 members of the News Product Alliance, which I also helped start at CUNY. This is where innovation in news is occurring: bottom-up, grass-roots efforts emergent from communities. 

There are many movements to rebuild journalism. I helped develop one: a degree program called Engagement Journalism. Others include Solutions Journalism, Constructive Journalism, Reparative Journalism, Dialog Journalism, and Collaborative Journalism. What they share is an ethic of first listening to communities and their needs. 

In my upcoming book, The Web We Weave, I ask technologists, scholars, media, users, and governments to enter into covenants of mutual obligation for the future of the internet and, by extension, AI. 

There I propose that you, as government, promise first to protect the rights of speech and assembly made possible by the internet. Base decisions that affect internet rights on rational proof of harms, not protectionism for threatened industries and not media’s moral panic. Do not splinter the internet along national borders. And encourage and enable new competition and openness rather than entrenching incumbent interests through regulatory capture. 

In short, I seek a Hippocratic Oath for the internet: First, do no harm.

A journalism of belief and belonging


I increasingly come to see that we are not in a crisis of information and disinformation or even of misguided beliefs, but instead of belonging. I wonder how to reimagine journalism to address this plight.

Belonging is a good. The danger is in not belonging, and filling that void with malign substitutes for true community: joining a cult of personality or conspiracies, an insurrection, or some nihilistic, depraved perversion of a religion.

What role might journalism play to fill that void instead with conversation, connection, understanding, collaboration, enlightened values, and education?

Hannah Arendt teaches us that amid the thrall and threat of totalitarianism, some people belong to nothing, and so they are vulnerable to the lure of joining a noxious cause manufactured of fear. In The Gutenberg Parenthesis, I quote her:

“But totalitarian domination as a form of government is new in that it is not content with this isolation but destroys private life as well. It bases itself on loneliness, on the experience of not belonging to the world at all, which is among the most radical and desperate experiences of man.” For Arendt, to be public is to be whole, to be private is to be deprived; to be without both is to be uprooted, vulnerable, and alone.

Arendt found in Nazi and Soviet history “such unexpected and unpredicted phenomena as the radical loss of self-interest, the cynical or bored indifference in the face of death or other personal catastrophes, the passionate inclination toward the most abstract notions as guides for life, and the general contempt for even the most obvious rules of common sense.” The lessons for these populist times are undeniable as Trump’s base shows a loss of self-interest (what did he accomplish for them over the rich?), an indifference to death (defiantly burning masks at COVID superspreader rallies), a passionate inclination toward abstract notions (are abortion and guns truly more important to their everyday lives than jobs and health?), and contempt for common sense (see: science denial and conspiracy theories).

Later in my book, I call upon the theories of sociologist William Kornhauser, who contends that the solution to such alienated mass society is to support a pluralistic society of belonging, in which people connect with communities — they “possess multiple commitments to diverse and autonomous groups” — and are less vulnerable to, or at least feel a competitive tug away from, the siren call of populist movements. I write:

A pluralistic society is marked by belonging — to families, tribes (in the best and most supportive sense, which Sebastian Junger defines as “the people you feel compelled to share the last of your food with”), clubs, congregations, organizations, communities. A pluralistic society is more secure and less vulnerable to domination as a whole, as a mass. In such associations we do not give up our individuality; we gain individual identity by connecting, gathering, organizing, and acting with others who share our interests, needs, goals, desires, or circumstances. When that occurs, in Kornhauser’s view, elites become accessible as “competition among independent groups opens many channels of communication and power.” Then, too, “the autonomous man respects himself as an individual, experiencing himself as the bearer of his own power and as having the capacity to determine his life and to affect the lives of his fellows.” In short, a pluralistic society is a diverse society.

Of course, it is diversity that most threatens the autocrats, populists, racists, and fascists who in turn imperil our nation and democracy around the world. That is why they condemn “identity politics.” The internet, I theorize, enabled voices too long not represented in so-called mainstream — i.e., old, white — mass media to at last be heard. That is what the would-be tyrants and cultists use to stir fear and recruit their rudderless hordes, preaching that the Others — Blacks, Hispanics, LGBTQ people, immigrants, “woke mobs,” and lately trans people — will come steal their jobs, homes, history, security, society, and even children.

Journalism brings information to the fight for their very souls. We stand outside reactionary revival tents with slips of paper bearing facts, thinking that can compete with the heart-thumping hymns of fear within.

In 2022 in Paris, a group of scholars gathered at the International Communication Association for a preconference that asked, “What comes after disinformation studies?” In a paper reporting on the discussion, Théophile Lenoir and Chris Anderson conclude: “Fact-checking our way out of politics will not work.”

Journalists want to believe that we are in a crisis of disinformation because they think the cure must be what they offer: information. The mania around disinformation after 2016 led to what Joe Bernstein in Harper’s calls Big Disinfo, a veritable industry devoted to dis-dis-information. I was part of that effort, having raised money after 2016 to support such projects. I’m certainly not opposed to reporting information and checking facts! But we need to concede that these are insufficient ends.

If the problem is not disinformation, then it must be belief, we say, pointing to opinion polls in which shocking numbers of citizens say they subscribe to insane ideas and conspiracy theories. Regarding such polls, I will forever return to the lessons of the late James Carey: “Public life started to evaporate with the emergence of the public opinion industry and the apparatus of polling. Polling … was an attempt to simulate public opinion in order to prevent an authentic public opinion from forming.”

Polls are fatally and fundamentally flawed because they reflect the biases of the pollsters, who insist on sorting us into their buckets, leaving no room for nuance or context. Worse than that, polls have become a mechanism for signaling belonging in some rebellious, defiant cause. Writes Reece Peck, another scholar at the ICA Paris preconference, “Political scientists have come to understand that voting is less a cool-headed deliberation on how specific policies help or hurt the voter’s material economic interest and more an occasion for expressing the voter’s cultural attachments and group loyalties.” Fringe opinions are a means for these citizens to tell pollsters, media, and authority: ‘You can’t sort us. We’ll sort ourselves.’ As researchers Michael Bang Petersen, Mathias Osmundsen, and Kevin Arceneaux have found, people who circulate hostile political information do so out of a “Need for Chaos,” a desire to “‘burn down’ the entire political order in the hope they gain status in the process.” In the hope, that is, that they will find a place to belong in their posse, their institutional insurrection. See again: Arendt.

I believe there is only one true hope to cure vulnerability to such performative belief: education. By that I do not mean media- or news-literacy, the hubristic assertion that if only people understood how journalism works and consumed its products, all would be well. I mean education, period: in the humanities, the social sciences, and science. As I write in my upcoming book, The Web We Weave, I taught in a public university because I believe education is our best hope. But universities — particularly their humanities departments — are being starved of resources and attacked by populist, right-wing forces that view education as their enemy because it is through education that they lose voters and power. This is where our underlying crisis and solution lie.

What can journalism do? I am not sure.

In any discussion of the crisis in democracy, someone will pipe up with banalities about the internet segregating us in filter bubbles and echo chambers. But research by Petersen and Axel Bruns shows that — as Petersen says — “the biggest echo chamber that we all live in is the one we live in in our everyday lives,” in the towns, jobs, and congregations we seek out to be around people like us. Journalist Bill Bishop said it well in the subtitle of his 2008 book, The Big Sort: “The clustering of like-minded America is tearing us apart.” The internet doesn’t cause filter bubbles, it punctures them, confronting people with those they are told to fear. The internet does not cause division. It exposes it.

Thus I have argued that one mission for journalism (and, for that matter, social networks) should be to make strangers less strange. At the Tow-Knight Center, I funded research to that end by Caroline Murray and Talia Stroud, who found 25 inspiring projects in newsrooms attempting to do just that; look at their list. I find that work heartening, yet still insufficient.

Journalism is flawed at its core. It is built to seek out, highlight, and exploit — and cause — conflict. Political journalism is engineered to predict, which does nothing to inform the electorate. Instead, in the words of Jay Rosen, it should focus on what is at stake in the choices citizens make. Journalism has done tremendous harm to countless communities that have never trusted its institutions. Journalism — just like the internet companies it criticizes — is built on the economics of attention.

I do not, of course, reject all of journalism. Yes, I criticize The Times and The Post because they have been our biggest and best and we need them to be better. I also praise excellent reporting there and support it with my subscriptions. I think it is important to understand our history sans the sacred rhetoric publishers use to lobby politicians and courts for protection against new competitors, from radio to television to the internet to AI. James Gordon Bennett, the early newspaper titan said to be the father of modern journalism — thus mass media — once said to an upstart in the field: “Young man, ‘to instruct the people,’ as you say, is not the mission of journalism. That mission, if journalism has any, is to startle or amuse.” There are our roots in mass media. Hear Carl Lindstrom writing in The Fading American Newspaper:

In its hunger for circulation it has sought status as a mass medium to the point where it is a hollow attempt to be all things to all men. It has scorned competition as an evil, and cultivated monopoly as a virtue. While claiming a holy mission with constitutional protection, it has left great vacuums of journalistic obligation into which competing mediums have moved with impunity and public acceptance. Today journalism is on the move at an ever-accelerating rate with the daily press showing no apparent concern. This indifference is in accord with its incapacity for relentless self-examination. In this vacant place self-delusion has built itself a nest.

He wrote that in 1960.

There are movements to address the mission void in present-day journalism. I helped start one in Engagement Journalism, with my colleague Carrie Brown. There is Solutions Journalism, Collaborative Journalism, Constructive Journalism, Reparative Journalism, Dialog Journalism, Deliberative Journalism … and others. I would like to bring these various ’ives together in a room to see what links them. I think it will be this: They start with listening.

Journalism is terrible at listening. We train reporters to hit the streets with premade narratives and predictions, looking for quotes to fulfill them. In Engagement Journalism, we teach journalists instead to hear the communities they serve. That does not mean we must listen to every cultist’s crazy theories and fears concocted for media attention. Journalists give them plenty of oxygen already. No, I mean that we need to allow people to be heard regarding their real lives and actual circumstances and concerns. That is a necessary start.

How do we then reimagine journalism built around helping people understand that they can belong to positive communities of understanding and empathy, they can build bridges to other communities through listening and learning, they can find fulfillment in their own identities without excluding or denigrating the identities of others?

A few years ago, I participated in valuable diversity training. In one exercise, our trainer told each of us to reflect on our own cultures. I demurred, saying that I had no culture as I am of boring, generic, white-bread, American, suburban stock. She told me I was wrong. Upon reflection, I saw that she was right. She forced me to recognize the power of the cultural default. I’ve learned that lesson, too, from André Brock, whom I quote in The Gutenberg Parenthesis:

In Distributed Blackness, his trenchant analysis of African American cybercultures … Georgia Tech Professor André Brock Jr. sought to understand Black Twitter on its own terms, not in relation to mass and white media, not in the context of aiming to be heard there. “My claim is ecological: Black folk have made the internet a ‘Black space’ whose contours have become visible through sociality and distributed digital practice while also decentering whiteness as the default internet identity.” That is to say that it is necessary to acknowledge the essential whiteness of mass media as well as the internet. “Despite protestations about color-blindness or neutrality,” Brock wrote, “the internet should be understood as an enactment of whiteness through the interpretive flexibility of whiteness as information. By this, I mean that white folks’ communications, letters, and works of art are rarely understood as white; instead, they become universal and are understood as ‘communication,’ ‘literature,’ and ‘art.’”

Brock helped me see where journalism is “whiteness as information.” So have Wesley Lowery and Lewis Raven Wallace in their criticism of journalistic objectivity (works I assigned and taught every year).

Brock also made me see how the internet has helped me belong. I long was a loner; journalists fancy themselves that: separate, apart (and let’s admit it, above). I live in a town disconnected from many of my neighbors. But on the internet, I have found myself connected with many communities.

Every year in the Engagement Journalism class I had the privilege of teaching with Carrie Brown, we would ask students what communities they belong to. The answers inevitably began with the obvious: “I’m a student.” “I live in Brooklyn.” But then someone might say, “I struggle with mental health issues.” A few students later in the circle, another student would echo that. Thus a connection is made, empathy established, a community enabled. Not all communities are bounded by geography; online, they might exist in any definition, anywhere.

Such conversation and connection can occur only in an environment of trust, but today we live in an environment of distrust — and that is the fault, in great measure, of media and politics manufacturing disconnection and fear. That is what journalism must fight against: a darkness not of information but of the soul. I return to Lenoir and Anderson in Paris:

Technical solutions to political problems are bound to fail. Historical, structural, and political inequality — and especially race, ethnicity, and social difference — needs to be at the forefront of our understanding of politics and, indeed, disinformation. The challenge for researchers, and our field broadly, is to engage in politics by generating ideas and crafting narratives that make people want to live in a more just world, not just a more truthful one.

The same should be said of journalism. How might we do that?

Journalists might see ourselves as conveners of conversation (see, for example, Spaceship Media).

We might see ourselves as educators, defenders of — yes, advocates for — enlightened values of reason, liberty, equality, tolerance, and progress. It is not enough to expose inequality, we must defend equality.

We might see it as our task to build bridges among communities — to make strangers less strange, to help people escape the filter bubbles in their real lives.

We might understand the imperative to fight — not neutrally amplify — the dark forces of hate, fear, and fascism.

We must pay reparations to the communities our institutions have damaged by finally assuring that their stories are told — by themselves — and heard.

We could reject the economics of attention and scale of mass media and rebuild journalism at human scale, valuing our work not through our metrics of audience but instead as the public values us.

As I leave my last job and the last year, I am reflecting on where to turn my attention next. I spent a dozen years at the end of my time in the industry working to make journalism digital, a task that should be self-evident but even so, is far from done. I spent eighteen years in a university exploring new business models for news, though I fear that trying to save established journalism ends in protectionism. My proudest work has been teaching and learning Engagement Journalism and it is there — in listening to communities — where I wish to devote myself.

I also believe it is critical that we understand journalism now in the context of a connected world and call upon other disciplines — history, ethics, psychology, community studies, anthropology, sociology — to understand the internet not as a technology but as a human network. That is the subject of my next book. That is what I have been calling Internet Studies: examining how we interact now and what reimagined and reformed institutions we need to help us do that better. Somewhere in there, I believe, is the essence of a new journalism, a journalism of education, a journalism of belonging.

Artificial general bullshit

I began writing this as a report from a useful conference on AI that I just attended, where experts and representatives of concerned sectors of society had serious discussion about the risks, benefits, and governance of the technology.

But, of course, I first must deal with the ludicrous news playing out now at leading AI generator, OpenAI. So let me begin by saying that in my view, the company is pure bullshit. Sam Altman’s contention that they are building “artificial general intelligence” or “artificial superintelligence”: Bullshit. Board members’ cult of effective altruism and AI doomerism: Bullshit. The output of ChatGPT: Bullshit. It’s all hallucinations: Pure bullshit. I even fear that the discussion of AI safety in relation to OpenAI could be bullshit. 

This is not to say that AI, as it is practiced there and elsewhere, is not something to be taken seriously, even with wonder. And we should take seriously discussion of AI's impact and safety, its speed of development and adoption, and its governance. 

These topics were on the agenda of the AI conference I attended at the San Francisco outpost of the World Economic Forum (Davos). Snipe if you will at this fraternity of the rich and powerful, but this is one thing the Forum does consistently well: convene multistakeholder conversations about important topics, because people accept their invitations. At this meeting, there were representatives of technology companies, governments, and the academy. I sat next to an honest-to-God philosopher who is leading a program in ethical AI. At last. 

I knew I was in the right place when I heard AGI brought up and quickly dismissed. Artificial general intelligence is the purported goal of OpenAI and other boys in the AI fraternity: that they are so smart they can build a machine that is smarter than all of us, even them — a machine so powerful it could destroy humankind unless we listen to its creators. I call bullshit. 

In the public portion of the conference, panel moderator Ian Bremmer said he had no interest in discussing AGI. I smiled. Andrew Ng, cofounder of Google Brain and Coursera, said he finds claims of imminent AGI doom “vague and fluffy…. I can’t prove that AI won’t wipe us out any more than I could prove that radio waves won’t attract aliens that would wipe us out.” Gary Marcus — a welcome voice of sanity in discourse about AI — talked of trying, with a $100,000 bet, to get Elon Musk to make good on his prediction that AGI will arrive by 2029. What exactly Musk means by that is no clearer than anything he says. Keep in mind that Musk has also said that by now cars would drive themselves and Twitter would be successful and he would soon (not soon enough) be on his way to Mars. One participant not only doubted the arrival of AGI but said large language models might prove to be a parlor trick.

With that BS out of the way, this turned out to be a practical meeting, intended to bring various perspectives together to begin to formulate frameworks for discussion of responsible use of AI. The first results will be published from the mountaintop in January. 

I joined a breakout session that had its own breakouts (life is breakouts all the way down). The circle I sat in was charged with outlining benefits and risks of generative AI. Their first order of business was to question the assignment and insist on addressing AI as a whole. The group emphasized that neither benefits nor risks are universal, as each will fall unevenly on different populations: individuals, organizations (companies to universities), communities, sectors, and society. They did agree on a framework for that impact, asserting that for some, AI could:

  • raise the floor (allowing people to engage in new skills and tasks to which they might not have had access — e.g., coding computers or creating illustrations);
  • scale (that is, enabling people and organizations to take on certain tasks much more efficiently); and
  • raise the ceiling (performing tasks — such as analyzing protein folding — that heretofore were not attainable by humans alone). 

On the negative side, the group said AI would:

  • bring economic hardship; 
  • enable evil at scale (from exploding disinformation to inventing new diseases); and
  • for some, result in a loss of purpose or identity (see the programmer who laments in The New Yorker that “bodies of knowledge and skills that have traditionally taken lifetimes to master are being swallowed at a gulp. Coding has always felt to me like an endlessly deep and rich domain. Now I find myself wanting to write a eulogy for it”).

This is not to say that the effects of AI will fit neatly into such a grid, for what is wondrous for one can be dreadful for another. But this gives us a way to begin to define responsible deployment. While we were debating in our circle, other groups at the meeting tackled questions of technology and governance. 

There has been a slew of guidelines for responsible AI — most lately the White House issued its executive order, and tech companies, eager to play a game of regulatory catch-up, are writing their own. Here are Google’s, these are Microsoft’s, and Meta has its own pillars. OpenAI has had a charter built on its hubristic presumption that it is building AGI. Anthropic is crowdsourcing a “constitution” for AI, filled with vague generalities about AI characterized as “reliable,” “honest,” “truth,” “good,” and “fair.” (I challenge either an algorithm or a court to define and enforce the terms.) Meanwhile, the EU, hoping to lead in regulation if not technology, is writing its AI Act.

Rather than principles or statutes chiseled permanently on tablets, I say we need ongoing discussion to react to rapid development and changing impact; to consider unintended consequences (of both the technology and regulation of it); and to make use of what I hope will be copious research. That is what WEF’s AI Governance Alliance says it will do. 

As I argue in The Gutenberg Parenthesis regarding the internet — and print — the full effect of a new technology can take generations to be realized. The timetable that matters is not so much invention and development but adaptation. As I will argue in my next book, The Web We Weave: Why We Must Reclaim the Internet from Moguls, Misanthropes, and Moral Panic (out from Basic Books next year), this debate must occur less in the context of technology than of humanity, which is why the humanities and social sciences must be in the circle.

At the meeting, there was much discussion about where we are in the timeline of AI’s gestation. Most agreed that there is no distinction between generative AI and AI. Generative AI looks different — momentous, even — to those of us not deeply engaged in the technology because now, suddenly, the program speaks — and, more importantly, can compute — our language. Code was a language; now language is code. Some said that AI is progressing from its beginning, with predictive capabilities, to its current generative abilities, and next will come autonomous agents — as with the GPT store Altman announced only a week before. Before allowing AI agents to go off on their own, we must trust them. 

That leads to the question of safety. One participant at WEF quoted Altman in a recent interview, saying that the company’s mission is to figure out how to make AGI, then figure out how to make it safe, and then figure out its benefits. This, the participant said, is the wrong order. What we need is not to make AI safe but to make safe AI. There was much talk about “shifting left” — not a political manifesto but instead a promise to move safety, transparency, and ethics to the start of the development process, rather than coming to them as afterthoughts. I, too, will salute that flag, but….

I have come to believe there is no sure way to guarantee safety with the use of this new technology — as became all too clear to princes and popes at the birth of print. “What is safe enough?” asked one participant. “You give me a model that can do anything, I can’t answer your question.” We talk of requiring AI companies to build in guardrails. But it is impossible for any designer, no matter how smart, to anticipate every nefarious use that every malign actor could invent, let alone every unintended consequence that could arise. 

That doesn’t mean we should not try to build safety into the technology. Nor does it mean that we should not use the technology. It just means that we must be realistic in our expectations, not about the technology but about our fellow humans. Have we not learned by now that some people will always find new ways to do bad things? It is their behavior more than technology that laws regulate. As another participant said, a machine that is trained to imitate human linguistic behavior is fundamentally unsafe. See: print. 

So do we hold the toolmaker responsible for what users have it do? I know, this is the endless argument we have about whether guns (and cars and chemicals and nukes) kill people or the people who wield them do. Laws are about fixing responsibility, thus liability. This is the same discussion we are having about Section 230: whom do we blame for “harmful speech” — those who say it, those who carry it, those who believe it? Should we hold the makers of the AI models themselves responsible for everything anyone does with them, as is being discussed in Europe? That is unrealistic. Should we instead hold to account users — like the schmuck lawyer who used ChatGPT to write his brief — when they might not know that the technology or its makers are lying to them? That could be unfair. There was much discussion at this meeting about regulating not the technology itself but its applications.

The most contentious issue at the event was whether large language models should be open-sourced. Ng said he can’t believe that he is having to work so hard to convince governments not to outlaw open source — as is also being bandied about in the EU. A good number of people in the room — I include myself among them — believe AI models must be open to provide competition to the big companies like OpenAI, Microsoft, and Google, which now control the technology; access to the technology for researchers and countries that otherwise could not afford to use it; and a transparent means to audit compliance with regulations and safety. Others fear that bad actors will take open-source models, such as Meta’s LLaMA, and detour around guardrails. But see the prior discussion about the ultimate effectiveness of such guardrails. 

I hope that not only AI models but also data sets used for training will be open-sourced and held in public commons. (Note the work of MLCommons, which I learned about at the meeting.) In my remarks to another breakout group about information integrity, I said I worried about our larger knowledge ecosystem when books, newspapers, and art are locked up by copyright behind paywalls, leaving machines to learn only from the crap that is free. Garbage in; garbage multiplied. 

At the event’s opening reception high above San Francisco in Salesforce headquarters, I met an executive from Norway who told me that his nation wants to build large language models in the Norwegian language. That is made possible because — this being clever Norway — all its books and newspapers from the past are already digitized, so the models can learn from them. Are publishers objecting? I asked. He thought my question odd; why would they? Indeed, see this announcement from much-admired Norwegian news publisher Schibsted: “At the Nordic Media Days in Bergen in May, [Schibsted Chief Data & Technology Officer Sven Størmer Thaulow] invited all media companies in Norway to contribute content to the work of building a solid Norwegian language model as a local alternative to ChatGPT. The response was overwhelmingly positive.” I say we need a similar discussion in the anglophone world about our responsibility to the health of the information ecosystem — not to submit to the control and contribute to the wealth of AI giants but instead to create a commons of mutual benefit and control. 

At the closing of the WEF meeting, during a report-out from the breakout group working on governance (where there are breakout groups, there must be report-outs; it’s the law) one professor proposed that public education about AI is critical and media must play a role. I intervened (as we say in circles) and said that first journalists must be educated about AI because too much of their coverage amounts to moral panic (as in their prior panics about the telegraph, talkies, radio, TV, and video games). And too damned often, journalists quote the same voices — namely, the same boys who are making AI — instead of the scholars who study AI. The issue of The New Yorker I referenced above has yet another interview with former Google computer scientist Geoffrey Hinton, who has already been on 60 Minutes and everywhere. 

Where are the authors of the Stochastic Parrots paper, former Google AI safety chiefs Timnit Gebru and Margaret Mitchell, along with linguists Emily Bender and Angelina McMillan-Major? Where are the women and scholars of color who have been warning of the present-tense costs and risks of AI, instead of the future-shock doomsaying of the AI boys? Where is Émile Torres, who studies the faux philosophies that guide AI’s proponents and doomsayers, which Torres and Gebru group under the acronym TESCREAL? (See the video below.)

The problem is that the press and policymakers alike are heeding the voices of the AI boys who are proponents of these philosophies instead of the scholars who hold them to account. The afore-fired Sam Altman gets invited to Congress. When UK PM Rishi Sunak held his AI summit, whom did he invite on stage but Elon Musk, the worst of them. Whom did Sunak appoint to his AI task force but another adherent of these philosophies. 

To learn more about TESCREAL, watch this conversation with Torres that Jason Howell and I had on our podcast, AI Inside, so we can separate the bullshit from the necessary discussion. This is why we need more meetings like the one WEF held, with stakeholders besides AI’s present proponents so we might debate the issues, the risks — and the benefits — they could bring. 

Gibberish from the machine


I’m honored that Germany’s Stern asked me to write about AI and journalism for a 75th anniversary edition. Here’s a version prior to final editing and trimming for print and translation. And I learned a new word: Kauderwelsch (from the variety of Romansch spoken in the Swiss town of Chur, or “Kauder,” in canton Graubünden) means gibberish. 


We have Gutenberg to blame. It is because of his invention, print, that society came to think of public discourse, creativity, and news as “content,” a commodity to fill the products we call publications or lately websites. Journalists believe that their value resides primarily in making content. To fill the internet’s insatiable maw, reporters at some online sites are given content quotas, and their news organizations no longer appoint editors-in-chief but instead “chief content officers.” For the record, Stern still has actual editors, many of them.

And now here comes a machine — generative artificial intelligence or large language models (LLMs), such as ChatGPT — that can create no end of content: text that sounds just like us because it has been trained on all our words. An LLM maps the trillions of relationships among billions of words, turning them and their connections into numbers a computer can calculate. LLMs have no understanding of the words, no conception of truth. They are programmed only to predict the next most likely word to occur in a sentence.
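The mechanics described above can be illustrated with a toy sketch of my own devising (not OpenAI’s code, and vastly simpler than a real LLM): a bigram model that counts which word tends to follow which in a training text, then “predicts” the most likely next word. Real models map those relationships through neural networks with billions of parameters, but the principle is the same — and note that nothing in the program knows or cares whether its output is true.

```python
from collections import Counter, defaultdict

# "Training": count which word follows which in a tiny corpus.
corpus = ("the court asked the lawyer to produce the cases "
          "the lawyer asked the machine to produce the cases").split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most frequently observed successor of `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "produce" was always followed by "the" in training, so that is predicted.
print(predict_next("produce"))
print(predict_next("the"))
```

The prediction is purely statistical: the model emits whatever followed most often in its training data, plausible-sounding or not, which is the seed of the “gibberish” problem at any scale.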

A New York lawyer named Steven Schwartz had to learn his lesson about ChatGPT’s factual fallibility the hard way. In a now-infamous case, attorney Schwartz asked ChatGPT for precedents in a lawsuit involving an errant airline snack cart and his client’s allegedly injured knee. Schwartz needed to find cases relating to highly technical issues of international treaties and bankruptcy. ChatGPT dutifully delivered more than a half-dozen citations.

As soon as Schwartz’s firm filed the resulting legal brief in federal court, opposing counsel said they could not find the cases, and the judge, P. Kevin Castel, directed the lawyers to produce them. Schwartz returned to ChatGPT. The machine is programmed to tell us what we want to hear, so when Schwartz asked whether the cases were real, ChatGPT said they were. Schwartz then asked ChatGPT to show him the complete cases; it did, and he sent them to the court. The judge called them “gibberish” and ordered Schwartz and his colleagues into court to explain why they should not be sanctioned. I was there, along with many more journalists, to witness the humbling of the attorneys at the hands of technology and the media.

“The world now knows about the dangers of ChatGPT,” the lawyers’ lawyer told the judge. “The court has done its job warning the public of these risks.” Judge Castel interrupted: “I did not set out to do that.” The problem here was not with the technology but with the lawyers who used it, who failed to heed warnings about the dubious citations, who failed to use other tools — even Google — to verify them, and who failed to serve their clients. The lawyers’ lawyer said Schwartz “was playing with live ammo. He didn’t know because technology lied to him.”

But ChatGPT did not lie because, again, it has no conception of truth. Nor did it “hallucinate,” in the description of its creators. It simply predicted strings of words, which sounded right but were not. The judge fined the lawyers $5,000 each and acknowledged that they had suffered humiliation enough in news coverage of their predicament.

Herein lies a cautionary tale for news organizations that are rushing to have large language models write stories — because they want to be cool and trendy, or save work, or perhaps to eliminate jobs, and manufacture ever more content. The news companies CNET and G/O Media have gotten into hot water for using AI to produce content that turned out to be less than factual. America’s largest newspaper chain, Gannett, just turned off artificial intelligence that was producing embarrassing sports stories that would call a football game “a close encounter of the athletic kind.” I have heard online editors plead that they are in a war to produce more and more content to attract more likes and clicks so they may earn more digital advertising pennies. Their problem is that they think their mission is only to make content.

My advice to editors and publishers is to steer clear of large language models for writing the news, except in well-proven use cases, such as turning highly structured financial reports into basic news stories, which must be checked before release. I would give the same advice to Microsoft and Google about connecting LLMs with their search engines. Fact-free gibberish coming out of the machine could ruin the authority and credibility of both news and technology companies — and affect the reputation of artificial intelligence overall.

There are good uses for AI. I benefit from it every day in, for example, Google Translate, Maps, Assistant, and autocomplete. As for large language models, they could be useful to augment — not replace — journalists’ work. I recently tested a new Google tool called NotebookLM, which can take a folder filled with a journalist’s research and summarize it, organize it, and allow the writer to ask questions of it. LLMs could also be used in, for example, language education, where what matters is fluency, not facts. My international students use these programs to smooth out their English for school and work. I even believe LLMs could be used to extend literacy, to help people who are intimidated by writing to communicate more effectively and tell their own stories.

Ah, but therein lies the rub for writers, like me. We believe we are special, that we hold a skill — a talent for writing — that few others can boast. We are storytellers and wield the power to tell others’ tales, to decide what tales are told, who shall be heard in them, and how they will begin and neatly end. We think that gives us the ability to explain the world in what journalists like to call the first draft of history — the news.

Now writers and journalists see both the internet and AI as competition. The internet enables the silent mass of citizens who were not heard in media to at last have their say — and to create a lot of content. And by producing credible prose in seconds, AI devalues writing and robs writers of their special status.

This is one reason why I believe we see hostile coverage of technology in media these days. News organizations and their proprietors claim that Google, Facebook, et al steal away audience, attention, and advertising money (as if God granted publishers those assets in perpetuity). Journalists are engaged in their latest moral panic — another in a long line of panics over movies, television, comic books, rock lyrics, and video games. They warn about the dangers of the internet, social media, our phones, and now AI, claiming that these technologies will make us stupid, addict us, take away our jobs, and destroy democracy under a deluge of disinformation.

They should calm down. A 2020 study found that in the US no age group “spent more than an average of a minute a day engaging with fake news, nor did it occupy more than 0.2% of their overall media consumption.” The issue for democracy isn’t so much disinformation but the willingness — the eagerness — of some citizens to believe lies that stoke their own fears and hatreds. Journalism should be reporting on the roots of bigotry and extremism rather than simplistically blaming technology.

In my book, The Gutenberg Parenthesis, I track society’s entry into the age of print as we now leave it for the digital age that follows. Print’s development as an institution of authority took time. Not until fifty years after Gutenberg’s Bible, around 1500, did the book take the shape we know today, with titles, title pages, and page numbers. It took another century, a few years either side of 1600, before the technology and its technologists — printers — faded into the background, making way for tremendous innovation with print: the birth of the modern novel with Cervantes, the essay with Montaigne, and the newspaper. A business model for print did not arrive until one century more, in 1710, with the advent of copyright. Come the 1800s, the technology of print — which had hardly changed since Gutenberg — evolved at last with the arrival of steam-powered presses and typesetting machines, leading to the birth of mass media. The twentieth century brought print’s first competitors, radio and television. And here we are today, just over a quarter century past the introduction of the commercial web browser. This is to say that we are likely at just the beginning of a long transition into the digital age. It is only 1480 in Gutenberg years.

In the beginning, rumor was trusted more than print because any anonymous printer could produce a book or pamphlet — just as anyone today can make a web site or tweet. In 1470 — only fifteen years after Gutenberg’s Bible came off the press — Latin scholar Niccolò Perotti made what is said to be the first call for censorship of print. Offended by a bad translation of Pliny, he wrote to the Pope demanding that a censor be assigned to approve all text before it came off the press. As I thought about this, I realized Perotti was not seeking censorship. Instead, he was anticipating the establishment of the institutions of editing and publishing, which would assure quality and authority in print for centuries.

Like Perotti in his day, media and politicians today demand that something must be done about harmful content online. Governments — like editors and publishers — cannot cope with the scale of speech now, so they deputize platforms to police and censor all that is said online. It is an impossible task.

Journalists must be careful using AI to produce the news. At the same time, there is a danger in demonizing the technology. In the best case, the rise of AI might force journalists to examine their role in society, to ask how they improve public discourse. The internet provides them with many new ways to connect with communities, to build relationships of trust and authority with them, to listen to their needs, to discover and share voices too long not heard in the public sphere, to expand the work of journalism past publishing to the wider canvas of the internet.

Journalists think their content is what makes them valuable, and so publishers and their lawyers and lobbyists are threatening to sue AI companies, dreaming of huge payments for machines that read their content. That is no strategy for the future of journalism. Neither is Axel Springer’s plan to replace journalists in content factories with AI. That is not where the value of journalism lies. It lies with reporting on and serving communities. Like Niccolò Perotti, we should anticipate the creation of new services to help internet users cope with the abundance of content today, to verify the truth and falsity of what we see online, to assess authority, to discover more diverse voices, to nurture new talent, to recommend content that is worth our time and attention. Could such a service be the basis of a new journalism for the online, AI age?

A generation later: What have we learned?

The date sneaked up on me this year, attacking from behind. Every year on 9/11 I reflect, grateful that I survived the attack. This year, though, I find myself angry. Some of that might be my own loss: my father to COVID this year; my imminent unemployment.

But I am angry on this 22nd anniversary at what has fallen since: at the authoritarianism that overtook this country and threatens the world, at racism and bigotry set loose, at the pandemic killing still, at my own field — journalism — failing to meet these challenges. 

A generation has passed since 9/11/01 and what have we learned? Authoritarians attacked us that day and now authoritarians attack from within. My failing field — journalism — elevates the evil as if it is merely another side in a spectator sport.

Since 9/11/01, our only popularly elected presidents succeeded in strengthening the nation. Under Biden, the economy & nation are strong. But journalism fails at informing the public and wants to make jet lag an election issue while normalizing the fascism in the house. WTF. 

It was on 9/11/01, on my way to work through the World Trade Center, that I decided it was time to leave my job. I would teach. Now I leave that role and I ask what I have accomplished. I pray my students will turn around journalism, for we, their elders, have failed. 

I am, of course, still grateful to have survived 9/11/01. The images and lessons of that day are seared into my soul and will never leave me; they define me. I regret that the spirit in the nation was perverted into war in Iraq. I worry about the state of politics everywhere. 

But on this day I will try to rise above my anger and remember the names of the souls lost and the faces of the selfless first responders I saw rushing toward danger and mercy. This is a day for memorial and gratitude to them.

The only suitable memorial to those lost on 9/11/01 is to recognize the evil that took them and for our institutions — government, politics, journalism, education — to protect present and future generations from further fascism.