AI Won’t Overthrow Us, But It Will Optimize the Capitalist Death Machine


“The real threat of these systems is what it will mean for our power,” says Paris Marx.

By Kelly Hayes | Part of the Series: Movement Memos

“How are these tools going to be used to increase the power of employers and of management once again, and to be used against workers?” asks Paris Marx. In this episode of “Movement Memos,” Marx and host Kelly Hayes break down the hype and potential of artificial intelligence, and what we should really be worried about.

TRANSCRIPT

Note: This is a rush transcript and has been lightly edited for clarity. Copy may not be in its final form.

Kelly Hayes: Welcome to “Movement Memos,” a Truthout podcast about organizing, solidarity and the work of making change. I’m your host, writer and organizer Kelly Hayes. This week, we are talking about AI, and what activists and organizers should understand about this emerging technology. In the last year, we have been inundated with hype and predictions of transformation and doom regarding artificial intelligence.

In January 2023, ChatGPT had racked up 100 million active users, only two months after its launch, as mesmerized journalists published accounts of their interactions with the product. For a time, ChatGPT was hailed as the fastest-growing consumer application in history, although desktop usage of the app declined by almost 10 percent in June, with some users complaining that ChatGPT has produced lower-quality content over time. Economists with Goldman Sachs have predicted that AI could eliminate as many as 300 million jobs over the next decade, and some tech leaders warn that artificial intelligence could eliminate the human race altogether. So, is AI really poised to kill our jobs and annihilate us as a species? A statement from industry leaders, published by the Center for AI Safety on May 30, stated, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories for that statement included Bill Gates; Sam Altman, who is the CEO of OpenAI; and Demis Hassabis, the CEO of Google DeepMind. Sam Altman has also stated that he doesn’t “think we’ll want to go back,” once AI has revolutionized our economy. So some of the same people who are telling us that artificial intelligence will eliminate millions of jobs and potentially wipe out humanity are also telling us that it will transform the world for the better.

So what the hell is actually going on with this technology? And what do activists and organizers need to understand about it?

Today, we will be hearing from Paris Marx. Paris is the host of one of my favorite podcasts, “Tech Won’t Save Us,” and they also write for Disconnect, a newsletter for people who want a critical perspective on Silicon Valley and what it’s doing to the world. They’re also the author of Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation. In this episode, Paris is going to help separate the reality of AI, and what this technology can and cannot do, from the nonsense and sci-fi tropes being churned out by the Silicon Valley hype machine.

This episode follows our recent conversation with Émile Torres about longtermism, a cult-ish ideology that is running rampant in the tech world.

While you don’t have to listen to that episode first, I do recommend circling back, if you haven’t checked out our conversation with Émile, because these subjects connect in really important ways.

As we are going to discuss, AI is definitely overhyped, but the tech industry is, in fact, shaping and reshaping our world in disturbing ways. To fight back, we need to understand the ideas and technology that are driving these changes.

I am grateful for the opportunity to dive into these subjects, because I’ve been alarmed by some of the misinformation that organizers and activists are being hit with around these issues. Irresponsible coverage of AI, which we’ll talk more about in this episode, is just one example of why we need independent news and analysis, now more than ever. I’m able to put together episodes like this one thanks to Truthout, a nonprofit news organization with no ads, no paywalls and no corporate sponsors, that isn’t beholden to any of the people whose shenanigans I call out on this show. With this podcast, we are working to provide a resource for political education that can help empower activists and organizers.

If you want to support the show, you can sign up for Truthout’s newsletter or make a donation at truthout.org. You can also subscribe to “Movement Memos” or write a review on Apple or Spotify, or wherever you get your podcasts. Sharing your favorite episodes is also a big help. Remember, Truthout is a union shop and we have the best family and sick leave policies in the industry.

A lot of publications have gone under or suffered layoffs in recent years. At Truthout, we’ve managed to dodge those bullets, and we have our recurring donors to thank for that. So, I want to give a special shout out to our sustainers. Thanks for your support, and for believing in what we do.

And with that, I hope you enjoy the show.

[musical interlude]

Paris Marx: My name is Paris Marx. My pronouns are he or they, whichever you prefer. I host a podcast called “Tech Won’t Save Us.” I also write a lot about technology from a critical left perspective for a bunch of different publications, everything from Time Magazine to Wired to Jacobin, all over the place, and I also wrote a book called Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation. With regard to AI, obviously, it’s something that I am trying to understand like everyone else, and so since, I guess, last year, I’ve been interviewing and talking to a lot of critics and skeptics of AI to learn more about their perspectives and what is going on now that we see this kind of generative AI boom.

So there’s a ton of hype around AI right now. I don’t need to tell you that. Everyone will have seen it. All the stories about ChatGPT that we’ve seen over the past six months or so, along with the image-generation tools and everyone kind of going crazy about OpenAI and Sam Altman doing his tour around the world. These things are getting a lot of media attention, and there’s a lot of reporting and a lot of writing about the potential consequences of all of these technologies, and what we’re led to believe is that this is some massive new advance in AI technology and in the digital technologies that surround us, and that it’s going to mean huge changes for how we work and how we live, and nothing is ever going to be the same again.

I think that that is a huge overstatement that works for the companies, right? That works for the industry.

What we know is that the tech industry was really struggling prior to this AI boom. Not only did we see the crash of the crypto boom that everyone might remember, with the cryptocurrencies and the NFTs, but the big push for the metaverse to be the next big thing also fizzled out as a lot of people just found it to be a joke, and at the same time, interest rates have been rising.

This is something people will be very familiar with, but it also jeopardized the model that the tech industry had used for the past 15 years in order to further its business models and its kind of global domination. Those low interest rates allowed for easy access to capital that was useful in rapidly growing these businesses even if they were making losses and not turning a profit.

And so the industry was in this really difficult situation. People probably remember the stories of layoffs and things like that from earlier this year, and so the AI boom comes along not to be this big technological revolution, but to kind of save the industry from a business standpoint, because by having this boom, by having this excitement around AI, it drives a new wave of investment even though there are all these other factors going on that are negative for the industry. And so I think that that is the best way to understand AI — not as this kind of massive technological transformation, but as a real business move in order to resuscitate and keep the industry going, so that it’s not going to enter this really prolonged downward spiral.

And so in order to get us to buy into these things, and in order to get us to believe these things, the industry has to put out really over-exaggerated narratives to make it seem like these technologies are going to transform the world. There’s also this idea that these AI technologies, these chatbots, are this big step forward that means that we’re just on the cusp of artificial general intelligence, which is this idea that the computers are going to reach the level where they’re at parity with humans in terms of their ability to think and process, like they kind of gain a consciousness in a sense, right? And so people like Sam Altman and people who are in the industry are making us believe that we are right on the cusp of achieving this because that works for their business goals, right?

So [making us believe] that, instead of paying attention to the real consequences of these AI tools: how they can be used in welfare systems to discriminate against people, how they might be used to encourage the privatization of education or health care services, many other ways that they can affect our lives in a really negative way without us realizing that that’s even happening. And instead of saying, “Let’s focus the regulatory lens and the critical scrutiny lens on those types of things, those really real ways that they can affect our lives,” this kind of narrative about artificial general intelligence says, “Don’t look at those things, but instead look to the possible future where these intelligent machines could become our overseers or our overlords and might eradicate humanity.” And it’s total fantasy, but it works for them to make us believe that.

KH: Now, if you listened to our episode with Émile Torres, you may be wondering whether some of these tech characters actually believe their own hype, when it comes to AI.

After all, the idea of creating an AI superintelligence is a big part of the longtermist ideology, which is basically the Scientology of Silicon Valley.

Some people in the tech world do seem to believe that artificial general intelligence could ultimately dominate and destroy us all. As Survival of the Richest author Doug Rushkoff told The Guardian in May:

They’re afraid that their little AIs are going to come for them. They’re apocalyptic, and so existential, because they have no connection to real life and how things work.

They’re afraid the AIs are going to be as mean to them as they’ve been to us.

So, some tech leaders may believe that superintelligence is a threat, just as some colonizers might have believed Manifest Destiny was real, but what matters is how these ideas function in the world, who they empower, and who they dehumanize and disempower. So, I want to go ahead and stick a fork in the question of what tech leaders believe, with regard to AI, by arguing that it doesn’t matter what they believe. When ideas are weaponized, motives and actions matter more than what people think of the weapons they’re wielding. I also want to note that, when it comes to the damage that powerful people want to cause, lofty justifications usually follow existing projects or desires.

So with longtermism, we have a sort of techy religion that’s been built around certain pursuits, and with the general public, we have the narrative of an angry, inevitable, future god, in the form of artificial general intelligence, that tech leaders supposedly want to protect us from. As religiosity goes, it really is one hell of a scam, regardless of whether cult-y tech leaders believe in what they’re selling.

On a practical level, as Paris explained, the supposed threat of an AI superintelligence is used to keep us mesmerized, so that we won’t get riled up about how these technologies are being deployed against us in real time. Algorithms already control our lives, in so many ways, and by building up some future boogeyman that tech leaders supposedly want to protect us from, they are hoping to distract us from the reality of what this tech is and is not doing in the world today.

PM: There’s a bunch of AI that surrounds us every single day; if you think about when you’re typing on your phone, autocorrect comes up, and that is AI, right? So there are really kind of general basic things that are also artificial intelligence or would fit under that term. I think that the term itself is misleading because it makes us believe that these machines are intelligent in some sort of way, which I would argue is not the case.

They’re just models that these people have put together that make predictions and things like that based on the data that they’re trained on.

So when we look at what these tools are actually doing, the chatbots are based on large language models, and basically, these things have been around for a while, but what has been achieved with these ones, part of the reason that they are getting so much attention in this moment and that they seem so much better than in the past, is because they have access to a lot of centralized computing power, and they have access to a lot of data on which to be trained, right? People are probably familiar with Microsoft’s cloud infrastructure. Amazon and Google have them as well: these data centers all over the world that give them access to a lot of computing power. And so these companies are using this in order to train these models, and at the same time, they have scraped the open internet to take hundreds of millions or billions of words and images, and things like that, on which to train these models.

And because they’re using so much computing power and because they’re using so much data, certainly they can churn out things that these tools have not been able to in the past, but that’s not because they’re so much better.

It’s just because they’re using more resources to do it, and they have more data that they’re trained on. That doesn’t mean that they’re so much more intelligent than previous machines or anything like that. It’s just that we’re kind of increasing the scale at which this operates, and so they’re able to achieve some new things as a result.
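To make that point concrete, here is a minimal sketch of the principle Paris is describing: a toy model that “predicts” the next word purely from frequencies observed in its training text. This is an illustration only; GPT-class systems are neural networks trained at vastly larger scale, and the training text here is invented for the example.

```python
import random
from collections import defaultdict, Counter

# Toy illustration: a bigram "language model" that predicts the next word
# purely from frequencies observed in its training text. Real LLMs use
# neural networks at vastly larger scale, but the underlying principle is
# the same: prediction from training data, not understanding.

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    counts[current][following] += 1  # tally what follows each word

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the rug . the dog"
```

More data and more compute make the output more fluent, but the mechanism stays the same: statistical continuation of what the model has already seen.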

KH: One reality that tech leaders are seeking to obscure when they hype up AI doomsday scenarios is that large language models are already contributing to one of the greatest threats humanity is faced with today: climate chaos.

As Paris has explained, we are talking about existing technologies that have a much greater extractive capacity than the average chatbot. That extractive capacity is, itself, powered by more material forms of extraction.

PM: One of the things that we often think about — and I think the term “cloud” hides it from us, hides the real impact from us — is that when we think about these computer systems, when we go onto the web, when we access a Netflix movie or we access some files that we have in the cloud, those aren’t just in some ethereal place that has no impact.

They’re in one of these large data centers that are filled with computers that are holding all of this information, and they require a lot of energy in order to power them, but they also require a lot of water in order to cool them, right? And so as these kind of generative AI models, these chatbots like ChatGPT or these image-generation systems become more popular, they also require more resources in order to power them, and so that will require kind of more data centers, and more energy, and more computer parts and more water to keep all these things going.

And I think the other piece of it that’s really important when we think about the extractive nature of these products is not just resource extraction in the sense of mining to create the computers, for the energy creation and also for the water that’s needed for those, but also extraction in terms of data, right? Because they’re basically going out to the open web where all of us have been sharing things for the past several decades, and they’re scraping all of that information and all of those posts, and all of those images, and using it to train their models. And they’re saying that this should be okay, that they should not be held to account for that.

In some cases, like in the case of OpenAI, they’re not even telling us what it has specifically been trained on.

They’re trying to keep that a secret, and so we’ve seen a number of lawsuits that have been launched in recent months challenging these companies on the data that they used to train these models, saying that they’ve used copyrighted material, saying that they’ve used people’s private information that they’ve scraped off of the web, and challenging their ability to actually just take all of our data in that way and use it how they see fit.

KH: According to The Washington Post, a large data center’s cooling systems can consume “anywhere between 1 million and 5 million gallons of water a day — as much as a town of 10,000 to 50,000 people.” Phoenix, Arizona, the fifth-largest city in the United States, is home to data centers owned by Apple and Google, among others, and is experiencing a decades-long “megadrought” and a historic heat wave. According to the Arizona Department of Water Resources, there simply isn’t enough water beneath the Phoenix metropolitan area to meet projected demands over the next 100 years.

In total, Google’s global data centers used over 4.3 billion gallons of water in 2021.
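For rough scale, here is a back-of-envelope calculation using only the figures cited above; the per-person rate is simply what the Washington Post comparison implies, not new data.

```python
# Back-of-envelope check on the figures cited above. These are the
# article's numbers; no new data is introduced here.

large_center_gallons_per_day = 5_000_000   # upper end of the WaPo range
town_population = 50_000                   # upper end of the comparison

gallons_per_person_per_day = large_center_gallons_per_day / town_population
print(gallons_per_person_per_day)  # 100.0 -- the implied per-capita rate

google_gallons_2021 = 4_300_000_000        # Google's global data centers, 2021
print(google_gallons_2021 / 365)   # ~11.8 million gallons per day, on average
```

In other words, Google’s fleet alone consumed the daily equivalent of a couple of the largest individual data centers, every day of the year.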

The so-called Cloud now has a larger carbon footprint than every major airline combined. Data centers also contribute 2 percent of all global greenhouse gas emissions. In addition to the minerals that are mined to produce the hardware used in data centers, AI technologies also contribute to other forms of extraction. While some claim that artificial intelligence can help solve the climate crisis, Dan McQuillan explains in his book, Resisting AI: An Anti-fascist Approach to Artificial Intelligence, why the opposite is true. McQuillan writes:

Amazon aggressively markets its AI to the oil and gas industry with programmes like ‘Predicting the Next Oil Field in Seconds with Machine Learning’ while Microsoft holds events such as ‘Empowering Oil & Gas with AI’.… Despite bandying about the idea that AI is a key part of the solution to the climate crisis, the real modus operandi of the AI industry is its offer to accelerate and optimize fossil fuel extraction and climate precarity.

So how many data centers will it take to power the so-called AI revolution? Given what we know about the extractive nature of data centers, and the industries AI would support, the environmental costs of powering a world run on AI appear downright incalculable. And while these environmental concerns are troubling enough, AI threatens our well-being in a number of other disturbing ways.

PM: So in the narratives around AI, these companies like to have us focus on the future and the potential consequences that could come of artificial general intelligence, and these machines becoming so powerful that they can overtake humans and stuff like that, right? And that distracts us from the real harms that can come of these things that are actually very important and very consequential for people, not just in the United States, but around the world. And so I think people will be familiar with things like predictive policing, where AI tools were used to kind of figure out statistically, predictively, who might commit a crime in the future, so that the police can go and try to stop it before it happens, right?

And these systems are very racist, are very inaccurate, because they’re trained on past data.

So if the police have spent a lot of time policing particular neighborhoods, like Black neighborhoods, and arrested a disproportionate number of people in those neighborhoods, then the models will suggest that the people who are going to commit crimes in the future are those types of people in those types of locations in the city, and ignore other people who might also be committing crimes but are not the types of people who are generally arrested by police. Think white-collar criminals and things like that, right? So that’s one piece of it, but I would say that it extends much further than that, and that the risk is much greater.
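A deliberately oversimplified sketch of that feedback loop, with invented numbers; real predictive-policing systems are more complex, but they inherit the same skew from their training data.

```python
# Deliberately simplified sketch of the feedback loop described above.
# The "model" here is just a ranking by historical arrest counts -- a toy
# stand-in for real predictive-policing systems, which are more complex
# but inherit the same skew from the data they are trained on.

# Suppose actual offending is roughly even across neighborhoods, but past
# patrols concentrated in A and B, so the *recorded* arrests (the training
# data) are heavily skewed. All numbers are invented for the example.
arrests = {"A": 120, "B": 95, "C": 12, "D": 9}

def predict_hotspots(history, k=2):
    """'Predict' future crime by ranking past arrest counts."""
    return sorted(history, key=history.get, reverse=True)[:k]

patrol_targets = predict_hotspots(arrests)
print(patrol_targets)  # ['A', 'B'] -- more patrols go where arrests already were

# Each round of patrols generates new arrests where police already are,
# so retraining on the updated data only deepens the original bias.
for hood in patrol_targets:
    arrests[hood] += 30  # extra patrols produce extra recorded arrests
print(predict_hotspots(arrests))  # still ['A', 'B']: the loop reinforces itself
```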

There’s an example in Australia that I think is actually very telling, where the government down there was looking to make their welfare system more efficient, and to find people who were receiving welfare benefits who shouldn’t have been receiving them. So they implemented this AI tool that was termed down there, “Robodebt.” And what happened was this tool matched up people’s income submissions with the amount of money that they were receiving through the program and sent out a ton of letters to people all around the country, expecting them to pay back welfare payments that had been made to them — social assistance payments and things like that. And after years of fighting and litigation, what came out was that this AI tool was flawed. The way it was calculating this was not accurate, and so it was telling a bunch of people who had legitimate reasons to be on welfare or social assistance that they had to actually pay that money back, and that caused a lot of harm in those people’s lives.

It caused them a lot of stress and a lot of heartache. It obviously caused people to lose homes and things like that, but it also caused people to commit suicide because they just saw that there was no hope and no way forward for them now that the government was taking them on, and taking the last bit of money they had, or cutting them off from support that they deserved and that they should have had.

And ultimately, the government had to pay I believe it was $7 billion in compensation to these people and shut down the program, but we’ve seen these types of systems rolled out in other parts of the world as well.
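The core flaw that came out of the Robodebt litigation was income averaging: the system smeared a person’s annual income evenly across fortnights and compared that average against their fortnightly welfare declarations. Here is a simplified illustration with invented numbers and a made-up threshold; the real Centrelink payment rules are considerably more involved.

```python
# Simplified illustration of Robodebt's documented core flaw: averaging
# annual income evenly across fortnights. All numbers are invented for
# the example; the threshold is a stand-in, not the real Centrelink rule.

FORTNIGHTS = 26
income_free_area = 150  # hypothetical earnings allowed per fortnight
                        # before benefits are reduced

# A casual worker: paid well during a short contract, and legitimately
# on benefits for the rest of the year.
actual_fortnightly_income = [3000] * 4 + [0] * 22  # all earnings in 4 fortnights
annual_income = sum(actual_fortnightly_income)     # 12000, as reported to tax

# What the person truthfully declared: over the threshold in 4 fortnights,
# under it (zero income) in the other 22.
truthful_overs = sum(1 for x in actual_fortnightly_income if x > income_free_area)

# What the automated system assumed: the same income every fortnight.
averaged = annual_income / FORTNIGHTS              # ~461.5 per fortnight
assumed_overs = FORTNIGHTS if averaged > income_free_area else 0

print(truthful_overs)  # 4  -- fortnights where benefits were rightly reduced
print(assumed_overs)   # 26 -- the system flags a false "debt" for all 26
```

Anyone with lumpy, seasonal or casual income looked like a cheat to a system built on that averaging assumption.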

Beyond welfare systems, we now see very commonly that AI is being used in immigration systems and in visa applications, so it’s used to judge people’s submissions for work visas, and travel visas, and things like green cards and whatnot, and that is becoming a serious issue as well, because again, these systems are often very racist and discriminatory.

So these are some of the real implications, some of the aspects of AI being integrated into the various systems that people depend on that can have life-altering consequences, but that people like Sam Altman and these powerful people in the tech industry are really not interested in and don’t want us to think about, because that might make us have some second thoughts on the kind of AI revolution in the world that they are trying to create, because it shows that there are actually very real flaws with these systems once they are rolled out into the real world, rather than just existing in their imaginaries of what the future might look like.

KH: As Paris explains, the so-called revolution that AI offers is really a hardening and enhancement of the status quo. In the same way that neoliberalism encases markets, protecting corporations from the interference of governments, and from democracy itself, AI encases systemic and bureaucratic violence. Compassion, reason, ethics and other human concerns, which might, at times, interrupt the perpetration of systemic violence, become the work of algorithms that feel nothing and cannot be reasoned with.

Algorithmic governance, in effect, amplifies and speeds up everything that’s wrong with our social systems and institutions.

Proponents of these technologies are also working to erode notions of what it means to be human, in order to argue that their souped-up chatbots — or what some experts have called “stochastic parrots” — are just like us. In December 2022, Sam Altman tweeted, “i am a stochastic parrot, and so r u.”

PM: It’s so disappointing to see people like Sam Altman go on Twitter and claim that we are all stochastic parrots, right? And this is a term that was coined by a group of AI researchers, including Emily Bender and Timnit Gebru, saying that these systems are basically like a parrot, right? It takes in this data, and it will kind of spit out similar types of things as the data that has been given to it, right? It’s not intelligent.

It’s just kind of repeating words, and sometimes that can make us believe that it is intelligent because it is kind of spitting things back at us, and we might want to believe that there’s something more there.

And so when people like Sam Altman say that they are also a stochastic parrot, that humans are stochastic parrots, what they’re doing is devaluing human intelligence and what it means to be human to try to make us more equivalent to computers, so it’s easier for them to argue that these AI tools, this artificial intelligence — which again, I think is a very misleading term — is close to becoming equivalent with human intelligence, right? And these computers are going to match us very soon. I think many people rightfully argue that that is very unlikely to happen, and that we’re not going to see that at any time in the future, but their ideology is kind of caught up in believing this and in believing that the computers are about to match us, and they also have this belief that we should want that to happen.

There’s kind of a transhumanist belief here. Someone like Altman, of course, has investments in companies that want to ensure that people’s minds are uploaded to computers, and he, I believe, even has a reservation or something to have his body frozen when he dies until the point when it’s actually possible for him to upload his mind to the computer — something weird like that, right? And so these people have these really odd beliefs, as we’ve talked about, but it also shapes how they see the world. And I think that the real problem there, if we’re thinking in the big picture, is that it leads us to not value the unique things that humans can do and that we should want them to continue doing.

If we think of something like art and creativity, for example, these people who are developing these tools want us to replace the creation of art, want us to replace writing with chatbots and image-generation tools, and instead have humans do much more mundane labor, checking over these systems and whatnot. So instead of us creating this art and doing these creative works that are based on our experiences of the world and the various things that we’ve experienced in our lives, and reflecting that in artistic creations that then a bunch of people enjoy — and I think that’s one of the things that we value about human society, art and creativity, and things like that.

Well, they would prefer to see that taken over by machines, and they don’t see any particular problem with that because it will just allow these machines to churn out more things that look like art but don’t have the kind of soul that would be given to a piece of creative work made by a human, I would argue. Yeah, and I could say so much more, but I think that those are some real concerns with their approach to it.

KH: If what you’re hearing in this episode seems to contradict something you’ve read in a major publication, that’s likely because coverage of artificial intelligence has largely been incompetent, at best, and, at worst, highly unethical.

PM: I think the media reporting on AI is often very irresponsible. We do occasionally get stories that present a more critical perspective on these things, and I always welcome that, but I would say that the majority of reporting is very uncritical and is kind of repeating the narratives that these CEOs and these companies want us to believe, basically: you know, things like artificial general intelligence, and how these systems are going to be a big threat to us, and all the ways that they’re going to make society a better place, and blah, blah, blah, right? All the things that are coming from these companies and that these CEOs want us to believe. But I think that what we’re really lacking then is a proper understanding of how these technologies actually work, what their potential real impacts are on the world, and how we should actually be thinking about them, right?

Because when the media coverage just reflects what the companies are saying for the most part, then we don’t get that critical understanding that allows us to do a real assessment of what these technologies might mean for us, because what we’re always presented is that they’re going to have all these huge effects on our lives, whether they’re positive or negative, because in this case, I would argue that even the negative scenarios — like artificial general intelligence and potential robots or AIs kind of taking over humanity — also really work for the companies and their PR narratives.

And so what we’re missing there is a media that is able to challenge those things and to present us with alternative perspectives. And I think that that happens for a number of reasons, not just in the tech media but in the mainstream media as well.

So one of the things that we’ve seen over the past number of years, and the past few decades, I guess, is that the funding for media and journalism has really been hollowed out, in part because it has moved online. You lose the classifieds because there are just free websites where people do that kind of stuff now, so newspapers don’t get the revenue from that, but also digital advertising is basically controlled by Google and Facebook, and to a lesser degree, Amazon. And so newspapers and media organizations get less advertising revenue, which was kind of the core of their business model, and so when they have less revenue, that means that they not only need to continue churning out stories in order to get people reading what’s on their website, but they also don’t have the time to do the investigative work that would be necessary for a more critical assessment of these technologies.

I think that part of it as well is that in the tech industry, there’s a really strong desire for access, especially in the tech publications.

So if you’re going to write critically about some of these major companies, then you might be excluded from press events, from things like Apple keynotes where they launch new products, and so that will mean that you won’t be able to access those things and do the types of coverage that other outlets have. And I think as well, some people who go into reporting on tech and writing about tech just generally have a positive view of technology, and that’s part of the reason they want to enter that sphere anyway. And so they come in with these preconceived notions of what the tech industry is, and that it’s doing positive things in the world, and that new technology is equivalent to progress, and all these sorts of ideological ideas that we have about tech, and that then gets reflected in their reporting.

So unfortunately, I think that the coverage we get of AI is just reflective of a broader trend in tech coverage that is very boosterish, that is very positive, that is not nearly critical enough, and that leaves the public unprepared to actually judge what these technologies might mean for them because it always seems that the tech industry is doing really positive things in revolutionizing society, even when we can see that that is not what the real impact tends to be.

KH: I have previously mentioned Dan McQuillan’s book, Resisting AI, which is a really great read that I think everyone should check out. When I first picked up that book, my take on artificial intelligence was that these technologies were inevitable, and probably unstoppable. You may have heard similar arguments, and you may hold these beliefs yourself.

You may also be thinking, as I once did, that, under the right circumstances, and in the right hands, these technologies could, in fact, transform our lives for the better.I understand the appeal of this idea, but having researched this technology, what it’s doing and where it’s heading, I no longer hold this view.

PM: I think Dan McQuillan’s book, Resisting AI, is really informative, and it was interesting to me, when I interviewed him for my podcast, that he explained that initially, he set out to write a book that was about AI for good. I believe that was the initial title. And as he got into his research, he realized that actually there is no AI for good, and we need to resist AI, and that resulted in the shift that happened in the title of the book, but also in the content of the book in terms of what he was arguing.

And I think that he has a really persuasive argument when you think about the impacts of AI on society — the real impacts, right? Not the things that people like Sam Altman and Elon Musk are pointing us toward, but the things that we were talking about with regard to the way that it can be deployed in welfare systems, and in immigration systems, and in policing systems, and all of these other ways that it can really affect people, right? And really shape people’s lives in a really significant way, but also where they are disempowered from being able to take actions that would change those sorts of things.

Because when they’re built into the systems and when your life is governed by AI systems, you have a lot less power over your own life because of how these technologies can shape just everything that you interact with, and you might not be able to access a human who can go around the system or fix the system because there becomes this kind of inbuilt belief that if the system is saying something, then the system must be right because it’s a computer, and why would a computer be wrong, right? So when it comes to AI for good, and the ways that it might be able to be used in a more positive way, I think it’s possible, right? I think of things that are much more mundane, like autocorrect tools and whatnot.

Obviously, as I was talking about, AI is deployed in many different ways in society. Some of those ways are much more mundane, but others can have a lot of real impacts for everyone, basically, but especially the most marginalized and least powerful people in society who have very little ability to push back on these things, and these tools and these systems.

So I think that my concern is more that we live under capitalism right now.Capitalism shapes how technology is developed to serve its need for profit, and for control of workers and of the public by the state, and so the way that we see AI technologies developed and rolled out aligns with those capitalist goals and aligns with the interests of those corporations and of particular governments, not the needs and the will of the general public, right?

And so I think that McQuillan makes a very understandable argument that we should be resisting these AI tools, and that these AI tools can be used as part of this wider shift toward fascism and kind of fascist politics that we’re seeing in society right now, because you can very clearly see how they can be used to rank people, to classify people, to control the way that we access services.

I think that there are a lot of very dangerous ways that AI can be used to enhance these types of politics. And if we’re thinking about it from a left-wing perspective, I don’t buy the argument that we should encourage the development of AI because at some point in the future, if we have a socialist government, it might be able to be used for good if we can seize the tech and use it in a different way.

I just don’t think that is a compelling argument in a moment where we know that we live under capitalism, we know that our politics is veering to the right, we know that we’re facing a lot of crises in the future when we think about the climate crisis, and what is that going to mean for the politics of our societies as we move forward? And so I think he makes a very good case that we should be opposing AI right now, and we should be opposing the rollout of these systems. Ultimately, when we think about the net impact of these technologies, it’s going to be a net negative for the vast majority of people in society who are not the Sam Altmans and the tech billionaires.

KH: Now, I’m not saying you’re one of the bad guys if you’re using auto transcription tools or other technologies that fall under the AI umbrella. The truth is, it’s very hard to get away from these applications. Our society has been structured to make us reliant upon a lot of things that are ultimately bad for the world, and to make us feel that those things are essential and inevitable. And yet, we know that our ways of living must ultimately be transformed, or most life on Earth will be destroyed — not by some scary super-intelligent AI, but by the extractive workings of capitalism. So I would ask you to open your mind a bit, and consider that we cannot allow notions of technological dependency or inevitability to govern our lives or our world.

Just as you can own a car, and still rage against the oil industry, you can use autocorrect and still question what’s happening with AI.

I also want to point out that we have been set up to become dependent on AI in really disturbing ways, and that some of the vulnerabilities this technology exploits point to the damage capitalism has already done in our lives. Some people, for example, have suggested that AI presents a solution for loneliness among elderly people, suggesting that AI pets and companions can provide “joy” and a therapeutic presence for older people. Loneliness and abandonment are massive issues in our society, and around the world. What we really need is each other, but we are being encouraged to engage with chatbots and simulations that will either leave us unsatisfied, or plunge us further into our own fractured realities. Why? Because Goldman Sachs estimates that generative AI could ultimately increase gross domestic product by 7 percent, or almost $7 trillion.

PM: I get very angry about the tech industry and the ways that they affect our society and the ways that they shape our conversation and how we think about it.

But also seeing some of the narratives around AI also just makes me profoundly sad when I think about the type of society that it is creating and that it is suggesting for how we move forward, and that is a society that is very cold, that is very lacking in human interaction, because everything goes through these computer systems, and that is what we are encouraged to do because that is what works for the business models and the goals of these companies, right? Because they make money when we interact with computer systems and with apps, not when we interact with other people.

And so they don’t want to have us do that. They want to have us stay home and ask our chatbot for advice, and have a chat with our chatbot; and order things from Amazon or from Uber, and have it dropped off; and just stay home, and consume our Netflix, and do our work, and all this kind of stuff, and it’s just a terrible vision for what the future should look like. And I think it builds on existing issues in society when we think about, obviously, the crisis of loneliness, and how people have fewer interactions with the people around them than we would imagine in the past, and than we would imagine a healthy society having, in part because of the eradication of communal space and public space, and how that has been privatized over the past number of decades, but also how we’ve lost the community organizations that used to bring people together. And that also goes along with suburbanization and the types of communities that we’ve created, where a lot of people live much further from one another.

And the transport system is basically reliant on cars and otherwise is very degraded. So in order to get anywhere, you have to get in a car, and wait in traffic, and buy expensive gasoline, and all this kind of stuff, instead of living close to the people you care about and the services that you depend on, where you can quickly access those things through public transit and cycling. It’s a very different vision of what a society can look like, but you can see how these tech tools take advantage of the type of environment that we have created, and they want to further this.

And I think that, just to make a final point, when you listen to people like Sam Altman and these people who are developing these tools, they very clearly say that they see these chatbots serving many roles in society, right? That they see them becoming our teachers, and that they see them becoming our doctors, and that they see them becoming our kind of psychologists or therapists, and all this kind of stuff.

And it always brings a couple of things to mind for me. First of all, we think about AI as being this very new technology, right? But there was this guy called Joseph Weizenbaum who was working on AI systems all the way back in the [1960s], and he developed this chatbot called ELIZA, and it was based on this model of a psychotherapist. And so it was programmed so that when you would interact with it and type in your little command, it would spit back out a question that was taking your words and kind of reframing them for you. And it was a very simple thing, but he was just trying to demonstrate how these computers could do that. But what he found very quickly was that when people interacted with this chatbot — this very basic, rudimentary thing that could not think for itself, that had no ability to think, that was just using the prompts that he had coded in order to ask people questions based on what they were saying — that people felt like they were actually interacting with somebody.

And some of the people even wanted him to leave the room while they were talking to this computer system, because they felt like it was listening, and they were telling it important details and felt like they were getting something out of it, which I think displays something quite worrying, right? That instead of interacting with a person, we interact with a computer, and we want to believe that it’s listening, and that it’s interacting with us, and that it understands us when it actually doesn’t do that. It’s just trained on these algorithms that have certainly become more advanced because they’re trained on more data and have more computing power, but they still don’t understand what we’re actually communicating, and they are not actually communicating back to us.

What they’re spitting at us is just a much more advanced form of autocorrect.
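For a sense of how little machinery ELIZA actually needed, here is a minimal sketch in its spirit: a few pattern rules and pronoun swaps. These particular rules are invented for this example rather than taken from Weizenbaum’s original script, which was richer but worked on the same principle.

```python
import re

# Minimal ELIZA-style sketch: no understanding, just pattern matching and
# pronoun reflection. Weizenbaum's original used a larger script of rules;
# these few patterns are illustrative stand-ins.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r"my (.*)", "Tell me more about your {}."),
    (r"(.*)", "Please go on."),  # fallback when nothing else matches
]

def respond(utterance):
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower().strip())
        if match:
            return template.format(reflect(match.group(1)))

print(respond("I feel like no one listens to me"))
# -> "Why do you feel like no one listens to you?"
print(respond("My job is exhausting"))
# -> "Tell me more about your job is exhausting."  (the seams show quickly)
```

A handful of string substitutions was enough to make people feel heard, which was exactly what alarmed Weizenbaum.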

And I think that the issue there is that in believing that there is intelligence, we accept this idea that it’s going to start replacing things like teachers or doctors, when it’s not actually going to do that, but then it empowers companies and governments, of course, to further push down on the wages of teachers, to keep fighting teachers unions, to keep trying to privatize education. And this won’t affect the Sam Altmans of the world or the children of Elon Musk, who has many, many children, but it will affect poor people, marginalized people, the people who can’t afford really high-quality private education, who rely on the public school system, or who don’t have the greatest health insurance and all this kind of stuff. They will be the people who will be stuck with these chatbots that will deliver much inferior service to what they might get right now, but we’re being told that this is a positive thing that we should accept because it works for these tech companies, but also the other companies in these industries that will profit from it as well.

And just to kind of close off this thought, there was a story just the other day because Google is working on a medical AI bot, or system, or chatbot, or whatnot, and one of the senior researchers at Google said that he wouldn’t want this medical chatbot to be used for his family’s health journey, but he was excited to see it rolled out in developing countries and on other kind of marginalized people, which I think really shows you the perspective that these people have: Don’t expect these technologies to be deployed on us, the wealthy people of the world, but we’re more than happy to deploy them on you regardless of the consequences.

KH: Something Paris has written about, which I think is really important to understand, is that we have seen the kind of hype we are witnessing around AI before.

PM: So I think that this is quite relevant to what is happening at the moment. What really got me interested in criticizing the tech industry was what happened in the mid-2010s, when we had the last big boom of hype around automation and AI, and people might remember that in those years, the story was that the technologies were advancing very rapidly, and that we were about to have robots and AI tools that were going to take over a ton of the work that was being done by all of us.

And that half of jobs were going to be wiped out, and all this kind of stuff, and there were going to be no more truck drivers and no more taxi drivers because self-driving cars were going to replace them.

And when you went into a fast food restaurant or a coffee shop, you were going to have a robot making your food instead of a person. All of these workers that we interacted with were going to be replaced by computers in the next few years, right? And that led to this concern about what is society going to look like when this all happens? Are we going to need a universal basic income? Are people just going to be destitute? Are we going to have fully automated luxury communism? Right? All of these narratives were happening in that moment, and what we saw was that all of the technology that these narratives were assuming was just going to take over and have these effects never really came through and never developed like these tech companies were leading us to believe.

So we never had this mass eradication of jobs. What we did have was these tools further empowering employers and managers against workers, so that they could reclassify workers from employees to contractors, like we’ve seen in the gig economy with things like Uber and delivery services; or rolling out more automated systems to have more algorithmic management of workers, which we see most prominently in places like Amazon warehouses, where the workers have very little power and are constantly tracked by the little guns that they use to scan the items. And of course, as a result, they have very little control over the conditions of their work.

It’s very hard for them to take bathroom breaks.

You’ve probably seen the stories about people saying they have to pee in bottles because they can’t actually go to the bathroom at Amazon facilities, and certainly not when they’re doing delivery driving.

And we assume that the real innovation of the Amazon model was how it used these technologies in its logistics system in order to reduce costs, but one of the real innovations of the Amazon model was to take warehouse work, which was previously a unionized and quite well paid profession, and basically turn it into something more akin to minimum-wage work that pays much less and has no union, except in one warehouse in Staten Island, of course. The same goes for delivery drivers. These deliveries were usually done by USPS, which is unionized, or UPS, which is unionized and might be going on strike soon, but Amazon is increasingly shifting over to drivers that it controls, who are hired by what it calls “delivery service partners,” so they’re not directly employed by Amazon, and they’re not paid very well.

They certainly can’t unionize, because what we’ve seen in a couple parts of the country now is that where they have tried to unionize, Amazon has cut the contract of those delivery service partners to cut them off and just get someone else, and now we also see that moving into airline delivery and freight. So Amazon has been expanding its logistics network of air shipping, and what it does there is hire pilots through third-party companies as well, so that the pilots are still unionized, but they’re not directly controlled by Amazon. So if they do try to demand better wages and conditions, Amazon can cut that contract and contract with a different company instead.

So that’s a long way of saying that this hype and this excitement around automation and AI didn’t actually eradicate a ton of jobs, but ensured that workers have less power to push back against their employers and management. And so when I look at what is happening now with this boom and this excitement around generative AI, we’re seeing some similar narratives around what it might mean for the public, how it’s going to transform everything, how it’s going to make things more efficient, how it might take away a lot of jobs, all this kind of stuff, but I think it’s very unlikely that any of that happens. And what I am actually watching for instead is: how are these tools going to be used to increase the power of employers and of management once again, and to be used against workers? And I think you can already see how they can be used to increase surveillance of workers, especially in this moment where we have more people working from home.

And so now if you have these generative AI systems, they can do more tracking of your computer and what you’re actually doing at work, but you can also see how they can be deployed in such a way to ensure that companies don’t need as many workers or don’t need as skilled workers as they had in the past.

So you can use an image-generating system or a chatbot to churn out some written words or some images. They might not be perfect, but then instead of hiring a writer or a graphic designer to make those things from scratch, all that you need to do now is get someone on contract, or get someone to do a very short job of doing a bit of editing on what the AI has generated in terms of words or images, so you can cut down that cost and you don’t need to rely on those workers as much.

So I think that these are the things that we should be paying much more attention to — not the ways that AI might transform everything, or how it might lead to artificial general intelligence that might cause a threat to humanity or whatever.I think that the real threat of these systems is what it will mean for our kind of power, and being able to increase the power of the tech billionaires and the companies that control these technologies, but also how they can transform welfare systems and other public services that we rely on to make them much more aggressive toward us, much more discriminatory, and basically make our lives much more difficult to lead.

KH: Well, I am so grateful for this conversation, and I am also really grateful for “Tech Won’t Save Us,” and Paris’s newsletter Disconnect. We’ll have links to both of those in the show notes of this episode, on our website at truthout.org. Paris has really helped shape my analysis by offering some essential critiques of the tech industry, and by introducing me to some really important resources and books.

In fact, I am quite sure that this block of episodes we’re doing — on what activists need to know about longtermism, AI, and the kind of storytelling we need in these times — wouldn’t exist without Paris and their work, so I just wanted to name my gratitude for that.

As we wrap up today, I also wanted to share another quote from Dan McQuillan’s book, Resisting AI. McQuillan writes:

Rather than being an apocalyptic technology, AI is more aptly characterized as a form of supercharged bureaucracy that ramps up everyday cruelties, such as those in our systems of welfare. In general … AI doesn’t lead to a new dystopia ruled over by machines but an intensification of existing misery through speculative tendencies that echo those of finance capital. These tendencies are given a particular cutting edge by the way AI operates with and through race. AI is a form of computation that inherits concepts developed under colonialism and reproduces them as a form of race science.

This is the payload of real AI under the status quo.

McQuillan also explains in his book how AI serves “as a vector for normalizing specific kinds of responses to social instabilities,” which could make it a factor in the ascent of global fascism. As we’ve discussed here, AI supercharges and encases the functionality of systems. In a world where fascism is rising, we cannot ignore the role that these systems could ultimately play.

I know these are heavy issues, and I am grateful to everyone who has been on this journey with us. In our next episode, we will discuss the religiosity of the new space race, and how we can counter narratives that center endless expansion and endless growth, while offering us nothing but a fantasy reboot of colonialism.

For now, I want to remind you all that what we need to remake the world won’t be concocted in Silicon Valley.It will come from us, in the work we do together, building relationships, caring for and protecting one another, and fashioning new ways of living amid crisis.

We are the hope we’re looking for. We just have to find each other, and work together, in order to realize that hope.

I also want to thank our listeners for joining us today, and remember: Our best defense against cynicism is to do good, and to remember that the good we do matters. Until next time, I’ll see you in the streets.

Music by Son Monarcas , David Celeste, HATAMITSUNAMI, Guustavv and Ryan James Carr

Show Notes

You can learn more from Paris Marx by checking out the Tech Won’t Save Us podcast, signing up for the Disconnect newsletter, and checking out their book Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation.

I also recommend checking out Dan McQuillan’s book Resisting AI: An Anti-fascist Approach to Artificial Intelligence.

Referenced:

“Robots Aren’t the Solution to Elder Care” by James Wright
“On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell
“Sam Altman Says Sorry, AI Is Definitely Destroying Jobs” by Noor Al-Sibai
“Statement on AI Risk” (Center for AI Safety)
“300 million jobs could be affected by latest wave of AI, says Goldman Sachs” by Michelle Toh
“ChatGPT loses users for first time, shaking faith in AI revolution” by Gerrit De Vynck
“A new front in the water wars: Your internet use” by Shannon Osaka
“‘They’re afraid their AIs will come for them’: Doug Rushkoff on why tech billionaires are in escape mode” by Edward Helmore
“Phoenix area can’t meet groundwater demands over next century” by Joshua Partlow, Yvonne Wingett Sanchez and Isaac Stanley-Becker
“Green Intelligence: Why Data and AI Must Become More Sustainable” by Bernard Marr
“The Environmental Impact of Data Centres” by Matthew Giannelis
“Generative AI could raise global GDP by 7%” (Goldman Sachs)
