Cut the 'AI' bullshyt, UCPH

bnew

Veteran
Joined
Nov 1, 2015
Messages
67,197
Reputation
10,392
Daps
181,593
https://uniavisen.dk/en/cut-the-ai-bullshyt-ucph/

12 August, 2024 — 10:13

Opinion

Cut the 'AI' bullshyt, UCPH​


by: Denise Utochkin

Postdoctoral researcher, Department of Public Health



image: Mathilde Meile/Uniavisen

Illusion — Why do we keep believing that AI will solve the climate crisis (which it is facilitating), get rid of poverty (on which it is heavily relying), and unleash the full potential of human creativity (which it is undermining)?

OPINION ON THE UNIVERSITY POST

This is a featured comment/opinion piece. It expresses the author’s own opinion.

We encourage everyone to read the whole piece before commenting on social media, so that we only get constructive contributions.

Disagreement is good, but remember to uphold a civil and respectful tone.

Before the summer holidays, the University Post posted a call to the University of Copenhagen to more tightly integrate generative AI in teaching and the »university’s daily life«. The well-intentioned advice came from the researchers behind a study of students’ AI usage, which showed that the majority of students at SAMF do not use large language models. Thus, it was understood, the university is ostensibly failing to prepare the students for the »labour market of the real world«.

READ ALSO: New study: Students need more teaching in artificial intelligence

The underlying premise in the article is that because AI seems to be everywhere these days, it should be embraced by educators and students, who must »realize« that they should »exploit the potential« rather than »obstruct the trend«.

In response to this, I would like to offer a few reflections on the real and the imagined, the present and the future, and the responsibility of the university and of the students.

A different »real world« than today​


The aspiration to prepare students for the real world is commendable, indeed vital. But when encountering such calls, we should be mindful of what world exactly we are being invited to consider real.

Again and again, a future is being substituted for the present

The »real world« of which the article speaks is notably not of today, but of tomorrow. The article talks about »the reality of society and the labour market that [the students] will meet« even though the students currently enrolled in a study programme are, by definition, going to enter a different labour market from the one of today.

The article then goes on to predict that future academics will have no choice but to use AI extensively. Again and again, a future is being substituted for the present, someone’s imagination for reality.

Illusion of inevitability​


This substitution of future for present tense manufactures consent in two ways.

First, it removes the burden of proof from anyone who makes a statement about the benefits of AI, no matter how vague and grandiose. The AI revolution is upon us, we are being told. Oh, you haven’t already found yourself in a more just and enjoyable world? But of course, that’s because we’re in the midst of a transition, and so you may have to wait just a little longer to really feel its impact!

READ ALSO: The future is now: UCPH softens up on AI rules

Second, this blurring of lines creates an illusion of inevitability. We may lack evidence that AI is a force for good – but that doesn’t matter, because opposing its ever-broader deployment is wasted effort.

This preemptively brushes aside any possible criticism of the technology: whether or not this is a future we want, the argument goes, it is the future we are going to get. We are being led to believe that interrogating whether this is a desirable (or, indeed, plausible) future is futile and even counterproductive.

AI pollutes, divides and exploits​

Back in 2020, Microsoft committed to becoming carbon negative by 2030. Since then, the company has instead increased its emissions by 30 percent, largely due to new data centers used to run generative AI models (with other companies and governments, including the Danish government, following suit). However, we needn’t worry because, as Bill Gates baselessly claimed in a recent interview: »AI will pay for itself«.

The algorithms are trained by stealing creative and scholarly work

We are asked to ignore the fact that the push for mass adoption of AI is fueled by immense harm to the environment. That the hardware on which AI runs relies on extraction of conflict minerals by miners trapped in modern-day slavery. That the algorithms are trained by stealing creative and scholarly work and by exploiting a vast global underclass of ghost workers tasked with helping finetune these models under unfair, often traumatizing conditions.

And what are we offered in return? A technology with such dubious utility and low reliability that its outputs are referred to as »soft bullshyt« by academics, and attempts to shoehorn it into much simpler and lower-stakes contexts than education (for example, ordering a burger) are being abandoned after massive failures. Even Wall Street firms are growing tired of unsubstantiated claims that AI is cost-effective or even just meaningfully useful.

Yet time and time again we are asked to ignore all these present harms and misfires, because the future in which AI has solved the climate crisis (which it is facilitating), done away with poverty (on which it is heavily relying) and unleashed the full potential of human creativity (which it is undermining) is inevitable and ever so close.

UCPH should do something completely different​


The university has an obligation to interrogate the proposition that a world in which AI is widely used is desirable or inevitable. We don’t need to cheer for a vision of tomorrow in which scientists feel comfortable with not personally reading the articles their peers have written and students are not expected to gain insight through wrestling with complex concepts: a world in which creative and knowledge work is delegated to a mindless algorithm.

READ ALSO: Is AI a good study buddy? We asked students

The real world is what we make it. It is our responsibility as educators to make sure our students remember this and actively participate in deciding how to best shape a common future.

Is the future we want one where we’re all drowning in ChatGPT’s soft bullshyt?

As Richard Shaull writes in the foreword to Paulo Freire’s Pedagogy of the Oppressed: »There is no such thing as a neutral educational process. Education either functions as an instrument that is used to facilitate the integration of the younger generation into the logic of the present system and bring about conformity to it, or it becomes ‘the practice of freedom’, the means by which [people] deal critically and creatively with reality and discover how to participate in the transformation of their world.«

By insisting that the future is predetermined and that the best we can do is accept whatever next product is pitched to us by the company with the highest market cap, the university is betraying its responsibility to enable its students to perceive themselves as subjects capable of affecting the world and thinking of it critically.

To avoid falling into this trap, the university and the students should be asking this: Is the future we want one where we’re all drowning in ChatGPT’s soft bullshyt? Or does our imagination allow for any different ‘real worlds’?
 


Google President Praised MAGA Speech Slamming ‘Climate Extremist Agenda’​


Interior Secretary Doug Burgum told an AI conference that data centers should be powered by coal, gas, and nuclear. Ruth Porat said his “comments were fantastic.”



By Geoff Dembicki
on Aug 19, 2025 @ 06:01 PDT


Series: Tech vs Climate, MAGA

Credit: DeSmog

This article is being co-published with The Lever, an investigative newsroom.

At a recent artificial intelligence conference in Washington, D.C., Google’s president cheered on Trump’s interior secretary after he slammed Silicon Valley’s support of the so-called “climate extremist agenda” and pushed to expand the use of “incredibly clean” coal plants and other fossil fuels to power data centers, according to a previously unreported recording.

Following the speech by Interior Secretary Doug Burgum, Ruth Porat, president and chief investment officer of Google and Alphabet, told conference attendees that “I thought Secretary Burgum’s comments were fantastic… because I think it is very clear that to realize the potential of AI, you have to have the power to deliver it. And we have underinvested in this country, and to stay ahead, we need to actually address it head-on.”

Porat was speaking on a panel about how AI is “rewriting America’s future,” alongside Big Tech leaders including venture capitalist Delian Asparouhov and Kevin Weil, the chief product officer for OpenAI, maker of ChatGPT. During the panel, Porat also discussed a Google white paper advocating for U.S. investments in natural gas and nuclear to power the industry’s energy-hungry data centers.

Porat’s remarks, captured in an April video of the influential 2025 Hill & Valley Forum, suggest Big Tech now is prioritizing fossil fuels for data centers over its climate commitments.

Google and other major tech companies as recently as a few years ago led the corporate world in acknowledging the seriousness of the climate emergency and proposing concrete actions to limit Silicon Valley’s carbon emissions. Porat’s company has for years positioned itself as a climate leader in the tech industry. Among its many promises? An ambitious 2020 pledge to power all its operations with carbon-free energy by 2030.

Yet Porat’s comments at the Hill & Valley Forum, and her subsequent praise in July for the Trump administration’s “energy abundance” agenda — which supports oil, gas, and coal while severely penalizing renewables such as wind and solar — signal that, at a time when climate action is under serious threat from Republicans, the country’s largest tech companies are wavering in their support for the cheapest, cleanest, and lowest-carbon energy sources.

That’s reflected in Google’s carbon emissions, which soared nearly 50 percent between 2019 and 2024, according to a company environmental report. An independent study from the NewClimate Institute, a German nonprofit, warned in August of a “crisis” for the tech giant’s ability to meet its climate targets, stating that “data centre expansion and higher artificial intelligence (AI) usage have rapidly increased Google’s electricity demand and absolute [greenhouse gas] emissions.”

Google didn’t respond to a media request about Porat’s comments.

“Climate Extremist Agenda”​


Founded in 2021, the Hill & Valley Forum is an organization that brings together prominent tech executives and venture capitalists with federal policymakers. This year’s event, which took place in late April, featured the likes of Palantir CEO Alex Karp and billionaire venture capitalist Vinod Khosla, alongside politicians including Republican House Speaker Mike Johnson.

The opening remarks were delivered by Burgum, a former North Dakota governor with close ties to the fossil fuel industry. As interior secretary, Burgum oversees management and conservation of federal land. Previous reporting showed that in 2024, months prior to being nominated by Trump for the position, Burgum hosted a private dinner for oil, gas, and coal executives.

Burgum, a Republican, used his speech to criticize Silicon Valley for having supported “the climate extremist agenda,” which he defined as the idea that “a degree of temperature change in the year 2100 is the thing that we should drive every policy in America.” Burgum added: “I’ve always been a little offended by that.”

Echoing common climate-denier talking points about the inability of climate models to predict future temperature rise, Burgum questioned “how a group could take a spreadsheet and extrapolate [climate] data for 90 years, 80 years, now 75 years and say ‘this is absolutely what’s going to happen.’”

He then positioned coal as an energy source that can power Big Tech’s data centers. “Any coal plant running in America today is incredibly clean,” he claimed without evidence.

U.S. power plant pollution is at its highest levels in three years due to a recent surge in generation from coal.

Burgum concluded by stating that accelerating production of American oil, gas, coal, and potentially some nuclear would be key to realizing Silicon Valley’s AI agenda.

“That’s the Trump plan, and that’s what we’re doing right now,” he said.

Google Leader On Burgum’s Vision for AI: “Fantastic”​


Porat, the Google president, expressed no qualms with Burgum’s speech when she was asked about it on a panel later that day, instead stating that his “comments were fantastic.” Porat then elaborated that Google and the Trump administration were in agreement about needing to scale up nuclear production and modernize the electrical grid.

Five years ago, Google CEO Sundar Pichai warned that “we have until 2030 to chart a sustainable course for our planet or face the worst consequences of climate change.” He outlined a plan to power the company’s data centers by doing “things like pairing wind and solar power sources together, and increasing our use of battery storage.”



But at the Hill & Valley Forum, Porat outlined an energy agenda much more favorable to fossil fuels. During the panel, she touted a recent Google white paper that didn’t once mention wind or solar, even though they generally remain the cheapest form of power generation worldwide. The document instead called for federal investment in “affordable, reliable, and secure energy technologies, including geothermal, advanced nuclear, and natural gas generation with carbon capture (among other sources).”

Others at the conference voiced direct skepticism of renewable energy, including David Friedberg, co-host of the popular pro-Trump tech podcast All-In. “To scale up energy, it’s not about solar, it’s not about wind, those might have been nice from a narrative perspective, but scalable energy production requires these next-gen systems and we have to unlock that,” he claimed during a panel about reindustrializing America.

In reality, last year, nearly 93 percent of new power additions worldwide came from renewable sources.

Trump’s AI Action Plan​


When the Trump administration unveiled its AI Action Plan in Washington, D.C., in late July, the event was presented in the form of a live podcast hosted by Friedberg and his other All-In co-hosts, as well as the founders of Hill & Valley.

“We need to build and maintain vast AI infrastructure and the energy to power it,” the plan reads. “To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day.”

The plan claims that it will ensure free speech in AI systems by eliminating “references to misinformation, Diversity, Equity, and Inclusion, and climate change.” It further restricts federal spending to developers of AI models, such as ChatGPT or Elon Musk’s Grok, “who ensure that their systems are objective and free from top-down ideological bias.”

Some climate groups were quick to condemn the proposal. “This U.S. AI Action Plan doesn’t just open the door for Big Tech and Big Oil to team up, it unhinges and removes any and all doors,” KD Chavez, executive director of the national advocacy group Climate Justice Alliance, said in a statement.

But if Google has any concerns about the anti-climate AI policies being pursued by the White House, the company isn’t showing it. At a mid-July AI event in Pennsylvania, Porat heaped more praise on the Trump administration.

“Mr. President, thank you for your leadership and for your clear and urgent direction that our nation invest in AI infrastructure, technology and the energy to unlock its benefits so that America can continue to lead,” she said.
 