SHADE Newsletter 20th February 2025

Welcome to the thirty-second edition of the SHADE newsletter!

SHADE is a research hub with a mission to explore issues at the intersection of digital technologies/AI, health and the environment. It is guided by a fundamental question: How should the balance between AI/digital-enabled health and planetary health be struck in different areas of the world, and what should be the guiding principles?

The SHADE newsletter comes out every two weeks, taking an in-depth look at selected topics, as well as highlighting new resources, events and opportunities in the SHADE space.

In this edition we highlight Health Impact Attribution, Frontier AI and the AI Action Summit in Paris. We hope you enjoy it!

Please tell us what you like, what you don’t like and what you think is missing at [email protected]  

Health Impact Attribution

Frontier AI

  • A Microsoft study finds that relying on AI can erode critical thinking skills, and this Ada Lovelace Institute policy briefing on advanced AI assistants highlights the risk that people could become emotionally dependent on them, noting the use of these assistants in mental health apps. The briefing also recognises that the rollout of these assistants implies ‘significant increases in energy and water consumption, with corresponding resource and environmental costs’. Meanwhile this paper from Hugging Face argues that fully autonomous AI agents should not be developed, following an analysis showing that ‘the more control a user cedes to an AI agent, the more risks to people arise. Particularly concerning are safety risks, which affect human life and impact further values’. The paper identifies multiple values as embedded in agentic systems, including safety, privacy, equity, humanlikeness and sustainability, and concludes by noting lessons from the nuclear industry about ceding control to autonomous systems. At the same time, AI Now’s statement on the UK AI Safety Institute’s transition to the UK AI Security Institute picks up on use cases for frontier AI in national security, warning that, amid increasing geopolitical ‘AI race’ tensions, ‘an approach that under the banner of security would apply piecemeal or superficial scrutiny that gives these systems a clean chit before they are ready’ is inadvisable. (See below for more on the AI Safety Institute’s name change.)

  • This preprint presents a data-centric approach to improving AI models in biology. Through partnerships in 25 countries with biodiversity hotspots around the world, including with Indigenous groups living in remote locations, a London-based biotech, Basecamp, has amassed the world’s largest ‘ethically sourced foundational biodiversity database’ for training AI. The tool trained on this data, BaseFold, is claimed to be up to six times more accurate than AlphaFold2 at predicting protein structures. ‘By sharing benefits with the stakeholders this data originates from, we present a way of simultaneously improving deep learning models for biology and incentivising protection of our planet’s biodiversity’, say the paper’s authors. Basecamp have signed a collaboration with NVIDIA to work on a generative AI platform for drug discovery and are part of a collaboration that has developed an enzyme prediction tool. As Nature puts it, reporting on the move to ethical bioprospecting more generally, ‘the search for medicines derived from nature has entered a new era’.

AI Action Summit in Paris

  • The Forum for Sustainable AI, hosted by the French Ministry of Ecological Transition, took place on February 11th as a side event at the AI Action Summit in Paris. Find the recording here. Sarah Myers West from the AI Now Institute is 36 minutes in, talking about AI policy making for a sustainable future. She introduces Within Bounds: Limiting AI's environmental impact, a joint statement from civil society, drawn up before the summit, and calls for an end to the ‘growth at all costs’ approach. Seven hours and two minutes in, see Sasha Luccioni on the benefits of addressing all pillars of sustainability (economic, social and environmental) when thinking about where we go with AI.

  • The US and the UK are the two non-signatories to an international agreement at the summit which pledges an “open”, “inclusive” and “ethical” approach to AI development. Despite Downing Street’s denial that the UK abstention was linked to that of the US, many believe it was, and that this change of tack is also evident in the changed name and remit of the UK’s AI Safety Institute. The renamed AI Security Institute (AISI) has made subtle changes to how it describes its work, suggesting it is not only pursuing a security focus but, despite assurances to the contrary, quietly shedding previous commitments to addressing other issues with AI, including those relating to health and the environment. References to the risk of AI creating “unequal outcomes” and “harming individual welfare” have gone, and the AISI’s stated reason for evaluating AI models has changed from “public accountability” to keeping the “public safe and secure”. It doesn’t sound like the AISI will be pushing for more transparency on AI’s environmental impacts.

  • After the summit, AI Now articulated their disappointment, noting that ‘the false urgency of the global AI arms race is poised to eclipse [the public interest] vision [for AI], decentering the public in favor of narrow industry interests’. Highlighting their particular concerns with regard to climate, they said that ‘while sustainability and biodiversity are key framing points for the Summit, too often this is interpreted through the lens of AI as a tool for sustainability, rather than contending with the clear reality that AI itself is straining our climate. Infrastructural development for AI must urgently be brought within the limit of what the planet can sustain—or we will enact damage we can never roll back’.

Resources, Events and Opportunities

  • The AI Energy Score Ratings are now live, offering a clear and standardized benchmark for measuring AI energy consumption.

  • Two new tools for measuring the environmental impacts of AI: This paper in Computer Science proposes a methodology to estimate the environmental impact of a company's AI portfolio. The methodology provides ‘actionable insights without necessitating extensive AI and Life-Cycle Assessment (LCA) expertise’. The authors advocate for ‘the introduction of a "Return on Environment" metric to align AI development with net-zero goals’. Meanwhile this study undertakes an LCA on an AI hardware accelerator and offers the ‘most comprehensive evaluation to date of AI hardware's environmental impact’, including the ‘first publication of manufacturing emissions’ of such an accelerator. It provides detailed descriptions of the LCA to ‘act as a tutorial, road map and inspiration’ for others.

  • Put your French through its paces with these chiffres clés (key figures) from a report on the global footprint of IT (you can also cheat and translate it).

  • Sign up here if you’d like to be kept informed about Sustainable AI Futures, which has just kicked off and will be running till 2028. The project will be looking at how to develop and use laws, guidelines, and tools that align AI with the real needs of people and planet, including addressing the unintended impacts AI may have, both good and bad. 

  • Inside Climate News reported at the end of January on how the Trump administration had already impacted government websites relating to environmental and climate science, and what further impacts could be expected. Amongst other sites, the report raised concerns over how long the Environmental Protection Agency (EPA) website on the health effects of climate change might remain unchanged. Since then there has been a flurry of articles on this subject: ProPublica and Nature report on the human costs of the EPA’s ‘redirection’, both for EPA employees and more widely. The MIT Technology Review reports on the race to archive at-risk US government websites, highlighting the challenges involved and how, even if climate, health and scientific data can be successfully archived such that it remains retrievable, it will gradually lose its potency if it is not kept up to date. Finally, the FT highlights the extraordinarily bad timing of the Trump administration’s initiatives, noting that, as mentions of climate change were being removed from US government websites, it was emerging that this January had been the hottest on record, despite the arrival of cooling La Niña ocean conditions, which had been expected to lower the average global temperature.

  • In this latest episode from the podcast series Mind Matters: Investigating Academia’s Mental Health Crisis, three climate scientists talk about the emotional toll of researching environmental destruction.

  • This month saw the launch of a global observatory on AI’s environmental impact.

  • A critical perspective on AI from illuminem Voices: the limits to growth are wrong and AI proves it. The article notes that unlimited growth will lead to a ‘post human economy’.

  • The deadline for the call for papers for a two-day symposium, ‘Towards sustainable digital futures’, has been extended to February 28th. The symposium itself takes place on the 14th and 15th of May at the University of Sheffield, UK.

  • The British Academy has opened a call for evidence on the principles that might underpin a good digital society. The deadline is March 17th.

And finally, two alternatives to Big Tech’s vision for the future: firstly, ‘just enough’ approaches to AI from Friends of the Earth. This report identifies and unpacks seven ‘principles to help navigate the dilemmas of AI use’. Secondly, this article from Frédéric Bordage in the GreenIO newsletter makes the case for embracing Slow Tech, for the sake of the environment, human health and society.

We hope you have enjoyed this newsletter. If it has been forwarded to you, and you would like to receive future editions, you can subscribe here.