Thursday, March 28, 2024

How our cells cope with oxygen stress: a paradigm of life's fuzzy, distributed control

 

My nephew Andrew, a chemistry postdoc at Oxford, has just published a paper in JACS on developing inhibitors of the protein HIF (hypoxia-inducible factor) 1A. Hurrah for him! And this got me curious enough to delve into what this molecule does. Andrew had told me before that it’s a transcription factor (TF), which naturally led me to guess it has a fair degree of intrinsic disorder – as is indeed the case (see the floppy bits of polypeptide chain here):

 

Why? Because most eukaryotic TFs do, as they tend to operate in conjunction with a host of other molecules such as cofactors and seem to benefit from having a degree of promiscuity in their interactions.

 

That’s just one way in which I suspected a protein like this might exemplify the ways in which our molecular mechanisms operate. And indeed, this turns out to be the case. How HIF1A (sometimes written as HIF-1α) does what it does looks ever more perplexingly, indeed impossibly, complicated the harder you look. But in every respect I found those details confirming the kind of picture I have tried to sketch in my book How Life Works – and I’d hope that the book might help a non-specialist see that there are actually some generic principles operating in a case like this that can bring some sense of order and logic to what otherwise appears utterly confusing. So if you’re ready for the ride, strap in.

 

HIF1A is a member of a family of HIF proteins, in mammals encoded by the genes HIF1A, HIF2A and HIF3A. The proteins enable cells to cope with oxygen-depleted circumstances, in general by activating or inhibiting the expression of certain genes. For example, HIF1A can upregulate expression of vascular endothelial growth factor (VEGF), a key gene involved in angiogenesis (the formation of new blood vessels), so as to encourage new sources of oxygenation. For this reason, HIFs are not merely activated in unusual conditions of oxygen stress but are a crucial part of normal development, and are associated with disorders of blood circulation, such as atherosclerosis, hypertension and aneurysms. The 2019 Nobel Prize in Physiology or Medicine was awarded to William Kaelin, Peter Ratcliffe and Gregg Semenza for their work in discovering the HIF proteins and how they regulate the cell’s response to hypoxia.

 

HIF1A has also become a focus of interest for cancer treatments, because if it can be inhibited specifically in cancer cells, this could enable the tumour to be slowed or even killed by oxygen depletion. That’s what Andrew and his colleagues are working on.

 

The basic mode of action is interesting, but also a major saga in itself. HIF1A is produced even when cells have plenty of oxygen – but oxygen-dependent enzymes (prolyl hydroxylases) then mark it so that the VHL complex sticks ubiquityl groups onto it, labelling it for destruction by the proteasome. Those hydroxylases only work when oxygen is available, and if a lack of oxygen stops them, HIF1A is no longer degraded but is free to do its regulatory work as it accumulates in the cell nucleus. (This bit of the story, like all the others, is actually rather more complicated, as HIF1A degradation is also sensitive to factors other than oxygenation, such as nutrient levels – there is evidently a fair amount of context dependence and integration of various input signals determining HIF stability. What HIF1A does, and indeed how stable it is, is also influenced by having other chemical groups appended to it: phosphorylation, SUMOylation and acetylation.)
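
Stripped of nearly all of that biochemistry, the core logic of this oxygen switch can be caricatured in a few lines of code. The sketch below is purely illustrative – a toy model with invented rates, not an attempt at the real kinetics, which (as noted above) integrate far more inputs than oxygen alone:

```python
# Toy caricature of the HIF1A oxygen switch. All numbers are invented;
# the point is only the logic: constant synthesis, oxygen-dependent degradation.

def hif1a_level(oxygen, nutrients=1.0, steps=200):
    """Return a crude steady-state HIF1A level for a given oxygen signal (0 to 1)."""
    hif = 0.0
    synthesis = 1.0                          # HIF1A is made all the time
    for _ in range(steps):
        # marking for destruction depends on oxygen-requiring hydroxylases;
        # other inputs (lumped here as 'nutrients') also modulate stability
        degradation_rate = 0.9 * oxygen * nutrients
        hif += synthesis - degradation_rate * hif
        hif = max(hif, 0.0)
    return hif

print(hif1a_level(oxygen=1.0))    # normoxia: HIF1A kept low
print(hif1a_level(oxygen=0.05))   # hypoxia: degradation stalls and HIF1A accumulates
```

Even this cartoon makes the point that the “switch” is really a tug-of-war between constant synthesis and conditional destruction – which is partly why so many other signals can tug on it.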

 

In the nucleus, HIF1A dimerizes with another member of the family, HIF1B (or HIF-1β, also known as ARNT; a closely related protein is encoded by the ARNT2 gene) to form a complex that can bind to DNA and regulate genes. The genes it regulates carry regulatory sequences called hypoxia-response elements (HREs) that the HIF1A/1B complex recognizes. These are generally close to the target genes themselves, but not always; some are distal.

 

OK, so far it seems like classic switch-like regulation (albeit fiendishly complicated!). But here’s where things get stranger still. For one thing, there are many more genomic loci carrying the 5-base-pair HRE recognition sequence than there are actual HRE binding sites. In fact, fewer than 1% of the potential HRE sites are bound by HIFs in response to hypoxia. How come HIF1A/1B isn’t sticking to all those others too? No one really knows. But it seems that some of the selectivity depends on sequences flanking the HREs, in a manner as yet unclear. This reminds me of the work I wrote about recently by Polly Fordyce at Stanford and colleagues, who showed that repetitive sequences flanking regulatory sites, previously dismissed as “junk”, might act as a sort of attractive well that accumulates and holds onto the regulatory molecules like TFs, via weak and fairly non-specific interactions that nevertheless somehow cumulatively impart the right selectivity. These so-called short tandem repeats act as a kind of “lobby” where the molecules can hang around so that they are ready when needed. I’ve no idea if anything like that is happening here, but it shows that we should not be too ready to dismiss parts of the genome that seem literally peripheral and “probably” useless. However, it seems likely that factors other than the DNA sequences are also influencing HIF binding to HREs.
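
As an aside on that point about motifs: the commonly cited core of the HRE is the five-base sequence RCGTG (where R stands for A or G), which is so short that it crops up all over the genome by chance. Here is a trivial scan for it – a toy illustration run on an invented snippet of sequence, not a genomics analysis – just to show how weakly the bare motif constrains where real binding can occur:

```python
import re

# Find the 5-bp HRE core consensus RCGTG (R = A or G) in a DNA string.
# On the opposite strand the same core reads CACGY (Y = C or T).
def find_hre_cores(seq):
    seq = seq.upper()
    forward = [m.start() for m in re.finditer(r"(?=[AG]CGTG)", seq)]
    reverse = [m.start() for m in re.finditer(r"(?=CACG[CT])", seq)]
    return forward, reverse

example = "TTACGTGGGACGTGTTCACGTCCAGCGTGAA"   # invented 31-base snippet
fwd, rev = find_hre_cores(example)
print(fwd, rev)   # -> [2, 9, 24] [16]: four core motifs in just 31 bases
```

A real genome contains an enormous number of such matches, which is exactly why the question of what makes fewer than 1% of them functional is so interesting.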

 

What’s more, HIF1A doesn’t do its job alone. Eukaryotic TFs hardly ever do. There is a whole host of other molecules involved in regulating those genes, as evident in this diagram from one article:

 



When I see something like this, I now know not to take it too literally. It may well be that these molecules aren’t getting together in well defined and stoichiometric complexes, but are more probably associating in looser and fuzzier ways – perhaps involving what some call transcriptional hubs or condensates, blobs with liquid-like behaviour that constitute a distinct phase from the rest of the nucleoplasm. I haven’t been able to find any indication that this is what goes on for the HIF proteins, but it wouldn’t surprise me, given how such structures seem to be involved in other regulatory processes. One review of this topic simply says that “HIF1A may stimulate transcription either by means of cooperative DNA binding or cooperative recruitment of coactivators.” (That word “recruitment” is always a giveaway, since obviously the protein is not literally summoning its coactivators from afar – “recruitment” tends to mean “these molecules somehow gather and act together in a way we don’t understand.”)

And get this: “HIF1A has been shown to contribute to transcriptional control independently of its DNA binding activity, working instead in partnership with other DNA binding proteins to affect other cellular pathways.” In other words, there seems to be another (at least one other?) mechanism by which HIF1A does its regulatory work. How is a protein designed to do a job in two totally different ways? The answer is surely that it is modular. But how do these different channels depend on one another, if at all? When does one happen, and when the other? At what level is that decision made?

So in short: what the hell? How can we start to make any sense of this process, beyond the morass of details? Well, here’s the key thing: it seems that this fuzziness and multiplicity of actors enables the regulatory process to be sensitive to higher-level information – so that exactly which genes the HIF complex regulates is tissue- or cell-state-specific. That, after all, is what we’d expect: what’s needed to survive hypoxia will vary between tissues, so the response has to be attuned to that. This is an illustration of why we mustn’t imagine that Crick’s Central Dogma gives any kind of indication of the overall information flow in cells: it is not simply from DNA to RNA to protein (even if that applies to sequence information). What a protein does will be sensitive to higher-level information too.

 

And in fact, even what a protein is is sensitive to that too. We have known about alternative splicing – the creation of different mRNAs, and thus proteins, from the same primary transcript – since the 1970s, of course. But I am not convinced, despite protestations to the contrary, that the implications of that have really filtered through to the public consciousness, not least in terms of how it undermines the notion of a genetically encoded “program”. Contextual information from the surroundings literally changes the output of the Central Dogma. And the HIF family offer a great illustration of this, as you’ll see.

 

The other two alpha subunits of the HIF family, HIF2A and HIF3A, also bind to HIF1B. HIF2A has its own set of target genes. But weirdly, its DNA-binding domain is very similar to that of HIF1A, and so the sequences HIF1A and HIF2A bind are essentially identical. Yet they do target different genes. How on earth does that happen? Well, it seems that for one thing they have differently spliced varieties (isoforms), meaning that their mRNAs are stitched together differently by the spliceosome before being translated into protein. Still, it’s hard to figure out how, or if, this is the key factor in their target specificity. One review says:

Although several studies have attempted to define the isoform-specific transcriptional programs, few common themes have emerged from these investigations, thus highlighting the complex nature of this cellular response. Variables such as cell type, severity, duration and variety of stimulation, the presence of functional VHL, and even culture conditions reportedly influence the transcriptional output mediated by HIF1A versus HIF2A. Furthermore, many of these studies have only examined either HIF1A or HIF2A, and untangling HIF-dependent from HIF-independent hypoxia-induced responses has proved challenging.

Again, what the hell? And again: it’s clear that a whole bunch of higher-level information is involved in determining the outcomes. For example, a part of the cell-type specificity seems to relate to the state that the chromatin is in: how it is packaged.

 

HIF3A, meanwhile, seems a little different from HIF1A and HIF2A, both in sequence and in function. There are several – around six – alternative splicing variants with different regulatory functions. Some of these seem to have a negative regulatory action – for example, one isoform of HIF3A inhibits HIF1A. HIF3A seems to be a classic example of a protein with very tissue-specific alternative splicing: one form, called HIF3A4, for example, is expressed only in the corneal epithelium of the eye and controls vascularization there in response to hypoxia.

 

There’s more. How does HIF binding actually alter gene expression? Well, it’s certainly not in the way the classic regulatory paradigm, the lac operon of E. coli, does it: by simply blocking RNA polymerase from attaching and transcribing the adjacent gene. Or perhaps we should say that yes, ultimately it’s a matter of helping or hindering transcription, but in a manner that is far more complicated. In essence, HIF does this by initiating changes in the way the chromatin in that region is packed – for example, by making the packing denser so that the DNA there is inaccessible for transcription.

 

And this too is subtle. One thing HIF binding does is to trigger enzymes that stick methyl groups onto the histone proteins around which DNA is wound in chromatin. Such changes are known to affect chromatin packing, but the details aren’t well understood. Certainly it’s not as simple as saying that methylation makes the histones bulkier and less well packed; sometimes that process will enhance transcription, and sometimes inhibit it. We don’t know what the “rules” are. But they aren’t, I think, going to be governed by any sort of simple, digital code – not least because they involve issues of three-dimensional molecular structure and solvation, and appear again to have a context dependence.

 

Nor should we imagine that the hypoxia response is merely a matter of the HIF proteins. Several others are involved too. At this point you might want to despair of making any sense of it all. But the point is that this process resembles nothing so much as what goes on in the brain, where information from many sources is integrated and contextualized in the process of generating some appropriate output. That process involves several different scales – it is not simply a matter of this molecule speaking to that one in linear chains of communication. In this case, a more useful framework for thinking about the problem is one that is cognitive and analogue, not mechanical and digital.

 

Oh, there are yet more wrinkles, but I’m going to spare you those. The bottom line is that there are perils in taking the tempting line of explaining how HIF1A works by saying something along the lines of “It is a master regulator that switches genes on or off when the cell gets hypoxic.” That is true in a sense, but risks giving a false impression of understanding. In the end it implies that proteins (or their genes) just “do” things, as if by magic, and so suggests that they are in control. In reality, the way it works bears little resemblance to those pictures of blobby molecules sticking together and working via magical arrows. The information flow is much more omnidirectional; the logic is fuzzy, combinatorial, and in many respects poorly understood, and it only makes sense if we take the system as a whole into account. When we do, it becomes clear that there is no basis for saying that genes like HIF1A dictate the hypoxia response; we can with more justification say that cells “decide” how to use their genetic resources to mount a response that is appropriate to their particular state and circumstances.

 

This is nothing that molecular biologists don’t know (and it is phenomenally impressive that they have got as far as they have). But I believe we need better ways to tell the story, which do justice to the real ingenuity, versatility, and contextuality of life.

Friday, September 15, 2023

The new Climategate that wasn't

 

Climate-change denialists got all excited last week by an alleged revelation that the top science journals are bullying climate scientists into presenting the most alarmist versions of their research, and suppressing anything that doesn’t fit with a “climate catastrophe” narrative. The problem (this story went) has been exposed by a whistleblower named Patrick Brown, formerly an academic scientist who now works for a privately funded environmental research centre.

 

Sounds bad? Is this another Climategate? But it takes very little digging at all before a very different, and extremely strange, story emerges.

 

Let’s start this tale with Matt Ridley. In his column in The Telegraph, he tells us this:

Patrick Brown, the co-director of climate and energy at the Breakthrough Institute in California, has blown the whistle on an open secret about climate science: it’s biased in favour of alarmism. He published a paper in Nature magazine on the effect of climate change on wildfires. In it he told the truth: there was an effect. But not the whole truth: other factors play a big role in fires too. On Maui, the failure of the electric utility to manage vegetation along power lines was a probable cause of the devastating recent fires, but climate change proved a convenient excuse.

 

OK, wait – what? So Brown knowingly suppressed facts relevant to the conclusion his paper reported? Was this some kind of “gotcha” stunt to show that you can get any old nonsense through peer review, even at a major journal? Oh no, not at all. In his blog about the issue, Brown tells us this:

I knew not to try to quantify key aspects other than climate change in my research because it would dilute the story that prestigious journals like Nature and its rival, Science, want to tell. 

This matters because it is critically important for scientists to be published in high-profile journals; in many ways, they are the gatekeepers for career success in academia. And the editors of these journals have made it abundantly clear, both by what they publish and what they reject, that they want climate papers that support certain preapproved narratives—even when those narratives come at the expense of broader knowledge for society. 

To put it bluntly, climate science has become less about understanding the complexities of the world and more about serving as a kind of Cassandra, urgently warning the public about the dangers of climate change. However understandable this instinct may be, it distorts a great deal of climate science research, misinforms the public, and most importantly, makes practical solutions more difficult to achieve. 

Oh, people really don’t know any longer about the myth of Cassandra, do they? Cassandra’s prophecies were true, but she was fated not to be believed. Anyway, Brown goes on to say:

 

I wanted the research to be published in the highest profile venue possible. When I began the research for this paper in 2020, I was a new assistant professor needing to maximize my prospects for a successful career.

 

So he is telling us that he wanted to get a paper in Nature to advance his career and he figured that telling this partial, distorted story was the best way to achieve this aim.

 

Kinda weird, right? And not exactly the exposé story Matt Ridley implied. Rather, Brown seems to be admitting to having committed the unethical practice of keeping certain facts hidden, or simply unexamined, in order to get on in the academic world.

 

Well OK – but can you blame him if that’s the only way to succeed? I mean, it is still weird for him to come out and admit it, but you can understand the motive, at least – right?

 

Except… is he correct about this? His charge – “the editors of these journals have made it abundantly clear, both by what they publish and what they reject, that they want climate papers that support certain preapproved narratives” – is pretty damned serious: the editors of Nature and other top journals are curating the scientific message they put out. You’d imagine Brown would back up that accusation with some solid evidence. But no, it is all assertion. It seems it’s “obvious” that Nature editors and those of other distinguished journals are biased because Brown’s own papers have been previously rejected by said journals. What other reason could there have been for that, people, than editorial bias?

 

Still, Brown does cite one bit of evidence in his favour: he says that some scientists err on the side of using worst-case climate scenarios. “It is standard practice to calculate impacts for scary hypothetical future warming scenarios that strain credibility,” he says. And here he points to an article that (rightly) decries this tendency and calls for more realistic baselines. Yet that article was published in – good lord, who’d have thought it? – Nature, the journal that allegedly always wants you to believe the worst. I’m not sure this is really helping his case.

 

Brown apparently knows for sure that his paper would have been rejected by Nature if he’d included all the complexities, such as considering the other, non-climate-related factors that could have influenced changes in the frequency of forest fires. For example, if the number of fires has increased, perhaps that might be partly due to changes in patterns of vegetation, or of human activities (like more fires getting ignited by humans either deliberately or by accident)?

 

How does he know that the Nature editors would have responded negatively to the inclusion of these caveats, though? The scientific way would, of course, have been to conduct the experiment: to send the fuller paper, including all those nuances, and see what happened. But Brown did not need to do that, it seems; he just knew. We should trust him.

 

Even Ted Nordhaus, director of the Breakthrough Institute in California where Brown now works, has admitted that this counterfactual does not exist. So Brown’s claims are mere hearsay. (Why has Nordhaus weighed in at all, given that he was not involved in the research? I’ll come back to that.) 

 

On his blog Brown says that he omitted those caveats from the study because they would just get in the way of a punchy conclusion that, in his view, would maximize the chances of getting published in Nature:

In my paper, we didn’t bother to study the influence of these other obviously relevant factors. Did I know that including them would make for a more realistic and useful analysis? I did. But I also knew that it would detract from the clean narrative centered on the negative impact of climate change and thus decrease the odds that the paper would pass muster with Nature’s editors and reviewers.

The trouble is, these days Nature posts the referees’ reports and authors’ responses alongside published papers online. And these contradict this narrative. It turns out that one referee highlighted precisely some of the issues that Brown and colleagues omitted. The referee said:

The second aspect that is a concern is the use of wildfire growth as the key variable. As the authors acknowledge there are numerous factors that play a confounding role in wildfire growth that are not directly accounted for in this study (L37-51). Vegetation type (fuel), ignitions (lightning and people), fire management activities ( direct and indirect suppression, prescribed fire, policies such as fire bans and forest closures) and fire load.

 

And Brown responded that his methods of analysis couldn’t handle these other factors:

Accounting for changes in all of these variables and their potential interactions simultaneously is very difficult. This is precisely why we chose to use a methodology that addresses the much cleaner but more narrow question of what the influence of warming alone is on the risk of extreme daily wildfire growth.

 

In a very revealing interview with Brown for the website HeatMap, Robinson Meyer pushed further on this issue. If Brown agreed that these were important considerations, and the referees asked about them, said Meyer, why didn’t he look into them further? Brown says:

I think that, that’s very good that the reviewers brought that up. But like I said before, doing that is, then, it’s not a Nature paper. It’s too diluted in my opinion to be a Nature paper.

This is what I’m trying to highlight, I guess, from the inside as a researcher doing this type of research. Reviewers absolutely will ask for good sensitivity tests, and bringing in caveats, and all that stuff, but it is absolutely your goal as the researcher to navigate the reviews as best you can. The file even gets automatically labeled Rebuttal when you respond to the reviewers. It’s your goal to essentially get the paper over the finish line.

And you don’t just acquiesce to reviewers, because you’d never get anything published. You don’t just say, Oh you’re right, okay, we will go back and do that work for five years and submit elsewhere. The reality of the situation is you have to go forward with your publication and get it published.

On the one hand, this is all honest enough: peer review is something of a game, where referees tend to want to see everything addressed and authors take the view that they’d never be ready to publish if they had to do that, so they generally aim to get away with doing the minimum needed to push things past the reviewers. That’s fair enough.

 

But it is totally at odds with the story Brown is now trying to tell. On these accounts, Brown did not in fact omit the confounding factors because he thought they would complicate the kind of message Nature and its referees would demand. He omitted them because they were too difficult to include in the study. And far from being pleased by an incomplete study that supported the narrative Brown had decided the editors and reviewers would look for, the reviewers – one of them, at least – called for a more complete analysis. It seems then that the reviewer would have been more pleased with the more complete study. Brown is admitting that it was he who tried to push the paper past the finish line in the face of these concerns. 

 

Some climate sceptics have still tried to make this sound like a shortcoming of the journal and the reviewers: ah look, they didn’t push very hard for that extra stuff, did they? But this won’t wash at all. First, the authors were commendably upfront about the limitations of the study – the paper itself says

Our findings, however, must be interpreted narrowly as idealized calculations because temperature is only one of the dozens of important variables that influences wildfire behaviour.

For the referees to pass the paper once it included this word of caution is entirely reasonable. After all, Brown stands by it even now:

You might be wondering at this point if I’m disowning my own paper. I’m not. On the contrary, I think it advances our understanding of climate change’s role in day-to-day wildfire behavior.

 

In short, there is not a problem here, beyond what Brown seems now keen to manufacture. If, as he says, the paper is “less useful than it could have been”, it is clear who is responsible for that.

 

Note by the way that, in response to a Nature news editor (independent from the manuscript handling team) who raised this issue, Nordhaus (again) said: “The reviewer did not raise an issue about ‘vegetation and human ignition pattern changes’. The reviewer raised an issue about holding absolute humidity constant.” As you can see above, this is clearly untrue. Nordhaus is simply referring to a different reviewer – despite surely having all of the reviewers’ reports available to him. I’m going to be charitable and assume he didn’t read them properly. But you will have to forgive me if I suspect an agenda behind Nordhaus’s involvement in the whole affair.

 

Talking of agendas: back to Matt Ridley, who has mentioned none of this in his column. He claims that the episode proves that “Editors at journals such as Nature seem to prefer publishing simplistic, negative news and speculation about climate change.” 

 

Matt’s story suggests that the publication of Brown’s paper has exposed the fact that climate scientists are hiding facts from us that are inconvenient to their narrative about catastrophic climate change.

 

Well, Brown’s paper is hiding from us facts that suggest the problem he looked at might not be as bad as it seems. But is this because he is a climate scientist with the agenda of doing so? No, it is because he knowingly withheld those facts – seemingly, he did not even bother to investigate them, although to be fair that might be because he was unable to. But does the publication of his paper suggest that other scientists were prepared to turn a blind eye to that? No, because one of the reviewers raised the omission as a problem. Does the publication of the paper show that there is indeed a bias in the literature whereby papers that present an unmitigatedly bleak picture of extreme climate change get accepted but those that are more nuanced get rejected? Evidently it shows nothing of the sort. The only “evidence” for that is that Brown says so. Matt has not challenged that assertion, or asked for evidence, but recycles it as fact.

Matt then echoes Brown’s line that “the problem is all solutions [to climate change] are taboo [in the scientific literature].” He says:

If I waved a magic wand and gave the world unlimited clean and cheap energy tomorrow, I expect many climate scientists would be horrified: they would be out of a job. 

It is hard to know what to say about this, other than that it is one of the most absurd things Matt has ever written (yes!). Climate scientists are in fact horrified by what is happening to the climate. So am I. Like them, I would be beside myself with joy if Matt were able to do this. (This is one of the reasons why I value work being done on nuclear fusion, which could ultimately provide a significant, clean source of power, albeit not soon enough to rescue us from the current climate crisis.)

Frankly, for Matt to say this of climate scientists is not just absurd but deeply offensive.

This idea that climate scientists have to play up global warming to protect their jobs is on the one hand risible and on the other hand a standard trope of conspiracy theorists: climate scientists have their self-interest at heart. It is really very peculiar that Matt and others seem to believe that if climate change ended, there would be no more climate. For that, folks, is in fact what climate scientists study. There are so many things left for them to study, so much we don’t know about climate. I imagine some climate scientists dearly wish they could study things other than global warming (and of course lots of them do).

What is ironic to the point of hilarity about the episode is this: Ridley and others are claiming that this is a story about how climate science insists on a simplistic narrative that ignores all nuance, but in order to do that they must create a simplistic story devoid of all nuance. The fact is that the story is deeply, deeply odd. For Brown’s version amounts to something like this:

Climate science is biased and broken and ignores complexities that don’t fit its narrative, creating a misleading picture. Meanwhile, I have published a paper that ignores complexities that don’t fit that conventional narrative and is therefore misleading. But the paper is in fact good and I’m not at all ashamed of it, and its conclusions still apply. But it is also a deliberate partial falsification. I was forced to do this for career advancement, but only because I’d decided that was the case – I didn’t bother to submit the paper I should have written to see if my preconceptions were correct, and in fact I didn’t even try to do the work that would have been required. The fact that Nature published the paper just shows that they only look for the simplistic narrative, even though their peer review process asked me to go into the complexities but I told them that was not possible and they and the referees accepted my explanation on good faith. So shame on Nature for publishing this poor work which is in fact also perfectly respectable and useful work, because I did it, but not as useful as it could have been if I’d done the other things that needed doing but which I didn’t do because I chose not to or couldn’t. And it’s all a scandal!

Sorry, it really doesn’t make any sense, does it? But there you have it.

Monday, September 04, 2023

Should we colonise space? How not to debate that question.

Software engineer, astrophysicist and human spaceflight enthusiast Peter Hague has commented on Twitter about my Guardian “Big Idea” piece assessing the notion of colonising other worlds. I debated whether I should respond, given that Hague’s critique is steeped in the kind of vituperative ad hominem attacks that seem to characterize a lot of the discourse coming from advocates of space colonization (something remarked on by Erika Nesvold, whose excellent book partly inspired my piece). But perhaps a response will serve to illustrate some of the challenges of debating the issues. So here goes.

 

Hague says:

Ball claims there is “a dismaying irrationality in the answers”, and then proceeds to quote mine and cherry pick answers without adequately demonstrating that they are in fact irrational. Or, in fact, being specific about what he means by irrational. It’s actually important, because whether some action is rational or not is entirely contingent on what you are trying to accomplish. Ball’s statement has embedded values, even though he leaves them unstated – perhaps relying on the Guardian audience to share them. In that case, ‘irrational’ just becomes a word that can describe more or less anybody who doesn’t share that worldview.

 

I have not quoted anything by Hague (unless he believes he is Stephen Hawking). I had no idea what Hague might or might not have said on the issue. I’ve simply no idea what he’s talking about there.

 

The irrationality I have in mind is illustrated by what follows, but also by the ad hominem aspects mentioned above. One might imagine, for example, that Hague would start by finding out something about the author of the piece he is attacking, which would have very quickly revealed that I am not a “Guardian writer” (unless every single person who has ever written in the Guardian becomes that by default).

 

Hague quotes me thus:

"The timescales just don’t add up. Climate change either will or won’t become an existential risk well before it’s realistic to imagine a self-sustaining Martian settlement of millions: we’re talking a century or more. Speculating about nuclear war post-2123 is science fiction. So the old environmentalist cliche is right: there is no Planet B, and to suggest otherwise risks lessening the urgency of preserving Planet A. As for the threat of a civilisation-ending meteorite impact: one that big is expected only every several million years, so it’s safe to say there are more urgent worries. The sun going out? Sure, in 5bn years, and if you think there will still be humans then, you don’t understand evolution."

 

He then says:

Ironically here Ball vindicates a point I have made myself. A century probably *is* a timescale for when migration off Earth becomes a significant contributor to resolving pressure on the biosphere. But this means we need to get started now, so that we can get to that point in a century. Doing so means we only need to juggle human and environmental issues for a finite time, and we don't have to just slowly wind down human civilisation.

 

Huh? Is anyone suggesting we must “wind down human civilization”? (Well I guess some might – you can always find someone saying anything. But it is hardly the default position.) Anyway, I don’t follow this “resolving [presumably meaning “relieving”] pressure on the biosphere”. Many forecasts suggest that the human population will peak around 2075-2080, and then stabilize. I don’t see many arguments that off-planet settlement is needed to absorb an excess of humans – but presumably to make a real difference, we’d need to see a billion or so decamp on that kind of timescale. Is that likely to happen? I have to say it seems hard to imagine. At any rate, my point is elsewhere, specifically about the popular idea that an off-world colony would be a back-up for civilization on Earth going off the rails. The threats we currently face can’t credibly be extrapolated to an existential level on the timescale over which a human settlement on Mars (say) might plausibly become entirely self-sufficient. And in any event, the argument seems incoherent. It’s like saying that, because Johnny’s behaviour is wreaking havoc in his neighbourhood, the solution is to send him to the next town, where somehow he’ll stop being so antisocial.

 

Hague adds:

His complacency about asteroids is not shared by those who study them, and the argument about the lifetime of the Sun is not used as an argument for immediate settlement by anybody I know of, and he doesn't attribute it, so we can move on from that.

 

This is what I mean about rationality. Sure, we are right to want to monitor asteroids and meteorites, because a Tunguska-size blast over a major population centre could be devastating. And bigger ones would be terrible indeed. But a blast so great that it poses a truly existential risk to the planet? I give specific figures for that kind of threat – the chance of it happening in the next couple of centuries, say, is minuscule. If you’re kept awake at night by that threat to humankind, you have an impressive capacity for displacement. But does Hague address this? He does not; he simply tries to imply that the issue here is a lack of expertise.
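
For anyone who wants to check that claim, the back-of-the-envelope sum is trivial. Here is a minimal sketch, treating such impacts as a Poisson process and taking the “every several million years” recurrence figure from my piece as an assumed average interval (the exact number makes no difference to the conclusion):

```python
import math

# Rough odds of a civilisation-threatening impact over the next couple of centuries,
# assuming such impacts recur on average once every few million years.
recurrence_years = 5_000_000      # assumed mean recurrence interval
window_years = 200                # "the next couple of centuries"

rate = 1 / recurrence_years
p_at_least_one = 1 - math.exp(-rate * window_years)
print(f"{p_at_least_one:.6f}")    # ~0.000040, i.e. roughly 1 chance in 25,000
```

Which is what I mean by minuscule.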

 

Hague then quotes me:

"For some, the justification for planetary settlement is not existential fear but our innate drive to explore. “The settlement of North America and other continents was a prelude to humanity’s greater challenge: the space frontier,” reads a 1986 document by the Reagan-appointed US National Commission of Space, rather clumsily letting slip who it was and was not speaking for. But at least “Because it would be cool” is an honest answer to the question: “Why go?”"

 

And he replies:

This is a low blow. He is cherry picking a forgotten government document to try and lob a vague accusation of racism around. If he wanted to look seriously at the argument that there are parellels [sic] between the opening of the American frontier and the opening of the space frontier, he might address the work of @robert_zubrin, who has articulated this far better. There is no indication the author has even heard of Zubrin though, which doesn't speak well to his knowledge of the argument he believes he is rebutting.

 

OK, there’s a fair bit to unpack here. First, there’s the question of whether you really want to hear from someone whose argument goes like this:

“Anyone who hasn’t heard of Zubrin is probably not qualified to write on this issue, and I’m going to totally guess that the author hasn’t heard of Zubrin, so there you go.”

 

What’s even more absurd is that, when it was pointed out to Hague on Twitter that in fact I very much know of Zubrin (as he could have discovered without too much trouble), he says in effect “Well that proves my point! – he knew of him but didn’t mention him!” Specifically:

 

“Then it’s especially ridiculous that Ball ignores his advocacy in favour of skimming ancient NASA documents for some hook to launch his fatuous accusation. It’s possible that he has forgotten who Zubrin is, seeing as his interest in the subject is clearly surface level.”

 

Ah, so OK I knew Zubrin but perhaps forgot about him. Sorry, but Christ on a bike.

 

Also, about that “forgotten government document”: someone on Twitter kindly pointed out that it is, on the contrary, a significant text, whereupon Hague says “Sorry for dissing the document! Bear in mind I was 5 when it came out.” So how does this work? Should I be confining myself only to things that were known or published after Hague grew up?

 

Moving on, Hague says:

Now he takes a swing at Gerald O'Neill. Or, more correctly, he takes a swing at Don Davis for his famous illustrations of O'Neill colonies, given that the dismissal of O'Neills entire work seems to be based entirely off aesthetics and lifestyle - a lifestyle, by the way, that although it isn't approved of in the Guardian, migrants literally risk their lives every day for a chance at.

 

Sorry, what? Let’s come back to the point, yes?

 

And my point is that there is a long history of presenting life in space as utopian, in the case of those famous illustrations at the expense of all scientific credibility (just cut out a slice of the American natural ecosystem and plant it in a rotating space colony). I don’t see a response to that here.

 

Then:

At last we get to the meat of the objection:

 

"If you want to know what to expect from colonies established by “billionauts” such as Musk or Jeff Bezos, perhaps ask their employees in Amazon warehouses or the Twitter offices. Many advocates for space settlement are “neoliberal techno-utopians”, says the astrophysicist Erika Nesvold, who sell it on a libertarian ticket as an escape from the pesky regulation of governments. The space industry doesn’t talk much about such things. As Nesvold discovered when she began quizzing commercial space companies in 2016, ethical questions such as human rights or environmental protection in space typically meet with a response of “we’ll worry about that later”. The idea is to get there first."

 

Hague says:

Ball presents Nesvold as an authority, and not an activist, which is what she is - and gives her a platform to basically label "bad" labels on the enterprise.

 

So anyone who has a view different from his (even when articulated carefully, calmly and in a deeply informed way, as in Nesvold’s book) is dealt with not by addressing those arguments but by dismissing said person as a mere “activist”. You see what I mean about longing to see a more rational debate?

 

He says:

It’s not explained why space colonies being libertarian is bad, nor why they would be run like Amazon warehouses. This is just a collection of boo words for the particular audience of this paper.

 

I think Hague is having a lot of trouble distinguishing the piece from the platform in which it appears, with which he clearly has lots of issues. In any case, if a powerful person has an ambition to establish an enterprise, I’d be curious to see how they have run other enterprises in the past. Call me naïve, but I just have a hunch we might learn something from it. Sure, I can’t speak for anyone but myself when I say that I’d not want to be living on Mars under the aegis and whim of a Musk or a Bezos. I just feel governance is an issue some might like to think about.

 

Hague quotes me thus:

"If the notion of a “colonial transporter” gave you a twinge of unease, you’re not alone. Associations of space exploration with colonialism have existed ever since it was first mooted in the 17th century. Some advocates ridicule the comparison: there are surely no indigenous people to witness the arrival of the first crewed spaceships on Mars. But the analogy gets stronger when thinking about how commercial incentives might distort rights afforded to the settlers (Musk has floated the idea of loans to get to Mars City being paid off by work on arrival), or how colonial powers waged proxy wars in far-off lands. And if the argument is that these settlements would exist to save us from catastrophe on Earth, the question of who gets to go becomes more acute. So far it has been the rich and famous."

 

Then he says:

Correctly sensing he may be ridiculed for this argument, Ball tries to preempt this but then continues to make equally ridiculous arguments, simply because the word 'colonialism' is bad, and anybody using it must be planning to become the next East India Company. Reasoning by analogy is not valid.

 

I’m curious to know what is “ridiculous” here, but there’s no indication, so it is hard to know what to say. Personally, I think history has things we can learn from, so it is worth heeding it. I think that’s probably quite a common view among historians.

 

Hague goes on to quote me:

"Perhaps the most pernicious aspect of the “Columbus” comparison, however, is that it encourages us to believe that space is just another ocean to sail, with the lure of virgin lands to draw us. But other worlds are not the New World; space is harsh beyond any earthly comparison, and it will be constantly trying to kill you. Quite aside from the cold and airlessness, the biggest danger is the radiation: streams of charged, high-energy particles, from which we are shielded by the Earth’s magnetic field. Currently, a crewed mission to Mars would be prohibited by the permitted radiation limits for astronauts. We don’t have any solutions to that problem."

 

He says:

In the single point where he makes any kind of technical argument, Ball immediately fumbles. It is not, primarily, the Earth's magnetic field that shields us from cosmic rays,

 

Well you know what, I think I’ll go with what NASA says here, as they actually send people into space.

 

…and they are not as lethal as he believes. If they were, every geomagnetic reversal would be a mass extinction event.

 

The possibility of mass extinctions associated with geomagnetic reversals has in fact long been discussed – some palaeoscientists have suggested it could happen. But it has been hard to assess, not least because it is not clear to what extent the geomagnetic field really does drop to nearly zero during a reversal. Some studies suggest that, while the field rearranges, it remains substantial enough to provide a fair degree of shielding. NASA again: “During a pole reversal, the magnetic field weakens, but it doesn’t completely disappear. The magnetosphere, together with Earth’s atmosphere, still continue to protect our planet from cosmic rays and charged solar particles, though there may be a small amount of particulate radiation that makes it down to Earth’s surface.” During the latest reversal 780,000 years ago, the magnetopause may still have existed a considerable distance from the Earth’s surface. It’s also been suggested that the solar wind could itself induce magnetic shielding from cosmic rays in the absence of a geomagnetic field.

 

Humanity has, in fact, survived many of them.

 

The last known geomagnetic reversal was the one 780,000 years ago. The earliest known Homo sapiens fossils are around 315,000 years old. But whatever.

 

What does protect us is the thick atmosphere of this planet, and in that we see not only is the solution known it is blindingly obvious - mass. A few metres of rock on a Martian habitat will block the radiation, as will to some extent the atmosphere of the planet.

 

Yes, there is talk of building permanent settlements inside caves on Mars, or in empty lava tubes on the Moon. It’s a good sci-fi scenario: underground cities that never see the light. I’m not envisaging that those stories would be very rosy ones, but we can make up whatever we like, I guess.

 

As for NASAs limits - he does not cite a source so its hard to tell where he is getting this from,

 

Maybe he should read Erika’s book instead of just criticizing it.

 

but its contingent on travel time, shielding, and risk tolerance. The danger is not of some horrific case of radiation poisoning - its a small increase in the lifetime risk of getting cancer. Despite sounding scary, radiation is not really the top technical hurdle.

 

Again, I think I’ll go with NASA on this: radiation is absolutely regarded as one of the major health risks facing astronauts.

 

I don’t want to be uncharitable, but it does rather seem as if Hague is just making confident-sounding sciencey assertions that are out of touch with the facts, and assuming he’ll sound more authoritative than a “Guardian writer”. I do think there’s an interesting discussion to be had around, and responses to be made to, the points raised in my piece. But I’m afraid it’s not to be found here.

 

Sunday, August 07, 2022

The Spectator's review of The Book of Minds: a response

 

There is a review of The Book of Minds in The Spectator by philosopher Jane O’Grady. I have some thoughts about it.

First, it is always nice to have a review that engages with the book rather than just describes it. And O’Grady says some nice things about it. So I’m not unhappy with the review. 

But it does, I must say, seem to me a little odd, and occasionally wrong or misleading.

Odd primarily because it talks about so little of the book itself; it is more an exegesis of the reviewer’s own thoughts. The review focuses almost entirely on the question of definitions of mind and what these imply for putative “machine minds”. There is barely any mention of the substance of the book: the account of how to regard the human mind, the discussion of the minds of animals and other living things, thoughts on alien minds, and a chapter on free will. I suspect the reader of the review would struggle to get any real sense of what the book is about.

In terms of what the review does cover, there are some misrepresentations both of what the book says and of thinking in the respective fields.

O’Grady says that in defining a mind thus – “For an entity to have a mind, there must be something it is like to be that entity” – I am reprising philosopher Thomas Nagel, essentially implying that I am using Nagel’s definition of mind. But I am not. Nagel did not define mind this way, and I never suggest he did. So the suggestion that I have somehow misunderstood Nagel in this respect is way off beam. 

Besides, I suggest my definition as a basis to work with and nothing more. I state explicitly that it is neither scientifically nor philosophically rigorous – because no definition of mind is. One can propose other definitions with equal justification. But the key point of the book is that thinking about a space of possible minds obviates any gatekeeping: we do not need to obsess or argue about whether something has a mind (by some definition) or not (although we can reasonably suppose that some things do (us) and some don’t (a screwdriver)). Rather, we can ask about the qualities that then seem to define mind: does this entity have some of them, and to what degree? We can find a place for machines and organisms of all sorts in this space, even if we decide that their degree of mindedness is infinitesimally small. In other words, we avoid the kind of philosophical tendentiousness on display in this review.

O’Grady writes: “To use quiddity of consciousness as a criterion of mindedness, as Ball does, excludes machines at the outset.” 

This is simply wrong. My working definition only excludes today’s machines, which is consistent with what most people who design and build and theorize about those machines think. I do not exclude the possibility of conscious machines, but I explain why they will not simply arise by making today’s AI more powerful along the same lines. It will require something else, not just a faster deep-learning algorithm trained on more data. That is the general view today, and it is important to make it clear. To make a conscious machine – a genuine “machine mind” in my view – is a tremendous challenge, and we barely know yet how to begin it. But it would be foolish, given the present state of knowledge, to exclude the possibility, and I do not. 

Of course, one could adopt another definition of “mind” that will encompass today’s computers too (and presumably then also smartphones and other devices). That’s fine, except that I don’t think most AI researchers or computer scientists would regard it as advisable. 

O’Grady writes: “Nor are ‘internal models of the world’ – another ‘feature of mind’ Ball suggests – open to outside observation.”

But they are. That is precisely what some of the careful work on animal cognition is aiming to do: to go beyond mere observation of responses by figuring out what kind of reasoning the animal is using. It is difficult work, and it is hard to be sure we have made the right deductions. But it seems to be possible.

She asks: “And how could any method at all be used to discern if matter is suffused with mind (panpsychism)?”

Indeed – that would be very hard to prove, and I’m not sure how one could do it. I don’t rule out that some ingenious method could be devised to test the idea, but it’s not obvious to me what that might be, and it is one of the shortcomings of the hypothesis: it is not obviously testable or falsifiable. This does not mean it is wrong, as I say.

She asks: “But is the mind, rather than being any sort of entity, nothing other than what it does (functionalists’ solution)?”

Well, that’s a possible view. Is it O’Grady’s? I simply can’t tell – in that paragraph, I can’t figure out if she is talking about the positions I espouse (and which she quotes), or challenging them. Can you? At any rate, I mention the functionalist position as one among others.

O’Grady writes: “He misunderstands the Turing Test. ‘Thinking’ and ‘intelligence’ in Turing’s usage (which is now everyone’s) are not mere faute-de-mieux substitutes but the real thing. The boundaries of mind have (exactly as Ball urges) been extended, so that mind-terms which once needed to be used as metaphors, or placed in inverted commas, are treated as literal.” 

This is untrue. We have no agreed definition of “thinking” or “intelligence”. Many in AI question whether “artificial intelligence” is really a good term for the field at all. What Turing meant by these terms has been debated extensively, and still is. But you’ll have to search hard to find anyone knowledgeable about AI today who thinks that today’s algorithms can be said to “think” in the same sense (let alone in the same way) as we “think”, or to be “intelligent” in the same way as we are “intelligent”.

O’Grady writes: “Minds are themselves declared to be kinds of computer.” Yes, and as I point out in the book, that view has also been strongly criticized. 

She concludes that “Ball gives us an enjoyable ride through different perspectives on the mind but seems unaware of how jarringly incommensurate these are, nor that, by enlarging the parameters of mind, we have simultaneously shrunk them.”

I simply don’t understand what she is trying to say here. I discuss different perspectives on some issues – biopsychism, say, or consciousness – and try to indicate their strengths and weaknesses. I’ve truly no idea what O’Grady intends by these “jarringly incommensurate” differences. I explain that there are differences between many of these views. I’m totally in the dark about what point is being made here, and I suspect the reader will be too. As for “by enlarging the parameters of mind, we have simultaneously shrunk them” – well, do you catch the meaning of that? I’m afraid I don’t.

The basic problem, it seems to me, is that O’Grady has definite views on what minds are, and what machine minds can be, and my book does not seem to her to reflect those – or rather, she cannot find them explicitly stated in the book (although in all honesty I’m still unclear what O’Grady does think in this regard). And therein lies the danger – for she seems to be presenting her view as the correct one, even though a myriad of other views exist. Of course, I anticipated this potential problem, because the philosophy of mind can be very dogmatic even though (or perhaps precisely because) it enjoys no consensus view. What I have attempted to do in my book is to lay out some of the range of thinking in this area, and to assess strengths and weaknesses as well as to be frank about what we don’t know or agree about. To do so is inevitably to invite disagreement from anyone who thinks we already have the answers. Yet again I think this illustrates the pitfalls of books written by specialists on topics that are still very much work in progress (and both the science and the philosophy of mind are surely that). There is no shortage of books claiming to “explain” the mind, and many have very interesting things to say. But we don’t know which of them, if any, is correct, or even on the way to being correct. What I have attempted to do instead is to suggest a framework for thinking about minds, and moreover one that does not need to be too dogmatic about what a mind is or where it might be found. I hope readers will read it with that perspective in mind.