Fickle Intuitions and the Case for Humility

Joachim Nicolodi

April 18, 2025

Philosophical Intuitions, Thought Experiments, Methodological Humility

Back when I was doing research in a cognitive neuroscience lab, I got to play with all kinds of cool technical tools. For some reason, it was the out-of-place, out-of-his-depth philosophy grad who turned out to be surprisingly good at handling the brain-imaging equipment, especially the delicate task of fitting the cap with optodes onto participants’ skulls. (Optodes are sensors that measure the levels of oxygenated and deoxygenated haemoglobin in the blood – a neat way of detecting areas of increased neural activity.) I was psyched! Finally, I had some irrefutable (“hard”) evidence to back up my arguments, and it was provided by a tool with a sciencey, irrefutable-evidence-producing-sounding name: fNIRS (functional near-infrared spectroscopy).

Now, back in the comfy philosopher’s armchair, my toolkit looks very different. One of the main instruments at my disposal is what Daniel Dennett famously called “intuition pumps” – thought experiments designed to spark a gut reaction, that unmistakable “Well, of course!” or “Surely not!” (Dennett, 2013). A personal favourite of mine is Judith Jarvis Thomson’s “people-seeds” thought experiment, which she uses to defend the permissibility of abortion. One might claim that if a woman engages in intercourse knowing contraception isn’t foolproof, she bears some responsibility for the resulting fetus. But does that make her morally obligated to carry it to term? If we open a window in a stuffy room and a burglar climbs in, we are partly responsible – but that doesn’t give the burglar a right to stay. The same seems to hold even for innocent people, and even more so when precautions were taken, as the following thought experiment is designed to show:

People-seeds drift about in the air like pollen, and if you open your windows, one may drift in and take root in your carpets and upholstery. You don’t want children, so you fix up your windows with fine mesh screens, the very best you can buy. As can happen, however, and on very, very rare occasions does happen, one of the screens is defective and a seed drifts in and takes root. Does the person-plant who now develops have a right to the use of your house? Surely not – despite the fact that you voluntarily opened your windows, you knowingly kept carpets and upholstered furniture, and you knew that screens were sometimes defective (Thomson, 1971, p. 59).

These kinds of thought experiments are especially common in ethics, but even in my own philosophical backyard – the philosophy of mind – they’re everywhere. Just say “zombies” to a philosopher and watch the debate unfold (Chalmers, 1997). The point is: in philosophy, intuitions aren’t just decorative. They’re foundational. They often form the bedrock of the more formal, structured arguments we end up building.

Take another classic example, this time from epistemology: Gettier’s challenge to the justified true belief (JTB) theory of knowledge (Gettier, 1963). Imagine a Gettier-style case, modelled on Russell’s famous stopped clock: Smith goes to the station, sees a clock that reads 19:15, and forms the justified belief that it’s 19:15. After all, clocks are usually reliable. Twist: the clock is broken. Double twist: it actually is 19:15. What are the odds? Smith had a belief that was both justified and true – but surely, he didn’t know that it was 19:15, right? Here’s the argument laid out more formally:

  1. Smith has a non-knowledgeable justified true belief.
  2. If anyone has a non-knowledgeable justified true belief, then the JTB theory of knowledge is false.
  3. Therefore, the JTB theory of knowledge is false.

How do we arrive at (1)? Well, intuition. That ineffable something that tells us how things stand. Something so basic, immediate, and clear that it doesn’t seem to need further justification or argument.
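
For what it’s worth, the step from (1) and (2) to (3) is logically trivial – a single modus ponens – which is exactly why the intuition carries all the weight. As a minimal sketch, here is the argument checked in Lean 4 (the names gettierCase and jtbTheory are placeholders of my own choosing, not standard definitions):

  -- A sketch of Gettier's argument in Lean 4. Here gettierCase stands for
  -- "Smith has a non-knowledgeable justified true belief" and jtbTheory for
  -- "the JTB theory of knowledge is true"; both are bare placeholder
  -- propositions, not genuine analyses of the concepts.
  example (gettierCase jtbTheory : Prop)
      (h1 : gettierCase)                -- premise (1): supplied by intuition alone
      (h2 : gettierCase → ¬jtbTheory)   -- premise (2): such a case would refute JTB
      : ¬jtbTheory :=                   -- conclusion (3)
    h2 h1                               -- modus ponens

The proof assistant accepts the inference without complaint; what it cannot supply is premise (1) itself. All the philosophical action – and all the fragility – sits in that intuition.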

But wait. One thing that neither scientists nor philosophers are particularly good at is stepping back now and then to ask whether the tools they’re using are actually appropriate for the job at hand. Is an fMRI scanner – loud, sterile, and deeply unnerving – really the best way to study something as embodied and performative as acting? (Yes, this was an actual study: actors had to lie perfectly still so as not to disrupt the scan, and simply imagine delivering their lines. Such studies cost tens of thousands of pounds.) Doubts notwithstanding, it’s easy to see why this kind of methodological introspection doesn’t happen all that often. If scientists constantly questioned their basic tools, progress would grind to a halt. A more cynical take is that, once you’ve built an entire career around a certain technique or approach, you’re not exactly thrilled when someone comes along and starts poking holes in it.

If you’re a philosopher reading this, don’t feel too smug just yet. The intuitions you build your arguments on aren’t exactly bulletproof either. In fact, they’re highly malleable, swayed by a whole range of external factors irrelevant to the question at hand. One of the earliest studies to show this was by Petrinovich and O’Neill (1996), who found that simply rewording the trolley problem (“save” wording vs. “kill” wording) significantly affected whether participants thought it was acceptable to sacrifice one person to save five. And it’s not just the phrasing – culture matters too. Gold et al. (2014) showed that Chinese participants were significantly less likely than British ones to choose the utilitarian option. And, again, a personal favourite: Schnall et al. (2008) found that feelings of disgust – induced, for example, by leaving a full bag of garbage in the testing room – reliably led to harsher moral judgements across a range of scenarios. And this malleability isn’t just an ethics problem. Similar findings have emerged in epistemology too (Swain et al., 2008).

Now, I’m not saying we should abandon intuitions. It’s hard to imagine what philosophy would even look like without them. (In fact, I used intuitions myself when questioning the use of fMRI to study acting! They really are everywhere.) My point is just that intuitions are fickle – and we philosophers would do well to keep that in mind as we craft our thought experiments about people-seeds, zombies, and trolleys. Intuitions are a tool – and like any other tool, neuroimaging methods included, they’re imperfect. That should give us pause. It should humble us, maybe even prompt us to dial down our confidence in some of our most cherished arguments or conclusions. After all, we – and our intuitions – might one day be proven wrong. And this is especially worth remembering in what is arguably the most heated debate in philosophy today: the one surrounding AI.

In the case of AI, our intuitions are bound to be especially volatile, given that new models with new capabilities are being released practically every month. Our judgements will also vary with how much time we spend interacting with these systems, creating a wide spread of intuitions about autonomy, agency, rationality, and the like. A third factor is generational: intuitions about AI will shift over time, and children growing up with artificial agents will likely have very different gut reactions than do those of us who met ChatGPT in adulthood. A study by Druga et al. (2017) showed that children who grow up using voice assistants like Alexa and Google Assistant often attribute feelings, thoughts, or at least social agency to these systems – even when explicitly told they don’t have them. Sure, we can tell them they’re anthropomorphising (however you’d explain that to a child), but those impressions are likely to be deeply ingrained. So how will they respond to intuition pumps about, say, the ethical status of AI when they’re the ones running the show?

This is something we should all try to keep in mind when engaging in debates about AI. As someone who’s recently spent a lot of time at conferences and discussion groups on the topic, I’ve noticed that these debates can get unusually heated – and often, ad hominem. Far more so than in other academic settings (a possible exception is consciousness science – you can imagine what conferences on AI consciousness look like). To give you one example: I recently witnessed a fellow researcher describe AI as “demonic”, and anyone engaging with it as complicit in a betrayal of near-Luciferian proportions. Just by using ChatGPT, I started to feel guilty – like an intern at Cyberdyne Systems, about to launch Skynet. Yes, the language was strong – but also, to some extent, understandable. To say that the stakes are high is a massive understatement. We’re grappling with questions about discrimination, job displacement, and perhaps even existential risk. And we need to get them right. But in doing so, we should also stay aware that we might get them wrong. The conversation would be healthier – and ultimately more productive – if we recognised that the tools we use to form our views aren’t divinely sanctioned. They’re fallible. And if someone else’s intuitions don’t match our own, that doesn’t make them deluded, or demonic.

Bibliography

  • Chalmers, D. J. (1997). The conscious mind: In search of a fundamental theory. Oxford University Press.
  • Dennett, D. C. (2013). Intuition pumps and other tools for thinking. Allen Lane.
  • Druga, S., Williams, R., Breazeal, C., & Resnick, M. (2017). ‘Hey Google is it OK if I eat you?’: Initial Explorations in Child-Agent Interaction. Proceedings of the 2017 Conference on Interaction Design and Children, 595–600. https://doi.org/10.1145/3078072.3084330
  • Gettier, E. L. (1963). Is Justified True Belief Knowledge? Analysis, 23(6), 121–123. https://doi.org/10.1093/analys/23.6.121
  • Gold, N., Colman, A. M., & Pulford, B. D. (2014). Cultural differences in responses to real-life and hypothetical trolley problems. Judgment and Decision Making, 9(1), 65–76. https://doi.org/10.1017/S193029750000499X
  • Petrinovich, L., & O’Neill, P. (1996). Influence of wording and framing effects on moral intuitions. Ethology and Sociobiology, 17(3), 145–171. https://doi.org/10.1016/0162-3095(96)00041-6
  • Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. (2008). Disgust as Embodied Moral Judgment. Personality and Social Psychology Bulletin, 34(8), 1096–1109. https://doi.org/10.1177/0146167208317771
  • Swain, S., Alexander, J., & Weinberg, J. M. (2008). The Instability of Philosophical Intuitions: Running Hot and Cold on Truetemp. Philosophy and Phenomenological Research, 76(1), 138–155. https://doi.org/10.1111/j.1933-1592.2007.00118.x
  • Thomson, J. J. (1971). A defense of abortion. Philosophy and Public Affairs, 1(1), 47–66.