Keith Frankish — via his eminently followable twitterings — got me reading Brian Fiala, Adam Arico and Shaun Nichols: “On the psychological origins of dualism: Dual-process cognition and the explanatory gap.”
Highly interesting, enjoyably written, and — on an admittedly quick and casual read — pretty persuasive. But the question of whether they are exactly right is in some ways less interesting to me than the question "Is this a good sort of approach to diagnosing what is going on with some seemingly intractable philosophical problems about consciousness?"
I’m inclined to give an emphatic “yes, this sort of line must be worth exploring!”. Some more aprioristically minded philosophers will, I guess, think Fiala, Arico and Nichols are somehow missing the real philosophical questions.
Maybe your reaction to this paper — are they looking in the right sort of place, or are they horribly misguided? — will be a nice litmus test for how thoroughgoing your philosophical naturalism is!
2 thoughts on “How to think about consciousness?”
@Peter: Thanks for the kind words! Your guess about many aprioristic philosophers is indeed correct: many have said that we are passing over the real (let alone the _deep_) philosophical questions about consciousness by taking this approach. This is one of the most common reactions we get when presenting or discussing this work among philosophers. On the other hand, dyed-in-the-wool methodological naturalists often express sympathy, even though the view is extremely rough around the edges. Thus I think the litmus test analogy is also apt.
On something completely different: thank you for creating and maintaining LaTeX for logicians! It was (and is) a valuable resource that played a key role in helping me to kick the MS Word habit years ago.
Here’s a small issue I have with the paper. (I’m still reading it.)
“We developed the AGENCY model as a natural extension of work in developmental and social psychology. In their landmark study, Fritz Heider and Marianne Simmel (1944) showed participants an animation of geometric shapes (triangles, squares, circles) moving in non-inertial trajectories. When participants were asked to describe what was happening on the screen, they tended to utilize mental state terms — such as “chasing”, “wanting”, and “trying” — in their descriptions of the animation. This suggests that certain types of distinctive motion trigger us to attribute mentality to an entity, even when the entity is a mere geometric figure.”
Now, people will use anthropomorphic language (chase, want, try, etc.) to describe a wide range of phenomena because, I think, such language supplies easy-to-find metaphors. I have seen people refer to vectors, chess pieces, functions, and various programming-related constructs as ‘this guy’ or ‘that guy’, or even say ‘this guy will try to read…’.
It would, I think, require a tremendous amount of argumentation to conclude that, just because people use anthropomorphic language, they also attribute agency to vectors.