Complexity in practice pt. 2: writing and reading

The prior post on complexity in practice was trying to be ‘about’ a paper by Chuck Dyke that is ‘about’ Deacon, Stengers, Juarrero, Thompson et al. However, the discussion quickly encountered an antecedent problem: just what sort of thing the paper actually is, or as Asher put it, what the author is trying to do; which is then a question about what to expect from it, how to read it, and how to decide if it’s a good version of what it is. It occurs to me that addressing that question is actually a perfectly good way to talk about the paper, so here’s my take, broken out into a separate post for ease of handling.

As I asked last time, what would it look like to practice complexity, not just talk about it? My sense is that Dyke (and Deacon I think, but less so Juarrero) is trying to do this. Of course if all of them are right, and this is the general takeaway of the now-long history of systems theory, in one obvious sense we are all practicing complexity all the time – we are in fact morphodynamically and perhaps teleodynamically complex. What I mean though is that Dyke’s paper seeks to demonstrate the complexity it discusses. It is both about complexity and an enactment of complexity. In this sense it is the same sort of thing that N. Pepperell argues Marx’s Capital is, on a much grander scale: both a discussion and a demonstration of complexly dynamic and complexly coupled systems.

This is a rather different sort of enterprise than the usual linear thesis-driven essay or monograph, of course. In that sort of writing we’re looking for a “fundamental point,” as JohnM diagnostically put it in the prior discussion, which is then systematically developed with logical rigor and point-mapping evidentiary support – the Popperian philosophy of science model, as Michael pointed out. But as we see when we try to teach our students the technique, it’s highly stylized and artificial, not actually how anything in the world works – including the world of practicing scientists, as Latour and Woolgar famously showed now long ago. Endless handwringing and some very good jokes have been devoted to the ‘problem’ of the procrustean mismatch between logocentric linearity and anything it is ‘about’, as well as the tendency of logocentrics to pick topics and arrange situations that happen to fit the very specific and narrow virtues of their procedure.

Well, for better or worse by the time we’ve been indoctrinated and certified into the communities of expertise that constitute scholarship we have learned to ‘recognize’ disciplined, monographic linearity as the proper form of authoritative discourse, and immediately to dismiss as undisciplined, muddled, confused or meandering (‘poetic’, perhaps, if we’re being generous) anything that represents more directly the complexity it is about. This is a constraint that accomplishes a great deal, of course; the joke in Borges is after all that the map which most accurately represents the territory is also the most completely useless. The productive advantages of abstraction, specialization and focus, like the division of labor and the assembly line, really need no rehearsing, especially when volume is the objective and advanced artisanal talent is not widely distributed. Nevertheless, there is something inherently self-defeating about linear discussions of nonlinearity. If complexity is your topic, it makes a sort of elementary sense to adopt complexity as your practice. And it also makes sense to expect readers to modify their expectations accordingly. But as Asher has already discussed at length and as Dyke also thematizes, this puts a lot of pressure on readers, especially those for whom the strategies of linearity and discipline have been or promise to be the most successful.

What clues do we have that Dyke is trying to enact complexity, that is, that he’s not just wandering around pointing randomly at birdies and flowers and clouds that remind him of his first girlfriend? Just a few guideposts here.

We could start with the (sub)title of the paper, “a plea for pedagogical plurality.” Pedagogy? That’s teaching, communication more broadly. Purpose: transmission of information. Plurality? Why? If the linearity metanarrative were true, there’d be no need for pedagogical plurality; a single beam, properly focused, would pass through all receiving prisms identically. This image Gramsci called “an Enlightenment error.” But if that’s not true, and the author knows it’s not true, then perhaps the author will be compensating for the complexity of reception by shooting a variety of beams from a variety of angles, and expecting that the enlightenment effects will be subtly or even dramatically different each time. What will this text look like? It will make ‘the same’ point in a variety of ways, which will seem repetitive or chaotic exactly to the degree each reader reflects or refracts the luminous dispersion.

Of course if the author could rely on functionally identical readers, this pedagogical plurality would not be necessary. And here we see one of the amazing accomplishments of the discipline constraint: by absenting all other possible configurations, it delivers functionally identical readers who have been rigorously cut and rotated so the light they each beam out will be received and refracted just so by all the others. Like a well-hung crystal chandelier, the blazing glory of such a cognitive system when it is well-ordered is really a beautiful and useful thing. But of course, only that one room is lit.

Let’s move on. The paper is ‘about’ Deacon, but more centrally it’s about what Deacon is trying to do in relation to what other people in a more-or-less loose network of more-or-less similar projects are trying to do. This means the network has to be mapped, and the proximities and similarities surveyed. A big middle chunk of the paper does this work, trying to leave open sockets for the (many, many) network nodes not discussed, i.e. absent, while sampling their range and significance (e.g. the ‘random’ Pirandello reference). Dyke likes Deacon, thinks he’s right about how things work, and therefore thinks that the nodes and projects are both teleodynamically self-organizing and morphodynamically coupled into a larger system with its own dynamics. How would he show this, not just say it? What would we expect to see if this were true? Links, absences, feedbacks, feedforwards, gradients, the usual. A nonlinear, unpointy, inherently incomplete and unclosed text that, like the network it discusses, is multinodal and loops back on itself dynamically, working all the while to create, maintain and singularize itself. Circles that are actually spirals, as he slyly adumbrates under the discussion of the discovery of DNA and the structure of Deacon’s text.

And so, what is Deacon trying to do, and how does it relate to what Dyke is trying to do? The answer, we’re plurally taught to understand, is properly understood as a matter of constraint within complex dynamical systems far from equilibrium. So after a lot of loopy groundwork about situated knowledge and “ecologies of practice” and “investigative ecosystems” and a great deal of loosely, dynamically related detail we get yet another heuristic example, which I’ll let stand in as a ‘point’ for this post:

To move closer to issues of consciousness with another concrete example, why is it, we want to know, that Deacon’s book is so inhumanly tedious? Well, possibly it is so largely because of all the possible objections he can imagine to his theory. He’s probably better at identifying these possibilities than his potential critics are. Many of these possible critics don’t themselves appear as robustly singularized factishes, but only factishes in absentia. The intellectual defenses are waiting in the text to deal with them should they attack, just as the chemical defenses of a plant are on hand ready to deal with threats that never in fact materialize. But their absence is felt. I take it that I’ve just given a possible causal account of an apparent factish: Deacon’s prolixity. At any rate, the hypothesis that most absentials involve the modal characterization of constrained structure seems to me a live one.

A very, very sad story that.

75 Comments

  1. In the spirit of enacting complexity….

    One of the things my mind kept returning to while reading Dyke’s paper is the notion (mined and popularized by Lakoff and Johnson) of the cognitive “metaphor”. Basically, the idea is that metaphor is crucially central to the process of abstract conception, and that our linguistic utterances reflect this internal reality. To take a random and circularly self-referential example, we often conceive of the mind as a body and the process of thinking as movement; and so have utterances like “my mind wandered”, “my mind kept returning to”, and “I reached a conclusion”. Often, we carry metaphorical implications from one realm to another without being consciously aware of it. Thinking is not literally movement — so although swimming or racing thoughts are natural extensions of the metaphor, it would be a mistake to think that they make our minds more “muscular”. This metaphorical tendency doesn’t just apply to our regular old, everyday concepts — it extends to our formal theories. If you take a close look at Descartes, for instance, you can see him (arguably disastrously) carrying assumptions from a visual metaphor of “knowledge as seeing” into the realm of thought. Thus, the “Cartesian Theater” with its resident homunculus.

    Even more circularly, “metaphor” is a borrowed rather than neologistic term, carrying literary implications into the study of conceptual processes. And preventing the carrying of implications is what prompts Deacon to most of his neologisms.

    To start bringing the circle back around, conceptual metaphor is one example of enacting complexity. We often have several metaphors for the same thing — each imperfectly apt, each covering a different bit of territory. Dyke’s paper is like that, wandering around the thing, finding an apt but incomplete metaphor from each new angle.

    To complete the turn, there’s an “embodiment” and “embeddedness” metaphor that runs through Lakoff, Deacon and Dyke. For Dyke, it’s the ecosystem of theories and the analogy of Deacon’s theory to the fertility of soil. For Deacon, it’s the embodiment of the physical mind in a complex of overlapping processes and the embeddedness of morphodynamic processes in teleodynamic ones. For Lakoff, it’s the embodiment of concepts, like the thought that is a body, and the embeddedness of the mind in a body that makes the metaphors such a natural and sensible tendency.

    For me, among other things, it’s a possible answer to the fertile shit Kant dumped on us so long ago.

  2. I find the very notion of intentionally attempting to enact or embody complexity to be hugely self-defeating. We ARE complex; our actions and artifacts etc. are all embodied in a COMPLEX world composed of infinitely many complex systems. To ENGAGE with complexity is to accept it and dialogue with it, but is NOT to artificially impose even more complexity upon it. Ashby’s requisite variety works in reverse as well as forward. To have too many degrees of freedom will work to PREVENT UNDERSTANDING or engagement. To quote JC Spender, the degree of complexity present is the extent to which the reduction strategies employed have failed, BUT we need to be reductionist in order to achieve requisite variety.

    That said, it should be obvious that I cannot accept or agree with Dyke’s approach to all this, and while Asher and I have our differences, at least I feel I can come to an understanding of his communications. Not so with either the written paper presented or with Carl’s defense thereof above.

  3. Thank you, Asher. I sometimes think my addiction to metaphor disqualifies me for legitimate participation in these sorts of conversations, but of course you’re right that they are in fact constitutive.

    Michael, if I wanted quickly to convey a certain kind of complexity to a group of students I might choose to show them a Calder-type mobile. I could also show them a dreidel, or a weather map, or any number of other ordinary complex things, but let’s stick to the mobile for now because it’s passably complex for its size and a beautiful, familiar and therefore intuitive image for most educated people.

    Of course I could just show them a diagram of the mobile, or a photograph, but I think we can agree that these static images clean most of what we want to show out of the demonstration and aren’t, in fact, usefully the mobile at all. I could screen them a video, but this would be static in its own way, showing only one side of the mobile and one linear segment of its operations through time. No, for this demonstration what we really need is an actual mobile, along with some energy input(s) so the students could actually see, in vivo, how the dynamics work, how subject they are to initial conditions, how chaotic in relation to variable environments and inputs, how the system’s constraints are also enabling, how the gaps are as important as the surfaces, and so on. We’d want them to walk around the mobile and notice what that did to the airflow and how the mobile reacted, for example. This would be a ‘natural’, shall we say, demonstration of complexity, not just conceptually but pragmatically.

    In what way do you think doing the above with words is an ‘artificial imposition of even more complexity’?

  4. Carl in EVERY way — clearly the ability to interact with a physical object conveys more and is SIMPLER than attempting to describe all of the feelings observations sensations affordances history emotions tangents daydreams etc which might occur during such an interaction with words the very choices of words and of word order of which things to include in description and which to leave out the complicatedness of the words chosen and the manner by which those words are conveyed is indeed the artificial imposition of even more complexity compared to the physical interactions which the words are “supposed” to replace

    and this would be true IMHO no matter how gifted a writer you might be

  5. Just to clarify: I don’t think anyone on these threads is trying to use our symbolic systems to model, let alone be, the actual complex systems we find ourselves co-evolving with … yes or no? It seems to me that the shared project is how best to improve our collective understanding of this inheritance … yes or no?

    If ‘yes’ to the first, then that’s an important discussion to have. If ‘yes’ to the second, then we have three main tools to advance such a project: 1) our limited perception of nature, as opaque as it is to our limited senses, 2) our capacity to draw inferences from and relations among these perceptions, as biased as they are by evolutionary selection, and 3) symbolic systems to collectively cohere higher-order conceptions, which are integrations of our perceptions and memories. Conceptions are nothing if not analogies and metaphors; as such, despite their obvious limitations, they are indispensable towards these shared ends.

    Thoughts regarding these questions/claims… Just so I proceed here accordingly.

    Cheers,
    Josh

  6. I’m probably biased (this subject was to be my master’s thesis before I ran out of money), but to me, calling it an “artificial imposition” of complexity is reductive. Have you ever read a piece of fiction or poetry that led you to an understanding deeper than straight exposition could have produced? What language can do in terms of the concepts it produces in the reader is (again, to me) very much like an emergent process. What straight exposition often does is attempt to reduce and formalize (and possibly simplify) the subject so that one clear meaning is the only possible meaning to be taken. How has that worked out?

    Here’s a fun quote from Oliver Sacks, explaining why he wrote a book the way he did:

    Such [case] histories are a form of natural history — but they tell us nothing about the individual and his history; they convey nothing of the person, and the experience of the person, as he faces, and struggles to survive, his disease. There is no ‘subject’ in a narrow case history; modern case histories allude to the subject in a cursory phrase (‘a trisomic albino female of 21’), which could as well apply to a rat as a human being. To restore the human subject at the centre—the suffering, afflicted, fighting, human subject—we must deepen a case history to a narrative or tale; only then do we have a ‘who’ as well as a ‘what’, a real person, a patient, in relation to disease—in relation to the physical.

    This is one of the possible things meant by enacting or embodying complexity. Here’s another, from Merleau-Ponty:

    Our relationship to the world, as it is untiringly enunciated within us, is not a thing which can be any further clarified by analysis; philosophy can only place it once more before our eyes and present it for our ratification.

    If phenomenology was a movement before becoming a doctrine or a philosophical system, this was attributable neither to accident, nor to fraudulent intent. It is as painstaking as the works of Balzac, Proust, Valéry or Cézanne—by reason of the same kind of attentiveness and wonder, the same demand for awareness, the same will to seize the meaning of the world or of history as that meaning comes into being.

  7. Michael shows us the desperate reductionist gamble – that you can linearize complexity and still be getting at it. Juarrero and now perhaps Asher offer the narrative / poetic line – freedom and agency are in the deterministic gaps that only storytelling and interpretation can illuminate. Deacon thinks maybe we can represent complexity as itself.

    Dyke the Elder suggests we don’t have to choose. There’s not one right way – each of these lines can be pursued to see where it leads, what it yields. And Josh has a thing to build. Can it handle all these maybes?

  8. ” clearly the ability to interact with a physical object conveys more and is SIMPLER than attempting to describe all of the feelings observations sensations affordances history emotions tangents daydreams etc which might occur during such an interaction with words the very choices of words and of word order of which things to include in description and which to leave out the complicatedness of the words chosen and the manner by which those words are conveyed is indeed the artificial imposition of even more complexity compared to the physical interactions which the words are “supposed” to replace”

    All this tells us is that by evolutionary accident you happen to be more attuned to the complexity of word things than you are to the complexity of other things.

  9. For my money, Carl and Asher are right on the money. Terry Deacon thinks I’m right on the money (he told me so). If Michael is right, then being right on the money ought to be transitive, and we all ought to be rich. — But, then, he seems to believe in linear progress, while I, and maybe Asher, think that “progress” is as metaphorical (and einseitige, ‘one-sided’, as someone or other used to say) as money. Metaphors are at best awkwardly transitive.
    Ironically, what Terry liked best about the paper was the discussion of fertility: an example of ententionality exactly as he had intended ententionality. There’s absolutely nothing metaphorical about that discussion. I’m just in from checking out my struggling eggplants, thriving tomatoes, etc. all enacting complexity in my garden (recall BEING THERE), and, in fact, trying to restore order to the chaos of water and sun that the current climate has delivered here. That’s a real job.
    When I finish this note, I’ll go back to slogging through a bunch of stuff on causality in econometrics and macroeconomics (Cartwright, Hoover, Hendry, et al.) that I’m trying to sort out. They too are trying to restore order out of chaos, and, as theoretically highflying and “intellectually detached” as their stuff is, it too has its reality — as it leaks into the Fed, Capitol Hill, and the Bundestag, and crashes impotently into the base of the Acropolis.
    The language of causality is “all” metaphor. Asher is right on track with that. The metaphors fester within dreams of control and management: intervention possibilities; engineering prosperity. That’s a necessary part of the long story of why the hard sciences don’t talk much about causality. Economic theory can generate, and has generated, umptynine rigorous mathematical representations of causality, each one reproducing exactly the same difficulties as the last. The debates within the field yield brief illusions of enlightenment; but the illusions die with the next journal contribution. Simplicity isn’t the royal road to understanding complexity — at least given our inherited habits of defining simplicity.
    Incidentally, (a) Carl is indeed an accident; but not an evolutionary accident; and (b) Michael has accurately pinpointed one of the major reasons why I have such trouble getting anything published.

  10. To Dyke the Elder: You are, I trust, aware of D. McCloskey’s Rhetoric of Economics, which has some pointed things to say about how economists (and other social scientists of positivistic inclinations) are trained pretty well to deal with facts and logic but not at all well when it comes to dealing with stories and metaphors. I recall, too, something that George Soros says in one of his books, that every successful investment depends on a good story; but the key to being a successful investor is to know when good stories turn bad.

    Finally, I repeat a joke I may have told here before. The marketing department of a famous university is holding a conference to which an eminent economist, a friend of the department’s head, has been invited to be keynote speaker. In his introduction, the head of the department says, “My friend, Professor X, is an economist, one of those people who turn random numbers into absolute mathematical laws.” The eminent Professor X replies, “And my friend here is a marketer. They reverse the process.”

  11. Some thoughts, Dyke the Elder (Younger to come) –

    Re: the ecosystem of ideas. The basins of disciplinary inertia are powerful (in an historical sense), and negotiating the boundary conditions is perilous work indeed – and not only for PR reasons. Tricky edges. Proceeding “with inherited ontologies held at bay for a while” is hopefully an ideal we aspire towards…

    “Turf wars were more likely outcomes of interdisciplinarity than were productive syntheses.”

    This point will stick with me for a very long time.

    “The three critiques [of Kim] ‘come to the same thing’, in any reasonable pedagogical sense of that phrase.” & “The elaboration of constraint causation pursued by J, D and T (and, as I’ve said, many others) is the important first step to a genuine answer.”

    One’s satisfaction with this bunching probably depends on whatever degree of resolution we’re entertaining, given a particular question – from a broad overview (and particularly a pedagogical approach) the bunching is certainly useful. But in the grainier mechanical sense, the differences (e.g. in relation to Kim’s critique, how constraints might cause, etc.) come to matter a bit more – particularly depending on the (disciplinary) audience.

    “Terminological innovation is no one’s monopoly.”

    “Correlations between neurological processes and daydreaming will surely be possible. But there is no reason to think that the correlations will be one to one.”

    A lovely point, well known in fMRI research, or at least among those who are doing it right.

    “The subsequent work to be done is the experimental study of the topologies, the constraints, and, of course the evolutionary trajectory from which they resulted. No one has done that work yet, not the least because the workplace hadn’t been defined.”

    I actually wonder if it’s true that “no one has done that work yet” – much of the insight I take from these theoretical approaches would entail a reinterpretation of research already done or underway. I’m thinking of lab work – not necessarily theoretical work, both that under discussion and in the absential future.

    “Lots of essentials must be kept absent at some stages of the investigative process, or we couldn’t organize thoughts – we’d be on perpetual overload. In this way, absentials are almost always intrinsically connected to abstraction and analogy. In fact, they’re denizens of the storehouse of potentially productive abstractions and analogies.”

    Yep – the filtering problem. Again, a very productive link. Thanks.

  12. To respond to Glasperlenspiel in kind: the two of your points I like best are, first, the one on bunching, and the second on meta-analysis of earlier work.
    Bunching: Very simply, I want all those folks to think of themselves as allies following out potentially converging insights, not as competing rivals. Making Jaegwon the lightning rod is a way of digging in and making a firm stand against the bizarre strictures of the post-Russell versions of rationalism. The nuanced questions you want to move on to are unformalizable, let alone unanswerable, in those traditions — as all the people I bunched know — along with others whom I left absent or have discovered since I wrote the piece. Swaggering to the middle of the field and drawing a line in the sand is never very useful; but pointing out a wide consensus that lo and behold such a line is emerging is a different matter. The critiques of Kim come to the same thing. The potential paths forward from the critiques don’t.
    reinterpretation of research: It’s really amazing at the moment how many metastudies are being done in field after field. Your intuitions on that seem exactly right to me. I think that two dimensions of the reason for the phenomenon are, first, that as the traditional disciplinary frameworks show their limitations, everybody is having to play catch-up with material that wasn’t on their screen in their tightly disciplined days. Second, and related, straying beyond disciplinary bonds virtually self-generates new or modified frameworks that virtually force reinterpretation of old work.
    Actually there are two more reasons I can think of. One you actually note when you refer to fMRI. Lately some of the most important scientific advances have been in the development of imaging techniques — more generally, more routes to cognitive access. Grafting new findings to old naturally generates rethinking of the old. And the last reason is humble, but important. Metastudies are quick and cheap. Somebody else has already done the hard work, written the grant proposals, etc.

  13. Not sure where to put this but here is as good a place as any. I want to thank everyone involved with Dead Voles for giving me the insights needed to develop a new form of online graduate seminar, essentially combining the Dead Voles format and posting rules with the new isce library (see http://isce-library.net). Without you the format and the idea would not have occurred, so thank you.

    CarlD, I think this is an example of practicing complexity.

  14. Pingback: an immense practical advantage: clarity in the midst of confusion « power of language blog: partnering with reality by JR Fibonacci

  15. Just popping in here to recommend the pinging post at Power of Language – it’s a fun think piece with terrific links. Also, to note that I get to talk with Dyke the Elder any time, so I’m not ‘out’ here so much as leaving space for others.

  16. Nel mezzo del cammin di mia vita, the other day, I sought relief from a boredom rivaling Roquentin’s by turning to the Dead Voles. In particular, I caught up with the Deacon discussion as it had been going on before I stuck my oar in. Two things stood out. First, a partial answer to what my review paper was doing was “the same thing Carl did more succinctly: say ‘stop it’ to the priority dispute.” I just went on to try and introduce a new framework for the intellectual dynamics. But, as Casper Gutman says to Sam Spade, “Sometimes the short farewells are the best ones.” I’m also reminded of one of my favorite lines from literature. Somewhere in the middle of one of Ring Lardner’s short stories, we find the line “‘Shut up!’, he explained.”
    The other thing that struck me was that somewhere along the way, Dead Voles had apparently come to a distinction between explanation and cause — presumably a clean and useful one. That stopped me short, because it presupposes a univocity both of “explanation” and “cause” that borders on the mind-boggle for me, though apparently not the mind bloggle of Microtus defunctus. It’s definitely possible that I’m missing something. I’m used to people saying things like, “We won’t have a full explanation of the phenomenon until we understand the causal mechanisms involved,” and stuff like that. An exemplar, for the moment, at least, may be Eric Davidson’s THE REGULATORY GENOME, where there’s an amazing plurality of explanatory/causal language that certainly doesn’t respect any serious distinction between explanation and cause.
    Of course I know how the distinction was supposed to work in the dualistic schemes of 20th C logical positivism, logical atomism, linguistic philosophy, and so on, but that can’t be what your discussion could have been working with, since a number of you think that that tradition is as moribund as I think it is. On the other hand, there’s no real explication of the distinction in the threads — I couldn’t even identify a prime mover for it. Maybe it’s in one of the threads I haven’t looked at. If so, my question is simply for a reference. Otherwise someone might give me a quick rundown of what it’s all about. I’m working, at the moment, on causality and complexity (yet again), and the only stable conclusion I’ve come to is that I need all the help I can get. Actually, I’d like to know, in particular, what work the distinction could ever do. To take a local example, it certainly couldn’t do Deacon any good. I think Asher’s interesting compendium of Deacon’s thought goes a long way to show that. I really have to stretch to see how the distinction could settle anything in any of the current issues that arise within the areas I’m focusing on: evolutionary biology and macroeconomics, basically; but, again, I could be missing something. Are correlations by themselves explanatory, even when causality can’t be established, or something like that? Or, the word “mechanism” frightens some people, but that makes sense only when you have a very limited conception of mechanism — “banal machines,” as von Foerster called them. And complex systems certainly outrun that. Among the factishes that can’t be exported from the old science to the new is the clockwork paradigm. So, what’s up?

  17. Deacon seems to be troubled to some extent by the word “mechanism”, saying things like:

    Though intuitively one can imagine simpler and simpler agents with stupider and stupider intentional capacities, at what point does it stop being intentional and just become mechanism?

    His choices in communicating his ideas would seem to be: A) define mechanism in a “traditional” way that excludes intentional qualities; or B) try to argue that our concept of mechanism should be expanded to include intentional processes. He appears to have chosen A.

    I personally think of mechanism as something broader — something like, “purely physical processes”. But the “purely physical” part is only there as a sort of metaphysical declension necessary to communicate with other tribes. My working assumption is that there isn’t anything non-physical.

    It’s possible that the distinction between cause and explanation follows this same pattern — the use of particular terms (or distinctions between them) for the sake of communicating with people who hold a different set of assumptions. But maybe not. I’ll try to think through it.

  18. Given 1) how little of physical reality sentient agents actually perceive, and 2) how much their experience “colors” (e.g. synesthesia) subsequent perceptions: I think cause can only ever be inferred, and explanations are the integrated aggregate of accrued inferences. As for “mechanism”, there is the traditional kind limited to efficient causes; and then there is the kind that also entails formal causes. The human race has really only fabricated the former, more limited type, whereas Deacon and I propose fabrications (future “machines”) that entail both efficient and formal causality.

    I think the trick with understanding, and therefore explaining, this second type of “machine” is that the evolved medium of explication (e.g. language) is itself limited in a way similar to efficient causality. For example, try to explain Ode to Joy. The words, or even the notes on the page, could never be the experience of hearing music. Among many reasons for this, the central problem is the formal nature of superimposed notes in the physical medium of air. The very simultaneous nature of linear superposition is completely orthogonal to a merely sequential (or even parallel) understanding of a very real physical phenomenon. I say ‘orthogonal’, because the sequential explanation of Ode to Joy, assuming one has never heard it, will always diverge from the real-time superposition of notes, which are both distinct and inter-related, simultaneously.

    The logic and rationality we have evolved to so covet (i.e. Positivism) certainly has its evolutionary advantages; however, the jury is still out, because this utility is a constraint in as much as it facilitates modernity. Constraint is necessary to do work, but natural selection over the last 3.5 billion years has made it clear that there is no guarantee that the work a species can do in its environment is the most adaptive or fit form of work that could potentially be done. And that, I believe, is the crux: more diverse, inter-related, and thus complex kinds of work are only possible for a sentient agent if it leverages higher-order dynamics, which require both formal (symphony analogy) and efficient (machine analogy) causality. This is at the core of my project, but it is also central to Deacon’s concept of ‘ententionality’.

    As for explanation, the exercise in explication is always causal, but it may be very limited in its capacity to then explain causality. Limited only to what can “positively” be asserted to be evidentiary, our understanding is over-cohered upon an atomistic and stilted inference of reality. To understand ‘ententionality’, one has to warrant that the open or surrounding system is both efficiently causal and formally semi-transparent (e.g. notes, forces, atoms, etc.) upon the closed system one is interested in explaining. Based upon this premise alone, it is clear that a complete understanding of the local system requires understanding the surrounding system, which is of course a never-ending process. So, when the analytic philosophers or the Positivists require that we “pin down” our explanations of these formal superimposed causal phenomena, they are unknowingly and ironically requiring that we edit out of our explanation the very thing they are demanding that we explain.

    Cheers,
    Josh

  19. Looking back, it seems to me that the discussions on Dead Voles about explanation and causality revolve around three things:

    1. The “access to the world” problem, and how Kant’s formulation messes with our heads.
    2. The issue of reductive explanations.
    3. The way in which theories can be thought of as being normative (or not).

    These things tie together, obviously. The viewpoint I’ve been pushing, stated very crudely, is that our access to the world comes from mental processes being physical processes embedded in the world. This viewpoint leads to at least three “corollaries” (stated, again, very crudely):

    1. The nature and form of our theories is constrained by the specific ways in which brains process the world.
    2. Theories about the world are necessarily analogical, metaphorical, isomorphic, etc., and are never exactly true or correct in the way some philosophers think is possible for certain kinds of logical arguments.
    3. Theories are always normative — at minimum in the sense of a metaphor being more or less apt.

    One particular issue becomes a focal point: explanations of the world at different “levels of organization”. It’s a focal point because it’s where we ask questions like: “Can we tell what parts of our theories are a result of how our brains process and what parts are a result of the stuff being processed?”. It’s also where issues about emergence and reductionism go to get a workout.

    That’s the basic background. The people we’re discussing all this with are Kantians, anti-Kantians, OOP people, anti-OOP people, playful agnostics, Spinozans, and who knows what else. Communication with all of these groups is a tacit part of the discussion, so you’ll almost never see someone insisting on a particular usage of a term. Instead, interlocutors tend either to accept the looseness of terminology and ask for clarifications from individual speakers, or to continue blithely talking past one another.

    For me, explanation has meant a theory or conceptual model, and the primary question about causality has been something like the question I posed above about levels of organization: “how is the nature of causality translated between the world and our brains?”. And, “what role does causality play in explanation?” Meta-theory stuff.

    So at one level, there are questions about causality that put it in the same category as “reason” or “logic” or “properties” in a meta-theoretical context. But at another level, I guess I feel that we’re still making arguments against moribund traditions. Zombie soldiers.

    The real question, as you put it, is “what work the distinction could ever do”. The only answer I can give is that I don’t have a clear enough idea of what a “good” explanation is to know whether it might have non-causal aspects.

  20. “So, when the analytic philosophers or the Positivists require that we “pin down” our explanations of these formal superimposed causal phenomena, they are unknowingly and ironically requiring that we edit out of our explanation the very thing they are demanding that we explain.”

    Yup. Before you posted I was going to say something about explanations being stories we tell about causes, but I got twisted up in the causes of explanations and explanations that become causes, etc., and I can just edit most of that out now.

    Btw if any of you want boring, come help me clean / reorganize my office.

  21. So, when the analytic philosophers or the Positivists require that we “pin down” our explanations of these formal superimposed causal phenomena, they are unknowingly and ironically requiring that we edit out of our explanation the very thing they are demanding that we explain.

    That’s an excellent way of saying it.

    It also implies that in order to argue back, we need to get meta-theoretical; i.e., we need to say something like “Your requirements for an explanation are wrong — this is what constitutes a valid explanation”.

  22. I no longer understand what’s being discussed here. Deacon has an ambitious project, but it’s fairly specific: how did intentionality evolve in a universe characterized by unintentionality? Was intentionality an emergent property of certain kinds of higher-order self-organizing systems, a property that cannot be explained in terms of the lower-order systems from which it emerged? Or can these lower-order explanations be assayed, such that the transformational gap between unintentional and intentional can be explained in terms more precise than “suddenly, emergence happened”? Would this be a causal explanation? I’m pretty sure it would, if you’re Deacon.

  23. To add one more thing: the viewpoint that I’ve been pushing leads to a (hopefully virtuously) circular way of justifying theories. The theory says X about the world, and since the world works in X way, our brains must work in Y way. If we turn around and ask a transcendental question (“how must the world be for us to have the theory that the world works in X and Y ways?”), the answer will be “in X and Y ways”.

    I also believe that it would take writing an entire book to make the above idea sound non-stupid.

  24. John – I don’t think the question is whether or not the explanation would be causal. The question for Deacon is whether our current, scientifically accepted notion of efficient causality (and levels of organization) is sufficient to the task. If not, an argument must be made as to why there are more kinds of causality than efficient causality, and how some other form of causality gets its efficacy.

  25. Final cause is just another name for intention, isn’t it? So right, Deacon wants to preserve intentionality as a cause for organisms, rather than eliminating it like behaviorists and some contemporary neuroscientists. But where did it come from, this intentionality? Deacon isn’t prepared to argue for some sort of intentionality permeating the universe. Thus, for Deacon, organisms’ final-cause orientation had to evolve from efficient causes, yes?

  26. “Now we’ve got “efficacy” flapping loose”

    Agreed. I wince every time I say “efficacy” or “power” with respect to causation. When I say “how some other form of causality gets its efficacy”, what I probably should say is “how we can adequately explain a system in terms of non-efficient causal notions” or “how our concept of final causality relates to our theory of the process acting in the way it does”.

    Both rephrasings indicate some sort of inseparability of cause and explanation.

    I used to think that causation was a “foundational” concept — one that doesn’t involve metaphor (i.e., can’t be described in terms of anything else). I’m not so convinced these days.

  27. Carl – I try not to get pedantic or etymological about the use of the two terms. However, description does carry a sense of non-causal exposition — I can describe the “properties” of a system without saying how they came to be that way. A description of a Bénard cell, for example, could mention the hexagonal structures without saying how they form. An explanation would need to do both.

  28. About half an hour ago I started experiencing a cramp just under my right front ribs. Why? Because I’d been running for a couple of miles. But why a cramp today, when I don’t usually cramp up while running? Because I ran in 99-degree heat with smoke from various forest fires hanging in the air, my conditioning already being somewhat compromised from miscellaneous lower extremity injuries suffered over the past few months. But why does this combination of factors manifest itself as a cramp? Because something about the heat and the burning of calories and the prior intake of liquid and the irritation of the lung tissue etc etc. But why does the cramp hurt? Because the pain is a product of evolution, an adaptive forewarning of more serious organ failures to come if I don’t shut down the actions causing the pain. But why was I running on a 99 degree smoky day? Because I’m a fricking idiot, that’s why.

  29. Causes have the power of reliable prediction and minimal reliance on ceteris paribus; explanation is retrospective and only suggestively predictive with heavy reliance on ceteris paribus

    Causes are usually explanatory (except when ceteris paribus has been importantly violated). Explanations are seldom predictively causal except when ceteris paribus truly holds

  30. The only answer I can give is that I don’t have a clear enough idea of what a “good” explanation is to know whether it might have non-causal aspects.

    Could your problem be the “is”? Perhaps there are no good arguments, just better or worse ones. In one of his early works, Noam Chomsky discusses three views of scientific method, described from an engineering perspective, as black boxes with particular inputs and outputs. The “discovery” view sees one input, facts, and one output, Truth. The “decision” view sees two inputs, facts and one theory, and one output, Right or Wrong. The “evaluation” view sees at least three inputs, facts and at least two theories, and one output, a judgment that given these facts, one theory is Better and the other Worse. Note what has happened. The first two views assume that a theory “is,” either the Truth, or Right or Wrong. The third assumes only that there are ways to make reasonable judgments of Better or Worse, given the facts in hand, judgments that may be reversed if new facts appear or altered if another, even better, theory is created.

    So, what makes a better theory? Having thought about this for a number of years, I have tentatively settled on three criteria: scope, distribution, and elegance. Scope is a measure of the amount of information accounted for. Distribution is the ordering of the details included within the scope of the theory. Elegance is determined by the application of Occam’s Razor. Thus, for example, Ptolemaic and Copernican theories of planetary motion are similar in scope. They differ in the ordering of details, e.g., the location of the Sun. The Copernican theory is more elegant, given that it requires fewer epicycles to account for retrograde motion than does the Ptolemaic theory. We now know, of course, that neither the Ptolemaic nor the Copernican theory was the last word on its subject. Kepler’s introduction of elliptical orbits, Newton’s law of gravity, and Einstein’s theory of relativity would introduce refinements that have gradually enlarged their scope, accounted for the distribution of more details, and done so in ways that are arguably more elegant. Is Einstein the last word? Probably not. Phenomena like entanglement suggest room for further improvement. Physics is still a going concern.

    Still, we live in a world that is complex and chaotic, not only in what are becoming relatively well defined mathematical senses, but, phenomenologically speaking, too complicated for perceptions to be captured in existing forms of analytic or algorithmic understanding. So we do what humans have always done. We make up stories, narratives in which we try to account for as much detail as possible, even if that means bringing in the deus ex machina to close gaps in the story line. We can, it turns out, enjoy immensely complicated stories—The Bible, James Joyce, Proust, Gibbon on the fall of the Roman empire, Caro on the life of Lyndon Baines Johnson, Thomas Pynchon, Neal Stephenson’s Baroque Cycle, all spring to mind.

    From this perspective, whether an effort to enact complexity in explaining complexity succeeds or fails is not a question of whether or not the effort in question conforms to linear logic but whether it succeeds as a good story, whether it has interesting characters, plausible plots and subplots, and a structure in which they all seem somehow to hang together, in which, at any particular moment, the reader can see the path from here to there. The problem is not metaphor per se, but whether the metaphors assembled to construct a particular tale are felicitous. Do they offer at least the appearance of fresh insight?

    In the cases at hand, I remain persuadable. I am not yet persuaded.

  31. Hm. I would think a good explanation describes properties and dynamics. It would also rule out ceteris paribus, replacing it with honest unknowns that get whittled down as information improves. This becomes possible retrospectively because the dynamics of the past are fixed.

    I would also think, but apparently I am wrong, that everyone on this thread who’s been reading in complexity knows prediction is a canard. The dynamics of the future are not fixed, so while we can say some general things about the space and probability of developments, we cannot say with certainty what specifically will happen from time to time – unless, of course, we can parabisize particular experiments under rigorously controlled conditions which by ironic fate cannot be generalized.

    JohnM, all possible audiences are in the future. We take our chances as best we can, unless of course we can use genre and discipline to parabisize particular readers under rigorously controlled conditions.
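
    A minimal numerical sketch of the point about prediction just above (my illustration, nothing from the thread): two runs of the logistic map started a measurement error apart track each other briefly and then go their own ways, so only statistical statements about the long run stay useful.

    ```python
    # Toy sketch: sensitive dependence on initial conditions in the logistic map.
    # The two trajectories begin "identically" (to five decimal places) and diverge.
    r = 3.9                      # parameter value in the chaotic regime
    x, y = 0.400000, 0.400001    # initial conditions differing by one part in a million

    for step in range(1, 41):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")

    # The long-run statistics of either trajectory are stable and describable;
    # the specific value at step 40 is not recoverable from the starting point.
    ```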

  32. “The dynamics of the future are not fixed.” If we’re still talking about the emergence of intentionality in an unintentional universe, then we’re talking not about the future but about the distant past, before there were any humans around to think about it. Further, we’re talking about a past that precedes the sort of complexity in which intentionality becomes a possible explanation. Prediction would be useful in the sense of building a model that would reproduce outcomes that occurred in that long-distant past. That’s what Deacon is working toward, I think. If the topic is now about complexity in systems that already include humans, then we’re not really talking about Deacon any more. But now I just realized that this thread isn’t about Deacon but rather about Dyke the Elder’s paper. So never mind: I’ll wait until Deacon comes back around (which I *predict* will happen eventually, at least in some of the alternative future scenarios).

  33. I think unconsciously, my paradigm for a good explanation has been one which can be used to specify a computer model without artifact (leftover pieces) or ambiguity (unspecified pieces) that reliably produces the observed behavior of the process being explained.

    In terms of prediction, models like that can be extremely powerful, as long as one keeps in mind that the low-level aspects of the model will not be predictive, and that the outcomes (behavior) will be no more or less varied than they are for the real process.

    In many cases “post-diction” is, as John noted, as important as prediction. Axelrod’s modeling of alliances in WWII comes to mind.

    The beauty of computer models is that they can define behavior at a low level (molecules, neurons, people) but make measurement available at a high level (fluid flow, cognitive tasks, economies). And they can be placed within other unruly systems to see how they will behave. Judging the “correctness” of a computer model is a weird beast in science. The key is not the open-ness or closed-ness of the system being studied, but the fact that complex systems exhibit ranges of behaviors that, while not contradicting physical determinism, cannot be controlled precisely enough to guarantee single outcomes.

    So “ceteris paribus” is really only a problem insofar as it fools itself about what can be controlled for.
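
    To make the low-level/high-level point concrete, here is a toy model of my own devising (not one of the models Asher or Axelrod has in mind): the rule for each walker is an unpredictable coin flip, yet the aggregate measurement comes out essentially the same run after run.

    ```python
    import random

    # Toy sketch: behavior specified at the low level (individual random steps),
    # measurement taken at the high level (mean squared displacement).
    random.seed(1)

    def run_walkers(n_walkers=5000, n_steps=100):
        positions = [0] * n_walkers
        for _ in range(n_steps):
            positions = [p + random.choice((-1, 1)) for p in positions]
        return positions

    final = run_walkers()
    msd = sum(p * p for p in final) / len(final)
    print(f"one walker ended at {final[0]:+d} (not predictable in advance)")
    print(f"mean squared displacement = {msd:.1f} (reliably near 100, the number of steps)")
    ```

    No individual path can be predicted, but the high-level regularity survives reruns and changes of seed, which is the sense in which such a model ‘reliably produces the observed behavior’ without pretending to control the low-level outcomes.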

  34. WRT Deacon: if we were to create a computer model of Deacon’s theory, we’d only say absence was an important aspect of the theory if we needed to include the idea of absence to make the model work. My semi-unconscious thinking has been to envision how one would model absence in Deacon’s theory.

  35. (sound of throat-clearing)
    1. Nothing is something like what Hegel thought it was, but for nothing like the reasons he thought.
    2. It’s simply not true that only ABD’s worry about the free will problem. That myth has to be dispelled at once. On the other hand the time trajectory curve of worrying about the free will problem shows regular and significant spikes early on Monday mornings.

    Now. I hate to be linear, but I’ll key some of this to particular entries by post time. From time to time I’ll continue a trope I started with Asher: things flapping loose.

    Asher 11:29 As usual (so far) we’re in basic agreement. I think that when you think about mechanism in Deacon you have always to keep in mind that the whole project is grounded in, and dependent on evolutionary biology and thermodynamics (elaborated into morphodynamics and teleodynamics). Any account of how he thinks about explanation and cause has to rest there. For instance, any image of causality as things banging into other things has to be dropped. That’s why he frets about “mechanism” — ESPECIALLY because the attempted reduction of thermodynamics to statistical mechanics yearns to think only of things banging into other things as “what’s really going on.” “Things” is flapping loose. (Carl got to that in a much earlier blog, on Calder type mobiles, and tried to get us to think of the dynamics of mobiles relationally, or, as he put it, in terms of topology. I added some stuff on symmetry to that, and we got a paper daft (sic) out of it.) So “things:” topology isn’t a thing, nor is symmetry, nor entropy, nor energy, nor space, nor time. Most of us in this conversation seem to have gotten this far. Space and time are emergent (on the relativistic standard model) but they aren’t emergent things, despite the fact that when they emerge you need a couple of new nouns. It ain’t called the theory of relativity for nothing. Not to mention field theories in general. So we shouldn’t feel heterodox in moving away from the traditional ontologies of things. In terms of explanation of what we and Deacon are interested in, that’s not where the action is. (“Action” flaps loose.) Lovely. Are leaves things? Maybe to be able to continue to think thingily we’ll have to think clearly about KINDS of things — but then we might find that our theory of kinds depended on an understanding of (actual and possible) relations.

    Josh 12:55 Another case where I can’t disagree a whole lot, but only add. The example of mapping the Ode to Joy onto the explication of the Ode to Joy is a pregnant one. As I said in the Deacon paper the relations between (now) the Ode to Joy, the representation of the Ode to Joy in symbolic space, and any brain configuration aren’t mappings. What the relations are, of course, is a hideously long story no one knows how to tell at the moment, but a number of folks in a number of fields know that they aren’t mappings, and are busily designing the research to go on from there. Deacon’s design involves the identification of morphodynamic and teleodynamic levels of emergence.

    Asher 1:08 Kant, you say? Try this one, talked out decades ago with David Depew, obviously in anticipation of its use here. For Kant (and other enlightenment thinkers) matter is dead: inert, homogenous, and lacking of capacities. In order for it to become the world we live in, it has to be animated, made active. It’s the job of natural philosophy (pure reason?) to tell the story and provide the details of this animation. But natural philosophy has serious limitations that prevent it from completing the task, thus necessitating a thorough critique. This necessity is welcome to Kant, for it allows him to demonstrate a place for God.
    But matter isn’t inert, homogenous, and lacking of capacities. It’s active, differentiated, and full of determinate capacities at any focal length you choose. In these days I guess you could say “all the way down to the quantum vacuum”, especially if you’re residually nostalgic for reduction. But you could also be Aristotelian about it, and put together a really rich concept of material cause — which is one way of looking at Cartwright’s concept of capacities, or at the theories of structural causality, or even constraint causality. In other words, shoving Deacon’s stuff into the category of final cause is only partially correct, and what we ought to be talking about is the dialectic of material conditions. (Thus cashing in the bogus remark on the pseudoHegel that I began with.)

    Carl 1:11, 2:28 Asher 3:44 description: one philosopher’s point. We talk about descriptions as accurate or inaccurate, but don’t usually, at least, say that about explanations. Relations, after all, have to have relata they relate, and most explanations seem to involve relations. But accuracy of description can now be heard warming up for a good flap. Certainly the issues of identification and re-identification arise, and some strange contextualities can arise. Think of the Steve Buscemi character in FARGO, and how he’s identified — several times and by several different people: “kinda funnylooking.” And on the basis of this description Marge knows it’s the same guy from context to context, so I guess it must have been an accurate description. In another context I’d try to get you to think about factishes and singularization.
    Having said that, I still always tell my cosmology course that the greatest story ever told is y=f(x). Think of its scope, power, and richness. Compared to it, the Gospels are parochial, sectarian and … Well, on this line science can be defined as the activity of finding constructive and instructive constraints on narratives. I may even think that’s actually what it is, accounting for why I’m one of those who aren’t afraid to foreground metaphor.

    ktismatics 1:50 “final cause is just another name for intention, isn’t it?” Certainly not. That’s why Deacon goes to all the trouble of inventing “entention.” Here I’ll go all in with my account of soil fertility — especially since Terry liked it so well.

    Michael 3:00 “causes have the power of reliable prediction and minimal reliance on ceteris paribus …”
    A succinct précis of the dominant view a half century ago. “power”, “reliable prediction”, “minimal”, and “ceteris paribus” all flap loose. It’s problematic these days even for those who hope their systems of interest are linear. I’ll have some stuff bearing on this later. But for now: the landmark for causal theory in the 18th C was Hume’s theory of constant conjunction; the landmark in the 19th C was Mill’s methods; and, it seems, the landmark in the 20th C was Mackie’s discovery of the INUS condition: “An inus condition is an insufficient but non-redundant part of an unnecessary but sufficient condition.” (quoted from Cartwright 1989, p. 25). Having to deal with INUS conditions turned out to be one of the major spurs to going beyond the orthodox view for almost everyone. Of course having to deal with non-linear systems, sensitivity to initial conditions, and so on was another.
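
    A concrete toy illustration may help here. The sketch below (mine, not anything from the thread or from Cartwright) works through Mackie’s stock example: a short circuit as an Insufficient but Non-redundant part of an Unnecessary but Sufficient condition for a house fire. The variable names and the little truth-functional “world” are hypothetical.

```python
def fire(short_circuit, flammable_material, no_sprinkler, arsonist):
    # Two alternative sufficient "causal complexes" for the fire:
    #   complex A: short circuit AND flammable material AND no sprinkler
    #   complex B: arsonist
    return (short_circuit and flammable_material and no_sprinkler) or arsonist

# Complex A is sufficient for the fire...
assert fire(True, True, True, False)
# ...but unnecessary: the fire can happen without it (the arsonist route).
assert fire(False, False, False, True)
# The short circuit alone is insufficient...
assert not fire(True, False, False, False)
# ...but non-redundant within complex A: drop it and the complex no longer suffices.
assert not fire(False, True, True, False)

print("the short circuit behaves as an INUS condition in this toy world")
```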

    Lunch break. You guys mostly work in the middle of the night, and don’t know what a lunch break is.

  36. ““final cause is just another name for intention, isn’t it?” Certainly not.”

    Fine. I don’t know the philosophical terms. The rest of my comment is about Deacon: he wants to trace the emergence of ends-based activities, including intentionality if such a thing exists, in a universe that isn’t ends-based, except for its general tendency toward eventual heat death.

  37. I can’t keep up but
    Jim McCreery: I enjoyed the rubric of scope, distribution, and elegance. Aesthetic criteria have had enormous power well beyond the obvious cases of Einstein and Dirac. But you do scope in terms of the amount of evidence accounted for at a moment when battles are raging about information measures and their scope, especially in the contexts of evolutionary biology and cosmology. Are informational entropies additive? Everybody seems to forget that entropies are ratios; even with proportionality constants slipped in, they’re still ratios, and that’s important when you cross boundary conditions. Distribution gets us to structure and framework, which is right in line with Cartwright and others. But, given structure and framework, Occam’s razor becomes highly contextualized (though far from arbitrary). Einstein warned about that. I think about the struggles that are going on about computational depth.
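
    For what it’s worth, the textbook answer to “are informational entropies additive?” (standard Shannon identities, not a claim anyone in the thread is making) is: only for independent sources; in general mutual information is the shortfall, and the choice of logarithm base is exactly the sort of proportionality constant mentioned above.

```latex
% Joint Shannon entropy is subadditive; additivity holds only under independence.
H(X,Y) \;=\; H(X) + H(Y) - I(X;Y) \;\le\; H(X) + H(Y),
\qquad I(X;Y) = 0 \iff X \perp Y .
% Changing the logarithm base only rescales by a constant:
H_b(X) \;=\; \frac{H_a(X)}{\log_a b}.
```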

    Asher today’s post Have you read Richard Watson’s COMPOSITIONAL EVOLUTION? I think you’d find it very sympathetic to what you’re up to. It also has the best account of modularity I’ve run into so far.

    Two last thoughts: In all the stuff I’ve been plowing through, the consensus that seems to be emerging is that all the conceptions of causality and tests for causality that have been concocted over the last decades — probabilistic, Bayesian, Markovian, regression rules, what have you — are locally sound research strategies, but neither any of them singly nor the whole set adds up to a global theory of causality. The consensus has an amazing range of members of various persuasions. A corollary consensus is that for the sciences that’s not a terribly serious problem, but a way of life; a big tool kit. I think that’s really important for people interested in complex systems, for the tool kit is available for research strategies there too. The heuristics of analysing chaotic time series (even recognizing chaotic time series in the first place) require the heuristic use of a lot of linearizing tricks. Asher’s latest post bears on that too.
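
    As an illustration of that last point, here is a minimal sketch (mine, not anything from the thread) of one such linearizing trick: estimating the largest Lyapunov exponent of a logistic-map series by fitting a straight line to the logarithm of average nearest-neighbor divergence, roughly in the spirit of the Rosenstein procedure. The embedding and window parameters are illustrative choices, nothing more.

```python
import numpy as np

def logistic_series(n, x0=0.3, r=4.0):
    # Chaotic logistic map at r = 4; its largest Lyapunov exponent is ln 2 (about 0.693 per step).
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def largest_lyapunov(series, emb_dim=2, lag=1, horizon=8, theiler=10):
    # Delay embedding of the scalar series.
    n = len(series) - (emb_dim - 1) * lag
    emb = np.column_stack([series[i * lag: i * lag + n] for i in range(emb_dim)])
    usable = n - horizon
    log_div = np.zeros(horizon + 1)
    counts = np.zeros(horizon + 1)
    for i in range(usable):
        # Nearest neighbor, excluding temporally close points (Theiler window).
        d = np.linalg.norm(emb[:usable] - emb[i], axis=1)
        d[max(0, i - theiler): i + theiler + 1] = np.inf
        j = int(np.argmin(d))
        for k in range(horizon + 1):
            sep = np.linalg.norm(emb[i + k] - emb[j + k])
            if sep > 0:
                log_div[k] += np.log(sep)
                counts[k] += 1
    log_div /= np.maximum(counts, 1)
    # The linearizing trick itself: a least-squares straight line through the divergence curve.
    k = np.arange(horizon + 1)
    slope, _intercept = np.polyfit(k, log_div, 1)
    return slope

x = logistic_series(3000)
print("estimated largest Lyapunov exponent:", largest_lyapunov(x))  # roughly ln 2
```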

    Last, at last, one of the books I’m currently leaning on is Kevin Hoover’s CAUSALITY IN MACROECONOMICS. I may have mentioned it before. The question that organizes the whole book, and that Hoover offers an answer to, is “Does money cause prices, or do prices cause money?” So it turns out that for him and the rest of the econometricians, for Cartwright and the whole rest of the crowd at LSE, and on and on, the question is how we’ll ever be able to tell if monetary policy makes any sense, or fiscal policy for that matter. Talk about your “precise control”, or opportunities for exogenous intervention, and so on. They don’t want to (are afraid to) give up the ideal of an economic technology, but they don’t yet have a theory good enough to understand and evaluate the causal structure of the system of interest.
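
    For readers curious what such an econometric test even looks like in practice, here is a minimal sketch (mine, not Hoover’s framework and not anything argued in the thread) of a Granger-causality test on synthetic money and price series, using statsmodels. Granger causality is purely predictive, which is part of why it falls short of licensing the interventions policy-makers actually care about; the series here are made up, and real work would use properly differenced or detrended data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 400
money = np.zeros(n)
prices = np.zeros(n)
for t in range(2, n):
    money[t] = 0.5 * money[t - 1] + rng.normal(scale=1.0)
    # A synthetic world in which money growth feeds into prices with a two-period lag.
    prices[t] = 0.4 * prices[t - 1] + 0.3 * money[t - 2] + rng.normal(scale=1.0)

df = pd.DataFrame({"prices": prices, "money": money})
# Tests whether lags of the second column ("money") improve prediction of the first ("prices").
results = grangercausalitytests(df[["prices", "money"]], maxlag=4)
```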

    So that was what was on my mind when I started this latest thread; not Deacon as such, though his relevance won’t escape you. I was looking for some thinking help, and I got it, and I’m grateful.

  38. Aesthetic criteria have had enormous power well beyond the obvious cases of Einstein and Dirac. But you do scope in terms of the amount of evidence accounted for at a moment when battles are raging about information measures and their scope, especially in the contexts of evolutionary biology and cosmology. Are informational entropies additive? Everybody seems to forget that entropies are ratios; even with proportionality constants slipped in, they’re still ratios, and that’s important when you cross boundary conditions. Distribution gets us to structure and framework, which is right in line with Cartwright and others. But, given structure and framework, Occam’s razor becomes highly contextualized (though far from arbitrary). Einstein warned about that. I think about the struggles that are going on about computational depth.

    Agree. Absolutely. The saving grace of the scheme is that, in the context supplied by Chomsky’s black box account of scientific method, it’s heuristics all the way down and local complications built into different versions of the black box can be dealt with locally.

    the consensus that seems to be emerging is that all the conceptions of causality and tests for causality that have been concocted over the last decades — probabilistic, Bayesian, Markovian, regression rules, what have you — are locally sound research strategies, but neither any of them singly nor the whole set adds up to a global theory of causality. The consensus has an amazing range of members of various persuasions. A corollary consensus is that for the sciences that’s not a terribly serious problem, but a way of life; a big tool kit.

    Count me a member of the consensus. To me a primary influence has been general systems theory and the observation that, if we imagine reality as a space, simple mechanical models work well in one region, statistical models work well in other regions, and most of reality remains unaccounted for except by narrative. Algorithmic and computational modeling approaches have captured a bit more space in which mathematics seems to apply, but a lot remains where storytelling (art? poetry?) remains the primary way in which we human beings get our heads around what is going on.

  39. In all the stuff I’ve been plowing through, the consensus that seems to be emerging is that all the conceptions of causality and tests for causality that have been concocted over the last decades — probabilistic, Bayesian, Markovian, regression rules, what have you — are locally sound research strategies, but neither any of them singly nor the whole set adds up to a global theory of causality. The consensus has an amazing range of members of various persuasions. A corollary consensus is that for the sciences that’s not a terribly serious problem, but a way of life; a big tool kit. I think that’s really important for people interested in complex systems, for the tool kit is available for research strategies there too.

    Dyke the Elder

    I like this statement as a touchstone. It accepts the utility of a few “tools” among the many available and the many more to come; a discriminating meta-process is thereby conceived, in which tools are chosen based on the specific task at hand. And it is with this self-awareness of the limits of one’s model, and of how one’s tools fit into that model, that complexity is not just talked about but practiced.

    Case in point: final cause. Based on conversations and perhaps IN, Deacon is skeptical of this quadrant of Aristotelian causality, I think because it begs the question. In other words, it posits a causal agency, i.e. intentionality, that is the very agency in question. It is not a good model or conceptual tool to start with, because it already presupposes highly organized intentional agency, which essentially devolves into a first cause problem.

    I suggest that if we want to salvage any tools from Aristotle et al., the formal cause is the conceptual tool with more utility both in discussing and especially in practicing complexity, because herein lies the potential for constraint to evolve, a process of formal inter-distinctions, which is central to Deacon’s nested conception of dynamical depth: i.e. homeo-, morpho-, and teleodynamics.

    I just wanted to make sure formal causality was still on the table 8) Great thread.

    Cheers,
    Josh

  40. Right. You’ve just done for formal cause what I tried to do for material cause. I’ll bet there are some traditional differences in temperament and conviction lurking around. The current natural home for those of the formal cause persuasion right now is computer modeling — GA people are almost always at least closet Platonists. Those of us who are glad to be (almost) finally rid of the platonism of Russell and positivism wince to see it re-emerge in the modeling literature. But the reply to us is suspiciously irrefutable. “Well, if you can’t stand formalism to the extent that it’s absolutely essential to the modeling in the first place, then either take a hike or stop whining or both.” That’s the sort of reason why a lot of us think it’s a good time to push pluralism.
    Final cause is pretty hard to get rid of in the policy sciences. The fact that you can move it toward formal cause, and I can move it toward material cause is one of the reasons that I keep playing word games with pseudohegel and whatsisname who followed on after him. Just as a reminder of the swamp we could be wandering into.
    chuck

  41. I’m curious where you think the closet Platonism comes from. Is it a naive thing (i.e., “here is the model, here is the thing itself – thus the model has a sort of abstract existence”)? I think very much in terms of models (and I’ve done some GA stuff), but Platonism seems weird to me, seeing as how it highlights that models have substrates.

    I like the idea of formal cause, but (unlike final cause), it always seemed to me a shorthand for an aggregate of local causes.

  42. The current natural home for those of the formal cause persuasion right now is computer modeling — GA people are almost always at least closet Platonists. Those of us who are glad to be (almost) finally rid of the platonism of Russell and positivism wince to see it re-emerge in the modeling literature.

    Citations, please. I ask because it is hard to imagine modelers like Scott Page at Michigan as Platonists. Near the start of his online course on models, Page quotes statistician George Box: “Essentially, all models are wrong, but some are useful.” The central thrust of the course is on models as aids to thinking clearly about what could be. There is no claim that they represent Truth.

  43. Yes, I should clarify that I intend formal causality in a way as far as possible from Plato’s Ideal timeless forms. From my limited understanding of such philosophical matters, I’m more inspired by Aristotle’s ideas on how potentiality and actuality evolve through time to manifest the simultaneous co-relation between matter and form.

    In this way, the GA folks are forever removed from ‘actuality’ because their models are restricted to discrete and sequential causality due to the intrinsic nature of Turing computation. As such, there is no potential for “aggregates of local causes” in the same way they occur in the actual systems such models hope to emulate – the critical question needing an answer before living and intentional dynamics can possibly be understood: how exactly are “aggregates of local causes” implemented in physical actuality?

    The old model is sequential, parallel, and discrete Functionalism. I believe the improved model will take the simultaneity of matter and form’s co-evolution seriously enough to challenge the very metaphors and tools that actively thwart such progress; this because current tools, and the metaphors/models they are built upon, categorically deny the simultaneity that pervades every physical interaction in the universe – for example, the symphony example from above, all molecular interactions via EM/strong/weak forces, and all planetary interactions via gravity. All of these physical phenomena are simultaneously matter and form, each distinct arbitrary ‘body’ both creating and being created by a shared and superimposed terrain of forces.

  44. We may need a shrink to help us with this one — or an intellectual historian, if we can find one. Beyond that, I have a long song and dance, a theme with oodles of variations, that I keep trotting out. The way I did it in the HOW NATURE SPEAKS volume was to contrast monotheistic science with my preferred polytheistic science, then remind people that there are no gods. But dismissal doesn’t stick, especially in philosophy departments, and the one-liners merely gesture at what I think is something with enormous cultural entrenchment of great depth and breadth. So I call it (with Dewey) the quest for certainty, or call it the axiom of self-flattery, or recycle the old trope that you can take the boy out of the altar, but you can’t take the altar out of the boy, or go back where this whole thread started and let Tuco Ramirez contrast theologians with teachers as an invitation to pluralism, or whatever, then with my own situated stance acknowledged, get on with another part of the job. Barbara Herrnstein Smith is more sensitively productive about these things than I am.

    McCreery’s right again. The traditions that have debts to Simon or Dick Levins, for instance, are strongly anti-platonist. Cartwright belongs there, as does Sandra Mitchell, hence Page. Ditto for Prigogine, Stengers, et al. Stengers’ critique of the holy Hamiltonian is terrific. It’s in cosmology that metaphysics starts to boogie. Brian Greene is head platonist there. Check out the issue of irreversibility, or the issue of the low entropy big bang.
    But now I think that we shouldn’t be thinking about what I think, but about what Josh just said. That’s a more promising step forward — and why I’m, as I said, reading Richard Watson. Josh even manages a plausible anti-platonist reading of Chomsky. Who’d have thought, given the theory of language?

  45. Back to the question about causes and explanations that Dyke the Elder introduced a while back on this thread. I’d think that the cause of something refers to the thing itself, whereas the explanation of something has to do with people’s understanding of the thing. This distinction seems integral to any sort of realism, doesn’t it? I.e., there must have been causes for the emergence of life on earth that preceded anyone’s attempts to understand those causes. I can also see how tools useful for scientists (and poets too, I suppose) attempting to arrive at an understanding of the thing would be different from the tools useful for teachers attempting to explain this understanding to others.

  46. John – For me, there’s no real way to separate the understanding from the thing itself. So although I am a realist in the sense that I don’t doubt that things do what they do with or without us, I don’t think any element in our conceptual models can be considered a “pure” or “direct” reference to something in the world.

    So I guess if I were to translate what you’re saying into “Asher language”, I’d say that explanations are models and causes are elements of models. Philosophers do weird things with this kind of view, saying that causes are “always already present” in models if models are to be considered “causal accounts”. I get the feeling that kind of thinking is a flybottle.

    Josh – I wonder if the “discrete and sequential” limitation is what made group selection such a contentious subject in evolutionary science in general. Your remarks about simultaneity seem exactly right to me. It puts me in mind of the idea of additive synthesis, and the idea that information must be sacrificed to do a transform like a frequency analysis.
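
    On the frequency-analysis point, here is a small sketch (mine, not from the thread) of the information that gets traded away: a short analysis window localizes an event in time but smears its frequency, while a long window sharpens frequency and loses the “when.” The signal and window lengths are illustrative.

```python
import numpy as np

fs = 1000.0                           # sample rate, Hz
t = np.arange(0, 4.0, 1 / fs)
# A 440 Hz tone that switches to 660 Hz at t = 2 s.
signal = np.where(t < 2.0, np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 660 * t))

for win_len in (64, 2048):            # short window vs long window, in samples
    segment = signal[:win_len] * np.hanning(win_len)
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(win_len, 1 / fs)
    peak = freqs[np.argmax(spectrum)]
    print(f"window of {win_len:4d} samples: spans {win_len / fs:.3f} s of signal, "
          f"frequency-bin width {fs / win_len:.2f} Hz, spectral peak near {peak:.1f} Hz")
```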

  47. “there’s no real way to separate the understanding from the thing itself.”

    I agree that an understanding cannot be separated from the thing, mostly because understanding is *about* the thing. But I presume that the thing can be separated from the understanding. Other galaxies long existed independently of anyone’s attempts to understand them, whereas human attempts to understand galaxies depend on the existence of the galaxies. It’s an asymmetrical relationship.

    I agree that a causal model or causal theory or causal explanation is not the cause itself, just as my description of the lanky friendly black dog at the car repair shop isn’t the same as the dog itself. I also agree that no description is pure. Still, I presume that some descriptions are better than others, and that criteria can be established for evaluating the relative goodness of explanations without concern for standards of perfection. If, e.g., I described the dog at the car shop as lanky, friendly, and brown-and-white spotted, it would be a poorer description than the prior description. The evaluation of a description’s relative goodness is based on comparing it with the thing that the description is meant to describe. So too with causal theories. My car wouldn’t start this morning. Is this failure to start caused by a dead battery, a corroded cable, or some other electrical malfunction? The relative accuracy of these alternative causal explanations must be evaluated relative to the car itself.

  48. “But I presume that the thing can be separated from the understanding.”

    When you say that, my tacit assumption is that “be separated” is a mental process. In my opinion, it can’t be mentally separated — there’s no meaningful distinction between saying “I’m talking about a real cause in the world” and “I’m making use of a concept of causation that has some relationship to something in the real world”. Our language is very clunky for expressing the relationships between concepts and the things they’re “about”, so it’s hard to even express it clearly (and seems to have led to a lot of philosophical weirdness).

    “Still, I presume that some descriptions are better than others, and that criteria can be established for evaluating the relative goodness of explanations without concern for standards of perfection”

    I agree completely. In fact, I have a feeling I’m misunderstanding the original thing you said: “I’d think that the cause of something refers to the thing itself, whereas the explanation of something has to do with people’s understanding of the thing.”. For me, the concept of a particular cause is something that refers to something in the world — so maybe what I’m not understanding is the second part.

  49. Right, maybe I’m not clear enough yet. “the concept of a particular cause is something that refers to something in the world” — agreed. But all explanations, including causal ones, are inextricably embedded in concepts and statements, which implicate minds and languages. I’m trying to distinguish between the world and concepts/statements about the world, between the dog and concepts/statements about the dog, between the cause of my car’s electrical problem and concepts/statements about that cause. The world can exist independently of any concepts/statements made about it. E.g., some sort of ancient tectonic activity caused what we now call the Rocky Mountains to push up from the sea. The cause of the Rocky Mountains doesn’t need to be separated mentally from explanations of that cause, since the cause took place before any minds existed that could propose explanations of that cause.

  50. “The cause of the Rocky Mountains doesn’t need to be separated mentally from explanations of that cause, since the cause took place before any minds existed that could propose explanations of that cause.”

    I’m not tracking the point – what work does this distinction do? What happened with the Rockies is a historical matter. Conceptualizing what happened in terms of cause and explanation is something we do today. This seems to me to be the sort of fact we could easily and without consequence treat as trivially true.

  51. I don’t know, Asher: it seems like I’m gesturing, as they say, toward definitions of cause and explanation. However, since in your last comment you distinguish between “you” and “our,” I infer that I’ve strayed from the consensual understanding of the original question, which focuses on whether a conceptual/verbal explanation can be adequate without its including an explanation of causality.

    “Aren’t concepts and explanations things in the world too?”

    No doubt, but the distinction still holds surely? Psychologists study concepts as subject matter; they propose concepts about the human ability to formulate concepts.

    “what work does this distinction do?”

    What work does the distinction do between the Rocky Mountains and my understanding of the Rocky Mountains? Really?

  52. Aren’t concepts and explanations things in the world too?

    Yes. In fact, I think this is the key idea. Not only does it underlie Deacon’s project (he only departs from it once in Incomplete Nature that I’ve found), I think it provides a better way to consider our “access” to the world.

    It’s also the lens through which I look at statements like John’s. The problem is that it tends to add linguistic layers.

  53. Yes I know I should go pick up the car from the shop, cook dinner, do anything other than belaboring my off-topic point hoping against hope that someone will call me a clever lad so I can beam with pride while enjoying my pre-dinner cocktail. But looking back even earlier in history to figure out where I lost the thread so to speak I find DtE’s comment of 14 June:

    “The language of causality is “all” metaphor. Asher is right on track with that. The metaphors fester within dreams of control and management: intervention possibilities; engineering prosperity. That’s a necessary part of the long story of why the hard sciences don’t talk much about causality. Economic theory can generate, and has generated umptynine rigorous mathematical representations of causality, each one reproducing exactly the same difficulties as the last. The debates within the field yield brief illusions of enlightenment; but the illusions die with the next journal contribution.”

    DtE makes it sound as though the “language of causality” is only about controlling the world and not at all about explaining the world, as if causality is “all” metaphor and no description, as if no causal representation is better than any other, as if practitioners of the hard sciences have abandoned causality in their work, as if exploring causality offers only illusion and no enlightenment. Maybe it’s because I fear this assault on the “moribund traditions” of yore that I’ve been belaboring the distinction between cause in the world and explanation of cause, between cause and the “language of causality.”

  54. Sometimes squirrels feel like a nut; sometimes they don’t. But when they do, they operate uncomfortably close to the boundary between ententionality and intentionality to find one. Within an evolutionary picture, that uncomfortable closeness is about what you’d expect. In a world where a dominant intellectual tradition has, for millennia, set human cognition apart from the intelligent interactions with nuts and such by other species, it takes an enormous amount of untangling to get accounts of intelligent interactions that stand up to scrutiny. It makes it all the worse that, as Nietzsche, and Deacon in the last chapter of SYMBOLIC SPECIES, note, in order to be intelligent the way we are, we have to be able to lie, even to ourselves. No wonder IN is tortured; no wonder this thread is so snarled.
    Be careful: in this tangle, NOTHING is ALL about anything.
    The monetarists want money to cause price. Money? What in the world is that? What are the terms of its existence? Whatsamajingle certainly worried about that; and it’s the difference between mountains and money that Carl points us to. THAT there’s a difference is indeed trivial. The ins and outs of the difference aren’t. That’s where the work has to be done.

  55. I’m not certain of the authorial intent or entent of your last comment, DtE, so I’ll free associate a response. Sure, empirical science has been subjected to a hermeneutic of suspicion for some time. The practitioners of predictive science, those champions of rational-empirical indifference to personal interest in the pursuit of knowledge, may harbor unconscious desires to control the world while simultaneously and unwittingly being controlled by those who would use their discoveries for financial, political, and military gain. Most often this charge against “big science” has been leveled by the left, though of course in the recent dispute about global warming the shoe is on the other foot, with the right accusing climatologists of running a vast left-wing conspiracy that overrides any possible “evidence” they might bring forward about the causal relationship between fossil fuel use and temperature trends. Similarly, those who contend that the cause-effect relationships of monetary policy are too uncertain for it to be enacted can be accused of refusing to wield power contravening the status-quo concentration of money in the hands of the global .001% and the de facto monetary policies enabling even greater concentration.

    Surely it’s a good idea to subject scientific action, and inaction, to a scrutiny of motivations and unintended but predictable consequences. Scientists continually evaluate their methods and instruments and theories; so too should they subject their practice to psychological, sociological, political, and economic scrutiny. It’s not unlike psychologists and other cognitive scientists trying to understand the blind spots of folk psychology, heuristics and biases, and so on. By understanding the systematic distortions of intellectual work it becomes possible, if not to control and eliminate them, then at least to compensate for them.

  56. “A huge avowal of faith.”

    I deleted an incremental-improvement clause in that last sentence, figuring that it was an implicit caveat for an Enlightenment-influenced cat like myself. Maybe not. I’d call it an ongoing struggle rather than an avowal of faith.

  57. Pingback: “The Current State of Play” | Dead Voles

  58. Pingback: Why I hate David Foster Wallace and all he stands for | Dead Voles
