Essentialism and Models

  • #248460

    I read with great interest the article “Essentialism and Traditionalism in Academic Research” by Ryan Kyger and Blair Fix. I agree with Kyger and Fix that philosophy (particularly as ontology and epistemology) is of crucial importance when we consider science and mathematics. We can make serious mistakes if we apply scientific and mathematical models blindly and simplistically to the myriad “wicked” or complex/iterative/reflexive problems we face in dealing with what we call “external reality”, meaning reality external to the brain or mind. With the wrong methods and applications, science morphs into pseudo-science quicker than a person can say “classical economics”. Category mistakes in empirical ontology (specifically, mistakes involving real-system and formal-system ontologies) are at the root of these problems. Kyger and Fix certainly hit the nail on the head with their critique of essentialism.

    We should not falsely idealise a model by viewing it as containing some essential form or essence of reality. That was Plato’s mistake, as Kyger and Fix point out. The essential form and “essence” of reality is the entirety of reality itself, not any humanly generated sub-set idea, model or equation of it. Essentialism as a theory embodies a reversal of the observed principle of causation, or at least a reversal of the “arrow of time” if one prefers to conceptualize it in that manner. The evolution and emergence of the cosmos and then of life (so far as we can observe these things and the traces or information remaining from past events) clearly indicate the sequence: early cosmos -> “current” cosmos (remembering relativity concerns about simultaneity) -> earth -> humans -> human ideational models -> advanced scientific and mathematical models. The monist whole of the cosmos is generating the “forms” (the formal model systems) through the interactions of “native” modelling systems in the human central nervous system with the external environment. The seeming purity and essential character of advanced scientific and mathematical models is very much a result of the unavoidable homomorphic principles of modelling, and thus of all human modelling that is accurate, or accurate enough, for empirical and practical purposes. “Homomorphic principles” will be explained below.

    Modelling is the only way the human mind understands or misunderstands reality external to the mind. This may seem a big claim, but I think it ought to be considered straightforward and uncontroversial once we look at human physiology and neurology. Our vision system provides the most easily understood example.

    We can begin this analysis with a consideration of our naïve reification (concretisation) of our pre-rational and rational mental models. This reification is the mistake of taking an abstract model of reality for reality itself. Our pre-consciously generated models, as brain-internal models of what we perceive with our senses, and our subsequent unconscious and conscious ideational elaborations upon them, perforce generate a naïve misunderstanding of their own processes. My view of a room is constructed in my brain via a process commencing with photons being received by the retina. From there, information is transmitted to the brain via the optic nerve, in the form of electrical signals. The information or data thus transmitted is processed by the visual cortex to construct a virtual model of the room in my brain. It is this model which is finally apprehended by the conscious mind, after it is transmitted from the pre-conscious nervous system to the conscious nervous system. External reality is truly there (as this treatise certainly holds), but what I perceive experientially is a reconstructed view in my brain. My brain models the room.
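
    To make the staged, lossy character of this pipeline concrete, here is a toy sketch of my own (purely illustrative; the stage names are stand-ins, not claims about neural coding). Each stage is a function that discards information, and what arrives at the end is a small constructed model, never the scene itself.

    ```python
    # Toy illustration: perception as a composition of lossy stages.
    # Each stage discards information; the final "model" is not the scene.

    scene = list(range(1000))  # stand-in for the full external scene

    def retina(scene):
        # Sensors sample only part of the input (limited sensing ranges).
        return scene[::10]

    def optic_nerve(signal):
        # Transmission compresses further: here, crude quantisation.
        return [round(x, -1) for x in signal]

    def visual_cortex(signal):
        # Construction: a small summary "model" is built from the signal.
        return {"min": min(signal), "max": max(signal), "n": len(signal)}

    model = visual_cortex(optic_nerve(retina(scene)))
    print(model)  # {'min': 0, 'max': 990, 'n': 100}
    # Usable ("good enough"), yet vastly simpler than the scene itself.
    ```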

    Naïvely, we are unaware that a senses-based mental model is a model, and we take its picture, constructed in the brain from sense experience as qualia or data, to be simply “what is really there”. This mental model then becomes normalised in the mind and is reified as identical to the external reality outside the mind. The visual model, rendered close to identical to the room in surface appearances at least, is then superimposed, as it were, over the actual room, in the virtual-modelling, heads-up-display sense, accurately enough for practical purposes like moving about and locating objects. We experience our internal picture or virtual model of real externality as the real thing, which in survival and evolutionary terms is perfectly apt and functional when the picture is good enough for those purposes. If this system did not work, complex organisms with complex senses and brain-internal modelling capabilities could not have evolved and survived.

    It tends to be forgotten, however, or not apprehended at all, that coalesced or constructed perception in the brain remains nonetheless a mental model which is vastly different from, and far simpler than, the full array of external reality. We routinely and carelessly treat our mental models as wholly real and fully representational. We forget that our senses have limited sensing ranges, that illusions and mistakes are possible, and that our brain-internal modelling can be biased and misleading. There is much more of reality, and many more aspects to reality, than those aspects which we sense and then apprehend, essentially as simulacra, via brain-internal modelling. Also, many of our more elaborated ideational models about the world are either personal illusions or socially shared myths which simplify and distort reality. It is common to all modelling processes that they abstract from reality and simplify in the process, thus introducing initial distortions by omission and simplification. Distortions may also occur due to the limitations of the senses and alterations introduced by brain processing: by information recombination, selective foregrounding, selective backgrounding and other processes.

    We can continue, I argue, in deducing that all successful interaction of humans with their environment, including the social environment, involves some “accurate-enough” modelling, specifically homomorphic modelling. This extends from sense modelling to ideational modelling at all levels, from simple language modelling to scientific and mathematical models. I could continue with a discussion of the central and indispensable nature of what I term homomorphic modelling. In mathematics, a homomorphism is a structure-preserving map: it transforms the elements of one set into another while preserving the relationships between the elements of both sets. Thus, I will clearly be talking about models as structure-preserving and process-preserving ideational constructs, sometimes fully formalised, which at the same time unavoidably abstract only selected elements from external reality, which simplify real elements, structures and processes in the modelling process, and which yet retain or preserve some “essential” or fundamental structure and process “maps”, homomorphically.
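
    For concreteness, the textbook example of such a structure-preserving map is the exponential function, which carries the additive structure of the real numbers onto the multiplicative structure of the positive reals. A minimal check (my illustration, not part of the original argument):

    ```python
    import math

    # A homomorphism is a structure-preserving map: it carries elements of
    # one set to another while preserving the relationships between them.
    # Classic example: exp maps the additive structure of the reals onto
    # the multiplicative structure of the positive reals.

    def phi(x):
        return math.exp(x)

    a, b = 1.7, 2.3
    # Structure preserved: phi(a + b) == phi(a) * phi(b)
    assert math.isclose(phi(a + b), phi(a) * phi(b))

    # The map preserves the "shape" of the relations, not the things
    # themselves -- which is the sense of "homomorphic modelling" above.
    ```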

    However, I don’t propose to continue if I am not evoking any interest in this line of enquiry. I do think it’s relevant to CasP, but I certainly cannot demonstrate that yet. There are lines of enquiry relevant to discussing real systems, formal systems and their interactions at the ontological level. However, if this theory is nonsense, inapplicable or wholly derivative, please tell me. I don’t want to waste anybody’s time or any “column-inches” in this forum.

    • This topic was modified 1 year, 6 months ago by Rowan Pryor.
    • #248463

      Hi Rowan

      I have been reading a fair amount of philosophical work recently, and I have found a lot of clarity in the work of Paul Cilliers, who has focused on philosophy and ethics within complex systems theory.

      One aspect that I have found very informative is that we cannot ever fully understand any complex system. The very act of modelling or framing necessarily excludes some information from consideration, and thus renders the models inaccurate. Knowing this means we can work from the premise that our models are incomplete, and thus are at best indicative of a “snapshot” of a portion of a given complex system that should rightly be viewed as a process. If we then take regular “snapshots” we can begin to see vectors of change, and we can begin to better understand the boundaries of complex systems by noting the limits of the understanding provided by our models. But we should not take our models to be sacrosanct truth, ever.
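
      A toy sketch of the “snapshot” idea as I read it (my own illustrative framing, with made-up variable names, not Cilliers’ formulation): each observation captures only a few variables of an evolving system, yet differencing successive snapshots still yields usable vectors of change for what we do see.

      ```python
      # Toy sketch: partial "snapshots" of a process, differenced to yield
      # vectors of change. Variables are illustrative stand-ins only.

      snapshots = [
          {"t": 0, "price": 100.0, "output": 50.0},  # each snapshot omits
          {"t": 1, "price": 104.0, "output": 49.0},  # most of the system:
          {"t": 2, "price": 110.0, "output": 47.5},  # it is a partial model
      ]

      def change_vector(s0, s1):
          # Rate of change per unit time for the variables we happen to see.
          dt = s1["t"] - s0["t"]
          return {k: (s1[k] - s0[k]) / dt for k in s0 if k != "t"}

      for earlier, later in zip(snapshots, snapshots[1:]):
          print(later["t"], change_vector(earlier, later))
      # 1 {'price': 4.0, 'output': -1.0}
      # 2 {'price': 6.0, 'output': -1.5}
      # The vectors are informative, yet everything not captured in a
      # snapshot stays invisible: the model remains incomplete.
      ```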

      Related, but tangential: many years ago I read Science and Sanity, by Alfred Korzybski, and his study of general semantics has informed my worldview tremendously. He contended that our perception of reality is fundamentally removed from reality, and that as a species we are limited to abstractly approximating reality through our descriptions and analyses. The best we can actually do is to center our analytical frameworks on abstractions that are as close to reality as possible (first-order abstractions) rather than building higher-order abstractions upon other higher-order abstractions. Higher-order abstractions are unavoidable, but we should remain cognizant of their distance from reality nevertheless.

      This might seem a little tangential to the point you were originally making, but it is the contribution I am able to make at this point.

      • This reply was modified 1 year, 6 months ago by Pieter de Beer. Reason: clarified a few details and corrected grammatical/spelling errors
      • #248465

        Pieter,

        Thanks, I think your comments are related for sure. The clear agreement is that all models are necessarily incomplete. Paul Cilliers (whom I have not read, I admit) develops his position empirically. He observes that “the very act of modelling or framing, necessarily excludes some information from consideration, and thus renders the models inaccurate (or incomplete)”. Knowing this from a set of observations enables him to set up his premise, by simple philosophical induction, that all of our models of complex systems must be incomplete.

        I’ve had the same experience(s) in virtual modelling in computer games. I fan-modified existing games rather than creating new games. In trying to model real systems virtually (within a game engine cruder than a proper Newtonian physics engine), I rapidly became aware of modelling problems per se, especially the issues of scaling. Physical re-scaling, in the x, y and z axes, for realism simulation is trivial (in game design terms, not necessarily in coding or graphics terms), but the time re-scaling of disparate processes, for a combination of realism simulation and “playability”, is very much a non-trivial exercise in game design terms. One rapidly becomes aware of how much one is abstracting and simplifying from reality to make the game code-able, processable and “playable” by humans for enjoyment.

        One also becomes aware that time is “different”. It can’t be proportionally scaled down from physical reality in the way the space dimensions can, or the results look impossible or ludicrous: like a scaled-down human figure scurrying like an ant and insta-stopping with minimal mass and momentum. A proper Newtonian physics engine requires you to plug in the mass.

        One also becomes aware that human minds actually require models (in games and in narratives) which contain “objects” recognizable as agents or actors, and process-causing forces, for these models to be recognizable, navigable, operable, understandable and emotionally compelling. That’s pretty much the most general theory of aesthetics for arts, entertainments and games, excluding the niche of abstract art. At least, I think so.
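
        The time-scaling point above can be made precise with elementary mechanics (a worked example of my own, not from the original discussion): under gravity, shrinking all lengths by a factor k shrinks the dynamically “correct” timescale only by sqrt(k), which is why scaling time in simple proportion to space makes a miniature figure move impossibly fast.

        ```python
        import math

        G = 9.81  # gravitational acceleration, m/s^2

        def fall_time(height_m):
            # Free-fall time from rest over a given height: t = sqrt(2h/g).
            return math.sqrt(2 * height_m / G)

        full_scale = 10.0  # a 10 m drop at full scale
        k = 0.1            # shrink all lengths to 1/10 scale

        t_full = fall_time(full_scale)
        t_model = fall_time(full_scale * k)

        print(f"full-scale fall:  {t_full:.2f} s")   # ~1.43 s
        print(f"1/10-scale fall:  {t_model:.2f} s")  # ~0.45 s, not 0.14 s

        # Time scales by sqrt(k), not by k: naive proportional time-scaling
        # makes miniature motion look impossibly quick and "insta-stopping".
        assert math.isclose(t_model, t_full * math.sqrt(k))
        ```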

        After such experiences, and having read Berkeley and Hume, I shifted my interest to ontology and to a theory of sensory models and ideational models as comprising the whole of the human epistemological “project”, by comprising the whole of human qualia and ideation in relation to apprehended or posited objective reality. Making an assumption of monist materialism (as so-called priority or historical monism), it can be deduced that a cosmos exhibiting evolution and emergence has itself generated human sensory and ideational systems, including human-made models, in the process of its own cosmos-entire emergent development. Simple enough. Hence, by definition, humanly created models and formal systems are sub-systems of the larger cosmos system and so are necessarily incomplete. In this case, these deductions from a plausible premise, one with scientific truth warrants, accord with the inductions from empirical instances. It seems neat and elegant to me. “Too neat” may be the judgement of others.

        One of the conclusions I reached in this chain of deductions is that formal systems are necessarily a special sub-set of real systems. This may sound nonsensical or paradoxical to some, but I think it holds, and I think I can demonstrate at a general level how it might hold. I am trying to pique interest here, obviously. I am well enough aware of the dangers of being a crank, so I advance these rather odd ideas with some trepidation. Blair Fix has written a light-hearted essay on the topic of being a crank and it contains some excellent advice. I do reflect on it.

        • This reply was modified 1 year, 6 months ago by Rowan Pryor.