Bounded Rationality: Philosophy and Cognitive Science

Aristotle said that man is a rational animal. Bertrand Russell added, “Throughout a long life I have searched diligently for evidence in favour of this statement. So far, I have not had the good fortune to come across it.” Traditional economic theory is built on the idea of homo economicus: rational agents attempting to maximize their utility. But, as Russell points out, man is not that rational. The notion of bounded rationality better describes the way we think and act. What is the philosophy of rationality and of deviations from it? How does cognitive science describe bounded rationality? What are its pros and cons compared with rational choice theory? How do we use it? And how can we construct descriptive models of such systems?

K. I. Manktelow (2004) says that rationality is concerned with two things: what is true and what to do. For our beliefs to be rational, they must be in agreement with evidence (what cognitive scientists call epistemic rationality), and for our actions to be rational, they must be conducive to obtaining our goals (instrumental rationality, i.e. adopting appropriate goals and behaving in a manner that optimizes one’s ability to achieve them (Stanovich, 2009)). If this defines rationality, then not being completely rational could be described as behaviour resulting from not completely knowing what is true, i.e. beliefs not in agreement with the evidence, or from not completely knowing what to do, i.e. not adopting appropriate goals or not optimizing one’s ability to achieve them.

Bounded rationality picks up on these ideas of not being completely rational. Our decisions and their rationality are bounded by the constraints of limited available information, limited time to decide, and the limited cognitive abilities of the mind. H. Simon (1972), in his ‘Theories of Bounded Rationality’, describes reconstructing the classical theory of the firm’s utility-maximizing decision making by incorporating risk and uncertainty, incomplete information about alternatives, and the complexity of calculating the best course of action. Functioning in such a domain, decision makers are viewed as ‘satisficers’ – agents who seek a satisfactory, not necessarily optimal, solution owing to a lack of resources and ability.

Limited rationality resulting in satisficing is considered sub-optimal decision making. An alternative way of defining satisficing is as an optimization in which ‘all’ costs are covered, i.e. joint optimization of the goal, the cost of obtaining the necessary information and the cost of calculation. So, with respect to the main objective of goal optimization alone, satisficing can be sub-optimal. The satisficing problem can also be thought of as constraint satisfaction and formulated as optimization subject to the satisficing requirements of an objective function. J. Odhnoff (1965) says the difference between optimizing and satisficing is often referred to as a difference in the quality of a certain choice.
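
To make the contrast concrete, here is a minimal sketch – mine, not Simon’s or Odhnoff’s – of an optimizer that evaluates every option against a satisficer that stops at the first option meeting an aspiration level; the evaluation count stands in for the cost of information and calculation (all names are illustrative):

```python
import random

# Hypothetical utility calculation; in real problems this is costly.
def evaluate(option):
    return option  # pretend the option's value is its utility

def optimize(options):
    """Examine every option and return the best one."""
    best = max(options, key=evaluate)
    return best, len(options)  # optimal, but every option evaluated

def satisfice(options, aspiration):
    """Return the first option meeting the aspiration level."""
    for evaluations, option in enumerate(options, start=1):
        if evaluate(option) >= aspiration:
            return option, evaluations  # 'good enough', found cheaply
    return None, len(options)  # nothing satisfactory was found

options = [random.random() for _ in range(1000)]
print(optimize(options))        # e.g. (0.9993..., 1000)
print(satisfice(options, 0.9))  # e.g. (0.94..., 12): far fewer evaluations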

In the utility framework, bounded rationality can be explained by the notion of epsilon-optimization: agents choose actions that get them close to the goal, i.e. actions whose pay-off U(s) is within ε of the optimum U*: U(s) ≥ U* − ε. In the case of strict rationality, ε = 0. Bounded rationality and sub-optimal decision making processes result in deviations (ε) from rational behavior. For example, the trade-off between computing speed and the accuracy of a result in numerical analysis is analogous to the trade-off struck by heuristics and cognitive biases in decision making.
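
As a small illustration of the idea (the utility table is hypothetical, not taken from any of the cited papers), the set of actions a boundedly rational agent may settle for grows with ε:

```python
# Epsilon-optimization: actions whose pay-off is within eps of the optimum.
U = {"walk": 4.0, "bus": 4.8, "taxi": 5.0}  # hypothetical pay-offs U(s)
u_star = max(U.values())                    # U*, the rational optimum

def epsilon_optimal(U, u_star, eps):
    """The set of actions s satisfying U(s) >= U* - eps."""
    return {s for s, u in U.items() if u >= u_star - eps}

print(epsilon_optimal(U, u_star, eps=0.0))  # {'taxi'}: strict rationality
print(epsilon_optimal(U, u_star, eps=0.5))  # {'taxi', 'bus'}: bounded
```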

The evolutionary psychology perspective is that limitations on rational choice can be explained as being rational in the context of survival. For example, if we think we see a predator, with limited time the more profitable decision is to fight or flee rather than wait to gather enough information to establish for sure that a predator is on our trail. Being more risk averse makes sense at a subsistence level. G. Miller (1956), in his very popular paper ‘The magical number seven, plus or minus two: Some limits on our capacity for processing information’, discusses limited cognitive capacity in relation to how the average human can hold 7 ± 2 objects in working memory. Making faster, though less accurate, decisions can be the difference between life and death. D. Kahneman (2003) describes a process of attribute substitution: when someone makes a judgment whose target is computationally complex, a more easily calculated heuristic attribute is substituted, often without conscious awareness.

Heuristics – rules of thumb, intuitive guesses, stereotyping, etc. – fall under this approach to problem solving: a practical methodology is employed which provides sufficient, not perfect, results, sometimes without our even knowing it. Bounded rationality substantially affects how we think and act, and this has its pros and cons. Marketing, for example, can be designed to make us buy more than we need, e.g. via the anchoring effect, where a first perception affects later perceptions and decisions. It can also be used to influence people into making better decisions for social good. Thaler and Sunstein (2008), in their book ‘Nudge’, describe many instances of choice architecture, for example placing food in such a manner that influences people to pick the healthier options. Behavioral economics thus offers much scope for marketing and administrative planning in influencing choices.

Moving on to descriptive models of bounded rationality: how does one model decision making processes? Cognitive science has long been concerned with providing formal, computational descriptions of various aspects of cognition. Could we then formulate a framework to begin to explain bounded rationality and its implications for decisions? Decision theory has traditionally been modeled with probability theory; cognitive scientists have used classical Bayesian probability and formal logic. In some cases, though, this classical approach does not hold. Proposed alternatives include quantum cognition, fuzzy logic, possibility theory, info-gap decision theory, etc.

For example, a fundamental consequence of Bayesian probability when applied to classical decision theory is the ‘sure thing principle’, which says that if you prefer A over B in state X and also in state X’, then you should prefer A over B when you do not know which state obtains. Tversky and Shafir (1992) tested this in a two-stage gambling experiment with an even chance to win 2 units or lose 1 unit. When players win the first round, a majority choose to play again; when players lose the first round, a majority also choose to play again; yet when players are not told the result of the first round, a majority decline the second gamble – violating the sure thing principle, according to which they should have played the second round either way. Because the law of total probability is violated, classical probability theory cannot be employed, but a quantum interference effect – context- and order-dependent, similar to the double-slit experiment – can explain it (Busemeyer & Bruza, 2012). In this vein, Aerts, Sozzo and Tapia (2012) say that formalisms using quantum concepts like superposition, interference, contextuality and incompatibility have been found useful for explaining decision making processes, paradoxical situations in behavioral economics and deviations from rationality.
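
A quick arithmetic check makes the violation explicit. The proportions below are the figures commonly reported for this experiment (treat them as illustrative): under the law of total probability, the acceptance rate in the unknown condition would have to be a mixture of the two conditional rates, and so would have to lie between them.

```python
# Checking the two-stage gamble against the law of total probability.
# Proportions are commonly reported figures for Tversky & Shafir (1992);
# treat them as illustrative.
p_win = 0.5                # even chance of winning the first gamble
p_play_given_win = 0.69    # share who accept the second gamble after a win
p_play_given_loss = 0.59   # share who accept after a loss
p_play_unknown = 0.36      # share who accept when the result is unknown

# Classical probability forces the 'unknown' rate to be this mixture:
p_classical = p_win * p_play_given_win + (1 - p_win) * p_play_given_loss
print(f"classical prediction: {p_classical:.2f}")  # 0.64

lo, hi = sorted([p_play_given_loss, p_play_given_win])
if not (lo <= p_play_unknown <= hi):
    print("law of total probability violated: no classical mixture fits")
```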

In conclusion, we can now establish that man isn’t as rational as Aristotle declared him to be. Limited resources and cognitive ability result in a ‘bounded rationality’. This bounded rationality manifests itself as deviations from rational behavior. It affects how we think and choose to act, and much seemingly ‘irrational behavior’ can be explained by it. Bounded rationality is an emerging field, with behavioral scientists trying to influence the choices we make, economists updating their theories to take seemingly irrational behavior into account, and computational cognitive scientists formulating frameworks to better understand decision making processes. Where are we headed? Answering that conclusively would probably not be completely rational, for want of sufficient time, information and cognitive ability!

Sources:

–  Manktelow, K. I. (2004). “Reasoning and rationality: The pure and the practical.” In Psychology of Reasoning: Theoretical and Historical Perspectives (pp. 157–177). Hove, England: Psychology Press.
–  Stanovich, K. E. (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven: Yale University Press.
–  Simon, H. A. (1972). “Theories of Bounded Rationality.” In Decision and Organization (Chapter 8). Amsterdam: North-Holland Publishing Company.
–  Odhnoff, J. (1965). “On the Techniques of Optimizing and Satisficing.” Swedish Journal of Economics, 67(1), 24–39.
–  Miller, G. (1956). “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.” Psychological Review, 63, 81–97.
–  Kahneman, D. (2003). “Maps of Bounded Rationality: Psychology for Behavioral Economics.” The American Economic Review, 93(5), 1449–1475.
–  Thaler, R. H. and Sunstein, C. R. (2008). Nudge: Improving Decisions about Health, Wealth and Happiness. Yale University Press.
–  Tversky, A. and Shafir, E. (1992). “The Disjunction Effect in Choice under Uncertainty.” Psychological Science.
–  Busemeyer, J. and Bruza, P. (2012). Quantum Models of Cognition and Decision. Cambridge: Cambridge University Press.
–  Aerts, D., Sozzo, S. and Tapia, J. (2012). “A Quantum Model for the Ellsberg and Machina Paradoxes.” Quantum Interaction 2012.

Note: The above was the term paper I submitted for the Philosophy and Cognitive Science course at YIF.

36 responses to “Bounded Rationality: Philosophy and Cognitive Science”

  1. I’ve never been a huge fan of the word ‘rationality’ to describe perfect decision making, as quite often what is really meant is ‘logical’. Although I strongly agree with the article that contexts such as survival and resource constraints can make a decision that is not perfect still the right one, I think it misses that the brain is a decision making system within a specific environment: that of the real world.

    For example, imagine you are a caveman hunter returning to your homely cave and you see 3 bears go into it. You hide behind a tree and 10 minutes later you see 2 bears come out. How many bears are in the cave?

    The obvious answer is of course 1, and most likely this number popped into your head subconsciously. We know that logically 1 is not a complete answer. In strictly logical senses the answer could be greater than 1 (there were already bears inside), or it could be 0 (the final bear died). But logical answers are not always helpful. Even if you were given near unlimited time to consider the answer it would not be helpful to think up every possible scenario and work out the probability.

    There could be a bear behind you right now about to bite your head off. But it doesn’t pay to check every 5 seconds. Therefore I would say that humans are very rarely logical, but quite often are rational.


    • Doesn’t that point to his second definition of rationality? Checking behind you every 5 seconds is not conducive to any goal and is thus irrational.


      • I think all the philosophy I’ve read on the topic of decision making uses ‘logical’ and ‘rational’ interchangeably. Indeed, in logic and predicate logic, the agent can be rational with both deductive and inductive decision making, i.e. their decision follows the rules of logic, or the (I don’t know quite what to call it) analogue for inductive reasoning.

        That being said, you were right to point out that time itself is always a variable in the overall rationality of a decision, and I think that is important, as time seems to be neglected when talking about decision making (except in neuroscience, and apparently among cognitive theorists).

        While not all moral decision making is done with inadequate time, much of it is. I would agree with a lot of what you said, but I must insist that if it is logical it is rational, and if it is rational it must be logical. Time is just another premise in an argument to action; it does not change the definition of rational.


        • I think for me the difference is resource based. I would confine ‘logical’ to the optimal next move, whilst I would base ‘rational’ on the best move given the processing capacity of the decision making engine.

          In that sense, from an evolutionary perspective, it makes sense at a species level to make low-resource guesses based upon the utility-probability of the situation. The survival benefit of a reduced decision making capacity (e.g. lower energy consumption of the brain) outweighs the survival benefit of the better decision in the long run (though maybe not for you at that moment).

          The crux of my argument is that this reduced capacity is outside of the individual’s control. Therefore although the decision might be illogical (i.e. not optimal) it is rational.

          The reason why I feel this is the case is that cognitive biases are so much more prevalent when you take decision makers outside of natural environments. Anchoring, base rate fallacy, sunk cost effect, and the rest are rarely an issue in real world situations, but I see them consistently crop up in abstract analysis. Information bias is a very good min/max search algorithm in natural contexts, but when trying to understand complex financial products to make a decision that could cost millions it is extremely misleading and dangerous.

          That is why I agree with the concept of bounded rationality; it is just that I think it is not useful to define anything illogical as irrational, because it does not leave room for separating ‘rational’ (best with what you have) from ‘irrational’ (truly dumb).


        • Personally I use ‘rational’ to mean ‘logical given some motivation’, such as self-interest, or the interest of your company/country/etc.

          So mathematical logic is a thing, whereas mathematical rationality is not. Similarly, rational behaviour is a thing, whereas logical behaviour isn’t meaningful: it’s logical given your own self-interest.

          Not a clear distinction really though, I admit. The two are often interchangeable.


          • One could make an argument that nothing is rational unless it is both consequentialist and utilitarian, if that is of interest.

            But no, I mean regarding philosophy – not just behavior or musings or decision action theory, but philosophy as the study (and I think we are in subreddit /philosophy) – which actually has a whole field devoted to logic. Logic 101 teaches you about valid, invalid, cogent, weak, and strong forms of argumentation, determined by whether the conclusion follows from the premises or fails to through any number of formal or informal fallacies.

            Intent is not something we even attempt to account for until, well… probably logic more advanced than I ever took… which wasn’t very advanced, I will admit. But yeah, no, it’s a real study in philosophy, and it disregards the agent and the agent’s intentions; it only regards the argument. So per PHILOSOPHY the study, I don’t know how much weight your distinction is going to hold. It’s not that philosophy isn’t flexible, it’s just that you would be using an informal fallacy to assert your distinction: that is an ad hominem.


            • Yes, I’m aware of the philosophical study of logic.

              I was with you up until

              | ‘It’s not that philosophy isn’t flexible, it’s just that you would be using an informal fallacy to assert your distinction: that is an ad hominem.’ |

              Which assertion? Who am I attacking with an ad-hom?


              • Maybe that is not the right fallacy, but it seems to me that your distinction between rational and logical involves the intent of the agent. It is my understanding that logic avoids the agent and any of their supposed intentions and focuses only on arguments themselves.

                To say something is only logical based on self-interest indicates an ad hominem circumstantial. Either what they did or said followed or it did not. Their motives are irrelevant.


                • | ‘your distinction between rational and logical involves the intent of the agent. It is my understanding that logic avoids the agent and any of their supposed intentions and focuses only on arguments themselves.’ |

                  Yes, exactly.

                  | ‘To say something is only logical based on self-interest indicates an ad hominem circumstantial.’ |

                  I didn’t say logic can only exist given a self-interest. I said that my distinction between logic and rationality is whether there’s a self-interest involved. (Again, though, I admit the distinction is a bit overwrought.)

                  | ‘Either what they did or said followed or it did not.’ |

                  …with regard to some set of motivations and values.

                  | ‘Their motives are irrelevant.’ |

                  What? How can motives be irrelevant when assessing the merits of actions?


                  • Ok, so, I remember logic to be something like this:

                    All men are mortal.
                    Socrates is a man.
                    Therefore, Socrates is mortal.

                    The arguments themselves are good or not. The premises are true or false. Suppose we take a premise with a motive:

                    Harry wants a fruit

                    Or maybe like

                    Sheila thinks it would be good to go to the store

                    Ok, I’m not sure what you have in mind here, because we would have to ask Harry and Sheila how they felt to determine whether or not the statement is true. Even then, Harry would just have a true statement and Sheila would have to list her reasons for thinking it is good to go to the store. She can reasonably argue that it is best to go to the store, but logic doesn’t care about what she wants to do. It only cares about whether it is reasonable or not to go to the store.

                    I’m wondering why you need a distinction between rational and logical anyway. I am ignorant of contemporary decision making theories, so maybe I am behind the times. Do you have some literature on this distinction, or on why an agent’s motives are important? Or can you at least lay it out for me?

                    I mean, even in neuroscience hasn’t it been shown that people believe they have decided on something about 10 seconds after they are already en route to doing the act? I think a good, logical moral theory will not and should not account for someone’s motives. It should address whether their reasons for doing such and such an action were logical or illogical. Again, the “motives” are premises only; they do not change the basic rules of logic.


      • Think of “checking behind you every 5 seconds” as a function. Let’s define it as A.

        If A adds to the net value of your life, then it is rational. For instance, a child soldier in Sudan might survive to eat another meal if he happens to constantly look behind him. This is highly context-dependent: if you are in a dark alley, and you hear rustling behind you, it is probably more rational to perform A.

        If this function detracts from the net value of your life, then it is irrational. Again, it is context-dependent: when you are sitting in a lecture, other people will find you odd for looking back every 5 seconds, probably detracting from your social value in this situation.

        Now, what if you were that child from Sudan who was trained to be a child soldier, but escaped and was able to attend a lecture at a university in Lagos? Your previously successful function A is now unsuitable in this situation.

        The mind applies functions and methods of acting in order for us to survive and achieve our end-goals; however, it also takes time to adjust to new contexts. A child from poverty has learned heuristics and models to help her make sense of and live in the world (a sort of rationality) that might not suit her later in life.


        • Right, that is my point: he was using the example to disprove the author’s rationality argument. I was saying the author accounted for this with his second definition, which says that for something to be rational it has to be conducive to achieving a goal.


          • I think I get what you are saying.

            Would you say that a behavior which has been adaptive in another situation, but is currently useless, is “logical” (in that the brain is doing what it has done in the past to achieve a goal), but not “rational” because the behavior will not reach the goal?

            I think that this would rely on the brain having a necessary understanding of the outside variables in a situation.

            Perhaps instead of logical there is a better word to use? I would say “understandable”.


            • Right, exactly, which goes back to the author’s first point: that without the necessary information/education it is impossible for someone to make a “rational” decision, or “the most rational” decision.


        • I don’t think it quite does. Checking behind you every 5 seconds will improve your survival rate. Context may adjust the level of survival improvement, but even in a low context environment it will still pay.

          So if your goal is survival, then doing this is conducive to that goal, and if you applied the second definition exactly you would say it is rational. As it would be to never drive a car, cross the road, or jump out of a perfectly good aeroplane.

          But if you were to meet people that regularly did / did not do those things, you would not think of them as being rational.


          • Maybe I was too broad; I should clarify that it’s not conducive to the stated goal or the goal being actively pursued. A skydiver’s goal at the time is not survival.


            • I think it is difficult to always provide the right breadth in these arguments as there are so many contexts and situations it is very hard to always describe the right one – and I know I am guilty of that a lot.

              I would say that someone skydiving would very much have survivability as a goal. I mean they do take parachutes.


              • I think survivability is a desired outcome but not the goal. And if it were a goal, it would be secondary to some other pursuit.


    • Just ask P. T. Barnum about how “rational” we really are, if you can face an extremely unflattering answer.

      We are animals. That our brain added complex linguistic / symbolic logic as a survival tool, and a “conscious” attention center to supervise, by no means makes their activity anything like the majority of what our brain does to direct our behavior. Perhaps in another 2 million years such an animal will have evolved. I rather think of our logical reasoning abilities as a partial input to the solution of any given problem; the decision will be made by a deeper part of our brain, including various and sundry instinctual logics, and influenced to some degree by what our “rational” thoughts contribute. I also think that in spite of subjective impressions to the contrary, we are barely conscious. The part of our brain that we experience as “aware” has limited visibility within the mind, and cannot normally perceive more than a small fraction of the true scope of our mental processes.

      We think we’re so * smart, but we’re not, and it’s much easier to admit when we drop the absurd Aristotelian expectations of perfect rationality. Pure rationality is a lovely fantasy, like super powers and gods, but we are animals complete, and are only slowly learning what that actually means by carefully studying nature, until our simple monkey minds can realistically parse the radical complexity of reality.


    • | ‘Limited resources and cognitive ability result in a ‘bounded rationality’. This bounded rationality manifests itself as deviations from rational behavior.’ |

      This characterization misses the central insight from the literature on bounded rationality. The entire point is that bounded rationality is not a deviation from given norms of reasoning found in abstract idealizations of rationality, e.g. formal decision theory or probability calculus or whatever. Bounded rationality is the denial that those formal norms apply to the cognition or action of beings like us in the first place.

      Bounded rationality is normative in that it offers a different set of evaluative standards, which are immanent to a particular situation. Herb Simon used the analogy of a pair of scissors. One blade represents the agent and the other its environment; what is rational for an agent to believe or to do in a given scenario is a function of both.

      In fact this dispute is the basis of a whole debate in the literature about what rationality consists in, largely falling between Gigerenzer’s bounded rationality camp and Kahneman’s heuristics-and-biases camp. Stich and Stanovich have both argued positions situated between the extremes of the so-called Panglossian and Meliorist accounts. This article doesn’t have a good handle on the matter.


  2. Get in.
    Check biblio. No Boudon.
    Scroll the document. Shitty-formatted formulas.
    Get out.


  3. When reading this I just kept thinking about the actions of the poor/homeless, in Western society specifically. Having done some extensive work in this area, I always found myself bumping up against opinions or decisions that made no sense to me. Then there were others in my line of work, or adjacent to it, who would insist this was due to some kind of cognitive difference in people (i.e. that it was natural that some just won’t succeed, etc. – I’m simplifying here), but this way of thinking sits more right with me. Because they lack education/insight, they cannot even see what the rational solution to their various problems would be. And even if they did, because they lack the resources to take the most rational course of action, they must choose an alternate. Those two factors combined create a very limited set of possible decisions. I guess, spiraling from this, the question would be: were you to introduce a more rational decision that they hadn’t considered (due to problem 1), would they accept it as more rational, or does the introduction of information have to precede this?


    • The problem noted at the end there, I think, points to what is classically called akrasia, and currently referred to both by the classic name and as ‘weakness of will’. There are several readers you can buy on this topic; it was addressed in Plato’s Protagoras, the play Medea, and Aristotle’s Nicomachean Ethics, and today there are thousands of articles. I don’t think there is consensus, but it is very hard to get around Socrates’ claim that an agent will/can only decide on what he thinks is best. It has been postulated that the only way out of this is pluralism, but I once argued that it comes down to rapid cycling of principal (guiding) values (axioms) due to slippage in which sets of data are active at any given moment in our brain space. I’m pretty sure someone else argued that too, but I dunno… if you’re looking for a short answer, there’s one. Otherwise, good luck on your philosophical journey into the world of will!

      As for whether the agent will choose the better option presented, I would say that depends on their ability to accept the new information, and their determination will also be dictated by the breadth and quality of their previous data sets. So… who knows? It is unlikely that someone with severe schizophrenia will accept a new piece of evidence unless it appears to them to be divinely inspired by the universe or some god. If we’re talking about just some person down on their luck who is otherwise mentally fit and has a good education, they will likely be more convinced by new data. The general answer is no, I think: I read an article posted on “God”‘s Facebook page, and it asserted that Americans generally do not make moral decisions using facts and are generally not open to new information when it conflicts with pre-existing ideologies.


      • I guess my question wasn’t so much based on will (whether they do it or not), because I think there could be a lot of different ways we could convince and/or force someone to do things. But instead of whether they ultimately choose to do it or not, it’s whether they would accept that decision as the most rational choice.

        For example, I can accept that eating broccoli is more rational to my goal of losing weight than is eating chocolate. But that doesn’t mean that I do it every time.

        That is the view we usually take of poor decision making: that it is clear, or relatively clear, what the right decision is, but we are choosing a different path due to different motivations (chocolate makes me happy, lol).

        But this article seems to be suggesting to me not that people are necessarily choosing a less rational choice, but that these choices are “hidden” from them due to circumstances. So would simply introducing them “fix” this, or do they have to be uncovered by themselves?

        Thanks for the reply!


        • OK, I read the article and your post again. I am sorry I am new to reddit and did not realize you had a source you were referring to.

          So if I understand you correctly, you are taking an extreme juxtaposition of +/-2 from 7 and you’re supposing an agent with only two sections of data with which to make a decision. So agent “G” has the following set:

          [A, B]

          Where A and B are moral principles or perhaps strings of related moral datum.

          And so now you are wondering: suppose we insert C, a super or hard-hitting value (much like a primary axiom) (sorry, I only have classical training in this topic), so now we kind of have a set like this:

          [A, B, C+]

          Now would the agent be more likely to make a rational decision? YES, iff* you could convince the agent that C is not only a true statement, or a reality of the world, but also that C overrides A and B in level of importance. This could work both with conditional statements that are true and with more complex inductive arguments where there is cogency. The agent would have to have the capacity to identify and accept the information, and would also need the time and equipment to process it efficiently.


          • Hmm, using that kind of structure, I think I’m actually saying A and B are conditions under which one can make the most rational decision. Without A (understanding), the most rational choice isn’t even one of the options, and without B (opportunity/means), even if the rational option is available it cannot be chosen.

            So the question is: can we just give the most rational choice without fulfilling A and B and the person will accept it? Or are A and B necessary for the option to be obtainable?


            • Ok, I think I understand. So supposing B is a variable for “opportunity/means,” then no, the agent could not act on the best decision if they did not have the opportunity or means. They might “decide” to do the action, discover they are missing B, and then be naturally inclined to do what they determine to be the second best thing. If I’m still not understanding, then I’m sorry, but I tried my best!!! Interesting post though, thanks for the convo!


              • Haha yeah for sure, the idea is still kind of “cooking” in my head so it may not have been explained as clearly as it needed to be.


        • | ‘For example, I can accept that eating broccoli is more rational to my goal of losing weight than is eating chocolate. But that doesn’t mean that I do it every time.’ |

          But is it? If you were a robot that could make that choice every time, and successfully lose weight with certainty, sure. But for humans, the empirical probability is high that at some point they will revert and gain the weight back. And in the meantime, they have to expend mental energy to make that choice. It’s not easy to say what is truly the rational choice.

          The logical part of the mind is like a people manager who has the best speaking skills and takes center stage, but can’t issue orders and ultimately has limited influence. Our rational part is not free to decide what the whole system does; it is only free to decide what the rational part does. Predicting effects on the rest of the system is very difficult.


          • If my goal is to lose weight, then yes, eating broccoli is the more rational choice. I think you’re pointing out that I may have other, competing goals (happiness etc.) for which eating the chocolate is more rational.


            • That’s not quite what I’m getting at. I would say that even bounded rationality is only an approximate model. For one, the conscious mind is not in complete control of the body’s actions, and it’s debatable whether the conscious mind is correct to declare itself the primary entity. Also, the conscious mind today is not the same as the conscious mind tomorrow. I don’t believe that humans maintain consistent goals. Goals and conscious intentions are a structure that runs on top of and shapes the unconscious mind, but they are not primary.


              • That’s fair, but assume that at any given moment some goal is being pursued. We do have rational and irrational choices for that goal, correct? But I guess it would be functionally impossible to separate “an irrational choice for the stated goal” from “a rational choice for a different goal”.


                • Yes. I consider that assumption to be, strictly speaking, false, but possibly useful for modeling. Maybe the example is just bad. Empirically, 95% of people who lose weight by doing things like eating broccoli instead of chocolate gain it back. I’m doubtful that a plan with a 95% failure rate is rational.

