Thinking And Deciding


1. Glossary

2. Most Important Concepts

2.1. Search-Inference Framework (Ch1)

2.2. Understanding (Ch1)

2.3. Rationality; luck & misfortune (Ch3 > Rationality)

2.4. Psychological difficulties (Ch 4 > Psych difficulties): mostly concerned with the Wason selection task and the universal/existential troubles

2.5. Probability Judgment (Ch5 > definitions)

2.6. Frequency/logical/personal theories

2.7. Pretty much all of ch 5 and 6

2.8. Not all belief persistence is irrational, ch 9 (and its child nodes)

2.9. Biased assimilation

2.9.1. People who hold strong opinions on complex social issues are likely to examine relevant empirical evidence in a biased manner. They are apt to accept "confirming" evidence at face value while subjecting "disconfirming" evidence to critical evaluation, and, as a result, draw undue support for their initial positions from mixed or random empirical findings.

2.10. Belief Overkill

2.10.1. the tendency to deny conflicting arguments, even if they do not need to be denied.

2.10.2. Example:

2.10.2.1. Consider a discussion on whether or not to ban nuclear testing

2.10.2.1.1. "People who favored banning nuclear testing believed that (a) testing created a serious medical danger, (b) would not lead to major weapons improvements, and (c) was a source of international tension. The opposing group believed exactly the opposite on all three points." Note that these three points ARE NOT CAUSALLY CONNECTED. One can, for example, believe that (a) testing does not cause medical danger, (b) testing does lead to weapon improvement, and (c) nuclear testing is a source of international tension. The crucial thing to note here is that those who overall opposed the idea of nuclear testing gave reasons against testing on all three accounts, and those who were for nuclear testing gave reasons for testing on all three accounts. However, one can OVERALL oppose testing, EVEN IF they agree that there are good reasons for testing in some regards.

2.11. Value conflict

2.11.1. When a person is posed an issue where the opposing sides are both valued strongly by the individual, they are much less biased in assessing information and evidence. This makes sense, since they aren't committed to favoring one side or the other.

3. Chapters

3.1. Chapter 1: What Is Thinking?

3.1.1. Rational Thinking

3.1.1.1. Thinking that maximally achieves our ideal goals

3.1.2. Thinking occurs when we have doubt about...

3.1.2.1. Which decisions to make

3.1.2.2. Which beliefs are true

3.1.2.3. What our ideal goals should be

3.1.3. Search-Inference Framework

3.1.3.1. The framework

3.1.3.1.1. We think about three kinds of things to remove doubt

3.1.3.2. Example

3.1.3.2.1. Question: Which elective class should I take next semester?

3.1.3.3. Notes

3.1.3.3.1. Goals, possibilities, evidence & inference do not occur in any order; can overlap

3.1.4. Types of Thinking

3.1.4.1. Diagnosis

3.1.4.1.1. Goal: discover what the trouble is (in a patient, car, etc.)

3.1.4.1.2. Search for evidence is only partially under the thinker's control

3.1.4.2. Scientific Thinking

3.1.4.2.1. A question needs answering

3.1.4.2.2. Hypothesis testing in the form of experiments

3.1.4.2.3. Evidence results from experiments

3.1.4.3. Reflection

3.1.4.3.1. Goal: arrive at a general principle

3.1.4.3.2. Evidence largely consists of memories

3.1.4.4. Insight

3.1.4.4.1. The only phase under the thinker's control is the search for possibilities

3.1.4.4.2. Achieving the goal (i.e., getting the correct answer) is usually quick (or prolonged but abrupt)

3.1.4.5. Prediction

3.1.4.5.1. Goal is fixed

3.1.4.5.2. Evidence usually takes the form of knowledge about another member of the same proposed class as the subject in question

3.1.4.6. Behavioral learning

3.1.4.6.1. Learning what the implications of behaviors are in life

3.1.4.7. Learning from observation

3.1.4.7.1. Learn from observation alone, without intentional experiment

3.1.5. Naive Theory

3.1.5.1. System of belief based on incomplete knowledge

3.1.6. Understanding

3.1.6.1. For Perkins, understanding involves...

3.1.6.1.1. the structure of what we want to understand

3.1.6.1.2. the purpose of the structure

3.1.6.1.3. arguments of why that structure serves that purpose

3.1.6.2. Example

3.1.6.2.1. Design: A = b x h

3.1.6.2.2. Purpose: find area of rectangle

3.1.6.2.3. Arguments: Whatever arguments are used to show that A = b x h.

3.2. Chapter 2: The Study of Thinking

3.2.1. Models of Thinking

3.2.1.1. Normative

3.2.1.1.1. Model of thinking that best achieves the thinker's goals

3.2.1.2. Prescriptive

3.2.1.2.1. Prescribes how one ought to think

3.2.1.3. Descriptive

3.2.1.3.1. How people actually think (observationally)

3.2.2. Methods for Empirical Research

3.2.2.1. Interviews

3.2.2.2. Process tracing

3.2.2.2.1. The thought process is the concern, not the conclusions

3.2.2.3. Integrative Complexity

3.2.2.3.1. How does one examine an issue?

3.2.2.4. Hypothetical Situation

3.2.2.4.1. Lab experiments where variables and outcomes are examined

3.2.2.5. Observation

3.2.2.5.1. Observing people in their natural habitat

3.2.2.6. Individual differences

3.2.2.6.1. If individuals show bias, we can examine the causes of these biases

3.2.2.7. Archive data

3.2.2.7.1. Historical data

3.2.2.8. Training and De-biasing

3.2.2.8.1. Can observe how a prescriptive heuristic affects thinking

3.2.2.9. Experimental Economics

3.2.2.9.1. Economists generally assume people act in rational self-interest

3.2.2.10. Psychological Measurements

3.2.2.10.1. Conjecture about cause and effect from MRI scans and other procedures etc

3.2.2.11. Computer models and AI

3.2.2.11.1. Useful for understanding descriptive vs normative thinking

3.2.2.12. Within-Subject vs Between-Subject

3.2.2.12.1. Within-Subject

3.2.2.12.2. Between-Subject

3.2.2.13. Sampling

3.2.2.13.1. Random sample from desired demographic

3.2.2.13.2. Random sample from the general human populace

3.2.2.14. Incentive

3.2.2.14.1. More desirable to observe what people do IRL than in hypothetical (lab) situations

3.2.3. Development of Normative Models

3.2.3.1. Theories

3.2.3.1.1. Rawls's Theory

3.2.3.1.2. Author's Theory

3.2.3.2. Side-information about intuitions

3.2.3.2.1. Intuitions are often held for good reasons

3.2.3.2.2. Intuitions can be held for bad reasons, such as...

3.2.4. Descriptive Models and Heuristics

3.2.4.1. Heuristics

3.2.4.1.1. Pros

3.2.4.1.2. Cons

3.2.5. Development of Prescriptive Models

3.2.5.1. Basis of prescriptive model is design

3.3. Chapter 3: Rationality

3.3.1. Rationality

3.3.1.1. Two interpretations

3.3.1.1.1. Thinking that achieves ideal goals

3.3.1.1.2. Thinking that achieves any goal

3.3.1.2. Can have rational thinking but bad outcomes, and irrational thinking but good outcomes

3.3.1.2.1. Luck and misfortune (bad luck) are the cause

3.3.1.3. Matter of degree

3.3.1.3.1. May not be a "best" way to think, but there are certainly "better" and "worse" ways to think

3.3.2. Optimal Search

3.3.2.1. Actively Open-Minded Thinking

3.3.2.2. Confidence in decisions should reflect time spent in search-inference analysis

3.3.2.3. Appropriate confidence in our beliefs is often a more reasonable goal than having true beliefs.

3.3.3. Objections & Criticisms of Rationality

3.3.3.1. Paradox: Spontaneity

3.3.3.2. Rational thinking might not promote happiness

3.3.3.2.1. Author's Criticism: Believes that happiness resulting from irrationality is often due to having held irrational beliefs in the first place.

3.3.4. Rationality and Emotion

3.3.4.1. Controllable emotions

3.3.4.1.1. In this case, we can think about whether controlling them is rational or irrational

3.3.4.2. Uncontrollable emotions

3.3.4.2.1. When emotions are not in our control, then it makes no sense to say they are irrational

3.3.4.3. Fear can cause irrationality

3.3.4.3.1. We can take measures to reduce fear and thereby increase rationality

3.3.5. Self-Deception

3.3.5.1. Occurs when you do something to control your beliefs, without being aware that you are doing so

3.3.5.2. Cure = actively open-minded thinking

3.3.5.3. Can lead to...

3.3.5.3.1. Vulnerability

3.3.5.3.2. Excessive risk-taking

3.3.5.4. Can be rational in some cases

3.4. Chapter 4: Logic

3.4.1. Types of Logic

3.4.1.1. Sentential

3.4.1.2. Predicate

3.4.1.3. Modal

3.4.1.4. Categorical

3.4.1.5. etc...

3.4.2. Errors and Difficulties

3.4.2.1. Errors

3.4.2.1.1. Error in Hypothesis Testing

3.4.2.1.2. Typical errors

3.4.2.2. Difficulties

3.4.2.2.1. Sentential seems to be easier on the mind than Predicate

3.4.2.2.2. Some people don't understand deduction; they think they need to know about the subject matter in order to deduce a conclusion.

3.4.3. Mental-Models

3.4.4. Toulmin's Model of Argumentation

3.4.5. How Logic Framework Develops

3.4.5.1. The logic framework develops through reflection and evidence

3.5. Chapter 5: Normative Theory of Probability

3.5.1. Definitions

3.5.1.1. Probability

3.5.1.1.1. (as defined in this book)...a numerical measure of the strength of a belief in a certain proposition. By convention, probabilities range numerically from 0 to 1; 0 being certainly false, 1 being certainly true.

3.5.1.1.2. Probability Theory is a normative theory of inference.

3.5.1.2. Probability Judgment

3.5.1.2.1. Assignment of a numerical value to a belief.

3.5.1.3. Frequency Theory

3.5.1.3.1. Probability is the relative frequency of events

3.5.1.4. Logical Theory

3.5.1.5. Personal

3.5.1.5.1. A personal judgment of the likelihood of a proposition or event being true

3.5.1.6. Principle of Insufficient Reason

3.5.1.6.1. If there is no reason to expect one event to be more likely than another, then we should consider the two events to be equally likely

3.5.1.7. Coherence

3.5.1.8. Calibration

3.5.2. Concepts

3.5.2.1. 3 Probability Theories

3.5.2.1.1. Frequency

3.5.2.1.2. Logical

3.5.2.1.3. Personal

3.5.2.2. Principle of Insufficient Reason

3.5.2.2.1. If there is no reason to expect one event to be more likely than another, then we should consider the two events to be equally likely

3.5.2.2.2. Criticisms

3.5.2.3. Beliefs

3.5.2.3.1. Comparing beliefs by degree of certainty

3.5.2.3.2. Well-Justified beliefs

3.5.2.4. Well-Justified Probability Judgments

3.5.2.4.1. Coherence

3.5.2.5. Evaluating Probability Judgments

3.5.2.5.1. Calibration

3.5.2.6. Scoring Rules

3.5.2.6.1. Quadratic Rule

3.5.2.6.2. Notes
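The Quadratic Rule node has no content here; a minimal sketch in Python, assuming the common squared-error form of the quadratic (Brier) scoring rule, where lower scores are better:

```python
def quadratic_score(p, outcome):
    """Quadratic (Brier) score for one binary event.

    p: judged probability that the event occurs (0 to 1).
    outcome: 1 if the event occurred, 0 if it did not.
    Lower scores are better; a perfect forecast scores 0.
    """
    return (p - outcome) ** 2
```

A confident correct forecast (p = 0.9 for an event that occurs) scores 0.01, while hedging at p = 0.5 scores 0.25, so the rule rewards honest, well-calibrated probabilities.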

3.5.2.7. Bayes's Theorem
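The Bayes's Theorem node is still empty, so here is a minimal sketch of the theorem for a binary hypothesis; the disease/test numbers below are purely hypothetical:

```python
def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H) P(H) / P(E), with P(E) expanded over H and not-H."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# Hypothetical example: a condition with prior probability 0.01,
# and a test that is 95% sensitive with a 10% false-positive rate.
posterior = bayes_posterior(prior=0.01, p_e_given_h=0.95, p_e_given_not_h=0.10)
# The posterior (~0.088) is far below the test's sensitivity, because
# the low prior dominates -- the classic base-rate lesson.
```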

3.5.3. Notes

3.5.3.1. Logical Theory needs a definition

3.5.3.2. Simplify principles of insufficient reason criticisms

3.5.3.3. try just jotting down notes, and THEN going back and making adjustments after you complete, by categorizing all of your notes in an organized way

3.5.3.3.1. this would maximize efficiency

3.5.3.4. Bayes's Theorem needs to be finished.

3.6. Chapter 6: Descriptive Theory of Probability

3.6.1. Concepts

3.6.1.1. Overview

3.6.1.1.1. Accuracy of Probability Judgments requires...

3.6.1.2. Conjunction Fallacy

3.6.1.2.1. Apparently an effect of the representativeness heuristic

3.6.1.2.2. When subjects are asked which is the more likely description of a particular person (given various options), they look at how representative the person's personality is of the options available. If (a) "Joe is a doctor" and (b) "Joe is a doctor and a tennis player" are two options, and (b) sounds more representative of Joe, then subjects tend to choose (b) EVEN THOUGH (a) is always at least as likely, since a single statement is always at least as likely as the conjunction of that statement with another.
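The conjunction rule behind this can be checked with simple arithmetic; the probabilities below are hypothetical, chosen only to illustrate:

```python
# Hypothetical judged probabilities for the Joe example.
p_doctor = 0.30                 # P(Joe is a doctor)
p_tennis_given_doctor = 0.40    # P(tennis player | doctor)

# Conjunction rule: P(A and B) = P(A) * P(B | A), which can never
# exceed P(A) because it multiplies P(A) by a number <= 1.
p_doctor_and_tennis = p_doctor * p_tennis_given_doctor

assert p_doctor_and_tennis <= p_doctor  # (a) is always at least as likely as (b)
```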

3.6.1.3. Representativeness heuristic

3.6.1.3.1. a heuristic where people judge probability only by similarity

3.6.1.3.2. people don't take into account prior probabilities even when they are available

3.6.1.3.3. Example & elaboration

3.6.1.4. Gambler's Fallacy

3.6.1.4.1. Mathematical interpretation

3.6.1.4.2. Psychological interpretation
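A quick simulation, assuming a fair and memoryless coin, of why the fallacy is a fallacy: the flip immediately after a streak of five heads still comes up heads about half the time:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
flips = [random.random() < 0.5 for _ in range(200_000)]  # True = heads

# Gather every flip that immediately follows five consecutive heads.
after_streak = [flips[i] for i in range(5, len(flips)) if all(flips[i - 5:i])]

heads_rate = sum(after_streak) / len(after_streak)
# heads_rate hovers around 0.5: past flips do not change the next one.
```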

3.6.1.5. Availability Heuristic

3.6.1.5.1. People make probability judgments given the evidence that is available to them.

3.6.1.5.2. When asked which is more likely, (a) that a word in English starts with the letter K or (b) an English word has K as its third letter? People typically find (a) more probable since it's much easier to think of words that start with the letter K than to think of words that have K as their third letter.

3.6.1.5.3. Mention of things in the media, newspapers, etc. can inflate a person's beliefs about the probability of something relative to other things. (Example: deaths from tornadoes vs. asthma -- tornado deaths are overestimated due to excessive mention in the newspaper, whereas asthma deaths are underestimated due to rare, if any, mention of asthma deaths.)

3.6.1.5.4. When there is a question like, "What is the most plausible cause of event X? -- A, B, C, D, or Other?" most people tend to underestimate the "Other" option, due to lack of awareness of what other possibilities there could be.

3.6.1.5.5. Mood can affect probability judgments

3.6.1.6. Subadditivity bias

3.6.1.6.1. Definition: When the judged probability of the whole is less than the sum of the judged probabilities of its parts

3.6.1.6.2. Availability or representativeness may increase if a description is more explicit.

3.6.1.6.3. Example

3.6.1.7. Hindsight bias

3.6.1.7.1. Where hindsight knowledge of an outcome affects one's judgment of the probability that would have been assigned to the event in foresight

3.6.1.7.2. Example

3.6.1.8. Averaging bias

3.6.1.8.1. When statistics are "averaged" when they should not be.

3.6.1.8.2. Example: Subjects were told how likely a particular used car model was to be in good shape, and they were also told the accuracy of a judge's opinion on whether it was in good (or bad) shape. When a car was 90% likely to be in good shape, and when the judge had a 60% accuracy rate (of determining the shape of the car), and when the judge said that the car was in good shape, subjects decided that the likelihood of the car being in good shape was considerably lower than 90%. (Though, somewhere between 60% and 90%). What they seemed to be doing was averaging the 60% and 90% statistics. This shows that they do not really understand how probability judgments work. (Since, using Bayesian Inference, there is a 93% chance that the car is in good shape.)
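The 93% figure in the example can be verified by applying Bayes's theorem to the numbers given (90% prior, 60% judge accuracy):

```python
# Numbers from the used-car example.
p_good = 0.90                  # prior: the car is in good shape
p_says_good_given_good = 0.60  # judge is right 60% of the time
p_says_good_given_bad = 0.40   # so the judge wrongly says "good" 40% of the time

# Bayes's theorem: P(good | judge says good).
numerator = p_good * p_says_good_given_good
p_says_good = numerator + (1 - p_good) * p_says_good_given_bad
posterior = numerator / p_says_good

# posterior is about 0.93 -- higher than the 90% prior, not an
# "average" of 60% and 90% as the subjects assumed.
```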

3.6.1.9. Confidence biases

3.6.1.9.1. People tend to underestimate very high frequencies and overestimate very low frequencies

3.6.1.9.2. Over/underestimation is not surprising when we realize that an error in judgment of a 0% occurrence can only be overestimated, and an error in judgment of a 100% occurrence can only be underestimated.

3.6.1.9.3. Subjects typically give confidence intervals that are too small, indicating an overconfidence

3.6.1.9.4. Subjects tend to be overconfident when confidence is high and underconfident when confidence is low

3.6.1.9.5. One explanation of biased confidence judgments is that people may have little idea what a probability judgment looks like

3.6.1.9.6. Another cause for the "overconfidence when confidence is high" phenomenon is the tendency to seek evidence in favor of an initial belief, as opposed to evidence against it.

3.6.1.10. Frequency misconceptions

3.6.1.10.1. Children (and sometimes adults) think that frequency, rather than relative frequency, matters

3.6.2. Definitions

3.6.2.1. Confidence interval

3.6.2.2. Representativeness heuristic

3.6.2.3. Conjunction Fallacy

3.6.2.4. Gambler's Fallacy

3.6.2.5. Availability Heuristic

3.6.2.6. Subadditivity

3.6.2.6.1. When the judged probability of the whole is less than the sum of the judged probabilities of its parts.

3.6.2.7. Hindsight bias

3.6.2.7.1. Where hindsight knowledge of an outcome affects one's judgment of the probability that would have been assigned to the event in foresight.

3.6.2.8. Averaging bias

3.6.2.8.1. When statistics are "averaged" when they should not be.

3.6.3. Notes

3.6.3.1. For this chapter I am doing an experiment. I am just jotting down notes and THEN going back to organize information by category. I will cite pages for easy reference to go back to. I may write down definition words, but may wait till the end of the chapter to fill them in with definitions

3.6.3.2. I wonder...am I doing too much categorizing? Perhaps I should just leave ch 6 like it is, just bulleted main notes, and then add the most important information in the "most important concepts" section. Perhaps, if I want to really grasp the material in totality, I should just go back and read the chapter. I really like how the concepts section of ch 6 looks. Or perhaps just categorize it by the smallest number of categories possible. Idk.

3.7. Chapter 7: Hypothesis Testing

3.7.1. Concepts

3.7.1.1. An Example From Medicine & Testing Scientific Hypothesis

3.7.1.2. Probability theory + Decision theory = normative model for hypothesis testing

3.7.1.3. Hypothesis testing is the inference part of the search-inference framework

3.7.1.4. A hypothesis is a possibility in the search-inference framework

3.7.1.5. Psychology of hypothesis testing

3.7.2. Definitions

3.7.3. Notes

3.7.3.1. An Example From Medicine & Testing Scientific Hypothesis section needs to be greatly reduced, more paraphrasing needed, and it then needs to be moved from the note section into the mind-map.

3.8. Chapter 8: Judgment of Correlation and Contingency

3.8.1. Concepts

3.8.2. Definitions

3.8.3. Notes

3.9. Chapter 9: Actively Open-Minded Thinking

3.9.1. Concepts

3.9.1.1. Search-Inference framework implies thinking can go wrong for 1 of 3 reasons

3.9.1.1.1. Our search misses something that it should have discovered, or we act with high confidence after little searching

3.9.1.1.2. We seek evidence and make inferences in ways that prevent us from choosing the best possibility

3.9.1.1.3. We think too much.

3.9.1.2. People tend to seek evidence, seek goals, and make inferences in a way that favors possibilities that already appeal to them.

3.9.1.3. Good thinking involves

3.9.1.3.1. search that is thorough in proportion to the importance of the question

3.9.1.3.2. confidence that is appropriate to the amount and quality of thinking done

3.9.1.3.3. fairness to other possibilities than the one we initially favor

3.9.1.4. Actively Open Minded Thinking...

3.9.1.4.1. When we critique a person's thinking, we should look for...

3.9.1.4.2. Example on pg 200-203

3.9.1.5. Irrational belief persistence is a problem that humans tend to face

3.9.1.5.1. It involves at least 2 biases

3.9.1.6. Not all belief persistence is irrational

3.9.1.6.1. Often the evidence against a belief is not strong enough to make a convincing case to give it up

3.9.1.6.2. If we all gave up beliefs as soon as there was evidence against them, we would hold very few beliefs with certainty, and we would give up many beliefs that are true

3.9.1.6.3. It can be rational to "explain away" or discredit certain evidence, or not take certain evidence too seriously. For example, if, in hindsight, we see that event X did occur and event Y did not, we could still brainstorm plausible evidence that would have supported the occurrence of event Y anyway.

3.9.1.7. Recency Effect

3.9.1.7.1. Where evidence that arrives after an initial set of evidence is weighed more heavily (more favored) than the initial evidence

3.9.1.8. Primacy Effect

3.9.1.8.1. what happens when the order principle is violated: the first piece of evidence is weighed more heavily (more favored) than it should be.

3.9.1.8.2. Example: pg 207 bottom (elaborate on this later) another example is on pg 208-T

3.9.1.9. Neutral Evidence Principle

3.9.1.9.1. the concept

3.9.1.9.2. Would be violated if ambiguous or non-relevant information was used to support our favored belief.

3.9.1.9.3. Would be violated if the evidence is mixed but ONLY strengthened our favored belief (instead of equally weakening our belief, which would give the evidence no net strength or weakness)

3.9.1.9.4. Pitz suggests that violating neutral evidence principle could only happen if subjects make a commitment to a belief; if they don't, then resistance to discrediting opposing evidence won't occur

3.9.1.9.5. Biased assimilation

3.9.1.9.6. Illusory Correlation Effect (described in CH 8)

3.9.1.10. Beliefs and Attitudes about thinking

3.9.1.10.1. Some people and cultures believe that changing one's beliefs is a sign of weakness and that a good thinker is one who is determined and committed to their position. This poses a big problem for being rational.

3.9.1.11. Self Deception and Wishful thinking

3.9.1.11.1. Self-deception

3.9.1.11.2. Dissonance Resolution:

3.9.1.12. Selective Exposure

3.9.1.12.1. Selective exposure is the tendency to search selectively for evidence that will support current beliefs.

3.9.1.12.2. People tend to strengthen their own beliefs by convincing themselves that the opposing arguments are weak or that their opponents are foolish

3.9.1.12.3. People tend to gain knowledge from sources that favor their beliefs and disregard opposing sources of knowledge

3.9.1.12.4. People tend to strengthen their beliefs given positive, one-sided information/evidence, without even taking into account that there is another side to the story.

3.9.1.13. Belief Overkill

3.9.1.13.1. the tendency to deny conflicting arguments, even if they do not need to be denied.

3.9.1.13.2. Example:

3.9.1.14. Value conflict

3.9.1.14.1. When a person is posed an issue where the opposing sides are both valued strongly by the individual, they are much less biased in assessing information and evidence. This makes sense, since they aren't committed to favoring one side or the other.

3.9.1.15. Accountability

3.9.1.15.1. When people know that they will need to back up their position in front of a group of people, they are more likely to be open-minded and consider both sides of the argument. However, if they know that the audience is biased to one side, this may increase the amount of bias in favoring that side.

3.9.1.16. Stress

3.9.1.16.1. When under stress, the most useful evidence can be overlooked; avoiding the issue altogether may happen as well.

3.9.2. Definitions

3.9.2.1. Actively Open Minded Thinking

3.9.2.2. Myside Bias

3.9.2.2.1. Phrase coined by David Perkins. Myside bias = "my side" bias, basically saying that we tend to favor certain possibilities and evidence, and thus we tend not to give alternative options a fair chance

3.9.2.3. Irrational Belief Persistence

3.9.2.3.1. the phenomena of holding onto beliefs without sufficient regard to the evidence against them or the lack of evidence in their favor

3.9.2.4. Order Principle

3.9.2.4.1. When the order in which we encounter two pieces of evidence is not in itself informative, the order of the two pieces of evidence should have no effect on our final strength of belief.

3.9.2.5. Primacy Effect

3.9.2.5.1. what happens when the order principle is violated: the first piece of evidence is weighed more heavily (more favored) than it should be.

3.9.2.5.2. Example: pg 207 bottom (elaborate on this later) another example is on pg 208-T

3.9.2.6. Recency Effect

3.9.2.6.1. Where evidence that arrives after an initial set of evidence is weighed more heavily (more favored) than the initial evidence

3.9.2.7. Neutral Evidence Principle

3.9.2.7.1. Neutral evidence should not strengthen a belief. Neutral evidence is evidence that is equally consistent with a belief and its converse.

3.9.2.8. Biased assimilation

3.9.2.8.1. People who hold strong opinions on complex social issues are likely to examine relevant empirical evidence in a biased manner. They are apt to accept "confirming" evidence at face value while subjecting "disconfirming" evidence to critical evaluation, and, as a result, draw undue support for their initial positions from mixed or random empirical findings.

3.9.2.9. Self-deception

3.9.2.9.1. Doing something to control one's beliefs without being aware of doing so, driven by the desire to hold a certain belief

3.9.2.10. Selective Exposure

3.9.2.10.1. Selective exposure is the tendency to search selectively for evidence that will support current beliefs.

3.9.2.11. dissonance resolution

3.9.2.11.1. the act of eliminating conflict among beliefs.

3.9.2.12. Value conflict

3.9.2.12.1. When a person is posed an issue where the opposing sides are both valued strongly by the individual, they are much less biased in assessing information and evidence. This makes sense, since they aren't committed to favoring one side or the other.

3.9.3. Notes

3.10. Chapter 10: Normative Theory of Choice Under Certainty

3.10.1. Definitions

3.10.1.1. weak ordering principle

3.10.1.1.1. This principle is a framework for determining what option we should choose. It assumes two things

3.10.1.1.2. Note the following: Choice between large apple and small orange -- large apple. Choice between large apple and large orange -- large orange. Note that this is NOT violating the weak ordering principle, since size must also be taken into account.

3.10.1.2. sure thing principle

3.10.1.2.1. The idea is this: If you win lottery A, you get a vacation to Hawaii; if you win lottery B, you get a vacation to Europe. If you lose either lottery, you get a vacation to Phoenix. Do we choose lottery A or B? If we lose, the outcome is the same, so the losing outcome here is IRRELEVANT to our decision. Our decision should depend only on what we want more: Hawaii or Europe. (Consider the chances of winning to be the same for both lotteries.)

3.10.2. Concepts

3.10.2.1. Utility and utility theory

3.10.2.1.1. the measure of the extent to which we achieve our goals; the theory of how we should maximize utility is called utility theory

3.10.2.1.2. Utility is concerned with inference, not search. (Thus, a more complete normative model of decision making would include probability theory, and it would require that we apply utility theory to the decisions involved in SEARCH, as distinct from INFERENCE.)

3.10.2.1.3. The author defends utility theory as a normative model, though there has been constant controversy over this.

3.10.2.1.4. deals with decisions that can be analyzed as gambles.

3.10.2.1.5. Naively, we tend to think of some outcomes as being inherently favorable and others unfavorable; however, the value of every outcome is relative to other outcomes.

3.10.2.2. Four Components Of Utility Theory

3.10.2.2.1. Expected Utility Theory

3.10.2.2.2. Multiattribute Utility Theory (MAUT)

3.10.2.2.3. Utilitarianism

3.10.2.2.4. 4th theory (discussed in Chapter 19)

3.10.2.3. Expected value

3.10.2.3.1. The value that is "expected" per hand, if you play an infinite number of hands

3.10.2.3.2. If you had a choice to choose between two gambles, it makes sense to choose the one with a greater expected value

3.10.2.4. Expected utility: formula and example

3.10.2.4.1. Consider the following

3.10.2.4.2. Here is information about activity A and B. What is the expected utility of each?
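The activity details are not filled in at this node, so here is a sketch of the expected-utility formula with hypothetical (probability, utility) pairs, not the book's numbers:

```python
def expected_utility(outcomes):
    """Expected utility: sum over outcomes of probability * utility."""
    return sum(p * u for p, u in outcomes)

# Hypothetical activities, each a list of (probability, utility) pairs.
activity_a = [(0.5, 10), (0.5, 0)]   # risky: big payoff or nothing
activity_b = [(0.9, 4), (0.1, 8)]    # safer: modest payoff almost surely

eu_a = expected_utility(activity_a)  # 0.5*10 + 0.5*0 = 5.0
eu_b = expected_utility(activity_b)  # 0.9*4  + 0.1*8 = 4.4
# Expected-utility theory says to prefer activity A here, since 5.0 > 4.4.
```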

3.10.2.5. Why Utility theory is normative

3.10.2.5.1. Long-Run argument

3.10.2.6. weak ordering principle

3.10.2.6.1. This principle is a framework for determining what option we should choose. It assumes two things

3.10.2.6.2. Note the following: Choice between large apple and small orange -- large apple. Choice between large apple and large orange -- large orange. Note that this is NOT violating the weak ordering principle, since size must also be taken into account.

3.10.2.7. sure thing principle

3.10.2.7.1. The idea is this: If you win lottery A, you get a vacation to Hawaii; if you win lottery B, you get a vacation to Europe. If you lose either lottery, you get a vacation to Phoenix. Do we choose lottery A or B? If we lose, the outcome is the same, so the losing outcome here is IRRELEVANT to our decision. Our decision should depend only on what we want more: Hawaii or Europe. (Consider the chances of winning to be the same for both lotteries.)

3.10.2.8. Marginal Utility/Diminishing returns

3.10.2.8.1. Marginal utility needs to be taken into consideration when considering expected utility: the more of something (e.g., wealth) that we acquire, the less utility (or desirability) each additional unit has for us.
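One way to see diminishing marginal utility concretely is with a logarithmic utility curve; the choice of log and the dollar amounts are hypothetical, used only to illustrate the shape:

```python
import math

def utility(wealth):
    """A hypothetical utility curve: log is concave, so marginal utility shrinks."""
    return math.log(wealth)

# The same $1,000 gain is worth much more utility to someone
# with $1,000 than to someone with $100,000.
gain_when_poor = utility(2_000) - utility(1_000)      # = log(2) ~ 0.693
gain_when_rich = utility(101_000) - utility(100_000)  # = log(1.01) ~ 0.010

assert gain_when_poor > gain_when_rich
```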

3.10.3. Notes

3.10.3.1. Prospect Theory

3.10.3.1.1. http://www.investopedia.com/university/behavioral_finance/behavioral11.asp

3.10.4. Halts

4. Brainstorming Notes

4.1. How To Categorize

4.1.1. Glossary should have two categories: categorized alphabetically and categorized conceptually

4.1.2. Perhaps Categorize whole book by idea (as a separate node)

4.1.3. "Most important concepts" should be sorted by ideas

4.1.3.1. People

4.1.3.2. Concepts

4.1.3.3. Times

4.1.3.4. Places

4.1.3.5. etc

4.2. Strategies for optimizing ease of information retrieval

4.2.1. I should add book pages in case I want to read up on a concept

4.2.2. I think that it would be best to keep the nodes with as little information as possible, and to expound when necessary in the note box.

4.2.3. Most generalized nodes should have very few words; the terminal nodes are where a wall of text is to be placed if needed...or else in the note box. Either one.

4.2.3.1. The key idea here is maximum efficiency in information retrieval, thus I need to keep generalized nodes easy to sift through

4.3. Aesthetics/Tidiness

4.3.1. Chapter and 1st sub-node should have capitalized words. Every other sub-node should only capitalize first word of the sentence.

4.4. Misc

4.4.1. I can always add stuff later, like go back, read the book, and add more stuff

5. Needs to be completed or have more information added

5.1. Ch 4 > Logic > Mental-Models

5.2. Perhaps some more on Ch4

5.3. Ch5 logical theory > definition (additional info about this theory is also needed.)

5.4. Finish Ch5

6. I will go back after I complete the book notes and do a serious overhaul, tidy things up, categorize better, etc...

7. When looking at the math involved in the book, don't get discouraged; just do a couple of sample problems of your own. ONE THING I FOUND TO BE VERY HELPFUL was to write down examples while you read the information, so you can see what they are talking about tangibly.