Algorithms for Lawyering.


1. Scheduling

1.1. THE PROBLEM

2. Optimal Stopping

2.1. THE PROBLEM

2.2. THE ALGORITHM

2.2.1. The 37% Rule
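
A minimal sketch of the rule (a hedged illustration, not anything from the source): spend the first ~37% of the pool just looking, then commit to the first candidate who beats everyone seen so far. Applicant quality is treated as a simple numeric score for illustration.

```python
import random

def thirty_seven_percent_rule(candidates):
    """Look-then-leap: observe the first ~37% without committing, then take the
    first candidate who beats everyone seen during the look phase."""
    n = len(candidates)
    look = int(round(0.37 * n))                              # size of the no-commit "look" phase
    best_seen = max(candidates[:look], default=float("-inf"))
    for score in candidates[look:]:
        if score > best_seen:
            return score                                     # leap: commit to the first improver
    return candidates[-1]                                    # otherwise you're stuck with the last one

# Illustrative run: 20 applicants with random quality scores.
pool = [random.random() for _ in range(20)]
print(thirty_seven_percent_rule(pool))
```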

2.3. APPLICATIONS

2.3.1. Hiring

2.3.2. Intake

2.3.3. Negotiation?? / Waiting Cost

2.3.4. Marriage???

3. Dave

3.1. The Lean Law Firm

4. Networking

4.1. Netflix

4.1.1. During peak hours, almost 10% of upstream internet traffic is ACKs from users streaming Netflix.

4.2. We think our problem, or one of them, is that we are constantly connected. But that's not the problem; the problem is that we are always buffered.

4.2.1. People used to knock on the door and go away if there was no response.

4.2.2. But now...

4.2.2.1. Emails build up in a queue

4.2.2.2. voicemails, etc.

5. Explore / Exploit

5.1. THE PROBLEM

5.1.1. valuing present more highly than future = discounting (bird in the hand)

5.2. THE ALGORITHM

5.2.1. Gittins Index

5.2.1.1. Multi-Arm Bandits

5.2.1.1.1. Win-Stay, Lose-Shift
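
A small sketch of the Win-Stay, Lose-Shift heuristic for a two-armed bandit, as commonly described: keep pulling an arm while it pays off, switch the moment it doesn't. The payout probabilities are invented for illustration.

```python
import random

def win_stay_lose_shift(payout_probs, pulls=1000):
    """Two-armed bandit: stay on the current arm after a win, switch arms after a loss."""
    arm, total_wins = 0, 0
    for _ in range(pulls):
        won = random.random() < payout_probs[arm]
        total_wins += won
        if not won:
            arm = 1 - arm          # lose -> shift to the other arm
    return total_wins

print(win_stay_lose_shift([0.6, 0.4]))   # illustrative arms: 60% vs 40% payout chance
```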

5.2.1.2. geometric discounting of future reward

5.2.1.2.1. If you have a 1% chance of getting hit by a bus on any given day, you should value tomorrow's dinner at 99% of the value of tonight's
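
Extending the arithmetic of the bus example, the geometric discount on a reward n days out would be:

```latex
% 1% per-day chance the game ends ("hit by a bus"), so each day is worth 99% of the last:
V_n = 0.99^{\,n}\, V_0
\qquad\text{e.g.}\qquad
V_1 = 0.99\, V_0, \quad V_{30} \approx 0.74\, V_0
```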

5.3. APPLICATIONS

5.3.1. Marketing / Websites

5.3.1.1. AB Testing

5.3.2. Negotiation

5.3.2.1. Making the same dollar amount of concessions in the first 20% of moves as in the last 80% of moves.

5.3.2.2. Discounting has a negotiation application: the value of a settlement tomorrow is less than the value of one today. Bird in the hand.

5.3.2.2.1. And that diminishing value has to be figured into the calculation of what offer to accept, and possibly when to accept it.

5.3.2.2.2. A bird in the hand is worth less tomorrow than today.

5.3.2.3. When / Whether to Settle at All...

5.3.3. Case selection / career choices.

5.3.3.1. Late in the game, you should be purely exploiting

6. Sorting

6.1. THE PROBLEM

6.1.1. Balancing the TIME it takes to SORT with the value of Sorting

6.1.2. Sorting is central and essential to human perception of information.

6.1.3. What is the minimum effort required to make order?

6.2. THE ALGORITHMS

6.2.1. Mergesort

6.2.2. Bucket Sort
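
A minimal sketch of mergesort, one of the two algorithms named above: split the list in half, sort each half recursively, then merge the sorted halves.

```python
def mergesort(items):
    """Split the list in half, sort each half recursively, then merge the sorted halves."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = mergesort(items[:mid]), mergesort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):        # merge: repeatedly take the smaller front element
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]           # append whatever remains of either half

print(mergesort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]
```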

6.3. APPLICATIONS.

6.3.1. Google is really a sorting machine... "THE TRUNCATED TOP (10) OF AN IMMENSE SORTED LIST IS IN MANY WAYS THE UNIVERSAL USER INTERFACE"

6.3.2. Pretrial / Case / Trial Organization

6.3.3. Trial / Mediation Presentation

6.3.4. Motions Presentation

7. Bayes's Rule

7.1. Predicting the Future

7.1.1. Gott and the Berlin Wall: making a prediction from a single data point, i.e., inference from a single observation (prediction from small data).

7.1.1.1. Copernican Principle

7.1.1.1.1. The moment Gott encountered the Wall was not a special moment in the Berlin Wall's lifetime.

7.1.1.1.2. His arrival was equally likely to fall at any moment in the Wall's existence.

7.1.1.1.3. On average, his arrival should have come precisely at the halfway point of the Wall's life.

7.1.1.1.4. So the best guess for its total duration is to take how long it has lasted so far and double it.

7.1.1.1.5. Really, this is just an instance of Bayes's Rule.

7.1.1.1.6. The Copernican principle is only useful with an uninformative prior, i.e., when we know literally nothing at all. With real prior knowledge it misfires: it would suggest a 90-year-old man will live to 180.
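
A tiny sketch of the doubling estimate just described (illustrative only; the eight-year figure for the Wall in 1969 is supplied here, not taken from the map):

```python
def copernican_estimate(age_so_far):
    """Uninformative-prior guess: expected total lifespan is about twice the age observed so far."""
    return 2 * age_so_far

# The Berlin Wall was roughly 8 years old when Gott saw it in 1969 (figure supplied for illustration).
print(copernican_estimate(8))    # guess: ~16 years total
# The failure mode noted above: with a real prior (human lifespans), the rule is absurd.
print(copernican_estimate(90))   # predicts a 90-year-old will live to 180
```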

8. Overfitting

8.1. PROBLEM

8.1.1. How many factors do you use to make a prediction: one, two, n?

8.2. ALGORITHMS

8.2.1. A nine-factor model can actually be inferior to a two-factor model.

8.2.2. Regularization

8.2.3. Cross-Validation

8.2.4. Using constraints that penalize complexity

8.2.4.1. Lasso algorithm.

8.2.4.2. EARLY STOPPING

8.2.4.2.1. The best prediction algorithms start with the single most important factor and then layer in the less important ones.
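
A hedged numpy sketch of the point above, using synthetic data: a high-order (nine-degree) polynomial can fit the training points more closely than a two-degree one and still do worse on held-out data, which is what cross-validation is designed to catch.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 20)
y = 3 * x + 1 + rng.normal(0, 0.2, size=x.size)     # noisy data from a truly simple (linear) relationship

idx = rng.permutation(x.size)
train, test = idx[:15], idx[15:]                     # simple hold-out split (a crude form of cross-validation)
for degree in (2, 9):
    coeffs = np.polyfit(x[train], y[train], degree)  # more coefficients = more capacity to chase the noise
    mse = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
    print(f"degree {degree}: held-out error = {mse:.3f}")
# Typically the degree-9 fit hugs the training noise and scores worse on the held-out points.
```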

8.3. APPLICATIONS

8.3.1. We can make better decisions by deliberately thinking and doing less. We naturally gravitate towards the most important factors.

8.3.1.1. When we're truly in the dark, the best-laid plans will be the simplest ones.

8.3.1.2. Likewise, when the data is noisy, paint with a broad brush.

8.3.1.3. A LIST THAT FITS ON ONE PAGE IS FORCED REGULARIZATION.

8.3.1.4. Application: brainstorming. The further ahead you're planning, the thicker the pen you should use on the whiteboard (p. 167).

9. Relaxation

9.1. Abraham Lincoln, Esq.

9.2. Minimum Spanning Tree

10. Randomness

10.1. THE PROBLEM

10.1.1. What's the probability that a set of facts / shuffled deck will yield a winnable game?

10.1.2. Predicting outcomes where there are multiple, interconnected (and often subjective) variables.

10.1.2.1. Example?

10.2. THE ALGORITHMS

10.2.1. Replacing exhaustive probability calculations with sampled simulations.

10.2.1.1. In a sufficiently complicated problem, actual sampling is better than an examination of all the chains of possibilities.

10.2.1.2. Laplace -- when we want to know something about a complex quantity, we can estimate its value by sampling from it.

10.2.1.2.1. We picture CPUs marching through problems one step after the other in order... but in some cases randomized algorithms produce better results.

10.2.1.2.2. The key is knowing WHEN to rely on chance.

10.2.2. Metropolis Algorithm

10.2.2.1. Metropolis Algorithm: your likelihood of following a bad idea should be inversely proportional to HOW BAD an idea it is.
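
A minimal sketch of the Metropolis acceptance rule as commonly stated: always keep an improvement, and keep a worse move only with a probability that shrinks the worse it is. The temperature parameter and the example costs are assumptions of the sketch.

```python
import math, random

def metropolis_accept(current_cost, proposed_cost, temperature=1.0):
    """Always accept an improvement; accept a worse move with probability
    exp(-(how much worse) / temperature), so the worse the idea, the less likely we follow it."""
    if proposed_cost <= current_cost:
        return True
    return random.random() < math.exp(-(proposed_cost - current_cost) / temperature)

print(metropolis_accept(10.0, 10.5))   # slightly worse: accepted fairly often
print(metropolis_accept(10.0, 20.0))   # much worse: almost never accepted
```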

10.2.3. Monte Carlo Method

10.2.3.1. Excel Functions
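
A hedged Monte Carlo sketch in Python rather than Excel, in the spirit of replacing exhaustive probability calculations with sampled simulations: simulate a case outcome many times to estimate its expected value. The win probability, damages range, and costs are invented placeholders, not figures from the source.

```python
import random

def simulate_case(trials=100_000, p_win=0.6, low=50_000, high=200_000, costs=30_000):
    """Estimate the expected net recovery by sampling outcomes instead of enumerating every branch."""
    total = 0.0
    for _ in range(trials):
        award = random.uniform(low, high) if random.random() < p_win else 0.0
        total += award - costs
    return total / trials

print(f"Estimated expected net recovery: ${simulate_case():,.0f}")
```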

10.2.4. Hill Climbing Algorithm

10.2.4.1. Jitter: if it looks like you're stuck (income-wise, etc.), make a few small RANDOM changes and see what happens. Then go back to hill climbing.

10.2.4.2. From hill climbing: even if you are in the habit of sometimes acting on bad ideas, you should ALWAYS act on good ones.
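
A small sketch of hill climbing with jitter as described above: keep any neighboring change that improves the objective; after stalling for a while, make a small random change and resume climbing. The objective function, step sizes, and patience threshold are arbitrary stand-ins.

```python
import random

def objective(x):
    """Arbitrary stand-in for the thing we want to maximize (income, etc.)."""
    return -(x - 3.0) ** 2 + 9.0

def hill_climb_with_jitter(x=0.0, steps=300, step=0.1, patience=10):
    best, stalled = x, 0
    for _ in range(steps):
        candidate = x + random.choice([-step, step])          # try a small neighboring move
        if objective(candidate) > objective(x):
            x, stalled = candidate, 0                         # climb: keep the improvement
        else:
            stalled += 1
        if stalled >= patience:                               # looks stuck? make a small random change
            x, stalled = x + random.uniform(-0.5, 0.5), 0     #   ("jitter"), then go back to climbing
        if objective(x) > objective(best):
            best = x
    return best

print(hill_climb_with_jitter())   # converges near the peak at x = 3
```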

10.3. APPLICATIONS

10.3.1. Negotiation

10.3.2. Breaking out of a Rut

11. Game Theory

11.1. THE PROBLEM

11.1.1. What's unique about litigation?

11.1.2. The Price of anarchy

11.1.2.1. measures the gap between cooperation and competition.

11.1.2.1.1. Prisoner's Dilemma

11.1.3. Idea of "Value"

11.1.3.1. It's not really what people think it's worth, but what people think OTHER people think it's worth.

11.1.4. The problem of Recursiveness

11.1.4.1. Family Feud

11.1.4.1.1. what does average opinion expect average opinion to be?

11.1.4.1.2. Any time a person or machine simulates the workings of itself or another person, it maxes itself out.

11.1.4.1.3. Recursion is theoretically infinite

11.2. THE ALGORITHMS

11.2.1. NASH Equilibrium

11.2.1.1. A Nash equilibrium always exists in two-player games (at least in mixed strategies).

11.2.1.2. When we find ourselves going down the rabbit hole of recursion, we can step out of our opponent's head and look for the equilibrium: the best strategy assuming rational play.

11.2.1.3. Dominant Strategy

11.2.1.3.1. A strategy that avoids recursion altogether by being the best response to the opponent's possible strategies, regardless of what they are.

11.2.1.4. Here's the paradox: the equilibrium strategy for both players (both cooperating with the cops) does not lead to the BEST result for both players (both keeping their mouths shut). (See the payoff sketch below.)

11.2.2. Mechanism Design: Change the Game

11.2.2.1. Reverse game theory: by changing the consequences for the worse, you can make the result better for everyone (e.g., the Mafia don tells the prisoners that if they cooperate with the cops they die; both keep their mouths shut and both walk).

11.2.2.1.1. By reducing the number of options, behavioral constraints make certain kinds of decisions less computationally challenging.
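
A compact sketch of both points above: with conventional prisoner's dilemma payoffs (years in prison; the numbers are textbook conventions, not from the source), talking is each player's dominant strategy and the Nash equilibrium even though mutual silence is better for both; change the payoffs the way the don does and the equilibrium flips to mutual silence.

```python
# Payoffs are years in prison (lower is better), indexed payoff[my_move][their_move].
# "silent" = keep your mouth shut; "talk" = cooperate with the cops.
def best_response(payoff, their_move):
    """The move that minimizes my sentence against a fixed opponent move."""
    return min(("silent", "talk"), key=lambda my_move: payoff[my_move][their_move])

def equilibria(payoff):
    """Strategy pairs where each move is a best response to the other (Nash equilibria)."""
    moves = ("silent", "talk")
    return [(a, b) for a in moves for b in moves
            if a == best_response(payoff, b) and b == best_response(payoff, a)]

standard = {"silent": {"silent": 1, "talk": 10}, "talk": {"silent": 0, "talk": 5}}
print(equilibria(standard))   # [('talk', 'talk')]: talking dominates, both serve 5 years

# Mechanism design: the don makes talking fatal (modeled as an enormous cost).
with_don = {"silent": {"silent": 1, "talk": 10}, "talk": {"silent": 999, "talk": 999}}
print(equilibria(with_don))   # [('silent', 'silent')]: both stay quiet and serve only the minor charge
```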

11.3. APPLICATIONS

11.3.1. Flip the game

11.3.2. Give some value to Irrationality

11.3.2.1. Lots of things override rational decisionmaking.

11.3.2.1.1. Revenge almost never works out for the seeker, yet someone who will respond with irrational vehemence to being taken advantage of is, for that very reason, more likely to get a fair deal.

11.3.3. Understand (and use) herd mentality

11.3.3.1. Cascades occur when we misinterpret what others think based on what they do.

11.3.3.2. Use value of "precedent" in cases.

12. Caching

12.1. PROBLEM

12.1.1. “It is of the highest importance not to have useless facts crowding out the useful ones.”

12.1.2. What to keep/store, and what to get rid of.

12.1.3. The way the world forgets: Ebbinghaus. Memory is not a problem of storage but of organization. The mind has an essentially infinite amount of storage, but only a finite time to search it.

12.2. ALGORITHMS

12.2.1. LRU (Least Recently Used): evicting the item that has gone the longest untouched.
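
A minimal LRU sketch using Python's OrderedDict: every access moves an item to the "most recent" end, and when the cache is full we evict from the "least recent" end, i.e., the item that has gone the longest untouched. The capacity and file names are illustrative.

```python
from collections import OrderedDict

class LRUCache:
    """Evict the least recently used item when the cache is full."""
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # touched -> now the most recently used
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict the longest-untouched item

cache = LRUCache()
for f in ["Smith file", "Jones file", "Lee file", "Park file"]:
    cache.put(f, "...")
print(list(cache.items))   # "Smith file" has been evicted
```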

12.3. APPLICATIONS

12.3.1. Filing: simply return the last-used file to the left side of the drawer rather than re-inserting it in order, because it's the one you're most likely to need again.

12.3.2. Tossing something recently used onto the top of the pile is the closest you can come to clairvoyance. That pile is basically LRU.

12.3.3. (LRU) The biggest, most important, and hence MOST used concepts go at the top of the list (the Top 10).

13. The Inspiration

13.1. So, what's an algorithm?

13.1.1. Recipe

13.1.1.1. Rubik's Cube