Tuesday, March 13, 2018

Opportunity costs


Opportunity costs can be defined as "The loss of other alternatives when one alternative is chosen", or as "The value of the best alternative forgone when making a decision."

Some opportunity costs are easier to understand than others

Everything you do has opportunity costs. You can't have everything. When you choose something, other possibilities are not available in that moment. For example, if you want to work, you cannot watch TV. You always need to compare the alternatives with each other. Opportunity costs can differ from the cost of acquiring things. For example, when you get something for free, your loss comes from lost time, space, effort, etc.

Counting or approximating the cost can be simple or extremely hard. Easy choices, like choosing between a brand-label product that costs $1.50 and a similar product that costs $1.00, make counting opportunity costs easy. The opportunity cost of choosing the brand label is $1.50 - $1.00 = $0.50. Simple calculations like this are useful tools for everybody. When complexity rises, opportunity costs are harder to calculate. For example, when you weigh working overtime against spending the time at home with your family, the opportunity costs are very hard to calculate. Calculations also become harder when the options are plentiful, or when each option has several factors that change or are hard to put a value on. For example, choosing a laptop means comparing different models, brands, and characteristics, and on top of those you have to consider all the different prices.
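
To make the arithmetic concrete, here is a minimal sketch in Python. All the prices, factors, and weights are invented for illustration, and the weighting itself is a judgment call, not part of any standard method.

    # A minimal sketch of opportunity-cost arithmetic (all numbers invented).

    # Easy case: brand label vs. a similar no-name product.
    brand, generic = 1.50, 1.00
    print("Opportunity cost of the brand label:", brand - generic)  # 0.50

    # Harder case: laptops with several factors. Prices enter as negative values,
    # and the weights express how much each factor matters to you.
    laptops = {
        "model_a": {"price": -900, "battery_hours": 10, "speed_score": 7},
        "model_b": {"price": -700, "battery_hours": 6, "speed_score": 8},
    }
    weights = {"price": 1.0, "battery_hours": 20, "speed_score": 50}

    def score(option):
        return sum(weights[k] * v for k, v in option.items())

    best = max(laptops, key=lambda name: score(laptops[name]))
    print("Best option by this scoring:", best)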

"If a cow had wheels, it would be delivering your milk"

Most opportunity costs come with ifs. You have to consider the uncertainty when you are trying to figure them out. If you are using statistics and/or probabilities to calculate opportunity costs, you have to understand what you are doing. You cannot rely on averages when they do not matter. Expected payoffs are not always the same as averages. For example, stock indices can give you a real annual return of about 7% on average. Comparing that number with saving money and calculating opportunity costs from it gives you wrong results when stocks are very expensive or very cheap.

Most often, there is some uncertainty in the calculations. And when there is uncertainty, you have to understand the need for a margin of safety. Ifs create errors, and these errors can be far larger than you think. This is really important when you are thinking about replacing something that works at least on average. The new opportunity has to be much better than the previous thing. The famous investor John M. Templeton talked about requiring a 50% better opportunity before changing an investment vehicle for a better one. I am not saying you should do the same. 50% is just one example, about investing and about one person. You should figure out your own margins in different situations. They can be larger than you think.

Some instructions for calculations

Keep things simple. The more options you include, the harder the calculations become. You should leave the least probable options out of your calculations. In economics, you only calculate the opportunity cost compared to the second-best alternative. This works well for simple options involving money. I wouldn't use only the second-best option all the time, but figure out what is best for you. You also shouldn't compare apples with oranges: you can't calculate or estimate opportunity costs for two totally different options. If you think you cannot calculate or estimate opportunity costs fairly easily, forget them, especially when you are talking about small decisions. Calculating opportunity costs has opportunity costs too.

Have a nice end of the week!

Tuesday, March 6, 2018

Risk

Risk can be defined as "A potential of gaining or losing something of value", or as "An exposure to the chance of injury or loss."

Personal risks

First, I would like to say that risks affect you on many different levels. For example, they affect you through corporations, nations, and global events like wars. I will keep things at the personal level here. Humans are very good at understanding risks related to their survival, especially when their intuitive decision-making systems are on. You can avoid an imminent threat to your survival without even noticing it, for example by changing direction from your normal route because something does not feel right. Survival in these cases means not hurting yourself physically in an imminent danger. You are not as good at avoiding long-term risks. The modern world has fewer imminent threats than your brain thinks, and this creates many problems that even risk management experts do not understand.

You have many personal risks besides the risk of physical harm. Financial risk is maybe the most common risk you think about: losing your job or being unable to pay your debt can cause you harm. You may lose your reputation if you do something really stupid or someone spreads rumours about you. You also run the risk of not being able to adapt to the changing world. Doing what you have always done is your path of least resistance, and changing things is hard even when it is necessary. People are also increasing their technology risks, because their dependence on technology is rising. Personal risks can compound over the long term. Taking small but unnecessary risks without suffering the consequences quickly may lead to severe problems later on. For example, eating crappy food may not cause you any harm for decades, until one day you get a heart attack without any warning.

Risk Management

Risk management is about probabilities; you need to understand them to understand risk. Repeatable events with fixed probabilities are easier to understand. For example, risks at the roulette table or in a lottery are fairly easy to quantify. When you cannot calculate the probability of something happening, you cannot really understand the risk. Uncertainty is not well understood. Managing risks without understanding power laws is one of the most common causes of financial destruction, for institutions and at a personal level. The other common cause is not understanding complex systems and the second-order effects in them.
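
As a concrete example of an easily quantifiable risk, here is a minimal sketch of the roulette case, assuming the standard European rules of 37 pockets and a 35-to-1 payout on a single number:

    # Expected payoff of a 1-unit bet on a single number in European roulette.
    p_win = 1 / 37
    p_lose = 36 / 37
    expected_payoff = p_win * 35 + p_lose * (-1)
    print(round(expected_payoff, 4))  # about -0.027: you lose roughly 2.7% of every bet on average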

You have to consider your risk appetite too. Some people cannot sleep well with moderate risks in their lives, and some people cannot sleep well without them. Risks are not all bad: progress comes from taking risks, some of them managed and some of them not. Humans have developed as a species by avoiding risks concerning survival, so risk-seeking means working against your basic instincts. It is hard, but often worth the effort. When you understand the risks you are taking, it is easier to get better results. You have to ask yourself some questions to understand risks better:

  • What are the consequences of not doing it? What will it cost you? What are the unwanted outcomes that could come true? How will you react if they happen to you?
  • What are the positive consequences? How will you benefit? How will you react? How do you feel about the potential benefits now?

Risks can also be quantified by the impact of their consequences. Use a simple, reasonable scale for rating the impact of risks. For example, you can use a three-level scale: risks with high, medium, or low impact. Think about high-impact risks all the time. Risks with medium impact are not as important, but you should check them regularly. Low-impact risks are not that relevant, but you should still check them occasionally. Remember that low- and medium-impact risks can become high risks through compounding, which here means taking many small risks many times.
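
A minimal sketch of such a three-level scale; the impact numbers, thresholds, and example risks below are invented placeholders, not a standard:

    # Classify risks by impact and decide how often to review them.
    # Impact is a rough 1-10 judgment; the thresholds are arbitrary examples.
    def review_interval(impact):
        if impact >= 7:
            return "high impact: think about it all the time"
        elif impact >= 4:
            return "medium impact: check regularly"
        else:
            return "low impact: check occasionally"

    for name, impact in [("job loss", 9), ("laptop breaks", 4), ("missed bus", 1)]:
        print(name, "->", review_interval(impact))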

Asymmetric risk

There are two kinds of asymmetric risk: good ones and bad ones. An asymmetric risk means that the reward is a lot bigger than the risk taken, or a lot smaller. In other words, the expected payoff is high or low relative to the risk taken. Nobody wants to take an asymmetric risk where the expected payoff is crappy. Most often this happens because the risk taker does not understand what he or she is doing, and it happens to the most respected experts too. Financial market participants, especially, misunderstand risks. One reason is that they have not noticed the power laws in the markets, or do not understand them. Many assumptions about returns in financial markets are made by thinking in terms of a normal distribution. The problem is that market returns do not really follow it. Most of the profits are made in a small number of stocks or during a small number of days, etc. In other words, extreme outcomes in financial markets have higher probabilities than a normal distribution tells you.
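
A small simulation can show the gap between normal-distribution thinking and fat tails. The Student-t distribution below is just a convenient stand-in for "something with heavier tails than the normal"; real market returns follow neither exactly.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    normal = rng.standard_normal(n)
    fat_tailed = rng.standard_t(df=3, size=n)  # heavier tails than the normal
    fat_tailed /= fat_tailed.std()             # same scale, for a fair comparison

    for name, x in [("normal", normal), ("fat-tailed", fat_tailed)]:
        share = np.mean(np.abs(x) > 4)         # probability of a 4-sigma move
        print(f"{name}: P(|move| > 4 sigma) ~ {share:.6f}")
    # The fat-tailed series produces many times more "extreme" observations than the normal one.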

It is easy to state that you should avoid all asymmetric risks where the downside is enormous compared to the expected payoff. For example, not going to see a doctor when you have severe chest pain: staying home can cause a death, and going to the doctor can save you. You can also take lots of asymmetric risks where the probabilities are against you. When the expected payoff is high, you need fewer successes. This does not mean that every risk pays off; you should be ready for many losses with these kinds of bets. Overall, the expected payoff will be good in the long run.
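
Here is a minimal sketch of repeatedly taking a low-probability, high-payoff bet; the win probability, payoff, and cost are invented numbers chosen so that the expected payoff is positive.

    import random

    random.seed(1)
    p_win, payoff, cost, trials = 0.10, 20.0, 1.0, 10_000  # invented numbers

    results = [payoff - cost if random.random() < p_win else -cost for _ in range(trials)]
    print("losing bets:", sum(r < 0 for r in results))             # most individual bets lose
    print("average result per bet:", round(sum(results) / trials, 2))
    # The expected payoff per bet is 0.10 * 20 - 1 = +1.0, so the long-run average is positive
    # even though roughly nine bets out of ten are losses.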


Smart Risk, Andrew Holmes
Risky Strategy, Jamie MacAlister
Fooled by Randomness, Nassim Taleb

There are some psychological biases that affect risk taking. I will get into those biases later.

Have a nice end of the week!


Wednesday, February 28, 2018

Power laws


A power law can be defined as "A relationship between two quantities such that one is proportional to a fixed power of the other". It can also be defined as "A relative change in one quantity results in a proportional relative change in another".

Basics about the scales of power law

When you find out the value of the power in a power law, you can see how things scale. When the power is 1, the dependent variable doubles when the independent variable doubles. When the power is more than 1, the dependent variable more than doubles; for example, when the power is 2, the dependent variable quadruples when the independent variable doubles. If the power is below 1, the dependent variable less than doubles when the independent variable doubles.
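
A quick sketch of how doubling the independent variable plays out for a few different powers:

    # y = x ** k: how much y changes when x doubles, for different powers k.
    for k in (1, 2, 0.75):
        ratio = 2 ** k  # y(2x) / y(x) for a pure power law
        print(f"power {k}: doubling x multiplies y by {ratio:.2f}")
    # power 1: 2.00 (doubles), power 2: 4.00 (quadruples), power 0.75: ~1.68 (less than doubles)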

Power laws in nature

You can see many power laws at work in nature. For example, animals are roughly cube shaped. When an animal's linear dimensions double, the amount of skin (surface area, m²) goes up by a factor of 4, while at the same time the volume of its guts (m³) goes up by a factor of 8. Animals lose heat through their skin and generate it through their guts. As animals grow, their bone structure has to change, because the bones' ability to support the animal grows more slowly than the animal's weight; compare a square with a cube. What this basically means is that heavy animals need strong bones to support their weight. For example, an elephant has thick legs, because they carry a lot of weight.

The metabolic rates of animals scale with their mass to the ¾ power. This tells you the amount of energy an animal needs to survive. It basically means that the larger you are, the less energy you need per unit of mass. The same kind of scaling laws apply to oxygen intake and heart rate: breathing and heart rates scale with mass to the -¼ power. These power laws are approximations; they do not give exact truths but are estimates based on existing data. They also tell you about the physical limits of living things. You cannot find a giant with thin legs, or live forever.
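
A minimal sketch of the ¾-power metabolic scaling; the proportionality constant is ignored (set to 1) so only the ratios matter, and the masses are rough round figures.

    # Kleiber-style scaling: metabolic rate ~ mass ** 0.75 (constant ignored).
    animals = {"mouse": 0.02, "human": 70.0, "elephant": 5000.0}  # mass in kg, rough values

    for name, mass in animals.items():
        rate = mass ** 0.75
        per_kg = rate / mass  # equals mass ** -0.25
        print(f"{name}: relative metabolic rate per kg ~ {per_kg:.2f}")
    # The larger the animal, the less energy it needs per unit of mass.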

Power laws in human constructions like cities

These laws apply to human behavior, like wars, and to human constructions, like languages and cities. For example, Lewis Fry Richardson found a power law about wars: the higher the number of people killed in a war, the longer it takes for another war as destructive as it to occur. You can find the same kind of power law at work in languages. How often a word is used is described by an exponent of -1: the second most common word in a language appears half as often as the most common word. The same power law applies to the populations of cities within a country and to the sizes of companies. The largest city in a country has approximately twice the population of the second largest city. Many power laws have been found in city sizes. For example, the bigger the population of a city, the less gasoline each person uses and the less road they need. All these power laws can be used in planning future cities.
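
The word-frequency pattern with exponent -1 is known as Zipf's law. A tiny sketch with an idealized, invented count for the most common word:

    # Idealized Zipf's law: the item at rank r appears about 1/r as often as the item at rank 1.
    most_common = 100_000  # invented count for the most common word
    for rank in (1, 2, 3, 10):
        print(f"rank {rank}: ~{most_common / rank:,.0f} occurrences")
    # Rank 2 appears half as often as rank 1, rank 3 a third as often, and so on.
    # Roughly the same 1/rank pattern fits city populations within a country.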

Pareto's law

Pareto's law means that a minority of causes, inputs, or efforts usually leads to a majority of the results, outputs, or rewards. The rule states that there is a built-in imbalance between causes and results, inputs and outputs. Sometimes this rule is described as the 20/80 rule: for example, 20% of the customers bring 80% of the profits for the company. I wouldn't describe Pareto's law only with the 20/80 split, because the proportions vary so much. For example, only a small percentage of authors sell almost 100% of the books. You have to invert when you are using Pareto's law too. For example, you need to understand that most of the bad things that happen have only a few causes; most of your losses come from a few sources. These rules apply to your relationships too: most people are insignificant for your well-being, but a few people cause most of the bad things in your life. Pareto's law also means that small changes or lucky events may give big results. For example, when Oprah has recommended a product, its sales have gone through the roof.
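
A small simulation of that imbalance; the Pareto distribution and its parameter below are invented stand-ins for "customer revenues", so the exact split will differ from real data.

    import numpy as np

    rng = np.random.default_rng(0)
    revenues = rng.pareto(a=1.2, size=10_000) + 1  # heavy-tailed "customer revenues"

    revenues.sort()
    top20 = revenues[-len(revenues) // 5:]         # the top 20% of customers
    share = top20.sum() / revenues.sum()
    print(f"share of revenue from the top 20% of customers: {share:.0%}")
    # With a heavy enough tail the figure lands somewhere near 80%, but it varies a lot between runs and parameters.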

The universe is full of power laws. You should understand these non-linearities as well as you understand linear cause-and-effect relationships. Power laws are much more common than most people think, and most people never notice them. Using power laws to your advantage is hard, but the results are worth the effort. Most misunderstood risks come from power laws. Do not ignore them.


These power laws are related to misunderstood risks, which will be the topic for next week.


Wednesday, February 21, 2018

Probabilities and statistics

First, I would like to say that I am no mathematician and am only somewhat familiar with probabilities and statistics. I will keep this introduction at a very basic level. There are many better texts about probabilities and statistics, and you should probably look for better sources on these things.


You can define probability as "The extent to which an event is likely to occur, measured by the ratio of the favorable cases to the whole number of cases possible". Probability can also be defined as "The likelihood of a given event's occurrence".

Statistics can be defined as "A branch of mathematics dealing with the collection, analysis, interpretation, and presentation of masses of numerical data". It can also be defined as "A fact or piece of data obtained from a study of a large quantity of numerical data".

Basic things about probabilities

  1. The probability that two events will both occur can never be greater than the probability of each event occurring on its own. For example, the likelihood of meeting a person who is a woman and likes ice cream is never greater than the likelihood of meeting a woman.
  2. If two possible events A and B are independent, then the probability that both A and B occur is equal to the product of their individual probabilities. For example, the probability of two consecutive heads in two coin flips is 0.5*0.5=0.25. Sometimes you need to calculate a conditional probability, for example when an event B can occur only if an event A has happened before. The probability that A happens on its own will differ from the probability that A happens given that B occurs.
  3. If an event can have several distinct outcomes A, B, and so on, then the probability that either A or B will occur is equal to the sum of the individual probabilities of A and B, and the sum of the probabilities of all possible outcomes (A, B, and so on) is 100%. For example, if you roll a six-sided die and want to know the probability of getting either a 1 or a 2, the probability of either one happening is 1/6+1/6=1/3. (A short sketch after this list works through all three rules.)
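
Here is the sketch working through the three rules with coins and dice; the probabilities for the woman/ice-cream example are invented, and independence is assumed purely for illustration.

    # Rule 2: independent events multiply.
    p_heads = 0.5
    p_two_heads = p_heads * p_heads            # 0.25

    # Rule 3: mutually exclusive outcomes add, and all outcomes together sum to 1.
    p_one_or_two = 1/6 + 1/6                   # 1/3 for rolling a 1 or a 2
    p_all_faces = 6 * (1/6)                    # 1.0 (up to float rounding)

    # Rule 1: a joint event can never be more likely than either event alone.
    p_woman, p_likes_icecream = 0.5, 0.8       # invented numbers
    p_both = p_woman * p_likes_icecream        # 0.4, assuming independence
    assert p_both <= p_woman and p_both <= p_likes_icecream

    print(p_two_heads, round(p_one_or_two, 3), round(p_all_faces, 3), round(p_both, 2))
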
These three basic laws of probability form much of the basis of probability theory. You should also remember that you can often use inversion to make probability calculations easier. For example, the probability of rolling the die and getting a 1, 2, 3, 4, or 5 is easier to get by calculating the probability of not getting a 6. The probability of an event occurring always depends on the number of ways it can occur, and counting the ways an event can occur is easier when you understand combinations and permutations. When you hear someone say "an outcome is probable", what you really hear is that the outcome is probable under some set of hypotheses he or she has about the way the world works. Maybe the most important way to use probabilities to your advantage is calculating an expected payoff. You get it by:

Multiplying the probability of each possible outcome by its payoff and adding them all up.

For example, you flip a coin with a friend who pays you $100 if it lands tails and nothing otherwise. The expected payoff is 0.5*$100=$50. You should always try to maximize the expected payoff in whatever you are doing.
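
The same calculation as a tiny sketch, plus a made-up example with more than two outcomes (the probabilities in a payoff table must sum to 1):

    def expected_payoff(outcomes):
        """outcomes: a list of (probability, payoff) pairs."""
        return sum(p * x for p, x in outcomes)

    coin_bet = [(0.5, 100.0), (0.5, 0.0)]           # $100 if tails, nothing otherwise
    print(expected_payoff(coin_bet))                # 50.0

    # An invented three-outcome example.
    project = [(0.6, 0.0), (0.3, 1000.0), (0.1, 10_000.0)]
    print(round(expected_payoff(project), 2))       # 1300.0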

We have a saying in Finland: a lie, a big lie, and statistics. When there are two different parties, like employee and employer organizations, they often interpret the same statistics differently. When this is the case, the truth is usually found somewhere in the middle. You should never take any interpretation of statistics at face value; different incentives give different interpretations. There are so many ways of misinterpreting statistics that I will not get into them now. I will keep things short.

First, you need to understand the sample space, which is the set of all possible outcomes. For example, when you roll a die once, your sample space is 1, 2, 3, 4, 5, and 6. When you work with a large data set, you can help yourself by using one value that describes the average of the whole set. This is called the central tendency, and the mean, median, and mode are ways to describe it. Let's keep this simple and think about the mean only. To find the mean, you add up all the values in the data set and then divide by the number of values you added. The sample space is an important concept in statistics, and this applies especially to things that follow the normal distribution.

Normal distributions are often used to represent random variables whose distributions are not completely known. These distributions do not tell much about individuals; they work better when the data represents bigger groups. A bell curve describes the variation in a normal distribution. Most of the observations are close to the mean, and the curve slopes symmetrically downward on both sides of it. At first the number of observations diminishes fast, and then more slowly, until it is hard to see any change.

The bigger the sample size, the better the sample reflects the underlying population being sampled. The choices for the sample should be made randomly; otherwise the results are useless. A sample size of 100 in a poll or survey gives a margin of error that is too big for most purposes. A sample size of 1,000 usually has a margin of error of around 3%, which is often enough. Repeating a survey with the same sample size does not give exactly the same results, so you should expect some variation.
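
A common rule of thumb is that the margin of error of a simple random sample of size n is roughly 1/sqrt(n) (at 95% confidence, for a proportion near 50%). A quick sketch:

    from math import sqrt

    for n in (100, 1000, 10_000):
        margin = 1 / sqrt(n)  # rough 95% margin of error for a proportion
        print(f"n = {n}: margin of error ~ {margin:.1%}")
    # n = 100 gives about 10%, n = 1000 about 3%, n = 10000 about 1%.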

There is a difference between statistics and probability. Statistics concerns the inference of probabilities based on observed data; probability concerns predictions based on known probabilities.

Briefly about randomness

The sum of a large number of independent random variables should be distributed according to the normal distribution. This is called the central limit theorem. For example, suppose you manufacture 1,000 screws that should each weigh 10 grams. Each screw's weight is the result of many small, independent sources of variation, so according to the central limit theorem the weights of your screws should vary according to the normal distribution. If this is not happening, somebody is probably fabricating the results. There are many random processes whose results look like a bell curve, for example people's heights or how long they live. There are also processes for which the normal distribution is useless, like damages from natural disasters.
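
A minimal simulation of the screw example; the number and size of the error sources are invented, the point is only that each screw's weight is the sum of many small independent errors, so the totals come out roughly normal.

    import numpy as np

    rng = np.random.default_rng(42)
    n_screws, n_error_sources = 1000, 30

    # Each screw's weight = 10 g target + the sum of many small, independent errors.
    errors = rng.uniform(-0.01, 0.01, size=(n_screws, n_error_sources))
    weights = 10.0 + errors.sum(axis=1)

    print("mean weight:", round(weights.mean(), 3))
    print("share within 0.05 g of the target:", np.mean(np.abs(weights - 10.0) < 0.05))
    # A histogram of `weights` would show the familiar bell curve.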

You can be fooled by randomness. Sometimes random processes look like patterns in the data. For example, the so-called hot hand, a shooting streak for a basketball player, is mostly a random pattern. That doesn't mean there is no skill involved; you just have to concentrate on long-term statistics instead of short-term patterns. It is not easy to separate random streaks from real patterns, and among a large group of people there are always random streaks that look like patterns. Our brains do not understand randomness well; they are better at seeing patterns, even where there are none. Humans have a need to feel in control of events, and random events do not satisfy that need, which creates a clash between reality and the need to feel in control.

This is all for now. I will probably add some things to this text at some point. This is a big subject and not an easy one for me.


How Not to Be Wrong, Jordan Ellenberg
The Drunkard's Walk, Leonard Mlodinow
A Man for All Markets, Edward O. Thorp

Have a nice end of the week!


Tuesday, February 13, 2018

Feedback Loops


Donella H. Meadows defines a feedback loop in her book "Thinking in Systems" as "A closed chain of causal connections from a stock, through a set of decisions or rules or physical laws or actions that are dependent on the level of the stock, and back again through a flow to change the stock." A slightly simpler definition is "A path by which some of the output of a circuit, a system, or a device is returned to the input."

Balancing feedback loops

These loops are goal-seeking or balance-seeking. They try to keep stocks stable and within a certain range, and they resist change in systems. If the stock level gets too high, a balancing feedback loop will try to bring it lower; if the stock level is too low, it tries to bring it higher. For example, suppose you have a room in which you want to keep the temperature within a certain range, and a balancing feedback loop that keeps it there. There is always some loss of heat from the room when the temperature outside is colder, and the balancing loop adjusts the heating depending on how much colder it is outside. When the temperature outside is much warmer than in the room, heat comes in from outside and the balancing loop cools the air inside.
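
A toy version of that balancing loop; the goal, outside temperature, leak rate, and heater response are invented numbers, so only the shape of the behavior matters.

    # Balancing feedback loop: the heater nudges the room temperature toward a goal
    # while heat constantly leaks toward the colder outside.
    goal, outside, temp = 21.0, -5.0, 15.0   # degrees Celsius, invented values
    heater_gain, leak_rate = 0.5, 0.02       # how strongly heating responds / how fast heat leaks

    for hour in range(24):
        heating = heater_gain * (goal - temp)       # the balancing correction
        heat_loss = leak_rate * (temp - outside)    # bigger gap to outside -> bigger loss
        temp += heating - heat_loss
    print(round(temp, 1))  # settles a bit below the 21-degree goal because of the constant leak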

These balancing loops do not always work properly. The information they deliver can come too late, be unclear or incomplete, or be hard to understand, and the action they cause can be delayed or ineffective. Stock-maintaining balancing loops must have their goals set to compensate for the draining or inflowing processes that change the level of the stock. All these loops have their own breaking point: the point where the pull of other loops is stronger than the pull of the balancing loop. When your heating system does not have enough power compared to the heat leaking outside, your room gets too cold.

Reinforcing feedback loops

These loops are amplifying or reinforcing. They can cause virtuous or vicious effects, from healthy growth to complete destruction. A reinforcing loop creates a bigger or smaller inflow to a stock than is already there and enhances the direction of change in the stock or system. Reinforcing loops are found wherever a system element can reproduce itself or grow as a constant fraction of itself. They produce exponential growth, which gets faster all the time. For example, the bigger the balance in your bank account, the more money you will get into your account every year, because you earn interest on the interest too. These loops may not change the system until the path of least resistance is overcome; for example, the sales of a new product may not start growing faster until a certain amount of the product has been sold.
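
The bank-account example as a tiny sketch; the starting balance and the 5% interest rate are invented.

    # Reinforcing feedback loop: the bigger the stock, the bigger the inflow it generates.
    balance, rate = 1000.0, 0.05  # invented starting balance and yearly interest rate
    for years in (10, 20, 40):
        print(years, "years:", round(balance * (1 + rate) ** years))
    # 10 years: ~1629, 20 years: ~2653, 40 years: ~7040 -- the growth keeps accelerating.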

Feedback loops are mostly linked together

Single loops are seldom at work in systems. Mostly there are complex patterns of interconnecting loops. A single stock can have many balancing and reinforcing loops pulling in many directions, and a single interconnecting flow can be attached to many different stocks. Such a flow can increase the outflow of one stock and at the same time increase the inflows of many others. These many feedback loops create system behavior that is hard to predict. When you change the functioning or the goal of one loop, you may create changes in many others. In other words, one change can produce a combination of changes instead of one.

Let's keep things simple and concentrate on a one-stock system, like a population with both a balancing and a reinforcing feedback loop. This is one of the most common and important system structures. Start by identifying the most important inflows and outflows of the population: the most important inflow is people being born into the population, and the most important outflow is people dying. This kind of system changes when the relative strength of the loops changes. In a normal situation, more people are born than die, which means the net effect of the loops is a self-reinforcing growth in population.

When there is a sudden humanitarian catastrophe or a civil war in a population, the change reverses into a diminishing population and the balancing loop dominates the behavior of the system. When these changes in the relative strengths of feedback loops happen, the behavior of the system changes. A stock governed by linked balancing and reinforcing loops grows exponentially if the reinforcing loop dominates the balancing one, and declines exponentially if the balancing loop is dominant.
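
A toy one-stock model with both loops; the birth and death rates are invented, and whichever loop is relatively stronger sets the direction of the whole system.

    # One stock (population) with a reinforcing loop (births) and a balancing loop (deaths).
    def simulate(population, birth_rate, death_rate, years=50):
        for _ in range(years):
            population += birth_rate * population - death_rate * population
        return round(population)

    print(simulate(1_000_000, birth_rate=0.03, death_rate=0.02))  # births dominate: growth
    print(simulate(1_000_000, birth_rate=0.01, death_rate=0.03))  # deaths dominate: decline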

Other things to know about feedback loops

You have to remember that the information delivered through a feedback loop can only affect future behavior; a feedback loop cannot deliver a signal fast enough to have an effect in the present. There are three typical delays in the real world. First, a perception delay: for example, a shop owner doesn't react to small changes in sales but only when sales have changed for a longer period, like five days. Second, a response delay: the shop owner makes up only part of any shortage with each new order. Third, a delivery delay: a subcontractor delivers the goods with a delay because he has to process the order and ship it. Changing the length of a delay will likely cause a large change in the system. When the delay is too short, the system behavior will likely oscillate.

Physical, growing systems will eventually encounter limits. Those limits are balancing feedback loops. When the limits are reached, these loops start dominating the system by either strengthening the outflows or weakening the inflows. The limits can be temporary or permanent; eventually the system will adjust to the limit or the limit will adjust to the system. A physical, exponentially growing system has to have at least one reinforcing loop driving the growth and at least one balancing loop limiting it. One interesting and important thing about feedback loops is that systems with similar feedback structures produce similar behaviors, even when their physical parts are different.


Thinking in Systems, Donella H. Meadows
A Crude Look at the Whole, John H. Miller

Tuesday, February 6, 2018

Systems (updated)

I had to update this mental model introduction, because the former text wasn't good enough.


A system is defined as "A set of things working together as parts of a mechanism or an interconnecting network; a complex whole", or as "A set of principles or procedures according to which something is done; an organized scheme or method".


The universe is a system. Our bodies, habits, and skills are systems. Most of the things we do are systems. Some systems consist of smaller subsystems; our bodies, for example, have skin, organs, etc. A latticework of mental models is an example of a thinking system.

Systems basics

All systems have elements, interconnections, and a function or a purpose. Elements are not necessarily physical things; for example, individual characteristics like self-confidence are parts of the system called you. Not everything with different elements is a system, though. You have to ask yourself a couple of questions: Can you identify the parts? Do they affect each other? Do they produce a behavior together that is different from the behavior of the separate individual elements? And does that behavior persist over time in different circumstances? The interconnections are the relationships that hold the elements together. Interconnections can be physical or informational flows. A physical flow can be water going through pipes; a signal of the measured temperature of a room that changes the electric power used for heating is an informational flow. You should always think about the behavior of the whole system, even when you are concentrating on one element or interconnection.

A function or a purpose is not necessarily known consciously; it can be found by examining the system's behavior for a while. You cannot really understand a purpose or a function if you only believe its stated purpose. This is usually the most crucial part of the system: the system changes when its purpose or function changes, even without any changes in elements or interconnections. Changing the purpose or function of a system is one of the best ways to make it better, though depending on the system it can be one of the hardest things to do. A change in the interconnections changes the system too; it can become unrecognizable. Changing the elements of a system does not create big changes, unless the change also alters the purpose or the interconnections.

A stock is the element of the system you can feel, see, or measure at any given time. It can be the temperature in a room, a group of people, or the number of cars in an inventory. Flows are the actions that create change in stocks over time; they change the amount of a stock within the system. If you understand the behavior of stocks and flows over time, you understand a lot about the behavior of the system.

Think about room temperature when the weather outside is cold. The heating system creates an inflow of heat into the room, and bad insulation creates an outflow. A change in heating power changes the temperature inside, as does a change in the temperature outside. When the inflow is equal to the outflow, the temperature stays the same. In other words, the temperature can be increased by decreasing the outflow rate as well as by increasing the inflow rate. These changes are mostly slow.

Stocks work as delays or buffers in a system. The bigger the stocks, the slower the responses to changes, and these changes happen mostly gradually. The speed of change in a system is set by the changes in its stocks. Stocks let inflows and outflows stay independent from each other, and flows can be temporarily out of balance because stocks work as buffers. The volume of a stock can be changed by changing the volumes of its flows. To keep stocks in acceptable ranges, we need feedback processes that manipulate the volumes of the flows.

Self-reinforcing, self-stabilizing feedback loops and delays

If you see a system behave consistently over time, there is likely a mechanism called a feedback loop at work. There is a feedback loop when changes in a stock affect the flows into or out of that same stock. Feedback loops can be either self-reinforcing or self-stabilizing. The expanding universe and interest compounding on a debt are examples of self-reinforcing processes. In self-reinforcing loops, the previous changes in the stock amplify the inflows coming into it or the outflows going out of it. Eventually a self-reinforcing stock reaches a point where the amplification destroys the whole system, or a limit to growth is reached where the growth stops, slows, diverts, or reverses. When that happens, a self-stabilizing feedback loop has formed.

This self-correction keeps the system working without exploding. Every natural system has an optimal growth rate, and we should use it to our advantage. For example, our skin keeps our body from overheating by sweating when the environment is too hot or we are exercising. It can be hard to notice these stabilizing processes, even though they are mostly necessary. We should keep self-reinforcing and self-stabilizing loops, and their interaction, in balance. Most systems have many interacting feedback loops.

All systems have delays, and feedback loops always have some delay. People have a natural tendency to concentrate on the consequences we see right after we have done something, but the second- and third-order consequences may come after a long delay; in complex systems they can come after years. Because of the delays, we may concentrate on symptoms rather than solutions. Some system structures have unrecognized delays, and these can lead to wrong solutions. You should be more aware of the delays, and remember that causes and effects are not always close in time. When you notice a delay in a system, you have to find out whether the length of the delay is too short or too long. When the delay is too short, it leads to too much variation in the outflow of the system; when it is too long, the system can become too stable and ineffective.

Small changes in systems can create big results

This happens in individual circumstances as well as in systems. Well-focused small actions aimed at changing a system can create surprisingly big results. Solving a difficult problem can be a matter of finding the system structure in which a small change delivers big and lasting results. Most of the time, finding these structures is hard. You need to understand the system and how its parts interact with each other; otherwise you will never find the right structures for these high-leverage improvements. You cannot do this without understanding the system as a whole.

Reacting to a change in the system is a lot easier than truly understanding its cause-and-effect relationships and the ways to improve it with small changes. Reacting to a change can be a bad thing: it may cause small changes to the system structure that create bigger problems through self-reinforcing feedback loops. Changing a system structure can have different consequences in the short run and in the long run, and different parts of the system can be affected differently by the same small change. You also have to think about the interactions of the parts before changing anything in one part of the system.

Most of the results in your life depend on the quality of your systems

You mostly focus on the different parts of your systems in a particular moment, without thinking about the system as a whole or any long-term consequences. Your systems can deliver results that exceed the sums of their parts by a large margin, and they can also deliver results that are not even close to the sums of their parts. People using the same systems tend to produce similar results; when you do not change your systems, you cannot expect different results. Most of the time, the results you get are not caused by other people, particular conditions, bad luck, or other explanations. The reason for bad results is the systems themselves, their implementation, or your understanding of them. You should think a lot more about the quality of your systems and their subsystems; then you can get better results. You cannot really improve yourself without improving your systems.

Have a nice week!


Tuesday, January 30, 2018

Evolution

Evolution in biology can be defined as "A change in the gene pool of a population from generation to generation by mutation, natural selection, and genetic drift." We can also have a general definition for evolution, such as "A process of gradual development in a particular situation or thing over a period of time."

Evolution is a gradual, random, and slow process most of the time

All living things started from the birth of a self-copying system with some kind of basic, heritable mutation. Mutations are random errors in copying. Our genes replicate themselves slowly and gradually through the generations. Gradualness is the key feature of evolution: it moves step by step from one stage to the next, and these moves compound over a very long time. This process creates new structures and behaviors, and it has no reason or goal in mind. Humans have evolved from simple cells to conscious animals over billions of years. The process adapts to changes by changing itself.

All mutations have an element of luck. Genes maximize their own survival. Genetic effects have their own time and place, and these effects may be modified by working together with other genes; together they create the chemical environments in our cells. Genes usually have many effects, some of them beneficial and most of them bad, and the harmful effects tend to show up later than the beneficial ones. Species change through these effects all the time, but slowly. You have to consider this slow speed of change when you think about how human beings behave. For example, our brains and bodies have changed very little in the last 10,000 years, while the environment around us has changed a lot more.

Evolution is a self-organizing system

Evolution happens by adding new structures, feedback loops, or rules. This ability to self-organize is the strongest form of system resilience. Self-organization has rules: they tell the system where, how, and what can be changed by adding something to or subtracting something from it, and under what conditions these changes can happen. Biological evolution has a simple set of rules, yet it has created a complex world with a wide variety of species. DNA's genetic code has four different letters, combined into words of three letters. This pattern, and the rules for copying and rearranging it, have not changed in billions of years, and they have created a variety of failed and successful creatures.

DNA is evolutionary raw material. It is a stock of variable information and a means for experimentation, used for selecting and testing new patterns. Spontaneous mutation creates the variety of raw material to select from, and changes in the environment are the testing mechanism that determines which individuals survive and reproduce. For technological evolution, the raw material is scientific understanding, the source of variation is creativity, and the selection mechanisms are the organizations that fund research, or human needs.

Survival of the fittest is not the best definition for evolution

There are some misunderstandings about evolution. The most common one is that the fittest always have the best chances to survive and thrive through time. We should talk more about the most adaptable and cooperative individuals. The fastest, strongest, and biggest do not always cope with changes in the environment. Top-down hierarchies and bigger structures have problems coping with change, and the faster the changes, the bigger their problems. Biological evolution is full of random bottom-up processes.

We are talking about self-replicating genes that are always fighting for the available resources. Genes that are more efficient at getting themselves copied tend to replace the less efficient genes, so in the long run our DNA becomes full of well-surviving genes. These genes are the best at surviving in the average environment of their species, and when the average environment changes, the surviving genes change too. From DNA's point of view, the average environment is the body it visits for that body's lifetime. The other genes are maybe the most important aspect of the environment, but they are not the only aspect that matters. For example, geographical separation can create different species, because conditions change with location. When the conditions in the environment change fast, we can talk about disruption, a model I will introduce later.

Adaptability is very important for non-living things like companies too. Their environments are in a state of constant change, and these changes are often completely random. Many great inventions, like penicillin, have been discovered accidentally. Sometimes the best solution to a problem does not win: VHS beat Betamax even though the latter was the better technology. These examples are all around us. Many researchers have no way of knowing whether they will ever find anything useful. Companies that suffer from unexpected discoveries can be the fittest before them, and after the discoveries some of those companies even go bankrupt. This happens because they cannot adapt to the changing environment and keep applying the rules that worked before to the new situation.

This text is just the tip of the iceberg about evolution. Evolution is all around us and it affects every living thing on earth. I hope I have covered some basics in an understandable way. Please tell me if I have failed.


Thinking in Systems, Donella H. Meadows
River Out of Eden, Richard Dawkins
From Bacteria to Bach and Back, Daniel Dennett
The Evolution of Everything, Matt Ridley