The Origin of Wealth: Evolution and Complexity
by ERIC D. BEINHOCKER
a summary by Bobby Lopez
Contents
The Origin of Wealth: Evolution and Complexity
A. Chapter 4: Sugarscape: A model of how an economy originated
(1) The model setup: simple rules to live by: gather and consume the resources at hand
(2) Results 1: The rich get richer, not because of genetics or birthplace, but as an emergent property: the confluence of all factors
(3) Results 2: Population growth causes wealth growth, improved genetics, population swings, and a widening rich-poor gap
(4) Results 3: The introduction of trade resulted in: 1) an increase in wealth; 2) clustering of trading by region; 3) prices that approached but never reached equilibrium; 4) sub-Pareto-optimal outcomes; and 5) the rich getting richer
(5) Results 4: The introduction of lending resulted in an emergent, hierarchical capital market
B. Chapter 7: How Networks Behave
(1) When networks explode
(2) How Boolean networks function
C. Chapter 8: Emergence: How the Patterns Arise
(1) The mystery of economic and business cycles
(2) Lessons from a beer industry simulation: business fluctuations do not flatten out; they go through storm and calm
(3) Punctuated equilibrium: Are there "keystone" technologies?
(4) A time pattern emerges in nature and the economy: power laws
(5) Why are stock markets so volatile? Because of the market-order and limit-order structure
(6) Conclusion: pattern behavior emerges as a consequence of individual behavioral regularities, institutional traits, and exogenous inputs
D. Chapter 9: How Evolution Works in Economics
(1) Evolution selects the best ideas, or Business Plans
(2) Markets are an evolutionary search mechanism
E. Chapter 14: A New Definition of Wealth: Fit Order
(1) Economic activity consists in creating order
(2) A proposal: three conditions for value creation
This book will argue that wealth creation is the product of a simple, but profoundly powerful, three-step formula—differentiate, select, and amplify—the formula of evolution.
We are accustomed to thinking of evolution in a biological context, but modern evolutionary theory views evolution as something much more general. Evolution is an algorithm; it is an all-purpose formula for innovation.
A. Chapter 4: Sugarscape: A model of how an economy originated
(1) The model setup: simple rules to live by: gather and consume the resources at hand
To picture Epstein and Axtell's model, imagine a group of people shipwrecked on a desert island, except that both the island and the castaways are simulations inside a computer. The computer island is a perfect square with a fifty-by-fifty grid overlaid on top.
The virtual island has only one resource—sugar—and each square in the grid has different amounts of sugar piled on it. The heights of the sugar piles range from four sugar units high (the maximum) to zero (no sugar). The sugar piles are arranged such that there are two sugar mountains, one mountain at the northeast corner and one at the southwest corner, each with sugar piled three and four units high.
The game begins with 250 agents randomly dropped on the Sugarscape.
Each virtual person, or “agent,” is an independent computer program that takes in information from the Sugarscape environment, crunches that information through its code, and then makes decisions and takes actions. In the most basic version of the simulation, each agent on Sugarscape can only do three things: look for sugar, move, and eat sugar.
Each agent has vision that enables it to look around for sugar, and then has the ability to move toward this source of energy. Each agent also has a metabolism for digesting sugar.
Thus, each agent has a basic set of rules that it follows during each turn of the game (a minimal code sketch follows the list):
- The agent looks ahead as far as its vision will allow in each of four directions on the grid: north, south, east, and west (the agents cannot see diagonally).
- The agent determines which unoccupied square within its field of vision has the most sugar.
- The agent moves to that square and eats the sugar.
- The agent is credited by the amount of sugar eaten and debited by the amount of sugar burned by its metabolism. If the agent eats more sugar than it burns, it will accumulate sugar in its sugar savings account (you can think of this savings as body fat) and carry this savings through to the next turn. If it eats less, it will use up its savings (depleting fat).
- If the amount of sugar stored in an agent’s savings account drops below zero, then the agent is said to have starved to death and is removed from the game. Otherwise, the agent lives until it reaches a predetermined maximum age.
- In order to carry out these tasks, each agent has a "genetic endowment" for its vision and metabolism. In other words, associated with each agent is a bit of computer code, a computer DNA, that describes how many squares ahead that agent can see and how much sugar it burns each round. An agent with a slow (good) metabolism needs only one unit of sugar per turn of the game to survive, versus an agent with a fast (bad) metabolism, which requires four. Vision and metabolism endowments are randomly distributed in the population.
- Each agent also has a randomly assigned maximum lifetime, after which a computer Grim Reaper comes and removes it from the game.
- Finally, as sugar is eaten, it grows back on the landscape like a crop, at the rate of one unit per time period.
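Since these rules read almost like pseudocode, they translate directly into a program. Below is a minimal Python sketch of one agent and one turn, assuming the fifty-by-fifty grid wraps around and is stored as a dict of sugar levels plus a set of occupied squares; the names (Agent, step, GRID) and the starting-wealth range are illustrative, not Epstein and Axtell's actual code:

import random

GRID = 50  # the island is a 50x50 lattice

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vision = random.randint(1, 6)      # squares visible N/S/E/W
        self.metabolism = random.randint(1, 4)  # sugar burned per turn
        self.wealth = random.randint(5, 25)     # starting sugar savings

def step(agent, sugar, occupied):
    """Move to the richest visible unoccupied square, eat, metabolize."""
    best = (agent.x, agent.y)
    for dx, dy in [(0, 1), (0, -1), (1, 0), (-1, 0)]:   # N, S, E, W only
        for dist in range(1, agent.vision + 1):
            sq = ((agent.x + dx * dist) % GRID, (agent.y + dy * dist) % GRID)
            if sq not in occupied and sugar.get(sq, 0) > sugar.get(best, 0):
                best = sq
    occupied.discard((agent.x, agent.y))
    agent.x, agent.y = best
    occupied.add(best)
    agent.wealth += sugar.get(best, 0) - agent.metabolism  # credit and debit
    sugar[best] = 0                # the square is stripped; it regrows later
    return agent.wealth >= 0       # False: the agent starved and is removed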
(2) Results 1: The rich get richer, not because of genetics or birthplace, but as an emergent property: the confluence of all factors
As time passes, however, this distribution changes dramatically. Average wealth rises as the agents converge on the two sugar mountains, but the distribution of wealth becomes very skewed, with a few emerging superrich agents, a long tail of upper-class yuppie agents, a shrinking middle class, and then a big, growing underclass of poor agents.
The Pareto distribution is where the so-called 80-20 rule comes from, as roughly 80 percent of the wealth is owned by 20 percent of the people.
The wealth distribution that the simple Sugarscape model produced was just this kind of a real-world Pareto distribution.
First, we can ask, is it nature? Does it have something to do with the genetic endowments of the players? That is, are all the agents with great eyesight and slow metabolisms getting all the wealth? The answer is no.
Are all the agents born on top of sugar mountains getting all the wealth and those with the bad luck of being born in the badlands staying poor? The answer to this is no as well.
How, then, from these random initial conditions do we get a skewed wealth distribution? The answer is, in essence, "everything." The skewed distribution is an emergent property of the system.
All this means that even in Sugarscape, there is no simple cause-and-effect relationship driving poverty and inequality. Instead, it is a complex mix of factors.
It is not easy to come up with solutions for the poverty problem even in the highly simplified world of Sugarscape.
(3) Results 2: Population growth causes wealth growth, improved genetics, population swings, and a widening rich-poor gap
Epstein and Axtell decided to give each agent a tag indicating its age and whether it was male or female. Once an agent reaches "child-bearing age," and if that agent has a minimum amount of sugar savings, he or she is considered fertile. Each period, fertile agents scan their immediate neighborhood of one square to the north, south, east, and west. If they find another fertile agent of the opposite sex, they reproduce. The DNA of the resulting baby agent is then chosen randomly, half from the mother, and half from the father. Thus the child's vision and metabolism characteristics will be some mix of the two parents. In addition, the baby agent inherits wealth from both parents, receiving an amount equal to half the father's wealth plus half the mother's wealth.
The baby agent is born in an empty square next to its mother and father, so if the parents live in a rich or poor sugar neighborhood, the child agent will start its life there as well.
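That inheritance rule is easy to sketch in the same style as before (the fertility checks and the search for an empty birth square are omitted, and whether the parents actually give up the transferred sugar is my assumption, not stated above):

import random

def reproduce(mother, father, birth_square):
    """Create a child agent on an empty square next to a parent."""
    child = Agent(*birth_square)   # Agent class from the earlier sketch
    # each trait comes from one parent or the other, chosen at random
    child.vision = random.choice([mother.vision, father.vision])
    child.metabolism = random.choice([mother.metabolism, father.metabolism])
    # the child starts with half of each parent's sugar savings
    child.wealth = mother.wealth / 2 + father.wealth / 2
    mother.wealth /= 2             # assumption: the parents hand it over
    father.wealth /= 2
    return child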
Then three things began to happen:
- Over time, both average vision and average metabolic efficiency began to climb, as the most fit members had more and more offspring. As the average of these attributes rose, so too did wealth.
- The new birth-death dynamics introduced population swings.
- The introduction of genetic inheritance as well as wealth inheritance across generations further accelerated the trend toward the rich getting richer and the poor getting poorer.
(4) Results 3: The introduction of trade resulted in: 1) an increase in wealth; 2) clustering of trading by region; 3) prices that approached but never reached equilibrium; 4) sub-Pareto-optimal outcomes; and 5) the rich getting richer
They introduced a second commodity, called spice. Each square on the board now had a value for how much sugar it held and a value for how much spice it held.
Epstein and Axtell also tweaked their agents’ metabolisms so that they all required some of each commodity to survive.
As a final step, Epstein and Axtell made it possible for the agents to trade. There was no assumption of a market or an auctioneer as in typical Traditional Economic models. Instead, there was just straightforward bartering between individuals. As agents move around the Sugarscape, they encounter other agents. At each turn in the game, each agent looks one square to the north, south, east, and west and asks any players in its neighborhood if they want to trade. If one agent has a lot of spice and needs sugar and another agent faces the reverse situation, both agents could improve their circumstances by trading.
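Epstein and Axtell's actual trading rule bargains over prices using the agents' marginal rates of substitution; as a simplified stand-in, here is a sketch in which two neighbors swap one unit of sugar for one unit of spice whenever the swap leaves both better off, measuring an agent's welfare as the number of turns it can survive on its scarcer commodity (the sugar, spice, and per-commodity metabolism attributes are assumed extensions of the earlier Agent sketch):

def welfare(agent):
    # turns the agent can survive on its scarcer commodity
    return min(agent.sugar / agent.sugar_metabolism,
               agent.spice / agent.spice_metabolism)

def try_trade(a, b):
    """Attempt a one-unit sugar-for-spice swap in either direction."""
    for seller, buyer in [(a, b), (b, a)]:  # seller gives sugar, gets spice
        if seller.sugar >= 1 and buyer.spice >= 1:
            w_s, w_b = welfare(seller), welfare(buyer)
            seller.sugar -= 1; seller.spice += 1
            buyer.sugar += 1;  buyer.spice -= 1
            if welfare(seller) > w_s and welfare(buyer) > w_b:
                return True                 # both gained: keep the trade
            seller.sugar += 1; seller.spice -= 1   # otherwise undo it
            buyer.sugar -= 1;  buyer.spice += 1
    return False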
The first result was that trading made the Sugarscape society much richer.
Secondly, there was some clustering in the trading networks by geographic region. The combination of geography and population dynamics created heavily trafficked trading routes, the computer equivalent of the ancient Silk Road, as agents shuttled back and forth between the sugar and spice mountains.
Epstein and Axtell could look inside each agent during each period of play and determine how much sugar or spice the agent was willing to buy or sell at a series of possible prices. What resulted was an almost textbook downward-sloping demand curve, along with an upward-sloping supply curve, even though Epstein and Axtell did not explicitly build anything about supply and demand into their model. Also, take into account that the initial conditions were randomly set, but once the model got going, all the behavior was perfectly deterministic.
The actual prices and quantities traded (indicated by the dot in figure 4-4) never settled on the theoretically predicted equilibrium point (at the intersection of the supply and demand X in the figure).
Prices in Sugarscape move dynamically around an “attractor” (a term we will discuss in chapter 5) but never actually settle down into equilibrium.
Epstein and Axtell also found that there was far more trading than there would be if the system were at equilibrium, just as in the real world.
The Sugarscape market, however, operates at less than Pareto optimality. There are always trades that could have happened, that would have made people better off, but didn't. Again, this is because the agents' trades are separated in time and space.
While trade in Sugarscape does "lift all boats," making the society richer as a whole, it also has the effect of further widening the gap between rich and poor.
(5) Results 4: The introduction of lending resulted in an emergent, hierarchical capital market
In Sugarscape, there is only one reason to become a borrower—to have children. Epstein and Axtell introduced a rule that said that an agent can be a lender if it is too old to have children, or if it has more savings than it needs for its own reproductive needs. In turn, an agent can be a borrower if it has insufficient savings to have children, but has a sustainable income of sugar and spice.
What surprised Epstein and Axtell was not that significant borrowing and lending activity occurred, but that a complex, hierarchical capital market emerged. Some agents became simultaneously both borrowers and lenders—in effect, middlemen: Sugarscape had evolved banks! Certain really rich agents took on a wholesale role, lending to middlemen, who then made loans to the ultimate borrowers. In some simulations, the hierarchical chain grew to five levels deep.
These large-scale macro patterns grew from the bottom up, from the dynamic interplay of the local micro assumptions.
B. Chapter 7: How Networks Behave
(1) When networks explode
Economists have long recognized that certain products, such as e-mail, faxes, and telephones, share a property whereby the greater the number of people who use them, the more useful they become. This is called, appropriately enough, a network effect. Traditional Economics, however, has not historically had much to say about why these types of products tend to suddenly catch fire and take off in popularity.
Picture a thousand buttons scattered on a hardwood floor. Imagine you also have in your hand pieces of thread; you then randomly pick up two buttons, connect them with the thread, and put them back down. As you first start out, the odds are that each button you pick up will be unconnected, so you will be creating a lot of two-button connections. As you work away, however, at some point you will pick up a button that is already connected to another button and you will thus be adding a third.
In random networks, the phase transition from small clusters to giant clusters happens at a specific point, when the ratio of segments of thread (edges) to buttons (nodes) exceeds the value of 1 (i.e., on average, one thread segment for every button). One can think of the ratio of one edge to one node as the "tipping point" where a random network suddenly goes from being sparsely connected to densely connected.
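A small simulation makes the tipping point visible. The sketch below scatters random edges over 1,000 "buttons" and reports the size of the largest cluster at several edge-to-node ratios; the union-find bookkeeping is standard, and the exact numbers will vary from run to run:

import random
from collections import Counter

def largest_cluster(n_nodes, n_edges, trials=10):
    sizes = []
    for _ in range(trials):
        parent = list(range(n_nodes))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path compression
                i = parent[i]
            return i
        for _ in range(n_edges):
            a, b = random.randrange(n_nodes), random.randrange(n_nodes)
            parent[find(a)] = find(b)          # tie the two clusters together
        sizes.append(max(Counter(find(i) for i in range(n_nodes)).values()))
    return sum(sizes) / trials

for ratio in (0.5, 0.8, 1.0, 1.2, 1.5):
    print(ratio, largest_cluster(1000, int(1000 * ratio)))
# Below a ratio of 1 the largest cluster stays small; just above it,
# a giant cluster emerges that swallows most of the 1,000 buttons.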
(2) How Boolean networks function
Networks of nodes that can be in a state of 0 or 1 are called Boolean networks. Imagine a string of Christmas tree lights that blink on or off. Each bulb receives inputs from the two bulbs on either side of it telling it whether they are on or off. We can then imagine that each light bulb has a rule that it follows to determine what it does next period, based on the inputs from the other two bulbs.
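As a concrete illustration (my toy construction, not Kauffman's NK-model code), here is a one-dimensional Boolean network of twenty such "bulbs," each updating from its two neighbors through a randomly chosen rule table:

import random

N = 20
state = [random.randint(0, 1) for _ in range(N)]
# one random rule table per bulb: maps (left, right) -> 0 or 1,
# i.e., one of the 16 possible two-input Boolean functions
rules = [{(l, r): random.randint(0, 1) for l in (0, 1) for r in (0, 1)}
         for _ in range(N)]

for t in range(10):
    print("".join(map(str, state)))
    state = [rules[i][(state[(i - 1) % N], state[(i + 1) % N])]
             for i in range(N)]   # all bulbs update simultaneously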
Basically, three variables guide the behavior of such networks:
1) the number of nodes in the network;
2) a measure of how much everything is connected to everything else; and
3) a measure of "bias" in the rules guiding the behavior of the nodes.
Let’s look at each of these in turn and their implications for economic and other types of organizations.
a) Number of nodes in the network: Bigger is usually better
The number of states a network can be in scales exponentially with the number of nodes. A network with 2 nodes can be in four, or 2^2, states: 00, 10, 01, and 11. Likewise, a network with 3 nodes can be in eight, or 2^3, states.
The exponential growth in possible states creates a very powerful kind of economy of scale in any network of information-processing entities. As the size of a Boolean network grows, the potential for novelty increases exponentially. A Boolean network with 10 nodes can be in 2^10, or 1,024, possible states. An organization the size of Boeing also has inherently more headroom for innovation in the future—the larger number of states in the Boeing organizational network means that there are more potential ways for Boeing to make a living than for my corner coffee shop.
b) Number of connections between nodes: complexity catastrophes when a network is big
The mathematics of Boolean networks leaves us with a quandary, however. If large organizations have more headroom for innovation than do small organizations, then why does the mythos of business hold that small organizations out-innovate large ones?
If a network has on average more than one connection per node, then as the number of nodes grows, the number of connections will scale exponentially with the number of nodes.
This means that the number of interdependencies in the network grows faster than the network itself. This, then, is where the problems start to arise. As the number of interdependencies grows, changes in one part of the network are more likely to have ripple effects on other parts of the network. As the potential for these knock-on effects grows, the probability that a positive change in one part of the network will have a negative effect somewhere else also increases.
To illustrate, let's imagine you are the cofounder of a small start-up company with only two departments: product development and marketing. You run product development and have an idea for a new product. So you have a meeting to discuss your plan, the marketing department agrees to it, and you are ready to go.
You grew. You now have an idea for your third-generation product, but something bizarre has happened. You have your usual meeting with marketing, but now, before you get the department's OK, the marketing managers say they have to check the impact on their budget, which was approved by finance. The finance folks say they can't approve your project until they get an estimate from customer service on the cost of the additional support needed. And customer service has to check with marketing to make sure its plans are consistent with the company's brand and pricing strategy. All of a sudden, you have gone from three meetings to ten (if all the permutations occur).
Degrees of Possibility Versus Degrees of Freedom
The problem isn't dumb people or evil intentions. Rather, network growth creates interdependencies, interdependencies create conflicting constraints, and conflicting constraints create slow decision making and, ultimately, bureaucratic gridlock.
We thus have two opposing forces at work in organizations: the informational economies of scale from node growth, and the diseconomies of scale from the buildup of conflicting constraints. Taken together, these opposing forces help us understand why big is both beautiful and bad: as an organization grows, its degrees of possibility increase exponentially while its degrees of freedom collapse exponentially.
This tension between interdependencies and adaptability is a deep feature of networks and profoundly affects many types of systems. This tension creates upper limits on the complexity of organisms.
Hierarchies alleviate the size problem. Organizing the network into hierarchies reduces the density of connections and thus reduces the interdependencies in the network. Hierarchies are critical in enabling networks to reach larger sizes before diseconomies of scale set in. This is why so many networks in the natural and computer worlds are structured as networks within networks.
Counterintuitively, hierarchy can serve to increase adaptability by reducing interdependencies and enabling an organization to reach a larger size before gridlock sets in.
Who would have thought that hierarchy actually saves meetings? Hierarchy does, of course, have its problems; for example, information can degrade as it travels up the chain, the top may become out of touch with the front line, and a poor performer in a senior role can do a lot of damage.
A related move is to give the units within a hierarchical structure more autonomy.
c) Network bias: Increasing predictability allows for more connections per node
According to Kauffman, nonhierarchical networks exhibit spontaneous order at one or two average connections per node, but go into chaos (thus creating cascades of change and the potential for a complexity catastrophe) at four or more connections per node.
Bernard Derrida and Gerard Weisbuch discovered a parameter that could change the point at which this phase transition takes place. They called the parameter bias.
Let's say we pick a bulb and start feeding it 1s and 0s at random. Our input stream will thus be approximately 50 percent 1s and 50 percent 0s. If the output stream was also fifty-fifty 1s and 0s, then we could say that the output was unbiased, but if the output was, say, 90 percent 1s, then we could say it was biased toward 1. A high-bias node is easier to predict.
Derrida and Weisbuch found that the higher the bias, the more densely connected a network can be before the transition to chaos occurs. If the average bias is fifty-fifty, then the transition to chaos happens in the range between two and four average connections per node, as it did in Kauffman's study. If the average bias is closer to 75 percent, then the transition happens above four connections per node. At higher bias levels, the network can go up to six connections per node before it trips into chaos. The key point is, the more regularity there is in the behavior of the nodes, the more density in connections the network can tolerate.
In an organizational context, we can think of bias as being a measure of predictability. If there is predictability in the decision making of an organization (the equivalent of the light bulbs' rules), then the organization can function effectively with a more densely connected network. If, however, decision making is less predictable, then less-dense connections, more hierarchy, and smaller spans of control are needed. Thus, for example, in an army, where regular, predictable behavior of troops is highly valued, it might be possible to get away with larger unit sizes than, say, in a creative advertising agency. Factors that make behavior less predictable, such as office politics and emotions, can limit the size an organization can grow to before being overwhelmed by complexity.
If we combine Kauffman's original result with the later results on hierarchy and bias, however, the phase transition shifts to the range of six to nine nodes. Interestingly, the numbers that come out of the analysis of Boolean networks are quite close to what we typically see for the size of effective working groups in human organizations.
C. Chapter 8: Emergence: How the Patterns Arise
(1) The mystery of economic and business cycles
Depressions, recessions, and inflation are not exclusively modern phenomena; they are patterns that have recurred since the beginning of recorded history. These time series have a not-quite-regular, not-quite-random character to them. Economists have had very little success in trying to use the irregular historical patterns in economic data to predict future fluctuations.
The ultimate accomplishment of Complexity Economics would be to develop a theory that takes us from theories of agents, networks, and evolution, all the way up to the macro patterns we see in real-world economies. Such a comprehensive theory does not yet exist, but we can begin to see glimmers of what it might look like. Such a theory would view macroeconomic patterns as emergent phenomena, that is, characteristics of the system as a whole that arise endogenously out of interactions of agents and their environment.
(2) Lessons from a beer industry simulation: business fluctuations do not flatten out; they go through storm and calm
a) The game setup: four players, a retailer, a wholesaler, a distributor, and a brewer, compete to lower their costs
Four volunteers are asked to play a game simulating the manufacture and distribution of beer. The game is played as follows. Each participant has an inventory of cases of beer (represented by chips on the game board). At the beginning of each turn, the retailer turns over one card from the deck to get the order from the consumers (e.g., four cases) and then submits an order to the wholesaler. The wholesaler looks at his or her orders from the retailer and submits an order to the distributor, and so on up the chain to the brewer.
1) Players incur costs of $0.50 per case for holding inventory (e.g., the cost of storing and securing the beer), and costs of $1.00 per case for running out of beer (e.g., angry customers and lost sales). A sketch of this scoring rule follows the list.
2) The winner of the game is the one who incurs the lowest cost.
3) Likewise, there is a small delay between when orders are submitted and when they are processed.
4) Finally, no communications are allowed between the players other than through the orders. Thus, the brewer doesn't know what the customer demand is down at the retailer's end.
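The scoring rule translates directly into code; here is a minimal sketch using the two cost constants from the rules above (the order and backlog bookkeeping itself is not shown):

HOLDING_COST = 0.50    # dollars per case of inventory held, per turn
STOCKOUT_COST = 1.00   # dollars per case of unmet demand, per turn

def turn_cost(inventory, backlog):
    """Cost a single player incurs on one turn."""
    return inventory * HOLDING_COST + backlog * STOCKOUT_COST

# e.g., holding 12 cases while owing customers 4 cases:
print(turn_cost(12, 4))   # 12 * 0.50 + 4 * 1.00 = 10.0 dollars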
The game starts out in equilibrium, with each player getting an order for four cases of beer and shipping exactly that many.
Unbeknownst to the participants, the first several cards in the consumer deck remain at four. Some players may order a bit more or a bit less than four, depending on how risk averse they are. Otherwise, not much happens. Then, suddenly, on one turn, the consumer-order card jumps from four to eight. The players do not know it, but the customer-order level will stay at eight for the rest of the game.
b) The results: a shock in the environment induces a shock in inventories that is amplified over time
According to Traditional Economics, this exogenous shock in demand should simply cause the players to move to a new equilibrium after a few turns of adjustment, with everyone ordering eight and everyone's inventories staying constant.
In experiments with real people, however, the players inevitably overreact to the jump in demand by over-ordering as their inventory falls. As this wave of over-ordering travels up the supply chain, it is amplified.
Just what kind of behavior leads to such wild oscillations in a relatively simple environment? Sterman has been able to statistically derive the decision rule used by the participants. This rule is based on a behavior known in the psychology literature as anchor and adjust. Rather than deductively calculate their future beer needs by looking at all the inventory on the board (which they can see) and incorporating the effects of the time delays and so on, the participants simply look at the past pattern of orders and inventory levels, and inductively anchor on a pattern that seems normal. Their IF-THEN rules consequently try to steer them to maintain that normal pattern. Thus, a participant might anchor on four cases as the normal pattern of orders and then struggle to adjust when things are not normal, for example, "My inventory is dropping, order more!" In an environment with time delays, the anchor-and-adjust rule causes individuals to both overshoot and undershoot, which in turn leads to the emergent pattern of cyclical behavior.
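A simplified sketch of such an anchor-and-adjust rule (the smoothing and adjustment parameters here are illustrative, not Sterman's estimated values, and his full rule also tracks the orders already in the pipeline):

def make_order(forecast, incoming_order, inventory,
               desired_inventory=12.0, theta=0.25, alpha=0.3):
    """One player's ordering decision for one turn of the Beer Game."""
    # anchor: nudge the demand forecast toward the latest incoming order
    forecast = theta * incoming_order + (1 - theta) * forecast
    # adjust: order the forecast plus a fraction of the inventory shortfall
    order = forecast + alpha * (desired_inventory - inventory)
    return max(0.0, order), forecast

With shipping and processing delays in the loop, this rule first overshoots (inventory keeps falling after the demand jump, so the adjustment term keeps inflating orders) and then undershoots when the delayed shipments finally pile up, producing the oscillations seen in the game.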
Humans don't do well when there is a time delay between their actions and the response to those actions.
There are two ways to dampen the cycles in the Beer Game: one is to reduce the time delays, and the other is to give the participants more information (e.g., giving the brewer direct visibility into what is happening at the retail level).24
(3) Punctuated equilibrium: Are there "keystone" technologies?
a) What is punctuated equilibrium: phases of calm and storm
Over the century that followed the publication of Darwin's Origin of Species, biologists assumed that evolution proceeded in a stately and relatively linear fashion, leading to a smooth pattern of speciation and extinction. Then, in a landmark paper in 1972, the paleontologists Niles Eldredge and Stephen Jay Gould overturned this conventional wisdom and argued that the fossil record shows that biological evolution has not followed a smooth path.28
For example, during the Cambrian period, about 550 million years ago, a burst of evolutionary innovation saw the takeover of the earth by multicellular life and the creation of most of the major phyla on earth today. Then, about 245 million years ago, during the late Permian period, there was what Gould called "the granddaddy of all extinctions," when 96 percent of all marine species on earth disappeared.
Patterns of punctuated equilibrium show up not just in biological evolution, but in other complex systems ranging from the slides of avalanches to the crashes of stock markets.
Many types of networks self-organize into a structure that has a mixture of very dense connections and very sparse connections. It is just such a network structure that underlies the emergence of punctuated equilibrium in biological ecosystems.
The researchers found that if they randomly removed “species” from their simulated ecosystem, typically not much happened. Yet, once in a while, removing a random species would set off a cascade of events leading to a mass extinction. Certain species are very densely connected to other species in the web of food relationships and niche competition. Biologists call these “keystone species”.
b) Technology evolves following evolution patterns: i.e. with emergent properties
Technological innovation proceeds in similar patterns of calm and storm.
Technologies are inherently modular: a car, for example, is made up of an engine, a transmission, a body, and so on. Modules are then assembled into “architectures,” in this case, the design of the car itself
It is innovations in architectures (e.g., the PC revolution itself) that tend to have the big catalyzing ripple effects on innovation. We thus have two of the key features that led to the punctuated equilibrium pattern in Jain and Krishna’s model—sparse-dense networks of interaction, and catalyzing effects from individual nodes.
This suggests that technology webs might be subject to cascades of change leading to the emergent pattern of punctuated equilibrium, and that certain technologies could play the role of keystones in those webs.
(4) A time pattern emerges in nature and the economy: power laws
Several researchers have shown statistically that stocks do not follow a random walk. The clumpy pattern for IBM stock price shows that the volatility of price movements is correlated in time. This is the stormy-quiet-stormy pattern of punctuated equilibrium. A few points skyrocket above, or plunge far below, the rest of the sample. What could lead to such dramatic movements in prices?
The surfacing of news does not explain much of the swing in prices. We have a mystery: why is there so much news-less volatility in the market? The answer to this mystery lies in an interesting observation: while stock price movements don't look much like a random walk, they do look like another phenomenon: earthquakes.
The straight line on a log-log scale meant that, with earthquakes, there is no "typical" size in the middle of the distribution as there is in body heights. Rather, earthquakes occur across all size scales, but the bigger the quake, the rarer it is—specifically, with each doubling in earthquake energy, the probability of a quake of that size occurring drops by a factor of four. It is thus a slippery slope down.
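To make the slope concrete (my arithmetic, not a figure from the book): if each doubling of energy cuts the probability by a factor of four, then

p(2E) = p(E) / 4, so p(E) ∝ E^(-a) with 2^(-a) = 1/4, i.e., a = 2.

An exponent of 2 is exactly the slope of the straight line on the log-log plot.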
Physicists call this kind of relationship a power law, because the distribution is described by an equation with an exponent, or power.42 Power laws have been discovered in a wide variety of phenomena, including the sizes of biological extinction events, the intensity of solar flares, the ranking of cities by size, traffic jams, cotton prices, the number of fatalities in warfare, and even the distribution of sex partners in social networks.43 Power laws, along with oscillations and punctuated equilibrium, are another signature characteristic of complex adaptive systems.
Pareto's study of income found a lot of poor, a middle class that stretched over a wide range, and a very few superrich. He found that for every increase of income by 1 percent, there was a corresponding decline in the number of households by 1.5 percent—graphed on log-log paper, this produces a straight line—a power law.
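In modern notation (my restatement of the same finding): the number of households with income above x scales as

N(>x) ∝ x^(-1.5), so log N = C - 1.5 log x,

a straight line of slope -1.5 on log-log paper, which is the line Pareto drew.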
Power laws reemerged briefly in economics in the 1960s, when Benoit Mandelbrot became interested in the fluctuations of cotton prices on the Chicago Mercantile Exchange. The fluctuations seemed to have no natural timescale. If he took one section of the graph, say, one hour, and stretched it out to the length of a day, one could not tell which graph was the hourly data and which was the daily data. He then looked at data from other commodities, including gold and wheat, and saw the same pattern—power laws.
Later researchers found that the fluctuations in stock prices also follow clear power laws in the tails of the distribution.
One of the consequences of this result is that financial markets are far more volatile than Traditional Economics leads us to believe.
The size of companies as measured by employees also scales according to a power law. Company sales growth, as well as the GDP growth of nations, likewise scales according to a power law.
(5) Why are stock markets so volatile? Because of the market-order and limit-order structure
The key is that there are two types of trades one can make on most stock exchanges. The first is a market order, in which a trader says buy (or sell) stock X right now for the best available price. The second is a limit order, in which a trader says buy stock X if the price falls to $100 (or conversely, sell stock X if the price rises to $100).
The cause of large price fluctuations was the structure of the order book itself—large fluctuations occurred when there were large gaps between the price levels in the book.
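A toy order book shows the mechanism (all prices and sizes below are invented for illustration): the same eight-case market buy barely moves the price in a dense book, but jumps it far higher when there is a gap behind the best ask.

def execute_market_buy(asks, quantity):
    """asks: list of (price, size) sell limit orders, cheapest first."""
    last_price = None
    while quantity > 0 and asks:
        price, size = asks[0]
        fill = min(size, quantity)
        quantity -= fill
        last_price = price          # the trade prints at this level
        if fill == size:
            asks.pop(0)             # this price level is wiped out
        else:
            asks[0] = (price, size - fill)
    return last_price

dense = [(100, 5), (101, 5), (102, 5), (103, 5)]
gappy = [(100, 5), (108, 5), (109, 5)]   # a gap after the best ask
print(execute_market_buy(dense, 8))  # 101: the price moves one dollar
print(execute_market_buy(gappy, 8))  # 108: the same order, a big jump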
The regularity of the order pattern implied that there was also some regularity in the behavior of the traders placing the orders—a result at odds with the Traditional theory that all trading is driven by unpredictable news events.
(6) Conclusion: pattern behavior emerges as a consequence of individual behavioral regularities, institutional traits, and exogenous inputs
Complex emergent phenomena such as business cycles and stock price movements are likely to have three root causes:
- The first is the behavior of the participants in the system. As we have seen, real human beings have real behavioral regularities, whether it is the anchor-and-adjust rule of the Beer Game participants or the yet-to-be-understood regularity that leads to student distributions in stock ordering.
- Second, the institutional structure of the system makes a big difference. In the case of the Beer Game, the structure of the supply chain between the manufacturer and retailer created dynamics that, when combined with participant behavior, led to oscillations. In the case of the stock market, the structure of the limit-order system, when combined with trader behavior, led to power-law volatility.
- Third and last are exogenous inputs into the system. In the case of the Beer Game it was the onetime jump in customer orders, and in the case of the stock market it is news.
D. Chapter 9: How Evolution Works in Economics
(1) Evolution selects the best ideas, or Business Plans
Business Plans are instructions for creating businesses that can be implemented by qualified Business Plan readers. These instructions bind Physical Technologies and Social Technologies together into modules under a strategy.
Business Plans are differentiated through the deductive-tinkering of agents as they search for potentially profitable plans. While the distribution of experiments created by this process differs from the purely random differentiation of biological evolution, it nonetheless feeds the evolutionary algorithm with a superfecundity of Business Plans for selection to act on.
At some point the plans are implemented and the market renders its judgment. Finally, successful modules are rewarded by gaining influence over more resources.
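The three-step formula from the opening of the book can be written down as a short search loop. The sketch below is a generic illustration of the differentiate-select-amplify cycle, not anything from the book's text: a "plan" is just a vector of numbers, and profit() is a stand-in fitness landscape.

import random

def profit(plan):
    # stand-in fitness landscape: the peak is at all genes equal to 0.7
    return -sum((x - 0.7) ** 2 for x in plan)

def evolve(pop_size=50, genes=5, generations=100):
    population = [[random.random() for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # differentiate: every plan spawns a tinkered variant
        variants = [[x + random.gauss(0, 0.05) for x in plan]
                    for plan in population]
        # select: rank all plans by the market's judgment
        ranked = sorted(population + variants, key=profit, reverse=True)
        # amplify: fit plans keep the population slots (the resources)
        population = ranked[:pop_size]
    return max(population, key=profit)

print(profit(evolve()))  # climbs toward 0, the peak of this toy landscape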
(2) Markets are an evolutionary search mechanism
Following the framework I have just outlined, we can reinterpret markets as an evolutionary search mechanism. Markets provide incentives for the deductive-tinkering process of differentiation. They then critically provide a fitness function and selection process that represents the broad needs of the population (and not just the needs of a few Big Men). Finally, they provide a means of shifting resources toward fit modules and away from unfit ones, thus amplifying the fit modules' influence.
Markets win over command and control, not because of their efficiency at resource allocation in equilibrium, but because of their effectiveness at innovation in disequilibrium.
the reason that markets work so well comes down to what evolutionary theorists refer to as Orgel’s Second Rule (named after biochemist Leslie Orgel), which says, “Evolution is cleverer than you are.” Even a highly rational, intelligent, benevolent Big Man would not be able to beat an evolutionary algorithm in finding peaks in the economic fitness landscape.
The reason that markets are good at allocation has more to do with their computational efficiency as a distributed processing system (i.e., they get the right signals to the right people), than with their ability to reach a mythical global equilibrium.
E. Chapter 14: A New Definition of Wealth: Fit Order
(1) Economic activity consists in creating order
In 1971, Nicholas Georgescu-Roegen published his magnum opus, The Entropy Law and the Economic Process, in which he stated that economic activity is fundamentally about order creation, and that evolution is the mechanism by which that order is created.
The Second Law thus provides a basic constraint on all life: over time, energy inputs must be greater than energy expenditures. All organisms must make a thermodynamic “profit” to survive and reproduce. The design for an organism can be thought of as a strategy for making thermodynamic profits long enough to reproduce, before the Second Law eventually catches up.
Competition for the energy and materials needed for order creation is, of course, intense; plants compete for ground, water, and sunlight, and many species have the strategy of stealing energy and materials from other species by eating them.
Just as in biological systems, the economic process materially consists of a transformation of high entropy into low entropy.
(2) A proposal: three conditions for value creation
A pattern of matter, energy, and/or information has economic value if the following three conditions are jointly met:
1. IRREVERSIBILITY. All value-creating economic transformations and transactions are thermodynamically irreversible.
2. ENTROPY. All value-creating economic transformations and transactions reduce entropy locally within the economic system, while increasing entropy globally.
3. FITNESS. All value-creating economic transformations and transactions produce artifacts and/or actions that are fit for human purposes.
Consequently, low entropy might indeed be necessary for something to have economic value, but defining what kinds of order are valuable and what kinds are not seems rather subjective—order is in the eye of the beholder.
Taken together, the three G-R Conditions say that economic activity is fundamentally about order creation. Faced with the disorder and randomness of the world, humans spend most of their waking hours ordering their environment in various ways to make it a more hospitable and enjoyable place. We order our world by transforming energy, matter, and information into the goods and services we want, and we have discovered the evolutionary Good Trick that by cooperating, specializing, and trading, we can create even more order than we otherwise could on our own.
People select forms of order that meet their needs, fulfilling their drives and preferences.
In physics, order is the same thing as information, and thus we can also think of wealth as fit information; in other words, knowledge.
Information on its own can be worthless. Knowledge on the other hand is information that is useful, that we can do something with, that is fit for some purpose.
Evolution is a knowledge-creation machine—a learning algorithm.53 Think of all the knowledge embedded in the ingenious designs of the biological world. A grasshopper is an engineering marvel, a storehouse of knowledge of physics, chemistry, and biomechanics—knowledge that is beyond the bounds of current human ability to replicate
A grasshopper is also a snapshot of knowledge about the environment it evolved in, the foods that were good to eat, the predators that needed to be defended against, and the strategies that worked well for attracting mates and ensuring the survival of progeny. There are terabytes of knowledge embedded in a single grasshopper.
We have found the answer to our quest. Wealth is knowledge and its origin is evolution.
•••
The Origin of Wealth: Evolution and Complexity – ERIC D. BEINHOCKER – Summary
/in Resúmenes de Lecturas /by Bobby A LopezThe Origin of Wealth: Evolution and Complexity
by ERIC D. BEINHOCKER
a summary by Bobby Lopez
Contents
The Origin of Wealth: Evolution and Complexity.. 1
A. Chapter 4: Sugarscape: A model of how an economy originateD.. 1
(1) The Model Setup: simple rules to live: gather and consume resources at hand 1
(2) Results 1: The Rich get Richer. But not because of genetic, nor birth, but as an emergent property: the confluence of all factors 3
(3) Results 2: Population growth causes: wealth growth, improved genetics, population swings, and the rich got richer 3
(4) Results 3: The introduction of trade resulted in: 1) increase in wealth; 2) clustering of trading by regions; 3) prices approached bur never reached equilibrium; 4) sub-Pareto optimal 5) rich got richier 4
(5) Results 4: The introduction of lending resulted in: 5
B. Cahpter 7: How Networks Behave. 6
(1) When networks explode 6
(2) How Boolean networks function 6
C. Chapter 8: Emergence: How the Patterns arise. 8
(1) The mistery of economic and business cycles 8
(2) Lesson from a Beer Industry simulation: business fluctuations does not flatten o
ut, they goes through storm and calm 9
(3) Punctuated Equilibrium: Are There “Keystone” Technologies? 10
(4) A time pattern emerges in nature and economy: power laws 11
(5) Why Are Stock Markets So Volatile? Because of the market order and limit order structure. 12
(6) Conclusion: pattern behavior emerges as consequence of: individual behavior regularities; institutional traits; exogenous inputs 12
D. Chapter 9: How Evolution works in economics. 13
(1) Evolution selects the best ideasa or Business Plan 13
(2) Markets are evolutionary search mechanism 13
E. Chapter 14: A New Definition of Wealth: Fit Order. 13
(1) Economic activity consists in creating order 13
(2) A Proposal: Three Conditions for Value Creation 14
This book will argue that wealth creation is the product of a simple, but profoundly powerful, three-step formula— differentiate, select, and amplify—the formula of evolution
We are accustomed to thinking of evolution in a biological context, but modern evolutionary theory views evolution as something much more general. Evolution is an algorithm; it is an all-purpose formula for innovation, a
A. Chapter 4: Sugarscape: A model of how an economy originateD
(1) The Model Setup: simple rules to live: gather and consume resources at hand
To picture Epstein and Axtel’s model, imagine a group of people shipwrecked on a desert island, except that both the island and the castaways are simulations inside a computer. The computer island is a perfect square with a fifty-by-fifty grid overlaid on top
The virtual island has only one resource—sugar—and each square in the grid has different amounts of sugar piled on it. The heights of the sugar piles range from four sugar units high (the maximum) to zero (no sugar). The sugar piles are arranged such that there are two sugar mountains, one mountain at the northeast corner and one at the southwest corner, each with sugar piled three and four units high.
The game begins with 250 agents randomly dropped on the Sugarscape.
Each virtual person, or “agent,” is an independent computer program that takes in information from the Sugarscape environment, crunches that information through its code, and then makes decisions and takes actions. In the most basic version of the simulation, each agent on Sugarscape can only do three things: look for sugar, move, and eat sugar.
Each agent has vision that enables it to look around for sugar, and then has the ability to move toward this source of energy. Each agent also has a metabolism for digesting sugar.
Thus, each agent had a basic set of rules that it followed during each turn of the game:
(2) Results 1: The Rich get Richer. But not because of genetic, nor birth, but as an emergent property: the confluence of all factors
As time passes, however, this distribution changes dramatically. Average wealth rose as the agents convened on the two sugar mountains but the distribution of wealth became very skewed, with a few emerging superrich agents, a long tail of upper-class yuppie agents, a shrinking middle class, and then a big, growing underclass of poor agents.
The Pareto distribution is where the so-called 80-20 rule comes from, as roughly 80 percent of the wealth is owned by 20 percent of the people.
The wealth distribution that the simple Sugarscape model produced was just this kind of a real-world Pareto distribution.
First, we can ask, is it nature? —does it have something to do with the genetic endowments of the players? That is, are all the agents with great eyesight and slow metabolisms getting all the wealth? The answer is no.
Are all the agents born on top of sugar mountains getting all the wealth and those with the bad luck of being born in the badlands staying poor? The answer to this is no as well.
How, then, from these random initial conditions do we get a skewed wealth distribution? The answer is, in essence, “everything.” The skewed distribution is an emergent property of the system. It
All this means that even in Sugarscape, there is no simple cause-and-effect relationship driving poverty and inequality. Instead, it is a complex mix of factors.
It is not easy to come up with solutions for the poverty problem even in the highly simplified world of Sugarscape.
(3) Results 2: Population growth causes: wealth growth, improved genetics, population swings, and the rich got richer
Epstein and Axtell decided to give each agent a tag indicating its age and whether it was male or female. Once an agent reaches “child-bearing age,” and if that agent has a minimum amount of sugar savings, he or she is considered fertile. Each period, fertile agents scan their immediate neighborhood of one square to the north, south, east, and west. If they find another fertile agent of the opposite sex, they reproduce. The DNA of the resulting baby agent is then chosen randomly, half from the mother, and half from the father. Thus the child’s vision and metabolism characteristics will be some mix of the two parents. In addition, the baby agent inherits wealth from both parents, receiving an amount equal to half the father’s wealth plus half the mother’s wealth.
The baby agent is born in an empty square next to its mother and father, so if the parents live in a rich or poor sugar neighborhood, the child agent will start its life there as well.
Then three things began to happen:
(4) Results 3: The introduction of trade resulted in: 1) increase in wealth; 2) clustering of trading by regions; 3) prices approached bur never reached equilibrium; 4) sub-Pareto optimal 5) rich got richier
They introduced a second commodity, called spice. Each square on the board now had a value for how much sugar it held and a value for how much spice it held.
Epstein and Axtell also tweaked their agents’ metabolisms so that they all required some of each commodity to survive.
As a final step, Epstein and Axtell made it possible for the agents to trade. There was no assumption of a market or an auctioneer as in typical Traditional Economic models. Instead, there was just straightforward bartering between individuals. As agents move around the Sugarscape, they encounter other agents. At each turn in the game, each agent looks one square to the north, south, east, and west and asks any players in its neighborhood if they want to trade. If one agent has a lot of spice and needs sugar and another agent faces the reverse situation, both agents could improve their circumstances by trading.
First result was that trading made the Sugarscape society much richer.
Secondly, there was some clustering in the trading networks by geographic region. The combination of geography and population dynamics created heavily trafficked trading routes, the computer equivalent of the ancient Silk Road, as agents shuttled back and forth between the sugar and spice mountains.
Epstein and Axtell could look inside each agent during each period of play and determine how much sugar or spice the agent was willing to buy or sell at a series of possible prices. What resulted was an almost textbook downward-sloping demand curve, along with an upward-sloping supply curve, even though Epstein and Axtell did not explicitly build anything about supply and demand into their model. Also, take into account that he intitial conditions were randomly set, but once the model got going, all the behavior was perfectly deterministic.
The actual prices and quantities traded (indicated by the dot in figure 4-4) never setded on the theoretically predicted equilibrium point (at the intersection of the supply and demand X in the figure).
Prices in Sugarscape move dynamically around an “attractor” (a term we will discuss in chapter 5) but never actually settle down into equilibrium.
Epstein and Axtell also found that there was far more trading than there would be if the system was reaching equilibrium, as in the real world.
The Sugarscape market, however, operates at less than Pareto optimality. There are always trades that could have happened, that would have made people better off, but didn’t. Again, this is because the agents’ trades are separated in time and space,
While trade in Sugarscape does “lift all boats,” making the society richer as a whole, it also has the effect of further widening the gap between rich and poor. In
(5) Results 4: The introduction of lending resulted in:
In Sugarscape, there is only one reason to become a borrower—to have children. Epstein and Axtell introduced a rule that said that an agent can be a lender if it is too old to have children, or if it has more savings than it needs for its own reproductive needs. In turn, an agent can be a borrower if it has insufficient savings to have children, but has a sustainable income of sugar and spice.
What surprised Epstein and Axtell was not that significant borrowing and lending activity occurred, but that a complex, hierarchical capital market emerged. Some agents became simultaneously both borrowers and lenders—in effect, middlemen: Sugarscape had evolved banks! Cert
ain really rich agents took on a wholesale role, lending to middlemen, who then made loans to the ultimate borrowers. In some simulations, the hierarchical chain grew to five levels deep.
These large-scale macro patterns grew from the bottom up, from the dynamic interplay of the local micro assumptions.
B. Cahpter 7: How Networks Behave
(1) When networks explode
Economists have long recognized that certain products, such as e-mail, faxes, and telephones, share a property whereby the greater the number of people who use them, the more useful they become. This is called, appropriately enough, a network effect. Traditional Economics, however, has not historically had much to say about why these types of products tend to suddenly catch fire and take off in popularity.
Picture a thousand buttons scattered on a hardwood floor. Imagine you also have in your hand pieces of thread; you then randomly pick up two buttons, connect them with the thread, and put them back down. As you first start out, the odds are that each button you pick up will be unconnected, so you will be creating a lot of two-button connections. As you work away, however, at some point you will pick up a button that is already connected to another button and you will thus be adding a third.
In random networks, the phase transition from small clusters to giant clusters happens at a specific point, when the ratio of segments of thread (edges) to buttons (nodes) exceeds the value of 1 (i.e., on average, one thread segment for every button).One can think of the ratio of one edge to one node as the “tipping point” where a random network suddenly goes from being sparsely connected to densely connected.
(2) How Boolean networks function
Networks of nodes that can be in a state of 0 or 1 are called Boolean network Imagine a string of Christmas tree lights that blink on or off. Each bulb receives inputs from the two bulbs on either side of it telling it whether they are on or off. We can then imagine that each light bulb has a rule that it follows to determine what it does next period, based on the inputs from the other two bulbs
Basically, three variables guide the behavior of such networks.
1) the number of nodes in the network.
2) a measure of how much everything is connected to everything else. And
3) measure of “bias” in the rules guiding the behavior of the nodes.
Let’s look at each of these in turn and their implications for economic and other types of organizations.
a) Number of nodes in the network: Bigger is usually better
The number of states a network can be in scales exponentially with the number of nodes. A network with 2 nodes can be in four, or 22 , states: 00, 10, 01, and 11. Likewise, a network with 3 nodes can be in eight, or 23
The exponential growth in possible states creates a very powerful kind of economy of scale in any network of information-processing entities. As the size of a Boolean network grows, the potential for novelty increases exponentially. A Boolean network with 10 nodes can be in 210 possible states a An organization the size of Boeing also has inherently more headroom for innovation in the future— the larger number of states in the Boeing organizational network means that there are more potential ways for Boeing to make a living than for my corner coffee shop.
b) Number of connections between nodes. Complexity Catastrophes when a network is big
The mathematics of Boolean networks leaves us with a quandary, however. If large organizations have more headroom for innovation than do small organizations, then why does the mythos of business hold that small organizations out-innovate large ones?
If a network has on average more than one connection per node, then as the number of nodes grows, the number of connections will scale exponentially with the number of nodes
This means that the number of interdependencies in the network grows faster than the network itself. This, then, is where the problems start to arise. As the number of interdependencies grows, changes in one part of the network are more likely to have ripple effects on other parts of the network. As the potential for these knock-on effects grows, the probability that a positive change in one part of the network will have a negative effect somewhere else also increases.
To illustrate, let’s imagine you are the cofounder of a small start-up company with only two departments: product development and marketing. You run product development and have an idea for a new product. So you have a meeting to discuss your plan, the marketing department agrees to it, and you are ready to go—
You grew. You now have an idea for your third-generation product, but something bizarre has happened. You have
your usual meeting with marketing, but now before you get the department’s OK, the marketing managers say they have to check the impact on their budget, which was approved by finance. The finance folks say they can’t approve your project until they get an estimate from customer service on the cost of the additional support needed. And customer service has to check with marketing to make sure its plans are consistent with the company’s brand and pricing strategy. All of a sudden, you have gone from three meetings to ten (if all the permutations occur)
Degrees of Possibility Versus Degrees of Freedom The problem isn’t dumb people or evil intentions. Rather, network growth creates interdependencies, interdependencies create conflicting constraints, and conflicting constraints create slow decision making and, ultimately, bureaucratic gridlock.
We thus have two opposing forces at work in organizations: the informational economies of scale from node growth, and the diseconomies of scale from the buildup of conflicting constraints. Taken together, these opposing forces help us understand why big is both beautiful and bad: as an organization grows, its degrees of possibility increase exponentially while its degrees of freedom collapse exponentially.
This tension between interdependencies and adaptability is a deep feature of networks and profoundly affects many types of systems. This tension creates upper limits on the complexity of organisms.
Hierarchies alleviates the size problem. Organizing the network into hierarchies reduces the density of connections and thus reduces the interdependencies in the network. Hierarchies are critical in enabling networks to reach larger sizes before diseconomies of scale set in. This is why so many networks in the natural and computer worlds are structured as networks within networks.
counter intuitively, hierarchy can serve to increase adaptability by reducing interdependencies and enabling an organization to reach a larger size before gridlock sets in.
Who would have thought that hierarchy actually saves meetings? Hierarchy does, of course, have its problems; for example, information can degrade as it travels up the chain, the top may become out of touch with the front line, and a poor performer in a senior role can do a lot of damage.
A related move is to give the units within a hierarchical structure more autonomy.
c) Network bias: Increasing predictability allows for more connection per node
According to Kaufman, nonhierarchical networks exhibit spontaneous order with one or two average connections per node, but went into chaos (thus creating cascades of change and the potential for a complexity catastrophe) at four connections per node or more.
Bernard Derrida and Gerard Weisbuch, discovered a parameter that could change the point at which this phase transition takes place. They called the parameter bias.
Let’s say we pick a bulb and start feeding it Is and Os at random. Our input stream will thus be approximately 50 percent Is and 50 percent 0s. If the output stream was also fifty-fifty Is and 0s, then we could say that the output was unbiased, but if the output was say 90 percent Is, then we could say it was biased toward 1. A high-bias node is easier to predict.
Derrida and Weisbuch found that the higher the bias, the more densely connected a network can be before the transition to chaos occurs. If the average bias is fifty-fifty, then the transition to chaos happens in the range between two and four average connections per node, as it did in Kauffman’s study If the average bias is closer to 75 percent, then the transition happens above four connections per node. At higher bias levels, the network can go up to six connections per node before it trips into chaos. The key point is, the more regularity there is in the behavior of the nodes, the more density in connections the network can tolerate.
In an organizational context, we can think of bias as a measure of predictability. If there is predictability in the decision making of an organization (the equivalent of the light bulbs’ rules), then the organization can function effectively with a more densely connected network. If, however, decision making is less predictable, then less-dense connections, more hierarchy, and smaller spans of control are needed. Thus, for example, in an army, where regular, predictable behavior of troops is highly valued, it might be possible to get away with larger unit sizes than, say, in a creative advertising agency. Factors that make behavior less predictable, such as office politics and emotions, can limit the size an organization can grow to before being overwhelmed by complexity.
If we combine Kauffman’s original result with the later results on hierarchy and bias, however, the phase transition shifts to the range of six to nine nodes. Interestingly, the numbers that come out of the analysis of Boolean networks are quite close to what we typically see for the size of effective working groups in human organizations.
C. Chapter 8: Emergence: How the Patterns arise
(1) The mystery of economic and business cycles
Depressions, recessions, and inflation are not exclusively modern phenomena; they are patterns that have recurred since the beginning of recorded history. Time series have a not-quite-regular, not-quite-random character to them. Economists have had very little success in trying to use these irregular historical patterns in economic data to predict future fluctuations.
The ultimate accomplishment of Complexity Economics would be to develop a theory that takes us from theories of agents, networks, and evolution, all the way up to the macro patterns we see in real-world economies. Such a comprehensive theory does not yet exist, but we can begin to see glimmers of what it might look like. Such a theory would view macroeconomic patterns as emergent phenomena, that is, characteristics of the system as a whole that arise endogenously out of interactions of agents and their environment.
(2) Lesson from a Beer Industry simulation: business fluctuations do not flatten out; they go through storm and calm
a) The game setup: four players (retailer, wholesaler, distributor, and brewer) compete to minimize their costs
Four volunteers are asked to play a game simulating the manufacture and distribution of beer. The game is played as follows. Each participant has an inventory of cases of beer (represented by chips on the game board). At the beginning of each turn, the retailer turns over one card from the deck to get the order from the consumers (e.g., four cases) and then submits an order to the wholesaler. The wholesaler looks at his or her orders from the retailer and submits an order to the distributor, and so on up the chain to the brewer.
1) Players incur costs of $0.50 per case for holding inventory (e.g., the cost of storing and securing the beer), and costs of $1.00 per case for running out of beer (e.g., angry customers and lost sales).
2) The winner of the game is the one who incurs the lowest cost.
3) There is also a small delay between when orders are submitted and when they are processed.
4) Finally, no communications are allowed between the players other than through the orders. Thus, the brewer doesn’t know what the customer demand is down at the retailer’s end.
The game starts out in equilibrium, with each player getting an order for four cases of beer and shipping exactly that many.
Unbeknownst to the participants, the first several cards in the consumer deck remain at four. Some players may order a bit more or a bit less than four, depending on how risk averse they are. Otherwise, not much happens. Then, suddenly, on one turn, the consumer-order card jumps from four to eight. The players do not know it, but the customer-order level will stay at eight for the rest of the game.
b) The results: a shock in the environment induces a shock in inventories that is amplified over time
According to Traditional Economics, this exogenous shock in demand should simply cause the players to move to a new equilibrium after a few turns of adjustment, with everyone ordering eight, and everyone’s inventories staying constant.
In experiments with real people, however, the players inevitably overreact to the jump in demand by over-ordering as their inventory falls. As this wave of over-ordering travels up the supply chain, it is amplified.
Just what kind of behavior leads to such wild oscillations in a relatively simple environment? Sterman has been able to statistically derive the decision rule used by the participants [1]. This rule is based on a behavior known in the psychology literature as anchor and adjust. Rather than deductively calculate their future beer needs by looking at all the inventory on the board (which they can see) and incorporating the effects of the time delays and so on, the participants simply look at the past pattern of orders and inventory levels, and inductively anchor on a pattern that seems normal. Their IF-THEN rules consequently try to steer them to maintain that normal pattern. Thus, a participant might anchor on four cases as the normal pattern of orders and then struggle to adjust when things are not normal, for example, “My inventory is dropping, order more!” In an environment with time delays, the anchor-and-adjust rule causes individuals to both overshoot and undershoot, which in turn leads to the emergent pattern of cyclical behavior.
Humans don’t do well when there is a time delay between their actions and the response to those actions.
There are two ways to dampen the cycles in the Beer Game: one is to reduce the time delays, and the other is to give the participants more information (e.g., giving the brewer direct visibility into what is happening at the retail level).24
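As a rough sketch of the mechanism (not Sterman's statistically estimated rule; the target stock, adjustment weight, and delay below are invented for illustration, and backlogs are ignored), the following Python toy chain has each tier anchor on the order it just received and adjust for its inventory gap, with a two-turn shipping delay. Demand quietly steps from four to eight, and the printed order streams overshoot and oscillate, growing larger the farther a tier sits from the consumer.

```python
def beer_game(turns=40, tiers=4, delay=2, target=12):
    """Toy anchor-and-adjust supply chain (a sketch, not Sterman's model).
    Simplifications: no backlog of unfilled orders, and each tier's own
    orders are always delivered in full after `delay` turns."""
    inventory = [target] * tiers
    pipeline = [[4] * delay for _ in range(tiers)]    # goods in transit
    history = []
    for t in range(turns):
        demand = 4 if t < 5 else 8                    # the hidden step in demand
        orders = []
        incoming = demand
        for i in range(tiers):
            inventory[i] += pipeline[i].pop(0)        # receive a shipment
            shipped = min(incoming, inventory[i])     # ship what we can downstream
            inventory[i] -= shipped
            # Anchor on the order just received; adjust toward target stock.
            order = max(0.0, incoming + 0.5 * (target - inventory[i]))
            orders.append(order)
            incoming = order                          # becomes upstream demand
        for i in range(tiers):
            pipeline[i].append(orders[i])             # arrives `delay` turns later
        history.append([round(o, 1) for o in orders])
    return history

for t, row in enumerate(beer_game()):
    print(t, row)                                     # orders per tier, per turn
```

Shrinking `delay` to one turn (or softening the 0.5 adjustment weight) noticeably damps the oscillation, mirroring the remedies noted above.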
(3) Punctuated Equilibrium: Are There “Keystone” Technologies?
a) What is punctuated equilibrium: phases of calm and storm
Over the century that followed the publication of Darwin’s Origin of Species, biologists assumed that evolution proceeded in a stately and relatively linear fashion, leading to a smooth pattern of speciation and extinction. Then, in a landmark paper in 1972, the paleontologists Niles Eldredge and Stephen Jay Gould overturned this conventional wisdom and argued that the fossil record shows that biological evolution has not followed a smooth path.28
For example, during the Cambrian period about 550 million years ago, a burst of evolutionary innovation saw the takeover of the earth by multicellular life and the creation of most of the major phyla on earth today. Then, about 245 million years ago, during the late Permian period, there was what Gould called “the granddaddy of all extinctions,” when 96 percent of all marine species on earth disappeared.
Patterns of punctuated equilibrium show up not just in biological evolution, but in other complex systems ranging from the slides of avalanches to the crashes of stock markets.
Many types of networks self-organize into a structure that has a mixture of very dense connections and very sparse connections. It is just such a network structure that underlies the emergence of punctuated equilibrium in biological ecosystems
The researchers found that if they randomly removed “species” from their simulated ecosystem, typically not much happened. Yet, once in a while, removing a random species would set off a cascade of events leading to a mass extinction. Certain species are very densely connected to other species in the web of food relationships and niche competition. Biologists call these “keystone species”.
b) Technology evolves following evolution patterns: i.e. with emergent properties
Technological innovation proceeds in similar patterns of calm and storm.
Technologies are inherently modular: a car, for example, is made up of an engine, a transmission, a body, and so on. Modules are then assembled into “architectures,” in this case, the design of the car itself.
It is innovations in architectures (e.g., the PC revolution itself) that tend to have the big catalyzing ripple effects on innovation. We thus have two of the key features that led to the punctuated equilibrium pattern in Jain and Krishna’s model—sparse-dense networks of interaction, and catalyzing effects from individual nodes.
Technology webs might thus be subject to cascades of change leading to the emergent pattern of punctuated equilibrium, and certain technologies could play the role of keystones in those webs.
(4) A time pattern emerges in nature and economy: power laws
Several researchers have shown statistically that stocks do not follow a random walk. The clumpy pattern for the IBM stock price shows that the volatility of price movements is correlated in time. This is the stormy-quiet-stormy pattern of punctuated equilibrium. A few points skyrocket above, or plunge far below, the rest of the sample. What could lead to such dramatic movements in prices?
The surfacing of news does not explain much of the swing in prices. We have a mystery: why is there so much news-less volatility in the market? The answer to this mystery lies in an interesting observation: while stock price movements don’t look much like a random walk, they do look like another phenomenon: earthquakes.
Plotted on a log-log scale, earthquake data fall on a straight line, which means that there is no “typical” size in the middle of the distribution as there is with body heights. Rather, earthquakes occur across all size scales, but the bigger the quake, the rarer it is; specifically, with each doubling in earthquake energy, the probability of a quake of that size occurring drops by a factor of four. It is thus a slippery slope down.
Physicists call this kind of relationship a power law, because the distribution is described by an equation with an exponent, or power.42 Power laws have been discovered in a wide variety of phenomena, including the sizes of biological extinction events, the intensity of solar flares, the ranking of cities by size, traffic jams, cotton prices, the number of fatalities in warfare, and even the distribution of sex partners in social networks.43 Power laws, along with oscillations and punctuated equilibrium, are another signature characteristic of complex adaptive systems.
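A quick numerical check of that doubling rule (a sketch; the exponent alpha = 2 is chosen to match the earthquake example, and the sampler is the standard inverse-transform trick for a Pareto tail): doubling the size threshold should cut the tail probability by a factor of 2^alpha = 4, which is exactly the straight-line behavior on log-log axes.

```python
import random

def pareto_sample(alpha, xmin=1.0, n=100_000, seed=7):
    """Inverse-transform sampling from a power-law tail:
    P(X > x) = (xmin / x) ** alpha for x >= xmin."""
    rng = random.Random(seed)
    return [xmin / rng.random() ** (1 / alpha) for _ in range(n)]

xs = pareto_sample(alpha=2.0)
tail = lambda s: sum(x > s for x in xs) / len(xs)   # empirical P(X > s)

# Doubling the threshold should divide the tail probability by 2**alpha = 4.
for s in (2, 4, 8, 16):
    ratio = tail(s) / max(tail(2 * s), 1e-9)
    print(s, round(tail(s), 4), round(ratio, 2))    # ratio hovers near 4
```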
Pareto’s study of income found a lot of poor, a middle class that stretched over a wide range, and a very few superrich. He found that for every increase of income by 1 percent, there was a corresponding decline in the number of households by 1.5 percent; graphed on log-log paper, this produces a straight line, a power law.
Power laws reemerged briefly in economics in the 1960s, when Benoit Mandelbrot became interested in the fluctuations of cotton prices on the Chicago Mercantile Exchange. The fluctuations seemed to have no natural timescale. If he took one section of the graph, say, one hour, and stretched it out to the length of a day, one could not tell which graph was the hourly data and which was the daily data. He then looked at data from other commodities, including gold and wheat, and saw the same pattern: power laws.
Later researchers found that the fluctuations in stock prices follow clear power laws in the tails of the distribution.
One of the consequences of this result is that financial markets are far more volatile than Traditional Economics leads us to believe.
The size of companies as measured by employees also scales according to a power law. Company sales growth, as well as the GDP growth of nations, likewise scales according to a power law.
(5) Why Are Stock Markets So Volatile? Because of the market order and limit order structure.
The key is that there are two types of trades one can make on most stock exchanges. The first is a market order, in which a trader says buy (or sell) stock X right now for the best available price. The second is a limit order, in which a trader says buy stock X if the price falls to $100 (or conversely, sell stock X if the price rises to $100).
The cause of large price fluctuations was the structure of the order book itself: large fluctuations occurred when there were large gaps between the price levels in the book.
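A toy illustration of that mechanism (my own sketch, not the researchers' market model): walk a market buy order up the ask side of a small limit-order book. With densely spaced price levels the quote creeps up; with a gap in the book, the same order size produces a large jump.

```python
def market_buy(asks, size):
    """Walk a market buy order up the ask side of a toy limit-order book.
    `asks` is a price-sorted list of (price, shares) limit orders.
    Returns the last price hit, i.e. the new quote after the fill."""
    last_price = None
    for price, shares in asks:
        if size <= 0:
            break
        take = min(size, shares)
        size -= take
        last_price = price
    return last_price

dense  = [(100 + i, 50) for i in range(10)]            # a level every $1
sparse = [(100, 50), (101, 50), (109, 50), (110, 50)]  # big gap in the book

print(market_buy(dense, 120))   # -> 102: the price creeps up
print(market_buy(sparse, 120))  # -> 109: the same order jumps the gap
```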
The regularity of the order pattern implied that there was also some regularity in the behavior of the traders placing the orders—a result at odds with the Traditional theory that all trading is driven by unpredictable news events.
(6) Conclusion: pattern behavior emerges as a consequence of: regularities in individual behavior; institutional traits; exogenous inputs
Complex emergent phenomena such as business cycles and stock price movements are likely to have three root causes: regularities in individual behavior, the traits of institutions, and exogenous inputs.
D. Chapter 9: How Evolution works in economics
(1) Evolution selects the best ideas, or Business Plans
Business Plans are instructions for creating businesses that can be implemented by qualified Business Plan readers. These instructions bind Physical Technologies and Social Technologies together into modules under a strategy.
Business Plans are differentiated through the deductive-tinkering of agents as they search for potentially profitable plans. While the distribution of experiments created by this process differs from the purely random differentiation of biological evolution, it nonetheless feeds the evolutionary algorithm with a superfecundity of Business Plans for selection to act on.
At some point the plans are implemented and the market renders its judgment. Finally, successful modules are rewarded by gaining influence over more resources.
(2) Markets are an evolutionary search mechanism
Following the framework I have just outlined, we can reinterpret markets as an evolutionary search mechanism. Markets provide incentives for the deductive-tinkering process of differentiation. They then critically provide a fitness function and selection process that represents the broad needs of the population (and not just the needs of a few Big Men). Finally, they provide a means of shifting resources toward fit modules and away from unfit ones, thus amplifying the fit modules’ influence.
Markets win over command and control, not because of their efficiency at resource allocation in equilibrium, but because of their effectiveness at innovation in disequilibrium.
the reason that markets work so well comes down to what evolutionary theorists refer to as Orgel’s Second Rule (named after biochemist Leslie Orgel), which says, “Evolution is cleverer than you are.” Even a highly rational, intelligent, benevolent Big Man would not be able to beat an evolutionary algorithm in finding peaks in the economic fitness landscape.
The reason that markets are good at allocation has more to do with their computational efficiency as a distributed processing system (i.e., they get the right signals to the right people), than with their ability to reach a mythical global equilibrium.
E. Chapter 14: A New Definition of Wealth: Fit Order
(1) Economic activity consists in creating order
In 1971 Georgescu-Roegen published his magnum opus, The Entropy Law and the Economic Process, in which he argued that economic activity is fundamentally about order creation, and that evolution is the mechanism by which that order is created.
The Second Law thus provides a basic constraint on all life: over time, energy inputs must be greater than energy expenditures. All organisms must make a thermodynamic “profit” to survive and reproduce. The design for an organism can be thought of as a strategy for making thermodynamic profits long enough to reproduce, before the Second Law eventually catches up.
Competition for the energy and materials needed for order creation is, of course, intense; plants compete for ground, water, and sunlight, and many species have the strategy of stealing energy and materials from other species by eating them.
Just as in biological systems, the economic process materially consists of a transformation of high entropy into low entropy.
(2) A Proposal: Three Conditions for Value Creation
A pattern of matter, energy, and/or information has economic value if the following three conditions are jointly met:
1. IRREVERSIBILITY. All value-creating economic transformations and transactions are thermodynamically irreversible.
2. ENTROPY. All value-creating economic transformations and transactions reduce entropy locally within the economic system, while increasing entropy globally.
3. FITNESS. All value-creating economic transformations and transactions produce artifacts and/or actions that are fit for human purposes.
Consequently, low entropy might indeed be necessary for something to have economic value, but defining what kinds of order are valuable and what kinds are not seems rather subjective—order is in the eye of the beholder.
Taken together, the three G-R Conditions say that economic activity is fundamentally about order creation. Faced with the disorder and randomness of the world, humans spend most of their waking hours ordering their environment in various ways to make it a more hospitable and enjoyable place. We order our world by transforming energy, matter, and information into the goods and services we want, and we have discovered the evolutionary Good Trick that by cooperating, specializing, and trading, we can create even more order than we otherwise could on our own.
People select forms of order that meet their needs, fulfilling their drives and preferences.
In physics, order is the same thing as information, and thus we can also think of wealth as fit information; in other words, knowledge.
Information on its own can be worthless. Knowledge on the other hand is information that is useful, that we can do something with, that is fit for some purpose.
Evolution is a knowledge-creation machine: a learning algorithm.53 Think of all the knowledge embedded in the ingenious designs of the biological world. A grasshopper is an engineering marvel, a storehouse of knowledge of physics, chemistry, and biomechanics, knowledge that is beyond the bounds of current human ability to replicate.
A grasshopper is also a snapshot of knowledge about the environment it evolved in, the foods that were good to eat, the predators that needed to be defended against, and the strategies that worked well for attracting mates and ensuring the survival of progeny. There are terabytes of knowledge embedded in a single grasshopper.
We have found the answer to our quest. Wealth is knowledge and its origin is evolution.
•••
[1] Sterman, J. D. 1985. A Behavioral Model of the Economic Long Wave. Journal of Economic Behavior and Organization 6:17-53.
Más allá de la muerte (Beyond Death) – Ratzinger – Summary
Joseph Ratzinger
A. The Importance of the Question: What Lies Beyond Death
The question of what lies beyond death long dominated Christian thought. Today it has fallen under the suspicion of Platonism, the suspicion that since Marx and Nietzsche has harried the Christian conscience in various forms.
It must also address the pressing problem of the political and social responsibility of the Christian faith, which can be discussed all the more effectively and solidly the more clarified its relation to Christian hope becomes.
B. The Opposition Between Resurrection and the Immortality of the Soul
(1) Oscar Cullmann’s thesis
The voices of Protestant theologians saw the “immortality of the soul” as a clearly non-biblical idea.
This rejection of the idea of the immortality of the soul, in favor of recognizing the resurrection of the flesh as the more biblical idea, rests on a particular hermeneutic under which the Bible is read. Its central content seems to me to be the opposition between the biblical and the Greek.
To this is added the claim that the immortality of the soul granted to the human species expresses something proper and natural to man, while the resurrection of the flesh can only be given as grace by the One who raises the dead. This means, however, that along with the idea of immortality, the idea of the “soul” also becomes suspect of Platonism.
(2) The problems
Science has confirmed the unity and indivisibility of man. This corresponds precisely to the fundamental tenor of biblical thought and contradicts the dualist strain that can, with some justice, be attributed to Platonism.
But although this overcomes one aporia, that of Platonism, problems of no small weight appear. What happens to the resurrection of the flesh? If we want to think it through reasonably, and not as something ‘miraculous,’ not a few difficulties arise. Do we conceive it as happening for everyone at the ‘end of time’? But what happens in the meantime? In what does the identity between the dead person and the risen one consist, if between the two there is only total nothingness? And what is it that rises?
One cannot exclude the question of whether something like the concept of the “soul” is not hermeneutically necessary as a bond of union, and even whether it does not impose itself from the data themselves.
(3) A datum: the temporal dimension
Troeltsch had formulated the thought that “the last things” bear no relation to time. In reality, the Eschaton would in no way be commensurable with our time. Correspondingly, the early Barth could say that awaiting the Parousia means, in other words, the same as “taking our real, living situation as seriously as it is.” Undoubtedly one would have to ask here what, in the end, is the real content of such claims.
(4) A new representation: eternity as the wholly other
In Catholic theology, these expressions of a time-eternity philosophy impose a demand to correct a merely linear representation of the end of days and, in its place, to think the last things, judgment and resurrection, as coextensive with time: each death is the entrance into the wholly other, into what is not time but eternity. “Eternity” does not come after time (that would make it time), but is what is distinct from time, while remaining constantly present.
The one who lies temporally in the tomb is supposed, at the same time, to be on the other side of the time line, as risen. But what does this mean? Is there, then, something like a “soul”? Or rather, what concept of time makes it possible to think of man as risen and as buried at the same time?
C. IN SEARCH OF NEW ANSWERS
(1) Physical time, human time, eternity
The differentiation between time and eternity means, as we have said, an advance over unreflective linearity.
Augustine’s fundamental contribution on human memory consists in a differentiation between physical and anthropological time. Physical time represents the successive moments of a motion, dated with the help of some parameter (for example, the sun or the moon).
Man’s existence is evidently not exhausted in a measurable motion of the body; man’s spiritual processes of decision are certainly also bound to his body, and in this sense are, as we say, indirectly datable.
In this anthropological time there is lacking, on the one hand, the repeatability of the physical fact, and on the other, the absolute being-past that we found there.
The flaw of the time-eternity philosophy outlined above seems to me to be that it knows only a simple alternative between physical time and pure eternity, in which the latter appears as merely negative, as pure non-time.
If beyond death there reigns the pure today, in which Resurrection, Judgment and the End of the world are already present because there is no time there, this would make history a mere spectacle in which one believes oneself to be moving when, in reality, everything is already done. That would be a fatal Platonism, worse than that of the Platonists themselves.
Against this, the correct description of the event of death would have to say that here, specifically human time separates itself from its physical-chronological context and thereby receives the character of definitiveness. This means that the two temporal frames, that of man and that of history, do not stand simply in a relation of succession, nor are they simply incommensurable with each other.
And this also means that the world’s past history and its definitive theological future have a real relation to each other: the acting of the one is not indifferent to the becoming of the other.
(2) Rehabilitation of the “soul”
It cannot be disputed that this concept found its way into the Christian tradition only hesitantly. Recall that intertestamental Judaism already knew clear representations of the life and state of man after death.
In the apostolic preaching, the new accent is that Jesus Christ is proclaimed as the One who has already risen. The true point of reference of immortality is less a time to come than the Lord who already lives.
Already in Jesus’ words to the thief on the cross, the “with me” added a Christological nuance to the idea of Paradise. And in St. Stephen’s words, “Lord Jesus, receive my spirit,” the Lord himself now appears as the paradise into which the dying man knows his life is to be taken up.
Thus there crystallized the consciousness of a life flowing from the risen Lord, given to man already at the death of the body, before the definitive completion of the world’s future. It became visible that there is a continuity of man beyond his bodily existence.
How was this continuity and factor of identity to be thought? From the Greeks came the concept of the “soul.” Certainly, this expression was loaded with a dual image of man, dangerous therefore, and in need of purification. But this is true, in fact, of all concepts, including the word “God”: the Greek “God” was by no means the same as the biblical Yahweh. Recent research has shown how Christian thought strove, especially in the High Middle Ages, to achieve the required purification and transformation of the concept.
It is indisputable that this dual vision has taken hold in the general consciousness and that, consequently, efforts at purification are necessary.
Talk of the immortality of the soul seems especially suspect to us today because it gives the impression of a metaphysical substance, and we therefore consider it more adequate to speak in more personalized, dialogical concepts. In a dialogical conception the answer would be: whoever is in dialogue with God does not die. God’s love gives eternity. Note that this differs from the concept of the “soul” only in the starting point of its formulation and in its line of thought.
This presupposes, of course, that we do not think substance from below, starting from a “mass” (which in any case is always questionable as mass), but from above, from the dynamic of spiritual completion. It seems to me that it is time to rehabilitate, within theology, the taboo concepts of “immortality” and “soul.” Certainly they are not free of problems, and the shake-up of recent years may be healthy and even necessary.
D. FAITH IN IMMORTALITY AND RESPONSIBILITY TO THE WORLD
The biblical question of what it profits a man to gain the whole world if he loses his soul seems today to be transformed into the question: what does his whole soul profit a man if the world is not served by it? I think that for Christian life the opposite proposition should rather be made: even the skeptic and the atheist ought to live “quasi Deus daretur,” as if God really existed.
What does this mean? To live as if God existed means to live as if under an infinite responsibility; as if justice and Truth were not mere programs but a living, existing power before which one must answer; to act as if the person beside me were not an accident of nature, in which there is nothing important, but a thought of God made flesh, an image of the Creator whom He knows and loves.
The Christian faith in the immortality of the soul does not, in its authentic intention, seek to impose a theory about something unknowable, but to make a statement about the measure and breadth of human life. It means to affirm that man is never a means but always an end in himself.
A creature whom He observes and loves, He being eternity, participates also in that eternity. How this is formulated, or how it may be represented, is ultimately secondary, although it is naturally an important question.
All the concepts we use, including the language of the immortality of the soul, are in the end only aids for thought (in part irreplaceable ones) with which we try to grasp the Whole by means of various anthropological models. In no case is it a matter of obtaining descriptions of the beyond and thus enlarging the space of our curiosity. In its essence, the confession of immortality is nothing other than the confession that God really exists.
•••
Quantum Physics: How It Arose
The generalization of the quantum hypothesis
Ideas taken from Arana: Materia, Universo, Vida.
In 1900 Planck had discovered (or hypothesized) that the energy of light is not transmitted continuously but in discrete quantities or “packets,” which he called quanta.
Building on this, Einstein published in 1905, a few months before publishing the Special Theory of Relativity, a paper on the nature of light, explaining how the photoelectric effect was possible: that light could knock electrons out of some metals, as if they were being struck by particles.
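In standard notation (textbook formulas; the summary itself does not spell them out), Planck's quantum hypothesis and Einstein's photoelectric equation read:

```latex
E = h\nu
\qquad\qquad
E_{\text{kin}} = h\nu - W
% E: energy of one light quantum of frequency \nu; h: Planck's constant;
% E_kin: kinetic energy of the ejected electron; W: the metal's work function.
```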
The nature of light is the most studied physical problem in history. Newton had proposed a corpuscular explanation of light (while admitting it has something wave-like about it). But in the 19th century scientists became convinced that light was an electromagnetic wave.
What Einstein was saying is that light also behaves as if it were a particle. Once this was accepted, it began to be considered whether, beyond light, this discontinuity would also apply to motion, especially that of electrons inside an atom.
When Planck discovered this, and in the years that followed, it was deeply uncomfortable for scientists. But knowing that all of physics and chemistry has since been explained with this quantum or discrete model, why was the discovery so uncomfortable for the physics of those years, and for its own discoverer, Planck? Because “modern science, since the discovery of the infinitesimal calculus, was based on the idea that all causal relations are continuous” (Planck 1960). Poincaré, one of the most prestigious scientists of his era, wrote in 1912: “His brilliant genius [Newton’s] had seen well (…) that the state of a moving system, or more generally, of the universe, could depend only on its immediately prior state, and that in nature all variations must take place in a continuous manner […] And yet, today this fundamental idea is being questioned; it is being asked whether it will not be necessary to introduce discontinuities, not apparent but essential, into the laws of nature” (Poincaré 1912).
The underlying philosophical problem is that modern science has never tried to explain what motion consists of, that is, how what is one way can come to be another. What science has done is establish a scale with infinite intermediate gradations between the starting point and the end point, so that what persists stays the same before and after the change. But if energy and motion occur “in jumps,” what is it that persists through the change? And therefore, on what will physicists base their equations? This is what Planck, Poincaré and the scientists of the early 20th century were asking.
In 1910 Rutherford discovered the atomic nucleus experimentally and proposed the model of the atom as a solar system: a nucleus circled by separate electrons. He did this by applying Newtonian mechanics. But this model did not explain why the electrons did not end up falling onto the nucleus as they used up the energy needed to orbit.
Niels Bohr applied Planck’s principles and concluded that electrons circle the nucleus only in “permitted,” discrete orbits, with discrete energy levels. If the atom gained energy, the electrons moved to an orbit farther from the nucleus; if it lost energy, they moved closer to the nucleus. For the first time, quantum physics had been applied to matter and not only to quanta of electromagnetic waves.
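For reference, the standard Bohr results for hydrogen (textbook formulas, not given in the summary): the allowed energy levels are discrete, and light is emitted or absorbed only when an electron jumps between them.

```latex
E_n = -\frac{13.6\ \text{eV}}{n^{2}}, \quad n = 1, 2, 3, \dots
\qquad\qquad
h\nu = E_m - E_n
% E_n: energy of the n-th permitted orbit; a jump from level m to level n
% emits (or absorbs) a light quantum of frequency \nu.
```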
Human Nature in Chinese and Greek Philosophy: External or Internal? – Piter – Summary
A comparison between Mencius and Hzün Tzü versus Plato
By Piter
The Chinese philosophers are concerned with human nature in order to predict how the people will behave under different actions of the ruler. The Greek philosophers are more critical of authority, but they are interested in human nature in order to find the right path to happiness.
Mencius concludes that deep down every man is good, and that if he is treated well he will respect the ruler as a father. The legalist Xun-Zi thinks that man’s first impulse is evil and selfish, but that he can and must be educated: led externally toward the good with rewards and punishments. Plato is not so interested in understanding how the people will behave as in understanding why an individual does things. He concludes that every man seeks the good and that, if he stays on that course, he will attain happiness.
The Chinese philosophers often put their efforts into giving counsel and advice to kings. A king would approach the philosopher with a question in need of resolution. These philosophies were valuable to kings for purposes of security, because the philosophers gave them advice on what would happen in a society after the king’s decisions were made and implemented. In this sense, the Chinese political understanding of human nature attempted to isolate the elements of human behavior essential to the functioning of the state and to find the social laws governing them.
Each of the philosophers has a different approach to diagramming the composition of the individual. The most important thing to realize is that Hzün Tzü argues that man’s nature is evil, while Mencius believes in the goodness of humans.
A. Mencius
B. Hzün Tzü
C. Plato
So, from the findings of these three sources, it looks like there is an initial disagreement as to whether human nature is found internally, ingrained in his being (Mencius), or externally, and acquired through conscious understanding (Hzün Tzü). It’s possible Plato provides the missing link by saying it is the nature of man to realize the good that is external to him.
•••
Faith and Science: a Panoramic View from the 19th Century – Estevao Fachini
by Ing. Estevao Fachini, Ph.D.
November 2009
What Is the Proper Use of Sex?
What may be done in this field, and what may not? Why is it said that homosexuality, premarital relations, and masturbation are wrong? Is it because the Bible says so?
To answer this adequately we must first answer another, more fundamental question: what makes some actions good and others not? What is the origin of morality?
The answers given to this question throughout history can be classified into three groups:
A) God: God decides which things are good and which are bad.
B) Things: things simply are that way.
C) Man: each culture agrees on what will be accepted in that society and what will not.
The Christian answer is that the morality, the goodness, of acts depends on how well they conform to the natural order.
Something is said to be good if it is aligned with the general design of the universe, if it complies with the natural laws (not only with the natural law). To determine the good use of sex, we need to establish the natural order of this instinct, of this attraction.
There are several physical pleasures in man’s physical nature: sex, eating, sleeping, drinking. All are there to make it easier for man to carry out certain especially important functions. If there were no pleasure in eating or sleeping, man would always be putting off those functions.
What is the purpose of the sexual instinct? To make it easier for a man to unite with a woman, not only physically but institutionally, forming a family. But is this a natural function? It is a social function, necessary for the body of society, not for each individual. Man is an especially complex animal that takes a long time to develop. And to develop in a balanced way, he needs the first years of his life to pass under the care of adults who offer him love. If this is missing, the person develops with deficiencies.
For this to work, the man and the woman must commit to living together stably. A temporary union, like that of animals, would not do, because no one would give up a profession for a temporary union. Think how irrational marriage is: a young man who has just won his freedom submits to a new slavery, to sharing the checkbook, to living with a person of the other sex, which implies that their interests are completely different. Joking aside, let us think seriously about how demanding it is for a man and a woman to commit to living together and forming a family. To overcome all these obstacles, to overcome this tendency toward independence, the designer of nature has put in place the sexual instinct: an instinct to share life and body with a woman. This is the meaning of the sexual instinct: to facilitate procreation within a marriage.
If we have understood this we can end the talk here, for we already have the golden rule of sexual morality: any use of sex is good that is oriented to procreation within marriage, to human procreation (outside marriage it is animal procreation).
Let us apply this rule to some concrete situations, to see how much it illuminates:
Isn’t pornography (or simply using provocative images of women as commercial bait in ads) a way of enjoying a beauty that God has placed in woman? Answer: it is a use of that attraction, but for another end: capturing attention. It also trivializes a woman’s attractiveness, which is used simply as merchandise. The same applies to using the excuse of a beach to exhibit feminine attributes.
Homosexuality: to understand the morality of homosexuality one must distinguish between homosexual tendency and homosexual conduct. Tendency is feeling attraction toward another person of the same sex. Homosexual conduct is giving in to that tendency and having relations with another of the same sex. The conduct is wrong; the tendency is not. A person can feel the most disordered tendencies imaginable, but if he does not give in to them or take pleasure in them, he is doing nothing wrong.
But isn’t spending one’s whole life repressing certain tendencies unhealthy, anti-natural, harmful, bad? What is natural in a damaged nature like ours, in a nature that lost its original design, is to spend one’s whole life fighting disordered tendencies. The great majority of men have to live our whole lives with adulterous tendencies, and that is no reason to feel sorry for ourselves. The underlying problem is that many people believe the end of life is pleasure, and so to tell someone he will not have a source of pleasure is to condemn him, to take away his purpose in life; it is inhuman.
SOME QUESTIONS OF MARITAL CHASTITY
We apply this same principle: the proper use of sex is within marriage and oriented, or open, to procreation. That means there is no room for anal sex, no room for onanism (ejaculating outside the vagina), nor for any behavior that prevents the natural completion of the conjugal act.
For Luther the conjugal act is a sin that God does not count, so one may perform it but may not enjoy it. For us it is a sacred act: God is there with us, and we do it following a rule He established. As long as we respect the natural completion of the act, we may adorn it with things that enhance it: music, wine, play. God is there as long as we are not closing the act to life. But when two spouses have sex with contraception, what they are doing is mutual masturbation.
This does not mean one should try to have children in every act. The better Christian is not the one with the most children. It is a matter of leaving the conjugal act “open,” of there being in each act a certain risk, of not trying to keep total control over the act, so that if God wills it can be a creative act. It is a matter of giving God His space, and not pretending to keep complete control of our life.
Because we live in a world where God is cornered, the idea creeps into us that we must take complete control of our life. So, to see whether we can have a child, we add and subtract to check whether we have the resources. And this saddens God, and leads us to restlessness: fear of losing a job, shame at not having certain goods at a certain age, and so on. If we do not escape this trap and count on God knowing better and directing our life, if we do not break this way of thinking, first lukewarmness arrives and then coldness.
We live in one of those historical cycles in which customs degrade and the moral tone of everyone, not only of the perverted, drops lower. And it may be that subjectively we do not notice impurity creeping in simply from “being” in the world (watching TV, walking down the street, going to the beach, and so on). The problem is that, even if subjectively we do not notice, objectively it harms our soul, because it cools love in us and makes things, in the long run, taste of nothing; we lose sensitivity, and we need ever stronger stimulation for things to attract us (things of every kind: work projects, people, personal objects, a car, etc.). This dulling of sensitivity makes us enjoy life less.
How do we keep from losing sensitivity when we are in a negative environment? By examining ourselves courageously every week, asking God for forgiveness and the Virgin for help.
In general, we should see purity as something positive, which makes us enjoy life more and more and see God at our side.
Understanding Quantum Physics
Nov 2009
The largest scientific project in history, with 31 countries spending 6 billion, is a machine built to test the claims of quantum physics.
Quantum physics began when, in 1900, Max Planck discovered that light also behaved like a particle and not only like a wave. At first nobody paid him much attention, because this went against the euphoria in the air over James Maxwell’s great unification, which had produced a set of equations explaining all waves, including light; the future, it was thought, lay in forgetting about particles as something very rudimentary.
The one who came to rescue the observation that light is made of particles was Albert Einstein. In fact, it was for this, and not for his theory of relativity, that he won the Nobel Prize. He discovered that when light strikes matter, it is not like a wave, such as sound, striking matter. To begin with, a beam of light does not strike billions of electrons, only a few. And the electrons leave the atom as if struck by a little ball. The energy with which they leave does not depend on the wavelength. From this interaction between light and matter, useful knowledge was extracted: it made it possible to add sound to films, a revolution, and it showed that ultraviolet light can cause skin cancer. Elaborating on this relation between light and matter, Einstein later postulated the famous equation E = mc², which relates energy to mass. Although this equation has traditionally been interpreted as meaning that “everything is energy,” what it originally meant is that “everything is mass”: even light has mass. This broke the trend toward the “wave-ization” of physics that Maxwell had started.
In the 1920s Niels Bohr, a Danish physicist, showed that not only energy but matter too moves in “quanta” or packets: he showed that an electron can orbit only in specific orbits, not continuous but discrete, separate ones. Ninety percent of an atom is empty space.
In 1924 Louis de Broglie showed that, in fact, matter sometimes behaves as if it were a wave: it can pass through two holes at once. This has been used to build microscopes that emit electrons, and PET scanners, machines that see inside the human body by emitting positrons.
In 1927 Werner Heisenberg, a German physicist, drew from these discoveries a very disturbing consequence: one cannot know, not even in theory, both where a particle is and what velocity it has; we can know either the position or the velocity, but not both at once. We can only give probabilities. What this amounted to saying is that, at bottom, material reality is not completely determined; part of material reality does not follow fixed laws, and the position of a thing is not completely determined by the state of the universe at a prior moment. Einstein was among those who protested most against this (“God does not play dice”).
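In its standard modern form (not quoted in the summary), the uncertainty principle bounds the joint precision of position x and momentum p, where hbar = h / 2π:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
% \Delta x: uncertainty in position; \Delta p: uncertainty in momentum.
% Shrinking one uncertainty necessarily inflates the other.
```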
Erwin Schrödinger, an Austrian physicist, discovered the equation that describes the position and velocity of any material body. It was a wave equation, so it was thought that physics was returning to the “Maxwellian” trend of everything being waves. But it turned out not to be so: reality, rather than waves, is particles.
This equation, which gives the position of any object in time, turned out to have a probabilistic component: what the equation yields is a “probability” of an object’s situation.
It is important to stress that this quantum probability is not, as probability is classically defined, a limitation of our knowledge. Here it is a matter of reality “really” not being determined in all its details.
And yet quantum physics is the most experimentally tested and most successful scientific theory in the whole history of science.
But its philosophical implications are also very interesting.
The first interesting implication of quantum mechanics is that physics stops being deterministic and becomes probabilistic: the things that exist are not completely determined by how things were in the past plus a set of laws. It turns out there are genuinely probabilistic phenomena (to the annoyance of Einstein, who was a determinist and thought that “God does not play dice”).
Moreover, a second implication: the fact that all motion in the universe can be seen only as probabilities can be read as a justification of the immateriality of the human mind. If all physical systems have only probabilities, those probabilities cannot remain probabilities forever; at some point there must be definite outcomes. It only makes sense to say there is a 60 percent probability that Jane will pass the French exam if at some point there is going to be a French exam for Jane. The only way probabilities make sense is if a human mind intervenes and certainty then appears. Therefore the human mind cannot be simply a physical system describable by equations. Therefore, the mind is not material.
Quantum randomness can be controlled by free will – Antoine Suarez – Summary
by Antoine Suárez
The assumption that human behavior is not completely determined by the past plays a key role in the way we behave in daily life and organize society through law. When I type this article, I assume that I am governing the movements of my fingers through my free will.
This paper aims to show that quantum physics does not entail the presumed incompatibility of quantum randomness with order and control. Section II argues, on the basis of the before-before experiment, that quantum randomness can be controlled by unobservable influences from outside spacetime and is therefore compatible with freedom in principle: both quantum randomness and free will refer to agency that is not exclusively determined by the past.
A. QUANTUM RANDOMNESS CAN BE CONTROLLED BY UNOBSERVABLE INFLUENCES FROM OUTSIDE SPACETIME
Quantum mechanics predicts correlated outcomes in space-like separated regions for experiments using two-particle entangled states.
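For concreteness (the canonical textbook example; the summary does not specify which entangled state the experiments use), a two-particle singlet state looks like this, and measuring one particle instantly fixes the correlated outcome for the other, whatever their separation:

```latex
|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(\,|01\rangle - |10\rangle\,\bigr)
% An equal superposition in which the two particles always yield
% opposite outcomes when measured in the same basis.
```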
There are two alternative ways of explaining these quantum correlations, depending on whether one assumes the freedom of the experimenter as an axiom or not.
The “freedom of the experimenter” is not something one can settle by experiment or computation; it is a matter of principle: you can either choose it or reject it.
If you reject it, then you can explain things in an entirely (local or nonlocal) deterministic way:
you can choose the “Many Worlds” picture, which is also fully local and deterministic: everything that can happen does in fact happen, but in different worlds; even the feeling that you are ’someone’ is an illusion, because you cannot know “which ’you’ is you, and which ’you’ is a copy.” It is a merit of this picture to show that determinism implies the loss of personal identity.
If you accept the axiom of freedom on the part of the experimenter, then you are necessarily led to accept influences faster than light (nonlocal influences) between the apparatuses.
Moreover, the before-before experiment rules out the belief that physical causality necessarily is time ordered, so that an observable event (the effect) always originates from another observable event (the cause) occurring before in time. This experiment demonstrates that the nonlocal correlations have their roots outside of spacetime.
In quantum entanglement experiments (Figure 2), local randomness and nonlocal order appear inseparably united: a random event A is nonlocally correlated, according to statistical rules, to another random event B, and A is correlated to B and not to any other events near B. This means that non-predetermined (“genuinely random”) local events can be controlled to generate nonlocal patterns (correlations) and, as the before-before experiment shows, the control happens from outside spacetime.
The before-before experiment demonstrates that immaterial influences can control randomness to produce nonlocal patterns. Thus, immaterial free will can control the random dynamics of the brain to produce meaningful communication and behavior, but has to pay for it with uncontrolled dynamics during sleep (no permanent conscious state). This constraint can also be formulated by stating that brain outcomes have to fulfil distributions imposed by neurophysiological parameters: statistical deviations after a certain amount of control (while waking) are corrected by further uncontrolled outcomes (while sleeping). Free will fits well with statistical laws of Nature like the quantum mechanical ones.
Temperance
That an immaterial reality exists alongside the material one is something practically all human beings in all ages have recognized. The existence of gods, angels, spirits, and so on has been universally believed. The materialists have been, except in the 20th century, a handful of professional thinkers, such as Democritus or Lucretius.
Another way of thinking, not as universal but very common, is to regard matter as an evil reality, the fruit of an evil principle, and the spirit as a spark of something good within us.
Catholic doctrine, however, affirms the primacy of spirit over matter while affirming at the same time the dignity of matter, denying the vision of matter as the fruit of evil and spirit as a spark of the Good. In the Christian vision, the universe is a hierarchy of beings, which are distinct so that together they can reflect God. God is as solid as a rock, as subtle as air, as fast as a cheetah, as strong as a rhinoceros. Genesis teaches us that God saw that everything was good (in direct opposition to the Babylonian culture of the moment); moreover, God becomes man and takes on matter and flesh. Catholic doctrine further considers the body a temple of the Holy Spirit, called to an eternal destiny.
Christianity therefore does not despise material reality. What it does do is recognize that man committed an original sin, with the consequence of a disorder in man that makes him at times feel a disordered attraction to the material or the sensible. This is why Christianity proclaims that man must constantly evaluate the attraction he feels toward the material, to distinguish when it is good and when it is pernicious.
We live in an age in which the possibilities of extracting pleasure from the material world have multiplied: food, sights, sensations. This in itself is not bad. But it has coincided with a dimming of faith, and has led many people to feel that the material world is all there is and that, therefore, pleasure is man’s end. As St. Thomas already said: if the spirit did not exist, man’s end would be pleasure.
A special need is therefore imposed on us to live the virtue of temperance. Living a virtue is not only an act of the will, an effort, but also an act of the intelligence, a self-examination, to detect, with God’s light, when we are loving something in a disordered way.
Temperance is not, therefore, repression, but moderation without abnormalities. It seeks a balance that guarantees the integral development of the human being.
Temperance produces, as its proper effect, a harmony in the person’s soul and life. And harmony, by definition, is beauty, and beauty, by definition, attracts others. Temperance, therefore, is an apostolic asset.
One of its main fields is food and drink. But it also applies to hearing, to how much music we need. And to posture.
Where Astronomy Is Headed
July 23, 2009
The debate over whether the universe is finite or infinite
Desde los griegos ha existido un debate intenso sobre si el universo tiene un pasado infinito o finito. Aristóteles pensaba que el universo tenía un pasado infinito. A muchos filósofos judios, cristianos y musulmanes, esto no les convencía porque era poco compatible con el concepto de creación.
En 1610 Keppler, uno de los primeros científicos modernos, basándose en observaciones sobre la noche, hipotetizó que el universo era finito.
Ya en el siglo 20, en 1910 el astrónomo Slipher descubrió que las “nebulas” (lo que luego resultaron ser otras galaxias) se estaban alejando de la tierra. Esto se determina analizando el “espectro” de la luz que emiten. Si este espectro cambia con el tiempo hacia el rojo (el extremo del espectro de la luz) significa que los objetos se están alejando unos de otros. Esto no era compatible con la cosmología del momento, que veía un universo lleno más o menos uniformemente de estrellas. Por lo tanto nadie le hizo caso, ni él mismo.
En la década de los 1920, Einstein formuló su Teoría General de la Relatividad. Aunque Einstein creía personalmente en un universo estático. Pero según su teoría, la fuerza de la gravedad actuando sobre en universo durante millones de años, debiera de haber hecho que éste colapsara sobre sí mismo. Para explicar por qué esto no ha ocurrido, se inventó una fuerza, llamada “constante cosmológica lambda” que sería como una fuerza de repulsión que mantiene estirado el universo. Al final de sus días, Einstein reconoció que lambda fue el mayor blooper de su vida. Porque en 1924, un astrónomo ruso, llamado Alexander Friedmann, desarrolló unas ecuaciones a partir de la teoría general de la relatividad, y concluyó que en universo tiene que estar expandiéndose, y que no era posible el universo estático propuesto por Einstein.
Beginning in 1924, using the telescope on Mount Wilson in California, the astronomer Hubble found that the galaxies farthest from the Earth were the ones moving away fastest, a relationship later called Hubble's Law. He established this by observing the spectrum of the light they emit and seeing that the most distant galaxies showed a greater shift toward the red (a larger "red-shift").
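A minimal sketch of the two measurements just described, under assumptions that do not come from the text: the emitted and observed wavelengths below are invented for illustration, and H0 = 70 km/s/Mpc is simply a commonly quoted modern value for the Hubble constant.

```python
# Red-shift and Hubble's Law, illustrative numbers only.
# For small shifts, recession velocity is roughly v = c * z,
# and Hubble's Law relates velocity to distance: v = H0 * d.

C_KM_S = 299_792.458  # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s per megaparsec (assumed value)

def redshift(lambda_observed: float, lambda_emitted: float) -> float:
    """Fractional shift of a spectral line toward the red."""
    return (lambda_observed - lambda_emitted) / lambda_emitted

def recession_velocity(z: float) -> float:
    """Approximate recession velocity in km/s (valid for z << 1)."""
    return C_KM_S * z

def distance_mpc(v_km_s: float) -> float:
    """Distance implied by Hubble's Law, in megaparsecs."""
    return v_km_s / H0

# Example: a hydrogen line emitted at 656.3 nm but observed at 663.0 nm.
z = redshift(663.0, 656.3)
v = recession_velocity(z)
print(f"z = {z:.4f}, v ≈ {v:.0f} km/s, d ≈ {distance_mpc(v):.0f} Mpc")
```

The farther the galaxy, the larger the red-shift, hence the larger the inferred velocity and, by Hubble's Law, the larger the inferred distance.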
With these observations in hand, in 1931 the mathematician Lemaitre, a Belgian Catholic priest, proposed for the first time that the universe came from an extremely dense and hot state, which he called the "primeval atom." He based his hypothesis on the work of Einstein, whom he knew personally. Einstein, however, remained fixed on his idea of a static universe.
By 1949 a heated debate was under way among astronomers, some supporting the expansion of the universe and others a steady-state universe. The term "Big Bang" was first used by the astronomer Fred Hoyle, who backed the steady-state model and attacked the expansion model: he sarcastically called the primeval-atom theory "this big bang idea."
Evidence in favor of the Big Bang
In the 1950s astronomers were divided between the two camps. But little by little the evidence came down on the side of the Big Bang.
First, it was seen that the youngest galaxies were found only at the far limits of the universe, whereas the steady-state theory predicted that they would be distributed uniformly throughout it.
Second, it was observed that the universe contains an overabundance of hydrogen and helium, the lightest elements. There is more hydrogen and helium than there should be if the universe were in a steady state. But if the universe began with an explosion, it makes sense that the first elements to form were the light ones, and that heavier elements (carbon, for example) only began to appear once the universe started to cool and condense into stars.
The final triumph of the Big Bang theory came, third, with the discovery of the "background radiation" in 1965. It was found that microwaves arrive at the Earth from every direction in the universe. Light comes in a range of wavelengths: from gamma rays, with very short wavelengths, through visible light, to radio waves, which have the longest wavelengths; microwaves sit between visible light and radio waves. This radiation, given its structure and its uniformity, cannot be emitted by any particular body. Moreover, its spectrum closely matches the theoretical spectrum of a body in thermal equilibrium (a "blackbody"), and the only state of thermal equilibrium we know of is the one the universe itself was in very early on, if it indeed came from an extremely dense and hot original state.
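A minimal sketch of why radiation from that early hot state reaches us today as microwaves, using Wien's displacement law. The background radiation's temperature of about 2.725 K is a measured modern value added here for illustration; it does not appear in the text.

```python
# Wien's displacement law: the wavelength at which a blackbody
# spectrum peaks is inversely proportional to its temperature.

WIEN_B = 2.897771955e-3  # Wien's displacement constant, meter·kelvin

def peak_wavelength_m(temperature_k: float) -> float:
    """Wavelength (meters) at which a blackbody at this temperature peaks."""
    return WIEN_B / temperature_k

lam = peak_wavelength_m(2.725)        # assumed CMB temperature, kelvin
print(f"Peak wavelength ≈ {lam * 1000:.2f} mm")  # ≈ 1.06 mm: squarely in the microwave band
```

A blackbody at a few kelvin peaks near one millimeter, between visible light and radio waves, which is exactly where the detected background radiation sits.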
These three pieces of evidence, which in recent years have only grown stronger, have made the Big Bang the most widely accepted cosmological model today.
A philosophical interpretation of the Big Bang
Lemaitre, the creator of the theory and himself a priest, bristled when people tried to identify his big-bang explanation with the religious idea of creation. He would say, "This is mathematics, not religion."
Can we say, then, that the Big Bang proves there was an act of creation? No. Creation, because it took place before time itself, is an event beyond the reach of science. We know of creation through faith.
Must we then affirm that reason and faith are two lights that illuminate different areas of reality? No. We must affirm, and this is the Christian vision of knowledge, that reason and faith see one and the same reality, but that faith can see things BEYOND reason. They are two beams shining in the same direction, but with different intensities.
The practical implication of this vision is, first of all, that what we know by faith cannot contradict what we know by reason. There is not one set of rules of the game for inside the church, another for business, and another for politics.