Author Archives: AvianWriter

About AvianWriter

Microcomputing and I grew up around the same time (a long time ago). I've always been fascinated by parallel programming but always found it more difficult than it needed to be. After much consideration, the Avian Computing project was born. Oh yeah, and I like long walks on the beach.

First KiloCore Chip Announced

Congratulations to the researchers at the University of California – Davis for their development of the world’s first chip with 1,000 independently programmed processing cores. The research team was led by Bevan Baas, professor of electrical and computer engineering, and included Aaron Stillmaker, Jon Pimentel, Timothy Andreas, Bin Liu, Anh Tran and Emmanuel Adeagbo, all graduate students at UC Davis. More information about their impressive achievement is available here. For you spec junkies, be sure to read their detailed paper here.

Dubbed the KiloCore chip, it boasts a number of impressive capabilities beyond just the number of cores:

  • It is implemented in 32nm CMOS technology
  • It operates at 1.78 GHz max at 1.1 V
  • It dissipates only 0.7 mW @ 0.56V @ 115 MHz per processor for a total of 1.3 W
  • At 0.84 V, the 1,000 cores execute 1 trillion instructions per second while dissipating just 13.1 watts
  • Each processor contains 575,000 transistors
  • The complete chip contains a total of 621 million transistors (less than half the 1.4 billion transistors in an Intel i7 quad-core CPU)
  • Each processor is independently clocked
  • Each processor can shut itself down to save energy when not needed
  • Each processor can run its own small program instead of SIMD

As impressive as its specs are, what impresses me the most is how soon it arrived. I expected that the various groups developing multi-core units would sneak up on the 1,000-core mark more slowly, edging past each other by a few tens of cores, or maybe a hundred, at a time. Instead, the UC Davis team leapfrogged the other teams and included 3 to 4 times more cores than the few other chips with more than 100 cores. Of the 60-plus multi-core chips listed in their research, most have fewer than 60 cores and only 3 or 4 have more than 100 cores.

Color me impressed. And when can I get one?

And Another Dream Dies (or at least goes to sleep)

Well, I had great dreams for expanding Avian Computing into using Nvidia CUDA to put all those lovely GPUs to work in parallel, but that just isn’t going to work. Or, more accurately, it’s going to take a lot more work than I can afford to devote to the project at this time.

See, there is a Java product called JCUDA that provides Java access to CUDA-enabled Nvidia cards. Unfortunately, what that means is that the Java code performs JNI calls to the C/C++ libraries, which are a thin layer over the built-in GPU math functions. So if I want to do shader functions or polygon transforms or FFT operations, that’s easy. However, if I want to run plain Java, even just a “hello world” program, that’s another story. At least I think it is.

And there’s the rub. Or a couple of them. There’s an Eclipse IDE version designed for CUDA/JCUDA, but that version runs on Linux. And the “simplified” install notes run a number of pages; it takes until page 3 of cryptic instructions just to run a test that verifies your Linux system is correctly configured. I mean, I do Linux haltingly and hesitantly, so that would be a long road to probable failure.

Or there’s a Windows version but that’s configured to run in Visual Studio. Which means downloading, installing, and learning Visual Studio and then installing CUDA and getting it configured. Again, another long road to probable failure.

Oh, and then there’s learning CUDA itself: all the directives, all the hints to insert into the C code to suggest to the compiler where to insert parallelism, and so on.

At the rate that I can find free time, I suspect it would take about a year to dedicate enough time to become sufficiently capable with CUDA/JCUDA to know whether I can apply it to Avian Computing. And unfortunately, there are way too many other promising fields to investigate to be able to make that kind of commitment.

Hear that sad whistling sound? That’s the sound of the wind going out of that particular sail. But better I figure it out now than 6 months from now.

ConcX v2.1 Now Available for Download

I am pleased to announce that the Concurrency Explorer (ConcX) version 2.1 source code is now available for free download. The source code is written in Java 8 and expects to be run from NetBeans IDE, both of which are also available for free download from Oracle. It should be cross-platform compatible for Windows, Linux (lightly tested) and Mac (not yet tested). See the Sidebar for the download link.

Also available is the new “Getting Started with Avian Computing” guide which describes the basics of Avian Computing and introduces ConcX, how to install it and run it from NetBeans, and the controls and indicators used in ConcX. The Getting Started guide also includes about a dozen examples of parallel scenarios that can be loaded and run and (even better) experimented with, adjusting the configurations of the birds in each scenario to see if it affects how the program runs. See the Sidebar for the download link.

Simulating Greed Results – Part 1

So the first round of simulations has been finished and the results have been very interesting. On the one hand, the results are pretty much what you’d expect: greed by one entity affects the entities closest to the greedy one. On the other hand, because this simulation makes it possible to monitor total throughput, it allows us to quantify and measure the impact of greed on the complete system. In other words, this simulation can be thought of as a way to study the impact of greed on the GDP of the system.

In the first round of testing, every farmer tries to turn on the water no more than 3 times a second. The nap() method in BasicBird returns a random value between 10ms and the bird’s configured nap length each time the bird has eaten. This means that “lucky” birds will attempt to turn on the water more frequently because the Java random number generator produced lower nap values. Unlucky birds will attempt to turn on the water less frequently, again because the Java random function generated higher nap values.

Luckiness also manifests itself in another way. Because each farmer (bird) must successfully get two shared resources, a farmer must be lucky enough to try to turn on the water when neighboring farmers aren’t already using the water. Sometimes trying more frequently just means failing more frequently. In general, however, trying more frequently results in water being turned on more frequently.
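The luck mechanism described above can be sketched in Java. The method name and the 10ms lower bound come from the description; the actual BasicBird source may differ, so treat this as an illustrative model only.

```java
import java.util.Random;

// Illustrative sketch of the nap-length behavior described above:
// a uniform random value between 10 ms and the bird's configured maximum.
public class NapSketch {
    static final Random rng = new Random();

    // Returns a nap length in the range [10, maxNapMs) milliseconds.
    static long napLength(long maxNapMs) {
        return 10 + (long) (rng.nextDouble() * (maxNapMs - 10));
    }

    public static void main(String[] args) {
        // A "lucky" bird draws low values and so retries more frequently.
        for (int i = 0; i < 5; i++) {
            System.out.println(napLength(300));
        }
    }
}
```

A bird that repeatedly draws values near 10ms will attempt to grab the faucets far more often than one drawing values near 300ms, which is all "luck" means here.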

So in the first round of testing with all the farmers set up the same way, all of the farmers had pretty similar results. Some failed more than others, some succeeded more than others, but it was always different farmers in each run who succeeded or who failed. The consistent variability of the results provides assurance that ConcX isn’t favoring one farmer over another and biasing the outcomes. Click on the charts below to see the full charts.


Total Failed Attempts and Successful Attempts


Total Amount of Water Delivered by Run


Difference Between Farmers Receiving Most and Receiving Least


Run 1 – Ajia1 Received Most Water


Run 2 – Dave1 Received Most Water


Run 3 – Irma1 Received Most Water


The actual numbers reported in these runs are unimportant. With this first round (3 runs), what’s important is that most of the farmers are receiving similar amounts of water and the differences are attributable to luck or chance. This round establishes the baseline that will be used for comparison to later rounds of testing.

In the next round of testing, one of the farmers is changed to try to turn on the water 3 times more frequently than the other farmers. Which farmer is selected is completely arbitrary; any of the 10 farmers could have been selected and similar results would have been achieved. In these results, Bill1 was selected to be greedy and try more frequently to get water.

The following results show that the number of failed attempts has gone up slightly even though the total number of successful attempts is almost unchanged. The total water delivered is also almost unchanged; it goes up slightly. The biggest change is seen in the Difference chart (Farmers: Max Deliv vs Min Deliv). The max amount that any one farmer received (Bill1) has gone up while the least amount that any one farmer received (Ajia1 or Cate1) has gone down. Again, be sure to click on the thumbnails below to see the details.


Successes & Failures – 1 Slightly Greedy Farmer


Total Water Delivered – 1 Slightly Greedy Farmer


Differences – 1 Slightly Greedy Farmer

Bill1 is Greedy Farmer – Ajia1 and Cate1 Suffer – Run 1


Bill1 is Greedy Farmer – Ajia1 and Cate1 Suffer – Run 2


Bill1 is Greedy Farmer – Ajia1 and Cate1 Suffer – Run 3

In the next round of testing (3 runs), the greedy farmer becomes even greedier and tries to turn on the water 10 times more frequently than the neighboring farmers. Again, the farmer that was selected was completely arbitrary.


Total Failures & Successes – 1 Greedy Farmer


Total Amount Delivered – 1 Greedy Farmer


Differences Max & Min – 1 Greedy Farmer

Run 1 – Edna1 is Greedy Farmer – Dave1 and Fred1 Suffer


Run 2 – Edna1 is Greedy Farmer – Dave1 and Fred1 Suffer


Run 3 – Edna1 is the Greedy Farmer – Dave1 and Fred1 Suffer

The results of this round of testing are similar to the previous round: the total number of failed attempts has gone up while the total number of successful attempts is about the same. The total amount of water delivered moves slightly higher. However, the maximum amount delivered to any one farmer (Edna1) has gone up significantly while the amount of water delivered to the least fortunate farmers (her neighbors Dave1 and Fred1) has dropped significantly.

One lesson that could be taken from these three rounds of testing is that the total amount of water delivered to the whole system doesn’t go up significantly by allowing one farmer to be more greedy. In this example, the gains of the one greedy farmer are only slightly greater than the losses of the neighboring farmers. The biggest difference from one test to another has been in the “inequality” of the distribution of water; the additional water that the greedy farmer gained has come at the expense of the neighbors.

It is tempting to draw conclusions from these results: in societies where resources are limited (every society on earth), the greediness of an individual is frowned upon because it doesn’t significantly increase the overall amount of product while increasing the hardship of others.

But is this always the case? What happens if death is introduced into the system? Followers of Avian Computing know that birds dying from lack of food has been part of the system design from the beginning. In the next rounds of testing, farmers who receive insufficient water are allowed to give up and move away because their crops died because of lack of water. While this seems cruel, it is in keeping with standard economic theory, where producers with a “competitive advantage” succeed while less efficient producers are allowed (or encouraged) to give up making their current product and begin making a new product.

Stay tuned for the next results.

Simulating Greed

One of the unexpected benefits of Avian Computing and ConcX is the relative ease with which simulations can be developed. ConcX is based on an asynchronous model with loosely-coupled threads, allowing the threads to dynamically adjust to their environment, the way that individual birds in a flock dynamically adjust to their real-world situations.

I was remarking to a friend that the Dining Philosophers problem was interesting because it was a dynamic representation of the resource-allocation problems much discussed and dissected by standard economics. Further, when using ConcX to solve the Dining Philosophers problem, I had noticed that it was possible to give one philosopher “competitive advantages” over the other philosophers. And that was when the Simulating Greed project was born.

Diagram 1. Basic 10 Farmer Setup.

The Simulating Greed project begins by increasing the number of participants and resources. And to make it a little more realistic, the philosophers were changed to Farmers and the resources changed from forks to Faucets. Every Farmer wants to water his crops. To do that, a Farmer must turn on two Faucets, one on each side of their property. See Diagram 1. Each Faucet is shared with the Farmer’s neighbors and can be set to provide water to one Farmer or the other but not both. Each Farmer, when they have control of both Faucets, can also set how long they will receive water.

Varying levels of greed can then be simulated by how frequently the Farmers try to water their crops and how long they set the water to run. A greedy Farmer will try to water his crops very frequently and will also set the water to run a long time. A less greedy Farmer will try to water less often and for shorter durations.

In this simulation, the assumption is that greedy Farmers always want as much water as they can possibly get. More water is always assumed to be better and there is no limit to the supply of water. In real life, there is an effective limit to how much water a single farmer can use and to the amount of water available.

Another factor in the simulation is whether or not Farmer death is allowed. A Farmer can be configured to never die, regardless of the amount of water they have received, or they can be set to die if they haven’t received water within some configurable time period. This factor can have a significant impact on the results because dead Farmers no longer compete for water. If a Farmer dies early, the neighbors of that Farmer find it easier to get water.
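The death rule described above reduces to a timeout check. A minimal sketch, assuming a starvation window measured in milliseconds (the method and parameter names are hypothetical, not the actual ConcX configuration):

```java
// Illustrative model of Farmer death: a Farmer "dies" when no water
// has arrived within a configurable starvation window.
public class StarvationSketch {
    // nowMs: current time; lastWateredMs: last successful watering;
    // starveAfterMs: configurable window before the Farmer gives up.
    static boolean isDead(long nowMs, long lastWateredMs, long starveAfterMs) {
        return (nowMs - lastWateredMs) > starveAfterMs;
    }

    public static void main(String[] args) {
        System.out.println(isDead(10_000, 2_000, 5_000)); // 8s without water: dead
        System.out.println(isDead(10_000, 9_000, 5_000)); // 1s without water: alive
    }
}
```

Setting the window effectively to infinity models the "never die" configuration; a short window makes early death, and the resulting reduced competition for the survivors, much more likely.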

Basic Setup

The Farmers have properties all in a row, with Faucets between each pair of properties. The basic setup has 10 Farmers with 10 Faucets. Each neighboring pair of Farmers have to share the Faucet between them. And to make demand even, the two end neighbors have to share with each other, even though they are furthest apart. See Diagram 1. This setup makes each Faucet a shared resource for 2 Farmers.

Note that Faucet 10 is shared between Ajia1 and Jack1. There should be a dotted line that runs between these farmers, but there wasn’t any clean way to draw this relationship without cluttering up the image. Instead of the dotted line, Faucet 10 is shown at both the top and bottom of Diagram 1 with half of the valve grayed out; you’ll have to use your imagination to draw in the water line between them. Ajia1 and Jack1 must share a Faucet to make it a closed and equal system where they both must compete with two other Farmers, just like all the rest.
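The two-Faucet acquisition can be sketched with ordinary Java locks standing in for Faucets. This is an illustrative model of the contention, not the ConcX implementation; the class and method names are assumptions.

```java
import java.util.concurrent.locks.ReentrantLock;

// Model: 10 Farmers in a ring, one Faucet (lock) between each pair.
// A Farmer waters only when he holds BOTH neighboring Faucets.
public class FarmSketch {
    static final int FARMERS = 10;
    static final ReentrantLock[] faucets = new ReentrantLock[FARMERS];
    static {
        for (int i = 0; i < FARMERS; i++) faucets[i] = new ReentrantLock();
    }

    // Farmer i needs faucet i and faucet (i+1) % FARMERS; the wrap-around
    // index models the shared Faucet between the two end Farmers.
    static boolean tryToWater(int farmer) {
        ReentrantLock left = faucets[farmer];
        ReentrantLock right = faucets[(farmer + 1) % FARMERS];
        if (!left.tryLock()) return false;       // neighbor has the left faucet
        try {
            if (!right.tryLock()) return false;  // neighbor has the right faucet
            try {
                return true;                     // both faucets held: water flows
            } finally { right.unlock(); }
        } finally { left.unlock(); }
    }

    public static void main(String[] args) {
        System.out.println(tryToWater(0)); // true when there is no contention
    }
}
```

Using tryLock() rather than a blocking lock() mirrors the birds' behavior: a Farmer who can't get both Faucets simply fails this attempt and will nap and try again later, which also avoids the classic Dining Philosophers deadlock.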

Advanced Setup


Diagram 2. Setup for 20 Farmers.

The number of Farmers is increased to 20 but the number of Faucets remains unchanged. See Diagram 2. This setup makes each Faucet a shared resource for 4 Farmers.

Even More Setups

The number of Farmers is increased to 40 and 80 but the number of Faucets remains unchanged. There is no diagram for these setups because I can’t visualize a configuration of Farmers that would allow them to share the Faucets without moving their farms into satellites in outer space. With 40 Farmers, each Faucet is a shared resource for 8 Farmers. With 80 Farmers, each Faucet is a shared resource for 16 Farmers.


Nap times are initially set to 300ms for each Farmer, and the length of time they keep the Faucet on is initially 300ms. The amount of water delivered to a Farmer is calculated from the number of milliseconds the Farmer keeps the water flowing and the number of times the Farmer successfully turns on the water. For example, a Farmer who successfully turns on the water 7 times for 300ms receives 2100 units of water, while a neighbor who successfully turns on the water 7 times for 500ms receives 3500 units of water.
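The water calculation is a straight multiplication; here is a minimal sketch that reproduces the two worked examples above (the class and method names are illustrative):

```java
// Water delivered = successful attempts × milliseconds per attempt.
public class WaterUnits {
    static int unitsDelivered(int successes, int msPerAttempt) {
        return successes * msPerAttempt;
    }

    public static void main(String[] args) {
        System.out.println(unitsDelivered(7, 300)); // 2100 units
        System.out.println(unitsDelivered(7, 500)); // 3500 units
    }
}
```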

Summarizing Results

Each Farmer keeps track of his own results during each simulation run. When a Farmer terminates (either by early death or end of natural life), he writes his individual Results to the TupleTree. A Summarizer bird runs during the run, and any time a Farmer dies for any reason, the Summarizer eats his Results and adds his stats to the stats of the other Farmers. For example, if Ajia1 had 100 units of water delivered and Cate1 had 200 units of water delivered, the Summarizer records that 300 units of water have been delivered.
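The Summarizer's aggregation step amounts to summing the per-Farmer Results it eats. A sketch under that assumption, with a plain map standing in for the Results eaten from the TupleTree (names are illustrative, not the ConcX API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative model of the Summarizer: total up the water units
// reported in each Farmer's Results.
public class SummarizerSketch {
    static int totalDelivered(Map<String, Integer> results) {
        int total = 0;
        for (int units : results.values()) total += units;
        return total;
    }

    public static void main(String[] args) {
        Map<String, Integer> results = new LinkedHashMap<>();
        results.put("Ajia1", 100);
        results.put("Cate1", 200);
        System.out.println(totalDelivered(results)); // 300 units delivered
    }
}
```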

When the Summarizer terminates, it writes its Summary info to the TupleTree. When the user selects the TupleTree tab and clicks the Show Tree button, all of the Food items in the tree are listed. The ResultsProcessed Food items contain summaries of the results for individual Farmers. The Summary Food item contains the totals for all Farmers as well as detailed summaries of each bird in its Contents object. To see the SummaryDetails object contained in the Summary.Contents object, enter a filename in the field at the bottom of the TupleTree tab and then press the Save button. Any object contained in a Contents object will print the values it holds IF that object implements the toDetailString() method. Otherwise it will print the info generated by that object’s toString() method.

In Conclusion

Altogether, the new features available in the TupleTree, combined with saving the results of the runs, provide the ability to analyze how individual Farmers are affected by the greed of their neighboring Farmers. More about those results in the next blog.

Avian Computing and JavaSpaces

A few years ago when this whole Avian Computing thing got started, I considered basing this project on Sun’s Jini/JavaSpaces technology. Why should I reinvent the proverbial wheel when Sun has already invested a significantly greater number of programming hours than I’ll ever hope to invest by myself, using much better programmers than I’ll ever hope to be?

However, a cursory review of JavaSpaces at that time yielded the gut-level feeling that JavaSpaces was a much bigger solution than the problem I was trying to solve. Sure, both JavaSpaces and the (soon-to-be) Avian Computing use Linda constructs, but that was about it. So I took the path less traveled and started working on Avian Computing.

Recently I started to question the wisdom of that decision and consequently started to read about JavaSpaces. Turns out my gut-level feeling was right. JavaSpaces is all about distributed computing which coincidentally happens to be asynchronous and parallel while Avian Computing is focused on thinking about parallel applications and how to mentally visualize the objects and threads of a parallel program. The fact that both projects use Linda constructs just proves how useful and universal Linda constructs are.


JavaSpaces provides a technology that allows an application to interact with Entries in (Java)Space and to access services available on other computers or to acquire the codebase (when necessary) to perform the required functions locally. JavaSpaces makes the location where the actual computations are performed invisible and irrelevant. JavaSpaces begins with the client-server architecture and morphs it into a homogeneous universal solution. Parallelism in the system is implicit and not explicitly encoded into the individual clients.

Avian Computing begins with the assumption that multicore CPUs are the new normal and that the biggest obstacle to improved computing is our inability to effectively use the power of these multicore CPUs. And that the biggest obstacle to using the full power of multicore CPUs is our inability to conceptualize parallel applications.

Avian Computing, as implemented in ConcX, encourages us to think about the actions of individual birds in the flock and how as a flock they will accomplish their goal. This simplifying metaphor encourages us to explore the inherent parallel possibilities of the application. The perspective provided by Avian Computing and ConcX reveals opportunities for utilizing the full power of multicore CPUs that are frequently non-obvious to developers comfortable with single-threaded programming.

JavaSpaces is an additional library of code, increasing the complexity of Java applications and reducing the number of programmers who can develop or maintain the code. Avian Computing is a simplifying technology that tries to minimize the amount of new code that must be written or maintained. And the code that is written is typically more standard Java that can be developed and maintained by more programmers.

JavaSpaces overloads the keyword public so that it has a different meaning in JavaSpaces than in regular Java. Additionally, Entries in JavaSpaces require that all key fields be public, surrendering private fields, accessor methods, and object encapsulation. Avian Computing doesn’t require learning new meanings for standard Java keywords or new rules for objects.

JavaSpaces provides a complicated implementation of the Linda constructs, providing a multitude of ways in which a tuple (Entry in JavaSpace) can be NOT found. For example, the key fields not matching exactly, or the transaction not matching, or the desired tuple not being available at the right time. Avian Computing and ConcX use a much simpler method to find matching tuples: each bird looks for only 1 or 2 types of food, and if it doesn’t find appropriate food, it just waits a little while and automatically tries again. No extra programming required. No additional concepts to learn. No complicated reasons for NOT finding tuples that actually exist.
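The wait-and-retry behavior just described can be sketched in plain Java, with a concurrent queue standing in for the TupleTree. The names and the retry limit are assumptions for illustration, not the ConcX implementation:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative model of a ConcX-style bird: look for food, and if none
// is found, nap briefly and automatically try again.
public class RetrySketch {
    static final Queue<String> tree = new ConcurrentLinkedQueue<>();

    static String waitForFood(int maxTries, long napMs) {
        for (int i = 0; i < maxTries; i++) {
            String food = tree.poll();   // try to find matching food
            if (food != null) return food; // found it: eat it
            try {
                Thread.sleep(napMs);     // not found: nap, then retry
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
        return null;                     // gave up (the bird "starves")
    }

    public static void main(String[] args) {
        tree.add("seed");
        System.out.println(waitForFood(5, 10)); // seed
    }
}
```

The point of the comparison: the entire "tuple not found" story is this one loop, rather than a matrix of field-matching, transaction, and timing rules.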


Even though I am glad that I followed my gut and didn’t try to leverage JavaSpaces, I expect that the Avian Computing project will incorporate many of the features and strengths of JavaSpaces. But only if they can be added without overly complicating using ConcX and Avian Computing.

Modeling Operating Systems in Avian Computing

One of the initial design goals of every operating system (OS) is that it be lightweight and have minimal performance impacts on the running applications. Unfortunately, as the OS matures, it begins to take on baggage and assume a heavier footprint.

The goal of lightness probably has an unintended consequence: it probably makes it harder for the developers to understand what their code actually does. Lightness generally means terseness, meaning no excess code, not even any diagnostic code.

An interesting way to overcome this apparent limitation would be to use Avian Computing and ConcX to model the OS being designed. Each of the processes to include in the OS would initially be a ConcX entity that performs the task(s) of the final process.

There would be several advantages of this method. First, and perhaps most importantly, using the built-in logging features in ConcX entities, it would be simpler to identify the conditions that lead to a failure. This would be increasingly true as the amount of parallelism built into the OS grows. The more sophisticated and parallel an OS, the greater the need for help locating the cause of any failure.

For example, assume that the operating system will use Semaphore X to control some resource and that the semaphore became unavailable to the various processes. In ConcX, it would be relatively simple to find which of the threads had obtained Semaphore X, when exactly that happened, and what it was waiting for that was preventing it from releasing Semaphore X. Assuming the developer had instrumented his bird properly, the bird would have recorded when it ate Semaphore X and any problems or issues it encountered that prevented it from releasing Semaphore X. The developer might even have made diagnosis easier by writing an error food object out to the TupleTree, such as when some value is expected to be Zero or One and instead it is negative or greater than One.
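The instrumentation idea can be sketched as follows. The class, history list, and error-food list here are hypothetical stand-ins for the bird's built-in logging and the TupleTree; only the range check itself comes from the scenario above:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a bird records every time it checks Semaphore X
// and writes an "error food" item when the value is out of range.
public class SemaphoreAudit {
    static final List<String> history = new ArrayList<>();   // bird's own log
    static final List<String> errorFood = new ArrayList<>(); // stand-in for TupleTree

    static void checkSemaphoreValue(int value) {
        history.add("checked SemaphoreX, value=" + value);
        // The developer "knows" the value is always Zero or One;
        // this trap catches the case where that assumption fails.
        if (value < 0 || value > 1) {
            errorFood.add("ERROR: SemaphoreX out of range: " + value);
        }
    }

    public static void main(String[] args) {
        checkSemaphoreValue(1);  // expected value: history only
        checkSemaphoreValue(-3); // unexpected value: recorded as error food
        System.out.println(errorFood.size()); // 1
    }
}
```

This is the same "assumption-trapping" discussed later in the post: cheap to write, near-zero runtime cost, and it leaves a trail showing exactly when the impossible value appeared.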

Which leads to the second advantage of modeling the OS in ConcX: the ability to modify the system with minimal effort. When a potential fix is identified, it can be inserted into the appropriate bird(s) and the system restarted. No major recompiles or installing the executable on the test system(s). And much like with Unit Testing, a test bird can be configured to always produce the error condition, and the system run to verify that it handles the error appropriately.

Beyond error correction, easy modification of the modules in an OS makes it easier for developers to experiment with how the functions in the OS are allocated. For example, is it better to have one code module with a huge IF statement that then calls sub-modules or is it better to have a bunch of separate special-purpose modules? Should Capability X be included in Function Y or should they be separate functions?

Additionally, it should be easier to identify which birds are the bottlenecks. If one or a couple of birds are performing some capabilities that always cause other birds to wait excessively, then those birds can be analyzed to see if they can be split into separate functions or simplified or streamlined, etc.

A third advantage is the ability to catch “black swan” events. Unexpected conditions are frequently difficult to identify because the developer “knows” that some value will always be Zero or One, so never considers the possibility that it might fall outside that range, and won’t find the resulting error until it actually does.

If the developer codes his birds correctly, any unexpected values will be recorded in the bird’s history and/or written as an error object to the TupleTree. This assumption-trapping is easy to write in ConcX and has minimal impact on overall performance but pays huge dividends by catching unexpected conditions that can lead to unexpected behavior by (or crashing of) the OS. Identifying the failures of code or values that are “too simple to fail” and identifying all the conditions that must be correct for a module to succeed effectively produce a “criteria list” that the developer of the final OS must be able to meet.

Another advantage of modeling in ConcX is that low-level errors could intentionally be allowed to propagate through the OS to study the effect on the system. Errors are not all created equal. Some errors could cause catastrophic results; other errors might have zero overall impact because they null out and are internally eliminated. Knowing which errors have the greatest potential for affecting the OS allows developers to focus their limited time and attention where they will have the greatest impact.

Perhaps most importantly, modeling the OS in ConcX will allow the developers to think about and interact with their new OS at a higher level of abstraction. Conceptually, they can move functionality around and adjust behaviors with minimal costs in time and effort. ConcX provides a loosely-coupled environment where changes to one piece of code will only affect other pieces of code through a well-defined interface (the TupleTree).

And then at the end of the modeling phase, developers have a working “flowchart” from which to code the actual OS. All of the time spent coding the birds in ConcX is thrown away. Every bit of the analysis and effort to understand the new OS is kept. The most critical portions of the OS can be coded quickly and with confidence because they are already well understood and well defined because of the time spent modeling the OS in ConcX.

Markets, Equilibrium, and

One of the fundamental concepts of Economics over the past 50 years has been market equilibrium. Simply stated, if markets are left to their own, they will naturally balance supply and demand for all products and markets will efficiently determine the prices and amounts of all products. No need for government intervention or price controls; it will all be achieved by the “invisible elbow”* of market competition. In fact, Equilibriumists believe that the government is the force that prevents markets from achieving true equilibrium. If government were only kept from interfering with markets, market equilibrium would be restored to the throne of Shangri-La and riches and profits would stream out of the mountains of commerce to quench the thirst of everyone.

IF THIS IS TRUE, then why do all markets move in the direction of monopolies? Recent history demonstrates this point.

When Bell telephone was broken up into multiple phone companies, reducing the barrier to competition, the number of phone companies exploded. Now, 30 years later, we’re down to 4 or 5 mega-phone companies.

When the US airlines were deregulated, the number of airline companies increased, providing increased competition. Now, 30 years later, the number of airline companies in the US is down to a handful, with a bunch of regional airlines handling the less-profitable scraps. How long until they’re all consolidated into a handful of regional airlines?

When I was first old enough to buy beer, there were about a dozen nationwide beer makers. That number has been reduced to only a few – the latest merger of SAB Miller and AB InBev ensures that 1 in 3 beers purchased in the US will be bought from AB InBev. And when the trend for craft beers runs its course, AB InBev will probably sell 2 out of 3 beers.

It is irrelevant what equations or theories the Equilibriumists produce; the evidence of what has actually happened is quite different from their beliefs. For equilibrium to exist, the number of producers of any product must be large enough to develop healthy competition. Without competition, markets lose their invisible kneecap* that regulates prices. That is why most economists are against monopolies and oligopolies – they reduce the competition among sellers that produces the lowest prices and “the most benefits for society at large.” (Whether the lowest prices produce the most benefits for society is a different subject, open for debate.) Instead of moving in the direction of increased competition, markets always move in the direction of consolidating competitors into fewer, larger competitors, despite the efforts of the government to limit the consolidations.

Equilibriumists and conservative economists in general agree that the government shouldn’t be in the business of picking winners and losers in business and that competition should be the sole determinant of said winners and losers, underpinning their beliefs that markets work best when left alone. However, when competition is the sole determinant of winners and losers, the market will always move in the direction of monopolies.

The commonly accepted economic thought is that the winners (or luckiest or most efficient producers, etc.) will do better and sell more product and thereby claim increased market share, driving out the less efficient or more obsolete producers. However, this fails to follow the thought to its logical conclusion: the winners eventually out-produce the majority of their competition, driving out that competition, which leads to oligopolies or monopolies, thereby producing a market that is out of equilibrium because the winners are no longer constrained by competition. They can charge whatever they want.

In other words, free markets always end up destroying free markets.

Consider this: the description of markets always moving in the direction of monopolies is similar to the description of how the stars and planets formed. In this description, the early universe is filled with an almost uniform distribution of atoms and nothing else. The tiny differences in uniformity cause the atoms in slightly more densely populated areas to be pulled together into clumps. Those clumps attract more of the surrounding atoms until they start to form a large body. That large body draws in even more surrounding atoms and grows until its combined gravitational attraction forms it into a planet or a star. Eventually all of the free atoms are absorbed by the planets and the stars.

Market equilibrium is a description of a perfect, ideal state that can never exist for very long, just like the early universe with its (almost perfectly) uniform distribution of atoms. Eventually, even without any outside forces or influences, the markets (and atoms) begin to consolidate into larger units. Because the competitors in a market will never be exactly equal, true market equilibrium can exist for only moments: competition, innovation, efficiency, and just plain good (or bad) luck will upset the equilibrium and send the market into consolidation and reduced competition.

Market equilibrium is cloudy economic thinking; it focuses on only one part of the issue (of the cloud) and ignores its effect on other parts of the economy (on the rest of the cloud). For economics to become a useful tool, we have to move beyond wishful thinking and our dreams of ideal, perfect worlds that can never exist. We have to accept that markets will twist and turn and change because of innovation and chaos, because of fashions and consumer whims, because of improvements and resistance to change. We have to abandon our (bordering on) religious beliefs that markets will always right themselves because of equilibrium, that bubbles can’t really exist in a market, or that ONLY government interference prevents the proper functioning of markets. We have to accept that markets can, by themselves, run off into a ditch and fail. It is the nature and the essence of the beast called free markets.

*If the “invisible hand” really is invisible, how do they know it’s a hand? An elbow or a buttock is an equally valid body part UNTIL someone actually sees the invisible hand, at which point it is no longer invisible!!!

The Economics Simulation Project – Part 6

So far we have introduced the following entities that will form the baseline of the Economic Simulation:

  • Individuals
  • Consumers
  • Government
  • Vendors
  • Producers

The final entities in the Economic Simulation are all financial entities:

  • Banks
  • Credit Cards
  • Investments (Stocks, Bonds, etc)

Bank entities will begin as savings and loan organizations. The other aspects of banking will be covered by the Investment entities. Banks will make loans to Individuals, Consumers, Vendors, and Producers and collect installment payments (with interest). Initially, Consumers, Vendors, and Producers will each make some typical loan payment, since on average most of them carry loans.
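A minimal sketch of how a Bank entity might compute those installment payments, using the standard loan amortization formula (function and parameter names are my placeholders, not the simulation’s actual API):

```python
def monthly_payment(principal, annual_rate, years):
    """Standard loan amortization: the fixed monthly payment that repays
    the principal plus interest over the full term of the loan."""
    r = annual_rate / 12.0   # monthly interest rate
    n = years * 12           # total number of monthly payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# A $200,000 mortgage at 4% over 30 years costs about $954.83/month.
print(round(monthly_payment(200_000, 0.04, 30), 2))
```

A Bank entity could call this once per loan and then collect that amount from the borrowing Consumer, Vendor, or Producer every simulated month, with the interest portion counting as Bank income.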

Credit Cards are usually backed by Banks, but because they have such a significant role in the lives of Individuals and Consumers, Credit Card entities will be split out separately. Credit cards also have a much broader distribution than bank loans; approximately 40% of American households rent instead of holding a home mortgage, but approximately 70% to 80% of households have at least one credit card, depending on the year selected. Credit card payments will be another payment that Consumers make, based on some typical amount for their quintile. Individuals, however, will make credit card payments based on their individual profiles, which will try to estimate their tendency to buy on credit and their tendency to carry a balance on their credit cards. Based on income and other factors, their “credit score” will affect the interest rate that they pay, typically in the 15% to 30% range.
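Here is one way an Individual’s credit card payment could be driven by such a profile. This is a hypothetical sketch: the `Individual` fields, the linear score-to-APR mapping, and the payment rule are all my assumptions, chosen only to stay inside the 15%–30% range mentioned above.

```python
class Individual:
    """Hypothetical profile fields; the real simulation's attributes may differ."""
    def __init__(self, income, buy_on_credit, carries_balance, credit_score):
        self.income = income
        self.buy_on_credit = buy_on_credit      # 0..1 tendency to charge purchases
        self.carries_balance = carries_balance  # 0..1 tendency to revolve a balance
        self.credit_score = credit_score        # 300..850

def card_interest_rate(credit_score):
    """Map a credit score to an APR in the 15%-30% range (assumed linear)."""
    frac = (850 - credit_score) / 550   # 850 -> 0.0, 300 -> 1.0
    return 0.15 + 0.15 * frac

def monthly_card_payment(person, monthly_spending):
    charged = monthly_spending * person.buy_on_credit
    revolved = charged * person.carries_balance
    interest = revolved * card_interest_rate(person.credit_score) / 12
    # Pay off the non-revolved charges plus interest on the revolved balance.
    return (charged - revolved) + interest

person = Individual(50_000, 0.6, 0.4, 700)
print(round(monthly_card_payment(person, 2000), 2))
```

The same function works for Consumers at the quintile level by feeding it a quintile-typical profile instead of an individual one.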

Investment entities play a significant role in the US economy but in many ways they behave like a separate, parallel economy that is only connected to the mainstream economy at a few points. Some of those intersections are the Bank entities, consumer confidence, and Individuals.

When dealing with Consumers at the quintile level, investments look like savings accounts: 2% interest earned and 7% stock market returns both just look like savings with different interest rates. On average, throughout a quintile, some Consumers will save more and some will save less; some Consumers will receive higher interest payments and others will receive less. The Investment entity doesn’t really affect Consumers.

However, Banks are strongly associated with Investments, the Prime Interest rate, etc., so their lending is affected by Investments in general. Changes in Fed policy will therefore affect lending policies, interest payments, and more.

Just as importantly, the stock market and bond markets affect (and reflect) consumer confidence. If consumers are feeling nervous, they will put more money into bonds. If consumers are feeling optimistic that everything is getting better, they will put more money into stocks, perhaps even taking out savings to invest heavily in wildly speculative stocks. When the stock market goes up consistently for a while, the “wealth effect” comes into play, making people feel more confident, reducing savings rates and increasing purchases, setting the “virtuous cycle” in motion.

Individuals are affected by their Investments more than Consumers in general. This is intentional: some people are lucky when they invest and others lose everything. Some Individuals put their money in steady performers; others invest their money in stocks that might grow significantly or might fail completely. Based on the Individual’s investment profile and how “lucky” the individual is, an Individual may gain greatly or lose greatly. On average, however, it is expected that most Individuals will see their Investments pay approximately 5% year over year.
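One simple way to get that behavior – individual luck with a ~5% average – is to draw each Individual’s yearly return from a distribution whose mean is 5% but whose spread depends on the Individual’s risk profile. The Gaussian draw and the specific spread numbers are my assumptions, purely for illustration.

```python
import random

def investment_return(risk_profile, rng):
    """Draw a one-year return. risk_profile 0.0 = steady performer,
    1.0 = wildly speculative. The mean stays near the ~5% average."""
    mean = 0.05
    stdev = 0.02 + 0.30 * risk_profile   # speculative picks swing much harder
    return rng.gauss(mean, stdev)

rng = random.Random(7)
steady = [investment_return(0.0, rng) for _ in range(1000)]
wild = [investment_return(1.0, rng) for _ in range(1000)]
# Both groups average ~5%, but the speculators' individual years vary wildly.
print(sum(steady) / len(steady), min(wild), max(wild))
```

Run over many simulated Individuals, the steady investors all end up near the 5% average, while a few speculators get rich and a few lose everything – exactly the lucky/unlucky spread described above.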

OK, that’s everything that I can think of for now. I’ll add more as I come across it.

The Economics Simulation Project – Part 5

So far we’ve covered Individuals, Consumers, and the Government entities in the simulation. That leaves Vendors and Producers.

Vendors sell stuff or services to Individuals, Consumers, and the Government entities. Right now the Vendors reflect the purchases that Individuals and Consumers typically make, based on the standard Basket of Goods and Services defined for the Consumer Price Index (CPI). Using this categorization produces some unrealistic combinations of vendors, such as the Other category, which includes cigarettes, bubblegum, and other miscellaneous items. Transportation is another odd Vendor because it represents both Public Transportation and Auto Dealers. These odd combinations are not considered significant at this point in the development of the Economic Simulator, although it is expected that more detailed definitions of Vendors will become part of this project.

Vendors also buy all of their stuff from Producers. If the amount spent on clothing by Individuals and Consumers increases, then the amount of clothes that the Vendor buys from Producers should also go up. If the amount spent on food goes down, then the amount the Vendors buy from Producers should also go down.

Vendors are also employers, which means that as their sales increase, the number of employees will increase, increasing their expenses. The number of employees will be allowed to increase by fractional amounts (e.g., 0.13 employees) because this number represents the overall increase in their specific vendor category. If demand goes down, the number of employees will also go down.

Producers include raw-materials producers, such as miners and wheat growers, as well as manufacturers such as blue jeans makers. They are both employers and sellers of stuff and services to Vendors. Employees are part of their expenses, as are the costs of buying or mining or refining the materials needed to make their products. When their costs go up, their prices to Vendors go up. When Vendors order more, Producers respond by producing more. More employees are hired when Producers need to produce more (including fractional amounts); employees are laid off when demand goes down.
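The Vendor-to-Producer demand chain, with fractional headcounts, might look something like this sketch. The class names match the entities above, but every field and number is a placeholder of mine, not the project’s actual design.

```python
class Producer:
    """Sells goods to Vendors; headcount tracks the units ordered."""
    def __init__(self, unit_cost, workers_per_unit):
        self.unit_cost = unit_cost
        self.workers_per_unit = workers_per_unit
        self.employees = 0.0

    def fill_order(self, units):
        # Headcount may change by fractional amounts because it represents
        # a whole producer category, not a single firm.
        self.employees = units * self.workers_per_unit
        return units * self.unit_cost

class Vendor:
    """Buys from a Producer and sells to Consumers at a markup."""
    def __init__(self, producer, markup, workers_per_sale):
        self.producer = producer
        self.markup = markup
        self.workers_per_sale = workers_per_sale
        self.employees = 0.0

    def sell(self, units):
        cost = self.producer.fill_order(units)   # demand flows upstream
        self.employees = units * self.workers_per_sale
        return cost * (1 + self.markup)

clothing = Vendor(Producer(unit_cost=8.0, workers_per_unit=0.001),
                  markup=0.5, workers_per_sale=0.0005)
revenue = clothing.sell(10_000)
print(revenue, clothing.employees, clothing.producer.employees)
```

When Consumer spending on clothing rises, `sell()` is called with more units, the Producer’s order grows, and both headcounts scale up (or down) automatically – the behavior described in the two paragraphs above.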

Vendors and Producers will also need an “Automation” factor that represents the increased productivity of workers due to automation and computerization. For example, the coal mining industry has seen significant reductions in the size of its workforce primarily due to improved coal mining equipment and techniques: “mountaintop removal” in West Virginia has applied open-pit mining techniques to extracting coal, allowing huge earth-moving equipment to replace the manual efforts of thousands of coal miners. As another example, large corporations used to have entire floors of accountants tracking all of their income and expenses; those hundreds of accountants have been replaced by a dozen or so accountants working at their computers.
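An Automation factor could be as simple as compounding productivity growth that shrinks the workers needed for a given output. The 2%-per-year default below is an assumed placeholder; in practice it would be calibrated per industry.

```python
def employees_needed(units, base_workers_per_unit, year, automation_rate=0.02):
    """Automation factor: each simulated year, each worker produces more,
    so fewer workers are needed for the same output."""
    productivity = (1 + automation_rate) ** year
    return units * base_workers_per_unit / productivity

print(employees_needed(1_000_000, 0.001, 0))   # baseline headcount
print(employees_needed(1_000_000, 0.001, 20))  # same output, ~673 workers
```

At 2% per year, the same million units need roughly a third fewer workers after twenty years – the coal mining and accounting stories above, expressed as a single compounding factor.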

No attempt will be made at this time to track the off-shoring of work, i.e., sending manufacturing jobs overseas where employment costs are lower. At the current level of analysis, a streamlined job in the US is the same as an off-shored job; both are reductions in the costs to the Producer. Some Producers in one industry will have larger cost reductions than Producers in other industries. Some Producers will be more efficient and will drive out or absorb the less efficient Producers.

All of the churning caused by automation and computerization and mergers and acquisitions and off-shoring among Producers is not going to be captured at this point. In fact, the initial representation of ALL Producers will be a single Producer entity that all Vendors buy from. Since individual consumers account for 70% of GDP, it is more important to capture their activity with some fidelity than to capture every detail of Producers.

As with Government expenses, the reporting needs and curiosity of developers will drive the level of detail applied to the Producers. If the amount of cotton grown for clothing is important, then it will be necessary to estimate how much cotton clothing is sold by Vendors as well as what affects the amount sold (styles, product options, weather, etc). If the trade-offs between renewable energy and fossil-fuel energy need to be simulated, then those Producers and the resources they consume will need to be modeled.

For now, the details of Vendors and Producers will be minimized. As they are better understood, their models will be improved to produce finer levels of detail.

More next blog