Elemental Economics

Dr. Harry Hillman Chartrand, PhD

5.0 Competition

1. Monopolistic Competition   

Monopolistic competition satisfies the following conditions: 

- like perfect competition in that there is a large number of sellers so that the actions of one producer have no significant effect on rivals; 

- like monopoly and oligopoly in that each seller faces a negatively sloped demand curve for a 'distinctive' product; and,

 - each seller possesses some market power depending on the elasticity of demand.   

Under monopolistic competition, independence of producers results from the 'attachment' of certain consumers to specific producers. This affects price, but to a lesser extent than under monopoly. In the long run, price equals average cost and marginal revenue equals marginal cost. In theory, monopolistic competition is considered inefficient because price is higher and quantity supplied lower than under perfect competition, and there is a deadweight loss of consumer & producer surplus, yet there is no long-run economic profit. 

Monopolistic competition occurs in a market in which product differentiation exists; it exhibits elements of both perfect competition and monopoly. There are a large number of sellers of close substitutes that are not viewed by consumers as exactly the same, i.e., they are differentiated.  Under these conditions it is difficult to determine exactly what constitutes the industry. Mathematical and geometric proof of the profit-maximizing price/quantity outcome was developed independently by Joan Robinson at Cambridge University and by Edward Chamberlin in the U.S.A. in the early 1930s. Under the Chamberlin Solution, it is assumed: 

  • firms producing such differentiated goods can be clustered into product groups; 

  • the number of firms in the group is sufficiently large so that each firm operates as if its actions had no effect on its rivals; and, 

  • demand and cost curves are the same for all firms in the group. 

In effect the market demand curve is disaggregated into distinct market segments, e.g., restaurants by food type - burgers & fries, pizza, Chinese, vegetarian, et al.  There is, however, free entry, and if excess profits are being earned, firms in another 'niche' can easily convert their capital plant & equipment, e.g., a Chinese food restaurant has a kitchen, seating and cash registers that can relatively easily be converted into a steakhouse. 

Given that each firm's product is slightly different, it faces a negatively sloped demand curve for its 'market niche'.  In effect, the industry demand curve is disaggregated into market segments.  The position of the demand curve depends, however, on the price of other firms' output.  Thus an increase in the prices of rivals will shift the firm's demand curve up to the right; a decrease would cause a shift down to the left.  This is because the goods, while differentiated, are close substitutes.

In the short run, equilibrium of the initial entrant will be reached where marginal cost equals marginal revenue, i.e., profit maximizing: the cost of the last unit exactly matches what it earns while all previous units cost less.  The outcome is identical to short-run monopoly.  In the long run, however, firms are able to change the scale of production and enter or exit the industry.  Therefore long-run equilibrium is reached where long-run average cost is tangent to the demand curve and where marginal cost is equal to marginal revenue, i.e., firms are maximizing profits, but earning only normal profits because price is equal to average cost and economic profits are zero.  At this point there is no incentive to enter and equilibrium is established (P&B 4th Ed. Fig. 14.2; 5th Ed. Fig. 13.2; 7th Ed Fig. 14.1 & Fig. 14.3; R&L 13th Ed Fig. 11-2; M Fig's 16.2 a & 16.3).
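A minimal numerical sketch (in Python, with entirely hypothetical demand and cost figures) of the short-run calculation: the firm searches for the output at which marginal revenue equals marginal cost and, in this example, earns a positive economic profit that would attract entry in the long run.

```python
# Hypothetical short-run example for one firm in its market niche.
# Inverse demand for the firm's differentiated product: P = 20 - 0.5*Q
# Total cost: TC = 10 + 2*Q + 0.25*Q**2   (all numbers are illustrative)

def price(q):
    return 20 - 0.5 * q

def total_revenue(q):
    return price(q) * q

def total_cost(q):
    return 10 + 2 * q + 0.25 * q ** 2

def profit(q):
    return total_revenue(q) - total_cost(q)

# Search a grid of outputs for the profit-maximizing quantity.
grid = [q / 100 for q in range(0, 4001)]        # Q from 0.00 to 40.00
q_star = max(grid, key=profit)

# Analytically: MR = 20 - Q and MC = 2 + 0.5*Q, so MR = MC at Q = 12.
print(f"Profit-maximizing output Q* = {q_star:.2f}")
print(f"Price charged            P* = {price(q_star):.2f}")
print(f"Short-run economic profit   = {profit(q_star):.2f}")
```

In the long run, entry by rivals shifts this firm's demand curve to the left until the positive economic profit shown above is competed away and price just equals average cost.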


2. Oligopoly 

 Oligopoly satisfies the following conditions: 

  • a small number of large firms that dominate the industry;

  • a competitive fringe of smaller firms; and,

  • actions of a producer are perceptible to rivals, i.e., interdependence of sellers whereby the action of one results in the reaction of others. 

In perfect competition, monopoly and monopolistic competition there exists a determinate solution to a profit-maximizing price/quantity outcome. When there are only a few sellers, however, each firm recognizes that its best choice depends on the choices made by rivals. There are dozens of alternative oligopoly pricing theories and some economists claim there is no determinate solution. In an oligopolistic market there is usually price stability because of the interdependence of sellers. Interdependence results in 'game-playing' behaviour whereby suppliers act like players in a game, acting and reacting to the moves of their competitors. Competition tends to take place on a secondary level of: product differentiation; technological innovation; and, diversification, i.e., producing more than one commodity. In theory, oligopoly is considered inefficient because price is higher and quantity lower than under perfect competition and there is a deadweight loss of consumer & producer surplus. 

a) Cournot Solution 

 The Cournot Solution proposes that each firm chooses an output that will maximize its profits assuming the output of its rivals is fixed. The solution concludes that there is a determinate and stable price-quantity equilibrium that varies with the number of sellers. In effect each firm makes assumptions about its rivals' output that are tested in the market. Reaction follows reaction until each firm correctly guesses the output of its rivals. 

A much more sophisticated and complex solution known as the 'Nash-Cournot' equilibrium was proposed by John Forbes Nash, the protagonist of the movie 'A Beautiful Mind'.
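A minimal sketch (with hypothetical demand and cost numbers) of the Cournot adjustment process for two sellers: each firm repeatedly chooses the output that maximizes its profit on the assumption that its rival's output is fixed, and the guesses converge to the determinate Nash-Cournot equilibrium.

```python
# Hypothetical Cournot duopoly: market demand P = a - b*(q1 + q2),
# constant marginal cost c for both firms (illustrative numbers).
a, b, c = 120, 1.0, 20

def best_response(q_rival):
    # From MR = MC with the rival's output taken as fixed:
    # a - b*q_rival - 2*b*q = c  =>  q = (a - c - b*q_rival) / (2*b)
    return max(0.0, (a - c - b * q_rival) / (2 * b))

q1, q2 = 0.0, 0.0                      # initial guesses
for step in range(50):                 # action-reaction until guesses are confirmed
    q1_next = best_response(q2)
    q2_next = best_response(q1_next)
    if abs(q1_next - q1) < 1e-9 and abs(q2_next - q2) < 1e-9:
        break
    q1, q2 = q1_next, q2_next

market_price = a - b * (q1 + q2)
print(f"q1 = {q1:.2f}, q2 = {q2:.2f}, price = {market_price:.2f}")
# Analytically each firm ends up producing (a - c)/(3b) = 33.33 and the price
# settles at 53.33: above marginal cost (20) but below the monopoly price (70).
```

The fixed point reached by the iteration is what the Nash-Cournot equilibrium identifies directly: a pair of outputs at which neither firm can improve its profit by changing its own output alone.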

b) Sweezy Kinked Demand Curve Solution      

 The Sweezy solution postulates that oligopolists face two subjectively determined demand curves that assume:

  • rivals will maintain their prices; and, 

  • rivals will exactly match any price change. 

A key assumption is that rivals will choose the alternative least favourable to the initiator. If the initiator raises price, rivals will not follow; if it lowers price, everyone follows. The result is that price will be relatively rigid in the face of moderate changes in cost or demand (P&B 4th Ed. Fig. 14.6; 5th Ed. Fig. 13.6; 7th Ed Fig. 15.2; R&L 13th Ed not displayed).
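A minimal sketch, with hypothetical numbers, of why the kink produces price rigidity: the two demand segments give the marginal revenue curve a vertical gap at the current output, so marginal cost can shift anywhere within that gap without changing the profit-maximizing price.

```python
# Hypothetical kinked demand around the current price P = 60 at output Q = 20.
# Price rises (Q < 20): rivals do NOT follow, demand is elastic:   P = 80 - Q
# Price cuts  (Q > 20): rivals DO follow,  demand is inelastic:    P = 120 - 3Q
KINK_Q = 20.0

def demand_price(q):
    return (80 - q) if q < KINK_Q else (120 - 3 * q)

def profit_max_quantity(mc):
    """Grid-search the output with the largest profit for a constant marginal cost."""
    grid = [q / 100 for q in range(1, 4001)]          # Q from 0.01 to 40.00
    return max(grid, key=lambda q: (demand_price(q) - mc) * q)

# The MR gap at the kink runs from 0 (inelastic segment) up to 40 (elastic
# segment), so any marginal cost in between leaves price and output unchanged.
for mc in (5, 15, 25, 35):
    q_star = profit_max_quantity(mc)
    print(f"MC = {mc:2d}  ->  Q* = {q_star:.2f}, P* = {demand_price(q_star):.2f}")
```

Every marginal cost in the loop yields the same answer, Q* = 20 and P* = 60, which is the price rigidity the Sweezy solution predicts.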


c) Non-Price Competition

If oligopolists do not compete on price then how do they compete?  There are at least six alternative patterns of industrial conduct:

(i) Collusion

Collusive behaviour among sellers as well as buyers, especially in oligopolistic and oligopsonistic industries, has historically been, and remains, common practice.  The small number of majors makes collusion relatively easy:

 “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.”  

Adam Smith, The Wealth of Nations, 1776

Collusion includes price fixing, which involves an agreement to buy or sell a good only at a fixed price and to manipulate supply and/or demand to maintain that price.  The result of such cartels approximates the outcome of monopoly.  It also includes agreements to geographically divide up markets.  Maintaining discipline among members of the cartel is, however, often difficult because of the incentive to cheat.  It should be noted that imperfect knowledge is involved: cartel members know prices are fixed but the public does not.
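A minimal sketch, with hypothetical numbers, of the incentive to cheat: splitting the monopoly output maximizes the cartel's joint profit, yet either member can raise its own profit by quietly exceeding its quota while the other abides by the agreement.

```python
# Hypothetical two-firm cartel facing market demand P = 120 - (q1 + q2),
# each with a constant marginal cost of 20 (illustrative numbers).
A, C = 120, 20

def profit(q_own, q_rival):
    market_price = A - (q_own + q_rival)
    return (market_price - C) * q_own

quota = 25.0                                   # half of the monopoly output of 50
print("Both abide by the quota: each earns", profit(quota, quota))    # 1250.0

# Firm 1 secretly cheats: its best response to the rival's quota is 37.5 units.
cheat = (A - C - quota) / 2
print("Cheater earns:          ", profit(cheat, quota))               # 1406.25
print("Loyal partner earns:    ", profit(quota, cheat))               # 937.5
```

Since every member faces the same temptation, the quota tends to unravel unless the cartel can detect and punish cheating, which is why discipline is so often short-lived.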

A recent example of such collusive behaviour occurred in 2012 with the rate-rigging scandal concerning Libor, the London Interbank Offered Rate.  Some fifteen blue-chip banks ‘guess’ their borrowing costs, the high and low estimates are thrown out, and the resulting rate is used as the benchmark against which to mark up riskier loans.  These firms have already been fined billions of dollars for manipulation of the Libor rate.  Such behaviour has recently included price fixing of integrated circuits, dynamic random access memory (DRAM) chips, liquid crystal display panels, lysine, citric acid, graphite electrodes, bulk vitamins and perfume, as well as airline fares in various countries around the world.  The result of collusive industrial behaviour has been the institutionalization of government anti-trust and anti-combines policies around the world, beginning in the United States with the Sherman Anti-Trust Act of 1890.

(ii) Game Playing

A profit maximizing price/quantity solution for oligopoly cannot be found within the Standard Model of Market Economics.  To treat the indeterminacy of oligopoly, economists, beginning with Cournot in the 1830s, have struggled for a solution.  The outcome, however, depends not only on the decisions of a given firm but also the reaction of its competitors.  To get around the problem Cournot suggested a firm should guess what competitors would do.  If it guessed correctly it would maximize profits; if not, then it would guess again and again until it guessed right.  Hardly an elegant solution!

What I call the dance of the oligopolists, with one step being matched by a counter-move, also led in the 1950s to the Nash-Cournot solution, which involves page upon page of mathematical equations (the Nash Program) that generate a solution if the underlying assumptions are correct.  If not, the search for a solution begins again.

The ‘action-reaction’ nature and the complexity of oligopoly with a variety of possible ‘profit maximizing’ outcomes led economics to ‘spin off’ a whole new sub-field called Game Theory.  For a brief history of Game Theory please see: An Outline of the History of Game Theory by Paul Walker
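A minimal illustration of the kind of problem Game Theory formalizes: a brute-force search for pure-strategy Nash equilibria in a small two-player payoff table.  The payoffs and the 'cooperate/defect' labels below are hypothetical, arranged in the familiar prisoner's-dilemma pattern.

```python
# Each cell maps (row strategy, column strategy) -> (row payoff, column payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(r, c):
    """Neither player can gain by deviating on its own."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in strategies)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in strategies)
    return row_ok and col_ok

equilibria = [(r, c) for r in strategies for c in strategies if is_nash(r, c)]
print("Pure-strategy Nash equilibria:", equilibria)   # [('defect', 'defect')]
```

Each player acting in its own interest lands the pair on mutual defection even though both would be better off cooperating, which is the same tension a cartel member faces when deciding whether to honour a quota.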

Today it is claimed that the video game industry is larger than the motion picture and music industries combined.  Apps for smart phones are being designed around game theory to encourage everything from weight loss and exercise to saving.  Modern corporations and the military actively engage in game playing, including role playing, to anticipate the outcomes of competition, bargaining and other actions.  Even the Arts are involved in that actors are often hired by businesses, governments, medicine, the military and other institutions to ‘role play’ in games to hone the skills of personnel.  For example, actors are used to prepare physicians for the range of possible reactions of a patient being told they have terminal cancer.  In many ways the contemporary ethos or zeitgeist is game playing.  This sentiment is summed up in the neologism ‘gamification’.  It has resulted from economic game theory developed in response to the indeterminacy of oligopoly.

(iii) Legal Tactics

Legal tactics include the tendency to litigate or use other legal means, rather than the market, to settle or foreclose disputes with consumers, suppliers and competitors, e.g., the EULA software agreement that limits liability for, say, downtime suffered by users.  Legal tactics embrace contract law, non-contractual liabilities or the law of torts, as well as intellectual property rights and property rights in general. 

Over the last few decades State-sponsored intellectual property rights or IPRs have increasingly become a tool of predatory competition as opposed to an incentive for innovation.  It has been claimed that major American corporations now spend more on the legal defense of IPRs than on research & development.  The many cases filed by Apple against Samsung in courts around the world are only the tip of the iceberg.  Examples include (NOT TESTABLE):

Copyright & Patent Abuse

Legislative Collusion

Patent Thickets & Wars


(iv) Pricing

Pricing strategy includes the choice between short- or long-run profit maximization as well as between single and tied goods, e.g., selling printers cheap but ink at a high price.  Strict price competition, however, is restricted to perfect competition.  Under imperfect competition firms are price makers rather than price takers.  Another example of pricing policy concerns Microsoft Windows 95 to XP.  Initially there was no online authentication required, only a product code on the disc cover.  This meant it was easy to pirate but led to an ever-widening group of users who became path dependent.  Once online authentication was introduced, owners of pirated copies were effectively compelled to buy Windows 2000 and subsequent editions to preserve their accumulated work.  Pirated use of previous versions also increased the 'network economies' enjoyed by Windows.
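A minimal arithmetic sketch, using entirely hypothetical prices, of the tied-goods strategy mentioned above: the printer can even be sold below cost because the profit stream comes from the ink purchased over the product's life.

```python
# Hypothetical 'printer and ink' (razor-and-blades) pricing.
printer_price, printer_cost = 60.0, 90.0      # printer deliberately sold below cost
ink_price, ink_cost = 35.0, 5.0               # each cartridge carries a large markup
cartridges_per_year, years_owned = 4, 3

printer_margin = printer_price - printer_cost
ink_margin = (ink_price - ink_cost) * cartridges_per_year * years_owned
print(f"Loss on the printer itself:        {printer_margin:8.2f}")
print(f"Profit on tied ink over ownership: {ink_margin:8.2f}")
print(f"Total profit per customer:         {printer_margin + ink_margin:8.2f}")
```

The strategy only pays off, of course, if buyers remain tied to the complementary good, which is why it is usually paired with proprietary designs.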

(v) Product Differentiation

Advertising is intended to persuade consumers – final or intermediary – to buy a particular brand.  Sometimes brands are technically similar but advertising can differentiate them in the minds of consumers, e.g., Tide vs. Cheer, effectively splitting off part of the industry demand curve as a firm's ‘owned’ share.  In the Standard Model of Market Economics only factual product information qualifies as a legitimate expense.  Attempting to ‘persuade’ or influence consumer taste is ‘allocatively inefficient’, betraying the principle of ‘consumer sovereignty’, i.e., that human wants, needs and desires are the roots of the economic process.

This mainstream view connects with consumer behaviour research, which calls this approach the ‘information processing’ model.  A consumer has a problem, a producer has the solution and the advertiser brings them together.  It is a calculatory process.  An alternative consumer behaviour school of thought, ‘hedonics’, argues that people buy products to fulfill fantasy, e.g., people do not buy a Rolls Royce for transportation but rather to fulfill a lifestyle self-image (Holbrook & Hirschman 1982; Holbrook 1987).  Thus product placement, i.e., placing a product in a socially desirable context, enhances sales (McCracken 1988).  In this regard the proximity of Broadway and especially off- and off-off-Broadway (the centre of live theatre) and Madison Ave. (the centre of the advertising world) in New York City is no coincidence.  Marketeers search the artistic imagination for the latest ‘cool thing’, ‘style’, ‘wave’, etc.  Such pattern recognition is embodied in the new professional ‘cool hunter’ (Gibson 2003).  In fact, peer-to-peer brand approval drives consumer business success in the age of the blog.

Take the case of advertising biotechnology.  The ‘advertising & marketing’ of GM products, specifically food vs. medicine, highlights these divergent approaches.  In reaching out to the final consumer, GM food advertising and marketing generally takes the form of well researched and well meaning ‘risk assessments’.  Such cost-benefit analyses are presented to a public that generally finds calculatory rationalism distasteful and the concept of probability unintelligible, e.g., everyone knows the odds of winning the lottery yet people keep on buying tickets.  It would appear that the chances of winning are over-rated.  By contrast, the even lower probability of losing the GM ‘cancer’ sweepstakes is similarly over-rated.  Attempts have been made to place this question within the context of known/unknown contingencies such as GM food safety within Kuhn’s ‘normal science’ (Khatchatourians 2002).  The labeling debate also illustrates the ‘information processing’ view.  At a minimum it would require all GM food products to be labeled as such.  At a maximum it would require that all GM food products be traceable back to the actual field from which they grew.

While attempts have been made to highlight the health and safety of GM foods, little has been done to demonstrate that they ‘taste’ better.  This may be the final hurdle, maybe not.  Observers have noted, however, that the GM agrifood industry has been rather inept in its ‘communication’ with the general public (Katz 2001).  For whatever reasons, to this point in the industry’s development, GM foods appear to feed nightmares, a.k.a. Frankenfood, not pleasant fantasies in the mind of the final consumer.

By contrast the ‘advertising & marketing’ of medical GM products and services has fed the fantasies of millions with the hope for cures to previously untreatable diseases and the extension of life itself. Failed experiments do not diminish these hopes. Even religious reservations appear more about tactics, e.g., the use of embryonic or adult stem cells, rather than the strategy of using stem cells to cure disease and extend life.

Given that intermediate rather than final demand currently feeds the biotechnology sector one must also consider what might be called ‘intermediate advertising & marketing’. Such activities are conducted by trade associations and lobbyists. The audience is not the consumer but rather decision makers in other industries and in government. Such associations exist at both the national, e.g., BIOTECanada, and regional level, e.g., Ag.West Bio Inc.

A firm may also indulge in both vertical and horizontal product differentiation.  Vertical differentiation involves designing a product to be sold at various levels of consumer income.  The classic example is Josiah Wedgwood in the late 18th century, who used the same molds to make dinner settings for the royal court, the aristocracy and the gentry by minimizing decoration as he moved down market.  Horizontal differentiation involves, for example, offering the same product but in different colours.

Another technique to achieve product differentiation in the minds of consumers is ‘design’.  Apple is the outstanding example today.  In effect design technology involves making the best-looking thing that works.  Picture going into a computer store and seeing two technically identical systems: one is ugly, the other attractive.  Which do you buy?  Economist Robert H. Frank’s economic guidebook unlocks everyday design enigmas.  An explanation of his findings is available in a YouTube lecture at Google HQ.

What is important to realize is that product differentiation through advertising or design requires an investment that a lean, mean perfectly competitive firm cannot afford.  It is excess or economic profit that allows a firm to make such investments.


(vi) Process/Product Innovation

With respect to process/product innovation I begin with a distinction between invention and innovation.  Invention involves creating something new; innovation involves successfully bringing it to market.  To paraphrase Edison: it is 1% inspiration (invention) and 99% perspiration (innovation). 

Process/product innovation forms part of what economist Joseph Alois Schumpeter called creative destruction or the:

… process of industrial mutation - if I may use that biological term - … that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new … Creative destruction is the essential fact about capitalism.  It is what capitalism consists in and what every capitalist concern has got to live in. (p.83)

… Every piece of business strategy acquires its true significance only against the background of … the perennial gale of creative destruction; it cannot be understood irrespective of it or, in fact, on the hypothesis that there is a perennial lull. (pp. 83-84)

From this observation, and other evidence, Schumpeter concluded that the Standard Model of Market Economics missed the point.  Competition was not about long run lowest average cost per unit output but rather about innovation and surviving the perennial gale of creative destruction. 


In 1962, economist Robert Solow published “Technical Progress, Capital Formation and Economic Growth” in the American Economic Review.  In it he presented what is known as the Solow Residual.  It begins with a symbolic equation for the production function: Y = f (K, L, T) which reads: national income (Y) is some function (f) of capital (K), labour (L) and technological change (T). 

Technological change in the Standard Model of Market Economics refers to the impact of new knowledge on the production function of a firm or nation.  The content and source of that knowledge is not a theoretical concern; what matters is its mathematical impact on the production function. 

Over the last hundred years, depending on the study, something like 25% of growth in national income is measurably attributable to changes in the quantity and quality of capital and labour while 75% is the residual Solow attributed to technological change.  Yet we have no idea why some things are invented and others not, nor why some things are successfully innovated and brought to market and others are not.  The Solow Residual is known in the profession as ‘the measure of our economic ignorance’.  It is why I became an economist.
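A minimal growth-accounting sketch of the Solow Residual, assuming a Cobb-Douglas form of the production function above and hypothetical growth rates chosen so the split mirrors the 25% / 75% figures cited: whatever output growth cannot be explained by the measured growth of capital and labour is attributed, residually, to technological change.

```python
# Hypothetical annual growth rates (in percent) and a capital share of one third,
# i.e., a Cobb-Douglas production function Y = T * K**alpha * L**(1 - alpha).
g_Y = 3.0        # growth of national income Y
g_K = 1.5        # growth of the capital stock K
g_L = 0.375      # growth of the labour force L
alpha = 1 / 3    # capital's share of national income

explained = alpha * g_K + (1 - alpha) * g_L     # contribution of capital and labour
residual = g_Y - explained                      # the Solow Residual, attributed to T

print(f"Explained by capital & labour: {explained:.2f} pts ({explained / g_Y:.0%} of growth)")
print(f"Solow residual (technology):   {residual:.2f} pts ({residual / g_Y:.0%} of growth)")
```

The residual is calculated, not observed, which is precisely why it serves as ‘the measure of our economic ignorance’.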

The effects of technological change in the orthodox model can be broken out into two dichotomous but complementary pairs of categories: disembodied & embodied, and endogenous & exogenous technological change.

Implicitly, disembodied technological change has dominated economic thought since the beginning of the discipline.  It refers to generalized improvements in methods and processes as well as enhancement of systemic or facilitating factors such as communications, energy, information and transportation networks.  Such change is disembodied in that it is assumed to spread out evenly across all existing plant and equipment in all industries and all sectors of the economy.  It is what Victorians would have called ‘Progress’.

Also implicitly, the concept of embodied technological change traces back to Adam Smith’s treatment of invention as the result of the division and specialization of labour (1776).  It refers to new knowledge as a primary ingredient in new or improved capital goods.  The concept was refined and extended by Marx and Engels (1848) in the 19th century and by Joseph Schumpeter in the 20th with his concept of creative destruction (1942).  No attempt was made, however, to measure it until the 1950s (Kaldor 1957; Johansen 1959).  And it was not until 1962 that Solow introduced the term ‘embodied technological change’ into the economic lexicon, and by default, disembodied change was recognized (Solow May 1962).

Formalization of embodied technological change arguably emerged out of ‘scientific’ research and development (R&D) during the Second World War followed by the post-war spread of organized industrial R&D.  This demonstrated that new scientific knowledge could be embodied in specific products and processes, e.g., the transistor in the transistor radio.  Conceptual development of embodied technological change has, however, “lost its momentum” (Romer 1996, 204).  Many theorists, according to Romer, have returned to disembodied technological change as the locomotive force of the economy, meaning: “Technological change causes economic growth” (Romer 1996, 204).

While embodied/disembodied refers to form, endogenous and exogenous refers to the source of technological change.  The source of exogenous technological change is outside the economic process.  New knowledge emerges, for example, in response to the curiosity of inventors and pursuit of ‘knowledge-for-knowledge-sake’.  Exogenous change, with respect to a firm or nation, falls from heaven like manna (Scherer 1971, 347).

By contrast, endogenous technological change emerges from the economic process itself - in response to profit and loss.  For Marx and Engels, all technological change, including that emanating from the natural sciences, is endogenous.  Purity of purpose such as ‘knowledge-for-knowledge-sake’, like religion, was so much opium for the masses, cloaking the inexorable teleological forces of capitalist economic development.  The term itself, however, was not introduced until 1966 (Lucas 1966), as was the related term ‘endogenous technical change’ (Shell 1966).

Endogenous change is evidenced by formal industrial research and development or R&D programs.  It therefore includes what are usually minor modifications and improvements – tinkering - to existing capital plant and products called ‘development’ (Rosenberg & Steinmueller 1988, 230).  In this way industry continues the late medieval craft tradition of experimentation.  R&D varies significantly between firms and industries.  At one extreme, a change may be significant for an individual firm but trivial to the economy as a whole.  On the other hand, ‘enabling technologies’ such as computers or biotechnology may radically transform both the growth path and the potential of an entire economy.  How to sum up the impact on the economy of the endogenous activities of individual firms remains, however, problematic.

With respect to the Nation-State, endogenous and exogenous technological change have a different meaning.  They refer to whether the source is internal, i.e., produced by domestic private or public enterprise, or external to the nation, i.e., originating with foreign sources. 

Furthermore, in the 1980s a ‘New Economic Geography’ arose inspired by the work of Nobel Prize winning economist Paul Krugman (Martin & Sunley 1996).  A central feature is the ‘industrial cluster’ such as ‘Silicon Valley’.  While economies of scale and scope are available within a single firm, external economies are available only outside.  High tech firms operating in the same sector benefit from physical proximity.  Such clusters, in turn, crystallize around the University as a nucleating agent or prime attractor.  The success of Government sponsored ‘clusters’, however, remains problematic (Economist Oct. 11, 2007).

A key industrial example of the role of the University as an exogenous source of technological change is biotechnology.  With the decoding of DNA a new enabling or transformative technology was unleashed. Its leaders are generally University-based (Zucker et al 1998, 293).  It is they who take new knowledge and commercialize it. It is they who attract the best students.  Often they establish new firms within an existing cluster or start a new cluster with the assistance of the University which shares in patent royalties.  Many new biotech firms are in fact founded with the intent of selling them to large established firms (Arora & Gambardella 1990, 362).

