From Stigum - Econometrics and the Philosophy of Economics, pp. 108-9

Haug: What about exchange-traded catastrophe derivatives – are these supplements or competitors for the insurance industry?

Aase: Both. The direct reason for creating PCS options or catastrophe (CAT) products seems to stem from a belief that the capitalization of the international financial markets may better absorb natural catastrophe risk than the reinsurance markets. For example, the daily fluctuation in the US financial markets – about 70 basis points or $133 billion on average – exceeds the maximum possible insurance loss that might arise from an earthquake catastrophe, according to simulation experiments.

The more fundamental reason, as I see it, is that catastrophe derivatives and other financial instruments are created in order to improve welfare at large, i.e. in order to get closer to the ideal situation with ‘Arrow-Debreu’ securities, in which a competitive equilibrium is Pareto efficient.

However, adding new instruments will, of course, not entirely achieve this ideal goal, and we know (from, e.g. O. Hart’s work on this subject) that merely adding new securities that do not complete the market may actually lower welfare.

The emergence of markets like these can be thought of in terms of economic efficiency: if agents think that such instruments will improve economic efficiency, they will somehow be created.

“Knut Aase on Catastrophes and Financial Economics,” in Haug, E. (Ed.). (2007). Derivatives: Models on Models. New York: Wiley, p. 233
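To make the ‘Arrow-Debreu’ benchmark in the passage above concrete, here is a minimal sketch (an editorial gloss, not part of the interview). With S mutually exclusive states of the world, the Arrow security for state s pays one unit if state s obtains and nothing otherwise:

\[
e_s = (0,\dots,0,\underbrace{1}_{\text{state } s},0,\dots,0) \in \mathbb{R}^S,
\qquad
x = \sum_{s=1}^{S} x_s\, e_s \quad \text{for any payoff } x \in \mathbb{R}^S .
\]

If all S Arrow securities trade, every state-contingent payoff can be replicated, markets are complete, and the first welfare theorem applies: the competitive equilibrium is Pareto efficient. A catastrophe derivative adds one more payoff vector to the span of traded assets; unless that span reaches all of \(\mathbb{R}^S\), markets remain incomplete, and, as Hart’s examples show, welfare at the new equilibrium can be lower than before.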

He also notes (p. 237):

There is also a danger that in the smoke from all the mathematical sophistication, the researcher loses sight of some of the more fundamental economic principles. As an example, some recent analyses of the selection problem behind insider trading have ignored the fundamental principle that prices must reflect, at least to some extent, the information possessed by the various agents involved. When there is asymmetric information, some sort of game theory cannot be dispensed with.

Other examples can be taken from insurance, where, e.g. actuaries have worked with so-called ‘premium principles’ for more than 100 years now, principles that still do not reflect basic economic principles of pricing risk.
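For readers unfamiliar with the actuarial ‘premium principles’ Aase refers to, two standard examples are the expected value principle and the variance principle (stated here for illustration; they are not quoted from the interview):

\[
\pi(X) = (1+\theta)\,\mathbb{E}[X],
\qquad
\pi(X) = \mathbb{E}[X] + \alpha\,\mathrm{Var}(X),
\qquad \theta, \alpha > 0,
\]

where X is the random loss to be insured. Both rules load the premium by a formula applied to the loss distribution alone; neither involves preferences, the aggregate supply of risk, or market clearing, which is the sense in which such principles do not reflect the economic pricing of risk (where the premium would instead emerge from an equilibrium, e.g. as an expectation under a risk-adjusted probability measure).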

“There exist around fifty major commodity exchanges that trade in more than ninety commodities. Trading on exchanges is, however, concentrated. In 2009, the top five exchanges accounted for 86% of all contracts traded globally (TheCityUK 2011). Soft commodities are traded around the world and dominate exchange trading in Asia and Latin America. Metals are predominantly traded in London, New York, Chicago and Shanghai while energy-related contracts are predominantly traded in New York, London, Tokyo and the Middle East (TheCityUK 2011). In terms of futures contracts traded in 2009, China and the UK accounted for three out of the top ten exchanges while the United States accounted for two and Japan and India for one. China and India have gained in importance in recent years with their emergence as significant commodity consumers and producers. London, New York and Chicago remain, however, the main centers of commodity futures trading.”
“Indeed, focusing on the interface between arbitrageurs and noise traders, De Long et al. (1990) analyse the process by which excess volatility is generated by noise traders. They suggest that the unpredictability of noise traders’ beliefs and expectations, which can be erroneous in the light of fundamentals, could create a ‘noise trader risk’ – a risk in the asset prices, which deters rational arbitrageurs from aggressively betting against them. Hence, ‘arbitrage does not eliminate the effects of noise because noise itself creates risk’ (De Long et al., 1990: 705), since arbitrageurs are likely to be risk-averse, acting with a short time-horizon. As a result, ‘prices can diverge significantly from fundamental values even in the absence of fundamental risk’ (De Long et al., 1990: 705). Moreover, bearing a disproportionate amount of risk enables noise traders to earn a higher return than rational investors who engage in arbitrage against noise. Clearly, their model challenges the standard proposition made by Friedman (1953) that irrational noise traders are always counteracted by rational arbitrageurs who could drive asset prices close to fundamental values.”
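For reference, the equilibrium pricing equation commonly cited from De Long et al. (1990), as it is usually reported – with \(\mu\) the share of noise traders, \(\rho_t \sim N(\rho^*, \sigma_\rho^2)\) their period-t misperception, \(\gamma\) the coefficient of absolute risk aversion and \(r\) the riskless rate – is

\[
p_t \;=\; 1 \;+\; \frac{\mu(\rho_t - \rho^*)}{1+r} \;+\; \frac{\mu \rho^*}{r} \;-\; \frac{2\gamma \mu^2 \sigma_\rho^2}{r(1+r)^2}.
\]

The second and third terms capture the effect of current and average misperceptions; the final term is the discount sophisticated investors demand for bearing noise trader risk, which is precisely why prices can diverge from the fundamental value of 1 even when the asset carries no fundamental risk.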

Sharia-compliant financial inclusion represents the confluence of two rapidly growing sectors: microfinance and Islamic finance. With an estimated 650 million Muslims living on less than $2 a day (Obaidullah and Tariqullah 2008), finding sustainable Islamic models could be the key to providing financial access to millions of Muslim poor who strive to avoid financial products that do not comply with Sharia (Islamic law). […] [D]espite a four-fold increase in recent years in the number of poor clients using Sharia-compliant products (estimated at 1.28 million) and a doubling in the number of providers, the nascent sector continues to struggle to find sustainable business models with a broad array of products that can meet the diverse financial needs of religiously observant poor Muslims.
Financial tenets enshrined in Sharia challenge the microfinance sector’s ability to sustainably provide Sharia-compliant financial products at scale. One such tenet is the widely known prohibition on interest, which makes traditional microloan models technically impossible. A lesser known tenet is the encouragement of wealth creation through equity participation in business activities, which requires risk-sharing by financial service providers that does not guarantee returns.
[…] Based on this survey, the results of which are highlighted herein, we conclude that despite impressive increases in the number of Islamic microfinance providers and clients, the sector is still largely dominated by a few providers in a few countries that rely primarily on only two products.

— El-Zoghbi, M., & Tarazi, M. (March 2013). Trends in Sharia-Compliant Financial Inclusion. Washington, DC: Consultative Group to Assist the Poor (CGAP). Abstract.
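A stylized numerical contrast (an editorial illustration, not from the CGAP note) shows what risk-sharing without guaranteed returns means in practice. A conventional microloan of principal P at rate r obliges the borrower to repay \(P(1+r)\) regardless of how the business fares. Under an equity-style (musharaka) contract in which the provider contributes a share s of the capital and is entitled to an agreed share \(\alpha\) of profit, the provider’s return depends on the realized profit \(\pi\):

\[
\text{provider's return} =
\begin{cases}
\alpha\,\pi, & \pi \ge 0,\\
s\,\pi, & \pi < 0,
\end{cases}
\]

since, under the usual musharaka convention, profits are shared by the agreed ratio while losses are borne in proportion to capital contributed. With P = 100, s = 0.5 and \(\alpha = 0.4\), a profit of 30 yields 12 to the provider, while a loss of 20 costs it 10; nothing is guaranteed, which is exactly the feature that conventional microloan models lack.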
“Karl Pearson, the first Professor of Statistics in Britain, wrote two quite different histories of correlation and regression analysis separated by some twenty-five years. The first in 1895 attributed correlation to the French mathematician Auguste Bravais, a mid-19th century error theorist, interested in assessing the accuracy of astronomical measurements. For any given point in space, a separate set of measurements could be made for both the x and y coordinates. The overall pattern of error is then given by multiplying together the two independent laws of error associated with each coordinate. Bravais, however, was interested in the more difficult case where the same set of measurements was used to calculate both x and y together. To do that required him to calculate a joint law of error, the equation of which is remarkably similar to Galton’s later equation for correlation. […] By 1920, however, Pearson…had changed his mind, arguing that it was Galton who was the true originator of correlation because the problem to which Bravais applied his work was completely different from the one Galton was trying to solve. Bravais devised his equation…in order to get rid of the amount of statistical variation of error around the true values of the variables, whereas for Galton it was precisely the statistical variation—the error—that needed to be kept. Once explained, the variation could be made the source of intellectual progress: specifically, for Galton’s purposes [as a eugenicist], geniuses could be bred. […] Galton is interested primarily in the deviations around the mean, and not the mean itself. As Galton himself says, “The primary objects of the Gaussian Law of Error were exactly opposed, in one sense, to those to which I applied them. They were to get rid of, or to provide a just allowance for errors. But those errors or deviations were the very things I wanted to preserve and know about.” (1908, p. 305, note 2).”
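To make the parallel concrete (an editorial gloss, not Pearson’s wording), both Bravais’s joint law of error and Galton’s regression can be read off the bivariate normal density with correlation \(\rho\). Writing x and y as deviations from their means,

\[
f(x,y) \propto \exp\!\left[-\frac{1}{2(1-\rho^2)}\left(\frac{x^2}{\sigma_x^2} - \frac{2\rho\,x y}{\sigma_x \sigma_y} + \frac{y^2}{\sigma_y^2}\right)\right],
\qquad
\mathbb{E}[Y \mid X = x] = \rho\,\frac{\sigma_y}{\sigma_x}\,x .
\]

For Bravais the cross term involving \(\rho\) described a pattern of error to be allowed for and eliminated; for Galton the slope \(\rho\,\sigma_y/\sigma_x\) of the conditional mean was itself the object of study – the ‘variation’ he wanted to preserve and know about.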
“Cartwright (1999, ch. 7) argues that probabilities are not there for the taking, but are characteristics of quite particular set-ups (e.g., of roulette tables or particular configurations of unstable atoms). Only in such designed set-ups do objects display well behaved probabilities. The role of economic theory (and of quantum-mechanical theory) is to provide the conditions that articulate such a well-defined set-up: a nomological (or law-generating) machine.”

 Meet the Dropleton—a “Quantum Droplet” That Acts Like a Liquid 
Physicists have created a new composite “quasiparticle” that could help probe the quantum mechanics of many particles working together
Part particle, part liquid, a newly discovered “quasiparticle” has been dubbed a quantum droplet, or a dropleton. The dropleton is a collection of electrons and “holes” (places where electrons are missing) inside a semiconductor, and it has handy properties for studying quantum mechanics.
“In its attempts to attain its many objectives, economic theory was helped by greater abstraction – preference theory supplies an example again. Significant research efforts were expended on solutions of the integrability problem. That problem can be bypassed altogether, and greater simplicity can be achieved by moving from the commodity space to the more abstract space of the pairs of its points. In this space, whose dimension is twice the number of commodities, the pairs of commodity points indifferent to each other are now assumed to form a smooth (hyper)surface. As another instance of the generality permitted by abstraction, consider the notion of a commodity, which can be treated as a primitive concept, with an unspecified interpretation, in an axiomatic economic theory. A newly discovered interpretation can then increase considerably the range of applicability of the theory without requiring any change in its structure. Thus, by making the transfer of a good or service between two agents contingent on the state of the world that will obtain, Arrow (1953) made possible the immediate extension of the economic theory of certainty to an economic theory of uncertainty by a simple reinterpretation of the concept of a commodity. The theory of financial markets has been influenced by that view of uncertainty, and their practice has not been unaffected. Finally, take the problem of existence of a general equilibrium, once considered to be one of the most abstract questions of economic theory. The solutions that were proposed in the early 1950’s paved the way for the algorithms for the computation of equilibria of Herbert E. Scarf (1973) and for several of the developments of applied general equilibrium analysis (Scarf and John B. Shoven, 1984).”
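Arrow’s reinterpretation can be stated in one line (a sketch consistent with the passage, not Debreu’s own notation). If there are L physical goods and S mutually exclusive states of the world, a contingent commodity is a pair (l, s) – delivery of good l if and only if state s obtains – so the commodity space simply grows from \(\mathbb{R}^L\) to

\[
\mathbb{R}^{L \times S} \cong \mathbb{R}^{LS},
\qquad
x = (x_{ls})_{\,l = 1,\dots,L;\; s = 1,\dots,S},
\]

and every result proved for the economy under certainty applies verbatim to the larger space, now interpreted as a theory of markets under uncertainty.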

The Mathematization of Economic Theory

A global view of an economy that wants to take into account the large number of its commodities, the equally large number of its prices, the multitude of its agents, and their interactions requires a mathematical model. Economists have successfully constructed such a model because the central concept of the quantity of a commodity has a natural linear structure. The action of an agent can then be described by listing the quantity of its input or output for each commodity (opposite signs differentiating inputs from outputs). That list can be treated as the list of the coordinates of a point in the linear commodity space. Similarly, the price system of an economy can be treated as a point in the linear price space, dual of the commodity space, whose dimension is also the number of commodities.
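In symbols (an editorial restatement consistent with the passage): with \(\ell\) commodities, an agent’s action is a vector \(z = (z_1, \dots, z_\ell)\) in the commodity space \(\mathbb{R}^\ell\), inputs and outputs distinguished by sign, a price system is a vector \(p\) in the dual space, and the value of the action is the inner product

\[
p \cdot z = \sum_{h=1}^{\ell} p_h z_h ,
\]

the linear structure on which the mathematical developments described next rely.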

In those two linear spaces, the stage was set for sometimes dazzling mathematical developments that began with the elements of differential calculus and linear algebra and that gradually called on an ever broader array of powerful techniques and fundamental results offered by mathematics. Thus, the three roles of prices given earlier as instances were illuminated by basic mathematical theorems: the first, the achievement of an efficient use of resources, by results of convex analysis; the second, the equalization of supply and demand for commodities, by results of fixed point theory; the third, the prevention of the formation of destabilizing coalitions, by results of the theory of integration and of nonstandard analysis. In those three cases, the lag between the date of a mathematical discovery and the date of its application to economic theory decreased over time. It was notably short for nonstandard analysis, founded at the beginning of the 1960’s by Abraham Robinson and applied to economics by Donald Brown and Abraham Robinson (1972).
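The second of the three correspondences can be written out as an illustration (a standard statement, not quoted from Debreu). If aggregate excess demand \(z(\cdot)\) is a continuous function on the price simplex \(\Delta = \{p \in \mathbb{R}^\ell_+ : \sum_h p_h = 1\}\) satisfying Walras’ law \(p \cdot z(p) = 0\), then there exists a price system \(p^*\) with

\[
z(p^*) \le 0, \qquad p_h^* > 0 \;\Rightarrow\; z_h(p^*) = 0,
\]

i.e. supply equals demand on every market with a positive price. The existence of such a \(p^*\) is obtained by applying Brouwer’s (or, for demand correspondences, Kakutani’s) fixed-point theorem to a suitable map of \(\Delta\) into itself.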

The last, and most recently developed, of those three instances can be chosen, as can either of the other two, for a more detailed illustration. Competition is perfect when every agent’s influence on the outcome of economic activity is insignificant. The influence of their totality on that outcome is, however, significant. It is to solve the problem of aggregating negligible quantities so as to obtain a nonnegligible sum that integration was invented. In this perspective, the application of integration theory to the study of economic competition is entirely natural. That application requires the set of agents to be large – larger than the set of integers. Treating the set of the agents of an economy as the rich collection of the points of an interval of real numbers has long been familiar in descriptions of economic data. It became familiar in economic theory as well after Robert J. Aumann (1964) showed that, in a pure exchange economy composed of insignificant agents, the formation of destabilizing coalitions is prevented if and only if all those agents base their decisions on a price system.
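Aumann’s theorem can be stated compactly (a standard formulation, paraphrased). Model the agents as the atomless measure space \(([0,1], \lambda)\), with \(\lambda\) Lebesgue measure, so that an allocation is an integrable map \(x : [0,1] \to \mathbb{R}^\ell_+\) with \(\int x \, d\lambda = \int e \, d\lambda\), where \(e(t)\) is agent t’s endowment. Then

\[
\text{core of the economy} \;=\; \text{set of its Walrasian (competitive) equilibrium allocations},
\]

i.e. an allocation is immune to improvement by any coalition of positive measure if and only if it is supported by a price system at which every agent is choosing optimally.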

The concept of a convex set (i.e., a set containing the segment connecting any two of its points) had repeatedly been placed at the center of economic theory before 1964. It appeared in a new light with the introduction of integration theory in the study of economic competition: if one associates with every agent of an economy an arbitrary set in the commodity space and if one averages those individual sets over a collection of insignificant agents, then the resulting set is necessarily convex. But explanations of the three functions of prices taken as examples can be made to rest on the convexity of sets derived by that averaging process. Convexity in the commodity space obtained by aggregation over a collection of insignificant agents is an insight that economic theory owes in its revealing clarity to integration theory.
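The convexifying effect of aggregation described here is the Richter–Lyapunov result (a standard statement, added for reference). If \((A, \mathcal{A}, \mu)\) is an atomless measure space of agents and \(F\) assigns to each agent \(t\) a set \(F(t) \subset \mathbb{R}^\ell\), then the aggregate set

\[
\int_A F \, d\mu \;=\; \Bigl\{ \int_A f \, d\mu \;:\; f \text{ an integrable selection with } f(t) \in F(t) \text{ for almost every } t \Bigr\}
\]

is convex, whether or not any of the individual sets \(F(t)\) are.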

Debreu, G. (1991). "The Mathematization of Economic Theory." American Economic Review 81(1): 1–7.