E-encyclopedia of banking, stock exchange and finance

Selected letter: D


  • Dark pools

    Generally, trading venues in which transparency is poor. Within the MiFID framework, the operation (by regulated markets, MTFs or investment firms) of trading venues with the benefit of waivers granted by national regulators from the pre-trade transparency obligations of MiFID.

    ©2012 Editor: Camera dei Lords


  • Dark trading

    Generally, trading in venues in which transparency is poor (including OTC).

    ©2012 Editor: Camera dei Lords


  • Day-Ahead Market

    The Day-Ahead Market (DAM) hosts most of the electricity sale and purchase transactions.
    - In the DAM, hourly energy blocks are traded for the next day.
    - Participants submit offers/bids where they specify the quantity and the minimum/maximum price at which they are willing to sell/purchase.
    - The DAM sitting opens at 9 a.m. of the ninth day before the day of delivery and closes at 9 a.m. of the day before the day of delivery. The results of the DAM are made known by 11:30 a.m. of the day before the day of delivery.
    - Offers/bids are accepted under the economic merit order criterion and taking into account transmission capacity limits between zones. Therefore, the DAM is an auction market and not a continuous trading market.
    - All the supply offers and the demand bids, pertaining both to pumping units and consuming units belonging to neighbouring countries’ virtual zones, that are accepted in the DAM are valued at the marginal clearing price of the zone they belong to. This price is determined, for each hour, by the intersection of the demand and supply curves and it is differentiated from zone to zone when transmission capacity limits are saturated.
    - The accepted demand bids pertaining to consuming units belonging to Italian geographical zones are valued at the “Prezzo Unico Nazionale” (PUN … national single price); this price is equal to the average of the prices of geographical zones, weighted for the quantities purchased in these zones.
    - Gestore dei Mercati Energetici S.p.A. (GME) acts as a central counterparty.
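    The PUN computation described above (the average of zonal prices weighted by the quantities purchased in each zone) can be sketched as follows; the zone names, prices and quantities are purely hypothetical:

```python
# Hypothetical zonal prices (EUR/MWh) and purchased quantities (MWh)
# for a single hour; zone names and figures are illustrative only.
zonal_prices = {"North": 60.0, "Centre-North": 62.0, "South": 55.0}
purchased_mwh = {"North": 10_000, "Centre-North": 4_000, "South": 6_000}

def pun(prices, quantities):
    """National single price (PUN): average of the zonal prices,
    weighted by the quantities purchased in each zone."""
    total = sum(quantities.values())
    return sum(prices[z] * quantities[z] for z in prices) / total

print(round(pun(zonal_prices, purchased_mwh), 2))  # 58.9
```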
    © 2009 ASSONEBB


  • Dealing on own account

    Directive 2004/39/EC-MiFID defines “dealing on own account” as trading “[…] against proprietary capital resulting in the conclusion of transactions in one or more financial instruments” (article 4, paragraph 1, number 6). Hence article 1, paragraph 5-bis, of the Italian Consolidated Law (TUF): “Dealing on own account is defined as the buying and selling of financial instruments, against proprietary capital and in relation to client orders, as well as market-making”.
    Editor: Maria Giovanna CERINI
    © 2010 ASSONEBB


  • Depositary receipts

    Receipts issued by a legal owner of securities specifying that a depositary holds the securities as trustee for the depositary receipt holder. Often used as a technique to overcome the costs and regulatory barriers associated with foreign investment.

    ©2012 Editor: Camera dei Lords


  • Derivatives

    Derivatives are contracts whose price "derives" from the price of an underlying financial security, commodity or other variable. They are bilateral contracts in which, at expiration, the buyer either receives the underlying asset from the seller or earns (pays) the difference in prices. The main types are futures, options, swaps and forwards. They can be exchange-traded (on a stock exchange) or traded Over The Counter (OTC).
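    The cash-settled case mentioned above, in which the buyer "earns (pays) the difference in prices", can be sketched for a forward contract; the numbers are illustrative only:

```python
def forward_payoff_long(spot_at_expiry, delivery_price):
    """Payoff to the buyer (long position) of a forward contract:
    the value of the underlying at expiry minus the agreed delivery price."""
    return spot_at_expiry - delivery_price

# If the underlying ends above the delivery price, the buyer gains;
# otherwise the buyer pays the difference to the seller.
print(forward_payoff_long(105.0, 100.0))  # buyer earns 5.0
print(forward_payoff_long(95.0, 100.0))   # buyer pays 5.0
```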
    Hull J. (2008) Options, futures and other derivatives, Prentice Hall.
    Oldani C. (2008) Governing Global Derivatives, Ashgate, London.
    Editor: Chiara OLDANI
    © 2010 ASSONEBB


  • Delegated monitoring

    Let us consider an economy in which n identical firms seek to finance investment projects, each requiring one unit of capital, whose output is stochastic and observable free of charge only by the firm itself. In this context of ex-post asymmetric information, investors can verify the output only if they pay the costs associated with monitoring (the cost of a single monitoring transaction being K). All investors are assumed to be small relative to the project in which they invest, each having 1/m units of capital; hence m investors are needed to finance a project. In the case of direct financing, each investor monitors the firm, for a total cost of nmK.
    In the case of financing through a financial intermediary that centralises resources and investments on behalf of the individual financing entities, the cost of monitoring will be nK.
    But who monitors the intermediary? Controls exercised by individual investors would clearly be inefficient; it is therefore preferable that, regardless of the revenue realised by the intermediary, the intermediary offer a standard debt contract in which each depositor receives an amount R/m for a deposit equal to 1/m. The delegation of monitoring tasks to an intermediary is more efficient when the total costs are lower than those incurred in the case of direct lending, hence when nK + C < nmK (C being the cost of delegating monitoring tasks to the intermediary). This holds when investors are "small" relative to the projects (m > 1), the investment is profitable in terms of expected revenue (Ey > K + R), and monitoring is needed because of asymmetric information. The intermediation mechanism is more efficient than direct funding when the number n of projects to be funded is sufficiently large.
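    The cost comparison above (nmK with direct financing versus nK + C with a delegated monitor) can be illustrated with a small sketch; the parameter values are assumptions chosen only for illustration:

```python
def direct_cost(n, m, K):
    """Total monitoring cost with direct financing:
    each of the m investors per project monitors each of the n firms."""
    return n * m * K

def delegated_cost(n, K, C):
    """Total cost with an intermediary: one monitoring per firm (n*K)
    plus the cost C of delegating the monitoring tasks."""
    return n * K + C

# Illustrative parameters (not from the source): 100 projects,
# 5 investors per project, unit monitoring cost 1, delegation cost 50.
n, m, K, C = 100, 5, 1.0, 50.0
print(direct_cost(n, m, K))     # 500.0
print(delegated_cost(n, K, C))  # 150.0 -> delegation is cheaper here
```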
    Cucinotta G., Nieri L. (2005) "Le assicurazioni, la gestione del rischio in una economia moderna", Il Mulino editore.
    Desiderio Luigi, Molle Giacomo. (2005) "Manuale di diritto bancario e dell'intermediazione finanziaria", Giuffrè editore.
    Di Giorgio G. (2004) "Lezioni di economia monetaria", Cedam editore.
    Guida Roberto. (2004) "La Bancassicurazione: modelli e tendenze del rapporto tra banche e assicurazioni", Cedam editore.
    Locatelli Rossella, Morpurgo Cristina, Zanette Alfeo. (2002) "L'integrazione tra banche e compagnie di assicurazione e il modello dei conglomerati finanziari in Europa", Einaudi editore.
    Patroni Griffi and Ricolfi. (1997) "La distribuzione bancaria di prodotti assicurativi in banche ed assicurazioni fra cooperazione e concorrenza", Giuffrè editore.
    Quagliariello Mario. (2001) "I rapporti tra banche e assicurazioni in Italia e in Europa: aspetti empirici e problemi di regolamentazione", Luiss University Press.
    Quagliariello Mario. (2003) "La bancassicurazione: profili operativi e scelte regolamentari", Luiss University Press.
    Ruozi Roberto. (2004) "Economia e gestione della banca", Egea editrice.
    Editor: Alberto Maria SORRENTINO
    © 2009 ASSONEBB


    Taxation in Western economies reached a high percentage of disposable income and consumption, but decreased substantially following the economic downturn of 2007-2009 (OECD Tax Data); the reasons are structural and behavioural. The presence of substantial asymmetric information lies at the heart of the traditional economic analysis of taxation: the Government is not able to verify the economic outcomes of taxpayers, who may misrepresent their incomes, consumption or wealth, or attempt to avoid or even evade paying taxes. Estimated tax evasion has reached very high levels in some Western countries, such as Italy (OECD Fighting Tax Evasion). The taxpayer’s willingness to tolerate risk, the size of the penalties if caught evading, and the tax enforcement technology determine the extent of tax avoidance.

    In an ideal world, in the presence of perfect information, tax avoidance and evasion would not exist. Governments “would just know how much individuals (being households or firms) earn, save, and consume. If markets were perfect as well (no externalities, no monopoly, complete contracts, symmetric information, complete markets, and zero transaction costs), the second fundamental theorem of welfare economics would apply: governments could completely separate issues of allocation and distribution, since any efficient market outcome could be achieved with suitable redistributions using individualized lump-sum taxes and transfers” (IMF 2017 p. 25).
    But the world is far from ideal, and information constraints determine a government’s tax enforcement capacity. “Governments use costly verification of economic outcomes (tax audits) and penalties for non-compliance, to alleviate information problems in verifying economic outcomes” (IMF 2017, p. 25). Asymmetric information leads to market failure, and to the efficiency-equity trade-off.
    The digitalization of economies and, above all, of the information on economic activities can improve the ability of Governments to manage taxation, because it improves the tax enforcement technology available to them. “Better tax enforcement allows governments to raise the same revenue with lower taxes (more efficiency) or to raise more tax revenue with the same taxes” (IMF 2017, p. 25). In digitalised systems, Governments can better match information on the true economic outcomes of taxpayers, whether households or firms. Tax avoidance and evasion should then disappear.
    Digitalization influences both the public and the private sector; in digitalised countries, Governments could better target taxes and improve income redistribution, which would also allow lower tax rates. However, digitalization will never eliminate the equity-efficiency trade-off, since relevant variables, such as work effort and passion, remain unobservable.
    The recent experience of Western countries has not been entirely consistent with traditional public finance theory, since digitalisation has increased tax avoidance and evasion, and has thereby contributed to rising inequality in income and wealth, both of which tend to push taxes up (G7 Finance Ministers' Communiqué, Bari 2017).
    The digitalisation of economies can increase the quantity of information on economic choices that are directly linked to taxes. Western economies should raise most of their public revenues with consumption taxes; these taxes are mostly managed at the firm level and exhibit high evasion. Consumption made with digital payments can be directly observed, but this can translate into higher revenues and greater efficiency only if cash is abolished.
    Taxation of wealth and capital income can also benefit from digitalised technologies that provide information and link pension funds, financial markets, bequests, real estate and home-ownership. Data privacy can be a substantial obstacle in certain legal systems, while regulatory arbitrage can be a source of inefficiency. The taxation of corporate income is the main challenge for regulators, since it is the most distortionary tax of all. Firms move where taxes are low and easy to pay, and the residence principle has further favoured mobility, even within economic unions such as the European Union and the United States of America.
    The flow of information and its effective management should be pursued by global authorities; the G20 is working with member countries to establish a global tax system for firms and markets, to reduce inequality and compensate for certain losses in the labour market. In fact, global businesses have halved their tax payments over the last 20 years, reducing the ability of Governments to alleviate poverty and unemployment.
    Tax evasion can be reduced by Taxation Information Exchange Agreements, where countries share information on individuals’ and firms’ financial accounts in certain financial institutions. “Many countries participating in the Organisation for Economic Co-operation and Development (OECD) Convention on Mutual Administrative Assistance in Tax Matters have reached bilateral agreements to share information on request for all types of investment income (including interest, dividends, income from certain insurance contracts, and other similar types of income), but also account balances and proceeds from sales of financial assets. Financial institutions include banks, custodians, brokers, certain collective investment vehicles, and certain insurance companies. Digitalization can help further to build and link international registers for asset ownership (shares, property, pensions) and capital incomes (interest, dividends, capital gains, property values, pension accrual)” (IMF 2017, p. 31).
    With the full exchange of digital information, taxes on corporate profits could be eliminated, the taxation on labour could significantly diminish, and consumption would take the lead among Governments' revenues.

    G7 Finance Ministers and Central Banks’ Governors Meeting 2017. Communiqué. Bari, Italy, May 12-13.
    International Monetary Fund (IMF) 2017. Digital revolutions in public finance. Edited by Sanjeev Gupta, Washington, DC, ISBN 9781484315224.
    Organisation for Economic Cooperation and Development (OECD) 2017. Overview on Fighting Tax Evasion. Paris.
    Organisation for Economic Cooperation and Development (OECD) 2017. Taxation Information Exchange Agreements. Paris.


  • Dividends

    Dividends are distributions of earnings made by a corporation to its shareholders. They represent a return on the investment in equity securities, paid in the form of cash or of additional shares of stock (stock dividends). In the latter case, the value of each share is reduced because the number of shares outstanding increases. Decisions on the destination of profits are taken by the board of directors and constitute the so-called dividend policy of the company. In particular, dividends are not liabilities of the corporation, which can also decide to retain the profits. However, once a dividend has been declared, it becomes a legal obligation of the corporation that cannot easily be rescinded. The procedure for dividend payments follows this chronology:
    1. Declaration date: the board of directors declares a payment of dividends.
    2. Date of record: the dividends are distributable to all individuals registered as shareholders as of this date, on the basis of the notification of purchase received by the company before the specified date.
    3. Ex-dividend date (also called ex-date): shares trade ex-dividend from a set number of days, commonly two or three business days, before the record date. Investors who purchase the company’s stock on or after this date will not receive the declared dividend; instead, the seller gets it.
    4. Date of payment: dividends are paid to the shareholders of record.
    The amount of dividends can be declared as dividend per share (e.g. $1 per share), as dividend yield (a percentage of the market price) or as dividend payout (a percentage of earnings per share).
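    A minimal sketch of the last two ways of expressing the dividend amount; the dividend, share price and earnings figures are hypothetical:

```python
def dividend_yield(dps, price):
    """Dividend per share expressed as a percentage of the market price."""
    return dps / price * 100

def payout_ratio(dps, eps):
    """Dividend per share expressed as a percentage of earnings per share."""
    return dps / eps * 100

# Hypothetical firm: $1 dividend per share, $25 share price, $4 EPS.
print(dividend_yield(1.0, 25.0))  # 4.0  (percent)
print(payout_ratio(1.0, 4.0))     # 25.0 (percent)
```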

    Fabozzi F., Modigliani F., Jones F. (2010), Foundation of Financial Markets and Institutions, Pearson International Edition.
    Ross Stephen A., Westerfield Randolph W., Jaffe J. (2002), Corporate Finance, McGraw Hill

    Editor: Bianca GIANNINI
    © 2010 ASSONEBB


  • Dodd-Frank Act

    The Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 (US).

    ©2012 Editor: Camera dei Lords


  • Doha Development Agenda

    Doha Development Agenda (DDA), also known as the Doha Round, is the unofficial name of the Doha Work Programme on negotiations and implementation. Because of the greater influence of developing countries in setting the action plan at Doha, the new round became known as the Doha Development Agenda (DDA), since a fundamental objective is to improve the trading prospects of developing countries. The Agenda includes three main topics: industrial tariffs, topics of interest to developing countries, and changes to World Trade Organization (WTO) rules (see Dispute Settlement Body-DSB). The work programme lists 21 subjects. The main issues are: reforming agricultural subsidies; ensuring that new liberalization in the global economy respects the need for sustainable economic growth in developing countries; improving developing countries' access to global markets for their exports. The original deadline of 1 January 2005 was missed; in fact, the Doha Round ended in December 2013 (see WTO and Bali Agreement: From Uruguay Round to Doha Round).

    Editor: Giovanni AVERSA


  • Dow Jones Sustainability Indexes

    The Dow Jones Sustainability Indexes were launched in 1999 as the first global sustainability benchmarks. The indexes are offered cooperatively by SAM and S&P Dow Jones Indices. The family tracks the stock performance of the world's leading companies in terms of economic, environmental and social criteria. The indexes serve as benchmarks for investors who integrate sustainability considerations into their portfolios, and provide an effective engagement platform for companies who want to adopt sustainable best practices.


  • Downturn

    Shift of an economic or stock market cycle from rising to falling.


  • Dubai World

    Sovereign wealth fund owned by the state of Dubai, an Emirate of the UAE, and active in the financial, real estate, transport, logistics, maritime and sports sectors.
    The chairman of the fund is Sultan Ahmed Bin Sulayem. Dubai is one of the very few Gulf economies with neither oil nor natural gas. Its growth is based on tourism and luxury real estate (e.g. the Palm).
    Its homepage states that "The Sun Never Sets on Dubai World". Yet, on 2 November 2009, just before the Muslim feast of sacrifice, Dubai World announced that the sun had started to set, and that its short-term debt had to be rescheduled with banks. The nominal debt of Dubai World with European banks reached 40 billion euros, 25 of which were short-term. The banks involved were HSBC, Standard Chartered, Barclays, ABN AMRO, Citibank and BNP Paribas. Dubai was considered the safe box of the ayatollahs, and some Iranian banks were probably involved in the crisis.
    The most relevant effects of the Dubai difficulties are on sponsorships: the McLaren-Mercedes F1 team, the Team New Zealand sailing team, and the football clubs Liverpool, Milan, Hamburg and Arsenal. Italian engineering and building firms working in the Gulf seem confident that the situation will improve, but their potential losses are estimated to be very high. The exit strategy from the crisis will probably pass through the restructuring of the debt with European banks, and a rescue plan by the central bank and the sovereign wealth fund of Abu Dhabi, an Emirate that is rich in oil and natural gas.

    Link: www.dubaiworld.ae
    © 2009 ASSONEBB

  • Dummy variable

    A variable whose value (0 or 1) is set conditionally on another parameter (e.g. time) on which the value of the target function depends; it is typically used to highlight the cyclical nature of the phenomenon under consideration.
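    A minimal illustration, assuming a hypothetical quarterly series: a 0/1 dummy flags the fourth quarter, so that a regression on the series can capture a seasonal (cyclical) effect.

```python
# Hypothetical quarterly observations; the dummy equals 1 in Q4 and 0
# otherwise, marking the cyclical component of the phenomenon.
quarters = ["Q1", "Q2", "Q3", "Q4", "Q1", "Q2", "Q3", "Q4"]
q4_dummy = [1 if q == "Q4" else 0 for q in quarters]
print(q4_dummy)  # [0, 0, 0, 1, 0, 0, 0, 1]
```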


  • Duration

    Duration is the weighted average of the times to maturity of each cash flow of an asset, where each weight is the present value of the corresponding flow divided by the price:

    D = [ Σ_t t · FC_t / (1 + r)^t ] / P

    where t = maturity of each cash flow; FC_t = cash flow due at time t; r = yield to maturity; P = price, with

    P = Σ_t FC_t / (1 + r)^t

    Please note that for a zero-coupon bond duration is equal to the life to maturity. Duration is closer to the final maturity the greater the weight of the final flow.
    Ceteris paribus (coupon, coupon frequency, etc...), the bond with higher life to maturity has greater duration.
    Duration is a measure of risk of the bond because an increase in duration means a higher volatility of the bond, and therefore a risk of oscillation of the bond price.
    Floating rate bonds have low duration since their coupon frequently adapts to market rates, hence their volatility related to market rate movements is low.
    On the other hand, fixed rate bonds have higher duration.
    The higher the duration, the higher the sensitivity of bond prices to rate movements.
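    The duration formula above can be sketched as follows; the bonds and rates are hypothetical, and the zero-coupon case illustrates that duration equals life to maturity:

```python
def macaulay_duration(cash_flows, r):
    """Duration D = [ sum_t t * FC_t / (1+r)^t ] / P, where P is the
    present value (price) of all cash flows and r the yield to maturity.
    `cash_flows` maps each time t (in years) to the flow FC_t."""
    price = sum(fc / (1 + r) ** t for t, fc in cash_flows.items())
    weighted = sum(t * fc / (1 + r) ** t for t, fc in cash_flows.items())
    return weighted / price

# Zero-coupon bond: duration equals the 5-year life to maturity.
print(macaulay_duration({5: 100.0}, 0.03))  # ~5.0

# Hypothetical 5-year bond with a 5% annual coupon, at a 5% yield:
# the interim coupons pull duration below the final maturity.
coupon_bond = {t: 5.0 for t in range(1, 5)}
coupon_bond[5] = 105.0
print(round(macaulay_duration(coupon_bond, 0.05), 3))
```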
    Editor: Ugo TRENTA
    © 2009 ASSONEBB

  • Dynamic hedging

    A multiperiod portfolio hedge, obtained by updating the information set at the end of each period.


  • Dynamic New Keynesian model

    The Dynamic New Keynesian (DNK) model represents the current workhorse theoretical framework for the analysis of monetary policy. From a philosophical and methodological point of view, it achieves the synthesis between the New Classical Macroeconomics and the Keynesian school of thought. The New Classical Macroeconomics had brought about a revolution in modern macroeconomics, by proposing a new class of models which are able to overcome, from a methodological perspective, the Lucas critique of the reduced-form models that were popular until the 1970s.1 In order to do this, this new class of models introduces three main features. Firstly, rather than postulating behavioural equations describing the aggregate economic relations, it derives the latter from first principles, as the equilibrium outcome of intertemporal problems faced by optimizing agents, like households and firms.2 Secondly, it substitutes the several, somewhat ad hoc, assumptions about the formation of expectations with the new, elegant and rigorous paradigm of rational expectations, according to which the agents form expectations by using efficiently all the available information. Thirdly, the equilibrium outcome of the model requires the simultaneous clearing of all active markets: the focus shifts from partial equilibrium models to general equilibrium ones. The baseline formulation of this new class of models is the Real Business Cycle (RBC) model. From a conceptual perspective, this new strand of literature builds on the Classical ideas that markets are frictionless and always clear, that all production inputs are completely and efficiently used at all times, and that money is neutral. As a consequence, the main claim of the RBC analysis is that business cycle fluctuations are mainly caused by shifts in productivity and reflect the efficient, dynamic response of a frictionless economy. Hence, there is no welfare-improving role for any stabilization policy.
    The Keynesian school of thought, on the other hand, pushes the idea that the existence of nominal and real frictions may prevent the system from reaching a full-employment equilibrium in the short-run, and makes it converge to a sub-optimal and inefficient equilibrium. In such an economy, business cycle fluctuations may be affected and amplified by the real and nominal rigidities, and the latter then assign a central role to monetary policy for the equilibrium level of real variables (as in the basic IS-LM model).
    The DNK model blends these conceptual ideas with the methodological advances of the RBC framework. For this reason, the current literature also refers to it as the New Neoclassical Synthesis (as opposed to the "old" Neoclassical Synthesis of the 1950s, from which the IS-LM model had emerged).
    With the RBC model, therefore, the DNK shares the methodological core: a Dynamic Stochastic General Equilibrium (DSGE) model of the economy, in which the demand side builds on the optimal intertemporal behaviour of infinitely-lived households which maximize the utility from consumption and leisure subject to an intertemporal budget constraint, and the supply side on the optimal behaviour of a continuum of firms using a common technology and interacting with the households in the markets for production inputs. Both households and firms have rational expectations. On top of this core structure, the DNK model adds a number of features that are typical of the Keynesian school: monopolistic competition and nominal rigidities. Therefore, differently from the RBC model, firms do not operate in perfect competition, as price-takers. On the contrary, in the DNK model, there is a large set of differentiated goods, so that each firm producing a given variety has some degree of market power, and sets its price optimally in order to maximize the discounted stream of future profits, over its planning horizon (which is infinite). Moreover, in resetting their prices, firms are subject to frictions of some kind (either in the form of constraints on the frequency of adjustment or of adjustment costs), which effectively prevent the aggregate price level from adjusting flexibly to the shocks that hit the economy. As a consequence, variations in the nominal interest rate do not affect one-for-one the expected inflation rate, implying variations also in the real interest rate and therefore non-neutrality of monetary policy.
    These additional assumptions make the DNK model differ sharply from the RBC benchmark, in terms of positive and normative implications. From a positive perspective, nominal rigidities act as an amplification device, which make the dynamic response of the economy to structural shocks inefficient, as opposed to the efficient fluctuations implied by the RBC framework. From a normative perspective, these inefficiencies leave room for a welfare-improving role of economic (and foremost monetary) policy. The DNK model, therefore, restates the main results of the basic IS-LM-AS model within a fully-fledged structural dynamic framework, in which the aggregate demand and supply schedules are not simply postulated, but rather derived from first principles, as the result of the optimal behaviour of microeconomic agents.
    Structure of the model
    The DNK model, in its baseline formulation assuming a fixed amount of physical capital and abstracting from public consumption, builds upon three main blocks, referring to the three different classes of agents that interact in the economy. The first block describes the demand side of the economy, and consists of a representative, infinitely-lived household. The household consumes a basket of all the differentiated goods produced in the economy

    C_t = [ ∫_0^1 c_t(i)^((ε-1)/ε) di ]^(ε/(ε-1))     (1)

    where ε captures the elasticity of substitution between any two given types, and the latter are indexed by i ∈ [0,1]. Such differentiation among consumption goods is the first departure from the RBC framework, implying a monopoly market power for the firms producing them. At the limit, when ε goes to infinity, such differentiation fades away, each brand becomes a perfect substitute for the others, and the market structure converges to perfect competition.
    The household chooses consumption, saving and hours worked in order to maximize the expected discounted stream of utility flows, over an infinite planning horizon, subject to a sequence of budget constraints. The problem can be formalized as

    max E_0 Σ_{t=0}^∞ β^t U(C_t, N_t)     (2)

    such that

    ∫_0^1 P_t(i) c_t(i) di + B_t ≤ (1 + i_{t-1}) B_{t-1} + W_t N_t + Π_t

    for all t ≥ 0, aggregate consumption obeys equation (1) and asymptotic solvency is granted. In the formulation above, E_t denotes the rational-expectation operator conditional on information available at time t, β is the time-discount factor, C_t the level of real consumption at time t, N_t the amount of hours worked, P_t(i) the price of the type-i consumption good, P_t the aggregate price level, B_t a riskless nominal one-period bond in which the household allocates its savings, i_{t-1} the nominal return on such a bond between period t-1 and t, W_t the nominal hourly wage and Π_t the nominal profits produced and distributed by the monopolistic firms.

    The formulation above nests two distinct optimization problems faced by the household: one in the intra-temporal dimension, and the other in the inter-temporal one.
    The intra-temporal problem requires choosing the optimal combination of differentiated goods for a given level of aggregate consumption, in each period. The solution to this intra-temporal problem yields as equilibrium conditions the demand for each type i, as a decreasing function of its relative price and given aggregate consumption

    c_t(i) = ( P_t(i) / P_t )^(-ε) C_t     (3)
    which highlights the interpretation of ε as the price-elasticity of demand for brands; and the consumption-based aggregate price index

    P_t = [ ∫_0^1 P_t(i)^(1-ε) di ]^(1/(1-ε))     (4)
    The inter-temporal problem, instead, is related to the optimal consumption-saving decision (how much to consume today as opposed to saving for future consumption), and the simultaneous optimal consumption-leisure decision (how much to work to finance consumption as opposed to enjoying leisure). The solution to such a problem provides two additional equilibrium conditions, implying the equilibrium path for aggregate consumption and hours worked. The optimal consumption-saving decision requires the equalization of the marginal-utility costs of giving up consumption today, to the expected discounted marginal-utility benefits of consuming tomorrow the payoff from current savings:

    U_C(C_t, N_t) = β (1 + i_t) E_t[ U_C(C_{t+1}, N_{t+1}) P_t / P_{t+1} ]     (5)
    where U_C(C_t, N_t) denotes the marginal utility of consumption at time t. In turn, the optimal consumption-leisure decision requires the equalization of the Marginal Rate of Substitution (MRS) between consumption and leisure and their relative price, the real wage W_t/P_t (which captures the opportunity cost of choosing an additional hour of leisure, in terms of foregone earnings):

    - U_N(C_t, N_t) / U_C(C_t, N_t) = W_t / P_t     (6)
    The equation above implies the labour supply schedule, which determines the optimal amount of hours worked at time t, given the real hourly wage in the same period and the level of desired consumption.

    The functional form of the utility U is usually assumed "isoelastic"

    U(C_t, N_t) = C_t^(1-σ) / (1-σ) - N_t^(1+φ) / (1+φ)     (7)
    in that the elasticity of intertemporal substitution in consumption (1/σ) and the Frisch elasticity of labour supply (1/φ) are constant, primitive parameters.
    The second block describes the supply side of the economy, and consists of a continuum of firms, indexed by i ∈ [0,1], each producing a differentiated good out of labour services hired from the representative household, according to a Constant Returns to Scale (CRS) technology of the form

    Y_t(i) = A_t N_t(i)     (8)
    where A_t is an aggregate productivity index, which follows a log-stationary process:

    ln A_t = ρ_a ln A_{t-1} + ε_t^a     (9)
    Each firm chooses the optimal amount of labour services to hire in each period, which, under the technology specification of equation (8), requires that the real marginal costs are equal to the real wage per efficiency unit (and therefore common across firms):

    MC_t = ( W_t / P_t ) / A_t     (10)
    The problem of each firm i is choosing the selling price for the differentiated goods it produces, subject to two constraints. The first one is given by the demand for its specific brand, coming from the household, as described by equation (3). The second constraint is on the frequency of price adjustment. Each firm is able to set its price optimally following a time-dependent, stochastic rule: in each period, each firm will get the chance to re-set its price optimally with probability 1-θ, while with probability θ it will have to keep the price unchanged.3 These probabilities are history-independent, meaning that every firm in each period faces the same probability of having to keep the price unchanged, regardless of what happened in the previous periods. By the law of large numbers, therefore, in each period t a set of firms of mass θ will charge the last period’s price, while a set of mass 1-θ re-optimizes. As a consequence, the aggregate price index (4) can be cast in the simplified form

    P_t = [ θ (P_{t-1})^(1-ε) + (1-θ) (P_t*)^(1-ε) ]^(1/(1-ε))     (11)
    This price-setting mechanism implies a degree of stickiness in the general price level proportional to θ, in that any unexpected disturbance that would require a price adjustment as an optimal response from the firms will induce an actual adjustment only from a subset of them, of mass 1-θ, while the others will respond by adjusting the amount of production. This asymmetry in the response of the firms implies a misallocation of resources between sectors that are able to re-set their prices and those that instead will have to adjust production, which is the origin of the welfare cost of unstable inflation in this model.4

    When a given firm does get the chance to re-optimize, it will take into account that the chosen price will have to be charged for k more periods with probability θ^k. This feature of the price-setting mechanism effectively makes the problem of the firm dynamic, since the optimal price will have to be set in order to maximize not only the current profits, but rather the entire expected discounted flow of future profits. Formally, the problem of a generic firm that gets the chance to re-optimize in period t, is:


    max_{P_t*} Σ_{k=0}^∞ θ^k E_t[ Q_{t,t+k} ( P_t* y_{t+k|t} - Ψ_{t+k}(y_{t+k|t}) ) ]     (12)

    (where Ψ_{t+k}(·) denotes the nominal cost function)
    such that

    y_{t+k|t} = ( P_t* / P_{t+k} )^(-ε) Y_{t+k}     (13)

    the real marginal costs obey equation (10), and

    Q_{t,t+k} = β^k [ U_C(C_{t+k}, N_{t+k}) / U_C(C_t, N_t) ] ( P_t / P_{t+k} )     (14)
    denotes the Inter-temporal Marginal Rate of Substitution (IMRS) in consumption, at which the household (which is the ultimate owner of the firms) discounts future cash-flows. The solution to such a problem implies that all firms that are able to re-set their price at time t, will choose the common level

    P_t* = (1 + μ) Σ_{k=0}^∞ ω_k E_t[ MC_{t+k} P_{t+k} ]     (15)
    where μ = 1/(ε-1) is the net mark-up that each firm can charge over marginal costs as a consequence of its market power, and the ω_k are period weights (decreasing with k) related to the way in which the firms discount future cash-flows. Equation (15), therefore, implies that the optimal price for a firm that is able to re-optimize should be set as a constant mark-up over a weighted average of current and expected future nominal marginal costs. In the limiting case of full price flexibility, arising when θ = 0, all firms can re-set their price in every period, and therefore only the current marginal costs are relevant:



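Under the Calvo rule described above, a firm keeps its price fixed each period with probability θ, so the expected duration of a price is 1/(1−θ). A short Monte Carlo sketch can verify this (the value θ = 0.75, i.e. an average duration of four quarters, is an illustrative calibration, not one taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.75  # probability of NOT being able to re-optimize in a given period

# Simulate many firms: the number of periods a price stays in effect
# (including the period it is set) is geometric with success prob 1 - theta.
n_firms = 200_000
durations = rng.geometric(1 - theta, size=n_firms)

print(round(durations.mean(), 1))  # close to 1 / (1 - theta) = 4.0 quarters
```

The simulated average duration confirms the analytical value 1/(1−θ), which is why θ is the natural index of the degree of nominal rigidity in this model.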
    The third and last block accounts for the behaviour of the monetary policy maker. In the cash-less specification adopted here, monetary policy is usually defined as the direct control, by the Central Bank, of the short-term nominal interest rate i_t, according to some instrument feedback rule of the form

\[
i_t = rr + \phi_{\pi} \, \pi_t + \phi_{y} \left( y_t - y^{n}_t \right) + v_t . \tag{17}
\]
    The above specification of the monetary policy rule, originally proposed by Taylor (1993), implies that the nominal interest rate at time t is raised above the long-run level rr whenever the actual inflation rate or real output is higher than its target (respectively 0 and y^n), or as a result of some other, un-modelled, objective of the Central Bank, captured by the term v_t; φ_π and φ_y are the response coefficients, measuring the degree of aggressiveness towards the two targets.
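As a numerical illustration of such a feedback rule, the sketch below computes the quarterly nominal rate it prescribes (the coefficient values 1.5 and 0.5 are Taylor's original illustrative choices, and rr = 0.01 is an assumed quarterly long-run rate, not a figure from this entry):

```python
def taylor_rate(pi, output_gap, rr=0.01, phi_pi=1.5, phi_y=0.5, v=0.0):
    """Quarterly nominal rate implied by a Taylor-type feedback rule."""
    return rr + phi_pi * pi + phi_y * output_gap + v

# Inflation 1% above target with a closed output gap:
print(taylor_rate(0.01, 0.0))  # 0.01 + 1.5 * 0.01 = 0.025
```

Note that φ_π > 1 makes the nominal rate rise more than one-for-one with inflation, so the real rate also rises; this "Taylor principle" is what stabilizes inflation in the model.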

    Equilibrium and Implications

    The model outlined above, in the absence of exogenous shocks hitting the level of labour productivity, converges to a non-stochastic long-run equilibrium in which the level of the interest rate is tied to the time-discount factor:

\[
rr = \frac{1}{\beta} - 1 . \tag{18}
\]
    The relation above is a first implication of the DNK model: in the long run, the real interest rate is pinned down by the degree of impatience of the households.
    The presence of the static, real distortion related to monopolistic competition implies an inefficiently low level of equilibrium output. The (simplified) RBC version of the specification adopted here, arising when μ = 0, would indeed imply a higher level of output, Y^n, as a direct consequence of perfect competition.
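For a quick numerical check of the long-run relation above: with a quarterly discount factor β = 0.99 (a common illustrative calibration, not a value given in the text), the implied steady-state real rate is about 1% per quarter, roughly 4% per year:

```python
beta = 0.99                      # quarterly time-discount factor (illustrative)
rr = 1 / beta - 1                # steady-state quarterly real interest rate
annual = (1 + rr) ** 4 - 1       # annualized equivalent

print(round(rr, 4), round(annual, 4))  # -> 0.0101 0.041
```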
    From a dynamic perspective, the implications of the model above are affected by the second, dynamic distortion: nominal rigidities. Such implications can be studied by resorting to a first-order approximation of the equilibrium conditions outlined above, which allows the model economy to be reduced to a system of five linear stochastic equations, in which lower-case variables denote log-deviations from long-run equilibrium values. The demand side of the economy is described by the Euler Equation (5) and the aggregate resource constraint, requiring that aggregate private consumption equal aggregate output, given the absence of investment and public consumption:

\[
y_t = E_t \{ y_{t+1} \} - \frac{1}{\sigma} \left( i_t - E_t \{ \pi_{t+1} \} - rr \right) . \tag{19}
\]
    The equation above is sometimes referred to as a dynamic stochastic IS curve, in that it implies a negative relation between output and the interest rate, just like the static IS schedule of the Neoclassical Synthesis of the 1950s discussed in undergraduate macroeconomics courses. An important difference, however, is worth mentioning: while in the static IS curve the negative impact of the interest rate on output is triggered by its contractionary effect on investment, here the negative effect works through intertemporal substitution in consumption: a higher interest rate makes it more convenient to consume less today, save more and postpone consumption to the future.
    The supply side is described by the aggregate production function, obtained by aggregating across firms, equation (8); the equilibrium in the labour market, equations (6) and (10); and the pricing behaviour of the firms, described by equations (11) and (15). Taken together, these equilibrium conditions imply the short-run New Keynesian Phillips Curve (NKPC) describing the equilibrium dynamics of the inflation rate:

\[
\pi_t = \beta \, E_t \{ \pi_{t+1} \} + \kappa \left( y_t - y^{n}_t \right) + u_t , \tag{20}
\]
    in which the composite parameter κ ≡ (1−θ)(1−βθ)(σ+φ)/θ is decreasing in the degree of price stickiness θ, and the additional stochastic shock u_t captures inflationary cost-push shocks.5 The third equation describes the behaviour of the Central Bank:

\[
i_t = \phi_{\pi} \, \pi_t + \phi_{y} \left( y_t - y^{n}_t \right) + v_t , \tag{21}
\]
    the fourth equation defines the target, frictionless level of output, which is derived in the limiting case θ = 0:

\[
y^{n}_t = \frac{1+\varphi}{\sigma+\varphi} \, a_t , \tag{22}
\]
    and the last one describes the dynamics of the stochastic driving force, the productivity shock:

\[
a_t = \rho_a \, a_{t-1} + \varepsilon^{a}_{t} . \tag{23}
\]
    The main implications of the DNK model, even in the small-scale version discussed here, are sharply different from the corresponding RBC version. In the latter, fluctuations in the real output would only follow the dynamics of productivity, as described by equation (22), and would therefore be efficient. Monetary policy, moreover, would be completely ineffective and the Phillips Curve would turn vertical. The system in that case would reduce to equation (19), implying the dynamics of the interest rate, (22) and (23) only. In the general DNK version, however, the presence of monopolistic competition and nominal rigidities implies a series of departures from the implications of the RBC. First of all, the dynamics of actual output are now the result of the amplification mechanism triggered by nominal rigidities, through the interaction between NKPC, the dynamic stochastic IS curve and the monetary policy rule. Secondly, a positively-sloped NKPC implies a short-run trade-off between inflation and output stabilization. Thirdly, the dynamics of the price level are sticky, and they depend mainly on the wedge between actual output and its frictionless counterpart. Fourthly, monetary policy, through changes in the interest rate and their feedback effect on output, is able to have real effects on real activity, the more so the stickier consumer prices are.
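To make the interaction between the NKPC, the dynamic stochastic IS curve and the policy rule concrete, the following sketch solves the linear system for the response to a productivity shock by the method of undetermined coefficients, guessing that the output gap x_t = y_t − y_t^n and inflation are linear in a_t. All parameter values are illustrative calibrations, not figures taken from this entry:

```python
import numpy as np

# Illustrative quarterly calibration (not from the text)
beta, sigma, phi = 0.99, 1.0, 1.0   # discount factor, risk aversion, inverse Frisch
theta = 0.75                         # Calvo price stickiness
phi_pi, phi_y = 1.5, 0.5             # policy-rule response coefficients
rho = 0.9                            # persistence of the productivity shock

kappa = (1 - theta) * (1 - beta * theta) * (sigma + phi) / theta
psi_a = (1 + phi) / (sigma + phi)    # frictionless output response to a_t

# Guess x_t = eta_x * a_t and pi_t = eta_pi * a_t, use E_t a_{t+1} = rho * a_t.
# IS (gap form): x = rho*x - (phi_pi*pi + phi_y*x - rho*pi)/sigma + psi_a*(rho - 1)
# NKPC:          pi = beta*rho*pi + kappa*x
A = np.array([
    [1 - rho + phi_y / sigma, (phi_pi - rho) / sigma],
    [-kappa,                  1 - beta * rho],
])
b = np.array([psi_a * (rho - 1), 0.0])
eta_x, eta_pi = np.linalg.solve(A, b)

# After a positive productivity shock, actual output rises by less than its
# frictionless level, so the output gap and inflation both fall.
print(eta_x < 0, eta_pi < 0)  # -> True True
```

The negative responses illustrate the amplification mechanism discussed above: with sticky prices, demand does not expand as much as the frictionless supply, and monetary policy (via φ_π and φ_y) determines how quickly the gap is closed.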

    Calvo, G. A. (1983). "Staggered prices in a utility-maximizing framework". Journal of Monetary Economics, 12 (3).
    Clarida, R., J. Galí and M. Gertler (1999). "The science of monetary policy: a new Keynesian perspective". Journal of Economic Literature, 37 (4).
    Galí, J. (2003). "New Perspectives on Monetary Policy, Inflation and the Business Cycle". In: Dewatripont, M., Hansen, L., Turnovsky, S. (Eds.), Advances in Economic Theory, vol. III. Cambridge University Press.
    Galí, J. (2008). Monetary Policy, Inflation and the Business Cycle. Princeton University Press, Princeton.
    Goodfriend, M. and R. King (1997). "The New Neoclassical Synthesis and the Role of Monetary Policy". In: Bernanke, B. and J. Rotemberg (Eds.), NBER Macroeconomics Annual 1997. MIT Press.
    Long, J. B. and C. Plosser (1983). "Real Business Cycles". The Journal of Political Economy, 91 (1).
    Lucas, R. E. (1976). "Econometric Policy Evaluation: A Critique". In: Brunner, K. and A. Meltzer (Eds.), The Phillips Curve and Labor Markets, Carnegie-Rochester Conference Series on Public Policy, 1.
    Nisticò, S. (2007). "The Welfare Loss from Unstable Inflation". Economics Letters, 96.
    Rotemberg, J. J. (1982). "Sticky Prices in the United States". The Journal of Political Economy, 90 (6).
    Taylor, J. B. (1993). "Discretion versus policy rules in practice". Carnegie-Rochester Conference Series on Public Policy, 39.
    Woodford, M. (2003). Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton University Press, Princeton.

    1"Given that the structure of an econometric model consists of optimal decision rules of economic agents, and that optimal decision rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models" (Lucas, 1976, p. 41): a reduced-form econometric model is useless for predicting the effects of a change in policy, because the model parameters, conditional upon which the prediction is made, are themselves dependent on policy.
    2This is the sense in which such a model overcomes the Lucas’ critique: since the model parameters are now related to first principles like the structure of preferences or technology, they are independent of the economic policy, and can be reliably used to predict the effects of changes in the latter.
    3This assumption is due to Calvo (1983), hence the denomination "Calvo rule".
    4An alternative way to introduce nominal rigidities has been proposed by Rotemberg (1982), and it is based on the assumption that all firms can re-optimize in each period, but in doing so they have to pay a real menu cost proportional to the price adjustment. Notwithstanding the lack of asymmetry implied by this price-setting mechanism, however, it can be shown that its positive and normative implications in this baseline specification of the model are equivalent to those of the "Calvo rule" (see, for example, Nisticò, 2007).
    5Although this term has been added ad hoc here, it can easily be derived from the underlying structure of the economy, for example by assuming a time-varying price-elasticity of the demand for brands.

    Editor: Salvatore NISTICO'
    © 2010 ASSONEB


    Dynamic stochastic general equilibrium (DSGE) models represent the state of the art in macroeconomic modelling by providing a coherent framework for policy discussion and analysis. DSGE models are basically complex, non-linear systems of equations and are characterized by the following theoretical aspects (Woodford, 2003). Firstly, the models build on explicit micro-foundations involving rational and forward-looking optimising behaviour of individual economic agents. Secondly, they provide a coherent representation of the interactions between households, firms and policy makers in a dynamic micro-founded context. Moreover, DSGE models have been enriched to account for nominal rigidity or price stickiness by the introduction of a new class of models typically referred to as New Keynesian models. They are useful in the analysis of structural changes and of the sources of fluctuations in the economy, and they can be applied to forecast or to predict the effects of policy changes. For these reasons, DSGE models have attracted the attention of central banks across the world, some of which have already developed their own models and employ them for monetary policy analysis and forecasting; in particular, the Bank of Canada (ToTEM), the Bank of England (BEQM), the Central Bank of Chile (MAS), the Central Reserve Bank of Peru (MEGA-D), the European Central Bank (NAWM), the Norges Bank (NEMO), Sveriges Riksbank (RAMSES), and the US Federal Reserve (SIGMA). Also, multilateral institutions like the IMF (e.g. GEM, GFM, or GIMF) and the European Commission (QUEST III) have developed their own DSGE models for policy evaluation. Basic DSGE models reconcile the insights of the New Keynesian paradigm, of the New Classical School and of the real business cycle approach (RBC) to give birth to a new theoretical approach commonly known as the "New Neoclassical Synthesis".
Basic DSGE models adopt the real business cycle approach introduced by the seminal paper of Kydland and Prescott (1982), which is based on an impulse-response structure built around optimising agents in a general equilibrium setting. However, DSGE models differ from the RBC benchmark in the ways they explain business cycles. Some realistic assumptions are introduced by a class of DSGE models, such as the Keynesian versions of DSGE models and VAR models (see, for example, Gray and Malone (2008) on the sources of rigidities and imperfections in the reference markets). However, in order to take advantage of a high degree of flexibility, this class of models often involves strong simplifying assumptions (e.g. homogeneity across households and firms). At present, the modelling of the financial sector is one important challenge for general DSGE models. Finally, the development of estimation procedures for DSGE models (e.g. Bayesian techniques such as Markov Chain Monte Carlo (MCMC) methods and the expectations-maximisation (EM) algorithm) has increased the usefulness of DSGE models in the context of policy analysis.
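As a minimal illustration of the Bayesian MCMC techniques mentioned above, the sketch below estimates the persistence of a simulated AR(1) productivity process with a random-walk Metropolis sampler. This is a deliberately stylised toy example with a flat prior on the stationary region; actual DSGE estimation evaluates the likelihood of the full model, typically via a filter, but the accept/reject mechanics are the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) "productivity" series a_t = rho * a_{t-1} + eps_t
# (true_rho and sigma_eps are illustrative values).
true_rho, sigma_eps, T = 0.9, 0.01, 500
a = np.zeros(T)
for t in range(1, T):
    a[t] = true_rho * a[t - 1] + sigma_eps * rng.standard_normal()

def log_posterior(rho):
    """Gaussian log-likelihood (up to a constant), flat prior on (-1, 1)."""
    if not -1.0 < rho < 1.0:
        return -np.inf
    resid = a[1:] - rho * a[:-1]
    return -0.5 * np.sum(resid ** 2) / sigma_eps ** 2

# Random-walk Metropolis sampler for rho.
draws = []
rho_cur, lp_cur = 0.5, log_posterior(0.5)
for _ in range(20_000):
    rho_prop = rho_cur + 0.02 * rng.standard_normal()  # random-walk proposal
    lp_prop = log_posterior(rho_prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:       # accept/reject step
        rho_cur, lp_cur = rho_prop, lp_prop
    draws.append(rho_cur)

posterior = np.array(draws[5_000:])  # discard burn-in
print(round(posterior.mean(), 2))    # posterior mean, close to the true 0.9
```

The posterior mean recovers the true persistence up to sampling error; in full DSGE applications the same sampler traverses a high-dimensional parameter vector rather than a single coefficient.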

    Camilo E. Tovar (2008), DSGE models and central Banks, BIS Working Papers, No 258, Monetary and Economic Department, September 2008.
    Kydland F. and Prescott E. (1982), Time to Build and Aggregate Fluctuations, Econometrica, 50(6) 1982, pp. 1345-70.
    Woodford M. (2003), Interest and Prices: Foundations of a Theory of Monetary Policy, Princeton University Press.

    © 2011 ASSONEB
