The impact of foreignness on the compliance with the international standards for the professional practice of internal auditing. Alhendi, Eyad Abdulaziz, January 2017.
The Institute of Internal Auditors (IIA) was established to organise the profession. The IIA provides members with an International Professional Practices Framework (IPPF) to guide their professional practice and to promote high-quality internal auditing in various environments. One of the IPPF components is the International Standards for the Professional Practice of Internal Auditing (the Standards). The goals of the Standards are to describe essential principles that characterise the practice of internal auditing, to deliver a framework for performing and promoting a wide range of value-added internal auditing, to create the basis for the assessment of internal audit performance, and to foster improved organisational processes and operations. However, researchers have reported many factors behind the lack of compliance with the Standards that relate directly to the fieldwork of the profession and that can be controlled by the board of directors, executives or the audit committee in either the short or the long term. This study, however, is premised on the assumption that addressing the internal factors related directly to the organisation or its resources (for instance, internal auditors' educational level (college degree), professional certificates such as the Certified Internal Auditor (CIA), membership of professional organisations, and the age of internal audit staff) is not the ultimate solution to compliance with the International Standards for the Professional Practice of Internal Auditing. This is because, even if a particular organisation adopts a strategy to eliminate the negative effects associated with internal factors, there are complicated external environmental factors that may not be controllable. For this reason, this study examines foreignness (social capital) as a major factor that affects internal auditors' compliance with the Standards from an environmental perspective, which is one of the main contributions of this study. The study examines compliance with the International Standards for the Professional Practice of Internal Auditing in relation to various cultural factors, such as personal, friendship, and family relationships, which are especially salient in developing, Arab, and Gulf countries. Another contribution of this study is to examine compliance with the Standards from a linguistic perspective. Many countries may recognise and use English as an official language and have no trouble with the basic comprehension of the Standards, but meaning may not be completely and accurately conveyed in the nuances of the language, which are unique to different cultural settings. For this reason, the study assumes that language plays a critical role in understanding, and consequently complying with, the Standards. In non-English-speaking countries, the IIA has tried to solve this issue by translating the Standards into the host country's language. Therefore, the study also examines compliance with the International Standards for the Professional Practice of Internal Auditing in terms of two main linguistic factors, Understanding and Translation. A questionnaire strategy was used to collect quantitative data from companies listed on the Saudi Stock Exchange, selected from different sectors in order to obtain a diversity of responses from many industries.
The results showed that social capital (personal, friendship, and family social capital) influences compliance with the International Standards for the Professional Practice of Internal Auditing (Independence and Objectivity, Individual Objectivity, and Governance). The findings also showed that Linguistic Social Capital (Understanding and Translation) affects compliance with the International Standards for the Professional Practice of Internal Auditing with regard to professional terms such as Add Value and Residual Risk.
Essays on determinants and implications of extreme financing policies. Ebrahimi, Tahera, January 2017.
Within this thesis, we address the continuing puzzle that a growing number of firms follow a capital structure policy that is far from the optimal level. These firms adopt an extreme financing policy, either a conservative or an aggressive debt policy. We investigate the different causes and implications of such capital structure policies in three empirical chapters. In study 1, we document that both supply and demand sides contribute to conservative debt policy, i.e., firms using zero or very low debt. The results suggest that these firms face different supply conditions: based on their debt capacity, they are generally restricted in their ability to issue debt. Further, over-optimistic investors seem to create favourable equity supply conditions (overvalued equity), leading these firms to rely more on equity financing. Moreover, the findings suggest that the current decision to maintain zero/very low leverage is driven by significant future demand for capital. The results for the other side of extreme financing policy, i.e., firms using aggressive debt, appear to be the opposite of what we document for zero/very low leverage firms, with the exception of supply-side effects; very highly levered firms neither have better access to debt markets nor face less optimistic investors (undervalued equity). Furthermore, while the average firm with an extreme financing policy actively rebalances its leverage to stay within an optimal zone, the degree of urgency to shift toward optimal leverage varies, depending on how far firms deviate from being under- (or over-) levered. In study 2, our evidence reveals that CEO characteristics also affect conservative debt policy: this policy tends to be employed when CEOs are overconfident and younger, have higher equity shareholdings and maintain longer tenure. However, results from the analysis of aggressive debt users do not reveal any significant influence of CEO ability on such decisions. Finally, in study 3, we investigate the extent to which extreme financing policies affect the probability of bankruptcy. We document that the failure of firms with different levels of indebtedness is attributable to different factors. Furthermore, in support of optimal capital structure, this study empirically confirms that having an optimal level of leverage increases the probability of firm survival. However, paradoxically, given economists' concerns, the evidence of this chapter shows that the likelihood of bankruptcy is greater for firms with relatively little debt than for their over-levered counterparts.
Equity and debt market timing, cost of capital and value and performance : evidence from listed firms in Thailand. Chamaiporn Kumpamool, January 2018.
Market timing is a nascent theory of capital structure used to explain the concealed motivations of managers. Equity market timing refers to issuing equity when the stock market is favourable in order to reduce the cost of capital, while debt market timing refers to debt financing when the interest rate is particularly low in order to minimize the cost of capital. However, there is no consensus in the literature as to whether firms can exploit such opportunities in real markets, especially in Thailand. Furthermore, it is far from settled what the determinants of market timing are. Additionally, whether market timing succeeds in reducing the cost of capital remains ambiguous. This study investigates market timing theory through three empirical studies. The first study examines the presence of equity market timing in Thailand with 285 IPO firms and 1,038 SEO issuances from 2000 to 2014. The results reveal that IPO and SEO firms tend to take advantage of the stock market when conditions are good, such as during hot periods, economic expansions, and bullish times. In addition, the study finds that timers obtain higher proceeds and maintain these proceeds as cash after the offering. Moreover, this is the first study to explore how corporate governance dimensions are potential determinants of equity market timing. The second study looks at the existence of debt market timing in Thailand with 189 corporate bond issuances from 2001 to 2014. The results indicate that firms tend to time the debt market when the market is hot and interest rates are low. Likewise, we find that timers gain more proceeds and pay lower interest rates. Moreover, this is the first study to reveal that timers retain the proceeds as cash after issuance and that corporate governance and board structure are significant determinants of debt market timing. The third study investigates the influence of market timing on the cost of capital and firm performance. We find that a market timing policy can lead to both success and failure in cost reduction and performance improvement, depending on the type of issued securities, the market timing strategy, and the method used to estimate the cost of capital and firm performance. Furthermore, this study provides some suggestions for managers, shareholders, investors, regulators and other stakeholders to comprehend the cause and effect of market timing and to prepare to protect their interests. The study also informs regulators and policy makers seeking to improve the efficiency of the stock and bond markets in Thailand.
Systemic risk in public sector outsourcing contracts. Bloomfield, Katherine, January 2018.
The research presented in this thesis responds to calls to expand current perceptions of risk in complex organisational settings. A review of the literature makes apparent that risk in projects is frequently treated as independent, thereby disregarding the interrelatedness or 'systemicity' of these risks and any other causal dynamics. The systemicity of risk is therefore of fundamental importance to this research, particularly in terms of its definition, which is sparsely covered within the literature, making it a suitable first research question. In addition, where a project represents an undertaking that has been commissioned by a permanent organisation and is to be delivered by a temporary organisation, the verdict as to whether the commissioned project is deemed successful depends heavily upon the project's ability to protect against unwanted risk. At the forefront of the commissioned project is the contractual relationship established between the buyer and the seller, which sets out the obligations of the contracting parties. Since the contract governs the legality and functionality of the project, it must be designed to balance and mitigate risk effectively. To improve knowledge and awareness of the risk dynamics encased within a project's legal documentation, multiple methods of analysis have been incorporated within the research design in order to extract meaningful data from a sample of MOD case studies, each of which comprises a set of framework contracts, project documentation and interviews. In doing so, the thesis identifies the extent to which public sector organisations like the MOD account for systemic risk in their contracting procedures and reveals shortcomings in the design and implementation of these fundamental legal agreements. Whilst the core methods introduced within this thesis are well-established and justifiable qualitative methods (such as hermeneutics), the research makes a novel methodological contribution through the development of a visual mapping tool. Throughout the research process, the visual tool has demonstrated its capacity to equip the contract writer with greater insight into the dynamic characteristics of risk that are inherent within a contract. By triangulating the data extracted using multiple methods, a set of key findings was deduced which reveals flaws originating in the front-end phase of the project, in the structural design choices made when constructing (or implementing) the formal contract, and in the unrealistic relational expectations that underpin the contractual agreement. As a result, it is believed that the research has contributed new knowledge to both the academic and practitioner realms, while recognising that there is scope for further research to be undertaken. It is envisaged that such future research would benefit from further piloting, expanding the application of the research methodology to other complex organisations within the public sector, whilst testing the robustness of the newly developed risk mapping tool.
Immersive systemic knowing : rational analysis and beyond. Rajagopalan, Raghav, January 2015.
Applied systems thinking has developed rapidly through successive waves, and the current reigning paradigm is the revisioned approach to critical systems thinking. This research scrutinizes systemic intervention. It employs the methods of second-order science to apply some of its principles reflexively back onto the domain, discovering two gaps: one between the espoused aims of systemic intervention and the adequacy of its methods, the other concerning its dependence on dialogic rationality. It also delves into the philosophical underpinnings of the field to trace the reason for this gap to the 'ghosts' of rationalism, namely the way modern Western thinking equates consciousness with intentionality. I argue that there is another well-recognised mode of consciousness, that of non-intentionality. I name these two modes the becoming-striving and the being-abiding orientations. To address the gap, a characterisation of the systemic ontology is first attempted. Three basic features are identified: mindful interconnectedness, enactive cognition and teleonomy. I also describe plausible political, epistemic and pragmatic goals for systems thinking arising from this ontology. Four methods from adjacent disciplines are examined in detail to show that they address the systemic ontology better than existing systemic approaches. These mature, global, contemporary approaches access knowings corresponding to the being-abiding orientation, which is absent in systems thinking. A suitable onto-epistemology for systemic knowing must comprise an ontology and an epistemology corresponding to each of the two consciousness modes: four component elements in all. Suitable conceptual models from other disciplines serve the purpose of these four components. Thus, a model of immersive systemic knowing is assembled, which meets the requirements of a framework for systems thinking in terms of the goals posited. A key feature of this research is the espousal of experiential knowing: not in a phenomenological sense, but in terms of a radical empiricism. It argues for the value of practical knowings that go beyond rationalistic formulation, and which are always held in the margins (in the language of boundaries). Systemists must actively seek such experiential knowing to enact truly creative improvement. The only answer to the problem of knowing the world better is to know the shadow aspects of the knowledge-generating system. This requires truly radical methods and an extended epistemology, both shown to be plentifully available in other practices and cultures. Testimony is provided from two field projects that formed part of these inquiries, and from practitioner accounts.
Essays on Decisions Involving Recurring Financial Events. Atlas, Stephen A., January 2013.
This dissertation explores what influences consumer financial decisions whose consequences recur over time, such as mortgages and recurring payment plans in contracts. It investigates two questions: (1) How do individual differences in intertemporal preferences influence how consumers think about recurring financial events? (2) How does the aggregation level used to describe recurring financial consequences affect how consumers mentally represent the purchase? Taken together, the dissertation explores how consumers mentally represent recurring outcomes and express these preferences through choice. The first essay explores the relationship between individual differences in time preferences and decisions involving recurring payments in the domain of mortgage choice. It relates two components of an individual's time preference, present bias (overvaluing immediate outcomes) and a personal discount rate (the exponential component of time preferences), to mortgage selection and to the decision to strategically abandon a home worth less than its mortgage. Combining insights from an analytic model and a survey of 244 mortgaged households augmented by zip-code-level house price data, this essay proposes that consumers with greater present bias and exponential discounting are more likely to choose mortgages that minimize up-front costs and to be underwater. The model also suggests that present bias decreases the likelihood of walking away, while higher discounting increases that likelihood, a result consistent with the data. Time preferences remain robust predictors with individual- and market-level controls and alternative model specifications. The second essay explores how the aggregation level of a recurring price (e.g., on a daily vs. a yearly basis) affects how consumers mentally account for a contract's benefits. For example, if consumers are told the daily price of a car lease, they imagine the daily benefits of the car, whereas when they are told a monthly price they imagine their broader use of the car. This essay builds on the "pennies-a-day" model (Gourville 1998), which posits that narrowly framed recurring costs can increase a consumer's willingness to purchase by making the cost of a purchase seem trivial. The essay presents evidence that triviality is neither a necessary nor a sufficient condition for narrow framing to increase willingness to purchase, and it expands the domain of situations in which such narrow framing increases purchase. Five web-based experiments suggest that scope insensitivity plays an important role in this effect: under recurring costs, consumers repeatedly "book" the most valued units, while under one-time costs consumers tend to experience diminishing returns to scale. Together, the two essays suggest that contracts involving recurring financial events are mentally represented differently from those with one-time financial events, and that this content is then discounted according to intertemporal preferences.
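To make the role of the two time-preference components concrete, here is a minimal numerical sketch of quasi-hyperbolic (beta-delta) discounting applied to two hypothetical mortgage cost streams. The cash flows and parameter values below are invented for illustration and are not taken from the essay's model or survey data.

```python
# Illustrative sketch: quasi-hyperbolic (beta-delta) discounting of two
# hypothetical mortgage cost streams. All numbers are made up for illustration.

def discounted_cost(cash_flows, beta, delta):
    """Present-biased value: the period-0 flow counts fully, later flows are
    scaled by beta and discounted exponentially by delta per period."""
    total = cash_flows[0]
    for t, c in enumerate(cash_flows[1:], start=1):
        total += beta * (delta ** t) * c
    return total

years = 30
# Mortgage A: low up-front cost (no points), higher annual payments.
mortgage_a = [2_000] + [13_000] * years
# Mortgage B: higher up-front cost (points paid now), lower annual payments.
mortgage_b = [10_000] + [12_400] * years

delta = 0.95
for beta in (1.0, 0.7):   # beta = 1 is a standard exponential discounter
    cost_a = discounted_cost(mortgage_a, beta, delta)
    cost_b = discounted_cost(mortgage_b, beta, delta)
    choice = "A (low up-front)" if cost_a < cost_b else "B (high up-front)"
    print(f"beta={beta}: cost A={cost_a:,.0f}, cost B={cost_b:,.0f} -> prefers {choice}")
```

With beta = 1 the exponential discounter prefers the higher up-front, lower payment stream, while the present-biased borrower (beta = 0.7) flips to the low up-front option, mirroring the direction of the essay's proposed mechanism in a stylized way.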
Changes in the Profitability-Growth Relation and the Implications for the Accrual Anomaly. Li, Meng, January 2013.
Valuation research establishes growth in net operating assets (ΔNOA) as a primary predictor of future profitability. The negative relation between ΔNOA and future profitability, after controlling for current profitability, has been researched extensively in the context of earnings quality, capital investment, accounting conservatism, earnings management, and the accrual anomaly. However, this study shows that while ΔNOA is negatively related to future profitability from 1967 to 1995, it is positively related to future profitability from 1996 to 2010. The negative effects of ΔNOA on future profitability (e.g., diminishing returns on investment, accruals overstatement, and excess capitalization) continue to exist, although they are now dominated by the positive implications of ΔNOA for future profitability. The positive relation between ΔNOA and future profitability grows stronger over time for reasons including increasing intangible intensity, increased volatility of economic activities, increased accounting conservatism, accounting principles shifting toward a balance sheet/fair value approach, changing characteristics of public firms, and the increasing importance of real options. The change in the future profitability-ΔNOA relation has important implications, particularly for the accrual anomaly. The prevailing explanation for the anomaly is that an increase (decrease) in NOA predicts a decrease (increase) in profitability and that investors fail to fully appreciate this negative relation. However, if this hypothesis were true, the anomaly should no longer exist now that the relation has turned positive. I examine the anomaly over an extended time period, including more recent years, and provide evidence that the anomaly is still present. To explain the persistence of the anomaly over time, I conjecture and show that the market reaction to ΔNOA and the future profitability implications of ΔNOA diverge throughout the sample period. Specifically, investors are consistently over-optimistic about the future profitability implications of the growth: in the first half of the sample (1967-1988), investors do not fully react to the negative effects of growth on profitability, and in the second half (1989-2010), they appear to over-emphasize the positive implications of ΔNOA for future profitability. The anomaly weakens during periods when investors' reaction to ΔNOA aligns with the profitability implications of ΔNOA.
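As a rough illustration of the kind of cross-sectional specification involved, the sketch below regresses simulated next-year profitability on current profitability, ΔNOA, and a post-1996 interaction that lets the ΔNOA slope change sign. The data are synthetic and the variable names and specification are hypothetical, not the study's actual research design.

```python
# Stylized sketch of a regression relating future profitability to growth in
# net operating assets (dNOA), controlling for current profitability, with a
# post-1996 interaction to capture a possible sign change. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
year = rng.integers(1967, 2011, size=n)
post = (year >= 1996).astype(float)          # regime indicator
roa = rng.normal(0.05, 0.08, size=n)         # current profitability
dnoa = rng.normal(0.05, 0.15, size=n)        # scaled growth in net operating assets
# Simulate a sign change in the dNOA coefficient across the two regimes.
roa_next = 0.01 + 0.7 * roa + (-0.10 + 0.18 * post) * dnoa + rng.normal(0, 0.05, n)

df = pd.DataFrame({"roa_next": roa_next, "roa": roa, "dnoa": dnoa, "post": post})
df["dnoa_x_post"] = df["dnoa"] * df["post"]

X = sm.add_constant(df[["roa", "dnoa", "dnoa_x_post", "post"]])
fit = sm.OLS(df["roa_next"], X).fit()
print(fit.params)   # dnoa slope < 0 pre-1996; dnoa + dnoa_x_post > 0 post-1996
```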
Pricing Decentralization in Customized Pricing Systems and Network Models. Simsek, Ahmet, January 2013.
In this thesis, we study the implications of multi-party pricing for both consumers and producers in different settings. Within most organizations, the final price of a product or service is usually the result of a chain of pricing decisions. This chain may consist of different departments of the same company as well as different companies in a specific industry. Understanding the implications of such chains for final prices and for consumer and producer surplus is the key topic of this dissertation. In the first part of the thesis, we consider a network in which products consist of combinations of perishable resources. In this model, different revenue-maximizing "controllers" determine the resource prices, and the price of a product is the sum of the prices of its constituent resources. For uncapacitated networks, we develop bounds on the "price of anarchy" (the loss from totally decentralized control relative to centralized control) as the number of controllers increases. We present provably convergent algorithms for calculating Nash equilibrium prices for both the uncapacitated and capacitated cases and, using these algorithms, illustrate counterintuitive situations in which consumer surplus increases after decentralization. While we develop our model in the context of airline pricing, it is applicable to any service network, such as freight transportation, pipelines, and toll roads, as well as to the more general case of supply chain networks. In the rest of the dissertation, we focus on understanding and improving pricing decisions in the case where corporate headquarters sets a list price for all products but the local sales force is given discretion to adjust (or negotiate) prices for individual deals. This form of pricing is called list pricing with discretion (LPD) and is common in business-to-business markets and in certain business-to-consumer settings, including consumer lending, insurance, and automobile sales. In the LPD setting, the question of how much (if any) pricing discretion should be granted to the local sales force is crucial. In the second part of the thesis, we study this issue using two data sets: one from an online lender that sets all prices centrally and one from an indirect lender with local pricing discretion. We find strong evidence that the indirect sales force adjusts prices in a way that improves profitability. However, we also show that using a centralized, data-driven pricing optimization system has the potential to improve profitability further. In addition, using a control function approach, we show that the discretion applied by the local sales force introduces significant endogeneity into the indirect lender's pricing process. Ignoring this endogeneity can lead to severe underestimation of price sensitivity. These insights are valuable for any customized pricing market in which in-person interaction is part of the price-setting process. Finally, in the last part, we focus on the negotiation process underlying the LPD setting and on the fact that not only do buyers differ in their willingness-to-pay (WTP), but sellers also differ in the minimum prices (reservation prices) that they are willing to accept for a transaction. We develop a methodology based on the Expectation-Maximization (EM) algorithm to estimate both the WTP and the reservation price distributions from transaction data.
The required data include information about both completed and failed trades; however, price information is available only for completed trades (the most common situation in these markets). Using the same data from the auto lending industry, we show that our approach provides improved estimates of customer price sensitivity over the approaches commonly used in practice. We also show how the WTP and reservation price estimates can be used to improve the seller's profits by optimally setting reservation prices in negotiations.
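Returning to the network pricing model in the first part, a stylized closed-form example (not the thesis's model; it assumes an uncapacitated, symmetric network with linear demand) shows how summing independently chosen resource prices pushes the total price above the centralized optimum and erodes consumer surplus, which is the baseline loss that price-of-anarchy bounds quantify. The counterintuitive surplus gains mentioned above arise in richer, capacitated settings that this toy example does not capture.

```python
# Stylized illustration (not the thesis's model): a product is a bundle of N
# resources, each priced by an independent revenue-maximizing controller.
# With linear demand D(P) = a - b*P in the total price P, the symmetric Nash
# total price exceeds the centrally chosen price (double marginalization),
# so consumer surplus falls as pricing is decentralized.
a, b = 100.0, 1.0   # hypothetical demand parameters

def outcomes(total_price):
    demand = max(a - b * total_price, 0.0)
    revenue = total_price * demand
    consumer_surplus = 0.5 * demand * demand / b   # triangle under linear demand
    return demand, revenue, consumer_surplus

for n_controllers in (1, 2, 5, 10):   # N = 1 is the centralized benchmark
    # Each controller best-responds with p = (a - b * others_total) / (2b);
    # the symmetric equilibrium solves to p = a / (b * (N + 1)).
    nash_total = n_controllers * a / (b * (n_controllers + 1))
    d, rev, cs = outcomes(nash_total)
    print(f"N={n_controllers:2d}: total price={nash_total:6.2f}, demand={d:6.2f}, "
          f"total revenue={rev:8.2f}, consumer surplus={cs:8.2f}")
```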
Competition and Yield Optimization in Ad Exchanges. Balseiro, Santiago, January 2013.
Ad Exchanges are emerging Internet markets in which advertisers may purchase display ad placements, in real time and based on specific viewer information, directly from publishers via a simple auction mechanism. Such channels present a host of new strategic and tactical questions for publishers. How should the supply of impressions be divided between bilateral contracts and exchanges? How should auctions be designed to maximize profits? What is the role of user information and to what extent should it be disclosed? In this thesis, we develop a novel framework to address some of these questions. We first study how publishers should allocate their inventory in the presence of these new markets when traditional reservation-based ad contracts are available. We then study the competitive landscape that arises in Ad Exchanges and the implications for publishers' decisions. Traditionally, an advertiser would buy display ad placements by negotiating deals directly with a publisher and signing an agreement, called a guaranteed contract. These deals usually take the form of a specific number of ad impressions reserved over a particular time horizon. In light of the growing market of Ad Exchanges, publishers face new challenges in choosing between the allocation of contract-based reservation ads and spot market ads. In this setting, the publisher should take into account the trade-off between short-term revenue from an Ad Exchange and the long-term impact of assigning high-quality impressions to the reservations (typically measured by the click-through rate). In the first part of this thesis, we formalize this combined optimization problem as a stochastic control problem and derive an efficient policy for online ad allocation in settings with a general joint distribution over placement quality and exchange bids, where the exchange bids are assumed to be exogenous and independent of the publisher's decisions. We prove asymptotic optimality of this policy in terms of any arbitrary trade-off between the quality of delivered reservation ads and revenue from the exchange, and provide a bound on its rate of convergence to the optimal policy. We also give experimental results on data derived from real publisher inventory, showing that our policy can achieve any Pareto-optimal point on the quality vs. revenue curve. In the second part of the thesis, we relax the assumption of exogenous bids and study in more detail the competitive landscape that arises in Ad Exchanges and its implications for publishers' decisions. Typically, advertisers join these markets with a pre-specified budget and participate in multiple second-price auctions over the length of a campaign. We introduce the novel notion of a Fluid Mean Field Equilibrium (FMFE) to study the dynamic bidding strategies of budget-constrained advertisers in these repeated auctions. This concept is based on a mean field approximation that relaxes the advertisers' informational requirements, together with a fluid approximation that handles the complex dynamics of the advertisers' control problems. Notably, we derive a closed-form characterization of the FMFE, which we use to study the auction design problem from the publisher's perspective, focusing on three design decisions: (1) the reserve price; (2) the supply of impressions to the Exchange versus an alternative channel such as bilateral contracts; and (3) the disclosure of viewers' information.
Our results provide novel insights into key auction design decisions that publishers face in these markets. In the third part of the thesis, we justify the use of the FMFE as an equilibrium concept in this setting by proving that it provides a good approximation to the rational behavior of agents in large markets. To do so, we consider a sequence of scaled systems with increasing market size. In this regime, we show that, when all advertisers implement the FMFE strategy, the relative profit obtained from any unilateral deviation that keeps track of all available information in the market becomes negligible as the scale of the market increases. Hence, an FMFE strategy indeed becomes a best response in large markets.
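A stylized simulation conveys the flavor of budget-constrained bidding in repeated second-price auctions: the advertiser shades its bid by a multiplier and paces the multiplier so that average spend tracks its budget. This is only an illustrative sketch under invented parameters and a simple pacing rule, not the FMFE characterization derived in the thesis.

```python
# Stylized simulation (not the thesis's FMFE derivation): a budget-constrained
# advertiser in repeated second-price auctions bids value / (1 + mu) and adjusts
# the multiplier mu with a simple subgradient step so that average spend tracks
# a per-auction budget. All parameters and the update rule are assumptions.
import random

random.seed(1)
T = 50_000                 # number of auctions in the campaign
budget_per_auction = 0.08  # target average spend per auction
n_competitors = 4
mu, step = 0.0, 0.01

spend, wins = 0.0, 0
for t in range(T):
    value = random.uniform(0, 1)                           # value of the impression
    competing = max(random.uniform(0, 1) for _ in range(n_competitors))
    bid = value / (1.0 + mu)                               # shaded bid
    if bid > competing:                                    # win, pay second-highest bid
        payment = competing
        spend += payment
        wins += 1
    else:
        payment = 0.0
    # Pacing update: raise mu when spending above target, lower it otherwise.
    mu = max(0.0, mu + step * (payment - budget_per_auction))

print(f"multiplier mu = {mu:.3f}, win rate = {wins / T:.3f}, "
      f"avg spend = {spend / T:.3f} (target {budget_per_auction})")
```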
Design and Analysis of Matching and Auction Markets. Saban, Daniela, January 2015.
Auctions and matching mechanisms have become increasingly important tools for allocating scarce resources among competing individuals or firms. Every day, millions of auctions are run for a variety of purposes, ranging from selling valuable art or advertising space on websites to acquiring goods for government use. Every year, matching mechanisms are used to decide the public school assignments of thousands of incoming high school students, who compete to obtain a seat in their most-preferred school. This thesis addresses several questions that arise when designing and analyzing matching and auction markets.
The first part of the dissertation is devoted to matching markets. In Chapter 2, we study markets with indivisible goods where monetary compensations are not possible. Each individual is endowed with an object and has ordinal preferences over all objects. When preferences are strict, the Top-Trading Cycles (TTC) mechanism invented by Gale is Pareto efficient and strategy-proof and finds a core allocation; indeed, it is the only mechanism satisfying these properties. In the extensive literature on this problem since then, the TTC mechanism has been characterized in multiple ways, establishing its central role within the class of all allocation mechanisms. In many real applications, however, individual preferences have subjective indifferences; in this case, no simple adaptation of the TTC mechanism is both Pareto efficient and strategy-proof. We provide a foundation for extending the TTC mechanism to the preference domain with indifferences while guaranteeing Pareto efficiency and strategy-proofness. As a by-product, we establish sufficient conditions for a mechanism (within a broad class of mechanisms) to be strategy-proof and use these conditions to design computationally efficient mechanisms.
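For the strict-preference case, the classic TTC procedure described above can be stated compactly. The sketch below is a generic illustration with made-up agents and preferences, not the extended mechanism with indifferences developed in the chapter.

```python
# A compact sketch of Gale's Top-Trading Cycles mechanism for strict preferences
# in the housing-market setting described above; the example data are invented.
def top_trading_cycles(preferences):
    """preferences[i] is agent i's strict ranking of objects (best first);
    agent i initially owns object i. Returns dict: agent -> assigned object."""
    owner = {i: i for i in range(len(preferences))}   # object -> current owner
    remaining = set(owner)                            # objects still on the market
    assignment = {}
    while remaining:
        # Each remaining agent points to the owner of its best remaining object.
        points_to = {}
        for obj in remaining:
            agent = owner[obj]
            points_to[agent] = next(o for o in preferences[agent] if o in remaining)
        # Walk the pointing graph from any remaining owner; a cycle must exist.
        seen, agent = [], owner[next(iter(remaining))]
        while agent not in seen:
            seen.append(agent)
            agent = owner[points_to[agent]]
        cycle_start = seen.index(agent)
        for a in seen[cycle_start:]:                  # trade along the cycle
            assignment[a] = points_to[a]
            remaining.discard(points_to[a])
    return assignment

# Example: 3 agents, agent i owns object i; strict preferences over objects 0-2.
prefs = [[1, 0, 2], [0, 1, 2], [0, 1, 2]]
print(top_trading_cycles(prefs))   # agents 0 and 1 swap; agent 2 keeps object 2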
In Chapter 3, we study several questions associated with the Random Priority (RP) mechanism from a computational perspective. The RP mechanism is a popular way to allocate objects to agents with strict ordinal preferences over the objects. In this mechanism, an ordering over the agents is selected uniformly at random; the first agent is then allocated his most-preferred object, the second agent is allocated his most-preferred object among the remaining ones, and so on. The outcome of the mechanism is a bi-stochastic matrix in which entry (i,a) represents the probability that agent i is given object a. It is shown that the problem of computing the RP allocation matrix is #P-complete. Furthermore, it is NP-complete to decide whether a given agent i receives a given object a with positive probability under the RP mechanism, whereas it is possible to decide in polynomial time whether agent i receives object a with probability 1. The implications of these results for approximating the RP allocation matrix, as well as for finding constrained Pareto-optimal matchings, are discussed.
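A brute-force computation for a toy instance makes the obstacle concrete: enumerating all n! priority orders and running a serial dictatorship for each reproduces the allocation matrix exactly, but scales factorially, which is consistent with the hardness results above. The preferences below are illustrative only.

```python
# Brute-force sketch of the Random Priority allocation matrix for a toy instance.
from itertools import permutations
from fractions import Fraction

def rp_matrix(preferences):
    """preferences[i]: agent i's strict ranking of objects. Returns the n x n
    matrix P with P[i][a] = probability that agent i receives object a."""
    n = len(preferences)
    matrix = [[Fraction(0)] * n for _ in range(n)]
    orders = list(permutations(range(n)))          # all n! priority orders
    for order in orders:
        taken = set()
        for agent in order:                        # serial dictatorship
            pick = next(o for o in preferences[agent] if o not in taken)
            taken.add(pick)
            matrix[agent][pick] += Fraction(1, len(orders))
    return matrix

prefs = [[0, 1, 2], [0, 2, 1], [1, 0, 2]]          # 3 agents, 3 objects
for agent, row in enumerate(rp_matrix(prefs)):
    print(f"agent {agent}: " + "  ".join(str(p) for p in row))
```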
Chapter 4 focuses on assignment markets (matching markets with transferable utilities), such as labor and housing markets. We consider a two-sided assignment market with agent types and a stochastic structure similar to models used in empirical studies, and characterize the size of the core in such markets. The value generated from a match between a pair of agents is the sum of two random productivity terms, each of which depends only on the type but not the identity of one of the agents, and a third deterministic term driven by the pair of types. We allow the number of agents to grow, keeping the number of agent types fixed. Let n be the number of agents and K be the number of types on the side of the market with more types. We find, under reasonable assumptions, that the relative variation in utility per agent over core outcomes is bounded as O^*(1/n^{1/K}), where polylogarithmic factors have been suppressed. Further, we show that this bound is tight in the worst case, and we provide a tighter bound under more restrictive assumptions.
In the second part of the dissertation, we study auction markets. Chapter 5 considers the problem faced by a procurement agency that runs an auction-type mechanism to construct an assortment of products with posted prices from a set of differentiated products offered by strategic suppliers. Heterogeneous consumers then buy their most-preferred alternative from the assortment as needed. Framework agreements (FAs), widely used in the public sector, take this form; this type of mechanism is also relevant in other contexts, such as the design of medical formularies and group buying. When evaluating the bids, the procurement agency must consider the trade-off between offering a richer menu of products for consumers and offering less variety in the hope of engaging the suppliers in more aggressive price competition. We develop a mechanism design approach to study this problem and provide a characterization of the optimal mechanisms. This characterization allows us to quantify the optimal trade-off between product variety and price competition in terms of suppliers' costs, products' characteristics, and consumers' characteristics. We then use the optimal mechanism as a benchmark to evaluate the performance of the Chilean government procurement agency's current implementation of FAs, used to acquire US$2 billion worth of goods per year. We show how simple modifications to the current mechanism, which increase price competition among close substitutes, can considerably improve performance.
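The competition side of this trade-off can be seen in a deliberately simplified toy calculation, which is not the mechanism characterized in the chapter: with two suppliers whose costs are private and uniformly distributed, awarding a single slot by a second-price procurement auction yields an expected price near the second-lowest cost, whereas listing both products removes head-to-head competition, so each captive consumer segment pays the posted reserve price. The numbers, the captive-consumer assumption, and the auction format are all illustrative; the forgone variety benefit, which the chapter's optimal mechanism weighs against this price effect, is ignored here.

```python
# Toy Monte Carlo for the competition side of the variety-vs-competition
# trade-off (a deliberately simplified setup, not the thesis's mechanism).
# Two suppliers with costs uniform on [0, 1]; the reserve (list) price is 1.0.
#   Narrow assortment: one winner via a second-price procurement auction,
#     so the price paid equals the losing supplier's cost.
#   Broad assortment: both products listed; with captive consumers and no
#     head-to-head competition, each supplier prices at the reserve.
import random

random.seed(0)
reserve = 1.0
trials = 100_000
narrow_total = 0.0
for _ in range(trials):
    c1, c2 = random.uniform(0, 1), random.uniform(0, 1)
    narrow_total += max(c1, c2)     # second-lowest cost of the two bidders

print(f"narrow assortment, competitive price ~ {narrow_total / trials:.3f}")  # about 2/3
print(f"broad assortment, posted price        = {reserve:.3f}")
# The gap is the price paid for extra variety; the thesis quantifies this
# trade-off optimally, also accounting for consumers' preference heterogeneity.
```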