  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Investor protection and liquidity replenishment. / CUHK electronic theses & dissertations collection / ProQuest dissertations and theses

January 2007 (has links)
Chapter 2 provides the literature survey on investor protection and liquidity provision. Work in related studies and the latest developments in these areas are reviewed. / Chapter 3 covers the institutional details of the Hong Kong stock market and the specification of the datasets. The descriptive statistics of the trading activities of the sample companies are also presented. An understanding of these descriptive statistics is useful in choosing the appropriate theoretical model and econometric techniques for the analysis. Apart from using regression analysis to investigate the impacts of transitory volatility on market depth and order-flow composition, additional control measures are also implemented. For instance, matched samples based on market depth, transitory volatility, daily trading volume, etc. are constructed. Statistical tests are employed to investigate the influence of investor protection. / Chapter 4 presents the results of the regression models. Apart from investigating the impacts of transitory volatility on market depth and order-flow composition, this chapter also contributes to the literature by examining how this interaction differs between companies under different regulatory environments. It is found that liquidity replenishment for Hong Kong-based companies is more rapid than for their Chinese counterparts. The results show that companies governed by strict governance regulations provide more liquidity when liquidity is most needed. Additional test results also suggest that this difference is robust to various control criteria. / Chapter 5 gives the summary and conclusions. / In this dissertation, data on the Hong Kong Exchange (HKEx) are employed. The Hong Kong equity market lists companies from distinct investor protection environments. These companies are traded under the same market mechanism even though they have different levels of legal protection for investors, e.g. Hang Seng Index (HSI) constituents versus H-shares/red chips. The HKEx is also a very good example of a pure order-driven market: stock prices are determined by the buy and sell orders submitted by traders, without liquidity providers of last resort. Therefore, the Hong Kong equity market provides a unique opportunity to compare the liquidity replenishment process across diverse regulatory environments within one pure order-driven market trading with the same mechanism and currency. The choice of Hong Kong data is also justified on the grounds of the size of the Hong Kong market and the increasing importance of Hong Kong in worldwide financial markets. / The purpose of this dissertation is to examine the importance of investor protection for the dynamics between liquidity provision and transitory volatility in a pure order-driven market. I posit that environments with better investor protection lead to a more stable ecological system of the supply of and demand for liquidity. / This dissertation has five chapters. Chapter 1 is the introduction that covers the motivation and major findings of the dissertation. / Leung Chung Ho. / "June 2007." / Adviser: Raymond So. / Source: Dissertation Abstracts International, Volume: 69-01, Section: A, page: 0320. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 305-308). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. 
[Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest dissertations and theses, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
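A rough sketch of the kind of depth-on-volatility regression with a regulatory-regime dummy that the abstract describes is given below; the column names (`depth`, `transitory_vol`, `volume`, `hk_based`) and the Newey-West error choice are illustrative assumptions, not details taken from the dissertation.

```python
# Illustrative regression sketch: market depth on transitory volatility, with an
# interaction term asking whether the depth response differs for Hong Kong-based
# companies versus H-shares/red chips. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def depth_regression(df: pd.DataFrame):
    """Assumes columns: depth, transitory_vol, volume, hk_based (1 = HSI constituent)."""
    model = smf.ols("depth ~ transitory_vol * hk_based + volume", data=df)
    # HAC (Newey-West) standard errors guard against serial correlation in daily data.
    return model.fit(cov_type="HAC", cov_kwds={"maxlags": 5})

# Usage with a hypothetical panel:
# results = depth_regression(daily_panel)
# print(results.summary())
```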
72

The environmental Kuznets curve reexamined for CO₂ emissions in Canadian manufacturing industries /

Li, Zhe, 1974- January 2004 (has links)
Recent studies of the environmental Kuznets curve raise questions regarding the relationship between environmental indicators and GDP and the fundamental reasons that explain this relationship. In response, this thesis presents one-sector and two-sector models to analyze the alternative causal relationships between an environmental indicator and GDP at different stages of economic development. These models analyze how economic scale, technology, preferences, and economic structure influence the causality and shape of the relationship. These theoretical studies are followed by two empirical studies. The first tests the causal relationship between CO₂ emissions and GDP in Canadian manufacturing industries. The second explores several factors as the fundamental causes that influence CO₂ emissions in the same industries. Factors such as economic scale, preferences, technological progress, structural change, and energy input are found to be crucial in the determination of CO₂ emissions. The empirical results are positive, but they are subject to data limitations, and the empirical studies can be re-evaluated as more data become available.
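The reduced-form test that underlies environmental Kuznets curve studies regresses log emissions on log GDP and its square; an inverted U corresponds to a positive linear and a negative quadratic coefficient, with turning point ln(GDP*) = -b1/(2·b2). The sketch below is that generic textbook specification with assumed variable names, not the thesis's one-sector or two-sector models.

```python
# Generic EKC regression: ln(CO2) = b0 + b1*ln(GDP) + b2*ln(GDP)^2 + error.
# An inverted-U relationship appears when b1 > 0 and b2 < 0.
import numpy as np
import statsmodels.api as sm

def ekc_fit(co2, gdp):
    ln_gdp = np.log(np.asarray(gdp, dtype=float))
    X = sm.add_constant(np.column_stack([ln_gdp, ln_gdp ** 2]))
    res = sm.OLS(np.log(np.asarray(co2, dtype=float)), X).fit()
    b1, b2 = res.params[1], res.params[2]
    turning_point = np.exp(-b1 / (2.0 * b2)) if b2 < 0 else None  # income level at the peak
    return res, turning_point
```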
73

Equilibrium problem in the transition from a centralized economy to a competitive market

Sango, Tatiana Dmitrievna 01 January 2002 (has links)
Operations Management / (M.Sc. (Operations Research))
74

The environmental Kuznets curve reexamined for CO₂ emissions in Canadian manufacturing industries /

Li, Zhe, 1974- January 2004 (has links)
No description available.
75

An application of Box-Jenkins transfer function analysis to consumption-income relationship in South Africa / N.D. Moroke

Moroke, N.D. January 2005 (has links)
Using a simple linear regression model for estimation could give misleading results about the relationship between Y_t and X_t. Possible problems involve (1) feedback from the output series to the inputs, (2) omitted time-lagged input terms, (3) an autocorrelated disturbance series, and (4) common autocorrelation patterns shared by Y and X that can produce spurious correlations. The primary aim of this study was therefore to use Box-Jenkins transfer function analysis to fit a model that relates petroleum consumption to disposable income. The final transfer function model, z_1t = [(1 - ω_1 B)/(1 - δ_1 B)] B^5 z_t^(x) + (1 - θ_1 B)a_t, significantly described the data. Forecasts generated from this model show that petroleum consumption will hit a record of up to 4.8636 in 2014 if disposable income is augmented. There is 95% confidence that the forecast value of petroleum consumption will lie between 4.5276 and 5.1997 in 2014. / Thesis (M.Com. (Statistics))--North-West University, Mafikeng Campus, 2005
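A rough way to approximate this kind of transfer-function fit in Python is to impose the five-period dead time by lagging the input series and let SARIMAX handle the ARMA noise; this is only an illustrative stand-in, since SARIMAX does not estimate the rational (1 - ω_1 B)/(1 - δ_1 B) lag structure exactly, and the series names here are assumptions.

```python
# Approximate transfer-function fit: consumption on income lagged 5 periods,
# with ARMA(1,1) errors standing in for the model's dynamics and MA(1) noise.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_consumption_model(consumption: pd.Series, income: pd.Series):
    lagged_income = income.shift(5)                      # dead time b = 5
    data = pd.concat([consumption, lagged_income], axis=1).dropna()
    y, x = data.iloc[:, 0], data.iloc[:, 1]
    model = SARIMAX(y, exog=x, order=(1, 0, 1))
    return model.fit(disp=False)

# res = fit_consumption_model(petrol_consumption, disposable_income)  # hypothetical series
# print(res.summary())
# res.forecast(steps=8, exog=future_lagged_income)  # needs future values of the input
```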
76

Função de acoplamento t-Student assimetrica : modelagem de dependencia assimetrica / Skewed t-Student copula function : skewed dependence modelling

Busato, Erick Andrade 12 March 2008 (has links)
Advisor: Luiz Koodi Hotta / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Made available in DSpace on 2018-08-12T14:00:24Z (GMT). Previous issue date: 2008 / Abstract: The Skewed t-Student distribution family, constructed upon the multivariate normal mixture distribution known as the mean-variance mixture, composed with the Inverse-Gamma distribution, has many desirable flexibility properties for many distribution asymmetry structures. These properties are explored by constructing copula functions with asymmetric dependence. In this work the properties and characteristics of the Skewed t-Student distribution and the construction of the respective copula function are studied, presenting different dependence structures that the copula function can generate, including tail dependence asymmetry. Parameter estimation methods are presented for the copula, with applications up to the third dimension. This copula function is used to compose an ARMA-GARCH-Copula model with Skewed t-Student marginal distributions, which is fitted to log-returns of Petroleum and Gasoline prices and log-returns of the AMEX Oil Index, emphasizing the fit in the tails of the return distributions. The model is compared, by means of Value at Risk (VaR) and Akaike's Information Criterion, along with other goodness-of-fit measures, with models based on the Symmetric t-Student Copula. / Master's / Master in Statistics
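The mean-variance mixture construction described in the abstract can be sampled directly: draw W from an Inverse-Gamma distribution, Z from a multivariate normal, and set X = μ + γW + √W·Z, where the vector γ controls the asymmetry (γ = 0 recovers the symmetric t). The sketch below follows that recipe with assumed parameter values; it is a generic sampler, not the author's estimation code, and building the copula itself would additionally require transforming each margin by its skewed-t distribution function.

```python
# Sample from a multivariate skewed t-Student distribution via the normal
# mean-variance mixture: X = mu + gamma*W + sqrt(W)*Z, with
# W ~ InverseGamma(nu/2, nu/2) and Z ~ N(0, Sigma).
import numpy as np

def sample_skewed_t(n, mu, gamma, sigma, nu, seed=0):
    rng = np.random.default_rng(seed)
    d = len(mu)
    # 1/W ~ Gamma(shape=nu/2, rate=nu/2), i.e. scale = 2/nu.
    w = 1.0 / rng.gamma(nu / 2.0, 2.0 / nu, size=n)
    z = rng.multivariate_normal(np.zeros(d), sigma, size=n)
    return mu + np.outer(w, gamma) + np.sqrt(w)[:, None] * z

# x = sample_skewed_t(10_000, mu=np.zeros(2), gamma=np.array([0.8, -0.3]),
#                     sigma=np.array([[1.0, 0.5], [0.5, 1.0]]), nu=6)
```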
77

Une généralisation dynamique de la théorie de la politique économique de Tinbergen: application à un modèle à moyen terme pour la Belgique [A dynamic generalization of Tinbergen's theory of economic policy: application to a medium-term model for Belgium]

Thys-Clément, Françoise January 1975 (has links)
Doctorate in social, political and economic sciences / info:eu-repo/semantics/nonPublished
78

Essays to the application of behavioral economic concepts to the analysis of health behavior

Panidi, Ksenia 27 June 2012 (has links)
In this thesis I apply the concepts of Behavioral Economics to the analysis of individual health care behavior. In the first chapter I provide a theoretical explanation of the link between loss aversion and health anxiety leading to infrequent preventive testing. In the second chapter I analyze this link empirically based on a general-population questionnaire study. In the third chapter I theoretically explore the effects of motivational crowding-in and crowding-out induced by external rewards or self-rewards for tasks that require self-control, such as weight loss or smoking cessation.
Understanding the psychological factors behind the reluctance to use preventive testing is a significant step towards a more efficient health care policy. Some people visit doctors very rarely because they fear receiving negative results from a medical examination, while others readily resort to medical services in order to prevent disease. Recent research in the field of Behavioral Economics suggests that people's preferences may be significantly influenced by the choice of a reference point. In the first chapter I study the link between loss aversion and the frequently observed tendency to avoid useful but negative information (the ostrich effect) in the context of preventive health care choices. I consider a model with reference-dependent utility that characterizes how people choose their health care strategy, namely the frequency of preventive checkups. In this model an individual lives for two periods and faces a trade-off. She chooses between delaying testing until the second period, with the risk of a more costly treatment in the future, and learning a possibly unpleasant diagnosis today, which implies an emotional loss but prevents the illness from developing further. The model shows that high loss aversion decreases the frequency of preventive testing due to the fear of a bad diagnosis. Moreover, I show that under certain conditions an increasing risk of illness discourages testing.
In the second chapter I provide empirical support for the model predictions. I use a questionnaire study of a representative sample of the Dutch population to measure variables such as loss aversion, testing frequency and subjective risk. I consider the undiagnosed, non-symptomatic population and concentrate on medical tests for four illnesses: hypertension, diabetes, chronic lung disease and cancer. To measure loss aversion I employ a sequence of lottery questions formulated in terms of gains and losses of life years with respect to the current subjective life expectancy. To relate this measure of loss aversion to testing frequency I use a two-part modeling approach, which distinguishes between the likelihood of participation in testing and the frequency of tests for those who decide to participate. The main findings confirm that loss aversion, as measured by lottery choices in terms of life expectancy, is significantly and negatively associated with the decision to participate in preventive testing for hypertension, diabetes and lung disease. Higher loss aversion also leads to a lower frequency of self-tests for cancer among women. The effect is more pronounced in magnitude for people with a higher subjective risk of illness.
In the third chapter I explore the phenomena of crowding-out and crowding-in of the motivation to exercise self-control. Various health care choices, such as keeping to a diet, reducing sugar consumption (e.g. in the case of diabetes) or abstaining from smoking, require costly self-control efforts. I study the long-run and short-run influence of external rewards and self-rewards offered to stimulate self-control. In particular, I develop a theoretical model that combines the dual-self approach to the time-inconsistency problem with the principal-agent framework. I show that the psychological property of disappointment aversion (represented as loss aversion with respect to the expected outcome) helps to explain the differences in the effects of rewards when a person does not perfectly know her self-control costs. The model is based on two main assumptions. First, a person learns her abstention costs only if she exerts effort. Second, observing high abstention costs brings disutility due to disappointment (loss) aversion. The model shows that in the absence of an external reward an individual will exercise self-control only when her confidence in successful abstention is high enough. However, observing high abstention costs will discourage the individual from exerting effort in the second period, i.e. will lead to the crowding-out of motivation. On the contrary, choosing zero effort in period 1 does not reveal the self-control costs; this preserves the person's self-confidence, helping her to abstain in the second period. Such crowding-in of motivation is observed for intermediate levels of self-confidence. I compare this situation to the case when an external reward is offered in the first period. The model shows that, given sufficiently low self-confidence, an external reward may lead to abstention in both periods, whereas without it the person would not abstain in any period. However, for intermediate self-confidence, an external reward may lead to the crowding-out of motivation, while for the same level of self-confidence the absence of such a reward may cause crowding-in. Overall, the model generates testable predictions and helps to explain contradictory empirical findings on the motivational effects of different types of rewards. / Doctorate in Economics and Management / info:eu-repo/semantics/nonPublished
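A toy two-period calculation in the spirit of the first chapter's model is sketched below; the payoff structure and parameter names are simplifying assumptions for illustration, not the thesis's actual specification. The agent compares testing now, where a bad diagnosis registers as an emotional loss scaled by the loss-aversion coefficient λ but allows cheap early treatment, against delaying and risking a more expensive treatment later.

```python
# Toy illustration of how loss aversion can discourage preventive testing.
# All parameters and the functional form are assumptions for illustration only.
def test_now_utility(p_ill, lam, emotional_loss, early_cost):
    # Bad news today: emotional loss weighted by loss aversion, plus cheap early treatment.
    return -p_ill * (lam * emotional_loss + early_cost)

def delay_utility(p_ill, late_cost):
    # No news today, but an untreated illness is costlier to treat tomorrow.
    return -p_ill * late_cost

def chooses_to_test(p_ill, lam, emotional_loss=1.0, early_cost=2.0, late_cost=6.0):
    return test_now_utility(p_ill, lam, emotional_loss, early_cost) >= delay_utility(p_ill, late_cost)

print(chooses_to_test(p_ill=0.3, lam=2.0))  # True:  2*1 + 2 < 6, so testing is preferred
print(chooses_to_test(p_ill=0.3, lam=5.0))  # False: 5*1 + 2 > 6, high loss aversion deters testing
```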
79

Two-Sided Matching Markets: Models, Structures, and Algorithms

Zhang, Xuan January 2022 (has links)
Two-sided matching markets are a cornerstone of modern economics. They model a wide range of applications such as ride-sharing, online dating, job positioning, school admissions, and many more. In many of those markets, monetary exchange does not play a role. For instance, the New York City public high school system is free of charge. Thus, the decision on how eighth-graders are assigned to public high schools must be made using concepts of fairness rather than price. There has therefore been a huge amount of literature, mostly in the economics community, defining various concepts of fairness in different settings and showing the existence of matchings that satisfy these fairness conditions. Those concepts have enjoyed widespread success, inside and outside academia. However, finding such matchings is as important as showing their existence. Moreover, it is crucial to have fast (i.e., polynomial-time) algorithms as the size of the markets grows. In many cases, modern algorithmic tools must be employed to tackle the intractability issues arising from the big data era. The aim of my research is to provide mathematically rigorous and provably fast algorithms to find solutions that extend and improve over a well-studied concept of fairness in two-sided markets known as stability. This concept was initially employed by the National Resident Matching Program in assigning medical doctors to hospitals, and is now widely used, for instance, by cities in the US for assigning students to public high schools and by certain refugee agencies to relocate asylum seekers. In the classical model, a stable matching can be found efficiently using the renowned deferred acceptance algorithm by Gale and Shapley. However, stability by itself does not take care of important concerns that arose recently, some of which were featured in national newspapers. Some examples are: how can we make sure students get admitted to the best school they deserve, and how can we enforce diversity in a cohort of students? By building on known and new tools from Mathematical Programming, Combinatorial Optimization, and Order Theory, my goal is to provide fast algorithms to answer questions like those above, and test them on real-world data. In Chapter 1, I introduce the stable matching problem and related concepts, as well as its applications in different markets. In Chapter 2, we investigate two extensions introduced in the framework of school choice that aim at finding an assignment that is more favorable to students -- legal assignments and the Efficiency Adjusted Deferred Acceptance Mechanism (EADAM) -- through the lens of the classical theory of stable matchings. We prove that the set of legal assignments is exactly the set of stable assignments in another instance. Our result implies that essentially all optimization problems over the set of legal assignments can be solved within the same time bound needed for solving them over the set of stable assignments. We also give an algorithm that obtains the assignment output by EADAM. Our algorithm has the same running time as that of the deferred acceptance algorithm, hence largely improving in both theory and practice over known algorithms. In Chapter 3, we introduce a property of distributive lattices, which we term affine representability, and show its role in efficiently solving linear optimization problems over the elements of a distributive lattice, as well as in describing the convex hull of the characteristic vectors of the lattice elements. 
We apply this concept to the stable matching model with path-independent quota-filling choice functions, thus giving efficient algorithms and a compact polyhedral description for this model. Such choice functions can be used to model many complex real-world decision rules that are not captured by the classical model, such as those with diversity concerns. To the best of our knowledge, this model generalizes all those for which similar results were known, and our paper is the first that proposes efficient algorithms for stable matchings with choice functions, beyond classical extensions of the Deferred Acceptance algorithm. In Chapter 4, we study the discovery program (DISC), an affirmative action policy used by the New York City Department of Education (NYC DOE) for specialized high schools, and explore two other affirmative action policies that can be used to minimally modify and improve the discovery program: the minority reserve (MR) and the joint-seat allocation (JSA) mechanism. Although the discovery program is beneficial in increasing the number of admissions for disadvantaged students, our empirical analysis of the student-school matches from the 12 most recent academic years (2005-06 to 2016-17) shows that about 950 in-group blocking pairs were created each year amongst the disadvantaged group of students, impacting about 650 disadvantaged students every year. Moreover, we find that this program usually benefits lower-performing disadvantaged students more than top-performing disadvantaged students (in terms of the ranking of their assigned schools), thus unintentionally creating an incentive to under-perform. On the contrary, we show theoretically, by employing choice functions, that (i) both MR and JSA result in no in-group blocking pairs, and (ii) JSA is weakly group strategy-proof, ensures that at least one disadvantaged student is not worse off, and, when reservation quotas are carefully chosen, guarantees that no disadvantaged student is worse off. We show that each of these properties is not satisfied by DISC. In the general setting, we show that there is no clear winner among the matchings produced by DISC, JSA, and MR from the perspective of disadvantaged students. We do, however, characterize a condition on markets, which we term high competitiveness, under which JSA dominates MR for disadvantaged students. This condition is verified, in particular, in certain markets where the demand for seats exceeds the supply and the performance of disadvantaged students is significantly lower than that of advantaged students. Data from the NYC DOE satisfy the high competitiveness condition, and for this dataset our empirical results corroborate our theoretical predictions, showing the superiority of JSA. We believe that the discovery program, and more generally affirmative action mechanisms, can be changed for the better by implementing the JSA mechanism, providing incentives for top-performing disadvantaged students while preserving many benefits of the affirmative action program.
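For reference, below is a minimal sketch of the student-proposing deferred acceptance algorithm of Gale and Shapley mentioned above, in its classical one-to-one form with complete preference lists. It illustrates only the baseline mechanism; it does not implement legal assignments, EADAM, quota-filling choice functions, or the affirmative action variants studied in the thesis.

```python
# Student-proposing deferred acceptance (Gale-Shapley), one-to-one version.
# Preferences are dicts mapping each agent to a list ordered from most to least
# preferred; lists are assumed complete. Returns a stable matching {student: school}.
def deferred_acceptance(student_prefs, school_prefs):
    rank = {s: {st: i for i, st in enumerate(prefs)} for s, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}   # index of the next school to propose to
    held = {}                                       # school -> student tentatively held
    free = list(student_prefs)
    while free:
        student = free.pop()
        if next_choice[student] >= len(student_prefs[student]):
            continue                                # preference list exhausted; stays unmatched
        school = student_prefs[student][next_choice[student]]
        next_choice[student] += 1
        current = held.get(school)
        if current is None:
            held[school] = student                  # school tentatively accepts
        elif rank[school][student] < rank[school][current]:
            held[school] = student                  # school trades up, rejects the held student
            free.append(current)
        else:
            free.append(student)                    # proposal rejected
    return {student: school for school, student in held.items()}

# Example:
# students = {"a": ["X", "Y"], "b": ["X", "Y"]}
# schools  = {"X": ["b", "a"], "Y": ["a", "b"]}
# deferred_acceptance(students, schools)  # {"b": "X", "a": "Y"}
```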
80

Bayesian Auction Design and Approximation

Jin, Yaonan January 2023 (has links)
We study two classes of problems within Algorithmic Economics: revenue guarantees of simple mechanisms, and social welfare guarantees of auctions. We develop new structural and algorithmic tools for addressing these problems, and obtain the following results: In the 𝑘-unit model, four canonical mechanisms can be classified into (i) the discriminating group, including Myerson Auction and Sequential Posted-Pricing, and (ii) the anonymous group, including Anonymous Reserve and Anonymous Pricing. We prove that any two mechanisms from the same group have an asymptotically tight revenue gap of 1 + Θ(1/√𝑘), while any two mechanisms from different groups have an asymptotically tight revenue gap of Θ(log 𝑘). In the single-item model, we prove a nearly-tight sample complexity of Anonymous Reserve for every value distribution family investigated in the literature: [0, 1]-bounded, [1, 𝐻]-bounded, regular, and monotone hazard rate (MHR). Remarkably, the setting-specific sample complexity poly(𝜖⁻¹) depends on the precision 𝜖 ∈ (0, 1), but not on the number of bidders 𝑛 ≥ 1. Further, in the two bounded-support settings, our algorithm allows correlated value distributions. These are in sharp contrast to the previous (nearly-tight) sample complexity results on Myerson Auction. In the single-item model, we prove that the tight Price of Anarchy and Price of Stability for First-Price Auctions are both PoA = PoS = 1 - 1/e² ≈ 0.8647.
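As a concrete instance of the reserve-price machinery behind Myerson Auction and Anonymous Reserve, the monopoly reserve for a regular value distribution is the zero of the virtual value φ(v) = v - (1 - F(v))/f(v). The sketch below finds that zero numerically for two textbook distributions; the bracketing intervals and distribution choices are illustrative assumptions and are unrelated to the thesis's revenue-gap or sample-complexity results.

```python
# Monopoly reserve r* solving phi(r) = r - (1 - F(r)) / f(r) = 0 by root-finding,
# valid for regular distributions (phi is non-decreasing). For U[0,1], r* = 1/2;
# for Exp(1), r* = 1.
from scipy import stats, optimize

def monopoly_reserve(dist, lo, hi):
    phi = lambda v: v - (1.0 - dist.cdf(v)) / dist.pdf(v)   # virtual value
    return optimize.brentq(phi, lo, hi)

print(monopoly_reserve(stats.uniform(0, 1), 1e-6, 1 - 1e-6))  # ~0.5
print(monopoly_reserve(stats.expon(), 1e-6, 50.0))            # ~1.0
```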
