  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Essays on Corruption and Preferences

Viceisza, Angelino Casio 13 January 2008 (has links)
This dissertation comprises three essays, unified by the theme of experiments on corruption and preferences. The first essay (chapter 2) reports theory-testing experiments on the effect of yardstick competition (a form of government competition) on corruption. The second essay (chapter 3) reports theory-testing experiments on the effect of efficiency and transparency on corruption; it also revisits the yardstick competition question with an alternative experimental design and protocol. Finally, the third essay (chapter 4) reports a theory-testing randomized field experiment that identifies causes and consequences of corruption. The first essay finds the following. Theoretically, the paper derives a main proposition suggesting that noisier institutions give rise to more corrupt behavior and lower voter welfare. Empirically, the paper finds a few key results. First, there is an initial nontrivial proportion of good incumbents in the population; this proportion falls as the experiment session progresses. Second, a large proportion of bad incumbents make theoretically inconsistent choices given the assumptions of the model. Third, overall evidence of yardstick competition is mild. Yardstick competition has little effect as a corruption-taming mechanism when the proportion of good incumbents is low: an institution characterized by few good incumbents leaves little room for yardstick competition, since bad incumbents are likely to be replaced by equally bad incumbents, and incumbents therefore have less incentive to build a reputation. This is also the case in which (1) yardstick competition leads to non-increasing voter welfare and (2) voters are more likely to re-elect bad domestic incumbents. Finally, partitioning the data by gender suggests that males and females exhibit different degrees of learning depending on the payoffs they face.
Furthermore, male voter behavior exhibits mild evidence of yardstick competition when voters face the pooling equilibrium payoff. The second essay finds the following. First, efficiency is an important determinant of corruption. A decrease in efficiency makes it more costly for incumbents to "do the right thing," driving them to divert maximum rents; while voters retaliate slightly, they tend to be worse off. Second, reduced transparency of a particular form (defined as an increase in the riskiness of the distribution of the unit cost) leaves corrupt incumbent behavior unchanged; in particular, if the draw of the unit cost is unfavorable, incumbents tend to be less corrupt. Third, there is strong evidence of yardstick competition. On the incumbent's side, yardstick competition acts as a corruption-taming mechanism if the incumbent is female. On the voter's side, voters are less likely to re-elect the incumbent in the presence of yardstick competition. Specifically, voters pay attention to the difference between the tax signal in their own jurisdiction and that in another; as this difference increases, voters re-elect less, giving true meaning to the concept of "benchmarking." Finally, the analysis sheds light on the role of history and beliefs in behavior. Beliefs are an important determinant of incumbents' choices: if an incumbent perceives a tax signal to be associated with a higher likelihood of re-election, he is more likely to choose it. On the voter's side, history tends to be important; in particular, voters are more likely to vote out incumbents as time progresses. This suggests that incumbents care about tax signals because they provide access to re-election, while voters use the history of taxes and re-elections, in addition to current taxes, to formulate their re-election decisions. The third essay finds the following. First, 19.08% of mail is lost.
Second, mail containing money is more likely to be lost, at a rate of 20.90%, a finding significant at the 10% level. This suggests that loss of mail is systematic (non-random), which implies that this type of corruption is due to strategic behavior rather than plain shirking by mail handlers. Third, loss of mail is non-random across other observables. In particular, middle-income neighborhoods are more likely to experience lost (money) mail. Also, female heads of household in low-income neighborhoods are more likely to experience lost mail, while female heads of household in high-income neighborhoods are much less likely to experience lost (money) mail. Finally, this form of corruption is costly to different stakeholders. The sender of mail bears a direct cost (the value of the mail) and an indirect cost (having to switch carriers once mail has been lost). Corruption is also costly to the intended mail recipient, as discussed above, to the mail company (SERPOST) in terms of lost revenue, and to society in terms of loss of trust. Overall, the findings suggest that public-private partnerships need not increase efficiency by reducing corruption, particularly when the institution remains a monopoly. Increased efficiency in mail delivery is likely to require (1) privatization and (2) competition; otherwise, the monopolist has no incentive to provide better service and loss of mail is likely to persist.
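The significance claim above rests on comparing loss rates between mail with and without money. A minimal sketch of such a comparison is a one-sided two-proportion z-test; the counts below are hypothetical illustrations chosen to roughly match the quoted rates, not the dissertation's actual data:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """One-sided z-test that group 1's loss rate exceeds group 2's."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - NormalDist().cdf(z)  # one-sided p-value
    return z, p_value

# Hypothetical counts: 134 of 641 money letters lost (~20.9%)
# versus 111 of 641 non-money letters lost (~17.3%).
z, p = two_proportion_ztest(134, 641, 111, 641)
```

With counts of this order the p-value lands between 0.05 and 0.10, which is how a difference can be "significant at the 10% level" without being significant at 5%.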
12

Statistical Methods for Incomplete Covariates and Two-Phase Designs

McIsaac, Michael 18 December 2012 (has links)
Incomplete data is a pervasive problem in health research, and as a result statistical methods enabling inference based on partial information play a critical role. This thesis explores estimation of regression coefficients and associated inferences when variables are incompletely observed. In the later chapters, we focus primarily on settings with incomplete covariate data which arise by design, as in studies with two-phase sampling schemes, as opposed to incomplete data which arise due to events beyond the control of the scientist. We consider the problem in which "inexpensive" auxiliary information can be used to inform the selection of individuals for collection of data on the "expensive" covariate. In particular, we explore how parameter estimation relates to the choice of sampling scheme. Efficient sampling designs are defined by choosing the optimal sampling criteria within a particular class of selection models under a two-phase framework. We compare the efficiency of these optimal designs to simple random sampling and balanced sampling designs under a variety of frameworks for inference. As a prelude to the work on two-phase designs, we first review and study issues related to incomplete data arising due to chance. In Chapter 2, we discuss several models by which missing data can arise, with an emphasis on issues in clinical trials. The likelihood function is used as a basis for discussing different missing data mechanisms for incomplete responses in short-term and longitudinal studies, as well as for missing covariates. We briefly discuss common ad hoc strategies for dealing with incomplete data, such as complete-case analyses and naive methods of imputation, and we review more broadly appropriate approaches for dealing with incomplete data in terms of asymptotic and empirical frequency properties. These methods include the EM algorithm, multiple imputation, and inverse probability weighted estimating equations. 
Simulation studies are reported which demonstrate how to implement these procedures and examine performance empirically. We further explore the asymptotic bias of these estimators when the nature of the missing data mechanism is misspecified. We consider specific types of model misspecification in methods designed to account for the missingness and compare the limiting values of the resulting estimators. In Chapter 3, we focus on methods for two-phase studies in which covariates are incomplete by design. In the second phase of the two-phase study, subject to correct specification of key models, optimal sub-sampling probabilities can be chosen to minimise the asymptotic variance of the resulting estimator. These optimal phase-II sampling designs are derived and the empirical and asymptotic relative efficiencies resulting from these designs are compared to those from simple random sampling and balanced sampling designs. We further examine the effect on efficiency of utilising external pilot data to estimate parameters needed for derivation of optimal designs, and we explore the sensitivity of these optimal sampling designs to misspecification of preliminary parameter estimates and to the misspecification of the covariate model at the design stage. Designs which are optimal for analyses based on inverse probability weighted estimating equations are shown to result in efficiency gains for several different methods of analysis and are shown to be relatively robust to misspecification of the parameters or models used to derive the optimal designs. Furthermore, these optimal designs for inverse probability weighted estimating equations are shown to be well behaved when necessary design parameters are estimated using relatively small external pilot studies. We also consider efficient two-phase designs explicitly in the context of studies involving clustered and longitudinal responses. Model-based methods are discussed for estimation and inference. 
Asymptotic results are used to derive optimal sampling designs and the relative efficiencies of these optimal designs are again compared with simple random sampling and balanced sampling designs. In this more complex setting, balanced sampling designs are demonstrated to be inefficient and it is not obvious when balanced sampling will offer greater efficiency than a simple random sampling design. We explore the relative efficiency of phase-II sampling designs based on increasing amounts of information in the longitudinal responses and show that the balanced design may become less efficient when more data is available at the design stage. In contrast, the optimal design is able to exploit additional information to increase efficiency whenever more data is available at phase-I. In Chapter 4, we consider an innovative adaptive two-phase design which breaks the phase-II sampling into a phase-IIa sample obtained by a balanced or proportional sampling strategy, and a phase-IIb sample collected according to an optimal sampling design based on the data in phases I and IIa. This approach exploits the previously established robustness of optimal inverse probability weighted designs to overcome the difficulties associated with the fact that derivations of optimal designs require a priori knowledge of parameters. The efficiency of this hybrid design is compared to those of the proportional and balanced sampling designs, and to the efficiency of the true optimal design, in a variety of settings. The efficiency gains of this adaptive two-phase design are particularly apparent in the setting involving clustered response data, and it is natural to consider this approach in settings with complex models for which it is difficult to even speculate on suitable parameter values at the design stage.
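The inverse probability weighted estimating equations discussed above can be illustrated with a small sketch: a two-phase setting in which a cheap stratum variable drives the phase-II selection probabilities, and Horvitz-Thompson weighting corrects the selection bias that the unequal sampling induces. The population, strata, and sampling probabilities below are hypothetical illustrations, not the thesis's designs:

```python
import random

random.seed(1)

# Hypothetical population: a cheap stratum indicator z observed for everyone
# (phase I) and an "expensive" covariate x observed only for sampled units.
N = 10_000
population = [(z, random.gauss(2.0 if z else 1.0, 1.0))
              for z in (random.choice([0, 1]) for _ in range(N))]

# Phase II: select units with stratum-specific probabilities.
pi = {0: 0.1, 1: 0.3}
sample = [(z, x, pi[z]) for (z, x) in population if random.random() < pi[z]]

# Horvitz-Thompson / inverse probability weighted estimate of E[x]:
ipw_mean = sum(x / p for (_, x, p) in sample) / N

# The unweighted sample mean is biased toward the over-sampled z = 1 stratum.
naive_mean = sum(x for (_, x, _) in sample) / len(sample)
```

The true mean here is 1.5; the weighted estimate recovers it, while the naive mean is pulled toward the heavily sampled stratum. Choosing the probabilities `pi` to minimise the variance of such weighted estimators is exactly the optimal-design question the thesis studies.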
13

The pricing of CDO based on Incomplete Information Credit model

Lien, Wei-chih 21 June 2006 (has links)
Credit risk and market risk have been explored intensively, and reliable models of both have been developed progressively. This study tries to find a method for pricing CDOs (Collateralized Debt Obligations) based on an incomplete-information credit model. Among the various approaches to CDO valuation, the most widely accepted is the copula approach, which is considered well suited to describing default correlation. Combined with Monte Carlo simulation, it can price CDOs effectively.
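The copula-plus-Monte-Carlo approach mentioned above can be sketched with the standard one-factor Gaussian copula (rather than the thesis's incomplete-information model): each name defaults when a factor-driven latent variable falls below a barrier, and pool losses are mapped into a tranche. All parameters below (default probability, correlation, recovery, attachment points) are illustrative assumptions, not calibrated values:

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(0)

def tranche_expected_loss(n_names=100, p_default=0.02, rho=0.3, recovery=0.4,
                          attach=0.03, detach=0.07, n_paths=4_000):
    """Expected loss, as a fraction of tranche notional, on an
    (attach, detach) tranche under a one-factor Gaussian copula."""
    barrier = NormalDist().inv_cdf(p_default)  # per-name default barrier
    width = detach - attach
    total = 0.0
    for _ in range(n_paths):
        m = random.gauss(0.0, 1.0)             # common market factor
        defaults = 0
        for _ in range(n_names):
            eps = random.gauss(0.0, 1.0)       # idiosyncratic factor
            if sqrt(rho) * m + sqrt(1.0 - rho) * eps < barrier:
                defaults += 1
        pool_loss = defaults / n_names * (1.0 - recovery)
        total += min(max(pool_loss - attach, 0.0), width) / width
    return total / n_paths

el = tranche_expected_loss()  # Monte Carlo estimate; all inputs illustrative
```

The common factor `m` is what induces default correlation: a bad draw of `m` raises every name's conditional default probability at once, producing the clustered losses that drive mezzanine tranche value.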
14

The relationship between (16,6,3)-balanced incomplete block designs and (25,12) self-orthogonal codes

Nasr Esfahani, Navid 21 August 2014 (has links)
Balanced Incomplete Block Designs and Binary Linear Codes are two classes of combinatorial designs. Owing to the vast application of codes in communication, the field of coding theory has progressed more rapidly than many other fields of combinatorial design. Block designs, on the other hand, are applicable in statistics and in designing experiments in fields such as biology, medicine, and agriculture. Finding the relationship between instances of these two designs can be useful in constructing instances of one from the other. Applying the properties of codes to corresponding instances of Balanced Incomplete Block Designs has previously been used to show the non-existence of some designs. In this research, the relationship between (16,6,3)-designs and (25,12) codes was determined.
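The incidence-matrix property that makes such design-to-code correspondences work can be checked directly. The sketch below uses the small (7,3,1)-design (the Fano plane) rather than the (16,6,3)-design studied in the thesis, simply because it fits in a few lines; the identity it verifies, N Nᵀ = (r − λ)I + λJ, is the general defining condition for a BIBD:

```python
# Fano plane: the unique (7,3,1)-BIBD, blocks written out explicitly.
blocks = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]
v, b, r, k, lam = 7, 7, 3, 3, 1

# Incidence matrix N: N[i][j] = 1 iff point i lies in block j.
N = [[int(i in blocks[j]) for j in range(b)] for i in range(v)]

# Verify N N^T = (r - lam) I + lam J: each diagonal entry equals the
# replication number r; each off-diagonal entry equals lam.
for i1 in range(v):
    for i2 in range(v):
        inner = sum(N[i1][j] * N[i2][j] for j in range(b))
        assert inner == (r if i1 == i2 else lam)
```

Treating the rows (or blocks) of such an incidence matrix as binary codewords is the bridge to coding theory: linear-algebraic constraints on the resulting code translate back into existence conditions on the design.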
16

Incomplete contracts and behavioural aspects – a case study in the construction and IT industries

Tong, Fei Carlo 05 November 2017 (has links)
Contracts capture an agreement between two parties to exchange a resource in the future (ex ante); however, the future is not certain. Only after the event has happened can the two parties compare the resources they received to what they expected (ex post). Entering into a contract with unknowns gives rise to incomplete contracts theory, the focus of which includes the study of human behaviour. Relational contracting is currently being studied as a method of reducing transaction costs and the incompleteness of contracts. Using case studies, this research aimed to establish why certain contractual projects run over budget. Overruns are often related to a variation agreement that is incomplete and open to interpretation. Understanding what the issues are and how to mitigate contractual risks was thus a key focus of this research. The research examined two industries: construction and IT. Across the case studies, 16 interviews were conducted and 12 contracts reviewed. Disputes were the least of the parties' concerns, as the parties find solutions to address issues not considered when drafting contracts. Industry-specific experience and knowledge is needed, however, to mitigate some unknown contractual risks. Relational contracting was also very evident in resolving issues outside of a contract. Further studies into ancillary contracts will reveal more insight into behavioural and relational contracting. / Dissertation (MBA)--Gordon Institute of Business Science, University of Pretoria, 2018. / Gordon Institute of Business Science (GIBS) / MBA / Unrestricted
17

Bayesian Analyses of Mediational Models for Survival Outcome

Chen, Chen 23 September 2011 (has links)
No description available.
18

Inter-block analysis of incomplete block designs

Beazley, Charles Coffin 26 April 2010 (has links)
By a study of the duality relationships of a large number of balanced and partially balanced incomplete block designs, certain ones have been found which lend themselves nicely to inter-block analysis. Besides facilitating this analysis, these designs make possible the use of a new method for studying the relative variability of the inter- and intra-block error. These "nice" designs, which are called twice balanced, have the property that their duals are also balanced or partially balanced. For the partially balanced designs, the investigation has been confined to those with two associate classes. Some methods are shown which may be used to prove that a dual is twice balanced. The twice balanced designs which have been found are catalogued, showing the plan numbers of the design and the dual, the necessary identifying parameters, or both. The proofs used in verifying the designs to be twice balanced are also indicated. Finally, there is an illustrative example making use of the methods and tables introduced in this paper. It includes a new computing method for finding estimates of the treatment effects in a mixed model experiment. / Master of Science
19

Minimal Sufficient Statistics for Incomplete Block Designs With Interaction Under an Eisenhart Model III

Kapadia, C. H., Kvanli, Alan H., Lee, Kwan R. 01 January 1988 (has links)
The purpose of this paper is to derive minimal sufficient statistics for the balanced incomplete block design and the group divisible partially balanced incomplete block design when the Eisenhart Model III (mixed model) is assumed. The results are identical to Hultquist and Graybill's (1965) and Hirotsu's (1965) for the same model without interaction, except for the addition of a statistic, $\sum_{ij} Y_{ij\cdot}^{2}$.
20

Optimal timing decisions in financial markets

Vannestål, Martin January 2017 (has links)
This thesis consists of an introduction and five articles. A common theme in all the articles is optimal timing when acting on a financial market. The main topics are optimal selling of an asset, optimal exercising of an American option, optimal stopping games and optimal strategies in trend following trading. In all the articles, we consider a financial market different from the standard Black-Scholes market. In two of the articles this difference consists in allowing for jumps of the underlying process. In the other three, the difference is that we have incomplete information about the drift of the underlying process. This is a natural assumption in many situations, including the case of a true buyer of an American option, trading in a market which exhibits trends, and optimal liquidation of an asset in the presence of a bubble. These examples are all addressed in this thesis.
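Incomplete information about the drift, as described above, is typically handled by filtering: with a two-point prior on the drift, the posterior probability of the high drift is a simple functional of the observed path, and stopping rules in this literature are usually formulated in terms of that posterior. A minimal sketch of such a filter, under assumed parameters rather than the thesis's models:

```python
from math import exp

def drift_posterior(increments, dt, mu_hi, mu_lo, prior_hi=0.5, sigma=1.0):
    """Posterior probability that X_t = mu*t + sigma*W_t has drift mu_hi
    rather than mu_lo, given equally spaced increments dx observed at
    spacing dt and a two-point prior. All values here are illustrative."""
    log_lr = 0.0
    for dx in increments:
        # Log likelihood ratio of one Gaussian increment under the two drifts.
        log_lr += ((mu_hi - mu_lo) * dx
                   - 0.5 * (mu_hi ** 2 - mu_lo ** 2) * dt) / sigma ** 2
    lr = exp(log_lr)
    return prior_hi * lr / (prior_hi * lr + 1.0 - prior_hi)
```

A steadily rising path drives the posterior toward one, a falling path toward zero; an optimal selling or exercise rule then acts when this posterior (or the price together with it) crosses a computed boundary.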
