61 |
Labour market effects of immigration: evidence from Canada. Islam, Asadul. 15 August 2003.
Immigration, the subject of repeated policy debates throughout the last two decades, has once again assumed a central position on the policy agenda. The debate has become more intense in Canada in recent years; the fear is over the potential job displacement and unemployment of Canadian-born workers, and the consequences for the Canadian economy. The fact that recent immigrants' incomes have been falling relative to those of their older counterparts helped trigger the current policy debate. This thesis addresses the debate by providing an objective assessment of the displacement of Canadian-born workers due to immigration and of unemployment-immigration dynamics over the past 40 years of immigration to Canada. The thesis has two objectives.

Objective I: Job displacement effects of immigration on the Canadian-born. First, I address the job displacement effects on Canadian-born workers due to exogenous shifts in immigration flows. It is therefore necessary to consider the substitutability or complementarity between Canadian-born and immigrant workers. This is examined by estimating the set of wage earnings equations derived from a Generalized Leontief production function. The model specification abstracts from the role of capital by assuming that labor and capital are separable in production. I then derive the iterated Zellner-efficient estimator (IZEF), which is numerically equivalent to the maximum likelihood estimator, from the set of wage earnings equations. The degree of substitutability or complementarity is then calculated using Hicks's (as opposed to Allen's) elasticity of complementarity. The estimated Hicksian elasticities suggest that, in the aggregate, there is no displacement of Canadian-born workers by immigration, although there is some displacement by industry.

Objective II: Unemployment and immigration dynamics. Next, I consider immigrants not only as additions to the existing labor force but also as a source of job creation through their demand for goods and services. Here immigration is treated as endogenous, and I model the dynamics of unemployment and immigration. As a first step, statistical causality between immigration and unemployment is investigated. Because causality tests can suffer from an omitted-variable problem, I construct a theoretical labor market model and use cointegration analysis to determine the long-run relationship among the unemployment rate, the immigration level, the real wage, and real GDP. I then estimate the short-run dynamics with a specification in difference form in which the parameters of the cointegrating vectors from the first step are fixed and entered as an error correction mechanism. The causality test finds no evidence of a significant effect of Canadian unemployment on immigration. The estimation of the long-run and short-run parameters indicates that no statistically significant relationship exists between unemployment and immigration.
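As a hedged illustration of the estimation framework outlined above (the thesis's exact specification may differ), a Generalized Leontief production function over labor types, the implied wage equations, and the Hicks elasticity of complementarity can be sketched as:

\[
Y = \sum_i \sum_j \beta_{ij}\,(x_i x_j)^{1/2}, \qquad \beta_{ij} = \beta_{ji},
\]
\[
w_i = \frac{\partial Y}{\partial x_i} = \sum_j \beta_{ij}\left(\frac{x_j}{x_i}\right)^{1/2}, \qquad
c_{ij} = \frac{Y\,\partial^2 Y / \partial x_i \partial x_j}{(\partial Y/\partial x_i)(\partial Y/\partial x_j)},
\]

where the x_i are quantities of Canadian-born and immigrant labor, the w_i equations form the system estimated jointly (e.g., by IZEF), and c_ij > 0 indicates complementarity while c_ij < 0 indicates substitutability between groups i and j.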
|
63 |
Numerical Stability in Linear Programming and Semidefinite Programming. Wei, Hua. January 2006.
We study numerical stability for interior-point methods applied to linear programming (LP) and semidefinite programming (SDP). We analyze the difficulties inherent in current methods and present robust algorithms.

We start with an error bound analysis of the search directions for the normal equation approach for LP. Our error analysis explains the surprising fact that ill-conditioning is not a significant problem for the normal equation system. We also explain why most popular LP solvers have a default stopping tolerance of only 10⁻⁸ when the machine precision on a 32-bit computer is approximately 10⁻¹⁶.

We then propose a simple alternative to the normal equation based interior-point method. This approach has better numerical stability than the normal equation based method. Although our approach is not competitive in terms of CPU time on the NETLIB problem set, we do obtain higher accuracy. In addition, we obtain significantly smaller CPU times than the normal equation based direct solver when we solve well-conditioned, huge, sparse problems using our iterative linear solver. Additional techniques discussed are crossover, a purification step, and no backtracking.

Finally, we present an algorithm to construct SDP problem instances with prescribed strict complementarity gaps. We then introduce two measures of strict complementarity gaps. We show empirically that: (i) these measures can be evaluated accurately; (ii) the size of the strict complementarity gap correlates well with the number of iterations of the SDPT3 solver, as well as with the local asymptotic convergence rate; and (iii) large strict complementarity gaps, coupled with the failure of Slater's condition, correlate well with loss of accuracy in the solutions. In addition, the numerical tests show no correlation between the strict complementarity gaps and the geometrical measure used in [31], or Renegar's condition number.
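As a hedged sketch of where the normal equation system arises (a generic textbook formulation, not quoted from the thesis), a primal-dual interior-point step for LP solves the Newton system and then eliminates Δx and Δs:

\[
A\,\Delta x = r_p, \qquad
A^{\mathsf T}\Delta y + \Delta s = r_d, \qquad
S\,\Delta x + X\,\Delta s = r_c
\quad\Longrightarrow\quad
\bigl(A\,X S^{-1} A^{\mathsf T}\bigr)\,\Delta y = r,
\]

where X = diag(x) and S = diag(s). Near optimality the scaling matrix X S⁻¹ contains both very large and very small entries, so A X S⁻¹ Aᵀ becomes highly ill-conditioned; the error bound analysis described above concerns how much of this ill-conditioning actually contaminates the computed search direction.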
|
64 |
The Physical Underpinning of Security Proofs for Quantum Key Distribution. Boileau, Jean Christian. 25 September 2007.
The dawn of quantum technology unveils a plethora of new possibilities and challenges in the world of information technology, one of which is the quest for secure information transmission. A breakthrough in classical algorithms or the development of a quantum computer could threaten the security of messages encoded using public key cryptosystems based on one-way functions, such as RSA. Quantum key distribution (QKD) offers an unconditionally secure alternative to such schemes, even with the advent of a quantum computer, as it relies not on mathematical or technological assumptions but on the universality of the laws of quantum mechanics.
Physical concepts associated with quantum mechanics, like the uncertainty principle and entanglement, paved the way to the first successful security proof for QKD. Ever since, progress in security proofs for QKD has been remarkable, but the connection between entanglement distillation and the uncertainty principle has remained hidden under a heavy mathematical burden. Our main goal is to dig the physics out of the recent advances in security proofs for QKD. By introducing an alternative definition of a private state, which elaborates on the ideas of Mayers and Koashi, we explain how the security of all QKD protocols follows from an entropic uncertainty principle. We show explicitly how a privacy amplification protocol can be reduced to a private state distillation protocol constructed from our observations about the uncertainty principle. We also derive a generic security proof for one-way permutation-invariant QKD protocols. Considering collective attacks, we achieve the same secret key generation rate as the Devetak-Winter bound. Generalizing an observation by Kraus, Branciard and Renner, we provide improved secret key generation rates by considering a different symmetrization. In certain situations, we argue that Azuma's inequality can simplify the security proof considerably, and we explain the implications, at the level of security, of reducing a QKD protocol to an entanglement distillation or a more general private state distillation protocol.
In a different direction, we introduce a QKD protocol with multiple-photon encoding that can be implemented without a shared reference frame. We prove the unconditional security of this protocol, and discuss some features of the efficiency of multiple-photon QKD schemes in general.
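For reference, the Devetak-Winter bound for collective attacks mentioned above is commonly stated (in a standard form quoted here for context, not taken from the thesis) as

\[
r \;\ge\; I(X\!:\!Y) - \chi(X\!:\!E) \;=\; S(X|E) - H(X|Y),
\]

where X is Alice's raw key variable, Y is Bob's measurement outcome, E is Eve's quantum system, χ is the Holevo quantity, S(X|E) is the conditional von Neumann entropy bounding the output of privacy amplification, and H(X|Y) is the conditional Shannon entropy consumed by error correction.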
|
67 |
Inter-temporal Effect of Technological Capabilities on Firm Performance: A Longitudinal Study of the U.S. Computed Tomography Industry (1972-2002). Chopra, Ankush. 07 December 2007.
In this dissertation, I investigate how capabilities drive firm performance as an industry evolves. I show that, in spite of significant research on firm capabilities, we do not understand whether technological capabilities continue to drive firm performance as an industry evolves or whether they become weaker drivers of performance over time. The question is also important to managers because its answer would inform whether, in a given context, firms should invest in building technological capabilities.
I predict and find that in low-complementarity contexts, as technology advances, customer demand for greater product performance becomes satiated. As a result, customers neither pay for greater product performance nor buy higher-performing products prematurely. As firms lose these two levers by which technological capabilities drive performance, technological capabilities become weaker drivers of performance. I also propose that when technological capabilities become weaker drivers of performance, firm performance becomes more persistent, in the sense that past performance drives future performance.
Through a rigorous quantitative analysis, complemented by an in-depth qualitative analysis of the US CT scanner industry from its inception, I find support for the theory. Using robust regression and multinomial logistic regression models, I find that as technology in an industry advances, technological capabilities become weaker drivers of firm performance.
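A minimal sketch of the kind of moderated-regression specification this theory implies (illustrative notation only, not the dissertation's exact model) is

\[
\text{Performance}_{it} = \beta_0 + \beta_1\,\text{TechCap}_{it} + \beta_2\,\text{TechAdvance}_{t} + \beta_3\,(\text{TechCap}_{it} \times \text{TechAdvance}_{t}) + \varepsilon_{it},
\]

where a negative and significant β₃ is consistent with technological capabilities becoming weaker drivers of firm performance as the industry's technology advances.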
I discuss the shortcomings of this research and the potential for future research. I also discuss the implications of this research for capability theory, the resource-based view, and existing explanations of industry shakeouts.
|
68 |
Information integration in the capital projects industry: interaction effects and benefits of complementary practices. Kang, Young Cheol. 09 November 2010.
Information integration is considered a source of competitive advantage in the capital projects industry. While it has been broadly implemented, many organizations appear to have achieved only limited benefits from their efforts. This dissertation investigates the complementarity relationship between information technology (IT) use and project execution processes and practices. It asserts that, rather than directly improving an organization's competitive advantage, IT serves as a means of improving existing processes and practices, which in turn improve the bottom line. Building on this foundation, the dissertation expands these findings to document the mechanisms by which various resources influence the complementarity relationship.
Topics within this dissertation are investigated with both quantitative and qualitative approaches. Using capital projects data, evidence of complementarity between general IT use and best practices is established quantitatively. The benefits of complementarity in terms of cost, schedule, and rework project performance measures are documented. Data analyses show that greater use of IT is associated with greater use of best practices, and projects that intensively implement both IT and best practices tend to show superior project performance. Furthermore, by investigating the use of a specific technology, this dissertation presents a thorough statistical analysis showing that IT use affects the use of practices, which together support improved project performance. Next, this dissertation lists the organizational resources that may affect complementarity. Using sixteen actual information integration cases, the major resources consistently affecting complementarity are identified. Seven case studies illustrate how these resources are managed; the case studies are also used in discussing the interaction of IT use and processes that generates complementarity.
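As a hedged sketch of the type of complementarity test described above (the variable names and data below are hypothetical, not the dissertation's), one can regress project performance on IT use, best-practice use, and their interaction; a significant positive interaction coefficient is the usual statistical signature of complementarity:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical project-level data: an IT use index, a best-practice use index,
# and a project performance measure (sign-adjusted so higher is better).
# Real data would come from capital projects benchmarking records.
df = pd.DataFrame({
    "it_use":        [0.2, 0.5, 0.7, 0.3, 0.9, 0.6, 0.4, 0.8],
    "best_practice": [0.3, 0.4, 0.8, 0.2, 0.9, 0.7, 0.5, 0.6],
    "performance":   [0.1, 0.3, 0.9, 0.1, 1.2, 0.8, 0.4, 0.9],
})

# performance ~ it_use + best_practice + it_use:best_practice
model = smf.ols("performance ~ it_use * best_practice", data=df).fit()
print(model.summary())

# Complementarity is suggested when the interaction coefficient
# (it_use:best_practice) is positive and statistically significant.
```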
The primary contribution of this research is to provide quantitative evidence of IT's indirect impact on construction project performance via practices. A broad discussion of the range of resources affecting complementarity, identifying the major ones in the capital projects industry, is another contribution of this research.
|
69 |
Complementarity in conflict: law, politics and the catalysing effect of the International Criminal Court in Uganda and Sudan. Nouwen, Sarah Maria Heiltjen. January 2010.
No description available.
|
70 |
The Computational Power of Extended Watson-Crick L Systems. Sears, David. 07 December 2010.
Lindenmayer (L) systems form a class of interesting computational formalisms due to their parallel nature, the various circumstances under which they operate, the restrictions imposed on language acceptance, and other attributes. These systems have been extensively studied in the formal languages literature. In the past decade a new type of Lindenmayer system has been proposed: Watson-Crick Lindenmayer systems. These systems are essentially a marriage between developmental systems and DNA computing. At their heart they are Lindenmayer systems augmented with a complementarity relation among the elements of the system, just as the base pairs of DNA strands can be complementary with respect to one another. When conditions and a mechanism for 'switching' the state of a computation to its complementary version are provided, these systems can become surprisingly more powerful than the L systems that form their backbone. This dissertation explores the computational power of new variants of Watson-Crick L systems. It is found that many of these systems are computationally complete. These investigations differ from prior ones in that the systems under consideration have extended alphabets, and regular triggers for complementation are usually considered, as opposed to the context-free triggers investigated in previous works. (Master's thesis, Computing, Queen's University, December 2010.)
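To make the 'complementation switch' concrete, here is a minimal toy sketch in Python; the alphabet, productions, complementarity map, and majority-style trigger are illustrative assumptions only, not the dissertation's actual systems (which use extended alphabets and regular trigger languages):

```python
# Toy sketch of one derivation step in a Watson-Crick-style D0L system.
# Alphabet {a, b} with "barred" complements {A, B}; the productions and
# the majority trigger below are illustrative assumptions only.

PRODUCTIONS = {"a": "bB", "b": "B", "A": "a", "B": "Ab"}   # applied in parallel
COMPLEMENT = {"a": "A", "A": "a", "b": "B", "B": "b"}      # Watson-Crick pairing

def trigger(word: str) -> bool:
    """Fire when barred (uppercase) letters outnumber unbarred ones."""
    barred = sum(1 for c in word if c.isupper())
    return barred > len(word) - barred

def step(word: str) -> str:
    """Rewrite every letter in parallel, then switch to the complementary
    word whenever the trigger condition holds."""
    rewritten = "".join(PRODUCTIONS[c] for c in word)
    if trigger(rewritten):
        rewritten = "".join(COMPLEMENT[c] for c in rewritten)
    return rewritten

word = "a"
for _ in range(5):
    word = step(word)
    print(word)   # e.g. bB, baB, ... with occasional complementation switches
```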
|