31

Fast Analysis of Scattering by Arbitrarily Shaped Three-Dimensional Objects Using the Precorrected-FFT Method

Nie, Xiaochun, Li, Le-Wei 01 1900 (has links)
This paper presents an accurate and efficient method-of-moments solution of the electric-field integral equation (EFIE) for large, three-dimensional, arbitrarily shaped objects. In this method, the generalized conjugate residual (GCR) method is used to solve the matrix equation iteratively, and the precorrected-FFT technique is employed to accelerate the matrix-vector multiplication at each iteration. The precorrected-FFT method eliminates the need to generate and store the usual square impedance matrix, leading to a great reduction in memory requirements and execution time. It is at best an O(N log N) algorithm and can be adapted to a wide variety of systems with different Green's functions without excessive effort. Numerical results are presented to demonstrate the accuracy and computational efficiency of the technique. / Singapore-MIT Alliance (SMA)
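The core idea — replacing the O(N^2) dense matrix-vector product with an FFT-based one inside an iterative solver — can be illustrated in a simplified 1-D analogue. The sketch below is an illustration under stated assumptions, not the paper's implementation: a toy translation-invariant kernel on a uniform grid stands in for the precorrected grid interactions, paired with a textbook GCR iteration.

```python
import numpy as np

def fft_matvec(kern_col, v):
    """Multiply the symmetric Toeplitz matrix A[i, j] = k(|i - j|) by v in
    O(N log N) via circulant embedding, without ever forming A."""
    n = len(v)
    # First column of the 2N-point circulant that embeds the Toeplitz matrix.
    c = np.concatenate([kern_col, [0.0], kern_col[:0:-1]])
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(v, 2 * n))[:n].real

def gcr(matvec, b, tol=1e-8, maxit=None):
    """Textbook generalized conjugate residual (GCR) iteration."""
    maxit = maxit or len(b)
    x, r = np.zeros_like(b), b.copy()
    ps, aps = [], []
    for _ in range(maxit):
        p, ap = r.copy(), matvec(r)
        for pi, api in zip(ps, aps):      # orthogonalize A*p against history
            beta = np.dot(api, ap) / np.dot(api, api)
            p, ap = p - beta * pi, ap - beta * api
        alpha = np.dot(ap, r) / np.dot(ap, ap)
        x, r = x + alpha * p, r - alpha * ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        ps.append(p); aps.append(ap)
    return x

n = 512
kern_col = 1.0 / np.sqrt(np.arange(n) ** 2 + 4.0)   # toy smoothed 1/R kernel
b = np.random.default_rng(0).standard_normal(n)
x = gcr(lambda v: fft_matvec(kern_col, v), b)
A = kern_col[np.abs(np.arange(n)[:, None] - np.arange(n))]  # dense check only
print(np.allclose(A @ x, b, atol=1e-6))              # True
```

The dense matrix `A` is built here only to verify the answer; the point of the method is that the solve itself never touches it.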
32

Implementing method of moments on a GPGPU using Nvidia CUDA

Virk, Bikram 12 April 2010 (has links)
This thesis concentrates on the algorithmic aspects of the Method of Moments (MoM) and Locally Corrected Nyström (LCN) numerical methods in electromagnetics. The data dependencies in each step of the algorithms are analyzed to implement a parallel version that can harness the processing power of a General Purpose Graphics Processing Unit (GPGPU). The GPGPU programming model provided by NVIDIA's Compute Unified Device Architecture (CUDA) is described, introducing the software tools that enable C code to run on the GPGPU. Optimizations such as partial updates at every iteration, inter-block synchronization, and use of shared memory yield an overall speedup of approximately 10x. The study also brings out the strengths and weaknesses of implementing methods such as Crout's LU decomposition and triangular matrix inversion on a GPGPU architecture. The results suggest future directions of study for different algorithms and their effectiveness in a parallel processing environment, and the performance data collected show how different features of the GPGPU architecture can be exploited to yield higher speedup.
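As a reference point for the factorization the thesis parallelizes, here is a minimal serial sketch of Crout's LU decomposition (lower-triangular L, unit-diagonal U, no pivoting). This is the sequential baseline, not the thesis's CUDA code; the comments mark where a GPU version would distribute work across threads.

```python
import numpy as np

def crout_lu(A):
    """Crout LU: A = L @ U with unit-diagonal U. Serial reference version;
    assumes no pivoting is needed."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    U = np.eye(n)
    for j in range(n):
        # Column j of L depends only on already-computed columns, so all
        # rows i >= j could be updated in parallel (one GPU thread each).
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # Row j of U, likewise parallel across columns i > j.
        for i in range(j + 1, n):
            U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]
    return L, U

A = np.array([[4.0, 3.0, 1.0], [6.0, 3.0, 2.0], [2.0, 5.0, 7.0]])
L, U = crout_lu(A)
print(np.allclose(L @ U, A))  # True
```

The outer loop over `j` is inherently sequential; the parallelism a GPGPU exploits lives in the two inner loops, which is why inter-block synchronization between column steps matters.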
33

Large eddy simulation of TiO₂ nanoparticle evolution in turbulent flames

Sung, Yonduck 03 February 2012 (has links)
Flame-based synthesis is a major process for large-scale manufacturing of commercially valuable nanoparticles. However, this important industrial process has been advanced mostly by trial-and-error evolutionary studies, because it involves tightly coupled multiphysics flow phenomena: turbulence, fuel combustion, precursor oxidation, and nanoparticle dynamics. A reliable and predictive computational model based on fundamental physics and chemistry can provide tremendous insight, but developing such a comprehensive model is challenging because it must accurately describe not only the individual physical processes but also the strongly coupled, nonlinear interactions among them. In this work, a multiscale computational model for flame synthesis of TiO₂ nanoparticles in a turbulent flame reactor is presented. The model is based on the large-eddy simulation (LES) methodology and incorporates detailed gas-phase combustion and precursor oxidation chemistry as well as a comprehensive nanoparticle evolution model. A flamelet-based model is used for turbulence-chemistry interactions. In particular, the transformation of TiCl₄ to the solid primary nucleating TiO₂ nanoparticles is represented using an unsteady kinetic model with 30 species and 70 reactions in order to accurately describe the critical nanoparticle nucleation process. The evolution of the TiO₂ number density function is tracked using the quadrature method of moments (QMOM) for the univariate particle number density function and the conditional quadrature method of moments (CQMOM) for the bivariate density distribution function. For validation, the computational model is compared against experimental data from a canonical flame-based titania synthesis configuration, and reasonable agreement is obtained.
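QMOM's central step is recovering an n-point Gaussian quadrature (nodes and weights) from the first 2n moments of the number density function, so that only moment transport equations need to be solved. Below is a minimal sketch of that moment-inversion step using Wheeler's algorithm — an illustration of the standard QMOM kernel, not code from this thesis.

```python
import numpy as np

def wheeler_quadrature(moments):
    """Recover n quadrature nodes/weights from the first 2n moments:
    build the Jacobi matrix of the underlying orthogonal polynomials
    (Wheeler's algorithm), then take its eigendecomposition."""
    m = np.asarray(moments, dtype=float)
    n = m.size // 2
    a, b = np.zeros(n), np.zeros(n)
    sig = np.zeros((n + 1, 2 * n))      # sig[k + 1] is Wheeler's sigma_k row
    sig[1] = m
    a[0] = m[1] / m[0]
    for k in range(1, n):
        for l in range(k, 2 * n - k):
            sig[k + 1, l] = (sig[k, l + 1] - a[k - 1] * sig[k, l]
                             - b[k - 1] * sig[k - 1, l])
        a[k] = sig[k + 1, k + 1] / sig[k + 1, k] - sig[k, k] / sig[k, k - 1]
        b[k] = sig[k + 1, k] / sig[k, k - 1]
    # Tridiagonal Jacobi matrix: eigenvalues are the nodes; weights come
    # from the first component of each eigenvector.
    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = m[0] * vecs[0, :] ** 2
    return nodes, weights

# Moments of two particle sizes (weight 0.5 each at d = 1 and d = 3):
mom = [0.5 * 1**k + 0.5 * 3**k for k in range(4)]
print(wheeler_quadrature(mom))   # nodes ~ [1, 3], weights ~ [0.5, 0.5]
```

In a CFD solver this inversion runs in every cell at every step, converting transported moments back into a quadrature that closes the unclosed growth and aggregation terms.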
34

Novel and Efficient Numerical Analysis of Packaging Interconnects in Layered Media

Zhu, Zhaohui January 2005 (has links)
Technology trends toward lower-power, higher-speed, and higher-density devices have pushed package performance to its limits. High-frequency effects, e.g., crosstalk and signal distortion, may cause high bit error rates or malfunctioning of the circuit. Therefore, the successful release of a new product requires constant attention to high-frequency effects throughout the design process, and full-wave electromagnetic tools must be used for this purpose. Unfortunately, currently available full-wave tools require excessive computational resources to simulate large-scale interconnect structures. A prototype version of the Full-Wave Layered-Interconnect Simulator (UA-FWLIS), which employs the Method of Moments (MoM) technique, was developed in response to this design need. Instead of using standard numerical integration techniques, the MoM reaction elements were evaluated analytically, thereby greatly improving the computational efficiency of the simulator. However, the expansion and testing functions employed in the prototype simulator are filamentary across the wire, so many problems cannot be handled correctly. Therefore, a fundamental extension is made in this dissertation to incorporate rectangular-based, finite-width expansion and testing functions into the simulator. The critical mathematical and theoretical issues encountered during this extension are resolved, and the breakthroughs accomplished here lay the foundation for future extensions. A new bend-cell expansion function is also introduced, allowing the simulator to handle bends in interconnects with fewer unknowns. In addition, the incomplete Lipschitz-Hankel integrals used in the analytical solution are studied, and two new series expansions are developed to improve computational efficiency and accuracy.
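The payoff of analytic evaluation over numerical quadrature can be seen even in a scalar toy case. The sketch below compares a closed-form antiderivative against brute-force quadrature for a thin-wire-style static kernel integral — a deliberately simplified stand-in for the simulator's reaction-element integrals, which involve incomplete Lipschitz-Hankel integrals rather than this elementary form.

```python
import numpy as np
from scipy.integrate import quad

# Toy "reaction element": I(x) = ∫_a^b dx' / sqrt((x - x')^2 + r^2),
# the potential at x due to a uniform filament of radius r on [a, b].
a, b, r, x = 0.0, 1.0, 0.01, 0.4

def analytic(x):
    # Exact closed form: asinh((x - a)/r) - asinh((x - b)/r).
    return np.arcsinh((x - a) / r) - np.arcsinh((x - b) / r)

def numeric(x):
    # Adaptive quadrature must work hard near the peak at x' = x.
    val, _ = quad(lambda xp: 1.0 / np.hypot(x - xp, r), a, b, points=[x])
    return val

print(analytic(x), numeric(x))  # agree; the analytic form is exact and O(1)
```

The two values match, but the analytic expression costs two function evaluations regardless of how nearly singular the kernel is, which is the efficiency argument behind UA-FWLIS's analytically evaluated reaction elements.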
35

An Empirical Study of the Causes and Consequences of Mergers in the Canadian Cable Television Industry

BYRNE, DAVID P R 13 December 2010 (has links)
This dissertation consists of three essays that study mergers and consolidation in the Canadian cable television industry. The first essay provides a historical overview of regulatory and technical change in the industry and presents the dataset constructed for this study. The basic pattern of interest in the data is regional consolidation, where dominant cable companies grow over time by acquiring the cable systems of small operators. I perform a reduced-form empirical analysis that formally studies the determinants of mergers and the effect acquisitions have on the cable bundles offered to consumers. The remaining essays develop and estimate structural econometric models to further study the determinants and welfare consequences of mergers in the industry. The second essay estimates an empirical analogue of the Farrell and Scotchmer (1988) coalition-formation game. I use the estimated model to measure the equilibrium impact that economies of scale and agglomeration have on firms' acquisition incentives. I also study the impact entry and merger subsidies have on consolidation and long-run market structure. The final chapter estimates a variant of the Rochet and Stole (2002) model of multi-product monopoly with endogenous quality and prices. Using the estimated model, I compute the impact mergers have on welfare. I find that both consumer and producer surplus rise with acquisitions. I also show that accounting for changes in both prices and products (i.e., cable bundle quality) is important for measuring the welfare impact of mergers. / Thesis (Ph.D, Economics) -- Queen's University, 2010-12-09 14:39:15.431
36

ESSAYS ON HUMAN CAPITAL, HEALTH CAPITAL, AND THE LABOR MARKET

Hokayem, Charles 01 January 2010 (has links)
This dissertation consists of three essays concerning the effects of human capital and health capital on the labor market. Chapter 1 presents a structural model that incorporates a health capital stock into the traditional learning-by-doing model. The model allows health to affect future wages by interrupting current labor supply and on-the-job human capital accumulation. The model is estimated with a nonlinear Generalized Method of Moments (GMM) estimator using data on sick time from the Panel Study of Income Dynamics. The results show that human capital production exhibits diminishing returns, and that health capital production increases with the current stock of health capital; that is, better current health improves future health. Among prime-age working men, the effect of health on human capital accumulation is relatively small. Chapter 2 explores the role of another form of human capital, noncognitive skills, in explaining racial gaps in wages. It adds two noncognitive skills, locus of control and self-esteem, to a simple wage specification to determine the effect of these skills on the racial wage gap (white, black, and Hispanic) and the return to these skills across the wage distribution. The wage specifications are estimated using pooled, between, and quantile estimators. Results using the National Longitudinal Survey of Youth 1979 show these skills account for differing portions of the racial wage gap depending on race and gender. Chapter 3 synthesizes the treatment of health and on-the-job human capital accumulation from Chapter 1 with the noncognitive skills of Chapter 2 to examine the influence of these skills on human capital and health capital accumulation in adult life. It introduces noncognitive skills into a life cycle labor supply model with endogenous health and human capital accumulation. Noncognitive skills, measured by degree of future orientation, self-efficacy, trust-hostility, and aspirations, exogenously affect human capital and health production. The model uses noncognitive skills assessed in the early years of the Panel Study of Income Dynamics and relates these skills to health and human capital accumulation during adult life. The main findings suggest individuals with high self-efficacy receive higher future wages.
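For readers unfamiliar with the estimator used in Chapter 1, the sketch below illustrates generic two-step GMM on simulated data — a textbook illustration of the method with made-up moment conditions, not the dissertation's structural model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000
z = rng.standard_normal((n, 2))                  # two instruments
u = rng.standard_normal(n)
x = z @ np.array([1.0, 0.5]) + 0.8 * u + rng.standard_normal(n)  # endogenous
y = 2.0 * x + u                                  # true beta = 2

def gbar(beta):
    """Sample moment vector E[z*(y - x*beta)]; ~0 at the true beta."""
    return z.T @ (y - x * beta) / n

def gmm_objective(beta, W):
    g = gbar(beta[0])
    return g @ W @ g

# Step 1: identity weighting. Step 2: efficient weighting estimated
# from the step-1 residual moment contributions.
b1 = minimize(gmm_objective, x0=[0.0], args=(np.eye(2),)).x
m = z * (y - x * b1[0])[:, None]
W2 = np.linalg.inv(m.T @ m / n)                  # optimal weight matrix
b2 = minimize(gmm_objective, x0=b1, args=(W2,)).x
print(b2)                                        # ~ [2.0]
```

The dissertation's estimator is nonlinear in the parameters, but the structure is the same: stack moment conditions, weight them, and minimize the quadratic form.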
37

Corporate governance and firm outcomes: causation or spurious correlation?

Tan, David Tatwei, Banking & Finance, Australian School of Business, UNSW January 2009 (has links)
The rapid growth of financial markets and the increasing diffusion of corporate ownership have placed tremendous emphasis on the effectiveness of corporate governance in resolving agency conflicts within the firm. This study investigates the relation between corporate governance and firm performance/failure by implementing various econometric modelling methods to disentangle causal relations from spurious correlations. Using a panel dataset of Australian firms, a comprehensive suite of corporate governance mechanisms is considered, including the ownership, remuneration, and board structures of the firm. Initial ordinary least squares (OLS) and fixed-effects panel specifications report significant causal relations between various corporate governance measures and firm outcomes. However, the dynamic generalised method of moments (GMM) results indicate that no causal relations exist once the effects of simultaneity, dynamic endogeneity, and unobservable heterogeneity are taken into account. Moreover, these results remain robust when accounting for the firm's propensity for fraud. The findings support the equilibrium theory of corporate governance and the firm, suggesting that a firm's corporate governance structure is an endogenous characteristic determined by other firm factors, and that any observed relations between governance and firm outcomes are spurious in nature. Chapter 2 examines the corporate governance and firm performance relation. Using a comprehensive suite of corporate governance measures, this chapter finds no evidence of a causal relation between corporate governance and firm performance when accounting for the biases introduced by simultaneity, dynamic endogeneity, and unobservable heterogeneity. This result is consistent across all firm performance measures. Chapter 3 explores the relation between corporate governance and the likelihood of firm failure by implementing the Merton (1974) model of firm valuation. Similarly, no significant causal relations between a firm's corporate governance structure and its likelihood of failure are detected when accounting for the influence of endogeneity on the parameter estimates. Chapter 4 re-examines the corporate governance and firm performance/failure relation within the context of corporate fraud. Using KPMG and ASIC fraud databases, the corporate governance and firm outcome relations are estimated while accounting for firms' vulnerability to corporate fraud. This chapter finds no evidence of a causal relation between corporate governance and firm outcomes when conditioning on a firm's propensity for fraud.
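The dynamic-panel logic behind such GMM results — first-differencing removes firm fixed effects, and lagged levels instrument the now-endogenous lagged difference — can be sketched with an Anderson-Hsiao-style IV estimator, a simpler cousin of the dynamic GMM estimator used in the thesis. Data and variable names below are simulated and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, rho = 500, 8, 0.6                  # firms, periods, true persistence
alpha = rng.standard_normal(N)           # unobserved firm fixed effects
y = np.zeros((N, T))
y[:, 0] = alpha + rng.standard_normal(N)
for t in range(1, T):                    # y_it = rho*y_i,t-1 + alpha_i + e_it
    y[:, t] = rho * y[:, t - 1] + alpha + rng.standard_normal(N)

# First-differencing removes alpha_i: dy_t = rho*dy_{t-1} + de_t, but
# de_t contains e_{t-1}, which is correlated with dy_{t-1} -> OLS biased.
dy  = (y[:, 2:] - y[:, 1:-1]).ravel()    # dy_t for t = 2..T-1
dy1 = (y[:, 1:-1] - y[:, :-2]).ravel()   # dy_{t-1}, the endogenous regressor
z   = y[:, :-2].ravel()                  # instrument: level y_{t-2}

rho_ols = (dy1 @ dy) / (dy1 @ dy1)       # biased estimate
rho_iv  = (z @ dy) / (z @ dy1)           # Anderson-Hsiao IV: consistent
print(round(rho_ols, 3), round(rho_iv, 3))   # IV is close to 0.6; OLS is not
```

Dynamic GMM extends this by stacking all available lags as instruments and weighting them optimally, which is what allows the thesis to separate causal effects from the endogenous choice of governance structure.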
39

Empirical likelihood with applications in time series

Li, Yuyi January 2011 (has links)
This thesis investigates the statistical properties of the Kernel Smoothed Empirical Likelihood (KSEL; e.g., Smith, 1997, 2004) estimator and various associated inference procedures for weakly dependent data. New tests for structural stability are proposed and analysed, and asymptotic analyses and Monte Carlo experiments are used to assess these new tests theoretically and empirically. Chapter 1 reviews and discusses estimation and inferential properties of Empirical Likelihood (EL; Owen, 1988) for independently and identically distributed data and compares it with Generalised EL (GEL), GMM, and other estimators. KSEL is treated extensively by specialising the kernel-smoothed GEL of the working paper of Smith (2004), some of whose results and proofs are extended and refined in Chapter 2. Asymptotic properties of some tests in Smith (2004) are also analysed under local alternatives. This treatment of KSEL lays the foundation for the analyses in Chapters 3 and 4, which would not otherwise follow straightforwardly. In Chapters 3 and 4, subsample KSEL estimators are proposed to support the development of KSEL structural stability tests for a given breakpoint and for an unknown breakpoint, respectively, based on related work using GMM (e.g., Hall and Sen, 1999; Andrews and Fair, 1988; Andrews and Ploberger, 1994). A novel contribution of these two chapters is that moment functions may be kernel-smoothed either after or before the sample split, and it is rigorously proved that the two smoothing orders are asymptotically equivalent. The overall null hypothesis of structural stability is decomposed according to the identifying and overidentifying restrictions, as Hall and Sen (1999) advocate for GMM, leading to a more practical and precise structural stability diagnosis procedure. In this framework, the KSEL structural stability tests are also proved, via asymptotic analysis, to be capable of identifying different sources of instability, arising from parameter change or violation of the overidentifying restrictions. The analyses show that these KSEL tests follow the same limit distributions as their GMM counterparts. To examine the finite-sample performance of the KSEL structural stability tests relative to GMM, Monte Carlo simulations are conducted in Chapter 5 using a simple linear model considered by Hall and Sen (1999). This chapter details the relevant computational algorithms and permits different smoothing-order, kernel-type, and prewhitening options. In general, the simulation evidence suggests that the newly proposed KSEL tests often perform comparably to their GMM counterparts; in some cases their sizes can be slightly larger, but false null hypotheses are rejected with much higher frequency. These KSEL-based tests are thus valid theoretical and practical alternatives to GMM.
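As background for the EL machinery the thesis builds on, here is a minimal sketch of Owen's (1988) empirical likelihood for a scalar mean: profile the likelihood ratio by solving the one-dimensional Lagrange-multiplier equation. This illustrates plain i.i.d. EL only; the kernel smoothing that KSEL adds for dependent data is not shown.

```python
import numpy as np
from scipy.optimize import brentq

def el_logratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen, 1988).
    Weights w_i ∝ 1/(1 + lam*(x_i - mu)) with lam solving the score eq."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                    # mu outside the convex hull of data
    # Solve sum_i d_i/(1 + lam*d_i) = 0; bracket keeps all weights positive.
    lo = (-1.0 + 1e-10) / d.max()
    hi = (-1.0 + 1e-10) / d.min()
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(3)
x = rng.exponential(size=200)            # true mean is 1
print(el_logratio(x, 1.0))               # ~ chi2(1) under the null
```

The statistic is asymptotically chi-squared under the null without any variance estimation, which is the appeal of EL; KSEL recovers the analogous property under weak dependence by smoothing the moment functions before profiling.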
40

Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

January 2016 (has links)
We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluation of most off-diagonal entries, reducing the matrix-fill effort from O(N^2) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-form standard fast wavelet transform (SFWT). By properly selecting multiresolution levels of the total transformation matrix, the solution precision can be enhanced while matrix sparsity and memory consumption are not noticeably sacrificed. The unified fast scattering algorithm for dielectric random rough surfaces asymptotically reduces to the PEC case as the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posedness. Compared with previous publications and laboratory measurements, good agreement is observed. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2016
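The sparsification step — transforming a dense impedance matrix into the wavelet domain and thresholding small entries — can be illustrated with a one-level Haar transform, chosen here for brevity in place of Coiflets; a toy smooth kernel stands in for the impedance matrix.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal one-level Haar transform (n even): the first n/2 rows
    average adjacent pairs, the last n/2 rows difference them."""
    H = np.zeros((n, n))
    s = 1.0 / np.sqrt(2.0)
    for i in range(n // 2):
        H[i, 2 * i], H[i, 2 * i + 1] = s, s                      # scaling row
        H[n // 2 + i, 2 * i], H[n // 2 + i, 2 * i + 1] = s, -s   # wavelet row
    return H

n = 256
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
Z = 1.0 / (1.0 + np.abs(i - j))          # toy smooth "impedance" kernel

H = haar_matrix(n)
Zw = H @ Z @ H.T                          # matrix-form wavelet transform
Zs = np.where(np.abs(Zw) > 1e-4 * np.abs(Zw).max(), Zw, 0.0)  # threshold

print("nonzero fraction:", np.count_nonzero(Zs) / n**2)
err = np.linalg.norm(H.T @ Zs @ H - Z) / np.linalg.norm(Z)
print("relative error:", err)             # small despite the sparsity
```

Because the kernel varies smoothly away from the diagonal, the wavelet rows produce many near-zero entries that can be dropped with little reconstruction error; stacking more resolution levels, as the SFWT does, increases the sparsity further.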
