  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Methodological aspects of the mapping of disease resistance loci in livestock

Tilquin, Pierre 19 September 2003
The incidence of infectious diseases in livestock is a major concern for animal breeders as well as for consumers. As an alternative to prophylactic measures or therapeutic agents, infectious diseases can be countered by increasing the disease resistance of animals through genetic improvement. Animals can be selected either on a measure of their resistance (an indicator trait) or on the presence or absence of specific resistance genes in their genotype. A prerequisite for the latter approach is the identification of the genes, or QTL (quantitative trait loci), underlying the trait of interest. By means of sophisticated statistical tools, the QTL mapping strategy combines information from genetic markers and phenotypic values to dissect quantitative traits into their individual genetic components. Some of the methodological aspects of this strategy are studied in the present thesis in the context of disease resistance in livestock. Indicator traits of resistance (such as bacteria or parasite counts) do not always satisfy the normality assumption underlying most QTL mapping methods, and in that case the ability of statistical tests to identify the underlying genes (i.e., their statistical power) can be considerably reduced. We show that, compared to a non-parametric method, the least-squares-based parametric method applied to mathematically transformed phenotypes always gives the best results. When the data contain many ties (equal values), as observed when measuring resistance to bacterial or parasitic diseases, the non-parametric test is a good alternative, provided that midranks are used for ties instead of random ranks. The efficiency of QTL mapping methods can also be increased by using simple combinations of repeated measurements of the same trait. From analyses performed on real data sets in chicken and sheep, we show that much attention should be paid, before performing a QTL search, to obtaining good-quality measurements that best reflect the differences in resistance between animals. The appropriate choice of resistance traits, and of the time at which they are measured, is, besides the choice of method and the quality of marker information, among the most important factors in guaranteeing satisfactory results.
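A minimal sketch (toy data, not from the thesis) contrasting the two routes compared above on a skewed, tie-heavy count phenotype: a parametric test on log-transformed counts versus a rank test that assigns midranks to tied observations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical phenotypes for two marker genotype classes; counts are
# heavily skewed and contain many ties, as with parasite burdens.
counts_aa = rng.poisson(2.0, size=100)
counts_ab = rng.poisson(3.5, size=100)

# Parametric route: least-squares/t-test on log-transformed counts.
t_stat, t_p = stats.ttest_ind(np.log1p(counts_aa), np.log1p(counts_ab))

# Non-parametric route: rank test; tied observations receive midranks,
# the choice the thesis recommends over random ranks.
u_stat, u_p = stats.mannwhitneyu(counts_aa, counts_ab)

print(f"t-test on log counts: p = {t_p:.3g}")
print(f"rank test (midranks): p = {u_p:.3g}")
```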
122

Least-squares variance component estimation: theory and GPS applications

Amiri-Simkooei, AliReza, January 2007
Originally presented as the author's doctoral thesis, Delft University of Technology. Includes bibliographical references (p. [185]-194) and index.
123

How to do what you want to do when you can not do what you want: on avoiding and completing partial Latin squares

Öhman, Lars-Daniel January 2006
No description available.
124

Combinatorial Methods in Complex Analysis

Alexandersson, Per January 2013
The theme of this thesis is combinatorics, complex analysis and algebraic geometry. The thesis consists of six articles divided into four parts.

Part A: Spectral properties of the Schrödinger equation. This part consists of Papers I-II, where we study a univariate Schrödinger equation with a complex polynomial potential. We prove that, under certain boundary conditions, the set of polynomial potentials that admit solutions to the Schrödinger equation is connected. A similar result is obtained for even polynomial potentials.

Part B: Graph monomials and sums of squares. In this part, consisting of Paper III, we study natural bases, expressed in terms of multigraphs, for the space of homogeneous, symmetric and translation-invariant polynomials. We find all multigraphs with at most six edges that give rise to non-negative polynomials, and determine which of these can be expressed as a sum of squares. Such polynomials appear naturally in connection with expressing certain non-negative polynomials as sums of squares.

Part C: Eigenvalue asymptotics of banded Toeplitz matrices. This part consists of Papers IV-V. We give a new and generalized proof of a theorem by P. Schmidt and F. Spitzer concerning the asymptotics of eigenvalues of Toeplitz matrices. We also generalize the notion of eigenvalues to rectangular matrices, and partially prove a multivariate analogue of the above.

Part D: Stretched Schur polynomials. This part consists of Paper VI, where we give a combinatorial proof that certain sequences of skew Schur polynomials satisfy linear recurrences with polynomial coefficients.

At the time of the doctoral defence, the following papers were unpublished: Paper 5 (manuscript) and Paper 6 (manuscript).
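A small numerical illustration (arbitrary symbol and matrix sizes, not from the thesis) of the phenomenon behind Part C: as a banded Toeplitz matrix grows, its eigenvalues approach a fixed limit set, as described by the Schmidt-Spitzer theorem.

```python
import numpy as np
from scipy.linalg import toeplitz

def banded_toeplitz(n, diagonals):
    """n x n Toeplitz matrix from a dict {offset: value} of band entries."""
    col = np.zeros(n, dtype=complex)
    row = np.zeros(n, dtype=complex)
    for k, v in diagonals.items():
        if k <= 0:
            col[-k] = v   # sub-diagonals come from the first column
        if k >= 0:
            row[k] = v    # super-diagonals come from the first row
    return toeplitz(col, row)

# Symbol b(z) = z^{-1} + z^2; the spectral radius settles down as n grows,
# reflecting convergence of the spectra to the Schmidt-Spitzer limit set.
for n in (20, 80, 320):
    eig = np.linalg.eigvals(banded_toeplitz(n, {-1: 1.0, 2: 1.0}))
    print(f"n={n:4d}: spectral radius ~ {np.abs(eig).max():.4f}")
```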
125

Integrated Approach to Assess Supply Chains: A Comparison to the Process Control at the Firm Level

Karadag, Mehmet Onur 22 July 2011
This study considers whether optimizing process metrics and settings across a supply chain gives significantly different outcomes than optimization at the firm level. While the importance of supply chain integration has been shown in areas such as inventory management, this study appears to be the first empirical test of integration for optimizing process settings. A Partial Least Squares (PLS) procedure is used to determine the crucial components of a supply chain system and the indicators that make up each component; PLS allows supply chain members to gain a greater understanding of the critical coordination components in a given supply chain. Results and implications indicate what performance is possible with supply-chain-wide optimization versus local optimization, on both simulated and manufacturing data. Pursuing an integrated approach rather than the traditional independent approach improved predictive power by 2% to 49% for the supply chain under study.
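A hedged sketch (synthetic data, invented variable names, not the study's procedure) of the kind of comparison described above: out-of-sample predictive power of a PLS regression using the whole chain's process metrics versus one firm's metrics alone.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
supplier = rng.normal(size=(n, 3))               # upstream process metrics
firm = 0.5 * supplier + rng.normal(size=(n, 3))  # focal firm's process metrics
# The outcome depends on both tiers, so firm-level data alone is incomplete.
y = (firm @ np.array([1.0, 0.5, 0.2])
     + supplier @ np.array([0.8, 0.0, 0.3])
     + rng.normal(scale=0.5, size=n))

def pls_r2(X, y):
    """Out-of-sample R^2 of a two-component PLS regression."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = PLSRegression(n_components=2).fit(X_tr, y_tr)
    return r2_score(y_te, np.asarray(model.predict(X_te)).ravel())

print(f"firm-level R^2:  {pls_r2(firm, y):.3f}")
print(f"integrated R^2:  {pls_r2(np.hstack([supplier, firm]), y):.3f}")
```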
126

Comparison of Two Vortex-in-cell Schemes Implemented to a Three-dimensional Temporal Mixing Layer

Sadek, Nabel 24 August 2012
Numerical simulations are presented for three-dimensional viscous incompressible free shear flows. The numerical method is based on solving the vorticity equation using the Vortex-In-Cell (VIC) method, in which the vorticity field is discretized into a finite set of Lagrangian elements (particles) and the computational domain is covered by an Eulerian mesh. The velocity field is computed on the mesh by solving a Poisson equation, and the solution proceeds in time by advecting the particles with the flow, using the second-order Adams-Bashforth method for time integration. Information is exchanged between the Lagrangian particles and the Eulerian grid using the M′4 interpolation scheme. The classical inviscid scheme is enhanced to account for stretching and viscous effects, for which two schemes are used. The first uses periodic remeshing of the vortex particles along with fourth-order finite-difference approximations of the partial derivatives in the stretching and viscous terms; in the second, the derivatives are approximated by least-squares polynomials. The novelty of this work lies in using the moving least squares technique within the framework of the Vortex-In-Cell method and applying it to a three-dimensional temporal mixing layer. Comparisons of the mean flow and velocity statistics are made with experimental studies, and the results confirm the validity of the present schemes. Both schemes capture the significant flow scales, provide physical insight into the development of instabilities and the formation of three-dimensional vortex structures, and exhibit acceptably low numerical diffusion.
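A sketch (normalized grid spacing assumed, not the thesis code) of the M′4 kernel named above, the third-order particle-mesh interpolant widely used in VIC codes to exchange vorticity between particles and grid; the weights over the four nodes surrounding a particle sum to one.

```python
import numpy as np

def m4_prime(x):
    """M'4 interpolation weight at normalized distance x from a grid node."""
    ax = np.abs(np.asarray(x, dtype=float))
    w = np.zeros_like(ax)
    inner = ax < 1.0
    outer = (ax >= 1.0) & (ax < 2.0)
    w[inner] = 1.0 - 2.5 * ax[inner] ** 2 + 1.5 * ax[inner] ** 3
    w[outer] = 0.5 * (2.0 - ax[outer]) ** 2 * (1.0 - ax[outer])
    return w  # zero beyond two grid spacings

# Weights for a particle 0.3 grid spacings past its nearest node;
# the four-node stencil weights sum to one (partition of unity).
offsets = np.array([-1.7, -0.7, 0.3, 1.3])
print(m4_prime(offsets), "sum =", m4_prime(offsets).sum())
```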
128

Linear Programming Algorithms Using Least-Squares Method

Kong, Seunghyun 04 April 2007
This thesis is a computational study of recently developed algorithms that aim to overcome degeneracy in the simplex method. We study four algorithms: the non-negative least-squares algorithm, the least-squares primal-dual algorithm, the least-squares network flow algorithm, and the combined-objective least-squares algorithm. All four use least-squares measures to solve their subproblems, so they do not exhibit degeneracy, but they had never been efficiently implemented and their performance had therefore not been demonstrated. In this research we implement these algorithms efficiently and improve their performance relative to the preliminary results. For the non-negative least-squares algorithm, we develop a basis-update technique and data structures suited to our purpose, along with a measure that helps find a good ordering of columns and rows, yielding a sparse and concise representation of the QR factors. The least-squares primal-dual algorithm uses the non-negative least-squares problem as its subproblem, minimizing infeasibility while satisfying dual feasibility and complementary slackness. The least-squares network flow algorithm is the least-squares primal-dual algorithm applied to min-cost network flow instances, and it can efficiently solve much larger instances than the general least-squares primal-dual algorithm. The combined-objective least-squares algorithm is the primal version of the least-squares primal-dual algorithm: each subproblem minimizes the true objective and the infeasibility simultaneously, using a big-M penalty on infeasibility, so that optimality and primal feasibility are reached together. We also developed techniques to improve the convergence rate of each algorithm: relaxation of the complementary slackness condition, a special pricing strategy, and a dynamic big-M value. Our computational results show that the least-squares primal-dual algorithm and the combined-objective least-squares algorithm perform better than the CPLEX Primal solver, but are slower than the CPLEX Dual solver. The least-squares network flow algorithm is as fast as the CPLEX Network solver.
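A minimal sketch (toy data, scipy's generic active-set solver rather than the thesis's specialized implementation) of the non-negative least-squares subproblem these algorithms share: minimize ||Ax - b|| subject to x >= 0.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 5))   # toy coefficient matrix
b = rng.normal(size=8)        # toy right-hand side

# Solve min ||Ax - b||_2 subject to x >= 0.
x, residual = nnls(A, b)
print("x >= 0:", bool(np.all(x >= 0)), " residual norm:", round(residual, 4))
```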
129

Accuracy Improvement of Closed-Form TDOA Location Methods Using IMM Algorithm

Chen, Guan-Ru 31 August 2010
Mobile target positioning and tracking play an important role in wireless communication systems. A multi-sensor system is an efficient approach to target positioning and can yield more accurate location estimates and tracking results; however, both the deployment of the sensors and the location algorithm affect the overall positioning performance. In this thesis, two closed-form least-squares location methods based on time difference of arrival (TDOA), the spherical-interpolation (SI) method and the spherical-intersection (SX) method, are used to estimate the target location. Unlike the usual approach, these methods do not require iterative nonlinear minimization. Because the geometry of the target and the deployed sensors affects location performance, the constraints and performance of the two methods are first examined. To achieve real-time target tracking, Kalman filtering structures are combined with the SI and SX methods. The two resulting positioning and tracking systems have different and complementary performance inside and outside the multi-sensor array, so we use data fusion to improve the location estimates through an interacting multiple model (IMM) estimator whose internal parallel filters are the SX-KF1 and the SI-KF2. Moreover, because the measurement noise is time-varying, we propose a scheme that adjusts the measurement noise variance assigned to the Kalman filters, further improving the location estimates. Simulation results are obtained in Matlab. In three-dimensional multi-sensor array scenarios, the moving-target location results show that the IMM-based estimators effectively improve positioning performance.
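A hedged sketch (synthetic geometry, not the thesis code) of the linear least-squares formulation underlying the SI/SX family: with the reference sensor at the origin and range differences d_i, each sensor contributes the linear equation ||s_i||^2 - d_i^2 = 2 s_i . x + 2 d_i R with R = ||x||; here R is simply treated as a free unknown.

```python
import numpy as np

# Sensor positions, with the reference sensor assumed at the origin.
sensors = np.array([[10.0,  0.0,  0.0],
                    [ 0.0, 10.0,  0.0],
                    [ 0.0,  0.0, 10.0],
                    [10.0, 10.0,  0.0],
                    [ 0.0, 10.0, 10.0]])
source = np.array([4.0, 7.0, 2.0])   # ground truth, used to synthesize TDOAs

# Noiseless range differences d_i = ||x - s_i|| - ||x|| (i.e., c * TDOA_i).
d = np.linalg.norm(source - sensors, axis=1) - np.linalg.norm(source)

# One linear equation per sensor: 2 s_i . x + 2 d_i R = ||s_i||^2 - d_i^2.
A = np.hstack([2.0 * sensors, 2.0 * d[:, None]])
b = np.sum(sensors**2, axis=1) - d**2
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated source:", np.round(sol[:3], 3))   # last entry estimates R
```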
130

Uncertainty evaluation of delayed neutron decay parameters

Wang, Jinkai 15 May 2009
In a nuclear reactor, delayed neutrons play a critical role in sustaining a controllable chain reaction. Delayed neutron relative yields and decay constants are very important for modeling reactivity control and have been studied for decades. Researchers have tried different experimental and numerical methods to assess these parameters, and the reported values vary widely, by much more than the small statistical errors reported with them; interestingly, the reported parameters nevertheless fit their individual measurement data well. This dissertation focuses on the evaluation of the errors and methods associated with the delayed neutron relative yields and decay constants for thermal fission of U-235. Various numerical methods used to extract the delayed neutron parameters from measured data, including the Matrix Inverse, Levenberg-Marquardt, and Quasi-Newton methods, were studied extensively using simulated delayed neutron data, Poisson-distributed around Keepin's theoretical data. The extraction methods produced entirely different results for the same data set, and some could not even find solutions for some data sets. Further investigation found that ill-conditioned matrices in the objective function were the reason for the inconsistent results. To find a reasonable solution with small variance, a regularization parameter was introduced using Ridge Regression. The results from the Ridge Regression method, in terms of goodness of fit to the data, were good and often better than those of the other methods. The regularization introduces a small additional bias into the fitted result, but the method guarantees convergence no matter how large the condition number of the coefficient matrix. Both saturation and pulse modes were simulated to focus on different groups, and several factors that affect solution stability were investigated, including the initial count rate, sample flight time, and initial guess values. Finally, because comparing reported delayed neutron parameters across experiments cannot determine whether the underlying data actually differ, methods are proposed for comparing the delayed neutron data sets themselves.
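A minimal sketch (toy exponential design, arbitrarily chosen lambda, not the dissertation's code) of the ridge idea described above: adding lambda*I to the normal equations stabilizes the fit when A^T A is ill-conditioned, at the cost of a small bias.

```python
import numpy as np

rng = np.random.default_rng(3)
# Nearly collinear decay curves mimic the ill-conditioned matrices that
# made the matrix-inverse and gradient-based fits disagree.
t = np.linspace(0.0, 10.0, 50)
A = np.column_stack([np.exp(-0.60 * t), np.exp(-0.61 * t), np.exp(-2.0 * t)])
x_true = np.array([1.0, 1.0, 3.0])
b = A @ x_true + rng.normal(scale=0.01, size=t.size)

print(f"cond(A^T A) = {np.linalg.cond(A.T @ A):.2e}")

lam = 1e-4  # regularization parameter (hypothetical choice)
x_ols = np.linalg.solve(A.T @ A, A.T @ b)                       # plain normal equations
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)   # ridge-regularized
print("OLS:  ", np.round(x_ols, 2))
print("ridge:", np.round(x_ridge, 2))
```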
