About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
131

Methodological aspects of the mapping of disease resistance loci in livestock

Tilquin, Pierre 19 September 2003
The incidence of infectious diseases in livestock is a major concern for animal breeders as well as for consumers. As an alternative to prophylactic measures or therapeutic agents, infectious diseases can be countered by increasing the disease resistance of animals through genetic improvement. Animals can be selected either on a measure of their resistance (an indicator trait) or on the presence or absence of specific resistance genes in their genotype. A prerequisite for the latter approach is the identification of the genes, or QTL (quantitative trait loci), underlying the trait of interest. By means of sophisticated statistical tools, the QTL mapping strategy combines information from genetic markers and phenotypic values to dissect quantitative traits into their individual genetic components. Some methodological aspects of this strategy are studied in the present thesis in the context of disease resistance in livestock. Indicator traits of resistance (such as bacterial or parasite counts) do not always satisfy the normality assumption underlying most QTL mapping methods, in which case the ability of statistical tests to identify the underlying genes (i.e. their statistical power) can be considerably reduced. We show that, compared to a non-parametric method, the least-squares-based parametric method applied to mathematically transformed phenotypes consistently gives the best results. When the data contain a high number of ties (equal values), as observed when measuring resistance to bacterial or parasitic diseases, the non-parametric test is a good alternative, provided that midranks are used for ties instead of random ranks. The efficiency of QTL mapping methods can also be increased by using simple combinations of repeated measurements of the same trait. From analyses performed on real data sets in chicken and sheep, we show that much attention should be paid to obtaining good-quality measurements, reflecting as well as possible the differences in resistance between animals, before performing a QTL search. The appropriate choice of resistance traits, and of when to measure them, is, besides the choice of method and the quality of marker information, among the most decisive factors in guaranteeing satisfactory results.
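A minimal sketch of the comparison the abstract describes: a parametric least-squares test on transformed count phenotypes versus a rank-based test that handles ties with midranks. The phenotype model and effect sizes are illustrative assumptions, not the thesis's actual simulation design; SciPy's Kruskal-Wallis test assigns midranks (average ranks) to ties by default.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated parasite counts for two QTL genotype classes (many ties,
# as is typical for resistance indicator traits) -- assumed model.
resistant = rng.negative_binomial(n=2, p=0.6, size=200)
susceptible = rng.negative_binomial(n=2, p=0.4, size=200)

# (a) Parametric: least-squares ANOVA on log-transformed phenotypes.
f_stat, p_param = stats.f_oneway(np.log1p(resistant), np.log1p(susceptible))

# (b) Non-parametric: Kruskal-Wallis, which uses midranks for tied
# observations rather than breaking ties at random.
h_stat, p_nonparam = stats.kruskal(resistant, susceptible)

print(f"log-transform ANOVA:       p = {p_param:.3g}")
print(f"Kruskal-Wallis (midranks): p = {p_nonparam:.3g}")
```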
132

Least-squares variance component estimation: theory and GPS applications

Amiri-Simkooei, AliReza, January 2007
Originally presented as the author's doctoral thesis, Delft University of Technology. Includes bibliographical references (p. [185]-194) and index.
133

Integrated Approach to Assess Supply Chains: A Comparison to the Process Control at the Firm Level

Karadag, Mehmet Onur 22 July 2011
This study considers whether optimizing process metrics and settings across a supply chain gives significantly different outcomes than optimization at the firm level. While the importance of supply chain integration has been shown in areas such as inventory management, this study appears to be the first empirical test for optimizing process settings. A Partial Least Squares (PLS) procedure is used to determine the crucial components of a supply chain system and the indicators that make up each component. PLS allows supply chain members to gain a greater understanding of the critical coordination components in a given supply chain. Results and implications indicate what performance is possible with supply-chain-wide optimization versus local optimization, on both simulated and manufacturing data. Pursuing an integrated approach over a traditional independent approach was found to improve predictive power by 2% to 49% for the supply chain under study.
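A hedged sketch of the kind of comparison described: cross-validated predictive power of a PLS model fit on firm-level indicators only versus one fit on chain-wide indicators. The column groupings, effect weights, and data are illustrative assumptions, not the study's actual variables.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
X_firm = rng.normal(size=(n, 4))    # focal firm's process settings (assumed)
X_chain = rng.normal(size=(n, 8))   # upstream/downstream indicators (assumed)

# Outcome depends on both firm-level and chain-level components (illustrative).
y = (X_firm @ np.array([1.0, 0.5, 0.0, 0.3])
     + X_chain[:, :3] @ np.array([0.8, 0.6, 0.4])
     + rng.normal(scale=1.0, size=n))

firm_only = PLSRegression(n_components=2)
integrated = PLSRegression(n_components=4)

# Default scoring for regressors is R^2.
r2_firm = cross_val_score(firm_only, X_firm, y, cv=5).mean()
r2_chain = cross_val_score(integrated, np.hstack([X_firm, X_chain]), y, cv=5).mean()

print(f"firm-level R^2: {r2_firm:.2f}")
print(f"integrated R^2: {r2_chain:.2f}")
```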
134

Comparison of Two Vortex-in-cell Schemes Implemented to a Three-dimensional Temporal Mixing Layer

Sadek, Nabel 24 August 2012
Numerical simulations are presented for three-dimensional viscous incompressible free shear flows. The numerical method is based on solving the vorticity equation with the Vortex-in-Cell method, in which the vorticity field is discretized into a finite set of Lagrangian elements (particles) and the computational domain is covered by an Eulerian mesh. The velocity field is computed on the mesh by solving a Poisson equation, and the solution proceeds in time by advecting the particles with the flow. A second-order Adams-Bashforth method is used for time integration. The exchange of information between the Lagrangian particles and the Eulerian grid is carried out with the M′4 interpolation scheme. The classical inviscid scheme is enhanced to account for stretching and viscous effects, for which two schemes are used. The first uses periodic remeshing of the vortex particles along with fourth-order finite-difference approximations of the partial derivatives in the stretching and viscous terms. In the second, derivatives are approximated by least-squares polynomials. The novelty of this work lies in using the moving least-squares technique within the framework of the Vortex-in-Cell method and applying it to a three-dimensional temporal mixing layer. Comparisons of the mean flow and velocity statistics are made with experimental studies, and the results confirm the validity of the present schemes. Both schemes also capture the significant flow scales qualitatively, allow physical insight into the development of instabilities and the formation of three-dimensional vortex structures, and show acceptably low numerical diffusion.
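For illustration, a sketch of the M′4 kernel commonly used in Vortex-in-Cell codes to exchange quantities between Lagrangian particles and the Eulerian mesh. A 1D particle-to-mesh deposition is shown; the thesis works in 3D, where the kernel would be applied as a tensor product along each axis. Grid spacing and particle strengths here are arbitrary example values.

```python
import numpy as np

def m4_prime(x):
    """Monaghan's M'4 kernel: support of two cells, preserves the first
    three moments; note it takes negative values on 1 <= |x| < 2."""
    ax = np.abs(x)
    w = np.zeros_like(ax)
    inner = ax < 1.0
    outer = (ax >= 1.0) & (ax < 2.0)
    w[inner] = 1.0 - 2.5 * ax[inner]**2 + 1.5 * ax[inner]**3
    w[outer] = 0.5 * (2.0 - ax[outer])**2 * (1.0 - ax[outer])
    return w

def deposit(x_p, q_p, x_grid, h):
    """Spread particle strengths q_p at positions x_p onto a uniform grid."""
    field = np.zeros_like(x_grid)
    for xp, qp in zip(x_p, q_p):
        field += qp * m4_prime((x_grid - xp) / h)
    return field

h = 0.1
x_grid = np.arange(0.0, 1.0 + h / 2, h)
field = deposit(np.array([0.33, 0.57]), np.array([1.0, -0.5]), x_grid, h)
print(field.round(3))  # kernel weights sum to 1, so total strength is conserved
```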
136

Linear Programming Algorithms Using Least-Squares Method

Kong, Seunghyun 04 April 2007
This thesis is a computational study of recently developed algorithms that aim to overcome degeneracy in the simplex method. We study four algorithms: the non-negative least-squares algorithm, the least-squares primal-dual algorithm, the least-squares network flow algorithm, and the combined-objective least-squares algorithm. All four use least-squares measures to solve their subproblems, so they do not exhibit degeneracy; but they had never been efficiently implemented, and their performance had therefore not been demonstrated. In this research we implement these algorithms efficiently and improve their performance relative to the preliminary results. For the non-negative least-squares algorithm, we develop a basis-update technique and data structures that fit our purpose, as well as a measure that helps find a good ordering of columns and rows, yielding a sparse and concise representation of the QR factors. The least-squares primal-dual algorithm uses the non-negative least-squares problem as its subproblem, minimizing infeasibility while satisfying dual feasibility and complementary slackness. The least-squares network flow algorithm is the least-squares primal-dual algorithm applied to min-cost network flow instances, and it can efficiently solve much bigger instances than the general primal-dual algorithm. The combined-objective least-squares algorithm is the primal version of the least-squares primal-dual algorithm: each subproblem minimizes the true objective and infeasibility simultaneously, so that optimality and primal feasibility are reached together, using a big-M penalty on the infeasibility. We also develop techniques to improve the convergence rate of each algorithm: relaxation of the complementary slackness condition, a special pricing strategy, and a dynamic big-M value. Our computational results show that the least-squares primal-dual algorithm and the combined-objective least-squares algorithm perform better than the CPLEX Primal solver but are slower than the CPLEX Dual solver, while the least-squares network flow algorithm is as fast as the CPLEX Network solver.
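A minimal sketch of the non-negative least-squares subproblem at the heart of the algorithms above: minimize ||Ax − b|| subject to x ≥ 0. SciPy's active-set solver is used purely for illustration on random data; the thesis builds a specialized implementation with custom QR updating rather than calling a library routine.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 8))   # arbitrary example system
b = rng.normal(size=20)

# Solve min ||Ax - b||_2 subject to x >= 0.
x, residual_norm = nnls(A, b)
print("x >= 0 componentwise:", np.all(x >= 0))
print("residual norm:", round(residual_norm, 4))
```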
137

Accuracy Improvement of Closed-Form TDOA Location Methods Using IMM Algorithm

Chen, Guan-Ru 31 August 2010
Mobile target positioning and tracking play an important role in wireless communication systems. A multi-sensor system offers an efficient solution to the positioning problem, yielding more accurate location estimates and tracking results; however, both the sensor deployment and the location algorithm affect overall positioning performance. In this thesis, based on the time difference of arrival (TDOA), two closed-form least-squares location methods, the spherical-interpolation (SI) method and the spherical-intersection (SX) method, are used to estimate the target location; unlike the usual approach, they require no iterative nonlinear minimization. Because the geometry of the target and the deployed sensors affects location performance, the constraints and performance of the two methods are first examined. To achieve real-time target tracking, Kalman filtering structures are combined with the SI and SX methods. The two resulting positioning and tracking systems have different, complementary performance inside and outside the multi-sensor array, so we fuse them with an interacting multiple model (IMM) based estimator whose internal filters, running in parallel, are the SX-based KF1 and the SI-based KF2. Moreover, because the measurement noise is time-varying, we propose a scheme for adjusting the measurement noise variance assigned in the Kalman filters to further improve the location estimates. Simulation results are obtained in Matlab. In three-dimensional multi-sensor array scenarios, the moving-target location results show that the IMM-based estimators effectively improve positioning performance.
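A hedged sketch of a closed-form linearized TDOA least-squares solve in the same family as the SI/SX methods, not the thesis's exact algebra. With the reference sensor at the origin and range differences d_i measured against it, expanding ||x − s_i||² = (R + d_i)² with R = ||x|| gives the linear equation 2 s_i·x + 2 d_i R = ||s_i||² − d_i², where R is treated as an extra unknown. Sensor layout and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
sensors = np.array([[0., 0., 0.],      # reference sensor at the origin
                    [10., 0., 0.],
                    [0., 10., 0.],
                    [0., 0., 10.],
                    [10., 10., 5.]])
target = np.array([3.0, 4.0, 2.0])     # ground truth for this example

ranges = np.linalg.norm(sensors - target, axis=1)
d = (ranges - ranges[0])[1:]                 # TDOA-derived range differences
d += rng.normal(scale=0.01, size=d.shape)    # small measurement noise

# One linear equation per non-reference sensor; unknowns are [x, y, z, R].
S = sensors[1:]
A = np.hstack([2.0 * S, (2.0 * d)[:, None]])
b = np.sum(S**2, axis=1) - d**2

sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated target:", sol[:3].round(2))
```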
138

Uncertainty evaluation of delayed neutron decay parameters

Wang, Jinkai 15 May 2009
In a nuclear reactor, delayed neutrons play a critical role in sustaining a controllable chain reaction. The relative yields and decay constants of delayed neutrons are very important for modeling reactivity control and have been studied for decades. Researchers have tried different experimental and numerical methods to assess these parameters, yet the reported values vary widely, far more than the small statistical errors reported with them. Interestingly, the reported parameters fit their individual measurement data well in spite of these differences. This dissertation focuses on evaluating the errors and methods for the delayed neutron relative yields and decay constants for thermal fission of U-235. Various numerical methods used to extract the delayed neutron parameters from measured data, including the matrix inverse, Levenberg-Marquardt, and quasi-Newton methods, were studied extensively using simulated delayed neutron data, Poisson-distributed around Keepin's theoretical data. The extraction methods produced markedly different results for the same data set, and some could not even find solutions for certain data sets. Further investigation showed that ill-conditioned matrices in the objective function were the reason for the inconsistent results. To find a reasonable solution with small variance, a regularization parameter was introduced via ridge regression. The results from the ridge regression method, in terms of goodness of fit to the data, were good and often better than those of the other methods. The regularization introduces a small additional bias into the fitted result, but the method guarantees convergence no matter how large the condition number of the coefficient matrix. Both saturation and pulse modes were simulated to focus on different delayed neutron groups, and several factors affecting solution stability were investigated, including the initial count rate, sample flight time, and initial guess values. Finally, because comparing reported delayed neutron parameters among experiments cannot determine whether their underlying data actually differ, methods are proposed for comparing the delayed neutron data sets directly.
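A minimal sketch of the stabilizing idea described above: ridge regularization of an ill-conditioned least-squares system, solving (AᵀA + λI)x = Aᵀb instead of the bare normal equations. The matrix here is a generic ill-conditioned example, not reactor data, and λ is an assumed illustrative value.

```python
import numpy as np

rng = np.random.default_rng(3)
# Nearly collinear columns -> huge condition number.
base = rng.normal(size=(50, 1))
A = np.hstack([base,
               base + 1e-6 * rng.normal(size=(50, 1)),
               rng.normal(size=(50, 1))])
x_true = np.array([1.0, 2.0, -1.0])
b = A @ x_true + 0.01 * rng.normal(size=50)

print(f"cond(A^T A) = {np.linalg.cond(A.T @ A):.2e}")

lam = 1e-3  # regularization parameter: trades a small bias for stability
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)
print("ridge solution:", x_ridge.round(2))
# The collinear pair's coefficients are split between the two columns --
# the small bias mentioned above -- but the solve is stable.
```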
139

The Study of Cross-country Feed Company Constructing Sustainable Competitive Advantages - Case Study on Greatwall Enterprise

Sui-Ying, Wang 25 August 2004
Taiwan's feed industry contributes remarkably to the food supply chain in terms of animal protein production. However, after the foot-and-mouth disease (FMD) outbreak in 1997 and accession to the WTO, feedmill companies with past glories have seen their advantages in the industry erode, ending in termination of business or a struggle to stay competitive. The competitive strategies of "cost leadership" and "differentiation" proposed by the strategist Michael Porter clearly pinpoint the sources of sustainable competitive advantage, and the outstanding feedmill companies in different regions that have survived the industry's tough challenges and kept expanding their businesses fit these two criteria. By analyzing competitive advantage in theory and in practice, we apply the "cost leadership" and "differentiation" criteria to the case of the Greatwall Group and suggest sustainable competitive advantage strategies for the coming challenges in the feed industry. Competitive rigidity could be lessened by meeting the "cost leadership" criterion through reform of the purchasing model using e-commerce, and the "differentiation" criterion by 1) upgrading the nutrition formulation concept to a modeling operation that takes integration advantages into account, and 2) re-allocating resources in services including research, quality control, and veterinary diagnosis. In the meantime, management restructuring to reinforce leadership and implementation performance will also be an inevitable requirement. Hopefully this case study also sheds light on other feedmill companies as a learning model.
140

G7 business cycle synchronization and transmission mechanism.

Chou, I-Hsiu 22 June 2006
Since the breakdown of the Bretton Woods system in 1973, many economists have found that business cycles have become more similar across industrial countries, although Doyle and Faust (2002) recently argued that the correlation of business cycles between countries is becoming weaker. In this research, we examine two different kinds of factors that affect business cycle correlations (BCCs) between countries, the so-called "transmission mechanisms." Many empirical analyses have attributed the temporary components of the business cycle mainly to transmitted economic factors, and economists often proxy the transmission mechanisms by goods markets, financial markets, and the coordination of monetary policies. Using only these economic proxy variables to explain BCCs between two countries seems too limited, however. We believe that if BCCs can also be explained by non-economic proxies, the results will make up for this defect. We therefore look for new factors in the political sphere and combine them with the transmission mechanisms introduced earlier. Five conclusions are summarized below.

Firstly, increasing bilateral trade had a significant positive effect on BCCs among the G7 countries from 1980 to 2002. Because the bilateral trade intensity index is endogenous, we use exogenous variables as instruments for trade, and we use a panel method to expand the data matrix; this renders previously insignificant estimates significant. Thus, when discussing the relation between BCCs and goods and services markets, these exogenous factors must be taken into account to obtain more detailed results.

Secondly, although trade in financial markets can increase the BCCs between two countries, the estimated effect is statistically insignificant. This result has a reasonable explanation in the literature (for instance, Imbs, 2004, or Kose et al., 2003): financial globalization pushes each country to deepen its specialization, which in turn weakens the BCCs among countries and offsets the positive effect of financial markets on BCCs.

Thirdly, the estimated effect of the trade intensity index on specialization is significantly negative: more trade in goods leads to less specialization, so the two countries end up with similar industrial structures. Imbs (2004) attributes this to the index used to measure bilateral trade intensity, which is affected by the sizes of the two countries. Using Clark and van Wincoop's trade intensity index instead, we find a significant specialization effect working through comparative advantage.

Fourthly, a high level of financial integration between two countries leads, through international risk sharing, to different industrial structures.

Lastly, the estimated effect of political party variables on BCCs is highly significant, indicating that political factors are an important source of fluctuations in BCCs. In other words, discussing BCCs while omitting the contribution of political factors may cause omitted-variable bias and render the estimation inefficient. Moreover, to preserve the joint benefit of all member countries in an economic organization, those countries would need to be governed by parties of similar ideology; otherwise the institution will not achieve its intended effect. Combining these conclusions, the BCCs among the G7 countries over 1980-2002 are shaped not only by the transmission mechanisms but also by political factors, so political party variables should be included to make the theoretical model more persuasive. According to IMF statistics, BCCs among industrial countries have been declining gradually in recent years; consolidating trade cooperation is therefore, we believe, essential to improving BCCs among the G7. At the same time, a strongly integrated monetary policy could bring the incumbent parties of all member countries into agreement and achieve more substantial effects, as evidenced by the integration process of the European Monetary Union.
