71

Comparison of Two Vortex-in-cell Schemes Implemented to a Three-dimensional Temporal Mixing Layer

Sadek, Nabel 24 August 2012 (has links)
Numerical simulations are presented for three-dimensional viscous incompressible free shear flows. The numerical method solves the vorticity equation using the Vortex-in-Cell (VIC) method, in which the vorticity field is discretized into a finite set of Lagrangian elements (particles) and the computational domain is covered by an Eulerian mesh. The velocity field is computed on the mesh by solving a Poisson equation, and the solution proceeds in time by advecting the particles with the flow. A second-order Adams-Bashforth method is used for time integration, and the exchange of information between the Lagrangian particles and the Eulerian grid is carried out with the M′4 interpolation scheme. The classical inviscid scheme is extended to account for stretching and viscous effects, using two schemes. The first uses periodic remeshing of the vortex particles along with fourth-order finite-difference approximations of the partial derivatives in the stretching and viscous terms. In the second scheme, the derivatives are approximated by a least-squares polynomial. The novelty of this work lies in using the moving least squares technique within the framework of the Vortex-in-Cell method and applying it to a three-dimensional temporal mixing layer. Comparisons of the mean flow and velocity statistics are made with experimental studies. The results confirm the validity of the present schemes. Both schemes also capture the significant flow scales and provide physical insight into the development of instabilities and the formation of three-dimensional vortex structures, while exhibiting acceptably low numerical diffusion.
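As an illustration of the particle-mesh transfer step described above, here is a minimal one-dimensional sketch of the M′4 interpolation kernel (a sketch under the kernel's standard definition, not the thesis's code; in 3-D the weights are tensor products of the 1-D kernel, and the helper names are hypothetical):

```python
import numpy as np

def m4_prime(x):
    """M'4 interpolation kernel; support is |x| < 2 mesh spacings."""
    ax = np.abs(np.asarray(x, dtype=float))
    w = np.zeros_like(ax)
    inner = ax < 1.0
    outer = (ax >= 1.0) & (ax < 2.0)
    w[inner] = 1.0 - 2.5 * ax[inner]**2 + 1.5 * ax[inner]**3
    w[outer] = 0.5 * (2.0 - ax[outer])**2 * (1.0 - ax[outer])
    return w

def particle_to_mesh(xp, strength, grid_x, h):
    """Scatter particle strengths onto a 1-D mesh with M'4 weights."""
    field = np.zeros_like(grid_x, dtype=float)
    for x, s in zip(xp, strength):
        field += s * m4_prime((grid_x - x) / h)
    return field
```

The same weights can be used in the opposite direction to interpolate mesh velocities back to the particles.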
72

Integrated Approach to Assess Supply Chains: A Comparison to the Process Control at the Firm Level

Karadag, Mehmet Onur 22 July 2011 (has links)
This study considers whether optimizing process metrics and settings across a supply chain gives significantly different outcomes than optimization at the firm level. While the importance of supply chain integration has been shown in areas such as inventory management, this study appears to be the first empirical test for optimizing process settings. A Partial Least Squares (PLS) procedure is used to determine the crucial components, and the indicators that make up each component, in a supply chain system. PLS allows supply chain members to gain a greater understanding of the critical coordination components in a given supply chain. Results and implications indicate what performance is possible with supply-chain-wide optimization versus local optimization, on both simulated and manufacturing data. Pursuing an integrated approach over the traditional independent approach was found to improve predictive power by 2% to 49% for the supply chain under study.
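For readers unfamiliar with the technique, a minimal sketch of a PLS fit on synthetic data (using scikit-learn; the data and component count are illustrative, not from the study):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical data: X holds process metrics/settings across the chain,
# y is a downstream performance measure.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = X @ np.array([0.8, 0.0, 0.5, 0.0, 0.3, 0.0]) + rng.normal(scale=0.2, size=100)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print(pls.score(X, y))    # R^2: the "predictive power" being compared
print(pls.x_loadings_)    # which indicators load on each latent component
```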
73

Linear Programming Algorithms Using Least-Squares Method

Kong, Seunghyun 04 April 2007 (has links)
This thesis is a computational study of recently developed algorithms that aim to overcome degeneracy in the simplex method. We study the following algorithms: the non-negative least-squares algorithm, the least-squares primal-dual algorithm, the least-squares network flow algorithm, and the combined-objective least-squares algorithm. All four algorithms use least-squares measures to solve their subproblems, so they do not exhibit degeneracy; however, they had never been implemented efficiently, so their practical performance had not been established. In this research we implement these algorithms efficiently and improve their performance relative to the preliminary results. For the non-negative least-squares algorithm, we develop a basis-update technique and data structures suited to our purpose, along with a measure that helps find a good ordering of columns and rows so that the QR factors have a sparse and concise representation. The least-squares primal-dual algorithm uses the non-negative least-squares problem as its subproblem, minimizing infeasibility while satisfying dual feasibility and complementary slackness. The least-squares network flow algorithm is the least-squares primal-dual algorithm applied to min-cost network flow instances; it can efficiently solve much larger instances than the least-squares primal-dual algorithm. The combined-objective least-squares algorithm is the primal version of the least-squares primal-dual algorithm: each subproblem minimizes the true objective and the infeasibility simultaneously, so that optimality and primal feasibility are attained together, using a big-M term to penalize infeasibility. We developed techniques to improve the convergence rate of each algorithm: relaxation of the complementary slackness condition, a special pricing strategy, and a dynamic big-M value. Our computational results show that the least-squares primal-dual algorithm and the combined-objective least-squares algorithm perform better than the CPLEX primal solver but are slower than the CPLEX dual solver. The least-squares network flow algorithm performs as fast as the CPLEX network solver.
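A minimal sketch of the kind of non-negative least-squares subproblem these algorithms solve (using SciPy's generic solver; the thesis implements its own specialized QR-based routine):

```python
import numpy as np
from scipy.optimize import nnls

# Toy subproblem: minimize ||Ax - b||_2 subject to x >= 0.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b = np.array([1.0, 2.0, 2.5])

x, residual_norm = nnls(A, b)
print(x, residual_norm)   # the LS measure replaces a degenerate pivot rule
```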
74

Accuracy Improvement of Closed-Form TDOA Location Methods Using IMM Algorithm

Chen, Guan-Ru 31 August 2010 (has links)
Mobile target positioning and tracking play an important role in wireless communication systems. A multi-sensor system is an efficient solution to the positioning problem, yielding more accurate location estimates and tracking results; however, both the sensor deployment and the location algorithm affect overall performance. In this thesis, two closed-form least-squares location methods based on time difference of arrival (TDOA), the spherical-interpolation (SI) method and the spherical-intersection (SX) method, are used to estimate the target location. Unlike the usual approaches, neither method requires iterative nonlinear minimization. The geometry of the target and the sensors affects location performance, so the constraints and performance of the two methods are introduced first. To achieve real-time target tracking, Kalman filtering structures are combined with the SI and SX methods. Because the two resulting positioning-and-tracking systems have complementary performance inside and outside the multi-sensor array, data fusion with an interacting multiple model (IMM) based estimator is used to improve the location estimates; its internal filters, running in parallel, are the SX-KF1 and the SI-KF2. To cope with the time-varying characteristics of the measurement noise, we further propose a scheme that adjusts the measurement-noise variance assigned in the Kalman filters. Simulation results, obtained in MATLAB for three-dimensional multi-sensor array scenarios, show that the IMM-based estimators effectively improve the positioning performance for a moving target.
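A minimal sketch of a linearized closed-form TDOA fix in the spirit of the SI method (a generic least-squares formulation, not the thesis's exact algorithm):

```python
import numpy as np

def tdoa_ls(sensors, d):
    """Linearized least-squares TDOA position fix.

    sensors: (m, 3) sensor positions, sensors[0] is the reference.
    d: (m-1,) range differences d_i = ||x - s_i|| - ||x - s_0||.
    Expanding the range equations gives a system linear in (x, r0):
        2 (s_i - s_0)^T x + 2 d_i r0 = ||s_i||^2 - ||s_0||^2 - d_i^2
    """
    s0, S = sensors[0], sensors[1:]
    A = np.hstack([2.0 * (S - s0), 2.0 * d[:, None]])
    b = np.sum(S**2, axis=1) - np.sum(s0**2) - d**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]   # estimated target position (sol[3] is r0)
```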
75

Uncertainty evaluation of delayed neutron decay parameters

Wang, Jinkai 15 May 2009 (has links)
In a nuclear reactor, delayed neutrons play a critical role in sustaining a controllable chain reaction. Delayed-neutron relative yields and decay constants are very important for modeling reactivity control and have been studied for decades. Researchers have tried different experimental and numerical methods to assess these parameters, yet the reported values vary widely, far more than the small statistical errors reported with them. Interestingly, the reported parameters fit their individual measurement data well in spite of these differences. This dissertation focuses on evaluating the errors and methods for the delayed-neutron relative yields and decay constants of thermal fission of U-235. Various numerical methods for extracting the delayed-neutron parameters from measured data, including matrix-inverse, Levenberg-Marquardt, and quasi-Newton methods, were studied extensively using simulated delayed-neutron data, Poisson distributed around Keepin's theoretical values. The extraction methods produced entirely different results for the same data set, and some could not even find solutions for certain data sets. Further investigation found that ill-conditioned matrices in the objective function were the reason for the inconsistent results. To find a reasonable solution with small variance, a regularization parameter was introduced through ridge regression. The results from the ridge regression method, in terms of goodness of fit to the data, were good and often better than those of the other methods. The regularization introduces a small additional bias in the fitted result, but the method guarantees convergence no matter how large the condition number of the coefficient matrix. Both saturation and pulse modes were simulated to focus on different groups, and several factors affecting solution stability were investigated, including the initial count rate, sample flight time, and initial guess values. Finally, because comparing reported delayed-neutron parameters across experiments cannot determine whether the underlying data actually differ, methods are proposed for comparing the delayed-neutron data sets themselves.
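A minimal sketch of the ridge-regularized solve (the decay constants and noise level below are hypothetical, chosen only to mimic the near-collinear exponential columns that make the problem ill-conditioned):

```python
import numpy as np

def ridge_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam ||x||^2 via (A^T A + lam I) x = A^T b;
    the lam * I term keeps the system well-posed for large cond(A)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Near-collinear design: columns are exponential decays with close constants.
t = np.linspace(0.0, 10.0, 200)
decay = np.array([1.0, 1.2, 1.4])                 # hypothetical constants
A = np.exp(-np.outer(t, decay))
x_true = np.array([0.5, 0.3, 0.2])                # hypothetical yields
b = A @ x_true + np.random.default_rng(1).normal(scale=1e-3, size=t.size)
print(ridge_solve(A, b, lam=1e-6))
```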
76

G7 business cycle synchronization and transmission mechanism.

Chou, I-Hsiu 22 June 2006 (has links)
Since the breakdown of the Bretton Woods system in 1973, many economists have found that business cycles across industrial countries have become more alike, though Doyle and Faust (2002) recently argued that business cycle correlations (BCCs) between countries have weakened. In this study we examine two different kinds of factors that affect BCCs between countries. The first is the so-called "transmission mechanism": many empirical analyses attribute temporary fluctuations in the business cycle to transmitted economic factors, which economists usually proxy by goods markets, financial markets, and the coordination of monetary policies. Using only these proxies to explain BCCs between two countries, however, seems too limited. We believe that if BCCs can also be explained by non-economic variables, this defect can be remedied; we therefore look for new factors of a political nature and combine them with the transmission mechanisms introduced earlier. Five conclusions emerge.

Firstly, increasing bilateral trade had a significantly positive effect on BCCs among the G7 countries from 1980 to 2002. Because the bilateral trade intensity index is endogenous, we use exogenous variables as instruments to estimate the trade effect, and we use a panel structure to expand the data matrix; this renders previously insignificant estimates significant. Discussions of the relation between BCCs and goods and services markets must therefore take these exogenous factors into account to obtain detailed results.

Secondly, although financial-market linkages can increase the BCCs between two countries, the estimated effect is statistically insignificant. Prior research (for instance, Imbs, 2004, or Kose et al., 2003) offers a reasonable explanation: financial globalization pushes each country toward greater specialization, which erodes BCCs and offsets the positive effect of financial markets.

Thirdly, the estimated effect of the trade intensity index on specialization is significantly negative, suggesting that more trade in goods leaves two countries with similar industrial structures. Imbs (2004) attributes this to the bilateral trade intensity index itself, which is affected by country size; measuring the effect with Clark and van Wincoop's trade intensity index instead reveals significant specialization through comparative advantage.

Fourthly, a high level of financial integration between two countries leads, through international risk sharing, to different industrial structures.

Lastly, the estimated effect of the political-party variables on business cycle correlations is highly significant, indicating that political factors are an important source of the fluctuating trend in BCCs. In other words, omitting the contribution of political factors when analyzing BCCs may cause omitted-variable bias and make the estimation inefficient. A further implication is that, to preserve the joint benefit of the member countries of an economic organization, those countries need to be governed by parties of similar orientation; otherwise the institution will not achieve its essential result.

Combining these conclusions, we find that BCCs among the G7 countries over 1980-2002 varied not only with the transmission mechanisms but also with political factors, so the political-party variables must be considered for the theoretical model to be persuasive. According to IMF statistics, BCCs among industrial countries have been declining gradually in recent years. Consolidating trade cooperation is therefore, we believe, essential for improving BCCs among the G7; at the same time, a strongly integrated monetary policy that the incumbent governments of all member countries can agree on would have an even more substantial effect. Evidence along these lines can be found by examining the integration process of the European Monetary Union.
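A minimal sketch of the instrumental-variables step described above (two-stage least squares with NumPy; the variable names and dimensions are hypothetical, and intercepts and exogenous controls are omitted for brevity):

```python
import numpy as np

def two_stage_ls(y, X_endog, Z):
    """2SLS: y (n,) is the BCC for a country pair; X_endog (n, k) holds
    endogenous regressors such as bilateral trade intensity; Z (n, m)
    holds the instruments, with m >= k."""
    # Stage 1: project the endogenous regressors onto the instruments.
    gamma, *_ = np.linalg.lstsq(Z, X_endog, rcond=None)
    X_hat = Z @ gamma
    # Stage 2: regress the outcome on the fitted (exogenous) part.
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta
```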
77

Least-squares methods for computational electromagnetics

Kolev, Tzanio Valentinov 15 November 2004 (has links)
The modeling of electromagnetic phenomena described by Maxwell's equations is of critical importance in many practical applications. The numerical simulation of these equations is challenging and much more involved than initially believed; consequently, many discretization techniques, most of them quite complicated, have been proposed. In this dissertation, we present and analyze a new methodology for approximating the time-harmonic Maxwell's equations. It is an extension of the negative-norm least-squares finite element approach, which has been applied successfully to a variety of other problems. The main advantages of our method are that it uses simple, piecewise polynomial finite element spaces while giving quasi-optimal approximation, even for solutions with low regularity (such as those found in practical applications). The numerical solution can be computed efficiently with standard, well-known tools: iterative methods and eigensolvers for symmetric positive definite systems (e.g., PCG and LOBPCG) and preconditioners for second-order problems (e.g., multigrid). Additionally, approximation with varying polynomial degrees is allowed and spurious eigenmodes are provably avoided. We consider the following problems related to Maxwell's equations in the frequency domain: the magnetostatic problem, the electrostatic problem, the eigenvalue problem, and the full time-harmonic system. For each of these problems, we present a natural (very) weak variational formulation assuming minimal regularity of the solution. In each case, we prove error estimates for the approximation with two different discrete least-squares methods. We also show how to deal with problems posed on domains that are multiply connected or have multiple boundary components. Besides the theoretical analysis of the methods, the dissertation provides various numerical results in two and three dimensions that illustrate and support the theory.
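As a small illustration of the solver toolchain mentioned above, here is a sketch applying CG and LOBPCG to a symmetric positive definite model problem (a 1-D Laplacian stands in for the SPD systems the least-squares discretization produces; this is not the dissertation's code):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, lobpcg

n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)        # conjugate gradients; pass M=... to precondition
assert info == 0

X0 = np.random.default_rng(0).normal(size=(n, 3))
eigvals, eigvecs = lobpcg(A, X0, largest=False, tol=1e-6, maxiter=200)
print(eigvals)            # smallest eigenvalues of the SPD operator
```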
78

An improved bus signal priority system for networks with nearside bus stops

Kim, Wonho 17 February 2005 (has links)
Bus Signal Priority (BSP), which has been deployed in many cities around the world, is a traffic signal enhancement strategy that facilitates the efficient movement of buses through signalized intersections. Most BSP systems do not work well in transit networks with nearside bus stops because of the uncertainty in dwell time, and unfortunately most bus stops on arterial roadways in the U.S. are of this type. This dissertation shows that dwell time at nearside bus stops can be modeled using weighted least-squares regression. More importantly, the prediction intervals associated with the estimated dwell time were calculated and subsequently used in an improved BSP algorithm that attempts to reduce the negative effects of nearside bus stops on BSP operations. The improved BSP algorithm was tested on an urban arterial section of Bellaire Boulevard in Houston, Texas, with VISSIM, a microsimulation model, used to evaluate the performance of the BSP operations. Prior to evaluating the algorithm, the parameters of the microsimulation model were calibrated with an automated genetic-algorithm-based methodology so that the model accurately represents the traffic conditions observed in the field. The improved BSP algorithm was shown to significantly reduce bus delay, while the delay to other vehicles on the network was not statistically different from that of BSP algorithms currently deployed. The new approach is expected to be particularly useful in North America, where many transit systems have nearside bus stops in their networks.
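A minimal sketch of the dwell-time modeling step (weighted least squares with prediction intervals via statsmodels; the data, weights, and variance structure below are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical dwell-time data: dwell grows with boardings, and so does its noise.
rng = np.random.default_rng(0)
boardings = rng.integers(0, 20, size=80).astype(float)
dwell = 5.0 + 2.0 * boardings + rng.normal(scale=1.0 + 0.3 * boardings)

X = sm.add_constant(boardings)
weights = 1.0 / (1.0 + 0.3 * boardings) ** 2   # inverse-variance weights
fit = sm.WLS(dwell, X, weights=weights).fit()

pred = fit.get_prediction(X)
pi = pred.conf_int(obs=True, alpha=0.05)        # 95% prediction intervals
print(fit.params, pi[:3])
```

Intervals of this kind are what let a priority algorithm hedge against dwell-time uncertainty instead of relying on a point estimate.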
79

A three degrees of freedom test-bed for nanosatellite and Cubesat attitude dynamics, determination, and control

Meissner, David M. January 2009 (has links) (PDF)
Thesis (M.S. in Mechanical Engineering)--Naval Postgraduate School, December 2009. / Thesis Advisor(s): Romano, Marcello; Bevilacqua, Riccardo. "December 2009." Description based on title screen as viewed on January 27, 2010. Author(s) subject terms: spacecraft, cubesat, nanosat, TINYSCOPE, simulator, test bed, control, system identification, least squares, adaptive mass balancing, mass balancing, three axis simulator, NACL, TAS, CubeTAS, ADCS. Includes bibliographical references (p. 77-82). Also available in print.
80

Latent models for cross-covariance

Wegelin, Jacob A. January 2001 (has links)
Thesis (Ph. D.)--University of Washington, 2001. / Vita. Includes bibliographical references (p. 139-145).
