151

Attitude and Trajectory Estimation for Small Suborbital Payloads

Yuan, Yunxia January 2017
Sounding rockets and small suborbital payloads provide a means for in situ research of the atmosphere and ionosphere. The trajectory and the attitude of the payload are critical for the evaluation of the scientific measurements and experiments: the trajectory gives the location of the measurement, while the attitude determines the orientation of the sensors. This thesis covers methods of trajectory and attitude reconstruction implemented in several experiments with small suborbital payloads carried out by the Department of Space and Plasma Physics in 2012–2016. The problem of trajectory reconstruction based on raw GPS data was studied for small suborbital payloads and formulated as a global least squares optimization problem. The method was applied to flight data from two suborbital payloads of the RAIN REXUS experiment, and positions and velocities were obtained with high accuracy. Based on the trajectory reconstruction technique, atmospheric densities, temperatures, and horizontal wind speeds below 80 km were obtained using rigid free-falling spheres of the LEEWAVES experiment. Comparison with independent data indicates that the results are reliable for densities below 70 km, temperatures below 50 km, and wind speeds below 45 km. Attitude reconstruction of suborbital payloads from yaw-pitch-roll Euler angles was studied. The Euler angles were established by two methods: a global optimization method and an Unscented Kalman Filter (UKF) technique. Comparison of the results shows that the global optimization method provides a more accurate fit to the observations than the UKF. Improving the results of the falling-sphere experiments requires understanding the attitude motion of the sphere. An analytical treatment was developed for a freely falling, axisymmetric sphere under aerodynamic torques; the motion can generally be described as a superposition of precession and nutation. These motion phenomena were modeled numerically and compared to flight data.
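As a rough illustration of the estimation core behind such a GPS-based reconstruction, the sketch below performs a single-epoch nonlinear least-squares position and clock-bias fix from raw pseudoranges via Gauss-Newton. The simplified measurement model (pseudoranges only, no atmospheric or payload-dynamics terms) and all names are assumptions for illustration, not code from the thesis, whose global formulation couples all epochs in one optimization.

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, x0=None, iters=10):
    """Single-epoch Gauss-Newton fix of receiver position and clock
    bias from raw pseudoranges (needs at least four satellites)."""
    # State: [x, y, z, c*dt] -- ECEF position plus clock-bias range.
    x = np.zeros(4) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = sat_pos - x[:3]               # vectors to satellites, (m, 3)
        rho = np.linalg.norm(diff, axis=1)   # geometric ranges
        pred = rho + x[3]                    # predicted pseudoranges
        # Jacobian of the predicted pseudoranges w.r.t. the state.
        H = np.hstack([-diff / rho[:, None], np.ones((len(rho), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - pred, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-4:        # sub-millimeter update
            break
    return x
```

A global reconstruction in the sense of the abstract would stack the residuals of all epochs into one least-squares problem with dynamics linking them; the step above is only the per-epoch building block.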
152

Estimation of Kinetic Parameters From List-Mode Data Using an Indirect Approach

Ortiz, Joseph Christian January 2016
This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful in expediting the drug development process, as well as in providing a dosage regimen for patients. Traditionally, the uptake rate of a drug in a system is obtained by sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system is exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect, two-part approach was used: first the compartmental activity was obtained from the data, and then the kinetic parameters were estimated. The novel aspect of the research is using list-mode data to obtain the activity curves from a system, as opposed to a traditional binned approach. Using techniques from information-theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs on each photomultiplier tube, for each event, was generated on the fly and used in a least squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels as well as time sample densities was explored. Once an estimate for the activity was obtained, the kinetic parameters were estimated using multiple cost functions and compared to each other using the mean squared error as the figure of merit.
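The density-estimation step lends itself to a compact sketch. Below is a minimal Gaussian kernel density estimate of the kind the abstract builds for the photomultiplier-tube voltage outputs; the fixed bandwidth and the stand-in calibration data are assumptions for illustration.

```python
import numpy as np

def kde(samples, query, bandwidth=0.05):
    """Non-parametric Gaussian kernel density estimate evaluated at
    the query points -- the information-theoretic-learning building
    block named in the abstract."""
    d = (query[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * d**2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2.0 * np.pi))

# Illustrative use: estimate the pdf of one PMT channel's voltages.
volts = np.random.normal(1.2, 0.1, size=500)   # stand-in calibration data
grid = np.linspace(0.8, 1.6, 200)
pdf = kde(volts, grid)
```

In the indirect approach described above, such per-event densities feed the least squares fit of the compartmental activity; the bandwidth choice governs the bias-variance trade-off of the estimate.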
153

Estimation of Aerodynamic Parameters in Real-Time: Implementation and Comparison of a Sequential Frequency Domain Method and a Batch Method

Nyman, Lina January 2016
The flight testing and evaluation of collected data must be efficient during intensive flight-test programs such as those conducted during the development of new aircraft. The aim of this thesis has thus been to produce a first version of an aerodynamic derivative estimation program to be used during real-time flight tests. The program is to give a first estimate of the aerodynamic derivatives as well as check the quality of the collected data, and thus serve as decision support during tests. The work performed includes processing the data for use in computations, comparing a batch and a sequential estimation method on real-time data, and programming a user interface. All computations and programming were done in Matlab. The estimation methods compared are both built on transforming data to the frequency domain using a chirp z-transform and then estimating the aerodynamic derivatives using complex least squares with instrumental variables. The sequential frequency-domain method produces estimates at a given interval, while the batch method performs one estimation at the end of the maneuver. Both methods produce equal results; the continuous updates of the sequential method were, however, found to be better suited to a real-time application than the single estimation of the batch method. The telemetric data received from the aircraft must be synchronized to a common frequency of 60 Hz, missing samples in the data stream must be linearly interpolated, and differing units of measured parameters must be corrected in order to perform these estimations in the real-time test environment.
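To make the frequency-domain idea concrete, the sketch below fits aerodynamic derivatives by complex least squares on frequency-domain data. It evaluates a plain discrete-time Fourier transform at hand-picked frequencies in place of the chirp z-transform, omits the instrumental-variables refinement, and uses an illustrative one-axis model, so everything here is a simplified assumption rather than the thesis implementation.

```python
import numpy as np

def dtft(x, freqs, dt):
    """Evaluate the DTFT of signal x at the chosen frequencies (Hz);
    a simple stand-in for the chirp z-transform used in the thesis."""
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(freqs, n * dt)) @ x

def fit_derivatives(alpha, q, de, q_dot, dt, freqs):
    """Complex least squares fit of the illustrative pitch model
    q_dot = M_alpha*alpha + M_q*q + M_de*de in the frequency domain."""
    A = np.column_stack([dtft(s, freqs, dt) for s in (alpha, q, de)])
    b = dtft(q_dot, freqs, dt)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta.real   # aerodynamic derivatives are real-valued
```

A sequential variant in the spirit of the abstract would re-run the fit on the growing data record at a fixed interval during the maneuver, which is what makes it usable as real-time decision support.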
154

CHEMOMETRIC ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL LIQUID CHROMATOGRAPHIC-DIODE ARRAY DETECTION DATA: PEAK RESOLUTION, QUANTIFICATION AND RAPID SCREENING

Bailey, Hope P. 09 October 2012
This research project sought to explore, compare, and develop chemometric methods with the goal of resolving chromatographically overlapped peaks through the use of spectral information gained from the four-way data sets associated with comprehensive two-dimensional liquid chromatography with diode array detection (LC × LC-DAD). A chemometric method combining iterative key set factor analysis (IKSFA) and multivariate curve resolution-alternating least squares (MCR-ALS) was developed. In the section of urine data analyzed, over 50 peaks were found: 18 visually observable and 32 additional compounds found only after application of the chemometric method. Upon successful chemometric resolution of chromatographically overlapped peaks, accurate and precise quantification was then necessary. Of the compared quantification methods, the manual baseline method was determined to offer the best precision. Of the 50 peaks found in the urine analysis, 34 were successfully quantified using the manual baseline method, with percent relative standard deviations ranging from 0.09 to 16. The accuracy of quantification was then investigated by the analysis of wastewater treatment plant effluent (WWTPE) samples. The chemometrically determined concentration of the unknown phenytoin sample was found not to exhibit a significant difference from the result obtained by the LC-MS/MS reference method, and the precision of the IKSFA-ALS method was better than that of the LC-MS/MS analysis. Chromatographic factors (data complexity, large dynamic range, retention-time shifting, chromatographic and spectral peak overlap, and background removal) were all found to affect the quantification results. The last part of this work focused on rapid screening methods capable of locating peaks that exhibited significant differences in concentration between samples. The aim was to reduce the amount of data that had to be resolved and quantified to only those peaks of interest, and thus to reduce the time required to analyze large, complex samples by eliminating the need to first quantify all peaks in every sample. Both the similarity index (SI) method and the Fisher ratio (FR) method were found to fulfill this requirement as rapid means of screening fifteen wine samples.
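The resolution step can be sketched compactly. Below is a bare-bones MCR-ALS loop that alternately solves the bilinear model D ≈ C Sᵀ for nonnegative concentration profiles and spectra; in the work described above the initial spectral estimates S0 would come from IKSFA, and practical details (normalization, convergence criteria, unfolding of the four-way data) are omitted, so this is an illustrative sketch, not the developed method.

```python
import numpy as np
from scipy.optimize import nnls

def mcr_als(D, S0, n_iter=50):
    """Alternating least squares with nonnegativity: D (times x
    wavelengths) is factored as C @ S.T with C, S >= 0."""
    S = S0.copy()
    for _ in range(n_iter):
        C = np.array([nnls(S, d)[0] for d in D])     # one row of D at a time
        S = np.array([nnls(C, d)[0] for d in D.T])   # one column at a time
    return C, S
```

Each inner problem is a small nonnegative least squares solve, which is what keeps the alternating scheme tractable even when dozens of overlapped components are present.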
155

THE STRATEGIC ASSOCIATION BETWEEN ENTERPRISE CONTENT MANAGEMENT AND DECISION SUPPORT

Alalwan, Jaffar 03 April 2012
To deal with increasing information overload and with structured and unstructured data complexity, many organizations have implemented enterprise content management (ECM) systems. Published research on ECM is so far very limited, and reports on ECM implementations were scarce until recently (Tyrväinen et al. 2006). The little available ECM literature shows that many organizations using ECM focus on operational benefits, while strategic decision-making benefits are rarely considered. Moreover, the strategic capabilities of ECM, such as its decision-making capabilities, are not fully investigated in the current literature. In addition, although several published studies discuss ECM strategy, the literature lacks a strategic management framework (SMF) that links strategies, business objectives, and performance management. A strategic management framework would seem essential to effectively manage ECM strategy formulation, implementation, and performance evaluation (Kaplan and Norton 1996; Ittner and Larcker 1997). The absence of an appropriate strategic management framework keeps organizations from effective strategic planning, implementation, and evaluation, which affects organizational capabilities overall. Therefore, the objective of this dissertation is to determine the decision support capabilities of ECM and to specify how ECM strategies can be formulated, implemented, and evaluated in order to fully utilize the strategic capabilities of ECM. Structural equation modeling as well as design science approaches will be adopted to achieve the dissertation objectives.
156

Direct L2 Support Vector Machine

Zigic, Ljiljana 01 January 2016
This dissertation introduces a novel model for solving the L2 support vector machine, dubbed Direct L2 Support Vector Machine (DL2 SVM). DL2 SVM represents a new classification model that transforms the SVM's underlying quadratic programming problem into a system of linear equations with nonnegativity constraints: the system matrix is symmetric positive definite, and the solution vector has to be nonnegative. Furthermore, this dissertation introduces a novel algorithm, dubbed Non-Negative Iterative Single Data Algorithm (NN ISDA), which solves DL2 SVM's underlying constrained system of equations. This solver shows a significant speedup compared to several other state-of-the-art algorithms, and the training-time improvement is achieved at no cost in accuracy. All experiments supporting this claim were conducted on various datasets within a strict double cross-validation scheme. DL2 SVM solved with NN ISDA has faster training times on both medium and large datasets. In addition to the comprehensive DL2 SVM model, three of its variants are introduced and derived, and three different solvers for DL2's system of linear equations with nonnegativity constraints are implemented, presented, and compared.
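The abstract gives the problem form but not the recursion, so the sketch below is only my reading of the single-data idea: a projected Gauss-Seidel sweep that touches one coordinate at a time is the simplest solver for a symmetric positive definite system under nonnegativity constraints, and it converges to the nonnegative solution when one exists. Take it as an assumed illustration, not the dissertation's algorithm.

```python
import numpy as np

def nn_single_data_solver(K, y, n_epochs=200, tol=1e-6):
    """Solve K @ a = y with a >= 0 for symmetric positive definite K
    by cyclic single-coordinate updates clipped at zero (projected
    Gauss-Seidel) -- one 'single data' update per coordinate."""
    a = np.zeros(len(y))
    for _ in range(n_epochs):
        biggest_change = 0.0
        for i in range(len(y)):
            a_new = max(0.0, a[i] + (y[i] - K[i] @ a) / K[i, i])
            biggest_change = max(biggest_change, abs(a_new - a[i]))
            a[i] = a_new
        if biggest_change < tol:
            break
    return a
```

Because each update needs only one row of K, schemes of this shape scale to large kernel matrices without forming or factorizing the full system, which is consistent with the training-time behavior reported above.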
157

Development of novel electrical power distribution system state estimation and meter placement algorithms suitable for parallel processing

Nusrat, Nazia January 2015
The increasing penetration of distributed generation, responsive loads, and emerging smart metering technologies will continue the transformation of distribution systems from passive to active networks. In such active networks, State Estimation (SE) tools will be essential to enable extensive monitoring and enhanced control technologies. For future distribution management systems, electrical power distribution system SE must be developed in a scalable manner, able to accommodate networks from small to massive size while operating with limited real-time measurements and within a restricted time frame. Furthermore, a significant phase of new sensor deployment is inevitable to enable distribution system SE, since present-day distribution networks lack the required level of measurement and instrumentation. In this context, the research presented in this thesis investigates five SE optimization solution methods through case studies related to expected scenarios of future distribution networks, in order to determine their suitability. Hachtel's Augmented Matrix method is proposed and developed as a potential SE optimizer for distribution systems due to its performance characteristics with regard to accuracy and convergence. The Differential Evolution Algorithm (DEA) and the Overlapping Zone Approach (OZA) are investigated to achieve scalability of SE tools, following which a network-division-based OZA is proposed and developed. An OZA requiring additional measurements is also proposed to provide a feasible solution for voltage estimation at reduced computation cost. Recognizing that additional measurement deployment is required to enable distribution system SE, a novel meter placement algorithm that provides economical and feasible solutions is developed and demonstrated. The algorithm focuses on reducing voltage estimation errors and is capable of reducing the error below a desired threshold with limited measurements. The scalable SE solution and the meter placement algorithm are applied on a multi-processor system to examine the reduction of computation time. Significant improvement in computation time is observed in both cases by dividing the problem into smaller segments; however, finer network division reduces computation time further at the cost of estimation accuracy. Different networks, including both idealised (16-, 77-, 356- and 711-node UKGDS) and real (40- and 43-node EG) distribution network data, are used as appropriate throughout this thesis.
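A minimal sketch of one Gauss-Newton step in Hachtel's augmented (sparse tableau) formulation is given below, assuming a linearized measurement model with Jacobian H, measurement covariance R, and residual vector r at the current state; the variable names and the dense solver are illustrative simplifications of what a distribution system SE would actually use.

```python
import numpy as np

def hachtel_step(H, R, r, alpha=1.0):
    """Solve the augmented KKT system
        [ R/alpha  H ] [ lam ]   [ r ]
        [ H.T      0 ] [ dx  ] = [ 0 ]
    which yields the WLS state update dx without ever forming the
    normal-equation gain matrix H.T @ inv(R) @ H."""
    m, n = H.shape
    A = np.block([[R / alpha, H],
                  [H.T, np.zeros((n, n))]])
    b = np.concatenate([r, np.zeros(n)])
    sol = np.linalg.solve(A, b)
    return sol[m:]          # dx; sol[:m] holds the weighted residuals
```

The augmented matrix is indefinite but sparse and better conditioned than the normal equations, which is the accuracy and convergence advantage motivating its use here.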
158

The Distribution of Cotton Fiber Length

Belmasrour, Rachid 05 August 2010
By testing a fiber beard, certain cotton fiber length parameters can be obtained rapidly. This is the method used by the High Volume Instrument (HVI). This study aims to explore approaches to inferring the length distributions of HVI beard samples in order to develop new methods that can help find the distribution of original fiber lengths and further improve HVI length measurements. First, mathematical functions were sought to describe three different types of length distributions related to the beard method as used in HVI: cotton fiber lengths of the original fiber population before being picked by the HVI Fibrosampler, fiber lengths picked by the HVI Fibrosampler, and the fiber beard's projecting portion that is actually scanned by HVI. Eight sets of cotton samples with a wide range of fiber lengths were selected and tested on the Advanced Fiber Information System (AFIS). The measured single-fiber length data were used to find the underlying theoretical length distributions and can thus be considered the population distributions of the cotton samples. Fiber length distributions by number and by weight were discussed separately; in both cases a mixture of two Weibull distributions shows a good fit to the fiber length data. To confirm the findings, Kolmogorov-Smirnov goodness-of-fit tests were conducted. Furthermore, various length parameters such as Mean Length (ML) and Upper Half Mean Length (UHML) were compared between the original distribution from the experimental data and the fitted distributions. Finally, the distribution of the original fiber length was estimated from the distribution of the projected one using Partial Least Squares (PLS) regression.
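As a sketch of the distribution-fitting step, the code below fits a two-component Weibull mixture to fiber length data by direct likelihood minimization and checks it with a Kolmogorov-Smirnov test; the starting values, bounds, and optimizer choice are assumptions for illustration rather than the study's procedure.

```python
import numpy as np
from scipy import optimize, stats

def neg_loglik(p, x):
    """Negative log-likelihood of a two-component Weibull mixture with
    weight w, shapes c1/c2, and scales s1/s2."""
    w, c1, s1, c2, s2 = p
    pdf = (w * stats.weibull_min.pdf(x, c1, scale=s1)
           + (1 - w) * stats.weibull_min.pdf(x, c2, scale=s2))
    return -np.log(pdf + 1e-300).sum()

def fit_weibull_mixture(lengths):
    x0 = [0.5, 1.5, np.median(lengths), 3.0, np.mean(lengths)]
    bounds = [(0.01, 0.99)] + [(0.1, 10.0), (1e-3, None)] * 2
    return optimize.minimize(neg_loglik, x0, args=(lengths,),
                             bounds=bounds).x

def mixture_cdf(x, p):
    w, c1, s1, c2, s2 = p
    return (w * stats.weibull_min.cdf(x, c1, scale=s1)
            + (1 - w) * stats.weibull_min.cdf(x, c2, scale=s2))

# Goodness of fit, as in the abstract's Kolmogorov-Smirnov check:
# p = fit_weibull_mixture(lengths)
# stat, pval = stats.kstest(lengths, lambda t: mixture_cdf(t, p))
```

Length parameters such as ML and UHML can then be computed from the fitted mixture and compared against their empirical counterparts, mirroring the comparison described above.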
159

Completely Recursive Least Squares and Its Applications

Bian, Xiaomeng 02 August 2012
The matrix-inversion-lemma based recursive least squares (RLS) approach is recursive in form, free of matrix inversion, and has excellent computational and memory performance in solving the classic least-squares (LS) problem. It is important to generalize RLS to the generalized LS (GLS) problem, and it is also of value to develop an efficient initialization for any RLS algorithm. In Chapter 2, we develop a unified RLS procedure to solve the unconstrained and linear-equality (LE) constrained GLS. We also show that an LE constraint is in essence a set of special error-free observations, and further consider the GLS with an implicit LE constraint in the observations (ILE-constrained GLS). Chapter 3 treats RLS initialization-related issues, including rank checking, a convenient method to compute the involved matrix inverse/pseudoinverse, and the resolution of underdetermined systems. Based on auxiliary observations, the RLS recursion can start from the first real observation, and possible LE constraints are imposed recursively; the rank of the system is checked implicitly, and if the rank is deficient, a set of refined non-redundant observations is determined instead. In Chapter 4, based on [Li07], we show that the linear minimum mean square error (LMMSE) estimator, as well as the optimal Kalman filter (KF) accounting for various correlations, can be calculated by solving an equivalent GLS using the unified RLS. In Chapters 5 and 6, an approach of joint state-and-parameter estimation (JSPE) in power systems monitored by synchrophasors is adopted, where the original nonlinear parameter problem is reformulated as two loosely coupled linear subproblems: state tracking and parameter tracking. Chapter 5 deals with state tracking, which determines the voltages in JSPE, and studies the dynamic behavior of voltages under possible abrupt changes. Chapter 6 focuses on the subproblem of parameter tracking in JSPE, where a new prediction model for parameters with moving means is introduced. Adaptive filters are developed for the two subproblems, respectively; both filters are based on the optimal KF accounting for various correlations. Simulations indicate that the proposed approach yields accurate parameter estimates and improves the accuracy of the state estimation compared with existing methods.
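For reference, the matrix-inversion-lemma RLS recursion that this work generalizes can be sketched in a few lines; the forgetting factor lam is an illustrative extra, and initialization (the subject of Chapter 3) is glossed over with the conventional large-diagonal start.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One RLS step: update the estimate theta and the inverse
    correlation matrix P with regressor phi and new observation y,
    using the matrix inversion lemma so no matrix is ever inverted."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    theta = theta + k * (y - phi @ theta)  # correct by the a-priori error
    P = (P - np.outer(k, Pphi)) / lam      # rank-one inverse update
    return theta, P

# Conventional (inexact) initialization that Chapter 3 improves upon:
# theta = np.zeros(n); P = 1e6 * np.eye(n)
```

The unified procedure of Chapter 2 keeps this per-sample cost while handling GLS and linear-equality constraints, feeding each constraint through the same recursion as an error-free observation.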
160

On the regularization of the recursive least squares algorithm

Tsakiris, Manolis 25 June 2010
This thesis is concerned with the regularization of the Recursive Least-Squares (RLS) algorithm. In the first part of the thesis, a novel regularized exponentially weighted array RLS algorithm is developed, which circumvents the problem of fading regularization inherent to the standard regularized exponentially weighted RLS formulation, while allowing the employment of generic time-varying regularization matrices. The standard equations are directly perturbed via a chosen regularization matrix, and the resulting recursions are extended to the array form. The price paid is an increase in computational complexity, which becomes cubic. The superiority of the algorithm with respect to alternative algorithms is demonstrated via simulations in the context of adaptive beamforming, in which low filter orders are employed, so that complexity is not an issue. In the second part of the thesis, an alternative criterion is motivated and proposed for the dynamic regulation of regularization in the context of the standard RLS algorithm. The regularization is implicitly achieved via dithering of the input signal. The proposed criterion is of general applicability and aims at achieving a balance between the accuracy of the numerical solution of a perturbed linear system of equations and its distance from the analytical solution of the original system, for a given computational precision. Simulations show that the proposed criterion can be effectively used to compensate for large condition numbers, small finite precision, and unnecessarily large values of the regularization.
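The dithering mechanism in the second part admits a small demonstration: adding white noise of variance sigma² to the input lifts every eigenvalue of the input correlation matrix by roughly sigma², which bounds the condition number the RLS recursion must cope with. The signal, filter order, and sigma values below are illustrative assumptions, not the thesis's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered(u, sigma):
    """Implicit regularization: white dither noise added to the input
    before it enters the adaptive filter."""
    return u + sigma * rng.standard_normal(len(u))

u = np.sin(0.01 * np.arange(2000))    # nearly rank-deficient input
for sigma in (0.0, 1e-3, 1e-2):
    # Estimate the order-8 input correlation matrix and its conditioning.
    U = np.lib.stride_tricks.sliding_window_view(dithered(u, sigma), 8)
    R = U.T @ U / len(U)
    print(f"sigma={sigma:g}  cond(R)={np.linalg.cond(R):.3e}")
```

The criterion proposed in the thesis then sets the dither variance dynamically, trading the perturbation introduced by the noise against the numerical accuracy gained from the improved conditioning.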
