221

Nonlinear State Estimation of the Ionosphere as Perturbed by the 2017 Great American Eclipse

Sauerwein, Kevin Lee 11 February 2019
The 2017 Great American Eclipse provided an excellent opportunity for scientists and engineers to study the ionosphere. The dynamics of the ionosphere are driven by the amount of solar radiation it receives, and a total solar eclipse produces a short perturbation to that incoming radiation. Analyzing how the ionosphere reacts to this type of perturbation could lead to a new level of understanding of its dynamics. This study develops a nonlinear filter that estimates the state of the ionosphere's 3-D electron density profile given total electron content (TEC) measurements from dual-frequency GPS receivers located on the ground and on low-Earth-orbiting spacecraft. The electron density profile is parameterized by a bi-quintic latitude/longitude spline of Chapman profile parameters that define the vertical electron density profile. These Chapman parameters and various latitude and longitude partial derivatives are defined at a set of latitude/longitude spline grid points. Bi-quintic interpolation between the points defines the parameters' values, and thus the corresponding Chapman profiles, at all latitude/longitude points. The Chapman parameter values and their partial derivatives at the spline nodes constitute the unknowns that the nonlinear filter estimates. The filter is first tested with non-eclipse datasets to establish its reliability. It performs well but does not estimate the receivers' biases as precisely as desired; several approaches to improving the filter's bias estimation are presented and evaluated. Eclipse datasets are then input to the filter and analyzed. The results suggest that the altitude of peak electron density increased significantly near and within the eclipse path and that the vertical TEC (VTEC) decreased drastically near and within the eclipse path. The changes in VTEC and in the altitude of peak electron density caused by the eclipse leave a lasting effect that alters the density profile for anywhere from 15 minutes to several hours. / MS / The 2017 Great American Eclipse garnered much attention in the media and the scientific community. Solar eclipses provide unique opportunities to observe the ionosphere's behavior under irregular solar radiation patterns. Many devices are used to measure this behavior, including GPS receivers. Typically, GPS receivers are used for navigation by extracting and combining carrier-phase and pseudorange data from the signals of at least four GPS satellites. When the position of a GPS receiver is well known, information about the portion of the ionosphere that a signal traveled through can be estimated from the GPS signals. This estimation has been done with ground-based and orbiting GPS receivers separately. However, fusing the two data sources has never been done and is a primary focus of this study. After the performance of the estimation algorithm is demonstrated, it is used to estimate the state of the ionosphere as it was perturbed by the 2017 Great American Eclipse.
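The vertical structure referenced here is the standard Chapman layer. As a minimal sketch (illustrative only, with hypothetical parameter values, and omitting the thesis's bi-quintic spline machinery entirely), a Chapman profile and its vertical TEC can be computed as follows:

```python
import numpy as np

def chapman_profile(h_km, n_max, h_max_km, scale_km):
    """Electron density (el/m^3) of a standard Chapman layer at altitude h_km."""
    z = (h_km - h_max_km) / scale_km          # normalized height above the peak
    return n_max * np.exp(0.5 * (1.0 - z - np.exp(-z)))

# Illustrative parameter values (hypothetical, not from the thesis).
h = np.linspace(100.0, 1000.0, 2000)          # altitude grid, km
ne = chapman_profile(h, n_max=1.0e12, h_max_km=300.0, scale_km=60.0)

# Vertical TEC is the altitude integral of density, reported in TEC units.
vtec_tecu = np.trapz(ne, h * 1.0e3) / 1.0e16  # 1 TECU = 1e16 el/m^2
print(f"peak density at {h[np.argmax(ne)]:.0f} km, VTEC ~ {vtec_tecu:.1f} TECU")
```

The three Chapman parameters (peak density, peak altitude, scale height) are exactly the quantities whose spline-node values the filter estimates, which is why eclipse-induced changes show up as shifts in peak altitude and VTEC.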
222

Error Transport Equations for Unsteady Discontinuous Applications

Ganotaki, Michael 02 April 2024
Computational Fluid Dynamics (CFD) has been pivotal in scientific computing, providing critical insights into complex fluid dynamics unattainable through traditional experimental methods. Despite its widespread use, the accuracy of CFD results remains contingent upon the underlying modeling and numerical errors. A key aspect of ensuring simulation reliability is the accurate quantification of discretization error (DE), the difference between the simulation solution and the exact solution to the governing equations. This study addresses DE quantification through Error Transport Equations (ETE), an additional set of equations capable of quantifying the local DE in a solution. Historically, Richardson extrapolation has been a mainstay for DE estimation due to its simplicity and effectiveness; however, its feasibility diminishes with increasing computational demands, particularly in large-scale and high-dimensional problems. ETE integrate readily into existing CFD frameworks because they are compatible with existing numerical codes, minimizing the need for extensive code modification. By incorporating techniques developed for managing discontinuities, the study broadens ETE applicability to a wider range of scientific computing applications, particularly those involving complex, unsteady flows. The culmination of this research is demonstrated on unsteady discontinuous problems, such as Sod's shock tube problem. / Master of Science / In the ever-evolving field of Computational Fluid Dynamics (CFD), the quest for accuracy is paramount. This thesis focuses on discretization error estimation within CFD simulations, specifically on the challenge of predicting fluid behavior in scenarios marked by sudden changes, such as shock waves. At the core of this work lies an error estimation tool known as Error Transport Equations (ETE), used to improve the numerical accuracy of simulations involving unsteady flows and discontinuities. Traditionally, the accuracy of CFD simulations has been limited by discretization error, generally the largest numerical error, which is the difference between the numerical solution and the exact solution. With ETE, this research identifies these errors to enhance the simulation's overall accuracy. The implications of ETE research are far-reaching. Improved error estimation and correction methods can lead to more reliable predictions in a wide range of applications, from aeronautical engineering, where the aerodynamics of aircraft is critical, to plasma science, with applications in fusion and deep-space propulsion.
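For context, the Richardson-extrapolation DE estimate that ETE are positioned against can be sketched in a few lines. This toy example uses a second-order quadrature rule as a stand-in for a CFD solve; it is illustrative only and not from the thesis:

```python
import numpy as np

def solve(n):
    """Stand-in 'simulation': second-order midpoint quadrature of sin(x) on [0, pi]."""
    x = (np.arange(n) + 0.5) * (np.pi / n)
    return np.sum(np.sin(x)) * (np.pi / n)

f_coarse, f_fine = solve(50), solve(100)   # two systematically refined grids
p, r = 2, 2.0                              # formal order of accuracy, refinement factor

# DE (numerical minus exact) of the fine solution, estimated without the exact answer.
de_estimate = (f_coarse - f_fine) / (r**p - 1.0)
de_exact = f_fine - 2.0                    # exact integral of sin on [0, pi] is 2
print(f"estimated DE: {de_estimate:.3e}, exact DE: {de_exact:.3e}")
```

The cost issue the abstract raises is visible even here: the estimate requires a second, systematically refined solve, which is what becomes prohibitive for large 3-D unsteady problems and motivates solving ETE alongside the primal equations instead.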
223

Parameter estimates for fractional autoregressive spatial processes

Boissy, Young Hyun 01 April 2001
No description available.
224

Methods for Covariance Matrix Estimation: A Comparison of Shrinkage Estimators in Financial Applications

Spector, Erik January 2024
This paper explores different covariance matrix estimators as applied to geometric Brownian motion, with particular interest in shrinkage estimation methods. In collaboration with the Söderberg & Partners risk management team, the goal is to find an estimator that performs well in low-data scenarios and is robust against erroneous model assumptions, particularly the Gaussian assumption on the stock price distribution. Estimators are compared by two criteria: the Frobenius norm distance between the estimate and the true covariance matrix, and the condition number of the estimate. Comparing four estimators, the sample covariance matrix, the Ledoit-Wolf estimator, the Tyler M-estimator, and a novel Tyler-Ledoit-Wolf (TLW) estimator, this paper concludes that the TLW estimator performs best on both criteria.
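A rough sketch of this comparison methodology is shown below, using scikit-learn's LedoitWolf estimator as a stand-in (the paper's TLW estimator is not reproduced here) and simulated returns from a hypothetical diagonal "true" covariance:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
p, n = 50, 60                                    # many assets, few observations
true_cov = np.diag(rng.uniform(0.5, 2.0, p))     # hypothetical "true" covariance
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)  # simulated log-returns

sample_cov = np.cov(X, rowvar=False)
lw_cov = LedoitWolf().fit(X).covariance_         # shrinks toward a scaled identity

# The paper's two criteria: Frobenius distance to truth, and conditioning.
for name, est in [("sample", sample_cov), ("Ledoit-Wolf", lw_cov)]:
    frob = np.linalg.norm(est - true_cov, "fro")
    cond = np.linalg.cond(est)
    print(f"{name:12s} Frobenius error {frob:6.2f}, condition number {cond:9.1f}")
```

With n barely above p, the sample covariance is nearly singular (huge condition number), while shrinkage trades a little bias for a far better conditioned, more accurate estimate, which is the low-data motivation stated above.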
225

The importance of contextual factors on the accuracy of estimates in project management. An emergence of a framework for more realistic estimation process

Lazarski, Adam January 2014
Successful projects are characterized by the quality of their planning. Good planning that better takes contextual factors into account allows more accurate estimates to be achieved. As an outcome of this research, a new framework composed of best practices has emerged. This comprises an open platform that project experts and practitioners can work with efficiently, and that researchers can develop further as required. The research investigation commenced in the autumn of 2008 with a pilot study and then proceeded through an inductive research process involving a series of eleven interviews: four with well-recognized experts in the field, four with different practitioners, and three group interviews. In addition, a long-running forty-five-day observation was conceptualized, together with other data sources, before culminating in the proposal of a new framework for improving the accuracy of estimates. Furthermore, the emerging framework, and a description of the know-how needed to apply it, were systematically reviewed through the course of four hundred and twenty-five days of meetings, dedicated for the most part to improving the use of a wide range of specific project management tools and techniques and to improving understanding of planning and the estimation process associated with it. This approach constituted an ongoing verification of the research’s findings against project management practice and also served as an invaluable resource for the researcher’s professional and practice-oriented development. The results obtained offered fresh insights into the importance of knowledge management in the estimation process, including the “value of not knowing”, the oft-overlooked phenomenon of underestimation and its potential to co-exist with overestimation, and the use of negative buffer management in the critical chain concept to secure project deadlines. The project also highlighted areas of improvement for future research practice that wishes to make use of an inductive approach in order to achieve a socially agreed framework, rather than a theory alone. In addition, improvements were suggested to the various qualitative tools employed in the customized data analysis process.
226

Making use of a new open-multipurpose framework for more realistic estimation process in project management

Hussain, Zahid I., Lazarski, A.B. January 2016
The current turbulent times call for adaptability, especially in non-repetitive endeavours, which are a vital characteristic of project management. The research, organized along five objectives, commenced in the autumn of 2008 with a pilot study. It then proceeded through an inductive research process involving a series of interviews with well-recognized international experts in the field. In addition, a long-running forty-five-day observation was used before a new framework for improving the accuracy of estimates in project management was proposed. Furthermore, the framework's "know-how to apply" description has been systematically reviewed over the course of four hundred and twenty-five days of meetings. The resulting socially agreed understanding suggests that it is possible to improve the accuracy of estimates with a flexible, adaptable framework that exploits the dependency between the project context and the use of tools and techniques conditioned by it.
227

Contribution to the optimized implementation of the H.264 motion estimator on multi-component platforms by extending the AAA methodology

Feki, Oussama 13 May 2015
Mixed architectures containing both programmable and reconfigurable components can provide the computational performance needed to satisfy the constraints imposed on real-time applications. But implementing and optimizing these real-time applications on such architectures is a complex, time-consuming task. In this context, we propose a rapid prototyping tool targeting this type of architecture. The tool is based on our proposed extension of the Adéquation Algorithme Architecture (AAA) methodology. It automatically performs optimized partitioning and scheduling of the application's operations across the components of the target architecture, and automatically generates the corresponding code. We used this tool to implement the motion estimator of the H.264/AVC standard on an architecture composed of an Altera NIOS II processor and a Stratix III FPGA, allowing us to verify the correct operation of our tool and to validate our automatic mixed-code generator.
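For context, the computational kernel being mapped onto such architectures is block-matching motion estimation. A minimal full-search sketch using the sum of absolute differences (SAD) criterion is given below; it is illustrative only and unrelated to the thesis's optimized implementation:

```python
import numpy as np

def full_search_sad(cur_block, ref_frame, top, left, radius):
    """Exhaustive block matching: return the motion vector (dy, dx) minimizing
    the sum of absolute differences within +/- radius pixels of (top, left)."""
    n = cur_block.shape[0]
    best = (0, 0, np.inf)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref_frame.shape[0] - n and 0 <= x <= ref_frame.shape[1] - n:
                sad = np.abs(cur_block - ref_frame[y:y + n, x:x + n]).sum()
                if sad < best[2]:
                    best = (dy, dx, sad)
    return best

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (64, 64)).astype(np.int32)
cur = np.roll(ref, (2, -3), axis=(0, 1))            # frame shifted by a known motion
dy, dx, sad = full_search_sad(cur[16:32, 16:32], ref, 16, 16, 7)
print(f"motion vector ({dy}, {dx}), SAD {sad}")     # expect (-2, 3) with SAD 0
```

The nested, data-parallel search loops are exactly the kind of regular workload that benefits from being partitioned between a processor and FPGA fabric, which is what the AAA-based tool automates.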
228

Large-scale snowpack estimation using ensemble data assimilation methodologies, satellite observations and synthetic datasets

Su, Hua 03 June 2010
This work comprises a series of studies that contribute to the development and testing of advanced large-scale snow data assimilation methodologies. Compared to existing snow data assimilation methods and strategies, which are limited in domain size and landscape coverage, in the number of satellite sensors used, and in the accuracy and reliability of the product, the present work covers a continental domain, compares single- and multi-sensor data assimilation, and explores uncertainties in parameters and model structure. In the first study, a continental-scale snow water equivalent (SWE) data assimilation experiment is presented, which incorporates Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover fraction (SCF) data into Community Land Model (CLM) estimates via the ensemble Kalman filter (EnKF). The greatest improvements of the EnKF approach are centered in the mountainous West, the northern Great Plains, and the west and east coast regions, with the magnitude of the corrections (relative to using the model alone) exceeding one standard deviation of the SWE climatology in places. Relatively poor performance of the EnKF, however, is found in the boreal forest region. In the second study, snowpack-related parameter and model structure errors are explicitly considered through a group of synthetic EnKF simulations which integrate synthetic datasets with model estimates. The inclusion of a new parameter estimation scheme augments the EnKF performance, for example increasing the Nash-Sutcliffe efficiency of season-long SWE estimates from 0.22 (without parameter estimation) to 0.96. In this study, the model structure error is found to significantly impact the robustness of parameter estimation. In the third study, a multi-sensor snow data assimilation system over North America is developed and evaluated. It integrates both Gravity Recovery and Climate Experiment (GRACE) terrestrial water storage (TWS) and MODIS SCF information into CLM using the ensemble Kalman filter (EnKF) and smoother (EnKS). This GRACE/MODIS data assimilation run achieves significantly better performance than the MODIS-only run in the Saint Lawrence, Fraser, Mackenzie, Churchill & Nelson, and Yukon river basins. These improvements demonstrate the value of integrating complementary information for continental-scale snow estimation.
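A minimal sketch of the stochastic (perturbed-observation) EnKF analysis step underlying such an assimilation system is shown below; the state layout, observation operator, and error statistics are all hypothetical placeholders, not the CLM/MODIS configuration used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_state, n_obs = 30, 100, 20            # hypothetical ensemble/state/obs sizes

X = rng.normal(50.0, 10.0, (n_state, n_ens))   # forecast SWE ensemble (mm), one column per member
H = np.zeros((n_obs, n_state))                 # linear observation operator:
H[np.arange(n_obs), np.arange(n_obs)] = 1.0    # here, observe the first 20 grid cells
R = 4.0 * np.eye(n_obs)                        # observation-error covariance
y = rng.normal(45.0, 2.0, n_obs)               # synthetic observations

# Ensemble-estimated forecast covariance and the Kalman gain.
A = X - X.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Perturbed-observation analysis update, applied member by member.
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
X_a = X + K @ (Y - H @ X)
print(f"prior mean {X.mean():.2f} mm, posterior mean {X_a.mean():.2f} mm")
```

Because the forecast covariance is estimated from the ensemble spread, the same update mechanics extend naturally to multi-sensor settings by stacking rows of the observation operator, which is the spirit of the GRACE/MODIS system described above.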
229

Nonlinear Transformations and Filtering Theory for Space Operations

Weisman, Ryan Michael 14 March 2013
Decisions for asset allocation and protection are predicated upon accurate knowledge of the current operating environment as well as correctly characterizing the evolution of the environment over time. The desired kinematic and kinetic states of objects in question cannot be measured directly in most cases and instead are inferred or estimated from available measurements using a filtering process. Often, nonlinear transformations between the measurement domain and the desired state domain distort the state domain probability density function, yielding a form which does not necessarily resemble the form assumed in the filtering algorithm. The distortion effect must be understood in greater detail and appropriately accounted for so that even if sensors, state estimation algorithms, and state propagation algorithms operate in different domains, they can all be effectively utilized without any information loss due to domain transformations. This research presents an analytical investigation into how nonlinear transformations of stochastic, but characterizable, processes affect state and uncertainty estimation, with direct application to space object surveillance and spacecraft attitude determination. Analysis is performed with attention to the construction of the state domain probability density function, since state uncertainty and correlation are derived from the statistical moments of the probability density function. Analytical characterization of the effect nonlinear transformations impart on the structure of state probability density functions has direct application to conventional nonlinear filtering and propagation algorithms in three areas: (1) understanding how smoothing algorithms used to estimate indirectly observed states impact state uncertainty, (2) justification or refutation of assumed state uncertainty distributions for more realistic uncertainty quantification, and (3) analytic automation of the initial state estimate and covariance in lieu of user tuning. A nonlinear filtering algorithm based upon Bayes' theorem is presented to account for the impact nonlinear domain transformations impart on probability density functions during the measurement update and propagation phases. The algorithm can accommodate different combinations of sensors for state estimation and can also be used to hypothesize system parameters or unknown states from available measurements, because information is appropriately accounted for.
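The core phenomenon, distortion of a Gaussian density under a nonlinear domain transformation, can be illustrated with a quick Monte Carlo check. The polar-to-Cartesian mapping here is a generic stand-in, not the dissertation's measurement model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Gaussian uncertainty in a polar measurement domain (range, bearing)...
r = rng.normal(1000.0, 50.0, 100_000)          # range samples (m)
theta = rng.normal(np.pi / 4, 0.2, 100_000)    # bearing samples (rad)

# ...pushed through the nonlinear polar-to-Cartesian transformation.
x, y = r * np.cos(theta), r * np.sin(theta)

def skew(s):
    """Third standardized moment: zero for a Gaussian, nonzero after distortion."""
    return np.mean(((s - s.mean()) / s.std()) ** 3)

print(f"skewness of x: {skew(x):+.3f}, of y: {skew(y):+.3f}")
# A filter assuming a Gaussian state PDF would misrepresent this curved,
# banana-shaped density; that moment-level distortion is what the work characterizes.
```

The nonzero higher moments of the transformed samples are precisely the statistics the analysis tracks when judging whether an assumed uncertainty distribution remains justified after a domain change.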
230

Spectral methods for probabilistic grammatical inference of rational stochastic languages

Bailly, Raphael 12 December 2011
Our framework is probabilistic grammatical inference: given an unknown distribution p over a set of strings S∗, the task is to infer a probabilistic model for p from a finite sample S of observations assumed to be i.i.d. according to p. Grammatical inference focuses primarily on the structure of the probabilistic model and on the convergence of the parameter estimates. The probabilistic models considered here are weighted automata (WA); the functions they model are called rational series. We first study the possibility of finding an absolute convergence criterion for such series. We then introduce an algorithm, based on spectral methods, for the inference of rational distributions (i.e., distributions modeled by WA), and show how to adapt this algorithm to the closely related domain of rational distributions over trees. Finally, we investigate the use of this spectral algorithm in a more statistical setting, for density estimation tasks.
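A minimal sketch of the spectral route to WA parameters (Hankel matrix, truncated SVD, then standard parameter-recovery formulas) is shown below on a toy one-state distribution. It illustrates the family of methods the abstract refers to, not the thesis's algorithm itself:

```python
import numpy as np

# Toy target: a 1-state weighted automaton defining a distribution over {a, b}*:
# f(x) = 0.5 * 0.3^{#a(x)} * 0.2^{#b(x)}  (stop prob 0.5; emit 'a' w.p. 0.3, 'b' w.p. 0.2).
def f(x):
    return 0.5 * 0.3 ** x.count("a") * 0.2 ** x.count("b")

prefixes = suffixes = ["", "a", "b"]
H = np.array([[f(p + s) for s in suffixes] for p in prefixes])            # Hankel block
H_sigma = {c: np.array([[f(p + c + s) for s in suffixes] for p in prefixes])
           for c in "ab"}                                                  # symbol-shifted blocks
h_S = np.array([f(s) for s in suffixes])   # row of H for the empty prefix
h_P = np.array([f(p) for p in prefixes])   # column of H for the empty suffix

# Rank-d truncated SVD yields the WA parameters (standard spectral formulas).
d = np.linalg.matrix_rank(H)
U, s, Vt = np.linalg.svd(H)
V = Vt[:d].T
pinv = np.linalg.pinv(H @ V)
alpha0 = h_S @ V                            # initial weight vector
alpha_inf = pinv @ h_P                      # final weight vector
A = {c: pinv @ H_sigma[c] @ V for c in "ab"}

def f_hat(x):
    v = alpha0.copy()
    for c in x:
        v = v @ A[c]
    return float(v @ alpha_inf)

for x in ["", "a", "ab", "bba"]:
    print(f"{x!r:6} true {f(x):.5f}  learned {f_hat(x):.5f}")
```

With exact Hankel values and a complete basis, recovery is exact; in practice the Hankel entries are empirical frequencies from the sample S, and the SVD truncation is what gives the method its statistical robustness.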
