  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Analyzing and Modeling Low-Cost MEMS IMUs for use in an Inertial Navigation System

Barrett, Justin Michael 30 April 2014 (has links)
Inertial navigation is a relative navigation technique commonly used by autonomous vehicles to determine their linear velocity, position and orientation in three-dimensional space. The basic premise of inertial navigation is that measurements of acceleration and angular velocity from an inertial measurement unit (IMU) are integrated over time to produce estimates of linear velocity, position and orientation. However, this process is a particularly involved one. The raw inertial data must first be properly analyzed and modeled in order to ensure that any inertial navigation system (INS) that uses the inertial data will produce accurate results. This thesis describes the process of analyzing and modeling raw IMU data, as well as how to use the results of that analysis to design an INS. Two separate INS units are designed using two different micro-electro-mechanical system (MEMS) IMUs. To test the effectiveness of each INS, each IMU is rigidly mounted to an unmanned ground vehicle (UGV) and the vehicle is driven through a known test course. The linear velocity, position and orientation estimates produced by each INS are then compared to the true linear velocity, position and orientation of the UGV over time. Final results from these experiments include quantifications of how well each INS was able to estimate the true linear velocity, position and orientation of the UGV in several different navigation scenarios as well as a direct comparison of the performances of the two separate INS units.
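The integration scheme the abstract describes can be sketched compactly. Below is a minimal planar (2-D) dead-reckoning loop with an assumed IMU sample layout; it illustrates the general technique, not the thesis's actual INS design:

```python
import numpy as np

def dead_reckon(accels, gyros, dt, v0, p0, theta0):
    """Planar dead reckoning: integrate body-frame accelerations and a
    yaw rate into velocity, position and heading estimates."""
    v = np.array(v0, dtype=float)
    p = np.array(p0, dtype=float)
    theta = float(theta0)
    for a_body, omega in zip(accels, gyros):
        theta += omega * dt                      # angular rate -> heading
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])          # body-to-world rotation
        v = v + (R @ np.asarray(a_body, dtype=float)) * dt  # accel -> velocity
        p = p + v * dt                           # velocity -> position
    return v, p, theta
```

Because each step integrates the previous estimates, sensor bias and noise accumulate over time, which is exactly why the raw IMU data must be analyzed and modeled before being fed to an INS.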
2

Error Modeling and Analysis of Star Cameras for a Class of 1U Spacecraft

Fowler, David M. 01 May 2013 (has links)
As spacecraft today become increasingly smaller, the demand for smaller components and sensors rises as well. The smartphone, a cutting-edge consumer technology, has an impressive collection of both sensors and processing capabilities and may have the potential to fill this demand in the spacecraft market. If the technologies of a smartphone can be used in space, the cost of building miniature satellites would drop significantly and give a boost to the aerospace and scientific communities. Concentrating on the problem of spacecraft orientation, this study sets out to determine the capabilities of a smartphone camera acting as a star camera. Orientations determined from star images taken with a smartphone camera are compared to those from higher-quality cameras in order to determine the associated accuracies. The results of the study reveal the abilities of low-cost off-the-shelf imagers in space and give a starting point for future research in the field. The study began with a complete geometric calibration of each analyzed imager so that all comparisons start from the same base. After the cameras were calibrated, image processing techniques were introduced to correct for atmospheric, lens, and image sensor effects. Orientations for each test image are calculated by identifying the stars exposed in each image. Analyses of these orientations allow the overall errors of each camera to be defined and provide insight into the abilities of low-cost imagers.
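Once stars in an image have been identified, an orientation can be computed from pairs of matched unit vectors. A standard textbook approach is the TRIAD algorithm, sketched below; the study's specific attitude-determination method is not stated here, so treat this purely as an illustration:

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """TRIAD attitude determination: return the rotation matrix mapping
    reference-frame vectors (e.g. catalog star directions) to the
    body/camera frame, from two non-parallel vector observations."""
    b1, b2, r1, r2 = (np.asarray(v, dtype=float) / np.linalg.norm(v)
                      for v in (b1, b2, r1, r2))
    def frame(u, v):
        t2 = np.cross(u, v)
        t2 /= np.linalg.norm(t2)
        return np.column_stack([u, t2, np.cross(u, t2)])  # orthonormal triad
    return frame(b1, b2) @ frame(r1, r2).T
```

With more than two identified stars, a least-squares formulation (Wahba's problem, solved by e.g. the QUEST algorithm) yields better accuracy than any single pair.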
3

Reliability-centric probabilistic analysis of VLSI circuits

Rejimon, Thara 01 June 2006 (has links)
Reliability is one of the most serious issues confronting the microelectronics industry as feature sizes scale down from deep submicron to the sub-100-nanometer and nanometer regime. Due to processing defects and increased noise effects, it is almost impractical to produce error-free circuits. As we move beyond 22nm, devices will operate very close to their thermal limit, making gates error-prone: every gate will have a finite propensity to produce erroneous outputs. Low operating voltages and extremely high frequencies further increase this erroneous behavior. These types of errors are not captured by current defect- and fault-tolerance mechanisms, as they might not be present during testing and reconfiguration. Hence, reliability-centric CAD analysis tools are becoming essential, not only to combat defects and hard faults but also errors that are transient and probabilistic in nature. In this dissertation, we address three broad categories of errors. First, we focus on random-pattern testability of logic circuits with respect to hard or permanent faults. Second, we model the effect of a single-event upset (SEU) at an internal node on the primary outputs, capturing the temporal nature of SEUs by adding timing information to our model. Finally, we model dynamic error in nano-domain computing, where reliable computation has to be achieved with "systemically" unreliable devices, making the entire computation process probabilistic rather than deterministic in nature. Our central theoretical scheme relies on Bayesian belief networks: compact, efficient models representing a joint probability distribution in a minimal graphical structure that not only uses conditional independencies to model the underlying probabilistic dependence but also exploits them for computational advantage.
We used both exact and approximate inference, which has let us achieve order-of-magnitude improvements in both accuracy and speed and has enabled us to study larger benchmarks than the state of the art. We are also able to study error sensitivities, explore the design space, characterize the input space with respect to errors and, finally, evaluate the effect of redundancy schemes.
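The notion of every gate having a finite error propensity can be made concrete with a toy example. The sketch below brute-force enumerates gate-flip patterns in a hypothetical two-gate circuit; the dissertation's Bayesian-belief-network machinery exists precisely because such exhaustive enumeration does not scale:

```python
from itertools import product

def output_error_prob(a, b, eps):
    """P(wrong primary output) for NOT(AND(a, b)) when each of the two
    gates independently flips its output with probability eps."""
    correct = 1 - (a & b)                             # ideal circuit output
    p_err = 0.0
    for f1, f2 in product([0, 1], repeat=2):          # all flip patterns
        g1 = (a & b) ^ f1                             # AND stage, maybe flipped
        out = (1 - g1) ^ f2                           # NOT stage, maybe flipped
        p = (eps if f1 else 1 - eps) * (eps if f2 else 1 - eps)
        if out != correct:
            p_err += p
    return p_err
```

For this two-gate chain the output is wrong exactly when one of the two gates flips, so the result is 2·eps·(1−eps) for any input pattern.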
4

Blind Network Tomography

Raza, Muhammad 18 July 2011 (has links)
The parameters required for network monitoring are not directly measurable and must be estimated indirectly by network tomography. Several important open research issues related to network tomography motivated the work in this dissertation, which makes four significant novel contributions to the field: blind techniques for performing network tomography, the modeling of errors in network tomography, improving estimates with multi-metric-based network tomography, and distributed network tomography. All four problems were solved with various blind techniques, including NNMF, SCS, and NTF. The contributions were verified by processing data obtained from laboratory experiments and by examining the correlation between estimated and measured link delays. Evaluation was based on data obtained from several test beds built from networking devices.
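Reading NNMF in its standard sense of non-negative matrix factorization, the basic machinery can be sketched with the classical Lee-Seung multiplicative updates; this is the generic algorithm, not the dissertation's specific tomography formulation:

```python
import numpy as np

def nnmf(V, rank, iters=2000, seed=0):
    """Factor a nonnegative matrix V ~= W @ H using Lee-Seung
    multiplicative updates, which preserve nonnegativity by construction."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update H, holding W fixed
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update W, holding H fixed
    return W, H
```

Roughly speaking, in a tomography setting the measured matrix would hold end-to-end path observations, and the nonnegative factors expose unobserved per-link structure.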
5

Controlled Lagrangian particle tracking: analyzing the predictability of trajectories of autonomous agents in ocean flows

Szwaykowska, Klementyna 13 January 2014 (has links)
Use of model-based path planning and navigation is a common strategy in mobile robotics. However, navigation performance may degrade in complex, time-varying environments under model uncertainty because of loss of prediction ability for the robot state over time. Exploration and monitoring of ocean regions using autonomous marine robots is a prime example of an application where use of environmental models can have great benefits in navigation capability. Yet, in spite of recent improvements in ocean modeling, errors in model-based flow forecasts can still significantly affect the accuracy of predictions of robot positions over time, leading to impaired path-following performance. In developing new autonomous navigation strategies, it is important to have a quantitative understanding of error in predicted robot position under different flow conditions and control strategies. The main contributions of this thesis include development of an analytical model for the growth of error in predicted robot position over time and theoretical derivation of bounds on the error growth, where error can be attributed to drift caused by unmodeled components of ocean flow. Unlike most previous works, this work explicitly includes spatial structure of unmodeled flow components in the proposed error growth model. It is shown that, for a robot operating under flow-canceling control in a static flow field with stochastic errors in flow values returned at ocean model grid points, the error growth is initially rapid, but slows when it reaches a value of approximately twice the ocean model grid size. Theoretical values for mean and variance of error over time under a station-keeping feedback control strategy and time-varying flow fields are computed. Growth of error in predicted vehicle position is modeled for ocean models whose flow forecasts include errors with large spatial scales.
Results are verified using data from several extended field deployments of Slocum autonomous underwater gliders, in Monterey Bay, CA in 2006, and in Long Bay, SC in 2012 and 2013.
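The qualitative shape of this error growth can be reproduced with a toy Monte Carlo in which the unmodeled flow is plain white noise; note that the thesis's model additionally accounts for the spatial structure of the unmodeled flow, which this sketch deliberately omits:

```python
import numpy as np

def mean_error_growth(n_trials=2000, n_steps=200, sigma=0.1, dt=1.0, seed=0):
    """Monte Carlo of |predicted - true| position when each step adds an
    i.i.d. unmodeled drift velocity; returns the mean error at every step."""
    rng = np.random.default_rng(seed)
    drift = rng.normal(0.0, sigma, size=(n_trials, n_steps)) * dt
    err = np.abs(np.cumsum(drift, axis=1))   # accumulated drift per trial
    return err.mean(axis=0)
```

With uncorrelated drift the mean error grows like the square root of elapsed time; spatially correlated flow errors change both the growth rate and the saturation behavior, which is the regime the thesis analyzes.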
6

Evaluating the Accuracy of Pavement Deterioration Forecasts: Application to United States Air Force Airfields

Knost, Benjamin R. January 2016 (has links)
No description available.
7

Error modeling of the carpal wrist

Saccoccio, Gregory Nicholas 13 February 2009 (has links)
In recent years, increased emphasis has been placed on the development of parallel-architecture mechanisms for use as robotic manipulators. Parallel robots offer the benefits of higher load-carrying capacity, greater positioning accuracy and lower weight when compared to serial devices. However, robotic wrist development has traditionally focused on serial mechanisms having a large, spherical workspace and simpler kinematic solutions. The Carpal wrist is a unique parallel mechanism consisting of a fixed base and a movable output plane connected via three serial kinematic chains. The forward and inverse kinematic problems of the Carpal wrist are solved in closed form, making the device suitable for use as a new type of robotic wrist. The closed-form solutions depend on the assumptions that the fixed and moving planes are symmetric about a mid-plane and that the three kinematic chains connecting the planes are identical. This thesis investigates the errors that result when those assumptions are violated due to manufacturing and assembly errors. In the non-ideal model, pose error is found by iteratively solving a system of equations describing the output plane position and orientation and comparing the result with the ideal solution. The error model is a tool for predicting the effects of kinematic parameter errors on the positioning accuracy and reachable workspace of the Carpal wrist. In this work, a general error model is developed and validated for a range of parameter error values. Special-case results are presented for errors in the individual parameters. / Master of Science
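Iteratively solving the perturbed constraint equations, as described above, typically amounts to a Newton iteration. The sketch below applies it to a hypothetical two-link reach constraint with a link-length error `delta`; the Carpal wrist's actual constraint system is more involved, so this only illustrates the numerical scheme:

```python
import numpy as np

def newton_solve(f, x0, tol=1e-10, max_iter=50):
    """Newton's method with a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        J = np.empty((fx.size, x.size))
        h = 1e-7
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - fx) / h           # numeric Jacobian column
        x = x - np.linalg.solve(J, fx)
    return x

def solve_pose(delta, target=(1.2, 0.9)):
    """Joint angles of a 2-link chain (nominal lengths 1.0) reaching
    `target` when link 1 carries a manufacturing error `delta`."""
    L1, L2 = 1.0 + delta, 1.0
    def f(q):
        t1, t2 = q
        return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2) - target[0],
                         L1 * np.sin(t1) + L2 * np.sin(t1 + t2) - target[1]])
    return newton_solve(f, [0.3, 0.5])
```

Comparing `solve_pose(0.0)` against `solve_pose(0.01)` mimics, in miniature, the thesis's approach of quantifying pose error as a function of kinematic parameter error.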
8

Improved Methods for Pharmacometric Model-Based Decision-Making in Clinical Drug Development

Dosne, Anne-Gaëlle January 2016 (has links)
Pharmacometric model-based analysis using nonlinear mixed-effects models (NLMEM) has to date mainly been applied to learning activities in drug development. However, such analyses can also serve as the primary analysis in confirmatory studies, which is expected to bring higher power than traditional analysis methods, among other advantages. Because of the high expertise in designing and interpreting confirmatory studies with other types of analyses and because of a number of unresolved uncertainties regarding the magnitude of potential gains and risks, pharmacometric analyses are traditionally not used as primary analysis in confirmatory trials. The aim of this thesis was to address current hurdles hampering the use of pharmacometric model-based analysis in confirmatory settings by developing strategies to increase model compliance to distributional assumptions regarding the residual error, to improve the quantification of parameter uncertainty and to enable model prespecification. A dynamic transform-both-sides approach capable of handling skewed and/or heteroscedastic residuals and a t-distribution approach allowing for symmetric heavy tails were developed and proved relevant tools to increase model compliance to distributional assumptions regarding the residual error. A diagnostic capable of assessing the appropriateness of parameter uncertainty distributions was developed, showing that currently used uncertainty methods such as bootstrap have limitations for NLMEM. A method based on sampling importance resampling (SIR) was thus proposed, which could provide parameter uncertainty in many situations where other methods fail such as with small datasets, highly nonlinear models or meta-analysis. SIR was successfully applied to predict the uncertainty in human plasma concentrations for the antibiotic colistin and its prodrug colistin methanesulfonate based on an interspecies whole-body physiologically based pharmacokinetic model. 
Lastly, strategies based on model-averaging were proposed to enable full model prespecification and proved to be valid alternatives to standard methodologies for studies assessing the QT prolongation potential of a drug and for phase III trials in rheumatoid arthritis. In conclusion, improved methods for handling residual error, parameter uncertainty and model uncertainty in NLMEM were successfully developed. As confirmatory trials are among the most demanding in terms of patient-participation, cost and time in drug development, allowing (some of) these trials to be analyzed with pharmacometric model-based methods will help improve the safety and efficiency of drug development.
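The SIR procedure itself is short: draw from a proposal distribution (often an asymptotic normal approximation of the parameter estimates), weight each draw by the ratio of target to proposal density, then resample in proportion to the weights. A generic sketch, not the thesis's NLMEM-specific implementation:

```python
import numpy as np

def sir(target_pdf, draw_proposal, proposal_pdf, n_draw, n_keep, seed=0):
    """Sampling importance resampling; the pdfs may be unnormalized since
    the importance weights are renormalized before resampling."""
    rng = np.random.default_rng(seed)
    xs = draw_proposal(rng, n_draw)
    w = target_pdf(xs) / proposal_pdf(xs)     # importance weights
    w /= w.sum()
    idx = rng.choice(n_draw, size=n_keep, replace=True, p=w)
    return xs[idx]
```

A poorly matched proposal concentrates the weights on a few draws and degenerates the resample, which is why diagnosing the adequacy of the uncertainty distribution matters in practice.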
9

Contributions to Lane Marking Based Localization for Intelligent Vehicles / Contribution à la localisation de véhicules intelligents à partir de marquage routier

Lu, Wenjie 09 February 2015 (has links)
Autonomous vehicle (AV) applications and Advanced Driving Assistance Systems (ADAS) rely on scene-understanding processes that allow high-level systems to carry out decision making. For such systems, the localization of a vehicle evolving in a structured dynamic environment constitutes a complex problem of crucial importance. Our research addresses scene structure detection, localization and error modeling. Taking into account the large functional spectrum of vision systems, the accessibility of Open Geographical Information Systems (GIS) and the wide presence of Global Positioning Systems (GPS) onboard vehicles, we study the performance and reliability of a vehicle localization method combining these information sources. Monocular vision-based lane marking detection provides key information about the scene structure. Using an enhanced multi-kernel framework with hierarchical weights, the proposed parametric method performs the detection and tracking of the ego-lane marking in real time. A self-assessment indicator quantifies the confidence of this information source. We conduct our investigations in a localization system that tightly couples GPS, GIS and lane markings in the probabilistic framework of a particle filter (PF). To this end, we propose using lane markings not only during the map-matching process but also to model the expected ego-vehicle motion. The reliability of the localization system in the presence of unusual errors from the different information sources is enhanced by taking into account different confidence indicators. This mechanism is later employed to identify error sources.
This research concludes with an experimental validation of the proposed methods in real driving situations. They were tested, and their performance was quantified, using an experimental vehicle and publicly available datasets.
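The predict-update-resample cycle of a bootstrap particle filter, the probabilistic core of such a localization system, fits in a few lines; this 1-D toy stands in for the thesis's GPS/GIS/lane-marking fusion:

```python
import numpy as np

def pf_step(particles, control, z, meas_fn, motion_noise, meas_noise, rng):
    """One bootstrap particle filter cycle: predict with the motion model,
    weight by the measurement likelihood, resample proportionally."""
    # Predict: propagate every particle through the motion model plus noise.
    particles = particles + control + rng.normal(0.0, motion_noise,
                                                 particles.shape)
    # Update: Gaussian likelihood of the observed measurement z.
    w = np.exp(-0.5 * ((meas_fn(particles) - z) / meas_noise) ** 2)
    w /= w.sum()
    # Resample in proportion to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

In the thesis's setting the measurement model instead couples GPS fixes, the GIS map and detected lane markings, with per-source confidence indicators guarding against unusual errors.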
10

Software Development Process and Reliability Quantification for Safety Critical Embedded Systems Design

Lockhart, Jonathan A. 01 October 2019 (has links)
No description available.
