
Calibration-free image sensor modelling: deterministic and stochastic

Lim, Shen Hin, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW, January 2009
This dissertation presents a calibration-free image sensor modelling process applicable to localisation, such that the resulting models are robust to changes in the environment and in sensor properties. The modelling process consists of two distinct parts, a deterministic and a stochastic technique, and is achieved using mechanistic deconvolution, where the sensor's mechanical and electrical properties are utilised. In the deterministic technique, the sensor's effective focal length is first estimated from known lens properties and used to approximate the lens system by a thick lens and its properties. The aperture stop position offset, one of the thick lens properties, then yields a new factor, the calibration-free distortion effects factor, which characterises the distortion effects inherent in the sensor. Using this factor and the given pan and tilt angles of an arbitrary plane of view, corrected image data is generated that complies with the image sensor constraints modified by the pan and tilt angles. In the stochastic technique, the stochastic focal length and distortion effects factor are first approximated using tolerances of the mechanical and electrical properties, and then utilised to develop the observation likelihood required for recursive Bayesian estimation. The proposed modelling process reduces dependency on image data and, as a result, does not require an experimental setup or calibration. An experimental setup was nevertheless constructed to conduct extensive analysis of the accuracy of the proposed modelling process and its robustness to changes in sensor properties and in pan and tilt angles without recalibration. Compared with a conventional modelling process on three sensors with different specifications, it achieved similar accuracy with one-seventh the number of iterations.
The developed model also proved robust, reducing errors by a factor of five relative to the conventional modelling process. Using an area coverage method and one-step lookahead as control strategies, the stochastic sensor model was applied in a recursive Bayesian estimation application and again compared with a conventional approach. The proposed model provided better target state estimates and achieved higher efficiency and reliability than the conventional approach.
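The distortion correction at the heart of the deterministic technique can be illustrated with a generic first-order radial model. Note that `undistort_radial`, the coefficient `k1`, and the pinhole parameters below are illustrative stand-ins, not the dissertation's calibration-free distortion effects factor, which is derived from thick-lens properties rather than fitted:

```python
import numpy as np

def undistort_radial(points, k1, f, center):
    """Correct radial lens distortion with a one-term Brown model.
    points: (N, 2) distorted pixel coordinates."""
    p = (np.asarray(points, dtype=float) - center) / f   # normalise to lens frame
    r2 = np.sum(p**2, axis=1, keepdims=True)             # squared radial distance
    corrected = p * (1.0 + k1 * r2)                      # first-order radial term
    return corrected * f + center                        # back to pixel frame

# A point at the image centre is unaffected by radial distortion.
centre = np.array([320.0, 240.0])
out = undistort_radial([[320.0, 240.0]], k1=-0.2, f=800.0, center=centre)
```

With a negative `k1` (barrel distortion), off-centre points are pulled toward the image centre, which is the qualitative behaviour any radial model must reproduce.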

A Probabilistic Approach to Conceptual Sensor Modeling

Sonesson, Mattias, January 2005
This report develops a method for probabilistic conceptual sensor modeling. The idea is to generate probabilities for detection, recognition and identification based on a few simple factors. The focus lies on FLIR sensors and thermal radiation, although other wavelength bands are also discussed. The model can be used as a whole, or one or several of its parts can be used to create a simpler model. The core of the model is based on the Johnson criteria, which use resolution as the input parameter. Some extensions that model other factors are also implemented. Finally, the possibility of using this model for sensors other than FLIR is briefly discussed.
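The Johnson criteria at the core of such a model are commonly expressed as a target transfer probability function of resolvable cycles. The exponent formula and the `n50` cycle counts below are widely quoted textbook values, offered as an illustration rather than the specific numbers used in this report:

```python
def johnson_probability(n_cycles, n50):
    """Empirical Johnson-criteria probability as a function of the number
    of resolvable cycles across a target. n50 is the cycle count that
    yields a 50% probability of performing the task."""
    ratio = n_cycles / n50
    e = 2.7 + 0.7 * ratio            # commonly used empirical exponent
    return ratio**e / (1.0 + ratio**e)

# Commonly quoted N50 values (cycles): detection ~1.0, recognition ~4.0,
# identification ~6.4. Treat these as illustrative assumptions.
p_detect = johnson_probability(2.0, n50=1.0)
```

By construction the function passes through 0.5 at `n_cycles == n50` and rises monotonically with resolution, which is the behaviour the conceptual model builds on.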

Occupancy grid mapping using stereo vision

Burger, Alwyn Johannes, March 2015
Thesis (MEng), Stellenbosch University, 2015. This thesis investigates the use of stereo vision sensors for dense autonomous mapping. It characterises and analyses the errors made during the stereo matching process so that measurements can be correctly integrated into a 3D grid-based map. Maps are required for navigation and obstacle avoidance on autonomous vehicles in complex, unknown environments. The safety of the vehicle as well as of the public depends on an accurate mapping of the vehicle's environment, which can be problematic when inaccurate sensors such as stereo vision are used. Stereo vision sensors are relatively cheap and convenient, however, and a system that can create reliable maps using them would be beneficial. A literature review suggests that occupancy grid mapping poses an appropriate solution, offering dense maps that can be extended incrementally with additional measurements. It forms a grid representation of the environment by dividing it into cells and assigns to each cell a probability of being occupied. These probabilities are updated with measurements using a sensor model that relates measurements to occupancy probabilities. Numerous forms of these sensor models exist, but none of them appear to be based on meaningful assumptions and sound statistical principles. Furthermore, they all seem to be limited by an assumption of unimodal, zero-mean Gaussian measurement noise. Therefore, we derive a principled inverse sensor model (PRISM) based on physically meaningful assumptions. This model can approximate any realistic measurement error distribution using a Gaussian mixture model (GMM). Training a GMM requires a characterisation of the measurement errors, which depend on the environment as well as on the stereo matching technique used. A method is therefore presented for fitting a GMM to the error distribution of a sensor using measurements and ground truth.
Since the derived principled inverse sensor model may be considered theoretically correct under its assumptions, we use it to evaluate the approximations made by other models from the literature that are designed for execution speed. We show that at close range these models generally offer good approximations, which worsen as measurement distance increases. We test our model by creating maps using synthetic and real-world data. Comparing its results to those of sensor models from the literature suggests that our model calculates occupancy probabilities reliably. Since our model captures the limited measurement range of stereo vision, we conclude that more accurate sensors are required for mapping at greater distances.
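The per-cell occupancy update that all of these sensor models feed into is typically run in log-odds form. The sketch below is the standard binary Bayes formulation with an assumed measurement probability of 0.7, not the thesis's principled GMM-based model:

```python
import numpy as np

def logodds(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def update_cell(l_prev, p_meas, l0=0.0):
    """Standard binary Bayes (log-odds) occupancy update:
    l_t = l_{t-1} + logodds(inverse_sensor_model) - l_prior."""
    return l_prev + logodds(p_meas) - l0

def probability(l):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))

# Two 'occupied' readings (p = 0.7) push a cell from 0.5 toward occupied.
l = 0.0
for _ in range(2):
    l = update_cell(l, 0.7)
p = probability(l)
```

Working in log-odds turns the product of measurement likelihoods into a sum, which keeps repeated updates numerically stable and cheap per cell.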

Feasibility Design of a Continuous Insulin Sensor from Lessons Learned using Glucose Sensors, and Point of Care Insulin Sensors

January 2018
Glucose sensors have undergone many paradigm shifts, from urine tests to point-of-care blood measurements, and are now approved for implantation. This review covers various aspects of these sensors, ranging from types of surface chemistry and electron transduction to the algorithms and filters used to condition and interpret the transduced signal. Focus is given to Dr. Heller's work on redox mediators, as well as Dr. Sode's advances in direct electron transfer. The basic process of designing sensors is described, along with the errors that can arise in glucose sensor use. Finally, a brief look at future trends in glucose sensors is given from both a device and an organic viewpoint. Using this history, the initial point-of-care insulin sensor published by LaBelle's lab is critically re-evaluated, and the possibility of continuously measuring insulin is researched. To better understand the design of a continuous insulin sensor, a basic kinetic model is set up and run through a design of experiments to optimize the binding kinetics of an ideal insulin molecular recognition element. In addition, the phenomenon of two electrochemical impedance spectroscopy peaks is analyzed, and two theories are suggested and demonstrated to a modest level. / Masters Thesis, Biomedical Engineering, 2018
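The basic 1:1 binding kinetics behind such a molecular recognition element can be sketched as a fractional-occupancy model. The rate constants and concentration below are illustrative assumptions, not the lab's measured values:

```python
import numpy as np

def fractional_occupancy(t, conc, k_on, k_off):
    """Fractional occupancy of a 1:1 binding molecular recognition
    element approaching equilibrium, from the rate equation
    dtheta/dt = k_on * C * (1 - theta) - k_off * theta."""
    k_obs = k_on * conc + k_off            # observed relaxation rate
    theta_eq = k_on * conc / k_obs         # equilibrium occupancy
    return theta_eq * (1.0 - np.exp(-k_obs * t))

# Illustrative values only: 1 nM analyte, k_on = 1e6 /M/s, k_off = 1e-3 /s.
theta = fractional_occupancy(t=100.0, conc=1e-9, k_on=1e6, k_off=1e-3)
```

Sweeping `k_on` and `k_off` over a grid of candidate values is one simple way to run the kind of design-of-experiments optimization the abstract describes.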

Automatic digital surface model generation using graphics processing unit

Van der Merwe, Dirk Jacobus, 05 June 2012
M.Ing. / Digital Surface Models (DSMs) are widely used in the earth sciences for research, visualizations, construction, etc. Generating a DSM for a specific area has traditionally required specialized equipment and personnel, making it a costly and time-consuming exercise. Image processing has become a viable technique for generating terrain models since hardware improvements provided adequate processing power for such a task. DSMs can be generated from stereo imagery, usually obtained from a remote sensing platform. The core component of a DSM-generating system is the image matching algorithm. Even though a variety of algorithms exist that can generate DSMs, the computation is complex and tends to take considerable time to complete. In order to generate DSMs faster, an alternative processing platform for terrain model generation has been investigated. The Graphics Processing Unit (GPU) is usually used in the gaming industry to manipulate display data and render it to a computer screen; its architecture is designed to manipulate large amounts of floating point data. The scientific community has begun using the processing power of the GPU for technical computing, hence the term General Purpose computing on a Graphics Processing Unit (GPGPU). The GPU is investigated as an alternative processing platform for the image matching procedure, since its processing capability is much higher than the CPU's, but only for a conditioned set of input data. A matching algorithm derived from the GC3 algorithm has been implemented on both a CPU platform and a GPU platform in order to investigate the viability of a GPU processing alternative. The algorithm makes use of a Normalized Cross Correlation similarity measure and the image acquisition geometry contained in the sensor model to obtain conjugate point matches in the two source images.
The results of the investigation indicated an improvement of up to 70% in the processing time required to generate a DSM. The improvement varied from 70% down to some cases where the GPU took longer than the CPU. The accuracy of the automatic DSM generation could not be clearly determined, since only poor-quality reference data was available. It is however shown that the DSMs generated on both the CPU and GPU platforms relate to the reference data and correlate with each other. The discrepancies between the CPU and GPU results are low enough to show that GPU processing is beneficial, with negligible drawbacks in terms of accuracy. The GPU will definitely provide superior processing capabilities for DSM generation over a CPU implementation if the matching algorithm is specifically designed to cater for the benefits and limitations of the GPU.
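The Normalized Cross Correlation similarity measure at the core of the matching algorithm can be sketched as follows, as a minimal NumPy version for whole patches that ignores the epipolar geometry handled by the sensor model:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches.
    Returns a score in [-1, 1]; 1.0 means a perfect (affine) match."""
    a = patch_a - patch_a.mean()           # remove local brightness
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom)

a = np.array([[1.0, 2.0], [3.0, 4.0]])
score_same = ncc(a, a)            # identical patches score 1.0
score_scaled = ncc(a, 2 * a + 5)  # NCC is invariant to gain and offset
```

This gain-and-offset invariance is what makes NCC attractive for stereo imagery, where the two views rarely share identical exposure.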

Parking Map Generation and Tracking Using Radar : Adaptive Inverse Sensor Model / Parkeringskartagenerering och spårning med radar

Mahmoud, Mohamed, January 2020
Radar map generation using a binary Bayes filter, commonly known as an Inverse Sensor Model, which translates sensor measurements into occupancy estimates for grid cells, is a classical problem in several fields. This work focuses on developing an Inverse Sensor Model for parking spaces using a 77 GHz FMCW (Frequency Modulated Continuous Wave) automotive radar that can handle the varying geometrical complexity of a parking environment. There are two main types of Inverse Sensor Models, each with its own assumptions about sensor noise. One is fixed, similar to a lookup table, and is constructed from a combination of sensor-specific characteristics, experimental data and empirically determined parameters. The other is learned from ground-truth labelling of the grid map cells, to capture the desired Inverse Sensor Model. This work proposes a new Inverse Sensor Model that combines the computational advantage of a fixed Inverse Sensor Model with occupancy estimation derived from ground-truth labelling. The occupancy grid mapping problem is first derived from the well-known SLAM (Simultaneous Localization and Mapping) problem using binary Bayes filtering. The Adaptive Inverse Sensor Model is then presented: it uses fixed occupancy estimation but adapts the estimated occupancy shape based on a statistical analysis of the distribution of radar measurements across the acquisition environment. A pre-study of the noise characteristics of the radar used in this work provides a common Inverse Sensor Model as a benchmark. The drawbacks of this benchmark model are then addressed step by step within the Adaptive Inverse Sensor Model, to obtain an optimal grid map occupancy estimator.
Finally, the maps generated with the benchmark and the Adaptive Inverse Sensor Model are compared, showing that when its assumptions are fulfilled, the Adaptive Inverse Sensor Model offers a more visually appealing map than the benchmark.
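A lookup-table style fixed Inverse Sensor Model of the kind used as the benchmark can be sketched for a single radar beam. The probabilities and range resolution below are assumed values for illustration, not the thesis's calibrated parameters:

```python
import numpy as np

def fixed_ism(cell_range, hit_range, resolution=0.5,
              p_free=0.35, p_occ=0.7, p_unknown=0.5):
    """Fixed inverse sensor model for one beam: cells short of the
    return are likely free, the cell containing the return is likely
    occupied, and cells beyond it stay unknown."""
    if cell_range < hit_range - resolution / 2:
        return p_free
    if cell_range <= hit_range + resolution / 2:
        return p_occ
    return p_unknown

# Occupancy probabilities along a beam with a return at 4.0 m.
probs = [fixed_ism(r, hit_range=4.0) for r in np.arange(0.25, 6.0, 0.5)]
```

The adaptive variant described above keeps this cheap per-cell lookup but reshapes the occupied region according to the measured distribution of radar returns, rather than using a fixed window.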

Shape sensing of deformable objects for robot manipulation / Mesure et suivi de la forme d'objets déformables pour la manipulation robotisée

Sanchez Loza, Jose Manuel, 24 May 2019
Deformable objects are ubiquitous in our daily lives. On a given day, we manipulate clothes into uncountable configurations to dress ourselves, tie our shoelaces, pick up fruits and vegetables without damaging them, and fold receipts into our wallets. All these tasks involve manipulating deformable objects and can be performed by an able person without any trouble; robots, however, have yet to reach the same level of dexterity. Unlike rigid objects, which robots can now handle with close to human performance in some tasks, deformable objects must be controlled not only to account for their pose but also for their shape. This extra constraint, controlling an object's shape, renders techniques used for rigid objects largely inapplicable to deformable objects. Furthermore, the behaviour of deformable objects differs widely among them; for example, the shape of a cable or of clothes is significantly affected by gravity, while gravity might not affect the configuration of other deformable objects such as food products. Thus, different approaches have been designed for specific classes of deformable objects.
In this thesis we seek to address these shortcomings by proposing a modular approach to sensing the shape of an object while it is manipulated by a robot. The modularity of the approach is inspired by a programming paradigm that has increasingly been applied to software development in robotics and aims to achieve more general solutions by separating functionalities into components. These components can then be interchanged based on the specific task or object at hand, providing a modular way to sense the shape of deformable objects. To validate the proposed pipeline, we implemented three different applications. Two applications focused exclusively on estimating the object's deformation using either tactile or force data, and the third consisted of controlling the deformation of an object. An evaluation of the pipeline, performed on a set of elastic objects for all three applications, shows promising results for an approach that makes no use of visual information and could therefore be greatly improved by the addition of this modality.

Information-driven Sensor Path Planning and the Treasure Hunt Problem

Cai, Chenghui, 25 April 2008
This dissertation presents a basic information-driven sensor management problem, referred to as the treasure hunt, that is relevant to mobile-sensor applications such as mine hunting, monitoring, and surveillance. The objective is to classify or infer one or more fixed targets, or treasures, located in an obstacle-populated workspace by planning the path and measurement sequence of a robotic sensor installed on a mobile platform. The workspace is represented by a connectivity graph, where each node represents a possible sensor deployment and the arcs represent possible sensor movements. A methodology is developed for planning the sensing strategy of the deployed robotic sensor. The sensing strategy includes the robotic sensor's path, because the path determines which targets are measurable given a bounded field of view. Existing path planning techniques are not directly applicable to robots whose primary objective is to gather sensor measurements. Thus, this dissertation develops a novel approximate cell-decomposition approach in which obstacles, targets, and the sensor's platform and field of view are represented as closed and bounded subsets of a Euclidean workspace. The approach constructs a connectivity graph with observation cells that is pruned and transformed into a decision tree, from which an optimal sensing strategy can be computed. It is shown that an additive incremental-entropy function can be used to efficiently compute the expected information value of the measurement sequence over time. The methodology is applied to a robotic landmine classification problem and to the board game CLUE®. In the landmine detection application, the optimal strategy of a robotic ground-penetrating radar is computed based on prior remote measurements and environmental information.
Extensive numerical experiments show that this methodology outperforms shortest-path, complete-coverage, random, and grid search strategies, and is applicable to non-overpass-capable platforms that must avoid targets as well as obstacles. The board game CLUE® is shown to be an excellent benchmark for the treasure hunt problem. The test results show that a player implementing the strategies developed in this dissertation outperforms players implementing only Bayesian networks, Q-learning, or constraint satisfaction, as well as human players.
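The expected information value of a single measurement can be sketched as an expected entropy reduction. This is the generic discrete Bayes formulation, not the dissertation's additive incremental-entropy function over measurement sequences:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution, in bits."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

def expected_information_value(prior, likelihoods):
    """Expected entropy reduction from one measurement.
    likelihoods[z][x] = P(z | x); prior[x] = P(x)."""
    prior = np.asarray(prior, dtype=float)
    like = np.asarray(likelihoods, dtype=float)
    p_z = like @ prior                          # marginal P(z)
    h_post = 0.0
    for z, pz in enumerate(p_z):
        if pz > 0:
            posterior = like[z] * prior / pz    # Bayes update for outcome z
            h_post += pz * entropy(posterior)
    return entropy(prior) - h_post

# A perfectly informative binary measurement removes the full 1 bit
# of uncertainty in a uniform two-class prior.
gain = expected_information_value([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]])
```

Scoring each candidate deployment by a quantity like `gain`, minus its travel cost, is the general pattern behind information-driven path planning.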

A Comparative Study of Kalman Filter Implementations for Relative GPS Navigation

Fritz, Matthew Peyton, December 2009
Relative global positioning system (GPS) navigation is currently used for autonomous rendezvous and docking of two spacecraft as well as for formation flying applications. GPS receivers deliver measurements to flight software, which uses this information to estimate the current states of the spacecraft. The success of autonomous proximity operations in the presence of an uncertain environment and noisy measurements depends primarily on navigation accuracy. This thesis presents the implementation and calibration of a spaceborne GPS receiver model, a visibility analysis for multiple GPS antenna cone angles, the implementation of four different extended Kalman filter architectures, and a comparison of the advantages and disadvantages of each filter for relative GPS navigation. A spaceborne GPS model is developed to generate simulated GPS measurements for a spacecraft in any orbit around the Earth below the GPS constellation. Position and velocity estimation algorithms for GPS receivers are developed and implemented. A visibility analysis is performed to determine the number of visible satellites throughout the rendezvous. Multiple constant fields of view are analyzed and the results compared to develop an understanding of how the visible GPS constellation evolves during proximity operations; the comparison is used to choose a field of view with adequate satellite coverage. The advantages and disadvantages of the relative navigation architectures are evaluated in a trade study over several parameters. This thesis finds that a reduced pseudorange filter provides the best overall performance in both relative and absolute navigation, with less computational cost than the slightly more accurate pseudorange filter. A relative pseudorange architecture experiences complications in multipath-rich environments and performs well only in relative navigation.
A position-velocity architecture performs well in absolute state estimation but worst of the four filters in relative state estimation.
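The measurement update shared by all of these filter architectures reduces, in the linear case, to the standard Kalman filter equations. The scalar numbers below are illustrative only, not values from the thesis's GPS simulations:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """One linear Kalman filter measurement update."""
    y = z - H @ x                          # innovation (measurement residual)
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y                      # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x_new, P_new

# Scalar example: one position state, one direct noisy measurement.
x = np.array([0.0])        # prior state estimate
P = np.array([[4.0]])      # prior variance
H = np.array([[1.0]])      # measurement maps state directly
R = np.array([[1.0]])      # measurement noise variance
x_new, P_new = kf_update(x, P, np.array([2.0]), H, R)
```

An extended Kalman filter replaces `H` with the Jacobian of the (nonlinear) pseudorange measurement model evaluated at the current estimate, which is the essential difference among the architectures compared above.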
