31

People flow maps for socially conscious robot navigation

Fox O'Loughlin, Rex January 2023 (has links)
With robots becoming increasingly common in human-occupied spaces, there is a growing body of research into socially conscious robot navigation. A robot must predict and anticipate the movements of people around it in order to navigate in a socially acceptable way, or it may face rejection and therefore failure. This motion prediction is often achieved using neural networks or other artificial-intelligence methods to predict the trajectories or flow of people, which requires large amounts of expensive and time-consuming real-world data collection. Many recent studies have therefore attempted to create simulated human trajectory data. A variety of methods have been used to this end, chiefly path planning algorithms and pedestrian simulators, but no study has evaluated these methods against each other and against real-world data. This thesis compares the ability of two path planning algorithms (A* and RRT*) and a pedestrian simulator (PTV Vissim) to produce realistic maps of dynamics. It concludes that A*-based path planners are the best choice when balancing the ability to replicate realistic people flow with the ease of generating large amounts of data.
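As a rough illustration of how an A*-based planner can mass-produce synthetic trajectories, the minimal sketch below plans shortest paths on an occupancy grid; the grid layout, 8-connected motion model, and start/goal choices are illustrative assumptions, not the setup used in the thesis.

```python
# Minimal A* on a 2D occupancy grid, as one way to generate synthetic
# pedestrian trajectories (grid and endpoints are illustrative assumptions).
import heapq
import itertools
import math

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = occupied; start/goal: (row, col) tuples."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: math.hypot(p[0] - goal[0], p[1] - goal[1])  # admissible heuristic
    tie = itertools.count()            # tie-breaker so the heap never compares nodes
    open_set = [(h(start), next(tie), 0.0, start, None)]
    came_from = {}
    best_g = {start: 0.0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:          # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:               # walk parents back to recover the trajectory
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]:
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                ng = g + math.hypot(dr, dc)
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, node))
    return None  # goal unreachable

# Sampling many random start/goal pairs over one map is a cheap way to
# mass-produce trajectories for building a people-flow ("map of dynamics") model.
grid = [[0] * 10 for _ in range(10)]
for c in range(2, 8):
    grid[5][c] = 1                     # a wall forcing a detour
print(astar(grid, (0, 0), (9, 9)))
```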
32

Enhanced Surveillance and Conflict Prediction for Airport Apron Operation using LiDAR Sensing

Braßel, Hannes 11 September 2024 (has links)
This dissertation is situated at the intersection of aviation safety, sensor technology, and computational modeling, and aims to increase airport apron safety by developing and testing optical sensing methods for automated apron surveillance. Central to this research is the utilization of Light Detection and Ranging (LiDAR) technology combined with computer vision algorithms for automatic scene understanding, complemented by tracking, motion prediction, and accident prediction functionalities for dynamic entities.

Serving as the impetus for this research, an in-depth empirical analysis of 1220 aviation ground accident reports from 2008 to 2017 shows that 76 % of these occurrences could have been visually observed. Notably, the data reveals that 44 % of events indicate human failure resulting from deficiencies in situational awareness among the involved parties. These findings highlight the opportunity to increase airport safety by integrating automated surveillance methodologies.

However, the ambitious endeavor of transitioning airport surveillance tasks to an automated system presents three main challenges. First, algorithms for automatic scene understanding rely on training datasets with ground truth annotations, i.e., semantic information representing real-world conditions. Such datasets do not exist for airport apron environments; creating one would involve scanning and manually annotating every aircraft type, ground vehicle, or object from multiple perspectives in every conceivable pose, velocity, and weather condition at multiple airports. Second, developing accurate LiDAR-based tracking algorithms for aircraft requires time-synchronized true states for validation, which are not available. Third, recognizing visual features for accident prediction requires corresponding sensor data, which cannot be acquired in sufficient quantities given aviation's high safety standards and security-related access limitations to airport airside.

Thus, this dissertation addresses these challenges by developing a simulation environment that provides training data and a testing framework for developing recognition models and tracking algorithms for real-world applications, using Dresden International Airport as the test field. This simulation environment includes 3D models of the test field, kinematic descriptions of aircraft ground movements, and a sensor model replicating LiDAR sensor behavior under different weather conditions. The simulation environment obviates real-world data acquisition and manual annotation by generating synthetic LiDAR scans that are automatically annotated using context knowledge inherent to the simulation framework. Consequently, it enables training recognition models on synthetic data that are applicable to real-world data. The simulation environment can be adapted to any airport by modifying the static background elements, thus addressing the first challenge. Sensor positioning within the simulation is fully customizable, and the developed motion models are formulated in a general manner, ensuring their functionality across any movement network. For validation purposes, a real LiDAR dataset was collected at the test airport and manually annotated. Two competing recognition models were trained: one employing real-world training data and the other leveraging synthetic training data. These models were tested on a real test dataset not seen during training.
The results show that the model trained on synthetic data achieves recognition performance comparable to, or even superior to, the model trained on real data. Specifically, it demonstrates improved recognition of aircraft and weather-induced noise within the real test dataset. This enhanced performance is attributed to an overrepresentation of aircraft and weather effects in the synthetic training data. The semantic segmentation model assigns a semantic label to each point of the point cloud; tracking algorithms leverage this information to estimate the pose of objects. These estimates are crucial for verifying compliance with operational rules and for predicting aircraft movement. The object positioning and orientation data inherent to the simulation enable the development and evaluation of tracking algorithms, addressing the second challenge. This research introduces an adaptive point sampling method for aircraft tracking that considers the velocity and spatial relationships of the tracked object, enhancing localization accuracy compared to conventional naïve sampling strategies in a simulated test dataset. Finally, addressing the third challenge, the empirical study of accidents and incidents informs the generation of accident scenarios within the simulation environment. A kinematic motion prediction model, coupled with a deep learning architecture, is instrumental in establishing classifiers that distinguish between normal conditions and accident patterns. Evaluations conducted on a simulated test dataset demonstrate considerable promise for accident and incident prediction while maintaining a minimal rate of false positives. The classifier delivered lead times of up to 12 s before the precipitating event, facilitating adequate warnings for emergency braking in 80 % of the ground collision cases and 97 % of the scenarios involving infringements of holding restrictions within a test dataset. These results demonstrate transformative potential for real-world applications, setting a new benchmark for preemptive measures in airport ground surveillance.
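As a rough illustration of the kinematic side of such a conflict predictor, the hedged sketch below extrapolates two tracked apron objects under a constant-velocity assumption and reports the lead time before their predicted separation violates a safety threshold. The threshold, time step, and scenario values are illustrative assumptions, not parameters from the dissertation, and the learned classifier component is omitted.

```python
# Hedged sketch: constant-velocity extrapolation of two tracked apron objects
# and the lead time before their predicted separation drops below a safety
# threshold. The 10 m threshold and 0.1 s step are illustrative assumptions.
import numpy as np

def conflict_lead_time(p1, v1, p2, v2, threshold=10.0, horizon=30.0, dt=0.1):
    """Positions [m] and velocities [m/s] as 2D arrays; returns seconds until
    the predicted separation first drops below threshold, or None."""
    for t in np.arange(0.0, horizon, dt):
        gap = np.linalg.norm((p1 + v1 * t) - (p2 + v2 * t))
        if gap < threshold:
            return t
    return None

# A taxiing aircraft heading east and a ground vehicle crossing its path:
lead = conflict_lead_time(np.array([0.0, 0.0]),   np.array([8.0, 0.0]),
                          np.array([80.0, -50.0]), np.array([0.0, 5.0]))
print(f"predicted conflict in {lead:.1f} s" if lead is not None else "no conflict")
```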
33

Suivi en temps réel de tumeurs cancéreuses par résonance magnétique et applications à la radiothérapie / Real-time tracking of cancerous tumors by magnetic resonance and applications to radiotherapy

Bourque, Alexandra 08 1900 (has links)
No description available.
34

Définition des mouvements sismiques "au rocher" / Definition of "rock" motion

Laurendeau, Aurore 16 July 2013 (has links)
L'objectif de cette thèse vise à améliorer la définition des vibrations (« mouvement sismique ») sur des sites « durs » (sédiments raides ou rochers) liés à des scénarios (séismes de magnitude entre 5 et 6.5, distances inférieures à 50 kilomètres) représentatifs du contexte métropolitain français. Afin de contraindre ces mouvements sismiques sur sites « durs », une base de données accélérométriques a été construite, à partir des enregistrements accélérométriques japonais K-NET et KiK-net qui ont l'avantage d'être publiques, nombreux et de grande qualité. Un modèle de prédiction des mouvements sismiques (spectre de réponse en accélération) a été conçu à partir de cette nouvelle base. La comparaison entre modèles théoriques et observations montre la dépendance des vibrations sur sites rocheux à la fois aux caractéristiques de vitesse du site (paramètre classique décrivant la vitesse des ondes S dans les 30 derniers mètres) et aux mécanismes d'atténuation hautes fréquences (un phénomène très peu étudié jusque-là). Ces résultats confirment une corrélation entre ces deux mécanismes (les sites rocheux les plus mous atténuent plus le mouvement sismique à hautes fréquences) et nous proposons un modèle de prédiction du mouvement sismique prenant en compte l'ensemble des propriétés du site (atténuation et vitesse). Les méthodes nouvelles de dimensionnement dynamiques non linéaires (à la fois géotechniques et structurelles) ne se satisfont pas des spectres de réponse mais requièrent des traces temporelles. Dans le but de générer de telles traces temporelles, la méthode stochastique non stationnaire développée antérieurement par Pousse et al. (2006) a été revisitée. Cette méthode semi-empirique nécessite de définir au préalable les distributions des indicateurs clés du mouvement sismique. Nous avons ainsi développé des modèles de prédiction empiriques pour la durée de phase forte, l'intensité d'Arias et la fréquence centrale, paramètre décrivant la variation du contenu fréquentiel au cours du temps. Les nouveaux développements de la méthode stochastique permettent de reproduire des traces temporelles sur une large bande de fréquences (0.1-50 Hz), de reproduire la non stationnarité en temps et en fréquence et la variabilité naturelle des vibrations sismiques. Cette méthode présente l'avantage d'être simple, rapide d'exécution et de considérer les bases théoriques de la sismologie (source de Brune, une enveloppe temporelle réaliste, non stationnarité et variabilité du mouvement sismique). Dans les études de génie parasismique, un nombre réduit de traces temporelles est sélectionné, et nous analysons dans une dernière partie l'impact de cette sélection sur la conservation de la variabilité naturelle des mouvements sismiques. / The aim of this thesis is to improve the definition of vibrations ("seismic motion") on "hard" sites (hard soils or rocks) related to scenarios (earthquakes of magnitude between 5 and 6.5, distances less than 50 km) representative of the French metropolitan context. In order to constrain the seismic motions on "hard" sites, an accelerometric database was built from the Japanese K-NET and KiK-net recordings, which have the benefit of being public, numerous, and of high quality. A ground motion prediction equation for the acceleration response spectra was developed from this new database.
The comparison between theoretical models and observations shows that ground motion on rock sites depends on both the velocity characteristics of the site (the classical parameter describing the S-wave velocity over the uppermost 30 meters) and the high-frequency attenuation mechanisms (a phenomenon little studied until now). These results confirm a correlation between these two mechanisms (high-frequency seismic motion is more attenuated on softer rock sites), and we propose a ground motion prediction equation taking into account both site properties (attenuation and velocity). New methods of nonlinear dynamic analysis (both geotechnical and structural) cannot rely on response spectra alone but require time histories. To generate such time histories, the non-stationary stochastic method previously developed by Pousse et al. (2006) has been revisited. This semi-empirical method first requires defining the distributions of key indicators of seismic motion. We have therefore developed empirical models for predicting the strong-motion duration, the Arias intensity, and the central frequency, a parameter describing the variation of the frequency content over time. The new developments of the stochastic method make it possible to reproduce time histories over a wide frequency band (0.1-50 Hz), to reproduce the non-stationarity in time and frequency, and to reproduce the natural variability of seismic vibrations. This method has the advantage of being simple and fast, and of resting on basic concepts of seismology (Brune's source, a realistic envelope function, non-stationarity and variability of seismic motion). In earthquake engineering studies, a small number of time histories is selected, and in the last part we analyze the impact of this selection on the preservation of the natural variability of ground motion.
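For illustration, the hedged sketch below implements a stationary simplification of such a stochastic simulation: windowed white noise whose Fourier amplitude is shaped to a Brune omega-squared source spectrum with high-frequency kappa attenuation. The method revisited in the thesis is non-stationary in time and frequency, which this sketch does not capture, and all parameter values are illustrative assumptions.

```python
# Hedged, stationary sketch of a stochastic ground-motion simulation:
# envelope-windowed white noise shaped to a Brune source spectrum.
import numpy as np

def stochastic_acceleration(fc=2.0, kappa=0.03, duration=20.0, dt=0.005, seed=0):
    """Stationary stochastic acceleration trace (arbitrary amplitude units)."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    t = np.arange(n) * dt
    env = t * np.exp(-t / (0.2 * duration))           # simple strong-motion envelope
    noise = rng.standard_normal(n) * env / env.max()  # windowed white noise
    spec = np.fft.rfft(noise)
    f = np.fft.rfftfreq(n, dt)
    # Brune omega-squared acceleration spectrum with high-frequency kappa decay
    shape = (2 * np.pi * f) ** 2 / (1 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)
    spec *= shape / np.mean(np.abs(spec))             # impose the target spectral shape
    return t, np.fft.irfft(spec, n)

t, acc = stochastic_acceleration()
print(f"peak |acc| (arbitrary units): {np.abs(acc).max():.2f}")
```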
35

Prochaine génération d'équations paneuropéennes de prédiction du mouvement du sol pour les paramètres d'ingénierie / Next generation pan-European ground-motion prediction equations for engineering parameters

Sandikkaya, Mustafa Abdullah 11 April 2014 (has links)
Cette étude présente tout d'abord la récente banque de données fort mouvement pan-européen qui est mis à jour et la version étendue de bases de données paneuropéennes précédentes. Les métadonnées relatives est soigneusement compilé et réévalué. La base de données est conforme aux normes élevées pour être des ressources de la communauté paneuropéenne de génie parasismique. Ensuite, une étude empirique non linéaire place amplification modèle, fonction de la moyenne en fonction du temps de la plus haute 30m profil de vitesse des ondes de cisaillement et l'accélération maximale du sol sur le roc, est développé. L'objectif principal de tirer un tel modèle est de l'utiliser dans les équations de prédiction des mouvements du sol (GMPEs). Par ailleurs, l'évaluation des facteurs de site dans les codes de conception parasismique montre qu'il est également applicable dans les facteurs de sites informatiques. À cette fin, une autre méthodologie qui prend en compte les résultats de l'analyse de l'aléa sismique probabiliste et déterministe modèles de site est proposé. Cette étude génère GMPEs de réponse élastique ordonnées spectrales horizontale et verticale d'amortissement de 5%. Plutôt que d'équations directs pour le mouvement vertical, afin d'obtenir spectre du danger horizontale et verticale cohérente, compatible GMPE de rapport vertical à horizontal est préférable. Modèles de mise à l'échelle d'amortissement supplémentaires pour modifier les spectres horizontaux et verticaux d'autres ratios d'amortissement sont proposées. / This study first presents the recent pan-European strong-motion databank, an updated and extended version of previous pan-European databases. The associated metadata is carefully compiled and reappraised, and the database meets the high standards required of a resource for the pan-European earthquake engineering community. Then, an empirical nonlinear site amplification model, a function of the time-averaged shear-wave velocity over the uppermost 30 m and of the peak ground acceleration on rock, is developed. The primary aim of deriving such a model is to use it in ground motion prediction equations (GMPEs). Besides, the evaluation of site factors in seismic design codes shows that it is also applicable to computing site factors. To this end, an alternative methodology that considers the results of probabilistic seismic hazard analysis together with deterministic site models is proposed. Finally, this study generates GMPEs for horizontal and vertical elastic response spectral ordinates for damping values between 1% and 50%. Rather than deriving direct equations for vertical motion, a compatible vertical-to-horizontal ratio GMPE is preferred, so as to obtain consistent horizontal and vertical hazard spectra. Additional damping scaling models to modify the horizontal and vertical spectra at other damping ratios are proposed.
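A minimal sketch of the vertical-to-horizontal ratio approach described above: the vertical spectrum is obtained by scaling the horizontal hazard spectrum with a period-dependent V/H model, keeping the two hazard-consistent. The spectral ordinates and V/H values below are made-up placeholders, not the models derived in this study.

```python
# Hedged sketch of the V/H-ratio approach: a vertical spectrum consistent
# with the horizontal hazard spectrum is obtained by scaling, rather than
# by an independent vertical GMPE. All numbers are placeholders.
import numpy as np

periods = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])             # s
sa_horizontal = np.array([0.55, 0.80, 0.70, 0.40, 0.20, 0.08])  # g, from PSHA
vh_ratio = np.array([0.90, 0.80, 0.65, 0.55, 0.50, 0.50])       # placeholder model

sa_vertical = sa_horizontal * vh_ratio  # hazard-consistent vertical spectrum
for T, h, v in zip(periods, sa_horizontal, sa_vertical):
    print(f"T={T:4.2f} s  Sa_h={h:.2f} g  Sa_v={v:.2f} g")
```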
36

Assessment of Seismic Hazard with Local Site Effects: Deterministic and Probabilistic Approaches

Vipin, K S 12 1900 (has links)
Many researchers have pointed out that the accumulation of strain energy in the Peninsular Indian Shield region may lead to earthquakes of significant magnitude (Srinivasan and Sreenivas, 1977; Valdiya, 1998; Purnachandra Rao, 1999; Seeber et al., 1999; Ramalingeswara Rao, 2000; Gangrade and Arora, 2000). However, very few studies have been carried out to quantify the seismic hazard of the entire Peninsular Indian region. In the present study, the seismic hazard of the South Indian region (8.0° N - 20° N; 72° E - 88° E) was evaluated using both deterministic and probabilistic approaches. The effects of two important geotechnical aspects of seismic hazard, site response and liquefaction, have also been evaluated and the results are presented in this work. The peak ground acceleration (PGA) at ground surface level was evaluated by considering local site effects. The liquefaction potential index (LPI) and the factor of safety against liquefaction were evaluated using a performance-based liquefaction potential evaluation method. The first step in the seismic hazard analysis is to compile the earthquake catalogue. Since a comprehensive catalogue was not available for the region, it was compiled by collecting data from different national agencies (Gauribidanur Array, Indian Meteorological Department (IMD), National Geophysical Research Institute (NGRI) Hyderabad, Indira Gandhi Centre for Atomic Research (IGCAR) Kalpakkam, etc.) and international agencies (Incorporated Research Institutions for Seismology (IRIS), International Seismological Centre (ISC), United States Geological Survey (USGS), etc.). The collected data were in different magnitude scales and hence were converted to a single scale: the moment magnitude scale, since it is the most widely used and the most advanced scientific magnitude scale. The catalogue was declustered to remove related events, and its completeness was analysed using the method suggested by Stepp (1972). Based on the complete part of the catalogue, the seismicity parameters were evaluated for the study area. Another important step in seismic hazard analysis is the identification of vulnerable seismic sources. The types of seismic sources considered are (i) linear sources, (ii) point sources, and (iii) areal sources. The linear seismic sources were identified based on the seismotectonic atlas published by the Geological Survey of India (SEISAT, 2000). The required pages of SEISAT (2000) were scanned and georeferenced, the declustered earthquake data were superimposed on them, and the sources associated with earthquakes of magnitude 4 and above were selected for further analysis. The point sources were selected using a method similar to the one adopted by Costa et al. (1993) and Panza et al. (1999), and the areal sources were identified based on the method proposed by Frankel et al. (1995). In order to map the attenuation properties of the region more precisely, three attenuation relations, viz. Toro et al. (1997), Atkinson and Boore (2006) and Raghu Kanth and Iyengar (2007), were used in this study. The two types of uncertainty encountered in seismic hazard analysis are aleatory and epistemic. The uncertainty of the data is the cause of aleatory variability, and it accounts for the randomness associated with the results given by a particular model.
The incomplete knowledge in the predictive models causes the epistemic (modeling) uncertainty. The aleatory variability of the attenuation relations is taken into account in the probabilistic seismic hazard analysis by considering the standard deviation of the model error. The epistemic uncertainty is accounted for by using multiple models in the hazard evaluation and combining them with a logic tree. Two methodologies were used in the evaluation of seismic hazard, based on deterministic and probabilistic analysis. For the evaluation of peak horizontal acceleration (PHA) and spectral acceleration (Sa) values, a new set of programs was developed in MATLAB and the entire analysis was done using these programs. In the deterministic seismic hazard analysis (DSHA), two types of seismic sources, viz. linear and point sources, were considered and three attenuation relations were used. The study area was divided into small grid cells of size 0.1° x 0.1° (about 12000 grid points), and the mean and 84th percentile PHA and Sa values were evaluated at the centre of each cell. A logic tree approach, using two types of sources and three attenuation relations, was adopted for the evaluation of PHA and Sa values. A logic tree permits the use of alternative models in the hazard evaluation, with appropriate weightage assigned to each model. By evaluating the 84th percentile values, the uncertainty in spectral acceleration values can also be considered (Krinitzky, 2002). The spatial variations of PHA and Sa values for the whole of South India are presented in this work. The DSHA method does not consider the uncertainties involved in the earthquake recurrence process, hypocentral distance, and attenuation properties. Hence, the seismic hazard was also analysed using probabilistic seismic hazard analysis (PSHA), in which the PHA and Sa values were evaluated by considering the uncertainties involved in the earthquake occurrence process. The uncertainties in earthquake recurrence rate, hypocentral location, and attenuation characteristics were considered in this study. For evaluating the seismicity parameters and the maximum expected earthquake magnitude (mmax), the study area was divided into different source zones. This division was based on the spatial variation of the seismicity parameters 'a' and 'b'; the mmax values were evaluated for each of these zones and used in the analysis. A logic tree approach was adopted, permitting the use of multiple models. Twelve different models (2 source types x 2 zonations x 3 attenuation relations) were used in the analysis, and based on the weightage assigned to each, the final PHA and Sa values at bedrock level were evaluated. These values were evaluated for a grid size of 0.1° x 0.1°, and their spatial variation for return periods of 475 and 2500 years (10% and 2% probability of exceedance in 50 years) is presented in this work. Both the deterministic and probabilistic analyses highlighted that the seismic hazard is high in the Koyna region. The PHA values obtained for the Koyna, Bangalore, and Ongole regions are higher than the values given by BIS-1893 (2002), whereas the values obtained for the south-western part of the study area, especially parts of Kerala, are lower than those provided in BIS-1893 (2002). The 84th percentile values given by DSHA can be taken as the upper-bound PHA and Sa values for South India.
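The logic-tree combination can be sketched as a weighted mean over branches; in the hedged example below, the branch PHA values, weights, and branch names are illustrative placeholders rather than results from this study.

```python
# Hedged sketch of a logic-tree combination: the final PHA at one grid point
# is the weighted mean over all branches. Values and weights are placeholders;
# the study's twelve branches were 2 source types x 2 zonations x 3 GMPEs.
from itertools import product

sources = {"linear": 0.5, "point": 0.5}
attenuations = {"gmpe_1": 0.4, "gmpe_2": 0.3, "gmpe_3": 0.3}

def branch_pha(source, gmpe):
    """PHA (g) returned by one branch for a single 0.1 deg x 0.1 deg cell."""
    table = {("linear", "gmpe_1"): 0.12, ("linear", "gmpe_2"): 0.15,
             ("linear", "gmpe_3"): 0.10, ("point", "gmpe_1"): 0.09,
             ("point", "gmpe_2"): 0.11, ("point", "gmpe_3"): 0.08}
    return table[(source, gmpe)]

pha = sum(ws * wg * branch_pha(s, g)
          for (s, ws), (g, wg) in product(sources.items(), attenuations.items()))
print(f"logic-tree mean PHA: {pha:.3f} g")  # weights sum to 1, so no normalization
```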
The main geotechnical aspects of earthquake hazard are site response and seismic soil liquefaction. As seismic waves travel from the bedrock through the overlying soil to the ground surface, the PHA and Sa values change; this amplification or de-amplification depends on the type of the overlying soil. The assessment of site class can be done based on different site classification schemes. In the present work, the surface-level peak ground acceleration (PGA) values were evaluated for the four site classes suggested by NEHRP (BSSC, 2003), using a non-linear site amplification technique. Based on geotechnical site investigation data, the site class can be determined and the appropriate PGA and Sa values can then be taken from the respective PGA maps. Response spectra were developed for the entire study area, and the results obtained for three major cities are discussed here. Different methods are suggested by various codes to smooth the response spectra; smoothed design response spectra were developed for these cities based on the smoothing techniques given by NEHRP (BSSC, 2003), the IS code (BIS-1893, 2002) and Eurocode-8 (2003), and a comparison of the results is also presented in this work. If the site class at any location in the study area is known, the PGA values can be obtained from the respective map, which provides a simplified methodology for evaluating PGA values over a vast area like South India. Since the surface-level PGA values were evaluated for different site classes, the effects of surface topography and basins were not taken into account. The analysis of response spectra clearly indicates the variation of peak spectral acceleration values for different site classes and the variation of the period of oscillation corresponding to the maximum Sa values. The comparison of the smoothed design response spectra obtained using different codal provisions suggests the use of the NEHRP (BSSC, 2003) provisions. The conventional liquefaction analysis method takes into account only a single earthquake magnitude and ground acceleration value. To overcome this shortcoming, a performance-based probabilistic approach (Kramer and Mayfield, 2007) was adopted for the liquefaction potential evaluation in the present work. Based on this method, the factor of safety against liquefaction and the SPT values required to prevent liquefaction for return periods of 475 and 2500 years were evaluated for Bangalore city, using SPT data from 450 boreholes across the city. A new method to evaluate the liquefaction return period based on CPT values is also proposed in this work; to validate it, an analysis was done for Bangalore by converting the SPT values to CPT values, and the results were compared with those obtained using SPT values. The factors of safety against liquefaction at different depths were integrated for Bangalore using the liquefaction potential index (LPI) method: factor of safety values at different depths were calculated using the performance-based method, and the LPI values were then evaluated. The entire liquefaction potential analysis and the evaluation of LPI values were done using a set of newly developed MATLAB programs.
Based on the above approaches, it is possible to evaluate the SPT and CPT values required to prevent liquefaction for any given return period. An analysis was done to evaluate these values for the whole of South India for return periods of 475 and 2500 years, and their spatial variations are presented in this work. The liquefaction potential analysis of Bangalore clearly indicates that the majority of the area is safe against liquefaction. The liquefaction potential map developed for South India, based on both SPT and CPT values, will help hazard mitigation authorities identify liquefaction-vulnerable areas, which in turn will help in reducing the liquefaction hazard.
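As a hedged illustration of the liquefaction measures used above, the sketch below evaluates the simplified (Seed-Idriss form) cyclic stress ratio and integrates factors of safety into Iwasaki's liquefaction potential index. The soil profile numbers are illustrative placeholders, not Bangalore borehole data, and the full performance-based procedure is omitted.

```python
# Hedged sketch: simplified cyclic stress ratio and Iwasaki's LPI.
# Stresses, CRR values, and the water table depth are placeholders.
import numpy as np

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth):
    """Simplified (Seed-Idriss form) CSR with the Youd et al. (2001) rd."""
    rd = 1.0 - 0.00765 * depth if depth <= 9.15 else 1.174 - 0.0267 * depth
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def liquefaction_potential_index(depths, fs):
    """Iwasaki LPI: integrate (1 - FS) * w(z), w(z) = 10 - 0.5 z, over 0-20 m."""
    f = np.where(fs < 1.0, 1.0 - fs, 0.0)
    return np.trapz(f * (10.0 - 0.5 * depths), depths)

depths = np.array([2.0, 5.0, 8.0, 12.0, 16.0, 20.0])        # m
sigma_v = 18.0 * depths                                      # total stress, kPa
sigma_v_eff = sigma_v - 9.81 * np.maximum(depths - 1.0, 0)   # water table at 1 m
crr = np.array([0.12, 0.15, 0.20, 0.25, 0.30, 0.35])         # placeholder resistance
csr = np.array([cyclic_stress_ratio(0.15, sv, sve, z)
                for sv, sve, z in zip(sigma_v, sigma_v_eff, depths)])
fs = crr / csr
print("FS profile:", np.round(fs, 2))
print(f"LPI = {liquefaction_potential_index(depths, fs):.1f}  (>15 = high potential)")
```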
37

Optická lokalizace velmi vzdálených cílů ve vícekamerovém systému / Optical Localization of Very Distant Targets in Multicamera Systems

Bednařík, Jan January 2016 (has links)
This work presents a system for semi-autonomous optical localization of distant moving targets using multiple positionable cameras. The cameras were calibrated and stationed using custom-designed calibration targets and methodology, with the objective of alleviating the main sources of error pinpointed in a thorough precision analysis. Detection of the target is performed manually, while the visual tracking is automatic and utilizes two state-of-the-art approaches. The estimation of the target location in 3-space is based on multi-view triangulation working with noisy measurements. A basic setup consisting of two camera units was tested against static targets and a moving terrestrial target, and the precision of the location estimation was compared to the theoretical model. The modularity and portability of the system allow fast deployment in a wide range of scenarios, including perimeter monitoring and early threat detection in defense systems, as well as air traffic control in public space.
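A minimal sketch of the multi-view triangulation step: the least-squares "midpoint" method estimates the 3D point closest to all camera rays, which tolerates noisy measurements. The camera poses, noise level, and target location below are illustrative assumptions, not values from this work.

```python
# Hedged sketch of multi-view triangulation from noisy ray measurements:
# find the 3D point minimizing the sum of squared distances to all rays.
import numpy as np

def triangulate(centers, directions):
    """centers: (N,3) camera positions; directions: (N,3) unit ray vectors."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)         # needs at least two non-parallel rays

# Two stations observing a target near (100, 50, 20), with noisy directions:
target = np.array([100.0, 50.0, 20.0])
centers = np.array([[0.0, 0.0, 0.0], [200.0, 0.0, 0.0]])
rng = np.random.default_rng(1)
dirs = target - centers
dirs += rng.normal(scale=0.2, size=dirs.shape)       # measurement noise
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # re-normalize to unit rays
print(triangulate(centers, dirs))  # close to the true target position
```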
38

Kalman Filter Based Approach: Real-time Control-based Human Motion Prediction in Teleoperation / Kalman Filter baserad metod: Realtids uppskattningar av Kontrollbaserad Mänsklig Rörelse i Teleoperationen

Fan, Zheyu Jerry January 2016 (has links)
This work investigates the performance of two Kalman Filter algorithms, namely the Linear Kalman Filter and the Extended Kalman Filter, for control-based human motion prediction in real-time teleoperation. The Kalman Filter has been widely used in motion tracking and GPS navigation research; however, its potential for human motion prediction is rarely mentioned. Combined with a known problem, the delay in today's teleoperation services, this motivated the author to build a prototype of a simple teleoperation model based on the Kalman Filter, with the aim of eliminating the desynchronization between the user's inputs and the visual frames, where all data are transferred over the network. In the first part of the thesis, the two Kalman Filter algorithms are applied to the prototype to predict the movement of the robotic arm based on the user's motion applied to a haptic device, and their performance is compared. In the second part, the thesis focuses on optimizing the motion prediction based on the results of Kalman filtering by using a smoothing algorithm. The last part of the thesis examines the limitations of the prototype, such as how much delay is acceptable and how fast the Phantom haptic device can move while still obtaining reasonable predictions with an acceptable error rate.   The results show that the Extended Kalman Filter achieved better motion prediction than the Linear Kalman Filter in the experiments. The desynchronization is effectively reduced by applying the Kalman Filter to both the state and measurement models when the latency is below 200 milliseconds. The additional smoothing algorithm further increases the accuracy; more importantly, it also solves the shaking of the visual frames of the robotic arm caused by the oscillatory (wavy) behaviour of the Kalman Filter output. Furthermore, the optimization method effectively synchronizes the timing when the robotic arm touches the interactable object in the prediction.   The method utilized in this research can serve as a good reference for future research in control-based human motion tracking and prediction. / Detta arbete fokuserar på att undersöka prestandan hos två Kalman Filter Algoritmer, nämligen Linear Kalman Filter och Extended Kalman Filter som används i realtids uppskattningar av kontrollbaserad mänsklig rörelse i teleoperationen. Dessa Kalman Filter Algoritmer har används i stor utsträckning forskningsområden i rörelsespårning och GPS-navigering. Emellertid är potentialen i uppskattning av mänsklig rörelse genom att utnyttja denna algoritm sällan nämnas. Genom att kombinera med det kända problemet – fördröjningsproblem i dagens teleoperation tjänster beslutar författaren att bygga en prototyp av en enkel teleoperation modell vilket är baserad på Kalman Filter algoritmen i syftet att eliminera icke-synkronisering mellan användarens inmatningssignaler och visuella information, där alla data överfördes via nätverket. I den första delen av avhandlingen appliceras både Kalman Filter Algoritmer på prototypen för att uppskatta rörelsen av robotarmen baserat på användarens rörelse som anbringas på en haptik enhet. Jämförelserna i prestandan bland de Kalman Filter Algoritmerna har också fokuserats.
I den andra delen fokuserar avhandlingen på att optimera uppskattningar av rörelsen som baserat på resultaten av Kalman-filtrering med hjälp av en utjämningsalgoritm. Den sista delen av avhandlingen undersökes begräsning av prototypen, som till exempel hur mycket fördröjningar accepteras och hur snabbt den haptik enheten kan vara, för att kunna erhålla skäliga uppskattningar med acceptabel felfrekvens.   Resultaten visar att den Extended Kalman Filter har bättre prestandan i rörelse uppskattningarna än den Linear Kalman Filter under experimenten. Det icke-synkroniseringsproblemet har förbättrats genom att tillämpa de Kalman Filter Algoritmerna på både statliga och värderingsmodeller när latensen är inställd på under 200 millisekunder. Den extra utjämningsalgoritmen ökar ytterligare noggrannheten. Denna algoritm löser också det skakande problem hos de visuella bilder på robotarmen som orsakas av den vågiga egenskapen hos Kalman Filter Algoritmen. Dessutom effektivt synkroniserar den optimeringsmetoden tidpunkten när robotarmen berör objekten i uppskattningarna.   Den metod som används i denna forskning kan vara en god referens för framtida undersökningar i kontrollbaserad rörelse- spåning och uppskattning.
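As a hedged sketch of the Linear Kalman Filter variant evaluated in this thesis, the example below tracks a one-dimensional hand position with a constant-velocity model and extrapolates the state across an assumed 200 ms network delay; the update rate, noise covariances, and simulated motion are illustrative assumptions, not the thesis's actual configuration.

```python
# Hedged sketch: Linear Kalman Filter with a constant-velocity model,
# predicting operator hand position one network delay ahead.
import numpy as np

dt = 0.01                     # 100 Hz haptic update rate (assumed)
delay_steps = 20              # 200 ms network delay to compensate (assumed)
F = np.array([[1, dt], [0, 1]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])        # we only measure position
Q = np.diag([1e-5, 1e-3])         # process noise covariance (assumed)
R = np.array([[1e-4]])            # measurement noise covariance (assumed)

x = np.zeros(2)               # state: [position, velocity]
P = np.eye(2)                 # state covariance

rng = np.random.default_rng(0)
for k in range(500):          # simulated sinusoidal hand motion with sensor noise
    z = np.sin(k * dt) + rng.normal(scale=0.01)
    x = F @ x                 # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R       # update with the new measurement
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

x_ahead = np.linalg.matrix_power(F, delay_steps) @ x  # extrapolate over the delay
print(f"estimated now: {x[0]:.3f}, predicted 200 ms ahead: {x_ahead[0]:.3f}")
```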
