11

Deep Neural Network Pruning and Sensor Fusion in Practical 2D Detection

Mousa Pasandi, Morteza 19 May 2023 (has links)
Convolutional Neural Networks (CNNs) have been extensively studied and applied to various computer vision problems, including object detection, semantic segmentation, and autonomous driving. CNNs extract complex features from input images or data to represent objects or patterns. Their highly complex architectures and the size of their learned weights, however, make them time- and resource-intensive. Measures like pruning and fusion, which simplify the structure and lessen the load on the network's resources, should be considered to resolve this problem. In this thesis, we explore the effect of pruning on segmentation and object detection, as well as the benefits of using sensor fusion operators in the 2D space to boost the performance of existing networks. Specifically, we focus on structured pruning, quantization, and simple and learnable fusion operators. We also study the scalability of different algorithms in terms of the number of parameters and floating-point operations used. First, we provide a general overview of CNNs and the history of pruning and fusion operations. Second, we explain the advantages of pruning and discuss the contrast between the unstructured and structured types. Third, we discuss the differences between simple fusion and learnable fusion. To evaluate our algorithms, we use several classification and object detection datasets such as CIFAR-10, KITTI and Microsoft COCO. By applying our proposed methods to the studied datasets, we can assess the efficiency of the algorithms and observe the improvements in task-specific losses. In conclusion, our work analyzes the effect of pruning and fusion in simplifying existing networks and improving their performance in terms of scalability, task-specific losses, and resource consumption. We also discuss various algorithms, as well as the datasets that serve as a basis for the evaluation of our proposed approaches.
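
To make the structured-pruning idea concrete, here is a minimal sketch (not the thesis's implementation) of the common L1-norm criterion for ranking and removing whole convolution filters; the layer sizes and the choice to drop the four weakest channels are illustrative assumptions.

```python
import torch
import torch.nn as nn

def rank_channels_by_l1(conv: nn.Conv2d) -> torch.Tensor:
    """Rank output channels of a conv layer by the L1 norm of their filters.

    Channels with the smallest norms are the usual candidates for
    structured (filter-level) pruning, since removing them shrinks the
    layer while perturbing its output the least.
    """
    with torch.no_grad():
        # weight shape: (out_channels, in_channels, kH, kW)
        scores = conv.weight.abs().sum(dim=(1, 2, 3))
    return torch.argsort(scores)  # ascending: weakest channels first

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
order = rank_channels_by_l1(conv)
keep = order[4:]                        # illustrative: drop the 4 weakest
pruned_weight = conv.weight.data[keep]
print(pruned_weight.shape)              # torch.Size([12, 3, 3, 3])
```
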
12

Evaluation of online hardware video stabilization on a moving platform / Utvärdering av hårdvarustabilisering av video i realtid på rörlig plattform

Gratorp, Eric January 2013 (has links)
Recording a video sequence with a camera during movement often produces blurred results. This is mainly due to motion blur, which is caused by rapid movement of objects in the scene or of the camera itself during recording. By correcting for changes in the orientation of the camera, caused by e.g. uneven terrain, it is possible to minimize the motion blur and thus produce a stabilized video. To do this, data gathered from a gyroscope and the camera itself can be used to measure the orientation of the camera. The raw data needs to be processed, synchronized and filtered to produce a robust estimate of the orientation. This estimate can then be used as input to an automatic control system that corrects for changes in the orientation. This thesis focuses on examining the feasibility of such a stabilization; the actual stabilization is left for future work. An evaluation of the hardware as well as the implemented methods is done with emphasis on speed, which is crucial in real-time computing.
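
As a rough illustration of the orientation pipeline described above, the sketch below integrates gyroscope rates into an orientation signal and separates the slow, intended motion from high-frequency jitter. It is a small-angle toy model with an assumed filter constant, not the hardware implementation evaluated in the thesis.

```python
import numpy as np

def integrate_gyro(gyro_rates, dt):
    """Integrate body angular rates (rad/s) into accumulated angles.

    gyro_rates: (N, 3) samples around the x, y, z axes. A small-angle
    approximation that ignores the non-commutativity of 3D rotations.
    """
    return np.cumsum(gyro_rates * dt, axis=0)

def jitter_to_correct(angles, alpha=0.95):
    """Split orientation into intended motion (low-pass) and jitter.

    The smoothed component is treated as the deliberate camera path;
    the residual is the per-frame disturbance a stabilizer would
    counteract. alpha is an assumed filter constant.
    """
    smooth = np.zeros_like(angles)
    for i in range(1, len(angles)):
        smooth[i] = alpha * smooth[i - 1] + (1 - alpha) * angles[i]
    return angles - smooth  # per-frame correction angles
```
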
13

Enhanced positioning in harsh environments / Förbättrad positionering i svåra miljöer

Glans, Fredrik January 2013 (has links)
Today’s heavy-duty vehicles are equipped with safety and comfort systems, e.g. ABS and ESP, which totally or partly take over the vehicle in certain risk situations. As these systems become more and more autonomous, more robust positioning is needed. In the right conditions the GPS system provides precise and robust positioning. However, in harsh environments, e.g. dense urban areas and dense forests, the GPS signals may be affected by multipath, which means that the signals are reflected on their way from the satellites to the receiver. This can cause large errors in the positioning and thus give rise to devastating effects for autonomous systems. This thesis evaluates different methods to enhance a low-cost GPS in harsh environments, with a focus on mitigating multipath. There are mainly four methods: a regular Unscented Kalman filter, probabilistic multipath mitigation, an Unscented Kalman filter with vehicle sensor input, and probabilistic multipath mitigation with vehicle sensor input. The algorithms are tested and validated on real data from both dense forest areas and dense urban areas. The results show that the positioning is enhanced compared to a low-cost GPS, in particular when integrating the vehicle sensors.
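
A linear Kalman filter with innovation gating, sketched below, shows the flavor of both ingredients: fusing GPS fixes with a motion model, and rejecting statistically implausible fixes as a crude stand-in for probabilistic multipath mitigation. The noise covariances and the chi-square gate are assumed values, and the thesis's Unscented Kalman filter handles a nonlinear vehicle model that this toy omits.

```python
import numpy as np

class GpsOdometryKF:
    """Constant-velocity Kalman filter fusing GPS fixes with a motion model."""

    def __init__(self, dt):
        self.x = np.zeros(4)                     # state: [px, py, vx, vy]
        self.P = np.eye(4) * 100.0               # large initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt         # position += velocity * dt
        self.Q = np.eye(4) * 0.1                 # process noise (assumed)
        self.H = np.array([[1., 0., 0., 0.],
                           [0., 1., 0., 0.]])    # GPS measures position only
        self.R = np.eye(2) * 5.0                 # GPS noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, gps_xy, gate=9.21):
        y = gps_xy - self.H @ self.x             # innovation
        S = self.H @ self.P @ self.H.T + self.R
        # Crude multipath handling: drop fixes whose innovation fails a
        # chi-square test (9.21 ~ 99% quantile, 2 degrees of freedom).
        if y @ np.linalg.solve(S, y) > gate:
            return
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```
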
14

Combinação de métodos de inteligência artificial para fusão de sensores / Combination of artificial intelligence methods for sensor fusion

Faceli, Katti 23 March 2001 (has links)
Mobile robots rely on sensor data to maintain a representation of their environment. However, sensors usually provide incomplete, inconsistent or inaccurate information. Sensor fusion has been successfully employed to enhance the accuracy of sensor measurements. This work proposes and investigates the use of artificial intelligence techniques for sensor fusion, with the goal of improving the precision and accuracy of measurements of the distance between a robot and an object in its work environment, obtained with different sensors. Several machine learning algorithms are investigated to fuse the sensor data; the best model generated with each algorithm is called an estimator. It is shown that employing these estimators can significantly improve the performance achieved by each sensor alone. However, the machine learning algorithms employed have different characteristics, causing the estimators to behave differently in different situations. Aiming at more accurate and reliable behavior, the estimators are combined in committees. The results obtained suggest that this combination can improve the reliability and accuracy of the distance measurements over the individual sensors and the estimators used for sensor fusion.
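
A minimal sketch of the committee idea, assuming synthetic stand-in data and a simple averaging rule (the abstract does not specify the combination scheme actually used):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical training data: raw readings from three sensor channels (X)
# against a stand-in ground-truth distance (y).
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 3.0, size=(200, 3))
y = X.mean(axis=1) + rng.normal(0, 0.05, 200)

# One "estimator" per learning algorithm, each fit on the same fusion task.
estimators = [
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    DecisionTreeRegressor(max_depth=5, random_state=0),
    KNeighborsRegressor(n_neighbors=5),
]
for est in estimators:
    est.fit(X, y)

def committee_predict(X_new):
    """Average the estimators' outputs -- the simplest committee rule."""
    return np.mean([est.predict(X_new) for est in estimators], axis=0)

print(committee_predict(X[:5]))
```
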
15

Biologically Inspired Vision and Control for an Autonomous Flying Vehicle

Garratt, Matthew Adam, m.garratt@adfa.edu.au 17 February 2008 (has links)
This thesis makes a number of new contributions to control and sensing for unmanned vehicles. I begin by developing a non-linear simulation of a small unmanned helicopter and then proceed to develop new algorithms for control and sensing using the simulation. The work is field-tested in successful flight trials of biologically inspired vision and neural network control for an unstable rotorcraft. The techniques are more robust and more easily implemented on a small flying vehicle than previously attempted methods.

Experiments from biology suggest that the sensing of image motion, or optic flow, in insects provides a means of determining the range to obstacles and terrain. This biologically inspired approach is applied to the control of height in a helicopter, leading to the world's first optic flow based terrain-following controller for an unmanned helicopter in forward flight. Another novel optic flow based controller is developed for the control of velocity in hover. Using measurements of height from other sensors, optic flow provides a measure of the helicopter's lateral and longitudinal velocities relative to the ground plane. Feedback of these velocity measurements enables automated hover with a drift of only a few cm per second, which is sufficient to allow a helicopter to land autonomously in gusty conditions with no absolute measurement of position.

New techniques for sensor fusion using Extended Kalman Filtering are developed to estimate attitude and velocity from noisy inertial sensors and optic flow measurements. However, such control and sensor fusion techniques can be computationally intensive, rendering them difficult or impossible to implement on a small unmanned vehicle due to limited computing resources. Since neural networks can perform these functions with minimal computing hardware, a new technique of control using neural networks is presented. First, a hybrid plant model consisting of exactly known dynamics is combined with a black-box representation of the unknown dynamics. Simulated trajectories are then calculated for the plant using an optimal controller. Finally, a neural network is trained to mimic the optimal controller. Flight test results of control of the heave dynamics of a helicopter confirm the neural network controller's ability to operate in high-disturbance conditions and suggest that the neural network outperforms a PD controller. Sensor fusion and control of the lateral and longitudinal dynamics of the helicopter are also shown to be easily achieved using computationally modest neural networks.
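
The terrain-following idea reduces to a surprisingly small control law. The sketch below is a schematic proportional controller on the optic-flow error, with an assumed gain and setpoint, not the flight-tested implementation:

```python
def terrain_follow_climb_rate(flow, flow_setpoint, gain=0.5):
    """Optic-flow-based terrain following, reduced to a P controller.

    In forward flight the translational optic flow induced by the ground
    is roughly v / h (rad/s) for ground speed v and height h. Regulating
    the measured flow to a setpoint therefore regulates height in
    proportion to speed: flow above the setpoint means the ground is too
    close, so the command is to climb. Gain and setpoint are assumed.
    """
    return gain * (flow - flow_setpoint)  # positive output -> climb

# e.g. at 10 m/s ground speed, a 1.0 rad/s setpoint targets ~10 m height
print(terrain_follow_climb_rate(flow=1.25, flow_setpoint=1.0))  # climb
```
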
16

Shooter Localization in a Wireless Sensor Network / Lokalisering av skytt i ett trådlöst sensornätverk

Wilsson, Olof January 2009 (has links)
Shooter localization systems are used to detect and locate the origin of gunfire. A wireless sensor network is one possible implementation of such a system. A wireless sensor network is sensitive to synchronization errors: localization techniques that rely on timing will give less accurate or even useless results if the synchronization errors are too large.

This thesis focuses on the influence of synchronization errors on the ability to localize a shooter using a wireless sensor network. A localization algorithm is developed and implemented, and the effect of synchronization errors is studied. The algorithm is evaluated using numerical experiments, simulations, and data from real gunshots collected at field trials.

The results indicate that the developed localization algorithm is able to localize a shooter with quite good accuracy. However, the localization performance is to a high degree influenced by the geographical configuration of the network as well as by the synchronization error.
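
A hedged sketch of one common formulation: least-squares localization from muzzle-blast time differences of arrival, where per-node synchronization error enters directly through the recorded arrival times. The sensor geometry, the constant speed of sound and the solver choice are assumptions, not details taken from the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s, assumed constant

def locate_shooter(sensor_pos, toa, ref=0):
    """Least-squares shooter position from muzzle-blast arrival times.

    sensor_pos: (N, 2) known sensor coordinates; toa: (N,) arrival times.
    Working with time differences w.r.t. a reference sensor cancels the
    unknown firing time. Any synchronization error between nodes goes
    straight into `toa` -- exactly the sensitivity the thesis studies.
    """
    tdoa = toa - toa[ref]

    def residuals(p):
        d = np.linalg.norm(sensor_pos - p, axis=1)   # range to each node
        return (d - d[ref]) / SPEED_OF_SOUND - tdoa  # predicted - measured

    x0 = sensor_pos.mean(axis=0)  # start at the network centroid
    return least_squares(residuals, x0).x
```
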
17

Visual-inertial tracking using Optical Flow measurements

Larsson, Olof January 2010 (has links)
Visual-inertial tracking is a well-known technique to track the combination of a camera and an inertial measurement unit (IMU). An issue with the straightforward approach is the need for known 3D points. To bypass this, 2D information can be used without recovering depth to estimate the position and orientation (pose) of the camera. This Master's thesis investigates the feasibility of using Optical Flow (OF) measurements and indicates the benefits of this approach.

The 2D information is added using OF measurements. OF describes the visual flow of interest points in the image plane. Without the need to estimate the depth of these points, the computational complexity is reduced. With the increased 2D information, the amount of 3D information required for the pose estimate decreases.

The use of 2D points for pose estimation has been verified with experimental data gathered by a real camera/IMU system. Several data sequences containing different trajectories are used to estimate the pose. It is shown that OF measurements can be used to improve visual-inertial tracking with a reduced need for 3D-point registrations.
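
For illustration, a standard sparse optical-flow front end in OpenCV (an assumed stand-in, not necessarily the feature tracker used in the thesis) produces exactly the kind of 2D point displacements that such a filter can consume without depth recovery:

```python
import cv2
import numpy as np

def track_flow(prev_gray, curr_gray):
    """Sparse optical flow of interest points between two grayscale frames.

    A standard Lucas-Kanade pipeline: detect corners in the previous
    frame, track them into the current one, and keep only the points
    that were tracked successfully. Returns matched (prev, curr) points.
    """
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
```
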
18

Nonlinear and distributed sensory estimation

Sugathevan, Suranthiran 29 August 2005 (has links)
Methods to improve the performance of sensors with regard to sensor nonlinearity, sensor noise and sensor bandwidth are investigated and new algorithms are developed. The necessity of the proposed research has evolved from the ever-increasing need for greater precision and improved reliability in sensor measurements. After describing the current state of the art of sensor-related issues like nonlinearity and bandwidth, research goals are set to create a new trend in the usage of sensors. We begin the investigation with a detailed distortion analysis of nonlinear sensors. The need for efficient distortion compensation procedures is further justified by showing how a slight deviation from the linearity assumption leads to very severe distortion in the time and frequency domains. It is argued that with a suitable distortion compensation technique, the danger of requiring an infinite-bandwidth nonlinear sensory operation, which is dictated by nonlinear distortion, can be avoided. Several distortion compensation techniques are developed and their performance is validated by simulation and experimental results. Like any other model-based technique, modeling errors or model uncertainty affect the performance of the proposed scheme; this leads to the innovation of robust signal reconstruction. A treatment of this problem is given, and a novel technique is developed which uses a nominal model instead of an accurate model and produces results that are robust to model uncertainty. The means to attain a high operating bandwidth are developed by utilizing several low-bandwidth pass-band sensors. It is pointed out that instead of using a single sensor to measure a high-bandwidth signal, there are many advantages to using an array of several pass-band sensors. Having shown that the employment of sensor arrays is economical and practical, several multi-sensor fusion schemes are developed to facilitate their implementation. Another aspect of this dissertation is developing means to deal with outliers in sensor measurements. As faulty sensor data detection is an essential element of multi-sensor network implementation, used to improve system reliability and robustness, several sensor scheduling configurations are derived to identify and remove outliers.
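
The core of memoryless distortion compensation can be sketched as numerical inversion of a known sensor characteristic. The tanh characteristic and the monotonicity assumption below are illustrative; the dissertation's robust variants work from an uncertain nominal model instead.

```python
import numpy as np
from scipy.optimize import brentq

def compensate(y_measured, model, lo, hi):
    """Invert a known memoryless sensor nonlinearity y = model(x).

    Solves model(x) = y numerically on [lo, hi] for each measurement,
    assuming the characteristic is monotonic over the operating range.
    This recovers the undistorted input from the distorted output.
    """
    return np.array([brentq(lambda x, y=y: model(x) - y, lo, hi)
                     for y in np.atleast_1d(y_measured)])

# Hypothetical saturating sensor: y = tanh(x)
xs = np.array([0.2, 0.8, 1.5])
ys = np.tanh(xs)                         # distorted measurements
print(compensate(ys, np.tanh, -5.0, 5.0))  # recovers ~[0.2, 0.8, 1.5]
```
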
20

A Localisation and Navigation System for an Autonomous Wheel Loader

Lilja, Robin January 2011 (has links)
Autonomous vehicles are an emerging trend in robotics, seen in a vast range of applications and environments. Consequently, Volvo Construction Equipment endeavours to apply the concept of autonomous vehicles to one of their main products. In the company's Autonomous Machine project, an autonomous wheel loader is being developed. As an objective given by the company, a demonstration proving the possibility of conducting a fully autonomous load and haul cycle should be performed. Conducting such a cycle requires the vehicle to be able to localise itself in its task space and navigate accordingly. In this Master's thesis, methods of solving those requirements are proposed and evaluated on a real wheel loader. The approach taken regarding localisation is to apply sensor fusion, by extended Kalman filtering, to the available sensors mounted on the vehicle, including odometric sensors, a Global Positioning System receiver and an Inertial Measurement Unit. Navigational control is provided through a developed interface that allows high-level software to command the vehicle by specifying drive paths. A path following controller is implemented and evaluated. The main objective was successfully accomplished by integrating the developed localisation and navigation system with the system existing prior to this thesis. A discussion of how to continue the development concludes the report; the addition of continuous vision feedback is proposed as the next logical advancement.
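
As a sketch of what a path-following controller of this kind can look like, here is a generic pure-pursuit steering law. The lookahead distance and wheelbase are placeholder values, and the thesis does not state that pure pursuit is the specific law implemented.

```python
import numpy as np

def pure_pursuit_steering(pose, path, lookahead=4.0, wheelbase=3.0):
    """Pure-pursuit steering angle toward a lookahead point on the path.

    pose: (x, y, heading in rad); path: (N, 2) waypoints in world frame.
    Picks the first waypoint at least one lookahead distance away and
    steers along the circular arc that passes through it.
    """
    x, y, th = pose
    d = np.linalg.norm(path - [x, y], axis=1)
    ahead = np.nonzero(d >= lookahead)[0]
    tx, ty = path[ahead[0]] if ahead.size else path[-1]
    # Bearing to the target point, expressed in the vehicle frame
    alpha = np.arctan2(ty - y, tx - x) - th
    # Classic pure-pursuit curvature converted to a steering angle
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), lookahead)
```
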
