51 |
The Omni-Directional Differential Sun Sensor
Swartwout, Michael; Olsen, Tanya; Kitts, Christopher, 11 1900 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / The Stanford University Satellite Systems Development Laboratory will flight test a telemetry reengineering experiment on its student-built SAPPHIRE spacecraft. This experiment uses solar panel current information and knowledge of panel geometry to create a virtual sun sensor that can roughly determine the satellite's sun angle. The Omni-Directional Differential Sun Sensor (ODDSS) algorithm normalizes solar panel currents and differences them to create a quasi-linear signal over a particular sensing region. The specific configuration of the SAPPHIRE spacecraft permits the construction of 24 such regions. The algorithm will account for variations in panel outputs due to battery charging, seasonal fluctuations, solar cell degradation, and albedo effects. Operationally, ODDSS telemetry data will be verified through ground processing and comparison with data derived from SAPPHIRE's infrared sensors and digital camera. The expected sensing accuracy is seven degrees. This paper reviews current progress in the design and integration of the ODDSS algorithm through a discussion of the algorithm's strategy and a presentation of results from hardware testing and software simulation.
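The normalize-and-difference idea can be illustrated with a small sketch; the cosine panel model, the angles, and the two-panel geometry below are illustrative assumptions, not SAPPHIRE's actual 24-region configuration:

```python
import math

def panel_current(normal_angle_deg, sun_angle_deg, i_max=1.0):
    """Ideal short-circuit current of a flat panel: proportional to the
    cosine of the angle between the panel normal and the sun vector,
    clamped at zero when the panel faces away from the sun."""
    cos_inc = math.cos(math.radians(sun_angle_deg - normal_angle_deg))
    return i_max * max(0.0, cos_inc)

def oddss_signal(i_a, i_b):
    """Normalized difference of two adjacent panel currents: quasi-linear
    in the sun angle over the region where both panels are illuminated."""
    total = i_a + i_b
    if total == 0.0:
        return None  # sun is not in this sensing region
    return (i_a - i_b) / total

# Two panels whose normals are 90 degrees apart; as the sun sweeps
# through the shared region the signal moves monotonically through zero.
for sun in (10.0, 30.0, 45.0, 60.0, 80.0):
    i_a = panel_current(0.0, sun)
    i_b = panel_current(90.0, sun)
    print(sun, round(oddss_signal(i_a, i_b), 3))
```

The signal is zero when the sun bisects the two panel normals and changes sign on either side, which is what makes the differenced quantity usable as a rough angle measurement within one region.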
|
52 |
A Genetic Programming Approach to Cost-Sensitive Control in Wireless Sensor Networks
Yousefi Zowj, Afsoon, 01 January 2016 (has links)
In some wireless sensor network applications, multiple sensors can measure the same variable while differing in their sampling cost, for example in their power requirements. This raises the problem of automatically controlling heterogeneous sensor suites in wireless sensor network applications in a manner that balances the cost and accuracy of sensors. Genetic programming (GP) is applied to this problem, considering two basic approaches. First, a hierarchy of models is constructed, where increasing levels in the hierarchy use sensors of increasing cost. If a model that polls low-cost sensors exhibits too much prediction uncertainty, the burden of prediction is automatically transferred to a higher-level model using more expensive sensors. Second, non-hierarchical models are trained with cost as an optimization objective and use conditionals to automatically select sensors based on both cost and accuracy. These approaches are compared in a setting where the available budget for sampling remains constant, and in a setting where the system is sensitive to a fluctuating budget, for example available battery power. It is shown that in both settings, for increasingly challenging datasets, hierarchical models make predictions with equivalent accuracy yet lower cost than non-hierarchical models.
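The hierarchical hand-off described above can be sketched as follows; the toy models, costs, and uncertainty threshold are assumptions for illustration, not the thesis's GP-evolved models:

```python
def hierarchical_predict(x, levels, max_uncertainty=0.5):
    """Walk a hierarchy of (model, cost) pairs, cheapest first.

    Each model returns (prediction, uncertainty); if the uncertainty is
    too high, the burden of prediction passes to the next, more
    expensive level.  Returns the final prediction and the total
    sampling cost incurred."""
    total_cost = 0.0
    prediction = None
    for model, cost in levels:
        total_cost += cost
        prediction, uncertainty = model(x)
        if uncertainty <= max_uncertainty:
            break  # the cheaper sensors were confident enough
    return prediction, total_cost

# Toy models: the cheap one is only confident for small inputs.
cheap = lambda x: (x * 2.0, 0.1 if abs(x) < 1.0 else 0.9)
expensive = lambda x: (x * 2.0, 0.05)

print(hierarchical_predict(0.5, [(cheap, 1.0), (expensive, 10.0)]))  # cheap level suffices
print(hierarchical_predict(5.0, [(cheap, 1.0), (expensive, 10.0)]))  # escalates to expensive level
```

The key cost saving comes from the early break: the expensive level is only charged when the cheap level's uncertainty exceeds the threshold.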
|
53 |
Curve Maneuvering for Precision Planter / Kurvtagning för precisionssåmaskin
Mourad, Jacob; Gustafsson, Emil, January 2019 (has links)
With a larger global population and fewer farmers, harvests will have to be larger and easier to manage. With high-precision planting, each crop has the same available area on the field, yielding an even crop size, which means the whole field can be harvested at the same time. This thesis investigates the possibility of such precision planting in curves. Currently, Väderstad's Tempo planter series can deliver precision in the centimeter range at speeds up to 20 km/h when driving straight, but not when turning. This thesis makes use of the sensors available on the planters, but also investigates possible improvements from including additional sensors. An extended Kalman filter is used to estimate the individual speeds of the planting row units, thus enabling high-precision planting for an arbitrary motion. The filter is shown to yield satisfactory results when using the inertial measurement units, the radar speed sensor, and the GPS already mounted on the planter. By implementing the filter, a higher precision is obtained compared to using the same global speed for all planting row units.
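The geometric reason each row unit needs its own speed estimate can be sketched from planar rigid-body kinematics; the toolbar width, unit spacing, and sign convention below are illustrative assumptions, not Väderstad's actual geometry:

```python
def row_unit_speeds(v, yaw_rate, lateral_offsets):
    """Forward speed of each planting row unit on a rigid toolbar.

    v: forward speed of the reference point (m/s)
    yaw_rate: turn rate (rad/s), positive for a left turn
    lateral_offsets: metres to the left of the reference point

    Planar rigid-body kinematics gives v_i = v - yaw_rate * y_i, so in
    a left turn the inner (left) units move slower than the outer ones
    and must plant at a correspondingly lower seed rate."""
    return [v - yaw_rate * y for y in lateral_offsets]

# A 9 m toolbar with 13 row units every 0.75 m, gentle left turn at 5 m/s.
offsets = [-4.5 + 0.75 * i for i in range(13)]
speeds = row_unit_speeds(5.0, 0.1, offsets)
print([round(s, 2) for s in speeds])
```

Using one global speed for all units ignores exactly this spread, which is why straight-line precision does not carry over into curves.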
|
54 |
Motion Conflict Detection and Resolution in Visual-Inertial Localization Algorithm
Wisely Babu, Benzun, 30 July 2018 (has links)
In this dissertation, we focus on conflicts that occur due to disagreeing motions in multi-modal localization algorithms. In spite of recent achievements in robust localization by means of multi-sensor fusion, these algorithms are not applicable to all environments. This is primarily attributed to the following fundamental assumptions: (i) the environment is predominantly stationary, (ii) only ego-motion of the sensor platform exists, and (iii) multiple sensors are always in agreement with each other regarding the observed motion. Recently, studies have shown how to relax the static-environment assumption using outlier rejection techniques and dynamic object segmentation. Additionally, to handle non-ego-motion, approaches that extend the localization algorithm to multi-body tracking have been studied. However, no attention has been given to the conditions where multiple sensors contradict each other with regard to the motions observed.
Vision-based localization has become an attractive approach for both indoor and outdoor applications due to the large information bandwidth provided by images and the reduced cost of the cameras used. To improve robustness and overcome the limitations of vision, an Inertial Measurement Unit (IMU) may be used. Even though visual-inertial localization has better accuracy and improved robustness due to the complementary nature of the camera and IMU sensors, it is affected by disagreements in motion observations. We term such dynamic situations environments with motion conflict, because they are caused when multiple different but self-consistent motions are observed by different sensors. Tightly coupled visual-inertial fusion approaches that disregard such challenging situations exhibit drift that can lead to catastrophic errors.
We have provided a probabilistic model for motion conflict. Additionally, a novel algorithm to detect and resolve motion conflicts is presented. Our method to detect motion conflicts is based on per-frame positional estimate discrepancy and per-landmark reprojection errors. Motion conflicts are resolved by eliminating inconsistent IMU and landmark measurements. Finally, a Motion Conflict aware Visual Inertial Odometry (MC-VIO) algorithm that combines both detection and resolution of motion conflict was implemented. Both quantitative and qualitative evaluations of MC-VIO on visually and inertially challenging datasets were performed. Experimental results indicate that the MC-VIO algorithm reduces the absolute trajectory error by 70% and the relative pose error by 34% in scenes with motion conflict, in comparison to the reference VIO algorithm. Motion conflict detection and resolution enables the application of visual-inertial localization algorithms to real dynamic environments. This paves the way for articulated object tracking in robotics. It may also find numerous applications in active long-term augmented reality.
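The stated detection criterion (per-frame positional discrepancy plus per-landmark reprojection errors) can be sketched as a simple thresholding rule; the threshold values and the majority-vote rule are illustrative assumptions, not the dissertation's probabilistic model:

```python
def detect_motion_conflict(reproj_errors, pos_fused, pos_imu,
                           reproj_thresh=2.0, pos_thresh=0.5):
    """Flag a motion conflict when either (a) the per-frame discrepancy
    between the IMU-propagated position and the fused estimate is large,
    or (b) a majority of landmarks have large reprojection errors.

    reproj_errors: per-landmark reprojection errors in pixels
    pos_fused, pos_imu: 3D positions from the fused estimate and from
                        IMU propagation alone

    Returns (frame_conflict, indices_of_inconsistent_landmarks); the
    inconsistent landmarks are candidates for elimination."""
    bad = [i for i, e in enumerate(reproj_errors) if e > reproj_thresh]
    pos_discrepancy = sum((a - b) ** 2
                          for a, b in zip(pos_fused, pos_imu)) ** 0.5
    frame_conflict = (pos_discrepancy > pos_thresh
                      or len(bad) > len(reproj_errors) // 2)
    return frame_conflict, bad

# Most landmarks disagree with the estimated motion: conflict flagged.
print(detect_motion_conflict([0.5, 5.0, 5.1, 0.4, 6.2],
                             (1.0, 2.0, 0.0), (1.1, 2.0, 0.0)))
```

Resolution in this sketch would simply drop the returned landmark indices (or the IMU term) from the next optimization step, mirroring the elimination of inconsistent measurements described above.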
|
55 |
Multi Sensor System for Pedestrian Tracking and Activity Recognition in Indoor Environments
Marron Monteserin, Juan Jose, 03 March 2014 (has links)
The widespread use of mobile devices and the rise of Global Navigation Satellite Systems (GNSS) have allowed mobile tracking applications to become very popular and valuable in outdoor environments. However, tracking pedestrians in indoor environments with Global Positioning System (GPS)-based schemes is still very challenging given the lack of sufficient signals to locate the user. Along with indoor tracking, the ability to recognize pedestrian behavior and activities can lead to considerable growth in location-based applications, including pervasive healthcare, leisure and guide services (in museums, airports, stores, etc.), and emergency services, among the most important ones.
This thesis presents a system for pedestrian tracking and activity recognition in indoor environments using exclusively common off-the-shelf sensors embedded in smartphones (accelerometer, gyroscope, magnetometer, and barometer). The proposed system combines knowledge of the biomechanical patterns of the human body during basic activities, such as walking or climbing stairs up and down, with the identifiable signatures that certain indoor locations (such as turns or elevators) introduce into the sensor data.
The system was implemented and tested on Android-based mobile phones with a fixed phone position. The system provides accurate step detection and counting, with an error of 3% in flat-floor motion traces and 3.33% on stairs. User changes of direction and altitude are detected with 98.88% and 96.66% accuracy, respectively. In addition, the activity recognition module has an accuracy of 95%. The combination of modules leads to a total tracking accuracy of 90.81% in common human indoor displacements.
|
56 |
MALLS - Mobile Automatic Launch and Landing Station for VTOL UAVs
Gising, Andreas, January 2008 (has links)
The market for vertical takeoff and landing unmanned aerial vehicles, VTOL UAVs, is growing rapidly. To meet the demand for VTOL UAVs in offshore applications, CybAero has developed a novel concept for landing on moving objects called MALLS, the Mobile Automatic Launch and Landing Station. MALLS can tilt its helipad and is supposed to align either to the horizontal plane, with an operator-adjusted offset, or to the helicopter skids. Doing so eliminates the gyroscopic forces otherwise induced in the rotor disc as the helicopter is forced to change attitude when the skids align to the ground during landing, or when standing on a jolting boat with the rotor spun up. This master's thesis project is an attempt to bring the MALLS concept closer to a quarter-scale implementation. The main focus lies on the development of the measurement methods for obtaining the references needed by MALLS: the horizontal plane and the plane of the helicopter skids. The control of MALLS is also discussed. The measurement methods developed have been proven by tested implementations or simulations. The theories behind them involve, among other things, signal filtering, Kalman filtering, sensor fusion, and search algorithms. The project has resulted in a MALLS prototype that can align its helipad to the horizontal plane, and a method for measuring the relative attitude between the helipad and the helicopter skids has been developed. Suggestions for future improvements are also presented.
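One standard way to fuse gyroscope and accelerometer data when estimating the tilt of a platform such as a helipad is a complementary filter; this is a generic sketch with an assumed blending factor, not the filtering actually implemented in MALLS:

```python
def complementary_tilt(angle_prev, gyro_rate, acc_angle, dt, alpha=0.98):
    """One step of a complementary filter for a tilt angle (degrees).

    Trusts the integrated gyroscope rate at short time scales (smooth,
    but drifts) and the accelerometer-derived angle at long time scales
    (noisy under vibration, but drift-free).  alpha sets the crossover."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * acc_angle

# Static platform tilted 10 degrees: starting from a wrong estimate of
# 0 degrees, the filter converges to the accelerometer angle while
# remaining smooth against high-frequency vibration.
angle = 0.0
for _ in range(300):
    angle = complementary_tilt(angle, gyro_rate=0.0, acc_angle=10.0, dt=0.01)
print(round(angle, 2))
```

A Kalman filter, as mentioned in the abstract, generalizes this by choosing the blending gain from explicit noise models instead of a fixed alpha.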
|
57 |
Sensor fusion between a Synthetic Attitude and Heading Reference System and GPS / Sensorfusion mellan ett Syntetiskt attityd- och kursreferenssystem och GPS
Rosander, Regina, January 2003 (has links)
Sensor fusion deals with merging several signals into one, extracting a better and more reliable result. Traditionally, the Kalman filter is used for this purpose, and aircraft navigation has benefited tremendously from its use. This thesis considers the merging of two navigation systems: the GPS positioning system and the Saab-developed Synthetic Attitude and Heading Reference System (SAHRS). The purpose is to find a model for such a fusion and to investigate whether the fusion will improve the overall navigation performance. The non-linear nature of the navigation equations leads to the use of the extended Kalman filter, and the model is evaluated against both simulated and real data. The results show that this strategy indeed works, although problems arise when the GPS signal is lost.
|
58 |
Robust Automotive Positioning: Integration of GPS and Relative Motion Sensors / Robust fordonspositionering: Integration av GPS och sensorer för relativ rörelse
Kronander, Jon, January 2004 (has links)
Automotive positioning systems relying exclusively on the input from a GPS receiver, which is a line-of-sight sensor, tend to be sensitive to situations with limited sky visibility. Such situations include urban environments with tall buildings; inside parking structures; underneath trees; in tunnels; and under bridges. In these situations, the system has to rely on the integration of relative motion sensors to estimate vehicle position. However, these sensor measurements are generally affected by errors, such as offsets and scale factors, that cause the resulting position accuracy to deteriorate rapidly once GPS input is lost.

The approach in this thesis is to use a GPS receiver in combination with low-cost sensor equipment to produce a robust positioning module. The module should be capable of handling situations where GPS input is corrupted or unavailable. The working principle is to calibrate the relative motion sensors while GPS is available, in order to improve accuracy during GPS intermissions. To fuse the GPS information with the sensor outputs, different models have been proposed and evaluated on real data sets. These models tend to be nonlinear and have therefore been processed in an extended Kalman filter structure.

Experiments show that the proposed solutions can compensate for most of the errors associated with the relative motion sensors, and that the resulting positioning accuracy is improved accordingly.
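The working principle of calibrating a relative motion sensor while GPS is available can be sketched as a scalar Kalman filter estimating an odometer scale factor; the measurement model and noise values are illustrative assumptions, not the thesis's models:

```python
def calibrate_scale(odo_speeds, gps_speeds, p=1.0, r=0.25, scale=1.0):
    """Recursively estimate an odometer scale factor while GPS is
    available, so that dead reckoning stays accurate during outages.

    Scalar Kalman filter on the measurement model
        gps_speed = scale * odo_speed + noise
    with state variance p, measurement noise variance r, and the scale
    factor itself as the (constant) state."""
    for odo, gps in zip(odo_speeds, gps_speeds):
        h = odo                    # measurement Jacobian d(gps)/d(scale)
        s = h * p * h + r          # innovation variance
        k = p * h / s              # Kalman gain
        scale += k * (gps - scale * odo)
        p *= (1.0 - k * h)
    return scale

# Odometer over-reads by ~5%: GPS-referenced updates recover scale ~0.95.
odo = [10.0, 12.0, 11.0, 9.0, 10.5]
gps = [0.95 * v for v in odo]
print(round(calibrate_scale(odo, gps), 4))
```

Once GPS drops out, the calibrated scale keeps being applied to the odometer, which is exactly how the module limits drift during an outage.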
|
59 |
Anomaly detection in unknown environments using wireless sensor networks
Li, YuanYuan, 01 May 2010 (has links)
This dissertation addresses the problem of distributed anomaly detection in Wireless Sensor Networks (WSN). A challenge in designing such systems is that the sensor nodes are battery-powered, often have different capabilities, and generally operate in dynamic environments. Programming such sensor nodes at large scale can be a tedious job if the system is not carefully designed. Data modeling in distributed systems is important for determining the normal operation mode of the system. Being able to model the expected sensor signatures for typical operations greatly simplifies the human designer's job by enabling the system to autonomously characterize the expected sensor data streams. This, in turn, allows the system to perform autonomous anomaly detection to recognize when unexpected sensor signals are detected. This type of distributed sensor modeling can be used in a wide variety of sensor networks, such as detecting the presence of intruders, detecting sensor failures, and so forth. The advantage of this approach is that the human designer does not have to characterize the anomalous signatures in advance.
The contributions of this approach include: (1) providing a way for a WSN to autonomously model sensor data with no prior knowledge of the environment; (2) enabling a distributed system to detect anomalies in both sensor signals and temporal events online; (3) providing a way to automatically extract semantic labels from temporal sequences; (4) providing a way for WSNs to save communication power by transmitting compressed temporal sequences; (5) enabling the system to detect time-related anomalies without prior knowledge of abnormal events; and (6) providing a novel missing-data estimation method that utilizes temporal and spatial information to replace missing values. The algorithms have been designed, developed, evaluated, and validated experimentally on synthesized data and in real-world sensor network applications.
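Contribution (1), modeling sensor data online with no prior knowledge, can be illustrated in miniature with a running-statistics detector; this generic sketch (Welford's online algorithm with a 3-sigma rule) stands in for, and is not, the dissertation's method:

```python
class OnlineAnomalyDetector:
    """Flag readings far from the running mean, learned online with
    Welford's algorithm -- no prior model of the environment or of the
    anomalous signatures is needed."""

    def __init__(self, n_sigma=3.0):
        self.n = 0          # number of samples seen
        self.mean = 0.0     # running mean
        self.m2 = 0.0       # running sum of squared deviations
        self.n_sigma = n_sigma

    def update(self, x):
        """Return True if x looks anomalous, then fold it into the model."""
        anomalous = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) > self.n_sigma * std
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

# Ordinary temperature readings pass; a spike is flagged.
det = OnlineAnomalyDetector()
readings = [20.0, 21.0, 19.0, 20.0, 21.0, 19.0, 20.0, 21.0, 19.0, 20.0]
flags = [det.update(x) for x in readings] + [det.update(100.0)]
print(flags)
```

Running this per node keeps communication cheap: only flagged readings need to be reported, in the spirit of contribution (4).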
|
60 |
Pose Estimation and Calibration Algorithms for Vision and Inertial Sensors
Hol, Jeroen Diederik, January 2008 (has links)
This thesis deals with estimating position and orientation in real-time, using measurements from vision and inertial sensors. A system has been developed to solve this problem in unprepared environments, assuming that a map or scene model is available. Compared to 'camera-only' systems, the combination of the complementary sensors yields an accurate and robust system which can handle periods with uninformative or no vision data and reduces the need for high-frequency vision updates. The system achieves real-time pose estimation by fusing vision and inertial sensors using the framework of nonlinear state estimation, for which state space models have been developed. The performance of the system has been evaluated using an augmented reality application where the output from the system is used to superimpose virtual graphics on the live video stream. Furthermore, experiments have been performed where an industrial robot providing ground truth data is used to move the sensor unit. In both cases the system performed well. Calibration of the relative position and orientation of the camera and the inertial sensor turns out to be essential for proper operation of the system. A new and easy-to-use algorithm for estimating these quantities has been developed using a gray-box system identification approach. Experimental results show that the algorithm works well in practice.
|