21 |
Neuropsychological Factors Associated with Useful Field of View. Patel, Kruti D., 11 June 2014 (has links)
No description available.
|
22 |
Risk-Aware Human-In-The-Loop Multi-Robot Path Planning for Lost Person Search and Rescue. Cangan, Barnabas Gavin, 12 July 2019 (has links)
We introduce a framework that would enable using autonomous aerial vehicles in search and rescue scenarios associated with missing person incidents to assist human searchers. We formulate a lost person behavior model and a human searcher model, both informed by data collected from past search missions. These models are used to generate a probabilistic heatmap of the lost person's position and anticipated searcher trajectories. We use Gaussian processes with a Gibbs kernel for data fusion to accurately model a limited field-of-view sensor. Our algorithm thereby computes a set of trajectories for a team of aerial vehicles to autonomously navigate, so as to assist and complement human searchers' efforts.

Master of Science

Our goal is to assist human searchers using autonomous aerial vehicles in search and rescue scenarios associated with missing person incidents. We formulate a lost person behavior model and a human searcher model, both informed by data collected from past search missions. These models are used to generate a probabilistic heatmap of the lost person's position and anticipated searcher trajectories. We use Gaussian processes with a Gibbs kernel for data fusion to accurately model a limited field-of-view sensor. Our algorithm thereby computes a set of trajectories for a team of aerial vehicles to autonomously navigate, so as to assist and complement human searchers' efforts.
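The Gibbs kernel referenced here is a nonstationary covariance whose lengthscale varies with input location, which is what lets a Gaussian process model a sensor whose reliability degrades away from the centre of its field of view. A minimal numpy sketch of the 1-D form follows; the specific lengthscale function is purely illustrative, not the one used in the thesis.

```python
import numpy as np

def gibbs_kernel(x1, x2, lengthscale):
    """Nonstationary Gibbs kernel for 1-D inputs:

    k(x, x') = sqrt(2 l(x) l(x') / (l(x)^2 + l(x')^2))
               * exp(-(x - x')^2 / (l(x)^2 + l(x')^2))

    where lengthscale(x) is an input-dependent lengthscale function.
    """
    l1 = lengthscale(x1)[:, None]   # shape (n1, 1)
    l2 = lengthscale(x2)[None, :]   # shape (1, n2)
    sq = l1**2 + l2**2
    prefactor = np.sqrt(2.0 * l1 * l2 / sq)
    return prefactor * np.exp(-(x1[:, None] - x2[None, :])**2 / sq)

# Illustrative lengthscale: tight near the sensor footprint centre (x = 0),
# growing with distance -- a stand-in for a limited field-of-view model.
def lengthscale(x):
    return 0.5 + 0.2 * np.abs(x)

x = np.linspace(-3, 3, 7)
K = gibbs_kernel(x, x, lengthscale)   # valid covariance: symmetric, unit diagonal
```

Because the prefactor normalizes the varying lengthscales, the result is a valid (positive semi-definite) covariance matrix, unlike naively plugging a position-dependent lengthscale into a squared-exponential kernel.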
|
23 |
Analyse des besoins des conducteurs âgés et des adaptations mises en œuvre lors de la réalisation de manœuvres à basses vitesses / Analysis of older drivers' needs and adaptations during low speed manoeuvres. Douissembekov, Evgueni, 14 November 2014 (has links)
Parking manoeuvres are a source of difficulty for older drivers and are therefore among the driving situations most often avoided by seniors. This study examines the difficulties and needs of older drivers during low-speed manoeuvres. First, a survey covering different aspects of parking activity was conducted among older drivers of the Rhône department. Next, the management of cognitive resources while manoeuvring was studied in a series of experiments covering different types of parking manoeuvres; to this end, reconfigurable parking spaces were set up in a car park and the manoeuvres were performed by the participants. The study focuses in particular on the balance between salience and relevance during visual exploration of the parking environment. The information obtained should contribute to the design of manoeuvring assistance systems adapted to seniors.

One cannot imagine driving without parking manoeuvres, since they mark the beginning and the end of each trip. However, physiological and cognitive decline with ageing can increase the difficulty of parking manoeuvring. Our study is organized in two stages. First, a postal survey investigated parking behaviour among seniors, providing information about the parking habits, needs and difficulties of older drivers. An approach based on the Manchester Driving Behaviour Questionnaire led us to classify four types of parking errors. We also identified factors contributing to the difficulty of parking manoeuvring. Second, we studied parking manoeuvring with an experimental vehicle and in a driving simulator. In order to examine attentional processes during manoeuvres, we used the MAM model of attention, modifying the salience and the relevance of elements present in the parking environment. Parking performance was also examined in relation to drivers' age and their attentional and visual abilities. The salience and the relevance of parking environments interacted with drivers' age and the extent of their total, peripheral and attentional field of view. Drivers with a restricted total, peripheral or attentional field of view may experience more difficulty during manoeuvres when they must share their attention in a complex parking environment, although a highly salient obstacle can be more easily detected by these drivers. In the presence of a pedestrian, the difficulty of manoeuvring can increase among drivers with a restricted total, peripheral or attentional field of view and decrease among drivers without such restriction. Further research should provide more information on the strategies adopted by older drivers during manoeuvring.
|
24 |
Using helicopter noise to prevent brownout crashes: an acoustic altimeter. Freedman, Joseph Saul, 08 July 2010 (has links)
This thesis explores one possible method of preventing helicopter crashes caused by brownout: using the noise generated by the helicopter rotor as an altimeter. The hypothesis under consideration is that the helicopter's height, velocity, and obstacle locations with respect to the helicopter can be determined by comparing incident and reflected rotor noise signals, provided adequate bandwidth and signal-to-noise ratio. Height can be determined by measuring the cepstrum of the reflected helicopter noise. Velocity can be determined by measuring small amounts of Doppler distortion using the Mellin-Scale Transform. Height and velocity detection algorithms are developed, optimized for this application, and tested using a microphone array. The algorithms and array are tested in a hemianechoic chamber and outdoors in Georgia Tech's Burger Bowl. Height and obstacle detection are determined to be feasible with the existing array. Velocity detection and surface mapping are not successfully accomplished.
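The cepstral approach to height estimation works because a signal plus a delayed echo of itself produces a peak in the cepstrum at the echo delay. The numpy sketch below demonstrates the idea on synthetic broadband noise; the sampling rate, echo strength, and low-quefrency cutoff are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def echo_delay_cepstrum(signal, fs, min_quefrency_bins=20):
    """Estimate the delay of an echo via the real cepstrum.

    A signal plus a scaled, delayed copy of itself adds a ripple to the
    log-magnitude spectrum whose inverse transform (the cepstrum) peaks
    at the echo delay. For a sensor co-located with the source,
    height = speed_of_sound * delay / 2 (round trip).
    """
    spectrum = np.fft.rfft(signal)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.irfft(log_mag)
    # Skip the lowest quefrencies, which are dominated by the source spectrum
    start = min_quefrency_bins
    peak = start + np.argmax(cepstrum[start:len(cepstrum) // 2])
    return peak / fs

# Synthetic test: broadband "rotor" noise plus a ground echo delayed 5 ms.
rng = np.random.default_rng(0)
fs = 48_000
direct = rng.standard_normal(fs)           # 1 s of noise
delay_samples = 240                         # 5 ms at 48 kHz
echoed = direct.copy()
echoed[delay_samples:] += 0.6 * direct[:-delay_samples]

delay = echo_delay_cepstrum(echoed, fs)     # recovers ~5 ms
height = 343.0 * delay / 2                  # round-trip conversion to height
```

The real system must also separate incident from reflected noise and cope with much lower signal-to-noise ratios than this clean synthetic case.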
|
25 |
Vision Enhancement Systems: The Importance of Field of View. Grönqvist, Helena, January 2002 (has links)
The purpose of the project, which led to this thesis, was to investigate the possible effects different horizontal Fields of View (FoV) have on driving performance when driving at night with a Vision Enhancement System (VES). The FoVs chosen for examination were 12 degrees and 24 degrees, both displayed on a screen with the horizontal size of a 12 degree FoV. This meant that the two FoV conditions also had different display ratios, 1:1 and 1:2; no effort was made to separate these parameters.

A simulator study was performed at the simulator at IKP, Linköping University. Sixteen participants took part in a within-group design. The participants drove two road sections, one with a 12 degree FoV and the other with a 24 degree FoV. During each section, four scenarios were presented in which the participants passed one of three types of objects: a moose, a deer or a man. In each section, two of the objects stood right next to the road and two stood seventeen meters to the right of the road. As the drivers approached the objects standing seventeen meters to the right of the road with a 12 degree FoV, the objects moved out of the VES display when the vehicle was 200 meters in front of the object; the objects could be seen with the naked eye when the vehicle was 100 meters in front of the object. With a 24 degree FoV, the objects moved out of the VES display at the point where it became possible to see them unaided.

Results show that a 24 degree FoV displayed with a 1:2 ratio gives the drivers improved anticipatory control compared to a 12 degree FoV displayed with a 1:1 ratio. The participants with the broader FoV were able to make informed decisions, whereas with the narrow FoV some participants started to reaccelerate when they could not see an object. Results also show that any difference in recognition distance that may exist between a 12 degree and a 24 degree camera angle displayed on a 12 degree FoV display does not seem to have any adverse effect on the quality of driving.
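The distance at which an off-axis object leaves the display follows from simple pinhole geometry. The sketch below uses an idealized model that ignores camera mounting position, vehicle width, and lane placement, so its numbers differ somewhat from the 200 m and 100 m figures measured in the study; it only illustrates why halving the FoV roughly doubles the exit distance.

```python
import math

def exit_distance(lateral_offset_m, fov_deg):
    """Distance ahead at which an object at a given lateral offset leaves
    a centred horizontal field of view (idealized pinhole model that
    ignores camera mounting position and lane placement)."""
    half_fov = math.radians(fov_deg / 2)
    return lateral_offset_m / math.tan(half_fov)

# Object 17 m to the right of the sensor axis, as in the study's far condition:
d12 = exit_distance(17, 12)   # roughly 162 m for a 12 degree FoV
d24 = exit_distance(17, 24)   # roughly 80 m for a 24 degree FoV
```

The idealized model confirms the qualitative pattern in the study: with the narrow FoV the object disappears from the display far earlier than it becomes visible to the naked eye.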
|
26 |
COMPRESSIVE IMAGING FOR DIFFERENCE IMAGE FORMATION AND WIDE-FIELD-OF-VIEW TARGET TRACKING. Shikhar, January 2010 (has links)
The use of imaging systems for performing various situational awareness tasks in military and commercial settings has a long history. There is increasing recognition, however, that a much better job can be done by developing non-traditional optical systems that exploit the task-specific system aspects within the imager itself. In some cases, a direct consequence of this approach can be real-time data compression along with increased measurement fidelity of the task-specific features. In others, compression can potentially allow us to perform high-level tasks, such as direct tracking, using the compressed measurements without reconstructing the scene of interest. In this dissertation we present novel advancements in feature-specific (FS) imagers for large field-of-view surveillance, and in the estimation of temporal object-scene changes utilizing the compressive imaging paradigm. We develop these two ideas in parallel. In the first case we show a feature-specific (FS) imager that optically multiplexes multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target's true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. We also include simulation and experimental results demonstrating our novel tracking method. In the second case we present a FS imager for estimating temporal changes in the object scene over time by quantifying these changes through a sequence of difference images. The difference images are estimated by taking compressive measurements of the scene. Our goals are twofold. First, to design the optimal sensing matrix for taking compressive measurements. In scenarios where such sensing matrices are not tractable, we consider plausible candidate sensing matrices that either use the available a priori information or are non-adaptive. Second, we develop closed-form and iterative techniques for estimating the difference images. We present results to show the efficacy of these techniques and discuss the advantages of each.
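The measurement model behind difference-image estimation can be sketched as follows: both scenes are observed through the same sensing matrix, so the difference of the measurements depends only on the difference of the scenes. The numpy sketch below uses a random Gaussian sensing matrix and a plain pseudo-inverse (minimum-norm least-squares) estimate; these stand in for the optimized sensing matrices and estimators developed in the dissertation and are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 32                 # scene size, number of compressive measurements

# Random Gaussian sensing matrix -- a common non-adaptive choice.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

x1 = rng.standard_normal(n)               # scene at time 1
delta = np.zeros(n)
delta[10] = 2.0                            # sparse change at one "pixel"
x2 = x1 + delta                            # scene at time 2

y1, y2 = Phi @ x1, Phi @ x2                # compressive measurements
# Since y2 - y1 = Phi @ (x2 - x1), a closed-form minimum-norm estimate of
# the difference image is the pseudo-inverse applied to the measurement gap:
delta_hat = np.linalg.pinv(Phi) @ (y2 - y1)
```

With fewer measurements than pixels the pseudo-inverse only recovers the projection of the true difference onto the row space of the sensing matrix; exploiting sparsity or a priori information, as the dissertation does, improves on this baseline.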
|
28 |
Příjem a zpracování vizuálních informací v dopravním provozu / Reception and Processing of Visual Information in Traffic. Černochová, Dana, January 2013 (has links)
The theoretical part of this work deals with the visual perception of the driver. Special attention is paid to the perception of visual information across the whole range of the field of view. From a psychological point of view, the term functional field of view is important: a construct that takes into account not only perception but also attention. Its size depends on the amount of information that has to be processed at any given moment. English-language literature uses the term "useful field of view" under the abbreviation UFOV; German literature uses the term "nutzbares Sehfeld" under the abbreviation NSF. The experimental part of the work focuses on evaluating changes in visual perception across the range of the visual field in relation to age, using a set of 1,361 people aged 18 to 90 years. Further subchapters describe an experiment in which a secondary task was used as a variable in order to find out whether increased cognitive load affects the range of the useful field of view. For this experiment, a set of 645 people aged 18 to 90 years was used. The parameters of visual perception in situations with and without the added secondary task were also monitored for the relationship to the...
|
29 |
Multi-view point cloud fusion for LiDAR based cooperative environment detection. Jähn, Benjamin; Lindner, Philipp; Wanielik, Gerd, 11 November 2015 (has links) (PDF)
A key component for automated driving is 360° environment detection. The recognition capabilities of modern sensors are always limited to their direct field of view. In urban areas, many objects occlude important areas of interest. The information captured by another sensor from another perspective could resolve such occluded situations. Furthermore, the capabilities to detect and classify various objects in the surroundings can be improved by taking multiple views into account. In order to combine the data of two sensors into one coordinate system, a rigid transformation matrix has to be derived. The accuracy of modern (e.g. satellite-based) relative pose estimation systems is not sufficient to guarantee a suitable alignment. Therefore, a registration-based approach is used in this work which aligns the captured environment data of two sensors from different positions. Thus their relative pose estimate obtained by traditional methods is improved and the data can be fused. To support this, we present an approach which utilizes the uncertainty information of modern tracking systems to determine the possible field of view of the other sensor. Furthermore, it is estimated which parts of the captured data are directly visible to both sensors, taking occlusion and shadowing effects into account. Afterwards, a registration method based on the iterative closest point (ICP) algorithm is applied to that data in order to get an accurate alignment. The contribution of the presented approach to the achievable accuracy is shown with the help of ground truth data from a LiDAR simulation within a 3-D crossroad model. Results show that a two-dimensional position and heading estimate is sufficient to initialize a successful 3-D registration process. Furthermore, it is shown which initial spatial alignment is necessary to obtain suitable registration results.
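At the core of each ICP iteration is a closed-form least-squares rigid transform between corresponding point sets. The numpy sketch below shows that alignment step (the standard SVD-based Kabsch/Umeyama solution); correspondences are assumed known here, whereas ICP re-estimates them via nearest neighbours on every iteration, which is why the good initial 2-D pose estimate mentioned in the abstract matters.

```python
import numpy as np

def rigid_transform(source, target):
    """Closed-form least-squares rigid transform (R, t) mapping `source`
    onto `target` for corresponding 3-D points -- the alignment step
    performed inside each ICP iteration (Kabsch/Umeyama)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det = +1, no reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
rng = np.random.default_rng(2)
pts = rng.standard_normal((100, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
```

With exact correspondences and no noise the recovery is exact; in the cooperative-LiDAR setting the same solve is wrapped in the ICP loop after visibility filtering.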
|
30 |
Morphometric measurements of the retinal vasculature in ultra-wide scanning laser ophthalmoscopy as biomarkers for cardiovascular disease. Pellegrini, Enrico, January 2016 (has links)
Retinal imaging enables the visualization of a portion of the human microvasculature in vivo and non-invasively. The scanning laser ophthalmoscope (SLO) provides images characterized by an ultra-wide field of view (UWFoV) covering approximately 180-200° in a single scan, minimizing the discomfort for the subject. The microvasculature visible in retinal images, and its changes, have been extensively investigated as candidate biomarkers for systemic conditions such as cardiovascular disease (CVD), which currently remains the main cause of death in Europe. For the CARMEN study, UWFoV SLO images were acquired from more than 1,000 people recruited from two studies focused on CVD, TASCFORCE and SCOT-HEART. This thesis presents an automated system for SLO image processing and for the computation of candidate biomarkers to be associated with cardiovascular risk and MRI imaging data. A vessel segmentation technique was developed using a bank of multi-scale matched filters and a neural network classifier. The technique was devised to minimize errors in vessel width estimation, in order to ensure the reliability of width measures obtained from the vessel maps. After a centreline refinement step, a multi-level classification technique was deployed to label all vessel segments as arterioles or venules. The method exploited a set of pixel-level features for local classification and a novel formulation of a graph cut approach to consistently partition the retinal vasculature, modelled as an undirected graph. Once all the vessels were labelled, a tree representation was adopted for each vessel and its branches to fully automate the process of biomarker extraction. Finally, a set of 75 retinal parameters, including information provided by the periphery of the retina, was created for each image and used for the biomarker investigation.
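The matched filters mentioned here follow the classic idea of correlating the image with oriented kernels whose cross-section matches a vessel's intensity profile, taking the maximum response over orientations. Below is a single-scale numpy sketch; the kernel size, Gaussian width, and orientation count are illustrative choices, whereas the thesis combines multiple scales with a neural network classifier.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def matched_filter_bank(image, sigma=1.5, length=7, n_angles=6):
    """Max response over a bank of oriented matched filters with a
    zero-mean, inverted-Gaussian cross-section: dark vessels on a bright
    background give a positive response, flat regions give zero."""
    half = length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # Zero-padded sliding windows, one per output pixel
    windows = sliding_window_view(np.pad(image, half), (length, length))
    responses = []
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        u = -xs * np.sin(theta) + ys * np.cos(theta)   # across-vessel axis
        kernel = -np.exp(-u**2 / (2.0 * sigma**2))      # dark-line template
        kernel -= kernel.mean()                         # zero-mean: flat -> 0
        responses.append(np.tensordot(windows, kernel, axes=([2, 3], [0, 1])))
    return np.max(responses, axis=0)

# Dark vertical "vessel" on a bright background:
img = np.ones((21, 21))
img[:, 10] = 0.0
resp = matched_filter_bank(img)   # peaks along the vessel, ~0 elsewhere
```

A full pipeline would apply this at several scales (to cover varying vessel widths) and feed the responses to a classifier, as the thesis does, rather than thresholding the raw response.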
|