161

Obstacle detection using thermal imaging sensors for large passenger airplane

Shi, Jie 12 1900
This thesis addresses the issue of ground collisions in poor weather conditions. Because bad weather is an adverse factor when airplanes are taxiing, an obstacle detection system based on thermal vision is proposed to enhance pilots' awareness while taxiing in poor weather. Two infrared cameras are employed to detect objects and estimate the distance to an obstacle; the distance is computed using stereo vision. A warning is given if the distance falls below a predefined safe distance. To make the system self-contained, it is designed as an on-board system that does not rely on airports or other airplanes. The type of obstacle is classified by the temperature of the object, using fuzzy logic. Obstacles are classified into three main categories: aircraft, vehicles, and people. Membership functions are built from the temperature distributions of obstacles measured at the airport. To improve classification accuracy, the use of position information is proposed: different types of obstacle are predefined for different areas of the airport, and during classification obstacles are restricted to the types expected in that area. Due to the limitations of the borrowed thermal infrared camera, images were captured first and then processed offline. Experiments were carried out to evaluate the distance-estimation error and the performance of the system in poor weather conditions. Obstacle classification is simulated with real thermal images and pseudo position information from the airport. The results suggest that the stereo vision system developed in this research was able to detect obstacles and estimate their distance, and that the classification method classified the obstacles to a certain extent. The proposed system can therefore improve aircraft safety and enhance the situational awareness of pilots. The system is implemented in Python 2.7; the computer-vision library OpenCV 2.3 is used for image processing, and MATLAB is used to simulate obstacle classification.
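A minimal sketch of the two core ideas — stereo distance estimation and fuzzy temperature-based classification — is given below. This is not the thesis's code: the calibration values, temperature ranges, and OpenCV calls (a newer API than the OpenCV 2.3 used in the thesis) are illustrative assumptions.

```python
# Sketch only: stereo depth via Z = f * B / disparity, plus a fuzzy
# temperature classifier. All numeric parameters are hypothetical.
import numpy as np
import cv2

FOCAL_PX = 800.0       # focal length in pixels (assumed calibration)
BASELINE_M = 0.5       # distance between the two cameras, metres (assumed)
SAFE_DISTANCE_M = 30.0 # predefined safe taxiing distance (assumed)

def obstacle_distance(left_img, right_img):
    """Estimate distance to the nearest obstacle from a rectified stereo pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparity scaled by 16.
    disparity = matcher.compute(left_img, right_img).astype(np.float32) / 16.0
    valid = disparity > 0
    if not valid.any():
        return float("inf")  # nothing detected
    # The largest disparity corresponds to the nearest object.
    return FOCAL_PX * BASELINE_M / disparity[valid].max()

def triangular(x, lo, peak, hi):
    """Triangular fuzzy membership function."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

def classify_by_temperature(temp_c):
    """Fuzzy classification into the thesis's three categories.

    Temperature ranges here are invented for illustration; the thesis built
    its membership functions from measurements taken at the airport.
    """
    memberships = {
        "people":   triangular(temp_c, 28.0, 34.0, 40.0),
        "vehicle":  triangular(temp_c, 35.0, 60.0, 90.0),
        "aircraft": triangular(temp_c, 20.0, 45.0, 120.0),
    }
    return max(memberships, key=memberships.get)

# Example use on a captured grayscale thermal stereo pair:
#   dist = obstacle_distance(left, right)
#   if dist < SAFE_DISTANCE_M:
#       print("WARNING: obstacle at %.1f m (%s)" % (dist, classify_by_temperature(36.0)))
```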
162

Analysis of Third Person Cameras in Current Generation Action Games

Schramm, Jonathan January 2013
The purpose of this project was to research the virtual camera systems used in current-generation third-person action games and to identify what could be improved. To do this, different camera shots were categorized into camera archetypes, which also cover the post-processing and lens effects used. Information about the games was acquired either by looking through each game's settings or by observing gameplay. Finally, the results were compared with each other as well as with conventions from the film industry, and several improvements to the usage of different features and camera shots were suggested.
163

Capture Time: Recording in digital era

Uslu, Ahmet January 2013
The primary aim of this project is to gain a complete understanding of photography's development process and to look into future, user-centered innovations. Digital evolution has changed the rules of product design: products have become part of a complex system consisting of a variety of touch-points that constantly extend. Photography and cameras are changing. Mobile phones, wireless connections, and sharing platforms have a big impact on photography. Everything is becoming connected, both people and devices. How will digital photography adapt to this new world? How will people's perception of images change? Is it possible to design a camera considering all the other systems around it? And while designing a highly technological device, how can the user's perspective be included in the design process?
164

Surveillance of Time-varying Geometry Objects using a Multi-camera Active-vision System

Mackay, Matthew Donald 10 January 2012
The recognition of time-varying geometry (TVG) objects (in particular, humans) and their actions is a complex task due to common real-world sensing challenges, such as obstacles and environmental variations, as well as issues specific to TVG objects, such as self-occlusion. Herein, it is proposed that a multi-camera active-vision system, which dynamically selects camera poses in real time, be used to improve TVG action-sensing performance by selecting camera views on-line for near-optimal sensing-task performance. Active vision for TVG objects requires an on-line sensor-planning strategy that incorporates information about the object itself, including its current action, and about the state of the environment, including obstacles, into the pose-selection process. Thus, the focus of this research is the development of a novel methodology for real-time sensing-system reconfiguration (active vision), designed specifically for the recognition of a single TVG object and its actions in a cluttered, dynamic environment that may contain multiple other dynamic (maneuvering) obstacles.

The proposed methodology was developed as a complete, customizable sensing-system framework, readily modified to suit a variety of specific TVG action-sensing tasks: a 10-stage real-time pipeline architecture. Sensor Agents capture and correct camera images, removing noise and lens distortion, and segment the images into regions of interest. A Synchronization Agent aligns multiple images from different cameras to a single 'world-time.' Point Tracking and De-Projection Agents detect, identify, and track points of interest in the resultant 2-D images, and form constraints in normalized camera coordinates using the tracked pixel coordinates. A 3-D Solver Agent combines all constraints to estimate world-coordinate positions for all visible features of the object-of-interest (OoI) 3-D articulated model. A Form-Recovery Agent uses an iterative process to combine model constraints, detected feature points, and other contextual information to produce an estimate of the OoI's current form. This estimate is used by an Action-Recognition Agent to determine which action the OoI is performing, if any, from a library of known actions, using a feature-vector descriptor for identification. A Prediction Agent provides estimates of future OoI and obstacle poses, given past detected locations, and of future OoI forms, given the current action and past forms. Using all of the data accumulated in the pipeline, a Central Planning Agent implements a formal mathematical optimization developed from the general sensing problem; it seeks to optimize a visibility metric, which is positively related to sensing-task performance, to select desirable, feasible, and achievable camera poses for the next sensing instant. Finally, a Referee Agent examines the complete set of chosen poses for consistency, enforces global rules not captured by the optimization, and maintains system functionality if a suitable solution cannot be determined.

In order to validate the proposed methodology, rigorous experiments are also presented. They confirm the basic assumptions of active vision for TVG objects and characterize the gains in sensing-task performance. Simulated experiments provide a method for rapid evaluation of new sensing tasks and demonstrate a tangible increase in single-action recognition performance over a static-camera sensing system. Furthermore, they illustrate the need for feedback in the pose-selection process, allowing the system to incorporate knowledge of the OoI's form and action. Later real-world, multi-action and multi-level action experiments demonstrate the same tangible increase when sensing real-world objects that perform multiple actions, which may occur simultaneously or at differing levels of detail. A final set of real-world experiments characterizes the real-time performance of the proposed methodology in relation to several important system design parameters, such as the number of obstacles in the environment and the size of the action library. Overall, it is concluded that the proposed system tangibly increases TVG action-sensing performance and can be generalized to a wide range of applications, including human-action sensing. Future research is proposed to develop similar methods for deformable objects and multiple objects of interest.
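As a rough illustration of the Central Planning Agent's role, the sketch below selects, for each camera, the candidate pose that maximizes a simple visibility metric (the fraction of OoI feature points not occluded by spherical obstacles). The metric, the geometry, and the per-camera greedy choice are simplifying assumptions; the dissertation formulates a formal joint optimization with a referee stage.

```python
# Sketch only: greedy per-camera pose selection against a toy visibility
# metric. Poses are 3-D camera positions; obstacles are (centre, radius).
import numpy as np

def segment_hits_sphere(a, b, centre, radius):
    """True if the line segment a->b passes through a spherical obstacle."""
    ab = b - a
    t = np.clip(np.dot(centre - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    return np.linalg.norm(centre - closest) < radius

def visible_fraction(pose, ooi_points, obstacles):
    """Visibility metric: fraction of OoI feature points with a clear
    line of sight from the camera position."""
    cam = np.asarray(pose, dtype=float)
    clear = 0
    for p in ooi_points:
        p = np.asarray(p, dtype=float)
        if not any(segment_hits_sphere(cam, p, np.asarray(c, float), r)
                   for c, r in obstacles):
            clear += 1
    return clear / len(ooi_points)

def plan_poses(candidate_poses_per_camera, ooi_points, obstacles):
    """Central-planner step: choose the best feasible pose per camera.

    A real implementation would optimize jointly over all cameras and pass
    the chosen set to a referee agent for global-consistency checks.
    """
    return [max(poses, key=lambda q: visible_fraction(q, ooi_points, obstacles))
            for poses in candidate_poses_per_camera]
```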
166

Development and Design of a Near-Field High-Energy Gamma Camera for Use with Neutron Stimulated Emission Computed Tomography

Sharma, Amy Congdon 10 December 2007
A new gamma imaging method, Neutron Stimulated Emission Computed Tomography (NSECT), is being developed to non-invasively and non-destructively measure and image elemental concentrations in vivo. In NSECT, a beam of fast neutrons (3 - 5 MeV) bombards a target, scattering inelastically off target nuclei and exciting them. Decay from this excited state produces characteristic gamma emissions, and collecting the resulting gamma energy spectrum allows identification of the elements present in the target. As these gamma rays range in energy from 0.3 - 1.5 MeV, outside the usable energy range of existing gamma cameras (0.1 - 0.511 MeV), a new gamma imaging method must be developed. The purpose of this dissertation is to design and develop a near-field (less than 0.5 m) high-energy (0.3 - 1.5 MeV) gamma camera to facilitate planar NSECT imaging. Modifying a design used in space-based imaging (focused at infinity), a prototype camera was built. Experimental testing showed that the far-field space-based assumptions were inapplicable in the near field, so a new mathematical model was developed to describe the modulation behavior in the near field. Additionally, a Monte Carlo simulation of the camera and imaging environment was developed. These two tools were used to optimize the camera parameters. Simulated data were then used to reconstruct images for both small-animal and human fields of view. Limitations of the camera design were identified and quantified. Image analysis demonstrated that the camera has the potential to identify regions of interest in a human field of view.
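The element-identification step can be pictured as matching detected spectrum peaks against a table of known characteristic emission lines. The sketch below illustrates the idea only; the line energies are approximate textbook values labeled as such, not the dissertation's calibration data.

```python
# Sketch only: identify elements by matching measured gamma peaks against
# characteristic line energies. Values are illustrative approximations.
CHARACTERISTIC_LINES_MEV = {
    "Fe": 0.847,  # 56Fe first excited state (approximate, illustrative)
    "Cl": 1.219,  # illustrative value
    "C":  4.438,  # illustrative; note this lies outside the 0.3-1.5 MeV band
}

def identify_elements(peak_energies_mev, tolerance_mev=0.02):
    """Return the elements whose characteristic line matches a detected peak."""
    found = set()
    for peak in peak_energies_mev:
        for element, line in CHARACTERISTIC_LINES_MEV.items():
            if abs(peak - line) <= tolerance_mev:
                found.add(element)
    return found

# e.g. identify_elements([0.846, 1.22]) -> {"Fe", "Cl"}
```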
167

Experience using a small field of view gamma camera for intraoperative sentinel lymph node procedures

Greene, Carmen M. 18 January 2006
Staging is critical in the management of cancer, and sentinel lymph node (SLN) biopsy is one method used to assess cancer spread. SLN procedures are standard practice in the management of some cancers, although these procedures have only recently been developed and refined. They are commonly used in the management of melanomas and breast cancers in patients with no evidence of metastatic disease on clinical exam. SLN procedures include detection, localization, and assessment of SLNs; the detection/localization components vary in technique and rates of success. The simplest procedures generally use blue dye alone or a radiotracer with intraoperative gamma counting. The most complex procedures involve blue dye, a radiotracer with preoperative gamma imaging and counting, intraoperative gamma counting, or some combination of these techniques. The ideal SLN procedure would include all the listed techniques; however, not all facilities incorporate the most complete procedure, for various reasons. An investigation using a small field-of-view (FOV, 5 in x 5 in) gamma camera intraoperatively for SLN procedures in melanoma and breast cancer patients was performed. A smaller-FOV camera is capable of obtaining some of the same information as a conventional gamma camera, so centers that do not or cannot take advantage of preoperative imaging may find the use of a smaller-FOV gamma camera in the operating room useful. The investigation consisted of a total of 41 patients and was split into two studies, Study 1 (melanoma) and Study 2 (breast cancer). The melanoma study found the added benefit of a smaller-FOV camera under the parameters of this study to be minimal. Study 2 was broken into two branches, Branch 1 (camera/probe/dye) and Branch 2 (probe/dye), for a comparison study. Comparing the two branches did not show the smaller-FOV camera to reduce the time spent in the operating room versus using the probe and blue dye alone.
168

Multi-camera Human Tracking on Realtime 3D Immersive Surveillance System

Hsieh, Meng-da 23 June 2010
Conventional surveillance systems present video to a user from more than one camera on a single display. Such a display allows the user to observe different parts of the scene, or to observe the same part of the scene from different viewpoints. Each video is usually labeled by a fixed textual annotation displayed under the video segment to identify the image. With the growing number of surveillance cameras and the expansion of the surveillance area, the conventional split-screen display approach cannot provide an intuitive correspondence between the images acquired and the areas under surveillance. Such a system has a number of inherent flaws: low relativity between split videos, difficulty in tracking new activities, low resolution of surveillance videos, and the difficulty of total surveillance. To improve on these defects, the "Immersive Surveillance for Total Situational Awareness" system uses computer-graphics techniques to construct 3D models of buildings on 2D satellite images; users construct the floor platform by defining the information of each floor or building and the position of each camera. This information is combined to construct a 3D surveillance scene, and the images acquired by the surveillance cameras are pasted into the constructed 3D model to provide an intuitive visual presentation. Users can also walk through the scene with a fixed-frequency, self-defined patrol model to perform virtual surveillance. Multi-camera human tracking on the real-time 3D immersive surveillance system builds on "Immersive Surveillance for Total Situational Awareness" in three parts. 1. Salient object detection: the system converts videos to corresponding image sequences and analyzes the video provided by each camera. To filter out foreground pixels, the background model of each image is calculated by a pixel-stability-based background update algorithm. 2. Nighttime image fusion: a fuzzy enhancement method enhances dark areas in nighttime images while maintaining saturation information, and the salient-object-detection algorithm then extracts salient objects from the dark areas. The system divides the fusion results into three parts (wall, ceiling, and floor) and pastes them as materials onto the corresponding parts of the 3D scene. 3. Multi-camera human tracking: connected-component labeling filters out small areas and saves each blob's information. RGB-weight percentage information in each blob and a 5-state status (Enter, Leave, Match, Occlusion, Fraction) are used to draw the trajectory of each person in every camera's field of view on the 3D surveillance scene. Finally, all cameras are fused together to complete multi-camera real-time people tracking. With this, every person can be tracked in the 3D immersive surveillance system without watching each of thousands of camera views.
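A simplified sketch of the salient-object stage is shown below. OpenCV's MOG2 background subtractor stands in for the thesis's pixel-stability-based background update, and the area threshold is an assumed value.

```python
# Sketch only: background subtraction, then connected-component labeling to
# drop small regions and keep per-blob information for tracking.
import cv2

MIN_BLOB_AREA = 200  # pixels; noise-filtering threshold (assumed)
bg_model = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def salient_blobs(frame):
    """Return (bounding box, mean RGB) for each sufficiently large blob."""
    mask = bg_model.apply(frame)  # foreground mask from the background model
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    blobs = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < MIN_BLOB_AREA:
            continue  # filter out small areas, as in the thesis
        x, y, w, h = stats[i, :4]
        # Mean color per blob: an appearance cue for matching people
        # across frames and cameras.
        mean_rgb = frame[labels == i].mean(axis=0)
        blobs.append(((x, y, w, h), mean_rgb))
    return blobs
```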
169

Analysis of the Black-capped Vireo and White-eyed Vireo Nest Predator Assemblages

Conkling, Tara J. May 2010
Predation is the leading cause of nest failure in songbirds. My study identified nest predators of black-capped vireos and white-eyed vireos, quantified the activity of potential predator species, examined the relationships between vegetation and nest predators, and examined the relationship between nest predation and parasitism by brown-headed cowbirds. In 2008 and 2009 I monitored black-capped and white-eyed vireo nests on privately owned properties in Coryell County, and black-capped vireo nests on Kerr WMA in Kerr County and at Devils River State Natural Area in Val Verde County (2009 only). I monitored vireo nests using a video camera system to identify predators and nest fate. I also collected at-nest vegetation measurements, including nest height, distance to nearest habitat edge, and nest concealment. Additionally, I sampled potential predator activity at a subset of black-capped vireo and white-eyed vireo nests in Coryell County using camera-trap bait stations and herpetofaunal traps. I monitored 117 black-capped vireo nests and 54 white-eyed vireo nests. Forty-two percent of black-capped vireo and 35% of white-eyed vireo nests failed due to predation. I recorded more than 10 predator species in total, with 37 predation events at black-capped vireo nests and 15 at white-eyed vireo nests. Snakes (35%) and cowbirds (29%) were the most frequently identified nest predators; however, the major predator species varied by location. I observed no significant relationship between nest fate (fledge vs. fail) and nest concealment or distance to edge for either vireo species. Nest height, concealment, and distance to edge may relate to the predator species involved: snakes in Coryell County and avian predators at Kerr. Additionally, I observed no difference between predator activity and the fate of the nest. Both vireos have multiple nest predator species, and the multiple cowbird predations demonstrate that this species may have multi-level impacts on vireo productivity, even with active cowbird management. Vegetation structure and concealment may also affect predator species; however, the activity of other predator species near active nests may not negatively affect nest success.
170

Systems and Algorithms for Automated Collaborative Observation using Networked Robotic Cameras

Xu, Yiliang August 2011
The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master-slave control architecture, and requests for controlling robot actuation are generated entirely by human operators. Recently, fast-evolving advances in network and computer technologies, together with the decreasing size and cost of sensors and robots, enable us to further extend the MOMR architecture to incorporate heterogeneous components such as humans, robots, sensors, and automated agents, with requests for controlling robot actuation generated by all participants. We term this the MOMR++ system. However, to reach the system's full potential and performance, many technical challenges need to be addressed. In this dissertation, we address two major challenges in MOMR++ system development. We first address the robot coordination and planning issue in an autonomous crowd surveillance system. The system consists of multiple robotic pan-tilt-zoom (PTZ) cameras assisted by a fixed wide-angle camera. The wide-angle camera provides an overview of the scene and detects moving objects, which are then targeted for close-up views by the PTZ cameras. Applied to pedestrian surveillance and compared to previous work, the system increases the number of observed objects by over 210% in heavy-traffic scenarios. The key issue is, given a limited number p (p > 0) of PTZ cameras and many more observation requests n (n >> p), how to coordinate the cameras to best satisfy all the requests. We formulate this as a new camera resource allocation problem: given p cameras, n observation requests, and approximation bound ε, we develop an approximation algorithm running in O(n/ε³ + p²/ε⁶) time, and an exact algorithm, for p = 2, running in O(n³) time. We then address the automatic object content analysis and recognition issue in an autonomous rare-bird-species detection system, set up in the forest near Brinkley, Arkansas. The camera monitors the sky, detects motion, and preserves video data only for the targeted bird species. During the one-year search, the system reduced 29.41 TB of raw video data to only 146.7 MB (a reduction rate of 99.9995%). The key issue here is automatically recognizing the flying bird species. We verify the bird's body-axis dynamic information with an extended Kalman filter (EKF) and compare the bird's dynamic state with prior knowledge of the targeted species. We quantify the uncertainty in recognition due to measurement uncertainty and develop a novel Probable Observation Data Set (PODS)-based EKF method. In experiments with real video data, the algorithm achieves 95% area under the receiver operating characteristic (ROC) curve. Through the exploration of these two MOMR++ systems, we conclude that the new architecture enables a much wider range of participants, enhances collaboration and interaction so that information can be exchanged among participants, suppresses the chance of individual bias or mistakes in the observation process, and further frees humans from the control/observation process by providing automatic control/observation. The MOMR++ architecture is a promising direction for future telerobotics advances.
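As a toy illustration of the camera resource allocation problem (not the dissertation's algorithm), the sketch below exhaustively places p angular windows, one per PTZ camera, to cover as many observation requests as possible. The field-of-view width and the reduction of requests to pan angles are simplifying assumptions.

```python
# Sketch only: n observation requests (pan angles, degrees) compete for
# p PTZ cameras, each covering one angular window per sensing cycle.
# Exhaustive search over candidate windows; practical only for tiny inputs.
from itertools import combinations

FOV_DEG = 30.0  # angular window one PTZ camera covers (assumed)

def covered(window_start, angles):
    """Requests satisfied by a window starting at window_start."""
    return {a for a in angles if window_start <= a <= window_start + FOV_DEG}

def best_assignment(request_angles, p):
    """Choose p window placements maximizing satisfied requests.

    Candidate windows start at request angles (a standard interval-cover
    observation: some optimal solution aligns each window's left edge with
    a request).
    """
    candidates = sorted(set(request_angles))
    best, best_hits = None, -1
    for windows in combinations(candidates, min(p, len(candidates))):
        hits = set().union(*(covered(w, request_angles) for w in windows))
        if len(hits) > best_hits:
            best, best_hits = windows, len(hits)
    return best, best_hits

# e.g. best_assignment([10, 25, 80, 95, 200], p=2) -> ((10, 80), 4)
```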
