  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Mink and Raccoon Use of Wetlands as Influenced by Wetland and Landscape Characteristics in Central Ohio

Lung, Joni M. 12 September 2008 (has links)
No description available.
92

Canon Camera Museum

Sun, Jiazhen 12 February 2016 (has links)
This thesis is a study of creating the first-ever camera museum using unique building forms and structural elements, as well as water and light features. Through this thesis project, I also want to celebrate the Canon franchise with camera enthusiasts everywhere and express my personal appreciation for Canon cameras, which have been a constant companion throughout my architectural journey. The building's site, shape, and location allow its elements and form to be discovered gradually over the course of a visit. From a distance, the building reads as a simple sculptural piece, an iconic gateway to the Canon Park. As visitors approach and move through the passage into the museum, the building's shape and components build an experience that is more than just a museum. Different lighting conditions, vertical and horizontal circulation methods, and the building's form and structure are used to narrate not only the journey of the Canon camera but also an experiment in my own architectural language. / Master of Architecture
93

Spatially Resolved Equivalence Ratio Measurements Using Tomographic Reconstruction of OH*/CH* Chemiluminescence

Giroux, Thomas Joseph III 27 July 2020 (has links)
Thermoacoustic instabilities in gas turbine operation arise due to unsteady fluctuations in heat release coupled with acoustic oscillations, often caused by varying equivalence ratio perturbations within the flame field. These instabilities can cause irreparable damage to critical turbine components, requiring an understanding of the spatial/temporal variations in equivalence ratio values to predict flame response. The technique of computed tomography for flame chemiluminescence emissions allows for 3D spatially resolved flame measurements to be acquired using a series of integral projections (camera images). High resolution tomography reconstructions require a selection of projection angles around the flame, while captured chemiluminescence of radical species intensity fields can be used to determine local fuel-air ratios. In this work, a tomographic reconstruction algorithm program was developed and utilized to reconstruct the intensity fields of CH* and OH*, and these reconstructions were used to quantify local equivalence ratios in an acoustically forced flame. A known phantom function was used to verify and validate the tomography algorithm, while convergence was determined by subsequent monitoring of selected iterative criteria. A documented method of camera calibration was also reproduced and presented here, with suggestions provided for future calibration improvement. Results are shown to highlight fluctuating equivalence ratio trends while illustrating the effectiveness of the developed tomography technique, providing a firm foundation for future study regarding heat release phenomena. / Master of Science / Acoustic sound amplification occurs in the combustion chamber of a gas turbine due to the machine ramping up in operation. These loud sound oscillations continue to grow larger and can damage the turbine machinery and even threaten the safety of the operator. 
Because of this, many researchers have attempted to understand and predict these instabilities in hopes of eliminating them altogether. One method of studying these sound amplifications is to examine behaviors in the turbine combustion flame, which can shed light on how these large disturbances form and accumulate. Both heat release rate (the steady release of energy in the form of heat from a combustion flame) and equivalence ratio (the mass ratio of fuel to air burned in a combustion process) have proven viable for illustrating oscillatory flame behavior, and both can be visualized using chemiluminescence imaging paired with computed tomography. Chemiluminescence imaging is used to obtain intensity fields of species from high-resolution camera imaging, while computed tomography techniques reconstruct these images into a three-dimensional volume that represents and visualizes the combustion flame. These techniques have been shown to function effectively in previous literature and were further implemented in this work. A known calibration technique from previous work was carried out, along with reconstruction of a defined phantom function, to show the functionality of the developed tomography algorithm. Results illustrate the effectiveness of the tomographic reconstruction technique and highlight the amplified acoustic behavior of a combustion flame in a high-noise environment.
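The iterative tomographic reconstruction described above can be sketched with a simple algebraic reconstruction technique (ART); the following is a generic Kaczmarz-style illustration with a toy system matrix and phantom, not the thesis' actual algorithm or data:

```python
import numpy as np

def art_reconstruct(A, p, n_iter=200, relax=1.0):
    """Kaczmarz-style ART: cycle over rays, projecting the current
    estimate onto each ray's measurement hyperplane, then clip to
    enforce non-negative emission intensity."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (p[i] - A[i] @ x) / row_norms[i] * A[i]
        np.clip(x, 0.0, None, out=x)   # intensity cannot be negative
    return x

# Toy check: a flattened 2x2 "flame" viewed along rows and columns.
phantom = np.array([1.0, 0.0, 0.0, 2.0])
A = np.array([[1.0, 1.0, 0.0, 0.0],    # ray sum over row 1
              [0.0, 0.0, 1.0, 1.0],    # row 2
              [1.0, 0.0, 1.0, 0.0],    # column 1
              [0.0, 1.0, 0.0, 1.0]])   # column 2
p = A @ phantom                        # simulated projections
x = art_reconstruct(A, p)
```

With only two view angles the system is underdetermined, so ART converges to a field consistent with the projections rather than to the phantom itself; real reconstructions use many more projection angles, which is why camera placement around the flame matters.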
94

Evaluation Update of Red Light Camera Program in Fairfax County, Virginia

Yaungyai, Nattaporn 27 July 2004 (has links)
The Red Light Camera program in Fairfax County has been in operation for more than 2 years. As of 2003, there were 13 cameras in operation. Each camera takes 2 pictures of a vehicle: one as it illegally enters the intersection and one after it has entered the intersection. These photographs give evidence of the red light violation. A citation carrying a $50 penalty is mailed to the registered owner of the vehicle. This study was conducted to evaluate the program. The violation and accident data at all of the study intersections were provided by the Fairfax County Department of Transportation and the Fairfax County Police Department. The traffic data in Fairfax County were provided by the Virginia Department of Transportation. The results of the violation analysis indicate that the Red Light Camera program reduced the violation rate by up to 58 percent in the 22nd to 27th month of operation. The study also shows that increasing the amber-time interval produced an even larger reduction in the violation rate, up to 70 percent. The reduction in violations was found to be statistically significant. Considering the effect of the RLC operation alone, the violation rate is reduced to 1-2 violations per 10,000 vehicles; with the combined effect of the RLC and the amber-time increase, it is reduced to 0-1 violation per 10,000 vehicles. The accident rate was reduced by 27 percent after 2 years of RLC operation. The Red Light Camera was found to reduce Property Damage Only accidents; however, the reduction in accidents was not statistically significant, so no benefit is accrued from the reduction in accidents. This study concludes that the Red Light Camera program increases safety at camera intersections in Fairfax County by reducing violation rates after 2 years of operation. / Master of Science
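The rate metrics quoted above (violations per 10,000 entering vehicles, percent reduction) are straightforward to compute; the sketch below uses hypothetical monthly counts for one intersection, not the study's data:

```python
def violation_rate_per_10k(violations, entering_vehicles):
    """Red-light violations per 10,000 entering vehicles."""
    return 10_000 * violations / entering_vehicles

def percent_reduction(rate_before, rate_after):
    """Percent drop in the violation rate relative to the before period."""
    return 100.0 * (rate_before - rate_after) / rate_before

# Hypothetical counts (not the study's data): 45 violations out of
# 150,000 entering vehicles before enforcement, 18 afterwards.
before = violation_rate_per_10k(45, 150_000)   # 3.0 per 10,000
after = violation_rate_per_10k(18, 150_000)    # 1.2 per 10,000
drop = percent_reduction(before, after)        # 60.0 percent
```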
95

Generation of Orthogonal Projections from Sparse Camera Arrays

Silva, Ryan Edward 25 May 2007 (has links)
In the emerging arena of face-to-face collaboration using large, wall-size screens, a good videoconferencing system is needed to connect two locations that each have a large screen. But as screens get bigger, a single camera becomes inadequate to drive a videoconferencing system for the entire screen. Even if a wide-angle camera is placed at the center of the screen, people standing at the sides can be hidden. We can fix this problem by placing several cameras evenly distributed in a grid pattern (what we call a sparse camera array) and merging their photos into one image. With a single camera, people standing near the sides of the screen view an image whose viewpoint is at the middle of the screen, and any perspective projection used in such a system looks distorted when seen from a different viewpoint. If an orthogonal projection is used instead, there is no perspective distortion, and the image looks correct no matter where the viewer stands. As a first step in creating this videoconferencing system, we use stereo matching to find the real-world coordinates of objects in the scene, from which an orthogonal projection can be generated. / Master of Science
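The stereo-to-orthogonal-projection pipeline described above can be sketched with standard rectified-stereo triangulation followed by binning on world coordinates; the camera parameters and toy scene below are illustrative assumptions, not the thesis' implementation:

```python
import numpy as np

def stereo_to_world(disparity, focal_px, baseline_m, cx, cy):
    """Triangulate world coordinates from a dense disparity map using
    the standard rectified-stereo relations: Z = f*B/d, X = (u-cx)*Z/f,
    Y = (v-cy)*Z/f."""
    v, u = np.indices(disparity.shape)
    with np.errstate(divide="ignore"):
        Z = focal_px * baseline_m / disparity
    X = (u - cx) * Z / focal_px
    Y = (v - cy) * Z / focal_px
    return X, Y, Z

def orthographic_image(X, Y, values, x_edges, y_edges):
    """Bin points onto a regular world-coordinate (X, Y) grid and drop Z.
    Because pixel position no longer depends on depth, the result is an
    orthogonal projection free of perspective distortion."""
    img = np.full((len(y_edges) - 1, len(x_edges) - 1), np.nan)
    ix = np.digitize(X.ravel(), x_edges) - 1
    iy = np.digitize(Y.ravel(), y_edges) - 1
    ok = (ix >= 0) & (ix < img.shape[1]) & (iy >= 0) & (iy < img.shape[0])
    img[iy[ok], ix[ok]] = values.ravel()[ok]
    return img

# Toy scene: a fronto-parallel plane (uniform disparity of 2 px).
disp = np.full((4, 4), 2.0)
X, Y, Z = stereo_to_world(disp, focal_px=100.0, baseline_m=0.1, cx=1.5, cy=1.5)
ortho = orthographic_image(X, Y, Z, np.linspace(-0.1, 0.1, 5),
                           np.linspace(-0.1, 0.1, 5))
```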
96

Ocelot Density and Home Range in Belize, Central America: Camera-Trapping and Radio Telemetry

Dillon, Adam 26 January 2006 (has links)
Historically, ocelots (Leopardus pardalis) were hunted in large numbers for their fur, causing declines in population abundance across their range. In recent decades, protection measures (e.g. CITES) and decreased public demand for ocelot fur have reduced hunting pressure. Due to their elusive nature, little is known about ocelot population size, structure, or general ecology, and this lack of information hampers our ability to protect this endangered species. Remote cameras were deployed in 7 grids across the landscape to estimate ocelot density in 2 habitat types: the broadleaf rainforest and pine forest of western Belize. Camera trapping combined with mark-recapture statistics yielded densities of 18.91-20.75 ocelots per 100 km2 in the rainforest and 2.31-3.81 ocelots per 100 km2 in the pine forest habitat. This study examined the issues of camera spacing and animals with zero distance moved, and their effect on density estimation. Increased camera spacing resulted in larger buffer sizes (increasing the effective trap area) and decreased density estimates, while inclusion of zero-distance animals decreased buffer sizes and increased density estimates. Regardless of these effects, ocelot density was higher in the broadleaf rainforest than in the pine forest, and the density estimates in Belizean forests were lower than those in other portions of the species' range. The camera-trapping technique showed ocelots to be mostly active at night, with peaks of activity after sunset and before sunrise, and to travel low-use roads in the wet season and high-use roads in the dry season. Radio telemetry was used in this study to estimate the home range size and density of ocelots in the broadleaf rainforest of western Belize. Six ocelots (3 male, 3 female) were collared and tracked from September 2003 to August 2004.
Male ocelots had an average home range size of 33.01 km2 (95% fixed kernel) and 29.00 km2 (100% MCP), and female ocelots had an average home range size of 21.05 km2 (95% fixed kernel) and 29.58 km2 (100% MCP). Most ocelots had larger home ranges in the dry season than in the wet season. Ocelots showed a large amount of same-sex home range overlap, with male-male overlap averaging 25% (100% MCP) and female-female overlap averaging 16% (100% MCP). Ocelot density determined using radio telemetry was 7.79-10.91 ocelots per 100 km2. The radio-telemetry densities were lower, and the home ranges larger, in the Belizean broadleaf rainforest than in other portions of the species' range. The camera-trapping and radio-telemetry techniques were compared against one another and combined in order to test which technique may be more successful for studying certain aspects of feline behavior. Activity budgets and density estimates determined from camera trapping were superior to those from radio telemetry, whereas camera-trapping home ranges showed higher variation and lower resolution than radio telemetry. However, home range estimates determined from camera trapping captured long-distance movements and a larger percentage of territory overlap, and displayed potential for estimating an animal's core use area. When radio-telemetry data were used to create a buffer around camera traps based on the average radius of an ocelot's home range, the resulting density estimates were smaller than those determined using the current camera-trapping methodology. This study provides much-needed baseline information on ocelot abundance, home range size, activity patterns, and trail use. While sample sizes were small, this study captured the largest number of ocelots in Central America to date.
Although camera trapping is already a useful tool in felid research, this study highlights the importance of further standardization of the camera trapping methodology, increasing its potential for monitoring and conservation across habitats and study sites. / Master of Science
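The buffer-based density estimation discussed above (a larger buffer inflates the effective trap area and pushes the density estimate down) can be sketched as follows; the grid dimensions, buffer radii, and abundance estimate are hypothetical, not the study's data:

```python
import math

def effective_trap_area_km2(grid_w_km, grid_h_km, buffer_km):
    """Rectangular camera grid buffered by half the mean maximum
    distance moved (MMDM/2): core rectangle, four edge strips, and
    four quarter-circle corners."""
    core = grid_w_km * grid_h_km
    edges = 2.0 * buffer_km * (grid_w_km + grid_h_km)
    corners = math.pi * buffer_km ** 2
    return core + edges + corners

def density_per_100km2(n_estimated, area_km2):
    """Mark-recapture abundance estimate scaled to animals per 100 km2."""
    return 100.0 * n_estimated / area_km2

# Hypothetical 5 km x 4 km grid with 10 estimated ocelots.
small_buffer = effective_trap_area_km2(5.0, 4.0, buffer_km=1.0)   # ~41.1 km2
large_buffer = effective_trap_area_km2(5.0, 4.0, buffer_km=1.5)   # ~54.1 km2
d_small = density_per_100km2(10, small_buffer)
d_large = density_per_100km2(10, large_buffer)   # larger buffer -> lower density
```

This mirrors the effect noted in the abstract: wider camera spacing yields a larger buffer and effective trap area, which lowers the density estimate for the same number of captured animals.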
97

High-resolution hyperspectral imaging of the retina with a modified fundus camera

Nourrit, V., Denniss, Jonathan, Mugit, M.M., Schiessl, I., Fenerty, C., Stanga, P.E., Henson, D.B. 26 June 2018 (has links)
The purpose of this research was to examine the practical feasibility of developing a hyperspectral camera from a Zeiss fundus camera and to illustrate its use in imaging diabetic retinopathy and glaucoma patients. The original light source of the camera was replaced with an external lamp filtered by a fast tunable liquid-crystal filter, and the filtered light was brought into the camera through an optical fiber. The original film camera was replaced by a digital camera. Images were obtained in normal subjects and in patients (primary open-angle glaucoma, diabetic retinopathy) recruited at the Manchester Royal Eye Hospital. A series of eight images was captured across 495- to 720-nm wavelengths, with a recording time of less than 1.6 s. The light level at the cornea was below the ANSI limits, and patients judged the measurement to be very comfortable. Images were of high quality and were used to generate a pixel-to-pixel oxygenation map of the optic nerve head. Frame alignment is necessary for frame-to-frame comparison but can be achieved through simple methods. We have developed a hyperspectral camera with high spatial and spectral resolution across the whole visible spectrum that can be adapted from a standard fundus camera. The hyperspectral technique allows wavelength-specific visualization of retinal lesions that may be subvisible with a white-light camera, and may facilitate localization of retinal and disc pathology and consequently the diagnosis and management of retinal disease.
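One of the "simple methods" for frame-to-frame alignment mentioned above could be phase correlation, which recovers the translation between two frames from the peak of the normalized cross-power spectrum; this is a generic sketch, not necessarily the method the authors used:

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer (dy, dx) translation that maps `ref` onto
    `moving` from the peak of the normalized cross-power spectrum."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moving)
    cross = np.conj(F_ref) * F_mov
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Synthetic check: a frame circularly shifted by (3, -5) pixels.
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
shifted = np.roll(frame, (3, -5), axis=(0, 1))
shift = phase_correlation_shift(frame, shifted)
```

Because only the spectral phase is kept, the estimate is robust to the wavelength-to-wavelength intensity differences that arise when aligning frames captured at different filter settings.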
98

Design and thermal analysis for a novel EMCCD camera payload in a 1U CubeSat form factor

Angle, Nicholas Blake 24 June 2024 (has links)
Nüvü Camēras, a Canadian company that designs a range of CCD and EMCCD cameras and controllers, recently began development of a miniaturized EMCCD controller for a CubeSat form factor. The detector for this payload requires near-cryogenic temperatures, approximately 188 K, for performant operation. A temperature requirement of that magnitude is challenging in a CubeSat form factor given the low thermal mass, volume, surface area, and power availability for the heat storage, dissipation, and control systems that would typically be available on larger spacecraft. The goal of this project is to design and perform thermal analysis for the Nüvü Camēras CubeSat EMCCD controller that allows for cold-biased active temperature control of both the controller electronics and the detector. The EMCCD controller has an operational temperature range of −35°C to +60°C, while the detector has a performance range of −110°C to −85°C with a desired control resolution of ±0.25°C. To meet these requirements, a system was designed in the 3D modeling software Autodesk Inventor and imported into Thermal Desktop for thermal analysis and iteration. Models were updated based on thermal analysis results, adjusted by hand, and tested again until a passive cooling and active heating system that met the requirements was achieved. The final control system was shown to be capable of cooling from 20°C (293.15 K) to −85°C (188.15 K) and beyond, given a Sun-synchronous orbit at 600 km with attitude control and operational requirements. It was also shown to be capable of heating key components with resistive heaters, beyond the thermal inertia of the system and environment, indicating viable control on orbit. A PID control method can be implemented in the future, and its use is being investigated by Nüvü Camēras for achieving the desired resolution of ±0.25°C.
/ Master of Science / Nüvü Camēras, a Canadian company that designs a range of Charge-Coupled Device (CCD) and Electron Multiplication Charge-Coupled Device (EMCCD) cameras and controllers, recently began development of a miniaturized EMCCD controller for a CubeSat form factor. CubeSats are a standard for nanosatellites, a classification of spacecraft with dimensions on the order of centimeters instead of meters. The Nüvü Camēras CubeSat EMCCD controller will operate a detector that enables the collection of extremely low-light images, or even the counting of individual photons, which other CCD detectors cannot achieve in such a form factor. This level of quality comes with operating requirements, such as a very low temperature for the detector, that can be challenging to achieve in a CubeSat. The challenges are the low thermal mass for heat storage, the low volume and surface area for fitting components and dissipating heat via radiation, and the low power availability for running active systems such as cryocoolers. The goal of this project was to develop a temperature management system that tackles these challenges and allows operation of the Nüvü Camēras CubeSat EMCCD controller on a variety of Low Earth Orbit (LEO) missions. To achieve this, a passive cooling system was designed to cool below the necessary temperature so that the system can then be warmed back up with resistive heaters as a method of active control. This system was designed in a Computer Aided Design (CAD) program called Autodesk Inventor and analyzed in a thermal analysis program called Thermal Desktop. After a final analysis of the full setup, it was determined that the system would allow for thermal management at LEO for the Nüvü Camēras CubeSat EMCCD controller and detector.
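The cold-biased design described above (passively over-cool the detector stage, then make up the deficit with resistive heaters) can be sketched with a basic Stefan-Boltzmann radiator balance; the radiator area, emissivity, effective sink temperature, and internal dissipation below are illustrative assumptions, not values from the thesis:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_heat_rejection_w(area_m2, emissivity, t_rad_k, t_sink_k):
    """Net radiative heat rejected by a radiator face, with environmental
    loads (Earth IR, albedo, solar) folded into an effective sink
    temperature."""
    return emissivity * SIGMA * area_m2 * (t_rad_k ** 4 - t_sink_k ** 4)

def heater_power_w(q_rejected_w, q_internal_w):
    """In a cold-biased design the radiator over-cools on purpose;
    resistive heaters make up the difference to hold the setpoint."""
    return max(0.0, q_rejected_w - q_internal_w)

# Illustrative 1U face radiator (10 cm x 10 cm) holding a 188 K stage.
q_out = radiator_heat_rejection_w(area_m2=0.01, emissivity=0.90,
                                  t_rad_k=188.0, t_sink_k=100.0)   # ~0.59 W
q_heat = heater_power_w(q_out, q_internal_w=0.20)                  # ~0.39 W
```

The heater term is what a future PID loop would modulate to hold the detector within the desired ±0.25°C band.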
99

Camera positioning for 3D panoramic image rendering

Audu, Abdulkadir Iyyaka January 2015 (has links)
Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component affecting the integrity of visual content acquisition for multi-view video. Currently, linear, convergent, and divergent arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras required by any of the known camera structures, thereby also reducing some of the other implementation issues. This thesis explores image-based rendering, with and without geometry, in implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out both from the perspective of a depth map (geometry) and from the use of multiple image samples (no geometry). Prior to the virtual camera realisation, the generation of depth maps was investigated using region match measures widely known for solving the image point correspondence problem. The constructed depth maps were compared with those generated using the dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras led to the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and performing superposition on them, and the computation of a virtual scene from a stereo pair of panoramic images. The quality of the rendered images was assessed through objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projection of pixel points from multiple image samples with a single centre of projection, using the sparse bundle adjustment algorithm.
The statistical summary obtained after applying this algorithm provides a gauge of the efficiency of the optimisation step. The optimised data were then visualised in the Meshlab software environment, providing the reconstructed scene. Secondly, in any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Occlusion therefore becomes an extremely challenging problem, and a robust camera set-up is required to resolve the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. This thesis therefore also explores a trapezoidal camera structure for image acquisition. The approach is to assess the feasibility and potential of several physical cameras of the same model sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya; the depth maps rendered in Matlab are of better quality.
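A region match measure for the point-correspondence problem, of the kind used for the depth maps above, can be illustrated with winner-takes-all SSD block matching; this is a generic sketch on a synthetic rectified pair, not the thesis' exact implementation:

```python
import numpy as np

def ssd_disparity(left, right, max_disp, win=2):
    """Winner-takes-all block matching: for each left-image pixel, test
    candidate disparities d and keep the one minimising the sum of
    squared differences (SSD) over a (2*win+1)^2 window, a classic
    region match measure for the point-correspondence problem."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    L = np.pad(left.astype(np.float64), win, mode="edge")
    R = np.pad(right.astype(np.float64), win, mode="edge")
    size = 2 * win + 1
    for y in range(h):
        for x in range(w):
            patch = L[y:y + size, x:x + size]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + size, x - d:x - d + size]
                cost = ((patch - cand) ** 2).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic rectified pair: the left view sees everything shifted 3 px.
right = np.tile(np.arange(12.0), (10, 1))
left = np.tile(np.arange(12.0) - 3.0, (10, 1))
disp = ssd_disparity(left, right, max_disp=5)   # interior pixels -> 3
```

Dynamic programming, the comparison approach mentioned above, instead optimises a whole scanline at once, which is why it handles ambiguous low-texture regions differently from this purely local measure.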
100

Camera Planning and Fusion in a Heterogeneous Camera Network

Zhao, Jian 01 January 2011 (has links)
Wide-area camera networks are becoming more and more common. They have a wide range of commercial and military applications, from video surveillance to smart homes and from traffic monitoring to anti-terrorism. The design of such a camera network is a challenging problem due to the complexity of the environment, self- and mutual occlusion of moving objects, diverse sensor properties, and a myriad of performance metrics for different applications. In this dissertation, we consider two such challenges: camera planning and camera fusion. Camera planning determines the optimal number and placement of cameras for a target cost function. Camera fusion describes the task of combining images collected by heterogeneous cameras in the network to extract information pertinent to a target application. I tackle the camera planning problem by developing a new unified framework based on binary integer programming (BIP) to relate the network design parameters and the performance goals of a variety of camera network tasks. Most BIP formulations are NP-hard, and various approximate algorithms have been proposed in the literature. In this dissertation, I develop a comprehensive framework for comparing the entire spectrum of approximation algorithms, from greedy and Markov Chain Monte Carlo (MCMC) methods to various relaxation techniques. The key contribution is to provide not only a generic formulation of the camera planning problem but also novel approaches to adapt the formulation to powerful approximation schemes, including Simulated Annealing (SA) and Semi-Definite Programming (SDP). The accuracy, efficiency, and scalability of each technique are analyzed and compared in depth, and extensive experimental results are provided to illustrate the strengths and weaknesses of each method. The second problem, heterogeneous camera fusion, is very complex.
Information can be fused at different levels, from pixels or voxels to semantic objects, with large variations in accuracy, communication cost, and computation cost. My focus is on the geometric transformation of shapes between objects observed in different camera planes. This so-called geometric fusion approach usually provides the most reliable fusion at the expense of high computation and communication costs. To tackle the complexity, a hierarchy of camera models with different levels of complexity is proposed to balance the effectiveness and efficiency of camera network operation, and different calibration and registration methods are proposed for each camera model. Finally, I provide two specific examples to demonstrate the effectiveness of the model: 1) a fusion system that improves the segmentation of the human body in a camera network consisting of thermal and regular visible-light cameras, and 2) a view-dependent rendering system that combines information from depth and regular cameras to collect scene information and generate new views in real time.
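Among the approximation schemes compared above, the greedy method is the simplest to sketch: repeatedly pick the candidate camera pose with the largest marginal coverage gain. The poses and coverage sets below are a toy illustration, not from the dissertation:

```python
def greedy_camera_placement(coverage, budget):
    """Greedy approximation to the camera-planning BIP: repeatedly pick
    the candidate pose covering the most still-uncovered cells. For
    monotone submodular coverage objectives this achieves at least
    (1 - 1/e) of the optimal coverage under a fixed camera budget."""
    chosen, covered = [], set()
    candidates = dict(coverage)
    for _ in range(budget):
        best, best_gain = None, 0
        for cam, cells in candidates.items():
            gain = len(cells - covered)   # marginal coverage gain
            if gain > best_gain:
                best, best_gain = cam, gain
        if best is None:                  # nothing left adds coverage
            break
        chosen.append(best)
        covered |= candidates.pop(best)
    return chosen, covered

# Toy example: four hypothetical poses observing six floor-plan cells.
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
cams, covered = greedy_camera_placement(coverage, budget=2)
```

With a budget of two cameras, the greedy rule picks pose "A" (three new cells) and then "C" (three more), covering all six cells.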
