  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

CMOS SPAD-based image sensor for single photon counting and time of flight imaging

Dutton, Neale Arthur William January 2016 (has links)
The facility to capture the arrival of a single photon is the fundamental limit to the detection of quantised electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and temporal precision is the pinnacle of photo-sensing. The creation of high spatial resolution, single photon sensitive, and time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology offers numerous benefits in a wide field of applications. These CMOS devices will be suitable to replace high sensitivity charge-coupled device (CCD) technology (electron-multiplied or electron bombarded) with significantly lower cost and comparable performance in low light or high speed scenarios. For example, with temporal resolution on the order of nanoseconds and picoseconds, detailed three-dimensional (3D) pictures can be formed by measuring the time of flight (TOF) of a light pulse. High frame rate imaging of single photons can yield new capabilities in super-resolution microscopy. Also, the imaging of quantum effects such as the entanglement of photons may be realised. The goal of this research project is the development of such an image sensor by exploiting single photon avalanche diodes (SPAD) in advanced imaging-specific 130nm front side illuminated (FSI) CMOS technology. SPADs have three key combined advantages over other imaging technologies: single photon sensitivity, picosecond temporal resolution and the facility to be integrated in standard CMOS technology. Analogue techniques are employed to create an efficient and compact imager that is scalable to mega-pixel arrays. A SPAD-based image sensor is described with 320 by 240 pixels at a pitch of 8μm and an optical efficiency, or fill-factor, of 26.8%. Each pixel comprises a SPAD with a hybrid analogue counting and memory circuit that makes novel use of a low-power charge transfer amplifier. Global shutter single photon counting images are captured.
These exhibit photon shot noise limited statistics with unprecedentedly low input-referred noise, equivalent to 0.06 electrons. The CMOS image sensor (CIS) trends of shrinking pixels, increasing array sizes, decreasing read noise, fast readout and oversampled image formation are projected towards the formation of binary single photon imagers or quanta image sensors (QIS). In a binary digital image capture mode, the image sensor offers a look-ahead to the properties and performance of future QISs with 20,000 binary frames per second readout with a bit error rate of 1.7 × 10⁻³. The bit density, or cumulative binary intensity, against exposure performance of this image sensor is in the shape of the famous Hurter and Driffield densitometry curves of photographic film. Oversampled time-gated binary image capture is demonstrated, capturing 3D TOF images with 3.8cm precision in a 60cm range.
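The binary capture mode described above can be illustrated with a short simulation: under Poisson-distributed photon arrivals, a one-bit pixel reads 1 with probability 1 - exp(-H) per frame, where H is the mean exposure in photons, and the cumulative bit density over many frames traces an H&D-style response curve. This is a generic sketch of quanta image sensor statistics under assumed exposures and frame counts, not the thesis's actual sensor model.

```python
import math
import random

def bit_density(mean_photons_per_frame, num_frames, rng=random.Random(0)):
    """Simulate one binary (quanta) pixel: each frame reads 1 if at least
    one photon arrives (Poisson arrivals), else 0. Returns the fraction of
    frames reading 1, i.e. the cumulative bit density."""
    ones = 0
    p_one = 1.0 - math.exp(-mean_photons_per_frame)  # P(at least one photon)
    for _ in range(num_frames):
        if rng.random() < p_one:
            ones += 1
    return ones / num_frames

# The expected density D(H) = 1 - exp(-H) is linear at low exposure and
# compresses toward 1 at high exposure, like an H&D film curve plotted
# against log exposure.
for H in (0.1, 1.0, 3.0):
    print(H, round(bit_density(H, 20000), 3), round(1.0 - math.exp(-H), 3))
```

The saturation toward density 1 is exactly why QIS designs oversample with many short binary frames: the per-frame exposure is kept low so the response stays in the near-linear region.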
2

Calibration-free image sensor modelling: deterministic and stochastic

Lim, Shen Hin, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2009 (has links)
This dissertation presents a calibration-free image sensor modelling process applicable to localisation, robust to changes in the environment and in sensor properties. The modelling process consists of two distinct parts, a deterministic and a stochastic technique, and is achieved using mechanistic deconvolution, in which the sensor's mechanical and electrical properties are utilised. In the deterministic technique, the sensor's effective focal length is first estimated from known lens properties and is used to approximate the lens system by a thick lens and its properties. The aperture stop position offset, one of the thick lens properties, then yields a new factor, namely the calibration-free distortion effects factor, to characterise distortion effects inherent in the sensor. Using this factor and the given pan and tilt angles of an arbitrary plane of view, the corrected image data is generated. The corrected data complies with the image sensor constraints modified by the pan and tilt angles. In the stochastic technique, the stochastic focal length and distortion effects factor are first approximated using tolerances of the mechanical and electrical properties. These are then utilised to develop the observation likelihood necessary in recursive Bayesian estimation. The proposed modelling process reduces dependency on image data and, as a result, does not require an experimental setup or calibration. An experimental setup was constructed to conduct extensive analysis of the accuracy of the proposed modelling process and its robustness to changes in sensor properties and in pan and tilt angles without recalibration. The process was compared with a conventional modelling process using three sensors with different specifications and achieved similar accuracy with one-seventh the number of iterations. The developed model has also shown itself to be robust and, in comparison to the conventional modelling process, reduced the errors by a factor of five. Using an area coverage method and one-step lookahead as control strategies, the stochastic sensor model was applied to a recursive Bayesian estimation application and compared with a conventional approach. The proposed model provided better target state estimation, and also achieved higher efficiency and reliability than the conventional approach.
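The recursive Bayesian estimation step mentioned above can be sketched as a grid-based filter in which the observation likelihood is a Gaussian whose spread would, in the dissertation's stochastic technique, be derived from propagated sensor tolerances. Everything below (the grid, sigma, and the measurements) is an illustrative assumption, not the dissertation's actual model.

```python
import math

def bayes_update(prior, grid, measurement, sigma):
    """One recursive Bayesian update over a discretised 1-D state grid.
    The observation likelihood is Gaussian; sigma stands in for the
    tolerance-derived spread computed stochastically in the thesis."""
    weighted = [p * math.exp(-0.5 * ((measurement - x) / sigma) ** 2)
                for p, x in zip(prior, grid)]
    total = sum(weighted)
    return [w / total for w in weighted]  # renormalised posterior

grid = [i * 0.1 for i in range(101)]        # candidate states 0.0 .. 10.0
belief = [1.0 / len(grid)] * len(grid)      # uninformative prior
for z in (4.8, 5.1, 5.0):                   # noisy observations of x = 5
    belief = bayes_update(belief, grid, z, sigma=0.5)

best = grid[max(range(len(grid)), key=lambda i: belief[i])]
print(round(best, 1))  # posterior mode settles near 5.0
```

Each update multiplies the prior by the likelihood and renormalises, so repeated noisy observations progressively concentrate the belief, which is the mechanism the target-estimation experiments above rely on.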
3

Development of a Wireless Video Transfer System for Remote Control of a Lightweight UAV / Utveckling av ett trådlöst videoöverföringssystem för fjärrstyrning av en minimal obemannad luftfarkost

Tosteberg, Joakim, Axelsson, Thomas January 2012 (has links)
A team of developers from Epsilon AB has developed a lightweight remote controlled quadcopter named Crazyflie. The team wants to allow a pilot to navigate the quadcopter using video from an on-board camera as the only guidance. This master's thesis evaluates the feasibility of mounting a camera module on the quadcopter and streaming images from the camera to a computer, using the existing quadcopter radio link. Using theoretical calculations and measurements, a set of requirements that must be fulfilled for such a system is identified. Using the requirements as a basis, various camera products are investigated and the findings presented. A design to fulfil the requirements, using the found products, is proposed. The proposed design is then implemented and evaluated. It is found that the Crazyflie system has the resources necessary to transfer an image stream with the quality required for navigation. Furthermore, the implementation is found to provide the required functionality. From the evaluation, several key factors of the design that can be changed to further improve the performance of an implementation are identified. Ideas for future work and improvements are proposed, and possible alternative approaches are presented.
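The feasibility question above is, at its core, a link-budget calculation: how many frames per second a fixed-rate radio link can carry for a given frame size and compression ratio. The sketch below uses assumed figures (link rate, resolution, compression), not the Crazyflie's measured ones.

```python
def achievable_fps(link_kbps, width, height, bits_per_pixel, compression_ratio):
    """Frames per second a radio link can sustain for a compressed image
    stream. All numbers below are illustrative assumptions, not the
    Crazyflie's actual figures."""
    bits_per_frame = width * height * bits_per_pixel / compression_ratio
    return link_kbps * 1000.0 / bits_per_frame

# Example: a 250 kbit/s link, 160x120 8-bit greyscale frames, 10:1
# compression -> roughly 16 frames per second of link budget.
print(round(achievable_fps(250, 160, 120, 8, 10), 1))
```

Calculations of this shape (with the real link and camera parameters) are what turn "is video over this radio feasible?" into a concrete set of requirements on resolution, bit depth, and compression.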
4

High Speed Camera Chip

January 2017 (has links)
The market for high speed camera chips, or image sensors, has experienced rapid growth over the past decades owing to its broad application space in security, biomedical equipment, and mobile devices. CMOS (complementary metal-oxide-semiconductor) technology has significantly improved the performance of the high speed camera chip by enabling the monolithic integration of pixel circuits and on-chip analog-to-digital conversion. However, for low light intensity applications, many CMOS image sensors have a sub-optimum dynamic range, particularly in high speed operation. Thus, sensors with a high frame rate and high fill factor are attracting more attention. Another drawback of the high speed camera chip is its high power demand due to its high operating frequency. Therefore, a CMOS image sensor with high frame rate, high fill factor, high voltage range and low power is difficult to realize. This thesis presents the design of the pixel circuit, the pixel array and the column readout chain for a high speed camera chip. An integrated PN (positive-negative) junction photodiode and an accompanying ten transistor pixel circuit are implemented using a 0.18 µm CMOS technology. Multiple methods are applied to minimize the subthreshold currents, which is critical for low light detection. A layout sharing technique is used to increase the fill factor to 64.63%. Four programmable gain amplifiers (PGAs) and 10-bit pipeline analog-to-digital converters (ADCs) are added to complete on-chip analog-to-digital conversion. The simulation results of the extracted circuit indicate an ENOB (effective number of bits) greater than 8 bits with a FoM (figure of merit) of 0.789. The minimum detectable voltage level is determined to be 470 μV based on noise analysis. The total power consumption of the PGA and ADC is 8.2 mW per conversion. The whole camera chip reaches 10508 frames per second (fps) at full resolution with a 3.1 mm × 3.4 mm area.
Masters Thesis, Electrical Engineering, 2017
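The ENOB figure quoted above is conventionally obtained from a measured signal-to-noise-and-distortion ratio via ENOB = (SINAD - 1.76) / 6.02. The 50 dB SINAD below is an assumed example value, chosen only to show that it lands near the reported "greater than 8 bits".

```python
def enob(sinad_db):
    """Effective number of bits from a measured SINAD figure in dB,
    via the standard relation ENOB = (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02

# A 10-bit pipeline ADC with an assumed 50 dB SINAD gives roughly 8
# effective bits, consistent with the ENOB > 8 bits reported above.
print(round(enob(50.0), 2))
```

The relation comes from the ideal-quantizer SNR formula SNR = 6.02·N + 1.76 dB, solved for N with the measured SINAD in place of the ideal SNR.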
5

Image denoising for real image sensors

Zhang, Jiachao 27 August 2015 (has links)
No description available.
6

Foveated Sampling Architectures for CMOS Image Sensors

Saffih, Fayçal January 2005 (has links)
Electronic imaging technologies face the challenge of power consumption when transmitting large amounts of image data from the acquisition imager to display or processing devices. This is especially a concern for portable applications, and becomes more prominent in increasingly high-resolution, high-frame-rate imagers. Therefore, new sampling techniques are needed to minimize the transmitted data while maximizing the conveyed image information.

From this point of view, two approaches have been proposed and implemented in this thesis:

1. A system-level approach, in which the classical 1D row sampling CMOS imager is modified to a 2D ring sampling pyramidal architecture, using the same standard three transistor (3T) active pixel sensor (APS).
2. A device-level approach, in which the classical orthogonal architecture is preserved while the APS device structure is altered, to design an expandable multiresolution image sensor.

A new scanning scheme has been suggested for the pyramidal image sensor, resulting in an intrascene foveated dynamic range (FDR) similar in profile to that of the human eye. In this scheme, the inner rings of the imager have a higher dynamic range than the outer rings. The pyramidal imager transmits the sampled image through 8 parallel output channels, allowing higher frame rates. The human eye is known to be less sensitive to oblique contrast. Applying this fact to the typically oblique distribution of fixed pattern noise (FPN), we demonstrate lower perception of this noise than for the orthogonal FPN distribution of classical CMOS imagers.

The multiresolution image sensor principle is based on averaging regions of low interest from frame-sampled image kernels: one pixel is read from each kernel, while pixels in the region of interest are kept at their full resolution. This significantly reduces the transferred data and increases the frame rate. Such an architecture allows for programmability and expandability of multiresolution imaging applications.
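The kernel-averaging readout described above can be sketched in a few lines: kernels fully outside the region of interest collapse to one averaged value each, while the ROI keeps full resolution. The frame layout, ROI convention, and kernel size here are illustrative assumptions, not the chip's actual readout circuit.

```python
def multires_readout(frame, roi, k):
    """Sketch of multiresolution readout: k x k kernels fully outside the
    region of interest are read as a single averaged value; kernels that
    touch the ROI keep full resolution. roi = (r0, c0, r1, c1), half-open.
    Returns the reconstructed frame as a 2-D list."""
    rows, cols = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    r0, c0, r1, c1 = roi
    for br in range(0, rows, k):
        for bc in range(0, cols, k):
            # Kernels overlapping the ROI are left at full resolution.
            if br < r1 and br + k > r0 and bc < c1 and bc + k > c0:
                continue
            block = [frame[r][c] for r in range(br, min(br + k, rows))
                                 for c in range(bc, min(bc + k, cols))]
            avg = sum(block) / len(block)
            for r in range(br, min(br + k, rows)):
                for c in range(bc, min(bc + k, cols)):
                    out[r][c] = avg
    return out

# 4x4 frame, ROI over the top-left 2x2, kernel size 2: the other three
# 2x2 blocks each collapse to their average.
frame = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
out = multires_readout(frame, (0, 0, 2, 2), 2)
print(out)
```

With a k x k kernel, each averaged region contributes one read instead of k² reads, which is the data-reduction and frame-rate gain the abstract describes.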
8

An Integrated Imaging Sensor For Rare Cell Detection Applications

Altiner, Caglar 01 November 2012 (has links)
Cell detection using image sensors is a novel and promising technique that can be used for diagnostic applications in medicine. For this purpose, cell detection studies using the shadowing method are performed with yeast cells (Saccharomyces cerevisiae) on a 32×32 complementary metal oxide semiconductor (CMOS) image sensor that is sensitive to optical illumination. Cells placed at zero distance from the sensor surface are detected using the image sensor, which is illuminated with four fixed LEDs to maintain a fixed illumination level in each test. Cells are transferred to the sensor surface by drying the medium they are in, which is phosphate buffered saline (PBS) solution. Yeast cells at zero distance from the surface are detected with a detection rate of 72%. Then, MCF-7 (breast cancer) cells are detected with the same sensor when the PBS solution is about to dry. To investigate the detection capability of the sensor while the cells are in the PBS solution, the sensor surface is coated with gold in order to immobilize antibodies on the surface. With immobilized antibodies, cells are expected to bind to the surface, achieving zero distance to the sensor surface. After the gold coating, antibodies are immobilized and the same tests are done with MCF-7 cells. In the PBS solution, no sufficient results are obtained with the shadowing technique, but sufficient results are obtained when the solution is about to dry. After achieving cell detection with the image sensor, a similar but larger format image sensor is designed. The designed CMOS image sensor has a 160×128 pixel array with 15 µm pitch. The pixel readout allows capacitive and optical detection; thus, both DNA and cell detection are possible with this image sensor. A rolling line shutter mode is added to further reduce leakage at pixel readout. Addressing allows specific array points to be investigated, and the array format can be changed for different cell sizes. The frame rate of the sensor can be adjusted, allowing the detection of fast moving cell samples. All the digital inputs of the sensor can be adjusted manually for the sake of flexibility. A large number of cells can be detected using this image sensor due to its large format.
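The shadowing method above amounts to comparing each pixel against a cell-free reference frame and flagging pixels whose intensity drops by more than some fraction. The 20% threshold and toy frames below are illustrative assumptions, not the thesis's calibrated values.

```python
def detect_shadows(reference, sample, drop=0.2):
    """Flag pixels whose intensity in the sample frame falls at least
    `drop` (fractional) below the cell-free reference frame. A minimal
    sketch of shadow-based cell detection; the threshold is an assumption."""
    hits = []
    for r, (ref_row, sam_row) in enumerate(zip(reference, sample)):
        for c, (ref, sam) in enumerate(zip(ref_row, sam_row)):
            if sam < ref * (1.0 - drop):
                hits.append((r, c))  # (row, col) of a shadowed pixel
    return hits

ref = [[100, 100], [100, 100]]   # illumination with no cells present
sam = [[100, 70], [100, 100]]    # one pixel shadowed by a cell
print(detect_shadows(ref, sam))  # [(0, 1)]
```

Using a per-pixel reference rather than a single global threshold also compensates for fixed illumination non-uniformity across the array, which matters at the fixed LED levels described above.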
9

Phosphor Coated UV-Responsive CCD Image Sensors

Alexander, Stefan January 2002 (has links)
Typical CCD image sensors are not sensitive to Ultra-Violet (UV) radiation, because the UV photons have a penetration depth of 2 nm in the ~1 µm thick polysilicon gate material. An inorganic phosphor coating was developed previously (by Wendy Franks et al [1, 2]) that was shown to be a viable solution to creating a UV-sensitive CCD image sensor. The coating absorbs incident UV radiation (250 nm) and re-emits it in the visible (550-611 nm), where it can penetrate the gate material. This coating was deposited using a settle-coat type deposition. Improved coating techniques are presented here. These include a commercial coating from Applied Scintillation Technologies (AST), a Doctor-Blade coating, e-beam deposition, and laser ablation. The properties of the coating, and of the coated sensors, are presented here. Tests performed on the sensors include Quantum Efficiency, Photo-Response Non-Uniformity, Contrast Transfer Function, and Lifetime. The AST coating is a viable method for commercial UV-responsive CCD image sensors. The Doctor-Blade coatings show promise, but issues with clustering of the coating need to be resolved before the sensors can be used commercially. E-beam deposition and laser ablation need further research to provide a viable coating.
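The quoted penetration depth makes the problem concrete: with Beer-Lambert attenuation, the fraction of UV photons surviving the gate is exp(-thickness / penetration depth). Treating the 2 nm figure as the 1/e depth (an assumption about the abstract's definition), essentially nothing reaches the buried photodiode through a ~1000 nm gate, which is why UV must be converted to visible light first.

```python
import math

def transmitted_fraction(thickness_nm, penetration_depth_nm):
    """Beer-Lambert attenuation: fraction of photons surviving a layer
    of the given thickness, with the penetration depth as the 1/e depth."""
    return math.exp(-thickness_nm / penetration_depth_nm)

# 250 nm UV with a 2 nm penetration depth in a ~1000 nm polysilicon gate:
# exp(-500), i.e. effectively zero photons reach the photodiode.
print(transmitted_fraction(1000, 2))
```

By contrast, the 550-611 nm light re-emitted by the phosphor has a penetration depth far larger than the gate thickness, so its transmitted fraction is of order one rather than exp(-500).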
