1 |
DESIGN OF A CMOS BASED IMAGE SENSOR USING COMPRESSIVE IMAGE SENSING
Pattnaik, Abhijeet, 01 September 2021
This work optimizes a CMOS image pixel sensor circuit for use in a compressive sensing (CS) image sensor. The CS image sensor sums the outputs of neighboring pixels and thereby reduces the number of analog-to-digital conversions. Efforts are also made to improve the circuit that performs this pixel summation. With the optimized design, a CMOS image sensor with a compression ratio of 4 is designed in a 130 nm CMOS technology from GlobalFoundries. The designed sensor has a 256×256 pixel array. Simulations show that the developed image sensor achieves a peak signal-to-noise ratio (PSNR) of 28 dB and 37.8 dB for the benchmark images Cameraman and Lenna, respectively.
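As a rough software illustration of the acquisition scheme described above (the thesis implements the summation in analog circuitry), the sketch below sums 2×2 pixel neighborhoods to obtain a compression ratio of 4 and evaluates image quality with PSNR. The block size, the random test image, and the naive block-average reconstruction are assumptions for illustration only; a real CS pipeline would use a sparse-recovery solver.

```python
import numpy as np

def cs_block_sum(image, block=2):
    """Sum each block x block neighborhood: one measurement per block
    (compression ratio = block**2, i.e. 4 for 2x2 blocks)."""
    h, w = image.shape
    h, w = h - h % block, w - w % block            # crop to a multiple of the block size
    x = image[:h, :w].astype(np.float64)
    return x.reshape(h // block, block, w // block, block).sum(axis=(1, 3))

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a random 256x256 8-bit "image", compressed 4:1, then naively
# reconstructed by spreading each block average back over its 2x2 block.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
meas = cs_block_sum(img)                      # 128x128 measurements
recon = np.kron(meas / 4.0, np.ones((2, 2)))  # nearest-neighbor upsampling
print("PSNR: %.1f dB" % psnr(img, recon))
```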
|
3 |
Full frame 3D snapshot : Possibilities and limitations of 3D image acquisition without scanning / Helbilds 3D-avbildning
Möller, Björn, January 2005
An investigation was initiated, targeting snapshot 3D image sensors, with the objective of matching the speed and resolution of a scanning sheet-of-light system without using a scanning motion. The goal was a system capable of acquiring 25 snapshot images per second from a quadratic scene with a side from 50 mm to 1000 mm, sampled in 512×512 height measurement points, and with a depth resolution of 1 µm and beyond.

A wide search of information about existing 3D measurement techniques resulted in a list of possible schemes, each presented with its advantages and disadvantages. No single scheme proved successful in meeting all the requirements. Pulse-modulated time-of-flight is the only scheme capable of depth imaging using only one exposure. However, a resolution of 1 µm corresponds to a pulse edge detection accuracy of 6.67 fs when visible light or other electromagnetic waves are used. Sequentially coded light projections require a logarithmic number of exposures. By projecting several patterns at the same time, using for instance light of different colours, the required number of exposures is reduced even further. The patterns are, however, not as well focused as a laser sheet-of-light can be.

Using powerful architectural concepts such as matrix array picture processing (MAPP) and near-sensor image processing (NSIP), a sensor proposal was presented, designed to give as much support as possible to a large number of 3D imaging schemes. It allows for delayed decisions about details in the future implementation.

It is necessary to relax at least one of the demands for this project in order to realise a working 3D imaging scheme using current technology. One of the candidates for relaxation is the most obvious demand of snapshot behaviour. Furthermore, there are a number of decisions to make before designing an actual system using the recommendations presented in this thesis. The ongoing development of electronics, optics, and imaging schemes might be able to meet the 3D snapshot demands in the near future. The details of the light-sensing electronics must be carefully evaluated, and the optical components such as lenses, projectors, and fibres should be studied in detail.
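As a back-of-the-envelope check of the timing figure quoted above, assuming a reflective round-trip geometry and c ≈ 3×10^8 m/s, a depth step Δz maps to a time step

```latex
\Delta t \;=\; \frac{2\,\Delta z}{c} \;=\; \frac{2 \times 10^{-6}\ \mathrm{m}}{3 \times 10^{8}\ \mathrm{m/s}} \;\approx\; 6.67\ \mathrm{fs},
```

which matches the 6.67 fs pulse-edge detection accuracy stated in the abstract.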
|
4 |
Investigations of time-interpolated single-slope analog-to-digital converters for CMOS image sensors
Levski, Deyan, January 2018
This thesis presents a study of solutions for high-speed analog-to-digital conversion in CMOS image sensors using time-interpolation methods. Data conversion is one of the few remaining speed bottlenecks in conventional 2D imagers. At the same time, as pixel dark current continues to improve, the resolution requirements on imaging data converters impose very demanding system-level design challenges. The investigations presented here shed light on Time-to-Digital Converter (TDC) interpolation methods for single-slope ADCs. By using high-factor time interpolation, the resolution of single-slope converters can be increased without sacrificing conversion time or power.

This work emphasizes solutions for improving multiphase clock interpolation schemes, following an all-digital design paradigm. A digital calibration scheme is presented which allows the complete elimination of analog clock-generation blocks, such as PLLs or DLLs, in flash TDC-interpolated single-slope converters. The multiphase clocks of the time-interpolated single-slope ADC are instead generated by a conventional open-loop delay line, and a digital backend calibration has been developed to correct the process, voltage, and temperature drift of the delay line. The calibration is executed online, in-column, at the end of each sample conversion. The introduced concept has been tested in silicon and has shown promising results for its introduction in practical mass-production scenarios.

Methods for reference voltage generation in single-slope ADCs have also been examined. The origins of the error and noise phenomena which occur during both the discrete-time and continuous-time conversion phases of a single-slope ADC have been mathematically formalized, and a method for the practical measurement of noise on the ramp reference voltage is presented.

Multiphase clock interpolation schemes are difficult to implement when high interpolation factors are used, due to the quadratic growth of the number of clock phases with resolution. To allow high interpolation factors, a time-domain binary search concept with error calibration has been introduced. Although the study is conceptual, it shows promising results for highly efficient implementations, provided that a solution for stable column-level unit delays can be found; the latter is left as a matter for future investigation.
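As a rough numerical illustration of the trade-off described above, a single-slope conversion needs 2^N coarse clock periods, while a TDC that subdivides each period by a factor F buys log2(F) extra bits for the same ramp time. The clock frequency, bit depths, and interpolation factor below are assumed example values, not figures from the thesis.

```python
def single_slope_conversion_time(bits, f_clk_hz, interp_factor=1):
    """Worst-case ramp time: the ramp must cover 2**bits LSBs, but each coarse
    clock period is subdivided into `interp_factor` TDC steps."""
    coarse_periods = 2 ** bits / interp_factor
    return coarse_periods / f_clk_hz

f_clk = 500e6                                   # assumed 500 MHz column clock
print("12-bit plain single-slope ramp: %.3f us" % (single_slope_conversion_time(12, f_clk) * 1e6))
print("12-bit with 16x interpolation:  %.3f us" % (single_slope_conversion_time(12, f_clk, 16) * 1e6))
# 16x interpolation adds log2(16) = 4 bits for the same ramp time:
print("16-bit with 16x interpolation:  %.3f us" % (single_slope_conversion_time(16, f_clk, 16) * 1e6))
```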
|
5 |
Etude des effets singuliers produits par les particules énergétiques chargées de l’environnement radiatif spatial sur les capteurs d’images CMOS / Study of Single Event Effects induced by highly energetic charged particles of the space environment in CMOS image sensors
Lalucaa, Valérian, 12 December 2013
This thesis studies the single event effects produced by the space radiation environment in CMOS image sensors (CIS). The work focuses on the effects of heavy ions on 3T pixels with standard photodiodes and on 4T and 5T pixels with pinned photodiodes. The first part describes the space radiation environment and the sensor architectures. Comparison with the scientific literature identifies the most critical events for these sensors: single event latchup (SEL) and single event transients (SET). The experimentally tested sensors confirm the theoretical work. Measured SETs are compared with simulations from the STARDUST modeling tool and show good agreement for all chips and ions. The work explains why SETs in 3T chips are insensitive to variations in photodiode design, and why using an epitaxial substrate greatly reduces them. A SET-reduction method using anti-blooming was successfully implemented in the 4T and 5T pixels to limit the spread of SETs, and the device responsible for latchup in 4T pixel sensors was identified. Together, the mechanisms explored identify the key parameters for hardening CMOS image sensors.
|
6 |
Apport de la technologie d’intégration 3D à forte densité d’interconnexions pour les capteurs d'images CMOS / Contribution of the 3D integration technology using high density of interconnexions for CMOS image sensors
Raymundo Luyo, Fernando Rodolpho, 09 September 2016
This work has shown that 3D integration technology makes it possible to overcome the limits imposed by monolithic technology on the electrical performance (coupling and power consumption) and on the physical implementation (pixel area) of imagers. An in-depth analysis of 3D integration showed that the technologies best suited to integrating circuits at the pixel level are 3D wafer-level integration and 3D sequential integration. The technology chosen for this study is 3D wafer-level integration, which allows two wafers to be connected by thermocompression bonding with one interconnection, or bonding point, per pixel between the wafers.

The study of an in-pixel ADC architecture showed that there are two limits at the pixel level: the available construction area and the coupling between the analog and digital parts (digital coupling). Implementing the ADC in 3D technology increases the construction area by 100% and reduces the digital coupling by 70%. A tool for computing the parasitic elements of 3D structures was also implemented.

The study of high-speed imagers extended the use of this technology. The burst-type imager was studied in particular; this architecture dissociates image acquisition from the output stage. Its main limit in monolithic technology is the size of the columns (from the pixels to the memories): a high image-acquisition rate requires a large current consumption. Implementing it in 3D technology allowed the memories to be placed below the pixels. The studies carried out for this change (reducing the column to an interconnection between wafers) reduced the total consumption by 90% and increased the image acquisition time by 184%, compared with the monolithic counterpart.
|
7 |
Characterization, calibration, and optimization of time-resolved CMOS single-photon avalanche diode image sensor
Zarghami, Majid, 02 September 2020
Vision has always been one of the most important cognitive tools of human beings. In this regard, the development of image sensors opens up the potential to view objects that our eyes cannot see. One of the most promising capabilities of some image sensors is single-photon sensitivity, which provides information at the ultimate, fundamental limit of light. Time-resolved single-photon avalanche diode (SPAD) image sensors add a further dimension, as they measure the arrival time of incident photons with a precision on the order of a hundred picoseconds. In addition, they can be fabricated in complementary metal-oxide-semiconductor (CMOS) technology, enabling the integration of complex signal-processing blocks at the pixel level. These unique features make CMOS SPAD sensors a prime candidate for a broad spectrum of applications. This thesis is dedicated to the optimization and characterization of SPAD-based quantum imagers developed within the EU-funded SUPERTWIN project, which aims to surpass the fundamental diffraction limit, known as the Rayleigh limit, by exploiting the spatio-temporal correlation of entangled photons.
The first characterized sensor is a 32×32-pixel SPAD array, named “SuperEllen”, with in-pixel time-to-digital converters (TDCs) that measure the spatial cross-correlation functions of a flux of entangled photons. Each pixel features a 19.48% fill factor (FF) at a 44.64 µm pitch and is fabricated in a 150 nm standard CMOS technology. The sensor is fully characterized in several electro-optical experiments in order to be used in quantum imaging measurements. Moreover, the chip is calibrated in terms of coincidence detection, achieving the minimal coincidence window determined by the SPAD jitter. The second sensor developed in the context of the SUPERTWIN project is a 224×272-pixel SPAD-based array called “SuperAlice”, a multi-functional image sensor fabricated in a 110 nm CMOS image sensor technology. SuperAlice can operate in multiple modes: time-resolving, photon counting, or binary imaging.
Thanks to their intrinsically digital nature, SPAD imagers have an inherent capability to achieve high frame rates. However, running at a high frame rate means high I/O power consumption and inefficient handling of the generated data, since SPAD arrays are employed in low-light applications in which the data are very sparse in time and space. Here, we present three zero-suppression mechanisms to increase the frame rate without adversely affecting power consumption. A row-skipping mechanism, implemented in both SuperEllen and SuperAlice, detects the absence of SPAD activity in a row to increase the duty cycle. A current-based mechanism, implemented in SuperEllen, skips the readout of a full frame when the number of triggered pixels is below a user-defined value. A third zero-suppression technique, developed in the SuperAlice chip, jumps directly between the non-zero pixels within a row.
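The following is a purely software model of the three suppression rules just described; the real mechanisms are on-chip readout circuits, and the frame-level threshold, array size, and hit probability below are assumed example values.

```python
import numpy as np

def readout_addresses(frame, frame_threshold=4):
    """Return the (row, col) addresses that would be read out from a binary
    SPAD hit map after applying the zero-suppression rules."""
    if frame.sum() < frame_threshold:        # frame-level suppression
        return []
    addresses = []
    for r, row in enumerate(frame):
        if not row.any():                    # row-skipping: no activity in this row
            continue
        for c in np.flatnonzero(row):        # jump only through the non-zero pixels
            addresses.append((r, c))
    return addresses

rng = np.random.default_rng(0)
frame = (rng.random((32, 32)) < 0.005).astype(np.uint8)   # sparse low-light frame
print(f"{int(frame.sum())} hits -> {len(readout_addresses(frame))} addresses read out")
```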
The acquisition of TDC-based SPAD imagers can be sped up further by storing and processing events inside the chip, without the need to read out all of the data. An on-chip histogramming architecture based on analog counters has been developed in a 150 nm standard CMOS technology. The test structure is a 16-bin histogram with a 9-bit depth for each bin.
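A minimal software model of that data structure is sketched below: timestamps are binned into 16 bins and each 9-bit bin saturates at 511 counts. The 1024-LSB TDC range is an assumption for illustration; the actual architecture uses analog counters.

```python
import numpy as np

N_BINS, BIN_DEPTH_BITS, TDC_RANGE = 16, 9, 1024   # TDC range is an assumed value

def histogram_events(timestamps):
    bins = np.zeros(N_BINS, dtype=np.int32)
    width = TDC_RANGE // N_BINS
    for t in timestamps:
        idx = min(int(t) // width, N_BINS - 1)
        bins[idx] = min(bins[idx] + 1, 2 ** BIN_DEPTH_BITS - 1)   # saturate at 9 bits
    return bins

rng = np.random.default_rng(1)
print(histogram_events(rng.integers(0, TDC_RANGE, size=5000)))
```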
SPAD technology also demonstrates its capability in other applications, such as automotive, that demand high-dynamic-range (HDR) imaging. We propose two methods based on processing photon arrival times to create HDR images. The proposed methods are validated experimentally with SuperEllen, obtaining a dynamic range of more than 130 dB within 30 ms of integration time; the range can be extended further by using a timestamping mechanism with a higher resolution.
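The abstract does not spell out the two methods; as a generic illustration of why photon arrival times extend dynamic range beyond what a saturating counter allows, the sketch below estimates a pixel's photon rate from the mean inter-arrival time of its timestamps and expresses the ratio between a bright and a dim pixel in dB. The simulated rates and the 30 ms window are assumptions.

```python
import numpy as np

def photon_rate(timestamps_s, t_int=30e-3):
    """Estimate the photon rate (photons/s) seen during one integration window.
    With only a few photons the plain count is used; with many photons the mean
    inter-arrival time still encodes intensity even after a counter would clip."""
    ts = np.sort(np.asarray(timestamps_s))
    if ts.size < 2:
        return ts.size / t_int
    return 1.0 / np.mean(np.diff(ts))

rng = np.random.default_rng(2)
dim    = np.cumsum(rng.exponential(1 / 1e2, size=3))        # ~100 photons/s pixel
bright = np.cumsum(rng.exponential(1 / 1e8, size=10_000))   # ~1e8 photons/s pixel
dr_db = 20 * np.log10(photon_rate(bright) / photon_rate(dim))
print(f"dynamic range between the two pixels ~ {dr_db:.0f} dB")
```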
|
8 |
Capteur de vision CMOS à réponse insensible aux variations de température / High Dynamic Range CMOS vision sensor with a perturbation insensibility
Zimouche, Hakim, 01 September 2011
CMOS image sensors find widespread use in various industrial applications, including surveillance, defense, medical imaging, etc. In these applications, CMOS image sensors are often exposed to large temperature variations. Like all analog circuits, CMOS image sensors are very sensitive to temperature variations, which limits their applications. Until now, no integrated solution to this problem had been proposed. To address it, this thesis studies the effects of temperature on the two best-known types of CMOS imagers. Several compensation structures are proposed; they build broadly on three existing methods that had never before been applied to image sensors. The first method uses an input at the pixel level that is modulated according to the temperature. The second method uses the ZTC (Zero Temperature Coefficient) technique. The third method is inspired by the bandgap voltage reference. In all cases, the temperature effect is greatly reduced and the sensor achieves good temperature stability from -30 to 125 °C. All the proposed solutions preserve the original operation of the imager and have little or no impact on the pixel area.
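For context on the third method, the sketch below illustrates the textbook bandgap principle it is inspired by, not the thesis circuit: a base-emitter voltage falling roughly 2 mV/°C (CTAT) is summed with a weighted thermal voltage kT/q rising about 0.085 mV/°C (PTAT) so that the first-order temperature slopes cancel. The 0.65 V value at 27 °C and the -2 mV/°C slope are generic assumptions.

```python
K_B, Q = 1.380649e-23, 1.602176634e-19     # Boltzmann constant, electron charge

def v_ref(temp_c, vbe_27c=0.65, dvbe_dt=-2e-3):
    """First-order bandgap reference: CTAT V_BE plus a weighted PTAT kT/q."""
    v_be = vbe_27c + dvbe_dt * (temp_c - 27.0)       # CTAT term
    v_t = K_B * (temp_c + 273.15) / Q                # PTAT term, ~25.9 mV at 27 °C
    m = -dvbe_dt / (K_B / Q)                         # weight that nulls dV/dT
    return v_be + m * v_t

for t in (-30, 27, 125):
    print(f"{t:>4} °C: V_ref = {v_ref(t) * 1000:.1f} mV")   # ~constant over the range
```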
|
9 |
Estimation et modélisation de paramètres clés des capteurs d’images CMOS à photodiode pincée pour applications à haute résolution temporelle / Estimation and modeling of key design parameters of pinned photodiode CMOS image sensors for high temporal resolution applications
Pelamatti, Alice, 17 November 2015
Driven by strong demand and an aggressively competitive market, pinned-photodiode (PPD) CMOS image sensor (CIS) technology is in continuous evolution. Thanks to the outstanding noise performance of PPD CIS, CMOS sensors can now reach sensitivities of a few photons, which makes this technology a particularly interesting candidate for high-temporal-resolution applications. Despite the very large production volumes, the physics of the pinned photodiode is not yet fully understood, and there is still a lack of standard methods for characterizing PPD performance. This thesis focuses on the definition, analytical modeling, simulation, and estimation of key PPD design parameters, such as the charge transfer time, the pinning voltage, and the full well capacity (FWC). As this work highlights, it is essential to understand how the experimental conditions affect these parameters, in order to optimize them during sensor design, to characterize the image sensor correctly, and to choose appropriate measurement conditions when operating the device.
|
10 |
CMOS IMAGE SENSORS WITH COMPRESSIVE SENSING ACQUISITION
Dadkhah, Mohammadreza, January 2013
The compressive sensing (CS) paradigm provides an efficient image acquisition technique through simultaneous sensing and compression. Since the imaging philosophy of CS imagers differs from that of conventional imaging systems, new physical structures are required to design cameras suitable for CS imaging.

While this work is focused on the hardware implementation of CS encoding for CMOS sensors, the image reconstruction problem of CS is also studied. The energy compaction properties of the image in different domains are exploited to modify conventional reconstruction problems. Experimental results show that the modified methods outperform the ℓ1-norm and TV (total variation) reconstruction algorithms by up to 2.5 dB in PSNR.

We have also designed, fabricated, and measured the performance of two real-time and area-efficient implementations of CS encoding for CMOS imagers. In the first implementation, the idea of an active pixel sensor (APS) with an integrator and in-pixel current switches is used to develop a compact, current-mode implementation of CS encoding in the analog domain. In the second implementation, the conventional three-transistor APS structure and switched-capacitor (SC) circuits are exploited to develop an analog, voltage-mode implementation of the CS encoding. With the analog, block-based implementation, sensing and encoding are performed in the same time interval, making the encoding a real-time process. The proposed structures are designed and fabricated in a 130 nm technology. The experimental results confirm the scalability, the functionality of the block readout, and the validity of the design in producing monotonic and appropriate CS measurements.

This work also discusses CS-CMOS sensors for high-frame-rate CS video coding. A multiple-camera, coded-exposure video coding method is discussed, and a new pixel and array structure for its hardware implementation is presented.

Doctor of Philosophy (PhD)
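For context on the ℓ1-norm baseline mentioned above, the sketch below runs a plain ISTA (iterative soft-thresholding) solver for the ℓ1-regularized CS problem on a synthetic sparse signal at a 4:1 measurement ratio. The Gaussian measurement matrix, signal dimensions, and regularization weight are illustrative assumptions, not the thesis setup, which reconstructs images from block-based hardware measurements.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=300):
    """Iterative soft-thresholding for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L                # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(3)
n, m, k = 256, 64, 8                                 # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)         # random CS measurement matrix (4:1)
y = A @ x_true                                       # compressed measurements
x_hat = ista(A, y)
print("relative recovery error: %.3f" % (np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))
```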
|