1. A CCD based camera for digital imaging of the nightglow. MacIntosh, Michael J., January 1986.
This thesis deals with the development of a microprocessor-controlled CCD-based camera for digital imaging of the nightglow. A brief description of the techniques used to image the nightglow is given and the reasons for choosing a CCD as the detector are discussed. The fundamentals of CCD operation are then described, with particular emphasis on buried-channel CCD image sensors, as the P8603 CCD used in the camera is of this type.

A major part of the thesis is devoted to the detailed design of the camera electronics, which consists of three main sections: (i) an MC6802-based microprocessor controller with 4 K of ROM and 64 K of dynamic RAM; (ii) a display interface which allows an on-line display of the images to be produced on an oscilloscope for monitoring purposes while observing; and (iii) the CCD interface, which consists of the drive pulse buffers for the image, store and readout sections of the CCD, the bias voltage generators for the CCD on-chip charge amplifier, and the signal processing electronics, which offers a choice of four software-selectable gains and uses correlated double sampling to achieve low noise levels. The design of a digital cassette system for recording the image data is also described. The system, which is based on a low-cost stereo cassette recorder, accepts and produces data in the same RS232 serial format used by the camera and is capable of operating at up to 9600 baud on two channels.

A further section deals with the optical, structural and cryogenic design. This includes a description of the camera optical system, which is based on a commercial F1.4 CCTV lens, theoretical calculations of the expected response of the camera to a range of nightglow emissions, the design of the liquid nitrogen cryostat which is used to cool the CCD, the design of the camera chassis, and calculations to determine (i) the CCD temperature required to reduce the dark current to an acceptable level; and (ii) the capacity of the liquid nitrogen reservoir necessary to allow a whole night's observing without refilling. The detailed operation of the camera control program, which is written in 6800 assembly language, is then described with the aid of flowcharts. Currently the control program is set up to give a one-minute integration period using half-frame imaging and a 3 x 2 pixel amalgamation.

The final section of the thesis deals with the testing and performance of the camera. Several experiments were carried out, including measurement of the various possible amplifier gains, the noise performance of the system, the angular response of the camera optics, and the calibration of the camera using a standard light to allow the absolute intensity of nightglow emissions to be calculated. Theoretical calculations of the expected noise levels and the expected response of the camera to the standard light are also included. A suite of image processing programs, written in Pascal for an Apple II microcomputer, is then described. These programs allow various operations to be performed, such as scanning the images stored on tape and correcting for the defective columns on the CCD and the angular response of the camera optics. Lastly, the performance of the camera in the field is discussed and the results of observations made locally, which include photographs of images believed to show hydroxyl airglow structure, are presented.
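A minimal numerical sketch of the correlated double sampling and 3 x 2 pixel-amalgamation steps mentioned above; the array shape, noise levels and gain value are assumptions chosen for the example, not figures taken from the thesis.

```python
import numpy as np

def correlated_double_sample(reset_level, signal_level, gain=1.0):
    """Correlated double sampling: subtract the reset (reference) sample from
    the signal sample for each pixel, then apply the selected gain. The
    subtraction cancels the reset noise component common to both samples."""
    return gain * (signal_level - reset_level)

def amalgamate(image, block=(3, 2)):
    """Sum pixels in non-overlapping 3 x 2 blocks, trading spatial resolution
    for signal per output pixel."""
    h, w = image.shape
    bh, bw = block
    image = image[: h - h % bh, : w - w % bw]      # crop to whole blocks
    return image.reshape(h // bh, bh, -1, bw).sum(axis=(1, 3))

# Toy frame: a faint uniform nightglow signal plus read noise (arbitrary units).
rng = np.random.default_rng(0)
reset = rng.normal(1000.0, 5.0, size=(288, 385))
signal = reset + 20.0 + rng.normal(0.0, 5.0, size=reset.shape)
frame = correlated_double_sample(reset, signal, gain=4.0)
binned = amalgamate(frame)                          # 3 x 2 amalgamation
print(binned.shape, binned.mean())
```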
2. Detecting edges in noisy face database images. Qahwaji, Rami S.R., January 2003.
No abstract available.
3. The asymptotic rate of the length of the longest significant chain with good continuation in Bernoulli net and its applications in filamentary detection. Ni, Kai, 08 April 2013.
This thesis is devoted to the detectability of an inhomogeneous region possibly embedded in a noisy environment. It presents models and algorithms using the theory of the longest significant run and percolation, and analyzes computational results based on simulation. We consider the length of chains of significant nodes with good continuation in a square lattice of independent nodes. Inspired by percolation theory, we first analyze the problem in a tree-based model. We give the critical probability and find the decay rate of the probability of having a significant run of length k starting at the origin. We find that the asymptotic rate of the length of the significant run can be powerfully applied in the area of image detection. Examples include the detection of filamentary structures in a background of uniform random points and target tracking problems. We set the threshold for the rejection region in these problems so that false positives diminish quickly as more samples become available.
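As a hedged illustration of the kind of geometric decay involved, a first-moment (union) bound under the simplifying assumptions that each node of the tree model admits three continuations and that nodes are significant independently with probability p gives

\[
\Pr\bigl(\exists\ \text{a significant chain of length } k \text{ from the origin}\bigr)
\;\le\; \mathbb{E}[N_k] \;\le\; (3p)^k ,
\]

where N_k counts length-k chains of significant nodes starting at the origin. For p < 1/3 the bound decays geometrically in k, which is why a length threshold growing only logarithmically in the lattice size can control false positives. The exact branching structure, critical probability and rate in the thesis may differ; this is only meant to convey the flavour of the argument.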
Inspired by convex set detection, we also give a fast and near-optimal algorithm to detect a possibly inhomogeneous chain with good continuation in an image of pixels corrupted by white noise. We analyze the length of the longest significant chain after thresholding each pixel and consider the statistics over all significant chains. Such a strategy significantly reduces the complexity of the algorithm, and false positives are eliminated as the number of pixels increases. This extends existing methods in the literature for the detection of inhomogeneous line segments.
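A minimal sketch of the kind of dynamic programme such a thresholding strategy could use: threshold each pixel, then sweep column by column, extending chains whose row index changes by at most one step (one simple notion of good continuation). The threshold, lattice size and statistic here are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def longest_good_continuation_chain(pixels, threshold):
    """Length of the longest left-to-right chain of significant pixels in
    which consecutive pixels sit in adjacent columns and their rows differ
    by at most 1 (a simple 'good continuation' constraint)."""
    significant = pixels > threshold
    n_rows, n_cols = significant.shape
    best = np.zeros((n_rows, n_cols), dtype=int)
    best[:, 0] = significant[:, 0].astype(int)
    for j in range(1, n_cols):
        for i in range(n_rows):
            if not significant[i, j]:
                continue
            lo, hi = max(i - 1, 0), min(i + 2, n_rows)
            best[i, j] = 1 + best[lo:hi, j - 1].max()
    return int(best.max())

# Pure-noise image: long chains should be rare, so a length threshold
# calibrated to the noise-only decay rate keeps false positives low.
rng = np.random.default_rng(1)
noise = rng.normal(size=(64, 64))
print(longest_good_continuation_chain(noise, threshold=1.0))
```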
4. AI-based image generation: The impact of fine-tuning on fake image detection. Hagström, Nick; Rydberg, Anders, January 2024.
Machine learning-based image generation models such as Stable Diffusion are now capable of generating synthetic images that are difficult to distinguish from real images, which gives rise to a number of legal and ethical concerns. As a potential measure of mitigation, it is possible to train neural networks to detect the digital artifacts present in the images synthesized by many generative models. However, as the artifacts in question are often rather model-specific, these so-called detectors usually suffer from poor performance when presented with images from models they have not been trained on. In this thesis we study DreamBooth and LoRA, two recently emerged fine-tuning methods, and their impact on the performance of fake image detectors. DreamBooth and LoRA can be used to fine-tune a Stable Diffusion foundation model, which has the effect of creating an altered version of the base model. The ease with which this can be done has led to a proliferation of community-generated synthetic images. However, the effect of model fine-tuning on the detectability of images has not yet been studied in a scientific context. We therefore formulate the following research question: Does fine-tuning a Stable Diffusion base model using DreamBooth or LoRA affect the performance metrics of detectors trained on only base model images? We employ an experimental approach, using the pretrained VGG16 architecture for binary classification as the detector. We train the detector on real images from the ImageNet dataset together with images synthesized by three different Stable Diffusion foundation models, resulting in three trained detectors. We then test their performance on images generated by fine-tuned versions of these models. We find that the accuracy of detectors when tested on images generated using fine-tuned models is lower than when tested on images generated by the base models on which they were trained. Within the former category, DreamBooth-generated images have a greater negative impact on detector accuracy than LoRA-generated images. Our study suggests that DreamBooth fine-tuned models in particular need to be considered as distinct entities in the context of fake image detector training.
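A minimal sketch of the detector setup described above, i.e. a pretrained VGG16 with its final layer replaced for binary real-versus-synthetic classification; the data loading, hyperparameters and training loop are placeholders rather than the exact configuration used in the thesis.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained VGG16 backbone with a two-class head (real vs. synthetic).
detector = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
detector.classifier[6] = nn.Linear(4096, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimisation step; `images` is a batch of 224x224 RGB tensors and
    `labels` is 0 for real images, 1 for generated images."""
    optimizer.zero_grad()
    loss = criterion(detector(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors in place of ImageNet / Stable Diffusion data.
loss = train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 0, 1]))
print(f"loss: {loss:.3f}")
```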
5. Convolutional Neural Network Detection and Classification System Using an Infrared Camera and Image Detection Uncertainty Estimation. Miethig, Benjamin Taylor, January 2019.
Autonomous vehicles are equipped with systems that can detect and track the objects in a vehicle’s vicinity and make appropriate driving decisions accordingly. Infrared (IR) cameras are not typically employed on these systems, but the new information that can be supplied by IR cameras can help improve the probability of detecting all objects in a vehicle’s surroundings. The purpose of this research is to investigate how IR imaging can be leveraged to improve existing autonomous driving detection systems. This research serves as a proof-of-concept demonstration.
In order to achieve detection using thermal images, raw data from seven different driving scenarios was captured and labelled using a calibrated camera. Calibrating the camera made it possible to estimate the distance to objects within the image frame. The labelled images (ground truth data) were then used to train several YOLOv2 neural networks to detect similar objects in other image frames. Deeper YOLOv2 networks trained on larger amounts of data were shown to perform better on both precision and recall metrics.
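One common way a calibrated camera can be used to estimate range, offered here only as an illustrative sketch; the focal length, assumed object height and the exact procedure used in the thesis are not known to match this.

```python
def estimate_distance(focal_length_px, real_height_m, bbox_height_px):
    """Pinhole-camera range estimate: distance = f * H / h, where f is the
    focal length in pixels, H the assumed real-world object height and h the
    object's height in the image in pixels."""
    return focal_length_px * real_height_m / bbox_height_px

# Example: an assumed 1.7 m pedestrian, 40 px tall, with f = 800 px.
print(f"{estimate_distance(800.0, 1.7, 40.0):.1f} m")  # 34.0 m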
A novel method of estimating the pixel error in detected object locations has also been proposed, which can be applied to any detection algorithm that has corresponding ground truth data. The pixel errors were shown to be normally distributed, with distinct spreads over different ranges of y-pixels. Low correlations were seen in the detection errors in the x-pixel direction. This methodology can be used to create a gate estimate for the detected pixel location of an object.
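A sketch of the gate-estimation idea described above: bin the detection errors by the ground-truth y-pixel of the object, fit a normal distribution per bin, and use a multiple of the fitted spread as the gate. The bin edges and the 2-sigma gate width are illustrative assumptions.

```python
import numpy as np

def pixel_error_gates(gt_y, errors, bin_edges, n_sigma=2.0):
    """For each y-pixel range, fit a normal distribution to the detection
    errors observed in that range and return a +/- n_sigma gate."""
    gates = {}
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = errors[(gt_y >= lo) & (gt_y < hi)]
        if in_bin.size < 2:
            continue
        mu, sigma = in_bin.mean(), in_bin.std(ddof=1)
        gates[(lo, hi)] = (mu - n_sigma * sigma, mu + n_sigma * sigma)
    return gates

# Synthetic stand-in data: error spread grows with y (objects lower in the frame).
rng = np.random.default_rng(2)
gt_y = rng.uniform(0, 480, size=2000)
errors = rng.normal(0.0, 1.0 + gt_y / 160.0)
for span, gate in pixel_error_gates(gt_y, errors, np.arange(0, 481, 120)).items():
    print(span, np.round(gate, 1))
```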
Detection using IR imaging has shown promising results for applications where typical autonomous sensors can have difficulties. The work done in this thesis has shown that the additional information supplied by IR cameras has the potential to improve existing autonomous sensory systems.
6. Comparison of GPS-Equipped Vehicles and Its Archived Data for the Estimation of Freeway Speeds. Lee, Jaesup, 09 April 2007.
Video image detection system (VDS) equipment provides real-time traffic data for monitored highways directly to the traffic management center (TMC) of the Georgia Department of Transportation. However, at any given time, approximately 30 to 35% of the 1,600 camera stations (STNs) fail to work properly. The main reasons for malfunctions in the VDS system include long-term road construction activity and operational limitations. Thus, providing alternative data sources for offline VDS stations, and developing tools that can help detect problems with VDS stations, can facilitate the successful operation of the TMC.
To estimate the travel speed at non-working STNs, this research examined global positioning system (GPS) data from vehicles using the ATMS-monitored freeway system as a potential alternative measure to VDS. The goal of this study is to compare VDS speed data with trip data from GPS-equipped vehicles for the estimation of freeway travel speeds, and to assess the differences between these measurements as a potential function of traffic and roadway conditions, environmental conditions, and driver/vehicle characteristics. The difference between GPS and VDS speeds is affected by various factors such as congestion level (expressed as level of service), on-road truck percentage, facility design (number of lanes and freeway sub-type), posted speed limit, weather, daylight, and time of day. The effect of congestion level on the measured speed difference was particularly large, and congestion level was observed to interact with most other factors.

Classification and regression tree (CART) analysis indicated that driver age was the most relevant variable in explaining variation for the southbound freeway dataset, while freeway sub-type, speed limit, driver age, and number of lanes were the most influential variables for the northbound freeway dataset. The combination of several variables contributed significantly to the reduction of the deviation for both the northbound and southbound datasets. Although this study identifies potential relationships between the speed difference and various factors, the results of the CART analysis should be interpreted in light of the driver sample size; expanded sampling with a larger number of drivers would strengthen the study's results.
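A sketch of the kind of CART model used in that analysis, built with scikit-learn on placeholder records; the actual dataset, predictor names and tree settings in the study are assumptions here.

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical records: per-trip difference between GPS and VDS speed (mph)
# alongside a few of the factors discussed above.
data = pd.DataFrame({
    "speed_diff": [2.1, 5.4, 0.8, 7.2, 1.5, 6.1, 3.3, 0.4],
    "los":        ["A", "E", "B", "F", "A", "E", "C", "A"],   # level of service
    "truck_pct":  [4,   12,  6,   15,  5,   11,  8,   3],
    "num_lanes":  [4,   3,   5,   2,   4,   3,   4,   5],
    "driver_age": [28,  55,  34,  61,  23,  48,  39,  30],
})

# One-hot encode the categorical factor and grow a shallow regression tree.
X = pd.get_dummies(data.drop(columns="speed_diff"))
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, data["speed_diff"])
print(export_text(tree, feature_names=list(X.columns)))
```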
7. Identifying signatures in scanned paper documents: A proof-of-concept at Bolagsverket. Norén, Björn, January 2022.
Bolagsverket, a Swedish government agency, receives cases in paper form via mail, as documents via e-mail, and as digital forms. These cases may concern registering people in a company, changing the share capital, etc. Handling and confirming all these papers can be time-consuming, and it would be beneficial for Bolagsverket if this process could be automated with as little human input as possible. This thesis investigates whether it is possible to identify if a paper contains a signature by using artificial intelligence (AI) and convolutional neural networks (CNNs), and also whether it is possible to determine how many signatures a given paper has. If these problems prove to be solvable, it could lead to a great benefit for Bolagsverket. In this thesis, a residual neural network (ResNet) was implemented and then trained on sample data provided by Bolagsverket. The results demonstrate that it is possible to determine whether a paper has a signature with 99% accuracy; the model was trained on 8787 images and tested on 1000 images. A second ResNet architecture was implemented to identify the number of signatures, and the results show that this was possible with an accuracy of 94.6%.
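A minimal sketch of a ResNet-based binary classifier of the kind described (signature present versus absent); the thesis's exact architecture, preprocessing and training regime are not reproduced here, and the choice of ResNet-18 is an assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet backbone with a single-logit head: signature present (1) or not (0).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

def has_signature(document_image, threshold=0.5):
    """Classify one scanned page (a 3x224x224 tensor) as signed or unsigned."""
    model.eval()
    with torch.no_grad():
        prob = torch.sigmoid(model(document_image.unsqueeze(0)))[0, 0]
    return bool(prob > threshold), float(prob)

# Smoke test with a random tensor in place of a scanned document.
print(has_signature(torch.randn(3, 224, 224)))
```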
8. Improve Nano-Cube Detection Performance Using A Method of Separate Training of Sample Subsets. Nagavelli, Sai Krishnanand, January 2016.
No description available.
9. Low-power high-resolution image detection. Merchant, Caleb, 09 August 2019.
Many image processing algorithms exist that can accurately detect humans and other objects such as vehicles and animals. Many of these algorithms require large amounts of processing, often requiring hardware acceleration with powerful central processing units (CPUs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), etc. Implementing an algorithm that can detect objects such as humans at longer ranges makes these hardware requirements even more demanding, as the number of pixels necessary to detect objects at both close and long ranges is greatly increased. Comparing the performance of different low-power implementations makes it possible to determine a trade-off between performance and power. An image differencing algorithm is proposed, along with selected low-power hardware, that is capable of detecting humans at ranges of 500 m. Multiple versions of the detection algorithm are implemented on the selected hardware and compared for run-time performance on a low-power system.
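A sketch of a simple image-differencing detector in the spirit of the one described, using only NumPy and SciPy; the threshold, minimum blob size and the toy frames are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np
from scipy import ndimage

def detect_motion(frame, background, threshold=25, min_pixels=20):
    """Difference the current frame against a background frame, threshold the
    absolute difference, and return bounding slices of the changed regions."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > threshold
    labels, _ = ndimage.label(mask)
    boxes = []
    for region in ndimage.find_objects(labels):
        if mask[region].sum() >= min_pixels:
            boxes.append(region)
    return boxes

# Toy 8-bit frames: a flat background plus one bright moving blob.
background = np.full((120, 160), 30, dtype=np.uint8)
frame = background.copy()
frame[40:60, 80:100] = 200
print(detect_motion(frame, background))
```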
10. Detecção automática de conteúdo ofensivo na web / Automatic detection of offensive content on the Web. Belém, Ruan Josemberg Silva, 12 May 2006.
The World Wide Web is a huge source of diverse information, including offensive material such as pornography-related content. This poses the problem of automatically detecting offensive content as a way to avoid unauthorised access, for instance by children or by employees during working hours. Although this sort of information is published in many forms, including text, sound and video, images are the most common form of publication of offensive content on the Web. Detecting offensive images can be considered a classification problem. Given that the images are part of Web pages, textual information can be used as important evidence alongside content extracted from the images themselves, such as colour, texture and shapes. This dissertation proposes two distinct approaches for automatic detection of offensive images on the Web. The first is based on image content, specifically colour. The second is based on textual terms extracted from the Web pages that present the images. After evidence extraction, classification is performed using an SVM classifier trained on a collection of 1000 offensive images and 1000 non-offensive images. Experiments carried out have shown that both approaches are effective, even though they rely on simple algorithms for extracting evidence related to the images.
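A sketch of the colour-based branch of such a classifier: a coarse colour histogram per image fed to an SVM. The histogram size, kernel and stand-in training data are assumptions, not the configuration used in the dissertation.

```python
import numpy as np
from sklearn.svm import SVC

def colour_histogram(image_rgb, bins=8):
    """Flattened, normalised 3-D colour histogram of an RGB image (HxWx3, uint8)."""
    hist, _ = np.histogramdd(image_rgb.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return (hist / hist.sum()).ravel()

# Stand-in training set: random images in place of the labelled Web images.
rng = np.random.default_rng(3)
images = rng.integers(0, 256, size=(40, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)            # 1 = offensive, 0 = non-offensive
features = np.array([colour_histogram(im) for im in images])

classifier = SVC(kernel="rbf", probability=True).fit(features, labels)
print(classifier.predict_proba(features[:3]))
```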