
Noise-limited scene-change detection in images

Irie, Kenji (January 2009)
This thesis describes the theoretical, experimental, and practical aspects of a noise-limited method for scene-change detection in images. The research is divided into three sections: noise analysis and modelling, dual-illumination scene-change modelling, and integration of noise into the scene-change model.

The sources of noise within commercially available digital cameras are described, with a new model for image noise derived for charge-coupled device (CCD) cameras. The model is validated experimentally through the development of techniques that allow the individual noise components to be measured from the analysis of output images alone. A generic model for complementary metal-oxide-semiconductor (CMOS) cameras is also derived. Methods for the analysis of spatial (inter-pixel) and temporal (intra-pixel) noise are developed and used subsequently to investigate the effects of environmental temperature on camera noise. For the cameras tested, the results show that the CCD camera's noise response to variation in environmental temperature is complex, whereas the CMOS camera's response simply increases monotonically.

A new concept for scene-change detection is proposed based upon a dual-illumination model in which both direct and ambient illumination sources are present in an environment, as occurs in natural outdoor scenes with direct sunlight and ambient skylight. The transition of pixel colour from the combined direct and ambient illuminants to the ambient illuminant only is modelled. A method for shadow-free scene-change detection is then developed that predicts a pixel's colour when the corresponding area of the scene is subjected to ambient illumination only, allowing a pixel change to be classified as either a cast shadow or a genuine change in the scene. Experiments on images captured under controlled lighting demonstrate that 91% of scene changes and 83% of cast shadows are correctly determined from analysis of pixel colour change alone.

Finally, a statistical method for detecting shadow-free scene-change is developed by bounding the dual-illumination model with the confidence interval associated with each pixel's noise. Three benefits arise from integrating noise into the scene-change detection method:
- the necessity of pre-filtering images for noise is removed;
- all empirical thresholds are removed; and
- performance is improved.
The noise-limited scene-change detection algorithm correctly classifies 93% of scene changes and 87% of cast shadows from pixel colour change alone. When simple post-analysis size-filtering is applied, both figures increase to 95%.
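To make the shadow test concrete, the sketch below illustrates the general idea of bounding an ambient-only colour prediction by a pixel-noise confidence interval. It is a minimal illustration, not the model developed in the thesis: the per-channel `ambient_ratio`, the Gaussian noise bound `k * noise_sigma`, and both function names are assumptions made for this example.

```python
import numpy as np

def ambient_only_prediction(pixel_rgb, ambient_ratio):
    """Predict a pixel's colour under ambient illumination only.

    `ambient_ratio` is an assumed per-channel ratio of ambient to combined
    (direct + ambient) illumination; in practice this would come from the
    dual-illumination model rather than a single constant.
    """
    return pixel_rgb * ambient_ratio

def classify_change(reference_rgb, observed_rgb, ambient_ratio, noise_sigma, k=2.0):
    """Label a changed pixel as a cast shadow or a genuine scene change.

    If the observed colour lies within k standard deviations of the
    ambient-only prediction (a confidence interval set by the pixel noise),
    the change is attributed to a cast shadow.
    """
    predicted_shadow = ambient_only_prediction(reference_rgb, ambient_ratio)
    residual = np.abs(observed_rgb - predicted_shadow)
    return "shadow" if np.all(residual <= k * noise_sigma) else "scene_change"

# Example: a pixel that darkens roughly by the ambient ratio is flagged as shadow.
reference = np.array([200.0, 180.0, 160.0])
observed = np.array([82.0, 75.0, 70.0])
print(classify_change(reference, observed, ambient_ratio=0.4, noise_sigma=4.0))
```

The point of bounding the model by the pixel noise, as the abstract notes, is that no hand-tuned change threshold is needed; the decision limit follows directly from the measured noise.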

Instance Segmentation of Multiclass Litter and Imbalanced Dataset Handling : A Deep Learning Model Comparison / Instanssegmentering av kategoriserat skräp samt hantering av obalanserat dataset

Sievert, Rolf (January 2021)
Instance segmentation has great potential for addressing the problem of littering by autonomously detecting and segmenting different categories of litter. With this information, litter could, for example, be geotagged to aid litter pickers or to give precise location information to unmanned vehicles for autonomous litter collection.

Land-based litter instance segmentation is a relatively unexplored field, and this study compares the instance segmentation models Mask R-CNN and DetectoRS on the multiclass litter dataset Trash Annotations in Context (TACO), evaluated with the Common Objects in Context (COCO) precision and recall metrics. TACO is an imbalanced dataset, so imbalanced-data handling is addressed using a second-order-relation iterative stratified split and, additionally, oversampling when training Mask R-CNN.

Mask R-CNN achieved a segmentation mAP of 0.127 without oversampling and 0.163 with oversampling. DetectoRS achieved a segmentation mAP of 0.167 and most noticeably improves the segmentation mAP of small objects, by a factor of at least 2, which matters in the litter domain since small objects such as cigarettes are overrepresented. In contrast, oversampling with Mask R-CNN does not appear to improve the general precision on small and medium objects, but only improves the detection of large objects. It is concluded that DetectoRS improves results compared to Mask R-CNN, as does oversampling.

However, using a dataset in which not all classes can be represented in the train, validation, and test splits, together with an iterative stratification that does not guarantee all-class representation, makes exact comparisons with this study difficult for future work. Results over all categories are therefore approximate, since 12 categories are missing from the test set, 4 of which were impossible to split across the train, validation, and test sets. Further image collection and annotation to mitigate the imbalance would improve results most noticeably, since results depend on class-averaged values. Applying oversampling to DetectoRS would also help. There is also the option of combining the TACO and MJU-Waste datasets to allow training on more categories.
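As a rough illustration of oversampling against class imbalance, the sketch below computes per-image repeat factors from category frequencies, so that images containing rare litter categories are sampled more often during training. This is a hypothetical example in the spirit of repeat-factor sampling, not the thesis's actual oversampling or iterative stratification procedure; the function `repeat_factors`, its `threshold` parameter, and the toy annotation dictionary are all assumptions.

```python
import math
from collections import Counter

def repeat_factors(image_labels, threshold=0.5):
    """Per-image oversampling factors for an imbalanced multiclass dataset.

    image_labels maps an image id to the set of category names annotated in
    that image. Categories appearing in fewer than `threshold` of the images
    pull the repeat factor of their images above 1, so rare categories are
    seen more often during training.
    """
    n_images = len(image_labels)
    # Number of images in which each category appears.
    freq = Counter(cat for cats in image_labels.values() for cat in set(cats))
    cat_factor = {cat: max(1.0, math.sqrt(threshold / (count / n_images)))
                  for cat, count in freq.items()}
    # An image is repeated according to its rarest category.
    return {img: (max(cat_factor[c] for c in cats) if cats else 1.0)
            for img, cats in image_labels.items()}

# Toy, hypothetical annotation set: "cigarette" and "can" are rare.
labels = {
    "img1": {"bottle", "cigarette"},
    "img2": {"bottle"},
    "img3": {"bottle"},
    "img4": {"can"},
}
print(repeat_factors(labels))
# e.g. {'img1': 1.41..., 'img2': 1.0, 'img3': 1.0, 'img4': 1.41...}
```

In practice, factors like these would be passed to a weighted or repeat-based sampler in the training data loader rather than used to duplicate files on disk.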
