  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Advanced methods for change detection in VHR multitemporal SAR images

Marin, Carlo January 2015
Change detection aims at identifying possible changes in the state of an object or phenomenon by jointly observing data acquired at different times over the same geographical area. In this context, the repetitive coverage and high quality of remotely sensed images acquired by Earth-orbiting satellites make such data an ideal information source for change detection. Among the different kinds of Earth-observation systems, here we focus on Synthetic Aperture Radar (SAR). Unlike optical sensors, SAR can regularly monitor the Earth surface independently of cloud cover or sunlight illumination, making SAR data very attractive from an operational point of view. A new generation of SAR systems such as TerraSAR-X, TanDEM-X and COSMO-SkyMed, which are able to acquire data with a Very High geometrical Resolution (VHR), has opened attractive new opportunities to study dynamic phenomena that occur on the Earth surface. Nevertheless, the large amount of geometrical detail has brought several challenging data-analysis issues that need to be addressed. Indeed, even though several techniques have been developed in the literature for the automatic analysis of multitemporal low- and medium-resolution SAR data, they are poorly effective when dealing with VHR images. In this thesis we therefore aim at developing advanced change-detection methods able to properly exploit the characteristics of VHR SAR images. i) An approach to building change detection. The approach is based on a novel theoretical model of backscattering that describes the appearance of new or fully collapsed buildings. The use of a fuzzy rule set allows, in real scenarios, an efficient and effective detection of new/collapsed buildings among several other sources of change. ii) A change detection approach for the identification of damage in urban areas after catastrophic events such as earthquakes or tsunamis.
The approach is based on two steps: first, the most damaged urban areas over a large territory are detected by analyzing high-resolution stripmap SAR images; these areas then drive the acquisition of new VHR spotlight images, which are used in the second step to accurately identify collapsed buildings. iii) An approach for surveillance applications. The proposed strategy detects the changes of interest over important sites such as ports and airports by performing a hierarchical multiscale analysis of the multitemporal SAR images based on a wavelet decomposition technique. iv) An approach to multitemporal primitive detection. The approach, based on the Bayesian rule for compound classification integrated in a fuzzy inference system, takes advantage of the multitemporal correlation of image pairs both to improve the detection of the primitives and to identify changes in their state. For each of the above-mentioned topics, an analysis of the state of the art is carried out, the limitations of existing methods are pointed out, and the proposed solutions to the considered problems are described in detail. Experimental results on simulated and real remote sensing data are provided to show and confirm the validity of each of the proposed methods.
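A common pixel-level baseline behind SAR change detection (not the thesis's actual fuzzy-rule or wavelet-based methods, which go well beyond it) is the log-ratio operator. The sketch below applies it to toy 3x3 intensity images; all values are invented:

```python
import math

def log_ratio_change_map(img1, img2, threshold=0.7):
    """Binary change map from two co-registered SAR intensity images
    using the classic log-ratio operator: pixels whose |log(I2/I1)|
    exceeds the threshold are marked as changed. A small epsilon
    avoids division by zero."""
    eps = 1e-6
    return [
        [1 if abs(math.log((p2 + eps) / (p1 + eps))) > threshold else 0
         for p1, p2 in zip(row1, row2)]
        for row1, row2 in zip(img1, img2)
    ]

# Toy intensities: one pixel brightens strongly (e.g., a new building),
# another changes only slightly (should not be flagged).
before = [[1.0, 1.0, 1.0],
          [1.0, 1.0, 1.0],
          [1.0, 1.0, 1.0]]
after_ = [[1.0, 1.0, 1.0],
          [1.0, 8.0, 1.0],
          [1.0, 1.1, 1.0]]
cmap = log_ratio_change_map(before, after_)
print(cmap)  # only the strongly brightened pixel is flagged
```

The logarithm makes the operator symmetric with respect to increases and decreases of backscatter and compresses the multiplicative speckle typical of SAR data.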
52

Statistical and deterministic approaches for multimedia forensics

Pasquini, Cecilia January 2016
The increasing availability and pervasiveness of multimedia data in our society is plain to see. As a result of globalization and worldwide connectivity, people from all over the planet exchange ever larger amounts of images, videos and audio recordings on a daily basis. Coupled with easy access to user-friendly editing software, this poses a number of problems related to the reliability and trustworthiness of such content, as well as its potential malevolent use. For this reason, the research field of multimedia forensics focuses on the development of forensic tools for verifying the authenticity of multimedia data. The hypothesis of pristine status of images, videos or audio tracks is called into question and can be rejected if traces of manipulation are detected with a certain degree of confidence. In this framework, studying the traces left by any operation that could have been employed to process the data, either for malicious purposes or simply to improve their content or presentation, is of interest for a comprehensive forensic analysis. The goal of this doctoral study is to contribute to the field of multimedia forensics by exploiting intrinsic statistical and deterministic properties of multimedia data. In this respect, much work has been devoted to the study of JPEG compression traces in digital images, resulting in the development of several innovative approaches. Indeed, some of the main related research problems have been addressed, and solutions based on statistical properties of digital images have been proposed. In particular, the problem of identifying traces of JPEG compression in images that have been decompressed and saved in uncompressed formats has been extensively studied, resulting in the design of novel statistical detectors. Given their enormous practical relevance, digital images in JPEG format have also been considered.
A novel method aimed at discriminating between images compressed only once and images compressed more than once has been developed and tested on a variety of images and forensic scenarios. Since the potential presence of intelligent counterfeiters is increasingly being studied, innovative counter-forensic techniques targeting JPEG compression, based on smart reconstruction strategies, are also proposed. Finally, we explore the possibility of defining and exploiting deterministic properties related to a certain processing operation in the forensic analysis. In this respect, we present a first approach targeted at detecting, in one-dimensional data, a common data-smoothing operation: the median filter. A peculiarity of this method is its ability to provide a deterministic response on the presence of median filtering traces in the data under investigation.
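To give a flavour of median-filter detection, the sketch below uses a simple statistical cue (not the thesis's deterministic detector): median filtering tends to produce "streaks" of equal adjacent samples, so the fraction of zero first-order differences is inflated in filtered signals. All sample values are invented:

```python
def median_filter(signal, w=3):
    """1-D median filter with odd window size w; edges are handled
    by replicating the boundary samples."""
    h = w // 2
    padded = [signal[0]] * h + list(signal) + [signal[-1]] * h
    return [sorted(padded[i:i + w])[h] for i in range(len(signal))]

def zero_diff_ratio(signal):
    """Fraction of adjacent sample pairs that are exactly equal.
    Median filtering produces 'streaks' of constant values,
    which inflates this ratio."""
    pairs = list(zip(signal, signal[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

noisy = [4, 7, 4, 4, 9, 4, 4, 8, 4, 4, 6, 4]   # impulsive-looking toy signal
filtered = median_filter(noisy)                 # impulses removed, streaks appear
print(zero_diff_ratio(noisy))     # few equal neighbours (3/11)
print(zero_diff_ratio(filtered))  # fully streaked here
```

A real detector would of course work on far longer signals and calibrate a decision threshold on this (or a stronger) statistic.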
53

Active and Passive Multimedia Forensics

Conotter, Valentina January 2011
Thanks to their huge expressive capability, coupled with the widespread use of the Internet and of affordable, high-quality cameras and computers, digital multimedia nowadays represent one of the principal means of communication. Besides the many benefits, the wide proliferation of such content has led to problematic issues regarding its authenticity and security. To cope with such problems, the scientific community has focused its attention on digital forensic techniques. The objective of this doctoral study is to actively contribute to this field of research by developing efficient techniques to protect digital contents and verify their integrity. Digital watermarking was initially proposed as a valuable instrument to prove content ownership, protect copyright and verify integrity by imperceptibly embedding a message into a document. Such a message can later be detected and used to disclose possible copyright violations or manipulations. For specific applications, such as copyright protection, the watermark is required to be as robust as possible, surviving any attack a malevolent user may be willing to apply. In light of this, we developed a novel watermarking benchmarking tool able to evaluate the robustness of watermarking techniques under the attack of multiple processing operators. On the other hand, for applications such as forensics and medicine, the robustness requirement is overtaken by integrity preservation. To this aim, fragile watermarking has been developed, under the assumption that the watermark is modified whenever a tampering occurs, so that its absence can be taken as evidence of manipulation. Within this class of techniques, we developed a prediction-based reversible watermarking algorithm, which allows a perfect recovery of both the original content and the watermark.
More recently, passive forensic approaches, which work in the absence of any watermark or special hardware, have been proposed for authentication purposes. The basic idea is that the manipulation of a digital medium, if performed properly, may not leave any visual trace of its occurrence, but it alters the statistics of the content. Without any prior knowledge about the content, such alterations can be revealed and taken as evidence of forgery. We focused our study on geometric-based forensic techniques for both image and video authentication. First, we proposed a method for authenticating text on signs and billboards, based on the assumption that text on a planar surface is imaged under perspective projection, but is unlikely to satisfy such a geometric mapping when manipulated. Finally, we proposed a novel geometric technique to detect physically implausible trajectories of objects in video sequences. This technique explicitly models the three-dimensional trajectory of objects in free flight and the corresponding two-dimensional projection onto the image plane. Deviations from this model provide evidence of manipulation.
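The fragile-watermarking idea can be illustrated with a deliberately minimal LSB scheme (not the thesis's prediction-based reversible algorithm; pixel values and watermark bits below are made up). The watermark lives in the least significant bits, so edits that disturb those bits are localized and flagged:

```python
def embed_fragile(pixels, watermark_bits):
    """Embed one watermark bit into the least significant bit (LSB)
    of each pixel value."""
    return [(p & ~1) | b for p, b in zip(pixels, watermark_bits)]

def verify_fragile(pixels, watermark_bits):
    """Return the indices where the extracted LSB disagrees with the
    expected watermark bit, i.e., where tampering is suspected."""
    return [i for i, (p, b) in enumerate(zip(pixels, watermark_bits))
            if (p & 1) != b]

pixels = [120, 37, 201, 64, 90, 155]   # toy 8-bit grayscale values
wm = [1, 0, 1, 1, 0, 1]                # toy watermark
marked = embed_fragile(pixels, wm)
print(verify_fragile(marked, wm))      # intact image: no mismatches

tampered = marked.copy()
tampered[2] = 198                      # a local manipulation
print(verify_fragile(tampered, wm))    # tampered pixel is flagged
```

Real fragile schemes use checksums or cryptographic hashes rather than raw bits, so that any pixel change (not just LSB changes) invalidates the mark; this sketch only shows the embed/verify mechanics.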
54

Advanced Spectral and Spatial Techniques for Hyperspectral Image Analysis and Classification

Falco, Nicola January 2015
Recent advances in sensor technology have led to an increased availability of hyperspectral remote sensing images with high spectral and spatial resolution. These images are composed of hundreds of contiguous spectral channels covering a wide spectral range of frequencies, in which each pixel contains a highly detailed representation of the reflectance of the materials present on the ground, together with a better characterization in terms of geometrical detail. The wealth of informative content conveyed by hyperspectral images permits an improved characterization of different land covers. At the same time, it significantly increases the complexity of the analysis, introducing a series of challenges that need to be addressed, such as the computational complexity and resources required. This dissertation aims at defining novel strategies for the analysis and classification of hyperspectral remote sensing images, with the focus placed on the investigation and optimisation of techniques for the extraction and integration of spectral and spatial information. In the first part of the thesis, a thorough study of the analysis of the spectral information contained in hyperspectral images is presented. Although independent component analysis (ICA) has been widely used to address several tasks in the remote sensing field, such as feature reduction, spectral unmixing and classification, its employment in extracting class-discriminant information remains a research topic open to further investigation. To this end, an in-depth study of the performance of different ICA algorithms is carried out, highlighting their strengths and weaknesses in the hyperspectral image classification task. Based on this study, a novel approach for feature reduction is proposed, in which the use of ICA is optimised for the extraction of class-specific information. In the second part of the thesis, the spatial information is exploited by employing operators from the mathematical morphology framework.
Morphological operators, such as attribute profiles and their multi-channel and multi-attribute extensions, have proved effective in modelling spatial information; they suffer, however, from open issues such as high feature dimensionality, high intrinsic information redundancy and the a-priori need for parameter tuning in filtering. Addressing the first two issues, reduced attribute profiles are introduced in this thesis as an optimised version of the morphological attribute profiles, with the property of compressing all the meaningful geometrical information into a few features. Regarding the filter parameter tuning issue, an innovative strategy for automatic threshold selection is proposed. Inspired by the concept of granulometry, the proposed approach defines a novel granulometric characteristic function, which provides information on the image decomposition according to a given measure. The approach exploits the tree representation of an image, allowing us to avoid additional filtering steps prior to the threshold selection and making the process computationally effective. The outcome of this dissertation advances the state of the art by proposing novel methodologies for accurate hyperspectral image classification; the results obtained by extensive experimentation on various real hyperspectral data sets confirm their effectiveness. The thesis concludes with insightful and concrete remarks on the aforementioned issues.
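The granulometry idea behind the threshold-selection strategy can be sketched in a much-simplified form. The toy version below works on a flat list of connected-component areas (rather than the tree representation used in the thesis): for each candidate area threshold, it measures how much image content would survive an area opening, and drops in the curve reveal the dominant object scales. All areas are invented:

```python
def granulometric_curve(component_areas, thresholds):
    """For each area threshold t, return the total image area that
    survives an area opening with parameter t (connected components
    smaller than t pixels are removed). Large drops between
    consecutive thresholds mark dominant object scales and can guide
    automatic threshold selection."""
    return [sum(a for a in component_areas if a >= t) for t in thresholds]

# Hypothetical component areas from a segmented scene:
# many small objects (~5 px), a few medium (~50 px), one large (~500 px).
areas = [5, 6, 4, 5, 52, 48, 500]
ts = [1, 10, 100, 1000]
curve = granulometric_curve(areas, ts)
print(curve)  # drops in the curve separate the three object scales
```

On a component tree, the same quantity can be read off the stored node attributes without re-filtering the image for every threshold, which is what makes the tree-based approach computationally effective.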
55

Energy-Efficient Medium Access Control Protocols and Network Coding in Green Wireless Networks

Palacios-Trujillo, Raul January 2014
Wireless networks are nowadays a popular means of communication in the daily social and business activities of many users. However, current estimates indicate that wireless networks are expected to contribute significantly to the rapidly increasing energy consumption and carbon emissions of the Information and Communication Technologies (ICT) sector. Crucial factors behind this trend are the continuous growth of wireless network infrastructure, coupled with the increased number of user wireless devices equipped with various radio interfaces and batteries of extremely limited capacity (e.g., smartphones). The serious problem of energy consumption in wireless networks is mainly related to the current standard designs of wireless technologies. These approaches are based on a stack of protocol layers that aims to maximize performance-related metrics, such as throughput or Quality of Service (QoS), while paying less attention to energy efficiency. Although the focus has recently shifted to energy efficiency, most existing wireless solutions achieve energy savings at the cost of some performance degradation. This thesis aims at contributing to the evolution of green wireless networks by exploring new approaches for energy saving at the Medium Access Control (MAC) protocol layer, and by combining these with the integration of the Network Coding (NC) paradigm into the wireless network protocol stack for further energy savings. The main contributions of the thesis are divided into two parts. The first part focuses on the design, performance analysis and evaluation of novel energy-efficient distributed and centralized MAC protocols for Wireless Local Area Networks (WLANs). The second part turns the focus to the design, performance analysis and evaluation of new NC-aware energy-efficient MAC protocols for wireless ad hoc networks.
The key idea of the proposed mechanisms is to enable multiple data exchanges (with or without NC data) among wireless devices and to allow them to dynamically turn their radio transceivers on and off (i.e., duty cycling) during periods of no transmission and reception (i.e., when they would otherwise be listening or overhearing). Validation through analysis, computer-based simulation and experimentation on real hardware shows that the proposed MAC solutions can significantly improve both the throughput and the energy efficiency of wireless networks compared to the existing mechanisms of the IEEE 802.11 standard, whether used alone or combined with the NC approach. Furthermore, the results presented in this dissertation help in understanding the impact of the on/off transitions of radio transceivers on the energy efficiency of MAC protocols based on duty cycling. These radio transitions are shown to be critical when the time available for sleeping is comparable to the duration of the on/off radio transitions.
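The trade-off described in the last sentence can be illustrated with a toy energy model of a duty-cycled radio; all power and timing figures below are assumed for illustration only and do not come from the thesis:

```python
def radio_energy(t_active, t_sleep_available, p_active, p_sleep,
                 t_transition, p_transition, n_transitions):
    """Energy (in mJ, for ms and W inputs) of a duty-cycled radio:
    active time at active power, plus the sleep window at sleep power,
    except for the part of the window consumed by on/off transitions,
    which is charged at the transition power."""
    t_overhead = n_transitions * t_transition
    t_sleep = max(0.0, t_sleep_available - t_overhead)
    return (t_active * p_active + t_sleep * p_sleep
            + min(t_overhead, t_sleep_available) * p_transition)

# Assumed figures: 1 W active, 0.05 W asleep, 0.8 W during a 3 ms
# on/off transition; two transitions per sleep cycle.
always_on_10ms = 10.0 * 1.0                              # no duty cycling
short_sleep = radio_energy(2.0, 8.0, 1.0, 0.05, 3.0, 0.8, 2)
long_sleep = radio_energy(2.0, 98.0, 1.0, 0.05, 3.0, 0.8, 2)
print(always_on_10ms, short_sleep, long_sleep)
```

With an 8 ms sleep window (comparable to the 6 ms of transition overhead), the transitions dominate the budget and duty cycling saves relatively little; with a 98 ms window the sleep savings dwarf the transition cost, which matches the qualitative conclusion above.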
56

Bridging the gap between theory and implementation in cognitive networks: developing reasoning in today's networks

Facchini, Christian January 2011
Communication networks are becoming increasingly complex and dynamic. The commonly employed networking paradigm, on the other hand, has not changed over the years and, as a result, performs poorly in today's environments. Only very recently has a new paradigm, named cognitive networking, been devised with the objective of making networks more intelligent, thereby overcoming traditional limitations and potentially achieving better performance. According to this vision, networks should be able to monitor themselves, reason upon the environment, act towards the achievement of specific goals and learn from experience. Thus far, several cognitive network architectures have been conceived and proposed in the literature but, although researchers seem to agree on the need for a holistic approach, their architectures pursue such a global vision only in part, as they consider neither networks nor network nodes in their entirety. In the present work, we analyze the aspects to be tackled in order to enable this holistic view and propose to base reasoning on both intra- and inter-node interactions, with the ultimate aim of devising a complete cognitive network architecture. After a thorough analysis of the advantages and drawbacks of generic reasoning frameworks, we select the one most apt to form the basis on which to build the cognitive network we envision. We first formalize its application in network environments by determining the steps to follow in the process of equipping a traditional network with cognitive capabilities. Then, we shift the focus from design to implementation, identifying the problems that could be faced when realizing such a network and proposing a set of optional refinements that could be taken into account to further improve performance in some specific situations. Finally, we tackle the problem of reducing the time needed for the cognitive process to reason.
Validation through simulations shows that explicitly considering cross-layer intra- and inter-node interactions when reasoning has a twofold effect. First, it leads to better performance levels than those achievable by today's non-intelligent networks; second, it helps to better understand the existing causal relationships between variables in a network.
57

Advanced Pre-Processing and Change-Detection Techniques for the Analysis of Multitemporal VHR Remote Sensing Images

Marchesi, Silvia January 2011
Remote sensing images regularly acquired by satellites over the same geographical areas (multitemporal images) provide very important information on land cover dynamics. In recent years, the ever increasing availability of multitemporal very high geometrical resolution (VHR) remote sensing images (which have sub-metric resolution) has resulted in new potentially relevant applications related to environmental monitoring and land cover control and management. Most of these applications are associated with the analysis of dynamic phenomena (both anthropic and non-anthropic) that occur at different scales and result in changes on the Earth surface. In this context, in order to adequately exploit the huge amount of data acquired by remote sensing satellites, it is mandatory to develop unsupervised and automatic techniques for an efficient and effective analysis of such multitemporal data. In the literature, several techniques have been developed for the automatic analysis of multitemporal medium/high resolution data. However, these techniques are not effective when dealing with VHR images. The main reasons are their inability both to exploit the high geometrical detail content of VHR data and to model the multiscale nature of the scene (and therefore of possible changes). In this framework, it is important to develop unsupervised change-detection (CD) methods able to automatically manage the large amount of information in VHR data, without the need for any prior information on the area under investigation. Even if these methods usually identify only the presence/absence of changes, without giving information about the kind of change that occurred, they are considered the most interesting from an operational perspective, as in most applications no multitemporal ground truth information is available.
Considering the above-mentioned limitations, in this thesis we study the main problems related to multitemporal VHR images, with particular attention to registration noise (i.e., the noise related to a non-perfect alignment of the multitemporal images under investigation). Then, on the basis of the results of this analysis, we develop robust unsupervised and automatic change-detection methods. In particular, the following specific issues are addressed in this work: 1. Analysis of the effects of registration noise (RN) in multitemporal VHR images and definition of a method for estimating the distribution of such noise, useful for defining: a. change-detection techniques robust to RN; the proposed techniques are able to significantly reduce the false alarm rate due to RN that is raised by standard CD techniques when dealing with VHR images; b. effective registration methods; the proposed strategies are based on a multiscale analysis of the scene, which allows one to extract accurate control points for the registration of VHR images. 2. Detection and discrimination of multiple changes in multitemporal images; these techniques overcome the limitations of existing unsupervised techniques, as they are able to identify and separate different kinds of change without any prior information on the study areas. 3. Pre-processing techniques for optimizing change detection on VHR images; in particular, we evaluate the impact on the results of the CD process of: a. image transformation techniques; b. different strategies of image pansharpening applied to the original multitemporal images. For each of the above-mentioned topics, an analysis of the state of the art is carried out, the limitations of existing methods are pointed out, and the proposed solutions to the addressed problems are described in detail.
Finally, experimental results conducted on both simulated and real data are reported in order to show and confirm the validity of all the proposed methods.
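As an illustration of the pansharpening pre-processing step (point 3.b above), one simple and widely known scheme is the Brovey transform; the sketch below (not necessarily one of the strategies tested in the thesis) applies it to a single multispectral pixel with made-up values:

```python
def brovey_pansharpen(bands, pan):
    """Brovey-style pansharpening for one pixel stack: each
    low-resolution multispectral band is rescaled so that the band
    sum matches the high-resolution panchromatic intensity, thereby
    preserving the spectral ratios between bands."""
    total = sum(bands)
    return [b * pan / total for b in bands]

# One multispectral pixel (R, G, B) and the co-located pan value.
ms_pixel = [30.0, 60.0, 10.0]
pan_value = 150.0
sharp = brovey_pansharpen(ms_pixel, pan_value)
print(sharp)  # band ratios 3:6:1 preserved, total now equals pan
```

Because the transform rescales intensities while keeping band ratios, it injects spatial detail from the panchromatic channel but can distort absolute radiometry, which is exactly why its impact on the subsequent CD process is worth evaluating.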
58

Detection and Analysis Methods for Unmanned Aerial Vehicle Images

Moranduzzo, Thomas January 2015
Unmanned Aerial Vehicles (UAVs), commonly known as drones, are aerial platforms that are gaining great popularity in the remote sensing field. UAVs derive from military technology, but in the last few years they have also become established as reference platforms for civilian tasks. The main advantage of these acquisition systems lies in their simplicity of use: a UAV can be used when and where it is needed without excessive costs. Since UAVs can fly very close to the objects under investigation, they allow the acquisition of extremely high resolution (EHR) images in which items are described with a very high level of detail. The huge quantity of information contained in UAV images opens the way to novel applications but, at the same time, forces us to face new challenging problems at the methodological level. This thesis represents a modest but hopefully useful contribution towards making UAV images completely understood and easily processed and analyzed. In particular, the proposed methodological contributions include: i) two methods devoted to the automatic detection and counting of cars present in urban scenarios; ii) a complete processing chain which monitors traffic and estimates the speeds of moving vehicles; iii) a methodology which detects classes of objects by exploiting a nonlinear filter that combines image gradient features at different orders with Gaussian process (GP) modeling; iv) a novel strategy to "coarsely" describe extremely high resolution images using various representation and matching strategies. Experimental results conducted on real UAV images are presented and discussed. They show the validity of the proposed methods and suggest possible future improvements. Furthermore, they confirm that, despite the complexity of the considered images, the potential of UAV images is very wide.
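At its core, the speed-estimation step in contribution ii) amounts to converting a vehicle's pixel displacement between two frames into a ground distance and dividing by the frame interval. A minimal sketch with assumed (illustrative) sensor parameters, not the thesis's full processing chain:

```python
def estimate_speed_kmh(pos1_px, pos2_px, gsd_m, dt_s):
    """Estimate vehicle ground speed from its pixel positions in two
    frames: Euclidean pixel displacement, times the ground sampling
    distance (metres per pixel), divided by the time between frames,
    converted from m/s to km/h."""
    dx = pos2_px[0] - pos1_px[0]
    dy = pos2_px[1] - pos1_px[1]
    dist_m = (dx * dx + dy * dy) ** 0.5 * gsd_m
    return dist_m / dt_s * 3.6

# Assumed: 2 cm/pixel GSD, 0.5 s between frames, car tracked across
# a 100-pixel displacement.
speed = estimate_speed_kmh((100, 200), (160, 280), 0.02, 0.5)
print(round(speed, 1))  # km/h
```

In practice the hard part is the tracking itself (detecting the same car in consecutive, possibly unstabilized frames); the conversion above also assumes a nadir view and a known, uniform GSD.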
59

Innovative methods for the reconstruction of new generation satellite remote sensing images

Luca, Lorenzi January 2012
Remote sensing satellites have proved to be a helpful instrument. Indeed, satellite images have been successfully exploited to deal with several applications, including environmental monitoring and the prevention of natural disasters. In recent years, the increasing availability of very high spatial resolution (VHR) remote sensing images has resulted in new potentially relevant applications related to land cover control and environmental management. In particular, optical sensors may suffer from the presence of clouds and/or shadows. This leads to the problem of missing data, which can be a serious issue, especially in the case of VHR images. In this thesis, new methodologies for the detection and reconstruction of missing-data regions in VHR images are proposed and applied to areas contaminated by the presence of clouds and/or shadows. In particular, the proposed methodological contributions include: i) a multiresolution inpainting strategy to reconstruct cloud-contaminated images; ii) a new combination of radiometric information and spatial position information in two specific kernels to achieve a better reconstruction of cloud-contaminated regions by adopting a support vector regression (SVR) method; iii) the exploitation of compressive sensing theory, adopting three different strategies (orthogonal matching pursuit, basis pursuit and a genetic algorithm solution), for the reconstruction of cloud-contaminated images; iv) a complete processing chain which exploits a support vector machine (SVM) classification and morphological filters for the detection, and a linear regression for the reconstruction, of specific shadow areas; and v) several evaluation criteria capable of assessing the reconstructability of shadow areas. All of them are specifically developed to work with VHR images. Experimental results conducted on real data are reported in order to show and confirm the validity of all the proposed methods.
They all suggest that, despite the complexity of the problems, it is possible to satisfactorily recover missing areas obscured by clouds or shadows.
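The linear-regression reconstruction idea in contribution iv) can be illustrated with a minimal least-squares fit: a radiometric relation is learned on uncontaminated pixels and then used to predict the values hidden in the shadow region. The band values below are made up, and the actual method operates on image regions rather than toy lists:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b in pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical co-located samples from two bands over sunlit pixels;
# here they happen to follow y = 2*x + 5 exactly.
sunlit_band1 = [10.0, 20.0, 30.0, 40.0]
sunlit_band2 = [25.0, 45.0, 65.0, 85.0]
a, b = fit_line(sunlit_band1, sunlit_band2)

# Predict band-2 values for shadowed pixels from their band-1 values.
shadow_band1 = [15.0, 35.0]
reconstructed = [a * x + b for x in shadow_band1]
print(reconstructed)
```

Real data would of course not be noise-free, and the regression would be trained per land-cover class on pixels statistically similar to the shadowed region.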
60

Discrimination of Computer Generated versus Natural Human Faces

Dang Nguyen, Duc Tien January 2014
The development of computer graphics technologies has brought increasing realism to computer generated multimedia data, e.g., scenes, human characters and other objects, allowing them to achieve a very high quality level. However, these synthetic objects may be used to create situations that are not present in the real world, hence raising the demand for advanced tools for differentiating between real and artificial data. Indeed, since 2005 the multimedia forensics research community has been developing methods to identify computer generated multimedia data, focusing mainly on images. However, most of them do not achieve very good performance on the problem of identifying CG characters. The objective of this doctoral study is to develop efficient techniques to distinguish between computer generated and natural human faces. We focused our study on geometric-based forensic techniques, which exploit the structure of the face and its shape, proposing methods for both image and video forensics. First, we proposed a method to differentiate between computer generated and photographic human faces in photos: based on an estimation of the face asymmetry, a given photo is classified as computer generated or not. Second, we introduced a method to distinguish between computer generated and natural faces based on facial expression analysis. In particular, small variations of the facial shape models corresponding to the same expression are used as evidence of synthetic characters. Finally, by exploiting the differences between face models over time, we can identify synthetic animations, since their models are usually recreated or follow repeated patterns, compared to the models of natural animations.
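One possible asymmetry cue, in a deliberately simplified form: mirror each landmark on one side of the face across the vertical symmetry axis and measure how far it lands from its counterpart. The landmark coordinates below are invented, and the thesis's actual asymmetry estimation and classification are considerably more elaborate:

```python
def asymmetry_score(landmark_pairs, axis_x):
    """Mean distance between each left-side facial landmark and the
    mirror image of its right-side counterpart across the vertical
    axis x = axis_x. Suspiciously perfect symmetry (a near-zero
    score) can hint at a computer generated face, since real faces
    show mild natural asymmetry."""
    score = 0.0
    for (lx, ly), (rx, ry) in landmark_pairs:
        mirrored_rx = 2 * axis_x - rx  # reflect right point to the left side
        score += ((lx - mirrored_rx) ** 2 + (ly - ry) ** 2) ** 0.5
    return score / len(landmark_pairs)

# Landmark pairs (eye corners, mouth corners) for two hypothetical
# faces, symmetry axis at x = 50.
cg_face = [((30.0, 40.0), (70.0, 40.0)), ((35.0, 70.0), (65.0, 70.0))]
real_face = [((30.0, 40.0), (71.5, 41.0)), ((34.0, 70.5), (65.0, 69.0))]
print(asymmetry_score(cg_face, 50.0))    # perfectly symmetric
print(asymmetry_score(real_face, 50.0))  # mildly asymmetric
```

In practice the symmetry axis itself must be estimated from the face, and a classifier (rather than a fixed threshold) would be trained on such scores.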
