1 |
Analysis of forest areas by advanced remote sensing systems based on hyperspectral and LIDAR data. Dalponte, Michele. January 2010.
Forest management is an important and complex process, with significant implications for the environment (e.g. protection of biological diversity, climate mitigation) and the economy (e.g. estimation of timber volume for commercial usage). Efficient management requires very detailed knowledge of forest attributes such as species composition, tree stem volume, height, etc. Hyperspectral and LIDAR remote sensing data can provide useful information for the identification of these attributes: hyperspectral data, with their dense sampling of the spectral signatures, are important for the classification of tree species, while LIDAR data are important for the study and estimation of quantitative forest parameters (e.g. stem height, volume).
This thesis presents novel systems for the exploitation of hyperspectral and LIDAR data in the forest application domain. In particular, the novel contributions to the existing literature concern both the development of new systems for data processing and the analysis of the potential of these data in forestry. In greater detail, the main contributions of this thesis are: i) an empirical analysis of the relationship between spectral resolution, classifier complexity and classification accuracy in the study of complex forest areas. This analysis is very important for the design of future sensors and the better exploitation of existing ones; ii) a novel system for the fusion of hyperspectral and LIDAR remote sensing data in the classification of forest areas. The proposed system exploits the complementary information of these data in order to obtain accurate and precise classification maps; iii) an analysis of the usefulness of different LIDAR returns and channels (elevation and intensity) in the classification of forest areas; iv) an empirical analysis of the use of multireturn LIDAR data for the estimation of tree stem volume. This study investigates in detail the potential of variables extracted from LIDAR returns (up to four) for the estimation of tree stem volume; v) a novel system for the estimation of single-tree stem diameter and volume from multireturn LIDAR data. A comparative analysis of three different variable selection methods and three different estimation algorithms is also presented; vi) a system for the fusion of hyperspectral and LIDAR remote sensing data in the estimation of tree stem diameters. This system can exploit hyperspectral and LIDAR data both jointly and separately: this is very important, as the experimental analysis carried out with this system shows that hyperspectral data can be used for rough estimates of stem diameters when LIDAR data are not available.
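To make the fusion idea of contribution ii) concrete, the sketch below stacks per-pixel hyperspectral bands with LiDAR-derived variables and trains a single classifier on the joint feature vector. It is a minimal feature-level illustration on synthetic data, not the thesis's actual fusion system; all class statistics, feature counts and the two-species setup are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Toy feature-level fusion: stack per-pixel hyperspectral bands with
# LiDAR-derived variables (e.g. height, intensity) and classify jointly.
rng = np.random.default_rng(1)
n = 600
hyper = np.vstack([rng.normal(m, 1.0, size=(n // 2, 30)) for m in (0.0, 0.6)])
lidar = np.vstack([rng.normal(h, 1.0, size=(n // 2, 2)) for h in (18.0, 24.0)])
labels = np.repeat([0, 1], n // 2)        # two hypothetical tree species

fused = np.hstack([hyper, lidar])         # complementary features, stacked
Xtr, Xte, ytr, yte = train_test_split(fused, labels, random_state=0)
clf = SVC().fit(Xtr, ytr)
print("fused-feature accuracy:", round(clf.score(Xte, yte), 3))
```

In this toy setup the LiDAR height feature separates classes whose spectra overlap, which is exactly the complementarity a fused system exploits.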
The effectiveness of all the proposed systems is confirmed by quantitative and qualitative experimental results.
|
2 |
Trajectory Analysis for Event Detection in Ambient Intelligence Applications. Piotto, Nicola. January 2011.
The automatic understanding of human activity is probably one of the most challenging problems for the scientific community. Several application domains would benefit from such analysis: from context-aware computing to area monitoring and surveillance, to assistive technologies for the elderly or disabled, and more.
In a broad sense, we can define activity analysis as the problem of finding an explanation coherent with a set of observations. These observations are typically influenced by several factors from different disciplines, such as sociology or psychology, but also mathematics and physics, making the problem particularly hard. In recent years, the computer vision community has also focused its attention on this area, producing the latest advances in the acquisition and understanding of human motion data from image sequences. Despite the increasing effort spent in this field, there still exists a considerable gap between the numerical low-level pixel information that can be observed and measured and the high abstraction level of the semantics that describe a given activity. In other words, there exists a conceptual ambiguity between the image sequence observations and their possible interpretations. Although several factors are involved, the activity model and the comparison strategy play crucial roles. In this work, a correlation between an activity and the corresponding path is assumed.
In light of this, the work carried out tackles two strictly related issues: (i) obtaining a proper representation of human activity; (ii) defining an effective tool for reliably measuring the similarity between activity instances. In particular, the object activity is modeled with a signature obtained through a symbolic abstraction of its spatio-temporal trace, enabling the application of high-level reasoning for computing the activity similarity. This representation is particularly effective since it provides a smart way to compensate for the noise artifacts coming from low-level modules (i.e., tracking algorithms), while also offering interesting properties such as invariance to shift, rotation, and scale factors. Since any complex task may be decomposed into a limited set of atomic units corresponding to elementary motion patterns, the key idea of this representation is to capture the object activities by suitably representing their trajectories through symbols. This syntactic activity description relies on the extraction and symbolic coding of meaningful samples of the path, while the similarity between trajectories is computed using so-called approximate matching, thus casting the trajectory comparison problem into a string-matching one.
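As a rough illustration of symbolic coding plus approximate matching, the sketch below quantizes step headings into an eight-symbol alphabet and compares the resulting strings with the Levenshtein distance. The alphabet size and coding rule are assumptions made for the example; the thesis's actual signature extraction differs.

```python
import numpy as np

def encode_trajectory(points, n_symbols=8):
    """Symbolically code a 2-D trajectory: quantize the heading of each
    step into one of n_symbols direction bins (here, 8 compass sectors)."""
    pts = np.asarray(points, dtype=float)
    steps = np.diff(pts, axis=0)
    angles = np.arctan2(steps[:, 1], steps[:, 0])            # in [-pi, pi]
    bins = np.floor((angles + np.pi) / (2 * np.pi) * n_symbols).astype(int)
    bins = np.clip(bins, 0, n_symbols - 1)
    return "".join(chr(ord("a") + b) for b in bins)

def edit_distance(s, t):
    """Classic Levenshtein distance: the approximate-matching core that
    casts trajectory comparison to string comparison."""
    d = np.zeros((len(s) + 1, len(t) + 1), dtype=int)
    d[:, 0] = np.arange(len(s) + 1)
    d[0, :] = np.arange(len(t) + 1)
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + cost)
    return int(d[-1, -1])

# Two similar walks and one diverging walk
walk_a = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 2)]
walk_b = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
walk_c = [(0, 0), (0, 1), (0, 2), (-1, 3), (-2, 4)]
sa, sb, sc = (encode_trajectory(w) for w in (walk_a, walk_b, walk_c))
print(sa, sb, sc)
print("a-b:", edit_distance(sa, sb), " a-c:", edit_distance(sa, sc))
```

The similar walks yield a smaller distance than the diverging one, which is the behaviour the comparison tool relies on.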
Another representation scheme has also been adopted, coding the signature according to relevant spots in the environment: in this case, the structural pattern information is coded in ad-hoc Context-Free Grammars, and the matching problem is solved by parsing the incoming string according to the defined rules.
|
3 |
Design and Analysis of Load-Balancing Switch with Finite Buffers and Variable Size Packets. Audzevich, Yury. January 2009.
As the traffic volume on the Internet increases exponentially, so does the demand for fast switching of packets between asynchronous high-speed routers. Although optical fiber can provide extremely high capacity, Internet switches still remain the main traffic bottleneck. The packet switching time may shrink to nanoseconds in routers with thousands of ports, each operating at 10 Gb/s. Even modern, extremely fast processing units cannot satisfy these needs. It is well known that switching such a high volume of traffic from input to output requires large buffers and fast processors to perform header processing, complex scheduling and forwarding functions. Although a large number of switching architectures are available on the market, a considerable part of them are either not scalable or reach their limits in power consumption and complexity.
Therefore, it is essential to investigate novel and extremely scalable switching systems.
The load-balancing switching approach is simple and may therefore be capable of performing switching and forwarding from all inputs to all outputs simultaneously, with low complexity and high scalability. Since this simple approach has a distributed topology (each component of the switch is controlled by an individual chip) and does not require fast switch control units, primarily because each stage is independent and performs its own distributed calculations, it is a strong candidate for future practical deployment.
The load-balancing switching architecture considered in this thesis is proven to have high potential to scale up while maintaining good throughput and other performance characteristics. Additionally, the load-balancing switching architecture can effectively resolve the important problem of packet mis-ordering, which can appear due to the distributed structure of the system. Unfortunately, in previous research some of these characteristics were obtained under a set of strong assumptions. In particular, it was assumed that all packets transmitted through the system have equal length, that traffic is admissible, and that the central-stage buffers are infinite. Moreover, due to its distributed control, the switch is unable to regulate the amount of traffic transmitted from stage to stage inside the switch.
This Ph.D. thesis analyzes the behavior of the load-balancing (LB) switch equipped with finite central-stage buffers, in which a packet can always be dropped due to buffer overflow. We first analyze the packet loss probability in the central-stage buffers for packets of the same length (data cells). The analysis is performed for both admissible and inadmissible traffic matrices. The obtained results show that packet loss can have a significant influence on the overall LB switch performance if the inputs of the switch are overloaded.
In order to consider a more realistic scenario, the packet loss analysis was then performed in a switch with variable-size packets. Most Internet switches operate at the cell level (to increase buffer utilization), meaning that arriving variable-size packets are segmented at the inputs and reassembled at the outputs. Cell loss, and correspondingly packet loss, inside the switch can introduce significant downstream problems at the load-balancing switch reassembly unit. In order to evaluate packet loss, we assumed Markovian behavior so that numerically efficient algorithms could be used to solve the model. The mathematical model characterizing inhomogeneous input traffic presented in the thesis gives the most precise evaluation of the packet loss probability. Unfortunately, the high complexity of this model results in intractable Markov chains even for very small switches. Consequently, as a next step, we performed the analysis with fast solution procedures under the restrictive assumption of identical stochastic processes at all inputs. The final results allowed us to conclude that a single cell drop at the central-stage buffers causes the removal of the whole packet, and that the packet loss probability inside the system can be extremely high in comparison with the corresponding cell loss. Another important issue observed in the analysis is the difference in packet loss probabilities depending on the traffic path, e.g. the sequential number of the input, central-stage buffer and output of the switch. This property complicates the evaluation of loss probabilities for large switch sizes. Last but not least, our analysis revealed instability, congestion and large delays at the output re-sequencing and reassembly unit caused by central-stage packet loss.
In order to cope with this behavior, we propose novel algorithms that efficiently minimize or avoid packet loss at the central-stage buffers of the switch. The loss-minimization protocol introduces an artificial buffering threshold at the central-stage buffers, such that a packet is dropped at the input stage whenever the actual central-stage buffer occupancy is above the threshold. The results show that, thanks to this possible packet removal at the input stage of the switch, the overall packet loss probability is significantly reduced. Similarly to the loss-minimization protocol, the novel NoLoss load-balancing switch uses information from both the inputs and the central-stage buffers, and allows a packet transmission through the switch only if the central-stage buffers have enough space to accept it during the current and following time slots. In order to minimize communication overheads, the algorithm was implemented by means of a centralized controller. This kind of management reaches the lower bound on the overall packet loss probability and resolves other important issues of the switch, such as the congestion problem at the output reassembly unit.
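A toy single-buffer simulation can illustrate why the threshold protocol helps: without it, cells of a packet doomed by a mid-packet overflow still occupy the buffer and waste service slots, while the early whole-packet drop keeps the buffer free for packets that can complete. Buffer size, packet length and load below are arbitrary assumptions, and the model ignores the multi-stage structure of the real switch.

```python
import random

def simulate(threshold, buf=32, pkt_len=8, load=1.1, slots=200_000, seed=1):
    """Toy model of one central-stage buffer fed with segmented packets.
    Each packet is pkt_len cells; a cell lost to overflow mid-packet
    spoils the whole packet, but its already-buffered cells still waste
    buffer space and service slots.  With the threshold protocol the
    packet is dropped entirely at the input stage whenever occupancy is
    above the threshold, so no doomed cells enter the buffer."""
    random.seed(seed)
    occ = 0                                  # cells currently buffered
    sent = lost = 0                          # packets delivered / lost
    for _ in range(slots):
        if random.random() < load / pkt_len:     # a packet arrives
            if occ > threshold:                  # early whole-packet drop
                lost += 1
            elif occ + pkt_len > buf:            # mid-packet cell overflow
                occ = min(buf, occ + pkt_len)    # doomed cells still enter
                lost += 1
            else:
                occ += pkt_len
                sent += 1
        if occ > 0:
            occ -= 1                             # one cell served per slot
    return lost / max(sent + lost, 1)

for thr in (32, 24):  # thr == buf disables the protocol
    print(f"threshold {thr}: packet loss {simulate(thr):.4f}")
```

Under overload, the run without the threshold loses more packets because doomed cells steal capacity from packets that could otherwise be delivered intact.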
|
4 |
Service-Aware Performance Optimization of Wireless Access Networks. Ben Halima, Nadhir. January 2009.
The Internet was originally designed to offer best-effort data transport over a wired network, with end machines using a layered network protocol stack to provide mainly reliability and quality of service for end-user applications. However, the proliferation of wireless end devices and the demand for sophisticated mobile multimedia applications has forced the networking research community to think about new design methodologies. In fact, such applications are characterized not only by high data-rate requirements but also, when state-of-the-art video encoding techniques are considered, by significant variability of the data rate over time due to scene dynamics. They are especially challenging due to the time-varying transmission characteristics of the wireless channel and the dynamic quality of service (QoS) requirements of the application (e.g., prioritized delivery of important units, variable bit rate, and variable tolerance to bit or packet errors).
One key issue in improving multimedia transmission quality is to jointly consider the characteristics of video applications and of wireless networks. Traditional approaches, in which the characteristics of the video application and of the wireless network are treated in isolation, lead to suboptimal use of resources. Cross-layer design, also known as cross-layering, is a new paradigm in network design that not only takes into account the dependencies and interactions among the layers of the Open Systems Interconnection (OSI) stack, but also attains a global optimization of the layer-specific parameters. However, most existing cross-layer designs for Quality of Service (QoS) provisioning in multimedia communications aim mainly at either improving network throughput or reducing power consumption, regardless of the end-to-end quality of the multimedia transmission. Therefore, application-driven cross-layer design for multimedia communication systems needs to be extensively investigated.
Following an extensive study of the performance bounds and limitations of the state of the art in this research area, we argue that performance improvement of multimedia applications over wireless access networks can be achieved by considering application-specific requirements, also called service- or context-awareness. Indeed, we designed two cross-layer schemes, called CORREC and SARC, for Wi-Fi and 3G networks respectively. We show that further performance improvement can be achieved by tuning ARQ and HARQ strength, respectively, based on the application requirements and the protocol stack operation on the mobile terminal.
On the other hand, the Transmission Control Protocol (TCP), which accounts for over 95% of Internet traffic, shows poor performance in the wireless domain. We propose a novel approach aiming at TCP performance improvement in WLAN networks. It consists of a joint optimization of the ARQ schemes operating at the transport and link layers, using a cross-layer approach called ARQ proxy for Wi-Fi networks.
|
5 |
Advanced methods for the analysis of multispectral and multitemporal remote sensing images. Zanetti, Massimo. January 2017.
The increasing availability of new-generation remote sensing satellite multispectral images provides an unprecedented source of information for Earth observation and monitoring. Multispectral images can now be collected at high resolution covering (almost) all land surfaces with extremely short revisit times (up to a few days), making the mapping of global changes possible. Extracting useful information from such a huge amount of data requires the systematic use of automatic techniques in almost all applicative contexts. In some cases, strict application requirements force the practitioner to adopt strongly data-driven approaches in the development of the processing chain. As a consequence, the exact relationship between the theoretical models adopted and the physical meaning of the solutions is sometimes hidden in the data analysis techniques, or not clear at all. Although this is not a limitation for the success of the application itself, it makes it difficult to transfer the knowledge learned from one specific problem to another. In this thesis we focus mainly on this aspect and propose a general mathematical framework for the representation and analysis of multispectral images. The proposed models are then used in the applicative context of change detection. Here, the generality of the proposed models allows us both to: (1) provide a mathematical explanation of existing methodologies for change detection, and (2) extend them to more general cases to address problems of increasing complexity. Typical spatial/spectral properties of last-generation multispectral images emphasize the need for more flexible models for image representation. In fact, classical change detection methods that worked well on previous generations of multispectral images provide sub-optimal results due to their poor capability of modeling all the complex spectral/spatial detail available in last-generation products. The theoretical models presented in this thesis aim at giving more degrees of freedom in the representation of the images. The effectiveness of the proposed novel approaches and related techniques is demonstrated in several experiments involving both synthetic datasets and real multispectral images. Here, the improved flexibility of the adopted models allows a better representation of the data and is consistently followed by a substantial improvement in change detection performance.
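One of the classical methodologies that frameworks of this kind explain and generalize is change vector analysis (CVA), where a pixel is declared changed when the magnitude of its spectral difference vector exceeds a threshold. The sketch below shows this baseline on synthetic data; it is not the thesis's model, only the starting point such models build upon, and the threshold and image statistics are invented for the example.

```python
import numpy as np

def change_vector_analysis(img_t1, img_t2, threshold):
    """Classical CVA baseline on a pair of co-registered multispectral
    images of shape (rows, cols, bands): pixels whose spectral change
    vector has magnitude above a threshold are flagged as changed."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.linalg.norm(diff, axis=-1)   # per-pixel change magnitude
    return magnitude > threshold, magnitude

# Synthetic 2-band example: a changed square in an otherwise stable scene
rng = np.random.default_rng(0)
t1 = rng.normal(100, 2, size=(64, 64, 2))
t2 = t1 + rng.normal(0, 2, size=t1.shape)
t2[20:40, 20:40, :] += 25                       # simulated land-cover change
mask, mag = change_vector_analysis(t1, t2, threshold=15)
print("changed pixels:", mask.sum(), "of", mask.size)
```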
|
6 |
Novel Methods based on the Fusion of Multisensor Remote Sensing Data for Accurate Forest Parameter Estimation. Paris, Claudia. January 2016.
In the last decade, the increasing availability of high-resolution remote sensing data has enabled precision forestry, which aims to obtain a precise reconstruction of the forest at stand, sub-stand or individual tree level. This calls for techniques tailored to such new data that can achieve accurate forest parameter estimation. Moreover, in this context the integration of multiple remote sensing data sources leads to a more comprehensive representation of the forest structure. Accordingly, the goal of this thesis is the development of novel methods for the automatic estimation of forest parameters that can exploit the different properties of multiple remote sensing data sources. The thesis provides five main novel contributions to the state of the art. The first contribution addresses the problem of single tree crown segmentation in multilayered forests using very high-density multireturn LiDAR data. The aim of the proposed method is to fully exploit the potential of these data to detect and delineate the single tree crowns of both dominant and sub-dominant trees through a hierarchical 3-D segmentation technique applied directly in the point-cloud space. The second contribution regards the estimation of the diameter at breast height (DBH) of each individual tree using high-density LiDAR data. The proposed data-driven method extensively exploits the information provided by the high-resolution data to model the main environmental variables that can affect stem growth in terms of crown structure, topography and forest density. The third contribution proposes a 3-D model-based approach to the reconstruction of the tree-top height by fusing low-density LiDAR data and high-resolution optical images. The geometrical structure of the tree is reconstructed via a properly defined parametric model that drives the fusion of the data. Indeed, when high-resolution LiDAR data are not available, the integration of different remote sensing data sources represents a valid solution for improving the parameter estimation. In this context, the fourth contribution addresses the fusion of low-density airborne LiDAR data and terrestrial LiDAR data to perform localized forest analysis. The proposed technique automatically registers the two LiDAR point clouds using the spatial pattern of the forest in order to integrate the data and automatically estimate the crown parameters. The fusion of the LiDAR point clouds leads to a more comprehensive representation of the 3-D structure of the crowns. Finally, we introduce a sensor-driven domain adaptation method for the classification of forest areas sharing similar properties but located in different areas. The proposed method takes advantage of the availability of multiple remote sensing data sources to detect feature subspaces where the data manifolds are partially (or completely) aligned.
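For intuition about individual-tree detection from LiDAR, the sketch below rasterizes a synthetic point cloud into a canopy height model and flags local maxima as tree tops. This raster-based baseline is far simpler than the hierarchical 3-D point-cloud segmentation of the first contribution, and every parameter in it (cell size, minimum height, the conical crown shapes) is an assumption for the example.

```python
import numpy as np

def canopy_height_model(points, cell=1.0):
    """Rasterize a LiDAR point cloud (x, y, height-above-ground) into a
    canopy height model by keeping the highest return per grid cell."""
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    chm = np.zeros((ix.max() + 1, iy.max() + 1))
    np.maximum.at(chm, (ix, iy), z)
    return chm

def tree_tops(chm, min_height=2.0):
    """Flag cells that dominate their 3x3 neighbourhood: a crude stand-in
    for the hierarchical 3-D segmentation used in the thesis."""
    pad = np.pad(chm, 1, constant_values=-np.inf)
    neigh = np.stack([pad[1+di:pad.shape[0]-1+di, 1+dj:pad.shape[1]-1+dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if (di, dj) != (0, 0)])
    return (chm > neigh.max(axis=0)) & (chm >= min_height)

# Two synthetic conical crowns plus measurement noise
rng = np.random.default_rng(3)
xy = rng.uniform(0, 20, size=(4000, 2))
height = np.maximum(15 - 2*np.hypot(xy[:, 0]-6,  xy[:, 1]-6), 0) \
       + np.maximum(12 - 2*np.hypot(xy[:, 0]-14, xy[:, 1]-13), 0)
pts = np.column_stack([xy, height + rng.normal(0, .1, len(xy))])
tops = tree_tops(canopy_height_model(pts))
print("detected tree tops:", int(tops.sum()))
```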
Qualitative and quantitative experimental results obtained on large forest areas confirm the effectiveness of the methods developed in this thesis, which allow an improvement in terms of accuracy when compared to other state-of-the-art methods.
|
7 |
Advanced methods for change detection in VHR multitemporal SAR images. Marin, Carlo. January 2015.
Change detection aims at identifying possible changes in the state of an object or phenomenon by jointly observing data acquired at different times over the same geographical area. In this context, the repetitive coverage and high quality of remotely sensed images acquired by Earth-orbiting satellites make such data an ideal information source for change detection. Among the different kinds of Earth-observation systems, here we focus on Synthetic Aperture Radar (SAR). Differently from optical sensors, SAR is able to regularly monitor the Earth's surface independently of cloud cover or sunlight illumination, which makes SAR data very attractive from an operational point of view. A new generation of SAR systems such as TerraSAR-X, TanDEM-X and COSMO-SkyMed, which are able to acquire data with Very High geometrical Resolution (VHR), has opened attractive new opportunities to study dynamic phenomena that occur on the Earth's surface. Nevertheless, the large amount of geometrical detail has brought several challenging data-analysis issues that should be addressed. Indeed, even though several techniques have been developed in the literature for the automatic analysis of multitemporal low- and medium-resolution SAR data, they are poorly effective when dealing with VHR images. In detail, in this thesis we develop advanced change detection methods that properly exploit the characteristics of VHR SAR images: i) an approach to building change detection, based on a novel theoretical model of backscattering that describes the appearance of new or fully collapsed buildings. The use of a fuzzy rule set allows, in real scenarios, an efficient and effective detection of new/collapsed buildings among several other sources of change; ii) a change detection approach for the identification of damage in urban areas after catastrophic events such as earthquakes or tsunamis. The approach is based on two steps: first, the most damaged urban areas over a large territory are detected by analyzing high-resolution stripmap SAR images. These areas drive the acquisition of new VHR spotlight images, which are used in the second step of the approach to accurately identify collapsed buildings; iii) an approach for surveillance applications. The proposed strategy detects the changes of interest over important sites such as ports and airports by performing a hierarchical multiscale analysis of the multitemporal SAR images based on a wavelet decomposition technique; iv) an approach to multitemporal primitive detection. The approach, based on the Bayesian rule for compound classification integrated in a fuzzy inference system, takes advantage of the multitemporal correlation of image pairs in order both to improve the detection of the primitives and to identify the changes in their state. For each of the above-mentioned topics, an analysis of the state of the art is carried out, the limitations of existing methods are pointed out, and the proposed solutions to the considered problems are described in detail. Experimental results on simulated and real remote sensing data are provided to show and confirm the validity of each of the proposed methods.
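To give a flavour of the multiscale analysis in approach iii), the sketch below computes the standard log-ratio of two synthetic speckled SAR intensities and inspects it across a dyadic approximation pyramid, a crude Haar-style stand-in for the wavelet decomposition actually used. The threshold, image statistics and single-look speckle model are invented for the example.

```python
import numpy as np

def log_ratio(sar_t1, sar_t2, eps=1e-6):
    """Log-ratio operator: the standard comparator for multitemporal SAR,
    turning multiplicative speckle into additive noise."""
    return np.log((sar_t2 + eps) / (sar_t1 + eps))

def multiscale_pyramid(image, levels=3):
    """Dyadic approximation pyramid (Haar low-pass) as a stand-in for the
    wavelet decomposition driving the hierarchical analysis."""
    scales = [image]
    for _ in range(levels):
        im = scales[-1]
        im = im[:im.shape[0]//2*2, :im.shape[1]//2*2]
        scales.append(0.25 * (im[0::2, 0::2] + im[1::2, 0::2]
                              + im[0::2, 1::2] + im[1::2, 1::2]))
    return scales

# Synthetic speckled intensity pair with one changed block
rng = np.random.default_rng(5)
base = np.full((128, 128), 50.0)
t1 = base * rng.exponential(1.0, base.shape)      # single-look speckle
t2 = base.copy(); t2[40:70, 60:100] *= 4          # backscatter increase
t2 = t2 * rng.exponential(1.0, base.shape)
for lvl, s in enumerate(multiscale_pyramid(log_ratio(t1, t2))):
    changed = np.abs(s - np.median(s)) > 1.0       # simple per-scale test
    print(f"scale {lvl}: flagged {changed.mean():.1%} of pixels")
```

At full resolution the speckle overwhelms the simple test, while at coarser scales the flagged fraction converges toward the true changed area, which is the rationale for analyzing the scales jointly.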
|
8 |
Statistical and deterministic approaches for multimedia forensics. Pasquini, Cecilia. January 2016.
The increasing availability and pervasiveness of multimedia data in our society is plain to see. As a result of globalization and worldwide connectivity, people from all over the planet exchange ever-increasing amounts of images, videos and audio recordings on a daily basis. Coupled with easy access to user-friendly editing software, this poses a number of problems related to the reliability and trustworthiness of such content, as well as its potential malevolent use. For this reason, the research field of multimedia forensics focuses on the development of forensic tools for verifying the authenticity of multimedia data. The hypothesis of the pristine status of images, videos or audio tracks is called into question and can be rejected if traces of manipulation are detected with a certain degree of confidence. In this framework, studying the traces left by any operation that could have been employed to process the data, either for malicious purposes or simply to improve their content or presentation, is of interest for a comprehensive forensic analysis. The goal of this doctoral study is to contribute to the field of multimedia forensics by exploiting intrinsic statistical and deterministic properties of multimedia data. In this respect, much work has been devoted to the study of JPEG compression traces in digital images, resulting in the development of several innovative approaches. Indeed, some of the main related research problems have been addressed, and solutions based on statistical properties of digital images have been proposed. In particular, the problem of identifying traces of JPEG compression in images that have been decompressed and saved in uncompressed formats has been extensively studied, resulting in the design of novel statistical detectors. Given their enormous practical relevance, digital images in JPEG format have also been considered: a novel method for discriminating images compressed only once from those compressed more than once has been developed and tested on a variety of images and forensic scenarios. Since the potential presence of intelligent counterfeiters is increasingly being studied, innovative counter-forensic techniques against JPEG compression, based on smart reconstruction strategies, are also proposed. Finally, we explore the possibility of defining and exploiting deterministic properties related to a given processing operation in the forensic analysis. In this respect, we present a first approach targeted at detecting, in one-dimensional data, a common data-smoothing operation: the median filter. A peculiarity of this method is its ability to provide a deterministic response on the presence of median filtering traces in the data under investigation.
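The thesis derives a deterministic decision rule for median filtering; as a loose, purely illustrative substitute, the sketch below measures the classical "streaking" footprint (runs of equal adjacent samples) that a median filter imprints on 1-D data. This statistic is not the thesis's detector, only a well-known symptom of the same operation.

```python
import numpy as np

def streak_ratio(signal):
    """Fraction of adjacent equal samples ('streaking'): a classical
    footprint of median filtering in 1-D data.  Illustrative only, not
    the deterministic detector developed in the thesis."""
    return float(np.mean(np.diff(signal) == 0))

def median_filter_1d(x, w=3):
    """Plain window-w median filter, interior samples only."""
    k = w // 2
    return np.array([np.median(x[i-k:i+k+1]) for i in range(k, len(x)-k)])

rng = np.random.default_rng(4)
raw = rng.normal(0, 1, 2000)
filt = median_filter_1d(raw, w=5)
print(f"streak ratio, raw:      {streak_ratio(raw):.3f}")   # ~0 for noise
print(f"streak ratio, filtered: {streak_ratio(filt):.3f}")  # clearly > 0
```

For continuous-valued noise the raw ratio is essentially zero, so any substantial streaking is strong evidence that a smoothing operation of this family was applied.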
|
9 |
Active and Passive Multimedia Forensics. Conotter, Valentina. January 2011.
Thanks to their huge expressive capability, coupled with the widespread use of the Internet and of affordable, high-quality cameras and computers, digital multimedia nowadays represent one of the principal means of communication. Besides the many benefits, the wide proliferation of such content has led to problematic issues regarding its authenticity and security. To cope with such problems, the scientific community has focused its attention on digital forensic techniques.
The objective of this doctoral study is to actively contribute to this field of research by developing efficient techniques to protect digital content and verify its integrity.
Digital watermarking was initially proposed as a valuable instrument to prove content ownership, protect copyright and verify integrity by imperceptibly embedding a message into a document. Such a message can later be detected and used to disclose possible copyright violations or manipulations. For specific applications, such as copyright protection, the watermark is required to be as robust as possible, surviving any attack a malevolent user may apply. In light of this, we developed a novel watermarking benchmarking tool able to evaluate the robustness of watermarking techniques under the attack of multiple processing operators. On the other hand, for specific applications such as forensics and medicine, the robustness requirement is superseded by integrity preservation. To this end, fragile watermarking has been developed, under the assumption that the watermark is modified whenever tampering occurs, so that its absence can be taken as evidence of manipulation. Within this class of techniques, we developed a prediction-based reversible watermarking algorithm, which allows a perfect recovery of both the original content and the watermark.
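The thesis's algorithm is prediction-based; the closely related difference-expansion principle sketched below conveys the reversibility idea: each pixel pair hides one bit in an expanded difference and can be restored exactly. Overflow control and location maps, mandatory in practice, are deliberately omitted, and the pixel values are arbitrary.

```python
import numpy as np

def de_embed(pixels, bits):
    """Difference-expansion reversible watermarking (Tian-style) on pixel
    pairs: one payload bit per pair, fully invertible."""
    out = pixels.astype(np.int64).copy()
    for k, w in enumerate(bits):
        a, b = out[2*k], out[2*k + 1]
        l, h = (a + b) // 2, a - b
        h = 2 * h + w                       # expand difference, hide bit
        out[2*k], out[2*k + 1] = l + (h + 1) // 2, l - h // 2
    return out

def de_extract(pixels, n_bits):
    """Recover the payload and restore the original pixels exactly."""
    orig = pixels.astype(np.int64).copy()
    bits = []
    for k in range(n_bits):
        a, b = orig[2*k], orig[2*k + 1]
        l, h = (a + b) // 2, a - b
        bits.append(int(h % 2))
        h //= 2                             # undo the expansion
        orig[2*k], orig[2*k + 1] = l + (h + 1) // 2, l - h // 2
    return bits, orig

host = np.array([120, 118, 130, 131, 95, 97, 140, 140], dtype=np.int64)
payload = [1, 0, 1, 1]
marked = de_embed(host, payload)
recovered_bits, restored = de_extract(marked, len(payload))
print("payload ok:", recovered_bits == payload,
      "| host restored:", np.array_equal(restored, host))
```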
More recently, passive forensic approaches, which work in the absence of any watermark or special hardware, have been proposed for authentication purposes. The basic idea is that the manipulation of a digital medium, if performed properly, may not leave any visual trace of its occurrence, but it alters the statistics of the content. Without any prior knowledge about the content, such alterations can be revealed and taken as evidence of forgery. We focused our study on geometry-based forensic techniques for both image and video authentication. First, we proposed a method for authenticating text on signs and billboards, based on the assumption that text on a planar surface is imaged under perspective projection, but is unlikely to satisfy this geometric mapping when manipulated. Finally, we proposed a novel geometric technique to detect physically implausible trajectories of objects in video sequences. This technique explicitly models the three-dimensional trajectory of objects in free flight and the corresponding two-dimensional projection onto the image plane. Deviations from this model provide evidence of manipulation.
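A stripped-down version of the free-flight check can be sketched by fitting the observed image trajectory with a constant-velocity horizontal component and a quadratic vertical component, then using the residual as evidence: under the (assumed) weak-perspective simplification a genuine projectile fits well, while an edited path does not. The full method models the 3-D trajectory and its 2-D projection explicitly; all numbers below are synthetic.

```python
import numpy as np

def ballistic_residual(t, x, y):
    """Fit the observed image trajectory with a simplified free-flight
    model (linear x(t), quadratic y(t) under gravity) and return the RMS
    deviation; perspective effects are neglected in this sketch."""
    X = np.polyfit(t, x, 1)      # x(t) ~ x0 + vx*t
    Y = np.polyfit(t, y, 2)      # y(t) ~ y0 + vy*t + 0.5*g*t^2
    rx = x - np.polyval(X, t)
    ry = y - np.polyval(Y, t)
    return np.sqrt(np.mean(rx**2 + ry**2))

t = np.linspace(0, 1, 30)
x_true = 40 * t
y_true = 5 + 100 * t - 120 * t**2                  # synthetic projectile path
rng = np.random.default_rng(6)
genuine = ballistic_residual(t, x_true + rng.normal(0, .3, t.size),
                                y_true + rng.normal(0, .3, t.size))
tampered_y = y_true.copy()
tampered_y[18:] += 6                               # object path edited
tampered = ballistic_residual(t, x_true, tampered_y)
print(f"genuine RMS: {genuine:.2f} px | tampered RMS: {tampered:.2f} px")
```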
|
10 |
Advanced Spectral and Spatial Techniques for Hyperspectral Image Analysis and Classification. Falco, Nicola. January 2015.
Recent advances in sensor technology have led to an increased availability of hyperspectral remote sensing images with high spectral and spatial resolution. These images are composed of hundreds of contiguous spectral channels covering a wide spectral range, in which each pixel contains a highly detailed representation of the reflectance of the materials present on the ground, along with a better characterization in terms of geometrical detail. The wealth of informative content conveyed by hyperspectral images permits an improved characterization of different land covers. At the same time, it significantly increases the complexity of the analysis, introducing a series of challenges that need to be addressed, such as the computational complexity and resources required. This dissertation aims at defining novel strategies for the analysis and classification of hyperspectral remote sensing images, placing the focus on the investigation and optimisation of techniques for the extraction and integration of spectral and spatial information. In the first part of the thesis, a thorough study of the spectral information contained in hyperspectral images is presented. Although independent component analysis (ICA) has been widely used to address several tasks in the remote sensing field, such as feature reduction, spectral unmixing and classification, its use for extracting class-discriminant information remains a research topic open to further investigation. To this end, a detailed study of the performance of different ICA algorithms is carried out, highlighting their strengths and weaknesses in the hyperspectral image classification task. Based on this study, a novel approach for feature reduction is proposed, in which the use of ICA is optimised for the extraction of class-specific information. In the second part of the thesis, the spatial information is exploited by employing operators from the mathematical morphology framework. Morphological operators, such as attribute profiles and their multi-channel and multi-attribute extensions, have proven effective in modelling spatial information; however, open issues remain, such as the high feature dimensionality, the high intrinsic information redundancy and the a priori need for parameter tuning in filtering. Addressing the first two issues, reduced attribute profiles are introduced in this thesis as an optimised version of the morphological attribute profiles, with the property of compressing all the meaningful geometrical information into a few features. Regarding the filter parameter tuning issue, an innovative strategy for automatic threshold selection is proposed. Inspired by the concept of granulometry, the proposed approach defines a novel granulometric characteristic function, which provides information on the image decomposition according to a given measure. The approach exploits the tree representation of an image, allowing us to avoid additional filtering steps prior to the threshold selection and making the process computationally efficient. The outcome of this dissertation advances the state of the art by proposing novel methodologies for accurate hyperspectral image classification, whose effectiveness is confirmed by extensive experimentation on various real hyperspectral datasets. Concluding the thesis, insightful and concrete remarks on the aforementioned issues are discussed.
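As a minimal illustration of ICA-based feature reduction, the sketch below recovers a few independent components from synthetically mixed "spectra" using scikit-learn's FastICA; the reduced components would then feed a classifier. Source counts, distributions and the noise level are assumptions, and the thesis's optimised class-discriminant extraction goes well beyond this plain usage.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic "hyperspectral" matrix: each pixel spectrum is a mixture of a
# few independent source signatures plus noise (a toy stand-in for real data).
rng = np.random.default_rng(0)
n_pixels, n_bands, n_sources = 2000, 100, 4
sources = rng.laplace(size=(n_pixels, n_sources))        # non-Gaussian sources
mixing = rng.normal(size=(n_sources, n_bands))
X = sources @ mixing + 0.05 * rng.normal(size=(n_pixels, n_bands))

# ICA as feature reduction: from n_bands correlated channels down to a few
# statistically independent components usable as classifier inputs.
ica = FastICA(n_components=n_sources, random_state=0)
features = ica.fit_transform(X)                          # (n_pixels, n_sources)
print("reduced feature matrix:", features.shape)
```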
|