31

Advanced regression and detection methods for remote sensing data analysis

Castelletti, Davide January 2017 (has links)
Nowadays the analysis of remote sensing data for environmental monitoring is fundamental to understanding local and global Earth dynamics. In this context, the main goal of this thesis is to present novel signal processing methods for the estimation of biophysical parameters and for the analysis of icy terrains with active sensors. The thesis presents three main contributions. In the context of biophysical parameter estimation, we focus on regression methods. According to the analysis of the literature, most regression techniques require a substantial number of reference samples to model a robust regression function. However, in real-world applications ground-truth observations are limited, as their collection entails high operational costs. Moreover, the availability of biased samples may result in low estimation accuracy. To address these issues, this thesis proposes two novel contributions. The first is a method for the estimation of biophysical parameters that integrates theoretical models with empirical observations associated with a small number of in-situ reference samples. The proposed method computes and corrects deviations between the estimates obtained through the inversion of theoretical models and the empirical observations. The second is a semisupervised learning (SSL) method for regression defined in the context of the ε-insensitive SVR. The proposed SSL method aims to mitigate the problems of small, biased training sets by injecting prior information into the initial learning of the SVR function and by jointly exploiting labeled and unlabeled samples in the learning phase of the SVR. The third contribution of this dissertation addresses the clutter detection problem in radar sounder (RS) data. The capability to detect clutter is fundamental for the interpretation of subsurface features in the radargram.
In the state of the art, techniques have been presented that either require accurate information on the surface topography or exploit complex multi-channel radar sounder systems. In this thesis, we propose a novel method for clutter detection that is independent of ancillary information and limits the hardware complexity of the radar system. The method relies on the interferometric analysis of two-channel RS data and discriminates clutter from subsurface echoes by modeling the theoretical phase difference between the cross-track antennas of the RS. This allows the comparison of the phase-difference distributions of real and simulated data. Qualitative and quantitative experimental results obtained on real airborne SAR and RS data confirm the effectiveness of the proposed methods.
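The semisupervised regression idea of the second contribution can be sketched in a simple self-training form: fit on the few labeled samples, pseudo-label the unlabeled ones, and refit jointly. The snippet below is an illustrative assumption, not the method of the thesis — it substitutes kernel ridge regression for the ε-insensitive SVR for compactness, and uses a synthetic one-dimensional target in place of real biophysical data.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=2.0):
    # RBF kernel matrix between two sets of row samples.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_fit(X, y, lam=1e-3):
    # Kernel ridge regression (illustrative stand-in for the ε-insensitive SVR).
    K = rbf(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf(Xq, X) @ alpha

# Few labeled in-situ samples, many unlabeled satellite-derived feature vectors.
X_all = rng.uniform(0, 5, size=(200, 1))
y_all = np.sin(X_all).ravel()
labeled = rng.choice(200, size=15, replace=False)
unlabeled = np.setdiff1d(np.arange(200), labeled)

# Supervised step: fit on the small labeled set only.
f0 = kernel_fit(X_all[labeled], y_all[labeled])

# Semisupervised step: pseudo-label the unlabeled samples with f0 and refit
# jointly, letting the unlabeled data shape the regression function.
X_joint = np.vstack([X_all[labeled], X_all[unlabeled]])
y_joint = np.concatenate([y_all[labeled], f0(X_all[unlabeled])])
f1 = kernel_fit(X_joint, y_joint)

mse = float(np.mean((f1(X_all) - y_all) ** 2))
print(round(mse, 4))
```

In the thesis the prior information is injected into the SVR learning itself; here the pseudo-labeling step merely illustrates how unlabeled samples can enter the fit.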
32

Design, analysis, application and experimental assessment of algorithms for the synthesis of maximally sparse, planar, non-superdirective and steerable arrays

Tumolo, Roberto Michele January 2018 (has links)
This thesis deals with the problem of synthesizing planar, maximally sparse, steerable and non-superdirective array antennas by means of convex optimization algorithms, and with testing them on an existing array to assess its far-field performance in terms of requirements fulfilment. The choice of this topic is motivated by applications in which the power supply/consumption, the weight and the hardware/software complexity of the whole radiating system have a strong impact on the overall cost. On the other hand, reducing the number of elements has drawbacks as well (loss in directivity, which means a smaller coverage in radar applications, loss in robustness, etc.); the developed algorithms, however, can be used to find the acceptable trade-offs that inevitably arise when weighing the advantages and disadvantages of sparsification: it is only a matter of appropriately translating the requirements into convex form. The synthesis scheme is first described in detail in its generality, showing how the proposed techniques outperform several results existing in the literature and set new benchmarks. In particular, an important, innovative constraint is considered in the synthesis problem that prevents the selection of elements at distances below half a wavelength: non-superdirectivity. Moreover, an interesting result is derived and discussed: the reduction of the number of elements, considered versus the (maximum) antenna size, follows a decreasing trend as the latter increases. Afterwards the discussion focuses on an existing antenna for radar applications, showing how the proposed algorithms intrinsically return a single layout that works jointly in transmission and reception (two-way synthesis).
The results for the specific case considered (mainly the set of weights and relative positions) are first numerically validated with a full-wave software (Ansys HFSS) and then experimentally assessed in an anechoic chamber through measurements.
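A minimal sketch of sparsification by convex optimization in the spirit described above: ℓ1-norm minimization of real element weights on a half-wavelength candidate grid (so no two retained elements can be closer than λ/2, a crude proxy for the non-superdirectivity constraint), cast as a linear program. The cosine array factor, angles, sidelobe level and grid size are illustrative assumptions, not the formulation of the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Candidate element positions on a half-wavelength grid (lambda = 1).
N = 21
x = 0.5 * np.arange(N)                      # positions in wavelengths
k = 2 * np.pi                               # wavenumber for lambda = 1

theta_main = 0.0                            # broadside mainlobe
theta_side = np.deg2rad(np.linspace(20, 90, 60))   # sidelobe region
sll = 0.1                                   # sidelobe level: -20 dB

# Real-valued cosine array factor sampled at the constraint angles.
A_main = np.cos(k * x * np.sin(theta_main))                    # (N,)
A_side = np.cos(k * x[None, :] * np.sin(theta_side)[:, None])  # (60, N)

# l1 sparsification as an LP: w = u - v with u, v >= 0, minimize sum(u + v)
# subject to mainlobe gain = 1 and |sidelobes| <= sll.
c = np.ones(2 * N)
A_eq = np.concatenate([A_main, -A_main])[None, :]
b_eq = np.array([1.0])
A_ub = np.vstack([np.hstack([A_side, -A_side]),
                  np.hstack([-A_side, A_side])])
b_ub = np.full(2 * len(theta_side), sll)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (2 * N), method="highs")
w = res.x[:N] - res.x[N:]
active = np.flatnonzero(np.abs(w) > 1e-6)
print(len(active), "of", N, "candidate elements retained")
```

The ℓ1 norm is the standard convex surrogate for element count; the thesis's actual constraints (planar geometry, steering, two-way synthesis) would enter as additional convex terms.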
33

Novel Methods for Change Detection in Multitemporal Remote Sensing Images

Bertoluzza, Manuel January 2019 (has links)
The scope of this dissertation is to present and discuss novel paradigms and techniques for the extraction of information from long time series of remotely sensed images. Many images are acquired every day at high spatial and temporal resolution, and their availability keeps increasing with the number of acquiring sensors. Nowadays, many satellites orbit our planet and more launches are planned for the future; notable examples of currently operating remote sensing missions are the Landsat and Sentinel programs run by space agencies. This trend is accelerating every year with the launch of many commercial satellites, and initiatives such as cubesats propose a new paradigm for continuously monitoring the Earth's surface. The larger availability of remotely sensed data does not involve only space-borne platforms: in recent years new platforms, such as airborne unmanned vehicles, have gained popularity, also thanks to the reduction in the cost of these instruments. Overall, these phenomena are fueling the so-called Big Data revolution in remote sensing. The unprecedented number of images enables a large number of applications related to the monitoring of the environment on global and regional scales; a non-exhaustive list includes climate change assessment, disaster monitoring and urban planning. In this thesis, novel paradigms and techniques are proposed for the automatic exploitation of the information acquired by the growing number of remote sensing data sources, either multispectral or Synthetic Aperture Radar (SAR) sensors. New processing strategies are needed that can reliably and automatically extract information from the ever-growing amount of images. In this context, this thesis focuses on Change Detection (CD) techniques capable of identifying areas within remote sensing images where the land cover/land use has changed.
Indeed, CD is one of the first steps needed to understand the Earth's surface dynamics and evolution. Images from such long and dense time series carry redundant information, so the information extracted from one image or a single image pair in the series is correlated with that of other images or image pairs. This thesis explores mechanisms to exploit the temporal correlation within long image time series for improved information extraction; the concept is general and can be applied to any information extraction process. The thesis provides three main novel contributions to the state of the art. The first contribution is a novel framework for CD in image time series: the binary change variable is modeled as a conservative field and used to improve the bi-temporal CD map computed between a target pair of images extracted from the series. This framework takes advantage of the correlation among the changes detected between pairs of images in a long time series. The second contribution is an iterative approach that aims at improving the global CD performance for any possible pair of images defined within a time series. The results obtained by any bi-temporal technique, either binary or multiclass, are automatically validated against each other; by means of an iterative mechanism, the consistency of the changes is tested and enforced for every pair of images. The third contribution consists in the detection of clouds in long time series of multispectral images and in the restoration of the pixels they cover. The presence of clouds may strongly affect the automatic analysis of images and the performance of change detection techniques (or of other information extraction processes). In this contribution, the temporal information of long optical image time series is exploited to improve the identification of cloud-covered pixels and their restoration with respect to standard monotemporal approaches.
The effectiveness of the proposed approaches is demonstrated by experiments on synthetic and real multispectral and SAR images, accompanied by comprehensive qualitative and quantitative analyses.
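The idea of testing and enforcing consistency among the change maps of different image pairs can be illustrated on a toy two-class series: with only two land-cover classes, the binary change maps of a triplet must satisfy the transitivity rule C13 = C12 XOR C23 exactly. The sketch below is deliberately simplified (here the correcting pairs are assumed clean, whereas the thesis iterates over pairs that are all noisy); the data are synthetic labels, not real imagery.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class land-cover series at times t1, t2, t3.
labels = rng.integers(0, 2, size=(3, 32, 32))
C12 = labels[0] ^ labels[1]
C23 = labels[1] ^ labels[2]

# A noisy bi-temporal detection for the (t1, t3) pair: the true map with
# 15% of the pixels flipped at random.
C13_noisy = (labels[0] ^ labels[2]).astype(int)
flip = rng.random((32, 32)) < 0.15
C13_noisy[flip] ^= 1

# Consistency enforcement: replace the noisy pairwise map with the map
# implied by the other two pairs of the series.
C13_corrected = C12 ^ C23

truth = labels[0] ^ labels[2]
err_before = float(np.mean(C13_noisy != truth))
err_after = float(np.mean(C13_corrected != truth))
print(err_before, err_after)
```

With two classes the rule is exact, so the corrected map recovers the truth; in the multiclass, noisy setting of the thesis, consistency can only be enforced approximately and iteratively.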
34

Advanced classification methods for UAV imagery

Zeggada, Abdallah January 2018 (has links)
The rapid technological advancement recently achieved by remote sensing acquisition platforms has brought many benefits to automated territory control and monitoring. In particular, unmanned aerial vehicle (UAV) technology has drawn a lot of attention, providing an efficient solution especially in real-time applications. This is mainly due to their capacity to collect extremely high resolution (EHR) data over inaccessible areas and limited coverage zones, thanks to their small size and rapidly deployable flight capability, as well as their ease of use and affordability. The very high level of detail of the data acquired via UAVs, however, requires further treatment through suitable image processing and analysis approaches in order to be properly exploited. In this respect, the methodological contributions proposed in this thesis include: i) a complete processing chain that assists avalanche Search and Rescue (SAR) operations by scanning the UAV-acquired images over the avalanche debris in order to detect, in real time, victims buried under the snow and their related objects; ii) two multilabel deep learning strategies for coarsely describing extremely high resolution images of urban scenarios; iii) a novel multilabel conditional random field classification framework that simultaneously exploits spatial contextual information and the cross-correlation between labels; iv) a novel spatial and structured support vector machine for multilabel image classification, obtained by adding to the cost function of the structured support vector machine a term that enhances spatial smoothness within a one-step process. Experiments conducted on real UAV images are reported and discussed, alongside suggestions for potential future improvements and research lines.
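The effect of a spatial smoothness term like the one in contribution iv) can be emulated in a few lines: multilabel classifier margins computed per image tile are averaged with their 4-neighbourhood before thresholding, so isolated label flips between adjacent tiles are suppressed. The grid size, label set, random scores and the averaging scheme are illustrative assumptions, not the structured SVM of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy multilabel margins over a grid of UAV image tiles:
# scores[i, j, l] is the classifier margin for label l at tile (i, j).
H, W, L = 8, 8, 4
scores = rng.normal(size=(H, W, L))

# Average each tile's margins with its 4-neighbourhood (edge-padded),
# mimicking the spatial smoothness term added to the cost function.
padded = np.pad(scores, ((1, 1), (1, 1), (0, 0)), mode="edge")
neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
         padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
lam = 0.5                                   # weight of the spatial term
smoothed = (1 - lam) * scores + lam * neigh

labels_raw = scores > 0                     # independent per-tile decisions
labels_smooth = smoothed > 0                # spatially regularized decisions

def disagreements(lab):
    # Count label disagreements between horizontally/vertically adjacent tiles.
    return int((lab[:-1] != lab[1:]).sum() + (lab[:, :-1] != lab[:, 1:]).sum())

print(disagreements(labels_raw), disagreements(labels_smooth))
```

In the thesis the smoothness term enters the structured SVM objective and is optimized jointly with the classifier; the post-hoc averaging here only conveys the intuition.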
35

Resource Abstraction and Virtualization Solutions for Wireless Networks

Gebremariam, Anteneh Atumo January 2017 (has links)
To cope with the booming data traffic and to accommodate new and emerging technologies such as machine-type communications and the Internet of Things, the 5th Generation (5G) of mobile networks requires multiple complex operations (e.g., allocating non-overlapping radio resources, monitoring interference, etc.). Software-defined networking (SDN) and network function virtualization (NFV) are two emerging technologies that promise programmability and flexibility in managing, configuring and optimizing wireless networks so as to achieve better performance. In this dissertation, we focus in particular on inter-cell interference (ICI) mitigation techniques and on efficient radio resource utilization schemes enabled by the adoption of these two technologies in the wireless environment. We exploit the SDN approach to expose the lower-layer (i.e., physical and medium access control) parameters of the wireless protocol stack to a centralized control module, so that the network can be dynamically configured in a logically centralized manner through specifically designed network functions (algorithms). In the first part of this work, we propose two ICI mitigation solutions: one based on an Interference Graph (IG) abstraction technique to control ICI among macro base stations, and a second based on a dynamic strict fractional frequency reuse technique that overcomes the limitations of ICI in dense small-cell deployments, where ICI arises from frequency reuse one in multi-tier networks. Then, building on the fractional frequency reuse (FFR) technique, we propose spatial scheduling schemes that aim to schedule users in the spatial domain through layered schedulers operating on different time scales, short and long. The cell coverage area is dynamically divided into multiple scheduling areas based on the antenna beamwidth and on steerable signal-to-interference-plus-noise ratio (SINR) threshold values.
Simulation results show that the proposed approaches outperform legacy static FFR schemes in terms of spectral efficiency, aggregate throughput and packet blocking probability. Moreover, we provide a detailed analysis of the computational complexity of our algorithms in comparison to those existing in the literature. 5G networks will be built around people and things and are targeted to meet the requirements of different groups of use cases (i.e., massive broadband, massive machine-type communication and critical machine-type communication). Supporting these services with a separate dedicated network for each would be very costly and impractical. The most attractive solution, in terms of reducing cost while preserving backward compatibility, is the implementation of service-dedicated virtual networks, i.e., network slicing. We therefore propose a dynamic spectrum-level slicing algorithm to share radio resources across different virtual networks. Through virtualization, the physical radio resources of the heterogeneous mobile networks are first abstracted into a centralized pool of virtual radio resources; we then investigate the performance gains of the proposed algorithm when dynamically sharing the abstracted radio resources across multiple virtual networks. Simulation results show that, for representative user arrival statistics, dynamic allocation of radio resources significantly lowers the percentage of dropped packets. This work is also a preliminary step towards enabling end-to-end network slicing for 5G mobile networking, which is the basis for implementing service-differentiated virtual networks over a single physical infrastructure. Finally, we present a test-bed implementation of the dynamic spectrum-level slicing algorithm using OpenAirInterface, an open-source software/hardware platform that emulates the Long-Term Evolution (LTE) protocol stack.
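The advantage of dynamic over static spectrum-level slicing can be conveyed with a toy simulation: a pool of resource blocks is either split into fixed per-slice shares or reapportioned every interval in proportion to instantaneous demand, and unserved demand counts as dropped. Pool size, slice loads and the proportional-share rule are illustrative assumptions, not the algorithm of the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# A pool of physical resource blocks (RBs) shared by four virtual networks
# (slices) with different mean loads (RBs requested per scheduling interval).
RB_POOL, SLICES, STEPS = 100, 4, 500
demand = rng.poisson(lam=[10, 20, 30, 40], size=(STEPS, SLICES))

# Static slicing: every slice owns a fixed quarter of the pool.
static_share = np.full(SLICES, RB_POOL // SLICES)
dropped_static = np.maximum(demand - static_share, 0).sum()

# Dynamic slicing: the abstracted pool is re-divided each interval in
# proportion to the slices' instantaneous demand.
dyn = demand / demand.sum(axis=1, keepdims=True) * RB_POOL
dropped_dynamic = np.maximum(demand - np.floor(dyn), 0).sum()

print(int(dropped_static), int(dropped_dynamic))
```

The static split wastes RBs on underloaded slices while overloaded ones drop traffic; proportional reallocation drops only when the aggregate demand exceeds the pool.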
36

Recovering the Sight to blind People in indoor Environments with smart Technologies

Mekhalfi, Mohamed Lamine January 2016 (has links)
The methodologies presented in this thesis address the problem of blind people's rehabilitation through assistive technologies. In overall terms, the basic and principal needs of a blind individual can be confined to two components, namely (i) navigation/obstacle avoidance and (ii) object recognition. A close look at the literature makes clear that the former category has received far more attention than the latter. Moreover, the few contributions addressing the second need tend to approach the recognition task for a single predefined class of objects, and, to the best of our knowledge, both needs have never been embedded into a single prototype. In this respect, we put forth two main contributions in this thesis. The first and main one tackles the issue of object recognition for the blind, for which we propose a 'coarse recognition' approach that detects objects in bulk rather than focusing on a single class. The underlying insight of coarse recognition is to list the objects that likely exist in a camera-shot image (acquired by the blind individual through an opportune interface, e.g., a voice recognition/synthesis-based support), regardless of their position in the scene. It thus trades object information detail for computational time, so as to lessen the processing constraints. As for the second contribution, we further integrate the recognition algorithm with an implemented navigation system supplied with a laser-based obstacle avoidance module. Evaluated on image datasets acquired in indoor environments, the recognition schemes exhibit, with little to mild disparities among one another, interesting results in terms of both recognition rate and processing time.
On the other hand, the navigation system has been assessed in an indoor site and has shown plausible performance and flexibility with respect to the usual mobility speed of blind people. A thorough experimental analysis is provided, alongside the foundations for potential future research lines, including object recognition in outdoor environments.
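One way to realize the 'coarse recognition' idea — listing the objects likely present in a shot without localizing them — is image-level retrieval: match the query image against a gallery of annotated indoor shots and merge the label sets of its nearest neighbours. The features, gallery, object list and k-NN rule below are illustrative assumptions, not the scheme of the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy gallery of 50 annotated indoor shots: a global feature vector per shot
# plus a multilabel annotation of which objects appear somewhere in it.
OBJECTS = ["chair", "table", "door", "screen", "bag"]
gallery_feats = rng.normal(size=(50, 16))
gallery_labels = rng.random((50, len(OBJECTS))) < 0.3

def coarse_recognize(query, k=5):
    # Rank gallery shots by feature distance and return the union of the
    # label sets of the k nearest ones: a bulk list of likely objects,
    # with no information about their position in the scene.
    d = np.linalg.norm(gallery_feats - query, axis=1)
    nearest = np.argsort(d)[:k]
    present = gallery_labels[nearest].any(axis=0)
    return [o for o, p in zip(OBJECTS, present) if p]

# A query that is a near-duplicate of gallery shot 0.
query = gallery_feats[0] + 0.01 * rng.normal(size=16)
found = coarse_recognize(query)
print(found)
```

Trading localization for a simple global match is what keeps the processing light enough for a wearable, real-time setting.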
37

A novel high-efficiency SiPM-based system for Ps-TOF

Mazzuca, Elisabetta January 2014 (has links)
A novel setup for positronium time-of-flight (Ps-TOF) measurements is proposed, using Silicon Photomultipliers (SiPMs) instead of photomultiplier tubes. The solution allows us to dramatically increase the compactness of the setup, improving the efficiency by 240%. Different SiPM+scintillator configurations are characterized in order to find the best solution, and simulations are provided together with preliminary tests in the target application. A compact read-out board for the processing of up to 44 channels has been designed and tested. Further tests, expected in the near future, are needed to confirm the simulations and to build the final setup.
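The quantity such a setup measures is simple to state: given a start trigger and the stop signal from a scintillator+SiPM channel at a known distance from the target, the positronium velocity (and hence its kinetic energy) follows from distance over delay. The numbers below are illustrative, not the geometry or timing of the thesis setup; the Ps rest energy 2·m_e·c² ≈ 1.022 MeV is a physical constant.

```python
# Toy positronium time-of-flight estimate (illustrative values).
distance_m = 2.0e-3          # detector slit to target distance: 2 mm
t_start_ns = 12.0            # positron implantation trigger timestamp
t_stop_ns = 37.0             # SiPM stop-signal timestamp

tof_s = (t_stop_ns - t_start_ns) * 1e-9
velocity = distance_m / tof_s                              # m/s

# Non-relativistic kinetic energy E = (1/2) m_Ps v^2, with m_Ps c^2 ~ 1.022 MeV.
energy_ev = 0.5 * 1.022e6 * (velocity / 3.0e8) ** 2
print(round(velocity), round(energy_ev, 3))
```

The experimental work of the thesis is precisely about making the stop timestamp sharp and efficient enough (scintillator choice, SiPM readout) for this arithmetic to be meaningful.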
38

Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks

Konda, Krishna Reddy January 2015 (has links)
The large availability of different types of cameras and lenses, together with the falling price of video sensors, has contributed to the widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety and to detect and prevent crimes and dangerous events. The possibility for personalization of such systems is generally very high: the user can customize the sensing infrastructure and deploy ad-hoc solutions based on current needs by choosing the type and number of sensors, as well as by adjusting the different camera parameters, such as field of view, resolution and, in the case of active PTZ cameras, pan, tilt and zoom. Furthermore, the camera network can be automatically realigned in an event-driven fashion to better observe an occurring event. Given the above possibilities, this doctoral study has two objectives. First, we propose a state-of-the-art camera placement and static reconfiguration algorithm; second, we present a distributed, cooperative and dynamic camera reconfiguration algorithm for a network of cameras. The camera placement and user-driven reconfiguration algorithm is based on realistic virtual modelling of a given environment using particle swarm optimization. A real-time camera reconfiguration algorithm that relies on a motion entropy metric extracted from the H.264 compressed stream acquired by the camera is also presented.
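Particle swarm optimization for camera placement can be sketched in miniature: each particle encodes the positions of a few cameras, and fitness is the fraction of a sampled floor area they cover. Everything here is an illustrative assumption — omnidirectional cameras with a fixed sensing radius on a unit square stand in for the realistic virtual environment model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)

# K omnidirectional cameras of sensing radius R on the unit square;
# fitness = fraction of a 20x20 grid of floor points within range of a camera.
K, R, SWARM, ITERS = 3, 0.3, 20, 60
pts = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                           np.linspace(0, 1, 20)), -1).reshape(-1, 2)

def coverage(flat):
    cams = flat.reshape(K, 2)
    d = np.linalg.norm(pts[:, None, :] - cams[None, :, :], axis=2)
    return float((d.min(axis=1) <= R).mean())

# Standard PSO update: inertia plus pulls toward personal and global bests.
pos = rng.random((SWARM, 2 * K))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([coverage(p) for p in pos])
g = pbest[pbest_val.argmax()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0, 1)
    vals = np.array([coverage(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    g = pbest[pbest_val.argmax()].copy()

print(round(coverage(g), 3))
```

Swapping in occlusion-aware visibility, directional fields of view and PTZ parameters changes only the fitness function, which is what makes PSO attractive for this problem.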
39

Modelling and Recognizing Personal Data

Bignotti, Enrico January 2018 (has links)
Defining what a person is represents a hard task, because personal data, i.e., data that refer to or describe a person, have a very heterogeneous nature. The issue is only worsening with the advent of technologies that, while allowing unprecedented collection and processing capabilities, cannot understand the world as humans do. This is a well-known, long-standing problem in computer science called the Semantic Gap Problem. It was originally defined in the research area of image processing as "... the lack of coincidence between the information that one can extract from the visual data and the interpretation that the same data have for a user in a given situation...". In the context of this work, the semantic gap is the lack of coincidence between the sensor data collected by ubiquitous devices and the human knowledge about the world that relies on people's intelligence, habits and routines. This thesis addresses the semantic gap problem from a representational point of view, proposing an interdisciplinary approach able to model and recognize personal data in real-life scenarios. In fact, the semantic gap affects many communities, ranging from ubiquitous computing to user modelling, that must face the issue of managing the complexity of personal data in terms of modelling and recognition. The contributions of this Ph.D. thesis are: 1) the definition of a methodology, based on an interdisciplinary approach, that accounts for how to represent personal data and allow their recognition; the approach relies on an entity-centric view and on an interdisciplinary categorization to define and structure personal data;
2) the definition of an ontology of personal data that represents the human in a general way while also accounting for the different dimensions of everyday life; 3) the instantiation of the above personal data representation in a reference architecture that implements the ontology and can exploit the methodology to recognize personal data; 4) the adoption of the methodology for defining personal data and its instantiation in three real-life use cases with different goals in mind, proving that our modelling works in different domains and can account for several dimensions of the user.
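The entity-centric idea — a person as an entity whose heterogeneous attributes are populated by raw sensor evidence — can be sketched as a small data model. All class, attribute and dimension names below are hypothetical illustrations, not the ontology defined in the thesis.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorReading:
    source: str          # e.g. "gps", "accelerometer" (hypothetical sources)
    value: object
    timestamp: float

@dataclass
class PersonalAttribute:
    name: str            # e.g. "current_location" (hypothetical attribute)
    dimension: str       # e.g. "spatial", "social", "behavioural"
    evidence: List[SensorReading] = field(default_factory=list)

@dataclass
class PersonEntity:
    entity_id: str
    attributes: List[PersonalAttribute] = field(default_factory=list)

    def recognize(self, reading: SensorReading, attr_name: str, dim: str):
        # Bridge the semantic gap: attach a raw sensor reading to the
        # high-level personal attribute it serves as evidence for.
        for a in self.attributes:
            if a.name == attr_name:
                a.evidence.append(reading)
                return a
        a = PersonalAttribute(attr_name, dim, [reading])
        self.attributes.append(a)
        return a

p = PersonEntity("user-001")
p.recognize(SensorReading("gps", (46.07, 11.12), 1.0),
            "current_location", "spatial")
print(len(p.attributes), p.attributes[0].dimension)
```

Keeping the raw readings as evidence attached to attributes, rather than discarding them after classification, is what lets the architecture revisit the mapping as habits and routines change.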
40

Energy Efficiency and Privacy in Device-to-Device Communication

Usman, Muhammad January 2017 (has links)
Mobile data traffic has increased manifold in recent years, and current cellular networks are undeniably overloaded by the escalating user demands for higher bandwidth and data rates. To meet such demands, Device-to-Device (D2D) communication is regarded as a potential solution to the capacity bottleneck problem of legacy cellular networks. Apart from offloading cellular traffic, D2D communication, thanks to its intrinsic reliance on proximity, enables a broad range of proximity-based applications for both public safety and commercial users. Potential applications include, among others, proximity-based social interactions, exchange of information, advertisements, and Vehicle-to-Vehicle (V2V) communication. The success of D2D communication depends on the scenarios in which users in proximity interact with each other. Although there is a lot of work on resource allocation and interference management in D2D networks, very few works focus on the architectural aspects of D2D communication, with emphasis on benchmarking the energy efficiency of different application scenarios. In this dissertation, we benchmark the energy consumption of D2D User Equipments (UEs) in different application scenarios. To this end, we first consider a scenario wherein different UEs interested in sharing the same service form a Mobile Cloud (MC). Since some UEs can be involved in multiple services/applications at a time, they may interact with multiple MCs. In this regard, we find that there is a threshold on the number of UEs in each MC that can participate in multiple applications, beyond which legacy cellular communication starts performing better in terms of the overall energy consumption of all UEs in the system. Thereafter, we extend the concept of the MC to build a multi-hop D2D network and evaluate the energy consumption of UEs for a content distribution application across the network.
In this work, we optimize the size of an MC to obtain the maximum energy savings. Despite its many advantages, D2D communication poses potential challenges in terms of security and privacy. As a solution, we propose to bootstrap trust in D2D UEs before establishing any connection with unknown users; in particular, we propose Pretty Good Privacy (PGP) and reputation-based mechanisms for D2D networks. Finally, to preserve users' privacy and to secure the contents, we propose to encrypt the contents cached at D2D nodes (or at any other caching server). In particular, we leverage convergent encryption, which provides the extra benefit of eliminating duplicate contents from the caching server.
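The deduplication property of convergent encryption follows from its defining trick: the key is derived from the content itself, so identical plaintexts yield identical ciphertexts and a cache can discard duplicate encrypted copies without ever decrypting them. The sketch below is illustrative only and NOT secure cryptography — a SHA-256-based XOR keystream stands in for a real cipher, and the sample contents are made up.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Expand a key into n pseudo-random bytes by hashing key||counter.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def convergent_encrypt(content: bytes) -> bytes:
    # Convergent encryption: the key is a hash of the content, so the same
    # plaintext always maps to the same ciphertext (decryption would reuse
    # the stored content hash as the key).
    key = hashlib.sha256(content).digest()
    return bytes(a ^ b for a, b in zip(content, keystream(key, len(content))))

c1 = convergent_encrypt(b"popular cached video chunk")
c2 = convergent_encrypt(b"popular cached video chunk")   # duplicate content
c3 = convergent_encrypt(b"different chunk")
print(c1 == c2, c1 == c3)
```

Because c1 and c2 are byte-identical, a D2D caching node can store one copy and serve both requesters, which is exactly the duplicate-elimination benefit the thesis leverages.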
