11

Design, analysis, application and experimental assessment of algorithms for the synthesis of maximally sparse, planar, non-superdirective and steerable arrays

Tumolo, Roberto Michele January 2018 (has links)
This thesis deals with the problem of synthesizing planar, maximally sparse, steerable and non-superdirective array antennas by means of convex optimization algorithms, and with testing the resulting layouts on an existing array to assess their far-field performance in terms of requirements fulfilment. The choice of this topic is motivated by applications in which the power supply/consumption, the weight and the hardware/software complexity of the whole radiating system have a strong impact on the overall cost. On the other hand, reducing the number of elements of course has drawbacks as well (loss in directivity, which in radar applications means a smaller coverage, loss in robustness, etc.); the developed algorithms, however, can be used to find acceptable trade-offs between the advantages and disadvantages of sparsification: it is only a matter of appropriately translating the requirements into convex constraints. The synthesis scheme is first described in detail in its generality, showing how the proposed techniques outperform several results existing in the literature and set new benchmarks. In particular, an important, innovative constraint has been included in the synthesis problem that prevents the selection of elements at distances below half a wavelength: the non-superdirectivity constraint. Moreover, an interesting result is derived and discussed: the required number of elements, relative to the (maximum) antenna size, decreases as the latter increases. Afterwards the discussion focuses on an existing antenna for radar applications, showing how the proposed algorithms intrinsically return a single layout that works jointly for transmission and reception (two-way synthesis).
The results for the specific case chosen (mainly the set of weights and relative positions) are first numerically validated with a full-wave software tool (Ansys HFSS) and then experimentally assessed through measurements in an anechoic chamber.
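The directivity trade-off mentioned above can be made concrete with a small numeric sketch (plain Python; the element layout and uniform weights are illustrative, not an optimized layout produced by the thesis's convex algorithms): removing elements from a half-wavelength-spaced aperture directly lowers the broadside peak of the array factor.

```python
import cmath
import math

def array_factor(positions_wl, weights, theta_deg):
    """Array factor magnitude of a linear array; positions in wavelengths,
    theta measured from broadside."""
    u = math.sin(math.radians(theta_deg))
    return abs(sum(w * cmath.exp(2j * math.pi * p * u)
                   for p, w in zip(positions_wl, weights)))

# 16-element half-wavelength-spaced array vs. a naively sparsified subset.
full = [0.5 * n for n in range(16)]
sparse = full[::2]  # keep every other element (illustration only)

peak_full = array_factor(full, [1.0] * len(full), 0.0)
peak_sparse = array_factor(sparse, [1.0] * len(sparse), 0.0)
print(peak_full, peak_sparse)  # 16.0 vs 8.0: fewer elements, lower peak gain
```

A convex synthesis would instead choose positions and weights so that sidelobe and non-superdirectivity constraints are met while minimizing the number of active elements.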
12

Energy-Efficient Medium Access Control Protocols and Network Coding in Green Wireless Networks

Palacios-Trujillo, Raul January 2014 (has links)
Wireless networks are nowadays a popular means of communication in the daily social and business activities of many users. However, current estimates indicate that wireless networks are expected to contribute significantly to the rapidly increasing energy consumption and carbon emissions of the Information and Communication Technologies (ICT) sector. Crucial factors behind this trend are the continuous growth of wireless network infrastructure, coupled with the increasing number of user wireless devices equipped with various radio interfaces and batteries of very limited capacity (e.g., smartphones). The serious problem of energy consumption in wireless networks is mainly related to the current standard designs of wireless technologies. These designs are based on a stack of protocol layers aiming to maximize performance-related metrics, such as throughput or Quality of Service (QoS), while paying less attention to energy efficiency. Although the focus has recently shifted to energy efficiency, most existing wireless solutions achieve energy savings at the cost of some performance degradation. This thesis aims at contributing to the evolution of green wireless networks by exploring new approaches for energy saving at the Medium Access Control (MAC) protocol layer, and by combining these with the integration of the Network Coding (NC) paradigm into the wireless network protocol stack for further energy savings. The main contributions of the thesis are divided into two parts. The first part focuses on the design, performance analysis and evaluation of novel energy-efficient distributed and centralized MAC protocols for Wireless Local Area Networks (WLANs). The second part turns to the design, performance analysis and evaluation of new NC-aware energy-efficient MAC protocols for wireless ad hoc networks.
The key idea of the proposed mechanisms is to enable multiple data exchanges (with or without NC data) among wireless devices and to allow them to dynamically turn their radio transceivers on and off (i.e., duty cycling) during periods of no transmission or reception (i.e., when they would otherwise be idle listening or overhearing). Validation through analysis, computer-based simulation, and experimentation on real hardware shows that the proposed MAC solutions, alone or combined with the NC approach, can significantly improve both the throughput and the energy efficiency of wireless networks compared to the existing mechanisms of the IEEE 802.11 standard. Furthermore, the results presented in this dissertation help to understand the impact of the on/off transitions of radio transceivers on the energy efficiency of MAC protocols based on duty cycling. These radio transitions are shown to be critical when the time available for sleeping is comparable to the duration of the transitions themselves.
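The remark about on/off transitions can be illustrated with a back-of-the-envelope energy model (all power and energy figures below are invented for illustration, not measurements from the thesis): duty cycling pays off only while the energy saved by sleeping exceeds the energy spent switching the transceiver.

```python
def radio_energy(t_active, t_sleep, n_transitions,
                 p_active=1.0, p_sleep=0.01, e_transition=0.002):
    """Energy (J) consumed by a duty-cycled radio over one interval,
    including the overhead of on/off transitions."""
    return t_active * p_active + t_sleep * p_sleep + n_transitions * e_transition

always_on = radio_energy(t_active=1.0, t_sleep=0.0, n_transitions=0)
duty_cycled = radio_energy(t_active=0.2, t_sleep=0.8, n_transitions=20)
# Many short sleep intervals: transition overhead erases the savings.
fragmented = radio_energy(t_active=0.2, t_sleep=0.8, n_transitions=400)
print(always_on, duty_cycled, fragmented)  # 1.0 0.248 1.008
```

With few transitions, sleeping cuts the energy to roughly a quarter; with sleep intervals comparable to the transition time, the duty-cycled radio consumes more than one left always on, which is exactly the critical regime the dissertation points out.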
13

Bridging the gap between theory and implementation in cognitive networks: developing reasoning in today's networks

Facchini, Christian January 2011 (has links)
Communication networks are becoming increasingly complex and dynamic. The networking paradigm commonly employed, on the other hand, has not changed over the years and, as a result, performs poorly in today's environments. Only very recently has a new paradigm, cognitive networking, been devised with the objective of making networks more intelligent, thereby overcoming traditional limitations and potentially achieving better performance. According to this vision, networks should be able to monitor themselves, reason upon the environment, act towards the achievement of specific goals, and learn from experience. Thus far, several cognitive network architectures have been conceived and proposed in the literature; but, although researchers seem to agree on the need for a holistic approach, their architectures pursue such a global vision only in part, as they consider neither networks nor network nodes in their entirety. In the present work, we analyze the aspects to be tackled in order to enable this holistic view and propose to base reasoning on both intra- and inter-node interactions, with the ultimate aim of devising a complete cognitive network architecture. After a thorough analysis of the advantages and drawbacks of generic reasoning frameworks, we select the one most apt to form the basis on which to build the cognitive network we envision. We first formalize its application in network environments, determining the steps to follow in order to equip traditional networks with cognitive capabilities. Then we shift the focus from design to implementation, identifying the problems that could be faced when realizing such a network and proposing a set of optional refinements that can further improve performance in specific situations. Finally, we tackle the problem of reducing the time needed for the cognitive process to reason.
Validation through simulations shows that explicitly considering cross-layer intra- and inter-node interactions when reasoning has a twofold effect. First, it leads to better performance levels than those achievable by today's non-intelligent networks; second, it helps to better understand the causal relationships between variables in a network.
14

Advanced Pre-Processing and Change-Detection Techniques for the Analysis of Multitemporal VHR Remote Sensing Images

Marchesi, Silvia January 2011 (has links)
Remote sensing images regularly acquired by satellites over the same geographical areas (multitemporal images) provide very important information on land-cover dynamics. In recent years, the ever increasing availability of multitemporal very high geometrical resolution (VHR) remote sensing images (with sub-metric resolution) has resulted in new potentially relevant applications related to environmental monitoring and land-cover control and management. Most of these applications are associated with the analysis of dynamic phenomena (both anthropic and non-anthropic) that occur at different scales and result in changes on the Earth's surface. In this context, in order to adequately exploit the huge amount of data acquired by remote sensing satellites, it is mandatory to develop unsupervised and automatic techniques for an efficient and effective analysis of this kind of multitemporal data. In the literature, several techniques have been developed for the automatic analysis of multitemporal medium/high resolution data. However, these techniques are not effective when dealing with VHR images, mainly because of their inability both to exploit the high geometrical detail content of VHR data and to model the multiscale nature of the scene (and therefore of the possible changes). In this framework it is important to develop unsupervised change-detection (CD) methods able to automatically manage the large amount of information in VHR data, without the need for any prior information on the area under investigation. Even if these methods usually identify only the presence/absence of changes, without giving information about the kind of change that occurred, they are considered the most interesting from an operational perspective, as in most applications no multitemporal ground-truth information is available.
Considering the above-mentioned limitations, in this thesis we study the main problems related to multitemporal VHR images, with particular attention to registration noise (i.e., the noise related to a non-perfect alignment of the multitemporal images under investigation). Then, on the basis of the results of this analysis, we develop robust unsupervised and automatic change-detection methods. In particular, the following specific issues are addressed in this work: 1. Analysis of the effects of registration noise (RN) in multitemporal VHR images, and definition of a method for estimating the distribution of this kind of noise, useful for defining: a. change-detection techniques robust to RN (the proposed techniques significantly reduce the false-alarm rate due to RN that standard CD techniques raise when dealing with VHR images); b. effective registration methods (the proposed strategies are based on a multiscale analysis of the scene, which allows one to extract accurate control points for the registration of VHR images). 2. Detection and discrimination of multiple changes in multitemporal images; these techniques overcome the limitations of existing unsupervised techniques, as they are able to identify and separate different kinds of change without any prior information on the study areas. 3. Pre-processing techniques for optimizing change detection in VHR images; in particular, we evaluate the impact on the CD process of: a. image transformation techniques, and b. different strategies of image pansharpening applied to the original multitemporal images. For each of the above topics, an analysis of the state of the art is carried out, the limitations of existing methods are pointed out, and the proposed solutions are described in detail.
Finally, experimental results conducted on both simulated and real data are reported in order to show and confirm the validity of all the proposed methods.
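As a point of reference for the unsupervised CD methods discussed above, a minimal change-vector-analysis baseline can be sketched as follows (plain Python; the threshold and pixel values are invented). The thesis's RN-robust techniques go further by estimating the registration-noise distribution, but the basic decision is still a thresholding of the change-vector magnitude:

```python
import math

def change_magnitude(pix_t1, pix_t2):
    """Magnitude of the spectral change vector between two co-registered pixels."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pix_t1, pix_t2)))

def detect_changes(img_t1, img_t2, threshold):
    """Binary change map: 1 where the magnitude exceeds the threshold."""
    return [[1 if change_magnitude(p1, p2) > threshold else 0
             for p1, p2 in zip(row1, row2)]
            for row1, row2 in zip(img_t1, img_t2)]

t1 = [[(10, 20), (10, 20)], [(10, 20), (10, 20)]]
t2 = [[(10, 21), (10, 20)], [(80, 90), (10, 20)]]  # small jitter vs. a real change
cmap = detect_changes(t1, t2, threshold=5.0)
print(cmap)  # [[0, 0], [1, 0]]
```

A threshold chosen from the estimated RN distribution would suppress the small misalignment-induced jitter while keeping the genuine change.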
15

Detection and Analysis Methods for Unmanned Aerial Vehicle Images

Moranduzzo, Thomas January 2015 (has links)
Unmanned Aerial Vehicles (UAVs), commonly known as drones, are aerial platforms that are gaining great popularity in the remote sensing field. UAVs derive from military technology, but in the last few years they have established themselves as reference platforms for civilian tasks as well. The main advantage of these acquisition systems lies in their simplicity of use: a UAV can be deployed when and where it is needed without excessive costs. Since UAVs can fly very close to the objects under investigation, they allow the acquisition of extremely high resolution (EHR) images in which the objects are described with a very high level of detail. The huge quantity of information contained in UAV images opens the way to novel applications, but at the same time forces us to face new challenging problems at the methodological level. This thesis represents a modest but hopefully useful contribution towards making UAV images fully understood and easily processed and analyzed. In particular, the proposed methodological contributions include: i) two methods devoted to the automatic detection and counting of cars in urban scenarios; ii) a complete processing chain which monitors traffic and estimates the speeds of moving vehicles; iii) a methodology which detects classes of objects by exploiting a nonlinear filter combining image gradient features at different orders with Gaussian process (GP) modeling; iv) a novel strategy to "coarsely" describe extremely high resolution images using various representation and matching strategies. Experimental results conducted on real UAV images are presented and discussed. They show the validity of the proposed methods and suggest possible future improvements. Furthermore, they confirm that, despite the complexity of the considered images, the potential of UAV imagery is very wide.
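The traffic-monitoring chain in ii) ultimately reduces to tracking a vehicle between georeferenced frames; the speed estimate follows from the pixel displacement, the ground sampling distance, and the frame interval. A hedged sketch with invented numbers (not the thesis's actual processing chain):

```python
def vehicle_speed_kmh(disp_px, gsd_m_per_px, dt_s):
    """Ground speed from the pixel displacement of a tracked vehicle between
    two UAV frames; gsd is the ground sampling distance (metres per pixel)."""
    return disp_px * gsd_m_per_px / dt_s * 3.6  # m/s -> km/h

# A car moving 150 px between frames 0.5 s apart, at 5 cm/pixel resolution:
speed = vehicle_speed_kmh(disp_px=150, gsd_m_per_px=0.05, dt_s=0.5)
print(speed)  # ~54 km/h
```

The hard part in practice, and the subject of the thesis, is obtaining the displacement reliably: detecting the same car in both frames and compensating for the motion of the platform itself.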
16

Innovative methods for the reconstruction of new generation satellite remote sensing images

Lorenzi, Luca January 2012 (has links)
Remote sensing satellites have proven to be helpful instruments: satellite images have been successfully exploited in several applications, including environmental monitoring and the prevention of natural disasters. In recent years, the increasing availability of very high spatial resolution (VHR) remote sensing images has resulted in new potentially relevant applications related to land-cover control and environmental management. However, optical sensors may suffer from the presence of clouds and/or shadows. This leads to the problem of missing data, which can be particularly serious in the case of VHR images. In this thesis, new methodologies for the detection and reconstruction of missing-data regions in VHR images are proposed and applied to areas contaminated by clouds and/or shadows. In particular, the proposed methodological contributions include: i) a multiresolution inpainting strategy to reconstruct cloud-contaminated images; ii) a new combination of radiometric information and spatial position information in two specific kernels, to better reconstruct cloud-contaminated regions with a support vector regression (SVR) method; iii) the exploitation of compressive sensing theory, adopting three different strategies (orthogonal matching pursuit, basis pursuit, and a genetic algorithm solution) for the reconstruction of cloud-contaminated images; iv) a complete processing chain which exploits a support vector machine (SVM) classification and morphological filters for the detection, and a linear regression for the reconstruction, of specific shadow areas; and v) several evaluation criteria capable of assessing the reconstructability of shadow areas. All of these are specifically developed to work with VHR images. Experimental results conducted on real data are reported in order to show and confirm the validity of all the proposed methods.
They all suggest that, despite the complexity of the problems, missing areas obscured by clouds or shadows can be recovered to a good degree.
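Of the compressive sensing solvers listed in iii), orthogonal matching pursuit is the simplest to sketch. The toy below performs a single OMP step: it selects the dictionary atom most correlated with the signal and fits its coefficient by least squares; the full algorithm iterates on the residual until the missing region is represented (the dictionary and signal here are invented toys, not the thesis's image patches):

```python
def omp_1sparse(dictionary, signal):
    """One orthogonal matching pursuit step: pick the atom most correlated
    with the signal, then fit its coefficient by least squares."""
    best_idx = max(range(len(dictionary)),
                   key=lambda i: abs(sum(a * s for a, s in
                                         zip(dictionary[i], signal))))
    atom = dictionary[best_idx]
    coeff = (sum(a * s for a, s in zip(atom, signal))
             / sum(a * a for a in atom))
    return best_idx, coeff

# Toy dictionary of unit "patches"; the signal is 3x atom 1.
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
idx, c = omp_1sparse(atoms, [0.0, 3.0, 0.0])
print(idx, c)  # 1 3.0
```

In the cloud-reconstruction setting, the dictionary is learned from the uncontaminated part of the image and the sparse combination of atoms fills the masked region.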
17

Discrimination of Computer Generated versus Natural Human Faces

Dang Nguyen, Duc Tien January 2014 (has links)
The development of computer graphics technologies has brought increasing realism to computer generated multimedia data, e.g., scenes, human characters and other objects, allowing them to reach a very high quality level. However, these synthetic objects may be used to depict situations that never occurred in the real world, hence raising the demand for advanced tools to differentiate between real and artificial data. Indeed, since 2005 the multimedia forensics research community has been developing methods to identify computer generated multimedia data, focusing mainly on images. However, most of them have not achieved very good performance on the problem of identifying CG characters. The objective of this doctoral study is to develop efficient techniques to distinguish between computer generated and natural human faces. We focused our study on geometric forensic techniques, which exploit the structure and shape of the face, proposing methods for both image and video forensics. Firstly, we proposed a method to differentiate between computer generated and photographic human faces in photos: based on an estimation of facial asymmetry, a given photo is classified as computer generated or not. Secondly, we introduced a method to distinguish between computer generated and natural faces based on facial expression analysis; in particular, small variations of the facial shape models corresponding to the same expression are used as evidence of synthetic characters. Finally, by exploiting the differences between face models over time, we can identify synthetic animations, since their models are usually recreated or performed in patterns, in contrast to the models of natural animations.
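The first method classifies a photo from an estimate of facial asymmetry. As a hedged illustration of the idea (not the thesis's actual feature extraction), one can score asymmetry by mirroring left-side landmarks about the facial midline and measuring how far they land from their right-side counterparts; an implausibly low score suggests a synthetic face:

```python
import math

def asymmetry_score(landmarks):
    """Mean distance between left-side landmarks reflected about the vertical
    midline x = 0 and their right-side counterparts."""
    left, right = landmarks["left"], landmarks["right"]
    return sum(math.dist((-lx, ly), (rx, ry))
               for (lx, ly), (rx, ry) in zip(left, right)) / len(left)

# A perfectly symmetric (CG-like) face scores 0; a natural face scores higher.
cg_face = {"left": [(-1.0, 0.0), (-0.5, -1.0)],
           "right": [(1.0, 0.0), (0.5, -1.0)]}
real_face = {"left": [(-1.0, 0.0), (-0.5, -1.0)],
             "right": [(1.1, 0.05), (0.55, -1.02)]}
print(asymmetry_score(cg_face), asymmetry_score(real_face) > 0.0)  # 0.0 True
```

A real system would estimate the midline from the face itself and learn the decision threshold from labelled photographic and CG examples.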
18

Advanced Techniques for Automatic Change Detection in Multitemporal Hyperspectral Images

Liu, Sicong January 2015 (has links)
The increasing availability of hyperspectral images from the new generation of remote sensing satellites provides an important data source for Earth Observation (EO). Hyperspectral images are characterized by very detailed spectral sampling (i.e., very high spectral resolution) over a wide spectral wavelength range. This important property makes it possible to monitor land-cover dynamics and environmental evolution at a fine spectral scale. It also allows one to potentially detect subtle spectral variations associated with land-cover transitions that are usually not detectable in traditional multispectral images, due to their poor spectral signature representation (generally sufficient for representing only the major changes). To fully utilize the available multitemporal hyperspectral images and their rich information content, it is necessary to develop advanced techniques for robust change detection (CD) in multitemporal hyperspectral images, so as to automatically discover and identify the interesting and valuable change information. This is the main goal of this thesis. In the literature, most CD approaches were designed for multispectral images, and their effectiveness on complex CD problems is reduced when dealing with hyperspectral images. Accordingly, the research activities carried out during this PhD study and presented in this thesis are devoted to the development of effective methods for multiple-change detection in multitemporal hyperspectral images. These methods consider the intrinsic properties of hyperspectral data and overcome the drawbacks of existing CD techniques. In particular, the following specific novel contributions are introduced in this thesis: 1) A theoretical and empirical analysis of the multiple-change detection problem in multitemporal hyperspectral images.
Definitions and a discussion of concepts such as change endmembers, the hierarchical change structure, and the multitemporal spectral mixture are given. 2) A novel semi-automatic sequential technique for iteratively discovering, visualizing and detecting multiple changes. Reliable change variables are adaptively generated to represent each specific change under consideration; multiple changes are thus discovered and discriminated through an iterative re-projection of the spectral change vectors into new compressed change-representation domains. Moreover, a simple yet effective tool is developed that allows the user to interact with the CD procedure. 3) A novel partially unsupervised hierarchical clustering technique for the separation and identification of multiple changes. By considering spectral variations at different processing levels, multiple-change information is adaptively modelled and clustered according to spectral homogeneity; a manual initialization is used to drive the whole hierarchical clustering procedure. 4) A novel automatic multitemporal spectral unmixing approach to detect multiple changes in hyperspectral images. A multitemporal spectral mixture model is proposed to analyse spectral variations at the sub-pixel level, thus investigating in detail the spectral composition of change and no-change endmembers within a pixel. A patch scheme is used in endmember extraction and unmixing, which better accounts for endmember variability. Comprehensive qualitative and quantitative experimental results obtained on both simulated and real multitemporal hyperspectral images confirm the effectiveness of the proposed techniques.
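Contribution 2) re-projects spectral change vectors into a compressed representation domain. A minimal two-dimensional version of that idea, magnitude plus angle to a reference direction, can be sketched as follows (the spectra and reference direction are invented; the thesis's re-projection is iterative and adaptive, not this fixed mapping):

```python
import math

def compressed_change_representation(spec_t1, spec_t2, reference):
    """Map a spectral change vector to a (magnitude, angle-in-degrees) pair,
    a 2-D compressed domain in which different change kinds separate."""
    scv = [b - a for a, b in zip(spec_t1, spec_t2)]
    mag = math.sqrt(sum(x * x for x in scv))
    if mag == 0.0:
        return 0.0, 0.0
    cos_a = (sum(x * r for x, r in zip(scv, reference))
             / (mag * math.sqrt(sum(r * r for r in reference))))
    return mag, math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

ref = [1.0, 1.0, 1.0]  # reference change direction (a toy assumption)
m1, a1 = compressed_change_representation([10, 10, 10], [20, 20, 20], ref)
m2, a2 = compressed_change_representation([10, 10, 10], [10, 10, 0], ref)
print(round(m1, 2), round(a1, 1), round(m2, 2), round(a2, 1))
```

Two changes of similar magnitude but different spectral behaviour land at different angles, which is what lets multiple change kinds be discriminated in the compressed domain.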
19

Optimisation of Performance of 4G Mobile Networks in High Load Conditions

Baraev, Alexey January 2014 (has links)
The signalling subsystem of LTE (Long Term Evolution) networks has inherited some of the limitations of its preceding technologies. It is vulnerable to overload situations that occur as a consequence of unpredicted user behaviour. The most sensitive areas are the paging procedure, the random access procedure, and the signalling channels associated with them. A reliable paging procedure is particularly important: in the current design, an overload of the paging channels blocks any modification of a cell's configuration, thereby limiting the system's ability to recover. This research proposes and analyses a solution to the overload of the paging channels in LTE systems, showing that it is possible to completely avoid paging channel overload under surging load conditions. The research develops and verifies a mathematical model of the paging procedure. This model is incorporated in the solution, allowing the computation of the critical load thresholds that trigger the reconfiguration of the paging channels. The solution is explained by a detailed algorithm and validated in a simulator of the LTE paging channels; it is partially compliant with the 3GPP specifications. The research includes a compatibility analysis and underlines the operational procedures that would have to be defined in the standard. Importantly, the implementation of the solution does not affect already deployed hardware but only requires a modification of the eNB software. It is thus possible to prevent the development of paging overload situations, and the solution can be implemented in the hardware already deployed in LTE networks. The main result of this research is a reliable paging procedure that opens further opportunities for the optimisation of other signalling procedures and channels.
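The role of the critical load thresholds can be illustrated with a deliberately simplified capacity model (all figures are invented, not values from the 3GPP specifications or from the thesis's mathematical model): the reconfiguration must be triggered while the offered paging load is still below the channel capacity, because once the channel is saturated the cell configuration can no longer be changed.

```python
def paging_overload_threshold(paging_frames_per_s, records_per_frame,
                              safety_margin=0.8):
    """Critical paging load (messages/s) above which the paging channel
    configuration should be enlarged, with headroom before saturation."""
    return paging_frames_per_s * records_per_frame * safety_margin

capacity = paging_overload_threshold(paging_frames_per_s=100,
                                     records_per_frame=16)
offered_load = 1500.0
must_reconfigure = offered_load > capacity
print(capacity, must_reconfigure)  # 1280.0 True
```

The point of the safety margin is exactly the one made above: the threshold fires early enough that the reconfiguration message itself can still be delivered.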
20

Acoustic Model Adaptation For Reverberation Robust Automatic Speech Recognition

Mohammed, Abdul Waheed January 2014 (has links)
Reverberation is a natural phenomenon observed in enclosed environments. It occurs due to the reflection of the signal from the walls and objects in a room. For humans, reverberation is beneficial, as it reinforces sound and provides a sensation of space. For automatic speech recognition, however, even a moderate amount of reverberation is very harmful: it corrupts the clean speech, which leads to a deterioration in the performance of the speech recognizer. Indeed, in enclosed environments, reverberation has the most damaging effect on the accuracy of the recognizer. In the literature, mostly noise compensation techniques have been proposed to improve speech recognition performance against environmental artifacts; as a consequence, the problem of reverberation has received relatively little attention. Lately, some techniques have emerged that are specifically tailored to compensating the effects of reverberation. Nevertheless, the problem of reverberation is far from being solved. Therefore, to handle reverberation and make speech recognition robust, we propose a "semi-blind adaptation" technique which adapts clean acoustic models to the reverberant environment and thus provides improved performance. Semi-blind adaptation works in two phases: in the first phase a reverberation model is estimated, and in the second phase the clean acoustic models are adapted using this model. The reverberation model proposed in this technique (Pw-EDC) captures the non-diffuse nature of real rooms. The Pw-EDC model therefore has a dual-slope energy decay, where the first slope represents the steep decay of early reflections and the second slope the slow decay of late reflections. The parameters modelling the early-reflection decay are calculated empirically, and to find the parameter of the late-reflection decay we propose a Gaussian mixture model (GMM) based reverberation-time estimation technique.
The late-reflection decay parameter is estimated by first training a pool of GMMs, where each model represents the reverberation time of the data on which it was trained. In the test phase, the test data are matched against these models, and the GMM that matches with the highest probability provides the estimate of the late-reflection decay parameter. To adapt the acoustic models, the reverberation energy contributions are estimated using the Pw-EDC model. The parameters of the current state in the model (i.e., only the means) are adapted by adding the reverberation energy contributions of the previous states to the current state; in this manner, the dispersion of energy caused by reverberation is compensated. Adaptation is performed not only on the static parameters but also on the dynamic parameters of the model. After adaptation, the models are evaluated on data from low, medium and high reverberant environments. The efficacy of the proposed adaptation technique is evaluated on small and medium vocabulary tasks. For these tasks, reverberant data are generated by convolving clean signals with impulse responses taken from the SIREAC and AIR databases. SIREAC provides room impulse responses (RIRs) of an office and a living room, and it also allows the reverberation time to be modified; therefore, in our experiments the reverberation time of the RIRs is varied from 200 to 900 ms in steps of 100 ms for both SIREAC rooms. In the AIR environment, RIRs are obtained from a studio booth, a meeting room, an office and a lecture room, which have very low, low, medium and high reverberation times respectively. For the small vocabulary task, Pw-EDC adaptation provides considerable improvements over the baseline results, especially at medium and high reverberation times in both environments. Pw-EDC adaptation is compared with a contemporary adaptation technique (Exp-EDC adaptation) which adapts the models in the same manner, except that it uses a cruder reverberation model.
Pw-EDC adaptation was found to give better performance in all the rooms of both environments. Pw-EDC is also compared with a state-of-the-art adaptation technique, unsupervised MLLR: Pw-EDC adaptation provides similar performance to MLLR only when the models contain static coefficients. For the medium vocabulary task, Pw-EDC adaptation provides better performance than Exp-EDC adaptation in both environments. However, when compared against unsupervised MLLR it shows relatively poor performance; the reason for this is the inaccurate adaptation of the dynamic coefficients of the models. In the end, the robustness of the proposed adaptation technique is due to the precise modelling and estimation of the reverberation energy decay by the Pw-EDC model. Using the Pw-EDC model, semi-blind adaptation has shown consistent improvements across low, medium and high reverberant environments in both small and medium vocabulary speech recognition tasks.
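The dual-slope Pw-EDC decay at the heart of the technique can be sketched as a piecewise-linear curve in dB (the split point and slopes below are invented illustrations, not the empirically calculated parameters of the thesis):

```python
def pw_edc_db(t_ms, t_split_ms=50.0, early_slope=0.6, late_slope=0.12):
    """Dual-slope energy decay in dB: a steep early-reflection decay up to
    t_split_ms, then a slower late-reverberation tail (slopes in dB/ms)."""
    if t_ms <= t_split_ms:
        return -early_slope * t_ms
    return -early_slope * t_split_ms - late_slope * (t_ms - t_split_ms)

# With these toy slopes, the first 50 ms lose as much energy (30 dB) as the
# following 250 ms of the late tail.
print(pw_edc_db(50.0), pw_edc_db(300.0))  # -30.0 -60.0
```

A single-slope model (like the Exp-EDC baseline mentioned above) cannot represent this knee, which is why it misestimates the energy smeared into later acoustic states.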
