61 |
Advanced Techniques for Automatic Change Detection in Multitemporal Hyperspectral Images / Liu, Sicong (January 2015)
The increasing availability of new-generation hyperspectral images from remote sensing satellites provides an important data source for Earth Observation (EO). Hyperspectral images are characterized by very detailed spectral sampling (i.e., very high spectral resolution) over a wide wavelength range. This important property makes it possible to monitor land-cover dynamics and environmental evolution at a fine spectral scale. It also allows one to potentially detect subtle spectral variations associated with land-cover transitions that are usually not detectable in traditional multispectral images, whose coarser spectral signature representation is generally sufficient for representing only the major changes. To fully exploit the available multitemporal hyperspectral images and their rich information content, it is necessary to develop advanced techniques for robust change detection (CD) in multitemporal hyperspectral images that automatically discover and identify interesting and valuable change information. This is the main goal of this thesis. In the literature, most CD approaches were designed for multispectral images, and their effectiveness on complex CD problems is reduced when dealing with hyperspectral images. Accordingly, the research activities carried out during this PhD study and presented in this thesis are devoted to the development of effective methods for multiple-change detection in multitemporal hyperspectral images. These methods consider the intrinsic properties of hyperspectral data and overcome the drawbacks of existing CD techniques. In particular, the following specific novel contributions are introduced in this thesis: 1) A theoretical and empirical analysis of the multiple-change detection problem in multitemporal hyperspectral images.
Definitions and a discussion of concepts such as change endmembers, the hierarchical change structure, and the multitemporal spectral mixture are given. 2) A novel semi-automatic sequential technique for iteratively discovering, visualizing, and detecting multiple changes. Reliable change variables are adaptively generated to represent each specific change under consideration; multiple changes are thus discovered and discriminated through an iterative re-projection of the spectral change vectors into new compressed change representation domains. Moreover, a simple yet effective tool is developed that allows the user to interact with the CD procedure. 3) A novel partially unsupervised hierarchical clustering technique for the separation and identification of multiple changes. By considering spectral variations at different processing levels, multiple-change information is adaptively modelled and clustered according to spectral homogeneity. A manual initialization is used to drive the whole hierarchical clustering procedure. 4) A novel automatic multitemporal spectral unmixing approach to detect multiple changes in hyperspectral images. A multitemporal spectral mixture model is proposed to analyse spectral variations at the sub-pixel level, thus investigating in detail the spectral composition of change and no-change endmembers within a pixel. A patch scheme is used in endmember extraction and unmixing, which better accounts for endmember variability. Comprehensive qualitative and quantitative experimental results obtained on both simulated and real multitemporal hyperspectral images confirm the effectiveness of the proposed techniques.
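The core operation behind such techniques, the computation of spectral change vectors and their compressed 2-D representation, can be sketched as follows. The fixed reference direction and array shapes are illustrative assumptions made here for brevity; they are not the adaptive re-projection actually proposed in the thesis.

```python
import numpy as np

def spectral_change_vectors(x1, x2):
    """Per-pixel spectral change vectors between two co-registered
    hyperspectral acquisitions of shape (rows, cols, bands)."""
    return x2.astype(float) - x1.astype(float)

def compressed_representation(scv, reference=None):
    """Project each change vector into a 2-D (magnitude, angle) domain,
    in the spirit of compressed change vector analysis. The default
    reference direction (the diagonal of the band space) is an
    assumption; adaptive methods derive it from the data."""
    bands = scv.shape[-1]
    if reference is None:
        reference = np.ones(bands) / np.sqrt(bands)
    flat = scv.reshape(-1, bands)
    magnitude = np.linalg.norm(flat, axis=1)
    # angle between each change vector and the reference direction
    cosine = flat @ reference / np.where(magnitude == 0, 1.0, magnitude)
    angle = np.arccos(np.clip(cosine, -1.0, 1.0))
    angle[magnitude == 0] = 0.0
    return magnitude.reshape(scv.shape[:-1]), angle.reshape(scv.shape[:-1])
```

Unchanged pixels cluster at zero magnitude, while different change classes separate by angle in this compressed domain.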
|
62 |
Optimisation of Performance of 4G Mobile Networks in High Load Conditions / Baraev, Alexey (January 2014)
The signalling subsystem of LTE (Long Term Evolution) networks inherited some of the limitations of its preceding technologies. It is vulnerable to overload situations that occur as a consequence of unpredicted user behaviour. The most sensitive areas are the paging and random access procedures and the signalling channels associated with them. A reliable paging procedure is particularly important: in the current design, an overload of the paging channels blocks the possibility of modifying the configuration of a cell, thus limiting the system's ability to recover. This research proposes and analyses a solution to the overload of the paging channels in LTE systems. It shows that it is possible to completely avoid overload of the paging channels under surging load conditions. The research develops and verifies a mathematical model of the paging procedure. This model is incorporated in the solution, allowing computation of the critical load thresholds that trigger the reconfiguration of the paging channels. The solution is specified by a detailed algorithm and validated in a simulator of the LTE paging channels. It is partially compliant with the 3GPP specifications; the research therefore includes a compatibility analysis and underlines the operational procedures that must be defined in the standard. Importantly, the implementation of the solution does not affect already deployed hardware but requires only a modification of the eNB software. It is thus possible to prevent the development of paging overload situations with the hardware already deployed in LTE networks. The main result of this research is a reliable paging procedure that opens further opportunities for the optimisation of other signalling procedures and channels.
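The idea of a critical load threshold triggering a paging-channel reconfiguration can be illustrated with a back-of-the-envelope capacity model. The DRX-cycle parameters, the 16-record limit per paging occasion, and the 80% safety margin below are illustrative assumptions, not the verified model developed in the research.

```python
def paging_capacity(t_frames=32, n_b_ratio=1.0, records_per_occasion=16):
    """Rough paging-channel capacity in paging records per second.

    Assumptions (illustrative): a radio frame lasts 10 ms; the DRX
    cycle spans `t_frames` radio frames; nB = n_b_ratio * T gives the
    number of paging occasions per DRX cycle; each paging occasion
    carries at most `records_per_occasion` paging records.
    """
    frame_duration_s = 0.01
    occasions_per_cycle = n_b_ratio * t_frames
    cycle_duration_s = t_frames * frame_duration_s
    return occasions_per_cycle * records_per_occasion / cycle_duration_s

def needs_reconfiguration(offered_load, capacity, safety_margin=0.8):
    """Trigger a reconfiguration before the channel saturates; the
    80% margin is an assumed threshold, not the computed one."""
    return offered_load > safety_margin * capacity
```

With the default parameters the capacity is 1600 records/s, so a surge approaching that rate would trigger the reconfiguration well before blocking occurs.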
|
63 |
Acoustic Model Adaptation For Reverberation Robust Automatic Speech Recognition / Mohammed, Abdul Waheed (January 2014)
Reverberation is a natural phenomenon observed in enclosed environments. It occurs due to the reflection of the signal from the walls and objects in the room. For humans, reverberation is beneficial, as it reinforces sound and provides a sensation of space. For automatic speech recognition, however, even a moderate amount of reverberation is very harmful: it corrupts the clean speech, which leads to a deterioration in the performance of the speech recognizer. Indeed, in enclosed environments, reverberation has the most damaging effect on the accuracy of the recognizer. In the literature, mostly noise compensation techniques have been proposed to improve speech recognition performance against environmental artifacts; as a consequence, the problem of reverberation has received relatively less attention. Lately, some techniques have emerged that are specifically tailored to compensating the effects of reverberation. Nevertheless, the problem of reverberation is far from solved. Therefore, to handle reverberation and make speech recognition robust, we propose a "semi-blind adaptation" technique that adapts the clean acoustic models to the reverberant environment and thus provides improved performance. The semi-blind adaptation technique works in two phases: in the first phase a reverberation model is estimated, and in the second phase the clean acoustic models are adapted using that model. The reverberation model proposed in this technique (Pw-EDC) captures the non-diffuse nature of rooms. The Pw-EDC model therefore has a dual-slope energy decay, where the first slope represents the steep decay of early reflections and the second slope represents the slow decay of late reflections. The parameters modelling the early-reflection decay were calculated empirically, and to find the parameter of the late-reflection decay we propose a Gaussian mixture model (GMM)-based reverberation time estimation technique.
The late-reflection decay parameter is estimated by first training a pool of GMMs, where each model represents the reverberation time of the data on which it was trained. In the test phase, the test data are matched against these models, and the GMM that matches with the highest probability provides the estimate of the late-reflection decay parameter. To adapt the acoustic models, the reverberation energy contributions are estimated using the Pw-EDC model. The parameters of the current state in the model (i.e., only the means) are adapted by adding the reverberation energy contributions of the previous states to the current state. In this manner, the dispersion of energy caused by reverberation is compensated. Adaptation is performed not only on the static parameters but also on the dynamic parameters of the model. After adaptation, the models are evaluated on data from low, medium, and high reverberant environments. The efficacy of the proposed adaptation technique is evaluated on small and medium vocabulary tasks. For these tasks, reverberant data are generated by convolving clean signals with impulse responses taken from the SIREAC and AIR databases. SIREAC provides RIRs of an office and a living room and also offers a facility to modify the reverberation time; in our experiments, the reverberation time of the RIRs is therefore varied from 200 to 900 ms in steps of 100 ms for both SIREAC rooms. In the AIR environment, RIRs are obtained from a studio booth, a meeting room, an office, and a lecture room, which have very low, low, medium, and high reverberation times, respectively. For the small vocabulary task, Pw-EDC adaptation provides considerable improvements over the baseline results, especially at medium and high reverberation times in both environments. Pw-EDC adaptation is compared with a contemporary adaptation technique (Exp-EDC adaptation), which adapts the models in the same manner except that it uses a cruder reverberation model.
Pw-EDC adaptation was found to give better performance in all the rooms of both environments. Pw-EDC is also compared with a state-of-the-art adaptation technique, i.e., unsupervised MLLR, and it was found that Pw-EDC adaptation provides performance similar to MLLR only when the models contain static coefficients. For the medium vocabulary task, Pw-EDC adaptation provides better performance than Exp-EDC adaptation in both environments. However, when compared against unsupervised MLLR, it shows relatively poor performance; the reason is the inaccurate adaptation of the dynamic coefficients of the models. In the end, the robustness of the proposed adaptation technique is due to the precise modeling and estimation of the reverberation energy decay by the Pw-EDC model. Using the Pw-EDC model, semi-blind adaptation has shown consistent improvements across low, medium, and high reverberant environments in both small and medium vocabulary speech recognition tasks.
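The maximum-likelihood model-matching step of the reverberation-time estimation can be sketched roughly as follows. For brevity a single diagonal Gaussian stands in for each GMM, and the training feature vectors are hypothetical; the thesis trains full mixtures on real reverberant speech features.

```python
import math

def fit_diag_gaussian(features):
    """Fit one diagonal Gaussian to a list of feature vectors; a
    single-component stand-in for the per-reverberation-time GMMs."""
    n, d = len(features), len(features[0])
    mean = [sum(f[j] for f in features) / n for j in range(d)]
    var = [max(sum((f[j] - mean[j]) ** 2 for f in features) / n, 1e-6)
           for j in range(d)]
    return mean, var

def log_likelihood(x, model):
    mean, var = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xj - m) ** 2 / v)
               for xj, m, v in zip(x, mean, var))

def estimate_rt60(x, models):
    """Pick the reverberation time whose model scores the test
    feature vector with the highest likelihood."""
    return max(models, key=lambda rt: log_likelihood(x, models[rt]))
```

Each key of `models` is a candidate reverberation time; the winning key is taken as the late-reflection decay estimate.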
|
64 |
Advanced Methods for the Analysis of Radar Sounder Data Acquired at the Ice Sheets / Ilisei, Ana-Maria (January 2016)
The World Climate Research Programme (WCRP) has recently reconfirmed the importance of a better understanding of the cryosphere for advancing the analysis, modeling, and prediction of climate change and its impact on the environment and society. One of the most complete collections of information about the ice sheets and glaciated areas is contained in the data (radargrams) acquired by Radar Sounder (RS) instruments. The need to better understand the structure of the ice sheets and the availability of enormous quantities of radargrams call for the development of automatic techniques for the efficient extraction of information from RS data. This topic has been only marginally addressed in the literature. Thus, in this thesis we address this challenge by contributing four novel automatic techniques for the analysis of radargrams acquired at the ice sheets. The first contribution of this thesis presents a system for the automatic classification of ice subsurface targets in RS data. The core of the system is the extraction of a set of features for target discrimination. The features are based on both the specific statistical properties of the RS signal and the spatial distribution of the ice subsurface targets. The second contribution is an unsupervised model-based technique for the automatic detection and property estimation of ice subsurface targets. This is done by using the parameters of the RS system combined with the output of an automatic image segmentation algorithm. The third contribution presents an automatic technique for the local 3D reconstruction of the ice sheet. It is based on the joint use of RS and altimeter (ALT) data, and relies on a geostatistical interpolation method and on several statistical measures for validating the quality of the interpolation results.
The fourth contribution presents a technique for the automatic estimation of radar power losses in ice as a continuous non-linear function of depth, using RS and ice core data. The technique relies on the detection of ice layers in the RS data, the computation of their reflectivity from the ice core data, and the use of the radar equation for loss estimation. Qualitative and quantitative experimental results obtained on real RS data confirm the effectiveness of the first three techniques. Preliminary results have also been obtained by applying the fourth technique to real RS and ice core data acquired in Greenland. Due to their advantages over the traditional manual approach (e.g., efficiency, objectivity, and the possibility of jointly analyzing multisensor data such as RS and ALT), the proposed methods can support the scientific community in enhancing data usage for a better modeling and understanding of the ice sheets. Moreover, they will become even more important in the near future, since the volume of data is expected to grow with the increase in airborne and possible Earth Observation spaceborne RS missions.
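The loss-estimation step of the fourth contribution can be illustrated with a simplified dB-domain radar equation. Folding geometric spreading and calibration into a single system-gain term is an assumption made here for brevity; it is not the full radar equation used in the thesis.

```python
def two_way_loss_db(echo_power_db, reflectivity_db, system_gain_db=0.0):
    """Per-layer one-way loss from a simplified radar equation in dB:
    received power = system gain + layer reflectivity - 2 * one-way
    loss, rearranged for the loss. Geometric spreading is folded into
    `system_gain_db` (an illustrative simplification)."""
    return (system_gain_db + reflectivity_db - echo_power_db) / 2.0

def loss_profile(layers):
    """`layers`: list of (depth_m, echo_power_db, reflectivity_db)
    tuples, one per detected ice layer. Returns (depth, loss_dB)
    samples, to be interpolated into a continuous non-linear
    function of depth."""
    return [(d, two_way_loss_db(p, r)) for d, p, r in layers]
```

Given echo power measured in the radargram and reflectivity computed from the ice core, each detected layer yields one loss sample along the depth axis.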
|
65 |
Advanced Methods For Building Information Extraction From Very High Resolution SAR Data To Support Emergency Response / Brunner, Dominik (January 2009)
Rapid damage assessment after natural disasters (e.g., earthquakes, floods) and violent conflicts (e.g., war-related destruction) is crucial for initiating effective emergency response actions. Remote sensing satellites equipped with multispectral and Synthetic Aperture Radar (SAR) imaging sensors can provide vital information due to their ability to map affected areas of interest with high geometric precision and in an uncensored manner. The new spaceborne Very High Resolution (VHR) SAR sensors onboard the TerraSAR-X and COSMO-SkyMed satellites can achieve spatial resolutions on the order of 1 m. In VHR SAR data, features from individual urban structures (such as buildings) can be identified in their characteristic settings in urban settlement patterns. This thesis presents novel techniques to support emergency response after catastrophic events using latest-generation Earth observation imagery. In this context, the potential and limits of VHR SAR imagery for extracting information about individual buildings in a (semi-)automatic manner are investigated. The following main novel contributions are presented. First, we investigate the potential of the characteristic double bounce of a building in VHR SAR imagery to be exploited in automatic damage assessment techniques. In particular, we empirically analyze the relation between the double bounce effect and the aspect angle. Then, we propose a radar imaging simulator for urban structures, based on an adapted ray tracing procedure and a Lambertian-specular mixture model, which emphasizes the geometrical effects of the scattering. Furthermore, we propose an approach to the height estimation of buildings from single detected SAR data. It is based on a "hypothesis generation - rendering - matching" procedure, in which a series of hypotheses are generated and rendered by the previously introduced radar imaging simulator in order to compare the simulations with the actual VHR SAR data.
Moreover, we present a method that detects buildings destroyed in an earthquake using pre-event VHR optical and post-event detected VHR SAR imagery. This technique evaluates the similarity between the predicted signature of the intact building in the post-event SAR scene and the actual scene in order to distinguish between damaged and undamaged buildings. Finally, we address the practical requirements of rapid emergency response scenarios by proposing an IT system infrastructure that enables collaborative and distributed geospatial data processing and on-demand map visualization. The effectiveness of all the proposed techniques is confirmed by quantitative and qualitative experimental results obtained on airborne and spaceborne VHR SAR imagery.
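At its core, the "hypothesis generation - rendering - matching" procedure is a search over candidate building heights. The sketch below keeps that skeleton only; the simulator and the similarity score are left as caller-supplied stand-ins for the ray-tracing simulator and the matching criterion used in the thesis.

```python
def estimate_building_height(observed, simulate, heights, similarity):
    """Generic hypothesis-rendering-matching loop: render a simulated
    SAR signature for each candidate height and keep the hypothesis
    most similar to the observed image patch. `simulate` and
    `similarity` are illustrative placeholders, not the thesis's
    actual simulator or matching score."""
    best_h, best_score = None, float("-inf")
    for h in heights:
        score = similarity(observed, simulate(h))
        if score > best_score:
            best_h, best_score = h, score
    return best_h
```

Any differentiable or sampled set of height hypotheses can be plugged in; the quality of the estimate rests entirely on the fidelity of the rendering and the matching score.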
|
66 |
Flexible Functional Split in the 5G Radio Access Networks / Harutyunyan, Davit (January 2019)
Recent developments in mobile networks towards the fifth generation (5G) communication technology have been mainly driven by an explosive increase in mobile traffic demand and by emerging vertical applications with diverse Quality-of-Service (QoS) requirements, which current mobile networks are likely to fall short of satisfying. New cost-efficient technological solutions are therefore required to boost network capacity and advance its capabilities in order to support the QoS requirements of, for example, enhanced mobile broadband services and services requiring ultra-reliable low-latency communication. Network densification is known as one of the promising approaches for increasing network capacity, achieved thanks to aggressive frequency reuse at small cells. Nonetheless, this entails performance degradation, especially for cell-edge users, due to a high Inter-cell Interference (ICI) level. The Cloud Radio Access Network (C-RAN) architecture has been proposed as an efficient way to address the aforementioned challenges, to tackle some of the problems persisting in present-day mobile networks (e.g., inefficient use of frequency bands, high power consumption) and, by employing virtualization techniques, to facilitate network management while paving the way for new business opportunities for mobile virtual network operators. The main idea behind C-RAN is to decouple the radio unit of a base station, referred to as a Decentralized Unit (DU), from the baseband processing unit, referred to as a Centralized Unit (CU), to virtualize the latter in a centralized location, referred to as a CU pool, and then, by applying a so-called "functional split" in the RAN protocol stack between the CU and the DU, to identify the RAN functionalities that are to be performed at the DU and at the CU pool.
Depending on the selected functional split (i.e., the resource centralization level), the bandwidth and latency requirements vary in the fronthaul network, which interconnects the DU with the CU pool. This results in different levels of resource centralization benefits; thus, an inherent trade-off exists between resource centralization benefits and fronthaul requirements in the C-RAN architecture. Although C-RAN provides numerous advantages, it raises a series of challenges, one of which, depending on the functional split option, is a huge fronthaul bandwidth requirement. Optical fiber, thanks to its high bandwidth and low latency, is perceived to be the most capable fronthauling option; nevertheless, it requires a huge investment. Fortunately, recent advancements in Millimeter Wave (mmWave) wireless technology allow for multi-Gbps transmission over distances of one kilometer, making it a good candidate for the fronthaul network in ultra-dense small cell deployment scenarios. In this doctoral dissertation, we first study the trade-offs between different functional splits, considering mmWave technology in the fronthaul network. Specifically, we formulate and solve a Virtual Network Embedding (VNE) problem that aims at minimizing the fronthaul bandwidth utilization along with the number of active mmWave interfaces, and therefore also the power consumption in the fronthaul network, for different functional split scenarios. We then carry out a relative comparison between the mmWave and optical fiber fronthauling technologies in terms of their deployment cost in order to ascertain when it would be economically more efficient to employ mmWave fronthaul instead of optical fiber. Different functional splits enable Mobile Network Operators (MNOs) to harvest different levels of resource centralization benefits and pose diverse fronthaul requirements.
There is no one-size-fits-all functional split that can be adopted in C-RAN to cope with all of its challenges, since each split is more appropriate for a specific scenario than the others. Thus, another problem is to select the optimal functional split for each small cell in the network. This is a non-trivial task, since a number of parameters have to be taken into account in order to make such a choice. To this end, we develop a set of algorithms that dynamically select an optimal split option for each small cell, considering the ICI level as the main criterion. The dynamic functional split selection approach is motivated by the argument that a single static functional split is not a viable option, especially in the long run. The proposed algorithms provide MNOs with various options to trade off promptness, solution optimality, and scalability. Having thoroughly analyzed the C-RAN architecture along with the pros and cons of the different functional split options, the main objective for MNOs who already own mobile network infrastructure and want to migrate to the C-RAN architecture would be to accomplish such a migration with minimal investment. We develop an algorithm that aims at reducing the required investment by reusing the available infrastructure in the most efficient way. To quantify the economic benefit in terms of Total Cost of Ownership (TCO) savings, a case study is carried out that simulates a small cluster of an operational Long Term Evolution Advanced (LTE-A) network, and the proposed infrastructure-aware C-RAN migration algorithm is compared with its infrastructure-unaware counterpart. We also evaluate the multiplexing gain provided by C-RAN in a specific functional split case and draw a comparison with the gain achievable in traditional LTE networks.
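A greedy flavor of such per-cell split selection can be sketched as follows. The split options, bandwidth multipliers, and ICI threshold are illustrative assumptions only; they are not the algorithms or parameter values developed in the dissertation.

```python
# Illustrative split options with assumed fronthaul bandwidth
# multipliers: higher centralization costs more fronthaul bandwidth
# but enables coordinated processing that mitigates ICI.
SPLITS = {
    "PDCP-RLC": {"bandwidth_factor": 1.0, "mitigates_ici": False},
    "MAC-PHY":  {"bandwidth_factor": 2.5, "mitigates_ici": True},
}

def select_splits(cells, fronthaul_budget, ici_threshold=0.5):
    """Greedy per-cell split selection: cells whose inter-cell
    interference exceeds the threshold prefer the more centralized
    split, subject to a total fronthaul bandwidth budget. `cells` is
    a list of (name, traffic_load, ici) tuples. The baseline split is
    assumed to always fit the budget."""
    assignment, used = {}, 0.0
    # serve the most interfered cells first
    for name, load, ici in sorted(cells, key=lambda c: -c[2]):
        wanted = "MAC-PHY" if ici > ici_threshold else "PDCP-RLC"
        cost = load * SPLITS[wanted]["bandwidth_factor"]
        if used + cost <= fronthaul_budget:
            assignment[name], used = wanted, used + cost
        else:  # fall back to the cheapest, least centralized split
            assignment[name] = "PDCP-RLC"
            used += load * SPLITS["PDCP-RLC"]["bandwidth_factor"]
    return assignment
```

Shrinking the fronthaul budget pushes interfered cells back to the distributed split, which is exactly the centralization/fronthaul trade-off discussed above.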
|
67 |
Tape Mbo'e: A Service Oriented Method / Grau Yegros, Ilse (January 2015)
In developing countries, different e-government applications normally do not exchange data with each other, affecting the quality of service provided to citizens as well as transparency. This situation has motivated us to focus on the development of applications in this domain and, more specifically, on the interoperability among these applications. Interoperability has been implemented in the e-government context of multiple countries through Service-Oriented Computing (SOC). In addition to interoperability, SOC provides considerable benefits. However, developing a service-based e-government application (SBeA) is a complex challenge and requires a software engineering method to manage its development process. Such a method should take into account not only the design, construction, and maintenance of SBeAs, but also the context in which it will be used (i.e., developing countries). In fact, in these countries, the scarcity of economic resources and qualified professionals can impose constraints on carrying out e-government projects. Thus, the adopted methods have to keep costs low and consider aspects related to the long-term sustainability of the applications from the beginning. These issues suggest the adoption of Agile Methods (AMs), which have been proven to offer benefits in different projects in developing countries. However, they do not cover SOC characteristics. Therefore, we have proposed Tape Mbo'e (TME), an extension of the agile method OpenUP that supports the construction and maintenance of SBeAs in developing countries. TME has been used in five case studies, embracing both academia and public organizations in Paraguay. It is important to remark that this was the first application of this type of evaluation in the public sector in Paraguay. A full validation of TME requires a long-term study, including its application in a substantial number of e-government projects.
This is beyond the scope, and the possibilities, of the current thesis. Nevertheless, the initial results of the different case studies indicate the feasibility and simplicity of TME when applied in this context.
|
68 |
Advanced Techniques for the Classification of Very High Resolution and Hyperspectral Remote Sensing Images / Persello, Claudio (January 2010)
This thesis is about the classification of images from the last generation of very high resolution (VHR) and hyperspectral remote sensing (RS) systems, which can acquire very high resolution images from satellite and airborne platforms. In particular, these systems can acquire VHR multispectral images, characterized by a geometric resolution on the order of (or smaller than) one meter, and hyperspectral images, characterized by hundreds of bands associated with narrow spectral channels. This type of data allows one to precisely characterize the different materials on the ground and/or the geometrical properties of the different objects (e.g., buildings, streets, agricultural fields) in the scene under investigation. These remotely sensed data provide very useful information for several applications related to the monitoring of the natural environment and of human structures. However, in order to develop real-world applications with VHR and hyperspectral data, it is necessary to define automatic techniques for an efficient and effective analysis of the data. Here, we focus our attention on RS image classification, which is at the basis of most of the applications related to environmental monitoring. Image classification translates the features that represent the information present in the data into thematic maps of the land-cover types, by solving a pattern recognition problem. However, the huge amount of data associated with VHR and hyperspectral RS images makes the classification problem very complex, and the available techniques are still inadequate for analyzing these kinds of data. For this reason, the general objective of this thesis is to develop novel techniques for the analysis and classification of VHR and hyperspectral images, in order to improve the capability to automatically extract useful information from these data and to exploit it in real applications.
Moreover, we address the classification of RS images under operational conditions where the available reference labeled samples are few and/or not completely reliable (which is quite common in many real problems). In particular, the following specific issues are considered in this work:
1. development of feature selection techniques for the classification of hyperspectral images, aimed at identifying a subset of the original features that exhibits at the same time a high capability to discriminate among the considered classes and a high invariance in the spatial domain of the scene;
2. classification of RS images when the available training set is not fully reliable, i.e., some labeled samples may be associated to the wrong information class (mislabeled patterns);
3. active learning techniques for interactive classification of RS images;
4. definition of a protocol for accuracy assessment in the classification of VHR images that is based on the analysis of both thematic and geometric accuracy.
For each considered topic, an in-depth study of the literature is carried out and the limitations of currently published methodologies are highlighted. Starting from this analysis, novel solutions are theoretically developed, implemented, and applied to real RS data in order to verify their effectiveness. The obtained experimental results confirm the effectiveness of all the proposed techniques.
|
69 |
Active learning methods for classification and regression problems / Pasolli, Edoardo (January 2011)
In the pattern recognition community, one of the most critical problems in the design of supervised classification and regression systems is the quality and quantity of the exploited training samples (ground truth). This problem is particularly important in applications in which the process of training sample collection is an expensive and time-consuming task subject to different sources of error. Active learning is an interesting approach proposed in the literature to address the problem of ground-truth collection, in which training samples are selected in an iterative way in order to minimize the number of samples involved and the intervention of human users.
In this thesis, new methodologies of active learning for classification and regression problems are proposed and applied in three main application fields, which are the remote sensing, biomedical, and chemometrics fields. In particular, the proposed methodological contributions include: i) three strategies for the support vector machine (SVM) classification of electrocardiographic signals; ii) a strategy for SVM classification in the context of remote sensing images; iii) combination of spectral and spatial information in the context of active learning for remote sensing image classification; iv) exploitation of active learning to solve the problem of covariate shift, which may occur when a classifier trained on a portion of the image is applied to the rest of the image; moreover, several strategies for regression problems are proposed to estimate v) biophysical parameters from remote sensing data and vi) chemical concentrations from spectroscopic data; vii) a framework for assisting a human user in the design of a ground-truth for classifying a given optical remote sensing image.
Experiments conducted on simulated and real data sets are reported and discussed. They all suggest that, despite their complexity, ground-truth collection problems can be tackled satisfactorily by the proposed approaches.
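As one concrete example of the query functions used in active learning, margin (uncertainty) sampling picks the unlabeled candidates closest to the current decision boundary. The sketch below is a generic illustration, not one of the specific strategies proposed in the thesis.

```python
def margin_sampling(candidates, decision_value, batch_size):
    """Select the unlabeled samples the current model is least certain
    about: those with the smallest absolute decision value (distance to
    an SVM-like decision boundary). The selected batch is labeled by
    the human user and added to the training set, and the model is
    retrained; the loop repeats until a labeling budget is exhausted."""
    ranked = sorted(candidates, key=lambda x: abs(decision_value(x)))
    return ranked[:batch_size]
```

Because only the most informative samples are sent to the annotator, the same accuracy is typically reached with far fewer labeled samples than random selection would require.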
|
70 |
Enabling Novel Interactions between Applications and Software-Defined Networks / Marsico, Antonio (January 2018)
Over the last few decades, the pervasive diffusion of software has greatly simplified the introduction of new functionalities: updates that used to require complex and expensive re-engineering of physical devices can now be accomplished almost at the push of a button. In the context of telecommunications networks, recently modernized by the emergence of the Software-Defined Networking (SDN) paradigm, software has taken the form of self-contained applications driving the behavior of the control plane of the network. Such SDN controller applications can introduce profound changes and novel functionalities to large deployed networks without requiring downtime or any changes to deployed boxes, a revolutionary approach compared to current best practices, and one which greatly simplifies, perhaps even enables, solving the challenges in the provisioning of network resources imposed by modern distributed business applications consuming a network's services (e.g., bank communication systems, smart cities, remote surgery). This thesis studies three types of interaction between business applications, SDN controller applications, and networks, with the aim of optimizing the network response to a consumer's needs. First, a novel interaction paradigm between SDN controller applications and networks is proposed in order to solve a configuration problem of SDN networks caused by the limited memory capacity of SDN devices. An algorithm that offers a virtual memory to the network devices is designed and implemented in an SDN application. This interaction increases the amount of traffic that an SDN device can process in the case of memory overflows. Second, an interaction between business applications and SDN networks shows how the blocking probability of service requests in application-centric networks can be reduced. A negotiation scheme based on an Intent paradigm is presented.
Business applications can request a connectivity service, receive several alternative solutions from the network based on a degradation of the requirements, and provide feedback. Last, an interaction between business applications, SDN controller applications, and networks is defined in order to increase the number of ad-hoc connectivity services offered by network operators to customers. Several service providers can implement connectivity services in the form of SDN applications and offer them via an SDN App Store on top of an SDN network controller. The App Store demonstrates a lower overhead for the introduction of customized connectivity services.
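The first interaction described above, a virtual memory for the limited flow tables of SDN devices, can be sketched with a simple swap policy. The class names, the LRU eviction policy, and the packet-in-style reinstallation are illustrative assumptions, not the algorithm designed in the thesis.

```python
from collections import OrderedDict

class VirtualFlowTable:
    """Sketch of a controller-side 'virtual memory' for an SDN switch:
    when the hardware flow table (capacity `size`) overflows, the
    least-recently-matched rule is swapped out to controller memory
    and reinstalled on demand, mimicking a packet-in round trip."""

    def __init__(self, size):
        self.size = size
        self.hardware = OrderedDict()   # rules installed on the switch
        self.swapped = {}               # rules held by the controller

    def match(self, rule, action):
        if rule in self.hardware:
            self.hardware.move_to_end(rule)      # refresh recency
            return "hit"
        if rule in self.swapped:                 # swap the rule back in
            action = self.swapped.pop(rule)
        if len(self.hardware) >= self.size:      # evict the LRU rule
            old_rule, old_action = self.hardware.popitem(last=False)
            self.swapped[old_rule] = old_action
        self.hardware[rule] = action
        return "miss"
```

A miss on a swapped-out rule costs one controller round trip, but traffic is still forwarded correctly instead of being dropped when the table overflows.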
|