21

Advanced classification methods for UAV imagery

Zeggada, Abdallah January 2018 (has links)
The rapid technological advances recently seen in remote sensing acquisition platforms have brought many benefits to automated territory control and monitoring. In particular, unmanned aerial vehicle (UAV) technology has drawn a lot of attention, providing an efficient solution especially in real-time applications. This is mainly due to their capacity to collect extremely high resolution (EHR) data over inaccessible areas and limited coverage zones, thanks to their small size and rapidly deployable flight capability, as well as their ease of use and affordability. The very high level of detail of the data acquired via UAVs, however, requires further treatment through suitable image processing and analysis approaches in order to be properly exploited. In this respect, the methodological contributions proposed in this thesis include: i) a complete processing chain that assists Avalanche Search and Rescue (SAR) operations by scanning UAV-acquired images of the avalanche debris in order to detect, in real time, victims buried under the snow and their related objects; ii) two multilabel deep learning strategies for coarsely describing extremely high resolution images in urban scenarios; iii) a novel multilabel conditional random field classification framework that simultaneously exploits spatial contextual information and cross-correlation between labels; iv) a novel spatial and structured support vector machine for multilabel image classification, obtained by adding to the cost function of the structured support vector machine a term that enhances spatial smoothness within a one-step process. Experiments conducted on real UAV images are reported and discussed alongside suggestions for potential future improvements and research lines.
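To make contribution iv) more concrete, the sketch below shows (in Python, not taken from the thesis) a toy multilabel energy in which a classifier's unary scores are combined with a pairwise term that penalizes label disagreement between spatially adjacent image tiles; the exact cost function, optimizer, and variable names used in the dissertation differ.

```python
# Hedged sketch: toy multilabel energy with a spatial smoothness term.
# All names and the exact form of the penalty are illustrative, not the thesis's model.
import numpy as np

def energy(scores, labels, neighbors, lam=0.5):
    """scores: (n_tiles, n_labels) classifier scores;
    labels:  (n_tiles, n_labels) binary multilabel assignment;
    neighbors: pairs (i, j) of spatially adjacent tiles."""
    unary = -np.sum(scores * labels)                  # data term: reward high-scoring labels
    smooth = sum(np.abs(labels[i] - labels[j]).sum()  # penalize label disagreement
                 for i, j in neighbors)               # between adjacent tiles
    return unary + lam * smooth

scores = np.random.randn(4, 3)
labels = (scores > 0).astype(int)
print(energy(scores, labels, neighbors=[(0, 1), (1, 2), (2, 3)]))
```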
22

Advanced Methods for the Analysis of Radar Sounder Data Acquired at the Ice Sheets

Ilisei, Ana-Maria January 2016 (has links)
The World Climate Research Programme (WCRP) has recently reconfirmed the importance of a better understanding of the Cryosphere for advancing the analysis, modeling and prediction of climate change and its impact on the environment and society. One of the most complete collections of information about the ice sheets and glaciated areas is contained in the data (radargrams) acquired by Radar Sounder (RS) instruments. The need to better understand the structure of the ice sheets and the availability of enormous quantities of radargrams call for the development of automatic techniques for an efficient extraction of information from RS data. This topic has been only marginally addressed in the literature. Thus, in this thesis we address this challenge by contributing four novel automatic techniques for the analysis of radargrams acquired at the ice sheets. The first contribution of this thesis presents a system for the automatic classification of ice subsurface targets in RS data. The core of the system is the extraction of a set of features for target discrimination. The features are based on both the specific statistical properties of the RS signal and the spatial distribution of the ice subsurface targets. The second contribution is an unsupervised model-based technique for the automatic detection and property estimation of ice subsurface targets. This is done by using the parameters of the RS system combined with the output of an automatic image segmentation algorithm. The third contribution presents an automatic technique for the local 3D reconstruction of the ice sheet. It is based on the joint use of RS and altimeter (ALT) data, and relies on a geostatistical interpolation method and on several statistical measures for validating the interpolation results and assessing their quality. The fourth contribution presents a technique for the automatic estimation of radar power losses in ice as a continuous non-linear function of depth, by using RS and ice core data. The technique relies on the detection of ice layers in the RS data, the computation of their reflectivity from the ice core data, and the use of the radar equation for loss estimation. Qualitative and quantitative experimental results obtained on real RS data confirm the effectiveness of the first three techniques. Also, preliminary results have been obtained by applying the fourth technique to real RS and ice core data acquired in Greenland. Due to their advantages over the traditional manual approach, e.g., efficiency, objectivity, and the possibility of jointly analyzing multisensor data (e.g., RS, ALT), the proposed methods can support the scientific community in enhancing the data usage for a better modeling and understanding of the ice sheets. Moreover, they will become even more important in the near future, since the volume of data is expected to grow with the increase in airborne and possible spaceborne Earth Observation RS missions.
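As a rough illustration of the fourth contribution, the hedged sketch below relates the measured echo power of a detected ice layer to the reflectivity derived from ice-core data through a heavily simplified radar budget, and then fits a smooth loss-versus-depth curve; the constants, the geometry term, and the fitting model are placeholders rather than the method actually developed in the thesis.

```python
# Hedged sketch: back out cumulative attenuation at detected layers from a toy
# radar budget, then fit a non-linear loss-vs-depth curve. Illustrative only.
import numpy as np

def loss_db(p_rx_db, reflectivity, depth_m, sys_const_db=0.0):
    """Two-way loss (dB) at a detected layer.
    p_rx_db: measured echo power (dB); reflectivity: power reflection
    coefficient from ice-core data; depth_m: layer depth."""
    spreading_db = 20 * np.log10(2 * depth_m)            # toy spherical spreading term
    expected_db = sys_const_db + 10 * np.log10(reflectivity) - spreading_db
    return expected_db - p_rx_db                          # the remainder is attenuation

depths = np.array([200.0, 600.0, 1200.0])                 # detected layer depths (m)
p_rx   = np.array([-90.0, -105.0, -120.0])                # measured echo powers (dB)
losses = np.array([loss_db(p, 1e-3, d) for p, d in zip(p_rx, depths)])
coeffs = np.polyfit(depths, losses, deg=2)                # smooth non-linear loss vs. depth
print(coeffs)
```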
23

Advanced Methods For Building Information Extraction From Very High Resolution SAR Data To Support Emergency Response

Brunner, Dominik January 2009 (has links)
Rapid damage assessment after natural disasters (e.g. earthquakes, floods) and violent conflicts (e.g. war-related destruction) is crucial for initiating effective emergency response actions. Remote sensing satellites equipped with multispectral and Synthetic Aperture Radar (SAR) imaging sensors can provide vital information due to their ability to map affected areas of interest with high geometric precision and in an uncensored manner. The new spaceborne Very High Resolution (VHR) SAR sensors onboard the TerraSAR-X and COSMO-SkyMed satellites can achieve spatial resolutions in the order of 1 m. In VHR SAR data, features from individual urban structures (like buildings) can be identified in their characteristic settings in urban settlement patterns. This thesis presents novel techniques to support emergency response after catastrophic events using latest-generation Earth observation imagery. In this context, the potential and limits of VHR SAR imagery for extracting information about individual buildings in a (semi-)automatic manner are investigated. The following main novel contributions are presented. First, we investigate the potential of the characteristic double bounce of a building in VHR SAR imagery to be exploited in automatic damage assessment techniques. In particular, we analyze empirically the relation between the double bounce effect and the aspect angle. Then, we propose a radar imaging simulator for urban structures, which is based on an adapted ray tracing procedure and a Lambertian-specular mixture model, emphasizing the geometrical effects of the scattering. Furthermore, we propose an approach to the height estimation of buildings from single detected SAR data. It is based on a "hypothesis generation - rendering - matching" procedure, where a series of hypotheses are generated and rendered by the previously introduced radar imaging simulator in order to compare the simulations with the actual VHR SAR data. Moreover, we present a method that detects buildings destroyed in an earthquake using pre-event VHR optical and post-event detected VHR SAR imagery. This technique evaluates the similarity between the predicted signature of the intact building in the post-event SAR scene and the actual scene in order to distinguish between damaged and undamaged buildings. Finally, we address the practical requirements of rapid emergency response scenarios by proposing an IT system infrastructure that enables collaborative and distributed geospatial data processing and on-demand map visualization. The effectiveness of all proposed techniques is confirmed by quantitative and qualitative experimental results obtained on airborne and spaceborne VHR SAR imagery.
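The "hypothesis generation - rendering - matching" idea for building-height estimation can be pictured as a simple search loop; the hedged sketch below is illustrative only, and simulate_sar and match are stand-ins for the ray-tracing simulator and similarity measure described above rather than any real API.

```python
# Hedged sketch of hypothesis generation - rendering - matching for height estimation.
import numpy as np

def estimate_height(observed_patch, footprint, heights, simulate_sar, match):
    """Try a series of height hypotheses, render each with the simulator and
    keep the one whose simulated signature best matches the observed patch."""
    scores = []
    for h in heights:
        simulated = simulate_sar(footprint, height=h)    # render the hypothesis
        scores.append(match(simulated, observed_patch))  # e.g. a similarity score
    return heights[int(np.argmax(scores))]

# toy usage with stand-in simulator and matcher (not the thesis's components)
heights = np.arange(3.0, 30.0, 1.0)
best = estimate_height(observed_patch=7.0, footprint=None, heights=heights,
                       simulate_sar=lambda fp, height: height,  # dummy "rendering"
                       match=lambda sim, obs: -abs(sim - obs))  # dummy similarity
print(best)   # 7.0 with these stand-ins
```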
24

Flexible functional split in the 5G radio access networks

Harutyunyan, Davit January 2019 (has links)
Recent developments in mobile networks towards the fifth generation (5G) communication technology have been mainly driven by an explosive increase in mobile traffic demand and by emerging vertical applications with diverse Quality-of-Service (QoS) requirements, which current mobile networks are likely to fall short of satisfying. New cost-efficient technological solutions are therefore required to boost network capacity and advance its capabilities in order to support the QoS requirements of, for example, enhanced mobile broadband services and those requiring ultra-reliable low-latency communication. Network densification is known to be one of the most promising approaches for increasing network capacity, achieved thanks to aggressive frequency reuse at small cells. Nonetheless, this entails performance degradation, especially for cell-edge users, due to a high Inter-cell Interference (ICI) level. The Cloud Radio Access Network (C-RAN) architecture has been proposed as an efficient way to address the aforementioned challenges, tackle some of the problems persistent in present-day mobile networks (e.g., inefficient use of frequency bands, high power consumption) and, by employing virtualization techniques, facilitate network management while paving the way for new business opportunities for mobile virtual network operators. The main idea behind C-RAN is to decouple the radio unit of a base station, referred to as a Decentralized Unit (DU), from the baseband processing unit, referred to as a Centralized Unit (CU), and to virtualize the latter in a centralized location, referred to as a CU pool. A so-called "functional split" of the RAN protocol stack between the CU and the DU then identifies which RAN functionalities are to be performed at the DU and which at the CU pool. Depending on the selected functional split (i.e., the resource centralization level), the bandwidth and latency requirements in the fronthaul network, which interconnects the DU with the CU pool, vary, and so does the level of resource centralization benefits. Thus, an inherent trade-off exists between resource centralization benefits and fronthaul requirements in the C-RAN architecture. Although C-RAN provides numerous advantages, it raises a series of challenges, one of which, depending on the functional split option, is a huge fronthaul bandwidth requirement. Optical fiber, thanks to its high bandwidth and low latency, is perceived to be the most capable fronthauling option; nevertheless, it requires a huge investment. Fortunately, recent advances in Millimeter Wave (mmWave) wireless technology allow for multi-Gbps transmission over distances of one kilometer, making it a good candidate for the fronthaul network in an ultra-dense small cell deployment scenario. In this doctoral dissertation, we first study the trade-offs between different functional splits, considering mmWave technology in the fronthaul network. Specifically, we formulate and solve a Virtual Network Embedding (VNE) problem that aims at minimizing the fronthaul bandwidth utilization along with the number of active mmWave interfaces, thereby also minimizing the power consumption in the fronthaul network, for different functional split scenarios. We then carry out a relative comparison between the mmWave and optical fiber fronthauling technologies in terms of their deployment cost, in order to ascertain when it would be economically more efficient to employ mmWave fronthaul instead of optical fiber. Different functional splits enable Mobile Network Operators (MNOs) to harvest different levels of resource centralization benefits and pose diverse fronthaul requirements. There is no one-size-fits-all functional split that can be adopted in C-RAN to cope with all of its challenges, since each split is more appropriate in a specific scenario than the others. Thus, another problem is to select the optimal functional split for each small cell in the network. This is a non-trivial task, since a number of parameters must be taken into account in order to make such a choice. To this end, we developed a set of algorithms that dynamically select an optimal split option for each small cell considering the ICI level as the main criterion. The dynamic functional split selection approach is motivated by the argument that a single static functional split is not a viable option, especially in the long run. The proposed algorithms provide MNOs with various options to trade off promptness, solution optimality, and scalability. Having thoroughly analyzed the C-RAN architecture along with the pros and cons of the different functional split options, the main objective for MNOs who already own mobile network infrastructure and want to migrate to the C-RAN architecture would be to accomplish such a migration with minimal investment. We developed an algorithm that aims at reducing the required investment by reusing the available infrastructure in the most efficient way. To quantify the economic benefit in terms of Total Cost of Ownership (TCO) savings, a case study is carried out that simulates a small cluster of an operational Long Term Evolution Advanced (LTE-A) network, and the proposed infrastructure-aware C-RAN migration algorithm is compared with its infrastructure-unaware counterpart. We also evaluate the multiplexing gain provided by C-RAN in a specific functional split case and draw a comparison with the one achievable in traditional LTE networks.
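As a toy illustration of per-cell functional split selection driven by ICI under a fronthaul capacity budget, consider the hedged Python sketch below; the split options, bandwidth figures, and greedy rule are invented for illustration and do not reproduce the algorithms or the VNE formulation of the dissertation.

```python
# Hedged sketch: greedy ICI-driven split selection under a shared fronthaul budget.
# (split_id, fronthaul_gbps) ordered from least to most centralized; values are toy.
SPLITS = [("PDCP-RLC", 0.15), ("MAC-PHY", 0.45), ("PHY-RF", 2.5)]

def select_splits(cells, capacity_gbps):
    """cells: list of (cell_id, ici_level); higher ICI favours deeper
    centralization (better coordination), if the fronthaul can carry it."""
    plan, used = {}, 0.0
    for cell_id, ici in sorted(cells, key=lambda c: -c[1]):   # worst ICI first
        idx = 2 if ici > 0.7 else 1 if ici > 0.3 else 0       # desired split level
        while idx > 0 and used + SPLITS[idx][1] > capacity_gbps:
            idx -= 1                                          # fall back if no capacity
        plan[cell_id] = SPLITS[idx][0]
        used += SPLITS[idx][1]
    return plan

print(select_splits([("c1", 0.9), ("c2", 0.2), ("c3", 0.5)], capacity_gbps=3.2))
```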
25

Tape Mbo'e: A Service Oriented Method

Grau Yegros, Ilse January 2015 (has links)
In developing countries, different e-government applications normally do not exchange data with each other, affecting both the quality of service provided to citizens and transparency. This situation has motivated us to focus on the development of applications in this domain and, more specifically, on the interoperability among these applications. Interoperability has been implemented in the e-government context of multiple countries through Service-Oriented Computing (SOC). In addition to interoperability, SOC provides considerable benefits. However, developing a service-based e-government application (SBeA) is a complex challenge and requires a software engineering method to manage its development process. Such a method should take into account not only the design, construction, and maintenance of SBeA, but also the context in which it will be used (i.e., developing countries). In fact, in these countries, the scarcity of economic resources and qualified professionals can impose constraints on carrying out e-government projects. Thus, the adopted methods have to keep costs low and consider aspects related to the long-term sustainability of the applications from the beginning. These issues suggest the adoption of Agile Methods (AMs), which have proved to offer benefits in different projects in developing countries. However, they do not include SOC characteristics. Therefore, we have proposed Tape Mbo'e (TME), an extension of the agile method "OpenUP" that supports the construction and the maintenance of SBeA in developing countries. TME has been used in five case studies, which embraced both academia and public organizations in Paraguay. It is important to remark that this was the first application of this type of evaluation in the public sector in Paraguay. A full validation of TME requires a long-term study, including its application in a consistent number of e-government projects. This is beyond the scope, and the possibilities, of the current thesis. Nevertheless, the initial results of the different case studies indicate the feasibility and simplicity of TME when applied in this context.
26

Advanced Techniques for the Classification of Very High Resolution and Hyperspectral Remote Sensing Images

Persello, Claudio January 2010 (has links)
This thesis is about the classification of images acquired by the last generation of very high resolution (VHR) and hyperspectral remote sensing (RS) systems, which are capable of acquiring very high resolution images from satellite and airborne platforms. In particular, these systems can acquire VHR multispectral images characterized by a geometric resolution in the order of, or smaller than, one meter, and hyperspectral images characterized by hundreds of bands associated with narrow spectral channels. This type of data allows a precise characterization of the different materials on the ground and/or of the geometrical properties of the different objects (e.g., buildings, streets, agricultural fields, etc.) in the scene under investigation. These remotely sensed data provide very useful information for several applications related to the monitoring of the natural environment and of human structures. However, in order to develop real-world applications with VHR and hyperspectral data, it is necessary to define automatic techniques for an efficient and effective analysis of the data. Here, we focus our attention on RS image classification, which is at the basis of most of the applications related to environmental monitoring. Image classification translates the features that represent the information present in the data into thematic maps of land-cover types by solving a pattern recognition problem. However, the huge amount of data associated with VHR and hyperspectral RS images makes the classification problem very complex, and the available techniques are still inadequate for analyzing these kinds of data. For this reason, the general objective of this thesis is to develop novel techniques for the analysis and classification of VHR and hyperspectral images, in order to improve the capability to automatically extract useful information from these data and to exploit it in real applications. Moreover, we address the classification of RS images in operational conditions where the available reference labeled samples are few and/or not completely reliable (which is quite common in many real problems). In particular, the following specific issues are considered in this work: 1. the development of feature selection techniques for the classification of hyperspectral images, aimed at identifying a subset of the original features that exhibits at the same time high capability to discriminate among the considered classes and high invariance in the spatial domain of the scene; 2. the classification of RS images when the available training set is not fully reliable, i.e., some labeled samples may be associated with the wrong information class (mislabeled patterns); 3. active learning techniques for the interactive classification of RS images; 4. the definition of a protocol for accuracy assessment in the classification of VHR images that is based on the analysis of both thematic and geometric accuracy. For each considered topic, an in-depth study of the literature is carried out and the limitations of currently published methodologies are highlighted. Starting from this analysis, novel solutions are theoretically developed, implemented and applied to real RS data in order to verify their effectiveness. The obtained experimental results confirm the effectiveness of all the proposed techniques.
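Issue 1 (feature selection that balances class discrimination against spatial invariance) can be illustrated with the hedged sketch below, which scores each band by a simple between/within-class spread combined with the stability of its mean across two areas of the scene; the actual criterion and search strategy developed in the thesis are different.

```python
# Hedged sketch: rank hyperspectral bands by discrimination + spatial invariance.
import numpy as np

def band_score(x_a, y_a, x_b, alpha=0.5):
    """x_a: (n, n_bands) labelled samples from one area, y_a: their labels;
    x_b: samples from another area of the scene (used only for invariance)."""
    classes = np.unique(y_a)
    means = np.array([x_a[y_a == c].mean(axis=0) for c in classes])
    discrim = means.std(axis=0) / (x_a.std(axis=0) + 1e-9)   # between/within spread
    shift = np.abs(x_a.mean(axis=0) - x_b.mean(axis=0))      # band drift across areas
    invar = 1.0 / (1.0 + shift)
    return alpha * discrim + (1 - alpha) * invar             # higher is better

def select_bands(x_a, y_a, x_b, k=10):
    return np.argsort(-band_score(x_a, y_a, x_b))[:k]

rng = np.random.default_rng(1)
x_a = rng.normal(size=(100, 20)); y_a = rng.integers(0, 3, 100)
x_b = x_a + rng.normal(0.0, 0.2, size=x_a.shape)             # a "different area" stand-in
print(select_bands(x_a, y_a, x_b, k=5))
```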
27

Active learning methods for classification and regression problems

Pasolli, Edoardo January 2011 (has links)
In the pattern recognition community, one of the most critical problems in the design of supervised classification and regression systems is the quality and quantity of the exploited training samples (ground-truth). This problem is particularly important in applications in which the process of training sample collection is an expensive and time-consuming task subject to different sources of error. Active learning is an interesting approach proposed in the literature to address the problem of ground-truth collection, in which training samples are selected in an iterative way in order to minimize the number of involved samples and the intervention of human users. In this thesis, new active learning methodologies for classification and regression problems are proposed and applied in three main application fields, namely remote sensing, biomedicine, and chemometrics. In particular, the proposed methodological contributions include: i) three strategies for the support vector machine (SVM) classification of electrocardiographic signals; ii) a strategy for SVM classification in the context of remote sensing images; iii) the combination of spectral and spatial information in the context of active learning for remote sensing image classification; iv) the exploitation of active learning to solve the problem of covariate shift, which may occur when a classifier trained on a portion of the image is applied to the rest of the image; moreover, several strategies for regression problems are proposed to estimate v) biophysical parameters from remote sensing data and vi) chemical concentrations from spectroscopic data; finally, vii) a framework for assisting a human user in the design of a ground-truth for classifying a given optical remote sensing image. Experiments conducted on simulated and real data sets are reported and discussed. They all suggest that, despite their complexity, ground-truth collection problems can be tackled satisfactorily by the proposed approaches.
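A minimal margin-sampling active-learning loop with an SVM, in the spirit of the classification strategies listed above, is sketched below for the binary case; the batch size, stopping rule, and simulated oracle are illustrative assumptions, not the thesis's algorithms.

```python
# Hedged sketch: binary margin-sampling active learning with an SVM.
import numpy as np
from sklearn.svm import SVC

def active_learning(x_train, y_train, x_pool, y_pool, rounds=10, batch=5):
    """Query the pool samples closest to the SVM decision boundary and add
    their labels to the training set, iteratively."""
    for _ in range(rounds):
        clf = SVC(kernel="rbf", gamma="scale").fit(x_train, y_train)
        margin = np.abs(clf.decision_function(x_pool))   # distance to the boundary
        pick = np.argsort(margin)[:batch]                # most uncertain samples
        x_train = np.vstack([x_train, x_pool[pick]])     # the "oracle" labels them
        y_train = np.concatenate([y_train, y_pool[pick]])
        x_pool = np.delete(x_pool, pick, axis=0)
        y_pool = np.delete(y_pool, pick)
    return clf

rng = np.random.default_rng(0)
x = rng.normal(size=(400, 4)); y = (x[:, 0] + x[:, 1] > 0).astype(int)
model = active_learning(x[:20], y[:20], x[20:], y[20:], rounds=5, batch=5)
```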
28

Enabling Novel Interactions between Applications and Software-Defined Networks

Marsico, Antonio January 2018 (has links)
Over the last few decades the pervasive diffusion of software has greatly simplified the introduction of new functionalities: updates that used to require complex and expensive re-engineering of physical devices can now be accomplished almost at the push of a button. In the context of telecommunications networks, recently modernized by the emergence of the Software-Defined Networking (SDN) paradigm, software has manifested in the form of self-contained applications driving the behavior of the control plane of the network. Such SDN controller applications can introduce profound changes and novel functionalities to large deployed networks without requiring downtime or any changes to deployed boxes, a revolutionary approach compared to current best practices, one which greatly simplifies, perhaps even enables, solving the challenges in the provisioning of network resources imposed by modern distributed business applications consuming a network’s services (e.g., bank communication systems, smart cities, remote surgery, etc.). This thesis studies three types of interaction between business applications, SDN controller applications and networks, with the aim of optimizing the network response to a consumer’s needs. First, a novel interaction paradigm between SDN controller applications and networks is proposed in order to solve a potential configuration problem of SDN networks caused by the limited memory capacity of SDN devices. An algorithm that offers a virtual memory to the network devices is designed and implemented in an SDN application. This interaction increases the amount of traffic that an SDN device can process in the case of memory overflow. Second, an interaction between business applications and SDN networks shows how the blocking probability of service requests can be reduced in application-centric networks. A negotiation scheme based on an Intent paradigm is presented: business applications can request a connectivity service, receive several alternative solutions from the network based on a degradation of requirements, and provide feedback. Last, an interaction between business applications, SDN controller applications and networks is defined in order to increase the number of ad-hoc connectivity services offered by network operators to customers. Several service providers can implement connectivity services in the form of SDN applications and offer them via an SDN App Store on top of an SDN network controller. The App Store demonstrates a lower overhead for the introduction of customized connectivity services.
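The first interaction (a controller-side "virtual memory" for overflowing flow tables) can be pictured with the hedged sketch below, which evicts the oldest installed rule to software when the hardware table is full and restores it on a table miss; the class and method names are invented, and the real system's eviction policy and southbound interface are not shown.

```python
# Hedged sketch: a toy controller-side swap space for an overflowing flow table.
from collections import OrderedDict

class VirtualFlowTable:
    def __init__(self, hw_capacity):
        self.hw = OrderedDict()      # rules currently installed in the switch
        self.swap = {}               # rules evicted to controller memory
        self.cap = hw_capacity

    def install(self, match, actions):
        if len(self.hw) >= self.cap:                  # hardware table full
            old_match, old_actions = self.hw.popitem(last=False)
            self.swap[old_match] = old_actions        # evict the oldest rule to software
        self.hw[match] = actions

    def packet_in(self, match):
        """Called on a table miss: reinstall the rule if it was evicted earlier."""
        if match in self.swap:
            self.install(match, self.swap.pop(match))
            return True
        return False

table = VirtualFlowTable(hw_capacity=2)
for i in range(3):
    table.install(("10.0.0.%d" % i, 80), ["output:1"])
print(table.packet_in(("10.0.0.0", 80)))              # True: rule restored from swap
```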
29

Crowd Motion Analysis: Segmentation, Anomaly Detection, and Behavior Classification

Ullah, Habib January 2015 (has links)
The objective of this doctoral study is to develop efficient techniques for flow segmentation, anomaly detection, and behavior classification in crowd scenes. Considering the complexities of occlusion, we focused our study on gathering the motion information at a higher scale, thus not associating it with single objects but considering the crowd as a single entity. Firstly, we propose methods for flow segmentation based on correlation features, graph cut, Conditional Random Fields (CRF), an enthalpy model, and a particle mutual influence model. Secondly, methods based on deviant orientation information, a Gaussian Mixture Model (GMM), and an MLP neural network combined with GoodFeaturesToTrack are proposed to detect two types of anomalies. The first one detects deviant motion of the pedestrians compared to what has been observed beforehand. The second one detects panic situations by adopting the GMM and the MLP to learn the behavior of the motion features extracted from a grid of particles and from GoodFeaturesToTrack, respectively. Finally, we propose particle-driven and hybrid approaches to classify the behaviors of the crowd in terms of lane, arch/ring, bottleneck, blocking and fountainhead within a region of interest (ROI). For this purpose, the particle-driven approach extracts and fuses spatio-temporal features. The spatial features represent the density of neighboring particles in a predefined proximity, whereas the temporal features represent the rendering of the trajectories traveled by the particles. The hybrid approach exploits a thermal diffusion process combined with an extended variant of the social force model (SFM).
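A minimal version of the GMM-based panic/anomaly detection idea is sketched below: a Gaussian Mixture Model is fitted to motion features from normal frames, and a frame is flagged when its average log-likelihood drops below a threshold; the feature extraction, model size, and threshold are illustrative assumptions rather than the thesis's configuration.

```python
# Hedged sketch: flag crowd frames whose motion features are unlikely under a
# GMM trained on "normal" behavior.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_normal_model(normal_features, n_components=5):
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=0).fit(normal_features)

def is_anomalous(gmm, frame_features, threshold):
    """frame_features: (n_particles, d) motion vectors for one frame."""
    return gmm.score(frame_features) < threshold      # mean log-likelihood per sample

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 2))          # stand-in motion features
gmm = fit_normal_model(normal)
print(is_anomalous(gmm, rng.normal(4.0, 1.0, size=(50, 2)), threshold=-5.0))
```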
30

Analysis of Complex Human Interactions in Unconstrained Videos

Zhang, Bo January 2015 (has links)
The literature on human activity recognition is very broad and many different approaches have been presented to interpret the content of a visual scene. In this thesis, we are interested in two-person interaction analysis in unconstrained videos. Specifically, we focus on two open issues: (1) discriminative patch segmentation, and (2) human interaction recognition. For the first problem, we introduce two models to extract discriminative patches of human interactions, applied to different scenarios, namely videos from surveillance cameras and videos from TV shows. For the second problem, we propose two different frameworks: (1) human interaction recognition using the self-similarity matrix, and (2) human interaction recognition using the multiple-instance-learning approach. Experimental results demonstrate the effectiveness of our methods.
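For reference, a self-similarity matrix of the kind used in the first framework can be computed from per-frame descriptors as in the hedged sketch below; the descriptor itself (here random data) is a placeholder for the interaction features used in the thesis.

```python
# Hedged sketch: build a T x T self-similarity matrix from per-frame descriptors.
import numpy as np

def self_similarity_matrix(frame_descriptors):
    """frame_descriptors: (T, d) one feature vector per frame."""
    diff = frame_descriptors[:, None, :] - frame_descriptors[None, :, :]
    return np.linalg.norm(diff, axis=-1)              # SSM[t1, t2] = pairwise distance

T, d = 60, 8
ssm = self_similarity_matrix(np.random.rand(T, d))
print(ssm.shape)   # (60, 60); the SSM (or features derived from it) feeds a classifier
```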
