41 | Analysis and Protection of SIP based Services. Ferdous, Raihana (January 2014)
Multimedia communications over IP are booming, as they offer higher flexibility and more features than traditional voice and video services. IP telephony, known as Voice over IP (VoIP), is one of the commercially most important emerging trends in multimedia communications over IP. Thanks to its flexibility and descriptive power, the Session Initiation Protocol (SIP) is becoming the root of many session-based applications, such as VoIP and media streaming, that are used by a growing number of users and organizations. The increasing availability and use of such applications calls for careful attention to the possibility of transferring malformed, incorrect, or malicious SIP messages, as these can cause problems ranging from relatively innocuous disturbances to full-blown attacks and frauds. Given this scenario, a deep knowledge of the normal behavior of the network and its users is essential for problem diagnosis and security protection of IP telephony. Moreover, analysis tools that take service semantics into account and support troubleshooting of SIP-based VoIP systems are of paramount importance for network administrators. However, the efficient design and deployment of robust, high-performance security control systems remains a major challenge, in particular due to the open architecture of the Internet, heterogeneous environments, and real-time communication constraints. This thesis deals with the analysis and protection of services based on the SIP protocol, with a special focus on SIP-based VoIP applications. The first part of the work is dedicated to the conformance and security analysis of SIP-based VoIP services. To this end, our first endeavor is to define a formal conceptual model of the VoIP threat domain, with the aim of establishing a common vocabulary for the security-related information of the domain.
We have introduced an ontology, called "VoIP-Onto", that provides a formal representation of a comprehensive taxonomy of VoIP attacks, followed by specific security recommendations and guidelines for protecting the underlying infrastructure from these attacks. The use of "VoIP-Onto" is not limited to serving as a general vocabulary and extensible dictionary for sharing domain knowledge about VoIP security: it can also be employed in a real environment for testing or intrusion detection purposes. We have also concentrated on designing synthetic traffic generators, considering the difficulties and challenges of collecting real-world VoIP traffic for the purpose of testing monitoring and security control tools. To this end, we have introduced "VoIPTG", a generic synthetic traffic generator that provides flexibility and efficiency in generating large amounts of synthetic VoIP traffic by imitating realistic behavior profiles of users and attackers. We have also implemented "SIP-Msg-Gen", a SIP fuzzer capable of generating both well-formed and fuzzed SIP messages with ease. Then, we focus on designing an on-line filter able to examine the stream of incoming SIP messages and classify them as "good" or "bad", depending on whether their structure and content are deemed acceptable. Because of the different structure, content, and timing of "bad" SIP messages, their filtering is best carried out by a multi-stage classifier consisting of a deterministic lexical analyzer and supervised machine-learning classifiers. The performance and efficiency of our proposed multi-stage filtering system are tested on a large set of SIP-based VoIP traffic, including both real and synthetic traces. The experimental results are very promising, with high accuracy and fast attack detection. Next, the focus shifts to understanding and modeling the social interaction patterns of users in the VoIP domain.
The notion of "social networks" is applied in the context of SIP-based VoIP networks: "social networks" of VoIP users are built from their telephone records, and Social Network Analysis (SNA) techniques are then applied to these networks to explore the users' social behavioral patterns. A prototype filtering system for SIP-based VoIP services is also implemented to demonstrate that knowledge about the social behavior of VoIP users is helpful for problem diagnosis, intruder detection, and security protection. The filtering system is trained with the normal behavioral patterns of the users; the machine, thus trained, is capable of identifying "malicious" users.
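As an illustration of the multi-stage idea, here is a minimal Python sketch (the message format subset, rules, and thresholds are hypothetical assumptions, not the thesis implementation): a deterministic lexical stage checks a small slice of the SIP request grammar, and a stand-in "statistical" stage flags obviously suspicious content where a trained classifier would sit.

```python
import re

# Stage 1: deterministic lexical check on the SIP request line and a few
# mandatory headers (an illustrative subset, not the full RFC 3261 grammar).
REQUEST_LINE = re.compile(r"^(INVITE|REGISTER|BYE|ACK|OPTIONS|CANCEL) sip:\S+ SIP/2\.0$")
MANDATORY = {"Via", "From", "To", "Call-ID", "CSeq"}

def lexical_stage(message: str) -> bool:
    lines = message.strip().splitlines()
    if not lines or not REQUEST_LINE.match(lines[0]):
        return False
    headers = {l.split(":", 1)[0].strip() for l in lines[1:] if ":" in l}
    return MANDATORY <= headers

# Stage 2: a stand-in for the supervised classifiers, here a trivial
# content rule (a real system would score features with a trained model).
def statistical_stage(message: str) -> bool:
    return "' OR '1'='1" not in message  # toy injection indicator

def classify(message: str) -> str:
    return "good" if lexical_stage(message) and statistical_stage(message) else "bad"

ok = ("INVITE sip:bob@example.com SIP/2.0\nVia: SIP/2.0/UDP host\n"
      "From: a\nTo: b\nCall-ID: 1\nCSeq: 1 INVITE")
print(classify(ok))         # a structurally valid request passes both stages
print(classify("garbage"))  # a malformed message is rejected at stage 1
```

The point of the cascade is that the cheap deterministic stage discards most malformed traffic before the costlier learned stage runs.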
42 | Multimedia Content Analysis for Event Detection. Rosani, Andrea (January 2015)
The wide diffusion of multimedia contents of different types and formats has led to the need for effective methods to efficiently handle such a huge amount of information, opening interesting research challenges in the media community. In particular, the definition of suitable content-understanding methodologies is attracting the effort of a large number of researchers worldwide, who have proposed various tools for automatic content organization, retrieval, search, annotation, and summarization. In this thesis, we will focus on an important concept: the inherent link between "media" and the "events" that such media depict. We will present two different methodologies related to this problem, and in particular to the automatic discovery of event semantics from media contents. The two methodologies address this general problem at two different levels of abstraction. In the first approach we will be concerned with the detection of activities and behaviors of people from a video sequence (i.e., what a person is doing and how), while in the second we will face the more general problem of understanding a class of events from a set of visual media (i.e., the situation and context). Both problems will be addressed while avoiding strong a-priori assumptions, i.e., taking into account the largely unstructured and variable nature of events. As to the first methodology, we will discuss events related to the behavior of a person living in a home environment. The automatic understanding of human activity is still an open problem in the scientific community, although several solutions have been proposed so far, and may provide important breakthroughs in many application domains such as context-aware computing, area monitoring and surveillance, assistive technologies for the elderly or disabled, and more.
An innovative approach is presented in this thesis, providing (i) a compact representation of human activities, and (ii) an effective tool to reliably measure the similarity between activity instances. In particular, the activity pattern is modeled with a signature obtained through a symbolic abstraction of its spatio-temporal trace, allowing the application of high-level reasoning through context-free grammars for activity classification. As far as the second methodology is concerned, we will address the problem of identifying an event from single image. If event discovery from media is already a complex problem, detection from a single still picture is still considered out-of-reach for current methodologies, as demonstrated by recent results of international benchmarks in the field. In this work we will focus on a solution that may open new perspectives in this area, by providing better knowledge on the link between visual perception and event semantics. In fact, what we propose is a framework that identifies image details that allow human beings identifying an event from single image that depicts it. These details are called ''event saliency", and are detected by exploiting the power of human computation through a gamification procedure. The resulting event saliency is a map of event-related image areas containing sufficient evidence of the underlying event, which could be used to learn the visual essence of the event itself, to enable improved automatic discovery techniques. Both methodologies will be demonstrated through extensive tests using publicly available datasets, as well as additional data created ad-hoc for the specific problems under analysis.
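A minimal sketch of how a spatio-temporal trace might be abstracted into a symbolic signature (the zone layout, the traces, and the similarity measure are illustrative assumptions, not the thesis' actual grammar-based classifier):

```python
from difflib import SequenceMatcher

# Hypothetical home layout: quantize (x, y) positions into symbolic zones.
def zone(x, y):
    return "K" if x < 5 else ("L" if y < 5 else "B")  # Kitchen / Living / Bedroom

def signature(trace):
    """Symbolic abstraction: map each sample to a zone and collapse repeats."""
    symbols = [zone(x, y) for x, y in trace]
    sig = [symbols[0]]
    for s in symbols[1:]:
        if s != sig[-1]:
            sig.append(s)
    return "".join(sig)

def similarity(sig_a, sig_b):
    """Similarity between two activity signatures, in [0, 1]."""
    return SequenceMatcher(None, sig_a, sig_b).ratio()

breakfast = [(2, 1), (2, 2), (6, 2), (6, 2), (2, 1)]  # kitchen, living, kitchen
nap       = [(6, 8), (6, 8), (6, 2)]                  # bedroom, living
print(signature(breakfast), signature(nap))
```

Because the signature is a string over a finite alphabet, higher-level reasoning (e.g., matching against a context-free grammar of activities) becomes straightforward.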
43 | Resource allocation and modeling in spectrally and spatially flexible optical transport networks. Pederzolli, Federico (January 2018)
The world's hunger for connectivity appears to be endlessly growing, yet the capacity of the networks that underpin that connectivity is anything but endless. This thesis explores both short- and long-term solutions for increasing the capacity of the largest and most capacious of these networks, the backbones upon which the Internet is built: optical transport networks. In the short term, flexi-grid technology has emerged as the evolution of fixed-grid WDM optical networks, providing higher potential throughput but suffering from an aggravated form of the spectrum fragmentation problem that affects fixed-grid networks.
A novel path-based metric to better evaluate the fragmentation of spectral resources in flexi-grid networks is presented. It considers both the fact that free spectrum slices may not be available on all the links of a path and the likelihood that an end-to-end spectral void is usable to route incoming connections; tested by means of simulations, it outperforms existing metrics from the literature. For the longer term, Space Division Multiplexing (SDM) is a promising solution to overcome the looming fiber capacity crunch and, perhaps more importantly, can offer a beneficial ratio between the expected capacity gains and the resulting increase in network cost, thanks to Joint and Fractional Joint Switching architectures and to integrated transceivers and amplifiers. A model for such networks is presented, and multiple heuristics for solving the Routing, Space and Spectrum Allocation problem are described, studied via simulations, and iteratively improved, with the objective of quantifying the likely performance of several SDM architectures under multiple traffic scenarios.
In addition, possible improvements to joint switching architectures, and an experimental SDN control plane for SDM networks, are presented and characterized, again by means of simulations. SDM is shown to be an attractive technology for increasing the capacity of future transport networks at a reasonable cost.
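The path-based view of fragmentation can be sketched as follows; the availability vectors, the demand size, and the exact formula are hypothetical simplifications shown only to illustrate why end-to-end spectral voids, rather than per-link free slots, are what matters:

```python
def path_free_slots(links):
    """End-to-end availability: a slot is free on the path only if it is
    free on every link (elementwise AND of per-link availability vectors)."""
    free = links[0][:]
    for link in links[1:]:
        free = [a and b for a, b in zip(free, link)]
    return free

def voids(free):
    """Lengths of contiguous free-slot runs (spectral voids) along the path."""
    out, run = [], 0
    for f in free + [False]:
        if f:
            run += 1
        elif run:
            out.append(run)
            run = 0
    return out

def path_fragmentation(links, demand):
    """Toy path-based metric: fraction of end-to-end free slots sitting in
    voids too small to host `demand` contiguous slots (1.0 = fully unusable)."""
    free = path_free_slots(links)
    total = sum(free)
    if total == 0:
        return 1.0
    usable = sum(v for v in voids(free) if v >= demand)
    return 1.0 - usable / total

# Two links of a path, 8 spectrum slots each (True = free).
l1 = [True, True, False, True, True, True, False, True]
l2 = [True, False, False, True, True, False, True, True]
print(path_fragmentation([l1, l2], demand=2))
```

Here each link individually has plenty of free slots, yet half of the end-to-end free spectrum lies in voids too narrow for a 2-slot demand, which is exactly the effect a per-link metric would miss.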
44 | Cooperative Push/Pull Protocols for Live Peer-Assisted Streaming. Russo, Alessandro (January 2013)
Video streaming is rapidly becoming one of the key services of the Internet. Most streaming today is "on demand" and delivered via unicast; however, many applications require delivery to many end-users, and the lack of ubiquitous IP multicast remains a weakness of the Internet. Given this scenario, peer-to-peer (P2P) or peer-assisted communication is an appealing solution, especially in light of its intrinsic scalability and its extremely low initial investment requirements. However, the design of efficient, robust, and well-performing P2P streaming systems remains a major challenge, in particular when real-time (hard or soft) constraints are part of the service quality, as in TV distribution or conferencing. This thesis deals with P2P live streaming, concentrating on unstructured, swarm-based systems. The protocols explored and proposed are based in general on mixed Push/Pull phases, i.e., the behavior of peers alternates between offering content to other peers and seeking content from other peers. The first part of the work is dedicated to the analysis of the fundamental properties of Push/Pull protocols, including the enhancement of base protocols with a chunk-negotiation phase, which enables peers to execute multiple communications in parallel, fully exploiting their resources and drastically reducing duplicates and waste. Next, the focus shifts to the impact of network parameters on video streaming distribution, showing that promoting locality in interactions leads to better performance than selecting target peers randomly. Then, attention turns to wireless scenarios, mixing local multicast techniques (based on a modified version of Protocol Independent Multicast, PIM, adapted to wireless environments) with active Pull recovery of missing data, in a peer-assisted approach.
This protocol, called PullCast, enables end-users to pull missed data packets via unicast communications while receiving video packets in multicast via push, exhibiting interesting results in terms of chunk diffusion delay and fraction of end-users served. Finally, the GRAPES library is introduced: a set of open-source components conceived as basic building blocks for developing new P2P streaming applications designed with both the intelligent usage of network resources and the Quality of Experience of final users in mind. GRAPES is the core library behind PeerStreamer, an open-source P2P media streaming framework developed under the NAPA-WINE European research project and currently supported by the EIT ICT Labs.
45 | Towards Energy Efficient Cooperative Spectrum Sensing in Cognitive Radio Networks. Althunibat, Saud (January 2014)
Cognitive radio has been proposed as a promising technology to resolve the spectrum scarcity problem by dynamically exploiting underutilized spectrum bands. Cognitive radio technology allows unlicensed users, also called cognitive users (CUs), to exploit spectrum vacancies at any time with no or limited extra interference to the licensed users. Usually, cognitive radios form networks in order to better identify spectrum vacancies, avoid the resulting interference, and, consequently, increase their revenues. One of the main challenges in cognitive radio networks is their high energy consumption, which may limit their deployment, especially in battery-powered terminals. The initial step in cognitive transmission is called spectrum sensing: a CU senses the spectrum in order to detect the activity of the licensed users. Spectrum sensing is usually accomplished cooperatively in order to improve the reliability of its results. In cooperative spectrum sensing (CSS), individual sensing results must be exchanged in order to make a global decision regarding spectrum occupancy. Thus, CSS consumes a significant amount of energy, representing a challenge for CUs. Moreover, the periodicity of CSS and the increasing number of channels to be sensed complicate the problem. To this end, energy efficiency in CSS has gained increasing attention recently. In this dissertation, a number of energy-efficient algorithms and schemes for CSS are proposed. They include energy-efficient solutions for low energy consumption in the local sensing stage, the results-reporting stage, and the decision-making stage. The proposed solutions are evaluated in terms of achievable energy efficiency and detection accuracy, showing a significant improvement over state-of-the-art proposals. Moreover, comprehensive energy-efficient approaches are proposed by combining different algorithms presented in this dissertation.
These comprehensive approaches aim at proving the consistency of the proposed algorithms with each other and at maximizing the achievable energy efficiency of the whole CSS process. High energy consumption, however, is not the only challenge of CSS. Another important problem is its vulnerability to security risks, which can effectively degrade the energy efficiency of cognitive radio networks. In this dissertation, we propose three different strategies against security attackers: an authentication protocol for outsider attackers, an elimination algorithm for insider attackers, and a punishment policy. While designing these strategies, an eye is kept on energy efficiency, so that increasing immunity against attackers does not degrade energy efficiency. A tradeoff between energy efficiency and security in CSS is thereby achieved.
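One energy-saving idea in the reporting stage, censored reporting combined with majority fusion, can be sketched as a toy model (the thresholds, energy costs, and measurements below are invented for illustration and are not values from the dissertation):

```python
# Censored reporting: a CU reports its local decision only when its energy
# measurement is far enough from the detection threshold, saving the
# reporting energy of unconfident CUs (hypothetical energy units).
E_SENSE, E_REPORT = 1.0, 2.0   # energy per sensing act / per report

def local_decisions(measurements, threshold=5.0, margin=1.0):
    decisions, energy = [], 0.0
    for m in measurements:
        energy += E_SENSE                      # every CU always senses
        if abs(m - threshold) >= margin:       # confident -> report
            decisions.append(m > threshold)
            energy += E_REPORT
    return decisions, energy

def fuse(decisions):
    """Majority rule at the fusion centre: channel declared occupied if
    more than half of the reporting CUs detected activity."""
    return sum(decisions) > len(decisions) / 2

meas = [7.2, 6.8, 4.9, 2.1, 8.0]   # 4.9 is withheld: too close to threshold
dec, used = local_decisions(meas)
print(fuse(dec), used)
```

Only four of the five CUs pay the reporting cost, yet the fused decision is unchanged, which is the kind of accuracy/energy tradeoff the dissertation's schemes optimize.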
46 | Bridging Sensor Data Streams and Human Knowledge. Zeni, Mattia (January 2017)
Generating useful knowledge out of personal big data in the form of sensor streams is a difficult task that presents multiple challenges due to the intrinsic characteristics of these data, namely their volume, velocity, variety, and noisiness. This is a well-known, long-standing problem in computer science called the Semantic Gap Problem. It was originally defined in the research area of image processing as "... the lack of coincidence between the information that one can extract from the visual data and the interpretation that the same data have for a user in a given situation..." [Smeulders et al., 2000]. In the context of this work, the lack of coincidence is between low-level raw streaming sensor data, collected by sensors in a machine-readable format, and the higher-level semantic knowledge that can be generated from these data and that only humans can understand, thanks to their intelligence, habits, and routines. This thesis addresses the semantic gap problem in the context above, proposing an interdisciplinary approach able to generate human-level knowledge from streaming sensor data in open domains. It leverages two different research fields, one regarding the collection, management, and analysis of big data, and the field of semantic computing, focused on ontologies, which respectively map to the two elements of the semantic gap mentioned above. The contributions of this thesis are:
• The definition of a methodology based on the idea that users and the world surrounding them can be modeled, defining most of the elements of their context as entities (locations, people, and objects, among others, together with the relations among them) and the attributes of each; the modeling aspects of this ontology are outside the scope of this work. Given such a structure, the task of bridging the semantic gap is divided into many less complex, modular, and compositional micro-tasks, each consisting of mapping the streaming sensor data, using contextual information, to the attribute values of the corresponding entities. In this way we can create structure out of the unstructured, noisy, and highly variable sensor data, which can then be used by the machine to provide personalized, context-aware services to the final user;
• The definition of a reference architecture that applies the methodology above and addresses the semantic gap problem in streaming sensor data;
• The instantiation of the architecture above in the Stream Base System (SB), resulting in the implementation of its main components using state-of-the-art software solutions and technologies;
• The adoption of the Stream Base System in four use cases with very different objectives from one another, proving that it works in open domains.
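The micro-task idea can be sketched as a single mapping from a raw reading to an entity attribute (the entity names, coordinates, and tolerance below are hypothetical, chosen only to show the shape of such a mapping):

```python
# Minimal sketch of one micro-task: turn a raw (lat, lon) sensor sample
# into the symbolic value of a context entity's attribute.
ENTITIES = {
    "home":   {"lat": 46.07, "lon": 11.12},
    "office": {"lat": 46.05, "lon": 11.14},
}

def map_location(reading, tol=0.005):
    """Map a raw GPS reading to a known entity, or 'unknown' if none is near."""
    for name, pos in ENTITIES.items():
        if abs(reading[0] - pos["lat"]) < tol and abs(reading[1] - pos["lon"]) < tol:
            return name
    return "unknown"

user = {"location": None}                      # attribute of the user entity
user["location"] = map_location((46.0702, 11.1198))
print(user["location"])
```

Each such micro-task is small and testable in isolation; composing many of them over the entity model is what turns a noisy stream into structured, human-level knowledge.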
47 | Advanced Techniques based on Mathematical Morphology for the Analysis of Remote Sensing Images. Dalla Mura, Mauro (January 2011)
Remote sensing optical images of very high geometrical resolution can provide a precise and detailed representation of the surveyed scene.
Thus, the spatial information contained in these images is fundamental for any application requiring the analysis of the image. However, modeling the spatial information is not a trivial task. We addressed this problem by using operators defined in the mathematical morphology framework in order to extract spatial features from the image.
In this thesis novel techniques based on mathematical morphology are presented and investigated for the analysis of remote sensing optical images addressing different applications.
Attribute Profiles (APs) are proposed as a novel generalization, based on attribute filters, of the Morphological Profile operator. Attribute filters are connected operators that process an image by removing flat zones according to a given criterion. They are flexible operators, since they can transform an image according to many different attributes (e.g., geometrical, textural, and spectral).
Furthermore, Extended Attribute Profiles (EAPs), a generalization of APs, are presented for the analysis of hyperspectral images. The EAPs are employed for including spatial features in the thematic classification of hyperspectral images.
Two techniques dealing with EAPs and dimensionality reduction transformations are proposed and applied in image classification. In greater detail, one of the techniques is based on Independent Component Analysis and the other one deals with feature extraction techniques.
Moreover, a technique based on APs for extracting features for the detection of buildings in a scene is investigated.
Approaches that process an image by considering both bright and dark components of a scene are investigated. In particular, the effect of applying attribute filters in an alternating sequential setting is investigated. Furthermore, the concept of Self-Dual Attribute Profile (SDAP) is introduced. SDAPs are APs built on an inclusion tree instead of a min- and max-tree, providing an operator that performs a multilevel filtering of both the bright and dark components of an image.
Techniques developed for applications different from image classification are also considered. In greater detail, a general approach for image simplification based on attribute filters is proposed. Finally, two change detection techniques are developed.
The experimental analysis performed with the novel techniques developed in this thesis demonstrates an improvement in accuracy over other state-of-the-art methods in different fields of application.
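As a rough illustration of an attribute filter, the following sketch performs a simplified area opening on a binary image using connected components (the thesis relies on efficient tree-based implementations such as max-trees; this shows only the conceptual criterion-based removal of flat zones):

```python
# Simplified area attribute filter on a binary image: connected components
# whose area attribute fails the criterion (area >= min_area) are removed.
def area_opening(img, min_area):
    h, w = len(img), len(img[0])
    seen, out = set(), [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if img[i][j] and (i, j) not in seen:
                # flood-fill the 4-connected component starting at (i, j)
                stack, comp = [(i, j)], []
                seen.add((i, j))
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] \
                                and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                if len(comp) >= min_area:      # keep zones meeting the criterion
                    for y, x in comp:
                        out[y][x] = 1
    return out

img = [[1, 1, 0, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 0]]
print(area_opening(img, min_area=2))  # the isolated pixel is filtered out
```

Because whole flat zones are either kept or removed, no new edges are introduced, which is the defining property of connected operators that makes APs attractive for spatial feature extraction.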
48 | Social interaction analysis in videos, from wide to close perspective. Rota, Paolo (January 2015)
In today's digital age, advances in hardware technology have set new horizons in the computer science universe, asking new questions, proposing new solutions, and re-opening branches that had been temporarily closed due to overwhelming computational complexity. In this sense, many algorithms have been proposed that had never been successfully applied in practice until now. In this work we tackle the issues related to the detection and localization of interactions conducted by humans. We begin by analysing group interactions, then move to dyadic interactions, and finally elevate our considerations to real-world scenarios. We propose new challenging datasets, introduce new important tasks, and suggest possible solutions.
49 | Predicting Tolerance Effects on the Radiation Pattern of Reflectarray Antennas through Interval Analysis. Ebrahimiketilateh, Nasim (January 2018)
The thesis focuses on predicting tolerance effects on the radiation pattern of reflectarray antennas through Interval Analysis. The uncertainty on the actual size of all parameters subject to fabrication tolerances, such as element dimensions and dielectric properties, is modeled with interval values. Afterwards, the rules of Interval Arithmetic are exploited to compute the bounds of deviation of the resonance frequency of each element, of the phase response of the elements, and of the radiated power pattern. Because of the redundancy problems of applying the Interval Cartesian method (IA-CS) to complex structures, the interval bounds are overestimated; the causes are the dependency and wrapping effects that arise when interval analysis is applied to such structures. Different techniques are proposed and assessed in order to eliminate the dependency effect, such as reformulating the interval function and Enumerative interval analysis. Moreover, the Minkowski sum approach is used to eliminate the wrapping effect. In the numerical validation, a set of representative results shows the power-bound computations with the Interval Cartesian method (IA-CS), a modified Interval Cartesian method (IA-CS*), the Interval Enumerative method (IA-ENUM), and the Interval Enumerative Minkowski method (IA-ENUM-MS), and a comparative study is reported in order to assess the effectiveness of the proposed approach (IA-ENUM-MS) with respect to the other methods. Furthermore, different tolerances in patch width, length, substrate thickness, and dielectric permittivity are considered, showing that higher uncertainty produces larger deviations of the pattern bounds, and that the larger deviations include the smaller ones as well as the nominal pattern. To validate the inclusion properties of the interval bounds, the results are compared with Monte Carlo simulations. Then, a numerical study is devoted to analysing the dependence of the degradation of the pattern features on the steering angle and the bandwidth.
Finally, the effect of feed displacement errors on the power pattern of reflectarray antennas is considered with the Interval Enumerative Minkowski method. The maximal deviations from the nominal (error-free) power pattern and its features are analysed for several reflectarray structures with different focal-length-to-diameter ratios to prove the effectiveness of the proposed method.
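The core mechanism, propagating tolerances through Interval Arithmetic, can be sketched as follows; the class and the tolerance value are illustrative, and the `x - x` example shows the dependency effect that motivates the improved methods above:

```python
# Minimal interval-arithmetic sketch: closed intervals model fabrication
# tolerances, and elementary rules propagate the bounds.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        # lower bound subtracts the other's upper bound, and vice versa
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.9, 1.1)   # e.g. a patch width with a +/-10% tolerance
print(x - x)             # dependency effect: [-0.2, 0.2] instead of [0, 0]
```

The `x - x` result illustrates why naive Interval Cartesian evaluation overestimates the bounds: the arithmetic treats the two occurrences of `x` as independent, which is exactly the dependency problem that function reformulation and the Enumerative approach are designed to remove.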
50 | Deep neural network models for image classification and regression. Malek, Salim (January 2018)
Deep learning, a branch of machine learning, has been gaining ground in many research fields as well as in practical applications. This ongoing boom can be traced back mainly to the availability and affordability of powerful processing facilities, which were not widely accessible just a decade ago. Although it has widely demonstrated cutting-edge performance in computer vision, particularly in object recognition and detection, deep learning is yet to find its way into other research areas. Furthermore, the performance of deep learning models depends strongly on the way the models are designed and tailored to the problem at hand, which raises not only precision concerns but also processing overheads. The success and applicability of a deep learning system rely jointly on both components. In this dissertation, we present innovative deep learning schemes, with application to interesting though less-addressed topics. The first covered topic is rough scene description for visually impaired individuals, the idea of which is to list the objects that likely exist in an image grabbed by a visually impaired person. To this end, we proceed by extracting several features from the query image in order to capture the textural as well as the chromatic cues therein. Further, in order to improve the representativeness of the extracted features, we reinforce them with a feature learning stage by means of an autoencoder model, topped with a logistic regression layer in order to detect the presence of objects, if any. In a second topic, we exploit the same model, i.e., the autoencoder, in the context of cloud removal in remote sensing images. Briefly, the model is learned on a cloud-free image pertaining to a certain geographical area and applied afterwards to another, cloud-contaminated image of the same area, acquired at a different time instant.
Two reconstruction strategies are proposed, namely pixel-based and patch-based reconstructions.
From the first two topics, we quantitatively demonstrate that autoencoders can play a pivotal role in terms of both (i) feature learning and (ii) reconstruction and mapping of sequential data.
The Convolutional Neural Network (CNN) is arguably the most utilized model in the computer vision community, which is reasonable given its remarkable performance in object and scene recognition with respect to traditional hand-crafted features. Nevertheless, the CNN is naturally conceived in its two-dimensional version, which raises questions about its applicability to unidimensional data. Thus, a third contribution of this thesis is devoted to the design of a unidimensional CNN architecture, which is applied to spectroscopic data. In other terms, the CNN is tailored for feature extraction from one-dimensional chemometric data, whilst the extracted features are fed into advanced regression methods to estimate the underlying chemical component concentrations. Experimental findings suggest that, similarly to 2D CNNs, unidimensional CNNs are also able to outperform traditional methods. The last contribution of this dissertation is a new method to estimate the connection weights of CNNs, based on training an SVM for each kernel of the CNN. This method has the advantage of being fast and adequate for applications characterized by small datasets.
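The unidimensional convolution at the heart of such a 1D CNN can be sketched in a few lines (the kernel weights and the toy spectrum below are invented for illustration, not trained values from the dissertation):

```python
# A single 1D convolution layer followed by a ReLU, as used for feature
# extraction from one-dimensional signals such as chemometric spectra.
def conv1d(signal, kernel, bias=0.0):
    """Valid-mode 1D convolution: slide the kernel along the signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k)) + bias
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

spectrum = [0.1, 0.5, 0.9, 0.4, 0.1, 0.0]   # toy one-dimensional spectrum
edge_kernel = [-1.0, 0.0, 1.0]              # responds to rising slopes
features = relu(conv1d(spectrum, edge_kernel))
print(features)
```

In the thesis setting, many such kernels would be learned, their feature maps stacked and pooled, and the resulting representation passed to a regression method to estimate chemical concentrations.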