81. A novel high-efficiency SiPM-based system for Ps-TOF
Mazzuca, Elisabetta (January 2014)
A novel setup for Positronium Time-Of-Flight measurements is proposed, using Silicon Photomultipliers (SiPMs) instead of Photomultiplier Tubes.
The solution allows us to dramatically increase the compactness of the setup, improving the efficiency by 240%.
Different configurations of SiPMs and scintillators are characterized in order to find the best solution. Simulations are also provided, together with preliminary tests in the target application. A compact read-out board for the processing of up to 44 channels has been designed and tested. Further tests, expected in the near future, are needed in order to confirm the simulations and to build the final setup.
82. Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks
Konda, Krishna Reddy (January 2015)
The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to a widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety by detecting and preventing crimes and dangerous events. Such systems are usually highly customizable: the user can tailor the sensing infrastructure and deploy ad-hoc solutions based on current needs, choosing the type and number of sensors as well as adjusting camera parameters such as field of view, resolution and, in the case of active PTZ cameras, pan, tilt and zoom. Furthermore, the camera network can be automatically realigned in an event-driven fashion to better observe an occurring event. Given these possibilities, this doctoral study has two objectives. First, we propose a state-of-the-art camera placement and static reconfiguration algorithm; second, we present a distributed, cooperative and dynamic camera reconfiguration algorithm for a network of cameras. The camera placement and user-driven reconfiguration algorithm is based on realistic virtual modelling of a given environment using particle swarm optimization. A real-time camera reconfiguration algorithm that relies on a motion entropy metric extracted from the H.264 compressed stream acquired by the camera is also presented.
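The particle-swarm placement idea can be conveyed with a toy 2D sketch. Everything here (room geometry, camera parameters, the coverage fitness) is invented for illustration; the thesis works on a realistic virtual model of the environment:

```python
import math
import random

# Toy 2D camera placement via particle swarm optimization (PSO).
# Each particle encodes (x, y, heading) for each of N_CAMS cameras;
# fitness is the fraction of sample points inside at least one
# camera's field-of-view cone. All parameters are illustrative.

N_CAMS, FOV, RANGE = 2, math.radians(90), 6.0
ROOM = 10.0
POINTS = [(i + 0.5, j + 0.5) for i in range(10) for j in range(10)]

def covered(cam, p):
    cx, cy, heading = cam
    dx, dy = p[0] - cx, p[1] - cy
    if math.hypot(dx, dy) > RANGE:
        return False
    diff = (math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= FOV / 2

def fitness(particle):
    cams = [particle[i:i + 3] for i in range(0, len(particle), 3)]
    return sum(any(covered(c, p) for c in cams) for p in POINTS) / len(POINTS)

def pso(n_particles=20, iters=60, w=0.7, c1=1.4, c2=1.4):
    dim = 3 * N_CAMS
    rnd = random.Random(0)
    xs = [[rnd.uniform(0, ROOM) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = max(pbest, key=fitness)
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rnd.random() * (pbest[i][d] - x[d])
                            + c2 * rnd.random() * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if fitness(x) > fitness(pbest[i]):
                pbest[i] = x[:]
        gbest = max(pbest + [gbest], key=fitness)
    return gbest, fitness(gbest)

best, cov = pso()
print(f"coverage: {cov:.2f}")
```

The same swarm structure extends directly to richer fitness functions (occlusions, resolution constraints, PTZ ranges) without changing the optimizer.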
83. Modelling and Recognizing Personal Data
Bignotti, Enrico (January 2018)
Defining what a person is represents a hard task, because personal data, i.e., data that refer to or describe a person, have a very heterogeneous nature. The issue is only worsening with the advent of technologies that, while allowing unprecedented collection and processing capabilities, cannot "understand" the world as humans do. This is a well-known, long-standing problem in computer science called the Semantic Gap Problem. It was originally defined in the research area of image processing as "... the lack of coincidence between the information that one can extract from the visual data and the interpretation that the same data have for a user in a given situation...". In the context of this work, the semantic gap is the lack of coincidence between sensor data collected by ubiquitous devices and the human knowledge about the world that relies on people's intelligence, habits, and routines. This thesis addresses the semantic gap problem from a representational point of view, proposing an interdisciplinary approach able to model and recognize personal data in real-life scenarios. In fact, the semantic gap affects many communities, ranging from ubiquitous computing to user modelling, that must face the issue of managing the complexity of personal data in terms of modelling and recognition. The contributions of this Ph.D. thesis are: 1) the definition of a methodology based on an interdisciplinary approach that can account for how to represent personal data and allow its recognition. The interdisciplinary approach relies on the entity-centric approach and on an interdisciplinary categorization to define and structure personal data;
2) the definition of an ontology of personal data to represent humans in a general way while also accounting for the different dimensions of their everyday life; 3) the instantiation of the personal data representation above in a reference architecture that implements the ontology and can exploit the methodology to account for how to recognize personal data; 4) the adoption of the methodology for defining personal data and its instantiation in three real-life use cases with different goals in mind, proving that our modelling works in different domains and can account for several dimensions of the user.
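The entity-centric idea can be sketched in a few lines of code. The class and attribute names below are invented for illustration; the actual ontology in the thesis is richer and formally defined:

```python
from dataclasses import dataclass, field

# Minimal sketch of an entity-centric personal data model: a person is
# an entity with typed attributes and relations to other entities, and
# recognized (not raw) values populate the attributes.

@dataclass
class Entity:
    etype: str                      # e.g. "person", "location"
    name: str
    attributes: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (relation, Entity) pairs

home = Entity("location", "Home", {"category": "residence"})
alice = Entity("person", "Alice", {"age": 30})
alice.relations.append(("lives_in", home))

# A sensor observation is mapped to an attribute of an entity rather
# than kept as an opaque raw reading:
alice.attributes["current_activity"] = "sleeping"  # inferred, not raw accelerometer data
print(alice.attributes, [(r, e.name) for r, e in alice.relations])
```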
84. Energy Efficiency and Privacy in Device-to-Device Communication
Usman, Muhammad (January 2017)
Mobile data traffic has increased manifold in recent years, and current cellular networks are undeniably overloaded in meeting users' escalating demands for higher bandwidth and data rates. To meet such demands, Device-to-Device (D2D) communication is regarded as a potential solution to the capacity bottleneck problem in legacy cellular networks. Apart from offloading cellular traffic, D2D communication, thanks to its intrinsic reliance on proximity, enables a broad range of proximity-based applications for both public safety and commercial users. Potential applications include, among others, proximity-based social interactions, information exchange, advertisements and Vehicle-to-Vehicle (V2V) communication. The success of D2D communication depends upon the scenarios in which the users in proximity interact with each other. Although there is a lot of work on resource allocation and interference management in D2D networks, very few works focus on the architectural aspects of D2D communication, emphasizing the benchmarking of energy efficiency for different application scenarios.
In this dissertation, we benchmark the energy consumption of D2D User Equipments (UEs) in different application scenarios. To this end, we first consider a scenario wherein different UEs, interested in sharing the same service, form a Mobile Cloud (MC). Since some UEs can be involved in multiple services/applications at a time, they may interact with multiple MCs. In this regard, we find that there is a threshold for the number of UEs in each MC that can participate in multiple applications, beyond which legacy cellular communication starts performing better in terms of the overall energy consumption of all UEs in the system. Thereafter, we extend the concept of the MC to build a multi-hop D2D network and evaluate the energy consumption of UEs for a content distribution application across the network. In this work, we optimize the size of an MC to get the maximum energy savings.
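The threshold effect described above can be illustrated with a deliberately simple energy model. All energy figures and the per-UE overhead term are assumptions made for this sketch, not values from the dissertation:

```python
# Toy energy model comparing a Mobile Cloud (one UE downloads over
# cellular and redistributes via D2D) against legacy cellular, as a
# function of how many UEs participate in multiple applications.

E_CELL = 10.0   # energy per UE for a cellular download (arbitrary units)
E_D2D_TX = 1.5  # energy to forward the content over a D2D link
E_D2D_RX = 1.0  # energy to receive over a D2D link
E_EXTRA = 9.0   # assumed extra cost per multi-application UE (signalling etc.)

def mc_energy(n_ues, n_multi):
    # one cellular download + D2D redistribution + multi-app overhead
    return E_CELL + (n_ues - 1) * (E_D2D_TX + E_D2D_RX) + n_multi * E_EXTRA

def cellular_energy(n_ues):
    return n_ues * E_CELL

n = 10
threshold = next(k for k in range(n + 1)
                 if mc_energy(n, k) > cellular_energy(n))
print("MC stops paying off when multi-app UEs >=", threshold)
```

With these numbers the MC is far cheaper when few UEs are multi-application, but the overhead term eventually erases the gain, which is the qualitative behavior the dissertation quantifies.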
Apart from its many advantages, D2D communication poses potential challenges in terms of security and privacy. As a solution, we propose to bootstrap trust in D2D UEs before establishing any connection with unknown users. In particular, we propose Pretty Good Privacy (PGP)- and reputation-based mechanisms in D2D networks. Finally, to preserve users' privacy and to secure the contents, we propose to encrypt the contents cached at D2D nodes (or any other caching server). In particular, we leverage convergent encryption, which provides the extra benefit of eliminating duplicate contents from the caching server.
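The deduplication property of convergent encryption is easy to sketch: the key is derived from the content itself, so equal plaintexts yield equal ciphertexts and the cache can deduplicate without reading the data. The keystream cipher below is a toy stand-in; a real deployment would use an authenticated cipher such as AES-GCM:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # derive n pseudo-random bytes from the key (toy construction)
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def convergent_encrypt(content: bytes):
    key = hashlib.sha256(content).digest()   # key derived from the content
    cipher = bytes(c ^ k for c, k in zip(content, _keystream(key, len(content))))
    return key, cipher

def convergent_decrypt(key: bytes, cipher: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(cipher, _keystream(key, len(cipher))))

k1, c1 = convergent_encrypt(b"popular video chunk")
k2, c2 = convergent_encrypt(b"popular video chunk")
print(c1 == c2)  # identical plaintexts give identical ciphertexts
```

Only users who already hold the content (and hence the key) can decrypt, while the caching node stores a single encrypted copy per distinct content.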
85. Analysis and Protection of SIP based Services
Ferdous, Raihana (January 2014)
Multimedia communications over IP are booming, as they offer higher flexibility and more features than traditional voice and video services. IP telephony, known as Voice over IP (VoIP), is one of the commercially most important emerging trends in multimedia communications over IP. Due to its flexibility and descriptive power, the Session Initiation Protocol (SIP) is becoming the root of many session-based applications, such as VoIP and media streaming, that are used by a growing number of users and organizations. The increasing availability and use of such applications call for careful attention to the possibility of transferring malformed, incorrect, or malicious SIP messages, as they can cause problems ranging from relatively innocuous disturbances to full-blown attacks and frauds. Given this scenario, a deep knowledge of the normal behavior of the network and its users is essential to problem diagnosis and security protection of IP telephony. Moreover, analysis tools that take service semantics into account and support troubleshooting of SIP-based VoIP systems are of paramount importance for network administrators. However, the efficient design and deployment of robust, high-performance security controlling systems remains a major challenge, in particular due to the open architecture of the Internet, the heterogeneous environment and real-time communication constraints. This thesis deals with the analysis and protection of services based on the SIP protocol, with a special focus on SIP-based VoIP applications. The first part of the work is dedicated to the conformance and security analysis of SIP-based VoIP services. To this end, our first endeavor is to define a formal conceptual model of the VoIP threat domain, with the aim of sharing a common vocabulary about the security-related information of the domain.
We have introduced an ontology, named "VoIP-Onto", that provides a formal representation of a comprehensive taxonomy of VoIP attacks, followed by specific security recommendations and guidelines for protecting the underlying infrastructure from these attacks. The use of "VoIP-Onto" is not limited to serving as a general vocabulary and extensible dictionary for sharing domain knowledge about VoIP security; it can also be employed in a real environment for testing or intrusion detection purposes. We have also concentrated on designing synthetic traffic generators, considering the difficulties and challenges of collecting real-world VoIP traffic for the purpose of testing monitoring and security controlling tools. To this end, we have introduced "VoIPTG", a generic synthetic traffic generator that provides flexibility and efficiency in generating large amounts of synthetic VoIP traffic by imitating realistic behavior profiles of users and attackers. We have also implemented "SIP-Msg-Gen", a SIP fuzzer capable of generating both well-formed and fuzzed SIP messages with ease. Then, we focus on designing an online filter able to examine the stream of incoming SIP messages and classify them as "good" or "bad" depending on whether their structure and content are deemed acceptable or not. Because of the different structure, contents and timing of "bad" SIP messages, their filtering is best carried out by a multistage classifier consisting of a deterministic lexical analyzer and supervised machine learning classifiers. The performance and efficiency of our proposed multistage filtering system is tested with a large set of SIP-based VoIP traffic, including both real and synthetic traces. The experimental results of the filtering system are very promising, with high accuracy and fast attack detection. Next, the focus is shifted to understanding and modeling the social interaction patterns of users in the VoIP domain.
The notion of "social networks" is applied in the context of SIP-based VoIP networks, where "social networks" of VoIP users are built based on their telephone records. Social Network Analysis (SNA) techniques are then applied to these "social networks" of VoIP users to explore their social behavioral patterns. A prototype filtering system for SIP-based VoIP services is also implemented to demonstrate that knowledge about the social behavior of VoIP users is helpful in problem diagnosis, intruder detection, and security protection. The filtering system is trained with the normal behavioral patterns of the users. The machine, thus trained, is capable of identifying "malicious" users.
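A minimal sketch of the multistage filtering idea: a deterministic lexical stage checks the message against the SIP grammar, and a second stage scores what survives. The statistical stage here is a trivial hand-tuned heuristic standing in for the supervised classifiers used in the thesis; the grammar check covers only a fragment of RFC 3261:

```python
import re

# Stage 1: deterministic lexical analysis of the SIP request line and
# the presence of mandatory headers (simplified from RFC 3261).
REQUEST_LINE = re.compile(
    r"^(INVITE|ACK|BYE|CANCEL|OPTIONS|REGISTER) sip:\S+ SIP/2\.0$")
MANDATORY = ("Via:", "From:", "To:", "Call-ID:", "CSeq:")

def lexical_ok(msg: str) -> bool:
    lines = msg.splitlines()
    if not lines or not REQUEST_LINE.match(lines[0]):
        return False
    return all(any(l.startswith(h) for l in lines[1:]) for h in MANDATORY)

# Stage 2: toy anomaly score; unusually long header lines are a common
# symptom of fuzzed or malformed messages.
def statistical_score(msg: str) -> float:
    longest = max((len(l) for l in msg.splitlines()), default=0)
    return min(longest / 1000.0, 1.0)

def classify(msg: str) -> str:
    if not lexical_ok(msg):
        return "bad"
    return "bad" if statistical_score(msg) > 0.5 else "good"

good = ("INVITE sip:bob@example.com SIP/2.0\n"
        "Via: SIP/2.0/UDP h1\nFrom: a\nTo: b\nCall-ID: 1\nCSeq: 1 INVITE")
print(classify(good))
```

The cheap deterministic stage rejects structurally invalid messages outright, so the (more expensive) learned stage only ever sees well-formed input, which is the main rationale for the multistage design.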
86. Multimedia Content Analysis for Event Detection
Rosani, Andrea (January 2015)
The wide diffusion of multimedia contents of different types and formats has led to the need for effective methods to efficiently handle such a huge amount of information, opening interesting research challenges in the media community. In particular, the definition of suitable content understanding methodologies is attracting the effort of a large number of researchers worldwide, who have proposed various tools for automatic content organization, retrieval, search, annotation and summarization. In this thesis, we will focus on an important concept, namely the inherent link between "media" and the "events" that such media depict. We will present two different methodologies related to this problem, and in particular to the automatic discovery of event semantics from media contents. The two methodologies address this general problem at two different levels of abstraction. In the first approach we will be concerned with the detection of activities and behaviors of people from a video sequence (i.e., what a person is doing and how), while in the second we will face the more general problem of understanding a class of events from a set of visual media (i.e., the situation and context). Both problems will be addressed while trying to avoid strong a-priori assumptions, i.e., considering the largely unstructured and variable nature of events. As to the first methodology, we will discuss events related to the behavior of a person living in a home environment. The automatic understanding of human activity is still an open problem in the scientific community, although several solutions have been proposed so far, and may provide important breakthroughs in many application domains such as context-aware computing, area monitoring and surveillance, assistive technologies for the elderly or disabled, and more.
An innovative approach is presented in this thesis, providing (i) a compact representation of human activities, and (ii) an effective tool to reliably measure the similarity between activity instances. In particular, the activity pattern is modeled with a signature obtained through a symbolic abstraction of its spatio-temporal trace, allowing the application of high-level reasoning through context-free grammars for activity classification. As far as the second methodology is concerned, we address the problem of identifying an event from a single image. While event discovery from media is already a complex problem, detection from a single still picture is still considered out of reach for current methodologies, as demonstrated by recent results of international benchmarks in the field. In this work we focus on a solution that may open new perspectives in this area, by providing better knowledge of the link between visual perception and event semantics. In fact, what we propose is a framework that identifies the image details that allow human beings to identify an event from a single image depicting it. These details are called "event saliency", and are detected by exploiting the power of human computation through a gamification procedure. The resulting event saliency is a map of event-related image areas containing sufficient evidence of the underlying event, which could be used to learn the visual essence of the event itself, enabling improved automatic discovery techniques. Both methodologies are demonstrated through extensive tests using publicly available datasets, as well as additional data created ad hoc for the specific problems under analysis.
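The signature idea can be illustrated with a toy symbolic abstraction, using regular patterns as a simple stand-in for the context-free grammars employed in the thesis. Zone symbols and classification rules are invented for the example:

```python
import re

# A spatio-temporal trace (here reduced to a sequence of visited zones)
# is abstracted into a compact symbolic signature by collapsing
# consecutive repeats; grammar-like patterns then classify the signature.
# K = kitchen, S = sink, T = table, B = bathroom (invented vocabulary).

def signature(trace):
    out = []
    for zone in trace:
        if not out or out[-1] != zone:
            out.append(zone)
    return "".join(out)          # e.g. K K K S S K K T T -> "KSKT"

RULES = {
    "prepare_meal": re.compile(r"K(SK)*T"),   # kitchen/sink loops, then table
    "night_routine": re.compile(r"B?K?B"),    # ends in the bathroom
}

def classify(trace):
    sig = signature(trace)
    return [name for name, rule in RULES.items() if rule.fullmatch(sig)]

print(classify(list("KKKSSKKTT")))
```

Collapsing repeats makes the signature invariant to how long the person lingers in each zone, which is one reason a symbolic abstraction compares activity instances more robustly than raw traces.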
87. Resource allocation and modeling in spectrally and spatially flexible optical transport networks
Pederzolli, Federico (January 2018)
The world's hunger for connectivity appears to be endlessly growing, yet the capacity of the networks that underpin that connectivity is anything but endless. This thesis explores both short- and long-term solutions for increasing the capacity of the largest and most capacious of these networks, the backbones upon which the Internet is built: optical transport networks. In the short term, flexi-grid technology has emerged as the evolution of fixed-grid WDM optical networks, providing higher potential throughput but suffering from an aggravated form of the spectrum fragmentation problem that affects fixed-grid networks.
A novel path-based metric to better evaluate the fragmentation of spectral resources in flexi-grid networks is presented. It considers both the fact that free spectrum slices may not be available on all the links of a path and the likelihood that an end-to-end spectral void is usable to route incoming connections; tested by means of simulations, it outperforms existing metrics from the literature. For the longer term, Space Division Multiplexing (SDM) is a promising solution to overcome the looming fiber capacity crunch and, perhaps more importantly, can offer a beneficial ratio between the expected capacity gains and the resulting increase in the cost of the network, thanks to Joint and Fractional Joint Switching architectures and integrated transceivers and amplifiers. A model for such networks is presented, and multiple heuristics for solving the Routing, Space and Spectrum Allocation problem are described, studied via simulations and iteratively improved, with the objective of quantifying the likely performance of several SDM architectures under multiple traffic scenarios.
In addition, possible improvements to joint switching architectures, and an experimental SDN control plane for SDM networks, are presented and characterized, again by means of simulations. SDM is shown to be an attractive technology for increasing future transport networks capacity, at a reasonable cost.
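The path-based flavor of such a fragmentation measure can be illustrated as follows. This is not the metric defined in the thesis, only a sketch of why end-to-end voids matter: a slice helps a path only if it is free on every link, and a free slice helps only if it belongs to a void wide enough for a typical demand:

```python
# Spectrum availability per link as a 0/1 mask over frequency slices.

def path_free(masks):
    # a slice is usable end-to-end only if free on every link of the path
    return [all(col) for col in zip(*masks)]

def voids(free):
    # lengths of contiguous runs of free slices
    runs, n = [], 0
    for f in free:
        if f:
            n += 1
        elif n:
            runs.append(n)
            n = 0
    if n:
        runs.append(n)
    return runs

def fragmentation(masks, demand=2):
    # fraction of end-to-end free spectrum trapped in voids too small
    # to host a demand of the given width (illustrative definition)
    free = path_free(masks)
    total = sum(free)
    if total == 0:
        return 1.0
    usable = sum(r for r in voids(free) if r >= demand)
    return 1.0 - usable / total

link_a = [1, 1, 0, 1, 1, 1, 0, 1]
link_b = [1, 1, 1, 0, 1, 1, 0, 1]
print(fragmentation([link_a, link_b]))
```

Note how each link individually has wide voids, yet their end-to-end intersection is more fragmented; this is exactly the effect a link-local metric misses.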
88. Cooperative Push/Pull Protocols for Live Peer-Assisted Streaming
Russo, Alessandro (January 2013)
Video streaming is rapidly becoming one of the key services of the Internet. Most streaming today is "on demand" and delivered via unicast; however, many applications require delivery to many end-users, and the lack of ubiquitous IP multicast remains a weakness of the Internet. Given this scenario, peer-to-peer (P2P) or peer-assisted communication is an appealing solution, especially in light of its intrinsic scalability and its extremely low initial investment requirements. However, the design of efficient, robust and well-performing P2P streaming systems remains a major challenge, in particular when real-time (hard or soft) constraints are part of the service quality, as in TV distribution or conferencing. This thesis deals with P2P live streaming, concentrating on unstructured, swarm-based systems. The protocols explored and proposed are based in general on mixed Push/Pull phases, i.e., the behavior of peers alternates between offering content to other peers and seeking content from other peers. The first part of the work is dedicated to the analysis of the fundamental properties of Push/Pull protocols, including the enhancement of base protocols with a chunk negotiation phase, which enables peers to execute multiple communications in parallel, fully exploiting their resources and drastically reducing duplicates and waste. Next, the focus is shifted to the impact of network parameters on video streaming distribution, showing that promoting locality in interactions leads to better performance than selecting target peers randomly. Then, the attention is focused on wireless scenarios, mixing local multicast techniques (based on a modified version of the Protocol Independent Multicast --PIM-- adapted to wireless environments) with active Pull recovery of missing data, following a peer-assisted approach.
This protocol, called PullCast, enables end-users to pull missed data packets via unicast communications while they receive video packets via multicast push, exhibiting interesting results in terms of chunk diffusion delay and the fraction of end-users served. Finally, the GRAPES library is introduced, providing a set of open-source components conceived as basic building blocks for developing new P2P streaming applications designed with the intelligent usage of network resources and the Quality of Experience of final users in mind. GRAPES is the core library behind PeerStreamer, an open-source P2P media streaming framework developed under the NAPA-WINE European research project and currently supported by the EIT ICT Labs.
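The duplicate-avoiding effect of a negotiation phase can be shown with a minimal swarm simulation. Topology, timing and chunk scheduling are grossly simplified relative to a real push/pull protocol; the point is only that offering the chunk list first, and letting the receiver request what it misses, removes duplicate transfers by construction:

```python
import random

# Minimal swarm: peer 0 is the source; in each round every peer
# contacts a random neighbour, advertises its chunk set (the
# negotiation/offer phase), and the receiver pulls one missing chunk.

random.seed(1)
N_PEERS, N_CHUNKS, ROUNDS = 8, 20, 30
peers = [set() for _ in range(N_PEERS)]
peers[0] = set(range(N_CHUNKS))          # the source holds everything

duplicates = 0
for _ in range(ROUNDS):
    for i in range(N_PEERS):
        j = random.randrange(N_PEERS)
        if i == j:
            continue
        offer = peers[i] - peers[j]      # receiver requests only missing chunks
        if offer:
            chunk = random.choice(sorted(offer))
            duplicates += chunk in peers[j]   # stays 0 thanks to negotiation
            peers[j].add(chunk)

done = sum(len(p) == N_CHUNKS for p in peers)
print(f"peers complete: {done}/{N_PEERS}, duplicates: {duplicates}")
```

Dropping the `offer` set and pushing a blind random chunk instead makes `duplicates` grow quickly, which is exactly the waste the negotiation phase eliminates.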
89. Towards Energy Efficient Cooperative Spectrum Sensing in Cognitive Radio Networks
Althunibat, Saud (January 2014)
Cognitive radio has been proposed as a promising technology to resolve the spectrum scarcity problem by dynamically exploiting underutilized spectrum bands. Cognitive radio technology allows unlicensed users, also called cognitive users (CUs), to exploit spectrum vacancies at any time with no or limited extra interference to the licensed users. Usually, cognitive radios form networks in order to better identify spectrum vacancies, avoid the resulting interference and, consequently, increase their revenues. One of the main challenges in cognitive radio networks is the high energy consumption, which may limit their implementation, especially in battery-powered terminals. The initial step in cognitive transmission is called spectrum sensing. In spectrum sensing, a CU senses the spectrum in order to detect the activity of the licensed users. Spectrum sensing is usually accomplished cooperatively in order to improve the reliability of its results. In cooperative spectrum sensing (CSS), individual sensing results are exchanged in order to make a global decision regarding spectrum occupancy. Thus, CSS consumes a significant amount of energy, representing a challenge for CUs. Moreover, the periodicity of CSS and the increasing number of channels to be sensed complicate the problem. To this end, energy efficiency in CSS has gained increasing attention recently. In this dissertation, a number of energy-efficient algorithms and schemes for CSS are proposed. The proposed works include energy-efficient solutions for low energy consumption in the local sensing stage, the result-reporting stage and the decision-making stage. The proposed works are evaluated in terms of the achievable energy efficiency and detection accuracy, where they show a significant improvement compared to state-of-the-art proposals. Moreover, comprehensive energy-efficient approaches are proposed by combining different algorithms presented in this dissertation.
These comprehensive approaches aim at proving the consistency of the proposed algorithms with each other and at maximizing the achievable energy efficiency of the whole CSS process. However, high energy consumption is not the only challenge of CSS. Another important problem in CSS is its vulnerability to security attacks, which can effectively degrade the energy efficiency of cognitive radio networks. In this dissertation, we propose three different strategies against security attackers. Specifically, an authentication protocol for outsider attackers, an elimination algorithm for insider attackers, and a punishment policy are presented. While designing these strategies, an eye is kept on energy efficiency, such that increasing immunity against attackers does not degrade energy efficiency. Therefore, a tradeoff between energy efficiency and security in CSS is achieved.
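The core tradeoff in CSS is easy to make concrete with the classic k-out-of-n fusion rule: the fusion center declares the channel occupied when at least k of the n CUs detect it, so detection probability rises with n while sensing and reporting energy grows linearly with n. The per-CU energy figures below are assumptions for illustration, not the dissertation's model:

```python
from math import comb

# Global detection probability of a k-out-of-n fusion rule, assuming
# independent CUs with identical local detection probability pd.
def global_pd(n, k, pd):
    return sum(comb(n, i) * pd**i * (1 - pd)**(n - i)
               for i in range(k, n + 1))

E_SENSE, E_REPORT = 1.0, 0.5   # assumed per-CU energy units

for n in (4, 8, 12):
    pd_or = global_pd(n, 1, 0.6)        # OR rule: k = 1
    energy = n * (E_SENSE + E_REPORT)   # every CU senses and reports
    print(f"n={n:2d}  Pd={pd_or:.4f}  energy={energy:.1f}")
```

Adding CUs past a certain point buys almost no extra detection probability while energy keeps growing linearly, which is why schemes that limit who senses or who reports (censoring, clustering, confidence-based reporting) can save energy at negligible accuracy cost.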
90. Bridging Sensor Data Streams and Human Knowledge
Zeni, Mattia (January 2017)
Generating useful knowledge out of personal big data in the form of sensor streams is a difficult task that presents multiple challenges due to the intrinsic characteristics of these types of data, namely their volume, velocity, variety and noisiness. This is a well-known, long-standing problem in computer science called the Semantic Gap Problem. It was originally defined in the research area of image processing as "... the lack of coincidence between the information that one can extract from the visual data and the interpretation that the same data have for a user in a given situation..." [Smeulders et al., 2000]. In the context of this work, the lack of coincidence is between low-level raw streaming sensor data, collected by sensors in a machine-readable format, and the higher-level semantic knowledge that can be generated from these data and that only humans can understand thanks to their intelligence, habits and routines. This thesis addresses the semantic gap problem in the context above, proposing an interdisciplinary approach able to generate human-level knowledge from streaming sensor data in open domains. It leverages two different research fields: the collection, management and analysis of big data, and the field of semantic computing, focused on ontologies; these respectively map to the two elements of the semantic gap mentioned above. The contributions of this thesis are: • The definition of a methodology based on the idea that the user and the world surrounding her can be modeled, defining most of the elements of her context as entities (locations, people and objects, among others, and the relations among them), together with the attributes of each of them. The modeling aspects of this ontology are outside the scope of this work.
Having such a structure, the task of bridging the semantic gap is divided into many less complex, modular and compositional micro-tasks, which consist in mapping the streaming sensor data, using contextual information, to the attribute values of the corresponding entities. In this way we can create structure out of the unstructured, noisy and highly variable sensor data, which can then be used by the machine to provide personalized, context-aware services to the final user; • The definition of a reference architecture that applies the methodology above and addresses the semantic gap problem in streaming sensor data; • The instantiation of the architecture above in the Stream Base System (SB), resulting in the implementation of its main components using state-of-the-art software solutions and technologies; • The adoption of the Stream Base System in four use cases with very different objectives from one another, proving that it works in open domains.
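The micro-task idea can be sketched as small mappers that turn a raw sensor reading, given some context, into an attribute value of an entity. Names, thresholds and places below are invented for the example:

```python
# Each micro-task maps one raw reading + context to one attribute value
# of an entity, so the sensor stream becomes structured knowledge.

entities = {"alice": {"type": "person", "attributes": {}}}

def map_location(reading, context):
    # GPS fix + known places -> semantic location attribute
    lat, lon = reading
    for place, (plat, plon) in context["places"].items():
        if abs(lat - plat) < 0.001 and abs(lon - plon) < 0.001:
            return place
    return "unknown"

def map_motion(reading, context):
    # accelerometer magnitude -> coarse activity attribute
    # (threshold is illustrative)
    return "moving" if reading > 1.2 else "still"

context = {"places": {"home": (46.0667, 11.1167)}}
entities["alice"]["attributes"]["location"] = map_location((46.0668, 11.1166), context)
entities["alice"]["attributes"]["motion"] = map_motion(0.4, context)
print(entities["alice"]["attributes"])
```

Because each mapper is small and independent, new sensors or new attributes only add micro-tasks rather than complicating a monolithic pipeline, which is the compositionality argument made above.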