1 |
Autonomic Management of Cloud Virtual Infrastructures. Loreti, Daniela <1984>, 12 May 2016 (has links)
The new model of interaction suggested by Cloud Computing has experienced significant diffusion over recent years thanks to its capability of providing customers with the illusion of an infinite amount of reliable resources. Nevertheless, the challenge of efficiently managing a large collection of virtual computing nodes has only partially moved from the customer's private datacenter to the larger provider's infrastructure that we generally address as “the cloud”. A lot of effort - in both the academic and industrial fields - is therefore concentrated on policies for the efficient and autonomous management of virtual infrastructures.
The research on this topic is further encouraged by the diffusion of cheap and portable sensors and the availability of almost ubiquitous Internet connectivity, which are constantly creating large flows of information about the environment we live in. The need for fast and reliable mechanisms to process these considerable volumes of data has inevitably pushed the evolution from the initial scenario of a single (private or public) cloud towards cloud interoperability, giving birth to several forms of collaboration between clouds. Efficient resource management is further complicated in these heterogeneous environments, making autonomous administration more and more desirable.
In this thesis, we initially focus on the challenges of autonomic management in a single-cloud scenario, considering the benefits and shortcomings of centralized and distributed solutions and proposing an original decentralized model.
Later in this dissertation, we face the challenge of autonomic management in large interconnected cloud environments, where the movement of virtual resources across the infrastructure nodes is further complicated by the intrinsic heterogeneity of the scenario and by the difficulties introduced by the higher-latency medium between datacenters. Accordingly, we focus on a cost model for the execution of distributed data-intensive applications on multiple clouds and propose different management policies leveraging cloud interoperability.
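The abstract does not detail the cost model itself; as a purely illustrative sketch under invented assumptions, the following Python snippet shows one way a multi-cloud management policy could weigh computation cost against inter-datacenter transfer cost when placing data-intensive tasks. The cloud names, prices, and the greedy policy are placeholders, not the model developed in the thesis.

```python
# Hypothetical sketch: greedy placement of data-intensive tasks across clouds,
# trading off computation cost against inter-datacenter transfer cost.
# All figures and the policy itself are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    cpu_cost: float        # cost per CPU-hour
    egress_cost: float     # cost per GB transferred out

@dataclass
class Task:
    name: str
    cpu_hours: float
    input_gb: float
    data_home: str         # cloud where the input data currently resides

def placement_cost(task: Task, target: Cloud, clouds: dict) -> float:
    """Computation cost on the target plus transfer cost if the data must move."""
    cost = task.cpu_hours * target.cpu_cost
    if task.data_home != target.name:
        cost += task.input_gb * clouds[task.data_home].egress_cost
    return cost

def place(tasks, clouds):
    """Greedy policy: each task goes to the cheapest cloud for it."""
    plan = {}
    for t in tasks:
        best = min(clouds.values(), key=lambda c: placement_cost(t, c, clouds))
        plan[t.name] = (best.name, placement_cost(t, best, clouds))
    return plan

if __name__ == "__main__":
    clouds = {c.name: c for c in [Cloud("eu-dc", 0.05, 0.09),
                                  Cloud("us-dc", 0.04, 0.12)]}
    tasks = [Task("wordcount", cpu_hours=10, input_gb=200, data_home="eu-dc"),
             Task("training", cpu_hours=50, input_gb=5, data_home="us-dc")]
    for name, (cloud, cost) in place(tasks, clouds).items():
        print(f"{name} -> {cloud} (estimated cost {cost:.2f})")
```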
|
2 |
Coordination Issues in Complex Socio-technical Systems: Self-organisation of Knowledge in MoK. Mariani, Stefano <1987>, 12 May 2016 (has links)
The thesis proposes the Molecules of Knowledge (MoK) model for self-organisation of knowledge in knowledge-intensive socio-technical systems.
The main contribution is the conception, definition, design, and implementation of the MoK model.
The model is based on a chemical metaphor for self-organising coordination, in which coordination laws are interpreted as artificial chemical reactions ruling the evolution of the molecules of knowledge living in the system (the information chunks), indirectly coordinating the users working with them.
In turn, users may implicitly affect system behaviour with their interactions, according to the cognitive theory of behavioural implicit communication, integrated in MoK.
The theory states that any interaction conveys tacit messages that can be suitably interpreted by the coordination model to better support users' workflows.
Design and implementation of the MoK model required two other contributions: the conception, design, and tuning of the artificial chemical reactions with custom kinetic rates, playing the role of the coordination laws, and the development of an infrastructure supporting coordination situated in time, in space, and w.r.t. the environment, along with a dedicated coordination language.
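The kinetic details are not given in this abstract; the following Python sketch only caricatures the general mechanism of rate-driven artificial reactions acting on knowledge molecules, in the style of a Gillespie stochastic step. The reactions, species, and rates are invented for illustration and are not MoK's actual coordination laws.

```python
# Invented example of rate-driven artificial chemical reactions over
# "molecules of knowledge"; NOT the actual MoK coordination laws.
import random

# System state: count of each kind of knowledge molecule.
state = {"source_atom": 50, "relevant_atom": 5, "decayed": 0}

reactions = [
    # (rate constant, species driving the propensity, state delta when it fires)
    (0.01, "source_atom", {"source_atom": -1, "decayed": +1}),        # decay of untouched knowledge
    (0.05, "source_atom", {"source_atom": -1, "relevant_atom": +1}),  # reinforcement via user interaction
]

def gillespie_step(state, reactions, rng=random):
    """One stochastic step: fire a reaction with probability proportional to rate * propensity."""
    weights = [rate * state[species] for rate, species, _ in reactions]
    total = sum(weights)
    if total == 0:
        return False
    pick = rng.random() * total
    acc = 0.0
    for w, (_, _, delta) in zip(weights, reactions):
        acc += w
        if pick < acc:
            for key, change in delta.items():
                state[key] += change
            return True
    return False

for _ in range(100):
    if not gillespie_step(state, reactions):
        break
print(state)
```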
|
3 |
Distributed Information Systems and Data Mining in Self-Organizing Networks. Pirini, Tommaso <1986>, 14 April 2016 (has links)
The diffusion of sensors and devices that generate and collect data is pervasive. The infrastructure that envelops the smart city has to react to contingent situations and to changes in the operating environment. At the same time, in a distributed system consisting of huge numbers of fixed and mobile components, ensuring robustness, scalability, and reliability with middleware-style architectures can generate unsustainable costs and latencies. The distributed system must be able to self-organize and self-restore, adapting its operating strategies to optimize resource usage and overall efficiency. Peer-to-peer (P2P) systems can offer solutions to the requirements of managing, indexing, searching and analyzing data in a scalable and self-organizing fashion, as in cloud services and big data applications, to mention just two of the most strategic technologies for the coming years.
In this thesis we present G-Grid, a multi-dimensional distributed data index able to efficiently execute arbitrary multi-attribute exact and range queries in decentralized P2P environments. G-Grid is a foundational structure and can be effectively used in a wide range of application environments, including grid computing, cloud and big data domains.
We then propose improvements on the basic structure, introducing a degree of randomness by using Small World networks, which are structures derived from social networks and exhibit an almost uniform traffic distribution. This produces substantial gains in efficiency, cutting maintenance costs without losing efficacy. Experiments show how this new hybrid structure obtains the best performance in traffic distribution and represents a good trade-off for overall performance with respect to the requirements of modern data systems.
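G-Grid's actual structure is not described in this abstract; as a hypothetical illustration of the general idea of multi-attribute range querying over a partitioned key space, the sketch below maps 2-D keys to region codes by bit interleaving and resolves a range query to every peer whose region overlaps the query rectangle. The encoding, region assignment, and peer set are invented for the example.

```python
# Illustrative sketch (not G-Grid's actual algorithm): peers own regions of a
# 2-D key space identified by bit-interleaved prefixes, and a range query is
# resolved to every peer whose region intersects the query rectangle.
BITS = 4  # resolution per dimension (toy value)

def interleave(x: int, y: int) -> str:
    """Bit-interleave two BITS-bit coordinates into a region code string."""
    return "".join(f"{(x >> i) & 1}{(y >> i) & 1}" for i in reversed(range(BITS)))

def region_box(prefix: str):
    """Bounding box (xmin, xmax, ymin, ymax) of the region with this code prefix."""
    xmin = ymin = 0
    for i, bit in enumerate(prefix):
        place = 1 << (BITS - 1 - i // 2)
        if i % 2 == 0:          # even positions refine x
            xmin += int(bit) * place
        else:                   # odd positions refine y
            ymin += int(bit) * place
    span_x = 1 << (BITS - (len(prefix) + 1) // 2)
    span_y = 1 << (BITS - len(prefix) // 2)
    return xmin, xmin + span_x - 1, ymin, ymin + span_y - 1

def intersects(box, qx0, qx1, qy0, qy1):
    xmin, xmax, ymin, ymax = box
    return not (xmax < qx0 or xmin > qx1 or ymax < qy0 or ymin > qy1)

def peers_for_range(peer_prefixes, qx0, qx1, qy0, qy1):
    """Peers whose region overlaps the query rectangle."""
    return [p for p in peer_prefixes
            if intersects(region_box(p), qx0, qx1, qy0, qy1)]

if __name__ == "__main__":
    peers = ["00", "01", "10", "11"]                 # one quadrant per peer
    print(interleave(3, 12))                         # region code of a sample key
    print(peers_for_range(peers, 0, 3, 0, 3))        # query confined to one quadrant
```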
|
4 |
Creating and Recognizing 3D Objects. Petrelli, Alioscia <1979>, 13 May 2016 (has links)
This thesis investigates 3D Computer Vision, a research topic which is gathering ever-increasing attention thanks to the increasingly widespread availability of affordable 3D visual sensors, in particular consumer-grade RGB-D cameras.
The contribution of this dissertation is twofold. First, the work addresses how to compactly represent the content of images acquired with RGB-D cameras. Second, the thesis focuses on 3D Reconstruction, a key issue for efficiently populating the databases of 3D models deployed in object/category recognition scenarios.
As 3D Registration plays a fundamental role in 3D Reconstruction, the first part of the thesis proposes a pipeline for coarse registration of point clouds that is entirely based on the computation of 3D Local Reference Frames (LRF). Unlike related work in the literature, we also propose a comprehensive experimental evaluation based on diverse kinds of data (such as those acquired by laser scanners, RGB-D and stereo cameras) as well as a quantitative comparison with three other methods.
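The thesis's specific LRF construction is not reported here; a common approach in the literature, shown below as an assumption-laden sketch, derives the frame from the eigen-decomposition of the covariance of the keypoint's support points, with a naive sign-disambiguation step. Function names, the radius parameter, and the disambiguation rule are illustrative choices, not necessarily the method proposed in the dissertation.

```python
# Illustrative sketch of one common way to build a Local Reference Frame (LRF)
# at a keypoint: eigen-decomposition of the covariance of its support points.
import numpy as np

def local_reference_frame(cloud: np.ndarray, keypoint: np.ndarray, radius: float):
    """Return a 3x3 matrix whose rows are the LRF axes at `keypoint`."""
    # Support: all points within `radius` of the keypoint.
    support = cloud[np.linalg.norm(cloud - keypoint, axis=1) < radius]
    centered = support - support.mean(axis=0)
    cov = centered.T @ centered / len(support)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    axes = eigvecs[:, ::-1].T.copy()                # x = largest spread, z = smallest
    # Naive sign disambiguation: point each axis towards the support majority.
    for i in range(3):
        if np.sum(centered @ axes[i]) < 0:
            axes[i] = -axes[i]
    # Enforce a right-handed frame.
    axes[2] = np.cross(axes[0], axes[1])
    return axes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.2])  # elongated blob
    print(np.round(local_reference_frame(cloud, keypoint=np.zeros(3), radius=2.0), 2))
```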
Driven by the ever-lower costs and growing distribution of 3D sensing devices, we expect broad-scale integration of depth sensing into mobile devices to be forthcoming.
Accordingly, the thesis investigates merging appearance and shape information for Mobile Visual Search and focuses on encoding RGB-D images into compact binary codes.
An extensive experimental analysis on three state-of-the-art datasets, addressing both category and instance recognition scenarios, has led to the development of an RGB-D search engine architecture that can attain high recognition rates with particularly moderate bandwidth requirements.
Our experiments also include a comparison with the CDVS (Compact Descriptors for Visual Search) pipeline, a candidate to become part of the MPEG-7 standard.
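The RGB-D encoder itself is not described in this abstract; the snippet below only illustrates the generic retrieval pattern such search engines rely on, namely compact binary codes compared by Hamming distance. The random-projection hash is a placeholder standing in for the actual descriptor encoding.

```python
# Generic retrieval pattern with compact binary codes and Hamming distance.
# The random-projection hash is a placeholder, NOT the thesis's RGB-D encoder.
import numpy as np

def binary_code(features: np.ndarray, projection: np.ndarray) -> int:
    """Hash a feature vector into an integer whose bits form the binary code."""
    bits = (features @ projection) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two codes."""
    return bin(a ^ b).count("1")

def search(query_code: int, database: dict, k: int = 3):
    """Return the k database items whose codes are closest in Hamming distance."""
    return sorted(database, key=lambda name: hamming(query_code, database[name]))[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dim, nbits = 64, 32                       # toy descriptor size / code length
    projection = rng.normal(size=(dim, nbits))
    database = {f"model_{i}": binary_code(rng.normal(size=dim), projection)
                for i in range(100)}
    query = binary_code(rng.normal(size=dim), projection)
    print(search(query, database))
```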
|
5 |
Data and Text Mining Techniques for In-Domain and Cross-Domain Applications. Domeniconi, Giacomo <1986>, January 1900 (has links)
In the big data era, a vast amount of data is generated in different domains, from social media to news feeds, from health care to genomic functionalities. When addressing a problem, we usually need to harness multiple disparate datasets. Data from different domains may come in different modalities, each of which has a different representation, distribution, scale and density. For example, text is usually represented as discrete sparse word count vectors, whereas an image is represented by pixel intensities, and so on.
Nowadays plenty of Data Mining and Machine Learning techniques have been proposed in the literature, and they have already achieved significant success in many knowledge engineering areas, including classification, regression and clustering. Still, some challenging issues remain when tackling a new problem: how should the problem be represented? Which approach is best among the huge number of possibilities? What information should be used in the Machine Learning task, and how should it be represented? Are there different domains from which knowledge can be borrowed?
This dissertation proposes possible representation approaches for problems in different domains, from text mining to genomic analysis. In particular, one of the major contributions is a different way to represent a classical classification problem: instead of using one instance per object to be classified (a document, a gene, a social post, etc.), it proposes to use a pair of objects, or an object-class pair, with the relationship between them as the label. This approach is tested on both flat and hierarchical text categorization datasets, where it potentially allows the efficient addition of new categories during classification. Furthermore, the same idea is used to extract conversational threads from an unregulated pool of messages and to classify the biomedical literature based on the genomic features treated.
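To make the pair-based representation concrete, the sketch below builds one labelled instance per (document, category) pair instead of one instance per document; the documents, category profiles, and overlap features are invented for illustration and are not the dissertation's actual feature set.

```python
# Illustrative sketch of the pair-based representation: instead of one instance
# per document labelled with its category, build one instance per
# (document, category) pair labelled "match" / "no match".
docs = {
    "d1": {"text": "neural networks for image recognition", "category": "AI"},
    "d2": {"text": "monetary policy and inflation dynamics", "category": "Economics"},
}
category_profiles = {
    "AI": {"neural", "learning", "recognition", "networks"},
    "Economics": {"inflation", "market", "monetary", "policy"},
}

def pair_features(text: str, category: str) -> dict:
    """Features describing the relationship between a document and a category."""
    terms = set(text.split())
    profile = category_profiles[category]
    return {"overlap": len(terms & profile),
            "overlap_ratio": len(terms & profile) / len(profile)}

def build_pair_instances(docs, categories):
    """One labelled instance per (document, category) pair."""
    instances = []
    for doc_id, doc in docs.items():
        for cat in categories:
            instances.append((doc_id, cat,
                              pair_features(doc["text"], cat),
                              doc["category"] == cat))     # binary label
    return instances

for doc_id, cat, feats, label in build_pair_instances(docs, category_profiles):
    print(doc_id, cat, feats, "MATCH" if label else "no match")
```

A binary classifier trained on such pair instances can then score documents against categories added after training, since a new category only contributes new pairs rather than a new output class.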
|
6 |
Estrazione e rappresentazione della conoscenza nella bioinformatica (Knowledge Extraction and Representation in Bioinformatics). Baldacci, Lorenzo <1976>, 12 April 2007 (has links)
No description available.
|
7 |
Programmazione per la fruizione del progetto NUME attraverso Internet (Programming for Internet Access to the NUME Project). Diamanti, Tiziano <1974>, 09 June 2007 (has links)
No description available.
|
8 |
Specification, execution and verification of interaction protocols: an approach based on computational logic. Chesani, Federico <1975>, 12 April 2007 (has links)
Interaction protocols establish how different computational entities can interact with each other. The interaction can be aimed at the exchange of data, as in 'communication protocols', or oriented towards achieving some result, as in 'application protocols'. Moreover, with the increasing complexity of modern distributed systems, protocols are also used to control such complexity and to ensure that the system as a whole evolves with certain features. However, the extensive use of protocols has raised some issues, from the language for specifying them to the various aspects of verification. Computational Logic provides models, languages and tools that can be effectively adopted to address such issues: its declarative nature can be exploited for a protocol specification language, while its operational counterpart can be used to reason upon such specifications.
In this thesis we propose a proof-theoretic framework, called SCIFF, together with its extensions. SCIFF is based on Abductive Logic Programming and provides a formal specification language with a clear declarative semantics (based on abduction). The operational counterpart is given by a proof procedure that makes it possible to reason upon the specifications and to test the conformance of given interactions w.r.t. a defined protocol. Moreover, by suitably adapting the SCIFF framework, we propose solutions for addressing (1) the verification of protocol properties (g-SCIFF Framework) and (2) the a-priori conformance verification of peers w.r.t. a given protocol (AlLoWS Framework). We also introduce an agent-based architecture, the SCIFF Agent Platform, where the same protocol specification can be used to program the interacting peers and to ease their implementation task.
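SCIFF itself is an abductive logic programming framework, so the Python fragment below is not SCIFF; it merely caricatures the conformance-checking intuition: protocol rules raise expectations from happened events, and a run is conformant if every expectation is fulfilled in time. The protocol rule, actors, and deadlines are invented examples.

```python
# Toy caricature of run-time conformance checking: rules map happened events
# to expectations; a trace is conformant iff every expectation is fulfilled.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    actor: str
    action: str
    time: int

def query_rule(ev: Event):
    """If a customer sends a query, the seller is expected to answer within 10 ticks."""
    if ev.action == "query":
        return [("seller", "answer", ev.time + 10)]
    return []

RULES = [query_rule]

def conformant(trace):
    """Check every expectation raised by the rules against the happened events."""
    expectations = [exp for ev in trace for rule in RULES for exp in rule(ev)]
    for actor, action, deadline in expectations:
        fulfilled = any(ev.actor == actor and ev.action == action
                        and ev.time <= deadline for ev in trace)
        if not fulfilled:
            return False, (actor, action, deadline)   # violated expectation
    return True, None

good = [Event("customer", "query", 1), Event("seller", "answer", 5)]
bad = [Event("customer", "query", 1)]
print(conformant(good))   # (True, None)
print(conformant(bad))    # (False, ('seller', 'answer', 11))
```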
|
9 |
Support infrastructures for multimedia services with guaranteed continuity and QoS. Foschini, Luca <1978>, 12 April 2007 (has links)
Advances in wireless networking and content delivery systems are enabling new challenging provisioning scenarios where a growing number of users access multimedia services, e.g., audio/video streaming, while moving among different points of attachment to the Internet, possibly with different connectivity technologies, e.g., Wi-Fi, Bluetooth, and cellular 3G. That calls for novel middlewares capable of dynamically personalizing service provisioning to the characteristics of client environments, in particular to discontinuities in wireless resource availability due to handoffs. This dissertation proposes a novel middleware solution, called MUM, that performs effective and context-aware handoff management to transparently avoid service interruptions during both horizontal and vertical handoffs. To achieve this goal, MUM exploits full visibility of the wireless connections available in client localities and of their handoff implementations (handoff awareness), of service quality requirements and handoff-related quality degradations (QoS awareness), and of the network topology and resources available in current/future localities (location awareness). The design and implementation of all the main MUM components, along with extensive field trials of the realized middleware architecture, confirmed the validity of the proposed fully context-aware handoff management approach. In particular, the reported experimental results demonstrate that MUM can effectively maintain service continuity for a wide range of different multimedia services by exploiting handoff prediction mechanisms, adaptive buffering and pre-fetching techniques, and proactive re-addressing/re-binding.
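MUM's actual mechanisms are not detailed in this abstract; the toy sketch below only illustrates how handoff prediction and adaptive buffering/pre-fetching could combine: when a handoff is predicted, the client temporarily raises its buffering target so playback survives the connectivity gap. Class names, thresholds, and the prediction signal are invented.

```python
# Hypothetical, much-simplified illustration of combining handoff prediction
# with adaptive buffering/pre-fetching for streaming continuity.
class StreamingBuffer:
    def __init__(self, normal_target=2.0, handoff_target=6.0):
        self.buffered = 0.0             # seconds of media currently buffered
        self.target = normal_target     # seconds we try to keep buffered
        self.normal_target = normal_target
        self.handoff_target = handoff_target

    def on_handoff_prediction(self, expected_gap: float):
        # Pre-fetch enough media to cover the predicted connectivity gap.
        self.target = max(self.handoff_target, expected_gap * 1.5)

    def on_handoff_complete(self):
        self.target = self.normal_target

    def tick(self, downlink_rate: float, playback_rate: float = 1.0):
        """One second of wall-clock time: fetch up to the target, play one second."""
        deficit = max(0.0, self.target - self.buffered)
        fetched = min(deficit, downlink_rate)   # seconds of media fetched this second
        self.buffered += fetched
        self.buffered -= playback_rate
        return self.buffered >= 0               # False means a playback stall

buf = StreamingBuffer()
buf.on_handoff_prediction(expected_gap=3.0)     # prediction arrives before the outage
for second, rate in enumerate([4.0, 4.0, 0.0, 0.0, 0.0, 4.0]):  # 3 s connectivity gap
    ok = buf.tick(downlink_rate=rate)
    print(f"t={second}s buffered={buf.buffered:.1f}s {'ok' if ok else 'STALL'}")
buf.on_handoff_complete()
```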
|
10 |
Structural analysis of combinatorial optimization problem characteristics and their resolution using hybrid approaches. Guerri, Alessio <1976>, 12 April 2007 (has links)
Many combinatorial problems coming from the real world may not have a clear and well-defined structure, typically being cluttered with side constraints or being composed of two or more sub-problems, usually not disjoint. Such problems are not suitable to be solved with pure approaches based on a single programming paradigm, because a paradigm that can effectively handle one problem characteristic may behave inefficiently when facing others. In these cases, modelling the problem using different programming techniques, trying to "take the best" from each technique, can produce solvers that largely dominate pure approaches. We demonstrate the effectiveness of hybridization and discuss different hybridization techniques by analyzing two classes of problems with particular structures, exploiting Constraint Programming and Integer Linear Programming solving tools, and Algorithm Portfolios and Logic-Based Benders Decomposition as integration and hybridization frameworks.
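As a miniature, assumption-laden illustration of the algorithm-portfolio idea mentioned above, the sketch below extracts a few structural features from an instance and dispatches it to the paradigm a simple rule predicts will work best; the features, thresholds, and dispatch rule are placeholders, not the selection model developed in the thesis.

```python
# Miniature illustration of the algorithm-portfolio idea: inspect structural
# features of a problem instance and dispatch it to the solving paradigm
# predicted to perform best. Features and thresholds are invented placeholders.
def instance_features(constraints, variables):
    """A couple of cheap structural features of the instance."""
    arity = [len(scope) for scope, _ in constraints]
    return {
        "num_vars": len(variables),
        "avg_constraint_arity": sum(arity) / len(arity) if arity else 0.0,
        "linear_fraction": sum(1 for _, kind in constraints if kind == "linear")
                           / max(1, len(constraints)),
    }

def choose_solver(features):
    """Handcrafted dispatch rule standing in for a learned selection model."""
    if features["linear_fraction"] > 0.8:
        return "ILP"          # mostly linear structure: integer programming
    if features["avg_constraint_arity"] > 3:
        return "CP"           # large global constraints: constraint programming
    return "CP+ILP hybrid"    # mixed structure: decompose and combine both

constraints = [(["x", "y"], "linear"), (["x", "y", "z", "w"], "alldifferent")]
features = instance_features(constraints, variables=["x", "y", "z", "w"])
print(features, "->", choose_solver(features))
```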
|