Design and Performance Evaluation of Network-on-Chip Communication Protocols and Architectures
Concer, Nicola <1980>, 20 April 2009
The continued scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand.
The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs and installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors.
On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes and so on.
SoC manufacturers such as ST Microelectronics, Samsung and Philips, as well as universities such as the University of Bologna, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers switch design methodology and speed up the development of new NoC-based systems on chip.
In this thesis we first give an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. We analyze this NoC topology and its routing algorithms in detail, and we propose Equalized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance;
• a methodology flow, based on modified publicly available tools, that can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this thesis helped develop;
• a comprehensive simulation-based comparison of different network interface designs, proposed by the author and the researchers at the AST lab, for integrating shared-memory and message-passing components on a single System on Chip;
• a powerful and flexible solution to the timing closure issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and reduces the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution that simplifies the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual-channel-based routers with multiple, flexible, small multi-plane ones. This solution reduces the area and power dissipation of any NoC while also increasing its performance, especially when resources are scarce.
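To make the first contribution more concrete, the following Python sketch implements an Across-First style routing function for a Spidergon-like topology: n nodes on a ring, each node also linked to the diametrically opposite node. The threshold rule shown here (take the across link whenever the ring distance exceeds a quarter of the ring) is a simplified reconstruction for illustration only, not the thesis' actual algorithms or the Equalized scheme.

```python
def spidergon_next_hop(src, dst, n):
    """Next hop under a simple Across-First rule on a Spidergon-like
    NoC with n nodes (n even).  Each node i has three links:
    clockwise (i+1), counterclockwise (i-1) and across (i + n/2)."""
    if src == dst:
        return src
    d = (dst - src) % n           # clockwise distance to destination
    if d <= n // 4:
        return (src + 1) % n      # short clockwise trip: stay on the ring
    if d >= n - n // 4:
        return (src - 1) % n      # short counterclockwise trip
    return (src + n // 2) % n     # otherwise jump across the chip first

def spidergon_route(src, dst, n):
    """Full source-to-destination path (for illustration only)."""
    path = [src]
    while path[-1] != dst:
        path.append(spidergon_next_hop(path[-1], dst, n))
    return path
```

With this rule a packet takes at most one across hop followed by at most n/4 ring hops, which is what makes the topology attractive compared with a plain ring.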
This thesis was written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
Per un museo virtuale dell'informatica. Un supporto automatico per creare "visite museali" (For a virtual museum of computer science: automatic support for creating "museum visits")
Paliani, Luca <1974>, 20 May 2009
The subject of the present research is computer technology applied to supporting intellectual activities such as text translation, screenwriting and the content organization of popular and educational courses, especially museum visits.
The research started with a deep analysis of the cognitive process that characterizes a screenwriter at work. This choice was made because a screenplay is not only an aid to the realization of a show but, more generally, can be considered the plan of an educational, popular and formative intellectual activity.
After this analysis, the research focused on the planning, description and presentation of topics related to the history of science, and in particular of computer science. To focus on this area it was essential to analyze the didactics of organizing museum visits. The aim was to identify the guidelines that a teacher should follow when planning the visit of a museum (here, a virtual museum of the history of computer science).
The results achieved through this research made it possible to design and build an automatic support system for describing and producing a formative, educational and popular multimedia product on the history of computer science.
The resulting system provides the following features:
· management of multimedia slides (texts, video, audio or images), which can be classified on the basis of the topic and of the profile of the user;
· automatic creation of a sequence of multimedia slides that introduce the topic;
· management of the interaction with the user to check and validate the product.
The most innovative aspect of the present research is that the product is built according to the profile of the user.
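A minimal Python sketch of how such a profile-driven sequence of slides could be assembled. The `Slide` fields, the profile levels and the selection rule are all hypothetical, introduced only to illustrate the idea of classifying slides by topic and user profile and ordering them into a visit.

```python
from dataclasses import dataclass

@dataclass
class Slide:
    title: str
    topic: str    # e.g. "early computers", "programming languages"
    level: str    # target profile: "school", "expert" or "general"
    order: int    # position of the slide in the topic's storyline

def build_visit(slides, topic, profile):
    """Select the slides matching the requested topic and user profile
    (slides marked "general" suit every profile) and arrange them into
    a presentation sequence by their storyline order."""
    chosen = [s for s in slides
              if s.topic == topic and s.level in (profile, "general")]
    return [s.title for s in sorted(chosen, key=lambda s: s.order)]
```

A call such as `build_visit(catalogue, "early computers", "school")` would then yield a different visit than the same call with the `"expert"` profile, which is the personalization aspect highlighted above.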
Self-Organizing Mechanisms for Task Allocation in a Knowledge-Based Economy
Marcozzi, Andrea <1979>, 20 April 2009
A prevalent claim is that we live in a knowledge economy. By knowledge economy we generally mean a "knowledge-based economy", one that uses knowledge and technologies to produce economic benefits. Knowledge is thus both the tool and the raw material (people's skills) for producing products and services. In this kind of environment, economic organization is undergoing several changes: authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only few constraints on the set of coordination mechanisms. What characterizes a knowledge economy, then, is the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated.
Another important point is the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a "supervisory problem" (Hodgson, 1999) emerges, and traditional hierarchical control may prove increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because "the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better" (Hodgson, 1999). We therefore expect the organization of the economic activity of specialists to be, at least partially, self-organized.
The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) networks, to organization theory. We think that the P2P paradigm fits well with organization problems arising in all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies.
We think P2P networks have three main characteristics in common with firms involved in the knowledge economy:
- Decentralization: in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of individual peers;
- Cost of ownership: P2P computing implies shared ownership, reducing the cost of owning the system and its content, and the cost of maintaining them;
- Self-Organization: it refers to the process in a system leading to the emergence of global order within the system without the presence of another system dictating this order.
These characteristics are also present in the kind of firm we address, and that is why we have carried over the techniques we adopted in our computer science studies (Marcozzi et al., 2005; Hales et al., 2007) to management science.
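To make the idea of self-organized task allocation concrete, here is a toy Python sketch of a purely local, decentralized allocation rule: each peer knows only its own neighbours, and a task is greedily forwarded toward the peer whose skill best matches the task, stopping when no neighbour improves the match. This is an illustrative toy, not the actual mechanisms studied in the thesis or in the cited works; all names and the numeric skill model are hypothetical.

```python
def forward_task(task_skill, start, skills, links, max_hops=8):
    """Route a task through a peer network using only local knowledge:
    each peer hands the task to the neighbour whose skill is closest to
    the required one, and executes it itself when no neighbour improves
    the match (a local optimum, with no central authority involved)."""
    current = start
    for _ in range(max_hops):
        best = min(links[current],
                   key=lambda p: abs(skills[p] - task_skill),
                   default=current)
        if abs(skills[best] - task_skill) >= abs(skills[current] - task_skill):
            return current          # no neighbour does better: execute here
        current = best
    return current
```

Note that, exactly as with real decentralized systems, the greedy rule can get trapped in local optima when the network topology hides the best-suited peer, which is one motivation for the self-organizing rewiring mechanisms the thesis investigates.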
EXAM-S: an Analysis tool for Multi-Domain Policy Sets
Ferrini, Rodolfo <1980>, 20 April 2009
As distributed collaborative applications and architectures adopt policy-based management for tasks such as access control, network security and data privacy, the management and consolidation of a large number of policies is becoming a crucial component of such policy-based systems. In large-scale distributed collaborative applications such as web services, there is a need to analyze policy interactions and to integrate policies. In this thesis we propose and implement EXAM-S, a comprehensive environment for policy analysis and management that can be used to perform a variety of functions, such as policy property analysis, policy similarity analysis and policy integration. As part of this environment, we have proposed and implemented new techniques for the analysis of policies that build on a deep study of state-of-the-art techniques. Moreover, we propose an approach for solving the heterogeneity problems that usually arise when analyzing policies belonging to different domains. Our work focuses on the analysis of access control policies written in XACML (eXtensible Access Control Markup Language). We consider XACML policies because XACML is a rich language that can represent many policies of interest to real-world applications and is gaining widespread adoption in industry.
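As a toy illustration of what policy interaction analysis means, the following Python sketch compares two simplified attribute-based policies under a first-applicable combining rule, brute-forcing all requests over finite attribute domains to find the ones on which the policies disagree. Real XACML analysis (and EXAM-S itself) is far richer; the policy representation and every name here are hypothetical simplifications.

```python
import itertools

def evaluate(policy, request):
    """First-applicable combining: return the effect of the first rule
    whose condition matches the request.  A policy is a list of
    (condition, effect) rules; a condition is a dict of required
    attribute values; effects are 'Permit' or 'Deny'."""
    for condition, effect in policy:
        if all(request.get(k) == v for k, v in condition.items()):
            return effect
    return 'NotApplicable'

def find_conflicts(policy_a, policy_b, domains):
    """Brute-force interaction analysis: enumerate every request over
    the finite attribute domains and collect those on which the two
    policies return opposite decisions."""
    conflicts = []
    for values in itertools.product(*domains.values()):
        request = dict(zip(domains, values))
        decisions = {evaluate(policy_a, request), evaluate(policy_b, request)}
        if decisions == {'Permit', 'Deny'}:
            conflicts.append(request)
    return conflicts
```

Enumeration over request space is of course exponential; the point of dedicated analysis techniques is precisely to answer such questions symbolically instead.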
Kernel Methods for Tree Structured Data
Da San Martino, Giovanni <1979>, 20 April 2009
Machine learning comprises a series of techniques for automatic extraction of meaningful information from large collections of noisy data.
In many real-world applications, data is naturally represented in structured form. Since traditional machine learning methods deal with vectorial information, structured data requires an a priori preprocessing step. Among the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and as being effective approaches.
They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function.
Designing fast and good kernel functions is a challenging problem. In the case of tree structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data; in fact, it tends to produce a discriminating function behaving like the nearest neighbour rule.
Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets whose node labels belong to a large domain.
A second drawback of tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data.
This thesis proposes three contributions for resolving the above issues of kernels for trees.
A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing its sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower dimensional representation, we are able to perform inexact matchings between different inputs in the original space.
A second contribution is the proposal of a novel kernel function based on the convolution kernel framework.
A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect.
A third contribution is devoted to reducing the computational burden of calculating a kernel function between a tree and a forest of trees, a typical operation in the classification phase and, for some algorithms, also in the learning phase.
We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique for kernels such as the subtree and subset tree kernels. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
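For illustration, the classic subtree (ST) kernel mentioned above, which counts pairs of identical complete subtrees in two trees, can be sketched in a few lines of Python by hashing each subtree to a canonical form. This is a textbook-style sketch of the standard kernel, not the thesis' proposed kernels; trees are encoded as `(label, [children])` pairs.

```python
from collections import Counter

def subtree_kernel(t1, t2):
    """Subtree (ST) kernel sketch: the number of pairs of identical
    complete subtrees of t1 and t2, computed by mapping every subtree
    to a canonical hashable key and multiplying occurrence counts."""
    def collect(tree, bag):
        label, children = tree
        # canonical key: the label plus the keys of all children, in order
        key = (label, tuple(collect(c, bag) for c in children))
        bag[key] += 1
        return key
    b1, b2 = Counter(), Counter()
    collect(t1, b1)
    collect(t2, b2)
    return sum(b1[k] * b2[k] for k in b1)
```

Identical keys occurring in many trees of a forest are exactly the shared substructures that a DAG-based representation can factor out, which is the intuition behind the compact representation discussed above.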
Expressiveness of Concurrent Languages
Di Giusto, Cinzia <1979>, 20 April 2009
The aim of this thesis is to survey different approaches for proving expressiveness properties of concurrent languages. We analyse four different calculi, using a different technique for each.
We begin with the analysis of a synchronous language: we explore the expressiveness of a fragment of CCS! (a variant of Milner's CCS where replication is considered instead of recursion) with respect to the existence of faithful encodings (i.e. encodings that respect the behaviour of the encoded model without introducing unnecessary computations) of models of computability strictly less expressive than Turing machines, namely grammars of types 1, 2 and 3 in the Chomsky hierarchy.
We then move to asynchronous languages and we study full abstraction for two Linda-like languages. Linda can be considered as the asynchronous version of CCS plus a shared memory (a multiset of elements) that is used for storing messages. After having defined a denotational semantics based on traces, we obtain fully abstract semantics for both languages by using suitable abstractions in order to identify different traces which do not correspond to different behaviours.
Since the ability of one of the two variants to recognize multiple occurrences of messages in the store (which accounts for an increase in expressiveness) is reflected in a less complex abstraction, we then study other languages where multiplicity plays a fundamental role. We consider CHR (Constraint Handling Rules), a language that uses multi-headed (guarded) rules. We prove that multiple heads augment the expressive power of the language: if we restrict to rules whose head contains at most n atoms, we obtain a hierarchy of languages of increasing expressiveness (i.e. the CHR language allowing at most n atoms in the heads is more expressive than the language allowing at most m atoms, with m < n).
Finally we analyse a language similar to, but simpler than, CHR. The kappa-calculus is a formalism for modelling molecular biology where molecules are terms with internal state and sites, bonds are represented by shared names labelling sites, and reactions are represented by rewriting rules.
Depending on the shape of the rewriting rules, several dialects of the calculus can be obtained. We analyse the expressive power of some of these dialects by focusing on decidability and undecidability for problems like reachability and coverability.
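Reachability questions of this kind can be made concrete on a toy multiset-rewriting system: molecules as species counts, reactions as rules that consume and produce multisets. The Python sketch below decides reachability by breadth-first search with an explicit bound on the number of explored states; the example system (complex formation A + B -> AB) and the bound are illustrative assumptions, not a dialect actually studied in the thesis, where (un)decidability depends on the shape of the rules.

```python
from collections import Counter

def reachable(start, target, rules, limit=10000):
    """Breadth-first search over multiset rewriting: decide whether the
    `target` multiset of species is reachable from `start` under the
    given (lhs, rhs) rules, exploring at most `limit` states (the bound
    keeps this sketch terminating even when the system is unbounded)."""
    def freeze(c):
        return frozenset(Counter(c).items())
    def apply_rule(state, lhs, rhs):
        s = Counter(dict(state))
        if all(s[k] >= v for k, v in lhs.items()):
            s.subtract(lhs)        # consume the reactants
            s.update(rhs)          # produce the products
            return freeze(+s)      # drop zero counts
        return None
    start, target = freeze(Counter(start)), freeze(Counter(target))
    seen, frontier = {start}, [start]
    while frontier and len(seen) < limit:
        state = frontier.pop(0)
        if state == target:
            return True
        for lhs, rhs in rules:
            nxt = apply_rule(state, Counter(lhs), Counter(rhs))
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return target in seen
```

For restricted dialects such a search is a genuine decision procedure; in the general case it is only a semi-decision method, which is exactly where the decidability boundaries studied in the thesis become relevant.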
A core calculus for the analysis and implementation of biologically inspired languages
Versari, Cristian <1978>, 20 April 2009
The application of concurrency theory to systems biology is in its earliest stage of progress. The metaphor of cells as computing systems, by Regev and Shapiro, opened the way to the employment of concurrent languages for the modelling of biological systems, whose peculiar characteristics have led to the design of many bio-inspired formalisms achieving higher faithfulness and specificity.
In this thesis we present pi@, an extremely simple and conservative extension of the pi-calculus that represents a keystone in this respect thanks to its expressive capabilities. The pi@ calculus is obtained by adding polyadic synchronisation and priority to the pi-calculus, in order to achieve compartment semantics and atomicity of complex operations, respectively.
In its direct application to biological modelling, the stochastic variant of the calculus, Spi@, is shown to be able to model consistently several phenomena, such as the formation of molecular complexes, hierarchical subdivision of the system into compartments, inter-compartment reactions, and dynamic reorganisation of the compartment structure consistent with volume variation.
The pivotal role of pi@ is evidenced by its capability of encoding, in a compositional way, several bio-inspired formalisms, so that it represents the optimal core of a framework for the analysis and implementation of bio-inspired languages. In this respect, the encodings of BioAmbients, Brane Calculi and a variant of P Systems in pi@ are formalised. The conciseness of their translations into pi@ allows their indirect comparison by means of their encodings; furthermore, it provides a ready-to-run implementation of minimal effort whose correctness is guaranteed by the correctness of the respective encoding functions.
Further important results of general validity are stated on the expressive power of priority. Several impossibility results are described, which clearly state the superior expressiveness of prioritised languages and the problems arising in the attempt to provide their parallel implementation. To this aim, a new setting in distributed computing (the last man standing problem) is singled out and exploited to prove the impossibility of providing a purely parallel implementation of priority by means of point-to-point or broadcast communication.
3-Dimensional Protein Reconstruction from Contact Maps: Complexity and Experimental Results
Medri, Filippo <1978>, 20 April 2009
No description available.
Algorithms for network design and routing problems
Bartolini, Enrico <1981>, 20 April 2009
In this thesis we study three combinatorial optimization problems, belonging to the classes of network design and vehicle routing problems, that are strongly linked in the context of the design and management of transportation networks: the Non-Bifurcated Capacitated Network Design Problem (NBP), the Period Vehicle Routing Problem (PVRP) and the Pickup and Delivery Problem with Time Windows (PDPTW). These problems are NP-hard and contain as special cases some well-known difficult problems, such as the Traveling Salesman Problem and the Steiner Tree Problem. Moreover, they model the core structure of many practical problems arising in logistics and telecommunications.
The NBP is the problem of designing the optimum network to satisfy a given set of traffic demands. Given a set of nodes, a set of potential links and a set of point-to-point demands called commodities, the objective is to select the links to install and to dimension their capacities so that all the demands can be routed between their respective endpoints, and the sum of link fixed costs and commodity routing costs is minimized. The problem is called non-bifurcated because the solution network must route each demand along a single path, i.e., the flow of each demand cannot be split. Although this is the case in many real applications, the NBP has received significantly less attention in the literature than other capacitated network design problems that allow bifurcation. We describe an exact algorithm for the NBP, based on solving with an integer programming solver a formulation of the problem strengthened by simple valid inequalities, and four new heuristic algorithms. One of these heuristics is an adaptive memory metaheuristic, based on partial enumeration, that can be applied to a wider class of structured combinatorial optimization problems.
In the PVRP a fleet of vehicles of identical capacity must be used to service a set of customers over a planning period of several days. Each customer specifies a service frequency, a set of allowable day combinations and a quantity of product that the customer must receive every time he is visited. For example, a customer may require to be visited twice during a 5-day period, imposing that these visits take place on Monday-Thursday, Monday-Friday or Tuesday-Friday. The problem consists in simultaneously assigning a day combination to each customer and designing the vehicle routes for each day so that each customer is visited the required number of times, the number of routes on each day does not exceed the number of available vehicles, and the total cost of the routes over the period is minimized. We also consider a tactical variant of this problem, called the Tactical Planning Vehicle Routing Problem, where customers require to be visited on a specific day of the period but a penalty cost, called a service cost, can be paid to postpone the visit to a later day. To our knowledge, all the algorithms proposed in the literature for the PVRP are heuristics. In this thesis we present the first exact algorithm for the PVRP, based on different relaxations of a set partitioning-like formulation. The effectiveness of the proposed algorithm is tested on a set of instances from the literature and on a new set of instances.
Finally, the PDPTW consists in servicing a set of transportation requests using a fleet of identical vehicles of limited capacity located at a central depot. Each request specifies a pickup location and a delivery location, and requires that a given quantity of load be transported from the pickup location to the delivery location. Moreover, each location can be visited only within an associated time window. Each vehicle can perform at most one route, and the problem is to satisfy all the requests using the available vehicles so that each request is serviced by a single vehicle, the load on each vehicle does not exceed its capacity, and all locations are visited within their time windows. We formulate the PDPTW as a set partitioning-like problem with additional cuts, and we propose an exact algorithm based on different relaxations of the mathematical formulation and a branch-and-cut-and-price algorithm. The new algorithm is tested on two classes of problems from the literature and compared with a recent branch-and-cut-and-price algorithm from the literature.
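To fix ideas on the NBP, the following Python sketch solves toy instances exactly by brute force: it enumerates every subset of candidate links and a single simple path per commodity (the non-bifurcation constraint), keeping the cheapest capacity-feasible combination. This exhaustive approach is only illustrative and explodes beyond a handful of links; the thesis' exact algorithm is based on a strengthened integer programming formulation, not on enumeration. The instance encoding is an assumption of this sketch.

```python
from itertools import combinations, product

def simple_paths(links, src, dst, path=None):
    """All simple src->dst paths over the given undirected links."""
    path = path or [src]
    if src == dst:
        yield tuple(path)
        return
    for a, b in links:
        for u, v in ((a, b), (b, a)):
            if u == src and v not in path:
                yield from simple_paths(links, v, dst, path + [v])

def solve_nbp(candidates, commodities):
    """Exhaustive NBP sketch.  `candidates` maps link (u, v) to
    (capacity, fixed_cost); `commodities` lists (src, dst, demand,
    unit_routing_cost).  Returns (cost, (installed_links, paths))."""
    best_cost, best_sol = float('inf'), None
    links = list(candidates)
    for r in range(len(links) + 1):
        for subset in combinations(links, r):
            fixed = sum(candidates[l][1] for l in subset)
            if fixed >= best_cost:
                continue            # installing more can only cost more
            choices = [list(simple_paths(subset, s, d))
                       for s, d, _, _ in commodities]
            if any(not c for c in choices):
                continue            # some commodity cannot be routed
            for combo in product(*choices):
                load = {l: 0 for l in subset}
                cost, feasible = fixed, True
                for p, (s, d, demand, unit) in zip(combo, commodities):
                    cost += unit * demand * (len(p) - 1)
                    for u, v in zip(p, p[1:]):
                        l = (u, v) if (u, v) in load else (v, u)
                        load[l] += demand
                        if load[l] > candidates[l][0]:
                            feasible = False
                if feasible and cost < best_cost:
                    best_cost, best_sol = cost, (subset, combo)
    return best_cost, best_sol
```

Even this tiny model exhibits the problem's trade-off: a cheap direct link may lack the capacity for an unsplittable demand, forcing a longer but feasible single path.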
Higher-Order Concurrency: Expressiveness and Decidability Results
Perez-Parra, Jorge Andres <1983>, 05 May 2010
Higher-order process calculi are formalisms for concurrency in which processes can be passed around in communications. Higher-order (or process-passing) concurrency is often presented as an alternative paradigm to the first-order (or name-passing) concurrency of the pi-calculus for the description of mobile systems. These calculi are inspired by, and formally close to, the lambda-calculus, whose basic computational step, beta-reduction, involves term instantiation.
The theory of higher-order process calculi is more complex than that of first-order process calculi. This shows up in, for instance, the definition of behavioral equivalences. A long-standing approach to overcoming this burden is to define encodings of higher-order processes into a first-order setting, so as to transfer the theory of the first-order paradigm to the higher-order one. While satisfactory in the case of calculi with basic (higher-order) primitives, this indirect approach falls short for higher-order process calculi featuring constructs for phenomena such as localities and dynamic system reconfiguration, which are frequent in modern distributed systems. Indeed, for higher-order process calculi involving little more than traditional process communication, encodings into some first-order language are difficult to handle or do not exist. We thus observe that foundational studies of higher-order process calculi must be carried out directly on them, exploiting their peculiarities.
This dissertation contributes to such foundational studies. We concentrate on two closely interwoven issues in process calculi: expressiveness and decidability. Surprisingly, these issues have been little explored in the higher-order setting. Our research is centered around a core calculus for higher-order concurrency in which only the operators strictly necessary to obtain higher-order communication are retained. We develop the basic theory of this core calculus and rely on it to study the expressive power of features universally accepted as basic in process calculi, namely synchrony, forwarding, and polyadic communication.