151
Theoretical and implementation aspects in the mechanization of the metatheory of programming languages
Ricciotti, Wilmer <1982> 06 May 2011
Interactive theorem provers are tools designed for the certification of formal proofs developed by means of man-machine collaboration. Formal proofs obtained in this way cover a large variety of logical theories, ranging from branches of mainstream mathematics to the field of software verification. The border between these two worlds is marked by results in theoretical computer science and proofs related to the metatheory of programming languages. This last field, an obvious application of interactive theorem proving, nonetheless poses a serious challenge to the users of such tools, due both to the particularly structured way in which these proofs are constructed and to the difficulty of managing notions typical of programming languages, such as variable binding.
This thesis is composed of two parts, discussing our experience in the development of the Matita interactive theorem prover and its use in the mechanization of the metatheory of programming languages. More specifically, part I covers:
- the results of our effort in providing a better framework for the development of tactics for Matita, in order to make their implementation and debugging easier, also resulting in much clearer code;
- a discussion of the implementation of two tactics, providing infrastructure for the unification of constructor forms and the inversion of inductive predicates; we point out interactions between induction and inversion and provide an advancement over the state of the art.
In the second part of the thesis, we focus on aspects related to the formalization of programming languages. We describe two works of ours:
- a discussion of basic issues we encountered in our formalizations of Part 1A of the POPLmark challenge, where we apply the extended inversion principles we implemented for Matita;
- a formalization of an algebraic logical framework, posing more complex challenges, including multiple binding and a form of hereditary substitution; this work adopts, for the encoding of binding, an extension of Masahiko Sato's canonical locally named representation we designed during our visit to the Laboratory for Foundations of Computer Science at the University of Edinburgh, under the supervision of Randy Pollack.
152
Definition, realization and evaluation of a software reference architecture for use in space applications
Panunzio, Marco <1982> 06 May 2011
A recent initiative of the European Space Agency (ESA) aims at the definition and adoption of a software reference architecture for use in on-board software of future space missions.
Our PhD project is set in the context of that effort.
At the outset of our work we gathered the industrial needs relevant to ESA and the main European space stakeholders, and consolidated them into a set of technical high-level requirements.
The conclusion we reached from that phase confirmed that the adoption of a software reference architecture was indeed the best solution for fulfilling those high-level requirements.
The software reference architecture we set out to build rests on four constituents:
(i) a component model, to design the software as a composition of individually verifiable and reusable software units;
(ii) a computational model, to ensure that the architectural description of the software is statically analyzable;
(iii) a programming model, to ensure that the implementation of the design entities conforms
with the semantics, the assumptions and the constraints of the computational model;
(iv) a conforming execution platform, to actively preserve at run time the properties asserted by static analysis.
The nature, feasibility and fitness of constituents (ii), (iii) and (iv) had already been proved by the author in an international project that preceded the commencement of the PhD work.
The core of the PhD project was therefore centered on the design and prototype implementation of constituent (i), a component model.
Our proposed component model is centered on:
(i) rigorous separation of concerns, achieved with the support for design views and by careful allocation of concerns to the dedicated software entities;
(ii) the support for specification and model-based analysis of extra-functional properties;
(iii) the inclusion of space-specific concerns (a minimal illustrative sketch follows).
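As an illustration only, the following Python sketch shows the flavour of such a separation of concerns: the functional contract of a component (provided and required services) is kept apart from its extra-functional annotations (timing budget, activation pattern). All names and values are invented for the example and are not taken from the thesis' component model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProvidedOperation:
    name: str
    # Extra-functional view, specified separately from the functional contract
    wcet_us: int                     # worst-case execution time budget (illustrative)
    period_ms: Optional[int] = None  # None for sporadic activations

@dataclass
class Component:
    name: str
    provided: List[ProvidedOperation] = field(default_factory=list)
    required: List[str] = field(default_factory=list)

# Two individually analysable units, composed by binding required to provided services
sensor = Component("SensorAcquisition",
                   provided=[ProvidedOperation("acquire", wcet_us=120, period_ms=10)])
control = Component("AttitudeControl",
                    provided=[ProvidedOperation("step", wcet_us=800, period_ms=50)],
                    required=["SensorAcquisition.acquire"])
```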
153
La Ricostruzione 3D di Bologna nel XVI Secolo / A 3D Reconstruction of Bologna in the 16th Century
Orlandi, Marco <1979> 24 May 2012
The present work concerns a three-dimensional reconstruction of the city of Bologna as it was in the Renaissance period; it provides a 3D model of buildings and their architecture and urban spaces that is ideally suited for research into the history of cities, educational purposes, and cultural tourism.
The reconstruction is based on an iconographic source of primary importance: the 1575 fresco of Bologna situated in the Vatican. This fresco is a huge bird's-eye view of the whole urban fabric of Bologna within its third city walls; civil and religious buildings and their architecture are depicted with a high level of detail, featuring gardens and other green areas in city blocks and other important urban structures observable in the city in the late 1500s, such as the dock area and the canals within the city, which today are no longer visible.
The three-dimensional reconstruction was made using Blender, an open-source 3D modelling program, and involved several steps: modelling, the creation of textures and materials (sampling the main colours of the fresco), lighting and animation.
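For illustration only, a minimal Blender Python (bpy) sketch of the kind of step this workflow involves: adding a placeholder building volume and assigning it a material whose base colour is assumed to have been sampled from the fresco. The geometry and the colour value are invented; the snippet must be run inside Blender's scripting environment.

```python
import bpy

# Placeholder box standing in for a building footprint extruded to its height
bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0.0, 0.0, 1.0))
building = bpy.context.active_object

# Material with a base colour assumed to be sampled from the fresco (RGBA, invented value)
mat = bpy.data.materials.new(name="fresco_ochre")
mat.diffuse_color = (0.78, 0.55, 0.32, 1.0)
building.data.materials.append(mat)
```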
A part of the virtual model was then imported into a GIS to test the use of 3D models as items to be linked to other historical documents related to urban development, thus making the reconstruction particularly suitable for historical research.
Great emphasis was given to the employment of virtual models suitable for educational purposes and for cultural tourism. The three-dimensional model was imported into a 3D engine to create an interactive virtual environment in which the user can move about to explore the urban spaces of Bologna in the 1500s. The last step was the development of an application for mobile devices (e.g. iPhone and iPad). This app allows users to compare contemporary Bologna and the virtual reconstruction while on the move.
154
Constraints meet concurrency
Mauro, Jacopo <1984> 10 May 2012
We investigate the benefits that emerge when the fields of constraint programming and concurrency meet.
On one hand, constraints can be used in concurrency theory to increase the conciseness and the expressive power of concurrent languages from a pragmatic point of view. On the other hand, problems modeled by using constraints can be solved faster and more efficiently using a concurrent system.
We explore both directions, providing two separate lines of contribution. First, we study the expressive power of a concurrent language, namely Constraint Handling Rules, that supports constraints as a primitive construct. We show which features of this language make it Turing powerful. Then we propose a framework for solving constraint problems that is intended to be deployed on a concurrent system. For the development of this framework we used the concurrent language Jolie, following the service-oriented paradigm. Based on this experience, we also propose an extension to service-oriented languages to overcome some of their limitations and to improve the development of concurrent applications.
155
Fingerprint Recognition: Enhancement, Feature Extraction and Automatic Evaluation of Algorithms
Turroni, Francesco <1983> 10 May 2012
The identification of people by measuring some traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis is focused on improving fingerprint recognition systems considering three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms.
An effective extraction of salient fingerprint features depends on the quality of the input fingerprint. If the fingerprint is very noisy, we are not able to detect a reliable set of features. A new fingerprint enhancement method, which is both iterative and contextual, is proposed. This approach detects high-quality regions in fingerprints, selectively applies contextual filtering and iteratively expands like wildfire toward low-quality ones.
A precise estimation of the orientation field would greatly simplify the estimation of other fingerprint features (singular points, minutiae) and improve the performance of a fingerprint recognition system. Fingerprint orientation extraction is improved along two directions. First, after the introduction of a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, pointing out the role of pre- and post-processing, we show how to improve the extraction. Second, the introduction of a new hybrid orientation extraction method, which follows an adaptive scheme, allows a significant improvement of orientation extraction in noisy fingerprints.
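As a point of reference, here is a minimal numpy/scipy sketch of the classic gradient-based baseline for orientation extraction (block-wise averaging of doubled-angle gradient components). It is only one of the baseline variants the thesis refers to, and the block size is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def orientation_field(img, block=16):
    """Estimate the local ridge orientation of a fingerprint image."""
    gy, gx = np.gradient(img.astype(float))
    # Smooth the doubled-angle gradient components over a block-sized window
    gxx = uniform_filter(gx * gx, size=block)
    gyy = uniform_filter(gy * gy, size=block)
    gxy = uniform_filter(gx * gy, size=block)
    # Dominant gradient direction, rotated by 90 degrees to get the ridge direction
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy) + np.pi / 2.0
    return theta  # orientation per pixel, in radians
```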
Scientific papers typically propose recognition systems that integrate many modules; an automatic evaluation of fingerprint algorithms is therefore needed to isolate the contributions that determine an actual advance of the state of the art. The lack of a publicly available framework to compare fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of this framework is discussed by providing relevant statistics: more than 1,450 submitted algorithms and two international competitions.
156
Reliable and Variation-Tolerant Interconnection Network for Low Power MPSoCs
Kakoee, Mohammad Reza <1978> 17 May 2012
Multi-Processor SoC (MPSOC) design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection.
With the number of on-chip blocks presently in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt.
Scaling down of process technologies has increased process and dynamic variations as well as transistor wear-out. Because of this, delay variations increase and impact the performance of MPSoCs. The interconnect architecture in MPSoCs becomes a single point of failure, as it connects all other components of the system together. A faulty processing element may be shut down entirely, but the interconnect architecture must be able to tolerate partial failures and variations and keep operating, possibly at some performance, power or latency overhead.
This dissertation focuses on techniques at different levels of abstraction to face the reliability and variability issues in on-chip interconnection networks. By showing the test results of a GALS NoC test chip, this dissertation motivates the need for techniques to detect and work around manufacturing faults and process variations in the interconnection infrastructure of MPSoCs. As a physical design technique, we propose the bundle routing framework as an effective way to route the global links of Networks-on-Chip. At the architecture level, two cases are addressed: (i) intra-cluster communication, where we propose a low-latency interconnect with robustness to variability; (ii) inter-cluster communication, where online functional testing together with a reliable NoC configuration scheme is proposed. We also propose dual-Vdd as an orthogonal way of compensating variability at the post-fabrication stage. This is an alternative strategy with respect to the design techniques, since it enforces the compensation at the post-silicon stage.
157
Variability-tolerant High-reliability Multicore Platforms
Paterna, Francesco <1979> 17 May 2012
Next-generation electronic devices have to guarantee high performance while being less power-consuming and highly reliable, for several application domains ranging from entertainment to business.
In this context, multicore platforms have proven to be the most efficient design choice, but new challenges have to be faced. The ever-increasing miniaturization of the components produces unexpected variations in technological parameters and wear-out, characterized by soft and hard errors.
Even though hardware techniques, which lend themselves to be applied at design time, have been studied with the objective of mitigating these effects, they are not sufficient; thus adaptive software techniques are necessary.
In this thesis we focus on multicore task allocation strategies to minimize the energy consumption while meeting performance constraints. We first devise a technique based on an Integer Linear Programming (ILP) formulation, which provides the optimal solution but cannot be applied on-line since the required algorithm is too time-consuming; we then propose a two-step sub-optimal technique that can be applied on-line. We demonstrate the effectiveness of the latter solution through an exhaustive comparison against the optimal solution, state-of-the-art policies, and variability-agnostic task allocations, by running multimedia applications on the virtual prototype of a next-generation industrial multicore platform.
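For illustration, a minimal sketch of an ILP formulation of this kind, written with the generic PuLP library (not necessarily the tooling used in the thesis): each task is assigned to exactly one core, each core's load stays within its capacity, and total energy is minimised. Task and core names, energy costs, loads and capacities are all invented.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

tasks = ["t0", "t1", "t2"]
cores = ["c0", "c1"]
energy = {("t0", "c0"): 5, ("t0", "c1"): 7,    # energy cost of running task t on core c
          ("t1", "c0"): 6, ("t1", "c1"): 4,
          ("t2", "c0"): 3, ("t2", "c1"): 5}
load = {("t0", "c0"): 30, ("t0", "c1"): 25,    # utilisation demanded by t on c (%)
        ("t1", "c0"): 40, ("t1", "c1"): 35,
        ("t2", "c0"): 20, ("t2", "c1"): 30}
capacity = {"c0": 70, "c1": 70}                # per-core utilisation bound (%)

prob = LpProblem("task_allocation", LpMinimize)
x = LpVariable.dicts("x", [(t, c) for t in tasks for c in cores], cat=LpBinary)

# Objective: minimise the total energy of the chosen assignment
prob += lpSum(energy[t, c] * x[t, c] for t in tasks for c in cores)
# Each task runs on exactly one core
for t in tasks:
    prob += lpSum(x[t, c] for c in cores) == 1
# Performance constraint: do not overload any core
for c in cores:
    prob += lpSum(load[t, c] * x[t, c] for t in tasks) <= capacity[c]

prob.solve()
print([(t, c) for t in tasks for c in cores if x[t, c].value() == 1])
```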
We also address the problem of performance and lifetime degradation.
We first focus on embedded multicore platforms and propose an idleness distribution policy that increases the expected lifetime of the cores by duty-cycling their activity; we then investigate the use of micro thermoelectric coolers in general-purpose multicore processors to control the temperature of the cores at runtime, with the objective of meeting lifetime constraints without performance loss.
158
Nonlinear Control Strategies for Cooperative Control of Multi-Robot Systems
Sabattini, Lorenzo <1983> 02 April 2012
This thesis deals with distributed control strategies for cooperative control of multi-robot systems. Specifically, distributed coordination strategies are presented for groups of mobile robots.
The formation control problem is initially solved exploiting artificial potential fields. The purpose of the presented formation control algorithm is to drive a group of mobile robots to create an arbitrarily shaped formation. Robots are initially controlled to create a regular polygon formation. A bijective coordinate transformation is then exploited to extend the scope of this strategy and obtain arbitrarily shaped formations. For this purpose, artificial potential fields are specifically designed, and robots are driven to follow their negative gradient.
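A minimal numpy sketch of this idea, not the thesis' actual potentials: each robot descends the gradient of an attractive potential toward its assigned point in the formation plus a repulsive potential that keeps it away from nearby robots. Gains, distances and the example formation are illustrative assumptions.

```python
import numpy as np

def potential_step(positions, targets, k_att=1.0, k_rep=0.5, d_safe=0.5, dt=0.05):
    """One integration step of gradient descent on an artificial potential field.

    positions, targets: (n, 2) arrays of current and desired robot positions.
    """
    grad = k_att * (positions - targets)        # gradient of 0.5*k_att*||p_i - p_i_des||^2
    n = positions.shape[0]
    for i in range(n):
        for j in range(n):
            diff = positions[i] - positions[j]
            d = np.linalg.norm(diff)
            if i != j and 1e-9 < d < d_safe:
                # gradient of 0.5*k_rep*(1/d - 1/d_safe)^2, active only within d_safe
                grad[i] += -k_rep * (1.0 / d - 1.0 / d_safe) * diff / d**3
    return positions - dt * grad                # robots follow the negative gradient

# Example: three robots converging to the vertices of a regular triangle
pos = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 1.0]])
goal = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
for _ in range(200):
    pos = potential_step(pos, goal)
```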
Artificial potential fields are subsequently exploited to solve the coordinated path tracking problem, thus making the robots autonomously spread along predefined paths and move along them in a coordinated way.
The formation control problem is then solved exploiting a consensus-based approach. Specifically, weighted graphs are used both to define the desired formation and to implement collision avoidance. As expected for consensus-based algorithms, this control strategy is experimentally shown to be robust to the presence of communication delays.
The global connectivity maintenance issue is then considered. Specifically, an estimation procedure is introduced to allow each agent to compute its own estimate of the algebraic connectivity of the communication graph in a distributed manner. This estimate is then exploited to develop a gradient-based control strategy that ensures that the communication graph remains connected as the system evolves. The proposed control strategy is developed initially for single-integrator kinematic agents and is then extended to Lagrangian dynamical systems.
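For reference, a minimal centralized numpy computation of the quantity being estimated: the algebraic connectivity, i.e. the second-smallest eigenvalue of the graph Laplacian. The thesis' contribution is a distributed estimate of this value; the sketch below only shows the centralized definition, with an invented example graph.

```python
import numpy as np

def algebraic_connectivity(adjacency):
    """Second-smallest eigenvalue (Fiedler value) of the graph Laplacian."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals = np.linalg.eigvalsh(laplacian)  # ascending order for symmetric matrices
    return eigvals[1]                        # > 0 if and only if the graph is connected

# Example: a path graph on three nodes is connected (algebraic connectivity = 1.0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(algebraic_connectivity(A))
```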
159
Advanced Numerical Simulation of Silicon-Based Solar Cells
Zanuccoli, Mauro <1974> 30 April 2012
Photovoltaic (PV) conversion is the direct production of electrical energy from the sun without the emission of polluting substances. In order to be competitive with other energy sources, the cost of PV technology must be reduced while ensuring adequate conversion efficiencies. These goals have motivated the interest of researchers in investigating advanced designs of crystalline silicon (c-Si) solar cells. Since lowering the cost of PV devices involves reducing the volume of semiconductor, an effective light-trapping strategy aimed at increasing photon absorption is required. Modeling of solar cells by electro-optical numerical simulation is helpful to predict the performance of future generations of devices exhibiting advanced light-trapping schemes and to provide new and more specific guidelines to industry. The approaches to optical simulation commonly adopted for c-Si solar cells may lead to inaccurate results in the case of thin-film and nano-structured solar cells. On the other hand, rigorous solvers of Maxwell's equations are highly CPU- and memory-intensive. Recently, in the optical simulation of solar cells, the RCWA method has gained relevance, providing a good trade-off between accuracy and computational resource requirements. This thesis is a contribution to the numerical simulation of advanced silicon solar cells by means of a state-of-the-art numerical 2-D/3-D device simulator, which has been successfully applied to the simulation of selective-emitter and rear point contact solar cells, for which the multi-dimensionality of the transport model is required in order to properly account for all competing physical mechanisms. In the second part of the thesis, the optical problem is discussed. Two novel and computationally efficient RCWA implementations for 2-D simulation domains, as well as a third RCWA simulator for 3-D structures based on an eigenvalue calculation approach, are presented. The proposed simulators have been validated in terms of accuracy, numerical convergence, computation time and correctness of results.
160
Cache-aware development of high integrity real-time systems
Mezzetti, Enrico <1974> 10 May 2012
Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors.
Caches are perceived as an additional source of complexity, which has potential for shattering the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques on real-world industrial software, developed without analysability in mind, is hardly feasible.
We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards - as opposed to becoming so only when the system is final - and more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.