101
Kernel Methods for Tree Structured Data
Da San Martino, Giovanni <1979> 20 April 2009 (has links)
Machine learning comprises a series of techniques for automatic extraction of meaningful information from large collections of noisy data.
In many real world applications, data is naturally represented in structured form. Since traditional machine learning methods deal with vectorial information, structured data requires an a priori preprocessing step. Among all the learning techniques for dealing with structured data, kernel methods are recognized for their strong theoretical background and for being effective approaches.
They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function.
Designing fast and good kernel functions is a challenging problem. In the case of tree structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures in the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminating function that behaves like the nearest neighbour rule.
Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets whose node labels belong to a large domain.
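To make the sparsity issue concrete, the following sketch (with invented trees and labels, not the kernel definitions or data used in the thesis) computes a basic subtree-matching kernel and counts the zero entries of the resulting Gram matrix: with labels drawn from a large domain, most tree pairs share no subtree and the matrix becomes sparse.

```python
# Minimal sketch of a subtree-style kernel and of Gram-matrix sparsity.
# Trees are nested tuples: (label, child1, child2, ...).
# Illustrative simplification only, not the exact kernels studied in the thesis.
from collections import Counter
import random

def collect(tree, bag):
    """Serialize the subtree rooted at each node and add it to the bag."""
    label, children = tree[0], tree[1:]
    s = "(" + str(label) + "".join(collect(c, bag) for c in children) + ")"
    bag[s] += 1
    return s

def subtree_kernel(t1, t2):
    """Count pairs of identical subtrees shared by the two trees."""
    b1, b2 = Counter(), Counter()
    collect(t1, b1)
    collect(t2, b2)
    return sum(b1[s] * b2[s] for s in b1.keys() & b2.keys())

def random_tree(depth, n_labels):
    label = random.randrange(n_labels)
    if depth == 0:
        return (label,)
    return (label,) + tuple(random_tree(depth - 1, n_labels)
                            for _ in range(random.randint(1, 2)))

random.seed(0)
for n_labels in (3, 1000):                      # small vs. large label domain
    data = [random_tree(3, n_labels) for _ in range(30)]
    gram = [[subtree_kernel(a, b) for b in data] for a in data]
    zeros = sum(1 for i, row in enumerate(gram)
                  for j, v in enumerate(row) if i != j and v == 0)
    print(n_labels, "labels -> zero off-diagonal Gram entries:",
          zeros, "/", len(data) * (len(data) - 1))
```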
A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data.
This thesis proposes three contributions for addressing the above issues of kernels for trees.
A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees with an algorithm able to project the data onto a lower dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower dimensional representation, we are able to perform inexact matching between different inputs in the original space.
A second contribution is the proposal of a novel kernel function based on the convolution kernel framework.
A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect.
A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase.
We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique for kernels such as the subtree and subset tree kernels. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures across different trees, thus reducing the computational burden and storage requirements.
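The idea of sharing substructures can be sketched as follows: each subtree is interned on its label and the identities of its children, so that identical subtrees occurring in different trees of the forest are stored only once. The forest and node counts below are illustrative assumptions, not the thesis implementation.

```python
# Sketch: represent a forest as a DAG by interning identical subtrees,
# so that substructures shared by different trees are stored only once.
# Illustrative only; not the algorithm used in the thesis.

class DagBuilder:
    def __init__(self):
        self.nodes = {}          # (label, child_ids) -> node id

    def add(self, tree):
        """Insert a tree (nested tuple: (label, *children)) and return its node id."""
        label, children = tree[0], tree[1:]
        key = (label, tuple(self.add(c) for c in children))
        if key not in self.nodes:
            self.nodes[key] = len(self.nodes)
        return self.nodes[key]

def tree_size(tree):
    return 1 + sum(tree_size(c) for c in tree[1:])

forest = [
    ("S", ("NP", ("D", ("the",)), ("N", ("dog",))), ("VP", ("V", ("runs",)))),
    ("S", ("NP", ("D", ("the",)), ("N", ("cat",))), ("VP", ("V", ("runs",)))),
]
builder = DagBuilder()
for t in forest:
    builder.add(t)
total = sum(tree_size(t) for t in forest)
print("tree nodes:", total, "-> DAG nodes:", len(builder.nodes))   # 18 -> 13
```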
102
Expressiveness of Concurrent Languages
Di Giusto, Cinzia <1979> 20 April 2009 (has links)
The aim of this thesis is to go through different approaches for proving expressiveness properties in several concurrent languages. We analyse four different calculi, exploiting a different technique for each one.
We begin with the analysis of a synchronous language: we explore the expressiveness of a fragment of CCS! (a variant of Milner's CCS where replication is considered instead of recursion) with respect to the existence of faithful encodings (i.e. encodings that respect the behaviour of the encoded model without introducing unnecessary computations) of models of computability strictly less expressive than Turing Machines, namely grammars of types 1, 2 and 3 in the Chomsky hierarchy.
We then move to asynchronous languages and we study full abstraction for two Linda-like languages. Linda can be considered as the asynchronous version of CCS plus a shared memory (a multiset of elements) that is used for storing messages. After having defined a denotational semantics based on traces, we obtain fully abstract semantics for both languages by using suitable abstractions in order to identify different traces which do not correspond to different behaviours.
Since the ability of one of the two considered variants to recognise multiple occurrences of messages in the store (which accounts for an increase in expressiveness) is reflected in a less complex abstraction, we then study other languages where multiplicity plays a fundamental role. We consider CHR (Constraint Handling Rules), a language which uses multi-headed (guarded) rules. We prove that multiple heads augment the expressive power of the language. Indeed, we show that by restricting to rules whose head contains at most n atoms we can generate a hierarchy of languages with increasing expressiveness (i.e. the CHR language allowing at most n atoms in the heads is more expressive than the language allowing at most m atoms, with m < n).
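To give a concrete flavour of multi-headed rules, the toy sketch below (which ignores guards and propagation rules, so it is only a loose approximation of CHR) applies a simplification rule only when all the atoms of its head are simultaneously present in the constraint store.

```python
# Toy illustration of multi-headed simplification rules in the style of CHR.
# Guards and propagation rules are omitted; this is not a full CHR semantics.
from collections import Counter

def apply_rule(store, head, body):
    """If every atom of the (multiset) head is in the store, replace them with the body."""
    store = Counter(store)
    if all(store[a] >= n for a, n in Counter(head).items()):
        store.subtract(Counter(head))
        store.update(Counter(body))
        return +store            # drop zero/negative counts
    return store

# A 2-headed rule: two tokens of "coin" are exchanged for one "espresso".
store = Counter({"coin": 3})
rule = (["coin", "coin"], ["espresso"])
for _ in range(3):
    store = apply_rule(store, *rule)
    print(dict(store))
# fires once, giving {'coin': 1, 'espresso': 1}; later iterations leave the store unchanged
```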
Finally, we analyse a language similar to, but simpler than, CHR. The kappa-calculus is a formalism for modelling molecular biology where molecules are terms with internal state and sites, bonds are represented by shared names labelling sites, and reactions are represented by rewriting rules.
Depending on the shape of the rewriting rules, several dialects of the calculus can be obtained. We analyse the expressive power of some of these dialects by focusing on decidability and undecidability for problems like reachability and coverability.
103
A core calculus for the analysis and implementation of biologically inspired languages
Versari, Cristian <1978> 20 April 2009 (has links)
The application of Concurrency Theory to Systems Biology is still in its early stages. The metaphor of cells as computing systems, due to Regev and Shapiro, opened the way to the employment of concurrent languages for the modelling of biological systems. The peculiar characteristics of such systems led to the design of many bio-inspired formalisms which achieve higher faithfulness and specificity.
In this thesis we present pi@, an extremely simple and conservative extension of the pi-calculus which represents a keystone in this respect, thanks to its expressive capabilities. The pi@ calculus is obtained by adding polyadic synchronisation and priority to the pi-calculus, in order to achieve compartment semantics and atomicity of complex operations, respectively.
In its direct application to biological modelling, the stochastic variant of the calculus, Spi@, is shown to be able to consistently model several phenomena, such as the formation of molecular complexes, the hierarchical subdivision of the system into compartments, inter-compartment reactions, and the dynamic reorganisation of the compartment structure consistently with volume variation.
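The stochastic execution of calculi such as Spi@ is typically grounded in Gillespie-style simulation; the sketch below shows that general scheme on a toy complex-formation reaction. Species, rates and initial quantities are invented for illustration and are not taken from the thesis.

```python
# Gillespie-style stochastic simulation of a toy complexation reaction
#   A + B -> AB   (rate k1),   AB -> A + B   (rate k2)
# Illustrates the general execution scheme behind stochastic process calculi;
# rates and species here are arbitrary examples.
import random

state = {"A": 100, "B": 80, "AB": 0}
k1, k2 = 0.001, 0.05
t, t_end = 0.0, 50.0
random.seed(1)

while t < t_end:
    a1 = k1 * state["A"] * state["B"]     # propensity of binding
    a2 = k2 * state["AB"]                 # propensity of unbinding
    a0 = a1 + a2
    if a0 == 0:
        break
    t += random.expovariate(a0)           # time to next reaction
    if random.random() < a1 / a0:         # choose a reaction proportionally
        state["A"] -= 1; state["B"] -= 1; state["AB"] += 1
    else:
        state["A"] += 1; state["B"] += 1; state["AB"] -= 1

print(t, state)
```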
The pivotal role of pi@ is evidenced by its capability of encoding several bio-inspired formalisms in a compositional way, so that it represents the optimal core of a framework for the analysis and implementation of bio-inspired languages. In this respect, the encodings of BioAmbients, Brane Calculi and a variant of P Systems into pi@ are formalised. The conciseness of their translation into pi@ allows their indirect comparison by means of their encodings. Furthermore, it provides a ready-to-run implementation of minimal effort, whose correctness is guaranteed by the correctness of the respective encoding functions.
Further important results of general validity are stated on the expressive power of priority. Several impossibility results are described, which clearly state the superior expressiveness of prioritised languages and the problems arising when attempting to provide their parallel implementation. To this aim, a new setting in distributed computing (the last man standing problem) is singled out and exploited to prove the impossibility of providing a purely parallel implementation of priority by means of point-to-point or broadcast communication.
104
Progetto di reti Sensori Wireless e tecniche di Fusione Sensoriale
Zappi, Piero <1980> 25 May 2009 (has links)
Ambient Intelligence (AmI) envisions a world where smart, electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces.
The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes: simple devices that typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenging modules). Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. In order to handle the large amount of data generated by a WSN, several multisensor data fusion techniques have been developed. The aim of multisensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone.
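As a minimal numerical illustration of the benefit of fusion (the textbook inverse-variance rule, not a technique claimed by the thesis), combining two noisy readings of the same quantity yields an estimate whose variance is lower than that of either sensor alone.

```python
# Inverse-variance weighted fusion of two noisy measurements of the same quantity.
# Textbook example for illustration; sensor values are invented.
def fuse(x1, var1, x2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)   # weighted average of the readings
    var = 1.0 / (w1 + w2)                 # fused variance is below both inputs
    return x, var

x, var = fuse(21.3, 0.4, 22.1, 0.9)       # e.g. two temperature sensors
print(x, var)                             # fused variance (~0.28) < 0.4 and < 0.9
```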
In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Multimodal Surveillance and Activity Recognition.
Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. Such techniques allow the detection of the number of people moving in the environment, their direction of movement and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking. Furthermore we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime.
Activity recognition is a fundamental block in natural interfaces. A challenging objective is to design an activity recognition system that is able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure where simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefit of such an architecture in terms of increased recognition performance and of fault and noise robustness. Furthermore we show how we can extend the network lifetime by performing a performance-power trade-off.
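The flavour of such a hierarchical architecture can be sketched as follows: each node emits a local gesture label (or abstains if it is faulty or asleep), and a simple meta-classifier fuses whatever outputs are available by weighted voting. Node names, weights and labels are illustrative assumptions, not the parameters of the proposed system.

```python
# Sketch of a two-level classification architecture: per-node gesture labels
# are fused by a meta-classifier that tolerates a changing number of inputs.
# Weights and labels are illustrative; this is not the thesis implementation.
from collections import defaultdict

def meta_classify(node_outputs, node_weights):
    """node_outputs: {node_id: predicted_label or None}; None/missing nodes abstain."""
    votes = defaultdict(float)
    for node, label in node_outputs.items():
        if label is not None:                      # faulty or sleeping nodes abstain
            votes[label] += node_weights.get(node, 1.0)
    if not votes:
        return None                                # no node responded
    return max(votes, key=votes.get)

weights = {"wrist": 1.0, "arm": 0.8, "belt": 0.5}
print(meta_classify({"wrist": "wave", "arm": "wave", "belt": "point"}, weights))  # -> wave
print(meta_classify({"wrist": None, "arm": "point", "belt": "point"}, weights))   # -> point
```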
Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device with limited computational power.
Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. The dataset is freely available to the scientific community for research purposes and can be used as a test bench for developing, testing and comparing different activity recognition techniques.
105
Tecniche di progettazione tollerante alle variazioni per circuiti digitali in tecnologie nanometriche
Paci, Giacomo <1979> 25 May 2009 (has links)
The development of the digital electronics market is founded on the continuous reduction of transistor size, which reduces the area, power and cost and increases the computational performance of integrated circuits.
This trend, known as technology scaling, is approaching the nanometer size.
The lithographic process in the manufacturing stage is becoming more and more uncertain as transistor size scales down, resulting in larger parameter variations in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting the scaling of the threshold and supply voltages, increasing the power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes.
Those effects are no longer addressable only at the process level.
Consequently, deep sub-micron devices will require solutions involving several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and system techniques able to cope with yield and reliability loss.
The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow:
i) the implementation of new analysis algorithms able to predict the thermal behaviour of the system and its impact on power and speed performance;
ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system;
iii) statistical performance analysis able to predict the impact of process variation, both random and systematic.
These new analysis tools have to be developed alongside new logic and system strategies to cope with the future challenges, for instance:
i) thermal management strategies that increase the reliability and lifetime of the devices by acting on some tunable parameter, such as supply voltage or body bias;
ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability;
iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error correcting signal encodings (ECC).
The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general purpose chips, and publications on statistical performance analysis.
In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing the system reliability. Therefore the thesis advocates the need to integrate thermal analysis in the first design stages of embedded NoC design.
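To convey the kind of computation a thermal analysis library performs, the following sketch integrates a single lumped thermal RC node under a switching power profile; the thermal resistance, capacitance and power values are invented for illustration and are unrelated to the library developed in the thesis.

```python
# Lumped thermal RC model of one on-chip hot spot: C dT/dt = P(t) - (T - T_amb)/R.
# Parameter values are arbitrary illustrations, not data from the thesis.
R_TH = 2.0      # K/W, junction-to-ambient thermal resistance
C_TH = 0.05     # J/K, thermal capacitance
T_AMB = 45.0    # deg C
DT = 1e-3       # s, integration step

def simulate(power_profile):
    temp, trace = T_AMB, []
    for p in power_profile:
        temp += DT * (p - (temp - T_AMB) / R_TH) / C_TH   # explicit Euler step
        trace.append(temp)
    return trace

# 2 W bursts alternating with idle periods produce thermal cycles.
profile = ([2.0] * 200 + [0.2] * 200) * 5
trace = simulate(profile)
print("peak temperature:", round(max(trace), 1), "deg C")
```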
Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyse the impact of self-timed asynchronous logic stages in an embedded microprocessor. As a result, we confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we discovered the superior robustness to systematic process variation of low-swing links, which show a good response to compensation techniques such as ASV and ABB. Hence low-swing signalling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool in the first stages of the design flow.
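The flavour of such a statistical analysis can be conveyed with a small Monte Carlo sketch: each gate delay receives a random (within-die) component and a systematic (die-wide) component, and the resulting path-delay distribution gives a timing-yield estimate. All numbers are assumptions chosen only for illustration.

```python
# Monte Carlo sketch of path delay under random and systematic process variation.
# Sigma values, gate count and timing target are illustrative assumptions.
import random
random.seed(0)

NOMINAL_GATE_DELAY = 20.0   # ps
N_GATES = 30                # gates on the critical path
SIGMA_RANDOM = 0.05         # within-die, independent per gate
SIGMA_SYSTEMATIC = 0.04     # die-wide, shared by all gates on the path
TARGET = 650.0              # ps timing constraint

def sample_path_delay():
    systematic = random.gauss(0.0, SIGMA_SYSTEMATIC)      # same for the whole die
    return sum(NOMINAL_GATE_DELAY * (1.0 + systematic + random.gauss(0.0, SIGMA_RANDOM))
               for _ in range(N_GATES))

samples = [sample_path_delay() for _ in range(10000)]
yield_est = sum(d <= TARGET for d in samples) / len(samples)
print("mean delay:", round(sum(samples) / len(samples), 1), "ps,",
      "timing yield:", yield_est)
```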
106
Study of silicon-on-insulator multiple-gate MOS structures including band-gap engineering and self heating effects
Braccioli, Marco <1979> 09 April 2009 (has links)
The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the transistor density on a chip doubles every 24 months. This trend has been made possible by the downsizing of MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them.
In order to overcome the limitations related to conventional structures, the research community is preparing different solutions, which need to be assessed. Possible solutions currently under scrutiny are:
• devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions;
• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it permits keeping Short-Channel Effects under control without adopting high doping levels in the channel.
Among the solutions proposed in order to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting, for the source/drain regions, materials with a band gap different from that of the channel material. This solution allows the injection velocity of the particles travelling from the source into the channel to be increased, and therefore improves the performance of the transistor in terms of provided drain current.
The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; moreover, the modifications introduced in the Monte Carlo code in order to simulate conduction band discontinuities are described, as well as the simulations performed on simplified one-dimensional structures in order to validate them.
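A common semiclassical way of handling a conduction-band discontinuity in a particle-based Monte Carlo simulator is sketched below: a carrier reaching the heterointerface is transmitted only if its kinetic energy along the transport direction exceeds the barrier, and its velocity is rescaled to conserve energy. This is a generic textbook rule under simplifying assumptions (parabolic bands, equal effective masses), not necessarily the treatment implemented in the thesis code.

```python
# Semiclassical treatment of a carrier hitting a conduction-band step dEc:
# transmit (with energy conservation) if the longitudinal kinetic energy
# exceeds the barrier, otherwise reflect. Parabolic bands and equal effective
# masses are assumed; this is a simplified illustration only.
import math

M_EFF = 0.98 * 9.109e-31    # kg, longitudinal effective mass (illustrative)
Q = 1.602e-19               # C, elementary charge

def cross_barrier(v_long, delta_ec_ev):
    """Return the longitudinal velocity after a step of height delta_ec_ev (eV)."""
    e_kin = 0.5 * M_EFF * v_long**2 / Q          # longitudinal kinetic energy in eV
    if e_kin > delta_ec_ev:
        # transmitted: kinetic energy reduced by the barrier height
        return math.copysign(math.sqrt(2.0 * Q * (e_kin - delta_ec_ev) / M_EFF), v_long)
    return -v_long                               # reflected back into the source

print(cross_barrier(3.0e5, 0.15))   # fast carrier: transmitted with lower velocity
print(cross_barrier(1.0e5, 0.15))   # slow carrier: reflected
```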
Chapter 4 presents the results obtained from the Monte Carlo simulations performed on double-gate SOI transistors featuring conduction band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered.
The scaling of device dimensions and the adoption of innovative architectures have consequences on power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer features a thermal conductivity that is two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects, which detrimentally impact the carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering.
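A back-of-the-envelope illustration of why the buried oxide matters: treating the oxide as a one-dimensional thermal resistance, the temperature rise under the active region is roughly P multiplied by t_BOX / (k_ox A). The numbers below are assumed for illustration only.

```python
# One-dimensional estimate of the temperature rise across the buried oxide:
#   R_th = t_box / (k_ox * A),   dT = P * R_th
# Values are illustrative assumptions, not device data from the thesis.
K_SIO2 = 1.4          # W/(m K), thermal conductivity of SiO2
K_SI   = 148.0        # W/(m K), bulk silicon, ~two orders of magnitude higher
T_BOX  = 100e-9       # m, buried-oxide thickness
AREA   = 1e-12        # m^2, footprint of the active region (1 um x 1 um)
POWER  = 100e-6       # W dissipated in the channel

r_th_box = T_BOX / (K_SIO2 * AREA)
print("BOX thermal resistance: %.2e K/W" % r_th_box)
print("temperature rise: %.1f K" % (POWER * r_th_box))
print("same layer made of bulk silicon would give: %.3f K" % (POWER * T_BOX / (K_SI * AREA)))
```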
The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and it provides a brief overview of the methods that have been proposed in order to model these phenomena. In order to understand how this problem impacts the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as in FinFETs featuring the same isothermal electrical characteristics.
In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperature reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analysed. Furthermore, the consequences on self-heating of technological solutions such as raised S/D extension regions or a reduction of the fin height are explored as well.
Finally, conclusions are drawn in chapter 7.
107
Enabling Blocks for Integrated CMOS UWB Transceivers
Guermandi, Marco <1981> 23 March 2009 (has links)
The last decades have seen an unrivaled growth and diffusion of mobile telecommunications. Several standards have been developed for these purposes, from GSM mobile phone communications to WLAN IEEE 802.11, providing different services for the transmission of signals ranging from voice to high data rate digital communications and Digital Video Broadcasting (DVB).
In this wide research and market field, this thesis focuses on Ultra Wideband (UWB) communications, an emerging technology for providing very high data rate transmissions over very short distances. In particular the presented research deals with the circuit design of enabling blocks for MB-OFDM UWB CMOS single-chip transceivers, namely the frequency synthesizer and the transmission mixer and power amplifier.
First we discuss three different models for the simulation of charge-pump phase-locked loops, namely the continuous-time s-domain and discrete-time z-domain approximations and the exact semi-analytical time-domain model. The limitations of the two approximated models are analyzed in terms of the error in the computed settling time as a function of the loop parameters, deriving practical conditions under which the different models are reliable for fast-settling PLLs up to the fourth order.
Besides, a phase noise analysis method based upon the time-domain model is introduced and compared to the results obtained by means of the s-domain model. We compare the three models on the simulation of a fast-switching PLL to be integrated in a frequency synthesizer for WiMedia MB-OFDM UWB systems.
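As an example of what the continuous-time approximation provides, the sketch below builds the classical second-order closed-loop transfer function of a charge-pump PLL with a series R-C loop filter and measures the settling time of its normalized step response. Component values are arbitrary illustrations, and the model ignores the higher-order and discrete-time effects discussed above.

```python
# s-domain model of a 2nd-order charge-pump PLL (series R-C loop filter):
#   open loop  G(s) = K (1 + s R C) / s^2,  with K = Icp * Kvco / (2 pi N C)
#   closed loop T(s) = K (R C s + 1) / (s^2 + K R C s + K)
# Settling time is measured on the normalized step response.
# Component values are illustrative assumptions only.
import numpy as np
from scipy import signal

ICP, KVCO, N = 100e-6, 2 * np.pi * 500e6, 32     # A, rad/s/V, division ratio
R, C = 10e3, 100e-12                             # loop-filter components
K = ICP * KVCO / (2 * np.pi * N * C)

num = [K * R * C, K]
den = [1.0, K * R * C, K]
t = np.linspace(0, 5e-6, 20000)
t, y = signal.step((num, den), T=t)              # response to a unit frequency step

tol = 0.001                                      # settle to within 0.1 %
outside = np.where(np.abs(y - 1.0) > tol)[0]
t_settle = t[min(outside[-1] + 1, len(t) - 1)] if len(outside) else 0.0
print("natural frequency: %.2f Mrad/s, damping: %.2f, settling time: %.2f us"
      % (np.sqrt(K) / 1e6, R * C * np.sqrt(K) / 2, t_settle * 1e6))
```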
In the second part, the theoretical analysis is applied to the design of a 60 mW, 3.4-to-9.2 GHz, 12-band frequency synthesizer for MB-OFDM UWB based on two wide-band PLLs. The design is presented and discussed up to layout level. A test chip has been implemented in TSMC 90 nm CMOS technology, and measured data is provided. The functionality of the circuit is proved and specifications are met with state-of-the-art area occupation and power consumption.
The last part of the thesis deals with the design of a transmission mixer and a power amplifier for MB-OFDM UWB band group 1. The design has been carried out up to layout level in STMicroelectronics 65 nm CMOS technology. The main characteristics of the system are its wideband behavior (1.6 GHz of bandwidth) and its constant behavior over process parameters, temperature and supply voltage, thanks to the design of dedicated adaptive biasing circuits.
108
Modelling and simulations of post-CMOS devices
Poli, Stefano <1981> 09 April 2009 (has links)
No description available.
109
Design methodologies of microwave integrated circuits for satellite telecommunications
Scappaviva, Francesco <1978> 25 May 2009 (has links)
The ongoing innovation of the microwave transistor technologies used in the implementation of microwave circuits has to be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the technology potentialities. After the choice of the technology to be used in the particular application, the circuit designer has few degrees of freedom when carrying out his design; in most cases, due to technological constraints, all the foundries develop and provide customized processes optimized for a specific performance such as power, low noise, linearity, broadband operation, etc. For these reasons circuit design is always a "compromise", a search for the best solution that reaches a trade-off between the desired performances.
This approach becomes crucial in the design of microwave systems to be used in satellite applications; the tight space constraints impose reaching the best performance under properly de-rated electrical and thermal conditions with respect to the maximum ratings provided by the adopted technology, in order to ensure adequate levels of reliability. In particular this work is about one of the most critical components in the front-end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and therefore the element which has the greatest impact on the space, weight and cost of the telecommunication apparatus; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern.
Many transactions and publications present different methods for the design of power amplifiers, highlighting the possibility of obtaining very good levels of output power, efficiency and gain. Starting from the existing knowledge, the target of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as power and efficiency.
After a review of the existing theories about power amplifier design, the first section of this work describes the effectiveness of a methodology based on the accurate control of the dynamic Load Line and of its shaping, explaining all the steps in the design of two different kinds of high power amplifiers. Considering the trade-off between the main performances and reliability issues as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the Load Line at the intrinsic terminals of the selected active device.
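For reference, the classical load-line estimate relates the optimum load resistance seen at the intrinsic drain to the supply voltage, knee voltage and maximum current of the device. The sketch below computes it for an invented set of device ratings under ideal class A assumptions; it is only the first-order starting point of the dynamic load-line shaping described in the thesis.

```python
# First-order (Cripps-style) load-line estimate for a class A stage with ideal
# waveforms: R_opt = 2 (Vdd - Vknee) / Imax. Device ratings below are invented.
import math

VDD, VKNEE, IMAX = 20.0, 3.0, 1.2             # V, V, A (illustrative GaN-like numbers)

r_opt = 2.0 * (VDD - VKNEE) / IMAX            # optimum load at the intrinsic drain
p_out = (VDD - VKNEE) * IMAX / 4.0            # maximum linear output power
p_dc = VDD * IMAX / 2.0                       # class A DC power (bias at Imax/2)

print("R_opt = %.1f ohm" % r_opt)
print("Pout  = %.1f W (%.1f dBm)" % (p_out, 10 * math.log10(p_out * 1e3)))
print("drain efficiency = %.0f %%" % (100 * p_out / p_dc))
```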
The methodology proposed in this first part is based on the assumption that the designer has an accurate electrical model of the device available; the variety of publications on this subject demonstrates how difficult it is to obtain a CAD model capable of taking into account all the non-ideal phenomena which occur when the amplifier operates at such high frequency and power levels. For this reason, especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design, based on the experimental characterization of the intrinsic Load Line by means of a low-frequency high-power measurement bench.
Thanks to the possibility of carrying out my Ph.D. in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programs requested by space agencies, with the aim of supporting technology transfer from universities to the industrial world and of promoting science-based entrepreneurship. For these reasons the proposed design methodology is illustrated through many experimental results.
110
Coordinated Control of Robotic Swarms in Unknown Environments
Falconi, Riccardo <1978> 16 April 2009 (has links)
This thesis gathers the work carried out by the author in the last three years of research and it concerns the study and implementation of algorithms to coordinate and control a swarm of mobile robots moving in unknown environments. In particular, the author's attention is focused on two different approaches in order to solve two different problems.
The first algorithm considered in this work deals with the possibility of decomposing a complex main task into many simple subtasks by exploiting a decentralized implementation of the so-called Null Space Behavioral paradigm. This approach to the problem of merging different subtasks with assigned priorities is slightly modified in order to handle critical situations that can be detected when robots are moving through an unknown environment. In fact, issues can occur when one or more robots get stuck in local minima: a smart strategy to avoid deadlock situations is provided by the author, and the algorithm is validated by simulation analysis.
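The core of the Null Space Behavioral composition can be sketched in a few lines: the secondary task velocity is projected onto the null space of the primary task Jacobian, so it can never perturb the higher-priority behaviour. The tasks and numbers below are invented for illustration and do not reproduce the thesis algorithm.

```python
# Null-Space-Behavioral composition of two tasks for a planar point robot:
# the secondary command is filtered through the null-space projector of the
# primary task, so it cannot interfere with it. Tasks and gains are illustrative.
import numpy as np

def nsb(J1, v1, J2, v2):
    """Combine task velocities with priority: v = J1+ v1 + (I - J1+ J1) J2+ v2."""
    J1p = np.linalg.pinv(J1)
    J2p = np.linalg.pinv(J2)
    null_proj = np.eye(J1.shape[1]) - J1p @ J1
    return J1p @ v1 + null_proj @ (J2p @ v2)

# Primary task: keep the robot on the line y = 0 (1-D task, Jacobian selects y).
J1 = np.array([[0.0, 1.0]])
v1 = np.array([-0.5])                 # push y back towards 0
# Secondary task: move towards a goal in the full plane.
J2 = np.eye(2)
v2 = np.array([1.0, 1.0])

v = nsb(J1, v1, J2, v2)
print(v)          # y-component comes only from the primary task: [1.0, -0.5]
```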
The second problem deals with the use of concepts borrowed from graph theory to control a group of differential wheeled robots by exploiting the Laplacian solution of the consensus problem. Constraints on the swarm communication topology have been introduced through the use of a range and bearing platform developed at the Distributed Intelligent Systems and Algorithms Laboratory (DISAL), EPFL (Lausanne, CH), where part of the author's work has been carried out. The control algorithm is validated by demonstration and simulation analysis and, later, is performed by a team of four robots engaged in a formation mission. To conclude, the capabilities of the algorithm based on the local solution of the consensus problem for differential wheeled robots are demonstrated in an application scenario where nine robots are engaged in a hunting task.
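The Laplacian-based consensus update underlying such controllers can be sketched as follows: each robot moves according to its disagreement with its neighbours, so the positions converge to a common value (or to a formation, if per-edge offsets are added). The graph, gain and initial positions are illustrative assumptions, not the experimental setup.

```python
# Discrete-time consensus on a ring of four robots:  x(k+1) = x(k) - eps * L x(k).
# Adding per-edge offsets turns rendezvous into formation keeping.
# Topology, gain and positions are illustrative; not the thesis experiments.
import numpy as np

A = np.array([[0, 1, 0, 1],          # ring communication graph (adjacency matrix)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
eps = 0.1                            # step size (eps < 1/max degree suffices for stability)

x = np.array([[0.0, 0.0], [4.0, 1.0], [5.0, 6.0], [1.0, 5.0]])   # initial positions
for _ in range(200):
    x = x - eps * (L @ x)            # each robot averages with its neighbours

print(np.round(x, 3))                # all rows converge to the centroid [2.5, 3.0]
```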