91. Spectrum shaping and its application: spread-spectrum clock generator and circuits for ultra wide band
De Michele, Luca Antonio <1975>, 08 April 2008
The electromagnetic spectrum can be regarded as a resource by the designer, as well as by the manufacturer, from two complementary points of view: first, because it is a good in great demand by many different kinds of applications; second, because, despite its scarce availability, it may be advantageous to use more spectrum than strictly necessary. This is the case of spread-spectrum systems, in which the transmitted signal is spread over a frequency band much wider than the minimum bandwidth required to transmit the information being sent.
Part I of this dissertation deals with Spread-Spectrum Clock Generators (SSCGs), which aim at reducing the Electro-Magnetic Interference (EMI) of clock signals in integrated circuit (IC) design. In particular, the modulation of the clock, and the consequent spreading of its spectrum, is obtained through a random modulating signal produced by a chaotic map, i.e. a discrete-time dynamical system exhibiting chaotic behavior. The advantages offered by this kind of modulation are highlighted. Three different prototypes of chaos-based SSCGs are presented in all their aspects: design, simulation, and post-fabrication measurements. The third one, operating at 3 GHz, targets Serial ATA, the de facto standard for fast data transmission to and from hard disk drives.
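To illustrate the principle (with the textbook logistic map and arbitrary parameter values, not the maps actually designed for the prototypes), a minimal Python sketch of chaos-based clock modulation could look as follows: one chaotic sample per clock cycle sets the instantaneous frequency deviation of a 3 GHz clock, and the spectral peak of the resulting spread clock can then be inspected.

```python
import numpy as np

def logistic_map(x0, n, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), a classic chaotic map on [0, 1]."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = r * x[k - 1] * (1.0 - x[k - 1])
    return x

f0 = 3e9                 # nominal clock frequency, as in the third prototype
fs = 60e9                # simulation sample rate (hypothetical)
spread = 0.005           # +/-0.5 % peak frequency deviation (illustrative value)
cycles = 4000
samples_per_cycle = int(fs / f0)   # 20 samples per nominal clock period

# One chaotic sample per clock cycle, mapped to a relative frequency deviation.
dev = (logistic_map(0.2345, cycles) - 0.5) * 2.0 * spread
inst_freq = np.repeat(f0 * (1.0 + dev), samples_per_cycle)

# Synthesize the modulated clock as a hard-limited FM sine and inspect its spectrum:
# the carrier power is spread over a band, lowering the peak seen by EMI tests.
phase = 2.0 * np.pi * np.cumsum(inst_freq) / fs
clock = np.sign(np.sin(phase))
win = np.hanning(clock.size)
peak_db = 20 * np.log10(np.abs(np.fft.rfft(clock * win)).max())
print(f"spectral peak of the modulated clock: {peak_db:.1f} dB (arbitrary reference)")
```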
The most extreme example of spread-spectrum signalling is the emerging ultra-wideband (UWB) technology, which proposes the use of large sections of the radio spectrum at low amplitudes to transmit high-bandwidth digital data.
In Part II of the dissertation, two UWB applications are presented, addressing both the advantages and the challenges of a wide-band system: a chaos-based sequence-generation method for reducing Multiple Access Interference (MAI) in Direct-Sequence UWB Wireless Sensor Networks (WSNs), and the design and simulation of a Low-Noise Amplifier (LNA) for impulse-radio UWB. The latter topic was studied during a period abroad in collaboration with Delft University of Technology, Delft, Netherlands.
92. Memory hierarchy and data communication in heterogeneous reconfigurable SoCs
Vitkovskiy, Arseniy <1979>, 08 April 2008
The miniaturization race in the hardware industry, aimed at continuously increasing transistor density on a die, no longer brings corresponding improvements in application performance. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development.
Memory organization in such heterogeneous reconfigurable architectures becomes critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, the memory subsystem should provide a well-organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end user in order to support feasible high-level programmability of the system.
This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to the problem of memory organization and data structure. Using the MORPHEUS heterogeneous platform as an example, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on methods that support high system performance, meet application requirements, and provide a user-friendly programming interface.
As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which accomplishes its task by separating computation from communication, supplying the reconfigurable engines with computation and configuration data, and unifying the heterogeneous computational devices through local storage buffers. It is distinguished from related solutions by its distributed data-flow organization, specifically engineered mechanisms for operating on data in local domains, a communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel technique to accelerate memory access was developed and implemented.
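A toy sketch of the separation of computation from communication through local storage buffers (hypothetical code, not the MORPHEUS hardware) is the classic ping-pong buffering scheme: while the engine computes on one local buffer, the next data block is transferred into the other, so that in hardware the transfer latency is hidden behind the computation.

```python
# A minimal sketch of ping-pong (double) buffering. In this sequential Python
# version the "DMA" and the computation happen one after the other; in hardware
# they would proceed in parallel on the two local buffers.
def stream_process(blocks, compute):
    buffers = [None, None]              # two local storage buffers
    results = []
    buffers[0] = blocks[0]              # prefetch the first block
    for i, _ in enumerate(blocks):
        if i + 1 < len(blocks):
            buffers[(i + 1) % 2] = blocks[i + 1]   # "DMA" of the next block
        results.append(compute(buffers[i % 2]))    # compute on the current block
    return results

print(stream_process([[1, 2], [3, 4], [5, 6]], compute=sum))  # -> [3, 7, 11]
```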
93. Design and Performance Evaluation of Network-on-Chip Communication Protocols and Architectures
Concer, Nicola <1980>, 20 April 2009
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand.
The most promising solution for future chip interconnects are Networks-on-Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components integrated on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs and installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors.
On-chip integration is becoming popular not only in the Chip Multi-Processor (CMP) research area but also in the wider and more heterogeneous world of Systems-on-Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smart phones, home embedded systems, automotive systems, set-top boxes and so on.
SoC manufacturers such as ST Microelectronics, Samsung and Philips, as well as universities such as the University of Bologna, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers switch design methodology and speed up the development of new NoC-based systems on chip.
In this Thesis we provide an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed, simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel-computing world. We analyse this NoC topology and its routing algorithms in detail, and we furthermore propose Equalized, a new routing algorithm designed to optimize the use of the network resources while also increasing its performance (a minimal routing sketch is given after this list);
• a methodology flow, based on modified publicly available tools, that can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics-proprietary transport-level protocol that the author of this Thesis helped develop;
• a comprehensive simulation-based comparison of different network-interface designs, proposed by the author and the researchers at the AST lab, for integrating shared-memory and message-passing components on a single System on Chip;
• a powerful and flexible solution to the timing-closure issue in the design of synchronous Networks-on-Chip. Our solution is based on relay-station repeaters and reduces the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution that simplifies the design of NoCs while increasing their performance and reducing their power and area consumption: we propose to replace complex and slow virtual-channel-based routers with multiple, flexible, small multi-plane routers. This solution reduces the area and power dissipation of any NoC while also increasing its performance, especially when resources are scarce.
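The routing sketch referred to in the first bullet follows. It is a minimal Python rendition of the across-first routing rule commonly described for the Spidergon topology (each of the N nodes has Right, Left and Across links); it is an assumption based on the published topology, not the ST-proprietary code nor the Equalized algorithm proposed in this Thesis.

```python
# Across-First routing on a Spidergon of n nodes: node i is linked to i+1 (Right),
# i-1 (Left) and i+n/2 (Across). If the destination is within a quarter of the ring,
# travel along the ring in the shorter direction; otherwise take the Across link first.
def spidergon_next_hop(current, dest, n):
    delta = (dest - current) % n
    if delta == 0:
        return current                      # already at destination
    if delta <= n // 4:
        return (current + 1) % n            # go clockwise (Right)
    if delta >= n - n // 4:
        return (current - 1) % n            # go counterclockwise (Left)
    return (current + n // 2) % n           # otherwise take the Across link first

def route(src, dest, n):
    path, node = [src], src
    while node != dest:
        node = spidergon_next_hop(node, dest, n)
        path.append(node)
    return path

print(route(0, 7, 16))   # -> [0, 8, 7]: across first, then one hop back along the ring
```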
This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department of Columbia University in the City of New York.
94. Per un museo virtuale dell'informatica. Un supporto automatico per creare "visite museali" (Towards a virtual museum of computer science: automatic support for creating "museum visits")
Paliani, Luca <1974>, 20 May 2009
The subject of the present research is the application of computer technology to support intellectual activities such as text translation, screenwriting and the content organization of popular and educational courses, especially museum visits.
The research started with a deep analysis of the cognitive process that characterizes a screenwriter at work. This choice was made because a screenplay is not only an aid to the realization of a show but, more generally, can be considered as the planning of an educational, popular and formative intellectual activity.
After this analysis, the research focused on the specific area of planning, describing and introducing topics related to the history of science, and in particular of computer science. To focus on this area, it was fundamental to analyse the didactics of organizing museum visits. The aim was to identify the guidelines that a teacher should follow when planning the visit of a museum (a virtual museum of the history of computer science).
The results achieved through this research made possible the design and realisation of an automatic support system for the description and production of a formative, educational and popular multimedia product on the history of computer science.
The system obtained provides the following features (a toy sketch of the first two follows the list):
· management of multimedia slides (texts, video, audio or images), which can be classified on the basis of the topic and of the profile of the user;
· automatic creation of a sequence of multimedia slides which introduce the topic;
· management of the interaction with the user to check and validate the product.
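The toy sketch announced above illustrates the first two features with hypothetical data structures: slides tagged with a topic, a set of suitable audience profiles and a difficulty level are filtered and ordered to build a visit.

```python
# A minimal, hypothetical sketch of slide selection and sequencing by user profile.
from dataclasses import dataclass

@dataclass
class Slide:
    title: str
    topic: str
    profiles: set      # e.g. {"child", "student", "expert"}
    level: int         # ordering key within a topic

def build_visit(slides, topic, profile):
    chosen = [s for s in slides if s.topic == topic and profile in s.profiles]
    return sorted(chosen, key=lambda s: s.level)

slides = [
    Slide("The ENIAC", "history of computing", {"student", "expert"}, 2),
    Slide("What is a computer?", "history of computing", {"child", "student"}, 1),
    Slide("Von Neumann architecture", "history of computing", {"expert"}, 3),
]
for s in build_visit(slides, "history of computing", "student"):
    print(s.title)     # -> "What is a computer?" then "The ENIAC"
```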
The most innovative aspect of the present research is that the product is built according to the profile of the user.
95. Characterization and modeling of low-frequency noise in MOSFETs
Zanolla, Nicola <1980>, 09 April 2009
For many years, RF and analog integrated circuits were mainly developed in bipolar and compound-semiconductor technologies because of their better performance. In recent years, advances in CMOS technology have allowed analog and RF circuits to be built in CMOS as well, but the use of CMOS instead of bipolar technology in RF applications has brought more issues in terms of noise. Noise cannot be completely eliminated; it ultimately limits the accuracy of measurements and sets a lower limit on how small a signal can be detected and processed in an electronic circuit. One kind of noise that affects MOS transistors much more than bipolar ones is low-frequency noise. In MOSFETs, low-frequency noise is mainly of two kinds: flicker (1/f) noise and random telegraph signal (RTS) noise. The objective of this thesis is to characterize and model low-frequency noise by studying RTS and flicker noise under both constant and switched bias conditions. The effect of different biasing schemes on both RTS and flicker noise has been investigated in the time and frequency domains.
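Both noise types are easy to emulate numerically. The following Python sketch (a textbook construction with arbitrary parameters, not the measurement setup or compact models of the thesis) generates an RTS with exponentially distributed dwell times and builds a 1/f-like process as a superposition of RTS components with log-spaced time constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def rts(n, tau_capture, tau_emission):
    """Two-level random telegraph signal with exponentially distributed dwell times (in samples)."""
    x, state, t = np.zeros(n), 0, 0
    while t < n:
        dwell = max(1, int(rng.exponential(tau_capture if state == 0 else tau_emission)))
        x[t:t + dwell] = state
        state, t = 1 - state, t + dwell
    return x

# Flicker-like (1/f) noise built as a superposition of RTS processes whose time
# constants are spread over several decades -- the textbook McWhorter picture.
n = 1 << 16
flicker = sum(rts(n, tau, tau) for tau in (10, 100, 1000, 10000))
psd = np.abs(np.fft.rfft(flicker - flicker.mean())) ** 2
freqs = np.fft.rfftfreq(n)
band = lambda lo, hi: psd[(freqs >= lo) & (freqs < hi)].mean()
# For ideal 1/f noise the band-averaged PSD drops by roughly a decade per decade of frequency.
print("PSD ratio between two decade-spaced bands:", band(1e-4, 1e-3) / band(1e-3, 1e-2))
```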
96. Self-Organizing Mechanisms for Task Allocation in a Knowledge-Based Economy
Marcozzi, Andrea <1979>, 20 April 2009
A prevalent claim is that we live in a knowledge economy. When we talk about a knowledge economy, we generally mean the concept of a "knowledge-based economy", indicating the use of knowledge and technologies to produce economic benefits. Knowledge is thus both a tool and a raw material (people's skills) for producing some kind of product or service. In this kind of environment, economic organization is undergoing several changes: for example, authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only few constraints on the set of coordination mechanisms. What characterises a knowledge economy is therefore the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a "supervisory problem" (Hodgson, 1999) emerges, and traditional hierarchical control may become increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because 'the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better' (Hodgson, 1999). We therefore expect the organization of the economic activity of specialists to be, at least partially, self-organized.
The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) networks, to organization theory. We think that the P2P paradigm fits well the organizational problems arising in all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms operating in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies.
The main characteristics we think P2P networks have in common with firms involved in the knowledge economy are three:
- Decentralization: in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of the single peers;
- Cost of ownership: P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them;
- Self-organization: the process by which global order emerges within a system without another system dictating this order.
These characteristics are also present in the kind of firm we address, and that is why we have transferred the techniques we adopted in our computer-science studies (Marcozzi et al., 2005; Hales et al., 2007) to management science.
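A minimal, purely illustrative Python sketch of the kind of mechanism studied (hypothetical rules, not the protocols analysed in the thesis) shows how local adaptation alone can improve task allocation: peers that fail to cover a task with the skills of their neighbourhood rewire one neighbour at random, and skill-diverse neighbourhoods, which stop failing, persist.

```python
import random

random.seed(1)

# Each peer owns one skill and a short neighbour list; a task requires a set of
# skills and lands on a random peer, which tries to cover it with its neighbourhood.
# On failure the peer swaps one neighbour at random: no central authority is involved.
N, SKILLS, DEGREE = 100, ["a", "b", "c", "d"], 5
skill = [random.choice(SKILLS) for _ in range(N)]
neigh = [random.sample(range(N), DEGREE) for _ in range(N)]

def can_cover(p, required):
    return required <= {skill[p]} | {skill[q] for q in neigh[p]}

outcomes = []
for step in range(5000):
    p = random.randrange(N)
    required = set(random.sample(SKILLS, 3))
    ok = can_cover(p, required)
    if not ok:
        neigh[p][random.randrange(DEGREE)] = random.randrange(N)   # local adaptation
    outcomes.append(ok)

print("success rate, first vs last 1000 tasks:",
      sum(outcomes[:1000]) / 1000, sum(outcomes[-1000:]) / 1000)
```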
97. Design and Computation of Warped Time-Frequency Transforms
Caporale, Salvatore <1981>, 26 March 2009
In this work we introduce an analytical approach to the frequency-warping transform. Criteria for the design of operators based on arbitrary warping maps are provided, and an algorithm carrying out a fast computation is defined. Such operators can be used to shape the tiling of the time-frequency plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed mathematical model, the frequency-warping transform is computed by considering two additive operators: the first represents its nonuniform Fourier-transform approximation, while the second suppresses aliasing. The first operator is analytically characterized and can be computed quickly by various interpolation approaches. A factorization of the second operator is found for arbitrarily shaped, non-smooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
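A minimal Python sketch of the first of the two operators (the nonuniform Fourier-transform approximation only, with an arbitrary warping map and no aliasing suppression) can be written as follows.

```python
import numpy as np

def warped_dft(x, warp):
    """Sample the DTFT of x on a nonuniformly warped frequency grid. This is only
    the nonuniform Fourier approximation; the aliasing-suppression operator
    described in the thesis is omitted."""
    n = len(x)
    k = np.arange(n)
    theta = warp(np.arange(n) / n)                # warped normalized frequencies
    return np.exp(-2j * np.pi * np.outer(theta, k)) @ x

# An arbitrary map that expands the low-frequency region of the tiling.
warp = lambda f: f ** 1.5
x = np.cos(2 * np.pi * 0.05 * np.arange(256))
y = warped_dft(x, warp)
peak = int(np.argmax(np.abs(y[:128])))            # look at the first half only
# An ordinary DFT would place this tone near bin 13 (0.05 * 256); the warped grid moves it.
print("tone at normalized frequency 0.05 peaks at warped bin", peak)
```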
98. Statistical methods for biomedical signal analysis and processing
Palladini, Alessandro <1981>, 26 March 2009
Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and developing efficient processing and classification algorithms. In this thesis, these frameworks are applied for modelling and processing biomedical signals in two different contexts: ultrasound medical imaging systems and primate neural activity analysis and modelling.
In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic image segmentation and classification of prostate ultrasound scans.
In the former application, a stochastic model of the radio-frequency signal measured by an ultrasonic transducer is derived. This model is then employed to develop, in a statistical framework, a regularized deconvolution procedure for enhancing signal resolution.
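As a sketch of the idea (with a made-up pulse and an arbitrary regularization weight, not the stochastic model derived in the thesis), a Wiener/Tikhonov-style frequency-domain deconvolution of a synthetic RF line can be written in a few lines of Python.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic RF line: sparse tissue reflectivity convolved with a transducer pulse plus noise.
n = 512
t = np.arange(n)
pulse = np.exp(-((t - 32) / 6.0) ** 2) * np.cos(2 * np.pi * 0.2 * (t - 32))  # made-up pulse
reflectivity = np.zeros(n)
reflectivity[[100, 130, 300, 310]] = [1.0, -0.6, 0.8, 0.5]                   # sparse scatterers
rf = np.fft.irfft(np.fft.rfft(reflectivity) * np.fft.rfft(pulse), n)
rf += 0.01 * rng.standard_normal(n)                                          # measurement noise

# Regularized (Wiener/Tikhonov-style) deconvolution: the weight lam limits noise amplification
# where the pulse spectrum carries little energy.
H = np.fft.rfft(pulse)
lam = 1e-2
estimate = np.fft.irfft(np.conj(H) * np.fft.rfft(rf) / (np.abs(H) ** 2 + lam), n)
print("strongest scatterer recovered near sample", int(np.argmax(np.abs(estimate))),
      "(true position: 100)")
```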
In the latter application, different statistical models are used to characterize images of prostate tissue and to extract different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine-learning techniques are used for the automatic classification of the different regions of interest.
In the context of neural activity signals, a bio-inspired dynamical network was developed to support studies of motor-related processes in the brain of primate monkeys. The presented model aims to mimic the abstract functionality of a cell population in the 7a parietal region of primate monkeys during the execution of learned behavioural tasks.
99. EXAM-S: an Analysis Tool for Multi-Domain Policy Sets
Ferrini, Rodolfo <1980>, 20 April 2009
As distributed collaborative applications and architectures adopt policy-based management for tasks such as access control, network security and data privacy, the management and consolidation of a large number of policies is becoming a crucial component of such policy-based systems. In large-scale distributed collaborative applications such as web services, there is a need to analyze policy interactions and to integrate policies. In this thesis, we propose and implement EXAM-S, a comprehensive environment for policy analysis and management, which can be used to perform a variety of functions such as policy property analysis, policy similarity analysis and policy integration. As part of this environment, we have proposed and implemented new techniques for the analysis of policies that build on a thorough study of state-of-the-art techniques. Moreover, we propose an approach for solving the heterogeneity problems that usually arise when analyzing policies belonging to different domains. Our work focuses on the analysis of access-control policies written in the XACML (eXtensible Access Control Markup Language) dialect. We consider XACML policies because XACML is a rich language which can represent many policies of interest to real-world applications and is gaining widespread adoption in industry.
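As an illustration of one elementary building block of such analyses (hypothetical rules reduced to attribute-value sets, not the EXAM-S implementation or real XACML parsing), the following Python sketch checks whether two rules can ever apply to the same request, which is a precondition for a permit/deny conflict.

```python
# Two rules, each reduced to a set of admissible values per attribute, can conflict
# only if their constraints intersect on every attribute they share.
def may_overlap(rule_a, rule_b):
    for attr in set(rule_a) & set(rule_b):
        if not (rule_a[attr] & rule_b[attr]):
            return False                 # disjoint on some attribute: no common request
    return True

permit = {"role": {"doctor", "nurse"}, "resource": {"record"}, "action": {"read"}}
deny   = {"role": {"nurse"},           "resource": {"record"}, "action": {"read", "write"}}

if may_overlap(permit, deny):
    print("a permit rule and a deny rule can match the same request: potential conflict")
```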
100. The port-Hamiltonian framework as a comprehensive approach to model and simulate complex systems
Bassi, Luca <1980>, 16 April 2009
This thesis describes modelling tools and methods suited for complex systems, i.e. systems that are typically represented by a plurality of models. The basic idea is that all the models representing the system should be linked by well-defined model operations, in order to build a structured repository of information: a hierarchy of models. The port-Hamiltonian framework is a good candidate for this kind of problem, as it natively supports the most important model operations. The thesis in particular addresses the problem of integrating distributed-parameter systems into a model hierarchy, and shows two possible mechanisms to do so: a finite-element discretization in port-Hamiltonian form, and a structure-preserving model-order reduction for discretized models obtainable from commercial finite-element packages.
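To make the framework concrete, a minimal Python sketch of the standard input-state-output port-Hamiltonian form (an illustrative lumped example, not code from the thesis) simulates a mass-spring-damper and checks the passivity balance that the formulation guarantees.

```python
import numpy as np

# Port-Hamiltonian form:  dx/dt = (J - R) dH/dx + g u,   y = g^T dH/dx.
# Here x = (position q, momentum p) of a mass-spring-damper with
# H(x) = p^2/(2m) + k q^2/2; passivity means the stored energy can never grow
# by more than the energy supplied through the port (u, y).
m, k, d = 1.0, 4.0, 0.3
J = np.array([[0.0, 1.0], [-1.0, 0.0]])     # interconnection structure (skew-symmetric)
R = np.array([[0.0, 0.0], [0.0, d]])        # dissipation (positive semi-definite)
g = np.array([0.0, 1.0])                    # an external force acts on the momentum

grad_H = lambda x: np.array([k * x[0], x[1] / m])
H = lambda x: x[1] ** 2 / (2 * m) + k * x[0] ** 2 / 2

def step(x, u, dt=1e-3):
    e = grad_H(x)
    return x + dt * ((J - R) @ e + g * u), g @ e   # next state and collocated output y

x = np.array([1.0, 0.0])                    # start with the spring stretched
H0, supplied = H(x), 0.0
for i in range(5000):
    u = 0.5 if i < 1000 else 0.0            # push for one second, then let it ring down
    x, y = step(x, u)
    supplied += u * y * 1e-3
print(f"stored energy change {H(x) - H0:+.3f} J  <=  supplied energy {supplied:+.3f} J")
```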