Models and methods for power monolithic microwave integrated circuits. Resca, Davide <1979>, 17 April 2008.
Computer aided design of Monolithic Microwave Integrated Circuits (MMICs) depends
critically on active device models that are accurate, computationally efficient, and easily extracted
from measurements or device simulators.
Empirical models of active electron devices, being based on actual device measurements, do not provide a detailed description of the device physics; they are, however, numerically efficient and quite accurate. These characteristics make them well suited to MMIC design within commercially available CAD tools.
In the empirical model formulation it is very important to separate linear memory effects
(parasitic effects) from the nonlinear effects (intrinsic effects). Thus an empirical active device
model is generally described by an extrinsic linear part which accounts for the parasitic passive
structures connecting the nonlinear intrinsic electron device to the external world.
An important task circuit designers face is evaluating the ultimate potential of a device for specific applications. Once the technology has been selected, the designer must choose the best device for the particular application and for each of the blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of devices of different sizes, the model must have good scalability properties.
Another important aspect of empirical modelling of electron devices is the mathematical (or
equivalent circuit) description of the nonlinearities inherently associated with the intrinsic device.
Once the model has been defined, suitable measurements are performed to characterize the device and identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the characterization phase) and their reconstruction (in the identification and simulation phases) are two of the most important aspects of empirical modelling.
This thesis presents an original contribution to nonlinear electron device empirical modelling
treating the issues of model scalability and reconstruction of the device nonlinear characteristics.
The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should, if possible, maintain the link between technological process parameters and the corresponding device electrical response.
Since lumped parasitic networks combined with simple linear scaling rules cannot provide accurate scalable models, the literature offers either complicated, technology-dependent scaling rules or computationally inefficient distributed models.
This thesis shows how the above-mentioned problems can be avoided through the use of
commercially available electromagnetic (EM) simulators. They enable the actual device geometry
and material stratification, as well as losses in the dielectrics and electrodes, to be taken into
account for any given device structure and size, providing an accurate description of the parasitic
effects which occur in the device passive structure. It is shown how the electron device behaviour
can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed
four-port passive parasitic network, which is identified by means of the EM simulation of the device
layout, allowing for better frequency extrapolation and scalability properties than conventional
empirical models.
Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use within empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by applying, in the sampled voltage domain, typical methods of time-domain sampling theory.
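This analogy can be made concrete with a minimal sketch, assuming a uniformly spaced voltage grid and a hypothetical smooth, bell-shaped characteristic; the function, grid spacing and names below are illustrative, not the algorithm actually developed in the thesis:

```python
import numpy as np

def sinc_reconstruct(v_grid, i_samples, v_query):
    """Whittaker-Shannon reconstruction applied in the voltage domain:
    the uniformly spaced voltage grid plays the role of time samples."""
    dv = v_grid[1] - v_grid[0]
    # Each query point is a weighted sum of sinc kernels centred on grid points.
    kernel = np.sinc((v_query[:, None] - v_grid[None, :]) / dv)
    return kernel @ i_samples

# Hypothetical bell-shaped characteristic (e.g. a transconductance curve).
g = lambda v: 0.05 * np.exp(-v**2)
v_grid = np.linspace(-3.0, 3.0, 61)    # finite grid of "measurements"
v_query = np.linspace(-2.0, 2.0, 301)  # off-grid evaluation points

g_rec = sinc_reconstruct(v_grid, g(v_grid), v_query)
err = float(np.max(np.abs(g_rec - g(v_query))))
print(f"max reconstruction error: {err:.2e}")
```

Because the sampled characteristic is smooth, the voltage-domain "aliasing" error stays negligible, exactly as sampling theory predicts for a band-limited time signal.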
Tecniche per il controllo dinamico del consumo di potenza per piattaforme system-on-chip (Techniques for dynamic power management on system-on-chip platforms). Ruggiero, Martino <1979>, 17 April 2008.
Providing support for multimedia applications on low-power mobile devices
remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore limited resources: low CPU processing power, reduced display capabilities, and restricted memory and battery lifetime compared to desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia
applications and device-level power optimization.
Energy efficiency in this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds.
It is well understood both in research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers. In fact, at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS.
Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services.
The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs)
Consumer applications are characterized by tight time-to-market constraints
and extreme cost sensitivity. The software that runs on modern embedded
systems must be high performance, real time and, even more importantly, low power. Although much progress has been made on these problems, much
remains to be done.
Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers.
An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on the processors of a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures.
It is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve it in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one.
The goal of this work is to address both the abstraction and the optimality gap, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.
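The combinatorial core of the mapping problem can be conveyed by a deliberately tiny complete-search sketch (the four-task graph, durations and two-processor platform below are hypothetical, and the deterministic algorithms of this thesis rely on decomposition and no-good learning rather than brute-force enumeration):

```python
from itertools import product

# Toy precedence-constrained task graph: task -> (duration, predecessors).
tasks = {
    "src":  (2, []),
    "fir":  (3, ["src"]),
    "iir":  (4, ["src"]),
    "sink": (2, ["fir", "iir"]),
}
order = ["src", "fir", "iir", "sink"]   # a topological order
N_PROC = 2

def makespan(alloc):
    """List-schedule the tasks in topological order on their assigned
    processors and return the completion time of the last task."""
    proc_free = [0.0] * N_PROC
    finish = {}
    for t in order:
        dur, preds = tasks[t]
        p = alloc[t]
        start = max([proc_free[p]] + [finish[q] for q in preds])
        finish[t] = start + dur
        proc_free[p] = finish[t]
    return max(finish.values())

# Deterministic complete search over the (tiny) allocation space.
best = min(
    (dict(zip(order, a)) for a in product(range(N_PROC), repeat=len(order))),
    key=makespan,
)
print("best allocation:", best, " makespan:", makespan(best))
```

The allocation space grows as N_PROC raised to the number of tasks, which is why complete search scales only when combined with the pruning techniques developed in the dissertation.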
Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor
Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, display power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, and gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality.
LCD power consumption depends on the backlight and on the pixel matrix driving circuits, and is typically proportional to the panel area. As a result, its contribution to total system power is likely to remain considerable in future mobile appliances. To address this issue, companies are proposing low power technologies suitable for mobile applications, supporting low power states and image control techniques.
On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with the crystal polarization; others decrease the backlight level while compensating for the resulting luminance reduction, and thus for the perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed.
This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.
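The core idea — dim the backlight and scale pixel values up so perceived luminance is preserved, accepting only negligible clipping — can be sketched as follows (the frame statistics, distortion threshold and step size are illustrative assumptions; a real implementation would run the per-pixel scaling on the image processing unit, not the CPU):

```python
import numpy as np

def compensate(frame, backlight):
    """Scale pixels by 1/backlight so perceived luminance
    (pixel value x backlight level) is preserved; bright pixels clip."""
    return np.clip(frame / backlight, 0.0, 1.0)

def distortion(frame, backlight):
    """Average perceived-luminance loss caused by clipping."""
    perceived = compensate(frame, backlight) * backlight
    return float(np.mean(frame - perceived))

rng = np.random.default_rng(0)
frame = rng.random((120, 160)) ** 2      # hypothetical dark-skewed frame

backlight = 1.0
for b in np.arange(1.0, 0.2, -0.05):     # try progressively dimmer levels
    if distortion(frame, b) > 1e-3:      # stop when clipping becomes visible
        break
    backlight = float(b)
print(f"backlight scaled to {backlight:.2f} of full brightness")
```

For dark content the backlight can be dimmed aggressively, which is where the bulk of the panel power saving comes from.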
Thesis Overview
The remainder of the thesis is organized as follows.
The first part is focused on enhancing energy efficiency and programmability
of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter
3 presents a QoS-driven methodology for optimal allocation and frequency
selection for MPSoCs. The methodology is based on functional simulation
and full system power estimation. Chapter 4 targets allocation and scheduling
of pipelined stream-oriented applications on top of distributed memory
architectures with messaging support. We tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor.
The second part is focused on energy efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges
still open. Chapter 9 reviews several energy efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel.
Finally, conclusions are drawn, reporting the main research contributions
that have been discussed throughout this dissertation.
Exploration of communication strategies for computation intensive Systems-on-Chip. Deledda, Antonio <1980>, 17 April 2008.
No description available.
Control of the position of particles in open microfluidic systems. Gazzola, Daniele <1976>, 17 April 2008.
No description available.
Design of wireless sensor networks for fluid dynamic applications. Codeluppi, Rossano <1974>, 17 April 2008.
In fluid dynamics research, pressure measurements are of great importance in defining the flow field acting on aerodynamic surfaces. The experimental approach is in fact fundamental to avoid the complexity of the mathematical models used for predicting fluid phenomena.
It is important to note that, when using in-situ sensors to monitor pressure over large domains with highly unsteady flows, classical techniques run into several problems related to transducer cost, intrusiveness, time response and operating range.
An interesting approach to satisfying the sensor requirements reported above is to implement a sensor network capable of acquiring pressure data on an aerodynamic surface, using a wireless communication system to collect the pressure data with the lowest possible level of environmental invasion.
In this thesis a wireless sensor network for fluid field pressure measurement has been designed, built and tested.
To develop the system, a capacitive pressure sensor, based on a polymeric membrane, and microcontroller-based read-out circuitry have been designed, built and tested. The wireless communication has been implemented on the Zensys Z-Wave platform, and network and data management have been implemented. Finally, the full embedded system with antenna has been created.
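As a back-of-the-envelope illustration of how a capacitive read-out maps capacitance back to pressure, a parallel-plate sketch follows (the geometry and the linearized membrane compliance are hypothetical placeholders, not the actual sensor parameters):

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m
AREA = 1.0e-6             # membrane area, m^2 (hypothetical)
GAP0 = 10e-6              # rest gap, m (hypothetical)
COMPLIANCE = 5e-12        # membrane deflection per pascal, m/Pa (hypothetical)

def capacitance(pressure_pa):
    """Parallel-plate sketch: applied pressure deflects the polymeric
    membrane, shrinking the gap and raising the capacitance."""
    gap = GAP0 - COMPLIANCE * pressure_pa
    return EPS0 * AREA / gap

def pressure(cap_f):
    """Inverse model, as the read-out firmware would apply it."""
    gap = EPS0 * AREA / cap_f
    return (GAP0 - gap) / COMPLIANCE

c = capacitance(500.0)    # 500 Pa applied
print(f"C = {c * 1e12:.4f} pF, recovered p = {pressure(c):.1f} Pa")
```

In practice the capacitance-to-pressure map is obtained by calibration rather than from an idealized plate model, but the round trip above captures the principle.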
As a proof of concept, the monitoring of pressure at the top of the mainsail of a sailboat has been chosen as a working example.
Interconnection systems for highly integrated computation devices. Angiolini, Federico <1978>, 17 April 2008.
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called SYSTEM-ON-CHIP (SOC) or MULTI-PROCESSOR SYSTEM-ON-CHIP (MPSOC).
MPSoC design brings to the foreground a large number of challenges,
one of the most prominent of which is the design of the chip interconnection.
With a number of on-chip blocks presently ranging in the tens, and
quickly approaching the hundreds, the novel issue of how to best provide
on-chip communication resources is clearly felt.
NETWORKS-ON-CHIPS (NOCS) are the most comprehensive and scalable
answer to this design concern. By bringing large-scale networking
concepts to the on-chip domain, they guarantee a structured answer to
present and future communication requirements. The point-to-point connection
and packet switching paradigms they involve are also of great help
in minimizing wiring overhead and physical routing issues.
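As a concrete taste of the packet-switching paradigm, deterministic dimension-order (XY) routing on a 2D mesh can be sketched in a few lines (a generic textbook scheme, shown for illustration only and not the specific routing of any NoC discussed here):

```python
def xy_route(src, dst):
    """Dimension-order (XY) routing on a 2D mesh NoC: a packet travels
    along the X dimension first, then along Y. Deterministic, and
    deadlock-free on a mesh because it never turns from Y back to X."""
    (x, y), (dx, dy) = src, dst
    hops = []
    while x != dx:
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

path = xy_route((0, 0), (3, 2))
print(path)
```

Each hop corresponds to one switch traversal; header flits carry the destination so every switch can take this decision locally.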
However, as with any technology of recent inception, NoC design is
still an evolving discipline. Several main areas of interest require deep
investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff
among performance, features and the tight area and power constraints
of the on-chip domain.
• Simulation and verification infrastructure must be put in place to
explore, validate and optimize the NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability
in terms of topology and architectural parameters. Design
tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to study the physical implementation of NoCs, to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a
NoC architectural implementation called ×pipes; a NoC simulation environment
within a cycle-accurate MPSoC emulator called MPARM; a NoC
design flow consisting of a front-end tool for optimal NoC instantiation,
called SunFloor, and a set of back-end facilities for the study of NoC physical
implementations.
This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
Cooperative communication and distributed detection in wireless sensor networks. Lucchi, Matteo <1979>, 06 May 2008.
Recent progress in microelectronics and wireless communications has enabled the development of low cost, low power, multifunctional sensors, which has allowed the birth of a new type of network named wireless sensor networks (WSNs). The main features of such networks are: the nodes can be positioned randomly over a given field with a high density; each node operates both as a sensor (for collection of environmental data) and as a transceiver (for transmission of information towards the data retrieval point); the nodes have limited energy resources.
The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical conditions of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability.
The thesis investigates the use of WSNs in two possible scenarios and, for each of them, suggests a solution to the related problems while taking this trade-off into account.
The first scenario considers a network with a high number of nodes deployed in a given geographical area without detailed planning, which have to transmit data toward a coordinator node, named sink, that we assume to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a far receiver. Each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). We assume that the communication channels between the local nodes and the receiver are subject to fading and noise. The receiver onboard the UAV must be able to fuse the weak and noisy signals in a coherent way to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of simultaneous transmission of the common message among the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the necessity to have simple nodes (to this aim we move the computational complexity to the receiver onboard the UAV), and the importance of guaranteeing high levels of energy efficiency, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver, and the error probability with a given modulation scheme. Since we deal with a WSN, both of these metrics are evaluated taking into consideration the energy efficiency of the network.
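The diversity gain obtained by coherently fusing many weak copies of the same message can be illustrated with an idealized Monte Carlo sketch (symbol-level flat Rayleigh fading, perfect channel knowledge at the fusion center, and no explicit spreading code; these are simplifying assumptions, not the exact system model of the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
N_NODES, N_SYM, SNR = 8, 20000, 0.5      # 8 nodes, per-branch SNR of -3 dB

bits = rng.integers(0, 2, N_SYM)
sym = 2.0 * bits - 1.0                   # common shared BPSK message

def cplx(shape):
    """Unit-power circularly symmetric complex Gaussian samples."""
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

h = cplx((N_NODES, N_SYM))               # independent Rayleigh-faded paths
r = np.sqrt(SNR) * h * sym + cplx((N_NODES, N_SYM))  # received Rake fingers

# Rake-style maximal-ratio combining across all fingers vs. one branch alone.
ber_comb = np.mean((np.real(np.sum(np.conj(h) * r, axis=0)) > 0) != bits)
ber_single = np.mean((np.real(np.conj(h[0]) * r[0]) > 0) != bits)
print(f"BER single branch: {ber_single:.3f}   BER combined: {ber_comb:.4f}")
```

Even though every individual branch is deeply faded, combining eight of them coherently drives the error rate down by orders of magnitude, which is the essence of the cooperative reachback scheme.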
The second scenario considers the use of a chain network for the detection of fires by using nodes that have the double function of sensors and routers. The first function concerns the monitoring of a temperature parameter, which allows a local binary decision on target (fire) absent/present to be taken. For the second, each node receives the decision made by the previous node of the chain, compares it with the one deriving from its own observation of the phenomenon, and transmits the final result to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observations and the decisions of the previous nodes into a final decision that is transmitted to the next node.
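A minimal Monte Carlo sketch of such serial fusion follows; the Gaussian observation model and the fixed "trust weight" given to the forwarded bit are illustrative assumptions, not the fusion rules actually derived in the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
MU, N_TRIALS = 1.0, 5000   # observations ~ N(MU,1) if fire present, N(0,1) if absent

def chain_decision(present, n_nodes):
    """Serial fusion along the chain: each node adds its local
    log-likelihood ratio to a fixed-weight version of the bit forwarded
    by the previous node, then forwards its own one-bit decision."""
    prev_llr = 0.0                       # the first node has no incoming decision
    bit = False
    for _ in range(n_nodes):
        x = rng.normal(MU if present else 0.0, 1.0)
        llr = MU * x - MU**2 / 2         # exact LLR for N(MU,1) vs N(0,1)
        bit = llr + prev_llr > 0
        prev_llr = 1.5 if bit else -1.5  # hypothetical trust weight of one bit
    return bit

def error_rate(n_nodes):
    wrong = 0
    for _ in range(N_TRIALS):
        present = bool(rng.integers(0, 2))
        wrong += chain_decision(present, n_nodes) != present
    return wrong / N_TRIALS

e1, e10 = error_rate(1), error_rate(10)
print(f"error rate, single node: {e1:.3f}   end of 10-node chain: {e10:.3f}")
```

Only one bit crosses each sensor-to-sensor link, satisfying the throughput constraint, while the end-of-chain error rate is clearly lower than that of an isolated sensor.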
WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application has been realized and tested in a six-month on-field experimentation.
Wireless systems for the fourth generation. Salbaroli, Enrica <1979>, 06 May 2008.
Today, third generation networks are consolidated realities, and user expectations of new applications and services are becoming higher and higher. Therefore, new systems and technologies are necessary to meet market needs and user requirements. This has driven the development of fourth generation networks.
"Wireless network for the fourth generation" is the expression used to describe the next step in wireless communications. There is no formal definition of what these fourth generation networks are; however, we can say that the next generation networks will be based on the coexistence of heterogeneous networks, on the integration with the existing radio access networks (e.g. GPRS, UMTS, WiFi, ...) and, in particular, on new emerging architectures that are obtaining more and more relevance, such as Wireless Ad Hoc and Sensor Networks (WASNs). Thanks to their characteristics, fourth generation wireless systems will be able to offer custom-made solutions and applications personalized according to the user requirements; they will offer all types of services at an affordable cost, and solutions characterized by flexibility, scalability and reconfigurability.
This PhD work has been focused on WASNs: autoconfiguring networks which are not based on a fixed infrastructure but are infrastructure-less, where devices have to generate the network automatically in the initial phase and maintain it through reconfiguration procedures (whenever node mobility, energy drain, etc. cause disconnections). The main part of the PhD activity has been focused on an analytical study of connectivity models for wireless ad hoc and sensor networks; nevertheless, a small part of the work was experimental. Both the theoretical and experimental activities have had a common aim, related to the performance
evaluation of WASNs. Concerning the theoretical analysis, the objective of the connectivity studies has been the evaluation of models for interference estimation. This is due to the fact that interference is the most important cause of performance degradation in WASNs. As a consequence, it is very important to find an accurate model that allows its investigation, and the aim has been to obtain a model as realistic and general as possible, in particular for the evaluation of the interference coming from bounded interfering areas (i.e. a WiFi hot spot, a wireless-covered research laboratory, ...). On the other hand, the experimental activity has led to throughput and Packet Error Rate measurements on a real IEEE 802.15.4 wireless sensor network.
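For intuition, the interference from a bounded interfering area can also be estimated numerically with a simple Monte Carlo sketch (Poisson-distributed interferers, unit transmit power and pure power-law path loss; a generic textbook setup used here for illustration, not the analytical model developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_interference(density, radius, d0, alpha, n_runs=20000):
    """Monte Carlo estimate of the mean aggregate interference seen by a
    receiver at distance d0 from the centre of a circular bounded
    interfering area (e.g. a WiFi hot spot). Interferer positions form a
    Poisson process of the given density over the disk."""
    area = np.pi * radius ** 2
    totals = np.empty(n_runs)
    for i in range(n_runs):
        n = rng.poisson(density * area)
        r = radius * np.sqrt(rng.random(n))      # uniform points in the disk
        th = 2.0 * np.pi * rng.random(n)
        dist = np.hypot(r * np.cos(th) - d0, r * np.sin(th))
        totals[i] = np.sum(dist ** -alpha)       # power-law path loss
    return float(totals.mean())

mean_i = mean_interference(density=0.5, radius=5.0, d0=10.0, alpha=3.0)
print(f"mean aggregate interference: {mean_i:.4f} (normalized units)")
```

Because the interfering area is bounded and the receiver lies outside it, the aggregate interference stays finite; an analytical model, such as the one pursued in this work, replaces this simulation with closed-form expressions.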
Adaptive multiscale biological signal processing. Testoni, Nicola <1980>, 10 April 2008.
Biological processes are very complex mechanisms, most of them being accompanied by or manifested as signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the propelling factors in the advancements in medicine and the biosciences recorded in the recent past.
It is a fact that the instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how it is possible to attenuate, and ideally to remove, these effects, with particular attention to ultrasound imaging and extracellular recordings.
Original algorithms developed during the PhD research activity will be examined and compared to those in the literature tackling the same problems; conclusions will be drawn on the basis of comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective fields of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior is not dependent only on designer choices, but driven by input signal characteristics too.
Performance comparisons based on the state of the art in image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, highlighted the very good results of the proposed ultrasound image deconvolution and restoration algorithms: axial resolution up to 5 times better than that of algorithms in the literature is possible. Concerning extracellular recordings, the results of the proposed denoising technique, compared to other signal processing algorithms, showed an improvement over the state of the art of almost 4 dB.
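The frequency-domain flavor of such restoration can be conveyed by a classical Wiener deconvolution sketch on a synthetic 1-D scan line (the Gaussian pulse, sparse reflectivity and fixed regularization constant are illustrative; the adaptive algorithms of this thesis go well beyond this non-adaptive baseline):

```python
import numpy as np

def wiener_deconvolve(y, h, noise_power):
    """Frequency-domain Wiener deconvolution: X = Y H* / (|H|^2 + k)."""
    H = np.fft.fft(h)
    X = np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(4)
n = 256
x = np.zeros(n)
x[[60, 70, 150]] = [1.0, 0.5, -0.6]          # sparse synthetic reflectivity
t = np.arange(-16, 17)
psf = np.exp(-(t / 5.0) ** 2)                # synthetic smoothing pulse
y = np.convolve(x, psf, mode="same") + 0.01 * rng.normal(size=n)

h = np.roll(np.pad(psf, (0, n - psf.size)), -16)  # centre pulse at sample 0
x_hat = wiener_deconvolve(y, h, noise_power=1e-2)
print("strongest restored reflector at sample", int(np.argmax(x_hat)))
```

The two reflectors only ten samples apart, smeared together in the blurred trace, are separated again after deconvolution; this sharpening is what the axial-resolution gain quantifies.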
TCAD approaches to multidimensional simulation of advanced semiconductor devices. Baravelli, Emanuele <1980>, 07 April 2008.
Technology scaling increasingly emphasizes the complexity and non-ideality of the electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture.
TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device
complexity is reflected in an augmented dimensionality of the problems
to be solved. The trade-off between accuracy and computational cost of
the simulation is especially influenced by domain discretization: mesh
generation is therefore one of the most critical steps and automatic approaches are sought. Moreover, the problem size is further increased by
process variations, calling for a statistical representation of the single
device through an ensemble of microscopically different instances. The
aim of this thesis is to present multi-disciplinary approaches to handle this increasing problem dimensionality from a numerical simulation perspective. The topic of mesh generation is tackled by presenting a new
Wavelet-based Adaptive Method (WAM) for the automatic refinement
of 2D and 3D domain discretizations. Multiresolution techniques and
efficient signal processing algorithms are exploited to increase grid resolution in the domain regions where relevant physical phenomena take
place. Moreover, the grid is dynamically adapted to follow solution
changes produced by bias variations and quality criteria are imposed
on the produced meshes. The further dimensionality increase due to
variability in extremely scaled devices is considered with reference to
two increasingly critical phenomena, namely line-edge roughness (LER)
and random dopant fluctuations (RD). The impact of such phenomena
on FinFET devices, which represent a promising alternative to planar
CMOS technology, is estimated through 2D and 3D TCAD simulations
and statistical tools, taking into account matching performance of single
devices as well as basic circuit blocks such as SRAMs. Several process
options are compared, including resist- and spacer-defined fin patterning as well as different doping profile definitions. Combining statistical
simulations with experimental data, potentialities and shortcomings of
the FinFET architecture are analyzed and useful design guidelines are
provided, which boost feasibility of this technology for mainstream applications in sub-45 nm generation integrated circuits.
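The driving idea behind wavelet-based refinement — use detail coefficients of the sampled solution to decide where the grid needs refining — can be sketched in 1-D (Haar details, a hypothetical junction-like potential profile and an arbitrary threshold; the actual WAM operates on 2D and 3D discretizations with mesh-quality control):

```python
import numpy as np

def refine_flags(u, threshold):
    """Level-1 Haar detail coefficients flag cells where the sampled
    solution varies sharply; those cells are marked for refinement."""
    details = np.abs(u[1::2] - u[::2]) / np.sqrt(2.0)
    return details > threshold

# Hypothetical 1-D potential profile with an abrupt junction at x = 0.5.
x = np.linspace(0.0, 1.0, 64)
u = np.tanh((x - 0.5) / 0.02)

flags = refine_flags(u, threshold=0.05)
print(f"{int(flags.sum())} of {flags.size} cells flagged for refinement")
```

Only the cells straddling the junction are flagged, so resolution is added exactly where the physical phenomenon demands it, keeping the overall node count low.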