211 |
Middleware for the management of communication interfaces and context sources in heterogeneous wireless environments / Middleware per la gestione di interfacce di comunicazione e di sorgenti di contesto in ambienti wireless eterogenei. Giannelli, Carlo <1979>, 07 April 2008
The full exploitation of the multi-hop multi-path connectivity opportunities offered
by heterogeneous wireless interfaces could enable innovative Always Best Served
(ABS) deployment scenarios, where mobile clients dynamically self-organize to
offer/exploit Internet connectivity in the best possible way. Only novel middleware
solutions based on heterogeneous context information can seamlessly enable this
scenario. Such middleware should i) provide translucent access to low-level
components, to achieve both fully aware and simplified pre-configured
interactions; ii) permit full exploitation of communication interface capabilities, i.e.,
not only obtaining but also providing connectivity in a peer-to-peer fashion, thus
relieving end users and application developers from the burden of directly
managing wireless interface heterogeneity; and iii) treat user mobility as
crucial context information, evaluating at provision time the suitability of available
Internet points of access differently depending on whether the mobile client is still or in motion.
The novelty of this research work resides in three primary points. First of all, it
proposes a novel model and taxonomy providing a common vocabulary to easily
describe and position solutions in the area of context-aware autonomic
management of preferred network opportunities.
Secondly, it presents PoSIM, a context-aware middleware for the synergic
exploitation and control of heterogeneous positioning systems that facilitates the
development and portability of location-based services. PoSIM is translucent, i.e.,
it can provide application developers with differentiated visibility of data
characteristics and control possibilities of available positioning solutions, thus
dynamically adapting to application-specific deployment requirements and
enabling cross-layer management decisions.
Finally, it provides the MMHC solution for the self-organization of multi-hop
multi-path heterogeneous connectivity. MMHC considers a limited set of practical
indicators on node mobility and wireless network characteristics for a coarse-grained
estimation of expected reliability/quality of multi-hop paths available at
runtime. In particular, MMHC manages the durability/throughput-aware formation
and selection of different multi-hop paths simultaneously. Furthermore, MMHC
provides a novel solution based on adaptive buffers, proactively managed based on
handover prediction, to support continuous services, especially by pre-fetching
multimedia contents to avoid streaming interruptions.
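The adaptive pre-fetching idea can be sketched as follows (a minimal illustration with invented names and numbers, not MMHC's actual interface): the buffer target grows when a handover is predicted, so playback can survive the connectivity gap.

```python
import collections

class PrefetchBuffer:
    """Toy sketch of a handover-aware prefetch buffer: the target size grows
    when a handover is predicted (all names/numbers are illustrative)."""

    def __init__(self, chunk_duration_s=1.0, base_target_s=4.0):
        self.chunk_duration_s = chunk_duration_s
        self.base_target_s = base_target_s
        self.chunks = collections.deque()   # media chunks already buffered

    def target_chunks(self, handover_predicted, expected_gap_s=0.0):
        # On a predicted handover, pre-fetch enough extra media to cover the
        # expected connectivity gap; otherwise keep the small base buffer.
        target_s = self.base_target_s + (expected_gap_s if handover_predicted else 0.0)
        return int(target_s / self.chunk_duration_s)

    def chunks_to_fetch(self, handover_predicted, expected_gap_s=0.0):
        return max(0, self.target_chunks(handover_predicted, expected_gap_s) - len(self.chunks))

buf = PrefetchBuffer()
print(buf.chunks_to_fetch(False))       # empty buffer, steady state: 4
print(buf.chunks_to_fetch(True, 6.0))   # handover predicted, 6 s gap: 10
```

The point of the sketch is only the policy shape: buffer sizing is driven proactively by the handover predictor rather than reactively by underruns.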
|
212 |
Engineering self-organising systems with the multiagent paradigm / Ingegneria di sistemi auto-organizzanti con il paradigma multiagente. Gardelli, Luca <1980>, 07 April 2008
Self-organisation is increasingly being regarded as an effective approach to tackle
modern systems complexity. The self-organisation approach allows the development of systems exhibiting complex dynamics and adapting to environmental
perturbations without requiring a complete knowledge of the future surrounding
conditions.
However, the development of self-organising systems (SOS) is driven by different principles from those of traditional software engineering. For instance,
engineers typically design systems by combining smaller elements, where the composition rules depend on the reference paradigm but typically produce predictable
results. Conversely, SOS display non-linear dynamics, which can hardly be captured by deterministic models, and, although robust with respect to external
perturbations, are quite sensitive to changes in their inner working parameters.
In this thesis, we describe methodological aspects concerning the early-design
stage of SOS built relying on the Multiagent paradigm: in particular, we refer
to the A&A metamodel, where MAS are composed of agents and artefacts, i.e.
environmental resources. Then, we describe an architectural pattern that has
been extracted from a recurrent solution in designing self-organising systems:
this pattern is based on a MAS environment formed by artefacts, modelling
non-proactive resources, and environmental agents acting on artefacts so as to
enable self-organising mechanisms. In this context, we propose a scientific approach for the early design stage of the engineering of self-organising systems:
the process is iterative and each cycle is articulated in four stages: modelling, simulation, formal verification, and tuning. During the modelling phase
we mainly rely on the existence of a self-organising strategy observed in Nature
and, hopefully, encoded as a design pattern. Simulations of an abstract system model are used to drive design choices until the required quality properties
are obtained, thus providing guarantees that the subsequent design steps would
lead to a correct implementation. However, system analysis exclusively based
on simulation results does not provide sound guarantees for the engineering of
complex systems: to this purpose, we envision the application of formal verification techniques, specifically model checking, in order to exactly characterise the
system behaviours. During the tuning stage parameters are tweaked in order to
meet the target global dynamics and feasibility constraints.
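The simulate-then-tune loop can be illustrated with a deliberately tiny example: a one-variable pheromone model whose evaporation parameter is tuned until the simulated steady state meets a target property (all names and numbers are illustrative, not taken from the thesis' case studies):

```python
def simulate_pheromone(deposit, evaporation, steps=200):
    """Tiny abstract model: one pheromone variable with constant deposit and
    proportional evaporation; the level tends to deposit / evaporation."""
    level = 0.0
    for _ in range(steps):
        level = (1.0 - evaporation) * level + deposit
    return level

def tune_evaporation(deposit, target_level, candidates):
    # Tuning stage: pick the first candidate parameter whose simulated
    # dynamics meet the target global property (steady state below target).
    for evap in candidates:
        if simulate_pheromone(deposit, evap) <= target_level:
            return evap
    return None

best = tune_evaporation(deposit=1.0, target_level=5.0,
                        candidates=[0.05, 0.1, 0.2, 0.25])
print(best)  # 0.2 (steady state 1/0.2 = 5.0)
```

In the methodology described above this loop would sit between simulation and formal verification; model checking would then be applied to the tuned model, since simulation alone gives no sound guarantees.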
In order to evaluate the methodology, we analysed several systems: in this
thesis, we only describe three of them, i.e. the most representative one for
each of the three years of the PhD course. We analyse each case study using the
presented method, and describe the exploited formal tools and techniques.
|
213 |
Management and routing algorithms for ad-hoc and sensor networks. Monti, Gabriele <1978>, 07 April 2008
Large-scale wireless ad-hoc networks of computers, sensors, PDAs, etc. (i.e. nodes) are
revolutionizing connectivity and leading to a paradigm shift from centralized systems to highly
distributed and dynamic environments. An example of ad-hoc networks are sensor networks, which
are usually composed of small units able to sense and transmit to a sink elementary data which are
successively processed by an external machine. Recent improvements in the memory and
computational power of sensors, together with the reduction of energy consumption, are rapidly
changing the potential of such systems, moving the attention towards data-centric sensor networks.
A plethora of routing and data management algorithms have been proposed for network path
discovery, ranging from broadcasting/flooding-based approaches to those using global positioning
systems (GPS).
We studied W-Grid, a novel decentralized infrastructure that organizes wireless devices in an ad-hoc
manner, where each node has one or more virtual coordinates through which both message routing
and data management occur without reliance on either flooding/broadcasting operations or GPS.
The resulting ad-hoc network does not suffer from the dead-end problem, which happens in
geographic-based routing when a node is unable to locate a neighbor closer to the destination than
itself. W-Grid allows multi-dimensional data management since the nodes' virtual coordinates can
act as a distributed database without needing either special implementation or reorganization. Any
kind of data (both single- and multi-dimensional) can be distributed, stored and managed. We will
show how a location service can be easily implemented, so that any search is reduced to a simple
query, as for any other data type.
W-Grid has then been extended by adopting a replication methodology. We called the resulting
algorithm WR-Grid. Just like W-Grid, WR-Grid acts as a distributed database without needing
either special implementation or reorganization, and any kind of data can be distributed, stored
and managed. We have evaluated the benefits of replication on data management, finding out from
experimental results that it can halve the average number of hops in the network. The direct
consequences of this fact are a significant improvement in energy consumption and a workload
balancing among sensors (number of messages routed by each node). Finally, thanks to the
replications, whose number can be arbitrarily chosen, the resulting sensor network can face sensor
disconnections/connections, due to failures of sensors, without data loss.
Another extension to W-Grid is W*Grid, which strongly improves network recovery
performance after link and/or device failures that may happen due to crashes or battery
exhaustion of devices, or to temporary obstacles. W*Grid guarantees, by construction, at least two
disjoint paths between each couple of nodes. This implies that recovery in W*Grid occurs without
broadcasting transmissions, guaranteeing robustness while drastically reducing energy
consumption. An extensive number of simulations shows the efficiency, robustness and traffic load
of the resulting networks under several scenarios of device density and number of coordinates.
Performance has been compared to existing algorithms in order to validate the results.
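The flooding-free routing idea behind virtual-coordinate infrastructures of this kind can be illustrated with a toy greedy router over tree-shaped binary labels (the labels, distance function and topology below are invented for illustration, not the thesis' actual coordinate scheme):

```python
def label_distance(a, b):
    """Distance between two binary-string virtual coordinates organised as a
    tree: hops up to the common prefix, then down to the destination."""
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return (len(a) - common) + (len(b) - common)

def greedy_route(labels, neighbors, src, dst):
    # Greedy forwarding on virtual coordinates: always move to the neighbor
    # closest to the destination. On tree-structured labels this cannot
    # dead-end, which is the property flooding-free routing needs.
    path = [src]
    while labels[path[-1]] != labels[dst]:
        current = path[-1]
        nxt = min(neighbors[current],
                  key=lambda n: label_distance(labels[n], labels[dst]))
        path.append(nxt)
    return path

# Tiny example: 5 nodes whose virtual coordinates form a binary tree.
labels = {"A": "0", "B": "00", "C": "01", "D": "010", "E": "011"}
neighbors = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D", "E"],
             "D": ["C"], "E": ["C"]}
print(greedy_route(labels, neighbors, "B", "E"))  # ['B', 'A', 'C', 'E']
```

No node needs physical (GPS) coordinates: the routing decision uses only the virtual labels of its direct neighbors, which is the key contrast with geographic routing drawn in the abstract.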
|
214 |
Meta-models, environment and layers: agent-oriented engineering of complex systems. Molesini, Ambra <1980>, 07 April 2008
Traditional software engineering approaches and metaphors fall short when applied to
areas of growing relevance such as electronic commerce, enterprise resource planning,
and mobile computing: such areas, in fact, generally call for open architectures that
may evolve dynamically over time so as to accommodate new components and meet new
requirements. This is probably one of the main reasons that the agent metaphor and the
agent-oriented paradigm are gaining momentum in these areas.
This thesis deals with the engineering of complex software systems in terms of the
agent paradigm. This paradigm is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing and managing at
runtime typically distributed software systems. However, today the engineer often works
with technologies that do not support the abstractions used in the design of the systems.
For this reason, research on methodologies has become a key point of scientific
activity. Currently most agent-oriented methodologies are supported by small teams of
academic researchers and, as a result, most of them are at an early stage, still in the
realm of mostly "academic" approaches to agent-oriented systems development.
Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects. The role played by meta-models
thus becomes fundamental for comparing and evaluating methodologies. In fact, a
meta-model specifies the concepts, rules and relationships used to define methodologies.
Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its
consistency or planning extensions or modifications. A good meta-model must address all
the different aspects of a methodology, i.e. the process to be followed, the work products
to be generated and those responsible for making all this happen. In turn, specifying
the work products that must be developed implies defining the basic modelling building
blocks from which they are built.
As a building block, the agent abstraction alone is not enough to fully model all the
aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that environment plays within agent systems: however, it is
clear at least that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of environment
as a first-class abstraction in the engineering of multi-agent systems is today generally
acknowledged in the multi-agent systems community, so the environment should be explicitly
accounted for in the engineering of multi-agent systems, working as a new design dimension
for agent-oriented methodologies. At least two main ingredients shape the environment:
environment abstractions (entities of the environment encapsulating some functions)
and topology abstractions (entities of the environment that represent its spatial
structure, either logical or physical). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms to support the management of system representation complexity. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over
multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA
methodology, where environment abstractions and layering principles are exploited for
engineering multi-agent systems.
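A minimal sketch of the agents-and-artefacts split may help fix the idea (plain Python classes with invented names; real A&A infrastructures such as CArtAgO are far richer):

```python
class Artefact:
    """A&A-style artefact sketch: a non-proactive environment resource
    exposing operations and observable properties that agents can perceive.
    (Illustrative only, not a real A&A/CArtAgO API.)"""
    def __init__(self):
        self.observable = {}

class CounterArtefact(Artefact):
    def __init__(self):
        super().__init__()
        self.observable["count"] = 0

    def inc(self):
        # An operation: artefacts never act on their own, they only react
        # when an agent triggers one of their operations.
        self.observable["count"] += 1

class EnvironmentAgent:
    """Pro-active entity acting on artefacts so as to enact an environment
    process, e.g. a feedback mechanism in a self-organising design."""
    def __init__(self, artefact):
        self.artefact = artefact

    def step(self):
        self.artefact.inc()

counter = CounterArtefact()
agent = EnvironmentAgent(counter)
for _ in range(3):
    agent.step()
print(counter.observable["count"])  # 3
```

The division of labour mirrors the pattern described above: artefacts model the non-proactive resources, while environmental agents supply the activity that drives the self-organising mechanism.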
|
215 |
Study of the mechanical behaviour of porcelain enamels for metallic substrates / Studio del comportamento meccanico di smalti porcellanati per substrati metallici. Rossetti, Luigi <1978>, 23 April 2008
Composite porcelain enamels are inorganic coatings for metallic components based on a
special ceramic-vitreous matrix in which specific additives are randomly dispersed. The
ceramic-vitreous matrix is made of a mixture of various raw materials and elements; in
particular it is based on borosilicate glass with additions of metal oxides(1) such as those of
titanium, zinc, tin, zirconium, aluminium, etc. These additions are often used to improve and
enhance some important performances such as corrosion(2) and wear resistance, mechanical
strength, fracture toughness and also aesthetic functions. The coating process, called enamelling, depends on
the nature of the surface, but also on the kind of porcelain enamel used. For metal sheet
coatings two industrial processes are currently used: one based on a wet porcelain enamel and
another based on a dry-silicone porcelain enamel. During the firing process, which is performed
at about 870 °C in the case of a steel substrate, the enamel raw material melts and interacts
with the metal substrate, enabling the formation of a continuously varying structure. The
interface domain between the substrate and the external layer is made of a complex material
system where the ceramic vitreous and the metal constituents are mixed. In particular four
main regions can be identified, (i) the pure metal region, (ii) the region where the metal
constituents are dominant compared with the ceramic vitreous components, (iii) the region
where the ceramic vitreous constituents are dominant compared with the metal ones, and the
fourth region (iv) composed of the pure ceramic-vitreous material. The presence of metallic
dendrites that connect the substrate and the external layer, passing through the interphase
region, has also to be noticed. Each region of the final composite structure plays a specific
role: the metal substrate has mainly the structural function, the interphase region and the
embedded dendrites guarantee the adhesion of the external vitreous layer to the substrate and
the external vitreous layer is characterized by a high tribological, corrosion and thermal
shock resistance. Such material, due to its internal composition, functionalization and
architecture, can be considered a functionally graded composite material. The knowledge of
the mechanical, tribological and chemical behavior of such composites is not well established
and the research is still in progress. In particular, mechanical performance data about the
composite coating are not yet established. In the present work the Residual Stresses, the
Young modulus and the First Crack Failure of the composite porcelain enamel coating are
studied. Due to the difference between the thermal properties of the composite porcelain
enamel and those of steel, enamelled steel sheets exhibit residual stresses: compressive residual
stress acts on the coating and tensile residual stress acts on the steel sheet. The residual stress estimation has
been performed by measuring the curvature of rectangular one-side coated specimens. The
Young modulus and the First Crack Failure (FCF) of the coating have been estimated by
four-point bending tests(3-7) monitored by means of the Acoustic Emission (AE) technique(5,6). In
particular the AE information has been used to identify, during the bending tests, the
displacement domain over which no coating failure occurs (Free Failure Zone, FFZ). In the
FFZ domain, the Young modulus has been estimated according to ASTM D6272-02. The
FCF has been calculated as the ratio between the displacement at the first crack of the coating
and the coating thickness on the cracked side. The mechanical performances of the tested
coated specimens have also been related to the respective microstructure and surface
characteristics, and discussed by means of double-entry charts.
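The curvature-based residual stress estimation mentioned above is classically performed with Stoney-type relations; as a sketch (assuming a thin coating on a much thicker substrate, whereas the thesis may use a refined multi-layer variant):

```latex
% Stoney-type estimate of the mean coating stress from the measured
% specimen curvature \kappa = 1/R, valid for t_c \ll t_s:
\sigma_c \;=\; \frac{E_s\, t_s^{2}}{6\,(1-\nu_s)\, t_c}\,\kappa
```

where E_s and ν_s are the Young modulus and Poisson ratio of the steel substrate, t_s and t_c the substrate and coating thicknesses, and κ the curvature measured on the one-side coated specimen.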
|
216 |
Context detection and abstraction in smart environments. Pettinari, Marina <1979>, 10 April 2008
Context-aware computing is currently considered the most promising approach to overcome information
overload and to speed up access to relevant information and services. Context-awareness may be derived
from many sources, including user profile and preferences, network information, sensor analysis; usually
context-awareness relies on the ability of computing devices to interact with the physical world, i.e. with
the natural and artificial objects hosted within the "environment". Ideally, context-aware applications
should not be intrusive and should be able to react according to the user's context, with minimum user effort.
Context is an application-dependent multidimensional space, and location has been an important part of it
since the very beginning. Location can be used to guide applications, providing the information or
functions that are most appropriate for a specific position. Hence location systems play a crucial role.
There are several technologies and systems for computing location to varying degrees of accuracy, each
tailored to a specific space model, i.e. indoors or outdoors, structured or unstructured spaces.
The research challenge faced by this thesis is related to pedestrian positioning in heterogeneous
environments. Particularly, the focus will be on pedestrian identification, localization, orientation and
activity recognition.
This research was mainly carried out within the "mobile and ambient systems" workgroup of EPOCH, an
FP6 Network of Excellence on the application of ICT to Cultural Heritage. Therefore applications in Cultural Heritage sites
were the main target of the context-aware services discussed.
Cultural Heritage sites are considered significant test-beds in Context-aware computing for many reasons.
For example building a smart environment in museums or in protected sites is a challenging task, because
localization and tracking are usually based on technologies that are difficult to hide or harmonize within
the environment. Therefore it is expected that the experience made with this research may be useful also
in domains other than Cultural Heritage.
This work presents three different approaches to pedestrian identification, positioning and tracking:
Pedestrian navigation by means of a wearable inertial sensing platform assisted by a
vision-based tracking system for initial settings and real-time calibration;
Pedestrian navigation by means of a wearable inertial sensing platform augmented with GPS
measurements;
Pedestrian identification and tracking, combining the vision-based tracking system with WiFi
localization.
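The inertial part of the first two approaches can be illustrated by a bare-bones step-and-heading dead-reckoning update (a generic textbook scheme with invented numbers, not the thesis' actual sensing pipeline):

```python
import math

def dead_reckon(start_xy, steps):
    """Step-and-heading pedestrian dead reckoning sketch: each detected step
    advances the position by its estimated length along the current heading.
    GPS or vision fixes would periodically correct the drifting estimate."""
    x, y = start_xy
    track = [(x, y)]
    for step_length_m, heading_rad in steps:
        x += step_length_m * math.cos(heading_rad)
        y += step_length_m * math.sin(heading_rad)
        track.append((x, y))
    return track

# Two 0.7 m steps heading east, then two heading north.
track = dead_reckon((0.0, 0.0), [(0.7, 0.0), (0.7, 0.0),
                                 (0.7, math.pi / 2), (0.7, math.pi / 2)])
print(track[-1])  # approximately (1.4, 1.4)
```

The sketch also shows why calibration matters: errors in step length or heading accumulate at every update, which is exactly what the vision-based or GPS assistance described above is meant to bound.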
The proposed localization systems have been mainly used to enhance Cultural Heritage applications in
providing information and services depending on the user’s actual context, in particular depending on the
user’s location.
|
217 |
Design of fatigue-loaded machine components by means of advanced methodologies / Progettazione con metodologie avanzate di organi di macchine sollecitati a fatica. Comandini, Matteo <1976>, 23 April 2008
No description available.
|
218 |
Biometric Fingerprint Recognition Systems / Sistemi Biometrici per il Riconoscimento delle Impronte Digitali. Ferrara, Matteo <1979>, 08 April 2009
No description available.
|
219 |
Reconstruction from image correspondences. Azzari, Pietro <1979>, 26 March 2009
A single picture provides a largely incomplete representation of the scene one is looking at. Usually it reproduces only a limited spatial portion of the scene, according to the standpoint and the viewing angle, and it contains only instantaneous information. Thus very little can be understood of the geometrical structure of the scene, while the position and orientation of the observer with respect to it also remain hard to guess.
When multiple views, taken from different positions in space and time, observe the same scene, then a much deeper knowledge is potentially achievable. Understanding inter-views relations enables construction of a collective representation by fusing the information contained in every single image.
Visual reconstruction methods confront the formidable, and still unanswered, challenge of delivering a comprehensive representation of structure, motion and appearance of a scene from visual information. Multi-view visual reconstruction deals with the inference of relations among multiple views and the exploitation of the revealed connections to attain the best possible representation.
This thesis investigates novel methods and applications in the field of visual reconstruction from multiple views. Three main threads of research have been pursued: dense geometric reconstruction, camera pose reconstruction, sparse geometric reconstruction of deformable surfaces.
Dense geometric reconstruction aims at delivering the appearance of a scene at every single point. The construction of a large panoramic image from a set of traditional pictures has been extensively studied in the context of image mosaicing techniques. An original algorithm for sequential registration suitable for real-time applications has been conceived. The integration of the algorithm into a visual surveillance system has led to robust and efficient motion detection with Pan-Tilt-Zoom cameras. Moreover, an evaluation methodology for quantitatively assessing and comparing image mosaicing algorithms has been devised and made available to the community.
Camera pose reconstruction deals with the recovery of the camera trajectory across an image sequence. A novel mosaic-based pose reconstruction algorithm has been conceived that exploits image mosaics and traditional pose estimation algorithms to deliver more accurate estimates. An innovative markerless vision-based human-machine interface has also been proposed, so as to allow a user to interact with a gaming application by moving a hand-held consumer-grade camera in unstructured environments.
Finally, sparse geometric reconstruction refers to the computation of the coarse geometry of an object at a few preset points. In this thesis, an innovative shape reconstruction algorithm for deformable objects has been designed. A cooperation with the Solar Impulse project allowed the algorithm to be deployed in a very challenging real-world scenario, i.e. the accurate measurement of airplane wing deformations.
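The sequential registration underlying mosaic construction can be sketched by chaining pairwise homographies back to a reference frame (a plain-Python sketch with an invented translation-only example, not the thesis' actual algorithm):

```python
def mat_mul(A, B):
    """3x3 homography composition (plain lists, no external dependencies)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def chain_to_reference(pairwise):
    """Sequential registration sketch: given homographies H_i mapping frame
    i+1 into frame i, accumulate them so that every frame maps directly into
    the first (reference) frame of the mosaic."""
    acc = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity
    chained = [acc]
    for H in pairwise:
        acc = mat_mul(acc, H)
        chained.append(acc)
    return chained

def translation(tx, ty):
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

# Camera panning right: each frame is shifted 100 px w.r.t. the previous one.
chained = chain_to_reference([translation(100, 0), translation(100, 0)])
print(chained[2][0][2])  # 200.0: frame 2 sits 200 px right of the reference
```

Real mosaicing systems estimate each pairwise homography from feature correspondences and refine the chain globally; the chaining step above is only the bookkeeping that places every frame in the common mosaic frame.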
|
220 |
On the energy dissipated in some machine components / Sulla energia dissipata in alcuni organi di macchina. Maldotti, Sergio <1980>, 05 May 2009
This thesis deals with the analysis of the lubrication and the cooling of geared transmissions, with the intention of minimizing power losses.
A physical model was developed and calibrated for the calculation of the energy and the heat dissipated in the gearbox, for both parallel-shaft and planetary geartrains. This model allows the determination of the equilibrium temperature of the oil for different operating conditions. Thermal calculation is not yet widespread in gearbox design, but it is important, especially for compact gearboxes such as planetary gearboxes, in which the maximum transmissible power is usually governed by thermal considerations.
The model here proposed was implemented in an automatic calculation system that can be tailored to various typologies of gearboxes. This calculation technique, furthermore, allows the determination of the energy dissipated under different lubrication conditions and was used to evaluate the differences between traditional oil-bath lubrication and dry-sump or wet-sump lubrication.
The model was applied to the particular case of a two-stage gearbox: the first one with parallel gears and the second one with epicyclic gears. An experimental test carried out on a prototype, made within the scheme of a contract between DIEM and Brevini S.p.A. of Reggio Emilia, allowed the tuning of the model parameters [1].
Another investigation concerned the study of the energy dissipated in the meshing of two gears using a model that foresees the variations in the coefficient of friction along the contact zone. On the contrary, existing models are based on an average coefficient of friction despite recognition that it varies during meshing.
In particular, since published literature does not report how efficiency varies in the case of profile-shifted gears, focus was given to the energy dissipated in the gears as the profile shift varies [2].
Research was conducted on the operation of screw-nut linear actuators.
It was found that the temperature at the contact between the screw and the nut is the most critical parameter for the functioning of these actuators. The ongoing wear mechanisms were studied, with particular emphasis on the thermal aspects of the phenomenon. Within the scheme of a contract between DIEM and Ognibene Meccanica S.r.l. of Bologna, a model based on an experimental test was developed for the determination of the running temperature, given the pressure, the velocity and the service factor. This model was compared to existing theoretical approaches [3].
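The effect of a friction coefficient that varies along the path of contact can be sketched by numerically averaging the instantaneous sliding loss, mu(s) * F_n * |v_s(s)|, over the contact path (all profiles and numerical values below are invented for illustration, not the thesis' calibrated model):

```python
def mesh_friction_loss(mu_of_s, normal_force_n, sliding_speed_of_s, s_points):
    """Average sliding power loss along the path of contact, with a friction
    coefficient that varies along it (trapezoidal mean over a uniform grid)."""
    losses = [mu_of_s(s) * normal_force_n * abs(sliding_speed_of_s(s))
              for s in s_points]
    return (0.5 * (losses[0] + losses[-1]) + sum(losses[1:-1])) / (len(s_points) - 1)

# Toy profiles on s in [-1, 1]: sliding speed changes sign at the pitch
# point s = 0, and the friction coefficient grows away from it.
n = 101
s_points = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]

def mu(s):
    return 0.04 + 0.04 * abs(s)      # varies along the contact path

def v_slide(s):
    return 2.0 * s                   # m/s; zero sliding at the pitch point

avg_loss = mesh_friction_loss(mu, 1000.0, v_slide, s_points)
print(round(avg_loss, 1))  # W, averaged over the mesh cycle
```

A constant average friction coefficient would weight all contact positions equally; letting mu vary with s, as above, is what captures the difference that profile shift makes to the dissipated energy.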
|