1.
Experimental characterization and diagnosis tools for proton exchange membrane fuel cells. Primucci, Mauricio. 10 September 2012.
A fuel cell is a device that produces electric power directly from electrochemical reduction and oxidation reactions. PEM fuel cells present several properties that make them appropriate for portable and transport applications: high efficiency, no emissions, a solid electrolyte, low operating temperatures and high power density. However, some technical issues still need to be addressed, notably the durability of the materials and the appropriate control of the operating conditions. One important aspect of the operating conditions is water management. The right water content is needed in the electrolyte and catalyst layers to maximize the efficiency of the PEMFC by minimizing the voltage losses. The water content in the fuel cell is determined mainly by the water generated at the cathode by the reaction, the humidity of the inlet gases and the transport through the membrane.
This thesis studies, proposes and compares different experimental characterisation methods aimed at providing performance
indicators of the PEMFC water state.
A systematic use of the Electrochemical Impedance Spectroscopy (EIS) technique is presented, and its results are studied in order to analyse the influence of different operating conditions on the PEMFC response. The variables under analysis include load current, pressure, temperature and gas relative humidity. All of them are considered for two inlet gas feedings: H2/O2 and H2/air.
A set of relevant characteristics of the EIS response has been considered. Several equivalent circuits have been analysed, and those that best fit the experimental EIS data are selected.
When air is used as oxidant, a simple equivalent circuit with a resistance and a Warburg element is proposed. When oxygen is used as oxidant, a more complex equivalent circuit is needed. A detailed sensitivity analysis is performed, indicating the parameters that best capture the influence of the operating conditions.
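As an illustration of this kind of model fitting, the sketch below adjusts a series resistance plus a finite-length Warburg element to a synthetic impedance spectrum by complex least squares. It is a minimal example of the technique, not the thesis code: the function names, starting values and synthetic data are all illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(params, w):
    """Impedance of a series resistance plus a finite-length Warburg
    element: Z(jw) = Rm + Rw * tanh(sqrt(jw*Tw)) / sqrt(jw*Tw)."""
    Rm, Rw, Tw = params
    s = np.sqrt(1j * w * Tw)
    return Rm + Rw * np.tanh(s) / s

def residuals(params, w, z_meas):
    # Stack real and imaginary parts so least_squares sees real residuals.
    z = z_model(params, w)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

# Synthetic "measured" spectrum (illustrative values, not thesis data).
w = 2 * np.pi * np.logspace(-1, 4, 60)          # rad/s, 0.1 Hz .. 10 kHz
z_true = z_model([0.005, 0.020, 0.5], w)        # ohm, ohm, s
rng = np.random.default_rng(0)
z_meas = z_true + 1e-4 * (rng.standard_normal(w.size)
                          + 1j * rng.standard_normal(w.size))

fit = least_squares(residuals, x0=[0.01, 0.01, 0.1],
                    bounds=([0, 0, 0], [np.inf] * 3), args=(w, z_meas))
Rm, Rw, Tw = fit.x
print(f"Rm = {Rm:.4f} ohm, Rw = {Rw:.4f} ohm, Tw = {Tw:.3f} s")
```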
A new experimental characterisation technique, based on the interruption of inlet gas humidification, is proposed. This dynamic technique combines the information extracted from EIS with the temporal response in order to study the water transport and storage effects in the PEMFC. Two advantages of the proposed technique are the simple hardware configuration required and its relatively low impact on the fuel cell response, which make humidification interruption attractive as an in-situ technique.
Three different sets of performance indicators are proposed as diagnosis tools.
Relevant characteristics of the EIS response, if properly monitored, can give a diagnostic of the fuel cell internal state. After analysis, the chosen ones are the low- and high-frequency resistances (RLF and RHF) and the frequency of the maximum phase. These relevant characteristics help to determine whether the PEMFC is well humidified under the current operating conditions. If the zone defined by a decrease in RLF, a slight increase in RHF and an increase in the frequency of the maximum phase is minimal, the cathode is optimally humidified.
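A minimal sketch of how these three relevant characteristics could be extracted from a measured spectrum is given below. Taking RLF and RHF as the real part of Z at the band edges is a simplifying assumption made for the example (a stand-in for the actual Nyquist-plot intercepts), and all names are illustrative.

```python
import numpy as np

def relevant_characteristics(f, z):
    """Approximate the three 'relevant characteristics': low- and
    high-frequency resistances and the frequency of the maximum phase.
    RLF/RHF are taken here as Re(Z) at the band edges, a simplifying
    stand-in for the real-axis intercepts of the Nyquist plot."""
    f, z = np.asarray(f, dtype=float), np.asarray(z, dtype=complex)
    order = np.argsort(f)
    f, z = f[order], z[order]
    r_lf = z.real[0]                    # at the lowest measured frequency
    r_hf = z.real[-1]                   # at the highest measured frequency
    phase = np.angle(z, deg=True)
    f_max_phase = f[np.argmax(-phase)]  # most negative (capacitive) phase
    return r_lf, r_hf, f_max_phase

# Usage with the spectrum of the previous sketch (illustrative only):
# r_lf, r_hf, f_mp = relevant_characteristics(w / (2 * np.pi), z_meas)
```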
Equivalent circuits are used to give a physical interpretation. The parameters selected as performance indicators are the membrane resistance, Rm, and the time constant and resistance of the diffusion process (from the Warburg element: Tw and Rw). In this case, the humidification of the fuel cell is optimal if the zone where Rw and Tw decrease and Rm increases slowly is minimal.
Model-based performance indicators are also proposed: Rm, the effective diffusion coefficient, Deff, and the effective active area, Aeff. Optimal humidification occurs when the zone where Deff is stationary and Rm has not changed significantly is minimal. The parameter Aeff involved in this last diagnosis procedure can be decoupled from the humidification interruption test and used to estimate the effective active area, and it is therefore also helpful for comparing PEMFC performance under different operating conditions.
2.
Resource and performance trade-offs in real-time embedded control systems. Lozoya Gámez, Rafael Camilo. 19 July 2011.
The use of computer-controlled systems has increased dramatically in our daily life. Microprocessors are embedded in most everyday devices. Due to cost constraints, many of these devices that run control applications are designed under processing power, space, weight, and energy constraints, i.e., with limited resources. Moreover, the embedded control systems market demands new capabilities from these devices, or improvements to existing ones, without increasing resource demands. Equipping devices with real-time technology is a promising step toward achieving cost-effective embedded control systems. Recent results in real-time systems theory provide methods and policies for an efficient use of computational resources. At the same time, control systems theory is starting to offer controllers with varying computational load. By combining both disciplines, it is theoretically feasible to design resource-constrained embedded control systems capable of trading off control performance and resource utilization.
This thesis focuses on the practical feasibility of this new generation of embedded control systems. To this end, two issues are addressed: 1) the effective implementation of control loops using real-time technology, and 2) the evaluation of resource/performance-aware policies that can be applied to a set of control loops that execute concurrently on a microprocessor.
A control task generally consists of three main activities: input, control algorithm computation, and output. The timing of the input and output actions is critical to the performance of the controller. These operations can be implemented within the real-time task body or using hardware functions. The former introduces considerable amounts of jitter while the latter forces delays. This thesis presents a novel task model, as a computational abstraction for implementing control loops, that is shown to remove the endemic problems caused by jitter and delays. This model is synchronized at the output instants rather than at the input instants, which has been shown to provide interesting properties. From the scheduling point of view, the new task model can be seamlessly integrated into existing scheduling theory and practice, while improving task set schedulability. From a control perspective, the task model absorbs jitter, because it allows irregular sampling by incorporating predictors, and improves responsiveness to perturbations. In addition, Kalman techniques have also been investigated to deal with the case of noisy measurements.
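The timing idea behind an output-synchronized task can be sketched schematically as follows. This is an illustrative skeleton under the assumption of a predictor-equipped control law inside `compute`; the names and period value are invented for the example, and it is not the implementation evaluated in the thesis.

```python
import time

PERIOD = 0.01  # s, illustrative sampling period

def control_loop_output_synchronized(read_sensor, compute, write_actuator):
    """Schematic output-synchronized task: the command computed during
    period k is released at a fixed instant (the start of period k+1),
    so output jitter is removed; the one-period delay this introduces
    is assumed to be compensated by a predictor inside `compute`."""
    u = 0.0                                   # command from the last period
    next_release = time.monotonic() + PERIOD
    while True:
        time.sleep(max(0.0, next_release - time.monotonic()))
        write_actuator(u)                     # fixed-instant output
        y = read_sensor()                     # sample right after release
        u = compute(y)                        # may finish anywhere in period
        next_release += PERIOD
```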
The effective implementation of simple control algorithms using this new task model does not by itself guarantee the feasibility of implementing state-of-the-art resource/performance-aware policies. These policies, which can be roughly divided into feedback scheduling and event-driven control, have mainly been treated from a theoretical point of view, while practical aspects have been omitted. Contrary to the initial problem targeted by these policies, that is, to minimize or bound resource requirements so as to meet the tight cost constraints associated with mass production and strong industrial competition, research advances seem to require sophisticated procedures that may impair a cost-effective implementation. This thesis presents a performance evaluation framework that makes it possible to assess these policies in terms of the potential benefits offered by the theory as well as the pay-off in terms of complexity and overhead. The framework design is the result of a taxonomical analysis of the related state of the art. Among other specifications, the framework, which is composed of a simulation platform and an experimental platform, supports both event-triggered and time-triggered paradigms, accommodates different sorts of control and optimization algorithms, and flexibly evaluates control performance and resource utilization.
3.
Digital repetitive control under varying frequency conditions. Ramos Fuentes, Germán Andrés. 12 September 2012.
The tracking/rejection of periodic signals constitutes a wide field of research in the control theory and applications area, and Repetitive Control has proven to be an efficient way to face this problem. However, in some applications the period of the signal to be tracked/rejected changes in time or is uncertain, which causes an important performance degradation in the standard repetitive controller. This thesis presents some contributions to the open topic of repetitive control working under varying frequency conditions. These contributions can be organized as follows:
One approach that overcomes the problem of working under time-varying frequency conditions is the adaptation of the controller sampling period. Nevertheless, the system framework then changes from Linear Time-Invariant to Linear Time-Varying, and the closed-loop stability can be compromised. This work presents two different methodologies aimed at analysing the system stability under these conditions. The first one uses a Linear Matrix Inequality (LMI) gridding approach, which provides necessary conditions to accomplish a sufficient condition for the closed-loop Bounded-Input Bounded-Output stability of the system. The second one applies robust control techniques in order to analyse the stability, and yields sufficient stability conditions. Both methodologies yield a frequency variation interval for which the system stability can be assured. Although several approaches exist for the stability analysis of general time-varying sampling period controllers, few of them allow an integrated controller design which assures closed-loop stability under such conditions. In this thesis two design methodologies are presented which assure stability of the repetitive control system working under a varying sampling period for a given frequency variation interval: a mu-synthesis technique and a pre-compensation strategy.
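For context, the sensitivity of standard repetitive control to frequency changes, and the rationale for adapting the sampling period, can be read off the usual internal model (a textbook expression recalled here, not a result of this thesis):

\[
G_{rc}(z) = \frac{z^{-N}}{1 - z^{-N}}, \qquad N = \frac{T_p}{T_s},
\]

where T_p is the period of the signal to be tracked/rejected and T_s the sampling period. The infinite-gain peaks sit exactly on the harmonics of 1/T_p only while N is the right integer; if T_p drifts with T_s fixed, the peaks miss the harmonics and performance degrades, whereas adapting T_s keeps N matched at the price of the time-varying closed loop analysed above.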
On a second branch, High Order Repetitive Control (HORC) is mainly used to improve the robustness of the repetitive control performance under disturbance/reference signals with varying or uncertain frequency. Unlike standard repetitive control, HORC involves a weighted sum of several signal periods. With a proper selection of the associated weights, this high-order function offers a characteristic frequency response in which the high-gain peaks located at the harmonic frequencies are extended to a wider region around the harmonics. Furthermore, the use of an odd-harmonic internal model makes the system more appropriate for applications where signals have only odd-harmonic components, as in power electronics systems. Thus, an odd-harmonic High Order Repetitive Controller suitable for applications involving odd-harmonic signals with varying/uncertain frequency is presented. The open-loop stability of the internal models used in HORC and of the one presented here is analysed. Additionally, as a consequence of this analysis, an Anti-Windup (AW) scheme for repetitive control is proposed. This AW proposal is based on the idea of having a small steady-state tracking error and a fast recovery once the system comes out of saturation.
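The widening of the harmonic gain peaks can be reproduced numerically in a few lines. The weights below are illustrative (any set summing to one keeps the peaks on the nominal harmonics); the sketch shows the mechanism only and is not tied to the particular HORC designs of the thesis.

```python
import numpy as np

N = 100                                  # samples per nominal period
w1 = np.array([1.0])                     # standard RC: a single period
w3 = np.array([1.2, -0.35, 0.15])        # illustrative HORC weights, sum = 1

def internal_model_gain(weights, n_points=4096):
    """Gain of 1 / (1 - sum_k w_k z^(-kN)) evaluated on the unit circle."""
    theta = np.linspace(1e-4, np.pi, n_points)
    z = np.exp(1j * theta)
    den = 1.0 - sum(wk * z ** (-(k + 1) * N) for k, wk in enumerate(weights))
    return theta, 1.0 / np.abs(den)

theta, g_std = internal_model_gain(w1)
_, g_horc = internal_model_gain(w3)
# Near each harmonic theta = 2*pi*m/N the HORC gain stays high over a
# wider band than the standard model, which is the robustness mechanism.
```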
The experimental validation of these proposals has been performed in two different applications: the Roto-magnet plant and an active power filter application. The Roto-magnet plant is a didactic experimental plant used as a tool for analysing and understanding the nature of periodic disturbances, as well as for studying the different control techniques used to tackle this problem. This plant has been adopted as an experimental test bench for rotational machines. On the other hand, shunt active power filters have been widely used as a way to overcome the power quality problems caused by nonlinear and reactive loads. These power electronics devices are designed with the goal of obtaining a power factor close to 1 and achieving current harmonics and reactive power compensation.
4.
Decision generation in the presence of uncertainties (Generació de decisions davant d'incerteses). Escobet i Canal, Antoni. 17 September 2012.
This thesis deals with the Fuzzy Inductive Reasoning (FIR) methodology applied to fault detection and diagnosis systems.
FIR, based on the General Systems Problem Solver (GSPS) proposed by Klir in 1989, is a methodological tool for data-driven
construction of dynamical systems and for studying their conceptual modes of behavior. FIR is a qualitative modeling and
simulation methodology that is based on observation of the input-output behavior of the system to be modeled, rather than
on structural knowledge about its internal composition. This methodology has evolved over time with the aim of enlarging the
class of problems that can be dealt with by FIR.
The work presented in this thesis aims to contribute to reducing the modeling and simulation effort for complex real industrial
systems. Several methodological contributions have been made to increase FIR robustness as well as to develop a new
methodology to create robust and efficient fault detection and diagnosis systems.
The main objective of this thesis is to reduce as much as possible the sensitivity of the FIR methodology, by maximizing its
robustness, in such a way that it becomes a fundamental tool for developing efficient fault detection and diagnosis systems.
The main contributions of this thesis are:
• To improve the robustness of FIR by creating a new tool, Visual-FIR, that identifies patterns and predicts the future behavior of dynamical systems in a very efficient and simple-to-use environment.
• To develop a new methodology for creating fault detection and diagnosis systems based on FIR. We have developed a detection technique, called enveloping, and a diagnosis measure, known as the model acceptability measure, that improve and make more robust the fault detection and diagnosis processes of the FIRFDDS (the fault detection and diagnosis system based on FIR); a generic sketch of the enveloping idea follows this list.
• To develop a tool that allows highly efficient FIRFDDSs for specific applications to be created easily. A platform, named VisualBlock-FIR, has been developed that allows the user to create, in a simple way, fault detection and diagnosis systems based on FIR.
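Below is a generic sketch of the enveloping idea, under the assumption of a model prediction plus a tolerance band with a persistence counter; the actual FIR envelope and acceptability measure are more elaborate, and all names here are illustrative.

```python
import numpy as np

def envelope_fault_detector(y_meas, y_pred, band, persistence=3):
    """Generic envelope check in the spirit of an 'enveloping' technique:
    flag a fault when the measurement leaves the band
    [y_pred - band, y_pred + band] for `persistence` consecutive samples
    (to reject single-sample noise). `y_pred` would come from the model,
    e.g. a FIR prediction of the fault-free behavior."""
    outside = np.abs(np.asarray(y_meas) - np.asarray(y_pred)) > band
    count, alarms = 0, np.zeros(outside.size, dtype=bool)
    for i, out in enumerate(outside):
        count = count + 1 if out else 0
        alarms[i] = count >= persistence
    return alarms

# Illustrative use: a step fault appearing halfway through the signal.
t = np.arange(200)
y_pred = np.sin(0.1 * t)
y_meas = y_pred + 0.02 * np.random.default_rng(0).standard_normal(200)
y_meas[100:] += 0.5                       # injected fault
print(envelope_fault_detector(y_meas, y_pred, band=0.1).argmax())  # ~102
```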
In order to validate the methodological contributions and the developed tools, a couple of case studies are presented in this dissertation. The first corresponds to the Damadics automatic valve benchmark problem, which proposes four faults of small and medium magnitude that are detected and isolated/identified in a quick and highly efficient way. The second is a simulated fuel cell to which five different faults are applied; the five faults are detected and identified correctly. Finally, we check the robustness of the FIRFDDS by adding white noise, of different magnitudes, to the outputs of the fuel cell.
5.
Grasp planning under task-specific contact constraints. Rosales Gallegos, Carlos. 10 January 2013.
Several aspects have to be addressed before realizing the dream of a robotic hand-arm system with human-like capabilities, ranging from the consolidation of a proper mechatronic design, to the development of precise, lightweight sensors and actuators, to the efficient planning and control of the articular forces and motions required for interaction with the environment. This thesis provides solution algorithms for a main problem within the latter aspect, known as the grasp planning problem: Given a robotic system formed by a multifinger hand attached to an arm, and an object to be grasped, both with a known geometry and location in 3-space, determine how the hand-arm system should be moved without colliding with itself or with the environment, in order to firmly grasp the object in a suitable way.
Central to our algorithms is the explicit consideration of a given set of hand-object contact constraints to be satisfied in the final grasp configuration, imposed by the particular manipulation task to be performed with the object. This is a distinguishing feature from other grasp planning algorithms given in the literature, where a means of ensuring precise hand-object contact locations in the resulting grasp is usually not provided. These conventional algorithms are fast, and nicely suited for planning grasps for pick-and-place operations with the object, but not for planning grasps required for a specific manipulation of the object, like those necessary for holding a pen, a pair of scissors, or a jeweler's screwdriver, for instance, when writing, cutting a paper, or turning a screw, respectively. To be able to generate such highly-selective grasps, we assume that a number of surface regions on the hand are to be placed in contact with a number of corresponding regions on the object, and enforce the fulfilment of such constraints on the obtained solutions from the very beginning, in addition to the usual constraints of grasp restrainability, manipulability and collision avoidance.
The proposed algorithms can be applied to robotic hands of arbitrary structure, possibly considering compliance in the joints and the contacts if desired, and they can accommodate general patch-patch contact constraints, instead of more restrictive contact types occasionally considered in the literature. It is worth noting, also, that while common force-closure or manipulability indices are used to assess the quality of grasps, no particular assumption is made on the mathematical properties of the quality index to be used, so that any quality criterion can be accommodated in principle. The algorithms have been tested and validated on numerous situations involving real mechanical hands and typical objects, and find applications in classical or emerging contexts like service robotics, telemedicine, space exploration, prosthetics, manipulation in hazardous environments, or human-robot interaction in general.
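As a pointer to what such quality indices look like, the sketch below builds the grasp matrix for hard point contacts and uses its smallest singular value as a quality proxy. It deliberately ignores friction-cone constraints and is only a simplified illustration of the kind of index mentioned above, not the thesis formulation; the example contact points are invented.

```python
import numpy as np

def skew(p):
    """Cross-product matrix: skew(p) @ f equals np.cross(p, f)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def grasp_matrix(contact_points):
    """6 x 3n grasp matrix for n hard point contacts: a contact force f_i
    applied at p_i contributes the object wrench [f_i; p_i x f_i]."""
    blocks = [np.vstack((np.eye(3), skew(p))) for p in contact_points]
    return np.hstack(blocks)

def quality_sigma_min(contact_points):
    """Smallest singular value of G, a common quality proxy: it vanishes
    when some wrench direction cannot be resisted at all."""
    G = grasp_matrix(np.asarray(contact_points, dtype=float))
    return np.linalg.svd(G, compute_uv=False).min()

# Illustrative three-finger grasp on the equator of a unit sphere:
points = [[1.0, 0.0, 0.0], [-0.5, 0.866, 0.0], [-0.5, -0.866, 0.0]]
print(quality_sigma_min(points))
```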
6.
A Bayesian approach to robust identification: application to fault detection. Fernández Canti, Rosa Ma. 07 February 2013.
In the Control Engineering field, the so-called Robust Identification techniques deal with the problem of obtaining not only a nominal model of the plant, but also an estimate of the uncertainty associated with the nominal model. Such a model of uncertainty is typically characterized as a region in the parameter space or as an uncertainty band around the frequency response of the nominal model.
Uncertainty models have been widely used in the design of robust controllers and, recently, their use in model-based fault detection procedures is increasing. In this latter case, consistency between new measurements and the uncertainty region is checked; when an inconsistency is found, the existence of a fault is declared.
There exist two main approaches to the modeling of model uncertainty: the deterministic/worst-case methods and the stochastic/probabilistic methods. At present, there are a number of different methods, e.g., model error modeling, set-membership identification and non-stationary stochastic embedding. In this dissertation we summarize the main procedures and illustrate their results by means of several examples from the literature.
As a contribution, we propose a Bayesian methodology to solve the robust identification problem. The approach is highly unifying, since many robust identification techniques can be interpreted as particular cases of the Bayesian framework. Also, the methodology can deal with non-linear structures such as the ones derived from the use of observers. The obtained Bayesian uncertainty models are used to detect faults in a quadruple-tank process and in a three-bladed wind turbine.
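The core of the Bayesian idea can be sketched in the conjugate linear-Gaussian case: a posterior over the model parameters yields a predictive band, and a fault is declared when new data leave it. This is a deliberately simplified illustration (known noise variance, linear regressor); the names and data are invented for the example.

```python
import numpy as np

def posterior(Phi, y, noise_var, prior_var=1e3):
    """Gaussian posterior over theta for y = Phi @ theta + e, with known
    noise variance (the conjugate linear-Gaussian simplification)."""
    n = Phi.shape[1]
    P = np.linalg.inv(Phi.T @ Phi / noise_var + np.eye(n) / prior_var)
    m = P @ Phi.T @ y / noise_var
    return m, P

def is_consistent(phi_new, y_new, m, P, noise_var, n_sigma=3.0):
    """Fault check: is the new sample inside the posterior predictive
    band? Leaving the band suggests a fault (or an inadequate model)."""
    pred_mean = phi_new @ m
    pred_var = phi_new @ P @ phi_new + noise_var
    return abs(y_new - pred_mean) <= n_sigma * np.sqrt(pred_var)

# Illustrative first-order regressor [y_{k-1}, u_{k-1}] with fake data:
rng = np.random.default_rng(1)
Phi = rng.standard_normal((200, 2))
y = Phi @ np.array([0.8, 0.5]) + 0.05 * rng.standard_normal(200)
m, P = posterior(Phi, y, noise_var=0.05**2)
print(is_consistent(np.array([1.0, 1.0]), 1.3, m, P, 0.05**2))  # True
```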
7.
Snoring and arousals in full-night polysomnographic studies from sleep apnea-hypopnea syndrome patients. Mesquita, Joana Margarida Gil de. 22 February 2013.
SAHS (Sleep Apnea-Hypopnea Syndrome) is recognized to be a serious disorder with a high prevalence in the population. The main clinical triad of SAHS is made up of three symptoms: apneas and hypopneas, chronic snoring, and excessive daytime sleepiness (EDS). The gold standard for diagnosing SAHS is an overnight polysomnographic study performed at the hospital, a laborious, expensive and time-consuming procedure in which multiple biosignals are recorded. In this thesis we offer improvements to the current approaches to the diagnosis and assessment of patients with SAHS. We demonstrate that snoring and arousals, both recognized key markers of SAHS, should be fully appreciated as essential tools for SAHS diagnosis.

With respect to snoring analysis (applied to a database of 34 subjects with a total of 74,439 snores), as an alternative to acoustic analysis we have used less complex approaches mostly based on time-domain parameters. We concluded that key information on SAHS severity can be extracted from the analysis of the time interval between successive snores. For that, we built a new methodology which consists in applying an adaptive threshold to the whole-night sequence of time intervals between successive snores. This threshold makes it possible to identify regular and non-regular snores. Finally, we were able to correlate the variability of the time interval between successive snores, in short 15-minute segments and throughout the whole night, with the subject's SAHS severity. Severe SAHS subjects show a shorter time interval between regular snores (p=0.0036, AHI cut-point: 30 h-1) and less dispersion in the time-interval features during all sleep. Conversely, lower intra-segment variability (p=0.006, AHI cut-point: 30 h-1) is seen for less severe SAHS subjects. We were also successful in classifying the subjects according to their SAHS severity using the features derived from the time interval between regular snores, with classification accuracies of 88.2% (90% sensitivity, 75% specificity) and 94.1% (94.4% sensitivity, 93.8% specificity) for AHI severity cut-points of 5 and 30 h-1, respectively.

Concerning the arousal study, our work focuses on respiratory and spontaneous arousals (45 subjects with a total of 2018 respiratory and 2001 spontaneous arousals). Current beliefs suggest that the former are the main cause of sleep fragmentation. Accordingly, sleep clinicians assign an important role to respiratory arousals when providing a final diagnosis on SAHS. Given that the two types of arousals are triggered by different mechanisms, we hypothesized that there might exist differences between their EEG content. After characterizing our arousal database through spectral analysis, the results showed that the content of the respiratory arousals of a mild SAHS subject is similar to that of a severe one (p>>0.05). Similar results were obtained for spontaneous arousals. Our findings also revealed that no differences are observed between the features of these two kinds of arousals in the same subject (r=0.8, p<0.01, with concordance in a Bland-Altman analysis). As a result, we verified that each subject has almost a fingerprint or signature for the content of his arousals, which is similar for both types of arousals. In addition, this signature has no correlation with SAHS severity, and this is confirmed for the three EEG tracings (C3A2, C4A1 and O1A2). Although the trigger mechanisms of the two kinds of arousals are known to be different, our results showed that the brain response is essentially the same for both of them.
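The kind of within-subject spectral comparison described above can be sketched as follows; the band definitions are standard EEG conventions used for illustration and are not necessarily the exact features of the thesis, and the random segments stand in for real arousal epochs.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "sigma": (12, 16), "beta": (16, 30)}

def band_powers(eeg, fs=256.0):
    """Relative spectral band powers of one EEG arousal segment,
    estimated with Welch's method."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))
    total = np.trapz(pxx, f)
    return {name: np.trapz(pxx[(f >= lo) & (f < hi)],
                           f[(f >= lo) & (f < hi)]) / total
            for name, (lo, hi) in BANDS.items()}

# Correlating the feature vectors of a subject's respiratory vs
# spontaneous arousals reproduces the within-subject similarity test
# described above (here with noise segments standing in for real EEG).
rng = np.random.default_rng(2)
resp = band_powers(rng.standard_normal(int(10 * 256)))   # 10 s segment
spon = band_powers(rng.standard_normal(int(10 * 256)))
r = np.corrcoef(list(resp.values()), list(spon.values()))[0, 1]
```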
The impact that respiratory arousals have on the sleep of SAHS patients is unquestionable, but our findings suggest that the impact of spontaneous arousals should not be underestimated.
8.
Development of a luminous flux propagation algorithm for a mobile photometric device with angular discretization capability: application to the generation of street lighting illumination maps (Desarrollo de un algoritmo de propagación de flujo luminoso para un dispositivo fotométrico móvil con capacidad de discretización angular: aplicación a la generación de mapas de iluminación de alumbrado público). Fernández Dorado, José. 22 March 2013.
The continuous demand for more energy-efficient systems is generating two lines of action in the market: on the one hand, new, more efficient lighting systems and, on the other, a systematic increase in market demand for procedures to measure the amount and distribution of light arriving at surfaces.
This thesis aims to deepen the knowledge of the measurement algorithms and instrumentation associated with the design and manufacture of a new device capable of dynamically measuring the photometric properties of public lighting installations, obtaining the amount of luminous flux per unit area and its direction of arrival, and using the propagation of the information obtained at each point to ultimately provide the isolux map produced by a public lighting installation.
This thesis addresses the development of a luminous flux propagation algorithm that can be used by a photometric device to analyze the lighting systems found in urban areas, especially public lighting systems, thereby obtaining relevant data on the energy efficiency of the installations and on future actions to be taken to minimize energy expenditure.
Through the propagation algorithm, such a photometric device can process a greater amount of information about the illumination provided by public lighting installations. Measurements are optimized by taking a larger number of records in less time, which results in a better understanding of the lighting conditions. We analyze the regulations governing photometric measurements of public lighting installations, both at the European and at the Spanish state level. We also study and analyze the various pieces of equipment and procedures that exist today for measuring the photometric properties of public lighting installations, and review the state of the art found in scientific and technical publications.
We study the radiometric and photometric equations that are necessary for the development of the luminous flux propagation algorithm, and discuss in detail all the parameters of interest for the development of a plan of experimental measurements.
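At the heart of any such computation sits the standard inverse-square cosine law, E = I(theta)·cos(theta)/d². The sketch below uses it with an isotropic intensity, which is an assumption made for illustration (a real luminaire would use its measured photometric distribution); the luminaire layout and values are invented.

```python
import numpy as np

def illuminance_map(luminaires, x, y, h):
    """Horizontal illuminance E (lux) on the road plane from point-like
    luminaires at height h, via E = I(theta) * cos(theta) / d^2; an
    isotropic intensity I0 replaces a real photometric distribution."""
    X, Y = np.meshgrid(x, y)
    E = np.zeros_like(X)
    for (lx, ly, I0) in luminaires:
        d2 = (X - lx) ** 2 + (Y - ly) ** 2 + h ** 2
        cos_theta = h / np.sqrt(d2)      # angle from the downward vertical
        E += I0 * cos_theta / d2
    return E

# Two luminaires 30 m apart, 10 m high, 10000 cd each (illustrative):
x = np.linspace(-15, 45, 121)
y = np.linspace(-10, 10, 41)
E = illuminance_map([(0.0, 0.0, 1e4), (30.0, 0.0, 1e4)], x, y, h=10.0)
# Contour lines of E are the isolux curves of the installation.
```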
This thesis has been partially funded by the POLUX project (mobile photometric device for street lighting), financed by the Ministerio de Ciencia e Innovación through the INNPACTO programme (IPT-2011-1675-020000), with the participation of the Polytechnic University of Catalonia, FIBERCOM SL, Ilimit COMUNICACIONS, SIMULACIONS OPTIQUES SL and the Industrial Association of Optics, Colour and Image (AIDO), and an execution period from 01/09/2011 to 11/31/2013.
The thesis has resulted in the publication of a patent and two conference papers at national and international meetings.
9.
Enhancing the efficiency and practicality of software transactional memory on massively multithreaded systems. Kestor, Gökçen. 22 March 2013.
Chip Multithreading (CMT) processors promise to deliver higher performance by running more than one stream of instructions in parallel. To exploit CMT's capabilities, programmers have to parallelize their applications, which is not a trivial task. Transactional Memory (TM) is one of the parallel programming models that aim at simplifying synchronization by raising the level of abstraction between semantic atomicity and the means by which that atomicity is achieved. TM is a promising programming model, but there are still important challenges that must be addressed to make it more practical and efficient in mainstream parallel programming.
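To make the abstraction concrete, here is a toy transaction with commit-time validation in the spirit of TL2-style STMs. It is a didactic sketch (coarse global clock, no contention management, simplified validation), unrelated to the actual systems studied in the thesis, and all names are invented.

```python
import threading

_clock = [0]
_clock_lock = threading.Lock()

class TVar:
    """A transactional variable: a value plus the version that last wrote it."""
    def __init__(self, value):
        self.value, self.version = value, 0
        self.lock = threading.Lock()

def atomic(tx_body):
    """Run tx_body(read, write) as a toy transaction: buffer reads and
    writes privately, lock the write set in a fixed order, validate that
    no read variable changed version, then publish. Retry on conflict."""
    while True:
        reads, writes = {}, {}
        def read(v):
            if v in writes:
                return writes[v]
            reads[v] = v.version
            return v.value
        def write(v, x):
            writes[v] = x
        tx_body(read, write)
        locked = sorted(writes, key=id)        # global order avoids deadlock
        for v in locked:
            v.lock.acquire()
        try:
            if all(v.version == ver for v, ver in reads.items()):
                with _clock_lock:
                    _clock[0] += 1
                    stamp = _clock[0]
                for v, x in writes.items():
                    v.value, v.version = x, stamp
                return                         # committed
        finally:
            for v in locked:
                v.lock.release()
        # validation failed: some read was overwritten, so retry

a, b = TVar(10), TVar(0)
atomic(lambda read, write: (write(a, read(a) - 5), write(b, read(b) + 5)))
print(a.value, b.value)                        # -> 5 5
```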
The first challenge addressed in this dissertation is that of making the evaluation of TM proposals more solid, with realistic TM benchmarks and the ability to run the same benchmarks on different STM systems. We first introduce RMS-TM, a comprehensive benchmark suite to evaluate HTMs and STMs. RMS-TM consists of seven applications from the Recognition, Mining and Synthesis (RMS) domain that are representative of future workloads. RMS-TM features current TM research issues such as nesting and I/O inside transactions, while also providing a variety of TM characteristics. Most STM systems are implemented as user-level libraries: the programmer is expected to manually instrument not only transaction boundaries but also individual loads and stores within transactions. This library-based approach is increasingly tedious and error-prone, and also makes it difficult to make reliable performance comparisons. To enable an "apples-to-apples" performance comparison, we then develop a software layer that allows researchers to test the same applications with interchangeable STM back ends.
The second challenge addressed is that of enhancing the performance and scalability of TM applications running on aggressive multi-core/multi-threaded processors. The performance and scalability of current TM designs, in particular STM designs, do not always meet the programmer's expectations, especially at scale. To overcome this limitation, we propose a new STM design, STM2, based on an assisted execution model in which time-consuming TM operations are offloaded to auxiliary threads while application threads optimistically perform computation. Surprisingly, our results show that STM2 provides, on average, speedups between 1.8x and 5.2x over state-of-the-art STM systems. On the other hand, we notice that assisted-execution systems may show low processor utilization. To alleviate this problem and to increase the efficiency of STM2, we enriched STM2 with a runtime mechanism that automatically and adaptively detects the computing demands of application and auxiliary threads and dynamically partitions hardware resources between the pair through the hardware thread prioritization mechanism implemented in POWER machines.
The third challenge is to define a notion of what it means for a TM program to be correctly synchronized. The current definition of transactional data race requires all transactions to be totally ordered "as if" serialized by a global lock, which limits the scalability of TM designs. To remove this constraint, we first propose to relax the current definition of transactional data race to allow a higher level of concurrency. Based on this definition, we propose the first practical race detection algorithm for C/C++ TM applications (TRADE) and implement the corresponding race detection tool. Then, we introduce a new definition of transactional data race that is more intuitive, is transparent to the underlying TM implementation, and can be used for a broad set of C/C++ TM programs. Based on this new definition, we propose T-Rex, an efficient and scalable race detection tool for C/C++ TM applications. Using TRADE and T-Rex, we have discovered subtle transactional data races in widely used STAMP applications that had not been reported in the past.
10.
Murine gammaherpesvirus mediated splenic fibrosis. Li, Shuo. January 2012.
Infection of IFNγ receptor knockout (IFNγR-/-) mice with murine gammaherpesvirus-68 (MHV-68) results in fibrosis in the lung, spleen, liver and lymph nodes. In the spleen, pathology involves an increase in the number of latently infected B cells that corresponds with a Th2-biased immune response, in which germinal centres become walled off and fibrosis dominates the splenic architecture. Remarkably, the spleen recovers from this pathology, and the starting point for this process is a loss of latently infected B cells. The aim of this project is to gain further understanding of the control of MHV-68 latent infection in the absence of an IFNγ response. This project investigates: (1) the mechanisms that result in the loss of splenocytes, in particular the reduction of latently infected B cells; and (2) the dynamics of macrophages in the induction, expression and recovery of fibrosis.

Several approaches were employed to examine the hypothesis that the massive cell loss in the IFNγR-/- spleen is caused by apoptosis. However, there was no evidence for excessive apoptosis throughout the development of fibrosis. Moreover, RT-PCR analysis showed that there was no significant increase in the expression of viral genes associated with lytic infection, so it is unlikely that viral reactivation and subsequent lytic infection occur. These data suggest that apoptosis and viral reactivation are not the main mechanisms that cause splenic cell loss.

Furthermore, B cell subpopulations and cells that express viral ORF73 in IFNγR-/- mice were examined using a recombinant virus. The ORF73-expressing cells are mainly germinal centre B cells and memory B cells. These two subpopulations undergo a drastic decrease in numbers during fibrosis, whereas naïve B cells, which are less susceptible to infection, maintain a relatively stable population. Therefore, the significant reduction of latently infected B cells appears to be related to the removal of germinal centre B cells and memory B cells.

Macrophages induced by Th2 cytokines are considered to be pro-fibrotic, and they are reported to have the potential to differentiate into myofibroblasts. In order to determine the role played by macrophages in MHV-68 induced fibrosis, transgenic mice with eGFP constitutively expressed in macrophages and dendritic cells were used. A different pattern of macrophage distribution was observed in IFNγR-/- mice compared to wild-type mice. Moreover, the number of splenic macrophages changed dramatically at different stages of fibrosis. The possibility that alternatively activated macrophages differentiate into myofibroblasts was investigated by co-staining with an α-SMA antibody; however, no evidence was found that macrophages are one of the origins of myofibroblasts. This suggests that macrophages may play other roles in regulating fibrosis rather than contributing directly to its formation.