321 |
Monotonicity and complete monotonicity for continuous-time Markov chains. Dai Pra, Paolo; Louis, Pierre-Yves; Minelli, Ida. January 2006.
We analyze the notions of monotonicity and complete monotonicity for Markov chains in continuous time, taking values in a finite partially ordered set. As in discrete time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete time.
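For context, the two notions on a finite partially ordered set (S, \preceq) admit the following standard formulation; the thesis may state them in an equivalent but differently phrased way.

```latex
% Standard definitions on a finite poset (S, \preceq); the thesis may phrase
% these differently but equivalently.
% Monotonicity: the semigroup preserves increasing functions, i.e. for every
% up-set \Gamma \subseteq S and every t \ge 0,
\[
  x \preceq y \;\Longrightarrow\; \sum_{z \in \Gamma} P_t(x,z) \;\le\; \sum_{z \in \Gamma} P_t(y,z).
\]
% Complete monotonicity: there exists one Markov process (\xi_t(x))_{x \in S}
% coupling all initial states, each coordinate \xi_t(x) distributed as the
% chain started at x, with
\[
  x \preceq y \;\Longrightarrow\; \xi_t(x) \preceq \xi_t(y) \quad \text{for all } t \ge 0.
\]
```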
|
322 |
The Persistence of Retro-commissioning Savings in Ten University Buildings. Toole, Cory Dawson. May 2010.
This study evaluated how well energy savings persisted over time in ten university buildings that had undergone retro-commissioning in 1996. The savings achieved immediately following retro-commissioning and in three subsequent years were documented in a previous study (Cho 2002). The current study expanded on that work by evaluating the performance of each building over nine additional years. Follow-up retro-commissioning work performed in each building during that time was documented, as were changes to the energy management control system. Savings were determined in accordance with the methodology outlined in the International Performance Measurement and Verification Protocol (IPMVP 2007), with ASHRAE Guideline 14 also serving as a reference. Total annualized savings for all buildings in 1997 (the year just after retro-commissioning) were 45 (±2)% for chilled water, 67 (±2)% for hot water, and 12% for electricity. Combining consumption from the most recent year for each building with valid energy consumption data showed total savings of 39 (±1)% for chilled water, 64 (±2)% for heating water, and 22% for electricity. Uncertainty values were calculated in accordance with the methodology in the IPMVP and ASHRAE Guideline 14 and are reported at the 90% confidence level. The most recent year of data for most of the buildings was 2008-2009, although a few of the buildings did not have valid consumption data for that year. Follow-up work performed in the buildings, lighting retrofits, and building metering changes beginning in 2005 were the major factors believed to have contributed to the high level of savings persistence in later years. When persistence trends were evaluated with adjustment for these factors, average savings for the buildings studied were found to degrade over time, and exponential models were developed to describe this degradation. The study concluded that, on average, energy savings after retro-commissioning degrade over time in a way that can be modeled exponentially. It also concluded that high levels of savings persistence can be achieved by performing retro-commissioning follow-up, particularly when significant increases are observed in metered energy consumption data, but also at other times as retro-commissioning procedures and technology continually improve.
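As an illustration of the exponential degradation modelling mentioned above, the following sketch fits a single-exponential decay to annual savings fractions; the data values, parameter names and one-term exponential form are hypothetical and are not taken from the study.

```python
# Sketch: fit an exponential degradation model S(t) = S0 * exp(-k * t)
# to annual whole-building savings fractions. Data values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

years = np.array([0, 1, 2, 3, 9, 10, 11, 12])                            # years since retro-commissioning
savings = np.array([0.45, 0.44, 0.42, 0.41, 0.40, 0.39, 0.39, 0.38])     # fraction of baseline use saved

def exp_decay(t, s0, k):
    """Savings fraction t years after commissioning."""
    return s0 * np.exp(-k * t)

(s0_hat, k_hat), _ = curve_fit(exp_decay, years, savings, p0=(0.45, 0.01))
print(f"fitted initial savings {s0_hat:.3f}, decay rate {k_hat:.4f} per year")
print(f"predicted savings after 15 years: {exp_decay(15, s0_hat, k_hat):.3f}")
```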
|
323 |
Properties and Behaviours of Fuzzy Cellular Automata. Betel, Heather. 14 May 2012.
Cellular automata are systems of interconnected cells which are discrete in space, time and state. Cell states are updated synchronously according to a local rule which depends on the current state of the given cell and those of its neighbours in a pre-defined neighbourhood. The local rule is common to all cells. Fuzzy cellular automata extend this notion to systems which are discrete in space and time but not in state. In this thesis, we explore fuzzy cellular automata which are created from the extension of Boolean rules in disjunctive normal form to continuous functions. Motivated by recent results on the classification of these rules from empirical evidence, we set out first to show that fuzzy cellular automata can shed some light on classical cellular automata and then to prove that the observed results are mathematically correct. The main results of this thesis can be divided into two categories. We first investigate the links between fuzzy cellular automata and their Boolean counterparts. We prove that number conservation is preserved by this transformation. We further show that Boolean additive cellular automata have a definable property in their fuzzy form which we call self-oscillation. We then give a probabilistic interpretation of fuzzy cellular automata and show that homogeneous asymptotic states are equivalent to mean field approximations of Boolean cellular automata. We then turn our attention to the asymptotic behaviour of fuzzy cellular automata. In the second half of the thesis we investigate the observed behaviours of the fuzzy cellular automata derived from balanced Boolean rules. We show that the empirical results on asymptotic behaviour are correct. In fuzzy form, the balanced rules can be categorized as one of three types: weighted average rules, self-averaging rules, and local majority rules. Each type is analyzed in a variety of ways using a range of tools to explain their behaviours.
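A minimal sketch of the fuzzification just described, using the common convention in the fuzzy cellular automata literature of taking the canonical (minterm) disjunctive normal form of a Boolean rule and replacing AND by a product, NOT by (1 - x), and OR over the disjoint minterms by a sum; the choice of rule 90 and the initial configuration are illustrative assumptions.

```python
# Sketch: fuzzify an elementary CA rule via its canonical DNF and iterate it.
# Cell states lie in [0, 1]; the Boolean CA is recovered when all states are 0 or 1.
import numpy as np

def fuzzy_local_rule(rule_number):
    """Return f(l, c, r) obtained from the rule's truth-table DNF with
    AND -> product, NOT -> 1 - x, OR over disjoint minterms -> sum."""
    bits = [(rule_number >> k) & 1 for k in range(8)]    # output for neighbourhood k = 4l + 2c + r
    def f(l, c, r):
        total = np.zeros_like(l, dtype=float)
        for k, out in enumerate(bits):
            if out:                                       # include only minterms mapped to 1
                bl, bc, br = (k >> 2) & 1, (k >> 1) & 1, k & 1
                total += (l if bl else 1 - l) * (c if bc else 1 - c) * (r if br else 1 - r)
        return total
    return f

rule = fuzzy_local_rule(90)                  # fuzzy rule 90: f(l, c, r) = l(1 - r) + r(1 - l)
state = np.full(101, 0.0)
state[50] = 0.75                             # a single fuzzy seed
for _ in range(50):
    state = rule(np.roll(state, 1), state, np.roll(state, -1))
print(state.min(), state.max())              # values remain in [0, 1]
```

Because the canonical DNF uses disjoint minterms, the sum of the included terms never exceeds one, so the fuzzy rule maps [0, 1] configurations to [0, 1] configurations.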
|
324 |
Computing Most Probable Sequences of State Transitions in Continuous-time Markov Systems. Levin, Pavel. 22 June 2012.
Continuous-time Markov chains (CTMCs) form a convenient mathematical framework for analyzing random systems across many different disciplines. A specific research problem that is often of interest is to predict maximum-probability sequences of state transitions given initial or boundary conditions. This work shows how to solve this problem exactly through an efficient dynamic programming algorithm. We demonstrate our approach through two different applications: ranking mutational pathways of HIV based on their probabilities, and determining the most probable failure sequences in complex fault-tolerant engineering systems. Even though CTMCs have been used extensively to realistically model many types of complex processes, it is often standard practice to simplify the model in order to perform the state evolution analysis. As we show here, such simplifying approaches can lead to inaccurate and often misleading solutions. We therefore expect our algorithm to find a wide range of applications across different domains.
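For illustration only, a simplified sketch of one way to rank fixed-length transition sequences by the probability of the embedded jump chain (a jump from i to j having probability q_ij / -q_ii); the thesis' exact algorithm, including its treatment of timing and boundary conditions, is not reproduced here.

```python
# Sketch: Viterbi-style dynamic programming over the embedded jump chain of a CTMC.
# Finds the most probable sequence of k state transitions starting from `start`.
# The generator matrix Q is a hypothetical example; its rows sum to zero.
import numpy as np

Q = np.array([[-0.9,  0.6,  0.3],
              [ 0.2, -0.5,  0.3],
              [ 0.1,  0.4, -0.5]])

def most_probable_path(Q, start, k):
    n = Q.shape[0]
    jump = Q.copy()
    np.fill_diagonal(jump, 0.0)
    jump = jump / (-np.diag(Q))[:, None]           # P(i -> j) = q_ij / (-q_ii)
    best = np.full(n, -np.inf)                     # log-probability of the best path ending in each state
    best[start] = 0.0
    back = []
    for _ in range(k):
        with np.errstate(divide="ignore"):
            scores = best[:, None] + np.log(jump)  # scores[i, j]: extend the best path at i by a jump i -> j
        back.append(scores.argmax(axis=0))
        best = scores.max(axis=0)
    end = int(best.argmax())
    path = [end]
    for ptr in reversed(back):                     # trace the argmax pointers backwards
        path.append(int(ptr[path[-1]]))
    return path[::-1], float(np.exp(best[end]))

path, prob = most_probable_path(Q, start=0, k=3)
print(path, prob)
```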
|
325 |
Cuff-free blood pressure estimation using signal processing techniques. Zhang, Qiao. 13 September 2010.
Since blood pressure is an important indicator of a person's physical condition and of cardiovascular disease, its measurement and estimation have received increasing attention. Continuous, cuff-less and non-invasive blood pressure estimation is needed for daily health monitoring. In recent years, studies have focused on estimating blood pressure from other physiological parameters. It is widely accepted that the pulse transit time (PTT) is related to arterial stiffness and can be used to estimate blood pressure.
A promising signal processing technique, the Hilbert-Huang Transform (HHT), is introduced to analyze both ECG and PPG data, from which the PTT is calculated. The relationship between blood pressure and PTT is illustrated, and the problems of calibration and re-calibration are also discussed. The proposed algorithm is tested on continuous data from the MIMIC database. To verify the algorithm, the HHT is compared with another commonly used processing technique, the wavelet transform. The accuracy is calculated to validate the method. Furthermore, we collect data using our own developed system and test our algorithm on it.
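A minimal sketch of the PTT-to-blood-pressure idea, using synthetic signals, simple peak detection and a linear calibration SBP ≈ a·PTT + b; the thesis relies on the Hilbert-Huang Transform rather than this naive peak picking, so the signal shapes, thresholds and calibration pairs below are assumptions.

```python
# Sketch: estimate pulse transit time (PTT) as the delay between an ECG R-peak
# and the following PPG peak, then map PTT to systolic pressure with a linear
# calibration fitted from hypothetical cuff readings.
import numpy as np
from scipy.signal import find_peaks

fs = 125.0                                   # sampling rate in Hz (the MIMIC waveform rate)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63      # crude synthetic ECG with sharp 1.2 Hz "R-peaks"
ppg = np.sin(2 * np.pi * 1.2 * (t - 0.25))   # synthetic PPG delayed by about 0.25 s

r_peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.5 * fs))
ppg_peaks, _ = find_peaks(ppg, distance=int(0.5 * fs))

ptt = []
for r in r_peaks:
    later = ppg_peaks[ppg_peaks > r]
    if later.size:
        ptt.append((later[0] - r) / fs)      # seconds from R-peak to the next PPG peak
ptt = np.array(ptt)

# Hypothetical calibration pairs (PTT in s, systolic pressure in mmHg) from a cuff.
cal_ptt = np.array([0.20, 0.24, 0.28])
cal_sbp = np.array([135.0, 122.0, 110.0])
a, b = np.polyfit(cal_ptt, cal_sbp, 1)       # SBP ~= a * PTT + b
print("estimated SBP per beat:", a * ptt + b)
```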
|
327 |
Modelling and forecasting stochastic volatility. Lopes Moreira de Veiga, Maria Helena. 19 April 2004.
The purpose of this thesis is to model and forecast the volatility of financial return series using both continuous- and discrete-time stochastic volatility models. In the first chapter I try to fit the main characteristics of financial return series, such as volatility persistence, volatility clustering and fat tails of the return distribution. The estimated logarithmic stochastic volatility models are direct extensions of the models of Gallant and Tauchen (2001), to which I add a feedback feature. This feature is of extreme importance because it allows the model to capture the low variability of the volatility factor when the factor is itself low (volatility clustering), and it also captures the increase in volatility persistence that occurs when there is an apparent change in the pattern of volatility at the very end of the sample. In this chapter, as in the whole thesis, I use the Efficient Method of Moments of Gallant and Tauchen (1996) as the estimation method. From the estimation step, two models emerge: the logarithmic model with one factor of volatility and feedback (L1F) and the logarithmic model with two factors of volatility (L2).
Since it is not possible to choose between them based on the diagnostics computed at the estimation step, I use the reprojection step to obtain more tools for comparing models. The L1F model is able to reproject volatility quite well without even missing the crash of 1987. In the second chapter I fit the continuous-time model with two factors of volatility of Gallant and Tauchen (2001) to the returns of a Microsoft share. The aim of this chapter is to evaluate the volatility forecasting performance of the continuous-time stochastic volatility model against that of the traditional GARCH and ARFIMA models. To this end, I estimate a continuous-time stochastic volatility model for the logarithm of the asset price using the Efficient Method of Moments (EMM) of Gallant and Tauchen (1996) and filter the underlying volatility using the reprojection technique of Gallant and Tauchen (1998). Under the assumption that the model is correctly specified, I obtain a consistent estimator of the integrated volatility by fitting a continuous-time stochastic volatility model to the data. The forecast evaluation for the three estimated models is done with the help of the R2 of individual regressions of realized volatility on the volatility forecasts obtained from the estimated models. The empirical results indicate the better performance of the continuous-time model in the out-of-sample periods compared to the traditional GARCH and ARFIMA models. Further, these last two models show difficulties in tracking the growth pattern of the realized volatility, probably due to the change in the pattern of volatility in the last part of the sample. Finally, in the third chapter I return to model specification and extend the long memory stochastic volatility model of Harvey (1993) by introducing a short-run volatility factor. This extra factor increases kurtosis and helps the model capture volatility persistence (which is captured by a fractionally integrated process, as in Harvey (1993)). Furthermore, under some restrictions on the parameters it is possible to fit the empirical fact of small first-order autocorrelation of squared returns. All these results are proved theoretically and the model is implemented empirically using the S&P 500 composite index returns. The empirical results show the superiority of the model in fitting the main empirical facts of financial return series.
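The forecast-evaluation step described above, the R2 of a regression of realized volatility on each model's forecasts, can be sketched as follows; the series are placeholders, not the Microsoft returns or the fitted SV, GARCH and ARFIMA forecasts from the thesis.

```python
# Sketch: Mincer-Zarnowitz style evaluation. For each model, regress realized
# volatility on a constant and that model's forecast, then compare the R^2.
import numpy as np

rng = np.random.default_rng(0)
realized = np.abs(rng.normal(1.0, 0.3, 250))                  # placeholder "realized volatility" series
forecasts = {
    "SV (continuous time)": realized + rng.normal(0, 0.10, 250),
    "GARCH": realized + rng.normal(0, 0.25, 250),
    "ARFIMA": realized + rng.normal(0, 0.20, 250),
}

def r_squared(y, x):
    X = np.column_stack([np.ones_like(x), x])                 # constant + forecast
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

for name, f in forecasts.items():
    print(f"{name}: R^2 = {r_squared(realized, f):.3f}")
```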
|
328 |
Design, Modeling and Analysis of a Continuous Process for Hydrogenation of Diene based Polymers using a Static Mixer Reactor. Madhuranthakam, Chandra Mouli R. January 2007.
Hydrogenated nitrile butadiene rubber (HNBR), known for its excellent elastomeric properties and for retaining its mechanical properties after long exposure to heat, oil and air, is produced by the catalytic hydrogenation of nitrile butadiene rubber (NBR). Hydrogenation of NBR is carried out preferably in solution via homogeneous catalysis. At present it is produced commercially in a semi-batch process in which gaseous hydrogen flows continuously into a batch of reactant polymer. Several catalysts have been exploited successfully for the hydrogenation of NBR in organic solvents, including palladium, rhodium, ruthenium, iridium and osmium complexes. Owing to the drawbacks of batch production (such as the time taken for charging and discharging the reactants and products, heating and cooling, and reactor clean-up), and the large demand for HNBR, a continuous process is proposed that offers potential time savings in addition to high product turnover.
Numerical investigation of HNBR production in a plug flow reactor and a continuous stirred tank reactor showed that a reactor with plug flow behavior would be economical and efficient. A static mixer (SM) reactor with open-curve blade internal geometry was designed based on the simulation and hydrodynamic results. The SM reactor was designed with 24 mixing elements, 3.81 cm ID and 90 cm length. The reactor has a jacket in which steam is used to heat the polymer solution. The hydrodynamics in the SM reactor (open-flat blade structure) with an air-water system showed that plug flow could be achieved even under laminar flow conditions (Reh < 20). For a constant mean residence time, the Peclet number scaled as roughly 4.7 times the number of mixing elements (ne) used in the SM reactor. Empirical correlations were developed for gas hold-up (εG) and the overall mass transfer coefficient (KLa). The mass transfer experiments showed that a KLa four to six times that of conventional reactors could be achieved in the SM reactor at particular operating conditions.
Important information on the Peclet number and liquid hold-up was obtained from the hydrodynamic experiments conducted with the actual working fluids (hydrogen and polymer solutions) in the SM reactor. The superficial gas velocity had an adverse effect on both the Peclet number and the liquid hold-up. The viscosity of the polymer solution had a marginal negative effect on the Peclet number and a positive effect on the liquid hold-up. Hydrogenation with the homogeneous catalyst OsHCl(CO)(O2)(PCy3)2 was carried out in the continuous process with the SM reactor. Complete hydrogenation of NBR was possible in a single pass. The effects of mean residence time, catalyst concentration and polymer concentration on the final degree of hydrogenation were studied. The minimum catalyst loading required to achieve a degree of hydrogenation above 97% was found empirically, and an empirical correlation was developed for the degree of hydrogenation as a function of operating conditions and parameters.
Hydrogenation in the SM reactor is modeled using a plug-flow model with axial dispersion, coupled through the concentrations of carbon-carbon double bonds, hydrogen and the osmium catalyst. The model involves coupled, non-linear partial differential equations with several dimensionless parameters. The proposed model was verified against the experimental results obtained from the hydrogenation and hydrodynamic experiments and satisfactorily predicted the degree of hydrogenation at various operating conditions. In general, the designed continuous process with the SM reactor performed well and is an effective method of manufacturing HNBR on a continuous basis. The designed system is amenable to industrial operating conditions and promises to be a highly efficient and economical process for the production of HNBR.
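To illustrate how the observed Peclet-number scaling (Pe ≈ 4.7 ne) enters a dispersion model, the sketch below evaluates the classical closed-closed axial dispersion solution for a first-order reaction (the Danckwerts/Levenspiel formula); first-order kinetics and the chosen rate constant and residence time are simplifying assumptions, not the thesis' coupled double-bond/hydrogen/catalyst model.

```python
# Sketch: exit conversion of a first-order reaction in a plug-flow reactor with
# axial dispersion (closed-closed Danckwerts boundary conditions), with the
# Peclet number taken as Pe = 4.7 * ne as reported for the SM reactor.
import numpy as np

def conversion_axial_dispersion(k, tau, Pe):
    """1 - C_out/C_in for first-order kinetics (Levenspiel's formula)."""
    a = np.sqrt(1.0 + 4.0 * k * tau / Pe)
    num = 4.0 * a * np.exp(Pe / 2.0)
    den = (1.0 + a) ** 2 * np.exp(a * Pe / 2.0) - (1.0 - a) ** 2 * np.exp(-a * Pe / 2.0)
    return 1.0 - num / den

k, tau = 0.05, 60.0                      # hypothetical rate constant (1/s) and mean residence time (s)
for ne in (6, 12, 24):
    Pe = 4.7 * ne                        # Peclet-number scaling observed for the SM reactor
    print(f"ne = {ne:2d}, Pe = {Pe:5.1f}, conversion = {conversion_axial_dispersion(k, tau, Pe):.4f}")
```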
|
329 |
A survey of the trust region subproblem within a semidefinite framework. Fortin, Charles. January 2000.
Trust region subproblems arise within a class of unconstrained methods called trust region methods. The subproblem consists of minimizing a quadratic function subject to a norm constraint. This thesis is a survey of different methods developed to find an approximate solution to the subproblem. We study the well-known method of Moré and Sorensen and two recent methods for large sparse subproblems: the so-called Lanczos method of Gould et al. and the Rendl and Wolkowicz algorithm. The common ground for exploring these methods is semidefinite programming. This approach has been used by Rendl and Wolkowicz to explain their method and the Moré and Sorensen algorithm; we extend this work to the Lanczos method. The last chapter of this thesis is dedicated to some improvements made to the Rendl and Wolkowicz algorithm and to comparisons between the Lanczos method and the Rendl and Wolkowicz algorithm. In particular, we point out some weaknesses of the Lanczos method and show that the Rendl and Wolkowicz algorithm is more robust.
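To fix ideas, the subproblem is to minimize (1/2) p'Hp + g'p subject to ||p|| <= Delta. The sketch below solves small dense instances by eigendecomposition and a one-dimensional search for the Lagrange multiplier, the same optimality conditions that the Moré and Sorensen method exploits with Cholesky factorizations; it ignores the degenerate "hard case" and is not the Rendl and Wolkowicz or Lanczos algorithm surveyed in the thesis.

```python
# Sketch: solve min 0.5 p'Hp + g'p s.t. ||p|| <= delta for a small dense H
# via eigendecomposition and bisection on the multiplier lam in
# (H + lam*I) p = -g, lam >= 0, lam*(delta - ||p||) = 0 (hard case ignored).
import numpy as np

def trust_region_subproblem(H, g, delta, tol=1e-10):
    w, V = np.linalg.eigh(H)                 # H = V diag(w) V'
    gt = V.T @ g
    def step_norm(lam):                      # ||p(lam)|| with p(lam) = -(H + lam*I)^{-1} g
        return np.linalg.norm(gt / (w + lam))
    lam_floor = max(0.0, -w.min()) + 1e-12   # smallest lam making H + lam*I positive definite
    if w.min() > 0 and step_norm(0.0) <= delta:
        return V @ (-gt / w)                 # the unconstrained Newton step is interior
    lo, hi = lam_floor, lam_floor + 1.0      # bracket the root of ||p(lam)|| = delta
    while step_norm(hi) > delta:
        hi *= 2.0
    while hi - lo > tol * (1.0 + hi):        # ||p(lam)|| decreases in lam, so bisect
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if step_norm(mid) > delta else (lo, mid)
    lam = 0.5 * (lo + hi)
    return V @ (-gt / (w + lam))

H = np.array([[2.0, 0.5], [0.5, -1.0]])      # indefinite example
g = np.array([1.0, 1.0])
p = trust_region_subproblem(H, g, delta=0.8)
print(p, np.linalg.norm(p))                  # boundary solution, ||p|| close to 0.8
```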
|