21

Development Of A Knowledge-Based Hybrid Methodology For Vehicle Side Impact Safety Design

Srinivas, CH Kalyan 11 1900 (has links) (PDF)
The present research work has been carried out to develop a unified knowledge-based hybrid methodology combining regression-based, lumped parameter and finite element analyses that can be implemented in the initial phase of vehicle design, resulting in superior side crash performance. As a first step, a regression-based model (RBM) is developed between the injury parameter Thoracic Trauma Index (TTI) of the rear SID (Side Impact Dummy) and characteristic side impact dynamic response variables such as rear door velocity (final) and intrusion, supplementing an existing RBM for front TTI prediction. In order to derive the rear TTI RBM, existing public domain vehicle crash test data provided by NHTSA has been used. A computer-based tool with a Graphical User Interface (GUI) has been developed for obtaining possible solution sets of response variables satisfying the regression relations for both front and rear TTI. As a next step in the formulation of the present hybrid methodology for vehicle side impact safety design, a new Lumped Parameter Model (LPM) representing the NHTSA side impact is developed. The LPM developed consists of body sub-systems like the B-pillar, front door, rear door and rocker (i.e. sill) on the struck side of the vehicle, the Moving Deformable Barrier (MDB), and the “rest of the vehicle” as lumped masses along with representative nonlinear springs between them. It has been envisaged that for the initial conceptual design to progress, the targets of dynamic response variables obtained from the RBM should yield a set of spring characteristics broadly defining the required vehicle side structure. However, this is an inverse problem of dynamics which would require an inordinate amount of time to be solved iteratively. Hence a knowledge-based approach is adopted here to link the two sets of variables, i.e. the dynamic response parameters (such as average door and B-pillar velocities, door intrusion, etc.) and the stiffness and strength characteristics of the springs present in the LPM. In effect, this mapping is accomplished with the help of an artificial neural network (ANN) algorithm (referred to as ANN_RBM_LPM in the current work). To generate the required knowledge database for ANN_RBM_LPM, one thousand cases of the LPM, chosen with the help of the Latin Hypercube technique, are run with varying spring characteristics. The goal of finding the desired design solutions describing vehicle geometry in an efficient manner is accomplished with the help of a second ANN algorithm which links sets of dynamic spring characteristics with sets of sectional properties of the doors, B-pillar and rocker (referred to as ANN_LPM_FEM in the current work). The implementation of this approach requires the creation of a knowledge database containing paired sets of spring characteristics and the sectional details just mentioned. The effectiveness of the hybrid methodology comprising both ANN_RBM_LPM and ANN_LPM_FEM is finally illustrated by improving the side impact performance of a Honda Accord finite element model. Thus, the unique knowledge-based hybrid approach developed here can be deployed in real-world vehicle safety design for both new and existing vehicles, leading to enormous savings in time and costly design iterations.
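The knowledge-database step described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration rather than the thesis code: it assumes six lumped spring parameters with arbitrary bounds, uses a placeholder `run_lpm` function in place of the actual side-impact LPM, and trains an ANN that maps dynamic responses back to spring characteristics, mirroring the ANN_RBM_LPM idea.

```python
# Minimal sketch of the ANN_RBM_LPM knowledge-database step described above.
# The real lumped parameter model, its spring parameterisation, and the response
# set used in the thesis are not reproduced here; run_lpm is a hypothetical
# placeholder and the dimensions/bounds are illustrative only.
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor

N_SPRINGS = 6          # assumed number of lumped spring parameters
N_SAMPLES = 1000       # the abstract mentions one thousand LPM cases

def run_lpm(spring_params: np.ndarray) -> np.ndarray:
    """Placeholder for the side-impact LPM: returns dynamic response variables
    (e.g. average door velocity, B-pillar velocity, intrusion) for one set of
    spring characteristics. A real implementation would integrate the lumped
    mass-spring equations of motion."""
    w = np.array([0.8, 1.1, 0.6, 0.9, 1.3, 0.7])
    return np.array([spring_params @ w,                    # stand-in "door velocity"
                     np.sqrt(spring_params).sum(),         # stand-in "B-pillar velocity"
                     1.0 / (1.0 + spring_params.mean())])  # stand-in "intrusion"

# 1. Latin Hypercube sample of spring characteristics within assumed bounds.
sampler = qmc.LatinHypercube(d=N_SPRINGS, seed=0)
unit = sampler.random(n=N_SAMPLES)
springs = qmc.scale(unit, l_bounds=[10.0] * N_SPRINGS, u_bounds=[200.0] * N_SPRINGS)

# 2. Run the LPM for each sample to build the knowledge database.
responses = np.array([run_lpm(s) for s in springs])

# 3. Train the inverse map: dynamic responses -> spring characteristics,
#    so that RBM-derived response targets can be converted into a side structure.
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
ann.fit(responses, springs)

target_response = responses[0]                 # e.g. targets obtained from the RBM
suggested_springs = ann.predict([target_response])
```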
22

Force-Amplifying Compliant Mechanisms For Micromachined Resonant Accelerometers

Madhavan, Shyamsananth 01 1900 (has links) (PDF)
This thesis work provides an insight into the design of Force-amplifying Compliant Mechanisms (FaCMs) that are integrated with micromachined resonant accelerometers to increase their sensitivity. An FaCM, by mechanically amplifying the inertial force, enhances the shift in the resonance frequency of the beams used for sensing the acceleration, which manifests itself as an axial force on the beams. An extensive study of different resonator configurations, namely the single-beam resonator, the single-ended tuning fork (SETF), and the double-ended tuning fork (DETF), is carried out to gain insight into their resonant behavior. The influence of the boundary conditions on the sensor’s sensitivity emerged from the study. We found that not only the force-amplification factor but also the multi-axial stiffness of the FaCM and proof-mass influence the resonance frequency of the resonator as well as the bandwidth of the modified sensor for certain configurations, but not all. Thus, four lumped parameters were identified to quantify the effectiveness of an FaCM. These parameters determine the boundary condition of the sensing beams and also the forces and the moment transmitted to them. Also presented in this work is a computationally efficient model, called the Lumped Parameter Model (LPM), for evaluation of the sensitivity. An analytical expression for the frequency shift of the sensing resonator beams is obtained by considering the FaCM stiffness parameters as well as the lumped stiffness of the suspension of the inertial mass. Various FaCMs are evaluated and compared to understand how the four lumped parameters influence the sensor’s sensitivity. The FaCMs are synthesized using topology optimization to maximize the net amplification factor under a volume constraint. One of the FaCMs outperforms the lever by a factor of six. Microfabrication of a resonant accelerometer coupled with an FaCM and a comb-drive actuator is carried out using a silicon-on-insulator process. Finally, the selection map technique, a compliant-mechanism redesign methodology, is used to enhance the amplification of FaCMs. This technique provides scope for further design improvement in FaCMs for given sensor specifications.
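As a rough aid to the sensing principle summarized above, the relations below (not taken from the thesis, which derives its own lumped-parameter expressions) show how an axial force amplified by an FaCM shifts a resonant beam's frequency. Here γ is a boundary-condition-dependent constant (approximately 0.295 for the fundamental mode of a clamped-clamped beam), n is the assumed FaCM force-amplification factor, m the proof mass, and a the applied acceleration.

```latex
% Hedged sketch of the sensing principle: a resonant beam of length L, flexural
% rigidity EI and unloaded fundamental frequency f_0 shifts its resonance when an
% axial force F is applied; the exact expressions used in the thesis may differ.
\[
  f \;\approx\; f_0 \sqrt{1 + \gamma \,\frac{F L^{2}}{E I}},
  \qquad F = n\, m\, a ,
\]
\[
  \frac{\Delta f}{f_0} \;\approx\; \frac{\gamma\, n\, m\, a\, L^{2}}{2\, E I}
  \quad \text{for small } F .
\]
```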
23

Parameter Recovery for the Four-Parameter Unidimensional Binary IRT Model: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Approaches

Do, Hoan 26 May 2021 (has links)
No description available.
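No abstract is provided for this record. For context, the model named in the title is the four-parameter logistic (4PL) IRT model for binary responses; the form below is the commonly cited parameterization and is shown only as background, not as a statement of this thesis's notation or results.

```latex
% Four-parameter logistic (4PL) model: probability of a correct (binary) response
% to item j by an examinee with latent ability theta. a_j: discrimination,
% b_j: difficulty, c_j: lower asymptote (guessing), d_j: upper asymptote
% (slipping). The thesis compares MML and MCMC estimation of these parameters.
\[
  P\!\left(X_{j}=1 \mid \theta\right)
  \;=\; c_{j} + \left(d_{j}-c_{j}\right)
  \frac{1}{1+\exp\!\left[-a_{j}\!\left(\theta-b_{j}\right)\right]} .
\]
```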
24

A Coupled CFD-Lumped Parameter Model of the Human Circulation: Elucidating the Hemodynamics of the Hybrid Norwood Palliative Treatment and Effects of the Reverse Blalock-Taussig Shunt Placement and Diameter

Ceballos, Andres 01 January 2015 (has links)
The Hybrid Norwood (HN) is a relatively new first-stage procedure for neonates with Hypoplastic Left Heart Syndrome (HLHS), in which a sustainable univentricular circulation is established in a less invasive manner than with the standard procedure. A computational multiscale model of such HLHS circulation following the HN procedure was used to obtain detailed hemodynamics. Implementation of a reverse-BT shunt (RBTS), a synthetic bypass from the main pulmonary artery to the innominate artery placed to counteract aortic arch stenosis, and its effects on local and global hemodynamics were studied. A synthetic and a 3D-reconstructed, patient-derived anatomy after the HN procedure were utilized, with varying degrees of distal arch obstruction, or stenosis (nominal and 90% reduction in lumen), and varying RBTS diameters (3.0, 3.5, 4.0 mm). A closed lumped parameter model (LPM) for the peripheral or distal circulation coupled to a 3D Computational Fluid Dynamics (CFD) model that allows detailed description of the local hemodynamics was created for each anatomy. The implementation of the RBTS in any of the chosen diameters under severe stenosis resulted in a restoration of arterial perfusion to near-nominal levels. Shunt flow velocity, vorticity, and overall wall shear stress levels are inverse functions of shunt diameter, while shunt perfusion and systemic oxygen delivery correlate positively with diameter. No correlation of shunt diameter with helicity was recorded. In the setting of the hybrid Norwood circulation, our results suggest: (1) the 4.0 mm RBTS may be more thrombogenic when implemented in the absence of severe arch stenosis, and (2) the 3.0 mm and 3.5 mm RBTS may be more suitable alternatives, with preference to the latter since it provides similar hemodynamics at lower levels of wall shear stress.
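To make the lumped parameter side of such a multiscale model concrete, here is a minimal, illustrative sketch of a single three-element Windkessel compartment of the kind typically coupled to a CFD outlet. The closed-loop LPM used in the study is far more elaborate, and all element values, the `inflow` waveform, and the units below are assumptions, not values from the thesis.

```python
# Minimal sketch of one lumped parameter (Windkessel-type) compartment of the kind
# coupled to CFD outlets in a multiscale circulation model. Element values and the
# flow waveform below are illustrative only.
import numpy as np

R_prox, R_dist, C = 0.05, 1.0, 1.2     # proximal/distal resistances, compliance (assumed units)
dt, n_steps = 1e-3, 1000               # 1 s of simulation at 1 ms steps

def inflow(t):
    """Toy pulsatile inflow from the 3D CFD domain (would come from the solver)."""
    return max(0.0, 80.0 * np.sin(2 * np.pi * t))   # mL/s, half-sine "systole"

P_c = 60.0                              # pressure across the compliance (assumed mmHg)
pressures = []
for i in range(n_steps):
    t = i * dt
    Q = inflow(t)
    # Three-element Windkessel: C dP_c/dt = Q_in - P_c / R_dist
    dP_c = (Q - P_c / R_dist) / C
    P_c += dP_c * dt
    P_outlet = P_c + Q * R_prox         # pressure handed back to the CFD outlet
    pressures.append(P_outlet)

print(f"outlet pressure range: {min(pressures):.1f} - {max(pressures):.1f} (assumed mmHg)")
```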
25

Factor models, VARMA processes and parameter instability with applications in macroeconomics

Stevanovic, Dalibor 05 1900 (has links)
As information technology improves, the availability of economic and financial time series grows in terms of both time and cross-section sizes. However, a large amount of information can lead to the curse of dimensionality when standard time series tools are used. Since most of these series are highly correlated, at least within some categories, their co-variability pattern and informational content can be approximated by a smaller number of (constructed) variables. A popular way to address this issue is factor analysis. This framework has received a lot of attention since the late 1990s and is known today as large-dimensional approximate factor analysis. Given the availability of data and computational improvements, a number of empirical and theoretical questions arise. What are the effects and transmission of structural shocks in a data-rich environment? Does the information from a large number of economic indicators help in properly identifying monetary policy shocks, with respect to a number of empirical puzzles found using traditional small-scale models? Motivated by the recent financial turmoil, can we identify financial market shocks and measure their effect on the real economy? Can we improve the existing method and incorporate another dimension-reduction approach such as VARMA modeling? Does it help in forecasting macroeconomic aggregates and in impulse response analysis? Finally, can we apply the same factor analysis reasoning to time-varying parameters? Is there only a small number of common sources of time instability in the coefficients of empirical macroeconomic models? This thesis concentrates on structural factor analysis and VARMA modeling and answers these questions through five articles. The first two articles study the effects of monetary policy and credit shocks in a data-rich environment. The third article proposes a new framework that combines factor analysis and VARMA modeling, while the fourth article applies this method to measure the effects of credit shocks in Canada. The contribution of the final chapter is to impose the factor structure on the time-varying parameters in popular macroeconomic models, and to show that there are few sources of this time instability.

The first article analyzes the monetary transmission mechanism in Canada using a factor-augmented vector autoregression (FAVAR) model. For small open economies like Canada, uncovering the transmission mechanism of monetary policy using VARs has proven to be an especially challenging task. Such studies on Canadian data have often documented the presence of anomalies such as the price, exchange rate, delayed overshooting and uncovered interest rate parity puzzles. We estimate a FAVAR model using large sets of monthly and quarterly macroeconomic time series. We find that the information summarized by the factors is important to properly identify the monetary transmission mechanism and contributes to mitigating the puzzles mentioned above, suggesting that more information does help. Finally, the FAVAR framework allows us to check impulse responses for all series in the informational data set, and thus provides the most comprehensive picture to date of the effect of Canadian monetary policy. As the recent financial crisis and the ensuing global economic downturn have illustrated, the financial sector plays an important role in generating and propagating shocks to the real economy. Financial variables thus contain information that can predict future economic conditions. In the second article we examine the dynamic effects and the propagation of credit shocks using a large data set of U.S. economic and financial indicators in a structural factor model. Identified credit shocks, interpreted as unexpected deteriorations of credit market conditions, immediately increase credit spreads, decrease rates on Treasury securities and cause large and persistent downturns in the activity of many economic sectors. Such shocks are found to have important effects on real activity measures, aggregate prices, leading indicators and credit spreads. In contrast to other recent papers, our structural shock identification procedure does not require any timing restrictions between the financial and macroeconomic factors, and yields an interpretation of the estimated factors without relying on a constructed measure of credit market conditions from a large set of individual bond prices and financial series.

In the third article, we study the relationship between VARMA and factor representations of a vector stochastic process, and propose a new class of factor-augmented VARMA (FAVARMA) models. We start by observing that in general multivariate series and associated factors do not both follow a finite-order VAR process. Indeed, we show that when the factors are obtained as linear combinations of observable series, their dynamic process is generally a VARMA and not a finite-order VAR, as usually assumed in the literature. Second, we show that even if the factors follow a finite-order VAR process, this implies a VARMA representation for the observable series. As a result, we propose the FAVARMA framework, which combines two parsimonious methods to represent the dynamic interactions between a large number of time series: factor analysis and VARMA modeling. We apply our approach in two pseudo-out-of-sample forecasting exercises using large U.S. and Canadian monthly panels taken from Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that VARMA factors help in predicting several key macroeconomic aggregates relative to standard factor forecasting models. Finally, we estimate the effect of monetary policy using the data and the identification scheme of Bernanke, Boivin and Eliasz (2005). We find that impulse responses from a parsimonious 6-factor FAVARMA(2,1) model give an accurate and comprehensive picture of the effect and the transmission of monetary policy in the U.S. To get similar responses from a standard FAVAR model, the Akaike information criterion selects a lag order of 14. Hence, only 84 coefficients governing the factor dynamics need to be estimated in the FAVARMA framework, compared with 510 VAR parameters in the FAVAR model.

In the fourth article we are interested in identifying and measuring the effects of credit shocks in Canada in a data-rich environment. In order to incorporate information from a large number of economic and financial indicators, we use the structural factor-augmented VARMA model. In the theoretical framework of the financial accelerator developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent economic slowdown in Canada; the Canadian external finance premium rises immediately while interest rates and credit measures decline. The common component seems to capture important dimensions of the Canadian business cycle fluctuations. From the variance decomposition analysis, we observe that the credit shock has an important effect on several real activity measures, price indicators, leading indicators, and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium shows no significant effect in Canada. Indeed, our results suggest that the effects of credit shocks in Canada are essentially caused by unexpected changes in foreign credit market conditions. Finally, given the identification procedure, we find that our structural factors do have an economic interpretation.

The behavior of economic agents and of the economic environment may vary over time (monetary policy strategy shifts, stochastic volatility), implying parameter instability in reduced-form models. Standard time-varying parameter (TVP) models usually assume independent stochastic processes for all TVPs. In the final article, I show that the number of underlying sources of parameters' time variation is likely to be small, and provide empirical evidence on the factor structure among TVPs of popular macroeconomic models. To test for the presence of, and estimate, low-dimensional sources of time variation in parameters, I apply the factor time-varying parameter (Factor-TVP) model, proposed in Stevanovic (2010), to a standard monetary TVP-VAR model. I find that one factor explains most of the variability in VAR coefficients, while the stochastic volatility parameters vary in an idiosyncratic way. The common factor is highly and positively correlated with the unemployment rate. To incorporate the recent financial crisis, the same exercise is conducted with data updated to 2010Q3. The VAR parameters present an important change after 2007, and the procedure suggests two factors. When applied to a large-dimensional structural factor model, I find that four dynamic factors govern the time instability in almost 700 coefficients.
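As background to the FAVAR/FAVARMA machinery discussed above, the sketch below illustrates the generic two-step logic of factor-augmented VAR analysis: principal-component factors are extracted from a large standardized panel and a small VAR is fit on the factors plus an observed policy variable. It is a hedged illustration only; the thesis uses structural identification schemes and VARMA factor dynamics, and the data and dimensions below are placeholders.

```python
# Minimal FAVAR-style sketch under stated assumptions: extract a few principal
# component factors from a large standardized panel X (T x N), append an observed
# policy variable, and fit a small VAR. Impulse responses follow from the fit.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T, N, K = 200, 120, 3                  # sample size, panel width, number of factors (assumed)
X = rng.standard_normal((T, N))        # placeholder for the macro/financial panel
policy_rate = rng.standard_normal(T)   # placeholder for the observed policy instrument

# Standardize the panel and extract K principal-component factors.
Xs = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
factors = Xs @ Vt[:K].T                # T x K factor estimates

# Fit a VAR on [factors, policy rate].
data = np.column_stack([factors, policy_rate])
var_fit = VAR(data).fit(maxlags=2)
irf = var_fit.irf(periods=12)          # responses of factors and policy rate to shocks
```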
27

Thermal analysis of an induction motor in steady state using lumped parameters

Λυγκώνης, Ηλίας 19 October 2012 (has links)
Thermal analysis is an important design area and is becoming a more important part of the electric motor design process due to the push for reduced material volume and manufacturing cost and increased efficiency. It is of equal importance to the electromagnetic design of the machine, because the temperature rise of the machine ultimately determines its rated output power as well as the lifetime of the insulation. The purpose of this study is to obtain the temperature distribution in the internal parts of a three-phase induction motor at steady state using an equivalent thermal circuit with lumped parameters. The first chapter introduces basic thermodynamic concepts: the relevant coefficients, the laws of thermodynamics, and a brief overview of the heat transfer mechanisms. The second chapter gives an analytical description of the heat transfer mechanisms and presents a simple network model built from equivalent thermal resistances. The third chapter briefly presents the structure, operating principle and types of induction machines, and describes the various forms of energy loss that occur during the operation of a three-phase induction machine; the motor under study is presented, together with the thermocouples used in the experimental procedure. The fourth chapter describes the method of thermal analysis using an equivalent thermal resistance circuit for the steady state; the proposed circuit is given and its equivalent thermal resistances are presented in detail. Finally, the fifth chapter presents the results of the thermal analysis, compares them with the experimental temperature values acquired from the thermocouples, and carries out a parametric study of the various coefficients that were used or calculated during the analysis.
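A minimal sketch of the steady-state lumped parameter thermal network idea described above follows. It assumes a three-node network (winding, stator iron, frame) with illustrative conductances and loss values, not those of the motor studied in the thesis; the steady-state temperature rises follow from solving the linear system G·ΔT = P.

```python
# Minimal sketch of a steady-state lumped parameter thermal network: nodes for
# stator winding, stator iron and frame connected by thermal conductances, with
# losses injected at the nodes and the frame tied to ambient. All values are
# illustrative assumptions.
import numpy as np

# Thermal conductances in W/K (assumed): winding-iron, iron-frame, frame-ambient.
G_wi, G_if, G_fa = 8.0, 15.0, 12.0
# Losses injected at each node in W (assumed): copper, iron, friction/windage.
P = np.array([300.0, 120.0, 40.0])

# Node order: [winding, iron, frame]; temperatures are rises above ambient,
# so the frame-ambient conductance appears only on the frame diagonal.
G = np.array([
    [ G_wi,        -G_wi,         0.0  ],
    [-G_wi,   G_wi + G_if,      -G_if  ],
    [  0.0,        -G_if,  G_if + G_fa ],
])

dT = np.linalg.solve(G, P)           # steady state: G * dT = P
for name, rise in zip(["winding", "iron", "frame"], dT):
    print(f"{name}: {rise:.1f} K above ambient")
```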
28

Methodologies for Assessment of Impact Dynamic Responses

Ranadive, Gauri Satishchandra January 2014 (has links) (PDF)
Evaluation of the performance of a product and its components under impact loading is one of the key considerations in design. In order to assess resistance to damage or ability to absorb energy through plastic deformation of a structural component, impact testing is often carried out to obtain the force-displacement response of the deformed component. In this context, it may be noted that load cells and accelerometers are commonly used as sensors for capturing impact responses. A drop-weight impact testing set-up consisting of a moving impactor head, with a lightweight piezoresistive accelerometer and a strain gage-based compression load cell mounted on it, is used to carry out the impact tests. The basic objective of the present study is to assess the accuracy of responses recorded by the said transducers when these are mounted on a moving impactor head. In the present work, a novel approach of theoretically evaluating the responses obtained from this drop-weight impact testing set-up for different axially loaded specimens has been executed with the formulation of an equivalent lumped parameter model (LPM) of the test set-up. For the most common configuration, in which the load cell is mounted on the moving impactor head and the dynamic load is transferred from the impactor head to the load cell, a quantitative assessment is made of the possible discrepancy that can result in the load cell response. Initially, a 3-DOF (degrees-of-freedom) LPM is considered to represent a given impact testing set-up, with the test specimen represented by a nonlinear spring. Both the load cell and the accelerometer are represented with linear springs, while the impactor head (hammer) and the main body of the impacting unit, with the load cell in between, are modelled as rigid masses. An experimentally obtained force-displacement response is assumed to represent the nearly true behaviour of a specimen. By specifying an impact velocity to the rigid masses as an initial condition, numerical solution of the governing differential equations is obtained using implicit (Newmark-beta) and explicit (central difference) time integration techniques. It can be seen that the model accurately reproduces the input load-displacement behaviour of the nonlinear spring corresponding to the tested component, ensuring the accuracy of these numerical methods. The nonlinear spring representing the test specimen is approximated in a piecewise linear manner, and the solution strategy adopted and implemented in the form of a MATLAB script is shown to yield excellent reproduction of the assumed load-displacement behaviour of the test specimen. This prediction also establishes the accuracy of the numerical approach employed in solving the LPM system. However, the spring representing the load cell yields a response that qualitatively matches the assumed input load-displacement response of the test specimen with a lower magnitude of peak load. The accelerometer, it appears, may be capable of predicting more closely the load experienced by a specimen, provided an appropriate mass of the impactor system, i.e. the impacting unit, is chosen as the multiplier for the acceleration response. Error between input and computed (simulated) responses is quantified in terms of root mean square error (RMSE). The present study additionally throws light on the dependence of the numerical results on the time step of integration. For obtaining consistent results, estimation of the critical time step (increment) is crucial in the conditionally stable central difference method.
The effect of the parameters of the impact testing set-up on the accuracy of the predicted responses has been studied for different combinations of main impactor mass and load cell stiffness. It has been found that the load cell response is oscillatory in nature, which points to the need for suitable filtering for obtaining the necessary smooth variation of axial impact load with respect to time as well as deformation. The accelerometer response also shows undulations, which can similarly be observed in the experimental results. An appropriate standard SAE J211 filter, which is a low-pass Butterworth filter, has been used to remove oscillations from the computed responses. A load cell is quite capable of predicting the nature of the transient response of an impacted specimen when it is part of the impacting unit, but it may substantially under-predict the magnitudes of peak loads. All the above-mentioned analyses for the 3-DOF model have been performed for thin-walled tubular specimens made of mild steel (hat-section), an aluminium alloy (square cross-section) and a glass fibre-reinforced composite (circular cross-section), thus confirming the generality of the inferences drawn on the computed responses. Further, results obtained using explicit and implicit methodologies are compared for the three specimens to find the effect, if any, of the numerical solution procedure on the conclusions drawn. The present study has been further used for investigating the effects of input parameters (i.e. stiffness and mass of the system components, and impact velocity) on the computed results of the transducers. Such an investigation can be beneficial in designing an impact testing set-up as well as transducers for recording impact responses. Next, the previous 3-DOF model representing the impact testing set-up has been extended to a 5-DOF model to show that additional refinement of the original 3-DOF model does not substantially alter the inferences drawn based on it. In the end, oscillations observed in computed load cell responses are analysed by computing natural frequencies for the 3-DOF lumped parameter model. To conclude the present study, a 2-DOF LPM of the given impact testing set-up with no load cell has been investigated, and the frequency of oscillations in the accelerometer response is seen to increase corresponding to the mounting resonance frequency of the accelerometer. In order to explore the merits of alternative impact testing set-ups, LPMs have been formulated to idealize test configurations in which the load cell is arranged to come into direct contact with the specimen under impact, although the accelerometer is still mounted on the moving impactor head. One such arrangement is to have the load cell mounted stationary on the base under the specimen, and another is to mount the load cell on the moving impactor head such that the load cell directly impacts the specimen. It is once again observed that both these models accurately reproduce the input load-displacement behaviour of the nonlinear spring corresponding to the tested component, confirming the validity of the models. In contrast to the previous set-up, in which the moving load cell did not come into contact with the specimen, the spring representing the load cell in these cases yields a response that more closely matches the assumed input load-displacement response of the test specimen, suggesting that the load cell coming into direct contact with the specimen can result in a more reliable measurement of the actual dynamic response.
However, in practice, direct contact of the load cell with the specimen under impact loading is likely to damage the transducer, and hence the load cell needs to be mounted on the moving head, resulting in a loss of accuracy that can be theoretically estimated and corrected by the methodology investigated in this work.
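The explicit time-integration step at the heart of the LPM simulations described above can be sketched as follows. This is a hedged single-DOF illustration, not the 3- or 5-DOF MATLAB models of the thesis: an impactor mass with an initial impact velocity compresses a piecewise-linear specimen spring, and the motion is advanced with the central difference scheme (including its standard starting procedure). All masses, stiffnesses and the crush curve are assumed values.

```python
# Minimal sketch of the explicit (central difference) solution of a lumped
# parameter impact model: a rigid impactor mass striking a specimen represented
# by a piecewise-linear spring, released with an initial impact velocity.
import numpy as np

m = 40.0                       # impactor mass, kg (assumed)
v0 = 4.0                       # impact velocity, m/s (assumed)

def spring_force(x):
    """Piecewise-linear approximation of the specimen's crush behaviour (assumed)."""
    if x <= 0.0:
        return 0.0             # no tension once contact is lost
    if x < 0.005:
        return 2.0e6 * x       # initial elastic stiffness
    return 1.0e4 + 2.0e5 * (x - 0.005)   # post-collapse plateau with mild hardening

dt = 1.0e-6                    # time step well below the critical value
n_steps = 20000

# Starting procedure: x_{-1} = x_0 - dt*v_0 + (dt^2/2)*a_0
x_prev = 0.0 - dt * v0 + 0.5 * dt**2 * (-spring_force(0.0) / m)
x = 0.0
force_hist, disp_hist = [], []
for _ in range(n_steps):
    a = -spring_force(x) / m                       # acceleration from current state
    x_next = 2.0 * x - x_prev + dt**2 * a          # central difference update
    force_hist.append(spring_force(x))
    disp_hist.append(x)
    x_prev, x = x, x_next

peak = max(force_hist)
print(f"peak specimen force: {peak/1000:.1f} kN at {disp_hist[force_hist.index(peak)]*1000:.1f} mm")
```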
29

Refractive Indices Of Liquid Crystals And Their Applications In Display And Photonic Devices

Li, Jun 01 January 2005 (has links)
Liquid crystals (LCs) are important materials for flat panel display and photonic devices. Most LC devices use electrical field-, magnetic field-, or temperature-induced refractive index change to modulate the incident light. Molecular constituents, wavelength, and temperature are the three primary factors determining the liquid crystal refractive indices: ne and no for the extraordinary and ordinary rays, respectively. In this dissertation, we derive several physical models for describing the wavelength and temperature effects on liquid crystal refractive indices, average refractive index, and birefringence. Based on these models, we develop some high temperature gradient refractive index LC mixtures for photonic applications, such as thermal tunable liquid crystal photonic crystal fibers and thermal solitons. Liquid crystal refractive indices decrease as the wavelength increase. Both ne and no saturate in the infrared region. Wavelength effect on LC refractive indices is important for the design of direct-view displays. In Chapter 2, we derive the extended Cauchy models for describing the wavelength effect on liquid crystal refractive indices in the visible and infrared spectral regions based on the three-band model. The three-coefficient Cauchy model could be used for describing the refractive indices of liquid crystals with low, medium, and high birefringence, whereas the two-coefficient Cauchy model is more suitable for low birefringence liquid crystals. The critical value of the birefringence is deltan~0.12. Temperature is another important factor affecting the LC refractive indices. The thermal effect originated from the lamp of projection display would affect the performance of the employed liquid crystal. In Chapter 3, we derive the four-parameter and three-parameter parabolic models for describing the temperature effect on the LC refractive indices based on Vuks model and Haller equation. We validate the empirical Haller equation quantitatively. We also validate that the average refractive index of liquid crystal decreases linearly as the temperature increases. Liquid crystals exhibit a large thermal nonlinearity which is attractive for new photonic applications using photonic crystal fibers. We derive the physical models for describing the temperature gradient of the LC refractive indices, ne and no, based on the four-parameter model. We find that LC exhibits a crossover temperature To at which dno/dT is equal to zero. The physical models of the temperature gradient indicate that ne, the extraordinary refractive index, always decreases as the temperature increases since dne/dT is always negative, whereas no, the ordinary refractive index, decreases as the temperature increases when the temperature is lower than the crossover temperature (dno/dT<0 when the temperature is lower than To) and increases as the temperature increases when the temperature is higher than the crossover temperature (dno/dT>0 when the temperature is higher than To ). Measurements of LC refractive indices play an important role for validating the physical models and the device design. Liquid crystal is anisotropic and the incident linearly polarized light encounters two different refractive indices when the polarization is parallel or perpendicular to the optic axis. The measurement is more complicated than that for an isotropic medium. In Chapter 4, we use a multi-wavelength Abbe refractometer to measure the LC refractive indices in the visible light region. 
Measurements of LC refractive indices play an important role in validating the physical models and in device design. A liquid crystal is anisotropic, so incident linearly polarized light encounters two different refractive indices depending on whether the polarization is parallel or perpendicular to the optic axis, which makes the measurement more complicated than for an isotropic medium. In Chapter 4, we use a multi-wavelength Abbe refractometer to measure the LC refractive indices in the visible region at six wavelengths, λ = 450, 486, 546, 589, 633, and 656 nm, by changing the filters, and use a circulating constant-temperature bath to control the sample temperature over the range 10 to 55 °C. The measured refractive index data cover five low-birefringence liquid crystals (MLC-9200-000, MLC-9200-100, MLC-6608 (Δε = -4.2), MLC-6241-000, and UCF-280 (Δε = -4)); five medium-birefringence liquid crystals (5CB, 5PCH, E7, E48, and BL003); four high-birefringence liquid crystals (BL006, BL038, E44, and UCF-35); and two liquid crystals with high dno/dT at room temperature (UCF-1 and UCF-2). The refractive indices of E7 at two infrared wavelengths, λ = 1.55 and 10.6 μm, are measured by the wedged-cell refractometer method. The UV absorption spectra of several liquid crystals, MLC-9200-000, MLC-9200-100, MLC-6608, and TL-216, are also measured. In Section 6.5, we also measure the refractive indices of cured optical films of NOA65 and NOA81 using the multi-wavelength Abbe refractometer.

In Chapter 5, we use the experimental data measured in Chapter 4 to validate the physical models we derived: the extended three-coefficient and two-coefficient Cauchy models, and the four-parameter and three-parameter parabolic models. For the first time, we validate the Vuks model directly using experimental liquid crystal data. We also validate the empirical Haller equation for the LC birefringence Δn and the linear equation for the LC average refractive index ⟨n⟩.

The study of LC refractive indices opens several new photonic applications for liquid crystals, such as high temperature-gradient liquid crystals, highly thermally tunable liquid crystal photonic crystal fibers, laser-induced (2+1)D thermal solitons in nematic liquid crystals, determination of the infrared refractive indices of liquid crystals, and a comparative refractive index study between liquid crystals and photopolymers for polymer-dispersed liquid crystal (PDLC) applications. In Chapter 6, we introduce these applications one by one. First, we formulate two novel liquid crystals, UCF-1 and UCF-2, with high dno/dT at room temperature; the dno/dT of UCF-1 is about four times higher than that of 5CB at room temperature. Second, we infiltrate UCF-1 into the micro-holes around the silica core of a section of three-rod-core photonic crystal fiber and build a highly thermally tunable liquid crystal photonic crystal fiber. The guided mode has an effective area of 440 μm² with an insertion loss of less than 0.5 dB; the loss is mainly attributed to coupling losses between the index-guided section and the bandgap-guided section. The thermal tuning sensitivity of the spectral position of the bandgap is measured to be 27 nm/°C around room temperature, which is 4.6 times higher than that obtained with the commercial E7 LC mixture operated above 50 °C. Third, the novel liquid crystals UCF-1 and UCF-2 are preferred for triggering laser-induced thermal solitons in a nematic liquid crystal confined in a capillary because of their high positive dno/dT at room temperature. Fourth, we extrapolate the refractive index data measured in the visible region to the near- and far-infrared regions based on the extended Cauchy model and the four-parameter model; the extrapolation method is validated by the experimental data measured in the visible and infrared regions.
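The visible-to-infrared extrapolation mentioned above can be sketched as a least-squares fit of the extended Cauchy model to the six visible wavelengths, followed by evaluation at the two infrared wavelengths; the ne data array below is a hypothetical placeholder, not the values measured in Chapter 4.

```python
import numpy as np

# Hypothetical ne values at the six measured visible wavelengths (in um);
# placeholders only, not the dissertation's data
wl_vis = np.array([0.450, 0.486, 0.546, 0.589, 0.633, 0.656])
ne_vis = np.array([1.768, 1.762, 1.752, 1.746, 1.742, 1.740])

# Fit the extended Cauchy model n = A + B/lambda^2 + C/lambda^4 by linear least squares
X = np.column_stack([np.ones_like(wl_vis), wl_vis ** -2, wl_vis ** -4])
A, B, C = np.linalg.lstsq(X, ne_vis, rcond=None)[0]

# Extrapolate to the infrared wavelengths used in the wedged-cell measurements
for wl in (1.55, 10.6):
    print(f"ne({wl} um) ~ {A + B / wl**2 + C / wl**4:.4f}")
```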
Knowing the LC refractive indices in the infrared region is important for photonic devices operated in that spectral range. Finally, we make a comprehensive comparative refractive index study between two photocurable polymers (NOA65 and NOA81) and two series of Merck liquid crystals, the E-series (E44, E48, and E7) and the BL-series (BL038, BL003, and BL006), in order to optimize the performance of polymer-dispersed liquid crystals (PDLCs). Among the LC materials studied, BL038 and E48 are good candidates for making PDLC systems incorporating NOA65. The BL038 PDLC cell shows a higher contrast ratio than the E48 cell because BL038 has a better-matched ordinary refractive index, higher birefringence, and similar miscibility compared to E48. Liquid crystals with good miscibility with the polymer, a matched ordinary refractive index, and higher birefringence help to improve the PDLC contrast ratio for display applications. In Chapter 7, we give a general summary of the dissertation.
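The PDLC comparison rests on how closely each LC's ordinary refractive index matches the cured polymer index, since a well-matched droplet appears transparent in the voltage-on state; the sketch below ranks candidate LCs by that mismatch, using rough placeholder index values rather than the measured ones.

```python
# Illustrative ordinary refractive indices; placeholder values, not measured data
n_polymer = {"NOA65": 1.524, "NOA81": 1.560}
n_o_lc = {"E7": 1.521, "E48": 1.523, "E44": 1.528,
          "BL003": 1.527, "BL006": 1.530, "BL038": 1.527}

# A smaller |no - n_polymer| gives a more transparent voltage-on state,
# and hence a higher PDLC contrast ratio
for polymer, n_p in n_polymer.items():
    ranked = sorted(n_o_lc.items(), key=lambda kv: abs(kv[1] - n_p))
    best_lc, best_no = ranked[0]
    print(f"{polymer}: best-matched LC is {best_lc} (|no - np| = {abs(best_no - n_p):.3f})")
```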
30

Design, simulation, and testing of an electric propulsion cluster frame

Bek, Jeremy January 2021 (has links)
In general, electric propulsion offers very high efficiency but relatively low thrust. To remedy this, several ion engines can be assembled in a clustered configuration and operated in parallel. This requires the careful design of a frame to accommodate the individual propulsion systems. The frame must be modular so that it can be used in different cluster sizes, and it must satisfy thermal and mechanical requirements to ensure nominal operation of the thrusters. This report presents the design process of such a frame, from preliminary modelling to the experimental study of a prototype. It gives an overview of the iterative design process, driven by thermal simulations performed in COMSOL Multiphysics, which led to 2-thruster and 4-thruster cluster frame designs. A lumped-parameter model of the electric propulsion system was also created to capture its complex thermal behaviour. In addition, the 2-thruster frame was studied mechanically with analytical calculations and simulations of simple load cases in SolidWorks. Lastly, a prototype based on the 2-thruster frame model was assembled and used for temperature measurements while hosting two operating thrusters inside a vacuum chamber; the temperature distribution in the cluster was measured and compared with simulation results. Thermal simulations of the 2-thruster and 4-thruster frames showed promising results, and mechanical simulations of the 2-thruster version met all requirements. Moreover, the experimental results largely agreed with the thermal simulations of the prototype. Finally, the lumped-element model proved instrumental in calibrating the other models, thanks to its high flexibility and quick computation time.
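A lumped-parameter thermal model of the kind described treats each component as a node with a heat capacity, linked by thermal conductances, and integrates the resulting ODEs; the two-node topology, node names, and parameter values below are assumptions for illustration only, not the calibrated model from the report.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-node sketch: thruster body and frame, each also coupled to the vacuum-chamber wall.
# All parameter values are illustrative assumptions, not calibrated values from the thesis.
C = np.array([800.0, 2500.0])      # heat capacities [J/K]: thruster, frame
G = 0.8                            # conductance between thruster and frame [W/K]
G_wall = np.array([0.05, 0.3])     # linearized link of each node to the chamber wall [W/K]
Q = np.array([40.0, 0.0])          # dissipated power [W]: the thruster heats, the frame does not
T_wall = 293.0                     # chamber wall temperature [K]

def rhs(t, T):
    q_link = G * (T[0] - T[1])                               # heat flow thruster -> frame
    dT0 = (Q[0] - q_link - G_wall[0] * (T[0] - T_wall)) / C[0]
    dT1 = (Q[1] + q_link - G_wall[1] * (T[1] - T_wall)) / C[1]
    return [dT0, dT1]

# Integrate a six-hour firing from a uniform 293 K start
sol = solve_ivp(rhs, (0.0, 6 * 3600.0), [293.0, 293.0], max_step=60.0)
print("temperature estimate at end of run [K]:", sol.y[:, -1])
```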
