141
System Identification via the Proper Orthogonal Decomposition. Allison, Timothy Charles, 04 December 2007.
Although the finite element method is often applied to analyze the dynamics of structures, its application to large, complex structures can be time-consuming, and errors in the modeling process may negatively affect the accuracy of analyses based on the model. System identification techniques attempt to circumvent these problems by using experimental response data to characterize or identify a system. However, identification of structures that are time-varying or nonlinear is problematic because the available methods generally require prior knowledge of the equations of motion for the system. Nonlinear system identification techniques are generally applicable only when the functional form of the nonlinearity is known, and no general nonlinear system identification theory is available as there is for linear systems. Linear time-varying identification methods have been proposed for application to nonlinear systems, but methods for general time-varying systems where the form of the time variance is unknown have only been available for single-input single-output models. This dissertation presents several general linear time-varying methods for multiple-input multiple-output systems where the form of the time variance is entirely unknown. The methods combine the proper orthogonal decomposition of measured response data with linear system theory to construct a model for predicting the response of an arbitrary linear or nonlinear system without any knowledge of the equations of motion. Separate methods are derived for predicting responses to initial displacements, initial velocities, and forcing functions. Some methods require only one data set but promise accurate solutions only for linear, time-invariant systems that are lightly damped and have a mass matrix proportional to the identity matrix. Other methods use multiple data sets and are valid for general time-varying systems. The proposed methods are applied to linear time-invariant, time-varying, and nonlinear systems via numerical examples and experiments, and the factors affecting the accuracy of the methods are discussed. / Ph. D.
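To make the first step of this approach concrete, here is a minimal sketch of extracting POD modes from measured response data via a singular value decomposition of a snapshot matrix; the snapshot layout, variable names, and energy threshold are illustrative assumptions, not the dissertation's actual identification procedure.

```python
import numpy as np

def pod_modes(snapshots, energy=0.999):
    """Proper orthogonal decomposition of a response snapshot matrix.

    snapshots : (n_dof, n_samples) array, each column a measured response
                at one time instant (layout assumed for illustration).
    Returns the POD modes, the modal (time) coordinates, and the number
    of modes needed to capture the requested fraction of signal energy.
    """
    X = snapshots - snapshots.mean(axis=1, keepdims=True)   # remove the temporal mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)        # X = U diag(s) Vt
    cum_energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum_energy, energy)) + 1        # retained mode count
    modes = U[:, :r]                                        # spatial POD modes
    coords = np.diag(s[:r]) @ Vt[:r, :]                     # modal coordinates vs. time
    return modes, coords, r

# Example with synthetic two-mode response data
t = np.linspace(0.0, 1.0, 500)
X = np.outer([1.0, -0.5, 0.2], np.sin(2 * np.pi * 3 * t)) \
  + np.outer([0.3, 0.8, -0.6], np.sin(2 * np.pi * 7 * t))
modes, coords, r = pod_modes(X)
print(r, modes.shape, coords.shape)                         # 2 retained modes
```

The retained modes and modal coordinates would then feed the linear-system-theory step described in the abstract.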
142
Disaggregating Within-Person and Between-Person Effects in the Presence of Linear Time Trends in Time-Varying Predictors: Structural Equation Modeling Approach. Hori, Kazuki, 01 June 2021.
Educational researchers are often interested in phenomena that unfold over time within a person and, at the same time, in relationships between characteristics that are stable over time. Since variables in a longitudinal study reflect both within- and between-person effects, researchers need to disaggregate them to understand the phenomenon of interest correctly. Although the person-mean centering technique has been regarded as the gold standard disaggregation method, recent studies found that centering did not work when there was a trend in the predictor. Hence, they proposed detrending techniques to remove the systematic change; however, these were only applicable to multilevel models. This dissertation therefore develops novel detrending methods based on structural equation modeling (SEM). It also establishes the links between centering and detrending by reviewing a broad range of literature. The proposed SEM-based detrending methods are compared to the existing centering and detrending methods through a series of Monte Carlo simulations. The results indicate that (a) model misspecification for the time-varying predictors or outcomes leads to large bias and standard errors of the estimates, (b) statistical properties of estimates of the within- and between-person effects are mostly determined by the type of between-person predictors (i.e., observed or latent), and (c) for unbiased estimation of the effects, models with latent between-person predictors require nonzero growth factor variances, while those with observed predictors at the between level need either nonzero or zero variance, depending on the parameter. As concluding remarks, some practical recommendations are provided based on the findings of the present study. / Doctor of Philosophy / Educational researchers are often interested in longitudinal phenomena within a person and in relations between the person's characteristics. Since repeatedly measured variables reflect both within- and between-person aspects, researchers need to disaggregate them statistically to understand the phenomenon of interest. Recent studies found that the traditional centering method, in which the individual's average of a predictor is subtracted from the original predictor values, could not correctly disentangle the within- and between-person effects when the predictor showed a systematic change over time (i.e., a trend). They proposed techniques to remove the trend; however, the detrending methods were only applicable to multilevel models. Therefore, the present study develops novel detrending methods using structural equation modeling. The proposed models are compared to the existing methods through a series of Monte Carlo simulations, in which the data-generating model and its parameter values can be manipulated. The results indicate that (a) model misspecification for the time-varying predictor or outcome leads to systematic deviation of the estimates from their true values, (b) statistical properties of the estimated effects are mostly determined by the type of between-person predictors (i.e., observed or latent), and (c) the latent predictor models require nonzero growth factor variances for unbiased estimation, while the observed predictor models need either nonzero or zero variance, depending on the parameter. As concluding remarks, some recommendations for practitioners are provided.
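As a rough numerical illustration of the centering-versus-detrending issue described above (not the dissertation's SEM-based estimators), the sketch below simulates a predictor with a linear time trend and contrasts person-mean centering with per-person linear detrending; the column names, trend form, and parameter values are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated long-format panel: 100 persons, 6 occasions, predictor with a linear trend
n_person, n_time = 100, 6
d = pd.DataFrame({
    "person": np.repeat(np.arange(n_person), n_time),
    "time": np.tile(np.arange(n_time), n_person),
})
person_mean = rng.normal(0.0, 1.0, n_person)                 # stable between-person part
d["x"] = person_mean[d["person"]] + 0.5 * d["time"] \
         + rng.normal(0.0, 0.5, len(d))                      # within-person trend + noise

# Person-mean centering: within part = deviation from the person's own average
d["x_centered"] = d["x"] - d.groupby("person")["x"].transform("mean")

# Per-person linear detrending: within part = residual from a person-specific line in time
def detrend(g):
    slope, intercept = np.polyfit(g["time"], g["x"], 1)
    return g["x"] - (intercept + slope * g["time"])

d["x_detrended"] = pd.concat([detrend(g) for _, g in d.groupby("person")])

print(d.groupby("time")[["x_centered", "x_detrended"]].mean().round(3))
```

The printed occasion means show that the centered predictor still varies systematically with time when a trend is present, whereas the detrended predictor does not, which is the disaggregation failure the dissertation addresses.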
143
A Low-Cost Unmanned Aerial Vehicle Research Platform: Development, Modeling and Advanced Control Implementation. Arifianto, Ony, 02 July 2014.
This dissertation describes the development and modeling of a low-cost, open source, and reliable small fixed-wing unmanned aerial vehicle (UAV) for advanced control implementation. The platform is mostly constructed of low-cost commercial off-the-shelf (COTS) components. The only non-COTS components are the airdata probes which are manufactured and calibrated in-house, following a procedure provided herein. The airframe used is the commercially available radio-controlled 6-foot Telemaster airplane from Hobby Express. The airplane is chosen mainly for its adequately spacious fuselage and for being reasonably stable and sufficiently agile. One noteworthy feature of this platform is the use of two separate low-cost open source onboard computers for handling the data management/hardware interfacing and control computation. Specifically, the single board computer, Gumstix Overo Fire, is used to execute the control algorithms, whereas the autopilot, Ardupilot Mega, is mostly used to interface the Overo computer with the sensors and actuators. The platform supports multi-vehicle operations through the use of a radio modem that enables multi-point communications.
As the goal of developing this platform is to implement rigorous control algorithms for real-time trajectory tracking and distributed control, it is important to derive an appropriate flight dynamic model of the platform on which the controller synthesis can be based. To that end, reasonably accurate models of the vehicle, servo motors and propulsion system are developed. Namely, the output error method is used to estimate the longitudinal and lateral-directional aerodynamic parameters from flight test data. The moments of inertia of the platform are determined using the simple pendulum test method, and the frequency response of each servomotor is also obtained experimentally. The JavaProp applet is used to obtain lookup tables relating airspeed to propeller thrust at constant throttle settings.
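As a small worked example of the pendulum test mentioned above (the rig geometry and the numbers below are hypothetical, not the dissertation's measurements), the moment of inertia about the center of gravity can be recovered from the measured oscillation period of a compound pendulum.

```python
import numpy as np

def inertia_from_pendulum(period_s, mass_kg, pivot_to_cg_m, g=9.81):
    """Moment of inertia about the CG from a compound-pendulum swing test.

    For small oscillations, T = 2*pi*sqrt(I_pivot / (m*g*d)), so
    I_pivot = m*g*d*T^2 / (4*pi^2) and, by the parallel-axis theorem,
    I_cg = I_pivot - m*d^2.
    """
    I_pivot = mass_kg * g * pivot_to_cg_m * period_s**2 / (4.0 * np.pi**2)
    return I_pivot - mass_kg * pivot_to_cg_m**2

# Hypothetical numbers for a small airframe swung about its pitch axis
T = 1.9          # measured oscillation period, s
m = 5.0          # vehicle mass, kg
d = 0.45         # distance from pivot to center of gravity, m
print(f"I_yy ~ {inertia_from_pendulum(T, m, d):.3f} kg m^2")
```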
Control systems are also designed for the regulation of this UAV along real-time trajectories. The reference trajectories are generated in real time from a library of pre-specified motion primitives and hence are not known a priori. Two concatenated primitive trajectories are considered: one formed from seven primitives exhibiting a figure-8 geometric path and another composed of a Split-S maneuver that settles into a level-turn trim trajectory. Switched control systems stemming from l2-induced norm synthesis approaches are designed for discrete-time linearized models of the nonlinear UAV system. These controllers are analyzed in simulations of a realistic operational environment and are further implemented on the physical UAV. The simulations and flight tests demonstrate that the switched controllers, which take into account the effects of switching between constituent sub-controllers, manage to closely track the considered trajectories despite the various modeling uncertainties, exogenous disturbances and measurement noise. These switched controllers are composed of discrete-time linear sub-controllers designed separately for a subset of the pre-specified primitives, with the uncertain initial conditions that arise when switching between primitives incorporated into the control design. / Ph. D.
144
Computationally Driven Algorithms for Distributed Control of Complex Systems. Abou Jaoude, Dany, 19 November 2018.
This dissertation studies the model reduction and distributed control problems for interconnected systems, i.e., systems that consist of multiple interacting agents/subsystems. The study of the analysis and synthesis problems for interconnected systems is motivated by the multiple applications that can benefit from the design and implementation of distributed controllers. These applications include automated highway systems and formation flight of unmanned aircraft systems.
The systems of interest are modeled using arbitrary directed graphs, where the subsystems correspond to the nodes, and the interconnections between the subsystems are described using the directed edges. In addition to the states of the subsystems, the adopted frameworks also model the interconnections between the subsystems as spatial states. Each agent/subsystem is assumed to have its own actuating and sensing capabilities. These capabilities are leveraged in order to design a controller subsystem for each plant subsystem. In the distributed control paradigm, the controller subsystems interact over the same interconnection structure as the plant subsystems.
The models assumed for the subsystems are linear time-varying or linear parameter-varying. Linear time-varying models are useful for describing nonlinear equations that are linearized about prespecified trajectories, and linear parameter-varying models allow for capturing the nonlinearities of the agents, while still being amenable to control using linear techniques. It is clear from the above description that the size of the model for an interconnected system increases with the number of subsystems and the complexity of the interconnection structure. This motivates the development of model reduction techniques to rigorously reduce the size of the given model. In particular, this dissertation presents structure-preserving techniques for model reduction, i.e., techniques that guarantee that the interpretation of each state is retained in the reduced order system. Namely, the sought reduced order system is an interconnected system formed by reduced order subsystems that are interconnected over the same interconnection structure as that of the full order system. Model reduction is important for reducing the computational complexity of the system analysis and control synthesis problems.
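As a rough sketch of this modeling framework (subsystems as nodes of a directed graph, interconnection signals along the edges), the code below assembles a global discrete-time model from identical linear subsystems coupled over a given digraph; the matrices and the additive coupling form are illustrative assumptions rather than the LTV/LPV framework used in the dissertation.

```python
import numpy as np

def assemble_interconnected(A, E, B, adjacency):
    """Build a global model for N identical subsystems coupled over a digraph.

    Each subsystem:  x_i(k+1) = A x_i(k) + sum_{j -> i} E x_j(k) + B u_i(k)
    adjacency[i, j] = 1 if there is a directed edge from node j to node i.
    Returns the block system matrices (A_glob, B_glob).
    """
    N = adjacency.shape[0]
    A_glob = np.kron(np.eye(N), A) + np.kron(adjacency, E)
    B_glob = np.kron(np.eye(N), B)
    return A_glob, B_glob

# Three agents in a directed ring: 0 -> 1 -> 2 -> 0
A = np.array([[0.9, 0.1], [0.0, 0.8]])      # local dynamics (assumed)
E = 0.05 * np.eye(2)                         # influence of an in-neighbor (assumed)
B = np.array([[0.0], [1.0]])
adjacency = np.array([[0, 0, 1],
                      [1, 0, 0],
                      [0, 1, 0]])
A_glob, B_glob = assemble_interconnected(A, E, B, adjacency)
print(A_glob.shape, B_glob.shape)            # (6, 6) (6, 3)
```

Even in this toy case the global state dimension grows with the number of agents, which is the computational burden that the structure-preserving model reduction described above is meant to relieve.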
In this dissertation, interior point methods are extensively used for solving the semidefinite programming problems that arise in analysis and synthesis. / Ph. D. / The work in this dissertation is motivated by the numerous applications in which multiple agents interact and cooperate to perform a coordinated task. Examples of such applications include automated highway systems and formation flight of unmanned aircraft systems. For instance, one can think of the hazardous conditions created by a fire in a building and the benefits of using multiple interacting multirotors to deal with this emergency situation and reduce the risks on humans. This dissertation develops mathematical tools for studying and dealing with these complex systems. Namely, it is shown how controllers can be designed to ensure that such systems perform in the desired way, and how the models that describe the systems of interest can be systematically simplified to facilitate performing the tasks of mathematical analysis and control design.
145
Implementation of Instantaneous Frequency Estimation based on Time-Varying AR Modeling. Kadanna Pally, Roshin, 27 May 2009.
Instantaneous Frequency (IF) estimation based on time-varying autoregressive (TVAR) modeling has been shown to perform well in practical scenarios when the IF variation is rapid and/or non-linear and only short data records are available for modeling. A challenging aspect of implementing IF estimation based on TVAR modeling is the efficient computation of the time-varying coefficients by solving a set of linear equations referred to as the generalized covariance equations. Conventional approaches such as Gaussian elimination or direct matrix inversion are computationally inefficient for solving such a system of equations especially when the covariance matrix has a high order.
We implement two recursive algorithms for efficiently inverting the covariance matrix. First, we implement the Akaike algorithm which exploits the block-Toeplitz structure of the covariance matrix for its recursive inversion. In the second approach, we implement the Wax-Kailath algorithm that achieves a factor of 2 reduction over the Akaike algorithm in the number of recursions involved and the computational effort required to form the inverse matrix.
Although a TVAR model works well for IF estimation of frequency modulated (FM) components in white noise, when the model is applied to a signal containing a finitely correlated signal in addition to the white noise, estimation performance degrades, especially when the correlated signal is not weak relative to the FM components. We propose a decorrelating TVAR (DTVAR) model based IF estimation method and a DTVAR model based linear prediction error filter for FM interference rejection in a finitely correlated environment. Simulations show notable performance gains for the DTVAR model over the TVAR model for moderate to high SIRs. / Master of Science
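As a rough sketch of TVAR-based IF estimation (using a power-basis expansion of the coefficients and a plain least-squares solve in place of the recursive covariance-matrix inversion algorithms discussed above), the code below fits a time-varying AR(2) model to a noisy chirp and reads the IF from the angle of the dominant time-varying pole; the basis, model order, and test signal are assumptions.

```python
import numpy as np

def tvar_if(x, fs, p=2, n_basis=4):
    """Instantaneous-frequency estimate from a time-varying AR(p) model.

    The TVAR coefficients are expanded over a power basis in time,
    a_k(n) = sum_m c[k, m] * (n/N)^m, and the expansion weights are found
    by ordinary least squares (a stand-in for solving the generalized
    covariance equations).  The IF is taken from the angle of the dominant
    time-varying AR pole at each sample.
    """
    N = len(x)
    tau = np.arange(N) / N
    basis = np.vstack([tau**m for m in range(n_basis)])        # (n_basis, N)

    # Linear regression x[n] ~ -sum_k a_k(n) x[n-k] for n = p..N-1
    rows, target = [], x[p:]
    for k in range(1, p + 1):
        for m in range(n_basis):
            rows.append(-x[p - k:N - k] * basis[m, p:])
    Phi = np.vstack(rows).T                                    # (N-p, p*n_basis)
    c = np.linalg.lstsq(Phi, target, rcond=None)[0].reshape(p, n_basis)

    a = c @ basis                                              # a[k-1, n] = a_k(n)
    f_inst = np.empty(N)
    for n in range(N):
        roots = np.roots(np.concatenate(([1.0], a[:, n])))
        dominant = roots[np.argmax(np.abs(roots))]
        f_inst[n] = np.abs(np.angle(dominant)) * fs / (2 * np.pi)
    return f_inst

# Linear chirp from 50 Hz to 150 Hz in light white noise
rng = np.random.default_rng(0)
fs, N = 1000.0, 2000
t = np.arange(N) / fs
x = np.cos(2 * np.pi * (50 * t + 25 * t**2)) + 0.05 * rng.standard_normal(N)
f_hat = tvar_if(x, fs)
print(f_hat[100], f_hat[1000], f_hat[1900])                    # rising IF estimate
```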
146
Time-varying impacts of green credit on carbon productivity in China: New evidence from a non-parametric panel data model. Hou, P., Luo, S., Liu, S., Tan, Yong, Roubaud, D., 16 July 2024.
In the context of global climate change threatening human survival, and in a post-pandemic era that advocates a global green and low-carbon economic recovery, it is crucial to assess, from a dynamic perspective, whether green finance can effectively support low-carbon economic development. Unlike existing research, which focuses solely on the average effect of green credit (GC) on carbon productivity (CP), we introduce a non-parametric panel data model to investigate GC's impact on CP across 30 provinces in China from 2003 to 2021, verifying a significant time-varying effect. Specifically, during the first phase (2003-2008), GC negatively impacted CP. In the second phase (2009-2014), this negative influence gradually diminished and transformed into a positive effect. In the third phase (2015-2021), GC continued to positively influence CP, although this effect became insignificant during the pandemic. Further subgroup analysis reveals that in regions with low environmental regulation, GC did not significantly boost CP throughout the sample period. In contrast, in regions with high environmental regulation, GC's positive effect persisted in the mid to late stages of the sample period. Additionally, compared to regions with low levels of marketization, the impact of GC on CP was more pronounced in highly marketized regions. This indicates that the promoting effect of GC on CP depends on strong support from environmental regulations and well-functioning market mechanisms. By adopting a non-parametric approach, this study reveals variations in the impact of GC on CP across different stages and under the pandemic shock, offering new insights into the relationship between GC and China's CP.
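As a rough illustration of how a time-varying coefficient can be recovered non-parametrically (this is not the authors' estimator, data, or control set), the sketch below estimates a slope that drifts from negative to positive by kernel-weighted least squares on a simulated balanced panel; the kernel, bandwidth, and data-generating values are assumptions.

```python
import numpy as np

def kernel_tv_beta(y, x, time_idx, grid, bandwidth):
    """Local-constant kernel estimate of a time-varying slope beta(t).

    At each grid point t, (a(t), beta(t)) minimize the kernel-weighted sum
    sum_it K((time_it - t)/h) * (y_it - a - beta * x_it)^2.
    """
    beta_hat = np.empty(len(grid))
    for i, t in enumerate(grid):
        w = np.exp(-0.5 * ((time_idx - t) / bandwidth) ** 2)      # Gaussian kernel
        X = np.column_stack([np.ones_like(x), x])
        XtW = X.T * w
        coef = np.linalg.solve(XtW @ X, XtW @ y)
        beta_hat[i] = coef[1]
    return beta_hat

# Simulated balanced panel: 30 provinces, 2003-2021, slope turning from negative to positive
rng = np.random.default_rng(1)
n_prov, n_year = 30, 19
years = np.tile(np.arange(n_year), n_prov)
beta_true = -0.3 + 0.05 * np.arange(n_year)
x = rng.normal(1.0, 0.5, n_prov * n_year)                          # stand-in for green credit
y = beta_true[years] * x + rng.normal(0.0, 0.3, n_prov * n_year)   # stand-in for carbon productivity

print(np.round(kernel_tv_beta(y, x, years, np.arange(n_year), bandwidth=2.0), 2))
```

The printed sequence tracks the smoothed transition of the slope over the sample years, which is the kind of phase pattern the paper reports for GC and CP.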
147
Assouplissement quantitatif : que tirer de l'expérience japonaise ? / Quantitative easing: what can we learn from the Japanese experience? Moussa, Zakaria, 06 December 2010.
Because of its similarity to the Japanese crisis of the 1990s, the current financial crisis has led the monetary authorities of the largest central banks to adopt quantitative easing. Only Japan, whose experience of quantitative easing is recent yet dates back enough years to be studied, can provide elements of a solution to this crisis. This thesis applies the most appropriate and recent econometric techniques to the analysis of the quantitative easing implemented by the Bank of Japan between 2001 and 2006. Three chapters address whether it was effective, under which conditions, and through which channels. The effectiveness of this monetary policy strategy in stimulating activity and halting the deflationary spiral is demonstrated. This experience highlights the important role that monetary policy can play in exiting a crisis, even when the policy rate reaches zero. Both the expectations channel and the portfolio-rebalancing channel played an important role in transmitting these effects. The main lessons to be drawn from the Japanese experience are, first, to remedy the weaknesses of the financial sector radically and immediately; second, to conduct a particularly aggressive monetary policy; and finally, to allow the time needed for this policy to bear fruit. The Japanese experience suggests that the Fed and the Bank of England should postpone their exit from this strategy, an exit that should be conducted within the framework of a programme and according to clear numerical targets. / The current financial crisis has now led most major central banks to rely on quantitative easing. The unique Japanese experience of quantitative easing is the only one that enables us to judge this therapy's effectiveness and the timing of the exit strategy. Is quantitative easing effective? Under which conditions? Through which channels? This thesis, consisting of three essays, applies appropriate and recent econometric techniques to examine quantitative easing in Japan between 2001 and 2006. We show, for the first time, that quantitative easing was able not only to prevent further recession and deflation but also to provide considerable stimulation to both output and prices. Moreover, both the expectations and portfolio-rebalancing channels play a crucial role in transmitting monetary policy effects. This experience shows that monetary policy is still potent even when short-term interest rates reach the zero lower bound. The Japanese experience suggests that efforts to clean up banks' balance sheets significantly improved the effectiveness of quantitative easing. However, this effect, although considerable, was short-lived; it became insignificant after one year. The short duration of this effect confirms the wisdom of the Fed's decision to maintain quantitative easing longer, so that its positive effects, being short-lived, could be exploited. In the light of the Japanese experience, we argue that, in addition to their fast reaction and the huge amount of current account balances (CABs) employed, which may have helped relieve short-term liquidity pressures in the financial system, the Fed was better off postponing its exit from quantitative easing.
148
Time-Varying Fractional-Order PID Control for Mitigation of Derivative Kick. Lendek, Attila, 05 May 2021.
In this thesis work, a novel approach for the design of a fractional-order proportional integral derivative (FOPID) controller is proposed. This design introduces a new time-varying FOPID controller to mitigate the voltage spike at the controller output whenever a sudden change to the setpoint occurs. The voltage spike exists at the output of proportional integral derivative (PID) and FOPID controllers whenever a derivative control element is involved, and such a spike may seriously damage the plant if it is left uncontrolled. The proposed FOPID controller applies a time function that forces the derivative gain to take effect gradually, leading to a time-varying derivative FOPID (TVD-FOPID) controller, which maintains a fast system response and significantly reduces the voltage spike at the controller output. The time-varying FOPID controller is optimally designed using particle swarm optimization (PSO) or a genetic algorithm (GA) to find the optimum constants and time-varying parameters. The improved control performance is validated through closed-loop DC motor speed control via comparisons between the TVD-FOPID controller, the traditional FOPID controller, and a time-varying FOPID (TV-FOPID) controller created for comparison, in which all three PID gain constants are replaced by optimized time functions. The simulation results demonstrate that the proposed TVD-FOPID controller not only achieves an 80% reduction of the voltage spike at the controller output but also keeps approximately the same system response characteristics as the regular FOPID controller. The TVD-FOPID controller using a saturation block between the controller output and the plant still performs best in terms of system overshoot, rise time, and settling time.
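As a hedged sketch of the idea (not the thesis's optimized controller), the code below implements a discrete-time FOPID law with Grünwald-Letnikov approximations of the fractional integral and derivative and ramps the derivative gain in over a short window; the ramp shape, gains, and fractional orders are assumptions rather than PSO/GA-optimized values.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights w_j for the fractional operator D^alpha."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def tvd_fopid(e, dt, Kp, Ki, Kd, lam, mu, t_ramp):
    """Time-varying-derivative FOPID control sequence for an error history e.

    u[n] = Kp*e[n] + Ki*D^(-lam) e[n] + Kd(t_n)*D^(mu) e[n], with the
    derivative gain ramped in as Kd(t) = Kd * min(1, t/t_ramp) to soften
    the derivative kick (the ramp is an assumed stand-in for the thesis's
    optimized time function).
    """
    n = len(e)
    w_int = gl_weights(-lam, n)            # fractional integral weights
    w_der = gl_weights(mu, n)              # fractional derivative weights
    u = np.empty(n)
    for k in range(n):
        hist = e[k::-1]                    # e[k], e[k-1], ..., e[0]
        integ = dt**lam * np.dot(w_int[:k + 1], hist)
        deriv = dt**(-mu) * np.dot(w_der[:k + 1], hist)
        kd_t = Kd * min(1.0, (k * dt) / t_ramp)
        u[k] = Kp * e[k] + Ki * integ + kd_t * deriv
    return u

# Unit setpoint step: the error jumps to 1, which normally kicks the derivative term
dt, T = 0.001, 0.5
e = np.ones(int(T / dt))                   # constant error right after the step (illustrative)
u = tvd_fopid(e, dt, Kp=2.0, Ki=1.0, Kd=0.5, lam=0.9, mu=0.7, t_ramp=0.05)
print(u[0], u[:60].max(), u[-1])           # without the ramp, u[0] would include Kd*dt**(-mu), about 63
```

Running it on a unit step in the error shows the controller output starting near Kp*e instead of the large derivative spike a fixed-gain FOPID would produce.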
149
Safety-Critical Teleoperation with Time-Varying Delays: MPC-CBF-based approaches for obstacle avoidance / Säkerhetskritisk teleoperation med tidsvarierande fördröjningar. Periotto, Riccardo, January 2023.
The thesis focuses on the design of a control strategy for safety-critical remote teleoperation. The main goal is to make the controlled system track the desired velocity specified by a human operator while avoiding obstacles despite communication delays. Different methods adopting Control Barrier Functions (CBFs) and Model Predictive Control (MPC) have been explored and tested. In this combination, CBFs are used to define the safety constraints the system has to respect to avoid obstacles, while MPC provides the framework for filtering the desired input by solving an optimization problem. The resulting input is sent to the remote system, where appropriate low-level velocity controllers translate it into system-specific commands. The main novelty of the thesis is a method to make the CBFs robust against the uncertainties affecting the system's state due to network delays. Other techniques are investigated to improve the quality of the available system information, starting from its delayed version, and to formulate the optimization problem without knowing the specific dynamics of the controlled system. The results show how the proposed method successfully solves the safety-critical teleoperation problem, making the controlled systems avoid obstacles under different types of network delay. The controller has also been tested in simulation and on a real manipulator, demonstrating its general applicability when reliable low-level velocity controllers are available. / Avhandlingen fokuserar på utformningen av en kontrollstrategi för säkerhetskritisk fjärrstyrd teleoperation. Huvudmålet är att få det kontrollerade systemet att följa den önskade hastigheten som specificeras av en mänsklig operatör samtidigt som hinder undviks trots kommunikationsfördröjningar. Olika metoder som använder Control Barrier Functions (CBFs) och Model Predictive Control har undersökts och testats. I denna kombination används CBFs för att definiera de säkerhetsbegränsningar som systemet måste respektera för att undvika hinder, medan MPC utgör ramverket för filtrering av den önskade indata genom att lösa ett optimeringsproblem. Den resulterande indata skickas till fjärrsystemet, där lämpliga hastighetsregulatorer på låg nivå översätter den till systemspecifika kommandon. Den viktigaste nyheten i avhandlingen är en metod för att göra CBFs robust mot de osäkerheter som påverkar systemets tillstånd på grund av nätverksfördröjningar. Andra tekniker undersöks för att förbättra kvaliteten på systeminformationen med utgångspunkt från den fördröjda informationen och för att formulera optimeringsproblemet utan att känna till det kontrollerade systemets specifika dynamik. Resultaten visar hur den föreslagna metoden framgångsrikt löser det säkerhetskritiska teleoperationsproblemet, vilket gör att de kontrollerade systemen undviker hinder med olika typer av nätverksfördröjningar. Styrningen har också testats i simulering och på en verklig manipulator, vilket visar dess allmänna tillämpbarhet när tillförlitliga lågnivåhastighetsregulatorer finns tillgängliga.
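As a minimal sketch of the CBF-as-safety-filter idea (single circular obstacle, single-integrator kinematics, no MPC horizon and no delay compensation, all of which are simplifications relative to the thesis's MPC-CBF design), the code below projects the operator's desired velocity onto the half-space defined by a linearized CBF condition.

```python
import numpy as np

def cbf_safety_filter(x, u_des, x_obs, r_safe, gamma=1.0):
    """Filter a desired velocity through a control barrier function constraint.

    Barrier: h(x) = ||x - x_obs||^2 - r_safe^2  (>= 0 means safe).
    For a single integrator x_dot = u, the CBF condition
    h_dot = 2 (x - x_obs)^T u >= -gamma * h(x) is linear in u, so the
    QP  min ||u - u_des||^2  s.t.  a^T u >= b  reduces to a projection
    onto a half-space, solved here in closed form.
    """
    a = 2.0 * (x - x_obs)
    b = -gamma * (np.dot(x - x_obs, x - x_obs) - r_safe**2)
    if np.dot(a, u_des) >= b:
        return u_des                                    # desired input already safe
    return u_des + (b - np.dot(a, u_des)) / np.dot(a, a) * a

# Operator commands a velocity straight toward an obstacle
x = np.array([0.0, 0.0])
x_obs = np.array([1.0, 0.0])
u_des = np.array([1.0, 0.0])
dt = 0.05
for _ in range(100):                                    # simple forward simulation
    u = cbf_safety_filter(x, u_des, x_obs, r_safe=0.4)
    x = x + dt * u
print(x, np.linalg.norm(x - x_obs))                     # settles just outside the 0.4 m safety radius
```

In the thesis's setting, the same kind of constraint would instead be embedded in an MPC problem and made robust to the state uncertainty introduced by the time-varying delays.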
150
Corporate financing decisions: the role of managerial overconfidence. Xu, Bin, January 2014.
This thesis examines the effects of managerial overconfidence on corporate financing decisions. Overconfident managers tend to overestimate the mean of future cash flow and underestimate its volatility. We propose a novel time-varying measure of overconfidence based on computational linguistic analysis of what managers said (i.e., the Chairman's Statement). The overconfidence of the CEO and CFO is also constructed based on what the managers did (i.e., how they traded their own firms' shares). We conduct three empirical studies that offer new insights into the roles of managerial overconfidence in the leverage decision (i.e., debt level), pecking order behaviour (i.e., the preference for debt over equity financing) and the debt maturity decision (i.e., short-term versus long-term debt). Study 1 documents a negative overconfidence-leverage relationship. This new finding suggests that debt conservatism associated with managerial overconfidence might be a potential explanation for the low leverage puzzle: some firms maintain low leverage, without taking the tax benefits of debt, because overconfident managers believe that the firm's securities are undervalued by investors and thus are too costly (Malmendier, Tate and Yan, 2011). Study 2 finds that managerial overconfidence leads to a reverse pecking order preference, especially in small firms, which sheds light on the pecking order puzzle that smaller firms with higher information costs surprisingly exhibit weaker pecking order preference. This new evidence is consistent with Hackbarth's (2008) theory that overconfident managers who underestimate the riskiness of earnings tend to prefer equity to debt financing. Study 3 finds that managerial overconfidence leads to higher debt maturity. This evidence supports our proposition that overconfidence can mitigate the underinvestment problem (which is often the major concern of long-term debt investors) (Hackbarth, 2009), which in turn allows overconfident managers to use more and cheaper long-term debt. This evidence also implies that overconfidence may mitigate the agency cost of debt. Overall, our empirical analysis suggests that managerial overconfidence has significant incremental explanatory power for corporate financing decisions.
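As a toy illustration of a text-based confidence measure (the word lists below are hypothetical and far shorter than any dictionary a study like this would use; this is not the thesis's construction), the sketch scores a Chairman's Statement by the balance of optimistic versus cautious words.

```python
import re

# Hypothetical word lists for illustration only; the thesis's dictionary is not reproduced here.
OPTIMISTIC = {"confident", "strong", "excellent", "outstanding", "growth",
              "record", "success", "opportunity", "improve", "exceed"}
CAUTIOUS = {"risk", "uncertain", "difficult", "challenging", "decline",
            "weak", "concern", "volatile", "pressure", "downturn"}

def overconfidence_score(text):
    """Net optimism of a document: (optimistic - cautious) / total matched words.

    Returns a value in [-1, 1]; higher values are read as a more confident tone.
    """
    words = re.findall(r"[a-z]+", text.lower())
    pos = sum(w in OPTIMISTIC for w in words)
    neg = sum(w in CAUTIOUS for w in words)
    return (pos - neg) / max(pos + neg, 1)

statement = ("We are confident of another year of strong growth and record results, "
             "despite challenging markets and continued pressure on margins.")
print(round(overconfidence_score(statement), 3))
```

Computing such a score on each year's statement would give a firm-level series, which is the sense in which a linguistic measure can be time-varying.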