351 |
Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems. Peiyi Zhang (11166777), 26 July 2021
<p>The Ensemble Kalman filter (EnKF) has achieved great success in data assimilation in the atmospheric and oceanic sciences, but its failure to converge to the correct filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as the particle filter and the sequential importance sampler, do not scale well with the dimension of the system or the sample size of the dataset. In this dissertation, we address these difficulties in a coherent way.</p><p>In the first part of the dissertation, we reformulate the EnKF in the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from stochastic gradient Langevin-type algorithms, which makes it scalable with respect to both the dimension and the sample size. We prove that the LEnKF converges to the correct filtering distribution in Wasserstein distance in the big-data scenario where the dynamic system consists of a large number of stages with a large number of samples observed at each stage; it can therefore be used for uncertainty quantification. We also reformulate the Bayesian inverse problem as a dynamic state estimation problem using subsampling and the Langevin diffusion process. We illustrate the performance of the LEnKF on a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.</p><p>In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF algorithm was developed for Gaussian dynamic systems containing no unknown parameters. 
We propose the so-called stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly, based on the state variables simulated by the LEnKF, in the framework of stochastic approximation. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF algorithm (Extended LEnKF) by introducing a latent Gaussian measurement variable into the dynamic system. Both extensions inherit the scalability of the LEnKF algorithm with respect to dimension and sample size. Numerical results indicate that they outperform existing methods in both state/parameter estimation and uncertainty quantification.</p>
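The mini-batch Langevin machinery that the LEnKF borrows can be sketched in a few lines. The one-dimensional Gaussian target, the constant step size, and the burn-in length below are illustrative assumptions for the sketch, not the dissertation's actual dynamic system:

```python
import math
import random

def sgld_step(theta, grad_log_post, step, rng):
    """One stochastic gradient Langevin update:
    theta' = theta + (step/2) * grad_log_post(theta) + sqrt(step) * N(0, 1)."""
    return theta + 0.5 * step * grad_log_post(theta) + math.sqrt(step) * rng.gauss(0.0, 1.0)

# Toy target N(mu, 1), whose score is mu - theta; in the LEnKF setting the
# gradient would instead come from a mini-batch of observations.
mu = 2.0
rng = random.Random(0)
theta, total, kept = 0.0, 0.0, 0
for t in range(20000):
    theta = sgld_step(theta, lambda th: mu - th, 0.05, rng)
    if t >= 5000:          # discard burn-in
        total += theta
        kept += 1
est = total / kept
print(round(est, 1))  # long-run average settles near mu = 2.0
```

With a decreasing step size the discretization bias vanishes; the constant step used here merely keeps the sketch short.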
|
352 |
Contrôle d'un système multi-terminal HVDC (MTDC) et étude des interactions entre les réseaux AC et le réseau MTDC / Control of a multi-terminal HVDC (MTDC) system and study of the interactions between the MTDC and the AC grids. Akkari, Samy, 29 September 2016
La multiplication des projets HVDC de par le monde démontre l'engouement toujours croissant pour cette technologie de transport de l'électricité. La grande majorité de ces transmissions HVDC correspondent à des liaisons point-à-point et se basent sur des convertisseurs AC/DC de type LCC ou VSC à 2 ou 3 niveaux. Les travaux de cette thèse se focalisent sur l'étude, le contrôle et la commande de systèmes HVDC de type multi-terminal (MTDC), avec des convertisseurs de type VSC classique ou modulaire multi-niveaux. La première étape consiste à obtenir les modèles moyens du VSC classique et du MMC. La différence fondamentale entre ces deux convertisseurs, à savoir la possibilité pour le MMC de stocker et de contrôler l'énergie des condensateurs des sous-modules, est détaillée et expliquée. Ces modèles et leurs commandes sont ensuite linéarisés et mis sous forme de représentations d'état, puis validés en comparant leur comportement à ceux de modèles de convertisseurs plus détaillés à l'aide de logiciels de type EMT. Une fois validés, les modèles d'état peuvent être utilisés afin de générer le modèle d'état de tout système de transmissions HVDC, qu'il soit point-à-point ou MTDC. La comparaison d'une liaison HVDC à base de VSCs classiques puis de MMCs est alors réalisée. Leurs valeurs propres sont étudiées et comparées, et les modes ayant un impact sur la tension DC sont identifiés et analysés. Cette étude est ensuite étendue à un système MTDC à 5 terminaux, et son analyse modale permet à la fois d'étudier la stabilité du système, mais aussi de comprendre l'origine de ses valeurs propres ainsi que leur impact sur la dynamique du système. La méthode de décomposition en valeurs singulières permet ensuite d'obtenir un intervalle de valeurs possibles pour le paramètre de "voltage droop", permettant ainsi le contrôle du système MTDC tout en s'assurant qu'il soit conforme à des contraintes bien définies, comme l'écart maximal admissible en tension DC. 
Enfin, une proposition de "frequency droop" (ou "statisme"), permettant aux convertisseurs de participer au réglage de la fréquence des réseaux AC auxquels ils sont connectés, est étudiée. Le frequency droop est utilisé conjointement avec le voltage droop afin de garantir le bon fonctionnement de la partie AC et de la partie DC. Cependant, l'utilisation des deux droops génère un couplage indésirable entre les deux commandes. Ces interactions sont mathématiquement quantifiées et une correction à apporter au paramètre de frequency droop est proposée. Ces résultats sont ensuite validés par des simulations EMT et par des essais sur la plate-forme MTDC du laboratoire L2EP. / HVDC transmission systems are largely used worldwide, mostly in the form of back-to-back and point-to-point HVDC, using either thyristor-based LCC or IGBT-based VSC. With the recent deployment of the INELFE HVDC link between France and Spain, and the commissioning in China of a three-terminal HVDC transmission system using Modular Multilevel Converters (MMCs), a modular design of voltage source converters, the focus of the scientific community has shifted onto the analysis and control of MMC-based HVDC transmission systems. In this thesis, the average value models of both a standard 2-level VSC and an MMC are proposed, and the most interesting difference between the two converter technologies (the control of the stored energy in the MMC) is emphasised and explained. These models are then linearised, expressed in state-space form and validated by comparing their behaviour to more detailed models under EMT programs. Afterwards, these state-space representations are used in the modelling of HVDC transmission systems, either point-to-point or Multi-Terminal HVDC (MTDC). A modal analysis is performed on an HVDC link, for both 2-level VSCs and MMCs. The modes of these two systems are specified and compared, and the independent control of the DC voltage and the DC current in the case of an MMC is illustrated. 
This analysis is extended to the scope of a 5-terminal HVDC system in order to perform a stability analysis, understand the origin of the system dynamics and identify the dominant DC voltage mode that dictates the DC voltage response time. Using the Singular Value Decomposition method on the MTDC system, the proper design of the voltage-droop gains of the controllers is then achieved so that the system operates within physical constraints, such as the maximum DC voltage deviation and the maximum admissible current in the power electronics. Finally, a supplementary droop, the "frequency-droop control", is proposed so that MTDC systems also participate in the frequency regulation of the onshore AC grids. However, this controller interacts with the voltage-droop controller. This interaction is mathematically quantified and a corrected frequency-droop gain is proposed. This control is then illustrated with an application to the physical converters of the Twenties project mock-up.
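The voltage-droop idea discussed above can be illustrated with a toy proportional law. The per-unit numbers and the three-terminal layout are invented for the example and are not taken from the thesis:

```python
def droop_power(p_ref, v_dc, v_ref, k_droop):
    """Voltage-droop law: the converter deviates from its power
    set-point in proportion to the DC-voltage error (per-unit values)."""
    return p_ref - (v_dc - v_ref) / k_droop

v_ref, k = 1.0, 0.05           # 5 % droop gain (assumed)
setpoints = [0.6, -0.3, -0.3]  # one rectifier feeding two inverters

# After a disturbance the common DC voltage settles 1 % high, so each
# droop-controlled terminal backs off by 0.01 / 0.05 = 0.2 pu.
powers = [round(droop_power(p, 1.01, v_ref, k), 3) for p in setpoints]
print(powers)  # [0.4, -0.5, -0.5]
```

Choosing the droop gain k is exactly the design problem the singular-value analysis addresses: a smaller k reacts more aggressively to voltage errors but produces a larger power swing for a given DC-voltage deviation.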
|
353 |
On-the-Fly Dynamic Dead Variable Analysis. Self, Joel P., 22 March 2007
State explosion in model checking continues to be the primary obstacle to widespread use of software model checking. The large input ranges of variables used in software are the main cause of state explosion. As software grows in size and complexity, the problem only becomes worse. As such, model checking research into data abstraction as a way of mitigating state explosion has become increasingly important. Data abstractions aim to reduce the effect of large input ranges. This work focuses on a static program analysis technique called dead variable analysis. The goal of dead variable analysis is to discover variable assignments that are never used. When applied to model checking, this allows us to ignore the entire input range of dead variables and thus reduce the size of the explored state space. Prior research into dead variable analysis for model checking does not make full use of the dynamic run-time information that is present during model checking. We present an algorithm for intraprocedural dead variable analysis that uses dynamic run-time information to find more dead variables on-the-fly and further reduce the size of the explored state space. We introduce a definition of the maximal state space reduction possible through an on-the-fly dead variable analysis and then show that our algorithm produces a maximal reduction in the absence of non-determinism.
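As a point of reference, a purely static version of dead variable analysis on straight-line code can be sketched as below; the instruction encoding is invented for the example. The thesis's on-the-fly variant additionally exploits run-time information available during model checking to find more dead assignments than such a static scan can:

```python
def dead_assignments(instrs):
    """Backward scan over straight-line code: an assignment is dead if
    its target is overwritten (or the program ends) before any read.
    Each instruction is (target_variable_or_None, set_of_variables_read)."""
    live, dead = set(), []
    for i in range(len(instrs) - 1, -1, -1):
        target, reads = instrs[i]
        if target is not None and target not in live:
            dead.append(i)        # value is never consumed
        if target is not None:
            live.discard(target)  # the assignment kills prior liveness
        live |= reads             # uses become live upstream
    return sorted(dead)

# x = 1; y = x + 1; x = 5; return y   ->  the write "x = 5" is dead
prog = [("x", set()), ("y", {"x"}), ("x", set()), (None, {"y"})]
print(dead_assignments(prog))  # [2]
```

Ignoring the entire input range of `x` at instruction 2 shrinks the explored state space without affecting any observable behaviour, which is exactly the reduction the abstract exploits.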
|
354 |
Data-driven Interpolation Methods Applied to Antenna System Responses: Implementation of and Benchmarking / Datadrivna interpolationsmetoder applicerade på systemsvar från antenner: Implementering av och prestandajämförelse. Åkerstedt, Lucas, January 2023
With the advances in the telecommunications industry, there is a need to solve the in-band full-duplex (IBFD) problem for antenna systems. One premise for solving the IBFD problem is to have strong isolation between transmitter and receiver antennas in an antenna system. To increase isolation, antenna engineers are dependent on simulation software to calculate the isolation between the antennas, i.e., the mutual coupling. Full-wave simulations that accurately calculate the mutual coupling between antennas are time-consuming, and there is a need to reduce the required time. In this thesis, we investigate how implemented data-driven interpolation methods can be used to reduce the simulation times when applied to frequency domain solvers. Here, we benchmark four different interpolation methods: vector fitting, the Loewner framework, Cauchy interpolation, and a modified version of Nevanlinna-Pick interpolation. These four interpolation methods are benchmarked on seven different antenna frequency responses, to investigate their performance in terms of how many interpolation points they require to reach a certain root mean squared error (RMSE) tolerance. We also benchmark different frequency sampling algorithms together with the interpolation methods. Here, we have predetermined frequency sampling algorithms such as the linear frequency sampling distribution and Chebyshev-based frequency sampling distributions. We also benchmark two kinds of adaptive frequency sampling algorithms. The first type is compatible with all four interpolation methods, and it selects the next frequency sample by analyzing the dynamics of the previously generated interpolant. The second adaptive frequency sampling algorithm is solely for the modified Nevanlinna-Pick interpolation method, and it is based on the free parameter in Nevanlinna-Pick interpolation. 
From the benchmark results, two interpolation methods successfully decrease the RMSE as a function of the number of interpolation points used, namely, vector fitting and the Loewner framework. Here, the Loewner framework performs slightly better than vector fitting. The benchmark results also show that vector fitting is less dependent on which frequency sampling algorithm is used, while the Loewner framework is more dependent on the frequency sampling algorithm. For the Loewner framework, Chebyshev-based frequency sampling distributions proved to yield the best performance. / Med de snabba utvecklingarna i telekomindustrin så har det uppstått ett behov av att lösa det så kallade in-band full-duplex-problemet (IBFD). En premiss för att lösa IBFD-problemet är att framgångsrikt isolera transmissionsantennen från mottagarantennen inom ett antennsystem. För att öka isolationen mellan antennerna måste antenningenjörer använda sig av simulationsmjukvara för att beräkna isoleringen (den ömsesidiga kopplingen mellan antennerna). Full-wave-simuleringar som noggrant beräknar den ömsesidiga kopplingen är tidskrävande. Det finns därför ett behov av att minska simulationstiderna. I denna avhandling kommer vi att undersöka hur våra implementerade och datadrivna interpoleringsmetoder kan vara till hjälp för att minska de tidskrävande simuleringstiderna, när de används på frekvensdomänslösare. Här prestandajämför vi de fyra interpoleringsmetoderna vector fitting, Loewner-ramverket, Cauchy-interpolering och modifierad Nevanlinna-Pick-interpolering. Dessa fyra interpoleringsmetoder är prestandajämförda på sju olika antennsystemsvar, med avseende på hur många interpoleringspunkter de behöver för att nå en viss root mean squared error (RMSE)-tolerans. Vi prestandajämför också olika frekvenssamplingsalgoritmer tillsammans med interpoleringsmetoderna. 
Här använder vi oss av förbestämda frekvenssamplingsdistributioner så som linjär samplingsdistribution och Chebyshev-baserade samplingsdistributioner. Vi använder oss också av två olika sorters adaptiva frekvenssamplingsalgoritmer. Den första sortens adaptiva frekvenssamplingsalgoritm är kompatibel med alla de fyra interpoleringsmetoderna, och den väljer nästa frekvenspunkt genom att analysera den föregående interpolantens dynamik. Den andra adaptiva frekvenssamplingsalgoritmen är enbart för den modifierade Nevanlinna-Pick-interpoleringsmetoden, och den baserar sitt val av nästa frekvenspunkt på den fria parametern i Nevanlinna-Pick-interpolering. Från resultaten av prestandajämförelsen ser vi att två interpoleringsmetoder framgångsrikt lyckas minska medelvärdesfelet som en funktion av antalet interpoleringspunkter som används. Dessa två metoder är vector fitting och Loewner-ramverket. Här presterar Loewner-ramverket aningen bättre än vad vector fitting gör. Prestandajämförelsen visar också att vector fitting inte är lika beroende av vilken frekvenssamplingsalgoritm som används, medan Loewner-ramverket är mer beroende av vilken frekvenssamplingsalgoritm som används. För Loewner-ramverket visade det sig att Chebyshev-baserade frekvenssamplingsalgoritmer presterade bäst.
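Of the benchmarked methods, the Loewner framework is the easiest to sketch: it tabulates divided differences of the sampled response into a Loewner matrix whose rank reveals the order of the underlying rational function. The toy first-order response and the sample points below are assumptions chosen for illustration:

```python
def loewner_matrix(left, right, H):
    """Loewner matrix L[i][j] = (H(l_i) - H(r_j)) / (l_i - r_j), built
    from two disjoint sets of sample points of a transfer function H."""
    return [[(H(l) - H(r)) / (l - r) for r in right] for l in left]

# Toy first-order response H(s) = 1 / (s + 1); sample points are arbitrary.
H = lambda s: 1.0 / (s + 1.0)
L = loewner_matrix([0.0, 2.0], [1.0, 3.0], H)

# For data sampled from a degree-1 rational function the Loewner matrix
# has rank 1, so its 2x2 determinant vanishes (up to rounding).
det = L[0][0] * L[1][1] - L[0][1] * L[1][0]
print(abs(det) < 1e-12)  # True
```

In the actual method, a rank-revealing factorization of this matrix (built from measured frequency-response samples) directly yields a reduced-order interpolating model, which is why so few sample points can suffice.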
|
355 |
Beyond the horizon: improved long-range sequence modeling, from dynamical systems to language. Fathi, Mahan, 01 1900
The research presented in this thesis was conducted under the joint supervision of Pierre-Luc Bacon, affiliated with Mila - Quebec Artificial Intelligence Institute and Université de Montréal, and Ross Goroshin, affiliated with Google DeepMind. The involvement of both supervisors was integral to the development and completion of this work. / Cette thèse est ancrée dans deux aspirations principales: (i) l'extension des longueurs de séquence pour une fidélité de prédiction supérieure pendant les phases d'entraînement et de test, et (ii) l'amélioration de l'efficacité computationnelle des modèles de séquence. Le défi fondamental de la modélisation de séquences réside dans la prédiction ou la génération précise sur de longs horizons.
Les modèles traditionnels, tels que les Réseaux Neuronaux Récurrents (RNN), possèdent des capacités intrinsèques pour la gestion de séquences, mais présentent des lacunes sur de longues séquences. Le premier article, "Correction de Cours des Représentations de Koopman," introduit le Réencodage Périodique pour les Autoencodeurs de Koopman, offrant une solution à la dérive dans les prédictions à long horizon, assurant la stabilité du modèle sur de longues séquences.
Les défis subséquents des RNN ont orienté l'attention vers les Transformateurs, qui présentent toutefois une longueur de contexte bornée et un temps d'exécution quadratique. Des innovations récentes dans les Modèles d'Espace d'État (SSM) soulignent leur potentiel pour la modélisation de séquences. Notre second article, "Transformateurs d'État-Block," exploite les puissantes capacités de contextualisation des SSM, fusionnant les forces des Transformateurs avec les avantages des SSM. Cette fusion renforce la modélisation linguistique, surtout dans les contextes exigeant une inférence et un contexte étendus.
En essence, cette thèse se concentre sur l'avancement de l'inférence de séquence à longue portée, chaque article offrant des approches distinctes pour améliorer la portée et la précision de la modélisation prédictive dans les séquences, incarnées par le titre "Au-delà de l'Horizon." / This thesis is anchored in two principal aspirations: (i) the extension of sequence lengths for superior prediction fidelity during both training and test phases, and (ii) the enhancement of computational efficiency in sequence models. The fundamental challenge in sequence modeling lies in accurate prediction or generation across extended horizons. Traditional models, like Recurrent Neural Networks (RNNs), possess inherent capacities for sequence management, but exhibit shortcomings over extended sequences. The first article, "Course Correcting Koopman Representations," introduces Periodic Reencoding for Koopman Autoencoders, offering a solution to the drift in long-horizon predictions, ensuring model stability across lengthy sequences. Subsequent challenges in RNNs have shifted focus to Transformers, with a bounded context length and quadratic runtime. Recent innovations in State-Space Models (SSMs) underscore their potential for sequence modeling. Our second article, "Block-State Transformers," exploits the potent contextualization capabilities of SSMs, melding Transformer strengths with SSM benefits. This fusion augments language modeling, especially in contexts demanding extensive range inference and context. In essence, this thesis revolves around advancing long-range sequence inference, with each article providing distinctive approaches to enhance the reach and accuracy of predictive modeling in sequences, epitomized by the title "Beyond the Horizon."
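The SSM building block referred to in both abstracts is, at its core, a linear recurrence; a scalar sketch with hand-picked coefficients (not learned ones, and far simpler than the vectorized layers in the articles) shows how state carries information across a sequence:

```python
def ssm_scan(A, B, C, inputs, x0=0.0):
    """Scalar linear state-space recurrence
    x[k+1] = A*x[k] + B*u[k],  y[k] = C*x[k] — the primitive that SSM
    layers apply, in learned and vectorized form, along a sequence."""
    x, ys = x0, []
    for u in inputs:
        ys.append(C * x)
        x = A * x + B * u
    return ys

# With A close to 1 the state retains an impulse for many steps, which is
# what makes SSMs attractive for long-range dependencies.
ys = ssm_scan(A=0.9, B=1.0, C=1.0, inputs=[1.0] + [0.0] * 4)
print([round(y, 3) for y in ys])  # [0.0, 1.0, 0.9, 0.81, 0.729]
```

Because the recurrence is linear, it can also be evaluated as a convolution or a parallel scan, which is where the computational-efficiency gains over quadratic attention come from.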
|
356 |
Advanced Features in Protocol Verification: Theory, Properties, and Efficiency in Maude-NPA. Santiago Pinazo, Sonia, 31 March 2015
The area of formal analysis of cryptographic protocols has been an active one since the mid-1980s. The idea is to verify communication protocols that use encryption to guarantee secrecy and that use authentication of data to ensure security. Formal methods are used in protocol analysis to provide formal proofs of security, and to uncover bugs and security flaws that in some cases had remained unknown long after the original protocol publication, as in the case of the well-known Needham-Schroeder Public Key (NSPK) protocol. In this thesis we tackle problems regarding the three main pillars of protocol verification: modelling capabilities, verifiable properties, and efficiency.
This thesis is devoted to investigating advanced features in the analysis of cryptographic protocols tailored to the Maude-NPA tool. This tool is a model checker for cryptographic protocol analysis that allows for the incorporation of different equational theories and operates in the unbounded session model without the use of data or control abstraction. An important contribution of this thesis concerns theoretical aspects of protocol verification in Maude-NPA. First, we define a forwards operational semantics, using rewriting logic as the theoretical framework and the Maude programming language as tool support. This is the first time that a forwards rewriting-based semantics is given for Maude-NPA. Second, we also study the problem that arises in cryptographic protocol analysis when it is necessary to guarantee that certain terms generated during a state exploration are in normal form with respect to the protocol equational theory.
We also study techniques to extend Maude-NPA's capabilities to support the verification of a wider class of protocols and security properties. First, we present a framework to specify and verify sequential protocol compositions in which one or more child protocols make use of information obtained from running a parent protocol. Second, we present a theoretical framework to specify and verify protocol indistinguishability in Maude-NPA. This kind of property aims to verify that an attacker cannot distinguish between two versions of a protocol: for example, one using one secret and one using another, as happens in electronic voting protocols.
Finally, this thesis contributes to improving the efficiency of protocol verification in Maude-NPA. We define several techniques which drastically reduce the state space, and can often yield a finite state space, so that whether the desired security property holds or not can in fact be decided automatically, in spite of the general undecidability of such problems. / Santiago Pinazo, S. (2015). Advanced Features in Protocol Verification: Theory, Properties, and Efficiency in Maude-NPA [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48527
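Maude-NPA's backwards search can be caricatured with an explicit-state sketch: start from an attack state pattern and apply transition rules in reverse until an initial state is reached. The real tool searches symbolically, by narrowing modulo equational theories over unbounded state spaces; the bounded, fully concrete version below, with an invented one-rule "protocol", only illustrates the direction of the search:

```python
from collections import deque

def backwards_reachable(bad, initial, backward_steps, limit=10000):
    """Breadth-first search from a bad (attack) state, applying rules in
    reverse; returns True if the initial state is reachable backwards.
    The explicit state set and the limit are artifacts of this sketch."""
    seen, frontier = {bad}, deque([bad])
    while frontier and len(seen) < limit:
        state = frontier.popleft()
        if state == initial:
            return True         # attack state is reachable from initial
        for prev in backward_steps(state):
            if prev not in seen:
                seen.add(prev)
                frontier.append(prev)
    return False                # search space exhausted: attack unreachable

# Toy "protocol": a counter that only increments; the attack state is 3.
step_back = lambda n: [n - 1] if n > 0 else []
print(backwards_reachable(bad=3, initial=0, backward_steps=step_back))  # True
```

The state-space reduction techniques described above matter precisely because, unlike this toy, real protocol searches rarely terminate without them.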
|
357 |
Modeling and Analysis of Advanced Cryptographic Primitives and Security Protocols in Maude-NPA. Aparicio Sánchez, Damián, 23 December 2022
Tesis por compendio / [ES] La herramienta criptográfica Maude-NPA es un verificador de modelos especializado para protocolos de seguridad criptográficos que tienen en cuenta las propiedades algebraicas de un sistema criptográfico. En la literatura, las propiedades criptográficas adicionales han descubierto debilidades de los protocolos de seguridad y, en otros casos, son parte de los supuestos de seguridad del protocolo para funcionar correctamente. Maude-NPA tiene una base teórica en la rewriting logic, la unificación ecuacional y el narrowing para realizar una búsqueda hacia atrás desde un patrón de estado inseguro para determinar si es alcanzable o no. Maude-NPA se puede utilizar para razonar sobre una amplia gama de propiedades criptográficas, incluida la cancelación del cifrado y descifrado, la exponenciación de Diffie-Hellman, el exclusive-or y algunas aproximaciones del cifrado homomórfico.
En esta tesis consideramos nuevas propiedades criptográficas, ya sea como parte de protocolos de seguridad o para descubrir nuevos ataques. También hemos modelado diferentes familias de protocolos de seguridad, incluidos los Distance Bounding Protocols or Multi-party key agreement protocolos. Y hemos desarrollado nuevas técnicas de modelado para reducir el coste del análisis en protocolos con tiempo y espacio. Esta tesis contribuye de varias maneras al área de análisis de protocolos criptográficos y muchas de las contribuciones de esta tesis pueden ser útiles para otras herramientas de análisis criptográfico. / [CA] L'eina criptografica Maude-NPA es un verificador de models especialitzats per a protocols de seguretat criptogràfics que tenen en compte les propietats algebraiques d'un sistema criptogràfic. A la literatura, les propietats criptogràfiques addicionals han descobert debilitats dels protocols de seguretat i, en altres casos, formen part dels supòsits de seguretat del protocol per funcionar correctament. Maude-NPA te' una base teòrica a la rewriting lògic, la unificació' equacional i narrowing per realitzar una cerca cap enrere des d'un patró' d'estat insegur per determinar si es accessible o no. Maude-NPA es pot utilitzar per raonar sobre una amplia gamma de propietats criptogràfiques, inclosa la cancel·lació' del xifratge i desxifrat, l'exponenciacio' de Diffie-Hellman, el exclusive-or i algunes aproximacions del xifratge homomòrfic.
En aquesta tesi, considerem noves propietats criptogràfiques, ja sigui com a part de protocols de seguretat o per descobrir nous atacs. També hem modelat diferents famílies de protocols de seguretat, inclosos els Distance Bounding Protocols o Multi-party key agreement protocols. I hem desenvolupat noves tècniques de modelització de protocols per reduir el cost de l'anàlisi en protocols amb temps i espai. Aquesta tesi contribueix de diverses maneres a l'àrea de l'anàlisi de protocols criptogràfics i moltes de les contribucions d'aquesta tesi poden ser útils per a altres eines d'anàlisi criptogràfic. / [EN] The Maude-NPA crypto tool is a specialized model checker for cryptographic security protocols that take into account the algebraic properties of the cryptosystem. In the literature, additional crypto properties have uncovered weaknesses of security protocols and, in other cases, they are part of the protocol security assumptions in order to function properly. Maude-NPA has a theoretical basis on rewriting logic, equational unification, and narrowing to perform a backwards search from an insecure state pattern to determine whether or not it is reachable. Maude-NPA can be used to reason about a wide range of cryptographic properties, including cancellation of encryption and decryption, Diffie-Hellman exponentiation, exclusive-or, and some approximations of homomorphic encryption.
In this thesis, we consider new cryptographic properties, either as part of security protocols or to discover new attacks. We have also modeled different families of security protocols, including Distance Bounding Protocols or Multi-party key agreement protocols. And we have developed new protocol modeling techniques to reduce the time and space analysis effort. This thesis contributes in several ways to the area of cryptographic protocol analysis and many of the contributions of this thesis can be useful for other crypto analysis tools. / This thesis would not have been possible without the funding of a set of research projects. The main contributions and derivative works of this thesis
have been made in the context of the following projects:
- Ministry of Economy and Business of Spain: Project LoBaSS "Effective Solutions Based on Logic", scientific research under award number TIN2015-69175-C4-1-R; this project focused on using powerful logic-based technologies to analyze safety-critical systems.
- Air Force Office of Scientific Research of the United States of America: Project "Advanced symbolic methods for the cryptographic protocol analyzer Maude-NPA", scientific research under award number FA9550-17-1-0286.
- State Investigation Agency of Spain: Project FREETech "Formal Reasoning for Enabling and Emerging Technologies", scientific I+D-i research under award number RTI2018-094403-B-C32 / Aparicio Sánchez, D. (2022). Modeling and Analysis of Advanced Cryptographic Primitives and Security Protocols in Maude-NPA [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/190915 / Compendio
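The exclusive-or theory mentioned above is a good example of why algebraic properties matter: the self-cancellation law x ⊕ x = 0 is invisible to a free-algebra analysis yet trivially exploitable. A minimal concrete demonstration, with a toy message and keystream unrelated to any protocol in the thesis:

```python
def xor_bytes(a, b):
    """Bitwise exclusive-or of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Self-cancellation x XOR x = 0: applying the same keystream twice
# returns the plaintext — an equation a free-algebra model misses.
msg = b"attack at dawn"
key = bytes(range(len(msg)))  # toy keystream, not a real cipher
ct = xor_bytes(msg, key)
print(xor_bytes(ct, key) == msg)  # True: (m ^ k) ^ k = m
```

A symbolic analyzer that unifies modulo this equation can discover attacks (e.g. keystream reuse) that term-by-term syntactic matching can never find, which is why Maude-NPA's support for such theories is central to the contributions above.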
|
358 |
Analyse und Modellierung des Reifenübertragungsverhaltens bei transienten und extremen Fahrmanövern / Analysis and modelling of tyre transfer behaviour for transient and extreme driving manoeuvres. Einsle, Stefan, 11 March 2011
Durch den zunehmenden Einsatz fahrdynamischer Regelsysteme und der Fahrzeugauslegung im Grenzbereich gewinnt die Modellierung des Reifenübertragungsverhaltens bei transienten und extremen Fahrmanövern signifikant an Bedeutung. Die im Rahmen dieser Arbeit neu entwickelten Messverfahren zur Analyse und Charakterisierung des transienten Reifenseitenkraftverhaltens zeigen, dass die bisher gewählten Verzögerungsansätze erster Ordnung, beschrieben durch die Einlauflänge, keine ausreichende Abbildungsgenauigkeit liefern. Folglich wird ein neuer Verzögerungsansatz zweiter Ordnung eingeführt und durch den Parameter Einlaufdämpfung zweckmäßig beschrieben. Weiterhin wird nachgewiesen, dass die allgemein gebräuchliche Schätzung der Einlauflänge aus Schräglaufsteifigkeit und Lateralsteifigkeit vor allem bei hohen Radlasten deutlich zu geringe Werte liefert. Zur Abdeckung eines möglichst breiten Anwendungsbereichs werden die Parametereinflüsse Radlast, Fülldruck, Sturz, Vorspur und Geschwindigkeit messtechnisch ermittelt und im neuen Modellansatz berücksichtigt. Auch für die quasistatische Schräglaufsteifigkeit wird ein neues Bestimmungsverfahren mit entsprechenden Einflussanalysen vorgestellt. Bei extremen Fahrmanövern spielt die Fahrzeugstabilität, welche hochsensitiv auf das Reifenverhalten unter Extrembelastungen reagiert, eine entscheidende Rolle. Auch für diesen Anwendungsfall werden neue Mess‐ und Parametrisierungsverfahren eingeführt. Im Gegensatz zu anderen Arbeiten wird auf den gesamten Entstehungsprozess von Reifenmodelldatensätzen eingegangen. Dieser besteht im Wesentlichen aus Reifenmessung, Signalverarbeitung, Auswahl charakteristischer Kennlinien, methodischer Reifenmodellauswahl, automatischer Parameteridentifikation und qualitativem sowie quantitativem Nachweis der Abbildungsgüte des entstandenen Datensatzes. In diesem Prozess werden Schwachstellen aufgezeigt und durch neue Methoden beseitigt. 
Die drei Reifenmodelle MF-Tyre, FTire und TM-Easy werden analysiert, parametrisiert und unter transienten und extremen Randbedingungen in Kombination mit MKS-Modellen validiert und getestet. Somit kann die Qualität der erzielten Ergebnisse im Verhältnis zum Parametrisierungsaufwand und der Prozesssicherheit für eine Einsatzempfehlung der verschiedenen Reifenmodelle herangeführt werden. Die Qualität der neuen Reifenmodelldatensätze in Verbindung mit der Radaufhängung wird anhand eines neu entwickelten hochdynamischen Achsprüfstandes durch den Vergleich von Messung und MKS-Simulation validiert. Dazu werden sowohl transiente als auch extreme Manöver mit deren realistischen Belastungssituationen nachgestellt. Auch der Einfluss auf die Gesamtfahrzeugsimulation wird anhand entsprechender Manöver nachgewiesen. Darüber hinaus erfolgt die Herleitung eines linearen Einspurmodells mit transientem Reifenseitenkraftverhalten im Zustandsraum, anhand dessen der dominante Reifeneinfluss auf die Gierreaktion von Fahrzeugen nachgewiesen wird. / Due to the growing influence of vehicle dynamic control systems and suspension dimensioning in stability regions, transient and extreme tyre transfer behaviour gains importance significantly. Two new measurement procedures are introduced to analyze and characterize this tyre behaviour. The results show that the commonly used estimation of the relaxation length by the quotient of cornering and lateral stiffness yields far too small values and that the first order transfer model is insufficient to describe the transient tyre lateral force behaviour. Consequently, a new second order approach is introduced and described by the new parameter relaxation damping. The performed parameter study regarding wheel load, inflation pressure, camber angle, toe angle and driving velocity covers a wide application range of tyres. 
Moreover, the quasi-static cornering stiffness is measured and evaluated in an extended range with reduced temperature and wear influences. Extreme manoeuvres are utilized to examine the stability of vehicles, which is dominated by the tyre transfer behaviour under extreme conditions. A new measurement and parameter identification procedure for those conditions is presented as well. This thesis depicts the entire process to obtain a tyre model dataset, namely tyre measurements, signal processing, selection of characteristic curves, methodical selection of a tyre model, automatic parameter identification and qualitative and quantitative evaluation of the final dataset. The tyre models MF-Tyre, FTire, and TM-Easy are analysed, parameterized and validated under transient and extreme conditions. A comparison of the results in relation to the complexity of the parameter identification and the process stability leads to general recommendations on the application of the different tyre models. The quality of the created tyre model datasets in combination with a vehicle suspension is assessed by a comparison of measurements from the newly developed highly dynamical suspension test rig and equivalent multi-body simulations. That is, transient and extreme manoeuvres are performed and analysed. Additionally, a linear single-track model with transient tyre behaviour has been derived, which shows the dominant tyre influence on the vehicle's yaw behaviour. Finally, the influence of the created tyre model datasets and the additional lateral transfer behaviour on full-vehicle simulations is verified.
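The proposed second-order relaxation behaviour can be sketched as a damped system driven by the steady-state lateral force. The natural frequency, damping ratio, and force level below are invented placeholders, since the thesis identifies the actual relaxation parameters from measurements:

```python
def second_order_lag(f_ss, wn, zeta, dt, n):
    """Semi-implicit Euler integration of the second-order lag
    F'' + 2*zeta*wn*F' + wn**2 * F = wn**2 * f_ss,
    i.e. a first-order relaxation model augmented with a damping term."""
    f, fdot, out = 0.0, 0.0, []
    for _ in range(n):
        fddot = wn ** 2 * (f_ss - f) - 2.0 * zeta * wn * fdot
        fdot += dt * fddot
        f += dt * fdot
        out.append(f)
    return out

# Illustrative parameters only: wn plays the role of a relaxation
# frequency and zeta that of the proposed "relaxation damping".
resp = second_order_lag(f_ss=1000.0, wn=20.0, zeta=0.7, dt=0.001, n=1000)
print(abs(resp[-1] - 1000.0) < 1.0)  # True: settled at the steady-state force
```

Unlike a first-order lag, this response can overshoot and oscillate, which is the qualitative behaviour a single relaxation-length parameter cannot reproduce.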
|
359 |
Modélisation à haut niveau de systèmes hétérogènes, interfaçage analogique/numérique / High level modeling of heterogeneous systems, analog/digital interfacing. Cenni, Fabio 06 April 2012 (has links)
The subject of the thesis is the modeling of heterogeneous systems integrating different physical domains and mixed digital and analog signals (AMS). An in-depth study of different techniques for extracting and calibrating behavioral models of analog components at different levels of abstraction and accuracy is presented. This study highlighted three main approaches, which were validated by modeling several applications from various domains: a low noise amplifier (LNA), a chemical sensor based on surface acoustic waves (SAW), and the development at several abstraction levels of a CMOS video sensor and its integration into an industrial platform. The developed tools are based on the AMS extensions of the IEEE 1666 SystemC standard, but the proposed techniques are easily transferable to other languages such as VHDL-AMS or Verilog-AMS used in mixed-signal design. / The thesis objective is the modeling of heterogeneous systems. Such systems integrate different physical domains (mechanical, chemical, optical or magnetic) and therefore contain analog and mixed-signal (AMS) parts. The aim is to provide a methodology based on high-level modeling for assisting both the design and the verification of AMS systems. A study on different techniques for extracting behavioral models of analog devices at different abstraction levels and computational costs is presented. Three main approaches are identified and organized into three techniques. These techniques have been validated through the virtual prototyping of applications drawn from different domains: a low noise amplifier (LNA), a surface acoustic wave-based (SAW) chemical sensor, and a CMOS video sensor with models developed at different abstraction levels and their integration within an industrial platform.
The developed flows are based on the AMS extensions of the SystemC (IEEE 1666) standard, but the methodologies can be implemented using other analog hardware description languages (VHDL-AMS, Verilog-AMS) typically used for mixed-signal microelectronics design.
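As a language-neutral illustration of what a high-level behavioral model of an analog block looks like (the thesis works in SystemC AMS; Python is used here purely as a sketch), the fragment below models an LNA as linear gain plus input-referred noise with soft output saturation. All numeric parameters are hypothetical, not taken from the thesis.

```python
import numpy as np

def lna_behavioral(vin, gain_db=15.0, noise_rms=2e-6, v_sat=0.5, seed=0):
    """Behavioral LNA sketch: voltage gain, additive input-referred noise,
    and a tanh soft clip standing in for output saturation.

    All parameter values are illustrative placeholders.
    """
    rng = np.random.default_rng(seed)
    g = 10.0 ** (gain_db / 20.0)                    # voltage gain
    vn = noise_rms * rng.standard_normal(len(vin))  # input-referred noise
    return v_sat * np.tanh(g * (vin + vn) / v_sat)  # saturating output
```

A small-signal sine is amplified by roughly the nominal gain, while a large input is compressed toward the saturation level, which is the kind of abstraction-level trade-off (speed versus accuracy) the extraction study compares.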
|
360 |
Odhad Letových Parametrů Malého Letounu / Light Airplane Flight Parameters Estimation. Dittrich, Petr Unknown Date (has links)
This thesis focuses on the estimation of flight parameters of a light airplane, specifically the Evektor SportStar RTC. The Equation Error Method, the Output Error Method, and recursive least squares methods are used for the flight parameter estimation. The work concentrates on investigating the characteristics of the aerodynamic parameters of longitudinal motion and on verifying whether the estimated flight parameters match the measured data and thus provide the basis for a sufficiently accurate model of the airplane. The estimated flight parameters are further compared with a priori values obtained using the Tornado and AVL programs and the software version of the Datcom collection. The differences between the a priori values and the estimated flight parameters are compared with the corrections published for subsonic flight conditions of the F-18 Hornet airplane model.
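The recursive least squares method named in this abstract can be written down generically. The sketch below is a minimal textbook RLS with a forgetting factor, run on synthetic regression data rather than on SportStar RTC flight records; the regressor names in the docstring are only examples of the kind of signals used in longitudinal-motion identification.

```python
import numpy as np

def rls_estimate(Phi, y, lam=0.99, delta=1e3):
    """Recursive least squares with forgetting factor lam.

    Phi : (N, p) regressor matrix (e.g. angle of attack, pitch rate,
          elevator deflection at each sample), y : (N,) measured outputs.
    Returns the final parameter estimate theta of shape (p,).
    """
    p = Phi.shape[1]
    theta = np.zeros(p)
    P = delta * np.eye(p)  # large initial covariance: weak prior
    for phi, yk in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + k * (yk - phi @ theta)    # innovation update
        P = (P - np.outer(k, phi @ P)) / lam      # covariance update
    return theta
```

On stationary data the estimate converges to the least-squares solution; the forgetting factor lam < 1 lets the estimator track slowly varying parameters at the cost of a larger steady-state variance.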
|