
Feature Analysis in Online Signature Verification on Digital Whiteboard : An analysis of the performance of handwritten signature authentication using local and global features with hidden Markov models

Olander Sahlén, Simon January 2018 (has links)
The usage of signatures for authentication is widely accepted, and remains one of the most familiar biometrics in our society. Efforts to digitalise and automate the verification of these signatures are hot topics in the field of machine learning, and a plethora of different tools and methods have been developed and adapted for this purpose. The intention of this report is to study the authentication of handwritten signatures on digital whiteboards, and how to most effectively set up a dual verification system based on Hidden Markov models (HMMs) and global aggregate features such as average speed. The aim is to gauge which features are:
- suitable for determining that a signature is in fact genuine,
- suitable for rejecting forgeries, and
- unsuitable for gauging the authenticity of a signature altogether.
In addition, we look at the configuration of the HMMs themselves, in order to find good settings for:
- the number of components used in the model,
- the type of covariance to use, and
- the threshold at which to draw the line between a genuine signature and a forgery.
For the research, we collected a total of 200 signatures and 400 forgeries, gathered from 10 different people on digital whiteboards. We concluded that the best configurations of our HMMs had 11 components, used a full covariance model, and observed about five features, of which pressure, angle and speed were the most important. Among the global features, we discarded 11 out of 35 because they either correlated strongly with other features or contained too little discriminatory information. The strongest global features were the ones pertaining to speed, acceleration, direction, and curvature. Using the combined verification we obtained an EER of 7%, which is in the typical range of contemporary studies.
We conclude that the best way to combine global feature verification with local HMM verification is to perform both separately, and only accept signatures that are admissible by both, with tolerance levels for the global and local verifications of 1.2 and 2.5 standard deviations, respectively.
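As a concrete illustration of the dual verification rule described above, the sketch below accepts a signature only if both its global aggregate features and its HMM log-likelihood fall within the stated tolerances of the user's enrollment statistics. The function name, the Gaussian z-score form, and all arguments are illustrative assumptions, not the thesis's actual implementation; only the 1.2 and 2.5 standard-deviation tolerances come from the abstract.

```python
import numpy as np

def dual_verify(global_feats, hmm_loglik, g_mean, g_std, ll_mean, ll_std,
                g_tol=1.2, ll_tol=2.5):
    """Accept a signature only if BOTH verifiers admit it (hypothetical sketch).

    global_feats: aggregate features of the candidate (e.g. average speed).
    hmm_loglik:   log-likelihood of the candidate under the user's HMM.
    The remaining arguments are statistics assumed to be estimated from the
    user's genuine enrollment signatures; the default tolerances (1.2 and
    2.5 standard deviations) are the values reported in the abstract.
    """
    g_z = np.abs((np.asarray(global_feats, float) - g_mean) / g_std)
    ll_z = abs((hmm_loglik - ll_mean) / ll_std)
    return bool(np.all(g_z <= g_tol) and ll_z <= ll_tol)
```

A candidate is thus rejected if even one global feature drifts beyond 1.2 enrollment standard deviations, or if the HMM finds the stroke dynamics more than 2.5 standard deviations less likely than a typical genuine sample.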

An Online Evolving Method and Framework for Optimal Decision-Making in Reinforcement Learning-based Automated Vehicle Control Systems

Han, Teawon January 2020 (has links)
No description available.

Using Markov Chain Monte Carlo Models to Estimate the Severity, Duration and Cost of a Salmonellosis Outbreak of Known Size

Herrick, Robert L. January 2008 (has links)
No description available.

Intention recognition in human machine collaborative systems

Aarno, Daniel January 2007 (has links)
Robot systems have been used extensively during the last decades to provide automation solutions in a number of areas. The majority of the currently deployed automation systems are limited in that the tasks they can solve are required to be repetitive and predictable. One reason for this is the inability of today's robot systems to understand and reason about the world. Therefore the robotics and artificial intelligence research communities have made significant research efforts to produce more intelligent machines. Although significant progress has been made towards achieving robots that can interact in a human environment, there is currently no system that comes close to achieving the reasoning capabilities of humans. In order to reduce the complexity of the problem, some researchers have proposed an alternative to creating fully autonomous robots capable of operating in human environments. The proposed alternative is to allow fusion of human and machine capabilities. For example, using teleoperation a human can operate at a remote site, which may not be accessible for the operator for a number of reasons, by issuing commands to a remote agent that will act as an extension of the operator's body.
Segmentation and recognition of operator-generated motions can be used to provide appropriate assistance during task execution in teleoperative and human-machine collaborative settings. The assistance is usually provided in a virtual fixture framework where the level of compliance can be altered online in order to improve the performance in terms of execution time and overall precision. Acquiring, representing and modeling human skills are key research areas in teleoperation, programming-by-demonstration and human-machine collaborative settings. One of the common approaches is to divide the task that the operator is executing into several sub-tasks in order to make modeling manageable. This thesis is focused on two aspects of human-machine collaborative systems: classification of an operator's motion into a predefined state of a manipulation task, and assistance during a manipulation task based on virtual fixtures.
The particular applications considered consist of manipulation tasks where a human operator controls a robotic manipulator in a cooperative or teleoperative mode. A method for online task tracking using adaptive virtual fixtures is presented. Rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. To allow this, the probability of following a certain trajectory (sub-task) is estimated and used to automatically adjust the compliance of a virtual fixture, thus providing an online decision of how to fixture the movement. A layered hidden Markov model is used to model human skills. A gestem classifier that classifies the operator's motions into basic action primitives, or gestemes, is evaluated. The gestem classifiers are then used in a layered hidden Markov model to model a simulated teleoperated task. The classification performance is evaluated with respect to noise, the number of gestemes, the type of hidden Markov model, and the available number of training sequences. The layered hidden Markov model is applied to data recorded during the execution of a trajectory-tracking task in 2D and 3D with a robotic manipulator in order to give qualitative as well as quantitative results for the proposed approach. The results indicate that the layered hidden Markov model is suitable for modeling teleoperative trajectory-tracking tasks and that it is robust with respect to misclassifications in the underlying gestem classifiers. / QC 20101102
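The gestem classification step described above can be sketched as maximum-likelihood selection among per-gestem HMMs: score the observed motion segment under each candidate model and pick the winner. The toy below uses a scaled forward algorithm with 1-D Gaussian emissions; the two gestem names ("push", "pull") and every parameter value are invented for illustration and are not the thesis's models.

```python
import numpy as np

def forward_loglik(obs, pi, A, means, stds):
    """Log-likelihood of a 1-D observation sequence under a Gaussian-emission
    HMM, computed with the numerically scaled forward algorithm."""
    def emis(o):  # per-state Gaussian emission densities
        return np.exp(-0.5 * ((o - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    alpha = pi * emis(obs[0])
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * emis(o)
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll

def classify_gestem(obs, models):
    """Assign a motion segment to the gestem whose HMM scores it highest."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

In the layered architecture, the outputs of such low-level classifiers would then feed the upper-level HMM that tracks which sub-task of the manipulation the operator is performing.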

Exploiting Cyclostationarity for Radio Environmental Awareness in Cognitive Radios

Kim, Kyou Woong 09 July 2008 (has links)
The tremendous ongoing growth of wireless digital communications has raised spectrum shortage and security issues. In particular, the need for new spectrum is the main obstacle in continuing this growth. Recent studies on radio spectrum usage have shown that pre-allocation of spectrum bands to specific wireless communication applications leads to poor utilization of those allocated bands. Therefore, research into new techniques for efficient spectrum utilization is being aggressively pursued by academia, industry, and government. Such research efforts have given birth to two concepts: Cognitive Radio (CR) and the Dynamic Spectrum Access (DSA) network. CR is believed to be the key enabling technology for DSA network implementation. CR-based DSA (cDSA) networks utilize white spectrum for their operational frequency bands. White spectrum is the set of frequency bands that are temporarily unoccupied by the users holding first rights to the spectrum (called primary users). The main goal of cDSA networks is to access white spectrum. For proper access, CR nodes must identify the right cDSA network and the absence of primary users before initiating radio transmission. To solve the cDSA network access problem, methods are proposed to design unique second-order cyclic features using Orthogonal Frequency Division Multiplexing (OFDM) pilots. By generating distinct OFDM pilot patterns and measuring the spectral correlation characteristics of the cyclostationary OFDM signal, CR nodes can detect and uniquely identify cDSA networks. For this purpose, the second-order cyclic features of OFDM pilots are investigated analytically and through computer simulation. Based on the analysis results, a general formula for estimating the dominant cycle frequencies is developed. This general formula is used extensively in cDSA network identification and OFDM signal detection, as well as pilot pattern estimation.
CR spectrum awareness capability can be enhanced when the radio can classify the modulation type of incoming signals at low and varying signal-to-noise ratios. Signal classification allows the CR to select a suitable demodulation process at the receiver and to establish a communication link. For this purpose, a threshold-based technique is proposed which utilizes the cycle-frequency-domain profile for signal detection and feature extraction. Hidden Markov Models (HMMs) are proposed for the signal classifier. The spectrum awareness capability of CR can be undermined by spoofing radio nodes. Automatic identification of malicious or malfunctioning radio signal transmitters is a major concern for CR information assurance. To minimize the threat from spoofing radio devices, radio signal fingerprinting using second-order cyclic features is proposed as an approach to Specific Emitter Identification (SEI). The feasibility of this approach is demonstrated through the identification of IEEE 802.11a/g OFDM signals from different Wireless Local Area Network (WLAN) card manufacturers using HMMs. / Ph. D.
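A minimal sketch of the second-order cyclic feature idea: for a cyclostationary signal, the cyclic autocorrelation is large only at the signal's cycle frequencies, so scanning candidate frequencies reveals them. The amplitude-modulated noise example and all parameters below are illustrative assumptions, not the dissertation's OFDM pilot design or its general cycle-frequency formula.

```python
import numpy as np

def cyclic_autocorr(x, alpha_hz, lag, fs):
    """Estimate the cyclic autocorrelation R_x^alpha(lag): the lag product
    x(t) x*(t+lag) demodulated at candidate cycle frequency alpha."""
    n = np.arange(len(x) - lag)
    return np.mean(x[n] * np.conj(x[n + lag]) * np.exp(-2j * np.pi * alpha_hz * n / fs))

# Demo: noise whose amplitude is modulated at 50 Hz is cyclostationary, so
# the cyclic autocorrelation at lag 0 peaks at the 50 Hz cycle frequency.
rng = np.random.default_rng(0)
fs, f_mod, n_samp = 1000.0, 50.0, 20000
t = np.arange(n_samp) / fs
x = (1 + 0.8 * np.cos(2 * np.pi * f_mod * t)) * rng.standard_normal(n_samp)
alphas = np.arange(10.0, 101.0, 10.0)          # candidate cycle frequencies (Hz)
strengths = [abs(cyclic_autocorr(x, a, 0, fs)) for a in alphas]
best_alpha = alphas[int(np.argmax(strengths))]
```

A stationary signal would show no such peak at any nonzero alpha, which is exactly why these features survive noise and support both detection and emitter fingerprinting.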

An integrated approach to feature compensation combining particle filters and Hidden Markov Models for robust speech recognition

Mushtaq, Aleem 19 September 2013 (has links)
The performance of automatic speech recognition systems often degrades in adverse conditions where there is a mismatch between training and testing conditions. This is true for most modern systems, which employ Hidden Markov Models (HMMs) to decode speech utterances. One strategy is to map the distorted features back to clean speech features that correspond well to the features used for training the HMMs. This can be achieved by treating the noisy speech as a distorted version of the clean speech of interest. Under this framework, we can track and consequently extract the underlying clean speech from the noisy signal and use this derived signal to perform utterance recognition. The particle filter is a versatile tracking technique that can be used where conventional techniques such as the Kalman filter often fall short. We propose a particle-filter-based algorithm to compensate the corrupted features according to an additive noise model, incorporating both the statistics from clean speech HMMs and the observed background noise to map noisy features back to clean speech features. Instead of using specific knowledge at the model and state levels from the HMMs, which is hard to estimate, we pool model states into clusters as side information. Since each cluster encompasses more statistics than the original HMM states, there is a higher possibility that the newly formed probability density function at the cluster level can cover the underlying speech variation and generate appropriate particle filter samples for feature compensation. Additionally, a dynamic joint tracking framework to monitor the clean speech signal and noise simultaneously is also introduced to obtain good noise statistics. In this approach, the information available from clean speech tracking can be effectively used for noise estimation. The availability of dynamic noise information enhances the robustness of the algorithm in case of large fluctuations in noise parameters within an utterance.
Testing the proposed particle-filter-based compensation (PFC) scheme on the Aurora 2 connected-digit recognition task, we achieve an error reduction of 12.15% over the best multi-condition trained models using this integrated PF-HMM framework to estimate the cluster-based HMM state sequence information. Finally, we extended the PFC framework and evaluated it on a large-vocabulary recognition task, showing that PFC works well for large-vocabulary systems as well.
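The core compensation step can be sketched as importance sampling: draw particles for the clean feature from a prior (here a single Gaussian standing in for the pooled cluster-level statistics), weight them by the additive-noise likelihood, and take the posterior mean. This one-step scalar sketch is an assumption-laden simplification, not the PFC algorithm itself.

```python
import numpy as np

def pf_compensate(y, prior_mean, prior_std, noise_std, n_particles=2000, seed=0):
    """One-step particle-filter feature compensation (simplified sketch).

    Particles for the clean feature x are drawn from a cluster-level prior
    (a stand-in for the pooled HMM-state statistics described above) and
    weighted by the additive-noise likelihood p(y | x), with y = x + n.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(prior_mean, prior_std, n_particles)
    log_w = -0.5 * ((y - particles) / noise_std) ** 2   # Gaussian log-likelihood
    w = np.exp(log_w - log_w.max())                     # stabilized weights
    w /= w.sum()
    return float(np.sum(w * particles))                 # posterior-mean estimate
```

With both densities Gaussian the exact posterior mean is y * sigma_prior^2 / (sigma_prior^2 + sigma_noise^2), which gives a direct sanity check on the particle estimate.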

Wind models and stochastic programming algorithms for en route trajectory prediction and control

Tino, Clayton P. 13 January 2014 (has links)
There is a need for a fuel-optimal required time of arrival (RTA) mode for aircraft flight management systems, capable of enabling controlled time of arrival functionality in the presence of wind speed forecast uncertainty. A computationally tractable two-stage stochastic algorithm utilizing a data-driven, location-specific forecast uncertainty model to generate forecast uncertainty scenarios is proposed as a solution. Three years of Aircraft Communications Addressing and Reporting System (ACARS) wind speed reports are used in conjunction with corresponding wind speed forecasts from the Rapid Update Cycle (RUC) forecast product to construct an inhomogeneous Markov model quantifying forecast uncertainty characteristics along specific routes through the national airspace system. The forecast uncertainty modeling methodology addresses previously unanswered questions regarding the regional uncertainty characteristics of the RUC model, and realizations of the model demonstrate a clear tendency of the RUC product to be positively biased along routes following the normal contours of the jet stream. A two-stage stochastic algorithm is then developed to calculate the fuel-optimal stage-one cruise speed given a required time of arrival at a destination waypoint and wind forecast uncertainty scenarios generated using the inhomogeneous Markov model. The algorithm utilizes a quadratic approximation of aircraft fuel flow rate as a function of cruising Mach number to quickly search for the fuel-minimum stage-one cruise speed while keeping the computational footprint small and ensuring RTA adherence. Compared to standard approaches to the problem utilizing large-scale linear programming approximations, the algorithm performs significantly better from a computational complexity standpoint, providing solutions in a fraction of the time while maintaining computational tractability in on-board systems.
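The two-stage structure can be sketched as follows: commit to a stage-one cruise speed, then, for each wind scenario, compute the stage-two speed that still meets the RTA, and pick the stage-one speed minimizing expected fuel. The quadratic fuel model, the speed envelope, and all numbers below are invented for illustration; the thesis fits its quadratic against cruise Mach number from aircraft performance data.

```python
import numpy as np

def fuel_rate(v):
    """Hypothetical quadratic fuel-flow model (kg/s versus true airspeed in
    m/s); a stand-in for the thesis's quadratic fit against cruise Mach."""
    return 0.5 + 0.002 * (v - 230.0) ** 2 / 100.0

def best_stage1_speed(d1, d2, rta, wind_scenarios, grid):
    """Grid-search the stage-one cruise speed minimizing expected fuel while
    meeting the required time of arrival in every wind scenario."""
    best_v, best_cost = None, np.inf
    for v1 in grid:
        costs = []
        for w in wind_scenarios:                     # scenario wind on both legs
            t1 = d1 / (v1 + w)
            t2 = rta - t1                            # stage two absorbs the remainder
            v2 = d2 / t2 - w if t2 > 0 else np.inf   # airspeed needed to meet the RTA
            if not (150.0 <= v2 <= 280.0):           # outside the assumed envelope
                costs = None
                break
            costs.append(fuel_rate(v1) * t1 + fuel_rate(v2) * t2)
        if costs is not None and np.mean(costs) < best_cost:
            best_v, best_cost = v1, float(np.mean(costs))
    return best_v, best_cost
```

In the thesis the scenarios come from the inhomogeneous Markov model of forecast error rather than the fixed list used here, but the recourse logic is the same: stage two corrects whatever timing error the realized wind induces.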

Engineering system design for automated space weather forecast : designing automatic software systems for the large-scale analysis of solar data, knowledge extraction and the prediction of solar activities using machine learning techniques

Alomari, Mohammad Hani January 2009 (has links)
Coronal Mass Ejections (CMEs) and solar flares are energetic events taking place at the Sun that can affect the space weather or the near-Earth environment through the release of vast quantities of electromagnetic radiation and charged particles. Solar active regions are the areas where most flares and CMEs originate. Studying the associations among sunspot groups, flares, filaments, and CMEs is helpful in understanding the possible cause-and-effect relationships between these events and features. Forecasting space weather in a timely manner is important for protecting technological systems and human life on Earth and in space. The research presented in this thesis introduces novel, fully computerised, machine learning-based decision rules and models that can be used within a system design for automated space weather forecasting. The system design in this work consists of three stages: (1) designing computer tools to find the associations among sunspot groups, flares, filaments, and CMEs; (2) applying machine learning algorithms to the association datasets; and (3) studying the evolution patterns of sunspot groups using time-series methods. Machine learning algorithms are used to provide computerised learning rules and models that enable the system to provide automated prediction of CMEs, flares, and evolution patterns of sunspot groups. These numerical rules are extracted from the characteristics, associations, and time-series analysis of the available historical solar data. The training of the machine learning algorithms is based on datasets created by investigating the associations among sunspots, filaments, flares, and CMEs. Evolution patterns of sunspot areas and McIntosh classifications are analysed using a statistical machine learning method, namely the Hidden Markov Model (HMM).
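The evolution-pattern analysis in stage (3) can be sketched in its simplest form as a first-order Markov chain over discretised sunspot classes: count observed class-to-class transitions, normalise to probabilities, and predict the most likely next class. This is a deliberately reduced stand-in for the HMM analysis of McIntosh-class time series; the class indices below are arbitrary labels, not real McIntosh codes.

```python
import numpy as np

def fit_markov(sequences, n_states):
    """Estimate a first-order Markov transition matrix from labelled
    evolution sequences (a simplified stand-in for the HMM analysis of
    McIntosh-class time series described above)."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    counts += 1e-9                    # avoid division by zero for unseen rows
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next(trans, state):
    """Most likely next class under the fitted chain."""
    return int(np.argmax(trans[state]))
```

A full HMM adds hidden states behind the observed classes, but the transition-counting step above is the backbone either way.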

Hidden Markov models and dynamic conditional correlation models : extensions and applications to stock market time series

Charlot, Philippe 25 November 2010 (has links)
The objective of this thesis is to study the modelling of regime changes in dynamic conditional correlation models, focusing in particular on the Markov-switching approach. Unlike the standard approach based on the basic hidden Markov model (HMM), we use extensions of the HMM coming from the theory of probabilistic graphical models. This discipline has proposed many derivations of the basic model for modelling complex structures. This thesis can thus be viewed at the interface of two disciplines: financial econometrics and probabilistic graphical models.
The first essay presents a model constructed from a hierarchical hidden Markov structure, which makes it possible to define different levels of granularity for the regimes. It can be seen as a special case of the RSDC (Regime Switching for Dynamic Correlations) model. Based on the hierarchical HMM, our model can capture nuances of regimes that are ignored by the classical Markov-switching approach.
The second contribution proposes a Markov-switching version of the DCC model built from the factorial HMM. While the classical Markov-switching approach assumes that all elements of the correlation matrix follow the same switching dynamic, our model allows each element of the correlation matrix to have its own switching dynamic.
In the final contribution, we propose a DCC model constructed from a decision tree. The objective of this tree is to link the level of the individual volatilities with the level of the correlations. For this, we use a hidden Markov decision tree, which is an extension of the HMM.
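The premise shared by these regime-switching correlation models is that correlations are constant within a hidden regime and jump when the Markov state switches. A toy version of that idea: decode two volatility regimes from a return series with Viterbi, then estimate one correlation matrix per decoded regime. The Gaussian emissions on smoothed absolute returns and every parameter below are illustrative simplifications, not the thesis's hierarchical, factorial, or tree-structured HMMs.

```python
import numpy as np

def viterbi_regimes(z, means, stds, p_stay=0.95):
    """Decode two regimes from a 1-D series with the Viterbi algorithm
    (Gaussian emissions, symmetric two-state Markov chain)."""
    logA = np.log(np.array([[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]))
    logB = -0.5 * ((z[:, None] - means) / stds) ** 2 - np.log(stds)
    delta = np.log(np.array([0.5, 0.5])) + logB[0]
    back = np.zeros((len(z), 2), dtype=int)
    for t in range(1, len(z)):
        cand = delta[:, None] + logA          # cand[i, j]: move from state i to j
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + logB[t]
    path = np.zeros(len(z), dtype=int)
    path[-1] = int(delta.argmax())
    for t in range(len(z) - 2, -1, -1):       # backtrack the optimal path
        path[t] = back[t + 1, path[t + 1]]
    return path

def regime_correlations(returns, path):
    """Per-regime correlation matrices: the RSDC premise that correlations
    are constant within a regime and switch with the hidden state."""
    return {int(s): np.corrcoef(returns[path == s].T) for s in np.unique(path)}
```

The graphical-model extensions studied in the thesis replace this single two-state chain with richer hidden structures, but the decode-then-estimate logic is the common thread.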

Low-dimensional modeling and control of shear flows using cluster analysis

Kaiser, Eurika 03 December 2015 (has links)
A cluster-based reduced-order modeling (CROM) strategy is developed for the unsupervised identification of nonlinear flow mechanisms and of precursors to desirable or undesirable events. The proposed approach assumes a probabilistic viewpoint, taking advantage of the linearity of the evolution equation for the probability distribution while including nonlinear actuation dynamics. The framework is applied to the Lorenz attractor, numerical data of the spatially evolving mixing layer, the three-dimensional turbulent wakes of a bluff body and of a train, and experimental data of a combustion engine. For these examples, CROM has been shown to identify quasi-attractors, such as the two shedding regimes of the mixing layer or the bimodal states of the Ahmed body. The main transition processes between those quasi-attractors are characterized by branching regions or flipper clusters. Desirable phase-space regions and possible actuation mechanisms are indicated by the analysis of cluster features such as drag and lift forces, which can be further exploited for control purposes.
In particular, a CROM-based feedback control is applied to a separating flow over a smooth ramp to examine whether the recirculation area can be diminished compared to the best open-loop periodic excitation, by turning the actuation on or off depending on the applicable cluster. The CROM-based control is compared to the complete set of control laws defined by all possible combinations of 'on' and 'off' for the given set of clusters. While the recirculation area cannot be further decreased compared to the best open-loop forcing, a similar size can be achieved with 28% (CROM-based control) or 81% (one particular control law) savings in the control input energy.
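The CROM pipeline can be sketched in two steps: cluster time-ordered flow snapshots with k-means, then count transitions between cluster labels to obtain a Markov transition matrix whose off-diagonal structure exposes the dynamics. The minimal implementation below is a sketch of that idea only; the full framework's probability evolution equation and actuation modeling are not reproduced.

```python
import numpy as np

def crom(snapshots, k=3, iters=20, seed=0):
    """Minimal cluster-based reduced-order model: k-means on flow snapshots,
    then a Markov transition matrix between the clusters."""
    rng = np.random.default_rng(seed)
    X = np.asarray(snapshots, float)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):                          # Lloyd's algorithm
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):                 # skip empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    trans = np.zeros((k, k))
    for a, b in zip(labels[:-1], labels[1:]):       # snapshots are time-ordered
        trans[a, b] += 1
    trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1)
    return labels, trans
```

On real data each snapshot is a full velocity field rather than a low-dimensional point, and the flipper clusters discussed above would show up as rows of the transition matrix that branch towards several quasi-attractors.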
