  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
431

Biologically Inspired Modular Neural Networks

Azam, Farooq 19 June 2000 (has links)
This dissertation explores modular learning in artificial neural networks, driven mainly by inspiration from the neurobiological basis of human learning. The modularization approaches presented for neural network design and learning are inspired by engineering, complexity, psychological, and neurobiological considerations. The main theme of this dissertation is to explore the organization and functioning of the brain to discover new structural and learning inspirations that can subsequently be utilized in the design of artificial neural networks. Artificial neural networks are touted as a neurobiologically inspired paradigm that emulates the functioning of the vertebrate brain. The brain is a highly structured entity with localized regions of neurons specialized in performing specific tasks. In contrast, mainstream monolithic feed-forward neural networks are generally unstructured black boxes, which is their major performance-limiting characteristic. The non-explicit structure and monolithic nature of current mainstream artificial neural networks prevent the systematic incorporation of functional or task-specific a priori knowledge into the design process. The problems caused by these limitations are discussed in detail in this dissertation, and remedial solutions driven by the functioning of the brain and its structural organization are presented. This dissertation also provides an in-depth study of currently available modular neural network architectures, highlights their shortcomings, and investigates new modular artificial neural network models that overcome them. The resulting modular neural network models offer greater accuracy, better generalization, a comprehensible and simplified neural structure, ease of training, and greater user confidence.
These benefits are readily apparent for certain problems, depending on the availability and use of a priori knowledge about the problem. The modular neural network models presented in this dissertation exploit the principle of divide and conquer in the design and learning of modular artificial neural networks. The divide-and-conquer strategy solves a complex computational problem by dividing it into simpler sub-problems and then combining the individual solutions into a solution to the original problem. The divisions of a task considered in this dissertation are the automatic decomposition of the mappings to be learned, the decomposition of the artificial neural networks themselves to minimize harmful interaction during learning, and the explicit decomposition of the application task into sub-tasks that are learned separately. The versatility and capabilities of the proposed modular neural networks are demonstrated by experimental results. A comparison of current modular neural network design techniques with those introduced in this dissertation is also presented for reference. The results lay a solid foundation for the design and learning of artificial neural networks with a sound neurobiological basis, leading to superior design techniques. Areas of future research are also presented. / Ph. D.
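The explicit task decomposition described in the abstract can be sketched in a few lines (a toy illustration only, not the dissertation's architecture): a piecewise task is split into two sub-problems, each learned by a simple linear expert, and a gate that encodes a priori knowledge of the decomposition boundary routes the inputs.

```python
import numpy as np

# Toy "divide and conquer" modular learning: the task y = |x| is explicitly
# decomposed into two sub-tasks, each learned separately by a linear expert.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.abs(x)

def fit_linear(xs, ys):
    # least-squares fit of y = a*x + b
    A = np.stack([xs, np.ones_like(xs)], axis=1)
    w, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return w

w_neg = fit_linear(x[x < 0], y[x < 0])    # expert for the left sub-problem
w_pos = fit_linear(x[x >= 0], y[x >= 0])  # expert for the right sub-problem

def modular_predict(xs):
    # gate: a priori knowledge that the task changes behaviour at x = 0
    return np.where(xs < 0, w_neg[0] * xs + w_neg[1], w_pos[0] * xs + w_pos[1])

err = np.max(np.abs(modular_predict(x) - y))
print(f"max error of modular model: {err:.2e}")
```

A single linear model cannot represent |x| at all, while the decomposed pair solves it exactly; that is the divide-and-conquer payoff in miniature.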
432

Efficient Resource Development in Electric Utilities Planning Under Uncertainty

Maricar, Noor M. 05 October 2004 (has links)
This thesis introduces an efficient resource development strategy for electric utility long-term planning under uncertainty. In recent years, electric utilities have recognized the concepts of robustness, flexibility, and risk exposure as considerations in their resource development strategies. Robustness means developing resource plans that perform well for most, if not all, futures, while flexibility allows inexpensive changes to be made if future conditions deviate from the base assumptions. Risk exposure is used to quantify the risk hazards of planning alternatives under different future conditions. This study focuses on two technical issues identified as important to efficient resource development: decision-making analysis considering robustness and flexibility, and decision-making analysis considering risk exposure. The technique combines probabilistic methods and tradeoff analysis, producing a decision set analysis concept to determine robustness that includes flexibility measures. In addition, risk impact analysis is incorporated to identify the risk exposure of planning alternatives. The contributions of the work are as follows. First, an efficient resource development framework for planning under uncertainty is developed that combines features of utility functions, tradeoff analysis, and the analytic hierarchy process, incorporating a performance evaluation approach. Second, a multi-attribute risk-impact analysis method is investigated to handle the risk hazards in power system resource planning. Third, the penetration levels of wind and photovoltaic generation technologies into the total generation mix, with their constraints, are determined using the decision-making model.
The results from two case studies show the benefits of the proposed framework by offering decision makers various options for lower-cost, lower-emission, higher-reliability, and higher-efficiency plans. / Ph. D.
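The multi-attribute tradeoff described above can be illustrated with a tiny weighted scoring matrix (all plan names, attribute values, and weights below are hypothetical, not the thesis's case-study data): candidate plans are rated on cost, emissions, reliability, and efficiency, each attribute is min-max normalized so 1 is best, and weighted scores rank the alternatives.

```python
import numpy as np

# Hypothetical multi-attribute tradeoff sketch; numbers are illustrative only.
plans = ["gas-peaker", "wind+storage", "pv+demand-response"]
# columns: cost, emissions (lower is better), reliability, efficiency (higher is better)
raw = np.array([
    [80.0, 9.0, 0.97, 0.42],
    [95.0, 1.0, 0.93, 0.38],
    [90.0, 0.5, 0.90, 0.35],
])
lower_is_better = np.array([True, True, False, False])

# Min-max normalize each attribute to [0, 1] with 1 = best.
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
norm[:, lower_is_better] = 1.0 - norm[:, lower_is_better]

# Assumed emission-focused decision-maker priorities (must sum to 1).
weights = np.array([0.15, 0.5, 0.2, 0.15])
scores = norm @ weights
best = plans[int(np.argmax(scores))]
print(best, scores.round(3))
```

Changing the weight vector changes the ranking, which is exactly the robustness question the framework addresses: a robust plan scores well across many plausible weightings and futures.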
433

Robustness and stability in dynamic constraint satisfaction problems

Climent Aunés, Laura Isabel 07 January 2014 (has links)
Constraint programming is a paradigm wherein relations between variables are stated in the form of constraints. It is well known that many real-life problems can be modeled as Constraint Satisfaction Problems (CSPs). Much effort has been spent on increasing the efficiency of algorithms for solving CSPs. However, many of these techniques assume that the set of variables, domains, and constraints involved in the CSP is known and fixed when the problem is modeled. This is a strong limitation, because many problems come from uncertain and dynamic environments, where the original problem may evolve because of the environment, the user, or other agents. In such situations, a solution that holds for the original problem can become invalid after changes. There are two main approaches for dealing with these situations: reactive and proactive. Reactive approaches entail re-solving the CSP after each solution loss, which is time consuming. That is a clear disadvantage, especially when dealing with short-term changes, where solution loss is frequent. In addition, in many applications, such as on-line planning and scheduling, the delivery time of a new solution may be too long for actions to be taken on time, so a solution loss can produce several negative effects in the modeled problem. In a task assignment production system with several machines, it could cause the shutdown of the production system, the breakage of machines, the loss of the material or object in production, etc. In a transport timetabling problem, a solution loss due to a disruption at one point may produce a delay that propagates through the entire schedule. All of these negative effects will probably also entail an economic loss. In this thesis we develop several proactive approaches, which use knowledge about possible future changes in order to avoid or minimize their effects. These approaches are applied before the changes occur.
Thus, our approaches search for robust solutions, which have a high probability of remaining valid after changes. Furthermore, some of our approaches also consider whether solutions can be easily adapted when they do not withstand the changes in the original problem. These approaches therefore also search for stable solutions, which have an alternative solution that is similar to the previous one and can be used in case of a value breakage. In this context, knowledge about the uncertain and dynamic environment sometimes exists; in many cases, however, this information is unknown or hard to obtain. For this reason, for the majority of our approaches (specifically 3 of the 4 developed), the only assumptions made about changes are those inherent in the structure of problems with ordered domains. Given this framework, and therefore the existence of a significant order over domain values, it is reasonable to assume that the original bounds of the solution space may undergo restrictive or relaxing modifications. Note that the possibility of solution loss only exists when changes over the original bounds of the solution space are restrictive. Therefore, the main objective when searching for robust solutions in this framework is to find solutions located as far away as possible from the bounds of the solution space. To meet this criterion, we propose several approaches that can be divided into enumeration-based techniques and a search algorithm. / Climent Aunés, LI. (2013). Robustness and stability in dynamic constraint satisfaction problems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34785
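The robustness criterion for ordered domains can be illustrated with a minimal enumeration-based sketch (a toy CSP, not one of the thesis's benchmarks): among all solutions, prefer the assignment whose values lie farthest from the bounds of the domains, since restrictive changes cut in from the bounds.

```python
from itertools import product

# Toy CSP with ordered domains; constraints are illustrative only.
domains = {"x": range(0, 10), "y": range(0, 10)}

def satisfies(x, y):
    return x + y >= 6 and x - y <= 4

def min_distance_to_bounds(assignment, domains):
    # distance of the most exposed value to its domain's nearest bound
    return min(
        min(v - min(dom), max(dom) - v)
        for (var, v), dom in zip(assignment.items(), domains.values())
    )

# Enumerate all solutions, then pick the most "robust" one.
solutions = [
    {"x": x, "y": y}
    for x, y in product(domains["x"], domains["y"])
    if satisfies(x, y)
]
robust = max(solutions, key=lambda s: min_distance_to_bounds(s, domains))
print(robust)
```

A solution such as {"x": 9, "y": 9} satisfies the constraints but sits on the domain bounds, so any restrictive modification invalidates it; the selected assignment tolerates shrinking every domain by several values on either side.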
434

Robust strategies for glucose control in type 1 diabetes

Revert Tomás, Ana 15 October 2015 (has links)
[EN] Type 1 diabetes mellitus is a chronic and incurable disease that affects millions of people around the world. Its main characteristic is the destruction, total or partial, of the beta cells of the pancreas. These cells are in charge of producing insulin, the main hormone involved in the control of blood glucose. Keeping blood glucose levels high for a long time has negative health effects, causing different kinds of complications. For that reason, patients with type 1 diabetes mellitus need to receive insulin exogenously. Since 1921, when insulin was first isolated for use in humans and the first glucose monitoring techniques were developed, many advances have been made in clinical treatment with insulin. Currently, two main research lines focused on improving the quality of life of diabetic patients are open. The first concentrates on stem cell research to replace damaged beta cells, while the second has a more technological orientation. This second line focuses on the development of new insulin analogs that emulate endogenous pancreatic secretion with higher fidelity, new noninvasive continuous glucose monitoring systems, insulin pumps capable of administering different insulin profiles, and the use of decision-support tools and telemedicine. The most important challenge the scientific community has to overcome is the development of an artificial pancreas, that is, of algorithms that allow automatic control of blood glucose. The main difficulty preventing tight glucose control is the high variability found in glucose metabolism. This is especially important during meal compensation. This variability, together with the delay in subcutaneous insulin absorption and action, causes controller overcorrection that leads to late hypoglycemia, the most important acute complication of insulin treatment. The proposals in this work pay special attention to overcoming these difficulties.
To this end, interval models are used to represent the patient's physiology and to take parametric uncertainty into account. This type of strategy has been used in both the open-loop proposal for insulin dosage and the closed-loop algorithm. Moreover, the idea behind the design of this last proposal is to avoid controller overcorrection so as to minimize hypoglycemia, while adding robustness against glucose sensor failures and over- or under-estimation of meal carbohydrates. The proposed algorithms have been validated both in simulation and in clinical trials. / Revert Tomás, A. (2015). Robust strategies for glucose control in type 1 diabetes [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/56001
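The interval-model idea can be illustrated with a toy excursion model (this is not the clinical model used in the thesis; the response curve, rate constants, and sensitivity interval below are all invented for illustration): when a parameter such as insulin sensitivity is only known to lie in an interval, simulating its extremes yields an envelope of possible glucose trajectories, and a dose is accepted only if the worst case stays out of hypoglycemia.

```python
import numpy as np

# Toy post-meal glucose response; the functional form and constants are
# assumptions for illustration, not a physiological model.
def glucose_trajectory(s_i, t):
    # meal raises glucose; insulin action (scaled by sensitivity s_i) lowers it
    return 120 + 60 * np.exp(-0.02 * t) - 30 * s_i * (1 - np.exp(-0.03 * t))

t = np.linspace(0, 300, 301)     # minutes after the meal
s_low, s_high = 0.6, 1.4         # assumed interval for insulin sensitivity
lower = np.minimum(glucose_trajectory(s_low, t), glucose_trajectory(s_high, t))
upper = np.maximum(glucose_trajectory(s_low, t), glucose_trajectory(s_high, t))

# Robustness check in this toy sense: even the worst-case trajectory must
# stay above a hypoglycemia threshold (70 mg/dL).
assert lower.min() > 70, "worst-case trajectory dips into hypoglycemia"
print(f"envelope at t=300: [{lower[-1]:.1f}, {upper[-1]:.1f}] mg/dL")
```

Designing against the envelope rather than a single nominal trajectory is what makes the resulting dosing conservative with respect to parametric uncertainty, which is the spirit of the interval-model approach.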
435

Enhancing Robustness and Explainability in Language Models : A Case Study on T0 / Förbättra robusthet och förklaring i språkmodeller : En fallstudie på T0

Yutong, Jiang January 2024 (has links)
The rapid advancement of cutting-edge techniques has propelled state-of-the-art (SOTA) language models to new heights. Despite their impressive capabilities across a variety of downstream tasks, large language models still face many challenges, such as hallucination and bias. The thesis focuses on two key objectives: first, it measures the robustness of T0_3B and investigates feasible methodologies to enhance the model's robustness. Second, it targets the explainability of large language models, aiming to make their intrinsic working mechanism more transparent and, consequently, to enhance the model's steerability. Motivated by the importance of mitigating non-robust behavior in language models, the thesis initially measures the model's robustness in handling minor perturbations. After that, I propose and verify an approach to enhance robustness by making the input more contextualized, a method that does not require fine-tuning. Moreover, to understand the complex working mechanism of large language models, I designed and introduced two novel visualization tools: 'Logit Lens' and 'Hidden States Plot in Spherical Coordinate System'. These tools, combined with additional experimental analysis, revealed a noticeable difference in the prediction process between the first predicted token and subsequent tokens. The contributions of the thesis are mainly in the following two aspects: it provides feasible methodologies to enhance the robustness of language models without the need for fine-tuning, and it contributes to the field of explainable AI through the development of two visualization tools that shed light on the model's working mechanism.
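The core mechanic behind a logit-lens style tool can be shown in miniature (random toy weights below, not T0's; the real tool applies this to transformer layers): each intermediate hidden state is projected through the model's unembedding matrix, so every layer's state can be read as a distribution over the vocabulary.

```python
import numpy as np

# Toy logit-lens sketch: random stand-ins for hidden states and the
# unembedding matrix, purely to show the projection-and-softmax step.
rng = np.random.default_rng(1)
d_model, vocab, n_layers = 16, 8, 4
W_U = rng.normal(size=(vocab, d_model))           # unembedding matrix
hidden = [rng.normal(size=d_model) for _ in range(n_layers)]

def logit_lens(h, W_U):
    # project a hidden state to vocabulary logits, then softmax
    logits = W_U @ h
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

per_layer_top = [int(np.argmax(logit_lens(h, W_U))) for h in hidden]
print("top token id per layer:", per_layer_top)
```

Tracking how the top token and its probability evolve across layers is what lets such a tool expose where in the network a prediction "crystallizes", e.g. the differing behaviour of the first predicted token versus later ones.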
436

Identifying Induced Bias in Machine Learning

Chowdhury Mohammad Rakin Haider (18414885) 22 April 2024 (has links)
<p dir="ltr">The last decade has witnessed an unprecedented rise in the application of machine learning in high-stakes automated decision-making systems such as hiring, policing, bail sentencing, and medical screening. The long-lasting impact of these intelligent systems on human life has drawn attention to their fairness implications. A majority of subsequent studies targeted the existing historically unfair decision labels in the training data as the primary source of bias and strived either to remove them from the dataset (de-biasing) or to avoid learning discriminatory patterns from them during training. In this thesis, we show that label bias is not a necessary condition for unfair outcomes from a machine learning model. We develop theoretical and empirical evidence showing that biased model outcomes can be introduced by a range of different data properties and components of the machine learning development pipeline.</p><p dir="ltr">We first prove that machine learning models are expected to introduce bias even when the training data does not include label bias. Using a proof-by-construction technique in our formal analysis, we demonstrate that machine learning models trained to optimize for joint accuracy introduce bias even when the underlying training data is free from label bias but includes other forms of disparity. We identify two data properties that lead to the introduction of bias: the group-wise disparity in feature predictivity and the group-wise disparity in the rates of missing values. The experimental results suggest that a wide range of classifiers trained on synthetic or real-world datasets are prone to introducing bias under feature disparity and missing-value disparity, independently from or in conjunction with label bias.
We further analyze the trade-off between fairness and established techniques for improving the generalization of machine learning models, such as adversarial training and increasing model complexity. We report that adversarial training sacrifices fairness to achieve robustness against noisy (typically adversarial) samples. We propose a fair re-weighted adversarial training method that improves the fairness of adversarially trained models while sacrificing minimal adversarial robustness. Finally, we observe that although increasing model complexity typically improves generalization accuracy, it does not correspondingly reduce the disparities in prediction rates.</p><p dir="ltr">This thesis unveils a vital limitation of machine learning that has yet to receive significant attention in the FairML literature. Conventional FairML literature reduces the ML fairness task to simply de-biasing or avoiding learning discriminatory patterns. The reality is far from it: everything from deciding which features to collect to algorithmic choices such as optimizing for robustness can act as a source of bias in model predictions. This calls for detailed investigation of the fairness implications of machine learning development practices. In addition, identifying sources of bias can facilitate pre-deployment fairness audits of machine-learning-driven automated decision-making systems.</p>
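The feature-predictivity argument can be demonstrated with a small synthetic sketch (a stylized illustration, not the thesis's experimental setup): both groups receive noise-free, unbiased labels, but one group's feature is less predictive, so a single classifier optimized for joint accuracy ends up less accurate for that group.

```python
import numpy as np

# Synthetic demonstration: bias without label bias, via group-wise
# disparity in feature predictivity.
rng = np.random.default_rng(42)
n = 20_000
z = rng.normal(size=n)                        # latent ground truth
y = (z > 0).astype(int)                       # unbiased labels: no label bias
group = rng.integers(0, 2, size=n)            # 0 = advantaged, 1 = disadvantaged
noise_scale = np.where(group == 0, 0.2, 1.5)  # feature predictivity disparity
x = z + rng.normal(size=n) * noise_scale      # observed feature

pred = (x > 0).astype(int)                    # shared threshold classifier
acc_0 = (pred[group == 0] == y[group == 0]).mean()
acc_1 = (pred[group == 1] == y[group == 1]).mean()
print(f"accuracy group 0: {acc_0:.3f}, group 1: {acc_1:.3f}")
```

The labels are identical in distribution for both groups and contain no historical unfairness, yet the accuracy gap is large, which is the abstract's point that de-biasing labels alone cannot guarantee fair outcomes.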
437

Robust Representation Learning for Out-of-Distribution Extrapolation in Relational Data

Yangze Zhou (18369795) 17 April 2024 (has links)
<p dir="ltr">Recent advancements in representation learning have significantly enhanced the analysis of relational data across various domains, including social networks, bioinformatics, and recommendation systems. In general, these methods assume that the training and test datasets come from the same distribution, an assumption that often fails in real-world scenarios due to evolving data, privacy constraints, and limited resources. The task of out-of-distribution (OOD) extrapolation emerges when the distribution of test data differs from that of the training data, presenting a significant, yet unresolved, challenge within the field. This dissertation focuses on developing robust representations for effective OOD extrapolation, specifically targeting relational data types like graphs and sets. For successful OOD extrapolation, it is essential to first acquire a representation that is adequately expressive for tasks within the distribution. In the first work, we introduce Set Twister, a permutation-invariant set representation that generalizes and enhances the theoretical expressiveness of DeepSets, a simple and widely used permutation-invariant representation for set data, allowing it to capture higher-order dependencies. We showcase its implementation simplicity and computational efficiency, as well as its competitive performance against more complex state-of-the-art graph representations in several graph node classification tasks. Secondly, we address OOD scenarios in graph classification and link prediction tasks, particularly when faced with varying graph sizes. Under causal model assumptions, we derive approximately invariant graph representations that improve extrapolation in OOD graph classification tasks.
Furthermore, we provide the first theoretical study of the capability of graph neural networks for inductive OOD link prediction and present a novel representation model that produces structural pairwise embeddings, maintaining predictive accuracy for OOD link prediction as the test graph size increases. Finally, we investigate the impact of environmental data as a confounder between input and target variables, proposing a novel approach that utilizes an auxiliary dataset to mitigate distribution shifts. This comprehensive study not only advances our understanding of representation learning in OOD contexts but also highlights potential pathways for future research in enhancing model robustness across diverse applications.</p>
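The DeepSets baseline that Set Twister generalizes is simple enough to sketch directly (toy random weights; Set Twister's higher-order terms are not reproduced here): each set element is embedded by a function phi, the embeddings are summed — an operation that ignores element order — and a function rho maps the pooled vector to the output.

```python
import numpy as np

# Minimal DeepSets-style permutation-invariant representation (toy weights).
rng = np.random.default_rng(0)
W_phi = rng.normal(size=(8, 3))   # per-element embedding phi
W_rho = rng.normal(size=(1, 8))   # readout rho

def deepsets(X):
    # X: (set_size, 3); rows may be permuted freely
    pooled = np.tanh(X @ W_phi.T).sum(axis=0)  # phi, then invariant sum-pool
    return float(W_rho @ np.tanh(pooled))      # rho on the pooled vector

X = rng.normal(size=(5, 3))
out = deepsets(X)
out_perm = deepsets(X[rng.permutation(5)])
assert np.isclose(out, out_perm)               # permutation invariance
print(out)
```

Because the only interaction between elements is the sum, plain DeepSets can struggle with higher-order dependencies among elements; enriching the pooled statistics is the direction the abstract describes.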
438

Impulsive Control and Synchronization of Chaos-Generating-Systems with Applications to Secure Communication

Khadra, Anmar January 2004 (has links)
When two or more chaotic systems are coupled, they may exhibit synchronized chaotic oscillations. The synchronization of chaos is usually understood as the regime of chaotic oscillations in which the corresponding variables of the coupled systems are equal to each other. This kind of synchronized chaos is most frequently observed in systems specifically designed to produce this behaviour. In this thesis, one particular type of synchronization, called impulsive synchronization, is investigated and applied to low-dimensional chaotic, hyperchaotic, and spatiotemporal chaotic systems. This synchronization technique requires driving one chaotic system, called the response system, by samples of the state variables of the other chaotic system, called the drive system, at discrete moments. Equi-Lagrange stability and equi-attractivity in the large of the synchronization error become our major concerns when discussing the dynamics of synchronization, in order to guarantee the convergence of the error dynamics to zero. Sufficient conditions for equi-Lagrange stability and equi-attractivity in the large are obtained for the different types of chaos-generating systems used. The issue of robustness of synchronized chaotic oscillations with respect to parameter variations and time delay is also addressed when dealing with impulsive synchronization of low-dimensional chaotic and hyperchaotic systems. Since it is impossible to build two identical chaotic systems, and since transmission and sampling delays in impulsive synchronization are inevitable, robustness becomes a fundamental issue in the models considered. It is established in this thesis that under relatively large parameter perturbations and bounded delay, impulsive synchronization still shows very desirable behaviour.
In fact, criteria for robustness of this particular type of synchronization are derived for both cases; in the case of time delay in particular, sufficient conditions for the synchronization error to be equi-attractive in the large are derived, and an upper bound on the delay terms is obtained in terms of the other parameters of the systems involved. The theoretical results described above regarding impulsive synchronization are reconfirmed numerically, by analyzing the Lyapunov exponents of the error dynamics and by showing simulations of the different models discussed in each case. The application of the theory of synchronization in general, and impulsive synchronization in particular, to communication security is also presented in this thesis. A new impulsive cryptosystem, called the induced-message cryptosystem, is proposed and its properties are investigated. It is established that this cryptosystem does not require the transmission of the encrypted signal; instead, the impulses carry the information needed for synchronization and for retrieving the message signal. Thus the security of transmission is increased, and the time-frame congestion problem discussed in the literature is also solved. Several other impulsive cryptosystems are proposed to address further security issues and to illustrate the different properties of impulsive synchronization. Finally, extending the applications of impulsive synchronization to spatiotemporal chaotic systems generated by partial differential equations is addressed. Several possible models implementing this approach are suggested, and a few questions are raised for possible future research in this area.
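A crude numerical sketch of impulsive synchronization under parameter mismatch is easy to set up (this toy uses full-state impulses on a Lorenz pair and simple Euler integration; the thesis treats partial-state impulses and the formal equi-attractivity conditions, which this illustration does not reproduce): the response system is driven by samples of the drive state at discrete moments, and a deliberate parameter perturbation keeps the two systems non-identical.

```python
import numpy as np

# Toy impulsive synchronization of two mismatched Lorenz systems.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps, impulse_every = 0.001, 5030, 50  # impulses at discrete moments
drive = np.array([1.0, 1.0, 1.0])
response = np.array([5.0, -5.0, 20.0])      # different initial condition

initial_error = np.linalg.norm(drive - response)
for k in range(steps):
    drive = drive + dt * lorenz(drive)
    # parameter perturbation (rho mismatch): the systems are not identical
    response = response + dt * lorenz(response, rho=28.5)
    if (k + 1) % impulse_every == 0:
        response = drive.copy()             # impulse: sample of the drive state

final_error = np.linalg.norm(drive - response)
print(f"error: {initial_error:.3f} -> {final_error:.3e}")
```

Between impulses the mismatch makes the error grow again, so it never vanishes exactly, but frequent-enough impulses keep it small, which is the robust behaviour under parameter perturbation that the abstract describes.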
439

Synthèse croisée de régulateurs et d'observateurs pour le contrôle robuste de la machine synchrone / Cross-synthesis of controler and observer parameters for robust control of synchronous drive

Carrière, Sébastien 28 May 2010 (has links)
This thesis studies the synthesis of control laws for permanent-magnet synchronous servo drives directly coupled to a flexible load with uncertain parameters, using only the measurement of the motor position. The control law aims to minimize the effect of these parameter variations while meeting an industrial-type specification (5% response time, overshoot, simplicity of implementation and of synthesis). To this end, a controller and an observer are implemented: a state-feedback controller, obtained by minimizing a linear-quadratic criterion that places the dominant pole, is associated with a Kalman observer. Both structures use classical synthesis methods: pole placement and the choice of the Kalman weighting matrices. For the latter, two strategies are considered. The first uses standard diagonal weighting matrices; many degrees of freedom are available and give good results. The second builds the state-noise matrix from the variation of the system's dynamic matrix; the number of degrees of freedom drops sharply, the results remain similar to those of the first strategy, and the synthesis is simplified. This yields a method that demands little theoretical investment from an engineer, but offers no robustness guarantee. For this reason, μ-analysis, which characterizes robust stability, is applied together with an evolutionary algorithm that carries out the synthesis faster and more accurately than a human operator. This complete method shows the advantage of synthesizing the observer and the controller jointly rather than separately: for systems with varying parameters, the optimal placement of the control and observation dynamics no longer follows the classical decoupled strategy. Here the dynamics end up coupled, or even inverted (the control dynamics slower than the observer's).
Experimental results corroborate the simulations and explain how the observer and the controller shape the system's behaviour.
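The LQ state feedback plus Kalman observer described above can be sketched numerically. The two-mass plant model, inertia/stiffness values, and weighting matrices below are illustrative assumptions, not taken from the thesis, and SciPy's Riccati solver stands in for whatever synthesis tools were actually used:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical two-mass model (motor + flexible load), motor position measured.
# States: [theta_m, omega_m, theta_l, omega_l]; all numbers are illustrative.
J_m, J_l, k, c = 1e-3, 2e-3, 5.0, 1e-3
A = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [-k / J_m, -c / J_m, k / J_m, c / J_m],
    [0.0, 0.0, 0.0, 1.0],
    [k / J_l, c / J_l, -k / J_l, -c / J_l],
])
B = np.array([[0.0], [1.0 / J_m], [0.0], [0.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])       # only the motor position is sensed

# LQ state-feedback gain: minimize the integral of x'Qx + u'Ru
Q = np.diag([10.0, 1e-3, 10.0, 1e-3])
R = np.array([[1e-2]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)            # u = -K x_hat

# Kalman observer gain with diagonal noise weights (first strategy in the abstract)
W = np.diag([1e-4, 1e-2, 1e-4, 1e-2])      # state-noise weight
V = np.array([[1e-6]])                     # measurement-noise weight
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)             # observer gain

# By separation, the closed-loop spectrum is eig(A - BK) union eig(A - LC):
# both should lie strictly in the left half-plane.
cl = np.block([[A - B @ K, B @ K],
               [np.zeros_like(A), A - L @ C]])
print(np.all(np.linalg.eigvals(cl).real < 0))
```

Tuning the diagonals of `W` and `Q` here mirrors the first (many-degrees-of-freedom) strategy; the thesis's second strategy would instead derive `W` from the variation of `A`.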
440

Optimalno i suboptimalno podešavanje parametara robusnih linearnih regulatora necelog reda / Optimal and suboptimal parameter tuning of robust, linear controllers of noninteger order

Jakovljević Boris 14 July 2015 (has links)
<p>The thesis is dedicated to robust control problems for systems whose linear controller and/or process dynamics are of noninteger order, as well as to control problems in which the noninteger-order controller combines linear and nonlinear dynamics and drives processes whose dynamics may themselves be either linear or nonlinear.</p>
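Noninteger-order controllers like those above rest on a numerical approximation of the fractional derivative; a common choice is the Grünwald–Letnikov scheme. The sketch below is illustrative only (function names and step sizes are ours, not from the thesis):

```python
def gl_weights(alpha, n):
    """Grünwald–Letnikov weights w_j = (-1)^j * binom(alpha, j), via the
    standard recursion w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(f, alpha, t, h):
    """Approximate the fractional derivative D^alpha f at time t with step h,
    assuming f is zero for negative arguments (zero past history)."""
    n = int(round(t / h))
    w = gl_weights(alpha, n)
    return sum(w[j] * f(t - j * h) for j in range(n + 1)) / h ** alpha

# For alpha = 1 the scheme reduces to the backward difference, so D^1 of t is 1:
print(gl_derivative(lambda x: x, 1.0, 1.0, 1e-3))   # ≈ 1.0
```

A discrete fractional-order controller (e.g. a PI^lambda D^mu law) evaluates such sums over a truncated history at every sampling instant, which is why short-memory truncations and recursive filter approximations matter in practice.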
