31

Bayesian Inference in Structural Second-Price Auctions

Wegmann, Bertil (January 2011)
The aim of this thesis is to develop efficient and practically useful Bayesian methods for statistical inference in structural second-price auctions. The models are applied to a carefully collected coin auction dataset with bids and auction-specific characteristics from one thousand Internet auctions on eBay. Bidders are assumed to be risk-neutral and symmetric, competing for a single object using the same game-theoretic strategy. A key contribution of the thesis is the derivation of very accurate approximations of the otherwise intractable equilibrium bid functions under different model assumptions. These easily computed and numerically stable approximations are shown to be crucial for statistical inference, where the inverse bid functions typically need to be evaluated several million times. In the first paper, the approximate bid is a linear function of a bidder's signal, and a Gaussian common value model is estimated. We find that the publicly available book value and the condition of the auctioned object are important determinants of bidders' valuations, while eBay's detailed seller information is essentially ignored by the bidders. In the second paper, the Gaussian model of the first paper is contrasted with a Gamma model that allows intrinsically non-negative common values. The Gaussian model performs slightly better than the Gamma model on the eBay data, which we attribute to an almost normal, or at least symmetric, distribution of valuations. The third paper compares the model in the first paper to a directly comparable model for private values. We find many interesting empirical regularities between the models, but no strong and consistent evidence in favor of one model over the other. In the last paper, we consider auctions with both private-value and common-value bidders. The equilibrium bid function is given as the solution to an ordinary differential equation, from which we derive an approximate inverse bid as an explicit function of a given bid. The paper proposes an elaborate model in which the probability of being a common-value bidder is a function of covariates at the auction level. The model is estimated by a Metropolis-within-Gibbs algorithm, and the results point strongly to an active influx of both private-value and common-value bidders.

At the time of the doctoral defense, Paper 1 had been published electronically ahead of print; Papers 2, 3, and 4 were still manuscripts.
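As a rough illustration of why a closed-form approximate inverse bid matters for inference (a minimal sketch with invented coefficients, not the thesis's actual model): with a linear approximate bid b(s) = α + βs, each observed bid can be inverted analytically, so a likelihood evaluation over many bids reduces to vectorized arithmetic instead of one numerical root-find per bid.

```python
import numpy as np

# Hypothetical linear approximate bid b(s) = alpha + beta * s; its inverse
# s = (b - alpha) / beta is explicit, so no per-bid root-finding is needed.
def inverse_bid(bids, alpha, beta):
    return (bids - alpha) / beta

def log_likelihood(bids, alpha, beta, mu, sigma):
    signals = inverse_bid(np.asarray(bids), alpha, beta)
    # Gaussian log-density of the implied signals, plus the change-of-variable
    # Jacobian |ds/db| = 1/beta for each observed bid.
    logpdf = -0.5 * ((signals - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return logpdf.sum() - len(signals) * np.log(beta)

print(log_likelihood([10.2, 11.5, 9.8, 12.1], alpha=1.0, beta=0.9, mu=11.0, sigma=1.5))
```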
32

Developing Efficient Strategies for Automatic Calibration of Computationally Intensive Environmental Models

Razavi, Seyed Saman (January 2013)
Environmental simulation models have played a key role in civil and environmental engineering decision-making processes for decades. The utility of an environmental model depends on how well the model is structured and calibrated. Model calibration is typically automated: the simulation model is linked to a search mechanism (e.g., an optimization algorithm) that iteratively generates many parameter sets (thousands, for example) and evaluates them by running the model, in an attempt to minimize the differences between observed data and the corresponding model outputs. The challenge arises when the environmental model is computationally intensive to run (with run-times of minutes to hours, for example), as any automatic calibration attempt then imposes a large computational burden. Such a challenge may lead model users to accept sub-optimal solutions and forgo the best achievable model performance. The objective of this thesis is to develop innovative strategies to circumvent the computational burden associated with automatic calibration of computationally intensive environmental models. The first main contribution is a strategy called “deterministic model preemption”, which opportunistically evades unnecessary model evaluations in the course of a calibration experiment and can save a significant portion of the computational budget (as much as 90% in some cases). Model preemption monitors intermediate simulation results while the model is running and terminates (i.e., pre-empts) the simulation early if it recognizes that running the model further would not guide the search mechanism. This strategy is applicable to a range of automatic calibration algorithms (i.e., search mechanisms) and is deterministic in that it leads to exactly the same calibration results as when preemption is not applied. A second main contribution is the concept of “surrogate data”: a reasonably small but representative subset of the full set of calibration data. This concept is inspired by existing surrogate modelling strategies, in which a surrogate model (also called a metamodel) is developed and used as a fast-to-run substitute for an original computationally intensive model. A framework is developed to efficiently calibrate hydrologic models to the full set of calibration data while running the original model only on the surrogate data for the majority of candidate parameter sets, a strategy that leads to considerable computational savings. To this end, mapping relationships are developed to approximate the model performance on the full data from the model performance on the surrogate data. This framework is applicable to the calibration of any environmental model for which appropriate surrogate data and mapping relationships can be identified. As another main contribution, this thesis critically reviews and evaluates the large body of literature on surrogate modelling strategies from various disciplines, as these are the most commonly used methods for relieving the computational burden associated with computationally intensive simulation models. To evaluate these strategies reliably, a comparative assessment and benchmarking framework is developed, with a clear, computational-budget-dependent definition of the success or failure of a surrogate modelling strategy.
Two large families of surrogate modelling strategies are critically scrutinized and evaluated: “response surface surrogate” modelling, which involves statistical or data-driven function approximation techniques (e.g., kriging, radial basis functions, and neural networks), and “lower-fidelity physically-based surrogate” modelling, which develops and uses simplified models of the original system (e.g., a groundwater model with a coarse mesh). This thesis raises fundamental concerns about response surface surrogate modelling and demonstrates that, although they may be less efficient, lower-fidelity physically-based surrogates are generally more reliable because they preserve, to some extent, the physics of the original model. Five different surface water and groundwater models are used across the thesis to test the performance of the developed strategies and to illustrate the discussion. The strategies developed are, however, largely simulation-model-independent and can be applied to the calibration of any computationally intensive simulation model with the required characteristics. The thesis leaves the reader with a suite of strategies for efficient calibration of computationally intensive environmental models, along with guidance on how to select, implement, and evaluate the appropriate strategy for a given calibration problem.
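A minimal sketch of the deterministic preemption idea (all names here are invented, not from the thesis): when the calibration objective accumulates monotonically over simulation time steps, the run can be stopped as soon as the partial sum exceeds a threshold supplied by the search mechanism (e.g., the worst objective value still of interest), without changing the search's outcome.

```python
# Minimal sketch of deterministic model preemption (invented names): the
# calibration objective here is a sum of squared errors accumulated over
# simulation time steps, so it can only grow as the run proceeds. Once the
# partial sum exceeds the threshold supplied by the search mechanism, the
# remaining steps cannot change the calibration outcome and are skipped.
def preemptive_objective(simulate_step, n_steps, observations, threshold):
    total = 0.0
    for t in range(n_steps):
        output = simulate_step(t)                  # advance the model one step
        total += (output - observations[t]) ** 2   # monotonically non-decreasing
        if total > threshold:                      # further effort cannot help
            return float("inf")                    # pre-empt the simulation
    return total

# Toy usage: a "model" whose output at step t is simply t, compared against
# flat observations; the run is pre-empted long before all 100 steps finish.
print(preemptive_objective(lambda t: float(t), 100, [0.5] * 100, threshold=10.0))
```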
33

Statistical Modeling of High-Dimensional Nonlinear Systems: A Projection Pursuit Solution

Swinson, Michael D. (28 November 2005)
Despite recent advances in statistics, artificial neural network theory, and machine learning, nonlinear function estimation in high-dimensional space remains a nontrivial problem. As the response surface becomes more complicated and the dimensions of the input data increase, the dreaded "curse of dimensionality" takes hold, rendering the best of function approximation methods ineffective. This thesis takes a novel approach to solving the high-dimensional function estimation problem. In this work, we propose and develop two distinct parametric projection pursuit learning networks with wide-ranging applicability. Included in this work is a discussion of the choice of basis functions used as well as a description of the optimization schemes utilized to find the parameters that enable each network to best approximate a response surface. The essence of these new modeling methodologies is to approximate functions via the superposition of a series of piecewise one-dimensional models that are fit to specific directions, called projection directions. The key to the effectiveness of each model lies in its ability to find efficient projections for reducing the dimensionality of the input space to best fit an underlying response surface. Moreover, each method is capable of effectively selecting appropriate projections from the input data in the presence of relatively high levels of noise. This is accomplished by rigorously examining the theoretical conditions for approximating each solution space and taking full advantage of the principles of optimization to construct a pair of algorithms, each capable of effectively modeling high-dimensional nonlinear response surfaces to a higher degree of accuracy than previously possible.
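A minimal sketch of the general projection pursuit idea (a crude random direction search, not the thesis's parametric networks or optimization schemes): each stage picks a projection direction, fits a one-dimensional model along it, and subtracts the fit from the residual, building the approximation as a sum of ridge functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stage(X, r, n_directions=200, degree=3):
    # One projection pursuit stage: try candidate unit directions, fit a
    # one-dimensional polynomial ridge function g(w.x) to the residual r,
    # and keep the direction whose fit leaves the smallest squared error.
    best = None
    for _ in range(n_directions):
        w = rng.standard_normal(X.shape[1])
        w /= np.linalg.norm(w)
        z = X @ w                          # project inputs onto direction w
        coef = np.polyfit(z, r, degree)    # 1-D fit along the projection
        sse = np.sum((r - np.polyval(coef, z)) ** 2)
        if best is None or sse < best[0]:
            best = (sse, w, coef)
    return best[1], best[2]

# Additive model: approximate f by a sum of ridge functions fit to residuals.
X = rng.uniform(-1, 1, size=(500, 10))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2     # toy high-dimensional target
residual, model = y.copy(), []
for _ in range(3):
    w, coef = fit_stage(X, residual)
    model.append((w, coef))
    residual = residual - np.polyval(coef, X @ w)
print("remaining residual variance:", residual.var())
```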
34

Neuro-Fuzzy System Modeling with Self-Constructed Rules and Hybrid Learning

Ouyang, Chen-Sen (09 November 2004)
Neuro-fuzzy modeling is an efficient computing paradigm for system modeling problems. It integrates two well-known approaches, neural networks and fuzzy systems, and therefore possesses the advantages of both: learning capability, robustness, human-like reasoning, and high understandability. Many approaches to neuro-fuzzy modeling have been proposed, yet many problems remain to be solved. In this thesis we propose two self-constructing rule generation methods, similarity-based rule generation (SRG) and similarity-and-merge-based rule generation (SMRG), for structure identification, and one hybrid learning algorithm (HLA) for parameter identification. SRG and SMRG group the input-output training data incrementally into a set of fuzzy clusters based on similarity tests in the input and output spaces. Membership functions associated with each cluster are defined according to the statistical means and deviations of the data points in the cluster. Additionally, SMRG employs a merging mechanism to merge similar clusters dynamically. A zero-order or first-order TSK-type fuzzy IF-THEN rule is then extracted from each cluster to form an initial fuzzy rule base, which can be employed directly for fuzzy reasoning or refined further in the subsequent parameter identification phase. Compared with other methods, both SRG and SMRG generate fuzzy rules quickly, match membership functions closely to the real distribution of the training data points, and avoid regenerating the whole set of clusters from scratch when new training data arrive. In addition, SMRG supports a more reasonable and faster mechanism for cluster merging, alleviating the data-input-order bias and redundant clusters encountered in SRG and other incremental clustering approaches. To refine the fuzzy rules obtained in the structure identification phase, a zero-order or first-order TSK-type fuzzy neural network is constructed in the parameter identification phase, and we develop an HLA, composed of a recursive SVD-based least-squares estimator and the gradient descent method, to train the network. Our HLA alleviates the local-minimum problem; moreover, it learns faster, consumes less memory, and produces lower approximation errors than other methods. To verify the practicality of our approaches, we apply them to function approximation and classification. For function approximation, we model several nonlinear functions and real cases from measured input-output datasets. For classification, our approaches are applied to a human object segmentation problem: a fuzzy self-clustering algorithm divides the base frame of a video stream into segments, which are then categorized as foreground or background based on a combination of multiple criteria; human objects in the base frame and the remaining frames of the video stream are then precisely located by a fuzzy neural network constructed from the previously obtained fuzzy rules and trained by our HLA. Experimental results show that our approaches improve the accuracy of human object identification in video streams and work well even when the human object exhibits no significant motion in an image sequence.
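A minimal sketch of the incremental, similarity-based grouping idea behind SRG (the similarity test is simplified here to plain distance thresholds, and all names are invented): each training pair joins the first cluster it is sufficiently similar to in both the input and output spaces, or seeds a new cluster otherwise, so clusters never have to be rebuilt from scratch as data arrive.

```python
import numpy as np

class IncrementalClusters:
    # Each point joins the first cluster whose mean is close enough in both
    # the input and output spaces; otherwise it seeds a new cluster. The
    # per-cluster means/deviations can then define membership functions.
    def __init__(self, tol_in, tol_out):
        self.tol_in, self.tol_out = tol_in, tol_out
        self.clusters = []   # each cluster: dict of input/output point lists

    def add(self, x, y):
        for c in self.clusters:
            if (np.linalg.norm(x - np.mean(c["x"], axis=0)) < self.tol_in
                    and abs(y - np.mean(c["y"])) < self.tol_out):
                c["x"].append(x)
                c["y"].append(y)
                return
        self.clusters.append({"x": [x], "y": [y]})  # start a new cluster

rng = np.random.default_rng(0)
clusters = IncrementalClusters(tol_in=0.8, tol_out=0.5)
for x, y in zip(rng.uniform(0, 2, (50, 3)), rng.uniform(0, 1, 50)):
    clusters.add(x, y)
print(len(clusters.clusters), "clusters formed")
```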
36

Adaptive crosstalk cancellation and lattice-aided detection in multi-user communications

Mandar Gujrathi (date unknown)
Digital subscriber lines (DSL) have revolutionised the provision of high-speed data over the ‘last mile’. Subscribers demand ever more bandwidth, and penetration of the service is now nearly universal. While it is feasible to provide improved broadband services on newer very-high-speed DSL variants such as VDSL2/3, one of the greatest challenges to further improvements in speed is crosstalk. Operating over the previously unused higher frequencies of the twisted-pair network, this technology is subject to electromagnetic coupling among the wires, which limits DSL data rates and service reach. Crosstalk suppression methods such as zero-forcing or decision feedback mainly use block processing, but coping with the time-varying VDSL environment in this way can incur huge computational costs. In contrast, adaptive processing approaches are much simpler and better suited to tracking such a channel. An adaptive canceller uses a training sequence, and its convergence speed depends on the number of crosstalk coefficients it must estimate. In a populated DSL binder, only a few of the crosstalking neighbours of a particular user are significant. With the aim of reducing computational complexity in such environments, this thesis introduces the concept of detection-guided adaptive crosstalk cancellation for DSL. We propose a least-squares test to detect the dominant crosstalk coefficients and concentrate the adaptation on them alone. Compared with conventional adaptive cancellers, the cancellers proposed in this thesis converge early: by incorporating the test, they need only identify the most significant coefficients, so the training sequence can be shortened. Together with low run-time complexity and improved convergence, the greatest advantage obtained here is bandwidth efficiency. Even so, frequent re-transmission of training sequences may still be required for a rapidly changing VDSL channel, which again costs bandwidth. To overcome this difficulty, we propose fast-converging unsupervised cancellers that improve bandwidth efficiency by dispensing with the training sequence altogether. An added advantage is that this would enable Internet service providers to include multiple or improved broadband services within a single subscription.
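A minimal sketch of the detection-guided adaptation idea (the detection rule here, keeping the largest least-squares coefficients, is an invented stand-in for the thesis's test feature): a coarse least-squares fit over the training block flags the dominant crosstalkers, and the adaptive update is then restricted to that support, shortening the training needed for convergence.

```python
import numpy as np

def detection_guided_lms(X, d, mu=0.01, keep=4):
    # X: (n_samples, n_lines) crosstalker symbols; d: received samples.
    h_ls, *_ = np.linalg.lstsq(X, d, rcond=None)   # coarse LS estimate
    support = np.argsort(np.abs(h_ls))[-keep:]     # dominant coefficients
    h = np.zeros(X.shape[1])
    for x, y in zip(X, d):
        err = y - x @ h
        h[support] += mu * err * x[support]        # adapt only the support
    return h

rng = np.random.default_rng(2)
X = rng.standard_normal((400, 20))
h_true = np.zeros(20)
h_true[[3, 7]] = [0.8, -0.5]                       # two dominant crosstalkers
d = X @ h_true + 0.01 * rng.standard_normal(400)
print(np.round(detection_guided_lms(X, d), 2)[[3, 7]])
```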
Certain properties of the DSL channel ensure that the communication channel is well conditioned: the basis vectors of the channel matrix are near-orthogonal, so linear cancellers such as zero-forcing perform near-optimally. This is not the case for wireless channels. We therefore investigate user detection in wireless channels using the principle of lattice reduction. User detection can be viewed as a search for the closest vector point in the lattice of received symbols. A maximum likelihood (ML) detector performs optimal user detection but has exponential complexity. We observe that the closest vector problem can be cast as a non-linear optimisation problem. Exploiting the periodicity of the maximum likelihood function, we first present a novel algorithm that approximates the ML function using the Taylor series expansion of a suitable cosine function. With the aim of minimising the approximation error, we then represent the ML function as a Fourier series expansion and later propose a further approximation using Jacobi theta functions. We study the performance of these approximations under a suitable unconstrained optimisation algorithm. Through simulations, we demonstrate that the newly developed approximations perform better than conventional cancellers, come close to ML, and, importantly, converge in polynomial time.
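A generic sketch of the smooth-relaxation idea (not necessarily the thesis's exact construction): the integer-constrained closest-vector search is replaced by an unconstrained continuous problem in which a periodic cosine penalty, zero exactly at integer points, stands in for the integrality constraint, so a standard optimiser can be applied and the result rounded.

```python
import numpy as np
from scipy.optimize import minimize

def detect(B, y, lam=2.0):
    # Relax min_u ||y - B u||^2 over integer u into a smooth problem: the
    # cosine penalty vanishes iff every component of x is an integer.
    def cost(x):
        residual = y - B @ x
        penalty = np.sum(1.0 - np.cos(2 * np.pi * x))
        return residual @ residual + lam * penalty
    x0 = np.linalg.lstsq(B, y, rcond=None)[0]      # zero-forcing start point
    x = minimize(cost, x0, method="BFGS").x        # unconstrained optimiser
    return np.round(x).astype(int)

B = np.array([[1.0, 0.4], [0.3, 1.2]])
u_true = np.array([2, -1])
y = B @ u_true + 0.05 * np.random.default_rng(1).standard_normal(2)
print(detect(B, y))                                # typically recovers [2, -1]
```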
39

Tools for the Design of Reliable and Efficient Functions Evaluation Libraries

Torres, Serge (22 September 2016)
The design of function evaluation libraries is a complex task that requires great care and dedication, especially when high standards of reliability and performance are targeted. In practice, this work cannot be carried out routinely without tools that both guide the designer through a vast and complex solution space and guarantee that the resulting implementations are correct and nearly optimal. In the present state of the art, one should think in terms of a “toolbox” from which basic mechanisms can be drawn and combined to best serve the designer's goals, rather than expect a device that solves every problem automatically. The work presented here is dedicated to the design and implementation of such tools in two areas:
∙ the consolidation of Ziv's rounding test, which has so far been used in a more or less empirical way in the implementation of function approximations;
∙ the development of an implementation of the SLZ algorithm in order to solve the Table Maker's Dilemma for functions whose arguments and results are quad-precision floating-point numbers (IEEE-754 Binary128 format).
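A minimal sketch of Ziv's strategy using the mpmath library (the precision schedule and error bound are simplified, and the target format here is binary64 rather than the thesis's binary128): evaluate the function at a working precision with a known error bound, and accept the result only when the whole error interval rounds to the same target-format value; otherwise retry at higher precision.

```python
from mpmath import mp, mpf, sin

def round_to_binary64(x):
    # Round an mpmath value to the nearest IEEE-754 binary64 number.
    return float(x)

def ziv_eval(f, x, start_bits=60, max_bits=2000):
    bits = start_bits
    while bits <= max_bits:
        mp.prec = bits
        y = f(mpf(x))
        eps = abs(y) * mpf(2) ** (5 - bits)   # crude bound: a few ulps of slack
        if round_to_binary64(y - eps) == round_to_binary64(y + eps):
            return round_to_binary64(y)       # rounding test passed
        bits *= 2                             # hard case: raise the precision
    raise RuntimeError("Table Maker's Dilemma instance; needs SLZ-style analysis")

print(ziv_eval(sin, 1.2345))
```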
40

Development of a control law based on a learned sparse model: theoretical analysis and practical study

Nguyen, Huu Phuc (16 December 2016)
In classical control theory, the control law is generally built from a theoretical model of the system: the mathematical equations representing the system dynamics are used to guarantee that the associated controller stabilizes the closed loop. In practice, however, the real system departs from the modelled behaviour: nonlinearities or fast dynamics may be neglected, parameters can be hard to estimate, and uncontrolled disturbances remain unmodelled. The approach proposed in this work relies partly on knowledge of the plant in the form of an analytical model, and partly on experimental data gathered offline or online. At each time step, the input value that best drives the system toward a chosen objective is computed by an algorithm that minimizes a cost function (for example, the distance between the desired output and the predicted output) or maximizes a reward. At the core of the technique is a numerical behaviour model of the system: a tabulated prediction function whose input is a tuple from the joint input/state or input/output space. This knowledge base allows the extraction, from part of the table's input vector, of the corresponding subset of possible predicted values; for a given state, for instance, one can read off all possible one-step-ahead states as a function of the admissible input values. Building on earlier work that demonstrated the viability of the input/state formulation, new developments are proposed. The prediction model is initialized using the best a priori knowledge of the system and is then improved by a simple learning algorithm driven by the error between measured and predicted data. Two formulations are used: the first is based on a state-space model (as in the earlier work, but applied to more complex systems); the second is based on an input-output model. The control value that brings the predicted output, within the set of reachable possibilities, closest to the desired output or state is found by an optimization algorithm. To validate the proposed elements, the controller has been applied to several systems. A real experiment on a quadcopter and real trajectory-tracking tests on the laboratory's electric vehicle Zoé demonstrate its capability and efficiency on complex, fast systems, and further simulation results broaden the study of its performance. Within a partnership project, the algorithm has also shown its ability to serve as a state estimator, reconstructing the mechanical speed of an induction machine from its electrical signals by treating the rotor speed as the system input.
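A minimal sketch of the tabulated-prediction control idea (all names are invented, and the state is discretized for simplicity): the controller reads the one-step predictions for every admissible input, applies the input whose predicted successor is closest to the target, and corrects the table from the measured transition.

```python
import numpy as np

class LearnedTableController:
    # A tabulated prediction function maps (state, input) pairs to predicted
    # next states; the controller picks the input whose prediction is closest
    # to the target, then refines the table from the measured transition.
    def __init__(self, inputs, n_states, gain=0.5):
        self.inputs = inputs                            # admissible input values
        self.table = np.zeros((n_states, len(inputs)))  # predicted next states
        self.gain = gain

    def act(self, state_idx, target):
        pred = self.table[state_idx]                # one-step predictions
        k = int(np.argmin(np.abs(pred - target)))   # best input by prediction
        return k, self.inputs[k]

    def learn(self, state_idx, k, observed_next):
        err = observed_next - self.table[state_idx, k]
        self.table[state_idx, k] += self.gain * err  # correct the prediction

ctl = LearnedTableController(inputs=np.linspace(-1.0, 1.0, 21), n_states=50)
k, u = ctl.act(state_idx=10, target=0.3)   # input predicted to land nearest 0.3
ctl.learn(10, k, observed_next=0.27)       # refine the table from measurement
```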
