  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Metamodeling strategies for high-dimensional simulation-based design problems

Shan, Songqing 13 October 2010 (has links)
Computational tools such as finite element analysis and simulation are commonly used for system performance analysis and validation. It is often impractical to rely exclusively on a high-fidelity simulation model for design activities because of the high computational cost. Mathematical models are therefore constructed to approximate the simulation model and support the design activities. Such a model is referred to as a “metamodel,” and the process of constructing it is called “metamodeling.” Metamodeling, however, faces significant challenges arising from the high dimensionality of the underlying problems, in addition to the high computational cost and unknown function properties (that is, black-box functions) of the analysis/simulation. The combination of these three challenges defines the so-called high-dimensional, computationally expensive, and black-box (HEB) problems. There is currently a lack of practical methods for dealing with HEB problems. By surveying existing techniques, this dissertation finds that the major deficiency of current metamodeling approaches lies in the separation of metamodeling from the properties of the underlying functions. The survey also identifies two promising approaches, mapping and decomposition, for solving HEB problems. A new analytic methodology, radial basis function–high-dimensional model representation (RBF-HDMR), is proposed to model HEB problems. RBF-HDMR decomposes the effects of variables or variable sets on system outputs. Compared with other metamodels, RBF-HDMR has three distinct advantages: 1) it fundamentally reduces the number of calls to the expensive simulation needed to build a metamodel, thus alleviating the exponentially increasing computational difficulty; 2) it reveals the functional form of the black-box function; and 3) it discloses the intrinsic characteristics (for instance, linearity/nonlinearity) of the black-box function. 
RBF-HDMR has been intensively tested on mathematical and practical problems chosen from the literature. The methodology has also been successfully applied to the power transfer capability analysis of the Manitoba-Ontario Electrical Interconnections with 50 variables. The test results demonstrate that RBF-HDMR is a powerful tool for modeling large-scale simulation-based engineering problems. The RBF-HDMR model and its construction approach therefore represent a breakthrough in modeling HEB problems and make it possible to optimize high-dimensional simulation-based design problems.
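The first-order cut-HDMR decomposition at the heart of RBF-HDMR can be illustrated with a short sketch. The following Python fragment is a hypothetical minimal illustration, not the thesis's implementation: it uses 1-D linear interpolation where the thesis uses radial basis functions, and the test function is invented.

```python
import numpy as np

def first_order_hdmr(f, x0, bounds, n_samples=9):
    """Build a first-order cut-HDMR surrogate of f around the cut point x0.

    Each component function f_i(x_i) = f(x0 with i-th coordinate replaced) - f(x0)
    is sampled on a 1-D grid, so only d * n_samples + 1 calls to f are needed
    instead of a full exponential-size grid.
    """
    d = len(x0)
    f0 = f(x0)
    grids, tables = [], []
    for i in range(d):
        lo, hi = bounds[i]
        grid = np.linspace(lo, hi, n_samples)
        vals = []
        for xi in grid:
            x = np.array(x0, dtype=float)
            x[i] = xi
            vals.append(f(x) - f0)          # first-order component f_i(x_i)
        grids.append(grid)
        tables.append(np.array(vals))

    def surrogate(x):
        # f(x) ~ f0 + sum_i f_i(x_i); 1-D linear interpolation stands in
        # for the RBF interpolants used in the thesis.
        return f0 + sum(np.interp(x[i], grids[i], tables[i]) for i in range(d))

    return surrogate

# For an additive function the first-order surrogate is (near-)exact:
f = lambda x: 2.0 * x[0] + x[1] ** 2
s = first_order_hdmr(f, x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
```

The point of the construction is visible in the call count: the surrogate costs d × n_samples + 1 evaluations of the expensive function, linear in the dimension d rather than exponential.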
163

Geração automática de testes a partir de descrições de linguagens / Automatic generation of tests from language descriptions

Antunes, Cleverton Hentz 01 March 2010 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Some programs may have their input data specified by formalized context-free grammars. This formalization facilitates the use of tools to systematize and raise the quality of their test process. Within this category of programs, compilers were the first to use this kind of tool to automate their testing. In this work we present an approach for defining tests from the formal description of a program's input. Sentences are generated taking into account the syntactic aspects defined by the input specification, the grammar. For optimization, coverage criteria are used to limit the number of tests without diminishing their quality. Our approach uses these criteria to drive generation toward sentences that satisfy a specific coverage criterion. The approach is based on the Lua language, relying heavily on its coroutines and dynamic construction of functions. With these resources, we propose a simple and compact implementation that can be optimized and controlled in various ways in order to satisfy the different implemented coverage criteria. To simplify the use of our tool, the EBNF notation was adopted for specifying inputs. Its parser was specified in the Meta-Environment tool, which favors rapid prototyping.
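The idea of driving sentence generation by a coverage criterion can be sketched as follows. This is a hypothetical Python illustration (the thesis's tool is written in Lua with coroutines; the toy grammar, the names and the simple production-coverage criterion here are ours, not the thesis's):

```python
import random

# A toy grammar: nonterminal -> list of alternatives, each alternative a
# tuple of symbols. Symbols not in the dict are terminals.
GRAMMAR = {
    "Expr": [("Term",), ("Term", "+", "Expr")],
    "Term": [("num",), ("(", "Expr", ")")],
}

def generate(grammar, start, used, rng, depth=0, max_depth=8):
    """Expand `start`, preferring productions not yet covered."""
    if start not in grammar:               # terminal symbol
        return [start]
    alts = grammar[start]
    fresh = [i for i in range(len(alts)) if (start, i) not in used]
    if fresh and depth < max_depth:
        i = rng.choice(fresh)              # steer toward uncovered productions
    else:                                  # fall back to the shortest alternative
        i = min(range(len(alts)), key=lambda j: len(alts[j]))
    used.add((start, i))
    out = []
    for sym in alts[i]:
        out += generate(grammar, sym, used, rng, depth + 1, max_depth)
    return out

def production_coverage_suite(grammar, start, seed=0):
    """Generate sentences until every production has been exercised once."""
    total = {(nt, i) for nt, alts in grammar.items() for i in range(len(alts))}
    used, suite, rng = set(), [], random.Random(seed)
    while used != total:
        suite.append(" ".join(generate(grammar, start, used, rng)))
    return suite

suite = production_coverage_suite(GRAMMAR, "Expr")
```

Each generated sentence exercises at least one previously uncovered production, so the suite stays small while still satisfying the coverage criterion.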
164

Graybox-baserade säkerhetstest : Att kostnadseffektivt simulera illasinnade angrepp / Graybox-based security testing: Cost-effectively simulating malicious attacks

Linnér, Samuel January 2008 (has links)
A network architecture penetration test is complicated, risky, and extensive. This report explores how a consultant can carry one out in the most time-effective way without overlooking important parts. In an internal penetration test the consultant is often allowed to view the system documentation of the network architecture, which saves considerable time since no full host discovery is needed. This also helps in discovering anomalies in the system documentation. 
Communication with system administrators during the test minimizes the risk of misunderstandings and system crashes. If serious vulnerabilities are discovered, the system administrators must be informed immediately. Another way to make the test more effective is to skip time-consuming tasks that will succeed sooner or later, e.g. password cracking, and instead point out that the underlying vulnerability is the attacker's ability to try passwords an unlimited number of times. It is also appropriate to simulate attacks that could otherwise disrupt the organization's production environment if the test is performed on a live system. The result of the report is a checklist that serves as a general methodology for how internal penetration tests can be carried out. The purpose of the checklist is to make internal penetration tests easier to perform. The process is divided into seven steps: preparation and planning, information gathering, vulnerability detection and analysis, privilege escalation, penetration testing, and summary and reporting.
165

System Identification And Control Of Helicopter Using Neural Networks

Vijaya Kumar, M 02 1900 (has links) (PDF)
The present work focuses on two areas of investigation: system identification of a helicopter and design of controllers for the helicopter. Helicopter system identification, the first subject of investigation in this thesis, can be described as the extraction of system characteristics/dynamics from measured flight test data. Wind tunnel experimental data suffer from scale effects and model deficiencies. The increasing need for accurate models for the design of high-bandwidth control systems for helicopters has initiated a renewed interest in, and a more active use of, system identification. Moreover, system identification is likely to become mandatory in the future for model validation of ground-based helicopter simulators. Such simulators require accurate models in order to be accepted by pilots and regulatory authorities, such as the Federal Aviation Regulations, for realistic complementary helicopter mission training. Two approaches are widely used for system identification, namely the black-box and the gray-box approach. In the black-box approach, the relationship between input and output data is approximated using nonparametric methods such as neural networks, in which case internal details of the system and the model structure may not be known. In the gray-box approach, parameters are estimated after defining the model structure. In this thesis, both black-box and gray-box approaches are investigated. In the black-box approach, a comparative study and analysis of different Recurrent Neural Networks (RNN) for the identification of helicopter dynamics using flight data is carried out. Three different RNN architectures, namely the Nonlinear Auto Regressive eXogenous input (NARX) model, the neural network with internal memory known as Memory Neuron Networks (MNN), and Recurrent MultiLayer Perceptron (RMLP) networks, are used to identify the dynamics of the helicopter at various flight conditions. 
Based on the results, the practical utility, advantages, and limitations of the three models are critically appraised, and it is found that the NARX model is most suitable for the identification of helicopter dynamics. In the gray-box approach, helicopter model parameters are estimated after defining the model structure. The identification process becomes more difficult as the number of degrees of freedom and model parameters increases. To avoid the drawbacks of conventional methods, a neural-network-based technique, called the delta method, is investigated in this thesis. This method does not require initial estimates of the parameters, and the parameters can be extracted directly from the flight data. The Radial Basis Function Network (RBFN) is used for the estimation of the parameters. It is shown that the RBFN is able to satisfactorily estimate stability and control derivatives using the delta method. The second area of investigation addressed in this thesis is the control of the helicopter in flight. A helicopter requires a control system to achieve satisfactory flight. Designing a classical controller involves developing a nonlinear model of the helicopter and extracting linearized state-space matrices from the nonlinear model at various flight conditions. After examining the stability characteristics of the helicopter, the desired response is obtained using a feedback control system. Scheduling of the controller gains over the entire envelope is used to obtain the desired response. In the present work, a helicopter having a soft in-plane four-bladed hingeless main rotor and a four-bladed tail rotor with conventional mechanical controls is considered. For this helicopter, a mathematical model and also a model based on a neural network (using flight data) have been developed. 
As a precursor, a feedback controller, the Stability Augmentation System (SAS), is designed using linear quadratic regulator (LQR) control with full state feedback and LQR with output feedback approaches. The SAS is designed to meet the handling qualities specification known as Aeronautical Design Standard ADS-33E-PRF. The control gains have been tuned with respect to forward speed, and a gain schedule has been arrived at. The SAS in the longitudinal axis meets the Level 1 handling quality specifications in hover and low speed as well as in forward-speed flight conditions. The SAS in the lateral axis meets the Level 2 handling quality specifications in hover and low speed as well as in forward-speed flight conditions. Such conventional control design has served useful purposes; however, demonstrating and tuning these control law gains requires considerable, time-consuming flight testing. In modern helicopters, stringent requirements and nonlinear maneuvers make the controller design further complicated. Hence, new design tools have to be explored to control such helicopters. Among the many approaches in adaptive control, neural networks present a potential alternative for modeling and control of nonlinear dynamical systems due to their approximation capabilities and inherent adaptive features. Furthermore, from a practical perspective, the massive parallelism and fast adaptability of neural network implementations provide further incentive for investigating problems involving dynamical systems with unknown nonlinearity. Therefore, an adaptive control approach based on neural networks is proposed in this thesis. A neural-network-based Feedback Error Neural adaptive Controller (FENC) is designed for the helicopter. 
The proposed controller scheme is based on a feedback-error-learning strategy in which the outer-loop neural controller enhances the inner-loop conventional controller by compensating for unknown nonlinearities and parameter uncertainties. A Nonlinear Auto Regressive eXogenous input (NARX) neural network architecture is used to approximate the control law, and the controller network parameters are adapted using update rules derived via Lyapunov synthesis. An offline (finite time interval) and online adaptation strategy is used to approximate system uncertainties. The results are validated using simulation studies of a helicopter undergoing an agile maneuver. The study shows that the neuro-controller meets the requirements of the ADS-33 handling quality specifications. Even though the tracking error is smaller in the FENC scheme, the control effort required to follow the command is very high. To overcome these problems, a Direct Adaptive Neural Control (DANC) scheme to track the rate command signal is presented. The neural controller is designed to track a rate command signal generated using a reference model. For the simulation study, a linearized helicopter model at different straight and level flight conditions is considered. A neural network with a linear filter architecture, trained using backpropagation through time, is used to approximate the control law. The controller network parameters are adapted using update rules derived via Lyapunov synthesis. The offline-trained (for a finite time interval) network provides the necessary stability and tracking performance. Online learning is used to adapt the network under varying flight conditions. The online learning ability is demonstrated through parameter uncertainties. The performance of the proposed direct adaptive neural controller is compared with the feedback-error-learning neural controller. The performance of the controller has been validated at various flight conditions. 
The theoretical results are validated using simulation studies based on a nonlinear six-degree-of-freedom helicopter undergoing an agile maneuver. Realistic gust and sensor noise are added to the system to study the disturbance rejection properties of the neural controllers. To investigate the online learning ability of the proposed neural controller, different fault scenarios representing large model errors and control surface loss are considered. The performance of the proposed DANC scheme is compared with the FENC scheme. The study shows that the neuro-controller meets the requirements of the ADS-33 handling quality specifications.
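The NARX structure used above for black-box identification can be illustrated with a deliberately simplified sketch: a one-step-ahead predictor built from lagged outputs and inputs, identified from recorded data. Here a linear least-squares fit stands in for the thesis's neural networks, and the toy "plant" and its coefficients are invented for illustration:

```python
import numpy as np

# Simulate a toy single-channel plant y(k+1) = a*y(k) + b*u(k), then
# identify the one-step predictor from the input/output records, mimicking
# how flight data records would be used.
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(200)
y = np.zeros(201)
for k in range(200):
    y[k + 1] = a_true * y[k] + b_true * u[k]

# Regressor matrix of lagged outputs and exogenous inputs (the NARX
# structure); a neural network would replace this linear map in practice.
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
```

Because the toy data are noise-free and the model is linear in its parameters, the least-squares estimate recovers the plant coefficients exactly; flight data would add noise and nonlinearity, which is why the thesis turns to neural networks.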
166

Hybridization of dynamic optimization methodologies / L'hybridation de méthodes d'optimisation dynamique

Decock, Jérémie 28 November 2014 (has links)
This thesis is dedicated to sequential decision making (also known as multistage optimization) in uncertain complex environments. 
The studied algorithms are essentially applied to electricity production ("Unit Commitment" problems) and energy stock management (hydropower), facing stochastic demand and water inflows. The manuscript is divided into 7 chapters and 4 parts: Part I, "General Introduction", Part II, "Background Review", Part III, "Contributions" and Part IV, "General Conclusion". The first chapter (Part I) introduces the context and motivation of our work, namely energy stock management. "Unit Commitment" (UC) problems are a classical example of a "Sequential Decision Making" (SDM) problem applied to energy stock management. They are the central application of our work, and in this chapter we explain the main challenges arising with them (e.g. stochasticity, constraints, curse of dimensionality). Classical frameworks for SDM problems are also introduced, and common mistakes arising with them are discussed. We also emphasize the consequences of these too often neglected mistakes and the importance of not underestimating their effects. Throughout this chapter, fundamental definitions commonly used for SDM problems are given. An overview of our main contributions concludes this first chapter. The second chapter (Part II) is a background review of the most classical algorithms used to solve SDM problems. Since the applications we address are stochastic, we focus on resolution methods for stochastic problems. We begin our study with classical Dynamic Programming methods for solving "Markov Decision Processes" (a special kind of SDM problem with Markovian random processes). We then introduce "Direct Policy Search", a widely used method in the Reinforcement Learning community. A distinction is made between "Value Based" and "Policy Based" exploration methods. The third chapter (Part II) extends the previous one by covering the most classical algorithms used to address UC's subtleties. 
It contains a state of the art of algorithms commonly used for energy stock management, mainly "Model Predictive Control", "Stochastic Dynamic Programming" and "Stochastic Dual Dynamic Programming". We briefly review the distinctive features and limitations of these methods. The fourth chapter (Part III) presents our main contribution: a new algorithm named "Direct Value Search" (DVS), designed to solve large-scale unit commitment problems. We describe how it outperforms the classical methods presented in the third chapter. We show that DVS is an "anytime" algorithm (users immediately get approximate results) which can handle large state spaces and large action spaces with non-convexity constraints, and without assumptions on the random process. Moreover, we explain how DVS can reduce modelling errors and tackle the challenges described in the first chapter, working on the "real" detailed problem without casting it into a simplified model. Noisy optimisation is a key component of the DVS algorithm; the fifth chapter (Part III) is dedicated to it. In this chapter, theoretical convergence rates are studied and new convergence bounds are proved, under some assumptions and for given families of objective functions. Variance reduction techniques aimed at improving the convergence rate of graybox noisy optimization problems are also studied in the last part of this chapter. The sixth chapter (Part III) is devoted to non-quasi-convex optimization. We prove that a variant of evolution strategy can reach a log-linear convergence rate on non-quasi-convex objective functions. Finally, the seventh chapter (Part IV) concludes and suggests directions for future work.
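The interplay between noisy optimisation and variance reduction studied in the fifth chapter can be sketched with a toy example: a (1+1) evolution strategy minimising a noisy objective, with resampling (averaging repeated evaluations) as the variance-reduction device. All names, parameters and the test function below are illustrative, not those analysed in the thesis:

```python
import random

def noisy_sphere(x, rng):
    """Noisy objective: true sphere value plus Gaussian evaluation noise."""
    return sum(xi * xi for xi in x) + rng.gauss(0.0, 0.1)

def one_plus_one_es(dim=3, iters=400, resample=20, seed=1):
    """(1+1)-ES on a noisy objective. Averaging `resample` evaluations per
    candidate divides the noise variance by `resample`, a simple
    variance-reduction device."""
    rng = random.Random(seed)
    x = [1.0] * dim
    sigma = 0.3

    def avg_f(p):
        return sum(noisy_sphere(p, rng) for _ in range(resample)) / resample

    fx = avg_f(x)
    for _ in range(iters):
        cand = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fc = avg_f(cand)
        if fc < fx:                 # accept apparent improvement
            x, fx = cand, fc
            sigma *= 1.2            # widen search after success
        else:
            sigma *= 0.95           # narrow search after failure
    return x

x_best = one_plus_one_es()
```

Without resampling, evaluation noise would eventually dominate the comparison between parent and offspring and stall progress; averaging pushes that noise floor down at the cost of extra evaluations, which is exactly the trade-off the convergence-rate analysis quantifies.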
167

Functional testing of an Android application / Funktionell testning av en Androidapplikation

Bångerius, Sebastian, Fröberg, Felix January 2016 (has links)
Testing is an important step in the software development process, intended to increase the reliability of the software. There are a number of different methods available for testing software, using different approaches to find errors, each with different requirements and possible results. In this thesis we have performed a series of tests on our own mobile application developed for the Android platform. The thesis starts with a theory section in which the most important terms in software testing are described. Afterwards, our own application and test cases are presented. The results of our tests, along with our experiences, are reviewed and compared to existing studies and literature in the field of testing. The test cases have helped us find a number of faults in our source code that we had not found before. We have discovered that automated testing for Android is a field with many good tools, although these are not often used in practice. We believe the app development process could be improved greatly by regularly putting the software through automated testing systems.
168

Interopérabilité sur les standards Modelica et composant logiciel pour la simulation énergétique des systèmes de bâtiment / Interoperability based on Modelica and software component standard for building system energy simulation

Gaaloul Chouikh, Sana 18 October 2012 (has links)
To better reduce its bills, control its energy flows and respect the various restrictions in this sector of high energy consumption, the building is becoming an increasingly complex system that includes various innovative technologies such as Building Energy Management Systems (BEMS), efficient insulation and integrated renewable energies. This complexity requires a change in building simulation techniques and paradigms in order to take these developments into account. A global model of this system, covering its various components and ensuring an efficient simulation of its heterogeneous subsystems, must be achieved. These objectives can only be reached through interoperability methodological approaches. Several interoperability solutions have been explored in the building sector, and the state of the art highlights the lack of standardization of the applied solutions. A white-box approach based on the Modelica language has notably emerged in this area. To show its benefits and limitations, this solution is adopted for modelling the "PREDIS" system, a high-energy-performance building. A complementary black-box approach, based on a software component standard dedicated to simulation, is also applied to overcome the difficulties encountered with the first approach. This approach is built around the concept of a component bus, which ensures effective interoperability between modelling tools and simulation environments. In addition to the software architecture built around the interoperability platform, an efficient simulation of heterogeneous systems requires appropriate simulation techniques. These techniques may require adaptations of the models used, which are provided for by the component standard.
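The component-bus idea, in which heterogeneous models are advanced by a master algorithm that exchanges their port values, can be sketched as follows. This is a hypothetical illustration: the two toy first-order components and their coupling are invented, whereas a real deployment would wrap the models behind a standard co-simulation interface such as FMI:

```python
class FirstOrder:
    """Discrete-time first-order lag: the state relaxes toward its input."""
    def __init__(self, tau, x0=0.0):
        self.tau, self.x = tau, x0

    def step(self, u, dt):
        self.x += dt * (u - self.x) / self.tau
        return self.x

def cosimulate(a, b, u, dt, t_end):
    """Fixed-step master algorithm: advance both components, exchanging
    outputs once per macro step (component B sees A's previous output)."""
    t, ya, yb = 0.0, a.x, b.x
    while t < t_end - 1e-12:
        ya_new = a.step(u, dt)      # component A driven by the external input
        yb = b.step(ya, dt)         # component B driven by A's last output
        ya = ya_new
        t += dt
    return ya, yb

zone = FirstOrder(tau=2.0)          # e.g. a thermal zone model
sensor = FirstOrder(tau=0.5)        # e.g. a sensor/controller subsystem
ya, yb = cosimulate(zone, sensor, u=1.0, dt=0.01, t_end=20.0)
```

The one-step delay in the exchanged output is typical of loosely coupled co-simulation and is one of the adaptations (step size, extrapolation of inputs) that the component standard must provide for.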
169

Analysis of Randomized Adaptive Algorithms for Black-Box Continuous Constrained Optimization / Analyse d'algorithmes stochastiques adaptatifs pour l'optimisation numérique boîte-noire avec contraintes

Atamna, Asma 25 January 2017 (has links)
We investigate various aspects of adaptive randomized (or stochastic) algorithms for constrained and unconstrained black-box continuous optimization. The first part of this thesis focuses on step-size adaptation in unconstrained optimization. The step-size is an important parameter of evolutionary algorithms such as evolution strategies (ES): it controls the diversity of the population and therefore plays a decisive role in the convergence of the algorithm. We first present a methodology for efficiently assessing a step-size adaptation mechanism, which consists in testing a given algorithm on a minimal set of functions, each reflecting a particular difficulty that an efficient step-size adaptation algorithm should overcome. We then benchmark two step-size adaptation mechanisms on the well-known BBOB (black-box optimization benchmarking) noiseless testbed of the COCO (COmparing Continuous Optimisers) platform and compare their performance to that of the state-of-the-art evolution strategy CMA-ES with cumulative step-size adaptation. In the second part of this thesis, we investigate the linear convergence of a (1+1)-ES and of a general step-size adaptive randomized algorithm on a linearly constrained optimization problem, where an adaptive augmented Lagrangian approach is used to handle the constraints. To that end, we extend the Markov chain approach used to analyze randomized algorithms for unconstrained optimization to the constrained case: we prove that when the augmented Lagrangian associated with the problem, centered at the optimum and the corresponding Lagrange multipliers, is positively homogeneous of degree 2, then for algorithms enjoying certain invariance properties there exists an underlying homogeneous Markov chain whose stability (typically positivity and Harris recurrence) implies linear convergence to both the optimum and the corresponding Lagrange multipliers. Under this stability assumption, which we validate empirically, linear convergence is deduced by applying a law of large numbers for Markov chains. We also present a general framework for designing an augmented-Lagrangian-based adaptive randomized algorithm for constrained optimization from an adaptive randomized algorithm for unconstrained optimization.
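As a toy illustration of step-size adaptation — not of the augmented-Lagrangian algorithms analyzed in the thesis — the following sketch implements a (1+1)-ES with the classical 1/5th success rule on the sphere function. The multiplicative update constants and the test function are standard textbook choices, assumed here for illustration only.

```python
import math
import random

def sphere(x):
    """Classic unconstrained test function: f(x) = ||x||^2."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(f, x0, sigma0=1.0, iters=2000, seed=1):
    """(1+1)-ES with a 1/5th-success-rule step-size adaptation.

    The factors exp(1/3) and exp(-1/12) balance out exactly when one
    mutation in five is successful, which drives the empirical success
    rate toward 1/5.
    """
    rng = random.Random(seed)
    x, fx, sigma = list(x0), f(x0), sigma0
    for _ in range(iters):
        # Sample one offspring by isotropic Gaussian mutation.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:                      # success: accept, enlarge step
            x, fx = y, fy
            sigma *= math.exp(1.0 / 3.0)
        else:                             # failure: shrink step
            sigma *= math.exp(-1.0 / 12.0)
    return x, fx, sigma

x_best, f_best, sigma = one_plus_one_es(sphere, [5.0] * 10)
```

On the sphere this algorithm converges linearly, i.e. log f decreases at a roughly constant rate per iteration — the same notion of linear convergence that the thesis establishes, via Markov chain stability, for the constrained augmented-Lagrangian setting.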
170

Modelování zvukových signálů pomocí neuronových sítí / Audio signal modelling using neural networks

Pešán, Michele January 2021 (has links)
Neural networks based on the WaveNet architecture, as well as networks using recurrent layers, are currently employed both for speech synthesis and for "black-box" modelling of audio-processing systems — modulation effects, nonlinear distortion units, and the like. The student's task is to summarize the current state of knowledge on the use of neural networks for modelling audio signals. The student will further implement one of the neural-network models in the Python programming language and use it to train on, and subsequently simulate, an arbitrary effect or audio-processing system. Within the semester project, the student is to elaborate the theoretical part of the thesis, create an audio database for training the neural network, and implement one of the network structures for audio-signal modelling. In recent years, neural networks have been used ever more widely, across more or less the entire spectrum of scientific disciplines. This thesis aims to provide an introduction to neural networks, explain the basic concepts and mechanisms of the field, describe the use of neural networks in modelling acoustic systems, and apply this knowledge to implement neural networks for modelling an arbitrary effect or audio-processing device.
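The WaveNet-style "black-box" approach mentioned above rests on stacks of dilated causal convolutions, whose receptive field grows exponentially with depth. The following NumPy sketch — with random, untrained weights and illustrative layer sizes, so it demonstrates mechanics rather than a trained effect model — shows the key property, causality: an input sample can only influence present and future output samples.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal dilated convolution: y[t] = sum_k w[k] * x[t - k*dilation].

    The input is left-padded with zeros so output length equals input
    length and no future sample is ever used.
    """
    taps = len(w)
    pad = dilation * (taps - 1)
    xp = np.concatenate([np.zeros(pad), x])
    return sum(w[k] * xp[pad - k * dilation : pad - k * dilation + len(x)]
               for k in range(taps))

def wavenet_like(x, layers=4, taps=2, seed=0):
    """Tiny stack of dilated causal convolutions with tanh nonlinearities.

    Dilations 1, 2, 4, 8 give a receptive field of 16 past samples —
    doubling the dilation per layer is what lets WaveNet-style models
    cover long audio contexts with few layers.
    """
    rng = np.random.default_rng(seed)
    h = x
    for i in range(layers):
        w = rng.normal(scale=0.5, size=taps)   # random untrained weights
        h = np.tanh(causal_dilated_conv(h, w, dilation=2 ** i))
    return h
```

Feeding in a unit impulse confirms causality: all output samples strictly before the impulse position remain exactly zero. A real effect model would train such weights (plus residual and skip connections) on paired dry/processed recordings from the audio database described in the assignment.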
