About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The automatic design of batch processing systems

Dwyer, Barry January 1999 (has links)
Batch processing is a means of improving the efficiency of transaction processing systems. Despite the maturity of this field, there is no rigorous theory that can assist in the design of batch systems. This thesis proposes such a theory, and shows that it is practical to use it to automate system design. This has important consequences; the main impediment to the wider use of batch systems is the high cost of their development and maintenance. The theory is developed twice: informally, in a way that can be used by a systems analyst, and formally, as a result of which a computer program has been developed to prove the feasibility of automated design. Two important concepts are identified, which can aid in the decomposition of any system: 'separability' and 'independence'. Separability is the property that allows processes to be joined together by pipelines or similar topologies. Independence is the property that allows elements of a large set to be accessed and updated independently of one another. Traditional batch processing technology exploits independence when it uses sequential access in preference to random access. It is shown how the same property allows parallel access, resulting in speed gains limited only by the number of processors. This is a useful development that should assist in the design of very high throughput transaction processing systems. Systems are specified procedurally by describing an ideal system, which generates output and updates its internal state immediately following each input event. The derived systems have the same external behaviour as the ideal system except that their outputs and internal states lag those of the ideal system arbitrarily. Indeed, their state variables may have different delays, and the systems as a whole may never be in a consistent state. A 'state dependency graph' is derived from a static analysis of a specification. The reduced graph of its strongly-connected components defines a canonical process network from which all possible implementations of the system can be derived by composition. From these it is possible to choose the one that minimises any imposed cost function. Although, in general, choosing the optimum design proves to be an NP-complete problem, it is shown that heuristics can find it quickly in practical cases. / Thesis (Ph.D.)--Mathematical and Computer Sciences (Department of Computer Science), 1999.
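For illustration only (this code is not from the thesis): the reduction described in the abstract, from a state dependency graph to the reduced graph of its strongly-connected components, can be sketched in a few lines of Python. The example graph, its node names and the Kosaraju-style traversal below are invented for this sketch.

```python
from collections import defaultdict

def strongly_connected_components(graph):
    """Kosaraju's algorithm: return the SCCs of a directed graph given as {node: [successors]}."""
    visited, finish_order = set(), []

    def dfs(node, g, out):
        stack = [(node, iter(g.get(node, ())))]
        visited.add(node)
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if w not in visited:
                    visited.add(w)
                    stack.append((w, iter(g.get(w, ()))))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                out.append(v)

    for v in graph:
        if v not in visited:
            dfs(v, graph, finish_order)

    reverse = defaultdict(list)
    for v in graph:
        for w in graph[v]:
            reverse[w].append(v)

    visited.clear()
    components = []
    for v in reversed(finish_order):        # decreasing finish time
        if v not in visited:
            comp = []
            dfs(v, reverse, comp)
            components.append(set(comp))
    return components

def condensation(graph, components):
    """Reduced graph: one node per SCC, with an edge wherever a cross-component edge exists."""
    comp_of = {v: i for i, comp in enumerate(components) for v in comp}
    reduced = defaultdict(set)
    for v in graph:
        for w in graph[v]:
            if comp_of[v] != comp_of[w]:
                reduced[comp_of[v]].add(comp_of[w])
    return dict(reduced)

# Invented example: an edge x -> y means "the update of y depends on the state of x".
state_deps = {
    "orders":   ["stock", "invoices"],
    "stock":    ["invoices"],
    "invoices": ["ledger"],
    "ledger":   ["orders", "audit_log"],   # closes a cycle: these four states form one SCC
    "audit_log": [],
}
sccs = strongly_connected_components(state_deps)
print(sccs)                 # the four cyclically dependent states collapse into one component
print(condensation(state_deps, sccs))
```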
2

On the automatic design of decision-tree induction algorithms / Sobre o projeto automático de algoritmos de indução de árvores de decisão

Barros, Rodrigo Coelho 06 December 2013 (has links)
Decision-tree induction is one of the most widely employed methods for extracting knowledge from data. There are several distinct strategies for inducing decision trees from data, each presenting advantages and disadvantages according to its inductive bias. These strategies have been continuously improved by researchers over the last 40 years. This thesis, following recent breakthroughs in the automatic design of machine learning algorithms, proposes to automatically generate decision-tree induction algorithms. Our proposed approach, named HEAD-DT, is based on the evolutionary algorithms paradigm, which improves solutions through metaphors of biological processes. HEAD-DT works over several manually designed decision-tree components and combines the components most suitable for the task at hand. It can operate according to two different frameworks: i) evolving algorithms tailored to a single data set (specific framework); and ii) evolving algorithms from multiple data sets (general framework). The specific framework aims at generating one decision-tree algorithm per data set, so the resulting algorithm does not need to generalise beyond its target data set. The general framework has a more ambitious goal: to generate a single decision-tree algorithm capable of being effectively applied to several data sets. The specific framework is tested over 20 UCI data sets, and results show that HEAD-DT's specific algorithms outperform algorithms such as CART and C4.5 with statistical significance. The general framework, in turn, is executed under two different scenarios: i) designing a domain-specific algorithm; and ii) designing a robust domain-free algorithm. The first scenario is tested over 35 microarray gene expression data sets, and results show that HEAD-DT's algorithms consistently outperform C4.5 and CART in different experimental configurations. The second scenario is tested over 67 UCI data sets, where HEAD-DT's algorithms are shown to be competitive with C4.5 and CART. Nevertheless, we show that HEAD-DT is prone to a special case of overfitting when executed under the second scenario of the general framework, and we point to possible alternatives for solving this problem. Finally, we perform an extensive experiment to evaluate the best single-objective fitness function for HEAD-DT, combining five classification performance measures with three aggregation schemes. We evaluate the resulting 15 fitness functions on 67 UCI data sets, and the best of them are employed to generate algorithms tailored to balanced and imbalanced data. Results show that the automatically designed algorithms outperform CART and C4.5 with statistical significance, indicating that HEAD-DT is also capable of generating custom algorithms for data with a particular statistical profile.
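A loose, hypothetical sketch of the idea behind HEAD-DT's specific framework: evolve a combination of decision-tree "components" against a single data set, using cross-validated accuracy as the fitness. HEAD-DT's real components are split criteria, stopping rules and pruning methods implemented by the authors; here scikit-learn hyperparameters merely stand in for them, and the data set, population size and operators are arbitrary choices made for this example.

```python
import random
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in "components": each gene selects one option for one part of the induction algorithm.
COMPONENTS = {
    "criterion": ["gini", "entropy"],        # split criterion
    "max_depth": [3, 5, 10, None],           # stopping criterion
    "min_samples_split": [2, 5, 10, 20],     # stopping criterion
    "ccp_alpha": [0.0, 0.001, 0.01],         # pruning strength
}

X, y = load_breast_cancer(return_X_y=True)   # arbitrary target data set for the sketch

def fitness(individual):
    # Fitness of a candidate "algorithm" = cross-validated accuracy on the target data set.
    tree = DecisionTreeClassifier(random_state=0, **individual)
    return cross_val_score(tree, X, y, cv=5).mean()

def random_individual():
    return {gene: random.choice(options) for gene, options in COMPONENTS.items()}

def crossover(a, b):
    return {gene: random.choice([a[gene], b[gene]]) for gene in COMPONENTS}

def mutate(individual):
    child = dict(individual)
    gene = random.choice(list(COMPONENTS))
    child[gene] = random.choice(COMPONENTS[gene])
    return child

population = [random_individual() for _ in range(12)]
for generation in range(10):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:4]                      # truncation selection
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(len(population) - len(parents))]
    population = parents + offspring

best = max(population, key=fitness)
print("best component combination:", best, "accuracy:", round(fitness(best), 3))
```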
3

Control System Design Using Evolutionary Algorithms for Autonomous Shipboard Recovery of Unmanned Aerial Vehicles

Khantsis, Sergey, s3007192@student.rmit.edu.au January 2006 (has links)
The capability of autonomous operation of ship-based Unmanned Aerial Vehicles (UAVs) in extreme sea conditions would greatly extend the usefulness of these aircraft for both military and civilian maritime purposes. Maritime operations are often associated with Vertical Take-Off and Landing (VTOL) procedures, even though the advantages of conventional fixed-wing aircraft over VTOL aircraft in terms of flight speed, range and endurance are well known. In this work, current methods of shipboard recovery are analysed and the problems associated with recovery in adverse weather conditions are identified. Based on this analysis, a novel recovery method is proposed. This method, named Cable Hook Recovery, is intended to recover small to medium-size fixed-wing UAVs on frigate-size vessels. It is expected to have greater operational capabilities than the Recovery Net technique, which is currently the most widely employed method of recovery for this class of UAVs, potentially providing safe recovery even in very rough seas and allowing a choice of approach direction. The recovery method is supported by the development of a UAV controller that realises the most demanding stage of recovery, the final approach. The controller provides both the flight control and the guidance strategy that allow fully autonomous recovery of a fixed-wing UAV. The development process involves extensive use of specially tailored Evolutionary Algorithms and represents the major contribution of this work. The Evolutionary Design algorithm developed in this work combines the power of Evolutionary Strategies and Genetic Programming, enabling automatic evolution of both the structure and the parameters of the controller. The controller is evolved using a fully coupled nonlinear six-degree-of-freedom UAV model, making linearisation and trimming of the model unnecessary. The developed algorithm is applied to both flight control and guidance problems with several variations, from optimisation of a routine PID controller to automatic synthesis of control laws where no a priori data are available. It is demonstrated that Evolutionary Design is capable not only of optimising, but also of automatically solving, real-world problems, producing human-competitive solutions. The designed UAV controller has been tested comprehensively for both performance and robustness in a nonlinear simulation environment and has been found to allow the aircraft to be recovered in the presence of both large external disturbances and uncertainty in the simulation models.
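The thesis evolves both the structure and the parameters of the controller against a six-degree-of-freedom UAV model; the toy sketch below shows only the simplest end of that spectrum, a (mu + lambda) evolution strategy tuning PID gains for an invented first-order plant. The plant dynamics, cost function and strategy parameters are all assumptions made for the example, not taken from the thesis.

```python
import random

def simulate(gains, setpoint=1.0, dt=0.02, steps=500):
    """Integrate an invented first-order plant under PID control; return accumulated |error|."""
    kp, ki, kd = gains
    y, integral, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative
        y += dt * (-y + u)          # toy dynamics: dy/dt = -y + u
        prev_err = err
        cost += abs(err) * dt
    return cost

def evolve(mu=5, lam=20, generations=60, sigma=0.3):
    """(mu + lambda) evolution strategy over the three PID gains."""
    parents = [[random.uniform(0.0, 5.0) for _ in range(3)] for _ in range(mu)]
    for _ in range(generations):
        offspring = [[max(0.0, g + random.gauss(0.0, sigma)) for g in random.choice(parents)]
                     for _ in range(lam)]
        pool = parents + offspring
        parents = sorted(pool, key=simulate)[:mu]   # keep the mu lowest-cost controllers
    return parents[0]

best = evolve()
print("best PID gains:", [round(g, 3) for g in best], "cost:", round(simulate(best), 4))
```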
4

A modular approach to the automatic design of control software for robot swarms: From a novel perspective on the reality gap to AutoMoDe

Francesca, Gianpiero 21 April 2017 (has links)
The main issue in swarm robotics is to design the behavior of the individual robots so that a desired collective behavior is achieved. A promising alternative to the classical trial-and-error design approach is to rely on automatic design methods. In an automatic design method, the problem of designing the control software for a robot swarm is cast into an optimization problem: the different design choices define a search space that is explored using an optimization algorithm. Most of the automatic design methods proposed so far belong to the framework of evolutionary robotics. Traditionally, in evolutionary robotics the control software is based on artificial neural networks and is optimized automatically via an evolutionary algorithm, following a process inspired by natural evolution. Evolutionary robotics has been successfully adopted to design robot swarms that perform various tasks. The results achieved show that automatic design is a viable and promising approach to designing the control software of robot swarms. Despite these successes, a widely recognized problem of evolutionary robotics is the difficulty of overcoming the reality gap, that is, of achieving a seamless transition from simulation to the real world. In this thesis, we aim at conceiving an effective automatic design approach that is able to deliver robot swarms that have high performance once deployed in the real world. To this end, we consider the major problem in the automatic design of robot swarms: the reality gap problem. We analyze the reality gap problem from a machine learning perspective. We show that the reality gap problem bears a strong resemblance to the generalization problem encountered in supervised learning. By casting the reality gap problem into the bias-variance tradeoff, we show that the inability to overcome the reality gap experienced in evolutionary robotics could be explained by the excessive representational power of the control architecture adopted. Consequently, we propose AutoMoDe, a novel automatic design approach that adopts a control architecture with low representational power. AutoMoDe designs control software in the form of a probabilistic finite state machine that is composed automatically from a number of pre-existing parametric modules. In the experimental analysis presented in this thesis, we show that adopting a control architecture with low representational power is beneficial: AutoMoDe performs better than an evolutionary approach. Moreover, AutoMoDe is able to design robot swarms that perform better than those designed by human designers. AutoMoDe is the first automatic design approach shown to outperform human designers in a controlled experiment. / Doctorat en Sciences de l'ingénieur et technologie
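A minimal sketch of the kind of control architecture AutoMoDe assembles: a probabilistic finite state machine whose states execute parametric behaviour modules. The module names, parameters and transition probabilities below are invented for illustration; AutoMoDe's actual module set and the optimisation process that selects and tunes the modules are described in the thesis.

```python
import random

# Hypothetical parametric behaviour modules; AutoMoDe's real module set differs.
def exploration(robot, unused):
    robot["pose"] += random.uniform(-1.0, 1.0)

def phototaxis(robot, gain):
    robot["pose"] += gain * (robot["light"] - robot["pose"])

def stop(robot, unused):
    pass

class PFSM:
    """A probabilistic finite state machine over parametric behaviour modules."""
    def __init__(self, states, transitions):
        self.states = states            # state name -> (module, parameter)
        self.transitions = transitions  # state name -> list of (next state, probability)
        self.current = next(iter(states))

    def step(self, robot):
        module, parameter = self.states[self.current]
        module(robot, parameter)
        draw, cumulative = random.random(), 0.0
        for nxt, probability in self.transitions[self.current]:
            cumulative += probability
            if draw < cumulative:
                self.current = nxt      # otherwise stay in the current state
                break

controller = PFSM(
    states={"explore": (exploration, None), "approach": (phototaxis, 0.5), "halt": (stop, None)},
    transitions={"explore": [("approach", 0.10)],
                 "approach": [("halt", 0.05), ("explore", 0.02)],
                 "halt": []},
)
robot = {"pose": 0.0, "light": 3.0}
for _ in range(100):
    controller.step(robot)
print("final state:", controller.current, "pose:", round(robot["pose"], 2))
```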
5

Automatic Design of Optimal Actuated Traffic Signal Control with Transit Signal Priority

Keblawi, Mahmud, Toledo, Tomer 23 June 2023 (has links)
In traffic networks, appropriately determining the traffic signal plan of each intersection is a necessary condition for a reasonable level of service. This paper presents the development of a new system for automatically designing optimal actuated traffic signal plans with transit signal priority. It uses an optimization algorithm combined with a mesoscopic traffic simulation model to design and evaluate optimal traffic signal plans for each intersection in the traffic network, thereby reducing the need for human intervention in the design process. The proposed method can simultaneously determine the optimal logical structure, priority strategies, timing parameters, phase composition and sequence, and detector placements. The integrated system was tested on a real-world isolated intersection in the city of Haifa. The results demonstrate that this approach has the potential to design signal plans efficiently without human intervention, which can minimize time, cost, and design effort. It can also help uncover problems in the design that might otherwise go undetected.
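The paper couples an optimisation algorithm with a mesoscopic traffic simulator; the sketch below illustrates only the outer loop of such a system, scoring candidate timing parameters with a deliberately crude stand-in delay model. The two-phase plan structure, the delay formula and the demand figures are invented for the example and do not come from the paper.

```python
import itertools

def simulate_delay(plan, demand=(900, 300)):
    """Crude stand-in for a mesoscopic simulator: average delay (s/veh) for a two-phase plan.
    A real evaluation would also model the actuation logic and transit signal priority."""
    g_main, g_side = plan["green_main"], plan["green_side"]
    cycle = g_main + g_side + 2 * plan["lost_time"]
    total_delay = 0.0
    for flow, green in zip(demand, (g_main, g_side)):
        saturation = min(0.99, (flow / 3600.0) * cycle / (0.5 * green))  # invented capacity model
        total_delay += flow * 0.5 * cycle * (1 - green / cycle) ** 2 / (1 - saturation)
    return total_delay / sum(demand)

# Enumerate candidate timing parameters and keep the plan the "simulator" scores best.
candidates = [{"green_main": gm, "green_side": gs, "lost_time": 4}
              for gm, gs in itertools.product(range(20, 61, 5), range(10, 41, 5))]
best = min(candidates, key=simulate_delay)
print("best plan:", best, "average delay:", round(simulate_delay(best), 1), "s/veh")
```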
6

A rapid design methodology for generating of parallel image processing applications and parallel architectures for smart camera / Méthodologie de prototypage rapide pour générer des applications de traitement d'images parallèles et architectures parallèles dédié caméra intelligente

Chenini, Hanen 27 May 2014 (has links)
Due to the complexity of image processing algorithms and the restrictions that MPSoC designs must satisfy to reach their full potential, automatic design methodologies are needed to guide the programmer in generating efficient parallel programs. In this dissertation, we present an MPSoC-based design methodology supporting automatic design space exploration, automatic performance evaluation, and automatic hardware/software system generation. To facilitate parallel programming, the presented MPSoC approach is based on the CubeGen framework, which permits the expression of different scenarios for architectural and algorithmic design exploration to reach the desired level of performance, resulting in short development times. The generated design can be implemented in FPGA technology with an expected improvement in application performance and power consumption. Starting from the application, the methodology provides several parameterizable algorithmic skeletons, matched to varying application characteristics, in order to exploit all types of parallelism present in real algorithms. Implementing such applications on our parallel embedded system shows that these methods achieve increased efficiency with respect to the computational and communication requirements. The experimental results demonstrate that the designed multiprocessing architecture can be programmed efficiently and can achieve performance equivalent to more powerful designs based on hard-core processors, and better than traditional ASIC solutions, which are too slow and too expensive.
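The methodology generates hardware/software MPSoC designs from algorithmic skeletons; as a software analogue only, the sketch below shows a data-parallel "farm" skeleton in ordinary Python, splitting an image into row bands and applying a per-tile kernel in parallel. The kernel, image and worker count are invented, and nothing here corresponds to the CubeGen-generated architecture itself.

```python
from concurrent.futures import ProcessPoolExecutor

def threshold_tile(tile, level=128):
    """Per-tile kernel: a trivial point operation standing in for a real image-processing filter."""
    return [[255 if pixel > level else 0 for pixel in row] for row in tile]

def split_rows(image, bands):
    """Cut the image into contiguous row bands, one work item per band."""
    step = max(1, len(image) // bands)
    return [image[i:i + step] for i in range(0, len(image), step)]

def farm(kernel, image, workers=4):
    """Data-parallel 'farm' skeleton: apply the kernel to each band in a separate process."""
    bands = split_rows(image, workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        processed = list(pool.map(kernel, bands))
    return [row for band in processed for row in band]

if __name__ == "__main__":
    image = [[(x * y) % 256 for x in range(64)] for y in range(64)]   # synthetic test image
    result = farm(threshold_tile, image)
    print(len(result), "x", len(result[0]), "binary image produced")
```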
7

Methods and tools for the optimization of modular electrical power distribution cabinets in aeronautical applications / Méthodes et outils pour l'optimisation de cœurs modulaires de distribution électrique pour applications aéronautiques

Morentin Etayo, Alvaro 10 March 2017 (has links)
In recent years, aircraft manufacturers have been making progress in the design of more efficient aircraft to reduce the environmental footprint. To attain this target, aircraft manufacturers are working on replacing hydraulic and bleed systems with electrical systems, leading to a “More Electrical Aircraft”. However, the expected mass gain is a challenge, as the previous technologies have been developed and optimized for decades. The new electrical solutions need to be studied in detail to be competitive with previous technologies. All degrees of freedom must be considered, that is, new technologies and architectures.
In particular, an HVDC network that reduces the number of rectifier stages seems a promising solution. From the HVDC network, the different three-phase AC loads are supplied by a series of generic power inverters. As the power consumption of the different loads changes during the flight mission, the same inverter can be used to supply different loads. The connection between the inverters and the loads is managed by a matrix of contactors. The proposed solution also considers redundant configurations, thus increasing system robustness. The design of this innovative system is presented in this document: determining the optimal trade-off between the number of power inverters and the nominal power of each generic inverter, which also impacts the size of the matrix of contactors. However, to assess this combinatorial problem, the mass of the different components as a function of nominal power needs to be calculated. A design environment is therefore created to perform automatic and optimized design of power converters. The different components are described using a “direct modelling” approach and coded using “object-oriented” programming. The components are validated experimentally or by numerical simulations. The different models are coupled to an optimization environment and to a frequency-domain solver allowing fast calculation of the steady-state waveforms. The optimization environment performs the precise design of the different parts of the power inverter: heatsink, power module, DC filter and coupling inductor. The power inverter is designed for different values of nominal power and switching frequency. The optimization also assesses the use of different technologies. Finally, the results are used to determine the optimal trade-off between the number of inverters and the nominal power of each inverter using a heuristic algorithm.
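The thesis pairs detailed converter mass models with a heuristic to pick the number and rating of the generic inverters; the sketch below shows only the shape of that trade-off search, using an entirely invented mass model, load profile and first-fit packing check in place of the real design environment.

```python
import itertools

# Invented load profile: peak power (kW) drawn by each AC load in the worst flight phase.
LOADS_KW = [4.0, 7.5, 3.0, 6.0, 2.5, 5.0]

def inverter_mass(p_rated_kw):
    """Hypothetical mass model (kg) for one generic inverter of a given rating."""
    return 1.2 + 0.35 * p_rated_kw

def contactor_matrix_mass(n_inverters, n_loads, kg_per_contactor=0.15):
    """Hypothetical mass of the contactor matrix connecting every inverter to every load."""
    return kg_per_contactor * n_inverters * n_loads

def feasible(n_inverters, p_rated_kw):
    """First-fit-decreasing check: can the loads be packed onto the available inverters?"""
    bins = [0.0] * n_inverters
    for load in sorted(LOADS_KW, reverse=True):
        bins.sort()
        if bins[0] + load > p_rated_kw:
            return False
        bins[0] += load
    return True

best = None
for n, p in itertools.product(range(2, 7), (5.0, 7.5, 10.0, 15.0, 20.0)):
    if not feasible(n, p):
        continue
    mass = n * inverter_mass(p) + contactor_matrix_mass(n, len(LOADS_KW))
    if best is None or mass < best[0]:
        best = (mass, n, p)
print("lightest feasible option: %d inverters of %.1f kW, %.1f kg" % (best[1], best[2], best[0]))
```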
8

Evolutionary membrane computing: A comprehensive survey and new results

Zhang, G., Gheorghe, Marian, Pan, L.Q., Perez-Jimenez, M.J. 19 April 2014 (has links)
Evolutionary membrane computing is an important research direction of membrane computing that aims to explore the complex interactions between membrane computing and evolutionary computation. These disciplines are receiving increasing attention. In this paper, an overview of the evolutionary membrane computing state of the art and new results on two established topics within well-defined scopes (membrane-inspired evolutionary algorithms and automated design of membrane computing models) are presented. We survey their theoretical developments and applications, sketch the differences between them, and compare their advantages and limitations.
