1

GMMEDA : A demonstration of probabilistic modeling in continuous metaheuristic optimization using mixture models

Naveen Kumar Unknown Date (has links)
Optimization problems are common throughout science, engineering and commerce. The desire to continually improve solutions and to tackle larger, more complex problems has kept this field of research prominent for several decades and has led to a range of optimization algorithms for different classes of problems. Estimation of Distribution Algorithms (EDAs) are a relatively recent class of metaheuristic optimization algorithms that use probabilistic modeling techniques to control the search process. Within the general EDA framework, a number of different probabilistic models have previously been proposed for both discrete and continuous optimization problems. This thesis focuses on GMMEDAs: continuous EDAs based on Gaussian Mixture Models (GMM), with parameter estimation performed using the Expectation Maximization (EM) algorithm. To date, this type of model has received only limited attention in the literature, there are few experimental studies of the algorithms, and a number of implementation details of the Continuous Iterated Density Estimation Algorithm based on Gaussian Mixture Models have not previously been documented. This thesis provides a clear description of the GMMEDAs, discusses the implementation decisions and details, and presents an experimental study of the performance of the algorithms. The effectiveness of GMMEDAs with varying model complexity (structure of the covariance matrices and number of components) was tested on five benchmark functions (Sphere, Rastrigin, Griewank, Ackley and Rosenbrock) of varying dimensionality (2-, 10- and 30-D). The effect of the selection pressure parameter was also studied. The 2D experiments show that a GMMEDA variant of moderate complexity (Diagonal GMMEDA) was able to optimize both unimodal and multimodal functions. Analysis of the 10- and 30-D results indicates that the simplest variant (Spherical GMMEDA) was the most effective of the three variants, although greater consistency across these functions was achieved with the most complex variant (Full GMMEDA). A comparison of results on four artificial test functions (Sphere, Griewank, Ackley and Rosenbrock) showed that the GMMEDA variants optimized most of the complex functions better than existing continuous EDAs, owing to the ability of the GMM components to model the functions effectively. The analysis also showed that the number of components and the selection pressure affect the optimum value reached on the artificial test functions. Convergence of the GMMEDA variants to each function's best local optimum was driven more by the complexity of the GMM components: the complexity contributed by the number of components grows as the complexity of the covariance structure grows, but when optimizing complex functions the complexity due to the covariance structure dominates that due to the number of components. Additionally, the effect of the number of components on convergence decreases for most functions as the selection pressure increases; these effects appear in the results as differences in the stability of the results for each function.
Other factors that affect convergence to the local optima are the initialization of the GMM parameters, the number of EM iterations, and the reset condition. Although not visible graphically in the 10D optimization, different initializations of the GMM parameters in 2D were shown to affect the optimum value reached on the functions. Population initialization in Evolutionary Algorithms is known to affect convergence to a function's global optimum; the observation of similar effects from GMM parameter initialization on the 2D functions suggests that convergence of the GMM in 10D could also be affected, which in turn could affect the optimum value reached on the respective functions. The estimated covariance and mean values over the EM iterations in 2D indicated that some functions required a greater number of EM iterations to reach their optimum value, suggesting that too few EM iterations could impair the fitting of the components to the selected population in 10D and hence the effective modeling of functions of varying complexity. Finally, the reset condition was observed to reset the covariance and the best fitness value of an individual in each generation in 2D, and this condition is certain to affect convergence of the GMMEDA variants to each function's best local optimum: the rate at which the reset condition was invoked could have caused the covariance values of the GMM components to reset to their initial values, so that the fit of the model to the selected fraction of the population was affected. Considering all of these effects, the results indicate that a smaller number of components and a smaller selected fraction of the population, with the simpler S-GMMEDA, modeled most functions of varying complexity.
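
To make the search loop described in this abstract concrete, the sketch below fits a Gaussian Mixture Model to a truncation-selected subpopulation with EM and samples the next generation from it. It is an illustrative reconstruction rather than the thesis implementation; the search bounds, population size, selection fraction and the use of scikit-learn's GaussianMixture are assumptions.

```python
# Illustrative GMMEDA-style loop (not the thesis code): fit a Gaussian Mixture
# Model to the selected individuals with EM, then sample the next generation.
import numpy as np
from sklearn.mixture import GaussianMixture  # EM-based GMM fitting

def sphere(x):                       # one of the cited benchmark functions
    return np.sum(x**2, axis=1)

def gmmeda(f, dim=2, pop_size=200, tau=0.3, n_components=4,
           covariance_type="diag", generations=100, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))   # assumed bounds
    best_x, best_f = None, np.inf
    for _ in range(generations):
        fitness = f(pop)
        order = np.argsort(fitness)
        if fitness[order[0]] < best_f:
            best_f, best_x = fitness[order[0]], pop[order[0]].copy()
        selected = pop[order[: int(tau * pop_size)]]      # truncation selection (selection pressure ~ tau)
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type=covariance_type,  # "spherical", "diag" or "full"
                              reg_covar=1e-6).fit(selected)
        pop, _ = gmm.sample(pop_size)                     # sample the next population
    return best_x, best_f

if __name__ == "__main__":
    x, fx = gmmeda(sphere, dim=2)
    print("best:", x, fx)
```

The covariance_type argument mirrors the Spherical/Diagonal/Full variants discussed in the abstract, and tau plays the role of the selection pressure parameter.
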
2

A model-driven design-space exploration tool for the HIPAO 2 methodology / Ferramenta de exploração de espaço de projeto baseada em modelos para a metodologia HIPAO2

Lerm, Rafael Andréas Raffi January 2015 (has links)
Designers of today's embedded systems are faced with increasing complexity both in the applications being developed and in the platforms they run on. The use of complex platforms means that engineers must make non-trivial, and often non-intuitive, decisions during the design phase. To help developers cope with this complexity, model-driven techniques are gaining attention, and in this context the HIPAO2 model-driven engineering methodology is being developed at UFRGS. Among the problems that designers must solve, task-to-processor mapping in heterogeneous multiprocessor systems is an NP-complete problem whose design space quickly becomes too large to be explored adequately by hand. This work details the extension of the tools that support HIPAO2 to include semi-automatic Design-Space Exploration capabilities for the mapping problem. The proposed tool uses a multiobjective genetic algorithm to make design tradeoffs explicit to the designers, and it uses analysis algorithms for applications modeled as synchronous dataflow to evaluate candidate mappings at a reasonable computational cost.
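
As a rough illustration of the kind of design-space exploration described above, the sketch below evolves task-to-processor mappings with a simple Pareto-based multiobjective genetic algorithm. The task loads, processor speeds, power figures and genetic operators are invented for the example and do not reproduce the HIPAO2 tool or its synchronous-dataflow cost models.

```python
# Toy multiobjective GA for task-to-processor mapping (illustration only;
# the HIPAO2 tool's actual models and operators are not reproduced here).
import random

TASK_LOAD = [4, 7, 2, 9, 5, 3]      # hypothetical task workloads
PROC_SPEED = [1.0, 2.0, 3.0]        # hypothetical heterogeneous processor speeds
PROC_POWER = [1.0, 2.5, 5.0]        # hypothetical power draw per processor

def evaluate(mapping):
    """Return (makespan, energy) for a task->processor assignment; both minimized."""
    busy = [0.0] * len(PROC_SPEED)
    energy = 0.0
    for task, proc in enumerate(mapping):
        t = TASK_LOAD[task] / PROC_SPEED[proc]
        busy[proc] += t
        energy += t * PROC_POWER[proc]
    return max(busy), energy

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop):
    scored = [(m, evaluate(m)) for m in pop]
    return [(m, s) for m, s in scored
            if not any(dominates(t, s) for _, t in scored)]

def evolve(pop_size=40, generations=60, mut_rate=0.2, seed=1):
    random.seed(seed)
    n_tasks, n_procs = len(TASK_LOAD), len(PROC_SPEED)
    pop = [[random.randrange(n_procs) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        front = [m for m, _ in pareto_front(pop)]          # elitist non-dominated set
        children = []
        while len(children) < pop_size:
            a, b = random.sample(front, 2) if len(front) > 1 else (front[0], front[0])
            cut = random.randrange(1, n_tasks)
            child = a[:cut] + b[cut:]                      # one-point crossover
            if random.random() < mut_rate:                 # mutation: remap one task
                child[random.randrange(n_tasks)] = random.randrange(n_procs)
            children.append(child)
        pop = front + children[: pop_size - len(front)]
    return pareto_front(pop)

if __name__ == "__main__":
    for mapping, (makespan, energy) in evolve():
        print(mapping, round(makespan, 2), round(energy, 2))
```

The two objectives (estimated makespan and energy) trade off against each other, so the final Pareto front is what would be shown to the designer.
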
3

Towards Fault Reactiveness in Wireless Sensor Networks with Mobile Carrier Robots

Falcon Martinez, Rafael Jesus 04 April 2012 (has links)
Wireless sensor networks (WSN) increasingly permeate modern societies. Yet in spite of their many successful applications, WSN are often unable to surmount operational challenges that arise unexpectedly during their lifetime. Fortunately, robotic agents can now assist a WSN in various ways. This thesis illustrates how mobile robots able to carry a limited number of sensors can help the network react to sensor faults, either during or after its deployment in the monitoring region. Two scenarios are envisioned. In the first, carrier robots surround a point of interest with multiple sensor layers (focused coverage formation); we put forward the first known algorithm of its kind in the literature, one that is energy-efficient, fault-reactive and aware of the bounded robot cargo capacity. The second is replacing damaged sensing units with spare, functional ones (coverage repair), which gives rise to two novel combinatorial optimization problems; three nature-inspired metaheuristic approaches that run at a centralized location are proposed and are able to find good-quality solutions in a short time. Two frameworks for identifying the damaged nodes are considered. The first leans on diagnosable systems, i.e. existing distributed detection models in which individual units perform tests on each other; two swarm intelligence algorithms are designed to quickly and reliably spot faulty sensors in this context. The second is an evolving risk management framework for WSNs that is formulated entirely in this thesis.
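
The coverage-repair scenario above can be pictured as a capacity-constrained assignment of carrier robots to faulty node locations. The sketch below applies generic simulated annealing to such an assignment; it stands in for, and does not reproduce, the three nature-inspired metaheuristics of the thesis, and all coordinates, depots and capacities are invented.

```python
# Generic sketch of a coverage-repair assignment: route spare sensors carried
# by capacity-limited robots to faulty node locations, minimizing total travel.
# Simulated annealing stands in for the thesis's nature-inspired metaheuristics.
import math, random

FAULTY = [(2, 3), (8, 1), (5, 9), (7, 7), (1, 8)]   # hypothetical faulty-node coordinates
ROBOTS = [(0, 0), (10, 10)]                          # hypothetical robot depots
CAPACITY = 3                                         # spares per robot (assumed)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cost(assign):
    # Each robot visits its assigned faulty nodes in listed order, starting
    # from its depot; assignments over capacity get an infinite penalty.
    total = 0.0
    for r, depot in enumerate(ROBOTS):
        route = [FAULTY[i] for i in range(len(FAULTY)) if assign[i] == r]
        if len(route) > CAPACITY:
            return float("inf")
        pos = depot
        for p in route:
            total += dist(pos, p)
            pos = p
    return total

def anneal(steps=5000, t0=5.0, seed=0):
    random.seed(seed)
    assign = [random.randrange(len(ROBOTS)) for _ in FAULTY]
    cur_c = cost(assign)
    best, best_c = assign[:], cur_c
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9
        cand = assign[:]
        cand[random.randrange(len(FAULTY))] = random.randrange(len(ROBOTS))  # reassign one node
        c = cost(cand)
        if c < cur_c or random.random() < math.exp((cur_c - c) / t):
            assign, cur_c = cand, c
            if c < best_c:
                best, best_c = cand[:], c
    return best, best_c

if __name__ == "__main__":
    print(anneal())
```
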
4

Robust Turnaround Management: Ground Operations under Uncertainty

Asadi, Ehsan 15 April 2024 (has links)
Efficient ground handling at airports contributes greatly to the performance of the entire air transportation network. In this network, airports are connected via aircraft that rely on passenger and crew connections, successful local airport operations, and efficient ground handling resource management. In addition, airport stakeholders' decision-making processes must take into account various time scales (look-ahead times), process estimates, and both limited and multiple-dependent solution spaces. Most airlines have created integrated hub and operations control centers to monitor and adapt tactical operations; even so, decisions in such control centers must be made quickly in case of disruption and must reflect the interests of various airline departments and local stakeholders. Building on the Airport Collaborative Decision Making (A-CDM) concept, a joint initiative of Airports Council International Europe (ACI EUROPE), the European Organization for the Safety of Air Navigation (EUROCONTROL), the International Air Transport Association (IATA) and the Civil Air Navigation Services Organization (CANSO), this study creates tools to manage turnaround in normal and disrupted contexts, facilitating decision-making in an Airport Operations Control Center (AOCC) and a Hub Control Center (HCC). The research focuses on the airline's role in the collaborative decision-making process. With respect to A-CDM milestones, turnaround time estimation is computed with four modeling methodologies, namely the Critical Path Method (CPM), the Project Evaluation and Review Technique (PERT), the Fuzzy Critical Path Method (FCPM), and analytical convolution, in both deterministic and nondeterministic domains. In addition, the study develops mathematical models to return the airline schedule to its original plan in the event of delays; chance-constrained and robust optimization models are developed for optimal decision-making when airlines face uncertainty during real-world operations. The study also develops a novel hybrid Shuffled Frog-Leaping Algorithm (SFLA) and Grasshopper Optimization Algorithm (GOA) to expedite the search for recovery solutions, allowing the AOCC and HCC to send this information to the relevant departments in real-time applications. Compared with common linear solvers, the solution process is sped up by 18 percent and solution quality is improved by 24 percent on average.
Initial results are generated in less than 2 minutes, and globally optimal results are achieved in about 15 minutes, allowing the system to be used in real-time applications.

Contents: Abstract. 1 Introduction: 1.1 Problem Description (1.1.1 Decision Scope; 1.1.2 Airport Collaborative Decision Making (A-CDM); 1.1.3 Total Airport Management; 1.1.4 Ground Handlers; 1.1.5 Turnaround Management); 1.2 Aims and Objectives; 1.3 Thesis Contribution; 1.4 Structure. 2 Literature Review: 2.1 Turnaround; 2.2 Ground Handling; 2.3 Flights and Networks; 2.4 Apron and Gate Assignment; 2.5 Scopes Combination (2.5.1 Gate Assignment and Turnaround; 2.5.2 Gate Assignment and Flights; 2.5.3 Gate Assignment and Ground Handling; 2.5.4 Turnaround and Flights; 2.5.5 Turnaround and Ground Handling; 2.5.6 Flights and Ground Handling); 2.6 Turnaround Operations; 2.7 Conclusion. 3 Turnaround Definition: 3.1 Turnaround in A-CDM System; 3.2 Turnaround and Ground Handling; 3.3 Turnaround Operations (3.3.1 In-Block (INB) and Acceptance (ACC); 3.3.2 Deboarding (DEB) and Boarding (BOA); 3.3.3 Fueling (FUE); 3.3.4 Catering (CAT); 3.3.5 Cleaning (CLE); 3.3.6 Unloading (UNL) and Loading (LOA); 3.3.7 Water service (WAT) and Toilette (TOI); 3.3.8 Finalization (FIN)). 4 Total Turnaround Time (TTT) Calculation: 4.1 Critical Path Method (CPM); 4.2 Project Evaluation and Review Technique (PERT); 4.3 Fuzzy Critical Path Method (FCPM) (4.3.1 Fuzzy Numbers and Fuzzy Sets; 4.3.2 Fuzzy Membership Functions of Turnaround Tasks; 4.3.3 Probability-possibility Transformation of Turnaround Tasks; 4.3.4 Fuzzy Critical Path Method (FCPM) in Total Turnaround Time (TTT) Calculation; 4.3.5 Discussion); 4.4 Analytical Convolution (4.4.1 Convolution Method; 4.4.2 Monte Carlo (MC) Simulation Evaluation; 4.4.3 Application of Convolution in Turnaround Control). 5 Disruption Management: 5.1 Airline Disruption Management (5.1.1 Airport Operations Control Center (AOCC); 5.1.2 Delay in the Airline Networks; 5.1.3 Recovery Options); 5.2 Deterministic Model (5.2.1 Mathematical Model; 5.2.2 Solution Approaches; 5.2.3 Problem Setting); 5.3 Non Deterministic Model (5.3.1 Stochastic Arrivals; 5.3.2 Stochastic Duration). 6 Conclusion: 6.1 Discussion around Research Questions (6.1.1 Integration of All Actors; 6.1.2 Turnaround Time Prediction; 6.1.3 Quick and Robust Reaction); 6.2 Future Research (6.2.1 Scope Development; 6.2.2 Algorithm Development; 6.2.3 Parameter Development). List of Acronyms. List of Figures. List of Tables. Bibliography. Acknowledgement.
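
Of the four turnaround-time methodologies named in this record, the Critical Path Method is the simplest to illustrate: compute each task's earliest finish over the precedence graph and take the maximum as the total turnaround time (TTT). The durations and dependencies below are invented for the example and do not reproduce the thesis's process data.

```python
# Illustrative Critical Path Method (CPM) over a simplified turnaround
# precedence graph; durations and dependencies are invented for the example.
TASKS = {  # name: (duration in minutes, predecessors)
    "INB": (2,  []),
    "DEB": (8,  ["INB"]),
    "FUE": (15, ["DEB"]),          # assume fueling waits for deboarding
    "CAT": (12, ["DEB"]),
    "CLE": (10, ["DEB"]),
    "UNL": (14, ["INB"]),
    "LOA": (16, ["UNL"]),
    "BOA": (12, ["FUE", "CAT", "CLE"]),
    "FIN": (3,  ["BOA", "LOA"]),
}

def critical_path(tasks):
    earliest_finish = {}
    def ef(name):
        if name not in earliest_finish:
            dur, preds = tasks[name]
            earliest_finish[name] = dur + max((ef(p) for p in preds), default=0)
        return earliest_finish[name]
    total = max(ef(n) for n in tasks)          # total turnaround time (TTT)
    # walk back along the binding predecessors to recover the critical path
    path, current = [], max(tasks, key=ef)
    while current:
        path.append(current)
        preds = tasks[current][1]
        current = max(preds, key=ef) if preds else None
    return total, list(reversed(path))

if __name__ == "__main__":
    ttt, path = critical_path(TASKS)
    print("TTT =", ttt, "min; critical path:", " -> ".join(path))
```

PERT, FCPM and the convolution approach replace the fixed durations above with distributional or fuzzy task times, which is where the thesis's nondeterministic analysis comes in.
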
5

A multi-fidelity analysis selection method using a constrained discrete optimization formulation

Stults, Ian Collier 17 August 2009 (has links)
The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies, and when it is considered, it is considered only in a limited fashion, which calls into question the validity of selections based on these results. Neglecting model uncertainty can cause costly redesigns of concepts later in the design process or even program cancellation. If one were instead to recognize the model uncertainty in the tools being used and use this information to select the tools for each contributing analysis, studies could be conducted more efficiently and trust in the results could be quantified. Existing methods for doing so are generally neither rigorous nor traceable, and in many cases the improvement and additional time spent on enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method that minimizes the time spent conducting computer simulations while meeting accuracy and concept resolution requirements for the results.
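
The selection problem described above can be read as a small constrained discrete optimization: choose one fidelity level per contributing analysis to minimize total run time subject to an accuracy (uncertainty) requirement. The sketch below enumerates the choices by brute force; the analyses, run times, uncertainty figures and the additive-uncertainty assumption are all invented for illustration and are not the thesis's formulation.

```python
# Hedged sketch of fidelity selection as constrained discrete optimization:
# pick one fidelity level per contributing analysis to minimize total run time
# while keeping aggregate model uncertainty under a budget.
from itertools import product

# analysis -> list of (fidelity label, run time [hours], uncertainty contribution)
OPTIONS = {
    "aerodynamics": [("panel", 0.5, 0.30), ("Euler", 4.0, 0.15), ("RANS", 24.0, 0.05)],
    "structures":   [("beam", 0.2, 0.25), ("shell FEM", 3.0, 0.10)],
    "propulsion":   [("lookup", 0.1, 0.20), ("cycle model", 1.0, 0.08)],
}
UNCERTAINTY_BUDGET = 0.45   # assumed requirement on combined uncertainty

def best_selection(options, budget):
    best = None
    names = list(options)
    for combo in product(*(options[n] for n in names)):
        time = sum(c[1] for c in combo)
        uncertainty = sum(c[2] for c in combo)     # simplistic additive model
        if uncertainty <= budget and (best is None or time < best[0]):
            best = (time, uncertainty, dict(zip(names, (c[0] for c in combo))))
    return best

if __name__ == "__main__":
    time, unc, choice = best_selection(OPTIONS, UNCERTAINTY_BUDGET)
    print(f"{time:.1f} h, uncertainty {unc:.2f}: {choice}")
```

For realistic numbers of analyses and fidelity levels the enumeration would be replaced by an integer-programming or heuristic solver, but the structure of the decision stays the same.
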
