441

Metóda najmenších štvorcov genetickým algoritmom / Least squares method using genetic algorithm

Holec, Matúš January 2011 (has links)
This thesis describes the design and implementation of a genetic algorithm for the approximation of non-linear mathematical functions using the least squares method. One objective of this work is to describe the theoretical basics of genetic algorithms. The second objective is to create a program that could be used by scientific institutions to approximate empirically measured data. Besides the theoretical description of the subject, the text mainly deals with the design of the genetic algorithm and of the whole application solving the given problem. A specific part of the assignment is that the developed application has to support the approximation of points by various non-linear mathematical functions over several different intervals, and it then has to ensure that the resulting functions are continuous across all the intervals. This functionality is not offered by any available software.
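As context, a minimal sketch of the core idea, a genetic algorithm minimising a sum-of-squared-residuals fitness, might look as follows; the exponential model, mutation rate, and population size are illustrative assumptions, not the thesis's actual choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sse(params, x, y):
    """Fitness: sum of squared residuals of a candidate model.
    The exponential-decay model below is an illustrative assumption."""
    a, b, c = params
    return np.sum((y - (a * np.exp(-b * x) + c)) ** 2)

def ga_least_squares(x, y, pop_size=60, gens=200, sigma=0.1):
    pop = rng.uniform(-5, 5, size=(pop_size, 3))         # random initial population
    for _ in range(gens):
        fit = np.array([sse(ind, x, y) for ind in pop])
        order = np.argsort(fit)                          # lower SSE is better
        parents = pop[order[: pop_size // 2]]            # truncation selection
        # uniform crossover between random parent pairs
        i, j = rng.integers(len(parents), size=(2, pop_size))
        mask = rng.random((pop_size, 3)) < 0.5
        children = np.where(mask, parents[i], parents[j])
        children += rng.normal(0.0, sigma, children.shape)  # Gaussian mutation
        pop = children
        pop[0] = parents[0]                              # elitism: keep the best
    fit = np.array([sse(ind, x, y) for ind in pop])
    return pop[np.argmin(fit)]

x = np.linspace(0, 4, 50)
y = 2.0 * np.exp(-1.3 * x) + 0.5 + rng.normal(0, 0.02, x.size)
print(ga_least_squares(x, y))   # should approach (2.0, 1.3, 0.5)
```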
442

A discourse concerning certain stochastic optimization algorithms and their application to the imaging of cataclysmic variable stars

Wood, Derren W 27 July 2005 (has links)
This thesis is primarily concerned with a description of four types of stochastic algorithms, namely the genetic algorithm, the continuous parameter genetic algorithm, the particle swarm algorithm and the differential evolution algorithm. Each of these techniques is presented in sufficient detail to allow the layman to develop her own program upon examining the text. All four algorithms are applied to the optimization of a certain set of unconstrained problems known as the extended Dixon-Szegö test set. An algorithm's performance at optimizing a set of problems such as these is often used as a benchmark for judging its efficacy. Although the same thing is done here, an argument is presented that shows that no such general benchmarking is possible. Indeed, it is asserted that drawing general comparisons between stochastic algorithms on the basis of any performance criterion is a meaningless pursuit unless the scope of such comparative statements is limited to specific sets of optimization problems. The idea is a result of the no free lunch theorems proposed by Wolpert and Macready. Two methods of presenting the results of an optimization run are discussed. They are used to show that judging an optimizer's performance is largely a subjective undertaking, despite the apparently objective performance measures which are commonly used when results are published. An important theme of this thesis is the observation that a simple paradigm shift can result in a different decision regarding which algorithm is best suited to a certain task. Hence, an effort is made to present the proper interpretation of the results of such tests (from the author's point of view). Additionally, the four abovementioned algorithms are used in a modelling environment designed to determine the structure of a Magnetic Cataclysmic Variable. This 'real world' modelling problem contrasts starkly with the well-defined test set and highlights some of the issues that designers must face in the optimization of physical systems. The particle swarm optimizer will be shown to be the algorithm capable of achieving the best results for this modelling problem if an unbiased χ² performance measure is used. However, the solution it generates is clearly not physically acceptable. Even though this drawback is not directly attributable to the optimizer, it is at least indicative of the fact that there are practical considerations which complicate the issue of algorithm selection. / Dissertation (MEng (Mechanical Engineering))--University of Pretoria, 2006. / Mechanical and Aeronautical Engineering / unrestricted
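For reference, a bare-bones particle swarm optimizer of the kind singled out above can be sketched as follows; the inertia and acceleration coefficients are conventional textbook values, and the Rosenbrock function stands in for the Dixon-Szegö problems, neither taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Rosenbrock in 2-D as a stand-in test problem
rosen = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
print(pso(rosen, (np.array([-2., -2.]), np.array([2., 2.]))))
```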
443

Modeling of Pipeline Transients: Modified Method of Characteristics

Wood, Stephen L 08 July 2011 (has links)
The primary purpose of this research was to improve the accuracy and robustness of pipeline transient modeling. An algorithm was developed to model the transient flow in closed tubes for thin-walled pipelines. Emphasis was given to the application of this type of flow to pipelines with small-radius 90° elbows. An additional loss term was developed to account for the presence of 90° elbows in a pipeline. The algorithm was integrated into an optimization routine to fit results from the improved model to experimental data. A web-based interface was developed to facilitate the pre- and post-processing operations. Results showed that including a loss term that represents the effects of 90° elbows in the Method of Characteristics (MOC) [1] improves the accuracy of the predicted transients by an order of magnitude. Secondary objectives of pump optimization, blockage detection and removal were investigated with promising results.
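For background, the classical MOC compatibility equations for transient pipe flow, to which a local elbow-loss term of the kind described can be appended, take roughly the following form; the exact loss formulation developed in the thesis is not reproduced here:

```latex
% MOC compatibility relations at interior node i (characteristics dx/dt = ±a):
\begin{aligned}
C^{+}:\quad H_P &= C_p - B\,Q_P, & C_p &= H_{i-1} + B\,Q_{i-1} - R\,Q_{i-1}\lvert Q_{i-1}\rvert \\
C^{-}:\quad H_P &= C_m + B\,Q_P, & C_m &= H_{i+1} - B\,Q_{i+1} + R\,Q_{i+1}\lvert Q_{i+1}\rvert
\end{aligned}
\qquad B = \frac{a}{gA}, \quad R = \frac{f\,\Delta x}{2\,g\,D\,A^{2}}
```

At a node containing a 90° elbow, one can for instance subtract an additional minor-loss head drop of the form K Q|Q|/(2gA²) from the nodal balance, K being an assumed elbow loss coefficient; the thesis develops its own calibrated term.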
444

Abordagem lexicográfica na otimização da operação de usinas hidrelétricas / Lexicographic approach to optimize the short-term scheduling of hydroelectric power plants

Fernandes, Jéssica Pillon Torralba, 1985- 05 August 2015 (has links)
Orientadores: Ieda Geriberto Hidalgo, Paulo de Barros Correia / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Resumo: Em busca do desenvolvimento sustentável, a atividade de produção de energia iniciou o século XXI com foco em dois temas: eficiência energética e utilização de fontes de energia renováveis. O Brasil é um país privilegiado em termos de disponibilidade de recursos naturais para a geração de energia, principalmente através da água. Apesar da evolução de outras fontes renováveis de energia, como a biomassa e a eólica, é previsto um aumento da utilização de energia hidráulica na geração de eletricidade de forma sustentável. Para acompanhar esse aumento, existe a necessidade de expandir a oferta de energia através da instalação de novas usinas hidrelétricas e/ou otimização da operação das usinas hidrelétricas existentes. Neste contexto, esta tese apresenta uma metodologia para resolver o problema de despacho dinâmico de máquinas e geração com horizonte diário e discretização horária. Ela baseia-se na Programação por Metas Lexicográficas, utilizando Algoritmo Genético e Strength Pareto Evolutionary Algorithm. A formulação matemática do problema possui dois objetivos conflitantes. O primeiro consiste em maximizar a geração líquida da usina ao longo do dia. O segundo visa minimizar o número de partidas e paradas das unidades geradoras. A resolução é executada em duas etapas. Na Etapa 1, o Algoritmo Genético é utilizado para resolver o problema estático para cada hora. Na Etapa 2, Algoritmo Genético e Strength Pareto Evolutionary Algorithm são empregados para solucionar o problema dinâmico ao longo de um dia. As soluções encontradas são analisadas através da construção de uma curva de trade-offs. Os estudos de casos são realizados com as usinas Jupiá e Porto Primavera, que pertencem ao Sistema Interligado Nacional. Os resultados mostram que a metodologia proposta apresenta soluções eficientes e econômicas para a programação diária de usinas hidrelétricas / Abstract: In pursuit of sustainable development, the energy production activity began the 21st century with a focus on two themes: energy efficiency and the use of renewable energy sources. Brazil is a privileged country in terms of the availability of natural resources for energy production, mainly through water. Despite the development of other renewable energy sources, such as biomass and wind power, hydro energy is expected to play a growing role in sustainable electricity generation. To keep up with this growth, there is a need to increase the supply of energy by installing new hydroelectric plants and/or optimizing the operation of existing ones. In this context, this thesis presents a methodology to solve the dynamic dispatch problem of generating units with a daily horizon and hourly discretization. It is based on Lexicographic Goal Programming using a Genetic Algorithm and the Strength Pareto Evolutionary Algorithm. The mathematical formulation of the problem has two conflicting goals. The first consists of maximizing the net generation of the plant throughout the day. The second aims to minimize the number of start-ups and shut-downs of the generating units. The resolution is divided into two steps. In Step 1, the Genetic Algorithm is used to solve the static problem for each hour. In Step 2, the Genetic Algorithm and the Strength Pareto Evolutionary Algorithm are employed to solve the dynamic problem throughout the day. The solutions are analyzed by building a trade-off curve. The case studies are carried out with the Jupiá and Porto Primavera hydroelectric power plants, which belong to the National Interconnected System. The results show that the proposed methodology provides efficient and economical solutions for the daily scheduling of hydroelectric power plants / Doutorado / Planejamento de Sistemas Energeticos / Doutora em Planejamento de Sistemas Energéticos
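As an illustration of the two-step lexicographic idea (not the thesis's GA/SPEA implementation), the sketch below secures the primary goal first and then optimises the secondary goal under a penalty for giving up primary value; the toy dispatch functions and the random-search stand-in solver are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_search(obj, dim=24, n=5000):
    """Stand-in black-box maximiser (the thesis uses a GA and SPEA instead)."""
    xs = rng.random((n, dim))
    return xs[np.argmax([obj(x) for x in xs])]

def lexicographic(f1, f2, tol=0.02):
    """Step 1: maximise the primary goal. Step 2: optimise the secondary goal,
    heavily penalising any solution that gives up more than `tol` (relative)
    of the primary value: the essence of lexicographic goal programming."""
    best1 = f1(random_search(f1))
    penalised = lambda x: f2(x) - 1e9 * max(0.0, (1 - tol) * best1 - f1(x))
    return random_search(penalised)

# Toy hourly dispatch: x[h] is the fraction of capacity committed in hour h.
gen = lambda x: float(np.sum(x))    # net generation over the day: maximise
starts = lambda x: -float(np.sum(np.abs(np.diff((x > 0.5).astype(int)))))
best = lexicographic(gen, starts)
print(gen(best), -starts(best))     # generation achieved, switch count
```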
445

[en] LIVER SEGMENTATION AND VISUALIZATION FROM COMPUTER TOMOGRAPHY IMAGES / [pt] SEGMENTAÇÃO E VISUALIZAÇÃO DO FÍGADO A PARTIR DE IMAGENS DE TOMOGRAFIA COMPUTADORIZADA

10 September 2009 (has links)
[pt] Esta dissertação apresenta o desenvolvimento e os resultados deste projeto de mestrado, cujo objetivo, de caráter multidisciplinar, foi desenvolver uma metodologia e uma ferramenta para segmentação do fígado, seus vasos e subregiões a partir de imagens de tomografia computadorizada da região abdominal, utilizando procedimentos de segmentação automática de imagens e visualização tridimensional de dados. A metodologia sugerida segmenta primeiramente o fígado, utilizando uma abordagem de modelos deformáveis implícitos, chamada level sets, estimando os seus parâmetros através do uso de algoritmos genéticos. Inicialmente, o contorno do fígado é manualmente definido em um tomo como solução inicial, e então o método segmenta automaticamente o fígado em todos os outros tomos, sequencialmente. Os vasos e nódulos do fígado são então identificados utilizando um modelo de mistura de funções proporcionais a gaussianas, e um método de segmentação de crescimento de regiões por histerese. As veias hepáticas e portas são classificadas dentro do conjunto de vasos, e utilizadas em uma modelagem matemática que finalmente divide o fígado em oito sub-regiões de Couinaud. Esta metodologia foi testada em 20 diferentes exames e utilizando cinco diferentes medidas de performance, e os resultados obtidos confirmam o potencial do método. Casos com baixo desempenho são apresentados para promover desenvolvimentos futuros. / [en] This dissertation presents the development and results of this M.Sc. project, whose multidisciplinary objective was to develop a methodology and a tool to segment the liver, its vessels and subregions from abdominal computed tomography images, using procedures of automatic image segmentation and three-dimensional data visualization. The suggested methodology first segments the liver using an approach based on implicit deformable models, called level sets, estimating its parameters using genetic algorithms. Initially, the liver boundary is manually set in one slice as an initial solution, and then the method automatically segments the liver in all other slices, sequentially. The vessels and nodules of the liver are then identified using both a mixture model of functions proportional to Gaussians and a region-growing segmentation method that uses hysteresis. The hepatic and portal veins are classified within the set of vessels and used in a mathematical model that finally divides the liver into the eight Couinaud subregions. The methodology was tested on 20 different exams using five different performance measures, and the results obtained confirm the potential of the method. The cases in which the method performed poorly are also discussed in order to motivate further research.
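The hysteresis-based region growing mentioned above is conceptually close to hysteresis thresholding; a minimal illustration using scikit-image (the thresholds and preprocessing are placeholders, not the dissertation's pipeline) could be:

```python
import numpy as np
from skimage import filters, measure

def segment_structures(ct_slice, low=0.4, high=0.7):
    """Keep pixels above `high`, plus pixels above `low` that are connected
    to them: the hysteresis idea. Thresholds here are placeholders."""
    smoothed = filters.gaussian(ct_slice, sigma=1.0)     # denoise first
    mask = filters.apply_hysteresis_threshold(smoothed, low, high)
    return measure.label(mask)                           # connected components

img = np.random.rand(128, 128)                           # stand-in for a CT slice
print(segment_structures(img).max(), "components found")
```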
446

Développement d’une nouvelle méthode de réduction de modèle basée sur les hypersurfaces NURBS (Non-Uniform Rational B-Splines) / Development of a new metamodelling method based on NURBS (Non-Uniform Rational B-Splines) hypersurfaces

Audoux, Yohann 14 June 2019 (has links)
Malgré des décennies d’incontestables progrès dans le domaine des sciences informatiques, un certain nombre de problèmes restent difficiles à traiter en raison, soit de leur complexité numérique (problème d’optimisation, …), soit de contraintes spécifiques telle que la nécessité de traitement en temps réel (réalité virtuelle, augmentée, …). Dans ce contexte, il existe des méthodes de réduction de modèle qui permettent de réduire les temps de calcul de simulations multi-champs et/ou multi-échelles complexes. Le processus de réduction de modèle consiste à paramétrer un métamodèle qui requiert moins de ressources pour être évalué que le modèle complexe duquel il a été obtenu, tout en garantissant une certaine précision. Les méthodes actuelles nécessitent, en général, soit une expertise de l’utilisateur, soit un grand nombre de choix arbitraires de sa part. De plus, elles sont bien souvent adaptées à une application spécifique mais difficilement transposable à d’autres domaines. L’objectif de notre approche est donc d’obtenir, s'il n'est pas le meilleur, un bon métamodèle quel que soit le problème considéré. La stratégie développée s’appuie sur l’utilisation des hypersurfaces NURBS et se démarque des approches existantes par l’absence d’hypothèses simplificatrices sur les paramètres de celles-ci. Pour ce faire, une méta heuristique (de type algorithme génétique), capable de traiter des problèmes d’optimisation dont le nombre de variables n’est pas constant, permet de déterminer automatiquement l’ensemble des paramètres de l’hypersurface sans transférer la complexité des choix à l’utilisateur. / Despite undeniable progress achieved in computer science over the last decades, some problems remain intractable, either because of their numerical complexity (optimisation problems, …) or because they are subject to specific constraints such as real-time processing (virtual and augmented reality, …). In this context, metamodelling techniques can minimise the computational effort needed to perform complex multi-field and/or multi-scale simulations. The metamodelling process consists of setting up a metamodel that needs fewer resources to be evaluated than the complex model it is extracted from, while guaranteeing a minimal accuracy. Current methods generally require either the user's expertise or a large number of arbitrary choices on the user's part. Moreover, they are often tailored to a specific application and can hardly be transposed to other fields. Our approach therefore aims at obtaining a metamodel that, even if it is not the best one, remains good for whatever problem is at hand. The developed strategy relies on NURBS hypersurfaces and stands out from existing approaches by avoiding simplifying assumptions on the hypersurface parameters. To do so, a metaheuristic (a genetic algorithm) able to deal with optimisation problems defined over a variable number of optimisation variables automatically sets all the hypersurface parameters, so that the complexity of these choices is not transferred to the user.
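Fitting a full NURBS hypersurface with a variable number of parameters is beyond a short snippet, but the underlying metamodelling idea, replacing an expensive model with a cheap fitted surface, can be sketched with a plain B-spline surface in SciPy; this is a deliberate simplification (no rational weights, fixed degrees and smoothing), not the thesis's method:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(3)

def expensive_model(x, y):
    """Stand-in for a costly multi-field simulation."""
    return np.sin(3 * x) * np.cos(2 * y) + 0.1 * x * y

# Sample the expensive model at scattered design points
xs, ys = rng.uniform(-1, 1, (2, 200))
zs = expensive_model(xs, ys)

# Fit a smooth B-spline surface as the metamodel (degrees/smoothing assumed)
meta = SmoothBivariateSpline(xs, ys, zs, kx=3, ky=3, s=0.05)

# Cheap metamodel evaluation vs. the true model at a new point
pt = (0.3, -0.2)
print(meta(*pt)[0, 0], expensive_model(*pt))
```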
447

Hybrid non-linear model predictive control of a run-of-mine ore grinding mill circuit

Botha, Stefan January 2018 (has links)
A run-of-mine (ROM) ore milling circuit is primarily used to grind incoming ore containing precious metals to a powder fine enough to liberate the valuable minerals contained therein. The ground ore has a product particle size specification that is set by the downstream separation unit. A ROM ore milling circuit typically consists of a mill, sump and classifier (most commonly a hydrocyclone). These circuits are difficult to control because of unmeasurable process outputs, non-linearities, time delays, large unmeasured disturbances and complex models with modelling uncertainties. The ROM ore milling circuit should be controlled to meet the final product quality specification, but throughput should also be maximised. This further complicates ROM ore grinding mill circuit control, since an inverse non-linear relationship exists between quality and throughput. ROM ore grinding mill circuit control is constantly evolving to find the best control method, with peripheral tools, to control the plant. Although many studies have been conducted, more are continually undertaken, since the controller designs are usually based on various assumptions and the required measurements in the grinding mill circuits are often unavailable. / To improve controller performance, many studies investigated the inclusion of additional manipulated variables (MVs) in the controller formulation to help control process disturbances, or to provide some form of functional control. Model predictive control (MPC) is considered one of the best advanced process control (APC) techniques, and linear MPC controllers have been implemented on grinding mill circuits, while various other advanced controllers have been investigated and tested in simulation. Because of the complexity of grinding mill circuits, non-linear MPC (NMPC) controllers have achieved better results in simulations where a wider operating region is required. In the search for additional MVs, some researchers have considered including the discrete dynamics as part of the controller formulation instead of segregating them from the APC or base-layer controllers. The discrete dynamics are typically controlled using a layered approach. Discrete dynamics are on/off elements; in the case of a closed-loop grinding mill circuit, the discrete elements can be on/off activation variables for feed conveyor belts to select which stockpile is used, selecting whether a secondary grinding stage should be active or not, and switching hydrocyclones in a hydrocyclone cluster. Discrete dynamics are added directly to the APC controllers by using hybrid model predictive control (HMPC). HMPC controllers have been designed for grinding mill circuits, but none of them has considered the switching of hydrocyclones as an additional MV, and they only include linear dynamics for the continuous elements. This study addresses this gap by implementing a hybrid NMPC (HNMPC) controller that can switch the hydrocyclones in a cluster. / A commonly used continuous-time grinding mill circuit model with one hydrocyclone is adapted to contain a cluster of hydrocyclones, resulting in a hybrid model. The model parameters are refitted to ensure that the initial design steady-state conditions for the model are still valid with the cluster. The novel contribution of this research is the design of an HNMPC controller using a cluster of hydrocyclones as an additional MV. The HNMPC controller is formulated using the complete non-linear hybrid model and a genetic algorithm (GA) as the solver.
An NMPC controller is also designed and implemented as the base case controller in order to evaluate the HNMPC controller’s performance. To further illustrate the functional control benefits of including the hydrocyclone cluster as an MV, a linear optimisation objective was added to the HNMPC to increase the grinding circuit throughput, while maintaining the quality specification. The results show that the HNMPC controller outperforms the NMPC one in terms of setpoint tracking, disturbance rejection, and process optimisation objectives. The GA is shown to be a good solver for HNMPC, resulting in a robust controller that can still control the plant even when state noise is added to the simulation. / Dissertation (MEng)--University of Pretoria, 2018. / National Research Foundation (DAAD-NRF) / Electrical, Electronic and Computer Engineering / MEng / Unrestricted
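A rough sketch of the HNMPC idea, a GA searching over a mixed continuous/discrete decision vector across a prediction horizon, is given below; the toy plant model, horizon length, and penalty weights are invented for illustration and bear no relation to the dissertation's grinding mill model:

```python
import numpy as np

rng = np.random.default_rng(4)

def plant(x, u, on):
    """Toy one-state surrogate for the mill circuit: `u` is a continuous
    input, `on` a 0/1 switching decision (e.g. an extra hydrocyclone)."""
    return 0.9 * x + 0.5 * u - 0.3 * on * x

def horizon_cost(x0, decisions, setpoint=1.0, horizon=5, switch_penalty=0.05):
    u = decisions[:horizon]                       # continuous moves
    on = (decisions[horizon:] > 0.5).astype(int)  # discrete on/off moves
    x, cost = x0, 0.0
    for k in range(horizon):
        x = plant(x, u[k], on[k])
        cost += (x - setpoint) ** 2 + switch_penalty * on[k]
    return cost

def ga_minimise(obj, dim, pop=80, gens=150):
    """Tiny GA: truncation selection plus Gaussian mutation, with elitism."""
    P = rng.uniform(0, 1, (pop, dim))
    for _ in range(gens):
        f = np.array([obj(p) for p in P])
        elite = P[np.argsort(f)[: pop // 4]]
        P = elite[rng.integers(len(elite), size=pop)] + rng.normal(0, 0.1, (pop, dim))
        P[0] = elite[0]                           # keep the best unmutated
    return P[0]

best = ga_minimise(lambda d: horizon_cost(0.0, d), dim=10)  # 5 continuous + 5 binary
print(horizon_cost(0.0, best))
```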
448

Relay Selection and Resource Allocation in One-Way and Two-Way Cognitive Relay Networks

Alsharoa, Ahmad M. 08 May 2013 (has links)
In this work, the problem of relay selection and resource (power) allocation in one-way and two-way cognitive relay networks using half-duplex channels with different relaying protocols is investigated. Optimization problems were formulated for both single and multiple relay selection that maximize the sum rate of the secondary network without degrading the quality of service of the primary network, by respecting a tolerated interference threshold. Single relay selection and optimal power allocation for two-way relaying cognitive radio networks using decode-and-forward and amplify-and-forward protocols were studied. Dual decomposition and subgradient methods were used to find the optimal power allocation. For the two-way relaying technique, the exchange of two different messages between two transceivers takes place in two time slots: in the first slot, the transceivers transmit their signals simultaneously to the relay, and during the second slot the relay broadcasts its signal to the terminals. Moreover, an improvement in both spectral and energy efficiency can be achieved compared with the one-way relaying technique. As an extension, multiple relay selection for both one-way and two-way relaying under a cognitive radio scenario using amplify-and-forward was discussed. A strong optimization tool based on genetic and iterative algorithms was employed to solve the formulated optimization problems for both single and multiple relay selection, where discrete relay power levels were considered. Simulation results show that the practical and low-complexity heuristic approaches achieve almost the same performance as the optimal relay selection schemes, either with discrete or continuous power distributions, while providing considerable savings in terms of computational complexity.
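The dual decomposition with a subgradient update mentioned above can be illustrated on a simplified power allocation problem; the channel gains, interference threshold, and step size below are assumed values, and the closed-form inner solution follows from the standard water-filling-style optimality condition:

```python
import numpy as np

rng = np.random.default_rng(5)
g = rng.uniform(0.5, 2.0, 4)    # secondary link channel gains (assumed)
h = rng.uniform(0.1, 0.5, 4)    # gains toward the primary receiver (assumed)
I_th, p_max = 1.0, 2.0          # interference threshold, per-node power cap

# Maximise sum(log2(1 + p*g)) subject to sum(p*h) <= I_th, 0 <= p <= p_max.
lam, step = 1.0, 0.05
for _ in range(500):
    # Inner problem for fixed lambda: p_i = 1/(lam*h_i*ln 2) - 1/g_i, clipped
    p = np.clip(1.0 / (lam * h * np.log(2)) - 1.0 / g, 0.0, p_max)
    # Subgradient step on the dual variable of the interference constraint
    lam = max(1e-6, lam + step * (np.dot(p, h) - I_th))

print("powers:", p.round(3), "interference:", np.dot(p, h).round(3))
```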
449

Prediction of Electricity Price Quotation Data of Prioritized Clean Energy Power Generation of Power Plants in The Buyer's Market

Li, Jiasen 05 October 2021 (has links)
No description available.
450

Robust facility location of container clinics : a South African application

Karsten, Carike January 2021 (has links)
Health care, and especially access to health care, has always been a critical metric for countries. In 2017, South Africa spent 9% of its GDP on health care. Despite the GDP health care allocation being 5% higher than recommended by the World Health Organisation for a country of its socio-economic status, South Africa's health status is poor compared to similar countries. In 1994, South Africa implemented a health care policy to make health care accessible to all South Africans. A primary health care facility within 5 km of the place of residence is deemed accessible. There is still a significant gap between the actual and desired accessibility, especially for lower-income communities. There is a need to improve access to public health care for all South Africans, and cost-effective, sustainable solutions are required. Therefore, an opportunity was identified to investigate the location of low-cost container clinics in lower-income communities. This report uses robust optimisation and goal programming to find robust sites for cost-effective container clinics over multiple years in an uncertain environment, using multiple future city development scenarios. The study area of the report includes three metro municipalities (City of Tshwane, City of Johannesburg, and City of Ekurhuleni) in Gauteng, South Africa. Three future development scenarios were created for this study using a synthetic population and urban growth simulation model developed by the CSIR. The model provided the population distribution from 2018 to 2030 for all three scenarios, with household attribute tables as an output. Household attributes that have a causal relationship with health care demand were investigated during the literature review. Based on the literature and the available household attributes, four attributes were selected to forecast health care demand: household income, the number of children in the household, the household size, and the distance to the nearest clinic. Using associative forecasting, primary health care demand was forecast from 2018 to 2030. These forecasts were used as input into the facility location models. A p-median facility location model was developed and implemented in Python. Since facility location problems are classified as NP-hard, heuristics and metaheuristics were investigated to speed up the problem solving. A GA was selected as the metaheuristic to determine a suitable configuration of facilities for each scenario. The model determined good locations for clinics from a set of candidate locations, as well as a good year in which to open each clinic. These decisions are made by minimising three variables: the total distance travelled by households to their nearest clinics, the total distance from the selected distribution centre to the open clinics, and the total building cost. An accessibility target of 90% was added to the model to ensure that at least 90% of households are within 5 km of the nearest clinic within the first five years. In these models, operating costs were not included; therefore the results are skewed, with most of the clinics being opened in the first year, when it is cheapest, since there is no penalty for opening a clinic before it is needed. The exclusion of operating costs is a shortcoming to address in future work. A goal programming model was developed with the variables of the individual scenarios as the goals.
The goal programming model was implemented in Python and used to determine a robust configuration of where, and in what year, to open container clinics. A difference of 25% was set as the upper limit for the gap between the robust configuration variables and the good (acceptable) variables of the individual scenarios, since the scenarios investigated are very different. This ensured that the robust solution would perform well for any of the three scenarios. The model was able to find locations that provided a relatively good solution for all the scenarios. This came with a cost increase, but that is a trade-off that must be made when dealing with uncertainty. This model is a proof of concept to bridge the gap between urban planning with multiple development scenarios and facility location, more specifically robust facility location. The biggest gain was achieved by constructing and placing the container clinics in the shortest space of time, because the 90% accessibility requirement can then be addressed cost-effectively without an operating cost penalty; this is unfortunately not possible in reality due to budget constraints. An accessibility analysis was conducted to investigate the impact of the accessibility percentage on the variable values, and to test the model in a scenario more closely resembling the real world by adding a budget constraint. The time limit of the accessibility requirement was removed; in this case, a gradual improvement in accessibility over the 12 years was observed, owing to the gradual opening of clinics over the years. Based on the analysis results, it was concluded that the model is sensitive to changes in parameters and can be used for different scenarios. / Dissertation (MEng (Industrial Engineering))--University of Pretoria, 2021. / Industrial and Systems Engineering / MEng (Industrial Engineering) / Unrestricted
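As a companion to the methodology described, a toy p-median objective with a greedy heuristic standing in for the GA (all coordinates, demand weights, and the candidate set below are synthetic) might look like this:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: household locations with demand weights, candidate clinic sites
households = rng.uniform(0, 50, (300, 2))   # km coordinates (synthetic)
demand = rng.integers(1, 10, 300)           # forecast visits per household
candidates = rng.uniform(0, 50, (40, 2))
p = 6                                       # number of clinics to open

# Precompute household-to-candidate distances
dist = np.linalg.norm(households[:, None] - candidates[None], axis=2)

def weighted_cost(open_idx):
    """p-median objective: demand-weighted distance to the nearest open site."""
    return float(np.sum(demand * dist[:, open_idx].min(axis=1)))

# Greedy add: a simple stand-in for the GA used in the dissertation
open_sites = []
for _ in range(p):
    rest = [j for j in range(len(candidates)) if j not in open_sites]
    open_sites.append(min(rest, key=lambda j: weighted_cost(open_sites + [j])))

coverage = np.mean(dist[:, open_sites].min(axis=1) <= 5.0)
print(open_sites, f"households within 5 km: {coverage:.0%}")
```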
