  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Multi-objective optimization using Genetic Algorithms

Amouzgar, Kaveh January 2012 (has links)
In this thesis, the basic principles and concepts of single- and multi-objective Genetic Algorithms (GA) are reviewed. Two algorithms, one for single-objective and the other for multi-objective problems, which are believed to be more efficient, are described in detail. The algorithms are coded in MATLAB and applied to several test functions, and the results are compared with existing solutions in the literature, showing promising performance. The obtained Pareto fronts closely match the true Pareto fronts, with a good spread of solutions throughout the optimal region. Constraint-handling techniques are studied and applied in the two algorithms; constrained benchmarks are optimized, and the outcomes show the ability of the algorithms to maintain solutions across the entire Pareto-optimal region. Finally, a hybrid method based on the combination of the two algorithms is introduced and its performance is discussed. It is concluded that no significant strength is observed in the approach and that more research is required on this topic. For further investigation of the performance of the proposed techniques, implementation on real-world engineering applications is recommended.
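The Pareto fronts discussed in this abstract rest on the notion of dominance between objective vectors. As an illustrative sketch (not the thesis's MATLAB code), a minimal dominance check and non-dominated filter for minimization problems might look like:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

For example, `pareto_front([(1, 5), (2, 2), (3, 1), (4, 4)])` drops only `(4, 4)`, which `(2, 2)` dominates.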
12

Modeling and design of 3D Imager IC

Viswanathan, Vijayaragavan 06 September 2012 (has links) (PDF)
CMOS image sensors based on the active pixel sensor have contributed considerably to the imaging market and to research interest over the past decade. Furthermore, technology advancement has provided the capability to integrate more and more functionality into a single chip across multiple layers, leading to a new paradigm: 3D integration. The CMOS image sensor is one application that could exploit a 3D stacked architecture to achieve dedicated technologies in different layers, wire-length reduction, smaller area, and improved performance. This research work focuses mainly on the early stages of design-space exploration using a hierarchical approach and aims at reducing time to market. The imager is investigated from a top-down design perspective. A methodical analysis of the imager is performed to achieve a high level of flexibility and modularity, and reusable models are developed to explore early design choices throughout the hierarchy. A Pareto-front methodology (providing trade-off solutions) is then applied to explore the operating range of each individual block at system level, helping the designer make his design choices. Furthermore, the thermal issues, which are aggravated in a 3D stacked chip, and their effect on imager performance are studied. A system-level thermal model is built to investigate the behavior of the imager pixel matrix and to simulate it at high speed with acceptable accuracy compared to electrical simulations. The modular nature of the model makes simulations with future matrix extensions straightforward. Validation of the thermal model against electrical simulations is discussed. Finally, an integrated design flow is developed to perform 3D floorplanning and thermal analysis of the imager pixel matrix.
13

Algoritmo híbrido multi-objetivo para predição de estrutura terciária de proteínas / Multi-objective approach to protein tertiary structure prediction

Rodrigo Antonio Faccioli 12 April 2007 (has links)
Many multi-objective optimization problems use evolutionary algorithms to find the best solutions, and many of these algorithms employ Pareto fronts as the strategy for obtaining them. However, as reported in the literature, the Pareto front is limited to problems with up to three objectives, which can make its use unsatisfactory for problems with four or more. Moreover, the alternatives that have been proposed often eliminate the evolutionary algorithms along with the fronts they rely on. Yet the characteristics of evolutionary algorithms qualify them for optimization problems, as is well established in the literature, so they should not be discarded merely because of the Pareto front's limitation. This work therefore investigated replacing the Pareto front with fuzzy logic while retaining the evolutionary algorithm. The problem chosen to study this replacement was protein tertiary structure prediction, which is both an open problem and of great relevance to the bioinformatics field.
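The thesis replaces Pareto ranking with fuzzy logic. One generic way this can work (a hedged sketch, not the author's actual formulation; the membership shape and the min t-norm are assumptions) is to map each objective to a satisfaction degree in [0, 1] and aggregate the degrees into a single scalar fitness the evolutionary algorithm can rank by:

```python
def linear_membership(value, worst, best):
    """Map an objective value to a [0, 1] satisfaction degree:
    1 at `best`, 0 at `worst`, linear in between."""
    if worst == best:
        return 1.0
    mu = (worst - value) / (worst - best)
    return max(0.0, min(1.0, mu))  # clamp outside [worst, best]

def fuzzy_fitness(objectives, bounds):
    """Aggregate per-objective memberships with the min t-norm,
    yielding one scalar instead of a Pareto rank."""
    return min(linear_membership(v, w, b) for v, (w, b) in zip(objectives, bounds))
```

With the min t-norm, an individual is only as good as its worst-satisfied objective, which sidesteps dominance comparisons entirely.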
14

Ant colony optimisation algorithms for solving multi-objective power-aware metrics for mobile ad hoc networks

Constantinou, Demetrakis 01 July 2011 (has links)
A mobile ad hoc network (MANET) is an infrastructure-less multi-hop network in which each node communicates with other nodes directly or indirectly through intermediate nodes. Thus, all nodes in a MANET essentially function as mobile routers participating in some routing protocol required for deciding and maintaining routes. Since MANETs are infrastructure-less, self-organizing, rapidly deployable wireless networks, they are highly suitable for applications such as military tactical operations, search and rescue missions, disaster relief operations, and target tracking. Building such ad hoc networks poses a significant technical challenge because of energy constraints, specifically in relation to the application of wireless network protocols. As a result of its highly dynamic and distributed nature, the routing layer within the wireless network protocol stack presents one of the key technical challenges in MANETs. In particular, energy-efficient routing may be the most important design criterion for MANETs, since mobile nodes are powered by batteries with limited capacity and variable recharge frequency, according to application demand. In order to conserve power, it is essential that a routing protocol be designed to guarantee data delivery even should most of the nodes be asleep and not forwarding packets to other nodes. Load distribution constitutes another important approach to the optimisation of active communication energy: it enables the maximisation of the network lifetime by facilitating the avoidance of over-utilised nodes during route selection. Routing algorithms for mobile networks that attempt to optimise routes while retaining a small message overhead and maximising the network lifetime have been put forward.
However, certain of these routing protocols have proved to have a negative impact on node and network lifetimes by inadvertently over-utilising the energy resources of a small set of nodes in favour of others. The conservation of power and careful sharing of the cost of routing packets would ensure an increase in both node and network lifetimes. This thesis proposes, using an ant colony optimisation (ACO) approach, to simultaneously optimise five power-aware metrics that result in energy-efficient routes and to maximise the MANET's lifetime, while taking into consideration a realistic mobility model. By using ACO algorithms, a set of optimal solutions - the Pareto-optimal set - is found. This thesis proposes five algorithms to solve the multi-objective problem in the routing domain. The first two algorithms, namely the energy efficiency for a mobile network using a multi-objective, ant colony optimisation, multi-pheromone (EEMACOMP) algorithm and the energy efficiency for a mobile network using a multi-objective, ant colony optimisation, multi-heuristic (EEMACOMH) algorithm, are both adaptations of multi-objective ant colony optimisation (MOACO) algorithms based on the ant colony system (ACS) algorithm. The new algorithms are constructive, which means that in every iteration every ant builds a complete solution. In order to guide the transition from one state to another, the algorithms use pheromone and heuristic information. The next two algorithms, namely the energy efficiency for a mobile network using a multi-objective, MAX-MIN ant system optimisation, multi-pheromone (EEMMASMP) algorithm and the energy efficiency for a mobile network using a multi-objective, MAX-MIN ant system optimisation, multi-heuristic (EEMMASMH) algorithm, both solve the above multi-objective problem by using an adaptation of the MAX-MIN ant system optimisation algorithm.
The last algorithm implemented, namely the energy efficiency for a mobile network using a multi-objective, ant colony optimisation, multi-colony (EEMACOMC) algorithm, uses a multiple-colony ACO algorithm. From the experimental results, the final conclusions may be summarised as follows:
- Ant colony multi-objective optimisation algorithms are suitable for mobile ad hoc networks. These algorithms allow for high adaptation to frequent changes in the topology of the network.
- All five algorithms yielded substantially better results than the non-dominated sorting genetic algorithm (NSGA-II) in terms of the quality of the solution.
- The results show that EEMACOMP outperforms the other four ACO algorithms as well as the NSGA-II algorithm in terms of the number of solutions, closeness to the true Pareto front, and diversity.
/ Thesis (PhD)--University of Pretoria, 2010. / Computer Science / unrestricted
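The ACS-based algorithms above guide each ant's state transitions with pheromone and heuristic information. A generic sketch of such a transition rule (the parameter names `alpha`, `beta`, `q0` follow ACS convention and are assumptions, not taken from the thesis) might be:

```python
import random

def choose_next(current, candidates, pheromone, heuristic, alpha=1.0, beta=2.0, q0=0.9):
    """ACS-style state transition: with probability q0, greedily exploit the
    best pheromone^alpha * heuristic^beta product; otherwise sample a next
    node proportionally to that product (biased exploration)."""
    scores = {j: (pheromone[(current, j)] ** alpha) * (heuristic[(current, j)] ** beta)
              for j in candidates}
    if random.random() < q0:
        return max(scores, key=scores.get)
    total = sum(scores.values())
    r, acc = random.uniform(0, total), 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j  # floating-point fallback: last candidate
```

In a multi-pheromone variant such as EEMACOMP, each objective would maintain its own pheromone table; here a single table is used for brevity.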
15

Synergetic Attenuation of Stray Magnetic Field in Inductive Power Transfer

Lu, Ming 28 July 2017 (has links)
Significant stray magnetic field exists around the coils when charging electric vehicles (EVs) with inductive power transfer (IPT), owing to the large air gap between the transmitter and receiver. Methods for field attenuation usually introduce extra losses and reduce the efficiency. This study focuses on the synergetic attenuation of the stray magnetic field, which is optimized simultaneously with the efficiency; the optimization is realized with a Pareto front. In this dissertation, three methods are discussed for field attenuation. The first method is to tune the physical parameters of the winding, such as the inner radii, outer radii, distribution of the turns, and types of the litz wires. The second method is to add metal shields around the IPT coils, in which litz wires are used as shields to reduce the shielding losses. The third method is to control the phases of the winding currents, which avoids increasing the size and weight of the IPT coils. To attenuate the stray magnetic field by tuning the physical parameters, the conventional method is to sweep all the physical parameters in finite-element simulation. This takes thousands of simulations to derive the Pareto front and is especially time-consuming for three-dimensional simulations. This dissertation demonstrates a faster method to derive the Pareto front: the windings are replaced by lumped loops. As long as the number of turns for each loop is known, the efficiency and magnetic field are calculated directly from the permeance matrices and current-to-field matrices. The sweep of physical parameters in finite-element simulation is replaced by a sweep of the turn numbers of the lumped loops in calculation. Only tens of simulations are required in the entire procedure, and these are used to derive the matrices. An exemplary set of coils was built and tested. The efficiency from the matrix calculation is the same as the experimental measurement.
The difference for the stray magnetic field is less than 12.5%. Metal shields attenuate the stray magnetic field effectively but generate significant losses owing to the uneven distribution of the shield currents. This dissertation uses litz wires to replace the conventional plate shield or ring shield: the skin effect is eliminated, so the shield currents are uniformly distributed and the losses are reduced. The litz shields are categorized into two types, the shorted litz shield and the driven litz shield, and circuit models are derived to analyze their behaviors. The concept of the lumped-loop model is applied to derive the Pareto front of efficiency versus stray magnetic field for the coils with a litz shield. In an exemplary IPT system, coils without and with metal shields are optimized for the same efficiency. Both simulation and experimental measurement verify that the shorted litz shield has the best performance: the stray magnetic field is attenuated by 65% compared to the coils without a shield. This dissertation also introduces a method to attenuate the stray magnetic field by controlling the phases of the winding currents. The magnetic field around the coils is decomposed into a component in the axial direction and a component in the radial direction. The axial component decreases with a smaller phase difference between the windings' currents, while the radial component exhibits the opposite behavior. Because the axial component is dominant around the IPT coils, decreasing the phase difference is preferred. The dual-side-controlled converter is applied for the circuit realization: bridges with active switches are used both for the inverter on the transmitter side and for the rectifier on the receiver side. The effectiveness of this method was verified in both simulation and experiment.
Compared to the conventional series-series IPT with a 90° phase difference between the winding currents, the stray magnetic field was attenuated by up to 30% and 40% when the phase differences were 50° and 40°, respectively. Furthermore, an analytical method is investigated to calculate the proximity-effect resistance of planar coils with a ferrite plate. The objective of this method is to work together with the fast optimization that uses the lumped-loop model. The existence of the ferrite plate complicates the calculation of the magnetic field across each turn, which is critical for deriving the proximity-effect resistance. In this dissertation, the ferrite plate is replaced by mirrored turns according to the method of images, and the magnetic fields are then obtained from Ampere's law and the Biot-Savart law. Up to 200 kHz, the difference in the proximity-effect resistance between calculation and measurement is less than 15%. / Ph. D.
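The method of images used above can be illustrated with the simplest case: the on-axis field of a circular loop above an ideally permeable plate. This is a hedged sketch, not the dissertation's model; it assumes an infinite-permeability plate, for which the image loop carries the current in the same direction, and uses only the textbook on-axis Biot-Savart result:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def loop_axial_field(current, radius, z):
    """On-axis flux density (T) of a circular current loop, a distance z
    from the loop plane (standard Biot-Savart result)."""
    return MU0 * current * radius**2 / (2 * (radius**2 + z**2) ** 1.5)

def field_above_ferrite(current, radius, h, z):
    """Axial field at height z for a loop at height h above an ideal
    (mu -> infinity) ferrite plate at z = 0: the plate is replaced by a
    mirror loop at -h with the same current (method of images)."""
    return (loop_axial_field(current, radius, z - h)
            + loop_axial_field(current, radius, z + h))
```

As a sanity check, a loop lying directly on the plate (`h = 0`) coincides with its image, doubling the bare-loop field.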
16

Efficient driving of CBTC ATO operated trains

Carvajal Carreño, William January 2017 (has links)
Energy consumption reduction is one of the priorities of metro operators, due to its financial cost and environmental impact. The new signalling system, Communications-Based Train Control (CBTC), is being installed in new and upgraded metro lines to increase transportation capacity, but its continuous-communication feature also makes it possible to improve the energy performance of traffic operation by updating the control command of the Automatic Train Operation (ATO) system at any point of the route. The present research addresses two main topics. The first is the design of efficient CBTC speed profiles for an undisturbed train trajectory between two stations. The second takes into account the interaction between two consecutive trains under abnormal traffic conditions and proposes a tracking algorithm to save energy. In the first part of the research, an off-line methodology to design optimal speed profiles for CBTC-ATO controlled trains is proposed. The methodology is based on a new multi-objective optimisation algorithm named NSGA-II-F, which is used to design speed profiles in such a way that they can cover all the possible efficient solutions in a pseudo-Pareto front. The pseudo-Pareto front is built by using dominated solutions to make available a complete set of feasible situations in a driving scenario. The uncertainty in the passenger load is modelled as a fuzzy parameter. Each of the resulting speed profiles is obtained as a set of parameters that can be sent to the ATO equipment to perform the driving during operation. The proposed optimisation algorithm makes use of detailed simulation of the train motion; therefore, a simulator of the train motion has been developed, including detailed models of the specific ATO equipment, the ATP constraints, the traction equipment, the train dynamics, and the track. A subsequent analysis considers the effect on the design of including the regenerative energy flow between the train and the surrounding railway system.
The second part of the research focuses on the proposal and validation of a fuzzy tracking algorithm for controlling the motion of two consecutive trains under disturbed conditions. A disturbed condition is understood as a change in the nominal driving command of a leading train and its consequences for the subsequent trains. When a train runs close enough to the preceding one, a tracking algorithm is triggered to control the distance between the two trains. The following train receives the LMA (limit of movement authority) via radio, which is updated periodically as the preceding train runs. The aim of the proposed algorithm is to take actions in such a way that the following train can track the leading train while meeting the safety requirements and applying an energy-saving driving technique (the coasting command). The uncertainty in the variations of the speed of the preceding train is modelled as a fuzzy quantity. The proposed algorithm is based on the application of coasting commands when possible, substituting traction/coasting cycles for traction/braking cycles and hence saving energy. Both algorithms were tested and validated using a detailed simulation program. The NSGA-II-F algorithm provided additional energy savings when compared to fixed-block distance-to-go configurations, and gave a more even distribution of the solutions. The fuzzy tracking algorithm provides energy savings with a minor impact on running times while improving comfort, because of the reduction of the inefficient traction/braking cycles. / QC 20170216
17

Optimisation avec prise en compte des incertitudes dans la mise en forme par hydroformage / Optimization taking uncertainties into account in the hydroforming process

Ben Abdessalem, Mohamed Anis 08 June 2011 (has links)
The hydroforming process is widely used in the automotive and aerospace industries. Deterministic design optimization has been used to control and optimize this process over the last decade. However, under realistic conditions, different parameters such as material properties, geometric dimensions, and loads exhibit unavoidable scatter that can affect the stability and reliability of the process. It is therefore necessary to introduce these uncertainties into the parameters and to consider their variability. The main objective of this contribution is to evaluate the reliability and the optimization of the hydroforming process in the presence of uncertainties. The first part of this thesis proposes a general approach to evaluate the spatial probability of failure of the hydroforming process, mainly in the critical regions; with this approach it is possible to avoid plastic instabilities during a hydroforming operation. The method is based on Monte Carlo simulation coupled with metamodels, with the forming limit curve used as the failure criterion for potential plastic instabilities. The second part of this thesis is the optimization of the hydroforming process under uncertainties. Using illustrative examples, it is shown that the probabilistic approach is an efficient method to optimize the process, decreasing the probability of failure and making the process insensitive, or less sensitive, to the sources of uncertainty. The difficulty lies in the treatment of the reliability constraints, which require a large computational effort and involve classical numerical problems such as convergence, accuracy, and stability. To overcome this problem, the response surface method coupled with Monte Carlo simulations is used to evaluate the probabilistic constraints. The probabilistic approach can ensure a stable and reliable process and significantly decreases the percentage of defective parts. In this part, two methods are used: reliability-based design optimization and robust optimization. The last part consists of optimizing the process with a multi-objective (MO) strategy taking the uncertainties into account. The hydroforming process is an MO problem that consists of optimizing more than one performance measure simultaneously. The main goal is to study the evolution of the Pareto front when uncertainties affect the objective functions or the parameters. In this part, a new methodology is proposed which presents the solutions in a new space and classifies them according to their probability of failure. This classification makes it possible to identify the best solution and provides an idea of the reliability of each solution.
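The Monte Carlo-plus-metamodel approach described above can be sketched generically. This is an illustrative stand-in, not the thesis's code: the surrogate, the input sampler, and the scalar failure limit (standing in for the forming limit curve margin) are all hypothetical placeholders:

```python
import random

def failure_probability(surrogate, draw_inputs, n=50_000, limit=1.0):
    """Monte Carlo estimate of P(failure): draw random process parameters,
    evaluate a cheap surrogate (metamodel) of the forming response, and
    count how often the response exceeds the failure limit."""
    failures = sum(1 for _ in range(n) if surrogate(draw_inputs()) > limit)
    return failures / n
```

With a toy identity surrogate and standard-normal inputs, a failure limit of 0 should yield a probability near 0.5, which makes the estimator easy to sanity-check before plugging in a real metamodel.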
18

Procedimento de projeto de embarcações trimarã por otimização multiobjetivo. / Design procedure for trimaran vessels using multiobjective optimization.

Ribeiro, Rafael Maximo Carreira 04 March 2015 (has links)
This work addresses the development of a synthesis model for the conceptual design of a fast trimaran passenger vessel, aiming to understand the mechanisms that govern the design of this type of ship through comparative analyses (performance attributes) of different design solutions, as a function of the parameters chosen to represent them (design variables). Two synthesis models were developed, in different commercial software packages. Each model generates the hull surface from systematic hull series and calculates the forward resistance as the sum of viscous and wave components. The viscous component is estimated by the flat-plate approximation, corrected by a form factor, and the wave component is calculated according to thin-ship theory; a preliminary estimate of the added resistance in waves is also made. The general arrangement of the main spaces and subsystems was parametrized based on the arrangement of similar vessels, and passenger comfort on board is calculated according to standards established by international classification societies. The design variables chosen were the length of the center hull, the ratios between the remaining main dimensions of each hull (allowing their calculation), and coefficients defining the relative positioning of the center and side hulls; changing the length therefore scales the hull while keeping all proportions. The advantage sought with this approach is the elimination of implicit geometric constraints, since they become embedded directly in the explicit constraints that define the exploration range of each design variable. These models were integrated with mono- and multi-objective optimization procedures, based on different versions of the genetic algorithm, and applied to an example design problem. The merit (objective) functions chosen for this problem were the power required by the vessel to sail at the design speed and the operational availability of the vessel, calculated from the passenger comfort indices. It was found that the parameters with the greatest influence on the forward resistance, and hence on the required power, and on passenger comfort are the center hull length and its length-to-beam ratio. The relative positioning of the hulls is extremely important for the wave resistance generated by the vessel, but no clear trend could be observed regarding positions that favor either the resistance or the seakeeping behavior, because the optimum positioning for each objective depends both on the Froude number and on the remaining dimensions of the vessel.
19

Framework multiobjetivo de ranqueamento e comparação de algoritmos de predição de estrutura terciária de proteínas / Multiobjective framework for ranking and comparison of tertiary protein structure prediction algorithms

Marciano, Michelle Duarte 05 December 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Determining the tertiary structure of a protein is very important, since this is the structure that determines the protein's function in living beings. Many algorithms attempt this prediction, but none of them does so with one hundred percent accuracy, the prediction being an NP-complete problem. Even though they cannot yet predict the tertiary structure of proteins with total precision, these algorithms are already used in areas such as pharmacology and are extremely important. This project presents a multiobjective framework for the classification and ranking of these algorithms, thereby allowing a comparison among them. The goal is to help improve research in the area, whether for individual algorithms or for research groups in the bioinformatics field.
20

Hybrid Particle Swarm Optimization Algorithm For Obtaining Pareto Front Of Discrete Time-cost Trade-off Problem

Aminbakhsh, Saman 01 January 2013 (has links) (PDF)
In pursuit of decreasing costs, both the client and the contractor strive to speed up the construction project. However, accelerating the project schedule imposes additional cost and is profitable only up to a certain limit. The analysis of this trade-off between duration and cost, paramount for construction management, is known as time-cost trade-off (TCT) optimization. The inadequacies of existing commercial software packages for such analyses, together with the importance of discretization, motivated the development of different paradigms of particle swarm optimizers (PSO) for three extensions of discrete TCT problems (DTCTPs). A sole-PSO algorithm for the concomitant minimization of time and cost is proposed, which requires only minimal adjustments to shift focus to the completion-deadline problem. A hybrid model is also developed to solve the time-cost curve extension of DTCTPs. Engaging novel principles for the evaluation of cost slopes and of pbest/gbest positions, the hybrid SAM-PSO model combines the complementary strengths of overhauled versions of the Siemens Approximation Method (SAM) and the PSO algorithm. The effectiveness and efficiency of the proposed algorithms are validated on instances derived from the literature. Throughout the computational experiments, a mixed-integer programming technique is implemented to introduce the optimal non-dominated fronts of two specific benchmark problems for the first time in the literature. Another chief contribution of this thesis is the potency of the SAM-PSO model in locating the entire Pareto fronts of the tested instances within acceptable time frames and with reasonable deviations from the optima. Possible further improvements and applications of the SAM-PSO model are suggested in the conclusion.
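The pbest/gbest mechanism on which the SAM-PSO hybrid builds is the standard PSO update. A minimal one-dimensional sketch of a single synchronous step (generic canonical PSO, not the thesis's discrete variant; the coefficient values are conventional defaults, not taken from the thesis):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One synchronous PSO update: each particle's new velocity blends
    inertia (w), attraction to its personal best (c1), and attraction
    to the swarm's global best (c2); positions then move by the velocity."""
    for i in range(len(positions)):
        r1, r2 = random.random(), random.random()
        velocities[i] = (w * velocities[i]
                         + c1 * r1 * (pbest[i] - positions[i])
                         + c2 * r2 * (gbest - positions[i]))
        positions[i] += velocities[i]
    return positions, velocities
```

A discrete TCT variant would replace the continuous position with an index into each activity's mode list, but the velocity bookkeeping follows the same pattern.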
