101 |
Security-driven Design Optimization of Mixed Cryptographic Implementations in Distributed, Reconfigurable, and Heterogeneous Embedded Systems
Nam, HyunSuk, January 2017 (has links)
Distributed heterogeneous embedded systems are increasingly prevalent in numerous applications, including automotive, avionics, smart and connected cities, the Internet of Things, etc. With pervasive network access within these systems, security is a critical design concern. This dissertation presents a modeling and optimization framework for distributed, reconfigurable, and heterogeneous embedded systems. Distributed embedded systems consist of numerous interconnected embedded devices, each composed of different computing resources, such as single-core processors, asymmetric multicore processors, field-programmable gate arrays (FPGAs), and various combinations thereof.
A dataflow-based modeling framework for streaming applications integrates models for computational latency, mixed cryptographic implementations for inter-task and intra-task communication, security levels, communication latency, and power consumption. For the security model, we present a level-based model of cryptographic algorithms using mixed cryptographic implementations, including both symmetric and asymmetric implementations. We utilize a multi-objective genetic optimization algorithm to optimize security and energy consumption subject to latency and minimum security level constraints. The presented methodology is evaluated using a video-based object detection and tracking application and several synthetic benchmarks representing various application types. Experimental results for these design and optimization frameworks demonstrate the benefits of the mixed cryptographic security model compared to single-algorithm alternatives. We further consider several distributed heterogeneous embedded system architectures.
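The optimization step described above can be sketched, very loosely, as a genetic search over cipher assignments. The cipher table, the link count, the energy weights, and all GA parameters below are invented for illustration; this is not the dissertation's model, which is multi-objective and latency-constrained.

```python
import random

# Hypothetical cipher table: (name, security level, energy cost per message).
# Names, levels, and costs are illustrative assumptions, not values from the text.
CIPHERS = [("RC4", 1, 1.0), ("AES-128", 3, 2.5), ("AES-256", 4, 4.0), ("RSA-2048", 5, 9.0)]
N_LINKS = 6      # inter-task communication links to protect
MIN_LEVEL = 2    # minimum acceptable security level per link

def fitness(genome):
    """Reward total security level, penalize energy use and constraint violations."""
    sec = sum(CIPHERS[g][1] for g in genome)
    energy = sum(CIPHERS[g][2] for g in genome)
    penalty = sum(100 for g in genome if CIPHERS[g][1] < MIN_LEVEL)
    return sec - 0.5 * energy - penalty

def evolve(pop_size=40, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(CIPHERS)) for _ in range(N_LINKS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                 # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, N_LINKS)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                   # point mutation
                child[rng.randrange(N_LINKS)] = rng.randrange(len(CIPHERS))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

With the penalty dominating the scale, every surviving solution satisfies the minimum-security constraint, and the energy term then steers the choice among the compliant ciphers.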
|
102 |
An Evolutionary Method for Complementary Cell Suppression
Ditrich, Eric, 01 January 2010 (has links)
As privacy concerns become more important, effective and efficient security techniques will become critical to those charged with the protection of sensitive information. Agencies that disseminate numerical data encounter a common disclosure control problem called the complementary cell suppression problem. In this problem, cell values that are considered sensitive in the statistical table must be suppressed before the table is made public. However, suppressing only these cells may not provide adequate protection, since their values may be inferred from available marginal subtotals. To ensure that the values of the sensitive cells cannot be estimated within a specified degree of precision, additional non-sensitive cells, called complementary cells, must also be suppressed. Since suppression of non-sensitive cells diminishes the utility of the released data, the objective in the complementary cell suppression problem is to minimize the information lost due to complementary suppression while guaranteeing that the sensitive cells are adequately protected. The resulting constrained optimization problem is known to be NP-hard and has been a major focus of research in statistical data security.
Several heuristic methods have been developed to find good solutions for the complementary cell suppression problem. More recently, genetic algorithms have been used to improve upon these solutions. A problem with these GA-based approaches is that the vast majority of the solutions produced do not protect the sensitive cells. This is because the genetic operators used do not maintain the associations between cells that provide the protection. Consequently, the GA has to include an additional procedure for repairing the solutions. This dissertation details an improved GA-based method for the complementary cell suppression problem that addresses this limitation by designing more effective genetic operators. Specifically, it mitigated the problem of chromosomal repair by developing a crossover operator that maintains the necessary associations. The study also designed an improved mutation operator that exploits domain knowledge to increase the probability of finding good-quality solutions. The proposed GA was evaluated by comparing it to extant methods based on the quality of its evolved solutions and its computational efficiency.
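The inference threat that motivates complementary suppression can be seen in a toy one-row example (all numbers invented). With only the sensitive cell suppressed, the published row total reveals its value exactly; suppressing one complementary cell leaves only an interval.

```python
# A published table row with its marginal total; None marks a suppressed cell.
row = [12, None, 7, 30]
row_total = 58

# With only the sensitive cell suppressed, it is recovered exactly from the margin.
known = sum(v for v in row if v is not None)
inferred = row_total - known              # 58 - 49 = 9: protection has failed

# Suppressing a complementary (non-sensitive) cell widens this to an interval.
row_protected = [12, None, None, 30]
known2 = sum(v for v in row_protected if v is not None)
slack = row_total - known2                # the two suppressed cells share 16
bounds = (0, slack)                       # each suppressed value lies in [0, 16]
```

An attacker can still narrow the interval using column totals, which is why choosing complements that hold against all marginal equations, at minimum information loss, is the hard part.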
|
103 |
An evolutionary algorithm for the constrained forest problem
Queern, John, 01 January 2013 (has links)
Given an undirected edge-weighted graph G and a positive integer m, the Constrained Forest Problem (CFP) seeks a lowest-cost (or minimum-weight) forest F which spans G while satisfying the requirement that each tree in F contain at least m vertices. This problem has been shown to be NP-hard for values of m greater than three, giving rise to a number of approximation strategies for finding reasonable m-forest solutions. This research presents a new genetic algorithm (GA) which can consistently find equal-or-better solutions to the problem when compared to non-genetic alternatives. This GA is unique in that it uses chromosomes which are actual candidate solutions (m-forests) and performs genetic operations (random creation, selection, recombination, and mutation) on these candidate solutions. Experiments were run using 180 different GA configurations on 50 benchmark graphs to determine which operators and techniques would be most successful in solving the m-forest problem. The "heaviest edge first" or HEF algorithm run against the minimum spanning tree (MST) of a graph was used as a performance metric. Previously, the HEF(MST) algorithm had been shown to produce the best results on m-forest problems. When the GA was able to find better results than HEF(MST) on the same problem instance, this was considered a GA success. Since the GA's initial population included heuristic candidate solutions such as HEF(MST), the GA never did worse than the best of these. GA solution quality did vary, however, often finding several different better-than-HEF(MST) solutions, illustrating the stochastic nature of the process. Based on data collected from the 9000 initial problem instances, several factors were shown to significantly improve the quality of the GA solution. Problem instances which did not include mutation had a much lower success rate than those which did. 
Adding calculated heuristic solutions such as HEF(MST) to the initial population allowed the GA to converge more quickly and improved its likelihood of finding better-than-HEF(MST) solutions. Building an initial population using randomly-generated candidate solutions whose edges were restricted to the problem graph's MST proved equally successful. GA configuration options were analyzed using all 9000 test cases and again using only those 403 cases in which the GA was able to find the very best solution for each graph. These analyses were consistent, and resulted in the identification of a single "best" GA configuration which combined the best overall initial population strategy, random seeding algorithms, mutation and crossover strategy. The selected configuration was then further tested using various values of m to ensure that the resulting GA could in fact find better-than-HEF(MST) solutions for the majority of problem instances.
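The HEF(MST) baseline used as the performance metric above can be sketched as follows, on the assumption that it greedily deletes the heaviest MST edges whose removal still leaves every resulting tree with at least m vertices. The graph below is a made-up example.

```python
from collections import defaultdict

def kruskal_mst(n, edges):
    """edges: list of (weight, u, v) tuples; returns the MST as an edge list."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def component_size(adj, start):
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return len(seen)

def hef_forest(n, edges, m):
    """Heaviest-edge-first: drop MST edges, heaviest first, whenever both sides keep >= m vertices."""
    forest = kruskal_mst(n, edges)
    for edge in sorted(forest, reverse=True):
        w, u, v = edge
        trial = [e for e in forest if e != edge]
        adj = defaultdict(list)
        for _, a, b in trial:
            adj[a].append(b)
            adj[b].append(a)
        if component_size(adj, u) >= m and component_size(adj, v) >= m:
            forest = trial
    return forest

# Made-up 6-vertex example: only the weight-9 edge can be dropped when m = 3.
example = [(1, 0, 1), (2, 1, 2), (9, 2, 3), (1, 3, 4), (2, 4, 5)]
forest = hef_forest(6, example, 3)
```

On the example, removing the weight-9 edge splits the spanning tree into two 3-vertex trees; every other removal would strand a component smaller than m, so HEF keeps those edges.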
|
104 |
Advances in genetic algorithm optimization of traffic signals
Kesur, Khewal Bhupendra, 29 May 2008 (has links)
Recent advances in the optimization of fixed time traffic signals have demonstrated a move
towards the use of genetic algorithm optimization with traffic network performance evaluated via
stochastic microscopic simulation models. This dissertation examines methods for improved
optimization. Several modified versions of the genetic algorithm and alternative genetic
operators were evaluated on test networks. A traffic simulation model was developed for
assessment purposes. Application of the CHC search algorithm with real crossover and mutation
operators was found to offer improved optimization efficiency over the standard genetic
algorithm with binary genetic operators. Computing resources are best utilized by using a single
replication of the traffic simulation model with common random numbers for fitness evaluations.
Combining the improvements, delay reductions of 13-32% were obtained over the
standard approaches. A coding scheme allowing for complete optimization of signal phasing is
proposed and a statistical model for comparing genetic algorithm optimization efficiency on
stochastic functions is also introduced. Alternative delay measurements, amendments to genetic
operators and modifications to the CHC algorithm are also suggested.
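The value of common random numbers for fitness evaluation can be illustrated with a toy stochastic "simulation" (a plan's true mean delay plus seeded noise; this stands in for, and greatly simplifies, a microscopic traffic model). Reusing the same seeds for both candidate timing plans makes the shared noise cancel when they are compared.

```python
import random

def simulate_delay(plan_mean, seed):
    """Toy 'simulation': a timing plan's true mean delay plus seeded noise."""
    rng = random.Random(seed)
    return plan_mean + rng.gauss(0, 5)

plan_a, plan_b = 40.0, 42.0   # true mean delays; plan A is better by 2 seconds

# Independent random numbers: both runs' noise enters each observed difference.
ind = [simulate_delay(plan_a, 2 * i) - simulate_delay(plan_b, 2 * i + 1) for i in range(200)]
# Common random numbers: the shared noise term cancels in this toy model.
crn = [simulate_delay(plan_a, i) - simulate_delay(plan_b, i) for i in range(200)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

var_ind, var_crn = variance(ind), variance(crn)   # CRN variance collapses to ~0
```

In a real simulator the noise does not cancel exactly, but positively correlating the two evaluations still shrinks the variance of the difference, so fewer replications are needed to rank candidates.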
|
105 |
Um algoritmo evolutivo para o problema de dimensionamento de lotes em fundições de mercado / An evolutionary algorithm for lot-sizing in market foundries
Camargo, Victor Claudio Bento de, 16 March 2009 (has links)
According to a recent survey of the foundry sector, an important concern of the industry is improving its production planning. A production plan in a foundry involves two interdependent stages: determining the alloys to be melted and the lots to be produced. In this work, we study the lot-sizing problem for small foundries, whose objective is to determine a minimum-cost production plan. As suggested in the literature, the proposed heuristic addresses the two stages hierarchically: first the alloys are defined and, subsequently, the lots that are produced from them. To solve the problem, we propose a genetic algorithm that explores a set of possibilities for determining the alloys and uses a heuristic based on Lagrangian relaxation to determine the items to be produced. In addition, an alternative approach to the same problem is proposed, using the knapsack problem to determine the items to be produced. Good results were obtained by the proposed methods.
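The knapsack subproblem mentioned for selecting items can be sketched with a standard 0/1 dynamic program. The capacity and item numbers are invented; in the thesis's setting the "capacity" would correspond to an alloy's furnace load and the "value" to the contribution of producing an item.

```python
def knapsack(capacity, items):
    """0/1 knapsack: items is a list of (weight, value); returns (best value, chosen indices)."""
    best = [0] * (capacity + 1)
    choice = [[] for _ in range(capacity + 1)]
    for idx, (w, v) in enumerate(items):
        for c in range(capacity, w - 1, -1):    # descending: each item used at most once
            if best[c - w] + v > best[c]:
                best[c] = best[c - w] + v
                choice[c] = choice[c - w] + [idx]
    return best[capacity], choice[capacity]

# Invented numbers: capacity as kilograms of one alloy, items as (kg needed, value).
value, picked = knapsack(10, [(5, 60), (4, 40), (6, 70), (3, 30)])
```

Here the best plan takes the second and third items (4 kg + 6 kg) for a value of 110, filling the capacity exactly.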
|
106 |
Cognitive smart agents for optimising OpenFlow rules in software defined networks
Sabih, Ann Faik, January 2017 (has links)
This research provides a robust solution based on artificial intelligence (AI) techniques to overcome the challenges in Software Defined Networks (SDNs) that can jeopardise the overall performance of the network. The proposed approach, presented in the form of an intelligent agent appended to the SDN network, comprises a new hybrid intelligent mechanism that optimises the performance of SDN based on heuristic optimisation methods under an Artificial Neural Network (ANN) paradigm. Evolutionary optimisation techniques, including Particle Swarm Optimisation (PSO) and Genetic Algorithms (GAs), are deployed to find the set of inputs that gives the maximum performance of an SDN-based network. The ANN model is trained and applied as a predictor of SDN behaviour according to effective traffic parameters. The parameters used in this study include round-trip time and throughput, which were obtained from the flow table rules of each switch. A POX controller and OpenFlow switches, which characterise the behaviour of an SDN, have been modelled with three different topologies. Generalisation of the prediction model was tested with new raw data unseen in the training stage. The simulation results show reasonably good network performance, with a Mean Square Error (MSE) of less than 10^-6. Once the ANN model had been obtained, it was used with the PSO and GA optimisers to achieve the best performance of the SDN-based network. The PSO approach combined with the predicted SDN model was found to be comparatively better than the GA approach in terms of performance indices and computational efficiency. Overall, this research demonstrates that building an intelligent agent enhances the overall performance of the SDN network.
Three different SDN topologies have been implemented to study the impact of the proposed approach, with the findings demonstrating a reduction in the packets dropped ratio (PDR) of 28-31%. Moreover, the packets sent to the SDN controller were also reduced by 35-36%, depending on the generated traffic. The developed approach minimised the round-trip time (RTT) by 23% and enhanced the throughput by 10%. Finally, in the event that the SDN controller fails, the optimised intelligent agent can immediately take over control of the entire network.
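A bare-bones PSO of the kind used to search the trained predictor can be sketched as below, run here against a stand-in quadratic surrogate rather than an ANN; the inertia and acceleration coefficients are conventional textbook defaults, not values from the thesis.

```python
import random

def surrogate(x):
    """Stand-in for the trained ANN predictor (here: a smooth error surface)."""
    return sum((xi - 0.3) ** 2 for xi in x)

def pso(dim=2, n_particles=20, iters=150, seed=7):
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best position
    gbest = min(pos, key=surrogate)[:]               # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if surrogate(pos[i]) < surrogate(pbest[i]):
                pbest[i] = pos[i][:]
                if surrogate(pos[i]) < surrogate(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()
```

Because every fitness evaluation here is a cheap surrogate call rather than a live simulation, the swarm can afford thousands of evaluations, which is exactly the appeal of pairing a trained predictor with PSO or a GA.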
|
107 |
Um algoritmo genético de chaves aleatórias viciadas para o problema de atracamento molecular / A biased random key genetic algorithm for the molecular docking problem
Oliveira, Eduardo Spieler de, January 2016 (has links)
Molecular docking is a valuable tool for drug discovery. Docking a flexible ligand to a receptor is a computationally expensive process, owing to the large number of degrees of freedom of the ligand and the ruggedness of the conformational search space representing the binding affinity between the receptor and a ligand molecule. The problem is defined as the search for the solution of lowest protein-ligand binding energy; given a sufficiently accurate energy function, the optimal solution coincides with the best orientation and affinity between the molecules. The search method and the energy function are therefore central to solving the problem, alongside challenges such as the treatment of flexibility, the sampling algorithm, the exploration of the search space, and the computation of the free energy between atoms.
A molecular docking simulation starts from unbound receptor and ligand structures, and the algorithm tests hundreds of thousands of ligand conformations and orientations to find the best receptor-ligand binding affinity by assigning and optimizing an energy function. This work presents a technique based on a Biased Random-Key Genetic Algorithm as the sampling strategy for searching the protein-ligand conformational space, including a discretization of the search space and clustering methods to handle the multimodality of the molecular docking problem. The developed methodology explores the search space by generating diverse solutions. The proposed method was tested on a selection of protein-ligand complexes and compared with the existing tools AutoDock Vina and DockThor, achieving the best average root-mean-square deviation among them. Structural results were statistically analyzed. The method proved to be efficient when compared with other tools and a good alternative for the molecular docking problem.
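A minimal biased random-key GA in the spirit of the method: chromosomes are vectors of keys in [0, 1), a decoder maps them to torsion angles, elites are copied unchanged, random mutants are injected, and offspring inherit each key from the elite parent with probability rho. The decoder, the stand-in "energy" function, and all parameters are illustrative assumptions, not the thesis's implementation.

```python
import random

def decode(keys, low=-180.0, high=180.0):
    """Map random keys in [0, 1) to ligand torsion angles in degrees (toy decoder)."""
    return [low + k * (high - low) for k in keys]

def energy(keys):
    """Stand-in scoring function: pretends the best pose has all angles at 60."""
    return sum((a - 60.0) ** 2 for a in decode(keys))

def brkga(n_keys=4, pop=30, n_elite=6, n_mutants=4, rho=0.7, gens=200, seed=3):
    rng = random.Random(seed)
    P = [[rng.random() for _ in range(n_keys)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=energy)
        nxt = P[:n_elite]                                   # elites survive unchanged
        nxt += [[rng.random() for _ in range(n_keys)] for _ in range(n_mutants)]
        while len(nxt) < pop:                               # elite-biased crossover
            e, o = rng.choice(P[:n_elite]), rng.choice(P[n_elite:])
            nxt.append([e[i] if rng.random() < rho else o[i] for i in range(n_keys)])
        P = nxt
    return min(P, key=energy)

best = brkga()
angles = decode(best)
```

Keeping all problem knowledge inside the decoder is the main attraction of random-key GAs: the genetic operators never see angles, only keys, so they can never produce an invalid chromosome.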
|
108 |
Optimization of a Low Reynolds Number 2-D Inflatable Airfoil Section
Johansen, Todd A., 01 December 2011 (has links)
A stand-alone genetic algorithm (GA) and a surrogate-based optimization (SBO) combined with a GA were compared for accuracy and performance. Comparisons took place using the Ackley function and Rastrigin's function, two functions with multiple local maxima and minima that could cause problems for more traditional optimization methods, such as a gradient-based method. The GA and SBO with GA were applied to the functions through a Fortran interface, and it was found that the SBO could use the same number of function evaluations as the GA and achieve at least 5 orders of magnitude greater accuracy through the use of surrogate evaluations.
The two optimization methods were used in conjunction with computational fluid dynamics (CFD) analysis to optimize the shape of a bumpy airfoil section. Results of optimization showed that the use of an SBO can save up to 553 hours of CPU time on 196 cores when compared to the GA through the use of surrogate evaluations. Results also show the SBO can achieve greater accuracy than the GA in a shorter amount of time, and the SBO can reduce the negative effects of noise in the simulation data while the GA cannot.
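For reference, the two benchmark functions are standard test problems: both have a global minimum of 0 at the origin and are riddled with local optima, which is what defeats gradient-based methods.

```python
import math

def rastrigin(x):
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def ackley(x):
    n = len(x)
    s1 = sum(xi ** 2 for xi in x) / n
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

origin = [0.0, 0.0]
value_at_origin = rastrigin(origin)      # 0.0, the global minimum
nearby_local = rastrigin([1.0, 0.0])     # about 1.0, a strictly worse local minimum
```

Every integer lattice point is a local minimum of Rastrigin's function, so a local descent started at random almost never reaches the origin; population-based methods such as a GA, or an SBO that models the whole surface, are far better suited.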
|
109 |
[en] A STUDY ABOUT THE PERFORMANCE AND THE CONVERGENCE OF GENETIC ALGORITHMS / [pt] UM ESTUDO SOBRE O DESEMPENHO E A CONVERGÊNCIA DE ALGORITMOS GENÉTICOS
RODRIGO MORAES LIMA DE ARAUJO COSTA, 07 August 2006 (has links)
This dissertation investigates the convergence and performance of Genetic Algorithms: the problems, the proposed solutions, and the measures used to assess them. The work consists of five main parts: a discussion of the mathematical foundations that seek to explain how a Genetic Algorithm works; a study of the main problems associated with the convergence and performance of Genetic Algorithms; an analysis of alternative techniques and algorithms for improving convergence; a study of measures for estimating the expected degree of difficulty of convergence; and case studies.
The mathematical foundations of Genetic Algorithms rest on the concepts of schema and building blocks developed by Holland (apud Goldberg, 1989a). Although these concepts constitute the fundamental theory on which convergence is based, important questions remain about the process by which schemata interact during the evolution of a Genetic Algorithm (Forrest et al., 1993b). This work discusses the main challenges that have been raised against the validity of these foundations, including the controversies generated by the need for a dynamic view of Genetic Algorithms in which population sampling and the results of recombination are taken into account. In particular, it examines the objections raised by Thornton (1995) regarding the coherence of associating the concepts of schema and building blocks, the contradiction between the Schema Theorem and Price's Theorem observed by Altenberg (1994), and proposals for adapting the Fundamental Theorem of Genetic Algorithms to the concept of fitness variance within a population.
The main convergence and performance problems of a Genetic Algorithm are then discussed: Deception and Epistasis. Although Deception is strongly linked to the difficulty of convergence, it is not by itself sufficient for a problem to be hard for a Genetic Algorithm (GA-hard problems) (Grefenstette, 1993). The Walsh coefficients (Goldberg, 1989b) are presented, together with their relation to the ideas of schema and epistasis and their use in deceptive functions. Several functions associated with Deception and Epistasis are analysed: the 6-bit fully deceptive and fully easy functions proposed by Deb and Goldberg (1994); the 3-bit fully deceptive functions (Deb et al., 1989); the deceptive-but-easy and non-deceptive-but-hard functions of Grefenstette (op. cit.); the F2 and F3 functions of Whitley (1992); and the NK functions (apud Harvey, 1993) and Royal Road functions (Forrest et al., op. cit.).
Alternative techniques for improving convergence consist essentially of evolutionary algorithms with characteristics tailored to particular classes of problem. Several such algorithms are analysed: the Messy GA of Goldberg et al. (1989), the Structured GA of Dasgupta et al. (s.d.), the augmented GA of Grefenstette (ibidem), and the algorithms proposed by Paredis (1996b). The importance of an appropriate choice of parameters and of chromosome representation for reaching convergence more easily is also discussed and illustrated.
The study of convergence measures for Genetic Algorithms yields a classification into probabilistic measures and landscape-based measures. The views of Koza (1994) and Altenberg (op. cit.) on the convergence of evolutionary algorithms are also presented. Particular emphasis is given to the measure of expected convergence difficulty based on the Fitness Distance Correlation (FDC), as proposed by Jones and Forrest (1995b).
The case study analyses the behaviour of Genetic Algorithms through the FDC measure when applied to a set of mathematical functions, including those cited above, the test functions proposed by De Jong (apud Goldberg, op. cit.), and the deceptive function of Liepins and Vose (apud Deb et al., 1994). The FDC difficulty measure is also extended, seeking to fit it to a more dynamic view of Genetic Algorithms. To run these tests, the GENEsYs 1.0 environment, developed by Thomas Bäck (1992) from its precursor Genesis by John Grefenstette (apud Ribeiro et al., 1994), was adapted and extended.
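The FDC measure emphasised above is simple to compute from a sample of individuals: it is the Pearson correlation between fitness and distance to a known global optimum. A sketch on a onemax-style sample, where the two are perfectly anticorrelated:

```python
import math

def fdc(fitnesses, distances):
    """Fitness-distance correlation: Pearson correlation between fitness and
    distance to the known global optimum."""
    n = len(fitnesses)
    mf, md = sum(fitnesses) / n, sum(distances) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(fitnesses, distances)) / n
    sf = math.sqrt(sum((f - mf) ** 2 for f in fitnesses) / n)
    sd = math.sqrt(sum((d - md) ** 2 for d in distances) / n)
    return cov / (sf * sd)

# Onemax-style sample: fitness counts ones; distance is Hamming distance to "111".
strings = ["000", "001", "011", "111", "101", "110"]
fits = [s.count("1") for s in strings]
dists = [3 - f for f in fits]
r = fdc(fits, dists)   # exactly -1 here: fitter strings are closer to the optimum
```

For maximisation problems, an FDC near -1 suggests an easy landscape, values near 0 suggest the fitness gives little guidance, and positive values flag misleading (deceptive) landscapes.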
|
110 |
Development of New Methods for Inferring and Evaluating Phylogenetic Trees
Hill, Tobias, January 2007 (has links)
Inferring phylogeny is a difficult computational problem. Heuristics are necessary to minimize the time spent evaluating non-optimal trees. In paper I, we developed an approach for heuristic searching using a genetic algorithm. Genetic algorithms mimic natural selection's ability to solve complex problems. The algorithm can reduce the time required for weighted maximum parsimony phylogenetic inference using protein sequences, especially for data sets involving large numbers of taxa.
Evaluating and comparing the ability of phylogenetic methods to infer the correct topology is complex. In paper II, we developed software that determines the minimum subtree prune and regraft (SPR) distance between binary trees to ease the process. The minimum SPR distance can be used to measure the incongruence between trees inferred using different methods. Given a known topology, the methods can be evaluated on their ability to infer the correct phylogeny from specific data.
The minimum SPR software also identifies the intermediate trees that separate two binary trees. In paper III, we developed software that, given a set of incongruent trees, determines the median SPR consensus tree, i.e. the tree that explains the set of trees with a minimum of SPR operations. We investigated the median SPR consensus tree and its possible interpretation as a species tree given a set of gene trees. We used a set of α-proteobacteria gene trees to test the ability of the algorithm to infer a species tree and compared it to previous studies. The results show that the algorithm can successfully reconstruct a species tree.
Expressed sequence tag (EST) data are important in determining intron-exon boundaries, single nucleotide polymorphisms and the coding sequences of genes. In paper IV, we aligned ESTs to the genome to evaluate the quality of EST data. The results show that many ESTs are contaminated by vector sequences and low-quality regions.
The reliability of EST data is largely determined by the clustering of the ESTs and the association of the clusters with the correct portion of the genome. We investigated the performance of EST clustering using the genome as template, compared to previously existing methods using pair-wise alignments. The results show that using the genome as guidance improves the resulting EST clusters with respect to the extent to which ESTs originating from the same transcriptional unit are separated into disjoint clusters.
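Incongruence between two topologies can also be gauged by proxies far cruder than the SPR distance, whose computation is NP-hard. Purely for illustration (this is not the paper's software), the sketch below counts the clades on which two rooted trees, written as nested tuples, disagree, in the style of the Robinson-Foulds distance.

```python
def clades(tree):
    """Collect the leaf set below every internal node of a nested-tuple rooted tree."""
    found = set()
    def walk(node):
        if isinstance(node, tuple):
            leaves = frozenset().union(*(walk(child) for child in node))
            found.add(leaves)
            return leaves
        return frozenset([node])
    walk(tree)
    return found

def rf_style_distance(t1, t2):
    """Count clades present in exactly one of the two trees."""
    return len(clades(t1) ^ clades(t2))

t1 = ((("A", "B"), "C"), "D")
t2 = ((("A", "C"), "B"), "D")
d = rf_style_distance(t1, t2)   # 2: the trees disagree only on {A,B} vs {A,C}
```

Unlike the SPR distance, this clade count can be computed in linear time, but it saturates quickly as trees diverge, which is one reason the paper's SPR-based measure is more informative for consensus building.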
|