21 |
Problèmes morpho-syntaxiques analysés dans un modèle catégoriel étendu : application au coréen et au français avec une réalisation informatique / Morpho-syntactic problems analyzed in an extended categorial model: application to Korean and to French with a development of a categorial parser. Choi, Juyeon, 28 June 2011
This thesis proposes formal analyses of linguistic phenomena such as the case system, double case marking, flexible word order, coordination, subordination and thematisation in two structurally very distinct languages: Korean and French. The chosen framework is the formalism of Applicative Combinatory Categorial Grammar, developed by Jean-Pierre Desclés and Ismail Biskri, which puts to work the combinators of Curry's Combinatory Logic and Church's functional calculus of types. The problem to be solved is the following: taking a case language such as Korean, with the double-case constructions and flexible word order specific to certain East Asian languages, can this language be analyzed within a categorial formalism, and under which computation strategy? We give a number of examples that answer this question. Building on these formal analyses, we then examine the syntactic relevance of the "anti-anti-relativist" hypothesis by extracting syntactic invariants from the operations of predication, determination, transposition, quantification and coordination. We also propose a categorial parser, ACCG, applicable to Korean and French, which automatically generates the categorial calculi as well as the operator/operand structures.
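To illustrate the kind of operator/operand analysis a categorial parser performs, here is a minimal sketch of the two core categorial rules, forward and backward application. This is not the ACCG parser itself; the tuple encoding of categories is an invented convenience, while the rule names and category notation (X/Y, X\Y) are standard in categorial grammar.

```python
# Categories are nested tuples: ('/', X, Y) encodes X/Y (looks rightward for Y),
# ('\\', X, Y) encodes X\Y (looks leftward for Y). Atomic categories are strings.

def forward_apply(f, arg):
    """Forward application: X/Y combined with a following Y yields X, else None."""
    if isinstance(f, tuple) and f[0] == '/' and f[2] == arg:
        return f[1]
    return None

def backward_apply(arg, f):
    """Backward application: Y combined with a following X\\Y yields X, else None."""
    if isinstance(f, tuple) and f[0] == '\\' and f[2] == arg:
        return f[1]
    return None
```

For example, an intransitive verb typed S\NP combines with a preceding NP subject to yield a sentence S, making the verb the operator and the subject its operand.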
|
22 |
Metoda pro evoluční návrh násobiček využívající development / Evolutionary Design Method of Multipliers Using Development. Kaplan, Tomáš, January 2010
This work focuses on techniques for overcoming the problem of scale in the evolutionary design of combinational multipliers. Approaches to evolutionary design that work directly with the target solutions are not suitable for the design of large-scale structures. An approach based on the biological principles of development has often been utilized as a non-trivial genotype-phenotype mapping in evolutionary algorithms, allowing scalable structures to be designed. The instruction-based developmental approach has been applied to the evolutionary design of generic circuit structures. In this work, three methods are presented for the construction of combinational multipliers which use a ripple-carry adder for obtaining the final product.
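For reference, the conventional combinational multiplier structure that such evolved designs target can be sketched in a few lines: AND-gate partial products summed through a ripple-carry adder. This is a behavioral sketch of the standard circuit, not the thesis's evolved or developmental solution.

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry-out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x_bits, y_bits):
    """Add two equal-length little-endian bit lists with a ripple-carry chain."""
    out, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

def multiply(x_bits, y_bits):
    """Combinational multiplier: shifted AND-gate partial products,
    accumulated with the ripple-carry adder."""
    width = len(x_bits) + len(y_bits)
    acc = [0] * width
    for shift, yb in enumerate(y_bits):
        partial = [0] * shift + [xb & yb for xb in x_bits]
        partial += [0] * (width - len(partial))
        acc = ripple_carry_add(acc, partial)[:width]
    return acc

def to_int(bits):
    """Interpret a little-endian bit list as an integer."""
    return sum(b << i for i, b in enumerate(bits))
```

An evolutionary design searches over gate networks that realize the same truth table as `multiply`, which is what makes the problem's scale grow so quickly with operand width.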
|
23 |
Contributions à des problèmes de partitionnement de graphe sous contraintes de ressources / Contributions to graph partitioning problems under resource constraints. Nguyen, Dang Phuong, 6 December 2016
The graph partitioning problem is a fundamental problem in combinatorial optimization. It consists in decomposing the node set of an edge-weighted graph into several disjoint subsets of nodes (or clusters) so that the total weight of the edges whose end-nodes lie in different clusters is minimized. In this thesis, we study the graph partitioning problem on graphs with (non-negative) node weights, with additional constraints on the clusters (GPP-SC) specifying that the total capacity of each cluster (e.g. the total node weight, or the total capacity over the edges having at least one end-node in the cluster) must not exceed a specified limit (called the capacity limit). This differs from the variants of the graph partitioning problem most commonly addressed in the literature in that:
- the number of clusters is not imposed (it is part of the solution);
- the node weights are not homogeneous.
The present work is motivated by the task allocation problem in multicore architectures. The goal is to find a feasible placement of all tasks on processors that respects their computing capacity while minimizing the total volume of inter-processor communication. This problem can be formulated as a graph partitioning problem under knapsack constraints (GPKC) on sparse graphs, a special case of GPP-SC. Moreover, in such applications, uncertainty on the node weights (which correspond, for example, to task durations) has to be taken into account.
The first contribution of this work is to take into account the sparsity of the graphs G = (V,E) typically encountered in our applications. Most existing mathematical programming models for graph partitioning use O(|V|^3) metric constraints to model the partition of the nodes and thus implicitly assume that G is a complete graph. Using these metric constraints when G is not complete requires adding edges and constraints, which may greatly increase the size of the program. Our result shows that when G is sparse we can reduce the number of metric constraints to O(|V||E|) [1], [4].
The second contribution of the present work is the computation of lower bounds for large graphs. We propose a new programming model for the graph partitioning problem that uses only O(m) variables. The model contains cycle inequalities and all inequalities related to paths in the graph to formulate the feasible partitions. Since there is an exponential number of such constraints, solving the model requires a separation method to speed up the computation. We propose such a separation method based on an all-pairs shortest path algorithm, which therefore runs in polynomial time. Computational results show that the new model and method give tight lower bounds for large graphs with thousands of nodes.
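The GPKC objective and constraints described above can be made concrete with a small evaluation routine. This is an illustrative sketch of what a candidate solution must satisfy, with an invented helper name, not the thesis's mathematical programming model.

```python
def evaluate_partition(edges, node_weight, assignment, capacity):
    """Evaluate a candidate GPKC solution.

    edges: dict mapping (u, v) pairs to edge weights (communication volume).
    node_weight: dict mapping each node to its weight (e.g. task duration).
    assignment: dict mapping each node to a cluster id (processor).
    capacity: knapsack limit on each cluster's total node weight.

    Returns (cut_weight, feasible): the total weight of inter-cluster edges
    (the quantity to minimize) and whether every cluster respects the capacity.
    """
    cut = sum(w for (u, v), w in edges.items()
              if assignment[u] != assignment[v])
    loads = {}
    for node, cluster in assignment.items():
        loads[cluster] = loads.get(cluster, 0) + node_weight[node]
    feasible = all(load <= capacity for load in loads.values())
    return cut, feasible
```

Note that the number of clusters is not fixed anywhere: the assignment itself determines how many clusters are used, matching the problem statement above.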
|
24 |
A Heuristic Featured Based Quantification Framework for Efficient Malware Detection. Measuring the malicious intent of a file using anomaly probabilistic scoring and evidence combinational theory with fuzzy hashing for malware detection in Portable Executable files. Namanya, Anitta P., January 2016
Malware remains one of the most prominent vectors through which computer networks and systems are compromised. A compromised computer system or network provides data and processing resources to the world of cybercrime. With cybercrime projected to cost the world $6 trillion by 2021, malware is expected to remain a growing challenge, and malware statistics over the last decade support this expectation, with malware numbers increasing almost exponentially over the period. Recent reports on the complexity of malware show that the fight against it, as a means of building a more resilient cyberspace, is an evolving challenge. Compounding the problem is the shortage of cyber-security expertise to handle the expected rise in incidents. This thesis proposes advancing the automation of static malware analysis and detection to improve the confidence with which a standard computer user can decide whether a file is malicious. To that end, this work introduces a framework that relies on two novel approaches to score the malicious intent of a file. The first approach attaches a probabilistic score to heuristic anomalies to calculate an overall file maliciousness score, while the second uses fuzzy hashes and evidence combination theory for more efficient malware detection. The resulting quantifiable scores measure the malicious intent of the file. The designed schemes were validated using a dataset of "clean" and "malicious" files. The results obtained show that the framework achieves true-positive/false-positive detection-rate trade-offs for efficient malware detection.
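Evidence combination theory, as used in the framework's second approach, is commonly instantiated by Dempster's rule. The sketch below combines two sources of evidence over the frame {malicious, benign}; the mass values in the usage example are illustrative, not the thesis's calibrated scores.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule.

    m1, m2: dicts mapping frozenset hypotheses to mass values summing to 1.
    The full frame (e.g. {malicious, benign}) represents 'uncertain'.
    Masses on conflicting (disjoint) hypotheses are discarded and the
    remainder is renormalized.
    """
    combined, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    norm = 1.0 - conflict  # assumes the sources are not in total conflict
    return {h: v / norm for h, v in combined.items()}
```

Combining, say, a fuzzy-hash match that assigns 0.6 mass to "malicious" with a heuristic scan assigning 0.5 to "malicious" and 0.2 to "benign" yields a sharper belief in "malicious" than either source alone, which is the intuition behind using combination theory for detection.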
|
25 |
Damage Detection of Rotors Using Magnetic Force Actuator: Analysis and Experimental Verification. Pesch, Alexander Hans, January 2008
No description available.
|
26 |
Synthesis and Applications of Multimodal Hybrid Albumin Nanoparticles for Chemotherapeutic Drug Delivery and Photothermal Therapy Platforms. Peralta, Donna V., 13 August 2014
Progress has been made in using human serum albumin nanoparticles (HSAPs) as carrier systems for the targeted treatment of cancer. Human serum albumin (HSA), the most abundant human blood protein, can form HSAPs via a desolvation and crosslinking method, with the size of the HSAPs being of crucial importance for drug loading and in vivo performance. Gold nanoparticles have also gained medicinal attention due to their ability to absorb near-infrared (NIR) light. These relatively non-toxic particles offer combinational therapy via imaging and plasmonic photothermal therapy (PPTT) capabilities.
A desolvation and crosslinking approach was employed to encapsulate gold nanoparticles (AuNPs), hollow gold nanoshells (AuNSs), and gold nanorods (AuNRs) into efficiently sized HSAPs for future tumor heat ablation via PPTT. The AuNR-HSAPs, AuNP-HSAPs and AuNS-HSAPs had average particle diameters of 222 ± 5, 195 ± 9 and 156 ± 15 nm, respectively.
We simultaneously encapsulated AuNRs and the anticancer drug paclitaxel (PAC), forming PAC-AuNR-HSAPs with an overall average particle size of 299 ± 6 nm. Loading of paclitaxel into PAC-AuNR-HSAPs reached 3 μg PAC/mg HSA. PAC-AuNR-HSAPs experienced photothermal heating to 46 ˚C after 15 minutes of NIR laser exposure, the temperature necessary to cause severe cellular hyperthermia. The irradiation session caused a burst release of paclitaxel of up to 188 ng, followed by a temporal drug release.
AuNR-HSAPs were tested for ablation of renal cell carcinoma using NIR irradiation in vitro. Particles created with the same amount of AuNRs but varying HSA (1, 5 or 20 mg) showed overall particle size diameters of 409 ± 224, 294 ± 83 and 167 ± 4 nm, respectively. Increasing the HSA content causes more toxicity under non-irradiated treatment conditions: AuNR-HSAPs with 20 mg versus 5 mg HSA gave cell viabilities of 64.5% versus 87%, respectively. All AuNR-HSAP batches experienced photothermal heating above 42 ˚C. Coumarin-6 was used to visualize the cellular uptake of AuNR-HSAPs via fluorescence microscopy.
Finally, camptothecin (CPT), an antineoplastic agent, and BACPT (7-butyl-10-aminocamptothecin) were loaded into HSAPs to combat their aqueous insolubility. BACPT-HSAPs loaded up to 5.25 μg BACPT/mg HSA. CPT encapsulation could not be determined. BACPT-HSAPs and CPT-HSAPs showed cytotoxicity to human sarcoma cells in vitro.
|
27 |
Método de avaliação de qualidade de vídeo por otimização condicionada. / Video quality assessment method based on constrained optimization. Begazo, Dante Coaquira, 24 November 2017
This dissertation proposes two objective metrics for estimating human perception of the quality of video subject to transmission degradation over packet networks. The first metric uses only the degraded video, while the second uses both the reference and the degraded video sequences. The latter is a full-reference (FR) metric called the Quadratic Combinational Metric (QCM), and the former is a no-reference (NR) metric called the Viewing Quality Objective Metric (VQOM). In particular, the design procedure is applied to packet delay variation (PDV) impairments, whose compensation or control is very important for maintaining quality. The NR metric is described by a cubic spline composed of two cubic polynomials that meet smoothly at a point called a knot. As the first step in the design of either metric, spectators score a training set of degraded video sequences. The objective function for designing the NR metric includes the total square error between the scores and their parametric estimates, still regarded as algebraic expressions. The objective function is then augmented with three equality constraints on the derivatives at the knot, whose position is specified within a fine grid of points between the minimum and maximum values of the degradation factor. These constraints are introduced through Lagrange multipliers and added to the objective function to obtain the Lagrangian, which is minimized to determine the suboptimal polynomial coefficients as a function of each knot position in the grid. Finally, the knot value that yields the minimum square error is selected, fixing the final values of the polynomial coefficients. The FR metric, in turn, is a nonlinear combination of two popular metrics, the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM). A complete second-degree two-variable polynomial is used for the combination, since it is sensitive to both constituent metrics while its low degree avoids overfitting. In the training phase, the coefficients of this polynomial are determined by minimizing the mean square error with respect to the opinions over the training database. Both metrics, the VQOM and the QCM, are trained and validated on one database and tested on an independent one. The test results are compared with recent NR and FR metrics by means of correlation coefficients, with favorable results for the proposed metrics.
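The functional form of the FR metric can be sketched directly: a complete second-degree polynomial in the two variables PSNR and SSIM has six coefficients. The coefficients in the usage example below are placeholders; in the dissertation they are obtained by the mean-square-error fit against viewer opinions.

```python
def qcm_score(psnr, ssim, c):
    """Evaluate a complete second-degree two-variable polynomial, the form
    used by the QCM to combine PSNR and SSIM.

    c = (c0, c1, c2, c3, c4, c5): constant, linear (psnr, ssim), and
    quadratic (psnr^2, psnr*ssim, ssim^2) coefficients.
    """
    c0, c1, c2, c3, c4, c5 = c
    return (c0 + c1 * psnr + c2 * ssim
            + c3 * psnr * psnr + c4 * psnr * ssim + c5 * ssim * ssim)
```

Because the cross term `psnr * ssim` is included, the combination is genuinely nonlinear in the pair of inputs rather than a weighted sum, which is what lets the QCM remain sensitive to both constituent metrics.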
|
29 |
NEUROPROTECTIVE STRATEGIES FOLLOWING EXPERIMENTAL TRAUMATIC BRAIN INJURY: LIPID PEROXIDATION-DERIVED ALDEHYDE SCAVENGING AND INHIBITION OF MITOCHONDRIAL PERMEABILITY TRANSITION. Kulbe, Jacqueline Renee, 1 January 2019
Traumatic brain injury (TBI) represents a significant health crisis. To date there are no FDA-approved pharmacotherapies available to prevent the neurologic deficits caused by TBI. Following TBI, dysfunctional mitochondria generate reactive oxygen and nitrogen species, initiating lipid peroxidation (LP) and the formation of LP-derived neurotoxic aldehydes, which bind mitochondrial proteins, exacerbating dysfunction and promoting opening of the mitochondrial permeability transition pore (mPTP). This results in extrusion of mitochondrially sequestered calcium into the cytosol and initiates a downstream cascade of calpain activation, spectrin degradation, neurodegeneration and neurologic impairment.
As central mediators of the TBI secondary injury cascade, mitochondria and LP-derived neurotoxic aldehydes make promising therapeutic targets. In fact, Cyclosporine A (CsA), an FDA-approved immunosuppressant capable of inhibiting the mPTP, has been shown to be neuroprotective in experimental TBI. Additionally, phenelzine (PZ), an FDA-approved non-selective irreversible monoamine oxidase inhibitor (MAOI) class antidepressant, has also been shown to be neuroprotective in experimental TBI due to the presence of a hydrazine (-NH-NH2) moiety allowing for the scavenging of LP-derived neurotoxic aldehydes.
The overall goal of this dissertation is to further examine the neuroprotective effects of the mPTP inhibitor, CsA, and the LP-derived neurotoxic aldehyde scavenger, PZ, using a severe controlled cortical impact injury (CCI) model in 3-month old male Sprague-Dawley rats.
First, the effects of CsA on cortical synaptic and non-synaptic mitochondria, two heterogeneous populations, are examined. Our results indicate that compared to non-synaptic mitochondria, synaptic mitochondria sustain greater damage 24h following CCI and are protected to a greater degree by CsA.
Second, the neuroprotective effects of a novel 72 h continuous subcutaneous infusion of CsA combined with PZ are compared to monotherapy. Following CCI, our results indicate that individually both CsA and PZ attenuate the modification of mitochondrial proteins by LP-derived neurotoxic aldehydes, and that PZ is able to maintain the mitochondrial respiratory control ratio and cytoskeletal integrity; together, however, PZ and CsA fail to improve on, and in some cases negate, the neuroprotective effects of monotherapy.
Finally, the effects of PZ (MAOI, aldehyde scavenger), pargyline (PG; MAOI, non-aldehyde scavenger) and hydralazine (HZ; non-MAOI, aldehyde scavenger) are compared. Our results indicate that PZ, PG, and HZ are unable to improve CCI-induced deficits in learning and memory as measured by the Morris water maze (post-CCI days 3-7). Of concern, PZ animals lost a significant amount of weight compared to all other groups, possibly due to MAOI effects. In fact, in uninjured cortical tissue, PZ administration leads to a significant increase in norepinephrine and serotonin. Additionally, although PZ, PG, and HZ did not lead to a statistically significant improvement in cortical tissue sparing 8 days following CCI, the HZ group saw a 10% improvement over vehicle.
Overall, these results indicate that pharmacotherapies which improve mitochondrial function and decrease lipid peroxidation should continue to be pursued as neuroprotective approaches to TBI. However, further pursuit of LP-derived aldehyde scavengers for clinical use in TBI may require the development of hydrazine (-NH-NH2)-compounds which lack additional confounding mechanisms of action.
|
30 |
Estratégias de busca no projeto evolucionista de circuitos combinacionais / Search strategies in the evolutionary design of combinational circuits. Manfrini, Francisco Augusto Lima, 23 February 2017
Evolutionary computation has been applied in many areas of knowledge to discover innovative designs. When applied to digital circuit design, the scalability problem has limited the complexity of the circuits obtained, and it is regarded as the main open problem in the evolvable hardware field. Increasing the power of evolutionary methods and the efficiency of the search is therefore an important step toward better design tools. This work addresses evolutionary computation applied to the design of combinational logic circuits and develops strategies to improve the performance of evolutionary algorithms. The three main contributions of this thesis are: (i) a new methodology that helps to understand the fundamental causes of the success or failure of the genetic modifications that occur along the evolution; (ii) a heuristic for seeding the initial population; the results show that there is a correlation between the topology of the initial population and the region of the search space that is explored; and (iii) a new mutation operator called Biased SAM; this operator is shown to guide the search effectively, and in the experiments performed it is better than or equivalent to the traditional mutation operator. The computational experiments that validate these contributions were carried out using benchmark circuits from the literature.
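The idea behind a biased mutation operator can be sketched briefly: instead of choosing the gene to mutate uniformly at random, genes are drawn with non-uniform probabilities intended to steer the search. Biased SAM is the thesis's operator; the weighting scheme below is an invented stand-in, shown only to contrast biased with uniform selection.

```python
import random

def biased_mutate(genotype, weights, alleles, rng=random):
    """Mutate one gene of a list genotype. The index of the gene to change is
    drawn according to `weights` (a biased distribution over positions) rather
    than uniformly; the new allele is any value different from the current one."""
    child = list(genotype)
    idx = rng.choices(range(len(child)), weights=weights, k=1)[0]
    choices = [a for a in alleles if a != child[idx]]
    child[idx] = rng.choice(choices)
    return child
```

With all weights equal this reduces to the traditional uniform point mutation, so the bias can be seen as a knob between undirected and directed search.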
|