  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
621

A Distributed Algorithm for Optimal Dispatch in Smart Power Grids with Piecewise Linear Cost Functions

Yasmeen, Aneela 01 July 2013 (has links)
We consider the optimal economic dispatch of power generators in a smart electric grid: allocating power among generators to meet load requirements at minimum total cost. We assume that each generator has a piecewise-linear cost function. We first present a polynomial-time algorithm that achieves optimal dispatch. We then present a decentralized algorithm in which each generator independently adjusts its power output using only the aggregate power imbalance in the network, which each generator can observe through local measurements of the frequency deviation on the grid. The proposed algorithm drives the power imbalance to zero exponentially fast while eventually minimizing the generation cost.
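The decentralized adjustment described above can be sketched in a few lines (a hypothetical toy model, not the thesis's algorithm: the gains, load value, and update rule are invented for illustration):

```python
# Toy model of the decentralized scheme: each generator sees only the
# aggregate imbalance (load minus total generation), observable via the
# grid frequency deviation, and nudges its own output by a private gain.
# Gains and load values here are invented for illustration.

def simulate_dispatch(load, outputs, gains, steps=50):
    """Run the per-generator update p_i += gains[i] * imbalance."""
    outputs = list(outputs)
    history = []
    for _ in range(steps):
        imbalance = load - sum(outputs)
        history.append(imbalance)
        for i in range(len(outputs)):
            outputs[i] += gains[i] * imbalance
    return outputs, history

outputs, history = simulate_dispatch(load=100.0,
                                     outputs=[20.0, 30.0, 10.0],
                                     gains=[0.2, 0.2, 0.1])
# With sum(gains) = 0.5, the imbalance halves every step: 40, 20, 10, ...
```

Because every generator reacts to the same shared signal, no peer-to-peer communication is needed; in this toy model the imbalance decays geometrically at rate 1 - sum(gains).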
622

Sensitivity Analyses for Tumor Growth Models

Mendis, Ruchini Dilinika 01 April 2019 (has links)
This study performs sensitivity analysis for two previously developed tumor growth models: the Gompertz model and the quotient model. Both models are considered in continuous and discrete time. In continuous time, model parameters are estimated using the least-squares method, while in discrete time the partial-sum method is used. Moreover, frequentist and Bayesian methods are used to construct confidence intervals and credible intervals for the model parameters. We apply Markov Chain Monte Carlo (MCMC) techniques, namely the Random Walk Metropolis algorithm with a non-informative prior and the Delayed Rejection Adaptive Metropolis (DRAM) algorithm, to construct the parameters' posterior distributions and then obtain credible intervals.
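As a rough illustration of the MCMC machinery the abstract mentions, here is a minimal Random Walk Metropolis sampler on a toy one-dimensional target (the target density, step size, and credible-interval computation are illustrative assumptions, not the thesis's models):

```python
import math
import random

def random_walk_metropolis(log_post, x0=0.0, step=0.5, n=5000, seed=1):
    """Random Walk Metropolis: propose x' = x + N(0, step); accept with
    probability min(1, post(x') / post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy target: a standard-normal log-posterior.
samples = random_walk_metropolis(lambda x: -0.5 * x * x)

# A 95% credible interval from the central posterior quantiles.
ordered = sorted(samples)
lo = ordered[int(0.025 * len(ordered))]
hi = ordered[int(0.975 * len(ordered))]
```

DRAM extends this basic sampler with delayed rejection (a second, smaller proposal after a rejection) and adaptation of the proposal covariance from past samples.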
623

A.I. - Algorithmic Interactions

Jackson, Delbert Wayne 01 May 2013 (has links)
This thesis discusses how I created artwork when I started graduate school, how my artwork evolved as I explored what art making meant to me, and how my thoughts about art making have developed. I conclude by discussing the artwork I produced for my thesis show and how that work was shaped by my previous observations and artworks.
624

Coherence-based transmissibility as a damage indicator for highway bridges

Schallhorn, Charles Joseph 01 December 2015 (has links)
Vibration-based damage detection methods are used in structural applications to identify the global dynamic response of the system. The purpose of the work presented is to develop a vibration-based damage detection algorithm that calculates a damage indicator, based on limited frequency bands of the transmissibility function that have high coherence, as a metric for changes in the dynamic integrity of the structure. The methodology was tested using numerical simulation, laboratory experimentation, and field testing, with success in detecting, comparatively locating, and relatively quantifying different damage cases, while also parametrically investigating variables that have been identified as issues in similar existing methods. Throughout both the numerical and laboratory analyses, the results were used to successfully detect damage resulting from crack growth or the formation of new cracks. Field results using stochastic operational traffic loading indicated the capability of the proposed methodology to evaluate changes in the health condition of a section of the bridge and to consistently detect cracks of various sizes (30 to 60 mm) on a sacrificial specimen integrated with the bridge abutment and a floor beam. Fluctuations in environmental and loading conditions are known to create uncertainties in most damage detection processes; however, this work demonstrated that by limiting the features of transmissibility to frequency ranges of high coherence, the effect of these parameters, compared to the effect of damage, becomes less significant and can be neglected in some instances. The results of additional field testing using controlled impact forces on the sacrificial specimen reinforced the findings from the operational loading in detecting damage.
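The core idea of restricting the damage indicator to high-coherence frequency bands can be sketched as follows (a hypothetical formulation with made-up transmissibility values; the thesis's actual indicator may differ):

```python
def damage_indicator(T_base, T_test, coherence, coh_min=0.9):
    """Compare baseline and test transmissibility magnitudes, but only
    over frequency bins whose coherence exceeds coh_min; return the mean
    absolute deviation over the retained bins (hypothetical formulation)."""
    kept = [(b, t) for b, t, c in zip(T_base, T_test, coherence)
            if c >= coh_min]
    if not kept:
        return 0.0
    return sum(abs(t - b) for b, t in kept) / len(kept)

# Undamaged check: identical transmissibility gives an indicator of zero.
healthy = damage_indicator([2.0, 2.1, 1.9], [2.0, 2.1, 1.9],
                           [0.95, 0.99, 0.97])

# Damaged case: transmissibility shifts at the high-coherence bins, while
# a noisy low-coherence bin (0.4) is excluded from the comparison.
damaged = damage_indicator([2.0, 2.1, 1.9], [2.6, 2.7, 9.9],
                           [0.95, 0.99, 0.4])
```

Masking by coherence is what suppresses the influence of environmental and loading fluctuations: bins dominated by uncorrelated noise never enter the indicator.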
625

Optimization of a Low Reynolds Number 2-D Inflatable Airfoil Section

Johansen, Todd A. 01 December 2011 (has links)
A stand-alone genetic algorithm (GA) and a surrogate-based optimization (SBO) combined with a GA were compared for accuracy and performance. Comparisons took place using the Ackley function and Rastrigin's function, two functions with multiple local maxima and minima that can cause problems for more traditional optimization methods, such as gradient-based methods. The GA and the SBO with GA were applied to the functions through a Fortran interface, and it was found that the SBO could use the same number of function evaluations as the GA and achieve at least five orders of magnitude greater accuracy through the use of surrogate evaluations. The two optimization methods were then used in conjunction with computational fluid dynamics (CFD) analysis to optimize the shape of a bumpy airfoil section. Results of the optimization showed that the use of an SBO can save up to 553 hours of CPU time on 196 cores when compared to the GA through the use of surrogate evaluations. Results also show the SBO can achieve greater accuracy than the GA in a shorter amount of time, and that the SBO can reduce the negative effects of noise in the simulation data while the GA cannot.
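For reference, the two benchmark functions named above can be written compactly (standard textbook definitions; the constants a, b, c and A are the conventional defaults, not values taken from the thesis):

```python
import math

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    """Ackley test function: global minimum f(0, ..., 0) = 0, surrounded
    by many local minima that can trap gradient-based optimizers."""
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(c * xi) for xi in x) / n
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

def rastrigin(x, A=10.0):
    """Rastrigin test function: global minimum f(0, ..., 0) = 0, with a
    regular grid of local minima produced by the cosine term."""
    return A * len(x) + sum(xi * xi - A * math.cos(2 * math.pi * xi)
                            for xi in x)
```

Both landscapes are cheap to evaluate, which is exactly why they are used to benchmark optimizers before committing to expensive CFD evaluations.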
626

The QR Algorithm

Chu, Hsiao-yin Edith 01 May 1979 (has links)
We consider two methods for computing an eigenvector and the associated eigenvalue of a matrix A.
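One classical method in this family is the unshifted QR iteration, sketched below for a small symmetric matrix (a generic textbook sketch, not necessarily one of the two methods the thesis develops):

```python
def qr_decompose(A):
    """Classical Gram-Schmidt QR factorization for a small square matrix
    given as a list of rows: returns Q (orthonormal columns) and R (upper
    triangular) with A = Q R."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q_cols, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i, q in enumerate(Q_cols):
            R[i][j] = sum(q[k] * cols[j][k] for k in range(n))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]
        R[j][j] = sum(vk * vk for vk in v) ** 0.5
        Q_cols.append([vk / R[j][j] for vk in v])
    Q = [[Q_cols[j][i] for j in range(n)] for i in range(n)]
    return Q, R

def qr_eigenvalues(A, iters=100):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k is similar to A_k, and for
    a symmetric matrix with distinct eigenvalue magnitudes it converges to
    a diagonal matrix of eigenvalues."""
    n = len(A)
    for _ in range(iters):
        Q, R = qr_decompose(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return [A[i][i] for i in range(n)]

eigs = sorted(qr_eigenvalues([[2.0, 1.0], [1.0, 2.0]]))  # exact: 1 and 3
```

Practical implementations add shifts and a reduction to Hessenberg form for speed, but the similarity-transform core is the same.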
627

[en] A STUDY ABOUT THE PERFORMANCE AND THE CONVERGENCE OF GENETIC ALGORITHMS / [pt] UM ESTUDO SOBRE O DESEMPENHO E A CONVERGÊNCIA DE ALGORITMOS GENÉTICOS

RODRIGO MORAES LIMA DE ARAUJO COSTA 07 August 2006 (has links)
[pt] Esta dissertação investiga a convergência e o desempenho de Algoritmos Genéticos: os problemas, soluções e medidas propostas. O trabalho consiste de cinco partes principais: uma discussão sobre os fundamentos matemáticos que buscam explicar o funcionamento de um Algoritmo Genético; um estudo dos principais problemas associados à convergência e ao desempenho de Algoritmos Genéticos; uma análise das técnicas e algoritmos alternativos para a melhoria da convergência; um estudo de medidas para estimar o grau de dificuldade esperado para a convergência de Algoritmos Genéticos; e estudo de casos. Os fundamentos matemáticos de Algoritmos Genéticos têm por base os conceitos de schema e blocos construtores, desenvolvidos por Holland (apud Goldberg, 1989a). Embora estes conceitos constituam a teoria fundamental sobre a qual a convergência se baseia, há, no entanto, questões importantes sobre o processo através do qual schemata interagem durante a evolução de um Algoritmo Genético (Forrest et al, 1993b). Este trabalho apresenta uma discussão sobre os principais questionamentos que têm sido levantados sobre a validade destes fundamentos. São discutidas as controvérsias geradas pela necessidade de uma visão dinâmica dos Algoritmos Genéticos, onde a amostra da população e os resultados obtidos pela recombinação sejam considerados. Em especial, as objeções apontadas por Thornton (1995) quanto à coerência da associação dos conceitos de schema e blocos construtores, a contradição entre os teoremas de schema e de Price vista por Altenberg (1994), e as idéias de adequação do Teorema Fundamental de Algoritmos Genéticos ao conceito de variância dentro de uma população. Os principais problemas de convergência e desempenho de um Algoritmo Genético foram discutidos: a Decepção e a Epistasia. 
É apresentada a idéia de que a Decepção, embora esteja fortemente ligada à dificuldade de convergência de Algoritmos Genéticos, não constitui fator suficiente para que um problema seja considerado difícil para um Algoritmo Genético (GA-hard problems) (Grefenstette, 1993). São também apresentados os coeficientes de Walsh (Goldberg, 1989b), demonstrada a sua relação com as idéias de schema e epistasia, e sua utilização em funções decepcionantes. São analisadas diversas funções associadas aos conceitos de Decepção e Epistasia: as funções fully-deceptive e fully-easy com 6 bits, propostas por Deb e Goldberg (1994); as funções deceptive but easy e non-deceptive but hard de Grefenstette (op. cit.); as funções F2 e F3 de Whitley (1992); e ainda as funções NK (apud Harvey, 1993) e Royal Road (Forrest et al, op. cit.). Técnicas alternativas para melhorar a convergência incluem basicamente algoritmos evolucionários com características específicas a determinado tipo de problema. São analisados alguns algoritmos alternativos, como o Messy de Goldberg et alli (1989), o Estruturado de Dasgupta et al (s.d.), o aumentado de Grefenstette (ibidem) e os algoritmos propostos por Paredis (1996b). É ainda discutida e exemplificada a importância da escolha adequada de parâmetros e da representação de cromossomas, para que a convergência seja mais facilmente alcançada. O estudo de medidas de convergência de Algoritmos Genéticos fornece uma classificação: medidas probabilísticas e medidas baseadas em landscapes. São apresentadas também as colocações de Koza (1994) e Altenberg (op. cit.) sobre a convergência de Algoritmos Evolucionários. É dado destaque para a medida da dificuldade esperada para convergência baseada no Coeficiente de Correlação entre a Aptidão e a Distância (FDC - Fitness Distance Correlation), como proposto por Jones e Forrest (1995b). 
O estudo de casos consiste da análise do comportamento de Algoritmos Genéticos pela medida FDC, quando aplicados a um conjunto de funções matemáticas, incluindo as já citadas, e ainda as funções de teste propostas por De Jong (apud Goldberg, op. cit.) e a função decepcionante de Liepins e Vose (apud Deb et al, 1994). Também é realizada uma extensão da medida de dificuldade FDC estudada, buscando adequá-la a uma visão mais dinâmica de Algoritmos Genéticos. Para executar estes testes, o ambiente GENEsYs 1.0, desenvolvido por Thomas Bäck (1992) a partir de seu precursor Genesis, de John Grefenstette (apud Ribeiro et alli, 1994), foi adaptado e estendido. / [en] This work investigates the convergence and the performance of Genetic Algorithms: the problems, solutions and proposed measures. It is divided into five topics: a discussion on the mathematical foundations that explain how Genetic Algorithms work; a study of the most important problems associated with their convergence and performance; an analysis of techniques and alternative Genetic Algorithms to achieve better convergence; a study of measures that try to estimate the level of difficulty of convergence of GAs; and a case study. The mathematical foundations are based on the concepts of schema and building blocks, developed by Holland (apud Goldberg, 1989a). Although they constitute the fundamental theory of Genetic Algorithm convergence, many questions have been raised about the process by which schemata interact during the evolution of GAs (Forrest et al, 1993b). This work presents a discussion of the most important questions that have been raised about the validity of these foundations: specifically, the objections pointed out by Thornton (1995) about the coherence of the association between schema and building blocks; the contradiction between the schema theorem and Price's theorem, mentioned by Altenberg (1994); and the new ideas raised by the variance-of-fitness concept. 
The most important problems related to the convergence and performance of GAs are discussed, i.e. Deception and Epistasis. Even though Deception can hinder convergence, it does not by itself make a problem GA-hard (Grefenstette, 1993). The Walsh coefficients (Goldberg, 1989b) and their relation with schema are presented, as well as their use in deceptive functions. Several functions based on the concepts of Deception and Epistasis are analyzed: the 6-bit fully-deceptive function by Deb et al. (1994); the 3-bit fully-deceptive functions by Deb et al. (1989); the functions deceptive but easy and non-deceptive but hard of Grefenstette (op. cit.); the F2 and F3 functions of Whitley (1992); as well as the NK functions (apud Harvey, 1993) and the Royal Road functions (Forrest et al, op. cit.). The techniques examined include alternative GAs with special characteristics: the Messy GA of Goldberg (1989), the Structured GA of Dasgupta (s.d.), the Augmented GA of Grefenstette (ibidem) and the GAs of Paredis (1996b). The importance of a correct choice of parameters is also discussed. The study of measures classifies them into two types: probabilistic measures and measures based on landscapes. The considerations of Koza (1994) and Altenberg (op. cit.) are also discussed. Special emphasis is given to the FDC (Fitness Distance Correlation) measure, proposed by Jones and Forrest (1995b). The case study consists of the analysis of the behavior of GAs via the FDC measure, applied to a set of mathematical functions. The environment used is GENEsYs 1.0, developed by Thomas Bäck (1992) over the Genesis of Grefenstette; GENEsYs 1.0 was adapted and extended to fulfill the requirements of this work.
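The FDC measure highlighted above admits a compact definition: the sample correlation between fitness values and their distances to the nearest global optimum. A sketch, using the easy onemax landscape as a worked example (the example problem is our choice, not one from the dissertation):

```python
import math

def fdc(fitness, distance):
    """Fitness Distance Correlation (Jones & Forrest): the sample
    correlation between fitness values and distances to the nearest global
    optimum. For maximization, values near -1 suggest an easy landscape
    (fitness rises as the optimum gets closer); values near +1 suggest a
    misleading, deception-prone one."""
    n = len(fitness)
    mf = sum(fitness) / n
    md = sum(distance) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(fitness, distance)) / n
    sf = math.sqrt(sum((f - mf) ** 2 for f in fitness) / n)
    sd = math.sqrt(sum((d - md) ** 2 for d in distance) / n)
    return cov / (sf * sd)

# Onemax over all 3-bit strings: fitness = number of ones, distance =
# Hamming distance to the all-ones optimum. Perfect anti-correlation.
points = [(b.count("1"), 3 - b.count("1"))
          for b in (format(i, "03b") for i in range(8))]
r = fdc([f for f, _ in points], [d for _, d in points])
```

Here r is exactly -1, the signature of a maximally easy problem; deceptive functions push r toward +1.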
628

Comparison of Routing and Network Coding in Group Communications

Xu, Yangyang 24 March 2009 (has links)
In traditional communication networks, information is delivered as a sequence of packets from source to destination by routing through intermediate nodes, which only store and forward those packets. Recent research shows that routing alone is not sufficient to achieve the maximum information transmission rate across a communication network [1]. Network coding is an actively researched topic in information theory that allows nodes to generate output data by encoding their received data. Thus, nodes may mix the input packets together and send them out as fewer packets. The potential throughput benefit was the initial motivation for research in network coding. Group communications refers to many-to-many communication sessions in which multiple sources multicast independent data to the same group of receivers. Researchers typically treat group communications as a simple problem by adding a super source connected to all the sources with unbounded-capacity links. However, this method cannot control the fairness between different sources, and it may be incorrect in some scenarios. In this research, we present an example to illustrate this and analyze the reason for it. The maximum multicast throughput problem using routing only is NP-complete. Wu et al. introduced a greedy tree-packing algorithm based on Prim's algorithm as a sub-optimal alternative [2]. This algorithm is modified in this work for the group communications problem with routing in undirected networks. The throughput benefit of network coding has been shown in directed networks. However, in undirected networks, researchers have only investigated the multiple unicast sessions problem and the single multicast session problem. In most cases, network coding does not seem to yield any throughput benefit [3] [4]. Li et al. introduced a c-flow algorithm using linear programming to find the maximum throughput of a single multicast session using network coding [3]. 
We adapted this algorithm for group communications with network coding in undirected networks to overcome the disadvantages of the traditional method. Both algorithms were simulated using MATLAB and their results were compared. Further, it is demonstrated that network coding does not have a constant throughput benefit in undirected networks.
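The greedy tree-packing idea referenced above can be sketched roughly as follows (a simplified hypothetical variant: spanning trees rather than multicast Steiner trees, with a highest-capacity-first Prim-style scan; this is not Wu et al.'s actual algorithm):

```python
def greedy_tree_packing(nodes, capacities, rounds=10):
    """Pack spanning trees greedily: grow a tree with a Prim-style scan
    that always adds the highest-residual-capacity edge, send flow at the
    tree's bottleneck rate, subtract it, and repeat. The total packed rate
    is a lower bound on the routing-only multicast throughput."""
    cap = dict(capacities)  # undirected edge (u, v) -> residual capacity

    def residual(u, v):
        return cap.get((u, v), cap.get((v, u), 0.0))

    def consume(u, v, amount):
        key = (u, v) if (u, v) in cap else (v, u)
        cap[key] -= amount

    total = 0.0
    for _ in range(rounds):
        in_tree, tree = {nodes[0]}, []
        while len(in_tree) < len(nodes):
            best = max(((u, v) for u in in_tree for v in nodes
                        if v not in in_tree and residual(u, v) > 0),
                       key=lambda e: residual(*e), default=None)
            if best is None:
                return total  # residual graph disconnected: stop packing
            tree.append(best)
            in_tree.add(best[1])
        rate = min(residual(u, v) for u, v in tree)
        for u, v in tree:
            consume(u, v, rate)
        total += rate
    return total

# Unit-capacity triangle: this greedy packs one unit-rate tree and stops,
# while the optimal fractional packing would achieve 1.5.
packed = greedy_tree_packing(["a", "b", "c"],
                             {("a", "b"): 1.0, ("b", "c"): 1.0,
                              ("a", "c"): 1.0})
```

The triangle example shows why such greedy packing is only sub-optimal: committing whole-bottleneck rates to one tree can exhaust edges that a fractional packing would share across trees.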
629

Learning and development in Kohonen-style self organising maps.

Keith-Magee, Russell January 2001 (has links)
This thesis presents a biologically inspired model of learning and development. This model decomposes the lifetime of a single learning system into a number of stages, analogous to the infant, juvenile, adolescent and adult stages of development in a biological system. This model is then applied to Kohonen's SOM algorithm. In order to better understand the operation of Kohonen's SOM algorithm, a theoretical analysis of self-organisation is performed. This analysis establishes the role played by lateral connections in organisation, and the significance of the Laplacian lateral connections common to many SOM architectures. This analysis of neighbourhood interactions is then used to develop three key variations on Kohonen's SOM algorithm. Firstly, a new scheme for parameter decay, known as Butterworth Step Decay, is presented. This decay scheme provides training times comparable to the best training times possible using traditional linear decay, but precludes the need for a priori knowledge of likely training times. In addition, this decay scheme allows Kohonen's SOM to learn in a continuous manner. Secondly, a method is presented for establishing core knowledge in the fundamental representation of a SOM. This technique is known as Syllabus Presentation. This technique involves using a selected training syllabus to reinforce knowledge known to be significant. A method for developing a training syllabus, known as Percept Masking, is also presented. Thirdly, a method is presented for preventing the loss of trained representations in a continuously learning SOM. This technique, known as Arbor Pruning, involves restricting the weight update process to prevent the loss of significant representations. This technique can be used if the data domain varies within a known set of dimensions. However, it cannot be used to control forgetfulness if dimensions are added to or removed from the data domain.
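For context, a minimal Kohonen SOM training loop with a conventional linearly decaying schedule looks like the following (a generic one-dimensional sketch with invented parameters; Butterworth Step Decay, Syllabus Presentation and Arbor Pruning are the thesis's additions and are not implemented here):

```python
import math
import random

def train_som(data, n_units=10, steps=2000, seed=0):
    """1-D Kohonen SOM on scalar data: find the best-matching unit (BMU),
    then pull it and its lateral neighbours toward the input, using a
    Gaussian neighbourhood with linearly decaying learning rate and
    radius (the schedule the thesis's decay scheme replaces)."""
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(n_units)]
    for t in range(steps):
        x = rng.choice(data)
        frac = t / steps
        lr = 0.5 * (1.0 - frac)                          # learning-rate decay
        radius = max(1.0, (n_units / 2) * (1.0 - frac))  # shrinking radius
        bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
        for i in range(n_units):
            h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
            weights[i] += lr * h * (x - weights[i])
    return weights

weights = train_som([0.0, 0.25, 0.5, 0.75, 1.0])
```

Note the drawback the thesis targets: the linear schedule requires `steps` to be fixed in advance, so learning effectively stops at the end of the schedule rather than continuing indefinitely.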
630

Hardware-based text-to-braille translation

Zhang, Xuan January 2007 (has links)
Braille, as a special written method of communication for the blind, has been globally accepted for years. It gives blind people another chance to learn and communicate more efficiently with the rest of the world. It also makes possible the translation of printed languages into a written language recognisable by blind people. Recently, Braille has experienced declining popularity due to the use of alternative technologies, like speech synthesis. However, as a form of literacy, Braille still plays a significant role in the education of people with visual impairments. With the development of electronic technology, Braille has turned out to be well suited to computer-aided production because of its coded form. Software-based text-to-Braille translation has proved to be a successful solution in Assistive Technology (AT). However, the feasibility and advantages of algorithm reconfiguration based on hardware implementation have rarely been substantially discussed. A hardware-based translation system with algorithm reconfiguration is able to supply greater throughput than a software-based system. Further, it is also envisaged as a single component integrated into a multi-functional Braille system on a chip. / Therefore, this thesis presents the development of a system for text-to-Braille translation implemented in hardware. Differing from most commercial methods, this translator carries out the translation in hardware instead of software. To find a translation algorithm suitable for a hardware-based solution, the history of, and previous contributions to, Braille translation are introduced and discussed. It is concluded that Markov systems, a formal language theory, are highly suitable for application to hardware-based Braille translation. Furthermore, the text-to-Braille algorithm is reconfigured to achieve parallel processing to accelerate the translation speed. 
Characteristics and advantages of Field Programmable Gate Arrays (FPGAs), and the application of the Very High Speed Integrated Circuit Hardware Description Language (VHDL), are introduced to explain how the translation algorithm can be transformed into hardware. Using a Xilinx hardware development platform, the algorithm for text-to-Braille translation is implemented and the structure of the translator is described hierarchically.
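The table-lookup core of uncontracted (Grade 1) text-to-Braille translation can be illustrated in software (the dot patterns for a-e are standard Braille; everything else about the thesis's hardware translator, contraction handling included, is beyond this sketch):

```python
# Standard Braille cells for the letters a-e, encoded as 6-bit patterns
# (bit k set means dot k+1 is raised; dots 1-3 form the left column of the
# cell, dots 4-6 the right). A full translator also needs digits,
# punctuation and, for Grade 2, contraction rules.
BRAILLE_DOTS = {
    "a": 0b000001,  # dot 1
    "b": 0b000011,  # dots 1, 2
    "c": 0b001001,  # dots 1, 4
    "d": 0b011001,  # dots 1, 4, 5
    "e": 0b010001,  # dots 1, 5
}

def to_braille(text):
    """Map lowercase text to a list of 6-bit cell patterns (characters
    outside the table are skipped in this sketch)."""
    return [BRAILLE_DOTS[ch] for ch in text if ch in BRAILLE_DOTS]

cells = to_braille("bad")  # [0b000011, 0b000001, 0b011001]
```

A natural hardware analogue of this lookup is a ROM indexed by character code, which is part of what makes table-driven translation amenable to an FPGA implementation.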
