321

Estratégias evolutivas com mutações governadas por distribuições estáveis / Evolutionary strategies with mutations governed by stable distributions

Gutierrez, Agostinho Benigno Monteiro 19 September 2007 (has links)
Made available in DSpace on 2016-03-15T19:38:05Z (GMT). No. of bitstreams: 1 Agostinho Benigno Monteiro Gutierrez.pdf: 1771214 bytes, checksum: b247e1232736a440c8348a9a5765749a (MD5) Previous issue date: 2007-09-19 / Fundo Mackenzie de Pesquisa / Evolutionary strategies normally use Gaussian distributions to control mutations over real values. Since other kinds of distributions exist in nature and in mathematics, such as those of Cauchy, Lévy and S-Lévy, in addition to infinitely many other stable distributions, it is a natural step to extend the standard approach with an algorithm based on other existing distributions, or even one that chooses a stable distribution in a self-adaptive way. Such an idea is sketched here in the context of populations of individuals that evolve towards the minimum of a test function (namely, the n-dimensional Rastrigin, Rosenbrock valley, Griewangk and Schwefel functions) by means of evolutionary strategies whose mutations are guided by eight specific types of distributions and by a self-adaptive scheme over a subset of the possible stable distributions. During the experiments, a marked influence of the proper choice of distribution family on the search for the global minimum of the test function can be observed. This is due to the diversity in the shapes of the distributions: asymmetric with a long tail (Lévy), and symmetric with various types of tails in the others. The type of distribution is selected by suitably setting four parameters: the stability index (α), skewness (β), scale (γ) and location (δ). These four parameters are part of the chromosome, which also contains the candidate coordinates of the global minimum; the coordinates are mutated according to the chosen distribution. Applying this differentiated mutation in the evolutionary process leads to the global minimum of the chosen test function. The results indicate that the combined use of stable distributions to control the mutations of the coordinates can improve performance with respect to convergence and the consequent determination of the solution, when applied to spatially constrained benchmark functions.
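The mutation scheme described above can be sketched in a few lines. This is a minimal (1+1)-style illustration, assuming scipy's levy_stable parameterization; the parameter values, search box and loop structure are illustrative choices, not the thesis's actual algorithm.

```python
import numpy as np
from scipy.stats import levy_stable

def rastrigin(x):
    # n-dimensional Rastrigin benchmark: global minimum 0 at x = 0.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def mutate(coords, alpha, beta, gamma, delta, rng):
    # Perturb each coordinate with a draw from the stable law
    # S(alpha, beta, gamma, delta); alpha = 2 recovers the Gaussian case.
    step = levy_stable.rvs(alpha, beta, loc=delta, scale=gamma,
                           size=coords.shape, random_state=rng)
    return coords + step

rng = np.random.default_rng(0)
best = rng.uniform(-5.12, 5.12, size=10)   # 10-dimensional start point
best_f = rastrigin(best)
for _ in range(5000):                       # simple (1+1)-ES loop
    child = mutate(best, alpha=1.5, beta=0.0, gamma=0.1, delta=0.0, rng=rng)
    child = np.clip(child, -5.12, 5.12)     # spatially constrained search box
    f = rastrigin(child)
    if f < best_f:
        best, best_f = child, f
print(best_f)
```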
322

Programação evolutiva com distribuição estável adaptativa / Evolutionary programming with adaptive stable distribution

Carvalho, Leopoldo Bulgarelli de 12 September 2007 (has links)
Made available in DSpace on 2016-03-15T19:38:05Z (GMT). No. of bitstreams: 1 Leopoldo Bulgarelli de Carvalho.pdf: 696477 bytes, checksum: f90764d3c257bf63305bda69583c731e (MD5) Previous issue date: 2007-09-12 / Fundo Mackenzie de Pesquisa / Recent work in evolutionary programming has suggested using different stable probability distributions, such as those of Cauchy and Lévy, in the random process associated with mutations, as an alternative to the traditional (and also stable) Normal distribution. The motivation is to improve results in some classes of optimisation problems over those obtained with the Normal distribution. Starting from an algorithm in the literature that uses non-Normal stable distributions, chiefly the version in [Lee and Yao, 2004], this work proposes a new class of algorithms that are self-adaptive with respect to determining the stable distribution parameters most adequate for each problem. The main characteristics of stable distributions, which drive the random mutation processes here, are reviewed first, followed by the approaches described in the literature and the proposed self-adaptive variants. The evaluations relied upon standard benchmark functions from the literature, and comparative performance tests were carried out against the baseline of a standard algorithm using the Normal distribution; the self-adaptive variants were then compared among themselves, and the best of them against the best adaptive algorithm obtained from [Lee and Yao, 2004]. The results indicate numerical and statistical superiority of the stable-distribution-based approach over the Normal baseline. However, it showed no improvement over the adaptive method of [Lee and Yao, 2004], possibly as a consequence of implementation decisions that were not made explicit in that work and had to be made in the present one.
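One way to make the stability index self-adaptive, in the spirit of the abstract, is to let the strategy parameters mutate alongside the object variables. The log-normal scale rule below is the standard ES/EP self-adaptation heuristic, assumed here rather than taken from the thesis; the bounded random walk on alpha and its bounds are likewise illustrative.

```python
import numpy as np
from scipy.stats import levy_stable

def self_adaptive_mutation(coords, alpha, sigma, rng, tau=0.2):
    # Mutate the strategy parameters first (log-normal rule for the
    # scale, a bounded random walk for the stability index), then the
    # object variables with a draw from the resulting stable law.
    sigma_new = sigma * np.exp(tau * rng.standard_normal())
    # Keep alpha inside (0, 2], where the stable law is defined; the
    # lower bound 0.3 is an assumption, not a value from the thesis.
    alpha_new = float(np.clip(alpha + 0.1 * rng.standard_normal(), 0.3, 2.0))
    step = levy_stable.rvs(alpha_new, 0.0, scale=sigma_new,
                           size=coords.shape, random_state=rng)
    return coords + step, alpha_new, sigma_new

rng = np.random.default_rng(2)
x, alpha, sigma = np.zeros(10), 1.8, 1.0
x, alpha, sigma = self_adaptive_mutation(x, alpha, sigma, rng)
```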
323

Acionamento suave do motor de indução bifásico através de eletrônica de potência / Soft starting of two-phase induction motor using power electronics

Neri Junior, Almir Laranjeira 03 July 2005 (has links)
Advisor: Ana Cristina Cavalcanti Lyra / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Made available in DSpace on 2018-08-04T15:55:39Z (GMT). No. of bitstreams: 1 NeriJunior_AlmirLaranjeira_M.pdf: 2063024 bytes, checksum: 66bf0807a15396d41cfa7b41d39c5a12 (MD5) Previous issue date: 2005 / Abstract: This text introduces a new supply topology, based on power electronics, for the two-phase induction motor fed from a single-phase source. The proposed method dispenses with the centrifugal switch and reduces the starting current without greatly increasing cost. The machine is analysed in both directions: as an electrical load on a distribution network and as a mechanical drive. The conventional characteristics of the machine are presented, with an evaluation of its flux behaviour and the mathematical equations that model the simulated system. The most common starting methods are evaluated, along with several power-electronics methods developed in the references; graphs and tables support the comparisons between methods, leading to the conclusion that the proposed starting method offers advantages for this machine / Master's / Automation / Master in Electrical Engineering
324

The adsorptive properties of oligomeric, non-ionic surfactants from aqueous solution

Holland, Kirsten Jane January 1998 (has links)
Surfactants from the 'Triton' range, manufactured by Rohm and Haas, Germany, were used to study the adsorptive behaviour of non-ionic surfactants (of the alkyl polyoxyethylene type) from aqueous solution onto mineral oxide surfaces. The oligomeric distributions of the surfactants were characterised using HPLC. Two gradients were used: a normal phase gradient was used to study the surfactants from non-aqueous solution; an unusual gradient, which could not be definitively categorised as either normal or reversed phase and which was developed at Brunel, was used to analyse surfactants directly from aqueous solution. Quartz was used as a model mineral oxide surface. The quartz surface was characterised using a range of techniques: scanning electron microscopy (SEM), X-ray photoelectron spectroscopy, X-ray fluorescence analysis, Fourier transform infrared spectroscopy and BET analysis. It was found that washing the quartz with concentrated HCl removed any calcium ions present on the surface and also removed O2- ions. Calcining the sample removed carbonaceous materials from the surface and also caused a decrease in the surface area. The quartz was shown to be non-porous by SEM and BET analysis. The adsorption experiments for this study were carried out using a simple tumbling method, in which known ratios of surfactant in aqueous solution and quartz silica were mixed together for a known length of time. The amounts of surfactant present were measured using ultraviolet analysis and the HPLC techniques mentioned above. It was found that the smallest oligomers were adsorbed the most. Addition of salt to the system caused an overall increase in adsorption of the bulk surfactant, and an increase in temperature caused an initial decrease in adsorbed amounts before the plateau of the isotherm and a final increase in bulk adsorption at the plateau. The oligomeric adsorption generally appeared to mirror the behaviour of the bulk surfactant. Atomic force microscopy (AFM), dynamic light scattering and neutron scattering studies were used to analyse the character of the adsorbed surfactant layer. It was shown that the layer reached a finite thickness that corresponded to a bilayer of adsorbed surfactant. According to the AFM data, this value of thickness was not consistent over the whole of the quartz surface.
325

Réactions de fusion entre ions lourds par effet tunnel quantique : le cas des collisions entre calcium et nickel / Heavy-ion fusion reactions through quantum tunneling : collisions between calcium and nickel isotopes

Bourgin, Dominique 26 September 2016 (has links)
Heavy-ion fusion-evaporation and nucleon transfer reactions at energies close to the Coulomb barrier play an essential role in the study of nuclear structure and reaction dynamics. In the framework of this PhD thesis, two fusion-evaporation and nucleon transfer experiments were performed at the Laboratori Nazionali di Legnaro in Italy: 40Ca+58Ni and 40Ca+64Ni. In the first experiment, fusion cross sections for 40Ca+58,64Ni were measured from above to below the Coulomb barrier and interpreted by means of coupled-channels and Time-Dependent Hartree-Fock (TDHF) calculations. The results show the importance of the one-phonon octupole excitation in the 40Ca nucleus and of the one-phonon quadrupole excitations in the 58Ni and 64Ni nuclei, as well as the importance of the nucleon transfer channels in the neutron-rich system 40Ca+64Ni. In a complementary experiment, nucleon transfer probabilities for 40Ca+58,64Ni were measured in the same energy region and interpreted by performing TDHF+BCS calculations. The results confirm the importance of nucleon transfer channels in 40Ca+64Ni. A simultaneous description of the nucleon transfer probabilities and the fusion cross sections was performed for both reactions using a coupled-channels approach.
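The coupled-channels calculations themselves cannot be reproduced from the abstract, but the overall shape of near- and sub-barrier fusion excitation functions is captured by the standard one-barrier Wong formula, sketched below. The barrier height, curvature and radius are illustrative placeholders, not fitted 40Ca+58,64Ni values.

```python
import numpy as np

def wong_cross_section(E, VB=73.0, hbar_omega=3.5, RB=10.0):
    # Wong formula: sigma(E) = (hw * RB^2 / 2E) * ln(1 + exp(2*pi*(E - VB)/hw)),
    # which tends to pi * RB^2 * (1 - VB/E) well above the barrier and decays
    # exponentially below it. E, VB, hbar_omega in MeV; RB in fm.
    sigma_fm2 = (hbar_omega * RB**2 / (2.0 * E)) * np.log1p(
        np.exp(2.0 * np.pi * (E - VB) / hbar_omega))
    return sigma_fm2 * 10.0  # 1 fm^2 = 10 mb

for E in (68.0, 70.0, 73.0, 80.0):
    print(f"E = {E:5.1f} MeV  sigma = {wong_cross_section(E):9.3f} mb")
```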
326

Post Disturbance Coral Populations: Patterns in Live Cover and Colony Size Classes from Transect Studies in Two Oceans

Dolphin, Claire A. 08 January 2014 (has links)
This study analyzes data acquired in French Polynesia (Pacific) and The Bahamas (Atlantic), two regions affected by recent, well-documented, sequential disturbances. For the purposes of this study, a disturbance is defined as a perturbation of environmental, physical or biological conditions that causes a distinct change in the ecosystem. After several decades of coral bleaching events, biological change, and anthropogenic impacts, rapid assessments of the coral community were accomplished by collecting photo-transects across the reefs to extract the size structure of the corals and percent live tissue cover, and to perform a faunal evaluation. Cluster analyses and spatial autocorrelation tests were performed to examine community structure and dynamics at both locations. All multivariate analyses pointed to a disturbed ecosystem, and the lack of spatial correlation indicated the impact of a local disturbance over that of a regional event. In assessing the spatial coral community structure, different responses to large versus small scales of disturbance were found, emphasizing the importance of tailoring coral reef management to specific impacts. These two distinct regions were shown to have correlated spatial response patterns to sequential disturbances, supporting the idea of community pattern signatures for different scales of disturbance and the need for an adjustment in management protocols.
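The abstract does not name the autocorrelation statistic used; Moran's I is a common choice for transect data, so the sketch below assumes it. The quadrat values and neighbour weights are toy data, not the study's measurements.

```python
import numpy as np

def morans_i(values, weights):
    # Moran's I for one variable (e.g. percent live coral cover per
    # quadrat) under a spatial weight matrix: (n/W) * (z' W z) / (z' z).
    # Values near 0 indicate no spatial autocorrelation, consistent
    # with a local rather than regional driver of disturbance.
    z = values - values.mean()
    n, W = len(values), weights.sum()
    return (n / W) * (z @ weights @ z) / (z @ z)

# Toy example: 5 quadrats along a transect, adjacent pairs weighted 1.
cover = np.array([12.0, 15.0, 40.0, 8.0, 22.0])
w = np.zeros((5, 5))
for i in range(4):
    w[i, i + 1] = w[i + 1, i] = 1.0
print(morans_i(cover, w))
```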
327

Phase Characterization Of Partial Discharge Distributions In An Oil-Pressboard Insulation System

Raja, K 10 1900 (has links) (PDF)
No description available.
328

Habitat Loss and Avian Range Dynamics through Space and Time

Desrochers, Rachelle January 2011 (has links)
The species–area relationship (SAR) has been applied to predict species richness declines as area is converted to human-dominated land covers. In many areas of the world, however, many species, including threatened species, persist in human-dominated areas. Because SARs are decelerating (nonlinear), small extents of natural habitat can be converted to human use with little expected loss of associated species, but with the addition of more species that are associated with human land uses. Decelerating SARs suggest that, as area is converted to human-dominated forms, more species will be added to the rare habitat than are lost from the common one. This should lead to a peaked relationship between richness and natural area. I found that the effect of natural area on avian richness across Ontario was consistent with the sum of SARs for natural habitat species and human-dominated habitat species, suggesting that almost half the natural area can be converted to human-dominated forms before richness declines. However, this spatial relationship did not remain consistent through time: bird richness increased when natural cover was removed (up to 4%), irrespective of its original extent. The inclusion of metapopulation processes in predictive models of species presence dramatically improves predictions of diversity change through time. Variability in site occupancy was common among the bird species evaluated in this study, likely resulting from local extinction-colonization dynamics. Likelihood of species presence declined when few neighbouring sites were previously occupied by the species. Site occupancy was also less likely when little suitable habitat was present. Consistent with expectations that larger habitats are easier targets for colonists, habitat area was more important for more isolated sites. Accounting for the effect of metapopulation dynamics on site occupancy predicted change in richness better than land cover change did, and it increased the strength of the regional richness–natural area relationship to levels observed for continental richness–environment relationships, suggesting that these metapopulation processes “scale up” to modify regional species richness patterns, making them more difficult to predict. It is the existence of absences in otherwise suitable habitat within species’ ranges that appears to weaken regional richness–environment relationships.
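The peaked-richness argument is easy to see numerically: summing a decelerating power-law SAR for natural-habitat species and one for human-habitat species yields an interior maximum. The c and z values below are illustrative, not the Ontario-fitted ones.

```python
import numpy as np

def total_richness(natural_frac, c_nat=50.0, c_hum=35.0, z=0.25):
    # Power-law SAR S = c * A^z for each species pool; landscape area is
    # normalized to 1, so human-dominated area is 1 - natural_frac.
    # Because z < 1, both terms are concave and their sum peaks at an
    # interior natural fraction rather than at full natural cover.
    return c_nat * natural_frac**z + c_hum * (1.0 - natural_frac)**z

a = np.linspace(0.0, 1.0, 101)
s = total_richness(a)
print("richness peaks at natural fraction ~", a[np.argmax(s)])
```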
329

Směsi pravděpodobnostních rozdělení / Mixture distributions

Nedvěd, Jakub January 2012 (has links)
The objective of this thesis is to construct a mixture model of the earnings of Czech households. The first part describes the characteristics of mixtures of statistical distributions, with a focus on mixtures of normal distributions. The practical part constructs models whose parameter estimates are based on data from EU-SILC. The models are built by a graphical method, by the EM algorithm and by the method of maximum likelihood, and their quality is measured by the Akaike information criterion.
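A sketch of the model-selection loop the abstract describes, using scikit-learn's EM implementation: fit normal mixtures with an increasing number of components and keep the AIC minimum. The synthetic two-group sample stands in for the EU-SILC microdata, which are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic stand-in for household earnings: two normal groups.
income = np.concatenate([rng.normal(20_000, 4_000, 700),
                         rng.normal(45_000, 9_000, 300)]).reshape(-1, 1)

# Fit mixtures of 1..5 normal components by EM; pick the AIC minimum.
models = [GaussianMixture(n_components=k, random_state=0).fit(income)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.aic(income))
print(best.n_components, best.means_.ravel())
```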
330

Path integration with non-positive distributions and applications to the Schrödinger equation

Nathanson, Ekaterina Sergeyevna 01 July 2014 (has links)
In 1948, Richard Feynman published the first paper on his new approach to non-relativistic quantum mechanics. Before Feynman's work there were two mathematical formulations of quantum mechanics. Schrödinger's formulation was based on a PDE (the Schrödinger equation) and the representation of states by wave functions, so it sat within the framework of analysis and differential equations. The other formulation was Heisenberg's matrix algebra. Initially, they were thought to be competing: the proponents of one claimed that the other was “wrong.” Within a couple of years, John von Neumann had proved that they are equivalent. Although Feynman's theory was not fundamentally new, it nonetheless offered an entirely fresh and different perspective: via a precise formulation of Bohr's correspondence principle, it made quantum mechanics similar to classical mechanics in a precise sense. In addition, Feynman's approach made it possible to explain physical experiments and, via diagrams, link them directly to computations. What resulted was a very powerful device for computing energies and scattering amplitudes: the famous Feynman diagrams. In his formulation, Feynman aimed at representing the solution to the non-relativistic Schrödinger equation as an “average” over histories or paths of a particle. This solution is commonly known as the Feynman path integral. It plays an important role in the theory but appears as a postulate based on intuition coming from physics rather than as a justified mathematical object. This is why Feynman's vision has caught the attention of many mathematicians as well as physicists. The papers of Gelfand, Cameron, and Nelson are among the first, and most substantial, attempts to supply Feynman's theory with a rigorous mathematical foundation. These attempts were followed by many others, but unfortunately none of them was quite satisfactory. The difficulty comes from the need to define a measure on an infinite-dimensional space of continuous functions representing all possible paths of a particle. This Feynman measure has to produce an integral with the properties requested by Feynman. In particular, the expression for the Feynman measure has to involve the non-absolutely integrable Fresnel integrands. The non-absolute integrability of the Fresnel integrands makes the measure fail to be positive and to be countably additive. Thus, a well-defined measure in the case of the Feynman path integral does not exist. Extensive research has been done on methods of relating the Feynman path integral to the integral with respect to the Wiener measure. The method of analytic continuation in mass defines the Feynman path integral as a certain limit of Wiener integrals. Unfortunately, this method can serve as a definition for only almost all values of the mass parameter in the Schrödinger equation. For physicists, this is not a satisfactory result and needs to be improved. In this work we examine the questions which originally led to the Feynman path integral. By now we know that Feynman's “dream” cannot be realized as a positive and countably additive measure on the path-space. Here, we offer a new way out by modifying Feynman's question, and thereby achieving a solution to the Schrödinger equation via a different kind of average in the path-space. We give our version of the question that Feynman “should have asked” in order to realize the elusive path integral.
In our formulation, we get a Feynman path integral as a limit of linear functionals, as opposed to the more familiar inductive limits of positive measures traditionally used for constructing the Wiener measure and related Gaussian families. We adapt here an approach pioneered by Patrick Muldowney, who suggested a Henstock integration technique for dealing with the non-absolute integrability of the kind of Fresnel integrals that arise in our solution to Feynman's question. By applying Henstock's theory to Fresnel integrals, we construct a complex-valued “probability distribution function” on the path-space. We then use this “probability” distribution function to define the Feynman path integral as an inductive limit. This establishes a mathematically rigorous Feynman limit and, at the same time, preserves Feynman's intuitive idea in the resulting functional. In addition, our definition, and our solution, place no restrictions on any of the parameters in the Schrödinger equation, and have the potential to offer useful computational experiments and other theoretical insights.
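For orientation, the “average over histories” the abstract refers to is standardly written as the time-sliced limit below; this is the textbook form, not a formula taken from the thesis itself. The oscillatory Gaussian factors exp[im(x_{j+1}-x_j)^2/2ħε] appearing in it are precisely the non-absolutely-integrable Fresnel integrands that obstruct a positive, countably additive path-space measure.

```latex
K(x_b, t;\, x_a, 0) \;=\; \lim_{N \to \infty}
\Bigl( \frac{m}{2\pi i \hbar \varepsilon} \Bigr)^{N/2}
\int \prod_{j=1}^{N-1} dx_j \;
\exp\!\Biggl\{ \frac{i}{\hbar} \sum_{j=0}^{N-1} \varepsilon
\Bigl[ \frac{m}{2} \Bigl( \frac{x_{j+1} - x_j}{\varepsilon} \Bigr)^{2}
- V(x_j) \Bigr] \Biggr\},
\qquad \varepsilon = \frac{t}{N}, \quad x_0 = x_a, \quad x_N = x_b.
```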
