321

Programação evolutiva com distribuição estável adaptativa / Evolutionary programming with an adaptive stable distribution

Carvalho, Leopoldo Bulgarelli de 12 September 2007
Fundo Mackenzie de Pesquisa / Recent applications in evolutionary programming have suggested the use of different stable probability distributions, such as the Cauchy and Lévy distributions, in the random process associated with mutations, as an alternative to the traditional (and also stable) Normal distribution. The motivation is to improve results in some classes of optimisation problems over those obtained with the Normal distribution. Starting from an algorithm proposed in the literature, chiefly its version in [Lee and Yao, 2004], which uses non-Normal stable distributions, this work proposes a new class of algorithms that are self-adaptive with respect to the determination of the most adequate stable distribution parameters for each optimisation problem. The main characteristics of stable distributions, which here drive the random processes associated with mutations, were studied first; the different approaches described in the literature were then reviewed and algorithms with self-adaptive characteristics were proposed. The evaluations relied upon standard benchmark functions from the literature, and comparative performance tests were carried out against a baseline algorithm using the Normal distribution; the self-adaptive approaches defined here were also compared among themselves, and the best of them was compared with the best adaptive algorithm of [Lee and Yao, 2004]. The results suggest numerical and statistical superiority of the stable-distribution-based approach over the Normal baseline. However, the proposed method showed no improvement over the adaptive method of [Lee and Yao, 2004], possibly as a consequence of implementation decisions that were not made explicit in that work and had to be taken in the present implementation.
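For orientation, the sketch below shows what an evolutionary-programming loop with stable-distribution mutations and a self-adapted stability index can look like. It is a hypothetical illustration only, not the algorithm of this thesis or of [Lee and Yao, 2004]: the sphere objective, the clipping range for alpha, the learning rate and the use of scipy's levy_stable sampler are all assumptions made for the example.
```python
# Minimal, hypothetical sketch of evolutionary programming whose mutations are
# drawn from a stable distribution, with the stability index alpha self-adapted
# per individual. Not the algorithm of the thesis or of [Lee and Yao, 2004];
# the objective, learning rate, and alpha update are assumptions.
import numpy as np
from scipy.stats import levy_stable

def sphere(x):
    """Benchmark objective to minimise."""
    return float(np.sum(x ** 2))

def evolve(obj=sphere, dim=10, pop_size=20, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(pop_size, dim))    # candidate solutions
    sigma = np.full((pop_size, dim), 0.5)               # per-component step sizes
    alpha = np.full(pop_size, 1.5)                      # stability index in (0, 2]
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(dim))             # usual EP learning rate

    for _ in range(generations):
        # Self-adapt the strategy parameters, then mutate with a symmetric
        # (beta = 0) stable distribution using each offspring's own alpha.
        sigma_c = sigma * np.exp(tau * rng.standard_normal(sigma.shape))
        alpha_c = np.clip(alpha + 0.1 * rng.standard_normal(pop_size), 0.6, 2.0)
        steps = np.array([levy_stable.rvs(a, 0.0, size=dim, random_state=rng)
                          for a in alpha_c])
        x_c = x + sigma_c * steps

        # (mu + mu) selection: keep the best pop_size individuals overall.
        all_x = np.vstack([x, x_c])
        all_sigma = np.vstack([sigma, sigma_c])
        all_alpha = np.concatenate([alpha, alpha_c])
        fitness = np.array([obj(v) for v in all_x])
        best = np.argsort(fitness)[:pop_size]
        x, sigma, alpha = all_x[best], all_sigma[best], all_alpha[best]

    return x[0], obj(x[0])

if __name__ == "__main__":
    best_x, best_f = evolve()
    print("best objective value:", best_f)
```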
322

Acionamento suave do motor de indução bifásico através de eletrônica de potência / Soft starting of a two-phase induction motor using power electronics

Neri Junior, Almir Laranjeira 03 July 2005
Advisor: Ana Cristina Cavalcanti Lyra / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: This work introduces a new power-electronics-based starting method for the two-phase induction motor with single-phase supply, which dispenses with the centrifugal switch and reduces the machine's starting current without greatly increasing cost. The motor's behaviour is analysed in both directions: as an electrical load seen by the distribution network and as a mechanical drive. The conventional characteristics of this electrical machine are presented, with an evaluation of the flux behaviour and an explanation of the mathematical equations that model the simulated system. The most common starting methods are evaluated, as well as several power-electronics methods developed in the literature. Graphs and tables support the comparisons between methods, leading to the conclusion that the proposed method brings clear advantages for this machine. / Master's degree / Automation / Master in Electrical Engineering
323

The adsorptive properties of oligomeric, non-ionic surfactants from aqueous solution

Holland, Kirsten Jane January 1998
Surfactants from the 'Triton' range, manufactured by Rohm and Haas, Germany, were used to study the adsorptive behaviour of non-ionic surfactants (of the alkyl polyoxyethylene type) from aqueous solution onto mineral oxide surfaces. The oligomeric distributions of the surfactants were characterised by HPLC. Two gradients were used: a normal-phase gradient to study the surfactants from non-aqueous solution, and an unusual gradient developed at Brunel, which could not be definitively categorised as either normal or reversed phase, to analyse surfactants directly from aqueous solution. Quartz was used as a model mineral oxide surface. The quartz surface was characterised using a range of techniques: scanning electron microscopy (SEM), X-ray photoelectron spectroscopy, X-ray fluorescence analysis, Fourier transform infrared spectroscopy and BET analysis. It was found that washing the quartz with concentrated HCl removed any calcium ions present on the surface and also removed O2- ions. Calcining the sample removed carbonaceous material from the surface and also decreased the surface area. The quartz was shown to be non-porous by SEM and BET analysis. The adsorption experiments for this study were carried out using a simple tumbling method, in which known ratios of surfactant in aqueous solution and quartz silica were mixed together for a known length of time. The amounts of surfactant present were measured using ultraviolet analysis and the HPLC techniques mentioned above. It was found that the smallest oligomers were adsorbed the most. Addition of salt to the system caused an overall increase in adsorption of the bulk surfactant, while an increase in temperature caused an initial decrease in adsorbed amounts before the plateau of the isotherm and a final increase in bulk adsorption at the plateau. The oligomeric adsorption generally appeared to mirror the behaviour of the bulk surfactant. Atomic force microscopy (AFM), dynamic light scattering and neutron scattering studies were used to analyse the character of the adsorbed surfactant layer. It was shown that the layer reached a finite thickness corresponding to a bilayer of adsorbed surfactant. According to the AFM data, this thickness was not consistent over the whole of the quartz surface.
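As a side note, the "known ratios mixed for a known length of time" procedure described above is the standard solution-depletion approach to adsorption measurement: the adsorbed amount is inferred from the drop in bulk concentration. The sketch below illustrates that calculation only; the concentrations, volume, mass and surface area are made-up numbers, not data from this work.
```python
# Illustrative solution-depletion calculation for adsorption from solution:
# the adsorbed amount is inferred from the drop in bulk concentration after
# equilibration with a known mass of solid of known specific surface area.
# All numbers below are made up for illustration, not data from the thesis.
import numpy as np

def surface_excess(c0_mM, ceq_mM, volume_mL, mass_g, area_m2_per_g):
    """Adsorbed amount per unit area (micromol / m^2)."""
    depleted_umol = (c0_mM - ceq_mM) * volume_mL      # mM * mL = micromol
    total_area_m2 = mass_g * area_m2_per_g
    return depleted_umol / total_area_m2

# A toy isotherm: initial vs. measured equilibrium concentrations.
c0 = np.array([0.05, 0.10, 0.20, 0.40, 0.80])          # mM
ceq = np.array([0.02, 0.05, 0.13, 0.31, 0.70])         # mM (hypothetical UV/HPLC readings)
gamma = surface_excess(c0, ceq, volume_mL=20.0, mass_g=1.0, area_m2_per_g=4.0)
for c, g in zip(ceq, gamma):
    print(f"C_eq = {c:.2f} mM  ->  Gamma = {g:.3f} umol/m^2")
```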
324

Réactions de fusion entre ions lourds par effet tunnel quantique : le cas des collisions entre calcium et nickel / Heavy-ion fusion reactions through quantum tunneling : collisions between calcium and nickel isotopes

Bourgin, Dominique 26 September 2016
Heavy-ion fusion-evaporation and nucleon transfer reactions at energies close to the Coulomb barrier play an essential role in the study of nuclear structure and reaction dynamics. In the framework of this PhD thesis, two fusion-evaporation and nucleon transfer experiments were performed at the Laboratori Nazionali di Legnaro in Italy: 40Ca+58Ni and 40Ca+64Ni. In a first experiment, fusion cross sections for 40Ca+58,64Ni were measured from above to below the Coulomb barrier and were interpreted by means of coupled-channels and time-dependent Hartree-Fock (TDHF) calculations. The results show the importance of the one-phonon octupole excitation in the 40Ca nucleus and the one-phonon quadrupole excitations in the 58Ni and 64Ni nuclei, as well as the importance of nucleon transfer channels in the neutron-rich system 40Ca+64Ni. In a complementary experiment, nucleon transfer probabilities for 40Ca+58,64Ni were measured in the same energy region and were interpreted by performing TDHF+BCS calculations. The results confirm the importance of nucleon transfer channels in 40Ca+64Ni. A simultaneous description of the nucleon transfer probabilities and the fusion cross sections was obtained for both reactions, using a coupled-channels approach.
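For context on the fusion cross sections discussed above, a common single-barrier reference point is Wong's formula, against which coupled-channels enhancements below the barrier are often gauged. The sketch below implements that textbook formula only; the barrier height, radius and curvature are placeholder values, not parameters fitted to 40Ca+58,64Ni in this thesis.
```python
# Wong's single-barrier approximation for heavy-ion fusion cross sections,
# often quoted as a baseline against which coupled-channels enhancements are
# measured. Generic textbook formula; the barrier parameters below are
# placeholders, not fitted values for 40Ca + 58,64Ni from the thesis.
import numpy as np

def wong_cross_section(E_MeV, Vb_MeV, Rb_fm, hbar_omega_MeV):
    """Fusion cross section in millibarn at centre-of-mass energy E."""
    E = np.asarray(E_MeV, dtype=float)
    sigma_fm2 = (hbar_omega_MeV * Rb_fm ** 2 / (2.0 * E)) * \
        np.log1p(np.exp(2.0 * np.pi * (E - Vb_MeV) / hbar_omega_MeV))
    return sigma_fm2 * 10.0                      # 1 fm^2 = 10 mb

# Placeholder barrier: height ~73 MeV, radius ~10 fm, curvature ~3.5 MeV.
energies = np.linspace(65.0, 85.0, 5)
for E, s in zip(energies, wong_cross_section(energies, 73.0, 10.0, 3.5)):
    print(f"E_cm = {E:5.1f} MeV   sigma ~ {s:8.2f} mb")
```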
325

Post Disturbance Coral Populations: Patterns in Live Cover and Colony Size Classes from Transect Studies in Two Oceans

Dolphin, Claire A. 08 January 2014
This study analyzes data acquired in French Polynesia (Pacific) and The Bahamas (Atlantic), two regions affected by recent, well-documented, sequential disturbances. For the purposes of this study, a disturbance is defined as a perturbation of environmental, physical or biological conditions that causes a distinct change in the ecosystem. After several decades of coral bleaching events, biological change, and anthropogenic impacts, rapid assessments of the coral community were made by collecting photo-transects across the reefs to extract the size structure of the corals and the percent live tissue cover, and to perform a faunal evaluation. Cluster analyses and spatial autocorrelation tests were carried out to examine community structure and dynamics at both locations. All multivariate analyses pointed to a disturbed ecosystem, and the lack of spatial correlation indicated the impact of a local disturbance over that of a regional event. In assessing the spatial coral community structure, different responses to large versus small scales of disturbance were found, emphasizing the importance of tailoring coral reef management to specific impacts. These two distinct regions were shown to have correlated spatial response patterns to sequential disturbances, supporting the idea of community pattern signatures for different scales of disturbance and the need to adjust management protocols.
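The spatial autocorrelation tests mentioned above are typically based on a statistic such as Moran's I. The sketch below shows a minimal Moran's I computation; the inverse-distance weighting and the toy live-cover values are assumptions made for illustration, not the thesis's analysis or data.
```python
# Minimal Moran's I, the usual test statistic for spatial autocorrelation of a
# variable (e.g. percent live coral cover) observed at a set of transect sites.
# Generic formula for illustration; the weighting scheme and the toy data are
# assumptions, not the analysis or data of the thesis.
import numpy as np

def morans_i(values, weights):
    """values: (n,) observations; weights: (n, n) spatial weights, zero diagonal."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()
    num = (w * np.outer(z, z)).sum()
    den = (z ** 2).sum()
    return (n / w.sum()) * num / den

# Toy example: 6 sites along a transect, inverse-distance weights between sites.
coords = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
cover = np.array([40.0, 38.0, 35.0, 12.0, 10.0, 9.0])   # hypothetical % live cover
dist = np.abs(coords[:, None] - coords[None, :])
w = np.where(dist > 0, 1.0 / dist, 0.0)
print("Moran's I =", round(morans_i(cover, w), 3))       # positive => spatial clustering
```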
326

Phase Characterization Of Partial Discharge Distributions In An Oil-Pressboard Insulation System

Raja, K 10 1900
No description available.
327

Habitat Loss and Avian Range Dynamics through Space and Time

Desrochers, Rachelle January 2011
The species–area relationship (SAR) has been applied to predict species richness declines as area is converted to human-dominated land covers. In many areas of the world, however, many species, including threatened species, persist in human-dominated areas. Because SARs are decelerating (nonlinear), small extents of natural habitat can be converted to human use with little expected loss of associated species, but with the addition of more species that are associated with human land uses. Decelerating SARs suggest that, as area is converted to human-dominated forms, more species will be added to the rare habitat than are lost from the common one. This should lead to a peaked relationship between richness and natural area. I found that the effect of natural area on avian richness across Ontario was consistent with the sum of SARs for natural habitat species and human-dominated habitat species, suggesting that almost half the natural area can be converted to human-dominated forms before richness declines. However, this spatial relationship did not remain consistent through time: bird richness increased when natural cover was removed (up to 4%), irrespective of its original extent. The inclusion of metapopulation processes in predictive models of species presence dramatically improves predictions of diversity change through time. Variability in site occupancy was common among the bird species evaluated in this study, likely resulting from local extinction-colonization dynamics. The likelihood of species presence declined when few neighbouring sites were previously occupied by the species. Site occupancy was also less likely when little suitable habitat was present. Consistent with the expectation that larger habitats are easier targets for colonists, habitat area was more important for more isolated sites. Accounting for the effect of metapopulation dynamics on site occupancy predicted change in richness better than land cover change did, and increased the strength of the regional richness–natural area relationship to levels observed for continental richness–environment relationships, suggesting that these metapopulation processes “scale up” to modify regional species richness patterns, making them more difficult to predict. It is the existence of absences in otherwise suitable habitat within species' ranges that appears to weaken regional richness–environment relationships.
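The peaked richness–natural area relationship described above follows directly from summing two decelerating SARs, one for natural-habitat species and one for species of human-dominated land. The sketch below makes that argument numerically; the power-law form S = cA^z and the coefficients are illustrative assumptions, not values fitted to the Ontario data.
```python
# Numerical sketch of the argument above: if richness is the sum of two
# decelerating species-area relationships (S = c * A^z), one for natural-habitat
# species and one for species of human-dominated land, total richness peaks at
# an intermediate amount of natural cover. All coefficients are made up.
import numpy as np

def sar(area, c, z):
    """Power-law species-area relationship."""
    return c * np.power(area, z)

total_area = 100.0                                   # arbitrary landscape size
natural = np.linspace(0.0, total_area, 11)           # natural cover remaining
human = total_area - natural                         # converted cover

richness = sar(natural, c=10.0, z=0.25) + sar(human, c=6.0, z=0.25)
for a, s in zip(natural, richness):
    print(f"natural area {a:5.1f}  ->  expected richness {s:5.1f}")
# Reading from full natural cover downward, richness first rises as some land
# is converted and declines only after a substantial fraction has been lost -
# the peaked richness-natural area relationship described above.
```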
328

Směsi pravděpodobnostních rozdělení / Mixture distributions

Nedvěd, Jakub January 2012
The object of this thesis is to construct a mixture model of the earnings of Czech households. The first part describes the characteristics of mixtures of statistical distributions, with a focus on mixtures of normal distributions. The practical part constructs models whose parameters are estimated from EU-SILC data, using a graphical method, the EM algorithm and the method of maximum likelihood. The quality of the models is assessed with the Akaike information criterion.
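A minimal version of the modelling loop described above, fitting normal mixtures by the EM algorithm for several component counts and selecting by AIC, might look like the following. It uses scikit-learn's GaussianMixture on synthetic data; the EU-SILC sample itself is not reproduced here.
```python
# Minimal sketch of the modelling loop described above: fit normal mixtures with
# the EM algorithm for several component counts and pick the one with the lowest
# Akaike information criterion. Synthetic income-like data stands in for EU-SILC,
# which is not distributed with this example.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Two-component synthetic "household income" sample (amounts are arbitrary).
sample = np.concatenate([
    rng.normal(loc=25_000, scale=5_000, size=700),
    rng.normal(loc=55_000, scale=12_000, size=300),
]).reshape(-1, 1)

best = None
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(sample)
    aic = gm.aic(sample)
    print(f"k = {k}: AIC = {aic:,.0f}")
    if best is None or aic < best[1]:
        best = (k, aic, gm)

k, aic, gm = best
print(f"selected k = {k}; means = {gm.means_.ravel().round(0)}, "
      f"weights = {gm.weights_.round(2)}")
```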
329

Path integration with non-positive distributions and applications to the Schrödinger equation

Nathanson, Ekaterina Sergeyevna 01 July 2014
In 1948, Richard Feynman published the first paper on his new approach to non-relativistic quantum mechanics. Before Feynman's work there were two mathematical formulations of quantum mechanics. Schrödinger's formulation was based on a PDE (the Schrödinger equation) and the representation of states by wave functions, so it was in the framework of analysis and differential equations. The other formulation was Heisenberg's matrix algebra. Initially, they were thought to be competing. The proponents of one claimed that the other was “wrong.” Within a couple of years, John von Neumann had proved that they are equivalent. Although Feynman's theory was not fundamentally new, it nonetheless offered an entirely fresh and different perspective: via a precise formulation of Bohr's correspondence principle, it made quantum mechanics similar to classical mechanics in a precise sense. In addition, Feynman's approach made it possible to explain physical experiments, and, via diagrams, link them directly to computations. What resulted was a very powerful device for computing energies and scattering amplitudes - the famous Feynman diagrams. In his formulation, Feynman aimed at representing the solution to the non-relativistic Schrödinger equation in the form of an “average” over histories or paths of a particle. This solution is commonly known as the Feynman path integral. It plays an important role in the theory but appears as a postulate based on intuition coming from physics rather than a justified mathematical object. This is why Feynman's vision has caught the attention of many mathematicians as well as physicists. The papers of Gelfand, Cameron, and Nelson are among the first, and most substantial, attempts to supply Feynman's theory with a rigorous mathematical foundation. These attempts were followed by many others, but unfortunately none of them was quite satisfactory. The difficulty comes from the need to define a measure on an infinite-dimensional space of continuous functions that represent all possible paths of a particle. This Feynman measure has to produce an integral with the properties requested by Feynman. In particular, the expression for the Feynman measure has to involve the non-absolutely integrable Fresnel integrands. The non-absolute integrability of the Fresnel integrands makes the measure fail to be positive and countably additive. Thus, a well-defined measure in the case of the Feynman path integral does not exist. Extensive research has been done on methods of relating the Feynman path integral to the integral with respect to the Wiener measure. The method of analytic continuation in mass defines the Feynman path integral as a certain limit of Wiener integrals. Unfortunately, this method can be used as a definition for only almost all values of the mass parameter in the Schrödinger equation. For physicists, this is not a satisfactory result and needs to be improved. In this work we examine those questions which originally led to the Feynman path integral. By now we know that Feynman's “dream” cannot be realized as a positive and countably additive measure on the path-space. Here, we offer a new way out by modifying Feynman's question, and thereby achieving a solution to the Schrödinger equation via a different kind of average in the path-space. We give our version of the question that Feynman “should have asked” in order to realize the elusive path integral.
In our formulation, we get a Feynman path integral as a limit of linear functionals, as opposed to the more familiar inductive limits of positive measures, traditionally used for constructing the Wiener measure, and related Gaussian families. We adapt here an approach pioneered by Patrick Muldowney. In it, Muldowney suggested a Henstock integration technique in order to deal with the non-absolute integrability of the kind of Fresnel integrals which we need in our solution to Feynman's question. By applying Henstock's theory to Fresnel integrals, we construct a complex-valued “probability distribution function” on the path-space. Then we use this “probability” distribution function to define the Feynman path integral as an inductive limit. This establishes a mathematically rigorous Feynman limit and, at the same time, preserves Feynman's intuitive idea in the resulting functional. In addition, our definition, and our solution, do not place any restrictions on any of the parameters in the Schrödinger equation, and have the potential to offer useful computational experiments and other theoretical insights.
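To make the non-absolute integrability mentioned above concrete, the short computation below evaluates the Fresnel-type integral of cos(x²): the signed integral settles toward √(π/8), while the integral of |cos(x²)| grows without bound, which is exactly the obstruction to a positive, countably additive path-space measure. This is a standard calculus illustration, not the dissertation's Henstock construction.
```python
# Numerical illustration of the obstruction discussed above: the Fresnel
# integrand cos(x^2) is only conditionally integrable. The signed integral
# settles toward sqrt(pi/8) ~ 0.6267, while the integral of |cos(x^2)| keeps
# growing, so no positive, countably additive measure can reproduce it.
# Standard calculus fact, not the dissertation's Henstock construction.
import numpy as np

def integrals_up_to(T, n=2_000_000):
    """Trapezoid estimates of the signed and absolute integrals on [0, T]."""
    x = np.linspace(0.0, T, n)
    dx = x[1] - x[0]
    f = np.cos(x ** 2)
    signed = float(np.sum(f[:-1] + f[1:]) * 0.5 * dx)
    absolute = float(np.sum(np.abs(f[:-1]) + np.abs(f[1:])) * 0.5 * dx)
    return signed, absolute

print("limiting value sqrt(pi/8) =", np.sqrt(np.pi / 8.0))
for T in (5.0, 20.0, 80.0):
    signed, absolute = integrals_up_to(T)
    print(f"T = {T:5.1f}:  signed ~ {signed: .4f}   absolute ~ {absolute:8.1f}")
```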
330

Particle size distribution (PSD) equivalency using novel statistical comparators and PBPK input models

Ngeacharernkul, Pratak 01 December 2017
For disperse-system drug formulations, meaningful particle size distribution (PSD) comparators are essential in determining pharmaceutical equivalence and predicting biopharmaceutical equivalence in terms of the effect of particle size on the rate and extent of drug input. In formulation development and licensure, particle size characterization has been applied to establish relationships for bioequivalence of generic pharmaceutical drug products. The current approaches recommended by the US FDA, based on median and span, are not adequate to predict drug product performance or to account for multi-modal PSD properties. The use of a PSD similarity metric, and the development and incorporation of drug release predictions based on PSD properties into PBPK models for various drug administration routes, may provide a holistic approach for evaluating the effect of PSD differences on in vitro release of disperse systems and the resulting pharmacokinetic impact on drug product performance. The objectives of this study are to provide a rational approach for PSD comparisons by 1) developing similarity computations for PSD comparisons and 2) using PBPK models to specifically account for PSD effects on drug input rates via a subcutaneous (SQ) administration route. Two metrics for comparing the PSDs of reference (reference-listed drug product) and test (generic) drug products were investigated, OVL and PROB, alongside the current standard measures of median and span. In addition, release rate profiles of each product pair, simulated from a modified Bikhazi and Higuchi model, were used to compute release-rate comparators such as the similarity factor (f2) and fractional time ratios. A subcutaneous input PBPK model was developed and used to simulate blood concentration-time profiles of reference and test drug products. Pharmacokinetic responses such as AUC, Cmax, and Tmax were compared using standard bioequivalence criteria. PSD comparators, release-rate comparators, and bioequivalence metrics were then related to one another to identify the appropriate approach for a bioequivalence waiver. OVL predicted bioequivalence better than PROB, median, and span. For release profile comparisons, the f2 method was the best predictor of bioequivalence. Using both release-rate (e.g., f2) and PSD (e.g., OVL) comparison metrics significantly improved bioequivalence prediction, to about 90%.
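The two comparators that performed best in this study, f2 for release profiles and OVL for PSDs, have standard published forms, sketched below. The example release profiles and the log-normal PSDs are made-up numbers used only to show the computation, not data from the study.
```python
# Minimal sketch of the two comparators highlighted in the abstract: the FDA
# similarity factor f2 for release profiles and the overlapping coefficient
# (OVL) for particle size distributions. The formulas are the standard
# published ones; the example profiles and distributions are made-up numbers,
# not data from the study.
import numpy as np

def f2_similarity(reference, test):
    """FDA similarity factor; f2 >= 50 is the usual similarity criterion."""
    r, t = np.asarray(reference, float), np.asarray(test, float)
    mean_sq_diff = np.mean((r - t) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + mean_sq_diff))

def ovl(pdf_ref, pdf_test, bin_width):
    """Overlapping coefficient of two discretised PSDs (1.0 = identical)."""
    return float(np.sum(np.minimum(pdf_ref, pdf_test)) * bin_width)

# Hypothetical cumulative % released at matching time points.
released_ref = [12, 30, 55, 74, 88, 95]
released_test = [10, 27, 50, 70, 85, 94]
print("f2 =", round(f2_similarity(released_ref, released_test), 1))

# Hypothetical log-normal PSDs on a common grid of particle sizes (um).
sizes = np.linspace(0.5, 20.0, 200)
dw = sizes[1] - sizes[0]
def lognorm_pdf(x, mu, sigma):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))
print("OVL =", round(ovl(lognorm_pdf(sizes, 1.2, 0.4), lognorm_pdf(sizes, 1.3, 0.45), dw), 3))
```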
