421

Ecological impacts of ash dieback in Great Britain

Hill, Louise January 2017
Ash dieback is a severe disease of ash trees (Fraxinus spp.), caused by the invasive fungus Hymenoscyphus fraxineus. In its native East Asia, H. fraxineus is a harmless endophyte, but since its accidental introduction to Europe in the early 1990s it has infected over 90% of ash trees in some areas, with long-term mortality sometimes exceeding 90%. The disease was discovered in Great Britain in 2012 and has since spread rapidly. This thesis investigates some of the possible impacts on biodiversity, ecosystem functioning, and society, and in doing so identifies ways to alleviate them. Britain has only 13% tree cover (among the lowest in Europe), so it may be particularly vulnerable to ash loss; a better understanding of the effects, and of how to minimise them, is critical to delivering an evidence-based response. First, we investigated impacts in woodlands by experimentally killing woodland ash trees by ring-barking. We found no short-term effect of ash loss on ground flora or earthworm communities, or on the regeneration or growth of other woody species. Observational evidence suggested that remaining canopy trees rapidly filled gaps left by ash, perhaps contributing to stability. Our woodlands appeared to be remarkably resilient to ash loss, although there may be long-term effects, or impacts on other species, that this experiment failed to observe. To investigate broader-scale impacts, we required high-quality abundance maps for ash and other trees across Britain. Using species distribution modelling and random forest regression, we developed a protocol to produce abundance maps from readily available data, and tested their predictive power using cross-validation. Our maps are the best available for the abundance of British tree species and will be useful across a wide range of disciplines. We then used them to model ecosystem vulnerability to ash loss, based on the abundance of ash and other tree species and their ecological trait similarity; we identified the areas at risk of the largest impacts and produced guidance on positive management actions to minimise ecological change. Lastly, we investigated the financial impacts of ash dieback, estimating the total cost to Britain at £9.2 billion. This figure is many times larger than the value of the trade that would be lost if biosecurity were improved enough to prevent future invasions, calling into question the financial arguments against stronger biosecurity. We also found that loss of ecosystem services accounted for less than a third of the total cost, suggesting that ecosystem service assessments may miss a large proportion of the true cost of biodiversity loss. Overall, some impacts, such as local effects on woodland ground flora, may be smaller than expected, while others, such as the economic cost, may be much larger. However, both the resilience of ecosystems to the loss of a common species and the actions available to mitigate the impacts depend on having a diversity of other trees present. The ash dieback outbreak highlights the importance of preventing the introduction of other severe tree pests and diseases, whose rate of arrival has been increasing exponentially, largely due to international trade in trees. This thesis provides further firm evidence that there is an ecological and social imperative to halt this trend.
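As a rough illustration of the mapping protocol described above (random forest regression on readily available covariates, checked by cross-validation), here is a minimal sketch; the file names, column names, and covariates are hypothetical placeholders, not those used in the thesis.

```python
# Illustrative sketch (not the thesis code): predict tree abundance per
# grid cell from environmental covariates with a random forest, and check
# predictive power by cross-validation. All names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical training table: one row per surveyed grid cell.
data = pd.read_csv("survey_cells.csv")          # assumed file
covariates = ["elevation", "rainfall", "soil_ph", "temperature"]
X, y = data[covariates], data["ash_abundance"]  # assumed columns

model = RandomForestRegressor(n_estimators=500, random_state=0)

# k-fold cross-validation estimates how well the map generalises.
scores = cross_val_score(model, X, y, cv=10, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")

# Fit on all surveyed cells, then predict abundance everywhere else.
model.fit(X, y)
grid = pd.read_csv("all_cells.csv")             # assumed file
grid["predicted_abundance"] = model.predict(grid[covariates])
```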
422

Multivariate and spatial extremes: spectral approaches and elliptical models

Opitz, Thomas 30 October 2013
This PhD thesis presents contributions to the modelling of multivariate and spatial extreme values. Through an extension of the pseudo-polar representations commonly used in extreme value theory, we propose a general, unifying approach to modelling extreme-value dependence. The radial variable of these coordinates is obtained by applying a nonnegative, homogeneous function, called the aggregation function, which aggregates a vector into a scalar. The distribution of the angular component is characterized by a so-called angular or spectral measure. We define radial Pareto distributions and an inverted version of these distributions, both motivated within the framework of multivariate regular variation. This flexible class of models allows for modelling extreme values of random vectors whose aggregated variable shows tail decay of the Pareto or inverted-Pareto type. For spatial extreme value analysis, we follow standard methodology in the geostatistics of extremes and focus on bivariate distributions. Original inferential approaches are developed, based on a new representation tool called the spectrogram, composed of the spectral measures characterizing bivariate extremal behaviour. Finally, the so-called spectral construction of the max-stable limit process of elliptical processes, known as the extremal-t process, is presented. We also discuss inference and explore simulation methods for max-stable processes and the corresponding Pareto processes. The practical utility of the proposed models and methods is illustrated through applications to environmental and financial data.
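A minimal sketch, in our own notation (not necessarily the thesis's), of the pseudo-polar construction the abstract refers to:

```latex
% Aggregation function \ell: nonnegative and homogeneous of order 1,
% e.g. \ell(x) = \max_i x_i or \ell(x) = x_1 + \dots + x_d. Then
R = \ell(X), \qquad W = \frac{X}{\ell(X)},
% where R is the radial (aggregated) variable and the law of the angular
% component W is the angular (spectral) measure. A radial Pareto model
% then posits
\Pr(R > r) = r^{-\alpha}, \qquad r \ge 1,
% consistent with multivariate regular variation of X.
```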
423

Probability distributions in the unit interval

Francimário Alves de Lima 16 March 2018
The beta distribution is the most frequently used for modelling continuous data observed in the unit interval, such as rates and proportions. Although flexible, admitting varied shapes such as J, inverted J, U, and unimodal, it is not suitable in all practical situations. In this dissertation we review continuous distributions on the unit interval, encompassing the beta, Kumaraswamy, simplex, unit gamma, and rectangular beta distributions. We also address a wide class of distributions obtained by transformations (Smithson and Merkle, 2013). In particular, we focus on two subclasses: one presented and studied by Lemonte and Bazán (2015), which we call the logit class of distributions, and another that we call the logit skew class of distributions. All distributions considered are applied to World Bank data sets.
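A minimal sketch of the transformation construction, in our notation and assuming the logit link (the dissertation's classes may differ in details):

```latex
% Take a continuous density g on the real line and map back to (0,1)
% through the inverse logit. If logit(X) = log{X/(1-X)} has density g,
% then by the change-of-variables formula
f_X(x) = \frac{g\!\left(\log\frac{x}{1-x}\right)}{x\,(1-x)}, \qquad 0 < x < 1.
% Choosing g from a symmetric location-scale family yields a member of
% the logit class; a skewed g (e.g. skew-normal) yields a member of the
% logit skew class.
```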
424

Stable GAS models for financial time series

Daniel Takata Gomes 06 December 2017
GARCH models with normal and Student-t conditional distributions are widely used for volatility modelling of financial data. However, such distributions may not be suitable for some heavy-tailed, leptokurtic series. Stable distributions can be more adequate for fitting these characteristics, as already explored in the literature. On the other hand, the recently developed GAS (Generalized Autoregressive Score) models are dynamic models whose updating mechanism for the time-varying parameters is based on the score function (the first derivative of the log-likelihood); this provides a natural direction for updating the parameters, based on the complete density. We propose a new GAS model with symmetric stable distributions for volatility modelling. The model can be interpreted as a generalization of GARCH, since the classical Gaussian GARCH model is recovered by a particular choice of the stable distribution and of the model structure. Because the density of a general stable distribution has no closed analytical form, we detail its numerical computation, as well as that of its derivatives, for the complete development of the parameter estimation method. The stationarity conditions and the dependence structure of the model are also analysed. Simulation studies, as well as an application to real data, are presented to compare the usual models, based on normal and Student-t distributions, with the proposed model, demonstrating the effectiveness of the latter.
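For reference, the generic GAS(1,1) updating equation in its standard form (Creal, Koopman and Lucas); the notation is not thesis-specific:

```latex
% The time-varying parameter f_t (here a volatility parameter) is driven
% by the scaled score s_t of the conditional density:
\nabla_t = \frac{\partial \log p(y_t \mid f_t)}{\partial f_t}, \qquad
s_t = S_t\, \nabla_t, \qquad
f_{t+1} = \omega + A\, s_t + B\, f_t,
% where S_t is a scaling factor, often an inverse (or square-root inverse)
% Fisher information. With a Gaussian conditional density, f_t equal to
% the conditional variance, and a suitable S_t, the recursion reduces to
% the classical GARCH(1,1) variance equation.
```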
425

Setting the significance level as a function of the sample size

Melaine Cristina de Oliveira 28 July 2014
In current practice, hypothesis testing conventionally fixes a value (usually 0.05) for the maximum acceptable Type I error (the probability of rejecting H0 given that it is true), also known as the significance level of the proposed hypothesis test, denoted alpha. In most cases the Type II error, beta (the probability of accepting H0 given that it is false), is not even computed, nor is it usual to ask whether the adopted alpha is reasonable for the problem at hand or for the sample size available. This text aims to prompt reflection on these questions, and indeed suggests that the significance level should be a function of the sample size. Instead of fixing a single significance level, we propose fixing the relative severity of Type I and Type II errors, based on the losses incurred in each case, and then, for a given sample size, defining the ideal significance level as the one that minimizes a linear combination of the two decision errors. We present examples with simple, composite, and sharp hypotheses for the comparison of proportions, contrasting the conventional approach with the proposed Bayesian one.
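To make the proposal concrete, here is a minimal numerical sketch (our construction, not the dissertation's code) for a one-sided test of a single proportion under a normal approximation: for each sample size, it searches for the alpha that minimises a weighted sum of the two error probabilities, and the optimal alpha shrinks as n grows.

```python
# Minimal sketch: for H0: p = p0 vs H1: p = p1 (one-sided, normal
# approximation), find the alpha minimising a*alpha + b*beta given n.
# The weights a, b encode the relative severity of the two errors.
import numpy as np
from scipy.stats import norm

def optimal_alpha(n, p0=0.5, p1=0.6, a=1.0, b=1.0):
    """Grid-search the alpha minimising a*alpha + b*beta(alpha, n)."""
    alphas = np.linspace(1e-4, 0.5, 2000)
    se0 = np.sqrt(p0 * (1 - p0) / n)       # sd of p-hat under H0
    se1 = np.sqrt(p1 * (1 - p1) / n)       # sd of p-hat under H1
    crit = p0 + norm.ppf(1 - alphas) * se0  # rejection thresholds
    betas = norm.cdf((crit - p1) / se1)     # P(accept H0 | H1 true)
    losses = a * alphas + b * betas
    return alphas[np.argmin(losses)]

for n in (30, 100, 1000, 10000):
    print(n, round(optimal_alpha(n), 4))   # ideal alpha decreases with n
```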
426

The dimensional variation analysis of complex mechanical systems

Sleath, Leslie C. January 2014
Dimensional variation analysis (DVA) is a computer-based simulation process used to identify potential assembly process issues due to the effects of component part and assembly variation during manufacture. The sponsoring company has, over a number of years, developed a DVA process to simulate the variation behaviour of a wide range of static mechanical systems. This project considers whether the current DVA process used by the sponsoring company is suitable for the simulation of complex kinematic systems. The project, which consists of three case studies, identifies several issues that became apparent when the current DVA process was applied to three types of complex kinematic systems. It goes on to develop solutions to the issues raised in the case studies, in the form of new or enhanced methods of information acquisition, simulation modelling, and the interpretation and presentation of the simulation output. The development of these methods has enabled the sponsoring company to expand the range of system types that can be successfully simulated, and significantly enhances the information flow between the DVA process and the wider product development process.
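At its core, DVA is a Monte Carlo exercise: sample each toleranced dimension from its distribution, evaluate an assembly-level measure, and inspect the resulting spread. A minimal one-dimensional sketch follows; it is illustrative only (the sponsoring company's toolchain is not described in the abstract, and all dimensions here are hypothetical).

```python
# Illustrative 1-D tolerance stack-up by Monte Carlo: the clearance gap
# in a hypothetical housing containing three stacked parts.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Each dimension ~ Normal(nominal, sigma), with sigma = tolerance / 3.
housing = rng.normal(100.0, 0.30 / 3, N)
part_a  = rng.normal(40.0,  0.20 / 3, N)
part_b  = rng.normal(35.0,  0.20 / 3, N)
part_c  = rng.normal(24.5,  0.15 / 3, N)

gap = housing - (part_a + part_b + part_c)   # assembly clearance

print(f"mean gap: {gap.mean():.3f} mm, sd: {gap.std():.3f} mm")
print(f"fraction with interference (gap < 0): {(gap < 0).mean():.4%}")
```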
427

Estimation of discrete distributions via Bernstein copulas

Fossaluza, Victor 15 March 2012
The dependence relations between random variables are one of the most discussed topics in probability and statistics, and the most comprehensive way to study these relations is through the joint distribution. In recent years, the use of copulas to represent the dependence structure among random variables in a multivariate distribution has grown. However, there is still little literature on copulas when the marginal distributions are discrete. In this work we present a non-parametric approach to the estimation of the bivariate joint distribution of discrete random variables using copulas and Bernstein polynomials.
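For orientation, a sketch of the empirical Bernstein copula in its standard form (the dissertation's estimator may differ in details):

```latex
% With C_n the empirical copula of the sample and m a smoothing degree,
C_{n,m}(u,v) \;=\; \sum_{j=0}^{m}\sum_{k=0}^{m}
   C_n\!\left(\tfrac{j}{m},\tfrac{k}{m}\right)
   \binom{m}{j} u^{j}(1-u)^{m-j}\,
   \binom{m}{k} v^{k}(1-v)^{m-k},
% a polynomial, hence smooth, copula estimate. The joint distribution is
% recovered through Sklar's theorem, F(x,y) = C(F_1(x), F_2(y)), which
% remains usable when the margins F_1, F_2 are discrete.
```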
428

Particle resuspension due to human walking

Mana, Zakaria 09 December 2014
In nuclear facilities, during normal operations in controlled areas, workers can be exposed to radioactive aerosols (1 µm < dp < 10 µm). One source of airborne contamination is particles initially deposited on the floor, which can be resuspended by workers as they walk; during outages of EDF nuclear facilities, some radionuclides are resuspended in this aerosol form. As simultaneous activities in EDF reactor buildings increase, it becomes important to better understand particle resuspension due to operator activity in order to adapt their radiation protection. The purpose of this PhD thesis is to quantify the resuspension of particles caused by operators walking on a lightly contaminated floor. The approach is to couple an aerodynamic resuspension model with numerical calculations of the airflow under a shoe, and then to characterize experimentally some of the model's input parameters (particle diameter, adhesion forces, shoe motion). The Rock'n'Roll resuspension model proposed by Reeks and Hall (2001) was chosen because it describes the resuspension mechanism physically and is based on the moment of the forces applied to a particle. The model requires input parameters such as the friction velocity of the air, the distribution of adhesion forces, and the particle diameter. For the first parameter, numerical flow simulations were carried out with the ANSYS CFX code under a safety shoe in motion (digitized by 3D CAD); the resulting friction-velocity maps show values of about 1 m·s⁻¹ for an average angular velocity of 200°·s⁻¹. For the second parameter, AFM (atomic force microscopy) measurements were carried out with alumina and cobalt oxide particles in contact with epoxy surfaces representative of those encountered in EDF power plants. The AFM provides the distribution of adhesion forces and reveals a mean value much lower than what can be calculated theoretically using, for example, the JKR model of Johnson et al. (1971). Moreover, this technique, which accounts for surface roughness, shows that the mean adhesion force decreases as particle size increases. Finally, analysis of the AFM measurements yielded a correlation linking the distribution of adhesion forces to particle diameter, replacing the correlation of Biasi et al. (2001) originally used in the Rock'n'Roll model and thereby adapting the model to the particles and floor coverings studied here. The coupling, performed in ANSYS CFX, between the friction-velocity calculations and the resuspension model gives theoretical resuspension rates for a single walking cycle. This coupling was first validated by comparison with experiments in the simple case of a rotating plate in a controlled volume. In addition, experiments at the scale of a 30 m³ ventilated room were performed by walking on an epoxy floor covering seeded with particles of calibrated sizes (1.1 µm and 3.3 µm). These experiments highlighted the parameters influencing particle resuspension, such as step frequency and particle size.
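For reference, the JKR pull-off force mentioned above, in its standard form:

```latex
% JKR model (Johnson, Kendall and Roberts, 1971): for a smooth sphere of
% radius R on a flat surface, with \Delta\gamma the work of adhesion per
% unit area, the force needed to detach the particle is
F_{\mathrm{adh}} = \frac{3}{2}\,\pi\, R\, \Delta\gamma .
% Surface roughness reduces the real contact area, which is why the
% measured AFM adhesion forces fall well below this smooth-contact value,
% and why the mean adhesion force can decrease as particle size grows.
```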
429

Laser Hardening for Application on Crankshaft Surfaces Using Non-Uniform Beam Intensity Distributions

Rönnerfjäll, Victor January 2019
A controlled continuous laser output with a circular geometry and a Gaussian intensity distribution was used to harden the surface of a particular steel specimen (44MnSiVS6). The beam was operated within a relatively small power interval, just past the melting point. The resulting martensite track was shown to expand laterally at an exponential rate with respect to the energy input, accompanied by an increase in the average slope at each lateral edge. The thickness expanded at a significantly slower rate (by about one order of magnitude), with declining efficiency with respect to the energy input used. Thermal measurements along the surface indicated fairly uniform temperature patterns within a relatively large area around the middle of the beam spot, although a slight elevation in temperature was often noted near its centre. In addition to the Gaussian beam, three other intensity distributions were utilized. The results obtained from these distributions suggest that the shape and extent of the resulting martensite zone can be altered effectively if the spread of the Gaussian intensity profile is modified; ideally, this would be carried out while still remaining close to the melting point and keeping the spot size unchanged. A series of Vickers hardness measurements was carried out for each track induced by a different beam distribution. A clear transition in hardness was noted across the perceived boundary between the martensite zone and the base material, confirming the phase identification. / Stiffcrank - Advanced laser surface hardening of microalloyed steels for fatigue enhancement of automotive engine components, funded by EU-RFCS, no. 754155
430

Representation and Interpretation of Manual and Non-Manual Information for Automated American Sign Language Recognition

Parashar, Ayush S 09 July 2003
Continuous recognition of sign language has many practical applications, and it can help to improve the quality of life of deaf persons by facilitating their interaction with the hearing populace in public situations. This has led to some research in automated continuous American Sign Language (ASL) recognition, but most work has used only top-down Hidden Markov Model (HMM) based approaches, and none has used facial information, which is considered fairly important. In this thesis, we explore a bottom-up approach based on Relational Distributions and the Space of Probability Functions (SoPF) for intermediate-level ASL recognition. We also use non-manual information: first, to decrease the number of deletion and insertion errors, and second, to detect whether an ASL sentence contains 'Negation', for which we use motion trajectories of the face. The experimental results show that the SoPF representation works well for ASL recognition: the accuracy based on the number of deletion errors is 95% when considering the 8 most probable signs in a sentence, and 88% when considering the 6 most probable signs. Using facial (non-manual) information increases the top-6 accuracy from 88% to 92%, so the face does carry information content. It is difficult to combine the manual information (from hand motion) directly with the non-manual (facial) information, for two reasons. First, the manual images are not synchronized with the non-manual images: the same facial expression does not occur at the same manual position in two instances of the same sentence. Second, a problem in relating a facial expression to a sign arises when a strong non-manual indicating 'Assertion' or 'Negation' is present in the sentence; in such cases the facial expressions are dominated by head movements, i.e. 'head shakes' or 'head nods'. Of the 30 sentences containing 'Negation', 27 are correctly recognized with the help of the motion trajectories of the face.
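As a hedged illustration of the representation named above (our reading of the abstract, not the thesis code): a relational distribution can be built as a normalised histogram of pairwise geometric relations between low-level features in a frame, and a sentence then traces a path through the Space of Probability Functions.

```python
# Hedged sketch of a "relational distribution": the joint histogram of
# pairwise displacements (dx, dy) between low-level features (e.g. edge
# pixels) in one frame. A sequence of frames yields a sequence of such
# probability functions, i.e. a trajectory in the SoPF.
import numpy as np

def relational_distribution(points, bins=32, span=200):
    """points: (N, 2) array of feature coordinates in one frame."""
    diffs = points[None, :, :] - points[:, None, :]   # all pairwise (dx, dy)
    diffs = diffs.reshape(-1, 2)
    hist, _, _ = np.histogram2d(
        diffs[:, 0], diffs[:, 1],
        bins=bins, range=[[-span, span], [-span, span]],
    )
    return hist / hist.sum()        # normalise to a probability function

# Example with random stand-in "edge pixels":
frame_points = np.random.default_rng(1).uniform(0, 200, size=(150, 2))
rd = relational_distribution(frame_points)
print(rd.shape, rd.sum())           # (32, 32) 1.0
```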
