571

Enhancing Our Understanding of Human Poverty: An Examination of the Relationship Between Income Poverty and Material Hardship

Bennett, Robert Michael, Jr. 30 October 2017 (has links)
No description available.
572

Contributions to the theory of unequal probability sampling

Lundquist, Anders January 2009 (has links)
This thesis consists of five papers related to the theory of unequal probability sampling from a finite population. Generally, it is assumed that we wish to make model-assisted inference, i.e. the inclusion probability for each unit in the population is prescribed before the sample is selected. The sample is then selected using some random mechanism, the sampling design. The thesis focuses mostly on three particular unequal probability sampling designs: the conditional Poisson (CP) design, the Sampford design, and the Pareto design. They have different advantages and drawbacks: the CP design is a maximum entropy design, but it is difficult to determine sampling parameters which yield prescribed inclusion probabilities; the Sampford design yields prescribed inclusion probabilities but may be hard to sample from; and the Pareto design makes sample selection very easy, but it is very difficult to determine sampling parameters which yield prescribed inclusion probabilities. These three designs are compared probabilistically and found to be close to each other under certain conditions; in particular, the Sampford and Pareto designs are probabilistically close to each other. Some effort is devoted to analytically adjusting the CP and Pareto designs so that they yield inclusion probabilities close to the prescribed ones. The results of the adjustments are in general very good, and some iterative procedures are suggested to improve them even further. Further, balanced unequal probability sampling is considered. In this kind of sampling, samples are given a positive probability of selection only if they satisfy some balancing conditions, given by information from auxiliary variables. Most of the attention is devoted to a slightly less general but practically important case. Also in this case the inclusion probabilities are prescribed in advance, making the choice of sampling parameters important. A complication which arises when choosing sampling parameters is that certain probability distributions need to be calculated, and exact calculation turns out to be practically impossible except for very small cases. It is proposed that Markov chain Monte Carlo (MCMC) methods be used to approximate the relevant probability distributions and to select samples. MCMC methods for sample selection do not occur very frequently in the sampling literature today, making this a fairly novel idea.
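To make the Pareto design concrete, the following is a minimal sketch (in Python, with hypothetical data) of Rosén's Pareto πps selection rule mentioned above: each unit receives a ranking variable comparing the odds of a uniform draw to the odds of its target inclusion probability, and the n units with the smallest values are selected. As the abstract notes, the realized inclusion probabilities only approximate the prescribed ones, which is exactly the adjustment problem the thesis studies.

```python
import numpy as np

def pareto_pps_sample(p, rng=None):
    """Draw one Pareto pi-ps sample.

    p : target inclusion probabilities, 0 < p_i < 1 with sum(p) = n.
    Returns the indices of the n selected units.
    """
    rng = np.random.default_rng(rng)
    p = np.asarray(p, dtype=float)
    n = int(round(p.sum()))
    u = rng.uniform(size=p.size)
    # Ranking variable: odds of the uniform draw relative to odds of p_i.
    q = (u / (1.0 - u)) / (p / (1.0 - p))
    return np.argsort(q)[:n]  # keep the n smallest ranking variables

# Realized inclusion frequencies are close to, but not exactly, p --
# the adjustment problem discussed in the abstract.
p = np.array([0.8, 0.5, 0.4, 0.2, 0.1])  # hypothetical, sums to n = 2
counts = np.zeros_like(p)
for _ in range(20000):
    counts[pareto_pps_sample(p)] += 1.0
print(counts / 20000)
```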
573

Nuclear reactions inside the water molecule

Dicks, Jesse 30 June 2005 (has links)
A scheme, analogous to the linear combination of atomic orbitals (LCAO), is used to calculate rates of reactions for the fusion of nuclei confined in molecules. As an example, the possibility of nuclear fusion in rotationally excited H2O molecules of angular momentum 1⁻ is estimated for the p + p + ¹⁶O → ¹⁸Ne*(4.522; 1⁻) nuclear transition. Due to a practically exact agreement between the energy of the Ne resonance and the p + p + ¹⁶O threshold, the possibility of an enhanced transition probability is investigated. / Physics / M.Sc.
574

Estimation de la variance en présence de données imputées pour des plans de sondage à grande entropie / Variance estimation in the presence of imputed data under high-entropy sampling designs

Vallée, Audrey-Anne 07 1900 (has links)
Variance estimation in the case of item nonresponse treated by imputation is the main topic of this work. Treating the imputed values as if they had been observed may lead to substantial underestimation of the variance of point estimators. Classical variance estimators rely on the availability of the second-order inclusion probabilities, which may be difficult (even impossible) to calculate. We propose to study the properties of variance estimators obtained by approximating the second-order inclusion probabilities. These approximations are expressed in terms of the first-order inclusion probabilities and are usually valid for high-entropy sampling designs. The results of a simulation study evaluating the properties of the proposed variance estimators in terms of bias and mean squared error are presented.
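For illustration, one widely used approximation of this kind leads to a Hájek-type variance estimator for the Horvitz-Thompson total that requires only first-order inclusion probabilities. The sketch below is one standard variant among several, with hypothetical variable names, not necessarily the exact estimators studied in the thesis.

```python
import numpy as np

def hajek_variance_ht_total(y, pi):
    """Hajek-type variance estimator for the Horvitz-Thompson total,
    valid for high-entropy designs; uses only first-order inclusion
    probabilities (no pi_ij needed).

    y  : observed values for the sampled units
    pi : their first-order inclusion probabilities
    """
    y = np.asarray(y, dtype=float)
    pi = np.asarray(pi, dtype=float)
    n = y.size
    a = 1.0 - pi                               # weights 1 - pi_i
    b_hat = np.sum(a * y / pi) / np.sum(a)     # weighted mean of y_i / pi_i
    return n / (n - 1.0) * np.sum(a * (y / pi - b_hat) ** 2)

# Hypothetical sample of four units
print(hajek_variance_ht_total([12.0, 7.5, 3.1, 9.9], [0.6, 0.4, 0.2, 0.5]))
```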
575

LES of two-phase reacting flows: stationary and transient operating conditions / Simulations aux grandes échelles d'écoulements diphasiques réactifs : régimes stationnaires et transitoires

Eyssartier, Alexandre 05 October 2012 (has links)
Ignition and high-altitude re-ignition are critical issues for aeronautical combustion chambers. The success of ignition depends on multiple factors, from the characteristics of the igniter to the spray droplet size or the level of turbulence at the ignition site. Finding the optimal location of the igniter, or the ignition potential of a given energy source at a given location, are therefore parameters of primary importance in the design of combustion chambers. The purpose of this thesis is to study forced ignition of aeronautical combustion chambers. To do so, Large Eddy Simulations (LES) of two-phase reacting flows are performed and analyzed. First, the equations of the Eulerian formalism used to describe the dispersed phase are presented. To validate the subsequent LES, experimental data from the MERCATO bench installed at ONERA Fauga-Mauzac are used. This allows the two-phase evaporating flow LES methodology and models to be validated prior to their use in other flow conditions. The statistically stationary two-phase reacting case is then compared to the available data to evaluate the models in reacting conditions. This case is studied in more depth through an analysis of the characteristics of the flame, which appears to experience very different combustion regimes. It is also seen that determining the most appropriate numerical method for computing two-phase reacting flows is not obvious; furthermore, two different methods may both agree with the data and still exhibit different burning modes. The ability of LES to correctly compute a two-phase reacting flow having been validated, LES of the transient ignition phenomenon are performed. The experimentally observed sensitivity of ignition to initial conditions, i.e. to sparking time, is recovered with LES. The analysis highlights the major role played by spray dispersion in the development of the initial flame kernel. Using LES to compute ignition sequences provides a lot of information about the ignition phenomenon; however, from an industrial point of view, it does not give an optimal result unless all locations are tested, which brings the CPU cost to unreasonable values. Alternatives are hence needed and are the objective of the last part of this work. It is proposed to derive a local ignition criterion giving the probability of ignition from the knowledge of the unsteady non-reacting two-phase (air and fuel) flow. This model is based on criteria for the successive phases of a successful ignition process, from the formation of the first kernel to the propagation of the flame towards the injector. Comparisons with experimental data on aeronautical chambers show good agreement, indicating that the proposed ignition criterion, coupled with an LES of the stationary evaporating two-phase non-reacting flow, can be used to optimize the igniter location and power.
576

Inference for the K-sample problem based on precedence probabilities

Dey, Rajarshi January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Paul I. Nelson / Rank-based inference using independent random samples to compare K > 1 continuous distributions, called the K-sample problem, is developed and explored based on precedence probabilities. There are many parametric and nonparametric approaches to this important, classical problem, most dealing with hypothesis testing. Most existing tests are designed to detect differences among the location parameters of the different distributions. The best known and most widely used of these is the F-test, which assumes normality; a comparable nonparametric test was developed by Kruskal and Wallis (1952). When dealing with location-scale families of distributions, both of these tests can perform poorly if the differences among the distributions lie in their scale parameters rather than their location parameters. Overall, existing tests are not effective in detecting changes in both location and scale. In this dissertation, I propose a new class of rank-based, asymptotically distribution-free tests, built on precedence probabilities, that are effective in detecting changes in both location and scale. Let X_i be a random variable with distribution function F_i, and let Π be the set of all permutations of (1, 2, ..., K). Then P(X_{i_1} < ... < X_{i_K}) is a precedence probability if (i_1, ..., i_K) belongs to Π. Properties of these tests are developed using the theory of U-statistics (Hoeffding, 1948). Some of the new tests are related to volumes under ROC (Receiver Operating Characteristic) surfaces, which are of particular interest in clinical trials whose goal is to use a score to separate subjects into diagnostic groups. Motivated by this goal, I propose three new index measures of the separation or similarity among two or more distributions; these indices may be used as "effect sizes". In a related problem, properties of precedence probabilities are obtained and a bootstrap algorithm is used to estimate intervals for them.
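For illustration, a precedence probability can be estimated by the natural U-statistic: the proportion of K-tuples, one observation drawn from each sample, that fall in the specified order. A minimal sketch with simulated data (the sample sizes and distribution parameters are hypothetical):

```python
import itertools
import numpy as np

def precedence_probability(samples):
    """U-statistic estimate of P(X_1 < X_2 < ... < X_K): the proportion
    of K-tuples (one observation from each sample) that are strictly
    increasing in the given order."""
    count = 0
    total = 0
    for tup in itertools.product(*samples):
        total += 1
        count += all(a < b for a, b in zip(tup, tup[1:]))
    return count / total

# Three location-shifted normal samples; under identical continuous
# distributions the probability would be 1/3! = 1/6 for every ordering.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=15)
y = rng.normal(0.5, 1.0, size=15)
z = rng.normal(1.0, 1.0, size=15)
print(precedence_probability([x, y, z]))
```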
577

Previsão de movimentos gravitacionais de massa na Serra de Ouro Preto com base em árvore de eventos / Prediction of gravitational mass movements in the Serra de Ouro Preto based on an event tree

Dias, Ezilma Cordeiro 21 March 2002 (has links)
Ouro Preto is a historical city, granted World Heritage status by UNESCO, with a history of heavy mining and rampant unstructured housing dating back to 1680. Recently there have been many occurrences of landslides, rock falls, and other gravitational mass movements in the area. Their occurrence is attributed to rainfall events acting together with the predisposing and modifying factors that exist in the area. It should be noted that the biggest problem is not the magnitude of these hazards but the frequency with which they occur. In this context, this work presents a prediction of gravitational mass movements in the Serra de Ouro Preto based on an event tree. The probabilistic assessment is performed in event-tree form so that quantitative values can be assigned to every attribute; such an analysis allows a quantitative prediction of the possible hazards and problems that can be expected in the region. Every type of gravitational mass movement consists of a conditional sequence of attributes that must be taken into account, and for each attribute the probability of occurrence is established on probabilistic grounds: the probability of every attribute is identified by its relative frequency in the scenario (one-dimensional analysis) and by its intensity (two-dimensional analysis). It was concluded that the main processes occurring in the area are classified as translational slides (in rock and in unconsolidated material), rolling (of rock, debris, and rock blocks), runs, falls (of rock, rock blocks, and unconsolidated material), flows (of debris and unconsolidated material), and complex movements, all of which, without exception, are currently active. The probabilities calculated for these processes range from 1 to 17 percent, corresponding to return periods of 50 down to 5 years. These values characterize the area as a dangerous to potentially dangerous zone.
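The event-tree calculation itself is simple: the probability of each terminal scenario is the product of the conditional probabilities along its branch, and the return period is the reciprocal of the annual probability. A minimal sketch follows; every number in it is invented for illustration, not a value from the study.

```python
# Each terminal scenario's probability is the product of the conditional
# probabilities along its branch; the return period is its reciprocal.
# All numbers below are hypothetical.
event_tree = {
    "translational slide": [0.9, 0.5, 0.3],   # e.g. P(rain), P(saturation | rain), ...
    "rock fall":           [0.9, 0.2, 0.25],
}

for process, branch in event_tree.items():
    p_annual = 1.0
    for p in branch:
        p_annual *= p
    print(f"{process}: annual probability {p_annual:.3f}, "
          f"return period {1.0 / p_annual:.0f} years")
```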
578

Aplicação de modelos teóricos de distribuição de abundância das espécies na avaliação de efeitos de fragmentação sobre as comunidades de aves da Mata Atlântica / Use of species abundance distributions to evaluate the effects of fragmentation on the bird communities of the Atlantic Forest

Mandai, Camila Yumi 26 October 2010 (has links)
Species abundance distributions (SADs) played an important role in the development of community ecology, revealing one of the most well-established patterns in ecology: the high dominance of biological communities by just a few species. This pattern stimulated the proposal of numerous theoretical models attempting to explain the ecological mechanisms that could generate it. These models can also serve as descriptors of communities, and their parameters as synthetic measures of dimensions of diversity. Such parameters can be used not only as biologically interpretable descriptors but also as response variables to environmental factors affecting communities. Adopting this descriptive approach, our objective was to compare bird communities across areas along a fragmentation gradient, using as the response variable the estimates of α, the parameter of Fisher's logseries. Considering the implicit assumption of equal capture probabilities among species in SAD models, we also investigated how sensitive the models are when this assumption is violated, since it seems unrealistic for communities of mobile organisms such as birds. Simulating communities in which species had equal or unequal capture probabilities, we found that increasing the heterogeneity of species catchability increases the bias in model selection and parameter estimation. Since our goal was to identify factors that influence community diversity, even with these estimation biases it might still be possible to detect such an influence on the parameter values when it exists. We therefore proceeded to another stage of simulations, generating communities whose parameter values had a linear relationship with fragment area. We found that, regardless of whether species catchability is equal or unequal, when the effect exists it is always detected, although depending on the degree of difference in capture probabilities it may be underestimated. Moreover, in the absence of an effect, it can be falsely detected, depending on the degree of heterogeneity of capture probabilities among species, but always with very low estimates for the non-existent effect. With these results we could quantify the effects of heterogeneous capture probabilities and proceed with the analysis of the effects of fragmentation. Our results showed that in the landscape with 10% vegetation cover, fragment area appears to influence the diversity of the fragments more than isolation does, whereas in the landscape with 50% vegetation cover, isolation becomes more important than area in explaining the data. Under a more parsimonious interpretation, however, we consider the estimated effects too low to conclude that they actually exist. We therefore conclude that the fragmentation process probably has no effect on the hierarchy of species abundances, independently of the percentage of vegetation cover in the landscape. Nevertheless, a description of the number of captures of each species in the fragments, weighted by the number of captures sampled in adjacent continuous areas, revealed that fragment size may be important in determining which species go extinct or benefit, and that matrix quality is perhaps decisive for the maintenance of highly sensitive species in small fragments. Thus, we demonstrate that although SADs are little affected by fragmentation, the position of species in the abundance hierarchy can change considerably, reflecting differences in species' sensitivity to fragment area and isolation.
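For context, Fisher's α is obtained from the logseries relation S = α ln(1 + N/α), where S is the number of species and N the number of individuals sampled. A minimal sketch of the fit (requires N > S; the example counts are hypothetical, not values from the study):

```python
import numpy as np
from scipy.optimize import brentq

def fishers_alpha(S, N):
    """Solve S = alpha * ln(1 + N / alpha) for Fisher's alpha,
    given S observed species among N individuals (requires N > S)."""
    f = lambda a: a * np.log(1.0 + N / a) - S
    # f < 0 near zero and f -> N - S > 0 as alpha grows, so a root is bracketed.
    return brentq(f, 1e-9, 1e9)

# Hypothetical mist-net sample: 45 species among 400 captures
print(fishers_alpha(45, 400))
```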
579

Uncertainties in land change modeling

Krüger, Carsten 13 May 2016 (has links)
Human influence has led to substantial changes to the Earth's surface. Land change models are widely applied to analyze land change processes and to give recommendations for decision-making. These models are affected by uncertainties which have to be taken into account when interpreting their results. However, approaches which examine different sources of uncertainty with regard to their interdependencies and their influence on projected land change are rarely applied. The first objective of this thesis is therefore to develop a systematic approach which identifies major sources of uncertainty and their propagation to the resulting land change map. Another challenge in land change modeling is estimating the reliability of land change projections when no reference data are available. Bayesian Belief Networks were identified as a useful technique for reaching the first objective. Moreover, the modeling steps of model structure definition, data selection, and data preprocessing were identified as relevant sources of uncertainty. To address the second objective, a set of probability-based measures was developed. These measures quantify uncertainty by means of a single projected land change map, without using a reference map, and additionally make it possible to separate uncertainty into its spatial and quantitative components, which is especially useful in spatial applications such as land change modeling. However, even a completely certain model can be wrong and therefore useless. An approach is therefore suggested which estimates the relationship between disagreement and uncertainty in known time steps in order to apply this relationship to future time steps. The approaches developed here give important information for understanding the reliability of modeled future development paths of land change.
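As an illustration of measures that quantify uncertainty from a single predicted map, per-pixel normalized Shannon entropy of the predicted class probabilities is one generic probability-based measure of this kind. The sketch below assumes a hypothetical array of per-class probabilities and is not necessarily the thesis' exact set of measures.

```python
import numpy as np

def pixel_uncertainty(prob):
    """Normalized Shannon entropy per pixel, computed from predicted
    class probabilities of shape (rows, cols, n_classes).
    0 = completely certain, 1 = maximally uncertain."""
    p = np.clip(prob, 1e-12, 1.0)
    h = -(p * np.log(p)).sum(axis=-1)
    return h / np.log(prob.shape[-1])

# Hypothetical 2x2 probability map with three land-change classes
probs = np.array([[[0.98, 0.01, 0.01], [0.40, 0.30, 0.30]],
                  [[0.60, 0.20, 0.20], [1/3, 1/3, 1/3]]])
print(pixel_uncertainty(probs))
```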
580

Untersuchung von Halbleiteroberflächen im stationären Nichtgleichgewicht durchgeführt an Germanium / Investigation of semiconductor surfaces in stationary non-equilibrium, carried out on germanium

Gräfe, Wolfgang 19 December 2011 (has links)
Starting from a critical consideration of the well-established measurement procedures for investigating semiconductor surfaces, the field effect and reverse-current measurements were coupled in such a way that the surface conductance and the reverse current could be measured simultaneously on the part of the specimen covered on the back side by a reverse-biased pn-junction. The purpose of this coupling is to improve the analysis of surface states compared with the known methods. The extension of the field-effect measurements to the stationary non-equilibrium state allows direct measurement of the splitting of the Fermi level, and thus determination of the surface recombination velocity, from surface conductance measurements alone. With an extension of the reverse-current measurements, namely measuring the change in reverse current under illumination, it is likewise possible to determine the surface recombination velocity. Simultaneous measurement of the changes in reverse current and the surface conductance with the developed apparatus revealed a broadening of the reverse-current curve, together with a shift of the reverse-current maximum relative to the conductance minimum, on germanium surfaces etched with CP-4. The interpretation of this effect allows a unique determination of the energy level of the recombination state and indicates that the transition probabilities of the recombination state are not properties of the state alone, but of the state together with the condition of the surface. For the investigated CP-4-etched surfaces, the assumption made in channel investigations was confirmed that, for states in the upper half of the forbidden zone, Cp/Cn > e^{2(Et − Ei)/kT}. With the developed method for measuring the surface conductance with a high-frequency alternating current, sweep-out effects of the charge carriers are avoided. Furthermore, the parallel connection of the ohmic and capacitive conductances between the inversion layer and the bulk circumvents the difficulties of contacting the inversion layer; producing such a contact is especially difficult for semiconductors with a low intrinsic concentration. This method has been successfully applied to silicon.
