571

MetaStackVis: Visually-Assisted Performance Evaluation of Metamodels in Stacking Ensemble Learning

Ploshchik, Ilya January 2023 (has links)
Stacking, also known as stacked generalization, is a method of ensemble learning where multiple base models are trained on the same dataset, and their predictions are used as input for one or more metamodels in an extra layer. This technique can lead to improved performance compared to single-layer ensembles, but often requires a time-consuming trial-and-error process. Therefore, the previously developed Visual Analytics system, StackGenVis, was designed to help users select the set of the most effective and diverse models and measure their predictive performance. However, StackGenVis was developed with only one metamodel: Logistic Regression. The focus of this Bachelor's thesis is to examine how alternative metamodels affect the performance of stacked ensembles through the use of a visualization tool called MetaStackVis. Our interactive tool facilitates visual examination of individual metamodels and pairs of metamodels based on their predictive probabilities (or confidence), various supported validation metrics, and their accuracy in predicting specific problematic data instances. The efficiency and effectiveness of MetaStackVis are demonstrated with an example based on a real healthcare dataset. The tool has also been evaluated through semi-structured interview sessions with Machine Learning and Visual Analytics experts. In addition to this thesis, we have written a short research paper explaining the design and implementation of MetaStackVis. However, this thesis provides further insights into the topic explored in the paper by offering additional findings and in-depth analysis. Thus, it can be considered a supplementary source of information for readers who are interested in diving deeper into the subject.
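The stacking setup described here is straightforward to sketch in code. The following is a minimal, hedged illustration (not the thesis's tool, which contributes the visualization layer on top of such ensembles): it uses scikit-learn's StackingClassifier and swaps alternative metamodels in as the final estimator; the dataset and model choices are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Base models trained on the same dataset; their predictions feed the metamodel.
base_models = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier()),
]

# Swap in alternative metamodels (final estimators) and compare performance.
for name, metamodel in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("decision tree", DecisionTreeClassifier(random_state=0)),
]:
    stack = StackingClassifier(estimators=base_models, final_estimator=metamodel, cv=5)
    scores = cross_val_score(stack, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```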
572

Enhancing Our Understanding of Human Poverty: An Examination of the Relationship Between Income Poverty and Material Hardship

Bennett, Robert Michael, Jr. 30 October 2017 (has links)
No description available.
573

Probabilistic Approaches for Deep Learning: Representation Learning and Uncertainty Estimation

Park, Yookoon January 2024 (has links)
In this thesis, we present probabilistic approaches for two critical aspects of deep learning: unsupervised representation learning and uncertainty estimation. The first part of the thesis focuses on developing a probabilistic method for deep representation learning and an application of representation learning on multimodal text-video data. Unsupervised representation learning has proven effective for learning useful representations of data using deep learning and enhancing performance on downstream applications. However, current methods for representation learning lack a solid theoretical foundation despite their empirical success. To bridge this gap, we present a novel perspective for unsupervised representation learning: we argue that representation learning should maximize the effective nonlinear expressivity of a deep neural network on the data so that the downstream predictors can take full advantage of its nonlinear representation power. To this end, we propose neural activation coding (NAC), which maximizes the mutual information between activation patterns of the encoder and the data over a noisy communication channel. We show that learning a noise-robust activation code maximizes the number of distinct linear regions of ReLU encoders, hence maximizing their nonlinear expressivity. Experimental results demonstrate that NAC enhances downstream performance on linear classification and nearest neighbor retrieval on natural image datasets, and furthermore significantly improves the training of deep generative models. Next, we study an application of representation learning for multimodal text-video retrieval. We reveal that when using a pretrained representation model, many test instances are either over- or under-represented during text-video retrieval, hurting the retrieval performance. To address the problem, we propose normalized contrastive learning (NCL), which utilizes the Sinkhorn-Knopp algorithm to normalize the retrieval probabilities of text and video instances, thereby significantly enhancing the text-video retrieval performance. The second part of the thesis addresses the critical challenge of quantifying the predictive uncertainty of deep learning models, which is crucial for high-stakes applications of ML including medical diagnosis, autonomous driving, and financial forecasting. However, uncertainty estimation for deep learning remains an open challenge, and current Bayesian approximations often output unreliable uncertainty estimates. We propose a density-based uncertainty criterion that posits that a model's predictive uncertainty should be grounded in the density of the model's training data, so that the predictive uncertainty is high for inputs that are unlikely under the training data distribution. To this end, we introduce density uncertainty layers as a general building block for density-aware deep architectures. These layers embed the density-based uncertainty criterion directly into the model architecture and can be used as a drop-in replacement for existing neural network layers to produce reliable uncertainty estimates for deep learning models. On uncertainty estimation benchmarks, we show that the proposed method delivers more reliable uncertainty estimates and robust out-of-distribution detection performance.
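The Sinkhorn-Knopp normalization step behind NCL can be illustrated compactly. This is only a sketch of the matrix-balancing idea, not the thesis's full NCL objective; the temperature, iteration count, and toy similarity matrix are assumptions.

```python
import numpy as np

def sinkhorn_normalize(sim, n_iters=100, eps=0.1):
    """Alternately normalize rows and columns of exp(sim / eps) so the
    retrieval probabilities become approximately doubly stochastic and
    no text or video instance is over- or under-represented."""
    K = np.exp(sim / eps)
    for _ in range(n_iters):
        K /= K.sum(axis=1, keepdims=True)   # each text retrieves with total mass 1
        K /= K.sum(axis=0, keepdims=True)   # each video is retrieved with total mass 1
    return K

rng = np.random.default_rng(0)
sim = rng.normal(size=(4, 4))               # stand-in text-video similarity matrix
P = sinkhorn_normalize(sim)
print(P.sum(axis=1).round(3), P.sum(axis=0).round(3))  # both ~[1 1 1 1]
```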
574

Contributions to the theory of unequal probability sampling

Lundquist, Anders January 2009 (has links)
This thesis consists of five papers related to the theory of unequal probability sampling from a finite population. Generally, it is assumed that we wish to make model-assisted inference, i.e. the inclusion probability for each unit in the population is prescribed before the sample is selected. The sample is then selected using some random mechanism, the sampling design. The thesis focuses mostly on three particular unequal probability sampling designs: the conditional Poisson (CP) design, the Sampford design, and the Pareto design. They have different advantages and drawbacks: the CP design is a maximum entropy design, but it is difficult to determine sampling parameters which yield prescribed inclusion probabilities; the Sampford design yields prescribed inclusion probabilities but may be hard to sample from; and the Pareto design makes sample selection very easy, but it is very difficult to determine sampling parameters which yield prescribed inclusion probabilities. These three designs are compared probabilistically and found to be close to each other under certain conditions. In particular, the Sampford and Pareto designs are probabilistically close to each other. Some effort is devoted to analytically adjusting the CP and Pareto designs so that they yield inclusion probabilities close to the prescribed ones. The results of the adjustments are in general very good, and some iterative procedures are suggested to improve them even further. Further, balanced unequal probability sampling is considered. In this kind of sampling, samples are given a positive probability of selection only if they satisfy some balancing conditions, which are given by information from auxiliary variables. Most of the attention is devoted to a slightly less general but practically important case. Also in this case the inclusion probabilities are prescribed in advance, making the choice of sampling parameters important. A complication which arises when choosing sampling parameters is that certain probability distributions need to be calculated, and exact calculation turns out to be practically impossible except for very small cases. It is proposed that Markov Chain Monte Carlo (MCMC) methods be used to obtain approximations to the relevant probability distributions, and also for sample selection. MCMC methods for sample selection do not occur very frequently in the sampling literature today, making this a fairly novel idea.
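Of the three designs, the ease of Pareto sample selection is simple to demonstrate. A minimal sketch following Rosén's ranking-variable construction (the auxiliary variable and sample size are made up):

```python
import numpy as np

def pareto_sample(p, rng):
    """One Pareto pi-ps draw: p holds prescribed inclusion probabilities
    summing to the sample size n. Selection is trivial, but the realized
    inclusion probabilities only approximate p, matching the drawback
    noted above."""
    n = int(round(p.sum()))
    u = rng.uniform(size=p.size)
    q = (u / (1 - u)) / (p / (1 - p))    # Rosén's ranking variables
    return np.sort(np.argsort(q)[:n])    # units with the n smallest q

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 10.0, size=20)      # auxiliary size measure
p = 5 * x / x.sum()                      # prescribed probabilities, sum = n = 5
print(pareto_sample(p, rng))
```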
575

Nuclear reactions inside the water molecule

Dicks, Jesse 30 June 2005 (has links)
A scheme, analogous to the linear combination of atomic orbitals (LCAO), is used to calculate rates of reactions for the fusion of nuclei confined in molecules. As an example, the possibility of nuclear fusion in rotationally excited H₂O molecules of angular momentum 1⁻ is estimated for the p + p + ¹⁶O → ¹⁸Ne*(4.522; 1⁻) nuclear transition. Due to a practically exact agreement of the energy of the Ne resonance and of the p + p + ¹⁶O threshold, the possibility of an enhanced transition probability is investigated. / Physics / M.Sc.
576

Estimation de la variance en présence de données imputées pour des plans de sondage à grande entropie / Variance estimation in the presence of imputed data for high-entropy sampling designs

Vallée, Audrey-Anne 07 1900 (has links)
Variance estimation in the case of item nonresponse treated by an imputation procedure is the main topic of this work. Treating the imputed values as if they were observed may lead to substantial underestimation of the variance of point estimators. Classical variance estimators rely on the availability of the second-order inclusion probabilities, which may be difficult (or even impossible) to calculate. We propose to study the properties of variance estimators obtained by approximating the second-order inclusion probabilities. These approximations are expressed in terms of first-order inclusion probabilities and are usually valid for high-entropy sampling designs. The results of a simulation study evaluating the properties of the proposed variance estimators in terms of bias and mean squared error are presented.
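One classical approximation of this kind is Hájek's, which writes the second-order inclusion probabilities as a function of the first-order ones for high-entropy designs. A sketch, under the assumption that this family is representative of the approximations studied:

```python
import numpy as np

def hajek_joint_inclusion(pi):
    """Approximate second-order inclusion probabilities pi_kl from the
    first-order ones via pi_kl ~ pi_k pi_l [1 - (1-pi_k)(1-pi_l)/d],
    with d = sum_i pi_i (1 - pi_i); valid for high-entropy designs."""
    d = np.sum(pi * (1.0 - pi))
    pkl = np.outer(pi, pi) * (1.0 - np.outer(1.0 - pi, 1.0 - pi) / d)
    np.fill_diagonal(pkl, pi)            # convention: pi_kk = pi_k
    return pkl

pi = np.array([0.2, 0.4, 0.6, 0.8])      # first-order inclusion probabilities
print(hajek_joint_inclusion(pi).round(3))
```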
577

LES of two-phase reacting flows: stationary and transient operating conditions / Simulations aux grandes échelles d'écoulements diphasiques réactifs : régimes stationnaires et transitoires

Eyssartier, Alexandre 05 October 2012 (has links)
Ignition and altitude reignition are critical issues for aeronautical combustion chambers. The success of ignition depends on multiple factors, from the characteristics of the igniter to the spray droplet size or the level of turbulence at the ignition site. Finding the optimal location of the igniter, or the ignition potential of a given energy source at a given location, is therefore of primary importance in the design of combustion chambers. The purpose of this thesis is to study forced ignition of aeronautical combustion chambers. To do so, Large Eddy Simulations (LES) of two-phase reacting flows are performed and analyzed. First, the equations of the Eulerian formalism used to describe the dispersed phase are presented. To validate the successive LES, experimental data from the MERCATO bench installed at ONERA Fauga-Mauzac are used. This allows validation of the two-phase evaporating-flow LES methodology and models prior to their use in other flow conditions. The statistically stationary two-phase reacting case is then compared to available data to evaluate the models in reacting conditions. This case is studied in more depth through an analysis of the characteristics of the flame, which appears to experience very different combustion regimes. It is also seen that determining the most appropriate numerical methodology for computing two-phase flames is not obvious; furthermore, two different methodologies may both agree with the data and still exhibit different burning modes. The ability of LES to correctly compute burning two-phase flows being validated, LES of the transient ignition phenomenon are performed. The experimentally observed sensitivity of ignition to initial conditions, i.e. to sparking time, is recovered with LES. The analysis highlights the major role played by spray dispersion in the development of the initial flame kernel. Using LES to compute ignition sequences provides a great deal of information about the ignition phenomenon; however, from an industrial point of view, it does not give an optimal result unless all locations are tested, which brings the CPU cost to unreasonable values. Alternatives are hence needed and are the objective of the last part of this work. It is proposed to derive a local ignition criterion giving the probability of ignition from knowledge of the unsteady non-reacting two-phase (air and fuel) flow. This model is based on criteria for the phases of a successful ignition process, from first kernel formation to flame propagation towards the injector. Comparisons with experimental data on aeronautical chambers show good agreement, indicating that the proposed ignition criterion, coupled with an LES of the stationary evaporating two-phase non-reacting flow, can be used to optimize the igniter location and power.
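To make the idea of such a criterion concrete, here is a purely hypothetical sketch: the fields, tests, and thresholds below are invented for illustration and are not the model derived in the thesis.

```python
import numpy as np

def ignition_probability(phi, u_rms, d32, phi_lean=0.5, phi_rich=2.0,
                         u_crit=10.0, d_crit=60e-6):
    """Fraction of snapshots at each point where three go/no-go tests pass:
    the local equivalence ratio phi is flammable, turbulent fluctuations
    u_rms (m/s) do not quench the kernel, and the spray (Sauter mean
    diameter d32, m) is fine enough to evaporate in time."""
    ok = (phi > phi_lean) & (phi < phi_rich) & (u_rms < u_crit) & (d32 < d_crit)
    return ok.mean(axis=0)   # average over the snapshot axis

# Stacks of instantaneous non-reacting LES snapshots, shape (T, nx, ny).
rng = np.random.default_rng(2)
T, nx, ny = 200, 8, 8
phi = rng.gamma(2.0, 0.5, size=(T, nx, ny))
u_rms = rng.gamma(2.0, 3.0, size=(T, nx, ny))
d32 = rng.gamma(2.0, 25e-6, size=(T, nx, ny))
print(ignition_probability(phi, u_rms, d32).round(2))
```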
578

Inference for the K-sample problem based on precedence probabilities

Dey, Rajarshi January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Paul I. Nelson / Rank-based inference using independent random samples to compare K > 1 continuous distributions, called the K-sample problem, based on precedence probabilities is developed and explored. There are many parametric and nonparametric approaches, most dealing with hypothesis testing, to this important, classical problem. Most existing tests are designed to detect differences among the location parameters of different distributions. The best known and most widely used of these is the F-test, which assumes normality. A comparable nonparametric test was developed by Kruskal and Wallis (1952). When dealing with location-scale families of distributions, both of these tests can perform poorly if the differences among the distributions are in their scale parameters and not in their location parameters. Overall, existing tests are not effective in detecting changes in both location and scale. In this dissertation, I propose a new class of rank-based, asymptotically distribution-free tests that are effective in detecting changes in both location and scale, based on precedence probabilities. Let X_i be a random variable with distribution function F_i, and let Π be the set of all permutations of the numbers (1, 2, ..., K). Then P(X_{i_1} < ... < X_{i_K}) is a precedence probability if (i_1, ..., i_K) belongs to Π. Properties of these tests are developed using the theory of U-statistics (Hoeffding, 1948). Some of these new tests are related to volumes under ROC (Receiver Operating Characteristic) surfaces, which are of particular interest in clinical trials whose goal is to use a score to separate subjects into diagnostic groups. Motivated by this goal, I propose three new index measures of the separation or similarity among two or more distributions. These indices may be used as “effect sizes”. In a related problem, properties of precedence probabilities are obtained and a bootstrap algorithm is used to construct interval estimates for them.
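The precedence probability defined above has a natural unbiased U-statistic estimator: the fraction of cross-sample tuples, one observation per sample, that appear in increasing order. A minimal sketch with made-up samples:

```python
import numpy as np
from itertools import product

def precedence_probability(samples):
    """U-statistic estimate of P(X_1 < X_2 < ... < X_K): the fraction of
    cross-sample tuples (one observation per sample) in strictly
    increasing order. Equals 1/K! when all distributions coincide."""
    hits = total = 0
    for tup in product(*samples):
        total += 1
        hits += all(a < b for a, b in zip(tup, tup[1:]))
    return hits / total

rng = np.random.default_rng(3)
samples = [rng.normal(loc=mu, size=15) for mu in (0.0, 0.5, 1.0)]
print(round(precedence_probability(samples), 3))   # > 1/6 since locations increase
```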
579

Previsão de movimentos gravitacionais de massa na Serra de Ouro Preto com base em árvore de eventos / Prediction of gravitational mass movements in the Serra de Ouro Preto based on an event tree

Dias, Ezilma Cordeiro 21 March 2002 (has links)
Ouro Preto is a historical city that enjoys World Heritage status as granted by UNESCO, and has a history of heavy mining and rampant unstructured housing dating back to 1680. Recently there have been many occurrences of landslides, rock falls and other gravitational mass movements in the area. Their occurrence is attributed to rainfall acting on the predisposing and modifying factors that exist in the area. It should be noted that the biggest problem is not the magnitude of these hazards but the frequency with which they occur. In this context, this work presents a prediction of gravitational mass movements in the Serra de Ouro Preto based on an event tree. The probabilistic assessment is performed in event-tree form so that quantitative values can be assigned to every attribute. Such analysis allows a quantitative prediction of the possible hazards and problems that can be expected in the region. Every type of mass-movement process has a conditional sequence of attributes that must be taken into account, and for every attribute the probability of occurrence is established through probabilistic considerations. The probability of every attribute is identified by its relative frequency in the scenario (one-dimensional analysis) and its intensity (two-dimensional analysis). It was possible to conclude that the main mass-movement processes occurring in the area are classified as translational slides (in rock and in unconsolidated material), rolling (of rock, debris and rock blocks), debris runs, falls (of rock, rock blocks and unconsolidated material), flows (of debris and unconsolidated material) and complex movements (in rock and unconsolidated material), all of which are active, without exception. The probabilities calculated for these processes range from 1 to 17 percent, corresponding to return periods of 50 to 5 years. These values characterize the area as a dangerous to potentially dangerous zone.
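The event-tree calculation itself reduces to multiplying conditional attribute probabilities along a branch. A toy sketch with invented values (not figures from the thesis):

```python
# Each branch multiplies the conditional probabilities of its attributes.
branch = [
    ("triggering rainfall", 0.30),
    ("susceptible slope, given rainfall", 0.60),
    ("translational slide, given the above", 0.40),
]

p = 1.0
for attribute, p_cond in branch:
    p *= p_cond
print(f"branch probability = {p:.3f}, return period = {1.0 / p:.0f} years")
```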
580

Aplicação de modelos teóricos de distribuição de abundância das espécies na avaliação de efeitos de fragmentação sobre as comunidades de aves da Mata Atlântica / Use of species abundance distributions to evaluate the effects of fragmentation on the bird communities of the Atlantic Forest

Mandai, Camila Yumi 26 October 2010 (has links)
Species abundance distributions (SADs) had an important role in the development of community ecology, revealing one of the most well-established patterns in ecology: the high dominance of biological communities by just a few species. This pattern stimulated the proposal of dozens of theoretical models in an attempt to explain the ecological mechanisms that could generate it. These models can also be viewed as descriptors of communities, and their parameters as synthetic measures of dimensions of diversity. Such parameters can be used not only as biologically interpretable descriptors of communities but also as response variables to environmental factors affecting them. Adopting this descriptive application of the models, our objective was to compare bird communities across areas along a fragmentation gradient, using as the response variable the estimates of Fisher's α, the parameter of the logseries model. Because all proposed SAD models implicitly assume equal capture probabilities among species — an assumption that seems unrealistic for communities of mobile organisms such as birds — we also investigated how sensitive the models are to violations of this assumption. Simulating communities in which species had equal or unequal capture probabilities, we found that increasing heterogeneity in species catchability increases the bias in both model selection and parameter estimation. However, since our goal was to identify factors that influence community diversity, even with estimation bias it might still be possible to detect their influence on the parameter values when it exists. We therefore proceeded to a further stage of simulations, generating communities whose parameter values had a linear relationship with fragment area. We found that, regardless of whether species catchability is equal or unequal, when the effect exists it is always detected, although depending on the degree of heterogeneity in capture probabilities it may be underestimated. In the absence of an effect, it can be falsely detected, depending on the degree of heterogeneity in capture probabilities among species, but always with very low estimates for the non-existent effect. With these results we could quantify the effects of heterogeneity in capture probabilities and proceed with the analysis of the effects of fragmentation. Our results show that in the landscape with 10% vegetation cover, fragment area appears to influence diversity more than isolation, whereas in the landscape with 50% vegetation cover, isolation becomes more important than area in explaining the data. In a more parsimonious interpretation, however, we consider the estimated effects too low to conclude that they actually exist. We therefore conclude that the fragmentation process probably has no effect on the hierarchy of species abundances, independently of the percentage of vegetation cover in the landscape. However, a description of the number of captures of each species in the fragments, weighted by the number of captures sampled in adjacent continuous areas, revealed that fragment size may be important in determining which species go extinct or benefit, and that matrix quality may be decisive for the maintenance of highly sensitive species in small fragments. Thus, we demonstrate that although SADs are little affected by fragmentation, the position of species in the abundance hierarchy can change considerably, reflecting differences in species' sensitivity to fragment area and isolation.
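Fisher's α, the response variable used above, can be recovered from a sample's species count S and total abundance N by solving the logseries relation S = α ln(1 + N/α). A small sketch (the counts are made up):

```python
import numpy as np
from scipy.optimize import brentq

def fishers_alpha(S, N):
    """Solve S = alpha * ln(1 + N / alpha) for alpha, the logseries
    diversity parameter used as the response variable above."""
    return brentq(lambda a: a * np.log(1.0 + N / a) - S, 1e-6, 1e6)

print(round(fishers_alpha(S=45, N=800), 2))   # made-up species/individual counts
```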
