11

Análise espaço-temporal do uso do habitat pelo boto-cinza (Sotalia guianensis) na Baía de Guanabara, Rio de Janeiro / Spatio-temporal analysis of habitat use by Guiana dolphins (Sotalia guianensis) in Guanabara Bay, Rio de Janeiro

Rafael Ramos de Carvalho 01 February 2013
Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Determining home ranges has been an important issue in studies that seek to understand the relationship between a species and its environment. Guanabara Bay shelters a resident population of Guiana dolphins (Sotalia guianensis), and this study analyses the habitat use of S. guianensis in Guanabara Bay (RJ) between 2002 and 2012. A total of 204 days of survey effort was analysed, and 902 points were selected for the distribution maps. The bay was divided into four sections in which differences in survey effort did not exceed 16%. Kernel density estimation was used to estimate and interpret the habitat use of the Guiana dolphin groups. Core areas were also examined using 1.5 km × 1.5 km grid cells and the calculation of Pianka's niche overlap index. Depths used by S. guianensis did not differ significantly over the study period (p = 0.531). Areas used during 2002/2004 were estimated at 79.4 km², with core areas of 19.4 km². For 2008/2010 and 2010/2012, the areas used were estimated at 51.4 and 58.9 km², and core areas at 10.8 and 10.4 km², respectively. The areas used by the dolphin groups comprised regions spanning the main channel of the bay and the northeast of the study area, where the Guapimirim Environmental Protection Area is located. Nevertheless, the population's home range, as well as its core areas, decreased gradually over the years, especially around Paquetá Island and the central-south portion of the main channel. Groups of more than 10 individuals, and groups with ≥ 25% calves in their composition, showed reductions in habitat use of more than 60%. The population of Guiana dolphins has been decreasing drastically, and the individuals interact with disturbance sources on a daily basis; these are possible causes of the reduced habitat use of Guanabara Bay. The results are therefore of fundamental importance for the conservation of this population, as they demonstrate the consequences of long-term interaction with a highly impacted coastal environment.
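
A minimal sketch of the grid-cell overlap analysis described above: utilization proportions on a regular grid compared across periods with Pianka's niche overlap index. The sighting coordinates, grid extent and cell size below are illustrative assumptions, not the study's data.

import numpy as np

def grid_utilization(x, y, cell_km=1.5, extent=(0, 30, 0, 30)):
    # Proportion of sightings falling in each cell of a regular grid.
    xmin, xmax, ymin, ymax = extent
    bins_x = np.arange(xmin, xmax + cell_km, cell_km)
    bins_y = np.arange(ymin, ymax + cell_km, cell_km)
    counts, _, _ = np.histogram2d(x, y, bins=[bins_x, bins_y])
    return counts.ravel() / counts.sum()

def pianka_overlap(p, q):
    # Pianka's index: 1 means identical habitat use, 0 means no overlap.
    return np.sum(p * q) / np.sqrt(np.sum(p**2) * np.sum(q**2))

rng = np.random.default_rng(0)
period_a = rng.normal(15, 4, size=(2, 400))  # hypothetical sightings (x, y), in km
period_b = rng.normal(13, 3, size=(2, 400))
p, q = grid_utilization(*period_a), grid_utilization(*period_b)
print(f"Pianka overlap between periods: {pianka_overlap(p, q):.3f}")
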
12

Bayesian kernel density estimation

Rademeyer, Estian January 2017
This dissertation investigates the performance of two-class classification on credit scoring data sets with low default ratios. The standard two-class parametric Gaussian and naive Bayes (NB) classifiers, as well as the non-parametric Parzen classifiers, are extended, using Bayes' rule, to include either a class imbalance or a Bernoulli prior. This is done with the aim of addressing the low default probability problem. Furthermore, the performance of Parzen classification with Silverman and Minimum Leave-one-out Entropy (MLE) Gaussian kernel bandwidth estimation is also investigated. It is shown that the non-parametric Parzen classifiers yield superior classification power. However, it would be desirable for these non-parametric classifiers to possess a predictive quantity such as the odds ratio found in logistic regression (LR). The dissertation therefore dedicates a section to, amongst other things, a study of the paper entitled "Model-Free Objective Bayesian Prediction" (Bernardo 1999). Since this approach to Bayesian kernel density estimation is only developed for the univariate and the uncorrelated multivariate case, the section develops a theoretical multivariate approach to Bayesian kernel density estimation. This approach is theoretically capable of handling both correlated and uncorrelated features in data. This is done through the assumption of a multivariate Gaussian kernel function and the use of an inverse Wishart prior. / Dissertation (MSc)--University of Pretoria, 2017. / The financial assistance of the National Research Foundation (NRF) towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF. / Statistics / MSc / Unrestricted
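
As a rough illustration of the Parzen approach with Silverman bandwidths and an explicit class prior, here is a sketch on synthetic imbalanced data; it is not the dissertation's code, and the prior, feature and sample sizes are invented.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
good = rng.normal(0.0, 1.0, 950)  # non-defaulting applicants, one feature
bad = rng.normal(2.0, 1.2, 50)    # defaults: heavily imbalanced classes

# Parzen (kernel) density estimates with Silverman's bandwidth rule.
f_good = gaussian_kde(good, bw_method="silverman")
f_bad = gaussian_kde(bad, bw_method="silverman")

def classify(x, prior_bad=0.05):
    # Bayes' rule: predict "default" when the posterior odds favour it.
    odds = (f_bad(x) * prior_bad) / (f_good(x) * (1 - prior_bad))
    return odds > 1.0

x_new = np.linspace(-3, 5, 9)
print(dict(zip(np.round(x_new, 1), classify(x_new))))
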
13

Development of a robbery prediction model for the City of Tshwane Metropolitan Municipality

Kemp, Nicolas James January 2020
Crime is not spread evenly over space or time. This suggests that offenders favour certain areas and/or certain times. People base their daily activities on this notion and make decisions to avoid certain areas or feel the need to be more alert in some places rather than others. Even when choosing where to stay, shop, and go to school, people take into account how safe they feel in those places. Crime in relation to space and time has been studied over several centuries; however, the computer era has brought new insight to this field. Indeed, computing technology, in particular geographic information systems (GIS) and crime mapping software, has increased interest in explaining criminal activities. It is the ability to combine the type, time and spatial occurrence of crime events that makes these computing technologies attractive to crime analysts. The current study predicts robbery crime events in the City of Tshwane Metropolitan Municipality. By combining GIS and statistical models, a method was developed to predict future robbery hotspots. More specifically, a robbery probability model was developed for the City of Tshwane Metropolitan Municipality based on robbery events that occurred during 2006, and the model was evaluated using actual robbery events that occurred in 2007. This novel model was based on the social disorganisation, routine activity, crime pattern and temporal constraint crime theories. The efficacy of the model was tested by comparing it to a traditional hotspot model. The robbery prediction model was developed using both built and social environmental features. Features in the built environment were divided into two main groups: facilities and commuter nodes. The facilities used in the current study included cadastral parks, clothing stores, convenience stores, education facilities, fast food outlets, filling stations, office parks and blocks, general stores, restaurants, shopping centres and supermarkets. The key commuter nodes consisted of highway nodes, main road nodes and railway stations. The social environment was built using demographics obtained from the 2001 census data. The selection of features that may impact the occurrence of robbery was guided by spatial crime theories housed within the school of environmental criminology. Theories in this discipline argue that neighbourhoods experiencing social disorganisation are more prone to crime, while different facilities act as crime attractors or generators. Some theories also include a time element, suggesting that criminals are constrained by time, leaving little time to explore areas far from commuting nodes. The current study combines these theories using GIS and statistics. A programmatic approach in R was used to create kernel density estimations (hotspots), select relevant features, compute regression models with the caret and mlr packages, and predict crime hotspots. R was further used for the majority of spatial queries and analyses. The outcome consisted of various hotspot raster layers predicting future robbery occurrences. The accuracy of the model was tested using 2007 robbery events. The current study therefore not only provides a novel statistical predictive model but also showcases R's spatial capabilities. The current study found strong supporting evidence for the routine activity and crime pattern theories in that robberies tended to cluster around facilities within the City of Tshwane, South Africa.
The findings also show a strong spatial association between robberies and neighbourhoods that experience high social disorganisation. Support was also found for the time constraint theory in that a large portion of robberies occur in the immediate vicinity of highway nodes, main road nodes and railway stations. When tested against the traditional hotspot model, the robbery probability model was found to be slightly less effective in predicting future events. However, the robbery probability model can be improved upon and used in future studies to determine the effect that future urban development will have on crime. / Dissertation (MSc)--University of Pretoria, 2020. / Geography, Geoinformatics and Meteorology / MSc / Unrestricted
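
The study itself works in R with the caret and mlr packages; the Python sketch below only illustrates the core hotspot step, fitting a kernel density estimate to one year's events, declaring the densest cells a hotspot, and measuring the hit rate on the next year's events. All coordinates, the bandwidth and the 10% threshold are invented.

import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(2)
train = rng.normal([5.0, 5.0], 1.0, size=(500, 2))  # "2006" event locations, km
test = rng.normal([5.2, 5.0], 1.1, size=(300, 2))   # "2007" event locations

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(train)

# Score a regular grid and call the densest 10% of cells the hotspot.
gx, gy = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
grid = np.column_stack([gx.ravel(), gy.ravel()])
log_dens = kde.score_samples(grid)
hot = log_dens >= np.quantile(log_dens, 0.90)

# Hit rate: share of next-year events whose nearest grid cell is a hotspot cell.
idx = ((test[:, None, :] - grid[None, :, :]) ** 2).sum(-1).argmin(1)
print(f"hotspot covers 10% of the area; hit rate: {hot[idx].mean():.1%}")
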
14

On Non-Parametric Confidence Intervals for Density and Hazard Rate Functions & Trends in Daily Snow Depths in the United States and Canada

Xu, Yang 09 December 2016
The nonparametric confidence interval for an unknown function is quite a useful tool in statistical inference, and thus there exists a wide body of literature on the topic. The primary issues are smoothing parameter selection using an appropriate criterion, and then the coverage probability and length of the associated confidence interval. Here our focus is on the interval length in general and, in particular, on the variability in the lengths of nonparametric intervals for probability density and hazard rate functions. We start with the analysis of a nonparametric confidence interval for a probability density function, noting that the confidence interval length is directly proportional to the square root of the density function. That is, the variability of the length of the confidence interval is driven by the variance of the estimator used to estimate the square root of the density function. We therefore propose and use a kernel-based constant-variance estimator of the square root of a density function. The performance of the confidence intervals so obtained is studied through simulations. The methodology is then extended to nonparametric confidence intervals for the hazard rate function. Changing direction somewhat, the second part of this thesis presents a statistical study of daily snow depth trends in the United States and Canada from 1960 to 2009. A storage model balance equation with periodic features is used to describe the daily snow depth process. Changepoints (inhomogeneity features) are permitted in the model in the form of mean level shifts. The results show that snow depths are mostly declining in the United States. In contrast, snow depths seem to be increasing in Canada, especially in north-western areas of the country. On the whole, more grids are estimated to have an increasing snow trend than a decreasing trend. The changepoint component in the model serves to lessen the overall magnitude of the trends in most locations.
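
A sketch of the variance-stabilising idea in the first part of the abstract, assuming a Gaussian kernel: since the KDE variance is approximately f(x)R(K)/(nh), the square root of the KDE has roughly constant variance R(K)/(4nh), so intervals built on the root scale have constant width. The data and confidence level are illustrative.

import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(3)
data = rng.gamma(shape=3.0, scale=1.0, size=400)
n = data.size

kde = gaussian_kde(data)
h = kde.factor * data.std(ddof=1)    # effective bandwidth actually used
RK = 1.0 / (2.0 * np.sqrt(np.pi))    # roughness of the Gaussian kernel

x = np.linspace(0.0, 12.0, 200)
f = kde(x)
half = norm.ppf(0.975) * np.sqrt(RK / (4 * n * h))  # constant on the root scale
lo = np.maximum(np.sqrt(f) - half, 0.0) ** 2
hi = (np.sqrt(f) + half) ** 2
# Back on the density scale, widths are proportional to sqrt(f), matching the
# proportionality noted in the abstract.
print(f"interval width ranges from {np.min(hi - lo):.4f} to {np.max(hi - lo):.4f}")
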
15

A Novel Data-based Stochastic Distribution Control for Non-Gaussian Stochastic Systems

Zhang, Qichun, Wang, H. 06 April 2021
This note presents a novel data-based approach to investigate the non-Gaussian stochastic distribution control problem. As motivation, the existing methods are summarised with regard to their drawbacks, for example the need to train neural network weights for an unknown stochastic distribution. To overcome these disadvantages, a new transformation for the dynamic probability density function is given by kernel density estimation using interpolation. Based upon this transformation, a representative model is developed and the stochastic distribution control problem is transformed into an optimisation problem. Then, data-based direct optimisation and identification-based indirect optimisation are proposed. In addition, the convergence of the presented algorithms is analysed and their effectiveness is evaluated by numerical examples. In summary, the contributions of this note are as follows: 1) a new data-based probability density function transformation is given; 2) optimisation algorithms are given based on the presented model; and 3) a new research framework is demonstrated as a potential extension to the existing st
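
A toy sketch of the two building blocks summarised above: a KDE-plus-interpolation representation of the output probability density function, and an optimisation of a control input against a target density. The plant, the target and every numerical choice are invented, not the note's system.

import numpy as np
from scipy.stats import gaussian_kde, norm
from scipy.interpolate import interp1d
from scipy.optimize import minimize_scalar

grid = np.linspace(-6.0, 6.0, 241)
dz = grid[1] - grid[0]
target = norm.pdf(grid, loc=0.0, scale=1.0)  # desired output PDF

def plant_output(u, n=2000):
    # Toy non-Gaussian plant: the control input u shifts a skewed output.
    # A fixed seed gives common random numbers across candidate inputs.
    rng = np.random.default_rng(0)
    return rng.gamma(2.0, 1.0, n) - 2.0 + u

def cost(u):
    vals = gaussian_kde(plant_output(u))(grid)     # KDE of the output PDF
    pdf = interp1d(grid, vals)                     # continuous representation
    return np.sum((pdf(grid) - target) ** 2) * dz  # integrated squared error

res = minimize_scalar(cost, bounds=(-3.0, 3.0), method="bounded")
print(f"control input minimising the PDF mismatch: u = {res.x:.2f}")
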
16

Multiple imputation in the presence of a detection limit, with applications : an empirical approach / Shawn Carl Liebenberg

Liebenberg, Shawn Carl January 2014
Scientists often encounter unobserved or missing measurements that are typically reported as less than a fixed detection limit. This especially occurs in the environmental sciences, where detection of low exposures is not possible due to limitations of the measuring instrument, and the resulting data are often referred to as type I and type II left-censored data. Observations lying below the detection limit are therefore often ignored, or 'guessed', because they cannot be measured accurately. However, reliable estimates of the population parameters are nevertheless required to perform statistical analysis. The problem of dealing with values below a detection limit becomes increasingly complex when a large number of observations are present below this limit. Researchers thus have an interest in developing statistically robust estimation procedures for dealing with left- or right-censored data sets (Singh and Nocerino 2002). This study focuses on several main components regarding the problems mentioned above. The imputation of censored data below a fixed detection limit is studied, particularly using the maximum likelihood procedure of Cohen (1959), and several variants thereof, in combination with four new variations of the multiple imputation concept found in the literature. Furthermore, the focus also falls strongly on estimating the density of the resulting imputed, 'complete' data set by applying various kernel density estimators. It should be noted that bandwidth selection issues are not of importance in this study and are left for further research. The maximum likelihood estimation method of Cohen (1959) is compared with several variant methods, to establish which of these maximum likelihood estimation procedures for censored data estimates the population parameters of three chosen lognormal distributions most reliably in terms of well-known discrepancy measures. These methods are implemented in combination with four new multiple imputation procedures, respectively, to assess which of these nonparametric methods is most effective at imputing the 12 censored values below the detection limit, with regard to the global discrepancy measures mentioned above. Several variations of the Parzen-Rosenblatt kernel density estimate are fitted to the completed, filled-in data sets obtained from the previous methods, to establish the preferred data-driven method for estimating these densities. The primary focus of the current study is therefore the performance of the four chosen multiple imputation methods, as well as the recommendation of methods and procedural combinations for dealing with data in the presence of a detection limit. An extensive Monte Carlo simulation study was performed to compare the various methods and procedural combinations. Conclusions and recommendations regarding the best of these methods and combinations are made based on the study's results. / MSc (Statistics), North-West University, Potchefstroom Campus, 2014
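
A sketch of type I left-censored maximum likelihood estimation in the spirit of Cohen (1959), followed by one imputation draw below the detection limit and a Parzen-Rosenblatt density fit on the completed data; the lognormal parameters and the detection limit are illustrative, not the study's settings.

import numpy as np
from scipy.stats import norm, gaussian_kde
from scipy.optimize import minimize

rng = np.random.default_rng(5)
DL = 0.4                                 # detection limit on the raw scale
raw = rng.lognormal(mean=0.0, sigma=1.0, size=300)
obs = np.log(raw[raw >= DL])             # observed values, log scale
n_cens = int(np.sum(raw < DL))           # below the DL, only the count is known
logDL = np.log(DL)

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)            # keeps sigma positive
    ll_obs = norm.logpdf(obs, mu, sigma).sum()        # fully observed part
    ll_cens = n_cens * norm.logcdf(logDL, mu, sigma)  # censored contribution
    return -(ll_obs + ll_cens)

res = minimize(neg_loglik, x0=[0.0, 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")

# One multiple-imputation draw: sample the censored values from the fitted
# normal truncated above at log(DL), via inverse-CDF sampling.
u = rng.uniform(0.0, norm.cdf(logDL, mu_hat, sigma_hat), n_cens)
imputed = norm.ppf(u, mu_hat, sigma_hat)

# Parzen-Rosenblatt KDE on the completed (observed + imputed) data.
density = gaussian_kde(np.concatenate([obs, imputed]))
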
18

A non-parametric efficiency and productivity analysis of transition banking

Kenjegalieva, Karligash January 2007
This thesis examines the banking efficiency and productivity of thirteen transition Central and Eastern European banking systems during 1998-2003 using Data Envelopment Analysis (DEA). It proposes a non-parametric methodology for a non-radial Russell output efficiency measure of banking firms, incorporating risk as an undesirable output. In addition, the proposed efficiency measure handles unrestricted data, i.e. both positive and negative. The Luenberger productivity index is suggested, which is applicable to technologies where desirable and undesirable outputs are jointly produced and possibly negative. Furthermore, the thesis addresses a main issue in the literature on banking performance measurement: the lack of consistency in the conceptual and theoretical considerations used to describe the banking production process. Consequently, a meta-analysis tool to examine the choice of input and output definitions in the banking efficiency literature is suggested. In addition, the performance measures are estimated using three alternative definitions of the banking production process, focusing on the risk and environmental dimensions of bank efficiency and productivity, with further comparative analysis using bootstrapping and kernel density techniques. Overall, the empirical results suggest that in Central and Eastern Europe, Czech, Hungarian and Polish banks were the most technically efficient, and banking risk was mainly affected by external environmental factors during the analysed period. The productivity analysis implies that the main driver of productivity change in Central and Eastern European banks is technological improvement. As the meta-analysis revealed, the choice of a particular approach to describing the banking production process is determined not by the availability of particular input or output variables but by the researcher's theoretical considerations. Statistical tests and density analysis indicate that efficiency scores, returns parameters and productivity indexes are sensitive to the choice of approach.
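
For orientation, a sketch of a standard output-oriented DEA efficiency score under constant returns to scale, posed as a linear programme. The thesis's non-radial Russell measure with undesirable and possibly negative outputs is a substantial extension not reproduced here, and the bank data below are made up.

import numpy as np
from scipy.optimize import linprog

X = np.array([[3.0, 5.0, 4.0, 6.0],   # inputs, shape (n_inputs, n_banks)
              [2.0, 4.0, 3.0, 5.0]])
Y = np.array([[4.0, 7.0, 5.0, 8.0]])  # outputs, shape (n_outputs, n_banks)
m, n = X.shape
s = Y.shape[0]

def output_efficiency(j):
    # max phi s.t. X @ lam <= X[:, j], Y @ lam >= phi * Y[:, j], lam >= 0.
    # Decision vector: [phi, lam_1, ..., lam_n]; linprog minimises, so use -phi.
    c = np.concatenate([[-1.0], np.zeros(n)])
    A_ub = np.vstack([np.hstack([np.zeros((m, 1)), X]),  # input constraints
                      np.hstack([Y[:, [j]], -Y])])       # output constraints
    b_ub = np.concatenate([X[:, j], np.zeros(s)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]  # phi >= 1; a bank is efficient when phi equals 1

for j in range(n):
    print(f"bank {j}: phi = {output_efficiency(j):.3f}")
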
19

Bergman kernel on toric Kähler manifolds

Pokorny, Florian Till January 2011
Let (L, h) → (X, ω) be a compact toric polarized Kähler manifold of complex dimension n. For each k ∈ N, the fibre-wise Hermitian metric h^k on L^k induces a natural inner product on the vector space C^∞(X, L^k) of smooth global sections of L^k by integration with respect to the volume form ω^n/n!. The orthogonal projection P_k : C^∞(X, L^k) → H^0(X, L^k) onto the space H^0(X, L^k) of global holomorphic sections of L^k is represented by an integral kernel B_k, which is called the Bergman kernel (with parameter k ∈ N). The restriction ρ_k : X → R of the norm of B_k to the diagonal in X × X is called the density function of B_k. On a dense subset of X, we describe a method for computing the coefficients of the asymptotic expansion of ρ_k as k → ∞ in this toric setting. We also provide a direct proof of a result which illuminates the off-diagonal decay behaviour of toric Bergman kernels. We fix a parameter l ∈ N and consider the projection P_{l,k} from C^∞(X, L^k) onto those global holomorphic sections of L^k that vanish to order at least lk along some toric submanifold of X. There exists an associated toric partial Bergman kernel B_{l,k} giving rise to a toric partial density function ρ_{l,k} : X → R. For such toric partial density functions, we determine new asymptotic expansions over certain subsets of X as k → ∞. Euler-Maclaurin sums and Laplace's method are utilized as important tools for this. We discuss the case of a polarization of CP^n in detail and also investigate the non-compact Bargmann-Fock model with imposed vanishing at the origin. We then discuss the relationship between the slope inequality and the asymptotics of Bergman kernels with vanishing, and study how a version of Song and Zelditch's toric localization of sums result generalizes to arbitrary polarized Kähler manifolds. Finally, we construct families of induced metrics on blow-ups of polarized Kähler manifolds. We relate those metrics to partial density functions and study their properties for a specific blow-up of C^n and CP^n in more detail.
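
As a worked instance of the Bargmann-Fock model with imposed vanishing mentioned above (in one standard normalisation; the thesis's conventions may differ):

% Bargmann-Fock space on C with weight e^{-k|z|^2}: reproducing kernel and
% (full) density function
\[
  B_k(z, w) = \frac{k}{\pi}\, e^{k z \bar{w}}, \qquad
  \rho_k(z) = B_k(z, z)\, e^{-k|z|^2} = \frac{k}{\pi}.
\]
% Imposing vanishing to order at least lk at the origin keeps only the
% monomials z^m with m >= lk, so the normalised partial density function is a
% Poisson tail probability:
\[
  \frac{\rho_{l,k}(z)}{\rho_k(z)}
  = e^{-k|z|^2} \sum_{m \ge lk} \frac{(k|z|^2)^m}{m!}
  \;\longrightarrow\;
  \begin{cases} 1, & |z|^2 > l, \\ 0, & |z|^2 < l, \end{cases}
  \qquad k \to \infty,
\]
% exhibiting the "forbidden region" \{ |z|^2 < l \}, where holomorphic sections
% vanishing to order lk cannot concentrate.
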
20

Estimation of Kinetic Parameters From List-Mode Data Using an Indirect Approach

Ortiz, Joseph Christian January 2016
This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful for expediting the drug development process, as well as for providing a dosage regimen for patients. Traditionally, the uptake rate of a drug in a system is obtained by sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system is exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect approach, which is a two-part process, was used: first the compartmental activity was obtained from the data, and then the kinetic parameters were estimated. The novel aspect of the research is using list-mode data to obtain the activity curves from a system, as opposed to a traditional binned approach. Using techniques from information-theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs on each photomultiplier tube, for each event, was generated on the fly and used in a least-squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels as well as time sample densities was explored. Once an estimate for the activity was obtained, the kinetic parameters were obtained using multiple cost functions and compared to each other using the mean squared error as the figure of merit.
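
A much-simplified sketch of the indirect approach's first step: estimate a time-activity curve by kernel density estimation over (synthetic) list-mode event timestamps, then fit a washout rate by least squares. The actual work estimates densities of PMT voltage outputs per event; the mono-exponential toy below is an invented stand-in.

import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)
k_true, n_events, T = 0.15, 5000, 40.0  # washout rate (1/min), counts, scan (min)

# Event times drawn from a decaying mono-exponential activity, sampled by
# inverting the CDF of an exponential truncated to [0, T].
u = rng.uniform(0.0, 1.0, n_events)
t = -np.log(1.0 - u * (1.0 - np.exp(-k_true * T))) / k_true

# KDE over the timestamps, scaled by total counts, estimates the activity curve.
kde = gaussian_kde(t, bw_method=0.1)
tt = np.linspace(0.5, T, 200)
activity = n_events * kde(tt)

def model(s, A0, k):
    return A0 * np.exp(-k * s)

popt, _ = curve_fit(model, tt, activity, p0=[500.0, 0.1])
print(f"estimated washout rate: {popt[1]:.3f} per min (true {k_true})")
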
