761
Ecosystem succession: a general hypothesis and a test model of a grassland. January 1980.
Luis T. Gutierrez, Willard R. Fey. / Includes index. / Bibliography: p. [220]-228.
762
Modelling space-use and habitat preference from wildlife telemetry data. Aarts, Geert. January 2007.
Management and conservation of animal populations requires information on where the animals are, why they are there, and where else they could be. These objectives are typically approached by collecting data on the animals' use of space, relating these data to prevailing environmental conditions, and employing the fitted relationships to predict usage in other geographical regions. Technical advances in wildlife telemetry have brought manifold increases in the amount and quality of available data, creating the need for a statistical framework that can use them to make population-level inferences about habitat preference and space-use. Such a framework has been slow in coming because wildlife telemetry data are, by definition, spatio-temporally autocorrelated, unbalanced, presence-only observations of behaviourally complex animals responding to a multitude of cross-correlated environmental variables. I review the evolution of techniques for the analysis of space-use and habitat preference, from simple hypothesis tests to modern modelling techniques, and outline the essential features of a framework that emerges naturally from these foundations. Within this framework, I discuss eight challenges inherent in the spatial analysis of telemetry data and, for each, propose solutions that can work in tandem. Specifically, I propose a logistic, mixed-effects approach that uses generalized additive transformations of the environmental covariates and is fitted to a response dataset comprising the telemetry and simulated observations, under a case-control design. I apply this framework to non-trivial case studies using data from satellite-tagged grey seals (Halichoerus grypus) foraging off the east and west coasts of Scotland, and northern gannets (Morus bassanus) from Bass Rock. I find that sea bottom depth and sediment type explain little of the variation in gannet usage, but that grey seals from different regions strongly prefer coarse sediment types, the ideal burrowing habitat of sandeels, their preferred prey. The results also suggest that prey aggregation within the water column might be as important as horizontal heterogeneity. More importantly, I conclude that, despite the complex behaviour of the study species, flexible empirical models can capture the environmental relationships that shape population distributions.
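The case-control design at the heart of this framework pairs observed telemetry fixes ("used" points) with simulated "available" points and models the contrast. The sketch below, with synthetic data and plain logistic regression rather than the mixed-effects generalized additive model the abstract describes, illustrates only that core design; the covariates and their parameter values are invented for illustration.

```python
# A minimal use-availability (case-control) design: telemetry fixes are the
# "used" points (y = 1); points simulated uniformly from the study region are
# the "available" controls (y = 0). A logistic model of y on environmental
# covariates then estimates relative habitat preference.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical covariates at each location: depth (m), sediment coarseness.
n_used, n_avail = 500, 2000
depth_used  = rng.normal(80, 15, n_used)      # used points cluster near 80 m
sed_used    = rng.normal(0.7, 0.1, n_used)    # and over coarse sediment
depth_avail = rng.uniform(10, 200, n_avail)   # availability: uniform sample
sed_avail   = rng.uniform(0.0, 1.0, n_avail)

X = np.column_stack([
    np.ones(n_used + n_avail),                        # intercept
    np.concatenate([depth_used, depth_avail]) / 100,  # scaled depth
    np.concatenate([sed_used, sed_avail]),            # sediment coarseness
])
y = np.concatenate([np.ones(n_used), np.zeros(n_avail)])

# Plain logistic regression fitted by gradient ascent on the log-likelihood.
beta = np.zeros(X.shape[1])
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (y - p) / len(y)

print("coefficients (intercept, depth/100, sediment):", beta)
```

A strongly positive sediment coefficient would mirror the grey-seal result reported above: used locations over-represent coarse sediment relative to what was available.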
763
Geographically weighted spatial interaction (GWSI). Kordi, Maryam. January 2013.
One of the key concerns in spatial analysis and modelling is to study and analyse similarities or dissimilarities between places over geographical space. However, "global" spatial models may fail to identify spatial variations of relationships (spatial heterogeneity) because they assume spatial stationarity of relationships. In many real-life situations spatial variation in relationships is likely to exist, and the assumption of global stationarity may be highly unrealistic, causing a large amount of spatial information to be ignored. In contrast, local spatial models emphasise differences or dissimilarity over space and focus on identifying spatial variations in relationships. These models allow the parameters of models to vary locally and can provide more useful information on the processes generating the data in different parts of the study area. In this study, a framework for localising spatial interaction models, based on geographically weighted (GW) techniques, has been developed. This framework can help in detecting, visualising and analysing spatial heterogeneity in spatial interaction systems. In order to apply the GW concept to spatial interaction models, we investigate several approaches differing mainly in the way calibration points (flows) are defined and spatial separation (distance) between flows is calculated. As a result, a series of localised geographically weighted spatial interaction (GWSI) models are developed. Using custom-built algorithms and computer code, we apply the GWSI models to a journey-to-work dataset in Switzerland for validation and comparison with the related global models. The results of the model calibrations are visualised using a series of conventional and flow maps along with some matrix visualisations. The comparison of the results indicates that in most cases local GWSI models exhibit an improvement over the global models, both in providing more useful local information and in model performance and goodness-of-fit.
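As a concrete illustration of the GW idea applied to spatial interaction, the sketch below calibrates a log-linear gravity model locally at one flow, down-weighting all other flows by a Gaussian kernel on the distance between their origins. This is only one of the several flow-distance definitions the abstract mentions, and the data, bandwidth and model form are invented for illustration.

```python
# Local calibration of a log-linear gravity model
# T_ij = k * O_i^a * D_j^b * d_ij^(-g), fitted by weighted least squares on
# logs, with Gaussian kernel weights that decay with the distance between
# each flow's origin and the calibration flow's origin.
import numpy as np

rng = np.random.default_rng(1)
n = 400                                   # number of origin-destination flows
xy_o = rng.uniform(0, 100, (n, 2))        # origin coordinates
O = rng.lognormal(8, 0.5, n)              # origin masses (e.g. workers)
D = rng.lognormal(8, 0.5, n)              # destination masses (e.g. jobs)
d = rng.uniform(1, 50, n)                 # origin-destination separation
T = 1e-4 * O * D * d**-1.5 * rng.lognormal(0, 0.2, n)  # synthetic flows

A = np.column_stack([np.ones(n), np.log(O), np.log(D), np.log(d)])
ylog = np.log(T)

def local_gravity(i, bandwidth=20.0):
    """Calibrate the gravity model at flow i with Gaussian kernel weights."""
    dist = np.linalg.norm(xy_o - xy_o[i], axis=1)  # distance between origins
    w = np.exp(-0.5 * (dist / bandwidth) ** 2)
    sw = np.sqrt(w)[:, None]                       # WLS via row scaling
    coef, *_ = np.linalg.lstsq(A * sw, ylog * sw[:, 0], rcond=None)
    return coef                                    # [log k, a, b, -g]

print(local_gravity(0))  # local parameters near flow 0's origin
```

Repeating the calibration at every flow yields a surface of locally varying parameters, which is exactly the spatial heterogeneity the GWSI framework is designed to expose.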
764
Optimal asset allocation for South African pension funds under the revised Regulation 28. Koegelenberg, Frederik Johannes. 2012.
Thesis (MComm)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: On 1 July 2011 the revised version of Regulation 28, which governs the investments of the South African pension fund industry, took effect. The new version allows a pension fund to invest up to 25 percent of its total assets in foreign assets, compared to 20 percent under the previous version. The aim of this study is to determine whether it would be optimal for a South African pension fund to invest the full 25 percent of its portfolio in foreign assets.

Seven different optimization models are evaluated in this study to determine the optimal asset mix. The optimization models were selected through an extensive literature study in order to address key optimization issues, e.g. which risk measure to use, whether parametric or non-parametric optimization should be used, and whether the Mean-Variance model defined by Markowitz, which has been the benchmark with regard to asset allocation, is the best model for determining long-term asset allocation strategies.

The results obtained from the different models were used to recommend the optimal long-term asset allocation for a South African pension fund, and were also compared to determine which optimization model proved the most efficient.

The study found that when using only the past ten years of data to construct the portfolios, it would have been optimal to invest in only South African asset classes, with statistically significant differences in returns in some cases. Using the past 20 years of data to construct the optimal portfolios provided mixed results, while the 30-year period was more in favour of an international portfolio with the full 25% invested in foreign asset classes.

A comparison of the different models provided a clear winner with regard to the probability of outperformance: the Historical Resampled Mean-Variance optimization provided the highest probability of outperforming the benchmark. From the study it also became evident that a 20-year period is the optimal historical data period for constructing the optimal portfolio. / AFRIKAANSE OPSOMMING: On 1 July 2011 the revised Regulation 28, which regulates the investments of South African pension funds, came into effect. This revised version allows pension funds to invest 25% of their funds in foreign asset classes instead of 20%, as in the previous version. This study determines whether it would really be advantageous for a South African pension fund to invest the full 25% in foreign asset classes.

Seven different optimization models were used to attempt to construct the optimal portfolio. The optimization models were chosen after an extensive literature study so that key issues with regard to optimization could be addressed. These issues include which risk measure ought to be used in the optimization process, whether a parametric or non-parametric model should be used, and whether the Mean-Variance model, defined by Markowitz in 1952 and for many years the benchmark for portfolio optimization, is still the best model to use.

The results obtained from the different optimization models were subsequently used to compile the optimal long-term asset allocation for a South African pension fund. The different optimization models were also compared with one another to determine whether any one model is better than the rest.

From the results it was clear that a portfolio consisting only of South African assets performs better if only the past 10 years of data are used to construct the portfolio. In most cases these results were also confirmed by means of hypothesis tests. Using the past 20 years of data to construct the portfolios yielded mixed results, while the past 30 years of data in most cases identified an internationally diversified portfolio as the better portfolio.

In a comparison of the different optimization models, the Historical Resampled Mean-Variance model was clearly identified as the better model. This model achieved the highest probability of outperforming the set benchmark portfolios. The results also pointed to the 20-year period as the best data period to use when constructing the optimal portfolio.
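To make the optimization setting concrete, here is a minimal mean-variance sketch with a Regulation 28 style 25% cap on foreign exposure. The asset classes, expected returns, volatilities and correlations are illustrative placeholders, not estimates from the thesis, and the model shown is plain Markowitz rather than any of the seven models compared.

```python
# A sketch of Markowitz mean-variance optimization with a Regulation 28
# style cap on foreign exposure. All inputs below are invented placeholders.
import numpy as np
from scipy.optimize import minimize

assets  = ["SA equity", "SA bonds", "SA cash", "Foreign equity", "Foreign bonds"]
mu      = np.array([0.14, 0.09, 0.06, 0.12, 0.07])       # expected returns
sigma   = np.array([0.18, 0.07, 0.01, 0.16, 0.08])       # volatilities
corr    = np.array([[1.0, 0.3, 0.0, 0.6, 0.2],
                    [0.3, 1.0, 0.1, 0.2, 0.5],
                    [0.0, 0.1, 1.0, 0.0, 0.1],
                    [0.6, 0.2, 0.0, 1.0, 0.4],
                    [0.2, 0.5, 0.1, 0.4, 1.0]])
cov     = np.outer(sigma, sigma) * corr
foreign = np.array([0.0, 0.0, 0.0, 1.0, 1.0])            # foreign indicator

target_return, foreign_cap = 0.10, 0.25

cons = [
    {"type": "eq",   "fun": lambda w: w.sum() - 1.0},             # fully invested
    {"type": "eq",   "fun": lambda w: w @ mu - target_return},    # return target
    {"type": "ineq", "fun": lambda w: foreign_cap - w @ foreign}, # <= 25% foreign
]
res = minimize(lambda w: w @ cov @ w, x0=np.full(5, 0.2),
               bounds=[(0.0, 1.0)] * 5, constraints=cons)

for name, w in zip(assets, res.x):
    print(f"{name:15s} {w:6.1%}")
```

Whether the foreign-cap constraint binds at the optimum is, in miniature, the question the thesis asks of the full 25% allowance.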
765
The implementation of noise addition partial least squares. Moller, Jurgen Johann. 2009.
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2009. / When determining the chemical composition of a specimen, traditional laboratory techniques are often both expensive and time consuming. It is therefore preferable to employ more cost-effective spectroscopic techniques such as near infrared (NIR) spectroscopy. Traditionally, the calibration problem has been solved by means of multiple linear regression to specify the model between the spectral measurements X and the chemical composition Y. Traditional regression techniques, however, quickly fail when applied to spectroscopic data, as the number of wavelengths can easily be several hundred, often exceeding the number of chemical samples. This scenario, together with the high level of collinearity between wavelengths, necessarily leads to singularity problems when calculating the regression coefficients.

Ways of dealing with the collinearity problem include principal component regression (PCR), ridge regression (RR) and partial least squares (PLS) regression. Both PCR and RR require a significant amount of computation when the number of variables is large. PLS overcomes the collinearity problem in a similar way to PCR, by modelling both the chemical and spectral data as functions of common latent variables.

The quality of the employed reference method greatly impacts the coefficients of the regression model and, therefore, the quality of its predictions. With both X and Y subject to random error, the quality of the predictions of Y will be reduced as the level of noise increases. Previously conducted research focused mainly on the effects of noise in X. This thesis focuses on a method proposed by Dardenne and Fernández Pierna, called Noise Addition Partial Least Squares (NAPLS), which attempts to deal with the problem of poor reference values.

Some aspects of the theory behind PCR, PLS and model selection are discussed. This is followed by a discussion of the NAPLS algorithm. Both PLS and NAPLS are implemented on various datasets that arise in practice, in order to determine cases where NAPLS will be beneficial over conventional PLS. For each dataset, specific attention is given to the analysis of outliers, influential values and the linearity between X and Y, using graphical techniques.
Lastly, the performance of the NAPLS algorithm is evaluated for various
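The sketch below illustrates only the general noise-addition principle that motivates NAPLS: perturb the reference values Y with increasing levels of noise, refit a PLS model at each level, and watch how the cross-validated fit degrades. It is a hedged interpretation of that principle, not a reproduction of Dardenne and Fernández Pierna's exact algorithm, and the spectra are synthetic.

```python
# Illustrating the noise-addition idea: degrade the reference values y with
# Gaussian noise of growing standard deviation, refit PLS each time, and
# track cross-validated R^2. Not the exact NAPLS algorithm.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic "spectra": 100 samples, 300 highly collinear wavelengths driven
# by 3 latent factors, mimicking the p >> n NIR setting described above.
n, p = 100, 300
latent = rng.normal(size=(n, 3))
X = latent @ rng.normal(size=(3, p)) + 0.05 * rng.normal(size=(n, p))
y = latent @ np.array([1.0, -0.5, 0.3])          # true composition

for noise_sd in [0.0, 0.1, 0.3, 0.6]:
    y_noisy = y + rng.normal(0, noise_sd, n)      # degrade reference values
    pls = PLSRegression(n_components=3)
    r2 = cross_val_score(pls, X, y_noisy, cv=5, scoring="r2").mean()
    print(f"reference noise sd={noise_sd:.1f}  CV R^2={r2:.3f}")
```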
766
Derivation of Probability Density Functions for the Relative Differences in the Standard and Poor's 100 Stock Index Over Various Intervals of Time. Bunger, R. C. (Robert Charles). 08 1900.
In this study a two-part mixed probability density function was derived which describes the relative changes in the Standard and Poor's 100 Stock Index over various intervals of time. The density function is a mixture of two different halves of normal distributions. Optimal values are given for the mean and for the standard deviations of the two halves. A general form of the function is also given, which uses linear regression models to estimate the standard deviations and the means.
The density functions allow stock market participants trading index options and futures contracts on the S & P 100 Stock Index to determine probabilities of success or failure of trades involving price movements of certain magnitudes in given lengths of time.
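One standard way to write such a two-piece density, consistent with the description above though not necessarily in the thesis's exact parameterization, joins two half-normal curves at a common mode $\mu$ with left and right standard deviations $\sigma_1$ and $\sigma_2$:

```latex
f(x) =
\begin{cases}
\dfrac{\sqrt{2/\pi}}{\sigma_1 + \sigma_2}
  \exp\!\left( -\dfrac{(x-\mu)^2}{2\sigma_1^2} \right), & x \le \mu, \\[2ex]
\dfrac{\sqrt{2/\pi}}{\sigma_1 + \sigma_2}
  \exp\!\left( -\dfrac{(x-\mu)^2}{2\sigma_2^2} \right), & x > \mu.
\end{cases}
```

Because both halves share the constant $\sqrt{2/\pi}/(\sigma_1+\sigma_2)$, the density is continuous at the mode and integrates to one, with the asymmetry between downside and upside index moves controlled by the gap between $\sigma_1$ and $\sigma_2$.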
767
Vícejazyčná databáze kolokací (Multilingual Database of Collocations). Helcl, Jindřich. January 2014.
Collocations are groups of words that co-occur more often than would be expected if they appeared independently. They also include phrases that give a new meaning to a group of otherwise unrelated words. This thesis aims to find collocations in large data and to create a database that allows their retrieval. Pointwise mutual information (PMI), a score based on word frequencies, is computed to find the collocations. Words with the highest PMI values are considered candidates for good collocations. Chosen collocations are stored in a database in a format that allows searching with Apache Lucene. Part of the thesis is a web user interface providing a quick and easy way to search collocations. If the service is fast enough and the collocations are good, translators will be able to use it to find proper equivalents in the target language, and students of a foreign language will be able to use it to extend their vocabulary. The database will be created independently in several languages, including Czech and English.
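PMI itself is simple to compute from corpus counts. The toy sketch below, with a made-up corpus, scores bigrams by PMI and applies a small frequency floor, mirroring the candidate-selection step described above.

```python
# Ranking bigrams by pointwise mutual information:
# PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ), estimated from corpus counts.
import math
from collections import Counter

tokens = ("the strong tea was served with strong coffee and weak tea "
          "while strong tea remained the favourite").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

def pmi(x, y):
    p_xy = bigrams[(x, y)] / n_bi
    p_x, p_y = unigrams[x] / n_uni, unigrams[y] / n_uni
    return math.log2(p_xy / (p_x * p_y))

# Highest-PMI frequent bigrams are the collocation candidates.
for (x, y), count in bigrams.items():
    if count >= 2:                 # frequency floor to avoid rare-pair noise
        print(f"{x} {y}: PMI = {pmi(x, y):.2f}")
```

The frequency floor matters because raw PMI rewards pairs seen only once; a production system over large corpora would combine PMI with count thresholds before indexing the survivors in Lucene.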
768
An exploration of renewable energy policies with an econometric approach. Kilinc Ata, Nurcan. January 2015.
This thesis examines renewable energy policies in a set of case study jurisdictions (the European Union, United States, United Kingdom, Turkey, and Nigeria) using quantitative and qualitative analysis. It adopts a three-pronged approach to address three main issues. The first paper uses a 1990-2008 panel dataset to conduct an econometric analysis of policy instruments, such as feed-in tariffs, quotas, tenders, and tax incentives, in promoting renewable energy deployment in 27 EU countries and 50 US states. The results suggest that renewable energy policy instruments play a significant role in encouraging renewable energy sources. Using data from 1990 to 2012 with a vector autoregression (VAR) approach for three case study countries, namely the United Kingdom, Turkey, and Nigeria, the second paper focuses on how renewable energy consumption, as a share of total electricity consumption, is affected by economic growth and electricity prices. The findings from the VAR model illustrate that the relationship between the case study countries' economic growth and renewable energy consumption is positive and significant. The third paper focuses on the relationship between renewable energy policies and investment in renewables in the United Kingdom and Turkey. It builds on current knowledge of renewable energy investment and develops a new conceptual framework to guide analyses of policies that support renewables. Past and current trends in renewable energy investment are investigated by reviewing the literature on the linkage between renewable energy investment and policies, identifying patterns and similarities in renewable energy investment. This includes an interview analysis with investors focusing on policies for renewable energy investment. The results from the interviews and the conceptual analysis show that renewable energy policies play a crucial role in determining investment in renewable energy sources. Overall, the findings of this thesis demonstrate that investment in the renewable energy sector grows as renewable energy policies are strengthened. The outcomes also contribute to the energy economics literature, especially for academic and subsequent research purposes.
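As a concrete, minimal version of the second paper's setup, the sketch below fits a small VAR to synthetic annual series for renewable share, GDP growth and electricity prices using statsmodels; the variable relationships and values are placeholders, not the thesis data.

```python
# A sketch of a three-variable VAR on synthetic annual data (1990-2012 gives
# 23 observations, as in the second paper). All numbers are placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
T = 23                                     # annual observations
gdp_growth = rng.normal(2.5, 1.0, T)       # GDP growth, percent
elec_price = np.cumsum(rng.normal(0.5, 0.3, T)) + 10   # trending price
re_share = (5 + 0.4 * gdp_growth - 0.2 * (elec_price - 10)
            + rng.normal(0, 0.5, T))       # renewable share of consumption

data = pd.DataFrame({"re_share": re_share,
                     "gdp_growth": gdp_growth,
                     "elec_price": elec_price})

model = VAR(data)
results = model.fit(maxlags=2, ic="aic")   # lag order chosen by AIC
print(results.summary())
```

With only 23 observations, lag order must stay small; the thesis's actual specification, deterministic terms and any stationarity transformations are not reproduced here.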
769
The "ADaM Cube" : Categorizing Portland, Oregon's Urbanization Using GIS and Spatial StatisticsGrotbo, Jeremy 26 May 2016 (has links)
Transportation availability and land use intensity are strongly related, with intense development concentrated near significant transportation investment. Transportation networks evolved in response to emergent transportation technologies and changing urban land uses, and the irregular distribution of transportation systems reinforced patterns of land use development, shaping urban form. Understanding the relationships between transportation and the intensity of land uses allows urban geographers and city planners to explain urbanization processes, as well as to identify areas historically susceptible to future development. The goal of this research is to develop a quantitative framework for the analysis of the development of urban form and its relationship to urban transportation systems. The research focuses on transportation accessibility, building density, and structural massing as the basic metrics in the categorization of urban form. Portland, Oregon serves as the case study environment, and the methodology examines the spatial and statistical relationships between these metrics for much of the city's urban area. Applying geographic information systems (GIS) and k-means cluster analysis, the urban form metrics are compared within the ADaM (Accessibility, Density, and Massing) cube, a model demonstrating comparative relationships as well as the geographic distribution and patterns of urban form in Portland, Oregon's neighborhoods. The finalized urban form catalog describes existing urban environments, but also indicates areas of impending transition: places having strong potential for reorganization with respect to higher levels of transportation accessibility. The ADaM Cube is thus a tool for characterizing Portland's existing urban form and for describing the vulnerability of urban neighborhoods to redevelopment pressure.
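A minimal version of the clustering step might look like the sketch below: standardize the three ADaM metrics and partition blocks with k-means. The metrics, their distributions and the choice of six clusters are invented for illustration, not taken from the study.

```python
# ADaM-style categorization sketch: standardize accessibility, density and
# massing per block, then k-means cluster into urban form categories.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_blocks = 1000
accessibility = rng.gamma(2.0, 2.0, n_blocks)    # e.g. network reach score
density = rng.gamma(1.5, 3.0, n_blocks)          # e.g. buildings per hectare
massing = density * rng.uniform(2, 8, n_blocks)  # e.g. built volume proxy

X = np.column_stack([accessibility, density, massing])
X_std = StandardScaler().fit_transform(X)        # put metrics on one scale

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X_std)

# Cluster centroids (back in original units) describe the urban form types;
# blocks far from their centroid are candidates for transition/redevelopment.
centroids = StandardScaler().fit(X).inverse_transform(km.cluster_centers_)
print(np.round(centroids, 1))
```

Standardizing first matters because k-means is distance-based: without it, whichever metric has the largest numeric range would dominate the categories.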
770
[en] METHODS FOR THE IMIDACLOPRID QUANTIFICATION IN AQUEOUS SOLUTIONS: METROLOGICAL VALIDATION AND COMPARISON BETWEEN UV-VIS SPECTROPHOTOMETRY AND HIGH-PERFORMANCE LIQUID CHROMATOGRAPHY. KELLY NEOOB DE CARVALHO CASTRO. 21 May 2007.
[pt] An analytical procedure is considered appropriate for a specific application when it is capable of generating reliable results that support decision-making with an adequate degree of confidence, its adequacy being consolidated through validation experiments. This work proposes the use of an alternative, simpler and more economical analytical procedure, based on molecular absorption spectrophotometry, for the quantification of imidacloprid, a systemic insecticide, in aqueous solution samples. It was demonstrated that the proposed procedure is fit for the intended use, with each validation stage described in detail, the limits established for each validation parameter taken into account, and appropriate statistical techniques applied in their evaluation: analysis of variance, regression analysis, significance tests, control charts and the estimation of measurement uncertainty. The measurement uncertainties of the routine and alternative procedures were estimated and compared with the stipulated tolerances, establishing the alternative procedure as adequate. An experimental comparison of this procedure with the routine one (based on HPLC) was carried out as part of the validation protocol. Besides the evaluation of the procedure for the quantification of imidacloprid at trace levels, the possibility of using it for the quantification of the same active ingredient in formulated products was also investigated. In this case, it was demonstrated, by comparing the estimated uncertainties with the established tolerances, that the alternative procedure is not adequate, since it presents uncertainties of approximately 50% of these tolerance values and thus lacks the metrological rigour required for this application. / [en] Fit-for-purpose analytical procedures must be sufficiently reliable to support any decision taken based on the generated results. In order to achieve that, a consolidated adequacy evaluation of the analytical procedure must be obtained by performing validation experiments. In this work, an alternative and simpler spectrophotometric method for the quantification of imidacloprid in aqueous samples was compared to the HPLC-UV based reference method used in routine analysis. The overall validation procedure started with a detailed description of each validation stage, followed by the setting of limits for each of the validation parameters, and then applied the following statistical techniques to evaluate each of the parameters: ANOVA, regression analysis, significance tests, control charts and uncertainty estimation. The estimation of measurement uncertainties, based on ISO-GUM recommendations, was done for both analytical procedures (the alternative one and the reference one). After comparing these uncertainties with the tolerance values, the adequacy of the proposed alternative procedure was confirmed. Finally, consolidating the validation, the experimental comparison of the quantification methods was conducted. Besides evaluating the analytical procedure for trace-level imidacloprid quantification in water samples, the proposed method was also evaluated as the analytical procedure for imidacloprid-based formulated products. In this case, it was demonstrated that the spectrophotometric method did not meet the metrological requirements for such an application, since the estimated uncertainties of the alternative procedure were about 50% of the tolerance values.
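The fitness-for-purpose decision described above reduces to comparing an expanded measurement uncertainty against a stipulated tolerance. The sketch below shows that arithmetic; the uncertainty components, coverage factor and the one-third acceptance threshold are illustrative assumptions, not values from the thesis, which found uncertainties of about 50% of the tolerance too large for formulated products.

```python
# Comparing an ISO-GUM style expanded uncertainty to a tolerance.
# All numbers, and the 1/3 acceptance threshold, are illustrative assumptions.
import math

def expanded_uncertainty(u_components, k=2.0):
    """Combine independent standard uncertainties by root-sum-of-squares
    and expand with coverage factor k (k = 2 gives ~95% coverage)."""
    return k * math.sqrt(sum(u * u for u in u_components))

tolerance = 0.50   # hypothetical allowed deviation, e.g. mg/L
# Hypothetical standard uncertainties: calibration curve, repeatability,
# volumetric steps.
U = expanded_uncertainty([0.08, 0.05, 0.03])

ratio = U / tolerance
print(f"U = {U:.3f}, U/tolerance = {ratio:.0%}")
print("fit for purpose" if ratio <= 1 / 3 else "not fit for purpose")
```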