101 |
Algoritmiese rangordebepaling van akademiese tydskrifte / Algorithmic ranking of academic journals. Strydom, Machteld Christina, 31 October 2007 (has links)
Summary
There is a need for an objective measure with which to determine and compare the quality of academic publications.
This research determined, from citation data, the influence or reaction generated by a publication. Use was made of an iterative algorithm that assigns weights to citations.
In the Internet environment this approach is already being applied with great success, among others by the PageRank algorithm of the Google search engine. This and other algorithms from the Internet environment were studied in order to design an algorithm for academic articles. A variation of the PageRank algorithm was chosen that determines an Influence value. The algorithm was tested on case studies. The empirical study indicates that this variation reflects the intuition of specialist researchers better than a mere count of citations.
Abstract
The ranking of journals is often used as an indicator of quality and is extensively used as a mechanism for determining promotion and funding.
This research studied ways of extracting the impact, or influence, of a journal
from citation data, using an iterative process that allocates a weight to the
source of a citation.
After evaluating and discussing, with specialist researchers, the characteristics that influence the quality and importance of research, a measure called the Influence factor was introduced, emulating the PageRank algorithm used by Google to rank web pages. The Influence factor can be seen as a measure of the reaction generated by a publication, based on the number of scientists who read and cited it. A good correlation was found between the rankings produced by the Influence factor and those given by specialist researchers. / Mathematical Sciences / M.Sc. (Operasionele Navorsing)
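The thesis's exact iteration is not reproduced in this record; as a hedged illustration, a PageRank-style computation of journal influence values from a citation matrix might look like the sketch below. The function name `influence_factor`, the damping value 0.85, and the toy citation counts are illustrative assumptions, not the author's code.

```python
import numpy as np

def influence_factor(C, d=0.85, tol=1e-10):
    """Iterative, PageRank-style influence values for journals.

    C[i, j] = number of citations from journal i to journal j.
    A citation is worth more when it comes from an influential journal,
    which is what distinguishes this from a plain citation count.
    """
    n = C.shape[0]
    out = C.sum(axis=1)
    P = C / np.where(out == 0, 1, out)[:, None]   # row-stochastic citation matrix
    v = np.full(n, 1.0 / n)                       # start from equal influence
    while True:
        v_new = (1 - d) / n + d * (P.T @ v)       # influence flows along citations
        if np.abs(v_new - v).sum() < tol:
            return v_new
        v = v_new

# Toy citation graph: journal 2 receives the most citations.
C = np.array([[0, 1, 3],
              [2, 0, 4],
              [1, 1, 0]])
print(influence_factor(C))
```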
|
102 |
The work-leisure relationship among working youth-centre members: implication for program planning with a test of the 'compensatory hypothesis'. January 1987 (has links)
by Ho Kam Wan. / Thesis (M.S.W.)--Chinese University of Hong Kong, 1987. / Bibliography: leaves 158-167.
|
103 |
Developing Criteria for Extracting Principal Components and Assessing Multiple Significance Tests in Knowledge Discovery Applications. Keeling, Kellie Bliss, 08 1900 (has links)
With advances in computer technology, organizations are able to store large amounts of data in data warehouses. There are two fundamental issues researchers must address: the dimensionality of data and the interpretation of multiple statistical tests. The first issue addressed by this research is the determination of the number of components to retain in principal components analysis. This research establishes regression, asymptotic theory, and neural network approaches for estimating mean and 95th percentile eigenvalues for implementing Horn's parallel analysis procedure for retaining components. Certain methods perform better for specific combinations of sample size and numbers of variables. The adjusted normal order statistic estimator (ANOSE), an asymptotic procedure, performs the best overall. Future research is warranted on combining methods to increase accuracy. The second issue involves interpreting multiple statistical tests. This study uses simulation to show that Parker and Rothenberg's technique using a density function with a mixture of betas to model p-values is viable for p-values from central and non-central t distributions. The simulation study shows that final estimates obtained in the proposed mixture approach reliably estimate the true proportion of the distributions associated with the null and nonnull hypotheses. Modeling the density of p-values allows for better control of the true experimentwise error rate and is used to provide insight into grouping hypothesis tests for clustering purposes. Future research will expand the simulation to include p-values generated from additional distributions. The techniques presented are applied to data from Lake Texoma where the size of the database and the number of hypotheses of interest call for nontraditional data mining techniques. The issue is to determine if information technology can be used to monitor the chlorophyll levels in the lake as chloride is removed upstream. A relationship established between chlorophyll and the energy reflectance, which can be measured by satellites, enables more comprehensive and frequent monitoring. The results have both economic and political ramifications.
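For context on the first issue: Horn's parallel analysis retains the components whose observed eigenvalues exceed those expected from random, uncorrelated data. The dissertation's regression, asymptotic-theory, and neural-network estimators approximate the mean and 95th-percentile eigenvalues that a brute-force Monte Carlo version, sketched below with illustrative names and parameter values, would simulate directly.

```python
import numpy as np

def parallel_analysis(X, n_sim=200, percentile=95, seed=0):
    """Horn's parallel analysis: retain the components whose observed
    eigenvalues exceed the given percentile of eigenvalues obtained
    from random (uncorrelated) data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    sim = np.empty((n_sim, p))
    for i in range(n_sim):
        R = np.corrcoef(rng.standard_normal((n, p)), rowvar=False)
        sim[i] = np.linalg.eigvalsh(R)[::-1]
    threshold = np.percentile(sim, percentile, axis=0)
    return int((obs > threshold).sum()), obs, threshold

X = np.random.default_rng(1).standard_normal((100, 8))
X[:, 1] += X[:, 0]                     # plant one correlated component
k, obs, thr = parallel_analysis(X)
print("components retained:", k)
```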
|
104 |
Change Detection in Telecommunication Data using Time Series Analysis and Statistical Hypothesis Testing. Eriksson, Tilda, January 2013 (has links)
In the base station system of the GSM mobile network there are a large number of counters tracking the behaviour of the system. When the software of the system is updated, we wish to find out which of the counters have changed their behaviour. This thesis work has shown that the counter data can be modelled as a stochastic time series with a daily profile and a noise term. Change detection can then be done by estimating the daily profile and the variance of the noise term and performing statistical hypothesis tests of whether the mean value and/or the daily profile of the counter data before and after the software update can be considered equal. For the counter data analysed, it seems reasonable in most cases to assume that the noise terms are approximately independent and normally distributed, which justifies the hypothesis tests. When the change detection is tested on data where the software is unchanged and on data with known software updates, the results are as expected in most cases. Thus the method seems to be applicable under the conditions studied.
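As a hedged sketch of this kind of test: the code below compares a counter's daily profile before and after an update with per-hour Welch t-tests and a Bonferroni correction. The thesis's actual test statistics are not given in this abstract, so the data shapes, names, and correction choice are assumptions.

```python
import numpy as np
from scipy import stats

def profile_change_test(before, after, alpha=0.05):
    """Flag hours of the daily profile whose mean changed after the
    software update. `before` and `after` have shape (days, 24);
    noise is assumed approximately independent and normal, as argued
    in the thesis. Uses per-hour Welch t-tests with a Bonferroni
    correction (an illustrative choice, not the thesis's exact test).
    """
    p_values = np.array([
        stats.ttest_ind(before[:, h], after[:, h], equal_var=False).pvalue
        for h in range(24)
    ])
    return p_values < alpha / 24, p_values

rng = np.random.default_rng(0)
profile = 100 + 30 * np.sin(np.linspace(0, 2 * np.pi, 24))
before = profile + rng.normal(0, 5, (30, 24))
after = profile + rng.normal(0, 5, (30, 24))
after[:, 12:15] += 20                  # simulated change around midday
changed, p = profile_change_test(before, after)
print("hours flagged as changed:", np.where(changed)[0])
```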
|
105 |
The statistical tests on mean reversion properties in financial markets. Wong, Chun-mei, May, 王春美, January 1994 (has links)
published_or_final_version / Statistics / Master / Master of Philosophy
|
107 |
Détection statistique d'information cachée dans des images naturelles / Statistical detection of hidden information in natural images. Zitzmann, Cathel, 24 June 2013 (has links)
The need for secure communication is nothing new: since antiquity, methods have existed to conceal a communication. Cryptography makes a message unintelligible by encrypting it; steganography hides the very fact that a message is being exchanged. This thesis was carried out within the project "Recherche d'Informations Cachées" (Hidden Information Research) funded by the Agence Nationale de la Recherche, in which the Université de Technologie de Troyes worked on the mathematical modelling of natural images and on the design of detectors of hidden information in digital images. The thesis studies steganalysis in natural images from the viewpoint of parametric statistical decision theory. For JPEG images, a detector based on a model of the quantized DCT coefficients is proposed, and the detector's error probabilities are established theoretically. In addition, a study of the average number of shrinkages occurring during embedding with the F3 and F4 algorithms is presented. Finally, for uncompressed images, the proposed tests are optimal under certain constraints, one of the difficulties overcome being the quantized nature of the data.
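A minimal sketch of the underlying detection principle, assuming deliberately toy cover and stego models for quantized coefficients (the thesis's actual DCT-coefficient models are far more refined): form a log-likelihood ratio statistic and calibrate its threshold to a prescribed false-alarm probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-coefficient models: cover pmf p0 (Laplacian-like) and a stego
# pmf p1 flattened by embedding. Stand-ins for the thesis's DCT models.
support = np.arange(-4, 5)
p0 = np.exp(-np.abs(support)); p0 /= p0.sum()
p1 = 0.85 * p0 + 0.15 / len(support)

def llr(x):
    """Log-likelihood ratio statistic for a vector of quantized coefficients."""
    idx = x - support[0]
    return np.sum(np.log(p1[idx]) - np.log(p0[idx]))

# Calibrate the threshold by Monte Carlo under the cover hypothesis so
# the false-alarm probability is (approximately) the prescribed alpha.
alpha, n, n_mc = 0.01, 2000, 5000
h0_stats = [llr(rng.choice(support, size=n, p=p0)) for _ in range(n_mc)]
tau = np.quantile(h0_stats, 1 - alpha)

x_stego = rng.choice(support, size=n, p=p1)
print("hidden data detected:", llr(x_stego) > tau)
```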
|
108 |
Statistical modeling and detection for digital image forensics / Modélisation et détection statistiques pour la criminalistique des images numériques. Thai, Thanh Hai, 28 August 2014 (has links)
The twenty-first century is witnessing the digital revolution that has made digital media ubiquitous: they play an ever more important role in everyday life. At the same time, sophisticated image-editing software has become widely accessible, so falsified images appear with growing frequency and sophistication, and the credibility and trustworthiness of digital images have been eroded. The field of digital image forensics was born to restore trust in digital images. This thesis belongs to that field and addresses two important problems: identifying the origin of an image and detecting hidden data in an image. Both problems are cast in the framework of hypothesis testing theory; the approach is to design statistical tests that guarantee a prescribed false-alarm probability. To achieve high detection performance, the statistical properties of natural images are exploited by modelling the main steps of the image-processing pipeline of a digital camera. The methodology throughout the manuscript is to study the optimal test given by the likelihood ratio test in the ideal context where all model parameters are known; when some model parameters are unknown, a method is proposed to estimate them in order to design a generalized likelihood ratio test whose statistical performance is established analytically. Numerous experiments on simulated and real images highlight the relevance of the proposed approach.
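The general recipe the abstract describes, shown on a deliberately simple toy problem rather than the thesis's camera-pipeline models: estimate the unknown (nuisance) parameters by maximum likelihood under each hypothesis, form the generalized likelihood ratio, and threshold it using Wilks' asymptotic chi-squared distribution.

```python
import numpy as np
from scipy import stats

def glrt_mean(x, mu0, alpha=0.05):
    """GLRT of H0: mean == mu0 vs H1: mean free, with the variance an
    unknown nuisance parameter, estimated by maximum likelihood under
    each hypothesis. Threshold from Wilks' asymptotic chi2(1) law."""
    n = len(x)
    s2_h1 = np.var(x)                  # MLE of variance, mean free
    s2_h0 = np.mean((x - mu0) ** 2)    # MLE of variance, mean fixed at mu0
    glr = n * np.log(s2_h0 / s2_h1)    # 2 * (max loglik H1 - max loglik H0)
    return glr > stats.chi2.ppf(1 - alpha, df=1), glr

x = np.random.default_rng(0).normal(0.3, 2.0, size=500)   # true mean != 0
print(glrt_mean(x, mu0=0.0))
```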
|
109 |
Die kerk en die sorggewers van VIGS-weeskinders / The church and the caregivers of AIDS orphans. Strydom, Marina, 01 January 2002 (has links)
Text in Afrikaans / Owing to the demanding nature of caregiving to AIDS orphans, the caregivers often find themselves in a position where they themselves need care and support. The question arose as to how these caregivers could be supported. It became clear that the church, out of its social responsibility, can offer care and support to the caregivers.
Caregivers from one institution that took part in the research journey did indeed not receive enough care and support from the church. This lack of support has a direct influence on how the caregivers cope with the demands of caregiving. Caregivers from the other two participating institutions receive enough support from church members, and this makes a great difference to how caregiving stress is experienced. In this study a critical look was taken at the ways in which the church is involved, and can become further involved, with the caregivers of AIDS orphans. / Philosophy, Practical & Systematic Theology / M.Th. (Praktiese Teologie)
|
110 |
Asymptotic theory for decentralized sequential hypothesis testing problems and sequential minimum energy design algorithm. Wang, Yan, 19 May 2011 (has links)
The dissertation investigates the asymptotic theory of decentralized sequential hypothesis testing problems as well as the asymptotic behavior of the Sequential Minimum Energy Design (SMED) algorithm. The main results are summarized as follows.
1. We develop the first-order asymptotic optimality theory for decentralized sequential multi-hypothesis testing under a Bayes framework. Asymptotically optimal tests are obtained from the class of "two-stage" procedures, and the optimal local quantizers are shown to be the "maximin" quantizers, characterized as a randomization of at most M-1 Unambiguous Likelihood Quantizers (ULQ) when testing M >= 2 hypotheses.
2. We generalize the classical Kullback-Leibler inequality to investigate the effects of quantization on the second-order and other general-order moments of log-likelihood ratios. It is shown that quantization may increase these quantities, but such an increase is bounded by a universal constant that depends on the order of the moment. This result provides a simpler sufficient condition for the asymptotic theory of decentralized sequential detection.
3. We propose a class of multi-stage tests for decentralized sequential multi-hypothesis testing problems and show that, with suitably chosen thresholds at the different stages, they retain second-order asymptotic optimality properties when the hypothesis testing problem is "asymmetric."
4. We characterize the asymptotic behavior of the SMED algorithm, particularly the denseness and distributions of the design points. In addition, we propose a simplified version of SMED that is computationally more efficient.
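The classical centralized building block behind these decentralized procedures is Wald's sequential probability ratio test; a minimal sketch with toy Gaussian hypotheses, the standard Wald threshold approximations, and illustrative names follows.

```python
import numpy as np

def sprt(stream, llr_fn, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test: accumulate the
    log-likelihood ratio sample by sample and stop as soon as it
    crosses one of the Wald approximation thresholds."""
    upper = np.log((1 - beta) / alpha)     # accept H1 above this
    lower = np.log(beta / (1 - alpha))     # accept H0 below this
    s = 0.0
    for n, x in enumerate(stream, start=1):
        s += llr_fn(x)
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", n

# Toy problem: H0: N(0,1) vs H1: N(1,1); the per-sample LLR is x - 0.5.
rng = np.random.default_rng(0)
print(sprt(rng.normal(1, 1, size=10_000), lambda x: x - 0.5))
```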
|