
Algoritmiese rangordebepaling van akademiese tydskrifte / Algorithmic ranking of academic journals

Strydom, Machteld Christina 31 October 2007
Ranking of journals is often used as an indicator of quality and is extensively used as a mechanism for determining promotion and funding. This research studied ways of extracting the impact, or influence, of a journal from citation data, using an iterative process that allocates a weight to the source of each citation. After evaluating and discussing with specialist researchers the characteristics that determine the quality and importance of research, a measure called the Influence factor was introduced, emulating the PageRank algorithm used by Google to rank web pages. The Influence factor can be seen as a measure of the reaction generated by a publication, based on the number of scientists who read and cited it. A good correlation was found between the rankings produced by the Influence factor and those given by specialist researchers. / Mathematical Sciences / M.Sc. (Operational Research)
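The Influence factor described above is a PageRank-style fixed point. Below is a minimal sketch of such an iteration over a hypothetical journal-to-journal citation-count matrix; the damping factor, tolerance, and handling of journals with no outgoing citations are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

def influence_factor(citations: np.ndarray, damping: float = 0.85,
                     tol: float = 1e-10) -> np.ndarray:
    """citations[i, j] = number of citations from journal i to journal j."""
    n = citations.shape[0]
    out = citations.sum(axis=1, keepdims=True)
    safe_out = np.where(out == 0.0, 1.0, out)
    # Journals that cite nothing spread their weight uniformly (assumption).
    transition = np.where(out > 0, citations / safe_out, 1.0 / n)
    rank = np.full(n, 1.0 / n)
    while True:
        # Each journal passes a share of its current influence to the
        # journals it cites; damping keeps the iteration well behaved.
        new = (1 - damping) / n + damping * rank @ transition
        if np.abs(new - rank).sum() < tol:
            return new
        rank = new

# Toy example: journal 2 is cited most heavily and should rank highest.
C = np.array([[0, 1, 3],
              [2, 0, 4],
              [0, 1, 0]], dtype=float)
print(influence_factor(C))
```

Unlike a raw citation count, a citation arriving from a highly ranked journal contributes more here, which is the intuition the empirical study found to match specialist researchers' judgment better than mere citation counting.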

Error Detection and Error Correction for PMU Data as Applied to Power System State Estimators

January 2013
abstract: In modern electric power systems, energy management systems (EMSs) are responsible for monitoring and controlling the generation system and transmission networks. State estimation (SE) is a critical 'must run successful' component within the EMS software. This is dictated by the high reliability requirements and the need to represent the closest real-time model for market operations and other critical analysis functions in the EMS. Traditionally, SE is run with data obtained only from supervisory control and data acquisition (SCADA) devices and systems. However, growing emphasis on improving the performance of SE drives the inclusion of phasor measurement units (PMUs) into the SE input data. PMU measurements are claimed to be more accurate than conventional measurements, and PMUs time-stamp measurements accurately. These widely distributed devices measure voltage phasors directly; that is, phase information for measured voltages and currents is available. PMUs provide data time stamps to synchronize measurements. Considering the relatively small number of PMUs installed in contemporary power systems in North America, performing SE with only phasor measurements is not feasible. Thus a hybrid SE, including both SCADA and PMU measurements, is the reality for contemporary power system SE. The hybrid approach is the focus of a number of research papers. There are many practical challenges in incorporating PMUs into SE input data. The higher reporting rate of PMUs as compared with SCADA measurements is one of the salient problems. The disparity of reporting rates raises the question of whether buffering the phasor measurements helps to give better estimates of the states. The research presented in this thesis addresses the design of data buffers for PMU data as used in SE applications in electric power systems. The system-theoretic analysis is illustrated using an operating electric power system in the southwestern USA. Various instances of state estimation data have been used for analysis purposes. The details of the research, the results obtained, and the conclusions drawn are presented in this document. / Dissertation/Thesis / M.S. Electrical Engineering 2013
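One simple way to picture the buffering question raised above is to average the PMU frames that arrive within one SCADA scan interval before handing them to the estimator. The sketch below is an illustrative assumption, not the buffer design developed in the thesis; the reporting rate and noise level are made up:

```python
import numpy as np

def buffer_pmu_phasors(phasors: np.ndarray) -> complex:
    """Average a buffer of complex voltage phasors from one PMU.

    Averaging in the complex plane preserves magnitude and angle jointly,
    which is reasonable while the system state is quasi-static over the
    buffer window.
    """
    return phasors.mean()

# Example: 60 PMU frames (2 s at 30 frames/s) of a 1.02 pu, -5 deg phasor
rng = np.random.default_rng(0)
true_phasor = 1.02 * np.exp(-1j * np.deg2rad(5.0))
noise = 0.005 * (rng.standard_normal(60) + 1j * rng.standard_normal(60))
buffered = buffer_pmu_phasors(true_phasor + noise)
print(abs(buffered), np.angle(buffered, deg=True))
```

Averaging a buffer of independent frames reduces the measurement noise variance roughly in proportion to the buffer length, which is one argument for buffering; the thesis examines when this actually yields better state estimates.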

Performance Analysis of an Authentication Method relying on Graphical Codes

Mai Hoang, Bao An 01 December 2014
We study the impact of an authentication system based on 2D graphical codes that are corrupted by physically unclonable noise such as that introduced by a printing process. The core idea is that a printing process at very high resolution can be seen as a stochastic process and hence produces noise, owing to elements such as the randomness of paper fibers, the physical properties of the ink drops, the dot addressability of the printer, and the acquisition noise. We consider a scenario where the opponent may estimate the original graphical code and try to reproduce a forged one using his own printing process in order to fool the receiver. Our first solution for performing authentication is to use hypothesis testing on the observed memoryless sequences of a printed graphical code, under the assumption that the print-and-scan channels of the legitimate printer and of the counterfeiter are perfectly modeled. In this context we propose a reliable approximation of the error probabilities via exponential bounds, as a direct application of the large deviation principle. Turning to a more practical scenario, we then take into account the estimation of the printing process used to generate the opponent's graphical code, and we measure how this estimation step impacts the performance of the authentication system. We show that it is possible both to compute the distribution of the probability of non-detection and to derive the average performance of the authentication system when the opponent's channel has to be estimated. The last part of this thesis addresses the optimization, through a minimax game, of the legitimate printer's channel.
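As a hedged illustration of the first, hypothesis-testing solution (with assumed Gaussian print-and-scan channel models rather than the models used in the thesis), a Neyman-Pearson test on the grey levels of a scanned code reduces to thresholding a log-likelihood ratio:

```python
import numpy as np

def log_likelihood_ratio(x, mu0, s0, mu1, s1):
    """Per-sample log p1(x)/p0(x) for two Gaussian channel models."""
    ll0 = -0.5 * ((x - mu0) / s0) ** 2 - np.log(s0)
    ll1 = -0.5 * ((x - mu1) / s1) ** 2 - np.log(s1)
    return ll1 - ll0

rng = np.random.default_rng(1)
# Assumed parameters: legitimate print (H0) vs counterfeit reprint (H1).
mu0, s0, mu1, s1 = 0.30, 0.08, 0.42, 0.10
x = rng.normal(mu0, s0, size=2048)   # observations from an authentic code
llr = log_likelihood_ratio(x, mu0, s0, mu1, s1).sum()
tau = 0.0   # in practice the threshold is set by the target false-alarm rate
print("authentic" if llr < tau else "counterfeit")
```

The exponential bounds mentioned above describe how the error probabilities of exactly this kind of test decay with the length of the observed code.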

Rape, Race, and Capital Punishment in North Carolina: A Qualitative Approach to Examining an Enduring Cultural Legacy

Wholl, Douglas 16 September 2015
Despite positive steps toward the suppression of racial discrimination in the United States capital punishment process, the enduring effects of a cultural legacy of Black oppression (e.g., slavery; segregation; lynching) and of historic and systemic racial discrimination in the criminal justice system have persisted to the present day. The purpose of the current study is to explore whether this enduring cultural legacy (ECL) still operates by examining whether juries in rape-involved capital murder trials in North Carolina are more likely to recommend a sentence of death when the defendant is a Black male and the victim is a White female (compared to White male victims and White female victims). Within an analytic induction framework, the current study uses qualitative hypothesis testing to critically examine each of the rape-involved homicide cases in an effort to elucidate the legal (e.g., circumstances of the case) and extra-legal (e.g., race of the defendant and victim, respectively; multiple dimensions of the ECL) factors that influence death sentence recommendations in North Carolina during this time period. The qualitative analysis involves the comprehensive reading and documentation of case narratives and newspaper articles, in which I re-sort (i.e., reclassify) the hypothesis-supporting, hypothesis-non-supporting, and hypothesis-rejecting cases while considering the salient circumstances of the trial (e.g., aggravating circumstances; perceived brutality of the crimes committed) and the influence of multiple dimensions of the ECL (e.g., the liberation hypothesis; credibility of the White female victim). Findings from the qualitative analysis failed to show support for the ECL hypothesis: 24.1% of trials supported the hypothesis, 19% rejected it, and 57% neither supported nor rejected it. While the findings did not show support for the ECL hypothesis in any context, the extensive review of LexisNexis case narratives and newspaper articles uncovered rich information with a direct bearing on the qualitative findings and interpretations that could not have been identified through a quantitative approach to the data (e.g., a juror's expression of racial attitudes that was the single greatest piece of evidence supporting the ECL; detailed descriptions of especially brutal trial circumstances that may have influenced jury sentencing decisions; the perceived credibility or chastity of the victim; the inclusion of relevant trials and the exclusion of trials not appropriate for analysis), demonstrating the value of a qualitative approach to the study of racial discrimination in jury sentencing decisions.

Statistical detection of hidden information in natural images

Zitzmann, Cathel 24 June 2013
The need for secure communication is not new: since antiquity, methods have existed to conceal communication. Cryptography makes a message unintelligible by encrypting it, while steganography hides the very fact that a message is being exchanged. This thesis is part of the project "Recherche d'Informations Cachées" (Hidden Information Research) funded by the French National Research Agency, within which the Troyes University of Technology worked on the mathematical modeling of natural images and on the design of detectors of hidden information in digital pictures. This thesis studies steganalysis in natural images from the viewpoint of parametric statistical decision theory. For JPEG images, a detector based on the modeling of quantized DCT coefficients is proposed, and the error probabilities of the detector are established theoretically. In addition, a study of the average number of shrinkages occurring during embedding with the F3 and F4 algorithms is presented. Finally, for uncompressed images, the proposed tests are optimal under certain constraints, one of the difficulties overcome being the quantized nature of the data.
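The thesis's detectors are model-based and optimal; as a simpler stand-in, the classical chi-square attack below illustrates the same idea of testing quantized DCT statistics for embedding. It is a hedged sketch, not the detector proposed in the thesis, and the sample data are invented:

```python
import numpy as np
from scipy.stats import chi2

def chi_square_stego_score(coeffs: np.ndarray) -> float:
    """Score near 1 suggests LSB-style embedding; near 0 suggests a cover.

    LSB embedding tends to equalize the counts of each value pair
    (2k, 2k+1) among the quantized DCT coefficients.
    """
    c = coeffs[coeffs != 0]                  # zero coefficients are skipped
    lo, hi = int(c.min()) & ~1, int(c.max())
    stat, dof = 0.0, 0
    for k in range(lo, hi, 2):
        n_even, n_odd = np.sum(c == k), np.sum(c == k + 1)
        expected = (n_even + n_odd) / 2.0
        if expected > 4:                     # ignore sparse bins
            stat += (n_even - expected) ** 2 / expected
            dof += 1
    return 1.0 - chi2.cdf(stat, dof) if dof else 0.0

rng = np.random.default_rng(4)
# Made-up stand-in for the quantized DCT coefficients of one image
cover = rng.laplace(0, 3, 20000).round().astype(int)
stego = cover ^ rng.integers(0, 2, cover.size)   # flip LSBs at full rate
print(chi_square_stego_score(cover), chi_square_stego_score(stego))
```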

Statistical modeling and detection for digital image forensics

Thai, Thanh Hai 28 August 2014
The twenty-first century has witnessed a digital revolution that has made digital media ubiquitous; they play an ever more important role in everyday life. At the same time, sophisticated image-editing software has become widely accessible, with the result that falsified images appear with growing frequency and sophistication, eroding the credibility and trustworthiness of digital images. The field of digital image forensics was born to restore trust in digital images, and this thesis is part of that field. Two important problems are addressed: image origin identification and hidden data detection. These problems are cast into the framework of hypothesis testing theory. The approach is to design statistical tests that guarantee a prescribed false alarm probability. In order to achieve high detection performance, it is proposed to exploit the statistical properties of natural images by modeling the main steps of the image-processing pipeline of a digital camera. The methodology throughout this manuscript consists of studying the optimal test given by the likelihood ratio test in the ideal context where all model parameters are known in advance. When the model parameters are unknown, a method is proposed for estimating them in order to design a generalized likelihood ratio test whose statistical performance is analytically established. Numerical experiments on simulated and real images highlight the relevance of the proposed approach.
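To make the methodology concrete, here is a minimal, generic GLRT in the spirit described above, assuming a simple Gaussian model with unknown mean and variance rather than the camera models developed in the thesis; maximizing the likelihood over the nuisance parameters reduces the test to the classical t-statistic, so the false-alarm probability can be fixed analytically:

```python
import numpy as np
from scipy.stats import t

def glrt_mean_shift(x: np.ndarray, alpha: float = 0.01) -> bool:
    """Reject H0 (zero mean, unknown variance) at false-alarm rate alpha."""
    n = x.size
    t_stat = np.sqrt(n) * x.mean() / x.std(ddof=1)
    threshold = t.ppf(1.0 - alpha / 2.0, df=n - 1)   # two-sided test
    return abs(t_stat) > threshold

rng = np.random.default_rng(2)
print(glrt_mean_shift(rng.normal(0.0, 1.0, 500)))   # typically False
print(glrt_mean_shift(rng.normal(0.3, 1.0, 500)))   # typically True
```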

The Impotency of Post Hoc Power

Sebyhed, Hugo, Gunnarsson, Emma January 2020
In this thesis, we hope to dispel some confusion regarding so-called post hoc power, i.e., power computed under the assumption that the estimated sample effect size equals the population effect size. Previous research has shown that post hoc power is a function of the p-value, making it redundant as a tool of analysis. We go further, arguing that it should never be reported, since it is a source of confusion and potentially harmful incentives. We also conduct a Monte Carlo simulation to illustrate our points. The results of this study confirm previous research.
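A small worked example of the redundancy argument, assuming a one-sided z-test (an illustrative setting, not necessarily the one used in the thesis): post hoc power is then a deterministic function of the p-value, and p = 0.05 always maps to power 0.5 whatever the data were.

```python
from scipy.stats import norm

alpha = 0.05
z_crit = norm.ppf(1 - alpha)

def post_hoc_power(p_value: float) -> float:
    """Power computed as if the observed effect were the true effect."""
    z_obs = norm.ppf(1 - p_value)       # invert the one-sided p-value
    return 1 - norm.cdf(z_crit - z_obs)

for p in (0.20, 0.05, 0.01):
    print(f"p = {p:.2f} -> post hoc power = {post_hoc_power(p):.3f}")
```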

Improved critical values for extreme normalized and studentized residuals in Gauss-Markov models

Lehmann, Rüdiger January 2012
We investigate extreme studentized and normalized residuals as test statistics for outlier detection in the Gauss-Markov model, possibly not of full rank. We show how critical values (quantile values) of such test statistics are derived from the probability distribution of a single studentized or normalized residual by dividing the level of error probability by the number of residuals. This derivation neglects dependencies between the residuals. We suggest improving it by a procedure based on the Monte Carlo method for the numerical computation of such critical values up to arbitrary precision. Results for free leveling networks reveal significant differences from the values used so far. We also show how to compute those critical values for non-normal error distributions. The results prove that the critical values are very sensitive to the type of error distribution.
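A hedged sketch of the Monte Carlo procedure, under the simplifying assumption of independent standard normal residuals (a real Gauss-Markov adjustment would simulate correlated residuals via the cofactor matrix of the residual vector):

```python
import numpy as np
from scipy.stats import norm

def mc_critical_value(n_residuals: int, alpha: float = 0.05,
                      n_trials: int = 100_000, seed: int = 3) -> float:
    """Empirical (1 - alpha) quantile of the max absolute normalized residual."""
    rng = np.random.default_rng(seed)
    samples = rng.standard_normal((n_trials, n_residuals))
    max_res = np.abs(samples).max(axis=1)
    return float(np.quantile(max_res, 1.0 - alpha))

# Compare against the value obtained by dividing the error probability by
# the number of residuals (the derivation criticized above).
n = 50
print(mc_critical_value(n))             # Monte Carlo quantile
print(norm.ppf(1 - 0.05 / (2 * n)))     # alpha / n, two-sided
```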

New Approaches to Distributed State Estimation, Inference and Learning with Extensions to Byzantine-Resilience

Aritra Mitra 29 July 2020
In this thesis, we focus on the problem of estimating an unknown quantity of interest, when the information required to do so is dispersed over a network of agents. In particular, each agent in the network receives sequential observations generated by the unknown quantity, and the collective goal of the network is to eventually learn this quantity by means of appropriately crafted information diffusion rules. The abstraction described above can be used to model a variety of problems ranging from environmental monitoring of a dynamical process using autonomous robot teams, to statistical inference using a network of processors, to social learning in groups of individuals. The limited information content of each agent, coupled with dynamically changing networks, the possibility of adversarial attacks, and constraints imposed by the communication channels, introduce various unique challenges in addressing such problems. We contribute towards systematically resolving some of these challenges.

In the first part of this thesis, we focus on tracking the state of a dynamical process, and develop a distributed observer for the most general class of LTI systems, linear measurement models, and time-invariant graphs. To do so, we introduce the notion of a multi-sensor observable decomposition - a generalization of the Kalman observable canonical decomposition for a single sensor. We then consider a scenario where certain agents in the network are compromised based on the classical Byzantine adversary model. For this worst-case adversarial setting, we identify certain fundamental necessary conditions that are a blend of system- and network-theoretic requirements. We then develop an attack-resilient, provably-correct, fully distributed state estimation algorithm. Finally, by drawing connections to the concept of age-of-information for characterizing information freshness, we show how our framework can be extended to handle a broad class of time-varying graphs. Notably, in each of the cases above, our proposed algorithms guarantee exponential convergence at any desired convergence rate.

In the second part of the thesis, we turn our attention to the problem of distributed hypothesis testing/inference, where each agent receives a stream of stochastic signals generated by an unknown static state that belongs to a finite set of hypotheses. To enable each agent to uniquely identify the true state, we develop a novel distributed learning rule that employs a min-protocol for data-aggregation, as opposed to the large body of existing techniques that rely on "belief-averaging". We establish consistency of our rule under minimal requirements on the observation model and the network structure, and prove that it guarantees exponentially fast convergence to the truth with probability 1. Most importantly, we establish that the learning rate of our algorithm is network-independent, and a strict improvement over all existing approaches. We also develop a simple variant of our learning algorithm that can account for misbehaving agents. As the final contribution of this work, we develop communication-efficient rules for distributed hypothesis testing. Specifically, we draw on ideas from event-triggered control to reduce the number of communication rounds, and employ an adaptive quantization scheme that guarantees exponentially fast learning almost surely, even when just 1 bit is used to encode each hypothesis.
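A hedged sketch of one min-based aggregation step, inspired by (but simpler than) the learning rule developed in the thesis; the network, beliefs, and likelihoods below are illustrative assumptions:

```python
import numpy as np

def min_rule_step(beliefs: np.ndarray, likelihoods: np.ndarray,
                  adjacency: np.ndarray) -> np.ndarray:
    """One update of a min-protocol distributed hypothesis test.

    beliefs: (n_agents, n_hyp) current belief vectors.
    likelihoods: (n_agents, n_hyp) likelihood of each agent's new private
    signal under each hypothesis.
    adjacency: (n_agents, n_agents) 0/1 matrix with self-loops.
    """
    local = beliefs * likelihoods                    # local Bayesian update
    local /= local.sum(axis=1, keepdims=True)
    new = np.empty_like(local)
    for i in range(beliefs.shape[0]):
        nbrs = adjacency[i].astype(bool)
        new[i] = local[nbrs].min(axis=0)             # entrywise neighborhood min
    return new / new.sum(axis=1, keepdims=True)

# Two agents, two hypotheses, fully connected with self-loops
A = np.ones((2, 2))
b = np.full((2, 2), 0.5)
L = np.array([[0.9, 0.1], [0.8, 0.2]])   # both signals favor hypothesis 0
print(min_rule_step(b, L, A))
```

Intuitively, the min operation lets a single well-informed agent pull down its whole neighborhood's belief in a false hypothesis quickly, which is one way to see why such rules can outperform belief-averaging.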

Correctly Modeling Plant-Insect-Herbivore-Pesticide Interactions as Aggregate Data

Banks, H. T., Banks, John E., Catenacci, Jared, Joyner, Michele, Stark, John 01 January 2020
We consider a population dynamics model in investigating data from controlled experiments with aphids in broccoli patches surrounded by different margin types (bare or weedy ground) and three levels of insecticide spray (no, light, or heavy spray). The experimental data are clearly aggregate in nature. In previous efforts [1], the aggregate nature of the data was ignored. In this paper, we embrace this aspect of the experiment and correctly model the data as aggregate data, comparing the results to the previous approach. We discuss cases in which the two approaches provide similar results, as well as cases in which there is a clear difference in the resulting fit to the data.
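As a generic illustration of the aggregate-data idea (not the paper's model; the logistic dynamics, patch count, and observations below are invented for the sketch), one can fit a population model to totals pooled over replicate patches rather than to individual-patch trajectories:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def logistic(t, y, r, K):
    """Per-patch logistic aphid growth (illustrative dynamics)."""
    return [r * y[0] * (1.0 - y[0] / K)]

def residuals(theta, t_obs, agg_counts, n_patches, y0):
    r, K = theta
    sol = solve_ivp(logistic, (t_obs[0], t_obs[-1]), [y0],
                    t_eval=t_obs, args=(r, K))
    return n_patches * sol.y[0] - agg_counts   # fit the pooled total

t_obs = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
agg_counts = np.array([50.0, 120.0, 260.0, 430.0, 540.0, 580.0])  # made up
fit = least_squares(residuals, x0=[0.5, 60.0],
                    args=(t_obs, agg_counts, 10, 5.0))
print(fit.x)   # estimated (r, K)
```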
