41 |
Development of a Client-Side Evil Twin Attack Detection System for Public Wi-Fi Hotspots based on Design Science Approach. Horne, Liliana R. 01 January 2018
Users and providers benefit considerably from public Wi-Fi hotspots. Users receive wireless Internet access and providers draw new prospective customers. While users enjoy the convenience of public Wi-Fi hotspot networks, they are more susceptible to a particular type of fraud and identity theft, referred to as an evil twin attack (ETA). By setting up an ETA, an attacker can intercept sensitive data such as passwords or credit card information by snooping on the communication links. Since the objective of free open (unencrypted) public Wi-Fi hotspots is to provide easy accessibility and to entice customers, no security mechanisms are in place. The public's lack of awareness of the security threat posed by free open public Wi-Fi hotspots makes this problem even more serious. Client-side systems that help wireless users detect and protect themselves from evil twin attacks in public Wi-Fi hotspots are greatly needed. In this dissertation report, the author explored the need for client-side detection systems that allow wireless users to protect their data from evil twin attacks while using free open public Wi-Fi. The client-side evil twin attack detection system constructed as part of this dissertation bridged the gap between the need for wireless security in free open public Wi-Fi hotspots and the limitations of existing client-side evil twin attack detection solutions. Based on design science research (DSR) literature, Hevner's seven guidelines of DSR, Peffers' design science research methodology (DSRM), Gregor's IS design theory, and Hossen and Wenyuan's (2014) study evaluation methodology, the author developed design principles, procedures and specifications to guide the construction, implementation, and evaluation of a prototype client-side evil twin attack detection artifact. The system was evaluated in a hotel public Wi-Fi environment.
The goal of this research was to develop a more effective, efficient, and practical client-side detection system with which wireless users can independently detect and protect themselves from mobile evil twin attacks while using free open public Wi-Fi hotspots. The experimental results showed that the system can effectively detect and protect users from mobile evil twin AP attacks in public Wi-Fi hotspots in various real-world scenarios, despite time delays caused by several factors.
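As a concrete illustration of the kind of client-side heuristic such a system might build on (a sketch under stated assumptions, not the dissertation's actual design), the snippet below flags SSIDs advertised by more than one BSSID with a large signal-strength gap, one common but by itself inconclusive evil-twin indicator. The scan tuples, threshold and function name are all hypothetical.

```python
from collections import defaultdict

def find_suspect_ssids(scan, rssi_gap=10):
    """Flag SSIDs advertised by more than one BSSID whose strongest and
    weakest signals differ by at least rssi_gap dBm: a simple (and by
    itself inconclusive) evil-twin indicator."""
    by_ssid = defaultdict(list)
    for ssid, bssid, rssi in scan:
        by_ssid[ssid].append((bssid, rssi))
    suspects = {}
    for ssid, aps in by_ssid.items():
        if len({bssid for bssid, _ in aps}) > 1:
            levels = [rssi for _, rssi in aps]
            if max(levels) - min(levels) >= rssi_gap:
                suspects[ssid] = aps
    return suspects

# Hypothetical scan: the "HotelGuest" SSID appears twice, once much stronger.
scan = [
    ("HotelGuest", "aa:bb:cc:00:00:01", -70),
    ("HotelGuest", "de:ad:be:ef:00:01", -40),
    ("CoffeeShop", "aa:bb:cc:00:00:02", -65),
]
print(sorted(find_suspect_ssids(scan)))
```

A real detector would combine several such indicators (timing, gateway fingerprints, certificate checks), since legitimate networks also use multiple access points per SSID.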
|
42 |
Tracking of railroads for autonomous guidance of UAVs : using Vanishing Point detection. Clerc, Anthony. January 2018
UAVs have gained in popularity and the number of applications has soared over the past years, ranging from leisure to commercial activities. This thesis specifically discusses railroad applications, a rarely explored domain, and analyses two different aspects. When developing a new application or migrating a ground-based system to a UAV platform, the challenges involved are often unknown; this thesis therefore highlights the most important ones to take into consideration during the development process. On the more technical side, the implementation of autonomous guidance of UAVs over railroads using vanishing point extraction is studied. Two different algorithms are presented and compared: the first uses a line-extraction method whereas the second uses joint activities of Gabor filters. The results demonstrate that both methodologies perform well and that a significant difference exists between the two algorithms in terms of computation time. A second implementation, tackling the detection of railway topologies to enable use on multiple railroad configurations, is also discussed. A first technique using exclusively vanishing points for the detection is presented; however, its results on complex images are not satisfactory. A second method is therefore studied, using line characteristics on top of the previous algorithm, and it has proven to give good results.
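A minimal sketch of the vanishing-point idea underlying both algorithms: each detected rail edge contributes one linear constraint, and the vanishing point is the least-squares intersection of those lines. The line coefficients below are made-up examples, not output of either method from the thesis.

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of lines given as (a, b, c) with ax + by = c."""
    A = np.array([[a, b] for a, b, _ in lines], dtype=float)
    c = np.array([c for _, _, c in lines], dtype=float)
    vp, *_ = np.linalg.lstsq(A, c, rcond=None)
    return vp

# Two hypothetical rail edges converging at (0, 5):
#   x =  0.2*y - 1  ->  1*x - 0.2*y = -1
#   x = -0.2*y + 1  ->  1*x + 0.2*y =  1
rails = [(1.0, -0.2, -1.0), (1.0, 0.2, 1.0)]
vp = vanishing_point(rails)
print(vp)  # close to [0., 5.]
```

With more than two (noisy) detected lines, the same least-squares formulation averages out detection errors instead of requiring an exact intersection.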
|
43 |
Adaptive Measurement Strategies for Network Optimization and Control / Adaptiva Mätstrategier för Optimering och Reglering av Nätverk. Lindståhl, Simon. January 2023
Fifth generation networks are rapidly becoming the new network standard, and their new technological capabilities are expected to enable a far wider variety of services compared to fourth generation networks. To ensure that these services can co-exist and meet their standardized requirements, the network's resources must be provisioned, managed and reconfigured in a far more complex manner than before. As such, it is no longer sufficient to select a simple, static scheme for gathering the information necessary to take decisions. Instead, it is necessary to adaptively, with regard to network system dynamics, trade off the cost of the measurements taken, in terms of power, CPU and bandwidth consumption, against the value their information brings. Orchestration is a wide field, and the way to quantify the value of a given measurement depends heavily on the problem studied. As such, this thesis addresses adaptive measurement schemes for a number of well-defined network optimization problems. The thesis is presented as a compilation where, after an introduction detailing the background, purpose, problem formulation, methodology and contributions of our work, we present each problem separately through the papers submitted to several conferences. First, we study the problem of optimal spectrum access for low-priority services. We assume that the network manager has limited opportunities to measure the spectrum before assigning one resource block (if any) to the secondary service for transmission, and that this measurement has a known cost attached to it. We study this framework through the lens of multi-armed bandits with multiple arm pulls per decision, a framework we call predictive bandits. We analyze such bandits and show a problem-specific lower bound on their regret, and design an algorithm which meets this bound asymptotically, studying both the case where measurements are perfect and the case where they carry noise of known magnitude.
Studying a synthetic simulated problem, we find that our algorithm performs considerably better than a simple benchmark strategy. Secondly, we study a variation of admission control where the controller must select one of multiple slices into which to admit a new service. The agent does not know the resources available in the slices initially, and must instead measure them, subject to noise. Mimicking three commonly used admission control strategies, we study this as a best-arm identification problem, where one or several arms are "correct" (the arm the strategy would choose if it had full information). Through this framework, we analyze each strategy and derive sample complexity lower bounds, as well as algorithms that meet these lower bounds. In simulations with synthetic data, we show that our measurement algorithms can vastly reduce the number of required measurements compared to uniform sampling strategies. Finally, we study a network monitoring system where the controller must detect sudden changes in system behavior, such as batch traffic arrivals or handovers, in order to take future action. We study this through the lens of change-point detection, but argue that the classical framework is insufficient for capturing both physical-time aspects, such as delay, and measurement costs independently, and present an alternative framework which decouples these, requiring more sophisticated monitoring agents. We show, both through theory and through simulation with synthetic data and data from a 5G testbed, that such adaptive schedules qualitatively and quantitatively improve upon classical change-point detection schemes in terms of measurement frequency, without losing classical optimality guarantees such as the one on the number of measurements required after the change.
|
44 |
Intégration du retour d'expérience pour une stratégie de maintenance dynamique / Integrate experience feedback for dynamic maintenance strategy. Rozas, Rony. 19 December 2014
The optimization of maintenance strategies is a major issue for many industrial applications. It involves establishing a maintenance plan that guarantees high levels of safety, security and reliability at minimal cost while respecting any constraints. The growing number of works on the optimization of maintenance parameters, in particular on the scheduling of preventive maintenance actions, underlines the importance of this issue. A large number of studies on maintenance are based on a model of the degradation process of the system studied. Probabilistic Graphical Models (PGMs), and especially Markovian PGMs (M-PGMs), provide a framework for modeling complex stochastic processes. The issue with this type of approach is that the quality of the results depends on that of the model. Moreover, the parameters of the system considered may change over time, usually as a consequence of a change of supplier for replacement parts or a change in operating parameters. This thesis deals with the dynamic adaptation of a maintenance strategy to a system whose parameters change. The proposed methodology relies on change-detection algorithms for a stream of sequential data and on a new probabilistic inference method specific to dynamic Bayesian networks. Furthermore, the algorithms proposed in this thesis are implemented in the framework of a research project with Bombardier Transport. The study focuses on the maintenance of the passenger access system of a new electric multiple unit intended for operation on the Ile-de-France rail network. The overall objective is to guarantee high levels of safety and reliability during train operation.
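A minimal example of the kind of change-detection primitive such a methodology can build on is the one-sided CUSUM test, sketched below on a synthetic degradation-indicator stream. The thresholds and data are illustrative assumptions, not the thesis's algorithm or Bombardier data.

```python
def cusum(stream, target_mean, k=0.5, h=5.0):
    """One-sided CUSUM: raise an alarm when the cumulative evidence of an
    upward mean shift exceeds h; k is the allowed slack per sample."""
    g, alarms = 0.0, []
    for t, x in enumerate(stream):
        g = max(0.0, g + (x - target_mean - k))
        if g > h:
            alarms.append(t)
            g = 0.0  # restart the statistic after each alarm
    return alarms

# Synthetic degradation indicator: in control until t = 50, then shifted.
stream = [0.0] * 50 + [1.5] * 20
alarms = cusum(stream, target_mean=0.0)
print(alarms)  # first alarm a few samples after the shift
```

In a maintenance context, an alarm of this kind would trigger re-estimation of the degradation model's parameters rather than an immediate intervention.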
|
45 |
Near-infrared Spectroscopy as an Access Channel: Prefrontal Cortex Inhibition During an Auditory Go-no-go Task. Ko, Linda. 24 February 2009
The purpose of this thesis was to explore the potential of near-infrared spectroscopy (NIRS) as an access channel by establishing reliable signal detection to verify the existence of signal differences associated with changes in activity. This thesis focused on using NIRS to measure brain activity from the prefrontal cortex during an auditory Go-No-Go task. A singular spectrum analysis change-point detection algorithm was applied to identify transition points where the NIRS signal properties varied from previous data points in the signal, indicating a change in brain activity. With this algorithm, latency values for change-points detected ranged from 6.44 s to 9.34 s. The averaged positive predictive values over all runs were modest (from 49.41% to 67.73%), with the corresponding negative predictive values being generally higher (48.66% to 78.80%). However, positive and negative predictive values up to 97.22% and 95.14%, respectively, were achieved for individual runs. No hemispheric differences were found.
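For reference, the positive and negative predictive values quoted above are simple ratios of confusion-matrix counts. The sketch below computes them on made-up counts (not the thesis's data), just to fix the definitions.

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive values from confusion-matrix counts:
    PPV = TP / (TP + FP), NPV = TN / (TN + FN)."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Illustrative counts only: 35 true detections, 23 false alarms, etc.
ppv, npv = predictive_values(tp=35, fp=23, tn=41, fn=11)
print(round(100 * ppv, 2), round(100 * npv, 2))
```

Unlike sensitivity and specificity, both quantities depend on how often change-points actually occur in the runs, which is why they can differ markedly between individual runs and averages.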
|
47 |
Stochastic Modelling of Random Variables with an Application in Financial Risk Management. Moldovan, Max. January 2003
The problem of determining whether or not a theoretical model is an accurate representation of an empirically observed phenomenon is one of the most challenging in empirical scientific investigation. The following study explores the problem of stochastic model validation. Special attention is devoted to the unusual two-peaked shape of the empirically observed distributions of financial returns conditional on realised volatility. The application of statistical hypothesis testing and simulation techniques leads to the conclusion that returns conditional on realised volatility follow a specific, previously undocumented distribution. The probability density representing this distribution is derived, characterised and applied to the validation of the financial model.
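One standard tool for testing whether data follow a hypothesised distribution, in the spirit of the validation described above (though not necessarily the author's exact procedure), is the Kolmogorov-Smirnov distance. The sketch below implements it from scratch and contrasts a normal sample with a uniform one against the standard normal CDF; all data are synthetic.

```python
import math
import random

def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov distance between a sample and a hypothesised CDF."""
    xs = sorted(sample)
    n = len(xs)
    return max(max((i + 1) / n - cdf(x), cdf(x) - i / n)
               for i, x in enumerate(xs))

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = random.Random(1)
gauss = [rng.gauss(0.0, 1.0) for _ in range(2000)]
unif = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
# The normal sample sits close to the normal CDF; the uniform one does not.
print(ks_statistic(gauss, norm_cdf), ks_statistic(unif, norm_cdf))
```

Comparing the statistic against the Kolmogorov distribution's quantiles (roughly 1.36/sqrt(n) at the 5% level) turns the distance into a formal goodness-of-fit test.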
|
48 |
Robustní monitorovací procedury pro závislá data / Robust Monitoring Procedures for Dependent Data. Chochola, Ondřej. January 2013
Title: Robust Monitoring Procedures for Dependent Data Author: Ondřej Chochola Department: Department of Probability and Mathematical Statistics Supervisor: Prof. RNDr. Marie Hušková, DrSc. Supervisor's e-mail address: huskova@karlin.mff.cuni.cz Abstract: In the thesis we focus on sequential monitoring procedures and extend some known results towards more robust methods. Robustness with respect to outliers and heavy-tailed observations is introduced via M-estimation in place of classical least squares estimation. Another extension is towards dependent and multivariate data. The observations are assumed to be weakly dependent; more specifically, they fulfil a strong mixing condition. For several models, appropriate test statistics are proposed and their asymptotic properties are studied both under the null hypothesis of no change and under the alternatives, in order to derive proper critical values and show consistency of the tests. We also introduce retrospective change-point procedures that allow one to verify, in a robust way, the stability of the historical data, which is needed for the sequential monitoring. Finite sample properties of the tests also need to be examined; this is done in a simulation study and by application to some real data in the capital asset...
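As a small illustration of the M-estimation idea the thesis builds on (a generic textbook sketch, not the thesis's monitoring statistic), a Huber location estimate caps each observation's influence at a constant k, which is what makes it resistant to outliers and heavy tails:

```python
def huber_location(xs, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location: solves sum(psi(x - mu)) = 0, where
    psi clips residuals at +/- k, capping each point's influence."""
    mu = sorted(xs)[len(xs) // 2]  # start from (an upper) median
    for _ in range(max_iter):
        step = sum(max(-k, min(k, x - mu)) for x in xs) / len(xs)
        mu += step
        if abs(step) < tol:
            break
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 55.0]  # one gross outlier
print(huber_location(data))  # stays near 10, unlike the mean (~17.5)
```

Replacing least-squares fits by such M-estimates inside CUSUM-type monitoring statistics is the basic mechanism by which robustness enters procedures of the kind studied here.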
|
49 |
Titulador automático baseado em filmes digitais para determinação de dureza e alcalinidade total em águas minerais. Siqueira, Lucas Alfredo. 29 February 2016
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Total hardness and total alkalinity are important physico-chemical parameters for the evaluation of water quality and are determined by volumetric analytical methods. In these methods the end point of the titration is hard to detect because the colour transition inherent to each indicator is difficult to see. To circumvent this problem, a new automatic method for detecting the titration end point in the determination of total hardness and total alkalinity in mineral water samples is proposed here. The proposed flow-batch titrator consists of a peristaltic pump, five three-way solenoid valves, a magnetic stirrer, an electronic actuator, an Arduino MEGA 2560TM board, a mixing chamber and a webcam. The webcam records a digital movie (DM) during the addition of the titrant into the mixing chamber, registering the color variations resulting from the chemical reactions between titrant and sample within the chamber. While the DM is recorded, it is decomposed into frames ordered sequentially at a constant rate of 30 frames per second (FPS). The first frame is used as a reference to define a region of interest (RI) of 48 × 50 pixels, whose R-channel values are used to calculate Pearson's correlation coefficient (r). r is calculated between the R values of the initial frame and each subsequent frame. The titration curves are plotted in real time using the values of r (ordinate axis) and the total opening time of the titrant valve (abscissa axis). The end point is estimated by the second-derivative method. Software written in ActionScript 3.0 manages all analytical steps and data treatment in real time. The feasibility of the method was attested by its application to the analysis of natural water samples. Results were compared with classical titration and did not present statistically significant differences when the paired t-test at the 95% confidence level was applied. The proposed method is able to process about 71 samples per hour, and its precision was confirmed by overall relative standard deviation (RSD) values, always lower than 2.4% for total hardness and 1.4% for total alkalinity.
|
50 |
Approche algébrique et théorie des valeurs extrêmes pour la détection de ruptures : Application aux signaux biomédicaux / Algebraic approach and extreme value theory for change-point detection: Application to biomedical signals. Debbabi, Nehla. 14 December 2015
This work develops non-supervised techniques for on-line detection and location of change-points in signals recorded in a noisy environment. These techniques rely on combining an algebraic approach with Extreme Value Theory (EVT). The algebraic approach offers an easy identification of the change-points, characterizing them in terms of delayed Dirac distributions and their derivatives, which are easily handled via operational calculus. This algebraic characterization, which yields an explicit expression of the change-point locations, is completed with a probabilistic interpretation in terms of extremes: a change-point is seen as a rare event whose associated amplitude is relatively large. Based on EVT, these events are modeled by a Generalized Pareto Distribution. Several hybrid multi-component models are proposed in this work, modeling at the same time the mean behavior (noise) and the extreme behavior (change-points) of the signal after an algebraic processing. Fully non-supervised algorithms are developed to evaluate these hybrid models, in contrast to the classical techniques used for the estimation problems in question, which are heuristic and manual. The change-point detection algorithms developed in this thesis were validated on generated data and then applied to real data stemming from different phenomena, where the information to be extracted manifests as the occurrence of change-points.
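A quick, hedged sketch of the EVT ingredient: fitting a Generalized Pareto Distribution to threshold exceedances by the method of moments, a common textbook estimator that is much simpler than the hybrid-model fits developed in the thesis. The data, threshold and seed below are synthetic assumptions.

```python
import random

def gpd_mom(exceedances):
    """Method-of-moments estimates (shape xi, scale sigma) of a GPD
    fitted to exceedances over a fixed threshold:
    xi = (1 - m^2/v) / 2,  sigma = m * (m^2/v + 1) / 2."""
    n = len(exceedances)
    m = sum(exceedances) / n
    v = sum((x - m) ** 2 for x in exceedances) / (n - 1)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

# Exponential data: exceedances over any threshold are again exponential,
# i.e. GPD with shape xi = 0 and scale 1/rate = 0.5.
rng = random.Random(7)
sample = [rng.expovariate(2.0) for _ in range(20000)]
u = 1.0
exceedances = [x - u for x in sample if x > u]
xi, sigma = gpd_mom(exceedances)
print(xi, sigma)  # should land near 0 and 0.5
```

In a change-point setting, the fitted tail lets one set a detection threshold at a chosen (very small) false-alarm probability instead of picking it by hand.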
|