1 |
Community Detection in Social Networks: Multilayer Networks and Pairwise Covariates. Huang, Sihan. January 2020 (has links)
Community detection is one of the most fundamental problems in network study. The stochastic block model (SBM) is arguably the most studied model for network data, with a range of estimation methods whose community detection consistency has been established. Due to its stringent assumptions, however, SBM may not be suitable for many real-world problems. In this thesis, we present two approaches that incorporate extra information beyond the vanilla SBM to improve community detection performance and broaden its applicability.
One approach is to aggregate multilayer networks, composed of multiple single-layer networks sharing a common community structure. Numerous methods based on spectral clustering have been proposed, but most rely on optimizing an objective function while the associated theoretical properties remain largely unexplored. We focus on the `early fusion' method, whose target is to minimize the spectral clustering error of the weighted adjacency matrix (WAM). We derive the optimal weights by studying the asymptotic behavior of the eigenvalues and eigenvectors of the WAM. We show that the eigenvectors of the WAM converge to a normal distribution and that the clustering error decreases monotonically with the eigenvalue gap. This reveals the intrinsic link between eigenvalues and eigenvectors, so the algorithm minimizes the clustering error by maximizing the eigenvalue gap. Numerical studies show that our algorithm significantly outperforms other state-of-the-art methods, especially when the signal-to-noise ratios of the layers vary widely. Our algorithm also yields more accurate results than competing models on the S&P 1500 stock dataset.
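A minimal sketch of the early-fusion pipeline under simplifying assumptions: the layer weights are taken as given (the thesis derives the optimal weights from the eigen-analysis of the WAM), and scikit-learn's KMeans handles the final clustering step; the function names here are illustrative, not the thesis code.

```python
import numpy as np
from sklearn.cluster import KMeans

def early_fusion_spectral(adjacency_layers, weights, n_communities):
    """Spectral clustering on a weighted sum of layer adjacency matrices.

    `weights` stands in for the optimal weights derived in the thesis;
    here they are simply supplied by the caller.
    """
    # Weighted adjacency matrix (WAM): weighted combination of the layers.
    wam = sum(w * a for w, a in zip(weights, adjacency_layers))
    # The leading eigenvectors of the WAM carry the community structure.
    eigvals, eigvecs = np.linalg.eigh(wam)
    top = np.argsort(-np.abs(eigvals))[:n_communities]
    embedding = eigvecs[:, top]
    # K-means on the spectral embedding yields community labels.
    return KMeans(n_clusters=n_communities, n_init=10).fit_predict(embedding)
```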
The other approach we propose is to allow heterogeneous connection probabilities, removing the strong assumption that all nodes in the same community are stochastically equivalent, which may not hold in practical applications. We introduce the pairwise covariates-adjusted stochastic block model (PCABM), a generalization of SBM that incorporates pairwise covariate information. We study the maximum likelihood estimates of the covariate coefficients as well as the community assignments, and show that both are consistent under suitable sparsity conditions. Spectral clustering with adjustment (SCWA) is introduced to fit PCABM efficiently. Under certain conditions, we derive the error bound of community estimation under SCWA and show that it is community detection consistent. PCABM compares favorably with the SBM and the degree-corrected stochastic block model across a wide range of simulated and real networks when covariate information is available.
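As a rough illustration of the adjustment idea behind SCWA, assuming the covariate coefficients have already been estimated (the thesis obtains them by maximum likelihood), one can divide the covariate effect out of the adjacency matrix before spectral clustering; the sketch below is hypothetical, not the thesis implementation.

```python
import numpy as np

def covariate_adjusted_adjacency(A, Z, beta):
    """Divide pairwise-covariate effects out of an adjacency matrix.

    A    : (n, n) adjacency matrix
    Z    : (n, n, p) pairwise covariates
    beta : (p,) coefficients, assumed already estimated
    """
    # If the edge rate scales with exp(beta' z_ij), dividing by that
    # factor leaves (approximately) the block structure behind.
    return A / np.exp(Z @ beta)
```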
|
2 |
Data-Driven Quickest Change Detection. Kurt, Mehmet Necip. January 2020 (has links)
The quickest change detection (QCD) problem is to detect abrupt changes in a sensing environment as quickly as possible in real time while limiting the risk of false alarm. Statistical inference about the monitored stochastic process is performed through observations acquired sequentially over time. After each observation, the QCD algorithm either stops and declares a change or continues to take a further observation in the next time interval. There is an inherent tradeoff between speed and accuracy in this decision-making process: the design goal is to optimally balance the average detection delay against the false alarm rate so as to respond to abrupt changes both quickly and accurately.
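For orientation, the classical model-based procedure that formalizes this tradeoff is CUSUM, sketched below under the assumption that the pre- and post-change densities are known; the thesis's contribution concerns precisely the data-driven settings where that assumption fails. For a unit-variance Gaussian mean shift from 0 to mu, for example, `log_lr` would be `lambda x: mu * x - mu**2 / 2`.

```python
def cusum_stopping_time(samples, log_lr, threshold):
    """Classical CUSUM: alarm when the running statistic crosses a threshold.

    log_lr(x) is the log-likelihood ratio of the post- vs. pre-change
    densities; a higher threshold lowers the false alarm rate at the
    price of a longer average detection delay.
    """
    stat = 0.0
    for t, x in enumerate(samples, start=1):
        stat = max(0.0, stat + log_lr(x))  # reset at zero, accumulate evidence
        if stat > threshold:
            return t  # declare a change at time t
    return None  # no change declared
```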
The objective of this thesis is to investigate effective and scalable QCD approaches for real-world data streams. The classical QCD framework is model-based: the statistical data model is assumed to be known for both the pre- and post-change regimes. However, real-world data often pose significant modeling challenges, such as high dimensionality, complex multivariate structure, lack of parametric models, unknown post-change (e.g., attack or anomaly) patterns, and complex temporal correlation. Further, in some cases the data is privacy-sensitive and distributed over a system, and hence not fully available to the QCD algorithm. This thesis addresses these challenges and proposes novel data-driven QCD approaches that are robust to data model mismatch and hence widely applicable to a variety of practical settings.
In Chapter 2, online cyber-attack detection in the smart power grid is formulated as a partially observable Markov decision process (POMDP) problem based on the QCD framework, and a universal, robust, online cyber-attack detection algorithm is proposed using model-free reinforcement learning (RL) for POMDPs. In Chapter 3, online anomaly detection for big data streams is studied where both the nominal (i.e., pre-change) and anomalous (i.e., post-change) high-dimensional statistical data models are unknown. A data-driven solution is proposed: first, a set of useful univariate summary statistics is computed from a nominal dataset in an offline phase; then, the online summary statistics are monitored for a persistent deviation from the nominal ones.
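A hedged sketch of that offline/online split, with an illustrative CUSUM-style accumulation; the simple z-score, the drift, and the threshold are placeholders for exposition, not the chapter's actual statistics.

```python
import numpy as np

class SummaryStatisticMonitor:
    """Offline: learn a nominal baseline for one summary statistic.
    Online : accumulate persistent deviations from that baseline."""

    def __init__(self, nominal_summaries, drift=0.5, threshold=10.0):
        self.mu = np.mean(nominal_summaries)
        self.sigma = np.std(nominal_summaries) + 1e-12
        self.drift, self.threshold = drift, threshold
        self.stat = 0.0

    def update(self, summary):
        # The score is positive only when the deviation exceeds the drift,
        # so isolated fluctuations die out but persistent shifts build up.
        z = abs(summary - self.mu) / self.sigma
        self.stat = max(0.0, self.stat + z - self.drift)
        return self.stat > self.threshold  # True -> raise an alarm
```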
In Chapter 4, a generic data-driven QCD procedure called DeepQCD is proposed, which learns the change detection rule directly from observed raw data via deep recurrent neural networks. With a sufficient amount of training data including both pre- and post-change samples, DeepQCD can effectively learn the change detection rule for complex, high-dimensional, and temporally correlated data streams. Finally, in Chapter 5, online privacy-preserving anomaly detection is studied in a setting where the data is distributed over a network, is locally sensitive to each node, and has an unknown statistical model. A data-driven, differentially private, distributed detection scheme is proposed, which infers network-wide anomalies from the perturbed and encrypted statistics received from the nodes. Furthermore, the analytical privacy-security tradeoff in the network-wide anomaly detection problem is investigated.
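A minimal PyTorch sketch of a recurrent change-score network in the spirit of DeepQCD, not the thesis's architecture: a GRU summarizes the observation history and a linear head emits, at each time step, a score that can be thresholded to declare a change.

```python
import torch
import torch.nn as nn

class RecurrentChangeScorer(nn.Module):
    """GRU over the observation sequence; per-step change probability."""

    def __init__(self, input_dim, hidden_dim=32):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):           # x: (batch, time, input_dim)
        h, _ = self.rnn(x)          # hidden state at every time step
        return torch.sigmoid(self.head(h)).squeeze(-1)  # (batch, time)
```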
|
3 |
An evaluation of algorithms for real-time strategic placement of sensors. Tiberg, Jesper. January 2004 (has links)
This work investigates whether the algorithms Simultaneous Perturbation Stochastic Approximation (SPSA) and Virtual Force Algorithm (VFA) are suitable for real-time strategic placement of sensors in a dynamic environment. These algorithms are evaluated and compared against Simulated Annealing (SA), which has been used before in similar applications. For the tests, a computer-based model of the sensors and the environment in which they are used is implemented. The model handles sensors, moving objects, specifications for the area the sensors are supposed to monitor, and all interaction between components within the model. It was the authors' belief that SPSA and VFA are suited to this kind of problem and that they have advantages over SA in complex scenarios. The results show this to be true, although SA seems to perform better when the number of sensors to be placed is small.
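For reference, a generic SPSA step of the kind evaluated here, sketched with the standard gain schedules; `loss` would score a candidate sensor layout (for instance, an estimate of uncovered area), and all constants are illustrative.

```python
import numpy as np

def spsa_minimize(loss, theta0, n_iter=200, a=0.1, c=0.1, seed=0):
    """SPSA: estimate the gradient from two loss evaluations per step."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602          # step-size schedule (standard exponent)
        ck = c / k ** 0.101          # perturbation-size schedule
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher signs
        # One simultaneous +/- perturbation estimates the whole gradient.
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g_hat
    return theta
```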
|
4 |
A Geometric Approach to Dynamical System: Global Analysis for Non-Convex Optimization. Xu, Ji. January 2020 (has links)
Non-convex optimization plays an important role in many machine learning problems. Studying the existing algorithms that aim to solve non-convex optimization problems can help us understand the optimization problems themselves and may shed light on the development of more effective algorithms or methods. In this thesis, we study two popular non-convex optimization problems, each paired with a popular algorithm.
The first pair is maximum likelihood estimation with the expectation maximization (EM) algorithm. EM is among the most popular algorithms for estimating the parameters of statistical models. However, EM, an iterative algorithm based on the maximum likelihood principle, is generally only guaranteed to find stationary points of the likelihood objective, and these points may be far from any maximizer. We address this disconnect between the statistical principles behind EM and its algorithmic properties.
Specifically, we provide a global analysis of EM for specific models in which the observations comprise an i.i.d. sample from a mixture of two Gaussians. This is achieved by (i) studying the sequence of parameters from an idealized execution of EM in the infinite-sample limit, fully characterizing the limit points of the sequence in terms of the initial parameters; and (ii) using this convergence analysis to establish statistical consistency (or lack thereof) for the actual sequence of parameters produced by EM.
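Concretely, for the symmetric mixture 0.5 N(theta, 1) + 0.5 N(-theta, 1) the EM iteration collapses to a one-line closed-form update, shown below as a sketch; the idealized execution studied here replaces the sample mean with the population expectation, the map theta -> E[tanh(theta X) X].

```python
import numpy as np

def em_symmetric_two_gaussians(x, theta0, n_iter=50):
    """Sample EM for the mixture 0.5*N(theta, 1) + 0.5*N(-theta, 1)."""
    theta = float(theta0)
    for _ in range(n_iter):
        # E-step responsibilities reduce to tanh(theta * x);
        # the M-step is the corresponding weighted mean of the data.
        theta = np.mean(np.tanh(theta * x) * x)
    return theta
```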
The second pair is the phase retrieval problem with the approximate message passing algorithm. Specifically, we consider an $\ell_2$-regularized non-convex optimization problem for recovering signals from their noisy phaseless observations. We design and study the performance of a message passing algorithm that aims to solve this optimization problem. We consider the asymptotic setting $m, n \rightarrow \infty$, $m/n \rightarrow \delta$, where $m$ is the number of measurements and $n$ is the signal dimension, and obtain sharp performance bounds. We show that for complex signals the algorithm can perform accurate recovery with only $m = \left(\frac{64}{\pi^2} - 4\right)n \approx 2.5n$ measurements. These sharp analyses enable us to compare the performance of our method with other phase recovery schemes.
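As a quick arithmetic check of the quoted sampling threshold:

```python
import math

delta_star = 64 / math.pi**2 - 4   # critical oversampling ratio m/n
print(delta_star)                  # ~2.48, i.e. m is roughly 2.5 n
```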
Finally, the convergence analysis of these iterative algorithms is carried out via a geometric approach to dynamical systems. By analyzing the movement from iteration to iteration, we provide a general tool that can establish global convergence for many two-dimensional dynamical systems. We hope this sheds light on convergence analysis for general dynamical systems.
|
5 |
Controle de congestionamento para voz sobre IP em HSDPA / Congestion control for voice over IP in HSDPA. André Ribeiro Braga. 09 May 2006 (has links)
The growth in the number of Voice over IP (VoIP) users makes it the service of greatest interest for cellular operators to provide. On the other hand, VoIP demands very strict Quality of Service (QoS) control, which becomes even more complicated in wireless networks, where packets can be lost to radio-link transmission errors as well as to network congestion. Within this paradigm, congestion control strategies appear as a good solution for preserving QoS guarantees under high load, when resources are exhausted and service quality is threatened. This work evaluates congestion control algorithms aimed at improving system capacity and QoS guarantees for speech users; the algorithms evaluated are packet scheduling and admission control. An analysis of mixed-service scenarios composed of VoIP and Web users is also provided. The main focus is on controlling packet delay, since it is a crucial requirement for real-time services such as VoIP. The results show that a congestion control framework designed for this service can improve system performance and mitigate the effects of network congestion. In the mixed-service scenario, the algorithms can perform resource reservation according to the priority defined for each service, increasing the perceived quality of the more sensitive service at the cost of a slight degradation of the more robust one.
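A toy illustration of the delay-oriented scheduling idea, assuming a simple two-class model in which VoIP has strict priority over Web traffic and ties within a class are broken by earliest deadline; the actual algorithms in this work operate on HSDPA radio resources rather than an abstract queue.

```python
import heapq
from itertools import count

def schedule_packets(packets):
    """Serve VoIP before Web; within a class, earliest deadline first.

    Each packet is a (deadline, service, payload) tuple.
    """
    queue, tie = [], count()
    for deadline, service, payload in packets:
        rank = (0 if service == "voip" else 1, deadline, next(tie))
        heapq.heappush(queue, (rank, payload))
    while queue:
        yield heapq.heappop(queue)[1]
```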
|
6 |
Algoritmos para avaliação da qualidade de vídeo em sistemas de televisão digital. / Video quality assessment algorithms in digital television applications. Fonseca, Roberto Nery da. 15 October 2008 (has links)
This dissertation addresses quality assessment of video signals, specifically full-reference objective assessment of standard-definition video. The most reliable way to measure the quality difference between two video scenes is to use a panel of viewers under controlled psychometric conditions, yielding a subjective measure of the quality difference (DMOS: Differences in Mean Opinion Score). Subjective assessment is both time-consuming and costly, which makes it impractical; the natural substitutes are objective quality assessment algorithms whose scores correlate highly with DMOS. This work first presents the relevant aspects of the human visual system, methodologies for video assessment in standard-definition digital television (SDTV) applications, and the validation of these methodologies. The goal is then to test three metrics of low computational cost: peak signal-to-noise ratio (PSNR: Peak Signal-to-Noise Ratio), structural similarity (SSIM: Structural SIMilarity), and the metric that measures differences in the three color components defined by the CIE (Commission Internationale de l'Eclairage), represented by L*, a*, and b*, over a given spatial extent (S-CIELAB: Spatial-CIELAB). A validation methodology for these metrics is presented, based on the scenes and subjective-test results produced by the Video Quality Expert Group (VQEG). A scene-preparation stage is added ahead of the metrics, performing brightness equalization, detail smoothing, and edge detection; by controlling the strength of these filters, a new set of measurements is obtained and its performance is compared against the VQEG measurements. The results show that, for standard-definition digital television, assessment based on color components has little influence on the correlation with subjective measurements. On the other hand, suitable image-smoothing techniques combined with easily implemented metrics such as SSIM raise the correlation with subjective measurements, and edge-extraction techniques combined with PSNR can significantly improve its correlation with the VQEG tests. In light of these results, it is concluded that computationally simple objective metrics can be used to compare the quality of SDTV video signals, provided they are properly combined with techniques that account for the human visual system, such as smoothing and edge extraction.
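Of the metrics above, PSNR is the simplest to state; the sketch below also pairs it with a Sobel edge-extraction front end to echo the finding that edge information boosts its correlation with subjective scores (the filter choice is an assumption, not the dissertation's exact preprocessing).

```python
import numpy as np
from scipy import ndimage

def psnr(reference, processed, peak=255.0):
    """PSNR between two same-size frames, assuming an 8-bit range."""
    mse = np.mean((reference.astype(float) - processed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

def edge_psnr(reference, processed):
    """PSNR computed on Sobel gradient magnitudes instead of raw pixels."""
    def edges(img):
        g = img.astype(float)
        return np.hypot(ndimage.sobel(g, axis=0), ndimage.sobel(g, axis=1))
    er, ep = edges(reference), edges(processed)
    return psnr(er, ep, peak=max(float(er.max()), 1.0))
```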
|