1 |
A Unified Robust Minimax Framework for Regularized Learning Problems. Zhou, Hongbo, 01 May 2014.
Regularization techniques have become a principled tool for model-based statistics and artificial intelligence research. However, in most situations, these regularization terms are not well interpreted, especially with regard to how they relate to the loss function and the data matrix in a given statistical model. In this work, we propose a robust minimax formulation that interprets the relationship between the data and the regularization terms for a large class of loss functions. We show that various regularization terms essentially correspond to different distortions of the original data matrix. This provides a unified framework for understanding various existing regularization terms, for designing novel regularization terms based on perturbation-analysis techniques, and for inspiring novel generic algorithms. To show how minimax-related concepts apply to real-world learning tasks, we develop a new fault-tolerant classification framework that combats class noise in general multi-class classification problems; further, by studying the relationship between the class of majorizable functions and the minimax framework, we develop an accurate, efficient, and scalable algorithm for solving a large family of learning formulations. In addition, we extend this work to tackle several important matrix-decomposition-related learning tasks, and we validate it on real-world applications including structure-from-motion (with missing data) and latent-structure dictionary learning. This work, composed of a unified formulation, a scalable algorithm, and promising applications to many real-world learning problems, contributes to the understanding of the hidden robustness in many learning models. As we show, many classical statistical machine learning models can be unified under this formulation, and accurate, efficient, and scalable algorithms become available from our research.
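For context on the data-distortion reading of regularization described above, a classical robust least-squares identity (a standard result in the spirit of El Ghaoui and Lebret, 1997, not a formula taken from this thesis) shows how a worst-case perturbation of the data matrix yields a familiar regularizer:

```latex
% Worst-case spectral-norm-bounded distortion of the data matrix X
% is equivalent to norm regularization of the least-squares loss.
\min_{w} \; \max_{\|\Delta\|_{2} \le \lambda} \bigl\| (X + \Delta)\, w - y \bigr\|_{2}
\;=\; \min_{w} \; \bigl\| X w - y \bigr\|_{2} + \lambda\, \| w \|_{2}
```

In this reading, the regularization weight λ is exactly the budget allowed for distorting X, which is the kind of correspondence the unified framework develops for a much larger class of losses and regularizers.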
|
2 |
Studies on Discrete-Valued Vector Reconstruction from Underdetermined Linear Measurements / 劣決定線形観測に基づく離散値ベクトル再構成に関する研究. Hayakawa, Ryo, 23 March 2020.
Kyoto University / 0048 / New-system doctoral program / Doctor of Informatics / 甲第22587号 / 情博第724号 / 新制||情||124 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor 下平 英寿, Professor 田中 利幸, Professor 山下 信雄, Professor 林 和則 (Osaka City University) / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
3 |
Multi-User Detection of Overloaded Systems with Low-Density Spreading. Fantuz, Mitchell, 11 September 2019.
Future wireless networks will host applications that require many devices to be connected to the network. Non-orthogonal multiple access (NOMA) is a promising multiple-access scheme that allows more users to transmit simultaneously on a common channel than orthogonal signaling techniques permit. This overloading enables high spectral efficiencies, which can support the high demand for wireless access. One notable NOMA scheme is low-density spreading (LDS), a code-domain multiple-access scheme. Low-density spreading operates like code division multiple access (CDMA) in the sense that each user spreads its data with a spreading sequence, but the sequences have only a small number of nonzero chips, hence the term low-density. The message passing algorithm (MPA) is typically used for multi-user detection (MUD) in LDS systems, and the MPA detector has complexity exponential in the number of users contributing to each chip. LDS systems suffer from two inherent problems: high computational complexity and vulnerability to multipath channels. In this thesis, these two problems are addressed. First, a lower-complexity MUD technique is presented whose complexity is proportional to the square of the number of users; the proposed detector is based on the minimum mean square error (MMSE) and parallel interference cancellation (PIC) detectors. Simulation results show that the proposed MUD technique reduces the numbers of multiplications and additions by 81.84% and 67.87%, respectively, at a cost of about 0.25 dB, for an overloading factor of 150%. Second, a precoding scheme designed to mitigate the effects of the multipath channel is presented. The precoder applies an inverse channel response to the input signal before transmission, so that the received signal is free of the multipath effects that would otherwise destroy the low-density structure.
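To make the two-stage MMSE-plus-PIC idea concrete, the following Python sketch detects BPSK symbols in a 150%-overloaded low-density spreading system. It is an illustration under simple assumptions (a random two-chip spreading matrix, a fixed number of PIC rounds, a crude soft-symbol estimate), not the detector specified in the thesis:

```python
# Hedged sketch of MMSE detection followed by parallel interference cancellation
# for an overloaded low-density spreading system. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N, K = 4, 6                       # chips per symbol, number of users (K/N = 150% overload)
snr_db = 8.0
noise_var = 10.0 ** (-snr_db / 10.0)

# Low-density spreading matrix: each user occupies only 2 of the N chips.
S = np.zeros((N, K))
for k in range(K):
    chips = rng.choice(N, size=2, replace=False)
    S[chips, k] = rng.choice([-1.0, 1.0], size=2) / np.sqrt(2)

x = rng.choice([-1.0, 1.0], size=K)                     # one BPSK symbol per user
y = S @ x + np.sqrt(noise_var) * rng.standard_normal(N)

# Stage 1: joint linear MMSE estimate of all users.
x_hat = S.T @ np.linalg.inv(S @ S.T + noise_var * np.eye(N)) @ y

# Stage 2: a few rounds of parallel interference cancellation.
for _ in range(3):
    soft = np.tanh(x_hat / noise_var)                   # crude soft-symbol estimates
    x_new = np.empty(K)
    for k in range(K):
        interference = S @ soft - S[:, k] * soft[k]     # reconstructed signal of all other users
        residual = y - interference
        sk = S[:, k]
        x_new[k] = sk @ residual / (sk @ sk)            # matched filter on the cleaned signal
    x_hat = x_new

print("transmitted:", x.astype(int))
print("detected:   ", np.sign(x_hat).astype(int))
```

The point of the structure is the complexity claim in the abstract: each PIC round costs on the order of K^2 operations, in contrast to the per-chip exponential cost of the MPA detector.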
|
4 |
Generalized Survey Propagation. Tu, Ronghui, 09 May 2011.
Survey propagation (SP) has recently been discovered as an efficient algorithm for solving classes of hard constraint-satisfaction problems (CSPs). Powerful as it is, SP is still a heuristic algorithm, and further understanding its algorithmic nature, improving its effectiveness, and extending its applicability are highly desirable.
Prior to the work in this thesis, Maneva et al. introduced a Markov random field (MRF) formalism for k-SAT problems, under which SP may be viewed as a special case of the well-known belief propagation (BP) algorithm. This result has sometimes been interpreted as meaning that “SP is BP”; it also allows a rigorous extension of SP to a “weighted” version, or a family of algorithms, for k-SAT problems.
SP has also been generalized, in a non-weighted fashion, for solving non-binary CSPs. That generalization, however, is presented in the language of statistical physics and is somewhat difficult for a general audience to access.
This thesis generalizes SP both in its applicability to non-binary problems and in introducing “weights”, extending SP to a family of algorithms. Under a generic formulation of CSPs, we first present an understanding of non-weighted SP for arbitrary CSPs in terms of “probabilistic token passing” (PTP).
We then show that this probabilistic interpretation of non-weighted SP makes it naturally generalizable to a weighted version, which we call weighted PTP.
Another main contribution of this thesis is a disproof of the folk belief that “SP is BP”. We show that the fact that SP is a special case of BP for k-SAT problems is rather incidental: for more general CSPs, SP and generalized SP are not special cases of BP. We also establish the conditions under which generalized SP does reduce to a special case of BP.
To explore the benefit of generalizing SP to a wide family of algorithms for arbitrary, particularly non-binary, problems, we devise a simple weighted-PTP-based algorithm for solving 3-COL problems. Experimental results, compared against an existing non-weighted-SP-based algorithm, reveal the potential performance gain that generalized SP may bring.
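For readers who want a concrete baseline for the BP updates that SP is compared against, the following Python sketch (standard sum-product, not the SP or PTP algorithms developed in the thesis) computes a marginal on a tiny tree-structured SAT factor graph and checks it against brute-force enumeration:

```python
# Standard sum-product (BP) on a two-clause SAT instance over x1, x2, x3:
# (x1 OR x2) AND (NOT x2 OR x3). The factor graph is a tree, so BP is exact here.
import itertools
import numpy as np

def f1(x1, x2):          # clause factor (x1 OR x2): 1 if satisfied, 0 otherwise
    return float(x1 or x2)

def f2(x2, x3):          # clause factor (NOT x2 OR x3)
    return float((not x2) or x3)

# Leaf variables x1 and x3 send uniform messages to their clause factors.
m_x1_to_f1 = np.array([1.0, 1.0])
m_x3_to_f2 = np.array([1.0, 1.0])

# Factor-to-variable messages into x2: sum out the other variable in each clause.
m_f1_to_x2 = np.array([sum(f1(x1, v) * m_x1_to_f1[x1] for x1 in (0, 1)) for v in (0, 1)])
m_f2_to_x2 = np.array([sum(f2(v, x3) * m_x3_to_f2[x3] for x3 in (0, 1)) for v in (0, 1)])

# The BP belief at x2 is the normalized product of incoming messages (exact on a tree).
belief = m_f1_to_x2 * m_f2_to_x2
belief /= belief.sum()

# Brute-force marginal over satisfying assignments, for comparison.
sat = [a for a in itertools.product((0, 1), repeat=3) if f1(a[0], a[1]) * f2(a[1], a[2]) > 0]
exact = np.array([sum(1.0 for a in sat if a[1] == v) for v in (0, 1)])
exact /= exact.sum()

print("BP marginal of x2:   ", belief)   # -> [0.5 0.5]
print("exact marginal of x2:", exact)    # -> [0.5 0.5]
```

SP-style algorithms change what the messages represent rather than the message-passing skeleton itself, which is precisely where the “SP is BP” question examined in the thesis arises.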
|
5 |
Low-density Parity-Check decoding Algorithms / Low-density Parity-Check avkodare algoritm. Pirou, Florent, January 2004.
Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, the effective VLSI implementation of an LDPC decoder remains a big challenge and is a crucial issue in determining how well the benefits of LDPC codes can be exploited in real applications. In this master's thesis report, following a background on error-control coding, we describe low-density parity-check codes and their decoding algorithm, as well as the requirements and architectures of LDPC decoder implementations.
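As a pointer to the parallel check/variable structure that makes LDPC decoding hardware-friendly, here is a minimal hard-decision bit-flipping sketch in Python; the toy parity-check matrix and single-error pattern are illustrative assumptions, and this is not an implementation taken from the thesis:

```python
# Hedged illustration: hard-decision bit-flipping decoding on a tiny parity-check matrix.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])          # each row is a parity check

sent = np.zeros(6, dtype=int)               # the all-zeros codeword satisfies every check
received = sent.copy()
received[2] ^= 1                            # single bit error in position 2

bits = received.copy()
for _ in range(10):
    syndrome = H @ bits % 2                 # which checks are currently violated
    if not syndrome.any():
        break
    # Each bit counts the violated checks it participates in; flip the worst offenders.
    votes = H.T @ syndrome
    bits = np.where(votes == votes.max(), bits ^ 1, bits)

print("received:", received, " decoded:", bits, " checks OK:", not (H @ bits % 2).any())
```

Each check update and each bit update depends only on its neighbours in the graph, which is the property exploited by the parallel decoder architectures discussed in the report.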
|
6 |
A Modified Sum-Product Algorithm over Graphs with Short Cycles. Raveendran, Nithin, January 2015.
We investigate the limitations of the sum-product algorithm for binary low-density parity-check (LDPC) codes whose graphs contain isolated short cycles. The assumption of independence among the messages passed, which is reasonable in many graph configurations, fails most severely in graphical structures with short cycles. This work is a step towards understanding the effect of short cycles on the error floors of the sum-product algorithm. We propose a modified sum-product algorithm that accounts for the statistical dependency of the messages passed around a cycle of length 4. We also formulate the modified algorithm in the log domain, which eliminates the numerical instability and precision issues associated with the probability domain. Simulation results show a signal-to-noise ratio (SNR) improvement for the modified sum-product algorithm compared to the original algorithm. This suggests that accounting for the dependency among messages improves the decisions and successfully mitigates the effects of length-4 cycles in the Tanner graph. The improvement is significant in the high-SNR region, suggesting a possible cause of the error-floor effects on such graphs. Using density evolution techniques, we analysed the modified decoding algorithm; the threshold computed for the modified algorithm is higher than that of the original sum-product algorithm, validating the observed simulation results. We also prove that the conditional entropy of a codeword, given the estimate obtained using the modified algorithm, is lower than with the original sum-product algorithm.
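For reference, the standard log-domain sum-product updates that the modified algorithm starts from can be sketched as follows; the parity-check matrix, noise level, and iteration count are illustrative assumptions, and the length-4-cycle dependency correction proposed in the thesis is not included:

```python
# Hedged sketch: plain log-domain sum-product decoding of a tiny parity-check code.
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (its Tanner graph has short cycles).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
m, n = H.shape

rng = np.random.default_rng(1)
codeword = np.zeros(n, dtype=int)                                # all-zeros codeword is valid
sigma = 0.8
received = (1 - 2 * codeword) + sigma * rng.standard_normal(n)   # BPSK over AWGN
llr_ch = 2 * received / sigma**2                                 # channel log-likelihood ratios

# Variable-to-check messages, initialized with the channel LLRs on the graph edges.
msg_vc = H * llr_ch

for _ in range(10):
    # Check-node update: tanh rule, excluding the target edge from the product.
    t = np.tanh(np.clip(msg_vc, -30, 30) / 2)
    t[H == 0] = 1.0                                              # neutral value off the edges
    prod_all = np.prod(t, axis=1, keepdims=True)
    msg_cv = 2 * np.arctanh(np.clip(prod_all / t, -0.999999, 0.999999)) * H

    # Variable-node update: channel LLR plus all incoming messages except the target edge.
    total = llr_ch + msg_cv.sum(axis=0)
    msg_vc = (total - msg_cv) * H

    # Tentative decision; stop early if every parity check is satisfied.
    decisions = (total < 0).astype(int)
    if not np.any(H @ decisions % 2):
        break

print("hard channel decisions:", (received < 0).astype(int))
print("decoded bits          :", decisions)
print("bit errors vs sent    :", int(np.sum(decisions != codeword)))
```

The check-node update treats its incoming messages as independent; when messages travel around the same length-4 cycle this assumption breaks, which is exactly the dependency the modified algorithm models.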
|