191.
Complexity on some bin packing problems. / CUHK electronic theses & dissertations collection. January 2000
by Lau Siu Chung. / "April 2000." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (p. 97-102). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
192.
Polynomial optimization problems: approximation algorithms and applications. / CUHK electronic theses & dissertations collection. January 2011
Li, Zhening. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 138-146). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
193.
An algorithm for detecting and resolving store-and-forward deadlocks in packet-switched networks. January 1986
by Chan Cheung-wing. / Bibliography: leaves 62-63. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1986.
194.
On-line algorithms for the K-server problem and its variants. January 1995
by Chi-ming Wat. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 77-82). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Performance analysis of on-line algorithms --- p.2 / Chapter 1.2 --- Randomized algorithms --- p.4 / Chapter 1.3 --- Types of adversaries --- p.5 / Chapter 1.4 --- Overview of the results --- p.6 / Chapter 2 --- The k-server problem --- p.8 / Chapter 2.1 --- Introduction --- p.8 / Chapter 2.2 --- Related Work --- p.9 / Chapter 2.3 --- The Evolution of the Work Function Algorithm --- p.12 / Chapter 2.4 --- Definitions --- p.16 / Chapter 2.5 --- The Work Function Algorithm --- p.18 / Chapter 2.6 --- The Competitive Analysis --- p.20 / Chapter 3 --- The weighted k-server problem --- p.27 / Chapter 3.1 --- Introduction --- p.27 / Chapter 3.2 --- Related Work --- p.29 / Chapter 3.3 --- Fiat and Ricklin's Algorithm --- p.29 / Chapter 3.4 --- The Work Function Algorithm --- p.32 / Chapter 3.5 --- The Competitive Analysis --- p.35 / Chapter 4 --- The Influence of Lookahead --- p.41 / Chapter 4.1 --- Introduction --- p.41 / Chapter 4.2 --- Related Work --- p.42 / Chapter 4.3 --- The Role of l-lookahead --- p.43 / Chapter 4.4 --- The LRU Algorithm with l-lookahead --- p.45 / Chapter 4.5 --- The Competitive Analysis --- p.45 / Chapter 5 --- Space Complexity --- p.57 / Chapter 5.1 --- Introduction --- p.57 / Chapter 5.2 --- Related Work --- p.59 / Chapter 5.3 --- Preliminaries --- p.59 / Chapter 5.4 --- The TWO Algorithm --- p.60 / Chapter 5.5 --- Competitive Analysis --- p.61 / Chapter 5.6 --- Remarks --- p.69 / Chapter 6 --- Conclusions --- p.70 / Chapter 6.1 --- Summary of Our Results --- p.70 / Chapter 6.2 --- Recent Results --- p.71 / Chapter 6.2.1 --- The Adversary Models --- p.71 / Chapter 6.2.2 --- On-line Performance-Improvement Algorithms --- p.73 / Chapter A --- Proof of Lemma 1 --- p.75 / Bibliography --- p.77
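For orientation: the work function algorithm central to this record can be stated compactly. After the t-th request r, the work function w_t(X) is the optimal cost of serving all requests so far while ending with the servers in configuration X, computed by the recurrence w_t(X) = min over x in X of [w_{t-1}(X - x + r) + d(r, x)]; the algorithm then moves the server x of the current configuration A minimizing w_t(A - x + r) + d(x, r). The following minimal Python sketch runs on a toy line metric; it is illustrative only, and the point set, request sequence, and all names are assumptions, not taken from the thesis:

```python
from itertools import combinations_with_replacement, permutations

POINTS = (0, 1, 2, 3, 5)            # assumed toy metric: points on a line
def d(a, b):
    return abs(a - b)

def match_cost(X, Y):
    # Min-cost matching between two k-point configurations (brute force).
    return min(sum(d(x, y) for x, y in zip(X, p)) for p in set(permutations(Y)))

def advance(w, r, configs):
    # Work function update: w_t(X) = min_{x in X} w_{t-1}(X - x + r) + d(r, x).
    return {X: min(w[tuple(sorted(X[:i] + X[i+1:] + (r,)))] + d(r, x)
                   for i, x in enumerate(X))
            for X in configs}

def wfa(initial, requests):
    k = len(initial)
    configs = list(combinations_with_replacement(POINTS, k))
    A = tuple(sorted(initial))
    w = {X: match_cost(A, X) for X in configs}      # w_0
    total = 0
    for r in requests:
        w = advance(w, r, configs)
        # Move the server x minimizing w_t(A - x + r) + d(x, r).
        i = min(range(k),
                key=lambda i: w[tuple(sorted(A[:i] + A[i+1:] + (r,)))] + d(A[i], r))
        total += d(A[i], r)
        A = tuple(sorted(A[:i] + A[i+1:] + (r,)))
    return total, A

print(wfa(initial=(0, 5), requests=[3, 0, 5, 3]))   # -> (cost, final configuration)
```

The table of work function values over all configurations grows combinatorially, which is exactly the space-complexity concern taken up in chapter 5 of the thesis.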
195.
Learning algorithms for non-overlapped trees of probabilistic logic neurons. January 1990
by Law Hing Man, Hudson. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1990. / Bibliography: leaves 109-112. / Acknowledgements / Abstract / Chapter Chapter I. --- Introduction --- p.1 / Chapter 1.1 --- Overview of the thesis --- p.2 / Chapter 1.2 --- Organization of the thesis --- p.7 / Chapter Chapter II. --- Artificial Neural Networks --- p.9 / Chapter 2.1 --- Architectures of Artificial Neural Networks --- p.10 / Chapter 2.1.1 --- Neuron Models --- p.10 / Chapter 2.1.2 --- Network Models --- p.12 / Chapter 2.2 --- Learning algorithms --- p.13 / Chapter Chapter III. --- From Logic Neuron to Non-Overlapped Trees --- p.15 / Chapter 3.1 --- Deterministic Logic Neuron (DLN) --- p.15 / Chapter 3.2 --- Probabilistic Logic Neuron (PLN) --- p.20 / Chapter 3.2.1 --- Well-behaved learning of orthogonal patterns in PLN network --- p.23 / Chapter 3.2.2 --- Well-behaved learning algorithm for non-orthogonal patterns --- p.23 / Chapter 3.3 --- Non-Overlapped Trees --- p.28 / Chapter 3.3.1 --- Homogeneous learning algorithm --- p.30 / Chapter 3.3.2 --- An external comparator --- p.34 / Chapter 3.3.3 --- Problems solved by NOTPLN --- p.35 / Chapter Chapter IV. --- Properties of NOTPLN --- p.37 / Chapter 4.1 --- Noise Insensitivity --- p.37 / Chapter 4.1.1 --- Noise insensitivity with one bit noise --- p.38 / Chapter 4.1.2 --- Noise insensitivity under different noise distributions --- p.40 / Chapter 4.2 --- Functionality --- p.46 / Chapter 4.3 --- Capacity --- p.49 / Chapter 4.4 --- Distributed representation --- p.50 / Chapter 4.5 --- Generalization --- p.51 / Chapter 4.5.1 --- Text-to-Phoneme Problem --- p.52 / Chapter 4.5.2 --- Automobile Learning --- p.53 / Chapter Chapter V. --- Learning Algorithms --- p.54 / Chapter 5.1 --- Presentation methods --- p.54 / Chapter 5.2 --- Learning algorithms --- p.56 / Chapter 5.2.1 --- Heterogeneous algorithm --- p.57 / Chapter 5.2.2 --- Conflict reduction algorithm --- p.61 / Chapter 5.3 --- Side effects of learning algorithms --- p.68 / Chapter 5.3.1 --- Existence of Side Effects --- p.68 / Chapter 5.3.2 --- Removal of Side Effects --- p.69 / Chapter Chapter VI. --- Practical Considerations --- p.71 / Chapter 6.1 --- Input size constraint --- p.71 / Chapter 6.2 --- Limitations of functionality --- p.72 / Chapter 6.3 --- Thermometer code --- p.72 / Chapter 6.4 --- Output definitions --- p.73 / Chapter 6.5 --- More trees for one bit --- p.74 / Chapter 6.6 --- Repeated recall --- p.75 / Chapter Chapter VII. --- Implementation and Simulations --- p.78 / Chapter 7.1 --- Implementation --- p.78 / Chapter 7.2 --- Simulations --- p.81 / Chapter 7.2.1 --- Parity learning --- p.81 / Chapter 7.2.2 --- Performance of learning algorithms under different Hamming distances --- p.82 / Chapter 7.2.3 --- Performance of learning algorithms with different output size --- p.83 / Chapter 7.2.4 --- Numerals recognition and noise insensitivity --- p.84 / Chapter 7.2.5 --- Automobile learning and generalization --- p.86 / Chapter Chapter VIII. --- Spoken Numerals Recognition System based on NOTPLN --- p.89 / Chapter 8.1 --- End-point detection --- p.90 / Chapter 8.2 --- Linear Predictive Analysis --- p.91 / Chapter 8.3 --- Formant Frequency Extraction --- p.93 / Chapter 8.4 --- Coding --- p.95 / Chapter 8.5 --- Results and discussion --- p.96 / Chapter Chapter IX.
--- Concluding Remarks --- p.97 / Chapter 9.1 --- Revisit of the contributions of the thesis --- p.97 / Chapter 9.2 --- Further research --- p.99 / Chapter Appendix A --- Equation for calculating the probability of random selection --- p.102 / Chapter Appendix B --- Training sets with different Hamming distances --- p.103 / Chapter Appendix C --- Set of numerals with their associated binary values --- p.107 / References --- p.109
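As background for the record above: a probabilistic logic neuron (PLN) in the Aleksander tradition is a RAM whose cells hold 0, 1, or an undefined value u that emits a random bit, and training rewards (freezes) or penalizes (resets to u) the cells a pattern addresses, according to whether the overall network answer was correct. The toy pyramid below is a loose sketch of that idea only; the thesis's NOTPLN trees and its homogeneous, heterogeneous, and conflict-reduction algorithms differ in detail, and every name here is invented:

```python
import random

U = 'u'                 # undefined cell: emits a random bit when addressed

class PLN:
    """RAM-based probabilistic logic node with 3-state cells {0, 1, u}."""
    def __init__(self, fan_in):
        self.mem = [U] * (2 ** fan_in)
        self.last = None                    # (address, output) of last firing
    def fire(self, bits):
        addr = int(''.join(map(str, bits)), 2)
        out = random.randint(0, 1) if self.mem[addr] == U else self.mem[addr]
        self.last = (addr, out)
        return out
    def reward(self):                       # freeze the value just produced
        addr, out = self.last
        self.mem[addr] = out
    def penalize(self):                     # forget: back to undefined
        self.mem[self.last[0]] = U

# A two-level pyramid (tree): two leaves feeding one root, 6-bit input.
leaves, root = [PLN(3), PLN(3)], PLN(2)

def net(x):
    return root.fire([leaves[0].fire(x[:3]), leaves[1].fire(x[3:])])

def train(samples, epochs=300):
    for _ in range(epochs):
        for x, target in samples:
            ok = net(x) == target
            for node in leaves + [root]:
                node.reward() if ok else node.penalize()

samples = [([0, 0, 1, 0, 1, 1], 1), ([1, 1, 0, 1, 0, 0], 0)]
train(samples)
print([net(x) for x, _ in samples])         # usually [1, 0] after training
```

Training is stochastic, so convergence on any single run is not guaranteed; that sensitivity is one motivation for the better-behaved learning algorithms the thesis develops.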
196.
Genetic based clustering algorithms and applications. January 2000
by Lee Wing Kin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 81-90). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgments --- p.iii / List of Figures --- p.vii / List of Tables --- p.viii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Clustering --- p.1 / Chapter 1.1.1 --- Hierarchical Classification --- p.2 / Chapter 1.1.2 --- Partitional Classification --- p.3 / Chapter 1.1.3 --- Comparative Analysis --- p.4 / Chapter 1.2 --- Cluster Analysis and Traveling Salesman Problem --- p.5 / Chapter 1.3 --- Solving Clustering Problem --- p.7 / Chapter 1.4 --- Genetic Algorithms --- p.9 / Chapter 1.5 --- Outline of Work --- p.11 / Chapter 2 --- The Clustering Algorithms and Applications --- p.13 / Chapter 2.1 --- Introduction --- p.13 / Chapter 2.2 --- Traveling Salesman Problem --- p.14 / Chapter 2.2.1 --- Related Work on TSP --- p.14 / Chapter 2.2.2 --- Solving TSP using Genetic Algorithm --- p.15 / Chapter 2.3 --- Applications --- p.22 / Chapter 2.3.1 --- Clustering for Vertical Partitioning Design --- p.22 / Chapter 2.3.2 --- Horizontal Partitioning a Relational Database --- p.36 / Chapter 2.3.3 --- Object-Oriented Database Design --- p.42 / Chapter 2.3.4 --- Document Database Design --- p.49 / Chapter 2.4 --- Conclusions --- p.53 / Chapter 3 --- The Experiments for Vertical Partitioning Problem --- p.55 / Chapter 3.1 --- Introduction --- p.55 / Chapter 3.2 --- Comparative Study --- p.56 / Chapter 3.3 --- Experimental Results --- p.59 / Chapter 3.4 --- Conclusions --- p.61 / Chapter 4 --- Three New Operators for TSP --- p.62 / Chapter 4.1 --- Introduction --- p.62 / Chapter 4.2 --- Enhanced Cost Edge Recombination Operator --- p.63 / Chapter 4.3 --- Shortest Path Operator --- p.66 / Chapter 4.4 --- Shortest Edge Operator --- p.69 / Chapter 4.5 --- The Experiments --- p.71 / Chapter 4.5.1 --- Experimental Results for a 48-city TSP --- p.71 / Chapter 4.5.2 --- Experimental Results for Problems in TSPLIB --- p.73 / Chapter 4.6 --- Conclusions --- p.77 / Chapter 5 --- Conclusions --- p.78 / Chapter 5.1 --- Summary of Achievements --- p.78 / Chapter 5.2 --- Future Development --- p.80 / Bibliography --- p.81
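Chapter 4's new crossover operators build on the classic edge recombination operator (ERX) for tours, which tries to preserve parental edges by always stepping to the unvisited neighbour with the fewest remaining neighbours. A small sketch of the standard unweighted operator follows for orientation; the thesis's enhanced cost variant additionally weighs edge lengths, and the parent tours below are invented examples:

```python
import random

def edge_recombination(p1, p2):
    """Standard edge recombination crossover (ERX) for TSP tours."""
    n = len(p1)
    adj = {c: set() for c in p1}
    for tour in (p1, p2):                       # union of both parents' edges
        for i, c in enumerate(tour):
            adj[c] |= {tour[i - 1], tour[(i + 1) % n]}
    current, child = p1[0], [p1[0]]
    unvisited = set(p1) - {current}
    while unvisited:
        for s in adj.values():                  # drop the visited city
            s.discard(current)
        if adj[current]:                        # neighbour with fewest edges left
            nxt = min(adj[current], key=lambda c: (len(adj[c]), random.random()))
        else:                                   # dead end: restart anywhere
            nxt = random.choice(sorted(unvisited))
        child.append(nxt)
        unvisited.discard(nxt)
        current = nxt
    return child

print(edge_recombination([0, 1, 2, 3, 4, 5], [2, 4, 0, 5, 1, 3]))
```

The fewest-neighbours rule is a heuristic for avoiding dead ends, so most edges of the child come from one parent or the other rather than from random restarts.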
197.
Mining multi-level association rules using data cubes and mining N-most interesting itemsets. January 2000
by Kwong, Wang-Wai Renfrew. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 102-105). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgments --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Data Mining Tasks --- p.1 / Chapter 1.1.1 --- Characterization --- p.2 / Chapter 1.1.2 --- Discrimination --- p.2 / Chapter 1.1.3 --- Classification --- p.2 / Chapter 1.1.4 --- Clustering --- p.3 / Chapter 1.1.5 --- Prediction --- p.3 / Chapter 1.1.6 --- Description --- p.3 / Chapter 1.1.7 --- Association Rule Mining --- p.4 / Chapter 1.2 --- Motivation --- p.4 / Chapter 1.2.1 --- Motivation for Mining Multi-level Association Rules Using Data Cubes --- p.4 / Chapter 1.2.2 --- Motivation for Mining N-most Interesting Itemsets --- p.8 / Chapter 1.3 --- Outline of the Thesis --- p.10 / Chapter 2 --- Survey on Previous Work --- p.11 / Chapter 2.1 --- Data Warehousing --- p.11 / Chapter 2.1.1 --- Data Cube --- p.12 / Chapter 2.2 --- Data Mining --- p.13 / Chapter 2.2.1 --- Association Rules --- p.14 / Chapter 2.2.2 --- Multi-level Association Rules --- p.15 / Chapter 2.2.3 --- Multi-Dimensional Association Rules Using Data Cubes --- p.16 / Chapter 2.2.4 --- Apriori Algorithm --- p.19 / Chapter 3 --- Mining Multi-level Association Rules Using Data Cubes --- p.22 / Chapter 3.1 --- Use of Multi-level Concept --- p.22 / Chapter 3.1.1 --- Multi-level Concept --- p.22 / Chapter 3.1.2 --- Criteria of Using Multi-level Concept --- p.23 / Chapter 3.1.3 --- Use of Multi-level Concept in Association Rules --- p.24 / Chapter 3.2 --- Use of Data Cube --- p.25 / Chapter 3.2.1 --- Data Cube --- p.25 / Chapter 3.2.2 --- Mining Multi-level Association Rules Using Data Cubes --- p.26 / Chapter 3.2.3 --- Definition --- p.28 / Chapter 3.3 --- Method for Mining Multi-level Association Rules Using Data Cubes --- p.31 / Chapter 3.3.1 --- Algorithm --- p.33 / Chapter 3.3.2 --- Example --- p.35 / Chapter 3.4 --- Experiment --- p.44 / Chapter 3.4.1 --- Simulation of Data Cube by Array --- p.44 / Chapter 3.4.2 --- Simulation of Data Cube by B+ Tree --- p.48 / Chapter 3.5 --- Discussion --- p.54 / Chapter 4 --- Mining the N-most Interesting Itemsets --- p.56 / Chapter 4.1 --- Mining the N-most Interesting Itemsets --- p.56 / Chapter 4.1.1 --- Criteria of Mining the N-most Interesting itemsets --- p.56 / Chapter 4.1.2 --- Definition --- p.58 / Chapter 4.1.3 --- Property --- p.59 / Chapter 4.2 --- Method for Mining N-most Interesting Itemsets --- p.60 / Chapter 4.2.1 --- Algorithm --- p.60 / Chapter 4.2.2 --- Example --- p.76 / Chapter 4.3 --- Experiment --- p.81 / Chapter 4.3.1 --- Synthetic Data --- p.81 / Chapter 4.3.2 --- Real Data --- p.85 / Chapter 4.4 --- Discussion --- p.98 / Chapter 5 --- Conclusion --- p.100 / Bibliography --- p.101 / Appendix --- p.106 / Chapter A --- Programs for Mining the N-most Interesting Itemset --- p.106 / Chapter A.1 --- Programs --- p.106 / Chapter A.2 --- Data Structures --- p.108 / Chapter A.3 --- Global Variables --- p.109 / Chapter A.4 --- Functions --- p.110 / Chapter A.5 --- Result Format --- p.113 / Chapter B --- Programs for Mining the Multi-level Association Rules Using Data Cube --- p.114 / Chapter B.1 --- Programs --- p.114 / Chapter B.2 --- Data Structure --- p.118 / Chapter B.3 --- Variables --- p.118 / Chapter B.4 --- Functions --- p.119
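The Apriori algorithm surveyed in chapter 2 prunes the search for frequent itemsets with one observation: every subset of a frequent itemset must itself be frequent. A textbook sketch follows; the thesis adapts this level-wise style of search to mine the N most interesting itemsets without a fixed support threshold, and the transactions below are invented:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Textbook Apriori: level-wise search for frequent itemsets."""
    transactions = [frozenset(t) for t in transactions]
    def support(s):
        return sum(s <= t for t in transactions)
    items = sorted({i for t in transactions for i in t})
    level = [frozenset([i]) for i in items]
    frequent, k = {}, 1
    while level:
        level = [c for c in level if support(c) >= min_support]
        frequent.update((c, support(c)) for c in level)
        # Join frequent k-itemsets, then prune candidates that have any
        # infrequent k-subset (the Apriori property).
        freq_set = set(level)
        cands = {a | b for a in level for b in level if len(a | b) == k + 1}
        level = [c for c in cands
                 if all(frozenset(s) in freq_set for s in combinations(c, k))]
        k += 1
    return frequent

tx = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'a', 'b', 'c', 'd'}]
print(apriori(tx, min_support=3))
```

With min_support=3 this keeps the singletons a, b, c and the pairs ab, ac, bc, while abc (support 2) is discarded without its supersets ever being counted.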
198.
Investigation on prototype learning. January 2000
Keung Chi-Kin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 128-135). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Classification --- p.2 / Chapter 1.2 --- Instance-Based Learning --- p.4 / Chapter 1.2.1 --- Three Basic Components --- p.5 / Chapter 1.2.2 --- Advantages --- p.6 / Chapter 1.2.3 --- Disadvantages --- p.7 / Chapter 1.3 --- Thesis Contributions --- p.7 / Chapter 1.4 --- Thesis Organization --- p.8 / Chapter 2 --- Background --- p.10 / Chapter 2.1 --- Improving Instance-Based Learning --- p.10 / Chapter 2.1.1 --- Scaling-up Nearest Neighbor Searching --- p.11 / Chapter 2.1.2 --- Data Reduction --- p.12 / Chapter 2.2 --- Prototype Learning --- p.12 / Chapter 2.2.1 --- Objectives --- p.13 / Chapter 2.2.2 --- Two Types of Prototype Learning --- p.15 / Chapter 2.3 --- Instance-Filtering Methods --- p.15 / Chapter 2.3.1 --- Retaining Border Instances --- p.16 / Chapter 2.3.2 --- Removing Border Instances --- p.21 / Chapter 2.3.3 --- Retaining Center Instances --- p.22 / Chapter 2.3.4 --- Advantages --- p.23 / Chapter 2.3.5 --- Disadvantages --- p.24 / Chapter 2.4 --- Instance-Abstraction Methods --- p.25 / Chapter 2.4.1 --- Advantages --- p.30 / Chapter 2.4.2 --- Disadvantages --- p.30 / Chapter 2.5 --- Other Methods --- p.32 / Chapter 2.6 --- Summary --- p.34 / Chapter 3 --- Integration of Filtering and Abstraction --- p.36 / Chapter 3.1 --- Incremental Integration --- p.37 / Chapter 3.1.1 --- Motivation --- p.37 / Chapter 3.1.2 --- The Integration Method --- p.40 / Chapter 3.1.3 --- Issues --- p.41 / Chapter 3.2 --- Concept Integration --- p.42 / Chapter 3.2.1 --- Motivation --- p.43 / Chapter 3.2.2 --- The Integration Method --- p.44 / Chapter 3.2.3 --- Issues --- p.45 / Chapter 3.3 --- Difference between Integration Methods and Composite Classifiers --- p.48 / Chapter 4 --- The PGF Framework --- p.49 / Chapter 4.1 --- The PGF1 Algorithm --- p.50 / Chapter 4.1.1 --- Instance-Filtering Component --- p.51 / Chapter 4.1.2 --- Instance-Abstraction Component --- p.52 / Chapter 4.2 --- The PGF2 Algorithm --- p.56 / Chapter 4.3 --- Empirical Analysis --- p.57 / Chapter 4.3.1 --- Experimental Setup --- p.57 / Chapter 4.3.2 --- Results of PGF Algorithms --- p.59 / Chapter 4.3.3 --- Analysis of PGF1 --- p.61 / Chapter 4.3.4 --- Analysis of PGF2 --- p.63 / Chapter 4.3.5 --- Overall Behavior of PGF --- p.66 / Chapter 4.3.6 --- Comparisons with Other Approaches --- p.69 / Chapter 4.4 --- Time Complexity --- p.72 / Chapter 4.4.1 --- Filtering Components --- p.72 / Chapter 4.4.2 --- Abstraction Component --- p.74 / Chapter 4.4.3 --- PGF Algorithms --- p.74 / Chapter 4.5 --- Summary --- p.75 / Chapter 5 --- Integrated Concept Prototype Learner --- p.77 / Chapter 5.1 --- Motivation --- p.78 / Chapter 5.2 --- Abstraction Component --- p.80 / Chapter 5.2.1 --- Issues for Abstraction --- p.80 / Chapter 5.2.2 --- Investigation on Typicality --- p.82 / Chapter 5.2.3 --- Typicality in Abstraction --- p.85 / Chapter 5.2.4 --- The TPA algorithm --- p.86 / Chapter 5.2.5 --- Analysis of TPA --- p.90 / Chapter 5.3 --- Filtering Component --- p.93 / Chapter 5.3.1 --- Investigation on Associate --- p.96 / Chapter 5.3.2 --- The RT2 Algorithm --- p.100 / Chapter 5.3.3 --- Analysis of RT2 --- p.101 / Chapter 5.4 --- Concept Integration --- p.103 / Chapter 5.4.1 --- The ICPL Algorithm --- p.104 / Chapter 5.4.2 --- Analysis of ICPL --- p.106 / Chapter 5.5 --- Empirical Analysis --- p.106 / Chapter 5.5.1 ---
Experimental Setup --- p.106 / Chapter 5.5.2 --- Results of ICPL Algorithm --- p.109 / Chapter 5.5.3 --- Comparisons with Pure Abstraction and Pure Filtering --- p.110 / Chapter 5.5.4 --- Comparisons with Other Approaches --- p.114 / Chapter 5.6 --- Time Complexity --- p.119 / Chapter 5.7 --- Summary --- p.120 / Chapter 6 --- Conclusions and Future Work --- p.122 / Chapter 6.1 --- Conclusions --- p.122 / Chapter 6.2 --- Future Work --- p.126 / Bibliography --- p.128 / Chapter A --- Detailed Information for Tested Data Sets --- p.136 / Chapter B --- Detailed Experimental Results for PGF --- p.138
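As one concrete instance of the instance-filtering family surveyed in chapter 2.3 (retaining border instances), Hart's classic condensed nearest neighbour rule keeps adding any training point that the current prototype set misclassifies under 1-NN. The sketch below uses invented toy data; it is a standard baseline of the surveyed family, not the thesis's RT2 or ICPL algorithms:

```python
import numpy as np

def condensed_nn(X, y, seed=0):
    """Hart's condensed nearest neighbour: keep a subset of the training
    set that still classifies every training point correctly under 1-NN."""
    order = np.random.default_rng(seed).permutation(len(X))
    keep = [int(order[0])]
    changed = True
    while changed:
        changed = False
        for i in order:
            # 1-NN prediction for point i from the current prototypes.
            nearest = keep[int(np.argmin(np.linalg.norm(X[keep] - X[i], axis=1)))]
            if y[nearest] != y[i]:          # misclassified: promote to prototype
                keep.append(int(i))
                changed = True
    return np.array(keep)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
y = np.array([0, 0, 0, 1, 1, 1])
idx = condensed_nn(X, y)
print(idx, y[idx])                           # a small prototype subset
```

Because retained points tend to sit near class boundaries, such filters shrink storage while keeping decision boundaries roughly intact, which is the trade-off the thesis's integrated methods aim to improve on.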
199.
Algorithmic properties of the bilinear transform. / Hein, David Nicholas. January 2010
Digitized by Kansas Correctional Industries
200.
Approches évolutionnaires pour la reconstruction de réseaux de régulation génétique par apprentissage de réseaux bayésiens / Learning Bayesian networks with evolutionary approaches for the reverse-engineering of gene regulatory networks. / Auliac, Cédric. 24 September 2008
Many cellular functions are carried out through the coordinated interaction of several genes. Identifying the graph of these interactions, called a gene regulatory network, from gene expression data is one of the major goals of systems biology. In this thesis we address this problem by modelling the relations between genes with a Bayesian network, which raises the question of learning the structure of this type of model from what are generally small datasets. To solve it, we search among all possible models for the simplest model that best explains the data. To this end, we introduce and study several kinds of genetic algorithms for exploring the space of models, paying particular attention to speciation methods: by encouraging diversity among the candidate solutions, these prevent the algorithm from converging too quickly to local optima. These genetic algorithms are compared with various Bayesian network structure learning methods classically used in the literature, highlighting the relevance of evolutionary approaches for learning these interaction graphs. Finally, we compare them with an alternative class of evolutionary algorithms that proves particularly promising: estimation of distribution algorithms. All these algorithms are tested and compared on a 35-node model of the insulin regulation network, from which we draw synthetic datasets of modest size. / Inferring gene regulatory networks from data requires the development of algorithms devoted to structure extraction. When only static data are available, gene interactions may be modelled by a Bayesian network that represents the presence of direct interactions from regulators to regulees by conditional probability distributions. In this work, we used enhanced evolutionary algorithms to stochastically evolve a set of candidate Bayesian network structures and found the model that best fits data without prior knowledge. We proposed various evolutionary strategies suitable for the task and tested our choices using simulated data drawn from a given bio-realistic network of 35 nodes, the so-called insulin network, which has been used in the literature for benchmarking. We introduced a niching strategy that reinforces diversity through the population and avoided trapping of the algorithm in one local minimum in the early steps of learning. We compared our best evolutionary approach with various well-known learning algorithms (MCMC, K2, greedy search, TPDA, MMHC) devoted to Bayesian network structure learning. Then, we compared our best genetic algorithm with another class of evolutionary algorithms: estimation of distribution algorithms. We show that an evolutionary approach enhanced by niching outperforms classical structure learning methods in elucidating the original model. Finally, it appears that estimation of distribution algorithms are a promising approach to extend this work. These results were obtained for the learning of a bio-realistic network and, more importantly, on various small datasets.
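For orientation, the kind of search both abstracts describe can be sketched in a few dozen lines: individuals are DAG adjacency matrices, fitness is a BIC score on the data, and a crude sharing penalty stands in for the speciation/niching strategies studied in the thesis. Everything below (names, operators, parameters, and the toy three-gene dataset) is an invented illustration, not the author's implementation:

```python
import numpy as np

def is_dag(adj):
    # Kahn's algorithm: the graph is a DAG iff every node can be peeled off.
    n = len(adj)
    indeg = adj.sum(axis=0).tolist()
    stack = [j for j in range(n) if indeg[j] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in range(n):
            if adj[u][v]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    stack.append(v)
    return seen == n

def bic_score(adj, data):
    # BIC of a DAG (adj[i, j] = 1 means i -> j) on binary data with ML CPTs.
    n_samples, n = data.shape
    score = 0.0
    for j in range(n):
        parents = [i for i in range(n) if adj[i][j]]
        counts = {}
        for row in data:                    # group samples by parent config
            c = counts.setdefault(tuple(row[parents]), [0, 0])
            c[row[j]] += 1
        for c0, c1 in counts.values():
            for c in (c0, c1):
                if c:
                    score += c * np.log(c / (c0 + c1))
        score -= 0.5 * np.log(n_samples) * (2 ** len(parents))  # free params
    return score

def mutate(adj, rng):
    a = adj.copy()
    i, j = rng.integers(0, len(a), 2)
    if i != j:
        a[i, j] ^= 1                        # add or delete one edge
    return a if is_dag(a) else adj          # reject mutations creating cycles

def niche_penalty(pop, me, radius=3, alpha=5.0):
    # Crude fitness sharing: penalize crowded regions of structure space.
    return alpha * (sum(1 for q in pop if np.sum(q != me) <= radius) - 1)

def evolve(data, pop_size=30, gens=100, seed=0):
    rng = np.random.default_rng(seed)
    n = data.shape[1]
    pop = [np.zeros((n, n), dtype=int) for _ in range(pop_size)]
    for _ in range(gens):
        fit = [bic_score(a, data) - niche_penalty(pop, a) for a in pop]
        # Binary tournament selection followed by mutation.
        pop = [mutate(pop[i] if fit[i] >= fit[j] else pop[j], rng)
               for i, j in rng.integers(0, pop_size, (pop_size, 2))]
    return max(pop, key=lambda a: bic_score(a, data))

rng = np.random.default_rng(1)                      # toy 3-gene dataset
g0 = rng.integers(0, 2, 200)
g1 = (g0 ^ (rng.random(200) < 0.1)).astype(int)     # g1: noisy copy of g0
g2 = (g0 | g1).astype(int)                          # g2: OR of g0 and g1
print(evolve(np.stack([g0, g1, g2], axis=1)))
```

The sharing penalty lowers the fitness of individuals with many near-identical neighbours, so the population keeps exploring several structural niches instead of collapsing onto one local optimum early, which is the effect the thesis attributes to its speciation methods.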