451

Synergistic use of promoter prediction algorithms: a choice of small training dataset?

Oppon, Ekow CruickShank January 2000 (has links)
Promoter detection, especially in prokaryotes, has always been an uphill task and may remain so, because of the many varieties of sigma factors employed by various organisms in transcription. The situation is made more complex by the fact that any seemingly unimportant sequence segment may be turned into a promoter sequence by an activator or repressor (if the actual promoter sequence is made unavailable). Nevertheless, a computational approach to promoter detection has to be pursued for a number of reasons. The obvious one is the long and tedious process involved in elucidating promoters in the 'wet' laboratory, not to mention the financial cost of such endeavors. Promoter detection/prediction in an organism with few characterized promoters (M. tuberculosis), as envisaged at the beginning of this work, was never going to be easy. Even for the few known Mycobacterial promoters, most of the sigma factors associated with their transcription were not known. Had that promoter-sigma information been available, the research would have focused on categorizing the promoters according to sigma factor and training the methods on the respective categories, assuming there would be enough training data for each category. Most promoter detection/prediction studies have been carried out on E. coli because of the availability of a substantial set of experimentally characterized promoters (approximately 310). Even then, no researcher to date has extended the research to the entire E. coli genome.
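As a concrete illustration of the kind of statistical model promoter predictors train on small datasets, here is a minimal position weight matrix (PWM) scorer. This is a generic sketch, not code or data from the thesis; the aligned sites below are toy examples loosely resembling E. coli sigma-70 "-10 box" sequences.

import math

def make_pwm(aligned_sites, pseudocount=1.0):
    """Estimate a log-odds PWM from equal-length known promoter sites."""
    bases = "ACGT"
    pwm = []
    for i in range(len(aligned_sites[0])):
        counts = {b: pseudocount for b in bases}
        for site in aligned_sites:
            counts[site[i]] += 1
        total = sum(counts.values())
        # log-odds against a uniform 0.25 background
        pwm.append({b: math.log((counts[b] / total) / 0.25) for b in bases})
    return pwm

def scan(sequence, pwm):
    """Return (position, score) of the best-scoring window."""
    w = len(pwm)
    best = (0, float("-inf"))
    for pos in range(len(sequence) - w + 1):
        score = sum(pwm[i][sequence[pos + i]] for i in range(w))
        if score > best[1]:
            best = (pos, score)
    return best

# Toy usage: six aligned -10-box-like sites, then scan a candidate region.
sites = ["TATAAT", "TATGAT", "TAAAAT", "TATACT", "TACAAT", "TATAGT"]
print(scan("GGCGCTATAATGCCG", make_pwm(sites)))  # best window at position 5

With only a handful of training sites, the pseudocount keeps unseen bases from scoring minus infinity, which is exactly the small-training-set concern the thesis raises.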
452

Study of Parallel Algorithms Related to Subsequence Problems on the Sequent Multiprocessor System

Pothuru, Surendra 08 1900 (has links)
The primary purpose of this work is to study, implement, and analyze the performance of parallel algorithms related to subsequence problems. The problems include the string-to-string correction problem, the longest common subsequence (LCS) problem, the sum-range-product problem, 1-D pattern matching, and the longest non-decreasing (non-increasing) subsequence (LNS) and maximum positive subsequence (MPS) problems. The work also includes studying the techniques and issues involved in developing parallel applications. These algorithms are implemented on the Sequent Multiprocessor System. The subsequence problems are defined, along with the performance metrics that are utilized. The sequential and parallel algorithms are summarized, and the implementation issues that arise in the process of developing parallel applications are identified and studied.
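For context on one of the problems named above, this is the classic sequential dynamic program for LCS, not code from the thesis. A parallel implementation would typically sweep the table in anti-diagonal wavefronts, since all cells on one anti-diagonal are mutually independent.

def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # -> 4 ("GTAB")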
453

Design and implementation of efficient routing protocols in delay tolerant networks

Unknown Date (has links)
Delay tolerant networks (DTNs) are occasionally connected networks that may suffer from frequent partitions. DTNs provide service despite long end-to-end delays or infrequent connectivity. One fundamental problem in DTNs is routing messages from their source to their destination. DTNs differ from the Internet in that disconnections are the norm instead of the exception. Representative DTNs include sensor-based networks using scheduled intermittent connectivity, terrestrial wireless networks that cannot ordinarily maintain end-to-end connectivity, satellite networks with moderate delays and periodic connectivity, underwater acoustic networks with moderate delays and frequent interruptions due to environmental factors, and vehicular networks with cyclic but nondeterministic connectivity. The focus of this dissertation is on routing protocols that send messages in DTNs. When no connected path exists between the source and the destination of the message, other nodes may relay the message to the destination. This dissertation covers routing protocols in DTNs with both deterministic and non-deterministic mobility. For DTNs with deterministic and cyclic mobility, we propose the first routing protocol that is both scalable and guarantees delivery. For DTNs with non-deterministic mobility, numerous heuristic protocols have been proposed to improve routing performance, but none of them provides a theoretical optimization of a particular performance measure. In this dissertation, two routing protocols for non-deterministic DTNs are proposed, one minimizing delay and the other maximizing delivery rate, each for a different scenario. First, in DTNs with non-deterministic and cyclic mobility, an optimal single-copy forwarding protocol that minimizes delay is proposed. / Second, in DTNs with non-deterministic mobility, an optimal multi-copy forwarding protocol is proposed, which maximizes delivery rate under the constraint that the number of copies per message is fixed. Simulation evaluations using both real and synthetic traces are conducted to compare the proposed protocols with existing ones. / by Cong Liu. / Vita. / Thesis (Ph.D.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
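The single-copy idea can be illustrated with a toy forwarding rule; the names and numbers here are hypothetical and this is not the dissertation's protocol. The node currently holding the only copy of a message hands it over exactly when an encountered node has a strictly smaller estimated delivery delay to the destination.

def should_forward(expected_delay, holder, candidate, destination):
    """expected_delay[(node, dest)] -> estimated delivery delay for that node.

    Forward the single copy only when the candidate is strictly better,
    so the message's expected delay decreases monotonically hop by hop.
    """
    return expected_delay[(candidate, destination)] < expected_delay[(holder, destination)]

# Toy usage with made-up estimates (e.g., learned from cyclic contact history).
delays = {("A", "D"): 40.0, ("B", "D"): 15.0}
print(should_forward(delays, "A", "B", "D"))  # True: hand the copy to B

The hard part, which such a sketch hides, is computing the expected-delay estimates so that this greedy handoff is actually optimal.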
454

Probabilistic predictor-based routing in disruption-tolerant networks

Unknown Date (has links)
Disruption-Tolerant Networks (DTNs) are networks comprised of a set of wireless nodes; they experience unstable connectivity and frequent connection disruption because of the limitations of radio range, power, network density, device failure, and noise. DTNs are characterized by their lack of infrastructure, device limitations, and intermittent connectivity. Such characteristics make conventional wireless network routing protocols fail, as they are designed with the assumption that the network stays connected. Thus, routing in DTNs becomes a challenging problem, due to the temporal scheduling element in a dynamic topology. One class of solutions is prediction-based, where node mobility is estimated from a history of observations, and forwarding decisions during data delivery are then made with that predicted information. Current prediction-based routing protocols can be divided into two sub-categories according to whether they are probability-based: probabilistic and non-probabilistic. This dissertation focuses on probabilistic prediction-based (PPB) routing schemes in DTNs. We find that most of these protocols are designed for a specific topology or scenario, so almost every protocol has drawbacks when applied to a different scenario. Because every scenario has its own particular features, there can hardly exist a universal protocol that suits all DTN scenarios. Motivated by this, we investigate and divide current DTN scenarios into three categories: Voronoi-based, landmark-based, and random moving DTNs. For each category, we design and implement a corresponding PPB routing protocol, for either basic routing or a specific application, taking its unique features into account. / Specifically, we introduce a Predict and Relay routing protocol for Voronoi-based DTNs, present a single-copy and a multi-copy PPB routing protocol for landmark-based DTNs, and propose DRIP, a dynamic Voronoi region-based publish/subscribe protocol, to adapt publish/subscribe systems to random moving DTNs. New concepts, approaches, and algorithms are introduced throughout this work. / by Quan Yuan. / Vita. / Thesis (Ph.D.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
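As an example of the PPB family the dissertation surveys (not the protocols it proposes), here is a sketch of the delivery-predictability bookkeeping used by PRoPHET (Lindgren et al.), with its commonly cited default parameters. A node forwards a message to an encountered peer whose predictability for the destination is higher than its own.

P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25  # commonly cited PRoPHET defaults

def on_encounter(P, a, b):
    """Reinforce a's delivery predictability for b when they meet."""
    old = P.get((a, b), 0.0)
    P[(a, b)] = old + (1.0 - old) * P_INIT

def age(P, a, elapsed_units):
    """Decay all of a's predictabilities as time passes without contact."""
    for key in [k for k in P if k[0] == a]:
        P[key] *= GAMMA ** elapsed_units

def transitivity(P, a, b, dest):
    """If a meets b, and b often meets dest, a inherits some predictability."""
    old = P.get((a, dest), 0.0)
    P[(a, dest)] = old + (1.0 - old) * P.get((a, b), 0.0) * P.get((b, dest), 0.0) * BETA

P = {}
on_encounter(P, "a", "b"); on_encounter(P, "b", "d")
transitivity(P, "a", "b", "d")
print(P)  # a now has a small predicted chance of delivering to d via b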
455

Implementation and comparison of the Golay and first order Reed-Muller codes

Unknown Date (has links)
In this project we perform data transmission across noisy channels and recover the message, first by using the Golay code and then by using the first-order Reed-Muller code. The main objective of this thesis is to determine which of these two codes is more efficient for text message transmission, by applying both codes to exactly the same data with the same channel bit-error probabilities. We compare the error-correcting capability and the practical speed of the Golay code and the first-order Reed-Muller code to meet this goal. / by Olga Shukina. / Thesis (M.S.)--Florida Atlantic University, 2013. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
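To make the comparison concrete, here is a minimal sketch (not the thesis code) of first-order Reed-Muller encoding with brute-force nearest-codeword decoding for RM(1,3): length 8, dimension 4, minimum distance 4, so it corrects any single bit error. A practical decoder would use the fast Hadamard transform instead of exhaustive search.

from itertools import product

M = 3  # RM(1,3)

def encode(bits):
    """bits = (a0, a1, ..., am): evaluate a0 + a1*x1 + ... + am*xm over F_2^m."""
    a0, rest = bits[0], bits[1:]
    return tuple((a0 + sum(a * x for a, x in zip(rest, pt))) % 2
                 for pt in product((0, 1), repeat=M))

CODEBOOK = {msg: encode(msg) for msg in product((0, 1), repeat=M + 1)}

def decode(received):
    """Return the message whose codeword is nearest in Hamming distance."""
    return min(CODEBOOK,
               key=lambda m: sum(r != c for r, c in zip(received, CODEBOOK[m])))

msg = (1, 0, 1, 1)
word = list(encode(msg))
word[5] ^= 1                       # flip one bit to simulate channel noise
print(decode(tuple(word)) == msg)  # True: the single error is corrected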
456

A utility-based routing scheme in multi-hop wireless networks

Unknown Date (has links)
Multi-hop wireless networks are infrastructure-less networks consisting of mobile or stationary wireless devices; they include multi-hop wireless mesh networks and multi-hop wireless sensor networks. These networks are characterized by limited bandwidth and energy resources, unreliable communication, and a lack of central control, and these characteristics lead to the research challenges of multi-hop wireless networks. Building routing schemes with a good balance among the routing QoS metrics (such as reliability, cost, and delay) is a paramount concern for achieving high-performance wireless networks. These QoS metrics are internally correlated, and most existing work has not fully utilized this correlation. We design a metric to balance the trade-off between reliability and cost, and build a framework of utility-based routing models for multi-hop wireless networks. This dissertation focuses on variations and applications of utility-based routing models, designing new concepts and developing new algorithms for them. A review of existing routing algorithms and the basic utility-based routing model for multi-hop wireless networks is provided at the beginning. An efficient algorithm, called MaxUtility, is proposed for the basic utility-based routing model. MaxUtility is an optimal algorithm that can find the best routing path with the maximum expected utility. / Various utility-based routing models are extended to further enhance routing reliability while reducing routing overhead. Besides computing the optimal path for a given benefit value and a given source-destination pair, utility-based routing can be further extended to compute all optimal paths for all possible benefit values and/or all source-destination pairs. Our utility-based routing can also adapt to different applications and various environments. In a self-organized environment where network users are selfish, we design a truthful routing scheme, in which selfish users have to tell the truth in order to maximize their utilities. We apply our utility-based routing scheme to data-gathering wireless sensor networks, where a routing scheme is required to transmit data sensed by multiple sensor nodes to a common sink node. / by Mingming Lu. / Vita. / University Library's copy lacks signatures of Supervisory Committee. / Thesis (Ph.D.)--Florida Atlantic University, 2008. / Includes bibliography. / Electronic reproduction. Boca Raton, FL: 2008. Mode of access: World Wide Web.
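One simplified reading of such an expected-utility metric (a sketch of the general idea, not necessarily the dissertation's exact formulation): a path earns the benefit v only if every link succeeds, and each link's cost is paid only if the message actually reaches that hop.

def expected_utility(v, links):
    """links: list of (reliability, cost) per hop, in path order."""
    reach = 1.0          # probability the message reaches the current hop
    exp_cost = 0.0
    for p, c in links:
        exp_cost += reach * c   # pay this hop's cost only if we got here
        reach *= p              # the link succeeds with probability p
    return v * reach - exp_cost

# Toy comparison: a cheap unreliable path vs. a costlier reliable one.
print(expected_utility(100, [(0.7, 2), (0.7, 2)]))  # 45.6
print(expected_utility(100, [(0.9, 5), (0.9, 5)]))  # 71.5

This shows why reliability and cost cannot be optimized separately: the costlier path wins because the benefit is discounted by the product of link reliabilities.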
457

Locally connected recurrent neural networks.

January 1993 (has links)
by Evan, Fung-yu Young. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 161-166). / Contents (chapter level): Part I, Learning Algorithms: 1. Representing Time in Connectionist Models; 2. Learning Algorithms for Recurrent Neural Networks; 3. Locally Connected Recurrent Networks. Part II, Applications: 4. Sequence Recognition by Ring-Structured Recurrent Networks; 5. Time Series Prediction; 6. Chaos in Recurrent Networks; 7. Conclusion. Appendices cover the experimental data series (a quasiperiodic sine series with white noise, the logistic map, sunspot numbers from 1700 to 1979, and the Hang Seng daily closing index in 1991) and the corresponding network models, including the Henon and Lorenz maps.
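For orientation on the recurrent architectures this thesis studies, here is a minimal Elman-style recurrent step, in which the hidden state is fed back as context on the next time step. The dimensions and random initialization are illustrative only; the thesis's locally connected and ring-structured topologies restrict which recurrent weights exist.

import math, random

random.seed(0)
N_IN, N_HID = 2, 4
W_in  = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)]  for _ in range(N_HID)]
W_ctx = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_HID)]

def step(x, h_prev):
    """One recurrent update: h_t = sigmoid(W_in x_t + W_ctx h_{t-1})."""
    h = []
    for j in range(N_HID):
        s = (sum(W_in[j][i] * x[i] for i in range(N_IN))
             + sum(W_ctx[j][k] * h_prev[k] for k in range(N_HID)))
        h.append(1.0 / (1.0 + math.exp(-s)))
    return h

h = [0.0] * N_HID
for x in [[1, 0], [0, 1], [1, 1]]:   # a short input sequence
    h = step(x, h)
print(h)  # final hidden state summarizes the whole sequence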
458

On the training of feedforward neural networks.

January 1993 (has links)
by Hau-san Wong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 178-183). / Contents (chapter level): 1. Introduction; 2. The Feedforward Multilayer Neural Network; 3. Solutions to the BP Learning Problem; 4. The Growth Algorithm for Neural Networks; 5. Knowledge Representation in Neural Networks: the Temporal Vector (T-Vector) Approach; 6. The Deterministic Training Algorithm for Neural Networks; 7. The Generalization Measure Monitoring Scheme; 8. The Estimation of the Initial Hidden Layer Size; 9. Conclusion.
459

Applications and implementation of neuro-connectionist architectures.

January 1996 (has links)
by H.S. Ng. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 91-97). / Contents (chapter level): 1. Introduction; 2. Related Works (Kruskal's, Prim's, Sollin's, Bellman-Ford, and Floyd-Warshall algorithms); 3. Binary Relation Inference Network and Path Problems; 4. Analog and Voltage-mode Approach; 5. Current-mode Approach; 6. Neural Network Compensation for Optimization Circuit; 7. Precision-limited Analog Neural Network Compensation; 8. Transitive Closure Problems; 9. Critical Path Problems; 10. Conclusions.
460

Semi-automatic grammar induction for bidirectional machine translation.

January 2002 (has links)
Wong, Chin Chung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 137-143). / Abstracts in English and Chinese. / Contents (chapter level): 1. Introduction; 2. Background in Natural Language Understanding; 3. Semi-automatic Grammar Induction: Baseline Approach; 4. Semi-automatic Grammar Induction: Enhanced Approach; 5. Bidirectional Machine Translation using Induced Grammars: Baseline Approach; 6. Bidirectional Machine Translation: Enhanced Approach; 7. Conclusions. Appendices cover the original SQL queries, seeded categories, alignment categories, and labels of syntactic structures in the grammar checker.
