591
Restoration network design and neural network. January 1992
by Leung Lee. / Thesis (M.Sc.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references. / Chapter SECTION 1. --- Introduction --- p.1 / Chapter SECTION 2. --- Formulation of Problem --- p.2 / Chapter 2.1 --- Problem Identification --- p.2 / Chapter 2.2 --- Network Planning Parameters and Assumptions --- p.3 / Chapter 2.3 --- Neural Network Model Transformation --- p.5 / Chapter 2.4 --- Algorithm and Implementation --- p.12 / Chapter SECTION 3. --- Simulation Results --- p.15 / Chapter 3.1 --- All Link Costs Are the Same or Nearly the Same --- p.17 / Chapter 3.2 --- Fluctuating Cost in One or Two Fibre Paths --- p.18 / Chapter 3.3 --- Sudden Traffic Demand Change in the Last Season --- p.19 / Chapter SECTION 4. --- Discussion --- p.20 / Chapter SECTION 5. --- Conclusion --- p.26 / GLOSSARY OF TERMS --- p.27 / BIBLIOGRAPHY --- p.29 / APPENDIX --- p.A1 / Chapter A --- Simulation Results --- p.A1 / Chapter B --- ANN Traffic Routing Example --- p.B1
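The record above gives only the table of contents, but Section 2.3 ("Neural Network Model Transformation") names the classical recipe: recast a constrained planning choice as the energy function of a Hopfield-style network and let threshold units settle. The sketch below is a minimal illustration of that recipe on an invented one-of-n fibre-path choice; the costs, penalty weight A and update schedule are assumptions, not the thesis's actual model.

```python
import random

# Pick exactly one of n candidate fibre paths at minimum cost:
#   E = sum(c_i * x_i) + A * (sum(x_i) - 1)^2
# which, for binary x_i, expands into Hopfield form E = -1/2 x'Wx - b'x.
costs = [4.0, 2.5, 3.0, 5.0]        # hypothetical per-path costs
A = 10.0                            # penalty weight, chosen > max(costs)
n = len(costs)
W = [[0.0 if i == j else -2.0 * A for j in range(n)] for i in range(n)]
b = [A - c for c in costs]

random.seed(1)
x = [0] * n
for _ in range(50):                 # asynchronous threshold updates
    i = random.randrange(n)
    net = sum(W[i][j] * x[j] for j in range(n)) + b[i]
    x[i] = 1 if net > 0 else 0
print(x)                            # settles with exactly one path selected
```

Such a network settles into a local minimum of E, so the selected path is feasible but not guaranteed cheapest; the next record studies exactly that limitation.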
592
Recurrent neural network for optimization with application to computer vision. January 1993
by Cheung Kwok-wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves [146-154]). / Chapter Chapter 1 --- Introduction / Chapter 1.1 --- Programmed computing vs. neurocomputing --- p.1-1 / Chapter 1.2 --- Development of neural networks - feedforward and feedback models --- p.1-2 / Chapter 1.3 --- State of the art of applying recurrent neural networks to computer vision problems --- p.1-3 / Chapter 1.4 --- Objective of the Research --- p.1-6 / Chapter 1.5 --- Plan of the thesis --- p.1-7 / Chapter Chapter 2 --- Background / Chapter 2.1 --- Short history on development of Hopfield-like neural network --- p.2-1 / Chapter 2.2 --- Hopfield network model --- p.2-3 / Chapter 2.2.1 --- Neuron's transfer function --- p.2-3 / Chapter 2.2.2 --- Updating sequence --- p.2-6 / Chapter 2.3 --- Hopfield energy function and network convergence properties --- p.2-1 / Chapter 2.4 --- Generalized Hopfield network --- p.2-13 / Chapter 2.4.1 --- Network order and generalized Hopfield network --- p.2-13 / Chapter 2.4.2 --- Associated energy function and network convergence property --- p.2-13 / Chapter 2.4.3 --- Hardware implementation consideration --- p.2-15 / Chapter Chapter 3 --- Recurrent neural network for optimization / Chapter 3.1 --- Mapping to Neural Network formulation --- p.3-1 / Chapter 3.2 --- Network stability versus Self-reinforcement --- p.3-5 / Chapter 3.2.1 --- Quadratic problem and Hopfield network --- p.3-6 / Chapter 3.2.2 --- Higher-order case and reshaping strategy --- p.3-8 / Chapter 3.2.3 --- Numerical Example --- p.3-10 / Chapter 3.3 --- Local minimum limitation and existing solutions in the literature --- p.3-12 / Chapter 3.3.1 --- Simulated Annealing --- p.3-13 / Chapter 3.3.2 --- Mean Field Annealing --- p.3-15 / Chapter 3.3.3 --- Adaptively changing neural network --- p.3-16 / Chapter 3.3.4 --- Correcting Current Method --- p.3-16 / Chapter 3.4 --- Conclusions --- p.3-17 / Chapter Chapter 4 --- A Novel Neural Network for Global Optimization - Tunneling Network / Chapter 4.1 --- Tunneling Algorithm --- p.4-1 / Chapter 4.1.1 --- Description of Tunneling Algorithm --- p.4-1 / Chapter 4.1.2 --- Tunneling Phase --- p.4-2 / Chapter 4.2 --- A Neural Network with tunneling capability - Tunneling network --- p.4-8 / Chapter 4.2.1 --- Network Specifications --- p.4-8 / Chapter 4.2.2 --- Tunneling function for Hopfield network and the corresponding updating rule --- p.4-9 / Chapter 4.3 --- Tunneling network stability and global convergence property --- p.4-12 / Chapter 4.3.1 --- Tunneling network stability --- p.4-12 / Chapter 4.3.2 --- Global convergence property --- p.4-15 / Chapter 4.3.2.1 --- Markov chain model for Hopfield network --- p.4-15 / Chapter 4.3.2.2 --- Classification of the Hopfield Markov chain --- p.4-16 / Chapter 4.3.2.3 --- Markov chain model for tunneling network and its convergence towards global minimum --- p.4-18 / Chapter 4.3.3 --- Variation of pole strength and its effect --- p.4-20 / Chapter 4.3.3.1 --- Energy Profile analysis --- p.4-21 / Chapter 4.3.3.2 --- Size of attractive basin and pole strength required --- p.4-24 / Chapter 4.3.3.3 --- A new type of pole eases the implementation problem --- p.4-30 / Chapter 4.4 --- Simulation Results and Performance comparison --- p.4-31 / Chapter 4.4.1 --- Simulation Experiments --- p.4-32 / Chapter 4.4.2 --- Simulation Results and Discussions --- p.4-37 / Chapter 4.4.2.1 --- Comparisons on optimal path obtained and the convergence rate --- p.4-37 / Chapter 4.4.2.2 --- On decomposition of Tunneling network --- p.4-38 / Chapter 4.5 --- Suggested hardware implementation of Tunneling network --- p.4-48 / Chapter 4.5.1 --- Tunneling network hardware implementation --- p.4-48 / Chapter 4.5.2 --- Alternative implementation theory --- p.4-52 / Chapter 4.6 --- Conclusions --- p.4-54 / Chapter Chapter 5 --- Recurrent Neural Network for Gaussian Filtering / Chapter 5.1 --- Introduction --- p.5-1 / Chapter 5.1.1 --- Silicon Retina --- p.5-3 / Chapter 5.1.2 --- An Active Resistor Network for Gaussian Filtering of Image --- p.5-5 / Chapter 5.1.3 --- Motivations of using recurrent neural network --- p.5-7 / Chapter 5.1.4 --- Difference between the active resistor network model and recurrent neural network model for Gaussian filtering --- p.5-8 / Chapter 5.2 --- From Problem formulation to Neural Network formulation --- p.5-9 / Chapter 5.2.1 --- One Dimensional Case --- p.5-9 / Chapter 5.2.2 --- Two Dimensional Case --- p.5-13 / Chapter 5.3 --- Simulation Results and Discussions --- p.5-14 / Chapter 5.3.1 --- Spatial impulse response of the 1-D network --- p.5-14 / Chapter 5.3.2 --- Filtering property of the 1-D network --- p.5-14 / Chapter 5.3.3 --- Spatial impulse response of the 2-D network and some filtering results --- p.5-15 / Chapter 5.4 --- Conclusions --- p.5-16 / Chapter Chapter 6 --- Recurrent Neural Network for Boundary Detection / Chapter 6.1 --- Introduction --- p.6-1 / Chapter 6.2 --- From Problem formulation to Neural Network formulation --- p.6-3 / Chapter 6.2.1 --- Problem Formulation --- p.6-3 / Chapter 6.2.2 --- Recurrent Neural Network Model used --- p.6-4 / Chapter 6.2.3 --- Neural Network formulation --- p.6-5 / Chapter 6.3 --- Simulation Results and Discussions --- p.6-7 / Chapter 6.3.1 --- Feasibility study and Performance comparison --- p.6-7 / Chapter 6.3.2 --- Smoothing and Boundary Detection --- p.6-9 / Chapter 6.3.3 --- Convergence improvement by network decomposition --- p.6-10 / Chapter 6.3.4 --- Hardware implementation consideration --- p.6-10 / Chapter 6.4 --- Conclusions --- p.6-11 / Chapter Chapter 7 --- Conclusions and Future Research / Chapter 7.1 --- Contributions and Conclusions --- p.7-1 / Chapter 7.2 --- Limitations and Suggested Future Research --- p.7-3 / References --- p.R-1 / Appendix I The assignment of the boundary connection of 2-D recurrent neural network for Gaussian filtering --- p.A1-1 / Appendix II Formula for connection weight assignment of 2-D recurrent neural network for Gaussian filtering and the proof on symmetric property --- p.A2-1 / Appendix III Details on reshaping strategy --- p.A3-1
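Chapters 2-4 of this record trace the standard Hopfield story: a quadratic energy that decreases monotonically under asynchronous updates, the resulting local-minimum problem, and a tunneling mechanism for escaping those minima. Since the record gives no equations, the sketch below shows only the generic tunneling idea on a one-dimensional toy objective; the function f, the exponent lam and the grid scan are illustrative stand-ins, whereas the thesis realizes the tunneling phase inside the network dynamics with a "pole" placed at the last minimum (Section 4.3.3 studies its strength).

```python
def descend(f, x, step=1e-3, iters=20000, eps=1e-5):
    # plain gradient descent with a numerical derivative
    for _ in range(iters):
        g = (f(x + eps) - f(x - eps)) / (2 * eps)
        x -= step * g
    return x

def tunnel(f, x_star, lam=0.5, span=5.0, grid=2001):
    # tunneling function T(x) = (f(x) - f(x*)) / |x - x*|**lam is
    # negative exactly where f drops below the best minimum found
    f_star = f(x_star)
    for k in range(grid):
        x = x_star - span + 2.0 * span * k / (grid - 1)
        d = abs(x - x_star)
        if d > 1e-6 and (f(x) - f_star) / d**lam < 0:
            return x                 # a point beyond the energy barrier
    return None                      # accept x_star as the global minimum

f = lambda x: x**4 - 4.0 * x**2 + x  # two basins, global minimum near -1.47
x = descend(f, 1.0)                  # first descent is trapped near 1.34
restart = tunnel(f, x)
if restart is not None:
    x = descend(f, restart)          # second descent reaches the deep basin
print(round(x, 2))                   # about -1.47
```

Descent from x = 1 is trapped in the shallow basin near 1.34; one tunneling step restarts it beyond the barrier, and the second descent reaches the global minimum near -1.47.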
593
On implementation and applications of the adaptive-network-based fuzzy inference system. January 1994
by Ong Kai Hin George. / Thesis (M.Sc.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves [102-104]).
594
On the synthesis of fuzzy neural systems. January 1995
by Chung, Fu Lai. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 166-174). / ACKNOWLEDGEMENT --- p.iii / ABSTRACT --- p.iv / Chapter 1. --- Introduction --- p.1 / Chapter 1.1 --- Integration of Fuzzy Systems and Neural Networks --- p.1 / Chapter 1.2 --- Objectives of the Research --- p.7 / Chapter 1.2.1 --- Fuzzification of Competitive Learning Algorithms --- p.7 / Chapter 1.2.2 --- Capacity Analysis of FAM and FRNS Models --- p.8 / Chapter 1.2.3 --- Structure and Parameter Identifications of FRNS --- p.9 / Chapter 1.3 --- Outline of the Thesis --- p.9 / Chapter 2. --- A Fuzzy System Primer --- p.11 / Chapter 2.1 --- Basic Concepts of Fuzzy Sets --- p.11 / Chapter 2.2 --- Fuzzy Set-Theoretic Operators --- p.15 / Chapter 2.3 --- Linguistic Variable, Fuzzy Rule and Fuzzy Inference --- p.19 / Chapter 2.4 --- Basic Structure of a Fuzzy System --- p.22 / Chapter 2.4.1 --- Fuzzifier --- p.22 / Chapter 2.4.2 --- Fuzzy Knowledge Base --- p.23 / Chapter 2.4.3 --- Fuzzy Inference Engine --- p.24 / Chapter 2.4.4 --- Defuzzifier --- p.28 / Chapter 2.5 --- Concluding Remarks --- p.29 / Chapter 3. --- Categories of Fuzzy Neural Systems --- p.30 / Chapter 3.1 --- Introduction --- p.30 / Chapter 3.2 --- Fuzzification of Neural Networks --- p.31 / Chapter 3.2.1 --- Fuzzy Membership Driven Models --- p.32 / Chapter 3.2.2 --- Fuzzy Operator Driven Models --- p.34 / Chapter 3.2.3 --- Fuzzy Arithmetic Driven Models --- p.35 / Chapter 3.3 --- Layered Network Implementation of Fuzzy Systems --- p.36 / Chapter 3.3.1 --- Mamdani's Fuzzy Systems --- p.36 / Chapter 3.3.2 --- Takagi and Sugeno's Fuzzy Systems --- p.37 / Chapter 3.3.3 --- Fuzzy Relation Based Fuzzy Systems --- p.38 / Chapter 3.4 --- Concluding Remarks --- p.40 / Chapter 4. --- Fuzzification of Competitive Learning Networks --- p.42 / Chapter 4.1 --- Introduction --- p.42 / Chapter 4.2 --- Crisp Competitive Learning --- p.44 / Chapter 4.2.1 --- Unsupervised Competitive Learning Algorithm --- p.46 / Chapter 4.2.2 --- Learning Vector Quantization Algorithm --- p.48 / Chapter 4.2.3 --- Frequency Sensitive Competitive Learning Algorithm --- p.50 / Chapter 4.3 --- Fuzzy Competitive Learning --- p.50 / Chapter 4.3.1 --- Unsupervised Fuzzy Competitive Learning Algorithm --- p.53 / Chapter 4.3.2 --- Fuzzy Learning Vector Quantization Algorithm --- p.54 / Chapter 4.3.3 --- Fuzzy Frequency Sensitive Competitive Learning Algorithm --- p.58 / Chapter 4.4 --- Stability of Fuzzy Competitive Learning --- p.58 / Chapter 4.5 --- Controlling the Fuzziness of Fuzzy Competitive Learning --- p.60 / Chapter 4.6 --- Interpretations of Fuzzy Competitive Learning Networks --- p.61 / Chapter 4.7 --- Simulation Results --- p.64 / Chapter 4.7.1 --- Performance of Fuzzy Competitive Learning Algorithms --- p.64 / Chapter 4.7.2 --- Performance of Monotonically Decreasing Fuzziness Control Scheme --- p.74 / Chapter 4.7.3 --- Interpretation of Trained Networks --- p.76 / Chapter 4.8 --- Concluding Remarks --- p.80 / Chapter 5. --- Capacity Analysis of Fuzzy Associative Memories --- p.82 / Chapter 5.1 --- Introduction --- p.82 / Chapter 5.2 --- Fuzzy Associative Memories (FAMs) --- p.83 / Chapter 5.3 --- Storing Multiple Rules in FAMs --- p.87 / Chapter 5.4 --- A High Capacity Encoding Scheme for FAMs --- p.90 / Chapter 5.5 --- Memory Capacity --- p.91 / Chapter 5.6 --- Rule Modification --- p.93 / Chapter 5.7 --- Inference Performance --- p.99 / Chapter 5.8 --- Concluding Remarks --- p.104 / Chapter 6. --- Capacity Analysis of Fuzzy Relational Neural Systems --- p.105 / Chapter 6.1 --- Introduction --- p.105 / Chapter 6.2 --- Fuzzy Relational Equations and Fuzzy Relational Neural Systems --- p.107 / Chapter 6.3 --- Solving a System of Fuzzy Relational Equations --- p.109 / Chapter 6.4 --- New Solvable Conditions --- p.112 / Chapter 6.4.1 --- Max-t Fuzzy Relational Equations --- p.112 / Chapter 6.4.2 --- Min-s Fuzzy Relational Equations --- p.117 / Chapter 6.5 --- Approximate Resolution --- p.119 / Chapter 6.6 --- System Capacity --- p.123 / Chapter 6.7 --- Inference Performance --- p.125 / Chapter 6.8 --- Concluding Remarks --- p.127 / Chapter 7. --- Structure and Parameter Identifications of Fuzzy Relational Neural Systems --- p.129 / Chapter 7.1 --- Introduction --- p.129 / Chapter 7.2 --- Modelling Nonlinear Dynamic Systems by Fuzzy Relational Equations --- p.131 / Chapter 7.3 --- A General FRNS Identification Algorithm --- p.138 / Chapter 7.4 --- An Evolutionary Computation Approach to Structure and Parameter Identifications --- p.139 / Chapter 7.4.1 --- Guided Evolutionary Simulated Annealing --- p.140 / Chapter 7.4.2 --- An Evolutionary Identification (EVIDENT) Algorithm --- p.143 / Chapter 7.5 --- Simulation Results --- p.146 / Chapter 7.6 --- Concluding Remarks --- p.158 / Chapter 8. --- Conclusions --- p.159 / Chapter 8.1 --- Summary of Contributions --- p.160 / Chapter 8.1.1 --- Fuzzy Competitive Learning --- p.160 / Chapter 8.1.2 --- Capacity Analysis of FAM and FRNS --- p.160 / Chapter 8.1.3 --- Numerical Identification of FRNS --- p.161 / Chapter 8.2 --- Further Investigations --- p.162 / Appendix A Publication List of the Candidate --- p.164 / BIBLIOGRAPHY --- p.166
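Chapter 4's fuzzification keeps the competitive-learning loop but replaces the crisp winner-take-all update with an update of every prototype, weighted by its degree of membership. The record gives no formulas, so the sketch below assumes fuzzy c-means style memberships with a fuzziness exponent m; the data, rates and cluster count are invented, and the thesis's annealing of m (the "monotonically decreasing fuzziness control scheme" of Section 4.7.2) is omitted.

```python
import numpy as np

def memberships(x, protos, m=2.0):
    # FCM-style degree of membership of sample x in each prototype
    d = np.linalg.norm(protos - x, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

def fuzzy_competitive_step(x, protos, lr=0.05, m=2.0):
    # every prototype moves toward x, weighted by membership**m,
    # instead of a single crisp winner taking the whole update
    u = memberships(x, protos, m)
    protos += lr * (u[:, None] ** m) * (x - protos)

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.3, (100, 2)),   # two synthetic clusters
                  rng.normal(3.0, 0.3, (100, 2))])
protos = rng.normal(1.5, 0.5, (2, 2))
for _ in range(20):
    for x in rng.permutation(data):
        fuzzy_competitive_step(x, protos)
print(protos.round(2))   # prototypes settle near the two cluster centres
```

Crisp unsupervised competitive learning is recovered in the limit m -> 1, where the nearest prototype takes membership 1 and all others take 0.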
595
Approaches to the implementation of binary relation inference network. January 1994
by C.W. Tong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 96-98). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Availability of Parallel Processing Machines --- p.2 / Chapter 1.1.1 --- Neural Networks --- p.5 / Chapter 1.2 --- Parallel Processing in the Continuous-Time Domain --- p.6 / Chapter 1.3 --- Binary Relation Inference Network --- p.10 / Chapter 2 --- Binary Relation Inference Network --- p.12 / Chapter 2.1 --- Binary Relation Inference Network --- p.12 / Chapter 2.1.1 --- Network Structure --- p.14 / Chapter 2.2 --- Shortest Path Problem --- p.17 / Chapter 2.2.1 --- Problem Statement --- p.17 / Chapter 2.2.2 --- A Binary Relation Inference Network Solution --- p.18 / Chapter 3 --- A Binary Relation Inference Network Prototype --- p.21 / Chapter 3.1 --- The Prototype --- p.22 / Chapter 3.1.1 --- The Network --- p.22 / Chapter 3.1.2 --- Computational Element --- p.22 / Chapter 3.1.3 --- Network Response Time --- p.27 / Chapter 3.2 --- Improving Response --- p.29 / Chapter 3.2.1 --- Removing Feedback --- p.29 / Chapter 3.2.2 --- Selecting Minimum with Diodes --- p.30 / Chapter 3.3 --- Speeding Up the Network Response --- p.33 / Chapter 3.4 --- Conclusion --- p.35 / Chapter 4 --- VLSI Building Blocks --- p.36 / Chapter 4.1 --- The Site --- p.37 / Chapter 4.2 --- The Unit --- p.40 / Chapter 4.2.1 --- A Minimum Finding Circuit --- p.40 / Chapter 4.2.2 --- A Tri-state Comparator --- p.44 / Chapter 4.3 --- The Computational Element --- p.45 / Chapter 4.3.1 --- Network Performances --- p.46 / Chapter 4.4 --- Discussion --- p.47 / Chapter 5 --- A VLSI Chip --- p.48 / Chapter 5.1 --- Spatial Configuration --- p.49 / Chapter 5.2 --- Layout --- p.50 / Chapter 5.2.1 --- Computational Elements --- p.50 / Chapter 5.2.2 --- The Network --- p.52 / Chapter 5.2.3 --- I/O Requirements --- p.53 / Chapter 5.2.4 --- Optional Modules --- p.53 / Chapter 5.3 --- A Scalable Design --- p.54 / Chapter 6 --- The Inverse Shortest Paths Problem --- p.57 / Chapter 6.1 --- Problem Statement --- p.59 / Chapter 6.2 --- The Embedded Approach --- p.63 / Chapter 6.2.1 --- The Formulation --- p.63 / Chapter 6.2.2 --- The Algorithm --- p.65 / Chapter 6.3 --- Implementation Results --- p.66 / Chapter 6.4 --- Other Implementations --- p.67 / Chapter 6.4.1 --- Sequential Machine --- p.67 / Chapter 6.4.2 --- Parallel Machine --- p.68 / Chapter 6.5 --- Discussion --- p.68 / Chapter 7 --- Closed Semiring Optimization Circuits --- p.71 / Chapter 7.1 --- Transitive Closure Problem --- p.72 / Chapter 7.1.1 --- Problem Statement --- p.72 / Chapter 7.1.2 --- Inference Network Solutions --- p.73 / Chapter 7.2 --- Closed Semirings --- p.76 / Chapter 7.3 --- Closed Semirings and the Binary Relation Inference Network --- p.79 / Chapter 7.3.1 --- Minimum Spanning Tree --- p.80 / Chapter 7.3.2 --- VLSI Implementation --- p.84 / Chapter 7.4 --- Conclusion --- p.86 / Chapter 8 --- Conclusions --- p.87 / Chapter 8.1 --- Summary of Achievements --- p.87 / Chapter 8.2 --- Future Work --- p.89 / Chapter 8.2.1 --- VLSI Fabrication --- p.89 / Chapter 8.2.2 --- Network Robustness --- p.90 / Chapter 8.2.3 --- Inference Network Applications --- p.91 / Chapter 8.2.4 --- Architecture for the Bellman-Ford Algorithm --- p.91 / Bibliography --- p.92 / Appendices --- p.99 / Chapter A --- Detailed Schematic --- p.99 / Chapter A.1 --- Schematic of the Inference Network Structures --- p.99 / Chapter A.1.1 --- Unit with Self-Feedback --- p.99 / Chapter A.1.2 --- Unit with Self-Feedback Removed --- p.100 / Chapter A.1.3 --- Unit with a Compact Minimizer --- p.100 / Chapter A.1.4 --- Network Modules --- p.100 / Chapter A.2 --- Inference Network Interface Circuits --- p.100 / Chapter B --- Circuit Simulation and Layout Tools --- p.107 / Chapter B.1 --- Circuit Simulation --- p.107 / Chapter B.2 --- VLSI Circuit Design --- p.110 / Chapter B.3 --- VLSI Circuit Layout --- p.111 / Chapter C --- The Conjugate-Gradient Descent Algorithm --- p.113 / Chapter D --- Shortest Path Problem on MasPar --- p.115
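In the inference network of Chapter 2, the unit for a node pair (i, j) continuously combines site inputs l(i,k) + l(k,j) and keeps the minimum, and Chapter 7 shows that the same structure works for any closed semiring. The following is a discrete-time software analogue of those continuous-time analogue units; the link costs are made up, and non-negative costs are assumed.

```python
import math

def inference_network(w):
    # w[i][j]: direct link cost, math.inf where no link exists, w[i][i] == 0
    n = len(w)
    l = [row[:] for row in w]
    changed = True
    while changed:                       # sweep until no unit changes state
        changed = False
        for i in range(n):
            for j in range(n):
                best = min(l[i][k] + l[k][j] for k in range(n))
                if best < l[i][j]:
                    l[i][j], changed = best, True
    return l

inf = math.inf
w = [[0, 3, inf, 9],
     [3, 0, 2, inf],
     [inf, 2, 0, 2],
     [9, inf, 2, 0]]
print(inference_network(w))   # entry [0][3] relaxes from 9 to 3 + 2 + 2 = 7
```

Swapping the (min, +) operations for Boolean (or, and) turns the identical loop into the transitive-closure circuit of Section 7.1, and (min, max) yields minimax path costs, the quantity behind the minimum spanning tree discussion of Section 7.3.1.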
596
Recurrent neural networks for force optimization of multi-fingered robotic hands. January 2002
Fok Lo Ming. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 133-135). / Abstracts in English and Chinese. / Chapter 1. --- Introduction --- p.1 / Chapter 1.1 --- Multi-fingered Robotic Hands --- p.1 / Chapter 1.2 --- Grasping Force Optimization --- p.2 / Chapter 1.3 --- Neural Networks --- p.6 / Chapter 1.4 --- Previous Work for Grasping Force Optimization --- p.9 / Chapter 1.5 --- Contributions of this work --- p.10 / Chapter 1.6 --- Organization of this thesis --- p.12 / Chapter 2. --- Problem Formulations --- p.13 / Chapter 2.1 --- Grasping Force Optimization without Joint Torque Limits --- p.14 / Chapter 2.1.1 --- Linearized Friction Cone Approach --- p.15 / Chapter i. --- Linear Formulation --- p.17 / Chapter ii. --- Quadratic Formulation --- p.18 / Chapter 2.1.2 --- Nonlinear Friction Cone as Positive Semidefinite Matrix --- p.19 / Chapter 2.1.3 --- Constrained Optimization with Nonlinear Inequality Constraint --- p.20 / Chapter 2.2 --- Grasping Force Optimization with Joint Torque Limits --- p.21 / Chapter 2.2.1 --- Linearized Friction Cone Approach --- p.23 / Chapter 2.2.2 --- Constrained Optimization with Nonlinear Inequality Constraint --- p.23 / Chapter 2.3 --- Grasping Force Optimization with Time-varying External Wrench --- p.24 / Chapter 2.3.1 --- Linearized Friction Cone Approach --- p.25 / Chapter 2.3.2 --- Nonlinear Friction Cone as Positive Semidefinite Matrix --- p.25 / Chapter 2.3.3 --- Constrained Optimization with Nonlinear Inequality Constraint --- p.26 / Chapter 3. --- Recurrent Neural Network Models --- p.27 / Chapter 3.1 --- Networks for Grasping Force Optimization without Joint Torque Limits / Chapter 3.1.1 --- The Primal-dual Network for Linear Programming --- p.29 / Chapter 3.1.2 --- The Deterministic Annealing Network for Linear Programming --- p.32 / Chapter 3.1.3 --- The Primal-dual Network for Quadratic Programming --- p.34 / Chapter 3.1.4 --- The Dual Network --- p.35 / Chapter 3.1.5 --- The Deterministic Annealing Network --- p.39 / Chapter 3.1.6 --- The Novel Network --- p.41 / Chapter 3.2 --- Networks for Grasping Force Optimization with Joint Torque Limits / Chapter 3.2.1 --- The Dual Network --- p.43 / Chapter 3.2.2 --- The Novel Network --- p.45 / Chapter 3.3 --- Networks for Grasping Force Optimization with Time-varying External Wrench / Chapter 3.3.1 --- The Primal-dual Network for Quadratic Programming --- p.48 / Chapter 3.3.2 --- The Deterministic Annealing Network --- p.50 / Chapter 3.3.3 --- The Novel Network --- p.52 / Chapter 4. --- Simulation Results --- p.54 / Chapter 4.1 --- Three-finger Grasping Example of Grasping Force Optimization without Joint Torque Limits --- p.54 / Chapter 4.1.1 --- The Primal-dual Network for Linear Programming --- p.57 / Chapter 4.1.2 --- The Deterministic Annealing Network for Linear Programming --- p.59 / Chapter 4.1.3 --- The Primal-dual Network for Quadratic Programming --- p.61 / Chapter 4.1.4 --- The Dual Network --- p.63 / Chapter 4.1.5 --- The Deterministic Annealing Network --- p.65 / Chapter 4.1.6 --- The Novel Network --- p.67 / Chapter 4.1.7 --- Network Complexity Analysis --- p.69 / Chapter 4.2 --- Four-finger Grasping Example of Grasping Force Optimization without Joint Torque Limits --- p.73 / Chapter 4.2.1 --- The Primal-dual Network for Linear Programming --- p.75 / Chapter 4.2.2 --- The Deterministic Annealing Network for Linear Programming --- p.77 / Chapter 4.2.3 --- The Primal-dual Network for Quadratic Programming --- p.79 / Chapter 4.2.4 --- The Dual Network --- p.81 / Chapter 4.2.5 --- The Deterministic Annealing Network --- p.83 / Chapter 4.2.6 --- The Novel Network --- p.85 / Chapter 4.2.7 --- Network Complexity Analysis --- p.87 / Chapter 4.3 --- Three-finger Grasping Example of Grasping Force Optimization with Joint Torque Limits --- p.90 / Chapter 4.3.1 --- The Dual Network --- p.93 / Chapter 4.3.2 --- The Novel Network --- p.95 / Chapter 4.3.3 --- Network Complexity Analysis --- p.97 / Chapter 4.4 --- Three-finger Grasping Example of Grasping Force Optimization with Time-varying External Wrench --- p.99 / Chapter 4.4.1 --- The Primal-dual Network for Quadratic Programming --- p.101 / Chapter 4.4.2 --- The Deterministic Annealing Network --- p.103 / Chapter 4.4.3 --- The Novel Network --- p.105 / Chapter 4.4.4 --- Network Complexity Analysis --- p.107 / Chapter 4.5 --- Four-finger Grasping Example of Grasping Force Optimization with Time-varying External Wrench --- p.109 / Chapter 4.5.1 --- The Primal-dual Network for Quadratic Programming --- p.111 / Chapter 4.5.2 --- The Deterministic Annealing Network --- p.113 / Chapter 4.5.3 --- The Novel Network --- p.115 / Chapter 4.5.4 --- Network Complexity Analysis --- p.117 / Chapter 4.6 --- Four-finger Grasping Example of Grasping Force Optimization with Nonlinear Velocity Variation --- p.119 / Chapter 4.6.1 --- The Primal-dual Network for Quadratic Programming --- p.121 / Chapter 4.6.2 --- The Deterministic Annealing Network --- p.123 / Chapter 4.6.3 --- The Novel Network --- p.125 / Chapter 4.6.4 --- Network Complexity Analysis --- p.127 / Chapter 5. --- Conclusions and Future Work --- p.129 / Publications --- p.132 / Bibliography --- p.133 / Appendix --- p.136
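All the models compared above are ordinary-differential-equation networks whose equilibria coincide with the optimality conditions of the grasping-force program. Their exact dynamics are not reproduced in this record, so the following is a generic projection-type recurrent network for a box-constrained quadratic program, integrated with Euler steps; Q, c and the bounds are invented stand-ins for a linearized friction-cone formulation, not the thesis's matrices.

```python
import numpy as np

def projection_network(Q, c, lo, hi, x0, dt=0.01, steps=20000):
    # Recurrent dynamics dx/dt = P(x - (Qx + c)) - x, where P clips the
    # state into [lo, hi]; an equilibrium x = P(x - (Qx + c)) satisfies
    # the optimality condition of the box-constrained QP.
    x = x0.astype(float).copy()
    for _ in range(steps):
        x += dt * (np.clip(x - (Q @ x + c), lo, hi) - x)
    return x

# minimise 0.5 x'Qx + c'x subject to 0 <= x <= 4 (illustrative numbers)
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-8.0, -6.0])
x = projection_network(Q, c, lo=0.0, hi=4.0, x0=np.zeros(2))
print(x.round(3))    # about [1.636, 1.455]; here the optimum is interior
```

With this Q and c the unconstrained minimiser Q^{-1}(-c) = (18/11, 16/11) lies inside the box, so the trajectory converges there; an active bound would instead pin the corresponding component at the box edge.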
597
Investigating the role of Yes-associated protein (YAP) in neural crest development. Gesell, Anne E. January 2015
The neural crest (NC) is a multipotent embryonic cell type derived from the ectoderm during neurulation, giving rise to a variety of cell lineages such as neurons, glia and pigment cells. Most genes associated with the correct initiation, differentiation and migration of the neural crest have been found through reverse genetics. The similarities between neural crest development and some features of cancer progression are remarkable. For instance, it has been suggested that some cancer types recapitulate NC processes in an unregulated manner, such as epithelial-mesenchymal transition or active cell migration throughout the body to form distant metastases. However, to date very little is known about the initiators and drivers that direct neural crest cell migration to specific target sites. The Medaka mutant hirame exhibits an interesting melanocyte-specific migration defect on the yolk sac, caused by a loss of functional Yes-associated protein (YAP). Medaka hirame mutants were initially studied for their profound changes in body morphology. Genomic mapping identified the causal mutation as a nonsense point mutation within the first WW domain of Yes-associated protein 1 (YAP1), causing translation of a dysfunctional YAP protein. YAP is a downstream transcriptional co-activator of the recently discovered and evolutionarily conserved Hippo pathway. Alterations within Hippo signalling are linked to cell survival, proliferation and abnormal tissue overgrowth. We demonstrate that hirame melanocyte precursors (melanoblasts) are initially present in normal abundance, but show an early migration defect with a lack of melanoblasts on the yolk sac and a corresponding accumulation in the lateral parts of the body. Subsequently, we observe an overall decline in differentiated melanocyte numbers during late-stage embryogenesis. We designed an overexpression cassette linking enhanced GFP to either wild-type or a mutated, activated version of YAP and present evidence that it can efficiently rescue the melanocyte defect after injection of mRNA into one-cell stage embryos. Furthermore, analysis of the yolk sac anatomy via transmission electron microscopy indicates that a fraction of yolk membrane cells undergo apoptosis, and we propose that this may contribute to the establishment of altered environmental cues leading to abnormal melanoblast migration onto the yolk sac. Injection of yap mRNA directly into the yolk sac, however, failed to rescue melanoblast patterning. To advance our study, we isolated and characterised a 3.6 kb Medaka dopachrome tautomerase (Dct) promoter fragment and used it to drive expression of enhanced green fluorescent protein (eGFP) in vivo. We generated germline transgenics with this construct that showed lineage-specific expression of eGFP within early migrating melanoblasts, a phenotype that is maintained in differentiated melanocytes throughout embryogenesis. In addition, using this promoter we overexpressed our egfp-yap fusion cassette and established transgenic lines to assess the cell autonomy of YAP within the melanocyte lineage. However, no fluorescent signal could be detected in the latter transgenics, necessitating future experimentation to properly characterise these lines. Finally, we analysed a range of neural crest markers to examine the extent of the neural crest defects in hirame mutants. In addition to the melanocyte phenotype, we identified a dramatic reduction in xanthophore numbers, although early leucophore development appears unaffected. We also observed a decreased number of dorsal root ganglia in the peripheral nervous system, as well as smaller and partly ectopic cranial neural crest ganglia populations within the epibranchial arches. The characterisation of a novel Medaka melanocyte-specific promoter as well as additional novel NC markers will be widely applicable and useful to the wider Medaka research community as a tool for the study of neural crest related mechanisms during development.
598
Faster Training of Neural Networks for Recommender Systems. Kogel, Wendy E. 01 May 2002
In this project we investigate the use of artificial neural networks (ANNs) as the core prediction function of a recommender system. In the past, research concerned with recommender systems that use ANNs has mainly concentrated on using collaborative-based information. We look at the effects of adding content-based information and how altering the topology of the network itself affects the accuracy of the recommendations generated. In particular, we investigate a mixture-of-experts topology. We create two expert clusters in the hidden layer of the ANN, one for content-based data and another for collaborative-based data. This greatly reduces the number of connections between the input and hidden layers. Our experimental evaluation shows that this new architecture produces the same recommendation accuracy as the fully connected configuration with a large decrease in the amount of time it takes to train the network. This decrease in time is a great advantage because recommender systems need to provide real-time results to the user.
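The mixture-of-experts topology described here is easy to make concrete: each hidden cluster is wired only to its own input group, so every cross-group input-to-hidden weight disappears. The sketch below uses invented layer sizes and untrained random weights purely to show the wiring and the connection count; the thesis's actual dimensions, activation functions and training procedure are not given in this abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_content, n_collab, h = 20, 50, 8      # hypothetical input/cluster sizes

# Two expert clusters: each hidden block sees only its own input group.
W_content = rng.normal(size=(n_content, h))
W_collab = rng.normal(size=(n_collab, h))
W_out = rng.normal(size=(2 * h, 1))

def predict(x_content, x_collab):
    h1 = np.tanh(x_content @ W_content)   # content-based expert
    h2 = np.tanh(x_collab @ W_collab)     # collaborative expert
    return np.tanh(np.concatenate([h1, h2]) @ W_out)

full = (n_content + n_collab) * 2 * h     # fully connected input-to-hidden
split = n_content * h + n_collab * h      # expert-cluster wiring
print(full, split)                        # 1120 vs 560 weights to train
print(predict(rng.normal(size=n_content), rng.normal(size=n_collab)))
```

Halving the input-to-hidden weight count in this way is consistent with the reported training-time decrease: fewer weights means fewer gradient computations per training example.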
599
Deep Learning Binary Neural Network on an FPGA. Redkar, Shrutika 27 April 2017
In recent years, deep neural networks have attracted a lot of attention in the fields of computer vision and artificial intelligence. A convolutional neural network exploits spatial correlations in an input image by performing convolution operations in local receptive fields. Compared with fully connected neural networks, convolutional neural networks have fewer weights and are faster to train. Much research has been conducted to further reduce the computational complexity and memory requirements of convolutional neural networks, to make them applicable to low-power embedded applications. This thesis focuses on a special class of convolutional neural network with only binary weights and activations, referred to as binary neural networks. Weights and activations for convolutional and fully connected layers are binarized to take only two values, +1 and -1. Therefore, the computation and memory requirements are reduced significantly. The proposed binary neural network architecture has been implemented on an FPGA as a real-time, high-speed, low-power computer vision platform. Only on-chip memories are utilized in the FPGA design. The FPGA implementation is evaluated on the CIFAR-10 benchmark and achieves a processing speed of 332,164 images per second with a classification accuracy of about 86.06%.
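With weights and activations restricted to +1 and -1, every dot product in a convolutional or fully connected layer collapses to an XNOR followed by a popcount, which is the main reason such networks map well onto FPGA logic. A toy software model of that identity follows; the bit-packing scheme is illustrative, and the thesis's actual on-chip layout is not described in this abstract.

```python
def bin_dot(a_bits, b_bits, n):
    # bit = 1 encodes +1, bit = 0 encodes -1 for an n-element vector
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask      # 1 wherever the two signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n                # agreements minus disagreements

# check against the ordinary +/-1 dot product
a = [+1, -1, -1, +1, +1]
b = [+1, +1, -1, -1, +1]
pack = lambda v: sum((1 << i) for i, s in enumerate(v) if s > 0)
assert bin_dot(pack(a), pack(b), len(a)) == sum(x * y for x, y in zip(a, b))
```

On an FPGA this identity is typically evaluated with LUTs and a popcount tree instead of hardware multipliers, with all parameters held in on-chip memory as the abstract notes.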
600
Ballistocardiography-based Authentication using Convolutional Neural Networks. Hebert, Joshua A. 25 April 2018
This work demonstrates the viability of the ballistocardiogram (BCG) signal derived from a head-worn device as a biometric modality for authentication. The BCG signal is the measure of an individual's body acceleration as a result of the heart's ejection of blood. It is a characterization of an individual's cardiac cycle and can be derived non-invasively from the measurement of subtle movements of a person's extremities. Through the use of accelerometer and gyroscope sensors on a Smart Eyewear (SEW) device, derived BCG signals are used to train a convolutional neural network (CNN) as an authentication model, which is personalized for each wearer. This system is evaluated using data from 12 subjects, showing that this approach has an equal error rate of 3.5% immediately after training, which only marginally degrades to 13% after about 2 months, in the worst case. We also explore the use of our authentication approach for individuals with severe motor disabilities, and observe that the results fall only slightly short of those of the larger population, with immediate EER values at 11.2% before rising to 21.6%, again in the worst case. Overall, we demonstrate that this model presents a longitudinally viable solution for passive biometric authentication.
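The figures quoted are equal error rates (EER): the operating point at which the false accept rate on impostor signals equals the false reject rate on the genuine wearer. The sketch below shows how an EER is read off a set of match scores; the score distributions are synthetic placeholders for the per-wearer CNN's outputs.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    # sweep thresholds; report (FAR + FRR) / 2 where the two rates cross
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, None
    for t in thresholds:
        far = np.mean(impostor >= t)     # impostors wrongly accepted
        frr = np.mean(genuine < t)       # the wearer wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

rng = np.random.default_rng(42)
genuine = rng.normal(0.8, 0.10, 500)     # hypothetical per-window CNN scores
impostor = rng.normal(0.4, 0.15, 500)
print(round(equal_error_rate(genuine, impostor), 3))
```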