51

Approaches to the implementation of binary relation inference network.

January 1994 (has links)
by C.W. Tong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 96-98). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Availability of Parallel Processing Machines --- p.2 / Chapter 1.1.1 --- Neural Networks --- p.5 / Chapter 1.2 --- Parallel Processing in the Continuous-Time Domain --- p.6 / Chapter 1.3 --- Binary Relation Inference Network --- p.10 / Chapter 2 --- Binary Relation Inference Network --- p.12 / Chapter 2.1 --- Binary Relation Inference Network --- p.12 / Chapter 2.1.1 --- Network Structure --- p.14 / Chapter 2.2 --- Shortest Path Problem --- p.17 / Chapter 2.2.1 --- Problem Statement --- p.17 / Chapter 2.2.2 --- A Binary Relation Inference Network Solution --- p.18 / Chapter 3 --- A Binary Relation Inference Network Prototype --- p.21 / Chapter 3.1 --- The Prototype --- p.22 / Chapter 3.1.1 --- The Network --- p.22 / Chapter 3.1.2 --- Computational Element --- p.22 / Chapter 3.1.3 --- Network Response Time --- p.27 / Chapter 3.2 --- Improving Response --- p.29 / Chapter 3.2.1 --- Removing Feedback --- p.29 / Chapter 3.2.2 --- Selecting Minimum with Diodes --- p.30 / Chapter 3.3 --- Speeding Up the Network Response --- p.33 / Chapter 3.4 --- Conclusion --- p.35 / Chapter 4 --- VLSI Building Blocks --- p.36 / Chapter 4.1 --- The Site --- p.37 / Chapter 4.2 --- The Unit --- p.40 / Chapter 4.2.1 --- A Minimum Finding Circuit --- p.40 / Chapter 4.2.2 --- A Tri-state Comparator --- p.44 / Chapter 4.3 --- The Computational Element --- p.45 / Chapter 4.3.1 --- Network Performances --- p.46 / Chapter 4.4 --- Discussion --- p.47 / Chapter 5 --- A VLSI Chip --- p.48 / Chapter 5.1 --- Spatial Configuration --- p.49 / Chapter 5.2 --- Layout --- p.50 / Chapter 5.2.1 --- Computational Elements --- p.50 / Chapter 5.2.2 --- The Network --- p.52 / Chapter 5.2.3 --- I/O Requirements --- p.53 / Chapter 5.2.4 --- Optional Modules --- p.53 / Chapter 5.3 --- A Scalable Design --- p.54 / Chapter 6 --- The Inverse Shortest Paths Problem --- p.57 / Chapter 6.1 --- Problem Statement --- p.59 / Chapter 6.2 --- The Embedded Approach --- p.63 / Chapter 6.2.1 --- The Formulation --- p.63 / Chapter 6.2.2 --- The Algorithm --- p.65 / Chapter 6.3 --- Implementation Results --- p.66 / Chapter 6.4 --- Other Implementations --- p.67 / Chapter 6.4.1 --- Sequential Machine --- p.67 / Chapter 6.4.2 --- Parallel Machine --- p.68 / Chapter 6.5 --- Discussion --- p.68 / Chapter 7 --- Closed Semiring Optimization Circuits --- p.71 / Chapter 7.1 --- Transitive Closure Problem --- p.72 / Chapter 7.1.1 --- Problem Statement --- p.72 / Chapter 7.1.2 --- Inference Network Solutions --- p.73 / Chapter 7.2 --- Closed Semirings --- p.76 / Chapter 7.3 --- Closed Semirings and the Binary Relation Inference Network --- p.79 / Chapter 7.3.1 --- Minimum Spanning Tree --- p.80 / Chapter 7.3.2 --- VLSI Implementation --- p.84 / Chapter 7.4 --- Conclusion --- p.86 / Chapter 8 --- Conclusions --- p.87 / Chapter 8.1 --- Summary of Achievements --- p.87 / Chapter 8.2 --- Future Work --- p.89 / Chapter 8.2.1 --- VLSI Fabrication --- p.89 / Chapter 8.2.2 --- Network Robustness --- p.90 / Chapter 8.2.3 --- Inference Network Applications --- p.91 / Chapter 8.2.4 --- Architecture for the Bellman-Ford Algorithm --- p.91 / Bibliography --- p.92 / Appendices --- p.99 / Chapter A --- Detailed Schematic --- p.99 / Chapter A.1 --- Schematic of the Inference Network Structures --- p.99 / Chapter A.1.1 --- Unit with Self-Feedback --- p.99 / Chapter A.1.2 --- Unit with Self-Feedback Removed --- p.100 / Chapter A.1.3 --- Unit with a Compact Minimizer --- p.100 / Chapter A.1.4 --- Network Modules --- p.100 / Chapter A.2 --- Inference Network Interface Circuits --- p.100 / Chapter B --- Circuit Simulation and Layout Tools --- p.107 / Chapter B.1 --- Circuit Simulation --- p.107 / Chapter B.2 --- VLSI Circuit Design --- p.110 / Chapter B.3 --- VLSI Circuit Layout --- p.111 / Chapter C --- The Conjugate-Gradient Descent Algorithm --- p.113 / Chapter D --- Shortest Path Problem on MasPar --- p.115
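The chapter listing above centres on a continuous-time inference network for the all-pairs shortest path problem (Chapter 2.2). As a rough functional sketch only, assuming the standard min-plus relaxation that binary relation inference networks are usually described as evaluating (not the thesis's analogue circuit), each unit (i, j) repeatedly takes the minimum over the sums computed at its sites:

```python
import numpy as np

def inference_network_shortest_paths(w, sweeps=None):
    """Relax d[i, j] = min(d[i, j], min_k d[i, k] + d[k, j]) until stable.

    w: (n, n) array of edge weights, np.inf where no edge, 0 on the diagonal.
    Each unit (i, j) plays the role of one computational element; its sites
    form the sums d[i, k] + d[k, j] and the unit keeps the minimum.
    """
    n = w.shape[0]
    d = w.copy()
    sweeps = sweeps if sweeps is not None else n   # n synchronous sweeps suffice
    for _ in range(sweeps):
        through = d[:, :, None] + d[None, :, :]    # shape (n, n, n): index (i, k, j)
        d = np.minimum(d, through.min(axis=1))     # minimise over the middle node k
    return d

# Hypothetical 4-node example (np.inf marks absent edges)
w = np.array([[0.0, 3.0, np.inf, 7.0],
              [3.0, 0.0, 2.0, np.inf],
              [np.inf, 2.0, 0.0, 2.0],
              [7.0, np.inf, 2.0, 0.0]])
print(inference_network_shortest_paths(w))
```

A network of n² such units settling in parallel is the continuous-time counterpart of these synchronous sweeps.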
52

Recurrent neural networks for force optimization of multi-fingered robotic hands.

January 2002 (has links)
Fok Lo Ming. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 133-135). / Abstracts in English and Chinese. / Chapter 1. --- Introduction --- p.1 / Chapter 1.1 --- Multi-fingered Robotic Hands --- p.1 / Chapter 1.2 --- Grasping Force Optimization --- p.2 / Chapter 1.3 --- Neural Networks --- p.6 / Chapter 1.4 --- Previous Work for Grasping Force Optimization --- p.9 / Chapter 1.5 --- Contributions of this work --- p.10 / Chapter 1.6 --- Organization of this thesis --- p.12 / Chapter 2. --- Problem Formulations --- p.13 / Chapter 2.1 --- Grasping Force Optimization without Joint Torque Limits --- p.14 / Chapter 2.1.1 --- Linearized Friction Cone Approach --- p.15 / Chapter i. --- Linear Formulation --- p.17 / Chapter ii. --- Quadratic Formulation --- p.18 / Chapter 2.1.2 --- Nonlinear Friction Cone as Positive Semidefinite Matrix --- p.19 / Chapter 2.1.3 --- Constrained Optimization with Nonlinear Inequality Constraint --- p.20 / Chapter 2.2 --- Grasping Force Optimization with Joint Torque Limits --- p.21 / Chapter 2.2.1 --- Linearized Friction Cone Approach --- p.23 / Chapter 2.2.2 --- Constrained Optimization with Nonlinear Inequality Constraint --- p.23 / Chapter 2.3 --- Grasping Force Optimization with Time-varying External Wrench --- p.24 / Chapter 2.3.1 --- Linearized Friction Cone Approach --- p.25 / Chapter 2.3.2 --- Nonlinear Friction Cone as Positive Semidefinite Matrix --- p.25 / Chapter 2.3.3 --- Constrained Optimization with Nonlinear Inequality Constraint --- p.26 / Chapter 3. --- Recurrent Neural Network Models --- p.27 / Chapter 3.1 --- Networks for Grasping Force Optimization without Joint Torque Limits / Chapter 3.1.1 --- The Primal-dual Network for Linear Programming --- p.29 / Chapter 3.1.2 --- The Deterministic Annealing Network for Linear Programming --- p.32 / Chapter 3.1.3 --- The Primal-dual Network for Quadratic Programming --- p.34 / Chapter 3.1.4 --- The Dual Network --- p.35 / Chapter 3.1.5 --- The Deterministic Annealing Network --- p.39 / Chapter 3.1.6 --- The Novel Network --- p.41 / Chapter 3.2 --- Networks for Grasping Force Optimization with Joint Torque Limits / Chapter 3.2.1 --- The Dual Network --- p.43 / Chapter 3.2.2 --- The Novel Network --- p.45 / Chapter 3.3 --- Networks for Grasping Force Optimization with Time-varying External Wrench / Chapter 3.3.1 --- The Primal-dual Network for Quadratic Programming --- p.48 / Chapter 3.3.2 --- The Deterministic Annealing Network --- p.50 / Chapter 3.3.3 --- The Novel Network --- p.52 / Chapter 4. --- Simulation Results --- p.54 / Chapter 4.1 --- Three-finger Grasping Example of Grasping Force Optimization without Joint Torque Limits --- p.54 / Chapter 4.1.1 --- The Primal-dual Network for Linear Programming --- p.57 / Chapter 4.1.2 --- The Deterministic Annealing Network for Linear Programming --- p.59 / Chapter 4.1.3 --- The Primal-dual Network for Quadratic Programming --- p.61 / Chapter 4.1.4 --- The Dual Network --- p.63 / Chapter 4.1.5 --- The Deterministic Annealing Network --- p.65 / Chapter 4.1.6 --- The Novel Network --- p.57 / Chapter 4.1.7 --- Network Complexity Analysis --- p.59 / Chapter 4.2 --- Four-finger Grasping Example of Grasping Force Optimization without Joint Torque Limits --- p.73 / Chapter 4.2.1 --- The Primal-dual Network for Linear Programming --- p.75 / Chapter 4.2.2 --- The Deterministic Annealing Network for Linear Programming --- p.77 / Chapter 4.2.3 --- The Primal-dual Network for Quadratic Programming --- p.79 / Chapter 4.2.4 --- The Dual Network --- p.81 / Chapter 4.2.5 --- The Deterministic Annealing Network --- p.83 / Chapter 4.2.6 --- The Novel Network --- p.85 / Chapter 4.2.7 --- Network Complexity Analysis --- p.87 / Chapter 4.3 --- Three-finger Grasping Example of Grasping Force Optimization with Joint Torque Limits --- p.90 / Chapter 4.3.1 --- The Dual Network --- p.93 / Chapter 4.3.2 --- The Novel Network --- p.95 / Chapter 4.3.3 --- Network Complexity Analysis --- p.97 / Chapter 4.4 --- Three-finger Grasping Example of Grasping Force Optimization with Time-varying External Wrench --- p.99 / Chapter 4.4.1 --- The Primal-dual Network for Quadratic Programming --- p.101 / Chapter 4.4.2 --- The Deterministic Annealing Network --- p.103 / Chapter 4.4.3 --- The Novel Network --- p.105 / Chapter 4.4.4 --- Network Complexity Analysis --- p.107 / Chapter 4.5 --- Four-finger Grasping Example of Grasping Force Optimization with Time-varying External Wrench --- p.109 / Chapter 4.5.1 --- The Primal-dual Network for Quadratic Programming --- p.111 / Chapter 4.5.2 --- The Deterministic Annealing Network --- p.113 / Chapter 4.5.3 --- The Novel Network --- p.115 / Chapter 4.5.4 --- Network Complexity Analysis --- p.117 / Chapter 4.6 --- Four-finger Grasping Example of Grasping Force Optimization with Nonlinear Velocity Variation --- p.119 / Chapter 4.6.1 --- The Primal-dual Network for Quadratic Programming --- p.121 / Chapter 4.6.2 --- The Deterministic Annealing Network --- p.123 / Chapter 4.6.3 --- The Novel Network --- p.125 / Chapter 4.6.4 --- Network Complexity Analysis --- p.127 / Chapter 5. --- Conclusions and Future Work --- p.129 / Publications --- p.132 / Bibliography --- p.133 / Appendix --- p.136
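For orientation, the grasping force optimization formulated in Chapter 2 above can be posed as a quadratic program: minimize the contact force magnitudes subject to the wrench-balance equality Gf = -w_ext (the friction cone constraints of the thesis are omitted here). The sketch below uses a generic primal-dual gradient flow as a stand-in for the specific networks listed in Chapter 3, and the grasp matrix and wrench are hypothetical values chosen only for illustration:

```python
import numpy as np

def primal_dual_qp_flow(Q, A, b, dt=1e-3, steps=20000):
    """Saddle-point gradient flow for  min 0.5 f'Qf  s.t.  A f = b.

    Continuous-time dynamics f' = -(Q f + A' lam), lam' = A f - b,
    integrated with Euler steps; a stand-in, not the thesis's networks.
    """
    f, lam = np.zeros(Q.shape[0]), np.zeros(A.shape[0])
    for _ in range(steps):
        f_dot = -(Q @ f + A.T @ lam)
        lam_dot = A @ f - b
        f += dt * f_dot
        lam += dt * lam_dot
    return f

# Hypothetical planar 2-finger example: G maps the stacked contact forces
# to the net object wrench; solve for minimum-norm forces balancing gravity.
G = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0,  0.0, 1.0]])
w_ext = np.array([0.0, -9.8])            # external wrench (object weight)
f_star = primal_dual_qp_flow(np.eye(4), G, -w_ext)
print(f_star)                            # about [0, 4.9, 0, 4.9]
```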
53

Extended Kalman filter based pruning algorithms and several aspects of neural network learning. / CUHK electronic theses & dissertations collection

January 1998 (has links)
by John Pui-Fai Sum. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (p. 155-[163]). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web.
54

Continuous-time recurrent neural networks for quadratic programming: theory and engineering applications.

January 2005 (has links)
Liu Shubao. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 90-98). / Abstracts in English and Chinese. / Abstract --- p.i / Abstract (in Chinese) --- p.iii / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Time-Varying Quadratic Optimization --- p.1 / Chapter 1.2 --- Recurrent Neural Networks --- p.3 / Chapter 1.2.1 --- From Feedforward to Recurrent Networks --- p.3 / Chapter 1.2.2 --- Computational Power and Complexity --- p.6 / Chapter 1.2.3 --- Implementation Issues --- p.7 / Chapter 1.3 --- Thesis Organization --- p.9 / Chapter I --- Theory and Models --- p.11 / Chapter 2 --- Linearly Constrained QP --- p.13 / Chapter 2.1 --- Model Description --- p.14 / Chapter 2.2 --- Convergence Analysis --- p.17 / Chapter 3 --- Quadratically Constrained QP --- p.26 / Chapter 3.1 --- Problem Formulation --- p.26 / Chapter 3.2 --- Model Description --- p.27 / Chapter 3.2.1 --- Model 1 (Dual Model) --- p.28 / Chapter 3.2.2 --- Model 2 (Improved Dual Model) --- p.28 / Chapter II --- Engineering Applications --- p.29 / Chapter 4 --- KWTA Network Circuit Design --- p.31 / Chapter 4.1 --- Introduction --- p.31 / Chapter 4.2 --- Equivalent Reformulation --- p.32 / Chapter 4.3 --- KWTA Network Model --- p.36 / Chapter 4.4 --- Simulation Results --- p.40 / Chapter 4.5 --- Conclusions --- p.40 / Chapter 5 --- Dynamic Control of Manipulators --- p.43 / Chapter 5.1 --- Introduction --- p.43 / Chapter 5.2 --- Problem Formulation --- p.44 / Chapter 5.3 --- Simplified Dual Neural Network --- p.47 / Chapter 5.4 --- Simulation Results --- p.51 / Chapter 5.5 --- Concluding Remarks --- p.55 / Chapter 6 --- Robot Arm Obstacle Avoidance --- p.56 / Chapter 6.1 --- Introduction --- p.56 / Chapter 6.2 --- Obstacle Avoidance Scheme --- p.58 / Chapter 6.2.1 --- Equality Constrained Formulation --- p.58 / Chapter 6.2.2 --- Inequality Constrained Formulation --- p.60 / Chapter 6.3 --- Simplified Dual Neural Network Model --- p.64 / Chapter 6.3.1 --- Existing Approaches --- p.64 / Chapter 6.3.2 --- Model Derivation --- p.65 / Chapter 6.3.3 --- Convergence Analysis --- p.67 / Chapter 6.3.4 --- Model Comparison --- p.69 / Chapter 6.4 --- Simulation Results --- p.70 / Chapter 6.5 --- Concluding Remarks --- p.71 / Chapter 7 --- Multiuser Detection --- p.77 / Chapter 7.1 --- Introduction --- p.77 / Chapter 7.2 --- Problem Formulation --- p.78 / Chapter 7.3 --- Neural Network Architecture --- p.82 / Chapter 7.4 --- Simulation Results --- p.84 / Chapter 8 --- Conclusions and Future Works --- p.88 / Chapter 8.1 --- Concluding Remarks --- p.88 / Chapter 8.2 --- Future Prospects --- p.88 / Bibliography --- p.89
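Chapter 4 above designs a k-winners-take-all (KWTA) network by reformulating the selection task as a quadratic program. A minimal single-threshold sketch of KWTA dynamics is given below; it illustrates the behaviour (the k largest inputs switch on) but is an assumed toy model for illustration, not the circuit derived in the thesis:

```python
import numpy as np

def kwta(u, k, gain=100.0, eta=0.005, steps=10000):
    """Single-threshold k-winners-take-all sketch (illustrative only).

    Outputs x_i = sigmoid(gain * (u_i - y)); the threshold y is adjusted
    until exactly k outputs are on, so the k largest inputs win.
    """
    y = float(np.mean(u))
    for _ in range(steps):
        x = 1.0 / (1.0 + np.exp(-gain * (u - y)))
        y += eta * (np.sum(x) - k)          # raise y if too many winners are on
    return 1.0 / (1.0 + np.exp(-gain * (u - y)))

print(np.round(kwta(np.array([0.3, 0.9, 0.1, 0.7, 0.5]), k=2), 2))
# -> outputs near 1 at the positions of 0.9 and 0.7, the rest near 0
```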
55

Analysis and design of neurodynamic approaches to nonlinear and robust model predictive control.

January 2014 (has links)
Model predictive control is an advanced model-based control strategy that computes optimal control signals in real time by repeatedly solving a constrained optimization problem over a finite horizon. As an effective multivariable control method, MPC has achieved great success in process control, robotics, economics, and other fields. A key issue in MPC research and development is how to realize high-performance nonlinear and robust predictive control algorithms. Real-time optimization is a challenging task, and it becomes far more demanding when the optimization problem is nonconvex. While MPC has been advancing, research on neural networks, aimed at building brain-like computational models, has also made important breakthroughs, particularly in system identification and real-time optimization. Neural networks thus provide powerful tools for resolving the bottlenecks facing MPC. / This thesis focuses on the design and analysis of neurodynamics-based MPC. Its main objective is to design high-performance neurodynamic algorithms that improve the optimality and computational efficiency of MPC. The thesis comprises two parts. The first part discusses how to solve nonlinear and robust MPC without solving nonconvex optimization problems: nonlinear models are decomposed into affine models with unknown terms or transformed into linear parameter-varying systems; the unknown terms in the affine models are modeled and numerically compensated with extreme learning machines; robustness against uncertain disturbances is obtained with minimax and disturbance-invariant-set methods; and when several performance criteria must be considered, goal programming is used to design multiobjective optimization algorithms. The design methods of the first part allow nonlinear and robust MPC to be formulated as convex optimization problems, which are then solved in real time by neurodynamic optimization. The second part develops collective neurodynamic algorithms for nonconvex optimization and builds MPC algorithms upon them. The collective algorithm emulates the human brainstorming process by employing multiple neural networks that cooperate in a global search: the dynamic equation of each network guides its precise local search, while information exchange among the networks guides the global search. Experimental results show that the algorithm efficiently obtains globally optimal solutions of nonconvex problems. MPC based on collective neurodynamic optimization is an innovative high-performance control method. The thesis concludes by applying MPC to the motion control of marine vehicles. / Model predictive control (MPC) is an advanced model-based control strategy that generates control signals in real time by optimizing an objective function iteratively over a finite moving prediction horizon, subject to system constraints. As a very effective multivariable control technology, MPC has achieved enormous success in process industries, robotics, and economics. A major challenge of the MPC research and development lies in the realization of high-performance nonlinear and robust MPC algorithms. MPC requires real-time dynamic optimization, which is extremely demanding in terms of solution optimality and computational efficiency. The difficulty is significantly amplified when the optimization problem is nonconvex. / In parallel to the development of MPC, research on neural networks has made significant progress, aiming at building brain-like models for modeling complex systems and computing optimal solutions. It is envisioned that the advances in neural network research will play a more important role in the MPC synthesis. This thesis is concentrated on analysis and design of neurodynamic approaches to nonlinear and robust MPC. The primary objective is to improve solution optimality by developing highly efficient neurodynamic optimization methods. / The thesis is comprised of two coherent parts under a unified framework. The first part consists of several neurodynamics-based MPC approaches, aiming at solving nonlinear and robust MPC problems without confronting non-convexity. The nonlinear models are decomposed into input-affine models with unknown terms, or transformed into linear parameter-varying systems. The unknown terms are learned by using extreme learning machines via supervised learning. The minimax method and the disturbance invariant tube method are used to achieve robustness against uncertainties. When multiobjective MPC is considered, the goal programming technique is used to deal with multiple objectives. The presented techniques enable MPC to be reformulated as convex programs. Neurodynamic models with global convergence, guaranteed optimality, and low complexity are customized and applied for solving the convex programs in real time. Simulation results are presented to substantiate the effectiveness and to demonstrate the characteristics of the proposed approaches. The second part consists of collective neurodynamic optimization approaches, aiming at directly solving the constrained nonconvex optimization problems in MPC. Multiple recurrent neural networks are exploited in the framework of particle swarm optimization by emulating the paradigm of brainstorming. Each individual neural network carries out precise constrained local search, and the information exchange among neural networks guides the improvement of the solution quality. Implementation results on benchmark problems are included to show the superiority of the collective neurodynamic optimization approaches. The essence of collective neurodynamic optimization lies in its global search capability and real-time computational efficiency. By using collective neurodynamic optimization, high-performance nonlinear MPC methods can be realized. Finally, the thesis discusses applications of MPC to the motion control of marine vehicles. / Yan, Zheng. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2014. / Includes bibliographical references (leaves 186-203). / Abstracts also in Chinese.
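The collective neurodynamic scheme described above combines local neurodynamic search with particle-swarm-style information exchange. The following sketch substitutes a projected-gradient local search for each recurrent neural network and uses a textbook PSO update for the exchange step; every function and constant in it is illustrative rather than taken from the thesis:

```python
import numpy as np

def approx_grad(f, x, h=1e-5):
    """Central-difference gradient so the sketch needs no autodiff."""
    g = np.zeros_like(x)
    for j in range(x.size):
        e = np.zeros_like(x); e[j] = h
        g[j] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def collective_search(f, proj, dim, n_nets=10, rounds=30, local_steps=200,
                      lr=0.005, c1=2.0, c2=2.0, seed=0):
    """Collective neurodynamic optimization, sketched with stand-ins.

    Each 'neural network' is approximated by a projected-gradient flow doing
    the precise local search; between rounds the states are mixed PSO-style
    toward personal and global bests (the information exchange step).
    """
    rng = np.random.default_rng(seed)
    x = proj(rng.uniform(-5, 5, (n_nets, dim)))     # initial states
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in pbest])
    for _ in range(rounds):
        for i in range(n_nets):                     # local constrained search
            xi = x[i]
            for _ in range(local_steps):
                xi = proj(xi - lr * approx_grad(f, xi))
            x[i] = xi
            if f(xi) < pval[i]:
                pbest[i], pval[i] = xi, f(xi)
        gbest = pbest[np.argmin(pval)]
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = proj(x + v)                             # information exchange
    return pbest[np.argmin(pval)]

# Toy nonconvex benchmark: Rastrigin on the box [-5.12, 5.12]^2
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
box = lambda z: np.clip(z, -5.12, 5.12)
print(collective_search(rastrigin, box, dim=2))     # close to the global minimum at 0
```

Replacing the projected-gradient stand-in with a recurrent network model integrated as an ODE would bring the sketch closer to the scheme described in the abstract.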
56

The stability and attractivity of neural associative memories.

January 1996 (has links)
Han-bing Ji. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (p. 160-163). / Microfiche. Ann Arbor, Mich.: UMI, 1998. 2 microfiches ; 11 x 15 cm.
57

Studies of model selection and regularization for generalization in neural networks with applications. / CUHK electronic theses & dissertations collection

January 2002 (has links)
Guo Ping. / "March 2002." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (p. 166-182). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
58

Solving variational inequalities and related problems using recurrent neural networks. / CUHK electronic theses & dissertations collection

January 2007 (has links)
During the past two decades, numerous recurrent neural networks (RNNs) have been proposed for solving VIs and related problems. However, first, the theories of many emerging RNNs have not yet been well founded, and their capabilities have been underestimated. Second, these RNNs have limitations in handling some types of problems. Third, these RNNs are certainly not the best choices for solving all problems, and new network models with more favorable characteristics could be devised for solving specific problems. / In this research, the above issues are extensively explored from a dynamic system perspective, which leads to the following major contributions. On one hand, many new capabilities of some existing RNNs have been revealed for solving VIs and related problems. On the other hand, several new RNNs have been invented for solving some types of these problems. The contributions are established on the following facts. First, two existing RNNs, called TLPNN and PNN, are found to be capable of solving pseudomonotone VIs and related problems with simple bound constraints. Second, many more stability results are revealed for an existing RNN, called GPNN, for solving GVIs with simple bound constraints, and it is then extended to solve linear VIs (LVIs) and generalized linear VIs (GLVIs) with polyhedron constraints. Third, a new RNN, called IDNN, is proposed for solving a special class of quadratic programming problems which features lower structural complexity compared with existing RNNs. Fourth, some local convergence results of an existing RNN, called EPNN, for nonconvex optimization are obtained, and two variants of the network by incorporating two augmented Lagrangian function techniques are proposed for seeking Karush-Kuhn-Tucker (KKT) points, especially local optima, of the problems. / Variational inequality (VI) can be viewed as a natural framework for unifying the treatment of equilibrium problems, and hence has applications across many disciplines. In addition, many typical problems are closely related to VI, including general VI (GVI), complementarity problem (CP), generalized CP (GCP) and optimization problem (OP). / Hu, Xiaolin. / "July 2007." / Adviser: Jun Wang. / Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1102. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 193-207). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
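The models named above (TLPNN, PNN, GPNN, IDNN, EPNN) are recurrent dynamics whose equilibria solve variational inequalities. A minimal sketch of the classical projection neural network form that several of them build on is given below; the linear complementarity example and all constants are illustrative assumptions, not taken from the record:

```python
import numpy as np

def projection_network(F, proj, x0, alpha=0.2, lam=1.0, dt=0.01, steps=5000):
    """Euler-integrated projection neural network for a variational inequality.

    VI(F, Omega): find x* in Omega with F(x*)'(x - x*) >= 0 for all x in Omega,
    solved via the dynamics dx/dt = lam * (P_Omega(x - alpha * F(x)) - x).
    A sketch of the classical form; the thesis's specific networks differ in detail.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x += dt * lam * (proj(x - alpha * F(x)) - x)
    return x

# Example: linear complementarity problem F(x) = M x + q on the nonnegative orthant
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
x_star = projection_network(lambda x: M @ x + q, lambda z: np.maximum(z, 0.0),
                            x0=np.zeros(2))
print(x_star)   # equilibrium satisfies x >= 0, Mx + q >= 0, x'(Mx + q) = 0
```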
59

Feature matching by Hopfield type neural networks. / CUHK electronic theses & dissertations collection

January 2002 (has links)
Li Wenjing. / "April 2002." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (p. 155-167). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
60

A neurodynamic optimization approach to constrained pseudoconvex optimization.

January 2011 (has links)
Guo, Zhishan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (p. 71-82). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.ii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Constrained Pseudoconvex Optimization --- p.1 / Chapter 1.2 --- Recurrent Neural Networks --- p.4 / Chapter 1.3 --- Thesis Organization --- p.7 / Chapter 2 --- Literature Review --- p.8 / Chapter 2.1 --- Pseudoconvex Optimization --- p.8 / Chapter 2.2 --- Recurrent Neural Networks --- p.10 / Chapter 3 --- Model Description and Convergence Analysis --- p.17 / Chapter 3.1 --- Model Descriptions --- p.18 / Chapter 3.2 --- Global Convergence --- p.20 / Chapter 4 --- Numerical Examples --- p.27 / Chapter 4.1 --- Gaussian Optimization --- p.28 / Chapter 4.2 --- Quadratic Fractional Programming --- p.36 / Chapter 4.3 --- Nonlinear Convex Programming --- p.39 / Chapter 5 --- Real-time Data Reconciliation --- p.42 / Chapter 5.1 --- Introduction --- p.42 / Chapter 5.2 --- Theoretical Analysis and Performance Measurement --- p.44 / Chapter 5.3 --- Examples --- p.45 / Chapter 6 --- Real-time Portfolio Optimization --- p.53 / Chapter 6.1 --- Introduction --- p.53 / Chapter 6.2 --- Model Description --- p.54 / Chapter 6.3 --- Theoretical Analysis --- p.56 / Chapter 6.4 --- Illustrative Examples --- p.58 / Chapter 7 --- Conclusions and Future Works --- p.67 / Chapter 7.1 --- Concluding Remarks --- p.67 / Chapter 7.2 --- Future Works --- p.68 / Chapter A --- Publication List --- p.69 / Bibliography --- p.71
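The thesis above studies recurrent networks whose equilibria are global minima of pseudoconvex objectives such as quadratic fractional programs (Chapter 4.2). The sketch below uses a plain projected-gradient flow in place of the thesis's neurodynamic model; the fractional objective and box constraints are made-up illustrative data:

```python
import numpy as np

def projected_gradient_flow(grad, proj, x0, dt=0.005, steps=20000):
    """Projected gradient flow x' = P_Omega(x - grad(x)) - x, Euler-integrated.

    For a pseudoconvex objective over a convex set, any equilibrium of this
    flow is a global minimizer, which is what makes one-layer neurodynamic
    models attractive for this problem class (a sketch, not the thesis model).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x += dt * (proj(x - grad(x)) - x)
    return x

# Quadratic fractional example: f(x) = (x'Qx + a'x + 5) / (d'x + 10),
# pseudoconvex on the box [0, 5]^2 where the denominator stays positive.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
a = np.array([-4.0, -2.0])
d = np.array([1.0, 1.0])

def grad_f(x):
    num = x @ Q @ x + a @ x + 5.0
    den = d @ x + 10.0
    return ((2 * Q @ x + a) * den - num * d) / den**2   # quotient rule

x_min = projected_gradient_flow(grad_f, lambda z: np.clip(z, 0.0, 5.0),
                                x0=np.array([3.0, 3.0]))
print(x_min)
```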
