81 |
Applying Levenberg-Marquardt algorithm with block-diagonal Hessian approximation to recurrent neural network training. January 1999 (has links)
by Chi-cheong Szeto. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 162-165). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgment --- p.ii / Table of Contents --- p.iii / Chapter Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Time series prediction --- p.1 / Chapter 1.2 --- Forecasting models --- p.1 / Chapter 1.2.1 --- Networks using time delays --- p.2 / Chapter 1.2.1.1 --- Model description --- p.2 / Chapter 1.2.1.2 --- Limitation --- p.3 / Chapter 1.2.2 --- Networks using context units --- p.3 / Chapter 1.2.2.1 --- Model description --- p.3 / Chapter 1.2.2.2 --- Limitation --- p.6 / Chapter 1.2.3 --- Layered fully recurrent networks --- p.6 / Chapter 1.2.3.1 --- Model description --- p.6 / Chapter 1.2.3.2 --- Our selection and motivation --- p.8 / Chapter 1.2.4 --- Other models --- p.8 / Chapter 1.3 --- Learning methods --- p.8 / Chapter 1.3.1 --- First order and second order methods --- p.9 / Chapter 1.3.2 --- Nonlinear least squares methods --- p.11 / Chapter 1.3.2.1 --- Levenberg-Marquardt method - our selection and motivation --- p.13 / Chapter 1.3.2.2 --- Levenberg-Marquardt method - algorithm --- p.13 / Chapter 1.3.3 --- "Batch mode, semi-sequential mode and sequential mode of updating" --- p.15 / Chapter 1.4 --- Jacobian matrix calculations in recurrent networks --- p.15 / Chapter 1.4.1 --- RTBPTT-like Jacobian matrix calculation --- p.15 / Chapter 1.4.2 --- RTRL-like Jacobian matrix calculation --- p.17 / Chapter 1.4.3 --- Comparison between RTBPTT-like and RTRL-like calculations --- p.18 / Chapter 1.5 --- Computation complexity reduction techniques in recurrent networks --- p.19 / Chapter 1.5.1 --- Architectural approach --- p.19 / Chapter 1.5.1.1 --- Recurrent connection reduction method --- p.20 / Chapter 1.5.1.2 --- Treating the feedback signals as additional inputs method --- p.20 / Chapter 1.5.1.3 --- Growing network method --- p.21 / Chapter 1.5.2 --- Algorithmic
approach --- p.21 / Chapter 1.5.2.1 --- History cutoff method --- p.21 / Chapter 1.5.2.2 --- Changing the updating frequency from sequential mode to semi- sequential mode method --- p.22 / Chapter 1.6 --- Motivation for using block-diagonal Hessian matrix --- p.22 / Chapter 1.7 --- Objective --- p.23 / Chapter 1.8 --- Organization of the thesis --- p.24 / Chapter Chapter 2 --- Learning with the block-diagonal Hessian matrix --- p.25 / Chapter 2.1 --- Introduction --- p.25 / Chapter 2.2 --- General form and factors of block-diagonal Hessian matrices --- p.25 / Chapter 2.2.1 --- General form of block-diagonal Hessian matrices --- p.25 / Chapter 2.2.2 --- Factors of block-diagonal Hessian matrices --- p.27 / Chapter 2.3 --- Four particular block-diagonal Hessian matrices --- p.28 / Chapter 2.3.1 --- Correlation block-diagonal Hessian matrix --- p.29 / Chapter 2.3.2 --- One-unit block-diagonal Hessian matrix --- p.35 / Chapter 2.3.3 --- Sub-network block-diagonal Hessian matrix --- p.35 / Chapter 2.3.4 --- Layer block-diagonal Hessian matrix --- p.36 / Chapter 2.4 --- Updating methods --- p.40 / Chapter Chapter 3 --- Data set and setup of experiments --- p.41 / Chapter 3.1 --- Introduction --- p.41 / Chapter 3.2 --- Data set --- p.41 / Chapter 3.2.1 --- Single sine --- p.41 / Chapter 3.2.2 --- Composite sine --- p.42 / Chapter 3.2.3 --- Sunspot --- p.43 / Chapter 3.3 --- Choices of recurrent neural network parameters and initialization methods --- p.44 / Chapter 3.3.1 --- "Choices of numbers of input, hidden and output units" --- p.45 / Chapter 3.3.2 --- Initial hidden states --- p.45 / Chapter 3.3.3 --- Weight initialization method --- p.45 / Chapter 3.4 --- Method of dealing with over-fitting --- p.47 / Chapter Chapter 4 --- Updating methods --- p.48 / Chapter 4.1 --- Introduction --- p.48 / Chapter 4.2 --- Asynchronous updating method --- p.49 / Chapter 4.2.1 --- Algorithm --- p.49 / Chapter 4.2.2 --- Method of study --- p.50 / Chapter 4.2.3 --- Performance --- p.51 
/ Chapter 4.2.4 --- Investigation on poor generalization --- p.52 / Chapter 4.2.4.1 --- Hidden states --- p.52 / Chapter 4.2.4.2 --- Incoming weight magnitudes of the hidden units --- p.54 / Chapter 4.2.4.3 --- Weight change against time --- p.56 / Chapter 4.3 --- Asynchronous updating with constraint method --- p.68 / Chapter 4.3.1 --- Algorithm --- p.68 / Chapter 4.3.2 --- Method of study --- p.69 / Chapter 4.3.3 --- Performance --- p.70 / Chapter 4.3.3.1 --- Generalization performance --- p.70 / Chapter 4.3.3.2 --- Training time performance --- p.71 / Chapter 4.3.4 --- Hidden states and incoming weight magnitudes of the hidden units --- p.73 / Chapter 4.3.4.1 --- Hidden states --- p.73 / Chapter 4.3.4.2 --- Incoming weight magnitudes of the hidden units --- p.73 / Chapter 4.4 --- Synchronous updating methods --- p.84 / Chapter 4.4.1 --- Single λ and multiple λ's synchronous updating methods --- p.84 / Chapter 4.4.1.1 --- Algorithm of single λ synchronous updating method --- p.84 / Chapter 4.4.1.2 --- Algorithm of multiple λ's synchronous updating method --- p.85 / Chapter 4.4.1.3 --- Method of study --- p.87 / Chapter 4.4.1.4 --- Performance --- p.87 / Chapter 4.4.1.5 --- Investigation on long training time: analysis of λ --- p.89 / Chapter 4.4.2 --- Multiple λ's with line search synchronous updating method --- p.97 / Chapter 4.4.2.1 --- Algorithm --- p.97 / Chapter 4.4.2.2 --- Performance --- p.98 / Chapter 4.4.2.3 --- Comparison of λ --- p.100 / Chapter 4.5 --- Comparison between asynchronous and synchronous updating methods --- p.101 / Chapter 4.5.1 --- Final training time --- p.101 / Chapter 4.5.2 --- Computation load per complete weight update --- p.102 / Chapter 4.5.3 --- Convergence speed --- p.103 / Chapter 4.6 --- Comparison between our proposed methods and the gradient descent method with adaptive learning rate and momentum --- p.111 / Chapter Chapter 5 --- Number and sizes of the blocks --- p.113 / Chapter 5.1 --- Introduction --- p.113 / Chapter 5.2 
--- Performance --- p.113 / Chapter 5.2.1 --- Method of study --- p.113 / Chapter 5.2.2 --- Trend of performance --- p.115 / Chapter 5.2.2.1 --- Asynchronous updating method --- p.115 / Chapter 5.2.2.2 --- Synchronous updating method --- p.116 / Chapter 5.3 --- Computation load per complete weight update --- p.116 / Chapter 5.4 --- Convergence speed --- p.117 / Chapter 5.4.1 --- Trend of inverse of convergence speed --- p.117 / Chapter 5.4.2 --- Factors affecting the convergence speed --- p.117 / Chapter Chapter 6 --- Weight-grouping methods --- p.125 / Chapter 6.1 --- Introduction --- p.125 / Chapter 6.2 --- Training time and generalization performance of different weight-grouping methods --- p.125 / Chapter 6.2.1 --- Method of study --- p.125 / Chapter 6.2.2 --- Performance --- p.126 / Chapter 6.3 --- Degree of approximation of block-diagonal Hessian matrix with different weight- grouping methods --- p.128 / Chapter 6.3.1 --- Method of study --- p.128 / Chapter 6.3.2 --- Performance --- p.128 / Chapter Chapter 7 --- Discussion --- p.150 / Chapter 7.1 --- Advantages and disadvantages of using block-diagonal Hessian matrix --- p.150 / Chapter 7.1.1 --- Advantages --- p.150 / Chapter 7.1.2 --- Disadvantages --- p.151 / Chapter 7.2 --- Analysis of computation complexity --- p.151 / Chapter 7.2.1 --- Trend of computation complexity of each calculation --- p.154 / Chapter 7.2.2 --- Batch mode of updating --- p.155 / Chapter 7.2.3 --- Sequential mode of updating --- p.155 / Chapter 7.3 --- Analysis of storage complexity --- p.156 / Chapter 7.3.1 --- Trend of storage complexity of each set of variables --- p.157 / Chapter 7.3.2 --- Trend of overall storage complexity --- p.157 / Chapter 7.4 --- Parallel implementation --- p.158 / Chapter 7.5 --- Alternative implementation of weight change constraint --- p.158 / Chapter Chapter 8 --- Conclusions --- p.160 / References --- p.162
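The block-diagonal Hessian idea running through the table of contents above — update each group of weights by solving a small damped linear system instead of one large dense one — can be sketched generically. The code below is an illustrative reconstruction, not code from the thesis: the partition of the Jacobian into column blocks and the damping value `lam` (the thesis's λ) are assumptions.

```python
import numpy as np

def lm_block_step(jac_blocks, residual, lam):
    """One Levenberg-Marquardt step with a block-diagonal Hessian approximation.

    jac_blocks: column partitions of the full Jacobian (each with the same
    number of rows as the residual vector). Each block b independently solves
    (J_b^T J_b + lam * I) dw_b = -J_b^T r, rather than one large system
    over all weights at once.
    """
    updates = []
    for J in jac_blocks:
        H = J.T @ J + lam * np.eye(J.shape[1])  # damped block Hessian
        g = J.T @ residual                      # block gradient
        updates.append(np.linalg.solve(H, -g))
    return np.concatenate(updates)
```

As `lam` grows, the step shrinks toward a scaled gradient-descent step; as `lam` approaches zero, it approaches the block Gauss-Newton step.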
|
82 |
Estimation of factor scores in a three-level confirmatory factor analysis model. January 1998 (has links)
by Yuen Wai-ying. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 50-51). / Abstract also in Chinese. / Chapter Chapter 1 --- Introduction --- p.1 / Chapter Chapter 2 --- Estimation of Factor Scores in a Three-level Factor Analysis Model / Chapter 2.1 --- The Three-level Factor Analysis Model --- p.5 / Chapter 2.2 --- Estimation of Factor Scores in Between-group --- p.7 / Chapter 2.2.1 --- REG Method --- p.9 / Chapter 2.2.2 --- GLS Method --- p.11 / Chapter 2.3 --- Estimation of Factor Scores in Second Level Within-group --- p.13 / Chapter 2.3.1 --- REG Method --- p.15 / Chapter 2.3.2 --- GLS Method --- p.17 / Chapter 2.4 --- Estimation of Factor Scores in First Level Within-group / Chapter 2.4.1 --- First Approach --- p.19 / Chapter 2.4.2 --- Second Approach --- p.24 / Chapter 2.4.3 --- Comparison of the Two Approaches in Estimating Factor Scores in First Level Within-group --- p.31 / Chapter 2.5 --- Summary on the REG and GLS Methods --- p.35 / Chapter Chapter 3 --- Simulation Studies / Example1 --- p.37 / Example2 --- p.42 / Chapter Chapter 4 --- Conclusion and Discussion --- p.48 / References --- p.50 / Figures --- p.52
|
83 |
Virtual Training System for Diagnostic Ultrasound. Skehan, Daniel Patrick, 24 October 2011 (has links)
Ultrasound has become a widely used form of medical imaging because it is low-cost, safe, and portable. However, it is heavily dependent on the skill of the operator to capture quality images and properly detect abnormalities. Training is a key component of ultrasound, but the limited availability of training courses and programs presents a significant obstacle to the wider use of ultrasound systems. The goal of this work was to design and implement an interactive training system to help train and evaluate sonographers. This Virtual Training System for Diagnostic Ultrasound is an inexpensive, software-based training system in which the trainee scans a generic scan surface with a sham transducer containing position and orientation sensors. The observed ultrasound image is generated from a pre-stored 3D image volume and is controlled interactively by the user's movements of the sham transducer. The patient in the virtual environment, represented by the 3D image data, may depict normal anatomy, exhibit a specific trauma, or present a given physical condition. The training system provides a realistic scanning experience through an interactive real-time display with adjustable image parameters similar to those of an actual diagnostic ultrasound system. The system is designed with minimal hardware so that it remains low-cost and portable, and the software runs on a standard PC. To represent the patient to be scanned, a dedicated scan surface allows an optical sensor to track the position of the sham transducer, while the orientation of the sham transducer is tracked by an inexpensive inertial measurement unit whose readings are integrated into the system as quaternions. The lack of a physical manikin is overcome by displaying a virtual patient in the software, along with a virtual transducer that reflects the user's movements on the scan surface.
Pre-processing is performed on the selected 3D image volume to provide coordinate transformation parameters that yield a least-mean-square fit from the scan surface to the scanning region of the virtual patient. This thesis presents a prototype training system that accomplishes the main goals of being low-cost, portable, and accurate. The ultrasound training system can provide cost-effective and convenient training of physicians and sonographers, and has the potential to become a powerful tool for training sonographers to recognize a wide variety of medical conditions.
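The least-mean-square coordinate fit described above — mapping sham-transducer positions on the scan surface to coordinates in the stored 3D image volume — can be sketched as an ordinary least-squares affine fit over corresponding point pairs. The thesis does not specify its parameterization, so the affine form and the point data below are illustrative assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map dst ≈ src @ A + b.

    src, dst: (n, d) arrays of corresponding points (e.g. scan-surface
    coordinates and image-volume coordinates). Returns (A, b) minimizing
    the mean squared residual.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # append a bias column
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solve the LS problem
    return M[:-1], M[-1]                         # A is (d, d), b is (d,)
```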
|
84 |
Implementation of multiple comparison procedures in a generalized least squares program. Marasinghe, Mervyn G, January 2010 (has links)
Typescript, etc. / Digitized by Kansas Correctional Industries
|
85 |
Dynamic Machine Learning with Least Square Objectives. Gultekin, San, January 2019 (has links)
As of the writing of this thesis, machine learning has become one of the most active research fields, drawing interest from a variety of disciplines including computer science, statistics, engineering, and medicine. The main idea behind learning from data is that, when an analytical model explaining the observations is hard to find (often in contrast to models in physics, such as Newton's laws), a statistical approach can be taken in which one or more candidate models are tuned using data.
Since the early 2000s this challenge has grown in two ways: (i) the amount of collected data has seen massive growth due to the proliferation of digital media, and (ii) the data has become more complex. One example of the latter is high-dimensional datasets, which may correspond to dyadic interactions between two large groups (such as the customer and product information a retailer collects) or to high-resolution image and video recordings.
Another important issue is the study of dynamic data, which exhibits dependence on time. Virtually all datasets fall into this category, since all data collection is performed over time; however, I use the term dynamic to indicate a system with an explicit temporal dependence. A traditional example is target tracking from the signal processing literature. Here the position of a target is modeled using Newton's laws of motion, which relate it to time through the target's velocity and acceleration.
Dynamic data, as defined above, poses two important challenges. First, the learning setup differs from the standard theoretical setup, known as Probably Approximately Correct (PAC) learning. To derive PAC learning bounds, one assumes a collection of data points sampled independently and identically from the distribution generating the data; dynamic systems, by contrast, produce correlated outputs, and the learning systems we use should take this difference into account. Second, because the system is dynamic, it may be necessary to perform the learning online, in which case learning has to be done in a single pass. Typical applications include target tracking and electricity usage forecasting.
In this thesis I investigate several important dynamic and online learning problems, developing novel tools to address the shortcomings of previous solutions in the literature. The work is divided into three parts. The first part concerns matrix factorization for time series analysis and comprises two chapters. In the first chapter, matrix factorization is used within a Bayesian framework to model time-varying dyadic interactions, with examples in predicting user-movie ratings and stock prices. In the next chapter, a matrix factorization that uses autoregressive models to forecast future values of multivariate time series is proposed, with applications in predicting electricity usage and traffic conditions. Inspired by the machinery of the first part, the second part is about nonlinear Kalman filtering, in which a hidden state is estimated over time given observations. The nonlinearity of the system generating the observations is the main challenge here; a divergence minimization approach is used to unify seemingly unrelated methods in the literature and to propose new ones, with applications in target tracking and options pricing. The third and last part is about cost-sensitive learning, where a novel method for maximizing the area under the receiver operating characteristic curve is proposed. Our method has theoretical guarantees and favorable sample complexity; it is tested on a variety of benchmark datasets and also has applications in online advertising.
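The target-tracking setting invoked above — a position related to time through velocity under Newton's laws, estimated from noisy observations — is the textbook linear Kalman filter. A minimal constant-velocity sketch follows; the thesis itself treats the harder nonlinear case, and the noise levels here are illustrative assumptions.

```python
import numpy as np

def kalman_cv(zs, dt=1.0, q=1e-3, r=0.25):
    """Linear Kalman filter with constant-velocity state [position, velocity].

    zs: sequence of noisy position measurements.
    Returns the filtered position estimates.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])  # dynamics: pos += vel * dt
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    x = np.array([zs[0], 0.0])
    P = np.eye(2)
    out = []
    for z in zs:
        x = F @ x                          # predict state
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```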
|
86 |
Residual empirical processes for nearly unstable long-memory time series. / CUHK electronic theses & dissertations collection. January 2009 (has links)
The first part of this thesis considers the residual empirical process of a nearly unstable long-memory time series. Chan and Ling [8] showed that the usual limit distribution of the Kolmogorov-Smirnov test statistics does not hold when the characteristic polynomial of the unstable autoregressive model has a unit root. A key question of interest is what happens when this model has a near unit root, that is, when it is nearly non-stationary. In this thesis, it is established that the statistics proposed by Chan and Ling can be extended. The limit distribution is expressed as a functional of an Ornstein-Uhlenbeck process that is driven by a fractional Brownian motion. This result extends and generalizes Chan and Ling's results to a nearly non-stationary long-memory time series. / The second part of the thesis investigates the weak convergence of weighted sums of random variables that are functionals of moving average processes. A non-central limit theorem is established in which the Wiener integrals with respect to the Hermite processes appear as the limit. As an application of the non-central limit theorem, we examine the asymptotic theory of least squares estimators (LSE) for a nearly unstable AR(1) model when the innovation sequences are functionals of moving average processes. It is shown that the limit distribution of the LSE appears as functionals of the Ornstein-Uhlenbeck processes driven by Hermite processes. / Liu, Weiwei. / Adviser: Chan Ngai Hang. / Source: Dissertation Abstracts International, Volume: 73-01, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 60-67). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
/ Abstract also in Chinese.
|
87 |
Fully modified least squares estimation and vector autoregression of models with seasonally integrated processes. January 1997 (has links)
by Gilbert Chiu-sing Lui. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 112-117). / Chapter 1. --- Introduction --- p.1 / Chapter 2. --- Models and Assumptions --- p.4 / Chapter 3. --- Asymptotics of FM-SEA Estimators --- p.15 / Chapter 3.1. --- Model without Determinstic Trends --- p.15 / Chapter 3.2. --- Model with Determinstic Trends --- p.27 / Chapter 4. --- Asymptotics of FM-SEA Estimators of VAR System --- p.33 / Chapter 4.1. --- General Model --- p.33 / Chapter 4.2. --- Model with d = 4 --- p.44 / Chapter 5. --- Monte Carlo Experimental Results --- p.49 / Chapter 6. --- Conclusion --- p.54 / Chapter 7. --- Mathematical Appendix --- p.56 / Chapter 8. --- References --- p.112
|
88 |
Sensor network deployment as least squares problems. January 2011 (has links)
Xu, Yang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 99-104). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background of Sensors and Sensor Networks --- p.2 / Chapter 1.2 --- Introduction to Coverage Problems --- p.6 / Chapter 1.3 --- Literature Review --- p.8 / Chapter 1.3.1 --- Deterministic Deployment Methods --- p.9 / Chapter 1.3.2 --- Dynamic Deployment Methods --- p.10 / Chapter 1.4 --- A Brief Introduction to Least Squares Analysis --- p.13 / Chapter 1.5 --- Thesis Outline --- p.15 / Chapter 2 --- Mobile Sensor Network Deployment Problem --- p.18 / Chapter 2.1 --- Sensor Coverage Models --- p.18 / Chapter 2.1.1 --- Binary Sensor Models --- p.19 / Chapter 2.1.2 --- Attenuated and Truncated Attenuated Disk Models --- p.20 / Chapter 2.2 --- Problem Statement --- p.23 / Chapter 3 --- Coverage Optimization as Nonlinear Least Squares Problems --- p.26 / Chapter 3.1 --- Introduction --- p.26 / Chapter 3.2 --- Network Deployment as Least Squares Problems --- p.28 / Chapter 3.2.1 --- Assignment of Sample Points --- p.28 / Chapter 3.2.2 --- Least Squares Function --- p.30 / Chapter 3.2.3 --- Gauss-Newton Method --- p.33 / Chapter 3.2.4 --- Solutions --- p.36 / Chapter 3.3 --- Extension to Binary Sensor Models --- p.39 / Chapter 3.3.1 --- Restrictions of Subgradient Methods --- p.40 / Chapter 3.3.2 --- Sigmoid Functions --- p.42 / Chapter 3.4 --- Convergence and Multiple Minima Issues --- p.44 / Chapter 3.4.1 --- Convergence --- p.44 / Chapter 3.4.2 --- Multiple Minima --- p.48 / Chapter 3.5 --- Stopping Criteria --- p.52 / Chapter 3.6 --- Summary --- p.53 / Chapter 4 --- Experimental Results --- p.55 / Chapter 4.1 --- Introduction --- p.55 / Chapter 4.2 --- Numerical Examples --- p.56 / Chapter 4.2.1 --- Examples of Attenuated Disk Models --- p.57 / Chapter 4.2.2 --- Examples of Binary Sensor Models --- p.63 / Chapter 4.3 --- Performance Metrics of Mobile Sensor 
Deployment Schemes --- p.68 / Chapter 4.4 --- Comparison to Existing Methods --- p.74 / Chapter 4.5 --- Summary --- p.81 / Chapter 5 --- Conclusions --- p.83 / Chapter 5.1 --- Conclusions --- p.83 / Chapter 5.2 --- Future Research Directions --- p.85 / Appendices --- p.87 / Chapter A --- An Overview of Existing Deployment Methods --- p.88 / Chapter A.1 --- Potential Fields and Virtual Forces --- p.88 / Chapter A.2 --- Distributed Self-Spreading Algorithm --- p.92 / Chapter A.3 --- VD-Based Deployment Algorithm --- p.96 / Bibliography --- p.99
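Chapter 3 above casts deployment as a nonlinear least-squares problem solved with the Gauss-Newton method: linearize the residuals, solve the resulting linear least-squares problem, and iterate. A generic sketch follows, demonstrated on a small curve-fitting problem; the model and data are illustrative, not the thesis's coverage functions.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimize 0.5 * ||residual(x)||^2 by iterated linearization."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Solve the linearized least-squares problem J @ dx ≈ -r.
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return x
```

In practice a damping term or line search (as the thesis's multiple-λ methods suggest) is added to guard against overshooting on strongly nonlinear objectives.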
|
89 |
Least median squares algorithm for clusterwise linear regression. January 2009 (has links)
Fung, Chun Yip. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 53-54). / Abstract also in Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- The Exchange Algorithm Framework --- p.4 / Chapter 2.1 --- Ordinary Least Squares Linear Regression --- p.5 / Chapter 2.2 --- The Exchange Algorithm --- p.6 / Chapter 3 --- Methodology --- p.12 / Chapter 3.1 --- Least Median Squares Linear Regression --- p.12 / Chapter 3.2 --- Least Median Squares Algorithm for Clusterwise Linear Regression --- p.16 / Chapter 3.3 --- Measures of Performance --- p.20 / Chapter 3.4 --- An Illustrative Example --- p.24 / Chapter 4 --- Monte Carlo Simulation Study --- p.34 / Chapter 4.1 --- Simulation Plan --- p.34 / Chapter 4.2 --- Simulation Results --- p.41 / Chapter 4.2.1 --- Effects of the Six factors --- p.41 / Chapter 4.2.2 --- Comparisons between LMSA and the Exchange Algorithm --- p.47 / Chapter 4.2.3 --- Evaluation of the Improvement of Regression Parameters by Performing Stage 3 in LMSA --- p.50 / Chapter 5 --- Concluding Remarks --- p.51 / Bibliography --- p.52
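The least-median-of-squares regression that Chapter 3.1 above builds on is typically computed by random elemental subsets (Rousseeuw's approach): fit an exact line through randomly chosen point pairs and keep the line whose median squared residual is smallest. A minimal sketch for simple linear regression, with an illustrative trial count and synthetic data:

```python
import numpy as np

def lms_line(x, y, n_trials=500, seed=0):
    """Least median of squares fit of y ≈ a*x + b via random point pairs."""
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue  # vertical line; skip this pair
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)  # median squared residual
        if med < best_med:
            best, best_med = (a, b), med
    return best
```

Because the criterion is the median rather than the sum of squared residuals, the fit tolerates a large fraction of gross outliers that would dominate an ordinary least-squares line.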
|
90 |
Analysis of the United States Hop Market. Dasso, Michael W, 01 June 2015 (has links)
Hops are one of the four main ingredients used to produce beer. Many studies have analyzed the science behind growing and harvesting hops, creating hop hybrids, and brewing beer with hops; however, little research has addressed an economic supply and demand model of the hop market. The objectives of this study are to create an econometric model of the supply and demand of hops in the United States from 1981 to 2012, and to identify important exogenous variables that explain the supply and demand of hops using the two-stage least squares (2SLS) method. The demand model yielded that the US beer production variable is significant at the 10 percent level: for every 1 percent change in US beer production, there is a 6.25 percent change in the quantity of hops demanded in the same direction. The supply model showed that US acreage is significant at the 1 percent level: for every 1 percent change in US acreage, there is a 0.889 percent change in the quantity of hops supplied in the same direction. The implications of this study are viewed in relation to both producers and consumers.
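The two-stage least squares procedure described above can be sketched generically: first regress the endogenous regressor on the instrument, then regress the outcome on the fitted values. The simulated data below is illustrative, not the hop-market data, and the single-instrument setup is an assumption.

```python
import numpy as np

def tsls(y, x, z):
    """Two-stage least squares for one endogenous regressor x
    with one instrument z (intercepts included in both stages)."""
    Z = np.column_stack([np.ones_like(z), z])
    # Stage 1: project the endogenous regressor onto the instrument.
    g, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_hat = Z @ g
    # Stage 2: regress the outcome on the fitted values.
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # slope on the endogenous regressor
```

On simulated data with an endogenous regressor, ordinary least squares is biased by the correlation between the regressor and the error, while 2SLS recovers the true coefficient.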
|