471.
Decision-theoretic estimation of parameter matrices in MANOVA and canonical correlations. January 1995.
by Lo Tai-yan, Milton. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 112-114). / Chapter 1 --- Preliminaries --- p.1 / Chapter 1.1 --- Introduction --- p.1 / Chapter 1.1.1 --- The Noncentral Multivariate F distribution --- p.2 / Chapter 1.1.2 --- The Central Problems and the Approach --- p.4 / Chapter 1.2 --- Concepts and Terminology --- p.7 / Chapter 1.3 --- Choice of Estimates --- p.10 / Chapter 1.4 --- Related Work --- p.11 / Chapter 2 --- Estimation of the noncentrality parameter of a Noncentral Multivariate F distribution --- p.19 / Chapter 2.1 --- Unbiased and Linear Estimators --- p.19 / Chapter 2.1.1 --- The unbiased estimate --- p.20 / Chapter 2.1.2 --- The Class of Linear Estimates --- p.24 / Chapter 2.2 --- Optimal Linear Estimate --- p.32 / Chapter 2.3 --- Nonlinear Estimate --- p.34 / Chapter 2.4 --- Monte Carlo Simulation Study --- p.41 / Chapter 2.5 --- Evaluation and Further Investigation --- p.42 / Chapter 3 --- Estimation of Canonical Correlation Coefficients --- p.73 / Chapter 3.1 --- Preliminary --- p.73 / Chapter 3.2 --- The Estimation Problem --- p.76 / Chapter 3.3 --- Orthogonally Invariant Estimates --- p.77 / Chapter 3.3.1 --- The Unbiased Estimate --- p.78 / Chapter 3.3.2 --- The Class of Linear Estimates --- p.78 / Chapter 3.3.3 --- The Class of Nonlinear Estimates --- p.80 / Chapter 3.4 --- Monte Carlo Simulation Study --- p.87 / Chapter 3.5 --- Evaluation and Further Investigation --- p.89 / Chapter A --- p.104 / Chapter A.1 --- Lemma 3.2 --- p.104 / Chapter A.2 --- Theorem 3.3, Leung (1992) --- p.105 / Chapter A.3 --- The Noncentral F Identity --- p.106 / Chapter B --- Bibliography --- p.111
472.
Data prefetching using a hardware register value prediction table. January 1996.
by Chin-Ming, Cheung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 95-97). / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Overview --- p.1 / Chapter 1.2 --- Objective --- p.3 / Chapter 1.3 --- Organization of the dissertation --- p.4 / Chapter 2 --- Related Works --- p.6 / Chapter 2.1 --- Previous Cache Works --- p.6 / Chapter 2.2 --- Data Prefetching Techniques --- p.7 / Chapter 2.2.1 --- Hardware vs. Software Assisted --- p.7 / Chapter 2.2.2 --- Non-selective vs. Highly Selective --- p.8 / Chapter 2.2.3 --- Summary on Previous Data Prefetching Schemes --- p.12 / Chapter 3 --- Program Data Mapping --- p.13 / Chapter 3.1 --- Regular and Irregular Data Access --- p.13 / Chapter 3.2 --- Propagation of Data Access Regularity --- p.16 / Chapter 3.2.1 --- Data Access Regularity in High Level Program --- p.17 / Chapter 3.2.2 --- Data Access Regularity in Machine Code --- p.18 / Chapter 3.2.3 --- Data Access Regularity in Memory Address Sequence --- p.20 / Chapter 3.2.4 --- Implication --- p.21 / Chapter 4 --- Register Value Prediction Table (RVPT) --- p.22 / Chapter 4.1 --- Predictability of Register Values --- p.23 / Chapter 4.2 --- Register Value Prediction Table --- p.26 / Chapter 4.3 --- Control Scheme of RVPT --- p.29 / Chapter 4.3.1 --- Details of RVPT Mechanism --- p.29 / Chapter 4.3.2 --- Explanation of the Register Prediction Mechanism --- p.32 / Chapter 4.4 --- Examples of RVPT --- p.35 / Chapter 4.4.1 --- Linear Array Example --- p.35 / Chapter 4.4.2 --- Linked List Example --- p.36 / Chapter 5 --- Program Register Dependency --- p.39 / Chapter 5.1 --- Register Dependency --- p.40 / Chapter 5.2 --- Generalized Concept of Register --- p.44 / Chapter 5.2.1 --- Cyclic Dependent Register (CDR) --- p.44 / Chapter 5.2.2 --- Acyclic Dependent Register (ADR) --- p.46 / Chapter 5.3 --- Program Register Overview --- p.47 / Chapter 6 --- Generalized RVPT Model --- p.49 / Chapter 6.1 --- Level N RVPT Model --- p.49 / Chapter 6.1.1 --- Identification of Level N CDR --- p.51 / Chapter 6.1.2 --- Recording CDR instructions of Level N CDR --- p.53 / Chapter 6.1.3 --- Prediction of Level N CDR --- p.55 / Chapter 6.2 --- Level 2 Register Value Prediction Table --- p.55 / Chapter 6.2.1 --- Level 2 RVPT Structure --- p.56 / Chapter 6.2.2 --- Identification of Level 2 CDR --- p.58 / Chapter 6.2.3 --- Control Scheme of Level 2 RVPT --- p.59 / Chapter 6.2.4 --- Example of Index Array --- p.63 / Chapter 7 --- Performance Evaluation --- p.66 / Chapter 7.1 --- Evaluation Methodology --- p.66 / Chapter 7.1.1 --- Trace-Driven Simulation --- p.66 / Chapter 7.1.2 --- Architectural Method --- p.68 / Chapter 7.1.3 --- Benchmarks and Metrics --- p.70 / Chapter 7.2 --- General Result --- p.75 / Chapter 7.2.1 --- Constant Stride or Regular Data Access Applications --- p.77 / Chapter 7.2.2 --- Non-constant Stride or Irregular Data Access Applications --- p.79 / Chapter 7.3 --- Effect of Design Variations --- p.80 / Chapter 7.3.1 --- Effect of Cache Size --- p.81 / Chapter 7.3.2 --- Effect of Block Size --- p.83 / Chapter 7.3.3 --- Effect of Set Associativity --- p.86 / Chapter 7.4 --- Summary --- p.87 / Chapter 8 --- Conclusion and Future Research --- p.88 / Chapter 8.1 --- Conclusion --- p.88 / Chapter 8.2 --- Future Research --- p.90 / Bibliography --- p.95 / Appendix --- p.98 / Chapter A --- MCPI vs. cache size --- p.98 / Chapter B --- MCPI Reduction Percentage vs. cache size --- p.102 / Chapter C --- MCPI vs. block size --- p.106 / Chapter D --- MCPI Reduction Percentage vs. block size --- p.110 / Chapter E --- MCPI vs. set-associativity --- p.114 / Chapter F --- MCPI Reduction Percentage vs. set-associativity --- p.118
473.
Estimation of the precision matrix in the inverse Wishart distribution. January 1999.
Leung Kit Ying. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 86-88). / Abstracts in English and Chinese. / Declaration --- p.i / Acknowledgement --- p.ii / Chapter 1 --- INTRODUCTION --- p.1 / Chapter 2 --- IMPROVED ESTIMATION OF THE NORMAL PRECISION MATRIX USING THE L1 AND L2 LOSS FUNCTIONS --- p.7 / Chapter 2.1 --- Previous Work --- p.9 / Chapter 2.2 --- Important Lemmas --- p.13 / Chapter 2.3 --- Improved Estimation of Σ^-1 under L1 Loss Function --- p.20 / Chapter 2.4 --- Improved Estimation of Σ^-1 under L2 Loss Function --- p.26 / Chapter 2.5 --- Simulation Study --- p.31 / Chapter 2.6 --- Comparison with Krishnamoorthy and Gupta's result --- p.38 / Chapter 3 --- IMPROVED ESTIMATION OF THE NORMAL PRECISION MATRIX USING THE L3 AND L4 LOSS FUNCTIONS --- p.43 / Chapter 3.1 --- Justification of the Loss Functions --- p.46 / Chapter 3.2 --- Important Lemmas for Calculating Risks --- p.48 / Chapter 3.3 --- Improved Estimation of Σ^-1 under L3 Loss Function --- p.55 / Chapter 3.4 --- Improved Estimation of Σ^-1 under L4 Loss Function --- p.62 / Chapter 3.5 --- Simulation Study --- p.69 / Appendix --- p.77 / Reference --- p.35
474.
Replacement and placement policies for prefetched lines. January 1998.
by Sze Siu Ching. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 119-122). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Overlapping Computations with Memory Accesses --- p.3 / Chapter 1.2 --- Cache Line Replacement Policies --- p.4 / Chapter 1.3 --- The Rest of This Paper --- p.4 / Chapter 2 --- A Brief Review of IAP Scheme --- p.6 / Chapter 2.1 --- Embedded Hints for Next Data References --- p.6 / Chapter 2.2 --- Instruction Opcode and Addressing Mode Prefetching --- p.8 / Chapter 2.3 --- Chapter Summary --- p.9 / Chapter 3 --- Motivation --- p.11 / Chapter 3.1 --- Chapter Summary --- p.14 / Chapter 4 --- Related Work --- p.15 / Chapter 4.1 --- Existing Replacement Algorithms --- p.16 / Chapter 4.2 --- Placement Policies for Cache Lines --- p.18 / Chapter 4.3 --- Chapter Summary --- p.20 / Chapter 5 --- Replacement and Placement Policies of Prefetched Lines --- p.21 / Chapter 5.1 --- IZ Cache Line Replacement Policy in IAP Scheme --- p.22 / Chapter 5.1.1 --- The Instant Zero Scheme --- p.23 / Chapter 5.2 --- Priority Pre-Updating and Victim Cache --- p.27 / Chapter 5.2.1 --- Priority Pre-Updating --- p.27 / Chapter 5.2.2 --- Priority Pre-Updating for Cache --- p.28 / Chapter 5.2.3 --- Victim Cache for Unreferenced Prefetch Lines --- p.28 / Chapter 5.3 --- Prefetch Cache for IAP Lines --- p.31 / Chapter 5.4 --- Chapter Summary --- p.33 / Chapter 6 --- Performance Evaluation --- p.34 / Chapter 6.1 --- Methodology and Metrics --- p.34 / Chapter 6.1.1 --- Trace-Driven Simulation --- p.35 / Chapter 6.1.2 --- Caching Models --- p.36 / Chapter 6.1.3 --- Simulation Models and Performance Metrics --- p.39 / Chapter 6.2 --- Simulation Results --- p.43 / Chapter 6.2.1 --- General Results --- p.44 / Chapter 6.3 --- Simulation Results of IZ Replacement Policy --- p.49 / Chapter 6.3.1 --- Analysis of IZ Cache Line Replacement Policy --- p.50 / Chapter 6.4 --- Simulation Results for Priority Pre-Updating with Victim Cache --- p.52 / Chapter 6.4.1 --- PPUVC in Cache with IAP Scheme --- p.52 / Chapter 6.4.2 --- PPUVC in Prefetch-on-Miss Cache --- p.54 / Chapter 6.5 --- Prefetch Cache --- p.57 / Chapter 6.6 --- Chapter Summary --- p.63 / Chapter 7 --- Architecture Without LOAD-AND-STORE Instructions --- p.64 / Chapter 8 --- Conclusion --- p.66 / Chapter A --- CPI Due to Cache Misses --- p.68 / Chapter A.1 --- Varying Cache Size --- p.68 / Chapter A.1.1 --- Instant Zero Replacement Policy --- p.68 / Chapter A.1.2 --- Priority Pre-Updating with Victim Cache --- p.70 / Chapter A.1.3 --- Prefetch Cache --- p.73 / Chapter A.2 --- Varying Cache Line Size --- p.75 / Chapter A.2.1 --- Instant Zero Replacement Policy --- p.75 / Chapter A.2.2 --- Priority Pre-Updating with Victim Cache --- p.77 / Chapter A.2.3 --- Prefetch Cache --- p.80 / Chapter A.3 --- Varying Cache Set Associativity --- p.82 / Chapter A.3.1 --- Instant Zero Replacement Policy --- p.82 / Chapter A.3.2 --- Priority Pre-Updating with Victim Cache --- p.84 / Chapter A.3.3 --- Prefetch Cache --- p.87 / Chapter B --- Simulation Results of IZ Replacement Policy --- p.89 / Chapter B.1 --- Memory Delay Time Reduction --- p.89 / Chapter B.1.1 --- Varying Cache Size --- p.89 / Chapter B.1.2 --- Varying Cache Line Size --- p.91 / Chapter B.1.3 --- Varying Cache Set Associativity --- p.93 / Chapter C --- Simulation Results of Priority Pre-Updating with Victim Cache --- p.95 / Chapter C.1 --- PPUVC in IAP Scheme --- p.95 / Chapter C.1.1 --- Memory Delay Time Reduction --- p.95 / Chapter C.2 --- PPUVC in Cache with Prefetch-On-Miss Only --- p.101 / Chapter C.2.1 --- Memory Delay Time Reduction --- p.101 / Chapter D --- Simulation Results of Prefetch Cache --- p.107 / Chapter D.1 --- Memory Delay Time Reduction --- p.107 / Chapter D.1.1 --- Varying Cache Size --- p.107 / Chapter D.1.2 --- Varying Cache Line Size --- p.109 / Chapter D.1.3 --- Varying Cache Set Associativity --- p.111 / Chapter D.2 --- Results of the Three Replacement Policies --- p.113 / Chapter D.2.1 --- Varying Cache Size --- p.113 / Chapter D.2.2 --- Varying Cache Line Size --- p.115 / Chapter D.2.3 --- Varying Cache Set Associativity --- p.117 / Bibliography --- p.119
475.
The stability of host-pathogen multi-strain models. Hawkins, Susan. January 2017.
Previous multi-strain mathematical models have shown that the degree of cross-protective response between similar strains, acting as a form of immune selection, generates different behavioural states of the pathogen population. This thesis explores these multi-strain dynamic states to examine their robustness and stability in the face of intrinsic pathogen phenotypic variation and the extrinsic force of immune selection. This is achieved in two main ways: Chapter 2 introduces phenotypic variation in pathogen transmissibility, testing the robustness of a stable pathogen population to the emergence of an introduced strain of higher transmission potential; and Chapter 3 introduces a new model with the possibility of immunity to both strain-specific and cross-strain (conserved) determinants, to investigate how heterogeneity in the specificity of a host immune response alters the pathogen population structure. A final investigation in Chapter 4 develops a method of reverse-pattern-oriented modelling using a machine learning algorithm to determine which intrinsic properties of the pathogen, and which combinations of them, lead to particular disease-like population patterns. This research offers novel techniques to complement previous and ongoing work on multi-strain modelling, with direct applications to a range of infectious agents such as Plasmodium falciparum, influenza A, and rotavirus, but also with wider potential for other multi-strain systems.
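The cross-protection mechanism described in this abstract can be sketched with a minimal two-strain transmission model in the spirit of standard status-based multi-strain SIR frameworks. This is an illustrative toy, not the thesis's actual model, and every parameter value below is hypothetical:

```python
import numpy as np

def two_strain_dynamics(beta=40.0, gamma=0.5, mu=0.02, sigma=10.0,
                        dt=1e-3, steps=200_000):
    """Euler-integrate a minimal two-strain model with partial
    cross-protection gamma (0 = none, 1 = complete).

    z[i]: fraction ever infected by strain i (strain-specific immunity)
    w   : fraction ever infected by either strain (cross-reactive immunity)
    y[i]: fraction currently infectious with strain i
    """
    z = np.array([0.10, 0.12])          # slightly asymmetric start
    w = 0.15
    y = np.array([1e-3, 1e-3])
    for _ in range(steps):
        lam = beta * y                  # per-strain force of infection
        dz = lam * (1.0 - z) - mu * z
        dw = lam.sum() * (1.0 - w) - mu * w
        # hosts with only cross-reactive immunity transmit at reduced rate
        dy = lam * ((1.0 - w) + (1.0 - gamma) * (w - z)) - sigma * y
        z += dt * dz
        w += dt * dw
        y += dt * dy
    return z, w, y
```

With `gamma = 0` the strains decouple into independent SIR-like systems; raising `gamma` toward 1 strengthens the immune selection that, in models of this family, moves the pathogen population between the distinct dynamic states the thesis examines.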
476.
Stochastic processes in random environment. Ortgiese, Marcel. January 2009.
We are interested in two probabilistic models of a process interacting with a random environment. Firstly, we consider the model of directed polymers in random environment. In this case, a polymer, represented as the path of a simple random walk on a lattice, interacts with an environment given by a collection of time-dependent random variables associated to the vertices. Under certain conditions, the system undergoes a phase transition from an entropy-dominated regime at high temperatures, to a localised regime at low temperatures. Our main result shows that at high temperatures, even though a central limit theorem holds, we can identify a set of paths constituting a vanishing fraction of all paths that supports the free energy. We compare the situation to a mean-field model defined on a regular tree, where we can also describe the situation at the critical temperature. Secondly, we consider the parabolic Anderson model, which is the Cauchy problem for the heat equation with a random potential. Our setting is continuous in time and discrete in space, and we focus on time-constant, independent and identically distributed potentials with polynomial tails at infinity. We are concerned with the long-term temporal dynamics of this system. Our main result is that the periods, in which the profile of the solutions remains nearly constant, are increasing linearly over time, a phenomenon known as ageing. We describe this phenomenon in the weak sense, by looking at the asymptotic probability of a change in a given time window, and in the strong sense, by identifying the almost sure upper envelope for the process of the time remaining until the next change of profile. We also prove functional scaling limit theorems for profile and growth rate of the solution of the parabolic Anderson model.
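The localisation behaviour described for the parabolic Anderson model can be illustrated numerically. The sketch below is a naive Euler discretisation of du/dt = Δu + ξu on a finite box with a Pareto-tailed potential; the box size, time step, and tail exponent are arbitrary choices, and the thesis's results concern infinite-lattice asymptotics rather than this toy:

```python
import numpy as np

def pam_profile(n=200, t_end=5.0, dt=1e-4, alpha=2.0, seed=1):
    """Euler scheme for the parabolic Anderson model
    du/dt = Laplacian(u) + xi*u on {0,...,n-1} with zero boundary,
    i.i.d. heavy-tailed potential xi, and a localised start.
    Returns the l1-normalised profile at time t_end."""
    rng = np.random.default_rng(seed)
    xi = rng.pareto(alpha, size=n)          # heavy-tailed potential
    u = np.zeros(n)
    u[n // 2] = 1.0                         # localised initial condition
    for _ in range(int(t_end / dt)):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
        lap[0] -= u[-1]                     # remove wrap-around terms so the
        lap[-1] -= u[0]                     # boundary is Dirichlet, not periodic
        u = u + dt * (lap + xi * u)
        u /= u.sum()                        # renormalise to avoid overflow
    return u, xi
```

Because the equation is linear, renormalising after every step leaves the shape of the profile unchanged while preventing overflow from the exponential growth of the total mass; it is changes of this profile's shape over time that the ageing results concern.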
477.
Boundary conditions in Abelian sandpiles. Gamlin, Samuel. January 2016.
The focus of this thesis is to investigate the impact of the boundary conditions on configurations in the Abelian sandpile model. We present two main results. Firstly we give a family of continuous, measure preserving, almost one-to-one mappings from the wired spanning forest to recurrent sandpiles. In the special case of $Z^d$, $d \geq 2$, we show how these bijections yield a power law upper bound on the rate of convergence to the sandpile measure along any exhaustion of $Z^d$. Secondly we consider the Abelian sandpile on ladder graphs. For the ladder sandpile measure, $\nu$, a recurrent configuration on the boundary, $I$, and a cylinder event, $E$, we provide an upper bound for $\nu(E|I) - \nu(E)$.
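For readers unfamiliar with the model, the toppling dynamics behind these results can be sketched in a few lines. This is the standard Abelian sandpile on a finite grid with open (dissipative) boundary, purely for illustration; the thesis's ladder-graph and spanning-forest constructions are not reproduced here:

```python
import numpy as np

def stabilize(cfg):
    """Topple an Abelian sandpile configuration on an n x m grid with
    open boundary until every site holds fewer than 4 grains. Returns
    the stable configuration and the odometer (topplings per site)."""
    cfg = cfg.copy()
    odometer = np.zeros_like(cfg)
    while True:
        unstable = cfg >= 4
        if not unstable.any():
            return cfg, odometer
        t = unstable.astype(cfg.dtype)      # topple all unstable sites at once
        odometer += t
        cfg -= 4 * t
        # each toppling sends one grain to each lattice neighbour;
        # grains pushed past the boundary are lost (open boundary)
        cfg[1:, :] += t[:-1, :]
        cfg[:-1, :] += t[1:, :]
        cfg[:, 1:] += t[:, :-1]
        cfg[:, :-1] += t[:, 1:]
```

The abelian property guarantees that the stable configuration and the odometer are independent of the order in which unstable sites are processed, which is what makes the parallel sweep above legitimate and what makes "the" stabilisation well defined.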
478.
A Markov Random Field Based Approach to 3D Mosaicing and Registration Applied to Ultrasound Simulation. Kutarnia, Jason Francis. 27 August 2014.
"
A novel Markov Random Field (MRF) based method for the mosaicing of 3D ultrasound volumes is presented in this dissertation. The motivation for this work is the production of training volumes for an affordable ultrasound simulator, which offers a low-cost, portable training solution for new users of diagnostic ultrasound by providing the scanning experience essential for developing the necessary psycho-motor skills. It also has the potential to introduce ultrasound instruction into medical education curricula. The interest in ultrasound training stems in part from the widespread adoption in the medical community of point-of-care scanners, i.e. low-cost, portable ultrasound scanning systems.
This work develops a novel approach for producing 3D composite image volumes and validates the approach using clinically acquired fetal images from the obstetrics department at the University of Massachusetts Medical School (UMMS). Results using the Visible Human Female dataset as well as an abdominal trauma phantom are also presented. The process is broken down into five distinct steps, which include individual 3D volume acquisition, rigid registration, calculation of a mosaicing function, group-wise non-rigid registration, and finally blending. Each of these steps, common in medical image processing, has been investigated in the context of ultrasound mosaicing and has resulted in improved algorithms. Rigid and non-rigid registration methods are analyzed in a probabilistic framework and their sensitivity to ultrasound shadowing artifacts is studied.
The group-wise non-rigid registration problem is initially formulated as a maximum likelihood estimation, where the joint probability density function comprises the partially overlapping ultrasound image volumes. This expression is simplified using a block-matching methodology and the resulting discrete registration energy is shown to be equivalent to a Markov Random Field. Graph-based methods common in computer vision are then used for optimization, resulting in a set of transformations that bring the overlapping volumes into alignment. This optimization is parallelized using a fusion approach, where the registration problem is divided into 8 independent sub-problems whose solutions are fused together at the end of each iteration. This method provided a speedup factor of 3.91 over the single threaded approach with no noticeable reduction in accuracy during our simulations. Furthermore, the registration problem is simplified by introducing a mosaicing function, which partitions the composite volume into regions filled with data from unique partially overlapping source volumes. This mosaicing function attempts to minimize intensity and gradient differences between adjacent sources in the composite volume.
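As a concrete illustration of the block-matching step, a one-dimensional toy version is sketched below. The dissertation works with 3D volumes and embeds the per-block scores as data terms in an MRF energy; the exhaustive search and SSD score in this fragment are an assumed simplification, not the dissertation's implementation:

```python
import numpy as np

def block_match(fixed, moving, radius=2):
    """Exhaustive 1-D block matching: for each candidate integer
    displacement d in [-radius, radius], score the sum of squared
    differences between fixed[i] and moving[i - d] over the valid
    overlap, and return the displacement minimising that score."""
    best_d, best_cost = 0, np.inf
    n = len(fixed)
    for d in range(-radius, radius + 1):
        lo, hi = max(0, d), min(n, n + d)   # indices where both signals exist
        diff = fixed[lo:hi] - moving[lo - d:hi - d]
        cost = float(np.mean(diff ** 2))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

In an MRF formulation these per-displacement costs become the unary terms, and a smoothness penalty between neighbouring blocks' displacements becomes the pairwise term that the graph-based optimiser minimises.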
Experimental results to demonstrate the performance of the group-wise registration algorithm are also presented. This algorithm is initially tested on deformed abdominal image volumes generated using a finite element model of the Visible Human Female to show the accuracy of its calculated displacement fields. In addition, the algorithm is evaluated using real ultrasound data from an abdominal phantom. Finally, composite obstetrics image volumes are constructed using clinical scans of pregnant subjects, where fetal movement makes registration/mosaicing especially difficult.
Our solution to blending, which is the final step of the mosaicing process, is also discussed. The trainee will have a better experience if the volume boundaries are visually seamless, and this usually requires some blending prior to stitching. Also, regions of the volume where no data was collected during scanning should have an ultrasound-like appearance before being displayed in the simulator. This ensures the trainee's visual experience isn't degraded by unrealistic images. A discrete Poisson approach has been adapted to accomplish these tasks. Following this, we will describe how a 4D fetal heart image volume can be constructed from swept 2D ultrasound. A 4D probe, such as the Philips X6-1 xMATRIX Array, would make this task simpler, as it can acquire 3D ultrasound volumes of the fetal heart in real time; however, such probes aren't widespread yet.
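The discrete Poisson idea can be illustrated in one dimension: reproduce the gradients of the pasted source inside the region while clamping the boundary samples to the target, so the seam carries no intensity jump. This 1-D sketch is a miniature of the 3D volume blending described above, with all names and the dense solver chosen for clarity rather than fidelity:

```python
import numpy as np

def poisson_blend_1d(target, source, i0, i1):
    """Paste source[i0:i1] into target by solving the 1-D discrete
    Poisson equation: the interior solution matches the *Laplacian*
    (hence the gradients) of the source, with Dirichlet boundary
    values taken from the target at i0-1 and i1."""
    out = target.astype(float).copy()
    g = np.asarray(source, dtype=float)
    m = i1 - i0                                 # number of unknown samples
    # Laplacian system A f = b with A = tridiag(-1, 2, -1)
    A = (np.diag(2.0 * np.ones(m))
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1))
    b = 2.0 * g[i0:i1] - g[i0 - 1:i1 - 1] - g[i0 + 1:i1 + 1]  # -Laplacian of source
    b[0] += out[i0 - 1]                         # clamp to target at the seams
    b[-1] += out[i1]
    out[i0:i1] = np.linalg.solve(A, b)
    return out
```

When the source differs from the target only by a constant offset, the blend reproduces the target exactly; that is the sense in which the seam disappears while the source's internal structure is preserved.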
Once the theory has been introduced, we will describe the clinical component of this dissertation. For the purpose of acquiring actual clinical ultrasound data, from which training datasets were produced, 11 pregnant subjects were scanned by experienced sonographers at the UMMS following an approved IRB protocol. First, we will discuss the software/hardware configuration that was used to conduct these scans, which included some custom mechanical design. With the data collected using this arrangement we generated seamless 3D fetal mosaics, that is, the training datasets, loaded them into our ultrasound training simulator, and then had them evaluated by the sonographers at the UMMS for accuracy. These mosaics were constructed from the raw scan data using the techniques previously introduced. Specific training objectives were established based on the input from our collaborators in the obstetrics sonography group. Important fetal measurements are reviewed, which form the basis for training in obstetrics ultrasound. Finally, clinical images demonstrating the sonographer making fetal measurements in practice, which were acquired directly by the Philips iU22 ultrasound machine from one of our 11 subjects, are compared with screenshots of corresponding images produced by our simulator.
479.
A Study of the Delta-Normal Method of Measuring VaR. Kondapaneni, Rajesh. 09 May 2005.
This thesis describes the Delta-Normal method of computing Value-at-Risk (VaR). The advantages and disadvantages of the Delta-Normal method relative to the Historical Simulation and Monte Carlo methods of computing VaR are discussed. The Delta-Normal method is then compared with the Historical Simulation method using an implementation for a portfolio of ten stocks over 400 time intervals. Based on the distribution of the portfolio risk factors, Delta-Normal is suitable when the distribution is normal, while Historical Simulation is better suited when the distribution is non-normal.
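A minimal sketch of the Delta-Normal computation, assuming a portfolio whose P&L is linear in asset returns; the function name and parameters are illustrative, not taken from the thesis:

```python
import numpy as np
from statistics import NormalDist

def delta_normal_var(positions, returns, alpha=0.99, horizon=1.0):
    """Delta-Normal VaR: assume portfolio P&L is linear in the risk
    factors (the 'delta' part) and the factor returns are jointly
    normal, so the loss quantile is z_alpha * sqrt(w' Sigma w),
    scaled by sqrt(horizon) for a multi-period horizon.

    positions: currency exposure to each asset, shape (n,)
    returns:   historical return matrix, shape (T, n)
    """
    w = np.asarray(positions, dtype=float)
    rets = np.asarray(returns, dtype=float)
    cov = np.atleast_2d(np.cov(rets, rowvar=False))   # factor covariance Sigma
    sigma_p = float(np.sqrt(w @ cov @ w))             # portfolio P&L std-dev
    z = NormalDist().inv_cdf(alpha)                   # standard normal quantile
    return z * sigma_p * np.sqrt(horizon)
```

Historical Simulation, by contrast, would sort the empirical portfolio P&L over the observed intervals and read the quantile off directly, with no normality assumption, which is why the two methods diverge when the risk-factor distribution has fat tails.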
480.
Designing energy-efficient computing systems using equalization and machine learning. Takhirov, Zafar. 20 February 2018.
As technology scaling slows down in the nanometer CMOS regime and mobile computing becomes more ubiquitous, designing energy-efficient hardware for mobile systems is becoming increasingly critical and challenging. Although various approaches like near-threshold computing (NTC), aggressive voltage scaling with shadow latches, etc. have been proposed to get the most out of limited battery life, there is still no “silver bullet” for meeting the increasing power-performance demands of mobile systems. Moreover, given that a mobile system could operate in a variety of environmental conditions, like different temperatures, and have varying performance requirements, there is a growing need for designing tunable/reconfigurable systems in order to achieve energy-efficient operation. In this work we propose to address the energy-efficiency problem of mobile systems using two different approaches: circuit tunability and distributed adaptive algorithms.
Inspired by communication systems, we developed feedback-equalization-based digital logic that changes the threshold of its gates based on the input pattern. We showed that feedback equalization in static complementary CMOS logic enabled up to 20% reduction in energy dissipation while maintaining the performance metrics. We also achieved a 30% reduction in energy dissipation for pass-transistor digital logic (PTL) with equalization while maintaining performance. In addition, we proposed a mechanism that leverages feedback equalization techniques to achieve near-optimal operation of static complementary CMOS logic blocks over the entire voltage range from near-threshold supply voltage to nominal supply voltage. Using energy-delay product (EDP) as a metric, we analyzed the use of the feedback equalizer as part of various sequential computational blocks. Our analysis shows that for near-threshold voltage operation with equalization, we can improve the operating frequency by up to 30% while the energy increase is less than 15%, for an overall EDP reduction of ≈10%. We also observe an EDP reduction of close to 5% across the entire above-threshold voltage range.
On the distributed adaptive algorithm front, we explored energy-efficient hardware implementation of machine learning algorithms. We proposed an adaptive classifier that leverages the wide variability in data complexity to enable energy-efficient data classification operations for mobile systems. Our approach takes advantage of varying classification hardness across data to dynamically allocate resources and improve energy efficiency. On average, our adaptive classifier is ≈100× more energy efficient but has ≈1% higher error rate than a complex radial basis function classifier, and is ≈10× less energy efficient but has ≈40% lower error rate than a simple linear classifier, across a wide range of classification data sets. We also developed a field of groves (FoG) implementation of random forests (RF) that achieves an accuracy comparable to Convolutional Neural Networks (CNN) and Support Vector Machines (SVM) under tight energy budgets. The FoG architecture takes advantage of the fact that in random forests a small portion of the weak classifiers (decision trees) might be sufficient to achieve high statistical performance. By dividing the random forest into smaller forests (groves), and conditionally executing the rest of the forest, FoG is able to achieve much higher energy efficiency levels for comparable error rates. We also take advantage of the distributed nature of the FoG to achieve a high level of parallelism. Our evaluation shows that at maximum achievable accuracies, FoG consumes ≈1.48×, ≈24×, ≈2.5×, and ≈34.7× lower energy per classification compared to conventional RF, SVM-RBF, Multi-Layer Perceptron Network (MLP), and CNN, respectively. FoG is 6.5× less energy efficient than SVM-LR, but achieves 18% higher accuracy on average across all considered datasets.
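The conditional-execution idea behind FoG can be sketched in software. The grove size, stopping rule, and tree representation below are all assumptions made for illustration; they are not the paper's hardware design:

```python
import numpy as np

def grove_predict(groves, x, margin=0.6):
    """Field-of-Groves style conditional evaluation (sketch): the forest
    is split into small groves, groves are evaluated in order, and voting
    stops as soon as the running majority is decisive, so easy inputs
    touch only a fraction of the trees.

    groves: list of groves, each a list of trees; a tree is any callable
            mapping an input to a class label in {0, 1}.
    Returns (predicted_label, number_of_trees_evaluated)."""
    votes = np.zeros(2)               # binary-classification vote counts
    evaluated = 0
    for grove in groves:
        for tree in grove:
            votes[tree(x)] += 1.0
            evaluated += 1
        # stop early once the vote margin is decisive
        if abs(votes[1] - votes[0]) / votes.sum() >= margin:
            break
    return int(votes.argmax()), evaluated
```

Easy inputs terminate after the first grove, touching only a fraction of the trees, while ambiguous inputs fall through to the full forest; this mirrors how FoG spends energy only where the extra weak classifiers actually change the decision.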