  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Algorithms for Near-optimal Alignment Problems on Biosequences

Tseng, Kuo-Tsung 26 August 2008 (has links)
With the improvement of biological techniques, the amount of biosequence data, such as DNA, RNA, and protein sequences, is growing explosively. It is almost impossible to handle such a huge amount of data manually, so great computing power is essential. Common ways of processing biosequence data, such as finding identical biosequences, searching for similar biosequences, or mining biosequence signatures, all rest on the same underlying problem: biosequence alignment. In this dissertation, we study biosequence alignment problems with the aim of raising the biological meaning of optimal and near-optimal alignments, since biologists and computer scientists sometimes dispute the biological meaning of the mathematically optimal alignment obtained under a given scoring function.

We first study methods to improve the optimal alignment of two given biosequences. The optimal alignment is usually not unique, so there should be a best one among the optimal alignments; we try to extract it by defining additional criteria that judge the goodness of alignments when the traditional methods cannot decide which is better. Two algorithms are proposed for the newly defined problems: the smoothest optimal alignment problem and the most conserved optimal alignment problem. Other criteria are also discussed, since most of them can be handled in a similar way.

We then note that the most biologically meaningful alignment may not be the mathematically optimal one, since no scoring matrix is perfect. We therefore look for candidates among the near-optimal alignments, presenting a tracing marking function that generates all near-optimal alignments and filtering them with the "most conserved" criterion; we name this the near-optimal block alignment (NBA) problem. Finally, since existing scoring matrices are imperfect, we study how to choose a winner when multiple scoring matrices are applied, and define some reasonable schemes for deciding the winning alignment. In this dissertation we thus design and discuss algorithms for near-optimal alignment problems on biosequences; in the future, we would like to carry out experiments to support or reject these concepts.
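The dissertation's starting point is the standard score-based dynamic-programming formulation of pairwise alignment. As a hedged illustration of that foundation (not of the dissertation's own algorithms), the following minimal Python sketch computes a Needleman-Wunsch global alignment; the match/mismatch/gap scores are assumed values, and the traceback returns just one of the possibly many co-optimal alignments — exactly the non-uniqueness the dissertation's extra criteria are meant to break.

```python
# Minimal global-alignment (Needleman-Wunsch) sketch. Scoring values
# (match=1, mismatch=-1, gap=-2) are illustrative assumptions, not the
# scoring matrices studied in the dissertation.

def global_align(a, b, match=1, mismatch=-1, gap=-2):
    m, n = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # match/substitute
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    # Traceback recovers ONE optimal alignment; ties mean it is not unique.
    out_a, out_b = [], []
    i, j = m, n
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + s:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return dp[m][n], ''.join(reversed(out_a)), ''.join(reversed(out_b))

score, x, y = global_align("GATTACA", "GCATGCU")
print(score)
print(x)
print(y)
```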
2

Near-optimal designs for Gaussian Process regression models

Nguyen, Huong January 2018 (has links)
No description available.
3

Selection of Optimal Threshold and Near-Optimal Interval Using Profit Function and ROC Curve: A Risk Management Application

CHEN, JINGRU January 2011 (has links)
The ongoing financial crisis has had a major adverse impact on the credit market. As the crisis progresses, the skyrocketing unemployment rate puts more and more customers in a position where they cannot pay back their credit debts. The deteriorating economic environment and growing pressure for revenue generation have led creditors to re-assess their existing portfolios. Credit re-assessment aims to estimate customers' behavior accurately and distill information for credit decisions that differentiate bad customers from good ones. Lending institutions often need a specific rule for defining an optimal cut-off value that maximizes revenue and minimizes risk.

In this dissertation research, I consider a problem in the broad area of credit risk management: the selection of critical thresholds, which comprise the "optimal cut-off point" and an interval containing cut-off points near it (a "near-optimal interval"). These critical thresholds can be used in practice to adjust credit lines, to close accounts involuntarily, to re-price, and so on. Better credit re-assessment practices are essential for banks to prevent future loan losses and to restore the flow of credit to entrepreneurs and individuals.

The Profit Function is introduced to estimate the optimal cut-off and the near-optimal interval, which are used to manage credit risk in the financial industry. The credit scores of the good and bad populations are assumed to come from two distributions, with the same or different dispersion parameters. In the homoscedastic Normal-Normal model, a closed-form solution for the optimal cut-off and some of its properties are provided for the three possible shapes of the Profit Function. The same methodology generalizes to other distributions in the exponential family, including the heteroscedastic Normal-Normal Profit Function and the Gamma-Gamma Profit Function. It is shown that the Profit Function is a comprehensive tool for the selection of critical thresholds, and that its solution can be found using easily implemented computing algorithms.

The estimation of the near-optimal interval is developed for the three possible shapes of the bi-distributional Profit Function. The optimal cut-off has a closed-form formula, and the near-optimal interval estimates reduce to this closed form when the tolerance level is zero. Two nonparametric methods are introduced to estimate the critical thresholds when the latent risk score does not come from a known distribution: one uses kernel density estimation to derive a tabulated table from which the critical thresholds are estimated; the other is a ROC graphical method. In the theoretical portion of the dissertation, we use Taylor series and the delta method to develop the asymptotic distribution of the unconstrained optimal cut-off, and we use the kernel density estimator to derive the asymptotic variance of the Profit Function. / Statistics
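As a hedged illustration of the Profit Function idea, the sketch below sets up a bi-normal (homoscedastic Normal-Normal) profit model, computes the closed-form optimal cut-off where the prior- and payoff-weighted densities of the good and bad score populations balance, and reads off a near-optimal interval as the set of cut-offs whose expected profit stays within a tolerance of the maximum. All parameters (score means, common standard deviation, class priors, per-account profit and loss, and the 1% tolerance) are invented for illustration and are not the dissertation's figures.

```python
# Hedged sketch: threshold selection with a homoscedastic bi-normal
# Profit Function. All numeric values below are assumptions.
import numpy as np
from scipy.stats import norm

mu_g, mu_b, sigma = 680.0, 600.0, 50.0   # assumed good/bad score means, common sd
p_g, p_b = 0.9, 0.1                      # assumed class priors
profit, loss = 100.0, 2000.0             # assumed profit per good, loss per bad accepted

def expected_profit(c):
    # Accept applicants whose score exceeds the cut-off c.
    return p_g * profit * norm.sf(c, mu_g, sigma) - p_b * loss * norm.sf(c, mu_b, sigma)

# Closed-form optimal cut-off in the homoscedastic Normal-Normal case:
# setting f_G(c)/f_B(c) = (p_b*loss)/(p_g*profit) gives a linear equation in c.
k = np.log((p_b * loss) / (p_g * profit))
c_star = sigma**2 * k / (mu_g - mu_b) + (mu_g + mu_b) / 2
print("closed-form cut-off:", round(c_star, 2))

# Numerical check, plus a "near-optimal interval": all cut-offs whose
# expected profit is within 1% of the maximum (assumed tolerance).
grid = np.linspace(400, 800, 4001)
prof = expected_profit(grid)
best = prof.max()
near = grid[prof >= best * 0.99]
print("grid optimum:", round(grid[prof.argmax()], 2))
print("near-optimal interval: [%.1f, %.1f]" % (near.min(), near.max()))
```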
4

Power Efficient Last Level Cache for Chip Multiprocessors

Mandke, Aparna January 2013 (has links) (PDF)
The number of processor cores and the on-chip cache size have been increasing on chip multiprocessors (CMPs). As a result, the leakage power dissipated in the on-chip cache has become very significant. We explore various techniques to switch off over-allocated cache so as to reduce the leakage power it consumes. A large cache offers non-uniform access latency to the different cores on a CMP; such a cache is called a "Non-Uniform Cache Architecture (NUCA)". Past studies have explored leakage-reduction techniques for uniform-access-latency caches with a single application executing on a uniprocessor. Our power-optimized cache ideas apply to any memory technology and architecture in which the difference between the leakage power of an on-chip cache bank in its on-state and its off-state is significant.

Switching off the last-level shared cache on a CMP is a challenging problem because of concurrently executing threads/processes and the large, dispersed NUCA cache. Hence, to determine the cache requirement on a CMP, we first propose a new, highly accurate method to estimate the working set size of an application, which we call the "tagged working set size estimation (TWSS)" method. It has a negligible hardware storage overhead of 0.1% of the cache size. We demonstrate TWSS by using it to adaptively adjust cache associativity. Our adaptable associative cache is scalable with respect to the number of cores on a CMP: it uses information available locally within a tile on a tiled CMP and thus avoids network accesses, unlike other commonly used heuristics such as average memory access latency and cache miss ratio. Our implementation gives 25% and 19% higher EDP savings than the average-memory-access-latency and cache-miss-ratio heuristics, respectively, on a static NUCA (SNUCA) platform.

Cache misses increase as associativity is reduced. Hence, we also propose to map some of the L2 slices onto the remaining L2 slices and switch off the mapped slices; an L2 slice comprises all the L2 banks in a tile. We call this technique the "remap policy". Some applications execute with fewer threads than the available cores. In such applications, L2 slices that are far from those threads are switched off and remapped onto L2 slices located nearer to them. By using nearer L2 slices under the remap policy, some applications show improved execution time in addition to the reduction in NUCA leakage power. To estimate the maximum gains obtainable with the remap policy, we statically determine a near-optimal remap configuration using a genetic algorithm, formulating the task as an energy-delay product (EDP) minimization problem. Our dynamic remap policy implementation achieves energy-delay savings within an average of 5% of those obtained with the near-optimal remap configuration.

The energy-delay product can also be reduced by improving execution time, which depends mainly on the static and dynamic NUCA access policies (SNUCA and DNUCA). The suitability of a cache access policy depends on the data-sharing properties of a multi-threaded application. Hence, we propose three indices that quantify an application's data-sharing properties and use them to predict the more suitable cache access policy, SNUCA or DNUCA, for that application.
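The thesis finds its near-optimal remap configuration offline with a genetic algorithm. The Python sketch below shows a toy version of that search: one bit per L2 slice encodes on/off, and a genetic loop minimizes an energy-delay-product cost. The cost model here (constant leakage per powered-on slice and a congestion-style delay penalty that grows as slices are switched off, under which only the count of powered-on slices matters, not their placement) is an invented stand-in; the thesis derives its costs from architectural simulation.

```python
# Toy genetic-algorithm search for a remap configuration. N_SLICES,
# LEAKAGE_PER_SLICE, and REMAP_PENALTY are assumed values.
import random

N_SLICES = 16          # assumed number of L2 slices on the tiled CMP
LEAKAGE_PER_SLICE = 1.0
REMAP_PENALTY = 0.35   # assumed delay cost of remapped traffic

def edp_cost(config):
    # config[i] == 1 keeps slice i on; 0 switches it off and remaps its traffic.
    on = sum(config)
    if on == 0:
        return float("inf")                  # must keep some cache powered on
    energy = on * LEAKAGE_PER_SLICE          # leakage grows with slices kept on
    # Delay penalty grows superlinearly as traffic concentrates on fewer slices.
    delay = 1.0 + REMAP_PENALTY * ((N_SLICES / on) - 1) ** 2
    return energy * delay

def mutate(config, rate=0.1):
    # Flip each bit independently with probability `rate`.
    return [bit ^ (random.random() < rate) for bit in config]

def crossover(a, b):
    cut = random.randrange(1, N_SLICES)
    return a[:cut] + b[cut:]

def genetic_search(pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(N_SLICES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=edp_cost)
        elite = pop[: pop_size // 3]         # keep the fittest third
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=edp_cost)

best = genetic_search()
print("slices kept on:", sum(best), "of", N_SLICES, "- EDP:", round(edp_cost(best), 3))
```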
5

Quadratic Spline Approximation of the Newsvendor Problem Optimal Cost Function

Burton, Christina Marie 10 March 2012 (has links) (PDF)
We consider a single-product dynamic inventory problem where the demand distributions in each period are known, independent, and continuous (each has a density). We assume the lead time and the fixed ordering cost are zero and that there are no capacity constraints. There is a holding cost, and a backorder cost for unfulfilled demand, which is backlogged until it is filled by a later order. The problem may be nonstationary; in fact, our spline approximation of the optimal cost function is most advantageous when demand falls suddenly, in which case the myopic policy most often used in practice to compute the optimal inventory level would be very costly. Our algorithm uses quadratic splines to approximate the optimal cost function of this dynamic inventory problem and computes the optimal inventory level and optimal cost.
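As a hedged, single-period illustration of the objects involved (not the thesis's multi-period algorithm), the Python sketch below evaluates the newsvendor expected cost under normal demand and compares the exact critical-fractile minimizer with the minimizer of a quadratic (degree-2) spline fitted to the cost on a coarse knot grid. The demand parameters, cost rates, and knot placement are assumed.

```python
# Single-period newsvendor cost and a quadratic-spline approximation of it.
# mu, sigma, h, b, and the knot grid are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from scipy.interpolate import UnivariateSpline

mu, sigma = 100.0, 20.0   # assumed normal demand D ~ N(mu, sigma)
h, b = 1.0, 4.0           # holding cost / backorder cost per unit

def cost(y):
    # C(y) = h*E[(y-D)^+] + b*E[(D-y)^+], using the normal loss function.
    z = (y - mu) / sigma
    over = sigma * norm.pdf(z) + (y - mu) * norm.cdf(z)   # E[(y-D)^+]
    under = over - (y - mu)                                # E[(D-y)^+]
    return h * over + b * under

# Exact critical-fractile solution: F(y*) = b/(b+h).
y_star = norm.ppf(b / (b + h), mu, sigma)

# Interpolating quadratic (k=2) spline through the cost at coarse knots.
knots = np.linspace(mu - 4 * sigma, mu + 4 * sigma, 17)
spline = UnivariateSpline(knots, cost(knots), k=2, s=0)

# Minimize the spline instead of the exact cost function.
fine = np.linspace(knots[0], knots[-1], 2001)
y_spl = fine[np.argmin(spline(fine))]
print("exact y*: %.2f  spline y*: %.2f  cost gap: %.4f"
      % (y_star, y_spl, cost(y_spl) - cost(y_star)))
```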
