About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
431

Synthesizing a Hybrid Benchmark Suite with BenchPrime

Wu, Xiaolong 09 October 2018
This paper presents BenchPrime, an automated benchmark analysis toolset that is systematic and extensible in analyzing the similarity and diversity of benchmark suites. BenchPrime takes multiple benchmark suites and their evaluation metrics as inputs and generates a hybrid benchmark suite comprising only essential applications. Unlike prior work, BenchPrime uses linear discriminant analysis rather than principal component analysis, and it selects the best clustering algorithm and an optimized number of clusters in an automated, metric-tailored way, thereby achieving high accuracy. In addition, BenchPrime ranks the benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to the others. As a case study, this work compares DenBench with MediaBench and MiBench for the first time, using four different metrics to provide a multi-dimensional understanding of the benchmark suites. For each metric, BenchPrime measures to what degree DenBench applications are irreplaceable by those in MediaBench and MiBench. This provides a means for identifying an essential subset from the three benchmark suites without compromising the application balance of the full set. The experimental results show that the necessity of including DenBench applications varies across the target metrics and that significant redundancy exists among the three benchmark suites. / Master of Science / Representative benchmarks are widely used in research to achieve an accurate and fair evaluation of hardware and software techniques. However, redundant applications in a benchmark set can skew the average towards their shared characteristics, overestimating the benefit of a proposed technique. This work proposes BenchPrime, a machine learning-based framework that generates a hybrid benchmark suite comprising only essential applications. In addition, BenchPrime ranks the benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to the others.
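
To make the clustering step concrete, here is a minimal sketch of a BenchPrime-style analysis, with scikit-learn standing in for whatever tooling the thesis used: project benchmark feature vectors with supervised LDA (rather than unsupervised PCA), then choose the cluster count that maximizes the silhouette score. All function and variable names are illustrative assumptions, not BenchPrime's actual API.

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def select_clusters(features, suite_labels, max_k=10):
        # LDA uses the suite labels to find a discriminative projection,
        # unlike unsupervised PCA.
        projected = LinearDiscriminantAnalysis().fit_transform(features, suite_labels)
        best_k, best_score = 2, -1.0
        for k in range(2, max_k + 1):
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(projected)
            score = silhouette_score(projected, labels)
            if score > best_score:
                best_k, best_score = k, score
        return best_k, best_score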
432

On the Effect of Numerical Noise in Simulation-Based Optimization

Vugrin, Kay E. 10 April 2003
Numerical noise is a prevalent concern in many practical optimization problems. Convergence of gradient-based optimization algorithms in the presence of numerical noise is not always assured. One way to improve optimization algorithm performance in the presence of numerical noise is to adjust the method of gradient computation. This study investigates the use of Continuous Sensitivity Equation (CSE) gradient approximations in the context of numerical noise and optimization. Three problems are considered: a problem with a system of ODE constraints, a single-parameter flow problem constrained by the Navier-Stokes equations, and a multiple-parameter flow problem constrained by the Navier-Stokes equations. All three problems use adaptive methods in the simulation of the constraint and are numerically noisy. Gradients for each problem are computed with both CSE and finite difference methods. The gradients are analyzed and compared. The two flow problems are optimized with a trust region optimization algorithm using both sets of gradient calculations. Optimization results are also compared, and the CSE gradient approximation yields impressive results for these examples. / Master of Science
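
A small numerical experiment, not taken from the thesis, illustrates why finite-difference gradients degrade under noise: for a forward difference with step h and noise amplitude eps, the noise contributes an error of order eps/h that overtakes the order-h truncation error as h shrinks.

    import numpy as np

    rng = np.random.default_rng(0)
    eps = 1e-6  # amplitude of simulated numerical noise

    def noisy_f(x):
        # smooth objective plus a small noise term, mimicking the effect
        # of adaptive solvers on objective evaluations
        return x**2 + eps * rng.standard_normal()

    x0, exact = 1.0, 2.0  # d/dx x^2 = 2x at x = 1
    for h in [1e-1, 1e-3, 1e-5, 1e-7]:
        fd = (noisy_f(x0 + h) - noisy_f(x0)) / h
        print(f"h={h:.0e}  fd={fd:+.4f}  error={abs(fd - exact):.2e}")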
433

Alternative Methodology To Household Activity Matching In TRANSIMS

Paradkar, Rajan 04 February 2002
TRANSIMS (Transportation Analysis and Simulation System), developed at the Los Alamos National Laboratory, is an integrated system of travel forecasting models designed to give transportation planners accurate and complete information on traffic impacts, congestion, and pollution. TRANSIMS is a micro-simulation model that uses census data to generate a synthetic population and assigns activities, drawn from activity survey data, to each person of every household in the synthetic population. The synthetic households generated from the census data are matched with the survey households based on their demographic characteristics, and the activities of the survey household members are then assigned to the members of the matched synthetic households. The CART algorithm is used to match the households: a classification tree is built for the activity survey households from selected dependent and independent variables in the demographic data. The TRANSIMS model uses the times spent executing activities as the dependent variables for building the classification tree. This research compares the TRANSIMS approach of using activity times as dependent variables with an alternative that uses the travel times for trips between activities as dependent variables, i.e., matching survey households to synthetic households on travel time patterns rather than activity time patterns. The underlying assumption is that people with similar demographic characteristics tend to have similar travel time patterns, so travel times can drive the matching. The algorithm of the Activity Generator module, with the original set of dependent variables, was first used to generate a base case scenario. Further tests were carried out using an alternative set of dependent variables in the algorithm. A sensitivity analysis was also carried out to test the effect of different sets of dependent variables on the activities generated by the Activity Generator. The thesis also includes detailed documentation of the results from all the tests. / Master of Science
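
As a hedged sketch of the matching idea (not the Activity Generator's actual code), the following uses scikit-learn's CART implementation: fit a regression tree on the survey households with demographics as independent variables and travel times as dependent variables, then assign each synthetic household a survey household from the same leaf. All names here are illustrative.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)

    def match_households(survey_demo, survey_travel, synthetic_demo):
        # Demographics are the independent variables; travel times for
        # trips between activities are the dependent variables.
        tree = DecisionTreeRegressor(min_samples_leaf=5).fit(survey_demo, survey_travel)
        survey_leaves = tree.apply(survey_demo)     # leaf id of each survey household
        synth_leaves = tree.apply(synthetic_demo)   # leaf id of each synthetic household
        # Draw a survey household from the leaf each synthetic household lands in.
        return np.array([rng.choice(np.flatnonzero(survey_leaves == leaf))
                         for leaf in synth_leaves])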
434

An Adaptive Time Window Algorithm for Large Scale Network Emulation

Kodukula, Surya Ravikiran 07 February 2002
With the continuing growth of the Internet and network protocols, there is a need for protocol development environments. Simulation environments like ns and OPNET require protocol code to be rewritten in a discrete event model. Direct Code Execution Environments (DCEE) solve the verification and validation problems by supporting the execution of unmodified protocol code in a controlled environment. The Open Network Emulator (ONE) is a system supporting direct code execution in a parallel environment, allowing unmodified protocol code to run on top of a parallel simulation layer capable of simulating complex network topologies. Traditional approaches to the problem of Parallel Discrete Event Simulation (PDES) broadly fall into two categories. Conservative approaches process an event only after it has been asserted that handling it cannot cause a causality error. Optimistic approaches allow causality errors and support means of restoring state, i.e., rollback. All standard approaches to PDES either are flawed by their assumptions about event patterns in the system or cannot be applied to ONE because their analysis is restricted to simplified models like queues and Petri nets. The Adaptive Time Window algorithm is a bounded optimistic parallel simulation algorithm that can change its degree of optimism with changes in the degree of causality in the network. The optimism at any instant is bounded by an amount of virtual time called the time window. The algorithm assumes efficient rollback capabilities supported by the 'Weaves' framework. The algorithm is reactive and responds to changes in the degree of causality in the system by adjusting the length of its time window: with sufficient history gathered, it responds to increasing causality with a small time window (the conservative approach) and grows the window (the optimistic approach) during idle periods. The problem of splitting the entire simulation run into time windows of arbitrary length such that the total number of rollbacks in the system is minimal is NP-complete. The Adaptive Time Window algorithm is compared against offline greedy approaches to this NP-complete problem, called Oracle Computations. Both the total number of rollbacks in the system and the total execution time for the Adaptive Time Window algorithm were comparable to those for the Oracle Computations. / Master of Science
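
The window control might look like the following schematic (my reading of the description above, not the ONE implementation; the thresholds and scaling factors are invented for illustration): shrink the window when rollbacks are frequent, grow it when causality violations are rare.

    class AdaptiveTimeWindow:
        def __init__(self, initial=100.0, lo=1.0, hi=10_000.0):
            self.window = initial       # current bound on optimism, in virtual time
            self.lo, self.hi = lo, hi

        def update(self, rollbacks, events):
            rate = rollbacks / max(events, 1)
            if rate > 0.05:             # high causality: act conservatively
                self.window = max(self.lo, self.window * 0.5)
            elif rate < 0.01:           # mostly independent events: be optimistic
                self.window = min(self.hi, self.window * 2.0)
            return self.window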
435

The AlgoViz Project: Building an Algorithm Visualization Web Community

Alon, Alexander Joel Dacara 13 September 2010
Algorithm visualizations (AVs) have become a popular teaching aid in classes on algorithms and data structures. The AlgoViz Project attempts to provide an online venue for educators, students, developers, researchers, and other AV users. The Project comprises two websites. The first, the AlgoViz Portal, provides two major informational resources: an AV catalog offering both descriptive and evaluative metadata for indexed visualizations, and an annotated bibliography of research literature. Both resources have over 500 entries and are actively updated by the AV community. The Portal also provides field reports, discussion forums, and other community-building mechanisms. The second website, OpenAlgoViz, is a SourceForge site intended to showcase exemplary AVs, as well as to provide logistical and hosting support to AV developers. / Master of Science
436

Partitioning Methods and Algorithms for Configurable Computing Machines

Chandrasekhar, Suresh 18 August 1998
This thesis addresses the partitioning problem for configurable computing machines. Specifically, this thesis presents algorithms to partition chain-structured task graphs across configurable computing machines. The algorithms give optimal solutions for throughput and total execution time for these problems under constraints on area, pin count, and power consumption. The algorithms provide flexibility for applying these constraints while remaining polynomial in complexity. Proofs of correctness as well as an analysis of runtime complexity are given. Experiments are performed to illustrate the runtime of these algorithms. / Master of Science
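
The thesis's algorithms are not reproduced here, but a standard dynamic program for chain-structured task graphs illustrates the flavor under one assumed objective: minimize the bottleneck segment time over contiguous segments, subject to a per-segment area budget (pin-count and power constraints would be handled the same way). It runs in O(n^2) time.

    def partition_chain(task_time, task_area, area_budget):
        # best[i] = minimal bottleneck time for tasks [0, i);
        # stays infinite if the budget makes the prefix infeasible.
        n = len(task_time)
        INF = float("inf")
        best, cut = [INF] * (n + 1), [0] * (n + 1)
        best[0] = 0.0
        for i in range(1, n + 1):
            seg_time = seg_area = 0.0
            for j in range(i, 0, -1):          # last segment is tasks [j-1, i)
                seg_time += task_time[j - 1]
                seg_area += task_area[j - 1]
                if seg_area > area_budget:
                    break
                bottleneck = max(best[j - 1], seg_time)
                if bottleneck < best[i]:
                    best[i], cut[i] = bottleneck, j - 1
        segments, i = [], n                    # recover the cut points
        while i > 0:
            segments.append((cut[i], i))
            i = cut[i]
        return best[n], segments[::-1]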
437

Thermal Characterization of Complex Aerospace Structures

Hanuska, Alexander Robert Jr. 24 April 1998
Predicting the performance of complex structures exposed to harsh thermal environments is a crucial issue in many of today's aerospace and space designs. To predict the thermal stresses a structure might be exposed to, the thermal properties of the individual materials used in the design of the structure need to be known. Therefore, a noninvasive estimation procedure involving genetic algorithms was developed to determine the various thermal properties needed to adequately model the Outer Wing Subcomponent (OWS), a structure located at the trailing edge of the High Speed Civil Transport (HSCT) wing tip. Due to the nature of the nonlinear least-squares estimation method used in this study, both theoretical and experimental temperature histories were required. Several one-dimensional and two-dimensional finite element models of the OWS were developed to compute the transient theoretical temperature histories. The experimental data were obtained from optimized experiments run at various surrounding temperature settings to investigate the temperature dependence of the estimated properties. An experimental optimization was performed to provide the most accurate estimates and to reduce the confidence intervals. The simultaneous estimation of eight thermal properties, comprising the volumetric heat capacities and out-of-plane thermal conductivities of the facesheets, the honeycomb, the skins, and the torque tubes, was successfully completed with the one-dimensional model, and the results were used to estimate the remaining in-plane thermal conductivities of the facesheets, the honeycomb, the skins, and the torque tubes with the two-dimensional model. Although experimental optimization did not eliminate all correlation between the parameters, the minimization procedure based on the genetic algorithm performed extremely well, despite the high degree of correlation and the low sensitivity of many of the parameters. / Master of Science
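
A toy genetic-algorithm least-squares loop in the spirit of the estimation procedure is sketched below; the actual study estimates eight coupled properties through finite element models, so simulate() here is a stand-in and every constant is illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    def estimate(t_measured, simulate, bounds, pop=50, gens=100):
        # Maximize the negative sum of squared temperature residuals.
        def fitness(p):
            return -np.sum((simulate(p) - t_measured) ** 2)
        lo, hi = np.array(bounds, dtype=float).T
        population = rng.uniform(lo, hi, size=(pop, len(lo)))
        for _ in range(gens):
            scores = np.array([fitness(p) for p in population])
            parents = population[np.argsort(scores)[-pop // 2:]]   # truncation selection
            children = parents[rng.integers(0, len(parents), pop - len(parents))]
            children += rng.normal(0.0, 0.01 * (hi - lo), children.shape)  # mutation
            population = np.vstack([parents, np.clip(children, lo, hi)])
        scores = np.array([fitness(p) for p in population])
        return population[np.argmax(scores)]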
438

Knowledge-Discovery Incorporated Evolutionary Search for Microcalcification Detection in Breast Cancer Diagnosis

Peng, Yonghong, Yao, Bin, Jiang, Jianmin January 2006
Objectives: The presence of microcalcifications (MCs), clusters of tiny calcium deposits that appear as small bright spots in a mammogram, has been considered a very important indicator for breast cancer diagnosis. Much research has been devoted to developing computer-aided systems for the accurate identification of MCs; however, automatic computer-based detection of MCs has proven difficult because of the complicated nature of the surrounding breast tissue and the variation of MCs in shape, orientation, brightness, and size. Methods and materials: This paper presents a new approach for the effective detection of MCs that incorporates a knowledge-discovery mechanism in the genetic algorithm (GA). In the proposed approach, called the knowledge-discovery incorporated genetic algorithm (KD-GA), the genetic algorithm searches for bright spots in the mammogram, and an integrated knowledge-discovery mechanism improves the performance of the GA. The knowledge-discovery mechanism evaluates the possibility of a bright spot being a true MC and adaptively adjusts the associated fitness values. The adjustment of fitness indirectly guides the GA to extract the true MCs and eliminate the false MCs (FMCs). Results and conclusions: The experimental results demonstrate that incorporating a knowledge-discovery mechanism into the genetic algorithm eliminates FMCs and produces improved performance compared with conventional GA methods. Furthermore, the experimental results show that the proposed KD-GA method provides a promising and generic approach for the development of computer-aided diagnosis for breast cancer.
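
The fitness adjustment might be as simple as the following sketch (my paraphrase of the description above, not the authors' code); plausibility() stands in for the paper's knowledge-discovery rules.

    def adjusted_fitness(raw_fitness, candidate, plausibility):
        # plausibility() returns a score in [0, 1] estimating how likely
        # the bright spot is a true MC (e.g., from shape, size, contrast);
        # low-plausibility spots are gradually driven out of the population.
        return raw_fitness * plausibility(candidate)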
439

Cross-Platform Cloth Simulation API for Games

Tang, W., Sagi, A.S., Green, D., Wan, Tao Ruan 04 June 2016
Physics simulation is an active research topic in games, because without realistic physics even the most beautiful game feels static and lifeless. Although cloth simulation is not new in computer graphics, highly detailed cloth is rare in video games and mostly coarse, because the complex, nonlinear physical properties of cloth require substantial computing power to simulate in real time. This paper presents a robust and scalable real-time cloth simulation framework that explores a variety of modern simulation techniques to produce realistic cloth simulation for games. The framework integrates with the OpenCL GPGPU library to leverage parallelism. The final result is an API for game development with enriched interactive environments.
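
For readers unfamiliar with real-time cloth, a minimal mass-spring step of the kind such frameworks build on is sketched below (illustrative only; the paper's framework targets the OpenCL GPGPU pipeline and more sophisticated models): Verlet integration followed by a distance-constraint relaxation pass.

    import numpy as np

    def cloth_step(pos, prev_pos, springs, rest_len, dt, iters=4):
        # Verlet integration: x' = 2x - x_prev + a * dt^2
        gravity = np.array([0.0, -9.81, 0.0])
        new_pos = 2.0 * pos - prev_pos + gravity * dt * dt
        # Relaxation: nudge each spring's endpoints toward its rest length.
        for _ in range(iters):
            for (i, j), r in zip(springs, rest_len):
                delta = new_pos[j] - new_pos[i]
                dist = np.linalg.norm(delta) + 1e-9
                corr = 0.5 * (dist - r) / dist * delta
                new_pos[i] += corr
                new_pos[j] -= corr
        return new_pos, pos   # (current, previous) for the next call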
440

Approximation Algorithms for Rectangle Piercing Problems

Mahmood, Abdullah-Al January 2005
Piercing problems arise often in facility location, which is a well-studied area of computational geometry. The general form of the piercing problem discussed in this dissertation asks for the minimum number of facilities for a set of given rectangular demand regions such that each region has at least one facility located within it. It has been shown that the problem is NP-hard even if all regions are uniformly sized squares. Therefore we concentrate on approximation algorithms for the problem. As the known approximation ratio for arbitrarily sized rectangles is poor, we restrict our effort to designing approximation algorithms for unit-height rectangles. Our ε-approximation scheme requires n^O(1/ε²) time. We also consider the problem with restrictions such as bounding the depth of a point and the width of the rectangles; the approximation schemes for these two cases take n^O(1/ε) time. We also show how to maintain a factor-2 approximation of the piercing set in O(log n) amortized time in an insertion-only scenario.
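
The one-dimensional building block behind such algorithms is greedy interval piercing, which is optimal in one dimension: sort intervals by right endpoint and stab each unpierced interval at its right end. Because a unit-height rectangle crosses one or two horizontal lines y = k (k an integer), running the greedy on the x-intervals of the rectangles along each such line gives a simple factor-2 approximation. The sketch below is that static greedy, not the dissertation's O(log n) dynamic structure.

    def pierce_intervals(intervals):
        # intervals: iterable of (left, right) pairs; returns piercing points.
        points, last = [], float("-inf")
        for left, right in sorted(intervals, key=lambda iv: iv[1]):
            if left > last:        # not covered by the last chosen point
                last = right
                points.append(last)
        return points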
