  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
431

Estimating Reachability Set Sizes in Dynamic Graphs

Aji, Sudarshan Mandayam 01 July 2014 (has links)
Graphs are a commonly used abstraction for diverse kinds of interactions, e.g., on Twitter and Facebook. Topological properties of such graphs are computed to gain insight into their structure. Computing properties of large real-world networks is computationally very challenging. Further, most real-world networks are dynamic, i.e., they change over time. Therefore, there is a need for efficient dynamic algorithms that offer good space-time trade-offs. In this thesis we study the problem of computing the reachability set size of a vertex, a fundamental problem with applications in databases and social networks. We develop the first Giraph-based algorithms for different dynamic versions of these problems, which scale to graphs with millions of edges. / Master of Science
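The static version of the quantity studied above can be sketched with a plain breadth-first search. This is only a baseline illustration, not the thesis's Giraph-based dynamic algorithm, which maintains the quantity under edge updates rather than recomputing it:

```python
from collections import deque

def reachability_set_size(adj, source):
    """Size of the set of vertices reachable from `source`, via BFS.

    `adj` maps each vertex to an iterable of out-neighbors.  A dynamic
    algorithm would maintain this quantity under edge insertions and
    deletions instead of recomputing from scratch as done here.
    """
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen)

# Toy directed graph: 0 -> 1 -> 2, vertex 3 isolated
adj = {0: [1], 1: [2], 2: [], 3: []}
print(reachability_set_size(adj, 0))  # 3 (vertices 0, 1, 2)
print(reachability_set_size(adj, 3))  # 1 (only vertex 3)
```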
432

A Gillespie-Type Algorithm for Particle Based Stochastic Model on Lattice

Liu, Weigang January 2019 (has links)
In this thesis, I propose a general stochastic simulation algorithm for particle-based lattice models using the concepts of Gillespie's stochastic simulation algorithm, which was originally designed for well-stirred systems. I describe the details of this method and analyze its complexity in comparison with the StochSim algorithm, another simulation algorithm originally proposed for stochastic lattice models. I compare the performance of both algorithms on two examples: the May-Leonard model and the Ziff-Gulari-Barshad model. Comparison of the simulation results from both algorithms validates our claim that the new algorithm is comparable to StochSim in simulation accuracy. I also compare the efficiency of both algorithms using the CPU cost of each code and conclude that the new algorithm is as efficient as StochSim in most test cases, while performing even better in certain specific cases. / Computer simulation has been developed for almost a century. A stochastic lattice model, which follows the physics concept of a lattice, is a system in which individual entities live on a grid and exhibit random behaviors according to specific rules; such models are mainly studied using computer simulations. The most widely used simulation method for stochastic lattice systems is the StochSim algorithm, which randomly picks an entity and then determines its behavior based on a set of specific random rules. Our goal is to develop new simulation methods that make it more convenient to simulate and analyze stochastic lattice systems. In this thesis I propose another type of simulation method for stochastic lattice models using different concepts and procedures. I developed a simulation package, applied it to two examples using both methods, and then conducted a series of numerical experiments to compare their performance. I conclude that the two methods are roughly equivalent and that the new method performs better than the old one in certain special cases.
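For readers unfamiliar with Gillespie's stochastic simulation algorithm, here is a minimal sketch of the original well-stirred "direct method" for a single decay reaction A → ∅. This is an illustrative example only; the thesis extends these ideas to lattice models with many sites and reaction channels:

```python
import math
import random

def gillespie_decay(n0, k, t_end, rng=random.Random(42)):
    """Gillespie direct method for the decay reaction A -> 0.

    With n molecules present, the total propensity is a = k * n.
    Each step draws an exponential waiting time with rate a and
    fires the (single) reaction.  Returns the trajectory as a
    list of (time, molecule_count) pairs.
    """
    t, n = 0.0, n0
    traj = [(t, n)]
    while n > 0:
        a = k * n                               # total propensity
        t += -math.log(1.0 - rng.random()) / a  # exponential waiting time
        if t > t_end:
            break
        n -= 1                                  # fire A -> 0
        traj.append((t, n))
    return traj

traj = gillespie_decay(n0=100, k=1.0, t_end=5.0)
print(traj[0], traj[-1])  # starts at (0.0, 100); count is non-increasing
```

With multiple reaction channels, the direct method additionally draws a second random number to pick which channel fires, weighted by the channels' propensities.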
433

Synthesizing a Hybrid Benchmark Suite with BenchPrime

Wu, Xiaolong 09 October 2018 (has links)
This paper presents BenchPrime, an automated benchmark analysis toolset that is systematic and extensible for analyzing the similarity and diversity of benchmark suites. BenchPrime takes multiple benchmark suites and their evaluation metrics as inputs and generates a hybrid benchmark suite comprising only essential applications. Unlike prior work, BenchPrime uses linear discriminant analysis rather than principal component analysis, and selects the best clustering algorithm and the optimized number of clusters in an automated, metric-tailored way, thereby achieving high accuracy. In addition, BenchPrime ranks the benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to the other suites. As a case study, this work for the first time compares DenBench with MediaBench and MiBench using four different metrics to provide a multi-dimensional understanding of the benchmark suites. For each metric, BenchPrime measures to what degree DenBench applications are irreplaceable by those in MediaBench and MiBench. This provides a means for identifying an essential subset from the three benchmark suites without compromising the application balance of the full set. The experimental results show that the necessity of including DenBench applications varies across the target metrics and that significant redundancy exists among the three benchmark suites. / Master of Science / Representative benchmarks are widely used in research to achieve an accurate and fair evaluation of hardware and software techniques. However, redundant applications in a benchmark set can skew the average toward redundant characteristics, overestimating the benefit of any proposed research. This work proposes a machine learning-based framework, BenchPrime, that generates a hybrid benchmark suite comprising only essential applications. In addition, BenchPrime ranks the benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to the other suites.
434

On the Effect of Numerical Noise in Simulation-Based Optimization

Vugrin, Kay E. 10 April 2003 (has links)
Numerical noise is a prevalent concern in many practical optimization problems. Convergence of gradient-based optimization algorithms in the presence of numerical noise is not always assured. One way to improve optimization algorithm performance in the presence of numerical noise is to adjust the method of gradient computation. This study investigates the use of Continuous Sensitivity Equation (CSE) gradient approximations in the context of numerical noise and optimization. Three problems are considered: a problem with a system of ODE constraints, a single-parameter flow problem constrained by the Navier-Stokes equations, and a multiple-parameter flow problem constrained by the Navier-Stokes equations. All three problems use adaptive methods in the simulation of the constraint and are numerically noisy. Gradients for each problem are computed with both CSE and finite difference methods, and the resulting gradients are analyzed and compared. The two flow problems are optimized with a trust region optimization algorithm using both sets of gradient calculations. Optimization results are also compared, and the CSE gradient approximation yields impressive results for these examples. / Master of Science
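A toy illustration of why noise matters for gradient computation (the objective and noise model below are invented for illustration, not taken from the thesis): a central finite-difference estimate divides a function-value difference by the step size, so shrinking the step amplifies whatever noise sits in the function values.

```python
import random

def noisy_f(x, noise=1e-6, rng=random.Random(0)):
    """A smooth objective (x^2) plus a small uniform perturbation,
    standing in for the 'numerical noise' of an adaptive simulation."""
    return x * x + noise * (2.0 * rng.random() - 1.0)

def fd_gradient(f, x, h):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# The true derivative of x^2 at x = 1 is 2.  The noise contributes an
# error of order noise/h, so very small steps make the estimate worse.
for h in (1e-1, 1e-3, 1e-8):
    print(h, fd_gradient(noisy_f, 1.0, h))
```

With h = 1e-8 the noise term (about noise/h = 100) swamps the estimate entirely, which is exactly the failure mode that motivates noise-robust gradients such as the CSE approach.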
435

Alternative Methodology To Household Activity Matching In TRANSIMS

Paradkar, Rajan 04 February 2002 (has links)
TRANSIMS (Transportation Analysis and Simulation System), developed at the Los Alamos National Laboratory, is an integrated system of travel forecasting models designed to give transportation planners accurate and complete information on traffic impacts, congestion, and pollution. TRANSIMS is a micro-simulation model that uses census data to generate a synthetic population and assigns activities, drawn from activity survey data, to each person of every household in the synthetic population. The synthetic households generated from the census data are matched with the survey households based on their demographic characteristics, and the activities of the survey household members are then assigned to the members of the matched synthetic households. The CART algorithm is used to match the households: a classification tree is built for the activity survey households based on dependent and independent variables from the demographic data. The TRANSIMS model uses the times spent executing activities as the dependent variables for building the classification tree. This research compares the TRANSIMS approach with an alternative that uses the travel times for trips between activities as the dependent variables, i.e., matching survey households to synthetic households on travel time patterns instead of activity time patterns. The underlying assumption is that people with similar demographic characteristics tend to have similar travel time patterns. The algorithm of the Activity Generator module, with the original set of dependent variables, was first used to generate a base case scenario, and further tests were carried out using an alternative set of dependent variables.
A sensitivity analysis was also carried out to test the effect of different sets of dependent variables on the activities generated by the Activity Generator. The thesis also includes detailed documentation of the results from all the tests. / Master of Science
436

An Adaptive Time Window Algorithm for Large Scale Network Emulation

Kodukula, Surya Ravikiran 07 February 2002 (has links)
With the continuing growth of the Internet and network protocols, there is a need for protocol development environments. Simulation environments like ns and OPNET require protocol code to be rewritten in a discrete event model. Direct Code Execution Environments (DCEE) solve the verification and validation problems by supporting the execution of unmodified protocol code in a controlled environment. The Open Network Emulator (ONE) is a system supporting direct code execution in a parallel environment, allowing unmodified protocol code to run on top of a parallel simulation layer capable of simulating complex network topologies. Traditional approaches to the problem of Parallel Discrete Event Simulation (PDES) broadly fall into two categories. Conservative approaches allow processing of an event only after it has been asserted that handling the event will not cause a causality error. Optimistic approaches allow causality errors and support a means of restoring state, i.e., rollback. Standard approaches to PDES are either flawed by their assumptions about event patterns in the system or cannot be applied to ONE because their analysis is restricted to simplified models like queues and Petri nets. The Adaptive Time Window algorithm is a bounded optimistic parallel simulation algorithm with the capability to change its degree of optimism with changes in the degree of causality in the network. The optimism at any instant is bounded by an amount of virtual time called the time window. The algorithm assumes efficient rollback capabilities supported by the 'Weaves' framework. The algorithm is reactive: it responds to changes in the degree of causality in the system by adjusting the length of its time window. With sufficient history gathered, the algorithm responds to increasing causality in the system with a small time window (conservative behavior) and increases the window to a higher value (optimistic behavior) during idle periods.
The problem of splitting the entire simulation run into time windows of arbitrary length, whereby the total number of rollbacks in the system is minimal, is NP-complete. The Adaptive Time Window algorithm is compared against offline greedy approaches to the NP-complete problem called Oracle Computations. The total number of rollbacks in the system and the total execution time for the Adaptive Time Window algorithm were comparable to the ones for Oracle Computations. / Master of Science
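The reactive window adjustment described above can be caricatured with a simple multiplicative controller. The constants, threshold, and update rule below are illustrative assumptions, not the thesis's actual policy:

```python
def adapt_window(window, rollbacks, threshold=5,
                 shrink=0.5, grow=1.5, w_min=1.0, w_max=1024.0):
    """Toy controller for a bounded-optimism time window.

    If the last interval saw many rollbacks (high causality), shrink
    the window toward conservative execution; otherwise grow it to
    exploit idle periods.  All constants are illustrative.
    """
    if rollbacks > threshold:
        window *= shrink
    else:
        window *= grow
    return min(max(window, w_min), w_max)

w = 64.0
for observed in (12, 9, 0, 0, 1):  # rollbacks seen per interval
    w = adapt_window(w, observed)
    print(w)                       # shrinks twice, then grows back
```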
437

The AlgoViz Project: Building an Algorithm Visualization Web Community

Alon, Alexander Joel Dacara 13 September 2010 (has links)
Algorithm visualizations (AVs) have become a popular teaching aid in classes on algorithms and data structures. The AlgoViz Project attempts to provide an online venue for educators, students, developers, researchers, and other AV users. The Project comprises two websites. The first, the AlgoViz Portal, provides two major informational resources: an AV catalog that provides both descriptive and evaluative metadata for indexed visualizations, and an annotated bibliography of research literature. Both resources have over 500 entries and are actively updated by the AV community. The Portal also provides field reports, discussion forums, and other community-building mechanisms. The second website, OpenAlgoViz, is a SourceForge site intended to showcase exemplary AVs and to provide logistical and hosting support to AV developers. / Master of Science
438

Partitioning Methods and Algorithms for Configurable Computing Machines

Chandrasekhar, Suresh 18 August 1998 (has links)
This thesis addresses the partitioning problem for configurable computing machines. Specifically, this thesis presents algorithms to partition chain-structured task graphs across configurable computing machines. The algorithms give optimal solutions for throughput and total execution time for these problems under constraints on area, pin count, and power consumption. The algorithms provide flexibility for applying these constraints while remaining polynomial in complexity. Proofs of correctness as well as an analysis of runtime complexity are given. Experiments are performed to illustrate the runtime of these algorithms. / Master of Science
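Unconstrained chain partitioning can be solved with a classic dynamic program. The sketch below minimizes the heaviest block when splitting a chain of task weights into k contiguous blocks (a proxy for execution-time objectives); it ignores the area, pin-count, and power constraints the thesis handles:

```python
def chain_partition(weights, k):
    """Split a chain of task weights into k contiguous blocks,
    minimizing the heaviest block.  Returns (bottleneck, blocks)."""
    n = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    INF = float("inf")
    # dp[j][i]: best bottleneck using j blocks for the first i tasks
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0
    for j in range(1, k + 1):
        for i in range(1, n + 1):
            for m in range(j - 1, i):  # last block is tasks m..i-1
                cand = max(dp[j - 1][m], prefix[i] - prefix[m])
                if cand < dp[j][i]:
                    dp[j][i], cut[j][i] = cand, m
    # Recover the blocks by walking the cut table backwards
    blocks, i = [], n
    for j in range(k, 0, -1):
        m = cut[j][i]
        blocks.append(weights[m:i])
        i = m
    blocks.reverse()
    return dp[k][n], blocks

best, blocks = chain_partition([4, 2, 7, 1, 3], 3)
print(best, blocks)  # 7 [[4, 2], [7], [1, 3]]
```

This runs in O(k n²) time; the constrained variants in the thesis add feasibility checks per candidate block while keeping the same recurrence structure.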
439

Thermal Characterization of Complex Aerospace Structures

Hanuska, Alexander Robert Jr. 24 April 1998 (has links)
Predicting the performance of complex structures exposed to harsh thermal environments is a crucial issue in many of today's aerospace and space designs. To predict the thermal stresses a structure might be exposed to, the thermal properties of the independent materials used in the design of the structure need to be known. Therefore, a noninvasive estimation procedure involving Genetic Algorithms was developed to determine the various thermal properties needed to adequately model the Outer Wing Subcomponent (OWS), a structure located at the trailing edge of the High Speed Civil Transport's (HSCT) wing tip. Due to the nature of the nonlinear least-squares estimation method used in this study, both theoretical and experimental temperature histories were required. Several one-dimensional and two-dimensional finite element models of the OWS were developed to compute the transient theoretical temperature histories. The experimental data were obtained from optimized experiments that were run at various surrounding temperature settings to investigate the temperature dependence of the estimated properties. An experimental optimization was performed to provide the most accurate estimates and reduce the confidence intervals. The simultaneous estimation of eight thermal properties, including the volumetric heat capacities and out-of-plane thermal conductivities of the facesheets, the honeycomb, the skins, and the torque tubes, was successfully completed with the one-dimensional model and the results used to evaluate the remaining in-plane thermal conductivities of the facesheets, the honeycomb, the skins, and the torque tubes with the two-dimensional model. Although experimental optimization did not eliminate all correlation between the parameters, the minimization procedure based on the Genetic Algorithm performed extremely well, despite the high degree of correlation and low sensitivity of many of the parameters. / Master of Science
440

Approximation Algorithms for Rectangle Piercing Problems

Mahmood, Abdullah-Al January 2005 (has links)
Piercing problems arise often in facility location, which is a well-studied area of computational geometry. The general form of the piercing problem discussed in this dissertation asks for the minimum number of facilities for a set of given rectangular demand regions such that each region has at least one facility located within it. It has been shown that the problem is NP-hard even if all regions are uniformly sized squares. Therefore we concentrate on approximation algorithms for the problem. As the known approximation ratio for arbitrarily sized rectangles is poor, we restrict our effort to designing approximation algorithms for unit-height rectangles. Our ε-approximation scheme requires n^O(1/ε²) time. We also consider the problem with restrictions such as bounding the depth of a point and the width of the rectangles; the approximation schemes for these two cases take n^O(1/ε) time. We also show how to maintain a factor-2 approximation of the piercing set in O(log n) amortized time in an insertion-only scenario.
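One reason unit-height rectangles are tractable is that, within a single horizontal strip, piercing them reduces to piercing 1-D intervals, for which a simple greedy rule is exactly optimal. A sketch of that 1-D building block follows (the full strip-based approximation scheme is considerably more involved):

```python
def pierce_intervals(intervals):
    """Optimal piercing of 1-D intervals: sort by right endpoint and
    greedily stab at the right endpoint of the first unpierced
    interval.  Each (lo, hi) pair is a closed interval."""
    points = []
    last = None  # rightmost piercing point placed so far
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if last is None or lo > last:  # interval not yet pierced
            last = hi
            points.append(hi)
    return points

print(pierce_intervals([(0, 2), (1, 3), (5, 6)]))  # [2, 6]
```

The greedy choice is safe because any piercing set must stab the interval with the smallest right endpoint somewhere at or before that endpoint, and moving that stab to the endpoint itself can only cover more intervals.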
