151

Memory Synthesis for FPGA-based Reconfigurable Computers

Kasat, Amit 11 October 2001 (has links)
No description available.
152

Fate and Transport of Pathogen Indicators from Pasturelands

Soupir, Michelle Lynn 15 April 2008 (has links)
The U.S. EPA has identified pathogen indicators as a leading cause of impairments in rivers and streams in the U.S. Elevated levels of bacteria in streams draining the agricultural watersheds cause concern because they indicate the potential presence of pathogenic organisms. Limited understanding of how bacteria survive in the environment and are released from fecal matter and transported along overland flow pathways results in high uncertainty in the design and selection of appropriate best management practices (BMPs) and in the bacterial fate and transport models used to identify sources of pathogens. The overall goal of this study was to improve understanding of the fate and transport mechanisms of two pathogen indicators, E. coli and enterococci, from grazed pasturelands. This goal was addressed by monitoring pathogen indicator concentrations in fresh fecal deposits for an extended period of time. Transport mechanisms of pathogen indicators were examined by developing a method to partition between the attached and unattached phases and then applying this method to analyze runoff samples collected from small box plots and large transport plots. The box plot experiments examined the partitioning of pathogen indicators in runoff from three different soil types while the transport plot experiments examined partitioning at the edge-of-the-field from well-managed and poorly-managed pasturelands. A variety of techniques have been previously used to assess bacterial attachment to particulates including filtration, fractional filtration and centrifugation. In addition, a variety of chemical and physical dispersion techniques are employed to release attached and bioflocculated cells from particulates. This research developed and validated an easy-to-replicate laboratory procedure for separation of unattached from attached E. coli with the ability to identify particle sizes to which indicators preferentially attach. 
Testing of physical and chemical dispersion techniques identified a hand shaker treatment for 10 minutes followed by dilutions in 1,000 mg L⁻¹ of Tween-85 as increasing total E. coli concentrations by 31% (P value = 0.0028) and enterococci concentrations by 17% (P value = 0.3425) when compared to a control. Separation of the unattached and attached fractions was achieved by fractional filtration followed by centrifugation. Samples receiving the filtration and centrifugation treatments did not produce statistically different E. coli (P value = 0.97) or enterococci (P value = 0.83) concentrations when compared to a control, indicating that damage was not inflicted upon the cells during the separation procedure. In-field monitoring of E. coli and enterococci re-growth and decay patterns in cowpats applied to pasturelands was conducted during the spring, summer, fall and winter seasons. First order approximations were used to determine die-off rate coefficients and decimal reduction times (D-values). Higher order approximations and weather parameters were evaluated by multiple regression analysis to identify environmental parameters impacting in-field E. coli and enterococci decay. First order kinetics approximated E. coli and enterococci decay rates with regression coefficients ranging from 0.70 to 0.90. Die-off rate constants were greatest in cowpats applied to pasture during late winter and monitored into summer months for E. coli (k = 0.0995 d⁻¹) and applied to the field during the summer and monitored until December for enterococci (k = 0.0978 d⁻¹). Decay rates were lowest in cowpats applied to the pasture during the fall and monitored over the winter (k = 0.0581 d⁻¹ for E. coli and k = 0.0557 d⁻¹ for enterococci). Higher order approximations and the addition of weather variables improved regression coefficients (R²) to values ranging from 0.81 to 0.97. 
Statistically significant variables used in the models for predicting bacterial decay included temperature, solar radiation, rainfall and relative humidity. Attachment of E. coli and enterococci to particulates present in runoff from highly erodible soils was evaluated through the application of rainfall to small box plots containing different soil types. Partitioning varied by indicator and by soil type. In general, enterococci had a higher percent attached to the silty loam (49%) and silty clay loam (43%) soils while E. coli had a higher percent attached to the loamy fine sand soils (43%). At least 50% of all attached E. coli and enterococci were associated with sediment and organic particles ranging from 8–62 μm in diameter. Much lower attachment rates were observed from runoff samples collected at the edge-of-the-field, regardless of pastureland management strategy. On average, 4.8% of E. coli and 13% of enterococci were attached to particulates in runoff from well-managed pasturelands. A second transport plot study found that on average only 0.06% of E. coli PC and 0.98% of enterococci were attached to particulates in runoff from well-managed pasturelands, but percent attachment increased slightly in runoff from poorly-managed pasture with 2.8% of E. coli and 1.23% of enterococci attached to particulates. Equations to predict E. coli and enterococci loading rates in the attached and unattached forms as a function of total suspended solids (TSS), phosphorus and organic carbon loading rates appeared to be a promising tool for improving prediction of bacterial loading rates from grazed pasturelands (R² values ranged from 0.61 to 0.99). This study provides field-based seasonal die-off rate coefficients and higher order approximations to improve predictions of indicator re-growth and decay patterns. 
The transport studies provide partitioning coefficients that can be implemented into NPS models to improve predictions of bacterial concentrations in surface waters and regression equations to predict bacterial partitioning and loading based on TSS and nutrient data. Best management practices to reduce bacterial loadings to the edge-of-the-field from pasturelands (regardless of management strategy) should focus on retention of pathogen indicators moving through overland flow pathways in the unattached state. Settling of particulates prior to release of runoff to surface waters might be an appropriate method of reducing bacterial loadings by as much as 50% from highly erodible soils. / Ph. D.
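The first-order kinetics used in this abstract reduce to C(t) = C0·exp(−kt), with the decimal reduction time (D-value) equal to ln(10)/k. A short sketch using the seasonal E. coli rate constants reported above (the initial concentration C0 is purely illustrative):

```python
import math

# First-order die-off: C(t) = C0 * exp(-k * t).
# The k values are the seasonal E. coli rate constants reported in the
# abstract; the initial concentration C0 is illustrative only.
def concentration(c0, k, t_days):
    """Bacterial concentration after t_days under first-order kinetics."""
    return c0 * math.exp(-k * t_days)

def d_value(k):
    """Decimal reduction time: days for a 90% (one-log10) reduction."""
    return math.log(10) / k

c0 = 1e6  # CFU per gram (illustrative)
for season, k in [("late winter application (E. coli)", 0.0995),
                  ("fall application (E. coli)", 0.0581)]:
    print(f"{season}: D-value = {d_value(k):.1f} d, "
          f"C(30 d) = {concentration(c0, k, 30):.2e}")
```

The higher die-off rate constant for the late-winter cowpats translates to a D-value of about 23 days, versus roughly 40 days for the slower fall/winter decay.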
153

A Deterministic Approach to Partitioning Neural Network Training Data for the Classification Problem

Smith, Gregory Edward 28 September 2006 (has links)
The classification problem in discriminant analysis involves identifying a function that accurately classifies observations as originating from one of two or more mutually exclusive groups. Because no single classification technique works best for all problems, many different techniques have been developed. For business applications, neural networks have become the most commonly used classification technique and though they often outperform traditional statistical classification methods, their performance may be hindered because of failings in the use of training data. This problem can be exacerbated because of small data set size. In this dissertation, we identify and discuss a number of potential problems with typical random partitioning of neural network training data for the classification problem and introduce deterministic methods to partitioning that overcome these obstacles and improve classification accuracy on new validation data. A traditional statistical distance measure enables this deterministic partitioning. Heuristics for both the two-group classification problem and k-group classification problem are presented. We show that these heuristics result in generalizable neural network models that produce more accurate classification results, on average, than several commonly used classification techniques. In addition, we compare several two-group simulated and real-world data sets with respect to the interior and boundary positions of observations within their groups' convex polyhedrons. We show by example that projecting the interior points of simulated data to the boundary of their group polyhedrons generates convex shapes similar to real-world data group convex polyhedrons. Our two-group deterministic partitioning heuristic is then applied to the repositioned simulated data, producing results superior to several commonly used classification techniques. / Ph. D.
154

Site and species specific wildlife habitat assessment

Heinen, Joel T. January 1982 (has links)
This document contains three manuscripts, each forming a separate chapter. The first chapter is a sensitivity analysis, conducted on a wildlife habitat analysis system previously described. This was designed to mathematically test the effects of changing various parameters used in the system on the calculation of specific indices that this system measures. Chapters 2 and 3 represent specific applications of the proposed habitat analysis system. Each has been submitted to appropriate professional journals. All three chapters are self-contained. / M.S.
155

Interfacing VHDL performance models to algorithm partitioning tools

Balasubramanian, Priya 13 February 2009 (has links)
Performance modeling is widely used to efficiently and rapidly assess the ability of multiprocessor architectures to effectively execute a given algorithm. In a typical design environment, VHDL performance models of hardware components are interconnected to form structural models of the target multiprocessor architectures. Algorithm features are described in application specific tools. Other automated tools partition the software among the various processors. Performance models evaluate the system performance. Since several iterations may be needed before a suitable configuration is obtained, a set of tools that directly interfaces the VHDL performance models to the algorithm partitioning tools will significantly reduce the time and effort needed to prepare the various models. In order to develop the interface tools, it is essential to determine the information that needs to be interchanged between the two systems. The primary goals of this thesis are to study the various models, determine the information that needs to be exchanged, and to develop tools to automatically extract the desired information from each model. / Master of Science
156

Partitioning Strategies to Enhance Symbolic Execution

Marcellino, Brendan Adrian 11 August 2015 (has links)
Software testing is a fundamental part of the software development process. However, testing is still costly and consumes about half of the development cost. The path explosion problem often forces one to consider an extremely large number of paths in order to reach a specific target. Symbolic execution can reduce this cost by using symbolic values and heuristic exploration strategies. Although various exploration strategies have been proposed in the past, the number of Satisfiability Modulo Theories (SMT) solver calls for reaching a target is still large, resulting in longer execution times for programs containing many paths. In this paper, we present two partitioning strategies in order to mitigate this problem, consequently reducing unnecessary SMT solver calls as well. In sequential partitioning, code sections are analyzed sequentially to take advantage of infeasible paths discovered in earlier sections. On the other hand, using dynamic partitioning on SSA-applied code, the code sections are analyzed in a non-consecutive order guided by data dependency metrics within the sections. Experimental results show that both strategies can achieve significant speedup in reducing the number of unnecessary solver calls in large programs. More than 1000x speedup can be achieved in large programs over conflict-driven learning. / Master of Science
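The benefit of sequential partitioning can be illustrated with a toy model. Here each "code section" is a list of branch constraints on one symbolic integer, a cheap interval intersection stands in for an SMT solver call, and all sections and constraints are invented for illustration; the actual strategies in the thesis operate on real program code.

```python
from itertools import product

# Each section offers two branches, expressed as (lo, hi) intervals on a
# symbolic integer x. A full path picks one branch per section and is
# feasible only if the intersection of its intervals is non-empty.
sections = [[(0, 9), (10, 99)]] * 5

def feasible(intervals):
    """Stand-in for an SMT call: interval intersection non-empty?"""
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return lo <= hi

# Naive exploration checks every full path: 2^5 = 32 "solver calls".
naive_calls = sum(1 for _ in product(*sections))

# Sequential partitioning extends only prefixes already proven feasible,
# so an infeasible prefix prunes all of its extensions.
calls = 0
prefixes = [[]]
for sec in sections:
    nxt = []
    for p in prefixes:
        for branch in sec:
            calls += 1                      # one "solver call" per check
            if feasible(p + [branch]):
                nxt.append(p + [branch])
    prefixes = nxt

print(naive_calls, calls, len(prefixes))    # → 32 18 2
```

Even in this tiny example, pruning infeasible prefixes discovered in earlier sections cuts the solver-call count from 32 to 18; the gap widens rapidly as the number of sections grows.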
157

Establishing the Physical Basis for Calcification by Amorphous Pathways

Blue, Christina R. 28 May 2014 (has links)
The scientific community is undergoing a paradigm shift with the realization that the formation of carbonate minerals with diverse compositions and textures can be understood within the framework of multiple pathways to mineralization. A variety of common minerals can form via an amorphous pathway, where molecules or clusters aggregate to form a metastable amorphous phase that later transforms to one or more crystalline polymorphs. Amorphous calcium carbonate (ACC) is now recognized in a wide variety of natural environments. Recent studies indicate the chemical signatures and properties of the carbonate polymorphs that transform from an ACC pathway may obey a different set of dependencies than those established for the "classical" step-growth process. The Mg content of ACC and calcite is of particular interest as a minor element that is frequently found in ACC and the final crystalline products of calcified skeletons or sediments at significant concentrations. Previous studies of ACC have provided important insights into ACC properties, but a quantitative understanding of the controls on ACC composition and the effect of mineralization pathway on Mg signatures in calcite has not been established. This study utilized a new mixed-flow reactor (MFR) procedure to synthesize ACC from well-characterized solutions that maintain a constant supersaturation. The experimental design controlled the input solution Mg/Ca ratio, total carbonate concentration, and pH to produce ACC with systematic chemical compositions. Results show that ACC composition is regulated by the interplay of three factors at steady state conditions: 1) Mg/Ca ratio, 2) total carbonate concentration, and 3) solution pH. Findings from transformation experiments show a systematic and predictable chemical framework for understanding polymorph selection during ACC transformation. Furthermore, results suggest a chemical basis for a broad range of Mg contents in calcite, including high Mg calcite. 
We find that the final calcite produced from ACC is similar to the composition of the initial ACC phase, suggesting that calcite composition reflects local conditions of formation, regardless of the pathway to mineralization. The findings from this study provide a chemical road map to future studies on ACC composition, ACC transformation, polymorph selection, and impurities in calcite. / Ph. D.
158

LWFG: A Cache-Aware Multi-core Real-Time Scheduling Algorithm

Lindsay, Aaron Charles 27 June 2012 (has links)
As the number of processing cores contained in modern processors continues to increase, cache hierarchies are becoming more complex. This added complexity has the effect of increasing the potential cost of any cache misses on such architectures. When cache misses become more costly, minimizing them becomes even more important, particularly in terms of scalability concerns. In this thesis, we consider the problem of cache-aware real-time scheduling on multiprocessor systems. One avenue for improving real-time performance on multi-core platforms is task partitioning. Partitioning schemes statically assign tasks to cores, eliminating task migrations and reducing system overheads. Unfortunately, no current partitioning schemes explicitly consider cache effects when partitioning tasks. We develop the LWFG (Largest Working set size First, Grouping) cache-aware partitioning algorithm, which seeks to schedule tasks which share memory with one another in such a way as to minimize the total number of cache misses. LWFG minimizes cache misses by partitioning tasks that share memory onto the same core and by distributing the system's sum working set size as evenly as possible across the available cores. We evaluate the LWFG partitioning algorithm against several other commonly-used partitioning heuristics on a modern 48-core platform running ChronOS Linux. Our evaluation shows that in some cases, the LWFG partitioning algorithm increases execution efficiency by as much as 15% (measured by instructions per cycle) and decreases mean maximum tardiness by up to 60%. / Master of Science
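The two objectives described above — co-locating tasks that share memory, and evenly distributing total working set size (WSS) — can be sketched as a simple greedy partitioner. This is a hedged illustration only: tasks are modeled as (name, wss, group) tuples where `group` marks tasks sharing memory, and the real LWFG algorithm additionally handles real-time schedulability concerns not shown here.

```python
def lwfg_partition(tasks, n_cores):
    """Greedy LWFG-style sketch: group tasks that share memory, then place
    groups largest-working-set-first onto the currently lightest core."""
    # Group tasks that share memory so they land on the same core.
    groups = {}
    for name, wss, group in tasks:
        groups.setdefault(group, []).append((name, wss))
    # Largest working-set-size first: sort groups by total WSS, descending.
    ordered = sorted(groups.values(),
                     key=lambda g: sum(w for _, w in g), reverse=True)
    cores = [{"tasks": [], "wss": 0} for _ in range(n_cores)]
    for g in ordered:
        # Assign the group to the core with the smallest WSS so far,
        # spreading total WSS as evenly as possible across cores.
        target = min(cores, key=lambda c: c["wss"])
        target["tasks"] += [name for name, _ in g]
        target["wss"] += sum(w for _, w in g)
    return cores

tasks = [("t1", 8, "A"), ("t2", 2, "A"), ("t3", 6, "B"), ("t4", 5, "C")]
cores = lwfg_partition(tasks, 2)
```

In this example t1 and t2 share memory (group "A") and are therefore placed together, while the per-core WSS totals (10 and 11) stay nearly balanced.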
159

On Improving Distributed Transactional Memory through Nesting, Partitioning and Ordering

Turcu, Alexandru 03 March 2015 (has links)
Distributed Transactional Memory (DTM) is an emerging, alternative concurrency control model that aims to overcome the challenges of distributed-lock based synchronization. DTM employs transactions in order to guarantee consistency in a concurrent execution. When two or more transactions conflict, all but one need to be delayed or rolled back. Transactional Memory supports code composability by nesting transactions. Nesting however can be used as a strategy to improve performance. The closed nesting model enables partial rollback by allowing a sub-transaction to abort without aborting its parent, thus reducing the amount of work that needs to be retried. In the open nesting model, sub-transactions can commit to the shared state independently of their parents. This reduces isolation and increases concurrency. Our first main contribution in this dissertation is a pair of extensions to the existing Transactional Forwarding Algorithm (TFA). Our extensions are N-TFA and TFA-ON, and support closed nesting and open nesting, respectively. We additionally extend the existing SCORe algorithm with support for open nesting (we call the result SCORe-ON). We implement these algorithms in a Java DTM framework and evaluate them. This represents the first study of transaction nesting in the context of DTM, and contributes the first DTM implementation which supports closed nesting or open nesting. Closed nesting through our N-TFA implementation proved insufficient for any significant throughput improvements. It ran on average 2% faster than flat nesting, while performance for individual tests varied between 42% slowdown and 84% speedup. The workloads that benefit most from closed nesting are characterized by short transactions, with between two and five sub-transactions. Open nesting, as exemplified by our TFA-ON and SCORe-ON implementations, showed promising results. 
We determined performance improvement to be a trade-off between the overhead of additional commits and the fundamental conflict rate. For write-intensive, high-conflict workloads, open nesting may not be appropriate, and we observed a maximum speedup of 30%. On the other hand, for lower fundamental-conflict workloads, open nesting enabled speedups of up to 167% in our tests. In addition to the two nesting algorithms, we also develop Hyflow2, a high-performance DTM framework for the Java Virtual Machine, written in Scala. It has a clean Scala API and a compatibility Java API. Hyflow2 was on average two times faster than Hyflow on high-contention workloads, and up to 16 times faster in low-contention workloads. Our second main contribution for improving DTM performance is automated data partitioning. Modern transactional processing systems need to be fast and scalable, but this means many such systems settled for weak consistency models. It is however possible to achieve all of strong consistency, high scalability and high performance, by using fine-grained partitions and light-weight concurrency control that avoids superfluous synchronization and other overheads such as lock management. Independent transactions are one such mechanism, that rely on good partitions and appropriately defined transactions. On the downside, it is not usually straightforward to determine optimal partitioning schemes, especially when dealing with non-trivial amounts of data. Our work attempts to solve this problem by automating the partitioning process, choosing the correct transactional primitive, and routing transactions appropriately. Our third main contribution is Alvin, a system for managing concurrently running transactions on a geographically replicated data-store. Alvin supports general-purpose transactions, and guarantees strong consistency criteria. 
Through a novel partial order broadcast protocol, Alvin maximizes the parallelism of ordering and local transaction processing, resulting in low client-perceived latency. Alvin can process read-only transactions either locally or globally, according to the desired consistency criterion. Conflicting transactions are ordered across all sites. We built Alvin in the Go programming language. We conducted our evaluation study on Amazon EC2 infrastructure and compared against Paxos- and EPaxos-based state machine replication protocols. Our results reveal that Alvin provides significant speed-up for read-dominated TPC-C workloads: as much as 4.8x when compared to EPaxos on 7 datacenters, and up to 26% in write-intensive workloads. Our fourth and final contribution is M2Paxos, a multi-leader implementation of Generalized Consensus. Single leader-based consensus protocols are known to stop scaling once the leader reaches its saturation point. Ordering commands based on conflicts is appealing due to the potentially higher parallelism, but is imperfect due to the higher quorum sizes required for fast decisions and the need to compare commands and track their dependencies. M2Paxos on the other hand exploits fast decisions (i.e., delivery of a command in two communication delays) by leveraging a classic quorum size, matching a majority of nodes deployed. M2Paxos does not establish command dependencies based on conflicts, but it binds accessed objects to nodes, making sure commands operating on the same object will be ordered by the same node. Our evaluation study of M2Paxos (also built in Go) confirms the effectiveness of this approach, getting up to 7× improvements in performance over state-of-the-art consensus and generalized consensus algorithms. / Ph. D.
160

New Differential Zone Protection Scheme Using Graph Partitioning for an Islanded Microgrid

Alsaeidi, Fahad S. 19 May 2022 (has links)
Microgrid deployment in electric grids improves reliability, efficiency, and quality, as well as the overall sustainability and resiliency of the grid. Specifically, microgrids alleviate the effects of power outages. However, microgrid implementations impose additional challenges on power systems. Microgrid protection is one of the technical challenges implicit in the deployment of microgrids. These challenges occur as a result of the unique properties of microgrid networks in comparison to traditional electrical networks. Differential protection is a fast, selective, and sensitive technique. Additionally, it offers a viable solution to microgrid protection concerns. The differential zone protection scheme is a cost-effective variant of differential protection. To implement a differential zone protection scheme, the network must be split into different protection zones. The reliability of this protection scheme is dependent upon the number of protective zones developed. This thesis proposes a new differential zone protection scheme using a graph partitioning algorithm. A graph partitioning algorithm is used to partition the microgrid into multiple protective zones. The IEEE 13-node microgrid is used to demonstrate the proposed protection scheme. The protection scheme is validated with MATLAB Simulink, and its impact is simulated with DIgSILENT PowerFactory software. Additionally, a comprehensive comparison was made to a comparable differential zone protection scheme. / Master of Science / A microgrid is a group of connected distributed energy resources (DERs) with the loads to be served that acts as a local electrical network. In electric grids, microgrid implementation enhances grid reliability, efficiency, and quality, as well as the system's overall sustainability and resiliency. Microgrids mitigate the consequences of power disruptions. Microgrid solutions, on the other hand, bring extra obstacles to power systems. 
One of the technological issues inherent in the implementation of microgrids is microgrid protection. These difficulties arise as a result of microgrid networks' distinct characteristics as compared to standard electrical networks. Differential protection is a technique that is fast, selective, and sensitive. It also provides a feasible solution to microgrid protection problems. This protection scheme, on the other hand, is more expensive than others. The differential zone protection scheme is a cost-effective variation of differential protection that lowers protection scheme expenses while improving system reliability. The network must be divided into different protection zones in order to deploy a differential zone protection scheme. The number of protective zones generated determines the reliability of this protection method. Using a network partitioning technique, this thesis presents a new differential zone protection scheme. The microgrid is divided into various protection zones using a graph partitioning algorithm. The proposed protection scheme is demonstrated using the IEEE 13-node microgrid. MATLAB Simulink is used to validate the protection scheme, while DIgSILENT PowerFactory is used to simulate its impact. A comparison with a similar differential zone protection scheme was also conducted.
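The idea of splitting a feeder graph into protection zones can be sketched with a simple breadth-first zoning pass. This is a hedged illustration only: the edge list below loosely follows the node names of the IEEE 13-node test feeder but is invented for demonstration, and the thesis's actual graph partitioning algorithm (which accounts for measurement-point placement and zone reliability) is not reproduced here.

```python
from collections import deque

# Illustrative radial feeder topology (node names loosely follow the
# IEEE 13-node test feeder; edges are a simplified stand-in).
edges = [(650, 632), (632, 633), (633, 634), (632, 645), (645, 646),
         (632, 671), (671, 684), (684, 611), (684, 652), (671, 680),
         (671, 692), (692, 675)]

adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def connected_zones(root, max_size):
    """Partition the feeder into connected zones of at most max_size nodes
    by growing each zone breadth-first from a seed node; leftover frontier
    nodes seed the next zones."""
    zones, seen = [], set()
    seeds = [root]
    while seeds:
        s = seeds.pop()
        if s in seen:
            continue
        zone, q = [], deque([s])
        while q and len(zone) < max_size:
            n = q.popleft()
            if n in seen:
                continue
            seen.add(n)
            zone.append(n)
            q.extend(m for m in adj[n] if m not in seen)
        seeds.extend(q)          # unexpanded frontier starts new zones
        zones.append(zone)
    return zones

zones = connected_zones(650, 4)
```

Each resulting zone is a connected set of nodes no larger than the chosen limit, which is the structural prerequisite for placing differential measurement points at zone boundaries.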
