411

Interference interactions in experimental pine-hardwood stands

Fredericksen, Todd Simon 28 July 2008 (has links)
Competition for resources and other interference from non-crop vegetation often limits the productivity of pine and pine-hardwood forest stands in the southern United States. However, forest researchers have yet to fully quantify the effect of this interference on forest tree yield and there is an incomplete understanding of the biological mechanisms of interference. To better quantify the effects of interference interactions and elucidate their mechanisms, a field replacement series experiment and two supporting greenhouse experiments were carried out using loblolly pine (Pinus taeda L.), red maple (Acer rubrum L.), black locust (Robinia pseudoacacia L.), and herbaceous vegetation. Interference between pine, hardwood species, and herbaceous vegetation significantly impacted the growth and yield of young experimental pine-hardwood stands. While herbaceous vegetation significantly affected all stands, it reduced the yield of hardwood species more than that of loblolly pine. Loblolly pine appeared to ameliorate the effect of herbaceous vegetation on hardwoods in some stands. Interference outcomes were site- and scale-dependent. In field stands, synergistic adjustment in total yield due to pine-hardwood interference was not observed, except for loblolly pine-black locust mixtures on lower site quality replicates. Hardwood species suppressed the growth of pine in seedling stands planted at very close spacing in greenhouse boxes, while the yield at age three of less densely planted field stands was positively related to the proportion of pine in the stand. Close spacing increased the ability of wide-spreading hardwood crowns to overtop and restrict light availability to conically-shaped pine crowns. Interference outcomes were related to the interactive effect of light, soil moisture, and soil nitrogen resources on tree growth and competitive ability. If not overtopped by hardwoods, loblolly pine had high yields in mixtures with hardwoods and competed effectively for soil moisture and nitrogen through efficient use of these resources. Changes in allometric relationships, including root-shoot ratios, crown dimensions, and specific leaf areas, were observed for tree species in response to interference. Tall fescue (Festuca arundinacea Schreb.), the principal herbaceous species in the field study, appeared to affect the physiology and yield of all species through allelopathy in a greenhouse experiment, suggesting that reduced yield in herbaceous plots may be due to direct chemical effects in addition to resource competition. / Ph. D.
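
For readers unfamiliar with replacement-series designs, the sketch below shows the standard relative-yield-total calculation often used to detect the kind of synergistic yield adjustment this abstract refers to. The function and the example numbers are illustrative assumptions, not values from the thesis.

```python
# Hypothetical sketch of the classic replacement-series analysis (de Wit 1960);
# the thesis does not specify its exact yield calculations.

def relative_yield_total(y1_mix, y1_mono, y2_mix, y2_mono):
    """Relative Yield Total (RYT) for a two-species mixture.

    RYT = Y1(mix)/Y1(mono) + Y2(mix)/Y2(mono)
    RYT > 1 suggests synergy (resource complementarity);
    RYT = 1 suggests the species draw on the same resource pool.
    """
    return y1_mix / y1_mono + y2_mix / y2_mono

# Example: pine and black locust yields (t/ha) in a 50:50 mixture vs. monocultures
print(relative_yield_total(6.1, 10.0, 5.4, 9.0))  # ~1.21 -> apparent synergy
```
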
412

Habitat use by nongame birds in central Appalachian riparian forests

Murray, Norman L. 16 February 2010 (has links)
I sampled bird densities and habitat characteristics along a gradient from a second-order stream to 454 m upland at 16 locations. Total bird density, species richness, and densities of 28 bird species were tested to determine whether riparian habitats influenced bird communities. Total bird density and species richness showed no riparian influence. Acadian flycatchers and Louisiana waterthrushes were closely linked to the streams. Carolina wrens, American robins, and red-eyed vireos showed weaker but positive associations with the streams. Eastern wood-pewees, black-and-white warblers, pine warblers, worm-eating warblers, and scarlet tanagers demonstrated a negative association with streams. A cluster analysis was used to group the 28 bird species into 5 assemblages based on their distribution among the sampling stations. The species were classified as belonging to the following assemblages: riparian, upland forest, mesic hardwoods, xeric forest, and mature hardwoods generalist. Logistic models were developed to predict the number of species in each assemblage that were present and the presence of each species at each station based on the habitat characteristics at the site. Regression models were developed to predict the relative abundance of each assemblage and species at occupied stations. / Master of Science
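
As a rough illustration of the presence-modelling approach described (a logistic model driven by habitat covariates), here is a minimal sketch; it assumes scikit-learn, and the covariates and toy response are invented for demonstration, not the study's data.

```python
# Minimal sketch of a species-presence logistic model; the predictors and
# response below are illustrative assumptions, not the thesis's measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stations = 16
dist_to_stream = np.linspace(0, 454, n_stations)   # sampling gradient (m)
canopy_cover = rng.uniform(40, 100, n_stations)    # invented covariate (%)
X = np.column_stack([dist_to_stream, canopy_cover])
y = (dist_to_stream < 100).astype(int)             # toy riparian-linked species

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:3])[:, 1])            # presence probabilities
```
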
413

GraphDHT: Scaling Graph Neural Networks' Distributed Training on Edge Devices on a Peer-to-Peer Distributed Hash Table Network

Gupta, Chirag 03 January 2024 (has links)
This thesis presents an innovative strategy for distributed Graph Neural Network (GNN) training, leveraging a peer-to-peer network of heterogeneous edge devices interconnected through a Distributed Hash Table (DHT). As GNNs become increasingly vital in analyzing graph-structured data across various domains, they pose unique challenges in computational demands and privacy preservation, particularly when deployed for training on edge devices like smartphones. To address these challenges, our study introduces the Adaptive Load-Balanced Partitioning (ALBP) technique in the GraphDHT system. This approach optimizes the division of graph datasets among edge devices, tailoring partitions to the computational capabilities of each device. By doing so, ALBP ensures efficient resource utilization across the network, significantly improving upon traditional participant selection strategies that often overlook the potential of lower-performance devices. At the core of our methodology are weighted graph partitioning and partition-ratio-based model aggregation for GNNs, which improve training efficiency and resource use. ALBP promotes inclusive device participation in training, overcoming computational limits and privacy concerns in large-scale graph data processing. Utilizing a DHT-based system further enhances privacy in the peer-to-peer setup. The GraphDHT system, tested across various datasets and GNN architectures, demonstrates ALBP's effectiveness in distributed GNN training and its broad applicability across domains and graph structures. This contributes to applied machine learning, especially in optimizing distributed learning on edge devices. / Master of Science / Graph Neural Networks (GNNs) are a type of machine learning model that focuses on analyzing data structured like a network, such as social media connections or biological systems. These models can help identify patterns and make predictions in various tasks, but training them on large-scale datasets can require significant computing power and careful handling of sensitive data. This research proposes a new method for training GNNs on small devices, like smartphones, by dividing the data into smaller pieces and using a peer-to-peer (p2p) network for communication between devices. This approach allows the devices to work together and learn from the data while keeping sensitive information private. The main contributions of this research are threefold: (1) examining existing ways to divide network data and how they can be used for training GNNs on small devices, (2) improving the training process by creating a localized, decentralized network of devices that can communicate and learn together, and (3) testing the method on different types of datasets and GNN models, showing that it works well across a variety of situations. To sum up, this research offers a novel way to train GNNs on small devices, allowing for more efficient learning and better protection of sensitive information.
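
The abstract describes capability-proportional partitioning with partition-ratio-weighted aggregation. The sketch below illustrates that idea in its simplest form; the function names, capability scores, and aggregation rule are assumptions for illustration, not GraphDHT's actual implementation.

```python
# Hedged sketch of capability-weighted partitioning and aggregation in the
# spirit of ALBP; all names and numbers here are illustrative assumptions.
import numpy as np

def albp_partition_sizes(n_nodes, capabilities):
    """Split n_nodes graph nodes across devices in proportion to capability."""
    w = np.asarray(capabilities, dtype=float)
    sizes = np.floor(n_nodes * w / w.sum()).astype(int)
    sizes[:n_nodes - sizes.sum()] += 1  # distribute the rounding remainder
    return sizes

def weighted_aggregate(models, partition_sizes):
    """FedAvg-style aggregation weighted by each device's partition ratio."""
    w = np.asarray(partition_sizes, dtype=float)
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, models))

sizes = albp_partition_sizes(10_000, capabilities=[1.0, 2.5, 0.5, 4.0])
print(sizes, sizes.sum())  # proportional shares summing to 10000
```
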
414

Dynamic Power-Aware Techniques for Real-Time Multicore Embedded Systems

March Cabrelles, José Luis 30 March 2015 (has links)
The continuous shrink of transistor sizes has allowed more complex and powerful devices to be implemented in the same area, which provides new capabilities and functionalities. However, this complexity increase comes with a considerable rise in power consumption. This situation is critical in portable devices, where the energy budget is limited and, hence, battery lifetime defines the usefulness of the system. Therefore, power consumption has become a major concern in the design of real-time multicore embedded systems. This dissertation proposes several techniques aimed at saving energy without sacrificing real-time schedulability in this type of system. The proposed techniques deal with different main components of the system. In particular, the techniques affect the task partitioner and the scheduler, as well as the memory controller. Some of the techniques are especially tailored for multicores with shared Dynamic Voltage and Frequency Scaling (DVFS) domains. Workload balancing among cores in a given domain has a strong impact on power consumption, since all the cores sharing a DVFS domain must run at the speed required by the most loaded core. In this thesis, a novel workload partitioning algorithm is proposed, namely Load-bounded Resource Balancing (LRB). The proposal allocates tasks to cores so as to balance a given resource (processor or memory) consumption among cores, improving real-time schedulability by increasing overlap between processor and memory activity. However, distributing tasks in this way regardless of individual core utilizations could lead to unfair load distributions; that is, one of the cores could become much more loaded than the others. To avoid this scenario, when a given utilization threshold is exceeded, tasks are assigned to the least loaded core. Unfortunately, workload partitioning alone is sometimes not able to achieve a good workload balance among cores. Therefore, this work also explores novel task migration approaches. Two task migration heuristics are proposed. The first heuristic, referred to as Single Option Migration (SOM), attempts to perform only one migration when the workload changes to improve utilization balance. Three variants of the SOM algorithm have been devised, depending on the point in time at which the migration attempt is performed: when a task arrives to the system (SOMin), when a task leaves the system (SOMout), and in both cases (SOMin-out). The second heuristic, referred to as Multiple Option Migration (MOM), explores an additional alternative workload partitioning before performing the migration attempt. Regarding the memory controller, memory controller scheduling policies are devised. Conventional policies used in Non Real-Time (NRT) systems are not appropriate for systems providing support for both Hard Real-Time (HRT) and Soft Real-Time (SRT) tasks. Those policies can introduce variability in the latencies of memory requests and, hence, cause an HRT deadline miss that could lead to a critical failure of the real-time system. To deal with this drawback, a simple policy, referred to as HR-first, which prioritizes requests of HRT tasks, is proposed. In addition, a more advanced approach, namely ATR-first, is presented. ATR-first prioritizes only those requests of HRT tasks that are necessary to ensure real-time schedulability, improving the Quality of Service (QoS) of SRT tasks. Finally, this thesis also tackles dynamic execution time estimation.
The accuracy of this estimation is important to avoid deadline misses of HRT tasks, but also to increase QoS in SRT systems. Besides, it can also help to improve the schedulability of the system and reduce power consumption. The Processor-Memory (Proc-Mem) model, which dynamically predicts the execution time of a real-time application at each frequency level, is proposed. Making use of Performance Monitoring Counters (PMCs) at run-time, this model measures during the first hyperperiod the portion of time that each core spends performing computation (CPU), waiting for memory (MEM), or both (OVERLAP). This information is then used to estimate the execution time at any other working frequency. / March Cabrelles, JL. (2014). Dynamic Power-Aware Techniques for Real-Time Multicore Embedded Systems [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48464
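
The sketch below illustrates two of the ideas as the abstract describes them: LRB's threshold-guarded resource balancing, and a Proc-Mem-style execution time estimate where compute-bound time scales with frequency but memory-bound time does not. The names, the threshold value, and the OVERLAP scaling rule are assumptions, not the dissertation's exact design.

```python
# Hedged sketches of LRB partitioning and Proc-Mem-style estimation;
# thresholds, data layout, and OVERLAP handling are illustrative assumptions.

def lrb_assign(tasks, n_cores, resource="mem", threshold=0.8):
    """Balance `resource` utilization across cores, but fall back to the
    least CPU-loaded core once any core exceeds the utilization threshold."""
    cpu_load = [0.0] * n_cores
    res_load = [0.0] * n_cores
    assignment = []
    for t in sorted(tasks, key=lambda t: t[resource], reverse=True):
        if max(cpu_load) > threshold:                  # utilization guard
            core = min(range(n_cores), key=lambda c: cpu_load[c])
        else:                                          # balance the resource
            core = min(range(n_cores), key=lambda c: res_load[c])
        assignment.append(core)
        cpu_load[core] += t["cpu"]
        res_load[core] += t[resource]
    return assignment

def proc_mem_estimate(cpu_t, mem_t, overlap_t, f_meas, f_target):
    """Compute time stretches as frequency drops; memory time does not.
    How the OVERLAP term scales is a simplifying assumption here."""
    scale = f_meas / f_target
    return cpu_t * scale + mem_t + overlap_t * max(1.0, scale)

tasks = [{"cpu": 0.30, "mem": 0.20}, {"cpu": 0.10, "mem": 0.40},
         {"cpu": 0.25, "mem": 0.10}, {"cpu": 0.20, "mem": 0.30}]
print(lrb_assign(tasks, n_cores=2))                       # -> [0, 1, 1, 0]
print(proc_mem_estimate(4.0, 2.0, 1.0, f_meas=1.2e9, f_target=0.6e9))
```
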
415

Computational Cost Analysis of Large-Scale Agent-Based Epidemic Simulations

Kamal, Tariq 21 September 2016 (has links)
Agent-based epidemic simulation (ABES) is a powerful and realistic approach for studying the impacts of disease dynamics and complex interventions on the spread of an infection in the population. Among many ABES systems, EpiSimdemics comes closest to the popular agent-based epidemic simulation systems developed by Eubank, Longini, Ferguson, and Parker. EpiSimdemics is a general framework that can model many reaction-diffusion processes besides the Susceptible-Exposed-Infectious-Recovered (SEIR) models. This model allows the study of complex systems as they interact, thus enabling researchers to model and observe socio-technical trends and forces. Pandemic planning at the world level requires simulation of over 6 billion agents, where each agent has a unique set of demographics, daily activities, and behaviors. Moreover, the stochastic nature of epidemic models, the uncertainty in the initial conditions, and the variability of reactions require the computation of several replicates of a simulation for a meaningful study. Given the hard timelines to respond, running many replicates (15-25) of several configurations (10-100) of these compute-heavy simulations is only possible on high-performance computing (HPC) clusters. These agent-based epidemic simulations are irregular and show poor execution performance on high-performance clusters due to the evolutionary nature of their workload, large irregular communication, and load imbalance. For increased utilization of HPC clusters, the simulation needs to be scalable. Many challenges arise when improving the performance of agent-based epidemic simulations on high-performance clusters. Firstly, large-scale graph-structured computation is central to the processing of these simulations, where the star-motif (high-degree) nodes typical of natural graphs create large computational imbalances and communication hotspots. Secondly, the computation is performed by classes of tasks that are separated by global synchronization. The non-overlapping computations cause idle times, which introduce the load balancing and cost estimation challenges. Thirdly, the computation is overlapped with communication, which is difficult to measure using simple methods, thus making the cost estimation very challenging. Finally, the simulations are iterative and the workload (computation and communication) may change through iterations, as a result introducing load imbalances. This dissertation focuses on developing a cost estimation model and load balancing schemes to increase the runtime efficiency of agent-based epidemic simulations on high-performance clusters. While developing the cost model and load balancing schemes, we perform static and dynamic load analyses of such simulations. We also statically quantified the computational and communication workloads in EpiSimdemics. We designed, developed and evaluated a cost model for estimating the execution cost of large-scale parallel agent-based epidemic simulations (and, more generally, of all constrained producer-consumer parallel algorithms). This cost model uses computational imbalances and communication latencies, and enables the cost estimation of those applications where the computation is performed by classes of tasks separated by synchronization. It enables the performance analysis of parallel applications by computing their execution times on a number of partitions. Our evaluations show that the model is helpful in performance prediction, resource allocation, and evaluation of load balancing schemes.
As part of the load balancing algorithms, we adopted the Metis library for partitioning bipartite graphs. We have also developed lower-overhead custom schemes called Colocation and MetColoc. We performed an evaluation of Metis, Colocation, and MetColoc. Our analysis showed that the MetColoc scheme gives a performance similar to Metis, but with half the partitioning overhead (runtime and memory). On the other hand, the Colocation scheme achieves a performance similar to Metis on a larger number of partitions, but at a far lower partitioning overhead. Moreover, the memory requirements of the Colocation scheme do not increase as we create more partitions. We have also performed a dynamic load analysis of agent-based epidemic simulations. For this, we studied the individual and joint effects of three disease parameters (transmissibility, infection period, and incubation period). We quantified the effects using an analytical equation with separate constants for the SIS, SIR, and SI disease models. The metric that we have developed in this work is useful for cost estimation of constrained producer-consumer algorithms; however, it has some limitations: its applicability is application-, machine-, and data-specific. In the future, we plan to extend the metric to increase its applicability to a larger set of machine architectures, applications, and datasets. / Ph. D.
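
For reference, the SEIR disease states that EpiSimdemics generalizes can be written as a minimal compartmental iteration. EpiSimdemics itself is agent-based rather than compartmental, so this textbook sketch only illustrates the underlying disease model, not the simulator.

```python
# Minimal textbook SEIR iteration for reference; EpiSimdemics is agent-based,
# so this compartmental form only illustrates the S/E/I/R state transitions.
def seir_step(S, E, I, R, beta, sigma, gamma, dt=1.0):
    N = S + E + I + R
    new_exposed    = beta * S * I / N * dt   # S -> E (transmission)
    new_infectious = sigma * E * dt          # E -> I (end of incubation)
    new_recovered  = gamma * I * dt          # I -> R (end of infectious period)
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_recovered,
            R + new_recovered)

state = (9_990.0, 0.0, 10.0, 0.0)  # S, E, I, R
for _ in range(100):
    state = seir_step(*state, beta=0.3, sigma=0.2, gamma=0.1)
print([round(x) for x in state])
```
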
416

Directive-Based Data Partitioning and Pipelining and Auto-Tuning for High-Performance GPU Computing

Cui, Xuewen 15 December 2020 (has links)
The computer science community needs simpler mechanisms to achieve the performance potential of accelerators, such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and co-processors (e.g., Intel Xeon Phi), due to their increasing use in state-of-the-art supercomputers. Over the past 10 years, we have seen a significant improvement in both computing power and memory connection bandwidth for accelerators. However, we also observe that computation power has grown significantly faster than the interconnection bandwidth between the central processing unit (CPU) and the accelerator. Given that accelerators generally have their own discrete memory space, data needs to be copied from the CPU host memory to the accelerator (device) memory before computation starts on the accelerator. Programming models like CUDA, OpenMP, OpenACC, and OpenCL can efficiently offload compute-intensive workloads to these accelerators. However, achieving the overlap of data transfers with kernel computation in these models is neither simple nor straightforward: without explicit user design and refactoring, codes simply copy data to or from the device with no overlap. Achieving performance can require extensive refactoring and hand-tuning to apply data transfer optimizations, and users must manually partition their dataset whenever its size is larger than device memory, which can be highly difficult when the device memory size is not exposed to the user. As systems become increasingly heterogeneous, CPUs are responsible for handling many tasks related to the accelerators: computation and data movement tasks, task dependency checking, and task callbacks. Leaving all logic control to the CPU not only costs extra communication delay over the PCI-e bus but also consumes CPU resources, which may affect the performance of other CPU tasks. This thesis aims to provide efficient directive-based data pipelining approaches for GPUs that tackle these issues and improve performance, programmability, and memory management. / Doctor of Philosophy / Over the past decade, parallel accelerators have become increasingly prominent in this emerging era of "big data, big compute, and artificial intelligence." In more recent supercomputers and datacenter clusters, we find multi-core central processing units (CPUs), many-core graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and co-processors (e.g., Intel Xeon Phi) being used to accelerate many kinds of computation tasks. While many new programming models have been proposed to support these accelerators, scientists and developers without domain knowledge usually find existing programming models not efficient enough to port their code to accelerators. Due to the limited accelerator on-chip memory size, the data array size is often too large to fit in the on-chip memory, especially when dealing with deep learning tasks. The data needs to be partitioned and managed properly, which requires more hand-tuning effort. Moreover, it is difficult for developers to tune performance for specific applications due to a lack of domain knowledge. To handle these problems, this dissertation proposes a general approach providing better programmability, performance, and data management for the accelerators. Accelerator users often prefer to keep their existing verified C, C++, or Fortran code rather than grapple with unfamiliar code.
Since 2013, OpenMP has provided a straightforward way to adapt existing programs to accelerated systems. We propose multiple associated clauses to help developers easily partition and pipeline the accelerated code. Specifically, the proposed extension can efficiently overlap kernel computation and data transfer between host and device. The extension supports memory over-subscription, meaning that the memory size required by the tasks can be larger than the GPU memory size. The internal scheduler guarantees that the data is swapped out correctly and efficiently. Machine learning methods are also leveraged to help auto-tune accelerator performance.
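
The core technique here, chunking a dataset and overlapping each chunk's transfer with the previous chunk's computation, can be sketched without any GPU at all. In the conceptual sketch below, the copy and kernel functions are stand-ins, not the thesis's proposed OpenMP clauses or any real GPU API; a copier thread transfers the next chunk while the current one is being processed.

```python
# Conceptual sketch of a double-buffered copy/compute pipeline; `copy_to_device`
# and `kernel` are stand-ins for real transfer and device-compute operations.
from concurrent.futures import ThreadPoolExecutor
import time

def copy_to_device(chunk):      # stand-in for a host-to-device transfer
    time.sleep(0.01)
    return chunk

def kernel(device_chunk):       # stand-in for the device computation
    time.sleep(0.02)
    return sum(device_chunk)

def pipelined(chunks):
    results = []
    with ThreadPoolExecutor(max_workers=1) as copier:
        next_copy = copier.submit(copy_to_device, chunks[0])
        for i in range(len(chunks)):
            device_chunk = next_copy.result()
            if i + 1 < len(chunks):               # start the next transfer...
                next_copy = copier.submit(copy_to_device, chunks[i + 1])
            results.append(kernel(device_chunk))  # ...while computing this one
    return results

data = [list(range(1000))] * 8
print(pipelined(data))
```
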
417

Mass Properties Calculation and Fuel Analysis in the Conceptual Design of Uninhabited Air Vehicles

Ohanian, Osgar John 17 December 2003 (has links)
The determination of an aircraft's mass properties is critical during its conceptual design phase. Obtaining reliable mass property information early in the design of an aircraft can prevent design mistakes that can be extremely costly further along in the development process. In this thesis, several methods are presented to automatically calculate the mass properties of aircraft structural components and fuel stored in tanks. The first method calculates the mass properties of homogeneous solids represented by polyhedral surface geometry. A newly developed method for calculating the mass properties of thin-shell objects, given the same type of geometric representation, is derived and explained. A methodology for characterizing the mass properties of fuel in tanks has also been developed. While the concepts therein are not completely original, the synthesis of past research from diverse sources has yielded a new comprehensive approach to fuel mass property analysis during conceptual design. All three of these methods apply to polyhedral geometry, which in many cases is used to approximate NURBS (Non-Uniform Rational B-Spline) surface geometry. This type of approximate representation is typically available in design software, since this geometric format is conducive to graphically rendering three-dimensional geometry. The accuracy of each method is within 10% of analytical values. The methods are highly precise (affected only by floating-point error) and can therefore reliably predict relative differences between models, which during conceptual design is much more important than absolute accuracy. Several relevant and useful applications of the presented methods are explored, including a methodology for creating a CG (Center of Gravity) envelope graph. / Master of Science
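
The standard route to mass properties of a closed triangulated solid is the divergence theorem: sum signed tetrahedra formed by each surface triangle and the origin. The sketch below shows volume and centroid only, under the assumption of outward-oriented triangles; the thesis's thin-shell and fuel-tank methods extend well beyond this minimal case.

```python
# Hedged sketch of the divergence-theorem approach to mass properties of a
# closed, outward-oriented triangulated polyhedron (volume and centroid only).
import numpy as np

def volume_and_centroid(vertices, triangles):
    """vertices: (n,3) array; triangles: (m,3) vertex indices."""
    total_vol = 0.0
    weighted_centroid = np.zeros(3)
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        vol = np.dot(a, np.cross(b, c)) / 6.0        # signed tetra volume
        total_vol += vol
        weighted_centroid += vol * (a + b + c) / 4.0  # tetra centroid (origin at 0)
    return total_vol, weighted_centroid / total_vol

# Unit cube with outward-facing triangles
v = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
              [0,0,1],[1,0,1],[1,1,1],[0,1,1]], float)
t = [(0,2,1),(0,3,2),(4,5,6),(4,6,7),(0,1,5),(0,5,4),
     (1,2,6),(1,6,5),(2,3,7),(2,7,6),(3,0,4),(3,4,7)]
print(volume_and_centroid(v, t))  # ~ (1.0, [0.5, 0.5, 0.5])
```
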
418

Metacommunity Dynamics of Medium- and Large-Bodied Mammals in the LBJ National Grasslands

McCain, Wesley Craig Stade 05 1900 (has links)
Using metacommunity theory, I investigated the mechanisms of meta-assemblage structure and assembly among medium- to large-bodied mammals in North Texas. Mammals were surveyed with camera traps in thirty property units of the LBJ National Grasslands (LBJNG). In Chapter II, the dispersal- and environmental-control-based processes in community assembly were quantified within a metacommunity context, and the best-fit metacommunity structure was identified. A hypothesis-driven modelling approach was used in Chapter III to determine whether the patterns of species composition and site use could be explained by island biogeography theory (IBT) or the habitat amount hypothesis (HAH). Islands were defined as the LBJNG property unit or the forest patch bounded by the property unit. Forest cover was selected as the focal habitat for the HAH. Seasonal dynamics were explored in both chapters. Metacommunity structure changed with each season, resulting in quasi-nested and both quasi and idealized Gleasonian and Clementsian structures. Results indicated that anthropogenic development is, overall, not disadvantageous for this assemblage, that community assembly receives equal contributions from spatial and environmental factors, and that the metacommunity appears to operate under the mass effects paradigm. The patterns of species composition and site use were explained by neither IBT nor the HAH, likely because this assemblage of generalist, dispersal-capable mammals utilizes multiple habitat types in both the protected land and the surrounding private land. This research highlights the versatility of these species and the potential value of rural countryside landscapes for wildlife conservation.
419

Partitioning of Drugs and Lignin Precursor Models into Artificial Membranes

Boija, Elisabet January 2006 (has links)
The main aim of this thesis was to characterize membrane-solute interactions using artificial membranes in immobilized liposome chromatography or capillary electrophoresis. The partitioning of a solute into a cell membrane is an essential step in diffusion across the membrane. It is a valid parameter in drug research and can be linked to the permeability as well as the absorption of drugs. Immobilized liposome chromatography was also used to study partitioning of lignin precursor models. Lignin precursors are synthesized within plant cells and need to pass the membrane to be incorporated into lignin in the cell wall.

In immobilized liposome chromatography, liposomes or lipid bilayer disks were immobilized in gel beads and the partitioning of solutes was determined. Capillary electrophoresis using disks as a pseudostationary phase was introduced as a new approach in drug partitioning studies. In addition, octanol/water partitioning was used to determine the hydrophobicity of the lignin precursor models.

Electrostatic interactions occurred between bilayers and charged drugs, whereas neutral drugs were less affected. However, neutral lignin precursor models exhibited polar interactions. Moreover, upon changing the buffer ionic strength or the buffer ions, the interactions between charged drugs and neutral liposomes were affected. Hydrophobic interactions were also revealed by including a fatty acid or a neutral detergent into the bilayer or by using a buffer with a high salt concentration. The bilayer manipulation had only a moderate effect on drug partitioning, but the high salt concentration had a large impact on partitioning of lignin precursor models.

Upon comparing the partitioning into liposomes and disks, the latter showed a more pronounced partitioning due to the larger fraction of lipids readily available for interaction. Finally, bilayer disk capillary electrophoresis was successfully introduced for partitioning studies of charged drugs. This application will be evaluated further as an analytical partitioning method and separation technique.
420

Niche partitioning, distribution and competition in North Atlantic beaked whales

MacLeod, Colin D. January 2005 (has links) (PDF)
Thesis (Ph.D.)--University of Aberdeen, 2005. / Includes bibliographical references. Available in PDF format via the World Wide Web.