261

Performance analysis and modeling of GYRO

Lively, Charles Wesley, III 30 October 2006
Efficient execution of scientific applications requires an understanding of how system features impact the performance of the application. Performance models provide significant insight into the performance relationships between an application and the system used for execution. In particular, models can be used to predict the relative performance of different systems used to execute an application. Recently, a significant effort has been devoted to gaining a more detailed understanding of the performance characteristics of a fusion reaction application, GYRO. GYRO is a plasma-physics application used to gain a better understanding of the interaction of ions and electrons in fusion reactions. In this thesis, we use the well-known Prophesy system to analyze and model the performance of GYRO across various supercomputer platforms. Using processor partitioning, we determine that utilizing the smallest number of processors per node is the most effective processor configuration for executing the application. Further, we explore trends in kernel coupling values across platforms to understand how the kernels of GYRO interact. In this work, experiments are conducted on the supercomputers Seaborg and Jacquard at the DOE National Energy Research Scientific Computing Center and the supercomputers DataStar P655 and P690 at the San Diego Supercomputer Center. Across all four platforms, our results show that utilizing one processor per node (ppn) yields better performance than full or half ppn usage. Our experimental results also show that using kernel coupling to model and predict the performance of GYRO is more accurate than simple summation of kernel times. On average, kernel coupling yields prediction estimates with less than 7% error. The relationship between kernel coupling values and the sharing of information throughout GYRO is explored in terms of the application's global communication and data locality.
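As a rough illustration of the kernel-coupling idea summarized above (this is not the Prophesy system; the kernel times and coupling coefficients below are invented), a coupling-aware prediction scales each kernel's time by how strongly it interacts with its predecessor, whereas plain summation simply adds the isolated kernel times:

```python
# Hypothetical sketch: predicting runtime from per-kernel times.
# Plain summation ignores interaction between adjacent kernels; a
# coupling-aware estimate weights each kernel by a measured coupling
# coefficient with its predecessor (c > 1: destructive interaction,
# c < 1: constructive, e.g. shared cache reuse).

kernel_times = [4.2, 7.8, 3.1, 5.5]   # seconds per kernel (made up)
coupling = [1.10, 0.85, 1.02]         # one coefficient per adjacent pair

def predict_by_summation(times):
    return sum(times)

def predict_with_coupling(times, c):
    total = times[0]
    for i in range(1, len(times)):
        # scale each kernel by how strongly it interacts with its predecessor
        total += c[i - 1] * times[i]
    return total

print(predict_by_summation(kernel_times))            # 20.6
print(predict_with_coupling(kernel_times, coupling)) # 21.0 (approx.)
```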
262

Micro-scheduling and its interaction with cache partitioning

Choudhary, Dhruv 05 July 2011
The thesis explores the sources of energy inefficiency in asymmetric multi-core architectures, where energy efficiency is measured by the energy-delay squared product. The insights gathered from this study drive the development of optimized thread scheduling and coordinated cache management strategies in an important class of asymmetric shared memory architectures. The proposed techniques are founded on well-known mathematical optimization techniques, yet are lightweight enough to be implemented in practical systems.
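For context on the efficiency metric named above, here is a minimal sketch of ranking candidate thread schedules by the energy-delay squared product; the schedules and measurements are hypothetical:

```python
# Hypothetical sketch: ranking candidate thread schedules by the
# energy-delay squared product, ED^2P = energy * delay^2.
# Lower is better; squaring the delay penalizes slow schedules more
# heavily than the plain energy-delay product would.

candidates = {
    "all_big_cores":   {"energy_j": 42.0, "delay_s": 1.8},
    "all_small_cores": {"energy_j": 18.0, "delay_s": 3.6},
    "mixed":           {"energy_j": 26.0, "delay_s": 2.2},
}

def ed2p(energy_j, delay_s):
    return energy_j * delay_s ** 2

best = min(candidates, key=lambda k: ed2p(**candidates[k]))
for name, c in candidates.items():
    print(f"{name:16s} ED^2P = {ed2p(**c):7.1f}")
print("best schedule:", best)
```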
263

A Mobile Agent Based Service Architecture for Internet Telephony

Glitho, Roch H. January 2002
Internet Telephony, defined as real time voice or multimedia communications over packet switched networks, dates back to the early days of the Internet. ARPA's Network Secure Communications project had implemented, as early as December 1973, an infrastructure for local and transnet real time voice communication. Two main sets of standards have emerged: H.323 from the ITU-T and the Session Initiation Protocol (SIP) from the Internet Engineering Task Force (IETF). Both include specifications for value added services. Value added services, or more simply services, are critical to service providers' survival and success. Unfortunately, the service architectures that come with the ITU-T and the IETF sets of standards are rather weak. Although they are constantly evolving, alternatives and complements need to be researched. This thesis, which is made up of a formal dissertation and 6 appendices, proposes a novel mobile agent based service architecture for Internet Telephony. The architecture addresses the issues none of the existing architectures solves in a satisfactory manner. Furthermore, it adds mobile agents to the panoply of service creation tools. The appendices are reprints of articles published in refereed magazines/journals or under consideration for publication. The formal dissertation is a summary of the publications. A consistent and comprehensive set of requirements is derived. They are TINA-C flavored, but adapted to Internet Telephony. They are used to critically review related work and also to motivate the use of mobile agents as the pillars of a novel architecture. The components of this novel architecture are identified. The key component is the mobile service agent. It acts as a folder and carries the service(s) to which the end-user has subscribed. Mobile service agents need to be upgraded when new versions of service logic are available and when end-users make changes to service data. This thesis proposes a novel upgrading framework. The current Internet infrastructure comprises a wide range of hosts. Mobile agent platforms are now available for most of these hosts/clients, including memory- and processing-power-constrained PDAs. Our mobile service agents need to adapt to host variability when roaming. A novel adaptivity framework is also proposed. These two frameworks are general and can be applied to any other mobile agent which meets a basic set of assumptions. A key advantage of a mobile agent based service architecture is that it enables the development of mobile agent based services. The thesis proposes a novel mobile agent based multi-party session scheduler. The feasibility and the advantages of the architecture proposed by this thesis have been demonstrated by a prototype on which measurements have been made. Future work includes the addition of a security framework to the architecture, and refinements to the upgrading and adaptivity frameworks. More mobile agent based services, especially mobile multi-agent based services, will also be developed.
264

Methods and Applications in Integer Programming : All-Integer Column Generation and Nurse Scheduling

Rönnberg, Elina January 2008
Integer programming can be used to provide solutions to complex decision and planning problems occurring in a wide variety of situations. Applying integer programming to a real life problem basically involves a first phase where a mathematical model is constructed, and a second phase where the problem described by the model is solved. While the nature of the challenges involved in the respective two phases differ, the strong relationship between the properties of models, and which methods are appropriate for their solution, links the two phases. This thesis consists of three papers, of which the third considers the modeling phase, while the first and second consider the solution phase.

Many applications of column generation yield master problems of set partitioning type, and the first and second papers present methodologies for solving such problems. The characteristic of the methodologies presented is that all successively found solutions are feasible and integral, where the retention of integrality is a major distinction from other column generation methods presented in the literature.

The third paper concerns nurse scheduling and describes the results of a pilot implementation of a scheduling tool at a Swedish nursing ward. This paper focuses on the practical aspects of modeling and the challenges of providing a solution to a complex real life problem.
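For reference, a generic set partitioning master problem of the kind mentioned above can be written in its textbook form (not necessarily the papers' exact notation):

\begin{align*}
\min\ & \sum_{j \in J} c_j x_j \\
\text{s.t.}\ & \sum_{j \in J} a_{ij} x_j = 1, \qquad i \in I, \\
& x_j \in \{0, 1\}, \qquad j \in J,
\end{align*}

where $a_{ij} = 1$ if column $j$ covers row $i$ and $0$ otherwise; in column generation the columns $j \in J$ are not enumerated up front but generated on the fly by a pricing subproblem.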
265

Leakage power driven behavioral synthesis of pipelined ASICs

Gopalan, Ranganath 01 June 2005
Traditional approaches for power optimization during high-level synthesis have targeted single-cycle designs, where only one input is being processed by the datapath at any given time. Throughput of large single-cycle designs can be improved by means of pipelining. In this work, we present a framework for the high-level synthesis of pipelined datapaths with low leakage power dissipation. We explore the effect of pipelining on the leakage power dissipation of data-flow intensive designs. An algorithm for minimization of leakage power during behavioral pipelining is presented. The transistor-level leakage reduction technique employed here is based on Multi-Threshold CMOS (MTCMOS) technology. Pipelined allocation of functional units and registers is performed considering fixed data introduction intervals. Our algorithm uses simulated annealing to perform scheduling, allocation, and binding for obtaining pipelined datapaths that have low leakage dissipation.
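A generic simulated-annealing loop of the kind the abstract names might look like the sketch below; the cost and perturbation functions stand in for the thesis's leakage model and scheduling/allocation/binding moves and are purely illustrative:

```python
import math
import random

# Hypothetical sketch of a simulated-annealing loop for datapath synthesis.
# `cost` would evaluate leakage power of a candidate schedule/binding;
# `perturb` would move one operation to another control step or functional
# unit. Both are placeholders supplied by the caller.

def anneal(initial, cost, perturb, t0=100.0, t_min=1e-3, alpha=0.95, moves=50):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    while t > t_min:
        for _ in range(moves):
            candidate = perturb(current)
            cand_cost = cost(candidate)
            delta = cand_cost - current_cost
            # always accept improvements; accept worsenings with Boltzmann probability
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, cand_cost
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha  # geometric cooling schedule
    return best, best_cost
```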
266

Geophysical investigations in the Nankai Trough and Sumatran subduction zones

Martin, Kylara Margaret 08 July 2013
The 2004 Sumatra-Andaman and the 2011 Tohoku-Oki earthquakes demonstrate the importance of understanding subduction zone earthquakes and the faults that produce them. Faults that produce earthquakes and/or tsunamis in these systems include plate boundary megathrusts, splay faults (out of sequence thrusts), and strike-slip faults from strain partitioning. Offshore Japan, IODP Exp. 314 collected logging while drilling (LWD) data across several seismically imaged fault splays in the Nankai Trough accretionary prism. I combine LWD resistivity data with a model of fluid invasion to compare the permeabilities of sands. My results indicate that sands within faulted zones are 2-3 orders of magnitude more permeable than similar undisturbed sands. Therefore fault zones are likely to be fluid conduits within the accretionary wedge. Fluids can affect the physical and chemical properties of the faulted material, increasing pore pressures and effectively lubricating the faults. Fluids play an important role in fault slip, but hazard analysis also requires an understanding of fault geometry and slip direction. Both Japan and Sumatra exhibit strain partitioning, where oblique convergence between tectonic plates is partitioned between the megathrust and strike-slip faults proximal to the arc. Offshore Sumatra, I combine profiles from a 2D seismic survey (SUMUT) with previous bathymetry and active seismic surveys to characterize the West Andaman Fault adjacent to the Aceh forearc Basin. Along this fault I interpret transpressional flower structures that cut older thrust faults. These flower structures indicate that the modern West Andaman Fault is a right-lateral strike-slip fault, which helps to accommodate the translational component of strain in this highly oblique subduction zone. Offshore the Kii Peninsula, Japan, I analyze a trench-parallel depression that forms a notch in the seafloor just landward of the megasplay fault system, along the seaward edge of the forearc Kumano Basin. Using a 12 km wide 3D seismic volume, I observe vertical faults and faults which dip toward the central axis of the depression, forming apparent flower structures. The along-strike geometry of the vertical faults makes predominantly normal or thrust motion unlikely. I conclude, therefore, that this linear depression is the bathymetric expression of a transtensional fault system. While the obliquity of convergence in the Nankai Trough is small (~15 degrees), this Kumano Basin Edge Fault Zone could be due to partitioning of the plate convergent strain. The location of the West Andaman Fault and KBEFZ within the forearc may be controlled by the rheology contrast between active accretionary wedges and the more stable crust beneath forearc basins. / text
267

Scalable frameworks and algorithms for cluster ensembles and clustering data streams

Hore, Prodip 01 June 2007
Clustering algorithms are an important tool for data mining and data analysis purposes. Clustering algorithms fall under the category of unsupervised learning algorithms, which can group patterns without an external teacher or labels using some kind of similarity metric. Clustering algorithms are generally iterative in nature and computationally intensive. They will have disk accesses in every iteration for data sets larger than memory, making the algorithms unacceptably slow. Data could be processed in chunks, which fit into memory, to provide a scalable framework. Multiple processors may be used to process chunks in parallel. Clustering solutions from each chunk together form an ensemble and can be merged to provide a global solution. Merging multiple clustering solutions, an ensemble, is therefore important for providing a scalable framework. Combining multiple clustering solutions or partitions is also important for obtaining a robust clustering solution, merging distributed clustering solutions, and providing a knowledge reuse and privacy preserving data mining framework. Here we address combining multiple clustering solutions in a scalable framework. We also propose algorithms for incrementally clustering large or very large data sets. We propose an algorithm that can cluster large data sets through a single pass. This algorithm is also extended to handle clustering of infinite data streams. These types of incremental/online algorithms can be used for real time processing as they don't revisit data and are capable of processing data streams under the constraints of limited buffer size and computational time. Thus, different frameworks/algorithms have been proposed to address scalability issues in different settings. To our knowledge, we are the first to introduce algorithms for merging cluster ensembles that are scalable, in terms of time and space complexity, on large real world data sets. We are also the first to introduce single pass and streaming variants of the fuzzy c-means algorithm. We have evaluated the performance of our proposed frameworks/algorithms on both artificial and large real world data sets. A comparison of our algorithms with other relevant algorithms is discussed. These comparisons show the scalability and effectiveness of the partitions created by these new algorithms.
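For context, the single-pass and streaming variants mentioned above build on the standard batch fuzzy c-means updates; a minimal sketch of one batch iteration (not the streaming algorithm itself) is:

```python
import numpy as np

# One batch iteration of standard fuzzy c-means (fuzzifier m > 1).
# The single-pass/streaming variants process data in memory-sized chunks;
# this sketch only shows the core membership and centroid updates they build on.

def fcm_iteration(X, centers, m=2.0, eps=1e-12):
    # distances between every point and every center, shape (n, c)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    # membership update: u[k, i] = 1 / sum_j (d[k, i] / d[k, j])^(2/(m-1))
    power = 2.0 / (m - 1.0)
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** power, axis=2)
    # centroid update: weighted mean of the data with weights u^m
    w = u ** m
    centers_new = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, centers_new

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
centers = X[rng.choice(len(X), size=3, replace=False)]
for _ in range(20):
    u, centers = fcm_iteration(X, centers)
```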
268

Climate Variability and Ecohydrology of Seasonally Dry Ecosystems

Feng, Xue January 2015
Seasonally dry ecosystems cover large areas of the world, have high potential for carbon sequestration, and harbor high levels of biodiversity. They are characterized by high rainfall variability at timescales ranging from the daily to the seasonal to the interannual, and water availability and timing play key roles in primary productivity, biogeochemical cycles, the phenology of growth and reproduction, and agricultural production. In addition, a growing demand for food and other natural resources in these regions renders seasonally dry ecosystems increasingly vulnerable to human interventions. Compounded with changes in rainfall regimes due to climate change, these pressures create a need to better understand the role of climate variability in these regions, paving the way for better management of existing infrastructure and investment in future adaptations.

In this dissertation, the ecohydrological responses of seasonally dry ecosystems to climate variability are investigated under a comprehensive framework. This is achieved by first developing diagnostic tools to quantify the degree of rainfall seasonality across different types of seasonal climates, including tropical dry, Mediterranean, and monsoon climates. This global measure of seasonality borrows from information theory and captures the essential contributions from both the magnitude and the concentration of the rainy season. By decomposing the rainfall signal from seasonality hotspots, an increase in the interannual variability of rainfall seasonality is found, accompanied by concurrent changes in the magnitude, timing, and duration of seasonal rainfall, suggesting that increases in the uncertainty of seasonal rainfall may well extend into the next century. Next, changes in the hydrological partitioning and the temporal responses of vegetation resulting from this climate variability are analyzed using a set of stochastic models that accounts for the unpredictability of rainfall as well as its seasonal trajectories. Soil water storage is found to play a pivotal role in regulating seasonal soil water hysteresis, and the balance between seasonal soil water availability and growth duration is found to induce maximum plant growth for a given amount of annual rainfall. Finally, these methods are applied in the context of biodiversity and the interplay of irrigation and soil salinity, which are prevailing management issues in seasonally dry ecosystems. / Dissertation
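One plausible sketch of an information-theoretic seasonality index of the kind described, combining the relative entropy of the monthly rainfall distribution with the annual magnitude (the dissertation's exact definition may differ in its details), is:

```python
import numpy as np

# Hypothetical sketch of an information-theoretic seasonality index:
# relative entropy of the monthly rainfall distribution with respect to a
# uniform distribution, scaled by the annual rainfall magnitude.

def seasonality_index(monthly_rain_mm, r_max_mm):
    r = np.asarray(monthly_rain_mm, dtype=float)
    total = r.sum()
    p = r / total                      # monthly fractions p_k
    q = 1.0 / 12.0                     # uniform benchmark
    # relative entropy D = sum_k p_k log2(p_k / q), skipping zero-rain months
    nz = p > 0
    D = np.sum(p[nz] * np.log2(p[nz] / q))   # 0 <= D <= log2(12)
    return D * total / r_max_mm              # concentration times magnitude

# Example: strongly monsoonal regime vs. nearly uniform regime
monsoon = [5, 5, 10, 20, 60, 250, 300, 280, 150, 40, 10, 5]
uniform = [80] * 12
print(seasonality_index(monsoon, r_max_mm=2000.0))
print(seasonality_index(uniform, r_max_mm=2000.0))  # ~0: no seasonality
```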
269

Toward an improved understanding of the global biogeochemical cycle of mercury

Amos, Helen Marie 06 June 2014
Mercury (Hg) is a potent neurotoxin, has both natural and anthropogenic sources to the environment, and is globally dispersed. Humans have been using Hg since antiquity and continue its use in large quantities, mobilizing Hg from stable long-lived geologic reservoirs to actively cycling surface terrestrial and aquatic ecosystems. Human activities, such as mining and coal combustion, have perturbed the natural biogeochemical cycle of Hg. However, the distribution of natural versus anthropogenic Hg in the environment today and the extent of anthropogenic perturbation (i.e., enrichment) are uncertain. Previous model estimates of anthropogenic enrichment have been limited by a lack of information about historical emissions, examined only near-term effects, or have not accounted for the full coupling between biogeochemical reservoirs. Presented here is a framework that integrates recently available historical emission inventories and overcomes these barriers, providing an improved quantitative understanding of global Hg cycling. / Earth and Planetary Sciences
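The "full coupling between biogeochemical reservoirs" suggests a global box model in which reservoirs exchange mass through first-order rate coefficients. A toy sketch of that structure follows; the reservoirs, rates, and emissions are placeholders, not the thesis's calibrated values:

```python
import numpy as np

# Toy coupled box model: dm/dt = K m + e, where m is the vector of reservoir
# masses, K holds first-order transfer coefficients between reservoirs
# (columns lose mass, rows gain it), and e is the anthropogenic emission
# forcing into the atmosphere. All numbers are placeholders.

boxes = ["atmosphere", "surface_ocean", "fast_terrestrial"]

K = np.array([
    [-1.0,  0.2,  0.05],   # atmosphere: deposition out, evasion/re-emission in
    [ 0.6, -0.3,  0.00],   # surface ocean
    [ 0.4,  0.1, -0.05],   # fast terrestrial pools
])  # yr^-1; each column sums to zero, so transfers conserve mass

m = np.array([1.0, 2.0, 5.0])          # initial masses (arbitrary units)
emissions = np.array([0.3, 0.0, 0.0])  # anthropogenic input to the atmosphere

dt, years = 0.1, 200
for _ in range(int(years / dt)):
    m = m + dt * (K @ m + emissions)   # forward-Euler time step

for name, mass in zip(boxes, m):
    print(f"{name:18s} {mass:8.2f}")   # emitted Hg accumulates in surface pools
```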
270

The application of machine learning methods in software verification and validation

Phuc, Nguyen Vinh, 1955- 04 January 2011
Machine learning methods have been employed in data mining to discover useful, valid, and beneficial patterns for applications whose domains encompass business, medicine, agriculture, census data, and software engineering. Focusing on software engineering, this report presents an investigation of machine learning techniques that have been utilized to predict programming faults during the verification and validation of software. Artifacts such as traces of program executions, information about test case coverage, and data pertaining to execution failures are of special interest in addressing the following concerns: completeness of test suite coverage; automation of test oracles to reduce human intervention in software testing; detection of faults causing program failures; and defect prediction in software. A survey of literature pertaining to the verification and validation of software also revealed a novel concept designed to improve black-box testing using Category-Partition for test specifications and test suites. The report includes two experiments using data extracted from source code available from the website (15) to demonstrate the application of a decision tree (C4.5) and the multilayer perceptron for fault prediction, and an example that shows a potential candidate for the Category-Partition scheme. The results from several research projects show that the application of machine learning in software testing has achieved varying degrees of success in effectively assisting software developers to improve their test strategies in the verification and validation of software systems. / text
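A minimal scikit-learn sketch of the fault prediction setup described above (scikit-learn's tree is CART with an entropy criterion rather than C4.5 proper, and the features and labels here are synthetic stand-ins for the report's extracted data):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-module software metrics (e.g. size, complexity,
# churn) with a binary "fault-prone" label; the report's experiments used
# metrics extracted from real source code instead.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0.7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# entropy-based tree as a rough stand-in for C4.5
tree = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)

for name, model in [("decision tree", tree), ("multilayer perceptron", mlp)]:
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```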
