41

Runtime Verification and Debugging of Concurrent Software

Zhang, Lu 29 July 2016 (has links)
Our reliance on software has grown rapidly over the past decades, as the pervasive use of computers and software has penetrated not only our daily lives but also many critical applications. As the computational power of multi-core processors and other parallel hardware keeps increasing, concurrent software that exploits this parallel hardware becomes crucial for achieving high performance. However, developing correct and efficient concurrent software is a difficult task for programmers because of the inherent nondeterminism in its execution. As a result, concurrency-related software bugs are among the most troublesome in practice and have caused severe problems in recent years. In this dissertation, I propose a series of new and fully automated methods for verifying and debugging concurrent software. They cover the detection, prevention, classification, and repair of some important types of bugs in the implementation of concurrent data structures and client-side web applications. These methods can be adopted at various stages of the software development life cycle to help programmers write concurrent software correctly as well as efficiently. / Ph. D.
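The following minimal sketch (an illustration, not an example from the dissertation) shows the kind of atomicity-violation bug such methods target: an unsynchronized read-modify-write on a shared counter, together with the lock-based fix.

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        # Unsynchronized read-modify-write: two threads may read the same
        # value and both write back value + 1, losing an update. Whether
        # lost updates actually appear depends on how the interpreter
        # interleaves the threads -- exactly the nondeterminism that makes
        # such bugs hard to reproduce and debug.
        global counter
        for _ in range(n):
            counter += 1

    def safe_increment(n):
        # Holding a lock makes the read-modify-write atomic.
        global counter
        for _ in range(n):
            with lock:
                counter += 1

    threads = [threading.Thread(target=unsafe_increment, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # may be less than 400000 because of lost updates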
42

Development of Wastewater Pipe Performance Index and Performance Prediction Model

Angkasuwansiri, Thiti 11 June 2013 (has links)
Water plays a critical role in every aspect of civilization: agriculture, industry, economy, environment, recreation, transportation, culture, and health. Much of America's drinking water and wastewater infrastructure, however, is old and deteriorating, and a crisis looms as demands on these systems increase. The costs associated with renewal of these aging systems are staggering. There is a critical disconnect between the methodological remedies for infrastructure renewal problems and the current sequential or isolated manner of renewal analysis and execution, which points to the need for a holistic systems perspective on the renewal problem. New tools are therefore needed to support wastewater infrastructure decisions; such decisions are necessary to sustain economic growth, environmental quality, and improved societal benefits. Accurate prediction of wastewater pipe structural and functional deterioration plays an essential role in asset management and capital improvement planning, and the key to implementing an asset management strategy is a comprehensive understanding of asset condition, performance, and risk profile. The primary objective of this research is therefore to develop protocols and methods for evaluating wastewater pipe performance. This research presents the life cycle of a wastewater pipeline, identifying the causes of pipe failure in different phases: design, manufacture, construction, operation and maintenance, and repair/rehabilitation/replacement. Various modes and mechanisms of failure were identified for different pipe materials, drawing on extensive literature reviews and interviews with utilities and pipe associations. After reviewing all relevant reports and utility databases, a standard pipe parameter list (data structure) and a pipe data collection methodology were developed. These parameters include physical/structural, operational/functional, environmental, and other parameters, not only for the pipe but for the entire pipe system. This research presents the development of a performance index for wastewater pipes. The performance index evaluates each parameter and combines them mathematically through a weighted summation and a fuzzy inference system that reflects the importance of the various factors. The performance index was evaluated on both artificial data and field data to ensure that it could be applied to real scenarios. Developing the performance index led to the development of a probabilistic performance prediction model for wastewater pipes; together they form a framework that enables effective and systematic wastewater pipe performance evaluation and prediction in asset management programs. / Ph. D.
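As a minimal sketch of the weighted-summation component of such a performance index (the parameter names, weights, and scores below are hypothetical, not the ones developed in this research):

    # Hypothetical weighted-sum performance index for a wastewater pipe.
    # Each parameter is scored on a common 0-10 scale (10 = best) and the
    # assumed weights, which must sum to 1, reflect relative importance.
    WEIGHTS = {
        "structural_defects": 0.35,
        "infiltration":       0.20,
        "blockage_history":   0.20,
        "corrosion":          0.15,
        "hydraulic_capacity": 0.10,
    }

    def performance_index(scores):
        """Combine 0-10 parameter scores into a single 0-10 index."""
        return sum(WEIGHTS[p] * scores[p] for p in WEIGHTS)

    pipe = {"structural_defects": 6, "infiltration": 8,
            "blockage_history": 7, "corrosion": 5, "hydraulic_capacity": 9}
    print(round(performance_index(pipe), 2))  # 6.75

In a fuller treatment, the fuzzy inference system described in the abstract would replace these fixed weights with rules that capture interactions among parameters.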
43

Algorithm Visualization: The State of the Field

Cooper, Matthew Lenell 01 May 2007 (has links)
We report on the state of the field of algorithm visualization, both quantitatively and qualitatively. Computer science educators seem to find algorithm and data structure visualizations attractive for their classrooms. Educational research shows that some are effective while many are not; clearly, visualizations are difficult to create and to use well. There is little in the way of a supporting community, and many visualizations are downright poor. Topic distribution is heavily skewed towards simple concepts, with advanced topics receiving little to no attention. We have cataloged nearly 400 visualizations available on the Internet in a wiki-based catalog which records availability, platform, strengths and weaknesses, responsible personnel and institutions, and other data about each visualization. We have developed extraction and analysis tools to gather statistics about the corpus of visualizations. Based on analysis of this collection, we point out areas where improvements may be realized and suggest techniques for implementing such improvements. We pay particular attention to the free and open source software movement as a model the visualization community may do well to emulate, from both a software engineering perspective and a community-building standpoint. / Master of Science
44

Efficient Parallel Algorithms and Data Structures Related to Trees

Chen, Calvin Ching-Yuen 12 1900 (has links)
The main contribution of this dissertation is a new paradigm, called the parentheses-matching paradigm, which we claim is well suited for designing efficient parallel algorithms for a broad class of nonnumeric problems. To demonstrate its applicability, we present three cost-optimal parallel algorithms: breadth-first traversal of general trees, sorting a special class of integers, and coloring an interval graph with the minimum number of colors.
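To make the paradigm concrete, here is a sequential sketch (ours, not the dissertation's) of the core primitive: a tree written as a balanced-parenthesis string, with each opening parenthesis matched to its closing mate. Once every node knows its subtree's extent in the sequence, problems such as breadth-first traversal reduce to operations on the matched pairs; the dissertation's contribution is performing this matching cost-optimally in parallel.

    def match_parentheses(s):
        """Map the index of each '(' to the index of its matching ')'.

        A tree's Euler tour can be written as a balanced-parenthesis
        string; e.g. a root with children b and c, where c has child d,
        becomes "(()(()))". Each matched pair delimits one subtree.
        """
        match, stack = {}, []
        for i, ch in enumerate(s):
            if ch == '(':
                stack.append(i)
            else:
                match[stack.pop()] = i
        return match

    print(match_parentheses("(()(()))"))
    # {1: 2, 4: 5, 3: 6, 0: 7} -- e.g. the subtree opened at index 3
    # (node c) spans positions 3..6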
45

A Common Representation Format for Multimedia Documents

Jeong, Ki Tai 12 1900 (has links)
Multimedia documents are composed of multiple file format combinations, such as image and text, image and sound, or image, text, and sound. The type of multimedia document determines the form of analysis for knowledge architecture design and retrieval methods. Over the last few decades, theories of text analysis have been proposed and applied effectively. In recent years, theories of image and sound analysis have been proposed to work with text retrieval systems and have progressed quickly, due in part to rapid increases in computer processing speed. Retrieval of multimedia documents was formerly divided into the categories of image and text, and image and sound. While the standard retrieval process begins from text only, methods are developing that allow the retrieval process to be accomplished simultaneously using text and image. Although image processing for feature extraction and text processing for term extraction are well understood, there are no prior methods that can combine these two features into a single data structure. This dissertation introduces a common representation format for multimedia documents (CRFMD) composed of both images and text. For image and text analysis, two techniques are used: the Lorenz Information Measurement and the Word Code. A new process named Jeong's Transform is demonstrated for extraction of text and image features, combining the two previous measurements to form a single data structure. Finally, this single data structure is analyzed using multi-dimensional scaling, which allows multimedia objects to be represented as vectors on a two-dimensional graph; the distance between vectors represents the magnitude of the difference between multimedia documents. This study shows that image classification on a given test set is dramatically improved when text features are encoded together with image features. This effect appears to hold true even when the available text is diffuse and not uniform with the image features. This retrieval system works by representing a multimedia document as a single data structure. CRFMD is applicable to other areas of multimedia document retrieval and processing, such as medical image retrieval, World Wide Web searching, and museum collection retrieval.
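As an illustration of the final step only (the feature vectors are fabricated; the Lorenz Information Measurement, Word Code, and Jeong's Transform are not reproduced here), multi-dimensional scaling with scikit-learn projects combined image-and-text feature vectors onto a two-dimensional plane:

    import numpy as np
    from sklearn.manifold import MDS

    # Fabricated combined feature vectors: image features concatenated
    # with text features, one row per multimedia document.
    docs = np.array([
        [0.9, 0.1, 0.3, 0.7],   # document A
        [0.8, 0.2, 0.4, 0.6],   # document B, similar to A
        [0.1, 0.9, 0.8, 0.2],   # document C, dissimilar
    ])

    # MDS places each document in 2-D so that distances between points
    # approximate the pairwise distances between feature vectors; close
    # points therefore correspond to similar documents.
    coords = MDS(n_components=2, random_state=0).fit_transform(docs)
    print(coords)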
46

Assessing variance components of multilevel models pregnancy data

Letsoalo, Marothi Peter January 2019 (has links)
Thesis (M. Sc. (Statistics)) / Most social and health science data are longitudinal and additionally multilevel in nature, which means that response data are grouped by attributes of some cluster. Ignoring the differences and similarities generated by these clusters results in misleading estimates, motivating the need to assess variance components (VCs) using multilevel models (MLMs) or generalised linear mixed models (GLMMs). This study explored and fitted teenage pregnancy census data gathered from 2011 to 2015 by the Africa Centre in KwaZulu-Natal, South Africa. Exploration of these data revealed a two-level, purely hierarchical data structure, with teenage pregnancy status in each year nested within female teenagers. To fit these data, the effects on teenage pregnancy of census year (year) and three female characteristics, namely age (age), number of household members (idhhms), and number of children before the observation year (nch), were examined. Model building first fitted a logit generalised linear model (GLM) under the assumption that teenage pregnancy measurements are independent between females, and then fitted a GLMM or MLM with a female random effect. The better-fitting GLMM indicated a 0.203 decrease in the log odds of teenage pregnancy for each additional year, while the GLM suggested a 0.21 decrease and a 0.557 increase for each additional year of age and of year, respectively. A GLM with only the year effect produced a fixed estimate 0.04 higher than that of the better-fitting GLMM. The inconsistency in the effect of year was caused by a significant female cluster variance of approximately 0.35, which was used to compute the VCs. Given the effect of year, the VCs suggested that 9.5% of the differences in teenage pregnancy lie between females, while the similarity of observations from the same female is 0.095 (on a scale from 0 to 1). It was also revealed that year does not vary within females. Apart from the small differences between the estimates of the fitted GLM and GLMM, this work produced evidence that accounting for the cluster effect improves the accuracy of estimates. Keywords: Multilevel Model, Generalised Linear Mixed Model, Variance Components, Hierarchical Data Structure, Social Science Data, Teenage Pregnancy
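As a sketch of the variance-component calculation behind the 9.5% figure: for a random-intercept logit GLMM, the level-1 residual variance is fixed at pi^2/3 (the variance of the standard logistic distribution), so the intra-class correlation follows directly from the reported cluster variance of about 0.35.

    import math

    def icc(cluster_variance):
        """Intra-class correlation for a random-intercept logit GLMM."""
        return cluster_variance / (cluster_variance + math.pi ** 2 / 3)

    # A female-level cluster variance of ~0.35 puts roughly the reported
    # 9.5% of the variation in teenage pregnancy between females.
    print(round(icc(0.35), 3))  # 0.096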
47

RootChord

Cwik, Lukasz 22 April 2010 (has links)
We present a distributed data structure, which we call "RootChord". To our knowledge, this is the first distributed hash table which is able to adapt to changes in the size of the network and answer lookup queries within a guaranteed two hops while maintaining a routing table of size Theta(sqrt(N)). We provide pseudocode and analysis for all aspects of the protocol including routing, joining, maintaining, and departing the network. In addition we discuss the practical implementation issues of parallelization, data replication, remote procedure calls, dead node discovery, and network convergence.
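The following toy scheme (an illustration in the spirit of the abstract, not RootChord's actual protocol) shows how a Theta(sqrt(N)) routing table can yield lookups in at most two hops: partition N nodes into roughly sqrt(N) groups, store every group-mate plus one contact per foreign group, and route via the contact for the key's group.

    import math

    class Node:
        """Toy two-hop overlay with a Theta(sqrt(N))-entry routing table."""

        def __init__(self, node_id, num_groups):
            self.id = node_id
            self.num_groups = num_groups
            self.group = node_id % num_groups
            self.group_members = {}  # id -> Node, all nodes in our group
            self.contacts = {}       # group -> one contact per other group

        def lookup(self, key):
            """Return (owner, hops); never more than two hops."""
            g = key % self.num_groups
            if g == self.group:          # the owner is a group-mate
                return self.group_members[key], 1
            # Hop 1 to our contact in the key's group, which knows the
            # owner directly (hop 2).
            return self.contacts[g].group_members[key], 2

    # Build a toy network of N nodes in sqrt(N) groups.
    N = 16
    num_groups = math.isqrt(N)
    nodes = [Node(i, num_groups) for i in range(N)]
    for n in nodes:
        for m in nodes:
            if m.group == n.group:
                n.group_members[m.id] = m
            else:
                n.contacts.setdefault(m.group, m)

    owner, hops = nodes[0].lookup(7)
    print(owner.id, hops)  # 7 2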
49

Temporal streams: programming abstractions for distributed live stream analysis applications

Hilley, David B 20 October 2009 (has links)
Continuous live stream analysis applications are increasingly common. Video-based surveillance, emergency response, disaster recovery, and critical infrastructure monitoring are all examples of such applications. These applications are distributed and typically require significant computing resources (like a cluster of workstations) for analysis. In addition to live data, many such applications also require access to historical data that was streamed in the past and is now archived. While distributed programming support for traditional high-performance computing applications is fairly mature, existing solutions for live stream analysis applications are still in their early stages and, in our view, inadequate. We explore the system-level value of recognizing temporal properties -- a critical aspect of the application domain. We present "temporal streams", a programming model supporting a higher-level, domain-targeted programming abstraction for such applications. It provides a simple but expressive stream abstraction encompassing transport, manipulation, and storage of streaming data. The semantics of the programming model are tailored to the application domain by explicitly recognizing the temporal aspects of continuous streams, providing a common interface for both time-based retrieval of current streaming data and data persistence. The unifying trait of time enables access to both current streaming data and archived historical data through the same interface; the communication and storage abstractions are the same -- a unified stream data abstraction, uniformly modeling stream data interactions. "Temporal streams" defines how distributed threads of computation interact implicitly via streams, but does not impose a particular model of computation constraining the interactions between distributed actors, targeting loosely coupled distributed systems with no centralized control. In particular, it targets stream analysis scenarios requiring significant signal processing on heavyweight streams such as audio and video. These unstructured streams are data rich but are not directly interpretable until meaningful features are extracted; consequently, feature detection and subsequent analysis are the major computational requirements. We also use the programming model as a vehicle for exploring systems software design issues, realizing "temporal streams" as a distributed runtime in the tradition of loosely coupled distributed systems with strong communication boundaries. We thoroughly examine the concrete software architecture and elements of implementation, and describe two generations of system implementations, including the broad development philosophy, specific design principles, and salient low-level details. The runtime is designed to be relatively lightweight and suitable as a substrate for higher-level, more domain-specific middleware or application functionality. Even with a relatively simple programming model, a carefully designed system architecture can provide a surprisingly rich and flexible substrate for upper software layers. We evaluate our system implementation in two ways: first, we present a series of quantitative experimental results designed to assess the performance of key primitives in our architecture in isolation; second, we use motivating applications to evaluate "temporal streams" in the context of realistic application scenarios.
We develop three motivating applications and provide quantitative and qualitative analyses of these applications in the context of "temporal streams." We show that, although it provides needed higher-level functionality to enable live stream analysis applications, our runtime does not add significant overhead to the stream computation at the core of each application. Finally, we also review the relationship of "temporal streams" (both the programming model and architecture) to other approaches, including database-oriented Stream Data Management Systems (SDMS), various stream processing engines, stream programming languages and parallel batch processing systems, as well as traditional distributed programming systems and communication frameworks.
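A minimal sketch of the unifying idea (a hypothetical API, not the actual runtime): one time-indexed buffer whose single retrieval call serves a window of archived data, a window of recent data, or both.

    import bisect
    import time

    class TemporalStream:
        """Toy time-indexed stream: producers append timestamped items;
        consumers ask for any time window through one interface, whether
        the window covers archived data, current data, or both."""

        def __init__(self):
            self._times = []  # sorted timestamps
            self._items = []  # items aligned with self._times

        def put(self, item, timestamp=None):
            t = time.time() if timestamp is None else timestamp
            i = bisect.bisect_right(self._times, t)
            self._times.insert(i, t)
            self._items.insert(i, item)

        def get(self, t_start, t_end):
            """Return (timestamp, item) pairs with t_start <= t < t_end."""
            lo = bisect.bisect_left(self._times, t_start)
            hi = bisect.bisect_left(self._times, t_end)
            return list(zip(self._times[lo:hi], self._items[lo:hi]))

    s = TemporalStream()
    s.put("frame-1", timestamp=10.0)
    s.put("frame-2", timestamp=20.0)
    print(s.get(0.0, 15.0))   # "historical" window: [(10.0, 'frame-1')]
    print(s.get(15.0, 25.0))  # "live" window:       [(20.0, 'frame-2')]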
50

Programming models for speculative and optimistic parallelism based on algorithmic properties

Cledat, Romain 24 August 2011 (has links)
Today's hardware is becoming more and more parallel. While embarrassingly parallel codes, such as high-performance computing workloads, can readily take advantage of this increased number of cores, most other types of code cannot easily scale using traditional data and/or task parallelism, so cores are left idle, resulting in lost opportunities to improve performance. The opportunistic computing paradigm, on which this thesis rests, is the idea that computations should dynamically adapt to and exploit the opportunities that arise from idling resources to enhance their performance or quality. In this thesis, I propose to utilize algorithmic properties to develop programming models that leverage this idea, thereby increasing and improving the parallelism that can be exploited. I exploit three distinct algorithmic properties: i) algorithmic diversity, ii) the semantic content of data structures, and iii) the variable nature of results in certain applications. This thesis presents three main contributions: i) the N-way model, which leverages algorithmic diversity to speed up hitherto sequential code, ii) an extension to the N-way model which opportunistically improves the quality of computations, and iii) a framework allowing the programmer to specify the semantics of data structures to improve the performance of optimistic parallelism.
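A minimal sketch of the N-way idea (an illustration, not the dissertation's runtime): run algorithmically diverse solvers for the same query concurrently and take whichever finishes first. Threads are used here for brevity; a real implementation would use processes or native parallelism to occupy idle cores.

    from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

    # Two algorithmically diverse ways to answer the same membership
    # query; which one wins depends on the input, so racing them exploits
    # otherwise-idle resources.
    def linear_scan(data, x):
        return any(v == x for v in data)

    def binary_search(data, x):
        lo, hi = 0, len(data)
        while lo < hi:
            mid = (lo + hi) // 2
            if data[mid] < x:
                lo = mid + 1
            else:
                hi = mid
        return lo < len(data) and data[lo] == x

    data, x = list(range(1_000_000)), 999_999
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(f, data, x)
                   for f in (linear_scan, binary_search)]
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:
            f.cancel()  # the N-way model discards the losers
        print(done.pop().result())  # True, from whichever solver won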
