About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

51

Algorithm Visualization: The State of the Field

Cooper, Matthew Lenell 01 May 2007 (has links)
We report on the state of the field of algorithm visualization, both quantitatively and qualitatively. Computer science educators seem to find algorithm and data structure visualizations attractive for their classrooms. Educational research shows that some are effective while many are not. Clearly, then, visualizations are difficult to create and to use well. There is little in the way of a supporting community, and many visualizations are downright poor. Topic distribution is heavily skewed towards simple concepts, with advanced topics receiving little to no attention. We have cataloged nearly 400 visualizations available on the Internet in a wiki-based catalog that records availability, platform, strengths and weaknesses, responsible personnel and institutions, and other data about each visualization. We have developed extraction and analysis tools to gather statistics about this corpus of visualizations. Based on analysis of the collection, we point out areas where improvements may be realized and suggest techniques for implementing them. We pay particular attention to the free and open source software movement as a model the visualization community may do well to emulate, from both a software engineering perspective and a community-building standpoint. / Master of Science
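
The thesis's extraction and analysis tools are not reproduced here, but the kind of corpus statistic described -- topic distribution across a catalog -- is simple to sketch. The CSV layout and "topic" column below are our assumptions, not the thesis's actual tooling:

```python
# Illustrative sketch: tallying topic coverage from a hypothetical CSV
# export of a visualization catalog, to expose the skew the abstract
# describes. The file layout and 'topic' column are assumptions.
import csv
from collections import Counter

def topic_distribution(catalog_csv):
    """Count catalog entries per topic, most common first."""
    counts = Counter()
    with open(catalog_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["topic"].strip().lower()] += 1
    return counts.most_common()

# e.g. topic_distribution("catalog.csv") might show sorting dwarfing
# advanced topics such as graph algorithms or computational geometry.
```
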
52

A Common Representation Format for Multimedia Documents

Jeong, Ki Tai 12 1900 (has links)
Multimedia documents are composed of multiple file format combinations, such as image and text, image and sound, or image, text and sound. The type of multimedia document determines the form of analysis for knowledge architecture design and retrieval methods. Over the last few decades, theories of text analysis have been proposed and applied effectively. In recent years, theories of image and sound analysis have been proposed to work with text retrieval systems and have advanced quickly, due in part to rapid gains in computer processing speed. Retrieval of multimedia documents was formerly divided into the categories of image and text, and image and sound. While the standard retrieval process begins from text only, methods are developing that allow the retrieval process to be accomplished simultaneously using text and image. Although image processing for feature extraction and text processing for term extraction are well understood, there are no prior methods that can combine these two features into a single data structure. This dissertation introduces a common representation format for multimedia documents (CRFMD) composed of both images and text. For image and text analysis, two techniques are used: the Lorenz Information Measurement and the Word Code. A new process named Jeong's Transform is demonstrated for extraction of text and image features, combining the two previous measurements to form a single data structure. Finally, this single data structure is analyzed using multi-dimensional scaling, which allows multimedia objects to be represented as vectors on a two-dimensional graph. The distance between vectors represents the magnitude of the difference between multimedia documents. This study shows that image classification on a given test set is dramatically improved when text features are encoded together with image features. This effect appears to hold true even when the available text is diffuse and not uniform with the image features. This retrieval system works by representing a multimedia document as a single data structure. CRFMD is applicable to other areas of multimedia document retrieval and processing, such as medical image retrieval, World Wide Web searching, and museum collection retrieval.
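
The final step -- multi-dimensional scaling of the combined feature vectors -- can be sketched independently of the thesis's own measures (the Lorenz Information Measurement, Word Code and Jeong's Transform are replaced below by generic stand-in features):

```python
# Classical multidimensional scaling of combined image+text feature
# vectors, numpy only. Random vectors stand in for the thesis's own
# feature measures; only the embedding step is illustrated.
import numpy as np

def mds_2d(features):
    """features: (n_docs, n_dims) array, one row per multimedia document.
    Returns (n_docs, 2) coordinates whose pairwise distances approximate
    the dissimilarities between documents."""
    d2 = np.square(np.linalg.norm(features[:, None] - features[None, :], axis=-1))
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ d2 @ j                        # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:2]             # two largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))

rng = np.random.default_rng(0)
img = rng.normal(size=(6, 64))                   # stand-ins for image features
txt = rng.normal(size=(6, 32))                   # stand-ins for text features
coords = mds_2d(np.hstack([img, txt]))           # one combined vector per document
```
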
53

Assessing variance components of multilevel models for pregnancy data

Letsoalo, Marothi Peter January 2019 (has links)
Thesis (M.Sc. (Statistics)) / Most social and health science data are longitudinal and additionally multilevel in nature, meaning that response data are grouped by attributes of some cluster. Ignoring the differences and similarities generated by these clusters results in misleading estimates, motivating the need to assess variance components (VCs) using multilevel models (MLMs) or generalised linear mixed models (GLMMs). This study explored and fitted teenage pregnancy census data gathered from 2011 to 2015 by the Africa Centre in KwaZulu-Natal, South Africa. The exploration of these data revealed a two-level purely hierarchical data structure, with yearly teenage pregnancy status nested within female teenagers. To fit these data, the effects of census year (year) and three female characteristics (namely age (age), number of household members (idhhms), and number of children before the observation year (nch)) on teenage pregnancy were examined. Model building in this work first fitted a logit generalised linear model (GLM) under the assumption that teenage pregnancy measurements are independent between females, and second fitted a GLMM (MLM) with a female random effect. The better-fitting GLMM indicated a 0.203 decrease in the log odds of teenage pregnancy for each additional year, while the GLM suggested a 0.21 decrease and a 0.557 increase for each additional unit of age and year, respectively. A GLM with only the year effect uncovered a fixed estimate higher, by 0.04, than that of the better-fitting GLMM. The inconsistency in the effect of year was caused by a significant female cluster variance of approximately 0.35, which was used to compute the VCs. Given the effect of year, the VCs suggested that 9.5% of the variation in teenage pregnancy lies between females; equivalently, observations on the same female are correlated at about 0.095 (on a scale from 0 to 1). It was also revealed that the effect of year does not vary within females. Apart from the small differences between the estimates of the fitted GLM and GLMM, this work produced evidence that accounting for cluster effects improves the accuracy of estimates. Keywords: Multilevel Model, Generalised Linear Mixed Model, Variance Components, Hierarchical Data Structure, Social Science Data, Teenage Pregnancy
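
The reported figures are mutually consistent under the standard latent-variable intraclass correlation for a logit GLMM, ICC = σ_u² / (σ_u² + π²/3), where the level-1 residual variance is fixed at π²/3. Assuming that formulation was used, the cluster variance of roughly 0.35 reproduces the 9.5% figure:

```python
# Latent-variable ICC for a logit GLMM: the level-1 residual variance
# of the logistic distribution is pi^2/3.
import math

sigma_u2 = 0.35                           # reported female cluster variance
icc = sigma_u2 / (sigma_u2 + math.pi**2 / 3)
print(f"ICC = {icc:.3f}")                 # ~0.096, i.e. roughly 9.5% of the
                                          # variation lies between females
```
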
54

RootChord

Cwik, Lukasz 22 April 2010 (has links)
We present a distributed data structure, which we call "RootChord". To our knowledge, this is the first distributed hash table which is able to adapt to changes in the size of the network and answer lookup queries within a guaranteed two hops while maintaining a routing table of size Theta(sqrt(N)). We provide pseudocode and analysis for all aspects of the protocol including routing, joining, maintaining, and departing the network. In addition we discuss the practical implementation issues of parallelization, data replication, remote procedure calls, dead node discovery, and network convergence.
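
The two-hop guarantee with a Θ(√N) table can be illustrated with a generic scheme (our sketch of the general idea, not RootChord's actual protocol): hash nodes into roughly √N groups; each node tracks every member of its own group plus one contact per foreign group. A lookup hops at most once to a contact in the key's group, which then knows the owner directly:

```python
# Toy two-hop lookup with a Theta(sqrt(N)) routing table per node.
# Simplification: keys coincide with node ids, and group membership is
# id mod n_groups. This is a generic scheme, not RootChord itself.
import math

class Node:
    def __init__(self, node_id, n_groups):
        self.id = node_id
        self.n_groups = n_groups
        self.group = node_id % n_groups
        self.members = {}    # node_id -> Node, everyone in our group
        self.contacts = {}   # group   -> Node, one contact per other group

    def lookup(self, key_id):
        """Return the node owning key_id in at most two hops."""
        g = key_id % self.n_groups
        if g == self.group:                      # we know the whole group
            return self.members[key_id]
        return self.contacts[g].lookup(key_id)   # hop 1: contact; hop 2: owner

# Wiring up a toy network of N nodes with sqrt(N) groups:
N = 16
n_groups = math.isqrt(N)
nodes = [Node(i, n_groups) for i in range(N)]
for a in nodes:
    for b in nodes:
        if b.group == a.group:
            a.members[b.id] = b
        else:
            a.contacts.setdefault(b.group, b)

assert nodes[0].lookup(11) is nodes[11]   # resolved in two hops
```
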
56

Temporal streams: programming abstractions for distributed live stream analysis applications

Hilley, David B 20 October 2009 (has links)
Continuous live stream analysis applications are increasingly common. Video-based surveillance, emergency response, disaster recovery, and critical infrastructure monitoring are all examples of such applications. These applications are distributed and typically require significant computing resources (like a cluster of workstations) for analysis. In addition to live data, many such applications also require access to historical data that was streamed in the past and is now archived. While distributed programming support for traditional high-performance computing applications is fairly mature, existing solutions for live stream analysis applications are still in their early stages and, in our view, inadequate. We explore the system-level value of recognizing temporal properties -- a critical aspect of the application domain. We present "temporal streams", a programming model supporting a higher-level, domain-targeted programming abstraction for such applications. It provides a simple but expressive stream abstraction encompassing transport, manipulation and storage of streaming data. The semantics of the programming model are tailored to the application domain by explicitly recognizing the temporal aspects of continuous streams, providing a common interface for both time-based retrieval of current streaming data and data persistence. The unifying trait of time enables access to both current streaming data and archived historical data using the same interface; the communication and storage abstractions are the same -- a unified stream data abstraction, uniformly modeling stream data interactions. "Temporal streams" defines how distributed threads of computation interact implicitly via streams, but does not impose a particular model of computation constraining the interactions between distributed actors, targeting loosely coupled distributed systems with no centralized control. In particular, it targets stream analysis scenarios requiring significant signal processing on heavyweight streams such as audio and video. These unstructured streams are data rich but are not directly interpretable until meaningful features are extracted; consequently, feature detection and subsequent analysis are the major computational requirements. We also use the programming model as a vehicle for exploring systems software design issues, realizing "temporal streams" as a distributed runtime in the tradition of loosely coupled distributed systems with strong communication boundaries. We thoroughly examine the concrete software architecture and elements of implementation. We also describe two generations of system implementations, including the broad development philosophy, specific design principles and salient low-level details. The runtime is designed to be relatively lightweight and suitable as a substrate for higher-level, more domain-specific middleware or application functionality. Even with a relatively simple programming model, a carefully designed system architecture can provide a surprisingly rich and flexible substrate for upper software layers. We evaluate our system implementation in two ways: first, we present a series of quantitative experimental results designed to assess the performance of key primitives in our architecture in isolation; second, we use motivating applications to evaluate "temporal streams" in the context of realistic application scenarios.
We develop three motivating applications and provide quantitative and qualitative analyses of them in the context of "temporal streams." We show that, although it provides the higher-level functionality needed to enable live stream analysis applications, our runtime does not add significant overhead to the stream computation at the core of each application. Finally, we review the relationship of "temporal streams" (both the programming model and the architecture) to other approaches, including database-oriented Stream Data Management Systems (SDMS), various stream processing engines, stream programming languages, parallel batch processing systems, and traditional distributed programming systems and communication frameworks.
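
The unified time-based interface described above lends itself to a small illustration. In the sketch below, the class and method names are ours, not the thesis's API: a single time-indexed store answers the same windowed query whether the items sit at the live edge of the stream or are long archived.

```python
# Hypothetical sketch of the unifying idea: one time-indexed interface
# serving both live (recently buffered) and historical stream items.
import bisect
import time

class TemporalStream:
    def __init__(self):
        self._times = []    # monotonically increasing timestamps
        self._items = []    # payloads (e.g. extracted video features)

    def put(self, item, t=None):
        """Producers append timestamped items at the live edge."""
        t = time.time() if t is None else t
        self._times.append(t)
        self._items.append(item)

    def get(self, t_start, t_end):
        """Fetch any [t_start, t_end] window -- the same call works for
        freshly streamed data and archived data alike."""
        lo = bisect.bisect_left(self._times, t_start)
        hi = bisect.bisect_right(self._times, t_end)
        return list(zip(self._times[lo:hi], self._items[lo:hi]))

s = TemporalStream()
for i in range(5):
    s.put(f"frame-{i}", t=float(i))
print(s.get(1.0, 3.0))   # [(1.0, 'frame-1'), (2.0, 'frame-2'), (3.0, 'frame-3')]
```

In a real runtime the backing store would spill to disk-based archives; the point here is only that producers and consumers see one stream abstraction indexed by time.
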
57

Programming models for speculative and optimistic parallelism based on algorithmic properties

Cledat, Romain 24 August 2011 (has links)
Today's hardware is becoming more and more parallel. While embarrassingly parallel codes, such as high-performance computing ones, can readily take advantage of this increased number of cores, most other types of code cannot easily scale using traditional data and/or task parallelism; cores are therefore left idling, resulting in lost opportunities to improve performance. The opportunistic computing paradigm, on which this thesis rests, is the idea that computations should dynamically adapt to and exploit the opportunities that arise from idling resources to enhance their performance or quality. In this thesis, I propose to utilize algorithmic properties to develop programming models that leverage this idea, thereby increasing and improving the parallelism that can be exploited. I exploit three distinct algorithmic properties: i) algorithmic diversity, ii) the semantic content of data structures, and iii) the variable nature of results in certain applications. This thesis presents three main contributions: i) the N-way model, which leverages algorithmic diversity to speed up hitherto sequential code, ii) an extension to the N-way model which opportunistically improves the quality of computations, and iii) a framework allowing the programmer to specify the semantics of data structures to improve the performance of optimistic parallelism.
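
As an illustration of the first contribution, here is a minimal sketch of the N-way idea as the abstract states it: run algorithmically diverse variants of a computation on otherwise idle cores and keep the first answer. The helper below is our own construction, not the thesis's runtime:

```python
# Minimal N-way sketch: race interchangeable implementations of the
# same problem and return the first result; losers are abandoned.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def n_way(variants, *args):
    """variants: interchangeable functions solving the same problem."""
    with ThreadPoolExecutor(max_workers=len(variants)) as pool:
        futures = [pool.submit(v, *args) for v in variants]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:            # best-effort cancellation of losers
            f.cancel()
        return next(iter(done)).result()

# Two algorithmically diverse routes to the same answer, racing:
result = n_way([sorted, lambda xs: sorted(xs, reverse=True)[::-1]], [3, 1, 2])
print(result)   # [1, 2, 3]
```
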
58

Self-describing objects with tangible data structures

Sinha, Arnab 28 May 2014 (has links) (PDF)
Pervasive computing, or ambient computing, aims to integrate information systems into the environment in a manner as transparent as possible to the users. It allows information systems to be tightly coupled with the physical activities within the environment. Everyday objects, along with their environment, are made smarter with embedded computing, sensors, etc., and gain the ability to communicate among themselves. In pervasive computing, it is necessary to sense the real physical world and to perceive its "context": a high-level representation of the physical situation. There are various ways to derive the context. Typically, the approach is a multi-step process which begins with sensing. Various sensing technologies are used to capture low-level information about the physical activities, which is then aggregated, analyzed and computed elsewhere in the information systems to become aware of the context. Deployed applications then react, depending on the context situation. Among sensors, RFID is an important emerging technology which allows a direct digital link between information systems and physical objects. Besides storing identification data, RFID also provides general-purpose storage space on objects, enabling new architectures for pervasive computing. In this thesis, we defend an original approach adopting the latter use of RFID, i.e. a digital memory integrated into real objects. The approach uses the principle of objects self-supporting information systems; this way of integrating reduces the need for communication with remote processing. The principle is realized in two ways. First, objects are piggybacked with semantic information related to themselves, as self-describing objects; hence, relevant information associated with the physical entities is readily available locally for processing. Second, groups of related objects are digitally linked using dedicated or ad-hoc data structures distributed over the objects, allowing direct data processing -- such as validating some property involving the objects in proximity. This physical relation among objects can be interpreted digitally from the data structure, which justifies the appellation "tangible data structures". Unlike the conventional method of using identifiers, our approach has benefits in terms of privacy, scalability, autonomy and reduced dependency on infrastructure. Its challenge, however, lies in expressivity, due to the limited memory space available in the tags. The principles are validated by prototyping in two different application domains. The first application, developed for the waste management domain, helps in efficient sorting and better recycling. The second provides added services for composite objects, such as assistance during assembly and verification, using a data structure distributed across the individual pieces.
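
A minimal sketch of the second realization may help. Assuming a hypothetical record layout in which every piece of a composite object carries its composite id, its own index, and the total piece count, a reader can validate an assembly purely from the tags in proximity, with no backend lookup:

```python
# Hypothetical "tangible data structure": each RFID tag holds a few
# bytes linking its object to a composite, so a reader can validate
# completeness locally. The record layout is ours, for illustration.
from dataclasses import dataclass

@dataclass
class TagRecord:
    composite_id: int   # which assembled object this piece belongs to
    part_index: int     # this piece's position in the assembly
    total_parts: int    # replicated on every piece of the composite

def validate_assembly(scanned):
    """scanned: TagRecords read from the tags currently in proximity.
    Returns, per composite, the list of missing piece indices."""
    by_composite = {}
    for rec in scanned:
        by_composite.setdefault(rec.composite_id, set()).add(rec.part_index)
    report = {}
    for cid, parts in by_composite.items():
        total = next(r.total_parts for r in scanned if r.composite_id == cid)
        report[cid] = sorted(set(range(total)) - parts)
    return report

tags = [TagRecord(7, 0, 3), TagRecord(7, 2, 3)]
print(validate_assembly(tags))   # {7: [1]} -> piece 1 of composite 7 missing
```
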
59

[en] SCALABLE TOPOLOGICAL DATA STRUCTURES FOR 2 AND 3 MANIFOLDS / [pt] ESTRUTURAS DE DADOS TOPOLÓGICAS ESCALONÁVEIS PARA VARIEDADES DE DIMENSÃO 2 E 3

MARCOS DE OLIVEIRA LAGE FERREIRA 24 April 2006 (has links)
Research in the area of data structures is essential for increasing the generality and computational efficiency of geometric model representations. In this work, we present two scalable topological data structures: one for triangulated surfaces, called CHE (Compact Half-Edge), and another for tetrahedral meshes, called CHF (Compact Half-Face). These structures are composed of different levels, which make it possible to vary the amount of data stored in order to tune computational efficiency. The use of object-based APIs and class inheritance provides a single interface for each function at every level of the structures. CHE and CHF require little memory and are simple to implement, since they replace the use of pointers with generic containers and arithmetic rules.
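
The closing sentence is the heart of both structures and is easy to illustrate. In a sketch of the level-0 arithmetic (with an indexing convention of our choosing; the actual CHE/CHF add further tables and levels), the half-edges of triangle t are the integers 3t, 3t+1 and 3t+2, so traversal needs no pointers at all:

```python
# Arithmetic rules that let CHE-style structures replace pointers with
# plain integer indexing (level-0 sketch, triangle surfaces). V[h] is
# the vertex that half-edge h originates from -- our convention here.

def trig(h):            # triangle containing half-edge h
    return h // 3

def next_he(h):         # next half-edge around the same triangle
    return 3 * trig(h) + (h + 1) % 3

def prev_he(h):         # previous half-edge around the same triangle
    return 3 * trig(h) + (h + 2) % 3

# Two triangles sharing the edge (1, 2):
V = [0, 1, 2,           # triangle 0 = (0, 1, 2)
     2, 1, 3]           # triangle 1 = (2, 1, 3)

def edge_vertices(h):   # endpoints of the edge carried by h
    return V[h], V[next_he(h)]

assert edge_vertices(1) == (1, 2)   # half-edge 1 of triangle 0
assert edge_vertices(4) == (1, 3)   # half-edge 4 of triangle 1
```
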
60

Représentation, modélisation et génération procédurale de terrains / Representation, modelling and procedural generation of terrains

Genevaux, Jean-David 03 September 2015 (has links)
This thesis, entitled "Representation, modelling and procedural generation of terrains", is set in the context of generating digital content for films and video games, in particular natural scenes. Our work aims to represent and generate terrains. In particular, we propose a new representation model, based on a construction tree, that lets the user manipulate pieces of terrain intuitively. We also present techniques for visualizing this model as efficiently as possible. Finally, we develop a new terrain generation algorithm that builds very large reliefs with hierarchical structures derived from a hydrographic network: the generated relief broadly conforms to the main principles of water flow, without resorting to costly hydraulic erosion simulations.
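
As an illustration of the construction-tree idea (with primitives and operators of our own choosing, not necessarily those of the thesis), a terrain can be a tree whose leaves are elevation primitives and whose internal nodes combine their children; evaluating the tree at a point yields the height there:

```python
# Illustrative construction-tree terrain: leaves are elevation
# primitives, internal nodes combine sub-terrains. The primitive and
# operator choices below are ours, for illustration only.
import math

class Hill:                      # leaf: radial bump primitive
    def __init__(self, cx, cy, radius, height):
        self.cx, self.cy, self.r, self.h = cx, cy, radius, height
    def __call__(self, x, y):
        d = math.hypot(x - self.cx, y - self.cy)
        return self.h * max(0.0, 1.0 - (d / self.r) ** 2)

class Blend:                     # internal node: sum of children
    def __init__(self, *children):
        self.children = children
    def __call__(self, x, y):
        return sum(c(x, y) for c in self.children)

class Union:                     # internal node: max of children
    def __init__(self, *children):
        self.children = children
    def __call__(self, x, y):
        return max(c(x, y) for c in self.children)

terrain = Union(Blend(Hill(0, 0, 10, 5), Hill(4, 4, 6, 3)), Hill(12, 0, 8, 7))
print(terrain(2.0, 2.0))   # height at one point; replacing a subtree swaps
                           # out that piece of terrain, which is the kind of
                           # intuitive editing the abstract describes
```
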
