301

Computational proxies : an object-based infrastructure for computational science /

Cushing, Judith Bayard. January 1995 (has links)
Thesis (Ph. D.)--Oregon Graduate Institute of Science and Technology, 1995.
302

Application of evolutionary algorithm strategies to entity relationship diagrams /

Heinze, Glenn. January 2004 (has links) (PDF)
Thesis (M.Sc.)--Athabasca University, 2004. / Includes bibliographical references (leaves 31-32). Also available online.
303

Generalized Statistical Tolerance Analysis and Three Dimensional Model for Manufacturing Tolerance Transfer in Manufacturing Process Planning

January 2011 (has links)
abstract: Manufacturing tolerance charts are currently the main tool for manufacturing tolerance transfer, but they are limited to one dimension. Some research has addressed three-dimensional geometric tolerances, but it remains too theoretical for operator-level use. This research presents a new three-dimensional model for tolerance transfer in manufacturing process planning that is user friendly in the sense that it is built upon Coordinate Measuring Machine (CMM) readings, which are readily available in any typical manufacturing facility. The model handles datum reference changes between non-orthogonal datums (squeezed datums), non-linearly oriented datums (twisted datums), and similar cases. A graph-theoretic approach based upon ACIS, C++, and MFC is laid out to facilitate automation of the model, and a new approach to determining dimensions and tolerances for the manufacturing process plan is also presented. Second, a new model for statistical tolerance analysis based upon the joint probability distribution of trivariate normally distributed variables is presented. 4-D probability maps have been developed in which the probability value of a point in space is represented by the size and color of its marker; points inside the part map represent the pass percentage for manufactured parts. The effect of refinement with form and orientation tolerances is highlighted by comparing the resulting pass percentage against the pass percentage for size tolerance only. Delaunay triangulation and ray-tracing algorithms are used to automate the identification of points inside and outside the part map, and proof-of-concept software has been implemented to demonstrate the model and determine pass percentages for various cases. The model is further extended to assemblies by convolving two trivariate statistical distributions to obtain the statistical distribution of the assembly. A map generated with Minkowski sum techniques on the individual part maps is superimposed on the probability point cloud resulting from the convolution, and Delaunay triangulation and ray tracing are again employed to determine assemblability percentages for the assembly. / Dissertation/Thesis / Ph.D. Mechanical Engineering 2011
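The pass-percentage computation described above can be pictured with a small Monte Carlo sketch. The tolerance-zone vertices, covariance values, and sample size below are hypothetical, and a convex box stands in for the actual part map; the thesis implementation uses ACIS/C++, not Python.

```python
# Illustrative sketch (not the author's ACIS/C++ implementation): estimate the
# pass percentage of a size tolerance zone by sampling a trivariate normal
# distribution of deviations and testing containment with a Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

# Hypothetical trivariate normal model of three correlated dimensional deviations (mm).
mean = np.zeros(3)
cov = np.array([[0.010, 0.002, 0.001],
                [0.002, 0.012, 0.003],
                [0.001, 0.003, 0.008]])
samples = rng.multivariate_normal(mean, cov, size=100_000)

# Hypothetical "part map": a box-shaped tolerance zone of +/-0.2 mm on each deviation,
# represented by its corner vertices and triangulated once.
lo, hi = -0.2, 0.2
corners = np.array([[x, y, z] for x in (lo, hi) for y in (lo, hi) for z in (lo, hi)])
zone = Delaunay(corners)

# find_simplex() returns -1 for points outside the triangulated (convex) zone.
inside = zone.find_simplex(samples) >= 0
print(f"estimated pass percentage: {100.0 * inside.mean():.2f}%")
```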
304

Assessing Dimensionality in Complex Data Structures: A Performance Comparison of DETECT and NOHARM Procedures

January 2011 (has links)
abstract: The purpose of this study was to investigate the effect of complex structure on dimensionality assessment in compensatory and noncompensatory multidimensional item response theory (MIRT) models of assessment data, using dimensionality assessment procedures based on conditional covariances (i.e., DETECT) and a factor-analytic approach (i.e., NOHARM). The DETECT-based methods typically outperformed the NOHARM-based methods in both two-dimensional (2D) and three-dimensional (3D) compensatory MIRT conditions. The DETECT-based methods yielded a high proportion correct, especially when correlations were .60 or smaller, the data exhibited 30% or less complexity, and the sample size was larger. As complexity increased and sample size decreased, performance typically diminished. As complexity increased, it also became more difficult to label the resulting sets of items from DETECT in terms of the dimensions. DETECT was consistent in classifying simple items, but less consistent in classifying complex items. Of the three NOHARM-based methods, χ²G/D and ALR generally outperformed RMSR. χ²G/D was more accurate when N = 500 and complexity levels were 30% or lower. As the number of items increased, ALR performance improved at a correlation of .60 and 30% or less complexity. When the data followed a noncompensatory MIRT model, the NOHARM-based methods, specifically χ²G/D and ALR, were the most accurate of all five methods. The marginal proportions for labeling sets of items as dimension-like were typically low, suggesting that the methods generally failed to label two (three) sets of items as dimension-like in 2D (3D) noncompensatory situations. The DETECT-based methods were more consistent in classifying simple items across complexity levels, sample sizes, and correlations. However, as complexity and correlation levels increased, the classification rates for all methods decreased. In most conditions, the DETECT-based methods classified complex items equally or more consistently than the NOHARM-based methods. In particular, as complexity, the number of items, and the true dimensionality increased, the DETECT-based methods were notably more consistent than any NOHARM-based method. Despite DETECT's consistency, when data follow a noncompensatory MIRT model the NOHARM-based methods should be preferred for assessing dimensionality, due to DETECT's poor performance in identifying the true dimensionality. / Dissertation/Thesis / Ph.D. Educational Psychology 2011
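As a rough illustration of the kind of data such a simulation study manipulates, the following hedged sketch generates dichotomous responses from a compensatory two-dimensional MIRT model with a chosen proportion of complex items and a chosen inter-dimension correlation. All parameter ranges are assumptions, not the study's actual design values.

```python
# A minimal sketch (hypothetical parameters) of generating dichotomous responses
# under a compensatory two-dimensional MIRT model, with a controllable proportion
# of complex items and a chosen correlation between the latent dimensions.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items, complexity, rho = 500, 30, 0.30, 0.60

# Correlated latent traits theta ~ N(0, R) with corr(theta1, theta2) = rho.
theta = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_persons)

# Simple items load on one dimension; complex items load on both.
a = np.zeros((n_items, 2))
n_complex = int(complexity * n_items)
a[:n_complex, :] = rng.uniform(0.8, 2.0, size=(n_complex, 2))          # complex items
half = (n_items - n_complex) // 2
a[n_complex:n_complex + half, 0] = rng.uniform(0.8, 2.0, size=half)     # simple, dim 1
a[n_complex + half:, 1] = rng.uniform(0.8, 2.0, size=n_items - n_complex - half)
d = rng.uniform(-1.5, 1.5, size=n_items)                                # intercepts

# Compensatory model: P(X = 1) = logistic(a1*theta1 + a2*theta2 + d).
p = 1.0 / (1.0 + np.exp(-(theta @ a.T + d)))
responses = (rng.random(p.shape) < p).astype(int)   # n_persons x n_items data matrix
print(responses.shape, responses.mean())
```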
305

A estrutura de dados gema para representação de mapas n-dimensionais / The gem data structure for n-dimensional maps

Montagner, Arnaldo Jovanini 03 May 2007 (has links)
Advisor: Jorge Stolfi / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação. / Abstract: Maps are subdivisions of topological spaces into simple regions, and triangulations are a specific kind of map in which each element is a simplex (edge, triangle, tetrahedron, etc.). In this work, we analyze the problem of representing the topology of triangulations and maps of arbitrary dimension. We study a representation based on edge-colored graphs, already used as a theoretical tool but never employed in practical applications. The main limitation of this representation is the relative inflexibility it imposes on manipulation of the topology; its great advantages are its simplicity and generality. This work consists of the theoretical specification of a data structure based on these colored graphs and of topological operators to build and manipulate the structure. The use of this structure is illustrated by algorithms for computational geometry problems. / Doctorate / Computer Graphics / Master in Computer Science
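A minimal sketch of the gem idea, under assumed names and a toy example: each node stores one neighbour per colour, the colour functions are involutions, and cells of the map are recovered as orbits of colour subsets. This is not the thesis's data structure or API, only an illustration of the representation.

```python
# A sketch of a gem (graph-encoded map): each node has one neighbour per colour
# 0..d, and each colour function is an involution. Cells of the map correspond to
# orbits of subsets of colours, recovered here by a breadth-first traversal.
from collections import deque

class Gem:
    def __init__(self, dimension):
        self.d = dimension
        self.adj = []                       # adj[node][color] -> neighbour node

    def new_node(self):
        self.adj.append([None] * (self.d + 1))
        return len(self.adj) - 1

    def glue(self, u, v, color):
        """Make u and v neighbours through 'color' (an involution, so symmetric)."""
        self.adj[u][color] = v
        self.adj[v][color] = u

    def orbit(self, node, colors):
        """All nodes reachable from 'node' using only edges of the given colours."""
        seen, queue = {node}, deque([node])
        while queue:
            u = queue.popleft()
            for c in colors:
                v = self.adj[u][c]
                if v is not None and v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

# Example: a 1-dimensional map, the 3-cycle, encoded with colours 0 and 1.
g = Gem(1)
f = [g.new_node() for _ in range(6)]   # six flags, two per edge
g.glue(f[0], f[1], 0); g.glue(f[2], f[3], 0); g.glue(f[4], f[5], 0)  # colour 0: cross each edge
g.glue(f[1], f[2], 1); g.glue(f[3], f[4], 1); g.glue(f[5], f[0], 1)  # colour 1: turn around each vertex
print(g.orbit(f[0], [0]))      # the edge cell containing flag 0 -> {0, 1}
print(g.orbit(f[0], [0, 1]))   # the whole connected map -> all six flags
```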
306

Distância de edição para estruturas de dados (Edit distance for data structures)

Silva Junior, Paulo Matias da January 2018 (has links)
Advisor: Prof. Dr. Rodrigo de Alencar Hausen / Co-advisor: Prof. Dr. Jerônimo Cordoni Pellegrini / Master's thesis - Universidade Federal do ABC, Programa de Pós-Graduação em Ciência da Computação, Santo André, 2018. / Abstract: The general tree edit distance problem consists in comparing two rooted labelled trees using operations that change one tree into another. The tree edit distance is defined as the minimum cost of a sequence of edit operations needed to transform one tree into the other; the edit operations studied are inserting, deleting, and replacing nodes. In this work, we prove that finding the largest common subforest between trees restricted to node deletion, called the LCS-forest, is a particular case of tree edit distance. Valiente [Val02] proved that finding the maximum common subtree is a particular case of tree edit distance under a condition that strongly preserves ancestry; we present an alternative proof using paths between pairs of nodes. These three distance problems are shown to be related in a hierarchy: the general tree edit distance is a lower bound for the distance obtained from the LCS-forest solution, and the latter is a lower bound for the distance obtained from the maximum common subtree solution. In the second part of this work, we describe data structures as rooted labelled trees, so that a data structure can be compared with itself after a sequence of operations by applying the tree edit distance. For this, the model of operation costs on a tree considers information such as the number of nodes in the tree and the level of the operated node. The data structures modelled as trees were the stack, the linked list, and the binary search tree; the models relate the edit distance to the time complexities of operations on these structures. The operation costs were also adapted for tries and B-trees. Experiments that compute the distances are presented: each data structure is compared with itself after random sequences of operations, showing how each proposed measure behaves on the respective structure. The sequence length influenced the distance values, and cost models based on the level of the operated node yielded smaller distances than those based on the structure size.
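For concreteness, the following sketch implements the standard recursion for the edit distance between ordered rooted labelled trees with unit costs; the thesis's cost models (based on structure size and node level) would replace the placeholder constants below.

```python
# A minimal sketch (unit edit costs, ordered trees) of the general tree edit distance
# recursion used to compare rooted labelled trees. The cost constants are placeholders.
from functools import lru_cache

# A tree is (label, (child, child, ...)); a forest is a tuple of trees.
def tree(label, *children):
    return (label, tuple(children))

def size(forest):
    return sum(1 + size(children) for _, children in forest)

COST_DEL = COST_INS = COST_REN = 1   # placeholder unit costs

@lru_cache(maxsize=None)
def forest_dist(f, g):
    if not f:
        return COST_INS * size(g)          # insert every node of g
    if not g:
        return COST_DEL * size(f)          # delete every node of f
    (l1, c1), rest_f = f[0], f[1:]
    (l2, c2), rest_g = g[0], g[1:]
    return min(
        COST_DEL + forest_dist(c1 + rest_f, g),               # delete root of first tree in f
        COST_INS + forest_dist(f, c2 + rest_g),               # insert root of first tree in g
        (0 if l1 == l2 else COST_REN)
            + forest_dist(c1, c2) + forest_dist(rest_f, rest_g),  # match the two roots
    )

def tree_edit_distance(t1, t2):
    return forest_dist((t1,), (t2,))

# Example: delete one node and relabel another, total cost 2.
a = tree("root", tree("x", tree("leaf")), tree("y"))
b = tree("root", tree("x"), tree("z"))
print(tree_edit_distance(a, b))   # 2
```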
307

Metric space indexing for nearest neighbor search in multimedia context / Indexação de espaços métricos para busca de vizinho mais próximo em contexto multimídia

Silva, Eliezer de Souza da, 1988- 26 August 2018 (has links)
Advisor: Eduardo Alves do Valle Junior / Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: The increasing availability of multimedia content poses a challenge for information retrieval researchers. Users want not only to have access to multimedia documents, but also to make sense of them: the ability to find specific content in extremely large collections of textual and non-textual documents is paramount. At such large scales, multimedia information retrieval systems must rely on the ability to perform search by similarity efficiently. However, multimedia documents are often represented by high-dimensional feature vectors, or by other complex representations in metric spaces, and providing efficient similarity search for that kind of data is extremely challenging. In this project, we explore one of the most cited families of solutions for similarity search, Locality-Sensitive Hashing (LSH), which is based upon the creation of hashing functions that assign, with higher probability, the same key to data that are similar. LSH is available only for a handful of distance functions, but, where available, it has been found to be extremely efficient for architectures with uniform access cost to the data. Most existing LSH functions are restricted to vector spaces. We propose two novel LSH methods (VoronoiLSH and VoronoiPlex LSH) for generic metric spaces based on metric hyperplane partitioning (random centroids and K-medoids). We present a comparison with well-established LSH methods in vector spaces and with recent competing methods for metric spaces. We develop a theoretical probabilistic model of the behavior of the proposed algorithms and show some relations and bounds for the probability of hash collision. Among the algorithms proposed for generalizing LSH to metric spaces, this theoretical development is new. Although the problem is very challenging, our results demonstrate that it can be successfully tackled. This dissertation presents the development of the methods, their theoretical formulation, and an experimental discussion of their performance. / Master's / Computer Engineering / Master in Electrical Engineering
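A hedged sketch of the random-centroid idea behind VoronoiLSH: each hash function samples a few pivots from the data, and an object's hash value is the index of its nearest pivot under the metric, so similar objects tend to fall into the same Voronoi cell. Class and parameter names are illustrative, not the dissertation's implementation.

```python
# Illustrative VoronoiLSH-style index for a generic metric space: buckets are the
# cells of a random pivot (centroid) partition, one partition per hash table.
import random
from collections import defaultdict

class VoronoiLSH:
    def __init__(self, dataset, dist, n_tables=4, n_pivots=8, seed=0):
        self.dist = dist
        self.data = list(dataset)
        rng = random.Random(seed)
        # One pivot set per hash table, sampled from the data itself.
        self.pivot_sets = [rng.sample(self.data, n_pivots) for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]
        for x in self.data:
            for t, pivots in enumerate(self.pivot_sets):
                self.tables[t][self._key(x, pivots)].append(x)

    def _key(self, x, pivots):
        # Hash value = index of the nearest pivot (a cell of the Voronoi partition).
        return min(range(len(pivots)), key=lambda i: self.dist(x, pivots[i]))

    def query(self, q):
        # Union of the buckets q falls into, refined by the exact distance.
        candidates = set()
        for t, pivots in enumerate(self.pivot_sets):
            candidates.update(self.tables[t][self._key(q, pivots)])
        return min(candidates, key=lambda c: self.dist(q, c)) if candidates else None

# Toy usage with a Euclidean metric on 2-D points (any metric function works).
euclid = lambda a, b: ((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5
pts = [(random.random(), random.random()) for _ in range(1000)]
index = VoronoiLSH(pts, euclid)
print(index.query((0.5, 0.5)))
```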
308

An Optimized Representation for Dynamic k-ary Cardinal Trees

Yasam, Venkata Sudheer Kumar Reddy January 2009 (has links)
Trees are one of the most fundamental structures in computer science. Standard pointer-based representations consume a significant amount of space while supporting only a small set of navigational operations. Succinct data structures have been developed to overcome these difficulties. A succinct data structure for an object from a given class of objects occupies space close to the information-theoretic lower bound for representing an object from the class, while supporting the required operations on the object efficiently. In this thesis we consider representing trees succinctly. Various succinct representations have been designed for different classes of trees, namely ordinal trees, cardinal trees, and labelled trees. Barring a few, most of these representations are static in that they do not support inserting and deleting nodes. We consider succinct representations for cardinal trees that also support updates (insertions and deletions), i.e., dynamic cardinal trees. A cardinal tree of degree k, also referred to as a k-ary cardinal tree or simply a k-ary tree, is a tree where each node has a place for up to k children with labels from 1 to k. The information-theoretic lower bound for representing a k-ary cardinal tree on n nodes is roughly 2n + n log k bits. Representations that take 2n + n log k + o(n log k) bits have been designed that support basic navigational operations such as finding the parent, the i-th child, the child labelled j, the size of a subtree, etc. in constant time, but these could not support updates efficiently. The only known succinct dynamic representation, due to Diego, still uses 2n + n log k + o(n log k) bits and supports the basic navigational operations in O(log k + log log n) time and updates in O((log k + log log n)(1 + log k / log(log k + log log n))) amortized time. We improve the times for the operations without increasing the space complexity, for the case when k is reasonably small compared to n. In particular, when k = O(√(log n)), our representation supports all the navigational operations in constant time while supporting updates in O(√(log log n)) amortized time.
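To make the supported operations concrete, here is a plain pointer-based k-ary cardinal tree (the kind of representation the succinct structures aim to replace), together with the rough information-theoretic bound quoted above. Names and the example are illustrative only.

```python
# A plain pointer-based k-ary cardinal tree supporting the operations named in the
# abstract (parent, child labelled i, subtree size), shown only for concreteness.
# Succinct representations support the same interface in about 2n + n*log2(k) bits
# instead of a machine-word pointer per node and child slot.
import math

class CardinalNode:
    def __init__(self, k, parent=None):
        self.k = k
        self.parent = parent
        self.children = {}            # label in 1..k -> CardinalNode

    def add_child(self, label):
        assert 1 <= label <= self.k and label not in self.children
        self.children[label] = CardinalNode(self.k, parent=self)
        return self.children[label]

    def child(self, label):
        return self.children.get(label)   # None if there is no child with that label

    def subtree_size(self):
        return 1 + sum(c.subtree_size() for c in self.children.values())

def info_theoretic_bits(n, k):
    # Rough form of the lower bound cited in the abstract: about 2n + n*log2(k) bits.
    return 2 * n + n * math.log2(k)

root = CardinalNode(k=4)
a = root.add_child(2)
a.add_child(1); a.add_child(4)
print(root.subtree_size(), a.parent is root, root.child(3))
print(f"{info_theoretic_bits(n=root.subtree_size(), k=4):.1f} lower-bound bits")
```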
309

Object serialization vs relational data modelling in Apache Cassandra: a performance evaluation

Johansen, Valdemar January 2015 (has links)
Context. In newer database solutions designed for large-scale, cloud-based services, database performance is of particular concern, as these services face scalability challenges due to I/O bottlenecks. These issues can be alleviated through various data model optimizations that reduce I/O loads; object serialization is one such approach. Objectives. This study investigates the performance of serialization using the Apache Avro library in the Cassandra database. Two different serialized data models are compared with a traditional relational database model. Methods. The study uses an experimental approach that compares read and write latency using Twitter data in JSON format. Results. Avro serialization is found to improve performance; however, the extent of the benefit is highly dependent on the serialization granularity defined by the data model. Conclusions. The study concludes that developers seeking to improve database throughput in Cassandra through serialization should prioritize data model optimization, as serialization by itself will not outperform relational modelling in all use cases. The study also recommends that further work be done to investigate additional use cases, as there are potential performance issues with serialization that are not covered in this study.
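The granularity trade-off can be pictured with a small, hedged sketch: a coarse-grained model that stores a whole tweet as one serialized blob column versus a relational-style model with one column per field. The Avro schema is shown only in its JSON form; the stand-in encoding below uses the standard json module rather than an Avro library, and the CQL table shapes in the comments are illustrative, not taken from the study.

```python
# Illustration (not code from the study) of two data models for the same tweet:
# one serialized blob column versus one column per field.
import json

tweet = {"id": 42, "user": "alice", "body": "hello", "created_at": "2015-03-01T12:00:00Z"}

# Avro record schema describing the serialized payload (JSON form of an Avro schema).
tweet_schema = {
    "type": "record", "name": "Tweet",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "user", "type": "string"},
        {"name": "body", "type": "string"},
        {"name": "created_at", "type": "string"},
    ],
}

# Coarse-grained model: one row = (id, payload), e.g. a table like
#   CREATE TABLE tweets_blob (id bigint PRIMARY KEY, payload blob);
payload = json.dumps(tweet).encode("utf-8")   # stand-in for an Avro-encoded blob
coarse_row = (tweet["id"], payload)

# Relational-style model: one column per field, e.g. a table like
#   CREATE TABLE tweets (id bigint PRIMARY KEY, user text, body text, created_at timestamp);
fine_row = (tweet["id"], tweet["user"], tweet["body"], tweet["created_at"])

print(len(payload), coarse_row[0], fine_row[1])
```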
310

Visualisering av datastrukturer : Utveckling av ett tolkningsverktyg (Visualization of data structures: development of an interpretation tool)

Adborn, Mats January 2013 (has links)
Interpretation and assimilation of data structures, organized information, and source code files occur frequently in software development. This kind of information is stored in text-based form, and understanding it requires great thoroughness and investment of time on the developer's part. This thesis describes the development of a utility program prototype which automates the parsing of XML data and of source code files in the programming languages C and C++, with the purpose of simplifying the interpretation process. The program creates and presents a visual graph of the structure found, using an algorithm that can present arbitrarily large XML files as well as a limited number of concurrently loaded source code files. The effects on interpretation time and reliability were evaluated in a survey among software development students. The results showed a measurable increase in the number of correct conclusions drawn by participants after studying the visual representation compared to its original text-based form. The time required was not measured other than subjectively by the users themselves, a predominant proportion of whom considered that less time was needed with the graphical representation. The thesis shows that using this or an equivalent tool can improve the assimilation of data structures by increasing reliability while decreasing the time required; however, the quantifiable gains in these results remain statistically largely uncertain.
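A minimal, standard-library-only sketch of the XML half of such a tool (not the thesis prototype): parse an XML document and emit its element structure as a Graphviz DOT graph that can then be rendered as a visual tree.

```python
# Parse an XML string and emit its element hierarchy as a DOT digraph for rendering.
import xml.etree.ElementTree as ET

def xml_to_dot(xml_text):
    root = ET.fromstring(xml_text)
    lines, counter = ["digraph xml {"], [0]

    def walk(elem, parent_id):
        node_id = counter[0]; counter[0] += 1
        lines.append(f'  n{node_id} [label="{elem.tag}"];')
        if parent_id is not None:
            lines.append(f"  n{parent_id} -> n{node_id};")
        for child in elem:
            walk(child, node_id)

    walk(root, None)
    lines.append("}")
    return "\n".join(lines)

print(xml_to_dot("<config><db><host/><port/></db><log/></config>"))
```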
