61
[en] DATA STRUCTURES FOR TIME SERIES / [pt] ESTRUTURAS DE DADOS PARA SERIES TEMPORAIS
Caio Dias Valentim, 24 April 2013
[en] Time series are important tools for the analysis of events that occur in different fields of human knowledge, such as medicine, physics, meteorology and finance. A common task in analysing time series is the search for infrequent events, as these usually reflect facts of interest about the domain of the series. In this study, we develop techniques for the detection of rare events in time series. Formally, a time series A = (a_1, a_2, ..., a_n) is a sequence of real values indexed by the integers from 1 to n. Given an integer t and a real number d, we say that a pair of indexes i and j forms a (t, d)-event in A if and only if 0 < j - i ≤ t and a_j - a_i ≥ d. In this case, i is the beginning of the event and j is its end. The parameters t and d control, respectively, the time window in which the event can occur and the magnitude of the variation in the series. We thus focus on two types of queries related to (t, d)-events: What are the (t, d)-events in a series A? And which indexes of A are the beginning of at least one (t, d)-event? Throughout this study we discuss, from both the theoretical and the practical points of view, several data structures and algorithms to answer these two queries.
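To make the second query concrete, the sketch below (Python, 0-based indexing rather than the abstract's 1-based) lists the indexes that begin at least one (t, d)-event: i starts an event exactly when the maximum of a_{i+1}, ..., a_{i+t} is at least a_i + d, and a monotonic deque yields all these window maxima in O(n) total time. This is only an illustration of the problem statement, not one of the data structures studied in the thesis.

    from collections import deque

    def event_starts(a, t, d):
        # Indexes i for which some j with 0 < j - i <= t has a[j] - a[i] >= d.
        # Equivalent test: max(a[i+1 .. i+t]) >= a[i] + d.  Assumes t >= 1.
        n = len(a)
        starts = []
        window = deque()           # indexes of a; values decrease front to back
        for i in range(n - 2, -1, -1):
            j = i + 1              # element entering the window (i, i+t]
            while window and a[window[-1]] <= a[j]:
                window.pop()
            window.append(j)
            if window[0] > i + t:  # at most one index expires per step
                window.popleft()
            if a[window[0]] - a[i] >= d:
                starts.append(i)
        return starts[::-1]

    print(event_starts([1, 3, 0, 4, 2], t=2, d=3))  # [2]: a[3] - a[2] = 4 >= 3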
62
Représentation et échange de données tridimensionnelles géolocalisées de la ville / Representation and exchange of three-dimensional geolocated city data
Gaillard, Jeremy, 22 May 2018
Advances in 3D data acquisition techniques (laser scanning, photography, etc.) have led to a sharp increase in the quantity of available geolocated 3D data. More and more cities make their digital 3D models available in open access. To ensure the interoperability of different data sources, work has been done on the standardization of exchange protocols and file formats. Moreover, thanks to new web standards and the increase in the processing power of personal devices, it has become possible in recent years to integrate rich content, such as 3D applications, directly in a web page. Together, these two factors now make it possible to distribute and exploit three-dimensional city data in a web browser.

My thesis, funded through a CIFRE agreement with the company Oslandia, deals with the three-dimensional representation of the city on the Web. More precisely, the goal is to retrieve and visualize, from a thin client, large quantities of city data hosted on one or more distant servers. This data is heterogeneous: it can be 3D representations of buildings (meshes) and terrain (height maps), but also semantic information such as pollution levels (volumes), the location of bike stations (points) and the number of bikes available, etc.

During my thesis, I explored various ways of organising this data in generic structures in order to allow the progressive transmission of large volumes of 3D data. Taking the multi-scale nature of the city into account is a key element in the design of these structures. Adapting the visualisation of the data to the user is another major axis of my thesis. Because of the large number of existing use cases for the digital city, the user's needs vary greatly: zones of interest emerge, data must be represented in a specific way, and so on. I explore different ways of satisfying these needs, either by prioritising some data over others during loading, or by generating personalised scenes according to preferences specified by the user.
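The loading-stage prioritisation mentioned above can be illustrated with a toy heuristic: fetch first the tiles whose geometric error is large relative to their distance from the camera, a common screen-space-error rule. The Tile fields, tile names and the error/distance formula below are illustrative assumptions, not the structures actually proposed in the thesis.

    import heapq
    import math
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Tile:
        priority: float
        name: str = field(compare=False)
        center: tuple = field(compare=False)        # (x, y, z) in world units
        geometric_error: float = field(compare=False)

    def load_order(tiles, camera):
        # Priority = geometric error / distance to camera: coarse, nearby
        # tiles come out first.  Negated because heapq is a min-heap.
        heap = []
        for t in tiles:
            dist = math.dist(t.center, camera) or 1e-6
            t.priority = -t.geometric_error / dist
            heapq.heappush(heap, t)
        return [heapq.heappop(heap).name for _ in range(len(heap))]

    tiles = [
        Tile(0.0, "city-root", (0, 0, 0), geometric_error=100.0),
        Tile(0.0, "district-7", (120, 0, 40), geometric_error=10.0),
        Tile(0.0, "building-42", (125, 0, 42), geometric_error=1.0),
    ]
    print(load_order(tiles, camera=(130.0, 0.0, 45.0)))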
63
Thread Safe Multi-Tier Priority Queue for Managing Pending Events in Multi-Threaded Discrete Event Simulations
DePero, Matthew Michael, 28 August 2018
No description available.
64
TCB Minimizing Model of Computation (TMMC)
Bushra, Naila, 13 December 2019
The integrity of information systems is predicated on the integrity of the processes that manipulate data. Processes are conventionally executed on the von Neumann (VN) architecture. The VN computation model is plagued by a large trusted computing base (TCB), due to the need to include memory and input/output devices inside the TCB. This situation is becoming increasingly unjustifiable due to the steady addition of complex features such as platform virtualization, hyper-threading, etc. In this research work, we propose a new model of computation, the TCB minimizing model of computation (TMMC), which explicitly seeks to minimize the TCB, viz., the hardware and software that need to be trusted to guarantee the integrity of execution of a process. More specifically, in one realization of the model, the TCB can be shrunk to include only a low-complexity module; in a second realization, the TCB can be shrunk to include nothing, by executing processes in a blockchain network. The practical utilization of TMMC using a low-complexity trusted module, as well as a blockchain network, is detailed in this research work. The utility of the TMMC model in guaranteeing the integrity of execution of a wide range of useful algorithms (graph algorithms, computational geometric algorithms, NP algorithms, etc.), and of complex large-scale processes composed of such algorithms, is investigated.
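As a loose illustration of how a low-complexity trusted module could vouch for the integrity of an execution, the sketch below folds an execution trace into a single hash-chain digest; a verifier that replays the reported steps must reproduce the digest, and any tampering with the history changes it. This is an assumption-laden analogy, not the actual TMMC construction.

    import hashlib

    def chain_digest(steps, seed=b"genesis"):
        # Fold a sequence of (input, output) pairs into one running digest.
        # The module never executes the process; it only witnesses the trace.
        state = hashlib.sha256(seed).digest()
        for x, y in steps:
            state = hashlib.sha256(state + x + y).digest()
        return state

    trace = [(b"push 3", b"ok"), (b"push 4", b"ok"), (b"add", b"7")]
    tampered = trace[:2] + [(b"add", b"8")]
    print(chain_digest(trace) == chain_digest(tampered))  # False: tamper detected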
65
Adaptive Slicing in Additive Manufacturing Process using a Modified Boundary Octree Data Structure
Siraskar, Nandkumar S., January 2012
No description available.
66
AN EFFICIENT ALGORITHM FOR CONVERTING POLYHEDRAL OBJECTS WITH WINGED-EDGE DATA STRUCTURE TO OCTREE DATA STRUCTURE
VELAYUTHAM, PRAKASH SANKAREN, 31 May 2005
No description available.
67
Impacts of Ignoring Nested Data Structure in Rasch/IRT Model and Comparison of Different Estimation Methods
Chungbaek, Youngyun, 06 June 2011
This study investigates the impact of ignoring nested data structure in the Rasch/1PL item response theory (IRT) model via two-level and three-level hierarchical generalized linear models (HGLM). Rasch/IRT models are frequently used in educational and psychometric research on data obtained from multistage cluster sampling, which is likely to violate the assumption of independent observations of examinees that these models require. This violation, however, is ignored in current standard practice, which applies the standard Rasch/IRT model to large-scale testing data. A simulation study (Study Two) was conducted to address the effects of ignoring nested data structure in Rasch/IRT models under various conditions. It followed a simulation study (Study One) that compared the accuracy and efficiency of three estimation methods commonly used in HGLM: Penalized Quasi-Likelihood (PQL), Laplace approximation, and Adaptive Gaussian Quadrature (AGQ).

As expected, PQL tended to produce seriously biased item difficulty and ability variance estimates, whereas Laplace and AGQ were almost unbiased in both two-level and three-level analyses. As for root mean squared error (RMSE), the three methods performed without substantive differences for item difficulty and ability variance estimates in both two-level and three-level analyses, except for level-2 ability variance estimates in three-level analyses. In general, Laplace and AGQ performed similarly well in terms of bias and RMSE of parameter estimates; however, Laplace exhibited a much lower convergence rate than AGQ in three-level analyses.

The results from AGQ, which produced the most accurate and stable estimates of the three computational methods, demonstrated that the theoretical standard errors (SEs), i.e., asymptotic information-based SEs, were underestimated by up to 34% when two-level analyses were applied to data generated from a three-level model, implying that the Type I error rate is inflated when nested data structures are ignored in Rasch/IRT models. The underestimation of the theoretical standard errors grew substantively more severe as the true ability variance increased or as the number of students within schools increased, regardless of test length or the number of schools. / Ph. D.
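As an illustration of the data-generating side of such a simulation, the sketch below draws dichotomous responses under a three-level Rasch structure (item responses nested in students nested in schools); the dimensions and variance components are arbitrary assumptions, not the study's actual design.

    import numpy as np

    def simulate_nested_rasch(n_schools=30, n_students=50, n_items=20,
                              school_sd=0.7, student_sd=1.0, seed=0):
        # P(X = 1) = logistic(theta - b), where each examinee's ability theta
        # is the sum of a school effect (level 3) and a student effect (level 2).
        rng = np.random.default_rng(seed)
        b = np.linspace(-2.0, 2.0, n_items)            # item difficulties
        rows = []
        for s in range(n_schools):
            u_school = rng.normal(0.0, school_sd)
            theta = u_school + rng.normal(0.0, student_sd, n_students)
            p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
            rows.append((rng.random((n_students, n_items)) < p).astype(int))
        return np.concatenate(rows)                    # (schools*students, items)

    responses = simulate_nested_rasch()
    print(responses.shape, responses.mean())           # (1500, 20), roughly 0.5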
68
Árvores de Ukkonen: caracterização combinatória e aplicações / Ukkonen's tree: combinatorial characterization and applications
Sacomoto, Gustavo Akio Tominaga, 08 February 2011
The suffix tree is a data structure that represents, in linear space, all factors of a given word, with several examples of practical applications. In this work, we define a more general structure: Ukkonen's tree. We prove many combinatorial properties for it, among them its minimality in a precise sense. We believe that this presentation, besides being more general than that of suffix trees, has the advantage of offering an explicit description of the tree topology, its vertices, edges and labels, which we have not seen in any other work. As applications, we also present the sparse suffix tree (which stores only a subset of the suffixes) and the k-factor tree (which stores only the substrings of length k, instead of the suffixes), both defined as special cases of Ukkonen's tree. We propose a new construction algorithm for sparse suffix trees with O(n) time and O(m) space, where n is the size of the word and m is the number of suffixes. For k-factor trees, we propose a new online algorithm with O(n) time and space, where n is the size of the word.
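To make the k-factor structure concrete, here is a deliberately naive Python sketch that indexes all length-k substrings of a word in an uncompressed trie. It costs O(nk) time and space rather than the O(n) of the online algorithm proposed in the dissertation, and is meant only to illustrate what the structure stores.

    class TrieNode:
        __slots__ = ("children",)
        def __init__(self):
            self.children = {}

    def k_factor_trie(word, k):
        # Insert every length-k substring (k-factor) of word into a trie.
        root = TrieNode()
        for i in range(len(word) - k + 1):
            node = root
            for ch in word[i:i + k]:
                node = node.children.setdefault(ch, TrieNode())
        return root

    def contains(root, factor):
        # True iff factor (len <= k) is a prefix of some indexed k-factor.
        node = root
        for ch in factor:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

    root = k_factor_trie("banana", 3)   # k-factors: ban, ana, nan, ana
    print(contains(root, "nan"), contains(root, "nab"))  # True False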
69
[en] BOOLEAN OPERATIONS WITH COMPOUND SOLIDS REPRESENTED BY BOUNDARY / [pt] OPERAÇÕES BOOLEANAS COM SÓLIDOS COMPOSTOS REPRESENTADOS POR FRONTEIRA
MARCOS CHATAIGNIER DE ARRUDA, 13 July 2005
[en] In a solid modeler, one of the most powerful tools to create three-dimensional objects with any level of geometric complexity is the application of the Boolean set operations. They are intuitive and popular ways to combine solids, based on the operations applied to sets. The main types of Boolean operations commonly applied to solids are union, intersection and difference. When there is practical interest, in order to assure that the resulting objects have the same dimension as the original objects, without loose or dangling parts, the regularization process is applied. To regularize means to restrict the result in such a way that only filling volumes are allowed. In practice, regularization is performed by classifying the topological elements and removing the lower-dimensional structures. The objective of this work is the development of a generic algorithm that allows the application of the Boolean set operations in a geometric modeling environment applied to finite element analysis, and that aggregates the following functionalities: working with an undefined number of topological entities (the Group concept), working with objects of different dimensions, working with non-manifold objects, working with objects that are not necessarily planar or polyhedral, and assuring efficiency, robustness and applicability in any modeling environment based on B-Rep representation. In this context, the implementation of the algorithm in a pre-existing geometric modeler named MG is presented, following the concept of object-oriented programming and keeping the user interface simple and efficient.
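The effect of regularization is easiest to see in one dimension, where a "solid" is a union of closed intervals: ordinary set intersection can leave zero-dimensional debris (isolated touching points), and regularization discards it. The sketch below is a toy analogy of the classify-and-discard step, not the thesis's B-Rep algorithm.

    def regularized_intersection(a, b, eps=1e-9):
        # a, b: 1-D solids given as lists of disjoint closed intervals (lo, hi).
        # Keep only full-dimensional pieces; degenerate points (hi == lo) are
        # the 1-D analogue of the lower-dimensional structures removed by
        # regularization.
        out = []
        for lo1, hi1 in a:
            for lo2, hi2 in b:
                lo, hi = max(lo1, lo2), min(hi1, hi2)
                if hi - lo > eps:
                    out.append((lo, hi))
        return sorted(out)

    # Two solids touching at a single point intersect in that point only,
    # so the regularized result is empty.
    print(regularized_intersection([(0.0, 1.0)], [(1.0, 2.0)]))  # []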
70
Program Understanding Techniques in Database Reverse Engineering
Henrard, Jean, 19 September 2003
For many years software engineering has primarily focused on the development of new systems and neglected maintenance and reengineering of legacy applications. Maintenance typically represents 70% of the cost during the life cycle of a system. In order to allow an efficient and safe maintenance of a legacy system, we need to reverse engineer it in order to reconstruct its missing or out-of-date documentation. In data-oriented applications the reverse engineering complexity can be broken down by considering that the database can be reverse engineered independently of the procedural components.
Database reverse engineering can be defined as the process of recovering the schema(s) of an application's database from the database declaration text and the program source code that uses the data, in order to understand their exact structure and meaning. A database reverse engineering methodology breaks down into three processes: project preparation, data structure extraction, which recovers the database's logical schema, and data structure conceptualization, which interprets the logical schema in conceptual terms.
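As a toy illustration of the data structure extraction process, the sketch below recovers a crude logical schema from SQL declaration text. Real database reverse engineering must, as noted above, also mine the program source code for implicit constructs; the table definitions here are invented for the example.

    import re

    DDL = """
    CREATE TABLE customer (
        id INTEGER PRIMARY KEY,
        name VARCHAR(80),
        city_id INTEGER REFERENCES city(id)
    );
    CREATE TABLE city (
        id INTEGER PRIMARY KEY,
        name VARCHAR(60)
    );
    """

    def extract_schema(ddl):
        # Recover a rough logical schema {table: [column, ...]} from DDL text.
        schema = {}
        for table, body in re.findall(r"CREATE TABLE (\w+)\s*\((.*?)\);", ddl, re.S):
            cols = [line.strip().split()[0]
                    for line in body.split(",") if line.strip()]
            schema[table] = cols
        return schema

    print(extract_schema(DDL))
    # {'customer': ['id', 'name', 'city_id'], 'city': ['id', 'name']}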
In order to validate our methodology and program understanding techniques, we have developed tools to support them. Those tools have proved absolutely necessary to perform database reverse engineering of medium to large applications in reasonable time and at reasonable cost. To cut down on the cost of large projects, we have stressed the need for automation to reduce the manual work of the analyst. Our experience with real-size projects has taught us that the management aspects of a project are essential success factors. The management of a project comprises different aspects, such as explaining the database reverse engineering process, evaluating its cost, and evaluating its results.