81

Árvores de Ukkonen: caracterização combinatória e aplicações / Ukkonen's tree: combinatorial characterization and applications

Gustavo Akio Tominaga Sacomoto 08 February 2011 (has links)
The suffix tree is a data structure that represents, in linear space, all factors of a given word, and it has many practical applications. In this work, we define a more general structure: Ukkonen's tree. We prove several combinatorial properties for it, among them its minimality in a precise sense. We believe that the presentation offered here, besides being more general than that of suffix trees, has the advantage of giving an explicit description of the tree's topology, its vertices, edges and labels, which we have not seen in any other work. As applications, we also present the sparse suffix tree (which stores only a subset of the suffixes) and the k-factor tree (which stores only the factors of length k, instead of the suffixes), both defined as special cases of Ukkonen's tree. For sparse suffix trees we propose a new construction algorithm running in O(n) time and O(m) space, where n is the size of the word and m is the number of stored suffixes. For k-factor trees, we propose a new online algorithm running in O(n) time and space, where n is the size of the word.
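To make the underlying idea concrete, here is a minimal illustrative sketch (plain Python, not the linear-time algorithms of the thesis): a naive, uncompressed suffix trie, which already exhibits the defining property that a string is a factor of the word exactly when it labels a path from the root. The suffix tree discussed above compresses the unary paths of such a trie to reach linear space.

```python
class Node:
    def __init__(self):
        self.children = {}

def build_suffix_trie(word):
    """Insert every suffix of `word` into an uncompressed trie, so that every
    factor of `word` labels exactly one path starting at the root."""
    root = Node()
    for i in range(len(word)):
        node = root
        for ch in word[i:]:
            node = node.children.setdefault(ch, Node())
    return root

def contains_factor(root, factor):
    """True iff `factor` occurs somewhere in the original word."""
    node = root
    for ch in factor:
        if ch not in node.children:
            return False
        node = node.children[ch]
    return True

root = build_suffix_trie("banana")
assert contains_factor(root, "nan")       # "nan" is a factor of "banana"
assert not contains_factor(root, "nab")   # "nab" is not
```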
82

Localisation et cartographie simultanées par optimisation de graphe sur architectures hétérogènes pour l’embarqué / Embedded graph-based simultaneous localization and mapping on heterogeneous architectures

Dine, Abdelhamid 05 October 2016 (has links)
Simultaneous Localization And Mapping (SLAM) is the process that allows a robot exploring an unknown environment to reconstruct a map of it while, at the same time, localizing itself on that map. In this thesis, we are interested in graph-based SLAM, which uses a graph to represent and solve the SLAM problem: graph optimization consists in finding the graph configuration (trajectory and map) that best matches the constraints introduced by the sensor measurements. Graph optimization has a high algorithmic complexity and requires significant computational and memory resources, particularly when exploring large areas, which limits the use of this method in real-time embedded systems. This thesis contributes to reducing the computational complexity of graph-based SLAM. Our approach rests on two complementary axes: the in-memory representation of the data and the implementation on embedded heterogeneous architectures. On the first axis, we propose an incremental data structure to represent and then optimize the graph efficiently. On the second axis, we explore the use of recent heterogeneous architectures to speed up graph-based SLAM; we propose an implementation model suited to embedded applications and highlight the advantages and disadvantages of the evaluated architectures, namely GPU-based and FPGA-based SoCs.
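As a heavily simplified illustration of what "finding the graph configuration that best matches the constraints" means, the following hypothetical sketch optimizes a tiny one-dimensional pose graph by linear least squares; real graph-based SLAM works with 2D/3D poses, nonlinear constraints, and sparse solvers.

```python
import numpy as np

# Each constraint says: pose j minus pose i should equal the measurement z.
constraints = [
    (0, 1, 1.0),   # odometry: x1 - x0 ≈ 1.0
    (1, 2, 1.1),   # odometry: x2 - x1 ≈ 1.1
    (2, 3, 0.9),   # odometry: x3 - x2 ≈ 0.9
    (0, 3, 3.1),   # loop closure: x3 - x0 ≈ 3.1 (inconsistent with odometry)
]
n = 4
# Build the linear least-squares system A x ≈ b, fixing x0 = 0 as the gauge.
A = np.zeros((len(constraints) + 1, n))
b = np.zeros(len(constraints) + 1)
for row, (i, j, z) in enumerate(constraints):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, z
A[-1, 0] = 1.0  # prior: x0 = 0

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # -> approximately [0.  1.025  2.15  3.075]: the 0.1 loop-closure
          #    residual is spread over the whole trajectory
```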
83

Méthodes et structures non locales pour la restauration d'images et de surfaces 3D / Non local methods and structures for images and 3D surfaces restoration

Guillemot, Thierry 03 February 2014 (has links)
In recent years, digital acquisition technologies have steadily improved, making it possible to capture real-world objects and scenes as ever higher-quality datasets. However, the acquired signal remains corrupted by defects that cannot be corrected by the hardware and require adapted restoration methods. Until the mid-2000s, these approaches relied only on local processing of the damaged signal. With the improvement of computing performance, the filter support could be extended to the entire acquired dataset by exploiting its self-similar nature. These non-local approaches have mainly been used to restore regular, structured data such as images; in the extreme case of irregular, unstructured data such as 3D point clouds, their adaptation has so far received little attention. With the growing amount of data exchanged over communication networks, new non-local methods have recently been proposed that improve restoration quality by using an a priori model extracted from large sample sets. However, such methods remain too costly in time and memory. In this thesis, we first propose to extend non-local methods to 3D point clouds by defining a point-based surface that exploits the self-similar nature of the point cloud. We then introduce a new flexible and generic data structure, the CovTree, able to learn the distributions of a large set of samples within a limited memory budget. Finally, we generalize collaborative restoration methods applied to 2D and 3D data, using our CovTree to learn an a priori statistical model from a large dataset.
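The following toy sketch (a 1D non-local means filter, given purely for illustration and unrelated to the thesis's point-cloud and CovTree contributions) shows the non-local principle described above: each sample is restored from all samples whose neighborhoods look similar, rather than from its spatial neighbors only.

```python
import numpy as np

def nl_means_1d(signal, patch=3, h=0.1):
    """Denoise a 1D signal: each sample becomes a weighted average of all
    samples whose surrounding patches look similar (self-similarity)."""
    n = len(signal)
    pad = patch // 2
    padded = np.pad(signal, pad, mode="reflect")
    patches = np.stack([padded[i:i + patch] for i in range(n)])  # (n, patch)
    out = np.empty(n)
    for i in range(n):
        d2 = np.sum((patches - patches[i]) ** 2, axis=1)  # patch distances
        w = np.exp(-d2 / (h * h))                          # similarity weights
        out[i] = np.dot(w, signal) / w.sum()
    return out

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = clean + 0.2 * rng.standard_normal(200)
denoised = nl_means_1d(noisy, patch=5, h=0.4)
```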
84

Effective Graph-Based Content-Based Image Retrieval Systems for Large-Scale and Small-Scale Image Databases

Chang, Ran 01 December 2013 (has links)
This dissertation proposes two novel manifold graph-based ranking systems for Content-Based Image Retrieval (CBIR). The two proposed systems exploit the synergy between relevance-feedback-based transductive short-term learning and semantic-feature-based long-term learning to improve retrieval performance. The proposed systems first apply an active learning mechanism to construct a log of users' relevance feedback and to extract high-level semantic features for each image. They then create manifold graphs that incorporate both low-level visual similarity and high-level semantic similarity, achieving more meaningful structures for the image space. Finally, asymmetric relevance vectors are created to propagate relevance scores from labeled to unlabeled images via the manifold graphs. Extensive experimental results demonstrate that the two proposed systems outperform state-of-the-art CBIR systems under both correct and erroneous user feedback.
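As a rough illustration of score propagation on a manifold graph, here is a sketch of the classical closed-form manifold-ranking solution f = (I - alpha * S)^-1 y on a toy similarity graph. The dissertation's systems are more elaborate (asymmetric relevance vectors, combined visual/semantic graphs); this shows only the generic mechanism.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.9):
    """Propagate relevance scores y over a similarity graph W using the
    closed-form manifold-ranking solution f = (I - alpha*S)^-1 y, where S
    is the symmetrically normalized affinity matrix."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)

# Toy graph: 5 images, edge weights = visual similarity (assumed values).
W = np.array([
    [0.0, 0.9, 0.8, 0.1, 0.0],
    [0.9, 0.0, 0.7, 0.0, 0.1],
    [0.8, 0.7, 0.0, 0.2, 0.0],
    [0.1, 0.0, 0.2, 0.0, 0.9],
    [0.0, 0.1, 0.0, 0.9, 0.0],
])
y = np.array([1.0, 0, 0, 0, 0])    # image 0 marked relevant by the user
scores = manifold_ranking(W, y)
print(np.argsort(-scores))          # images ranked by propagated relevance
```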
85

E-learningový kurz pro výuku Algoritmů / E-learning Course for Algorithms Teaching

Hrozány, Boris Unknown Date (has links)
The aim of this work is to design and implement an e-learning system targeted at teaching the Algorithms course in English, so that foreign students can attend the course as well. The introductory part offers an overview of basic concepts related to e-learning systems: conceptual and detailed design, system development, testing, maintenance and improvement. The theoretical part of the project focuses on selected topics from data structures and from searching and sorting algorithms. The work covers the development of the e-learning system from the very beginning through to the activities related to its release. Through the administrator interface, the application enables administration of courses and users; another part of the system serves for checking students' results. The student part of the application makes the Algorithms course materials available and provides convenient means of knowledge testing, with test results processed into statistics within the system.
86

Improving Value Proposition by Structuring Product Information Flow / Förbättra värdeerbjudande genom att strukturera produktinformationsflödet

Dahlström, Daniel, Stark, Jacob January 2023 (has links)
This thesis investigates the impact that the management of product information has on the well-studied concepts of value propositions and business model innovation. Previously, it was not sufficiently documented how this type of information influences these concepts. As the pressure to digitalize increases and traditional industries risk being overthrown, understanding the factors affecting a company's value proposition is more important than ever. To this end, this thesis outlines what product information is, for which stakeholders it matters, and during which phases of a product's lifecycle it has significance. Further, it gives four real-life examples of current use states ("as-is") at a manufacturer of industrial motors (the case company). These examples are then analyzed and developed into proposed states ("to-be") based on findings from live observation at the factory, interviews with relevant stakeholders, and internal documents from the case company. Two main conclusions can be drawn from the analysis of these real-life examples. First, well-managed product data can positively impact the value proposition. Second, determining which product information each stakeholder should have access to is too complex; it is more manageable to make all product data available to all stakeholders, via a data structure that minimizes the learning curve for users. The two conclusions have several practical and theoretical implications, which are also discussed in detail. Finally, this thesis contributes to the field of industrial management by specifically outlining how product information affects the value proposition, which in turn influences business models.
87

Bridging of complex data structures between xtUML domains / Bryggning av komplexa datastrukturer mellan xtUML-domäner

Elgh, Jesper January 2022 (has links)
Executable and Translatable UML (xtUML) is a high-level software development method in which models are built from UML diagrams and action-language code, and model compilers translate a model into an executable program in another programming language. A main benefit of developing xtUML models is that the program's documentation is created at the same time as the program itself, in the shape of UML diagrams. It is therefore important that the UML diagrams give the reader a clear understanding of how the program works without having to look at the code. One problem is the use of arrays and structured data types in the models, because they can make a model harder to understand; it would therefore be preferable to avoid them and instead model arrays and structured data types as classes with relations between them, as sketched below. This becomes an issue when an array must be sent to another domain in the system, since a lot of action-language code has to be written, which is inconvenient. A solution to this problem would be to send class object instances directly to other domains. This thesis proposes such a solution, along with alternative ways of solving the problem, and implements it in an existing model compiler. The results show that compilation is slower than with the built-in arrays and structured data types, but as fast as or faster than letting the user write their own code for sending object instances. For a small model, the execution time of the new solution increases considerably compared to arrays and structured data types, and the size of the executable almost doubles; for bigger models this difference may become negligible.
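To illustrate the modeling trade-off described above in a language-neutral way (plain Python rather than xtUML action language, purely as an analogy with assumed names), compare an opaque array attribute with elements modeled as a related class:

```python
from dataclasses import dataclass, field

# "Array" style: the collection is an opaque attribute of one class, so its
# structure is invisible in a class diagram.
@dataclass
class SensorArrayStyle:
    readings: list[float] = field(default_factory=list)

# "Classes with relations" style: each element is an instance of a related
# class, so the 1..* association appears in the diagram itself.
@dataclass
class Reading:
    value: float

@dataclass
class SensorRelationStyle:
    readings: list[Reading] = field(default_factory=list)  # 1..* association

    def add(self, value: float) -> Reading:
        r = Reading(value)
        self.readings.append(r)
        return r

s = SensorRelationStyle()
s.add(21.5)
s.add(22.0)
# Sending the related instances to "another domain" then amounts to handing
# over object references, rather than serializing an array by hand.
print([r.value for r in s.readings])
```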
88

Supporting Applications Involving Dynamic Data Structures and Irregular Memory Access on Emerging Parallel Platforms

Ren, Bin 09 September 2014 (has links)
No description available.
89

Theory and numerical integration of subsurface light transport

Milaenen, David 08 1900 (has links)
In image synthesis, reproducing the complex appearance of objects with subsurface light scattering, such as wax, marble and skin, greatly contributes to the realism of an image. Unfortunately, this added realism comes at a high computational cost. Models based on diffusion theory aim to reduce this cost by simulating the physical behaviour of subsurface light scattering while imposing smoothness constraints on the incident and outgoing light fields. An important component of these models is how they are employed to hierarchically evaluate the numerical integral of lighting over the surface of an object. This thesis first reviews the existing literature on realistic subsurface lighting simulation, before investigating in more depth the application and extension of modern diffusion models in image synthesis. In doing so, we propose and evaluate a new hierarchical numerical integration technique that uses a novel frequency analysis of the incident and outgoing light fields to reliably adapt the sampling rate during integration. We realize the resulting theory in the context of several state-of-the-art diffusion models, providing a marked improvement in their efficiency and accuracy.
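As a generic illustration of adapting the sampling rate during integration, here is a simple recursive 1D integrator that refines only where the integrand varies quickly. The thesis's technique is far more specific, driving refinement with a frequency analysis of incident and outgoing light fields over surfaces; this sketch only gestures at the idea.

```python
import math

def integrate_adaptive(f, a, b, tol=1e-6):
    """Recursively subdivide [a, b] until the midpoint refinement of the
    trapezoid estimate stops changing, i.e. the local variation is resolved."""
    mid = 0.5 * (a + b)
    coarse = 0.5 * (b - a) * (f(a) + f(b))               # one segment
    fine = 0.25 * (b - a) * (f(a) + 2 * f(mid) + f(b))   # two segments
    if abs(fine - coarse) < tol:
        return fine
    return (integrate_adaptive(f, a, mid, tol / 2)
            + integrate_adaptive(f, mid, b, tol / 2))

# The smooth linear part needs few samples; the oscillatory part is refined.
val = integrate_adaptive(lambda x: math.sin(20 * x) + x, 0.0, 1.0, 1e-6)
print(val)   # ≈ (1 - cos(20))/20 + 0.5 ≈ 0.5296
```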
90

Lh*rs p2p : une nouvelle structure de données distribuée et scalable pour les environnements Pair à Pair / Lh*rsp2p: a new scalable and distributed data structure for Peer-to-Peer environments

Yakouben, Hanafi 14 May 2013 (has links)
We propose a new scalable and distributed data structure termed LH*RSP2P, designed for peer-to-peer (P2P) environments. Application data forms a file of records identified by primary keys. Records reside in buckets on peers, addressed by distributed linear hashing (LH*); splits dynamically create new buckets to accommodate inserts. Key-based access to a record uses at most one hop, and a scan of the file proceeds in at most two rounds; these results are among the best at present. An LH*RSP2P file is also protected against churn: parity calculation recovers from the unavailability of up to k buckets, where k ≥ 1 is a scalable parameter. A new type of query, qualified as sure, also protects against access to any out-of-date bucket. We prove the properties of our SDDS formally, with a prototype implementation and experiments. LH*RSP2P appears useful for Big Data manipulations, especially over RamClouds.
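For readers unfamiliar with the addressing scheme underlying LH*, here is a minimal sketch of the classic linear-hashing address computation; the distribution, parity coding and P2P layers of LH*RSP2P are deliberately omitted.

```python
def lh_address(key, level, split_pointer, n0=1):
    """Map an integer key to a bucket under linear hashing.

    level:         number of completed doubling rounds
    split_pointer: next bucket to be split in the current round
    n0:            initial number of buckets
    """
    addr = key % (n0 * 2 ** level)            # coarse hash h_level
    if addr < split_pointer:                  # bucket already split this round:
        addr = key % (n0 * 2 ** (level + 1))  # use the finer hash h_{level+1}
    return addr

# With level=2, split_pointer=1, n0=1: buckets 0..3 exist from h_2, plus
# bucket 4, created by splitting bucket 0 with h_3.
for key in [0, 3, 4, 7, 8]:
    print(key, "->", lh_address(key, level=2, split_pointer=1))
# 0 -> 0, 3 -> 3, 4 -> 4, 7 -> 3, 8 -> 0
```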
