21

Distributed Inference for Degenerate U-Statistics with Application to One and Two Sample Test

Atta-Asiamah, Ernest January 2020 (has links)
In many hypothesis testing problems, such as the one-sample and two-sample test problems, the test statistics are degenerate U-statistics. One of the challenges in practice is the computation of U-statistics for a large sample size. Moreover, for degenerate U-statistics the limiting distribution is a mixture of weighted chi-squares involving the eigenvalues of the kernel of the U-statistic, so it is not straightforward to construct the rejection region based on this asymptotic distribution. In this research, we aim to reduce the computational complexity of degenerate U-statistics and propose an easy-to-calibrate test statistic by using the divide-and-conquer method. Specifically, we randomly partition the full n data points into k_n disjoint groups of equal size, compute the U-statistic on each group, and combine them by averaging to obtain a statistic T_n. We prove that T_n has a standard normal limiting distribution. In this way, the running time is reduced from O(n^m) to O(n^m / k_n^m), where m is the order of the one-sample U-statistic. Moreover, for a given significance level α, it is easy to construct the rejection region. We apply our method to the goodness-of-fit test and the two-sample test. The simulation and real-data analysis show that the proposed test achieves high power and fast running time for both one- and two-sample tests.
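A minimal Python sketch of the group-and-average scheme, assuming a toy degenerate product kernel h(x, y) = xy and an illustrative group count; neither is the thesis's exact choice:

```python
import numpy as np
from itertools import combinations

def u_statistic(x, h, m=2):
    """Order-m U-statistic: average of the kernel h over all m-subsets of x."""
    vals = [h(*subset) for subset in combinations(x, m)]
    return np.mean(vals)

def distributed_u_statistic(x, h, k, m=2, seed=0):
    """Randomly partition x into k equal-size groups, compute the U-statistic
    on each group, and average the results (the statistic T_n above)."""
    rng = np.random.default_rng(seed)
    x = rng.permutation(x)
    groups = np.array_split(x, k)
    return np.mean([u_statistic(g, h, m) for g in groups])

# Illustrative degenerate kernel h(x, y) = x * y: E[h(x, Y)] = 0 when E[Y] = 0.
h = lambda a, b: a * b
x = np.random.default_rng(1).normal(size=2000)
print(distributed_u_statistic(x, h, k=20))
```

Each group holds only n/k_n points, so its U-statistic costs O((n/k_n)^m) kernel evaluations, and since the k_n groups are independent they can be computed in parallel, giving the O(n^m / k_n^m) wall-clock cost quoted above.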
22

Divide-and-conquer based summarization framework for extracting affective video content

Mehmood, Irfan, Sajjad, M., Rho, S., Baik, S.W. 18 July 2019 (has links)
Recent advances in multimedia technology have led to tremendous increases in the available volume of video data, thereby creating a major requirement for efficient systems to manage such huge data volumes. Video summarization is one of the key techniques for accessing and managing large video libraries. Video summarization can be used to extract the affective contents of a video sequence to generate a concise representation of its content. Human attention models are an efficient means of affective content extraction. Existing visual-attention-driven summarization frameworks have high computational cost and memory requirements, as well as a lack of efficiency in accurately perceiving human attention. To cope with these issues, we propose a divide-and-conquer based framework for efficient summarization of big video data. We divide the original video data into shots, and an attention model is computed from each shot in parallel. A viewer's attention is based on multiple sensory perceptions, i.e., aural and visual, as well as the viewer's neuronal signals. The aural attention model is based on the Teager energy, instant amplitude, and instant frequency, whereas the visual attention model employs multi-scale contrast and motion intensity. Moreover, the neuronal attention is computed using the beta-band frequencies of neuronal signals. Next, an aggregated attention curve is generated using an intra- and inter-modality fusion mechanism. Finally, the affective content in each video shot is extracted. The fusion of multimedia and neuronal signals provides a bridge that links the digital representation of multimedia with the viewer's perceptions. Our experimental results indicate that the proposed shot-detection based divide-and-conquer strategy mitigates the time and computational complexity. Moreover, the proposed attention model provides an accurate reflection of user preferences and facilitates the extraction of highly affective and personalized summaries. / Supported by the ICT R&D program of MSIP/IITP. [2014(R0112-14-1014), The Development of Open Platform for Service of Convergence Contents].
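The aural channel builds on the discrete Teager-Kaiser energy operator, Psi[x(n)] = x(n)^2 - x(n-1)·x(n+1). A hedged Python illustration; the frame length and mean-pooling below are assumptions rather than the authors' exact pipeline:

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def aural_attention(signal, frame_len=1024):
    """Toy aural-attention curve: mean Teager energy per audio frame
    (frame length and mean-pooling are illustrative choices)."""
    signal = np.asarray(signal, dtype=float)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.array([teager_energy(f).mean() for f in frames])
```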
23

Diagramme de Voronoi généralisé pour un ensemble de polygones : algorithmes, réalisation et application en analyse de formes / Generalized Voronoi diagram for a set of polygons: algorithms, implementation, and application to shape analysis

Hu, Hai-Tao 01 July 1991 (has links) (PDF)
24

Biologically Inspired Modular Neural Networks

Azam, Farooq 19 June 2000 (has links)
This dissertation explores modular learning in artificial neural networks, driven mainly by inspiration from the neurobiological basis of human learning. The presented modularization approaches to neural network design and learning are inspired by engineering, complexity, psychological, and neurobiological considerations. The main theme of this dissertation is to explore the organization and functioning of the brain to discover new structural and learning inspirations that can subsequently be used to design artificial neural networks. Artificial neural networks are touted as a neurobiologically inspired paradigm that emulates the functioning of the vertebrate brain. The brain is a highly structured entity with localized regions of neurons specialized in performing specific tasks. On the other hand, mainstream monolithic feed-forward neural networks are generally unstructured black boxes, which is their major performance-limiting characteristic. The non-explicit structure and monolithic nature of current mainstream artificial neural networks make it impossible to systematically incorporate functional or task-specific a priori knowledge into the design process. The problems caused by these limitations are discussed in detail in this dissertation, and remedial solutions are presented that are driven by the functioning of the brain and its structural organization. This dissertation also presents an in-depth study of the currently available modular neural network architectures, highlights their shortcomings, and investigates new modular artificial neural network models designed to overcome those shortcomings. The resulting modular neural network models offer greater accuracy, better generalization, a comprehensible simplified neural structure, ease of training, and more user confidence. These benefits are readily apparent for certain problems, depending on the availability and use of a priori knowledge about the problems. The modular neural network models presented in this dissertation exploit the principle of divide and conquer in the design and learning of modular artificial neural networks. The divide-and-conquer strategy solves a complex computational problem by dividing it into simpler sub-problems and then combining the individual solutions to the sub-problems into a solution to the original problem. The divisions of a task considered in this dissertation are the automatic decomposition of the mappings to be learned, decomposition of the artificial neural networks to minimize harmful interaction during the learning process, and explicit decomposition of the application task into sub-tasks that are learned separately. The versatility and capabilities of the proposed modular neural networks are demonstrated by experimental results. A comparison of current modular neural network design techniques with the ones introduced in this dissertation is also presented for reference. The results presented in this dissertation lay a solid foundation for the design and learning of artificial neural networks with a sound neurobiological basis, leading to superior design techniques. Areas of future research are also presented. / Ph. D.
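As a hedged illustration of the divide-and-conquer principle at work in a modular network, the sketch below routes each input to one of two expert modules through a hard gate; the target function, gating rule, and linear experts are illustrative assumptions, not architectures from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target task: y = |x| -- hard for one linear model, easy when decomposed into
# two sub-tasks (x < 0 and x >= 0), each solved by its own linear module.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.abs(X[:, 0])

def gate(X):
    """Hard gating: route by the sign of the input (an illustrative decomposition)."""
    return (X[:, 0] >= 0).astype(int)

experts = []
for g in (0, 1):
    mask = gate(X) == g
    A = np.hstack([X[mask], np.ones((mask.sum(), 1))])  # linear model with bias
    w, *_ = np.linalg.lstsq(A, y[mask], rcond=None)     # train module on its sub-task only
    experts.append(w)

def predict(X):
    A = np.hstack([X, np.ones((len(X), 1))])
    preds = np.stack([A @ w for w in experts], axis=1)
    return preds[np.arange(len(X)), gate(X)]            # combine sub-solutions via the gate

print(np.abs(predict(X) - y).max())  # near zero: each module solves its piece exactly
```

The decomposition keeps the modules from interfering with each other during training, which is the "harmful interaction" the abstract refers to.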
25

Large-scale and high-quality multi-view stereo / Stéréo multi-vues à grande-échelle et de haute-qualité

Vu, Hoang Hiep 05 December 2011 (has links)
Acquisition of 3D models of real objects and scenes is useful, indeed indispensable, in many practical applications, such as digital archives, the game and entertainment industries, engineering, and advertising. There are two main methods for 3D acquisition: laser-based reconstruction (the active method) and image-based reconstruction from multiple photographs of the scene taken from different viewpoints (the passive method). While laser-based reconstruction achieves high accuracy, it is complex, expensive, and difficult to set up for large-scale outdoor reconstruction. Image-based, or multi-view stereo, methods are more versatile, easier, faster, and cheaper. When this thesis began, most multi-view methods could handle only low-resolution images taken under controlled conditions. This thesis targets multi-view stereo at both large scale and high accuracy. We significantly improve several previous methods and combine them into a remarkably effective multi-view pipeline with GPU acceleration. From high-resolution images, the pipeline produces highly complete and accurate meshes that achieve the best scores in many internationally recognized benchmarks. Aiming at even larger scale, we develop, on the one hand, divide-and-conquer approaches to reconstruct many small parts of a big scene; on the other hand, to combine these separate partial results, we create a new merging method that automatically and quickly fuses hundreds of meshes. With all these components, we successfully reconstruct highly accurate, watertight meshes of cities and historical monuments from large collections of high-resolution images (around 1600 images of 5 megapixels each).
26

Algorithmes Parallèles Efficaces Appliqués aux Calculs sur Maillages Non Structurés / Scalable and Efficient Algorithms for Unstructured Mesh Computations

Thebault, Loïc 14 October 2016 (has links)
The growing need for numerical simulation is driving larger and more complex computing centers and more HPC software. Current HPC system architectures face increasing demands for energy efficiency and performance. Recent advances in hardware design have increased the number of nodes and the number of cores per node. However, some resources do not scale at the same rate: the growing number of cores and parallel units implies less memory per core, a higher requirement for concurrency, more coherency traffic, and a higher cost for the coherency protocol. Most of the applications and runtimes currently in use struggle to scale with this trend. In the context of finite element methods, exposing massive parallelism in unstructured mesh computations, with efficient load balancing and minimal synchronization, is challenging. To make efficient use of these architectures, several parallelization strategies have to be combined to exploit the multiple levels of parallelism. This Ph.D. thesis proposes several contributions aimed at overcoming this limitation by addressing irregular codes and data structures in an efficient way. We developed a hybrid parallelization approach combining the distributed, shared, and vector forms of parallelism in a fine-grained task-based approach applied to irregular structures. Our approach has been ported to several industrial applications developed by Dassault Aviation and has led to significant speedups on both standard multicore processors and the recent Intel Xeon Phi manycore processor.
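One common route to the "minimal synchronization" goal on unstructured meshes is to color the elements so that no two elements sharing a node receive the same color, then process each color in parallel without locks. A hedged Python sketch of that pattern (the greedy coloring and the threading backend are assumptions for illustration, not the thesis's implementation):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Triangles of a tiny unstructured mesh, given as tuples of node ids.
elements = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]

def color_elements(elements):
    """Greedy coloring: no two elements sharing a node get the same color."""
    colors, node_colors = [], {}
    for elem in elements:
        used = set()
        for n in elem:
            used |= node_colors.get(n, set())
        c = 0
        while c in used:
            c += 1
        colors.append(c)
        for n in elem:
            node_colors.setdefault(n, set()).add(c)
    return colors

values = np.zeros(6)  # one accumulated value per node

def assemble(elem):
    for n in elem:      # safe: within one color, no concurrent task touches these nodes
        values[n] += 1.0

colors = color_elements(elements)
for c in sorted(set(colors)):
    batch = [e for e, col in zip(elements, colors) if col == c]
    with ThreadPoolExecutor() as pool:   # all elements of one color run lock-free
        list(pool.map(assemble, batch))
print(values)  # each node counted once per incident element: [1. 2. 3. 3. 2. 1.]
```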
27

Quelques propositions pour la mise en oeuvre d'algorithmes combinatoires

Tallot, Didier 28 June 1985 (has links) (PDF)
The work presented in this report is divided into two parts. The first part was the subject of a research report published in April 1984 [Tall]. The second part results from work carried out from June 1984 to June 1985. Producing high-quality images requires manipulating large amounts of data, hence the interest in studying efficient algorithms and data structures for solving geometric problems; in this way, efficient methods for manipulating graphical data can be obtained. One can cite in this regard the pioneering work of SHAMOS in the field of geometric complexity [Sham]. The second part contains the description of an interactive software tool for manipulating graphs and orders. To our knowledge, the earliest realization of this type of software is the "Graph theory interactive system" of WOLFBERG [Wolf]. Following GUIDO's work on CABRI (CAhier de BRouillon Informatisé), we wish to offer researchers in graph theory a workstation for working with graphs. CABRI is a good approach to the problem, but remains awkward to use. We have therefore studied the problem anew, focusing on the design of a better user interface. We drew inspiration from work on existing interactive software, such as that developed at XEROX's PALO-ALTO RESEARCH CENTER [Smit] and by NIEVERGELT [Sere].
28

Automated Reasoning Support for Invasive Interactive Parallelization

Moshir Moghaddam, Kianosh January 2012 (has links)
To parallelize a sequential source code, a parallelization strategy must be defined that transforms the sequential source code into an equivalent parallel version. Since parallelizing compilers can sometimes transform sequential loops and other well-structured codes into parallel ones automatically, we are interested in a solution for semi-automatically parallelizing codes that compilers are not able to parallelize automatically, mostly because of the weakness of classical data and control dependence analysis, in order to simplify the process of transforming the codes for programmers. Invasive Interactive Parallelization (IIP) hypothesizes that an intelligent system that guides the user through an interactive process can boost parallelization in this direction. The intelligent system's guidance relies on classical code analysis and pre-defined parallelizing transformation sequences. To support its main hypothesis, IIP suggests encoding parallelizing transformation sequences as IIP parallelization strategies that dictate default ways to parallelize various code patterns, using facts obtained both from classical source code analysis and directly from the user. In this project, we investigate how automated reasoning can support the IIP method in order to parallelize a sequential code with acceptable performance but faster than manual parallelization. We have looked at two special problem areas: divide-and-conquer algorithms and loops in the source code. Our focus is on parallelizing four sequential legacy C programs (quicksort, merge sort, the Jacobi method, and matrix multiplication and summation) for both OpenMP and MPI environments, by developing an interactive parallelization assistance tool that provides users with the assistance needed for parallelizing a sequential source code.
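As a hedged stand-in for the divide-and-conquer case the tool targets, the sketch below parallelizes merge sort with a process pool; the thesis works on C programs with OpenMP and MPI, so the Python backend and chunking scheme here are assumptions for illustration only:

```python
from concurrent.futures import ProcessPoolExecutor

def merge(left, right):
    """Combine step: merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def merge_sort(xs):
    """Sequential divide-and-conquer merge sort."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))

def parallel_merge_sort(xs, workers=4):
    """Divide across processes at the top level, conquer sequentially below."""
    chunk = max(1, len(xs) // workers)
    chunks = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        sorted_chunks = list(pool.map(merge_sort, chunks))
    result = []
    for c in sorted_chunks:
        result = merge(result, c)
    return result

if __name__ == "__main__":
    import random
    data = random.sample(range(100000), 10000)
    assert parallel_merge_sort(data) == sorted(data)
```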
29

Sub-graph Approach In Iterative Sum-product Algorithm

Bayramoglu, Muhammet Fatih 01 September 2005 (has links) (PDF)
The sum-product algorithm can be employed to obtain marginal probability density functions from a given joint probability density function (p.d.f.). The sum-product algorithm operates on a factor graph, which represents the dependencies of the random variables whose joint p.d.f. is given. The sum-product algorithm cannot be operated on factor graphs that contain loops; for these factor graphs, the iterative sum-product algorithm is used. A factor graph that contains loops can be divided into loop-free sub-graphs. The sum-product algorithm can be operated on these loop-free sub-graphs, and the results of the sub-graphs can be combined iteratively to obtain the result for the whole factor graph. This method may increase the convergence rate of the algorithm significantly while keeping the complexity of an iteration and the accuracy of the output constant. A useful by-product of this research, introduced in this thesis, is a good approximation to the message calculation in the factor nodes of inter-symbol interference (ISI) factor graphs. This approximation has a complexity that is linearly, rather than exponentially, proportional to the number of neighbors. Using this approximation and the sub-graph idea, we have designed and simulated a joint decoding-equalization (turbo equalization) algorithm and obtained good results in addition to the low complexity.
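A minimal Python sketch of the loop-free building block on which the sub-graph method relies: sum-product message passing toward one variable on a three-variable chain, checked against brute-force marginalization. The factor tables are illustrative assumptions:

```python
import numpy as np

# Loop-free chain x1 -- f12 -- x2 -- f23 -- x3 over binary variables.
# This is the kind of sub-graph on which the plain sum-product algorithm runs
# exactly; the sub-graph method combines such results iteratively.
f12 = np.array([[0.9, 0.1], [0.2, 0.8]])   # factor over (x1, x2)
f23 = np.array([[0.7, 0.3], [0.4, 0.6]])   # factor over (x2, x3)

# Messages toward x2 from each side of the chain:
m_x1_to_f12 = np.ones(2)                    # leaf variable sends a uniform message
m_f12_to_x2 = f12.T @ m_x1_to_f12           # factor node sums over x1
m_x3_to_f23 = np.ones(2)
m_f23_to_x2 = f23 @ m_x3_to_f23             # factor node sums over x3

marginal_x2 = m_f12_to_x2 * m_f23_to_x2     # product of incoming messages
marginal_x2 /= marginal_x2.sum()

# Brute-force check over the joint, p(x1, x2, x3) proportional to f12 * f23:
joint = f12[:, :, None] * f23[None, :, :]
assert np.allclose(marginal_x2, joint.sum(axis=(0, 2)) / joint.sum())
print(marginal_x2)
```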
30

Designing a Question Answering System in the Domain of Swedish Technical Consulting Using Deep Learning / Design av ett frågebesvarande system inom svensk konsultverksamhet med användning av djupinlärning

Abrahamsson, Felix January 2018 (has links)
Question answering systems are greatly sought after in many areas of industry. Unfortunately, as most research in natural language processing is conducted in English, the applicability of such systems to other languages is limited. Moreover, these systems often struggle to deal with long text sequences. This thesis explores the possibility of applying existing models to the Swedish language, in a domain where the syntax and semantics differ greatly from typical Swedish texts. Additionally, the text length may vary arbitrarily. To solve these problems, transfer learning techniques and state-of-the-art question answering models are investigated, and a novel divide-and-conquer based technique for processing long texts is developed. Results show that the transfer learning is only partly successful, but that the system is nonetheless capable of performing reasonably well in the new domain. Furthermore, the system shows a great performance improvement on longer text sequences when the new technique is used.
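A hedged sketch of the divide-and-conquer idea for long texts: split the document into overlapping chunks, score an answer per chunk, and keep the best. The chunk sizes and the qa_model stub are assumptions, not the thesis's trained model:

```python
def chunk_text(text, chunk_size=500, overlap=100):
    """Split a long text into overlapping chunks so no answer span is cut in half."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(1, len(text) - overlap), step)]

def answer_question(question, text, qa_model):
    """Divide-and-conquer QA: run the model on each chunk, keep the best-scoring span.
    `qa_model(question, chunk) -> (answer, score)` is a stand-in for a trained model."""
    best_answer, best_score = "", float("-inf")
    for chunk in chunk_text(text):
        answer, score = qa_model(question, chunk)
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer
```

The overlap between consecutive chunks is what keeps an answer that straddles a chunk boundary recoverable, at the cost of some repeated computation.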
