21

Scheduling Divide-and-Conquer programs by Work-Stealing with MPI-2

Pezzi, Guilherme Peretti January 2006 (has links)
In order to be portable and efficient on modern HPC architectures, the execution of a parallel program must be adaptable. This work shows how to achieve this in MPI, through dynamic process creation coupled with Divide-and-Conquer programming and a Work-Stealing strategy that balances the MPI processes at runtime in heterogeneous and/or dynamic environments. The work explains how to implement an application following the Divide-and-Conquer model with MPI, as well as how to implement a Work-Stealing strategy. Experimental results are provided, based on a synthetic application, the N-Queens problem. Both the adaptability and the efficiency of the code are validated. The results show that it is possible to use a widely adopted standard such as MPI even on HPC platforms that are not as homogeneous as a cluster.
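The benchmark used in the experiments, the N-Queens problem, decomposes naturally under Divide-and-Conquer: every safe placement of a queen in the current row spawns an independent subproblem, which is what makes the tasks suitable for dynamic process creation and Work-Stealing. The following Python sketch shows only that sequential decomposition; it is a conceptual illustration, not the thesis's MPI-2 implementation, and the function names are mine:

```python
def count_queens(n: int) -> int:
    """Count N-Queens solutions by divide-and-conquer on the rows.

    Each recursive call is an independent subproblem; in a distributed
    version, these subproblems are the units handed out to dynamically
    spawned processes and stolen by idle workers.
    """
    def solve(row: int, cols: int, diag1: int, diag2: int) -> int:
        if row == n:
            return 1  # a complete, conflict-free placement
        total = 0
        for c in range(n):
            # Bitmasks track attacked columns and both diagonals.
            if (not (cols >> c) & 1
                    and not (diag1 >> (row + c)) & 1
                    and not (diag2 >> (row - c + n - 1)) & 1):
                total += solve(row + 1,
                               cols | 1 << c,
                               diag1 | 1 << (row + c),
                               diag2 | 1 << (row - c + n - 1))
        return total

    return solve(0, 0, 0, 0)
```

In the MPI-2 setting described above, the recursive calls would instead be packaged as tasks, and idle MPI processes would steal pending tasks from busy ones to balance the load at runtime.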
23

A critical review of four novels by Celia Brayfield considering their production and impact in the context of contemporary literature

Brayfield, Celia January 2015 (has links)
This critical review of four novels by Celia Brayfield (Getting Home, Mister Fabulous and Friends, Heartswap and Wild Weekend) outlines the themes that give the works their defining coherence: a feminist evaluation of gender roles and an exploration of space and place in millennial Britain. The author contextualises her novels by considering literary representations of the suburb and the use of the device of gender reversal in fiction. The review demonstrates that the novels make a significant and coherent contribution to knowledge as resonant and well-received creative works, and provides an assessment of their national and international impact. In discussing the inspiration and influences behind her work, her choices in characterisation, narrative and dramatised argument, and in particular her decision to create responses to two classic texts using the device of gender reversal, the author justifies the overarching approach and methodologies used for these novels.
24

Sequential and parallel resolution of the propositional satisfiability problem

Guo, Long 08 July 2013 (has links)
This thesis deals with the sequential and parallel resolution of the propositional satisfiability problem (SAT). Despite its complexity, SAT solving is an excellent and competitive approach to combinatorial problems, with applications ranging from the formal verification of hardware and software to cryptography, planning, and bioinformatics. Several contributions are made in this thesis. The first concerns the study and integration of intensification and diversification concepts in portfolio-based parallel SAT solvers. The second exploits the current state of the search, partially described by the recent "progress saving" polarities of literals, to dynamically adjust and steer the solvers assigned to the different computing cores: a core's configuration is adjusted when it is determined that another core is performing similar work. The third contribution improves the strategy for reducing the learnt-clause database: two new criteria are proposed to identify the clauses relevant to the remainder of the search, and these criteria are then used as an additional diversification parameter in portfolio solvers. Finally, a new divide-and-conquer approach named "Virtual Control" is presented, in which the division is performed by distributing additional constraints to each core of the parallel solver and verifying their consistency during search.
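The learnt-clause reduction of the third contribution can be pictured with a toy policy. The sketch below ranks learnt clauses by a relevance score, here an LBD-style value plus an activity counter, both standard notions in CDCL solvers; the thesis's two actual criteria differ, and the data layout is an assumption of mine:

```python
def reduce_learnt(clauses, keep_ratio=0.5):
    """Keep the most relevant fraction of the learnt-clause database.

    Each entry is (clause, lbd, activity): a lower LBD and a higher
    activity are treated as signs that the clause will be useful to
    the remainder of the search.
    """
    ranked = sorted(clauses, key=lambda entry: (entry[1], -entry[2]))
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]

# Hypothetical learnt clauses accumulated during search:
# (name, LBD, activity)
learnt = [("c1", 5, 0.1), ("c2", 2, 0.9), ("c3", 2, 0.3), ("c4", 8, 0.5)]
```

Varying `keep_ratio` or the ranking key from one core to another yields exactly the kind of extra diversification parameter a portfolio solver can exploit.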
25

Polynomial-Space Exact Algorithms for Traveling Salesman Problem in Degree Bounded Graphs

Norhazwani, Md Yunos 23 March 2017 (has links)
Kyoto University / Doctor of Informatics (new system, course doctorate; Degree No. 甲第20516号 / 情博第644号) / Graduate School of Informatics, Department of Applied Mathematics and Physics / Examining committee: Prof. Hiroshi Nagamochi (chair), Prof. Yoshito Ohta, Prof. Nobuo Yamashita / Qualified under Article 4, Paragraph 1 of the Degree Regulations / DFAM
26

Distributed Inference for Degenerate U-Statistics with Application to One and Two Sample Test

Atta-Asiamah, Ernest January 2020 (has links)
In many hypothesis testing problems, such as one-sample and two-sample test problems, the test statistics are degenerate U-statistics. One of the challenges in practice is the computation of U-statistics for a large sample size. Moreover, for degenerate U-statistics the limiting distribution is a mixture of weighted chi-squares involving the eigenvalues of the kernel of the U-statistic, so it is not straightforward to construct the rejection region from this asymptotic distribution. In this research, we aim to reduce the computational complexity of degenerate U-statistics and propose an easy-to-calibrate test statistic by using the divide-and-conquer method. Specifically, we randomly partition the full n data points into k_n even disjoint groups, compute the U-statistic on each group, and combine them by averaging to get a statistic T_n. We prove that the statistic T_n has the standard normal distribution as its limiting distribution. In this way, the running time is reduced from O(n^m) to O(n^m / k_n^m), where m is the order of the one-sample U-statistic. In addition, for a given significance level α, it is easy to construct the rejection region. We apply our method to the goodness-of-fit test and the two-sample test. The simulation and real data analysis show that the proposed test can achieve high power and fast running time for both one- and two-sample tests.
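The divide-and-conquer construction of T_n can be sketched directly: partition the sample into k_n blocks, compute the order-2 U-statistic on each block, and average the results. The kernel below (a product kernel, which is degenerate for mean-zero data) is an illustrative choice of mine, not the paper's test statistic, and the averaging here omits the paper's normalization:

```python
def u_statistic(xs, h):
    """Order-2 U-statistic: average of the kernel h over all unordered pairs."""
    n = len(xs)
    total = sum(h(xs[i], xs[j]) for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)


def blocked_u_statistic(xs, h, k):
    """Divide-and-conquer estimate: average the per-block U-statistics.

    Splitting n points into k even blocks reduces the pairwise work per
    block from O(n^2) to O((n/k)^2), mirroring the O(n^m / k_n^m)
    reduction stated in the abstract for order-m kernels.
    """
    size = len(xs) // k
    blocks = [xs[i * size:(i + 1) * size] for i in range(k)]
    return sum(u_statistic(b, h) for b in blocks) / k
```

Each block's statistic is computed independently, so the blocks can also be distributed across machines before the final averaging step.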
27

Divide-and-conquer based summarization framework for extracting affective video content

Mehmood, Irfan, Sajjad, M., Rho, S., Baik, S.W. 18 July 2019 (has links)
Yes / Recent advances in multimedia technology have led to tremendous increases in the available volume of video data, thereby creating a major requirement for efficient systems to manage such huge data volumes. Video summarization is one of the key techniques for accessing and managing large video libraries. Video summarization can be used to extract the affective contents of a video sequence to generate a concise representation of its content. Human attention models are an efficient means of affective content extraction. Existing visual attention driven summarization frameworks have high computational cost and memory requirements, as well as a lack of efficiency in accurately perceiving human attention. To cope with these issues, we propose a divide-and-conquer based framework for an efficient summarization of big video data. We divide the original video data into shots, where an attention model is computed from each shot in parallel. Viewer's attention is based on multiple sensory perceptions, i.e., aural and visual, as well as the viewer's neuronal signals. The aural attention model is based on the Teager energy, instant amplitude, and instant frequency, whereas the visual attention model employs multi-scale contrast and motion intensity. Moreover, the neuronal attention is computed using the beta-band frequencies of neuronal signals. Next, an aggregated attention curve is generated using an intra- and inter-modality fusion mechanism. Finally, the affective content in each video shot is extracted. The fusion of multimedia and neuronal signals provides a bridge that links the digital representation of multimedia with the viewer’s perceptions. Our experimental results indicate that the proposed shot-detection based divide-and-conquer strategy mitigates the time and computational complexity. Moreover, the proposed attention model provides an accurate reflection of the user preferences and facilitates the extraction of highly affective and personalized summaries. 
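The first ingredient of the aural attention model, the Teager energy, has a simple discrete form: Psi[x](n) = x(n)^2 - x(n-1) * x(n+1), which tracks the instantaneous energy of an oscillating signal. A minimal sketch, independent of the paper's full pipeline:

```python
def teager_energy(x):
    """Discrete Teager energy operator applied to a 1-D signal.

    Returns one value per interior sample; large values flag
    energetic (and hence potentially attention-grabbing) segments.
    """
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]
```

For a pure tone A*sin(w*n), the operator returns the constant A^2 * sin^2(w), so it jointly reflects amplitude and frequency, which is why it pairs naturally with the instant amplitude and instant frequency cues mentioned above.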
/ Supported by the ICT R&D program of MSIP/IITP. [2014(R0112-14-1014), The Development of Open Platform for Service of Convergence Contents].
28

Generalized Voronoi diagram for a set of polygons: algorithms, implementation and application in shape analysis

Hu, Hai-Tao 01 July 1991 (has links) (PDF)
29

Biologically Inspired Modular Neural Networks

Azam, Farooq 19 June 2000 (has links)
This dissertation explores modular learning in artificial neural networks, mainly driven by inspiration from the neurobiological basis of human learning. The presented modularization approaches to neural network design and learning are inspired by engineering, complexity, psychological and neurobiological considerations. The main theme of this dissertation is to explore the organization and functioning of the brain to discover new structural and learning inspirations that can subsequently be used to design artificial neural networks. Artificial neural networks are touted as a neurobiologically inspired paradigm that emulates the functioning of the vertebrate brain. The brain is a highly structured entity, with localized regions of neurons specialized in performing specific tasks. On the other hand, mainstream monolithic feed-forward neural networks are generally unstructured black boxes, which is a major performance-limiting characteristic. The non-explicit structure and monolithic nature of current mainstream artificial neural networks makes it difficult to systematically incorporate functional or task-specific a priori knowledge into the design process. The problems caused by these limitations are discussed in detail in this dissertation, and remedial solutions are presented that are driven by the functioning of the brain and its structural organization. This dissertation also presents an in-depth study of the currently available modular neural network architectures, highlights their shortcomings, and investigates new modular artificial neural network models designed to overcome them. The resulting modular neural network models offer greater accuracy, better generalization, a comprehensible simplified neural structure, ease of training and more user confidence.
These benefits are readily apparent for certain problems, depending on the availability and use of a priori knowledge about the problems. The modular neural network models presented in this dissertation exploit the principle of divide and conquer in the design and learning of modular artificial neural networks. The strategy of divide and conquer solves a complex computational problem by dividing it into simpler sub-problems and then combining the individual solutions to the sub-problems into a solution to the original problem. The divisions of a task considered in this dissertation are the automatic decomposition of the mappings to be learned, decomposition of the artificial neural networks to minimize harmful interaction during the learning process, and explicit decomposition of the application task into sub-tasks that are learned separately. The versatility and capabilities of the proposed modular neural networks are demonstrated by experimental results. A comparison of current modular neural network design techniques with the ones introduced in this dissertation is also presented for reference. The results presented in this dissertation lay a solid foundation for the design and learning of artificial neural networks with a sound neurobiological basis, leading to superior design techniques. Areas of future research are also presented. / Ph. D.
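A common way to make such modular structure explicit is a gating mechanism that softly routes each input to specialist modules, in the spirit of the mixture-of-experts architectures the dissertation surveys. The sketch below is an illustrative toy of mine, not one of the dissertation's proposed models: two scalar "expert" modules are blended by a softmax gate:

```python
import math


def softmax(scores):
    """Numerically stable softmax over a list of gate scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def modular_predict(x, modules, gates):
    """Blend module outputs by gate confidence (soft task decomposition)."""
    weights = softmax([g(x) for g in gates])
    return sum(w * mod(x) for w, mod in zip(weights, modules))


# Two specialists that together compute |x|: one module for negative
# inputs, one for non-negative inputs; each gate scores its own region.
modules = [lambda x: -x, lambda x: x]
gates = [lambda x: -10.0 * x, lambda x: 10.0 * x]
```

Because the decomposition is explicit, a priori knowledge about the task (here, the sign of the input) is encoded directly in the gates, which is precisely the kind of structural prior the monolithic black-box networks criticized above cannot express.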
30

Large-scale and high-quality multi-view stereo

Vu, Hoang Hiep 05 December 2011 (has links)
Acquisition of 3D models of real objects and scenes is useful in many practical applications, such as digital archiving, the game and entertainment industries, engineering, and advertisement. There are two main methods for 3D acquisition: laser-based reconstruction (an active method) and image-based reconstruction from multiple photographs of the scene taken from different viewpoints (a passive method). While laser-based reconstruction achieves high accuracy, it is complex, expensive, and difficult to set up for large outdoor scenes. Image-based, or multi-view stereo, methods are more versatile, easier to deploy, faster, and cheaper. At the time this thesis began, most multi-view methods could handle only low-resolution images under controlled conditions. This thesis targets multi-view stereo at both large scale and high accuracy. We significantly improve on previous methods and combine them into a remarkably effective multi-view pipeline with GPU acceleration. From high-resolution images, we produce highly complete and accurate meshes that achieve the best scores in many internationally recognized benchmarks. Aiming at even larger scale, on one hand we develop divide-and-conquer techniques in order to reconstruct partial pieces of a big scene; on the other hand, to combine these separate results, we create a new merging method that can automatically and quickly fuse hundreds of meshes. With all these components, we successfully reconstruct highly accurate, water-tight meshes of cities and historical monuments from large collections of high-resolution images (around 1600 images of 5 megapixels each).
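The merging step can be pictured as index remapping with vertex deduplication: vertices shared along the seams of partial meshes collapse to a single index, so the partial reconstructions stitch into one surface. A highly simplified sketch of mine; the thesis's merger operates on hundreds of real meshes and is far more involved:

```python
def merge_meshes(meshes):
    """Merge triangle meshes, deduplicating shared vertices.

    Each mesh is (vertices, faces): vertices are coordinate tuples and
    faces are index triples into that mesh's own vertex list. Faces are
    remapped into a single shared vertex table.
    """
    vert_index = {}   # coordinate tuple -> global index
    vertices = []
    faces = []
    for vs, fs in meshes:
        remap = []
        for v in vs:
            if v not in vert_index:
                vert_index[v] = len(vertices)
                vertices.append(v)
            remap.append(vert_index[v])
        for f in fs:
            faces.append(tuple(remap[i] for i in f))
    return vertices, faces


# Two triangles sharing an edge, as two partial "meshes".
tri1 = ([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
tri2 = ([(1, 0, 0), (0, 1, 0), (1, 1, 0)], [(0, 1, 2)])
```

Exact coordinate matching suffices for this toy; real partial reconstructions overlap only approximately, which is one reason the actual merging method is much harder.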
