181

Video transcoding using machine learning

Unknown Date (has links)
The field of video transcoding has evolved considerably over the past ten years. The need to transcode video files has grown greatly because new standards are incompatible with older ones. This thesis takes the approach of using machine learning for video transcoding mode decisions and discusses ways to improve the process of generating the algorithm for implementation in different video transcoders. The transcoding methods used decrease the complexity of the mode decision inside the video encoder. Methods that automate and improve the results are also discussed and implemented in two transcoders: H.263 to VP6, and MPEG-2 to H.264. Both transcoders showed a complexity reduction of almost 50%. Video transcoding is important because the number of video standards keeps increasing while devices can usually decode only one specific codec. / by Christopher Holder. / Thesis (M.S.C.S.)--Florida Atlantic University, 2008. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2008. Mode of access: World Wide Web.
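As a hedged illustration of the mode-decision idea in this abstract (the thesis's actual features and classifier are not specified here), the sketch below trains a decision tree to predict the outgoing encoder's macroblock mode from features of the incoming decoded stream, so the transcoder can skip an exhaustive rate-distortion search. The feature set and mode labels are hypothetical.

```python
# Illustrative sketch, not the thesis implementation: a shallow decision
# tree predicts the re-encoding mode from hypothetical per-macroblock
# features of the decoded stream, replacing an exhaustive mode search.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: one row per macroblock of the decoded stream.
# Columns: mean residual energy, motion-vector variance, incoming mode id.
X_train = rng.random((5000, 3))
y_train = rng.integers(0, 4, 5000)         # target mode, e.g. SKIP/INTER16/INTER8/INTRA

clf = DecisionTreeClassifier(max_depth=6)  # shallow tree keeps each decision cheap
clf.fit(X_train, y_train)

def decide_mode(residual_energy, mv_variance, incoming_mode):
    """Predict the re-encoding mode instead of testing every candidate."""
    return clf.predict([[residual_energy, mv_variance, incoming_mode]])[0]
```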
182

OntoSELF: a 3D ontology visualization tool

Somasundaram, Ramanathan. January 2007 (has links)
Thesis (M.C.S.)--Miami University, Dept. of Computer Science and Systems Analysis, 2007. / Title from first page of PDF document. Includes bibliographical references (p. 86-88).
183

Some questions in risk management and high-dimensional data analysis

Wang, Ruodu 04 May 2012 (has links)
This thesis addresses three topics in the area of statistics and probability, with applications in risk management. First, for testing problems in high-dimensional (HD) data analysis, we present a novel method to formulate empirical likelihood tests and jackknife empirical likelihood tests by splitting the sample into subgroups. New tests are constructed to test the equality of two HD means, the coefficients in HD linear models, and HD covariance matrices. Second, we propose jackknife empirical likelihood methods to formulate interval estimates for important quantities in actuarial science and risk management, such as risk-distortion measures, Spearman's rho and parametric copulas. Lastly, we introduce the theory of completely mixable (CM) distributions. We give properties of CM distributions, show that a few classes of distributions are CM, and use the new technique to find bounds for the sum of individual risks with given marginal distributions but unspecified dependence structure. The result partially solves a problem that had been a challenge for decades, and directly leads to bounds on quantities of interest in risk management, such as the variance, the stop-loss premium, the prices of European options and the Value-at-Risk associated with a joint portfolio.
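A small numerical illustration of the dependence-bounds idea (not code from the thesis): with marginals fixed but dependence unspecified, the variance of a sum of two uniform risks ranges from a comonotonic maximum down to zero under the constant-sum coupling. For two uniforms this coupling is just countermonotonicity; complete mixability generalizes constant-sum couplings to three or more marginals.

```python
# Variance of X1 + X2 for two U(0,1) risks under three dependence structures.
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(100_000)

x_indep = rng.random(100_000)
print(np.var(u + x_indep))   # independent:   ~ 1/6  = 0.167
print(np.var(u + u))         # comonotonic:   ~ 1/3  = 0.333 (upper bound)
print(np.var(u + (1 - u)))   # mixing coupling: 0, the sum is the constant 1
```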
184

Integration of computational methods and visual analytics for large-scale high-dimensional data

Choo, Jae gul 20 September 2013 (has links)
With the increasing amount of collected data, large-scale high-dimensional data analysis is becoming essential in many areas. Such data can be analyzed either by fully computational methods or by leveraging human capabilities via interactive visualization. However, each approach has its drawbacks. A fully computational method can deal with large amounts of data, but it lacks the depth of understanding of the data that is critical to the analysis. With interactive visualization, the user can gain deeper insight into the data, but this approach suffers when large amounts of data need to be analyzed. Even with an apparent need for these two approaches to be integrated, little progress has been made. To tackle this problem, computational methods have to be re-designed both theoretically and algorithmically, and the visual analytics system has to expose these computational methods to users so that they can choose the proper algorithms and settings. To achieve an appropriate integration between computational methods and visual analytics, the thesis focuses on essential computational methods for visualization, such as dimension reduction and clustering, and it presents fundamental development of computational methods as well as visual analytics systems involving the newly developed methods. The contributions of the thesis include (1) a two-stage dimension reduction framework that better handles the significant information loss in visualization of high-dimensional data, (2) efficient parametric updating of computational methods for fast and smooth user interaction, and (3) an iteration-wise integration framework for computational methods in real-time visual analytics. The latter parts of the thesis focus on the development of visual analytics systems involving the presented computational methods: (1) Testbed, an interactive visual testbed system for various dimension reduction and clustering methods; (2) iVisClassifier, an interactive visual classification system using supervised dimension reduction; and (3) VisIRR, an interactive visual information retrieval and recommender system for large-scale document data.
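One plausible reading of a two-stage reduction pipeline of this kind is sketched below, under assumed stage choices (a cluster-aware supervised reduction first, then a cheap 2-D projection for the display); the thesis's actual stages may differ.

```python
# A minimal sketch of a two-stage dimension reduction pipeline: stage 1
# compresses to a cluster-preserving subspace, stage 2 projects to 2-D.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((1000, 200))                  # 1000 points in 200 dimensions

# Stage 1: reduce to (k-1) dims while preserving cluster structure.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
X_mid = LinearDiscriminantAnalysis(n_components=9).fit_transform(X, labels)

# Stage 2: project the compact representation to 2-D for the visual display.
X_2d = PCA(n_components=2).fit_transform(X_mid)
print(X_2d.shape)                            # (1000, 2)
```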
185

Incorporating semantic and syntactic information into document representation for document clustering

Wang, Yong, January 2005 (has links)
Thesis (Ph.D.) -- Mississippi State University. Department of Computer Science and Engineering. / Title from title screen. Includes bibliographical references.
186

Uma abordagem não intrusiva e automática para configuração do Hadoop / A non-intrusive and automatic approach to Hadoop configuration

Alves, Nathália de Meneses 29 September 2015 (has links)
The amount of digital data produced in recent years has increased significantly. MapReduce frameworks such as Hadoop have been widely used for processing big data on top of cloud resources. In spite of these advances, contemporary systems are complex and dynamic, which makes them hard to configure in order to improve application performance. Software auto-tuning is a solution to this problem, as it helps developers and system administrators handle hundreds of system parameters. For example, current work in the literature uses machine learning algorithms for automatic Hadoop configuration to improve performance. However, these solutions each use a single machine learning algorithm, making it infeasible to compare them with each other to understand which approach is best suited to a given application and its input. In addition, current work is intrusive or exposes operational details to developers and/or system administrators. This work proposes a transparent, modular and hybrid approach to improve the performance of Hadoop applications. The approach proposes an architecture and implementation of transparent software that automatically configures Hadoop. Furthermore, this approach proposes a hybrid solution that combines genetic algorithms with various machine learning techniques as separate modules. A research prototype was implemented and evaluated, showing that the proposed approach can autonomously and significantly reduce the execution time of the Hadoop applications WordCount and TeraSort. Furthermore, the approach converges quickly to the most suitable configuration for each application with low overhead.
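A minimal sketch of the genetic-algorithm component described in the abstract (not the dissertation's implementation): configurations evolve toward lower job runtime. The parameters shown are standard Hadoop 2.x settings chosen for illustration, and `run_job` is a stand-in; a real system would launch the job (e.g. WordCount) under each configuration and measure wall time.

```python
# Toy genetic algorithm over Hadoop configuration parameters.
import random

random.seed(0)

# Example search space: (parameter name, candidate values).
SPACE = {
    "mapreduce.task.io.sort.mb":               [100, 200, 400, 800],
    "mapreduce.reduce.shuffle.parallelcopies": [5, 10, 20],
    "mapreduce.job.reduces":                   [1, 2, 4, 8, 16],
}

def run_job(cfg):
    """Stand-in fitness: pretend runtime in seconds. Replace with a real run."""
    return sum(hash((k, v)) % 97 for k, v in cfg.items())

def random_cfg():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in SPACE}

def mutate(cfg, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in cfg.items()}

pop = [random_cfg() for _ in range(12)]
for generation in range(20):
    pop.sort(key=run_job)                      # lower runtime = fitter
    parents = pop[:4]                          # elitist selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(8)]

print(min(pop, key=run_job))                   # best configuration found
```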
187

Reconhecimento e delineamento sinergicos de objetos em imagens com aplicações na medicina / Synergistic delineation and recognition of objects in images with applications in medicine

Miranda, Paulo Andre Vechiatto de 11 April 2009 (has links)
Advisor: Alexandre Xavier Falcão / Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Segmenting an image consists of partitioning it into regions relevant to a given application (e.g., objects and background). Image segmentation is one of the most fundamental and challenging problems in image processing and computer vision. The segmentation problem represents a significant technical challenge in computer science because of the difficulty the machine has in extracting global information about the objects in the images (e.g., shape and texture) while relying only on local information (e.g., brightness and color) from the pixels. Image segmentation involves object recognition and delineation. Recognition comprises cognitive tasks that determine the approximate location of a desired object in a given image (object detection) and identify a desired object among candidate ones (object classification), while delineation consists in defining the exact spatial extent of the object. Effective segmentation methods should exploit these tasks in a synergistic way. This topic forms the central focus of this work, which presents solutions for interactive and automatic segmentation. The automation is achieved through the use of discrete models created by supervised learning. These models employ recognition and delineation in a tightly coupled manner through the concept of Clouds. We demonstrate their usefulness in the automatic MR-image segmentation of the brain (without the brain stem), the cerebellum, and each brain hemisphere. These structures are connected in several parts, imposing serious challenges for segmentation. The results indicate that these models are fast and accurate tools to eliminate user intervention or, at least, reduce it to simple corrections, in the context of brain image segmentation. / Doctorate / Computer Science / Doctor in Computer Science
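A toy sketch of the recognition/delineation split described above, assuming the simplest possible stand-ins: a fixed seed plays the role of recognition (approximate object location) and plain region growing plays the role of delineation (exact spatial extent). The thesis's Cloud models are considerably more sophisticated.

```python
# Recognition supplies a seed; delineation flood-fills similar neighbors.
import numpy as np
from collections import deque

def delineate(image, seed, tol=0.1):
    """Region growing from `seed`: recover the precise extent of the object."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(image[nr, nc] - image[seed]) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0          # a bright square "object"
seed = (30, 30)                  # the "recognition" output
print(delineate(img, seed).sum())  # 400 pixels delineated
```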
188

Kernel methods for flight data monitoring / Méthodes à noyau pour l'analyse de données de vols appliquées aux opérations aériennes

Chrysanthos, Nicolas 24 October 2014 (has links)
Flight Data Monitoring (FDM) is the process by which an airline routinely collects, processes, and analyses the data recorded in aircraft with the goal of improving overall safety or operational efficiency. The goal of this thesis is to investigate machine learning methods, in particular kernel methods, for the detection of atypical flights that may present problems that cannot be found using traditional methods. Atypical flights may present safety or operational issues and thus need to be studied by an FDM expert. In the first part we propose a novel method for anomaly detection that is suited to the constraints of the field of FDM. We rely on a novel dimensionality reduction technique called kernel entropy component analysis to design a method which is both unsupervised and robust. In the second part we address the most salient issue in the field of FDM, which is how the data is structured. Firstly, we extend the method to take into account parameters of diverse types such as continuous, discrete or angular. Secondly, we explore techniques to take into account the temporal aspect of flights and propose a new kernel in the family of dynamic time warping techniques, demonstrating that it is faster to compute than competing techniques and is positive definite. We illustrate our approach with promising results on real-world datasets from the airlines TAP and Transavia comprising hundreds of flights.
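A minimal sketch of kernel entropy component analysis as described in the KECA literature the abstract draws on (illustrative, not the thesis code): unlike kernel PCA, axes are ranked by their contribution lambda_i * (1^T e_i)^2 to a Renyi entropy estimate, rather than by eigenvalue alone. The Gaussian kernel and bandwidth here are assumptions.

```python
# KECA sketch: eigendecompose the kernel matrix, keep the axes with the
# largest entropy contributions, and project the training data onto them.
import numpy as np

def keca(X, n_components=2, sigma=1.0):
    # Gaussian (RBF) kernel matrix.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    # Symmetric eigendecomposition.
    vals, vecs = np.linalg.eigh(K)
    # Entropy contribution of each axis: lambda_i * (sum of eigvec entries)^2.
    contrib = vals * (vecs.sum(axis=0) ** 2)
    top = np.argsort(contrib)[::-1][:n_components]
    # Coordinates of the training points along the selected axes.
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))

rng = np.random.default_rng(0)
X = rng.random((200, 5))
print(keca(X).shape)   # (200, 2)
```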
189

Taxonomy of synchronization and barrier as a basic mechanism for building other synchronization from it

Braginton, Pauline 01 January 2003 (has links)
A Distributed Shared Memory (DSM) system consists of several computers that share a memory area and have no global clock. Therefore, an ordering of events in the system is necessary. Synchronization is a mechanism for coordinating activities between processes, which are program instantiations in a system.
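A minimal sketch of a counting barrier of the kind the abstract discusses (assuming a generation-counter design; not the thesis's construction): each process waits until all others arrive, giving the event ordering a DSM system lacks without a global clock. Python threads stand in for the distributed processes here.

```python
# Generation-based counting barrier built from a lock and condition variable.
import threading

class Barrier:
    def __init__(self, n):
        self.n, self.count, self.generation = n, 0, 0
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            gen = self.generation
            self.count += 1
            if self.count == self.n:           # last arrival releases everyone
                self.count = 0
                self.generation += 1
                self.cond.notify_all()
            else:
                while gen == self.generation:  # block until this generation ends
                    self.cond.wait()

barrier = Barrier(4)

def worker(i):
    print(f"worker {i} finished phase 1")
    barrier.wait()                             # no worker enters phase 2 early
    print(f"worker {i} entered phase 2")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```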
190

A heuristic on the rearrangeability of shuffle-exchange networks

Alston, Katherine Yvette 01 January 2004 (has links)
The algorithms that control network routing are specific to the network because they are designed to take advantage of that network's topology. The "goodness" of a network includes such criteria as a simple routing algorithm, and a simpler routing algorithm would increase the use of the shuffle-exchange network.
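As an illustration of the "simple routing algorithm" criterion, below is a sketch of standard destination-tag routing on a shuffle-exchange network with N = 2^n nodes; this is a textbook algorithm for the topology, not code from the thesis.

```python
# Destination-tag routing: n shuffle steps (rotate the address bits left),
# each followed by an exchange (flip the low bit) when it disagrees with
# the corresponding destination bit.
def route(src, dst, n):
    """Return the node sequence from src to dst in n shuffle/exchange steps."""
    node, mask = src, (1 << n) - 1
    path = [node]
    for i in range(n - 1, -1, -1):
        # Shuffle: cyclically rotate the n address bits left by one.
        node = ((node << 1) | (node >> (n - 1))) & mask
        path.append(node)
        # Exchange: flip the lowest bit if it disagrees with the dest bit.
        if (node & 1) != (dst >> i) & 1:
            node ^= 1
            path.append(node)
    return path

print(route(0b101, 0b010, 3))   # [5, 3, 2, 4, 5, 3, 2] — arrives at node 2
```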
