1

Détection de sources quasi-ponctuelles dans des champs de données massifs / Quasi-ponctual sources detection in massive data fields

Meillier, Céline 15 October 2015
In this thesis we address the detection of distant galaxies in the MUSE hyperspectral data. These galaxies are particularly difficult to observe: because of their distance they have a small spatial extent, their spectrum consists of a single emission line whose position is unknown and depends on the galaxy's distance, and their signal-to-noise ratio is very low. Such distant galaxies can be regarded as quasi-point sources in the three dimensions of the data cube, and few methods in the literature can detect sources in three-dimensional data. The approach proposed in this thesis models the galaxy configuration as a marked point process: each galaxy is represented by a point (its spatial position) to which marks (geometrical and spectral features) are attached, turning the point into an object. This representation stays close to the physical phenomenon and avoids pixel-wise approaches, which are penalized by the large size of the data (300 x 300 x 3600 pixels). The detection of the galaxies and the estimation of their spatial, spectral and intensity characteristics are carried out in a fully Bayesian framework, which leads to a generic and robust algorithm in which all parameters are estimated from the observed data alone and the objects of interest are detected jointly. Because of the data dimensions and the difficulty of the detection problem, a preprocessing stage defines search areas within the cube: multiple-testing approaches are used to build proposition maps of candidate objects. The Bayesian detection is guided by these pre-detection maps, which define the intensity function of the marked point process, so object proposals are restricted to the pixels selected on the maps; the quality of the detection can then be characterized by an error-control criterion. All of the processing developed in this thesis was validated on synthetic data and then applied to a real data set acquired by MUSE after its commissioning in 2014. The analysis of the resulting detection is presented in the manuscript.
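To make the object model described above concrete, the following sketch shows one way a marked point process configuration for quasi-point sources could be represented, and how a proposition map could guide where new objects are proposed. It is a minimal illustration under assumed mark names (spatial FWHM, line position and width, intensity) and illustrative prior ranges; it is not the parameterization or the sampler used in the thesis, which also requires death and move kernels and a full data likelihood.

    # Illustrative sketch of a marked point process configuration for
    # quasi-point sources. Mark names and prior ranges are assumptions made
    # for illustration only, not the thesis's actual model.
    from dataclasses import dataclass
    import numpy as np


    @dataclass
    class GalaxyObject:
        x: int             # spatial position (pixel column)
        y: int             # spatial position (pixel row)
        lam: int           # spectral channel of the emission line (unknown a priori)
        fwhm: float        # spatial extent mark
        line_width: float  # spectral extent mark
        intensity: float   # amplitude mark


    def propose_birth(proposition_map: np.ndarray, rng: np.random.Generator) -> GalaxyObject:
        """Draw a new object with its spatial position sampled from a proposition map.

        The map plays the role of the (unnormalized) intensity function of the
        point process: pixels flagged by the multiple-testing pre-detection step
        receive a higher probability of getting an object proposal.
        """
        probs = proposition_map.ravel()
        probs = probs / probs.sum()
        idx = rng.choice(probs.size, p=probs)
        y, x = np.unravel_index(idx, proposition_map.shape)
        return GalaxyObject(
            x=int(x),
            y=int(y),
            lam=int(rng.integers(0, 3600)),         # 3600 spectral channels in a MUSE cube
            fwhm=float(rng.uniform(1.0, 4.0)),      # illustrative prior ranges
            line_width=float(rng.uniform(1.0, 10.0)),
            intensity=float(rng.uniform(0.0, 1.0)),
        )


    # Example: a flat proposition map over a 300 x 300 field, one birth proposal.
    rng = np.random.default_rng(0)
    flat_map = np.ones((300, 300))
    print(propose_birth(flat_map, rng))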
2

POPT: uma abordagem de ensino de programação orientada a problema e testes / POPT: a problem- and test-oriented approach to teaching programming

Lustosa Neto, Vicente Pires 05 August 2013
There is growing interest in the Computer Science education community in including testing concepts in introductory programming courses. Aiming to contribute to this effort, we introduce POPT, a Problem-Oriented Programming and Testing approach for introductory programming courses. POPT's main goal is to improve the traditional method of teaching introductory programming, which concentrates mainly on implementation (language syntax and semantics) and neglects testing. POPT extends the POP (Problem-Oriented Programming) methodology proposed in the PhD thesis of Andrea Mendonça (UFCG). In both POPT and POP, students' ability to deal with ill-defined problems must be developed from the first programming courses onwards. In POPT, however, students are stimulated to clarify ill-defined problem specifications, guided by the definition of test cases laid out in a table. This work presents POPT and TestBoot, a tool developed to support the methodology. To evaluate the approach, a case study and a controlled experiment (adopting a Latin Square design) were performed in an introductory programming course of the Computer Science and Software Engineering degree programs at the Federal University of Rio Grande do Norte, Brazil. The results show that, compared with a blind-testing approach, POPT stimulates the implementation of programs of better external quality: the first program version submitted by POPT students passed twice as many professor-defined test cases as that of non-POPT students. Moreover, POPT students submitted fewer program versions and took more time to submit the first version to the automatic evaluation system, which suggests that POPT students are stimulated to think more carefully about the solution they are implementing. The controlled experiment confirmed the influence of the proposed methodology on the quality of the code developed by POPT students.
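The sketch below illustrates the POPT idea of pinning down an ill-defined problem by writing a table of test cases before the implementation. The example problem, the table columns, and the helper function are assumptions made for illustration; they do not reproduce TestBoot's actual input format or workflow.

    # Illustrative sketch of test-cases-first clarification of an ill-defined
    # problem, in the spirit of POPT. The problem and table layout are
    # hypothetical examples, not TestBoot's real format.

    # Ill-defined statement: "compute the average grade of a student".
    # Writing the table forces decisions about empty input and rounding.
    TEST_TABLE = [
        # (case id,               input grades,      expected output)
        ("single grade",          [7.0],             7.0),
        ("typical case",          [5.0, 8.0, 9.5],   7.5),
        ("empty list",            [],                None),   # decided while writing tests
        ("rounding to one digit", [7.0, 8.0, 8.0],   7.7),
    ]


    def average_grade(grades):
        """Candidate implementation, written only after the table above is agreed on."""
        if not grades:
            return None
        return round(sum(grades) / len(grades), 1)


    def run_table(func, table):
        """Run every table row against an implementation and report failures."""
        failures = []
        for case_id, grades, expected in table:
            got = func(grades)
            if got != expected:
                failures.append((case_id, expected, got))
        return failures


    if __name__ == "__main__":
        print(run_table(average_grade, TEST_TABLE) or "all table rows pass")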
3

Error Locating Arrays, Adaptive Software Testing, and Combinatorial Group Testing

Chodoriwsky, Jacob N. 17 July 2012
Combinatorial Group Testing (CGT) is a process of identifying faulty interactions (“errors”) within a particular set of items. Error Locating Arrays (ELAs) are combinatorial designs that can be built from Covering Arrays (CAs) to not only cover all errors in a system (each involving up to a certain number of items), but to locate and identify the errors as well. In this thesis, we survey known results for CGT, as well as CAs, ELAs, and some other types of related arrays. More importantly, we give several new results. First, we give a new algorithm that can be used to test a system in which each component (factor) has two options (values), and at most two errors are present. We show that, for systems with at most two errors, our algorithm improves upon a related algorithm by Martínez et al. in terms of both robustness and efficiency. Second, we give the first adaptive CGT algorithm that can identify, among a given set of k items, all faulty interactions involving up to three items. We then compare it, performance-wise, to the current-best nonadaptive method that can identify faulty interactions involving up to three items. We also give the first adaptive ELA-building algorithm that can identify all faulty interactions involving up to three items when safe values are known. Both of our new algorithms are generalizations of ones previously given by Martínez et al. for identifying all faulty interactions involving up to two items.
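The sketch below illustrates only the basic adaptive idea behind combinatorial group testing, using the textbook case of locating a single faulty item by binary splitting. It is not one of the thesis's algorithms, which target faulty interactions involving up to two or three items and build on covering arrays and error locating arrays; it only shows how outcomes of tests on groups narrow down where a fault lies.

    # Textbook-style sketch of adaptive group testing for the simplest setting:
    # locating one faulty item with O(log n) group tests by binary splitting.
    # This is a generic illustration, not an algorithm from the thesis.

    def find_single_faulty(items, is_group_faulty):
        """Return the single faulty item, assuming exactly one exists.

        `is_group_faulty(group)` is a test oracle that reports whether the
        group contains the faulty item.
        """
        candidates = list(items)
        while len(candidates) > 1:
            half = candidates[: len(candidates) // 2]
            if is_group_faulty(half):
                candidates = half
            else:
                candidates = candidates[len(candidates) // 2:]
        return candidates[0]


    # Example: 16 components, component 11 is faulty.
    faulty = 11
    oracle = lambda group: faulty in group
    print(find_single_faulty(range(16), oracle))  # -> 11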
