481

Deep Neural Networks for Large Vocabulary Handwritten Text Recognition / Réseaux de Neurones Profonds pour la Reconnaissance de Texte Manuscrit à Large Vocabulaire

Bluche, Théodore 13 May 2015 (has links)
La transcription automatique du texte dans les documents manuscrits a de nombreuses applications, allant du traitement automatique des documents à leur indexation ou leur compréhension. L'une des approches les plus populaires de nos jours consiste à parcourir l'image d'une ligne de texte avec une fenêtre glissante, de laquelle un certain nombre de caractéristiques sont extraites, et modélisées par des Modèles de Markov Cachés (MMC). Quand ils sont associés à des réseaux de neurones, comme des Perceptrons Multi-Couches (PMC) ou Réseaux de Neurones Récurrents de type Longue Mémoire à Court Terme (RNR-LMCT), et à un modèle de langue, ces modèles produisent de bonnes transcriptions. D'autre part, dans de nombreuses applications d'apprentissage automatique, telles que la reconnaissance de la parole ou d'images, des réseaux de neurones profonds, comportant plusieurs couches cachées, ont récemment permis une réduction significative des taux d'erreur. Dans cette thèse, nous menons une étude poussée de différents aspects de modèles optiques basés sur des réseaux de neurones profonds dans le cadre de systèmes hybrides réseaux de neurones / MMC, dans le but de mieux comprendre et évaluer leur importance relative. Dans un premier temps, nous montrons que des réseaux de neurones profonds apportent des améliorations cohérentes et significatives par rapport à des réseaux ne comportant qu'une ou deux couches cachées, et ce quel que soit le type de réseau étudié, PMC ou RNR, et d'entrée du réseau, caractéristiques ou pixels. Nous montrons également que les réseaux de neurones utilisant les pixels directement ont des performances comparables à ceux utilisant des caractéristiques de plus haut niveau, et que la profondeur des réseaux est un élément important de la réduction de l'écart de performance entre ces deux types d'entrées, confirmant la théorie selon laquelle les réseaux profonds calculent des représentations pertinentes, de complexités croissantes, de leurs entrées, en apprenant les caractéristiques de façon automatique. Malgré la domination flagrante des RNR-LMCT dans les publications récentes en reconnaissance d'écriture manuscrite, nous montrons que des PMC profonds atteignent des performances comparables. De plus, nous avons évalué plusieurs critères d'entraînement des réseaux. Avec un entraînement discriminant de séquences, nous rapportons, pour des systèmes PMC/MMC, des améliorations comparables à celles observées en reconnaissance de la parole. Nous montrons également que la méthode de Classification Temporelle Connexionniste est particulièrement adaptée aux RNR. Enfin, la technique du dropout a récemment été appliquée aux RNR. Nous avons testé son effet à différentes positions relatives aux connexions récurrentes des RNR, et nous montrons l'importance du choix de ces positions. Nous avons mené nos expériences sur trois bases de données publiques, qui représentent deux langues (l'anglais et le français), et deux époques, en utilisant plusieurs types d'entrées pour les réseaux de neurones : des caractéristiques prédéfinies, et les simples valeurs de pixels. Nous avons validé notre approche en participant à la compétition HTRtS en 2014, où nous avons obtenu la deuxième place. Les résultats des systèmes présentés dans cette thèse, avec les deux types de réseaux de neurones et d'entrées, sont comparables à l'état de l'art sur les bases Rimes et IAM, et leur combinaison dépasse les meilleurs résultats publiés sur les trois bases considérées.
/ The automatic transcription of text in handwritten documents has many applications, from automatic document processing, to indexing and document understanding. One of the most popular approaches nowadays consists in scanning the text line image with a sliding window, from which features are extracted, and modeled by Hidden Markov Models (HMMs). Associated with neural networks, such as Multi-Layer Perceptrons (MLPs) or Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs), and with a language model, these models yield good transcriptions. On the other hand, in many machine learning applications, including speech recognition and computer vision, deep neural networks consisting of several hidden layers recently produced a significant reduction of error rates. In this thesis, we have conducted a thorough study of different aspects of optical models based on deep neural networks in the hybrid neural network / HMM scheme, in order to better understand and evaluate their relative importance. First, we show that deep neural networks produce consistent and significant improvements over networks with one or two hidden layers, independently of the kind of neural network, MLP or RNN, and of input, handcrafted features or pixels. Then, we show that deep neural networks with pixel inputs compete with those using handcrafted features, and that depth plays an important role in the reduction of the performance gap between the two kinds of inputs, supporting the idea that deep neural networks effectively build hierarchical and relevant representations of their inputs, and that features are automatically learnt along the way. Despite the dominance of LSTM-RNNs in the recent literature of handwriting recognition, we show that deep MLPs achieve comparable results. Moreover, we evaluated different training criteria. With sequence-discriminative training, we report improvements for MLP/HMMs similar to those observed in speech recognition. We also show how the Connectionist Temporal Classification framework is especially suited to RNNs. Finally, the novel dropout technique to regularize neural networks was recently applied to LSTM-RNNs. We tested its effect at different positions in LSTM-RNNs, thus extending previous works, and we show that its relative position to the recurrent connections is important. We conducted the experiments on three public databases, representing two languages (English and French) and two epochs, using different kinds of neural network inputs: handcrafted features and pixels. We validated our approach by taking part in the HTRtS contest in 2014, where we obtained the second place. The results of the final systems presented in this thesis, namely MLPs and RNNs, with handcrafted feature or pixel inputs, are comparable to the state of the art on Rimes and IAM. Moreover, the combination of these systems outperformed all published results on the considered databases.
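The optical-model pipeline summarised above can be sketched in a few lines. The snippet below is a minimal illustration, not the systems trained in the thesis: it stacks several bidirectional LSTM layers over sliding-window feature frames and trains them with the Connectionist Temporal Classification loss; the layer sizes, alphabet size and input dimensions are illustrative assumptions, and PyTorch is used purely as an example framework.

```python
# Minimal sketch (not the thesis systems): a deep bidirectional LSTM optical model
# trained with Connectionist Temporal Classification (CTC). All sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class LineRecognizer(nn.Module):
    def __init__(self, n_features=56, n_hidden=128, n_layers=3, n_chars=80):
        super().__init__()
        # "Deep" here means several stacked recurrent layers, as studied in the thesis.
        self.rnn = nn.LSTM(n_features, n_hidden, num_layers=n_layers,
                           bidirectional=True, batch_first=False)
        self.out = nn.Linear(2 * n_hidden, n_chars + 1)  # +1 for the CTC blank label

    def forward(self, x):                # x: (T, N, n_features) sliding-window frames
        h, _ = self.rnn(x)
        return self.out(h).log_softmax(dim=-1)  # (T, N, n_chars+1) log-probabilities

model = LineRecognizer()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

frames = torch.randn(200, 4, 56)                     # 4 text-line images, 200 frames each
targets = torch.randint(1, 81, (4, 30))              # dummy character indices (1..80)
input_lengths = torch.full((4,), 200, dtype=torch.long)
target_lengths = torch.full((4,), 30, dtype=torch.long)

loss = ctc(model(frames), targets, input_lengths, target_lengths)
loss.backward()                                      # one CTC training step (optimizer not shown)
```

In a full hybrid NN/HMM system of the kind studied here, the frame-level network outputs would instead be converted into scaled emission likelihoods and decoded with an HMM and a language model.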
482

Relação aluno-instituição: o caso da licenciatura do Instituto de Química da UNESP/Araraquara / Student-institution relationship: the case of the teacher training programme at the Institute of Chemistry, UNESP/Araraquara

Massi, Luciana 07 January 2013 (has links)
Esta pesquisa teve como objetivo de um lado, desvendar a trajetória da instituição e suas disposições institucionais, e, de outro lado, a trajetória escolar, o patrimônio (cultural, econômico e social) e as disposições dos alunos, procurando compreender também as possíveis relações estabelecidas (encontros e desencontros, consonâncias e dissonâncias) entre os estudantes e a instituição. O estudo de caso se baseia em diferentes dados e análises que visam capturar a experiência de formação como um todo: através de entrevistas realizadas com docentes e análise documental reconstruímos a história da instituição procurando desvendar suas disposições institucionais; realizamos análises estatísticas sobre o perfil dos alunos do curso de graduação e a evasão nessa instituição; discutimos as entrevistas realizadas com 27 alunos do curso de licenciatura transformadas em retratos sociológicos, procurando organizar suas trajetórias acadêmicas de acordo com as modalidades de sua integração à instituição. As principais disposições institucionais presentes em diversas fases de desenvolvimento da instituição apontam para a importância da pesquisa, uma união inicial entre os membros do corpo docente, a busca da autonomia e isolamento e os projetos de extensão como forma de marketing institucional. O perfil dos alunos mostra que muitos dos que se matriculam na licenciatura preferiam ter ingressado no bacharelado, porém, ao contrário do esperado, a instituição consegue ter baixos índices de evasão. Os retratos apontam para uma predominância de trajetos que conjugam uma integração social e acadêmica, favorecida por um "encaixe" das disposições anteriores atualizadas na universidade. Este quadro resulta em um curso de licenciatura diferenciado, no qual o currículo oculto tem papel fundamental na formação docente e na promoção de uma sociologia da transformação, em que licenciandos, reconhecidamente com perfil sócio-econômico menos privilegiado, concluem o curso com as mesmas perspectivas profissionais que os bacharéis. Em linhas gerais, o IQ promove muitas iniciativas diversificadas de formação dos graduandos, por outro lado a instituição parece não perceber que o diferencial da formação oferecida é o conjunto dessas atividades. Concluímos apontando várias possibilidades de aprimoramento da formação dos licenciandos. / This research aimed to understand the possible relationships established (agreements and disagreements, consonances and dissonances) between students and the institution. In order to accomplish that goal, we analyze, on the one hand, the trajectory of the institution and its institutional arrangements and, on the other hand, the school history, the heritage (cultural, economic and social) and the dispositions of the students. The case study is based on different data and analyses that aim to capture the learning experience as a whole: using interviews with teachers and document analysis, we reconstructed the institution's history, seeking to unveil its institutional arrangements; we performed statistical analyses of the undergraduate students' profiles and the attrition at that institution; and we discuss the interviews with 27 undergraduate students, transformed into sociological portraits, organizing their academic trajectories according to their modes of integration into the institution.
The main institutional arrangements present in various stages of institutional development point to the importance of research, an initial union between the faculty members, the pursuit of autonomy and isolation, and the community outreach projects as a form of institutional marketing. The student profile shows that many of those who enroll in the teacher training course would rather have joined the bachelor's programme; however, contrary to expectations, the institution has low dropout rates. The portraits show a predominance of paths that combine social and academic integration, favored by a 'fit' of prior dispositions actualized at the university. This framework results in a distinctive teacher training course in which the hidden curriculum plays a fundamental role in teacher education and in promoting a sociology of transformation, in which undergraduates, admittedly from a less privileged socio-economic background, conclude the course with the same career prospects as the bachelor's students. In general, the IC promotes many diversified training initiatives for its undergraduates, but the institution does not seem to realize that what sets the training apart is the whole set of these activities. We conclude by presenting several possibilities for improving the training of these undergraduates.
483

Genomic variation detection using dynamic programming methods

Zhao, Mengyao January 2014 (has links)
Thesis advisor: Gabor T. Marth / Background: Due to the rapid development and application of next generation sequencing (NGS) techniques, large amounts of NGS data have become available for genome-related biological research, such as population genetics, evolutionary research, and genome wide association studies. A crucial step of these genome-related studies is the detection of genomic variation between different species and individuals. Current approaches for the detection of genomic variation can be classified into alignment-based variation detection and assembly-based variation detection. Due to the limitation of current NGS read length, alignment-based variation detection remains the mainstream approach. The Smith-Waterman algorithm, which produces the optimal pairwise alignment between two sequences, is frequently used as a key component of fast heuristic read mapping and variation detection tools for next-generation sequencing data. Though various fast Smith-Waterman implementations have been developed, they are either designed as monolithic protein database searching tools, which do not return detailed alignments, or they are embedded into other tools. These issues make reusing these efficient Smith-Waterman implementations impractical. After the alignment step in the traditional variation detection pipeline, the subsequent variation detection using pileup data and a Bayesian model also faces great challenges, especially in low-complexity genomic regions. Sequencing errors and misalignment problems still strongly affect variation detection (especially INDEL detection). The accuracy of genomic variation detection still needs to be improved, especially when we work on low-complexity genomic regions and low-quality sequencing data. Results: To facilitate easy integration of the fast Single-Instruction-Multiple-Data Smith-Waterman algorithm into third-party software, we wrote a C/C++ library, which extends Farrar's Striped Smith-Waterman (SSW) to return alignment information in addition to the optimal Smith-Waterman score. In this library we developed a new method to generate the full optimal alignment results and a suboptimal score in linear space at little cost in efficiency. This improvement makes the fast Single-Instruction-Multiple-Data Smith-Waterman truly useful in genomic applications. SSW is available both as a C/C++ software library and as a stand-alone alignment tool at: https://github.com/mengyao/Complete-Striped-Smith-Waterman-Library. The SSW library has been used in the primary read mapping tool MOSAIK, the split-read mapping program SCISSORS, the MEI detector TANGRAM, and the read-overlap graph generation program RZMBLR. The speed of the mentioned software is improved significantly by replacing their ordinary Smith-Waterman or banded Smith-Waterman modules with the SSW library. To improve the accuracy of genomic variation detection, especially in low-complexity genomic regions and on low-quality sequencing data, we developed PHV, a genomic variation detection tool based on the profile hidden Markov model. PHV also demonstrates a novel PHMM application in the genomic research field. The banded PHMM algorithms used in PHV make it a very fast whole-genome variation detection tool based on the HMM method. The comparison of PHV to GATK, Samtools and Freebayes for detecting variation from both simulated data and real data shows PHV has good potential for dealing with sequencing errors and misalignments.
PHV also successfully detects a 49 bp deletion that is completely misaligned by the mapping tool and missed by GATK and Samtools. Conclusion: The efforts made in this thesis contribute to methodology development in studies of genomic variation detection. The two novel algorithms described here will also inspire future work in NGS data analysis. / Thesis (PhD) — Boston College, 2014. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Biology.
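For readers unfamiliar with the recurrence that SSW accelerates, the following didactic, scalar sketch (not taken from the SSW library, whose implementation is SIMD-striped C/C++ with full traceback) computes the plain Smith-Waterman local-alignment score; the scoring parameters are illustrative.

```python
# Didactic, scalar Smith-Waterman local alignment (score only). The real SSW
# library adds SIMD striping and traceback; scores below are illustrative defaults.
def smith_waterman(a, b, match=2, mismatch=-2, gap=-3):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0,                # local alignment: never below zero
                          diag,             # match/mismatch
                          H[i-1][j] + gap,  # gap in sequence b
                          H[i][j-1] + gap)  # gap in sequence a
            best = max(best, H[i][j])
    return best                             # optimal local alignment score

print(smith_waterman("ACACACTA", "AGCACACA"))
```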
484

Mesure de la fragilité et détection de chutes pour le maintien à domicile des personnes âgées / Measure of frailty and fall detection for helping elderly people to stay at home

Dubois, Amandine 15 September 2014 (has links)
Le vieillissement de la population est un enjeu majeur pour les prochaines années en raison, notamment, de l'augmentation du nombre de personnes dépendantes. La question du maintien à domicile de ces personnes se pose alors, du fait de l'impossibilité pour les instituts spécialisés de les accueillir toutes et, surtout, de la volonté des personnes âgées de rester chez elles le plus longtemps possible. Or, le développement de systèmes technologiques peut aider à résoudre certains problèmes comme celui de la sécurisation en détectant les chutes, et de l'évaluation du degré d'autonomie pour prévenir les accidents. Plus particulièrement, nous nous intéressons au développement des systèmes ambiants, peu coûteux, pour l'équipement du domicile. Les caméras de profondeur permettent d'analyser en temps réel les déplacements de la personne. Nous montrons dans cette thèse qu'il est possible de reconnaître l'activité de la personne et de mesurer des paramètres de sa marche à partir de l'analyse de caractéristiques simples extraites des images de profondeur. La reconnaissance d'activité est réalisée à partir des modèles de Markov cachés, et permet en particulier de détecter les chutes et des activités à risque. Lorsque la personne marche, l'analyse de la trajectoire du centre de masse nous permet de mesurer les paramètres spatio-temporels pertinents pour l'évaluation de la fragilité de la personne. Ce travail a été réalisé sur la base d'expérimentations menées en laboratoire, d'une part, pour la construction des modèles par apprentissage automatique et, d'autre part, pour évaluer la validité des résultats. Les expérimentations ont montré que certains modèles de Markov cachés, développés pour ce travail, sont assez robustes pour classifier les différentes activités. Nous donnons, également dans cette thèse, la précision, obtenue avec notre système, des paramètres de la marche en comparaison avec un tapis actimétrique. Nous pensons qu'un tel système pourrait facilement être installé au domicile de personnes âgées, car il repose sur un traitement local des images. Il fournit, au quotidien, des informations sur l'analyse de l'activité et sur l'évolution des paramètres de la marche qui sont utiles pour sécuriser et évaluer le degré de fragilité de la personne. / Population ageing is a major issue for society in the coming years, especially because of the increase in the number of dependent people. The limited capacity of specialized institutes and the wish of the elderly to stay at home as long as possible explain a growing need for new specific at-home services. Technologies can help secure the person at home by detecting falls. They can also help in the evaluation of frailty for preventing future accidents. This work concerns the development of low-cost ambient systems for helping the elderly stay at home. Depth cameras allow analysing in real time the displacement of the person. We show that it is possible to recognize the activity of the person and to measure gait parameters from the analysis of simple features extracted from depth images. Activity recognition is based on Hidden Markov Models and allows detecting at-risk behaviours and falls. When the person is walking, the analysis of the trajectory of her centre of mass allows measuring gait parameters that can be used for frailty evaluation. This work is based on laboratory experiments for the acquisition of the data used for model training and for the evaluation of the results.
We show that some of the developed Hidden Markov Models are robust enough to classify the activities. We also evaluate the precision of the gait parameter measurements in comparison with those provided by an actimetric carpet. We believe that such a system could be installed in the home of the elderly because it relies on local processing of the depth images. It would be able to provide daily information on the person's activity and on the evolution of her gait parameters, which are useful for securing her and evaluating her frailty.
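The activity-recognition decision described above can be illustrated with a toy version of the underlying rule: one hidden Markov model per activity and a maximum-likelihood decision for a new observation sequence. The model parameters, symbols and sequence below are invented stand-ins, not those learnt from the depth-camera data.

```python
# Sketch of HMM-based activity recognition: one HMM per activity (e.g. "walking",
# "falling"); a new observation sequence is assigned to the model with the highest
# likelihood. Toy discrete observations, not the thesis features.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood P(obs | HMM) via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two toy 2-state models over 3 discrete observation symbols.
models = {
    "walking": (np.array([0.9, 0.1]),
                np.array([[0.8, 0.2], [0.3, 0.7]]),
                np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])),
    "falling": (np.array([0.2, 0.8]),
                np.array([[0.5, 0.5], [0.1, 0.9]]),
                np.array([[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]])),
}

sequence = [0, 0, 1, 2, 2, 2]
scores = {name: forward_loglik(sequence, *params) for name, params in models.items()}
print(max(scores, key=scores.get), scores)
```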
485

'Por trás do véu e da espada': o disfarce subjacente à representação das personagens cervantinas / 'Behind the veil and the sword': the masquerade behind the depiction of Cervantine characters

Almeida, Edwirgens Aparecida Ribeiro Lopes de 27 August 2013 (has links)
Este texto estuda as narrativas breves El celoso extremeño e Las dos doncellas, integrantes do conjunto intitulado Novelas Ejemplares, publicadas por Miguel de Cervantes, em 1613. Tendo em vista que os textos nos permitem experimentar distintas dimensões de leitura, temos, no plano explícito, a expressão desse autor conformista e ortodoxo, nas palavras de Williamson (1990) e, no plano sugestivo, um olhar irônico e crítico, sobretudo da condição e do papel da mulher, nas relações de gênero e ainda na vida em sociedade. Em outras palavras, os dois níveis sobre os quais traçamos os nossos estudos podem ser vistos, simultaneamente, isto é, no discurso explícito, estão revelados fragmentos da realidade a partir de um discurso conservador das regras prescritas, mas, por outro lado, no plano implícito, está o caráter reflexivo, crítico, às vezes, até transgressor dos costumes predominantes. Nessas narrativas estudadas, se por um lado as mulheres atuam como perfectas corderas, cumprindo as regras para uma nobre donzela ou para uma mulher casada, por outro lado, essas mesmas mulheres vencem as suas fragilidades e inseguranças no ato de tomar decisões, demonstrando certa autoridade ou deixando ver poderes ocultos femininos. Em vista do comportamento masculino, nos deparamos com homens que, por trás de toda a superioridade e privilégio ditado pela cultura vigente, revelam-se sujeitos aos desejos e domínios femininos, o que nos faz entender todo aquele controle outrora evidenciado como uma forma simbólica de poder. Tendo em conta a sutileza com que o autor trabalha artisticamente a questão, essas dimensões de leitura só são perceptíveis pela interpretação do leitor cuidadoso, tomando de empréstimo o termo cervantino, em que é capaz de identificar a dupla face de um escritor escurridizo e irónico ao invés de solemne y ejemplar. Enfim, essas controvérsias de leitura nos fazem concluir que o autor exibe, pelas páginas da ficção, uma representação da dinâmica da vida em sociedade, na qual a formação de novas personagens demonstra que a força e a vontade do ser humano predominam sobre as regras e as convenções sociais. Sendo assim, a proposta é analisar como, ao deixar entrevisto um discurso crítico e irônico, novas representações de personagens masculinas e femininas flexibilizam os mecanismos culturais que sustentam a suposta hierarquia entre os gêneros. Nessa perspectiva, entendemos que essa leitura pode revelar alguns dos mistérios escondidos como as rupturas, subversões e transgressões às convenções revelando uma sociedade secreta em que as mulheres exibem alguma autoridade num contexto de ordem, ainda, predominantemente, masculino. / This paper studies the brief narratives El celoso extremeño and Las dos doncellas, part of the collection called Novelas Ejemplares, published by Miguel de Cervantes in 1613. Considering that the texts allow us to experience different dimensions of reading, we have, on the explicit level, the expression of a conformist and orthodox author, in the words of Williamson (1990), and, on the suggestive level, an ironic and critical look, particularly at women's condition and role in gender relations and in social life. In other words, the two levels studied here can be viewed simultaneously: in the explicit discourse, 'fragments of reality' are disclosed through a conservative discourse of the prescribed rules, while, on the implicit level, there is a reflexive, critical character which sometimes even transgresses the prevailing customs.
In these studied narratives, on the one hand, women act as 'perfectas corderas', fulfilling the rules for a noble maiden or a married woman. On the other hand, these women overcome their weaknesses and insecurities in the act of making decisions, showing a certain authority or revealing female 'occult powers'. In view of male behavior, we find men who, behind the superiority and privilege dictated by the prevailing culture, reveal themselves subject to female wishes and domains, making us understand all that formerly evidenced control as a 'symbolic' form of power. Given the subtlety with which the author artfully works the question, these reading dimensions are only perceived through the interpretation of the 'careful' reader, borrowing the Cervantine term, who is able to identify the double face of an 'escurridizo' and ironic writer instead of a 'solemne y ejemplar' one. In short, these reading controversies lead us to conclude that the author displays, through the pages of fiction, a representation of the dynamics of social life, in which the formation of new characters demonstrates that the strength and will of the human being prevail over rules and social conventions. Therefore, the proposal is to analyze how, by letting a critical and ironic discourse show through, new representations of male and female characters loosen the cultural mechanisms that underpin the supposed hierarchy between genders. In this perspective, we understand that this reading can reveal some of the 'hidden mysteries', such as the disruptions, subversions and transgressions of conventions, revealing a 'secret' society in which women exhibit some authority in a context of order that is still predominantly male.
486

RASTREAMENTO DE AGROBOTS EM ESTUFAS AGRÍCOLAS USANDO MODELOS OCULTOS DE MARKOV: Comparação do desempenho e da correção dos algoritmos de Viterbi e Viterbi com janela de observações deslizante / Tracking agrobots in agricultural greenhouses using hidden Markov models: a comparison of the performance and correctness of the Viterbi and sliding-window Viterbi algorithms

Alves, Roberson Junior Fernandes 17 September 2015 (has links)
Developing mobile and autonomous agrobots for greenhouses requires procedures that allow robot self-localization and tracking. The tracking problem can be modeled as finding the most likely sequence of states in a hidden Markov model whose states indicate the positions of an occupancy grid. This sequence can be estimated with Viterbi's algorithm. However, the processing time and memory consumed by this algorithm grow with the dimensions of the grid and the duration of the tracking, and this can constrain its use for tracking agrobots. Considering this, this work presents a tracking procedure that uses two approximate implementations of Viterbi's algorithm, called Viterbi-JD (Viterbi's algorithm with a sliding window) and Viterbi-JD-MTE (Viterbi's algorithm with a sliding window over a hidden Markov model with a sparse transition matrix). The experimental results show that the time and memory performance of tracking with these two approximate implementations is significantly better than that of Viterbi-based tracking. The reported tracking hypothesis is suboptimal when compared to the hypothesis generated by Viterbi, but the error does not grow substantially. The experiments were performed using simulated RSSI (Received Signal Strength Indicator) data. / O desenvolvimento de agrobots móveis e autônomos para operar em estufas agrícolas depende da implementação de procedimentos que permitam o rastreamento do robô no ambiente. O problema do rastreamento pode ser modelado como a determinação da sequência de estados mais prováveis de um modelo oculto de Markov cujos estados indicam posições de uma grade de ocupação. Esta sequência pode ser estimada pelo algoritmo de Viterbi. No entanto, o tempo de processamento e a memória consumida por esse algoritmo crescem com as dimensões da grade e com a duração do rastreamento, e isto pode limitar seu uso no rastreamento de agrobots em estufas. Considerando o exposto, este trabalho apresenta um procedimento de rastreamento que utiliza implementações aproximadas do algoritmo de Viterbi denominadas de Viterbi-JD (Viterbi com janela deslizante) e Viterbi-JD-MTE (Viterbi com janela deslizante sobre um modelo oculto de Markov com matriz de transição esparsa). Os experimentos mostram que o desempenho de tempo e memória do rastreamento baseado nessas implementações aproximadas é significativamente melhor que aquele do algoritmo original. A hipótese de rastreamento gerada é subótima em relação àquela calculada pelo algoritmo original, contudo, não há um aumento substancial do erro. Os experimentos foram realizados utilizando dados simulados de RSSI (Received Signal Strength Indicator).
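As a point of reference for the approximations compared above, the sketch below implements the exact Viterbi recursion that Viterbi-JD and Viterbi-JD-MTE approximate: the full algorithm keeps one score and one back-pointer per state and per time step, so memory grows with the tracking duration, which the sliding-window variants bound by committing to a partial path periodically. The toy two-state model is illustrative and unrelated to the occupancy-grid model of the dissertation.

```python
# Exact log-space Viterbi decoding over a toy HMM (illustrative parameters only).
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state sequence for a discrete-observation HMM."""
    T, N = len(obs), len(pi)
    delta = np.log(pi) + np.log(B[:, obs[0]])
    backptr = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] + np.log(A)        # score of reaching each state j from i
        backptr[t] = trans.argmax(axis=0)          # best predecessor of each state
        delta = trans.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                  # follow back-pointers
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])             # state transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])             # P(observation | state)
print(viterbi([0, 0, 1, 1, 1], pi, A, B))
```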
487

Reklam i kostym : En kvantitativ och kvalitativ undersökning av de köpta debatterna på Newsmill.se / Advertising suited up : A quantitative and qualitative survey of the bought debates on Newsmill.se

Nilsson, Christoffer, Roos, Pontus January 2009 (has links)
The aim of this thesis was to explore and analyze the sponsored debates on Newsmill, so-called seminars. How does the sponsor use Newsmill's seminars for marketing purposes? Who is allowed to write in Newsmill's seminars? To answer these questions we used both a quantitative survey and a qualitative survey. To examine how the sponsors use Newsmill's seminars for marketing purposes we conducted a qualitative analysis which included three of the eleven seminars - a total of 26 articles. We examined how the sponsors conveyed the picture of themselves and whether the written content in Newsmill's seminars contained any hidden marketing. We could see that the sponsor has a great deal of influence on the seminars and also used them to market their brand with hybrid messages. Sponsors often tried to relate their brand to a public issue in order to camouflage their commercial purposes. The quantitative survey aimed to map the seminars and answer the question "Who is allowed to write in Newsmill's seminars?". We observed how many of the writers were male/female, had a Swedish/foreign name, had a certain position in society and whether the writer had any connection to, or wrote about, the sponsor. We examined all articles ever published in Newsmill's seminars, up to the day of the survey: a total of 164 articles in 11 different seminars. Our results showed that female writers are a minority group in Newsmill's seminars. So are people with foreign names and people who lack a position which grants them authority.
488

Detecção visual de atividade de voz com base na movimentação labial / Visual voice activity detection based on lip motion

Lopes, Carlos Bruno Oliveira January 2013 (has links)
O movimento dos lábios é um recurso visual relevante para a detecção da atividade de voz do locutor e para o reconhecimento da fala. Quando os lábios estão se movendo eles transmitem a idéia de ocorrências de diálogos (conversas ou períodos de fala) para o observador, enquanto que os períodos de silêncio podem ser representados pela ausência de movimentações dos lábios (boca fechada). Baseado nesta idéia, este trabalho foca esforços para detectar a movimentação de lábios e usá-la para realizar a detecção de atividade de voz. Primeiramente, é realizada a detecção de pele e a detecção de face para reduzir a área de extração dos lábios, sendo que as regiões mais prováveis de serem lábios são computadas usando a abordagem Bayesiana dentro da área delimitada. Então, a pré-segmentação dos lábios é obtida pela limiarização da região das probabilidades calculadas. A seguir, é localizada a região da boca pelo resultado obtido na pré-segmentação dos lábios, ou seja, alguns pixels que não são de lábios e foram detectados são eliminados, e em seguida são aplicadas algumas operações morfológicas para incluir alguns pixels labiais e não labiais em torno da boca. Então, uma nova segmentação de lábios é realizada sobre a região da boca depois de aplicada uma transformação de cores para realçar a região a ser segmentada. Após a segmentação, é aplicado o fechamento das lacunas internas dos lábios segmentados. Finalmente, o movimento temporal dos lábios é explorado usando o modelo das cadeias ocultas de Markov (HMMs) para detectar as prováveis ocorrências de atividades de fala dentro de uma janela temporal. / Lip motion is a relevant visual feature for detecting the speaker's voice activity and for speech recognition. When the lips are moving, they convey the idea of occurrences of dialogue (talk or periods of speech) to the watcher, whereas periods of silence may be represented by the absence of lip motion (mouth closed). Based on this idea, this work focuses on detecting lip motion and using it to perform visual voice activity detection. First, the algorithm performs skin segmentation and face detection to reduce the search area for lip extraction, and the most likely lip regions are computed using a Bayesian approach within the delimited area. Then, the pre-segmentation of the lips is obtained by thresholding the calculated probability region. Next, the mouth region is localized from the result obtained in the pre-segmentation of the lips, i.e., some detected non-lip pixels are eliminated, and simple morphological operators are applied to include some lip and non-lip pixels around the mouth. A new lip segmentation is then performed over the mouth region after a color transformation that enhances the region to be segmented, and the internal gaps of the segmented lips are closed. Finally, the temporal motion of the lips is explored using Hidden Markov Models (HMMs) to detect the likely occurrence of active speech within a temporal window.
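The Bayesian pre-segmentation step described above can be sketched as follows; the Gaussian colour models, the prior and the threshold are illustrative assumptions rather than the distributions trained in the thesis.

```python
# Sketch of the Bayesian lip/non-lip decision: each pixel colour feature is scored
# with class-conditional likelihoods and a prior, and the posterior map is
# thresholded to pre-segment the lips. Toy Gaussian models, not the trained ones.
import numpy as np

def gaussian_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def lip_posterior(hue, p_lip=0.2, lip_mean=0.95, lip_var=0.01,
                  skin_mean=0.60, skin_var=0.05):
    """P(lip | colour feature) for each pixel (e.g. a red/green ratio)."""
    num = gaussian_pdf(hue, lip_mean, lip_var) * p_lip
    den = num + gaussian_pdf(hue, skin_mean, skin_var) * (1.0 - p_lip)
    return num / den

face_region = np.random.rand(120, 160)     # toy colour-feature image of a face crop
posterior = lip_posterior(face_region)     # per-pixel lip probability
lip_mask = posterior > 0.5                 # pre-segmentation by thresholding
print(lip_mask.sum(), "candidate lip pixels")
```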
489

Caractérisation des images à Rayon-X de la main par des modèles mathématiques : application à la biométrie / Characterization of X-ray images of the hand by mathematical models: application to biometrics

Kabbara, Yeihya 09 March 2015 (has links)
Dans son contexte spécifique, le terme « biométrie » est souvent associé à l'étude des caractéristiques physiques et comportementales des individus afin de parvenir à leur identification ou à leur vérification. Ainsi, le travail développé dans cette thèse nous a conduit à proposer un algorithme d'identification robuste, en considérant les caractéristiques intrinsèques des phalanges de la main. Considérée comme une biométrie cachée, cette nouvelle approche peut s'avérer intéressante, notamment lorsqu'il est question d'assurer un niveau de sécurité élevé, robuste aux différentes attaques qu'un système biométrique doit contrer. La base des techniques proposées requière trois phases, à savoir: (1) la segmentation des phalanges, (2) l'extraction de leurs caractéristiques par la génération d'une empreinte, appelée « Phalange-Code » et (3) l'identification basée sur la méthode du 1-plus proche voisin ou la vérification basée sur une métrique de similarité. Ces algorithmes opèrent sur des niveaux hiérarchiques permettant l'extraction de certains paramètres, invariants à des transformations géométriques telles que l'orientation et la translation. De plus, nous avons considéré des techniques robustes au bruit, pouvant opérer à différentes résolutions d'images. Plus précisément, nous avons élaboré trois approches de reconnaissance biométrique : la première approche utilise l'information spectrale des contours des phalanges de la main comme signature individuelle, alors que la deuxième approche nécessite l'utilisation des caractéristiques géométriques et morphologiques des phalanges (i.e. surface, périmètre, longueur, largeur, capacité). Enfin, la troisième approche requière la génération d'un nouveau rapport de vraisemblance entre les phalanges, utilisant la théorie de probabilités géométriques. En second lieu, la construction d'une base de données avec la plus faible dose de rayonnement a été l'un des grands défis de notre étude. Nous avons donc procédé par la collecte de 403 images radiographiques de la main, acquises en utilisant la machine Apollo EZ X-Ray. Ces images sont issues de 115 adultes volontaires (hommes et femmes), non pathologiques. L'âge moyen étant de 27.2 ans et l'écart-type est de 8.5. La base de données ainsi construite intègre des images de la main droite et gauche, acquises à des positions différentes et en considérant des résolutions différentes et des doses de rayonnement différentes (i.e. réduction jusqu'à 98 % de la dose standard recommandée par les radiologues « 1 µSv »).Nos expériences montrent que les individus peuvent être distingués par les caractéristiques de leurs phalanges, que ce soit celles de la main droite ou celles de la main gauche. Cette distinction est également valable pour le genre des individus (homme/femme). L'étude menée a montré que l'approche utilisant l'information spectrale des contours des phalanges permet une identification par seulement trois phalanges, à un taux EER (Equal Error Rate) inférieur à 0.24 %. Par ailleurs, il a été constaté « de manière surprenante » que la technique fondée sur les rapports de vraisemblance entre les phalanges permet d'atteindre un taux d'identification de 100 % et un taux d'EER de 0.37 %, avec une seule phalange. Hormis l'aspect identification/authentification, notre étude s'est penchée sur l'optimisation de la dose de rayonnement permettant une identification saine des individus. 
Ainsi, il a été démontré qu'il était possible d'acquérir plus de 12 500 images radiographiques de la main par an, sans pour autant dépasser le seuil administratif de 0.25 mSv. / In its specific context, the term "biometrics" is often associated with the study of the physical and behavioral characteristics of individuals in order to achieve their identification or verification. Thus, the work developed in this thesis has led us to propose a robust identification algorithm, taking into account the intrinsic characteristics of the hand phalanges. Considered as a hidden biometric, this new approach can be of high interest, particularly when it comes to ensuring a high level of security, robust to the various attacks that a biometric system must address. The proposed techniques require three phases, namely: (1) the segmentation of the phalanges, (2) the extraction of their characteristics by generating an imprint, called "Phalange-Code", and (3) identification based on the 1-nearest-neighbor method or verification based on a similarity metric. This algorithm operates on hierarchical levels allowing the extraction of certain parameters invariant to geometric transformations such as image orientation and translation. Furthermore, the considered algorithm is particularly robust to noise and can function at different image resolutions. Thus, we developed three approaches to biometric recognition: the first approach produces an individual signature from the spectral information of the contours of the hand phalanges, whereas the second approach requires the use of geometric and morphological characteristics of the phalanges (i.e. surface, perimeter, length, width, and capacity). Finally, the third approach requires the generation of a new likelihood ratio between the phalanges, using geometric probability theory. Furthermore, the construction of a database with the lowest radiation dose was one of the great challenges of our study. We therefore proceeded with the collection of 403 X-ray images of the hand, acquired using the Apollo EZ X-Ray machine. These images come from 115 non-pathological volunteering adults (men and women). The average age is 27.2 years and the standard deviation is 8.5. The constructed database thus incorporates images of the right and left hands, acquired at different positions and considering different resolutions and different radiation doses (i.e. reduced by up to 98% of the standard dose recommended by radiologists, 1 µSv). Our experiments show that individuals can be distinguished by the characteristics of their phalanges, whether those of the right hand or the left hand. This distinction also applies to the gender of individuals (male/female). The study demonstrated that the approach using the spectral information of the phalanges' contours allows identification with only three phalanges, with an EER (Equal Error Rate) lower than 0.24%. Furthermore, it was found, surprisingly, that the technique based on the likelihood ratio between phalanges reaches an identification rate of 100% and an EER of 0.37% with a single phalanx. Apart from the identification/authentication aspect, our study focused on optimizing the radiation dose in order to offer safe identification of individuals. Thus, it has been shown that it is possible to acquire more than 12,500 radiographic hand images per year without exceeding the administrative threshold of 0.25 mSv.
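The identification phase described above, a 1-nearest-neighbour decision over "Phalange-Code" feature vectors, can be sketched as follows; the gallery, feature dimension and distance are illustrative assumptions, and the feature-extraction step itself (contour spectra, geometric descriptors) is not reproduced.

```python
# Minimal 1-nearest-neighbour identification over a gallery of enrolled feature
# vectors (random stand-ins for "Phalange-Code" templates).
import numpy as np

rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=(3, 16)) for i in range(5)}  # 3 templates of 16 features

def identify(probe, gallery):
    """Return the enrolled identity whose closest template matches the probe."""
    best_id, best_dist = None, np.inf
    for identity, templates in gallery.items():
        dist = np.linalg.norm(templates - probe, axis=1).min()  # nearest template
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id, best_dist

probe = gallery["person_2"][0] + 0.05 * rng.normal(size=16)   # noisy sample of person_2
print(identify(probe, gallery))
```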
490

Analyse de la qualité des signatures manuscrites en-ligne par la mesure d'entropie / Quality analysis of online signatures based on entropy measure

Houmani, Nesma 13 January 2011 (has links)
Cette thèse s'inscrit dans le contexte de la vérification d'identité par la signature manuscrite en-ligne. Notre travail concerne plus particulièrement la recherche de nouvelles mesures qui permettent de quantifier la qualité des signatures en-ligne et d'établir des critères automatiques de fiabilité des systèmes de vérification. Nous avons proposé trois mesures de qualité faisant intervenir le concept d’entropie. Nous avons proposé une mesure de qualité au niveau de chaque personne, appelée «Entropie personnelle», calculée sur un ensemble de signatures authentiques d’une personne. L’originalité de l’approche réside dans le fait que l’entropie de la signature est calculée en estimant les densités de probabilité localement, sur des portions, par le biais d’un Modèle de Markov Caché. Nous montrons que notre mesure englobe les critères habituels utilisés dans la littérature pour quantifier la qualité d’une signature, à savoir: la complexité, la variabilité et la lisibilité. Aussi, cette mesure permet de générer, par classification non supervisée, des catégories de personnes, à la fois en termes de variabilité de la signature et de complexité du tracé. En confrontant cette mesure aux performances de systèmes de vérification usuels sur chaque catégorie de personnes, nous avons trouvé que les performances se dégradent de manière significative (d’un facteur 2 au minimum) entre les personnes de la catégorie «haute Entropie» (signatures très variables et peu complexes) et celles de la catégorie «basse Entropie» (signatures les plus stables et les plus complexes). Nous avons ensuite proposé une mesure de qualité basée sur l’entropie relative (distance de Kullback-Leibler), dénommée «Entropie Relative Personnelle» permettant de quantifier la vulnérabilité d’une personne aux attaques (bonnes imitations). Il s’agit là d’un concept original, très peu étudié dans la littérature. La vulnérabilité associée à chaque personne est calculée comme étant la distance de Kullback-Leibler entre les distributions de probabilité locales estimées sur les signatures authentiques de la personne et celles estimées sur les imitations qui lui sont associées. Nous utilisons pour cela deux Modèles de Markov Cachés, l'un est appris sur les signatures authentiques de la personne et l'autre sur les imitations associées à cette personne. Plus la distance de Kullback-Leibler est faible, plus la personne est considérée comme vulnérable aux attaques. Cette mesure est plus appropriée à l’analyse des systèmes biométriques car elle englobe en plus des trois critères habituels de la littérature, la vulnérabilité aux imitations. Enfin, nous avons proposé une mesure de qualité pour les signatures imitées, ce qui est totalement nouveau dans la littérature. Cette mesure de qualité est une extension de l’Entropie Personnelle adaptée au contexte des imitations: nous avons exploité l’information statistique de la personne cible pour mesurer combien la signature imitée réalisée par un imposteur va coller à la fonction de densité de probabilité associée à la personne cible. Nous avons ainsi défini la mesure de qualité des imitations comme étant la dissimilarité existant entre l'entropie associée à la personne à imiter et celle associée à l'imitation. Elle permet lors de l’évaluation des systèmes de vérification de quantifier la qualité des imitations, et ainsi d’apporter une information vis-à-vis de la résistance des systèmes aux attaques. 
Nous avons aussi montré l'intérêt de notre mesure d'Entropie Personnelle pour améliorer les performances des systèmes de vérification dans des applications réelles. Nous avons montré que la mesure d'Entropie peut être utilisée pour : améliorer la procédure d'enregistrement, quantifier la dégradation de la qualité des signatures due au changement de plateforme, sélectionner les meilleures signatures de référence, identifier les signatures aberrantes, et quantifier la pertinence de certains paramètres pour diminuer la variabilité temporelle. / This thesis is focused on the quality assessment of online signatures and its application to online signature verification systems. Our work aims at introducing new quality measures that quantify the quality of online signatures and thus establish automatic reliability criteria for verification systems. We proposed three quality measures involving the concept of entropy, widely used in Information Theory. We proposed a novel quality measure per person, called "Personal Entropy", calculated on a set of genuine signatures of a given person. The originality of the approach lies in the fact that the entropy of the genuine signature is computed locally, on portions of the signature, based on local density estimation by a Hidden Markov Model. We show that our new measure encompasses the usual criteria of the literature, namely: signature complexity, signature variability and signature legibility. Moreover, this measure allows generating, by an unsupervised classification, three coherent writer categories in terms of signature variability and complexity. Confronting this measure with the performance of two widely used verification systems (HMM, DTW) on each Entropy-based category, we show that the performance degrades significantly (by a factor of 2 at least) between persons of the "high Entropy" category, containing the most variable and the least complex signatures, and those of the "low Entropy" category, containing the most stable and the most complex signatures. We then proposed a novel quality measure based on the concept of relative entropy (also called Kullback-Leibler distance), denoted "Personal Relative Entropy", for quantifying a person's vulnerability to attacks (good forgeries). This is an original concept, and few studies in the literature are dedicated to this issue. This new measure computes, for a given writer, the Kullback-Leibler distance between the local probability distributions of his/her genuine signatures and those of his/her skilled forgeries: the higher the distance, the better the writer is protected from attacks. We show that such a measure simultaneously incorporates in a single quantity the usual criteria proposed in the literature for writer categorization, namely signature complexity and signature variability, as our Personal Entropy does, but also the criterion of vulnerability to skilled forgeries. This measure is more appropriate for biometric systems because it makes a good compromise between the resulting improvement of the FAR and the corresponding degradation of the FRR. We also proposed a novel quality measure aiming at quantifying the quality of skilled forgeries, which is totally new in the literature. Such a measure is based on the extension of our former Personal Entropy measure to the framework of skilled forgeries: we exploit the statistical information of the target writer to measure to what extent an impostor's hand-drawn forgery adheres to the target probability density function.
In this framework, the quality of a skilled forgery is quantified as the dissimilarity existing between the target writer's own Personal Entropy and the entropy of the skilled forgery sample. Our experiments show that this measure allows an assessment of the quality of skilled forgeries of the main online signature databases available to the scientific community, and thus provides information about systems' resistance to attacks. Finally, we also demonstrated the interest of using our Personal Entropy measure for improving the performance of online signature verification systems in real applications. We show that the Personal Entropy measure can be used to: improve the enrolment process, quantify the quality degradation of signatures due to the change of platforms, select the best reference signatures, identify the outlier signatures, and quantify the relevance of time-function parameters in the context of temporal variability.
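The entropy-based quantities discussed above can be illustrated with a toy computation; the histograms below are invented stand-ins for the local probability distributions that the thesis estimates with Hidden Markov Models.

```python
# Toy illustration of the quantities behind "Personal Entropy" (an entropy of the
# distribution estimated on genuine signatures) and "Personal Relative Entropy"
# (a Kullback-Leibler distance between genuine and forgery distributions).
import numpy as np

def entropy(p):
    p = p / p.sum()
    return float(-(p * np.log2(p + 1e-12)).sum())

def kl_divergence(p, q):
    p, q = p / p.sum(), q / q.sum()
    return float((p * np.log2((p + 1e-12) / (q + 1e-12))).sum())

# Toy histograms of a local signature feature (e.g. quantised pen speed).
genuine = np.array([30.0, 45.0, 15.0, 10.0])       # stable writer: peaked distribution
forgeries = np.array([22.0, 28.0, 27.0, 23.0])     # forgeries: flatter distribution

print("Personal Entropy (bits):", round(entropy(genuine), 3))
print("Personal Relative Entropy:", round(kl_divergence(genuine, forgeries), 3))
# In the thesis's terms, a small relative entropy would flag the writer as more
# vulnerable to skilled forgeries.
```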
