  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Text-to-Speech Synthesis Using Found Data for Low-Resource Languages

Cooper, Erica Lindsay. January 2019.
Text-to-speech synthesis is a key component of interactive, speech-based systems. Typically, building a high-quality voice requires collecting dozens of hours of speech from a single professional speaker in an anechoic chamber with a high-quality microphone. There are about 7,000 languages spoken in the world, and most do not enjoy the speech research attention historically paid to such languages as English, Spanish, Mandarin, and Japanese. Speakers of these so-called "low-resource languages" therefore do not equally benefit from these technological advances. While it takes a great deal of time and resources to collect a traditional text-to-speech corpus for a given language, we may instead be able to make use of various sources of "found" data which may be available. In particular, sources such as radio broadcast news and ASR corpora are available for many languages. While this kind of data does not exactly match what one would collect for a more standard TTS corpus, it may nevertheless contain parts which are usable for producing natural and intelligible parametric TTS voices. In the first part of this thesis, we examine various types of found speech data in comparison with data collected for TTS, in terms of a variety of acoustic and prosodic features. We find that radio broadcast news in particular is a good match. Audiobooks may also be a good match despite their generally more expressive style, and certain speakers in conversational and read ASR corpora also resemble TTS speakers in their manner of speaking, and thus their data may be usable for training TTS voices. In the rest of the thesis, we conduct a variety of experiments in training voices on non-traditional sources of data, such as ASR data, radio broadcast news, and audiobooks. We aim to discover which methods produce the most intelligible and natural-sounding voices, focusing on three main approaches: 1) Training data subset selection. In noisy, heterogeneous data sources, we may wish to locate subsets of the data that are well-suited for building voices, based on acoustic and prosodic features that are known to correspond with TTS-style speech, while excluding utterances that introduce noise or other artifacts. We find that choosing subsets of speakers for training data can result in voices that are more intelligible. 2) Augmenting the frontend feature set with new features. In cleaner sources of found data, we may wish to train voices on all of the data, but we may get improvements in naturalness by including acoustic and prosodic features at the frontend and synthesizing in a manner that better matches the TTS style. We find that this approach is promising for creating more natural-sounding voices, regardless of the underlying acoustic model. 3) Adaptation. Another way to make use of high-quality data while also including informative acoustic and prosodic features is to adapt to subsets, rather than to select and train only on subsets. We also experiment with training on mixed high- and low-quality data, and adapting towards the high-quality set, which produces more intelligible voices than training on either type of data by itself. We hope that our findings may serve as guidelines for anyone wishing to build their own TTS voice using non-traditional sources of found data.
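The subset-selection idea described in this abstract can be sketched as a simple filter: rank utterances in a found-data pool by their distance from a reference TTS-style feature profile and keep the closest fraction. This is a hypothetical illustration, not the thesis's actual method; the feature names and reference values below are invented.

```python
# Hypothetical sketch: keep utterances whose acoustic/prosodic features are
# closest (in summed squared z-score) to a TTS-corpus reference profile.

def select_tts_subset(utterances, reference, keep_fraction=0.5):
    """utterances: list of dicts with 'id' plus feature values.
    reference: feature name -> (mean, std) measured on a clean TTS corpus."""
    def distance(utt):
        # Sum of squared z-scores against the reference profile.
        return sum(((utt[f] - mu) / sd) ** 2 for f, (mu, sd) in reference.items())
    ranked = sorted(utterances, key=distance)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]

# Invented reference profile and candidate pool.
reference = {"mean_f0_hz": (180.0, 25.0), "speaking_rate_sps": (4.5, 0.8), "snr_db": (35.0, 8.0)}
pool = [
    {"id": "news_001", "mean_f0_hz": 175.0, "speaking_rate_sps": 4.4, "snr_db": 33.0},
    {"id": "conv_042", "mean_f0_hz": 140.0, "speaking_rate_sps": 6.2, "snr_db": 12.0},
]
print([u["id"] for u in select_tts_subset(pool, reference, keep_fraction=0.5)])
```

Here the broadcast-news utterance survives the filter while the noisy conversational one is excluded, mirroring the abstract's finding that radio news is the closest match to TTS-style speech.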

Data-driven temporal information extraction with applications in general and clinical domains

Filannino, Michele. January 2016.
The automatic extraction of temporal information from written texts is pivotal for many Natural Language Processing applications such as question answering, text summarisation and information retrieval. However, Temporal Information Extraction (TIE) is a challenging task because of the variety of expression types (durations, frequencies, times, dates) and their high morphological variability and ambiguity. Most existing approaches are rule-based, while data-driven ones remain under-explored. This thesis introduces a novel domain-independent data-driven TIE strategy. The identification strategy is based on machine-learning sequence-labelling classifiers over features selected through an extensive exploration. Results are further optimised using an a posteriori label-adjustment pipeline. The normalisation strategy is rule-based and builds on a pre-existing system. The methodology has been applied to both the specific (clinical) and the general domain, and has been officially benchmarked at the i2b2/2012 and TempEval-3 challenges, ranking 3rd and 1st respectively. The results show the TIE task to be more challenging in the clinical domain (overall accuracy 63%) than in the general domain (overall accuracy 69%). Finally, this thesis also presents two applications of TIE. One introduces the concept of the temporal footprint of a Wikipedia article and uses it to mine the life spans of persons. In the other, TIE techniques are used to improve pre-existing information retrieval systems by filtering out temporally irrelevant results.
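The a posteriori label-adjustment step mentioned above can be illustrated with a minimal BIO-repair pass: a token classifier may emit an "inside" tag with no matching "begin" tag, and a post-processing sweep promotes such orphans to entity starts. This is a generic sketch of the idea, not the thesis's actual pipeline; the TIMEX-style tag names are assumptions.

```python
# Hedged sketch of a-posteriori BIO label adjustment: an "I-" tag whose
# predecessor is not a "B-" or "I-" of the same type is rewritten as "B-",
# so every entity span starts legally.

def adjust_bio(labels):
    fixed = []
    for i, lab in enumerate(labels):
        if lab.startswith("I-"):
            prev = fixed[i - 1] if i > 0 else "O"
            if prev not in ("B-" + lab[2:], "I-" + lab[2:]):
                lab = "B-" + lab[2:]   # orphan continuation -> entity start
        fixed.append(lab)
    return fixed

print(adjust_bio(["O", "I-DATE", "I-DATE", "O", "I-DURATION"]))
```

Real label-adjustment pipelines can be considerably richer (e.g. voting across classifiers), but this shows the kind of illegal-transition repair the abstract refers to.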

Multi-lingual text retrieval and mining.

January 2003.
Law Yin Yee. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 130-134). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Cross-Lingual Information Retrieval (CLIR) --- p.2 / Chapter 1.2 --- Bilingual Term Association Mining --- p.5 / Chapter 1.3 --- Our Contributions --- p.6 / Chapter 1.3.1 --- CLIR --- p.6 / Chapter 1.3.2 --- Bilingual Term Association Mining --- p.7 / Chapter 1.4 --- Thesis Organization --- p.8 / Chapter 2 --- Related Work --- p.9 / Chapter 2.1 --- CLIR Techniques --- p.9 / Chapter 2.1.1 --- Existing Approaches --- p.9 / Chapter 2.1.2 --- Difference Between Our Model and Existing Approaches --- p.13 / Chapter 2.2 --- Bilingual Term Association Mining Techniques --- p.13 / Chapter 2.2.1 --- Existing Approaches --- p.13 / Chapter 2.2.2 --- Difference Between Our Model and Existing Approaches --- p.17 / Chapter 3 --- Cross-Lingual Information Retrieval (CLIR) --- p.18 / Chapter 3.1 --- Cross-Lingual Query Processing and Translation --- p.18 / Chapter 3.1.1 --- Query Context and Document Context Generation --- p.20 / Chapter 3.1.2 --- Context-Based Query Translation --- p.23 / Chapter 3.1.3 --- Query Term Weighting --- p.28 / Chapter 3.1.4 --- Final Weight Calculation --- p.30 / Chapter 3.2 --- Retrieval on Documents and Automated Summaries --- p.32 / Chapter 4 --- Experiments on Cross-Lingual Information Retrieval --- p.38 / Chapter 4.1 --- Experimental Setup --- p.38 / Chapter 4.2 --- Results of English-to-Chinese Retrieval --- p.45 / Chapter 4.2.1 --- Using Mono-Lingual Retrieval as the Gold Standard --- p.45 / Chapter 4.2.2 --- Using Human Relevance Judgments as the Gold Standard --- p.49 / Chapter 4.3 --- Results of Chinese-to-English Retrieval --- p.53 / Chapter 4.3.1 --- Using Mono-lingual Retrieval as the Gold Standard --- p.53 / Chapter 4.3.2 --- Using Human Relevance Judgments as the Gold Standard --- p.57 / Chapter 5 --- Discovering Comparable Multi-lingual Online News for Text Mining --- p.61 / Chapter 5.1 --- Story Representation --- p.62 / Chapter 5.2 --- Gloss Translation --- p.64 / Chapter 5.3 --- Comparable News Discovery --- p.67 / Chapter 6 --- Mining Bilingual Term Association Based on Co-occurrence --- p.75 / Chapter 6.1 --- Bilingual Term Cognate Generation --- p.75 / Chapter 6.2 --- Term Mining Algorithm --- p.77 / Chapter 7 --- Phonetic Matching --- p.87 / Chapter 7.1 --- Algorithm Design --- p.87 / Chapter 7.2 --- Discovering Associations of English Terms and Chinese Terms --- p.93 / Chapter 7.2.1 --- Converting English Terms into Phonetic Representation --- p.93 / Chapter 7.2.2 --- Discovering Associations of English Terms and Mandarin Chinese Terms --- p.100 / Chapter 7.2.3 --- Discovering Associations of English Terms and Cantonese Chinese Terms --- p.104 / Chapter 8 --- Experiments on Bilingual Term Association Mining --- p.111 / Chapter 8.1 --- Experimental Setup --- p.111 / Chapter 8.2 --- Result and Discussion of Bilingual Term Association Mining Based on Co-occurrence --- p.114 / Chapter 8.3 --- Result and Discussion of Phonetic Matching --- p.121 / Chapter 9 --- Conclusions and Future Work --- p.126 / Chapter 9.1 --- Conclusions --- p.126 / Chapter 9.1.1 --- CLIR --- p.126 / Chapter 9.1.2 --- Bilingual Term Association Mining --- p.127 / Chapter 9.2 --- Future Work --- p.128 / Bibliography --- p.134 / Chapter A --- Original English Queries --- p.135 / Chapter B --- Manually translated Chinese Queries --- p.137 / Chapter C --- Pronunciation symbols used by the PRONLEX Lexicon --- p.139 / Chapter D --- Initial Letter-to-Phoneme Tags --- p.141 / Chapter E --- English Sounds with their Chinese Equivalents --- p.143

Semi-supervised document clustering with active learning. / CUHK electronic theses & dissertations collection

January 2008.
Most existing semi-supervised document clustering approaches are model-based and can be treated as parametric models, assuming that the underlying clusters follow a certain pre-defined distribution. In our semi-supervised document clustering, each cluster is represented by a non-parametric probability distribution. Two approaches are designed for incorporating pairwise constraints into the document clustering approach. The first approach, the term-to-term relationship approach (TR), uses pairwise constraints to capture term-to-term dependence relationships. The second approach, the linear combination approach (LC), combines the clustering objective function with the user-provided constraints linearly. Extensive experimental results show that our proposed framework is effective. / This thesis presents a new framework for automatically partitioning text documents that takes into consideration constraints given by users. Semi-supervised document clustering is developed based on pairwise constraints. Unlike traditional semi-supervised document clustering approaches, which assume pairwise constraints to be prepared by the user beforehand, we develop a novel framework for automatically discovering pairwise constraints revealing the user's grouping preference. An active learning approach for choosing informative document pairs is designed by measuring the amount of information that can be obtained by revealing judgments of document pairs. For this purpose, three models, namely an uncertainty model, a generation error model, and a term-to-term relationship model, are designed for measuring the informativeness of document pairs from different perspectives. A dependent active learning approach is developed by extending the active learning approach to avoid redundant document pair selection. Two models are investigated for estimating the likelihood that a document pair is redundant to previously selected document pairs: a KL divergence model and a symmetric model.
/ Huang, Ruizhang. / Adviser: Wai Lam. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3600. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 117-123). / Abstracts in English and Chinese. / School code: 1307.
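The "linear combination" idea described in this record — folding user-provided pairwise constraints into the clustering objective — can be illustrated with a toy penalised assignment step: a document's cost for a cluster is its distance to the centroid plus a weighted count of must-link/cannot-link violations. This is a generic sketch of constraint-penalised assignment, not the thesis's non-parametric model; all names and weights are invented.

```python
# Hypothetical sketch: cluster-assignment cost = squared distance to centroid
# + w * (number of violated pairwise constraints), in the spirit of linearly
# combining a clustering objective with user constraints.

def assign(doc_id, vec, centroids, assignments, must_link, cannot_link, w=10.0):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best, best_cost = None, float("inf")
    for k, c in enumerate(centroids):
        violations = 0
        for a, b in must_link:      # must-link pair split across clusters
            other = b if a == doc_id else a if b == doc_id else None
            if other in assignments and assignments[other] != k:
                violations += 1
        for a, b in cannot_link:    # cannot-link pair forced together
            other = b if a == doc_id else a if b == doc_id else None
            if other in assignments and assignments[other] == k:
                violations += 1
        cost = dist(vec, c) + w * violations
        if cost < best_cost:
            best, best_cost = k, cost
    return best

centroids = [(0.0, 0.0), (1.0, 1.0)]
assignments = {"d1": 0}
# d2 sits near centroid 1, but a must-link with d1 (in cluster 0) flips it.
print(assign("d2", (0.9, 0.9), centroids, assignments, must_link=[("d1", "d2")], cannot_link=[]))
```

With the constraint the penalty outweighs the distance and d2 joins cluster 0; without it, d2 would simply go to its nearest centroid.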

Geometric and topological approaches to semantic text retrieval. / CUHK electronic theses & dissertations collection

January 2007.
In the first part of this thesis, we present a new understanding of the latent semantic space of a dataset from the dual perspective, which relaxes the above assumed conditions and leads naturally to a unified kernel function for a class of vector space models. New semantic analysis methods based on the unified kernel function are developed, which combine the advantages of LSI and GVSM. We also show that the new methods possess a stability property with respect to the rank choice, i.e., even if the selected rank is quite far from the optimal one, retrieval performance will not degrade much. The experimental results of our methods on the standard test sets are promising. / In the second part of this thesis, we propose that the mathematical structure of simplexes can be attached to a term-document matrix in the vector-space model (VSM) for information retrieval. The Q-analysis devised by R. H. Atkin may then be applied to effect an analysis of the topological structure of the simplexes and their corresponding dataset. Experimental results of this analysis reveal that there is a correlation between the effectiveness of LSI and the topological structure of the dataset. By using the information obtained from the topological analysis, we develop a new query expansion method. Experimental results show that our method can enhance the performance of VSM for datasets over which LSI is not effective. Finally, the notion of homology is introduced to the topological analysis of datasets and its possible relation to word sense disambiguation is studied through a simple example. / With the vast amount of textual information available today, the task of designing effective and efficient retrieval methods becomes more important and complex. The Basic Vector Space Model (BVSM) is well known in information retrieval. Unfortunately, it cannot retrieve all relevant documents since it is based on literal term matching. The Generalized Vector Space Model (GVSM) and Latent Semantic Indexing (LSI) are two well-known semantic retrieval methods, in which some underlying latent semantic structure in the dataset is assumed. However, their assumptions about where the semantic structure is located are rather strong. Moreover, the performance of LSI can vary considerably across datasets, and the questions of which characteristics of a dataset contribute to this difference, and why, have not been fully understood. The present thesis focuses on providing answers to these two questions. / Li, Dandan. / "August 2007." / Adviser: Chung-Ping Kwong. / Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1108. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 118-120). / Abstract in English and Chinese. / School code: 1307.
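The contrast between literal term matching and latent-semantic retrieval that this record turns on can be shown with a tiny LSI example: after a truncated SVD of a term-document matrix, a query containing only "car" matches a document that uses only "auto". The matrix, rank, and query are invented for illustration; this is textbook LSI, not the thesis's unified-kernel method.

```python
# Toy LSI illustration: truncated SVD of a term-document matrix, then cosine
# similarity between a folded-in query and documents in the latent space.
import numpy as np

terms = ["car", "auto", "flower"]
# Columns: doc0 about cars, doc1 about autos, doc2 about flowers.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_k = (np.diag(s[:k]) @ Vt[:k]).T   # rank-k document representations
q = np.array([1.0, 0.0, 0.0])          # literal query: the term "car" only
q_k = q @ U[:, :k]                     # fold the query into the latent space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sims = [cos(q_k, d) for d in docs_k]
print([round(x, 2) for x in sims])
```

Literal matching (BVSM) would give doc1 a zero score, since it never contains "car"; in the latent space doc1 scores as highly as doc0, while the flower document stays orthogonal.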

Re-writing Ariadne : following the thread of literary and artistic representations of Ariadne's abandonment

Schoess, Ann-Sophie. January 2018.
This thesis takes Ariadne's abandonment as a case study in order to examine the literary processes of reception that underlie the transmission of classical myth in different eras and cultural contexts - from Classical Antiquity through the Italian Renaissance. Rather than focusing on the ways in which visual representations of Ariadne relate to literary treatments, it draws attention to the literary reliance on a cultural framework, shared by writer and reader, that enables dynamic storytelling. It argues that literary variation of the myth is central to its successful transmission, not least because it allows for appropriations and adaptations that can be made to fit new social and religious parameters, such as Christian conventions in the Middle Ages. In focusing on the important role played by the visual arts in the classical tradition, this research further challenges the still prevalent misconception that the visual arts are secondary to literature, and refutes the common assumption that the relationship between image and text is unidirectional. It highlights the visual impulses leading to paradigm shifts in the literary treatment of the abandonment narrative, and examines the ways in which writers engage with the visual tradition in order to re-shape the ancient narrative. Throughout, attention is drawn to the visual and cultural framework shared by ancient writers and readers, and to the lack of engagement with this framework in traditional classical scholarship. Through its focus on the literary narratives' visuality and mutability, this thesis offers a new paradigm for studying classical myth and its reception.

New learning strategies for automatic text categorization.

January 2001.
Lai Kwok-yin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 125-130). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Automatic Textual Document Categorization --- p.1 / Chapter 1.2 --- Meta-Learning Approach For Text Categorization --- p.3 / Chapter 1.3 --- Contributions --- p.6 / Chapter 1.4 --- Organization of the Thesis --- p.7 / Chapter 2 --- Related Work --- p.9 / Chapter 2.1 --- Existing Automatic Document Categorization Approaches --- p.9 / Chapter 2.2 --- Existing Meta-Learning Approaches For Information Retrieval --- p.14 / Chapter 2.3 --- Our Meta-Learning Approaches --- p.20 / Chapter 3 --- Document Pre-Processing --- p.22 / Chapter 3.1 --- Document Representation --- p.22 / Chapter 3.2 --- Classification Scheme Learning Strategy --- p.25 / Chapter 4 --- Linear Combination Approach --- p.30 / Chapter 4.1 --- Overview --- p.30 / Chapter 4.2 --- Linear Combination Approach - The Algorithm --- p.33 / Chapter 4.2.1 --- Equal Weighting Strategy --- p.34 / Chapter 4.2.2 --- Weighting Strategy Based On Utility Measure --- p.34 / Chapter 4.2.3 --- Weighting Strategy Based On Document Rank --- p.35 / Chapter 4.3 --- Comparisons of Linear Combination Approach and Existing Meta-Learning Methods --- p.36 / Chapter 4.3.1 --- LC versus Simple Majority Voting --- p.36 / Chapter 4.3.2 --- LC versus BORG --- p.38 / Chapter 4.3.3 --- LC versus Restricted Linear Combination Method --- p.38 / Chapter 5 --- The New Meta-Learning Model - MUDOF --- p.40 / Chapter 5.1 --- Overview --- p.41 / Chapter 5.2 --- Document Feature Characteristics --- p.42 / Chapter 5.3 --- Classification Errors --- p.44 / Chapter 5.4 --- Linear Regression Model --- p.45 / Chapter 5.5 --- The MUDOF Algorithm --- p.47 / Chapter 6 --- Incorporating MUDOF into Linear Combination approach --- p.52 / Chapter 6.1 --- Background --- p.52 / Chapter 6.2 --- Overview of MUDOF2 --- p.54 / Chapter 6.3 --- Major Components of the MUDOF2 --- p.57 / Chapter 6.4 --- The MUDOF2 Algorithm --- p.59 / Chapter 7 --- Experimental Setup --- p.66 / Chapter 7.1 --- Document Collection --- p.66 / Chapter 7.2 --- Evaluation Metric --- p.68 / Chapter 7.3 --- Component Classification Algorithms --- p.71 / Chapter 7.4 --- Categorical Document Feature Characteristics for MUDOF and MUDOF2 --- p.72 / Chapter 8 --- Experimental Results and Analysis --- p.74 / Chapter 8.1 --- Performance of Linear Combination Approach --- p.74 / Chapter 8.2 --- Performance of the MUDOF Approach --- p.78 / Chapter 8.3 --- Performance of MUDOF2 Approach --- p.87 / Chapter 9 --- Conclusions and Future Work --- p.96 / Chapter 9.1 --- Conclusions --- p.96 / Chapter 9.2 --- Future Work --- p.98 / Chapter A --- Details of Experimental Results for Reuters-21578 corpus --- p.99 / Chapter B --- Details of Experimental Results for OHSUMED corpus --- p.114 / Bibliography --- p.125

Processo automático de reconhecimento de texto em imagens de documentos de identificação genéricos. / Automatic text recognition process in identification document images.

Rodolfo Valiente Romero. 12 December 2017.
There is growing demand for methods of extracting text from document images, and the use of digital images has become more and more frequent in many areas. The modern world is full of text, which humans use to identify objects, navigate and make decisions. Although the problem of text recognition has been extensively studied within certain domains, detecting and recognizing text in identification documents remains an open challenge. We present an architecture that integrates localization, extraction and recognition algorithms for extracting text from generic identification documents. The proposed localization method uses the MSER algorithm together with contrast enhancement and edge information to locate candidate characters. The selection stage was developed through a search for heuristics capable of classifying the located regions as textual or non-textual. In the recognition stage, an iterative method is proposed to improve OCR performance. The process was evaluated using the precision and recall metrics, and a proof of concept of the system was carried out in a real environment. The proposed approach is robust in detecting text in complex images with different orientations, dimensions and colors. The proposed text recognition system presents competitive results, in both accuracy and recognition rate, when compared with other systems in the current technical literature, showing excellent performance and the feasibility of its implementation in real systems.
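The selection stage described above — heuristics that classify candidate regions (e.g. MSER output) as textual or non-textual — can be sketched with simple geometric filters: plausible characters have a bounded aspect ratio, a minimum size, and a moderate fill ratio. The thresholds below are invented for illustration, not the thesis's tuned values.

```python
# Hypothetical region-selection heuristics over candidate character regions.

def looks_like_character(box):
    """box: dict with w, h (bounding box) and area (pixels in the region)."""
    w, h, area = box["w"], box["h"], box["area"]
    if w < 4 or h < 8:                 # too small to be a legible glyph
        return False
    aspect = w / h
    if not 0.1 <= aspect <= 2.0:       # extreme shapes: rules, borders, noise
        return False
    fill = area / (w * h)
    return 0.15 <= fill <= 0.95        # reject near-empty or solid blobs

candidates = [
    {"w": 12, "h": 20, "area": 110},   # plausible letter
    {"w": 200, "h": 3, "area": 600},   # horizontal rule
    {"w": 10, "h": 10, "area": 100},   # solid blob (fill = 1.0)
]
print([looks_like_character(b) for b in candidates])
```

In a full pipeline such filters would be tuned on labelled data and combined with stroke-width or color cues before the surviving regions are handed to the OCR stage.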

Normatização paulista de nomeação de figuras do maternal até universidade e de teste de vocabulário auditivo de 1 a 6 anos. / São Paulo norming of picture naming from nursery school through university, and of an auditory vocabulary test for ages 1 to 6.

Valéria Bertozzi Negrão. 13 December 2007.
Pictures are frequently used in materials for psychological and proficiency assessment and for teaching, clinical, and didactic-pedagogical intervention. The validity of assessment materials and the effectiveness of teaching materials often depend on choosing pictures appropriate to the age and schooling level of the person being assessed or taught. Having banks of pictures with naming normed for several age ranges can improve the validity and effectiveness of such materials. Starting from an original set of 2,100 pictures, this study generated seven picture banks with naming normed on thousands of students, from higher education (ES) through elementary school (EF) and early childhood education (EI) down to the 1st year of nursery school (Maternal), in the 18-month to 2-year age range. To illustrate the use of the picture banks for generating tests, the study selected the 214 most univocal of the 365 pictures in the high-univocity bank for the 2nd year of EI to generate two original alternate forms of an auditory vocabulary test, each with 107 items, each item consisting of one target picture and four distractor pictures. The study administered the two forms of the test to 396 children aged 2 to 5 years (10 aged 2, 93 aged 3, 160 aged 4, and 133 aged 5) in order to obtain developmental norms for auditory vocabulary. The objectives of the study were: 1) to generate seven normed picture banks, the first for university students, the second for students from the 1st to the 4th year of EF and the 3rd year of EI, the third for children in the 3rd year of EI, the fourth for children in the 2nd year of EI, the fifth for children in the 1st year of EI, the sixth for children in the 2nd year of nursery school, and the seventh for children in the 1st year of nursery school; 2) to present the seven banks in the form of 3 dictionaries, the first for university students, the second for students from the 1st to the 4th year of EF and the 3rd year of EI, and the third for children in the 3rd, 2nd, and 1st years of EI and in the 2nd and 1st years of nursery school; 3) to derive two original forms (A and B) of the auditory vocabulary test with 107 items and 5 pictures per item; 4) to administer the two original forms to about 300 children aged 2 to 5 in order to determine whether they can detect significant growth of auditory vocabulary in this age range, whether they can discriminate between successive school years, and whether there is a significant positive correlation between the two forms, which would allow their alternate use in test-retest studies; 5) to norm the original form(s) of the test found capable of detecting significant auditory vocabulary growth; 6) to carry out item analysis, excluding the items with the lowest item-total correlation, to obtain the reordered abbreviated forms; 7) to administer the reordered abbreviated forms to 177 children aged 18 months to 6 years to verify whether they can discriminate between successive age ranges; 8) to norm the reordered abbreviated form(s). Initially, a corpus of 2,100 original pictures was generated. These 2,100 pictures were presented to 1,250 university students for written naming. Of these 2,100, 1,190 pictures met the criterion of at least 70% agreement among the university students. Of these 1,190, 244 were excluded and the remaining 944 were selected to be presented to 1,000 students from the 1st to the 4th year of EF (aged 7, 8, 9, and 10) and the 3rd year of EI (aged 6) for written naming. Of these 944 pictures, 566 met the criterion and were presented to 600 students in the 3rd year of EI for written naming. Of these 566, 429 met the criterion and were presented to 500 children in the 2nd year of EI (aged 5) for oral naming. Of these 429, 365 met the criterion and were presented to 500 children in the 1st year of EI (aged 4) for oral naming. Of these 365, 187 met the criterion and were presented to 500 children in the 2nd year of nursery school (aged 3) for oral naming. Of these 187, 99 met the criterion and were presented to 500 children in the 1st year of nursery school (aged 2). Of these 99, 90 pictures met the criterion of 70% univocal naming. The study offers the seven picture banks with normed naming distributed across three books: the first containing data from university students; the second containing data from students in the 1st to 4th years of EF and the 3rd year of EI; and the third with data from students from Nursery 1 through the 3rd year of EI. The study illustrates the effective use of the picture banks by offering two original 107-item forms of the auditory vocabulary test, together with their norming on 396 children aged 2 to 5, the reordered versions of the two forms, and, finally, the abbreviated original versions with 33 items each, normed on 177 children aged 1 to 6, along with the abbreviated reordered versions derived from that administration. / Text not informed by the author.
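The 70% univocity criterion applied at every stage of this norming pipeline can be sketched as a one-line computation: a picture passes for a given group if its most frequent name accounts for at least 70% of that group's responses. The example responses below are invented.

```python
# Sketch of the 70% naming-agreement (univocity) criterion used in norming
# picture banks: pass if the modal name covers >= 70% of responses.
from collections import Counter

def passes_univocity(responses, threshold=0.70):
    counts = Counter(responses)
    _, top = counts.most_common(1)[0]   # count of the most frequent name
    return top / len(responses) >= threshold

print(passes_univocity(["cat"] * 8 + ["kitten"] * 2))   # 80% agreement
print(passes_univocity(["cat"] * 5 + ["kitten"] * 5))   # 50% agreement
```

Applied group by group, such a filter reproduces the winnowing the abstract reports, from 2,100 candidate pictures down to the 90 that remained univocal for two-year-olds.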

Mixing the library : information interaction and the DJ

Norton, Daniel. January 2013.
Digital collections have been amassed by institutions and individuals for over two decades. Large collections are becoming increasingly available as resources for research, learning, creativity, and pleasure. However, the value of these collections can remain elusive. Systems and methods are needed to unlock the potential held within collections, to access the knowledge and to make new discoveries with the available information. The aim of this research is to identify and describe a system for interacting with large volumes of digital material that supports both learning and creative development. This is done by investigating the Disc Jockey (DJ) who works with electronic media files. DJs have worked with large digital collections since the birth of file sharing in the 1990s. Their activities necessitate a library system that supports retrieval, creative play, and public presentation of material. The investigation will develop a model of information interaction from their activities. To examine the practice, the research employs an autoethnographic diary study, video interviews, and a practice-led method that combines Grounded Theory with digital interface development. Findings indicate a model of interaction which facilitates learning through the development of a personal collection, and allows creative innovation through key information behaviours of selecting and mixing. The research distinguishes fundamental interface requirements that support the process, and demonstrates transferability of the model to other data representations.
