581.
Imagens de sombras / Images of Shadows. Paulino, Rosana. 05 May 2011.
O objetivo desta tese é construir, através de trabalhos realizados na área de poéticas visuais, uma reflexão que procure compreender como a mulher negra é vista na sociedade brasileira atual e o modo pelo qual as sombras lançadas pela escravidão sobre esta população se refletem nas negrodescendentes ainda hoje, criando e perpetuando locais simbólicos e sociais para este grupo. Este questionamento será realizado tendo como base uma investigação que transita entre diferentes manifestações artísticas, que vão da instalação à gravura, sempre procurando os meios plásticos adequados para a produção das obras que irão tratar do problema. Uma breve análise sobre os motivos que levam à opção por determinados procedimentos técnicos necessários à realização dos trabalhos, a escolha e aplicação de diferentes meios artísticos e suas adaptações ao pensamento visual também fazem parte desta investigação. É ainda intenção do trabalho pensar sobre a forma de apresentação do texto que acompanha as obras produzidas e esclarecer os motivos da eleição por uma escrita de artista para o relato apresentado em conjunto com as imagens executadas. / The goal of this thesis is to offer, through artworks in the field of visual art, a reflection that seeks to comprehend how black women are seen in current Brazilian society and the way that the shadows cast by slavery over this population are still reflected in female black descendants today, creating and perpetuating social and symbolic places for that group. This issue will be tackled on the basis of research that moves across different art forms, ranging from installation to printmaking, always looking for the plastic means suited to producing the works that address the problem. A brief analysis of the reasons for choosing certain technical procedures necessary to create the artworks, the choice and application of different artistic media, and their adaptation to visual thinking are also part of this investigation. The thesis also intends to reflect on the form of the text that accompanies the works produced and to explain the choice of an "artist's writing" for the account presented together with the images.
582.
Processo automático de reconhecimento de texto em imagens de documentos de identificação genéricos. / Automatic text recognition process in identification document images. Romero, Rodolfo Valiente. 12 December 2017.
Existe uma busca crescente por métodos de extração de texto em imagens de documentos. O uso de imagens digitais tem se tornado cada vez mais frequente em diversas áreas. O mundo moderno está cheio de texto, que os seres humanos usam para identificar objetos, navegar e tomar decisões. Embora o problema do reconhecimento de texto tenha sido amplamente estudado dentro de determinados domínios, detectar e ler texto em documentos de identificação, continua sendo um desafio aberto. Apresenta-se uma arquitetura que integra os diferentes algoritmos de localização, extração e reconhecimento aplicados à extração de texto em documentos de identificação genéricos. O método de localização proposto usa o algoritmo MSER junto com uma melhoria do contraste e a informação das bordas dos objetos da imagem, para localizar os possíveis caracteres. A etapa de seleção desenvolveu-se mediante a busca de heurísticas, capazes de classificar as regiões localizadas como textuais e não-textuais. Na etapa de reconhecimento é proposto um método iterativo para melhorar o desempenho do OCR. O processo foi avaliado usando as métricas precisão e revocação e foi realizada uma prova de conceito do sistema em um ambiente real. A abordagem proposta é robusta na detecção de textos oriundos de imagens complexas com diferentes orientações, dimensões e cores. O sistema de reconhecimento de texto proposto apresenta resultados competitivos, tanto em precisão e taxa de reconhecimento, quando comparados com outros sistemas. Mostrando excelente desempenho e viabilidade de sua implementação em sistemas reais. / There is a growing demand for methods to extract text from document images. The use of digital images has become more and more frequent in several areas. The modern world is full of text, which humans use to identify objects, navigate and make decisions. Although the problem of text recognition has been extensively studied within certain domains, detecting and recognizing text in identification documents remains an open challenge. We present an architecture that integrates different localization, extraction and recognition algorithms for extracting text from generic identification documents. The proposed localization method uses the MSER algorithm together with contrast enhancement and the edge information of image objects to locate candidate characters. The selection stage was developed through a search for heuristics capable of classifying the located regions as textual or non-textual. In the recognition step, an iterative method is proposed to improve OCR performance. The process was evaluated using precision and recall, and a proof of concept of the system was carried out in a real environment. The proposed approach is robust in detecting text in complex images with different orientations, dimensions and colors. The proposed text recognition system achieves competitive results, in both precision and recognition rate, when compared with other systems in the current technical literature, showing excellent performance and feasibility for implementation in real systems.
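The pipeline this abstract describes (contrast enhancement, MSER localization, heuristic filtering of regions, then OCR) can be sketched as follows. This is a minimal illustration, not the author's implementation: OpenCV and pytesseract are assumed stand-ins, a single OCR pass replaces the thesis's iterative method, and the filtering thresholds are invented for demonstration.

```python
import cv2
import pytesseract

def locate_candidate_regions(image_bgr):
    """Locate possible character regions via contrast enhancement and MSER."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Contrast enhancement before MSER, as the abstract suggests.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(enhanced)
    edges = cv2.Canny(enhanced, 50, 150)
    candidates = []
    for (x, y, w, h) in bboxes:
        # Illustrative textual/non-textual heuristics: aspect ratio and
        # edge density stand in for the thesis's selection stage.
        aspect = w / float(h)
        edge_density = edges[y:y + h, x:x + w].mean() / 255.0
        if 0.1 < aspect < 10.0 and edge_density > 0.05:
            candidates.append((x, y, w, h))
    return candidates

def recognize_text(image_bgr):
    """One OCR pass over each candidate box (the thesis iterates to
    improve OCR performance; a single pass is shown for brevity)."""
    results = []
    for (x, y, w, h) in locate_candidate_regions(image_bgr):
        crop = image_bgr[y:y + h, x:x + w]
        text = pytesseract.image_to_string(crop).strip()
        if text:
            results.append(((x, y, w, h), text))
    return results
```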
583.
Text-to-Speech Synthesis Using Found Data for Low-Resource Languages. Cooper, Erica Lindsay. January 2019.
Text-to-speech synthesis is a key component of interactive, speech-based systems. Typically, building a high-quality voice requires collecting dozens of hours of speech from a single professional speaker in an anechoic chamber with a high-quality microphone. There are about 7,000 languages spoken in the world, and most do not enjoy the speech research attention historically paid to such languages as English, Spanish, Mandarin, and Japanese. Speakers of these so-called "low-resource languages" therefore do not equally benefit from these technological advances. While it takes a great deal of time and resources to collect a traditional text-to-speech corpus for a given language, we may instead be able to make use of various sources of "found" data which may be available. In particular, sources such as radio broadcast news and ASR corpora are available for many languages. While this kind of data does not exactly match what one would collect for a more standard TTS corpus, it may nevertheless contain parts which are usable for producing natural and intelligible parametric TTS voices.
In the first part of this thesis, we examine various types of found speech data in comparison with data collected for TTS, in terms of a variety of acoustic and prosodic features. We find that radio broadcast news in particular is a good match. Audiobooks may also be a good match despite their generally more expressive style, and certain speakers in conversational and read ASR corpora also resemble TTS speakers in their manner of speaking and thus their data may be usable for training TTS voices.
In the rest of the thesis, we conduct a variety of experiments in training voices on non-traditional sources of data, such as ASR data, radio broadcast news, and audiobooks. We aim to discover which methods produce the most intelligible and natural-sounding voices, focusing on three main approaches:
1) Training data subset selection. In noisy, heterogeneous data sources, we may wish to locate subsets of the data that are well-suited for building voices, based on acoustic and prosodic features that are known to correspond with TTS-style speech, while excluding utterances that introduce noise or other artifacts. We find that choosing subsets of speakers for training data can result in voices that are more intelligible.
2) Augmenting the frontend feature set with new features. In cleaner sources of found data, we may wish to train voices on all of the data, but we may get improvements in naturalness by including acoustic and prosodic features at the frontend and synthesizing in a manner that better matches the TTS style. We find that this approach is promising for creating more natural-sounding voices, regardless of the underlying acoustic model.
3) Adaptation. Another way to make use of high-quality data while also including informative acoustic and prosodic features is to adapt to subsets, rather than to select and train only on subsets. We also experiment with training on mixed high- and low-quality data, and adapting towards the high-quality set, which produces more intelligible voices than training on either type of data by itself.
We hope that our findings may serve as guidelines for anyone wishing to build their own TTS voice using non-traditional sources of found data.
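Approach (1) above, training-data subset selection, can be sketched as follows. The feature names, scoring weights, and target values are illustrative assumptions, not the criteria used in the thesis.

```python
def select_tts_subset(utterances, max_hours=10.0):
    """Rank found-data utterances by closeness to a TTS-like target style
    and keep the best ones up to a duration budget.

    Each utterance is a dict with illustrative fields: duration_s,
    snr_db, f0_std (pitch variability, Hz), syll_rate (syllables/sec).
    """
    def score(u):
        # Prefer clean audio, moderate pitch variation, steady rate;
        # the reference values here are invented for demonstration.
        return (u["snr_db"] / 40.0
                - abs(u["f0_std"] - 30.0) / 30.0
                - abs(u["syll_rate"] - 4.0) / 4.0)

    subset, total_h = [], 0.0
    for u in sorted(utterances, key=score, reverse=True):
        if total_h + u["duration_s"] / 3600.0 > max_hours:
            continue
        subset.append(u)
        total_h += u["duration_s"] / 3600.0
    return subset
```

In practice, the scoring function is where the thesis's acoustic and prosodic analysis would plug in; the duration budget simply keeps the selected subset comparable in size to a traditional TTS corpus.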
584.
Data-driven temporal information extraction with applications in general and clinical domains. Filannino, Michele. January 2016.
The automatic extraction of temporal information from written texts is pivotal for many Natural Language Processing applications such as question answering, text summarisation and information retrieval. However, Temporal Information Extraction (TIE) is a challenging task because of the variety of expression types (durations, frequencies, times, dates) and their high morphological variability and ambiguity. Among existing approaches, rule-based ones are the most common, while data-driven ones are under-explored. This thesis introduces a novel domain-independent data-driven TIE strategy. The identification strategy is based on machine-learning sequence labelling classifiers over features selected through an extensive exploration. Results are further optimised using an a posteriori label-adjustment pipeline. The normalisation strategy is rule-based and builds on a pre-existing system. The methodology has been applied to both the specific (clinical) and the generic domain, and has been officially benchmarked at the i2b2/2012 and TempEval-3 challenges, ranking 3rd and 1st respectively. The results show the TIE task to be more challenging in the clinical domain (overall accuracy 63%) than in the general domain (overall accuracy 69%). Finally, this thesis also presents two applications of TIE. One of them introduces the concept of the temporal footprint of a Wikipedia article, and uses it to mine the life spans of persons. In the other case, TIE techniques are used to improve pre-existing information retrieval systems by filtering out temporally irrelevant results.
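As a rough illustration of the data-driven identification strategy (a sequence labeller with BIO-style labels over token features), consider the sketch below. The use of sklearn-crfsuite and the tiny feature set are assumptions for demonstration; the thesis selects its features through a far more extensive exploration and adds an a posteriori label-adjustment pipeline on top.

```python
import sklearn_crfsuite

def token_features(tokens, i):
    """Illustrative per-token features for a CRF sequence labeller."""
    tok = tokens[i]
    feats = {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "is_title": tok.istitle(),
        "suffix3": tok[-3:],
    }
    if i > 0:
        feats["prev_lower"] = tokens[i - 1].lower()
    else:
        feats["BOS"] = True
    return feats

# Toy training pair: B/I/O labels mark one temporal expression (a date).
sentences = [["Admitted", "on", "12", "December", "2017", "."]]
labels = [["O", "O", "B-DATE", "I-DATE", "I-DATE", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))  # recovers the BIO sequence on the training sentence
```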
585.
Multi-lingual text retrieval and mining. January 2003.
Law Yin Yee. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 130-134). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Cross-Lingual Information Retrieval (CLIR) --- p.2 / Chapter 1.2 --- Bilingual Term Association Mining --- p.5 / Chapter 1.3 --- Our Contributions --- p.6 / Chapter 1.3.1 --- CLIR --- p.6 / Chapter 1.3.2 --- Bilingual Term Association Mining --- p.7 / Chapter 1.4 --- Thesis Organization --- p.8 / Chapter 2 --- Related Work --- p.9 / Chapter 2.1 --- CLIR Techniques --- p.9 / Chapter 2.1.1 --- Existing Approaches --- p.9 / Chapter 2.1.2 --- Difference Between Our Model and Existing Approaches --- p.13 / Chapter 2.2 --- Bilingual Term Association Mining Techniques --- p.13 / Chapter 2.2.1 --- Existing Approaches --- p.13 / Chapter 2.2.2 --- Difference Between Our Model and Existing Approaches --- p.17 / Chapter 3 --- Cross-Lingual Information Retrieval (CLIR) --- p.18 / Chapter 3.1 --- Cross-Lingual Query Processing and Translation --- p.18 / Chapter 3.1.1 --- Query Context and Document Context Generation --- p.20 / Chapter 3.1.2 --- Context-Based Query Translation --- p.23 / Chapter 3.1.3 --- Query Term Weighting --- p.28 / Chapter 3.1.4 --- Final Weight Calculation --- p.30 / Chapter 3.2 --- Retrieval on Documents and Automated Summaries --- p.32 / Chapter 4 --- Experiments on Cross-Lingual Information Retrieval --- p.38 / Chapter 4.1 --- Experimental Setup --- p.38 / Chapter 4.2 --- Results of English-to-Chinese Retrieval --- p.45 / Chapter 4.2.1 --- Using Mono-Lingual Retrieval as the Gold Standard --- p.45 / Chapter 4.2.2 --- Using Human Relevance Judgments as the Gold Standard --- p.49 / Chapter 4.3 --- Results of Chinese-to-English Retrieval --- p.53 / Chapter 4.3.1 --- Using Mono-lingual Retrieval as the Gold Standard --- p.53 / Chapter 4.3.2 --- Using Human Relevance Judgments as the Gold Standard --- p.57 / Chapter 5 --- Discovering Comparable Multi-lingual Online News for Text Mining --- p.61 / Chapter 5.1 --- Story Representation --- p.62 / Chapter 5.2 --- Gloss Translation --- p.64 / Chapter 5.3 --- Comparable News Discovery --- p.67 / Chapter 6 --- Mining Bilingual Term Association Based on Co-occurrence --- p.75 / Chapter 6.1 --- Bilingual Term Cognate Generation --- p.75 / Chapter 6.2 --- Term Mining Algorithm --- p.77 / Chapter 7 --- Phonetic Matching --- p.87 / Chapter 7.1 --- Algorithm Design --- p.87 / Chapter 7.2 --- Discovering Associations of English Terms and Chinese Terms --- p.93 / Chapter 7.2.1 --- Converting English Terms into Phonetic Representation --- p.93 / Chapter 7.2.2 --- Discovering Associations of English Terms and Mandarin Chinese Terms --- p.100 / Chapter 7.2.3 --- Discovering Associations of English Terms and Cantonese Chinese Terms --- p.104 / Chapter 8 --- Experiments on Bilingual Term Association Mining --- p.111 / Chapter 8.1 --- Experimental Setup --- p.111 / Chapter 8.2 --- Result and Discussion of Bilingual Term Association Mining Based on Co-occurrence --- p.114 / Chapter 8.3 --- Result and Discussion of Phonetic Matching --- p.121 / Chapter 9 --- Conclusions and Future Work --- p.126 / Chapter 9.1 --- Conclusions --- p.126 / Chapter 9.1.1 --- CLIR --- p.126 / Chapter 9.1.2 --- Bilingual Term Association Mining --- p.127 / Chapter 9.2 --- Future Work --- p.128 / Bibliography --- p.134 / Chapter A --- Original English Queries --- p.135 / Chapter B --- Manually Translated Chinese Queries --- p.137 / Chapter C --- Pronunciation Symbols Used by the PRONLEX Lexicon --- p.139 / Chapter D --- Initial Letter-to-Phoneme Tags --- p.141 / Chapter E --- English Sounds with their Chinese Equivalents --- p.143
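Chapter 6's co-occurrence-based bilingual term association mining might, in its simplest form, look like the sketch below: English/Chinese term pairs are scored by a Dice coefficient over comparable news story pairs. The data structures and the Dice scoring are illustrative assumptions extrapolated from the chapter titles, not the thesis's actual algorithm.

```python
from collections import Counter

def dice_associations(comparable_pairs):
    """Score bilingual term pairs by co-occurrence across comparable stories.

    comparable_pairs: list of (english_terms, chinese_terms) sets, one per
    comparable news story pair. Returns {(en_term, zh_term): dice_score}.
    """
    en_count, zh_count, pair_count = Counter(), Counter(), Counter()
    for en_terms, zh_terms in comparable_pairs:
        for e in en_terms:
            en_count[e] += 1
        for z in zh_terms:
            zh_count[z] += 1
        # Every cross-lingual pair that co-occurs in a comparable story
        # pair is a candidate association.
        for e in en_terms:
            for z in zh_terms:
                pair_count[(e, z)] += 1
    return {(e, z): 2.0 * n / (en_count[e] + zh_count[z])
            for (e, z), n in pair_count.items()}
```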
586.
Semi-supervised document clustering with active learning. / CUHK electronic theses & dissertations collection. January 2008.
This thesis presents a new framework for automatically partitioning text documents that takes into consideration constraints given by users. Semi-supervised document clustering is developed based on pairwise constraints. Different from traditional semi-supervised document clustering approaches, which assume pairwise constraints to be prepared by the user beforehand, we develop a novel framework for automatically discovering pairwise constraints revealing the user's grouping preference. An active learning approach for choosing informative document pairs is designed by measuring the amount of information that can be obtained by revealing judgments of document pairs. For this purpose, three models, namely, the uncertainty model, the generation error model, and the term-to-term relationship model, are designed for measuring the informativeness of document pairs from different perspectives. A dependent active learning approach is developed by extending the active learning approach to avoid redundant document pair selection. Two models are investigated for estimating the likelihood that a document pair is redundant to previously selected document pairs, namely, the KL divergence model and the symmetric model. / Most existing semi-supervised document clustering approaches are model-based clustering methods and can be treated as parametric models, taking the assumption that the underlying clusters follow a certain pre-defined distribution. In our semi-supervised document clustering, each cluster is represented by a non-parametric probability distribution. Two approaches are designed for incorporating pairwise constraints into the document clustering approach. The first approach, the term-to-term relationship approach (TR), uses pairwise constraints to capture term-to-term dependence relationships. The second approach, the linear combination approach (LC), combines the clustering objective function with the user-provided constraints linearly. Extensive experimental results show that our proposed framework is effective. / Huang, Ruizhang. / Adviser: Wai Lam. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3600. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 117-123). / Abstracts in English and Chinese. / School code: 1307.
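The linear combination (LC) idea, combining the clustering objective with user-provided constraints linearly, can be sketched as an objective function. The squared-error clustering cost and the penalty weight below are illustrative assumptions; the thesis uses non-parametric cluster representations rather than centroids.

```python
import numpy as np

def lc_objective(X, labels, centers, must_link, cannot_link, lam=1.0):
    """Clustering cost plus lam times the number of violated constraints.

    X: (n_docs, n_features) array; labels: (n_docs,) integer array of
    cluster ids; centers: (k, n_features) array; must_link / cannot_link:
    lists of (i, j) document index pairs.
    """
    # Within-cluster squared error stands in for the clustering objective.
    sse = sum(np.sum((X[labels == k] - c) ** 2)
              for k, c in enumerate(centers))
    # Linear penalty for each violated pairwise constraint.
    violations = sum(labels[i] != labels[j] for i, j in must_link)
    violations += sum(labels[i] == labels[j] for i, j in cannot_link)
    return sse + lam * violations
```

A clustering procedure would then search for the labels and centers that minimise this combined objective, trading cluster compactness against constraint satisfaction via lam.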
587.
Geometric and topological approaches to semantic text retrieval. / CUHK electronic theses & dissertations collection. January 2007.
With the vast amount of textual information available today, the task of designing effective and efficient retrieval methods becomes more important and complex. The Basic Vector Space Model (BVSM) is well known in information retrieval. Unfortunately, it cannot retrieve all relevant documents since it is based on literal term matching. The Generalized Vector Space Model (GVSM) and Latent Semantic Indexing (LSI) are two famous semantic retrieval methods, in which some underlying latent semantic structures in the dataset are assumed. However, their assumptions about where the semantic structure is located are rather strong. Moreover, the performance of LSI can be very different for various datasets, and the questions of which characteristics of a dataset contribute to this difference, and why, have not been fully understood. The present thesis focuses on providing answers to these two questions. / In the first part of this thesis, we present a new understanding of the latent semantic space of a dataset from the dual perspective, which relaxes the above assumed conditions and leads naturally to a unified kernel function for a class of vector space models. New semantic analysis methods based on the unified kernel function are developed, which combine the advantages of LSI and GVSM. We also show that the new methods are stable with respect to the rank choice, i.e., even if the selected rank is quite far away from the optimal one, the retrieval performance will not degrade much. The experimental results of our methods on the standard test sets are promising. / In the second part of this thesis, we propose that the mathematical structure of simplexes can be attached to a term-document matrix in the vector-space model (VSM) for information retrieval. The Q-analysis devised by R. H. Atkin may then be applied to effect an analysis of the topological structure of the simplexes and their corresponding dataset. Experimental results of this analysis reveal that there is a correlation between the effectiveness of LSI and the topological structure of the dataset. By using the information obtained from the topological analysis, we develop a new query expansion method. Experimental results show that our method can enhance the performance of VSM for datasets over which LSI is not effective. Finally, the notion of homology is introduced to the topological analysis of datasets and its possible relation to word sense disambiguation is studied through a simple example. / Li, Dandan. / "August 2007." / Adviser: Chung-Ping Kwong. / Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1108. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 118-120). / Abstract in English and Chinese. / School code: 1307.
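For context, the standard vector-space constructions the first part builds on (BVSM's literal term matching, GVSM's co-occurrence kernel, and LSI's rank-k approximation) can all be derived from one term-document matrix, as in this sketch. The thesis's unified kernel function is not reproduced here; this only shows the textbook constructions it interpolates between, on toy data.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((500, 40))            # toy term-document matrix (terms x docs)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 10                               # retained rank for LSI

K_bvsm = A.T @ A                     # BVSM: literal term matching
K_gvsm = A.T @ (A @ A.T) @ A         # GVSM: term co-occurrence information
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
K_lsi = A_k.T @ A_k                  # LSI: rank-k latent semantic similarity
```

Each K is a document-by-document similarity (kernel) matrix; the stability claim in the abstract concerns how sensitive K_lsi-style retrieval is to the choice of k.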
588.
Re-writing Ariadne: following the thread of literary and artistic representations of Ariadne's abandonment. Schoess, Ann-Sophie. January 2018.
This thesis takes Ariadne's abandonment as a case study in order to examine the literary processes of reception that underlie the transmission of classical myth in different eras and cultural contexts, from Classical Antiquity through the Italian Renaissance. Rather than focusing on the ways in which visual representations of Ariadne relate to literary treatments, it draws attention to the literary reliance on a cultural framework, shared by writer and reader, that enables dynamic storytelling. It argues that literary variation of the myth is central to its successful transmission, not least because it allows for appropriations and adaptations that can be made to fit new social and religious parameters, such as Christian conventions in the Middle Ages. In focusing on the important role played by the visual arts in the classical tradition, this research further challenges the still prevalent misconception that the visual arts are secondary to literature, and refutes the common assumption that the relationship between image and text is unidirectional. It highlights the visual impulses leading to paradigm shifts in the literary treatment of the abandonment narrative, and examines the ways in which writers engage with the visual tradition in order to re-shape the ancient narrative. Throughout, attention is drawn to the visual and cultural framework shared by ancient writers and readers, and to the lack of engagement with this framework in traditional classical scholarship. Through its focus on the visuality and mutability of the literary narratives, this thesis offers a new paradigm for studying classical myth and its reception.
589.
New learning strategies for automatic text categorization. January 2001.
Lai Kwok-yin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 125-130). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Automatic Textual Document Categorization --- p.1 / Chapter 1.2 --- Meta-Learning Approach For Text Categorization --- p.3 / Chapter 1.3 --- Contributions --- p.6 / Chapter 1.4 --- Organization of the Thesis --- p.7 / Chapter 2 --- Related Work --- p.9 / Chapter 2.1 --- Existing Automatic Document Categorization Approaches --- p.9 / Chapter 2.2 --- Existing Meta-Learning Approaches For Information Retrieval --- p.14 / Chapter 2.3 --- Our Meta-Learning Approaches --- p.20 / Chapter 3 --- Document Pre-Processing --- p.22 / Chapter 3.1 --- Document Representation --- p.22 / Chapter 3.2 --- Classification Scheme Learning Strategy --- p.25 / Chapter 4 --- Linear Combination Approach --- p.30 / Chapter 4.1 --- Overview --- p.30 / Chapter 4.2 --- Linear Combination Approach - The Algorithm --- p.33 / Chapter 4.2.1 --- Equal Weighting Strategy --- p.34 / Chapter 4.2.2 --- Weighting Strategy Based On Utility Measure --- p.34 / Chapter 4.2.3 --- Weighting Strategy Based On Document Rank --- p.35 / Chapter 4.3 --- Comparisons of Linear Combination Approach and Existing Meta-Learning Methods --- p.36 / Chapter 4.3.1 --- LC versus Simple Majority Voting --- p.36 / Chapter 4.3.2 --- LC versus BORG --- p.38 / Chapter 4.3.3 --- LC versus Restricted Linear Combination Method --- p.38 / Chapter 5 --- The New Meta-Learning Model - MUDOF --- p.40 / Chapter 5.1 --- Overview --- p.41 / Chapter 5.2 --- Document Feature Characteristics --- p.42 / Chapter 5.3 --- Classification Errors --- p.44 / Chapter 5.4 --- Linear Regression Model --- p.45 / Chapter 5.5 --- The MUDOF Algorithm --- p.47 / Chapter 6 --- Incorporating MUDOF into Linear Combination approach --- p.52 / Chapter 6.1 --- Background --- p.52 / Chapter 6.2 --- Overview of MUDOF2 --- p.54 / Chapter 6.3 --- Major Components of the MUDOF2 --- p.57 / Chapter 6.4 --- The MUDOF2 Algorithm --- p.59 / Chapter 7 --- Experimental Setup --- p.66 / Chapter 7.1 --- Document Collection --- p.66 / Chapter 7.2 --- Evaluation Metric --- p.68 / Chapter 7.3 --- Component Classification Algorithms --- p.71 / Chapter 7.4 --- Categorical Document Feature Characteristics for MUDOF and MUDOF2 --- p.72 / Chapter 8 --- Experimental Results and Analysis --- p.74 / Chapter 8.1 --- Performance of Linear Combination Approach --- p.74 / Chapter 8.2 --- Performance of the MUDOF Approach --- p.78 / Chapter 8.3 --- Performance of MUDOF2 Approach --- p.87 / Chapter 9 --- Conclusions and Future Work --- p.96 / Chapter 9.1 --- Conclusions --- p.96 / Chapter 9.2 --- Future Work --- p.98 / Chapter A --- Details of Experimental Results for Reuters-21578 corpus --- p.99 / Chapter B --- Details of Experimental Results for OHSUMED corpus --- p.114 / Bibliography --- p.125
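From the table of contents, the linear combination (LC) approach merges per-category scores from component classifiers under one of three weighting strategies (equal weighting, utility-based, and rank-based). A sketch of the equal-weighting variant follows; the interface is an illustrative assumption, as this record includes no abstract.

```python
import numpy as np

def lc_combine(score_matrices, weights=None):
    """Combine per-category scores from component classifiers.

    score_matrices: list of (n_docs, n_categories) arrays, one per
    component classifier. Equal weighting is shown; utility-based and
    rank-based weighting would supply non-uniform weights instead.
    """
    if weights is None:
        weights = [1.0 / len(score_matrices)] * len(score_matrices)
    return sum(w * s for w, s in zip(weights, score_matrices))

# Combined scores can then be thresholded per category to assign labels.
```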