  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

On syzygies of algebraic varieties with applications to moduli

Agostini, Daniele 17 September 2018 (has links)
In this thesis we study asymptotic syzygies of algebraic varieties and equations of abelian surfaces, with applications to cyclic covers of genus two curves. First, we show that vanishing of asymptotic p-th syzygies implies p-very ampleness for line bundles on arbitrary projective schemes. For smooth surfaces we prove that the converse holds, when p is small, by studying the Bridgeland-King-Reid-Haiman correspondence for the Hilbert scheme of points. This extends previous results of Ein-Lazarsfeld and Ein-Lazarsfeld-Yang. As an application of our results, we show how syzygies can be used to bound the degree of irrationality of a variety. Furthermore, we confirm a conjecture of Gross and Popescu about abelian surfaces whose ideal is generated by quadrics and cubics. In addition, we use projective normality of abelian surfaces to study the Prym map associated to cyclic covers of genus two curves. We show that the differential of the map is generically injective as soon as the degree of the cover is at least seven, extending a previous result of Lange and Ortega. Moreover, we show that the differential fails to be injective precisely at bielliptic covers.
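For readers unfamiliar with the terminology, the standard definition of p-very ampleness (a classical notion, not specific to this thesis) can be recalled as follows: a line bundle L on a projective scheme X is p-very ample when

```latex
\[
  H^0(X, L) \longrightarrow H^0\!\left(X,\, L \otimes \mathcal{O}_{\xi}\right)
  \quad \text{is surjective for every finite subscheme } \xi \subset X
  \ \text{of length } p+1.
\]
```

For p = 0 this recovers global generation, and for p = 1 ordinary very ampleness.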
132

Deep neural semantic parsing: translating from natural language into SPARQL / Análise semântica neural profunda: traduzindo de linguagem natural para SPARQL

Luz, Fabiano Ferreira 07 February 2019 (has links)
Semantic parsing is the process of mapping a natural-language sentence into a machine-readable, formal representation of its meaning. The LSTM encoder-decoder is a neural architecture with the ability to map a source sequence into a target sequence. We are interested in the problem of mapping natural language into SPARQL queries, and we seek to contribute strategies that do not rely on handcrafted rules, high-quality lexicons, manually built templates or other complex handmade structures. In this context, we present two contributions to the problem of semantic parsing, both building on the LSTM encoder-decoder. While natural language has well-established vector representation methods built from very large volumes of text, formal languages such as SPARQL lack suitable methods for vector representation. In the first contribution we improve the vector representation of SPARQL. We start by obtaining an alignment matrix between the two vocabularies, natural-language and SPARQL terms, which allows us to refine a vector representation of SPARQL items. With this refinement we obtained better results in the subsequent training of the semantic parsing model. In the second contribution we propose a neural architecture, which we call Encoder CFG-Decoder, whose output conforms to a given context-free grammar. Unlike the traditional LSTM encoder-decoder, our model provides a grammatical guarantee for the mapping process, which is particularly important for practical cases where grammatical errors can cause critical failures in a compiler or interpreter. Results confirm that any output generated by our model obeys the given CFG, and we observe an improvement in translation accuracy compared with other results from the literature.
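To illustrate the kind of grammatical guarantee described in this abstract, here is a minimal, hypothetical sketch (not the thesis's actual Encoder CFG-Decoder): decoding expands the leftmost nonterminal of a toy SPARQL-like grammar, and the model's scores (mocked here with random numbers) are only ever applied to grammar-legal productions, so every output is well-formed by construction.

```python
import random

# Toy CFG for a SPARQL-like fragment (invented for illustration):
# every derivation is a syntactically valid query.
GRAMMAR = {
    "Q":      [["SELECT", "VAR", "WHERE", "{", "TRIPLE", "}"]],
    "TRIPLE": [["VAR", "PRED", "VAR"], ["VAR", "PRED", "ENT"]],
    "VAR":    [["?x"], ["?y"]],
    "PRED":   [["dbo:capital"], ["dbo:country"]],
    "ENT":    [["dbr:Brazil"], ["dbr:France"]],
}

def mock_scores(options):
    # Stand-in for decoder logits; a real system would score with the LSTM.
    return [random.random() for _ in options]

def constrained_decode(start="Q"):
    stack, out = [start], []
    while stack:
        sym = stack.pop(0)
        if sym not in GRAMMAR:          # terminal: emit it
            out.append(sym)
            continue
        options = GRAMMAR[sym]          # mask: only productions of `sym` are legal
        scores = mock_scores(options)
        best = options[scores.index(max(scores))]
        stack = list(best) + stack      # leftmost expansion
    return " ".join(out)

print(constrained_decode())
```

A real implementation would replace `mock_scores` with the decoder's output distribution; the grammar and vocabulary here are invented, but whatever the scores are, the output always parses under the given CFG.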
133

Espaces de Müntz, plongements de Carleson, et opérateurs de Cesàro / Müntz spaces, Carleson embeddings and Cesàro operators

Gaillard, Loïc 07 December 2017 (has links)
For a sequence ⋀ = (λn) satisfying the Müntz condition Σn 1/λn < +∞ and for p ∈ [1, +∞), we define the Müntz space Mp⋀ as the closed subspace of Lp([0, 1]) spanned by the monomials yn : t ↦ tλn. The space M∞⋀ is defined in the same way as a subspace of C([0, 1]). When the sequence (λn + 1/p)n is lacunary with a large ratio, we prove that the sequence of normalized Müntz monomials (gn) in Lp is (1 + ε)-isometric to the canonical basis of lp. In the case p = +∞, the monomials (yn) form a sequence which is (1 + ε)-isometric to the summing basis of c. These results are asymptotic refinements of a well-known theorem for lacunary sequences. On the other hand, for p ∈ [1, +∞), we investigate the Carleson measures for Müntz spaces, defined as the Borel measures μ on [0, 1) such that the embedding operator Jμ,p : Mp⋀ ⊂ Lp(μ) is bounded. When ⋀ is lacunary, we prove that if the (gn) are uniformly bounded in Lp(μ), then for any q > p the measure μ is a Carleson measure for Mq⋀. These questions are closely related to the behaviour of μ in the neighborhood of the point 1. We also find some geometric conditions on the behaviour of μ near the point 1 that ensure the compactness of Jμ,p, or its membership in some finer operator ideals. More precisely, we estimate the approximation numbers of Jμ,p in the lacunary case, and we even obtain equivalents for particular lacunary sequences ⋀. Finally, we show that the essential norm of the Cesàro-mean operator Γp : Lp → Lp coincides with its norm, which is p'. This result is also valid for the discrete Cesàro operator. We introduce Müntz subspaces of the Cesàro function spaces Cesp for p ∈ [1, +∞]. We show that the essential norm of the multiplication operator TΨ is ∥Ψ∥∞ in the Cesàro spaces, and is equal to |Ψ(1)| in the Müntz-Cesàro spaces.
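For context, the Cesàro-mean operator named in the abstract and its classical norm (given by Hardy's inequality, for 1 < p < ∞) are:

```latex
\[
  (\Gamma_p f)(x) = \frac{1}{x} \int_0^x f(t)\,dt,
  \qquad
  \|\Gamma_p\|_{L^p \to L^p} = p' = \frac{p}{p-1},
\]
```

the contribution stated above being that the essential norm of Γp attains this same value p'.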
134

An analysis of hierarchical text classification using word embeddings

Stein, Roger Alan 28 March 2018 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Efficient distributed numerical word representation models (word embeddings) combined with modern machine learning algorithms have recently yielded considerable improvement on automatic document classification tasks. However, the effectiveness of such techniques has not yet been assessed for hierarchical text classification (HTC). This study investigates the application of these models and algorithms to this specific problem by means of experimentation and analysis. Classification models were trained with prominent machine learning algorithm implementations—fastText, XGBoost, and Keras' CNN—and notable word embedding generation methods—GloVe, word2vec, and fastText—on publicly available data, and evaluated with measures specifically appropriate for the hierarchical context. FastText achieved an LCA-F1 of 0.871 on a single-labelled version of the RCV1 dataset. The analysis of the results indicates that using word embeddings is a very promising approach for HTC.
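As a sketch of what a hierarchy-aware evaluation measure looks like, the following computes ancestor-augmented hierarchical F1, a simpler relative of the LCA-F1 reported above; the class hierarchy is invented for illustration.

```python
# Toy class hierarchy (invented): child -> parent.
HIERARCHY = {
    "sports.soccer": "sports",
    "sports.tennis": "sports",
    "news.politics": "news",
}

def with_ancestors(labels):
    # Close a label set under the ancestor relation.
    closed = set()
    for lab in labels:
        while lab is not None:
            closed.add(lab)
            lab = HIERARCHY.get(lab)
    return closed

def hierarchical_f1(true_labels, pred_labels):
    # Set-based precision/recall over ancestor-augmented label sets.
    t, p = with_ancestors(true_labels), with_ancestors(pred_labels)
    inter = len(t & p)
    prec, rec = inter / len(p), inter / len(t)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Predicting a sibling class still earns partial credit via the shared parent.
print(hierarchical_f1({"sports.soccer"}, {"sports.tennis"}))  # 0.5
```

Unlike flat F1, a near-miss inside the same branch of the hierarchy scores better than a prediction in an unrelated branch, which is the behaviour hierarchical measures are designed to reward.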
135

Biomedical Concept Association and Clustering Using Word Embeddings

Setu Shah (5931128) 12 February 2019 (has links)
Biomedical data exists in the form of journal articles, research studies, electronic health records, care guidelines, etc. While text mining and natural language processing tools have been widely employed across various domains, these are just taking off in the healthcare space.

A primary hurdle that makes it difficult to build artificial intelligence models that use biomedical data is the limited amount of labelled data available. Since most models rely on supervised or semi-supervised methods, generating large amounts of pre-processed labelled data that can be used for training purposes becomes extremely costly. Even for datasets that are labelled, the lack of normalization of biomedical concepts further affects the quality of results produced and limits the application to a restricted dataset. This affects the reproducibility of results and techniques across datasets, making it difficult to deploy research solutions to improve healthcare services.

The research presented in this thesis focuses on reducing the need to create labels for biomedical text mining by using unsupervised recurrent neural networks. The proposed method utilizes word embeddings to generate vector representations of biomedical concepts based on semantics and context. Experiments with unsupervised clustering of these biomedical concepts show that concepts that are similar to each other are clustered together. While this clustering captures different synonyms of the same concept, it also captures the similarities between various diseases and their symptoms.

To test the performance of the concept vectors on corpora of documents, a document vector generation method that utilizes these concept vectors is also proposed. The document vectors thus generated are used as input to clustering algorithms, and the results show that across multiple corpora the proposed methods of concept and document vector generation outperform the baselines and provide more meaningful clustering. The applications of this document clustering are numerous, especially in search and retrieval, providing clinicians, researchers and patients with more holistic and comprehensive results than relying on the exclusive term that they search for.

Finally, a framework is presented for extracting clinical information from preventive care guidelines that can be mapped to electronic health records. The extracted information can be integrated with the clinical decision support system of an electronic health record. A visualization tool to better understand and observe patient trajectories is also explored. Both of these methods have the potential to improve the preventive care services provided to patients.
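The concept-clustering behaviour described above can be sketched with toy vectors. Both the vectors and the similarity threshold below are invented for illustration; real concept embeddings would come from a trained model and have hundreds of dimensions.

```python
import math

# Invented 3-d "embeddings" for a few biomedical terms.
VECTORS = {
    "myocardial infarction": (0.9, 0.1, 0.0),
    "heart attack":          (0.88, 0.15, 0.02),
    "hypertension":          (0.7, 0.5, 0.1),
    "influenza":             (0.05, 0.1, 0.95),
    "flu":                   (0.08, 0.12, 0.9),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def greedy_clusters(vectors, threshold=0.95):
    # Assign each concept to the first cluster whose seed it resembles.
    clusters = []                      # list of (seed_vector, [names])
    for name, vec in vectors.items():
        for seed, members in clusters:
            if cosine(seed, vec) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((vec, [name]))
    return [members for _, members in clusters]

print(greedy_clusters(VECTORS))
# Synonyms such as "heart attack"/"myocardial infarction" land together.
```

This greedy grouping stands in for the unsupervised clustering step; the point is only that semantically close concept vectors fall into the same group.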
136

Planejamentos combinatórios construindo sistemas triplos de Steiner / Combinatorial designs: constructing Steiner triple systems

Barbosa, Enio Perez Rodrigues 26 August 2011 (has links)
Intuitively, the basic idea of Design Theory consists of a way of selecting subsets, also called blocks, of a finite set so that some specified properties are satisfied. The most general case is that of balanced designs. A PBD is an ordered pair (S;B), where S is a finite set of symbols and B is a collection of subsets of S, called blocks, such that each pair of distinct elements of S occurs together in exactly one block of B. A Steiner Triple System is a particular case of a PBD in which every block has size exactly 3; the blocks are then called triples. The main focus is on techniques for constructing these systems. Through the notion of resolvability, we discuss when a Steiner Triple System is resolvable and when it is not. This theory has several applications, e.g., embeddings and even problems related to computational complexity.
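The defining property of a Steiner Triple System (every pair of distinct points occurs in exactly one triple) is easy to check by machine. A small sketch using the classical system of order 7, the Fano plane:

```python
from itertools import combinations

# The Steiner triple system of order 7 (the Fano plane):
# 7 points, 7 triples, every pair of points in exactly one triple.
FANO = [
    (0, 1, 2), (0, 3, 4), (0, 5, 6),
    (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5),
]

def is_steiner_triple_system(points, triples):
    # Count how often each unordered pair of points appears in a triple.
    count = {pair: 0 for pair in combinations(sorted(points), 2)}
    for t in triples:
        for pair in combinations(sorted(t), 2):
            count[pair] += 1
    return all(c == 1 for c in count.values())

print(is_steiner_triple_system(range(7), FANO))   # True
```

Dropping any one triple leaves three pairs uncovered, so the check fails, which is exactly the "each pair in exactly one block" condition of the PBD definition above.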
137

Klasické operátory harmonické analýzy v Orliczových prostorech / Classical operators of harmonic analysis in Orlicz spaces

Musil, Vít January 2018 (has links)
We deal with classical operators of harmonic analysis in Orlicz spaces, such as the Hardy-Littlewood maximal operator, Hardy-type integral operators, the maximal operator of fractional order, the Riesz potential and the Laplace transform, and also with Sobolev-type embeddings on open subsets of Rn or with respect to Frostman measures and, in particular, trace embeddings on the boundary. For each operator (in the case of embeddings we consider the identity operator) we investigate the question of its boundedness from one Orlicz space into another. Particular attention is paid to the sharpness of the results. We further study the question of the existence of optimal Orlicz domain and target spaces and their description. The work consists of the author's published and unpublished results, compiled together with material appearing in the literature.
138

Embedding Theorems for Mixed Norm Spaces and Applications

Algervik, Robert January 2008 (has links)
This thesis is devoted to the study of mixed norm spaces that arise in connection with embeddings of Sobolev and Besov type spaces. The work in this direction originates in a paper due to Gagliardo (1958), and was continued by Fournier (1988) and by Kolyada (2005). We consider fully anisotropic mixed norm spaces. Our main theorem states an embedding of these spaces into Lorentz spaces. Applying this result, we obtain sharp embedding theorems for anisotropic fractional Sobolev spaces and anisotropic Sobolev-Besov spaces. The methods used are based on non-increasing rearrangements and on estimates of sections of functions and sections of sets. We also study limiting relations between embeddings of spaces of different type. More exactly, mixed norm estimates enable us to get embedding constants with sharp asymptotic behaviour. This gives an extension of the results obtained for isotropic Besov spaces $B_p^\alpha$ by Bourgain, Brezis, and Mironescu, and for Besov spaces $B^{\alpha_1,\dots,\alpha_n}_p$ by Kolyada. We study also some basic properties (in particular the approximation properties) of special weak type spaces that play an important role in the construction of mixed norm spaces and in the description of Sobolev type embeddings.
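For orientation, the two-variable model of a (Benedek-Panzone) mixed norm, given here in its standard form rather than taken from the thesis, reads:

```latex
\[
  \|f\|_{L^{p,q}} =
  \left( \int \left( \int |f(x,y)|^{p} \, dx \right)^{q/p} dy \right)^{1/q},
  \qquad 1 \le p, q < \infty,
\]
```

the fully anisotropic spaces studied here generalize this by allowing a different exponent in each variable.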
140

Data-driven language understanding for spoken dialogue systems

Mrkšić, Nikola January 2018 (has links)
Spoken dialogue systems provide a natural conversational interface to computer applications. In recent years, the substantial improvements in the performance of speech recognition engines have helped shift the research focus to the next component of the dialogue system pipeline: the one in charge of language understanding. The role of this module is to translate user inputs into accurate representations of the user goal in the form that can be used by the system to interact with the underlying application. The challenges include the modelling of linguistic variation, speech recognition errors and the effects of dialogue context. Recently, the focus of language understanding research has moved to making use of word embeddings induced from large textual corpora using unsupervised methods. The work presented in this thesis demonstrates how these methods can be adapted to overcome the limitations of language understanding pipelines currently used in spoken dialogue systems. The thesis starts with a discussion of the pros and cons of language understanding models used in modern dialogue systems. Most models in use today are based on the delexicalisation paradigm, where exact string matching supplemented by a list of domain-specific rephrasings is used to recognise users' intents and update the system's internal belief state. This is followed by an attempt to use pretrained word vector collections to automatically induce domain-specific semantic lexicons, which are typically hand-crafted to handle lexical variation and account for a plethora of system failure modes. The results highlight the deficiencies of distributional word vectors which must be overcome to make them useful for downstream language understanding models. The thesis next shifts focus to overcoming the language understanding models' dependency on semantic lexicons. 
To achieve that, the proposed Neural Belief Tracking (NBT) model forsakes the use of standard one-hot n-gram representations used in Natural Language Processing in favour of distributed representations of user utterances, dialogue context and domain ontologies. The NBT model makes use of external lexical knowledge embedded in semantically specialised word vectors, obviating the need for domain-specific semantic lexicons. Subsequent work focuses on semantic specialisation, presenting an efficient method for injecting external lexical knowledge into word vector spaces. The proposed Attract-Repel algorithm boosts the semantic content of existing word vectors while simultaneously inducing high-quality cross-lingual word vector spaces. Finally, NBT models powered by specialised cross-lingual word vectors are used to train multilingual belief tracking models. These models operate across many languages at once, providing an efficient method for bootstrapping language understanding models for lower-resource languages with limited training data.
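A heavily simplified sketch of the attract-repel idea described above: synonym pairs are pulled together and antonym pairs pushed apart. The real Attract-Repel algorithm uses margin-based losses, mini-batches and regularization toward the original vectors; the toy 2-d vectors and word pairs below are invented for illustration.

```python
import math

# Invented toy word vectors; real spaces are high-dimensional and pretrained.
vecs = {
    "cheap":       [0.9, 0.1],
    "inexpensive": [0.2, 0.9],   # synonym pair: should move together
    "pricey":      [0.8, 0.3],
    "bargain":     [0.7, 0.4],   # antonym pair: should move apart
}
SYNONYMS = [("cheap", "inexpensive")]
ANTONYMS = [("pricey", "bargain")]

def cosine(u, v):
    return (sum(a * b for a, b in zip(u, v))
            / (math.hypot(*u) * math.hypot(*v)))

def specialise(vecs, steps=10, lr=0.05):
    for _ in range(steps):
        for a, b in SYNONYMS:                      # attract: pull together
            for i in range(len(vecs[a])):
                d = vecs[b][i] - vecs[a][i]
                vecs[a][i] += lr * d
                vecs[b][i] -= lr * d
        for a, b in ANTONYMS:                      # repel: push apart
            for i in range(len(vecs[a])):
                d = vecs[b][i] - vecs[a][i]
                vecs[a][i] -= lr * d
                vecs[b][i] += lr * d

syn_before = cosine(vecs["cheap"], vecs["inexpensive"])
ant_before = cosine(vecs["pricey"], vecs["bargain"])
specialise(vecs)
print(cosine(vecs["cheap"], vecs["inexpensive"]) > syn_before)   # True
print(cosine(vecs["pricey"], vecs["bargain"]) < ant_before)      # True
```

The net effect mirrors the semantic specialisation described in the abstract: lexical constraints reshape the space so that similarity scores better reflect true synonymy rather than mere distributional co-occurrence.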
