The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Langages epsilon-sûrs et caractérisations des langages d'ordres supérieurs / Epsilon-safe languages and characterizations of higher order languages

Voundy, El Makki 15 November 2017 (has links)
Among the classical results of language theory is the characterization of context-free (algebraic) languages proved by Chomsky and Schützenberger, which states that a language is context-free if and only if it is the homomorphic image of the intersection of a regular language with the Dyck language. This result opened a new line of research and defined a type of characterization known as \emph{representation theorems}. In this thesis, we prove several representation theorems for the higher-order language hierarchy. In particular, we introduce a notion of higher-order Dyck languages and a hierarchy of classes of transductions that we call $\varepsilon$-stable (or $\varepsilon$-safe) transductions, and we prove that a language belongs to level $k+l$ of the higher-order hierarchy if and only if it can be represented as the image of a level-$k$ Dyck language under a level-$l$ $\varepsilon$-safe transduction. These representations also allow us to obtain other kinds of characterizations, such as logical characterizations.
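For reference, the classical Chomsky–Schützenberger theorem that this line of work generalizes can be stated as follows (a standard formulation; the notation is illustrative and not taken from the thesis):

    \[
      L \subseteq \Sigma^{*} \ \text{is context-free}
      \iff
      L = h(D_{n} \cap R)
    \]

for some $n \geq 1$, some regular language $R$, and some homomorphism $h$, where $D_{n}$ denotes the Dyck language over $n$ pairs of brackets. The representation theorem summarized above replaces $D_{n}$ with a level-$k$ higher-order Dyck language and $h$ with a level-$l$ $\varepsilon$-safe transduction.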
102

Processo de design baseado no projeto axiomático para domínios próximos: estudo de caso na análise e reconhecimento de textura. / Design process based on the axiomatic design for close domain: case study in texture analysis and recognition.

Ricardo Alexandro de Andrade Queiroz 19 December 2011 (has links)
Recent technological advances have drawn both industry and the academic community to investigate new methods, techniques, and formal languages for engineering design, in response to the growing demand for products and systems that fully satisfy end-user needs. Such needs can be associated, for instance, with the recognition of the objects composing an image by their texture, a process essential to the automation of a wide range of applications such as robotic vision, industrial inspection, remote sensing, security, and computer-aided medical diagnosis. Given the relevance of these applications, and because the application domain is very close to the developer's own context, this work presents a design process based on Axiomatic Design as the approach best suited to this situation; specifically, the texture-analysis case study is expected to converge more quickly to a solution, if one exists. In the case study, a new self-organizing artificial neural network (ANN) architecture is developed that preserves the two-dimensional spatial structure of the input image and performs texture feature extraction and recognition/classification in a single learning phase. A new concept for the paradigm of competition between neurons is also established. The process is original in that it allows the developer to simultaneously assume the customer's role in the project and, specifically, in that it establishes a process for systematizing and structuring the designer's logical reasoning towards the solution to be developed and implemented as an ANN.
103

Sobre os fundamentos de programação lógica paraconsistente / On the foundations of paraconsistent logic programming

Rodrigues, Tarcísio Genaro 17 August 2018 (has links)
Advisor: Marcelo Esteban Coniglio / Master's dissertation, Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas, 2010 / Abstract: Logic Programming arises from the interaction between Logic and the Foundations of Computer Science: first-order theories can be seen as computer programs. Logic Programming has been broadly used in branches of Artificial Intelligence such as Knowledge Representation and Commonsense Reasoning. From this, a wide research activity has developed with the aim of defining paraconsistent Logic Programming systems, that is, systems in which it is possible to deal with contradictory information. However, none of the existing approaches has a clearly defined logical basis of the kind found in classical logic programming; the basic question is to determine which paraconsistent logics underlie such approaches. The present dissertation aims to establish a clear and solid conceptual and logical basis for developing well-founded systems of Paraconsistent Logic Programming. In that sense, this text can be considered the first (and successful) stage of an ambitious research programme.
One of the main theses of this dissertation is that the Logics of Formal Inconsistency (LFIs), which encompass a broad family of paraconsistent logics, provide such a logical basis. As a first step towards the definition of genuinely paraconsistent logic programming, we prove a simplified version of the Herbrand Theorem for a first-order LFI. Such a theorem guarantees the existence, in principle, of automated deduction methods for the (quantified) logics in which it holds, a fundamental prerequisite for the definition of logic programming over such logics. Additionally, in order to prove the Herbrand Theorem we introduce sequent calculi for two quantified LFIs, and cut elimination is proved for one of the systems. We also present, as an indispensable prerequisite for the above results, a new proof of soundness and completeness for quantified LFIs in which we show the necessity of requiring the Substitution Lemma for the respective semantics. / Master's in Philosophy
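For orientation, one classical formulation of the Herbrand Theorem (for classical first-order logic; the dissertation proves a simplified analogue for a first-order LFI, which is not reproduced here) reads:

    \[
      \forall x_{1}\cdots\forall x_{n}\,\varphi(x_{1},\dots,x_{n})
      \ \text{is unsatisfiable}
      \iff
      \bigwedge_{j=1}^{m} \varphi(t^{j}_{1},\dots,t^{j}_{n})
      \ \text{is propositionally unsatisfiable}
    \]

for some finite set of ground (Herbrand) terms $t^{j}_{i}$, with $\varphi$ quantifier-free. Results of this shape are what reduce first-order proof search to a ground, mechanizable problem.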
104

Métodos de pontos interiores como alternativa para estimar os parâmetros de uma gramática probabilística livre do contexto / Interior point methods as an alternative for estimating parameters of a stochastic context-free grammar

Mamián López, Esther Sofía, 1985- 10 July 2013 (has links)
Advisors: Aurelio Ribeiro Leite de Oliveira, Fredy Angel Amaya Robayo / Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica, 2013 / Abstract: In a probabilistic language model (PLM), a probability function is defined that computes the probability of a given string occurring in a language. These probabilities are the parameters of the PLM and are learned from a corpus (sample strings) belonging to the language. Once the probabilities have been estimated, i.e., once a language model has been obtained, there is a measure for evaluating how well the model represents the language under study, called perplexity per word. The language model proposed here is based on probabilistic (stochastic) context-free grammars. The classical method for estimating the parameters of such a model, Inside-Outside, is very time-consuming, which makes it unviable for complex applications. This dissertation addresses the problem of estimating the parameters of a PLM using interior point methods, obtaining good results in terms of processing time, number of iterations until convergence, and perplexity per word. / Master's in Applied Mathematics
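As an illustration of the evaluation measure mentioned above, a minimal sketch of perplexity per word computed from the sentence probabilities assigned by some language model (the probability function used here is a toy placeholder, not the grammar estimated in the dissertation):

    import math

    def perplexity_per_word(sentences, sentence_probability):
        # sentences: list of token lists; sentence_probability: P(sentence) under the model.
        total_log_prob = 0.0
        total_words = 0
        for tokens in sentences:
            total_log_prob += math.log2(sentence_probability(tokens))  # assumes P > 0
            total_words += len(tokens)
        # 2 raised to the negative average log-probability per word
        return 2.0 ** (-total_log_prob / total_words)

    # Toy uniform model over a 10-word vocabulary; its perplexity per word is exactly 10.
    toy_model = lambda tokens: (1.0 / 10) ** len(tokens)
    print(perplexity_per_word([["a", "b", "c"], ["d", "e"]], toy_model))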
105

Um modelo computacional de aquisição de primeira língua / Properties of natural languages and the acquisition process

Faria, Pablo, 1978- 24 August 2018 (has links)
Advisors: Ruth Elisabeth Vasconcellos Lopes, Charlotte Marie Chamberlland Galves / Doctoral thesis, Universidade Estadual de Campinas, Instituto de Estudos da Linguagem, 2013 / Abstract: In the present work, the acquisition of natural languages is investigated through a computational simulation. The modelled learner, dubbed IASMIM, is an integrated computational model of first language acquisition, in the sense that it integrates the processes of lexical and syntactic acquisition. Furthermore, the model was conceived to be empirically and psychologically plausible. The theoretical perspective guiding the investigation is that of Generative Grammar (cf. Chomsky, 1986), and this is a model of linguistic competence rather than of language processing or performance (i.e., of how the acquired knowledge is put to use). The modelled learner is capable of acquiring relatively broad grammatical knowledge and shows some crosslinguistic potential, in particular the ability to handle languages with distinct word orders. The simulations carried out to evaluate the model show the emergence of adjunction and recursive patterns in the grammar, taken here as the main evidence of more elaborate syntactic knowledge. Finally, the model incorporates notions central to syntactic theory within the Minimalist Program (cf. Chomsky, 1995b), such as set-Merge, pair-Merge, and the "selector feature" (cf. Chomsky, 1998), together with the assumptions that syntactic representations are strictly binary branching and that linear order plays no significant role in syntax (cf. Uriagereka, 1999). The model also incorporates a version of the semantic-conceptual representation proposed in Jackendoff (1990). In this model these notions and assumptions take a concrete, integrated form, interacting with one another to determine the properties of the acquired grammatical knowledge. / PhD in Linguistics
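A toy rendering of the two Merge operations named above, purely for illustration (it does not reproduce any part of the IASMIM learner; the category labels are invented):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Node:
        label: str

    def set_merge(a, b):
        # set-Merge: symmetric, order-free combination {a, b}
        return frozenset({a, b})

    def pair_merge(adjunct, host):
        # pair-Merge: asymmetric combination <adjunct, host>, used for adjunction
        return (adjunct, host)

    # Strictly binary branching: each application combines exactly two objects,
    # and no linear order is encoded in the resulting structure.
    vp = set_merge(Node("V"), Node("DP"))
    print(pair_merge(Node("AdvP"), vp))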
106

Visibly pushdown transducers

Servais, Frédéric 26 September 2011 (has links)
The present work proposes visibly pushdown transducers (VPTs) for defining transformations of documents with a nesting structure. We show that this subclass of pushdown transducers enjoys good properties. Notably, we show that functionality is decidable in PTime and k-valuedness in co-NPTime. While the class is not closed under composition and its type-checking problem against visibly pushdown automata is undecidable, we identify a subclass, the well-nested VPTs, that is closed under composition and has a decidable type-checking problem. Furthermore, we show that the class of VPTs is closed under look-ahead, and that deterministic VPTs with look-ahead characterize the functional VPT transductions. Finally, we investigate the resources necessary to perform transformations defined by VPTs. We devise a memory-efficient algorithm, and we show that it is decidable whether a VPT transduction can be performed with memory that depends on the nesting depth of the input document but not on its length. / Doctorate in Engineering Sciences
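A minimal sketch of the visibly pushdown discipline that underlies VPTs: the input alphabet is partitioned into call, return, and internal symbols, the stack is pushed exactly on calls and popped exactly on returns, and output is produced along the way. This is a simplified, deterministic, letter-to-letter instance with an invented alphabet, not the formal definition used in the thesis:

    CALLS = {"<a>", "<b>"}        # opening tags: push
    RETURNS = {"</a>", "</b>"}    # closing tags: pop
    MATCHING = {"</a>": "<a>", "</b>": "<b>"}
    RELABEL = {"<a>": "<A>", "</a>": "</A>", "<b>": "<B>", "</b>": "</B>", "x": "x"}

    def run_vpt(word):
        # Relabel a well-nested word; return None if the input is ill-nested.
        stack, output = [], []
        for symbol in word:
            if symbol in CALLS:
                stack.append(symbol)                      # call: push
            elif symbol in RETURNS:
                if not stack or stack.pop() != MATCHING[symbol]:
                    return None                           # unmatched return
            # internal symbols leave the stack unchanged
            output.append(RELABEL[symbol])
        return output if not stack else None              # pending call: ill-nested

    print(run_vpt(["<a>", "x", "<b>", "x", "</b>", "</a>"]))  # relabelled copy
    print(run_vpt(["<a>", "</b>"]))                           # None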
107

Développement d'un outil d'évaluation performantielle des réglementations incendie en France et dans les pays de l'Union Européenne / Development of a performance-based evaluation tool for fire regulations in France and the countries of the European Union

Chanti, Houda 04 July 2017 (has links)
In order to make the task of evaluating the fire safety level easier for engineers, and to allow the specialists involved in the field to use their preferred languages and tools, we propose to create a language dedicated to the fire safety domain that automatically generates a simulation, taking into account the domain-specific languages used by the specialists working in the field. This DSL requires the definition, formalization, composition, and integration of several models, with respect to the specific languages used by those specialists. The fire safety DSL is designed by composing and integrating several other DSLs described by technical and natural languages (as well as natural languages referring to technical ones). The latter are modeled so that their components are precise and rest on mathematical foundations, making it possible to verify the consistency of the system (people and materials are safe) before its implementation. In this context, we adopt a formal approach, based on algebraic specifications, to formalize the languages used by the specialists involved in the generation system, focusing on both the syntax and the semantics of the dedicated languages. In the algebraic approach, the concepts of the domain are abstracted as data types and the relationships between them; the semantics of the specific languages is described by these relationships, the mappings between the defined data types, and their properties. The simulation language is based on a composition of the several specific DSLs previously described and formalized. The DSLs are implemented using functional programming concepts and the functional language Haskell, which is well suited to this approach. The result of this work is a software tool dedicated to the automatic generation of simulations, intended to make the evaluation of the fire safety level easier for engineers. This tool is the property of the Centre Scientifique et Technique du Bâtiment (CSTB), an organization whose mission is to guarantee the quality and safety of buildings by combining multidisciplinary skills to develop and share scientific and technical knowledge, in order to provide the various stakeholders with the answers they need in their professional practice.
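As a rough illustration of the algebraic idea described above, domain concepts abstracted as data types, with semantics given by relations between them and a consistency check performed before implementation, here is a small sketch. All names and the safety rule are invented for illustration; the actual tool is Haskell-based and considerably richer:

    from dataclasses import dataclass

    @dataclass
    class Room:
        name: str
        occupants: int
        exits: int

    @dataclass
    class Building:
        rooms: list

    def is_consistent(building, occupants_per_exit=50):
        # Hypothetical relation between the data types: every room must provide
        # enough exit capacity for its occupants.
        return all(room.exits * occupants_per_exit >= room.occupants
                   for room in building.rooms)

    b = Building([Room("hall", 120, 3), Room("office", 10, 1)])
    print(is_consistent(b))   # True under the invented rule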
108

Grammar-Based Translation Framework

Vít, Radek January 2019 (has links)
In this thesis, we examine existing algorithms for accepting languages defined by context-free grammars. Based on this knowledge, we propose a new model for representing LR automata and use it to define a new algorithm, LSCELR. We modify the language-accepting algorithms to obtain algorithms for translation based on translation grammars. We define attribute translation grammars as an extension of translation grammars for specifying relationships between the input and output symbols of a translation. We implement ctf, a grammar-based translation framework that performs translation using LSCELR. We define a language for describing attribute translation grammars and implement a compiler that translates this representation into source code for the implemented framework.
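To illustrate what a translation grammar captures, a coupling between input and output symbols along a derivation, here is a small hand-written syntax-directed translator for a toy expression grammar. It is unrelated to the ctf framework and to LSCELR; it only shows the input/output pairing idea:

    # Toy translation grammar, realized as a recursive-descent translator:
    #   E -> T { ('+'|'-') T }   output: the operator is emitted after its operands
    #   T -> digit               output: the digit itself

    def translate_to_postfix(tokens):
        pos = 0
        output = []

        def digit():
            nonlocal pos
            token = tokens[pos]
            if not token.isdigit():
                raise SyntaxError("expected digit, got " + repr(token))
            pos += 1
            output.append(token)          # an input digit maps to itself on output

        digit()
        while pos < len(tokens) and tokens[pos] in "+-":
            operator = tokens[pos]
            pos += 1
            digit()
            output.append(operator)       # emitted after both of its operands
        if pos != len(tokens):
            raise SyntaxError("trailing input")
        return " ".join(output)

    print(translate_to_postfix(list("1+2-3")))   # -> 1 2 + 3 -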
109

Matematické metody modelování morfologie jehličnanů / Mathematical methods of morphology modelling of coniferous trees

Janoutová, Růžena January 2012 (has links)
The thesis focuses on the creation of a coniferous tree model by a non-destructive method that allows the structure of adult spruce trees to be described. After processing the provided data, we created an L-system model that generates a tree branch. A Python script then generated the parameters required to build the tree model in the graphics software Blender. The model of a coniferous tree was successfully generated; its memory requirements are high, but for our purposes this is not an essential problem.
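A minimal sketch of the parallel rewriting step on which L-systems are based. The rules below are a textbook bracketed example for plant-like branching, not the spruce-branch rules developed in the thesis:

    def lsystem(axiom, rules, iterations):
        # Rewrite every symbol of the string in parallel, `iterations` times.
        s = axiom
        for _ in range(iterations):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    rules = {"X": "F[+X][-X]FX", "F": "FF"}
    print(lsystem("X", rules, 2))
    # The resulting string is then interpreted geometrically (e.g., as turtle
    # graphics or, as in the thesis, as parameters for building geometry in Blender).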
110

Cestami řízené gramatiky / Path-Controlled Grammars

Adamec, Ondřej January 2015 (has links)
This thesis deals with path-controlled grammars, grammars that place restrictions on the paths in a derivation tree of a context-free grammar. The goal of the thesis is to create an algorithm for converting between path-controlled grammars and state grammars, a different type of regulated grammar. Another goal is to study the generative power of path-controlled grammars on the basis of the conversion algorithm. The conversion algorithm is implemented and tested on a number of path-controlled grammars, and its complexity is discussed. Finally, a parsing tool for path-controlled grammars is implemented, and its complexity is analyzed as well.
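To make the notion of restricting paths in a derivation tree concrete, a small sketch that enumerates the root-to-leaf paths of a derivation tree and tests them against a control set. The tree, the control set, and the acceptance condition shown here (all paths controlled) are invented for illustration; the precise condition follows the formal definition used in the thesis:

    # A derivation tree as (label, children); leaves have empty child lists.
    tree = ("S",
            [("A", [("a", [])]),
             ("B", [("b", [])])])

    def root_to_leaf_paths(node, prefix=""):
        label, children = node
        path = prefix + label
        if not children:
            return [path]
        paths = []
        for child in children:
            paths.extend(root_to_leaf_paths(child, path))
        return paths

    CONTROL = {"SAa", "SBb"}          # toy control language over path strings

    paths = root_to_leaf_paths(tree)
    print(paths)                               # ['SAa', 'SBb']
    print(all(p in CONTROL for p in paths))    # True: this derivation tree passes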
