  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Modelling and analysis of engineering changes in complex systems

Lemmens, Yves Claude Jean January 2007 (has links)
Complex products comprise a large number of tightly integrated components, assemblies and systems, resulting in extensive logical and physical interdependencies between the constituent parts. Thus a change to one item of a system is highly likely to lead to a change to another item, which in turn can propagate further. The aim of this research therefore is to investigate dependency models that can be used to identify the impact and trace the propagation of changes in different information domains, such as requirements, physical product architecture or organisation. Cont/d.
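The kind of dependency model this abstract describes can be illustrated with a minimal sketch: represent "a change to A may force a change to B" as a directed edge and trace propagation by graph traversal. The component names below are invented for illustration and are not taken from the thesis.

```python
from collections import deque

# Hypothetical dependency model: an edge A -> B means "a change to A
# may force a change to B". Component names are illustrative only.
DEPENDENCIES = {
    "wing_spar": ["wing_skin", "fuel_tank"],
    "wing_skin": ["paint_spec"],
    "fuel_tank": ["fuel_pump"],
    "fuel_pump": [],
    "paint_spec": [],
}

def change_impact(start):
    """Return every item a change to `start` can propagate to (BFS)."""
    seen, queue = set(), deque([start])
    while queue:
        item = queue.popleft()
        for dep in DEPENDENCIES.get(item, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(change_impact("wing_spar")))
# ['fuel_pump', 'fuel_tank', 'paint_spec', 'wing_skin']
```

The same traversal works whatever the domain of the nodes (requirements, physical architecture, organisation), which is the point of a domain-spanning dependency model.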
32

Lexical Category Acquisition Via Nonadjacent Dependencies in Context: Evidence of Developmental Change and Individual Differences

Sandoval, Michelle January 2014 (has links)
Lexical categories like noun and verb are foundational to language acquisition, but these categories do not come neatly packaged for the infant language learner. Some have proposed that infants can begin to solve this problem by tracking the frequent nonadjacent word (or morpheme) contexts of these categories. However, nonadjacent relationships that frame categories contain reliable adjacent relationships, making the type of context (adjacent or nonadjacent) used for category acquisition unclear. In addition, previous research suggests that infants show learning of adjacent dependencies earlier than learning of nonadjacent dependencies, and that the learning of nonadjacent word relationships is affected by the intervening information (how informative it is and how familiar it is). Together these issues raise the question of whether the type of context used for category acquisition changes as a function of development. To address this question, infants aged 13, 15, and 18 months were exposed to an artificial language containing adjacent and nonadjacent information that predicted a category. Infants were then tested to determine whether they (1) detected the category using adjacent information, (2) only detected the nonadjacent dependency, with no categorization, or (3) detected both the nonadjacent relationship and the category. The results showed high individual variability in the youngest age group, with a gradual convergence towards detecting the category and the associated environments by 18 months. These findings suggest that both adjacent and nonadjacent information may be used at early stages in category acquisition. The results reveal a dynamic picture of how infants use distributional information for category acquisition and support a developmental shift consistent with previous infant studies examining dependencies between words.
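The distributional idea behind such artificial-language designs can be sketched in a few lines: outer words form a nonadjacent aXb frame, and the set of middle words sharing a frame is a candidate category. The tokens below are invented, not the thesis's actual stimuli.

```python
from collections import defaultdict

# Toy artificial language, loosely modeled on aXb "frame" designs.
# In each three-word phrase the outer words form a nonadjacent frame
# that reliably predicts the middle word's category.
phrases = [
    ("alt", "deech", "ush"), ("alt", "ghope", "ush"), ("alt", "vabe", "ush"),
    ("ong", "coomo", "erd"), ("ong", "fengle", "erd"), ("ong", "kicey", "erd"),
]

frames = defaultdict(set)
for first, middle, last in phrases:
    # The nonadjacent dependency: (first, last) frames the middle word.
    frames[(first, last)].add(middle)

for frame, words in sorted(frames.items()):
    print(frame, sorted(words))
```

Each frame's middle-word set plays the role of an induced lexical category; the empirical question the thesis asks is whether infants exploit the nonadjacent frame, the adjacent halves of it, or both.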
33

Genre and Domain Dependencies in Sentiment Analysis

Remus, Robert 29 April 2015 (has links) (PDF)
Genre and domain influence an author's style of writing and therefore a text's characteristics. Natural language processing is prone to such variations in textual characteristics: it is said to be genre and domain dependent. This thesis investigates genre and domain dependencies in sentiment analysis. Its goal is to support the development of robust sentiment analysis approaches that work well and in a predictable manner under different conditions, i.e. for different genres and domains. Initially, we show that a prototypical approach to sentiment analysis -- viz. a supervised machine learning model based on word n-gram features -- performs differently on gold standards that originate from differing genres and domains, but performs similarly on gold standards that originate from resembling genres and domains. We show that these gold standards differ in certain textual characteristics, viz. their domain complexity. We find a strong linear relation between our approach's accuracy on a particular gold standard and its domain complexity, which we then use to estimate our approach's accuracy. Subsequently, we use certain textual characteristics -- viz. domain complexity, domain similarity, and readability -- in a variety of applications. Domain complexity and domain similarity measures are used to determine parameter settings in two tasks. Domain complexity guides us in model selection for in-domain polarity classification, viz. in decisions regarding word n-gram model order and word n-gram feature selection. Domain complexity and domain similarity guide us in domain adaptation. We propose a novel domain adaptation scheme and apply it to cross-domain polarity classification in semi- and unsupervised domain adaptation scenarios. Readability is used for feature engineering. We propose to adopt readability gradings, readability indicators as well as word and syntax distributions as features for subjectivity classification.
Moreover, we generalize a framework for modeling and representing negation in machine learning-based sentiment analysis. This framework is applied to in-domain and cross-domain polarity classification. We investigate the relation between implicit and explicit negation modeling, the influence of negation scope detection methods, and the efficiency of the framework in different domains. Finally, we carry out a case study in which we transfer the core methods of our thesis -- viz. domain complexity-based accuracy estimation, domain complexity-based model selection, and negation modeling -- to a gold standard that originates from a genre and domain hitherto not used in this thesis.
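A "supervised machine learning model based on word n-gram features" of the kind the abstract names can be sketched minimally as a Naive Bayes classifier over unigram and bigram counts. This is not the thesis's model or data, only an invented toy instance of the prototypical approach.

```python
import math
from collections import Counter

def ngrams(text, n_max=2):
    """Word n-gram features (here unigrams and bigrams)."""
    toks = text.lower().split()
    feats = list(toks)
    for n in range(2, n_max + 1):
        feats += [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return feats

# Tiny invented training set; real experiments use large gold standards.
train = [
    ("a truly great film", "pos"), ("loved every minute", "pos"),
    ("a dull boring mess", "neg"), ("not worth watching", "neg"),
]

counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(ngrams(text))

def classify(text):
    """Multinomial Naive Bayes with add-one smoothing over n-gram counts.
    Class priors are equal here, so they are omitted."""
    vocab = set(counts["pos"]) | set(counts["neg"])
    best, best_lp = None, -math.inf
    for label, c in counts.items():
        total = sum(c.values())
        lp = sum(math.log((c[f] + 1) / (total + len(vocab)))
                 for f in ngrams(text))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

print(classify("a great film"))  # prints "pos" on this toy data
```

Because the features are raw surface n-grams, a model trained on one genre or domain inherits that domain's vocabulary, which is exactly why performance varies across gold standards from differing domains.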
34

A history of military government in newly acquired territory of the United States

Thomas, David Y. January 1904 (has links)
Thesis (PH. D.)--Columbia University, 1903. / Published also as Studies in history, economics and public law, vol. 20, no. 2.
35

The effect of delay-lines on sequence recall - A study of B-RAAM

Eriksson, Timea January 2005 (has links)
Connectionist models have been criticized for not being able to form compositional representations of recursive data structures such as trees and lists, a matter that has been addressed by models such as Elman networks, RAAM and B-RAAM. These architectures seem to have common features with the human short-term memory regarding recall. Both show a strong recency effect; however, the human memory also exhibits a primacy effect due to rehearsal. The problem is that the connectionist models do not have the primacy aspect, which complicates the learning of long-term dependencies. A long-term dependency is when items presented early should affect the behaviour of the model. Learning long-term dependencies is a problem that is hard to address within these architectures.

Delay-lines might be used as a mechanism for implementing rehearsal within connectionist models. However, it has not been clarified how the use of delay-lines affects the recency and the primacy aspect. In this thesis, delay-lines are introduced in B-RAAM. This study investigates how the primacy and the recency aspect are affected by the use of delay-lines, aiming to improve the ability to identify long-term dependencies. The results show that by using delay-lines, B-RAAM has both primacy and recency.
36

Syntaktická analýza textů se střídáním kódů / Syntactic analysis of code-switched texts

Ravishankar, Vinit January 2018 (has links)
The aim of this thesis is twofold. First, we attempt to dependency parse existing code-switched corpora, solely by training on monolingual dependency treebanks. To do so, we design a dependency parser and experiment with a variety of methods to improve upon the baseline established by raw training on monolingual treebanks; these methods range from treebank modification to network modification. On this task, we obtain state-of-the-art results for most evaluation criteria on our evaluation language pairs: Hindi/English and Komi/Russian. We beat our own baselines by a significant margin, whilst simultaneously beating most scores on similar tasks in the literature. The second part of the thesis introduces the relatively understudied task of predicting code-switching points in a monolingual utterance; we provide several architectures that attempt to do so, and offer one of them as our baseline, in the hope that it will serve as a reference point for future work.
38

Modular Reasoning For Software Product Lines With Emergent Feature Interfaces

MELO, Jean Carlos de Carvalho 31 January 2014 (has links)
Declarative business process modeling is a flexible approach to business process management in which participants can decide the order in which activities are performed. Business rules are employed to determine restrictions and obligations that must be satisfied during execution time. Such business rules describe what must or must not be done during the process execution, but do not prescribe how. In this way, complex control-flows are simplified and participants have more flexibility to handle unpredicted situations. The methods and tools currently available to model and execute declarative processes present several limitations that impair their use for this application. In particular, the well-known approach that employs Linear Temporal Logic (LTL) has the drawback of state space explosion as the size of the process model grows.
Although memory-efficient approaches have been proposed in the literature, they are not able to properly guarantee the correct termination of the process, since they allow the user to reach deadlock states. Moreover, current implementations of declarative business process engines focus only on manual activities. Automatic communication with external applications to exchange data and reuse functionality is barely supported. Such automation opportunities could be better exploited by a declarative engine that integrates with existing SOC technologies. This work proposes a novel graph-based rule engine called REFlex that does not share the problems presented by other engines, being better suited to model declarative business processes than the techniques currently in use. Additionally, the engine fills the gap between declarative processes and SOC. The REFlex orchestrator is an efficient, data-aware declarative web services orchestrator. It enables participants to call external web services to perform automated tasks. Unlike related work, the REFlex algorithm does not depend on the generation of all reachable states, which makes it well suited to model large and complex business processes. Moreover, REFlex is capable of modeling data-dependent business rules, which provides unprecedented context awareness and modeling power to the declarative paradigm.
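The core declarative idea, activities are allowed unless a rule forbids them given the trace so far, can be sketched without any state-space generation. This is only an invented illustration of the general principle, not REFlex's algorithm; the rule and activity names are hypothetical.

```python
# A minimal sketch of declarative (rule-constrained) process execution:
# an activity is enabled unless some rule forbids it in the current trace.
# precedence(a, b) means activity b may only run after a has run.
PRECEDENCE = {("approve", "pay")}

def enabled(trace, activities):
    """Return the activities currently allowed, given the executed trace."""
    out = []
    for act in activities:
        if all(pre in trace for (pre, a) in PRECEDENCE if a == act):
            out.append(act)
    return out

print(enabled([], ["approve", "pay"]))           # ['approve']
print(enabled(["approve"], ["approve", "pay"]))  # ['approve', 'pay']
```

Note that enabledness is computed directly from the trace and the rules, rather than by enumerating all reachable states of an automaton, which is the scalability property the abstract attributes to the graph-based approach.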
39

Future Tuning Process For Embedded Control Systems

Arsalan, Muhammad January 2009 (has links)
This master's thesis concerns the development of embedded control systems. The development process for embedded control systems involves several steps, such as control design, rapid prototyping, fixed-point implementation and hardware-in-the-loop simulations. Another step, which Volvo is not currently (September 2009) using within climate control, is on-line tuning. One reason for not using this technique today is that the available tools for this task (ATI Vision, INCA from ETAS, or CalDesk from dSPACE) do not handle parameter dependencies in a satisfactory way. With these constraints, it is not possible to use on-line tuning, and the controller development process is more laborious and time-consuming. The main task of this thesis is to solve the problem with parameter dependencies and to make on-line tuning possible.
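The "parameter dependency" problem the abstract refers to can be illustrated schematically: some calibration parameters are derived from others, so changing a base parameter during on-line tuning must trigger recomputation of everything that depends on it. The parameter names and formula below are invented, not Volvo's actual calibration data.

```python
# Hypothetical sketch of dependent calibration parameters in on-line
# tuning: derived parameters are recomputed whenever an input changes.
class TuningSet:
    def __init__(self):
        self.values = {"blower_max": 10.0, "blower_min": 2.0}
        # derived parameter -> (input parameters, formula)
        self.derived = {
            "blower_range": (("blower_max", "blower_min"),
                             lambda mx, mn: mx - mn),
        }
        self._recompute_all()

    def _recompute_all(self):
        for name, (inputs, fn) in self.derived.items():
            self.values[name] = fn(*(self.values[i] for i in inputs))

    def set(self, name, value):
        self.values[name] = value
        # Propagate the change to every derived parameter that uses `name`.
        for dname, (inputs, fn) in self.derived.items():
            if name in inputs:
                self.values[dname] = fn(*(self.values[i] for i in inputs))

t = TuningSet()
t.set("blower_max", 14.0)
print(t.values["blower_range"])  # 12.0
```

A tuning tool that lacks this propagation step leaves derived parameters stale after an edit, which is the kind of inconsistency that makes on-line tuning unsafe.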
40

Contribution à la définition d'une méthode de conception de bases de données à base ontologique / Contribution to the definition of a method for designing an ontology-based database

Chakroun, Chedlia 02 October 2013 (has links)
Recently, ontologies have been widely adopted by small, medium and large companies in various domains. They have become central components in many applications. These models conceptualize the universe of discourse by means of primitive and sometimes redundant concepts (derived from primitive concepts). At first, the relationship between ontologies and databases was loosely coupled. With the explosion of semantic data, persistence solutions providing high-performance applications have been proposed. As a consequence, a new type of database, called ontology-based database (OBDB), was born. Several types of OBDB have been proposed, involving different architectures of the target DBMS and different storage models for ontologies and their instances. At this stage, the relationship between databases and ontologies becomes strongly coupled. As a result, several research studies have addressed the physical design phase of OBDB; the conceptual and logical phases have only been partially treated. To ensure success similar to that enjoyed by relational databases, OBDB must be accompanied by design methodologies and tools dealing with the different stages of the life cycle of a database. Such a methodology should identify the redundancy built into the ontology. In our work, we propose a design methodology dedicated to ontology-based databases covering the main phases of the database development lifecycle: conceptual, logical and physical, as well as the deployment phase.
The logical design phase is performed thanks to the incorporation of dependencies between concepts and properties of the ontologies. These dependencies are quite similar to the functional dependencies in traditional databases. Due to the diversity of OBDB architectures and the variety of storage models (triple, horizontal, etc.) used to store and manage ontological data, we propose an 'à la carte' deployment approach. To validate our proposal, an implementation of our approach in an OBDB environment on OntoDB is provided. Finally, in order to support the user during the design process, a tool for designing databases from a conceptual ontology is presented.
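Since the abstract likens its inter-concept dependencies to relational functional dependencies, a minimal sketch of what a functional dependency X -> Y asserts may help: tuples that agree on X must agree on Y. The instance data below is invented, not drawn from the thesis or OntoDB.

```python
def holds(rows, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds in `rows`:
    any two tuples agreeing on all lhs attributes must agree on all rhs
    attributes."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

# Invented instance data for an ontology-like concept "Article".
rows = [
    {"doi": "10.1/x", "journal": "JODS", "issn": "1234-5678"},
    {"doi": "10.1/y", "journal": "JODS", "issn": "1234-5678"},
    {"doi": "10.1/z", "journal": "DKE",  "issn": "8765-4321"},
]

print(holds(rows, ["journal"], ["issn"]))  # journal -> issn holds: True
print(holds(rows, ["issn"], ["doi"]))      # issn -> doi fails: False
```

In a relational logical design such dependencies drive normalization; the methodology described above uses their ontological analogue to detect and factor out redundancy among concepts.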
