121
View integration using the entity-relationship model. Hassan, Mansoor Ahmed. January 1989.
No description available.
122
A Deep 3D Object Pose Estimation Framework for Robots with RGB-D Sensors. Wagh, Ameya Yatindra. 24 April 2019.
Object detection and pose estimation have traditionally been addressed with template matching techniques. However, these algorithms are sensitive to outliers and occlusions, and have high latency due to their iterative nature. Recent research in computer vision and deep learning has greatly improved the robustness of these algorithms. However, a major drawback is that they are specific to particular objects. Moreover, their pose estimates depend heavily on RGB image features. Because these algorithms are trained on large datasets meticulously labeled with each object's ground-truth pose, they are difficult to re-train for real-world applications. To overcome this problem, we propose a two-stage pipeline of convolutional neural networks which uses RGB images to localize objects in 2D space and depth images to estimate a 6DoF pose. The pose estimation network thus learns only the geometric features of the object and is not biased by its color features. We evaluate the performance of this framework on the LINEMOD dataset, which is widely used to benchmark object pose estimation frameworks, and find the results comparable with state-of-the-art algorithms that use RGB-D images. Secondly, to show the transferability of the proposed pipeline, we implement it on the ATLAS robot for a pick-and-place experiment. Because the distribution of images in the LINEMOD dataset differs from that of images captured by the MultiSense sensor on ATLAS, we generate a synthetic dataset from a small number of real-world images captured with the MultiSense sensor and use it to train only the object detection networks used in the ATLAS robot experiment.
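A minimal PyTorch-style sketch of the two-stage idea described in this abstract: a 2D detector (a hypothetical stand-in here) supplies a bounding box from the RGB image, and a small network regresses a 6DoF pose from the corresponding depth crop alone, so color never reaches the pose stage. This is an illustration under assumed interfaces, not the thesis's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthPoseHead(nn.Module):
    """Toy pose regressor: depth crop in, 6DoF pose (xyz + quaternion) out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pose = nn.Linear(64, 7)  # 3 translation + 4 quaternion components

    def forward(self, depth_crop):
        return self.pose(self.features(depth_crop))

def estimate_pose(rgb, depth, detect_2d, pose_head):
    # Stage 1: localize the object in the RGB image (detect_2d stands in for
    # any 2D object detector returning a bounding box).
    x1, y1, x2, y2 = detect_2d(rgb)
    # Stage 2: regress the pose from the depth crop only, so the network
    # learns geometric rather than color features.
    crop = depth[:, :, y1:y2, x1:x2]
    crop = F.interpolate(crop, size=(64, 64))
    return pose_head(crop)

# Example with random tensors standing in for sensor data.
rgb = torch.rand(1, 3, 480, 640)
depth = torch.rand(1, 1, 480, 640)
pose = estimate_pose(rgb, depth, lambda img: (100, 100, 260, 300), DepthPoseHead())
print(pose.shape)  # torch.Size([1, 7])
```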
123
The awareness of semantic prosody and its implications for the EFL vocabulary teaching: a study. Choi, Ka Fai. January 2018.
University of Macau / Faculty of Arts and Humanities. / Department of English
124
Types of Errors in a Memory Interference Task in Normal and Abnormal Aging. Unknown Date.
The types of intrusion errors (Prior List, Semantically Related, and Unrelated) made on the LASSI-L verbal memory task were compared across three diagnostic groups (N = 160, 61% female): Cognitively Normal (CN), amnestic Mild Cognitive Impairment (aMCI), and Alzheimer’s disease (AD). Errors related to Proactive, Recovery from Proactive, and Retroactive Interference were also analyzed, as well as the relationship of errors to Amyloid load, a biomarker of AD. Results suggest that the types of errors made indicated the level of cognitive decline. It appears that as deficits increase, impaired semantic networks result in the simultaneous activation of items that are semantically related to LASSI-L words. In the aMCI group, providing a semantic cue resulted in an increased production of Semantically Related intrusions. Unrelated intrusions occurred rarely, although a small number occurred even in the CN group, warranting further investigation. Amyloid load correlated with all intrusion errors. / Includes bibliography. / Thesis (M.A.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
125
A machine learning approach for plagiarism detection. Alsallal, M. January 2016.
Plagiarism detection is gaining increasing importance due to requirements for integrity in education. Existing research has investigated the problem of plagiarism detection with varying degrees of success. The literature reveals two main methods for detecting plagiarism, namely extrinsic and intrinsic. This thesis develops two novel approaches to address both of these methods. Firstly, a novel extrinsic method for detecting plagiarism is proposed. The method is based on four well-known techniques, namely Bag of Words (BOW), Latent Semantic Analysis (LSA), stylometry and Support Vector Machines (SVM). The LSA application was fine-tuned to take in stylometric features (most common words) in order to characterise document authorship, as described in Chapter 4. The results revealed that LSA-based stylometry outperformed the traditional LSA application. Support vector machine based algorithms were used to perform the classification procedure in order to predict which author had written a particular book being tested. The proposed method successfully addressed the limitations of semantic characteristics and identified the document source by assigning the book being tested to the right author in most cases. Secondly, the intrinsic detection method relies on the statistical properties of the most common words. LSA was applied in this method to a group of most common words (MCWs) to extract their usage patterns based on the transitivity property of LSA. The feature sets of the intrinsic model were based on the frequency of the most common words, their relative frequencies in series, and the deviation of these frequencies across all books for a particular author. The intrinsic method aims to generate a model of author “style” by revealing a set of certain features of authorship. The model’s generation procedure focuses on just one author as an attempt to summarise aspects of an author’s style in a definitive and clear-cut manner. The thesis also proposes a novel experimental methodology for testing the performance of both extrinsic and intrinsic methods for plagiarism detection. This methodology relies upon the CEN (Corpus of English Novels) training dataset, but divides that dataset into training and test datasets in a novel manner. Both approaches have been evaluated using the well-known leave-one-out cross-validation method. Results indicated that by integrating deep analysis (LSA) and stylometric analysis, hidden changes can be identified whether or not a reference collection exists.
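An illustrative scikit-learn sketch of the extrinsic pipeline described above, using a hypothetical mini-corpus and author labels (not the thesis's code or data): most-common-word counts as stylometric features, an LSA (truncated SVD) projection, and an SVM classifier evaluated with leave-one-out cross-validation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical mini-corpus; full novel texts would be used in practice.
documents = [
    "it is a truth universally acknowledged that a single man in possession of a fortune",
    "she was the youngest of the two daughters of a most affectionate indulgent father",
    "it was the best of times it was the worst of times it was the age of wisdom",
    "there were a king with a large jaw and a queen with a plain face on the throne",
]
authors = ["austen", "austen", "dickens", "dickens"]

pipeline = make_pipeline(
    CountVectorizer(max_features=500),   # frequencies of the most common words
    TruncatedSVD(n_components=2),        # LSA projection of the count matrix
    LinearSVC(),                         # authorship classifier
)
scores = cross_val_score(pipeline, documents, authors, cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())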
126
Beyond Discourse: Computational Text Analysis and Material Historical Processes. Atria, Jose Tomas. January 2018.
This dissertation proposes a general methodological framework for the application of computational text analysis to the study of long duration material processes of transformation, beyond their traditional application to the study of discourse and rhetorical action. Over a thin theory of the linguistic nature of social facts, the proposed methodology revolves around the compilation of term co-occurrence matrices and their projection into different representations of an hypothetical semantic space. These representations offer solutions to two problems inherent to social scientific research: that of "mapping" features in a given representation to theoretical entities and that of "alignment" of the features seen in models built from different sources in order to enable their comparison.
The data requirements of the exercise are discussed through the introduction of the notion of a "narrative horizon", the extent to which a given source incorporates a narrative account in its rendering of the context that produces it. Useful primary data will consist of text with short narrative horizons, such that the ideal source will correspond to a continuous archive of institutional, ideally bureaucratic text produced as mere documentation of a definite population of more or less stable and comparable social facts across a couple of centuries. Such a primary source is available in the Proceedings of the Old Bailey (POB), a collection of transcriptions of 197,752 criminal trials seen by the Old Bailey and the Central Criminal Court of London and Middlesex between 1674 and 1913 that includes verbatim transcriptions of witness testimony. The POB is used to demonstrate the proposed framework, starting with the analysis of the evolution of an historical corpus to illustrate the procedure by which provenance data is used to construct longitudinal and cross-sectional comparisons of different corpus segments.
The co-occurrence matrices obtained from the POB corpus are used to demonstrate two different projections: semantic networks that model different notions of similarity between the terms in a corpus' lexicon as an adjacency matrix describing a graph, and semantic vector spaces that approximate a lower-dimensional representation of an hypothetical semantic space from its empirical effects on the co-occurrence matrix.
Semantic networks are presented as discrete mathematical objects that offer a solution to the mapping problem through operations that allow for the construction of sets of terms over which an order can be induced using any measure of significance of the strength of association between a term set and its elements. Alignment can then be solved through different similarity measures computed over the intersection and union of the sets under comparison.
Semantic vector spaces are presented as continuous mathematical objects that offer a solution to the mapping problem in the linear structures contained in them. These include, in all cases, a meaningful metric that makes it possible to define neighbourhoods and regions in the semantic space and, in some cases, a meaningful orientation that makes it possible to trace dimensions across them. Alignment can then proceed endogenously in the case of oriented vector spaces for relative comparisons, or through the construction of common basis sets for non-oriented semantic spaces for absolute comparisons.
The dissertation concludes with the proposal of a general research program for the systematic compilation of text distributional patterns in order to facilitate the much-needed process of calibration required by the techniques discussed in the previous chapters. Two specific avenues for further research are identified. First, the development of incremental methods of projection that allow a semantic model to be updated as new observations come along, an area that has received considerable attention from the field of electronic finance and the pervasive use of Gentleman's algorithm for matrix factorisation. Second, the development of additively decomposable models that may be combined or disaggregated to obtain a result similar to the one that would have been obtained had the model been computed from the union or difference of their inputs. This is shown to depend on whether the functions that actualise a given model are associative under addition or not.
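The following toy sketch (hypothetical corpus, not the dissertation's code) illustrates the two projections just described: a term co-occurrence matrix is thresholded into a semantic network's adjacency matrix and reduced by truncated SVD into a low-dimensional semantic vector space, with Jaccard similarity over term sets as one simple way to address the alignment problem.

```python
import numpy as np
from itertools import combinations

# Hypothetical trial-like documents standing in for POB testimony segments.
docs = [
    ["prisoner", "watch", "silver", "stolen"],
    ["prisoner", "silver", "spoon", "stolen"],
    ["prisoner", "knife", "wound"],
]
lexicon = sorted({term for doc in docs for term in doc})
index = {term: i for i, term in enumerate(lexicon)}

# Term co-occurrence counts within each document.
C = np.zeros((len(lexicon), len(lexicon)))
for doc in docs:
    for a, b in combinations(sorted(set(doc)), 2):
        C[index[a], index[b]] += 1
        C[index[b], index[a]] += 1

# Projection 1: semantic network, i.e. an adjacency matrix obtained by
# thresholding cosine similarity between term co-occurrence profiles.
norm = np.linalg.norm(C, axis=1, keepdims=True) + 1e-12
cosine = (C / norm) @ (C / norm).T
adjacency = (cosine > 0.5).astype(int)
np.fill_diagonal(adjacency, 0)

# Projection 2: semantic vector space via truncated SVD of the same matrix.
U, S, _ = np.linalg.svd(C)
term_vectors = U[:, :2] * S[:2]   # two-dimensional coordinates per term

# Alignment of term sets drawn from two models via Jaccard similarity.
def jaccard(set_a, set_b):
    return len(set_a & set_b) / len(set_a | set_b)

print(adjacency)
print(jaccard({"silver", "watch"}, {"silver", "spoon"}))
```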
127
Abstracting over semantic theories. Holt, Alexander G. B. January 1993.
The topic of this thesis is abstraction over theories of formal semantics for natural language. It is motivated by the belief that a metatheoretical perspective can contribute both to a better theoretical understanding of semantic theories, and to improved practical mechanisms for developing theories of semantics and combining them with theories of syntax. The argument for a new way to understand semantic theories rests partly on the present difficulty of accurately comparing and classifying theories, as well as on the desire to easily combine theories that concentrate on different areas of semantics. There is a strong case for encouraging more modularity in the structure of semantic theories, to promote a division of labour, and potentially the development of reusable semantic modules. A more abstract approach to the syntax-semantics interface holds out the hope of further benefits, notably a degree of guaranteed semantic coherence via types or constraints. Two case studies of semantic abstraction are presented. First, alternative characterizations of intensional abstraction and predication are developed with respect to three different semantic theories, but in a theory-independent fashion. Second, an approach to semantic abstraction recently proposed by Johnson and Kay is analyzed in detail, and the nature of its abstraction is described with formal specifications. Finally, a programme for modular semantic specifications is described and applied to the area of quantification and anaphora, demonstrating successfully that theory-independent devices can be used to simultaneously abstract across both semantic theories and syntax-semantics interfaces.
128
Contextually-dependent lexical semantics. Verspoor, Cornelia M. January 1997.
This thesis is an investigation of phenomena at the interface between syntax, semantics, and pragmatics, with the aim of arguing for a view of semantic interpretation as lexically driven yet contextually dependent. I examine regular, generative processes which operate over the lexicon to induce verbal sense shifts, and discuss the interaction of these processes with the linguistic or discourse context. I concentrate on phenomena where only an interaction between all three linguistic knowledge sources can explain the constraints on verb use: conventionalised lexical semantic knowledge constrains productive syntactic processes, while pragmatic reasoning is both constrained by and constrains the potential interpretations given to certain verbs. The phenomena which are closely examined are the behaviour of PP sentential modifiers (specifically dative and directional PPs) with respect to the lexical semantic representation of the verb phrases they modify, resultative constructions, and logical metonymy. The analysis is couched in terms of a lexical semantic representation drawing on Davis (1995), Jackendoff (1983, 1990), and Pustejovsky (1991, 1995) which aims to capture “linguistically relevant” components of meaning. The representation is shown to have utility for modeling the interaction between the syntactic form of an utterance and its meaning. I introduce a formalisation of the representation within the framework of Head Driven Phrase Structure Grammar (Pollard and Sag 1994), and rely on the model of discourse coherence proposed by Lascarides and Asher (1992), Discourse in Commonsense Entailment. I furthermore discuss the implications of the contextual dependency of semantic interpretation for lexicon design and computational processing in Natural Language Understanding systems.
129
Decoding semantic representations during production of minimal adjective-noun phrases. Honari Jahromi, Maryam. 25 April 2019.
Through linguistic abilities, our brain can comprehend and produce an infinite number of new sentences constructed from a finite set of words. Although recent research has uncovered the neural representation of semantics during comprehension of isolated words or adjective-noun phrases, the neural representation of the words during utterance planning is less understood. We apply existing machine learning methods to Magnetoencephalography (MEG) data recorded during a picture naming experiment, and predict the semantic properties of uttered words before they are said. We explore the representation of concepts over time, under controlled tasks, with varying compositional requirements. Our results imply that there is enough information in brain activity recorded by MEG to decode the semantic properties of the words during utterance planning. Also, we observe a gradual improvement in the semantic decoding of the first uttered word, as the participant is about to say it. Finally, we show that, compared to non-compositional tasks, planning to compose an adjective-noun phrase is associated with an enhanced and sustained representation of the noun. Our results on the neural mechanisms of basic compositional structures are a small step towards the theory of language in the brain. / Graduate
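A hypothetical sketch of the kind of time-resolved decoding analysis described above, with simulated data standing in for the MEG recordings: a cross-validated ridge regression predicts a semantic property of the upcoming word from sensor data at each time point, yielding a score trajectory over the planning period. It is illustrative only, not the thesis's analysis code.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 120, 64, 50
meg = rng.standard_normal((n_trials, n_sensors, n_times))   # trials x sensors x time
semantic_target = rng.standard_normal(n_trials)             # one semantic feature per uttered word

# Decode separately at each time point; the score trajectory shows how the
# representation changes as the participant prepares to speak.
scores = []
for t in range(n_times):
    model = RidgeCV(alphas=np.logspace(-3, 3, 7))
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores.append(cross_val_score(model, meg[:, :, t], semantic_target, cv=cv).mean())

print("peak decoding score:", max(scores))
```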
130
Is Semantic Query Optimization Worthwhile? Genet, Bryan Howard. January 2007.
The term "semantic query optimization" (SQO) denotes a methodology whereby queries against databases are optimized using semantic information about the database objects being queried. The result of semantically optimizing a query is another query which is syntactically different to the original, but semantically equivalent, and which may be answered more efficiently than the original. SQO is distinctly different from the work performed by the conventional SQL optimizer. The SQL optimizer generates a set of logically equivalent alternative execution paths based ultimately on the rules of relational algebra. However, only a small proportion of the readily available semantic information is utilised by current SQL optimizers. Researchers in SQO agree that SQO can be very effective. However, after some twenty years of research into SQO, there is still no commercial implementation. In this thesis we argue that we need to quantify the conditions under which SQO is worthwhile. We investigate what these conditions are and apply this knowledge to relational database management systems (RDBMS) with static schemas and infrequently updated data. Any semantic query optimizer requires the ability to reason using the semantic information available, in order to draw conclusions which ultimately facilitate the recasting of the original query into a form which can be answered more efficiently. This reasoning engine is currently not part of any commercial RDBMS implementation. We show how a practical semantic query optimizer may be built utilising readily available semantic information, much of it already captured by meta-data typically stored in commercial RDBMS. We develop cost models which predict an upper bound on the amount of optimization one can expect when queries are pre-processed by a semantic optimizer. We present a series of empirical results to confirm the effectiveness or otherwise of various types of SQO, and demonstrate the circumstances under which SQO can be effective.
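A toy illustration (not drawn from the thesis) of one classic semantic rewrite, predicate elimination: a query predicate already guaranteed by an integrity constraint is dropped before the query reaches the conventional optimizer. Constraints and predicates are simplified here to (column, upper bound) pairs.

```python
def eliminate_redundant_predicates(predicates, constraints):
    """Drop any 'column < bound' predicate already implied by a known constraint."""
    kept = []
    for column, bound in predicates:
        implied = any(col == column and limit <= bound for col, limit in constraints)
        if not implied:
            kept.append((column, bound))
    return kept

# Integrity constraint known to the optimizer: every row satisfies weight < 100.
constraints = [("weight", 100)]

# Original query: SELECT ... WHERE weight < 500 AND cost < 20
predicates = [("weight", 500), ("cost", 20)]

print(eliminate_redundant_predicates(predicates, constraints))
# [('cost', 20)] -- the weight test is implied by the constraint and removed,
# so the rewritten, semantically equivalent query only checks cost < 20.
```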