  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Increasing the robustness of active upper limb prostheses

Stango, Antonietta 23 November 2016 (has links)
No description available.
2

Syntax-driven argument identification and multi-argument classification for semantic role labeling

Lin, Chi-San Althon January 2007 (has links)
Semantic role labeling is an important stage in systems for Natural Language Understanding. The basic problem is one of identifying who did what to whom for each predicate in a sentence. Thus labeling is a two-step process: identify constituent phrases that are arguments to a predicate, then label those arguments with appropriate thematic roles. Existing systems for semantic role labeling use machine learning methods to assign roles one at a time to candidate arguments. There are several drawbacks to this general approach. First, more than one candidate can be assigned the same role, which is undesirable. Second, the search for each candidate argument is exponential with respect to the number of words in the sentence. Third, single-role assignment cannot take advantage of dependencies known to exist between the semantic roles of predicate arguments, such as their relative juxtaposition. And fourth, execution times for existing algorithms are excessive, making them unsuitable for real-time use. This thesis seeks to obviate these problems by approaching semantic role labeling as a multi-argument classification process. It observes that the only valid arguments to a predicate are unembedded constituent phrases that do not overlap that predicate. Given that semantic role labeling occurs after parsing, this thesis proposes an algorithm that systematically traverses the parse tree when looking for arguments, thereby eliminating the vast majority of impossible candidates. Moreover, instead of assigning semantic roles one at a time, an algorithm is proposed to assign all labels simultaneously, leveraging dependencies between roles and eliminating the problem of duplicate assignment. Experimental results are provided as evidence that a combination of the proposed argument identification and multi-argument classification algorithms outperforms all existing systems that use the same syntactic information.
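The abstract does not spell out the traversal itself. A common pruning heuristic that matches its description is Xue & Palmer-style pruning: collect the siblings of every node on the path from the predicate up to the root, which yields exactly the unembedded constituents that do not overlap the predicate. A minimal sketch under that assumption, using NLTK's `Tree` (the function name and example sentence are illustrative, not from the thesis):

```python
from nltk.tree import Tree

def candidate_arguments(tree, pred_leaf_index):
    """Collect candidate argument constituents for the predicate at leaf
    position pred_leaf_index: the siblings of every node on the path from
    the predicate up to the root (Xue & Palmer-style pruning). These are
    unembedded constituents that do not overlap the predicate."""
    node_pos = tree.leaf_treeposition(pred_leaf_index)[:-1]  # POS node above the leaf
    candidates = []
    while node_pos:                        # stop once we reach the root ()
        parent_pos = node_pos[:-1]
        for i, sibling in enumerate(tree[parent_pos]):
            if parent_pos + (i,) != node_pos and isinstance(sibling, Tree):
                candidates.append(sibling)
        node_pos = parent_pos
    return candidates

t = Tree.fromstring(
    "(S (NP (DT The) (NN cat)) (VP (VBD ate) (NP (DT the) (NN fish))))")
for c in candidate_arguments(t, 2):        # leaf 2 is the predicate 'ate'
    print(c.label(), "->", " ".join(c.leaves()))
# NP -> the fish
# NP -> The cat
```

Note how the sketch visits only the nodes on one root-to-predicate path rather than all word spans, which is the source of the claimed reduction in candidates.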
3

Matrix groups : theory, algorithms and applications

Ambrose, Sophie January 2006 (has links)
[Abstract] This thesis is divided into two parts, both containing algorithms for dealing with matrices and matrix groups. Part I is concerned with individual matrices over an arbitrary field. Our algorithms make use of a sequence called the rank profile which is related to the linear dependence relations between the columns of a matrix. First we look at LSP decompositions of matrices as defined by Ibarra et al. in 1982. This decomposition is related to, and a little more general than, the LUP decomposition. The algorithm given by Ibarra et al. to compute an LSP decomposition was only defined for m × n matrices where m ≤ n and is claimed to have the same asymptotic cost as matrix multiplication. We prove that their cost analysis overlooked some aspects of the computation and present a new version of the algorithm which finds both an LSP decomposition and the rank profile of any matrix. The cost of our algorithm is the same as that claimed by Ibarra et al. when m ≤ n and has a similar cost when m > n. One of the steps in the Ibarra et al. algorithm is not completely explicit, so that any one of several choices can be made. Our algorithm is designed so that the particular choice made at this point allows for the simultaneous calculation of the rank profile. Next we study algorithms to find the characteristic polynomial of a square matrix. The current fastest algorithm to find the characteristic polynomial of a square matrix was developed by Keller-Gehrig in 1985. We present a new, simpler version of this algorithm with the same cost which makes the algorithm's reliance on the rank profile explicit. In Part II we present generalised sifting, a scheme for creating Monte Carlo black-box constructive group recognition algorithms. Generalised sifting is designed to facilitate computation in a known group, specifically re-writing arbitrary elements as words or straight-line programs in a standard generating set. It can also be used to create membership tests in black-box groups. Generalised sifting was inspired by the subgroup sifting techniques originally introduced by Sims in 1970 but uses a chain of subsets rather than subgroups. We break the problem down into a sequence of separately analysed and proven steps which sift down into each subset in turn ... All of the algorithms in Parts I and II are given with a theoretical proof and (where appropriate) complexity analysis. The LSP decomposition, characteristic polynomial and generalised sifting algorithms have all been implemented and tested in the computer algebra package GAP.
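The column rank profile referred to above is the lexicographically smallest list of column indices whose columns are linearly independent and span the column space. A minimal numerical sketch of computing it (floating-point Gaussian elimination with a tolerance; the thesis works over arbitrary fields, so this is an illustration of the concept, not its algorithm):

```python
import numpy as np

def column_rank_profile(A, tol=1e-10):
    """Column rank profile of A: a column enters the profile exactly when
    Gaussian elimination (with partial pivoting) finds a new pivot in it."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    profile, r = [], 0                            # r = pivots found so far
    for j in range(n):
        k = r + np.argmax(np.abs(A[r:, j]))       # best pivot in column j
        if abs(A[k, j]) > tol:
            A[[r, k]] = A[[k, r]]                 # swap pivot row up
            A[r + 1:, j:] -= np.outer(A[r + 1:, j] / A[r, j], A[r, j:])
            profile.append(j)
            r += 1
            if r == m:
                break
    return profile

# Column 1 equals 2 * column 0, so the profile skips it:
print(column_rank_profile([[1, 2, 0],
                           [2, 4, 1]]))           # -> [0, 2]
```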
4

A Natural User Interface for Virtual Object Modeling for Immersive Gaming

Xu, Siyuan 01 October 2013 (has links)
" We designed an interactive 3D user interface system to perform object modeling in virtual environments. Expanding on existing 3D user interface techniques, we integrate low-cost human gesture recognition that endows the user with powerful abilities to perform complex virtual object modeling tasks in an immersive game setting. Much research has been done to explore the possibilities of developing biosensors for Virtual Reality (VR) use. In the game industry, even though full body interaction techniques are involved in modern game consoles, most of the utilizations, in terms of game control, are still simple. In this project, we extended the use of motion tracking and gesture recognition techniques to create a new 3D UI system to support immersive gaming. We set a goal for the usability, which is virtual object modeling, and finally developed a game application to test its performance. "
5

The influences of cognitive, experiential and habitual factors in online games playing

Said, Laila Refiana January 2006 (has links)
[Truncated abstract] Online games are an exciting new trend in the consumption of entertainment and provide the opportunity to examine selected antecedents of online game-playing based on studying the cognitive, experiential and habitual factors. This study was divided into two parts. The first part analysed the structural relations among research variables that might explain online game-playing using Structural Equation Modeling (SEM) techniques. These analyses were conducted on a final sample of 218 online gamers. Specific issues examined were whether the variables of Perceived Game Performance, Satisfaction, Hedonic Responses, Flow and Habit Strength influence the Intention to Replay an online game, and the importance of factors such as Hedonic Responses and Flow on Satisfaction in online game play. In addition to the SEM, analyses of the participants' reported past playing behaviour were conducted to test whether past game play was simply a matter of random frequency of past behaviour, or followed the specific pattern of the Negative Binomial Distribution (NBD). … The playing-time distribution was not significantly different from the Gamma distribution, in which the largest number of gamers play for a short time (light gamers) and only a few gamers account for a large proportion of playing time (heavy gamers). Therefore, the reported play time followed a simple and predictable NBD pattern (Chi-square = .390; p > .05). This study contributes to knowledge in the immediate field of online games and to the wider body of literature on consumer research. The findings demonstrate that gamers tend to act habitually in their playing behaviour. These findings support the argument that past behaviour (habit) is a better explanation of future behaviour than possible cognitive and affective explanations, especially for the apparently routinised pattern of behaviour in online games. The pattern of online game-playing is consistent with the finding of the NBD pattern in television viewing, in which the generalisability of the NBD model has been established in stable environments of repetitive behaviour. This supports the application of the NBD to areas beyond those of patterns in gambling and the purchase of consumer items. The findings have implications for both managerial and public policy decision-making.
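As an illustration of the NBD check described above, a method-of-moments fit followed by a chi-square goodness-of-fit test might look like the sketch below. The play counts are synthetic stand-ins; the thesis's data and exact fitting procedure are not reproduced here.

```python
import numpy as np
from scipy import stats

# Synthetic play-frequency counts for 218 "gamers" (stand-ins, not thesis data)
rng = np.random.default_rng(0)
plays = rng.negative_binomial(n=1.2, p=0.3, size=218)

# Method-of-moments NBD fit: mean m, variance v  =>  p = m/v, r = m*p/(1-p)
m, v = plays.mean(), plays.var(ddof=1)
p_hat = m / v
r_hat = m * p_hat / (1 - p_hat)

# Compare observed vs. expected counts for 0, 1, ..., K-1 plays, tail lumped
K = 10
observed = np.bincount(np.minimum(plays, K), minlength=K + 1)
probs = stats.nbinom.pmf(np.arange(K), r_hat, p_hat)
probs = np.append(probs, 1 - probs.sum())        # tail bin so totals match
expected = probs * plays.size

chi2, pval = stats.chisquare(observed, expected, ddof=2)  # 2 fitted params
print(f"chi-square = {chi2:.3f}, p = {pval:.3f}")         # p > .05 => NBD fits
```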
6

A biometric feature for speaker recognition: two-dimensional informational entropy of the speech signal (Biometrijsko obeležje za prepoznavanje govornika: dvodimenzionalna informaciona entropija govornog signala)

Božilović, Boško 26 September 2016 (has links)
<p>Mотив за истраживање је унапређење процеса аутоматског препознавања говорника без обзира на садржај изговоренoг текста.<br />Циљ ове докторске дисертације је дефинисање новог биометријског обележја за препознавање говорника независно од изговореног текста &minus; дводимензионалне информационе ентропије говорног сигнала.<br />Дефинисање новог обележја се врши искључиво у временском домену, па је рачунарска сложеност алгоритма за његово издвајање знатно мања у односу на обележја која се издвајају у фреквенцијском домену. Оцена перформанси дводимензионалне информационе ентропије је урађена над репрезентативним скупом случајно одабраних говорника. Показано је да предложено обележје има малу варијабилност унутар говорног сигнала једног говорника, а велику варијабилност између говорних сигнала различитих говорника.</p> / <p>Motiv za istraživanje je unapređenje procesa automatskog prepoznavanja govornika bez obzira na sadržaj izgovorenog teksta.<br />Cilj ove doktorske disertacije je definisanje novog biometrijskog obeležja za prepoznavanje govornika nezavisno od izgovorenog teksta &minus; dvodimenzionalne informacione entropije govornog signala.<br />Definisanje novog obeležja se vrši isključivo u vremenskom domenu, pa je računarska složenost algoritma za njegovo izdvajanje znatno manja u odnosu na obeležja koja se izdvajaju u frekvencijskom domenu. Ocena performansi dvodimenzionalne informacione entropije je urađena nad reprezentativnim skupom slučajno odabranih govornika. Pokazano je da predloženo obeležje ima malu varijabilnost unutar govornog signala jednog govornika, a veliku varijabilnost između govornih signala različitih govornika.</p> / <p>Тhe motivation for the research is the improvement of the automatic speaker recognition process regardless of the content of spoken text.<br />The objective of this dissertation is to define a new biometric text-independent speaker recognition feature &minus; the two-dimensional informational entropy of speech signal.<br />Definition of the new feature is performed in time domain exclusively, so the computing complexity of the algorithm for feature extraction is significantly lower in comparison to feature extraction in spectral domain. Performance analysis of two-dimensional information entropy is performed on the representative set of randomly chosen speakers. It has been shown that new feature has small within-speaker variability and significant between-speaker variability.</p>
7

Analysis Of Multi-lingual Documents With Complex Layout And Content

Pati, Peeta Basa 11 1900 (has links)
A document image, besides text, may contain pictures, graphs, signatures, logos, barcodes, hand-drawn sketches and/or seals. Further, the text blocks in an image may be in a Manhattan or any complex layout. Document layout analysis is an important preprocessing step before subjecting any such image to OCR. Here, the image with complex layout and content is segmented into its constituent components. For many present-day applications, separating the text from the non-text blocks is sufficient. This enables the conversion of the text elements present in the image to their corresponding editable form. In this work, an effort has been made to separate the text areas from the various kinds of possible non-text elements. The document images may have been obtained from a scanner or camera. If the source is a scanner, there is control over the scanning resolution and the lighting of the paper surface. Moreover, during the scanning process, the paper surface remains parallel to the sensor surface. However, when an image is obtained through a camera, these advantages are no longer available. Here, an algorithm is proposed to separate the text present in an image from the clutter, irrespective of the imaging technology used. This is achieved by using both the structural and textural information of the text present in the gray image. A bank of Gabor filters characterizes the statistical distribution of the text elements in the document. A connected-component based technique removes certain types of non-text elements from the image. When a camera is used to acquire document images, color information is generally obtained along with the structural and textural information of the text. It can be assumed that text present in an image has a certain amount of color homogeneity. So, a graph-theoretical color clustering scheme is employed to segment the iso-color components of the image. Each iso-color image is then analyzed separately for its structural and textural properties. The results of such analyses are merged with the information obtained from the gray component of the image. This helps to separate the colored text areas from the non-text elements. The proposed scheme is computationally intensive, because the separation of the text from non-text entities is performed at the pixel level. Since any entity is represented by a connected set of pixels, it makes more sense to carry out the separation only at specific points, selected as representatives of their neighborhood. Harris' operator evaluates an edge measure at each pixel and selects pixels which are locally rich in this measure. These points are then employed for separating text from non-text elements. Many government documents and forms in India are bi-lingual or tri-lingual in nature. Further, in school textbooks, it is common to find English words interspersed within sentences in the main Indian language of the book. In such documents, successive words in a line of text may be of different scripts (languages). Hence, for OCR of these documents, the script must be recognized at the level of words, rather than lines or paragraphs. A database of about 20,000 words each from 11 Indian scripts¹ is created. This is so far the largest database of Indian words collected and deployed for script recognition purposes. Here again, a bank of 36 Gabor filters is used to extract the feature vector which represents the script of the word.
The effectiveness of Gabor features is compared with that of DCT features, and it is found that the Gabor features marginally outperform the DCT. Simple linear and non-linear classifiers are employed to classify the word in the feature space. It is assumed that a scheme developed to recognize the script of words would work equally well for sentences and paragraphs. This assumption has been verified with supporting results. A systematic study has been conducted to evaluate and compare the accuracy of various feature-classifier combinations for word script recognition. We have considered the cases of bi-script and tri-script documents, which are widely available. Average recognition accuracies for the bi-script and tri-script cases are 98.4% and 98.2%, respectively. A hierarchical blind script recognizer involving all eleven scripts has been developed and evaluated, which yields an average accuracy of 94.1%. The major contributions of the thesis are:
• A graph-theoretical color clustering scheme is used to segment colored text.
• A scheme is proposed to separate text from the non-text content of documents with complex layout and content, captured by scanner or camera.
• Computational complexity is reduced by performing the separation task on a selected set of locally edge-rich points.
• Script identification at the word level is carried out using different feature-classifier combinations; Gabor features with an SVM classifier outperform all other feature-classifier combinations.
• A hierarchical blind script recognition algorithm, involving the recognition of 11 Indian scripts, is developed. This structure employs the most efficient feature-classifier combination at each node of the tree to maximize system performance. A sequential forward feature selection algorithm is employed to select the most discriminating features, on a case-by-case basis, for script recognition.
¹The 11 scripts are Bengali, Devanagari, Gujarati, Kannada, Malayalam, Odiya, Punjabi, Roman, Tamil, Telugu and Urdu.
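A minimal sketch of the Gabor-bank word features plus SVM classification described above. The filter parameters, bank size, and use of scikit-learn are illustrative assumptions; the thesis's exact 36-filter bank is not specified here.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_bank(ksize=31, thetas=6, lambdas=(4, 8, 16)):
    """A small bank of Gabor kernels over several orientations and
    wavelengths (illustrative parameters, not the thesis's exact bank)."""
    kernels = []
    for t in range(thetas):
        theta = t * np.pi / thetas
        for lam in lambdas:
            kernels.append(cv2.getGaborKernel(
                (ksize, ksize), sigma=4.0, theta=theta,
                lambd=lam, gamma=0.5, psi=0))
    return kernels

def word_features(gray_word_image, kernels):
    """Mean and standard deviation of each filter response, concatenated
    into one feature vector for the word image."""
    feats = []
    for k in kernels:
        resp = cv2.filter2D(gray_word_image.astype(np.float32), -1, k)
        feats += [resp.mean(), resp.std()]
    return np.array(feats)

# Training on labeled word images (word_images: grayscale arrays,
# y: script labels) -- hypothetical variable names for illustration:
# kernels = gabor_bank()
# X = np.stack([word_features(img, kernels) for img in word_images])
# clf = SVC(kernel="rbf").fit(X, y)
```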
8

Novel algorithms for 3D human face recognition

Gupta, Shalini, 1979- 27 April 2015 (has links)
Automated human face recognition is a computer vision problem of considerable practical significance. Existing two-dimensional (2D) face recognition techniques perform poorly for faces with uncontrolled poses, lighting and facial expressions. Face recognition technology based on three-dimensional (3D) facial models is now emerging. Geometric facial models can be easily corrected for pose variations. They are illumination invariant, and provide structural information about the facial surface. Algorithms for 3D face recognition exist; however, the area is far from being a mature technology. In this dissertation we address a number of open questions in the area of 3D human face recognition. Firstly, we make available to qualified researchers in the field, at no cost, the large Texas 3D Face Recognition Database, which was acquired as a part of this research work. This database contains 1149 2D and 3D images of 118 subjects. We also provide 25 manually located facial fiducial points for each face in this database. Our next contribution is the development of a completely automatic, novel 3D face recognition algorithm, which employs discriminatory anthropometric distances between carefully selected local facial features. This algorithm neither uses general-purpose pattern recognition approaches, nor does it directly extend 2D face recognition techniques to the 3D domain. Instead, it is based on an understanding of the structurally diverse characteristics of human faces, which we isolate from the scientific discipline of facial anthropometry. We demonstrate the effectiveness and superior performance of the proposed algorithm relative to existing benchmark 3D face recognition algorithms. A related contribution is the development of highly accurate and reliable 2D+3D algorithms for automatically detecting 10 anthropometric facial fiducial points. While developing these algorithms, we identify unique structural/textural properties associated with the facial fiducial points. Furthermore, unlike previous algorithms for detecting facial fiducial points, we systematically evaluate our algorithms against manually located facial fiducial points on a large database of images. Our third contribution is the development of an effective algorithm for computing the structural dissimilarity of 3D facial surfaces, which uses a recently developed image similarity index called the complex-wavelet structural similarity index. This algorithm is unique in that, unlike existing approaches, it does not require that the facial surfaces be finely registered before they are compared. Furthermore, it is nearly an order of magnitude more accurate than existing facial surface matching based approaches. Finally, we propose a simple method to combine the two new 3D face recognition algorithms that we developed, resulting in a 3D face recognition algorithm that is competitive with the existing state-of-the-art algorithms.
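A hedged sketch of the anthropometric-distance idea described above. The landmark set, distance choice, and nearest-neighbor matcher are illustrative assumptions, not the dissertation's exact algorithm (which selects discriminatory distances rather than using all pairs).

```python
import numpy as np
from itertools import combinations

def anthropometric_features(landmarks):
    """Pairwise Euclidean distances between 3D facial fiducial points.
    landmarks: (k, 3) array, e.g. k = 10 detected fiducial points."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

def identify(probe_landmarks, gallery):
    """Nearest-neighbor match of a probe face against a gallery of
    (subject_id, landmarks) pairs, compared in feature space."""
    probe = anthropometric_features(probe_landmarks)
    dists = [(np.linalg.norm(probe - anthropometric_features(lm)), sid)
             for sid, lm in gallery]
    return min(dists)[1]                 # subject id of the closest face
```

Because the features are distances between named anatomical points, they are unaffected by rigid pose changes, which is the property the abstract highlights for geometric facial models.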
