21

Critical computer animation : an examination of "practice as research" and its reflection and review processes

Lo-Garry, Yasumiko Cindy Tszyan January 2010 (has links)
My doctoral study investigated the "Practice as Research" model for critical 3D computer animation. I designed a structure for the model using mixed research methods and a critical process, and first applied the proposed methodology in a pilot study to examine selected methods and to identify other techniques the research model required. The refined "Practice as Research" model was then applied to different fields of animation - a game development project, a narrative animation, and an experimental animation - for detailed analysis and to improve its flexibility. The study examined a variety of practices and procedures used by animators and studios and identified processes for the analysis and evaluation of computer animation. Within the research space created in both the commercial project and the experimental works, I demonstrated that effective procedures differed depending on the application and its target qualities. I also clarified some of the basic differences between traditional animation techniques and 3D skills, and accordingly explained and modified some well-established animation practices to best suit 3D animation development. The "Practice as Research" model brought critical research methods and attitudes into industrial settings to broaden the reception of experience and knowledge, shifting away from the common product-oriented view of creative work. The model naturally led practitioners to question their own perspectives and previous ways of working. It showed that the "Practice as Research" approach could increase creativity in a product while maintaining control of time management, and could encourage animators to welcome other perspectives. The research concluded that, used properly, the "Practice as Research" model could be an effective and efficient way to satisfy both commercial quality requirements and personal development. Perhaps the most interesting part of the research was the search for the animator's mindset, personal qualities, preconceptions and preferences that could influence practice and quality. With that additional information, I refined the proposed "Practice as Research" model so that it allowed animators to modify their previous ways of working and thinking during the process, and encouraged continuous development towards higher-quality work.
22

ASSESSING A BRAIN-COMPUTER INTERFACE BY EVOKING THE AUDITORY CORTEX THROUGH BINAURAL BEATS

Potgieter, Louwrens 22 July 2013 (has links)
Why can some people study, read books, and work while listening to music or with noise in the background, while other people simply cannot? This was the question that prompted this research study. The aim of this project was to assess the impact of binaural beats on participants during the performance of a task. The participants were exposed to different binaural beats that changed the dominant brainwaves while they were engaging in the task. A brain-computer interface was used to monitor the performance of the task, in which a Lego Mindstorms robot was controlled as it moved through a course. To accomplish the aim of the project, the effects of binaural tones on participants' task performance were investigated in relation to participants' levels of frustration, excitement, engagement, meditation and performance. Participants were monitored by means of an Emotiv EPOC neuroheadset. Although previous studies on binaural beats have been done, most of these studies were done on children with attention deficit hyperactivity disorder (ADHD), with users performing everyday tasks, and in these studies time was the only metric used. The researcher collected data by means of questionnaires completed by the participants to obtain personal information and measure the user experience. The aspects of frustration, excitement, engagement, meditation and performance were determined using the Emotiv headset in combination with the Emotiv software development kit, Microsoft Robotics Studio and software created by the researcher. After intensive statistical analysis, the researcher found that different sound frequencies did indeed affect user performance. Sessions where no sound frequency was applied were associated with more errors and longer time durations compared with all other frequencies. It can be concluded that invoking a participant's dominant brainwave by means of binaural tones can change his or her state of mind. This in turn can affect the long-term excitement, short-term excitement, engagement, meditation, frustration or performance of a participant while performing a task. Much remains to be learned, in particular regarding the combination of brain-computer interfaces and human-computer interaction. The possibility of new cutting-edge technologies that could provide a platform for further in-depth research is an exciting prospect.
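As a rough illustration of the kind of stimulus the study relies on, the sketch below generates a stereo binaural-beat tone: each ear receives a pure tone, and the small frequency offset between the ears is perceived as the beat. The carrier (200 Hz) and the 10 Hz offset are assumed example values, not the frequencies used in the study.

```python
# Illustrative sketch only: a 10 Hz binaural beat built from two sine tones.
import numpy as np
import wave

RATE = 44100          # samples per second
DURATION = 10         # seconds
CARRIER = 200.0       # Hz, tone presented to the left ear (example value)
BEAT = 10.0           # Hz, perceived beat = frequency difference between ears

t = np.arange(int(RATE * DURATION)) / RATE
left = np.sin(2 * np.pi * CARRIER * t)
right = np.sin(2 * np.pi * (CARRIER + BEAT) * t)

# Interleave the two channels and scale to 16-bit PCM.
stereo = np.empty(2 * len(t), dtype=np.int16)
stereo[0::2] = (left * 0.5 * 32767).astype(np.int16)
stereo[1::2] = (right * 0.5 * 32767).astype(np.int16)

with wave.open("binaural_10hz.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(stereo.tobytes())
```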
23

Multi view image : surveillance and tracking

Black, James January 2004 (has links)
No description available.
24

A new visual query language and query optimization for mobile GPS

Elsidani Elariss, Haifa January 2008 (has links)
In recent years computer applications have been deployed to manage spatial data with Geographic Information Systems (GIS), storing and analyzing data related to domains such as transportation and tourism. Recent developments have shown an urgent need for systems on mobile devices, particularly for Location Based Services (LBS) such as proximity analysis, which finds the nearest neighbors (for example, the nearest restaurant) and the facilities located within a circular area around the user's location, known as a buffer area (for example, all restaurants within 100 meters). The mobile market potential cuts across geographical and cultural boundaries. Hence the visualization of queries becomes important, especially since existing visual query languages have a number of limitations: they are not tailored for mobile GIS and they do not support dynamic complex queries (DCQ) or visual query formation. Thus, the first aim of this research is to develop a new visual query language (IVQL) for mobile GIS that handles static queries and DCQs for proximity analysis. IVQL is designed and implemented using smiley icons that visualize operators, values, and objects. The evaluation results reveal that it has expressive power, an easy-to-use interface, easy query building, and high user satisfaction. There is also a need for new optimization strategies that consider the scale of mobile user queries. Existing query optimization strategies are based on the sharing and push-down paradigms and do not cover multiple DCQs (MDCQs) for proximity analysis. This leads to the second aim of this thesis, which is to develop the query melting processor (QMP), responsible for processing MDCQs. QMP is based on the new query melting paradigm, which builds on the sharing paradigm and query optimization and is implemented by a new strategy, "Melting Ruler". Moreover, with the growing volume of cost-sensitive mobile users, the need emerges to develop a time cost optimizer for processing MDCQs. Thus, the third aim of the thesis is to develop a new decision-making mechanism for time cost optimization (TCOP) and prove its cost effectiveness. TCOP is based on the new paradigm of sharing global execution plans among MDCQs with similar scenarios. The experimental evaluation results, using a case study based on the map of Paris, showed that significant savings in time can be achieved by employing the newly developed strategies.
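For readers unfamiliar with the two LBS query types mentioned above, here is a hedged, self-contained sketch of a buffer query and a nearest-neighbor query over point data. The place names, coordinates and the `haversine_m` helper are invented for illustration and are not part of IVQL, QMP or TCOP.

```python
# Generic proximity-analysis sketch: buffer query and nearest-neighbor query.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    R = 6371000.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical facilities near a hypothetical user location in Paris.
restaurants = [("Chez Anne", 48.8584, 2.2945), ("Le Petit", 48.8600, 2.2950)]
user = (48.8590, 2.2940)

# Buffer query: all restaurants within 100 meters of the user.
in_buffer = [(name, round(haversine_m(*user, lat, lon)))
             for name, lat, lon in restaurants
             if haversine_m(*user, lat, lon) <= 100]

# Nearest-neighbor query: the single closest restaurant.
nearest = min(restaurants, key=lambda r: haversine_m(*user, r[1], r[2]))
print(in_buffer, nearest[0])
```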
25

Genomic signal processing for enhanced microarray data clustering

Sungoor, Ala M. H. January 2009 (has links)
Genomic signal processing is a new area of research that combines genomics with digital signal processing (DSP) methodologies for enhanced genetic data analysis. The microarray is a well-known technology for the evaluation of thousands of gene expression profiles. By considering these profiles as digital signals, the power of DSP methods can be applied to produce robust and unsupervised clustering of microarray samples. This can be achieved by transforming expression profiles into spectral components which are interpreted as a measure of profile similarity. This thesis introduces enhanced signal processing algorithms for robust clustering of microarray gene expression samples. The main aim of the research is to design and validate novel genomic signal processing methodologies for microarray data analysis based on different DSP methods. More specifically, clustering algorithms based on linear prediction coding, wavelet decomposition and fractal dimension methods, combined with a vector quantisation algorithm, are applied and compared on a set of test microarray datasets. These techniques take microarray gene expression samples as input and produce arrays of predictive coefficients associated with the microarray data; these coefficients are quantised into discrete levels and consequently used for sample clustering. A variety of standard microarray datasets are used in this work to validate the robustness of these methods compared to conventional methods. Two well-known validation approaches, the Silhouette and Davies-Bouldin index methods, are applied to evaluate the genomic signal processing clustering results both internally and externally. In conclusion, the results demonstrate that genomic signal processing based methods outperform traditional methods by providing greater clustering accuracy. Moreover, the study shows that the local features of the gene expression signals are better clustered using wavelets compared to the other DSP methods.
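The pipeline described (signal transform, then quantisation of the coefficients, then cluster validation) can be sketched roughly as follows. This is a minimal stand-in that uses wavelet approximation coefficients, k-means in place of a full vector quantiser, and a Silhouette check, with random placeholder data rather than a real microarray set.

```python
# Minimal sketch of a wavelet-based clustering pipeline for expression profiles.
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
profiles = rng.normal(size=(60, 1024))   # 60 samples x 1024 gene expression values (placeholder)

# Treat each profile as a digital signal; keep the coarse (approximation)
# wavelet coefficients as a compact spectral descriptor of the profile.
features = np.array([pywt.wavedec(p, "db4", level=4)[0] for p in profiles])

# Quantise/cluster the descriptors (k-means here stands in for vector quantisation).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("Silhouette index:", silhouette_score(features, labels))
```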
26

Investigating optical flow and tracking techniques for recovering motion within image sequences

Corvee, Etienne January 2005 (has links)
Analysing objects interacting in a 3D environment and captured by a video camera requires knowledge of their motions. Motion estimation provides such information, and consists of recovering the 2D image velocity, or optical flow, of the corresponding moving 3D objects. A gradient-based optical flow estimator is implemented in this thesis to produce a dense field of velocity vectors across an image. An iterative and parameterised approach is adopted which fits planar motion models locally on the image plane; motion is then estimated using least-squares minimisation. The possible approximations of the optical flow derivative are shown to differ greatly as the magnitude of the motion increases. However, the widely used derivative term remains the optimal approximation within the range of accuracies of gradient-based estimators, i.e. small motion magnitudes. Gradient-based estimators do not estimate motion robustly when noise, large motions and multiple motions are present across object boundaries. A robust statistical and multi-resolution estimator is developed in this study to address these limitations. Despite significant improvement in performance, the multiple-motion problem remains a major limitation. A confidence measure is designed around the optical flow covariance to represent motion accuracy, and is shown to visually capture the lack of robustness across motion boundaries. The recent hyperplane technique is also studied as a global motion estimator but proved unreliable compared with the gradient-based approach. A computationally expensive optical flow estimator is then designed for detecting, at frame rate, moving objects that occlude background scenes composed of static objects captured by moving pan-and-tilt cameras. This was achieved by adapting the estimator to perform global motion estimation, i.e. estimating the motion of the background scenes. Moving objects are segmented by a thresholding operation on the grey-level differences between motion-compensated background frames and captured frames. Filtering operations based on small object dimensions and moving-edge information produced reliable results with low levels of noise. The issue of tracking moving objects is studied with the specific problem of data correspondence in occlusion scenarios.
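A minimal sketch of the gradient-based, least-squares formulation referred to above: for a small patch, the spatial and temporal image gradients are stacked into an over-determined linear system and solved for a single velocity vector. The frames here are synthetic placeholders, and the sketch omits the iterative, parameterised and multi-resolution refinements the thesis develops.

```python
# Gradient-based optical flow for one patch, assuming small motion.
import numpy as np

def patch_flow(prev, curr):
    """Estimate a single (u, v) velocity for a small patch via least squares."""
    # Spatial gradients of the first frame, temporal difference between frames.
    Iy, Ix = np.gradient(prev.astype(float))
    It = curr.astype(float) - prev.astype(float)

    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # N x 2 matrix of spatial gradients
    b = -It.ravel()                                  # temporal term
    # Solve Ix*u + Iy*v = -It over the patch in the least-squares sense.
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv

prev = np.random.rand(15, 15)
curr = np.roll(prev, shift=1, axis=1)   # synthetic one-pixel horizontal shift
print(patch_flow(prev, curr))           # expect roughly (1, 0)
```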
27

Motion estimation and segmentation of colour image sequences

Amanatidis, Dimitrios E. January 2008 (has links)
The principal objective of this thesis is to develop improved motion estimation and segmentation techniques that meet the image-processing requirements of the post-production industry. Starting with a rigorous taxonomy of existing image segmentation techniques, we proceed by focusing on motion estimation by means of optical flow calculation. A parametric motion model based method to estimate optical flow fields over three consecutive frames is developed and tested on a number of real colour sequences. Initial estimates are robustly refined in an iterative scheme and are enhanced by colour probability distribution information to enable foreground/background segmentation in a maximum a posteriori pixel classification scheme. Experiments show the significant contribution of the colour part towards a well-segmented image. Additionally, a very accurate variational optical flow computation method based on brightness constancy, gradient constancy and spatiotemporal smoothness constraints is modified and implemented so that it can robustly estimate global motion over three consecutive frames. Motion is enhanced by colour evidence in a similar manner and the method adopts the same probabilistic labelling procedure. After a comparison of the two methods on the same colour sequences, a third, neural network based method is implemented, which initially estimates motion by employing two twin-layer optical flow calculating Cellular Neural Networks (CNNs) and proceeds in a similar manner (incorporating colour information and probabilistically classifying pixels), leading to similar or improved quality results with the added advantage of significantly accelerated performance. Moreover, another CNN is employed with the task of offering spatial and temporal pixel compatibility constraint support, further improving the quality of the segmented images. Weights are used to control the respective contributing terms, enabling optimization of the segmentation results for each sequence individually. Finally, as a case study of CNN implementation in hardware (FPGA), the use of Handel-C, a C-like, high-level, parallel hardware description language, is exploited to allow for rapid translation of our algorithms to efficient hardware.
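The maximum a posteriori labelling idea can be illustrated very roughly as below: each pixel receives a foreground/background label by comparing, for each class, the product of a motion likelihood, a colour likelihood and a prior. The likelihood models and the prior here are placeholders, not the fitted distributions used in the thesis.

```python
# Toy MAP foreground/background labelling combining motion and colour evidence.
import numpy as np

h, w = 4, 4
motion_mag = np.random.rand(h, w)        # residual motion after compensation (placeholder)
colour_fg_prob = np.random.rand(h, w)    # P(colour | foreground) (placeholder model)
colour_bg_prob = 1.0 - colour_fg_prob    # P(colour | background)
prior_fg = 0.3                           # assumed prior probability of foreground

# Large residual motion favours the foreground label.
p_fg = motion_mag * colour_fg_prob * prior_fg
p_bg = (1.0 - motion_mag) * colour_bg_prob * (1.0 - prior_fg)

labels = (p_fg > p_bg).astype(np.uint8)  # 1 = foreground, 0 = background
print(labels)
```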
28

The significance of models of vision for the development of artificial handwriting recognition systems

Lenaghan, Andrew January 2001 (has links)
Artificial Handwriting Recognition (AHR) systems are currently developed in a largely ad hoc fashion. The central premise of this work is the need to return to first principles and identify an underlying rationale for the representations used in handwriting recognition. An interdisciplinary approach is advocated that combines the perspectives of cognitive science and pattern recognition engineering. Existing surveys of handwriting recognition are reviewed and an information-space analogy is presented to model how features encode evidence. Handwriting recognition is treated as an example of a simple visual task that uses a limited set of our visual abilities, based on the observations that i) biological systems provide an example of a successful handwriting recognition system, and ii) vision is a prerequisite of recognition. A set of six feature types for which there is empirical evidence of their detection in early visual processing is identified, and a layered framework for handwriting recognition is proposed that unifies the perspectives of cognitive science and engineering. The outer layers of the framework relate to the capture of raw sensory data and feature extraction. The inner layers concern the derivation and comparison of structural descriptions of handwriting. The implementation of an online AHR system developed in the context of the framework is reported. The implementation uses a fuzzy graph-based approach to represent structural descriptions of characters. Simple directed graphs for characters are compared by searching for subgraph isomorphisms between input characters and known prototypes. Trials are reported for a test set of 1000 digits drawn from 100 different subjects using a K-Nearest Neighbour (KNN) approach to classification. For K=3 the mean recognition accuracy is 68.3%, and for K=5 it is 70.7%. Linear features were found to be the most significant. The work concludes that the current understanding of visual cognition is incomplete but does provide a basis for the development of artificial handwriting recognition systems, although their performance is currently less than that of existing engineered systems.
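The reported K-Nearest-Neighbour evaluation can be sketched generically as follows; the feature vectors below are random stand-ins rather than the thesis's fuzzy-graph structural descriptions, and `knn_predict` is an illustrative helper, not part of the described system.

```python
# Generic KNN classification sketch matching the K=3 / K=5 evaluation setting.
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Label x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[nearest]).most_common(1)[0][0]

rng = np.random.default_rng(1)
train_X = rng.normal(size=(1000, 6))       # e.g. one score per feature type, per digit (placeholder)
train_y = rng.integers(0, 10, size=1000)   # digit labels 0-9
print(knn_predict(train_X, train_y, rng.normal(size=6), k=5))
```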
29

Hybrid modelling of time-variant heterogeneous objects

Kravtsov, Denis January 2011 (has links)
The physical world consists of a wide range of objects of a diverse constitution. Past research was mainly focussed on the modelling of simple homogeneous objects of a uniform constitution. Such research resulted in the development of a number of advanced theoretical concepts and practical techniques for describing such physical objects. As a result, the process of modelling and animating certain types of homogeneous objects became feasible. In fact most physical objects are not homogeneous but heterogeneous in their constitution and it is thus important that one is able to deal with such heterogeneous objects that are composed of diverse materials and may have complex internal structures. Heterogeneous object modelling is still a very new and evolving research area, which is likely to prove useful in a wide range of application areas. Despite its great promise, heterogeneous object modelling is still at an embryonic state of development and there is a dearth of extant tools that would allow one to work with static and dynamic heterogeneous objects. In addition, the heterogeneous nature of the modelled objects makes it appealing to employ a combination of different representations resulting in the creation of hybrid models. In this thesis we present a new dynamic Implicit Complexes (IC) framework incorporating a number of existing representations and animation techniques. This framework can be used for the modelling of dynamic multidimensional heterogeneous objects. We then introduce an Implicit Complexes Application Programming Interface (IC API). This IC API is designed to provide various applications with a unified set of tools allowing these to model time-variant heterogeneous objects. We also present a new Function Representation (FRep) API, which is used for the integration of FReps into complex time-variant hybrid models. This approach allows us to create a practical multilevel modelling system suited for complex multidimensional hybrid modelling of dynamic heterogeneous objects. We demonstrate the advantages of our approach through the introduction of a novel set of tools tailored to problems encountered in simulation applications, computer animation and computer games. These new tools empower users and amplify their creativity by allowing them to overcome a large number of extant modelling and animation problems, which were previously considered difficult or even impossible to solve.
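As a loose illustration of the Function Representation idea the thesis builds on: an object is defined by a real-valued function that is non-negative inside the object and negative outside, and set operations and time-variant blends are themselves functions. The sketch below uses min/max as the simplest set-theoretic operations and a linear blend as a stand-in for a time-variant object; it is not the IC API or FRep API described in the thesis.

```python
# Toy FRep-style modelling: objects as defining functions f(x, y, z) >= 0.
import numpy as np

def sphere(cx, cy, cz, r):
    return lambda x, y, z: r**2 - ((x - cx)**2 + (y - cy)**2 + (z - cz)**2)

def union(f, g):          # simple set-theoretic union of two defining functions
    return lambda x, y, z: np.maximum(f(x, y, z), g(x, y, z))

def intersection(f, g):   # simple set-theoretic intersection
    return lambda x, y, z: np.minimum(f(x, y, z), g(x, y, z))

def morph(f, g, t):
    # A time-variant blend: the defining function depends on a parameter t,
    # which is one way a dynamic object can be described functionally.
    return lambda x, y, z: (1 - t) * f(x, y, z) + t * g(x, y, z)

shape = union(sphere(0, 0, 0, 1.0), sphere(1.2, 0, 0, 0.8))
print(shape(0.0, 0.0, 0.0) >= 0)   # True: the origin lies inside the union
```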
30

Requirement validation with enactable descriptions of use cases

Kanyaru, J. M. January 2006 (has links)
The validation of stakeholder requirements for a software system is a pivotal activity for any nontrivial software development project. Often, differences in knowledge regarding development issues, and knowledge regarding the problem domain, impede the elaboration of requirements amongst developers and stakeholders. A description technique that provides a user perspective of the system behaviour is likely to enhance shared understanding between the developers and stakeholders. The Unified Modelling Language (UML) use case is such a notation. Use cases describe the behaviour of a system (using natural language) in terms of interactions between the external users and the system. Since the standardisation of the UML by the Object Management Group in 1997, much research has been devoted to use cases. Some researchers have focussed on the provision of writing guidelines for use case specifications whereas others have focussed on the application of formal techniques. This thesis investigates the adequacy of the use case description for the specification and validation of software behaviour. In particular, the thesis argues that whereas the user-system interaction scheme underpins the essence of the use case notation, the UML specification of the use case does not provide a mechanism by which use cases can describe dependencies amongst constituent interaction steps. Clarifying these issues is crucial for validating the adequacy of the specification against stakeholder expectations. This thesis proposes a state-based approach (the Educator approach) to use case specification where constituent events are augmented with pre and post states to express both intra-use case and inter-use case dependencies. Use case events are enacted to visualise implied behaviour, thereby enhancing shared understanding among users and developers. Moreover, enaction provides an early "feel" of the behaviour that would result from the implementation of the specification. The Educator approach and the enaction of descriptions are supported by a prototype environment, the EducatorTool, developed to demonstrate the efficacy and novelty of the approach. To validate the work presented in this thesis an industrial study, involving the specification of realtime control software, is reported. The study involves the analysis of use case specifications of the subsystems prior to the application of the proposed approach, and the analysis of the specification where the approach and tool support are applied. This way, it is possible to determine the efficacy of the Educator approach within an industrial setting.
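The state-based idea behind the Educator approach, pre- and post-states attached to use case steps with enaction stepping through events whose pre-states hold, can be sketched as below. The ATM-style steps and state names are invented for illustration and are not taken from the EducatorTool or the industrial study.

```python
# Toy enaction of a use case whose steps carry pre- and post-states.
from dataclasses import dataclass

@dataclass
class Step:
    event: str
    pre: set      # states that must hold before the event can occur
    post: set     # states established once the event has occurred

def enact(steps, state):
    """Walk the steps, firing each event only when its pre-states hold."""
    for s in steps:
        if s.pre <= state:
            state |= s.post
            print(f"OK   {s.event:<16} state={sorted(state)}")
        else:
            print(f"STOP {s.event}: missing {sorted(s.pre - state)}")
            break
    return state

withdraw_cash = [
    Step("insert card",    set(),             {"card inserted"}),
    Step("enter PIN",      {"card inserted"}, {"authenticated"}),
    Step("request amount", {"authenticated"}, {"amount chosen"}),
    Step("dispense cash",  {"amount chosen"}, {"cash dispensed"}),
]
enact(withdraw_cash, set())
```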
