1 |
EBKAT : an explanation-based knowledge acquisition tool. Wusteman, Judith. January 1990.
No description available.
|
2 |
Using machine-learning to efficiently explore the architecture/compiler co-design space. Dubach, Christophe. January 2009.
Designing new microprocessors is a time-consuming task. Architects rely on slow simulators to evaluate performance, and a significant proportion of the design space has to be explored before an implementation is chosen. This process becomes even more time-consuming when compiler optimisations are also considered: once the architecture is selected, a new compiler must be developed and tuned. What is needed are techniques that can speed up this whole process and develop a new optimising compiler automatically. This thesis proposes the use of machine-learning techniques to address architecture/compiler co-design. First, two performance models are developed and used to efficiently search the design space of a microarchitecture. These models accurately predict performance metrics such as cycles or energy, or a trade-off of the two. The first model uses just 32 simulations to model the entire design space of new applications, an order of magnitude fewer than state-of-the-art techniques. The second model addresses offline training costs and predicts the average behaviour of a complete benchmark suite. Compared to the state-of-the-art, it needs five times fewer training simulations when applied to the SPEC CPU 2000 and MiBench benchmark suites. Next, the impact of compiler optimisations on the design process is considered. This has the potential to change the shape of the design space and improve performance significantly. A new model is proposed that predicts the performance obtainable by an optimising compiler for any design point, without having to build the compiler. Compared to the state-of-the-art, this model achieves a significantly lower error rate. Finally, a new machine-learning optimising compiler is presented that predicts the best compiler optimisation setting for any new program on any new microarchitecture. It achieves an average speedup of 1.14x over the default best gcc optimisation level. This represents 61% of the maximum speedup available, using just one profile run of the application.
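As a rough illustration of the first idea, a regression model trained on a small sample of simulated design points can predict a performance metric across the rest of the space. This is a minimal sketch only: the feature set, the stand-in cost function, and the random-forest model are assumptions for illustration, not the models developed in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Each design point: (issue width, ROB size, L1 size in KB, L2 size in KB).
design_space = rng.integers(low=[1, 32, 8, 256],
                            high=[8, 256, 64, 4096],
                            size=(1000, 4))

def simulate(point):
    """Stand-in for a slow cycle-accurate simulator (hypothetical cost model)."""
    width, rob, l1, l2 = point
    return 1e9 / (width * np.log2(rob) * np.log1p(l1) * np.log1p(l2))

# Train on just 32 sampled simulations, as in the first model described above.
train_idx = rng.choice(len(design_space), size=32, replace=False)
X_train = design_space[train_idx]
y_train = np.array([simulate(p) for p in X_train])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predict the metric for every other design point without simulating it.
predicted = model.predict(design_space)
best = design_space[np.argmin(predicted)]
print("Predicted best design point:", best)
```

The payoff in this setting is that only the sampled points (here 32) incur the cost of slow simulation; every other design point is evaluated through the learned model.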
|
3 |
Theories Contrasted: Rudy's Variability in the Associative Process (V.A.P.) and Martin's Encoding Variability. Fuhr, Susan R. 12 1900.
A paired-associate list of three-word stimuli and one-word responses comprised the first list of an A-B, A-Br paradigm. Each of the three words from the first-list three-word stimuli was singly re-paired with first-list responses to make up three of the second-list conditions. The fourth second-list condition used the full first-list stimuli plus re-paired first-list responses. The results were that: (a) nine of the sixteen subjects spontaneously shifted encoding cues from the first to the second list, (b) evidence of significantly greater negative transfer occurred only in the A-B, A₁₂₃-Br condition, and (c) although the effect did not reach significance, across all A-Br conditions there were more errors on second-list learning for those not shifting encoding cues from the first to the second list. For those who did shift, performance was only slightly lower than in the A-B, C-B control condition. Neither the encoding variability nor the associative variability theory was entirely supported. A gestalt interpretation was suggested.
|
4 |
Nonlinear mixed effects models for longitudinal data. Mahbouba, Raid. January 2015.
The main objective of this master thesis is to explore the effectiveness of nonlinear mixed effects models for longitudinal data. Mixed effects models make it possible to investigate the nature of the relationship between time-varying covariates and the response while also capturing the variation between subjects. I investigate the robustness of the longitudinal models by building up the complexity of the models, starting from multiple linear models and ending with additive nonlinear mixed models. I use a dataset in which firms' leverage is explained by four explanatory variables in addition to a grouping factor, the firm. The models are compared using statistics such as AIC and BIC and by a visual inspection of residuals; a likelihood ratio test is used for nested models only. The models are estimated by maximum likelihood and restricted maximum likelihood estimation. The most efficient model is the nonlinear mixed effects model, which has the lowest AIC and BIC. The multiple linear regression model failed to explain the relation and produced unrealistic statistics.
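A minimal sketch of this model-building progression follows, using a linear mixed effects model with a random intercept per firm as a simplified stand-in for the nonlinear and additive variants; the variable names and simulated data are assumptions, not the thesis dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_firms, n_years = 50, 8
firm = np.repeat(np.arange(n_firms), n_years)
firm_effect = rng.normal(0, 0.5, n_firms)[firm]   # random intercept per firm
profit = rng.normal(0, 1, n_firms * n_years)
size = rng.normal(0, 1, n_firms * n_years)
leverage = (0.3 - 0.2 * profit + 0.1 * size + firm_effect
            + rng.normal(0, 0.2, n_firms * n_years))
data = pd.DataFrame({"leverage": leverage, "profit": profit,
                     "size": size, "firm": firm})

# Baseline: multiple linear regression, ignoring the firm grouping.
ols = smf.ols("leverage ~ profit + size", data).fit()

# Mixed effects: a random intercept for each firm, fit by maximum likelihood.
mixed = smf.mixedlm("leverage ~ profit + size", data,
                    groups=data["firm"]).fit(reml=False)

# Compare by AIC (computed manually for the mixed model for portability).
k = len(mixed.params)
mixed_aic = -2 * mixed.llf + 2 * k
print(f"OLS AIC: {ols.aic:.1f}  Mixed-model AIC: {mixed_aic:.1f}")
```

In the same spirit, the thesis compares progressively richer models and prefers the one with the lowest AIC and BIC.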
|
5 |
South African Sign Language Hand Shape and Orientation Recognition on Mobile Devices Using Deep Learning. Jacobs, Kurt. January 2017.
Magister Scientiae - MSc. In order to classify South African Sign Language as a signed gesture, five fundamental parameters need to be considered: hand shape, hand orientation, hand motion, hand location and facial expressions. The research in this thesis utilises Deep Learning techniques, specifically Convolutional Neural Networks, to recognise hand shapes in various hand orientations. The research focuses on two of the five fundamental parameters, i.e., recognising six South African Sign Language hand shapes for each of five different hand orientations. These hand shape and orientation combinations are recognised by means of a video stream captured on a mobile device. The efficacy of Convolutional Neural Networks for gesture recognition is judged with respect to classification accuracy and classification speed in both a desktop and an embedded context. The research methodology employed was Design Science Research, a set of analytical techniques and perspectives for performing research in Information Systems and Computer Science; it necessitates the design of an artefact and its analysis in order to better understand its behaviour in context. Funded by the National Research Foundation (NRF).
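As a hedged sketch of the kind of network involved, a small Convolutional Neural Network classifying the thirty hand shape/orientation combinations might be defined as follows; the input resolution, layer sizes, and training setup are assumptions for illustration, not the architecture used in the thesis.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 6 * 5  # six hand shapes in each of five orientations

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),          # grayscale video frames
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                      # regularisation for a small dataset
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_frames, train_labels, epochs=10)  # given labelled frame data
```

On a mobile device such a model would typically be trained offline and deployed in a reduced form, which is where the desktop versus embedded comparison of accuracy and classification speed becomes relevant.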
|
6 |
Machine Learning-Based Ontology Mapping Tool to Enable Interoperability in Coastal Sensor Networks. Bheemireddy, Shruthi. 11 December 2009.
In today's world, ontologies are widely used for data integration tasks and for solving information heterogeneity problems on the web, because of their capability to provide explicit meaning to information. The growing need to resolve the heterogeneities between different information systems within a domain of interest has led to the rapid development of individual ontologies by different organizations. These ontologies, each designed for a particular task, can be a unique representation of their project's needs. Thus, integrating distributed and heterogeneous ontologies by finding semantic correspondences between their concepts has become the key to achieving interoperability among different representations. In this thesis, an advanced instance-based ontology matching algorithm is proposed to enable data integration tasks in ocean sensor networks, whose data are highly heterogeneous in syntax, structure, and semantics. This provides a solution to the ontology mapping problem in such systems based on machine-learning methods and string-based methods.
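To make the instance-based idea concrete, here is a minimal sketch in which concepts from two sensor ontologies are aligned by comparing the string instances attached to them; the concept names, instance values, and similarity measure are invented for illustration and are not the thesis's algorithm.

```python
from difflib import SequenceMatcher
from itertools import product

ontology_a = {
    "SeaSurfaceTemperature": ["sst 18.2 C", "sst 19.1 C", "sea surface temp"],
    "WaveHeight": ["wave height 1.2 m", "significant wave height"],
}
ontology_b = {
    "SST": ["sea surface temp 18.2", "sst reading"],
    "Hs": ["significant wave height 1.3 m"],
}

def instance_similarity(instances_a, instances_b):
    """Average best string similarity between two bags of instance values."""
    scores = []
    for a in instances_a:
        best = max(SequenceMatcher(None, a, b).ratio() for b in instances_b)
        scores.append(best)
    return sum(scores) / len(scores)

# Score every cross-ontology concept pair; the best-scoring pairs
# form the proposed mapping.
for concept_a, concept_b in product(ontology_a, ontology_b):
    score = instance_similarity(ontology_a[concept_a], ontology_b[concept_b])
    print(f"{concept_a:>24} <-> {concept_b:<4} similarity={score:.2f}")
```

A machine-learning matcher would typically combine several such similarity scores as features and learn a classifier or threshold over them, rather than relying on a single measure.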
|
7 |
A Java-based Smart Object Model for use in Digital Learning Environments. Pushpagiri, Vara Prashanth. 16 October 2003.
The last decade has seen the scope of digital library usage extend from data warehousing and other common library services to building quality collections of electronic resources and providing web-based information retrieval mechanisms for distributed learning. This is clear from the number of ongoing research initiatives aiming to provide dynamic learning environments.
A major task in providing learning environments is to define a resource model (learning object). The flexibility of the learning object model determines the quality of the learning environment. Further, dynamic environments can be realized by changing the contents and structure of the learning object, i.e., by making it mutable. Most existing models are immutable after creation and require the library to support operations that help in creating these environments. This leaves the learning object at the mercy of the parent library's functionality. This thesis work extends an existing model and allows a learning object to function independently of the operational constraints of a digital library by equipping learning objects with software components, called methods, that influence their operation and structure even after deployment. It provides a reference implementation of an aggregate, intelligent, self-sufficient, object-oriented, platform-independent learning object model that conforms to popular digital library standards.
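The model itself is Java-based; the following is a hedged, language-agnostic sketch (in Python, with invented names) of the core idea: a learning object whose behaviour can be extended after deployment by attaching or removing methods.

```python
class SmartObject:
    def __init__(self, metadata, content):
        self.metadata = metadata          # descriptive fields, e.g. title
        self.content = content            # aggregated learning resources
        self._methods = {}                # runtime-attachable behaviours

    def attach(self, name, func):
        """Attach a behaviour after deployment, making the object mutable."""
        self._methods[name] = func

    def detach(self, name):
        self._methods.pop(name, None)

    def invoke(self, name, *args, **kwargs):
        if name not in self._methods:
            raise KeyError(f"no method {name!r} attached")
        return self._methods[name](self, *args, **kwargs)

# Example: add a behaviour without touching the hosting library.
lo = SmartObject({"title": "Intro to Compilers"}, ["lecture1.html"])
lo.attach("summary",
          lambda obj: f"{obj.metadata['title']}: {len(obj.content)} item(s)")
print(lo.invoke("summary"))
```

Because the behaviours travel with the object, the hosting digital library only needs to invoke them rather than implement them, which is what frees the object from the library's operational constraints.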
The thesis also presents a Java-based development tool for creating and modifying smart objects. It is capable of performing content aggregation, metadata harvesting and user repository maintenance operations, in addition to supporting the addition and removal of methods on a smart object. The current smart object implementation and the development tool have been deployed successfully on two platforms (Windows and Linux), where their operation was found to be satisfactory. Master of Science.
|
8 |
Effective harp pedagogy - A Study of Techniques, Physical and Mental. Huang, Jo-Ying Angela. January 2011.
This study examines the techniques required to play the modern concert harp effectively. Following a study of the main harp performing methods and an examination of the most popular instructional books published in recent times, it explores and analyses the practice techniques of harp playing. It identifies current practice techniques in music generally, and considers ways in which these may be incorporated into the learning of the harp. A number of musical excerpts are selected as the bases of specific practice plans designed to demonstrate how physical and mental techniques may be combined to support accurate and musical harp playing. The practice techniques and plans are assessed and supported by referring them to teachers and senior students. These research participants provided useful information about their own learning experiences and observations on the place that technical studies played in the growth of their own performance skills.
|
9 |
3D facial feature extraction and recognition : an investigation of 3D face recognition : correction and normalisation of the facial data, extraction of facial features and classification using machine learning techniques. Al-Qatawneh, Sokyna M. S. January 2010.
Face recognition research using automatic or semi-automatic techniques has emerged over the last two decades. One reason for the growing interest in this topic is the wide range of possible applications for face recognition systems. Another is the emergence of affordable hardware supporting digital photography and video, which has made the acquisition of high-quality, high-resolution 2D images much more ubiquitous. However, 2D recognition systems are sensitive to subject pose and illumination variations, and 3D face recognition, which is not directly affected by such environmental changes, could be used alone or in combination with 2D recognition. Recently, with the development of more affordable 3D acquisition systems and the availability of 3D face databases, 3D face recognition has been attracting interest as a way to tackle the performance limitations of most existing 2D systems. In this research, we introduce a robust automated 3D face recognition system that processes 3D data of faces with different facial expressions, hair, shoulders, clothing, etc., extracts features for discrimination, and uses machine learning techniques to make the final decision. A novel system for the automatic processing of 3D facial data has been implemented using a multi-stage architecture: in a pre-processing and registration stage the data were standardized, spikes were removed, holes were filled and the face area was extracted. Then the nose region, which is relatively more rigid than other facial regions in an anatomical sense, was automatically located and analysed by computing the precise location of the symmetry plane. Next, useful facial features and a set of effective 3D curves were extracted. Finally, the recognition and matching stage was implemented using cascade correlation neural networks and support vector machines for classification, and nearest neighbour algorithms for matching. It is worth noting that the FRGC data set is the most challenging data set available for research on 3D face recognition, and machine learning techniques are widely recognised as appropriate and efficient classification methods.
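As a sketch of the final stage only, feature vectors extracted per subject can be classified with a support vector machine and matched with a nearest-neighbour search; the synthetic features and parameters below are placeholders, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n_subjects, samples_per_subject, n_features = 10, 5, 40

# Synthetic stand-in for per-subject feature vectors from 3D facial curves.
centers = rng.normal(0, 1, (n_subjects, n_features))
X = np.vstack([c + rng.normal(0, 0.1, (samples_per_subject, n_features))
               for c in centers])
y = np.repeat(np.arange(n_subjects), samples_per_subject)

svm = SVC(kernel="rbf").fit(X, y)              # classification stage
knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)  # matching stage

# A probe scan of subject 3, perturbed to mimic expression/noise variation.
probe = centers[3] + rng.normal(0, 0.1, n_features)
print("SVM identifies subject:", svm.predict([probe])[0])
print("1-NN identifies subject:", knn.predict([probe])[0])
```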
|
10 |
Reducing the cost of heuristic generation with machine learning. Ogilvie, William Fraser. January 2018.
The space of compile-time transformations and/or run-time options which can improve the performance of a given code is usually so large as to be virtually impossible to search in any practical time-frame. Thus, heuristics are leveraged which can suggest good, but not necessarily best, configurations. Unfortunately, since such heuristics are tightly coupled to processor architecture, performance is not portable; heuristics must be tuned, traditionally manually, for each device in turn. This is extremely laborious and the result is often outdated heuristics and less effective optimisation. Ideally, to keep up with changes in hardware and run-time environments, a fast and automated method to generate heuristics is needed. Recent work has shown that machine learning can be used to produce mathematical models or rules in their place, which is automated but not necessarily fast. This thesis proposes the use of active machine learning, sequential analysis, and active feature acquisition to accelerate the training process in an automatic way, thereby tackling this timely and substantive issue. First, a demonstration of the efficiency of active learning over the previously standard supervised machine-learning technique is presented in the form of an ensemble algorithm. This algorithm learns a model capable of predicting the best processing device in a heterogeneous system to use per workload size, per kernel. Active machine learning is a methodology which is sensitive to the cost of training; specifically, it is able to reduce the time taken to construct a model by predicting how much is expected to be learnt from each new training instance and then choosing to learn only from the most profitable examples. The exemplar heuristic is constructed on average 4x faster than with a baseline approach, whilst maintaining comparable quality. Next, a combination of active learning and sequential analysis is presented which reduces both the number of samples per training example and the number of training examples overall. This allows models to be built from noisy information, sacrificing accuracy per training instance for speed, without a significant effect on the quality of the final product. In particular, the runtime of high-performance compute kernels is predicted from the code transformations one may want to apply, using a heuristic which was generated up to 26x faster than with active learning alone. Finally, preliminary work demonstrates that an automated system can be created which optimises both the number of training examples and which features to select during training, to further substantially accelerate learning in cases where each feature value that is revealed comes at some cost.
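A minimal sketch of the active learning loop at the heart of this approach appears below; the pool of configurations, the stand-in profiling function, and the use of ensemble variance as the uncertainty measure are illustrative assumptions, not the thesis's exact method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
pool = rng.uniform(0, 1, (2000, 5))   # unlabelled transformation settings

def profile(x):
    """Stand-in for an expensive profiling run of a compute kernel."""
    return np.sin(3 * x[0]) + x[1] ** 2 + 0.1 * rng.normal()

# Seed with a few random labelled examples, then query actively.
labelled_idx = list(rng.choice(len(pool), size=10, replace=False))
labels = {i: profile(pool[i]) for i in labelled_idx}

for _ in range(40):                   # 40 queries instead of 2000 profiling runs
    X = pool[labelled_idx]
    y = np.array([labels[i] for i in labelled_idx])
    forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    # Uncertainty: variance of the individual trees' predictions over the pool.
    per_tree = np.stack([t.predict(pool) for t in forest.estimators_])
    uncertainty = per_tree.var(axis=0)
    uncertainty[labelled_idx] = -np.inf   # never re-query a labelled point
    query = int(np.argmax(uncertainty))
    labelled_idx.append(query)
    labels[query] = profile(pool[query])

print(f"Model trained with {len(labelled_idx)} profiles out of {len(pool)}")
```

The saving is that only the queried points pay the cost of a real profiling run, which is the economy the reported 4x and 26x training speedups exploit.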
|