71

Black Comedy and the Principles of Screenwriting/The Actions

Maxwell, Nicholas Elliott, nmaxwel1@bigpond.net.au January 2008 (has links)
This exegesis researches and analyses the conventions of writing black comedy in a feature film script. As a screenwriter with a particular interest in black comedy, my aim is to explore the genre's technical structures in order to facilitate the writing of a tragicomic screenplay. The exegesis defines the components of black comedy, surveys its origins in theatre and literature, explores what comprises the middle ground between drama and humour, and positions the genre in relation to classical tragedy and comedy. It also examines the function of black comedy in relation to the psychology of the protagonist and the audience, and defines the characteristics of the genre in the context of screenwriting. The film adaptation of the renowned play Who's Afraid of Virginia Woolf? serves as a case study. The research informs the writing of the feature-length screenplay entitled The Actions.
72

Non-Iterative, Feature-Preserving Mesh Smoothing

Jones, Thouis R., Durand, Frédo, Desbrun, Mathieu 01 1900 (has links)
With the increasing use of geometry scanners to create 3D models, there is a rising need for fast and robust mesh smoothing to remove inevitable noise in the measurements. While most previous work has favored diffusion-based iterative techniques for feature-preserving smoothing, we propose a radically different approach, based on robust statistics and local first-order predictors of the surface. The robustness of our local estimates allows us to derive a non-iterative feature-preserving filtering technique applicable to arbitrary "triangle soups". We demonstrate its simplicity of implementation and its efficiency, which make it an excellent solution for smoothing large, noisy, and non-manifold meshes. / Singapore-MIT Alliance (SMA)
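In spirit, the method is a bilateral-style filter over a triangle soup: each vertex moves to a robust weighted average of first-order predictions made by nearby triangle planes. A minimal sketch of that idea in NumPy follows; the function name, the Gaussian weights, and the sigma parameters are illustrative choices rather than the authors' code, and it naively visits every triangle per vertex instead of using a spatial index.

    import numpy as np

    def smooth_vertices(verts, tris, sigma_s=0.5, sigma_p=0.2):
        """One non-iterative pass over a triangle soup (illustrative).

        verts: (V, 3) vertex positions; tris: (T, 3) vertex indices.
        sigma_s: spatial falloff; sigma_p: predictor (range) falloff.
        """
        p = verts[tris]                                  # (T, 3, 3) triangle corners
        centroids = p.mean(axis=1)
        n = np.cross(p[:, 1] - p[:, 0], p[:, 2] - p[:, 0])
        areas = 0.5 * np.linalg.norm(n, axis=1)
        n = n / (2.0 * areas[:, None] + 1e-12)           # unit normals

        out = np.empty_like(verts)
        for i, v in enumerate(verts):
            d = np.einsum('tj,tj->t', v - centroids, n)  # signed distance to each plane
            pred = v - d[:, None] * n                    # each plane's prediction of v
            w_spatial = np.exp(-np.sum((v - centroids) ** 2, axis=1) / (2 * sigma_s ** 2))
            w_robust = np.exp(-d ** 2 / (2 * sigma_p ** 2))  # down-weights outlier predictions
            w = areas * w_spatial * w_robust
            out[i] = (w[:, None] * pred).sum(axis=0) / (w.sum() + 1e-12)
        return out

The robust (range) weight is what preserves features: a triangle whose plane predicts a position far from the vertex is treated as an outlier and contributes little, so sharp edges are not averaged away.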
73

Improving Multi-class Text Classification with Naive Bayes

Rennie, Jason D. M. 01 September 2001 (has links)
There are numerous text documents available in electronic form. More and more are becoming available every day. Such documents represent a massive amount of information that is easily accessible. Seeking value in this huge collection requires organization; much of the work of organizing documents can be automated through text classification. The accuracy and our understanding of such systems greatly influence their usefulness. In this paper, we seek 1) to advance the understanding of commonly used text classification techniques, and 2) through that understanding, improve the tools that are available for text classification. We begin by clarifying the assumptions made in the derivation of Naive Bayes, noting basic properties and proposing ways for its extension and improvement. Next, we investigate the quality of Naive Bayes parameter estimates and their impact on classification. Our analysis leads to a theorem which gives an explanation for the improvements that can be found in multiclass classification with Naive Bayes using Error-Correcting Output Codes. We use experimental evidence on two commonly used data sets to exhibit an application of the theorem. Finally, we show fundamental flaws in a commonly used feature selection algorithm and develop a statistics-based framework for text feature selection. Greater understanding of Naive Bayes and the properties of text allows us to make better use of it in text classification.
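As a sketch of the baseline the paper builds on, a multinomial Naive Bayes text classifier over bag-of-words counts takes only a few lines with scikit-learn; the toy corpus and labels below are invented for illustration. (The Error-Correcting Output Codes combination discussed in the paper is available separately as sklearn.multiclass.OutputCodeClassifier.)

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Invented toy corpus; real use would plug in a large document collection.
    docs = ["the team won the match", "the election results are in",
            "a great goal in the final", "parliament passed the bill"]
    labels = ["sports", "politics", "sports", "politics"]

    # Bag-of-words counts feed the multinomial model; Laplace smoothing
    # (alpha=1) is the usual guard against zero-count words.
    clf = make_pipeline(CountVectorizer(), MultinomialNB(alpha=1.0))
    clf.fit(docs, labels)
    print(clf.predict(["the final match results"]))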
74

Multiclass Classification of SRBCTs

Yeo, Gene, Poggio, Tomaso 25 August 2001 (has links)
A novel approach to multiclass tumor classification using Artificial Neural Networks (ANNs) was introduced in a recent paper [Khan2001]. The method successfully classified and diagnosed small, round blue cell tumors (SRBCTs) of childhood into four distinct categories, neuroblastoma (NB), rhabdomyosarcoma (RMS), non-Hodgkin lymphoma (NHL), and the Ewing family of tumors (EWS), using cDNA gene expression profiles of samples that included both tumor biopsy material and cell lines. We report that using an approach similar to the one reported by Yeang et al. [Yeang2001], i.e. multiclass classification by combining the outputs of binary classifiers, we achieved equal accuracy with far fewer features. We report the performance of three binary classifiers (k-nearest neighbors (kNN), weighted voting (WV), and support vector machines (SVM)) with three feature selection techniques: Golub's signal-to-noise (SN) ratios [Golub99], Fisher scores (FSc), and Mukherjee's SVM feature selection (SVMFS) [Sayan98].
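The recipe — rank genes per binary split, keep the top few, and combine binary classifiers — can be sketched as follows with scikit-learn; the data shapes mimic the SRBCT set (63 samples, 2308 genes), but the values, the choice of a linear SVM, and the cutoff k are illustrative stand-ins.

    import numpy as np
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import LinearSVC

    def signal_to_noise(X, yb):
        """Golub-style S2N score per gene for a binary split yb in {0, 1}."""
        mu0, mu1 = X[yb == 0].mean(0), X[yb == 1].mean(0)
        sd0, sd1 = X[yb == 0].std(0), X[yb == 1].std(0)
        return (mu1 - mu0) / (sd0 + sd1 + 1e-12)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(63, 2308))      # 63 samples x 2308 genes (synthetic values)
    y = rng.integers(0, 4, size=63)      # four tumor classes (synthetic labels)

    # Keep the top-k genes per one-vs-rest split, then one linear SVM per class.
    k = 20
    keep = set()
    for c in range(4):
        scores = np.abs(signal_to_noise(X, (y == c).astype(int)))
        keep.update(np.argsort(scores)[-k:].tolist())
    keep = sorted(keep)

    clf = OneVsRestClassifier(LinearSVC()).fit(X[:, keep], y)
    print(len(keep), "genes kept;", clf.predict(X[:3, keep]))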
75

Wavelet based analysis of circuit breaker operation

Ren, Zhifang Jennifer 30 September 2004 (has links)
The circuit breaker is an important interrupting device in power system networks, with a typical lifetime of 20 to 40 years. During a breaker's service life, maintenance and inspection are imperative to ensure reliable operation. To automate diagnostics of circuit breaker operation and reduce the utility company's workload, wavelet-based analysis software for circuit breaker operation is developed here. Combined with a circuit breaker monitoring system, the analysis software processes the raw circuit breaker signals, speeds up the analysis, and provides a stable and consistent evaluation of circuit breaker operation.
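A hedged sketch of the wavelet step: decompose a breaker signal with a discrete wavelet transform and flag abrupt operating events where the finest-scale detail coefficients exceed a robust noise threshold. It uses PyWavelets; the synthetic coil-current trace, the db4 wavelet, and the threshold rule are illustrative stand-ins for the thesis's actual signals and settings.

    import numpy as np
    import pywt

    fs = 10_000                                   # 10 kHz sampling (illustrative)
    t = np.arange(0, 0.2, 1 / fs)
    # Synthetic trip-coil current: the contact closes at t = 50 ms.
    coil = (t > 0.05).astype(float)
    coil += 0.01 * np.random.default_rng(1).normal(size=t.size)

    # Multi-level DWT: abrupt operating events concentrate energy in the
    # finest-scale detail coefficients.
    coeffs = pywt.wavedec(coil, 'db4', level=4)
    d1 = coeffs[-1]                               # level-1 details
    thresh = 5 * np.median(np.abs(d1)) / 0.6745   # robust noise estimate (MAD)
    hits = np.nonzero(np.abs(d1) > thresh)[0]
    # Each level-1 coefficient spans ~2 samples, so index*2/fs is a rough time.
    print("event near t =", hits[0] * 2 / fs if hits.size else "none found")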
76

Observational Learning of a Bimanual Coordination Task: Understanding Movement Feature Extraction, Model Performance Level, and Perspective Angle

Dean, Noah J. 2009 December 1900 (has links)
One experiment was administered to address three issues central to identifying the processes that underlie our ability to learn through observation. One objective of the study was to identify the movement features (relative or absolute) extracted by an observer when demonstration acts as the training protocol. A second objective was to investigate how the performance level of the model providing the demonstrations (trial-to-trial variability in strategy selection) influences movement feature extraction. Lastly, a goal was to test whether or not the observer's visual perspective of the model (first-person, 1P, or third-person, 3P) interacts with the aforementioned variables. The goal of the task was to trace two circle templates with a 90-degree relative phase offset between the two hands. Video recordings of two models practicing over three days were used to make three videos for the study: an expert performance, a discovery performance, and an instruction performance video. The discovery video portrayed a decrease in relative phase error and a transition from high trial-to-trial variability in strategy selection to use of a single strategy. The instruction video also portrayed a decrease in relative phase error, but with no strategy search throughout practice. The expert video showed no strategy search, with trial-to-trial variability within 5% of the goal relative phase of 90 degrees across every trial. Observers watched one of the three video recordings from either a first-person or third-person perspective. In a retention test, the expert observers showed the most consistent capability (learning) in performing the goal phase. The instruction observers also showed learning, but to a lesser degree than the expert observers. The discovery observers showed the least learning of relative phase. The absolute feature of movement amplitude was not extracted by any observer group, a result consistent with postulations by Scully and Newell (1985). Observation from the 1P perspective proved optimal in the expert and instruction observation groups, but the 3P perspective allowed greater learning of the goal relative phase (90 degrees) in the discovery observation group. Hand lead, a relative feature of motion, was extracted by most observers, except those who observed the discovery model from the 3P perspective. It is concluded that trial-to-trial variability in strategy selection interacted with the process of mental rotation, which prevented the extraction of hand lead in those observers who viewed the discovery model.
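The task's central measure, continuous relative phase between the two hands, can be computed from the two trajectories via the Hilbert transform; a minimal sketch follows, with synthetic sinusoids standing in for the recorded circle-tracing data (the 1 Hz movement frequency and sampling rate are arbitrary choices).

    import numpy as np
    from scipy.signal import hilbert

    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    left = np.sin(2 * np.pi * 1.0 * t)                 # left hand, 1 Hz (synthetic)
    right = np.sin(2 * np.pi * 1.0 * t - np.pi / 2)    # right hand lagging 90 degrees

    # Instantaneous phase of each hand via the analytic signal.
    phi_left = np.unwrap(np.angle(hilbert(left)))
    phi_right = np.unwrap(np.angle(hilbert(right)))
    rel_phase = np.degrees(phi_left - phi_right)

    # Trim edges where the Hilbert transform is unreliable, then summarize.
    core = rel_phase[100:-100]
    print(f"mean relative phase: {core.mean():.1f} deg")   # ~90, the task goal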
77

Sparse Value Function Approximation for Reinforcement Learning

Painter-Wakefield, Christopher Robert January 2013 (has links)
A key component of many reinforcement learning (RL) algorithms is the approximation of the value function. The design and selection of features for approximation in RL is crucial, and an ongoing area of research. One approach to the problem of feature selection is to apply sparsity-inducing techniques in learning the value function approximation; such sparse methods tend to select relevant features and ignore irrelevant features, thus automating the feature selection process. This dissertation describes three contributions in the area of sparse value function approximation for reinforcement learning.

One method for obtaining sparse linear approximations is the inclusion in the objective function of a penalty on the sum of the absolute values of the approximation weights. This L1 regularization approach was first applied to temporal difference learning in the LARS-inspired, batch learning algorithm LARS-TD. In our first contribution, we define an iterative update equation which has as its fixed point the L1 regularized linear fixed point of LARS-TD. The iterative update gives rise naturally to an online stochastic approximation algorithm. We prove convergence of the online algorithm and show that the L1 regularized linear fixed point is an equilibrium fixed point of the algorithm. We demonstrate the ability of the algorithm to converge to the fixed point, yielding a sparse solution with modestly better performance than unregularized linear temporal difference learning.

Our second contribution extends LARS-TD to integrate policy optimization with sparse value learning. We extend the L1 regularized linear fixed point to include a maximum over policies, defining a new, "greedy" fixed point. The greedy fixed point adds a new invariant to the set which LARS-TD maintains as it traverses its homotopy path, giving rise to a new algorithm integrating sparse value learning and optimization. The new algorithm is demonstrated to be similar in performance to policy iteration using LARS-TD.

Finally, we consider another approach to sparse learning, that of using a simple algorithm that greedily adds new features. Such algorithms have many of the good properties of the L1 regularization methods, while also being extremely efficient and, in some cases, allowing theoretical guarantees on recovery of the true form of a sparse target function from sampled data. We consider variants of orthogonal matching pursuit (OMP) applied to RL. The resulting algorithms are analyzed and compared experimentally with existing L1 regularized approaches. We demonstrate that perhaps the most natural scenario in which one might hope to achieve sparse recovery fails; however, one variant provides promising theoretical guarantees under certain assumptions on the feature dictionary, while another variant empirically outperforms prior methods both in approximation accuracy and efficiency on several benchmark problems. / Dissertation
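To give the flavour of the first contribution, the sketch below runs a soft-thresholded iterative update whose fixed point is an L1-regularized linear TD fixed point (an ISTA-style iteration on batch TD quantities). This is an illustration in the spirit of the dissertation, not its actual update equation; the toy data, step size, and penalty are invented, and convergence here relies on the well-conditioned synthetic features.

    import numpy as np

    def l1_td_fixed_point(Phi, R, Phi_next, lam=0.2, gamma=0.95,
                          alpha=0.1, iters=2000):
        """Iterate w <- soft_threshold(w + alpha*(b - A w), alpha*lam);
        the fixed point is an L1-regularized linear TD fixed point."""
        A = Phi.T @ (Phi - gamma * Phi_next) / len(Phi)  # TD "design" matrix
        b = Phi.T @ R / len(Phi)
        w = np.zeros(Phi.shape[1])
        for _ in range(iters):
            z = w + alpha * (b - A @ w)
            w = np.sign(z) * np.maximum(np.abs(z) - alpha * lam, 0.0)
        return w

    # Toy batch: reward depends on 5 of 50 features; the rest are noise.
    rng = np.random.default_rng(0)
    Phi = rng.normal(size=(2000, 50))
    Phi_next = rng.normal(size=(2000, 50))
    R = Phi[:, :5].sum(axis=1) + 0.1 * rng.normal(size=2000)
    w = l1_td_fixed_point(Phi, R, Phi_next)
    print(int((np.abs(w) > 1e-10).sum()), "of 50 weights are nonzero")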
78

Segmentation and Line Filling of 2D Shapes

Pérez Rocha, Ana Laura 21 January 2013 (has links)
The evolution of technology in the textile industry has reached the design of embroidery patterns for machine embroidery. In order to create quality designs, the shapes to be embroidered need to be segmented into regions that define their different parts. One objective of our research is to develop a method to segment the shapes automatically, making the process faster and easier. Shape analysis is necessary to find a suitable method for this purpose; it includes the study of different ways to represent shapes. In this thesis we focus on shape representation through the skeleton. We make use of a shape's skeleton and its boundary, through the so-called feature transform, to decide how to segment a shape and where to place the segment boundaries. The direction of stitches is another important specification in an embroidery design. We develop a technique to select the stitch orientation by defining direction lines using the skeleton curves and information from the boundary. We compute the intersections of segment boundaries and direction lines with the shape boundary for the final definition of the direction line segments. We demonstrate that our shape segmentation technique and the automatic placement of direction lines produce sufficient constraints for automated embroidery designs. We show examples for lettering, basic shapes, and both simple and complex logos.
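The two ingredients the thesis leans on — a shape's skeleton and its feature transform (the nearest boundary point for each interior pixel) — can both be computed off the shelf; a minimal sketch with scikit-image and SciPy on a toy binary shape follows, with the rectangle and the printed radii purely illustrative.

    import numpy as np
    from scipy import ndimage
    from skimage.morphology import skeletonize

    shape = np.zeros((64, 64), dtype=bool)
    shape[16:48, 8:56] = True                    # a toy rectangular "logo"

    skel = skeletonize(shape)

    # distance_transform_edt with return_indices gives, for every pixel, the
    # coordinates of its nearest background (boundary) pixel -- the feature
    # transform used to place segment boundaries and direction lines.
    dist, (iy, ix) = ndimage.distance_transform_edt(shape, return_indices=True)
    ys, xs = np.nonzero(skel)
    for y, x in list(zip(ys, xs))[:3]:
        print(f"skeleton pixel ({y},{x}) -> nearest boundary "
              f"({iy[y, x]},{ix[y, x]}), radius {dist[y, x]:.1f}")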
79

A study on machine learning algorithms for fall detection and movement classification

Ralhan, Amitoz Singh 04 January 2010
Falls among the elderly are an important health issue. Fall detection and movement tracking techniques are therefore instrumental in dealing with this issue. This thesis responds to the challenge of classifying different movement types as part of a system designed to fulfill the need for a wearable device to collect data for fall and near-fall analysis. Four fall activities (forward, backward, left, and right), three normal activities (standing, walking, and lying down), and near-fall situations are identified and detected. Different machine learning algorithms are compared and the best one is used for real-time classification. The comparison is made using the Waikato Environment for Knowledge Analysis (WEKA). The system also has the ability to adapt to the gaits of different people. A feature selection algorithm is also introduced to reduce the number of features required for the classification problem.
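The compare-then-pick step can be sketched with scikit-learn standing in for WEKA; the feature matrix and eight activity labels below are synthetic placeholders (real input would be windowed accelerometer features), and the four classifiers are a plausible but illustrative line-up.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(800, 12))        # e.g. 12 accelerometer-derived features
    y = rng.integers(0, 8, size=800)      # 4 fall types + 3 normal + near-fall

    # Cross-validated accuracy for each candidate; the best would be deployed
    # for real-time classification.
    for name, clf in [("kNN", KNeighborsClassifier()),
                      ("NaiveBayes", GaussianNB()),
                      ("SVM", SVC()),
                      ("Tree", DecisionTreeClassifier())]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: {acc:.3f}")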
80

High-Level Intuitive Features (HLIFs) for Melanoma Detection

Amelard, Robert January 2013 (has links)
Feature extraction from segmented skin lesions is a pivotal step in implementing accurate decision support systems. Existing feature sets combine many ad-hoc calculations and are unable to easily provide intuitive diagnostic reasoning. This thesis presents the design and evaluation of a set of features for detecting melanoma objectively, intuitively, and accurately. We call these "high-level intuitive features" (HLIFs). The current clinical standard for detecting melanoma, the deadliest form of skin cancer, is visual inspection of the skin's surface. A widely adopted rule for detecting melanoma is the "ABCD" rule, whereby the doctor identifies the presence of Asymmetry, Border irregularity, Colour patterns, and Diameter. The adoption of specialized medical devices for this purpose is extremely slow due to the added temporal and financial burden. Therefore, recent research efforts have focused on detection support systems that analyse images of skin lesions acquired with standard consumer-grade cameras. The central benefit of these systems is the provision of technology with low barriers to adoption. Recently proposed skin lesion feature sets have been large sets of low-level features attempting to model the widely adopted ABCD criteria of melanoma. These result in high-dimensional feature spaces, which are computationally expensive and sparse due to the lack of available clinical data. It is difficult to convey diagnostic rationale using these feature sets due to their inherently ad-hoc mathematical nature. This thesis presents and applies a generic framework for designing HLIFs for decision support systems relying on intuitive observations. By definition, a HLIF is designed explicitly to model a human-observable characteristic such that the feature score can be intuited by the user. Thus, along with the classification label, visual rationale can be provided to further support the prediction. This thesis applies the HLIF framework to design 10 HLIFs for skin cancer detection following the ABCD rule; that is, HLIFs modeling asymmetry, border irregularity, and colour patterns are presented. The thesis evaluates the effectiveness of HLIFs in a standard classification setting. Using publicly available images obtained in unconstrained environments, the set of HLIFs is evaluated both in combination with and against a recently published low-level feature set. Since the focus is on evaluating the features, illumination correction and manually defined segmentations are used, along with a linear classification scheme. The promising results indicate that HLIFs capture more relevant information than low-level features, and that concatenating the HLIFs to the low-level feature set improves accuracy metrics. Visual information is provided to demonstrate the ability to convey intuitive diagnostic reasoning to the user.
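As an illustration of what an asymmetry HLIF can look like, the sketch below scores a binary lesion mask by how poorly it overlaps its own mirror image about an axis through its centroid; this is an invented example in the spirit of the "A" criterion, not the thesis's actual feature formula.

    import numpy as np

    def asymmetry_score(mask):
        """mask: 2D boolean lesion segmentation. Returns a value in [0, 1],
        where 0 means the lesion is symmetric about the centroid row."""
        ys, xs = np.nonzero(mask)
        cy = int(round(ys.mean()))
        # Reflect the top half about the horizontal axis through the centroid.
        top = mask[:cy, :][::-1]
        bottom = mask[cy:, :]
        h = min(top.shape[0], bottom.shape[0])
        inter = np.logical_and(top[:h], bottom[:h]).sum()
        union = np.logical_or(top[:h], bottom[:h]).sum()
        return 1.0 - inter / max(union, 1)

    # A circular "lesion" should score near 0 (highly symmetric).
    circle = np.add.outer(np.arange(-32, 32) ** 2, np.arange(-32, 32) ** 2) < 20 ** 2
    print(f"circle asymmetry: {asymmetry_score(circle):.2f}")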
