  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A novel method for finding small highly discriminant gene sets

Gardner, Jason H. 15 November 2004
In a typical microarray classification problem there are many genes, on the order of thousands, and few samples, on the order of tens. This necessitates a massive feature space reduction before classification can take place. While much time and effort has gone into evaluating and comparing the performance of different classifiers, less thought has been given to the problem of efficient feature space reduction. The microarray classification literature contains several widely used heuristic feature reduction algorithms that will indeed find small feature subsets to classify over. These methods work in a broad sense, but we find that they often require too much computation, find overly large gene sets, or do not generalize well. Therefore, we believe that a systematic study of feature reduction, as it relates to microarray classification, is in order. In this thesis we review current feature space reduction algorithms and propose a new, mixed model algorithm. This mixed model algorithm uses the best aspects of filter algorithms and the best aspects of wrapper algorithms to find very small yet highly discriminant gene sets. We also discuss methods to evaluate alternate, ambiguous gene sets. Applying our new mixed model algorithm to several published datasets, we find that it outperforms current gene finding methods.
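A minimal sketch of the kind of filter-plus-wrapper hybrid described above, assuming a univariate t-like statistic for the filter stage and greedy cross-validated forward selection for the wrapper stage; the synthetic data and scoring choices are illustrative assumptions, not the thesis's exact algorithm.

```python
# Hedged sketch: filter-then-wrapper gene selection, not the thesis's method.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def filter_rank(X, y):
    """Rank genes by a simple two-class t-like statistic (filter step)."""
    a, b = X[y == 0], X[y == 1]
    score = np.abs(a.mean(0) - b.mean(0)) / np.sqrt(a.var(0) / len(a) + b.var(0) / len(b) + 1e-12)
    return np.argsort(score)[::-1]

def greedy_wrapper(X, y, candidates, max_genes=10):
    """Greedily add genes from the filtered pool while CV accuracy improves (wrapper step)."""
    selected, best = [], 0.0
    for g in candidates:
        trial = selected + [g]
        acc = cross_val_score(LinearSVC(dual=False), X[:, trial], y, cv=5).mean()
        if acc > best:
            selected, best = trial, acc
        if len(selected) >= max_genes:
            break
    return selected, best

# Usage with synthetic data standing in for a microarray matrix (samples x genes).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))
y = np.repeat([0, 1], 30)
X[y == 1, :5] += 1.5            # five informative "genes"
genes, acc = greedy_wrapper(X, y, filter_rank(X, y)[:50])
print(genes, round(acc, 3))
```

The filter stage keeps the computation tractable by shrinking the candidate pool before the more expensive cross-validated wrapper search runs.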
12

A note on difference spectra for fast extraction of global image information

Van Wyk, BJ, Van Wyk, MA, Van den Bergh, F 01 June 2007
The concept of an Image Difference Spectrum, a novel tool for the extraction of global image information, is introduced. It is shown that Image Difference Spectra are fast alternatives to granulometric curves, also referred to as pattern spectra. Image Difference Spectra are computationally easy to implement and are suitable for real-time applications.
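For context, a minimal sketch of the classical granulometric pattern spectrum that Image Difference Spectra are offered as a fast alternative to; the paper's own difference-spectrum construction is not reproduced here, and the image is synthetic.

```python
# Hedged sketch: a standard grey-scale pattern spectrum via successive openings.
import numpy as np
from scipy.ndimage import grey_opening

def pattern_spectrum(image, max_size=8):
    """Image 'mass' removed by openings with structuring elements of increasing size."""
    areas = [grey_opening(image, size=(s, s)).sum() for s in range(1, max_size + 1)]
    # Successive differences: how much of the image lives at each scale.
    return -np.diff([image.sum()] + areas)

rng = np.random.default_rng(1)
img = rng.random((128, 128))
print(pattern_spectrum(img))
```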
13

Quantitative data validation (automated visual evaluations)

Martin, Anthony John Michael January 1999
Historically, validation has been performed on a case study basis employing visual evaluations, gradually inspiring confidence through continual application. At present, the method of visual evaluation is the most prevalent form of data analysis, as the brain is the best pattern recognition device known. However, the human visual/perceptual system is a complicated mechanism, prone to many types of physical and psychological influences. Fatigue is a major source of inaccuracy within the results of subjects performing complex visual evaluation tasks, whilst physical and experiential differences, along with age, have an enormous bearing on the visual evaluation results of different subjects. It is to this end that automated methods of validation must be developed to produce repeatable, quantitative and objective verification results. This thesis details the development of the Feature Selective Validation (FSV) method. The FSV method comprises two component measures based on amplitude differences and feature differences. These measures are combined, employing a measured level of subjectivity, to form an overall assessment of the comparison in question, or global difference. The three measures within the FSV method are strengthened by statistical analysis in the form of confidence levels based on amplitude, feature or global discrepancies between compared signals. Highly detailed diagnostic information on the location and magnitude of discrepancies is also made available through the employment of graphical (discrete) representations of the three measures. The FSV method also benefits from the ability to mirror human perception, whilst producing information which directly relates human variability and the confidence associated with it. The FSV method builds on the common language of engineers and scientists alike, employing categories which relate to human interpretations of comparisons, namely: 'ideal', 'excellent', 'very good', 'good', 'fair', 'poor' and 'extremely poor'.
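A heavily simplified, hedged sketch of FSV-style measures: an amplitude term compared point by point, a feature term based on derivatives as a stand-in for feature content, and a combined global term. The standardized FSV method uses Fourier-filtered signal components and confidence histograms that are not reproduced here.

```python
# Hedged, simplified FSV-like comparison of two 1-D responses.
import numpy as np

def fsv_like(a, b):
    eps = 1e-12
    adm = np.abs(np.abs(a) - np.abs(b)) / (np.mean(np.abs(a) + np.abs(b)) + eps)   # amplitude term
    da, db = np.gradient(a), np.gradient(b)
    fdm = np.abs(da - db) / (np.mean(np.abs(da) + np.abs(db)) + eps)               # feature term
    gdm = np.sqrt(adm**2 + fdm**2)                                                 # combined global term
    return adm.mean(), fdm.mean(), gdm.mean()

x = np.linspace(0, 10, 500)
measured, simulated = np.sin(x), np.sin(x) + 0.1 * np.cos(5 * x)
print(fsv_like(measured, simulated))
```

Lower values of the three means indicate a better comparison, which is then mapped onto qualitative categories such as 'excellent' or 'poor'.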
14

Comparison of Salient Feature Descriptors

Farzaneh, Sara January 2008
In robot navigation and image content searches, reliable salient features are of pivotal importance. In biometric human recognition, too, salient features are increasingly used. Regardless of the application, image matching is one of the many problems in computer vision, including object recognition. This report investigates some salient features for matching sub-images of different images. An underlying assumption is that sub-images, also called image objects, or objects, can be recognized by salient features that can be detected independently. Since image objects are images of 3D objects, the salient features in 2D images must be invariant to reasonably large changes in viewing direction and distance (scale). These changes are typically due to 3D rotations and translations of the 3D object with respect to the camera. Other changes that influence the matching of two 2D image objects are illumination changes and image acquisition noise. This thesis discusses how to find the salient features and compares them with respect to their matching performance. It also explores how invariant these features are to rotation and scaling.
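A small illustrative sketch of matching one salient feature descriptor (ORB, via OpenCV) between two images; the thesis compares several descriptors, and the image file names here are placeholders.

```python
# Hedged sketch: detect and match binary ORB descriptors between two images.
import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-checking keeps only
# mutually nearest matches, a crude robustness filter against outliers.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative matches; best distance {matches[0].distance:.1f}")
```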
15

Feature-Based Mesh Simplification With Quadric Error Metric Using A Line Simplification Algorithm

Falcon Lins, Rafael Jose 26 August 2010
Mesh simplification is an important task in Computer Graphics due to the ever-increasing complexity of polygonal geometric models. Specifically in real-time rendering, these models, which can be acquired through 3D scanning or through artistic conception, have to be simplified or optimized to be rendered on today's hardware while losing as little detail as possible. This thesis proposes a mesh simplification algorithm that works by identifying and simplifying features first, and then simplifying the remaining mesh with the simplified features frozen. The algorithm is called Quadric Error with Feature Curves (QEFC). Quadric Error with Feature Curves works as a tool that allows the user to interactively select a percentage of the most important points of the feature curves to be preserved along with the points determined by the Quadric Error Metric algorithm.
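A minimal sketch of the quadric error metric underlying QEFC: each triangle incident on a vertex contributes a plane quadric, and the cost of placing the vertex at a candidate position v is v^T Q v. The feature-curve selection and freezing steps from the thesis are not shown.

```python
# Hedged sketch of the quadric error metric (Garland-Heckbert style).
import numpy as np

def plane_quadric(p0, p1, p2):
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, p0)
    plane = np.append(n, d)                  # [a, b, c, d] with ax + by + cz + d = 0
    return np.outer(plane, plane)            # 4x4 fundamental error quadric

def collapse_cost(Q, v):
    vh = np.append(v, 1.0)                   # homogeneous coordinates
    return float(vh @ Q @ vh)

# Vertex shared by two coplanar triangles: zero cost in place, nonzero if moved off the plane.
tris = [(np.zeros(3), np.array([1., 0, 0]), np.array([0., 1, 0])),
        (np.zeros(3), np.array([0., 1, 0]), np.array([-1., 0, 0]))]
Q = sum(plane_quadric(*t) for t in tris)
print(collapse_cost(Q, np.zeros(3)), collapse_cost(Q, np.array([0., 0, 0.5])))
```

Edge collapses are ordered by this cost, so flat regions simplify first while high-curvature detail survives longer.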
16

Matching of image features and vector objects to automatically correct spatial misalignment between image and vector data sets

O'Donohue, Daniel Gerard January 2010
Direct georeferencing of aerial imagery has the potential to meet escalating demand for image data sets of increasingly higher temporal and spatial resolution. However, variability in the spatial accuracy of the resulting images may severely limit the use of this technology in operations involving other data sets. Spatial misalignment between data sets can be corrected manually; however, an automated solution is preferable given the volume of data involved. This research has developed and tested an automated custom solution to the spatial misalignment between directly georeferenced aerial thermal imagery and vector data representing building outlines. The procedure uses geometric matches between image features and vector objects to relate pixel locations to geographic coordinates. The results suggest that the concept is valid and capable of significantly improving the spatial accuracy of directly georeferenced aerial imagery.
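A hedged sketch of the final registration step described above: once geometric matches pair pixel locations with geographic coordinates, an affine transform can be fitted by least squares and used to re-map the image. The matched points below are placeholders, not data from the thesis.

```python
# Hedged sketch: fit an affine pixel-to-geographic transform from matched pairs.
import numpy as np

def fit_affine(pixels, geo):
    """Solve geo ~ A @ [px, py, 1] for a 2x3 affine matrix A by least squares."""
    P = np.column_stack([pixels, np.ones(len(pixels))])        # N x 3 design matrix
    A, *_ = np.linalg.lstsq(P, geo, rcond=None)                # 3 x 2 solution
    return A.T                                                 # 2 x 3 affine matrix

# Placeholder matches between image pixels and geographic coordinates.
pixels = np.array([[10, 20], [400, 30], [380, 500], [15, 480]], float)
geo = np.array([[172.61, -43.52], [172.65, -43.52], [172.65, -43.56], [172.61, -43.56]], float)
A = fit_affine(pixels, geo)
mapped = np.column_stack([pixels, np.ones(4)]) @ A.T
print(np.abs(mapped - geo).max())          # residual of the fitted registration
```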
17

Modeling spatial variation of data quality in databases

Mohamed Ghouse, S. M. Z. S. January 2008
The spatial data community relies on the quality of its data. This research investigates new ways of storing and retrieving spatial data quality information in databases. Given the importance of feature- and sub-feature-level variation, three different data quality models of spatial variation in quality have been identified and defined: per-feature, feature-independent and feature-hybrid. In the per-feature model, quality information is stored against each feature. In the feature-independent model, quality information is independent of the feature. The feature-hybrid model is derived from a combination of the other two. Each model of spatial variation differs in its representational and querying capabilities, and no single model is entirely superior in storing and retrieving spatially varying quality. Hence, an integrated data model called RDBMS for Spatial Variation in Quality (RSVQ) was developed by integrating the per-feature, feature-independent and feature-hybrid data quality models. The RSVQ data model provides flexible representation of SDQ, which can be stored alongside individual features or parts of features in the database, or as an independent spatial data layer.

The thesis reports on how the Oracle 10g Spatial RDBMS was used to implement this model. An investigation into different querying mechanisms resulted in the development of a new WITHQUALITY keyword as an extension to SQL. The WITHQUALITY keyword has been designed so that it can perform automatic query optimization, which leads to faster retrieval of quality information compared to the existing query mechanism. A user interface was built using Oracle Forms 10g which enables the user to perform single and multiple queries, in addition to conversion between models (for example, per-feature to feature-independent). The evaluation, which includes an industry case study, shows how these techniques can improve the spatial data community's ability to represent and record data quality information.
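A hedged sketch of the per-feature model only, using SQLite for portability; the thesis's implementation targets Oracle 10g Spatial with a WITHQUALITY SQL extension whose exact syntax is not reproduced, and the schema and values below are illustrative assumptions.

```python
# Hedged sketch: a per-feature quality table queried with plain SQL.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE feature (id INTEGER PRIMARY KEY, name TEXT, geometry TEXT);
CREATE TABLE feature_quality (            -- per-feature model: quality stored against each feature
    feature_id INTEGER REFERENCES feature(id),
    positional_accuracy_m REAL,
    lineage TEXT
);
INSERT INTO feature VALUES (1, 'parcel_17', 'POLYGON((0 0, 1 0, 1 1, 0 0))');
INSERT INTO feature_quality VALUES (1, 0.35, 'digitised from 1:5000 plan');
""")
rows = con.execute("""
    SELECT f.name, q.positional_accuracy_m, q.lineage
    FROM feature f JOIN feature_quality q ON q.feature_id = f.id
    WHERE q.positional_accuracy_m < 0.5
""").fetchall()
print(rows)
```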
18

Feature Tracking in Two Dimensional Time Varying Datasets

Thampy, Sajjit 10 May 2003
This study investigates methods that can be used for tracking features in computational fluid dynamics datasets. Two approaches are studied: overlap-based feature tracking and attribute-based feature tracking. Overlap-based techniques use the actual degree of overlap between successive time steps to conclude a match. Attribute-based techniques use characteristics native to the feature being studied, such as size, orientation and speed, to conclude a match between candidate features. Due to limitations on the number of time steps that can be held in a computer's memory, it may be possible to load only a time-subsampled data set. This can reduce the overlap obtained, and hence the confidence of the match. This study looks into using specific attributes of features, such as rotational and linear velocity, to predict the presence of a feature in a future time step. The use of predictive techniques is tested on swirling features, i.e., vortices. An ellipse-like representation is assumed to be a good approximation of any such feature. The locations of a feature in previous time steps are used to predict its position in a future time step. The ellipse-like representation of the feature is translated to the predicted location and aligned in the predicted orientation, and an overlap test is then done. The use of predictive techniques helps increase the overlap, and subsequently the confidence in the match obtained. The techniques were tested on an artificial data set for linear velocity and rotation, and on a real data set from a simulation of flow past a cylinder. Regions of swirling flow, detected by computing the swirl parameter, were taken as features for study. The degree of overlap obtained by a basic overlap test and by the use of predictive methods was tabulated. The results show that the use of predictive techniques improved the overlap.
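A toy sketch of the prediction idea: a feature's centroid in the next (possibly sub-sampled) time step is extrapolated from its two previous positions, its pixel mask is translated there, and overlap with each candidate is scored. The ellipse fitting and rotation prediction from the thesis are omitted, and the blob below is synthetic.

```python
# Hedged sketch: constant-velocity prediction plus an overlap (IoU) test.
import numpy as np

def predict_centroid(c_prev, c_curr):
    return 2 * np.asarray(c_curr) - np.asarray(c_prev)     # constant-velocity extrapolation

def overlap(mask_a, mask_b):
    a, b = set(map(tuple, mask_a)), set(map(tuple, mask_b))
    return len(a & b) / len(a | b)                          # intersection over union

def match_with_prediction(mask, c_prev, c_curr, candidates):
    shift = np.round(predict_centroid(c_prev, c_curr) - np.mean(mask, axis=0)).astype(int)
    moved = np.asarray(mask) + shift                        # translate mask to predicted location
    scores = [overlap(moved, cand) for cand in candidates]
    return int(np.argmax(scores)), max(scores)

# Toy vortex-like blob drifting 5 pixels per step to the right.
blob = np.array([(y, x) for y in range(10, 15) for x in range(5, 10)])   # centroid (12, 7)
candidates = [blob + (0, 20), blob + (0, 6)]                              # next time step
print(match_with_prediction(blob, (12, 2), (12, 7), candidates))          # -> (1, ~0.67)
```

Without the prediction step, the un-shifted mask would barely overlap the correct candidate when time steps are sub-sampled, which is exactly the loss of confidence the thesis addresses.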
19

When the Sun Rises

Willoughby, Kamiya 01 January 2024
A lonely, lovelorn butcher is tasked with caring for a sadistic drug dealer's daughter.
20

A Feature-Oriented Software Engineering Approach to Integrate ASSISTments with Learning Management Systems

Duong, Hien D 29 May 2014
"Object-Oriented Programming (OOP), in the past two decades, has become the most influential and dominant programming paradigm for developing large and complex software systems. With OOP, developers can rely on design patterns that are widely accepted as solutions for recurring problems and used to develop flexible, reusable and modular software. However, recent studies have shown that Objected-Oriented Abstractions are not able to modularize these pattern concerns and tend to lead to programs with poor modularity. Feature-Oriented Programming (FOP) is an extension of OOP that aims to improve the modularity and to support software variability in OOP by refining classes and methods. In this thesis, based upon the work of integrating an online tutor systems, ASSISTments, with other online learning management systems, we evaluate FOP with respect to modularity. This proof-of-concept effort demonstrates how to reduce the effort in designing integration code."
