51

Normalized Convolution Network and Dataset Generation for Refining Stereo Disparity Maps

Cranston, Daniel, Skarfelt, Filip January 2019 (has links)
Finding disparity maps between stereo images is a well-studied topic within computer vision. While both classical and machine learning approaches exist in the literature, they frequently struggle to correctly solve the disparity in regions with low texture, sharp edges or occlusions. Finding approximate solutions to these problem areas is usually referred to as disparity refinement and is typically carried out separately after an initial disparity map has been generated. In the recent literature, the use of Normalized Convolution in Convolutional Neural Networks has shown remarkable results when applied to the task of stereo depth completion. This thesis investigates how well this approach performs in the case of disparity refinement. Specifically, we investigate how well such a method can improve the initial disparity maps generated by the stereo matching algorithm developed at Saab Dynamics using a rectified stereo rig. To this end, a dataset of ground truth disparity maps was created using equipment at Saab, namely a structured-light setup and the stereo rig cameras. Because the end goal is a dataset fit for training networks, we investigate an approach that allows for efficient creation of large quantities of dense ground truth disparities. The generation method produces several disparity maps for every scene by measuring it with several stereo pairs, and a densified disparity map is obtained by merging the disparity maps from the neighbouring stereo pairs. This resulted in a dataset of 26 scenes and 104 dense and accurate disparity maps. Our evaluation results show that the chosen Normalized Convolution Network based method can be adapted for disparity map refinement, but is dependent on the quality of the input disparity map.
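At the core of such networks is the normalized convolution operation, which filters a signal together with a per-pixel confidence so that unreliable or missing disparities are reconstructed from confident neighbours. The NumPy sketch below illustrates this basic operation under assumed names and a Gaussian applicability function; it is not the thesis network, which learns these components end to end.

```python
# Minimal sketch of normalized (averaging) convolution for filling sparse or
# unreliable disparity values. The Gaussian applicability and names are
# illustrative assumptions, not the thesis implementation.
import numpy as np
from scipy.ndimage import convolve

def normalized_convolution(disparity, confidence, size=5, sigma=1.5):
    """Weight each disparity sample by its confidence, filter, then renormalize."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    applicability = np.outer(g, g)  # separable Gaussian applicability function

    numerator = convolve(disparity * confidence, applicability, mode="nearest")
    denominator = convolve(confidence, applicability, mode="nearest")
    return numerator / np.maximum(denominator, 1e-8)  # avoid division by zero

# Usage: pixels with confidence 0 are filled from nearby confident measurements.
disp = np.random.rand(64, 64).astype(np.float32) * 50
conf = (np.random.rand(64, 64) > 0.3).astype(np.float32)  # 0 marks missing pixels
filled = normalized_convolution(disp, conf)
```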
52

Real-time localization of balls and hands in videos of juggling using a convolutional neural network

Åkerlund, Rasmus January 2019 (has links)
Juggling can be both a recreational activity that provides a wide variety of challenges to participants and an art form that can be performed on stage. Non-learning-based computer vision techniques, depth sensors, and accelerometers have been used in the past to augment these activities. These solutions either require specialized hardware or only work in a very limited set of environments. In this project, a 54,000-frame video dataset of annotated juggling was created, and a convolutional neural network was successfully trained that can locate the balls and hands with high accuracy in a variety of environments. The network is sufficiently lightweight to provide real-time inference on CPUs. In addition, the locations of the balls and hands were recorded for thirty-six common juggling patterns, and small neural networks were trained that can categorize them almost perfectly. By building on the publicly available code, models, and datasets that this project has produced, jugglers will be able to create interactive juggling games for beginners and novel audio-visual enhancements for live performances.
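As an illustration of the kind of lightweight, fully convolutional detector the abstract describes, the sketch below outputs one low-resolution heatmap per class (balls, hands) whose peaks give candidate locations. Layer sizes and names are assumptions chosen only to keep CPU inference cheap; this is not the trained network from the project.

```python
# Illustrative sketch of a small fully convolutional heatmap detector.
import torch
import torch.nn as nn

class JugglingHeatmapNet(nn.Module):
    def __init__(self, num_classes=2):  # class 0: balls, class 1: hands
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, num_classes, 1)  # 1x1 conv: one heatmap per class

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

# Peaks in each heatmap give candidate ball/hand locations at 1/8 resolution.
model = JugglingHeatmapNet().eval()
with torch.no_grad():
    heatmaps = model(torch.rand(1, 3, 256, 256))  # -> (1, 2, 32, 32)
```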
53

Analysis of Current Flows in Electrical Networks for Error-Tolerant Graph Matching

Gutierrez Munoz, Alejandro 10 November 2008 (has links)
Chemical compounds, fingerprint databases, social networks, and interactions between websites all have one thing in common: they can be represented as graphs. The need to analyze, compare, and classify graph datasets has become more evident over the last decade. The graph isomorphism problem is known to belong to the NP class, and the subgraph isomorphism problem is known to be NP-complete. Several error-tolerant graph matching techniques have been developed during the last two decades in order to overcome the computational complexity associated with these problems. Some of these techniques rely upon similarity measures based on the topology of the graphs; random walks and edit distance kernels are examples of such methods. In conjunction with learning algorithms like back-propagation neural networks, k-nearest neighbor, and support vector machines (SVM), these methods provide a way of classifying graphs based on a training set of labeled instances. This thesis presents a novel approach to error-tolerant graph matching based on current flow analysis. Analysis of current flow in electrical networks is a technique that uses the voltages and currents obtained through nodal analysis of a graph representing an electrical circuit. Current flow in electrical networks shares some interesting connections with the number of random walks along the graph. We propose an algorithm to calculate a similarity measure between two graphs based on the current flows along geodesics of the same degree. This similarity measure can be applied over large graph datasets, allowing these datasets to be compared in a reasonable amount of time. This thesis investigates the classification potential of several data mining algorithms based on the information extracted from a graph dataset and represented as current flow vectors. We describe our operational prototype and evaluate its effectiveness on the NCI-HIV dataset.
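To make the electrical analogy concrete, the sketch below treats a graph as a resistor network with unit resistances, injects one unit of current at a source node and extracts it at a sink, and recovers edge currents from the node voltages given by the Laplacian pseudoinverse. It illustrates the underlying nodal analysis only; the similarity measure over geodesics defined in the thesis is not reproduced here.

```python
# Sketch of current flow in a graph viewed as a resistor network (unit resistances assumed).
import numpy as np

def edge_currents(adjacency, s, t):
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    injection = np.zeros(adjacency.shape[0])
    injection[s], injection[t] = 1.0, -1.0
    voltages = np.linalg.pinv(laplacian) @ injection  # nodal analysis
    # Current on edge (i, j) is (v_i - v_j) / r_ij, with r_ij = 1 here.
    n = len(voltages)
    return {(i, j): voltages[i] - voltages[j]
            for i in range(n) for j in range(n) if i < j and adjacency[i, j]}

# Usage on a 4-node cycle graph: current from node 0 to node 2 splits evenly
# along the two available paths.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(edge_currents(A, 0, 2))
```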
54

APPLY DATA CLUSTERING TO GENE EXPRESSION DATA

Abualhamayl, Abdullah Jameel, Mr. 01 December 2015 (has links)
Data clustering plays an important role in the effective analysis of gene expression. Although DNA microarray technology facilitates expression monitoring, several challenges arise when dealing with gene expression datasets, among them the enormous number of genes, the dimensionality of the data, and the change of the data over time. Genetic groups that are biologically interlinked can be identified through clustering. This project aims to clarify the steps needed to apply clustering analysis to the genes in a published dataset. The methodology covers the selection of the dataset representation, the gene dataset, the similarity matrix, the clustering algorithm, and the analysis tool. The R language, with a focus on the kmeans, fpc, hclust, and heatmap3 packages, is used as the analysis tool in this project. Different clustering algorithms are applied to the Spellman dataset to illustrate how genes are grouped into clusters, which helps in understanding genetic behavior.
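The thesis carries out this workflow in R; purely for illustration, the sketch below runs the same two steps, k-means on expression profiles and hierarchical clustering on a correlation-based distance, in Python on placeholder data (not the Spellman dataset).

```python
# Illustrative Python equivalent of the clustering workflow described above.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expression = rng.normal(size=(200, 18))  # 200 genes x 18 time points (fake data)

# k-means on gene expression profiles
km_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(expression)

# hierarchical clustering on correlation distance (1 - Pearson correlation),
# i.e. a similarity-matrix-based grouping of co-expressed genes
tree = linkage(pdist(expression, metric="correlation"), method="average")
hc_labels = fcluster(tree, t=4, criterion="maxclust")

print(np.bincount(km_labels), np.bincount(hc_labels)[1:])  # cluster sizes
```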
55

Real-time rendering of very large 3D scenes using hierarchical mesh simplification

Jönsson, Daniel January 2009 (has links)
Captured and generated 3D data can be so large that it creates a problem for today's computers, since it does not fit into main memory or graphics card memory. Therefore, methods for handling and rendering the data must be developed. This thesis presents a way to pre-process and render out-of-core height map data for real-time use. The pre-processing uses a mesh decimation API called Simplygon, developed by Donya Labs, to optimize the geometry. From the height map a normal map can also be created and used at render time to increase the visual quality. In addition to the 3D data, textures are also supported. To decrease the time to load an object, the normal and texture maps can be compressed on the graphics card prior to rendering. Three different methods for covering gaps are explored, of which one turns out to be insufficient for rendering cylindrical equidistant projected data. At render time two threads work in parallel: one thread pages the data from the hard drive to the main and graphics card memory, and the other thread is responsible for rendering all data. To handle precision errors caused by large spatial differences in the data, each object receives a local origin and is then rendered relative to the camera. An atmosphere that handles views from both space and ground is computed on the graphics card. The result is an application adapted to current graphics card technology which can page out-of-core data and render a dataset covering the entire earth at 500 meters spatial resolution with a realistic atmosphere.
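The "local origin" technique mentioned for handling precision errors can be sketched as follows: absolute positions are kept in double precision on the CPU, and only small camera-relative offsets are sent to the GPU in single precision, so 32-bit rounding error stays small near the viewer. Names and numbers below are illustrative, not taken from the thesis renderer.

```python
# Sketch of camera-relative ("local origin") rendering to avoid float32 precision loss.
import numpy as np

def camera_relative_vertices(object_origin, local_vertices, camera_position):
    """object_origin and camera_position in float64; result is safe to cast to float32."""
    origin_rel_camera = object_origin - camera_position   # small when the object is nearby
    return (local_vertices + origin_rel_camera).astype(np.float32)

camera = np.array([6.371e6, 0.0, 0.0])                 # ~Earth radius from the centre
origin = np.array([6.371e6 + 120.0, 35.0, -40.0])      # object near the camera
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.25]])  # object-local geometry

print(camera_relative_vertices(origin, verts, camera)) # offsets of order 1-100, not 1e6
```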
57

Distributed Algorithms for SVD-based Least Squares Estimation

Peng, Yu-Ting 19 July 2011 (has links)
Singular value decomposition (SVD) is a popular decomposition method for solving least-squares estimation problems. However, for large datasets, SVD is very time-consuming and memory-demanding when computing least-squares solutions. In this paper, we propose a least-squares estimator based on an iterative divide-and-merge scheme for large-scale estimation problems. The estimator consists of several levels. At each level, the input matrices are subdivided into submatrices. The submatrices are decomposed by SVD and the results are merged into smaller matrices, which become the input of the next level. The process is iterated until the resulting matrices are small enough to be solved directly and efficiently by the SVD algorithm. However, the iterative divide-and-merge algorithm executed on a single machine is still time-demanding on large-scale datasets. We propose two distributed algorithms that overcome this shortcoming by allowing several machines to perform the decomposition and merging of the submatrices at each level in parallel. The first is implemented in MapReduce on the Hadoop distributed platform, which can run the tasks in parallel on a collection of computers. The second is implemented in CUDA, which can run the tasks in parallel on Nvidia GPUs. Experimental results demonstrate that the proposed distributed algorithms can greatly reduce the time required to solve large-scale least-squares problems.
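A hedged sketch of the divide-and-merge idea: each row block of the system is compressed by its thin SVD into a factor with as many rows as unknowns, the small factors are stacked (merged), and the process repeats until one small system remains and is solved directly. Block sizes and names are assumptions, not the thesis implementation or its distributed variants.

```python
# Single-machine sketch of divide-and-merge least squares via block-wise thin SVDs.
import numpy as np

def compress_block(A_blk, b_blk):
    """Replace (A_blk, b_blk) by a small system with the same least-squares minimizer."""
    U, s, Vt = np.linalg.svd(A_blk, full_matrices=False)  # thin SVD
    return np.diag(s) @ Vt, U.T @ b_blk                   # R_i = S V^T, c_i = U^T b_i

def divide_and_merge_lstsq(A, b, block_rows=1000):
    while A.shape[0] > block_rows:
        parts = [compress_block(A[i:i + block_rows], b[i:i + block_rows])
                 for i in range(0, A.shape[0], block_rows)]
        A = np.vstack([R for R, _ in parts])        # merged, much smaller matrix
        b = np.concatenate([c for _, c in parts])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)       # final small SVD-based solve
    return x

# Usage: matches the direct solution up to numerical error.
rng = np.random.default_rng(1)
A = rng.normal(size=(20000, 8)); b = rng.normal(size=20000)
print(np.allclose(divide_and_merge_lstsq(A, b), np.linalg.lstsq(A, b, rcond=None)[0]))
```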
58

A Similarity-based Data Reduction Approach

Ouyang, Jeng 07 September 2009 (has links)
Finding an efficient data reduction method for large-scale problems is an imperative task. In this paper, we propose a similarity-based self-constructing fuzzy clustering algorithm to sample instances for the classification task. Instances that are similar to each other are grouped into the same cluster. When all the instances have been fed in, a number of clusters are formed automatically. The statistical mean of each cluster is then regarded as representing all the instances covered by the cluster. This approach has two advantages. One is that it is faster and uses less storage memory. The other is that the number of representative instances need not be specified in advance by the user. Experiments on real-world datasets show that our method runs faster and obtains a better reduction rate than other methods.
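A minimal sketch of a similarity-based self-constructing pass, under an assumed Gaussian similarity and threshold (the thesis's exact fuzzy membership function is not reproduced): each incoming instance joins the most similar existing cluster if the similarity exceeds the threshold, otherwise it starts a new cluster, and the cluster means become the reduced training set. Class labels are ignored here for brevity.

```python
# One-pass, self-constructing clustering used as instance reduction.
import numpy as np

def self_constructing_reduce(X, threshold=0.5, sigma=1.0):
    sums, counts = [], []                      # running per-cluster statistics
    for x in X:
        if sums:
            means = np.array(sums) / np.array(counts)[:, None]
            sim = np.exp(-np.sum((means - x) ** 2, axis=1) / (2 * sigma ** 2))
            best = int(np.argmax(sim))
            if sim[best] >= threshold:         # similar enough: absorb into cluster
                sums[best] += x
                counts[best] += 1
                continue
        sums.append(x.astype(float).copy())    # otherwise open a new cluster
        counts.append(1)
    return np.array(sums) / np.array(counts)[:, None]  # cluster means = representatives

# Usage: 1000 points drawn around two centres collapse to a handful of representatives.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.3, size=(500, 2)) for c in ((0, 0), (4, 4))])
print(self_constructing_reduce(X).shape)
```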
59

Evaluating the Performance of Propensity Scores to Address Selection Bias in a Multilevel Context: A Monte Carlo Simulation Study and Application Using a National Dataset

Lingle, Jeremy Andrew 16 October 2009 (has links)
When researchers are unable to randomly assign students to treatment conditions, selection bias is introduced into the estimates of treatment effects. Random assignment to treatment conditions, which has historically been the scientific benchmark for causal inference, is often impossible or unethical to implement in educational systems. For example, researchers cannot deny services to those who stand to gain from participation in an academic program. Additionally, students select into a particular treatment group through processes that are impossible to control, such as those that result in a child dropping out of high school or attending a resource-starved school. Propensity score methods provide valuable tools for removing selection bias from quasi-experimental research designs and observational studies by modeling the treatment assignment mechanism. The utility of propensity scores has been validated for removing selection bias when the observations are assumed to be independent; however, the ability of propensity scores to remove selection bias in a multilevel context, in which group membership plays a role in the treatment assignment, is relatively unknown. A central purpose of the current study was to begin filling in the gaps in knowledge regarding the performance of propensity scores for removing selection bias, as defined by covariate balance, in multilevel settings using a Monte Carlo simulation study. The performance of propensity scores was also examined using a large-scale national dataset. Results from this study support the conclusion that the multilevel characteristics of a sample have a bearing upon the ability of propensity scores to balance covariates between treatment and control groups. Findings suggest that propensity score estimation models should take cluster-level effects into account when working with multilevel data; however, the numbers of treatment and control group individuals within each cluster must be sufficiently large to allow estimation of those effects. Propensity scores that take cluster-level effects into account can have the added benefit of balancing covariates within each cluster as well as across the sample as a whole.
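The general workflow the study evaluates can be sketched as follows: fit a propensity model that includes cluster (e.g., school) indicators alongside individual covariates, weight observations by the inverse probability of their observed treatment, and check covariate balance with standardized mean differences. The simulated data, model, and balance rule of thumb below are assumptions for illustration, not the study's actual design.

```python
# Illustrative propensity-score workflow with cluster fixed effects and an IPTW balance check.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, clusters = 2000, 40
df = pd.DataFrame({"cluster": rng.integers(0, clusters, n),
                   "x1": rng.normal(size=n),
                   "x2": rng.normal(size=n)})
cluster_effect = rng.normal(scale=0.5, size=clusters)[df["cluster"]]
p_treat = 1 / (1 + np.exp(-(0.8 * df["x1"] - 0.5 * df["x2"] + cluster_effect)))
df["treated"] = rng.binomial(1, p_treat)

# Propensity model with dummy-coded cluster membership (cluster fixed effects).
X = pd.get_dummies(df[["x1", "x2", "cluster"]], columns=["cluster"], drop_first=True)
ps = LogisticRegression(max_iter=1000).fit(X, df["treated"]).predict_proba(X)[:, 1]
weights = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))  # IPTW weights

def weighted_smd(x, t, w):
    """Weighted standardized mean difference between treated and control groups."""
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    v1 = np.average((x[t == 1] - m1) ** 2, weights=w[t == 1])
    v0 = np.average((x[t == 0] - m0) ** 2, weights=w[t == 0])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2)

t = df["treated"].to_numpy()
for cov in ("x1", "x2"):  # |SMD| well below 0.1 is a common balance rule of thumb
    print(cov, round(weighted_smd(df[cov].to_numpy(), t, weights), 3))
```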
60

Novel Image Representations and Learning Tasks

January 2017 (has links)
abstract: Computer vision as a field has gone through significant changes in the last decade. The field has seen tremendous success in designing learning systems with hand-crafted features and in using representation learning to extract better features. In this dissertation some novel approaches to representation learning and task learning are studied. Multiple-instance learning, which is a generalization of supervised learning, is one example of task learning that is discussed. In particular, a novel non-parametric k-NN-based multiple-instance learning method is proposed, which is shown to outperform other existing approaches. This solution is applied effectively to a diabetic retinopathy pathology detection problem. For representation learning, the generality of neural features is investigated first. This investigation leads to some critical understanding of, and results on, feature generality among datasets. The possibility of learning from a mentor network instead of from labels is then investigated. Distillation of dark knowledge is used to efficiently mentor a small network from a pre-trained large mentor network. These studies help in understanding representation learning with smaller and compressed networks. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2017
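The "dark knowledge" distillation step can be sketched as a loss that mixes temperature-softened teacher outputs with ordinary cross-entropy on hard labels; the temperature and mixing weight below are illustrative assumptions, not the dissertation's settings.

```python
# Sketch of a standard distillation loss for mentoring a small student network.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard targets: standard cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Usage with random tensors standing in for network outputs:
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
distillation_loss(student, teacher, labels).backward()
```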
