51

Screening Web Breaks in a Pressroom by Soft Computing

Ahmad, Alzghoul January 2008 (has links)
Web breaks are considered one of the most significant runnability problems in a pressroom. This work concerns the analysis of the relation between various parameters (variables) characterizing the paper, the printing press, the printing process, and the occurrence of web breaks. A large number of variables, 61 in total, obtained off-line as well as measured online during the printing process, are used in the investigation. Each paper reel is characterized by a vector x of 61 components.

Two main approaches are explored. The first treats the problem as a task of classifying data into "break" and "non-break" classes. The procedures of classifier training, the selection of relevant input variables and the selection of the classifier's hyper-parameters are aggregated into one process based on genetic search. The second approach combines genetic-search-based variable selection with data mapping into a low-dimensional space. The genetic search process results in a variable set providing the best mapping according to some quality function.

The empirical study was performed using data collected at a pressroom in Sweden. The total number of data points available for the experiments was 309. Amongst those, only 37 data points represent web break cases. The results of the investigations have shown that the linear relations between the independent variables and the web break frequency are not strong.

Three important groups of variables were identified, namely Lab data (variables characterizing paper properties, measured off-line in a paper mill lab), Ink registry (variables characterizing operator actions aimed at adjusting the ink registry) and Web tension. We found that the most important variables are: Ink registry Y LS MD (adjustments of the yellow ink registry in the machine direction on the lower paper side), Air permeability (which characterizes paper porosity), Paper grammage, Elongation MD, and four variables characterizing web tension: Moment mean, Min sliding Mean, Web tension variance, and Web tension mean.

The proposed methods were helpful in finding the variables influencing the occurrence of web breaks and can also be used for solving other industrial problems.
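To make the first approach concrete, here is a minimal sketch of genetic-search-based variable selection wrapped around a classifier. It is an illustration under stated assumptions, not the thesis's implementation: the arrays `X` (one row per reel, 61 columns) and `y` (break vs. non-break labels), the SVM base classifier, and all GA settings are invented for demonstration.

```python
# Minimal sketch: genetic search over subsets of the 61 variables, scored by
# cross-validated accuracy of an SVM. All parameters here are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated accuracy of an SVM trained on the selected variables."""
    if not mask.any():
        return 0.0
    clf = SVC(kernel="rbf", class_weight="balanced")  # break cases are rare
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def genetic_search(X, y, pop_size=30, generations=40, p_mut=0.02):
    n_vars = X.shape[1]
    pop = rng.random((pop_size, n_vars)) < 0.5           # random variable subsets
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]            # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_vars)                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_vars) < p_mut          # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    return max(pop, key=lambda ind: fitness(ind, X, y))  # best variable mask
```

In the same spirit, the second approach would replace the classifier inside `fitness` with a quality function evaluated on a low-dimensional mapping of the selected variables.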
52

Predicting mutation score using source code and test suite metrics

Jalbert, Kevin 01 September 2012 (has links)
Mutation testing has traditionally been used to evaluate the effectiveness of test suites and provide confidence in the testing process. Mutation testing involves the creation of many versions of a program, each with a single syntactic fault. A test suite is evaluated against these program versions (i.e., mutants) in order to determine the percentage of mutants the test suite is able to identify (i.e., the mutation score). A major drawback of mutation testing is that even a small program may yield thousands of mutants, potentially making the process cost prohibitive. To improve the performance and reduce the cost of mutation testing, we proposed a machine learning approach to predict mutation score based on a combination of source code and test suite metrics. We conducted an empirical evaluation of our approach to evaluate its effectiveness using eight open source software systems. / UOIT
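For readers unfamiliar with the metric, a minimal sketch of how a mutation score is computed follows; `mutants` and `test_suite` are hypothetical names for illustration, not the authors' tooling.

```python
# Minimal sketch of the mutation score. `mutants` is a list of program
# variants, each with one seeded fault; `test_suite(mutant)` is assumed to
# return True when every test passes against that variant.
def mutation_score(mutants, test_suite):
    """Fraction of mutants detected ("killed") by the test suite."""
    killed = sum(1 for mutant in mutants if not test_suite(mutant))
    return killed / len(mutants)
```

A mutant is "killed" when at least one test fails against it; the prediction approach described above aims to estimate this ratio without executing every mutant.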
53

Protein Secondary Structure Prediction Using Support Vector Machines, Neural Networks and Genetic Algorithms

Reyaz-Ahmed, Anjum B 03 May 2007 (has links)
Bioinformatics techniques for protein secondary structure prediction mostly depend on the information available in the amino acid sequence. Support vector machines (SVMs) have shown strong generalization ability in a number of application areas, including protein structure prediction. In this study, a new sliding window scheme is introduced, with multiple windows used to form the protein data for training and testing the SVM. An orthogonal encoding scheme coupled with the BLOSUM62 matrix is used to make the prediction. First, the predictions of binary classifiers using multiple windows are compared with the single window scheme; the results show that the single window is not good in all cases. Two new classifiers are introduced for effective tertiary classification. These new classifiers use neural networks and genetic algorithms to optimize the accuracy of the tertiary classifier. The accuracy levels of the new architectures are determined and compared with other studies. The tertiary architecture is better than most available techniques.
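A minimal sketch of the orthogonal (one-hot) encoding of a sliding window over an amino acid sequence, as used to build SVM inputs; the 13-residue window and the 'X' padding symbol are illustrative choices, not necessarily the study's parameters.

```python
# Minimal sketch of one-hot ("orthogonal") encoding of sliding windows over an
# amino acid sequence; the window size and 'X' padding are assumed values.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_windows(sequence, window=13):
    """Yield a flattened one-hot matrix (window x 20) centred on each residue."""
    half = window // 2
    padded = "X" * half + sequence + "X" * half          # pad both ends
    for centre in range(half, half + len(sequence)):
        block = np.zeros((window, len(AMINO_ACIDS)))
        for row, aa in enumerate(padded[centre - half: centre + half + 1]):
            if aa in INDEX:                              # padding rows stay zero
                block[row, INDEX[aa]] = 1.0
        yield block.ravel()                              # flat vector for the SVM
```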
54

Classification of Genotype and Age of Eyes Using RPE Cell Size and Shape

Yu, Jie 18 December 2012 (has links)
The retinal pigment epithelium (RPE) is a principal site of pathogenesis in age-related macular degeneration (AMD). AMD is a main source of vision loss, and even blindness, in the elderly, and there is currently no effective treatment. Our aim is to describe the relationship between the morphology of RPE cells and the age and genotype of the eyes. We use principal component analysis (PCA) or the functional principal component method (FPCA), support vector machine (SVM), and random forest (RF) methods to analyze the morphological data of RPE cells in mouse eyes in order to classify their age and genotype. Our analyses show that, amongst all morphometric measures of RPE cells, cell shape measurements (eccentricity and solidity) are good for classification, but the combination of cell shape and size (perimeter) provides the best classification.
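As an illustration of one of the named methods, the sketch below fits a random forest to a hypothetical table of cell shape and size features; the values, labels and layout are invented for demonstration only.

```python
# Minimal sketch: random forest on cell shape/size features. The tiny feature
# table (eccentricity, solidity, perimeter) and labels are invented values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.array([[0.71, 0.95, 48.2],    # one row per RPE cell
              [0.55, 0.98, 39.7],
              [0.80, 0.91, 55.1]])
y = np.array([0, 1, 0])              # hypothetical genotype labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.feature_importances_)      # relative weight of shape vs. size features
```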
55

Spam filter for SMS-traffic

Fredborg, Johan January 2013 (has links)
Communication through text messaging, SMS (Short Message Service), is nowadays a huge industry with billions of active users. Because of this huge user base, the medium has attracted many companies trying to market themselves through unsolicited messages, in the same way as was previously done through email. This is such a common phenomenon that SMS spam has now become a plague in many countries. This report evaluates several established machine learning algorithms to see how well they can be applied to the problem of filtering unsolicited SMS messages. Each filter is mainly evaluated by analyzing its accuracy on stored message data. The report also discusses and compares hardware requirements versus performance, measured by how many messages can be evaluated in a fixed amount of time. The results from the evaluation show that a decision tree filter is the best choice of the filters evaluated. It has the highest accuracy as well as a high enough message processing rate to be applicable. The decision tree filter found to be most suitable for the task in this environment has been implemented, and the accuracy of this new implementation is shown to be as high as that of the implementation used for the evaluation. Though the decision tree filter is shown to be the best choice of the filters evaluated, it turned out that the accuracy is not high enough to meet the specified requirements. It nevertheless shows promising results for further work in this area using improved methods on the best performing algorithms.
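A minimal sketch of a decision-tree SMS filter over a bag-of-words representation, in the spirit of the filter described; the tiny message set and the pipeline choices are illustrative assumptions, not the report's implementation.

```python
# Minimal sketch of a decision-tree SMS filter over bag-of-words counts; the
# four messages and their labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

messages = ["WIN a free prize now!!!", "Are we still on for lunch?",
            "URGENT: claim your reward", "See you at 8"]
labels = ["spam", "ham", "spam", "ham"]

sms_filter = make_pipeline(CountVectorizer(), DecisionTreeClassifier(random_state=0))
sms_filter.fit(messages, labels)
print(sms_filter.predict(["Free reward waiting, claim now"]))
```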
56

Detecting Land Cover Change over a 20 Year Time Period in the Niagara Escarpment Plan Using Satellite Remote Sensing

Waite, Holly January 2009 (has links)
The Niagara Escarpment is one of Southern Ontario’s most important landscapes. Due to the nature of the landform and its location, the Escarpment is subject to various development pressures, including urban expansion, mineral resource extraction, agricultural practices and recreation. In 1985, Canada’s first large-scale, environmentally based land use plan was put in place to ensure that only development compatible with the Escarpment occurred within the Niagara Escarpment Plan (NEP). The southern extent of the NEP is of particular interest in this study, since a portion of the Plan is located within the rapidly expanding Greater Toronto Area (GTA). The Plan areas located in the Regional Municipalities of Hamilton and Halton represent urban and rural geographical areas respectively, and both are experiencing development pressures and subsequent changes in land cover. Monitoring initiatives on the NEP have been established, but have done little to identify consistent techniques for monitoring land cover on the Niagara Escarpment. Land cover information is an important part of planning and environmental monitoring initiatives, and remote sensing has the potential to provide frequent and accurate land cover information over various spatial scales.

The goal of this research was to examine land cover change in the Hamilton and Halton portions of the NEP. This was achieved through the creation of land cover maps for each region using Landsat 5 Thematic Mapper (TM) remotely sensed data. These maps aided in determining the qualitative and quantitative changes that occurred in the Plan area over the 20-year period from 1986 to 2006. Change was also examined based on the NEP’s land use designations, to determine whether the Plan policy has been effective in protecting the Escarpment.

To obtain land cover maps, five different supervised classification methods were explored: Minimum Distance, Mahalanobis Distance, Maximum Likelihood, Object-oriented and Support Vector Machine (SVM). Seven land cover classes were mapped at a regional scale: forest, water, recreation, bare agricultural fields, vegetated agricultural fields, urban, and mineral resource extraction areas. SVM proved most successful at mapping land cover on the Escarpment, providing classification maps with an average accuracy of 86.7%. Land cover change analysis showed promising results, with an increase in the forested class and only slight increases in the urban and mineral resource extraction classes. On the negative side, there was an overall decrease in agricultural land.

An examination of land cover change based on the NEP land use designations showed little change, other than change that is regulated under Plan policies, demonstrating the success of the NEP in protecting vital Escarpment lands insofar as this can be revealed through remote sensing. Land cover should be monitored in the NEP consistently over time to ensure changes in the Plan area are compatible with the Niagara Escarpment. Remote sensing is a tool that can provide this information to the Niagara Escarpment Commission (NEC) in a timely, comprehensive and cost-effective way, and the information gained from remotely sensed data can aid in environmental monitoring and policy planning into the future.
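A sketch of per-pixel supervised SVM classification of multi-band imagery, illustrating the general workflow described above; the random stand-in image, sample counts and class codes are assumptions, not the study's data.

```python
# Minimal sketch of per-pixel supervised SVM classification of multi-band
# imagery; the random stand-in image, sample count and 7 class codes are
# illustrative assumptions, not the study's data.
import numpy as np
from sklearn.svm import SVC

bands, height, width = 6, 100, 100             # e.g. TM reflective bands
image = np.random.rand(bands, height, width)   # stand-in for real imagery
pixels = image.reshape(bands, -1).T            # (n_pixels, n_bands)

train_idx = np.random.choice(pixels.shape[0], 200, replace=False)
train_y = np.random.randint(0, 7, size=200)    # labels from reference data

clf = SVC(kernel="rbf").fit(pixels[train_idx], train_y)
land_cover = clf.predict(pixels).reshape(height, width)  # thematic map
```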
57

Large Scale Terrain Modelling for Autonomous Mining

Norberg, Johan January 2010 (has links)
This thesis is concerned with the development of a terrain model using Gaussian Processes to support the automation of open-pit mines. Information can be provided from a variety of sources, including GPS, laser scans and manual surveys. The information is then fused into a single representation of the terrain, together with a measure of uncertainty of the estimated model. The model is also used to detect and label specific features in the terrain. In the context of mining, these features are edges known as toes and crests. A combination of clustering and classification using supervised learning detects and labels these regions. Data gathered from production iron ore mines in Western Australia and a farm in Marulan outside Sydney is used to demonstrate and verify the ability of Gaussian Processes to estimate a model of the terrain. The estimated terrain model is then used for detecting features of interest. Results show that the Gaussian Process correctly estimates the terrain and its uncertainties, and provides a good representation of the area. Toes and crests are also successfully identified and labelled.
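A minimal sketch of Gaussian Process terrain estimation with an accompanying uncertainty measure; the kernel choice, synthetic survey points and query grid are illustrative assumptions rather than the thesis's configuration.

```python
# Minimal sketch of Gaussian Process terrain estimation with uncertainty;
# the kernel choice and synthetic survey points are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

XY = np.random.rand(50, 2) * 100.0                        # surveyed (x, y) points
z = np.sin(XY[:, 0] / 20.0) + 0.05 * np.random.randn(50)  # noisy elevations

kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel).fit(XY, z)

grid = np.mgrid[0:100:50j, 0:100:50j].reshape(2, -1).T    # dense query grid
elevation, sigma = gp.predict(grid, return_std=True)      # estimate + uncertainty
```

The per-point standard deviation `sigma` plays the role of the uncertainty measure described above; feature detection such as toe/crest labelling would operate on the estimated surface.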
58

Machine vision for automating visual inspection of wooden railway sleepers

Sajjad Pasha, Mohammad January 2007 (has links)
No description available.
60

Sparse Modeling in Classification, Compression and Detection

Chen, Jihong 12 July 2004 (has links)
The principal focus of this thesis is the exploration of sparse structures in a variety of statistical modelling problems. While more comprehensive models can be useful for solving a larger number of problems, their calculation may be ill-posed in most practical instances because of the sparsity of informative features in the data. If this sparse structure can be exploited, the models can often be solved very efficiently. The thesis is composed of four projects. First, feature sparsity is incorporated to improve the performance of support vector machines when many noise features are present. The second project is an empirical study on how to construct an optimal cascade structure. The third project involves the design of a progressive, rate-distortion-optimized shape coder that combines the zero-tree algorithm with a beamlet structure. Finally, the longest-run statistic is applied to the detection of a filamentary structure in a two-dimensional rectangular region. The fundamental idea of these projects is common: extract an efficient summary from a large amount of data. The main contributions of this work are to develop and implement novel techniques for the efficient solution of several difficult problems that arise in statistical signal/image processing.
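A minimal sketch of exploiting feature sparsity in a support vector machine via an L1 penalty, loosely illustrating the first project; the synthetic data with two informative features out of 100 is an assumption for demonstration.

```python
# Minimal sketch of feature sparsity in a linear SVM via an L1 penalty; the
# synthetic data (2 informative features out of 100) is an assumption.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 100))              # mostly noise features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # only two informative ones

clf = LinearSVC(penalty="l1", dual=False, C=0.1).fit(X, y)
print(np.flatnonzero(clf.coef_))                 # indices of surviving features
```

The L1 penalty drives most coefficients exactly to zero, so the fitted model itself identifies the small informative subset, which is the sense in which sparsity makes the problem efficiently solvable.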
