
Computer-Enhanced Knowledge Discovery in Environmental Science

Fukuda, Kyoko January 2009 (has links)
Encouraging the use of computer algorithms, by developing new algorithms and introducing little-known ones for environmental science problems, is a significant contribution: it provides knowledge discovery tools that extract new aspects of results and draw new insights in addition to those from general statistical methods. Conducting analysis with appropriately chosen methods, in terms of quality of performance and results, computation time, flexibility and applicability to data of various natures, will help decision making in the policy development and management process for environmental studies. This thesis has three fundamental aims and motivations. Firstly, to develop a flexibly applicable attribute selection method, Tree Node Selection (TNS), and a decision tree assessment tool, Tree Node Selection for assessing decision tree structure (TNS-A), both of which use decision trees pre-generated by the widely used C4.5 decision tree algorithm as their information source, to identify important attributes from data. TNS supports cost-effective and efficient data collection and policy making by selecting fewer, but important, attributes, and TNS-A provides a tool to assess decision tree structure and extract information on the relationship between attributes and decisions. Secondly, to introduce new, theoretical or little-used computer algorithms, such as the K-Maximum Subarray Algorithm (K-MSA) and Ant-Miner, adjusting and maximizing their applicability and practicality for environmental science problems to bring new insights. Additionally, the advanced statistical and mathematical method Singular Spectrum Analysis (SSA) is demonstrated as a data pre-processing step that helps improve C4.5 results on noisy measurements. Thirdly, to promote, encourage and motivate environmental scientists to use the ideas and methods developed in this thesis.
The methods were tested with benchmark data and various real environmental science problems: sea container contamination, the Weed Risk Assessment model and weed spatial analysis for New Zealand Biosecurity, air pollution, climate and health, and defoliation imagery. The outcome of this thesis is to introduce the concepts and techniques of data mining, the process of knowledge discovery from databases, to environmental science researchers in New Zealand and overseas, and, through collaboration on future research and on future policy and management, to help maintain and sustain a healthy environment to live in.
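For readers unfamiliar with the K-Maximum Subarray Algorithm (K-MSA) mentioned above: it generalizes the classic maximum subarray problem, which finds the contiguous run of values (for example, of daily measurement anomalies) with the largest sum. The thesis's K-MSA and its environmental applications are not reproduced here; the sketch below shows only the classic K = 1 case (Kadane's algorithm) on which K-MSA builds.

```python
def max_subarray(a):
    """Kadane's algorithm: return (best sum, start index, end index)
    of the contiguous subarray of `a` with the largest sum.
    K-MSA generalizes this to the K best (overlapping) subarrays."""
    best_sum, best_lo, best_hi = a[0], 0, 0
    cur_sum, cur_lo = a[0], 0
    for i in range(1, len(a)):
        if cur_sum < 0:
            # A negative running sum can never help; restart at i.
            cur_sum, cur_lo = a[i], i
        else:
            cur_sum += a[i]
        if cur_sum > best_sum:
            best_sum, best_lo, best_hi = cur_sum, cur_lo, i
    return best_sum, best_lo, best_hi
```

For example, `max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4])` returns `(6, 3, 6)`: the run `[4, -1, 2, 1]` sums to 6.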

Computing stable models of logic programs

Singhi, Soumya 01 January 2003 (has links)
The solution of any search problem lies in its search space. A search is a systematic examination of candidate solutions of a search problem. In this thesis, we present a search heuristic that we call cr-smodels. cr-smodels prunes the search space to quickly reach the solution of a problem. The idea is to pick an atom for branching that lowers the growth rate of the linear recurrence and thus minimizes the remaining search space. Our goal in developing cr-smodels was to design a search heuristic that is efficient on a wide range of problems. We then tested cr-smodels over a wide range of randomly generated benchmarks and observed that randomly generated graphs with no Hamiltonian cycle were often trivial to solve. Since the Hamiltonian cycle is an important benchmark problem, our other goal was to develop techniques that generate hard instances of graphs with no Hamiltonian cycle.

MINIMUM FLOW TIME SCHEDULE GENETIC ALGORITHM FOR MASS CUSTOMIZATION MANUFACTURING USING MINICELLS

Chadalavada, Phanindra Kumar 01 January 2006 (has links)
Minicells are small manufacturing cells dedicated to an option family and organized in a multi-stage configuration for mass customization manufacturing. Product variants, depending on the customization requirements of each customer, are routed through the minicells as necessary. For successful mass customization, customized products must be manufactured at low cost and with short turnaround time. Effective scheduling of jobs to be processed in minicells is essential to quickly deliver customized products. In this research, a genetic algorithm based approach is developed to schedule jobs in a minicell configuration by treating it as a multi-stage flow shop. A new crossover strategy is used in the genetic algorithm to obtain a minimum flow time schedule.

Extended Information Matrices for Optimal Designs when the Observations are Correlated II

Pazman, Andrej, Müller, Werner January 1996 (has links) (PDF)
Regression models with correlated errors lead to nonadditivity of the information matrix. This makes the usual approach to design optimization (approximation with a continuous design, application of an equivalence theorem, numerical calculation by a gradient algorithm) impossible. A method is presented that allows the construction of a gradient algorithm by altering the information matrices through the addition of supplementary noise. A heuristic is formulated to circumvent the nonconvexity problem, and the method is applied to typical examples from the literature. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
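The nonadditivity the abstract refers to can be seen directly from the information matrix of a linear model with correlated errors, M = X'Σ⁻¹X: off-diagonal covariance couples the design points, so M is no longer the sum of per-point contributions. A minimal numerical illustration (not the authors' algorithm) follows.

```python
import numpy as np

def info_matrix(X, Sigma):
    """Fisher information matrix X' Sigma^{-1} X of a linear regression model,
    where Sigma is the error covariance over the design points."""
    return X.T @ np.linalg.solve(Sigma, X)

# Two replicate design points for a one-parameter model.
X = np.array([[1.0], [1.0]])

M_indep = info_matrix(X, np.eye(2))                          # uncorrelated errors
M_corr = info_matrix(X, np.array([[1.0, 0.5], [0.5, 1.0]]))  # correlation 0.5
```

With independent errors each point contributes information 1, and the totals add: M_indep = [[2]]. With correlation 0.5 the total drops to 4/3, strictly less than the sum of the individual contributions, which is exactly why continuous-design arguments built on additivity break down.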

Searching Genome-wide Disease Association Through SNP Data

Guo, Xuan 11 August 2015 (has links)
Taking advantage of high-throughput Single Nucleotide Polymorphism (SNP) genotyping technology, Genome-Wide Association Studies (GWASs) are regarded as holding promise for unravelling complex relationships between genotype and phenotype. GWASs aim to identify genetic variants associated with disease by assaying and analyzing hundreds of thousands of SNPs. Traditional single-locus and two-locus methods have been standardized and have led to many interesting findings. Recently, a substantial number of GWASs indicate that, for most disorders, joint genetic effects (epistatic interactions) across the whole genome broadly exist in complex traits. At present, identifying high-order epistatic interactions from GWASs is computationally and methodologically challenging. My dissertation research focuses on the problem of searching for genome-wide associations under three frequently encountered scenarios, i.e. one case and one control, multiple cases and multiple controls, and Linkage Disequilibrium (LD) block structure. For the first scenario, we present a simple and fast method, named DCHE, using dynamic clustering. For the second, we design two methods, a Bayesian inference based method and a heuristic method, to detect genome-wide multi-locus epistatic interactions across multiple diseases. For the last scenario, we propose a block-based Bayesian approach to model the LD and conditional disease association simultaneously. Experimental results on both synthetic and real GWAS datasets show that the proposed methods improve the detection accuracy of disease-specific associations and lessen the computational cost compared with current popular methods.

Network Exceptions Modelling Using Hidden Markov Model: A Case Study of Ericsson's Dropped Call Data

Li, Shikun January 2014 (has links)
In telecommunication, the series of mobile network exceptions is a process which exhibits surges and bursts. The bursty part is usually caused by system malfunction. Additionally, the mobile network exceptions are often time dependent. A model that successfully captures these aspects will make troubleshooting much easier for system engineers. The Hidden Markov Model (HMM) is a good candidate as it provides a mechanism to capture both the time dependency and the random occurrence of bursts. This thesis focuses on an application of the HMM to mobile network exceptions, with a case study of Ericsson's Dropped Call data. For estimation purposes, two methods of maximum likelihood estimation for HMM, namely the EM algorithm and the stochastic EM algorithm, are used.
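As background to the estimation step: the likelihood that both the EM and stochastic EM algorithms maximize can be evaluated with the scaled forward recursion for a discrete HMM. A minimal sketch, independent of the Ericsson data:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi: initial state probabilities (K,)
    A:  state transition matrix (K, K)
    B:  emission probabilities (K, M), B[k, m] = P(symbol m | state k)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()                  # scaling constant avoids underflow
    loglik = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c
    return loglik
```

A quick sanity check: for a single-state HMM emitting two symbols with probability 0.5 each, any length-3 sequence has log-likelihood 3·log(0.5).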

Neuromorphic systems for legged robot control

Monteiro, Hugo Alexandre Pereira January 2013 (has links)
Locomotion automation is a very challenging and complex problem to solve. Besides the obvious navigation problems, there are also problems regarding the environment in which navigation has to be performed. Terrains with obstacles such as rocks, steps or high inclinations, among others, pose serious difficulties to normal wheeled vehicles. The flexibility of legged locomotion is ideal for these types of terrain, but this alternative form of locomotion brings with it its own challenges, caused by the high number of degrees of freedom inherent to it. This problem is usually computationally intensive, so an alternative, using simple and hardware-amenable bio-inspired systems, was studied. The goal of this thesis was to investigate whether using a biologically inspired learning algorithm, integrated in a fully biologically inspired system, can improve its performance on irregular terrain by adapting its gait to deal with obstacles in its path. At first, two different versions of a learning algorithm based on unsupervised reinforcement learning were developed and evaluated. These systems worked by correlating different events and using them to adjust the behaviour of the system so that it predicts difficult situations and adapts to them beforehand. The difference between the versions was the implementation of a mechanism that allowed some correlations to be forgotten and suppressed by stronger ones. Secondly, a depth-from-motion system was tested, with unsatisfactory results. The source of the problems is analysed and discussed. An alternative system based on stereo vision was implemented, together with an obstacle detection system based on neuron and synaptic models. It is shown that this system is able to detect obstacles in the path of the robot. After the individual systems were completed, they were integrated and the overall performance was evaluated in a series of 3D simulations using various scenarios.
These simulations allowed us to conclude that both learning systems were able to adapt to simple scenarios, but only the one capable of forgetting past correlations adjusted correctly in the more complex experiments.

A Study on Effects of Migration in MOGA with Island Model by Visualization

Furuhashi, Takeshi, Yoshikawa, Tomohiro, Yamamoto, Masafumi January 2008 (has links)
Session ID: SA-G4-2 / Joint 4th International Conference on Soft Computing and Intelligent Systems and 9th International Symposium on Advanced Intelligent Systems, September 17-21, 2008, Nagoya University, Nagoya, Japan

Moving Object Detection based on Background Modeling

Luo, Yuanqing January 2014 (has links)
Aiming at moving object detection, after studying several categories of background modeling methods, we design an improved ViBe algorithm based on an image segmentation algorithm. The ViBe algorithm builds its background model by storing a sample set for each pixel. To detect moving objects, it uses several techniques such as fast initialization, random update, and classification based on the distance between a pixel value and its sample set. In our improved algorithm, we first use histograms of multiple layers to extract moving objects at the block level in a pre-processing stage. Secondly, we segment the blocks of moving objects via an image segmentation algorithm. The algorithm then constructs region-level information for the moving objects and designs classification principles for regions and a modification mechanism among neighboring regions. In addition, to solve the problem that the original ViBe algorithm can easily introduce ghost regions into the background model, the improved algorithm designs and implements a fast ghost elimination algorithm. Compared with traditional pixel-level background modeling methods, the improved method has better robustness and reliability against factors like background disturbance, noise and the presence of moving objects in the initial stage. Specifically, our algorithm improves the precision rate from 83.17% for the original ViBe algorithm to 95.35%, and the recall rate from 81.48% to 90.25%. Considering the effect of shadow on moving object detection, this thesis designs a shadow elimination algorithm based on a Red Green and Illumination (RGI) color feature, which can be converted from RGB color space, and a dynamic match threshold. Experimental results demonstrate that the algorithm can effectively reduce the influence of shadow on moving object detection. Finally, the thesis summarizes this work and discusses future directions.
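The per-pixel sample-set test at the heart of ViBe can be sketched in a few lines. The parameter values below (matching radius 20, two required matches, subsampling factor 16) follow commonly cited ViBe defaults, not necessarily those used in this thesis; the sketch also omits the thesis's block-level, region-level and ghost-elimination extensions.

```python
import random

def classify_pixel(value, samples, radius=20, min_matches=2):
    """ViBe-style classification: a pixel is background if its value lies
    within `radius` of at least `min_matches` stored background samples."""
    matches = sum(1 for s in samples if abs(value - s) < radius)
    return matches >= min_matches        # True -> background, False -> foreground

def update_samples(value, samples, subsampling=16, rng=random):
    """Conservative random update: with probability 1/subsampling, replace a
    randomly chosen sample with the current (background-classified) value."""
    if rng.randrange(subsampling) == 0:
        samples[rng.randrange(len(samples))] = value
```

For example, a pixel of intensity 100 against samples [98, 103, 150] matches two of them and is classified as background, while intensity 200 matches none and is flagged as foreground.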

Developing an optimization algorithm within an e-referral program for clinical specialist selection, based on an extensive e-referral program analysis

Carrick, Curtis 08 July 2013 (has links)
When referring physicians decide to refer their patients to specialist care, they rarely, if ever, make a referral decision with the benefit of having access to all of the desirable information. It is therefore highly unlikely that the referring physician will make the optimal choice of specialist for that particular referral. A specialist selection optimization algorithm was developed to guarantee that the “right specialist” for each patient’s referral was chosen. The specialist selection optimization algorithm was developed based on feedback from over 120 users of the e-referral program. The developed algorithm was simulated, tested, and validated in MATLAB. Results from the MATLAB simulation demonstrate that the algorithm functioned as it was designed to. The developed algorithm provides referring physicians with an unprecedented level of support for their decision of which specialist to refer their patient to.
