  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1001

Adaptive Control Strategy for Isolated Intersection and Traffic Network

Shao, Chun 09 June 2009 (has links)
No description available.
1002

Can Application of Artifact Reduction Algorithm or Increasing Scan Resolution Improve CBCT Diagnostic Accuracy of TAD - Tooth Root Contact?

McLaughlin, Victoria L. 01 October 2021 (has links)
No description available.
1003

Using the EM Algorithm to Estimate the Difference in Dependent Proportions in a 2 x 2 Table with Missing Data.

Talla Souop, Alain Duclaux 18 August 2004 (has links) (PDF)
In this thesis, I am interested in estimating the difference between dependent proportions from a 2 × 2 contingency table when there are missing data. The Expectation-Maximization (EM) algorithm is used to obtain an estimate for the difference between correlated proportions. To obtain the standard error of this difference I employ a resampling technique known as bootstrapping. The performance of the bootstrap standard error is evaluated for different sample sizes and different fractions of missing information. Finally, a 100(1-α)% bootstrap confidence interval is proposed and its coverage is evaluated through simulation.
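The EM-plus-bootstrap procedure described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual code: the data layout (a fully classified 2×2 table plus supplementary counts with only one margin observed) and all function names are assumptions.

```python
import numpy as np

def em_diff(complete, row_only, col_only, n_iter=100):
    """EM estimate of the difference in dependent proportions, p1+ - p+1.

    complete: 2x2 array of fully classified counts [[n11, n10], [n01, n00]]
    row_only: counts [r1, r0] where only the first measurement was observed
    col_only: counts [c1, c0] where only the second measurement was observed
    """
    p = np.full((2, 2), 0.25)  # initial cell probabilities
    for _ in range(n_iter):
        # E-step: allocate the partially classified counts to cells in
        # proportion to the current conditional cell probabilities.
        exp = complete.astype(float)
        for i in range(2):  # first outcome known, second missing
            exp[i] += row_only[i] * p[i] / p[i].sum()
        for j in range(2):  # second outcome known, first missing
            exp[:, j] += col_only[j] * p[:, j] / p[:, j].sum()
        # M-step: re-estimate cell probabilities from expected counts.
        p = exp / exp.sum()
    # p1+ - p+1 = (p11 + p10) - (p11 + p01) = p10 - p01
    return p[0, 1] - p[1, 0]

def bootstrap_se(complete, row_only, col_only, B=300, seed=0):
    """Nonparametric bootstrap standard error of the EM estimate."""
    rng = np.random.default_rng(seed)
    n_c, n_r, n_co = complete.sum(), sum(row_only), sum(col_only)
    pc = complete.flatten() / n_c
    ests = []
    for _ in range(B):
        bc = rng.multinomial(n_c, pc).reshape(2, 2)
        br = rng.multinomial(n_r, np.array(row_only) / n_r)
        bco = rng.multinomial(n_co, np.array(col_only) / n_co)
        ests.append(em_diff(bc, br, bco))
    return float(np.std(ests, ddof=1))
```

A percentile interval over the bootstrap estimates gives the 100(1-α)% confidence interval mentioned in the abstract.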
1004

Smarter NEAT Nets

Dehaven, Ryan Swords 01 August 2013 (has links) (PDF)
This paper discusses a modification to improve usability and functionality of a genetic neural net algorithm called NEAT (NeuroEvolution of Augmenting Topologies). The modification aims to accomplish its goal by automatically changing parameters used by the algorithm with little input from a user. The advantage of the modification is to reduce the guesswork needed to set up a successful experiment with NEAT that produces a usable Artificial Intelligence (AI). The modified algorithm is tested against the unmodified NEAT with several different setups and the results are discussed. The algorithm shows strengths in some areas but can increase the runtime of NEAT due to the addition of parameters into the solution search space.
1005

MULTI-CORE PARALLEL GRAPH ALGORITHMS

GUO, BIN January 2023 (has links)
Large sizes of real-world data graphs, such as social networks, communication networks, hyperlink networks, and model-checking networks, call for fast and scalable analytic algorithms. The shared-memory multicore machine is a prevalent parallel computation model that can handle such volumes of data. Unfortunately, many graph algorithms do not take full advantage of such a parallel model. This thesis focuses on the parallelism of two graph problems, graph trimming and core maintenance. Graph trimming is to prune the vertices without outgoing edges; core maintenance is to maintain the core numbers of vertices when inserting or removing edges, where the core number of a vertex can be a parameter of density in the graph. The goal of this thesis is to develop fast, provable, and scalable parallel graph algorithms that perform on shared-memory multicore machines. Toward this goal, we first discuss the sequential algorithms and then propose corresponding parallel algorithms. The thesis adopts a three-pronged approach of studying parallel graph algorithms from the algorithm design, correctness proof, and performance analysis. Our experiments on multicore machines show significant speedups over various real and synthetic graphs. / Dissertation / Doctor of Philosophy (PhD) / Graphs are important data structures to model real networks like social networks, communication networks, hyperlink networks, and model-checking networks. These network graphs are becoming larger and larger. Analyzing large data graphs requires efficient parallel algorithms executed on multicore machines. In this thesis, we focus on two graph problems, graph trimming and core maintenance. The graph trimming is to remove the vertices without outgoing edges, which may repeatedly cause other vertices to be removed. 
For each vertex in the graph, the core number is a parameter to indicate the density; the core maintenance is to maintain the core numbers of vertices when edges are inserted or removed dynamically, without recalculating all core numbers again. We evaluate our methods on a 16-core or 64-core machine over a variety of real and synthetic graphs. The experiments show that our parallel algorithms are much faster compared with existing ones.
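The graph trimming problem defined above admits a compact sequential formulation; the thesis's contribution is the parallel version, so this worklist sketch only illustrates the underlying cascade (names are my own):

```python
from collections import defaultdict, deque

def trim(n, edges):
    """Iteratively remove vertices with no outgoing edges.

    Deleting a vertex removes its incoming edges, which can drop a
    predecessor's out-degree to zero, so removals cascade.
    Returns the set of surviving vertices.
    """
    out_deg = [0] * n
    preds = defaultdict(list)  # reverse adjacency: target -> list of sources
    for u, v in edges:
        out_deg[u] += 1
        preds[v].append(u)
    dead = deque(v for v in range(n) if out_deg[v] == 0)
    removed = set()
    while dead:
        v = dead.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in preds[v]:  # each predecessor loses one outgoing edge
            out_deg[u] -= 1
            if out_deg[u] == 0:
                dead.append(u)
    return set(range(n)) - removed
```

A shared-memory parallel version would process the worklist with multiple threads using atomic decrements of the out-degree counters; only vertices on cycles (or reaching cycles) survive.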
1006

Food Shelf Life: Estimation and Experimental Design

Larsen, Ross Allen Andrew 15 May 2006 (has links) (PDF)
Shelf life is a parameter of the lifetime distribution of a food product, usually the time until a specified proportion (1-50%) of the product has spoiled according to taste. The data used to estimate shelf life typically come from a planned experiment with sampled food items observed at specified times. The observation times are usually selected adaptively using ‘staggered sampling.’ Ad-hoc methods based on linear regression have been recommended to estimate shelf life. However, other methods based on maximizing a likelihood (MLE) have been proposed, studied, and used. Both methods assume the Weibull distribution. The observed lifetimes in shelf life studies are censored, a fact that the ad-hoc methods largely ignore. One purpose of this project is to compare the statistical properties of the ad-hoc estimators and the maximum likelihood estimator. The simulation study showed that the MLE methods have higher coverage than the regression methods, better asymptotic properties with regard to bias, and lower median squared error (mese) values, especially when shelf life is defined by smaller percentiles. Thus, they should be used in practice. A genetic algorithm (Hamada et al. 2001) was used to find near-optimal sampling designs. This was successfully programmed for general shelf life estimation. The genetic algorithm generally produced designs with much smaller median squared errors than the staggered design commonly used in practice. These designs were radically different from the standard designs. Thus, the genetic algorithm may be used to plan future studies with good estimation properties.
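The likelihood-based approach can be sketched as below: fit a Weibull by MLE while accounting for right censoring, then read shelf life off as a percentile. This is a hedged illustration under assumed names and parameterization, not the thesis's estimator.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_shelf_life(times, observed, p=0.5):
    """MLE of the p-th percentile of a Weibull lifetime distribution
    with right-censored observations.

    times:    observation times
    observed: 1 if spoilage was observed by that time, 0 if censored
    """
    t = np.asarray(times, float)
    d = np.asarray(observed, float)

    def neg_loglik(theta):
        k, lam = np.exp(theta)  # shape, scale (log-parameterized to stay > 0)
        z = t / lam
        # density contribution for observed failures,
        # survival contribution for censored items
        log_f = np.log(k / lam) + (k - 1) * np.log(z) - z ** k
        log_s = -z ** k
        return -np.sum(d * log_f + (1 - d) * log_s)

    res = minimize(neg_loglik, x0=[0.0, np.log(t.mean())], method="Nelder-Mead")
    k, lam = np.exp(res.x)
    # p-th percentile of Weibull(shape=k, scale=lam)
    return lam * (-np.log(1 - p)) ** (1 / k)
```

An ad-hoc regression estimator would instead regress the observed spoilage fraction on time, ignoring the censoring term `log_s` above, which is the source of the bias the thesis documents.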
1007

Reinforcement Programming: A New Technique in Automatic Algorithm Development

White, Spencer Kesson 03 July 2006 (has links) (PDF)
Reinforcement programming is a new technique for using computers to automatically create algorithms. By using the principles of reinforcement learning and Q-learning, reinforcement programming learns programs based on example inputs and outputs. State representations and actions are provided. A transition function and rewards are defined. The system is trained until the system converges on a policy that can be directly implemented as a computer program. The efficiency of reinforcement programming is demonstrated by comparing a generalized in-place iterative sort learned through genetic programming to a sorting algorithm of the same type created using reinforcement programming. The sort learned by reinforcement programming is a novel algorithm. Reinforcement programming is more efficient and provides a more effective solution than genetic programming in the cases attempted. As additional examples, reinforcement programming is used to learn three binary addition problems.
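The reinforcement-programming recipe described above (define states, actions, transitions, and rewards; train with Q-learning until the policy converges; then read the policy off as a program) can be illustrated on a toy task. This is not the thesis's sorting experiment, just a minimal tabular Q-learning sketch whose converged greedy policy is itself a small state-to-action "program":

```python
import random
from itertools import permutations

def train_sort_policy(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy task: learn which adjacent swap to
    apply so that any 3-element permutation becomes sorted. The converged
    greedy policy is a lookup table that can be executed as a program."""
    rng = random.Random(seed)
    goal = (0, 1, 2)
    states = list(permutations(range(3)))
    actions = [0, 1]  # swap positions (0, 1) or (1, 2)
    Q = {(s, a): 0.0 for s in states for a in actions}

    def step(s, a):
        lst = list(s)
        lst[a], lst[a + 1] = lst[a + 1], lst[a]
        s2 = tuple(lst)
        return s2, (1.0 if s2 == goal else -0.1)  # goal reward, step cost

    for _ in range(episodes):
        s = rng.choice(states)
        for _ in range(20):  # cap episode length
            if s == goal:
                break
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(s, x)])
            s2, r = step(s, a)
            # Q-learning update toward the bootstrapped target
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in actions)
                                  - Q[(s, a)])
            s = s2
    # extract the learned policy as a state -> action table
    return {s: max(actions, key=lambda a: Q[(s, a)]) for s in states if s != goal}
```

Executing the returned table (look up the state, apply the swap, repeat) sorts any 3-element input, which is the sense in which the converged policy "can be directly implemented as a computer program."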
1008

The Bioluminescence Heterozygous Genome Assembler

Price, Jared Calvin 01 December 2014 (has links) (PDF)
High-throughput DNA sequencing technologies are currently revolutionizing the fields of biology and medicine by elucidating the structure and function of the components of life. Modern DNA sequencing machines typically produce relatively short reads of DNA which are then assembled by software in an attempt to produce a representation of the entire genome. Due to the complex structure of all but the smallest genomes, especially the abundant presence of exact or almost exact repeats, all genome assemblers introduce errors into the final sequence and output a relatively large set of contigs instead of full-length chromosomes (a contig is a DNA sequence built from the overlaps between many reads). These problems are dramatically worse when homologous copies of the same chromosome differ substantially. Currently such genomes are usually avoided as assembly targets and, when they are not avoided, they generally produce assemblies of relatively low quality. An improved algorithm for the assembly of such data would dramatically improve our understanding of the genetics of a large class of organisms. We present a unique algorithm for the assembly of diploid genomes which have a high degree of variation between homologous chromosomes. The approach uses coverage, graph patterns and machine-learning classification to identify haplotype-specific sequences in the input reads. It then uses these haplotype-specific markers to guide an improved assembly. We validate the approach with a large experiment that isolates and elucidates the effect of single nucleotide polymorphisms (SNPs) on genome assembly more clearly than any previous study. The experiment conclusively demonstrates that the Bioluminescence heterozygous genome assembler produces dramatically longer contigs with fewer haplotype-switch errors than competing algorithms under conditions of high heterozygosity.
1009

A performance study of an evolutionary algorithm for two-point stock forecasting / En studie av prestandan av en evolutionär algoritm för aktieprogonser i två punkter

Hyyrynen, Fredrik, Lignercrona, Marcus January 2017 (has links)
This study was conducted to determine whether it is possible to accurately predict stock behavior by analyzing general patterns in historical stock data. This was done by creating an evolutionary algorithm that learned and weighted possible outcomes by studying the behavior of the Nasdaq stock market between 2000 and 2016, and then using the result of the training to make predictions. Testing with varied parameters showed that clear patterns could not reliably be established with the suggested method, as small adjustments to the measuring dates yielded wildly different results. The results also suggest that the amount of data is more relevant to performance than how closely the stocks are related, and that less precise predictions perform better than predicting multiple degrees of change. The seemingly better setting was shown to perform worse than random predictions, but research with other settings might yield more accurate predictions.
1010

Artificial intelligence to model bedrock depth uncertainty

Machado, Beatriz January 2019 (has links)
The estimation of bedrock level for soil and rock engineering is a challenge associated with many uncertainties. Nowadays, this estimation is performed by geotechnical or geophysical investigations. These are expensive techniques that normally are not fully used because of limited budgets. Hence, the bedrock levels between investigations are roughly estimated and the uncertainty is almost unknown. Machine learning (ML) is an artificial intelligence technique that uses algorithms and statistical models to perform prediction tasks. These mathematical models are built by dividing the data into training, testing and validation samples so that the algorithm improves automatically based on past experience. This thesis explores the possibility of applying ML to estimate bedrock levels and tries to find a suitable algorithm for the prediction and for estimating the uncertainties. Many different algorithms were tested during the process, and the accuracy was analysed by comparison with the input data and also with interpolation methods such as Kriging. The results show that the Kriging method is capable of predicting the bedrock surface with considerably good accuracy. However, when it is necessary to estimate the prediction interval (PI), Kriging presents a high standard deviation. The machine learning model produces a bedrock surface almost as smooth as Kriging's, with better results for the PI. The Bagging regressor with decision trees was the algorithm most capable of predicting an accurate bedrock surface with a narrow PI. / BIG and BeFo project "Rock and ground water including artificial intelligence
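The bagging-with-decision-trees approach singled out above can be sketched as follows: fit a bagged ensemble (decision trees are scikit-learn's default base learner for `BaggingRegressor`) and derive an empirical prediction interval from the spread of the per-tree predictions. The data setup and interval construction are my assumptions, not the thesis's pipeline.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

def fit_bedrock_model(X, y, n_estimators=100, seed=0):
    """Bagged decision trees mapping coordinates (and any covariates)
    to bedrock depth; the base learner defaults to a decision tree."""
    model = BaggingRegressor(n_estimators=n_estimators, random_state=seed)
    model.fit(X, y)
    return model

def predict_with_pi(model, X, alpha=0.1):
    """Point prediction plus an empirical (1 - alpha) prediction
    interval taken from the quantiles of the per-tree predictions."""
    per_tree = np.stack([est.predict(X) for est in model.estimators_])
    mean = per_tree.mean(axis=0)
    lo = np.quantile(per_tree, alpha / 2, axis=0)
    hi = np.quantile(per_tree, 1 - alpha / 2, axis=0)
    return mean, lo, hi
```

The ensemble spread mainly reflects model uncertainty, so the interval narrows near boreholes and widens between them, which matches the behavior the abstract compares against the Kriging variance.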
