541

Training of Hidden Markov models as an instance of the expectation maximization algorithm

Majewsky, Stefan 27 July 2017 (has links) (PDF)
In Natural Language Processing (NLP), speech and text are parsed and generated with language models and parser models, and translated with translation models. Each model contains a set of numerical parameters which are found by applying a suitable training algorithm to a set of training data. Many such training algorithms are instances of the Expectation-Maximization (EM) algorithm. In [BSV15], a generic EM algorithm for NLP is described. This work presents a particular speech model, the Hidden Markov model, and its standard training algorithm, the Baum-Welch algorithm. It is then shown that the Baum-Welch algorithm is an instance of the generic EM algorithm introduced by [BSV15], from which it follows that all statements about the generic EM algorithm also apply to the Baum-Welch algorithm, in particular its correctness and convergence properties.
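The E-step of Baum-Welch rests on the forward algorithm, which computes the likelihood of an observation sequence under the current parameters. A minimal sketch, with a made-up two-state model (the parameters below are illustrative, not from the thesis):

```python
# Forward algorithm: computes P(obs) for an HMM, the quantity Baum-Welch's
# E-step is built around. pi = initial distribution, A = transition matrix,
# B = emission matrix, obs = sequence of observation indices.
def forward(pi, A, B, obs):
    n = len(pi)
    # alpha[i] = P(first observations so far, current state = i)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                 for i in range(n)]
    return sum(alpha)

# Illustrative two-state model with two observation symbols.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.5], [0.1, 0.9]]
likelihood = forward(pi, A, B, [0, 1, 0])
```

Baum-Welch then combines these forward probabilities with a symmetric backward pass to obtain expected state and transition counts, which the M-step renormalizes into new parameters.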
542

Srovnání algoritmů dekódování Reed-Solomonova kódu / Comparison of decoding algorithms of Reed-Solomon code

Šicner, Jiří January 2011 (has links)
The work deals with the encoding and decoding of Reed-Solomon codes. Algebraic decoding of Reed-Solomon codes is described in general terms, followed by four decoding methods: the Berlekamp-Massey algorithm, the Euclidean algorithm, the Peterson-Gorenstein-Zierler algorithm, and the direct method. These methods are then compared, and some of them are implemented in Matlab.
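All four decoding methods operate on the same kind of codeword: the evaluations of a message polynomial over a finite field. A toy encoder over the prime field GF(7) shows the structure (the parameters here are illustrative, not from the thesis; practical RS codes use GF(2^8)):

```python
# Toy Reed-Solomon encoder over GF(7): a length-k message is treated as the
# coefficients of a polynomial, and the codeword is its evaluation at n
# distinct field points. Any k of the n values determine the polynomial,
# which is what the decoding algorithms exploit.
P = 7  # field size (prime, so arithmetic mod P is a field)

def rs_encode(msg, n):
    assert len(msg) <= n <= P
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(n)]

# k=3 message, n=5 codeword: can correct (n-k)//2 = 1 symbol error.
codeword = rs_encode([2, 3, 1], 5)
```

The decoders compared in the thesis differ in how they recover the error-locator polynomial from the syndromes of a corrupted codeword, not in this encoding step.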
543

Algoritmus s pravděpodobnostním směrovým vektorem / Optimization Algorithm with Probability Direction Vector

Pohl, Jan January 2015 (has links)
This dissertation presents an optimization algorithm with a probability direction vector. In its basic form, the algorithm belongs to the category of stochastic optimization algorithms: it perturbs an individual through the state space in a statistically biased way. The work also extends the basic idea into a swarm optimization algorithm that contains a form of stochastic cooperation, one of the new ideas of this algorithm: individuals in the population cooperate only indirectly, through modification of the shared probability direction vector, and never directly. Statistical tests are used to compare the results of the designed algorithms with the commonly used Simulated Annealing and SOMA algorithms. This part of the dissertation also presents experimental data from other optimization problems. The dissertation ends with a chapter that seeks the optimal set of control variables for each designed algorithm.
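Simulated Annealing, one of the baselines the dissertation compares against, is itself a compact example of stochastic perturbation search. A minimal sketch on a one-dimensional quadratic (the test function and cooling schedule are illustrative choices, not the dissertation's benchmark set):

```python
import math
import random

# Minimal simulated annealing: random Gaussian perturbations, always accept
# improvements, accept worsening moves with probability exp(-delta/T), and
# cool the temperature geometrically.
def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0, 1)               # stochastic perturbation
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand                             # Metropolis acceptance
        if f(x) < f(best):
            best = x                             # track best-so-far
        t *= cooling                             # geometric cooling
    return best

minimum = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=10.0)
```

The probability-direction-vector idea replaces the undirected Gaussian step here with a perturbation biased by a learned direction distribution; the acceptance-and-cooling skeleton above is what such algorithms are measured against.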
544

Near-field microwave imaging with coherent and interferometric reconstruction methods

Zhou, Qiping January 2020 (has links)
No description available.
545

Calibration of IDM Car Following Model with Evolutionary Algorithm

Yang, Zhimin 11 January 2024 (has links)
Car following (CF) behaviour modelling has made significant progress in both traffic engineering and traffic psychology during recent decades. Autonomous vehicles (AVs) have been demonstrated to optimise traffic flow and increase traffic stability. Consequently, several car-following models have been proposed based on various car-following criteria, leading to a range of model parameter sets. In traffic engineering, the Intelligent Driver Model (IDM) is commonly used as a microscopic traffic flow model to simulate a single vehicle's behaviour on a road. Observational data can be employed to calibrate IDM parameters, which enhances the model's practicality for real-world applications. As a result, the calibration of model parameters is crucial in traffic simulation research and typically involves solving an optimization problem. In this context, the Nelder-Mead (NM) algorithm, the particle swarm optimization (PSO) algorithm, and the genetic algorithm (GA) are used in this study to parameterize the IDM, using abundant trajectory data from five different road conditions.
The study further examines the effects of various algorithms on the IDM model in different road sections, providing useful insights for traffic simulation and optimization.

Table of Contents:
Chapter 1 Introduction — 1.1 Background and Motivation; 1.2 Structure of the Work
Chapter 2 Background and Related Work — 2.1 Car-Following Models (General Motors model and Gazis-Herman-Rothery model; optimal velocity model and extended models; safety distance or collision avoidance models; physiology-psychology models; Intelligent Driver Model); 2.2 Calibration of Car-Following Model (statistical methods; optimization algorithms); 2.3 Trajectory Data (requirements of experimental data; data collection techniques; collected experimental data)
Chapter 3 Experiments and Results — 3.1 Calibration Process (objective function; error analysis); 3.2 Software and Methodology; 3.3 NM Results; 3.4 PSO Results (PSO calibrator; PSO results); 3.5 GA Results; 3.6 Optimization Performance Analysis
Chapter 4 Conclusion
References
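The quantity being calibrated is the IDM acceleration equation; the optimizers search over its parameters (desired speed, time headway, etc.) to match observed trajectories. A sketch with common textbook default values (not the calibrated values from this work):

```python
import math

# IDM acceleration: a = a_max * (1 - (v/v0)^delta - (s*/s)^2), where the
# desired gap s* = s0 + max(0, v*T + v*dv / (2*sqrt(a_max*b))).
# v: own speed [m/s], dv: approach rate v - v_leader [m/s], s: gap [m].
# Defaults are typical textbook values, not calibrated ones.
def idm_acceleration(v, dv, s, v0=33.3, T=1.6, a_max=0.73, b=1.67,
                     s0=2.0, delta=4):
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a_max * b)))
    return a_max * (1 - (v / v0) ** delta - (s_star / s) ** 2)

acc = idm_acceleration(v=20.0, dv=0.0, s=50.0)  # car following at 20 m/s
```

Calibration then minimizes an error measure (e.g. RMSE of simulated versus observed gaps) over (v0, T, a_max, b, s0), which is the optimization problem the NM, PSO, and GA algorithms solve.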
546

Development of sensor-based nitrogen recommendation algorithms for cereal crops

Asebedo, Antonio Ray January 1900 (has links)
Doctor of Philosophy / Department of Agronomy / David B. Mengel / Nitrogen (N) management is one of the most recognizable components of farming both within and outside the world of agriculture. Interest over the past decade has greatly increased in improving N management systems in corn (Zea mays) and winter wheat (Triticum aestivum) to have high nitrogen use efficiency (NUE), high yield, and be environmentally sustainable. Nine winter wheat experiments were conducted across seven locations from 2011 through 2013. The objectives of this study were to evaluate the impacts of fall-winter, Feekes 4, Feekes 7, and Feekes 9 N applications on winter wheat grain yield, grain protein, and total grain N uptake. Nitrogen treatments were applied as single or split applications in the fall-winter, and top-dressed in the spring at Feekes 4, Feekes 7, and Feekes 9, with applied N rates ranging from 0 to 134 kg ha⁻¹. Results indicate that Feekes 7 and 9 N applications provide more optimal combinations of grain yield, grain protein levels, and fertilizer N recovered in the grain when compared to comparable rates of N applied in the fall-winter or at Feekes 4. Winter wheat N management studies from 2006 through 2013 were utilized to develop sensor-based N recommendation algorithms for winter wheat in Kansas. Algorithm RosieKat v2.6 was designed for multiple N application strategies and utilized N reference strips for establishing N response potential. Algorithm NRS v1.5 addressed single top-dress N applications and does not require an N reference strip. In 2013, field validations of both algorithms were conducted at eight locations across Kansas. Results show algorithm RK v2.6 consistently provided highly efficient N recommendations for improving NUE while achieving high grain yield and grain protein. Without the use of the N reference strip, NRS v1.5 performed statistically equal to the KSU soil test N recommendation with regard to grain yield, but with lower applied N rates.
Six corn N fertigation experiments were conducted at KSU irrigated experiment fields from 2012 through 2014 to evaluate the previously developed KSU sensor-based N recommendation algorithm in corn N fertigation systems. Results indicate that the current KSU corn algorithm was effective at achieving high yields, but has the tendency to overestimate N requirements. To optimize sensor-based N recommendations for N fertigation systems, algorithms must be specifically designed for these systems to take advantage of their full capabilities, thus allowing implementation of high NUE N management systems.
547

Verification of FlexRay membership protocol using UPPAAL

Mudaliar, Vinodkumar Sekar January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / Safety-critical systems embedded in avionics and automotive systems are becoming increasingly complex. Components with different requirements typically share a common distributed platform for communication. To accommodate varied requirements, many of these distributed real-time systems use the FlexRay communication network. FlexRay supports both time-triggered and event-triggered communications. In such systems, it is vital to establish a consistent view of all the associated processes to handle fault tolerance. This task can be accomplished through the use of a process group membership protocol, and the protocol must provide a high level of assurance that it operates correctly. In this thesis, we verify one such protocol using model checking. Through this verification, we found that the protocol may remove nodes from the group of operational nodes in the communicating network at a fast rate. This may lead to exhaustion of the system resources by the protocol, hampering system performance. We determine allowable rates of failure that do not hamper system performance.
548

Optimized combination model and algorithm of parking guidance information configuration

Mei, Zhenyu, Tian, Ye January 2011 (has links)
Operators of parking guidance and information (PGI) systems often have difficulty in providing the best car park availability information to drivers in periods of high demand. A new PGI configuration model based on the optimized combination method was proposed by analyzing parking choice behavior. This article first describes a parking choice behavioral model incorporating drivers' perceptions of waiting times at car parks based on PGI signs. This model was used to predict the influence of PGI signs on the overall performance of the traffic system. Relationships were then developed for estimating the arrival rates at car parks based on driver characteristics, car park attributes, and the car park availability information displayed on PGI signs. A mathematical program was formulated to determine the optimal PGI sign display configuration that minimizes total travel time. A genetic algorithm was used to identify solutions that significantly reduced queue lengths and total travel time compared with existing practices. These procedures were applied to an existing PGI system operating in Deqing Town and Xiuning City. Significant reductions in the total travel time of parking vehicles were obtained with the PGI so configured, which would reduce traffic congestion and lead to various environmental benefits.
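The genetic-algorithm search over sign configurations follows the standard selection/crossover/mutation loop. A minimal skeleton, with a stand-in one-max fitness in place of the paper's travel-time objective (all parameters here are illustrative):

```python
import random

# Generic GA over fixed-length bitstrings: truncation selection, one-point
# crossover, bit-flip mutation. In the paper's setting a bitstring would
# encode a candidate PGI sign configuration and fitness would be the
# (negated) total travel time; here fitness is simply the number of 1 bits.
def ga(fitness, n_bits=16, pop_size=30, gens=60, p_mut=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # keep the better half
        pop = parents[:]
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]              # one-point crossover
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            pop.append(child)
    return max(pop, key=fitness)

best = ga(sum)  # one-max: the all-ones string is optimal
```

Swapping in a travel-time simulator as the fitness function turns this skeleton into the kind of optimizer the article applies.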
549

SOME MEASURED PERFORMANCE BOUNDS AND IMPLEMENTATION CONSIDERATIONS FOR THE LEMPEL-ZIV-WELCH DATA COMPACTION ALGORITHM

Jacobsen, H. D. 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California / The Lempel-Ziv-Welch (LZW) algorithm is a popular data compaction technique that has been adopted by CCITT in its V.42bis recommendation and is often implemented in association with the V.32 standard for 9600 bps modems. It has also been implemented as Microcom Networking Protocol (MNP) Level 7, where it goes by the name of Enhanced Data Compression. LZW compacts data by encoding frequently occurring input strings with a single output symbol. The algorithm automatically generates a string dictionary for each symbol at each end of the transmission path. The amount of compaction that can be derived with the LZW algorithm varies with the type of data being transmitted and the efficiency by which table entries can be indexed. Table indexing is usually implemented by use of a hashing table. Although some manufacturers advertise a 4-to-1 gain in throughput, this seems to be an extreme case. This paper documents an implementation of the exact LZW algorithm. The results presented in this paper are significantly less, typically on the order of 1-to-2 for ASCII text, with substantially less compaction for pre-compacted files or files containing random bit patterns. The efficiency of the LZW algorithm on ASCII text is shown to be a function of dictionary size and block size. Although fewer transmitted symbols are required for larger dictionary tables, the additional bits required for the symbol index are marginally greater than the efficiency that is gained. The net effect is that dictionary sizes beyond 2K are increasingly less efficient for input data block sizes of 10K or more. The author concludes that the algorithm could be implemented as a direct table look-up rather than through a hashing algorithm. This would allow LZW to be implemented with very simple firmware and with a maximum of hardware efficiency.
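The string-dictionary mechanism the paper measures can be sketched compactly. This version uses an unbounded Python dict for the table, unlike the fixed-size (e.g. 2K-entry) dictionaries whose trade-offs the paper analyzes:

```python
# Minimal LZW compressor: grow the current match s as long as s+c is in the
# dictionary; on a miss, emit the code for s, add s+c as a new dictionary
# entry, and restart the match from c. The decoder rebuilds the identical
# dictionary from the code stream alone.
def lzw_compress(data: bytes):
    table = {bytes([i]): i for i in range(256)}   # initial single-byte entries
    out, s = [], b""
    for byte in data:
        c = bytes([byte])
        if s + c in table:
            s += c                                # extend the current match
        else:
            out.append(table[s])                  # emit code for longest match
            table[s + c] = len(table)             # new entry gets next code
            s = c
    if s:
        out.append(table[s])
    return out

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")  # 24 bytes -> 16 codes
```

The paper's observation about index width applies directly: each emitted code must be wide enough to address the whole table, so growing the table past a point costs more bits per code than the longer matches save.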
550

Schema theory for gene expression programming

Huang, Zhengwen January 2014 (has links)
This thesis studied a new variant of evolutionary algorithms called Gene Expression Programming. The evolution process of Gene Expression Programming was investigated from practice to theory. At the practical level, the original version of Gene Expression Programming was applied to a classification problem, and an enhanced version of the algorithm was consequently developed. This allowed the development of a general understanding of each component of the separated genotype/phenotype representation of solutions employed by the algorithm. Based on this understanding, a version of schema theory was developed for Gene Expression Programming. The genetic modifications produced by each genetic operator employed by the algorithm were analysed, and a set of theorems predicting the propagation of schemata from one generation to the next was developed. A set of experiments was also performed to test the validity of the developed schema theory, obtaining good agreement between the experimental results and the theoretical predictions.
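The genotype/phenotype separation central to that schema theory comes from GEP's Karva notation: a linear gene whose leading symbols are read level by level into an expression tree. A minimal decoder (the gene and function set below are illustrative, not from the thesis):

```python
# Decode and evaluate a GEP gene written in Karva notation. Only a prefix of
# the gene (the open reading frame) is expressed; the rest is non-coding,
# which is what lets mutation act on the genotype without always changing
# the phenotype.
ARITY = {"+": 2, "*": 2, "a": 0, "b": 0, "c": 0}

def k_express(gene):
    """Return the expressed prefix: stop once every node's children exist."""
    needed = 1                       # tree positions still to be filled
    for i, sym in enumerate(gene):
        needed += ARITY[sym] - 1
        if needed == 0:
            return gene[: i + 1]
    raise ValueError("gene too short to close the tree")

def evaluate(gene, env):
    expr = k_express(gene)
    kids, nxt = {}, 1                # assign children in level order
    for i, sym in enumerate(expr):
        kids[i] = list(range(nxt, nxt + ARITY[sym]))
        nxt += ARITY[sym]
    def val(i):
        sym = expr[i]
        if sym == "+":
            return val(kids[i][0]) + val(kids[i][1])
        if sym == "*":
            return val(kids[i][0]) * val(kids[i][1])
        return env[sym]              # terminal symbol
    return val(0)

# Gene "+a*bcabbcc": only "+a*bc" is expressed, giving the tree a + b*c.
result = evaluate("+a*bcabbcc", {"a": 1, "b": 2, "c": 3})
```

A schema in this setting is a pattern over gene positions, and the theorems in the thesis predict how often such patterns survive the genetic operators, much as Holland's schema theorem does for genetic algorithms.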
