101

Enhanced computation time for fast block matching algorithm

Ahmed, Zaynab Anwer January 2013 (has links)
Video compression is the process of reducing the amount of data required to represent digital video while preserving an acceptable video quality. Recent studies on video compression have focused on multimedia transmission, videophones, teleconferencing, high-definition television (HDTV), CD-ROM storage, etc. The idea of compression techniques is to remove the redundant information that exists in video sequences. Motion-compensated predictive coding is the main coding tool for removing temporal redundancy from video sequences, and it typically accounts for 50-80% of video encoding complexity. This technique has been adopted by all of the existing international video coding standards. It assumes that the current frame can be locally modelled as a translation of the reference frames. The practical and widely used method for carrying out motion-compensated prediction is the block matching algorithm. In this method, video frames are divided into a set of non-overlapping macroblocks; each target macroblock of the current frame is compared with the search area in the reference frame in order to find the best matching macroblock. This yields displacement vectors that describe the movement of the macroblocks from one location to another in the reference frame. Checking all these locations is called full search, which provides the best result. However, this algorithm suffers from long computational time, which necessitates improvement. Several fast block matching algorithms have been developed to reduce the computational complexity. This thesis focuses on two classifications: the first is the lossless block matching algorithm process, in which the computational time required to determine the matching macroblock of the full search is decreased while the resolution of the predicted frames is the same as for the full search. The second is the lossy block matching algorithm process, which reduces the computational complexity effectively, but the search result's quality is not the same as for the full search.
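For context on the baseline these fast algorithms try to beat, below is a minimal Python sketch (not code from the thesis) of exhaustive full search using the common sum-of-absolute-differences (SAD) criterion; the 16x16 block size and ±7 search range are illustrative defaults, not values taken from the thesis.

```python
# Illustrative sketch (not from the thesis): exhaustive full-search block
# matching with the sum of absolute differences (SAD) criterion.
import numpy as np

def full_search(current, reference, block=16, search_range=7):
    """Return one motion vector (dy, dx) per macroblock of `current`."""
    h, w = current.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            # Widen to int64 so uint8 pixel differences cannot overflow.
            target = current[by:by + block, bx:bx + block].astype(np.int64)
            best_sad, best_mv = None, (0, 0)
            # Check every displacement in the window (the "full search").
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = reference[y:y + block,
                                         x:x + block].astype(np.int64)
                        sad = np.abs(target - cand).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors
```

The nested displacement loops are exactly where the cost comes from: every macroblock is compared against (2r+1)^2 candidate positions, which is what the fast algorithms in the thesis aim to reduce.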
102

Quantum Algorithm Animator

Nicholson, Lori Eileen 01 January 2010 (has links)
The design and development of quantum algorithms present a challenge, especially for inexperienced computer science students. Despite the numerous concepts shared with classical computer science, quantum computation is still considered a branch of theoretical physics not commonly used by computer scientists. Experimental research into the development of a quantum computer makes the use of quantum mechanics in organizing computation more attractive; however, the physical realization of a working quantum computer may still be decades away. This study introduces quantum computing to computer science students using a quantum algorithm animator called QuAL. QuAL's design uses features common to classical algorithm animators, guided by an exploratory study but refined to animate the esoteric and interesting aspects of quantum algorithms. In addition, this study investigates the potential for the animation of a quantum sorting algorithm to help novice computer science students understand the formidable concepts of quantum computing. The animations focus on the concepts required to understand enough about quantum algorithms to entice student interest and promote the integration of quantum computational concepts into computer science applications and curricula. The experimental case study showed no significant improvement in student learning when using QuAL's initial prototype. Possible reasons include the animator's presentation of concepts and the study's pedagogical framework, such as the choice of algorithm (Wallace and Narayanan's sorting algorithm), the design of pre- and post-tests, and the study's small size (20 students) and brief duration (2 hours). Nonetheless, the animation system was well received by students. Future work includes enhancing this animation tool for illustrating elusive concepts in quantum computing.
103

Survey of Approximation Algorithms for Set Cover Problem

Dutta, Himanshu Shekhar 12 1900 (has links)
In this thesis, I survey 11 approximation algorithms for the unweighted set cover problem. I have also implemented three of the algorithms and created a software library that stores the code I have written. The algorithms I survey are: 1. Johnson's standard greedy; 2. f-frequency greedy; 3. Goldschmidt, Hochbaum and Yu's modified greedy; 4. Halldórsson's local optimization; 5. Duh and Fürer's semi-local optimization; 6. Asaf Levin's improvement to Duh and Fürer; 7. Simple rounding; 8. Randomized rounding; 9. LP duality; 10. Primal-dual schema; and 11. Network flow technique. Most of the algorithms surveyed are refinements of the standard greedy algorithm.
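For orientation, here is a minimal Python sketch (not the thesis's software library) of the first algorithm on the list, Johnson's standard greedy: repeatedly pick the set that covers the most still-uncovered elements, which yields the classical H(n) ≈ ln n approximation guarantee.

```python
# Illustrative sketch (not from the thesis): Johnson's standard greedy for
# the unweighted set cover problem.
def greedy_set_cover(universe, subsets):
    """Greedily choose subsets until every element of `universe` is covered."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Pick the subset covering the most still-uncovered elements.
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("subsets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# Example: the greedy picks {1, 2, 3} first, then {4, 5}.
print(greedy_set_cover({1, 2, 3, 4, 5},
                       [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
```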
104

A software visualization-based approach for understanding and analyzing incremental implementations of complex graph-based algorithms

Jiaxin Sun (8802671) 06 May 2020 (has links)
Algorithms have always been a challenging topic for students to learn because of their high level of abstraction. To provide visual aid for algorithm education, many algorithm visualization (AV) systems have been designed, developed, and evaluated over the last two decades. However, neither the topics covered nor the interactivity of most AV systems is satisfying. This problem is presented in detail in chapter 2. As a result, this research aims to design, implement, and evaluate a compiler-based algorithm visualization system for complex graph algorithm implementations, with the assumption that it can help students build both confidence and competence in understanding them. The system is designed and developed according to the method in chapter 3. To test the hypothesis, a comparison experiment on 10 students in the Computer Graphics Technology department was conducted. The complete test protocol can be found in chapter 3.4, and the results can be found in chapter 4. Based on the limited number of subjects' testing data, a rough conclusion is drawn that this AV system has only a slight positive effect on subjects' confidence and competence in understanding the implementation of complex graph algorithms, and that its usability is acceptable. However, a concrete conclusion can only be reached if the testing is conducted on a larger group of subjects. In addition to the objective testing data, some interesting subjective observations, which are listed in chapter 5.2, were also made while conducting the tests. These observations indicate that algorithm visualization may be more of a tool for examining users' understanding of an implementation than a tool to help them learn it.
105

Comparison of bioinspired algorithms applied to the timetabling problem

Silva, Jose, Varela, Noel, Varas, Jesus, Lezama, Omar, Maco, José, Villón, Martín 01 January 2021 (has links)
The problem of timetabling events is present in various organizations such as schools, hospitals, and transportation centers. The purpose of timetabling activities at a university is to ensure that all students attend their required subjects in accordance with the available resources. The set of constraints that must be considered in the design of timetables involves students, teachers, and infrastructure. This study shows that acceptable solutions are generated through the application of genetic, memetic, and immune system algorithms to the problem of timetabling. The algorithms are applied to real instances from the University of Mumbai in India, and their results are comparable with those of a human expert. / Peer reviewed
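To make the approach concrete, here is a bare-bones genetic algorithm for timetabling in Python; this is an illustrative sketch, not the paper's implementation, and the encoding (one (slot, room) pair per event), the clash-counting fitness, and all parameter values are assumptions made for the example.

```python
# Illustrative sketch (not from the paper): a minimal genetic algorithm
# for timetabling. A candidate assigns each event a (slot, room); fitness
# penalizes clashes, i.e. two events sharing the same slot and room.
import random

def fitness(timetable):
    # Fewer clashes is better, so return the negated clash count.
    seen, clashes = set(), 0
    for slot_room in timetable:
        clashes += slot_room in seen
        seen.add(slot_room)
    return -clashes

def genetic_timetable(n_events, n_slots, n_rooms,
                      pop_size=50, generations=200):
    rand_tt = lambda: [(random.randrange(n_slots), random.randrange(n_rooms))
                       for _ in range(n_events)]
    pop = [rand_tt() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_events)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:            # mutation
                child[random.randrange(n_events)] = (
                    random.randrange(n_slots), random.randrange(n_rooms))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

A memetic variant would add a local-search step to each child, and an immune-system variant would replace selection with affinity-based cloning; the overall generate-evaluate-vary loop stays the same.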
106

Development, improvement and assessment of image classification and probability mapping algorithms

Wang, Qing 01 December 2018 (has links) (PDF)
Remotely sensed imagery is one of the most important data sources for large-scale and multi-temporal agricultural, forestry, soil, environmental, social, and economic applications. In order to accurately extract useful thematic information about the earth's surface from images, various techniques and methods have been developed. The methods can be divided into parametric and non-parametric based on the requirement of data distribution, into global and local based on the characteristics of modeling global trends and local variability, into unsupervised and supervised based on whether training data are required, and into design-based and model-based in terms of the theory on which the estimators are developed. The methods have their own disadvantages that impede the improvement of estimation accuracy. Thus, developing novel methods and improving the existing methods are needed. This dissertation focused on the development of a feature-space indicator simulation (FSIS), the improvement of geographically weighted sigmoidal simulation (GWSS) and k-nearest neighbors (kNN), and their assessment for land use and land cover (LULC) classification and probability (fraction) mapping of percentage vegetation cover (PVC) in Duolun County, Xilingol League, Inner Mongolia, China. The FSIS employs an indicator simulation in a high-dimensional feature space and extends the derivation of indicator variograms from geographic space to feature space, leading to feature space indicator variograms (FSIVs), to circumvent the issues existing in traditional indicator simulation in geostatistics. The GWSS is a stochastic probability mapping method that accounts for spatially nonstationary sample data and the local variation of the variable of interest. The improved kNN, called optimal k-nearest neighbors (OkNN), searches for an optimal number of nearest neighbors at each location based on local variability, and can be used for both classification and probability mapping. The three methods were validated and compared with several widely used approaches for LULC classification and PVC mapping in the study area. The datasets used in the study included a Landsat 8 image and a total of 920 field plots. The results showed that 1) compared with maximum likelihood classification (ML), support vector machines (SVM), and random forests (RF), the proposed FSIS classifier led to statistically significantly higher classification accuracy for six LULC types (water, agricultural land, grassland, bare soil, built-up area, and forested area); 2) compared with linear regression (LR), polynomial regression (PR), sigmoidal regression (SR), geographically weighted regression (GWR), and geographically weighted polynomial regression (GWPR), GWSS not only resulted in more accurate estimates of PVC but also greatly reduced the underestimation and overestimation of PVC for small and large values, respectively; 3) most of the vegetation indices derived from the red and near-infrared bands contributed significantly to improving the accuracy of mapping PVC; 4) OkNN resulted in spatially variable, optimized k values and higher prediction accuracy of PVC than the global methods; and 5) the range parameter of the FSIVs was the major factor that spatially affected the classification accuracy of LULC types, but the FSIVs were less sensitive to the number of training samples. Thus, the results answered all six research questions proposed.
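As a rough illustration of the locally adaptive idea behind OkNN (this sketch is not the dissertation's algorithm), k can be chosen separately at each query location by minimizing leave-one-out error over the nearest training plots; the candidate k values and local neighborhood size below are arbitrary assumptions.

```python
# Illustrative sketch (not the dissertation's OkNN): kNN prediction where
# k is optimized per query location using leave-one-out error on the
# training samples nearest to that location.
import numpy as np

def knn_predict(X_train, y_train, x, k):
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argsort(d)[:k]].mean()

def oknn_predict(X_train, y_train, x, k_candidates=(3, 5, 9, 15), n_local=30):
    # Score each candidate k by leave-one-out error over the n_local
    # training samples nearest to the query point x.
    d = np.linalg.norm(X_train - x, axis=1)
    local = np.argsort(d)[:n_local]
    best_k, best_err = k_candidates[0], np.inf
    for k in k_candidates:
        errs = []
        for i in local:
            mask = np.arange(len(X_train)) != i  # leave sample i out
            pred = knn_predict(X_train[mask], y_train[mask], X_train[i], k)
            errs.append((pred - y_train[i]) ** 2)
        if np.mean(errs) < best_err:
            best_k, best_err = k, np.mean(errs)
    return knn_predict(X_train, y_train, x, best_k), best_k
```

The payoff of this structure is visible in result 4) above: k varies over space, so smooth regions can use a large k while heterogeneous regions keep k small.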
107

Improving Dynamic Navigation Algorithms

Yue, Weiya, Ph.D. 30 September 2013 (has links)
No description available.
108

Reconstruction of the Temperature Profile Along a Blackbody Optical Fiber Thermometer

Barker, David Gary 08 April 2003 (has links) (PDF)
A blackbody optical fiber thermometer consists of an optical fiber whose sensing tip is given a metallic coating. The sensing tip of the fiber forms an isothermal cavity, and the emission from this cavity is approximately equal to the emission from a blackbody. Standard two-color optical fiber thermometry involves measuring the spectral intensity at the end of the fiber at two wavelengths. The temperature at the sensing tip of the fiber can then be inferred using Planck's law and the ratio of the spectral intensities. If, however, the length of the optical fiber is exposed to elevated temperatures, erroneous temperature measurements will occur due to emission by the fiber. This thesis presents a method to account for emission by the fiber and accurately infer the temperature at the tip of the optical fiber. Additionally, an estimate of the temperature profile along the fiber may be obtained. A mathematical relation for radiation transfer down the optical fiber is developed. The radiation exiting the fiber and the temperature profile along the fiber are related to the detector signal by a signal measurement equation. Since the temperature profile cannot be solved for directly using the signal measurement equation, two inverse minimization techniques are developed to find the temperature profile. Simulated temperature profile reconstructions show that the techniques produce valid and unique results. Tip temperatures are reconstructed to within 1.0%. Experimental results are also presented. Due to the limitations of the detection system and the optical fiber probe, the uncertainty in the signal measurement equation is high; the limitations of the laboratory furnace and the optical detector likewise make the measurement uncertainty high. This leads to reconstructions that are not always accurate. Even though the temperature profiles are not completely accurate, the tip temperatures are reconstructed to within 1%, a significant improvement over the standard two-color technique under the same conditions. Improvements are recommended that will decrease both the measurement uncertainty and the uncertainty in the signal measurement equation, leading to the development of a reliable and accurate temperature measurement device.
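For reference, the standard two-color technique that the thesis improves upon has a closed-form inversion under Wien's approximation of Planck's law; the sketch below is a textbook illustration, not the thesis's reconstruction method.

```python
# Illustrative sketch (not from the thesis): standard two-color temperature
# inference. Under Wien's approximation of Planck's law,
#     I(lam, T) ~ C1 * lam**-5 * exp(-C2 / (lam * T)),
# the intensity ratio at two wavelengths solves for T in closed form:
#     T = C2 * (1/lam1 - 1/lam2) / (5 * ln(lam2/lam1) - ln(ratio))
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def two_color_temperature(ratio, lam1, lam2):
    """Temperature (K) from ratio = I(lam1)/I(lam2); wavelengths in metres."""
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (
        5.0 * math.log(lam2 / lam1) - math.log(ratio))

# Sanity check: synthesize the ratio at T = 1500 K and recover it.
lam1, lam2, T = 0.7e-6, 0.9e-6, 1500.0
ratio = (lam2 / lam1) ** 5 * math.exp(-C2 / T * (1 / lam1 - 1 / lam2))
print(two_color_temperature(ratio, lam1, lam2))  # ~1500.0
```

Wien's approximation is accurate when the product of wavelength and temperature is much smaller than C2, which holds comfortably at visible and near-infrared wavelengths for furnace-level temperatures.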
109

An Evolutionary Algorithm for Matrix-Variate Model-Based Clustering

Flynn, Thomas J. January 2023 (has links)
Model-based clustering is the use of finite mixture models to identify underlying group structures in data. Estimating parameters for mixture models is notoriously difficult, with the expectation-maximization (EM) algorithm being the predominant method. An alternative approach is the evolutionary algorithm (EA), which emulates natural selection on a population of candidate solutions. By leveraging a fitness function and genetic operators like crossover and mutation, EAs offer a distinct way to search the likelihood surface. EAs have been developed for model-based clustering in the multivariate setting; however, there is a growing interest in matrix-variate distributions for three-way data applications. In this context, we propose an EA for finite mixtures of matrix-variate distributions. / Thesis / Master of Science (MSc)
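To illustrate the fitness/crossover/mutation loop the abstract describes, here is a toy EA in Python for a univariate two-component Gaussian mixture, with the log-likelihood as the fitness function; this is far simpler than the thesis's matrix-variate setting, and all operators and parameter values are illustrative assumptions.

```python
# Illustrative sketch (much simpler than the thesis's matrix-variate case):
# an EA that searches mixture parameters directly, with log-likelihood
# as the fitness function.
import math
import random

def loglik(params, data):
    # Two-component univariate Gaussian mixture: (pi, mu1, s1, mu2, s2).
    pi, m1, s1, m2, s2 = params
    total = 0.0
    for x in data:
        p1 = math.exp(-0.5 * ((x - m1) / s1) ** 2) / (s1 * math.sqrt(2 * math.pi))
        p2 = math.exp(-0.5 * ((x - m2) / s2) ** 2) / (s2 * math.sqrt(2 * math.pi))
        total += math.log(pi * p1 + (1 - pi) * p2 + 1e-300)
    return total

def evolve(data, pop_size=40, generations=300):
    rand = lambda: [random.uniform(0.1, 0.9), random.gauss(0, 3),
                    random.uniform(0.5, 3), random.gauss(0, 3),
                    random.uniform(0.5, 3)]
    pop = [rand() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: loglik(p, data), reverse=True)
        elite = pop[:pop_size // 2]
        pop = elite + [
            # Blend crossover of two elites plus Gaussian mutation.
            [random.gauss((a + b) / 2, 0.05 * (abs(a - b) + 0.1))
             for a, b in zip(*random.sample(elite, 2))]
            for _ in range(pop_size - len(elite))]
        for p in pop:  # keep scales positive and the weight in (0, 1)
            p[2], p[4] = max(p[2], 0.05), max(p[4], 0.05)
            p[0] = min(max(p[0], 0.01), 0.99)
    return max(pop, key=lambda p: loglik(p, data))
```

Unlike EM, which climbs deterministically from one starting point, the population-based search can hop between basins of the likelihood surface, which is the appeal noted in the abstract.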
110

USER ACTIVITY TRACKER USING ANDROID SENSOR

Song, Chenxi 15 May 2015 (has links)
No description available.
