101

Economical Aspects of Resource Allocation under Discounts

January 2015 (has links)
abstract: Resource allocation is one of the most challenging issues policy decision makers must address. The objective of this thesis is to explore resource allocation from an economic perspective, i.e., how to purchase resources in order to satisfy customers' requests. In this thesis, we attempt to answer the question: when and how should resources be bought to fulfill customers' demands at minimum cost? The first topic studied in this thesis is resource allocation in cloud networks. Cloud computing heralded an era where resources (such as computation and storage) can be scaled up and down elastically and on demand. This flexibility is attractive for its cost effectiveness: the cloud resource price depends on the actual utilization over time. This thesis studies two critical problems in cloud networks, focusing on the economic aspects of resource allocation in cloud/virtual networks, and proposes six algorithms to address the resource allocation problems under different discount models. The first problem addresses a scenario where the virtual network provider offers different contracts to the service provider. Four algorithms for resource contract migration are proposed under two pricing models: Pay-as-You-Come and Pay-as-You-Go. The second problem explores a scenario where a cloud provider offers k contracts, each with its own duration and rate, and a customer buys these contracts in order to satisfy its resource demand. This work shows that this problem can be seen as a 2-dimensional generalization of the classic online parking permit problem, and presents a k-competitive online algorithm and an optimal online algorithm. The second topic studied in this thesis explores how resource allocation and purchasing strategies work in our daily life. For example, is it worth buying a yoga pass that costs USD 100 for ten entries, although it will expire at the end of the year? Decisions like these are part of our daily life, yet little is known today about good online strategies for buying discount vouchers with expiration dates. This work hence introduces the Discount Voucher Purchase Problem (DVPP), which aims to optimize strategies for buying discount vouchers, i.e., coupons, vouchers, and groupons that are valid only during a certain time period. The DVPP comes in three flavors: (1) Once Expire Lose Everything (OELE): vouchers lose their entire value after expiration; (2) Once Expire Lose Discount (OELD): vouchers lose their discount value after expiration; (3) Limited Purchasing Window (LPW): vouchers have the property of OELE and can only be bought during a certain time window. This work explores online algorithms with a provable competitive ratio against a clairvoyant offline algorithm, even in the worst case. In particular, it makes the following contributions: a 4-competitive algorithm for OELE, an 8-competitive algorithm for OELD, and a lower bound for LPW. It also presents an optimal offline algorithm for OELE and LPW, and shows that it is a 2-approximation for OELD. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2015
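To make the setting concrete, here is a minimal sketch of the classic break-even ("ski rental") rule that underlies online purchasing problems of this kind. The function name and the flat-price cost model are illustrative assumptions; the thesis's own OELE/OELD algorithms and their competitive analyses are not reproduced here.

```python
def online_breakeven(requests, unit_price, voucher_price):
    """Break-even (ski-rental) rule: pay per use until the money spent
    reaches the voucher price, then buy the voucher. This simple
    deterministic strategy is 2-competitive against an offline optimum;
    vouchers with expiration dates and partial discounts (OELE/OELD)
    require the more refined strategies studied in the thesis."""
    spent, total, have_voucher = 0.0, 0.0, False
    for _ in requests:
        if have_voucher:
            continue  # the voucher covers further requests at no extra cost
        if spent + unit_price >= voucher_price:
            total += voucher_price  # buy the voucher at the break-even point
            have_voucher = True
        else:
            spent += unit_price
            total += unit_price
    return total

# Example: 15 yoga classes at USD 12 each vs. a USD 100 pass.
print(online_breakeven(range(15), 12.0, 100.0))  # 196.0 vs. offline optimum 100.0
```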
102

Enhanced computation time for fast block matching algorithm

Ahmed, Zaynab Anwer January 2013 (has links)
Video compression is the process of reducing the amount of data required to represent digital video while preserving an acceptable video quality. Recent studies on video compression have focused on multimedia transmission, videophones, teleconferencing, high-definition television (HDTV), CD-ROM storage, etc. The idea of compression techniques is to remove the redundant information that exists in video sequences. Motion-compensated predictive coding is the main coding tool for removing the temporal redundancy of video sequences, and it typically accounts for 50-80% of video encoding complexity. This technique has been adopted by all of the existing international video coding standards. It assumes that the current frame can be locally modelled as a translation of the reference frames. The practical and widely used method for carrying out motion-compensated prediction is the block matching algorithm. In this method, video frames are divided into a set of non-overlapping macroblocks; each target macroblock of the current frame is compared with the search area in the reference frame in order to find the best matching macroblock. This yields displacement vectors that describe the movement of the macroblocks from one location to another in the reference frame. Checking all candidate locations is called full search, which provides the best result but suffers from a long computational time, which necessitates improvement. Several fast block matching algorithms have been developed to reduce the computational complexity. This thesis focuses on two classifications: the first is the lossless block matching algorithm process, in which the computational time required to determine the matching macroblock of the full search is decreased while the resolution of the predicted frames is the same as for the full search. The second is the lossy block matching algorithm process, which reduces the computational complexity effectively, but the quality of the search result is not the same as for the full search.
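For reference, here is a sketch of the exhaustive full-search baseline described above, using the common sum-of-absolute-differences (SAD) cost. The function name, block size, and search range are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def full_search(current, reference, block_xy, block_size=16, search_range=7):
    """Exhaustive full-search block matching with SAD cost: every
    candidate displacement within the search window is checked, which
    is optimal in quality but slow; this is the computational burden
    that fast block matching algorithms reduce."""
    y0, x0 = block_xy
    block = current[y0:y0 + block_size, x0:x0 + block_size].astype(np.int32)
    h, w = reference.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + block_size > h or x + block_size > w:
                continue  # candidate block falls outside the reference frame
            cand = reference[y:y + block_size, x:x + block_size].astype(np.int32)
            sad = np.abs(block - cand).sum()  # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```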
103

Quantum Algorithm Animator

Nicholson, Lori Eileen 01 January 2010 (has links)
The design and development of quantum algorithms present a challenge, especially for inexperienced computer science students. Despite the numerous concepts shared with classical computer science, quantum computation is still considered a branch of theoretical physics not commonly used by computer scientists. Experimental research into the development of a quantum computer makes the use of quantum mechanics in organizing computation more attractive; however, the physical realization of a working quantum computer may still be decades away. This study introduces quantum computing to computer science students using a quantum algorithm animator called QuAL. QuAL's design uses features common to classical algorithm animators, guided by an exploratory study but refined to animate the esoteric and interesting aspects of quantum algorithms. In addition, this study investigates the potential for the animation of a quantum sorting algorithm to help novice computer science students understand the formidable concepts of quantum computing. The animations focus on the concepts required to understand enough about quantum algorithms to entice student interest and promote the integration of quantum computational concepts into computer science applications and curricula. The experimental case study showed no significant improvement in student learning when using QuAL's initial prototype. Possible reasons include the animator's presentation of concepts and the study's pedagogical framework, such as the choice of algorithm (Wallace and Narayanan's sorting algorithm), the design of the pre- and post-tests, and the study's small size (20 students) and brief duration (2 hours). Nonetheless, the animation system was well received by students. Future work includes enhancing this animation tool for illustrating elusive concepts in quantum computing.
104

Survey of Approximation Algorithms for Set Cover Problem

Dutta, Himanshu Shekhar 12 1900 (has links)
In this thesis, I survey 11 approximation algorithms for the unweighted set cover problem. I have also implemented three of the algorithms and created a software library that stores the code I have written. The algorithms I survey are: 1. Johnson's standard greedy; 2. f-frequency greedy; 3. Goldsmidt, Hochbaum and Yu's modified greedy; 4. Halldorsson's local optimization; 5. Dur and Furer's semi-local optimization; 6. Asaf Levin's improvement to Dur and Furer; 7. simple rounding; 8. randomized rounding; 9. LP duality; 10. primal-dual schema; and 11. the network flow technique. Most of the algorithms surveyed are refinements of the standard greedy algorithm.
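As a concrete reference point for the survey, here is a minimal sketch of Johnson's standard greedy (algorithm 1 above), which achieves an H(n) ≈ ln n approximation ratio for the unweighted problem; the function name and example instance are illustrative.

```python
def greedy_set_cover(universe, subsets):
    """Johnson's standard greedy: repeatedly pick the subset covering
    the most still-uncovered elements, until the universe is covered.
    Yields an H(n) ~ ln(n) approximation for unweighted set cover."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("subsets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# Example: cover {1..5} with three candidate subsets.
print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {4, 5}]))
```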
105

A software visualization-based approach for understanding and analyzing incremental implementations of complex graph-based algorithms

Jiaxin Sun (8802671) 06 May 2020 (has links)
Algorithms have always been a challenging topic for students to learn because of their high level of abstraction. To provide visual aids for algorithm education, many algorithm visualization (AV) systems have been designed, developed, and evaluated over the last two decades. However, neither the topics covered nor the interactivity of most AV systems is satisfying. This problem is presented in detail in chapter 2. As a result, this research aims to design, implement, and evaluate a compiler-based algorithm visualization system for complex graph algorithm implementations, with the assumption that it can help students build both confidence and competence in understanding them. This system is designed and developed according to the method in chapter 3. To test the hypothesis, a comparison experiment with 10 students in the Computer Graphics Technology department was conducted. The complete test protocol can be found in chapter 3.4, and the results can be found in chapter 4. Based on the limited number of subjects' testing data, a rough conclusion is drawn that this AV system has only a slight positive effect on subjects' confidence and competence in understanding the implementation of complex graph algorithms, and that its usability is acceptable. However, a firm conclusion can only be reached if the testing is conducted with a larger group of subjects. In addition to the objective testing data, some interesting subjective observations, listed in chapter 5.2, were also made while conducting the tests. These observations indicate that algorithm visualization may be more of a tool for examining users' understanding of the implementation than a tool for helping them learn it.
106

Comparison of bioinspired algorithms applied to the timetabling problem

Silva, Jose, Varela, Noel, Varas, Jesus, Lezama, Omar, Maco, José, Villón, Martín 01 January 2021 (has links)
The problem of timetabling events is present in various organizations such as schools, hospitals, and transportation centers. The purpose of timetabling activities at a university is to ensure that all students attend their required subjects in accordance with the available resources. The set of constraints that must be considered in the design of timetables involves students, teachers, and infrastructure. This study shows that acceptable solutions are generated through the application of genetic, memetic, and immune-system algorithms to the timetabling problem. The algorithms are applied to real instances from the University of Mumbai in India, and their results are comparable with those of a human expert. / Peer reviewed
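A minimal sketch of the genetic-algorithm approach to timetabling may help fix ideas. The event/slot encoding, the conflict-count fitness, and all parameters below are illustrative assumptions, not the study's actual formulation.

```python
import random

def genetic_timetable(num_events, num_slots, conflicts, pop=50, gens=200):
    """Minimal genetic-algorithm skeleton for timetabling: a candidate
    assigns each event a time slot, fitness counts violated conflict
    pairs (events that must not share a slot), and selection, one-point
    crossover, and random mutation evolve the population."""
    def violations(tt):
        return sum(tt[a] == tt[b] for a, b in conflicts)

    population = [[random.randrange(num_slots) for _ in range(num_events)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=violations)
        if violations(population[0]) == 0:
            break  # feasible timetable found
        survivors = population[:pop // 2]  # truncation selection
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, num_events)
            child = p1[:cut] + p2[cut:]             # one-point crossover
            i = random.randrange(num_events)
            child[i] = random.randrange(num_slots)  # mutation
            children.append(child)
        population = survivors + children
    return min(population, key=violations)
```

A memetic variant would add a local-search step to each child; an immune-system variant replaces selection with clonal expansion of high-affinity candidates.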
107

Development, improvement and assessment of image classification and probability mapping algorithms

Wang, Qing 01 December 2018 (has links) (PDF)
Remotely sensed imagery is one of the most important data sources for large-scale and multi-temporal agricultural, forestry, soil, environmental, social, and economic applications. In order to accurately extract useful thematic information about the earth's surface from images, various techniques and methods have been developed. The methods can be divided into parametric and non-parametric, based on the requirements on data distribution; into global and local, based on whether they model global trends or local variability; into unsupervised and supervised, based on whether training data are required; and into design-based and model-based, in terms of the theory on which the estimators are developed. Each of these methods has its own disadvantages that impede improvements in estimation accuracy. Thus, developing novel methods and improving the existing methods are needed. This dissertation focused on the development of a feature-space indicator simulation (FSIS), the improvement of geographically weighted sigmoidal simulation (GWSS) and k-nearest neighbors (kNN), and their assessment for land use and land cover (LULC) classification and probability (fraction) mapping of percentage vegetation cover (PVC) in Duolun County, Xilingol League, Inner Mongolia, China. The FSIS employs an indicator simulation in a high-dimensional feature space and extends the derivation of indicator variograms from geographic space to feature space, leading to feature-space indicator variograms (FSIVs), to circumvent the issues existing in traditional indicator simulation in geostatistics. The GWSS is a stochastic probability mapping method that accounts for spatially nonstationary sample data and the local variation of the variable of interest. The improved kNN, called optimal k-nearest neighbors (OkNN), searches for an optimal number of nearest neighbors at each location based on local variability, and can be used for both classification and probability mapping. The three methods were validated and compared with several widely used approaches for LULC classification and PVC mapping in the study area. The datasets used in the study included a Landsat 8 image and a total of 920 field plots. The results showed that: 1) compared with maximum likelihood classification (ML), support vector machines (SVM), and random forests (RF), the proposed FSIS classifier led to statistically significantly higher classification accuracy for six LULC types (water, agricultural land, grassland, bare soil, built-up area, and forested area); 2) compared with linear regression (LR), polynomial regression (PR), sigmoidal regression (SR), geographically weighted regression (GWR), and geographically weighted polynomial regression (GWPR), GWSS not only resulted in more accurate estimates of PVC, but also greatly reduced the underestimation and overestimation of PVC for small and large values, respectively; 3) most of the vegetation indices derived from the red and near-infrared bands contributed significantly to improving the accuracy of PVC mapping; 4) OkNN resulted in spatially variable and optimized k values and higher prediction accuracy of PVC than the global methods; and 5) the range parameter of the FSIVs was the major factor that spatially affected the classification accuracy of the LULC types, but the FSIVs were less sensitive to the number of training samples. Thus, the results answered all six research questions proposed.
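For orientation, here is a sketch of the plain (global-k) kNN estimator for fraction mapping that OkNN refines; the inverse-distance weighting and all names are illustrative assumptions, and the per-location optimization of k is only noted in a comment.

```python
import numpy as np

def knn_fraction(query_features, plot_features, plot_pvc, k=5):
    """Plain kNN estimate of percentage vegetation cover (PVC): predict
    a pixel's PVC as the inverse-distance-weighted mean of its k nearest
    field plots in spectral feature space. An OkNN-style variant would
    additionally tune k per location based on local variability."""
    d = np.linalg.norm(plot_features - query_features, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)  # avoid division by zero at distance 0
    return float(np.sum(w * plot_pvc[nearest]) / np.sum(w))

# Example with synthetic data: 920 plots, 6 spectral features, one query pixel.
rng = np.random.default_rng(0)
plots = rng.random((920, 6))
pvc = rng.random(920) * 100.0
print(knn_fraction(rng.random(6), plots, pvc, k=7))
```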
108

Improving Dynamic Navigation Algorithms

Yue, Weiya, Ph.D. 30 September 2013 (has links)
No description available.
109

Reconstruction of the Temperature Profile Along a Blackbody Optical Fiber Thermometer

Barker, David Gary 08 April 2003 (has links) (PDF)
A blackbody optical fiber thermometer consists of an optical fiber whose sensing tip is given a metallic coating. The sensing tip of the fiber forms an isothermal cavity, and the emission from this cavity is approximately equal to the emission from a blackbody. Standard two-color optical fiber thermometry involves measuring the spectral intensity at the end of the fiber at two wavelengths. The temperature at the sensing tip of the fiber can then be inferred using Planck's law and the ratio of the spectral intensities. If, however, the length of the optical fiber is exposed to elevated temperatures, erroneous temperature measurements will occur due to emission by the fiber. This thesis presents a method to account for emission by the fiber and accurately infer the temperature at the tip of the optical fiber. Additionally, an estimate of the temperature profile along the fiber may be obtained. A mathematical relation for radiation transfer down the optical fiber is developed. The radiation exiting the fiber and the temperature profile along the fiber are related to the detector signal by a signal measurement equation. Since the temperature profile cannot be solved for directly using the signal measurement equation, two inverse minimization techniques are developed to find the temperature profile. Simulated temperature profile reconstructions show that the techniques produce valid and unique results, with tip temperatures reconstructed to within 1.0%. Experimental results are also presented. Due to the limitations of the detection system and the optical fiber probe, the uncertainty in the signal measurement equation is high; the limitations of the laboratory furnace and the optical detector likewise make the measurement uncertainty high. This leads to reconstructions that are not always accurate. Even though the temperature profiles are not completely accurate, the tip temperatures are reconstructed to within 1%, a significant improvement over the standard two-color technique under the same conditions. Improvements are recommended that will decrease both the measurement uncertainty and the uncertainty in the signal measurement equation, leading toward a reliable and accurate temperature measurement device.
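The standard two-color inference that the thesis improves on can be sketched directly from Planck's law in the Wien approximation: the ratio of spectral intensities at two wavelengths is inverted for the cavity temperature. The wavelengths and temperature below are illustrative values, not those of the thesis's apparatus.

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def planck_wien(lam, T):
    """Wien approximation to Planck's law (valid when lam*T << C2);
    constant factors cancel in the two-color ratio and are omitted."""
    return lam ** -5 * math.exp(-C2 / (lam * T))

def two_color_temperature(ratio, lam1, lam2):
    """Standard two-color inference: invert the Wien-approximated
    intensity ratio I(lam1,T)/I(lam2,T) for the cavity temperature T.
    Emission along a heated fiber biases this estimate, which is the
    error the thesis's inverse reconstruction corrects."""
    return C2 * (1 / lam1 - 1 / lam2) / (5 * math.log(lam2 / lam1) - math.log(ratio))

# Round trip at 1500 K with 0.9 um and 1.05 um detection wavelengths.
lam1, lam2, T = 0.9e-6, 1.05e-6, 1500.0
r = planck_wien(lam1, T) / planck_wien(lam2, T)
print(two_color_temperature(r, lam1, lam2))  # ~1500.0
```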
110

An Evolutionary Algorithm for Matrix-Variate Model-Based Clustering

Flynn, Thomas J. January 2023 (has links)
Model-based clustering is the use of finite mixture models to identify underlying group structures in data. Estimating parameters for mixture models is notoriously difficult, with the expectation-maximization (EM) algorithm being the predominant method. An alternative approach is the evolutionary algorithm (EA), which emulates natural selection on a population of candidate solutions. By leveraging a fitness function and genetic operators like crossover and mutation, EAs offer a distinct way to search the likelihood surface. EAs have been developed for model-based clustering in the multivariate setting; however, there is growing interest in matrix-variate distributions for three-way data applications. In this context, we propose an EA for finite mixtures of matrix-variate distributions. / Thesis / Master of Science (MSc)
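For illustration only, here is a schematic EA loop for model-based clustering using hard cluster labelings and a Gaussian (rather than matrix-variate) log-likelihood as the fitness; every detail is a simplifying assumption, not the thesis's algorithm, and crossover is omitted for brevity.

```python
import numpy as np

def ea_cluster(X, G, pop=30, gens=100, seed=0):
    """Schematic EA for model-based clustering: candidates are hard
    labelings of the n points into G groups, fitness is the Gaussian
    log-likelihood of the implied components, and mutation reassigns
    a few random points. Matrix-variate data would replace the
    Gaussian density with a matrix-variate one."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    def fitness(z):
        ll = 0.0
        for g in range(G):
            pts = X[z == g]
            if len(pts) <= d:
                return -np.inf  # degenerate component, reject candidate
            mu = pts.mean(axis=0)
            cov = np.cov(pts.T) + 1e-6 * np.eye(d)  # regularized covariance
            diff = pts - mu
            _, logdet = np.linalg.slogdet(cov)
            ll += -0.5 * (np.einsum('ij,jk,ik->', diff, np.linalg.inv(cov), diff)
                          + len(pts) * (logdet + d * np.log(2 * np.pi)))
        return ll

    population = [rng.integers(G, size=n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]  # keep the fitter half
        children = []
        for p in parents:
            child = p.copy()
            idx = rng.integers(n, size=max(1, n // 20))
            child[idx] = rng.integers(G, size=len(idx))  # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```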
