  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

The parallel solution of some discrete optimization problems

Loots, Wim 07 October 2014
M.Sc. (Computer Science) / Please refer to the full text to view the abstract.
242

Buffering strategies and bandwidth renegotiation for MPEG video streams

Schonken, Nico January 1999
This paper confirms the existence of short-term and long-term variation in the bandwidth required for MPEG video streams. We show how a small amount of buffering and GOP grouping can significantly reduce the effect of the short-term variation. By introducing a number of bandwidth renegotiation techniques, applicable to MPEG video streams in general, we are able to reduce the effect of the long-term variation. These techniques include some that need a priori knowledge of frame sizes as well as one that can renegotiate dynamically. A costing algorithm is also introduced to compare the various proposals against each other.
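The two mechanisms in the abstract can be sketched roughly as follows; the frame sizes, GOP length, frame rate and renegotiation threshold below are illustrative assumptions, not values from the thesis:

```python
# Sketch: smoothing short-term MPEG bandwidth variation by averaging
# over GOPs, then deriving a simple a-priori renegotiation schedule.

def gop_bandwidth(frame_sizes, gop_len, fps=25):
    """Average bits/second required per GOP instead of per frame."""
    gops = [frame_sizes[i:i + gop_len]
            for i in range(0, len(frame_sizes), gop_len)]
    return [sum(g) * fps / len(g) for g in gops]

def renegotiation_schedule(gop_rates, threshold=0.2):
    """Renegotiate only when the required rate drifts more than
    `threshold` (as a fraction) from the currently reserved rate."""
    reserved = gop_rates[0]
    schedule = [(0, reserved)]          # (gop_index, reserved_rate)
    for i, rate in enumerate(gop_rates[1:], start=1):
        if abs(rate - reserved) / reserved > threshold:
            reserved = rate
            schedule.append((i, reserved))
    return schedule
```

GOP averaging absorbs the frame-level (short-term) variation, while the threshold keeps renegotiations, which are the expensive operation, down to the long-term shifts.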
243

A hybridisation technique for game playing using the upper confidence for trees algorithm with artificial neural networks

Burger, Clayton January 2014
In the domain of strategic game playing, the use of statistical techniques such as the Upper Confidence for Trees (UCT) algorithm has become the norm, as they offer many benefits over classical algorithms. These benefits include requiring no game-specific strategic knowledge and time-scalable performance. UCT does not incorporate any strategic information specific to the game considered, but instead uses repeated sampling to effectively brute-force search through the game tree or search space. The lack of game-specific knowledge in UCT is thus both a benefit and a strategic disadvantage. Pattern recognition techniques, specifically Neural Networks (NN), were identified as a means of addressing the lack of game-specific knowledge in UCT. Through a novel hybridisation technique which combines UCT and trained NNs for pruning, the UCT-NN algorithm was derived. The NN component of UCT-NN was trained using a UCT self-play scheme to generate game-specific knowledge without the need to construct and manage game databases for training purposes. The UCT-NN algorithm is outlined for pruning in the game of Go-Moku as a candidate case study for this research. The UCT-NN algorithm contains three major parameters which emerge from the UCT algorithm, the use of NNs and the pruning schemes considered. Suitable methods for finding candidate values for these three parameters were outlined and applied to the game of Go-Moku on a 5 by 5 board. An empirical investigation of the playing performance of UCT-NN was conducted in comparison to UCT through three benchmarks. The benchmarks comprise a common randomly moving opponent, a common UCTmax player which is given a large amount of playing time, and a pair-wise tournament between UCT-NN and UCT. The results of the performance evaluation for 5 by 5 Go-Moku were promising, which prompted an evaluation on a larger 9 by 9 Go-Moku board.
The results of both evaluations indicate that the time allocated to the UCT-NN algorithm directly affects its performance when compared to UCT. The UCT-NN algorithm generally performs better than UCT in games with very limited time constraints in all benchmarks considered, except when playing against a randomly moving player in 9 by 9 Go-Moku. In real-time and near-real-time Go-Moku games, UCT-NN provides statistically significant improvements compared to UCT. The findings of this research contribute to the realisation of applying game-specific knowledge to the UCT algorithm.
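The hybridisation idea (standard UCT child selection, with low-prior children pruned away) might be sketched as follows; the `prior` function is a stand-in for the trained network, and the names and thresholds are assumptions rather than details from the thesis:

```python
import math

# Sketch: UCB1 child selection with network-guided pruning.  Children
# whose prior falls below a cut-off are removed from consideration,
# which is how a trained NN can inject game-specific knowledge into
# an otherwise knowledge-free UCT search.

def ucb1(child_value, child_visits, parent_visits, c=1.4):
    if child_visits == 0:
        return float("inf")            # unvisited children explored first
    return (child_value / child_visits
            + c * math.sqrt(math.log(parent_visits) / child_visits))

def select_child(children, prior, prune_below=0.1):
    """children: list of (move, total_value, visits); prior(move) in [0, 1]."""
    kept = [c for c in children if prior(c[0]) >= prune_below]
    kept = kept or children            # never prune every child
    parent_visits = sum(v for _, _, v in children) or 1
    return max(kept, key=lambda c: ucb1(c[1], c[2], parent_visits))
```

Pruning shrinks the effective branching factor, which is why the benefit shows up most under tight time constraints: fewer simulations are wasted on moves the network considers hopeless.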
244

A framework proposal for algorithm animation systems

Yeh, Chih Lung January 2006
The learning and analysis of algorithms and algorithm concepts are challenging to students due to the abstract and conceptual nature of algorithms. Algorithm animation is a form of technological support tool which encourages algorithm comprehension by visualising algorithms in execution. Algorithm animation can potentially be utilised to support students while learning algorithms. Despite widespread acknowledgement of the usefulness of algorithm animation in algorithm courses at tertiary institutions, no recognised framework exists upon which algorithm animation systems can be effectively modelled. This dissertation consequently focuses on the design of an extensible algorithm animation framework to support the generation of interactive algorithm animations. A literature and extant-system review forms the basis for the framework design process. The result of the review is a list of requirements for a pedagogically effective algorithm animation system. The proposed framework supports these pedagogic requirements by utilising an independent layer structure for the generation and display of algorithm animations. The effectiveness of the framework is evaluated through the implementation of a prototype algorithm animation system using sorting algorithms as a case study. This dissertation succeeds in proposing a framework to support the development of algorithm animations. The prototype developed will enable the integration of algorithm animations into the Nelson Mandela Metropolitan University’s teaching model, thereby permitting the university to conduct future research into the usefulness of algorithm animation in algorithm courses.
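The layer independence described above is commonly achieved with an "interesting events" pattern: the algorithm layer emits abstract events and knows nothing about rendering, while display layers subscribe separately. A minimal sketch, with class and event names invented for illustration rather than taken from the dissertation:

```python
# Sketch: an event bus decoupling the algorithm layer from any number
# of display layers.  The algorithm announces "interesting events";
# a renderer (or several) subscribes and decides how to draw them.

class AnimationBus:
    def __init__(self):
        self.listeners = []

    def subscribe(self, listener):
        self.listeners.append(listener)

    def emit(self, event, **data):
        for listener in self.listeners:
            listener(event, data)

def bubble_sort(items, bus):
    """A sorting algorithm instrumented with animation events."""
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            bus.emit("compare", i=j, k=j + 1)
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                bus.emit("swap", i=j, k=j + 1)
    return items
```

Because the algorithm only emits events, the same instrumented code can drive a graphical animation, a textual trace, or a logger without modification.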
245

Vector-Item Pattern Mining Algorithms and their Applications

Wu, Jianfei January 2011
Advances in storage technology have long been driving the need for new data mining techniques. Not only are typical data sets becoming larger, but the diversity of available attributes is increasing in many problem domains. In biological applications, for example, a single protein may have associated sequence, text, graph, continuous and item data. Correspondingly, there is a growing need for techniques to find patterns in such complex data. Many techniques exist for mapping specific types of data to vector space representations, such as the bag-of-words model for text [58] or the embedding of graphs in vector spaces [94, 91]. However, there are few techniques that recognize the resulting vector space representations as units that may be combined and further processed. This research aims to mine important vector-item patterns hidden across multiple and diverse data sources. We consider sets of related continuous attributes as vector data and search for patterns that relate a vector attribute to one or more items. The presence of an item set defines a subset of vectors that may or may not show unexpected density fluctuations. Two types of vector-item pattern mining algorithms have been developed, namely histogram-based and point-distribution vector-item pattern mining algorithms. In the histogram-based algorithms, a vector-item pattern is significant or important if its density histogram differs significantly from what is expected for a random subset of transactions, using a χ² goodness-of-fit test or effect-size analysis. In the point-distribution algorithms, a vector-item pattern is significant if its probability density function (PDF) has a large Kullback-Leibler divergence from random subsamples.
We have applied the vector-item pattern mining algorithms to several application areas, and by comparing them with other state-of-the-art algorithms we demonstrate their effectiveness and efficiency.
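The histogram-based test can be sketched in a few lines, assuming equal-width bins have already been computed; the bin counts in any real application would come from the vector data, and the significance threshold would come from the χ² distribution:

```python
# Sketch: chi-square goodness-of-fit of the bin counts of the vectors
# selected by an item set, against the expectation a random subset of
# the same size would give (the full data's bin proportions).

def chi2_statistic(subset_counts, all_counts):
    """Sum of (observed - expected)^2 / expected over histogram bins."""
    n_subset = sum(subset_counts)
    n_all = sum(all_counts)
    stat = 0.0
    for observed, overall in zip(subset_counts, all_counts):
        expected = n_subset * overall / n_all
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    return stat
```

A statistic near zero means the item set selects an unremarkable subset; a large statistic (compared against the χ² critical value for the bin count) flags a vector-item pattern whose density genuinely deviates.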
246

Reversible Logic Synthesis Using a Non-blocking Order Search

Patino, Alberto 01 January 2010
Reversible logic is an emerging area of research. With the rapid growth of markets such as mobile computing, power dissipation has become an increasing concern for designers (temperature range limitations, shrinking transistors) as well as customers (battery life, overheating). The main benefit of reversible logic is that, in theory, reversible circuits dissipate zero power. The synthesis of circuits is an important part of any design cycle. The circuit used to realize any specification must meet detailed requirements for both layout and manufacturing. Quantum cost is the main metric used in reversible logic. Many algorithms have been proposed thus far which achieve both low gate count and low quantum cost. In this thesis the AP algorithm is introduced. The goal of the algorithm is to drive quantum cost down by using multiple non-blocking orders, a breadth-first search, and a quantum-cost-reduction transformation. The results of the AP algorithm demonstrate that the resulting quantum cost for well-known benchmarks is improved by at least 9% and up to 49%.
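To make the quantum-cost metric concrete, a reversible circuit can be modelled as a list of multiple-control Toffoli gates; the cost table below uses the widely cited values for NOT, CNOT and the 3-bit Toffoli, and this sketch does not reproduce the AP algorithm's actual search or transformations:

```python
# Sketch: a reversible circuit as a list of (controls, target) Toffoli-
# family gates, with quantum cost summed from a standard cost table.

QUANTUM_COST = {0: 1, 1: 1, 2: 5}   # keyed by number of control bits

def apply_gate(bits, controls, target):
    """Flip `target` iff every control bit is 1 (bits is a 0/1 list)."""
    bits = list(bits)
    if all(bits[c] for c in controls):
        bits[target] ^= 1
    return bits

def run(circuit, bits):
    for controls, target in circuit:
        bits = apply_gate(bits, controls, target)
    return bits

def circuit_cost(circuit):
    return sum(QUANTUM_COST[len(controls)] for controls, _ in circuit)
```

Because each gate is self-inverse, running the gates in reverse order undoes the circuit, which is the defining property a synthesis algorithm must preserve while it searches for a cheaper gate sequence.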
247

Reinforcement learning in the presence of rare events

Frank, Jordan William, 1980- January 2009
No description available.
248

Lax probabilistic bisimulation

Taylor, Jonathan, 1981- January 2008
No description available.
249

An algorithm for two-dimensional density reconstruction in proton computed tomography (PCT)

Tafas, Jihad 01 January 2007
The purpose of this thesis is to develop an optimized and effective iterative reconstruction algorithm, together with hardware acceleration methods that work in concert with it during reconstruction in proton computed tomography, which accurately maps the electron density.
250

Generation of referring expressions for an unknown audience

Kutlák, Roman January 2014
When computers generate text, they have to consider how to describe the entities mentioned in the text. This becomes more difficult when the audience is unknown, as it is not clear what information is available to the addressees. This thesis investigates the generation of descriptions in situations where an algorithm does not have a precise model of the addressees' knowledge. The thesis starts with the collection and analysis of a corpus of descriptions of famous people. The analysis of the corpus revealed a number of useful patterns, which informed the remainder of the thesis. One of the difficult questions is how to choose information that helps addressees identify the described person. This thesis introduces a corpus-based method for determining which properties are more likely to be known by the addressees, and a probability-based method for identifying properties that are distinguishing. One of the patterns observed in the collected corpus is the inclusion of multiple properties, each of which uniquely identifies the referent. This thesis introduces a novel corpus-based method for determining how many properties to include in a description. Finally, a number of algorithms that leverage the findings of the corpus analysis, together with their computational implementation, are proposed and tested in an evaluation involving human participants. The proposed algorithms outperformed the Incremental Algorithm both in the number of correctly identified referents and in providing a better mental image of the referent. The main contributions of this thesis are: (1) a corpus-based analysis of descriptions produced for an unknown audience; (2) a computational heuristic for estimating what information is likely to be known to addressees; and (3) algorithms that can generate referring expressions that benefit addressees without an explicit model of the addressees' knowledge.
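The selection idea can be sketched as a greedy loop that prefers properties addressees are likely to know; the people, properties and familiarity scores below are invented examples, and this is a simplification rather than the thesis's exact algorithm:

```python
# Sketch: build a description by walking the referent's properties in
# decreasing order of estimated familiarity (a stand-in for the
# corpus-based knowledge estimate), keeping each property that rules
# out at least one remaining distractor.

def describe(referent, domain, familiarity):
    """domain: {entity: set of properties}; familiarity: property -> score."""
    candidates = sorted(domain[referent], key=familiarity, reverse=True)
    description = []
    remaining = set(domain) - {referent}     # distractors still matching
    for prop in candidates:
        if not remaining:
            break
        ruled_out = {d for d in remaining if prop not in domain[d]}
        if ruled_out:                        # only keep helpful properties
            description.append(prop)
            remaining -= ruled_out
    return description
```

Ranking by familiarity is what lets the algorithm help an unknown audience: among equally distinguishing properties, it prefers the ones addressees are most likely to recognise.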
