81 |
Reuse of Past Games for Move Generation in Computer Go. Houeland, Tor Gunnar Høst. January 2008.
<p>Go is an ancient two-player board game that has been played for several thousand years. Despite its simple rules, the game requires players to form long-term strategic plans and also to possess strong tactical skills to handle the complex fights that often occur during a game. From an artificial intelligence point of view, Go is notable as a game that has been highly resistant to all traditional game-playing approaches. In contrast to other board games such as chess and checkers, top human Go players are still significantly better than any computer Go program. It is believed that the strategic depth of Go will require new and more powerful artificial intelligence methods than those successfully used to create computer players for such other games. There have been promising developments using Monte Carlo-based techniques to play computer Go in recent years, and programs based on this approach are currently the strongest computer Go players in the world. However, even these programs still play at an amateur level, and they cannot compete with professional or strong amateur human players. In this thesis we explore the idea of reusing experience from previous games to identify strategically important moves for a Go board position. This is based on finding a previous game position that is highly similar to the one in the current game. The moves that were played in this previous game are then adapted to generate new moves for the current game situation. A new computer Go playing system using Monte Carlo-based Go methods was designed as part of this thesis work, and a prototype implementation of this system was also developed. We extended this initial prototype using case-based reasoning (CBR) methods to quickly identify the most strategically valuable areas of the board in the early stages of the game, based on finding similar positions in a collection of professionally played games.
The last part of the thesis is an evaluation of the developed system and the results observed using our implementation. These results show that our CBR-based approach is a significant improvement over the initial prototype, and in the opening game it allows the program to quickly locate the most strategically interesting areas of the board. However, by itself our approach does not find strong tactical moves within these identified areas, and thus it is most valuable when used to provide strategic guidelines for other methods that can find tactical plays.</p>
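The retrieval step described in this abstract can be illustrated with a minimal sketch. The case representation below is invented for illustration (boards as grids of 0 = empty, 1 = black, 2 = white, with point-wise similarity); the thesis's actual similarity measure and move-adaptation step are not reproduced here.

```python
# Hypothetical sketch of CBR retrieval for Go positions: find the stored
# position most similar to the current board and reuse its moves as
# strategic suggestions. The board encoding and similarity measure are
# illustrative assumptions, not the thesis's actual representation.

def similarity(board_a, board_b):
    """Fraction of intersections with identical contents."""
    points_a = [p for row in board_a for p in row]
    points_b = [p for row in board_b for p in row]
    matches = sum(1 for a, b in zip(points_a, points_b) if a == b)
    return matches / len(points_a)

def retrieve_moves(case_base, current_board):
    """Return the moves stored with the most similar past position."""
    best_case = max(case_base,
                    key=lambda c: similarity(c["board"], current_board))
    return best_case["moves"]
```

In a full system the retrieved moves would then be adapted to the current position (for example, remapped under board symmetries) rather than reused verbatim.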
|
82 |
Discriminating Music, Speech and Other Sounds, and Language Identification. Strømhaug, Tommy. January 2008.
<p>The tasks of discriminating between music, speech and other sounds, and of identifying spoken languages, have a broad range of applications in today's multilingual multimedia community. Both tasks offered many possible methods and development tools, which also carried some risk. The language identification (LID) problem ended up with two different approaches. One approach was discarded due to poor results in the pre-study, while the other had promising potential but did not deliver as hoped. On the other hand, the music/speech discrimination problem was solved with great accuracy using three simple time-domain features and support vector machines (SVMs). Adding 'other sounds' complicated the problem, but the final solution delivered strong results, using the enormous BBC Sound Effects library as a source of examples that are neither speech nor music. Gaussian mixture models (GMMs) were tried on both tasks because of their well-known ability to model arbitrary feature-space segmentations. The tools used were Matlab together with a number of different toolboxes described further in the text.</p>
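The abstract does not name its three time-domain features. As a hedged illustration, two classic time-domain features used in music/speech discrimination, zero-crossing rate and RMS energy, can be computed per frame as follows; feature vectors like these would then be fed to an SVM classifier.

```python
# Illustrative time-domain features (assumed, not the thesis's actual
# choice): speech tends to alternate high- and low-energy frames, while
# music tends to have a more stable energy envelope.

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def rms_energy(frame):
    """Root-mean-square amplitude of the frame."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5
```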
|
83 |
Linux Support for AVR32 UC3A: Adaptation of the Linux kernel and toolchain. Driveklepp, Pål; Morken, Olav; Rangøy, Gunnar. January 2009.
<p>The use of Linux in embedded systems is steadily growing in popularity. The UC3A is a series of high-performance, low-power 32-bit microcontrollers aimed at several industrial and commercial applications, including PLCs, instrumentation, phones, vending machines and more. The main goal of this project was to complete the adaptation of the Linux kernel, compiler and loader software in order to enable the Linux kernel to load and run applications on this device. In addition, a set of useful applications was to be selected, compiled and tested on the target platform to demonstrate a complete software solution. This master's thesis is a continuation, by the same three students, of the work of a student project during the fall of 2008. In this report we present in detail the findings, challenges, choices and solutions involved in the working process. During the course of this project, we have successfully adapted the Linux kernel and a toolchain for generating binaries loadable by Linux. A set of test applications has been compiled and tested on the resulting platform. This project has resulted in the submission of a revised patch series for the U-Boot boot loader, one patch series for Linux, and one for the toolchain. Requirements have been created, and tests for the requirements have been carried out.</p>
|
84 |
Skippy: Agents learning how to play curling. Aannevik, Frode; Robertsen, Jan Erik. January 2009.
<p>In this project we explore whether it is possible for an artificial agent to learn how to play curling. To achieve this goal we developed a simulator that serves as an environment in which different agents can be tested against each other. Our most successful agent uses a linear target function as a basis for selecting good moves in the game. This agent has become very adept at placing stones, but we discovered that it lacks the ability to employ advanced strategies spanning more than a single stone. In an effort to give the agent this ability we extended it using Q-learning with UCT; however, this was not successful. For the agent to work we need a good representation of the information in curling, and our representation was quite broad. This caused the training of the agent to take an unreasonably long time.</p>
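A linear target function of the kind mentioned above can be sketched as a weighted sum over shot features, with the agent picking the highest-scoring candidate shot. The shot names, features and weights below are invented for illustration and are not the thesis's actual representation.

```python
# Minimal linear-target-function sketch: each candidate shot is described
# by a feature vector (hypothetical features), and the agent selects the
# shot with the highest weighted-sum score.

def score(weights, features):
    """Weighted sum of a shot's feature vector."""
    return sum(w * f for w, f in zip(weights, features))

def best_shot(weights, candidates):
    """candidates: list of (shot_name, feature_vector) pairs."""
    return max(candidates, key=lambda c: score(weights, c[1]))[0]
```

In training, the weights would be fitted from simulated outcomes; the abstract notes that this works well for single-stone placement but not for multi-stone strategies.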
|
85 |
Multimodal Behaviour Generation Frameworks in Virtual Heritage Applications: A Virtual Museum at Sverresborg. Stokes, Michael James. January 2009.
<p>This master's thesis proposes that multimodal behaviour generation frameworks are an appropriate way to increase the believability of animated characters in virtual heritage applications. To investigate this proposal, an existing virtual museum guide application developed by the author is extended by integrating the Behavior Markup Language (BML) and the open-source BML realiser SmartBody. The architectural and implementation decisions involved in this process are catalogued and discussed. The integration of BML and SmartBody results in a dramatic improvement in the quality of character animation in the application, as well as greater flexibility and extensibility, including the ability to create scripted sequences of behaviour for multiple characters in the virtual museum. The successful integration confirms that multimodal behaviour generation frameworks have a place in virtual heritage applications.</p>
|
86 |
A CBR/RL system for learning micromanagement in real-time strategy games. Gunnerud, Martin Johansen. January 2009.
<p>The gameplay of real-time strategy games can be divided into macromanagement and micromanagement. Several researchers have studied automated learning for macromanagement, using a case-based reasoning/reinforcement learning architecture to defeat both static and dynamic opponents. Unlike the previous research, we present the Unit Priority Artificial Intelligence (UPAI). UPAI is a case-based reasoning/reinforcement learning system for learning the micromanagement task of prioritizing which enemy units to attack in different game situations, through unsupervised learning from experience. We discuss different case representations, as well as the exploration vs exploitation aspect of reinforcement learning in UPAI. Our research demonstrates that UPAI can learn to improve its micromanagement decisions, by defeating both static and dynamic opponents in a micromanagement setting.</p>
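The reinforcement-learning half of such a CBR/RL system can be sketched as a tabular one-step Q-learning update over (situation, target) pairs, where the reward reflects the outcome of an engagement. The state and action encodings here are placeholders, not UPAI's actual case representation.

```python
# Hedged sketch of one-step Q-learning for target prioritisation:
# q is a dict keyed by (state, action); states and actions are
# illustrative stand-ins for game situations and enemy unit types.

def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """Apply one Q-learning update and return the table."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q
```

The exploration-vs-exploitation trade-off mentioned in the abstract would then govern whether the agent picks the argmax action or a random one during training (e.g. epsilon-greedy).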
|
87 |
Modeling Communication on Multi-GPU Systems. Spampinato, Daniele. January 2009.
<p>Coupling commodity CPUs with modern GPUs yields heterogeneous systems that are cheap and offer high performance with impressive FLOPS counts. The recent evolution of GPGPU models and technologies makes these systems even more appealing as compute devices for a range of HPC applications, including image processing, seismic processing and other physical modeling, as well as linear programming applications. In fact, graphics vendors such as NVIDIA and AMD are now targeting HPC with some of their products. Due to the power and frequency walls, the trend is now to use multiple GPUs in a given system, much as multiple cores are found in CPU-based systems. However, deepening the resource hierarchy widens the spectrum of factors that may impact the performance of the system. The lack of good models for GPU-based, heterogeneous systems also makes it harder to understand which factors impact performance the most. The goal of this thesis is to analyze such factors by investigating and benchmarking NVIDIA's multi-GPU solution, the NVIDIA Tesla S1070 Computing System. This system combines four T10 GPUs, making available up to 4 TFLOPS of computational power. Based on a comparative study of fundamental parallel computing models and on the specific heterogeneous features exposed by the system, we define a test space for performance analysis. As a case study, we develop a red-black SOR PDE solver for the Laplace equation with Dirichlet boundaries, well known for requiring constant communication to exchange neighboring data. To aid both design and analysis, we propose a model for multi-GPU systems targeting communication between the several GPUs. The main variables exposed by the benchmark application are: domain size and shape, kind of data partitioning, number of GPUs, width of the borders to exchange, kernels to use, and kind of synchronization between the GPU contexts.
Among other results, the framework is able to point out the most critical performance bounds of the S1070 system when dealing with applications like the one in our case study. We show that the multi-GPU system greatly benefits from using all four of its GPUs on very large data volumes: our results show four GPUs running almost four times faster than a single GPU, and twice as fast as two. Our analysis also allows us to refine our static communication model, enriching it with regression-based predictions.</p>
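The case-study solver can be sketched serially as follows: one red-black SOR sweep for the Laplace equation with fixed (Dirichlet) boundary values. On the multi-GPU system each GPU would own a sub-domain and exchange border rows between the red and black half-sweeps; that communication, and the CUDA kernels themselves, are omitted in this single-array sketch.

```python
import numpy as np

# One red-black SOR sweep: interior points are split by the parity of
# (i + j); all "red" points are updated first, then all "black" points,
# so each half-sweep only reads values of the opposite colour.

def red_black_sor_sweep(u, omega=1.5):
    """Update interior points of u in place; boundaries stay fixed."""
    for colour in (0, 1):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                if (i + j) % 2 == colour:
                    gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                 + u[i, j - 1] + u[i, j + 1])
                    u[i, j] = (1 - omega) * u[i, j] + omega * gs
    return u
```

The colouring is what makes the method parallelisable: all points of one colour can be updated simultaneously, which maps naturally onto GPU threads, with only the sub-domain borders needing exchange between devices.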
|
88 |
FPGA realization of a public key block cipher. Fjellskaalnes, Stig. January 2009.
<p>This report covers the physical realization of a public key algorithm based on multivariate quadratic quasigroups. The intention is that this implementation will use real keys and data. Efforts are also made to reduce the area cost as much as possible. The solution will be described and analyzed, showing whether or not these measures were successful.</p>
|
89 |
Parallel Techniques for Estimation and Correction of Aberration in Medical Ultrasound Imaging. Herikstad, Åsmund. January 2009.
<p>Medical ultrasound imaging is a great diagnostic tool for physicians because of its noninvasive nature. It is performed by directing ultrasonic sound into tissue and visualizing the echo signal. Aberration in the reflected signal is caused by inhomogeneous tissue varying the speed of sound, which results in a blurring of the image. Dr. Måsøy and Dr. Varslot at NTNU have developed an algorithm for estimating and correcting ultrasound aberration. This algorithm adaptively estimates the aberration and adjusts the next transmitted signal to account for it, resulting in a clearer image. This master's thesis focuses on developing a parallelized version of the algorithm. Since NVIDIA CUDA (Compute Unified Device Architecture) is an architecture oriented towards general-purpose computations on the GPU (graphics processing unit), it also examines how suitable the parallelization is for modern GPUs. The goal is to use the GPU to off-load the CPU, with the aim of achieving real-time calculation of the correction filter. We examine how ultrasound images are created, including how the aberrations arise. Next, we look at how the algorithm can be implemented efficiently on the GPU, using both NVIDIA's FFT (fast Fourier transform) library and several custom computational kernels. Our findings show that the algorithm is highly parallelizable and achieves a speedup of over 5x when implemented on the GPU. This is, however, not fast enough for real-time correction, but taking into account our suggestions for overcoming the limitations encountered, the study shows great promise for future work.</p>
|
90 |
Linear Programming on the Cell/BE. Eldhuset, Åsmund. January 2009.
<p>Linear programming is a form of mathematical optimisation in which one seeks to optimise a linear function subject to linear constraints on the variables. It is a very versatile tool that has many important applications, one of them being modelling of production and trade in the petroleum industry. The Cell Broadband Engine, developed by IBM, Sony and Toshiba, is an innovative multicore architecture that has already been proven to have great potential for high performance computing. However, developing applications for the Cell/BE is challenging, particularly due to the low-level memory management that is mandated by the architecture, and because careful optimisation by hand is often required to get the most out of the hardware. In this thesis, we investigate the opportunities for implementing a parallel solver for sparse linear programs on the Cell/BE. A parallel version of the standard simplex method is developed, and the ASYNPLEX algorithm by Hall and McKinnon is partially implemented on the Cell/BE. We have met substantial challenges when it comes to numerical stability, and this has prevented us from spending sufficient time on Cell/BE-specific optimisation and support for large data sets. Our implementations can therefore only be regarded as proofs of concept, but we provide analyses and discussions of several aspects of the implementations, which may guide future work on this topic.</p>
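The standard simplex method mentioned above can be sketched with a small dense tableau for problems of the form maximise c^T x subject to Ax <= b, x >= 0, with b >= 0. This is only a serial teaching sketch; the thesis's sparse, parallel implementation (and the ASYNPLEX variant) is far more involved and must also confront the numerical-stability issues the abstract describes.

```python
# Dense-tableau standard simplex sketch: slack variables are appended,
# the entering column is the most negative reduced cost, and the leaving
# row comes from the minimum-ratio test.

def simplex(c, A, b):
    m, n = len(A), len(c)
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-x for x in c] + [0.0] * m + [0.0])  # objective row
    basis = list(range(n, n + m))                  # slacks start basic
    while True:
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break                                  # optimal
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, row = min(ratios)
        basis[row] = col
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]
```

Dense tableau updates like these are exactly what a sparse solver avoids; on the Cell/BE the dominant concerns would instead be factorised basis updates and fitting working data into each SPE's local store.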
|