31 |
Akcelerace adversariálních algoritmů s využití grafického procesoru / GPU Accelerated Adversarial Search. Brehovský, Martin. January 2011.
General-purpose graphical processing units have proven useful for accelerating computationally intensive algorithms. Their capability to perform massively parallel computing significantly improves the performance of many algorithms. This thesis focuses on using graphical processors (GPUs) to accelerate algorithms based on adversarial search. We investigate whether or not adversarial algorithms are suitable for the single-instruction, multiple-data (SIMD) type of parallelism that GPUs provide. Parallel versions of selected algorithms accelerated by the GPU were therefore implemented and compared with the same algorithms running on the CPU. The obtained results show a significant speed improvement and prove the applicability of GPU technology in the domain of adversarial search algorithms.
|
32 |
Coherent and non-coherent data detection algorithms in massive MIMO. Alshamary, Haider Ali Jasim. 01 May 2017.
Over the past few years there has been extensive growth in the number of data-traffic-consuming devices. Billions of mobile data devices are connected to the global wireless network. Customers demand new services and up-to-date applications, such as real-time video and games. These applications require reliable, high-data-rate wireless communication with high network throughput. One way to meet these requirements is to increase the number of transmit and/or receive antennas of the wireless communication system. Massive multiple-input multiple-output (MIMO) has emerged as a promising candidate technology for the next generation (5G) of wireless communication. Massive MIMO increases the spatial multiplexing gain and the data rate by adding a very large number of antennas to the base station (BS) terminals of wireless communication systems. However, building efficient algorithms able to coherently or non-coherently decode a large flow of transmitted signals with low complexity is a big challenge in massive MIMO. In this dissertation, we propose novel approaches to achieve optimal performance for joint channel estimation and signal detection in massive MIMO systems. The dissertation consists of three parts, depending on the number of users at the receiver side.
In the first part, we introduce a probabilistic approach to the problem of coherent signal detection using an optimized Markov Chain Monte Carlo (MCMC) technique. Two factors contribute to how quickly the MCMC detector finds the optimal solution: the probability of encountering the optimal solution once the Markov chain has converged to its stationary distribution, and the mixing time of the MCMC detector. First, we compute the optimal value of the "temperature" parameter such that the Markov chain encounters the optimal solution with a probability that is at worst polynomially small. Second, we study the mixing time of the underlying Markov chain of the proposed MCMC detector.
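To make the flavor of such a detector concrete, the following is a minimal Gibbs-sampling sketch for BPSK symbols in a linear model y = Hs + n, with the temperature entering as a scaling of the noise variance in the sampling distribution. It is an illustrative toy, not the dissertation's optimized detector; the function name, the BPSK restriction, and the fixed iteration budget are assumptions made for this example.

```python
import numpy as np

def mcmc_mimo_detect(y, H, sigma2, temperature, n_iters=1000, rng=None):
    """Gibbs-sampling detector sketch for y = H s + n with BPSK symbols in {-1, +1}.

    The 'temperature' scales the noise variance in the sampling distribution; the
    dissertation studies how to choose it so that the chain both mixes quickly and
    visits the optimal solution often. Illustrative only, not an optimized detector.
    """
    rng = np.random.default_rng() if rng is None else rng
    K = H.shape[1]
    s = rng.choice([-1.0, 1.0], size=K)               # random initial hypothesis
    best_s, best_cost = s.copy(), np.linalg.norm(y - H @ s) ** 2
    beta = 1.0 / (2.0 * sigma2 * temperature ** 2)    # inverse "temperature"

    for _ in range(n_iters):
        for k in range(K):                            # one full Gibbs sweep
            costs = []
            for val in (-1.0, 1.0):                   # cost of each candidate bit
                s[k] = val
                costs.append(np.linalg.norm(y - H @ s) ** 2)
            # Conditional probability of s[k] = +1 under the tempered likelihood.
            p_plus = 1.0 / (1.0 + np.exp(-beta * (costs[0] - costs[1])))
            s[k] = 1.0 if rng.random() < p_plus else -1.0
            cost = np.linalg.norm(y - H @ s) ** 2
            if cost < best_cost:                      # keep the best sample seen so far
                best_s, best_cost = s.copy(), cost
    return best_s
```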
We assume the channel state information is known in the first part of the dissertation; in the second part we consider non-coherent signal detection. We develop optimal joint channel estimation and signal detection algorithms for massive single-input multiple-output (SIMO) wireless systems. We propose exact non-coherent data detection algorithms in the sense of the generalized likelihood ratio test (GLRT). In addition to their optimality, these tree-based algorithms have low expected complexity and work for general modulus constellations. More specifically, despite the large number of unknown channel coefficients in massive SIMO systems, we show that the expected computational complexity of these algorithms is linear in the number of receive antennas ($N$) and polynomial in the channel coherence time ($T$). We prove that as $N \rightarrow \infty$, the number of tested hypotheses for each coherent block equals $T$ times the cardinality of the modulus constellation. Simulation results show that the optimal non-coherent data detection algorithms achieve significant performance gains (up to 5 dB improvement in energy efficiency) with low computational complexity.
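As a rough illustration of the GLRT criterion involved, the brute-force detector below maximizes ||Y conj(x)||^2 / ||x||^2 over candidate sequences for a single SIMO coherence block, with the first symbol pinned to resolve the common phase ambiguity. It is a sketch only: the exhaustive search, the QPSK constellation, and the block sizes in the usage example are assumptions, and the dissertation's tree-based algorithms reach the same GLRT optimum with far lower expected complexity.

```python
import itertools
import numpy as np

def glrt_simo_detect(Y, constellation):
    """Exhaustive GLRT detector sketch for a non-coherent SIMO block Y = h x^T + W.

    Y is N x T (receive antennas x coherence time) and h is the unknown channel.
    Maximising the likelihood over h reduces the GLRT to maximising
    ||Y conj(x)||^2 / ||x||^2 over candidate sequences x. The first symbol is
    pinned to constellation[0] to remove the common phase ambiguity.
    """
    N, T = Y.shape
    best_x, best_metric = None, -np.inf
    for tail in itertools.product(constellation, repeat=T - 1):
        x = np.array((constellation[0],) + tail, dtype=complex)
        metric = np.linalg.norm(Y @ np.conj(x)) ** 2 / np.linalg.norm(x) ** 2
        if metric > best_metric:
            best_x, best_metric = x, metric
    return best_x

# Toy usage with QPSK over a short coherence block (sizes are illustrative).
qpsk = [1 + 0j, 1j, -1 + 0j, -1j]
rng = np.random.default_rng(0)
h = (rng.standard_normal(8) + 1j * rng.standard_normal(8)) / np.sqrt(2)
x_true = np.array([1, 1j, -1, -1j, 1], dtype=complex)
Y = np.outer(h, x_true) + 0.05 * (rng.standard_normal((8, 5)) + 1j * rng.standard_normal((8, 5)))
x_hat = glrt_simo_detect(Y, qpsk)   # recovers x_true here, since its first symbol is the pinned one
```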
In the third part, we consider massive MIMO uplink wireless systems with time-division duplex (TDD) operation. We propose a GLRT-optimal algorithm for the problem of joint channel estimation and data detection in massive MIMO systems. We show that the expected complexity of our algorithm grows polynomially in the channel coherence time ($T$). The proposed algorithm is novel in two respects: first, the transmitted signal can be chosen from any modulus constellation, constant or non-constant; second, the algorithm decodes the noisy signal received from a multiple-antenna array, offering an exact solution with polynomial complexity in the coherent block interval. Simulation results demonstrate significant performance gains of our approach compared with suboptimal non-coherent detection schemes. To the best of our knowledge, this is the first algorithm that efficiently achieves GLRT-optimal non-coherent detection for massive MIMO systems with general constellations.
|
33 |
Playing and solving the game of Hex. Henderson, Philip. 11 1900.
The game of Hex is of interest to the mathematics, algorithms, and artificial intelligence communities. It is a classical PSPACE-complete problem, and its invention is intrinsically tied to the Four Colour Theorem and the well-known strategy-stealing argument. Nash, Shannon, Tarjan, and Berge are among the mathematicians who have researched and published about this game.
In this thesis we expand on previous research, further developing the mathematical theory and algorithmic techniques relating to Hex. In particular, we identify new classes of moves that can be pruned from consideration, and devise new algorithms to identify connection strategies efficiently.
As a result of these theoretical improvements, we produce an automated solver capable of solving all 8 x 8 Hex openings and most 9 x 9 Hex openings; this marks the first time that computers have solved all Hex openings solved by humans. We also produce the two strongest automated Hex players in the world, Wolve and MoHex, and obtain both the gold and silver medals in the 2008 and 2009 International Computer Olympiads.
|
34 |
GPU-accelerated Model Checking of Periodic Self-Suspending Real-Time Tasks. Liberg, Tim; Måhl, Per-Erik. January 2012.
Efficient model checking is important if this type of software verification is to be useful for systems with complex structure. If a system is too large or complex, model checking simply does not scale, i.e., it can take too much time to verify the system. This is one strong argument for focusing on making model checking faster. Another interesting aim is to make model checking so fast that it can be used to predict scheduling decisions for real-time schedulers at runtime. This of course requires the model checking to complete within the order of milliseconds or even microseconds. The aim is set very high, but the results of this thesis will at least give a hint on whether this seems possible or not. The magic card for (maybe) making this possible is the Graphics Processing Unit (GPU). This thesis investigates if and how a model checking algorithm can be ported to and executed on a GPU. Modern GPU architectures offer a high degree of processing power, since they are equipped with up to 1000 (NVIDIA GTX 590) or 3000 (NVIDIA Tesla K10) processor cores. The drawback is that they offer poor thread-communication possibilities and memory caches compared to CPUs, which makes it very difficult to port CPU programs to GPUs. The example model (system) used in this thesis represents a real-time task scheduler that can schedule up to three periodic self-suspending tasks. The aim is to verify this model, i.e., find a feasible schedule for these tasks, and do it as fast as possible with the help of the GPU.
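For orientation only, the sketch below shows the kind of level-synchronous state-space exploration a model checker performs, with frontier expansion being the step a GPU could process in parallel. The function names and the successor/property callbacks are hypothetical and far simpler than the scheduler model verified in the thesis.

```python
def explore(initial_state, successors, violates):
    """Level-synchronous reachability check: return a state that violates the
    property, or None once the reachable state space is exhausted.

    Expanding the states of one frontier is embarrassingly parallel, which is
    where a GPU implementation would apply its data parallelism. Purely
    illustrative; the thesis targets a specific scheduler model, not this
    generic skeleton."""
    visited = {initial_state}
    frontier = [initial_state]
    while frontier:
        for state in frontier:
            if violates(state):
                return state
        next_frontier = []
        for state in frontier:          # each expansion is independent work
            for nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    next_frontier.append(nxt)
        frontier = next_frontier
    return None
```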
|
35 |
A tabu search methodology for spacecraft tour trajectory optimization. Johnson, Gregory Phillip. 03 February 2015.
A spacecraft tour trajectory is a trajectory in which a spacecraft visits a number of objects in sequence. The target objects may consist of satellites, moons, planets or any other body in orbit, and the spacecraft may visit these in a variety of ways, for example flying by or rendezvousing with them. The key characteristic is the target object sequence, which can be represented as a discrete set of decisions that must be made along the trajectory. When this sequence is free to be chosen, the result is a hybrid discrete-continuous optimization problem that combines the challenges of discrete and combinatorial optimization with continuous optimization. The problem can be viewed as a generalization of the traveling salesman problem; such problems are NP-hard and their computational complexity grows exponentially with the problem size. The focus of this dissertation is the development of a novel methodology for the solution of spacecraft tour trajectory optimization problems. A general model for spacecraft tour trajectories is first developed, which defines the parameterization and decision variables for use in the rest of the work. A global search methodology based on the tabu search metaheuristic is then developed. The tabu search approach is extended to operate on a tree-based solution representation and neighborhood structure, which is shown to be especially efficient for problems with expensive solution evaluations. Concepts of tabu search including recency-based tabu memory and strategic intensification and diversification are then applied to ensure a diverse exploration of the search space. The result is an automated, adaptive and efficient search algorithm for spacecraft tour trajectory optimization problems. The algorithm is deterministic, and results in a diverse population of feasible solutions upon termination. A novel numerical search space pruning approach is then developed, based on computing upper bounds to the reachable domain of the spacecraft, to accelerate the search. Finally, the overall methodology is applied to the fourth annual Global Trajectory Optimization Competition (GTOC4), resulting in previously unknown solutions to the problem, including one exceeding the best known in the literature.
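The sketch below illustrates the basic tabu search ingredients mentioned here (a swap neighbourhood, recency-based tabu memory, and an aspiration criterion) on a plain permutation of visit targets. It is a generic textbook version under assumed names and parameters, not the tree-based representation developed in the dissertation; `cost` stands in for an expensive trajectory evaluation.

```python
import itertools

def tabu_search(cost, initial_order, tenure=7, n_iters=200):
    """Generic tabu search over visit orders using a swap neighbourhood.

    `cost(order)` would wrap an (expensive) trajectory evaluation; `tenure`
    is the recency-based tabu memory. Illustrative sketch only.
    """
    current = list(initial_order)
    best, best_cost = current[:], cost(current)
    tabu = {}                                   # move -> iteration when it becomes legal again

    for it in range(n_iters):
        best_move, best_neighbor, best_neighbor_cost = None, None, float("inf")
        for i, j in itertools.combinations(range(len(current)), 2):
            neighbor = current[:]
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            c = cost(neighbor)
            move = (min(neighbor[i], neighbor[j]), max(neighbor[i], neighbor[j]))
            # Skip tabu moves unless they satisfy the aspiration criterion (beat the best).
            if tabu.get(move, -1) > it and c >= best_cost:
                continue
            if c < best_neighbor_cost:
                best_move, best_neighbor, best_neighbor_cost = move, neighbor, c
        if best_neighbor is None:
            break
        current = best_neighbor
        tabu[best_move] = it + tenure           # forbid reversing this swap for a while
        if best_neighbor_cost < best_cost:
            best, best_cost = current[:], best_neighbor_cost
    return best, best_cost
```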
|
36 |
Playing and solving the game of Hex. Henderson, Philip. Unknown Date.
No description available.
|
37 |
Playing and Solving Havannah. Ewalds, Timo V. Unknown Date.
No description available.
|
38 |
Introduction of statistics in optimization. Teytaud, Fabien. 08 December 2011.
In this thesis we study two optimization fields. In the first part, we study the use of evolutionary algorithms for solving derivative-free optimization problems in continuous spaces. In the second part we are interested in multistage optimization, where we have to make decisions in a discrete environment with a finite horizon and a large number of states; here we use in particular Monte-Carlo Tree Search algorithms. In the first part, we work on evolutionary algorithms in a parallel context, when a large number of processors are available. We start by presenting some state-of-the-art evolutionary algorithms and then show that these algorithms are not well designed for parallel optimization. Because these algorithms are population-based, they should be well suited to parallelization, but experiments show that the results are far from the theoretical bounds. To resolve this discrepancy, we propose some rules (such as a new selection ratio or a faster decrease of the step-size) to improve the evolutionary algorithms. Experiments on several evolutionary algorithms show that, with the help of these new rules, they reach the theoretical speedup. Concerning the work on multistage optimization, we start by presenting some of the state-of-the-art algorithms (Min-Max, Alpha-Beta, Monte-Carlo Tree Search, Nested Monte-Carlo). After that, we show the generality of the Monte-Carlo Tree Search algorithm by successfully applying it to the game of Havannah. The application has been a real success: today, every Havannah program uses Monte-Carlo Tree Search algorithms instead of the classical Alpha-Beta. Next, we study more precisely the Monte-Carlo part of the Monte-Carlo Tree Search algorithm. Three generic rules are proposed to improve this Monte-Carlo policy, and experiments demonstrate their efficiency.
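For reference, a bare-bones UCT loop looks like the sketch below; the `rollout_policy` argument marks the Monte-Carlo playout step that the three generic rules aim to improve. All names, the callback interface, and the single-perspective reward convention are assumptions for illustration, not the author's implementation.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def uct_search(root_state, legal_moves, apply_move, is_terminal, result,
               rollout_policy, n_simulations=1000, c=1.4):
    """Plain UCT: select with UCB1, expand one child, run the Monte-Carlo
    rollout policy to the end of the game, and back up the result.
    The reward is assumed to be from the root player's perspective."""
    root = Node(root_state)
    for _ in range(n_simulations):
        node = root
        # Selection: descend while the node is fully expanded.
        while not is_terminal(node.state) and len(node.children) == len(legal_moves(node.state)):
            node = max(node.children.values(),
                       key=lambda ch: ch.value / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # Expansion: add one untried move.
        if not is_terminal(node.state):
            untried = [m for m in legal_moves(node.state) if m not in node.children]
            move = random.choice(untried)
            child = Node(apply_move(node.state, move), parent=node)
            node.children[move] = child
            node = child
        # Simulation: play out with the Monte-Carlo policy (the part the rules refine).
        state = node.state
        while not is_terminal(state):
            state = apply_move(state, rollout_policy(state, legal_moves(state)))
        reward = result(state)
        # Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)
```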
|
39 |
Introduction of statistics in optimization / Introduction de statistiques en optimisation. Teytaud, Fabien. 08 December 2011.
This thesis is set in the context of optimization and has two main parts. The first concerns the use of evolutionary algorithms to solve continuous, derivative-free optimization problems; the second concerns the optimization of sequences of decisions in a discrete, finite-horizon environment using Monte-Carlo Tree Search methods. In the evolutionary optimization setting, we are particularly interested in the parallel case with a large number of computing units. After presenting the reference algorithms of the field, we show that, in their classical form, these algorithms are not suited to this parallel setting and fall far short of the theoretical convergence speeds. We therefore propose various rules (such as modifying the selection ratio of individuals and decreasing the step-size more quickly) to correct and improve these algorithms, and we compare these rules empirically on several algorithms. In the setting of optimizing sequences of decisions, we first present the reference algorithms of the field (Min-Max, Alpha-Beta, Monte-Carlo Tree Search, Nested Monte-Carlo). We then show the genericity of the Monte-Carlo Tree Search algorithm by successfully applying it to the game of Havannah. This application has been a real success, since today the best artificial Havannah players use this algorithm rather than Min-Max or Alpha-Beta algorithms. We then focus on improving the Monte-Carlo policy of these algorithms. We propose three improvements, each of them generic. Experiments are carried out to measure the impact of these improvements, as well as the genericity of one of them, and show that the results are positive.
|
40 |
Monte-Carlo Tree Search in Continuous Action Spaces for Autonomous Racing: F1-tenth. Jönsson, Jonatan; Stenbäck, Felix. January 2020.
Autonomous cars involve problems of control and planning. In this paper, we implement and evaluate an autonomous agent based on Monte-Carlo Tree Search in a continuous action space. To facilitate the algorithm, we extend an existing simulation framework and use a GPU for faster calculations. We compare three action generators and two reward functions. The results show that MCTS converges to an effective driving agent in static environments; however, it only succeeds at driving at low speeds in real time. We discuss the problems that arise in dynamic and static environments and look to future work on improving the simulation tool and the MCTS algorithm. See the code at https://github.com/felrock/PyRacecarSimulator
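One common way to let MCTS handle a continuous action space is progressive widening, sketched below: a node only samples a fresh (steering, throttle) action while its child count stays below a power of its visit count, and otherwise falls back to the usual UCB1 choice among actions already tried. The constants, the action generator, and the node fields are assumptions for illustration, not taken from the paper; the usual backpropagation step is assumed to update each node's visits and value.

```python
import math
import random

class Node:
    def __init__(self):
        self.children = {}   # action (tuple of floats) -> Node
        self.visits = 0
        self.value = 0.0

def sample_action():
    """Hypothetical action generator: a (steering, throttle) pair."""
    return (random.uniform(-0.4, 0.4), random.uniform(0.0, 1.0))

def select_action(node, c_pw=1.0, alpha=0.5, c_uct=1.4):
    """Progressive widening: add a newly sampled continuous action while the
    node has fewer than c_pw * visits**alpha children, otherwise pick the best
    existing child with UCB1. Backpropagation (not shown) must update the
    returned child's visits before it can be scored by UCB1."""
    if len(node.children) < c_pw * max(node.visits, 1) ** alpha:
        action = sample_action()
        node.children[action] = Node()
        return action
    return max(node.children,
               key=lambda a: node.children[a].value / node.children[a].visits
               + c_uct * math.sqrt(math.log(node.visits) / node.children[a].visits))
```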
|