11.
Optimal Latin Hypercube Designs for Computer Experiments Based on Multiple Objectives. Hou, Ruizhe, 22 March 2018.
Latin hypercube designs (LHDs) have broad applications in constructing computer experiments and in sampling for Monte Carlo integration, owing to their property that the projection onto each input variable is evenly distributed. LHDs have been combined with commonly used computer-experiment design criteria to enhance design performance. For example, Maximin LHDs were developed to improve the space-filling property in the full dimension of all input variables, and MaxPro LHDs were proposed in recent years to obtain better projections onto any subspace of the input variables. This thesis integrates both the space-filling and the projection characteristics of LHDs and develops new algorithms for constructing optimal LHDs that perform well on both criteria, based on a Pareto front optimization approach. The new LHDs are evaluated through case studies and compared with traditional methods to demonstrate their improved performance.
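As a hedged illustration of the criteria discussed (not code from the thesis), the following sketch generates a random Latin hypercube and scores it with a maximin distance criterion; the function names and the crude random search are assumptions for illustration only.

```python
import numpy as np

def random_lhd(n, k, seed=None):
    """Random n-run, k-factor Latin hypercube: each column is a
    permutation of the cell midpoints (i + 0.5) / n."""
    rng = np.random.default_rng(seed)
    return np.column_stack(
        [(rng.permutation(n) + 0.5) / n for _ in range(k)]
    )

def maximin_criterion(design):
    """Smallest pairwise Euclidean distance; larger is more space-filling."""
    d = design[:, None, :] - design[None, :, :]
    dist = np.sqrt((d ** 2).sum(axis=-1))
    iu = np.triu_indices(len(design), k=1)
    return dist[iu].min()

# Crude random search: keep the most space-filling of 1000 random LHDs.
best = max((random_lhd(10, 3, seed) for seed in range(1000)),
           key=maximin_criterion)
print(maximin_criterion(best))
```

A Pareto front approach, as in the thesis, would score each candidate on this criterion and a projection criterion simultaneously and keep the non-dominated designs rather than optimizing a single objective.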
12.
Informované prohledávání prostoru řešení pomocí algoritmu A* / Informed searching in state space using A* algorithm. Kobr, Dan, January 2012.
This master's thesis deals with informed search algorithms. Its theoretical section summarizes the basic concepts and terms related to this topic, drawn especially from discrete mathematics, graph theory, artificial intelligence, and agent systems. The main aim of this section is to provide a theoretical analysis of search algorithms and to classify them as informed or uninformed. The theoretical section describes basic search strategies such as breadth-first search, depth-first search, and their modifications, then focuses on informed search algorithms, specifically A* (A-Star), IDA* (Iterative Deepening A-Star), and SMA* (Simplified Memory-Bounded A-Star). It also covers topics related to informed search strategies: heuristic functions and the problem-relaxation method. The algorithms are analyzed in order to compare their time and space complexity. The main goal of the practical part of the thesis is to design and implement a software application that uses the informed and uninformed search strategies described in the theoretical section. The application solves the fifteen puzzle, the so-called Loyd's fifteen puzzle. The first part of the practical section analyzes the fifteen puzzle from a mathematical and computational perspective, then examines possible implementation variants of the algorithms and heuristics and proposes the design of the application. A description of the main interfaces and classes of the implemented application follows. At the end of this section, the informed algorithms and heuristics are analyzed using the implemented application, and the obtained results are compared with the theoretical characteristics of the algorithms.
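For orientation, a minimal A* sketch in the spirit of the algorithms discussed (not the thesis's implementation), using the Manhattan-distance heuristic on the fifteen puzzle; the state encoding and helper names are assumptions.

```python
import heapq

GOAL = tuple(range(1, 16)) + (0,)  # 0 marks the blank tile

def manhattan(state):
    """Admissible heuristic: sum of tile distances to their goal cells."""
    return sum(abs(i // 4 - (v - 1) // 4) + abs(i % 4 - (v - 1) % 4)
               for i, v in enumerate(state) if v != 0)

def neighbors(state):
    """Yield states reachable by sliding a tile into the blank."""
    b = state.index(0)
    r, c = divmod(b, 4)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 4 and 0 <= nc < 4:
            j = nr * 4 + nc
            s = list(state)
            s[b], s[j] = s[j], s[b]
            yield tuple(s)

def astar(start):
    """A*: expand states in order of f = g + h; returns the move count."""
    frontier = [(manhattan(start), 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g
        for nxt in neighbors(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(nxt), g + 1, nxt))

# One move from the goal: swap the blank with tile 15.
print(astar(tuple(range(1, 15)) + (0, 15)))  # -> 1
```

With an admissible heuristic such as Manhattan distance, A* returns a shortest solution; IDA* and SMA*, as discussed in the thesis, trade this explicit frontier memory for iterative deepening or a bounded-memory frontier.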
13.
Šachy a umělá inteligence / Chess and Artificial Intelligence. Macůrek, Miloslav, January 2019.
This thesis deals with artificial intelligence algorithms in the game of chess and their implementation in a computer chess program. The research covers the basics of chess and its history with a focus on computer chess, classical methods of chess programming, and a basic summary of neural networks and possibilities for their application. Selected algorithms are then implemented in the chess program "Beast".
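A hedged sketch of the classical chess-programming core mentioned above, minimax with alpha-beta pruning in negamax form; the evaluation and move-generation callbacks are assumed interfaces, not code from the "Beast" program.

```python
def alphabeta(pos, depth, alpha, beta, evaluate, moves, apply_move):
    """Negamax with alpha-beta pruning: score of `pos` for the side to
    move, searched to `depth` plies. evaluate/moves/apply_move are
    caller-supplied callbacks for the position representation."""
    legal = list(moves(pos))
    if depth == 0 or not legal:  # leaf or terminal node (simplification)
        return evaluate(pos)
    best = float("-inf")
    for m in legal:
        score = -alphabeta(apply_move(pos, m), depth - 1, -beta, -alpha,
                           evaluate, moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # refutation found; remaining moves are pruned
            break
    return best
```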
14.
Video game pathfinding and improvements to discrete search on grid-based maps. Anguelov, Bobby, 02 March 2012.
The most basic requirement for any computer-controlled game agent in a video game is to be able to successfully navigate the game environment, and pathfinding is an essential component of any agent navigation system. Pathfinding is, at the simplest level, a search technique for finding a route between two points in an environment. The real-time, multi-agent nature of video games places extremely tight constraints on the pathfinding problem. This study aims to provide the first complete review of the current state of video game pathfinding, both with regard to the graph search algorithms employed and the implications of pathfinding within dynamic game environments. Furthermore, this thesis presents novel work in the form of a domain-specific search algorithm for use on grid-based game maps: the spatial grid A* algorithm, which is shown to offer significant improvements over A* within the intended domain. Dissertation (MSc), University of Pretoria, 2011.
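For grid-based maps specifically, a sketch of two domain details that distinguish them from generic graph search: 8-connected neighbor expansion with diagonal step costs and the octile-distance heuristic. This is illustrative only and is not the spatial grid A* algorithm of the thesis; both functions could plug into a standard A* loop like the one sketched for entry 12.

```python
import math

def grid_neighbors(grid, node):
    """8-connected neighbors of (row, col) on a boolean walkability grid,
    yielding ((row, col), step_cost) pairs."""
    r, c = node
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc]:
                cost = math.sqrt(2) if dr and dc else 1.0
                yield (nr, nc), cost

def octile(a, b):
    """Admissible heuristic for 8-connected grids with 1 / sqrt(2) costs."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return max(dx, dy) + (math.sqrt(2) - 1) * min(dx, dy)
```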
15.
Practical Improvements in Applied Spectral Learning. Drake, Adam C., 30 June 2010.
Spectral learning algorithms, which learn an unknown function by learning a spectral representation of it, have been widely used in computational learning theory to prove many interesting learnability results, and they have also been used successfully in real-world applications. However, previous work has left open many questions about how best to use these methods in real-world learning scenarios. This dissertation presents several significant advances in real-world spectral learning. It presents new algorithms for finding large spectral coefficients (a key sub-problem in spectral learning) that allow spectral learning methods to be applied to much larger problems, and to a wider range of problems, than was possible with previous approaches. It presents an empirical comparison of new and existing spectral learning methods, showing, among other things, that the most common approach seems to be the least effective in typical real-world settings. It also presents a multi-spectrum learning approach in which a learner makes use of multiple representations when training; empirical results show that a multi-spectrum learner can usually match or exceed the performance of the best single-spectrum learner. Finally, this dissertation shows how a particular application, sentiment analysis, can benefit from a spectral approach, as the standard approach to the problem is significantly improved by incorporating spectral features into the learning process.
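For concreteness (an assumption about the setting, which the abstract does not fix): over Boolean inputs, the spectrum is typically the Walsh-Hadamard (Fourier) transform of the function's value table. The sketch below finds the largest coefficients by computing the full transform, which is feasible only for small n; scaling past this brute-force baseline is exactly the sub-problem the dissertation's coefficient-finding algorithms target.

```python
import numpy as np

def fwht(values):
    """Fast Walsh-Hadamard transform of a length-2**n vector,
    returning normalized Fourier coefficients."""
    a = np.array(values, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a / len(a)

# Example: f(x) = parity of the first two bits on n = 4 inputs,
# encoded as +/-1 values over all 2**n inputs in lexicographic order.
n = 4
table = [1 - 2 * (((i >> (n - 1)) ^ (i >> (n - 2))) & 1) for i in range(2 ** n)]
coeffs = fwht(table)
largest = np.argsort(-np.abs(coeffs))[:3]  # indices of biggest coefficients
print([(int(i), round(float(coeffs[i]), 3)) for i in largest])
```

The single coefficient of magnitude 1 sits at the index encoding the parity set, which is the kind of sparse spectral structure these learners exploit.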
16.
Complexity Bounds for Search Problems. Recker, Nicholas Joseph, 18 April 2024.
We analyze the query complexity of multiple search problems.

Firstly, we provide lower bounds on the complexity of "local search". In local search we are given a graph G and oracle access to a function f mapping the vertices to numbers, and we seek a local minimum of f, i.e., a vertex v such that f(v) <= f(u) for all neighbors u of v. We provide separate lower bounds in terms of several graph parameters, including congestion, expansion, separation number, mixing time of a random walk, and spectral gap. To aid in showing these bounds, we design and use an improved relational adversary method for classical algorithms, building on the prior work of Scott Aaronson. We also obtain some quantum bounds using the traditional strong weighted adversary method.

Secondly, we show a multiplicative duality gap for Yao's minimax lemma by studying unordered search. We then go on to give tighter-than-asymptotic bounds for unordered and ordered search in rounds. Inspired by a connection through sorting with rank queries, we also provide tight asymptotic bounds for proportional cake cutting in rounds.
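To make the oracle model concrete, here is a hedged sketch of the local search problem itself: steepest descent with a query counter on an adjacency-list graph. It illustrates the object being lower-bounded, not the relational adversary method.

```python
def local_minimum(adjacency, f):
    """Steepest descent to a local minimum of f on a graph, counting
    oracle queries; adjacency maps each vertex to its neighbor list."""
    cache = {}

    def query(v):
        if v not in cache:
            cache[v] = f(v)  # one oracle query per distinct vertex
        return cache[v]

    v = next(iter(adjacency))
    while True:
        best = min(adjacency[v], key=query, default=v)
        if query(best) >= query(v):
            return v, len(cache)  # local minimum and number of queries
        v = best

# Example on a 4-cycle with values 3, 1, 2, 0: vertex 3 is a local minimum.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
vals = {0: 3, 1: 1, 2: 2, 3: 0}
print(local_minimum(adj, vals.get))  # -> (3, 4)
```

The lower bounds in the thesis say, roughly, how many such queries any algorithm (not just steepest descent) must spend on graphs with given congestion, expansion, or mixing properties.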
17.
Coherent and non-coherent data detection algorithms in massive MIMO. Alshamary, Haider Ali Jasim, 01 May 2017.
Over the past few years there has been extensive growth in the number of data-consuming devices; billions of mobile data devices are connected to the global wireless network. Customers demand new services and up-to-date applications, such as real-time video and games, which require reliable, high-data-rate wireless communication with high network throughput. One way to meet these requirements is to increase the number of transmit and/or receive antennas of the wireless communication system. Massive multiple-input multiple-output (MIMO) has emerged as a promising candidate technology for the next generation (5G) of wireless communication: it increases the spatial multiplexing gain and the data rate by adding a very large number of antennas to the base station (BS) terminals. However, building efficient algorithms able to coherently or non-coherently decode a large flow of transmitted signals with low complexity is a major challenge in massive MIMO. In this dissertation, we propose novel approaches that achieve optimal performance for joint channel estimation and signal detection in massive MIMO systems. The dissertation consists of three parts, depending on the number of users at the receiver side.
In the first part, we introduce a probabilistic approach that solves the problem of coherent signal detection using an optimized Markov chain Monte Carlo (MCMC) technique. Two factors determine how quickly the MCMC detector finds the optimal solution: the probability of encountering the optimal solution once the Markov chain has converged to its stationary distribution, and the mixing time of the MCMC detector. First, we compute the optimal value of the "temperature" parameter such that the Markov chain encounters the optimal solution with a probability that is only polynomially, rather than exponentially, small. Second, we study the mixing time of the underlying Markov chain of the proposed MCMC detector.
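As an illustrative sketch (not the dissertation's optimized detector), a Gibbs-sampling MCMC detector for the standard real model y = Hx + w with BPSK symbols, in which a temperature parameter scales the Boltzmann conditionals in the role described above.

```python
import numpy as np

def mcmc_bpsk_detector(H, y, sigma2, temperature=1.0, sweeps=200, seed=None):
    """Gibbs sampler over BPSK vectors x in {-1,+1}^n for y = Hx + w.
    Resamples one coordinate at a time from its conditional Boltzmann
    distribution at the given temperature; tracks the best x visited."""
    rng = np.random.default_rng(seed)
    n = H.shape[1]
    cost = lambda v: float(np.sum((y - H @ v) ** 2))
    x = rng.choice([-1.0, 1.0], size=n)
    best, best_cost = x.copy(), cost(x)
    for _ in range(sweeps):
        for i in range(n):
            xp, xm = x.copy(), x.copy()
            xp[i], xm[i] = 1.0, -1.0
            # P(x_i = +1 | rest) under exp(-cost / (2 sigma^2 T)),
            # clipped to avoid overflow in exp.
            d = np.clip((cost(xm) - cost(xp)) / (2.0 * sigma2 * temperature),
                        -30.0, 30.0)
            x[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-d)) else -1.0
            if cost(x) < best_cost:
                best, best_cost = x.copy(), cost(x)
    return best
```

Raising the temperature flattens the conditionals and speeds mixing at the cost of visiting the optimum less often; the trade-off between these two effects is the subject of the analysis described above.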
The first part of the dissertation assumes the channel state information is known; in the second part we consider non-coherent signal detection. We develop optimal joint channel-estimation and signal-detection algorithms for massive single-input multiple-output (SIMO) wireless systems, proposing exact non-coherent data detection algorithms in the sense of the generalized likelihood ratio test (GLRT). In addition to their optimality, these tree-based algorithms have low expected complexity and work for general-modulus constellations. More specifically, despite the large number of unknown channel coefficients in massive SIMO systems, we show that the expected computational complexity of these algorithms is linear in the number of receive antennas (N) and polynomial in the channel coherence time (T). We prove that as N → ∞, the number of hypotheses tested for each coherent block equals T times the cardinality of the constellation. Simulation results show that the optimal non-coherent data detection algorithms achieve significant performance gains (up to 5 dB improvement in energy efficiency) with low computational complexity.
The third part considers massive MIMO uplink wireless systems with time-division duplex (TDD) operation. We propose a GLRT-optimal algorithm for joint channel estimation and data detection in massive MIMO systems and show that its expected complexity grows polynomially in the channel coherence time (T). The proposed algorithm is novel in two respects: first, the transmitted signal can be chosen from any constellation, constant-modulus or not; second, the algorithm decodes the noisy signal received from a multiple-antenna array, offering an exact solution with complexity polynomial in the coherence block interval. Simulation results demonstrate significant performance gains of our approach compared with suboptimal non-coherent detection schemes. To the best of our knowledge, this is the first algorithm that efficiently achieves GLRT-optimal non-coherent detection for massive MIMO systems with general constellations.
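For the SIMO block model Y = x h^T + W (this notation is an assumption of ours, not the dissertation's), maximizing the likelihood over the unknown channel h reduces the GLRT to maximizing ||Y^H x||^2 / ||x||^2 over transmit vectors x. The brute-force sketch below enumerates all |constellation|^T hypotheses, which is precisely the exponential baseline the proposed tree-based algorithms avoid.

```python
import numpy as np
from itertools import product

def glrt_simo_detect(Y, constellation):
    """Exhaustive GLRT non-coherent detection for Y = x h^T + W.
    Y is the T x N received block; the channel h has been maximized
    out analytically, leaving the metric ||Y^H x||^2 / ||x||^2.
    The result is determined only up to a common phase ambiguity."""
    T = Y.shape[0]
    best_x, best_metric = None, -np.inf
    for symbols in product(constellation, repeat=T):
        x = np.array(symbols, dtype=complex)
        metric = np.linalg.norm(Y.conj().T @ x) ** 2 / np.linalg.norm(x) ** 2
        if metric > best_metric:
            best_x, best_metric = x, metric
    return best_x
```

Note that ||x||^2 varies across hypotheses for non-constant-modulus constellations, which is why general-modulus support is called out as a contribution above.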
18.
Effective and efficient similarity search in databases. Lange, Dustin, January 2013.
Given a large set of records in a database and a query record, similarity search aims to find all records sufficiently similar to the query record. Two main aspects need to be considered to solve this problem: first, for effective search, the set of relevant records is defined using a similarity measure; second, an efficient access method must be found that performs only a few database accesses and comparisons under that similarity measure. This thesis addresses both aspects, with an emphasis on the latter.
In the first part of this thesis, a frequency-aware similarity measure is introduced. Compared record pairs are partitioned according to the frequencies of their attribute values, and for each partition a different similarity measure is created: machine learning techniques combine a set of base similarity measures into an overall similarity measure. After that, a similarity index for string attributes is proposed, the State Set Index (SSI), which is based on a trie (prefix tree) that is interpreted as a nondeterministic finite automaton. For processing range queries, the notion of query plans is introduced to describe which similarity indexes to access and which thresholds to apply; the query result should be as complete as possible under some cost threshold. Two query planning variants are introduced: (1) static planning selects a plan at compile time that is used for all queries; (2) query-specific planning selects a different plan for each query. For answering top-k queries, the Bulk Sorted Access algorithm (BSA) is introduced, which retrieves large chunks of records from the similarity indexes using fixed thresholds and focuses its efforts on records that are ranked high in more than one attribute and are thus promising candidates.
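As a simplified relative of the sorted-access idea behind BSA (this is the classic threshold algorithm, plainly not the BSA algorithm itself), the sketch below performs round-robin sorted access over per-attribute index lists and stops once no unseen record can still enter the top k; the list format, nonnegative similarities, and sum aggregation are assumptions.

```python
import heapq

def threshold_topk(sorted_lists, random_access, k):
    """Simplified threshold algorithm: sorted_lists[i] holds (record, sim)
    pairs in descending sim for attribute i; random_access(i, rec) returns
    attribute i's similarity for rec. Scores are aggregated by summation."""
    m = len(sorted_lists)
    scores = {}
    heap = []  # min-heap of (score, record) holding the current top k
    for depth in range(max(len(lst) for lst in sorted_lists)):
        last = [0.0] * m
        for i, lst in enumerate(sorted_lists):
            if depth >= len(lst):
                continue
            rec, sim = lst[depth]
            last[i] = sim
            if rec not in scores:
                scores[rec] = sum(random_access(j, rec) for j in range(m))
                heapq.heappush(heap, (scores[rec], rec))
                if len(heap) > k:
                    heapq.heappop(heap)
        if len(heap) == k and heap[0][0] >= sum(last):
            break  # threshold reached: unseen records cannot make the top k
    return sorted(heap, reverse=True)
```

BSA differs in retrieving large chunks per access and in prioritizing records seen high in several attributes, but its stopping logic is of the same flavor.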
The described components form a complete similarity search system. Based on prototypical implementations, this thesis shows comparative evaluation results for all proposed approaches on different real-world data sets, one of which is a large person data set from a German credit rating agency.
19.
Camera Controlled Pick and Place Application with Puma 760 Robot. Durusu, Deniz, 01 December 2005.
This thesis analyzes the kinematic structure of the Puma 760 arm and presents the implementation of an image-based pick-and-place application that takes the obstacles in the environment into account. Forward and inverse kinematic solutions of the PUMA 760 are derived, and control software has been developed to compute both. The control program enables the user to perform both offline programming and real-time execution by transmitting VAL (Variable Assembly Language) commands to the control computer.
Using the proposed inverse kinematics solutions, an interactive application is built on the PUMA 760 arm. A picture of the workspace is taken by a fixed camera mounted above the robot workspace. The captured image is processed to find the position and distribution of all objects in the workspace, and the target is differentiated from the obstacles by analyzing specific properties of the objects, e.g. roundness. After the configuration of the workspace has been determined, a clustering-based search algorithm is executed to find a path along which the target object is picked up and placed at the desired location. The trajectory points, in pixel coordinates, are mapped into robot workspace coordinates using the camera calibration matrix obtained by calibrating the robot arm with respect to the attached camera. The joint angles required to bring the end effector of the robot arm to the desired location are calculated using a Jacobian-based inverse kinematics algorithm. VAL commands are then generated and sent to the control computer of the PUMA 760 to pick up the object and place it at a user-defined location.
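A generic damped least-squares Jacobian iteration of the kind referenced above, shown for a planar 2-link arm to keep it short; this is an illustrative sketch, not the PUMA 760 (6-axis) implementation, and all names are ours.

```python
import numpy as np

def jacobian_ik_2link(l1, l2, target, q0=(0.0, 0.0), iters=200, damping=0.01):
    """Damped least-squares (Jacobian-based) inverse kinematics for a
    planar 2-link arm: iterates joint angles q until the end effector
    reaches target = (x, y)."""
    q = np.array(q0, dtype=float)
    target = np.asarray(target, dtype=float)
    for _ in range(iters):
        # Forward kinematics of the end effector.
        x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
        y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
        err = target - np.array([x, y])
        if np.linalg.norm(err) < 1e-6:
            break
        # Analytic Jacobian of (x, y) with respect to (q1, q2).
        J = np.array([
            [-l1 * np.sin(q[0]) - l2 * np.sin(q[0] + q[1]), -l2 * np.sin(q[0] + q[1])],
            [ l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),  l2 * np.cos(q[0] + q[1])],
        ])
        # Damped pseudo-inverse step: dq = J^T (J J^T + lambda^2 I)^(-1) err
        q += J.T @ np.linalg.solve(J @ J.T + damping ** 2 * np.eye(2), err)
    return q

print(jacobian_ik_2link(1.0, 1.0, target=(1.2, 0.8)))
```

The damping term keeps the step well-defined near kinematic singularities, where the plain pseudo-inverse would blow up.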
20.
A one-class object-based system for sparse geographic feature identification. Fourie, Christoff, 2011.
Thesis (MSc, Geography and Environmental Studies), University of Stellenbosch, 2011. The automation of information extraction from earth observation imagery has become a field of active research, mainly due to the high volumes of remotely sensed data that remain unused and the possible benefits the extracted information can provide to a wide range of interest groups. In this work an earth observation image processing system is presented and profiled that attempts to streamline the information extraction process for geographic object anomaly detection, without degrading the quality of the extracted information. The proposed system, implemented as a software application, combines recent research on automating image segment generation with automatic selection of statistical classifier parameters and attribute subsets using evolution-inspired search algorithms.
Exploratory research was conducted on the use of an edge metric as a fitness function in an evolutionary search heuristic that automates the generation of image segments for a region-merging segmentation algorithm with six control parameters; for this application, the edge metric is compared with an area-based metric. The use of attribute subset selection in conjunction with a free-parameter tuner for a one-class support vector machine (SVM) classifier, operating on high-dimensional object-based data, was also investigated. For common earth observation anomaly detection problems using typical segment attributes, the combined free-parameter tuning and attribute subset selection system provided statistically significant improvements over free-parameter tuning alone; in some extreme cases, however, owing to the stochastic nature of the search algorithm employed, the parameter-tuning-only strategy provided slightly better results. The developed system was used in a case study to map a single class of interest on a 22.5 x 22.5 km subset of a SPOT 5 image and was compared with a multiclass classification strategy; it produced slightly better classification accuracies than the multiclass classifier while requiring samples only from the class of interest.
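The one-class classification step can be sketched with scikit-learn's OneClassSVM, where nu and gamma are exactly the kind of free parameters an evolutionary tuner would search over; the data and parameter values are illustrative assumptions, not the thesis's system.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Toy segment attributes for the class of interest (e.g. roundness, area):
# a one-class SVM trains on samples from the target class only.
train = rng.normal(loc=[0.8, 1.0], scale=0.05, size=(200, 2))

# nu (upper bound on the training outlier fraction) and gamma (RBF width)
# are the free parameters an evolutionary search would tune.
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma=2.0).fit(train)

candidates = np.array([[0.82, 1.01],   # similar to the target class
                       [0.30, 2.50]])  # dissimilar: likely another class
print(clf.predict(candidates))  # +1 = class of interest, -1 = anomaly
```

Needing only positive samples is the practical advantage the case study above reports over the multiclass alternative.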