61

Node-Weighted Prize Collecting Steiner Tree and Applications

Sadeghian Sadeghabad, Sina January 2013
The Steiner Tree problem appears on Karp's list of the first 21 NP-hard problems and is well known as one of the most fundamental problems in network design. We study the node-weighted version of the Prize Collecting Steiner Tree problem. In this problem, we are given a simple graph with a cost and a penalty value associated with each node. Our goal is to find a subtree T of the graph minimizing the cost of the nodes in T plus the penalties of the nodes not in T. By a reduction from the Set Cover problem, it can easily be shown that the problem cannot be approximated in polynomial time within a factor of (1-o(1)) ln n unless NP has quasi-polynomial time algorithms, where n is the number of vertices of the graph. Moss and Rabani claimed an O(log n)-approximation algorithm for the problem using a primal-dual approach in their STOC'01 paper \cite{moss2001}. We show that their algorithm is incorrect by providing a counterexample in which there is an O(n) gap between the dual solution constructed by their algorithm and the optimal solution. Further, evidence is given that their algorithm probably does not have a simple fix. We propose a new algorithm which is more involved and introduces novel ideas in the primal-dual approach for network design problems. Our algorithm is also Lagrangian Multiplier Preserving, and we show how this property can be utilized to design an O(log n)-approximation algorithm for the Node-Weighted Quota Steiner Tree problem using the Lagrangian relaxation method. We also show an application of the Node-Weighted Quota Steiner Tree problem in designing an algorithm with a better approximation factor for the Technology Diffusion problem, proposed by Goldberg and Liu in \cite{goldberg2012} (SODA 2013). In Technology Diffusion, we are given a graph G and a threshold θ(v) associated with each vertex v, and we seek a set of initial nodes called the seed set. Technology diffusion is a dynamic process defined over time in which each vertex is either active or inactive. The vertices in the seed set are initially activated, and every other vertex v becomes active whenever there are at least θ(v) active nodes connected to v through other active nodes. The Technology Diffusion problem asks for a minimum seed set that activates all nodes. Goldberg and Liu gave an O(rl log n)-approximation algorithm for the problem, where r and l are the diameter of G and the number of distinct threshold values, respectively. We improve the approximation factor to O(min{r, l} log n) by establishing a close connection between the problem and the Node-Weighted Quota Steiner Tree problem.
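The activation process described in the abstract is straightforward to simulate directly from its definition. Below is a minimal sketch (not code from the thesis; the graph representation and function name are illustrative) that checks whether a given seed set activates every vertex:

```python
from collections import deque

def seed_activates_all(graph, theta, seed):
    """Simulate the technology-diffusion process: an inactive vertex v
    becomes active once at least theta[v] active vertices are reachable
    from v through paths whose interior vertices are all active.
    graph: dict mapping vertex -> set of neighbours (undirected)."""
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for v in graph:
            if v in active:
                continue
            # BFS from v that only continues through active vertices,
            # counting the active vertices it can reach.
            seen, reach, queue = {v}, set(), deque([v])
            while queue:
                u = queue.popleft()
                for w in graph[u]:
                    if w in seen:
                        continue
                    seen.add(w)
                    if w in active:
                        reach.add(w)
                        queue.append(w)  # pass only through active nodes
            if len(reach) >= theta[v]:
                active.add(v)
                changed = True
    return len(active) == len(graph)
```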
62

Problemas de alocação e precificação de itens / Allocation and pricing problems

Rafael Crivellari Saliba Schouery 14 February 2014
In this thesis we consider allocation and pricing problems, where we have a set of items and a set of consumers interested in such items. Our objective is to choose an allocation of items to consumers, together with a pricing of these items, to maximize the profit obtained, considering the maximum value each consumer is willing to pay for a specific item. In particular, we focus on three problems: the Max-Buying Problem, the Envy-Free Pricing Problem and the Second-Price Ad Auction. The Max-Buying Problem and the Envy-Free Pricing Problem model the problem faced in practice by companies that sell products or services, where it is necessary to choose the prices of the products or services offered to clients correctly in order to obtain a good profit. The Second-Price Ad Auction models the problem faced by companies that own search engines and wish to sell space to advertisers in users' search results. Both questions, the pricing of products and services and the allocation of advertisers to search results, are of great economic relevance and are therefore interesting to attack from both theoretical and practical perspectives. Our focus in this work is on approximation algorithms and mixed integer programming formulations for the aforementioned problems, presenting new results that improve on those previously known in the literature, as well as determining the computational complexity of these problems or of some particular cases of interest.
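As a toy illustration of the pricing side of these problems (a standard observation, not a result from this thesis): with a single item type in unlimited supply, an optimal uniform price is always one of the buyers' valuations, so it can be found by enumeration:

```python
def best_uniform_price(valuations):
    """Toy example: one item type, unlimited supply. Each buyer with
    valuation v buys at price p iff v >= p, so an optimal uniform price
    is some buyer's valuation. Returns (price, profit)."""
    best = (0.0, 0.0)
    for p in sorted(set(valuations)):
        profit = p * sum(1 for v in valuations if v >= p)
        best = max(best, (p, profit), key=lambda t: t[1])
    return best

# best_uniform_price([3, 5, 8]) -> (5, 10): two buyers pay 5 each.
```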
63

Optimisation techniques for combustor design

Motsamai, O.S. (Oboetswe Seraga) 07 April 2009
For gas turbines, the demand for high-performance, more efficient and longer-life turbine blades is increasing. This is especially so, now that there is a need for high-power and low-weight aircraft gas turbines. Thus, the search for improved design methodologies for the optimisation of combustor exit temperature profiles enjoys high priority. Traditional experimental methods are found to be too time-consuming and costly, and they do not always achieve near-optimal designs. In addition to the above deficiencies, methods based on semi-empirical correlations are found to be lacking in performing three-dimensional analyses and these methods cannot be used for parametric design optimisation. Computational fluid dynamics has established itself as a viable alternative to reduce the amount of experimentation needed, resulting in a reduction in the time scales and costs of the design process. Furthermore, computational fluid dynamics provides more insight into the flow process, which is not available through experimentation only. However, the fact remains that, because of the trial-and-error nature of adjusting the parameters of the traditional optimisation techniques used in this field, the designs reached cannot be called “optimum”. The trial-and-error process depends a great deal on the skill and experience of the designer. Also, the above technologies inhibit the improvement of the gas turbine power output by limiting the highest exit temperature possible, putting more pressure on turbine blade cooling technologies. This limitation to technology can be overcome by implementing a search algorithm capable of finding optimal design parameters. Such an algorithm will perform an optimum search prior to computational fluid dynamics analysis and rig testing. In this thesis, an efficient methodology is proposed for the design optimisation of a gas turbine combustor exit temperature profile. The methodology involves the combination of computational fluid dynamics with a gradient-based mathematical optimiser, using successive objective and constraint function approximations (Dynamic-Q) to obtain the optimum design. The methodology is tested on three cases, namely: (a) The first case involves the optimisation of the combustor exit temperature profile with two design variables related to the dilution holes, which is a common procedure. The combustor exit temperature profile was optimised, and the pattern factor improved, but pressure drop was very high. (b) The second case involves the optimisation of the combustor exit temperature profile with four design variables, one equality constraint and one inequality constraint based on pressure loss. The combustor exit temperature profile was also optimised within the constraints of pressure. Both the combustor exit temperature profile and pattern factor were improved. (c) The third case involves the optimisation of the combustor exit temperature profile with five design variables. The swirler angle and primary hole parameters were included in order to allow for the effect of the central toroidal recirculation zone on the combustor exit temperature profile. Pressure loss was also constrained to a certain maximum. The three cases show that a relatively recent mathematical optimiser (Dynamic-Q), combined with computational fluid dynamics, can be considered a strong alternative to the design optimisation of a gas turbine combustor exit temperature profile. 
This is due to the fact that the proposed methodology provides designs that can be called near-optimal, when compared with those yielded by traditional methods and by computational fluid dynamics alone. / Thesis (PhD)--University of Pretoria, 2009. / Mechanical and Aeronautical Engineering / unrestricted
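Dynamic-Q works by fitting successive spherical quadratic approximations to the objective and constraints, each minimised under a move limit. The following is only a rough sketch of that successive-approximation idea under simplifying assumptions (unconstrained, fixed curvature guess, finite-difference gradients); it is not the actual optimiser used in the thesis:

```python
import numpy as np

def sqa_minimise(f, x0, move_limit=0.2, curvature=1.0, iters=30, h=1e-5):
    """Illustrative sketch: at each iterate x_k, build the spherical
    quadratic model q(x) = f(x_k) + g.(x - x_k) + 0.5*c*|x - x_k|^2,
    with g estimated by central finite differences and c a fixed
    positive curvature guess (Dynamic-Q fits c from function history),
    then minimise q subject to a move limit |x - x_k| <= move_limit."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                      for e in np.eye(len(x))])
        step = -g / curvature          # unconstrained minimiser of q
        norm = np.linalg.norm(step)
        if norm > move_limit:          # enforce the move limit
            step *= move_limit / norm
        x = x + step
    return x
```

In the combustor setting, each evaluation of f would be a full CFD run, which is why a method needing only a handful of function values per iteration is attractive.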
64

Towards New Bounds for the 2-Edge Connected Spanning Subgraph Problem

Legault, Philippe January 2017
Given a complete graph K_n = (V, E) with non-negative edge costs c ∈ R^E, the problem multi-2EC_cost is that of finding a 2-edge connected spanning multi-subgraph of K_n with minimum cost. It is believed that there are no efficient ways to solve the problem exactly, as it is NP-hard. Methods such as approximation algorithms, which rely on lower bounds like the linear programming relaxation multi-2EC^LP_cost of multi-2EC_cost, thus become vital to obtain solutions guaranteed to be close to the optimal in a fast manner. In this thesis, we focus on the integrality gap α(multi-2EC_cost) of multi-2EC^LP_cost, which is a measure of the quality of multi-2EC^LP_cost as a lower bound. Although we currently only know that 6/5 ≤ α(multi-2EC_cost) ≤ 3/2, the integrality gap for multi-2EC_cost has been conjectured to be 6/5. We explore the idea of using the structure of solutions and the concept of convex combination to obtain improved bounds for α(multi-2EC_cost). We focus our efforts on a family J of half-integer solutions that appear to give the largest integrality gap for multi-2EC_cost. We successfully show that the conjecture α(multi-2EC_cost) = 6/5 is true for any cost function optimized by some x* ∈ J. We also study the related problem 2EC_size, which consists of finding a minimum-size 2-edge connected spanning subgraph of a 2-edge connected graph. The problem is NP-hard even at its simplest, when restricted to cubic 3-edge connected graphs. We study that case in the hope of finding a more general method, and we show that every 3-edge connected cubic graph G = (V′, E′) with n = |V′| admits a 2EC_size solution of size at most 7n/6. This improves upon Boyd, Iwata and Takazawa's guarantee of 6n/5 and extends Takazawa's 7n/6 guarantee for bipartite cubic 3-edge connected graphs to all cubic 3-edge connected graphs.
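For context, feasibility of a candidate 2EC solution can be verified with one depth-first search that looks for bridges (edges whose removal disconnects the graph). This standard routine, sketched below for multigraphs (it is not code from the thesis), is the check that any constructed spanning subgraph must pass:

```python
import sys

def is_two_edge_connected(n, edges):
    """True iff the (multi)graph on vertices 0..n-1 with the given edge
    list is connected and bridgeless, via one DFS with low-link values.
    Parallel edges are distinguished by their index in `edges`."""
    sys.setrecursionlimit(10000)
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc, low = [-1] * n, [0] * n
    ok, t = [True], [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = t[0]; t[0] += 1
        for v, i in adj[u]:
            if i == parent_edge:
                continue
            if disc[v] == -1:
                dfs(v, i)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:   # edge (u, v) is a bridge
                    ok[0] = False
            else:
                low[u] = min(low[u], disc[v])

    dfs(0, -1)
    return ok[0] and all(d != -1 for d in disc)  # bridgeless and connected
```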
65

Complexity and Approximation of the Rectilinear Steiner Tree Problem

Mussafi, Noor Saif Muhammad 21 July 2009
Given a finite set K of terminals in the plane, a rectilinear Steiner minimum tree for K (RST) is a tree of the shortest possible length that interconnects these terminals using only horizontal and vertical line segments, possibly through additional Steiner points. We show that the RST problem is NP-complete. Moreover, we present an approximation method for the RST problem proposed by Sanjeev Arora in 1996, Arora's approximation scheme. This algorithm has time complexity polynomial in the number of terminals for a fixed performance ratio 1 + ε.
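A common baseline for this problem, predating Arora's scheme, is the rectilinear minimum spanning tree: it uses no Steiner points, yet by a classical result of Hwang its length is at most 3/2 times that of the rectilinear Steiner minimum tree. A minimal sketch (Prim's algorithm under the L1 metric; not code from the thesis):

```python
def rectilinear_mst_length(points):
    """Prim's algorithm on the complete graph over `points` with
    Manhattan (L1) edge weights. The resulting spanning tree is within
    a factor 3/2 of the rectilinear Steiner minimum tree (Hwang 1976),
    so it serves as a simple baseline before any approximation scheme."""
    n = len(points)
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    in_tree = [False] * n
    best = [float('inf')] * n   # cheapest connection cost to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(points[u], points[v]))
    return total

# rectilinear_mst_length([(0, 0), (2, 0), (1, 3)]) -> 6.0
```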
66

On semi-online machine scheduling and generalized bin covering

Hellwig, Matthias 17 July 2013
In this thesis we study algorithms for scheduling problems. We investigate semi-online minimum-makespan scheduling and generalized bin covering. In online minimum-makespan scheduling we are given a set of m machines and n jobs, where each job Jt is specified by a processing time. The jobs arrive one by one and have to be assigned to machines immediately and irrevocably, without any knowledge of future jobs. The load of a machine is defined to be the total processing time of the jobs assigned to it. The goal is to place the jobs on the machines such that the maximum load of a machine is minimized. In semi-online minimum-makespan scheduling this strict setting is softened. We investigate three different models. In the first setting, an algorithm is given advice on the total processing time of the jobs. In the second setting, we may reassign already placed jobs up to a limited amount. The third semi-online setting we study is minimum-makespan scheduling with parallel schedules, in which an algorithm may maintain several schedules, the best of which is output after the arrival of the entire job sequence. In generalized bin covering we are given m bin types and n items. Each bin type Mj is specified by a demand dj and a revenue rj. Each item Jt has a size pt. A bin of type Mj is said to be covered if the total size of the items assigned to it is at least the demand dj; the revenue rj is then earned. The goal is to find an assignment of items to bins maximizing the total revenue obtained. We study two models of bin supply. In the unit supply model, only one bin of each type is available. By contrast, in the infinite supply model each bin type is available arbitrarily often; hence the former is a generalization of the latter. We provide nearly tight upper and lower bounds for all models.
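For the underlying online problem, the classical baseline is Graham's list scheduling, which is (2 − 1/m)-competitive; the semi-online models above relax the setting precisely to beat this bound. A minimal sketch (a textbook algorithm, not code from the thesis):

```python
import heapq

def list_schedule(m, jobs):
    """Graham's list scheduling: assign each arriving job to a currently
    least-loaded machine. Returns the machine assignment per job and the
    resulting makespan. (2 - 1/m)-competitive for online minimum-makespan
    scheduling."""
    loads = [(0.0, i) for i in range(m)]  # min-heap of (load, machine)
    heapq.heapify(loads)
    assignment = []
    for p in jobs:
        load, i = heapq.heappop(loads)    # least-loaded machine
        assignment.append(i)
        heapq.heappush(loads, (load + p, i))
    makespan = max(load for load, _ in loads)
    return assignment, makespan
```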
67

Topics in spatial and dynamical phase transitions of interacting particle systems

Restrepo Lopez, Ricardo 19 August 2011
In this work we provide several improvements in the study of phase transitions of interacting particle systems:
- We determine a quantitative relation between the non-extremality of the limiting Gibbs measure of a tree-based spin system and the temporal mixing of the Glauber dynamics over its finite projections. We define the concept of 'sensitivity' of a reconstruction scheme to establish such a relation. In particular, we focus on the independent sets model, determining a phase transition for the mixing time of the Glauber dynamics at the same location as the extremality threshold of the simple invariant Gibbs version of the model.
- We develop the technical analysis of the so-called spatial mixing conditions for interacting particle systems to account for the connectivity structure of the underlying graph. This analysis leads to improvements regarding the location of the uniqueness/non-uniqueness phase transition for the independent sets model over amenable graphs; among them, the elusive hard-square model in lattice statistics, which has received attention since Baxter's solution of the analogous hard-hexagon model in 1980.
- We build on the work of Montanari and Gerschenfeld to determine the existence of correlations for the coloring model in sparse random graphs. In particular, we prove that correlations exist above the 'clustering' threshold of such a model, thus providing further evidence for the conjectured algorithmic 'hardness' occurring at that point.
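To make the dynamics concrete, here is the standard heat-bath Glauber sampler for the hard-core (independent sets) model; this generic sketch only illustrates the chain whose mixing time the first result concerns, and is not code from the thesis:

```python
import random

def glauber_hardcore(graph, lam, steps, seed=0):
    """Heat-bath Glauber dynamics for the hard-core model: pick a uniform
    random vertex and resample its state given its neighbours -- it may be
    occupied only if no neighbour is occupied, in which case it becomes
    occupied with probability lam/(1+lam). The stationary distribution is
    the hard-core Gibbs measure at fugacity lam.
    graph: dict mapping vertex -> iterable of neighbours."""
    rng = random.Random(seed)
    vertices = list(graph)
    occupied = set()
    for _ in range(steps):
        v = rng.choice(vertices)
        occupied.discard(v)  # resample v's state from scratch
        if all(u not in occupied for u in graph[v]) \
                and rng.random() < lam / (1 + lam):
            occupied.add(v)
    return occupied
```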
68

From Worst-Case to Average-Case Efficiency – Approximating Combinatorial Optimization Problems

Plociennik, Kai 18 February 2011
In theoretical computer science, various notions of efficiency are used for algorithms. The most commonly used notion is worst-case efficiency, which is defined by requiring polynomial worst-case running time. Another commonly used notion is average-case efficiency for random inputs, which is roughly defined as having polynomial expected running time with respect to the random inputs. Depending on the actual notion of efficiency one uses, the approximability of a combinatorial optimization problem can be very different. In this dissertation, the approximability of three classical combinatorial optimization problems, namely Independent Set, Coloring, and Shortest Common Superstring, is investigated for different notions of efficiency. For the three problems, approximation algorithms are given which guarantee approximation ratios that are unachievable by worst-case efficient algorithms under reasonable complexity-theoretic assumptions. The algorithms achieve polynomial expected running time for different models of random inputs. On the one hand, classical average-case analyses are performed, using totally random input models as the source of random inputs. On the other hand, probabilistic analyses are performed, using semi-random input models inspired by the so-called smoothed analysis of algorithms. Finally, the expected performance of well-known greedy algorithms for random inputs from the considered models is investigated, as is the expected behavior of some properties of the random inputs themselves.
69

Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning Settings

Joseph, Ajin George January 2017
Optimization is a very important field with diverse applications in the physical, social and biological sciences and in various areas of engineering. It appears widely in machine learning, information retrieval, regression, estimation, operations research and a wide variety of computing domains. The subject is deeply studied both theoretically and experimentally, and several algorithms are available in the literature. These algorithms, which can be executed (sequentially or concurrently) on a computing machine, explore the space of input parameters to seek high-quality solutions to the optimization problem, with the search mostly guided by certain structural properties of the objective function. In certain situations, the setting might additionally demand the "absolute optimum" or solutions close to it, which makes the task even more challenging. In this thesis, we propose an optimization algorithm which is "gradient-free", i.e., it does not employ any knowledge of the gradient or higher-order derivatives of the objective function; rather, it utilizes the objective function values themselves to steer the search. The proposed algorithm is particularly effective in a black-box setting, where a closed-form expression of the objective function is unavailable and the gradient or higher-order derivatives are hard to compute or estimate. Our algorithm is inspired by the well-known cross entropy (CE) method. The CE method is a model-based search method for solving continuous/discrete multi-extremal optimization problems where the objective function has minimal structure. It searches the statistical manifold of the parameters which identify a probability distribution/model defined over the input space, seeking the degenerate distribution concentrated on the global optima (assumed to be finite in number). In the early part of the thesis, we propose a novel stochastic approximation version of the CE method for the unconstrained optimization problem, where the objective function is real-valued and deterministic. The basis of the algorithm is a stochastic process of model parameters which is probabilistically dependent on the past history, where we reuse all the previous samples obtained in the process up to the current instant based on discounted averaging. This approach saves overall computational and storage cost. Our algorithm is incremental in nature and possesses attractive features such as stability, computational and storage efficiency, and better accuracy. We further investigate, both theoretically and empirically, the asymptotic behaviour of the algorithm and find that it exhibits global optimum convergence for a particular class of objective functions. Further, we extend the algorithm to solve the simulation/stochastic optimization problem. In stochastic optimization, the objective function has a stochastic characteristic, where the underlying probability distribution is in most cases hard to comprehend and quantify. This begets a more challenging optimization problem, whose difficulty is primarily due to the hardness of computing the objective function values for various input parameters with absolute certainty. In this case, one can only hope to obtain noise-corrupted objective function values for various input parameters. Settings of this kind can be found in scenarios where the objective function is evaluated using a continuously evolving dynamical system or through a simulation.
We propose a multi-timescale stochastic approximation algorithm, where we integrate an additional timescale to accommodate the noisy measurements and asymptotically attenuate the effects of the noise. We find that if the objective function and the noise involved in the measurements are well behaved and the timescales are compatible, then our algorithm can generate high-quality solutions. In the later part of the thesis, we propose algorithms for reinforcement learning/Markov decision processes (MDPs) using the optimization techniques developed in the earlier stage. MDPs can be considered a generalized framework for modelling planning under uncertainty. We provide a novel algorithm for the problem of prediction in reinforcement learning, i.e., estimating the value function of a given stationary policy of a model-free MDP (with large state and action spaces) using the linear function approximation architecture. Here, the value function is defined as the long-run average of the discounted transition costs. The resource requirement of the proposed method in terms of computational and storage cost scales quadratically in the size of the feature set. The algorithm is an adaptation of the multi-timescale variant of the CE method proposed in the earlier part of the thesis for simulation optimization. We also provide both theoretical and empirical evidence to corroborate the credibility and effectiveness of the approach. In the final part of the thesis, we consider a modified version of the control problem in a model-free MDP with large state and action spaces. The control problem most commonly addressed in the literature is to find an optimal policy which maximizes the value function, i.e., the long-run average of the discounted transition payoffs. Contemporary methods also presume access to a generative model/simulator of the MDP, with the hidden premise that observations of the system behaviour in the form of sample trajectories can be obtained with ease from the model. In this thesis, we consider a modified version, where the cost function to be optimized is a real-valued performance function (possibly non-convex) of the value function, and one has to seek the optimal policy without presuming access to the generative model. We propose a stochastic approximation algorithm for this particular control problem. The only information we presuppose to be available to the algorithm is a sample trajectory generated using an a priori chosen behaviour policy. The algorithm is data (sample trajectory) efficient, stable, robust, as well as computationally and storage efficient. We provide a proof of convergence of our algorithm to a high-performing policy relative to the behaviour policy.
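For reference, the plain batch CE method that inspires the thesis' stochastic-approximation variant looks roughly as follows. This is a generic sketch with a Gaussian model over the input space; the thesis' algorithm additionally reuses all past samples via discounted averaging rather than resampling a fresh batch each iteration:

```python
import numpy as np

def cross_entropy_minimise(f, dim, n_samples=100, n_elite=10,
                           iters=50, seed=0):
    """Batch cross-entropy method: sample candidates from the current
    Gaussian model, keep the elite (lowest-f) fraction, and refit the
    mean and standard deviation to the elites so the model concentrates
    on good regions of the input space. Gradient-free: only objective
    function values are used."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 5.0
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(n_samples, dim))
        elite = x[np.argsort([f(xi) for xi in x])[:n_elite]]
        mu = elite.mean(axis=0)
        sigma = elite.std(axis=0) + 1e-8  # avoid premature degeneracy
    return mu

# cross_entropy_minimise(lambda x: ((x - 3.0)**2).sum(), dim=2)
# converges to a point near [3, 3].
```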
70

Adaptive methods for autonomous environmental modelling

Kemppainen, A. (Anssi) 26 March 2018
In this thesis, we consider autonomous environmental modelling, where robotic sensing platforms are utilized in environmental surveying. In order to handle a wide range of different environments, our models must be flexible to the data under some a priori assumptions. Correspondingly, in order to guide action planning, we need a unified sensing quality metric that depends on the prediction quality of our models. Finally, in order to adapt to the observed information, at each iteration of the action planning algorithm we must be able to provide solutions that aim at the minimum travelling time needed to reach a certain level of sensing quality. These are the main topics of this thesis. At the center of our approaches are stationary and non-stationary Gaussian processes based on the assumption that the observed phenomenon is due to the diffusion of white noise, where diffusion kernel anisotropy and scale may vary between locations. For these models, we propose adaptation of the diffusion kernels based on a structure tensor approach. The proposed methods are demonstrated with experiments which show that, assuming sensor noise is not dominating, our iterative approach is able to return diffusion kernel values close to the correct ones. In order to quantify how precise our models are, we propose a mutual-information-based sensing quality criterion, and prove that the optimal design using our sensing quality provides the best prediction quality for the model. To incorporate localization uncertainty in modelling, we also propose an approach where the posterior model is marginalized over the sensing path distribution. The benefit is that this approach implicitly favors actions that result in previously visited or otherwise well-defined areas, while maximizing the information gain. Experiments support our claims that our proposed approaches are best when considering predictive distribution quality. In action planning, our approach is to use graph-based approximation algorithms to obtain a certain level of model quality in an efficient way. In order to account for spatial dependency and active localization, we propose adaptation methods that map sensing quality to vertex prices in a graph. Experiments demonstrate the benefit of our adaptation methods compared to action planning algorithms that do not consider these specific features.
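As background, a minimal stationary Gaussian process regressor with an isotropic RBF kernel is sketched below; the thesis' models generalize this kind of model by letting the kernel's anisotropy and scale vary between locations. The sketch is generic (standard GP regression), not code from the thesis:

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, sig_f=1.0, sig_n=0.1):
    """Stationary GP regression with an isotropic RBF kernel.
    X: (n, d) training inputs, y: (n,) observations, Xs: (m, d) test
    inputs. Returns the posterior mean and variance at Xs via a
    Cholesky factorization of the noisy training covariance."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sig_f**2 * np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + sig_n**2 * np.eye(len(X))
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - (v**2).sum(0)  # posterior predictive variance
    return mean, var
```

The posterior variance returned here is what an information-theoretic sensing quality criterion, such as the mutual-information-based one proposed in the thesis, would feed on: low predicted variance means little to gain from sensing there.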
