21 |
Modification, development, application and computational experiments of some selected network, distribution and resource allocation models in operations research
Nyamugure, Philimon. January 2017.
Thesis (Ph.D. (Statistics)) -- University of Limpopo, 2017.
Operations Research (OR) is a scientific method for developing quantitatively
well-grounded recommendations for decision making. While it is true that it
uses a variety of mathematical techniques, OR has a much broader scope. It is
in fact a systematic approach to solving problems, which uses one or more analytical
tools in the process of analysis. Over the years, OR has evolved through
different stages. This study is motivated by new real-world challenges that call
for efficiency and innovation, in line with the aims and objectives of OR – the
science of better, as classified by the OR Society of the United Kingdom. New
real-world challenges are encountered on a daily basis from problems arising
in the fields of water, energy, agriculture, mining, tourism, IT development,
natural phenomena, transport, climate change, economic and other societal requirements.
To counter all these challenges, new techniques ought to be developed.
The growth of global markets and the resulting increase in competition
have highlighted the need for OR techniques to be improved. These developments,
among other reasons, are an indication that new techniques are needed
to improve the day-to-day running of organisations, regardless of size, type and
location.
The principal aim of this study is to modify and develop new OR techniques
that can be used to solve emerging problems encountered in the areas of linear
programming, integer programming, mixed integer programming, network
routing and travelling salesman problems. Distribution models, resource allocation
models, travelling salesman problem, general linear mixed integer
programming and other network problems that occur in real life, have been
modelled mathematically in this thesis. Most of these models belong to the
NP-hard (non-deterministic polynomial-time hard) class of difficult problems. In other
words, these problems are not known to be solvable in polynomial time, and no general-purpose
algorithm for them is known. The thesis is divided into
two major areas namely: (1) network models and (2) resource allocation and
distribution models. Under network models, five new techniques have been developed:
the minimum weight algorithm for a non-directed network, maximum
reliability route in both non-directed and directed acyclic networks, minimum
spanning tree with index less than two, routing through k specified nodes,
and a new heuristic to the travelling salesman problem. Under the resource
allocation and distribution models section, four new models have been developed,
and these are: a unified approach to solve transportation and assignment
problems, a transportation branch and bound algorithm for the generalised assignment
problem, a new hybrid search method over the extreme points for
solving a large-scale LP model with non-negative coefficients, and a heuristic
for a mixed integer program using the characteristic equation approach. In
most of the nine approaches developed in the thesis, efforts were made to compare
the effectiveness of the new approaches with that of existing techniques. Improvements
in the new techniques in solving problems were noted. However, it was
difficult to compare some of the new techniques to the existing ones because
computational packages of the new techniques need to be developed first. This
aspect will be the subject of future research on developing these techniques
further. It was concluded, with strong evidence, that the development of new OR
techniques is a must if we are to counter the emerging problems faced by the
world today.
Key words: NP-hard problem, Network models, Reliability, Heuristic, Large-scale LP, Characteristic equation, Algorithm.
|
22 |
Efficient search of an underwater area based on probability
Pukitis Furhoff, Hampus. January 2019.
Today, more and more types of autonomous robots and vehicles are being developed. Most of these rely on the global positioning system and/or communication with other robots and vehicles to determine their global position. However, these are not viable options for today's autonomous underwater vehicles (AUVs), since radio waves do not travel well in water. Instead, various techniques for estimating an AUV's position are used, each of which comes with a margin of error. This thesis examines the problem of efficiently performing a local search within this margin of error, with the objective of finding a docking station or a buoy.
To solve this problem, the literature on search theory and its previous applications in this context was reviewed. It was found that classical Bayesian search theory has rarely been used in this context, since it would require too much processing power to be a viable option on the embedded systems that AUVs are. Instead, various heuristics have been used to obtain solutions that were still viable for the situations in which they were applied, even if they may not have been optimal.
Based on this, the search strategies Spiral, Greedy, Look-ahead and Quadtree were developed and evaluated in a simulator. Their mean time to detection (MTTD) was compared, as well as the average time it took each strategy to compute a search. Look-ahead was the best of the four strategies with respect to MTTD, and based on this it is suggested that it be implemented and evaluated in a real AUV.
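The classical Bayesian search theory mentioned above rests on one simple update: after an unsuccessful look in a cell, probability mass shifts away from that cell toward the others. A minimal sketch of that single step, with cell names and probabilities invented for illustration (this is not the thesis's simulator code):

```python
def bayes_update_on_miss(prior, searched, p_detect):
    """Posterior target-location probabilities after an unsuccessful
    search of cell `searched`, where the sensor detects a present
    target with probability `p_detect`."""
    # P(miss) = 1 - P(target in searched cell) * P(detect | present)
    norm = 1.0 - prior[searched] * p_detect
    posterior = {}
    for cell, p in prior.items():
        if cell == searched:
            posterior[cell] = p * (1.0 - p_detect) / norm
        else:
            posterior[cell] = p / norm
    return posterior

prior = {"A": 0.5, "B": 0.3, "C": 0.2}
post = bayes_update_on_miss(prior, "A", p_detect=0.8)
# After a miss in A, probability mass shifts toward B and C.
```

A greedy strategy would repeatedly search the highest-probability cell and re-apply this update; the processing cost of maintaining such a map over a fine grid is what makes heuristics attractive on embedded AUV hardware.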
|
23 |
Využití optimalizace v řízení výroby / The use of optimization in production planning
Pokorný, Pavel. January 2008.
The Master's thesis deals with production scheduling in an industrial company. It uses the means of artificial intelligence to develop an appropriate production schedule for a generalized flow-shop scheduling problem. The problem can be solved by the application that is the result of this thesis, prepared using Matlab 7.1 and its Genetic Algorithm and Direct Search toolbox. A part of the thesis is also devoted to the use of advanced production systems (APS) and the concept of operative production planning in practice. The thesis further discusses various optimization models in production scheduling and supply chain management.
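The objective a genetic algorithm optimises in a flow-shop problem is the makespan of a job permutation. A minimal sketch of that evaluation, with job data invented for illustration (not taken from the thesis or its Matlab application):

```python
def flowshop_makespan(order, proc):
    """Makespan of processing jobs in `order` through machines 0..M-1,
    where proc[j][m] is job j's processing time on machine m."""
    n_machines = len(proc[0])
    finish = [0.0] * n_machines  # finish[m]: when machine m last frees up
    for j in order:
        for m in range(n_machines):
            # A job starts on machine m once the machine is free and the
            # job itself has finished on the previous machine.
            ready = finish[m - 1] if m > 0 else 0.0
            finish[m] = max(finish[m], ready) + proc[j][m]
    return finish[-1]

proc = [[3, 2], [1, 4]]          # two jobs, two machines (illustrative)
m1 = flowshop_makespan([0, 1], proc)
m2 = flowshop_makespan([1, 0], proc)
```

A genetic algorithm, such as the one in Matlab's toolbox, then searches over the permutations `order` to minimise this value; the example shows that job order alone changes the makespan.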
|
24 |
Otimização por Nuvem de Partículas e Busca Tabu para Problema da Diversidade Máxima
Bonotto, Edison Luiz. 31 January 2017.
The Maximum Diversity Problem (MDP) is a problem in combinatorial optimization whose objective is to select a pre-set number of elements from a given set so that the sum of the diversities between the selected elements is as large as possible. The MDP belongs to the class of NP-hard problems, that is, no known algorithm solves it exactly in polynomial time. Because such problems have exponential-order complexity, efficient heuristics are required to provide satisfactory results in acceptable time. Heuristics, however, do not guarantee the optimality of the solution found. This work proposes a new hybrid approach for solving the Maximum Diversity Problem, based on the Particle Swarm Optimization (PSO) and Tabu Search (TS) metaheuristics; the algorithm is called PSO_TS. PSO_TS achieves the best known results for standard test instances from the literature, demonstrating that it is competitive with the best algorithms in terms of solution quality.
|
25 |
Metaheurísticas evolutivas para o problema de roteamento de unidades móveis de pistoneio / Evolutionary metaheuristics applied to the routing problem of mobile recovery-of-oil units
Nascimento, João Paulo Lima do. 23 December 2010.
This work presents metaheuristic strategies based on the framework of evolutionary algorithms (genetic and memetic), with the addition of the Vocabulary Building technique, for solving the problem of optimizing the use of multiple mobile recovery-of-oil (MRO) units. Because it is an NP-hard problem, a mathematical model is formulated for the problem, allowing the construction of test instances that are used to validate the evolutionary metaheuristics developed.
|
26 |
Optimizing the imbalances in a graph / Optimiser les déséquilibres dans un graphe
Glorieux, Antoine. 19 June 2017.
The imbalance of a vertex in a directed graph is the absolute value of the difference between its outdegree and its indegree.
In this thesis we study the problem of orienting the edges of a graph in such a way that the image, under an objective function f, of the vector whose components are the imbalances of the vertices is maximized. The first case considered is the problem of maximizing the minimum imbalance over all possible orientations of the input graph. We first characterize the graphs for which the optimal objective value is zero. Next we give several results concerning the computational complexity of the problem. Finally, we deal with several mixed-integer programming formulations for this problem and present some numerical experiments. Next, we show that the case f = (1/2)‖·‖₁ leads to the famous unweighted maximum cut problem. We introduce some new formulations, along with a new bound shown to be tighter than Michel Goemans and David Williamson's. Theoretical and computational results regarding bound quality and performance are also reported. Finally, in order to strengthen some formulations of the studied problems, we study a specific class of polytopes. Consider the polytope consisting of the convex hull of the 0/1 assignment matrices in which each column contains exactly one coefficient equal to 1, appended with the index of their lowest row that is not identically zero. We give a full description of this polytope and of some of its variants, which naturally appear in the context of several combinatorial optimization problems. We also show that linear optimization over these polytopes can be done in polynomial time.
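The maximin objective described above can be sketched directly: compute the imbalance vector of an orientation, then brute-force all 2^|E| orientations of a tiny graph. This is for illustration only (exponential in |E|), not one of the thesis's formulations:

```python
import itertools

def imbalances(n, oriented_edges):
    """|outdeg(v) - indeg(v)| for each vertex; edges (u, v) mean u -> v."""
    out_deg, in_deg = [0] * n, [0] * n
    for u, v in oriented_edges:
        out_deg[u] += 1
        in_deg[v] += 1
    return [abs(out_deg[v] - in_deg[v]) for v in range(n)]

def best_min_imbalance(n, edges):
    """Maximise, over all 2^|E| orientations, the minimum vertex imbalance."""
    best = None
    for flips in itertools.product((False, True), repeat=len(edges)):
        oriented = [(v, u) if f else (u, v) for (u, v), f in zip(edges, flips)]
        value = min(imbalances(n, oriented))
        best = value if best is None else max(best, value)
    return best

triangle = [(0, 1), (1, 2), (2, 0)]
```

On the triangle every orientation leaves some vertex with imbalance 0, so the optimum is zero, while a single edge achieves 1; graphs with optimum zero are exactly what the first part of the thesis characterizes.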
|
27 |
[pt] ESTUDO DE HEURÍSTICAS PARA PROBLEMAS DE ESCALONAMENTO EM UM AMBIENTE COM MÁQUINAS INDISPONÍVEIS / [en] SCHEDULING ALGORITHMS APPLICATION FOR MACHINE AVAILABILITY CONSTRAINT
BRUNO LEONARDO KMITA DE OLIVEIRA PASSOS. 20 March 2015.
[en] Most of the literature in scheduling theory assumes that machines are always available during the scheduling time interval, which in practice is not true, due to machine breakdowns or resource-usage policies. We study the few heuristics available in the literature for the NP-hard problem of minimizing the makespan when breakdowns may happen. We also develop a new scheduling heuristic based on historical machine-availability information. Our experimental study, with real data, suggests that this new heuristic is better in terms of makespan than other algorithms that do not take this information into account. We apply the results of our investigation to the asset-pricing problem of a fund portfolio, in order to determine a full-valuation market risk as quickly as possible using a company's idle technological resources.
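As a hedged stand-in for the availability-aware idea, here is a simple LPT-style list scheduler in which each machine only becomes usable at a known time. The thesis's heuristic uses richer availability statistics; the job durations and availability times below are illustrative assumptions:

```python
def schedule_lpt(jobs, available_at):
    """Assign jobs (longest first) to the machine that can finish them
    earliest, given each machine's availability time; return the makespan
    and the (duration, machine) assignment list."""
    finish = list(available_at)  # machine i cannot start before finish[i]
    assignment = []
    for dur in sorted(jobs, reverse=True):  # LPT: longest processing time first
        m = min(range(len(finish)), key=lambda i: finish[i])
        finish[m] += dur
        assignment.append((dur, m))
    return max(finish), assignment

# Machine 1 is unavailable until time 4, so early work piles onto machine 0.
makespan, plan = schedule_lpt([3, 2, 2], available_at=[0, 4])
```

Replacing the fixed `available_at` values with estimates derived from historical availability statistics is one way such information could enter the decision, which is the spirit of the heuristic studied in the thesis.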
|
28 |
Learning Partial Policies for Intractable Domains on Tractable Subsets / Lärande av ofullständiga strategier för svårlärda domäner via lättlärda delmängder
Carlsson, Viktor. January 2023.
The field of classical planning deals with designing algorithms for generating plans, or sequences of actions, that achieve specific goals. It involves representing a problem domain as a set of state variables, actions and goals, and then developing search algorithms that can explore the space of possible plans to find one that satisfies the specified goal. Classical planning domains are often NP-hard, meaning that their worst-case complexity grows exponentially with the size of the problem: as the number of state variables, actions and goals in the problem domain increases, the search space grows exponentially, making it very difficult to find a plan that satisfies the specified goal. This thesis investigates such NP-hard domains, specifically by simplifying them into domains that can be solved in polynomial time, creating a general policy of conditions and rules determining which actions to take in the simplified domain, and then attempting to apply this policy to the original domain. This yields a partial policy for the original domain, and the performance of this policy can be measured in order to judge its effectiveness. In other words, an intractable domain is simplified into a tractable one, a general policy is created for the tractable domain, and its performance is then measured as a partial policy for the intractable domain.
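Schematically, a general policy learned on the tractable domain is a list of (condition, action) rules; applied to the original domain it is only a partial policy, because some states match no rule. A toy sketch of that idea, with the domain, rules and coverage measure all invented for illustration (not the thesis's representation):

```python
def apply_partial_policy(rules, state):
    """Return the action of the first rule whose condition holds, or None
    when the policy is undefined for this state (a coverage gap)."""
    for condition, action in rules:
        if condition(state):
            return action
    return None

def coverage(rules, states):
    """Fraction of states for which the partial policy suggests an action,
    one simple way to measure its performance on the original domain."""
    covered = [s for s in states if apply_partial_policy(rules, s) is not None]
    return len(covered) / len(states)

# Toy domain: move an agent along a line toward position `goal`.
rules = [
    (lambda s: s["pos"] < s["goal"], "right"),
    (lambda s: s["pos"] > s["goal"], "left"),
]
states = [{"pos": p, "goal": 2} for p in range(5)]
```

The state with `pos == goal` matches no rule, so the policy is partial; measuring coverage and plan quality over sampled states corresponds to the performance measurement described above.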
|
29 |
Algorithmes exacts et exponentiels pour les problèmes NP-difficiles sur les graphes et hypergraphes / Exact Exponential-Time Algorithms for NP-hard Problems on Graphs and Hypergraphs
Cochefert, Manfred. 18 December 2014.
In this thesis, we are interested in the exact computation of NP-hard problems on graphs and hypergraphs. Firstly, we study several variants of colorings. These variants appear harder than the famous chromatic number problem, by adding difficulty in recognizing the color classes, or more often by introducing various relationships between them. We then turn to problems of transversals in hypergraphs. More precisely, we are interested in enumerating minimal transversals in hypergraphs of bounded rank. Besides exact computation, we are also interested in fixed-parameter tractability; in this area, we study two optimization versions of the famous square-root-of-a-graph problem. Finally, we are interested in solving other graph problems, either related to colorings or in order to compare the efficiencies of algorithms depending on the memory space available. In this thesis, we apply most of the major techniques for designing exact exponential algorithms. The main techniques we use are dynamic programming, inclusion-exclusion, branching, and measure-and-conquer.
|
30 |
Near-capacity sphere decoder based detection schemes for MIMO wireless communication systems
Kapfunde, Goodwell. January 2013.
The search for the closest lattice point arises in many communication problems and is known to be NP-hard. The Maximum Likelihood (ML) detector is the optimal detector, yielding an optimal solution to this problem, but at the expense of high computational complexity. Existing near-optimal methods used to solve the problem are based on the Sphere Decoder (SD), which searches for lattice points confined in a hyper-sphere around the received point. The SD has emerged as a powerful means of finding the solution to the ML detection problem for MIMO systems; however, the bottleneck lies in the determination of the initial radius. This thesis is concerned with detecting transmitted wireless signals in Multiple-Input Multiple-Output (MIMO) digital communication systems as efficiently and effectively as possible. The main objective of this thesis is to design efficient ML detection algorithms for MIMO systems based on depth-first search (DFS) algorithms, while taking into account the complexity and bit-error-rate performance requirements of advanced digital communication systems. The increased capacity and improved link reliability of MIMO systems, achieved without sacrificing bandwidth efficiency or transmit power, serve as the key motivation behind the study of MIMO detection schemes. The fundamental principles behind MIMO systems are explored in Chapter 2. A generic framework for linear and non-linear tree-search-based detection schemes is then presented in Chapter 3. This paves the way for different methods of improving the achievable performance-complexity trade-off for all SD-based detection algorithms. Suboptimal detection schemes, in particular Minimum Mean Squared Error-Successive Interference Cancellation (MMSE-SIC), also serve as pre-processing as well as comparison techniques, whilst channel-capacity-approaching Low-Density Parity-Check (LDPC) codes are employed to evaluate the performance of the proposed SD.
Numerical and simulation results show that non-linear detection schemes yield better performance than linear detection schemes, at the expense of a slight increase in complexity. The first contribution of this thesis, in Chapter 4, is the design of a near-ML-achieving SD algorithm for MIMO digital communication systems that reduces the number of search operations within the sphere-constrained search space at reduced detection complexity. In this design, the distance between the ML estimate and the received signal is used to control the lower and upper bound radii of the proposed SD so as to prevent NP-complete problems. The detection method is based on the DFS algorithm and Successive Interference Cancellation (SIC); the SIC ensures that the effects of dominant signals are effectively removed. Simulation results presented in this thesis show that by employing pre-processing detection schemes, the complexity of the proposed SD can be significantly reduced, though at a marginal performance penalty. The second contribution, in Chapter 5, is the determination of the initial sphere radius. The new initial radius proposed in this thesis is based on the variable parameter α, which is commonly chosen from experience so as to ensure that at least one lattice point exists inside the sphere with high probability. Using the variable parameter α, a new noise covariance matrix is defined which incorporates the number of transmit antennas, the energy of the transmitted symbols and the channel matrix. The new covariance matrix is then incorporated into the EMMSE model to generate an improved EMMSE estimate. The EMMSE radius is finally found by computing the distance between the sphere centre and the improved EMMSE estimate; this distance can be fine-tuned by varying the parameter α.
The advantage of the proposed method is that it reduces the complexity of the pre-processing step of the EMMSE to that of the Zero-Forcing (ZF) detector without significant performance degradation of the SD, particularly at low Signal-to-Noise Ratios (SNR). More specifically, it is shown through simulation results that using the EMMSE pre-processing step substantially improves performance whenever the complexity of the tree search is fixed or upper-bounded. The final contribution, in Chapter 6, is the design of the LRAD-MMSE-SIC-based SD detection scheme, which introduces a trade-off between performance and increased computational complexity. The Lenstra-Lenstra-Lovász (LLL) algorithm is utilised to orthogonalise the channel matrix H into a new, nearly orthogonal channel matrix H̄. The increased computational complexity introduced by the LLL algorithm is significantly decreased by employing a sorted QR decomposition of the transformed channel H̄ into a unitary matrix and an upper triangular matrix that retains the properties of the channel matrix. The SIC algorithm ensures that the interference due to dominant signals is minimised, while the LDPC code effectively stops the propagation of errors within the entire system. Through simulations, it is demonstrated that the proposed detector still approaches ML performance while requiring much lower complexity than the conventional SD.
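The DFS-with-pruning skeleton that all of the above builds on can be sketched in a few lines for a real-valued model y ≈ R s with R upper triangular (as obtained after QR decomposition) and a BPSK alphabet. The values below are illustrative; this is not the thesis's EMMSE-radius algorithm, only the basic sphere-search it refines:

```python
def sphere_decode(R, y, alphabet, radius):
    """Depth-first sphere decoder: return the symbol vector within
    `radius` of y that minimises ||y - R s||^2, plus that squared
    distance; the sphere shrinks as better lattice points are found."""
    n = len(y)
    best_s, best_r2 = None, radius ** 2
    s = [alphabet[0]] * n
    def dfs(level, dist2):
        nonlocal best_s, best_r2
        if level < 0:           # full vector inside the current sphere
            best_s, best_r2 = s[:], dist2
            return
        for sym in alphabet:
            s[level] = sym
            # Residual on this layer depends only on s[level..n-1],
            # thanks to R being upper triangular.
            resid = y[level] - sum(R[level][j] * s[j] for j in range(level, n))
            d2 = dist2 + resid * resid
            if d2 < best_r2:    # prune subtrees leaving the sphere
                dfs(level - 1, d2)
    dfs(n - 1, 0.0)
    return best_s, best_r2

R = [[1.0, 0.5],
     [0.0, 1.0]]
y = [0.5, -1.0]                 # noiseless receive of s = [1, -1]
s_hat, r2 = sphere_decode(R, y, alphabet=(-1, 1), radius=2.0)
```

The choice of the initial `radius` is exactly the bottleneck the thesis addresses: too small and no lattice point lies inside the sphere, too large and little of the tree is pruned.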
|