21

Využití optimalizace v řízení výroby / The use of optimization in production planning

Pokorný, Pavel January 2008
The Master's thesis deals with production scheduling in an industrial company. It uses means of artificial intelligence to develop an appropriate production schedule for a generalized flow-shop scheduling problem. The problem is solved by an application developed as part of this thesis in Matlab 7.1 using its Genetic Algorithm and Direct Search Toolbox. A part of the thesis is devoted to the use of advanced planning systems (APS) and to the practice of operative production planning. The thesis also surveys various optimization models in production scheduling and supply chain management.
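As a rough illustration of the flow-shop objective the thesis optimizes, here is a minimal Python sketch (not the author's Matlab/GA implementation) that computes the makespan of a permutation schedule and brute-forces a toy instance with invented processing times:

    import itertools

    # Makespan of a permutation flow-shop schedule: every job visits
    # machines 0..M-1 in the same order; proc[j][m] is job j's time on m.
    def makespan(order, proc):
        n_machines = len(proc[0])
        finish = [0.0] * n_machines            # last completion time per machine
        for job in order:
            for m in range(n_machines):
                start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
                finish[m] = start + proc[job][m]
        return finish[-1]

    # Toy instance: 4 jobs, 3 machines (processing times are invented).
    proc = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [1, 3, 2]]
    best = min(itertools.permutations(range(4)), key=lambda o: makespan(o, proc))
    print(best, makespan(best, proc))

A genetic algorithm, as used in the thesis, would replace the exhaustive enumeration with selection, crossover and mutation over such permutations.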
22

Otimização por Nuvem de Partículas e Busca Tabu para Problema da Diversidade Máxima / Particle Swarm Optimization and Tabu Search for the Maximum Diversity Problem

Bonotto, Edison Luiz 31 January 2017
The Maximum Diversity Problem (MDP) is a combinatorial optimization problem whose goal is to select a pre-set number of elements from a given set so that the sum of the pairwise diversities between the selected elements is as large as possible. The MDP belongs to the class of NP-hard problems: no known algorithm solves it exactly in polynomial time. Because of this exponential worst-case complexity, efficient heuristics are needed to provide satisfactory results in acceptable time, although heuristics do not guarantee the optimality of the solution found. This work proposes a new hybrid approach to the Maximum Diversity Problem, called PSO_TS, based on the Particle Swarm Optimization (PSO) and Tabu Search (TS) metaheuristics. PSO_TS achieves the best known results for benchmark instances from the literature, demonstrating that it is competitive with the best existing algorithms in terms of solution quality.
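As a hedged sketch of one half of the PSO_TS hybrid, the following Python fragment implements the MDP objective and a simple tabu-search swap neighbourhood (the PSO component and all parameter choices are omitted; the distance matrix is invented):

    import random

    # Sum of pairwise diversities of the selected subset.
    def diversity(sel, d):
        s = sorted(sel)
        return sum(d[a][b] for i, a in enumerate(s) for b in s[i + 1:])

    # Tabu search over "swap one selected element for an unselected one".
    def tabu_swap_search(d, k, iters=100, tenure=5):
        n = len(d)
        cur = set(random.sample(range(n), k))
        best, best_val = set(cur), diversity(cur, d)
        tabu = {}                              # element -> iteration it stays tabu until
        for it in range(iters):
            cand = None
            for i in cur:
                for j in set(range(n)) - cur:
                    if tabu.get(j, -1) >= it:
                        continue               # j was removed recently: forbidden
                    val = diversity(cur - {i} | {j}, d)
                    if cand is None or val > cand[0]:
                        cand = (val, i, j)
            if cand is None:
                break
            val, i, j = cand
            cur = cur - {i} | {j}
            tabu[i] = it + tenure              # forbid re-adding i for a while
            if val > best_val:
                best, best_val = set(cur), val
        return best, best_val

    random.seed(1)
    d = [[0, 4, 7, 1], [4, 0, 3, 6], [7, 3, 0, 5], [1, 6, 5, 0]]
    print(tabu_swap_search(d, k=2))            # expected: ({0, 2}, 7)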
23

Metaheurísticas evolutivas para o problema de roteamento de unidades móveis de pistoneio / Evolutionary metaheuristics for the routing problem of mobile oil recovery units

Nascimento, João Paulo Lima do 23 December 2010
This work presents metaheuristic strategies based on the framework of evolutionary algorithms (genetic and memetic) augmented with the Vocabulary Building technique for solving the problem of optimizing the use of multiple mobile oil recovery units (MRO units). Because the problem is NP-hard, a mathematical model is formulated for it, allowing the construction of test instances that are used to validate the evolutionary metaheuristics developed.
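"Vocabulary building" is, roughly, the harvesting of promising fragments from elite solutions for later recombination; a toy Python sketch under that reading (route encodings and well identifiers are invented, and this is not the thesis's actual operator):

    from collections import Counter

    # Harvest arcs that recur among elite routes; each route is the
    # sequence of wells visited by one mobile recovery unit.
    def frequent_arcs(elite_routes, min_count=2):
        arcs = Counter()
        for route in elite_routes:
            arcs.update(zip(route, route[1:]))   # consecutive visit pairs
        return {a for a, c in arcs.items() if c >= min_count}

    elite = [[1, 4, 2, 5], [3, 4, 2, 5], [1, 4, 2, 3]]
    print(frequent_arcs(elite))                  # {(1, 4), (4, 2), (2, 5)}

Such recurring fragments can then be protected during crossover or used to seed new individuals.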
24

Optimizing the imbalances in a graph / Optimiser les déséquilibres dans un graphe

Glorieux, Antoine 19 June 2017
The imbalance of a vertex in a directed graph is the absolute value of the difference between its outdegree and indegree. In this thesis we study the problem of orienting the edges of a graph so as to maximize the image, under an objective function f, of the vector whose components are the imbalances of the vertices. The first case considered is maximizing the minimum imbalance over all possible orientations of the input graph. We first characterize the graphs for which the optimal objective value is zero, then give several results concerning the computational complexity of the problem, and finally present several mixed-integer programming formulations together with numerical experiments. Next, we show that the case f = (1/2)‖·‖₁ leads to the famous unweighted maximum cut problem. We introduce new formulations along with a new bound shown to be tighter than that of Goemans and Williamson, and report theoretical and computational results on bound quality and performance. Finally, in order to strengthen some formulations of the studied problems, we study a specific class of polytopes: the convex hull of the 0/1 assignment matrices (each column containing exactly one coefficient equal to 1) appended with the index of their lowest row that is not identically zero. We give a full description of this polytope and of some of its variants, which appear naturally in the context of several combinatorial optimization problems, and we show that linear optimization over these polytopes can be done in polynomial time.
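A minimal Python sketch of the first problem in the abstract, brute-forcing the orientation that maximizes the minimum vertex imbalance on a tiny invented graph (real instances require the MIP formulations the thesis develops):

    from itertools import product

    # Minimum |outdeg - indeg| over the vertices, for a given orientation
    # of the edge list (orientation[e] == True means edges[e] = (u, v) is u->v).
    def min_imbalance(edges, n, orientation):
        bal = [0] * n
        for (u, v), forward in zip(edges, orientation):
            a, b = (u, v) if forward else (v, u)
            bal[a] += 1                          # one more outgoing arc at a
            bal[b] -= 1                          # one more incoming arc at b
        return min(abs(x) for x in bal)

    edges, n = [(0, 1), (1, 2), (2, 0), (0, 3)], 4
    best = max(product([True, False], repeat=len(edges)),
               key=lambda o: min_imbalance(edges, n, o))
    print(best, min_imbalance(edges, n, best))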
25

[pt] ESTUDO DE HEURÍSTICAS PARA PROBLEMAS DE ESCALONAMENTO EM UM AMBIENTE COM MÁQUINAS INDISPONÍVEIS / [en] SCHEDULING ALGORITHMS APPLICATION FOR MACHINE AVAILABILITY CONSTRAINT

BRUNO LEONARDO KMITA DE OLIVEIRA PASSOS 20 March 2015
[en] Most of the scheduling literature assumes that machines are available throughout the scheduling horizon, which in practice is not true: machines may break down without notice or be subject to resource-usage policies. We study the few heuristics available in the literature for the NP-hard problem of minimizing the makespan when such breakdowns may happen, and we develop a new scheduling heuristic based on historical machine-availability statistics. An experimental study with real data shows that the new heuristic achieves a better makespan than the classical algorithms that do not take availability information into account. We apply the results to the asset-pricing problem of a fund portfolio, using a company's idle technological resources to determine full-valuation market risk as quickly as possible.
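For contrast with the thesis's availability-aware heuristic, a minimal Python sketch of classical longest-processing-time list scheduling extended with fixed machine release times (the thesis instead uses historical availability statistics and handles unforeseen breakdowns, which this toy version does not):

    import heapq

    # Greedy makespan scheduling; machine m only becomes available at
    # available_at[m]. Jobs are assigned longest-first to the machine
    # that frees up earliest.
    def schedule(jobs, available_at):
        heap = [(t, m) for m, t in enumerate(available_at)]
        heapq.heapify(heap)
        assignment, makespan = {}, 0.0
        for j, dur in sorted(enumerate(jobs), key=lambda x: -x[1]):
            t, m = heapq.heappop(heap)           # earliest-free machine
            assignment[j] = (m, t)               # job j starts at time t on m
            heapq.heappush(heap, (t + dur, m))
            makespan = max(makespan, t + dur)
        return assignment, makespan

    print(schedule([5, 3, 8, 2, 4], available_at=[0, 6, 0]))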
26

Learning Partial Policies for Intractable Domains on Tractable Subsets. / Lärande av ofullständiga strategier för svårlärda domäner via lättlärda delmängder.

Carlsson, Viktor January 2023
The field of classical planning deals with designing algorithms for generating plans, or sequences of actions, that achieve specific goals. A problem domain is represented as a set of state variables, actions and goals, and search algorithms explore the space of possible plans to find one that satisfies the specified goal. Classical planning domains are often NP-hard, meaning that their worst-case complexity grows exponentially with the size of the problem: as the number of state variables, actions and goals increases, the search space grows exponentially, making it very difficult to find a satisfying plan. This thesis investigates such intractable domains by simplifying them into domains that can be solved in polynomial time, creating a general policy of conditions and rules for which actions to take in the simplified domain, and then applying this policy to the original domain. This yields a partial policy for the original domain, whose performance can be measured in order to judge its effectiveness.
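A minimal Python sketch of what applying a general policy as a partial policy can look like: the policy is an ordered list of condition-action rules, and states outside its coverage are handed back to search (the state variables below are invented, not from the thesis):

    # A policy as ordered condition -> action rules; None marks a state
    # the (partial) policy does not cover.
    def apply_policy(rules, state):
        for condition, action in rules:
            if condition(state):
                return action
        return None                              # fall back to planning/search

    rules = [
        (lambda s: s["holding"] is None and s["clear"], "pick-up"),
        (lambda s: s["holding"] is not None,            "put-down"),
    ]
    print(apply_policy(rules, {"holding": None, "clear": True}))   # pick-up
    print(apply_policy(rules, {"holding": None, "clear": False}))  # None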
27

Algorithmes exacts et exponentiels pour les problèmes NP-difficiles sur les graphes et hypergraphes / Exact Exponential-Time Algorithms for NP-hard Problems on Graphs and Hypergraphs

Cochefert, Manfred 18 December 2014
In this thesis we are interested in the exact solution of NP-hard problems on graphs and hypergraphs. We first study several variants of graph coloring; these variants are harder than the famous chromatic number problem, either because recognizing the color classes themselves is difficult or because various relationships are imposed between them. We then turn to transversal problems in hypergraphs, in particular the enumeration of minimal transversals in hypergraphs of bounded rank. Besides exact exponential-time computation, we are also interested in fixed-parameter tractability: we study two optimization versions of the well-known graph square root problem and show that they are fixed-parameter tractable. Finally, we solve further graph problems, either related to colorings or chosen to compare the achievable performance of different algorithms as a function of the available memory. Throughout, we apply most of the major techniques for designing exact exponential algorithms: dynamic programming and inclusion-exclusion for the coloring problems, and dynamic programming, branching, and measure and conquer for the other problems of this thesis.
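As a concrete instance of the inclusion-exclusion technique mentioned for coloring problems, a self-contained Python sketch of the classic O*(2^n) chromatic-number computation (in the style of Björklund and Husfeldt; the thesis's own algorithms are more refined):

    # chi(G) = least k such that V can be covered by k independent sets;
    # covers are counted by inclusion-exclusion over vertex subsets.
    def chromatic_number(n, adj):                # adj[v]: bitmask of v's neighbours
        full = (1 << n) - 1
        ind = [0] * (1 << n)                     # ind[S]: independent sets inside S
        ind[0] = 1
        for S in range(1, 1 << n):
            v = (S & -S).bit_length() - 1        # lowest vertex of S
            # sets avoiding v, plus sets containing v (hence avoiding N(v))
            ind[S] = ind[S & ~(1 << v)] + ind[S & ~((1 << v) | adj[v])]
        for k in range(1, n + 1):
            total = sum((-1) ** bin(full ^ S).count("1") * ind[S] ** k
                        for S in range(1 << n))
            if total > 0:
                return k

    # Triangle 0-1-2 plus isolated vertex 3: chromatic number 3.
    adj = [0b0110, 0b0101, 0b0011, 0b0000]
    print(chromatic_number(4, adj))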
28

Near-capacity sphere decoder based detection schemes for MIMO wireless communication systems

Kapfunde, Goodwell January 2013
The search for the closest lattice point arises in many communication problems and is known to be NP-hard. The Maximum Likelihood (ML) detector is the optimal detector, yielding an optimal solution to this problem, but at the expense of high computational complexity. Existing near-optimal methods are based on the Sphere Decoder (SD), which searches for lattice points confined in a hyper-sphere around the received point. The SD has emerged as a powerful means of solving the ML detection problem for MIMO systems; however, the bottleneck lies in the determination of the initial radius. This thesis is concerned with detecting transmitted wireless signals in Multiple-Input Multiple-Output (MIMO) digital communication systems as efficiently and effectively as possible. Its main objective is to design efficient ML detection algorithms for MIMO systems based on depth-first search (DFS), taking into account the complexity and bit-error-rate performance requirements of advanced digital communication systems. The increased capacity and improved link reliability of MIMO systems, achieved without sacrificing bandwidth efficiency or transmit power, serve as the key motivation for the study of MIMO detection schemes. The fundamental principles behind MIMO systems are explored in Chapter 2. A generic framework for linear and non-linear tree-search-based detection schemes is then presented in Chapter 3, paving the way for different methods of improving the achievable performance-complexity trade-off of all SD-based detection algorithms. Suboptimal detection schemes, in particular Minimum Mean Squared Error-Successive Interference Cancellation (MMSE-SIC), serve both as pre-processing and as comparison techniques, while capacity-approaching Low Density Parity Check (LDPC) codes are employed to evaluate the performance of the proposed SD. Numerical and simulation results show that non-linear detection schemes yield better performance than linear ones, at the expense of a slight increase in complexity. The first contribution of this thesis, in Chapter 4, is a near-ML SD algorithm for MIMO digital communication systems that reduces the number of search operations within the sphere-constrained search space at reduced detection complexity. In this design, the distance between the ML estimate and the received signal is used to control the lower and upper bound radii of the proposed SD so as to avoid NP-complete behaviour. The detection method is based on the DFS algorithm and Successive Interference Cancellation (SIC), which ensures that the effects of dominant signals are effectively removed. Simulation results show that, by employing pre-processing detection schemes, the complexity of the proposed SD can be significantly reduced at a marginal performance penalty. The second contribution, in Chapter 5, is the determination of the initial sphere radius. The new initial radius proposed in this thesis is based on a tunable parameter α, commonly chosen from experience so that a lattice point exists inside the sphere with high probability. Using α, a new noise covariance matrix incorporating the number of transmit antennas, the energy of the transmitted symbols and the channel matrix is defined. This covariance matrix is incorporated into the EMMSE model to generate an improved EMMSE estimate, and the EMMSE radius is obtained as the distance between the sphere centre and this improved estimate; the distance can be fine-tuned by varying α. The appeal of the proposed method is that it reduces the complexity of the EMMSE pre-processing step to that of the Zero-Forcing (ZF) detector without significant performance degradation of the SD, particularly at low Signal-to-Noise Ratios (SNR). Simulation results show that the EMMSE pre-processing step substantially improves performance whenever the complexity of the tree search is fixed or upper-bounded. The final contribution, in Chapter 6, is the LRAD-MMSE-SIC-based SD detection scheme, which trades increased computational complexity for performance. The Lenstra-Lenstra-Lovász (LLL) algorithm is utilised to orthogonalise the channel matrix H into a near-orthogonal channel matrix H̄. The additional computational complexity introduced by the LLL algorithm is significantly decreased by a sorted QR decomposition of the transformed channel H̄ into a unitary matrix and an upper triangular matrix that retains the properties of the channel matrix. The SIC algorithm ensures that interference due to dominant signals is minimised, while the LDPC code effectively stops the propagation of errors within the system. Simulations demonstrate that the proposed detector approaches ML performance while requiring much lower complexity than the conventional SD.
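A toy real-valued Python sketch of the depth-first sphere decoder idea (BPSK symbols and an invented channel; the thesis's complex MIMO models, EMMSE radius and LLL pre-processing are not reproduced here):

    import numpy as np

    # ML detection of y = Hx + noise with x in {-1,+1}^n. After H = QR,
    # ||y - Hx||^2 = ||z - Rx||^2 with z = Q^T y, and the upper-triangular
    # R lets partial costs be accumulated from the last symbol upward.
    def sphere_decode(H, y):
        n = H.shape[1]
        Q, R = np.linalg.qr(H)
        z = Q.T @ y
        best = {"x": None, "r2": np.inf}

        def dfs(level, x, partial):
            for s in (-1.0, 1.0):
                x[level] = s
                resid = z[level] - R[level, level:] @ x[level:]
                cost = partial + resid ** 2
                if cost >= best["r2"]:
                    continue                     # prune: leaves the sphere
                if level == 0:
                    best["x"], best["r2"] = x.copy(), cost
                else:
                    dfs(level - 1, x, cost)

        dfs(n - 1, np.zeros(n), 0.0)
        return best["x"], best["r2"]

    H = np.array([[1.0, 0.4], [0.3, 1.2]])
    y = H @ np.array([1.0, -1.0]) + np.array([0.015, -0.005])
    print(sphere_decode(H, y))                   # recovers [ 1., -1.]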
29

Exploring algorithms to score control points in metrogaine events

Van Hoepen, Wilhelmina Adriana 02 1900
Metrogaining is an urban outdoor navigational sport that uses a street map to which scored control points have been added. The objective is to collect the maximum number of score points within a set time by visiting a subset of the scored control points. There is currently no metrogaining scoring standard, only guidelines on how to allocate scores. Accordingly, scoring approaches were explored to create new score sets, using scoring algorithms based on a simple relationship between the score of a control point and the number of visits to it. A spread model, developed to evaluate the score sets, generated a range of routes by solving a range of orienteering problems, which belong to the class of NP-hard combinatorial optimisation problems. From these generated routes, the visit frequency of each control point was determined, and test statistics were adapted to test the goodness of scoring for each score set. The findings indicate that the score-visits relationship is not a simple one: the number of visits to a control point depends not only on its own score but also on the scores of the surrounding control points. As a result, the scoring algorithms explored were unable to cope with the complex scoring process uncovered. / Decision Sciences / M. Sc. (Operations Research)
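A brute-force Python sketch of the underlying orienteering problem solved by the spread model (tiny invented instance; real events need heuristics, since the problem is NP-hard):

    from itertools import permutations

    # Visit a subset of control points, starting and ending at point 0,
    # maximizing the collected score within the time budget.
    def best_route(points, scores, dist, budget):
        ids = [p for p in points if p != 0]
        best = ([0, 0], 0)
        for r in range(1, len(ids) + 1):
            for perm in permutations(ids, r):
                route = [0, *perm, 0]
                time = sum(dist[a][b] for a, b in zip(route, route[1:]))
                score = sum(scores[p] for p in perm)
                if time <= budget and score > best[1]:
                    best = (route, score)
        return best

    dist = [[0, 4, 5, 3], [4, 0, 2, 6], [5, 2, 0, 4], [3, 6, 4, 0]]
    scores = {1: 30, 2: 50, 3: 20}
    print(best_route(range(4), scores, dist, budget=12))   # ([0, 1, 2, 0], 80)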
30

Agrégation de classements avec égalités : algorithmes, guides à l'utilisateur et applications aux données biologiques / Rank aggregation with ties: algorithms, user guidance and applications to biological data

Brancotte, Bryan 25 September 2015
The rank aggregation problem consists in building a consensus among a set of rankings (ordered elements). Although this problem has numerous applications (consensus among user votes, consensus between results ordered differently by different search engines, ...), computing an optimal consensus is rarely feasible in real applications, as the problem is NP-hard. Many approximation algorithms and heuristics have therefore been designed, but their performance, in running time and in quality of the consensus produced, differs widely and depends on the datasets to be aggregated. Several studies have compared these algorithms, but they have generally not considered the case, common in real datasets, of ties between elements in the rankings (elements ranked at the same position). Choosing a consensus algorithm suited to a given dataset is therefore a particularly important problem to study, and an open one in the sense that none of the existing studies addresses it. More formally, a consensus ranking is a ranking that minimizes the sum of the distances between itself and the input rankings; like much of the state of the art, we consider the generalized Kendall tau distance and its variants. This thesis makes three contributions. First, we propose new complexity results for the cases encountered in real data, where rankings may be incomplete and several elements may be tied. We isolate the different parameters that can explain variations in the results produced by the aggregation algorithms (for example, the use of the generalized Kendall tau distance or of variants, or pre-processing of the datasets by unification or projection), propose a guide for characterizing the context and needs of a user so as to steer the choice of both a pre-processing of the data and a distance for computing the consensus, and adapt the existing algorithms to this new context. Second, we evaluate these algorithms on a large and varied set of both real and synthetic datasets reproducing real-world features such as similarity between rankings, the presence of ties, and different pre-processings. This large evaluation relies on a new method for generating synthetic data with similarities, based on a Markov-chain model. The evaluation isolates the dataset features that impact the performance of the aggregation algorithms and yields a guide for characterizing a user's needs and advising on the algorithm to favour; a web platform for reproducing and extending these analyses is available (rank-aggregation-with-ties.lri.fr). Finally, we demonstrate the value of the rank aggregation approach in two use cases. We provide a tool that reformulates user text queries on the fly using biomedical terminologies, queries biological databases, and produces a consensus of the results obtained for each reformulation (conqur-bio.lri.fr); compared with the reference platform, it shows a clear improvement in result quality. We also compute consensus rankings between lists of workflows established by experts in the context of scientific-workflow similarity, and observe that the computed consensuses agree with the experts in a very large majority of cases.
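A minimal Python sketch of the generalized Kendall tau distance with ties that the thesis builds on, assuming complete rankings over the same elements (rankings as ordered buckets; the penalty p for pairs tied in exactly one ranking is a parameter, here 0.5 by convention):

    from itertools import combinations

    def positions(ranking):                      # element -> bucket index
        return {e: i for i, bucket in enumerate(ranking) for e in bucket}

    # Pair ordered oppositely in the two rankings: cost 1;
    # pair tied in exactly one of them: cost p; otherwise: cost 0.
    def kendall_tau_ties(r1, r2, p=0.5):
        pos1, pos2 = positions(r1), positions(r2)
        dist = 0.0
        for a, b in combinations(sorted(pos1), 2):
            d1, d2 = pos1[a] - pos1[b], pos2[a] - pos2[b]
            if d1 * d2 < 0:
                dist += 1
            elif (d1 == 0) != (d2 == 0):
                dist += p
        return dist

    r1 = [["A"], ["B", "C"], ["D"]]              # A, then B tied with C, then D
    r2 = [["B"], ["A"], ["C", "D"]]
    print(kendall_tau_ties(r1, r2))              # 2.0

A consensus algorithm then seeks the ranking minimizing the sum of this distance to all input rankings.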
