A dynamic heuristics approach for proactive production scheduling under robustness targets. Zahid, Taiba, 29 March 2017.
Over the past decades, research in operations management has focused on meta-heuristic optimization strategies for the complex combinatorial problem of resource-constrained scheduling. In simple terms, this problem is NP-hard: its search space is far too large to enumerate, making it computationally intractable and calling for techniques other than random search. Meta-heuristic algorithms start from one or more initial solutions and explore and optimize using deterministic data, so they retrieve a valid optimum only under the specified input conditions. These input conditions define a solution search space for a theoretical world undergoing no disturbance.
But change is inherent to the real world; one is faced with risks and uncertainties in everyday life. The present study explores solution methodologies in the face of uncertainties. The contributions of this thesis are two-fold.
As mentioned earlier, optimization strategies for exploring large solution search spaces have been vigorously investigated in the past decade. Although it is an established fact that the improvement and performance of optimization strategies is highly correlated with the initial solutions, the existing literature in this area is far from exhaustive and mostly focuses on the development of meta-heuristic algorithms such as genetic algorithms and particle swarm optimization. Initial solutions are developed through simulation-based strategies, mainly greedy rules and event-based simulation. However, available commercial software packages mostly behave as black boxes and provide little information about their internal processing. Additionally, such planners require special architectures and disregard resource constraints. The present study discusses the multi-mode resource-constrained scheduling problem and proposes a simulation-based framework that provides a multi-pass heuristic method. The extended multi-mode problem imitates the production floor more faithfully: a task can be performed by multiple resources with different qualifications. The performance of the proposed framework was analyzed using benchmark instances. The behavior of different project networks and their characteristics is also evaluated within the proposed framework. In addition, the open framework aids in determining particular characteristics of tasks in order to analyze and forecast their behavior in case of disruptions.
Traditional risk analysis techniques suggest slack-based measures to determine the efficiency of baseline schedules. The framework is further developed into a test bench that determines non-regular performance measures, termed robustness indicators, which correlate with the delay of such multi-mode problems. These performance measures can be used to indicate the effectiveness of baseline schedules and forecast their behavior under uncertainty. The outputs of these tests feed a modified objective function that combines makespan and robustness indicators into a bi-objective performance measure used to test the efficiency of the proposed heuristics. Furthermore, since these measures indicate the behavior of tasks under disruptions, they are used to determine the shape factors and buffer sizes for the development of a stochastic model. The analysis of project outcomes, performed through Monte-Carlo simulations, supports the argument for partial buffer sizing when modeling activity duration estimates rather than the extreme buffer approaches proposed via PERT-beta estimates.
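The slack-based robustness idea above can be sketched in a few lines: a forward/backward pass over the precedence graph yields the makespan and the total slack, and a weighted combination of the two serves as a bi-objective score. All task data, function names, and the weighting below are illustrative assumptions, not values or code from the thesis.

```python
def schedule_metrics(durations, preds):
    """Forward/backward pass over a precedence graph.

    durations: {task: duration}, keys assumed topologically ordered.
    preds: {task: [predecessor tasks]}.
    Returns (makespan, total slack across tasks).
    """
    order = list(durations)
    es = {}  # earliest starts (forward pass)
    for t in order:
        es[t] = max((es[p] + durations[p] for p in preds[t]), default=0)
    makespan = max(es[t] + durations[t] for t in order)
    succs = {t: [s for s in order if t in preds[s]] for t in order}
    ls = {}  # latest starts (backward pass)
    for t in reversed(order):
        ls[t] = min((ls[s] for s in succs[t]), default=makespan) - durations[t]
    total_slack = sum(ls[t] - es[t] for t in order)
    return makespan, total_slack

def bi_objective(makespan, slack, weight=0.5):
    # Lower is better: short schedules with ample slack score best.
    return makespan - weight * slack
```

Higher total slack signals a schedule more likely to absorb disruptions, which is the intuition behind using it as a robustness indicator alongside the makespan.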
Makespan Minimization in Re-entrant Permutation Flow Shops. Hinze, Richard, 29 August 2017.
Re-entrant permutation flow shop problems occur in practical applications such as wafer manufacturing, paint shops, mold and die processes and the textile industry. A re-entrant material flow means that production jobs need to visit at least one work station multiple times. A comprehensive review gives an overview of the literature on re-entrant scheduling. The influence of missing operations has received little attention so far, and splitting jobs into sublots had not been examined in re-entrant permutation flow shops before. The computational complexity of makespan minimization in re-entrant permutation flow shop problems requires heuristic solution approaches for large problem sizes. The problem offers promising structural properties for the application of a variable neighborhood search because of the repeated processing of jobs on several machines. Furthermore, the different characteristics of lot streaming and their impact on the makespan of a schedule are examined in this thesis, and the heuristic solution methods are adjusted to handle the problem's extension.
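As a rough illustration of the objective, the makespan of a permutation schedule can be evaluated with the standard flow-shop recurrence applied to the expanded machine route, where re-entrance simply repeats machines in the route. The data layout below is an assumption for illustration, not the thesis's formulation.

```python
def makespan(perm, proc):
    """Makespan of a permutation schedule in a (re-entrant) flow shop.

    perm: job order, identical on every stage (permutation schedule).
    proc[j][s]: processing time of job j at stage s of the expanded route,
    e.g. machine route M1-M2-M1-M2 for one re-entrance on two machines.
    """
    n_stages = len(proc[perm[0]])
    comp = [0.0] * n_stages  # completion time of the previous job per stage
    for j in perm:
        prev = 0.0
        for s in range(n_stages):
            # job j starts at stage s when both job j has left stage s-1
            # and the previous job has left stage s
            prev = max(prev, comp[s]) + proc[j][s]
            comp[s] = prev
    return comp[-1]
```

A neighborhood search as described above would repeatedly perturb `perm` (e.g. by swaps or insertions) and keep moves that reduce this value.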
Price Level Targeting with Imperfect Rationality: A Heuristic Approach. Molnár, Vojtěch, January 2020.
The thesis compares price level targeting and inflation targeting regimes in a New Keynesian model without the rational expectations hypothesis. Economic agents instead form their expectations using heuristics: they choose among a few simple rules based on their past forecasting performance. Two main specifications of the price level targeting model are examined: agents form expectations either about the price level or about inflation, which is not equivalent ex ante because of the sequential nature of the model. In addition, several formulations of the forecasting rules are considered. According to the results, price level targeting is preferred under the baseline calibration when expectations are formed about the price level, but this finding is sensitive to some model parameters. Furthermore, when expectations are formed about inflation, price level targeting loses credibility over time and the economy diverges, whereas the inflation targeting model remains stable. Therefore, while potential benefits of price level targeting are confirmed under certain assumptions, the results suggest that inflation targeting constitutes a significantly more robust choice for monetary policy.
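The heuristic-switching mechanism described above can be sketched as a discrete-choice (logit) selection among forecasting rules weighted by past accuracy. The rule set, the squared-error fitness, and the intensity-of-choice parameter `beta` below are illustrative assumptions in the spirit of such models, not the thesis's exact specification.

```python
import math

def rule_shares(past_errors, beta=1.0):
    """Fraction of agents using each rule: lower past error attracts more agents.

    past_errors: list of past squared forecast errors, one per rule.
    beta: intensity of choice; beta=0 means agents ignore performance.
    """
    exps = [math.exp(-beta * e) for e in past_errors]
    total = sum(exps)
    return [x / total for x in exps]

def aggregate_forecast(forecasts, past_errors, beta=1.0):
    """Market expectation: share-weighted average of the rules' forecasts."""
    shares = rule_shares(past_errors, beta)
    return sum(w * f for w, f in zip(shares, forecasts))
```

Iterating this each period, rules that forecast well attract followers, which in turn shifts the aggregate expectation that feeds back into the model.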
Risk-benefit perception of AI use: Public perception of AI in healthcare and commercial domains. Årnfelt, Theodor, January 2021.
AI applications are today implemented in a wide range of settings with the goal of achieving greater efficiency. However, these implementations cannot be guaranteed to be free of risks. This study investigated how people perceive these risks and benefits, and whether there are notable differences between the domains in which they appear, in this case e-commerce/marketing and healthcare. Additionally, the relationship between these perceptions and individuals' positive and negative attitudes towards AI was examined using an affect heuristic framework. The findings showed that the two domains did differ from one another: ratings of both perceived risks and perceived benefits were higher for the healthcare domain than for the e-commerce/marketing domain. Further, an inverse correlation between ratings of risks and benefits was found within each domain, which is consistent with the affect heuristic framework.
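The inverse risk-benefit relationship reported above is, at its core, a negative Pearson correlation between the two rating sets within a domain. A minimal sketch with made-up ratings (not the study's data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative respondent ratings (1-5 scale): higher perceived benefit
# paired with lower perceived risk, as the affect heuristic predicts.
risk = [5, 4, 4, 2, 1]
benefit = [1, 2, 2, 4, 5]
```

A strongly negative coefficient here would correspond to the inverse correlation the study reports.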
Mapping of large task networks on manycore architectures. Berger, Karl-Eduard, 08 December 2015.
This Ph.D. thesis is devoted to the study of a task mapping problem arising in the compilation of applications for massively parallel embedded architectures. The problem responds to industrial needs such as energy savings and performance demands for synchronous dataflow applications. It has to be solved with respect to three criteria: heuristics must be able to deal with applications of various sizes, they must meet the capacity constraints of the processors, and they have to take the target architecture topologies into account. In this thesis, tasks are organized in communication networks, modeled as graphs.
To evaluate the efficiency of the developed heuristics, the mappings they produce are compared to a random mapping. This comparison is used as an evaluation metric throughout the thesis; it is motivated by the fact that no comparable heuristics could be found in the literature at the time of writing. To address this problem, two heuristics are proposed. They solve a dataflow process network mapping problem in which a network of communicating tasks is placed onto a set of processors with limited resource capacities while minimizing the overall communication bandwidth between processors. They are applied to task graphs whose task and edge weights are unitary. The first heuristic, denoted Task-wise Placement, places tasks one after another using a notion of task affinities. The second, named Subgraph-wise Placement, gathers tasks into small groups and then places these groups on processors using a notion of affinities between groups and processors. These algorithms are tested on task graphs with grid or logic gate network topologies. The obtained results are compared to an algorithm from the literature that maps task graphs of moderate size onto massively parallel architectures. In addition, the random mapping metric is used to evaluate the results of both heuristics. Then, to address problems found in industrial cases, the application cases are widened to task graphs whose task and edge weights take values similar to those found in industry. A progressive construction heuristic named Regret Based Approach, based on game theory, is proposed. This heuristic maps tasks one after another. The cost of mapping each task, given the tasks already mapped, is computed. The task selection process is based on a notion of regret from game theory.
The task with the highest regret for not being placed is identified and placed first. To check the strength of the algorithm, many types of task graphs (grids, logic gate networks, series-parallel, random, sparse matrices) of various sizes are generated. Task and edge weights are drawn randomly from a bimodal distribution parameterized to yield values similar to those of industrial applications. The obtained results are compared to the Task-wise Placement algorithm, specially adapted for non-unitary values. Moreover, the results are evaluated using the random mapping metric defined above.
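The regret-based selection described above can be sketched as follows: at each step, compute for every unplaced task the gap between its best and second-best processor under the current partial mapping, then commit the task whose gap (regret) is largest to its best processor. The cost model below (communication weight cut by placing a task away from its already-placed neighbours) and all data are simplifications for illustration, not the thesis's exact formulation.

```python
def regret_place(task_weight, edges, n_procs, capacity):
    """Greedy regret-based mapping of weighted tasks onto capacitated processors.

    task_weight: {task: resource weight}; edges: {(a, b): communication weight}.
    Assumes total capacity suffices for all tasks.
    """
    placed = {}
    load = [0] * n_procs

    def comm_cost(t, p):
        # communication that would cross processors if t were put on p
        return sum(w for (a, b), w in edges.items()
                   if (a == t and b in placed and placed[b] != p)
                   or (b == t and a in placed and placed[a] != p))

    remaining = list(task_weight)
    while remaining:
        best, best_regret = None, -1.0
        for t in remaining:
            costs = sorted((comm_cost(t, p), p) for p in range(n_procs)
                           if load[p] + task_weight[t] <= capacity)
            # regret: how much we lose if t's best processor becomes unavailable
            regret = costs[1][0] - costs[0][0] if len(costs) > 1 else float("inf")
            if regret > best_regret:
                best, best_regret = (t, costs[0][1]), regret
        t, p = best
        placed[t] = p
        load[p] += task_weight[t]
        remaining.remove(t)
    return placed
```

Tasks whose placement matters most are thus committed early, while indifferent tasks are deferred until the picture is clearer.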
An Integrated Approach to Development of Dynamic Capabilities and Investments in Strategic Factor Markets. Koparan, Ipek, 02 April 2020.
No description available.
The single row layout problem with clearances. Keller, Birgit, 20 May 2019.
The single row layout problem (SRLP) is a specially structured instance of the classical facility layout problem, used especially in flexible manufacturing systems. The SRLP consists of finding the most efficient arrangement of a given number of machines along one side of the material handling path with the purpose of minimising the total weighted sum of distances among all machine pairs. To reflect real manufacturing situations, a minimum space (a so-called clearance) between machines may be required by technological constraints, safety considerations and regulations. This thesis outlines the different concepts of clearances used in the literature and analyses their effects on modelling and solution approaches for the SRLP. In particular, the special characteristics of sequence-dependent, asymmetric clearances are discussed and finally extended to large clearances spanning several machines (machine-spanning clearances). For this, adjusted and novel model formulations and solution approaches are presented. Furthermore, a comprehensive survey of articles published in this research area since 2000 is provided, which identifies recent developments and emerging trends in the SRLP.
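The objective with sequence-dependent clearances can be sketched in a few lines: machines are positioned left to right, inserting the required clearance between each adjacent pair, and the weighted center-to-center distances are summed. All data values and names below are illustrative assumptions, not instances from the thesis.

```python
def srlp_cost(seq, length, clearance, flow):
    """Total weighted sum of center-to-center distances for one machine order.

    seq: machine order along the row; length: {machine: physical length};
    clearance: {(left, right): minimum gap}, possibly asymmetric;
    flow: {(a, b): material flow weight between machines a and b}.
    """
    pos = {}
    x = 0.0
    for i, m in enumerate(seq):
        if i > 0:
            x += clearance[(seq[i - 1], m)]  # sequence-dependent minimum gap
        pos[m] = x + length[m] / 2           # center coordinate of machine m
        x += length[m]
    return sum(w * abs(pos[a] - pos[b]) for (a, b), w in flow.items())
```

Because the clearance dictionary is keyed on ordered pairs, reversing two machines can change the cost even with identical flows, which is exactly what makes the asymmetric case harder.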
A Comparison of Mixed Integer Programming and a Heuristic Approach for Harvest Blocking in Australia. Taylor, Ronald Gordon, 07 May 2016.
The goal of harvest scheduling is to produce a practical operations schedule that can be implemented in the field by operational foresters and maximizes all values. The resulting harvest units need to closely approximate what will be done operationally while emulating natural disturbance regimes and topographic boundaries using flow direction surfaces. Two methods of forming spatially acceptable harvest units were compared: a heuristic algorithm and a mixed integer programming method. A factor analysis was conducted on both to determine the statistical significance of differences between three forest characterizations in mean financial and shape index indicators. At the 95% level of significance, mixed integer programming had higher cash flows and net present values per hectare, and the heuristic method had a higher net present value per cubic meter.
Models and Algorithms to Solve a Reliable and Congested Biomass Supply Chain Network Designing Problem under Uncertainty. Poudel, Sushil Raj, 06 May 2017.
This dissertation studies two important problems in the field of biomass supply chain networks. In the first part, we study the pre-disaster planning problem that seeks to strengthen the links between the multi-modal facilities of a biomass supply chain network. A mixed-integer nonlinear programming model is developed to determine the optimal locations for multi-modal facilities and bio-refineries, suggest reliability improvements at vulnerable links, set production levels at bio-refineries, and make transportation decisions under both normal and disrupted scenarios. The aim is to assist investors in determining which links' reliability should be improved under a given budget limitation so that the biofuel supply chain network can prevent possible losses when transportation links are disrupted by natural disasters. We used the states of Mississippi and Alabama as a testing ground for our model. As part of the numerical experimentation, some realistic hurricane scenarios are presented to determine the potential impact that pre-investing may have on improving the biomass supply chain network's reliability on vulnerable transportation links under limited budget availability. In the second part, we study the impact of feedstock supply uncertainty on the design and management of an inbound biomass co-firing supply chain network. A two-stage stochastic mixed-integer linear programming model is developed to determine the optimal use of multi-modal facilities, biomass storage and processing plants, and shipment routes for delivering biomass to coal plants under feedstock supply uncertainty while taking congestion into account. To represent a more realistic case, we generated a scenario tree based on the prediction errors obtained from historical and forecasted feedstock supply availability.
We linearized the nonlinear problem and solved it to high quality in a time-efficient manner using a hybrid decomposition algorithm that combines a constraint generation algorithm with a sample average approximation algorithm and an enhanced progressive hedging algorithm. We used the states of Mississippi and Alabama as a testing ground for our study and conducted thorough computational experiments to test our model and draw managerial insights.
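A toy sketch of the sample average approximation idea used in such two-stage models: a first-stage decision (reduced here to a single processing capacity) is scored by its cost averaged over sampled supply scenarios, and the candidate with the lowest sampled average cost wins. All names, coefficients, and the scenario list are illustrative assumptions, not the dissertation's model.

```python
def saa_capacity(demand_scenarios, candidates, build_cost, shortfall_cost):
    """Pick the candidate capacity minimizing the sampled average total cost.

    demand_scenarios: sampled biomass amounts to be processed;
    build_cost: per-unit first-stage cost of installing capacity;
    shortfall_cost: per-unit second-stage (recourse) penalty for unmet demand.
    """
    def avg_cost(cap):
        # second-stage cost averaged over the scenario sample
        recourse = sum(shortfall_cost * max(0.0, d - cap)
                       for d in demand_scenarios) / len(demand_scenarios)
        return build_cost * cap + recourse
    return min(candidates, key=avg_cost)
```

In a real SAA scheme this evaluation is repeated over several independent scenario samples to bound the optimality gap; the sketch shows only a single sample.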
Finding an Optimal Trajectory for Autonomous Parking Under Uncertain Conditions. Greinsmark, Vidar; Hjertberg, Tommy, January 2019.
Path planning that considers accurate vehicle dynamics and obstacle avoidance is an important problem in the area of autonomous driving. This paper describes a method of implementing trajectory planning for autonomous parking in conditions where the starting point and the positions of fixed obstacles are uncertain. The narrow spaces and complicated manoeuvres required for parking demand a lot from the trajectory planning algorithm: it needs to model vehicle dynamics accurately and find an efficient way around obstacles. Obstacles in the way of the parking vehicle make this a nonconvex problem; the goal usually cannot be reached by travelling in a straight line, and finding a perfect trajectory around the obstacles is generally not computationally tractable. This paper reviews a two-tiered approach to solving the problem. First, a rough path is found using a modified Rapidly-exploring Random Tree (RRT) algorithm called Forward-Backward RRT, which runs two tree-building processes in parallel and constructs a feasible path from where they intersect. Using optimisation, this path is then improved into a trajectory that is at least a local optimum. These methods are demonstrated to produce efficient and feasible trajectories that respect the dynamic constraints of the vehicle and avoid collisions.
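The first tier can be illustrated with the basic single-tree RRT building block; the paper's Forward-Backward RRT grows two such trees (from start and from goal) and joins them where they intersect. The workspace bounds, step size, and goal bias below are illustrative assumptions, and the sketch ignores vehicle dynamics entirely.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, iters=5000, goal_tol=0.5, seed=1):
    """Grow a tree of collision-free 2D states from start toward goal.

    is_free(point) -> bool is the collision check.
    Returns a list of waypoints from start to near the goal, or None.
    """
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        # goal-biased random sample in an assumed 10x10 workspace
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10),
                                                  rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        s = min(step, d)  # steer at most one step toward the sample
        new = (near[0] + s * (sample[0] - near[0]) / d,
               near[1] + s * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) <= goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:  # walk parents back to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

The resulting polyline is only a rough, kinematically naive path; as the abstract notes, an optimisation stage then refines it into a dynamically feasible trajectory.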