11

Dynamic Scheduling of Steel Casting and Milling using Multi-agents

Cowling, Peter I., Ouelhadj, D., Petrovic, S. 13 July 2009 (has links)
This paper presents a case study on the use of multi-agents for integrated dynamic scheduling of steel milling and casting. Steel production is an extremely complex problem requiring the consideration of several different constraints and objectives across a range of processes in a dynamic environment. Most research in steel production scheduling considers static scheduling of processes in isolation. In contrast to earlier approaches, the multi-agent architecture proposed consists of a set of heterogeneous agents which integrate and optimize a range of scheduling objectives related to different processes of steel production, and can adapt to changes in the environment while still achieving overall system goals. Each agent embodies its own scheduling model and realizes its local predictive-reactive schedule taking into account local objectives, real-time information and information received from other agents. Agents cooperate in order to find a globally good schedule which is able to react effectively to real-time disruptions and to optimize the original production goals whilst minimising the disruption caused by unexpected events occurring in real time. The inter-agent cooperation is based on the Contract Net Protocol with commitment.
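As a sketch of how such cooperation proceeds, the following Python fragment walks through one announce-bid-award round of the Contract Net Protocol; the agent names, task fields and earliest-completion-time cost model are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of one Contract Net Protocol round for dynamic scheduling.
# Agent names, task fields and the cost model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    duration: float

class ResourceAgent:
    def __init__(self, name, busy_until=0.0):
        self.name = name
        self.busy_until = busy_until  # local predictive schedule state

    def bid(self, task):
        # Bid = earliest completion time under the agent's local schedule.
        return self.busy_until + task.duration

    def award(self, task):
        # Commit: extend the local predictive schedule with the new task.
        start = self.busy_until
        self.busy_until += task.duration
        return start

def contract_net_round(task, agents):
    """Announce the task, collect bids, award it to the best bidder."""
    bids = {agent: agent.bid(task) for agent in agents}  # announce + bid
    winner = min(bids, key=bids.get)                     # evaluate bids
    start = winner.award(task)                           # award + commit
    return winner.name, start

agents = [ResourceAgent("caster_1", 3.0), ResourceAgent("mill_1", 1.0)]
print(contract_net_round(Task("heat_42", 2.5), agents))  # -> ('mill_1', 1.0)
```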
12

Static and dynamic job-shop scheduling using rolling-horizon approaches and the Shifting Bottleneck Procedure

Ghoniem, Ahmed 10 July 2003 (has links)
Over the last decade, the semiconductor industry has witnessed a steady increase in its complexity based on improvements in manufacturing processes and equipment. Progress in the technology used is no longer the key to success, however. In fact, semiconductor technology has reached such a high level of complexity that improvements appear at a slow pace. Moreover, the diffusion of technology among competitors shows that traditional approaches based on technological advances and innovations are not sufficient to remain competitive. A crisis in the semiconductor field in the summer of 2001 made it even clearer that optimizing the operational control of semiconductor wafer fabrication facilities is a vital key to success. Operations research-oriented studies have been carried out to this end for the last five years. None of them, however, suggests a comprehensive model and solution to the operational control problem of a semiconductor manufacturing facility. Two main approaches, namely mathematical programming and dispatching rules, have been explored in the literature so far, either partially or entirely dealing with this problem. Adapting the Shifting Bottleneck (SB) procedure is a third approach that has motivated many studies. Most research focuses on optimizing a certain objective function under idealized conditions and thus does not take into consideration system disruptions such as machine breakdowns. While many papers address adaptations of the SB procedure, the problem of re-scheduling jobs dynamically to take disruptions and local disturbances (machine breakdowns, maintenance, etc.) into consideration offers interesting perspectives for research. Dealing with local disturbances in a production environment and analyzing their impact on scheduling policies is a complex issue. It becomes even more complex in the semiconductor industry because of the numerous inherent constraints to take into account. The problem addressed in this thesis consists of studying dynamic scheduling in a job-shop environment where local disturbances occur. This research focuses on scheduling a large job shop and developing re-scheduling policies for when local disturbances occur. The re-scheduling can be applied to the whole production horizon considered in the instance, or to a restricted period T that becomes a decision variable of the problem. The length of the restricted re-scheduling horizon T can significantly influence the overall results, and its impact on the general performance is studied. Future extensions can be made to include constraints that arise in the semiconductor industry, such as the presence of parallel and batching machines, reentrant flows and the lot-dedication problem. The theoretical results developed through this research will be applied to data sets to study their efficiency. We hope this methodology will bring useful insights into dealing effectively with local disturbances in production environments. / Master of Science
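A minimal sketch of the restricted-horizon idea: when a disturbance occurs at time t, only the operations starting inside the window [t, t + T) are re-optimized, while the rest of the schedule is kept. The data layout and the naive earliest-start stand-in solver are illustrative assumptions; the thesis itself builds on the Shifting Bottleneck procedure.

```python
# Illustrative sketch of rescheduling over a restricted horizon T after a
# disturbance (e.g. a machine breakdown). The scheduling primitives are
# placeholders, not the thesis's Shifting Bottleneck implementation.
def reschedule(schedule, disturbance_time, horizon_T, solve):
    """Re-optimize only operations starting in [t, t + T); keep the rest."""
    t = disturbance_time
    frozen = [op for op in schedule
              if op["start"] < t or op["start"] >= t + horizon_T]
    window = [op for op in schedule if t <= op["start"] < t + horizon_T]
    return frozen + solve(window, t)

def earliest_start(window, t):
    # Naive stand-in solver: replan the affected operations sequentially.
    out, clock = [], t
    for op in sorted(window, key=lambda o: o["start"]):
        out.append({**op, "start": clock})
        clock += op["dur"]
    return out

schedule = [{"job": j, "start": 2.0 * j, "dur": 2.0} for j in range(5)]
print(reschedule(schedule, disturbance_time=3.0, horizon_T=4.0,
                 solve=earliest_start))
```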
13

Stochastic Scheduling for a Network of MEMS Job Shops

Varadarajan, Amrusha 31 January 2007 (has links)
This work is motivated by the pressing need for operational control in the fabrication of Microelectromechanical systems or MEMS. MEMS are miniature three-dimensional integrated electromechanical systems with the ability to absorb information from the environment, process this information and suitably react to it. These devices offer tremendous advantages owing to their small size, low power consumption, low mass and high functionality, which makes them very attractive in applications with stringent demands on weight, functionality and cost. While the system's "brain" (device electronics) is fabricated using traditional IC technology, the micromechanical components necessitate very intricate and sophisticated processing of silicon or other suitable substrates. A dearth of fabrication facilities with micromachining capabilities and a lengthy gestation period from design to mass fabrication and commercial acceptance of the product in the market are factors most often implicated in hampering the growth of MEMS. These devices are highly application specific with low production volumes, and the few fabs that do possess micromachining capabilities are unable to offer a complete array of fabrication processes in order to be able to cater to the needs of the MEMS R&D community. A distributed fabrication network has, therefore, emerged to serve the evolving needs of this high investment, low volume MEMS industry. Under this environment, a central facility coordinates between a network of fabrication centers (Network of MEMS job shops -- NMJS) containing micromachining capabilities. These fabrication centers include commercial, academic and government fabs, which make their services available to the ordinary customer. Wafers are shipped from one facility to another until all processing requirements are met. The lengthy and intricate process sequences that need to be performed over a network of capital-intensive facilities are complicated by dynamic job arrivals, stochastic processing times, sequence-dependent set-ups and travel between fabs. Unless the production of these novel devices is carefully optimized, the benefits of distributed fabrication could be completely overshadowed by lengthy lead times, chaotic routings and costly processing. Our goal, therefore, is to develop and validate an approach for optimal routing (assignment) and sequencing of MEMS devices in a network of stochastic job shops with the objective of minimizing the sum of completion times and the cost incurred, given a set of fabs, machines and an expected product mix. In view of our goal, we begin by modeling the stochastic NMJS problem as a two-stage stochastic program with recourse where the first-stage variables are binary and the second-stage variables are continuous. The key decision variables are binary and pertain to the assignment of jobs to machines and their sequencing for processing on the machines. The assignment variables essentially fix the route of a job as it travels through the network because these variables specify the machine on which each job-operation must be performed out of several candidate machines. Once the assignment is decided upon, sequencing of job-operations on each machine follows. The assignment and sequencing must be such that they offer the best solution (in terms of the objective) possible in light of all the processing time scenarios that can be realized. We present two approaches for solving the stochastic NMJS problem.
The first approach is based on the L-shaped method (credited to van Slyke and Wets, 1969). Since the NMJS problem lacks relatively complete recourse, the first-stage solution can be infeasible to the second-stage problem in that the first-stage solution may either violate the reentrant flow conditions or it may create a deadlock. In order to alleviate these infeasibilities, we develop feasibility cuts which, when appended to the master problem, eliminate the infeasible solution. Alternatively, we also develop constraints to explicitly address these infeasibilities directly within the master problem. We show how a deadlock involving two or three machines arises if and only if a certain relationship between operations and a certain sequence amongst them exists. We generalize this argument to the case of m machines, which forms the basis for our deadlock prevention constraints. Computational results at the end of Chapter 3 compare the relative merits of a model which relies solely on feasibility cuts with models that incorporate reentrant flow and deadlock prevention constraints within the master problem. Experimental evidence reveals that the latter offers appreciable time savings over the former. Moreover, in a majority of instances we see that models that carry deadlock prevention constraints in addition to the reentrant flow constraints provide on-par or better performance than those that solely carry reentrant flow constraints. We next develop an optimality cut which, when appended to the master problem, helps in eliminating the suboptimal master solution. We also present alternative optimality and feasibility cuts obtained by modifying the disjunctive constraints in the subproblem so as to eliminate the big H terms in it. Although any large positive number can be used as the value of H, a conservative estimate may improve computational performance. In light of this, we develop a conservative upper bound for operation completion times and use it as the value of H. Test instances have been generated using a problem generator written in Java. We present computational results to evaluate the impact of a conservative estimate for big H on run time, analyze the effect of the different optimality cuts and demonstrate the performance of the multicut method (Wets, 1981), which differs from the L-shaped method in that the number of optimality cuts it appends is equal to the number of scenarios in each iteration. Experimentation indicates that Model 2, which uses the standard optimality cut in conjunction with the conservative estimate for big H, almost always outperforms Model 1, which also uses the standard optimality cut but uses a fixed value of 1000 for big H. Model 3, which employs the alternative optimality cut with the conservative estimate for big H, requires the fewest number of iterations to converge to the optimum but it also incurs the maximum premium in terms of computational time. This is because the alternative optimality cut adds to the complexity of the problem in that it appends additional variables and constraints to the master as well as the subproblems. In the case of Model 4 (multicut method), the segregated optimality cuts accurately reflect the shape of the recourse function, resulting in fewer overall iterations, but the large number of these cuts accumulates over the iterations, making the master problem sluggish, and so this model exhibits a variable performance for the various datasets.
These experiments reveal that a compact master problem and a conservative estimate for big H positively impact the run-time performance of a model. Finally, we develop a framework for a branch-and-bound scheme within which the L-shaped method, as applied to the NMJS problem, can be incorporated so as to further enhance its performance. Our second approach for solving the stochastic NMJS problem relies on the tight LP relaxation observed for the deterministic equivalent of the model. We first solve the LP relaxation of the deterministic equivalent problem, and then fix certain binary assignment variables that take on a value of either 0 or 1 in the relaxation. Based on this fixing of certain assignment variables, additional logical constraints have been developed that lead to the fixing of some of the sequencing variables too. Experimental results, comparing the performance of the above LP heuristic procedure with CPLEX over the generated test instances, illustrate the effectiveness of the heuristic procedure. For the largest problems (5 jobs, 10 operations/job, 12 machines, 7 workcenters, 7 scenarios) solved in this experiment, average savings of as much as 4154 seconds and 1188 seconds were recorded in a comparison with Models 1 and 2, respectively. Both of these models solve the deterministic equivalent of the stochastic NMJS problem but differ in that Model 1 uses a big H value of 1000 whereas Model 2 uses the conservative upper bound for big H developed in this work. The maximum optimality gap observed for the LP heuristic over all the data instances solved was 1.35%. The LP heuristic, therefore, offers a powerful alternative for solving these problems to near-optimality with a very low computational burden. We also present results pertaining to the value of the stochastic solution for various data instances. The observed savings of up to 8.8% over the mean value approach underscores the importance of using a solution that is robust over all scenarios versus a solution that approximates the randomness through expected values. We next present a dynamic stochastic scheduling approach (DSSP) for the NMJS problem. The premise behind this undertaking is that in a real-life implementation that is faithful to the two-stage procedure, assignment (routing) and sequencing decisions will be made for all the operations of all the jobs at the outset and these will be followed through regardless of the actual processing times realized for individual operations. However, it may be possible to refine this procedure if information on actual processing time realizations for completed operations could be utilized so that assignment and sequencing decisions for impending operations are adjusted based on the evolving scenario (which may be very different from the scenarios modeled) while still hedging against future uncertainty. In the DSSP approach, the stochastic programming model for the NMJS problem is solved at each decision point using the LP heuristic in a rolling-horizon fashion while incorporating constraints that model existing conditions on the shop floor and the actual processing times realized for the operations that have been completed. The implementation of the DSSP algorithm is illustrated through an example problem. The results of the DSSP approach as applied to two large problem instances are presented.
The performance of the DSSP approach is evaluated on three fronts: first, by using the LP heuristic at each decision point; second, by using an optimal algorithm at each decision point; and third, against the two-stage stochastic programming approach. Results from the experimentation indicate that the DSSP approach using the LP heuristic at each decision point generates assignment and sequencing decisions superior to those of the two-stage stochastic programming approach and provides solutions that are near-optimal with a very low computational burden. For the first instance, involving 40 operations, 12 machines and 3 processing time scenarios, the DSSP approach using the LP heuristic yields the same solution as the optimal algorithm with a total time savings of 71.4% and also improves upon the two-stage stochastic programming solution by 1.7%. In the second instance, the DSSP approach using the LP heuristic yields a solution with an optimality gap of 1.77% and a total time savings of 98% over the optimal algorithm. In this case, the DSSP approach with the LP heuristic improves upon the two-stage stochastic programming solution by 6.38%. We conclude by presenting a framework for the DSSP approach that extends the basic DSSP algorithm to accommodate jobs whose arrival times may not be known in advance. / Ph. D.
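The decomposition described above follows the familiar L-shaped pattern; a schematic of the iteration, in the multicut flavour (one cut per scenario), might look like the following Python skeleton, where the master and subproblem solver callbacks are placeholders rather than the thesis's actual models.

```python
# Schematic multicut L-shaped loop: a master problem over binary assignment
# and sequencing variables, and per-scenario recourse subproblems returning
# either feasibility cuts (e.g. for deadlock or reentrant-flow violations)
# or optimality cuts. The solver callbacks are placeholders.
def l_shaped(solve_master, solve_subproblem, scenarios, tol=1e-6, max_iter=100):
    cuts = []
    for _ in range(max_iter):
        x, theta = solve_master(cuts)       # theta under-estimates recourse cost
        expected, new_cuts = 0.0, []
        for prob, scen in scenarios:
            status, value, cut = solve_subproblem(x, scen)
            if status == "infeasible":      # first-stage x breaks the recourse
                new_cuts.append(("feasibility", cut))
            else:
                expected += prob * value
                new_cuts.append(("optimality", cut))
        no_infeasible = not any(k == "feasibility" for k, _ in new_cuts)
        if no_infeasible and expected <= theta + tol:
            return x                        # master estimate matches recourse
        cuts.extend(new_cuts)               # multicut: one cut per scenario
    raise RuntimeError("no convergence within max_iter")
```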
14

Efficient Scheduling In Distributed Computing On Grid

Kaya, Ozgur 01 December 2006 (has links)
Today, many geographically distributed computing resources are idle much of the time. The aim of grid computing is to collect these resources into a single system, helping to solve problems that are too complex for a single PC. Scheduling plays a critical role in the efficient and effective management of resources to achieve high performance in a grid computing environment. Due to the heterogeneity and highly dynamic nature of the grid, developing scheduling algorithms for grid computing involves some challenges. In this work, we concentrate on efficient scheduling of distributed tasks on the grid. We propose a novel scheduling heuristic for bag-of-tasks applications. The proposed algorithm primarily makes use of history-based runtime estimation. The history stores information about applications whose runtimes and other specific properties were recorded during previous executions. Scheduling decisions are made according to the similarity between applications. The definition of similarity is an important aspect of this approach, apart from the best resource allocation. The aim of this scheduling algorithm (HISA, the History Injected Scheduling Algorithm) is to define and find the similarity, and to assign the job to the most suitable resource, making use of that similarity. In our evaluation, we use a grid simulation tool called GridSim. A number of intensive experiments with various simulation settings have been conducted. Based on the experimental results, the effectiveness of the HISA scheduling heuristic is studied and compared to the other scheduling algorithms embedded in GridSim. The results show that history injection improves the performance of future job submissions on a grid.
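As a sketch of history-based runtime estimation in the spirit of HISA, the fragment below predicts a job's runtime from its most similar previously executed applications; the feature set and the similarity measure are illustrative assumptions, not the thesis's definitions.

```python
# Sketch: predict a job's runtime from the most similar past executions.
# Features and the inverse-distance similarity are illustrative assumptions.
import math

history = [
    # (features: input_size_mb, cpu_factor) -> observed runtime in seconds
    ((100.0, 1.0), 42.0),
    ((200.0, 1.0), 85.0),
    ((100.0, 2.0), 23.0),
]

def similarity(a, b):
    # Inverse Euclidean distance; closer feature vectors are "more similar".
    return 1.0 / (1.0 + math.dist(a, b))

def estimate_runtime(features, k=2):
    # Similarity-weighted average over the k most similar past executions.
    ranked = sorted(history, key=lambda h: similarity(features, h[0]),
                    reverse=True)[:k]
    weights = [similarity(features, f) for f, _ in ranked]
    return sum(w * rt for w, (_, rt) in zip(weights, ranked)) / sum(weights)

# The scheduler would assign the job to the resource with the smallest
# estimated completion time; here we only print the runtime estimate.
print(round(estimate_runtime((150.0, 1.0)), 1))
```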
15

On The Service Models For Dynamic Scheduling Of Multi-class Base-stock Controlled Systems

Kat, Bora 01 September 2005 (has links)
This study is on service models for the dynamic scheduling of multi-class make-to-stock systems. An exponential single-server facility processes different types of items one by one, and demand arrivals for the different item types occur according to independent Poisson processes. Inventories of the items are managed by base-stock policies and backordering is allowed. The objective is to minimize base-stock investments or average inventory holding costs subject to a constraint on the aggregate fill rate, which is a weighted average of the fill rates of the item types. The base-stock controlled policy that maximizes the aggregate fill rate is numerically investigated, for both symmetric and asymmetric systems, and is shown to be optimal for minimizing base-stock investments under an aggregate fill rate constraint. Alternative policies are generated by heuristics in order to approximate the policy that maximizes the aggregate fill rate, and the performance of these policies is compared to that of the two well-known Longest Queue and First Come First Served policies. The optimal policy for the service model that minimizes average inventory holding cost subject to an aggregate fill rate constraint is also investigated, without restricting attention to base-stock controlled dynamic scheduling policies only. Based on the equivalence relations between this service model and the corresponding cost model, it is observed that the base-stock controlled policy that maximizes the aggregate fill rate is almost the same as the solution to the service model and cost model under consideration, especially when backorder penalties in the cost model are large compared to the inventory holding cost parameters, or, equivalently, when the target fill rate in the service model is large.
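In compact form, the base-stock investment variant of the service model can be sketched as below; the symbols (base-stock levels S_k, fill rates f_k, weights w_k, target beta, policy pi) are assumed notation for illustration, not necessarily the thesis's own.

```latex
% Service model sketch (assumed notation): minimize base-stock investment
% subject to an aggregate fill-rate target \beta, where the aggregate fill
% rate is a weighted average of the per-class fill rates under policy \pi.
\min_{S_1,\dots,S_K,\;\pi} \quad \sum_{k=1}^{K} S_k
\qquad \text{s.t.} \qquad
F_{\mathrm{agg}} \;=\; \sum_{k=1}^{K} w_k \, f_k(\pi, S_k) \;\ge\; \beta .
```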
16

SLOT: uma ferramenta dinâmica para escalonamento global de aplicações em grades computacionais / SLOT: a dynamic tool for global scheduling of applications on computational grids

Rios, Ricardo Araújo 19 May 2008 (has links)
The constant improvement in the performance of computers and interconnection networks has favored the use of distributed computational resources, giving rise to Grid Computing. This new approach uses heterogeneous and geographically distributed resources to solve problems with high computational costs. The execution of applications in this environment is generally achieved with scheduling mechanisms that manipulate the task set and its interdependences, mapping the tasks onto the resources. However, existing schedulers generate the schedule of each application individually, without evaluating the impact on the execution of previously scheduled applications. In this sense, this work presents a global scheduling tool for the tasks submitted to the grid, together with a scheduling algorithm that allocates tasks into idle time slices between previously scheduled tasks. The proposed tool and algorithm reduce the amount of time processors remain idle and therefore allow a more efficient execution of the applications than traditional algorithms.
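A minimal sketch of the gap-filling idea: scan one processor's timeline for the first idle slice between previously scheduled tasks that is long enough to hold the new task. The interval representation and tie-breaking are illustrative assumptions, not SLOT's actual data structures.

```python
# Sketch: place a new task in the first idle slice, between previously
# scheduled tasks, that is long enough for it. Representation is assumed.
def first_free_slot(busy, duration, horizon=float("inf")):
    """busy: sorted list of (start, end) intervals on one processor."""
    cursor = 0.0
    for start, end in busy:
        if start - cursor >= duration:   # the gap before this interval fits
            return cursor
        cursor = max(cursor, end)
    # No interior gap fits; append at the end if the horizon allows it.
    return cursor if horizon - cursor >= duration else None

busy = [(0.0, 2.0), (5.0, 6.0), (9.0, 12.0)]
print(first_free_slot(busy, 3.0))  # -> 2.0, the idle slice between 2.0 and 5.0
```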
17

Parallélisations de méthodes de programmation par contraintes / Parallelizations of constraint programming methods

Menouer, Tarek 26 June 2015 (has links)
In the context of the PAJERO project, we propose in this thesis an external parallelization of a Constraint Programming (CP) solver, using both search and Portfolio parallelizations, in order to solve constraint satisfaction and optimization problems. In our work the search parallelization is adapted for deterministic and non-deterministic executions, according to the needs of the user. The principle is to partition the unique search tree generated by one search strategy into a set of sub-trees, and then assign each sub-tree to one computing core. A search strategy here means an algorithm that decides which variable is selected for assignment in each node of the search tree, and that also decides the scheduling of the search. In CP, several search strategies exist, and each one can be better than the others for solving a specific problem. The difficulty lies in how to choose the best strategy. To benefit from the variety of strategies and the availability of computational resources, another parallelization exists, called Portfolio parallelization. The principle of Portfolio parallelization is to execute N search strategies in parallel; the first strategy that finds a solution stops the others. The novelty of our work in the context of the Portfolio is to adapt the schedule of the N strategies in order to favour the most promising strategy, the one most likely to find a solution first, by giving it more cores than the others. The promising strategy is selected using one of two methods. The first method uses an estimation function that selects the strategy with the smallest search tree. The second method uses a learning algorithm that automatically determines the number of cores allocated to each strategy according to previous experiments on already solved instances. We have also proposed a new resource-allocation system based on a scheduling strategy combined with an economic model in order to execute several CP applications. These applications are solved using parallel solvers in a cloud computing infrastructure. The originality of this system is that the number of resources allocated to each CP application is determined automatically according to the economic class of the user. The performance obtained by our parallelization methods is illustrated by solving CP problems with the Google OR-Tools solver on top of the parallel Bobpp framework.
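To illustrate the adaptive Portfolio idea, the sketch below ranks strategies by an (assumed) estimate of remaining search-tree size and splits the cores proportionally, handing any leftover to the most promising strategy; the estimator and the proportional rule are illustrative, not the thesis's exact mechanism.

```python
# Sketch: give more cores to the strategy whose estimated remaining search
# tree is smallest. Estimates and the proportional rule are assumptions.
def allocate_cores(estimates, total_cores):
    """estimates: {strategy: estimated remaining tree size (smaller = better)}."""
    # Turn "smaller tree" into a larger score, then split cores proportionally.
    scores = {s: 1.0 / size for s, size in estimates.items()}
    total = sum(scores.values())
    alloc = {s: max(1, int(total_cores * sc / total))
             for s, sc in scores.items()}
    # Hand any leftover cores (the rounding remainder) to the best strategy.
    best = min(estimates, key=estimates.get)
    alloc[best] += total_cores - sum(alloc.values())
    return alloc

print(allocate_cores({"dom/wdeg": 1e4, "impact": 5e4, "lexico": 2e5}, 12))
# -> {'dom/wdeg': 10, 'impact': 1, 'lexico': 1}
```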
18

Enhancing Task Assignment in Many-Core Systems by a Situation Aware Scheduler

Meier, Tobias, Ernst, Michael, Frey, Andreas, Hardt, Wolfram 17 July 2017 (has links)
The resource demand on embedded devices is constantly growing. This is caused by the sheer explosion of software-based functions in embedded systems, which are growing far faster than the resources of single-core and multi-core embedded processors. As one of the limitations is the computing power of the processors, we need to explore ways to use this resource more efficiently. We identified that during the run-time of embedded devices the resource demand of the software functions is permanently changing depending on the device situation. To enable an embedded device to take advantage of this dynamic resource demand, the allocation of software functions to the processor must be handled by a scheduler that is able to evaluate the resource demand of the software functions in relation to the device situation. This marks a change in embedded devices from statically defined software systems to dynamic software systems. Beyond that, we can increase the efficiency even further by extending the approach from a single device to a distributed or networked system (many-core system). However, existing approaches to dynamic resource allocation are focused on individual devices and leave the optimization potential of many-core systems untouched. Our concept extends the existing Hierarchical Asynchronous Multi-Core Scheduler (HAMS) concept for individual devices to many-core systems. This extension introduces a dynamic situation-aware scheduler for many-core systems which takes the current workload of all devices and the system situation into account. With our approach, the resource efficiency of an embedded many-core system can be increased. The following paper explains the architecture and the expected results of our concept.
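As a toy illustration of situation-aware placement, the following sketch weights each device's load by a situation-dependent factor before assigning a software function; the situations, weights and capacity model are invented for the example and are not part of the HAMS design.

```python
# Sketch: situation-aware assignment of a software function across devices.
# Situation names, weights and the unit-capacity model are assumptions.
SITUATION_WEIGHT = {   # how costly it is to add load in a given situation
    "idle": 1.0,
    "driving": 1.5,    # e.g. safety-critical functions reserve headroom
    "parking": 1.1,
}

def place(function_load, devices, situation):
    """devices: {name: current load in [0, 1]}; returns the chosen device."""
    w = SITUATION_WEIGHT[situation]
    feasible = {d: load for d, load in devices.items()
                if w * (load + function_load) <= 1.0}
    if not feasible:
        raise RuntimeError("no device can host the function in this situation")
    return min(feasible, key=feasible.get)   # least-loaded feasible device

print(place(0.2, {"ecu_a": 0.55, "ecu_b": 0.30}, "driving"))  # -> 'ecu_b'
```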
19

Real-Time optimalizace operací v průmyslové výrobě / Real-Time Optimizations in Industrial Production

Křen, Michal January 2012 (has links)
The thesis deals with the scheduling of manufacturing operations in industrial production. The problem is described as the well-known Resource-Constrained Project Scheduling Problem, whose objective is to find an optimal assignment of operations to limited resources. The optimizer created for the thesis uses a genetic algorithm to solve the scheduling problem. For the purpose of dynamic scheduling, a failure model was designed and a system with a real-time optimizer, able to repair the original schedule on the fly, was created. Several solution methods were implemented in the real-time optimizer and evaluated in a number of experiments. The resulting system is also able to simulate manufacturing operations and draw a Gantt chart.
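Genetic algorithms for the RCPSP typically evolve activity lists and decode them with a schedule-generation scheme; a minimal serial decoder for a single renewable resource might look like the following sketch, whose instance format is an illustrative assumption rather than the thesis's representation.

```python
# Sketch of the decoding step behind many GA approaches to the RCPSP: a
# serial schedule-generation scheme turning a precedence-feasible activity
# list (the chromosome) into start times on one renewable resource.
def serial_sgs(activity_list, dur, demand, preds, capacity):
    start, usage = {}, {}                 # usage[t] = capacity used at time t
    for a in activity_list:
        # Earliest precedence-feasible start time.
        t = max((start[p] + dur[p] for p in preds[a]), default=0)
        # Shift right until the resource has room over the whole duration.
        while any(usage.get(t + d, 0) + demand[a] > capacity
                  for d in range(dur[a])):
            t += 1
        start[a] = t
        for d in range(dur[a]):
            usage[t + d] = usage.get(t + d, 0) + demand[a]
    return start

dur    = {"A": 2, "B": 3, "C": 2}
demand = {"A": 1, "B": 2, "C": 2}
preds  = {"A": [], "B": [], "C": ["A"]}
print(serial_sgs(["A", "B", "C"], dur, demand, preds, capacity=3))
# -> {'A': 0, 'B': 0, 'C': 3}
```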
20

Dynamic container orchestration for a device-cloud continuum

Alfonso Rodriguez Garzon, Camilo January 2023 (has links)
Edge computing has emerged as a paradigm to support the growing demand for real-time processing of data generated at the edge of the network. As the devices at the edge are constrained, one of the challenges in the area is how to schedule workloads. The scheduling problem is difficult to tackle due to the multitude of sources from which variables originate, the diversity of algorithms and execution methods, and the tasks involving information dissemination and action execution. This project aims to explore the problem and implement a system that simplifies the construction of a scheduler for edge computing, reducing the cognitive load on developers who work in the area and letting them focus on their area of expertise. To construct the solution, a literature review is conducted, a set of functional and non-functional requirements is proposed, an implementation using a Kubernetes operator and a Python application is carried out, and the solution is evaluated and validated against the requirements as well as a use case and a test case. The results demonstrate that the system generates customized scheduler instances capable of receiving any number of inputs, outsourcing the execution of the scheduling logic, and interacting with different outputs. This allows developers to rapidly deploy instances for their own needs, focusing on their domain of expertise.
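A flavour of the pluggable logic such generated scheduler instances might outsource: combine an arbitrary set of input signals into a per-node score and pick the best host. The signal names and weights are invented for the example; the actual system wires decisions like this to Kubernetes through its operator.

```python
# Sketch: score candidate edge nodes from arbitrary input signals and pick
# the best host. Signal names and weights are illustrative assumptions.
def score_nodes(nodes, weights):
    """nodes: {name: {signal: value in [0, 1]}}; higher score = better host."""
    def score(signals):
        return sum(weights.get(k, 0.0) * v for k, v in signals.items())
    return max(nodes, key=lambda n: score(nodes[n]))

nodes = {
    "edge-1": {"free_cpu": 0.7, "free_mem": 0.4, "link_quality": 0.9},
    "edge-2": {"free_cpu": 0.5, "free_mem": 0.8, "link_quality": 0.6},
}
print(score_nodes(nodes, {"free_cpu": 0.5, "free_mem": 0.3,
                          "link_quality": 0.2}))  # -> 'edge-1'
```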
