61

Computational Intelligent Sensor-rank Consolidation Approach for Industrial Internet of Things (IIoT)

Mekala, M. S., Rizwan, Patan, Khan, Mohammad S. 01 January 2021 (has links)
Continuous field monitoring and sensor-data search remain essential elements underpinning the influence of the Internet of Things (IoT). Most existing systems rely on spatial coordinates or semantic keywords to retrieve the required data, and these are not comprehensive constraints because of sensor cohesion and the haphazardness of unique localization. To address this issue, we propose a deep-learning-inspired sensor-rank consolidation (DLi-SRC) system built on a set of three algorithms. First, a sensor cohesion algorithm based on a Lyapunov approach accelerates sensor stability. Second, a sensor unique-localization algorithm based on a rank-inferior measurement index avoids data redundancy and data loss. Third, a heuristic directive algorithm improves data-search efficiency by returning appropriately ranked sensor results according to the search specifications. We conducted thorough simulations to assess the effectiveness of DLi-SRC. The outcomes reveal significant performance gains over benchmark approaches, including improved search efficiency and service quality, a 91% enhancement in sensor existence rate, and a 49% gain in sensor energy.
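The abstract leaves the ranking machinery unspecified, so the following minimal sketch illustrates only the general idea of returning ranked sensor results against a search specification; the sensor attributes and scoring weights are hypothetical stand-ins, not the paper's rank-inferior measurement index.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    sensor_id: str
    stability: float   # hypothetical cohesion/stability estimate in [0, 1]
    energy: float      # remaining energy fraction in [0, 1]
    relevance: float   # match against the search specification, in [0, 1]

def rank_sensors(sensors, w_rel=0.5, w_stab=0.3, w_energy=0.2, top_k=10):
    """Return the top-k sensors by a composite score (assumed weighting)."""
    score = lambda s: w_rel * s.relevance + w_stab * s.stability + w_energy * s.energy
    return sorted(sensors, key=score, reverse=True)[:top_k]
```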
62

Time-window optimization for a constellation of earth observation satellites

Oberholzer, Christiaan Vermaak 02 1900 (has links)
Thesis (M.Com. (Quantitative Management)) / Satellite Scheduling Problems (SSPs) are NP-hard, and constraint programming and metaheuristic solution methods yield mixed results. This study investigates a new version of the SSP, the Satellite Constellation Time-Window Optimization Problem (SCoTWOP), involving commercial satellite constellations that provide frequent earth coverage. The SCoTWOP is related to the dual of the Vehicle Routing Problem with Multiple Time Windows, suggesting binary solution vectors that represent an activation of time-windows. This representation fitted well with the MATLAB® Genetic Algorithm and Direct Search Toolbox, which was subsequently used to experiment with genetic algorithms, tabu search, and simulated annealing as SCoTWOP solution methods. The genetic algorithm was most successful and in some instances activated all 250 imaging time-windows, a number that is typical for a constellation of six satellites. / Quantitative Management
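A minimal sketch of the binary-vector genetic algorithm idea described above, assuming a generic fitness that rewards activated time-windows and penalizes hypothetical pairwise conflicts; the actual SCoTWOP objective and the toolbox internals are not specified in the abstract.

```python
import random

N_WINDOWS = 250                  # typical count for a six-satellite constellation
CONFLICTS = {(3, 7), (12, 40)}   # hypothetical mutually exclusive window pairs

def fitness(bits):
    value = sum(bits)            # each activated window contributes 1
    penalty = sum(10 for i, j in CONFLICTS if bits[i] and bits[j])
    return value - penalty

def evolve(pop_size=50, generations=200, p_mut=0.01):
    pop = [[random.randint(0, 1) for _ in range(N_WINDOWS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_WINDOWS)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```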
64

I/O Aware Power Shifting

Savoie, Lee, Lowenthal, David K., Supinski, Bronis R. de, Islam, Tanzima, Mohror, Kathryn, Rountree, Barry, Schulz, Martin 05 1900 (has links)
Power limits on future high-performance computing (HPC) systems will constrain applications. However, HPC applications do not consume constant power over their lifetimes. Thus, applications assigned a fixed power bound may be forced to slow down during high-power computation phases, yet may not consume their full power allocation during low-power I/O phases. This paper explores algorithms that leverage application semantics (phase frequency, duration, and power needs) to shift unused power from applications in I/O phases to applications in computation phases, thus improving system-wide performance. We design novel techniques that include explicit staggering of applications to improve power shifting. Compared to executing without power shifting, our algorithms can improve average performance by up to 8%, or improve the performance of a single high-priority application by up to 32%.
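A hedged sketch of the power-shifting idea under an assumed job interface (name, phase, minimum and maximum power); this is not the paper's algorithm, which additionally exploits phase frequency and duration and explicit staggering.

```python
def shift_power(jobs, system_bound):
    """jobs: list of dicts with 'name', 'phase' ('io' or 'compute'),
    'min_power', 'max_power'. Returns a per-job allocation whose total
    does not exceed the system-wide power bound."""
    alloc = {j['name']: j['min_power'] for j in jobs}   # everyone gets the floor
    spare = system_bound - sum(alloc.values())
    # Give spare power to compute-phase jobs first, up to their caps;
    # I/O-phase jobs keep only their floor, freeing power for computation.
    for j in sorted(jobs, key=lambda j: j['phase'] != 'compute'):
        extra = min(spare, j['max_power'] - alloc[j['name']])
        alloc[j['name']] += extra
        spare -= extra
    return alloc
```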
65

Systèmes désordonnés et frustrés: modèles champ moyen et problèmes d'optimisation combinatoire [Disordered and frustrated systems: mean-field models and combinatorial optimization problems]

Schreiber, Georg R. 13 November 1997 (has links) (PDF)
In this doctoral thesis I present results on disordered and frustrated models drawn from statistical physics and combinatorial optimization. As an application of spin-glass theory, I study the disordered and frustrated Blume-Emery-Griffiths (BEG) model. The model is treated in the mean-field approximation within the replica method. Using the replica-symmetric Ansatz, I present a complete numerical solution and then discuss the effects of replica-symmetry breaking. The stability of the replica-symmetric solution is analyzed and the unstable regions are identified. The phase diagram exhibits first- and second-order transitions. The tricritical point persists in the frustrated model, in agreement with earlier work. A version of the BEG model with a disordered chemical potential has also been studied; the calculations confirm that the tricritical point appears at lower temperature when disorder is present. I then consider the graph bipartitioning problem, which from the statistical-physics point of view corresponds to a spin glass subject to a constraint of zero total magnetization. I study the statistical properties of the low-energy solutions generated by heuristic algorithms; such algorithms are generally designed to solve combinatorial optimization problems that are NP-hard. Several heuristics were implemented for the graph bipartitioning problem, and scaling laws were obtained: in particular, the mean and the variance of the cost obey a law linear in N. Consequently, the cost obtained by heuristics is a self-averaging quantity. I suggest that this property is general, holding also for random solutions, quasi-optimal solutions and optimal solutions. I further propose a procedure for comparing heuristic algorithms that accounts for both the quality of the solution and the computing time used. In the third part of the thesis I study in detail the zero-temperature properties of spin glasses on dilute random graphs with fixed connectivity. Spin glasses on such graphs can be regarded as an approximation to real spin glasses that is more realistic than the Sherrington-Kirkpatrick model. I designed a new algorithm for finding ground states, and I numerically test a conjecture of Banavar, Sherrington and Sourlas that gives the ground-state energy density in the large-size limit as a function of the connectivity. The distribution of the order parameter turns out to be non-trivial, and the data strongly indicate the presence of ultrametricity for all values of the connectivity. These results confirm that the characteristic properties of spin glasses, derived at the mean-field level in the Sherrington-Kirkpatrick model, are also present in more realistic models such as spin glasses on dilute random graphs with fixed connectivity.
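As a concrete illustration of the kind of heuristic whose low-energy solutions are studied above, here is a minimal swap-based local search for balanced graph bipartitioning; the thesis's actual heuristics are not detailed in the abstract. The zero-total-magnetization constraint corresponds to keeping the two sides equal in size, which pair swaps preserve.

```python
import random

def cut_cost(edges, side):
    return sum(1 for u, v in edges if side[u] != side[v])

def bipartition(n_nodes, edges, sweeps=100):
    nodes = list(range(n_nodes))
    random.shuffle(nodes)
    side = {u: (i < n_nodes // 2) for i, u in enumerate(nodes)}  # balanced start
    best = cut_cost(edges, side)
    for _ in range(sweeps):
        u = random.choice([x for x in side if side[x]])
        v = random.choice([x for x in side if not side[x]])
        side[u], side[v] = side[v], side[u]       # swap keeps the balance
        cost = cut_cost(edges, side)
        if cost <= best:
            best = cost
        else:
            side[u], side[v] = side[v], side[u]   # revert a worsening swap
    return side, best
```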
66

Scheduling Problems With Discrete And Batch Processor Machines In Automobile Gear Manufacturing

Gokhale, Ravindra 08 1900 (has links)
The economy of a developing nation like India depends on various sectors such as agriculture, commerce and industry, finance, and communication and information technology. The manufacturing industries play a key role in contributing to the economy of a nation; they mainly consist of industries such as steel casting, automobiles and other heavy manufacturing. This research study relates to the automobile industry, and particularly to gear manufacturers. The automobile industry is important from the manufacturing point of view because it has deep forward and backward linkages with several key segments of the economy and a strong multiplier effect; hence, it is capable of being a driver of economic growth. The recent trend among automobile manufacturing organizations is to outsource individual components to sub-contractors and conduct the sub-assemblies and assemblies in-house. Some components, such as gears, are critical in terms of quality, so automobile manufacturing organizations normally outsource these components only partially and carry out the important finishing operations in-house. Due to this practice, many micro to medium scale gear manufacturers have emerged as sub-contractors to different automobile manufacturing organizations. Competition among gear manufacturers is intense, and to survive it a gear manufacturer must focus on three major aspects: cost, quality and delivery. The focus of this study is on the delivery aspect. Specifically, this thesis attempts to address the scheduling problems in automobile gear manufacturing by proposing efficient solution methodologies to aid gear manufacturers in delivering orders, in the form of semi-finished gears, to their customers (i.e. the automobile manufacturing organizations). The automobile gear manufacturing process can be broadly divided into three distinct stages, starting from the machining of the gear blanks: pre heat treatment (pre-HT), heat treatment (HT) and post heat treatment (post-HT). Of these three stages, the gear manufacturers carry out the first two, namely pre-HT and HT, and deliver the semi-finished gears to the automobile manufacturing organizations. As most of the operations are carried out by the gear manufacturers, the real research problem lies in identifying the bottleneck operations in both the pre-HT and HT stages. By addressing the bottleneck operations, one can expect a competitive advantage among gear manufacturers and, in turn, among automobile manufacturing organizations. Since every gear manufacturer is involved in both the pre-HT stage and the HT stage of gear manufacturing, they will always try to achieve both throughput (from their own company's perspective) and due date compliance (from the customer's, i.e. the automobile manufacturing organizations', perspective). In order to meet these two objectives, two research problems are identified in this thesis based on the bottleneck operations: one at the pre-HT stage and the other at the HT stage. Research Problem 1, identified at the pre-HT stage, involves a variety of machining operations. In all pre-HT operations, a single gear is processed on a machine at a time; the machines used in these operations are essentially discrete processors, as they are known in the scheduling literature.
Among the different operations carried out in the pre-HT stage, the gear shaving operation is the final operation and takes a relatively long processing time compared to the other operations in this stage. Hence this operation becomes the bottleneck operation, and Research Problem 1 relates to it. Normally, more than one machine is available for carrying out the gear shaving operations. The processing time of a job is independent of the machine used (identical parallel machines). However, since automobile gears vary widely in size, not all of the available machines may be able to process a given job (machine eligibility restrictions). Jobs are made available for processing as and when job orders are received from the automobile manufacturing organizations, so all jobs may not be available for processing at the same time (unequal release times, or dynamic job arrivals). After a job is completed on a machine, a setup is incurred before processing the next job. The setup operations involve cleaning the machine, changing cutting tools and tooling, setting the machining parameters and fine-tuning them. The amount of time required for the setup depends on both the job that has just been completed and the job that is to be processed next (sequence-dependent setup times). Different jobs have different priorities depending on the nature of the job order, the monetary value of the job and the urgency for the next stage (job weights). Since the pre-HT stage is an upstream stage in gear manufacturing, feeding the heat treatment (HT) stage, its aim is to supply the downstream stage as quickly as possible. Hence, a completion-time-based scheduling objective is considered. Since the release times of jobs are unequal, the flowtime of a job is of interest in determining throughput; and since the jobs have different weights, the weighted flowtime is of significance. Therefore, the scheduling objective for Research Problem 1 is taken as minimizing the total weighted flowtime of the jobs in a scheduling window. A vast amount of literature is available on the scheduling of parallel machines. However, to the best of our knowledge, no study has simultaneously addressed the job characteristics of unequal release times, sequence-dependent setup times and machine eligibility restrictions on identical parallel machines to minimize the total weighted flowtime. Research Problem 2 was identified at the HT stage of gear manufacturing. The aim of the HT stage, in metallurgical terms, is to achieve case hardening of the gears. The machines used in the HT stage are essentially batch processing machines (BPMs), or simply batch processors (BPs). A BP, unlike a discrete processor, can process several different jobs at a time; the constituent jobs that are processed together form a batch. The operations of this stage are hardening and soaking, quenching, tempering and shot blasting. The hardening and soaking operation typically takes a long processing time (6-18 hours) compared to the other operations (15 to 90 minutes). Also, as it is the first operation of the HT stage, once the hardening and soaking operation is completed the other operations can be streamlined. Hence the scheduling of this bottleneck operation is of managerial importance. Jobs arrive at the hardening and soaking operation as and when they are completed at the pre-HT stage (unequal release times).
Different jobs may have different processing requirements in terms of time and temperature settings. Therefore, although a BP can process multiple jobs at a time, jobs with different processing requirements cannot be processed together. Jobs that can be processed together are grouped into job families; since jobs of different families cannot be processed together, we get the situation of incompatible job families. The BP has a fixed and finite capacity (expressed in terms of mass). Jobs have different masses, which represent the job sizes (non-identical job sizes). While constructing a batch, a job can be split in case of a capacity violation of the BP (job splitting). The same job priorities (job weights) as in the pre-HT stage continue in this stage as well. As Research Problem 2 deals with the downstream stage of gear manufacturing, the scheduling objective is the efficient delivery of jobs to the automobile manufacturing organizations. Non-conformance to the due date results in tardiness of a job, and since jobs have different weights, the weighted tardiness of a job is of significance. Therefore, the scheduling objective for Research Problem 2 is taken as minimizing the total weighted tardiness (TWT) of the jobs in a scheduling window. Compared to discrete processors, the scheduling of batch processors is a relatively new field (about two decades old). However, a review of the literature reveals that no study in any problem domain has simultaneously addressed the characteristics of unequal release times, incompatible job families, non-identical job sizes and job splitting for scheduling batch processors to minimize the total weighted tardiness. For Research Problem 1, an integer linear programming (ILP) model is developed. A suitable numerical example is constructed and the workability of the proposed ILP model is validated on a small-sized problem. The computational intractability of the proposed ILP model is verified empirically. Because of this intractability, real-life large-size problems cannot be solved using the proposed ILP model, which motivated us to propose simple heuristic algorithms. Accordingly, ten heuristic algorithms are proposed; five of them allow the use of unforced idleness and the remaining five do not. In scheduling, unforced idleness is a situation in which a machine is kept idle even though jobs are available for processing. In order to understand the performance of the proposed heuristic algorithms, various factors that can affect their performance are identified, based on the literature as well as on knowledge gained from industry, and an experimental design is developed based on the identified factors with different levels. A series of computational experiments has been conducted for absolute evaluation of the heuristic algorithms: (a) in comparison with the optimal solution for small-size problem instances (96 problem instances) and (b) in comparison with the estimated optimal solution for large-size problem instances (2880 problem instances). The evaluation is based on computational time and the quality of the solution. With respect to computational time, it is observed that the time required to obtain results from the heuristic algorithms is negligible.
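For reference, the two scheduling objectives described above can be written in standard scheduling notation (a hedged restatement, assuming flowtime is measured from each job's release time), with $C_j$, $r_j$, $d_j$ and $w_j$ denoting the completion time, release time, due date and weight of job $j$:

$$\mathrm{TWF} = \sum_{j=1}^{n} w_j\,(C_j - r_j), \qquad \mathrm{TWT} = \sum_{j=1}^{n} w_j \,\max\bigl(0,\; C_j - d_j\bigr).$$

Research Problem 1 minimizes TWF on the identical parallel machines; Research Problem 2 minimizes TWT on the batch processor.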
For evaluating the quality of the solution, two standard performance measures have been used: the Average Relative Percentage Deviation (ARPD), which indicates the average performance of the proposed heuristic algorithms, and the Maximum Relative Percentage Deviation (MRPD), which indicates their worst-case performance. From the results of the experimental evaluation, it is observed that the heuristic algorithms incorporating information on machine eligibility restrictions along with the other job characteristics perform relatively better than the other proposed heuristic algorithms. It is also observed that system congestion plays an important role in determining the performance of the heuristic algorithms; hence, a further study of the effect of system congestion on the different heuristic algorithms is carried out. The system congestion effect is controlled using two problem factors: the number of jobs and the release times of jobs. These computational experiments are based on a total of 48 problem instances. Based on the results, it is inferred that for congested systems the proposed heuristic algorithms allowing unforced idleness perform better than the corresponding heuristic algorithms not allowing it. For Research Problem 2, two situations are examined. The first pertains to micro and small scale gear manufacturers, where the gear manufacturer has a single batch processor (BP) for the hardening and soaking operation. The other pertains to small and medium scale gear manufacturers, where there is more than one batch processor, possibly with different capacities (multiple non-identical batch processors). For Research Problem 2 with a single BP, an ILP model is developed. A suitable numerical example is constructed and the workability of the proposed ILP model is validated on a small-sized problem. The computational intractability of the proposed ILP model is verified empirically, and because of this intractability simple heuristic algorithms are again proposed. Based on a pilot experimental analysis, and on the fact that allowing unforced idleness gave superior results for Research Problem 1, it was decided to incorporate unforced idleness when developing heuristic algorithms for Research Problem 2 with a single BP. Accordingly, three groups of heuristic algorithms, all allowing unforced idleness, are proposed for Research Problem 2 with a single BP: (i) seven variants based on the computation of the weighted tardiness, (ii) three variants based on the computation of composite job scores and (iii) three variants based on the computation of composite family scores followed by composite job scores. For evaluating the performance of the thirteen proposed heuristic algorithms, various factors that can affect their workability are identified, based on the literature as well as on knowledge gained from industry, and an experimental design is developed based on these factors with different levels. A series of computational experiments has been conducted for absolute evaluation of the heuristic algorithms: (a) in comparison with the optimal solution for small-size problem instances (192 problem instances) and (b) in comparison with the estimated optimal solution for large-size problem instances (7680 problem instances).
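One common way to define these two measures over $K$ problem instances (a hedged restatement; the thesis's exact normalization may differ) uses the heuristic objective value $Z_h^{(k)}$ and the optimal or estimated optimal value $Z_*^{(k)}$ on instance $k$:

$$\mathrm{ARPD} = \frac{100}{K}\sum_{k=1}^{K} \frac{Z_h^{(k)} - Z_*^{(k)}}{Z_*^{(k)}}, \qquad \mathrm{MRPD} = 100 \max_{1 \le k \le K} \frac{Z_h^{(k)} - Z_*^{(k)}}{Z_*^{(k)}}.$$

The division by $Z_*^{(k)}$ also explains why these measures break down for Research Problem 2, as noted below: when the reference TWT value approaches zero, the denominator vanishes.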
The evaluation is based on computational time and the quality of the solution. With respect to computational time, it is observed that the time required to obtain results from the heuristic algorithms is negligible. For evaluating the quality of the solution, the two standard performance measures used in Research Problem 1, ARPD and MRPD, could not be used here due to the nature of the scheduling objective of minimizing the TWT (as the TWT tends to zero). Therefore, a suitable performance measure was identified in the literature and modified appropriately for Research Problem 2 with a single BP; this measure gives stable results even when the TWT value approaches zero. From the results of the experimental evaluation, it is observed that the variants of the heuristic algorithms that accommodate non-consecutive jobs during batch construction perform better than the other variants. Following the research study on the single-BP situation, the multiple non-identical BPs situation of Research Problem 2 is studied. The ILP model proposed for Research Problem 2 with a single BP is suitably extended to account for multiple non-identical BPs, and the workability of the model is demonstrated. Additionally, the heuristic algorithms proposed for Research Problem 2 with a single BP are suitably modified for the multiple non-identical BP situation. After developing a suitable experimental design for Research Problem 2 with multiple non-identical BPs, a series of computational experiments has been conducted for absolute evaluation of the heuristic algorithms in comparison with the estimated optimal solution for large-size problem instances, based on 7680 problem instances. The same performance measure as in the single-BP case is used. The observations made from the experimental evaluation for Research Problem 2 with multiple non-identical BPs are similar to, and therefore consistent with, those made for the single-BP case. Finally, a sensitivity analysis is carried out for Research Problem 2 to determine the effect of the capacity of batch processor sets (BP sets), in terms of the number of batch processors and the capacity of each batch processor. Considering different combinations of these two factors, seven different BP sets are examined in the sensitivity analysis, and the effect on the scheduling objective (total weighted tardiness) for different problem configurations is studied through computational experiments. It is observed that higher net capacities of the BP sets give a proportionately better advantage than lower net capacities; that is, the percentage improvement observed in the scheduling objective is higher than the percentage increase in the net capacity of the BP set. Another observation is that, for a given net capacity, it is better to have multiple BPs with smaller capacities than a single BP with high capacity.
Although the problems pertaining to the gear manufacturing process have been addressed in this study while simultaneously considering many real-life situations, the study has some limitations, such as addressing identical parallel machines instead of the more general case of unrelated parallel machines (for Research Problem 1) and considering only deterministic situations for both research problems. There are many immediate directions for future research on the problems studied in this thesis, such as overcoming the limitations mentioned above, proposing good lower bounds and additional heuristic algorithms, and integrating Research Problem 1 and Research Problem 2 and proposing solution methodologies for the integrated problem.
67

Times assíncronos inicializadores para o planejamento da expansão da transmissão de energia elétrica baseados no modelo híbrido linear [Asynchronous-team initializers for electric power transmission expansion planning based on the linear hybrid model]

Sanchez, Fernando Rodrigo Lopes [UNESP] 06 June 2008 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / In this study, several constructive heuristic agents based on the linear hybrid model were implemented as part of an asynchronous team whose objective is to generate good-quality initial configurations for the metaheuristics that solve the transmission expansion planning problem of electric power systems. The theory of asynchronous teams was applied to combine the individual strengths of the heuristic methods: starting from a base configuration (without additions) and using a cyclic data flow, the constructive agents add circuits to this configuration, systematically and randomly, until it meets the load demands requested by the electrical system over a future horizon. These configurations were then used by a genetic algorithm in order to validate their quality. The algorithms were implemented in Fortran, using the LAM-MPI message-passing routines, and simulated on small, medium and large test systems in a distributed processing environment. The results show that asynchronous teams of several heuristic methods are more effective than a single heuristic algorithm.
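A minimal sketch of one constructive agent in such an asynchronous team; the feasibility test and circuit scoring are assumed callables standing in for the linear hybrid model evaluations used in the thesis.

```python
import random

def construct_configuration(candidate_circuits, is_feasible, score, greedy_bias=0.7):
    """Return a list of circuit additions, mixing systematic (greedy) and
    random picks until the configuration meets the load demands
    (is_feasible returns True)."""
    added = []
    remaining = list(candidate_circuits)
    while remaining and not is_feasible(added):
        if random.random() < greedy_bias:
            choice = max(remaining, key=score)   # systematic: best-scored circuit
        else:
            choice = random.choice(remaining)    # random diversification
        added.append(choice)
        remaining.remove(choice)
    return added
```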
68

Evaluating reasoning heuristics for a hybrid theorem proving platform

Ackermann, Jacobus Gideon 06 1900 (has links)
Text in English with abstracts in English, Afrikaans and isiZulu / The formalisation of first-order logic and axiomatic set theory in the first half of the 20th century, along with the advent of the digital computer, paved the way for the development of automated theorem proving. In the 1950s, the automation of proof developed from proving elementary geometric problems and finding direct proofs for problems in Principia Mathematica by means of simple, human-oriented rules of inference. A major advance in the field of automated theorem proving occurred in 1965, with the formulation of the resolution inference mechanism. Today, powerful Satisfiability Modulo Theories (SMT) provers combine SAT solvers with sophisticated knowledge from various problem domains to prove increasingly complex theorems. The combinatorial explosion of the search space is viewed as one of the major challenges to progress in the field of automated theorem proving. Pioneers from the 1950s and 1960s had already identified the need for heuristics to guide the proof search effort. Despite theoretical advances in automated reasoning and technological advances in computing, the size of the search space remains problematic when increasingly complex proofs are attempted. Today, heuristics are still useful and necessary to discharge complex proof obligations. In 2000, a number of heuristics were developed to aid the resolution-based prover OTTER in finding proofs for set-theoretic problems. The applicability of these heuristics to next-generation theorem provers was evaluated in 2009: the provers Vampire and Gandalf required 90% and 80% of the applicable OTTER heuristics, respectively. This dissertation investigates the applicability of the OTTER heuristics to theorem proving in the hybrid theorem proving environment Rodin, a system modelling tool suite for the Event-B formal method. We show that only 2 of the 10 applicable OTTER heuristics were useful when discharging proof obligations in Rodin. Even though we argue that the OTTER heuristics were largely ineffective when applied to Rodin proofs, heuristics were still needed when proof obligations could not be discharged automatically. We therefore propose a number of our own heuristics targeted at theorem proving in the Rodin tool suite. / School of Computing / M. Sc. (Computer Science)
69

Batch Processor Scheduling - A Class Of Problems In Steel Casting Foundries

Ramasubramaniam, M 06 1900 (has links)
Modern manufacturing systems need new types of scheduling methods. While traditional scheduling methods are primarily concerned with the sequencing of jobs, modern manufacturing environments provide the additional possibility of processing jobs in batches, which adds to the complexity of scheduling. There are two types of batching: (i) serial batching, in which jobs may be batched if they share the same setup on a machine and one job is processed at a time (a machine that processes jobs in this manner is called a discrete processor), and (ii) parallel batching, in which several jobs can be processed simultaneously on a machine (a machine that processes jobs in this manner is called a batch processor, or batch processing machine). Parallel batching environments have attracted wide attention from researchers working in the field of scheduling. Taking inspiration in particular from studies of scheduling batch processors in semiconductor manufacturing [Mathirajan and Sivakumar (2006b) and Venkataramana (2006)] and in steel casting industries [Krishnaswamy et al. (1998), Shekar (1998) and Mathirajan (2002)] in the Management Studies Department of the Indian Institute of Science, this thesis addresses a special batch-processor scheduling problem observed in steel casting manufacturing. A fundamental feature of the steel casting industry is its extreme flexibility, enabling castings to be produced with almost unlimited freedom in design over an extremely wide range of sizes, quantities and materials, suited to practically every environment and application. Furthermore, the steel casting industry is capital intensive and highly competitive. From the viewpoint of throughput and the utilization of the important and costly resources in foundry manufacturing, the process-controlled furnace operations for melting and pouring, as well as the heat-treatment furnace operations, are critical for meeting overall production schedules. The two furnace operations are batch processes that have distinctive constraints on job mixes in addition to the usual capacity and technical constraints associated with any industrial process. The benefits of effective scheduling of these batch processes include higher machine utilization, lower work-in-process (WIP) inventory, shorter cycle times and greater customer satisfaction [Pinedo (1995)]. Very few studies address production planning and scheduling models for a steel foundry that consider the melting furnace of the pre-casting stage as the core foundry operation [Voorhis et al. (2001), Krishnaswamy et al. (1998) and Shekar (1998)]. Even though the melting and pouring operations may be considered the core of foundry operations, and their scheduling is of central importance, the scheduling of heat-treatment furnaces is also of considerable importance. This is because the processing time required at the heat-treatment furnace is often long compared to the other operations in the steel-casting foundry, and it therefore considerably affects the schedule, overall flow time and WIP inventory. Further, the heat-treatment operation is critical because it determines the final properties that enable components to perform under demanding service conditions such as large mechanical loads, high temperature and anti-corrosive processing.
It is also important to note that the heat-treatment operation is the only predominantly long process in the entire steel casting manufacturing process, taking up a large part of the total processing time (up to a few days, as against other processes that typically take only a few hours). Because of this, the heat-treatment operation is a major bottleneck in the entire steel casting process. The jobs in the WIP inventory in front of the heat-treatment furnace vary widely in size (a few grams to a ton) and in dimensions (from 10 mm to 2000 mm). Furthermore, castings are primarily classified into a number of job families based on alloy type, such as low alloy and high alloy castings. These job families are incompatible, as the temperature requirements for low alloy and high alloy castings differ for the same type of heat-treatment operation. The job families are further classified into various sub-families based on the type of heat-treatment operations they undergo; these sub-families are also incompatible, as each requires a different combination of heat-treatment operations. The widely varying job sizes, job dimensions and multiple incompatible job families introduce a high degree of complexity into scheduling the heat-treatment furnace. Scheduling a heat-treatment furnace with multiple incompatible job families can have a profound effect on the overall production rate, as the processing time of the heat-treatment operation is very much longer than that of the other operations. Considering the complexity of the process and the time consumed by the heat-treatment operation, efficient scheduling of this operation is imperative in order to maximize throughput and enhance the productivity of the entire steel casting manufacturing process. This is of importance to the firm. The concerns of management in increasing the throughput of the bottleneck machine, and thereby increasing productivity, motivated us to adopt the scheduling objective of makespan. From recent observation of heat-treatment operations in a couple of steel casting industries, and from the research studies reported in the literature, we noticed that the real-life problem of dynamic scheduling of a heat-treatment furnace with multiple incompatible job families, non-identical job sizes, non-identical job dimensions, and non-agreeable release times and due dates, with the goals of maximizing throughput and utilization and minimizing work-in-process inventory, has not been addressed at all. Only a very few studies [Mathirajan et al. (2001, 2002, 2004a, 2007)] have addressed the problem of scheduling a heat-treatment furnace with incompatible job families and non-identical job sizes to maximize the utilization of the furnace. Given the difference between the real-life situation of dynamic scheduling of the heat-treatment furnace in steel casting manufacturing and the research reported on this problem, we identified three new classes of batch-processor problems, applicable to real-life situations depending on the type of heat-treatment operation(s) being carried out and the type of steel casting industry (small, medium or large scale), and this thesis addresses these new classes of research problems on scheduling a batch processor. The first part of the thesis addresses our new research problem (called Research Problem 1) of minimizing makespan (Cmax) on a batch processor (BP) with a single job family (SJF), non-identical job sizes (NIJS) and non-identical job dimensions (NIJD).
This problem is of interest to small scale steel casting industries performing only one type of heat-treatment operation, such as surface hardening. Generally, only a few steel casting industries offer this type of special heat-treatment operation, so customers are willing to accept delays in the completion of their orders; due-date issues are therefore not important for these industries. We formulate the problem as a Mixed Integer Linear Programming (MILP) model and validate the proposed MILP model through a numerical example. In order to understand the computational intractability issue, we carry out a small computational experiment; its results indicate that the computational time required to solve the MILP model, as a function of problem size, is non-deterministic and non-polynomial. Due to the computational intractability of the proposed MILP model, we propose five variants of a greedy heuristic algorithm and a genetic algorithm for addressing Research Problem 1. We carry out computational experiments to assess the performance of the heuristic algorithms from two perspectives: (i) comparison with the optimal solution on small-scale instances and (ii) comparison with a lower bound for large-scale instances. We choose five important problem parameters for the computational experiment and propose a suitable experimental design to generate pseudo problem instances. As there is no lower bound (LB) procedure for Research Problem 1, in this thesis we develop an LB procedure that provides an LB on makespan by considering both the NIJS and NIJD characteristics together. Before using the proposed LB procedure for evaluating the heuristic algorithms, we conduct a computational experiment to assess the quality of the LB on makespan in comparison with the optimal makespan on a number of small-scale instances. The results of this experiment indicate that the proposed LB procedure is efficient and can be used to obtain an LB on makespan for any large-scale problem. In the first perspective of the evaluation, the proposed heuristic algorithms are run on small-scale problem instances and the makespan values are recorded; we solve the MILP model to obtain optimal solutions for these instances. For comparing the proposed heuristic algorithms we use the performance measures: (a) the number of times the heuristic solution equals the optimal solution and (b) the average loss with respect to the optimal solution, in percentage. In the second perspective, the proposed heuristic algorithms are run on large-scale problem instances and the makespan values are recorded, and the LB procedure is run on the same instances to obtain LBs on makespan. For comparing the performance of the heuristic algorithms with respect to the LB on makespan, we use the performance measures: (a) the number of times the heuristic solution equals the LB on makespan, (b) the average loss with respect to the LB on makespan in percentage, (c) the average relative percentage deviation and (d) the maximum relative percentage deviation. We then extend Research Problem 1 by including additional job characteristics, namely job arrival times to the WIP inventory area of the heat-treatment furnace and due dates, together with an additional constraint on non-agreeable release times and due dates (NARD).
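The abstract does not state the LB procedure itself, so as a hedged illustration of the kind of bound available for a capacitated batch processor, here is a simple capacity-area argument (the thesis's procedure, which also exploits the NIJD characteristic, is presumably tighter). With furnace capacity $S$, job sizes $s_j$ and job processing times $p_j$, where a batch's processing time is the maximum $p_j$ over its jobs, every schedule satisfies

$$C_{\max} \;\ge\; \max\!\left( \max_{j} p_j,\;\; \frac{1}{S}\sum_{j} s_j\, p_j \right),$$

because each job occupies $s_j$ units of capacity for at least $p_j$ time units, while the processor offers at most $S \cdot C_{\max}$ capacity-time in total.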
Due-date considerations and the NARD constraint (called Research Problem 2) are imperative for small scale steel casting foundries performing a traditional, but again only one, type of heat-treatment operation, such as annealing, where due-date compliance matters because many steel casting industries offer this type of operation. The mathematical model, LB procedure, greedy heuristic algorithm and genetic algorithm proposed for Research Problem 1, including the computational experiments, are appropriately modified and/or extended to address Research Problem 2. Finally, we extend Research Problem 2 by including an additional real-life dimension: multiple incompatible job families (MIJF). This new research problem (called Research Problem 3) is more relevant to medium and large scale steel casting foundries performing more than one type of heat-treatment operation, such as homogenizing and tempering, or normalizing and tempering. The solution methodologies, the LB procedure and the computational experiments proposed for Research Problem 2 are further modified and enriched to address Research Problem 3. From the detailed computational experiments conducted for each of the research problems defined in this study, we observe that: (a) the problem parameters considered in this study influence the performance of the heuristic algorithms; (b) the proposed LB procedure is efficient; (c) the proposed genetic algorithm performs best among the proposed heuristic algorithms (but its computational time increases as the problem size grows); and (d) if the decision maker wants the computationally most efficient algorithm among those proposed, the greedy heuristic variants SWB, SWB(NARD) and SWB(NARD&MIJF) are relatively the best algorithms for Research Problems 1, 2 and 3, respectively.
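To make the flavour of such greedy heuristics concrete, here is a hedged sketch of a generic longest-processing-time-first batch construction for a single BP with non-identical job sizes; the thesis's SWB variants are not specified in the abstract and will differ in detail.

```python
def build_batches(jobs, capacity):
    """jobs: list of (job_id, size, proc_time) tuples. A batch's processing
    time is the max proc_time over its jobs; batches run back-to-back, so
    the makespan is the sum of batch processing times."""
    batches = []
    for job in sorted(jobs, key=lambda j: j[2], reverse=True):  # longest first
        for batch in batches:
            if sum(j[1] for j in batch) + job[1] <= capacity:   # first fit
                batch.append(job)
                break
        else:
            batches.append([job])                               # open a new batch
    makespan = sum(max(j[2] for j in b) for b in batches)
    return batches, makespan
```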
70

Necessary and Sufficient Conditions on State Transformations That Preserve the Causal Structure of LTI Dynamical Networks

Leung, Chi Ho 01 May 2019 (has links)
Linear time-invariant (LTI) dynamic networks are described by their dynamical structure function, and generally, they have many possible state space realizations. This work characterizes the necessary and sufficient conditions on a state transformation that preserves the dynamical structure function, thereby generating the entire set of realizations of a given order for a specific dynamic network.
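As a hedged illustration in standard LTI notation (the partitioned-state conventions that define the dynamical structure function are assumed rather than restated here), a state-space realization and a state transformation act as

$$\dot{x} = Ax + Bu, \quad y = Cx; \qquad z = Tx \;\Longrightarrow\; (A, B, C) \mapsto \bigl(TAT^{-1},\, TB,\, CT^{-1}\bigr).$$

Every invertible $T$ preserves the transfer function $G(s) = C(sI - A)^{-1}B$, but only a subset of transformations preserves the dynamical structure function $(Q(s), P(s))$; the necessary and sufficient conditions characterized in this work pick out exactly that subset.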
