181

A PRACTICAL SCHEDULING APPROACH FOR THE STEEL-MAKING PROCESS

Ryota, Tamura January 2023 (has links)
This thesis presents a review of optimal production scheduling in the steel industry. Steel production encompasses various processes, such as the blast furnace and the hot-rolled steel sheet mill; however, this thesis focuses specifically on the steel-making process because of its intermediate position and its substantial influence on profits and costs. The thesis presents a MILP scheduling method for tackling practical steel scheduling problems. Scheduling the steel-making process poses a significant challenge because of complicated constraints and machine rules, making it time-consuming to obtain an optimal solution. To address this, a strategy is proposed that breaks the large, complex problem down into smaller sub-problems. The foundational concept behind this approach was introduced by Harjunkoski and Grossmann (2001); this thesis proposes further improvements by introducing a more flexible model for process and grade selection, tailored to practical steel scheduling problems. In addition, the thesis presents optimal steel-making process scheduling under processing-time uncertainty. Uncertain processing times can greatly affect schedule accuracy, and a stochastic scheduling model is presented to tackle this problem. Moreover, the thesis shows how practical schedules for the steel-making process can be improved by making use of real processing-time data. To validate the effectiveness of the proposed methods, a small example is provided for each step of the scheduling process. The results demonstrate that the approach yields reasonable scheduling solutions. / Thesis / Master of Applied Science (MASc) / In this work, we propose a decomposition strategy for solving practical, complex scheduling problems in the steel-making process within a sufficiently short computation time. While various processes are involved, such as the cold-rolled steel sheet mill and the steel pipe mill, we focus on the steel-making process. Optimal scheduling of this process is crucial for increasing profits, reducing waste, and minimizing costs. However, scheduling optimization for the steel-making part presents significant challenges due to complex constraints and specific process rules. To address these challenges, we suggest a decomposition strategy in Chapter 3 of this thesis. The strategy primarily involves breaking the large, complex scheduling problem down into smaller subproblems. While a basic solution strategy is provided in the work of Harjunkoski and Grossmann (2001), our research introduces several improvements tailored to practical scheduling problems. For example, the original paper suggests grouping products together only if they have the same grade; in practical scheduling, however, it is often necessary to mix products of different grades within the same group to maximize productivity and operate efficiently. Additionally, the original paper considers only a single machine for each downstream process, whereas in reality multiple machines are often involved in each downstream process. Our research therefore addresses this by incorporating two refining machines and two continuous casting processes into the scheduling formulations for the downstream process. As a result, the suggestions presented in this work help handle more flexible patterns of scheduling problems.
In Chapter 3, the formulation is based on the aforementioned idea, and its validity is confirmed through a case study. While the obtained schedules may not be optimal, they are reasonable at each step when judged against the perspective of an experienced person, and the computational time required for each step is less than one minute. As a result, the proposed scheduling strategy can effectively solve practical scheduling problems within a limited time frame. The strategy is specifically designed to incorporate mixed-grade grouping, as well as multiple, flexible structures for the downstream processes. In addition, in the steel industry, fluctuations in processing time are inevitable because of the high-temperature, high-speed conditions under which products are made. To address this, Chapter 4 proposes a strategy that incorporates processing-time uncertainty into the decomposition strategy, based on a two-stage stochastic scheduling formulation. In practical steel plants, many preparations precede production, such as setting specific operating conditions and maintaining the facilities, and these preparations depend on the scheduled product order. Therefore, in this formulation, the variables defining the product order are treated as the first-stage decision variables, to reflect the practical scheduling problem. The formulation is based on this concept, and its validity is confirmed through application to a practical case study. The results are reasonable when compared against the judgment of an experienced person, and the computational time required for this strategy is also less than one minute. Therefore, the strategies presented in this thesis offer an efficient approach to practical steel-making scheduling problems.
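As a rough illustration of the grouping step in such a decomposition, the short Python sketch below assigns heats to casting groups so that incompatible grade pairs never share a group while the number of groups used is minimized. It is not the thesis's formulation: PuLP with the bundled CBC solver, the heat and grade data, the group capacity, and the compatibility rule are all invented for the example.

# Minimal mixed-grade grouping MILP (illustrative sketch, not the thesis's model).
# Assumptions: PuLP with the bundled CBC solver; toy heats, grades, group capacity,
# and a pairwise grade-compatibility rule invented for this example.
import pulp

heats = {"H1": "A", "H2": "A", "H3": "B", "H4": "C", "H5": "B"}   # heat -> grade
compatible = {("A", "A"), ("B", "B"), ("C", "C"), ("A", "B"), ("B", "A")}
groups = ["G1", "G2", "G3"]
cap = 3  # maximum heats per casting group

prob = pulp.LpProblem("mixed_grade_grouping", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(h, g) for h in heats for g in groups], cat="Binary")
y = pulp.LpVariable.dicts("y", groups, cat="Binary")   # is the group used at all?

prob += pulp.lpSum(y[g] for g in groups)               # use as few groups as possible

for h in heats:                                        # every heat goes to exactly one group
    prob += pulp.lpSum(x[(h, g)] for g in groups) == 1
for g in groups:
    prob += pulp.lpSum(x[(h, g)] for h in heats) <= cap * y[g]
    for h1 in heats:                                   # forbid incompatible grade pairs
        for h2 in heats:
            if h1 < h2 and (heats[h1], heats[h2]) not in compatible:
                prob += x[(h1, g)] + x[(h2, g)] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for g in groups:
    members = [h for h in heats if x[(h, g)].value() > 0.5]
    if members:
        print(g, members)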
182

Use of Low Order Models for Near-Optimal Control of High Order Systems

de Bruin, Huibregt 25 June 2015 (has links)
Ten different reduced models of a particular test system are selected. Two cost functions are selected, and the test-system minimum cost is found for each. The model optimal controls are found for each cost function and are used to provide sub-optimal control of the system using two different methods. The system cost is calculated for each case and compared to the minimum attainable. The reduction methods are compared with a view to application for the near-optimal control of a linear system. / Thesis / Master of Engineering (MEngr)
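A minimal sketch of the general workflow (reduce a high-order model, design an optimal controller on the reduced model, then evaluate the resulting sub-optimal cost on the full system) is given below. The diffusion-chain test system, modal truncation as the reduction method, and the quadratic cost are assumptions made for the example, not the models or cost functions used in the thesis.

# Near-optimal control via a reduced model (illustrative sketch).
# Assumptions: a stable diffusion-chain "high order" system, modal truncation as
# the reduction method, and an LQR cost; none of these are taken from the thesis.
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

n, r = 20, 3                                   # full and reduced state dimensions
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # diffusion chain (Hurwitz)
B = np.zeros((n, 1)); B[0, 0] = 1.0
Q, R = np.eye(n), np.array([[1.0]])
x0 = np.ones(n)

# Modal reduction: keep the r slowest (least damped) modes of the symmetric A.
eigvals, V = np.linalg.eigh(A)
keep = np.argsort(eigvals)[::-1][:r]           # eigenvalues closest to zero
T = V[:, keep]                                 # n x r, orthonormal columns
Ar, Br, Qr = T.T @ A @ T, T.T @ B, T.T @ Q @ T

# LQR gain designed on the reduced model, applied to the full state via T.T.
Pr = solve_continuous_are(Ar, Br, Qr, R)
Kr = np.linalg.solve(R, Br.T @ Pr)             # reduced-order gain
K_full = Kr @ T.T                              # control law u = -K_full @ x

def closed_loop_cost(K):
    """Infinite-horizon cost x0' P x0 for u = -K x (requires a stable loop)."""
    Acl = A - B @ K
    assert np.max(np.linalg.eigvals(Acl).real) < 0, "closed loop unstable"
    Qbar = Q + K.T @ R @ K
    # SciPy convention: solve_continuous_lyapunov(a, q) solves a X + X a^H = q.
    P = solve_continuous_lyapunov(Acl.T, -Qbar)
    return float(x0 @ P @ x0)

# Full-order optimal cost for comparison.
P_opt = solve_continuous_are(A, B, Q, R)
K_opt = np.linalg.solve(R, B.T @ P_opt)
print("optimal cost      :", closed_loop_cost(K_opt))
print("reduced-model cost:", closed_loop_cost(K_full))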
183

Computational Study of the Optimization of a Catalytic Reactor for a Reversible Reaction with Catalyst Decay

Drouin, Jean-Guy 09 1900 (has links)
The optimal temperature policy with time is sought which maximizes the total amount of reaction in a fixed time in a tubular reactor with uniform temperature and decaying catalyst, for a single reversible reaction.

A numerical procedure together with theoretical developments is used to solve this problem for two kinetic models. The problem is treated in the format of Pontryagin's Maximum Principle.

Computer listings are given in the Appendix for the following cases: A) optimal policy for irreversible reactions; B) optimal policy for isothermal irreversible reactions; C) optimal policy for reversible reactions. / Thesis / Master of Engineering (MEngr)
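For a rough sense of how such a temperature-policy problem can be set up numerically, the sketch below parameterizes the temperature as piecewise constant and maximizes final conversion with a generic optimizer. This control-vector parameterization is only a stand-in for the Pontryagin-based procedure of the thesis, and the simplified reaction model, rate constants, and temperature bounds are invented for the example.

# Optimal temperature policy with catalyst decay (illustrative sketch).
# The thesis treats this with Pontryagin's maximum principle; this sketch instead
# parameterizes T(t) as piecewise constant and uses a generic optimizer.
# The kinetic constants, bounds, and decay law are invented for the example.
import numpy as np
from scipy.optimize import minimize

N, t_f = 10, 1.0                      # number of temperature intervals, batch time
dt = t_f / N

def rate_const(T, k0, E):             # Arrhenius-type rate constant (toy values)
    return k0 * np.exp(-E / T)

def final_conversion(T_policy):
    """Integrate dx/dt = a*(k1*(1-x) - k2*x), da/dt = -kd*a with piecewise-constant T."""
    x, a = 0.0, 1.0                   # conversion and catalyst activity
    for T in T_policy:
        k1 = rate_const(T, 500.0, 6.0)     # forward
        k2 = rate_const(T, 5.0e4, 12.0)    # reverse
        kd = rate_const(T, 80.0, 7.0)      # catalyst decay
        steps, h = 50, dt / 50
        for _ in range(steps):        # explicit Euler on the two ODEs
            dx = a * (k1 * (1.0 - x) - k2 * x)
            da = -kd * a
            x, a = x + h * dx, a + h * da
    return x

res = minimize(lambda T: -final_conversion(T),
               x0=np.full(N, 1.0),
               bounds=[(0.8, 1.2)] * N,       # dimensionless temperature window
               method="L-BFGS-B")
print("final conversion  :", -res.fun)
print("temperature policy:", np.round(res.x, 3))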
184

The Optimal Placement of Shutoff Rods in CANDU Nuclear Reactors (Part A)

Gordon, Charles W. 11 1900 (has links)
The optimal placement of shutdown systems in power reactors is investigated, in particular the placement of mechanical and liquid shutoff rods. Two CANDU reactor cores were used as a basis for evaluation. The optimal shutdown system was defined here to be one which, with the least number of rods, maximizes the reactivity depth of the system with the two most effective rods assumed to be absent. It was found that rows of rods placed parallel to the fuel channels were more effective, and four of these rows were required in a simple core. For real cores, where positions are limited, six or seven rows were needed to obtain a large system worth. (Time analysis was not done to evaluate insertion rate and delay effects on the power transient in the case of an accident.) / Thesis / Master of Engineering (MEngr)
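The selection criterion described above lends itself to a small combinatorial search. The sketch below does a brute-force search for the smallest rod selection whose total worth, with its two most effective rods removed, still meets a target reactivity depth. The per-rod worths, the target, and the additive worth model are invented numbers, not results of any reactor-physics calculation from the thesis.

# Shutoff-rod selection under the "two most effective rods absent" criterion
# (illustrative sketch, not the thesis's reactor model). The per-position worths
# below are invented numbers and the worth model is simply additive.
from itertools import combinations

worth = {"R1": 5.0, "R2": 4.5, "R3": 4.0, "R4": 3.5,   # hypothetical rod worths (mk)
         "R5": 3.0, "R6": 2.5, "R7": 2.0, "R8": 1.5}
target_depth = 10.0                                    # required depth with 2 rods absent

def depth_two_absent(rods):
    """Total worth of the selection with its two most effective rods removed."""
    if len(rods) <= 2:
        return 0.0
    vals = sorted(worth[r] for r in rods)
    return sum(vals[:-2])                              # drop the two largest worths

# Smallest selection meeting the criterion (exhaustive over selection sizes).
for size in range(3, len(worth) + 1):
    best = max(combinations(worth, size), key=depth_two_absent)
    if depth_two_absent(best) >= target_depth:
        print("rods:", sorted(best), " depth with 2 absent:", depth_two_absent(best))
        break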
185

The Application of Decision Theory and Dynamic Programming to Adaptive Control Systems

King Lee, Louis K 09 1900 (has links)
It is generally assumed that the implementation of adaptive control requires a precise identification of plant parameters. In the case of a system with varying parameters, the identification problem becomes very involved, as speed of identification and accuracy are contradictory requirements. In this thesis it has been shown that, using a feedback policy, the optimal controller is relatively insensitive to changes in plant parameters as long as these lie within some specified ranges. It is therefore concluded that, with such an arrangement, adaptive control can be implemented if one has only the knowledge of the ranges within which the parameters of the plant lie. Thus identification can be carried out more rapidly, as stringent accuracy is no longer necessary. / Thesis / Master of Engineering (ME)
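A toy numerical check of this insensitivity is sketched below: a feedback law is designed by dynamic programming (a backward Riccati recursion) for a nominal scalar plant, and its closed-loop cost is then evaluated as the true plant parameter sweeps a range. The plant model, cost weights, and parameter range are assumptions made for the example.

# Sensitivity of an optimal feedback policy to plant-parameter variation
# (illustrative sketch). A scalar plant x[k+1] = a*x[k] + b*u[k] is assumed;
# the gains are designed by dynamic programming at a nominal 'a', and the
# closed-loop cost is then evaluated for true 'a' values across a range.
import numpy as np

b, q, r, horizon = 1.0, 1.0, 1.0, 50
a_nominal = 1.2

def dp_gains(a):
    """Finite-horizon LQR gains via the backward Riccati recursion."""
    P, gains = q, []
    for _ in range(horizon):
        K = b * P * a / (r + b * b * P)
        P = q + a * P * a - a * P * b * K
        gains.append(K)
    return gains[::-1]                 # gains[k] is the gain used at time k

def closed_loop_cost(gains, a_true, x0=1.0):
    x, cost = x0, 0.0
    for K in gains:
        u = -K * x
        cost += q * x * x + r * u * u
        x = a_true * x + b * u
    return cost + q * x * x            # terminal-state penalty

nominal_gains = dp_gains(a_nominal)
for a_true in [1.0, 1.1, 1.2, 1.3, 1.4]:
    fixed = closed_loop_cost(nominal_gains, a_true)
    retuned = closed_loop_cost(dp_gains(a_true), a_true)
    print(f"a={a_true:.1f}  fixed-gain cost={fixed:.3f}  retuned cost={retuned:.3f}")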
186

Optimal Tolerance Allocation

Michael, Waheed K. 07 1900 (has links)
This thesis addresses itself to one of the most general theoretical problems associated with the art of engineering design. Viewed in its entirety, the proposed approach integrates the relation between the design and production engineers through the theory of nonlinear optimization. The conventional optimization problem is extended to include the optimal allocation of the upper and lower limits of the random variables of an engineering system. The approach is illustrated by an example using a sequence of increasingly generalized formulations, while the general mathematical theory is also provided. The method appears to offer a practical technique provided a satisfactory cost function can be defined.

The thesis presents an analytical approach to full-acceptability design conditions as well as less-than-full-acceptability, or scrap, design conditions. An important distinction between design scrap and manufacturing scrap has been introduced and illustrated through examples.

The space regionalization technique is utilized to estimate the system design scrap. Optimization strategies are introduced to the mathematically defined upper and lower limits of the regionalization region. This region is then discretized into a number of cells depending upon the probabilistic characteristics of the system random variables.

The analytical approach exhibited does not rely explicitly on evaluation of partial derivatives of either the system cost objective or any of its constraints at any point. Moreover, the technique could be applied to engineering systems with either convex or nonconvex feasible regions. It could also be exercised irrespective of the shape of the probabilistic distributions that describe the random variables' variation.

Industrially oriented design examples are furnished to justify the applicability of the theory in different engineering disciplines. / Thesis / Doctor of Philosophy (PhD)
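The sketch below illustrates the flavour of the tolerance-allocation problem on a simple shaft/hole clearance example: nominal dimensions and tolerance half-widths are chosen to minimize cost subject to an estimated scrap fraction. Plain Monte Carlo sampling stands in for the thesis's space-regionalization scrap estimate, and the derivative-free optimizer, the cost model, and all numbers are assumptions made for the example.

# Optimal tolerance allocation (illustrative sketch). The thesis estimates design
# scrap by space regionalization; this sketch substitutes plain Monte Carlo
# sampling and a derivative-free optimizer. The cost model, the shaft/hole
# clearance example, and all numbers are assumptions made for the example.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
CLEARANCE_MIN, CLEARANCE_MAX = 0.01, 0.10     # functional requirement on hole - shaft
SCRAP_LIMIT = 0.02                            # at most 2% scrap allowed

def scrap_fraction(x, n_samples=5_000):
    """Monte Carlo estimate of the fraction of assemblies violating the requirement."""
    hole_nom, shaft_nom, hole_tol, shaft_tol = x
    # Dimensions assumed uniform within their allocated +/- tolerance bands.
    hole = rng.uniform(hole_nom - hole_tol, hole_nom + hole_tol, n_samples)
    shaft = rng.uniform(shaft_nom - shaft_tol, shaft_nom + shaft_tol, n_samples)
    clearance = hole - shaft
    bad = (clearance < CLEARANCE_MIN) | (clearance > CLEARANCE_MAX)
    return bad.mean()

def total_cost(x):
    """Manufacturing cost falls as tolerances widen; scrap above the limit is penalized."""
    hole_tol, shaft_tol = x[2], x[3]
    manufacturing = 1.0 / hole_tol + 1.0 / shaft_tol
    penalty = 1e4 * max(0.0, scrap_fraction(x) - SCRAP_LIMIT)
    return manufacturing + penalty

bounds = [(10.0, 10.2), (9.9, 10.1), (0.001, 0.05), (0.001, 0.05)]
result = differential_evolution(total_cost, bounds, seed=1, maxiter=40, polish=False)
hole_nom, shaft_nom, hole_tol, shaft_tol = result.x
print(f"hole  : {hole_nom:.4f} +/- {hole_tol:.4f}")
print(f"shaft : {shaft_nom:.4f} +/- {shaft_tol:.4f}")
print(f"scrap : {scrap_fraction(result.x):.4f}")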
187

Explicitly Staged Software Pipelining

Thaller, Wolfgang 08 1900 (has links)
Software pipelining is a method of instruction scheduling where loops are scheduled more efficiently by executing operations from more than one iteration of the loop in parallel. Finding an optimal software-pipelined schedule is NP-complete, but many heuristic algorithms exist.

In iteration i, a software-pipelined loop will execute, in parallel, stage 1 of iteration i, stage 2 of iteration i-1, and so on until stage k of iteration i-k+1.

We present a new approach to software pipelining based on using a heuristic algorithm to explicitly assign each operation to its stage before the actual scheduling.

This explicit assignment allows us to implement control-flow mechanisms that are hard to implement with traditional methods of software pipelining, which do not give us direct control over what stages instructions are assigned to. / Thesis / Master of Science (MSc)
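One common way to make stage assignment explicit is sketched below: earliest start cycles are computed by a longest-path pass over a toy dependence graph, and each operation's stage is taken as its start cycle divided by the initiation interval. The dependence graph, latencies, and the floor-division rule are assumptions for the example and are not necessarily the heuristic used in the thesis.

# Explicit stage assignment for software pipelining (illustrative sketch).
# Stages are derived as earliest-start-cycle // II on a toy dependence DAG;
# this is one common convention, not necessarily the thesis's heuristic.
from collections import defaultdict

II = 2                                   # initiation interval (cycles between iterations)
latency = {"load": 2, "mul": 3, "add": 1, "store": 1}
ops = ["load", "mul", "add", "store"]    # already listed in topological order
deps = {"mul": ["load"], "add": ["mul"], "store": ["add"]}  # op -> predecessors

# Longest-path earliest start cycle of each operation within one loop iteration.
start = {}
for op in ops:
    preds = deps.get(op, [])
    start[op] = max((start[p] + latency[p] for p in preds), default=0)

# Explicit stage of each operation, fixed before detailed scheduling.
stage = {op: start[op] // II for op in ops}

kernel = defaultdict(list)
for op in ops:
    kernel[stage[op]].append(op)
for s in sorted(kernel):
    print(f"stage {s}: {kernel[s]}  (executes for iteration i-{s} in kernel iteration i)")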
188

Optimal Synthesis and Balancing of Linkages

Sutherland, George 10 1900 (has links)
The problems of dimensional synthesis and of balancing of linkages are formulated as multifactor optimization problems. Using the new techniques developed in the thesis to solve these problems, a general computer program has been written to be a design aid for such problems. A guide to usage and a complete documentation for this computer program are included in the thesis. / Thesis / Master of Engineering (MEngr)
189

Novel Algorithms for Optimal Transport via Splitting Methods

Lindbäck, Jacob January 2023 (has links)
This thesis studies how the Douglas–Rachford splitting technique can be leveraged for scalable computational optimal transport (OT). By carefully splitting the problem, we derive an algorithm with several advantages. First, the algorithm enjoys global convergence rates comparable to the state-of-the-art while benefiting from accelerated local rates. In contrast to other methods, it does not depend on hyperparameters that can cause numerical instability. This feature is particularly advantageous when low-precision floating points are used or if the data is noisy. Moreover, the updates can efficiently be carried out on GPUs and, therefore, benefit from the high degree of parallelization achieved via GPU computations. Furthermore, we show that the algorithm can be extended to handle a broad family of regularizers and constraints while enjoying the same theoretical and numerical properties. These factors combined result in a fast algorithm that can be applied to large-scale OT problems and regularized versions thereof, which we illustrate in several numerical experiments. In the first part of the main body of the thesis, we present how Douglas–Rachford splitting can be adapted for the unregularized OT problem to derive a fast algorithm. We present two global convergence guarantees for the resulting algorithm: a 1/k-ergodic rate and a linear rate. We also show that the stopping criteria of the algorithm can be computed on the fly with virtually no extra costs. Further, we specify how a GPU kernel can be efficiently implemented to carry out the operations needed for the algorithm. To show that the algorithm is fast, accurate, and robust, we run a series of numerical benchmarks that demonstrate the advantages of our algorithm. We then extend the algorithm to handle regularized OT using sparsity-promoting regularizers. The generalized algorithm enjoys the same sublinear rate derived for the unregularized formulation. We also complement the global rate with local guarantees, establishing that, under non-degeneracy assumptions on the solution, the algorithm will identify the correct sparsity pattern of the solution in finitely many iterations. When the sparsity pattern is identified, a faster linear rate typically dominates. We also specify how to extend the GPU implementation and the stopping criteria to handle regularized OT, and we subsequently specify how to backpropagate through the solver. We end this part of the thesis by presenting some numerical results, including performance on quadratically regularized OT and group-Lasso-regularized OT for domain adaptation, showing a substantial improvement compared to the state-of-the-art. In the last part of the thesis, we provide a more detailed analysis of the local behavior of the algorithm when applied to unregularized OT and quadratically regularized OT. We subsequently outline how to extend this analysis to several other sparsity-promoting regularizers. In the former two cases, we show that the update that constitutes the algorithm converges to a linear operator in finitely many iterations. By analyzing the spectral properties of these linear operators, we gain insights into the local behavior of the algorithm, and specifically, these results suggest how to tune stepsizes to obtain better local rates.
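To give a concrete sense of the splitting, the sketch below runs a plain Douglas–Rachford iteration on the unregularized OT linear program: the linear cost plus the nonnegativity constraint is handled by one proximal step, and the marginal constraints by a closed-form projection onto the corresponding affine set. It is a generic sketch, not the thesis's DROT algorithm or its GPU kernel, and the random problem data and the step size are arbitrary choices.

# Douglas-Rachford splitting for the unregularized OT linear program
# (illustrative sketch, not the thesis's DROT algorithm or GPU implementation).
#   minimize <C, P>  subject to  P >= 0,  row sums = a,  column sums = b
# Split as f(P) = <C,P> + indicator(P >= 0) and g(P) = indicator(marginals).
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 40
C = rng.random((m, n))                         # random cost matrix (toy data)
a = np.full(m, 1.0 / m)                        # uniform marginals
b = np.full(n, 1.0 / n)

gamma = 0.1                                    # step size (arbitrary choice)

def prox_f(Z):
    """prox of gamma*(<C,.> + indicator of nonnegativity)."""
    return np.maximum(Z - gamma * C, 0.0)

def proj_marginals(X):
    """Euclidean projection onto {P : row sums = a, column sums = b} (closed form)."""
    r = a - X.sum(axis=1)
    u = r / n
    v = (b - X.sum(axis=0) - r.sum() / n) / m
    return X + u[:, None] + v[None, :]

Z = np.zeros((m, n))
for _ in range(5000):                          # plain DR iteration: z <- z + (y - x)
    X = prox_f(Z)
    Y = proj_marginals(2.0 * X - Z)
    Z = Z + Y - X

P = prox_f(Z)
print("transport cost :", np.sum(C * P))
print("marginal error :", np.abs(P.sum(axis=1) - a).max(), np.abs(P.sum(axis=0) - b).max())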
190

Dimensionality Reduction of Hyperspectral Signatures for Optimized Detection of Invasive Species

Mathur, Abhinav 13 December 2002 (has links)
The aim of this thesis is to investigate the use of hyperspectral reflectance signals for the discrimination of cogongrass (Imperata cylindrica) from other subtly different vegetation species. Receiver operating characteristic (ROC) curves are used to determine which spectral bands should be considered as candidate features. Multivariate statistical analysis is then applied to the candidate features to determine the optimum subset of spectral bands. Linear discriminant analysis (LDA) is used to compute the optimum linear combination of the selected subset to be used as a feature for classification. Similarly, for comparison purposes, ROC analysis, multivariate statistical analysis, and LDA are utilized to determine the most advantageous discrete wavelet coefficients for classification. The overall system was applied to hyperspectral signatures collected with a handheld spectroradiometer (ASD) and to simulated satellite signatures (Hyperion). Leave-one-out testing of a nearest-mean classifier for the ASD data shows that cogongrass can be detected amongst various other grasses with an accuracy as high as 87.86% using just the pure spectral bands and 92.77% using the Haar wavelet decomposition coefficients. Similarly, the Hyperion signatures resulted in classification accuracies of 92.20% using just the pure spectral bands and 96.82% using the Haar wavelet decomposition coefficients. These results show that hyperspectral reflectance signals can be used to reliably detect cogongrass from subtly different vegetation.
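The band-selection and classification pipeline can be sketched in a few lines: rank bands by per-band ROC AUC, keep the best ones, project them to a single LDA feature, and score a nearest-mean classifier with leave-one-out testing. The synthetic spectra below and the use of AUC ranking alone (in place of the thesis's full ROC and multivariate analysis) are assumptions made for the example.

# Band selection + LDA + leave-one-out nearest-mean classification
# (illustrative sketch of the pipeline; the spectra are synthetic, and per-band
# ROC AUC ranking stands in for the thesis's full ROC/multivariate analysis).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_per_class, n_bands = 40, 200
X0 = rng.normal(0.30, 0.05, (n_per_class, n_bands))         # "other vegetation"
X1 = rng.normal(0.30, 0.05, (n_per_class, n_bands))
X1[:, 60:80] += 0.04                                         # target species differs in a few bands
X = np.vstack([X0, X1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Rank bands by how well each one alone separates the classes (ROC AUC).
auc = []
for j in range(n_bands):
    a_j = roc_auc_score(y, X[:, j])
    auc.append(max(a_j, 1 - a_j))                            # direction-agnostic separability
selected = np.argsort(auc)[::-1][:15]                        # keep the 15 best bands

# LDA projects the selected bands to one feature; a nearest-mean classifier scores it.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), NearestCentroid())
scores = cross_val_score(clf, X[:, selected], y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2%}")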
