11

Otimização de processos acoplados: programação da produção e corte de estoque / Optimization of coupled processes: production scheduling and cutting stock

Silva, Carla Taviane Lucke da 15 January 2009 (has links)
In many manufacturing industries (e.g., paper, furniture, steel, textile), lot-sizing decisions interact with other production planning and scheduling decisions, such as distribution and cutting. Usually, however, these decisions are treated separately, which reduces the solution space, ignores the interdependence among the decisions, and thus increases total costs. In this thesis, we study the production process of small-scale furniture plants, which consists of cutting large plates available in stock to obtain several types of pieces; these pieces are subsequently processed at other stages on capacity-limited equipment (the cutting and drilling machines are potential bottlenecks) and finally assembled into the ordered products. The lot-sizing and cutting-stock problems are coupled in a large-scale integer linear optimization model whose objective is to minimize production, inventory, machine setup, and raw-material waste costs simultaneously. The model captures the trade-off between anticipating the production of certain items, which raises inventory costs, and reducing raw-material waste by obtaining better combinations of pieces within cutting patterns. The impact of demand uncertainty (demand being composed of the order book plus an estimated extra quantity) is smoothed by a rolling-horizon strategy and by decision variables that schedule extra production for the forecast demand at the best moment, aiming at total cost minimization. Two heuristic methods are developed to solve a simplification of the proposed mathematical model, which is highly complex. Computational experiments on randomly generated instances based on real data collected at a small furniture plant, an analysis of the results, conclusions, and perspectives for future work are presented.
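For readers unfamiliar with coupled lot-sizing/cutting-stock formulations, the following LaTeX sketch shows the general shape of the model family the abstract describes; all notation (x, I, z, y, costs c, h, s, w, and the pattern data a, r, tau) is illustrative and not taken from the thesis.

```latex
% Illustrative coupled model: x_{it} = lots of product i in period t,
% I_{it} = inventory, z_{it} = setup indicator, y_{jt} = uses of cutting
% pattern j (yielding a_{pj} pieces of type p, with trim-loss cost w_j);
% product i requires r_{pi} pieces of type p.
\begin{align*}
\min\;& \sum_{t}\Big(\sum_{i} c_i x_{it} + \sum_{i} h_i I_{it}
        + \sum_{i} s_i z_{it} + \sum_{j} w_j y_{jt}\Big) \\
\text{s.t.}\;& I_{i,t-1} + x_{it} = d_{it} + I_{it}
        \qquad \forall i,t \quad \text{(inventory balance)} \\
 & \sum_{j} a_{pj}\, y_{jt} \;\ge\; \sum_{i} r_{pi}\, x_{it}
        \qquad \forall p,t \quad \text{(cutting covers lot requirements)} \\
 & \sum_{j} \tau_j\, y_{jt} \;\le\; C_t
        \qquad \forall t \quad \text{(cutting capacity)} \\
 & x_{it} \le M z_{it}, \qquad x_{it}, I_{it}, y_{jt} \in \mathbb{Z}_{+},
   \qquad z_{it} \in \{0,1\}.
\end{align*}
```

The coupling lives in the second constraint: anticipating lots (raising inventory cost) enlarges the set of pieces that can share a plate, which is exactly the trade-off the abstract highlights.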
12

Large scale group network optimization

Shim, Sangho 17 November 2009 (has links)
Every knapsack problem may be relaxed to a cyclic group problem. In 1969, Gomory gave a subadditive characterization of the facets of the master cyclic group problem. We simplify the subadditive relations by substituting the complementarities and discover a minimal representation of the subadditive polytope for the master cyclic group problem. Using this minimal representation, we characterize the vertices of cardinality 3 and implement a shooting experiment from the natural interior point. Shooting from the natural interior point amounts to shooting from inside the plus level set of the subadditive polytope, and it induces a shooting procedure for the knapsack problem. From the shooting experiment for the knapsack problem we conclude that the most frequently hit facet is the knapsack mixed-integer cut, which is the 2-fold lifting of a mixed-integer cut. We develop a cutting-plane algorithm that augments the cutting planes generated by shooting, and implement it on Wong-Coppersmith digraphs, observing that only a small number of cutting planes suffices to produce the optimal solution. We discuss a relaxation of shooting as a route to faster shooting. A max-flow model on a covering space is shown to be equivalent to the dual of the shooting linear program.
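As background for the subadditive characterization mentioned above, here is a compact statement of Gomory's result in standard (assumed) notation:

```latex
% Master cyclic group polyhedron for the group Z_n and right-hand side r \neq 0:
P(n,r) \;=\; \operatorname{conv}\Big\{\, t \in \mathbb{Z}_{+}^{\,n-1} \;:\;
  \textstyle\sum_{i=1}^{n-1} i\, t_i \;\equiv\; r \ (\bmod\ n) \,\Big\}.
% Gomory (1969): the nontrivial facets \pi^{\top} t \ge 1 of P(n,r) are exactly
% the extreme points of the "subadditive polytope"
\pi_i + \pi_j \;\ge\; \pi_{(i+j) \bmod n} \quad \forall i,j, \qquad
\pi_i + \pi_{(r-i) \bmod n} \;=\; \pi_r \;=\; 1 \quad \forall i, \qquad \pi \ge 0.
```

The substitution of the complementarity equations into the subadditivity inequalities is what removes redundant relations and yields the minimal representation the abstract refers to.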
13

Advanced Integer Linear Programming Techniques for Large Scale Grid-Based Location Problems

Alam, Md. Noor-E- Unknown Date
No description available.
15

Optimisation des tournées d'inspection des voies ferroviaires / Optimization of railway track inspection routes

Lannez, Sébastien 25 November 2010 (has links)
SNCF uses several specialized rolling-stock units to inspect rails for internal defects. The inspection frequency of each rail is determined by the cumulative tonnage that passes over it. The scheduling of the ultrasonic inspection units is currently decentralized; as part of a reorganization study, SNCF wishes to assess the feasibility of optimizing certain inspection rounds, and this Ph.D. thesis studies the optimization of their scheduling. A mathematical formulation is proposed as an arc routing problem, the Railroad Track Inspection Scheduling Problem, which generalizes several classical arc routing models. An exact solution method based on Benders' decomposition is detailed. From this approach, a column and cut generation heuristic is developed, implemented, and tested on real datasets from 2009. Finally, the industrial software developed around this heuristic is presented.
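For context, the Benders scheme referred to above follows the standard template below (generic notation, not the thesis's exact formulation):

```latex
% Generic Benders decomposition of
%   min { c^T x + f^T y : Ax >= b, Tx + Wy >= h, x integer, y >= 0 }:
% project out y and solve a master problem over x,
\min_{x,\theta}\; c^{\top}x + \theta
\quad\text{s.t.}\quad Ax \ge b, \qquad
\theta \;\ge\; u_k^{\top}(h - Tx)\ \ \forall k \ \text{(optimality cuts)}, \qquad
0 \;\ge\; v_\ell^{\top}(h - Tx)\ \ \forall \ell \ \text{(feasibility cuts)},
```

where the extreme points u_k and extreme rays v_l of the dual feasible set {u >= 0 : W^T u <= f} are generated on the fly by solving the subproblem at each master solution; column and cut generation heuristics like the one above truncate this exact loop.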
16

Approches "problèmes inverses" régularisées pour l'imagerie sans lentille et la microscopie holographique en ligne / Regularized inverse problems approaches for lensless imaging and in-line holographic microscopy

Jolivet, Frederic 13 April 2018 (has links)
In digital imaging, regularized inverse-problems approaches reconstruct the information of interest from measurements and an image-formation model. Because the inversion problem is ill-posed and ill-conditioned, and the image-formation model is weakly constrained, priors must be introduced to restrict the ambiguity of the inversion and guide the reconstruction towards a satisfactory solution. This thesis develops reconstruction algorithms for digital holograms based on large-scale (smooth and non-smooth) optimization methods. This general framework allowed us to propose different approaches adapted to the challenges of this unconventional imaging technique: super-resolution, reconstruction outside the sensor's field of view, color holography and, finally, quantitative reconstruction of phase objects (i.e., transparent objects). In this last case, the reconstruction problem consists in estimating the complex 2D transmittance of objects that absorbed and/or phase-shifted the illumination wave when the hologram was recorded. The proposed methods are validated on numerical simulations and then applied to experimental data from lensless imaging and from in-line holographic microscopy (coherent imaging in transmission, with a microscope objective). Applications range from the reconstruction of opaque resolution targets and biological objects (bacteria) to evaporating ether droplets in a study of turbulence in fluid mechanics.
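The "regularized inverse problems" framework the abstract refers to is, schematically (generic notation assumed here):

```latex
% Variational formulation common to regularized hologram reconstruction:
\hat{x} \;=\; \operatorname*{arg\,min}_{x}\;
\underbrace{\big\| y - \mathcal{H}(x) \big\|_2^2}_{\text{data fidelity}}
\;+\; \lambda\, \underbrace{\mathcal{R}(x)}_{\text{prior / regularizer}},
```

where y is the recorded hologram, H the (possibly nonlinear) image-formation model, R a sparsity or smoothness prior encoding the a priori knowledge, and lambda > 0 the weight balancing fidelity against the prior; the smooth/non-smooth optimization methods mentioned above are what make this minimization tractable at large scale.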
17

Optimization Methods for Patient Positioning in Leksell Gamma Knife Perfexion

Ghobadi, Kimia 21 July 2014 (has links)
We study inverse treatment planning approaches for stereotactic radiosurgery using Leksell Gamma Knife Perfexion (PFX, Elekta, Stockholm, Sweden) to treat patients with brain tumours. PFX is a dedicated head-and-neck radiation delivery device that is commonly used in clinics. In a PFX treatment, the patient lies on a couch and radiation beams are emitted from eight banks of radioactive sources around the patient's head, focused at a single spot called an isocentre. Radiation delivery in PFX follows a step-and-shoot manner: the couch is stationary while radiation is delivered at an isocentre location, and moves only when no beam is being emitted. To find a set of well-positioned isocentres in tumour volumes, we explore fast geometry-based algorithms, including skeletonization and hybrid grassfire and sphere-packing approaches. For the selected set of isocentres, the optimal beam durations to deliver a high prescription dose to the tumour are then found using a penalty-based optimization model. We next extend our grassfire and sphere-packing isocentre selection method to treatments with homogeneous dose distributions. Dose homogeneity is required in multi-session plans, where a larger volume is treated to account for daily setup errors and large overlaps with surrounding healthy tissue may therefore exist. For multi-session plans, we explicitly consider the healthy-tissue overlaps in our algorithms and strategically select many isocentres in adjacent volumes to avoid hotspots. There is also interest in treating patients with continuous couch motion, to shorten the treatment session and increase plan quality, so we investigate continuous dose delivery treatment plans for PFX. We present various path selection methods, based on Hamiltonian path techniques, along which the dose is delivered, and develop mixed-integer and linear approximation models to determine the configuration and duration of the radiation along the paths. Our optimization models consider several criteria, including machine speed constraints and movement accuracy, preference for single or multiple paths, and smoothness of movement. The plans produced by all proposed approaches are tested on seven clinical cases; they meet or exceed clinical guidelines and usually outperform clinical treatments.
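A minimal sketch of the kind of penalty-based beam-duration model described above; the voxel/isocentre notation, weights, and quadratic penalties are assumptions for illustration, not the thesis's exact model:

```latex
% Given fixed isocentres i and delivery configurations s with dose-rate
% kernel D, choose beam-on times t >= 0 that penalize tumour underdose
% and healthy-tissue overdose:
\min_{t \,\ge\, 0}\;
  \sum_{v \in \mathcal{T}} \alpha \big(P - d_v(t)\big)_{+}^{2}
  \;+\; \sum_{v \in \mathcal{N}} \beta \big(d_v(t) - U_v\big)_{+}^{2},
\qquad
d_v(t) \;=\; \sum_{i,s} D_{v,i,s}\, t_{i,s},
```

with T the tumour voxels, N the surrounding healthy voxels, P the prescription dose, U_v healthy-tissue dose bounds, and (.)_+ = max(., 0). Geometry-based isocentre selection fixes i in advance so that this remaining problem in t stays convex and fast to solve.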
18

Parallel and Decentralized Algorithms for Big-data Optimization over Networks

Amir Daneshmand (11153640) 22 July 2021 (has links)
Recent decades have witnessed a deluge of data generated by heterogeneous sources (social networks, streaming, marketing services, etc.), which has naturally created a surge of interest in the theory and applications of large-scale convex and non-convex optimization. For example, real-world instances of statistical learning problems such as deep learning and recommendation systems can generate sheer volumes of spatially/temporally diverse data (up to petabytes in commercial applications) with millions of decision variables to be optimized. Such problems are often referred to as big-data problems. Solving them by standard optimization methods demands an intractable amount of centralized storage and computational resources, which is infeasible and is the foremost motivation for the parallel and decentralized algorithms developed in this thesis.

This thesis consists of two parts: (I) Distributed Non-convex Optimization and (II) Distributed Convex Optimization.

In Part (I), we start from a winning paradigm in big-data optimization, the Block Coordinate Descent (BCD) algorithm, which ceases to be effective when problem dimensions grow overwhelmingly. In particular, we consider a general family of constrained non-convex composite large-scale problems defined on multicore machines with shared memory. We design a hybrid deterministic/random parallel algorithm that solves such problems efficiently by combining Successive Convex Approximation (SCA) synergically with greedy/random dimensionality-reduction techniques, and we provide theoretical and empirical results showing the efficacy of the proposed scheme in the face of huge-scale problems. We then broaden the network setting to general mesh networks modeled as directed graphs and propose a class of gradient-tracking based algorithms with global convergence guarantees to critical points of the problem. We further explore the geometry of the landscape of the non-convex problems to establish second-order guarantees, strengthening our convergence results from local to global optimal solutions for a wide range of machine learning problems.

In Part (II), we focus on a family of distributed convex optimization problems defined over meshed networks. Relevant state-of-the-art algorithms often consider limited problem settings, with pessimistic communication complexities relative to their centralized variants, which raises an important question: can one achieve the rate of centralized first-order methods over networks and, moreover, improve upon their communication costs by using higher-order local solvers? To answer these questions, we propose an algorithm that uses surrogate objective functions in the local solvers (hence going beyond first-order realms, such as proximal gradient), coupled with a perturbed (push-sum) consensus mechanism that tracks locally the gradient of the central objective function. The algorithm is proved to match the convergence rate of its centralized counterparts, up to multiplicative network factors. In particular, for Empirical Risk Minimization (ERM) problems with statistically homogeneous data across the agents, our algorithm with high-order surrogates provably achieves faster rates than are achievable by first-order methods, without exchanging any Hessian matrices over the network.

Finally, we address the ill-conditioning that impairs the efficiency of decentralized first-order methods over networks, rendering them impractical in terms of both computation and communication cost. A natural solution is to develop distributed second-order methods, but their need for Hessian information incurs substantial communication overhead. To work around such exorbitant communication costs, we propose a "statistically informed" preconditioned cubic-regularized Newton method that provably improves upon the rates of first-order methods. The proposed scheme does not require communicating Hessian information over the network, yet achieves the iteration complexity of centralized second-order methods up to the statistical precision. In addition, the (second-order) approximate nature of the surrogate functions improves upon the per-iteration computational cost of our earlier scheme in this setting.
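As generic background for the gradient-tracking algorithms mentioned in Part (I), here is a minimal NumPy sketch of the standard tracking template (often called DIGing) on a toy decentralized least-squares problem. The ring topology, Metropolis weights, and step size are illustrative assumptions; the thesis's schemes (SCA surrogates, push-sum over digraphs) are more general.

```python
# Gradient tracking for decentralized least squares -- illustrative sketch.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3
A = [rng.normal(size=(10, dim)) for _ in range(n_agents)]
b = [rng.normal(size=10) for _ in range(n_agents)]

def grad(i, x):                      # gradient of f_i(x) = 0.5*||A_i x - b_i||^2
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring (Metropolis weights, degree 2).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in ((i - 1) % n_agents, (i + 1) % n_agents):
        W[i, j] = 1 / 3
    W[i, i] = 1 - W[i].sum()

alpha = 0.01                         # illustrative step size
x = np.zeros((n_agents, dim))
y = np.array([grad(i, x[i]) for i in range(n_agents)])  # tracks avg gradient

for _ in range(2000):
    x_new = W @ x - alpha * y        # consensus step + descent along tracker
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i])
                          for i in range(n_agents)])
    x = x_new

# All local copies approach the minimizer of sum_i f_i:
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print(np.abs(x - x_star).max())
```

Each agent mixes its iterate with its neighbours' and descends along y_i, which tracks the network-average gradient; this is what drives every local copy to a stationary point of the global sum rather than of the agent's own objective.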
19

Integrated Aircraft Fleeting, Routing, and Crew Pairing Models and Algorithms for the Airline Industry

Shao, Shengzhi 23 January 2013 (has links)
The air transportation market has been growing steadily for the past three decades since the airline deregulation in 1978. With competition also becoming more intense, airline companies have been trying to enhance their market shares and profit margins by composing favorable flight schedules and by efficiently allocating their resources of aircraft and crews so as to reduce operational costs. In practice, this is achieved based on demand forecasts and resource availabilities through a structured airline scheduling process that is comprised of four decision stages: schedule planning, fleet assignment, aircraft routing, and crew scheduling. The outputs of this process are flight schedules along with associated assignments of aircraft and crews that maximize the total expected profit. Traditionally, airlines deal with these four operational scheduling stages in a sequential manner. However, there exist obvious interdependencies among these stages so that restrictive solutions from preceding stages are likely to limit the scope of decisions for succeeding stages, thus leading to suboptimal results and even infeasibilities. To overcome this drawback, we first study the aircraft routing problem, and develop some novel modeling foundations based on which we construct and analyze an integrated model that incorporates fleet assignment, aircraft routing, and crew pairing within a single framework. Given a set of flights to be covered by a specific fleet type, the aircraft routing problem (ARP) determines a flight sequence for each individual aircraft in this fleet, while incorporating specific considerations of minimum turn-time and maintenance checks, as well as restrictions on the total accumulated flying time, the total number of takeoffs, and the total number of days between two consecutive maintenance operations. This stage is significant to airline companies as it directly assigns routes and maintenance breaks for each aircraft in service. Most approaches for solving this problem adopt set partitioning formulations that include exponentially many variables, thus requiring the design of specialized column generation or branch-and-price algorithms. In this dissertation, however, we present a novel compact polynomially sized representation for the ARP, which is then linearized and lifted using the Reformulation-Linearization Technique (RLT). The resulting formulation remains polynomial in size, and we show that it can be solved very efficiently by commercial software without complicated algorithmic implementations. Our numerical experiments using real data obtained from United Airlines demonstrate significant savings in computational effort; for example, for a daily network involving 344 flights, our approach required only about 10 CPU seconds for deriving an optimal solution. We next extend Model ARP to incorporate its preceding and succeeding decision stages, i.e., fleet assignment and crew pairing, within an integrated framework. We formulate a suitable representation for the integrated fleeting, routing, and crew pairing problem (FRC), which accommodates a set of fleet types in a compact manner similar to that used for constructing the aforementioned aircraft routing model, and we generate eligible crew pairings on-the-fly within a set partitioning framework. Furthermore, to better represent industrial practice, we incorporate itinerary-based passenger demands for different fare-classes. 
The large size of the resulting model obviates a direct solution using off-the-shelf software; hence, we design a solution approach based on Benders decomposition and column generation using several acceleration techniques along with a branch-and-price heuristic for effectively deriving a solution to this model. In order to demonstrate the efficacy of the proposed model and solution approach and to provide insights for the airline industry, we generated several test instances using historical data obtained from United Airlines. Computational results reveal that the massively-sized integrated model can be effectively solved in reasonable times ranging from several minutes to about ten hours, depending on the size and structure of the instance. Moreover, our benchmark results demonstrate an average of 2.73% improvement in total profit (which translates to about 43 million dollars per year) over a partially integrated approach that combines the fleeting and routing decisions, but solves the crew pairing problem sequentially. This improvement is observed to accrue due to the fact that the fully integrated model effectively explores alternative fleet assignment decisions that better utilize available resources and yield significantly lower crew costs. / Ph. D.
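For readers unfamiliar with the RLT step mentioned above, the basic binary-product linearization it builds on is the following (standard textbook form, illustrative here):

```latex
% Level-1 RLT linearization: replace each product z = x_i x_j of binaries
% (x_i, x_j \in \{0,1\}) by a continuous variable constrained so that
z \;\le\; x_i, \qquad z \;\le\; x_j, \qquad
z \;\ge\; x_i + x_j - 1, \qquad z \;\ge\; 0,
```

which is exact at binary points. RLT generates such products systematically by multiplying constraints with bound factors and then linearizing; because only polynomially many products are introduced, this is what keeps the lifted routing formulation polynomially sized, in contrast to the exponentially many columns of set-partitioning models.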
20

Μαθηματικές μέθοδοι βελτιστοποίησης προβλημάτων μεγάλης κλίμακας / Mathematical methods of optimization for large scale problems

Αποστολοπούλου, Μαριάννα 21 December 2012 (has links)
In this thesis we study the problem of minimizing nonlinear functions of several variables, where the objective function is continuously differentiable on an open subset of Rn. We develop mathematical optimization methods for solving large-scale problems, i.e., problems with many thousands, or even millions, of variables. The proposed method is based on a theoretical study of the properties of minimal- and low-memory quasi-Newton updates. We establish theorems concerning the characteristic polynomial, the number of distinct eigenvalues, and the corresponding eigenvectors, and derive closed formulas for these quantities that avoid both the storage and the factorization of matrices. The new theoretical results are applied, on the one hand, to compute nearly exact solutions of the large-scale trust-region subproblem and, on the other, to a curvilinear search that uses a pair of descent directions: a quasi-Newton direction and a direction of negative curvature. The new method drastically reduces the space complexity of well-known nonlinear programming algorithms while maintaining their good convergence properties; the resulting algorithms have space complexity Θ(n). Numerical results show that the new algorithms are efficient, fast, and very effective when used to solve problems with many variables.
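The thesis's closed-form eigenvalue and characteristic-polynomial results are specific to its minimal- and low-memory updates; as generic background on why such methods need only Θ(n) memory, here is a sketch of the standard L-BFGS two-loop recursion (illustrative, not the thesis's formulas):

```python
# Standard L-BFGS two-loop recursion: applies the inverse-Hessian
# approximation H_k to a gradient g using only m stored (s, y) pairs,
# i.e. Theta(n) memory per pair and no matrix storage or factorization.
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Return the search direction -H_k g from curvature pairs
    s_i = x_{i+1} - x_i, y_i = grad_{i+1} - grad_i (most recent last)."""
    q = g.astype(float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # backward pass
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                        # initial scaling H_0 = gamma_k * I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # forward pass
        rho = 1.0 / (y @ s)
        beta = rho * (y @ q)
        q += (a - beta) * s
    return -q
```

Within a quasi-Newton iteration one would compute d = lbfgs_direction(g, S, Y) and follow it with a line search; the storage is m pairs of n-vectors, hence Θ(n) for fixed m, which is the same memory regime the thesis exploits for its trust-region and curvilinear-search algorithms.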
