11

Archetypal Spaces & Storytelling in Architecture

Smith, William 25 May 2023 (has links)
No description available.
12

Lagrangian Relaxation / Dual Approaches For Solving Large-Scale Linear Programming Problems

Madabushi, Ananth R. 17 February 1997 (has links)
This research effort focuses on large-scale linear programming problems that arise in the context of solving various problems such as discrete linear or polynomial, and continuous nonlinear, nonconvex programming problems, using linearization and branch-and-cut algorithms for the discrete case, and using polyhedral outer-approximation methods for the continuous case. These problems arise in various applications in production planning, location-allocation, game theory, economics, and many engineering and systems design problems. During the solution process of discrete or continuous nonconvex problems using polyhedral approaches, one has to contend with repeatedly solving large-scale linear programming (LP) relaxations. Thus, it becomes imperative to employ an efficient method in solving these problems. It has been amply demonstrated that solving LP relaxations using a simplex-based algorithm, or even an interior-point type of procedure, can be inadequately slow (especially in the presence of complicating constraints, dense coefficient matrices, and ill-conditioning) in comparison with a Lagrangian Relaxation approach. With this motivation, we present a practical primal-dual subgradient algorithm that incorporates a dual ascent, a primal recovery, and a penalty function approach to recover a near-optimal and feasible pair of primal and dual solutions. The proposed primal-dual approach comprises three stages. Stage I deals with solving the Lagrangian dual problem by using various subgradient deflection strategies such as the Modified Gradient Technique (MGT), the Average Direction Strategy (ADS), and a new direction strategy called the Modified Average Direction Strategy (M-ADS). In the latter, the deflection parameter is determined based on the process of projecting the unknown optimal direction onto the space spanned by the current subgradient direction and the previous direction.
This projected direction approximates the desired optimal direction as closely as possible using the conjugate subgradient concept. The step-length rules implemented in this regard are the Quadratic Fit Line Search Method and a new line search method called the Directional Derivative Line Search Method in which we start with a prescribed step-length and then ascertain whether to increase or decrease the step-length value based on the right-hand and left-hand derivative information available at each iteration. In the second stage of the algorithm (Stage II), a sequence of updated primal solutions is generated using some convex combinations of the Lagrangian subproblem solutions. Alternatively, a starting primal optimal solution can be obtained using the complementary slackness conditions. Depending on the extent of feasibility and optimality attained, Stage III applies a penalty function method to improve the obtained primal solution toward a near feasible and optimal solution. We present computational experience using a set of randomly generated, structured, linear programming problems of the type that might typically arise in the context of discrete optimization. / Master of Science
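As a rough illustration of the Stage I machinery, the sketch below runs a deflected subgradient ascent on the Lagrangian dual of a tiny box-constrained LP. The deflection rule shown is a plain average-direction-style combination of the current subgradient with the previous direction, and the diminishing step length is a generic choice; neither is the thesis's M-ADS strategy or its line search methods, and the instance is invented for illustration.

```python
# Illustrative sketch (not the thesis's M-ADS): deflected subgradient
# ascent on the Lagrangian dual of a tiny LP
#   min c^T x  s.t.  Ax >= b,  0 <= x <= 1,
# dualizing Ax >= b.  The deflection combines the current subgradient
# with the previous direction, in the spirit of the Average Direction
# Strategy.
c = [1.0, 2.0]
A = [[1.0, 1.0]]
b = [1.0]

def solve_subproblem(lam):
    # x*(lam) minimizes (c - A^T lam)^T x over the box [0,1]^n
    red = [c[j] - sum(lam[i] * A[i][j] for i in range(len(A)))
           for j in range(len(c))]
    return [1.0 if rj < 0 else 0.0 for rj in red]

def dual_value(lam, x):
    return (sum(c[j] * x[j] for j in range(len(c)))
            + sum(lam[i] * (b[i] - sum(A[i][j] * x[j] for j in range(len(c))))
                  for i in range(len(b))))

lam = [0.0]
prev_dir = [0.0]
best = float("-inf")
for k in range(1, 200):
    x = solve_subproblem(lam)
    g = [b[i] - sum(A[i][j] * x[j] for j in range(len(c)))
         for i in range(len(b))]                      # subgradient of the dual
    norm_g = sum(gi * gi for gi in g) ** 0.5
    norm_d = sum(di * di for di in prev_dir) ** 0.5
    w = norm_g / norm_d if norm_d > 0 else 0.0        # deflection weight
    d = [g[i] + w * prev_dir[i] for i in range(len(g))]
    step = 1.0 / k                                    # diminishing step length
    lam = [max(0.0, lam[i] + step * d[i]) for i in range(len(lam))]
    prev_dir = d
    best = max(best, dual_value(lam, solve_subproblem(lam)))

print(round(best, 2))  # optimal dual value of this instance: 1.0
```

On this instance the dual function is piecewise linear with optimal value 1, and the deflected ascent reaches it within a few iterations.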
13

Robust Exact Algorithms for the Euclidean Bipartite Matching Problem

Gattani, Akshaykumar Gopalkrishna 06 July 2023 (has links)
The minimum cost bipartite matching problem is a well-studied optimization problem in computer science and operations research, with wide-ranging applications in fields such as machine learning, economics, transportation, logistics and biology. A special instance of this problem is the computation of the p-Wasserstein distance, which we define next. Given a complete bipartite graph with two disjoint sets of n points in d-dimensional Euclidean space and an integer p ≥ 1, let the cost of an edge be the p-th power of the Euclidean distance between its endpoints. The objective of this problem is to find a minimum-cost matching in this complete bipartite graph. The Hungarian algorithm is a classical method that solves this problem in O(n^3) time. There are many algorithms that have a run time better than that of the Hungarian algorithm if the graphs have non-negative integer edge costs bounded by C. Since the input points have real-valued coordinates and the Euclidean distances can be irrational, such algorithms only return an approximate matching. Thus, the Hungarian algorithm remains the fastest known algorithm to compute an exact matching. In this thesis, we implement a new algorithm in the divide and conquer framework that computes the exact p-Wasserstein distance and has a run time asymptotically better than the Hungarian algorithm for stochastic point sets. Inspired by the techniques used in the algorithm, we also design an alternate version of the Hungarian algorithm that uses a grid-based approach. Our experimental analysis shows that both of our algorithms significantly outperform the classical Hungarian algorithm. / Master of Science / Suppose we have two sets of equal number of items and a list of compatible pairs of items, where a pair is considered compatible if its items belong to different sets. A perfect matching is a subset of compatible pairs where each item is paired with exactly one other item.
When trying to find a perfect matching, there may be multiple options, and minimizing the cost of the perfect matching is often desired. This is referred to as the minimum cost bipartite matching problem, which is extensively studied due to its importance in algorithmic theory and operations research. A special instance of this problem is the calculation of the p-Wasserstein distance. It has many practical applications in fields such as machine learning, economics, transportation, logistics and biology. The Hungarian algorithm is the only known algorithm that can compute the exact p-Wasserstein distance. Therefore, our focus is to develop exact algorithms for this problem that perform better than the Hungarian algorithm and can scale efficiently to large datasets.
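To make the objective concrete, the sketch below computes the exact p-Wasserstein matching cost for two tiny made-up point sets by brute force over all n! perfect matchings. This is only a definition-level illustration: the Hungarian algorithm reaches the same exact answer in O(n^3) time, and the thesis's algorithms are faster still.

```python
# Illustrative sketch: the p-Wasserstein matching objective for two
# point sets, solved exactly by brute force over all n! matchings.
# Only viable for tiny n; the Hungarian algorithm achieves the same
# exact answer in O(n^3) time.
from itertools import permutations

def p_wasserstein_cost(red, blue, p=1):
    n = len(red)
    def edge_cost(a, b):
        # p-th power of the Euclidean distance between the endpoints
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** (p / 2)
    best = float("inf")
    for perm in permutations(range(n)):          # every perfect matching
        cost = sum(edge_cost(red[i], blue[perm[i]]) for i in range(n))
        best = min(best, cost)
    return best

red = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
blue = [(1.0, 1.0), (0.0, 0.1), (1.1, 0.0)]
print(round(p_wasserstein_cost(red, blue, p=1), 3))  # 1.2
```

Here the optimal matching pairs each red point with its nearby blue point (costs 0.1, 0.1, and 1.0), giving total cost 1.2.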
14

"Métodos de pontos interiores aplicados ao pré-despacho de um sistema hidroelétrico usando o princípio de mínimo esforço - comparação com o modelo de fluxo em redes" / Interior point methods applied to the predispatch of a hydroelectric system using the minimum effort principle - comparison with the network flow model

Carvalho, Lilian Milena Ramos 07 November 2005 (has links)
Neste trabalho, os métodos de pontos interiores primal-dual e preditor corretor são estudados e desenvolvidos para o problema de minimização de custos na geração e perdas na transmissão do pré-despacho DC (fluxo de carga em corrente contínua) de um sistema de potência hidroelétrico, com base no modelo de fluxo em redes e no princípio do mínimo esforço. A estrutura matricial, resultante da simplificação do problema proposto pela inclusão do princípio do mínimo esforço, é estudada visando implementações eficientes. / In this work, the primal-dual and predictor-corrector interior-point methods are studied and developed for the DC predispatch problem that minimizes generation costs and transmission losses in hydroelectric power systems, on the basis of the network flow model and the minimum effort principle. The matrix structure, resulting from the simplification of the problem due to the inclusion of the minimum effort principle, is studied with a view to efficient implementations. A perturbed primal-dual method is considered, based on a heuristic that determines the choice of the perturbation parameter. This method proved efficient in practice and converged in fewer iterations than an existing implementation of the network flow model.
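As a toy reading of the minimum effort principle (not the thesis's full DC predispatch model, which includes network flow constraints and losses), minimizing the total squared generation subject only to demand balance splits the load equally across generators, since the Lagrangian optimality conditions equalize the marginal effort 2·g_i:

```python
# Illustrative sketch (not the thesis's DC predispatch model):
# the minimum-effort idea as  min sum_i g_i^2  s.t.  sum_i g_i = d.
# Stationarity of the Lagrangian gives 2*g_i = mu for every generator,
# so the load is split equally: g_i = d / n.
def min_effort_dispatch(demand, n_gens):
    return [demand / n_gens] * n_gens

g = min_effort_dispatch(demand=90.0, n_gens=3)
print(g)                       # [30.0, 30.0, 30.0]

# Sanity check: any other feasible dispatch has strictly more effort.
g_alt = [40.0, 30.0, 20.0]     # still sums to 90
assert sum(x * x for x in g) < sum(x * x for x in g_alt)
```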
16

Funções penalidade para variáveis discretas e o problema de fluxo de potência ótimo reativo / Penalty functions for discrete variables and the reactive optimal power flow problem

Mazal, Camila Mara Nardello January 2019 (has links)
Orientador: Edméa Cássia Baptista / Resumo: O problema de fluxo de potência ótimo reativo é representado matematicamente por um problema de otimização não linear, restrito, não convexo, de grande porte e com variáveis de controle contínuas e discretas. A representação dos taps dos transformadores em fase e das susceptâncias shunt dos bancos de capacitores/reatores do sistema como variáveis discretas, torna o problema mais próximo da realidade. Entretanto, problemas de otimização não linear com variáveis discretas apresentam dificuldades em sua resolução, as quais são impostas pelas variáveis discretas. Uma das técnicas para sua resolução consiste em utilizar funções penalidades para tratar as variáveis discretas. Desta forma, transforma-se o problema discreto em uma sequência de problemas contínuos, e o método primal-dual barreira logarítmica pode ser utilizado para resolver esses problemas. Neste trabalho o objetivo é analisar a convergência do método de penalidade para variáveis discretas aplicado ao problema de fluxo de potência ótimo reativo, ao se utilizar diferentes funções penalidade e a combinação delas. Testes computacionais foram realizados com um exemplo numérico e com os sistemas elétricos IEEE 14, 30 e 118 barras, utilizando o pacote de otimização KNITRO em interface com o software GAMS. Os resultados demonstram que a combinação de diferentes funções penalidade para o tratamento das variáveis discretas é promissora. / Abstract: The reactive optimal power flow problem is mathematically represented by a nonlinear, constrained, nonconvex, large-scale optimization problem with continuous and discrete control variables. The representation of the in-phase transformer taps and the shunt susceptances of the system's capacitor/reactor banks as discrete variables makes the problem closer to reality. However, nonlinear optimization problems with discrete variables are difficult to solve precisely because of those variables.
One of the solution techniques consists in using penalty functions to treat the discrete variables. Thus, the discrete problem is transformed into a sequence of continuous problems, and the primal-dual logarithmic barrier method can be used to solve these problems. In this work, the objective is to analyze the convergence of the penalty method for discrete variables applied to the reactive optimal power flow problem, using different penalty functions and combinations of them. Computational tests have been carried out with a numerical example and with the IEEE 14-, 30- and 118-bus electrical systems, using the KNITRO optimization package interfaced with the GAMS software. The results show that combining different penalty functions for the treatment of the discrete variables is advantageous. / Mestre
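A minimal sketch of the penalty idea, with an invented one-dimensional objective and hypothetical transformer tap settings: the continuous objective is augmented with a term that vanishes exactly on the discrete set, and increasing the penalty weight drives the continuous minimizer onto an allowed value. The crude grid search stands in for the barrier-method solves of the abstract.

```python
# Illustrative sketch: treating a discrete variable by a penalty term.
# The continuous objective f is augmented with  beta * min_d (x - d)^2
# over the allowed discrete values; as beta grows, the minimizer is
# pulled onto the discrete set (here, hypothetical tap settings).
taps = [0.95, 1.0, 1.05]

def f(x):                       # toy smooth objective
    return (x - 1.02) ** 2

def penalty(x):                 # zero exactly on the discrete set
    return min((x - d) ** 2 for d in taps)

def minimize(obj, lo=0.9, hi=1.1, steps=20001):
    # crude grid search standing in for a continuous NLP solve
    xs = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(xs, key=obj)

results = []
for beta in (0.0, 1.0, 100.0):  # increasing penalty weight
    x = minimize(lambda x: f(x) + beta * penalty(x))
    results.append(round(x, 3))
    print(beta, results[-1])
```

With no penalty the minimizer sits at 1.02; as the weight grows it is pulled onto the nearest allowed tap, 1.0.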
17

Locating a semi-obnoxious facility in the special case of Manhattan distances

Wagner, Andrea January 2019 (has links) (PDF)
The aim of this work is to locate a semi-obnoxious facility, i.e. to minimize the distances to a given set of customers in order to save transportation costs on the one hand, and to avoid undesirable interactions with other facilities within the region by maximizing the distances to the corresponding facilities on the other hand. Hence, the goal is to satisfy economic and environmental issues simultaneously. Due to the contradicting character of these goals, we obtain a non-convex objective function. We assume that distances can be measured by rectilinear distances and exploit the structure of this norm to obtain a very efficient dual pair of algorithms.
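The sketch below evaluates a semi-obnoxious objective of this flavor on made-up data: attraction to customers minus a weighted repulsion from an existing facility, both in the rectilinear (l1) norm, minimized naively over a grid of candidate sites. It only illustrates the non-convex trade-off; it is not the paper's dual pair of algorithms.

```python
# Illustrative sketch (not the paper's dual algorithms): the
# semi-obnoxious objective with rectilinear (Manhattan) distances,
#   f(y) = sum_i ||y - customer_i||_1  -  w * sum_j ||y - facility_j||_1,
# minimized naively over a grid of candidate sites.  The subtracted
# repulsion term makes f non-convex.
customers = [(0, 0), (4, 0), (2, 3)]
facilities = [(2, 1)]          # undesirable to be close to this one
w = 0.5                        # repulsion weight

def l1(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def objective(y):
    return (sum(l1(y, c) for c in customers)
            - w * sum(l1(y, f) for f in facilities))

candidates = [(x / 2, yy / 2) for x in range(0, 9) for yy in range(0, 7)]
best = min(candidates, key=objective)
print(best, objective(best))   # (2.0, 0.0) 6.5
```

The winning site sits at the l1 median of the customers while keeping some distance from the undesirable facility; with a larger weight w the repulsion term would push it further away.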
18

Do recalque originário aos signos de percepção: contribuições de Silvia Bleichmar à psicanálise / From primal repression to perceptual signs: Silvia Bleichmar's contributions to psychoanalysis

Moraes, Gisele Cristiane Senne de 12 July 2019 (has links)
Nesta dissertação são apresentadas duas contribuições originais de Silvia Bleichmar em relação a Laplanche: recalque originário e signos de percepção. Partindo das observações de Laplanche sobre a ideia freudiana que considera o inconsciente fundado pelo recalque originário, Silvia Bleichmar ampliou o conceito, produzindo importantes contribuições em comparação ao pensamento freudiano e laplancheano. A autora articulou o recalque originário com os destinos pulsionais anteriores ao recalque, tornando-o um conceito metapsicológico passível de ser observado clinicamente, sobretudo na clínica de crianças. O recalque originário, em Silvia Bleichmar, longe de ser um processo mítico em algum momento, aproxima-se da ideia freudiana de recalque propriamente dito. Para Bleichmar, tal como em Freud, o recalque originário extrai sua força inicial de contrainvestimentos. No entanto, a autora entende que estes contrainvestimentos seriam originados no outro, pela interdição ao autoerotismo infantil. Outra importante contribuição da autora para a psicanálise foi sua conceituação sobre signos de percepção, tradução para a expressão alemã Wahrnehmungszeichen (Wz), usada por Freud em seu modelo de aparelho psíquico apresentado na Carta 52 a Fliess. Laplanche questiona se o termo Zeichen deveria ser entendido como signos ou indícios, fazendo sua escolha pela palavra signos. Para este autor, os signos de percepção se equivalem às mensagens enigmáticas, portanto, seriam portadores de mensagens. Silvia Bleichmar entende que Zeichen seriam indícios, pegadas, marcas, vestígios, huellas (em espanhol), dando ênfase ao caráter indiciário dos signos de percepção. A autora partiu da concepção de Charles S. Peirce sobre signos indiciários para articular sua própria concepção de signos de percepção. 
Para a autora, estes seriam as primeiras inscrições pulsionais que podem permanecer no psiquismo como arcaico se não transcritas ou as inscrições advindas de traumatismos severos, a qualquer momento, que ficam igualmente sem possibilidade de transcrição no psiquismo. Assim, o arcaico seria formado pelos signos de percepção que permanecem sem transcrições enquanto o originário seria o conteúdo do inconsciente originariamente recalcado / Two of Silvia Bleichmar's original contributions in relation to Laplanche are presented in this essay: the primal repression and the perceptual signs. Stemming from Laplanche's observations on the Freudian conception that considers the unconscious founded by the primal repression, Silvia Bleichmar expanded the concept, producing important contributions in comparison to the Freudian and the Laplanchean lines of thought. The author connected the primal repression to the Freudian destinies of the drive (instinct) existing prior to repression, which made it a metapsychological concept subject to clinical observation, most of all in the work with children. In Silvia Bleichmar's thinking, far from being a mythical process at some point, the primal repression is close to the Freudian idea of repression itself. To Bleichmar, as well as to Freud, the primal repression extracts its initial strength from anticathexes. Nevertheless, the author believes that these anticathexes stem from the other through the interdiction of child auto-erotism. Another important contribution of the author to psychoanalysis was her conceptualization of perceptual signs, which is a translation of the German expression Wahrnehmungszeichen (Wz), used by Freud in his model of psychic apparatus presented in the Letter 52 to Fliess. Laplanche questions if the term Zeichen should be understood as signs or indexes, preferring the word signs. For him, the perceptual signs are equivalent to the enigmatic messages, therefore they would be message carriers.
Silvia Bleichmar believes that Zeichen are indexes, footprints, marks, vestiges, huellas (in Spanish), emphasizing the indexing character of the perceptual signs. Therefore, deriving from Charles S. Peirce's conception of indexing signs, the author built her own conception of perceptual signs. For her, these would be the first drive inscriptions that may remain in the psyche as archaic if not transcript, as well as the inscriptions stemming from severe trauma, at any point, which equally remain without transcription in the psyche. Hence, the archaic would be composed of the perceptual signs that remain without transcriptions, whereas the primal would be the content of the primally repressed unconscious.
19

Convergence Analysis of Generalized Primal-Dual Interior-Point Algorithms for Linear Optimization

Wei, Hua January 2002 (has links)
We study the zeroth-, first-, and second-order algorithms proposed by Tuncel. The zeroth-order algorithms are the generalization of the classic primal-dual affine-scaling methods, and have a strong connection with the quasi-Newton method. Although the zeroth-order algorithms have the property of strict monotone decrease in both primal and dual objective values, they may not converge. We give an illustrative example as well as an algebraic proof to show that the zeroth-order algorithms do not converge to an optimal solution in some cases. The second-order algorithms use the gradients and Hessians of the barrier functions. Tuncel has shown that all second-order algorithms have a polynomial iteration bound. The second-order algorithms have a range of primal-dual scaling matrices to be chosen. We give a method to construct such a primal-dual scaling matrix. We then analyze a new centrality measure. This centrality measure appeared in both first- and second-order algorithms. We compare the neighbourhood defined by this centrality measure with other known neighbourhoods. We then analyze how this centrality measure changes in the next iteration in terms of the step length and some other information of the current iteration.
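For context, the sketch below computes the classical proximity-to-central-path measure for a strictly positive primal-dual pair (x, s): the barrier parameter mu = x^T s / n and the 2-norm deviation of the scaled products x_i s_i / mu from 1. The thesis's centrality measure generalizes this; the code shows only the standard one, on invented points.

```python
# Illustrative sketch: the standard proximity-to-central-path measure
# for a primal-dual interior point with componentwise positive x, s:
#   mu = x^T s / n,   delta(x, s) = || (x_i * s_i / mu) - 1 ||_2.
# delta = 0 exactly on the central path (all products x_i*s_i equal).
# The thesis studies a generalized measure; this is the classical one.
def centrality(x, s):
    n = len(x)
    mu = sum(xi * si for xi, si in zip(x, s)) / n    # barrier parameter
    return sum((xi * si / mu - 1.0) ** 2
               for xi, si in zip(x, s)) ** 0.5

# Perfectly centered point: every product x_i * s_i equals mu = 4.
print(centrality([1.0, 2.0, 4.0], [4.0, 2.0, 1.0]))   # 0.0

# Off-center point: products 1 and 4 give mu = 2.5 and delta > 0.
print(round(centrality([1.0, 2.0], [1.0, 2.0]), 3))   # 0.849
```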
