31 |
[en] INSTITUTIONS AND MONETARY POLICY: A CROSS-COUNTRY EMPIRICAL ANALYSIS / [pt] INSTITUIÇÕES E POLÍTICA MONETÁRIA: UMA ANÁLISE EMPÍRICA DE UM CROSS-SECTION DE PAÍSES
Gustavo Amoras Souza Lima, 06 March 2018 (has links)
This work investigates whether there is a relationship between the monetary policy conducted by a group of countries and their institutions, especially those related to the public sector. After estimating a common monetary policy rule for the group, we regress the reaction coefficients of the monetary authorities to deviations of inflation from its target and to the activity gap on institutional metrics. In several cases we find significant relationships between the conduct of monetary policy and countries' institutions, as well as potential determinants of those institutions.
|
32 |
PERFORMANCE MACRO-MODELING TECHNIQUES FOR FAST ANALOG CIRCUIT SYNTHESIS
Wolfe, Glenn A., January 2004 (has links)
No description available.
|
33 |
Using maximal feasible subset of constraints to accelerate a logic-based Benders decomposition scheme for a multiprocessor scheduling problem
Grgic, Alexander; Andersson, Filip, January 2022 (has links)
Logic-based Benders decomposition (LBBD) is a strategy for solving discrete optimisation problems. In LBBD, the optimisation problem is divided into a master problem and a subproblem, and each part is solved separately. LBBD methods that combine mixed-integer programming and constraint programming have been successfully applied to solve large-scale scheduling and resource allocation problems. Such combinations typically solve an assignment-type master problem and a scheduling-type subproblem. However, a challenge with LBBD methods that have feasibility subproblems is that they do not provide a feasible solution until an optimal solution is found. In this thesis, we show that feasible solutions can be obtained by finding and combining feasible parts of an infeasible master problem assignment. We use these insights to develop an acceleration technique for LBBD that solves a series of subproblems, according to algorithms for constructing a maximal feasible subset of constraints (MaFS). Using a multiprocessor scheduling problem as a benchmark, we study the computational impact of this technique. We evaluate three variants of LBBD schemes. The first uses MaFS, the second uses an irreducible infeasible subset of constraints (IIS), and the third combines MaFS with IIS. Computational tests were performed on an instance set of multiprocessor scheduling problems. In total, 83 instances were tested, and their number of tasks varied between 2,794 and 10,661. The results showed that when applying our acceleration technique in the decomposition scheme, the pessimistic bounds were strong, but the convergence was slow. The decomposition scheme combining our acceleration technique with the IIS-based acceleration technique showed potential to accelerate the method.
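The core idea behind the MaFS acceleration, greedily growing a subset of constraints that remains feasible, can be sketched with a simple deletion-free greedy filter. The capacity "constraints" and numbers below are a toy stand-in for the scheduling subproblem, not the thesis code:

```python
def maximal_feasible_subset(constraints, is_feasible):
    """Greedily grow a maximal feasible subset of constraints.

    constraints: list of opaque constraint objects.
    is_feasible: callable taking a list of constraints and returning
    True if they can be satisfied simultaneously.
    """
    kept = []
    for c in constraints:
        if is_feasible(kept + [c]):  # keep c only if feasibility survives
            kept.append(c)
    return kept  # maximal: adding any rejected constraint breaks feasibility

# Toy model: each "constraint" demands a share of one machine's capacity.
CAPACITY = 10
tasks = [4, 3, 5, 2, 6]  # illustrative processing demands
feasible = lambda subset: sum(subset) <= CAPACITY

subset = maximal_feasible_subset(tasks, feasible)
```

The returned subset is maximal but not necessarily maximum: a different scan order can keep a different (possibly larger) feasible set, which is why the thesis compares MaFS-based cuts against IIS-based ones.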
|
34 |
Heuristic Algorithms for Graph Coloring Problems / Algorithmes heuristiques pour des problèmes de coloration de graphes
Sun, Wen, 29 November 2018 (has links)
This thesis concerns four NP-hard graph coloring problems, namely graph coloring (GCP), equitable coloring (ECP), weighted vertex coloring (WVCP) and k-vertex-critical subgraphs (k-VCS). These problems are extensively studied in the literature, not only for their theoretical intractability but also for their real-world applications in many domains. Since they belong to the class of NP-hard problems, it is computationally difficult to solve them exactly in the general case. For this reason, this thesis is devoted to developing effective heuristic approaches to tackle these challenging problems.
We develop a reduction memetic algorithm (RMA) for the graph coloring problem, a feasible and infeasible search algorithm (FISA) for the equitable coloring problem, an adaptive feasible and infeasible search algorithm (AFISA) for the weighted vertex coloring problem and an iterated backtrack-based removal (IBR) algorithm for the k-VCS problem. All these algorithms were experimentally evaluated and compared with state-of-the-art methods.
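As a point of reference for the GCP, the simplest baseline is a greedy coloring, assigning each vertex the smallest color not used by its neighbors. This is only an illustration of the problem, not the RMA/FISA/AFISA/IBR heuristics of the thesis:

```python
def greedy_coloring(adj):
    """Assign each vertex the smallest color unused by its neighbors.

    adj: dict mapping vertex -> iterable of adjacent vertices.
    Returns dict vertex -> color (0-based).
    """
    color = {}
    for v in adj:                                   # visit in insertion order
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:                            # first free color
            c += 1
        color[v] = c
    return color

# A 5-cycle is not bipartite, so it needs 3 colors; greedy finds 3 here.
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(cycle5)
```

Greedy quality depends heavily on visit order, which is exactly the gap that memetic and feasible/infeasible search heuristics aim to close.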
|
35 |
Um método de pontos interiores primal-dual viável para minimização com restrições lineares de grande porte / A feasible primal-dual interior-point method for large-scale linearly constrained minimization
Gardenghi, John Lenon Cardoso, 16 April 2014 (has links)
In this work, we propose an interior-point method for large-scale linearly constrained optimization. The method exploits the linearity of the constraints, starting from a feasible point and preserving the feasibility of the iterates. We present the main global convergence results, together with a detailed description of a practical implementation of every step of the method. To validate the implementation, we report a wide set of numerical experiments and a comparative analysis against well-known software packages from the continuous optimization community.
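One elementary way to preserve feasibility with respect to linear equality constraints, as the method above does, is to project the search direction onto the null space of the constraint matrix, so that A x stays constant along the step. The sketch below illustrates only that idea on a tiny problem; it is not the primal-dual method of the thesis:

```python
import numpy as np

def projected_gradient_step(x, grad, A, step):
    """Move x along -grad projected onto null(A), so A @ x is unchanged
    and equality feasibility is preserved at every iterate."""
    # Orthogonal projector onto null(A): P = I - A^T (A A^T)^{-1} A
    AAt_inv = np.linalg.inv(A @ A.T)
    P = np.eye(A.shape[1]) - A.T @ AAt_inv @ A
    return x - step * (P @ grad)

# Minimize ||x||^2 subject to x1 + x2 + x3 = 3, from a feasible start.
A = np.array([[1.0, 1.0, 1.0]])
x = np.array([3.0, 0.0, 0.0])            # feasible: components sum to 3
for _ in range(200):
    x = projected_gradient_step(x, 2 * x, A, step=0.1)
# x converges toward (1, 1, 1), the constrained minimizer
```

Forming the explicit projector is only viable for small problems; large-scale codes like the one described in the thesis work with factorizations or iterative linear algebra instead.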
|
36 |
Feasible Direction Methods for Constrained Nonlinear Optimization: Suggestions for Improvements
Mitradjieva-Daneva, Maria, January 2007 (has links)
This thesis concerns the development of novel feasible direction type algorithms for constrained nonlinear optimization. The new algorithms are based upon enhancements of the search direction determination and the line search steps. The Frank-Wolfe method is popular for solving certain structured linearly constrained nonlinear problems, although its rate of convergence is often poor. We develop improved Frank-Wolfe type algorithms based on conjugate directions. In the conjugate direction Frank-Wolfe method, a line search is performed along a direction that is conjugate to the previous one with respect to the Hessian matrix of the objective. A further refinement of this method is derived by applying conjugation with respect to the last two directions instead of only the last one. The new methods are applied to the single-class user traffic equilibrium problem, the multi-class user traffic equilibrium problem under social marginal cost pricing, and the stochastic transportation problem. In a limited set of computational tests the algorithms turn out to be quite efficient. Additionally, a feasible direction method with multi-dimensional search for the stochastic transportation problem is developed. We also derive a novel sequential linear programming algorithm for general constrained nonlinear optimization problems, with the intention of being able to attack problems with large numbers of variables and constraints. The algorithm is based on inner approximations of both the primal and the dual spaces, which yields a method combining column and constraint generation in the primal space. / The articles are not published due to copyright restrictions.
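For reference, the basic Frank-Wolfe iteration that the conjugate-direction variants build on can be sketched on a small problem over the probability simplex, where the linear subproblem reduces to picking a vertex. The instance and step rule are illustrative, not from the thesis:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=500):
    """Plain Frank-Wolfe over the probability simplex.

    Each iteration solves the linear subproblem (here: the vertex with
    the most negative gradient component) and uses the classic step
    size 2/(k+2). The thesis's conjugate-direction variants instead
    modify the search direction to fight the slow convergence.
    """
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # LP solution is a simplex vertex
        x += 2.0 / (k + 2.0) * (s - x)   # convex step toward the vertex
    return x

# Minimize ||x - c||^2 over the simplex; c is interior, so the optimum is c.
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2 * (x - c), np.array([1.0, 0.0, 0.0]))
```

The iterate stays feasible by construction (a convex combination of simplex points), but the O(1/k) error decay visible here is exactly the "often poor" convergence rate the abstract mentions.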
|
37 |
Solving the Vehicle Routing Problem with Genetic Algorithm and Simulated Annealing
Kovàcs, Akos, January 2008 (has links)
This thesis concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time, to deliver enough packages to each customer, to use only the available resources and, of course, to be as efficient as possible. Although this problem may seem easy to solve for a small number of cities or customers, it is not. The algorithm has to cope with several constraints, for example opening hours, package delivery times and truck capacities, which makes this a so-called Multi-Constraint Optimization Problem (MCOP). What is more, the problem is intractable with the amount of computational power available to most of us: as the number of customers grows, the amount of calculation grows exponentially, because all constraints have to be checked for each customer, and a solution that is good enough must be found before the time allowed for the calculation runs out. The first chapter introduces the problem, starting from its basis, the Traveling Salesman Problem, and uses some theoretical and mathematical background to show why the problem is so hard to optimize and why, even though no best algorithm is known for huge numbers of customers, it is worth dealing with. Just think of a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if the optimal path for all its packages were known?
The second and third chapters attempt to give an acceptable solution by describing two algorithms: the Genetic Algorithm and Simulated Annealing. Both are based on imitating processes from nature and materials science. These algorithms will hardly ever find the best solution to the problem, but they can give a very good solution within an acceptable calculation time. These chapters describe the two algorithms in detail, from their basis in the real world through their terminology to a basic implementation, with emphasis on their limits, their advantages and disadvantages, and a comparison between them. Finally, after the theory, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm: both solve the same problem in the same environment and are compared to each other. The environment, the implementation and the test results are also described. The thesis closes with possible improvements to these algorithms and tries to answer the big question, "Which algorithm is better?", if that question even exists.
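The annealing mechanism described above, accepting a worse tour with a probability that shrinks as the temperature cools, can be sketched for a tiny TSP instance. The segment-reversal neighborhood and all parameters below are illustrative choices, not the thesis implementation:

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, temp=10.0, cooling=0.995, steps=20000, seed=0):
    """Simulated annealing for a small symmetric TSP.

    Neighbor move: reverse a random segment (a 2-opt-style move).
    A worse tour is accepted with probability exp(-delta / temp),
    which shrinks as the temperature cools.
    """
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(cand, dist) - cur_len
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            tour, cur_len = cand, cur_len + delta
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        temp *= cooling          # geometric cooling schedule
    return best, best_len

# Four cities on a unit square: the optimal tour is the perimeter (length 4).
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
tour, length = simulated_annealing(dist)
```

The same neighborhood and cost function could serve as the mutation and fitness of a Genetic Algorithm, which is what makes the two methods natural candidates for the head-to-head comparison the thesis performs.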
|
38 |
Cutter-workpiece engagement identification in multi-axis milling
Aras, Eyyup, 11 1900 (has links)
This thesis presents cutter swept volume generation, in-process workpiece modeling and Cutter Workpiece Engagement (CWE) algorithms for finding the instantaneous intersections between cutter and workpiece in milling. One of the steps in simulating machining operations is the accurate extraction of the intersection geometry between cutter and workpiece. This geometry is a key input to force calculations and feed rate scheduling in milling. Given that industrial machined components can have highly complex geometries, extracting intersections accurately and efficiently is challenging. Three main steps are needed to obtain the intersection geometry between cutter and workpiece: swept volume generation, in-process workpiece modeling and CWE extraction.
In this thesis an analytical methodology for determining the shapes of the cutter swept envelopes is developed. In this methodology, cutter surfaces performing 5-axis tool motions are decomposed into a set of characteristic circles. To obtain these circles, the concept of a two-parameter family of spheres is introduced. From the relationships among the circles, the swept envelopes are defined analytically. The implementation of the methodology is simple, especially when the cutter geometries are represented by pipe surfaces.
During the machining simulation, the workpiece must be updated to keep track of the material removal process. Several choices for workpiece updates exist: solid, faceted and vector model based methodologies. For updating workpiece surfaces represented by solid or faceted models, third-party software can be used. In this thesis, multi-axis milling update methodologies are developed for workpieces defined by discrete vectors with different orientations. To simplify the intersection calculations between the discrete vectors and the tool envelope, the properties of canal surfaces are utilized.
A typical NC cutter has different surfaces with varying geometries, and during the material removal process only restricted regions of these surfaces are eligible to contact the in-process workpiece. In this thesis these regions are analyzed with respect to different tool motions. Using the results of these analyses, solid, polyhedral and vector based CWE methodologies are developed for a range of cutter types and multi-axis tool motions. The workpiece surfaces cover a wide range of geometries, including sculptured surfaces.
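A much-simplified version of the discrete-vector workpiece update, a 3-axis z-map with vertical height vectors rather than the thesis's arbitrarily oriented vectors and canal-surface intersection tests, can be sketched as follows (grid, cutter and numbers are made up for illustration):

```python
def mill_zmap(zmap, cx, cy, radius, depth, cell=1.0):
    """Update a height-field workpiece model for one plunge of a
    flat-end cutter at (cx, cy): every column whose center lies under
    the cutter circle is clipped down to the cutting depth.

    zmap: list of rows of column heights (material top per grid cell).
    """
    for iy, row in enumerate(zmap):
        for ix in range(len(row)):
            x, y = ix * cell, iy * cell
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                row[ix] = min(row[ix], depth)   # material only ever removed
    return zmap

# 5x5 stock of height 10; plunge a radius-1.5 cutter to depth 4 at the center.
stock = [[10.0] * 5 for _ in range(5)]
mill_zmap(stock, cx=2.0, cy=2.0, radius=1.5, depth=4.0)
```

In this vertical-vector setting the "engagement" per column is simply the clipped height difference; the thesis generalizes the clipping test to tilted vectors and general swept envelopes via canal surfaces.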
|
40 |
The profitability of momentum investing
Friedrich, Ekkehard Arne, 03 1900 (links)
Thesis (MScEng (Industrial Engineering))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: Several studies have shown that abnormal returns can be generated simply by buying past winning stocks and selling past losing stocks. Being able to predict future price behaviour by past price movements represents a direct challenge to the Efficient Market Hypothesis, a centrepiece of contemporary finance.
Fund managers have attempted to exploit this effect, but reliable records of the performance of such funds are very limited. Several academic studies have documented the presence of the momentum effect across different markets and periods. These studies employ trading rules that can help establish whether the momentum effect is present in a market, but they have limited practical value, as they ignore several practical constraints.
The number of shares in the portfolios formed by academic studies is often impractical. Some studies (e.g. Conrad & Kaul, 1998) require holding a certain percentage of every share in the selection universe, resulting in an extremely large number of shares in the portfolios. Others create portfolios with as few as three shares (e.g. Rey & Schmid, 2005), resulting in portfolios that are insufficiently diversified. All academic studies implicitly require extremely high portfolio turnover rates, which could cause transaction costs to dissipate momentum profits and lead the returns of such strategies to be taxed at an investor's income tax rate rather than her capital gains tax rate. Depending on the tax jurisdiction within which the investor resides, these tax ramifications could represent a tax difference of more than 10 percent, an amount that is unlikely to be recovered by any investment strategy.
Critics of studies documenting positive alpha argue that momentum returns may be due to statistical biases such as data mining, or to risk factors not effectively captured by the standard CAPM. The empirical tests conducted in this study were therefore carefully designed to avoid factors that could compromise the results or hinder their meaningful interpretation. For example, small-caps were excluded to prevent the small-firm effect from influencing the results, and the tests were conducted on two different samples to rule out data mining as a possible driver. Previous momentum studies generally used long/short strategies. It was found, however, that momentum strategies generally picked short positions in volatile and illiquid stocks, making it difficult to estimate effectively the transaction costs involved in holding such positions. For this reason a long-only strategy was tested.
Three different strategies were tested on a sample of JSE mid- and large-caps and on a replicated S&P500 index between January 2000 and September 2009. All strategies yielded positive abnormal returns, and the null hypothesis that feasible momentum strategies cannot generate statistically significant abnormal returns could be rejected at the 5 percent level of significance for all three strategies on the JSE sample.
However, further analysis showed that the momentum profits were far more pronounced in "up" markets than in "down" markets, leaving macroeconomic risk as a possible explanation for the vast returns generated by the strategy. There was ample evidence for the January anomaly being a possible driver behind the momentum returns derived from the S&P500 sample.
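The portfolio-formation step common to momentum studies, ranking assets by their formation-period return and holding the recent winners, can be sketched as follows. The tickers and returns are made up for illustration; the thesis's strategies add holding periods, rebalancing rules and transaction-cost handling on top of this:

```python
def momentum_portfolio(past_returns, top_n):
    """Rank assets by formation-period return and hold the top_n winners
    (a long-only momentum rule, in the spirit of the strategies tested)."""
    ranked = sorted(past_returns, key=past_returns.get, reverse=True)
    return ranked[:top_n]

# Hypothetical 12-month formation-period returns per asset.
past = {"A": 0.32, "B": -0.05, "C": 0.18, "D": 0.02, "E": -0.11}
winners = momentum_portfolio(past, top_n=2)
```

A long-only rule like this avoids the short positions in volatile, illiquid stocks that the study found hard to cost accurately, which is why the empirical tests were restricted to the long side.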
|