
Ant colony optimization based simulation of 3d automatic hose/pipe routing

Thantulage, Gishantha I. F. January 2009 (has links)
This thesis focuses on applying one of the rapidly growing non-deterministic optimization algorithms, the ant colony algorithm, to simulating automatic hose/pipe routing with several conflicting objectives. Within the thesis, methods have been developed and applied to single-objective hose routing, multi-objective hose routing and multi-hose routing. Simulation and optimization have been widely applied in all fields of engineering design as the computational capabilities of computers have increased and improved. As a result, the application of non-deterministic optimization techniques such as genetic algorithms, simulated annealing and ant colony algorithms has increased dramatically, yielding vast improvements in the design process. Initially, two versions of ant colony algorithms were developed, based respectively on a random network and a grid network, for a single objective (minimizing the length of the hoses) while avoiding obstacles in the CAD model. In applying ant colony algorithms to the simulation of hose routing, two modifications are proposed for reducing the size of the search space and avoiding the stagnation problem. Hose routing problems often involve several conflicting or trade-off objectives. In classical approaches, multiple objectives are in many cases aggregated into a single objective function and the optimization is then treated as a single-objective problem. This thesis presents two versions of ant colony algorithms for multi-hose routing with two conflicting objectives: minimizing the total length of the hoses and maximizing the total shared length (bundle length). In this case the two objectives are aggregated into a single objective. The current state-of-the-art approach for handling multi-objective design problems is to employ the concept of Pareto optimality.
Within this thesis a new Pareto-based general-purpose ant colony algorithm (PSACO) is proposed and applied to a multi-objective hose routing problem with the following objectives: total length of the hoses between the start and end locations, number of bends, and angles of bends. The proposed method can handle any number of objectives and uses a single pheromone matrix for all of them; the domination concept is used for updating the pheromone matrix. Among the currently available multi-objective ant colony optimization (MOACO) algorithms, P-ACO generates very good solutions in the central part of the Pareto front, and hence the proposed algorithm is compared with P-ACO. A new term is added to the random proportional rule of both algorithms (PSACO and P-ACO) to attract ants towards edges that form angles close to the pre-specified bend angles. A refinement algorithm is also suggested for finding an acceptable solution after the search of the entire space is complete. In all of the simulations, the obstacles are represented in STL (tessellated) format rather than by their original shapes, and this representation is passed to the C++ library RAPID for collision detection. As a result, the algorithms can handle freeform obstacles and are not restricted to a particular software package.
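The random proportional rule with an added angle-attraction term can be sketched generically. This is a hedged illustration, not the thesis's actual implementation: the exponents `alpha`, `beta`, `gamma` and the `angle_bonus` heuristic (rewarding edges whose bend angle is close to a pre-specified value) are hypothetical stand-ins.

```python
import random

def transition_probabilities(pheromone, heuristic, angle_bonus,
                             alpha=1.0, beta=2.0, gamma=1.0):
    """Random proportional rule: each candidate edge's score combines
    pheromone, a distance heuristic, and a bend-angle bonus term."""
    scores = [
        (tau ** alpha) * (eta ** beta) * (bonus ** gamma)
        for tau, eta, bonus in zip(pheromone, heuristic, angle_bonus)
    ]
    total = sum(scores)
    return [s / total for s in scores]

def pick_edge(probs, rng=random.random):
    """Roulette-wheel selection of an edge index from the probabilities."""
    r, acc = rng(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

With equal heuristics and bonuses, an edge holding twice the pheromone is chosen twice as often, which is the stagnation risk the thesis's modifications aim to counter.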

Design of high-capacity optical networks using bio-inspired optimization techniques

CHAVES, Daniel Augusto Ribeiro 31 January 2012 (has links)
This thesis proposes several strategies for the optimized design of WDM optical networks. The networks are considered under dynamic traffic and subject to physical-layer impairments. The proposed strategies address the main elements affecting the cost-performance trade-off in an optical network: the routing and wavelength assignment (RWA) algorithm, regenerator placement (RP), regenerator assignment (RA), physical topology design (PTD), and the dimensioning of the optical devices (DDO) to be installed in the network. These problems are treated both separately and jointly in the thesis. For RWA, a methodology is proposed for designing heuristic routing algorithms that aim to increase network performance while accounting for physical-layer impairments. For RP, heuristic and metaheuristic algorithms are proposed for the design of translucent optical networks, simultaneously optimizing capital expenditure (CapEx), operational expenditure (OpEx) and network performance. PTD is treated jointly with DDO, again in a multi-objective fashion, simultaneously optimizing CapEx and performance (blocking probability). A multi-objective algorithm for topology expansion (i.e., adding new links to an existing network) is also proposed. Furthermore, the PTD, RP and RWA problems are solved jointly in a multi-objective formulation that simultaneously optimizes CapEx and network performance.
The solutions are optimized using the following metaheuristic strategies from computational intelligence: Particle Swarm Optimization (PSO) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II).
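Both metaheuristics named above rank candidate networks by Pareto dominance. A minimal sketch of that ranking step, independent of any network model (the objective vectors are hypothetical placeholders, with both objectives minimized, e.g. CapEx and blocking probability):

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Return the points not dominated by any other point: the first
    front that NSGA-II-style sorting would extract."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

Repeatedly removing the current front and re-extracting yields the successive non-domination levels used for selection.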

The Difference Principle in Rawls: Pragmatic or Infertile?

Esmaeili, Farzaneh 01 January 2015 (has links)
This thesis attempts to provide a coherent view of the idea of ‘justice as fairness’ and, in particular, the ‘difference principle’ expressed by John Rawls in A Theory of Justice. The main focus of the thesis is the difference principle and its limits. Rawls’s conception of ‘justice as fairness’ is based on the thought experiment of the ‘original position,’ in which people, considered as free and equal, deliberate under an imagined ‘veil of ignorance,’ i.e. not knowing which social roles or status they would occupy in their society. Rawls then argues that in the original position people arrive at two major principles of justice, understood as principles that would be acceptable to people treated as free and equal. The second principle entails the so-called ‘difference principle,’ according to which inequalities of, say, wealth and authority are just and fair only if they lead to compensating benefits for everybody, and particularly for the least advantaged. The thesis then probes whether, compared with other theories (including Dahl’s theory of democracy), Rawls’s difference principle can properly answer one of the main questions of social justice: how the economic fortune of a society should be distributed among its citizens. However, despite Rawls’s aim to develop the difference principle as a practical normative theory, it fails to give a pragmatic answer, because its statement overlooks one crucial point: the matter of time. The thesis develops two empirical economic scenarios to illustrate that there is a trade-off between the interests of the poor in the short and the long run. This important issue is not considered or discussed by Rawls, which makes the theory inapplicable.

A study on the heavy-tailedness of stock index returns

李佳晏 Unknown Date (has links)
Many observed time series exhibit leptokurtic (fat-tailed) behavior. Adopting the assumption that the data follow a Paretian distribution, this thesis estimates the maximal order of finite moments of stock index returns for several countries at different data frequencies in order to gauge their degree of heavy-tailedness. The empirical results show that fourth and higher moments mostly exist for each country's index returns across frequencies, and that the results do not vary with the sampling frequency. One can therefore infer that outlier activity in the historical distributions of these index returns is not severe. Next, the Sample Split Prediction Test is applied to test, within the same sample period, whether the left and right tails of each country's index returns are equally heavy, and, across periods, whether the heaviness of the left (or right) tail is stable. Within a given sample period, the left and right tails are found to be roughly equally heavy for each country; across periods, however, the heaviness of the left (right) tail differs significantly around events such as the US stock market crash of October 1987, the Gulf War of 1990-1991, and the Asian financial crisis of 1997. Finally, the Cusum of Squares test is proposed to test whether the unconditional variance of a time series is constant over the sample period. The test results show that the unconditional variance of each country's index returns is not constant over the observed period. Combining the Cusum of Squares plots with the cross-period sample split prediction results, one can infer that when a long time series may contain structural changes, the cross-period Sample Split Prediction Test and the Cusum of Squares test can indicate the likely dates of those changes.
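The Cusum of Squares statistic described above can be sketched as follows. Under a constant unconditional variance the normalized cumulative sum of squared returns should track the diagonal k/n, so large excursions from it suggest a variance shift (a minimal illustration, not the thesis's implementation):

```python
def cusum_of_squares(returns):
    """S_k = sum_{t<=k} r_t^2 / sum_t r_t^2, the normalized cumulative
    sum of squared returns."""
    sq = [r * r for r in returns]
    total = sum(sq)
    acc, s = 0.0, []
    for v in sq:
        acc += v
        s.append(acc / total)
    return s

def max_deviation(returns):
    """Largest gap between S_k and the constant-variance diagonal k/n."""
    n = len(returns)
    s = cusum_of_squares(returns)
    return max(abs(sk - (k + 1) / n) for k, sk in enumerate(s))
```

A series whose volatility doubles midway produces a pronounced bow in the plot, which is exactly how the thesis locates candidate structural-change dates.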

Demand-responsive transport optimization in polarized territories

Chevrier, Rémy 18 November 2008 (has links) (PDF)
This multidisciplinary thesis, spanning geography and computer science (geomatics), addresses the problem of demand-responsive transport (DRT). DRT is a collective land passenger transport service activated only on demand, situated halfway between the taxi and the bus. The driving idea of this research is to use the polarized structure of territories to facilitate the computational optimization of a (multi-)convergent DRT, drawing for example on spanning trees and the gravity model. This approach notably translates into a rationalization of the economic costs of the service (grouping of clients, number of vehicles required, travel times, and so on). The thesis also provides methodological elements for deploying a DRT using, on the one hand, metaheuristic algorithms (genetic algorithms, i.e. NSGA-II) and, on the other, geographical models (the so-called convergent form based on the polarized character of the territory). Simulations assess the capacity of the developed methods to provide good solutions in an operational context of potentially sharp increases in load.

Building on the principle of flow convergence, the method exploits graph theory to define the vehicle routes, which are themselves optimized by a dedicated genetic algorithm based on a multi-criteria approach with a Pareto front.

The last part of the thesis examines the influence of the choice of optimization metrics on the solutions obtained, for a given territory and spatial granularity. It opens onto the following question: which optimization configuration for which territory and which use?

Methods for parameterizing and exploring Pareto frontiers using barycentric coordinates

Daskilewicz, Matthew John 08 April 2013 (has links)
The research objective of this dissertation is to create and demonstrate methods for parameterizing the Pareto frontiers of continuous multi-attribute design problems using barycentric coordinates, and in doing so, to enable intuitive exploration of optimal trade spaces. This work is enabled by two observations about Pareto frontiers that have not been previously addressed in the engineering design literature. First, the observation that the mapping between non-dominated designs and Pareto efficient response vectors is a bijection almost everywhere suggests that points on the Pareto frontier can be inverted to find their corresponding design variable vectors. Second, the observation that certain common classes of Pareto frontiers are topologically equivalent to simplices suggests that a barycentric coordinate system will be more useful for parameterizing the frontier than the Cartesian coordinate systems typically used to parameterize the design and objective spaces. By defining such a coordinate system, the design problem may be reformulated from y = f(x) to (y,x) = g(p) where x is a vector of design variables, y is a vector of attributes and p is a vector of barycentric coordinates. Exploration of the design problem using p as the independent variables has the following desirable properties: 1) Every vector p corresponds to a particular Pareto efficient design, and every Pareto efficient design corresponds to a particular vector p. 2) The number of p-coordinates is equal to the number of attributes regardless of the number of design variables. 3) Each attribute y_i has a corresponding coordinate p_i such that increasing the value of p_i corresponds to a motion along the Pareto frontier that improves y_i monotonically. The primary contribution of this work is the development of three methods for forming a barycentric coordinate system on the Pareto frontier, two of which are entirely original. 
The first method, named "non-domination level coordinates," constructs a coordinate system based on the (k-1)-attribute non-domination levels of a discretely sampled Pareto frontier. The second method is based on a modification to an existing "normal boundary intersection" multi-objective optimizer that adaptively redistributes its search basepoints in order to sample from the entire frontier uniformly; the weights associated with each basepoint can then serve as a coordinate system on the frontier. The third method, named "Pareto simplex self-organizing maps," uses a modified self-organizing map training algorithm with a barycentric-grid node topology to iteratively conform a coordinate grid to the sampled Pareto frontier.
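The barycentric parameterization underlying all three methods can be illustrated with a minimal sketch: a coordinate vector p (nonnegative, summing to one) picks out a point of the simplex spanned by a set of anchor vertices. The vertices here are hypothetical stand-ins for the attribute-wise extreme points of a frontier, not the dissertation's actual construction:

```python
def barycentric_to_point(p, vertices):
    """Map barycentric coordinates p (nonnegative, summing to 1) to the
    corresponding point of the simplex spanned by `vertices`."""
    assert abs(sum(p) - 1.0) < 1e-9, "barycentric coordinates must sum to 1"
    dim = len(vertices[0])
    # Convex combination of the vertices, one output component at a time.
    return tuple(sum(w * v[d] for w, v in zip(p, vertices)) for d in range(dim))
```

Setting p_i = 1 recovers the i-th vertex (the design best in attribute i), and interior p values interpolate across the simplex, mirroring property (3) above: raising p_i moves toward the region where y_i is best.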

Value-informed space systems design and acquisition

Brathwaite, Joy Danielle 16 December 2011 (has links)
Investments in space systems are substantial, indivisible, and irreversible, characteristics that make them high-risk, especially when coupled with an uncertain demand environment. Traditional approaches to system design and acquisition, derived from a performance- or cost-centric mindset, incorporate little information about the spacecraft in relation to its environment and its value to its stakeholders. These traditional approaches, while appropriate in stable environments, are ill-suited to the current, distinctly uncertain and rapidly changing technical and economic conditions; as such, they have to be revisited and adapted to the present context. This thesis proposes that in uncertain environments, decision-making with respect to space system design and acquisition should be value-based, or at a minimum value-informed. This research advances the value-centric paradigm by providing the theoretical basis, foundational frameworks, and supporting analytical tools for value assessment of priced and unpriced space systems. For priced systems, stochastic models of the market environment and financial models of stakeholder preferences are developed and integrated with a spacecraft-sizing tool to assess the system's net present value. The analytical framework is applied to a case study of a communications satellite, with market, financial, and technical data obtained from the satellite operator, Intelsat. The case study investigates the implications of value-centric versus cost-centric design and acquisition choices. Results identify the ways in which value-optimal spacecraft design choices are contingent on both technical and market conditions, and show that larger spacecraft, for example, which reap economies-of-scale benefits reflected in their decreasing cost per transponder, are not always the best (most valuable) choices.
Market conditions and technical constraints for which convergence occurs between design choices under a cost-centric and a value-centric approach are identified and discussed. In addition, an innovative approach for characterizing value uncertainty through partial moments, a technique used in finance, is adapted to an engineering context and applied to priced space systems. Partial moments disaggregate uncertainty into upside potential and downside risk, and as such, they provide the decision-maker with additional insights for value-uncertainty management in design and acquisition. For unpriced space systems, this research first posits that their value derives from, and can be assessed through, the value of information they provide. To this effect, a Bayesian framework is created to assess system value in which the system is viewed as an information provider and the stakeholder an information recipient. Information has value to stakeholders as it changes their rational beliefs enabling them to yield higher expected pay-offs. Based on this marginal increase in expected pay-offs, a new metric, Value-of-Design (VoD), is introduced to quantify the unpriced system's value. The Bayesian framework is applied to the case of an Earth Science satellite that provides hurricane information to oil rig operators using nested Monte Carlo modeling and simulation. Probability models of stakeholders' beliefs, and economic models of pay-offs are developed and integrated with a spacecraft payload generation tool. The case study investigates the information value generated by each payload, with results pointing to clusters of payload instruments that yielded higher information value, and minimum information thresholds below which it is difficult to justify the acquisition of the system. 
In addition, an analytical decision tool, probabilistic Pareto fronts, is developed in the Cost-VoD trade space to provide the decision-maker with additional insights into the coupling of a system's probable value generation and its associated cost risk.
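Partial moments, used above to disaggregate value uncertainty into upside potential and downside risk, can be sketched in their generic finance form. This is an illustrative formulation, not the thesis's specific models; the sample values and target are hypothetical:

```python
def lower_partial_moment(values, target, order=2):
    """Downside risk: average of (target - v)^order over shortfalls v < target,
    normalized by the full sample size."""
    down = [(target - v) ** order for v in values if v < target]
    return sum(down) / len(values) if values else 0.0

def upper_partial_moment(values, target, order=2):
    """Upside potential: average of (v - target)^order over v > target,
    normalized by the full sample size."""
    up = [(v - target) ** order for v in values if v > target]
    return sum(up) / len(values) if values else 0.0
```

Unlike variance, which penalizes deviations on both sides symmetrically, the two one-sided moments let a decision-maker weigh shortfall risk against upside separately.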

Duality and optimality in multiobjective optimization

Bot, Radu Ioan 04 July 2003 (has links) (PDF)
The aim of this work is to make some investigations concerning duality for multiobjective optimization problems. In order to do this we first study duality for scalar optimization problems using the conjugacy approach. This allows us to attach three different dual problems to a primal one. We examine the relations between the optimal objective values of the duals and verify, under appropriate assumptions, the existence of strong duality. Closely related to strong duality, we derive the optimality conditions for each of these three duals. By means of these considerations, we study duality for two vector optimization problems, namely a convex multiobjective problem with cone inequality constraints and a special fractional programming problem with linear inequality constraints. To each of these vector problems we associate a scalar primal and study its duality. The structure of both scalar duals gives us an idea of how to construct a multiobjective dual. The existence of weak and strong duality is also shown. We conclude our investigations with an analysis of different duality concepts in multiobjective optimization. For a general multiobjective problem with cone inequality constraints we introduce six further duals, for which we prove weak as well as strong duality assertions. Afterwards, we derive inclusion results for the image sets and, respectively, for the sets of maximal elements of the image sets of these problems, and show under which conditions they become identical. A general scheme containing the relations between the six multiobjective duals and other duals mentioned in the literature is derived.

Preliminary design of spacecraft trajectories for missions to outer planets and small bodies

Lantukh, Demyan Vasilyevich 17 September 2015 (has links)
Multiple gravity-assist (MGA) spacecraft trajectories can be difficult to find, and the problem is intractable to solve completely. However, these trajectories have enormous benefits for missions to challenging destinations such as outer planets and primitive bodies. Techniques are presented to aid in solving this problem with a global search tool, and one particular proximity operations option is investigated further. Explore is a global grid-search MGA trajectory path-solving tool. An efficient sequential tree search eliminates v∞ discontinuities and prunes trajectories. Performance indices may be applied to further prune the search, with multiple objectives handled by allowing these indices to change between trajectory segments and by pruning with a Pareto-optimality ranking. The MGA search is extended to include deep space maneuvers (DSM), v∞ leveraging transfers (VILT) and low-thrust (LT) transfers. In addition, rendezvous or nπ sequences can patch the transfers together, enabling automatic augmentation of the MGA sequence. Details of VILT segments and nπ sequences are presented: a boundary-value problem (BVP) VILT formulation using a one-dimensional root-solve enables inclusion of an efficient class of maneuvers with runtime comparable to solving ballistic transfers. Importantly, the BVP VILT also allows the calculation of velocity-aligned apsidal maneuvers (VAM), including inter-body transfers and orbit insertion maneuvers. A method for automated inclusion of nπ transfers such as resonant returns and back-flip trajectories is introduced: a BVP is posed on the v∞ sphere and solved with one or more nπ transfers, which may additionally fulfill specified science objectives. The nπ sequence BVP is implemented within the broader search, combining nπ and other transfers in the same trajectory.
To aid proximity operations around small bodies, analytical methods are used to investigate stability regions in the presence of significant solar radiation pressure (SRP) and body oblateness perturbations. The interactions of these perturbations allow for heliotropic orbits, a stable family of low-altitude orbits investigated in detail. A novel constrained double-averaging technique analytically determines inclined heliotropic orbits. This type of knowledge is uniquely valuable for small body missions where SRP and irregular body shape are very important and where target selection is often a part of the mission design.

Analyzing value at risk and expected shortfall methods: the use of parametric, non-parametric, and semi-parametric models

Huang, Xinxin 25 August 2014 (has links)
Value at Risk (VaR) and Expected Shortfall (ES) are methods often used to measure market risk. Inaccurate and unreliable VaR and ES models can lead to underestimation of the market risk that a firm or financial institution is exposed to, and may therefore jeopardize its well-being or survival during adverse markets. The objective of this study is to examine various VaR and ES models, including fatter-tailed models, in order to analyze their accuracy and reliability. Thirteen VaR and ES models under three main approaches (parametric, non-parametric and semi-parametric) are examined. The results show that the proposed model (ARMA(1,1)-GJR-GARCH(1,1)-SGED) gives the most balanced VaR results, and that the semi-parametric model (Extreme Value Theory, EVT) is the most accurate VaR model in this study for the S&P 500.
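As a hedged illustration of the non-parametric branch, historical-simulation VaR and ES can be computed from a return sample as follows. This is a minimal sketch of the general technique, not one of the thirteen models examined in the thesis:

```python
def historical_var(returns, alpha=0.95):
    """Historical-simulation VaR: the loss threshold exceeded with
    probability roughly (1 - alpha), taken from the empirical loss order
    statistics (losses are negated returns)."""
    losses = sorted(-r for r in returns)
    idx = min(int(alpha * len(losses)), len(losses) - 1)
    return losses[idx]

def expected_shortfall(returns, alpha=0.95):
    """ES: the average loss in the tail at and beyond the VaR order
    statistic, so ES is always at least as large as VaR."""
    losses = sorted(-r for r in returns)
    idx = min(int(alpha * len(losses)), len(losses) - 1)
    tail = losses[idx:]
    return sum(tail) / len(tail)
```

Because ES averages over the whole tail rather than reading off a single quantile, it is the more informative measure when the return distribution is fat-tailed, which is the motivation for the SGED and EVT models above.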
