About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Gleichgewicht im heterogenen Oligopol / Equilibrium in Heterogeneous Oligopoly

Helmedag, Fritz 10 December 2004 (has links) (PDF)
The present paper aims to show that the oligopoly problem is far more determinate than commonly believed. In an oligopoly, conditions are particularly favourable for 'normal' behaviour, derived from profit maximization, to produce a price combination on an exactly definable segment of a curve. Finally, some consequences for economic policy are outlined.
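The abstract does not reproduce the paper's model. Purely as a generic textbook-style illustration of how profit maximization pins down equilibrium prices in a heterogeneous (differentiated-product) duopoly, the sketch below iterates best responses under linear demand; the parameters and functional form are invented for the example, not taken from the paper.

```python
# Illustrative only: a standard differentiated-Bertrand duopoly, not the
# model from the paper. Demand for firm i: q_i = a - b*p_i + c*p_j,
# with unit cost k. All parameter values are made up for the example.
a, b, c, k = 10.0, 2.0, 1.0, 1.0

def best_response(p_j):
    """Price maximizing profit (p_i - k) * (a - b*p_i + c*p_j) over p_i."""
    return (a + c * p_j + b * k) / (2.0 * b)

# Iterate best responses to a fixed point (the Bertrand-Nash equilibrium;
# the best-response map is a contraction whenever c < 2b).
p1 = p2 = k
for _ in range(100):
    p1, p2 = best_response(p2), best_response(p1)

print(f"equilibrium prices: p1 = {p1:.4f}, p2 = {p2:.4f}")
```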
132

Contributions to quality improvement methodologies and computer experiments

Tan, Matthias H. Y. 16 September 2013 (has links)
This dissertation presents novel methodologies for five problem areas in modern quality improvement and computer experiments, namely selective assembly, robust design with computer experiments, multivariate quality control, model selection for split plot experiments, and construction of minimax designs. Selective assembly has traditionally been used to achieve tight specifications on the clearance of two mating parts. Chapter 1 proposes generalizations of the selective assembly method to assemblies with any number of components and any assembly response function, called generalized selective assembly (GSA). Two variants of GSA are considered: direct selective assembly (DSA) and fixed bin selective assembly (FBSA). In DSA and FBSA, the problem of matching a batch of N components of each type to give N assemblies that minimize quality cost is formulated as axial multi-index assignment and transportation problems respectively. Realistic examples are given to show that GSA can significantly improve the quality of assemblies. Chapter 2 proposes methods for robust design optimization with time-consuming computer simulations. Gaussian process models are widely employed for modeling responses as a function of control and noise factors in computer experiments. In these experiments, robust design optimization is often based on average quadratic loss computed as if the posterior mean were the true response function, which can give misleading results. We propose optimization criteria derived by taking expectation of the average quadratic loss with respect to the posterior predictive process, and methods based on the Lugannani-Rice saddlepoint approximation for constructing accurate credible intervals for the average loss. These quantities allow response surface uncertainty to be taken into account in the optimization process. Chapter 3 proposes a Bayesian method for identifying mean shifts in multivariate normally distributed quality characteristics. Multivariate quality characteristics are often monitored using a few summary statistics. However, to determine the causes of an out-of-control signal, information about which means shifted and the directions of the shifts is often needed. We propose a Bayesian approach that gives this information. For each mean, an indicator variable that indicates whether the mean shifted upwards, shifted downwards, or remained unchanged is introduced. Default prior distributions are proposed. Mean shift identification is based on the modes of the posterior distributions of the indicators, which are determined via Gibbs sampling. Chapter 4 proposes a Bayesian method for model selection in fractionated split plot experiments. We employ a Bayesian hierarchical model that takes into account the split plot error structure. Expressions for computing the posterior model probability and other important posterior quantities that require evaluation of at most two uni-dimensional integrals are derived. A novel algorithm called combined global and local search is proposed to find models with high posterior probabilities and to estimate posterior model probabilities. The proposed method is illustrated with the analysis of three real robust design experiments. Simulation studies demonstrate that the method has good performance. Choosing a design that is representative of a finite candidate set is an important problem in computer experiments. The minimax criterion measures the degree of representativeness: it is the maximum distance from a candidate point to the design. Chapter 5 proposes algorithms for finding minimax designs for finite design regions. We establish the relationship between minimax designs and the classical set covering location problem in operations research, which is a binary linear program. We prove that the set of minimax distances is the set of discontinuities of the function that maps the covering radius to the optimal objective function value, and that optimal solutions at the discontinuities are minimax designs. These results are employed to design efficient procedures for finding globally optimal minimax and near-minimax designs.
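As a toy illustration of the minimax design criterion described above (not the dissertation's set-covering algorithms, which are built to scale far beyond this), the sketch below exhaustively searches a small candidate grid; the grid and design size are invented for the example.

```python
# Brute-force minimax design: among all n-point subsets of a finite
# candidate set, pick the one minimizing the maximum distance from any
# candidate point to the design (Chapter 5 solves this via set covering;
# this exhaustive search only works for tiny instances).
from itertools import combinations
import math

candidates = [(x / 4.0, y / 4.0) for x in range(5) for y in range(5)]  # 5x5 grid
n = 3  # design size

def max_min_dist(design):
    """Minimax criterion: the largest candidate-to-design distance."""
    return max(min(math.dist(p, d) for d in design) for p in candidates)

best = min(combinations(candidates, n), key=max_min_dist)
print("minimax design:", best, "criterion:", round(max_min_dist(best), 4))
```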
133

Apprentissage actif pour l'approximation de variétés / Active Learning for Manifold Approximation

Gandar, Benoît 27 November 2012 (has links) (PDF)
Statistical learning seeks to model a functional link between two variables X and Y from a random sample of realizations of (X, Y). When the variable Y takes one of two values, the learning task is called classification (or discrimination), and learning the functional link amounts to learning the boundary of a manifold in the space of the variable X. In this thesis we work in the active learning setting, i.e. we assume that the training sample is no longer random and that, through an oracle, we can generate the points on which the manifold will be learned. In the case where the variable Y is continuous (regression), previous work shows that the low-discrepancy criterion is suitable for generating the first training points. We show, surprisingly, that these results do not carry over to classification. In this manuscript we therefore propose the dispersion criterion for classification. Since this criterion is difficult to apply in practice, we propose a new algorithm for generating a low-dispersion experimental design in the unit square. After a first approximation of the manifold, successive approximations can be computed to refine our knowledge of it. Two sampling methods are then possible: selective sampling, which chooses the points to present to an oracle from a finite set of candidates, and adaptive sampling, which may choose any point in the space of the variable X. The second can be seen as a limiting case of the first; in practice, however, it is not reasonable to use it directly. We therefore propose a new algorithm, based on the dispersion criterion and pursuing exploitation and exploration simultaneously, to approximate a manifold.
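As a rough illustration of the dispersion criterion discussed above, the following sketch estimates the dispersion of a point set in the unit square over a fine grid and grows a design with a greedy farthest-point heuristic; the grid resolution and the heuristic are assumptions of this example, not the algorithm proposed in the thesis.

```python
# Dispersion of a point set (Niederreiter's sense): the radius of the
# largest empty ball in the domain, estimated here on a discretized
# unit square. The greedy step repeatedly fills the biggest hole.
import math

grid = [(i / 50.0, j / 50.0) for i in range(51) for j in range(51)]

def dispersion(points):
    """Largest grid-point distance to its nearest design point."""
    return max(min(math.dist(x, p) for p in points) for x in grid)

design = [(0.5, 0.5)]
for _ in range(9):  # grow to 10 points, always adding the farthest grid point
    design.append(max(grid, key=lambda x: min(math.dist(x, p) for p in design)))

print(f"dispersion of {len(design)} points: {dispersion(design):.4f}")
```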
134

Development of an assured systems management model for environmental decision–making / Jacobus Johannes Petrus Vivier

Vivier, Jacobus Johannes Petrus January 2011 (has links)
The purpose of this study was to make a contribution towards decision–making in complex environmental problems, especially where data is limited and associated with a high degree of uncertainty. As a young scientist, I understood the value of science as a measuring and quantification tool and intuitively believed that science was exact and could provide indisputable answers. It was in 1997, during the Safety Assessments done at the Vaalputs National Radioactive Waste Repository, that my belief system was challenged. This occurred after numerous scientific studies had been done on the site since the early 1980s, yet with no conclusion as to how safe the site is for radioactive waste disposal. The Safety Assessment process was developed by the International Atomic Energy Agency (IAEA) to transform scientific investigations and data into decision–making information for the purposes of radioactive waste management. It was also during the Vaalputs investigations that I learned the value of lateral thinking. Numerous scientists with doctoral and master's degrees worked on the site, of whom I was one. One of the important requirements was to measure evaporation at the local weather station close to the repository, since evaporation is a controlling parameter in the unsaturated zone models. Evaporation was measured with an A–pan that is filled with water so that the losses can be measured. Vaalputs is a very dry place and water is scarce. The local weather station site was fenced off, but an aardvark dug below the fence and drank the water in the A–pan, so that no measurements were possible. The scientists' solution was to put the fence deeper into the ground. The aardvark did not find it hard to dig even deeper. The next solution was to put a second fence around the weather station, and again the aardvark dug below it to drink the water. It was then that Mr Robbie Schoeman, a technician, became aware of the problem and put a drinking-water container outside the weather station fence for the aardvark, and the problem was solved at a fraction of the cost of the previous complex solutions. I still encounter the same thinking patterns, which intuitively expect that the act of scientific investigation will provide decision–making information or even solve the problem; if the investigation provides more questions than answers, the quest is for more and more data on ever more detailed scales. There is a difference between problem characterization and solution identification. Problem characterization requires scientific and critical thinking, which is an important component, but it has to be incorporated with the solution identification process of creative thinking towards decision–making. I am a scientist at heart, but it was necessary to realise that, apart from research, practical science must feed into a higher process, such as decision–making, to be able to make a practical difference. The process of compiling this thesis meant a lot to me: I initially thought of it simply as doing a PhD, and then it changed me, especially in the way I think. This was a life-changing process, which is good. As Jesus said in Matthew 3:2, "Repent (think differently; change your mind, regretting your sins and changing your conduct), for the kingdom of heaven is at hand." / Thesis (Ph.D. (Geography and Environmental Studies))--North-West University, Potchefstroom Campus, 2011.
136

A game theoretic analysis of adaptive radar jamming

Bachmann, Darren John Unknown Date (has links) (PDF)
Advances in digital signal processing (DSP) and computing technology have resulted in the emergence of increasingly adaptive radar systems, and the Electronic Attack (EA), or jamming, of such radar systems is expected to become a more difficult task. This research addresses the jamming of adaptive radar systems, which in turn requires consideration of adaptive jamming systems; the development of a methodology outlining the features of such a system is proposed as the key contribution of this thesis. For the first time, game-based optimization methods have been applied to a maritime counter-surveillance/counter-targeting scenario involving conventional, as well as so-called 'smart', noise jamming. Conventional noise jamming methods feature prominently in the origins of radar electronic warfare and are still widely implemented. They have been well studied, and are important for comparisons with coherent jamming techniques. Moreover, noise jamming is more readily applied with limited information support and is therefore germane to the problem of jamming adaptive radars during the early stages, when the jammer tries to learn about the radar's parameters and its own optimal actions. A radar and a jammer were considered as informed opponents 'playing' a non-cooperative two-player, zero-sum game. The effects of jamming on the target detection performance of a radar using Constant False Alarm Rate (CFAR) processing were analyzed using a game theoretic approach for three cases: (1) Ungated Range Noise (URN), (2) Range-Gated Noise (RGN) and (3) False-Target (FT) jamming. Assuming a Swerling type II target in the presence of Rayleigh-distributed clutter, utility functions were described for Cell-Averaging (CA) and Order Statistic (OS) CFAR processors and the three cases of jamming. The analyses included optimizations of these utility functions, subject to certain constraints, with respect to control variables (strategies) in the jammer, such as jammer power and spatial extent of jamming, and control variables in the radar, such as threshold parameter and reference window size. The utility functions were evaluated over the players' strategy sets, and the resulting matrix-form games were solved for the optimal, or 'best response', strategies of both the jammer and the radar.
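Matrix-form zero-sum games of the kind described above can be solved for optimal mixed strategies by linear programming. The sketch below uses a made-up 3x3 payoff matrix; in the thesis the entries would come from CFAR detection performance under each (radar, jammer) strategy pair, which the abstract does not reproduce.

```python
# Solve a two-player zero-sum matrix game by LP (the classical
# reduction: maximize the game value v over row-player mixed strategies).
import numpy as np
from scipy.optimize import linprog

# A[i, j]: payoff to the row player (radar) for strategy pair (i, j).
A = np.array([[3.0, 1.0, 4.0],
              [2.0, 3.0, 1.0],
              [0.0, 4.0, 2.0]])

shift = A.min()           # shift payoffs positive so the value v > 0
B = A - shift + 1.0
m, n = B.shape

# max v s.t. B^T x >= v, sum(x) = 1  <=>  with u = x / v:
# min sum(u) s.t. B^T u >= 1, u >= 0; then v = 1 / sum(u).
res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n), bounds=[(0, None)] * m)
v = 1.0 / res.x.sum()
x = res.x * v             # optimal mixed strategy for the row player

print("game value:", v + shift - 1.0)
print("radar strategy:", np.round(x, 4))
```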
137

Teoria dos Pontos Críticos e Sistemas Hamiltonianos. / Critical Point Theory and Hamiltonian Systems.

BARBOSA, Leopoldo Maurício Tavares. 17 July 2018 (has links)
In this work we use variational methods to show the existence of weak solutions for two types of problems: the first concerns an ordinary differential equation, and the second concerns Hamiltonian systems. *To see the equations and formulas originally written in this abstract, we recommend downloading the complete file.
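The record omits the dissertation's equations. Purely as a generic reminder of the variational framework named in the title (a model problem, not one treated in the dissertation), the weak formulation of a two-point boundary value problem reads:

```latex
% Generic illustration: weak form of -u'' = f on (0,1), u(0) = u(1) = 0.
% Multiply by a test function v and integrate by parts:
\text{find } u \in H_0^1(0,1) \text{ such that }
\int_0^1 u'(x)\, v'(x)\, dx = \int_0^1 f(x)\, v(x)\, dx
\quad \forall\, v \in H_0^1(0,1),
% i.e. u is a critical point of the energy functional
J(u) = \tfrac{1}{2}\int_0^1 |u'(x)|^2\, dx - \int_0^1 f(x)\, u(x)\, dx .
```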
139

Studies on the Space Exploration and the Sink Location under Incomplete Information towards Applications to Evacuation Planning / 不完全情報下における空間探索及び施設配置に関する理論的研究 -避難計画への応用を目指して-

Higashikawa, Yuya 24 September 2014 (has links)
Kyoto University / 0048 / New-system doctorate by coursework / Doctor of Engineering / Degree No. 甲第18582号 / 工博第3943号 / Library call no. 新制||工||1606 (University Library) / 31482 / Department of Architecture, Graduate School of Engineering, Kyoto University / Examiners: Prof. Naoki Katoh (chair), Prof. Teruyuki Monnai, Prof. Kiyoko Kanki / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
140

Resource Allocation on Networks: Nested Event Tree Optimization, Network Interdiction, and Game Theoretic Methods

Lunday, Brian Joseph 08 April 2010 (has links)
This dissertation addresses five fundamental resource allocation problems on networks, all of which have applications to support Homeland Security or industry challenges. In the first application, we model and solve the strategic problem of minimizing the expected loss inflicted by a hostile terrorist organization. An appropriate allocation of certain capability-related, intent-related, vulnerability-related, and consequence-related resources is used to reduce the probabilities of success in the respective attack-related actions, and to ameliorate losses in case of a successful attack. Given the disparate nature of prioritizing capital and material investments by federal, state, local, and private agencies to combat terrorism, our model and accompanying solution procedure represent an innovative, comprehensive, and quantitative approach to coordinate resource allocations from various agencies across the breadth of domains that deal with preventing attacks and mitigating their consequences. Adopting a nested event tree optimization framework, we present a novel formulation for the problem as a specially structured nonconvex factorable program, and develop two branch-and-bound schemes based respectively on utilizing a convex nonlinear relaxation and a linear outer-approximation, both of which are proven to converge to a global optimal solution. We also investigate a fundamental special-case variant for each of these schemes, and design an alternative direct mixed-integer programming model representation for this scenario. Several range reduction, partitioning, and branching strategies are proposed, and extensive computational results are presented to study the efficacy of different compositions of these algorithmic ingredients, including comparisons with the commercial software BARON. The developed set of algorithmic implementation strategies and enhancements are shown to outperform BARON over a set of simulated test instances, where the best proposed methodology produces an average optimality gap of 0.35% (compared to 4.29% for BARON) and reduces the required computational effort by a factor of 33. A sensitivity analysis is also conducted to explore the effect of certain key model parameters, whereupon we demonstrate that the prescribed algorithm can attain significantly tighter optimality gaps with only a near-linear corresponding increase in computational effort. In addition to enabling effective comprehensive resource allocations, this research permits coordinating agencies to conduct quantitative what-if studies on the impact of alternative resourcing priorities. The second application is motivated by the author's experience with the U.S. Army during a tour in Iraq, during which combined operations involving U.S. Army, Iraqi Army, and Iraqi Police forces sought to interdict the transport of selected materials used for the manufacture of specialized types of Improvised Explosive Devices, as well as to interdict the distribution of assembled devices to operatives in the field. In this application, we model and solve the problem of minimizing the maximum flow through a network from a given source node to a terminus node, integrating different forms of superadditive synergy with respect to the effect of resources applied to the arcs in the network. Herein, the superadditive synergy reflects the additional effectiveness of forces conducting combined operations, vis-à-vis unilateral efforts. 
We examine linear, concave, and general nonconcave superadditive synergistic relationships between resources, and accordingly develop and test effective solution procedures for the underlying nonlinear programs. For the linear case, we formulate an alternative model representation via Fourier-Motzkin elimination that reduces average computational effort by over 40% on a set of randomly generated test instances. This test is followed by extensive analyses of instance parameters to determine their effect on the levels of synergy attained using different specified metrics. For the case of concave synergy relationships, which yields a convex program, we design an inner-linearization procedure that attains solutions on average within 3% of optimality with a reduction in computational effort by a factor of 18 in comparison with the commercial codes SBB and BARON for small- and medium-sized problems, and outperforms these solvers on large-sized problems, where both failed to attain an optimal solution (and often failed to detect a feasible solution) within 1800 CPU seconds. Examining a general nonlinear synergy relationship, we develop solution methods based on outer-linearizations, inner-linearizations, and mixed-integer approximations, and compare these against the commercial software BARON. Considering increased granularities for the outer-linearization and mixed-integer approximations, as well as different implementation variants for both these approaches, we conduct extensive computational experiments to reveal that, whereas both these techniques perform comparably with respect to BARON on small-sized problems, they significantly improve upon its performance for medium- and large-sized problems. Our best-performing procedure reduces the computational effort by a factor of 461 for the subset of test problems for which the commercial global optimization software BARON could identify a feasible solution, while also achieving solutions of objective value 0.20% better than BARON. The third application is likewise motivated by the author's military experience in Iraq, both from several instances involving coalition forces attempting to interdict the transport of a kidnapping victim by a sectarian militia and, from the opposite perspective, from instances involving coalition forces transporting detainees between internment facilities. For this application, we examine the network interdiction problem of minimizing the maximum probability of evasion by an entity traversing a network from a given source to a designated terminus, while incorporating novel forms of superadditive synergy between resources applied to arcs in the network. Our formulations examine either linear or concave (nonlinear) synergy relationships. Conformant with military strategies that frequently involve a combination of overt and covert operations to achieve an operational objective, we also propose an alternative model for sequential overt and covert deployment of subsets of interdiction resources, and conduct theoretical as well as empirical comparative analyses between models for purely overt (with or without synergy) and composite overt-covert strategies to provide insights into absolute and relative threshold criteria for recommended resource utilization.
In contrast to existing static models, in a fourth application, we present a novel dynamic network interdiction model that improves realism by accounting for interactions between an interdictor deploying resources on arcs in a digraph and an evader traversing the network from a designated source to a known terminus, wherein the agents may modify strategies in selected subsequent periods according to respective decision and implementation cycles. We further enhance the realism of our model by considering a multi-component objective function, wherein the interdictor seeks to minimize the maximum value of a regret function that consists of the evader's net flow from the source to the terminus; the interdictor's procurement, deployment, and redeployment costs; and penalties incurred by the evader for misperceptions as to the interdicted state of the network. For the resulting minimax model, we use duality to develop a reformulation that facilitates a direct solution procedure using the commercial software BARON, and examine certain related stability and convergence issues. We demonstrate cases for convergence to a stable equilibrium of strategies for problem structures having a unique solution to minimize the maximum evader flow, as well as convergence to a region of bounded oscillation for structures yielding alternative interdictor strategies that minimize the maximum evader flow. We also provide insights into the computational performance of BARON for these two problem structures, yielding useful guidelines for other research involving similar non-convex optimization problems. For the fifth application, we examine the problem of apportioning railcars to car manufacturers and railroads participating in a pooling agreement for shipping automobiles, given a dynamically determined total fleet size. This study is motivated by the existence of such a consortium of automobile manufacturers and railroads, for which the collaborative fleet sizing and efforts to equitably allocate railcars amongst the participants are currently orchestrated by the TTX Company in Chicago, Illinois. In our study, we first demonstrate potential inequities in the industry standard resulting either from failing to address disconnected transportation network components separately, or from utilizing the current manufacturer allocation technique that is based on average nodal empty transit time estimates. We next propose and illustrate four alternative schemes to apportion railcars to manufacturers, respectively based on total transit time that accounts for queuing; two marginal cost-induced methods; and a Shapley value approach. We also provide a game-theoretic insight into the existing procedure for apportioning railcars to railroads, and develop an alternative railroad allocation scheme based on capital plus operating costs. Extensive computational results are presented for the ten combinations of current and proposed allocation techniques for automobile manufacturers and railroads, using realistic instances derived from representative data of the current business environment. We conclude with recommendations for adopting an appropriate apportionment methodology for implementation by the industry. / Ph. D.
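As a sketch of the Shapley value approach mentioned for railcar apportionment, the example below computes Shapley shares for a hypothetical three-participant game; the coalition values are invented for illustration, whereas in the application they would be derived from costs or savings attributable to each coalition.

```python
# Shapley value: each player's average marginal contribution over all
# orders in which the coalition could form.
from itertools import permutations

players = ["mfr_A", "mfr_B", "railroad"]
value = {  # v(S): worth of each coalition S (made-up numbers)
    frozenset(): 0.0,
    frozenset({"mfr_A"}): 10.0,
    frozenset({"mfr_B"}): 14.0,
    frozenset({"railroad"}): 6.0,
    frozenset({"mfr_A", "mfr_B"}): 30.0,
    frozenset({"mfr_A", "railroad"}): 22.0,
    frozenset({"mfr_B", "railroad"}): 26.0,
    frozenset(players): 48.0,
}

shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    coalition = frozenset()
    for p in order:
        shapley[p] += (value[coalition | {p}] - value[coalition]) / len(orders)
        coalition = coalition | {p}

print(shapley)  # shares are efficient: they sum to v(grand coalition) = 48.0
```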
