211

The Nashville Civil Rights Movement: A Study of the Phenomenon of Intentional Leadership Development and its Consequences for Local Movements and the National Civil Rights Movement

Lee, Barry Everett 09 April 2010 (has links)
The Nashville Civil Rights Movement was one of the most dynamic local movements of the early 1960s, producing the most capable student leaders of the period 1960 to 1965. Despite such a feat, the historical record has largely overlooked this phenomenon. What circumstances allowed Nashville to produce such a dynamic movement, whose youth leaders, John Lewis, Diane Nash, Bernard LaFayette, and James Bevel, had no parallel? How was this small cadre able to influence movement developments at both the local and national levels? In order to address these critical research questions, standard historical methods of inquiry will be employed. These include the use of secondary sources, primarily Civil Rights Movement histories and memoirs, scholarly articles, and dissertations and theses. The primary sources used include public lectures, articles from various periodicals, extant interviews, numerous manuscript collections, and a variety of audio and video recordings. No original interviews were conducted because of the availability of extensive, high-quality interviews. This dissertation will demonstrate that the Nashville Movement evolved out of the formation of independent Black churches and colleges that over time became the primary sites of resistance to racial discrimination, starting in the nineteenth century. By the late 1950s, Nashville's Black colleges attracted the students who became the driving force of a local movement that quickly established itself at the forefront of the Civil Rights Movement. Nashville's forefront status was due to an intentional leadership training program based upon nonviolence. As a result of the training, leaders had a profound impact upon nearly every major movement development up to 1965, including the sit-ins, the Freedom Rides, the March on Washington, the birth of SNCC, the emergence of Black Power, the direction of the SCLC after 1962, the thinking of Dr. Martin Luther King, Jr., the Birmingham campaign, and the Selma voting rights campaign. In addition, the Nashville activists helped eliminate fear as an obstacle to Black freedom. These activists also revealed new relationship dynamics between students and adults and merged nonviolent direct action with voter registration, a combination previously considered incompatible.
212

Three essays in economic theory

Tatur, Tymon. January 2003 (has links) (PDF)
Ill., Northwestern Univ., Diss.--Evanston, 2003. / Copy, published by UMI, Ann Arbor, Mich. Contains 3 essays.
213

Cost-effectiveness of NASH screening

Zhang, Eric W. 09 1900 (has links)
No description available.
214

Controle hierárquico via estratégia de Stackelberg-Nash para controlabilidade de sistemas parabólicos e hiperbólicos / Hierarchical control via the Stackelberg-Nash strategy for the controllability of parabolic and hyperbolic systems

Silva, Luciano Cipriano da 31 March 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / In this thesis we present results on the exact controllability of partial differential equations (PDEs) of parabolic and hyperbolic type, in the context of hierarchical control, using the Stackelberg-Nash strategy. In every problem we consider a main control (the leader) and two secondary controls (the followers). For each leader we obtain a corresponding Nash equilibrium, associated with a bi-objective optimal control problem; we then look for a leader of minimal cost that solves the exact controllability problem. For the parabolic problems we have distributed and boundary controls, while in the hyperbolic problems all controls are distributed. We consider linear and semilinear cases, which we solve using observability inequalities obtained by combining suitable Carleman inequalities. We also use a fixed-point method.
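For readers unfamiliar with the hierarchical control framework mentioned in this abstract, the following schematic sketches a typical Stackelberg-Nash formulation for a parabolic equation. It follows standard presentations in the literature; the notation is illustrative and is not taken from the thesis itself.

```latex
% Schematic heat equation with one leader f and two followers v^1, v^2:
\[
  y_t - \Delta y = f\,\chi_{\omega} + v^1 \chi_{\omega_1} + v^2 \chi_{\omega_2}
  \quad \text{in } \Omega \times (0,T).
\]
% For a fixed leader f, follower i minimizes
\[
  J_i(f; v^1, v^2) = \frac{\alpha_i}{2} \iint_{\omega_{i,d} \times (0,T)} |y - y_{i,d}|^2
  + \frac{\mu_i}{2} \iint_{\omega_i \times (0,T)} |v^i|^2, \qquad i = 1, 2,
\]
% and (\hat v^1, \hat v^2) is a Nash equilibrium when
\[
  J_1(f; \hat v^1, \hat v^2) \le J_1(f; v^1, \hat v^2), \qquad
  J_2(f; \hat v^1, \hat v^2) \le J_2(f; \hat v^1, v^2) \quad \forall\, v^1, v^2.
\]
% The leader of minimal norm is then chosen so that y(\cdot, T) = 0
% (null controllability).
```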
215

Uma nova metodologia de jogos dinâmicos lineares quadráticos / A new methodology for linear quadratic dynamic games

André Luiz Sampaio de Alencar 29 July 2011 (has links)
Coordenação de Aperfeiçoamento de Nível Superior / Game theory is a branch of mathematics devoted to the study of situations that arise when multiple decision-making agents seek to achieve their individual, possibly mutually conflicting, objectives. In its linear quadratic (LQ) dynamic formulation, the players' Nash equilibrium solutions can be obtained in terms of coupled algebraic Riccati equations which, depending on the numerical method used to compute them, may yield unsatisfactory results in terms of stability and numerical precision. This dissertation therefore proposes a new algorithm for an alternative solution of the coupled algebraic Riccati equations associated with LQ dynamic games with an open-loop information structure, using concepts from duality theory and convex static optimization. In addition, a new methodology is obtained for synthesizing a family of optimal controllers. Game theory also shows great potential for application to multi-objective control problems, including H-infinity control, which can be formulated as a zero-sum dynamic game. Under this formulation, the new methodologies proposed in this work are extended to H-infinity disturbance-rejection control problems, producing results with better performance and stability properties than those obtained via the modified algebraic Riccati equation. Finally, through numerical examples and computer simulations, the new methodologies are compared with traditional ones, highlighting the most relevant aspects of each approach.
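The coupled algebraic Riccati equations referred to in this abstract take, in the standard two-player open-loop LQ setting, the textbook form below. The exact system treated in the dissertation may differ, so this is only an orienting sketch.

```latex
% Standard two-player open-loop LQ game (textbook form; not necessarily the
% exact system treated in the dissertation):
\[
  \dot{x} = A x + B_1 u_1 + B_2 u_2, \qquad
  J_i = \int_0^{\infty} \bigl( x^{\top} Q_i\, x + u_i^{\top} R_i\, u_i \bigr)\, dt,
  \quad i = 1, 2 .
\]
% With S_i = B_i R_i^{-1} B_i^{\top}, an open-loop Nash equilibrium
% u_i = -R_i^{-1} B_i^{\top} P_i\, x is characterized by the coupled
% algebraic Riccati equations
\[
  A^{\top} P_1 + P_1 A + Q_1 - P_1 S_1 P_1 - P_1 S_2 P_2 = 0,
\]
\[
  A^{\top} P_2 + P_2 A + Q_2 - P_2 S_2 P_2 - P_2 S_1 P_1 = 0 .
\]
```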
216

Precificação em orquestradores de informação: maximizando redes estáveis / Pricing by information orchestrators: maximizing stable networks

Lustosa, Bernardo Carvalho 13 August 2013 (has links)
In innovation networks based on information exchange, the orchestrating actor, or hub, captures information from the peripheral actors, promotes innovation, and then distributes it to the network in the form of added value. Orchestration comprises promoting the network's stability in order to avoid negative growth rates. Credit and fraud bureaus, for example, can be understood as orchestrating hubs, concentrating the historical information of the population generated by their clients and offering products that support decision making. Assuming all the companies in this ecosystem are rational agents, game theory emerges as an appropriate framework for studying pricing as a mechanism to promote the network's stability. The present work focuses on identifying the relationship between the different pricing structures that the orchestrating hub can propose and the network's stability and efficiency. Since the network's power is given by the combined strength of its members, the innovation generated is a function of each peripheral agent's isolated decision on whether to hire the orchestrating hub's services at the price defined by the latter. Through the definition of a simplified theoretical game in which agents decide whether or not to connect to the network under the pricing structure defined by the hub, the study analyzes the equilibrium conditions and concludes that the Nash equilibrium entails the network's stability. One conclusion is that, in order to maximize the innovation power of the network, each agent should be charged a price proportional to the financial benefit it obtains from the innovation generated by the network. The study also presents a computer simulation of a fictitious market as a numerical demonstration of the observed effects. With these conclusions, the study fills a gap in the literature on innovation networks with monopolistic orchestrating agents in terms of pricing the use of the network, and can serve as a basis for decision making on both the supply and the demand sides of the hub's services.
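As a rough illustration of the participation game described in this abstract, the sketch below iterates best responses for agents deciding whether to connect to an orchestrated network under flat versus benefit-proportional pricing. All names, payoffs, and parameters are hypothetical and are not taken from the thesis.

```python
import numpy as np

def stable_participation(benefits, price_fn, max_iter=100):
    """Best-response iteration for a simple network participation game.

    benefits[i] is agent i's standalone value weight; an agent's realized
    benefit scales with the fraction of agents connected (network effect).
    price_fn(i, benefit) returns the fee the orchestrator charges agent i.
    Returns the participation vector at the fixed point (a Nash equilibrium
    of the simultaneous join/leave game, if the iteration converges).
    """
    n = len(benefits)
    joined = np.ones(n, dtype=bool)           # start with everyone connected
    for _ in range(max_iter):
        network_effect = joined.mean()        # value grows with participation
        realized = benefits * network_effect
        new_joined = np.array([realized[i] >= price_fn(i, realized[i])
                               for i in range(n)])
        if np.array_equal(new_joined, joined):
            break
        joined = new_joined
    return joined

rng = np.random.default_rng(0)
benefits = rng.uniform(1.0, 10.0, size=50)    # hypothetical agent benefits

# Flat price: low-benefit agents drop out, shrinking the network for everyone.
flat = stable_participation(benefits, lambda i, b: 4.0)
# Proportional price: each agent pays a share of its own realized benefit,
# so no agent is ever priced out and the full network remains stable.
prop = stable_participation(benefits, lambda i, b: 0.5 * b)

print("flat pricing, participants:", int(flat.sum()))
print("proportional pricing, participants:", int(prop.sum()))
```

Under the proportional rule no agent is ever charged more than its realized benefit, so the full network is a stable outcome, which mirrors the abstract's conclusion about pricing proportional to benefit.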
217

Méthodes efficaces de capture de front de Pareto en conception mécanique multicritère : applications industrielles / Efficient methods for capturing the Pareto front in multicriteria mechanical design: industrial applications

Benki, Aalae 28 January 2014 (has links)
One of the current challenges in multiobjective shape optimization is to reduce the calculation time required by conventional methods. The high computational cost is due to the large number of simulations, or function calls, these methods require. Recently, several studies have sought to overcome this problem by integrating a metamodel into the overall optimization loop. In this thesis, we couple the Normal Boundary Intersection (NBI) and Normalized Normal Constraint Method (NNCM) algorithms with a Radial Basis Function (RBF) metamodel in order to obtain a simple tool with a reasonable calculation time for solving multicriteria optimization problems. First, we apply our approach to academic test cases. We then validate our method on two industrial cases proposed by our industrial partner, namely, shape optimization of the bottom of a can undergoing nonlinear elasto-plastic deformation (a 2D model) and optimization of an automotive twist beam (a 3D linear elastic model). Finally, in order to select solutions among the Pareto-efficient ones, we use the same surrogate approach to implement a method for computing Nash and Kalai-Smorodinsky equilibria, two selection approaches drawn from game theory. The results obtained confirm the efficiency of the developed algorithms.
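The following sketch illustrates the general metamodel idea behind this kind of approach: fit cheap RBF surrogates to a handful of expensive objective evaluations, then screen the surrogate predictions for non-dominated points. It is not an implementation of NBI or NNCM; the objectives, sample sizes, and bounds are placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Two "expensive" objectives to minimize; stand-ins for costly FE simulations.
def f1(x): return (x[:, 0] - 1.0) ** 2 + (x[:, 1] - 1.0) ** 2
def f2(x): return (x[:, 0] + 1.0) ** 2 + (x[:, 1] + 1.0) ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(-2.0, 2.0, size=(40, 2))     # few expensive samples
surr1 = RBFInterpolator(X_train, f1(X_train))      # RBF metamodels of the
surr2 = RBFInterpolator(X_train, f2(X_train))      # two objectives

# Query the cheap surrogates densely and keep the non-dominated points,
# an approximation of the Pareto front without further expensive calls.
X_query = rng.uniform(-2.0, 2.0, size=(5000, 2))
F = np.column_stack([surr1(X_query), surr2(X_query)])
pareto = [i for i in range(len(F))
          if not np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))]
print("approximate Pareto-optimal designs:", len(pareto))
```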
218

Algorithms For Stochastic Games And Service Systems

Prasad, H L 05 1900 (has links) (PDF)
This thesis is organized into two parts, one for my main area of research in the field of stochastic games, and the other for my contributions in the area of service systems. We first provide an abstract for my work in stochastic games. The field of stochastic games has been actively pursued over the last seven decades because of several of its important applications in oligopolistic economics. In the past, zero-sum stochastic games have been modelled and solved for Nash equilibria using the standard techniques of Markov decision processes. General-sum stochastic games, on the contrary, have posed difficulty as they cannot be reduced to Markov decision processes. Over the past few decades the quest for algorithms to compute Nash equilibria in general-sum stochastic games has intensified, and several important algorithms such as the stochastic tracing procedure [Herings and Peeters, 2004], NashQ [Hu and Wellman, 2003], FFQ [Littman, 2001], etc., and their generalised representations such as the optimization problem formulations for various reward structures [Filar and Vrieze, 1997] have been proposed. However, they either lack generality, are intractable for even medium-sized problems, or both. In our venture towards algorithms for stochastic games, we start with a non-linear optimization problem and then design a simple gradient descent procedure for it. Though this procedure gives the Nash equilibrium for a sample problem of terrain exploration, we observe that, in general, this need not be the case. We characterize the necessary conditions and define the notion of a KKT-N point. KKT-N points are those Karush-Kuhn-Tucker (KKT) points which correspond to Nash equilibria. Thus, for a simple gradient-based algorithm to guarantee convergence to a Nash equilibrium, all KKT points of the optimization problem need to be KKT-N points, which restricts the applicability of such algorithms. We then take a step back and look for a better characterization of those points of the optimization problem which correspond to Nash equilibria of the underlying game. As a result of this exploration, we derive two sets of necessary and sufficient conditions. The first set, the KKT-SP conditions, is inspired by the KKT conditions themselves and is obtained by breaking down the main optimization problem into several sub-problems and then applying the KKT conditions to each of those sub-problems. The second set, the SG-SP conditions, is a simplified set of conditions which characterizes those Nash points more compactly. Using the KKT-SP and SG-SP conditions, we propose three algorithms, OFF-SGSP, ON-SGSP and DON-SGSP, which we show provide Nash equilibrium strategies for general-sum discounted stochastic games. Here OFF-SGSP is an off-line algorithm while ON-SGSP and DON-SGSP are on-line algorithms. In particular, we believe that DON-SGSP is the first decentralized on-line algorithm for general-sum discounted stochastic games. We show that both our on-line algorithms are computationally efficient. In fact, we show that DON-SGSP is not only applicable to multi-agent scenarios but is also directly applicable to the single-agent case, i.e., MDPs (Markov Decision Processes). The second part of the thesis focuses on formulating and solving the problem of minimizing the labour cost in service systems. We define the setting of service systems and then model the labour-cost problem as a constrained discrete-parameter Markov-cost process.
This Markov process is parametrized by the number of workers in various shifts and with various skill levels. With the number of workers as optimization variables, we provide a detailed formulation of a constrained optimization problem in which the objective is the expected long-run average of the single-stage labour costs, and the main set of constraints is the expected long-run average of aggregate SLAs (Service Level Agreements). For this constrained optimization problem, we provide two stochastic optimization algorithms, SASOC-SF-N and SASOC-SF-C, which use smoothed-functional approaches to estimate the gradient and perform gradient descent on the aforementioned constrained optimization problem. SASOC-SF-N uses a Gaussian distribution for smoothing while SASOC-SF-C uses a Cauchy distribution. SASOC-SF-C is the first Cauchy-based smoothing algorithm that requires a fixed number (two) of simulations independent of the number of optimization variables. We show that these algorithms provide an order-of-magnitude better performance than the existing industry-standard tool, OptQuest. We also show that SASOC-SF-C gives the better overall performance.
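As an orientation to the smoothed-functional idea used by the SASOC algorithms, the sketch below shows a generic two-simulation Gaussian smoothed-functional gradient estimator driving a simple projected gradient descent. The constraint handling, long-run-average objectives, and other specifics of SASOC-SF-N and SASOC-SF-C are not reproduced here, and the toy cost function is hypothetical.

```python
import numpy as np

def sf_gradient(objective, theta, beta=0.1, num_samples=20, rng=None):
    """Two-simulation Gaussian smoothed-functional gradient estimate.

    For a simulation-based objective J, perturb theta by Gaussian noise eta,
    evaluate J(theta + beta*eta) and J(theta - beta*eta), and average
    eta * (J+ - J-) / (2*beta): this estimates the gradient of a smoothed
    version of J without requiring analytic derivatives.
    """
    if rng is None:
        rng = np.random.default_rng()
    grad = np.zeros_like(theta)
    for _ in range(num_samples):
        eta = rng.standard_normal(theta.shape)
        j_plus = objective(theta + beta * eta)
        j_minus = objective(theta - beta * eta)
        grad += eta * (j_plus - j_minus) / (2.0 * beta)
    return grad / num_samples

# Toy stand-in for a noisy staffing-cost simulation over worker counts.
def noisy_cost(theta):
    return np.sum((theta - 3.0) ** 2) + np.random.normal(scale=0.1)

theta = np.array([10.0, 1.0, 5.0])
for step in range(200):                         # projected gradient descent
    theta = np.clip(theta - 0.05 * sf_gradient(noisy_cost, theta), 0.0, None)
print("estimated optimal worker counts (toy problem):", np.round(theta, 2))
```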
219

Voluntary Participation Games in Public Good Mechanisms: Coalitional Deviations and Efficiency / 公共財供給メカニズムへの参加ゲーム : 結託離脱と効率性

Shinohara, Ryusuke, 篠原, 隆介 14 June 2006 (has links)
Doctor of Economics / Otsu No. 354 / 112 p. / Hitotsubashi University
220

Application of game theory in Swedish raw material market : Investigating the pulpwood market

Al Halabi, Rami January 2020 (has links)
The research aims to analyze the market structure of two companies in the forest industry (Holmen and SCA) under the assumption that these companies compete in buying raw materials and selling products. The product market in this study is the paper market, under the assumption that both companies operate in a concentrated product market. The raw material market investigated is the pulpwood market, under the assumption that it is a duopsony. The study concludes that Holmen and SCA buy pulpwood from many different self-managing forest owners. Each company creates a monthly price list in which it sets its bid price for pulpwood, and the price varies by region. Both SCA and Holmen choose between two strategic decisions, either to bid high or to bid low. Through game theory, it became clear that each company uses mixed strategies, as they sometimes give high bids and sometimes give low bids. The Nash equilibria for mixed strategies were calculated mathematically and analyzed through dynamic game theory. The concentration of the product market was investigated through the Herfindahl-Hirschman index (HHI), and Porter's five-forces model was used to analyze the industry competition. The results showed that the product market is concentrated, as the HHI tests gave high index scores between 1700 and 3100. In addition, there existed a Nash equilibrium in mixed strategies that gave SCA an expected payoff of 1651 million SEK and Holmen 1295 million SEK. Dynamic game theory showed that SCA's and Holmen's bidding follows a repeating trajectory and that the high/low bidding is due to deviations from the Nash equilibrium probability distribution. The Nash equilibrium prevails if the probability of bidding low is 68.6 percent for SCA and 66.7 percent for Holmen. This provided indicators for a non-cooperative game. The conclusion is that if two players (mills)
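The mixed-strategy calculation described in this abstract can be sketched for a generic 2x2 bidding game as follows. The payoff matrices and market shares below are hypothetical stand-ins, not the thesis's estimates, and the resulting probabilities will not match the 68.6/66.7 percent figures quoted above.

```python
import numpy as np

def mixed_nash_2x2(A, B):
    """Interior mixed-strategy Nash equilibrium of a 2x2 bimatrix game.

    A[i, j], B[i, j]: payoffs to players 1 and 2 when player 1 plays row i
    and player 2 plays column j (rows/columns: 0 = bid high, 1 = bid low).
    Each player mixes so as to make the opponent indifferent between actions;
    valid only when such an interior equilibrium exists (p, q in [0, 1]).
    """
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] + B[1, 1] - B[1, 0])
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[1, 0] + A[1, 1] - A[0, 1])
    value1 = q * A[0, 0] + (1 - q) * A[0, 1]        # expected payoff, player 1
    value2 = p * B[0, 0] + (1 - p) * B[1, 0]        # expected payoff, player 2
    return p, q, value1, value2

def hhi(market_shares):
    """Herfindahl-Hirschman index from market shares given as fractions."""
    return float(np.sum((np.asarray(market_shares) * 100.0) ** 2))

# Hypothetical payoff matrices (millions of SEK); not the thesis's estimates.
A = np.array([[1200.0, 1800.0], [1400.0, 1600.0]])   # player 1 (e.g. SCA)
B = np.array([[1000.0, 1250.0], [1500.0, 1300.0]])   # player 2 (e.g. Holmen)
p, q, v1, v2 = mixed_nash_2x2(A, B)
print("prob of bidding high:", round(p, 3), round(q, 3))
print("expected payoffs:", round(v1, 1), round(v2, 1))
print("HHI of hypothetical market shares:", hhi([0.45, 0.35, 0.20]))
```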
