About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Democracy and the Common Good : A Study of the Weighted Majority Rule

Berndt Rasmussen, Katharina January 2013 (has links)
In this study I analyse the performance of a democratic decision-making rule: the weighted majority rule. It assigns to each voter a number of votes that is proportional to her stakes in the decision. It has been shown that, for collective decisions with two options, the weighted majority rule in combination with self-interested voters maximises the common good when the latter is understood in terms of either the sum-total or prioritarian sum of the voters' well-being. The main result of my study is that this argument for the weighted majority rule (that it maximises the common good) can be improved along the following three main lines. (1) The argument can be adapted to other criteria of the common good, such as sufficientarian, maximin, leximin or non-welfarist criteria. I propose a generic argument for the collective optimality of the weighted majority rule that works for all of these criteria. (2) The assumption of self-interested voters can be relaxed. First, common-interest voters can be accommodated. Second, even if voters are less than fully competent in judging their self-interest or the common interest, the weighted majority rule is weakly collectively optimal, that is, it almost certainly maximises the common good given a large number of voters. Third, even for smaller groups of voters, the weighted majority rule still has some attractive features. (3) The scope of the argument can be extended to decisions with more than two options. I state the conditions under which the weighted majority rule maximises the common good even in multi-option contexts. I also analyse the possibility and the detrimental effects of strategic voting. Furthermore, I argue that self-interested voters have reason to accept the weighted majority rule.
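The two-option optimality claim can be checked numerically. The following sketch is my own illustration, not material from the thesis: it assumes voters know their stakes exactly and vote sincerely, with each voter's stake equal to her well-being difference between the two options.

```python
import random

def weighted_majority(diffs):
    """Two options, A and B. diffs[i] is voter i's well-being under A minus
    her well-being under B; her stake is |diffs[i]| and she casts that many
    votes self-interestedly."""
    votes_a = sum(d for d in diffs if d > 0)    # stake-weighted votes for A
    votes_b = sum(-d for d in diffs if d < 0)   # stake-weighted votes for B
    return 'A' if votes_a > votes_b else 'B'

def sum_total_optimum(diffs):
    """The option maximising the sum-total of the voters' well-being."""
    return 'A' if sum(diffs) > 0 else 'B'

# Since votes_a - votes_b equals sum(diffs), the two criteria coincide on
# every profile -- a numerical sanity check of the two-option argument.
random.seed(1)
for _ in range(10_000):
    diffs = [random.gauss(0.0, 1.0) for _ in range(25)]
    assert weighted_majority(diffs) == sum_total_optimum(diffs)
print("weighted majority matched the sum-total optimum on all 10,000 profiles")
```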
242

Universality and Specificity of Phonological Constraints: Stress and Phonotactics in English

Cutillas Espinosa, Juan Antonio 12 April 2006 (has links)
This dissertation belongs to the field of Optimality-Theoretic phonological studies. It starts from the assumption that Optimality Theory is an essentially universalistic framework, which results in considerable difficulty when dealing with language-specific patterns. The thesis discusses to what extent it is possible to carry out analyses of complex phonological patterns by maximizing the use of universal constraints, and it explores the boundary between acceptable and unacceptable uses of language-specific constraints. To illustrate these conflicts, we analyse stress assignment and phonotactic structure in English. More specifically, we look at previous approaches to the problem of specificity and suggest that language-specific aspects of grammar should be encoded by paradigmatic constraints based on the relation between different surface forms (O-O). We also operationalize the relation between regular and irregular forms via a mechanism called the Surface Register. We conclude that it is possible to offer a satisfactory description of complex phonological patterns within the Optimality Theory framework, provided that the unjustified use of language-specific markedness constraints is avoided.
243

ANALYSIS OF SHIPMENT CONSOLIDATION IN THE LOGISTICS SUPPLY CHAIN

Ulku, M. Ali January 2009 (has links)
Shipment Consolidation (SCL) is a logistics strategy that combines two or more orders or shipments so that a larger quantity can be dispatched on the same vehicle to the same market region. This dissertation aims to emphasize the importance of, and the substantial cost-saving opportunities that come with, SCL in a logistics supply chain, by offering new models and by improving on the current body of literature. Our research revolves around three main axes in SCL: Single-Item Shipment Consolidation (SISCL), Multi-Item Shipment Consolidation (MISCL), and Pricing and Shipment Consolidation. We investigate these topics by employing various Operations Research concepts and techniques such as renewal theory, dynamic optimization, and simulation. In SISCL, we focus on analytical models in which orders arrive randomly. First, we examine the conditions under which an SCL program enables positive savings. Then, in addition to the SCL policies currently used in practice and studied in the literature, i.e. the Quantity Policy (Q-P), Time Policy (T-P) and Hybrid Policy (H-P), we introduce a new one that we call the Controlled Dispatch Policy (CD-P), and we provide a cost-based comparison of these policies. We show that the Q-P yields the lowest cost per order among them, yet with the highest randomness in dispatch times. We also show that, among the service-level dependent policies (i.e. the CD-P, H-P and T-P), the H-P provides the lowest cost per order, while the CD-P turns out to be more flexible and responsive in dispatch times, again with a lower cost than the T-P. In MISCL, we construct dispatch decision rules. We employ a myopic analysis and show that it is optimal when costs and the order-arrival processes depend on the type of items. In a dynamic setting, we apply the concept of time-varying probability to integrate the dispatching and load-planning decisions. For the most common dispatch objectives, such as cost per order, cost per unit time or cost per unit weight, we use simulation and observe that the variability in both the cost and the optimal consolidation cycle is smaller for the objective of cost per unit weight. Finally, on our third axis, we study the joint optimization of pricing and a time-based SCL policy. We do this for a price- and time-sensitive logistics market, both for common carriage (transport by a public, for-hire trucking company) and private carriage (employing one's own fleet of trucks). The main motivation for introducing pricing in SCL decisions stems from the fact that transportation is a service, and demand is naturally affected by price. Suitable pricing decisions may influence the order-arrival rates, enabling extra savings. Those savings emanate from two sources: scale economies (in private carriage) or discount economies (in common carriage) that come with SCL, and additional revenue generated by employing an appropriate pricing scheme. Throughout the dissertation, we offer numerical examples and managerial insights. Suggestions for future research are also offered.
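As a rough illustration of how such dispatch policies can be compared on a cost-per-order basis, here is a minimal simulation of a Quantity Policy and a Time Policy under Poisson order arrivals. It is a sketch under assumed parameter values (dispatch cost, holding cost, arrival rate), not a model taken from the dissertation.

```python
import random

def cost_per_order(policy, rate=1.0, horizon=20_000.0, q=8, T=8.0,
                   dispatch_cost=100.0, holding_rate=0.5, seed=42):
    """Single-item consolidation with unit-size Poisson orders.
    policy='Q': dispatch as soon as q orders have accumulated.
    policy='T': dispatch at fixed epochs T, 2T, 3T, ... (skipped if empty).
    Cost = fixed dispatch cost + linear waiting cost per order per unit time."""
    random.seed(seed)
    t, total_cost, n_orders = 0.0, 0.0, 0
    waiting, next_epoch = [], T
    while t < horizon:
        t += random.expovariate(rate)          # next order arrival time
        if policy == 'T':
            while next_epoch <= t:             # process scheduled dispatches
                batch = [a for a in waiting if a <= next_epoch]
                if batch:
                    total_cost += dispatch_cost + holding_rate * sum(
                        next_epoch - a for a in batch)
                    waiting = [a for a in waiting if a > next_epoch]
                next_epoch += T
        waiting.append(t)
        n_orders += 1
        if policy == 'Q' and len(waiting) >= q:
            total_cost += dispatch_cost + holding_rate * sum(t - a for a in waiting)
            waiting = []
    return total_cost / n_orders

for p in ('Q', 'T'):
    print(f"{p}-policy: {cost_per_order(p):.2f} per order")
```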
245

Risk Measures Constituting Risk Metrics for Decision Making in the Chemical Process Industry

Prem, Katherine December 2010 (has links)
Catastrophic incidents in the process industry leave a marked legacy of staggering economic and societal losses incurred by the company, the government, and society. The work described herein proposes a novel approach to help predict and prevent potential catastrophes and to understand the stakes at risk, for better risk-informed decision making. The methodology includes societal impact among the risk measures, along with the monetization of tangible asset damage. Predicting incidents via leading metrics is pivotal to improving plant processes and to individual and societal safety in the vicinity of the plant (portfolio). From this study it can be concluded that a comprehensive judgment of all the risks and losses should entail the analysis of the overall results of all possible incident scenarios. Value-at-Risk (VaR) is most suitable as an overall measure for many scenarios and for a large number of portfolio assets. FN-curves and F$-curves can be correlated, which is very beneficial for understanding the trends of historical incidents in the U.S. chemical process industry. Analyzing historical databases can provide valuable information on incident occurrences and their consequences as lagging metrics (or lagging indicators) for the mitigation of portfolio risks. This study also concludes that there is a strong statistical relationship between the different consequence tiers of the safety pyramid, and that Heinrich's safety pyramid is comparable to data mined from the HSEES database. Furthermore, a chemical plant operation is robust only when a strategic balance is struck between optimal plant operations and maintaining health, safety, and a sustainable environment. The balance emerges from choosing the best option amidst several conflicting parameters, and strategies for normative decision making should be utilized for making choices under uncertainty. Hence, decision theory is utilized here to lay the framework for choosing the optimal portfolio among several competing portfolios. Finally, to understand the strategic interactions of the different contributing representative sets that play a key role in determining the most preferred action for optimum production and safety, the concepts of game theory are utilized, and a framework is provided as a novel application to the chemical process industry.
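For readers unfamiliar with Value-at-Risk, the empirical computation is simple. The sketch below uses invented, heavy-tailed incident losses purely for illustration; the loss model and dollar figures are assumptions, not data from this work.

```python
import random

def value_at_risk(losses, alpha=0.95):
    """Empirical VaR at level alpha: the smallest loss such that a fraction
    alpha of the simulated scenarios incur at most that loss."""
    ordered = sorted(losses)
    idx = min(int(alpha * len(ordered)), len(ordered) - 1)
    return ordered[idx]

# Hypothetical portfolio of incident-loss scenarios: a heavy-tailed Pareto
# model mimics frequent small losses punctuated by rare catastrophes.
random.seed(7)
losses = [random.paretovariate(1.5) * 1e5 for _ in range(100_000)]
print(f"95% VaR: ${value_at_risk(losses, 0.95):,.0f}")
print(f"99% VaR: ${value_at_risk(losses, 0.99):,.0f}")
```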
246

Asymptotic theory for decentralized sequential hypothesis testing problems and sequential minimum energy design algorithm

Wang, Yan 19 May 2011 (has links)
The dissertation investigates the asymptotic theory of decentralized sequential hypothesis testing problems, as well as the asymptotic behavior of the Sequential Minimum Energy Design (SMED). The main results are summarized as follows.
1. We develop the first-order asymptotic optimality theory for decentralized sequential multi-hypothesis testing under a Bayes framework. Asymptotically optimal tests are obtained from the class of "two-stage" procedures, and the optimal local quantizers are shown to be the "maximin" quantizers, characterized as a randomization of at most M-1 Unambiguous Likelihood Quantizers (ULQ) when testing M >= 2 hypotheses.
2. We generalize the classical Kullback-Leibler inequality to investigate the effects of quantization on the second-order and other general-order moments of log-likelihood ratios. It is shown that quantization may increase these quantities, but such an increase is bounded by a universal constant that depends on the order of the moment. This result provides a simpler sufficient condition for the asymptotic theory of decentralized sequential detection.
3. We propose a class of multi-stage tests for decentralized sequential multi-hypothesis testing problems, and show that with suitably chosen thresholds at the different stages, they retain second-order asymptotic optimality when the hypothesis testing problem is "asymmetric."
4. We characterize the asymptotic behavior of the SMED algorithm, particularly the denseness and distributions of the design points. In addition, we propose a simplified version of SMED that is computationally more efficient.
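To make the flavor of decentralized sequential testing concrete, here is a toy fusion-center test: each sensor transmits a one-bit quantized observation, and the center runs Wald's SPRT on the bit stream. This is my own simplified illustration (two hypotheses, Bernoulli bits), not the thesis's two-stage or multi-stage procedures.

```python
import math, random

def decentralized_sprt(p0=0.3, p1=0.7, alpha=0.05, beta=0.05,
                       true_p=0.7, seed=3):
    """Each one-bit sensor message is Bernoulli(p); the fusion center runs
    Wald's SPRT to decide H0: p = p0 versus H1: p = p1 with error
    probabilities approximately alpha and beta."""
    random.seed(seed)
    lower = math.log(beta / (1 - alpha))     # accept H0 below this threshold
    upper = math.log((1 - beta) / alpha)     # accept H1 above this threshold
    llr, n = 0.0, 0
    while lower < llr < upper:
        bit = 1 if random.random() < true_p else 0   # quantized observation
        llr += math.log((p1 if bit else 1 - p1) / (p0 if bit else 1 - p0))
        n += 1
    return ('H1' if llr >= upper else 'H0'), n

decision, n_bits = decentralized_sprt()
print(f"decision: {decision} after {n_bits} one-bit messages")
```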
247

Duality for convex composed programming problems

Vargyas, Emese Tünde 20 December 2004 (has links) (PDF)
The goal of this work is to present a conjugate duality treatment of composed programming, as well as to give an overview of some recent developments in both scalar and multiobjective optimization. To this end, we first study a single-objective optimization problem in which the objective function as well as the constraints are given by composed functions. By means of the conjugacy approach based on perturbation theory, we provide different kinds of dual problems and examine the relations between their optimal objective values. Under some additional assumptions, we verify the equality of the optimal objective values of the duals and strong duality between the primal and the dual problems, respectively. Having proved strong duality, we derive optimality conditions for each of these duals. As special cases of the original problem, we study duality for the classical optimization problem with inequality constraints and for the unconstrained optimization problem. The second part of this work is devoted to location analysis. Considering first the location model with monotonic gauges, it turns out that the same conjugate duality principle can also be used for solving this kind of problem. Replacing the monotonic gauges in the objective function with various norms, we investigate duality for different location problems. We finish our investigations with the study of composed multiobjective optimization problems. Here we first scalarize the problem and study the scalarized version using the conjugacy approach developed before. The optimality conditions obtained in this case allow us to construct a multiobjective dual problem to the primal one, and weak and strong duality are proved. In conclusion, some special cases of the composed multiobjective optimization problem are considered; particularizing the results, we construct a multiobjective dual for each of them and verify weak and strong duality.
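To sketch what the conjugacy approach looks like for a composed objective, consider the following schematic primal-dual pair; the notation and perturbation choice are my own illustration of the general technique, not the thesis's exact construction.

```latex
% Schematic composed problem and a conjugate dual via perturbation theory.
\[
  (P)\quad \inf_{x \in X} f\bigl(g(x)\bigr), \qquad
  f\colon \mathbb{R}^m \to \overline{\mathbb{R}} \ \text{convex}, \quad
  g\colon X \to \mathbb{R}^m .
\]
% Perturb the inner function, \Phi(x,y) = f(g(x) + y), and conjugate in y:
\[
  (D)\quad \sup_{\lambda \in \mathbb{R}^m}
  \Bigl\{ -f^{*}(\lambda) + \inf_{x \in X} \lambda^{\top} g(x) \Bigr\},
  \qquad
  f^{*}(\lambda) = \sup_{y \in \mathbb{R}^m}\bigl\{\lambda^{\top} y - f(y)\bigr\}.
\]
% Weak duality v(D) <= v(P) always holds; strong duality requires additional
% convexity/monotonicity and regularity assumptions of the kind verified above.
```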
248

A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion

Martin, James Robert, Ph. D. 18 September 2015 (has links)
Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, this task is computationally intractable using contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for the solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the “best-fit” parameters that match given observation data, as measured in a particular metric. In the statistical inverse problem, we go one step further and return not only a point estimate of the best medium properties, but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data and provides a complete description of parameter uncertainty. In this dissertation, a computational framework for such problems is described that wraps around existing forward solvers for a given physical problem, as long as they are appropriately equipped. A collection of tools, insights, and numerical methods may then be applied to solve the problem and interrogate the resulting posterior distribution, which describes our final state of knowledge. We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.
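The structure of such a statistical inverse problem is easiest to see in a tiny finite-dimensional analogue with a linear forward map and Gaussian prior and noise, where the posterior is available in closed form. The sketch below is illustrative only (random forward operator, assumed covariances); the dissertation's infinite-dimensional, PDE-constrained setting goes far beyond it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_param, n_obs = 4, 6
G = rng.standard_normal((n_obs, n_param))   # hypothetical linear forward map
m_true = rng.standard_normal(n_param)       # "true" parameters to recover
sigma = 0.1                                 # assumed observation-noise std
d = G @ m_true + sigma * rng.standard_normal(n_obs)   # synthetic data

C_prior_inv = np.eye(n_param)               # prior: zero mean, unit covariance
C_noise_inv = np.eye(n_obs) / sigma**2

# Linear-Gaussian Bayes: the posterior is Gaussian with
#   C_post = (G^T C_noise^{-1} G + C_prior^{-1})^{-1}
#   m_post = C_post G^T C_noise^{-1} d      (also the MAP point here)
C_post = np.linalg.inv(G.T @ C_noise_inv @ G + C_prior_inv)
m_post = C_post @ (G.T @ C_noise_inv @ d)

print("posterior mean (MAP):", np.round(m_post, 3))
print("posterior std dev:   ", np.round(np.sqrt(np.diag(C_post)), 3))
```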
249

Phonological Preservation of English Clips and Blends: An Optimality-Theoretic Analysis

林綠茜, Lin, Lu Chien Unknown Date (has links)
This thesis examines the nature of English clipping and blending from the perspective of Optimality Theory. Clipped and blended words may use phonological strategies to preserve part of the source word, such as its onset, a syllable, or a foot. Different preservation strategies yield different patterns of clipped or blended words, and this thesis illustrates that these strategies are determined by different cophonologies. There are four strategies for forming clipped words: clipped words can be preserved from the left edge or the right edge of the source, each of which follows either a bimoraic template or a disyllabic template. There are three strategies for forming blended words, depending on the position of the movable constraint MAXS2 in the ranking. In addition, the present study offers cross-linguistic evidence from Spanish blends, showing that Spanish blending and English blending share certain similarities and can be analysed with similar constraints under different rankings. To conclude, this thesis provides the first unified, constraint-based account of English clipping and blending.
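The cophonology idea (one constraint set, several rankings) can be demonstrated with a minimal OT evaluator. The candidates, constraint names, and violation counts below are invented for illustration and are not the tableaux of this thesis.

```python
def optimal(candidates, ranking):
    """OT evaluation: compare candidates' violation vectors lexicographically
    in the order given by the constraint ranking; the fewest violations on
    the highest-ranked distinguishing constraint wins."""
    return min(candidates,
               key=lambda c: [c['violations'].get(k, 0) for k in ranking])

# Invented toy tableau for a clipped word with two candidate outputs.
candidates = [
    {'form': 'left-edge clip',
     'violations': {'Anchor-L': 0, 'Anchor-R': 1, 'Max-Seg': 2}},
    {'form': 'right-edge clip',
     'violations': {'Anchor-L': 1, 'Anchor-R': 0, 'Max-Seg': 2}},
]

# Two cophonologies = two rankings of the same constraints, each selecting
# a different attested preservation pattern.
print(optimal(candidates, ['Anchor-L', 'Anchor-R', 'Max-Seg'])['form'])
print(optimal(candidates, ['Anchor-R', 'Anchor-L', 'Max-Seg'])['form'])
```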
250

The Phonological-Musical Strategies in Textsetting of Chinese Bible Verses

凌旺楨, Ling, Wang Chen Unknown Date (has links)
Scholars have long discussed the interaction between language and music. This study extends the issue to Chinese biblical hymns. The lyrics of the hymns discussed in this paper are taken from biblical verses, while the melodies were composed by Mou (2007). The study combines a corpus-based analysis with an Optimality Theory analysis. A corpus is constructed to observe syllable-note association, the alignment of linguistic and musical boundaries, and the rhythm of neutral-tone syllables. In terms of Optimality Theory, the optimal output of a hymn is governed by a set of ranked constraints. Both linking one syllable to multiple notes and inserting repeated lyrics into the hymn prevent notes from being left without any lyric; these two variants can be accounted for in Cophonology Theory by re-ranking Dep-σ and Uniformity-SM. The ranking of Align-R (Long, IP), Align-L (T, XP) and Align-R (T, IP) predicts the interaction between triplet boundaries and the lyrics, while the ranking of Rhythm-N and Align-R (Long, IP) determines the rhythm of neutral-tone syllables. To conclude, this study finds that music can be modified to accommodate syllable rhythm, and that language can likewise be influenced by music so as to satisfy the musical template.
