31

On the stability of cooperation under indirect reciprocity with first-order information

Berger, Ulrich, Grüne, Ansgar 07 1900 (has links) (PDF)
Indirect reciprocity describes a class of reputation-based mechanisms which may explain the prevalence of cooperation in large groups where partners meet only once. The first model for which this has been demonstrated was the image scoring mechanism. But analytical work on the simplest possible case, the binary scoring model, has shown that even small errors in implementation destabilize any cooperative regime. It has thus been claimed that for indirect reciprocity to stabilize cooperation, assessments of reputation must be based on higher-order information. Is indirect reciprocity relying on first-order information doomed to fail? We use a simple analytical model of image scoring to show that this need not be the case. Indeed, in the general image scoring model the introduction of implementation errors has just the opposite effect to that in the binary scoring model: it may stabilize rather than destabilize cooperation.
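The image scoring dynamics discussed in this abstract are easy to experiment with numerically. Below is a minimal, hypothetical Python sketch of binary image scoring with discriminator players and an implementation-error probability; it is only an agent-based toy, not the authors' analytical model, and all parameter values are illustrative.

```python
import random

def simulate_image_scoring(n_agents=100, rounds=20000, error_rate=0.01,
                           benefit=1.0, cost=0.3, seed=0):
    """Toy simulation of binary image scoring with implementation errors.

    Every agent plays a discriminator strategy: intend to help whenever the
    recipient's image score is non-negative.  With probability `error_rate`
    an intended donation fails (implementation error).  Helping earns the
    donor score +1, refusing earns -1; scores are truncated to {-1, 0, +1}
    as in the binary scoring model.
    """
    random.seed(seed)
    scores = [0] * n_agents
    payoffs = [0.0] * n_agents
    cooperative_acts = 0

    for _ in range(rounds):
        donor, recipient = random.sample(range(n_agents), 2)
        intends_to_help = scores[recipient] >= 0
        # an implementation error turns an intended donation into a refusal
        helps = intends_to_help and random.random() > error_rate
        if helps:
            payoffs[donor] -= cost
            payoffs[recipient] += benefit
            cooperative_acts += 1
        scores[donor] = max(-1, min(1, scores[donor] + (1 if helps else -1)))

    return cooperative_acts / rounds

if __name__ == "__main__":
    for eps in (0.0, 0.01, 0.05):
        print(f"error rate {eps:.2f}: cooperation level "
              f"{simulate_image_scoring(error_rate=eps):.3f}")
```

Varying the error rate in such a toy model gives a feel for the destabilizing role of implementation errors that the abstract refers to.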
32

Saddle point techniques in convex composite and error-in-measurement optimization

He, Niao 07 January 2016 (has links)
This dissertation aims to develop efficient algorithms with improved scalability and stability properties for large-scale optimization and optimization under uncertainty, and to bridge some of the gaps between modern optimization theories and recent applications emerging in the Big Data environment. To this end, the dissertation is dedicated to two important subjects -- i) Large-scale Convex Composite Optimization and ii) Error-in-Measurement Optimization. In spite of the different natures of these two topics, the common denominator, to be presented, lies in their accommodation of the systematic use of saddle point techniques for mathematical modeling and numerical processing. The main body can be split into three parts. In the first part, we consider a broad class of variational inequalities with composite structures, which covers the saddle point/variational analogues of classical convex composite minimization (i.e. minimizing the sum of a smooth convex function and a simple nonsmooth convex function). We develop novel composite versions of the state-of-the-art Mirror Descent and Mirror Prox algorithms aimed at solving problems of this type. We demonstrate that the algorithms inherit the favorable efficiency estimates of their prototypes when solving structured variational inequalities. Moreover, we develop several variants of the composite Mirror Prox algorithm along with their corresponding complexity bounds, allowing the algorithm to handle the case of an imprecise prox mapping as well as the case when the operator is represented by an unbiased stochastic oracle. In the second part, we investigate four general types of large-scale convex composite optimization problems: (a) multi-term composite minimization, (b) linearly constrained composite minimization, (c) norm-regularized nonsmooth minimization, and (d) maximum likelihood Poisson imaging. We demonstrate that the composite Mirror Prox, when integrated with saddle point techniques and other algorithmic tools, can solve all these optimization problems with the best rates of convergence known so far. Our main related contributions are as follows. First, for problems of type (a), we develop an optimal algorithm by integrating the composite Mirror Prox with a saddle point reformulation based on exact penalty. Second, for problems of type (b), we develop a novel algorithm that reduces the problem to solving a "small series" of saddle point subproblems and achieves an optimal, up to log factors, complexity bound. Third, for problems of type (c), we develop a Semi-Proximal Mirror-Prox algorithm by leveraging the saddle point representation and linear minimization over the problem's domain, attaining optimality both in the number of calls to the first order oracle representing the objective and in the number of calls to the linear minimization oracle representing the problem's domain. Lastly, for problems of type (d), we show that the composite Mirror Prox, when applied to the saddle point reformulation, circumvents the difficulty with the non-Lipschitz continuity of the objective and exhibits a better convergence rate than the typical rate for nonsmooth optimization. We conduct extensive numerical experiments and illustrate the practical potential of our algorithms in a wide spectrum of applications in machine learning and image processing.
In the third part, we examine error-in-measurement optimization, referring to decision-making problems with data subject to measurement errors; such problems arise naturally in a number of important applications, such as privacy learning, signal processing, and portfolio selection. Due to the postulated observation scheme and the specific structure of the problem, straightforward application of standard stochastic optimization techniques such as Stochastic Approximation (SA) and Sample Average Approximation (SAA) is out of the question. Our goal is to develop computationally efficient and, hopefully, not too conservative data-driven techniques applicable to a broad scope of problems and allowing for theoretical performance guarantees. We present two such approaches -- one depending on a fully algorithmic calculus of saddle point representations of convex-concave functions and the other depending on a general approximation scheme of convex stochastic programming. Both approaches allow us to convert the problem of interest into a form amenable to SA or SAA. The latter developments are primarily focused on two important applications -- affine signal processing and indirect support vector machines.
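As a concrete illustration of the saddle point machinery underlying both parts, here is a minimal sketch of a plain (non-composite) Mirror Prox iteration with entropy prox terms, applied to a bilinear matrix game over probability simplices. It is a generic textbook instance, not the dissertation's composite algorithm, and the step-size choice is an illustrative assumption.

```python
import numpy as np

def mirror_prox_matrix_game(A, iterations=2000):
    """Plain Mirror Prox for min_x max_y x^T A y over probability simplices.

    Uses the entropy prox function, whose prox mapping on the simplex is a
    multiplicative-weights update.  Returns the averaged iterates, which
    approximate a saddle point (Nash equilibrium) of the matrix game.
    """
    m, n = A.shape
    x = np.full(m, 1.0 / m)
    y = np.full(n, 1.0 / n)
    gamma = 1.0 / np.abs(A).max()      # step size ~ 1/L for the entropy setup
    x_avg = np.zeros(m)
    y_avg = np.zeros(n)

    def prox(p, g):
        # entropy prox step on the simplex: multiplicative update + renormalize
        q = p * np.exp(-g)
        return q / q.sum()

    for _ in range(iterations):
        # extrapolation step at the current point
        u = prox(x, gamma * (A @ y))
        v = prox(y, -gamma * (A.T @ x))
        # proximal step using the operator evaluated at the extrapolated point
        x = prox(x, gamma * (A @ v))
        y = prox(y, -gamma * (A.T @ u))
        x_avg += u
        y_avg += v

    return x_avg / iterations, y_avg / iterations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 40))
    x, y = mirror_prox_matrix_game(A)
    # duality gap: max_y x^T A y - min_x x^T A y  (should be small)
    print("duality gap:", (A.T @ x).max() - (A @ y).min())
```

The composite variants described in the abstract extend this basic extragradient scheme to handle an additional nonsmooth term, inexact prox mappings, and stochastic oracles.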
33

On collapsible pushdown automata, their graphs and the power of links

Broadbent, Christopher H. January 2011 (has links)
Higher-Order Pushdown Automata (HOPDA) are abstract machines equipped with a nested stack of stacks ... of stacks of stacks. Collapsible pushdown automata (CPDA) enhance these stacks with the addition of ‘links’ emanating from atomic elements to the higher-order stacks below. For trees, CPDA are equi-expressive with recursion schemes, which can be viewed as simply-typed λY terms. With vanilla HOPDA, one can only capture schemes satisfying a syntactic constraint called safety. This dissertation begins with some results concerning the significance of links in terms of recursion schemes. We introduce a fine-grained notion of safety that allows us to correlate the need for links of a given order with the imposition of safety on variables of a corresponding order. This generalises some joint work with William Blum that shows we can dispense with homogeneous types when characterising safety. We complement this result with a demonstration that homogeneity by itself does not constrain the expressivity of otherwise unrestricted recursion schemes. The main results of the dissertation, however, concern the configuration graphs of CPDA. Whilst the configuration graphs of HOPDA are well understood and have decidable MSO theories (they coincide with the Caucal hierarchy), relatively little is known about the transition graphs of CPDA. It is known that they already have undecidable MSO theories at order-2, but Kartzow recently showed that 2-CPDA graphs are tree automatic and hence first-order logic is decidable at order-2. We provide a characterisation of the decidability of first-order logic on CPDA graphs in terms of quantifier-alternation and the order of CPDA stacks and the links contained within. Whilst this characterisation is fairly comprehensive, we do leave open the question of decidability for some sub-classes of CPDA. It turns out that decidability can be highly sensitive to the order of links in a stack relative to the order of the stack itself. In addition to some strong and surprising undecidability results, we also develop Kartzow's work on 2-CPDA further. We introduce prefix-rewrite systems for nested-words that characterise the configuration graphs of both 2-CPDA and 2-HOPDA, capturing the power of collapse precisely in terms outside of the language of CPDA. It also formalises and demonstrates the inherent asymmetry of the collapse operation. This generalises the rational prefix-rewriting systems characterising conventional pushdown graphs and, we believe, establishes the 2-CPDA graphs as an interesting and robust class.
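To make the stack discipline concrete, the following toy Python model sketches an order-2 stack with collapse links under the commonly used semantics (push_1 optionally records how many 1-stacks lie strictly below the top 1-stack; push_2 copies the top 1-stack together with its links; collapse truncates the 2-stack to the recorded prefix). It is an illustrative simplification, not a construction from the dissertation.

```python
class Order2Stack:
    """Toy order-2 collapsible stack: a stack of 1-stacks whose symbols may
    carry a link recording how many 1-stacks lay below the top 1-stack at
    push time.  Collapse truncates the whole 2-stack to that prefix."""

    def __init__(self, bottom="⊥"):
        self.stacks = [[(bottom, None)]]   # one 1-stack holding the bottom symbol

    def push1(self, symbol, with_link=False):
        link = len(self.stacks) - 1 if with_link else None
        self.stacks[-1].append((symbol, link))

    def pop1(self):
        return self.stacks[-1].pop()

    def push2(self):
        # duplicate the topmost 1-stack; copied symbols keep their links,
        # which is exactly what makes collapse more powerful than pop_2
        self.stacks.append(list(self.stacks[-1]))

    def pop2(self):
        self.stacks.pop()

    def collapse(self):
        _, link = self.stacks[-1][-1]
        if link is not None:
            self.stacks = self.stacks[:link]

    def top(self):
        return self.stacks[-1][-1][0]


s = Order2Stack()
s.push2()                      # two copies of the bottom 1-stack
s.push1("a", with_link=True)   # link records the single 1-stack below
s.push2()                      # duplicate; the copy of 'a' keeps its link
s.push1("b")
s.pop1()                       # 'a' (with its link) is on top again
s.collapse()                   # discard everything above the linked prefix
print(len(s.stacks))           # -> 1, unlike a single pop_2
```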
34

Material screening and performance analysis of active magnetic heat pumps

Niknia, Iman 26 April 2017 (has links)
With the discovery of the magnetocaloric effect, efforts began to utilize magnetocaloric materials in cycles to generate cooling power. The magnetocaloric effect is a physical phenomenon observed in some magnetic materials whose temperature increases and decreases with the application and removal of a magnetic field. Usually the adiabatic temperature change observed in magnetocaloric materials is too small for room temperature refrigeration. A solution to this problem is to use magnetocaloric materials in an active magnetic regenerator (AMR) cycle. In this study a detailed numerical model is developed, validated, and used to improve our understanding of AMR systems. A one dimensional, time dependent model is used to study the performance of an active magnetic regenerator. Parameters related to device configuration such as external heat leaks and demagnetization effects are included. Performance is quantified in terms of cooling power and second law efficiency for a range of displaced fluid volumes and operating frequencies. Simulation results show that a step change model for the applied field can be used effectively instead of the full field waveform if the flow-weighted average low and high field values are used. This is an important finding, as it can greatly reduce the time required to solve the numerical problem. In addition, the effects of external losses on measured AMR performance are quantified. The performance of eight cases of known magnetocaloric materials (including the first order material MnFeP1-xAsx and the second order materials Gd, GdDy, and Tb) and 15 cases of hypothetical materials is considered. Using a fixed regenerator matrix geometry, magnetic field, and flow waveforms, the maximum exergetic cooling power of each material is identified. Several material screening metrics such as RCP and RC are tested and a linear correlation is found between RCPMax and the maximum exergetic cooling power. The sensitivity of performance to variations in the hot side and cold side temperatures away from the conditions giving maximum exergetic power is determined. A 2 K variation in operating temperature is found to reduce cooling power by up to 20% for a second order material, but by up to 70% for a first order material. A detailed numerical analysis along with experimental measurements is used to study the behavior of a typical first order material (MnFeP1-xSix samples) in an AMR. For certain operating conditions, it is observed that multiple points of equilibrium (PE) exist for a fixed heat rejection temperature. Stable and unstable PEs are identified and the behavior of these points is analysed. The impacts of heat loads, operating conditions and configuration losses on the number of PEs are discussed, and it is shown that the existence of multiple PEs can affect the performance of an AMR significantly. Thermal hysteresis, along with multiple PEs, is considered to be among the main factors that contribute to the temperature-history-dependent performance of first order materials when used in an AMR. / Graduate / 0548 / iniknia@uvic.ca
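For reference, the exergetic cooling power and second law efficiency used as performance measures above are commonly defined as follows (standard definitions with heat absorbed at the cold-side temperature T_c and rejected at the hot-side temperature T_h; the thesis may adopt slightly different conventions):

```latex
% Exergetic cooling power and second-law efficiency (standard definitions)
\dot{E}x_{c} = \dot{Q}_{c}\,\frac{T_{h}-T_{c}}{T_{c}},
\qquad
\eta_{II} = \frac{\dot{E}x_{c}}{\dot{W}}
          = \frac{\dot{Q}_{c}\,(T_{h}-T_{c})}{T_{c}\,\dot{W}},
```

where Q̇_c is the cooling power and Ẇ the net input power driving the cycle.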
35

Learning to cooperate via indirect reciprocity

Berger, Ulrich 07 September 2010 (has links) (PDF)
Cooperating in the Prisoner's Dilemma is irrational, and some supporting mechanism is needed to stabilize cooperation. Indirect reciprocity based on reputation is one such mechanism. Assessing an individual's reputation requires first-order information, i.e. knowledge about its previous behavior, as utilized under image scoring. But there seems to be agreement that in order to successfully stabilize cooperation, higher-order information is necessary, i.e. knowledge of others' previous reputations. We show here that such a conclusion might have been premature. Tolerant scoring, a first-order assessment rule with built-in tolerance against single defections, can lead a society to stable cooperation. (author's abstract)
36

Teorie a algebry formulí / Theories and algebras of formulas

Garlík, Michal January 2011 (has links)
In the present work we study first-order theories and their Lindenbaum algebras by analyzing the properties of the chain (B_n^T)_{n<ω}, called the B-chain, where B_n^T is the subalgebra of the Lindenbaum algebra given by formulas with up to n free variables. We enrich the structure of the Lindenbaum algebra in order to capture some differences between theories with term-by-term isomorphic B-chains. Several examples of theories and calculations of their B-chains are given. We also construct a model of Robinson arithmetic whose n-th algebras of definable sets are isomorphic to the Cartesian product of the countable atomic saturated Boolean algebra and the countable atomless Boolean algebra.
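For readers unfamiliar with the construction, the Lindenbaum algebra of a theory T and the subalgebras B_n^T referred to above are standardly defined as follows (a generic textbook definition, not this thesis's enriched structure):

```latex
% Lindenbaum algebra of a first-order theory T and its n-variable subalgebras
\varphi \sim_T \psi \;\Longleftrightarrow\; T \vdash \varphi \leftrightarrow \psi,
\qquad
[\varphi] \le [\psi] \;\Longleftrightarrow\; T \vdash \varphi \rightarrow \psi,
\qquad
B_n^T \;=\; \bigl\{\, [\varphi] \;:\; \mathrm{FV}(\varphi) \subseteq \{x_1,\dots,x_n\} \,\bigr\},
```

so that B_0^T ⊆ B_1^T ⊆ … is the B-chain analyzed in the thesis.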
37

Characterization of Magnetic Nanostructured Materials by First Order Reversal Curve Method

Lenormand, Denny R 02 August 2012 (has links)
The interactions and magnetization reversal of Ni nanowire arrays and synthetic antiferromagnetically coupled thin film trilayers have been investigated through the first order reversal curve (FORC) method. Through quantitative analysis of the local interaction field distributions obtained from FORC, the method has proven to be a powerful characterization tool that can reveal subtle features of magnetic interactions.
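The FORC distribution underlying the method is conventionally defined as a mixed second derivative of the magnetization measured along the reversal curves (standard definition; some authors omit the 1/2 prefactor):

```latex
% FORC distribution from magnetization M(H_r, H) measured at reversal field H_r
% and applied field H >= H_r along each first order reversal curve
\rho(H_r, H) \;=\; -\frac{1}{2}\,\frac{\partial^{2} M(H_r, H)}{\partial H_r\,\partial H}
```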
38

A Semantic Conception of Truth

Lumpkin, Jonathan 01 May 2014 (has links)
I explore three main points in Alfred Tarski’s Semantic Conception of Truth and the Foundation of Theoretical Semantics: (1) his physicalist program, (2) a general theory of truth, and (3) the necessity of a metalanguage when defining truth. Hartry Field argued that Tarski’s theory of truth failed to accomplish what it set out to do, which was to ground truth and semantics in physicalist terms. I argue that Tarski has been adequately defended by Richard Kirkham. Development of logic in the past three decades has created a shift away from Fregean and Russellian understandings of quantification to an independent conception of quantification in independence-friendly first-order logic. This shift has changed some of the assumptions that led to Tarski’s Impossibility Theorem.
39

Development of a novel rate-modulated fixed dose analgesic combination for the treatment of mild to moderate pain

Hobbs, Kim Melissa 17 September 2010 (has links)
MSc (Med), Dept of Pharmacy and Pharmacology, Faculty of Health Sciences, University of the Witwatersrand / Pain is the net effect of multidimensional mechanisms that engage most parts of the central nervous system (CNS), and the treatment of pain is one of the key challenges in clinical medicine (Le Bars et al., 2001; Miranda et al., 2008). Polypharmacy is seen as a barrier to analgesic treatment compliance, signifying the necessity for the development of fixed dose combinations (FDCs), which allow the number of tablets administered to be reduced with no associated loss in efficacy or increase in the prevalence of side effects (Torres Morera, 2004). FDCs of analgesic drugs with differing mechanisms of nociceptive modulation offer benefits including synergistic analgesic effects, where the individual agents act in a greater than additive manner, and a reduced occurrence of side effects (Raffa, 2001; Camu, 2002). This study aimed to produce a novel, rate-modulated, fixed-dose analgesic formulation for the treatment of mild to moderate pain. The fixed-dose combination (FDC) rationale of paracetamol (PC), tramadol hydrochloride (TM) and diclofenac potassium (DC) takes advantage of the previously reported analgesic synergy of PC and TM, and extends the analgesic paradigm with the addition of the anti-inflammatory component, DC. The study involved the development of a triple-layered tablet delivery system with the desired release characteristics of approximately 60% of the PC and TM being made available within 2 hours to provide an initial pain relief effect, followed by sustained zero-order release of DC over a period of 24 hours to combat the ongoing effects of any underlying inflammatory conditions. The triple-layered tablet delivery system would thus provide both rapid onset of pain relief and potentially address an underlying inflammatory cause. The design of a novel triple-layered tablet allowed the desired release characteristics to be attained. During initial development work on the polymeric matrix it was discovered that the 24 hour zero-order release of DC could be attained only with an optimized ratio of the release-retarding polymer polyethylene oxide (PEO) in combination with the electrolytic crosslinking activity provided by the biopolymer sodium alginate and zinc gluconate. It was also necessary for this polymeric matrix to be bordered on both sides by the cellulosic polymers containing PC and TM. Thus the application of multi-layered tableting technology, in the form of a triple-layered tablet, was capable of attaining the rate-modulated release objectives set out in the study. The induced barriers provided by the three layers also served to physically separate TM and DC, reducing the likelihood of the bioavailability-diminishing interaction noted in United States Patent 6,558,701 and detected in the DSC analysis performed as part of this study. The designed system provided significant flexibility in modulating release kinetics for drugs of varying solubility. The suitability of the designed triple-layered tablet delivery system was confirmed by a Design of Experiments (DoE) statistical evaluation, which revealed that Formulation F4 related closest to the desired more immediate release of PC and TM and the zero-order kinetics for DC. The results were confirmed by comparing Formulation F4 to the typical release kinetic mechanisms described by Noyes-Whitney, Higuchi, the Power Law, Peppas-Sahlin and Hopfenberg.
Using f1 and f2 fit factors, Formulation F4 compared favourably to each of the criteria defined for these kinetic models. The Ultra Performance Liquid Chromatography (UPLC) assay method developed displayed superior resolution of the active pharmaceutical ingredient (API) combinations, and the linearity plots produced indicated that the method was sufficiently sensitive to detect the concentrations of each API over the concentration ranges studied. The method was successfully validated and hence appropriate to simultaneously detect the three APIs as well as 4-aminophenol, the degradation product related to PC. Textural profile analysis, in the form of swelling as well as matrix hardness analysis, revealed that an increase in penetration distance was associated with an increase in the hydration time of the tablet and an increase in gel layer thickness. The swelling complexities observed in the delivery system, involving the PEO, the crosslinking sodium alginate and both cellulose polymers, together with the fact that all three layers of the tablet swell simultaneously, suggest further intricacies in the release kinetics of the three drugs from this tablet configuration. Modified release dosage forms, such as the one developed in this study, have gained widespread importance in recent years and offer many advantages including flexible release kinetics and improved therapy and patient compliance.
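The f1 (difference) and f2 (similarity) fit factors used to compare dissolution profiles are standard metrics; a minimal Python sketch of their usual definitions follows, with hypothetical release data rather than the study's measurements.

```python
import numpy as np

def fit_factors(reference, test):
    """Compute the f1 difference factor and f2 similarity factor for two
    dissolution profiles sampled at the same time points (percent released).

    By convention, f1 close to 0 and f2 above 50 indicate similar profiles.
    """
    R = np.asarray(reference, dtype=float)
    T = np.asarray(test, dtype=float)
    n = len(R)
    f1 = 100.0 * np.abs(R - T).sum() / R.sum()
    f2 = 50.0 * np.log10(100.0 / np.sqrt(1.0 + np.square(R - T).sum() / n))
    return f1, f2

# hypothetical cumulative-release profiles (% drug released at each time point)
reference = [10, 25, 45, 65, 80, 92]
test      = [12, 27, 43, 63, 82, 90]
print(fit_factors(reference, test))
```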
40

Topics in modal quantification theory / Tópicos em teoria da quantificação modal

Salvatore, Felipe de Souza 21 August 2015 (has links)
The modal logic S5 gives us a simple technical tool to analyze some main notions from philosophy (e.g. metaphysical necessity and epistemological concepts such as knowledge and belief). Although S5 can be axiomatized by some simple rules, this logic shows some puzzling properties. For example, an interpolation result holds for the propositional version, but this same result fails when we add first-order quantifiers to this logic. In this dissertation, we study the failure of the Definability and Interpolation Theorems for first-order S5. At the same time, we combine results from justification logic and investigate the quantified justification counterpart of S5 (first-order JT45). In this way we explore the relationship between justification logic and modal logic to see whether justification logic can contribute to the literature concerning the restoration of the Interpolation Theorem. / The modal logic S5 offers us a technical toolkit for analyzing some central philosophical notions (for example, metaphysical necessity and certain epistemological concepts such as knowledge and belief). Despite being axiomatized by simple principles, this logic exhibits some peculiar properties. One of the most notorious is the following: we can prove the Interpolation Theorem for the propositional version, but the same theorem cannot be proved once we add first-order quantifiers to the logic. In this dissertation we study the failure of the Definability and Interpolation Theorems for the quantified version of S5. At the same time, we combine results from justification logic and investigate the justification-logic counterpart of quantified S5 (the logic called first-order JT45). In this way, we explore the relationship between modal logic and justification logic to see whether justification logic can contribute to the restoration of the Interpolation Theorem.
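The "simple rules" axiomatizing S5 mentioned in the abstract are, in one standard Hilbert-style presentation, the following (a textbook axiomatization given here for reference):

```latex
% One standard axiomatization of S5
\text{(K)}\;\; \Box(\varphi \rightarrow \psi) \rightarrow (\Box\varphi \rightarrow \Box\psi)
\qquad
\text{(T)}\;\; \Box\varphi \rightarrow \varphi
\qquad
\text{(5)}\;\; \Diamond\varphi \rightarrow \Box\Diamond\varphi
```

together with all propositional tautologies, modus ponens, and the necessitation rule (from φ infer □φ).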
