461

Mikrolokalne distribucije defekta i primene / Microlocal defect distributions and applications

Vojnović Ivana 01 July 2017 (has links)
H-measures and H-distributions are microlocal tools that can be used to investigate strong convergence of weakly convergent sequences in Lebesgue and Sobolev spaces.

H-measures were introduced by Tartar and Gérard (who calls them microlocal defect measures) in papers [34] and [19]. H-measures are Radon measures that provide information about the set of points where a given weakly convergent sequence in L^2 converges strongly. In paper [11], Antonić and Mitrović introduced H-distributions in order to work with weakly convergent L^p sequences, 1 < p < ∞.

In this thesis we give a construction of H-distributions for weakly convergent sequences in W^{-k,p}, where 1 < p < ∞ and k ∈ ℕ. We show that if the H-distribution corresponding to a given weakly convergent sequence is equal to zero for all test functions, then the sequence converges strongly on compact sets. We also prove a localization principle, which identifies the region where a weakly convergent sequence converges locally strongly.

H-measures and H-distributions act on test functions φ and ψ (of suitable regularity) defined on ℝ^d and S^{d-1} (the unit sphere in ℝ^d), where the function ψ, called the multiplier, is bounded. We also introduce H-distributions with unbounded multipliers; in this case the weakly convergent sequences lie in Bessel potential spaces H^p_{-s}, where 1 < p < ∞ and s ∈ ℝ. The theory of pseudo-differential operators is used in this construction, and we prove compactness of the commutator [A_ψ, T_φ] for various classes of multipliers ψ, which is needed to establish the existence of H-distributions. We also prove a corresponding version of the localization principle.
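For orientation, the classical existence theorem for H-measures that the abstract refers to can be sketched as follows; this is the standard statement attributed to Tartar, not a quotation from the thesis itself.

```latex
% Sketch of the classical existence theorem for H-measures (Tartar, 1990);
% a standard statement, not quoted from the thesis.
Let $(u_n)$ converge weakly to $0$ in $L^2(\mathbb{R}^d)$. Then there exist
a subsequence $(u_{n'})$ and a nonnegative Radon measure $\mu$ on
$\mathbb{R}^d \times S^{d-1}$ such that, for all
$\varphi_1, \varphi_2 \in C_0(\mathbb{R}^d)$ and $\psi \in C(S^{d-1})$,
\[
  \lim_{n' \to \infty} \int_{\mathbb{R}^d}
    \widehat{\varphi_1 u_{n'}}(\xi)\,
    \overline{\widehat{\varphi_2 u_{n'}}(\xi)}\,
    \psi\Bigl(\frac{\xi}{|\xi|}\Bigr)\, d\xi
  \;=\; \bigl\langle \mu,\, (\varphi_1 \overline{\varphi_2}) \otimes \psi \bigr\rangle.
\]
```

The measure μ thus records, direction by direction on the unit sphere, where the sequence fails to converge strongly; H-distributions extend this pairing to the L^p setting described above.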
462

How may we explain Nepal’s foreign policy behavior and strategy? The case of a weak and small state in the international system and its foreign policy behavior and strategy

Biehl, Paul January 2020 (has links)
This paper focuses on the foreign policy behavior and strategy of weak and small states in the international system. It explains the behavior and strategies employed by those states by examining several concepts and theories and applying them to the case of Nepal. In a realist world, among states that are primarily interested in their own integrity and survival, and partly in maximizing their power, weak and small states like Nepal try to keep a neutral position between all actors, maintain and extend bilateral relations with their immediate neighbors and other actors in the international system, and integrate themselves into regional and international frameworks to secure their survival. Because they are the most vulnerable actors, the study of those states and their behavior and strategies is both interesting and compelling. Methodologically, this paper employs interviews as the main source of data and additionally reviews Nepal's foreign policy reports from the last five years (2015-2019). The data are analyzed both qualitatively and quantitatively. After studying the case and its implications, the author suggests that geographic patterns in particular are important for understanding the foreign policy of weak and small states, and that neutrality as well as bilateral and multilateral relations are indispensable for those actors to secure their integrity and survival in the international system.
463

Security Strategies for Hosting Sensitive Information in the Commercial Cloud

Forde, Edward Steven 01 January 2017 (has links)
IT experts often struggle to find strategies to secure data in the cloud. Although current security standards might provide cloud compliance, they fail to offer guarantees of security assurance. The purpose of this qualitative case study was to explore the strategies used by IT security managers to host sensitive information in the commercial cloud. The study's population consisted of information security managers from a government agency in the eastern region of the United States. Routine activity theory, developed by Cohen and Felson, was used as the conceptual framework for the study. The data collection process included IT security manager interviews (n = 7), organizational documents and procedures (n = 14), and direct observation of a training meeting (n = 35). Data collected from organizational documents and observation were summarized. Coding from the interviews and member checking were triangulated with organizational documents and observational data/field notes to produce major and minor themes. Through methodological triangulation, 5 major themes emerged from the data analysis: avoiding social engineering vulnerabilities, avoiding weak encryption, maintaining customer trust, training to create a cloud security culture, and developing sufficient policies. The findings of this study may benefit information security managers by enhancing their information security practices to better protect their organization's information stored in the commercial cloud. Improved information security practices may contribute to social change by exposing customers to a lower risk of having their identity or data stolen by internal and external thieves.
464

Measurements of Higgs boson properties in the four-lepton final state at sqrt(s) = 13 TeV with the CMS experiment at the LHC. / Mesure des propriétés du boson de Higgs dans l’état final à quatre leptons à √s = 13 TeV avec l’expérience CMS au LHC

Regnard, Simon 07 November 2016 (has links)
This thesis reports a study of Higgs boson production in proton-proton collisions at sqrt(s) = 13 TeV recorded with the CMS detector at the CERN Large Hadron Collider (LHC), exploiting the decay channel into a pair of Z bosons that in turn decay into pairs of electrons or muons (H->ZZ->4l, l = e, mu). This work is carried out in the context of the beginning of Run II of the LHC, a new data-taking period that started in 2015 after a two-year-long shutdown. The restart is marked by an increase of the centre-of-mass energy from 8 TeV to 13 TeV and a narrowing of the proton bunch spacing from 50 ns to 25 ns. These new parameters both increase the luminosity and set new constraints on the triggering, reconstruction and analysis of pp collision events. Considerable effort is therefore devoted to the improvement and reoptimization of the CMS trigger system for Run II, focusing on the reconstruction and selection of electrons and on the preparation of multilepton trigger paths that preserve maximal efficiency for the H->ZZ->4l channel. Secondly, the offline algorithms for electron and muon selection are optimized and their efficiencies are measured in data, while the selection logic for four-lepton candidates is improved. In order to extract rare production modes of the Higgs boson such as vector boson fusion, VH associated production and ttH associated production, a new classification of selected events into exclusive categories is introduced, using discriminants based on matrix-element calculations and jet flavour tagging. Results of the analysis of the first 13 TeV data are presented for two data sets recorded in 2015 and early 2016, corresponding to integrated luminosities of 2.8 fb-1 and 12.9 fb-1, respectively. A standalone rediscovery of the Higgs boson in the four-lepton channel is achieved at the new energy. The signal strength relative to the standard model prediction, the mass and decay width of the boson, and a set of parameters describing the contributions of its main predicted production modes are measured. All results are in good agreement with standard model expectations for a 125 GeV Higgs boson within the uncertainties, which are dominated by their statistical component with the current data set. Finally, a search for an additional high-mass resonance decaying to four leptons is performed, and no significant excess is observed.
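The exclusive event categorization described in the abstract can be sketched, in heavily simplified form, as a rule-based classifier. Everything below — category names, thresholds, and input fields — is a hypothetical illustration of the idea of sorting four-lepton candidates into production-mode-enriched categories, not the actual CMS selection.

```python
# Hypothetical, heavily simplified sketch of exclusive event categorization
# for H->ZZ->4l, in the spirit of the VBF / VH / ttH tagging described above.
# Thresholds and category names are illustrative, not the real CMS analysis.

def categorize_event(n_jets, mjj, n_btags):
    """Assign a four-lepton candidate event to one exclusive category.

    n_jets  : number of selected jets in the event
    mjj     : invariant mass (GeV) of the leading dijet pair (0 if < 2 jets)
    n_btags : number of b-tagged jets
    """
    if n_jets >= 4 and n_btags >= 1:
        return "ttH-tagged"          # top-quark pair -> extra jets with b tags
    if n_jets >= 2 and mjj > 400.0:
        return "VBF-tagged"          # two forward jets with large dijet mass
    if n_jets >= 2:
        return "VH-hadronic-tagged"  # dijet compatible with W/Z -> qq
    return "untagged"                # everything else (dominated by ggH)

# Three illustrative events.
for ev in [
    {"n_jets": 2, "mjj": 620.0, "n_btags": 0},
    {"n_jets": 5, "mjj": 180.0, "n_btags": 2},
    {"n_jets": 0, "mjj": 0.0,   "n_btags": 0},
]:
    print(categorize_event(**ev))
```

In the real analysis these cuts are replaced by matrix-element and jet-flavour discriminants, but the structure — mutually exclusive categories tested in a fixed priority order — is the same.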
465

Newtonian Spaces Based on Quasi-Banach Function Lattices

Malý, Lukáš January 2012 (has links)
The traditional first-order analysis in Euclidean spaces relies on the Sobolev spaces W^{1,p}(Ω), where Ω ⊂ R^n is open and p ∈ [1, ∞]. The Sobolev norm is then defined as the sum of L^p norms of a function and its distributional gradient. We generalize the notion of Sobolev spaces in two different ways. First, the underlying function norm will be replaced by the “norm” of a quasi-Banach function lattice. Second, we will investigate functions defined on an abstract metric measure space, and that is why the distributional gradients need to be substituted. The thesis consists of two papers. The first one builds up the elementary theory of Newtonian spaces based on quasi-Banach function lattices. These lattices are complete linear spaces of measurable functions with a topology given by a quasinorm satisfying the lattice property. Newtonian spaces are first-order Sobolev-type spaces on abstract metric measure spaces, where the role of weak derivatives is passed on to upper gradients. Tools such as moduli of curve families and the Sobolev capacity are developed, which allows us to study basic properties of the Newtonian functions. We will see that Newtonian spaces can be equivalently defined using the notion of weak upper gradients, which increases the number of techniques available to study these spaces. The absolute continuity of Newtonian functions along curves and the completeness of Newtonian spaces in this general setting are also established. The second paper in the thesis then continues with the investigation of properties of Newtonian spaces based on quasi-Banach function lattices. The set of all weak upper gradients of a Newtonian function is of particular interest. We will prove that minimal weak upper gradients exist in this general setting. Assuming that Lebesgue’s differentiation theorem holds for the underlying metric measure space, we will find a family of representation formulae. Furthermore, the connection between pointwise convergence of a sequence of Newtonian functions and its convergence in norm is studied.
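The central definitions the abstract relies on can be sketched as follows; these are the standard formulations in the field, so the notation may differ in detail from the thesis itself.

```latex
% Standard definitions of upper gradients and Newtonian spaces,
% sketched for orientation (notation may differ from the thesis).
A Borel function $g \colon X \to [0,\infty]$ is an \emph{upper gradient}
of $u \colon X \to \overline{\mathbb{R}}$ if
\[
  |u(\gamma(0)) - u(\gamma(\ell_\gamma))| \le \int_\gamma g \, ds
\]
holds for every rectifiable curve $\gamma \colon [0, \ell_\gamma] \to X$
parametrized by arc length. Given a quasi-Banach function lattice $Y$ over
a metric measure space $X$, the Newtonian space $N^1 Y$ consists of the
functions $u \in Y$ that have an upper gradient in $Y$, with
\[
  \|u\|_{N^1 Y} \;=\; \|u\|_Y \;+\; \inf_g \|g\|_Y \;<\; \infty,
\]
where the infimum is taken over all upper gradients $g$ of $u$.
```

With Y = L^p on a Euclidean domain this quasinorm recovers the classical Sobolev norm, which is the sense in which Newtonian spaces generalize W^{1,p}.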
466

Effect of Magnetic Shear and Heating on Electromagnetic Micro-instability and Turbulent Transport in Global Toroidal System / 大域的トロイダル系における電磁的な微視的不安定性と乱流輸送に対する磁気シアと加熱の効果

Qin, Zhihao 24 September 2021 (has links)
Kyoto University / New system, course-based doctorate / Doctor of Energy Science / Kō No. 23537 / Ene-Haku No. 428 / 新制||エネ||82 (Main Library) / Department of Fundamental Energy Science, Graduate School of Energy Science, Kyoto University / (Chief examiner) Professor 岸本 泰明, Professor 中村 祐司, Professor 田中 仁 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Energy Science / Kyoto University / DFAM
467

Convergence of stochastic processes on varying metric spaces / 変化する距離空間上の確率過程の収束

Suzuki, Kohei 23 March 2016 (has links)
Kyoto University / 0048 / New system, course-based doctorate / Doctor of Science / Kō No. 19468 / Ri-Haku No. 4128 / 新制||理||1594 (Main Library) / 32504 / Department of Mathematics and Mathematical Sciences, Graduate School of Science, Kyoto University / (Chief examiner) Associate Professor 矢野 孝次, Professor 上田 哲生, Professor 重川 一郎 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
468

Lateral Stability Analysis of Precast Prestressed Bridge Girders During All Phases of Construction

Sathiraju, Venkata Sai Surya Praneeth 25 July 2019 (has links)
No description available.
469

Sufficient conditions for local exactness of the exact penalty function method in nonsmooth optimization

Al hashimi, Farah 01 May 2019 (has links)
No description available.
470

Equilibria in Multiplayer Games Played on Graphs

Goeminne, Aline 27 April 2021 (has links) (PDF)
Today, as computer systems are ubiquitous in our everyday life, there is no need to argue that their correctness is of capital importance. In order to prove (in a mathematical sense) that a given system satisfies a given property, formal methods have been introduced. They include concepts such as model checking and synthesis. Roughly speaking, when considering synthesis, we aim at building a model of the system which is correct by construction. In order to do so, models are mainly borrowed from game theory. During the last decades, there has been a shift from two-player qualitative zero-sum games (used to model antagonistic interactions between a system and its environment) to multiplayer quantitative games (used to model complex systems composed of several agents whose objectives are not necessarily antagonistic). In the latter setting, the solution concepts of interest include numerous equilibria, such as Nash equilibrium (NE) and subgame perfect equilibrium (SPE). While the existence of equilibria is widely studied, it is also well known that several equilibria may coexist in the same game. Nevertheless, some equilibria are more relevant than others. For example, if we consider a game in which each player aims at satisfying a given qualitative objective, it is possible to have both an equilibrium in which no player satisfies his objective and another one in which each player satisfies it. In this case one prefers the latter equilibrium, which is more relevant. In this thesis, we focus on multiplayer turn-based games played on graphs, either with qualitative or quantitative objectives. Our contributions are twofold: (i) we provide equilibria characterizations and (ii) we use these characterizations to solve decision problems related to the existence of relevant equilibria, and characterize their complexities.
Firstly, we provide a characterization of a weaker notion of SPE (weak SPE) in multiplayer games with omega-regular objectives, based on the payoff profiles which are realizable by a weak SPE. We then adopt another point of view by characterizing the outcomes of equilibria instead of their payoff profiles. In particular, we focus on the weak SPE outcome characterization. Since for some kinds of games (e.g. multiplayer quantitative reachability games) weak SPEs and SPEs are equivalent, this characterization is useful in order to study SPEs in these games. Secondly, we use those different equilibrium characterizations to provide the exact complexity classes of different decision problems related to the existence of relevant equilibria. We mainly focus on the constrained existence problem: if each player aims at maximizing his gain, this problem asks whether there exists an equilibrium such that each resulting player's gain is greater than a threshold (one per player). We also consider variants of relevant equilibria based on the social welfare and the Pareto optimality of the players' payoffs. In this way, we prove the exact complexity classes for (i) the weak SPE constrained existence problem in multiplayer games with classical qualitative objectives such as Büchi, co-Büchi and safety, and (ii) the NE and SPE constrained existence problems (and variants) for qualitative and quantitative reachability games. In the latter case, the upper bounds on the memory required for such relevant equilibria are studied and proved to be finite. Studying the memory requirements of strategies is important since, in the synthesis process, those strategies have to be implemented. Finally, we consider multiplayer, non-zero-sum, turn-based timed games with qualitative reachability objectives together with the concept of SPE. We prove that the SPE constrained existence problem is EXPTIME-complete for qualitative reachability timed games.
In order to obtain an EXPTIME algorithm, we proceed in several steps. First, we prove that the game variant of the classical region graph is a good abstraction for the SPE constrained existence problem. In fact, we identify conditions on bisimulations under which the study of SPEs in a given game can be reduced to the study of its quotient. / Doctorate in Sciences / info:eu-repo/semantics/nonPublished
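As an illustrative sketch only (not an algorithm from the thesis): the basic building block for reachability games on graphs is the classical attractor computation, which finds the vertices from which one player can force a visit to a target set. A minimal two-player turn-based version, with a hypothetical example graph:

```python
# Classical attractor computation for a two-player turn-based game on a
# finite graph: the set of vertices from which player 0 can force a visit
# to `target`. A textbook sketch, not code taken from the thesis.

def attractor(vertices, edges, owner, target):
    """vertices: iterable of vertex ids
    edges:  dict mapping each vertex to its list of successors
    owner:  dict mapping each vertex to the player (0 or 1) who moves there
    target: set of vertices player 0 wants to reach
    """
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in attr:
                continue
            succs = edges[v]
            # Player 0 needs one successor in the attractor; player 1 is
            # forced in only if ALL of its successors lead into it.
            if owner[v] == 0 and any(s in attr for s in succs):
                attr.add(v)
                changed = True
            elif owner[v] == 1 and succs and all(s in attr for s in succs):
                attr.add(v)
                changed = True
    return attr

# Hypothetical 4-vertex game: player 0 owns "a" and "t", player 1 owns
# "b" and "c"; player 0 wants to reach "t".
verts = ["a", "b", "c", "t"]
edges = {"a": ["b", "c"], "b": ["t"], "c": ["a"], "t": ["t"]}
owner = {"a": 0, "b": 1, "c": 1, "t": 0}
print(sorted(attractor(verts, edges, owner, {"t"})))
```

Iterating such fixed-point computations over game graphs (or, for timed games, over region-graph abstractions) is the kind of step on which the equilibrium characterizations above are built.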
