711 |
Thermochemical differences in lysine and lysine-homolog containing oligopeptides: Determination of basicity and gas-phase structure through mass spectrometry, infrared spectroscopy, and computational chemistry
Batoon, Patrick Henry M. 01 January 2016 (has links)
The data presented in this thesis constitute a comprehensive study of peptide structure and of how subtle, systematic changes in sequence and side chain affect the basicity, ion stability, and conformation of a peptide. The peptides characterized were acetylated polyalanine di-, tri-, and tetrapeptides containing a proton-accepting probe: lysine, or one of the non-proteinogenic lysine homologs ornithine, 2,4-diaminobutyric acid, and 2,3-diaminopropionic acid. Peptides were studied in isomeric pairs in which the basic amino acid was placed closest to either the N-terminus or the C-terminus of each peptide family (AnProbe vs. ProbeAn). The isomeric families of polyalanine peptides were characterized using a variety of mass spectrometry based techniques and infrared multiphoton dissociation ion spectroscopy. Quantum chemical techniques were employed in parallel to provide theoretical predictions of three-dimensional structure, physical properties (dipole moment, polarizability, and accessible surface area), thermochemical values, and vibrational IR spectra, to gain further understanding of the peptides studied and to push the limits of current theoretical models. Overall it was found that the AnProbe peptides were more basic than their ProbeAn isomers. For the dipeptide systems, the greater basicity of AProbe peptides was due to efficiently charge-solvated ions, which formed more compact structures than their ProbeA counterparts. For the tri- and tetrapeptide systems, the greater basicity of the A2,3Probe peptides was likely due to the formation of α- or 3₁₀-helix-like structures in the protonated forms, introducing a macrodipolar effect that cooperatively encouraged helix formation while stabilizing the charged site. By contrast, ProbeA2,3 peptides formed charge-solvated coils that exhibit no such dipole effect, resulting in lower basicity than their A2,3Probe counterparts.
|
712 |
Creating College-Going Cultures for our Children: Narratives of TRIO Upward Bound Program Alumni
Ramsey, Ieesha O. January 2019 (has links)
No description available.
|
713 |
Evolutionsgleichungen und obere Abschätzungen an die Lösungen des Anfangswertproblems
Wingert, Daniel 05 July 2012 (has links)
In dieser Arbeit werden die zu einem m-sektoriellen Operator assoziierten Halbgruppen betrachtet, die die Lösungen des Anfangswertproblems der zugehörigen Evolutionsgleichung beschreiben. Es wird eine 1987 von Davies veröffentlichte Methode zur Abschätzung dieser Halbgruppen verallgemeinert.
Einen Schwerpunkt bilden die zu Dirichlet-Formen assoziierten Markov-Halbgruppen. Für diese werden die Resultate spezialisiert und der Zusammenhang zur intrinsischen Metrik dargelegt. Die Arbeit schließt mit verschiedenen Beispielen, die zeigen, wie mit diesen Verallgemeinerungen von Davies’ Methode neue Anwendungsgebiete erschlossen werden können.
Einleitung
Funktionalanalytische Grundlagen
Spezielle Halbgruppeneigenschaften
Symmetrische Dirichlet-Formen
Obere Schranken für die Halbgruppe
Anwendungen
Ausblick
Komplexe Maße
Anhang / This thesis considers m-sectorial operators and their associated semigroups, which describe the solutions of the initial value problem of the corresponding evolution equation. We generalize a method published by Davies in 1987 to estimate these semigroups.
A focus is set on Markov semigroups associated with Dirichlet forms. The results are specialized to them, and the connection to the intrinsic metric is presented. The thesis ends with several examples showing how this generalization of Davies’ method opens up new fields of application.
Einleitung
Funktionalanalytische Grundlagen
Spezielle Halbgruppeneigenschaften
Symmetrische Dirichlet-Formen
Obere Schranken für die Halbgruppe
Anwendungen
Ausblick
Komplexe Maße
Anhang
|
714 |
Covering systems
Klein, Jonah 12 1900 (has links)
Un système couvrant est un ensemble fini de progressions arithmétiques avec la propriété que
chaque entier appartient à au moins une des progressions. L’étude des systèmes couvrants
a été initiée par Erdős dans les années 1950, et il posa dans les années qui suivirent plusieurs
questions sur ces objets mathématiques. Une de ses questions les plus célèbres est celle du
plus petit module : est-ce que le plus petit module de tous les systèmes couvrants avec
modules distincts est borné uniformément ?
En 2015, Hough a montré que la réponse était affirmative, et qu’une borne admissible
est 10^16. En se basant sur son travail, mais en simplifiant la méthode, Balister, Bollobás,
Morris, Sahasrabudhe et Tiba ont réduit cette borne à 616 000. Leur méthode a mené à
plusieurs applications supplémentaires. Entre autres, ils ont compté le nombre de systèmes
couvrants avec un nombre fixe de modules.
La première partie de ce mémoire vise à étudier une question similaire. Nous allons essayer
de compter le nombre de systèmes couvrants avec un ensemble de modules fixé. La technique
que nous utiliserons nous mènera vers l’étude des symétries des systèmes couvrants.
Dans la seconde partie, nous répondrons à des variantes du problème du plus petit module. Nous regarderons des bornes sur le plus petit module d’un système couvrant de multiplicité s, c’est-à-dire un système couvrant dans lequel chaque module apparaît au plus s
fois. Nous utiliserons ensuite ce résultat afin de montrer que le plus petit module d’un système
couvrant de multiplicité 1 d’une progression arithmétique est borné, ainsi que pour montrer
que le n-ième plus petit module dans un système couvrant de multiplicité 1 est borné. / A covering system is a finite set of arithmetic progressions with the property that every
integer belongs to at least one of them. The study of covering systems was started by Erdős
in the 1950s, and he asked many questions about them in the following years. One of the
most famous questions he asked was whether the minimum modulus of a covering system with
distinct moduli is bounded uniformly.
In 2015, Hough showed that it is at most 10^16. Following on his work, but simplifying
the method, Balister, Bollobás, Morris, Sahasrabudhe and Tiba showed that it is at most
616,000. Their method led them to many further applications. Notably, they counted the
number of covering systems with a fixed number of moduli.
The first part of this thesis seeks to study a related question, that is to count the number
of covering systems with a given set of moduli. The technique developed to do this for some
sets will lead us to look at symmetries of covering systems.
The second part of this thesis will look at variants of the minimum modulus problem.
Notably, we will be looking at bounds on the minimum modulus of a covering system of
multiplicity s, that is a covering system in which each moduli appears at most s times, as well
as bounds on the minimum modulus of a covering system of multiplicity 1 of an arithmetic
progression, and finally look at bounds for the n-th smallest modulus in a covering system.
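The definition above can be checked mechanically: a finite set of progressions covers the integers exactly when it covers every residue modulo the least common multiple of the moduli. A minimal Python sketch, using Erdős's classic covering system with distinct moduli as input (not a system from this thesis):

```python
from math import lcm
from functools import reduce

def is_covering(system):
    """Check whether (residue, modulus) pairs cover all integers.
    It suffices to test one full period: every residue class
    mod L = lcm of the moduli must be hit by some progression."""
    L = reduce(lcm, (m for _, m in system))
    return all(any(n % m == r for r, m in system) for n in range(L))

# Erdős's classic covering system; its minimum modulus is 2.
erdos = [(0, 2), (0, 3), (1, 4), (5, 6), (7, 12)]
print(is_covering(erdos))  # True
```

Dropping any single progression from this system leaves some residue class uncovered, which is what makes the minimum-modulus question delicate.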
|
715 |
Learning to compare nodes in branch and bound with graph neural networks
Labassi, Abdel Ghani 08 1900 (has links)
En informatique, la résolution de problèmes NP-difficiles en un temps raisonnable est d’une grande importance : optimisation de la chaîne d’approvisionnement, planification, routage, alignement de séquences biologiques multiples, inférence dans les modèles graphiques probabilistes, et même certains problèmes de cryptographie sont tous des exemples de la classe NP-complet. En pratique, nous modélisons beaucoup d’entre eux comme des problèmes d’optimisation en nombres entiers, que nous résolvons à l’aide de la méthodologie séparation et évaluation. Un algorithme de ce style divise un espace de recherche pour l’explorer récursivement (séparation), et obtient des bornes d’optimalité en résolvant des relaxations linéaires sur les sous-espaces (évaluation). Pour spécifier un algorithme, il faut définir plusieurs paramètres, tels que la manière d’explorer les espaces de recherche, de diviser un espace de recherche une fois exploré, ou de renforcer les relaxations linéaires. Ces politiques peuvent influencer considérablement la performance de résolution.
Ce travail se concentre sur une nouvelle manière de dériver une politique de recherche, c’est-à-dire le choix du prochain sous-espace à séparer étant donné une partition en cours, en nous servant de l’apprentissage automatique profond. Premièrement, nous collectons des données résumant, sur une collection de problèmes donnés, quels sous-espaces contiennent l’optimum et lesquels ne le contiennent pas. En représentant ces sous-espaces sous forme de graphes bipartis qui capturent leurs caractéristiques, nous entraînons un réseau de neurones graphiques à déterminer, par apprentissage supervisé, la probabilité qu’un sous-espace contienne la solution optimale. Le choix d’un tel modèle est particulièrement utile car il peut s’adapter à des problèmes de différentes tailles sans modifications. Nous montrons que notre approche bat celle de nos concurrents, consistant en des modèles d’apprentissage automatique plus simples entraînés à partir des statistiques du solveur, ainsi que la politique par défaut de SCIP, un solveur open source compétitif, sur trois familles NP-dures : des problèmes de recherche de stables de taille maximum, de flots de réseau multicommodité à charge fixe, et de satisfiabilité maximum. / In computer science, solving NP-hard problems in a reasonable time is of great importance, such as in supply chain optimization, scheduling, routing, multiple biological sequence alignment, inference in probabilistic graphical models, and even some problems in cryptography. In practice, we model many of them as mixed integer linear optimization problems, which we solve using the branch and bound framework. An algorithm of this style divides a search space to explore it recursively (branch) and obtains optimality bounds by solving linear relaxations in such sub-spaces (bound). To specify an algorithm, one must set several parameters, such as how to explore search spaces, how to divide a search space once it has been explored, or how to tighten these linear relaxations.
These policies can significantly influence resolution performance.
This work focuses on a novel method for deriving a search policy, that is, a rule for selecting the next sub-space to explore given a current partitioning, using deep machine learning. First, we collect data summarizing which sub-spaces contain the optimum and which do not. By representing these sub-spaces as bipartite graphs encoding their characteristics, we train a graph neural network to determine, by supervised learning, the probability that a sub-space contains the optimal solution. The choice of such a model is particularly useful as it can adapt to problems of different sizes without modification. We show that our approach beats that of our competitors, consisting of simpler machine learning models trained from solver statistics, as well as the default policy of SCIP, a state-of-the-art open-source solver, on three NP-hard benchmarks: generalized independent set, fixed-charge multicommodity network flow, and maximum satisfiability problems.
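The branch-and-bound framework described above can be illustrated with a toy example. The following sketch is not the thesis's solver (which works on mixed integer programs via SCIP): it is a minimal best-first branch and bound for a 0/1 knapsack, where the heap ordering plays the role of the node-selection policy that the thesis proposes to learn. All numbers are illustrative only.

```python
import heapq

def knapsack_bnb(values, weights, capacity):
    """Best-first branch and bound for 0/1 knapsack."""
    n = len(values)
    # Branch on items in decreasing value/weight order.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(idx, value, room):
        # Fractional (LP-relaxation) upper bound from item idx onward.
        for i in order[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    # Search policy: always expand the node with the largest bound
    # (negated, since heapq is a min-heap).
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]
    while heap:
        neg_b, idx, value, room = heapq.heappop(heap)
        if -neg_b <= best:
            continue  # prune: this node's bound cannot beat the incumbent
        if idx == n:
            best = max(best, value)
            continue
        i = order[idx]
        # Branch: include item i (if it fits), then exclude it.
        if weights[i] <= room:
            heapq.heappush(heap, (-bound(idx + 1, value + values[i], room - weights[i]),
                                  idx + 1, value + values[i], room - weights[i]))
        heapq.heappush(heap, (-bound(idx + 1, value, room), idx + 1, value, room))
        best = max(best, value)  # the current partial selection is itself feasible
    return best

print(knapsack_bnb([60, 100, 120], [10, 20, 30], 50))  # 220
```

Replacing the heap key (here, the relaxation bound) with a learned score that ranks nodes by their estimated probability of containing the optimum is, in spirit, the substitution the thesis studies.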
|
716 |
[en] CONSERVATIVE-SOLUTION METHODOLOGIES FOR STOCHASTIC PROGRAMMING: A DISTRIBUTIONALLY ROBUST OPTIMIZATION APPROACH / [pt] METODOLOGIAS PARA OBTENÇÃO DE SOLUÇÕES CONSERVADORAS PARA PROGRAMAÇÃO ESTOCÁSTICA: UMA ABORDAGEM DE OTIMIZAÇÃO ROBUSTA À DISTRIBUIÇÕES
CARLOS ANDRES GAMBOA RODRIGUEZ 20 July 2021 (has links)
[pt] A programação estocástica dois estágios é uma abordagem
matemática amplamente usada em aplicações da vida real, como planejamento
da operação de sistemas de energia, cadeias de suprimentos,
logística, gerenciamento de inventário e planejamento financeiro. Como
a maior parte desses problemas não pode ser resolvida analiticamente,
os tomadores de decisão utilizam métodos numéricos para obter uma
solução quase ótima. Em algumas aplicações, soluções não convergidas
e, portanto, sub-ótimas terminam sendo implementadas devido a limitações
de tempo ou esforço computacional. Nesse contexto, os métodos
existentes fornecem uma solução otimista sempre que a convergência
não é atingida. As soluções otimistas geralmente geram altos níveis
de arrependimento porque subestimam os custos reais na função objetivo
aproximada. Para resolver esse problema, temos desenvolvido duas
metodologias de solução conservadora para problemas de programação
linear estocástica dois estágios com incerteza do lado direito e suporte retangular:
Quando a verdadeira distribuição de probabilidade da incerteza
é conhecida, propomos um problema DRO (Distributionally Robust Optimization)
baseado em esperanças condicionais adaptadas à uma partição
do suporte cuja complexidade cresce exponencialmente com a dimensionalidade
da incerteza; Quando apenas observações históricas da incerteza
estão disponíveis, propomos um problema de DRO baseado na métrica
de Wasserstein a fim de incorporar ambiguidade sobre a real distribuição
de probabilidade da incerteza. Para esta última abordagem, os métodos
existentes dependem da enumeração dos vértices duais do problema de
segundo estágio, tornando o problema DRO intratável em aplicações
práticas. Nesse contexto, propomos esquemas algorítmicos para lidar
com a complexidade computacional de ambas abordagens. Experimentos
computacionais são apresentados para o problema do fazendeiro, o problema
de alocação de aviões, e o problema do planejamento da operação
do sistema elétrico (unit commitment problem). / [en] Two-stage stochastic programming is a mathematical framework
widely used in real-life applications such as power system operation
planning, supply chains, logistics, inventory management, and financial
planning. Since most of these problems cannot be solved analytically,
decision-makers make use of numerical methods to obtain a near-optimal
solution. Some applications rely on the implementation of non-converged
and therefore sub-optimal solutions because of computational time or
power limitations. In this context, the existing methods provide an optimistic
solution whenever convergence is not attained. Optimistic solutions
often generate high disappointment levels because they consistently
underestimate the actual costs in the approximate objective function.
To address this issue, we have developed two conservative-solution
methodologies for two-stage stochastic linear programming problems
with right-hand-side uncertainty and rectangular support: When the actual
data-generating probability distribution is known, we propose a DRO
problem based on partition-adapted conditional expectations whose complexity
grows exponentially with the uncertainty dimensionality; When
only historical observations of the uncertainty are available, we propose
a DRO problem based on the Wasserstein metric to incorporate ambiguity
over the actual data-generating probability distribution. For this
latter approach, existing methods rely on dual vertex enumeration of the
second-stage problem rendering the DRO problem intractable in practical
applications. In this context, we propose algorithmic schemes to address
the computational complexity of both approaches. Computational experiments
are presented for the farmer problem, aircraft allocation problem,
and the stochastic unit commitment problem.
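To make the two-stage structure concrete, here is a toy newsvendor-style example with hypothetical numbers, unrelated to the experiments in this thesis: the first-stage decision is an order quantity made before the uncertainty is revealed, and the second stage is the recourse of selling against the realized demand. The tiny discrete problem is solved exactly by enumeration rather than by the numerical methods the thesis addresses.

```python
# Hypothetical problem data: unit cost, selling price, demand scenarios.
cost, price = 3.0, 5.0
scenarios = [(20, 0.3), (40, 0.5), (60, 0.2)]  # (demand, probability)

def expected_profit(q):
    """Expected profit of ordering q units. The second stage (recourse)
    is trivial here: once demand d is realized, we sell min(q, d)."""
    return sum(p * (price * min(q, d) - cost * q) for d, p in scenarios)

# First stage: enumerate candidate order quantities and pick the best.
best_q = max(range(61), key=expected_profit)
print(best_q)  # 40
```

In real applications the second stage is itself a linear program per scenario, which is why near-optimal and, as the abstract notes, optimistically biased solutions are often what gets implemented.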
|
717 |
Essays on Financial Intermediation and Monetary Policy
Setayesh Valipour, Abolfazl 24 August 2022 (has links)
No description available.
|
718 |
Robust Control of Uncertain Input-Delayed Sample Data Systems through Optimization of a Robustness Bound
Kratz, Jonathan L. 22 May 2015 (has links)
No description available.
|
719 |
Expectations, Choices, and Lessons Learned: The Experience of Rural, Appalachian, Upward Bound Graduates
Pennock Arnold, Tiffany G. January 2017 (has links)
No description available.
|
720 |
Impact of Phase Information on Radar Automatic Target Recognition
Moore, Linda Jennifer January 2016 (has links)
No description available.
|