231

One- and Two-Variable $p$-adic Measures in Iwasawa Theory

January 2015 (has links)
abstract: In 1984, Sinnott used $p$-adic measures on $\mathbb{Z}_p$ to give a new proof of the Ferrero-Washington Theorem for abelian number fields by realizing $p$-adic $L$-functions as (essentially) the $\Gamma$-transform of certain $p$-adic rational function measures. Shortly afterward, Gillard and Schneps independently adapted Sinnott's techniques to the case of $p$-adic $L$-functions associated to elliptic curves with complex multiplication (CM) by realizing these $p$-adic $L$-functions as $\Gamma$-transforms of certain $p$-adic rational function measures. The results in the CM case give the vanishing of the Iwasawa $\mu$-invariant for certain $\mathbb{Z}_p$-extensions of imaginary quadratic fields constructed from torsion points of CM elliptic curves. In this thesis, I develop the theory of $p$-adic measures on $\mathbb{Z}_p^d$, with particular interest given to the case of $d>1$. Although I introduce these measures within the context of $p$-adic integration, this study includes a strong emphasis on the interpretation of $p$-adic measures as $p$-adic power series. With this dual perspective, I describe $p$-adic analytic operations as maps on power series; the most important of these operations is the multivariate $\Gamma$-transform on $p$-adic measures. This thesis gives new significance to product measures, and in particular to the use of product measures to construct measures on $\mathbb{Z}_p^2$ from measures on $\mathbb{Z}_p$. I introduce a subring of pseudo-polynomial measures on $\mathbb{Z}_p^2$ which is closed under the standard operations on measures, including the $\Gamma$-transform. I obtain results on the Iwasawa invariants of such pseudo-polynomial measures, and use these results to deduce certain continuity results for the $\Gamma$-transform. As an application, I establish the vanishing of the Iwasawa $\mu$-invariant of Yager's two-variable $p$-adic $L$-function from measure-theoretic considerations.
/ Dissertation/Thesis / Doctoral Dissertation Mathematics 2015
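For orientation, the $\Gamma$-transform mentioned in the abstract admits a standard one-variable description (a sketch of the usual definition in Sinnott's setting; exact normalization conventions vary, and the thesis generalizes this to several variables):

```latex
% One-variable \Gamma-transform of a p-adic measure \mu on \mathbb{Z}_p:
% integrate the s-th power of the projection <x> over the p-adic units.
\Gamma_\mu(s) \;=\; \int_{\mathbb{Z}_p^\times} \langle x \rangle^{s}\, d\mu(x),
\qquad \langle x \rangle = x/\omega(x),
```

where $\omega$ is the Teichmüller character; for suitable measures $\mu$, this transform interpolates the corresponding $p$-adic $L$-function.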
232

Estudo da difusão turbulenta empregando modelos estocásticos lagrangeanos / Study of turbulent diffusion employing Lagrangian stochastic models

Timm, Andréa Ucker 09 March 2007 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this work, the Lagrangian stochastic particle model LAMBDA is used to simulate the dispersion and transport of contaminants under different atmospheric conditions. The analysis employs three different field experiments of atmospheric diffusion: the Copenhagen experiment, carried out in unstable (convective) conditions; the Prairie Grass experiment, in which only neutral stability cases were considered (mean wind velocity higher than 6 m/s); and the INEL experiment, performed in stable, low-wind conditions exhibiting the wind-meandering phenomenon. LAMBDA is a three-dimensional model for simulating pollutant dispersion over flat terrain. The model solves the generalized form of the Langevin equation and can use higher-order moments of the Eulerian probability density function of the wind velocity fluctuations. The main aim of this work is to test a new parameterization of the parameters p and q, which represent the frequencies associated with the meandering phenomenon. The new parameterization is expressed in terms of m, a non-dimensional quantity that controls the meandering oscillation frequency, and T, a time scale associated with the coherent structures in fully developed turbulence. The simulations show that the LAMBDA model incorporating this new parameterization correctly reproduces the enhanced diffusion of passive scalars in a low-wind-speed stable atmospheric boundary layer.
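The core of a Lagrangian stochastic particle model of this kind can be sketched as a Langevin update of each particle's velocity (a minimal one-dimensional illustration assuming homogeneous Gaussian turbulence; `sigma_u` and `T_L` are illustrative values, and this is not the full three-dimensional LAMBDA model or its meandering parameterization):

```python
import numpy as np

# Minimal 1-D Langevin dispersion sketch:
#   du = -(u / T_L) dt + sqrt(2 sigma_u^2 / T_L) dW
rng = np.random.default_rng(0)
n_particles, sigma_u, T_L, dt, n_steps = 5000, 0.5, 100.0, 1.0, 200

u = sigma_u * rng.standard_normal(n_particles)  # initial velocities ~ N(0, sigma_u^2)
x = np.zeros(n_particles)                       # all particles released at the origin

for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_particles)
    u += -(u / T_L) * dt + np.sqrt(2.0 * sigma_u**2 / T_L) * dW
    x += u * dt

# The velocity variance stays near sigma_u^2 (stationarity) while the plume spreads.
spread = float(np.std(x))
```

The drift term relaxes velocities toward zero over the Lagrangian time scale `T_L`, while the diffusion term keeps the velocity variance stationary; the particle positions then trace out the dispersing plume.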
233

Aplicações para o modelo Diebold – Li no ajuste e previsão da ETTJ brasileira

Sartori, Lúcio Daniel January 2014 (has links)
This study tests an alternative fit of the term structure of the Brazilian interest rate, as well as its forecasting, through a variation of the Diebold and Li (2006) model, focusing mainly on its exponential decay factor. The variation of the decay factor occurs at two distinct stages of the work: first in the curve fitting and then in the forecasting. In the fitting stage, this parameter is found numerically, searching for the decay factor that minimizes the least-squares difference relative to the original points observed in the Brazilian future interest rate market, jointly with the other three factors of the model. The forecasting of the term structure uses autoregressive models to estimate the upcoming curves over a one-period horizon. The importance of this study lies in assessing the adherence of the proposed model to the Brazilian yield curve, testing its efficiency under the stated assumptions.
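The fitting stage described above can be sketched with the Diebold-Li (dynamic Nelson-Siegel) factor loadings: for each candidate decay factor, the three betas follow from ordinary least squares, and the decay factor with the smallest residual is kept (an illustrative sketch with synthetic maturities and parameters, not the thesis's data or code):

```python
import numpy as np

def ns_loadings(lam, tau):
    """Nelson-Siegel loadings used by Diebold-Li: level, slope, curvature."""
    x = lam * tau
    slope = (1.0 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

def fit_diebold_li(tau, y, lam_grid):
    """Grid-search the decay factor; solve for the betas by OLS at each grid point."""
    best = None
    for lam in lam_grid:
        X = ns_loadings(lam, tau)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((X @ beta - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, lam, beta)
    return best[1], best[2]

# Synthetic curve generated from known parameters, then recovered by the fit.
tau = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])  # maturities in years
beta_true, lam_true = np.array([5.0, -2.0, 1.0]), 0.6
y = ns_loadings(lam_true, tau) @ beta_true
lam_hat, beta_hat = fit_diebold_li(tau, y, np.arange(0.1, 2.01, 0.05))
```

The grid search mirrors the computational search for the decay factor mentioned in the abstract; with real market points the residual is nonzero and the chosen factor minimizes it rather than zeroing it.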
235

Estudo sobre a vida útil de rolamentos fixos de uma carreira de esferas. / Study of the service life of single-row deep groove ball bearings.

Marcos Vilodres Campanha 19 December 2007 (has links)
The present work discusses the calculation of rolling bearing life, describing the evolution of bearing life calculation up to its current state of the art. Its focus is to demonstrate, simply and objectively, the inconsistencies between the actual life of rolling bearings and their theoretical fatigue life estimation, with respect to contact fatigue. For this purpose, tests were performed on a rig specially designed for bearing fatigue testing. Two test series were carried out with temperature as the only variation (approximately 85°C and 110°C). The results indicate that the real life of the bearings diverges considerably from the calculated life, especially in the higher-temperature regime. This disparity is attributed to the lack of a precise computation of the relationship between bearing life and the l factor (which characterizes the lubricant film separating the contact surfaces), as well as to the omission of the load factor from the bearing life formulation.
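For context, the classical rating-life calculation that the experiments are compared against is the basic L10 formula (a textbook sketch with illustrative catalog numbers; the thesis's point is precisely that refinements such as the lubrication-related factor and the load factor are needed on top of this):

```python
def basic_l10_revolutions(C, P, exponent=3.0):
    """Basic rating life: L10 = (C/P)^p * 1e6 revolutions (p = 3 for ball bearings)."""
    return (C / P) ** exponent * 1e6

def l10_hours(C, P, rpm):
    """Convert the basic rating life to operating hours at a constant speed."""
    return basic_l10_revolutions(C, P) / (rpm * 60.0)

# Illustrative values: dynamic load rating C = 19.5 kN, equivalent load P = 3 kN, 1500 rpm.
life_rev = basic_l10_revolutions(19500.0, 3000.0)
life_h = l10_hours(19500.0, 3000.0, 1500.0)
```

L10 is the life that 90% of a population of identical bearings is expected to reach or exceed; the divergence reported above is between this kind of estimate and measured fatigue lives.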
236

Monte Carlo simulation study of the e+e- → Lambda Lambda-bar reaction with the BESIII experiment

Forssman, Niklas January 2016 (has links)
Studying reactions in which electrons and positrons collide and annihilate, so that hadrons can be formed from their energy, is an excellent tool for improving our understanding of the Standard Model. Hadrons are composite quark systems held together by the strong force. By making precise measurements of the so-called cross section of the hadron production generated during the annihilation, one can obtain information about the electromagnetic form factors, GE and GM, which describe the inner electromagnetic structure of hadrons. This gives us a better understanding of the strong force and the Standard Model. During my bachelor degree project I have been using data from the BESIII detector located at the Beijing Electron-Positron Collider (BEPC-II) in China. Uppsala University has several scientists working on the BESIII experiment. My task was to perform quality assurance of previous results for the reaction e+e- → Lambda Lambda-bar at a center-of-mass energy of 2.396 GeV. During a major part of the project I worked with Monte Carlo data. The reactions were generated with two generators, ConExc and PHSP, which were used for different purposes. I analyzed the simulated data to find a method of filtering out the background noise in order to extract a clean signal. Dr. Cui Li of the hadron physics group at Uppsala University has developed several selection criteria to extract these signals. The total efficiency of Cui Li's analysis was 14%; my analysis also obtained a total efficiency of 14%. This gave me confidence that my analysis has been implemented correctly and can now be applied to real data. It is also reassuring for Cui Li and the rest of the group that her analysis has been verified by an independently implemented selection algorithm.
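The efficiency quoted above is the fraction of generated signal events surviving the selection criteria; the bookkeeping can be sketched with a toy Monte Carlo (a purely illustrative cut on a generated angular distribution, not the BESIII selection or its generators):

```python
import numpy as np

# Toy Monte Carlo: generate events, apply a selection cut, report the efficiency.
rng = np.random.default_rng(42)
n_events = 100_000

cos_theta = rng.uniform(-1.0, 1.0, n_events)   # isotropic toy angular distribution
passed = np.abs(cos_theta) < 0.8               # toy "detector acceptance" cut

efficiency = float(passed.mean())
stat_err = float(np.sqrt(efficiency * (1.0 - efficiency) / n_events))  # binomial error
```

In a real analysis the same ratio is taken after the full chain of generation, detector simulation, reconstruction, and cuts; comparing two independently implemented selections, as done here, is a standard cross-check.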
237

Data Compression for use in the Short Messaging System / Datakompression för användning i Short Messaging Systemet

Andersson, Måns January 2010 (has links)
Data compression is a vast subject with many different algorithms, and no single algorithm is best at every task. This thesis takes a closer look at the compression of small files in the range of 100-300 bytes, with the compressed output intended to be sent over the Short Messaging System (SMS). Some well-known algorithms are tested for compression ratio, and two of them, Algorithm Λ and Adaptive Arithmetic Coding, are chosen for closer study and implemented in Java. These implementations are tested alongside the initially tested ones, and one algorithm is chosen to answer the question: "Which compression algorithm is best suited for compression of data for use in Short Messaging System messages?"
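The small-file problem the thesis addresses can be illustrated with a general-purpose compressor: on a 100-300 byte message, fixed stream overhead and a cold statistical model eat most of the gain, while highly repetitive input still compresses well (a sketch using Python's zlib rather than the thesis's Java implementations of Algorithm Λ and adaptive arithmetic coding):

```python
import zlib

# A short natural-language message in the 100-300 byte SMS range.
message = (b"Meet me at the station at seven; bring the tickets and the blue bag. "
           b"If the train is late, wait by the kiosk and I will call you.")
repetitive = b"a" * 300

msg_out = zlib.compress(message, 9)
rep_out = zlib.compress(repetitive, 9)

ratio_msg = len(msg_out) / len(message)      # close to 1: little gain on short text
ratio_rep = len(rep_out) / len(repetitive)   # tiny: trivial redundancy compresses well
```

This gap between short natural text and repetitive input is what motivates specialized small-message schemes, e.g. coders whose statistical model is primed for the expected message language rather than learned from scratch per message.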
238

Using React Native and AWS Lambda for cross-platform development in a startup

Andersson, Jonas January 2017 (has links)
When developing mobile applications, the tradition has been to write platform-specific (native) code for each platform. This usually means writing two separate applications for the two biggest platforms, Android and iOS. Alternatives exist that use the same code for different platforms. React Native is a relatively new cross-platform development framework that makes it possible to use the same code for Android and iOS applications. It also uses native UI elements as a possible solution to the performance issues often associated with cross-platform development. This thesis evaluates React Native and compares it against native Android. The implementation is done by replicating the main functionality of a social media application written as a native Android application. However, the application is not made as an exact replica, since that could limit the solutions in React Native. The evaluation is done in a startup company and therefore focuses on aspects important to a startup. Another issue when developing a mobile application is what type of backend to use. Performance, scalability and complexity are all important aspects when choosing a framework or language as the base of the backend architecture. Theoretical frameworks exist that could be used when building the backend, but they require resources that are often missing in a startup. AWS Lambda is a platform that claims to be a cost-effective way of building a scalable application. In this thesis, AWS Lambda is evaluated to see if it can be used to create an automatically scaled backend for this type of social media application. The conclusion of the React Native evaluation is that it can be a suitable alternative to native Android development.
If the team has previous experience in web development but lacks experience in mobile application development, it can be a wise choice, since it removes the need to learn two frameworks in native Android and native iOS development. React Native is also well suited for quickly creating functional prototypes that can be shown to potential investors. The biggest drawback is animation performance, although there are often ways to work around it; in our case this drawback did not affect the user experience of the final application. The evaluation of AWS Lambda concludes that it is not for every project. In this thesis, the application was somewhat too database-heavy, and therefore the autoscaling ability did not work properly. However, for a service that needs a lot of computing power, AWS Lambda could be a good fit. It could also be a suitable alternative if someone on the team has previous experience with the AWS environment.
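The pay-per-invocation and autoscaling model rests on AWS Lambda running one handler function per request (a minimal sketch using the standard Python handler signature; the API Gateway proxy-style event shape and field names shown here are assumptions, not taken from the thesis):

```python
import json

def lambda_handler(event, context):
    # AWS invokes this once per request: 'event' carries the request payload,
    # 'context' carries runtime metadata (unused in this sketch).
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation with a fake event, as a unit test would do.
response = lambda_handler({"queryStringParameters": {"name": "dev"}}, None)
```

Because each invocation is stateless and short-lived, scaling is automatic; the database-heavy workload mentioned above is the awkward case, since the bottleneck moves from compute to the shared datastore.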
239

Monte Carlo Simulation of the e+e− → Σ̄0Λ / Σ̄0Σ0 Reactions

Vaheid, Halimeh January 2017 (has links)
A central objective of nuclear physics is understanding the fundamental properties of hadrons and nuclei in terms of QCD. In the last decade, a large range of experimental and theoretical methods have been developed to study the nature of quark confinement and the structure of hadrons, which are composites of quarks and gluons. One important way to address questions of hadron physics is to study the electromagnetic form factors of hadrons: the electric and magnetic form factors are related to the distribution of charge and magnetization in hadrons. The internal structure of hyperons, a subgroup of hadrons, is a topic of interest for particle physicists, and the BESIII experiment is one of the few current facilities for studying hadron structure. The Uppsala Hadron Physics group, which is part of the BESIII collaboration, is preparing a proposal for data taking on ΛΣ̄0 transition form factors and Σ0 direct form factors at 2.5 GeV. Aiming at the electromagnetic form factors of Σ hyperons, this work contributes to that proposal through a simulation study of the e+e− → ΛΣ̄0 and e+e− → Σ0Σ̄0 reactions. The efficiency and resolution of the electromagnetic calorimeter sub-detector of BESIII and the kinematic properties of the detected particles are studied. The final goal is to provide input for the beam-time proposal and to optimize the future measurement. The first chapter covers the theoretical background, including the Standard Model, the strong interaction, QCD, and hadrons. The second chapter briefly presents concepts such as the cross-section formalism, relativistic kinematics, and electromagnetic form factors. The third chapter introduces the e+e− → ΛΣ̄0 and e+e− → Σ0Σ̄0 reactions. The BESIII experiment at BEPC-II is introduced in chapter 4, and chapter 5 presents the software tools used in this work.
The sixth chapter presents the results of a toy Monte Carlo study for parameter estimation, and the last chapter is dedicated to the results of a full BESIII software simulation.
240

Model-based Air and Fuel Path Control of a VCR Engine / Modellbaserad luft- och bränslereglering av en VCR-motor

Lindell, Tobias January 2009 (has links)
The objective of the work was to develop a basic control system for an advanced experimental engine from scratch. The engine this work revolves around is a Saab variable compression engine. A new control system is developed based on the naked engine, stripped of its original control system, with experiments forming the basis upon which the control system is built. Controllers for the throttles, for intake manifold pressures below ambient pressure, and for the exhaust gas oxygen ratio are developed, validated, and found to be satisfactory. The lambda controller is tested with several parameter sets, and the best set is picked for implementation in the engine. Models necessary for the development and validation of the controllers are developed, including models for the volumetric efficiency, the pressure dynamics of the intake manifold, the fuel injectors, and wall wetting.
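The exhaust-gas oxygen (lambda) loop described above is typically a PI controller holding the normalized air-fuel ratio at 1.0; the closed loop can be sketched against a toy first-order plant (illustrative gains, time constant, and plant model, not the thesis's identified engine models or tuned parameter sets):

```python
# PI lambda controller against a toy first-order plant.
# Plant: lambda relaxes toward (open-loop lambda + fuel trim u) with time constant tau.
Kp, Ki = 0.5, 2.0          # illustrative PI gains
tau, dt = 0.3, 0.01        # plant time constant [s], solver step [s]
lam_open = 1.08            # open-loop (uncontrolled) lambda: slightly lean
target = 1.0               # stoichiometric setpoint

lam, integ = lam_open, 0.0
for _ in range(1000):      # 10 s of simulated time
    err = target - lam
    integ += err * dt               # integral action removes the steady-state error
    u = Kp * err + Ki * integ       # fuel trim command
    lam += (dt / tau) * (lam_open + u - lam)
```

The integral term is what drives the steady-state fuel trim to exactly offset the lean open-loop bias; testing several (Kp, Ki) sets against such a model mirrors the parameter-set selection described in the abstract.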
