1061

Matematický model rozložení tvrdosti na opěrném válci / Mathematical Model of Hardness Distribution inside Backing Roll

Kracík, Adam January 2011
The aim of this work is to gain detailed knowledge of the hardness distribution in the first 60 mm below the surface of a backing roll. To this end, a method for multi-dimensional polynomial regression was developed, and a computer program implementing it was written. Finding suitable regression surfaces and interpreting them is a pivotal part of this work.
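The abstract does not give the regression formulation itself, so the following is only a rough sketch of the general idea: fitting a two-variable polynomial regression surface with scikit-learn. The feature names (depth below the surface, axial position), the polynomial degree, and the synthetic data are assumptions for illustration, not values taken from the thesis.

```python
# Minimal sketch (not the thesis's actual model): fit a polynomial regression
# surface hardness = f(depth, position) to synthetic measurements.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
depth = rng.uniform(0, 60, 500)        # mm below the roll surface (assumed range)
position = rng.uniform(0, 2000, 500)   # axial position along the roll, mm (assumed)
# Synthetic "measured" hardness with noise, purely for demonstration.
hardness = (60 - 0.3 * depth + 0.002 * position
            - 0.0005 * depth * position + rng.normal(0, 1, 500))

X = np.column_stack([depth, position])
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(X, hardness)

# Evaluate the fitted regression surface on a grid for inspection.
grid_d, grid_p = np.meshgrid(np.linspace(0, 60, 30), np.linspace(0, 2000, 30))
surface = model.predict(np.column_stack([grid_d.ravel(), grid_p.ravel()]))
print(surface.reshape(grid_d.shape).shape)  # (30, 30)
```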
1062

Využití modelů neuronových sítí pro hodnocení kvality vody ve vodovodních sítích / Using Artificial Neural Network Models to Assess Water Quality in Water Distribution Networks

Cuesta Cordoba, Gustavo Andres January 2013
A water distribution system (WDS) is a network of interconnected hydraulic components that transports water directly to customers. Water must be treated in a Water Treatment Plant (WTP) to provide safe drinking water to consumers, free from pathogenic and other undesirable organisms. Disinfection is an important step in achieving safe drinking water and preventing the spread of waterborne diseases. Chlorine is the most commonly used disinfectant in conventional water treatment processes because of its low cost, its capacity to deactivate bacteria, and because it maintains residual concentrations in the WDS that prevent microbiological contamination. The chlorine residual concentration is affected by a phenomenon known as chlorine decay: chlorine reacts with other components along the system and its concentration decreases. Chlorine is measured at the WTP outlet and at several monitoring points within the WDS to control water quality in the system. Simulation and modeling methods help to predict chlorine concentration in the WDS effectively. The purpose of this thesis is to assess chlorine concentration at strategic points within the WDS using historical measurements of water quality parameters that influence chlorine decay. Recent investigations of water quality have shown the need for non-linear modeling of chlorine decay. Chlorine decay in a pipeline is a complex phenomenon, so it requires techniques that can represent this behavior reliably and efficiently. Statistical models based on Artificial Neural Networks (ANN) have been found appropriate for problems related to the non-linearity of chlorine decay prediction, offering advantages over more conventional modeling techniques. In this sense, this thesis uses a specific neural network application to solve the problem of forecasting the residual chlorine
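The exact network architecture and input variables are not given in the abstract, so the sketch below only illustrates the general approach: a small feed-forward ANN regressing residual chlorine on water quality parameters. The chosen features (temperature, pH, chlorine dose at the WTP, travel time), the network size, and the synthetic first-order decay data are assumptions, not the thesis's model.

```python
# Hedged sketch of an ANN regression for residual chlorine (synthetic data;
# the thesis's actual features, architecture, and data will differ).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 1000
temperature = rng.uniform(5, 25, n)     # deg C (assumed feature)
ph = rng.uniform(6.5, 8.5, n)           # assumed feature
dose = rng.uniform(0.5, 2.0, n)         # chlorine at the WTP outlet, mg/L (assumed)
travel_time = rng.uniform(1, 48, n)     # hours in the network (assumed)
# Synthetic residual chlorine from a first-order decay law, for demo only.
k = 0.02 * 1.05 ** (temperature - 20)
residual = dose * np.exp(-k * travel_time) + rng.normal(0, 0.02, n)

X = np.column_stack([temperature, ph, dose, travel_time])
X_train, X_test, y_train, y_test = train_test_split(X, residual, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```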
1063

Structural Reliability Study of Highway Bridge Girders Based on AASHTO LRFD Bridge Design Specifications

Dallakoti, Pramish Shakti January 2020
No description available.
1064

A Simple PET Imaging Educational Demonstrator

Hussain, Shabbir January 2012
Recent interest in computer-based tools and simulations for PET imaging studies has driven many new developments. A strong emphasis in these studies has been on improving and optimizing PET scanners for better image quality and on quantifying related system parameters. In this project, a Matlab tool of an educational nature was developed for new students, allowing PET-like imaging to be demonstrated in a simple and quick way. The demonstration tool uses a high-resolution, voxel-based digital brain (Zubal) phantom as the primary study object. A tumor of a specific size is defined by the user on a chosen slice of the phantom. The output images from the tool show the exact location of the predefined tumor. The algorithm estimates the positron emission direction, positron range distribution, and photon detection in a circular geometry. The tool also estimates certain statistical parameters for a specified amount of radiotracer uptake: the spatial resolution, photon count, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) of the final PET image. The dependence of these estimates on different system input parameters has been studied.
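The demonstrator itself is a Matlab tool built on the Zubal phantom; the snippet below is only an illustration, written in Python, of how SNR and CNR might be computed from tumor and background regions of a reconstructed slice. The region definitions and the Poisson count model are assumptions made for the example.

```python
# Hedged illustration of SNR/CNR estimation on a synthetic "PET-like" slice;
# the actual demonstrator is a Matlab tool using the Zubal phantom.
import numpy as np

rng = np.random.default_rng(2)
image = rng.poisson(100, size=(128, 128)).astype(float)   # background counts (assumed)
image[60:70, 60:70] += rng.poisson(80, size=(10, 10))     # hot "tumor" region (assumed)

tumor = image[60:70, 60:70]
background = image[20:40, 20:40]

snr = tumor.mean() / background.std()
cnr = (tumor.mean() - background.mean()) / background.std()
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```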
1065

Study Of Spin-Lattice Relaxation Rates In Solids: Lattice-Frame Method Compared With Quantum Density-Matrix Method, And Glauber Dynamics

Solomon, Lazarus 09 December 2006
The spin-lattice relaxation rates are calculated for a rigid magnetic spin cluster in an elastic medium in the presence of a magnetic field using the lattice-frame method. This rate is then compared with both the rate calculated using the quantum mechanical density-matrix method and with the Glauber dynamics. These results are used to determine the contribution of various heat baths, such as a phonon bath in various dimensions or a fermionic bath, to the transition rates that enter dynamic Monte Carlo simulations of molecular magnets and nanomagnets.
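Of the three approaches compared, only Glauber dynamics lends itself to a generic sketch. Below is a minimal single-spin-flip Glauber update for a one-dimensional Ising chain in a field; the lattice, coupling, and temperature are assumptions for illustration and do not reproduce the thesis's spin-cluster-in-an-elastic-medium calculation.

```python
# Minimal Glauber dynamics for a 1D Ising chain in a field (illustrative only;
# parameters and model are assumptions, not the thesis's spin-lattice system).
import numpy as np

rng = np.random.default_rng(3)
N, J, h, T = 100, 1.0, 0.1, 2.0        # sites, coupling, field, temperature (assumed)
spins = rng.choice([-1, 1], size=N)

def local_field(s, i):
    return J * (s[(i - 1) % N] + s[(i + 1) % N]) + h

for sweep in range(1000):
    for _ in range(N):
        i = rng.integers(N)
        dE = 2 * spins[i] * local_field(spins, i)
        # Glauber flip probability: 1 / (1 + exp(dE / T))
        if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):
            spins[i] *= -1

print("mean magnetization:", spins.mean())
```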
1066

Pricing and Hedging of Financial Instruments using Forward–Backward Stochastic Differential Equations : Call Spread Options with Different Interest Rates for Borrowing and Lending

Berta, Abigail Hailu January 2022
In this project, we aim to solve option pricing and hedging problems numerically via Backward Stochastic Differential Equations (BSDEs). We use Markovian BSDEs to formulate nonlinear pricing and hedging problems of both European and American option types. This method of formulation is crucial for pricing financial instruments since it enables consideration of market imperfections and computations in high dimensions. We conduct numerical experiments on the pricing and hedging problems, where the interest rate for borrowing is higher than the rate for lending, using the least squares Monte Carlo and deep neural network methods. Moreover, based on the experimental results, we point out which method to choose over the other depending on the problem at hand.
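As a hedged illustration of the least squares Monte Carlo approach, the sketch below prices a European call spread under the standard driver for different borrowing and lending rates, using an explicit backward scheme with polynomial regression. The market parameters, strikes, basis, and discretization are assumptions and may differ from the thesis's experiments; the deep neural network method is not shown.

```python
# Least-squares Monte Carlo sketch for a call spread BSDE with a higher
# borrowing rate than lending rate (assumed parameters; illustrative only).
import numpy as np

rng = np.random.default_rng(4)
S0, mu, sigma = 100.0, 0.06, 0.2
r_lend, r_borrow = 0.04, 0.06           # different rates (assumed values)
T, n_steps, n_paths = 0.5, 50, 20_000
dt = T / n_steps
theta = (mu - r_lend) / sigma

# Stock paths under the physical measure.
dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
S = S0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dW, axis=1))
S = np.concatenate([np.full((n_paths, 1), S0), S], axis=1)

def driver(y, z):
    # Standard driver for a borrowing/lending spread:
    # f(y, z) = -r_l*y - theta*z + (r_b - r_l)*max(z/sigma - y, 0)
    return -r_lend * y - theta * z + (r_borrow - r_lend) * np.maximum(z / sigma - y, 0.0)

# Call spread payoff: long strike 95, short strike 105 (assumed strikes).
Y = np.maximum(S[:, -1] - 95.0, 0.0) - np.maximum(S[:, -1] - 105.0, 0.0)

for i in range(n_steps - 1, -1, -1):
    x = S[:, i] / S0
    basis = np.column_stack([np.ones(n_paths), x, x**2, x**3])
    # Z_i ~ E_i[Y_{i+1} dW_i] / dt, estimated by least-squares regression.
    coef_z, *_ = np.linalg.lstsq(basis, Y * dW[:, i] / dt, rcond=None)
    Z = basis @ coef_z
    # Explicit backward step: Y_i ~ E_i[Y_{i+1} + f(Y_{i+1}, Z_i) dt].
    coef_y, *_ = np.linalg.lstsq(basis, Y + driver(Y, Z) * dt, rcond=None)
    Y = basis @ coef_y

print("estimated option value at t = 0:", round(float(Y.mean()), 3))
```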
1067

Stokastiska modellers framtida roll för investeringsbeslut i fastigheter / The Future Role of Stochastic Models in Real Estate Investing

Karlsson, Hampus, Teimert, Emil January 2021
Digitalisering, datainsamling och de kraftigt utökade databaser som idag finns tillgängliga är något som resulterat i stora förändringar i hur man arbetar, detta gäller många branscher och inte minst fastighetsbranschen. Den ökade mängden tillgängliga data är ett stort hjälpmedel när det kommer till att ta beslut. För att på bästa sätt ta vara på all denna data krävs det dock att nya metoder och verktyg utvecklas eller gamla anpassas. Oavsett hur mycket data som finns tillgänglig kommer det dock alltid finnas osäkerheter att ta hänsyn till. Stokastiska modeller har tagits fram just för att hantera både stora mängder data och som hjälpmedel för att uppskatta osäkerhet. Idag används stokastiska modeller inom många olika branscher såsom finans, medicin samt inom forskning i fysik, matematik och statistik. Än idag har de dock en mycket liten roll när det kommer till investeringsbeslut inom fastighetsbranschen. Syftet med detta arbete är att undersöka om stokastiska modeller i framtiden kommer att ha en större roll när det kommer till investeringsbeslut inom fastighetsbranschen, samt vad den idag begränsade användningen beror på. Dessutom syftar arbetet till att belysa de skillnader, samt för- och nackdelar, som finns hos och mellan deterministiska och stokastiska modeller tillämpade för att assistera beslutsprocessen kring investeringar i fastigheter. Det kommer göras dels genom en intervjustudie med verksamma inom branschen med erfarenhet av arbete med investeringsbeslut, detta för att höra deras syn på huruvida det idag finns ett motstånd mot stokastiska modeller och om de tror att stokastiska modeller kommer ha en större roll i framtiden, dels genom en litteraturstudie av tidigare arbeten. Slutsatsen från arbetet är att det finns tydliga skillnader mellan deterministiska och stokastiska modeller, något som även gör att de till viss del ger olika resultat, även om avvikelserna som betraktades under detta arbete inte var speciellt omfattande. Detta kan stöttas av Jensen's Inequality samt The Flaw of Averages, vilket tyder på att det kan vara så att både risk och möjlighet idag under- eller överskattas. När det kommer till stokastiska modellers framtid inom investeringsbeslut i fastigheter var respondenterna relativt eniga i att det inte skulle bli någon större skillnad mot idag. Detta skulle dock kunna förändras om några skulle börja använda modellerna, då detta skulle kunna leda till att fler följer efter. Effekten skulle också kunna accelereras om digitalisering, förbättrade databaser och AI skulle ge modellerna möjlighet att uppskatta mjuka parametrar och ta med dessa i sina beräkningar. / Digitalization, data collection and the significantly increased databases that today are accessible have resulted in new work methods in several different industries, one of them being the real estate industry. The increased amount of data accessible can assist in a lot of different situations, for example being a great basis for decision making. To be able to utilize the data in the best way possible, however, either new methods and tools or a change in current methods to adapt to the new conditions is needed. These methods also must be able to handle uncertainty since it will always exist, no matter the amount of data. Stochastic models were developed to do just that: handle large amounts of data and at the same time work as a tool for dealing with uncertainty.
Stochastic models are today used in many different industries, including finance, medicine, and computer science, and they also assist research in mathematics, physics, and statistics. Still, their use is very limited when it comes to real estate investments. The aim of this thesis is to research the possibility of an increased use of stochastic models in real estate investments in the future and the reasons for the use being so limited today. The thesis also aims to illustrate the differences between deterministic and stochastic models and the pros and cons that follow. To achieve this, several interviews with real estate professionals with experience in investment decisions were conducted. In addition, a literature review was made to analyze previous work on the topic and to collect information regarding the differences between deterministic and stochastic models. To summarize the results of this study, there are some clear differences between deterministic and stochastic models, which in this case also lead to different results, even if the differences observed in this study were minor. This finding is supported by Jensen's Inequality and The Flaw of Averages, which show that the models used today may under- or overestimate both the risk and the opportunity of an investment. When it comes to the future of stochastic models in real estate investments, the respondents were quite united in their belief that not much will change from today. This could change, however: if some actors started to use the models, many respondents thought others would follow. The effect might also be accelerated if digitalization, larger databases, and AI enabled the models to estimate soft parameters and include them in their calculations.
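Since the argument rests on Jensen's Inequality and the Flaw of Averages, a tiny Monte Carlo illustration may help; the toy valuation function and input distribution below are assumptions for demonstration only, not the thesis's cash-flow model.

```python
# Illustration of the Flaw of Averages / Jensen's inequality with a toy
# real-estate-style valuation (assumed convex payoff; not the thesis's model).
import numpy as np

rng = np.random.default_rng(5)

def project_value(rent_growth):
    # Toy valuation that is convex in the uncertain input (assumption).
    return np.maximum(1000.0 * (1.0 + rent_growth) ** 10 - 900.0, 0.0)

growth = rng.normal(loc=0.02, scale=0.03, size=100_000)   # uncertain rent growth

deterministic = project_value(growth.mean())   # f(E[X]): plug in the average input
stochastic = project_value(growth).mean()      # E[f(X)]: average over the distribution

print(f"value at average input : {deterministic:8.1f}")
print(f"average value (MC)     : {stochastic:8.1f}")
# Jensen's inequality for convex f: E[f(X)] >= f(E[X]), so the deterministic
# figure understates the expected value in this example.
```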
1068

Hierarchical Scaling of Carbon Fluxes in the Arctic Using an Integrated Terrestrial, Aquatic, and Atmospheric Approach

Ludwig, Sarah January 2024
With warming temperatures, Arctic ecosystems are changing from a net sink to a net source of carbon to the atmosphere, but the Arctic's carbon balance remains highly uncertain. Landscapes are often assumed to be homogeneous when interpreting eddy covariance carbon fluxes, which can lead to biases when gap-filling and scaling-up observations to determine regional carbon budgets. Tundra ecosystems are heterogeneous at multiple scales. Plant functional types, soil moisture, thaw depth, and microtopography, for example, vary across the landscape and influence carbon dioxide (CO₂) and methane (CH₄) fluxes. In Chapter 2, I reported growing season CO₂ and CH₄ fluxes from an eddy covariance tower in the Yukon-Kuskokwim (YK) Delta in Alaska. I used flux footprint models and Bayesian Markov Chain Monte Carlo (MCMC) methods to unmix eddy covariance observations into constituent landcover fluxes based on high resolution landcover maps of the region. I compared three types of footprint models and used two landcover maps with varying complexity to determine the effects of these choices on derived ecosystem fluxes. I used artificially created gaps of withheld observations to compare gap-filling performance using the derived landcover-specific fluxes and traditional gap-filling methods that assume homogeneous landscapes. I also compared regional carbon budgets scaled up from observations using heterogeneous and homogeneous approaches. Gap-filling methods that accounted for heterogeneous landscapes were better at predicting artificially withheld gaps in CO₂ fluxes than traditional approaches, and there were only slight differences in performance between footprint models and landcover maps. I identified and quantified hot spots of carbon fluxes in the landscape (e.g., late growing season emissions from wetlands and small ponds). I resolved distinct seasonality in tundra growing season CO₂ fluxes. Scaling while assuming a homogeneous landscape overestimated the growing season CO₂ sink by a factor of two and underestimated CH₄ emissions by a factor of two when compared to scaling with any method that accounts for landscape heterogeneity. I showed how Bayesian MCMC, analytical footprint models, and high resolution landcover maps can be leveraged to derive detailed landcover carbon fluxes from eddy covariance timeseries. These results demonstrate the importance of landscape heterogeneity when scaling carbon emissions across the Arctic. Climate change is causing an intensification in tundra fires across the Arctic, including the unprecedented 2015 fires in the YK Delta. The YK Delta contains extensive surface waters (approximately 33% cover) and significant quantities of organic carbon, much of which is stored in vulnerable permafrost. Inland aquatic ecosystems act as hot-spots for landscape CO₂ and CH₄ emissions and likely represent a significant component of the Arctic carbon balance, yet aquatic fluxes of CO₂ and CH₄ are also some of the most uncertain. In Chapter 3, I measured dissolved CO₂ and CH₄ concentrations (n = 364) in surface waters from different types of waterbodies during summers from 2016 to 2019. I used Sentinel-2 multispectral imagery to classify landcover types and area burned in contributing watersheds.
I developed a model using machine learning to assess how waterbody properties (size, shape, and landscape properties), environmental conditions (O₂ concentration, temperature), and surface water chemistry (dissolved organic carbon composition, nutrient concentrations) help predict in situ observations of CO₂ and CH₄ concentrations across deltaic waterbodies. CO₂ concentrations were negatively related to waterbody size and positively related to waterbody edge effects. CH₄ concentrations were primarily related to organic matter quantity and composition. Waterbodies in burned watersheds appeared to be less carbon limited and had longer soil water residence times than in unburned watersheds. My results illustrated the importance of small lakes for regional carbon emissions and demonstrated the need for a mechanistic understanding of the drivers of greenhouse gases in small waterbodies. In the Arctic, waterbodies are abundant, and rapid thaw of permafrost is destabilizing the carbon cycle and changing hydrology. It is particularly important to quantify and accurately scale aquatic carbon emissions in Arctic ecosystems. Recently available high-resolution remote sensing datasets capture the physical characteristics of Arctic landscapes at unprecedented spatial resolution. In Chapter 4, I demonstrated how machine learning models can capitalize on these spatial datasets to greatly improve accuracy when scaling waterbody CO₂ and CH₄ fluxes across the YK Delta of south-west AK. I found that waterbody size and contour were strong predictors for aquatic CO₂ emissions, with these variables accounting for more than two-thirds of the influence in the scaling model. Small ponds (<0.001 km²) were hotspots of emissions, contributing fluxes several times their relative area, but were less than 5% of the total carbon budget. Small to medium lakes (0.001–0.1 km²) contributed the majority of carbon emissions from waterbodies. Waterbody CH₄ emissions were predicted by a combination of wetland landcover and related drivers, as well as watershed hydrology, and waterbody surface reflectance related to chromophoric dissolved organic matter. When compared to my machine learning approach, traditional scaling methods that did not account for relevant landscape characteristics overestimated waterbody CO₂ and CH₄ emissions by 26%–79% and 8%–53%, respectively. This chapter demonstrated the importance of an integrated terrestrial-aquatic approach to improving estimates and uncertainty when scaling carbon emissions in the Arctic. In order to understand carbon feedbacks with the atmosphere and predict climate change, we need to develop methods to model and scale up carbon emissions. Gridded datasets of carbon fluxes are used to benchmark Earth system models, attribute changes in rates of atmospheric concentrations of greenhouse gases, and project future climate change. There are two main approaches to deriving gridded datasets of carbon fluxes and global or regional carbon budgets: bottom-up scaling and top-down atmospheric inversions. There is often divergence between approaches, with carbon budgets calculated from bottom-up and top-down studies rarely overlapping. The resulting uncertainty in carbon budgets calculated from either approach is more pronounced at high latitudes. One of the challenges with combining bottom-up models and comparing top-down models is the variable spatial resolutions used in each approach.
In Chapter 5, I applied flux scaling models from earlier chapters to create bottom-up carbon budgets at very high resolution (10 m) for the entire YK Delta domain. I used ERA5 land reanalysis data to extend the flux models to the 2012–2015 and 2017 growing seasons to coincide with airborne observations of atmospheric CO₂ and CH₄ concentrations from the NASA CARVE and Arctic-CAP campaigns. I progressively coarsened remote sensing imagery for the region to 30 m, 90 m, 250 m, and 1 km to create coarser landcover maps and corresponding bottom-up carbon budgets. The high resolution bottom-up models, when convolved with concentration footprints, produced simulated atmospheric enhancements that were similar to observed atmospheric enhancements. There was little change coarsening to 30 m and 90 m, but simulated atmospheric enhancements and especially carbon budgets were quite different at 250 m and 1 km spatial resolution. The changes with resolution were largely the result of an increase in area mapped as wetlands and shrub tundra, and less area mapped as small waterbodies and lichen tundra. Coarser resolution bottom-up scaling consistently overestimated CH₄ budgets. By evaluating flux models against atmospheric observations, I was able to diagnose missing components, such as inland water carbon emissions, and identify times when the scaling models overestimated emissions owing to missing seasonal dynamics. This dissertation combined novel uses of statistical techniques with a high density of field observations to yield process-level understanding of carbon cycling that could be applied to scaling up carbon emissions. By merging terrestrial and aquatic perspectives and concurrently mapping ecosystem landcovers and disturbances at high spatial resolution, I avoided common sources of uncertainty in carbon budgets such as double-counting of areas. I investigated how we represent the landscape in terms of both spatial resolution and the level of landscape heterogeneity, and determined the effects of these choices on carbon fluxes and budget estimates. By comparing to the atmosphere, I evaluated the validity of different approaches to modeling carbon fluxes in the Arctic. Together, the chapters in this dissertation provided a holistic study of carbon cycling in the Arctic.
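The footprint-unmixing idea in Chapter 2 can be illustrated with a toy example: each observed tower flux is modeled as a footprint-weighted mixture of landcover-specific fluxes, which are then recovered from many observations. The sketch below uses synthetic data and plain least squares; the landcover classes, footprint weights, and flux values are assumptions, and the Bayesian MCMC treatment used in the dissertation is not reproduced.

```python
# Hedged sketch of footprint unmixing: each observed flux is a footprint-
# weighted mix of landcover fluxes; recover landcover fluxes by least squares
# (the dissertation uses Bayesian MCMC; data here are synthetic).
import numpy as np

rng = np.random.default_rng(6)
landcovers = ["wetland", "tussock tundra", "small pond"]     # assumed classes
true_flux = np.array([0.8, -1.5, 1.2])                       # umol CO2 m-2 s-1 (assumed)

n_obs = 500
weights = rng.dirichlet(alpha=[2.0, 5.0, 1.0], size=n_obs)   # footprint fractions per obs
observed = weights @ true_flux + rng.normal(0, 0.2, n_obs)   # tower flux + noise

estimate, *_ = np.linalg.lstsq(weights, observed, rcond=None)
for name, est, truth in zip(landcovers, estimate, true_flux):
    print(f"{name:15s} estimated {est:+.2f}  (true {truth:+.2f})")
```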
1069

Redistricting, Representation, and Perception: Three Essays on U.S. Local Politics

Novoa, Gustavo Francisco January 2024
This dissertation analyzes how redistricting, minority representation, and perceived polarization—three topics regularly studied at the national level—function at the level of local government. The first two chapters focus on city council redistricting. In the first chapter, I used a new approach, a sequential Monte Carlo algorithm, to simulate city council district plans. By simulating tens of thousands of plans for each city, I was able to compare the plans that are actually implemented to a representative sample of all plausible plans. This analysis represents the first large-N geospatial analysis of city council redistricting. I found that the city council maps that are actually implemented feature more majority-minority districts than the median simulation. This implies that somewhere in the redistricting process, a conscious effort is made to foster minority representation. In the second chapter, I merged city council map data with the results of city council elections. I then analyzed the relationship between the composition of districts and who runs and who wins in city council elections. I found that district-level demographic makeup continues to be the dominant factor in the supply of minority candidates. I also found that, comparing two cities that are otherwise demographically and politically similar, cities that fall under Voting Rights Act (VRA) pre-clearance had more minorities run for office and win election on average. In the third chapter, I conducted an original survey experiment to determine if respondents' perceptions of partisan polarization differed in local contexts relative to the national political landscape. I did not find measurable differences in the perceived prevalence of support for different issues. However, I did find that respondents were slightly less willing to endorse generic language about partisans' issue support when cued to think about ordinary voters in their local area. Altogether, these studies probe three different aspects of local electoral politics. In doing so, they help reconcile our understanding of electoral politics nationally with areas of local politics that still have many open questions.
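The ensemble-comparison step in the first chapter can be illustrated with a toy calculation: place the enacted plan's count of majority-minority districts within the distribution of counts from simulated plans. All numbers below are made up, and the sequential Monte Carlo sampler itself is not shown.

```python
# Illustrative ensemble comparison (synthetic data): where does the enacted
# plan's number of majority-minority districts fall among simulated plans?
import numpy as np

rng = np.random.default_rng(7)
simulated_mm_districts = rng.binomial(n=9, p=0.25, size=10_000)  # assumed ensemble
enacted_mm_districts = 4                                         # assumed enacted plan

percentile = (simulated_mm_districts < enacted_mm_districts).mean() * 100
print(f"enacted plan has more majority-minority districts than "
      f"{percentile:.1f}% of simulated plans "
      f"(median simulation: {np.median(simulated_mm_districts):.0f})")
```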
1070

Statistical Methodologies for Decision-Making and Uncertainty Reduction in Machine Learning

Zhang, Haofeng January 2024
Stochasticity arising from data and training can cause statistical errors in prediction and optimization models and lead to inferior decision-making. Understanding the risk associated with the models and converting predictions into better decisions have become increasingly prominent. This thesis studies the interaction of two fundamental topics, data-driven decision-making and machine-learning-based uncertainty reduction, where it develops statistically principled methodologies and provides theoretical insights. Chapter 2 studies data-driven stochastic optimization, where model parameters of the underlying distribution need to be estimated from data in addition to the optimization task. Several mainstream approaches have been developed to solve data-driven stochastic optimization, but direct statistical comparisons among different approaches have not been well investigated in the literature. We develop a new regret-based framework based on stochastic dominance to rigorously study and compare their statistical performance. Chapter 3 studies uncertainty quantification and reduction techniques for neural network models. Uncertainties of neural networks arise not only from data, but also from the training procedure, which often injects substantial noise and bias. These hinder the attainment of statistical guarantees and, moreover, impose computational challenges due to the need for repeated network retraining. Building upon the recent neural tangent kernel theory, we create statistically guaranteed schemes to characterize and remove, in a principled way, the uncertainty of over-parameterized neural networks with very low computational effort. Chapter 4 studies uncertainty reduction in stochastic simulation, where standard Monte Carlo computation is widely known to exhibit a canonical square-root convergence speed in terms of sample size. Two recent techniques derived from an integration of reproducing kernels and Stein's identity have been proposed to reduce the error in Monte Carlo computation to supercanonical convergence. We present a more general framework to encompass both techniques that is especially beneficial when the sample generator is biased and noise-corrupted. We show that our general estimator, the doubly robust Stein-kernelized estimator, outperforms both existing methods in terms of mean squared error rates across different scenarios. Chapter 5 studies bandit problems, which are important sequential decision-making problems that aim to find optimal adaptive strategies to maximize cumulative reward. Bayesian bandit algorithms with approximate Bayesian inference have been widely used to solve bandit problems in practice, but their theoretical justification is less investigated, partly because of the additional Bayesian inference errors. We propose a general theoretical framework to analyze Bayesian bandits in the presence of approximate inference and establish the first regret bound for Bayesian bandit algorithms with bounded approximate inference errors.
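As a toy illustration of the Chapter 5 setting, the sketch below runs Thompson sampling on Bernoulli arms while perturbing the posterior samples to mimic approximate-inference error. The perturbation model, priors, and parameters are assumptions for demonstration and are not the thesis's framework or regret analysis.

```python
# Toy Thompson sampling with an approximate posterior: Beta posterior samples
# are perturbed to mimic bounded inference error (illustrative assumption only).
import numpy as np

rng = np.random.default_rng(8)
true_means = np.array([0.3, 0.5, 0.7])     # unknown Bernoulli arm means (assumed)
alpha = np.ones(3)                          # Beta posterior parameters
beta = np.ones(3)
eps = 0.05                                  # crude stand-in for approximation error
cum_reward = 0.0

for t in range(5000):
    # Sample from the (perturbed) posterior of each arm and play the argmax.
    samples = rng.beta(alpha, beta) + rng.normal(0, eps, 3)
    arm = int(np.argmax(samples))
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward
    cum_reward += reward

print("average reward:", round(cum_reward / 5000, 3), "| best arm mean:", true_means.max())
```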
