211

Methodological advances in benefit transfer and hedonic analysis

Puri, Roshan 19 September 2023 (has links)
This dissertation introduces advanced statistical and econometric methods in two distinct areas of non-market valuation: benefit transfer (BT) and hedonic analysis. The first and third chapters address the challenge of estimating the societal benefits of prospective environmental policy changes by adopting the locally weighted regression (LWR) technique in an environmental valuation context, while the second chapter combines the output of traditional hedonic regression and matching estimators and provides guidance on choosing a model with low risk of bias in housing market studies.

The economic and societal benefits associated with environmental conservation programs, such as improvements in water quality or increases in wetland acreage, can be estimated directly through primary studies. However, primary studies are highly resource-intensive and time-consuming, typically involving extensive data collection, sophisticated models, and a considerable investment of financial and human resources. BT offers a practical alternative: valuation estimates, functions, or models from prior primary studies are used to predict the societal benefit of conservation policies at a policy site. Existing studies typically fit a single regression model to all observations in the metadata and generate a single set of coefficients to predict welfare (willingness to pay) at a prospective policy site. A single set of coefficients, however, may not reflect the true relationship between dependent and independent variables, especially when multiple source studies and locations are involved in the data-generating process, which degrades the predictive accuracy of the meta-regression model (MRM). To address this shortcoming, we employ the LWR technique in an environmental valuation context. LWR estimates a different set of coefficients for each location to be used for BT prediction, but the empirical exercise in the existing literature is computationally demanding and cumbersome to adapt in practice.

In the first chapter, we simplify the experimental setup required for LWR-BT analysis by taking a closer look at the choice of weight variables across different window sizes and weight function settings. We propose a pragmatic solution: "universal weights" instead of striving to identify the best of thousands of different weight variable settings. Using the water quality metadata employed in the published literature, we show that our universal weights generate more efficient and equally plausible BT estimates for policy sites than the best weight variable settings that emerge from a time-consuming cross-validation search over the entire universe of individual variable combinations. The third chapter expands the scope of LWR to wetland metadata. We use a conceptually similar set of weight variables as in the first chapter and replicate its methodological approach, showing that LWR, under our proposed weight settings, yields substantial gains in both predictive accuracy and efficiency compared with a standard, globally linear MRM.

Our second chapter delves into a separate yet interrelated realm of non-market valuation: hedonic analysis.
Here, we explore the combined inferential power of traditional hedonic regression and matching estimators to provide guidance on model choice for housing market studies in which researchers aim to estimate an unbiased binary treatment effect in the presence of unobserved spatial and temporal effects. We examine the potential sources of bias within both hedonic regression and basic matching, discuss the theoretical routes to mitigating these biases, and assess their feasibility in practical contexts. We propose a novel route towards unbiasedness, the "cancellation effect", and illustrate its empirical feasibility while estimating the impact of flood hazards on housing prices.

/ Doctor of Philosophy /

This dissertation introduces novel statistical and econometric methods to better understand the value of environmental resources that do not have an explicit market price, such as the benefits we get from changes in water quality or wetland area, or the impact of flood risk zoning on the sales price of residential properties. The first and third chapters tackle the challenge of estimating the value of environmental changes, such as cleaner water or more wetlands. To figure out how much people benefit from these changes, we can look at how much they would be willing to pay for improved water quality or increased wetland area. This typically requires conducting a primary survey, which is expensive and time-consuming. Instead, researchers can draw insights from prior studies to predict welfare at a new policy site. This approach is analogous to applying a methodology or findings from one research work to another. However, the direct application of findings from one context to another assumes uniformity across studies, which is unlikely, especially when past studies are associated with different spatial locations. To address this, we propose a "local weighting" technique, which places greater emphasis on the studies that closely align with the characteristics of the new (policy) context. Determining the weight variables, the factors that dictate this alignment, is a question that requires empirical investigation. One recent study applies this local weighting technique to estimate the benefits of improved water quality and suggests experimenting with different factors to gauge the similarity between past and new studies. However, that approach is computationally intensive, making it impractical to adopt. In our first chapter, we propose a more pragmatic solution: a "universal weight" that does not require assessing multiple factors. With our proposed weights in an otherwise similar setting, we obtain benefit estimates that are as plausible as, and more efficient than, those of previous studies. In the third chapter, we expand the scope of local weighting to the valuation of gains or losses in wetland areas. We use a conceptually similar set of weight variables and replicate the empirical exercise from the first chapter, showing that the local weighting technique, under our proposed settings, substantially improves the accuracy and efficiency of estimated benefits associated with changes in wetland acreage. This highlights the broad potential of local weighting in an environmental valuation context. The second chapter of this dissertation seeks to understand the impact of flood risk on housing prices.
We can use "hedonic regression" to understand how different features of a house, like its size, location, sales year, amenities, and flood zone location affect its price. However, if we do not correctly specify this function, then the estimates will be misleading. Alternatively, we can use "matching" technique where we pair the houses inside and outside of the flood zone in all observable characteristics, and differentiate their price to estimate the flood zone impact. However, finding identical houses in all aspects of household and neighborhood characteristics is practically impossible. We propose that any leftover differences in features of the matched houses can be balanced out by considering where the houses are located (school zone, for example) and when they were sold. We refer to this route as the "cancellation effect" and show that this can indeed be achieved in practice especially when we pair a single house in a flood zone with many houses outside that zone. This not only allows us to accurately estimate the effect of flood zones on housing prices but also reduces the uncertainty around our findings.
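The locally weighted meta-regression idea described in this abstract can be illustrated with a few lines of code. The snippet below is a minimal sketch, not the dissertation's implementation: it assumes a metadata set of prior willingness-to-pay estimates, a single illustrative weight variable, and a Gaussian kernel, and it fits a separate weighted least-squares model for each policy site. All variable names and values are hypothetical.

```python
import numpy as np

def lwr_benefit_transfer(X, y, x_policy, weight_var, w_policy, bandwidth=1.0):
    """Locally weighted meta-regression sketch for benefit transfer.

    X          : (n, k) matrix of metadata moderators (with intercept column)
    y          : (n,)  willingness-to-pay estimates from prior studies
    x_policy   : (k,)  moderator values describing the policy site
    weight_var : (n,)  illustrative "universal" weight variable per source study
    w_policy   : value of the weight variable at the policy site
    """
    # Gaussian kernel weights: studies similar to the policy site count more
    w = np.exp(-0.5 * ((weight_var - w_policy) / bandwidth) ** 2)

    # Weighted least squares: beta = (X'WX)^{-1} X'Wy
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

    # Site-specific coefficients give the BT prediction for this site
    return x_policy @ beta

# Hypothetical example: 200 source estimates, two moderators plus intercept
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([10.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)
weight_var = rng.uniform(0, 10, size=n)      # e.g. a distance or income proxy
wtp_hat = lwr_benefit_transfer(X, y, X[0], weight_var, w_policy=3.0)
print(f"Predicted WTP at the policy site: {wtp_hat:.2f}")
```

The key departure from a standard meta-regression model is that the coefficient vector is re-estimated for every policy site, with more similar source studies receiving larger weights.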
212

A Model to Predict Student Matriculation from Admissions Data

Khajuria, Saket 20 April 2007 (has links)
No description available.
213

Kernel Methods for Regression

Rossmann, Tom Lennart January 2023 (has links)
Kernel methods are a well-studied approach to regression that implicitly map input variables into possibly infinite-dimensional feature spaces, and they are particularly useful where standard linear regression fails to capture non-linear relationships in the data. Because the kernel (Gram) matrix grows with the number of training samples while an explicit feature representation grows with the number of features, the choice between standard linear regression and kernel regression can be seen as a tradeoff between constraints on the number of features and on the number of training samples. Our results show that the Gaussian kernel consistently achieves the lowest mean squared error for the largest training size considered, while standard ridge regression exhibits a higher mean squared error but a lower fit time. We also prove algebraically that the solutions of standard ridge regression and kernel ridge regression (with a linear kernel) are mathematically equivalent.
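As a rough illustration of the tradeoff and the equivalence claim, the sketch below (a toy example using scikit-learn, not the thesis's experiments) fits plain ridge regression, kernel ridge regression with a linear kernel, and kernel ridge regression with a Gaussian (RBF) kernel to a one-dimensional non-linear target. The data and hyperparameters are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)   # non-linear target

alpha = 1.0
ridge = Ridge(alpha=alpha, fit_intercept=False).fit(X, y)
krr_lin = KernelRidge(alpha=alpha, kernel="linear").fit(X, y)
krr_rbf = KernelRidge(alpha=0.1, kernel="rbf", gamma=0.5).fit(X, y)

# Linear-kernel KRR reproduces plain ridge predictions (the dual solution
# of the same optimisation problem), up to numerical precision.
print(np.allclose(ridge.predict(X), krr_lin.predict(X), atol=1e-8))

# The Gaussian kernel, by contrast, can follow the non-linear signal.
mse = lambda pred: np.mean((y - pred) ** 2)
print(f"ridge MSE: {mse(ridge.predict(X)):.3f}, "
      f"RBF KRR MSE: {mse(krr_rbf.predict(X)):.3f}")
```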
214

Predicting UFC matches using regression models

Apelgren, Sebastian, Eklund, Christoffer January 2024 (has links)
This project applied statistical inference methods to historical data on mixed martial arts (MMA) matches from the Ultimate Fighting Championship (UFC). The goal was to create a model that predicts the outcome of UFC matches with the best possible accuracy. The main methods were logistic regression and Bayesian regression, fitted to data from matches between early April 2000 and mid-April 2024. The predictions of these models were compared with the predictions of various betting sites as well as with the true outcomes of the matches. The logistic regression model and the Bayesian model predicted the true outcome of the matches 60% and 70% of the time, respectively, with both producing predictions comparable to those of the betting sites.
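A minimal sketch of the logistic-regression side of such a model is shown below. It is not the authors' model: the features (per-match differences in reach, striking rate, and win rate) are invented for illustration, and the data are simulated rather than taken from UFC records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical features: per-match differences (fighter A minus fighter B)
# in reach, significant strikes landed per minute, and career win rate.
n = 2000
X = rng.normal(size=(n, 3))
true_w = np.array([0.3, 0.8, 1.2])
p_win = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = rng.binomial(1, p_win)            # 1 = fighter A wins

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Predicted win probabilities could then be compared against bookmaker odds
# once those odds are converted to implied probabilities.
print(f"Held-out accuracy: {model.score(X_te, y_te):.2f}")
```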
215

Analysis of large data sets with linear and logistic regression

Hill, Christopher M. 01 April 2003 (has links)
No description available.
216

Prisbildning på fastigheter : Vad kan en skattesänkning innebära? [Price formation in the property market: What could a tax cut mean?]

Björkman, Gunilla, Gelinder, Kristian January 2008 (has links)
This thesis examines how property prices are formed in Sweden. The aim is to analyse the price-formation mechanisms of the single-family housing market, with particular focus on the property tax. Using quarterly time-series data for the years 1990-2007 for the country as a whole and for the regions Stockholms län, Sydsverige, and mellersta Norrland, regressions were run to study how different variables affect single-family house prices.

The variables studied are household income, the mortgage interest rate, the property tax, the Affärsvärlden general stock index (Affärsvärldens generalindex), and the previous year's property price. The results show that the tax affects pricing to some extent. Income is found to be the most important variable, and the mortgage rate also shows significant effects on prices.
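For readers unfamiliar with this type of study, the sketch below shows the general shape of such a regression: quarterly house prices regressed on household income, the mortgage rate, the property tax, a stock index, and the lagged price. It uses simulated data and hypothetical column names, not the thesis's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
quarters = pd.period_range("1990Q1", "2007Q4", freq="Q")
n = len(quarters)

# Simulated stand-ins for the explanatory variables described in the thesis
df = pd.DataFrame({
    "income": np.linspace(100, 180, n) + rng.normal(0, 2, n),
    "mortgage_rate": np.linspace(12, 4, n) + rng.normal(0, 0.3, n),
    "property_tax": np.where(np.arange(n) < 60, 1.5, 1.0),
    "stock_index": np.cumsum(rng.normal(1, 5, n)) + 100,
}, index=quarters.to_timestamp())

# Simulated house price driven by the covariates; the lagged price is then
# added as an additional regressor, as in the thesis.
df["price"] = (50 + 2.0 * df["income"] - 5.0 * df["mortgage_rate"]
               - 30.0 * df["property_tax"] + 0.1 * df["stock_index"]
               + rng.normal(0, 5, n))
df["price_lag"] = df["price"].shift(4)        # previous year's price
df = df.dropna()

X = sm.add_constant(df[["income", "mortgage_rate", "property_tax",
                        "stock_index", "price_lag"]])
ols = sm.OLS(df["price"], X).fit()
print(ols.summary().tables[1])                # coefficient estimates
```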
217

Bayesian locally weighted online learning

Edakunni, Narayanan U. January 2010 (has links)
Locally weighted regression is a non-parametric regression technique capable of coping with non-stationarity of the input distribution. Online algorithms such as Receptive Field Weighted Regression and Locally Weighted Projection Regression use a sparse representation of the locally weighted model to approximate a target function, resulting in an efficient learning algorithm. However, these algorithms are fairly sensitive to parameter initialization and have multiple open learning parameters that are usually set using insights into the problem and local heuristics. In this thesis, we attempt to alleviate these problems through a probabilistic formulation of locally weighted regression followed by principled Bayesian inference of the parameters. In the Randomly Varying Coefficient (RVC) model developed in this thesis, locally weighted regression is set up as an ensemble of regression experts that provide a local linear approximation to the target function. We train the individual experts independently and then combine their predictions using a Product of Experts formalism. Independent training of the experts allows us to adapt the complexity of the regression model dynamically while learning in an online fashion. The local experts themselves are modelled using a hierarchical Bayesian probability distribution, with Variational Bayesian Expectation Maximization steps used to learn the posterior distributions over the parameters. The Bayesian modelling of the local experts leads to an inference procedure that is fairly insensitive to parameter initialization and avoids problems such as overfitting. We further exploit the Bayesian inference procedure to derive efficient online update rules for the parameters. Learning in the regression setting is also extended to handle classification tasks by using logistic regression to model discrete class labels. The main contribution of the thesis is a spatially localised online learning algorithm, set up in a probabilistic framework with principled Bayesian inference rules for its parameters, that learns local models completely independently of each other, uses only local information, and adapts the local model complexity in a data-driven fashion. This thesis, for the first time, brings together the computational efficiency and adaptability of 'non-competitive' locally weighted learning schemes and the modelling guarantees of the Bayesian formulation.
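The Product of Experts combination step mentioned above has a simple closed form when each expert returns a Gaussian prediction: precisions add, and the combined mean is the precision-weighted average of the expert means. The sketch below illustrates only that combination rule with made-up local linear experts; it is not the RVC model and omits the hierarchical Bayesian and variational components.

```python
import numpy as np

class LocalLinearExpert:
    """A local linear model with a Gaussian receptive field (illustrative only)."""

    def __init__(self, center, width, beta, noise_var):
        self.center, self.width = center, width
        self.beta, self.noise_var = beta, noise_var

    def predict(self, x):
        """Return mean and variance of this expert's Gaussian prediction at x."""
        mean = self.beta[0] + self.beta[1] * (x - self.center)
        # Variance is inflated where the receptive-field activation is weak,
        # so distant experts contribute little to the combined prediction.
        activation = np.exp(-0.5 * ((x - self.center) / self.width) ** 2)
        return mean, self.noise_var / max(activation, 1e-12)

def product_of_experts(experts, x):
    """Combine Gaussian predictions: precisions add, means are precision-weighted."""
    means, variances = zip(*(e.predict(x) for e in experts))
    precisions = 1.0 / np.array(variances)
    var = 1.0 / precisions.sum()
    mean = var * (precisions * np.array(means)).sum()
    return mean, var

# Experts hold first-order local approximations of sin(x) around their centers
experts = [LocalLinearExpert(c, 0.5, beta=(np.sin(c), np.cos(c)), noise_var=0.05)
           for c in np.linspace(0, 2 * np.pi, 8)]
print(product_of_experts(experts, x=1.0))   # combined mean is close to sin(1.0)
```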
218

Characterization of the association between short-term variations in daily mortality and adverse environmental conditions using time series methodology

Guzman, Martha Elva Ramierez January 1990 (has links)
No description available.
219

Utilizing auxiliary information in sample survey estimation and analysis

Silva, Pedro Luis do Nascimento January 1996 (has links)
No description available.
220

Extensions of quantal problems

Acar, Emel January 2000 (has links)
No description available.
