861

Production function shifts in Soviet postwar industry: the mid 1970's shift

Mitchell, Claire E. January 1985 (has links)
The Soviet economy experienced a marked decline in the rate of growth of output in the mid 1970s. Research was conducted on Soviet postwar industry to identify when the shift was strongest and in which industrial branches. A statistical technique known as the "Chow test" was used to test for a "break" year -- the year when the production function changed most dramatically. Regression results showed that two types of industry -- that closely associated with military production and that responsible for producing consumer goods -- exhibited little or no shift in the mid 1970s. The remaining sectors, which were primarily resource intensive, did show a significant shift in 1974. A description of the investigation, including input data and regression results, is included. / M.A.
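The Chow test described above compares a pooled regression against separate pre- and post-break regressions via an F-statistic. A minimal sketch in Python; the simulated output series, slope values, and break point are illustrative, not the thesis's actual industrial data:

```python
import numpy as np

def chow_test(X, y, break_idx):
    """F-statistic for a structural break at observation break_idx.

    Compares the pooled residual sum of squares against the sum of RSS
    from separate regressions fitted before and after the break."""
    def rss(Xs, ys):
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        resid = ys - Xs @ beta
        return float(resid @ resid)

    n, k = X.shape
    pooled = rss(X, y)
    pre = rss(X[:break_idx], y[:break_idx])
    post = rss(X[break_idx:], y[break_idx:])
    return ((pooled - pre - post) / k) / ((pre + post) / (n - 2 * k))

# Hypothetical output series whose growth slope changes at t = 50
rng = np.random.default_rng(0)
t = np.arange(100.0)
y = 0.5 * t + np.where(t >= 50, 1.5 * (t - 50), 0.0) + rng.normal(0, 1, 100)
X = np.column_stack([np.ones(100), t])
```

Scanning candidate years and picking the one with the largest F mimics the thesis's search for the year of the strongest production-function shift; inference on the maximal F requires a non-standard null distribution.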
862

A stand level multi-species growth model for Appalachian hardwoods

Bowling, Ernest H. January 1985 (has links)
A stand-level growth and yield model was developed to predict future diameter distributions of thinned stands of mixed Appalachian hardwoods. The model allows prediction by species groups and diameter classes. Stand attributes (basal area per acre, trees per acre, minimum stand diameter, and arithmetic mean dbh) were projected through time for the whole stand and for individual species groups. Future diameter distributions were obtained using the three-parameter Weibull probability density function and a parameter recovery method. The recovery method employed the first two non-central moments of dbh (arithmetic mean dbh and quadratic mean dbh squared) to generate Weibull parameters. Future diameter distributions were generated for the whole stand and every species group but one; the diameter distribution of the remaining species group was obtained by subtraction from whole-stand values. A system of tree volume equations, which allows the user to obtain total tree volume or merchantable volume to any top height or diameter, completes the model. Volumes can be calculated by species group and summed to get whole-stand volume. / M.S.
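The moment-based parameter recovery step can be sketched as follows. Assuming the location parameter is treated as known (e.g., tied to the minimum stand diameter), the ratio of the two centered moments depends only on the Weibull shape, so the shape can be solved for numerically and the scale backed out. The function name and bracketing interval are illustrative, not the thesis's actual routine:

```python
from math import gamma
from scipy.optimize import brentq

def recover_weibull(mean_dbh, quad_mean_dbh_sq, location):
    """Recover Weibull shape and scale from the arithmetic mean dbh and
    the quadratic mean dbh squared (the first two non-central moments),
    with the location parameter assumed known."""
    m1 = mean_dbh - location                                    # E[X - a]
    m2 = quad_mean_dbh_sq - 2 * location * mean_dbh + location**2  # E[(X - a)^2]
    ratio = m2 / m1**2                                          # depends on shape only
    f = lambda c: gamma(1 + 2 / c) / gamma(1 + 1 / c) ** 2 - ratio
    shape = brentq(f, 0.2, 50.0)                                # solve for shape c
    scale = m1 / gamma(1 + 1 / shape)                           # then back out scale b
    return shape, scale
```

Given projected future stand moments, the recovered parameters define the future diameter distribution for each species group.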
863

Assessing Worker Preferences For Steel Industry Electrification Using Discrete Choice Methods

Meenakshi Narayanaswami (19179634) 19 July 2024 (has links)
<p dir="ltr">As nations strive to reduce greenhouse gas emissions, the transformation of energy-intensive industries will significantly impact job quality and worker well-being. This thesis investigates the critical intersection of employment opportunities and just energy transitions in the context of industrial decarbonization, focusing on the U.S. steel sector. We address the challenge of balancing economic, environmental, and social considerations in the shift towards low-carbon manufacturing processes. Semi-structured interviews inform the development of a choice-based conjoint survey of Indiana steelworkers, which helps quantify worker preferences for various job attributes such as shift patterns, overtime hours, and wages. The analysis employs willingness-to-pay models to elucidate the complex relationships between compensation and working conditions in the context of potential changes brought about by renewable energy integration and electrification of steel production. Key findings reveal significant disutility associated with increased overtime hours and an unexpected preference for night shifts over day shifts among respondents. The research also highlights the importance of sociotechnical solutions that account for worker needs in designing decarbonized manufacturing processes. While acknowledging limitations such as potential sample bias, this thesis contributes to the development of integrated modeling approaches that combine worker preferences with operational constraints and energy costs. The results inform strategies for achieving a just energy transition in the steel industry, emphasizing the need for policies that prioritize worker well-being alongside decarbonization goals.</p>
864

Application of Distance Covariance to Time Series Modeling and Assessing Goodness-of-Fit

Fernandes, Leon January 2024 (has links)
The overarching goal of this thesis is to use distance covariance based methods to extend asymptotic results from the i.i.d. case to general time series settings. Accounting for dependence may make already difficult statistical inference all the more challenging. The distance covariance is an increasingly popular measure of dependence between random vectors that goes beyond the linear dependence described by correlation. It is defined by a squared integral norm of the difference between the joint and marginal characteristic functions with respect to a specific weight function. Distance covariance has the advantage of being able to detect dependence even for uncorrelated data. The energy distance is a closely related quantity that measures the distance between distributions of random vectors. These statistics can be used to establish asymptotic limit theory for stationary ergodic time series. The asymptotic results are driven by the limit theory for the empirical characteristic functions. In this thesis we apply the distance covariance to three problems in time series modeling: (i) Independent Component Analysis (ICA), (ii) multivariate time series clustering, and (iii) goodness-of-fit using residuals from a fitted model. The underlying statistical procedure for each topic uses the distance covariance function as a measure of dependence. The distance covariance arises in a different way in each of these topics: first as a measure of independence among the components of a vector, second as a measure of similarity of joint distributions, and third for assessing serial dependence among the fitted residuals. In each of these cases, limit theory is established for the corresponding empirical distance covariance statistics when the data come from a stationary ergodic time series.
For Topic (i) we consider an ICA framework, which is a popular tool used for blind source separation and has found application in fields such as financial time series, signal processing, feature extraction, and brain imaging. The Structural Vector Autoregression (SVAR) model is often the basic model used for modeling macro time series. The residuals in such a model are given by e_t = A S_t, the classical ICA model. In certain applications, one of the components of S_t has infinite variance, which differs from the standard ICA model. Furthermore, the e_t's are not observed directly but are only estimated from the SVAR modeling. Many ICA procedures require the existence of a finite second or even fourth moment. We derive consistency when using the distance covariance for measuring independence of residuals in the infinite variance case. Extensions to the ICA model with noise, which have a direct application to SVAR models when testing independence of residuals based on their estimated counterparts, are also considered. In Topic (ii) we propose a novel methodology for clustering multivariate time series data using energy distance. Specifically, a dissimilarity matrix is formed using the energy distance statistic to measure separation between the finite-dimensional distributions of the component time series. Once the pairwise dissimilarity matrix is calculated, a hierarchical clustering method is applied to obtain the dendrogram. This procedure is completely nonparametric, as the dissimilarities between stationary distributions are calculated directly without making any model assumptions. To justify this procedure, asymptotic properties of the energy distance estimates are derived for general stationary and ergodic time series. Topic (iii) considers the fundamental and often final step in time series modeling: assessing the quality of fit of a proposed model to the data.
Since the underlying distribution of the innovations that generate a model is often not prescribed, goodness-of-fit tests typically take the form of testing the fitted residuals for serial independence. However, these fitted residuals are inherently dependent, since they are based on the same parameter estimates, and thus standard tests of serial independence, such as those based on the autocorrelation function (ACF) or auto-distance correlation function (ADCF) of the fitted residuals, need to be adjusted. We apply sample splitting in the time series setting to perform tests of serial dependence of fitted residuals using the sample ACF and ADCF. Here the first f_n of the n data points in the time series are used to estimate the parameters of the model. Tests for serial independence are then based on all n residuals. With f_n = n/2, the ACF and ADCF tests of serial independence often have the same limit distributions as though the underlying residuals were indeed i.i.d. That is, if the first half of the data is used to estimate the parameters and the estimated residuals are computed for the entire data set based on these parameter estimates, then the ACF and ADCF can have the same limit distributions as though the residuals were i.i.d. This procedure obviates the need for adjustment in the construction of confidence bounds for both the ACF and ADCF, based on the fitted residuals, in goodness-of-fit testing. We also show that if f_n < n/2, then the asymptotic distributions of the tests stochastically dominate the corresponding asymptotic distributions for the true i.i.d. noise; the stochastic order is reversed when f_n > n/2.
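The sample distance covariance at the heart of these procedures can be sketched via double-centered pairwise distance matrices (the biased V-statistic form); the variable names and simulated data below are illustrative:

```python
import numpy as np

def dcov(x, y):
    """Sample distance covariance between two 1-D samples (V-statistic).

    Double-centers the pairwise distance matrices and averages their
    elementwise product; positive values flag dependence even when the
    ordinary correlation is zero."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return np.sqrt(np.mean(A * B))

# Uncorrelated but dependent: y = x^2 has population correlation 0 with x
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y_dep = x**2
y_ind = rng.normal(size=1000)
```

Used as-is this is the i.i.d. statistic; the thesis's contribution is the limit theory that justifies the same statistic under stationary ergodic dependence.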
865

<b>Economic Studies of the Global Trade of Wood Pellets</b>

Hiromi Waragai (18578983) 20 July 2024 (has links)
<p dir="ltr">This thesis investigated the international trade dynamics of wood pellets within the context of renewable energy transitions amid climate change concerns. In the first chapter, by employing gravity models with different estimators and specifications, we analyzed the determinants of trade flows of wood pellets. Additionally, we forecasted the future trade values of wood pellets under five shared socioeconomic pathways (SSP) scenarios. Our results showed the effects of some factors such as GDP of exporters, contiguity, and the distance between the two trading countries, were consistent with the economic theory. On the other hand, some other factors exhibited unexpected effects or conflicting results across the models. Regarding projections under five SSP scenarios, our results indicated substantial growth in trade flows, although potential overestimations are acknowledged due to the imposed assumptions. SSP3, which reflects a nationalistic scenario, is projected to have the smallest trade flows, while SSP5 anticipates the highest trade flows due to diminishing inequality and high GDP growth. Also, regional shifts in trade patterns were forecasted, with East Asia and Southeast Asia gaining prominence in imports and exports, respectively. Conversely, Europe’s imports and exports as well as North America’s exports are expected to decrease their shares in the global trade. Overall, our findings emphasize the complexity of trade determinants and underscore the need for nuanced forecasting methodologies to anticipate future trade dynamics accurately amidst evolving global scenarios of wood pellet trade.</p><p dir="ltr">The second chapter evaluated the effects of the Paris Agreement on the international trade of wood pellets. The growing concern about climate change has encouraged the global communities to take actions toward climate-change mitigation. 
As part of these efforts, the Paris Agreement was signed in 2015 by 196 parties around the world and went into force in 2016. As a means to mitigate climate change, wood pellets have been used as an alternative fuel to fossil fuels. Traditionally, Europe was the primary importer of wood pellets, mostly sourced from the United States and Canada. In the last decade, there has also been a significant uptake in East Asia, indicating shifting trade patterns and market dynamics in the wood pellet industry. This study employed an event-study framework to analyze the impact of the Paris Agreement on the global trade of wood pellets from 2014 to 2019, using import and export data at the regional level. Our results revealed distinct patterns in responses to the Paris Agreement in terms of adjustment speed and magnitude. Europe exhibited a rapid increase in both imports and exports immediately after the Paris Agreement. East Asia demonstrated a delayed yet substantial rise in imports, particularly after 2018. North America also swiftly expanded exports following the agreement, while Southeast Asia emerged as an important exporter, particularly in supporting the East Asian market from 2017 onwards. We also found an increase in exports of non-pellet wood fuels from Africa. These findings indicate that international climate agreements not only contribute to the overall expansion of the global wood pellet market but also reshape it by involving more countries in international efforts to mitigate climate change.</p>
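A gravity-model estimation of the kind used in the first chapter can be sketched with Poisson pseudo-maximum-likelihood (PPML), a standard estimator for gravity equations because it accommodates zero trade flows and heteroskedasticity. The bilateral data and coefficient values here are synthetic, and the small IRLS routine is a bare-bones stand-in for a full econometrics package:

```python
import numpy as np

def ppml(X, y, iters=50):
    """Poisson pseudo-maximum-likelihood (log link) fitted by
    iteratively reweighted least squares."""
    # Warm start from a log-linear OLS fit on log(y + 0.5)
    beta, *_ = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)
    for _ in range(iters):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu                 # working response
        W = mu                                       # working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Synthetic bilateral trade data following a gravity relationship
rng = np.random.default_rng(0)
n = 500
log_gdp_exp = rng.normal(10, 1, n)   # exporter GDP (log)
log_gdp_imp = rng.normal(10, 1, n)   # importer GDP (log)
log_dist = rng.normal(7, 0.5, n)     # bilateral distance (log)
contig = rng.integers(0, 2, n)       # shared-border dummy
mu = np.exp(-8 + 0.9 * log_gdp_exp + 0.8 * log_gdp_imp
            - 1.1 * log_dist + 0.4 * contig)
trade = rng.poisson(mu).astype(float)
X = np.column_stack([np.ones(n), log_gdp_exp, log_gdp_imp, log_dist, contig])
beta_hat = ppml(X, trade)
```

The recovered coefficients carry the textbook gravity signs: positive on the GDP terms and contiguity, negative on distance.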
866

Topics on Machine Learning under Imperfect Supervision

Yuan, Gan January 2024 (has links)
This dissertation comprises several studies addressing supervised learning problems where the supervision is imperfect. First, we investigate margin conditions in active learning. Active learning is characterized by its special mechanism whereby the learner can sample freely over the feature space and make the most of a limited labeling budget by querying the most informative labels. Our primary focus is to discern critical conditions under which certain active learning algorithms can outperform the optimal passive learning minimax rate. Within a non-parametric multi-class classification framework, our results reveal that the uniqueness of Bayes labels across the feature space serves as the pivotal determinant of the superiority of active learning over passive learning. Second, we study the estimation of the central mean subspace (CMS) and its application in transfer learning. We show that a fast parametric convergence rate is achievable by estimating the expected smoothed gradient outer product, for a general class of covariate distributions that includes the Gaussian and heavier-tailed distributions. When the link function is a polynomial of degree at most r and the covariates follow the standard Gaussian distribution, we show that the prefactor depends on the ambient dimension d as d^r. Furthermore, we show that in a transfer learning setting, an oracle rate of prediction error, as if the CMS were known, is achievable when the source training data is abundant. Finally, we present an innovative application utilizing weak (noisy) labels to address an Individual Tree Crown (ITC) segmentation challenge. Here, the objective is to delineate individual tree crowns within a 3D LiDAR scan of tropical forests, with only 2D noisy manual delineations of crowns on RGB images available as a source of weak supervision. We propose a refinement algorithm designed to enhance the performance of existing unsupervised learning methodologies for the ITC segmentation problem.
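The gradient outer product idea behind the CMS estimator can be sketched as follows: estimate the regression gradient locally at sample points, average the outer products, and read the index direction off the top eigenvector. The single-index model, bandwidth, and sample sizes below are illustrative, and the local-linear step is a simple stand-in for the smoothed estimator analyzed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 3
b = np.array([1.0, 2.0, -1.0]) / np.sqrt(6.0)     # true index direction
X = rng.normal(size=(n, d))
y = (X @ b) ** 2 + 0.1 * rng.normal(size=n)        # single-index model

def local_gradient(x0, X, y, h=0.7):
    """Gradient of a local-linear fit at x0 with a Gaussian kernel."""
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * h * h))
    Z = np.column_stack([np.ones(len(X)), X - x0])
    A = Z.T @ (w[:, None] * Z) + 1e-8 * np.eye(d + 1)
    coef = np.linalg.solve(A, Z.T @ (w * y))
    return coef[1:]                                 # slope part = gradient estimate

# Average the gradient outer products over a subsample of evaluation points
G = np.zeros((d, d))
for i in range(200):
    g = local_gradient(X[i], X, y)
    G += np.outer(g, g) / 200
b_hat = np.linalg.eigh(G)[1][:, -1]                 # top eigenvector spans the CMS
```

For a single-index regression the population gradient always points along b, so the leading eigenvector of the averaged outer product recovers the index direction up to sign.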
867

Computational approaches to understand mechanisms of human genetic disorders

Zhong, Guojie January 2024 (has links)
Human genetics is one of the strongest risk factors for complex diseases. Understanding the effects of genetic variation not only serves as a fundamental approach to studying disease mechanisms but also offers unprecedented opportunities for improved clinical screening, disease diagnosis, and therapeutic discovery. Despite decades of extensive DNA sequencing and genetic research involving large cohorts, two major challenges remain. First, the majority of disease risk genes remain unidentified due to limited statistical power. Second, the functional effects of rare variants, especially missense variants, in disease risk genes are understudied. In this thesis, I describe new computational approaches to address these challenges using statistical genetics and machine learning methods that incorporate intuition about biological mechanisms. First, I developed a statistical framework that can identify disease-related pathways from de novo coding variant data. I applied this framework to study the genetics of esophageal atresia / tracheoesophageal fistula (EA/TEF) and identified several potential disease-causal pathways involved in endosome trafficking. Next, I developed a new method to identify disease risk genes by integrating genetic (rare de novo variants) and functional genomics data. Identifying risk genes using rare variants typically has low statistical power due to the rarity of the genotype data. Using functional genomics data has the potential to address this challenge, as it serves as an informative prior on disease risk. Therefore, I developed a statistical method called VBASS. VBASS is a semi-supervised algorithm that uses a neural network to encode biological priors, such as cell type-specific expression values, into a rigorous Bayesian statistical model to increase statistical power. On simulated data, VBASS demonstrated proper error rate control and better power than current state-of-the-art methods.
We applied VBASS to congenital heart disease (CHD) and autism spectrum disorder (ASD), identifying several novel disease risk genes along with their associated cell types. Finally, I focused on predicting the functional mechanisms of missense variants that cause diseases. Pathogenic missense variants may act through different modes of action (e.g., gain-of-function or loss-of-function) by affecting various aspects of protein function. These variants may result in distinct clinical conditions requiring different treatments, yet current computational tools cannot distinguish between them because their predictions rely heavily on evolutionary conservation data. The recent breakthrough of AI-powered protein structure prediction tools provides an opportunity to address this challenge, because the functional mechanism of a variant is intrinsically embedded in its structural properties. Therefore, I developed a deep learning method called PreMode. PreMode is a pretrained SE(3)-equivariant graph neural network model designed to capture the effects of missense variants from their structural contexts and evolutionary information. I pretrained PreMode using labeled pathogenicity data to enable the model to learn a general representation of variant effects, followed by protein-specific transfer learning to predict mode-of-action effects. I applied PreMode to mode-of-action prediction for 17 genes and demonstrated that it achieves state-of-the-art performance compared to existing models. PreMode has various applications, including identifying novel gain/loss-of-function variants, improving the design of deep mutational scanning studies, and optimizing protein engineering.
868

Non-response error in surveys

Taljaard, Monica 06 1900 (has links)
Non-response is an error common to most surveys. In this dissertation, the error of non-response is described in terms of its sources and its contribution to the mean square error of survey estimates. Various response and completion rates are defined. Techniques are examined that can be used to identify the extent of non-response bias in surveys. Methods to identify auxiliary variables for use in non-response adjustment procedures are described. Strategies for dealing with non-response are classified into two types, namely preventive strategies and post hoc adjustments of data. Preventive strategies discussed include the use of call-backs and follow-ups and the selection of a probability sub-sample of non-respondents for intensive follow-ups. Post hoc adjustments discussed include population and sample weighting adjustments and raking ratio estimation to compensate for unit non-response, as well as various imputation methods to compensate for item non-response. / Mathematical Sciences / M. Com. (Statistics)
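Raking ratio estimation, one of the post hoc adjustments mentioned above, can be sketched as iterative proportional fitting: respondent cell weights are alternately scaled so that the weighted margins match known population totals. The cell counts and targets below are made up for illustration:

```python
import numpy as np

def rake(counts, row_targets, col_targets, iters=100):
    """Iterative proportional fitting of a two-way table of respondent
    counts so that its row and column margins match population targets
    (the targets must share the same grand total)."""
    w = counts.astype(float).copy()
    for _ in range(iters):
        w *= (row_targets / w.sum(axis=1))[:, None]   # match row margins
        w *= (col_targets / w.sum(axis=0))[None, :]   # match column margins
    return w

# Hypothetical respondent counts cross-classified by two auxiliary variables
counts = np.array([[10, 20],
                   [30, 40]])
w = rake(counts,
         row_targets=np.array([50.0, 50.0]),
         col_targets=np.array([60.0, 40.0]))
```

The adjusted weight for a respondent in cell (i, j) is then w[i, j] / counts[i, j], which compensates for unit non-response to the extent that the auxiliary variables predict response propensity.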
869

Statistical models for the long-term monitoring of songbird populations : a Bayesian analysis of constant effort sites and ring-recovery data

Cave, Vanessa M. January 2010 (has links)
To underpin and improve advice given to government and other interested parties on the state of Britain’s common songbird populations, new models for analysing ecological data are developed in this thesis. These models use data from the British Trust for Ornithology’s Constant Effort Sites (CES) scheme, an annual bird-ringing programme in which catch effort is standardised. Data from the CES scheme are routinely used to index abundance and productivity, and to a lesser extent estimate adult survival rates. However, two features of the CES data that complicate analysis were previously inadequately addressed, namely the presence in the catch of “transient” birds not associated with the local population, and the sporadic failure in the constancy of effort assumption arising from the absence of within-year catch data. The current methodology is extended, with efficient Bayesian models developed for each of these demographic parameters that account for both of these data nuances, and from which reliable and usefully precise estimates are obtained. Of increasing interest is the relationship between abundance and the underlying vital rates, an understanding of which facilitates effective conservation. CES data are particularly amenable to an integrated approach to population modelling, providing a combination of demographic information from a single source. Such an integrated approach is developed here, employing Bayesian methodology and a simple population model to unite abundance, productivity and survival within a consistent framework. Independent data from ring-recoveries provide additional information on adult and juvenile survival rates. Specific advantages of this new integrated approach are identified, among which is the ability to determine juvenile survival accurately, disentangle the probabilities of survival and permanent emigration, and to obtain estimates of total seasonal productivity. 
The methodologies developed in this thesis are applied to CES data from Sedge Warbler, Acrocephalus schoenobaenus, and Reed Warbler, A. scirpaceus.
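The "simple population model" linking the demographic parameters can be sketched deterministically: with productivity rho (juveniles per adult) and separate first-year and adult survival rates, next year's breeding abundance is a weighted sum, so the annual growth rate is s_ad + rho * s_juv. All parameter values below are invented, not CES estimates:

```python
def project_abundance(n0, s_ad, s_juv, rho, years):
    """Deterministic skeleton of the integrated population model:
    N_{t+1} = s_ad * N_t + s_juv * rho * N_t
    (surviving adults plus recruited juveniles)."""
    series = [float(n0)]
    for _ in range(years):
        n = series[-1]
        series.append(s_ad * n + s_juv * rho * n)
    return series

# Hypothetical warbler-like rates: growth rate = 0.5 + 2.0 * 0.3 = 1.1 per year
traj = project_abundance(100, s_ad=0.5, s_juv=0.3, rho=2.0, years=5)
```

The integrated Bayesian approach in the thesis embeds a relationship of this form in a likelihood that combines CES counts, productivity, survival, and ring-recovery data, which is what allows juvenile survival to be disentangled.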
870

Information heterogeneity and voter uncertainty in spatial voting: the U.S. presidential elections, 1992-2004

Lee, So Young 29 August 2008 (has links)
This dissertation addresses voters' information heterogeneity and its effect on spatial voting. While most spatial voting models simply assume that voter uncertainty about candidate preferences is homogeneous across voters, despite Downs' early use of an uncertainty scale to classify the electorate, information studies have discovered that well and poorly informed citizens show sizeable and consistent differences in issue conceptualization, perception, political opinion, and behavior. Building on the spatial theory's early insights on uncertainty and the findings of the information literature, this dissertation claims that information effects should be incorporated into the spatial voting model. Through this incorporation, I seek to unify the different scholarly traditions of the spatial theory of voting and the study of political information. I hypothesize that uncertainty is not homogeneous but varies with the level of information, which is approximated by political activism as well as information on candidate policy positions. To test this hypothesis, I employ heteroskedastic probit models that specify heterogeneity of voter uncertainty in probabilistic models of spatial voting. The models are applied to the U.S. presidential elections of 1992-2004. The empirical results of the analysis strongly support this expectation. They reveal that voter uncertainty is heterogeneous as a result of uneven distributions of information and political activism, even when various voting cues are available. This dissertation also discovers that this heterogeneity in voter uncertainty has a significant effect on electoral outcomes. It finds that the more uncertain a voter is about the candidates, the more likely he or she is to vote for the incumbent or a better-known candidate. This clearly reflects voters' risk-averse attitudes, which reward the candidate with greater certainty, all other things held constant.
Heterogeneity in voter uncertainty and its electoral consequences therefore have important implications for candidates' strategies. The findings suggest that voter heterogeneity leads candidates' equilibrium strategies and campaign tactics to be inconsistent with those that spatial analysts have normally proposed. / text
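The heteroskedastic probit at the core of the analysis lets the error scale vary with covariates: P(y=1 | x, z) = Phi(x'beta / exp(z'gamma)), so gamma captures information-driven differences in voter uncertainty. A minimal sketch on simulated data; all variable names, values, and the single-covariate variance specification are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)       # e.g., spatial proximity advantage of a candidate
z = rng.normal(size=n)       # e.g., de-meaned information/activism level
sigma = np.exp(0.8 * z)      # error scale varies with z (heteroskedasticity)
p_true = norm.cdf((0.5 + 1.0 * x) / sigma)
yv = rng.binomial(1, p_true)

def nll(params):
    """Negative log-likelihood of the heteroskedastic probit."""
    b0, b1, g = params
    p = norm.cdf((b0 + b1 * x) / np.exp(g * z))
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(yv * np.log(p) + (1 - yv) * np.log(1 - p))

fit = minimize(nll, np.zeros(3), method="BFGS")
b0_hat, b1_hat, g_hat = fit.x
```

A significantly nonzero g_hat is evidence against the homoskedastic probit's equal-uncertainty assumption; note that the variance equation excludes a constant term, which is required for identification.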
