  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

The Functional Mechanism of the Bacterial Ribosome, an Archetypal Biomolecular Machine

Ray, Korak Kumar January 2023 (has links)
Biomolecular machines are responsible for carrying out a host of essential cellular processes. In accordance with the wide range of functions they execute, the architectures of these machines also vary greatly. Yet, despite this diversity in both structure and function, they have some common characteristics. They are all large macromolecular complexes that enact multiple steps during the course of their functions. They are also 'Brownian' in nature, i.e., they rectify the thermal motions of their surroundings into work. How these machines can utilise their surrounding thermal energy in a directional manner, and do so in a cycle over and over again, is still not well understood. The work I present in this thesis spans the development, evaluation and use of biophysical, in particular single-molecule, tools in the study of the functional mechanisms of biomolecular machines. In Chapter 2, I describe a mathematical framework which utilises Bayesian inference to relate any experimental data to an ideal template irrespective of the scale, background and noise in the data. This framework may be used for the analysis of data generated by multiple experimental techniques in an accurate, fast, and human-independent manner. One such application is described in Chapter 3, where this framework is used to evaluate the extent of spatial information present in experimental data generated using cryogenic electron microscopy (cryoEM). This application will not only aid the study of biomolecular structure using cryoEM by structural biologists, but will also enable biophysicists and biochemists who use structural models to interpret and design their experiments to evaluate the cryoEM data they need for their investigations.
In Chapter 4, I describe an investigation into the use of one class of analytical models, hidden Markov models (HMMs), to accurately extract kinetic information from single-molecule experimental data, such as the data generated by single-molecule fluorescence resonance energy transfer (smFRET) experiments. Finally, in Chapter 5, I describe how single-molecule experiments have led to the discovery of a mechanism by which ligands can modulate and drive the conformational dynamics of the ribosome in a manner that facilitates ribosome-catalysed protein synthesis. This mechanism has implications for our understanding of the functional mechanisms of the ribosome in particular, and of biomolecular machines in general.
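The HMM analysis of smFRET data described above can be sketched in miniature. The snippet below is a minimal two-state hidden Markov model with Gaussian emissions and a scaled forward algorithm for computing the likelihood of an idealized FRET-efficiency trace; the transition probabilities, state means, and noise level are illustrative assumptions, not values from the thesis.

```python
import math

# Assumed two-state HMM for an smFRET-like trace (all numbers illustrative).
TRANS = [[0.95, 0.05],   # P(next state | current state 0)
         [0.10, 0.90]]   # P(next state | current state 1)
MEANS, SIGMA = [0.2, 0.8], 0.1   # idealized FRET efficiencies and noise
PI = [0.5, 0.5]                  # initial state distribution

def gauss(x, mu, sigma):
    """Gaussian emission density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def forward_loglik(obs):
    """Log-likelihood of an observed trace via the scaled forward algorithm."""
    alpha = [PI[s] * gauss(obs[0], MEANS[s], SIGMA) for s in range(2)]
    loglik = 0.0
    for x in obs[1:]:
        scale = sum(alpha)           # normalizer accumulates the likelihood
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [gauss(x, MEANS[s], SIGMA) *
                 sum(alpha[r] * TRANS[r][s] for r in range(2))
                 for s in range(2)]
    return loglik + math.log(sum(alpha))

trace = [0.21, 0.18, 0.25, 0.79, 0.82, 0.77, 0.22]
print(forward_loglik(trace))
```

Maximizing this likelihood over the transition and emission parameters (e.g. via Baum-Welch) is what yields the kinetic rates of interest in such analyses.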
462

Essays on Online Learning and Resource Allocation

Yin, Steven January 2022 (has links)
This thesis studies four independent resource allocation problems with different assumptions on the information available to the central planner and the strategic considerations of the agents present in the system. We start off with an online, non-strategic agents setting in Chapter 1, where we study the dynamic pricing and learning problem under the Bass demand model. The main objective in the field of dynamic pricing and learning is to study how a seller can maximize revenue by adjusting price over time based on sequentially realized demand. Unlike most existing literature on dynamic pricing and learning, where the price only affects the demand in the current period, under the Bass model price also influences the future evolution of demand. Finding a revenue-maximizing dynamic pricing policy in this model is non-trivial even in the full information case, where model parameters are known. We consider the more challenging incomplete information problem, where dynamic pricing is applied in conjunction with learning the unknown model parameters, with the objective of optimizing the cumulative revenue over a given selling horizon of length 𝑻. Our main contribution is an algorithm that satisfies a high probability regret guarantee of order 𝑚²/³, where the market size 𝑚 is known a priori. Moreover, we show that no algorithm can incur a smaller order of loss by deriving a matching lower bound. We then switch our attention to a single round, strategic agents setting in Chapter 2, where we study a multi-resource allocation problem with heterogeneous demands and Leontief utilities. The Leontief utility function captures the idea that for certain resource allocation settings, the utility of a marginal increase in one resource depends on the availabilities of other resources. We generalize the existing literature on this model formulation to incorporate more constraints faced in real applications, which in turn requires new algorithm design and analysis techniques.
The main contribution of this chapter is an allocation algorithm that satisfies Pareto optimality, envy-freeness, strategy-proofness, and a notion of sharing incentive. In Chapter 3, we study a single round, non-strategic agent setting, where the central planner tries to allocate a pool of items to a set of agents, each of whom must receive a prespecified fraction of all items. Additionally, we want to ensure fairness by controlling the amount of envy that agents have with the final allocations. We make the observation that this resource allocation setting can be formulated as an Optimal Transport problem, and that the solution displays a surprisingly simple structure. Using this insight, we are able to design an allocation algorithm that achieves the optimal trade-off between efficiency and envy. Finally, in Chapter 4 we study an online, strategic agent setting, where, similar to the previous chapter, the central planner needs to allocate a pool of items to a set of agents, each of whom must receive a prespecified fraction of all items. Unlike in the previous chapter, the central planner has no a priori information on the distribution of items. Instead, the central planner needs to implicitly learn these distributions from the observed values in order to pick a good allocation policy. An added challenge here is that the agents are strategic, with incentives to misreport their valuations in order to receive better allocations. This sets our work apart both from the online auction mechanism design settings, which typically assume known valuation distributions and/or involve payments, and from the online learning settings that do not consider strategic agents. To that end, our main contribution is an online learning based allocation mechanism that is approximately Bayesian incentive compatible and, when all agents are truthful, guarantees sublinear regret for individual agents' utility compared to that under the optimal offline allocation policy.
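To see why pricing under the Bass model is intrinsically dynamic, the sketch below simulates discrete-time Bass demand in which the installed base feeds back into future adoption, so today's price shapes tomorrow's demand. The exponential price-damping term and every parameter value are assumptions made for illustration, not the specification used in the thesis.

```python
import math

def simulate_revenue(prices, m=1000.0, p=0.03, q=0.4, alpha=0.5):
    """Discrete-time Bass dynamics with an assumed multiplicative price effect.

    m: market size; p: innovation coefficient; q: imitation coefficient.
    """
    adopted, revenue = 0.0, 0.0
    for price in prices:
        rate = (p + q * adopted / m) * (m - adopted)   # classic Bass demand term
        demand = rate * math.exp(-alpha * price)       # assumed price damping
        demand = min(demand, m - adopted)              # cannot exceed remaining market
        adopted += demand
        revenue += price * demand
    return revenue, adopted

# Comparing a flat price path against a "seed early, harvest late" path:
rev_flat, _ = simulate_revenue([2.0] * 20)
rev_skim, _ = simulate_revenue([1.0] * 5 + [2.5] * 15)
print(rev_flat, rev_skim)
```

Because the imitation term q·(adopted/m) rewards early adoption, the revenue-maximizing path generally varies over time, which is what makes the learning-while-pricing problem nontrivial.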
463

Prediction of a school superintendent's tenure using regression and Bayesian analyses

Anderson, Sandra Lee January 1988 (has links)
A model was developed to incorporate the major forces impacting upon a school superintendent and the descriptors, stability measures, intentions and processes of those forces. Tenure was determined to be the best outcome measure, thus the model became a quantitative method for predicting tenure. A survey measuring characteristics of the community, School Board, and the superintendent was sent to superintendents nationwide who had left a superintendency between 1983 and 1985. Usable forms were returned by 835 persons. The regression analysis was significant (p < .0001) and accounted for 40% of the variance in superintendent tenure. In developing the equation, statistical applications included Mallows' C_p for subset selection, Rousseeuw's Least Median of Squares for outlier diagnostics, and the PRESS statistic for validation. The survey also included 24 hypothetical situations randomly selected out of a set of 290 items with four optional courses of action. The answers were weighted by the tenure groups of the superintendents, and the responses were analyzed using a Bayesian joint probability formula. Predictions of the most probable tenure based on these items were accurate for only 18% of the superintendents. Variables found to contribute significantly in every candidate equation included per pupil expenditure, recent board member defeat, years in the contract, use of a formal interview format, age, being in the same ethnic group as the community, intention to move to another superintendency, orienting new Board members, salary, enrollment, and Board stability. Variables which were significant in some equations were region of the country, state turnover rate, proportion of Board support, whether changes were expected, use of a regular written evaluation, community power structure, number of Board members, grade levels in the district, gender, and having worked in the same school district.
Variables which did not contribute were per capita income, whether the board was elected or appointed, educational degree, and type of community. / Ph. D.
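The "Bayesian joint probability formula" applied to the hypothetical-situation items can be sketched as a naive Bayes posterior over tenure groups: the prior for each group is multiplied by the per-item likelihood of each observed response and then normalized. The group labels, priors, and likelihood tables below are invented for illustration; only the computation pattern corresponds to the abstract.

```python
def posterior_over_groups(responses, priors, likelihoods):
    """Posterior over tenure groups given item responses.

    responses: list of (item, choice) pairs.
    likelihoods[group][item][choice]: P(choice | group) for that item.
    """
    post = dict(priors)
    for item, choice in responses:
        for g in post:
            post[g] *= likelihoods[g][item][choice]
    total = sum(post.values())
    return {g: v / total for g, v in post.items()}

# Hypothetical tenure groups and response likelihoods (not from the study).
priors = {"short": 0.3, "medium": 0.5, "long": 0.2}
likelihoods = {
    "short":  {0: {"a": 0.6, "b": 0.4}, 1: {"a": 0.7, "b": 0.3}},
    "medium": {0: {"a": 0.4, "b": 0.6}, 1: {"a": 0.5, "b": 0.5}},
    "long":   {0: {"a": 0.2, "b": 0.8}, 1: {"a": 0.3, "b": 0.7}},
}
print(posterior_over_groups([(0, "b"), (1, "b")], priors, likelihoods))
```

The prediction is then the group with the highest posterior; the abstract's 18% accuracy suggests the situational items carried little signal relative to the regression variables.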
464

Estimating Individual Treatment Effects Using Emerging Methods from Machine Learning and Multiple Imputation

Park, Sangbaek January 2024 (has links)
This dissertation used synthetic datasets, semi-synthetic datasets, and a real-world dataset from an educational intervention to compare the performance of 15 machine learning and multiple imputation methods to estimate the individual treatment effect (ITE). In addition, it examined the performance of five evaluation metrics that can be used to identify the best ITE estimation method when conducting research with real-world data. Among the ITE estimation methods that were analyzed, the S-learner, the Bayesian Causal Forest (BCF), the Causal Forest, and the X-learner exhibited the best performance. In general, the meta-learners with BART and tree-based direct estimation methods performed better than the representation learning methods and the multiple imputation methods. As for the evaluation metrics, τ-risk_R and the Switch Doubly Robust MSE (SDR-MSE) performed the best in identifying the best ITE estimation method when the true treatment effect was unknown. This dissertation contributes to a small but growing body of research on ITE estimation which is gaining popularity in various fields due to its potential for tailoring interventions to meet the needs of individuals and targeting programs at those who would benefit the most from those interventions.
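The S-learner named above fits a single outcome model on the covariates plus the treatment indicator and estimates the ITE as the difference between its predictions under treatment and control. The sketch below uses an assumed linear base learner with a treatment interaction, solved by ordinary least squares; the dissertation pairs these meta-learners with far more flexible base learners such as BART.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [m - f * mc for m, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def s_learner(X, T, Y):
    """Fit y ~ b0 + b1*x + b2*t + b3*x*t; ITE(x) = f(x,1) - f(x,0) = b2 + b3*x."""
    feats = [[1.0, x, float(t), x * t] for x, t in zip(X, T)]
    A = [[sum(f[i] * f[j] for f in feats) for j in range(4)] for i in range(4)]
    b = [sum(f[i] * y for f, y in zip(feats, Y)) for i in range(4)]
    beta = solve(A, b)
    return lambda x: beta[2] + beta[3] * x

# Synthetic data with true ITE(x) = 1 + 2x (noiseless, for illustration).
X = [0.0, 0.5, 1.0, 1.5, 2.0] * 2
T = [0] * 5 + [1] * 5
Y = [0.5 * x for x in X[:5]] + [0.5 * x + 1 + 2 * x for x in X[5:]]
ite = s_learner(X, T, Y)
print(ite(1.0))  # recovers the true effect 3.0 on this noiseless data
```

The T- and X-learners differ mainly in fitting separate models per treatment arm and reweighting the imputed effects, but the prediction-difference idea is the same.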
465

Trade-Offs and Opportunities in High-Dimensional Bayesian Modeling

Cademartori, Collin Andrew January 2024 (has links)
With the increasing availability of large multivariate datasets, modern parametric statistical models make increasing use of high-dimensional parameter spaces to flexibly represent complex data generating mechanisms. Yet, ceteris paribus, increases in dimensionality often carry drawbacks across the various sub-problems of data analysis, posing challenges for the data analyst who must balance model plausibility against the practical considerations of implementation. We focus here on challenges to three components of data analysis: computation, inference, and model checking. In the computational domain, we are concerned with achieving reasonable scaling of the computational complexity with the parameter dimension without sacrificing the trustworthiness of our computation. Here, we study a particular class of algorithms - the vectorized approximate message passing (VAMP) iterations - which offer the possibility of linear per-iteration scaling with dimension. These iterations perform approximate inference for a class of Bayesian generalized linear regression models, and we demonstrate that under flexible distributional conditions, the estimation performance of these VAMP iterations can be predicted to high accuracy with probability decaying exponentially fast in the size of the regression problem. In the realm of statistical inference, we investigate the relationship between parameter dimension and identification. We develop formal notions of weak identification and model expansion in the Bayesian setting and use this to argue for a very general tendency for dimensionality-increasing model expansion to weaken the identification of model parameters. We draw two substantive conclusions from this formalism. First, the negative association between dimensionality and identification can be weakened or reversed when we construct prior distributions that encode sufficiently strong dependence between parameters.
Second, absent such prior information, we derive bounds indicating that, as the dimension inflates sufficiently, a loss of identification can usually be avoided only at the cost of increasing the severity of the third challenge we consider: the difficulty that dimensionality poses for model checking. We divide the topic of model checking into two sub-problems: fitness testing and correctness testing. Using our model expansion formalism, we show again that both of these problems tend to become more difficult as the model dimension grows. We propose two extensions of the posterior predictive 𝑝-value - certain conditional and joint 𝑝-values - which are designed to address these challenges for fitness and correctness testing respectively. We demonstrate the potential of these 𝑝-values to allow successful model checking that scales with dimensionality, both theoretically and with examples.
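The posterior predictive 𝑝-value that the proposed conditional and joint variants extend can be sketched simply: draw parameters from the posterior, simulate replicate datasets, and compare a test statistic on the replicates against the observed data. The conjugate normal model, the flat prior, and the data below are illustrative assumptions, not examples from the thesis.

```python
import random
import statistics

random.seed(0)
obs = [2.1, 1.9, 2.4, 2.0, 5.9]   # one suspicious observation
n, sigma = len(obs), 1.0
# Under a flat prior with known variance, the posterior for the mean is
# N(xbar, sigma^2 / n).
xbar = statistics.mean(obs)

def ppp_value(stat, draws=2000):
    """Posterior predictive p-value: P(stat(y_rep) >= stat(y_obs))."""
    exceed = 0
    for _ in range(draws):
        mu = random.gauss(xbar, sigma / n ** 0.5)          # posterior draw
        rep = [random.gauss(mu, sigma) for _ in range(n)]  # replicate dataset
        if stat(rep) >= stat(obs):
            exceed += 1
    return exceed / draws

print(ppp_value(max))  # a small value flags the outlier as model misfit
```

An extreme 𝑝-value (near 0 or 1) signals that the model rarely reproduces the observed statistic; the conditional and joint variants target the loss of power this check suffers as the parameter dimension grows.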
466

Algorithmic Bayesian Epistemology

Neyman, Eric January 2024 (has links)
One aspect of the algorithmic lens in theoretical computer science is a view on other scientific disciplines that focuses on satisfactory solutions that adhere to real-world constraints, as opposed to solutions that would be optimal ignoring such constraints. The algorithmic lens has provided a unique and important perspective on many academic fields, including molecular biology, ecology, neuroscience, quantum physics, economics, and social science. This thesis applies the algorithmic lens to Bayesian epistemology. Traditional Bayesian epistemology provides a comprehensive framework for how an individual's beliefs should evolve upon receiving new information. However, these methods typically assume an exhaustive model of such information, including the correlation structure between different pieces of evidence. In reality, individuals might lack such an exhaustive model, while still needing to form beliefs. Beyond such informational constraints, an individual may be bounded by limited computation, or by limited communication with agents that have access to information, or by the strategic behavior of such agents. Even when these restrictions prevent the formation of a *perfectly* accurate belief, arriving at a *reasonably* accurate belief remains crucial. In this thesis, we establish fundamental possibility and impossibility results about belief formation under a variety of restrictions, and lay the groundwork for further exploration.
467

Politics Meets the Internet: Three Essays on Social Learning

Cremin, John Walter Edward January 2024 (has links)
This dissertation studies three models of sequential social learning, each of which has implications for the impact of the internet and social media on political discourse. I take three features of online political discussion and consider in what ways they interfere with or assist learning. In Chapter 1, I consider agents who engage in motivated reasoning, a belief-formation procedure in which agents trade off a desire to form accurate beliefs against a desire to hold ideologically congenial beliefs. Taking a model of motivated reasoning in which agents can reject social signals that provide overly strong evidence against their preferred state, I analyse under which conditions we can expect asymptotic consensus, where all agents choose the same action, and learning, in which agents choose the correct state with probability 1. I find that learning requires much more connected observation networks than is the case with Bayesian agents. Furthermore, I find that increasing the precision of agents’ private signals can actually break consensus, providing an explanation for the advance of factual polarisation despite the greater access to information that the internet provides. In Chapter 2, I evaluate the importance of timidity. In the presence of agents who prefer not to be caught in error publicly, and who can therefore choose to keep their views to themselves, insufficiently confident individuals may choose not to participate in online debate. Studying social learning in this setting, I discover an unravelling mechanism by which non-partisan agents drop out of online political discourse. This leads to an exaggerated online presence for partisans, which can cause even more Bayesian agents to drop out. I consider the possibility of introducing partially anonymous commenting, how this could prevent such unravelling, and what restrictions on such commenting would be desirable.
In Chapter 3, my focus moves to rational inattention, and how it interacts with the glut of information the internet has produced. I set out a model that incorporates the costly observation of private and social information, and derive conditions under which we should expect learning to obtain despite these costs. I find that expanding access to cheap information can actually damage learning: giving all agents Blackwell-preferred signals or cheaper observations of all their neighbors can reduce the asymptotic probability with which they match the state. Furthermore, the highly connected networks social media produces can generate a public good problem in investigative journalism, damaging the ‘information ecosystem’ further still.
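The baseline these chapters depart from is the canonical sequential social-learning model: agents act in order, each combining a noisy private signal with the public record of predecessors' actions via Bayes' rule, and an information cascade begins once the public belief outweighs any single private signal. The binary-state structure and signal accuracy below are standard textbook assumptions, not the dissertation's specific models.

```python
import math
import random

random.seed(1)
ACC = 0.7                          # assumed private-signal accuracy
L = math.log(ACC / (1 - ACC))      # log-likelihood ratio of one signal

def run_cascade(n_agents, true_state=1):
    """Sequential Bayesian agents; ties broken toward the agent's own signal."""
    public, actions = 0.0, []      # public log-odds in favour of state 1
    for _ in range(n_agents):
        correct = random.random() < ACC
        s = L if correct == (true_state == 1) else -L
        if abs(public) > L + 1e-9:          # cascade: public belief dominates
            choice = 1 if public > 0 else 0
        else:                               # action still reveals the signal
            choice = 1 if s > 0 else 0
            public += L if choice == 1 else -L
        actions.append(choice)
    return actions

print(run_cascade(30))
```

Once two net signals point the same way, every later agent herds regardless of its own signal, so learning stops; the dissertation's chapters study how motivated reasoning, timidity, and costly attention alter exactly this dynamic.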
468

On Modeling Spatial Time-to-Event Data with Missing Censoring Type

Lu, Diane January 2024 (has links)
Time-to-event data, a common occurrence in medical research, is also pertinent in the ecological context, exemplified by leaf desiccation studies using innovative optical vulnerability techniques. Such data can unveil valuable insights into the influence of various factors on the event of interest. Leveraging both spatial and temporal information, spatial survival modeling can unravel the intricate spatiotemporal dynamics governing event occurrences. Existing spatial survival models often assume the availability of the censoring type for censored cases. Various approaches have been employed to address scenarios where a "subset" of cases lacks a known "censoring indicator" (i.e., whether they are right-censored or uncensored). This uncertainty in the subset pertains to missing information regarding the censoring status. However, our study specifically centers on situations where the missing information extends to "all" censored cases, rendering them devoid of a known censoring "type" indicator (i.e., whether they are right-censored or left-censored). The genesis of this challenge emerged from leaf hydraulic data, specifically embolism data, where the observation of embolism events is limited to instances when leaf veins transition from water-filled to air-filled during the observation period. Although it is known that all veins eventually embolize when the entire plant dries up, the critical information of whether a censored leaf vein embolized before or after the observation period is absent. In other words, the censoring type indicator is missing. To address this challenge, we developed a Gibbs sampler for a Bayesian spatial survival model, aiming to recover the missing censoring type indicator. This model incorporates the essential embolism formation mechanism theory, accounting for dynamic patterns observed in the embolism data. The model assumes spatial smoothness between connected leaf veins and incorporates vein thickness information. 
Our Gibbs sampler effectively infers the missing censoring type indicator, as demonstrated on both simulated and real-world embolism data. In applying our model to real data, we not only confirm patterns aligning with existing phytological literature but also unveil novel insights previously unexplored due to limitations in available statistical tools. Additionally, our results suggest the potential for building hierarchical models with species-level parameters focusing solely on the temporal component. Overall, our study illustrates that the proposed Gibbs sampler for the spatial survival model successfully addresses the challenge of missing censoring type indicators, offering valuable insights into the underlying spatiotemporal dynamics.
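The heart of such a Gibbs sampler is the step that imputes the missing censoring type: conditional on the current survival parameters, a censored observation is assigned "left" (event before the observation window) or "right" (event after it) with probability proportional to the model's mass on each side. The exponential event-time model and window bounds below are illustrative assumptions standing in for the thesis's spatial survival model.

```python
import math
import random

random.seed(2)

def impute_censoring_type(rate, window_start, window_end):
    """Sample 'left' vs 'right' censoring for one observation, conditional on
    the event having been missed (exponential event times, illustrative)."""
    p_before = 1 - math.exp(-rate * window_start)   # P(T < window start)
    p_after = math.exp(-rate * window_end)          # P(T > window end)
    p_left = p_before / (p_before + p_after)        # condition on being censored
    return "left" if random.random() < p_left else "right"

# One Gibbs sweep over many censored veins at the current parameter values:
draws = [impute_censoring_type(0.5, 1.0, 4.0) for _ in range(1000)]
print(draws.count("left") / 1000)  # fraction imputed as left-censored
```

In the full sampler this step alternates with updates of the survival parameters and spatial random effects, so the imputed indicators and the fitted spatiotemporal surface inform each other across iterations.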
469

Empirical Bayes methods in time series analysis

Khoshgoftaar, Taghi M. January 1982 (has links)
In the case of repetitive experiments of a similar type, where the parameters vary randomly from experiment to experiment, the Empirical Bayes method often leads to estimators which have smaller mean squared errors than the classical estimators. Suppose there is an unobservable random variable θ, where θ ~ G(θ), usually called a prior distribution. The Bayes estimator of θ cannot be obtained in general unless G(θ) is known. In the empirical Bayes method we do not assume that G(θ) is known, but the sequence of past estimates is used to estimate θ. This dissertation involves the empirical Bayes estimates of various time series parameters: The autoregressive model, moving average model, mixed autoregressive-moving average, regression with time series errors, regression with unobservable variables, serial correlation, multiple time series and spectral density function. In each case, empirical Bayes estimators are obtained using the asymptotic distributions of the usual estimators. By Monte Carlo simulation the empirical Bayes estimator of first order autoregressive parameter, ρ, was shown to have smaller mean squared errors than the conditional maximum likelihood estimator for 11 past experiences. / Doctor of Philosophy
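The empirical Bayes idea described above can be sketched in its simplest normal/normal form: the spread of past experiments' estimates identifies the prior by the method of moments, and the new classical estimate is shrunk toward the grand mean. The autoregressive-parameter values and the sampling variance below are invented for illustration, not results from the dissertation.

```python
import statistics

def eb_shrink(past_estimates, new_estimate, sampling_var):
    """Shrink a new classical estimate toward the mean of past estimates.

    The prior variance is estimated by method of moments: the spread of past
    estimates minus the sampling noise each one carries.
    """
    prior_mean = statistics.mean(past_estimates)
    total_var = statistics.variance(past_estimates)
    prior_var = max(total_var - sampling_var, 1e-12)
    w = prior_var / (prior_var + sampling_var)   # weight on the new data
    return w * new_estimate + (1 - w) * prior_mean

# Hypothetical AR(1) estimates from 11 past experiments, then a new one:
past = [0.52, 0.48, 0.55, 0.47, 0.50, 0.51, 0.49, 0.53, 0.46, 0.54, 0.50]
print(eb_shrink(past, 0.70, sampling_var=0.0005))  # pulled from 0.70 toward ~0.50
```

When the parameters really do vary around a common prior, this shrinkage is what produces the smaller mean squared error relative to the classical estimator.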
470

Individual decision-making and the maintenance of cooperative breeding in superb starlings (Lamprotornis superbus)

Earl, Alexis Diana January 2024 (has links)
From cells to societies, cooperation occurs at all levels of biological organization. In vertebrates, the most complex societies occur in cooperative breeders where some group members (helpers) forego reproduction, sacrificing their immediate direct fitness to assist in raising the offspring of others (breeders). Individuals in cooperative breeding societies can gain indirect fitness benefits from passing on shared genes when they help the offspring of close genetic relatives (kin selection), such that cooperation is expected to correlate with genetic relatedness. However, some cooperatively breeding societies include cooperation between nonrelatives. Cooperatively breeding societies range in complexity, from singular (one breeding pair) to plural (two or more breeding pairs). In the majority of singular breeding societies, helpers are relatives of breeders. Thus, kin selection is thought to underlie helping behavior in singular breeding societies. Plural breeding societies, such as in superb starlings (Lamprotornis superbus) inhabiting the East African savanna in central Kenya, involve multiple territory-sharing families raising offspring with helpers who can assist more than one family simultaneously. The superb starling’s complex and dynamic social system, mixed kin structure, relatively long lives, and stable social groups make them an ideal study species for investigating how patterns of individual decision-making have shaped and maintained cooperative societies. My dissertation research focuses on using long-term data on cooperatively breeding superb starlings to explore how temporally variable environments, such as the East African savanna, influence individual decisions across lifetimes, and subsequently how individual behavior shapes the structure and organization of the society. 
In Chapter 1, I apply a Bayesian approach to the animal model to estimate how genetic versus nongenetic factors influence among-individual variation in the social roles: “breeder”, “helper”, and “non-breeder/non-helper”. Non-breeder/non-helper indicates that the individual maintained membership in the social group but did not breed or help during that season. I then estimated heritability and found, as predicted, overall low heritability of traits responsible for each role. This result is consistent with the findings of other studies on the heritability of social behavior, which tends to be low compared to non-social traits, primarily because the social behavior of an individual is highly influenced by interactions with other individuals. In Chapter 2, I show that superb starlings (i) are nepotistic, and (ii) switch between the social roles of “helper” and “breeder” across their lives. This role switching, which unexpectedly includes breeders going back to helping again, is linked to reciprocal helping between pairs of helpers and breeders, independent of genetic relatedness. Reciprocal helping was long thought to be irrelevant for cooperative breeders because most helping is assumed to be unidirectional, from subordinate helpers to dominant breeders, and reciprocal helping is often measured on short timescales. These long-term reciprocal helping relationships among kin and nonkin alike may be important for the persistence of this population because previous research has demonstrated that enhancing group size by immigration from outside groups, while reducing group kin structure, is necessary to prevent group extinction. Finally, the results of Chapter 3 reveal how social and ecological factors shape role switching across individual lifetimes.
Overall, my dissertation highlights the remarkable flexibility of superb starling cooperative behavior and the crucial role of mutual direct fitness benefits from reciprocal helping, which may help promote the stability of cooperative group living among nonkin as well as kin group members, contributing to the resilience of this population within a harsh and unpredictable environment.
