211 |
Information Complexity in Material Culture
Tran, Ngoc-Han, 09 March 2022
Humans invest a substantial amount of time in the creation of artworks. For generations, humans around the world have learned and shared knowledge and skills tied to artistic traditions. Although large-scale experiments and online databases have brought considerable insight into the evolutionary role and trajectory of art, why humans invest in art, what information artworks carry, and how art functions within a community remain elusive questions. To address them, this thesis integrates ethnographic accounts with data governance and statistical approaches to systematically investigate a large corpus of art, focusing on Tamil kolam art from South India as an exemplary case study of artistic traditions. The foundation for the projects presented here was the design and construction of a robust data infrastructure that synthesized raw data from various sources into one database for systematic analysis. This infrastructure for the kolam artistic system enabled the development of statistical methods to explore the substantial investments and information complexity in art.

In the first chapter, I examine artists' strategic decisions in the creation of kolam art and how they optimize the complexity of their artworks under constraints, using evolutionary signaling theory and theoretically guided statistical methods. Results reveal that artists maintain a stable, invariant complexity, measured as Shannon information entropy, regardless of the size of the artwork. To achieve this optimal complexity "sweet spot", artists trade off two standard measures of biological diversity in ecology: evenness and richness. Additionally, although kolam art arises in a highly stratified, multi-ethnic society, artistic complexity is optimized across the population rather than constrained by group boundaries; the trade-off is most likely explained by aesthetic preferences or cognitive limitations.

While artistic complexity in kolam art is optimized across the population, individual artists can still employ distinct styles and patterns. In the second chapter, I therefore focus on how artistic styles in kolam art covary with cultural boundaries. I employ a novel statistical method that measures the mapping of styles onto group boundaries by decomposing the system into sequential drawing decisions. In line with Chapter 1, results demonstrate limited group-level variation: distinct styles or patterns in kolam art map only weakly onto caste boundaries, neighborhoods, or previous migration.

Both chapters strongly suggest that kolam art is primarily a sphere in which artists differentiate themselves from others by displaying their unique skills and knowledge. Variability in kolam art is thus dominated by individual-level variation and is not reflective of group boundaries or narrow socialization channels. This thesis contributes to an emerging understanding of how artists conceptualize what they are doing and how art functions within the community. Taken together, it demonstrates an optimized workflow and novel approaches for the evolutionary study of a large corpus of artistic traditions.
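The complexity measures named above have simple closed forms. As a minimal sketch (with invented gesture counts, not values from the kolam corpus), the snippet below computes richness, Shannon entropy, and Pielou's evenness for one hypothetical artwork; since entropy equals evenness times the logarithm of richness, holding entropy stable forces the two measures to trade off, as the abstract describes.

```python
import numpy as np

# Hypothetical counts of how often each distinct gesture (drawing decision)
# appears in one kolam; illustrative values only, not from the data.
counts = np.array([12, 7, 5, 3, 2, 1])
p = counts / counts.sum()

richness = len(counts)                   # number of distinct gestures
entropy = -np.sum(p * np.log2(p))        # Shannon information entropy (bits)
evenness = entropy / np.log2(richness)   # Pielou's evenness, in [0, 1]

# entropy == evenness * log2(richness), so fixed entropy forces a trade-off.
print(richness, round(entropy, 3), round(evenness, 3))
```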
|
212 |
Exploring stellar magnetic activities with Bayesian inference / ベイズ推論による恒星磁気活動の探究
Ikuta, Kai, 23 March 2021
Kyoto University / New curriculum doctoral program / Doctor of Science / Kou Degree No. 23006 / Science Doctorate No. 4683 / 新制||理||1672 (University Library) / Department of Physics and Astrophysics, Graduate School of Science, Kyoto University / (Chief examiner) Associate Professor Daisaku Nogami, Professor Kiyoshi Ichimoto, Professor Kouji Ohta / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
|
213 |
Bayesian Learning in Computational Rheology: Applications to Soft Tissues and Polymers
Kedari, Sayali Ravindra, 23 May 2022
No description available.
|
214 |
Bayesian Music Alignment / ベイズ推定に基づく音楽アライメント
Maezawa, Akira, 23 March 2015
Kyoto University / 0048 / New curriculum doctoral program / Doctor of Informatics / Kou Degree No. 19106 / Informatics Doctorate No. 552 / 新制||情||98 (University Library) / 32057 / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Tatsuya Kawahara, Professor Toshiyuki Tanaka, Lecturer Kazuyoshi Yoshii / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
215 |
Approximate Bayesian Inference based on Dense Matrices and New Features using INLA
Abdul Fattah, Esmail, 30 July 2023
The Integrated Nested Laplace Approximations (INLA) method has become a common tool for researchers and practitioners performing approximate Bayesian inference across many fields of application. It has therefore become essential to incorporate more complex models and to expand the method's capabilities with new features. This dissertation contributes to the INLA method in three respects.

First, we present a new framework, INLA+, based on dense matrices, for performing approximate Bayesian inference. An application of the new approach is fitting disease-mapping models for count data with complex interactions. When the precision matrix is dense, the new approach scales better than the existing INLA method and exploits the power of multiprocessors on the shared- and distributed-memory architectures of today's computational resources.

Second, we propose an adaptive technique to improve gradient estimation within INLA's convex gradient-based optimization framework: a simple limited-memory scheme that improves the accuracy of the numerical gradient of the marginal posterior of the hyperparameter by exploiting a coordinate transformation of the gradient and the history of previously taken descent directions.

Third, we extend the Besag model, a Bayesian spatial model commonly used in disease mapping, into a non-stationary spatial model that allows spatial dependence to vary among a predetermined number of sub-regions. The model incorporates multiple precision parameters, enabling a different intensity of spatial dependence in each sub-region; to avoid overfitting and enhance generalization, we derive a joint penalized complexity prior for these parameters.

These contributions expand the capabilities of the INLA method, improving its scalability, accuracy, and flexibility for a wider range of applications.
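As an illustration of the non-stationary Besag idea, the sketch below builds an intrinsic conditional autoregressive (ICAR) precision matrix on a toy line graph with one precision parameter per sub-region. Weighting each edge by the geometric mean of the two sub-region precisions is one plausible construction chosen for exposition, not necessarily the dissertation's formulation.

```python
import numpy as np

n = 6                                      # regions on a line graph 0-1-...-5
edges = [(i, i + 1) for i in range(n - 1)]
subregion = np.array([0, 0, 0, 1, 1, 1])   # two predetermined sub-regions
tau = np.array([1.0, 4.0])                 # one precision per sub-region (toy)

# Besag/ICAR structure: off-diagonal -w for neighbors, diagonal sums of w.
Q = np.zeros((n, n))
for i, j in edges:
    w = np.sqrt(tau[subregion[i]] * tau[subregion[j]])
    Q[i, j] -= w
    Q[j, i] -= w
    Q[i, i] += w
    Q[j, j] += w

# Q is an intrinsic (improper) precision: rows sum to zero, rank n - 1.
print(np.allclose(Q.sum(axis=1), 0), np.linalg.matrix_rank(Q))
```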
|
216 |
Study on method for estimating phase coupling functions from time-series data and its application to human gait motion data / 時系列データからの位相結合関数の推定手法とその歩行運動データへの応用に関する研究
Arai, Takahiro, 23 March 2023
Kyoto University / New curriculum doctoral program / Doctor of Informatics / Kou Degree No. 24739 / Informatics Doctorate No. 827 / 新制||情||139 (University Library) / Department of Advanced Mathematical Sciences, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Toshio Aoyagi, Professor Satoshi Taguchi, Associate Professor Jun-nosuke Teramae / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DGAM
|
217 |
Investigating Convergence of Markov Chain Monte Carlo Methods for Bayesian Phylogenetic Inference
Spade, David Allen, 29 August 2013
No description available.
|
218 |
Improving Machine Learning Through Oracle Learning
Menke, Joshua Ephraim, 12 March 2007
This dissertation presents a new paradigm for improving the training of machine learning algorithms: oracle learning. The main idea is that instead of training directly on a set of data, a learning model is trained to approximate a given oracle's behavior on that data. This is beneficial in situations where it is easier to obtain an oracle than to use it at application time. It is shown that oracle learning can be applied to reduce the size of artificial neural networks more effectively, to take advantage of domain experts more efficiently by approximating them, and to adapt a problem more effectively to a machine learning algorithm.
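A minimal sketch of the paradigm under invented assumptions: the "oracle" below is an arbitrary fixed function standing in for a large trained network or a domain expert, and a small two-layer network (the kind oracle learning would deploy at application time) is fit to the oracle's outputs rather than to the original labels. This illustrates the idea only and is not the dissertation's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def oracle(x):
    # Hypothetical oracle's soft output; a stand-in for a large model.
    return np.tanh(3.0 * x)

X = rng.uniform(-1, 1, size=(1000, 1))
t = oracle(X)                        # oracle targets replace true labels

# "Student": one hidden layer, trained by full-batch gradient descent
# on the squared error between its output and the oracle's output.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.3
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    y = h @ W2 + b2
    g = 2 * (y - t) / len(X)         # dLoss/dy
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum(0)
    gh = (g @ W2.T) * (1 - h**2)     # backprop through tanh
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

print(np.mean((y - t) ** 2))         # student's fit to the oracle
```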
|
219 |
Simulation and Calibration of Uncertain Space Fractional Diffusion Equations
Alzahrani, Hasnaa H., 10 January 2023
Fractional diffusion equations have played an increasingly important role in explaining long-range interactions, nonlocal dynamics, and anomalous diffusion, providing effective means of describing the memory and hereditary properties of such processes. This dissertation explores uncertainty propagation in space fractional diffusion equations in one and multiple dimensions with variable diffusivity and order parameters. This is achieved by (i) deploying accurate numerical schemes for the forward problem, and (ii) employing uncertainty quantification tools that accelerate the inverse problem. We begin by focusing on parameter calibration of a variable-diffusivity fractional diffusion model. A random, spatially varying diffusivity field is considered together with an uncertain but spatially homogeneous fractional operator order. Polynomial chaos (PC) techniques are used to express the dependence of the stochastic solution on these random variables. A non-intrusive methodology is used, with a deterministic finite-difference solver of the fractional diffusion model. The surrogates are first used to assess the sensitivity of quantities of interest (QoIs) to the uncertain inputs and to examine their statistics. In particular, the analysis indicates that the fractional order has a dominant effect on the variance of the QoIs considered. The PC surrogates are further exploited to calibrate the uncertain parameters using a Bayesian methodology. In the broad range of parameters addressed, the analysis shows that the uncertain parameters having a significant impact on the variance of the solution can be reliably inferred, even from limited observations.
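A minimal sketch of the non-intrusive PC approach: sample a standard normal germ, run the deterministic solver at each sample, and fit normalized probabilists' Hermite coefficients by least squares. The one-dimensional "solver" below is an invented stand-in for the finite-difference fractional diffusion solver; with an orthonormal basis, the surrogate's mean and variance read directly off the coefficients, which is what makes PC surrogates convenient for the sensitivity analysis described above.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)

def solver(xi):
    # Invented stand-in: maps one uncertain input (e.g., a log-diffusivity
    # germ) to a scalar quantity of interest.
    return np.exp(-0.5 * np.exp(xi))

deg = 6
xi = rng.standard_normal(500)            # samples of the N(0,1) germ
y = solver(xi)                           # non-intrusive: only solver runs

# Probabilists' Hermite design matrix, normalized so the basis is
# orthonormal under N(0,1): ||He_k||^2 = k!.
Psi = np.polynomial.hermite_e.hermevander(xi, deg)
Psi /= np.sqrt([factorial(k) for k in range(deg + 1)])
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# With an orthonormal basis, PCE coefficients give the statistics directly.
mean, var = coef[0], np.sum(coef[1:] ** 2)
print(mean, var)                         # compare with y.mean(), y.var()
```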
Next, we address the numerical challenges that arise when multidimensional space-fractional diffusion equations have spatially varying diffusivity and fractional order. Significant computational challenges stem from the kernel singularity in the fractional integral operator as well as the resulting dense discretized operators. We therefore present a singularity-aware discretization scheme that regularizes the singular integrals through a singularity subtraction technique adapted to the spatial variability of diffusivity and fractional order. This regularization strategy is conveniently formulated as a sparse matrix correction added to the dense operator, and is applicable to different formulations of fractional diffusion equations. Numerical results show that the singularity treatment is robust, substantially reduces discretization errors, and attains the first-order convergence rate allowed by the regularity of the solutions.
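The dissertation's scheme itself is more involved, but the core regularization idea can be shown in one dimension: writing the fractional Laplacian as an integral of second differences makes the integrand vanish like r^2 at the singular point, so an ordinary quadrature converges. The sketch below (constant order alpha, whole-line operator) is an illustration under those simplifying assumptions, checked roughly against the identity that the fractional Laplacian of cos(x) is cos(x).

```python
import numpy as np
from scipy.special import gamma

def frac_laplacian_1d(u, x, alpha, R=200.0, n=400_000):
    """Approximate (-Delta)^{alpha/2} u at x via
    C * int_0^inf (2u(x) - u(x+r) - u(x-r)) / r^{1+alpha} dr.
    The second difference is O(r^2) near r = 0, which regularizes the
    r^{-1-alpha} kernel singularity (sketch of the idea only)."""
    # Standard 1D normalizing constant C_{1,alpha}.
    C = alpha * 2**(alpha - 1) * gamma((1 + alpha) / 2) / (
        np.sqrt(np.pi) * gamma(1 - alpha / 2))
    h = R / n
    r = (np.arange(n) + 0.5) * h           # midpoint rule avoids r = 0
    integrand = (2 * u(x) - u(x + r) - u(x - r)) / r**(1 + alpha)
    tail = 2 * u(x) / (alpha * R**alpha)   # analytic tail of the 2u(x) term
    return C * (np.sum(integrand) * h + tail)

# Rough check: the operator has symbol |xi|^alpha, so it maps cos to cos.
print(frac_laplacian_1d(np.cos, 0.7, alpha=1.5), np.cos(0.7))
```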
In the last part, we explore the application of a Bayesian formalism to detect an anomaly in a fractional medium. Specifically, a computational method is presented for inferring the location and properties of an inclusion inside a two-dimensional domain. The anomaly is assumed to have known shape, but unknown diffusivity and fractional order parameters, and is assumed to be embedded in a fractional medium of known fractional properties. To detect the presence of the anomaly, the medium is forced using a collection of localized sources, and its response is measured at the source locations. To this end, the singularity-aware finite-difference scheme is applied. A non-intrusive regression approach is used to explore the dependence of the computed signals on the properties of the anomaly, and the resulting surrogates are first exploited to characterize the variability of the response, and then used to accelerate the Bayesian inference of the anomaly. In the regime of parameters considered, the computational results indicate that robust estimates of the location and fractional properties of the anomaly can be obtained, and that these estimates become sharper when high contrast ratios prevail between the anomaly and the surrounding matrix.
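A sketch of the surrogate-accelerated inference step, with an invented three-signal surrogate standing in for the regression fit to the computed responses: once the forward map is cheap, the posterior over the anomaly's diffusivity contrast and fractional order can be mapped by brute force.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(kappa, alpha):
    # Invented cheap stand-in for the fitted response surrogate; the real
    # forward model is the expensive singularity-aware solver.
    return np.array([np.exp(-kappa * alpha), kappa + alpha, np.sin(kappa)])

sigma = 0.05
data = surrogate(1.3, 1.6) + sigma * rng.normal(size=3)  # noisy measurements

# Grid posterior: uniform prior over the box, Gaussian likelihood.
kappas = np.linspace(0.5, 2.0, 200)
alphas = np.linspace(1.1, 1.9, 200)
logpost = np.zeros((200, 200))
for i, k in enumerate(kappas):
    for j, a in enumerate(alphas):
        r = surrogate(k, a) - data
        logpost[i, j] = -0.5 * np.sum(r**2) / sigma**2

i, j = np.unravel_index(logpost.argmax(), logpost.shape)
print("MAP estimate:", kappas[i], alphas[j])   # should land near (1.3, 1.6)
```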
|
220 |
Real-time Traffic Safety Evaluation Models And Their Application For Variable Speed Limits
Yu, Rongjie, 01 January 2013
Traffic safety has become a primary concern in transportation: crashes cause extensive human and economic losses. With the objective of reducing crash occurrence and alleviating crash injury severity, major efforts have been dedicated to revealing the hazardous factors that affect crash occurrence at both the aggregate level (targeting crash frequency per segment, intersection, etc.) and the disaggregate level (analyzing each crash event).

Aggregate traffic safety studies, which mainly develop safety performance functions (SPFs), are conducted to unveil crash contributing factors for locations of interest. Their results can be used to identify crash hot spots, calculate crash modification factors (CMFs), and improve geometric characteristics. Aggregate analyses focus on discovering the hazardous factors related to the frequency of total crashes, of a specific crash type, or of each crash severity level. Disaggregate studies, by contrast, benefit from reliable surveillance systems that provide detailed real-time traffic and weather data; this information can capture the micro-level influences of hazardous factors that may lead to a crash. Disaggregate traffic safety models, also called real-time crash risk evaluation models, can monitor crash hazardousness as real-time field data are fed in. One potential use of such models is to drive Variable Speed Limits (VSL) as part of a freeway management system, predicting crash occurrence in order to proactively improve traffic safety.

In this study, aggregate safety performance functions were first estimated to unveil the risk factors affecting crash occurrence on a mountainous freeway section. Disaggregate real-time crash risk evaluation models were then developed for total crashes using both machine learning and hierarchical Bayesian models. Given the need to analyze both aggregate and disaggregate aspects of traffic safety, systematic multi-level traffic safety studies were conducted for single- and multi-vehicle crashes, and for weekday and weekend crashes. Finally, the feasibility of utilizing a VSL system to improve traffic safety on freeways was investigated.

This research was conducted with data from a 15-mile mountainous freeway section on I-70 in Colorado. The data contain historical crash records, roadway geometric characteristics, real-time weather data, and real-time traffic data. Real-time weather data were recorded by six weather stations installed along the freeway section, while real-time traffic data were obtained from Remote Traffic Microwave Sensor (RTMS) radars and Automatic Vehicle Identification (AVI) systems. Different datasets were formulated from these sources and prepared for the multi-level traffic safety studies.

In the aggregate investigation, safety performance functions were developed to identify hazardous factors for crash occurrence; for the first time, real-time weather and traffic data were used in SPFs. An ordinary Poisson model and random effects Poisson models with a Bayesian inference approach were employed to reveal the effects of weather- and traffic-related variables on crash occurrence. Two scenarios were considered: one season-based case and one crash-type-based case.
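As a minimal illustration of an SPF of this kind, the sketch below fits a plain Poisson GLM to synthetic segment-level crash counts with weather and traffic covariates. The covariate names and effect sizes are invented, and the study's random effects and Bayesian estimation are deliberately omitted here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per freeway segment-period.
n = 120
precip = rng.gamma(2.0, 1.0, n)        # precipitation proxy (invented)
log_vol = rng.normal(10.0, 1.0, n)     # log traffic volume proxy (invented)
mu = np.exp(-6.0 + 0.3 * precip + 0.5 * log_vol)
crashes = rng.poisson(mu)              # segment crash counts

# Safety performance function as a Poisson regression with a log link.
X = sm.add_constant(np.column_stack([precip, log_vol]))
spf = sm.GLM(crashes, X, family=sm.families.Poisson()).fit()
print(spf.params)                      # should recover roughly (-6, 0.3, 0.5)
```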
The Deviance Information Criterion (DIC) was utilized as the comparison criterion, and the correlated random effects Poisson models outperformed the others. Results indicate that weather condition variables, especially precipitation, play a key role in the safety performance functions. Moreover, to compare against the correlated random effects Poisson model, a Multivariate Poisson model and a Multivariate Poisson-lognormal model were estimated. The conclusion is that, instead of assuming identical random effects for homogeneous segments, modeling the correlation between the two count variables yields a better fit.

Results from the aggregate analyses shed light on policy implications for reducing crash frequencies. For the studied roadway segment, crash occurrence in the snow season has clear trends associated with adverse weather (poor visibility and large amounts of precipitation); weather warning systems can be employed to improve road safety during the snow season. Furthermore, different traffic management strategies should be developed according to the distinct seasonal influence factors. In particular, sites with steep slopes need more attention from the traffic management center and operators, especially during snow seasons, to control the excess crash occurrence. Distinct freeway management strategies should also be designed to address the differences between single- and multi-vehicle crash characteristics.

In addition to developing safety performance functions with various modeling techniques, this study investigates four approaches to developing informative priors for the independent variables. The Bayesian inference framework provides a complete and coherent way to balance empirical data against prior expectations; the merits of these informative priors were tested with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal). The Deviance Information Criterion, R-squared values, and coefficients of variation of the estimates were utilized as evaluation measures to select the best model(s). Comparisons across the models indicate that the Poisson-gamma model is superior, with a better fit, and is much more robust with informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracy.

In addition to the aggregate analyses, real-time crash risk evaluation models were developed to identify crash contributing factors at the disaggregate level. Support Vector Machine (SVM) models, a recently proposed statistical learning approach, and hierarchical Bayesian logistic regression models were introduced to evaluate real-time crash risk. A classification and regression tree (CART) model was developed to select the most important explanatory variables; based on the variable selection results, Bayesian logistic regression models and SVM models with different kernel functions were developed. Model comparisons based on receiver operating characteristic (ROC) curves demonstrate that the SVM model with a radial basis kernel function outperforms the others. Results from the models show that crashes are likely to happen during congestion periods (especially when the queuing area has propagated from the downstream segment), and that high variation of occupancy and/or volume increases the probability of crash occurrence.
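The model comparison described above can be sketched on synthetic stand-in data (the features and the data-generating rule are invented): fit a logistic regression and an RBF-kernel SVM to binary crash/non-crash snapshots and compare them by the area under the ROC curve, mirroring the evaluation used in the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Each row is a traffic snapshot (speed, occupancy variation, volume
# variation), labeled 1 if a crash followed; all values are synthetic.
n = 2000
X = rng.normal(size=(n, 3))
logit = -2.0 - 0.8 * X[:, 0] + 1.2 * X[:, 1] + 0.9 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("svm-rbf", SVC(kernel="rbf", probability=True))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, round(auc, 3))   # compare models by area under the ROC curve
```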
Moreover, the effects of microscopic traffic, weather, and roadway geometric factors on the occurrence of specific crash types were investigated, with crashes categorized as rear-end, sideswipe, or single-vehicle. AVI segment average speed, real-time weather data, and roadway geometric characteristics were utilized as explanatory variables. The conclusions imply that different active traffic management (ATM) strategies should be designed for three- and two-lane roadway sections, also taking seasonal effects into account.

Building on these results, real-time crash risk evaluation models were developed separately for multi-vehicle and single-vehicle crashes, and for weekday and weekend crashes. Hierarchical Bayesian logistic regression models (random effects and random parameter logistic regressions) were introduced to address seasonal variations, crash-unit-level diversity, and unobserved heterogeneity caused by geometric characteristics. For multi-vehicle crashes, congested conditions downstream increase the likelihood of a crash, and such crashes are more likely during poor visibility conditions or when a turbulent area exists downstream; drivers who are unable to reduce their speed in time are prone to causing rear-end crashes. For single-vehicle crashes, slow-moving traffic platoons at the detector downstream of the crash location increase crash probability, as do large variations in downstream occupancy.

Substantial effort has been dedicated in this study to revealing the hazardous factors that affect crash occurrence at both the aggregate and disaggregate levels; however, these findings and conclusions need to be translated into applications for roadway design and freeway management. This study therefore further investigates the feasibility of utilizing a Variable Speed Limits (VSL) system, a key component of ATM, to improve traffic safety on freeways. A proactive traffic-safety-improvement VSL control algorithm is proposed. First, an extension of the traffic flow model METANET was employed to predict traffic flow while accounting for VSL's impact on the flow-density diagram; a real-time crash risk evaluation model was then estimated to quantify crash risk; finally, optimal VSL control strategies were obtained by minimizing the total predicted crash risk along the VSL implementation area. Constraints were set to limit increases in average travel time and differences between posted speed limits, both temporally and spatially. The proposed VSL control strategy was tested on a mountainous freeway bottleneck area in the microscopic simulation software VISSIM. The safety impacts of the VSL system were quantified as improvements in crash risk and speed homogeneity. Moreover, three driver compliance levels were modeled in VISSIM to assess the sensitivity of VSL's safety impacts to driver compliance.
Conclusions demonstrate that the proposed VSL system can effectively improve traffic safety by decreasing crash risk, enhancing speed homogeneity, and reducing travel time under both high and moderate driver compliance levels, whereas it has no significant safety effect under the low-compliance scenario. Future implementations of VSL control strategies and related research topics are also discussed.
|