161
The standard interpretation of higher-order variables in modern logic and the concept of function in mathematics
Constant, Dimitri, 22 January 2016
A logic that utilizes higher-order quantification --quantifying over concepts (or relations), not just over the first-order level of individuals-- can be interpreted standardly or nonstandardly depending on whether one takes an intensional or extensional view of concepts. I argue that this decision is connected to how one understands the mathematical notion of function. A function is often understood as a rule that, when given an argument from a set of objects called a "domain," returns a value from a set of objects called a "codomain." Because a concept can be thought of as a two-valued function (that indicates whether or not a given object falls under the concept), having an extensional interpretation of higher-order variables --the standard interpretation-- requires that one adopt an extensional notion of function. Viewed extensionally, however, a function is understood not as a rule but rather as a correlation associating every element in a domain with an element in a codomain. When the domain is finite, the two understandings of function are equivalent (since one can define a rule for any finite correlation), but with an infinite domain, the latter understanding admits arbitrary functions, or correlations not definable by a finitely specifiable rule.
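To make the link between concepts and two-valued functions explicit (a schematic gloss in my own notation, not the author's formalism): a concept C over a domain D can be identified with its characteristic function

    \[
      \chi_C : D \to \{\top,\bot\}, \qquad \chi_C(x) = \top \iff x \text{ falls under } C .
    \]

On the standard, extensional interpretation a higher-order variable then ranges over all such functions on D, i.e. over the full power set of D, whereas a nonstandard interpretation restricts its range to some designated subcollection; this is why the choice between the interpretations turns on whether arbitrary, rule-free correlations count as functions.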
Rejection of the standard interpretation is often motivated by the same reasons used to resist the extensional understanding of function. Such resistance is overt in the pronouncements of Leopold Kronecker, but is also implicit in the work of Gottlob Frege, who used an intensional notion of function in his logic. Looking at the problem historically, I argue that the extensional notion of function has been basic to mathematics since ancient times. Moreover, I claim that Gottfried Wilhelm Leibniz's combination of mathematical and metaphysical ideas helped inaugurate an extensional and ultimately model-theoretical approach to mathematical concepts that led to some of the most important applications of mathematics to science (e.g. the use of non-Euclidean geometry in the theory of general relativity). In logic, Frege's use of an intensional notion of function led to contradiction, while Richard Dedekind and Georg Cantor applied the extensional notion of function to develop mathematically revolutionary theories of the transfinite.
162
Uncertainty Quantification and Sensitivity Analysis of Multiphysics Environments for Application in Pressurized Water Reactor Design
Blakely, Cole David, 01 August 2018
The most common design among U.S. nuclear power plants is the pressurized water reactor (PWR). The three primary design disciplines of these plants are system analysis (which includes thermal hydraulics), neutronics, and fuel performance. The nuclear industry has developed a variety of codes over the course of forty years, each with an emphasis on a specific discipline. Perhaps the greatest difficulty in mathematically modeling a nuclear reactor is choosing which specific phenomena need to be modeled, and in what detail.
A multiphysics computational environment provides a means of advancing simulations of nuclear plants. Put simply, users are able to combine various physical models that have commonly been treated separately in the past. The focus of this work is a specific multiphysics environment currently under development at Idaho National Laboratory (INL), known as the LOCA Toolkit for US light water reactors (LOTUS).
The ability of LOTUS to use uncertainty quantification (UQ) and sensitivity analysis (SA) tools within a multiphysics environment allows for a number of unique analyses which, to the best of our knowledge, have yet to be performed. These include the first known integration of VERA-CS, the neutronics and thermal hydraulics code currently under development by CASL, with the well-established fuel performance code FRAPCON developed by PNNL. The integration was used to model a fuel depletion case.
The outputs of interest for this integration were the minimum departure from nucleate boiling ratio (MDNBR) (a thermal hydraulic parameter indicating how close a heat flux is to causing a dangerous form of boiling in which an insulating layer of coolant vapour is formed), the maximum fuel centerline temperature (MFCT) of the uranium rod, and the gap conductance at peak power (GCPP). GCPP refers to the thermal conductance of the gas filled gap between fuel and cladding at the axial location with the highest local power generation.
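For reference, the standard textbook definition behind MDNBR (not quoted from the thesis): the departure from nucleate boiling ratio at axial position z compares the predicted critical heat flux with the actual local heat flux, and MDNBR is its minimum over the hot channel,

    \[
      \mathrm{DNBR}(z) = \frac{q''_{\mathrm{CHF}}(z)}{q''(z)}, \qquad \mathrm{MDNBR} = \min_z \mathrm{DNBR}(z),
    \]

so a larger MDNBR means a larger margin to departure from nucleate boiling.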
UQ and SA were performed on MDNBR, MFCT, and GCPP at a variety of times throughout the fuel depletion. Results showed the MDNBR to behave linearly and consistently throughout the depletion, with the most impactful input uncertainties being coolant outlet pressure and inlet temperature as well as core power. MFCT also behaves linearly, but with a shift in SA measures. Initially MFCT is sensitive to fuel thermal conductivity and gap dimensions. However, later in the fuel cycle, nearly all uncertainty stems from fuel thermal conductivity, with minor contributions coming from core power and initial fuel density. GCPP uncertainty exhibits nonlinear, time-dependent behaviour which requires higher-order SA measures to analyze properly. GCPP begins with a dependence on gap dimensions but, in later states, shifts to a dependence on the biases of a variety of specific calculations, such as fuel swelling and cladding creep and oxidation.
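As an illustration of the variance-based SA described here, the sketch below estimates first-order and total Sobol indices with standard pick-and-freeze estimators. It is a generic Python sketch, not LOTUS code: the coupled VERA-CS/FRAPCON run is stood in for by a placeholder function, and the input names, ranges, and sample size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical uncertain inputs (names and ranges are illustrative only):
    # core power multiplier, inlet temperature [K], outlet pressure [MPa]
    lower = np.array([0.98, 550.0, 15.2])
    upper = np.array([1.02, 570.0, 15.8])
    d = len(lower)

    def coupled_model(x):
        """Placeholder for one coupled VERA-CS/FRAPCON evaluation returning,
        e.g., MDNBR for a sampled input vector; replace with the real run."""
        power, t_in, p_out = x
        return 2.0 - 1.5 * (power - 1.0) - 0.01 * (t_in - 560.0) + 0.2 * (p_out - 15.5)

    def sobol_indices(model, n=4096):
        """First-order (S1) and total (ST) Sobol indices via pick-and-freeze sampling."""
        A = lower + (upper - lower) * rng.random((n, d))
        B = lower + (upper - lower) * rng.random((n, d))
        fA = np.apply_along_axis(model, 1, A)
        fB = np.apply_along_axis(model, 1, B)
        var = np.var(np.concatenate([fA, fB]))
        S1, ST = np.empty(d), np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                            # A with column i taken from B
            fABi = np.apply_along_axis(model, 1, ABi)
            S1[i] = np.mean(fB * (fABi - fA)) / var        # Saltelli (2010) first-order estimator
            ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen total-effect estimator
        return S1, ST

    S1, ST = sobol_indices(coupled_model)
    print("first-order:", np.round(S1, 3), "total:", np.round(ST, 3))

Second-order interaction measures, of the kind needed for the nonlinear GCPP behaviour, follow the same pattern with additional mixed sample matrices.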
LOTUS was also used to perform the first higher-order SA of an integration of VERA-CS with the BISON fuel performance code currently under development at INL. The same problem and outputs were studied as in the VERA-CS and FRAPCON integration. Results for MDNBR and MFCT were relatively consistent. GCPP results contained notable differences, specifically a large dependence on fuel and clad surface roughness in later states. However, this difference is due to the surface roughness not being perturbed in the first integration. SA of later states also showed an increased sensitivity to fission gas release coefficients.
Lastly, a loss of coolant accident (LOCA) was investigated with an integration of FRAPCON with the INL neutronics code PHISICS and the system analysis code RELAP5-3D. The outputs of interest were the ratios of the peak cladding temperature (PCT, the highest temperature reached by the cladding during the LOCA) and the equivalent cladding reacted (ECR, the percentage of cladding oxidized) to their cladding hydrogen content-based limits. This work contains the first known UQ of these ratios within the aforementioned integration. Results showed the PCT ratio to be relatively well behaved. The ECR ratio behaves as a threshold variable, which is to say it abruptly shifts to radically higher values under specific conditions. This threshold behaviour establishes the importance of performing UQ so as to see the full spectrum of possible values for an output of interest.
The SA capabilities of LOTUS provide a path forward for developers to increase code fidelity for specific outputs. Performing UQ within a multiphysics environment may provide improved estimates of safety metrics in nuclear reactors. These improved estimates may allow plants to operate at higher power, thereby increasing profits. Lastly, LOTUS will be of particular use in the development of newly proposed nuclear fuel designs.
163
Modeling and Quantification of Profile Matching Risk in Online Social Networks
Halimi, Anisa, 01 September 2021
No description available.
164
Uncertainty Quantification for Underdetermined Inverse Problems via Krylov Subspace Iterative Solvers
Devathi, Duttaabhinivesh, 23 May 2019
No description available.
165
Non-Deterministic Metamodeling for Multidisciplinary Design Optimization of Aircraft Systems Under Uncertainty
Clark, Daniel L., Jr., 18 December 2019
No description available.
166
Nonlinear Uncertainty Quantification, Sensitivity Analysis, and Uncertainty Propagation of a Dynamic Electrical Circuit
Doty, Austin, January 2012
No description available.
167
Applications of Computer Vision Technologies of Automated Crack Detection and Quantification for the Inspection of Civil Infrastructure Systems
Wu, Liuliu, 01 January 2015
Many components of existing civil infrastructure systems, such as road pavement, bridges, and buildings, suffer from rapid aging, which requires enormous national resources from federal and state agencies to inspect and maintain them. Cracking is one of the most important material and structural defects; it must be inspected not only to maintain civil infrastructure at a high level of safety and serviceability, but also to provide early warning against failure. Conventional human visual inspection is still considered the primary inspection method. However, it is well established that human visual inspection is subjective and often inaccurate. In order to improve current manual visual inspection for crack detection and evaluation of civil infrastructure, this study explores the application of computer vision techniques as a non-destructive evaluation and testing (NDE&T) method for automated crack detection and quantification for different civil infrastructures.

In this study, computer vision-based algorithms were developed and evaluated to deal with the different field-inspection situations that inspectors could face in crack detection and quantification. The depth, that is, the distance between camera and object, is an extrinsic parameter that has to be measured to quantify crack size, since the other parameters, such as focal length, resolution, and camera sensor size, are intrinsic and usually known from the camera manufacturer. Thus, computer vision techniques were evaluated in crack inspection applications with both constant and variable depths. For the fixed-depth applications, computer vision techniques were applied to two field studies: 1) automated crack detection and quantification for road pavement using the Laser Road Imaging System (LRIS), and 2) automated crack detection on bridge cable surfaces using a cable inspection robot. For the variable-depth applications, two field studies were conducted: 3) automated crack recognition and width measurement of cracks in concrete bridges using a high-magnification telescopic lens, and 4) automated crack quantification and depth estimation using wearable glasses with stereovision cameras.

From these field applications of computer vision techniques, a novel self-adaptive image-processing algorithm was developed that uses a series of morphological transformations to connect fragmented crack pixels in digital images. The crack-defragmentation algorithm was evaluated with road pavement images. The results showed that the accuracy of automated crack detection, associated with an artificial neural network classifier, was significantly improved by reducing both false positives and false negatives. Using up to six crack features, including area, length, orientation, texture, intensity, and wheel-path location, crack detection accuracy was evaluated to find the optimal sets of crack features. Lab and field test results from the different inspection applications show that the proposed computer vision-based crack detection and quantification algorithms can detect and quantify cracks on different structures' surfaces and at different depths. Guidelines for applying computer vision techniques are also suggested for each crack inspection application.
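A minimal sketch of the two ingredients described above, written as generic OpenCV/NumPy code rather than the author's actual implementation: a morphological closing that reconnects fragmented crack pixels in a binary crack map, and a pinhole-camera conversion from measured pixel width to physical crack width given the depth. The kernel size, depth, and camera parameters are illustrative assumptions.

    import cv2
    import numpy as np

    def defragment_cracks(binary_crack_map, kernel_size=5):
        """Connect nearby crack fragments with a morphological closing
        (dilation followed by erosion); the kernel size is an illustrative choice."""
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
        return cv2.morphologyEx(binary_crack_map, cv2.MORPH_CLOSE, kernel)

    def crack_width_mm(width_px, depth_mm, focal_length_mm, pixel_pitch_mm):
        """Pinhole-camera estimate of physical crack width: one image pixel spans
        (pixel_pitch * depth / focal_length) on the inspected surface."""
        return width_px * pixel_pitch_mm * depth_mm / focal_length_mm

    # Example with made-up numbers: a crack 12 pixels wide, imaged from 500 mm
    # with a 50 mm lens and a 0.005 mm (5 micron) sensor pixel pitch.
    print(crack_width_mm(12, 500.0, 50.0, 0.005))  # ~0.6 mm

The fixed-depth applications (LRIS pavement imaging, the cable robot) can use a calibrated constant depth in this conversion, while the variable-depth applications must measure depth per image, e.g. from the stereovision glasses.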
168
Two Types of Definites in Natural Language
Schwarz, Florian, 01 September 2009
This thesis is concerned with the description and analysis of two semantically different types of definite articles in German. While the existence of distinct article paradigms in various Germanic dialects and other languages has been acknowledged in the descriptive literature for quite some time, the theoretical implications of their existence have not been explored extensively. I argue that each of the articles corresponds to one of the two predominant theoretical approaches to analyzing definite descriptions: the `weak' article encodes uniqueness, while the `strong' article is anaphoric in nature. In the course of spelling out detailed analyses for the two articles, various more general issues relevant to current semantic theory are addressed, in particular with respect to the analysis of donkey sentences and domain restriction.

Chapter 2 describes the contrast between the weak and the strong article in light of the descriptive literature and characterizes their uses in terms of Hawkins's (1978) classification. Special attention is paid to two types of bridging uses, which shed further light on the contrast and play an important role in the analysis developed in the following chapters.

Chapter 3 introduces a situation semantics and argues for a specific version thereof. First, I propose that situation arguments in noun phrases are represented syntactically as situation pronouns at the level of the DP (rather than within the NP). Secondly, I argue that domain restriction (which is crucial for uniqueness analyses) can best be captured in a situation semantics, as this is both more economical and empirically more adequate than an analysis in terms of contextually supplied C-variables.

Chapter 4 provides a uniqueness analysis of weak-article definites. The interpretation of a weak-article definite crucially depends on the interpretation of its situation pronoun, which can stand for the topic situation or a contextually supplied situation, or be quantificationally bound. I make a specific proposal for how topic situations (roughly, the situations that we are talking about) can be derived from questions and relate this to a more general perspective on discourse structure based on the notion of Question Under Discussion (QUD) (Roberts 1996, Büring 2003). I also show that it requires a presuppositional view of definites. A detailed, situation-semantic analysis of covarying interpretations of weak-article definites in donkey sentences is spelled out as well, which provides some new insights with regard to transparent interpretations of the restrictors of donkey sentences.

Chapter 5 deals with so-called larger situation uses (Hawkins 1978), which call for a special, systematic way of determining the situation in which the definite is interpreted. I argue that a situation-semantic version of an independently motivated type-shifter for relational nouns (shifting relations of type ⟨e,⟨e,st⟩⟩ to properties of type ⟨e,st⟩) brings about the desired situational effect. This type-shifter also applies to cases of part-whole bridging and provides a deeper understanding thereof. Another independently motivated mechanism, namely that of Matching functions, gives rise to similar effects, but in contrast to the type-shifter, it depends heavily on contextual support and cannot account for the general availability of larger situation uses that is independent of the context.

The anaphoric nature of the strong article is described and analyzed in detail in chapter 6.
In addition to simple discourse anaphoric uses, I discuss covarying interpretations and relational anaphora (the type of bridging expressed by the strong article). Cases where uniqueness does not hold (e.g., in so-called bishop sentences) provide crucial evidence for the need to formally encode the anaphoric link between strong-article definites and their antecedents. The resulting dynamic analysis of strong-article definites encodes the anaphoric dependency via a separate anaphoric element that is incorporated into a uniqueness meaning. Finally, remaining challenges for the analysis are discussed, in particular the existence of strong-article definites without an antecedent and a puzzling contrast between the articles with respect to relative clauses. The final chapter discusses some loose ends that suggest directions for future work and sums up the main conclusions.
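As a rough schematic of the two analyses being contrasted, in simplified notation of my own rather than the thesis's final lexical entries, the uniqueness-based weak article and the anaphoric strong article can be glossed as

    \[
      [\![\text{the}_{weak}]\!] \;=\; \lambda s_r\, \lambda P : \exists! x\, P(x)(s_r).\ \iota x\, P(x)(s_r)
    \]
    \[
      [\![\text{the}_{strong}]\!] \;=\; \lambda s_r\, \lambda P\, \lambda y : \exists! x\, [P(x)(s_r) \wedge x = y].\ \iota x\, [P(x)(s_r) \wedge x = y]
    \]

where s_r is the situation pronoun discussed in chapter 3 and y is the additional anaphoric index that links a strong-article definite to its antecedent; the material before the period is a presupposition, in line with the presuppositional view of definites argued for in chapter 4.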
169
Automated measurement of fluorescence signals reveals a significant increase of the graft-derived neurite extension in neonates compared to aged rats
GRINAND, Luc Brice, 26 September 2022
Kyoto University / New-system doctoral program / Doctor of Medical Science / Degree No. 24202 (Kou) / Medical Science Doctorate No. 143 / New system||Medical Science||10 (University Library) / Medical Science Major, Graduate School of Medicine, Kyoto University / (Examiners) Professor Haruhisa Inoue, Professor Ryosuke Takahashi, Professor Takashi Hanakawa / Meets the requirements of Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
170
Deep Gaussian Process Surrogates for Computer Experiments
Sauer, Annie Elizabeth, 27 April 2023
Deep Gaussian processes (DGPs) upgrade ordinary GPs through functional composition, in which intermediate GP layers warp the original inputs, providing flexibility to model non-stationary dynamics. Recent applications in machine learning favor approximate, optimization-based inference for fast predictions, but applications to computer surrogate modeling - with an eye towards downstream tasks like Bayesian optimization and reliability analysis - demand broader uncertainty quantification (UQ). I prioritize UQ through full posterior integration in a Bayesian scheme, hinging on elliptical slice sampling of latent layers. I demonstrate how my DGP's non-stationary flexibility, combined with appropriate UQ, allows for active learning: a virtuous cycle of data acquisition and model updating that departs from traditional space-filling designs and yields more accurate surrogates for fixed simulation effort. I propose new sequential design schemes that rely on optimization of acquisition criteria through evaluation of strategically allocated candidates instead of numerical optimizations, with a motivating application to contour location in an aeronautics simulation. Alternatively, when simulation runs are cheap and readily available, large datasets present a challenge for full DGP posterior integration due to cubic scaling bottlenecks. For this case I introduce the Vecchia approximation, popular for ordinary GPs in spatial data settings. I show that Vecchia-induced sparsity of Cholesky factors allows for linear computational scaling without compromising DGP accuracy or UQ. I vet both active learning and Vecchia-approximated DGPs on numerous illustrative examples and real computer experiments. I provide open-source implementations in the "deepgp" package for R on CRAN. / Doctor of Philosophy / Scientific research hinges on experimentation, yet direct experimentation is often impossible or infeasible (practically, financially, or ethically). For example, engineers designing satellites are interested in how the shape of the satellite affects its movement in space. They cannot create whole suites of differently shaped satellites, send them into orbit, and observe how they move. Instead they rely on carefully developed computer simulations. The complexity of such computer simulations necessitates a statistical model, termed a "surrogate", that is able to generate predictions in place of actual evaluations of the simulator (which may take days or weeks to run). Gaussian processes (GPs) are a common statistical modeling choice because they provide nonlinear predictions with thorough estimates of uncertainty, but they are limited in their flexibility. Deep Gaussian processes (DGPs) offer a more flexible alternative while still reaping the benefits of traditional GPs. I provide an implementation of DGP surrogates that prioritizes prediction accuracy and estimates of uncertainty. For computer simulations that are very costly to run, I provide a method of sequentially selecting input configurations to maximize learning from a fixed budget of simulator evaluations. I propose novel methods for selecting input configurations when the goal is to optimize the response or identify regions that correspond to system "failures". When abundant simulation evaluations are available, I provide an approximation which allows for faster DGP model fitting without compromising predictive power. I thoroughly vet my methods on both synthetic "toy" datasets and real aeronautic computer experiments.
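The posterior integration described above hinges on elliptical slice sampling of the latent layers. Below is a generic Python sketch of a single elliptical slice sampling update (Murray, Adams & MacKay, 2010), not code from the deepgp package; the log-likelihood and the prior draw are placeholders supplied by the surrounding DGP model, which applies such updates layer by layer.

    import numpy as np

    def elliptical_slice(f_cur, prior_draw, log_lik):
        """One elliptical slice sampling update for a zero-mean GP-distributed latent vector.
        f_cur: current latent vector; prior_draw: a fresh draw nu ~ N(0, K) from the same prior;
        log_lik: function returning the log-likelihood of a latent vector."""
        nu = prior_draw
        log_y = log_lik(f_cur) + np.log(np.random.uniform())    # slice height under current state
        theta = np.random.uniform(0.0, 2.0 * np.pi)             # initial proposal angle
        theta_min, theta_max = theta - 2.0 * np.pi, theta        # bracket to shrink on rejection

        while True:
            f_prop = f_cur * np.cos(theta) + nu * np.sin(theta)  # point on the ellipse
            if log_lik(f_prop) > log_y:
                return f_prop                                    # accepted
            if theta < 0.0:                                      # shrink bracket toward theta = 0
                theta_min = theta
            else:
                theta_max = theta
            theta = np.random.uniform(theta_min, theta_max)

Because proposals stay on an ellipse through the current state and a prior draw, every proposal is consistent with the GP prior and no step-size tuning is required, which is part of what makes the scheme attractive for full posterior integration over latent layers.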