1. Controlling cracking in prestressed concrete panels. Foreman, James Michael (25 October 2010)
Precast, prestressed concrete panels (PCPs) are used in 85% of bridges in Texas. The goal of this thesis is to reduce collinear cracking (cracks propagating parallel to the strands) in PCPs. One way to reduce collinear cracking is to reduce the initial prestress force. In design, TxDOT conservatively assumes total prestress losses of 45 ksi. Based on eight panel specimens, instrumented and fabricated at two different precast plants in Texas, actual prestress losses were measured at no more than 25 ksi. This difference (about 20 ksi) is consistent with a reduction in initial prestress force from 16.1 kips per strand to 14.4 kips per strand. Another way to reduce collinear cracking is to provide additional transverse reinforcement in the end regions of the panels. Comparing crack spacings and crack widths in current and modified panel specimens showed that additional reinforcement consisting of one or two #3 bars placed transverse to the strands at panel ends would effectively control collinear cracking in PCPs.
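The arithmetic linking the loss difference to the per-strand force reduction can be checked directly. A minimal sketch, assuming 3/8-in. seven-wire strands with a nominal area of 0.085 in² (a common choice for PCPs; the strand size is not stated in the abstract):

```python
# Hedged sketch: relate the ~20-ksi difference in prestress losses to the
# proposed per-strand force reduction. The 0.085 in^2 strand area is an
# assumption (nominal area of a 3/8-in. seven-wire strand), not stated above.
STRAND_AREA = 0.085          # in^2, assumed 3/8-in. strand
assumed_losses = 45.0        # ksi, TxDOT design assumption
measured_losses = 25.0       # ksi, upper bound from the eight instrumented panels

excess_stress = assumed_losses - measured_losses       # ksi
excess_force = excess_stress * STRAND_AREA             # kips per strand

current_prestress = 16.1                               # kips per strand
reduced_prestress = current_prestress - excess_force   # kips per strand
print(f"excess force ~ {excess_force:.1f} kips -> reduced prestress ~ {reduced_prestress:.1f} kips")
# -> excess force ~ 1.7 kips -> reduced prestress ~ 14.4 kips
```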
2. On the analysis of multiple site damage in stiffened panels. Collins, Richard Anthony (January 1999)
No description available.
3. Investigating the balance of bottom-up and top-down processing in autistic perception. Jachim, Stephen (January 2015)
Autism spectrum disorder (ASD) is a neurodevelopmental disorder emerging in the first few years of life. Currently, three characteristics are required for a diagnosis of ASD: impaired social interaction, impaired verbal communication, and restricted and repetitive patterns of behaviour or interests. This last category can optionally include hyper- or hypo-reactivity to sensory input. Individuals with autism can also display superior performance on visual tasks where it may help to ignore global detail, behaviour sometimes described as ‘not seeing the forest for the trees’. At present, the exact mechanisms underlying the perceptual differences between autistic and neurotypical groups remain unknown, but they may reflect an imbalance in the contributions that bottom-up and top-down processing make to perception. Visual perception is thought to rely on interactions between the ‘bottom-up’ flow of ambiguous information from the retina and the ‘top-down’ flow of disambiguating information from higher cortical areas, via cortical circuits shaped by a lifetime’s experience. These interactions lead to the activation of internal representations (of objects) that are necessary for successful navigation of our environment. To investigate these perceptual differences, we employed three well-known experimental paradigms with a group of thirteen autistic participants and their matched controls, probing visual integration (involving bottom-up and top-down interactions) across low- and intermediate-stage neural mechanisms. A dim line (target) is easier to detect when flanked by two brighter collinear lines (flankers), an effect known as collinear facilitation, and we used two variations of this task to investigate low-level visual integration. In the first, we varied the orientation of the collinear flankers and found reduced integration in the autistic group compared to the neurotypical group, a finding that conflicts with previous research. In a second collinear facilitation experiment with neurotypical participants, in which the target could be presented before, during or after flanker presentation, we were able to isolate facilitation that we attribute to feedforward and feedback processing. However, in a subsequent study comparing autistic and neurotypical performance on this task, we found no significant difference. Moving on to intermediate-level visual integration, we used a contour integration task consisting of open (line) and closed (square) contours and found reduced integration in the autistic group compared to the neurotypical group when integrating closed contours. In our final study, we looked at global motion integration using a translating diamond: a bistable stimulus in which four lines can be perceived either as independent line fragments moving vertically or as a single integrated shape, a diamond moving horizontally. In this experiment, the autistic group showed an unexpected bias towards perceiving the stimulus in its integrated form as a diamond. Perceptual processing of shapes based on squares or diamonds reflects visual integration at a global level, so the differences we found in shape processing between our experimental groups (reduced integration for the square and increased integration for the diamond in autism) are more likely to result from differences in top-down processing.
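Collinear facilitation of the kind described above is commonly quantified as the change in contrast detection threshold between unflanked and flanked presentations, expressed in dB. A minimal sketch with invented threshold values (both the measure's exact form and the numbers are illustrative, not taken from the thesis):

```python
import math

# Hedged sketch: one common facilitation index from psychophysics. The
# threshold values below are invented for illustration only.
def facilitation_db(threshold_alone: float, threshold_flanked: float) -> float:
    """Positive values indicate facilitation (lower threshold with flankers)."""
    return 20.0 * math.log10(threshold_alone / threshold_flanked)

print(facilitation_db(threshold_alone=0.04, threshold_flanked=0.025))  # ~4.1 dB
```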
4. Algebraic Semi-Classical Model for Reaction Dynamics. Wendler, Tim Glenn (1 December 2014)
We use an algebraic method to model the molecular collision dynamics of a collinear triatomic system. Beginning with a forced oscillator, we develop a mathematical framework upon which inelastic and reactive collisions are modeled. The model is algebraic because it exploits the properties of a Lie algebra in the derivation of a time-evolution operator, which is shown to generate both the phase-space and the quantum dynamics of a forced oscillator simultaneously. The model is semi-classical because only the molecule's internal degrees of freedom are quantized: the relative translation between the colliding atom and molecule in an exchange reaction (AB + C → A + BC) contains no bound states, and any possible tunneling is neglected, so the relative translation is treated classically. The purpose of this dissertation is to develop a working model for the quantum dynamics of a collinear reactive collision. Once a reliable model is developed, we apply statistical mechanics by averaging over collisions with molecules in a thermal bath: the oscillator energies initially follow a Boltzmann distribution, and the relative velocities of the colliding particles are treated as a thermal average. Results show quantum transition probabilities near the transition state that are highly dynamic due to the coupling between the translational and transverse coordinates.
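For context, the standard displacement-operator solution for a forced oscillator prepared in its ground state yields Poissonian transition probabilities governed by the classical phase-space displacement α, which is one concrete sense in which a single evolution operator carries both classical and quantum information. A hedged sketch with an arbitrary example value of |α|²:

```python
import math

# Hedged sketch: for a harmonically forced oscillator starting in its ground
# state, the Lie-algebra (displacement-operator) solution gives a coherent
# final state, so transition probabilities are Poissonian in |alpha|^2, where
# alpha is the classical phase-space displacement accumulated by the forcing
# (|alpha|^2 = classical energy gain / (hbar*omega)). The value of alpha
# below is an arbitrary example, not taken from the dissertation.
def transition_probability(n: int, alpha_sq: float) -> float:
    """P(0 -> n) for a forced oscillator with squared displacement alpha_sq."""
    return math.exp(-alpha_sq) * alpha_sq**n / math.factorial(n)

alpha_sq = 1.5  # assumed example displacement
for n in range(5):
    print(n, round(transition_probability(n, alpha_sq), 4))
```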
5. Collinearity and Surround Size Effects on Spatial Discrimination Tasks. Kramer, Michael L. (8 August 2006)
No description available.
6. Applications of Effective Field Theories for Precision Calculations at e⁺e⁻ Colliders. Fickinger, Michael (January 2012)
Effective field theories can be used to describe measurements at e⁺e⁻ colliders over a wide kinematic range while allowing reliable error predictions and systematic extensions. We show this in two physical situations. First, we give a factorization formula for the e⁺e⁻ thrust distribution dσ/dτ, with thrust T and τ = 1 − T, based on soft collinear effective theory. The result is applicable for all τ, i.e. in the peak, tail, and far-tail regions. We present a global analysis of all available thrust distribution data measured at center-of-mass energies Q = 35 to 207 GeV in the tail region, where a two-parameter fit to the strong coupling constant α_s(m_Z) and the leading power correction parameter Ω₁ suffices. We find α_s(m_Z) = 0.1135 ± (0.0002)_expt ± (0.0005)_hadr ± (0.0009)_pert, with χ²/dof = 0.91, where the displayed 1-sigma errors are the total experimental error, the hadronization uncertainty, and the perturbative theory uncertainty, respectively. In addition, we consider cumulants of the thrust distribution using predictions of the full thrust spectrum. From a global fit to the first thrust moment we extract α_s(m_Z) and Ω₁, obtaining α_s(m_Z) = 0.1140 ± (0.0004)_exp ± (0.0013)_hadr ± (0.0007)_pert, which is compatible with the value from our tail-region fit. The n-th thrust cumulants for n ≥ 2 are completely insensitive to Ω₁, and are therefore a good instrument for extracting information on higher-order power corrections, Ω′ₙ/Qⁿ, from moment data. We find (Ω̃₂)^{1/2} = 0.74 ± (0.11)_exp ± (0.09)_pert GeV. Second, we study the differential cross section dσ/dx of e⁺e⁻ collisions producing a heavy hadron with energy fraction x of the beam energy in the center-of-mass frame. Using a sequence of effective field theories, we give a definition of the heavy quark fragmentation function in the endpoint region x → 1. From the perspective of our effective field theory approach, we revisit the heavy quark fragmentation function away from the endpoint and outline how to develop a description valid for all x. Our analysis focuses on Z-boson decays producing one B meson. Finally, we give a short outlook on how we intend to apply our approach to determine the leading nonperturbative power corrections to the b-quark fragmentation function from LEP experiments.
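For reference, thrust is defined as T = max over unit axes n̂ of Σᵢ|p⃗ᵢ·n̂| / Σᵢ|p⃗ᵢ|. A rough numerical sketch with invented momenta, using a crude random scan over candidate axes rather than an exact maximization:

```python
import numpy as np

# Hedged sketch: the thrust of an e+e- final state is
#   T = max_n sum_i |p_i . n| / sum_i |p_i|,
# maximized over unit axes n. A random-axis scan is a crude approximation,
# adequate for illustration only; the momenta below are invented.
def thrust(momenta: np.ndarray, n_axes: int = 20000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    axes = rng.normal(size=(n_axes, 3))
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    projections = np.abs(momenta @ axes.T).sum(axis=0)  # sum_i |p_i . n| per axis
    return projections.max() / np.linalg.norm(momenta, axis=1).sum()

# Pencil-like three-particle event (GeV); expect T close to 1, tau = 1 - T small.
p = np.array([[45.0, 1.0, 0.5], [-30.0, -0.5, 0.2], [-15.0, -0.5, -0.7]])
T = thrust(p)
print(f"T = {T:.4f}, tau = {1 - T:.4f}")
```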
7. Precision absolute frequency laser spectroscopy of argon II in parallel and antiparallel geometry using a frequency comb for calibration. Lioubimov, Vladimir (14 January 2010)
A collinear fast-ion-beam laser apparatus was constructed and tested. It will be used on-line at the SLOWRI radioactive beam facility at RIKEN (Japan) and, as in the present experiment, for precision absolute frequency measurements of astrophysically important reference lines. In the current work we conducted absolute measurements of spectral lines of Ar ions using parallel and antiparallel geometries. To provide a reference for the laser wavelength, iodine saturation spectroscopy was used. The precision of this reference was enhanced by simultaneously observing the beat note between the spectroscopy laser and the corresponding mode of a femtosecond laser frequency comb. When performing collinear and anticollinear measurements of the laser-induced fluorescence simultaneously, the exact relativistic formula for the transition frequency, ν₀ = √(ν_coll · ν_anticoll), can be applied. In this geometry, ion-source instabilities due to pressure and anode-voltage fluctuations are minimized. The procedure for fitting the fluorescence lineshapes is discussed and the measurement errors are estimated. The result is ν₀ = 485,573,619.7 ± 0.3 MHz, corresponding to Δν/ν = 6 × 10⁻¹⁰, an improvement of two orders of magnitude over the NIST published value.
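The geometric-mean formula is exact because the collinear and anticollinear Doppler factors γ(1 + β) and γ(1 − β) multiply to one. A quick numerical check, with illustrative beam velocities (the β values are assumptions, not from the thesis):

```python
import math

# Hedged sketch: numeric check that the geometric mean of the collinear and
# anticollinear resonance frequencies recovers the rest-frame transition
# frequency exactly, independent of the ion velocity.
def doppler(nu0_mhz: float, beta: float) -> tuple[float, float]:
    """Lab-frame laser frequencies resonant with a co-/counter-propagating ion beam."""
    factor = math.sqrt((1 + beta) / (1 - beta))
    return nu0_mhz * factor, nu0_mhz / factor   # (collinear, anticollinear)

nu0 = 485_573_619.7                  # MHz, the measured Ar II line above
for beta in (1e-4, 1e-3, 5e-3):      # assumed example beam velocities
    nu_coll, nu_anti = doppler(nu0, beta)
    recovered = math.sqrt(nu_coll * nu_anti)
    print(f"beta={beta:.0e}: recovered nu0 = {recovered:,.1f} MHz")
```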
8. Jet Definitions in Effective Field Theory and Decaying Dark Matter. Cheung, Man Yin (10 December 2012)
In this thesis jet production and cosmological constraints on decaying dark matter are studied. The powerful framework of effective field theory is applied in both cases to further our knowledge of particle physics.
We first discuss how to apply the Soft Collinear Effective Theory (SCET) to calculating hadronic jet production rates. By applying SCET power counting, we develop a consistent approach to performing phase space integrations, which we then successfully apply to one-loop calculations for a variety of jet algorithms. This allows us to study whether the soft contribution can be factorized from the collinear ones. In particular, we point out the connection between such factorization and the choice of ultraviolet regulator.
We then further our study of the (exclusive) k_T and C/A jet algorithms in SCET with the introduction of an additional regulator. Regularizing the virtualities and rapidities of graphs in SCET, we are able to write the next-to-leading-order dijet cross section as the product of separate hard, jet, and soft contributions. We show how to reproduce the Sudakov form factor to next-to-leading-logarithmic accuracy, as previously calculated by the coherent branching formalism. Our resummed expression depends only on the renormalization group evolution of the hard function, rather than on that of the hard and jet functions as is usual in SCET.
Finally, we present a complete analysis of the cosmological constraints on decaying dark matter. For this, we have updated and extended previous analyses to include Lyman-alpha forest, large scale structure, and weak lensing observations. Astrophysical constraints are not considered in this thesis. The bounds on the lifetime of decaying dark matter are dominated either by the late-time integrated Sachs-Wolfe effect for the scenario with weak reionization, or by CMB polarisation observations when there is significant reionization. For the respective scenarios, the lifetime of decaying dark matter is constrained by Γ⁻¹ > 100 Gyr and (f·Γ)⁻¹ > 5.3 × 10⁸ Gyr (at 95.4% confidence level), where the phenomenological parameter f is the fraction of decay energy deposited into the baryonic gas. This allows us to constrain particle physics models with dark matter candidates by analyzing the effective operators responsible for dark matter decays into Standard Model particles.
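Converting the quoted lifetime bounds into decay rates is a one-line calculation; a minimal sketch:

```python
# Hedged sketch: converting the quoted lifetime bounds into decay rates.
GYR_IN_SECONDS = 1e9 * 365.25 * 24 * 3600   # ~3.156e16 s per Gyr

def rate_bound(lifetime_gyr: float) -> float:
    """Upper bound on the decay rate (s^-1) from a lower bound on the lifetime."""
    return 1.0 / (lifetime_gyr * GYR_IN_SECONDS)

print(f"Gamma   < {rate_bound(100):.2e} s^-1  (weak reionization)")
print(f"f*Gamma < {rate_bound(5.3e8):.2e} s^-1  (significant reionization)")
```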
9. Automation of calculations in soft-collinear effective theory. Rahn, Rudi Michael (January 2016)
Theoretical predictions for generic multi-scale observables in Quantum Chromodynamics (QCD) typically suffer from large Sudakov logarithms associated with the emission of soft or collinear radiation, whose presence spoils the perturbative expansion in the coupling strength that underlies most calculations in QCD. A canonical way to improve predictions wherever these logarithms appear is to resum them to all perturbative orders, which can conveniently be achieved using Effective Field Theory (EFT) methods. In an age of increasing automation using computers, this task is still mostly performed manually, observable by observable. In this thesis we identify the 2-loop soft function as a crucial ingredient for the resummation of QCD Sudakov logarithms to next-to-next-to-leading logarithmic (NNLL) accuracy in Soft-Collinear Effective Theory (SCET), for wide classes of observables involving two massless, colour-charged, energetic particles, such as dijet event shapes at lepton colliders or colour-singlet production at hadron colliders. We develop a method to evaluate these soft functions numerically, based on sector decomposition and the choice of a convenient phase-space parametrisation. This allows the factorisation of all implicit (real-emission) and explicit (virtual-correction) divergences, made manifest by dimensional and analytic regularisation. The regulator pole coefficients can then be evaluated numerically following a subtraction and expansion, and two computational tools are presented to perform these numerical integrations, one based on publicly available tools, the other on our own code. Some technical improvements over straightforward numerical evaluation are demonstrated and implemented. This allows us to compute and verify two of the three colour structures of the 2-loop bare soft functions for wide ranges of observables with a factorisation theorem. A number of example results, both new and already known, are shown to demonstrate the reach of this approach, and a few possible extensions are sketched. This thesis therefore represents a crucial step towards the automation of resummation for generic observables to NNLL accuracy in SCET.
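The subtraction-and-expansion step mentioned above can be illustrated on a one-dimensional toy integral with an analytic regulator ε; the test function below is an arbitrary stand-in for a real integrand, not one of the thesis's soft functions:

```python
import math
from scipy.integrate import quad

# Hedged sketch of pole subtraction on a toy integral. With an analytic
# regulator eps, I(eps) = int_0^1 dx x^(-1+eps) f(x) has the Laurent
# expansion f(0)/eps + int_0^1 dx [f(x) - f(0)]/x + O(eps): subtracting the
# x -> 0 limit isolates the pole and leaves a finite remainder that can be
# integrated numerically. f is an arbitrary smooth test function.
def f(x: float) -> float:
    return 1.0 / (1.0 + x)

pole_coefficient = f(0.0)                                     # coefficient of 1/eps
finite_part, _ = quad(lambda x: (f(x) - f(0.0)) / x, 0.0, 1.0)

print(f"pole coefficient: {pole_coefficient:.4f}")            # 1.0000
print(f"finite part:      {finite_part:.4f}")                 # -0.6931
print(f"analytic check:   {-math.log(2.0):.4f}")              # -ln 2
```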