231

Monte Carlo dose calculations in advanced radiotherapy

Bush, Karl Kenneth 15 September 2009 (has links)
The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations to model charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations requires a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. Performing such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practice. In addition, the output of the accelerator head simulation can be highly sensitive to aspects of the model that may not be known with sufficient accuracy. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method, the intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity. The second component of this dissertation presents the design, implementation and evaluation of a technique for reducing the latent variance inherent in the recycling of phase space particle tracks in a simulation. In this technique a random azimuthal rotation about the beam's central axis is applied to each recycled particle, achieving a significant reduction of the latent variance. In a third component, the dissertation presents the first MC modeling of Varian's new RapidArc delivery system and a comparison of dose calculations with the Eclipse treatment planning system. A total of four arc plans are compared, including an oropharynx patient phantom containing tissue inhomogeneities. Finally, in a step toward introducing MC dose calculation into the planning of treatments such as RapidArc, a technique is presented for feasibly generating and storing a large set of MC-calculated dose distributions. A novel 3-D dyadic multi-resolution (MR) decomposition algorithm is presented and the compressibility of the dose data using this algorithm is investigated. The presented MC beamlet generation method, in conjunction with the 3-D MR decomposition, represents a viable means to introduce MC dose calculation into the planning and optimization stages of advanced radiotherapy.
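The azimuthal-rotation technique in the second component lends itself to a short illustration. The sketch below is not the dissertation's implementation (which operates on accelerator phase-space files); the array-based interface is an assumption made purely for illustration, and it relies on the beam being approximately cylindrically symmetric at the phase-space plane:

```python
import numpy as np

def rotate_recycled_particles(x, y, px, py, rng=None):
    """Apply an independent random azimuthal rotation about the beam's
    central (z) axis to each recycled phase-space particle.

    x, y   : transverse positions
    px, py : transverse momentum (or direction-cosine) components
    Longitudinal quantities (z, pz, energy, weight) are invariant under
    the rotation and are left untouched by the caller.
    """
    rng = np.random.default_rng() if rng is None else rng
    phi = rng.uniform(0.0, 2.0 * np.pi, size=len(x))  # one angle per particle
    c, s = np.cos(phi), np.sin(phi)
    # Rotate positions and transverse momenta by the same angle so each
    # particle's state remains physically consistent.
    x_new, y_new = c * x - s * y, s * x + c * y
    px_new, py_new = c * px - s * py, s * px + c * py
    return x_new, y_new, px_new, py_new
```

Because each recycled copy of a track then samples a different azimuth, the copies are decorrelated, which is how the variance floor set by the finite number of original tracks is lowered.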
232

Kicking at the darkness: detecting deeply embedded protostars at 1–10 μm

Maxwell, Aaron J. 03 November 2010 (has links)
We present an analysis of observations, using the Spitzer Space Telescope and the James Clerk Maxwell Telescope, of deeply embedded protostars in the Perseus Giant Molecular Cloud. Building on the results of Jørgensen et al. (2007), we attempt to characterize the physical properties of these deeply embedded protostars, discovered through their extremely red near-infrared colours and their proximity to protostellar cores detected at 850 μm. Using a grid of radiative transfer models by Robitaille et al. (2006), we fit the observed fluxes of each source and build statistical descriptions of the best fits. We also use simple one-dimensional analytic approximations to the protostars in order to determine the physical size and mass of the protostellar envelope, and use these 1D models to provide a goodness-of-fit criterion when considering the model grid fits to the Perseus sources. We find that it is possible to create red [3.6]-[4.5] and [8.0]-[24] colours by inflating the inner envelope radius, as well as by observing embedded protostars through their bipolar outflows. The majority of the deeply embedded protostars, however, are well fit by models seen at intermediate inclinations with outflow cavity opening angles < 30°, where scattering of photons off the cavity walls produces the red colours. We also discuss other results of the SED fitting.
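The grid-fitting step described above follows a standard pattern. The sketch below is not the Robitaille et al. fitting tool; the band set, array names and plain chi-squared statistic are assumptions for illustration only:

```python
import numpy as np

def fit_model_grid(obs_flux, obs_err, model_fluxes):
    """Rank a grid of model SEDs against an observed SED.

    obs_flux, obs_err : arrays of length n_bands (e.g. IRAC, MIPS, 850 um)
    model_fluxes      : (n_models, n_bands) array of model fluxes,
                        already scaled to the source distance
    Returns model indices sorted from best to worst chi-squared, and the
    corresponding chi-squared values.
    """
    chi2 = np.sum(((model_fluxes - obs_flux) / obs_err) ** 2, axis=1)
    order = np.argsort(chi2)
    return order, chi2[order]

# Toy usage with made-up numbers: 4 photometric bands, 3 candidate models.
obs = np.array([1.2, 3.4, 10.0, 250.0])       # mJy, illustrative only
err = 0.1 * obs
grid = np.array([[1.0, 3.0, 9.0, 240.0],
                 [0.5, 1.0, 2.0, 100.0],
                 [2.0, 6.0, 20.0, 400.0]])
best, chi2 = fit_model_grid(obs, err, grid)
print(best[0], chi2[0])  # index and chi-squared of the best-fitting model
```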
233

The impact of strongly interacting relics on big bang nucleosynthesis

Sharman, Jonathan William 17 December 2010 (has links)
We study the impact of long-lived strongly interacting particles on primordial nuclear abundances. In particular, we look at the case of anti-squark-quark bound states called mesinos. These mesinos are similar to massive nucleons in that they have the same spin and isospin. Like nucleons, the mesinos take part in nucleosynthesis and are bound into nuclei. We incorporate the mesinos into the various stages of BBN, from the QCD phase transition, to their capture of nucleons, to their eventual decay. We identify the mechanisms by which the mesinos could impact primordial abundances and show which actually do so. We find that, for the predicted mesino abundance, only one mechanism exists that has the potential to generate an observable signature.
234

Development of a multiplexing biosensor platform using SERS particle immunoassay technology

Kumarswami, Neelam January 2014 (has links)
The purpose of this study is to demonstrate the ability of surface enhanced Raman scattering (SERS) active particles to enable multiplexed immunoassays in a lateral flow format for point of care (POC) testing. The SERS particles used in this study are chemically active, glass-coated gold particles containing tracer molecules which, in principle, can be chosen to provide Raman spectra with unique features, allowing multiple tracers to be measured simultaneously and distinguished without mutual interference. Lateral flow immunoassay technology is a central part of this study and can be conveniently packaged for use outside the laboratory by operators who are not highly skilled technicians. The well-known single-channel (simplex) pregnancy test device is a typical example of a lateral flow assay. Similar formats have been, or are being, developed by others for a range of POC applications, but most diagnostic applications require the simultaneous determination of a range of biomarkers, and multiplexed assays are difficult to achieve without significant interference between the individual assays. This is where SERS particles may provide advantages over existing techniques. Cardiac markers are a growing market for point of care technology, so biomarkers of cardiac injury (troponin, myoglobin and CRP) were chosen as a model. The objective of the study is to establish a proof-of-concept multiplexed assay using these chosen biomarkers. Initially, the different particles were characterised individually and in mixtures. The conjugation chemistry between the SERS particles and commercially sourced antibodies for each analyte was then developed and analysed under different conditions (buffer, pH and antibody loading concentration) to obtain optimum intensity. The selected SERS particles and their conjugates were tested for size, aggregation and immunoreactivity using a range of techniques: ultraviolet-visible (UV/Vis) absorption spectroscopy, dynamic light scattering (DLS) and lateral flow assay. These characterisation methodologies provided an understanding of the optimum conditions for each conjugate and of each conjugate's behaviour in mixtures. After characterisation, all conjugates were tested singly in lateral flow assays using buffer and serum; the results of these single-analyte immunoassays established each conjugate's bioactivity on the lateral flow strip. A multiplexed assay was then demonstrated in serum, characterising each candidate's behaviour in a mixture on the lateral flow strip. To obtain the optimum Raman intensity from the multiplexed assay, the loading concentrations of the detection and capture antibodies were tuned. Different combinations (high, medium and low concentrations) of all three analytes were then analysed, and some interference was found in the multiplexed assay. To investigate these issues, several aspects were considered. First, possible non-specific interactions between the co-analytes and antibodies were tested. In addition, steric hindrance and optical interference were investigated through several assays and scanning electron microscopy analysis; the outcomes confirmed interference of optical origin. A further assay (wound biomarkers) was therefore established to eliminate the interference.
In summary, the work reported here built and tested the equipment and the reagents necessary for the individual assays before moving on to the more complicated task. The study as a whole has provided detailed knowledge of multiplexed assays on a single test line, including an investigation of the issues for the selected cardiac biomarkers and of their future applications.
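The abstract does not specify how overlapping tracer spectra are separated, so the following is only a generic illustration of one standard approach: non-negative least-squares unmixing of a measured multiplexed spectrum against reference spectra of the individual SERS tracers. The function names, synthetic spectra and choice of algorithm are assumptions, not details from the thesis:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_spectrum(mixed, references):
    """Estimate the contribution of each SERS tracer to a measured spectrum.

    mixed      : (n_wavenumbers,) measured Raman spectrum of the test line
    references : (n_tracers, n_wavenumbers) reference spectra of the tracers
    Returns non-negative weights, one per tracer.
    """
    weights, _residual = nnls(references.T, mixed)
    return weights

# Toy example with three synthetic, partially overlapping tracer spectra.
x = np.linspace(0.0, 1.0, 200)
refs = np.vstack([np.exp(-((x - c) / 0.05) ** 2) for c in (0.3, 0.5, 0.7)])
true_w = np.array([2.0, 0.5, 1.0])
measured = true_w @ refs + 0.01 * np.random.default_rng(0).normal(size=x.size)
print(unmix_spectrum(measured, refs))   # approximately [2.0, 0.5, 1.0]
```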
235

Correlation Analysis of Calcium Signalling Networks in Living Cells

Nilsson, Erik January 2008 (has links)
In living cells, the calcium ion (Ca2+) plays an important role as an intracellular second messenger. It mediates the regulation of cellular processes such as gene expression and the initiation of vesicle fusion in synapses, drives muscle contraction, and is believed to play a fundamental role in synaptic plasticity as a molecular substrate for learning. Ca2+ signals rely on the fact that the concentration of Ca2+ in the cytosol is four orders of magnitude lower than in the extracellular fluid and in intracellular compartments such as the endoplasmic reticulum (ER). This enables fast increases in the cytosolic concentration, which is then regulated back to its resting level by several mechanisms. In this project, the connection between Ca2+ signals of different cells was analysed using different correlation techniques: cross-correlation of continuous signals and of digitised signals. A software tool was therefore developed in MATLAB that takes Ca2+ recordings from time-lapse fluorescence microscopy as input and calculates the pairwise correlation for all cells. The software was tested using data from previous experiments with mouse (mES) and human (hES) embryonic stem cells, as well as data from recordings made as part of the project. The study shows that the mathematical method of cross-correlation can successfully be applied to quantitative and qualitative analysis of Ca2+ signals. Furthermore, strongly correlated cells exist in colonies of both mES and hES cells. We suggest that the synchronisation is achieved by physical coupling, which implies that correlation decreases with increasing distance for strongly correlated cells. In addition, the lag at which the cross-correlation peaks (an effective phase shift) decreases as the correlation coefficient increases and, for high correlation coefficients, increases with intercellular distance. Interestingly, the number of cells included in small-scale clusters of strongly correlated cells is significantly larger for differentiating mES cells than for proliferating mES cells. In a broader perspective, the developed software might be used, for instance, in the analysis of cellular electrical activity, and the study demonstrates the relevance of applying methods from the exact sciences to biology.
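The thesis tool was written in MATLAB; purely as an illustration of the underlying computation, here is a small Python/NumPy sketch of the pairwise peak cross-correlation (the function names and the lag window are assumptions, not the thesis code):

```python
import numpy as np

def max_crosscorr(a, b, max_lag):
    """Return (best_r, best_lag): the largest Pearson correlation between
    two Ca2+ traces over lags in [-max_lag, max_lag], and the lag (in
    frames) at which it occurs. Positive lag shifts `b` later than `a`."""
    best_r, best_lag = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[: len(a) - lag], b[lag:]
        else:
            x, y = a[-lag:], b[: len(b) + lag]
        r = np.corrcoef(x, y)[0, 1]   # Pearson r of the overlapping segments
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, best_lag

def pairwise_matrix(traces, max_lag=20):
    """Pairwise peak correlations for an (n_cells, n_frames) array."""
    n = len(traces)
    R = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            R[i, j] = R[j, i] = max_crosscorr(traces[i], traces[j], max_lag)[0]
    return R
```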
236

Investigation of the Addition of Basalt Fibres into Cement

Palme, Jahi 01 May 2014 (has links)
Mechanical properties of concrete are most commonly determined using destructive tests, including compression, flexure, and fracture notch specimen tests. However, nondestructive tests, such as ultrasonic pulse velocity and impact echo tests, also exist for evaluating the properties of concrete. One of the major issues with concrete (which has cement as its prime ingredient) is that, unlike steel, it is a quasi-brittle material: it tends to crack when tensile stresses develop. Fibres have been added to concrete for many years to reduce the number and size of cracks caused by temperature changes or shrinkage. In more recent years, significant research has been carried out into the effect that the addition of basalt fibres to cement has on its mechanical strength, and into developing concrete that is more durable, flexible, stronger, and less permeable than traditional concrete. It has become important to test and verify the improvements that basalt fibres make to the cement, as well as to test the general ability of concrete to withstand constant pressure at varied levels.
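As a side note on the nondestructive tests mentioned above, the ultrasonic pulse velocity method infers stiffness from the speed of a pulse through the specimen. A minimal sketch of the standard dynamic-modulus relation follows; the density, velocity and Poisson's ratio values are illustrative, not measurements from this work:

```python
def dynamic_modulus(density_kg_m3, pulse_velocity_m_s, poisson_ratio):
    """Dynamic modulus of elasticity (Pa) from ultrasonic pulse velocity,
    using E_d = rho * v^2 * (1 + nu) * (1 - 2*nu) / (1 - nu)."""
    rho, v, nu = density_kg_m3, pulse_velocity_m_s, poisson_ratio
    return rho * v**2 * (1 + nu) * (1 - 2 * nu) / (1 - nu)

# Illustrative values for ordinary concrete (not data from this study).
E_d = dynamic_modulus(2400.0, 4200.0, 0.2)
print(f"E_d = {E_d / 1e9:.1f} GPa")   # roughly 38 GPa for these inputs
```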
237

Plasma vertical position control in the COMPASS–D tokamak

Vyas, Parag January 1996 (has links)
The plasma vertical position system on the COMPASS–D tokamak is studied in this thesis. An analogue P+D controller is used to regulate the plasma vertical position, which is open-loop unstable. Measurements from inside the vessel are used for the derivative component of the control signal and external measurements for the proportional component. Two main sources of disturbances are observed on COMPASS–D. One source is 600 Hz noise from the thyristor power supplies, which causes large oscillations at the control amplifier output. The other source is impulse-like disturbances due to ELMs (Edge Localized Modes), which can occasionally lead to loss of control when the control amplifier saturates. Models of the plasma open-loop dynamics were obtained using the process of system identification: experimental data are used to fit the coefficients of a mathematical model. The frequency response of the model is strongly dependent on the shape of the plasma. The effect of shielding by the vessel wall on external measurements, compared with internal measurements, is also observed. The models were used to predict values of gain margins and phase crossover frequencies, which were found to be in good agreement with measured values. The harsh reactor conditions on the proposed ITER tokamak preclude the use of internal measurements. On COMPASS–D the stability margins of the loop decrease when using only external flux loops. High-order controllers were designed to stabilize the system using only external measurements and to reduce the effect of the 600 Hz noise on the control amplifier voltage. The controllers were tested on COMPASS–D and demonstrated the improved performance of high-order controllers over the simple P+D controller. ELMs cause impulse-like disturbances of the plasma position. The optimal controller minimizing the peak of the impulse response can be calculated analytically for COMPASS–D. A multiobjective controller that combines a small peak impulse response with robust stability and noise attenuation can be obtained using a numerical search.
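To make the control problem concrete, the following toy simulation shows a P+D law stabilising an open-loop-unstable single-pole plant. It is only a sketch: the model and every numerical value are invented for illustration and are not COMPASS–D parameters.

```python
# Toy model of the vertical instability: dz/dt = gamma*z + b*u, closed with
# a P+D law u = -(Kp*z + Kd*dz/dt). All numbers are assumed for illustration.
gamma, b = 500.0, 1.0e4      # growth rate (1/s) and actuator coupling (assumed)
Kp, Kd = 0.2, 5.0e-5         # proportional and derivative gains (assumed)
dt, n = 1.0e-5, 2000         # Euler time step (s) and number of steps

z_open = 1.0e-3              # open loop: a 1 mm displacement just grows
z, z_prev = 1.0e-3, 1.0e-3   # closed loop: same initial displacement
for _ in range(n):
    z_open += dt * gamma * z_open          # grows without bound (toy model ignores saturation)
    dz_dt = (z - z_prev) / dt              # derivative from successive samples
    u = -(Kp * z + Kd * dz_dt)             # P+D control law
    z_prev, z = z, z + dt * (gamma * z + b * u)

print(f"after {n*dt*1e3:.0f} ms: open loop {z_open:.2e} m, with P+D {z:.2e} m")
```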
238

Statistical Spectral Parameter Estimation of Acoustic Signals with Applications to Byzantine Music

Tsiappoutas, Kyriakos Michael 17 December 2011 (has links)
Digitized acoustical signals of Byzantine music performed by Iakovos Nafpliotis are used to extract the fundamental frequency of each note of the diatonic scale. These empirical results are then contrasted to the theoretical suggestions and previous empirical findings. Several parametric and non-parametric spectral parameter estimation methods are implemented. These include: (1) Phase vocoder method, (2) McAulay-Quatieri method, (3) Levinson-Durbin algorithm, (4) YIN, (5) Quinn & Fernandes Estimator, (6) Pisarenko Frequency Estimator, (7) MUltiple SIgnal Characterization (MUSIC) algorithm, (8) Periodogram method, (9) Quinn & Fernandes Filtered Periodogram, (10) Rife & Vincent Estimator, and (11) the Fourier transform. Algorithm performance was very precise. The psychophysical aspect of human pitch discrimination is explored. The results of eight (8) psychoacoustical experiments were used to determine the aural just noticeable difference (jnd) in pitch and deduce patterns utilized to customize acceptable performable pitch deviation to the application at hand. These customizations [Acceptable Performance Difference (a new measure of frequency differential acceptability), Perceptual Confidence Intervals (a new concept of confidence intervals based on psychophysical experiment rather than statistics of performance data), and one based purely on music-theoretical asymphony] are proposed, discussed, and used in interpretation of results. The results suggest that Nafpliotis' intervals are closer to just intonation than Byzantine theory (with minor exceptions), something not generally found in Thrasivoulos Stanitsas' data. Nafpliotis' perfect fifth is identical to the just intonation, even though he overstretches his octave by fifteen (15) cents. His perfect fourth is also more just, as opposed to Stanitsas' fourth which is directionally opposite. Stanitsas' tendency to exaggerate the major third interval A4-F4 is still seen in Nafpliotis, but curbed. This is the only noteworthy departure from just intonation, with Nafpliotis being exactly Chrysanthian (the most exaggerated theoretical suggestion of all) and Stanitsas overstretching it even more than Nafpliotis and Chrysanth. Nafpliotis ascends in the second tetrachord more robustly diatonically than Stanitsas. The results are reported and interpreted within the framework of Acceptable Performance Differences.
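As a flavour of the simpler end of the estimator list, a periodogram-based fundamental-frequency estimate can be sketched in a few lines; this is not the thesis code, and the window, padding factor and search band are arbitrary choices. The cents conversion used when comparing intervals is included as well:

```python
import numpy as np

def f0_periodogram(x, fs, fmin=60.0, fmax=1000.0):
    """Crude fundamental-frequency estimate: pick the strongest peak of a
    zero-padded, Hann-windowed periodogram inside a plausible vocal range."""
    x = x - np.mean(x)
    n = 8 * len(x)                          # zero-padding refines the frequency grid
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)), n)) ** 2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spec[band])]

def cents(f1, f2):
    """Interval size in cents between two frequencies."""
    return 1200.0 * np.log2(f1 / f2)

# Synthetic check: a 220 Hz tone with a few harmonics, sampled at 44.1 kHz.
fs = 44100
t = np.arange(0, 0.2, 1.0 / fs)
tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in (1, 2, 3))
print(f"estimated f0 = {f0_periodogram(tone, fs):.1f} Hz")   # ~220 Hz
print(f"442 Hz vs 440 Hz = {cents(442.0, 440.0):+.1f} cents")
```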
239

Numerical methods for design of the transfer line of the ESSnuSB project: Independent Project in Engineering Physics

Boholm Kylesten, Karl-Fredrik January 2019 (has links)
The ESS neutrino Super Beam (ESSnuSB) is a project that aims to create a high-energy beam of neutrinos and anti-neutrinos to study the phenomenon of neutrino oscillation and learn more about symmetry violations in quantum mechanics. To create the neutrino beam, negative hydrogen ions must be transported from the ESS linear accelerator, at 2.5 GeV, to a proton accumulation ring. This is done through a transfer line, which must steer the ion beam while preserving its quality as much as possible. In this project, an attempt was made to find a design for this transfer line. Preferably, the line consists of a long main line of FODO cells and two matching sections, one at each end. A simulation of the beam was performed that gives the progression of the beta and dispersion functions, statistical measures of the particle distribution, through part of the transfer line. A design for the main line was found. For tuning the quadrupole magnets, an iterative method using the system's response matrix was used. However, it could not match more than four parameters at a time, while six were required for complete matching; because of this, it is not able to match the dispersion.
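The response-matrix tuning mentioned above follows a generic Gauss-Newton pattern. In the sketch below, `optics()` is a placeholder for the actual beam-line simulation, and obtaining the response matrix by finite differences is an assumption made for illustration rather than this project's exact procedure:

```python
import numpy as np

def tune_quads(optics, k, targets, n_iter=20, dk=1e-4):
    """Iteratively adjust quadrupole strengths `k` so that optics(k)
    approaches `targets` (e.g. beta_x, beta_y, alpha_x, alpha_y, D_x, D_x'
    at the match point). With fewer knobs than targets, the pseudo-inverse
    only yields a least-squares compromise, mirroring the matching
    limitation described above."""
    k = np.asarray(k, dtype=float)
    for _ in range(n_iter):
        residual = targets - optics(k)
        # Finite-difference response matrix: d(parameters)/d(quad strengths).
        R = np.column_stack([
            (optics(k + dk * e) - optics(k)) / dk
            for e in np.eye(len(k))
        ])
        k = k + np.linalg.pinv(R) @ residual      # Gauss-Newton style update
    return k
```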
240

Laser-Induced Recoverable Surface Patterning on Ni50Ti50 Shape Memory Alloys

Ilhom, Saidjafarzoda 01 July 2018 (has links)
Shape memory alloys (SMAs) are a unique class of smart materials exhibiting extraordinary properties with a wide range of applications in engineering, biomedical, and aerospace technologies. In this study, an advanced, efficient, low-cost, and highly scalable laser-assisted imprinting method with low environmental impact is reported for creating thermally controllable surface patterns. Two different imprinting methods were carried out, mainly on Ni50Ti50 (at. %) SMAs, using a nanosecond pulsed Nd:YAG laser operating at 1064 nm wavelength and 10 Hz repetition rate. First, laser pulses at selected fluences were focused directly on the NiTi surface, which generated pressure pulses of up to a few gigapascals (GPa) and thus created micro-indents. Second, a suitable transparent overlay serving as a confining medium, a sacrificial layer, and a mesh grid were placed on the NiTi sample, after which the laser was focused through the confinement medium, ablating the sacrificial layer to create plasma and pressure, and thus pushing and transferring the grid pattern onto the sample. Scanning electron microscope (SEM) and laser profiler images show that surface patterns with tailorable sizes and high fidelity could be obtained. The depth of the patterns was shown to increase, and later level off, with increasing laser power and irradiation time. Upon heating, the depth profile of the imprinted SMA surfaces changed, with a maximum depth recovery ratio of 30 % observed. The recovery ratio decreased and saturated at about 15 % when the number of pulses was increased. A numerical simulation of the laser irradiation process was performed, showing that considerably high pressure and temperature can be generated depending on the laser fluence. The stress wave closely followed the rise of the laser pulse to its peak value, followed by rapid attenuation and dispersion of the stress through the sample.
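For context on the "few gigapascals" figure, peak pressures in the confined ablation regime are commonly estimated with the Fabbro et al. (1990) scaling. The sketch below uses that scaling with assumed acoustic impedances and laser intensity, not values taken from this thesis:

```python
import math

def confined_peak_pressure_gpa(intensity_gw_cm2, z1, z2, alpha=0.25):
    """Peak pressure (GPa) in the confined regime, using the Fabbro et al.
    scaling P = 0.01 * sqrt(alpha/(2*alpha+3)) * sqrt(Z) * sqrt(I).

    z1, z2 : acoustic impedances of target and confining overlay (g cm^-2 s^-1)
    alpha  : fraction of absorbed energy converted to plasma thermal energy
    """
    z = 2.0 / (1.0 / z1 + 1.0 / z2)      # reduced acoustic impedance
    return 0.01 * math.sqrt(alpha / (2 * alpha + 3)) * math.sqrt(z) * math.sqrt(intensity_gw_cm2)

# Illustrative inputs only: impedances of order 1e6 g cm^-2 s^-1 assumed for
# the alloy and a glass overlay, and an intensity of 2 GW/cm^2.
print(f"{confined_peak_pressure_gpa(2.0, 3.2e6, 1.4e6):.1f} GPa")
```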
