1041
Passive Seismic Tomography and Seismicity Hazard Analysis in Deep Underground Mines. Ma, Xu. 05 February 2015 (has links)
Seismic tomography is a promising tool for understanding and evaluating the stability of a rock mass in mining excavations. Laboratory measurements show that seismic wave velocities increase in highly stressed regions of rock samples: it is well known that the closure of cracks under compressive stress tends to increase the effective elastic moduli of rocks. Tomography can map stress transfer and redistribution, and can thereby help forecast rock burst potential and other mining-influenced seismic hazards. Arrival times of seismic waves and locations of seismic events, recorded by seismic networks in multiple underground mines, are used as the inputs to the tomographic imaging surveys. An initial velocity model is established from the properties of the rock mass, and the velocity structure is then reconstructed by velocity inversion to reveal anomalies in the rock mass. Mining-induced seismicity and double-difference tomographic images of the rock mass in mining areas are coupled to show how stress changes with microseismic activity. In particular, velocity structures from different periods (before and after a rock burst) are compared to analyze the effects of the rock burst on the stress distribution. Tomographic results show that high-velocity anomalies form in the vicinity of a rock burst before its occurrence, and that velocity drops significantly afterwards. In addition, regression analysis of travel time against distance indicates that the average velocity over the whole monitored region tends to increase before rock bursts and to decrease after them. A reasonable explanation is that rock bursts tend to be triggered in highly stressed rock masses; after the energy release of a rock burst, stress relief is expected within the rock mass.
Average velocity decreases significantly because of the lower stresses and because of fractures in the rock mass generated by shaking-induced damage from nearby rock burst zones. The mining-induced microseismic rate is positively correlated with stress level. The fact that highly concentrated seismicity tends to be located on the margins between high-velocity and low-velocity regions indicates that high seismic rates accompany high stress in rock masses. Statistical analyses were performed on the aftershock sequence to generate an aftershock decay model for detecting potential hazards and evaluating the stability of aftershock activity. / Ph. D.
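The regression of travel time against distance described above can be sketched as follows. The event data here are synthetic placeholders, not values from the study, and the assumed 5500 m/s background velocity is illustrative only: fitting t = d/v + t0 gives the average velocity as the reciprocal of the slope.

```python
import numpy as np

def average_velocity(distances_m, travel_times_s):
    """Estimate average seismic velocity from a least-squares fit of
    travel time against source-receiver distance: t = d / v + t0.
    The slope of the fit is 1/v, so v = 1/slope."""
    slope, intercept = np.polyfit(distances_m, travel_times_s, 1)
    return 1.0 / slope

# Synthetic example: arrivals consistent with 5500 m/s plus small pick noise.
rng = np.random.default_rng(0)
d = rng.uniform(100.0, 2000.0, size=200)          # distances in metres
t = d / 5500.0 + rng.normal(0.0, 1e-4, size=200)  # travel times in seconds
v = average_velocity(d, t)
print(round(v))  # close to 5500 m/s
```

A rise in this fitted average velocity over successive time windows would correspond to the pre-burst stress increase the abstract describes.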
1042
Utilizing pQCT and Biomarkers of Bone Turnover to Study Influences of Physical Activity or Bariatric Surgery on Structural and Metabolic Status of Bone. Creamer, Kyle William. 03 September 2014 (has links)
Bone health in the context of two common maladies, osteoporosis and obesity, has spurred research in the area of physical activity (PA) and bariatric surgery (BarS).
Objectives: To examine: 1) relationships between PA and the skeleton utilizing peripheral Quantitative Computed Tomography (pQCT) and Dual-energy X-ray Absorptiometry (DXA) in pre-menopausal women; 2) effects of adjustable gastric banding (AGB) vs. Roux-en-Y gastric bypass (RYGB) surgeries on pQCT and DXA measures; 3) 6-month time-course changes in serum biomarkers of bone turnover and associated adipokines induced by AGB vs. RYGB.
Methods: Standard DXA and pQCT measurements were taken for all subjects. PA tertiles (PA-L, PA-M, PA-H) were based on a calculated average MET-min/day determined from 4-d self-reported PA and pedometer step counts. For BarS subjects, bone measurements were taken pre-surgery, 3- and 6-months post-surgery along with serum (or plasma) from fasting blood draws, with ELISA assays for total OC, undercarboxylated OC, CTx, adiponectin, and leptin.
Results: Minimal DXA differences between the highest and lowest PA tertiles were seen, while pQCT tibial measures and polar strength-strain index (SSIp) indicated differences along the tibial shaft. Comparing the two instruments and adjusting for BMI, the DXA leg and hip BMD and BMC showed differences (p<0.05) between PA-M and PA-L as well as PA-H and PA-L. Similarly, the pQCT tibial cortical area, BMC, and SSIp were progressively greater for the different levels of PA (p<0.05).
At 3 and 6 months post-BarS, weight, fat-free mass, fat mass, central body fat, tibial and radial subcutaneous fat, and radial MCSA decreased (p<0.05). Comparing AGB and RYGB and adjusting for weight, DXA BMC showed decreases (p<0.01) at both time points for RYGB. RYGB demonstrated differences (p<0.05) in bone measures along the tibial shaft at 3 and 6 months post-surgery that are indicative of increases in bone strength; at 6 months, total OC, undercarboxylated OC, and HMW adiponectin increased, while leptin decreased.
Conclusions: PA is associated with increases in bone, but pQCT data are more discriminating and sensitive. Six months post-RYGB, pQCT measures indicate increases in bone strength parameters, and greater bone adaptation was evidenced by biomarkers of increased osteoblastic activity. / Ph. D.
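The tertile split by average MET-min/day used in the methods above can be sketched as follows; the MET values are invented for nine hypothetical subjects, not data from the study.

```python
import numpy as np

def assign_tertiles(met_min_per_day):
    """Assign PA-L / PA-M / PA-H labels by tertiles of average MET-min/day."""
    x = np.asarray(met_min_per_day, dtype=float)
    lo, hi = np.quantile(x, [1 / 3, 2 / 3])  # tertile cut points
    return np.where(x < lo, "PA-L", np.where(x < hi, "PA-M", "PA-H")).tolist()

# Invented MET-min/day values for nine hypothetical subjects.
mets = [120, 340, 560, 95, 410, 780, 205, 630, 505]
tertiles = assign_tertiles(mets)
print(tertiles)
```

Each group comparison in the Results (PA-L vs. PA-M vs. PA-H) is then made between subjects carrying these labels.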
1043
Collimator Width Optimization in X-ray Luminescent Computed Tomography. Mishra, Sourav. 17 June 2013 (has links)
X-ray Luminescent Computed Tomography (XLCT) is a new imaging modality currently under extensive trials. The modality works by selectively exciting X-ray-sensitive nanophosphors and detecting the optical signal thus generated. The system can recreate high-quality tomographic slices even with a low X-ray dose. Many studies have reported successful validation of the underlying philosophy; however, there is still a lack of information about the optimal settings, or combinations of imaging parameters, that would yield the best outputs. Research groups working in this area have reported results on the basis of dose, signal-to-noise ratio, or resolution alone.
In this thesis, the candidate evaluated XLCT with noise and resolution considered together, in terms of composite indices. Simulations were performed for various beam widths, and noise and resolution metrics were deduced. This information was used to evaluate image quality on the basis of a CT figure of merit and a modified Wang-Bovik image quality index. The simulations indicate the presence of an optimal setting which can be chosen prior to extensive scans. The study, although focused on a particular implementation, aims to establish a paradigm for finding the best settings for any XLCT system. Scanning with a preconfigured optimal setting can help to vastly reduce the cost and risks involved with this imaging modality. / Master of Science
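The Wang-Bovik index mentioned above has a standard, unmodified form that can be sketched as follows. The thesis's modified variant is not specified here, so this implements the original universal quality index Q = 4*cov(x,y)*mx*my / ((vx+vy)*(mx^2+my^2)), which equals 1 only for identical images.

```python
import numpy as np

def universal_quality_index(x, y):
    """Wang-Bovik universal image quality index (the unmodified form).
    Combines loss of correlation, luminance distortion, and contrast
    distortion into a single score in [-1, 1]; 1 means x == y."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

ref = np.arange(1.0, 17.0).reshape(4, 4)              # toy 4x4 "image"
print(universal_quality_index(ref, ref))              # identical -> 1.0
print(universal_quality_index(ref, ref + 2.0) < 1.0)  # shifted -> below 1
```

In an XLCT quality study, such an index would score each reconstructed slice against a reference phantom image across the candidate beam widths.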
1044
Fiber optic interferometer for optical coherence microscopy imaging. Akcay, Avni Ceyhun. 01 April 2001 (has links)
No description available.
1045
A single-centre experience of implementing a rapid CXR reporting and CT access pathway for suspected lung cancer: Initial outcomes. Hunter, R., Wilkinson, Elaine, Snaith, Beverly. 01 April 2022 (has links)
Lung cancer remains a major cause of preventable death, and early diagnosis is critical to improving survival chances. The chest X-ray (CXR) remains the most common initial investigation, but clinical pathways need to support timely diagnosis by, where necessary, escalating abnormal findings to ensure priority reporting and an early CT scan.
This single-centre study included a retrospective evaluation of a rapid lung cancer CXR pathway in its first year of operation (May 2018-April 2019). The pathway was initially designed for primary care referrals but could also be used for any CXR demonstrating abnormal findings. A parallel cross-sectional survey of radiographers explored their understanding of, adherence to, and concerns regarding their role in the pathway's operation.
Primary care referrals on the rapid diagnostic pathway were low (n = 51/21,980; 0.2%), with 11 (21.6%) requiring a CT scan. A further 333 primary care CXR were escalated by the examining radiographer, with 100 (30.0%) undergoing a CT scan. Overall, 64 of the CT scans (57.7%) were abnormal or demonstrated suspicious findings warranting further investigation. There were 39 confirmed primary lung carcinomas, most with advanced disease. Survey responses showed that most radiographers were familiar with the pathway but some expressed concerns regarding their responsibilities and limited knowledge of CXR pathologies.
This baseline evaluation of the rapid lung cancer pathway demonstrated poor referral rates from primary care and identified the need for improved engagement. Radiographer escalation of abnormal findings is an effective adjunct but underlines the need for appropriate awareness, training, and ongoing support.
Engagement of the multiprofessional team is critical in new pathway implementation. Rapid diagnostic pathways can enable early diagnosis and the radiographer has a key role to play in their success.
1046
A systematic review comparing the effective radiation dose of musculoskeletal cone beam computed tomography to other diagnostic imaging modalities. Mason, K., Iball, Gareth, Hinchcliffe, D., Snaith, Beverly. 24 September 2024 (has links)
Purpose: Cone-Beam CT (CBCT) is well established in orofacial diagnostic imaging and is currently expanding into musculoskeletal applications. This systematic review sought to update the knowledge base on radiation dose comparisons between imaging modalities in MSK imaging and to consider how research studies have reported dose measures.
Methods: This review utilised a database search and an online literature tool. Potentially relevant studies were screened before full-text review, with each step performed by two independent reviewers and a third independent reviewer available to resolve conflicts. Data were extracted using a bespoke tool created within the literature tool.
Results: 21 studies were included in the review, comparing CBCT with MSCT (13), conventional radiography (1), or both (7). 19 studies concluded that CBCT delivered a reduced radiation dose compared with MSCT, with the factor of reduction ranging from 1.71 to 50 (average 12). Studies comparing CBCT to DR found DR to have an average dose-reduction factor of 4.55.
Conclusions: The claim that CBCT produces a lower radiation dose than MSCT is borne out, with most studies confirming doses less than half those of MSCT. Fewer studies include DR as a comparator, but they confirm that CBCT results in a higher effective dose on average, with scope for CBCT to provide an equivalent radiation dose. This review highlighted a need for consistency of methodology in studies that compare radiation dose across different technologies. Potential solutions lie outside the scope of this review and will likely require a multidisciplinary approach to ensure a cohesive outcome.
1047
Inferring Network Status from Partial Observations. Rangudu, Venkata Pavan Kumar. 09 February 2017 (has links)
In many network applications, such as the Internet and infrastructure networks, nodes fail or become congested dynamically, and tracking this information for all the nodes of a network on which some dynamical process is taking place is a fundamental problem. In this work, we study the problem of inferring the complete set of failed nodes when only a sample of the node failures is known; we refer to this problem as NetStateInf. We consider the setting in which node failures are correlated, which has been studied for many infrastructure networks. We formalize the NetStateInf problem using the Minimum Description Length (MDL) principle, show that finding solutions that minimize the MDL cost is in general hard, and develop efficient algorithms with rigorous performance guarantees for finding near-optimal MDL-cost solutions. We evaluate our methods on both synthetic and real-world datasets, including one from WAZE, a crowd-sourced road navigation tool that collects and presents traffic incident reports. We found that the proposed greedy algorithm recovers, on average, 80% of the failed nodes in a network from a given partial sample of failures, sampled from the true set of failures at some predefined rate. Furthermore, we prove that this algorithm finds a solution whose MDL cost is within an additive log(n) of the optimal. / Master of Science / In many real-world networks, such as the Internet and transportation networks, some dynamical process is taking place, and some elements of the network may fail at random as a result; service node failures in the Internet and traffic congestion in road networks are two such scenarios. Identifying the complete state of such networks is a fundamental problem.
In this work, we study the problem of identifying unknown node failures in a network from partial observations; we refer to this problem as NetStateInf. As in some previous studies in this area, we assume a setting where node failures are correlated. We approach the problem using the Minimum Description Length (MDL) principle, which states that the information learned from given data can be maximized by compressing it, i.e., by identifying the maximum number of patterns in the data. Using these concepts, we develop a mathematical formulation of the NetStateInf problem and propose efficient algorithms, with rigorous performance guarantees, for finding the set of failed nodes that best explains the observed failures. We evaluated our algorithms against both synthetic data (artificial networks with failures generated from a predefined mathematical model) and real-world data, such as traffic alerts collected by WAZE, a crowd-sourced navigation tool, for the Boston road network. Using this approach we are able to recover around 80% of the failed nodes in the network from the given partial failure data. Furthermore, we prove that our algorithm finds a solution whose cost differs from that of the optimal solution by at most log(n), where the cost of a solution is its MDL description length.
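The greedy MDL approach described above can be illustrated with a toy sketch. The cost function below (log2(n) bits to name each connected cluster of failures, plus one unit per node in the solution) is an invented stand-in for the thesis's actual MDL formulation, and the five-node path graph is hypothetical; the sketch only shows the general shape of the method: grow the solution from the observed failures while additions lower the description length.

```python
import math

def components(graph, nodes):
    """Count connected components of the subgraph induced by `nodes`."""
    nodes, seen, comps = set(nodes), set(), 0
    for start in nodes:
        if start in seen:
            continue
        comps += 1
        stack = [start]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(v for v in graph[u] if v in nodes and v not in seen)
    return comps

def mdl_cost(graph, solution, n):
    # Toy two-part cost: log2(n) bits per failure cluster, one unit per node.
    return components(graph, solution) * math.log2(n) + len(solution)

def greedy_infer(graph, observed):
    """Greedily add nodes while doing so lowers the toy MDL cost."""
    n = len(graph)
    solution = set(observed)
    improved = True
    while improved:
        improved = False
        for v in set(graph) - solution:
            if mdl_cost(graph, solution | {v}, n) < mdl_cost(graph, solution, n):
                solution.add(v)
                improved = True
    return solution

# Path graph 0-1-2-3-4: nodes 0 and 2 are reported failed; node 1 is the
# unobserved failure that joins them into a single, cheaper-to-describe cluster.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(sorted(greedy_infer(graph, {0, 2})))  # → [0, 1, 2]
```

Because correlated failures cluster on the graph, merging two observed failures into one cluster saves more description bits than the added node costs, which is why the unobserved node 1 is inferred.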
1048
Acoustic Tomography and Thrust Estimation on Turbofan Engines. Gillespie, John Lawrie. 21 December 2023 (has links)
Acoustic sensing offers a way to measure propulsion flow fields non-intrusively, and is of great interest because it may be applicable to cases that are difficult to measure with traditional methods. In this work, some of the successes and limitations of this technique are considered. In the first main result, the acoustic time of flight, used with a calibration curve, is shown to estimate the thrust of two turbofan engines accurately (to within 1.0-1.5%). In the second, it is shown that acoustic tomography methods that use only the first-arriving ray paths cannot distinguish some relevant propulsion flow fields (i.e., different flow fields can have the same times of flight). In the third result, we demonstrate, via the first validated acoustic tomography experiment on a turbofan engine, that a reasonable estimate of the flow can be produced despite this challenge. This is also the first successful use of acoustic tomography to reconstruct a compressible, multi-stream flow. / Doctor of Philosophy / Sound may be used to measure air flows, and has been used for this purpose in studies of the atmosphere for decades. In this work, the extension of the method to measuring air flows in aircraft engines is considered. This is challenging for two main reasons. The first is that aircraft engines are very loud, which makes it harder to accurately measure the sounds needed to determine the speeds and temperatures. We show that the thrust (the force made by an engine) may nevertheless be accurately measured using sound. The second is that the temperatures and velocities involved are very large compared to those in the atmosphere. We show that these large variations in temperature and velocity can make it impossible to distinguish between two different air flows in certain circumstances.
We also show that despite this limitation, sound can be used to produce a reasonable, though imperfect, estimate of the flow. In particular, the technique was successfully used to measure the varying temperatures and velocities in a jet engine, which has not been done successfully before.
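The basic physical relation the tomography inverts, that flow along the ray path changes the acoustic time of flight, can be sketched with a uniform-flow example. The numbers below are illustrative, not engine data.

```python
import math

def time_of_flight(length_m, sound_speed_ms, flow_speed_ms, angle_rad):
    """Straight-ray time of flight through a uniform flow: the flow
    component along the ray (u*cos(theta)) adds to, or subtracts from,
    the sound speed, which is the effect tomography inverts for."""
    return length_m / (sound_speed_ms + flow_speed_ms * math.cos(angle_rad))

# Illustrative numbers: a 1 m path, ambient sound speed, 100 m/s flow.
L, c, u = 1.0, 343.0, 100.0
downstream = time_of_flight(L, c, u, 0.0)    # propagating with the flow
upstream = time_of_flight(L, c, u, math.pi)  # propagating against it
print(downstream < upstream)  # True: the flow shortens the downstream flight
```

Measuring many such times of flight along rays crossing the jet at different angles, and inverting them for the velocity and temperature (sound speed) fields, is the tomographic step; the ambiguity noted above arises because different fields can yield identical first-arrival times.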
1049
A Method for Measuring Spatially Varying Equivalence Ratios with Application to Thermoacoustics. Hugger, Blaine Thomas. 17 December 2021 (has links)
Computed tomography of flame chemiluminescence emissions allows 3D spatially resolved flame measurements to be acquired from a series of camera images at discrete viewing angles. To determine fuel/air ratios, the ratio of excited radical species emissions (OH*/CH*) measured by chemiluminescence can be employed. In this work, high-resolution tomographic reconstruction coupled with chemiluminescence emissions was used to obtain spatially resolved, phase-averaged equivalence ratio measurements. This is important because variations in local equivalence ratio can have a profound effect on flame behavior, including but not limited to thermoacoustic instability, NOx and CO formation, and flame stabilization. Local equivalence ratios are determined by taking the OH*/CH* ratio of the tomographically reconstructed intensity fields and relating it to equivalence ratio. The correlation of OH*/CH* with equivalence ratio is derived from an axisymmetric, commercially available flat-flame burner (Holthuis and Associates). To relate the intensity-field imaging (camera coordinate system) used in the tomographic reconstruction to the real-world coordinate system of the burner, a calibration procedure using a plate with 39 non-coplanar points was performed; it was then validated by comparing Abel-inverted flame images of the axisymmetric burner with the tomographically reconstructed images. The results show a successful tomographic reconstruction of a self-excited thermoacoustic cycle, revealing equivalence ratio fluctuations that coincide with the first dominant frequency of the pressure fluctuations and are influenced by a second harmonic. / Master of Science / In recent years, tomographic reconstruction of flames has gained significant attention for understanding different flame phenomena. One such phenomenon is the thermoacoustic instability.
Using high-speed cameras for chemiluminescence imaging of specific species can help define the heat release rate and the air/fuel (equivalence) ratio spatially. Coupling pressure measurements to the imaging can be used to determine the flame's response to acoustic perturbations in the flow field. Every optical system has inherently different light-transmission characteristics and therefore needs to be calibrated against a known flame source. This work used a Holthuis and Associates flat flame as the known source, in conjunction with the optical system, to correlate the OH*/CH* ratio with equivalence ratio; this is possible because of the perfectly premixed nature of the flat flame. The correlation curve for the optical system is then applied to the tomographically reconstructed chemiluminescence intensities during a self-excited thermoacoustic instability. In addition, the flat-flame burner was used to validate the tomography approach and calibration procedure. In conclusion, this work develops and validates a method for the tomographic reconstruction of spatially varying equivalence ratios.
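The calibration step, mapping a measured OH*/CH* intensity ratio to equivalence ratio via a curve obtained on the flat-flame burner, can be sketched with simple interpolation. The calibration points below are invented placeholders, not the study's measured curve.

```python
import numpy as np

# Invented (OH*/CH* ratio, equivalence ratio) calibration points; a real
# curve would come from the flat-flame burner measurements described above.
cal_ratio = np.array([0.5, 0.8, 1.1, 1.5, 2.0])
cal_phi = np.array([0.7, 0.8, 0.9, 1.0, 1.1])

def equivalence_ratio(oh_ch_ratio):
    """Map a measured OH*/CH* intensity ratio to an equivalence ratio by
    piecewise-linear interpolation of the calibration curve."""
    return np.interp(oh_ch_ratio, cal_ratio, cal_phi)

print(float(equivalence_ratio(1.1)))  # lands on a calibration point
print(float(equivalence_ratio(1.3)))  # interpolated between 0.9 and 1.0
```

Applied pixel-by-pixel to the reconstructed OH* and CH* intensity fields, this mapping yields the spatially resolved equivalence ratio field.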
1050
Enhanced Objective Detection of Retinal Nerve Fiber Bundle Defects in Glaucoma With a Novel Method for En Face OCT Slab Image Construction and Analysis. Cheloni, R., Dewsbery, S.D., Denniss, Jonathan. 11 October 2021 (has links)
To introduce a novel method for extracting en face slab images (SMAS) that considers varying individual anatomy and configuration of retinal nerve fiber bundles, and to evaluate its performance in detecting glaucomatous abnormalities.
Dense central retinal spectral domain optical coherence tomography scans were acquired in 16 participants with glaucoma and 19 age-similar controls. Slab images were generated by averaging reflectivity over different depths below the inner limiting membrane according to several methods. SMAS considered multiple 16 µm thick slabs from 8 to 116 µm below the inner limiting membrane, whereas 5 alternative methods considered single summary slabs of various thicknesses and depths. Superpixels in eyes with glaucoma were considered abnormal if below the first percentile of distributions fitted to control data for each method. The ability to detect glaucoma defects was measured by the proportion of abnormal superpixels. Proportion of superpixels below the fitted first percentile in controls was used as a surrogate false-positive rate. The effects of slab methods on performance measures were evaluated with linear mixed models.
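The abnormality criterion, flagging superpixels that fall below the first percentile of a distribution fitted to control data, can be sketched as follows. Assuming a normal fit is this sketch's simplification; the control values are invented.

```python
import statistics

def abnormal_superpixels(control_values, test_values, alpha=0.01):
    """Flag test superpixels whose value falls below the alpha-quantile
    (here the 1st percentile) of a normal distribution fitted to controls.
    The normal family is an assumption of this sketch, not the study's."""
    mu = statistics.fmean(control_values)
    sd = statistics.stdev(control_values)
    cutoff = statistics.NormalDist(mu, sd).inv_cdf(alpha)
    return [v < cutoff for v in test_values]

# Invented control reflectivity values and two test superpixels.
controls = [100, 102, 98, 101, 99, 103, 97, 100]
flags = abnormal_superpixels(controls, [99.0, 80.0])
print(flags)  # only the clearly depressed superpixel (80.0) is flagged
```

The proportion of flagged superpixels per eye is then the detection measure compared between slab methods, with the flag rate in control eyes serving as the surrogate false-positive rate.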
The ability to detect glaucoma defects varied between slab methods, χ2(5) = 120.9, P