
Using MARS Spectral CT for Identifying Biomedical Nanoparticles

Raja, Aamir Younis January 2013 (has links)
The goal of this research is to contribute to the development of MARS spectral CT and to demonstrate the feasibility of molecular imaging with the technology. MARS is a newly developed micro-CT scanner incorporating the latest spectroscopic Medipix photon-counting detector. I show that the scanner can identify both drug markers and atherosclerotic stenoses labelled with non-toxic nanoparticles. I also show that spectral computed tomography using Medipix x-ray detectors can give quantitative measurements of concentrations of gold nanoparticles in phantoms, mice and excised atheroma. I characterised Medipix2 assemblies with Si and CdTe x-ray sensors using poly-energetic x-ray sources, measuring the inhomogeneities within the sensors, individual pixel sensitivity responses, and saturation effects at higher photon fluxes. I assessed the effects of charge sharing on the performance of Medipix2, showing that it compromises energy resolution much more than spatial resolution. I have commissioned several MARS scanners incorporating different Medipix2 and Medipix3 cameras. After characterising the x-ray detectors and geometrically assessing the MARS-CT, spectral CT data were acquired using x-ray energies appropriate for human imaging. The results show that the MARS scanner can discriminate among low-atomic-number materials and among various concentrations of heavy atoms. This new imaging modality, used with functionalised gold nanoparticles, provides a new tool to assess plaque vulnerability. I demonstrated this using gold nanoparticles, attached to antibodies, that target thrombotic events in excised plaque. Likewise, the modality can be used to track drugs labelled with any heavy atom to assess how much drug reaches a target organ. The methodology could thus be used to accelerate the development of new drug treatments for cancers and inflammatory diseases.
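The material identification described above rests on basis decomposition: the attenuation measured in each energy bin is modelled as a linear combination of basis-material contributions, and concentrations are recovered by least squares. A minimal sketch; the attenuation matrix and concentrations below are made-up illustrative numbers, not values from the thesis:

```python
import numpy as np

# Hypothetical per-energy-bin mass attenuation coefficients (cm^2/g)
# for two basis materials (soft tissue, gold). Real values depend on
# the detector's energy thresholds; these are illustrative only.
A = np.array([
    [0.20, 4.0],   # bin 1
    [0.18, 2.5],   # bin 2
    [0.16, 1.2],   # bin 3
])

# Noise-free simulated measurement for a voxel containing
# 0.5 g/cm^3 tissue-equivalent material and 0.01 g/cm^3 gold.
true_c = np.array([0.5, 0.01])
measured = A @ true_c

# Least-squares basis decomposition recovers the concentrations.
c, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(c)  # ≈ [0.5, 0.01]
```

In practice the measurements are noisy and the forward model nonlinear, so maximum-likelihood decomposition replaces plain least squares, but the linear-algebra core is the same.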

Polytopes Arising from Binary Multi-way Contingency Tables and Characteristic Imsets for Bayesian Networks

Xi, Jing 01 January 2013 (has links)
The main theme of this dissertation is the study of polytopes arising from binary multi-way contingency tables and characteristic imsets for Bayesian networks. Firstly, we study three-way tables whose entries are independent Bernoulli random variables with canonical parameters under no-three-way-interaction generalized linear models. Here, we use the sequential importance sampling (SIS) method with the conditional Poisson (CP) distribution to sample binary three-way tables with the sufficient statistics, i.e., all two-way marginal sums, fixed. Compared with the Markov chain Monte Carlo (MCMC) approach with a Markov basis (MB), the SIS procedure has the advantage that it does not require expensive or prohibitive pre-computations. Note that this problem can also be considered as estimating the number of lattice points inside the polytope defined by the zero-one and two-way marginal constraints. The theorems in Chapter 2 give the parameters for the CP distribution on each column when it is sampled. In this chapter, we also present the algorithms, the simulation results, and the results for Samson's monks data. Bayesian networks, a part of the family of probabilistic graphical models, are widely applied in many areas, and much work has been done on model selection for Bayesian networks. The second part of this dissertation investigates the problem of finding the optimal graph by using characteristic imsets, which are defined as 0-1 vector representations of Bayesian networks that are unique up to Markov equivalence. Characteristic imset polytopes are defined as the convex hulls of all characteristic imsets we consider. It was proven that the problem of finding the optimal Bayesian network for a specific dataset can be converted to a linear programming problem over the characteristic imset polytope [51]. In Chapter 3, we first consider characteristic imset polytopes for all diagnosis models and show that these polytopes are direct products of simplices. Then we give the combinatorial description of all edges and all facets of these polytopes. At the end of this chapter, we generalize these results to the characteristic imset polytopes for all Bayesian networks with a fixed underlying ordering of nodes. Chapter 4 includes discussion and future work on these two topics.
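The lattice-point counting interpretation above can be illustrated with a deliberately crude importance sampler: propose tables uniformly at random and weight by the indicator that the margins match. The CP-based SIS of the dissertation is vastly more efficient; this toy version, on hypothetical 3x3 two-way margins, only shows the estimator's logic:

```python
import itertools
import random

# Toy 3x3 margins; the dissertation works with binary three-way tables
# and all two-way marginal sums, but the estimator's logic is the same.
rows, cols = (2, 1, 1), (1, 1, 2)
m, n = len(rows), len(cols)

def margins(t):
    return (tuple(sum(r) for r in t),
            tuple(sum(t[i][j] for i in range(m)) for j in range(n)))

# Exact lattice-point count by enumeration (feasible only for tiny tables).
exact = sum(
    1
    for bits in itertools.product((0, 1), repeat=m * n)
    if margins([bits[i * n:(i + 1) * n] for i in range(m)]) == (rows, cols)
)

# Crude importance-sampling estimate with a uniform proposal:
# count ~ 2^(mn) * P(uniform random table hits the margins).
random.seed(0)
N = 20000
hits = sum(
    1
    for _ in range(N)
    if margins([[random.randint(0, 1) for _ in range(n)]
                for _ in range(m)]) == (rows, cols)
)
estimate = 2 ** (m * n) * hits / N
print(exact, round(estimate, 1))
```

The uniform proposal wastes almost all samples on infeasible tables; the point of the CP proposal is to sample column by column so that the margin constraints are (nearly) always satisfied, making each sample informative.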

Ecosystem Services Based Evaluation Framework of Land-use Management Options for Dryland Salinity in the Avon Region, Western Australian Wheatbelt

Kleplova, Klara Zoe January 2014 (has links)
Dryland-salinity management options aim to mitigate the adverse human-induced processes that lead to salinisation of topsoil, specifically rising saline groundwater tables and soil erosion. In the Avon region of Western Australia, management options are evaluated solely on the basis of their efficiency in lowering groundwater tables. Recently, however, the need to also take into account their wider impact on ecosystem resilience has been recognised, yet a tool to assess these impacts is missing. The aim of this thesis is to synthesise that missing tool from existing ecosystem-services-based land-use evaluation frameworks, such that it fits the environmental issue, regional socio-economic demands, and the existing efficiency-based evaluation framework for dryland-salinity management options. The thesis builds on secondary data and describes (i) the environmental issue of dryland salinity in Australia; (ii) the dryland-salinity, environmental, economic, social, and political context of the Avon region; and (iii) five chosen evaluation frameworks that assess the impact of land use on ecosystem resilience. The proposed optimal framework for the Avon region is a combination of two existing frameworks: (i) an ecosystem resilience evaluation framework and (ii) an ecosystem services economic valuation framework. The inputs of the proposed framework are (i) soil properties, (ii) external natural and anthropogenic drivers, and (iii) beneficiaries; the transfer phase is represented by soil processes; and the outputs are (i) ecosystem services and (ii) their economically valued benefits.

Quantification and Maximization of Performance Measures for Photon Counting Spectral Computed Tomography

Yveborg, Moa January 2015 (has links)
During my time as a PhD student in the Physics of Medical Imaging group at KTH, I have taken part in the work of developing a photon-counting spectrally resolved silicon detector for clinical computed tomography. This work has largely motivated the direction of my research and is the main reason for my focus on certain issues. Early in the work, a need to quantify and optimize the performance of a spectrally resolved detector was identified. A large part of my work has thus consisted of reviewing conventional methods used for performance quantification and optimization in computed tomography and identifying which are best suited for the characterization of a spectrally resolved system. In addition, my work has included comparisons of conventional systems with the detector we are developing. The collected results after a little more than four years of work are four publications and three conference papers. This compilation thesis consists of five introductory chapters and my four publications. The introductory chapters are not self-contained in the sense that the theory and results from all my published work are included. Rather, they are written with the purpose of providing a context in which the papers should be read. The first two chapters treat the general purpose of the introductory chapters and the theory of computed tomography, including the distinction between conventional, non-spectral, computed tomography and different practical implementations of spectral computed tomography. The third chapter consists of a review of the conventional methods developed for quantification and optimization of image quality in terms of detectability and signal-to-noise ratio, part of which is included in my published work. In addition, the theory on which the method of material basis decomposition is based is presented, together with a condensed version of the results from my work on the comparison of two systems with fundamentally different practical solutions for material quantification. In the fourth chapter, previously unpublished measurements on the photon-counting spectrally resolved detector we are developing are presented and compared to Monte Carlo simulations. In the fifth and final chapter, a summary of the appended publications is included.
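One of the performance measures discussed above, the signal-to-noise ratio of a photon-counting measurement, can be sketched for independent Poisson-distributed energy bins, where the squared SNR simply adds across bins. All counts below are illustrative assumptions, not measurements from the thesis:

```python
import math

# Hypothetical expected photon counts per detector element in two
# energy bins, with and without a contrast target in the beam.
background = [1000.0, 800.0]   # counts per bin, target absent
signal     = [940.0, 790.0]    # counts per bin, target present

# With independent Poisson statistics in each bin, the squared SNR of
# the count difference adds across bins:
#   SNR^2 = sum_b (diff_b)^2 / var_b,  with var_b ~ background_b.
snr2 = sum((b - s) ** 2 / b for b, s in zip(background, signal))
print(round(math.sqrt(snr2), 3))  # ≈ 1.93
```

Note how the low-contrast second bin contributes little: energy-weighting schemes in spectral CT exploit exactly this bin-by-bin structure.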

Adaptive Range Counting and Other Frequency-Based Range Query Problems

Wilkinson, Bryan T. January 2012 (has links)
We consider variations of range searching in which, given a query range, our goal is to compute some function based on frequencies of points that lie in the range. The most basic such computation involves counting the number of points in a query range. Data structures that compute this function solve the well-studied range counting problem. We consider adaptive and approximate data structures for the 2-D orthogonal range counting problem under the w-bit word RAM model. The query time of an adaptive range counting data structure is sensitive to k, the number of points being counted. We give an adaptive data structure that requires O(n loglog n) space and O(loglog n + log_w k) query time. Non-adaptive data structures, on the other hand, require Ω(log_w n) query time (Pătraşcu, 2007). Our specific bounds are interesting for two reasons. First, when k=O(1), our bounds match the state of the art for the 2-D orthogonal range emptiness problem (Chan et al., 2011). Second, when k=Θ(n), our data structure is tight to the aforementioned Ω(log_w n) query time lower bound. We also give approximate data structures for 2-D orthogonal range counting whose bounds match the state of the art for the 2-D orthogonal range emptiness problem. Our first data structure requires O(n loglog n) space and O(loglog n) query time. Our second data structure requires O(n) space and O(log^ε n) query time for any fixed constant ε>0. These data structures compute an approximation k' such that (1-δ)k≤k'≤(1+δ)k for any fixed constant δ>0. The range selection query problem in an array involves finding the kth lowest element in a given subarray. Range selection in an array is very closely related to 3-sided 2-D orthogonal range counting. An extension of our technique for 3-sided 2-D range counting yields an efficient solution to adaptive range selection in an array.
In particular, we present an adaptive data structure that requires O(n) space and O(log_w k) query time, exactly matching a recent lower bound (Jørgensen and Larsen, 2011). We next consider a variety of frequency-based range query problems in arrays. We give efficient data structures for the range mode and least frequent element query problems and also exhibit the hardness of these problems by reducing Boolean matrix multiplication to the construction and use of a range mode or least frequent element data structure. We also give data structures for the range α-majority and α-minority query problems. An α-majority is an element whose frequency in a subarray is greater than an α fraction of the size of the subarray; any other element is an α-minority. Surprisingly, geometric insights prove to be useful even in the design of our 1-D range α-majority and α-minority data structures.
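For contrast with the word-RAM bounds above, a classical baseline for static 2-D orthogonal range counting is the merge-sort tree: O(n log n) space and O(log^2 n) query time. This sketch is a textbook structure, not the adaptive or approximate data structures of the thesis:

```python
import bisect

class RangeCount2D:
    """Static 2-D orthogonal range counting via a merge-sort tree:
    a segment tree over the x-sorted points, where each node stores
    the sorted y-coordinates of its range."""

    def __init__(self, points):
        self.pts = sorted(points)                 # sort by x
        self.xs = [p[0] for p in self.pts]
        self.n = n = len(self.pts)
        self.tree = [None] * (2 * n)
        for i, (_, y) in enumerate(self.pts):
            self.tree[n + i] = [y]                # leaves
        for i in range(n - 1, 0, -1):             # internal nodes
            self.tree[i] = sorted(self.tree[2 * i] + self.tree[2 * i + 1])

    def count(self, x1, x2, y1, y2):
        """Number of points with x1 <= x <= x2 and y1 <= y <= y2."""
        lo = bisect.bisect_left(self.xs, x1) + self.n
        hi = bisect.bisect_right(self.xs, x2) + self.n
        total = 0
        while lo < hi:                            # bottom-up traversal
            if lo & 1:
                ys = self.tree[lo]
                total += bisect.bisect_right(ys, y2) - bisect.bisect_left(ys, y1)
                lo += 1
            if hi & 1:
                hi -= 1
                ys = self.tree[hi]
                total += bisect.bisect_right(ys, y2) - bisect.bisect_left(ys, y1)
            lo //= 2
            hi //= 2
        return total

pts = [(1, 5), (2, 3), (3, 8), (4, 1), (6, 7), (7, 2)]
rc = RangeCount2D(pts)
print(rc.count(2, 6, 2, 8))  # points (2,3), (3,8), (6,7) -> 3
```

The query decomposes the x-range into O(log n) tree nodes and binary-searches each node's y-list, hence the O(log^2 n) bound that the word-RAM structures above improve upon.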

Path Queries in Weighted Trees

Zhou, Gelin January 2012 (has links)
Trees are fundamental structures in computer science, widely used in modeling and representing different types of data in numerous computer applications. In many cases, properties of the objects being modeled are stored as weights or labels on the nodes of trees. Researchers have therefore studied the preprocessing of weighted trees, in which each node is assigned a weight, to support various path queries, which compute a certain function over the weights of the nodes along a given query path in the tree [3, 14, 22, 26]. In this thesis, we consider the problem of supporting several path queries over a tree on n weighted nodes, where the weights are drawn from a set of σ distinct values. One query we support is the path median query, which asks for the median weight on the path between two given nodes. For this and the more general path selection query, we present a linear-space data structure that answers queries in O(lg σ) time under the word RAM model. This greatly improves previous results on the same problem, as previous data structures achieving O(lg n) query time use O(n lg^2 n) space, and previous linear-space data structures require O(n^ε) time to answer a query for any positive constant ε [26]. We also consider the path counting query and the path reporting query, where a path counting query asks for the number of nodes on a query path whose weights lie in a query range, and a path reporting query asks to report these nodes. Our linear-space data structure supports path counting queries in O(lg σ) query time. This matches the result of Chazelle [14] when σ is close to n, and performs better when σ is significantly smaller than n. The same data structure can also support path reporting queries in O(lg σ + occ lg σ) time, where occ is the output size. In addition, we present a data structure that answers path reporting queries in O(lg σ + occ lg lg σ) time, using O(n lg lg σ) words of space.
These are the first data structures that answer path reporting queries.
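A naive O(n)-per-query baseline makes the path median query above concrete: walk both endpoints up to their lowest common ancestor, collect the weights on the path, and select the median. Useful only as a correctness reference against the O(lg σ) structure; the toy tree below is an invented example:

```python
# Naive path median: climb from both endpoints to their lowest common
# ancestor (LCA), gathering node weights, then take the lower median.
def path_median(parent, depth, weight, u, v):
    path = []
    while depth[u] > depth[v]:          # lift the deeper endpoint
        path.append(weight[u]); u = parent[u]
    while depth[v] > depth[u]:
        path.append(weight[v]); v = parent[v]
    while u != v:                       # lift both until they meet
        path.append(weight[u]); u = parent[u]
        path.append(weight[v]); v = parent[v]
    path.append(weight[u])              # the LCA itself
    path.sort()
    return path[(len(path) - 1) // 2]   # lower median

# Toy tree: node 0 is the root; node i has parent parent[i].
parent = [0, 0, 0, 1, 1, 2]
depth  = [0, 1, 1, 2, 2, 2]
weight = [4, 9, 2, 7, 1, 5]
print(path_median(parent, depth, weight, 3, 5))  # path weights {7,9,4,2,5} -> 5
```

The thesis's structures avoid this linear walk by encoding the weights succinctly along root-to-node paths, which is what brings the query time down to O(lg σ).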

Fatigue Life Assessment of 30CrNiMo8HH Steel Under Variable Amplitude Loading

Ibrahim, Elfaitori January 2012 (has links)
The actual service loading histories of most engineering components are characterized by variable amplitudes and are sometimes rather complicated. The goal of this study was to estimate the fatigue life of nickel-chromium-molybdenum 30CrNiMo8HH steel alloy under axial and pure torsion variable amplitude loading (VAL) conditions. The investigation was directed at two primary factors that are believed to have an influence on fatigue life under such loading conditions: load sequence and mean stress. The experimental work for this research included two-step loading, non-zero mean strain loading, and VAL tests, the results of which were added to previously determined fully reversed strain-controlled fatigue data. The effect of load sequence on fatigue life was examined through the application of the commonly used linear damage accumulation rule along with the Manson and Marco–Starkey damage accumulation methods, the latter of which takes load sequence into account. Based on the two-step experimental results, both the Manson and Marco–Starkey methods were modified in order to eliminate the empirically determined constants normally required for these two methods. The effect of mean stress on fatigue life was investigated with the use of three life prediction models: Smith–Watson–Topper (SWT), Fatemi–Socie (FS), and Jahed–Varvani (JV). The cycles from the VAL histories were counted using a rainflow counting procedure that maintains the applied strain sequence, and a novel method was developed for the estimation of the total energy density required for the JV model. For two-step loading and for all three fatigue models employed, the modified damage accumulation methods provided superior fatigue life predictions. However, regardless of the damage accumulation method applied, the most satisfactory fatigue life correlation for VAL was obtained using the energy-based JV model.
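The linear damage accumulation rule mentioned above (Palmgren-Miner) predicts failure when the summed damage fractions reach 1. A minimal sketch with illustrative load blocks, not data from the thesis; sequence-sensitive methods such as Marco-Starkey replace each ratio n/N with a power of it, so they cannot be written as a plain sum:

```python
# Palmgren-Miner linear damage accumulation: each loading block
# consumes n_i / N_i of the fatigue life, independent of order;
# failure is predicted when the total D reaches 1.
blocks = [
    (2_000,  50_000),    # (applied cycles n_i, cycles-to-failure N_i)
    (10_000, 200_000),
    (500,    10_000),
]
D = sum(n / N for n, N in blocks)
print(round(D, 3))  # fraction of life consumed -> 0.14
```

Because the sum is order-independent, Miner's rule cannot capture the high-low versus low-high sequence effects that the two-step tests above were designed to expose.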

Counting And Constructing Boolean Functions With Particular Difference Distribution Vectors

Yildirim, Elif 01 June 2004 (has links) (PDF)
In this thesis we deal with Boolean functions with particular difference distribution vectors. Besides their main properties, we especially focus on the strict avalanche criterion for cryptographic purposes. We not only deal with known methods but also demonstrate some new methods for counting and constructing such functions. Furthermore, by performing some statistical tests, we observed a number of interesting properties.
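The strict avalanche criterion (SAC) mentioned above requires that flipping any single input bit flips the output for exactly half of all inputs. For small n this is easy to verify by brute force over the truth table; the quadratic function below is a toy example chosen for illustration, not one of the thesis's constructions:

```python
# SAC check by brute force: for each input bit, flipping it must flip
# the output for exactly 2^(n-1) of the 2^n inputs.
def satisfies_sac(truth, n):
    for i in range(n):
        e = 1 << i
        flips = sum(truth[x] ^ truth[x ^ e] for x in range(1 << n))
        if flips != 1 << (n - 1):
            return False
    return True

# Toy function f(b0,b1,b2) = b0*b1 XOR b1*b2 XOR b0*b2: the derivative
# with respect to any bit is the XOR of the other two, balanced.
n = 3
truth = []
for x in range(1 << n):
    b = [(x >> k) & 1 for k in range(n)]
    truth.append(b[0] & b[1] ^ b[1] & b[2] ^ b[0] & b[2])

print(satisfies_sac(truth, n))  # True
```

A linear function such as f(x) = b0 fails immediately: flipping b0 flips the output for every input, not half of them.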

Fatigue Life Calculation By Rainflow Cycle Counting Method

Ariduru, Secil 01 December 2004 (has links) (PDF)
In this thesis, the fatigue life of a cantilever aluminum plate with a side notch under certain loading conditions is analyzed. Results of experimental stress analysis of the cantilever aluminum plate using a uniaxial strain gage are presented. The strain gage is glued at a critical point of the specimen where stress concentration exists. Strain measurement is performed on the base-excited cantilever beam under a random vibration test in order to examine the life profile simulation. The fatigue analysis of the test specimen is carried out in both the time and frequency domains. Rainflow cycle counting in the time domain is examined by taking the load time history as input, and the number of cycles is determined from the time history. In the frequency-domain analysis, power spectral density estimates of normal stress are obtained from the acquired strain data sampled at 1000 Hz. The moments of the power spectral density estimates are used to find the probability density function estimate from Dirlik's empirical expression. After the total numbers of cycles in both the time- and frequency-domain approaches are found, the Palmgren-Miner rule, a cumulative damage theory, is used to estimate the fatigue life. Fatigue life estimates from the two domains are comparatively evaluated. The frequency-domain approach is found to provide a marginally safer prediction in this study.
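The time-domain counting step described above can be sketched with the classic three-point rainflow method over a sequence of turning points (peaks and valleys); this is a textbook sketch in the spirit of ASTM E1049, not necessarily the exact procedure used in the thesis:

```python
# Three-point rainflow cycle counting over turning points.
# Returns (range, count) pairs; unclosed cycles count as 0.5.
def rainflow(turning_points):
    stack, cycles = [], []
    for p in turning_points:
        stack.append(p)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])    # most recent range
            y = abs(stack[-2] - stack[-3])    # previous range
            if x < y:
                break                          # cycle not yet closed
            if len(stack) == 3:
                cycles.append((y, 0.5))        # range contains the start
                stack.pop(0)
            else:
                cycles.append((y, 1.0))        # full closed cycle
                last = stack.pop(); stack.pop(); stack.pop()
                stack.append(last)             # remove the cycle's pair
    for a, b in zip(stack, stack[1:]):         # leftover half cycles
        cycles.append((abs(a - b), 0.5))
    return cycles

# Example load history (turning points of a strain/stress signal).
print(rainflow([-2, 1, -3, 5, -1, 3, -4, 4, -2]))
```

Each extracted range is then binned, and the bin counts feed the Palmgren-Miner damage sum; the frequency-domain route instead estimates the same range distribution from spectral moments via Dirlik's expression.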
