181

Longitudinal Changes in the Corpus Callosum Following Pediatric Traumatic Brain Injury as Assessed by Volumetric MRI and Diffusion Tensor Imaging

Wu, Trevor Chuang Kuo 04 April 2011 (has links) (PDF)
Atrophy of the corpus callosum (CC) is a documented consequence of moderate-to-severe traumatic brain injury (TBI), typically expressed as volume loss measured with quantitative magnetic resonance imaging (MRI). Advanced imaging modalities such as diffusion tensor imaging (DTI) have also detected white matter microstructural alteration in the CC following TBI. The focus of this longitudinal investigation is the manner and degree to which macrostructural (volumetric) and microstructural changes develop over time following pediatric TBI, and how they relate to a measure of processing speed. Accordingly, DTI and volumetric changes of the CC, and their relation to processing speed, were determined in participants with TBI and a comparison group at approximately three and 18 months post-injury.
182

Chemical Potential Perturbation: A Method to Predict Chemical Potential Using Molecular Simulations

Moore, Stan G. 11 June 2012 (has links) (PDF)
A new method, called chemical potential perturbation (CPP), has been developed to predict the chemical potential as a function of composition in molecular simulations. The CPP method applies a spatially varying external potential to the simulation, causing the composition to depend upon position in the simulation cell. Following equilibration, the homogeneous chemical potential as a function of composition can be determined relative to some reference state after correcting for the effects of the inhomogeneity of the system. The CPP method allows one to predict the chemical potential over a wide range of composition points from a single simulation, and it works for dense fluids where other prediction methods become inefficient. For pure-component systems, three different methods of approximating the inhomogeneous correction are compared: the first uses the van der Waals density gradient theory, the second uses the local pressure tensor, and the third uses the Triezenberg-Zwanzig definition of surface tension. If desired, the binodal and spinodal densities of a two-phase fluid region can also be predicted by the new method. The CPP method is tested for pure-component systems using a Lennard-Jones (LJ) fluid at supercritical and subcritical conditions and is compared to Widom's method; in particular, the new method works well for dense fluids where Widom's method starts to fail.

The CPP method is also extended to an Ewald lattice sum treatment of intermolecular potentials. When computing the inhomogeneous correction term, one can use the Irving-Kirkwood (IK) or Harasima (H) contours of distributing the pressure. We show that the chemical potential can be approximated with the CPP method using either contour, though with the lattice sum method the H contour has much greater computational efficiency. Results are shown for the LJ fluid and extended simple point-charge (SPC/E) water. We also show preliminary results for solid systems and for a new LJ lattice sum method, which is more efficient than a full lattice sum when the average density varies only in one direction.

The CPP method is further extended to activity coefficient prediction of multi-component fluids. For multi-component systems, a separate external potential is applied to each species, and constant normal-component pressure is maintained by adjusting the external field of one of the species. Preliminary results are presented for five different binary LJ mixtures. Results from the CPP method show the correct trend, but some results exhibit a systematic bias, and we discuss a few possible ways to improve the method.
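As a rough illustration of the core idea (not the thesis's actual code), the sketch below assumes that at equilibrium the total chemical potential is uniform along the direction of the applied field, so the intrinsic chemical potential in each slab of the cell is offset from a reference value by the local external potential; the inhomogeneous correction discussed above is deliberately omitted, and all function and variable names are hypothetical.

```python
import numpy as np

def chemical_potential_vs_density(density, v_ext, mu_ref, rho_ref):
    """Estimate mu(rho) from one simulation with a spatially varying external field.

    Assumes equilibrium, so mu_intrinsic(z) + v_ext(z) is constant along z.
    The inhomogeneous (gradient) correction is neglected in this sketch.
    density : average density measured in each slab along the field direction
    v_ext   : external potential evaluated at each slab center
    mu_ref  : known chemical potential at the reference density rho_ref
    """
    density = np.asarray(density, dtype=float)
    v_ext = np.asarray(v_ext, dtype=float)
    # Offset the external potential relative to the slab closest to the reference density.
    i_ref = np.argmin(np.abs(density - rho_ref))
    mu_intrinsic = mu_ref - (v_ext - v_ext[i_ref])
    # Sort by density so the result reads as mu as a function of composition.
    order = np.argsort(density)
    return density[order], mu_intrinsic[order]
```

A single simulation of this kind yields an estimate of the chemical potential over the whole density range sampled by the field, which is the advantage over per-state-point methods such as Widom insertion.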
183

Equivalence Classes of Subquotients of Pseudodifferential Operator Modules on the Line

Larsen, Jeannette M. 08 1900 (has links)
Certain subquotients of Vec(R)-modules of pseudodifferential operators from one tensor density module to another are categorized, giving necessary and sufficient conditions under which two such subquotients are equivalent as Vec(R)-representations. These subquotients split under the projective subalgebra, a copy of sl(2), when the members of their composition series have distinct Casimir eigenvalues. Results were obtained using the explicit description of the action of Vec(R) with respect to this splitting. In the length five case, the equivalence classes of the subquotients are determined by two invariants. In an appropriate coordinate system, the level curves of one of these invariants are a pencil of conics, and those of the other are a pencil of cubics.
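For orientation, the standard setup behind this abstract (stated here as general background, not as the thesis's own notation) is the action of Vec(R) on tensor density modules and the projective sl(2) subalgebra:

```latex
% Action of a vector field f(x)\,\partial_x on a tensor density of weight \lambda:
L_{f\partial_x}\bigl(\phi\,dx^{\lambda}\bigr)
  = \bigl(f\,\phi' + \lambda f'\,\phi\bigr)\,dx^{\lambda}.
% The projective subalgebra of Vec(R), isomorphic to sl(2), is spanned by
\partial_x, \qquad x\,\partial_x, \qquad x^{2}\,\partial_x ,
% and pseudodifferential operators between two density modules carry the induced action.
```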
184

Policy Evaluation in Statistical Reinforcement Learning

Pratik Ramprasad (14222525) 07 December 2022 (has links)
While Reinforcement Learning (RL) has achieved phenomenal success in diverse fields in recent years, the statistical properties of the underlying algorithms are still not fully understood. One key aspect in this regard is evaluating the value function associated with the RL agent's policy. In this dissertation, we propose two statistically sound methods for policy evaluation and inference, and study their theoretical properties within the RL setting.

In the first work, we propose an online bootstrap method for statistical inference in policy evaluation. The bootstrap is a flexible and efficient approach for inference in online learning, but its efficacy in the RL setting has yet to be explored. Existing methods for online inference are restricted to settings involving independently sampled observations. In contrast, our method is shown to be distributionally consistent for statistical inference in policy evaluation under Markovian noise, which is a standard assumption in the RL setting. To demonstrate the effectiveness of our method in practical applications, we include several numerical simulations involving the temporal difference (TD) learning and Gradient TD (GTD) learning algorithms across a range of real RL environments.

In the second work, we propose a tensor Markov Decision Process framework for modeling the evolution of a sequential decision-making process when the state-action features are tensors. Under this framework, we develop a low-rank tensor estimation method for off-policy evaluation in batch RL. The proposed estimator approximates the Q-function using a tensor parameter embedded with low-rank structure. To overcome the challenge of nonconvexity, we introduce an efficient block coordinate descent approach with closed-form solutions to the alternating updates. Under standard assumptions from the tensor and RL literature, we establish an upper bound on the statistical error, which guarantees a sub-linear rate of computational error. We provide numerical simulations to demonstrate that our method significantly outperforms standard batch off-policy evaluation algorithms when the true parameter has a low-rank tensor structure.
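To make the policy-evaluation setting concrete, here is a minimal sketch of TD(0) with linear function approximation, together with one randomly reweighted replicate of the update to illustrate the general flavor of an online (multiplier-style) bootstrap. This is an assumption about the general approach rather than the dissertation's actual algorithm, and all names and constants are hypothetical.

```python
import numpy as np

def td0_with_bootstrap_replicate(transitions, dim, alpha=0.05, gamma=0.95, rng=None):
    """Linear TD(0) policy evaluation plus one multiplier-bootstrap replicate.

    transitions : iterable of (phi_s, reward, phi_next) feature tuples
    dim         : feature dimension
    Returns the point estimate and one perturbed (bootstrap) estimate.
    """
    rng = rng or np.random.default_rng(0)
    theta = np.zeros(dim)    # point estimate of the value-function weights
    theta_b = np.zeros(dim)  # one bootstrap replicate (many are run in practice)
    for phi_s, r, phi_next in transitions:
        td_error = r + gamma * phi_next @ theta - phi_s @ theta
        theta += alpha * td_error * phi_s
        # Bootstrap replicate: the same update, scaled by a random positive weight.
        w = rng.exponential(1.0)
        td_error_b = r + gamma * phi_next @ theta_b - phi_s @ theta_b
        theta_b += alpha * w * td_error_b * phi_s
    return theta, theta_b
```

Running many such replicates in parallel and reading off the spread of the perturbed estimates is the usual way an online bootstrap quantifies uncertainty without storing the data stream.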
185

Advanced Magnetic Resonance (MR) Diffusion Analysis in Healthy Human Liver

Wong, Oi Lei 11 1900 (has links)
Diagnosing diffuse liver disease first involves measurement of blood enzymes followed by biopsy. However, blood markers lack spatial and diagnostic specificity, and biopsy is highly risky and variable. Although structural changes have been evaluated using diffusion weighted imaging (DWI), the technique is minimally quantitative. Quantitative MR diffusion approaches, such as intra-voxel incoherent motion (IVIM) and diffusion tensor imaging (DTI), have been proposed to better characterize diseased liver. However, the so-called pseudo-hepatic anisotropy artefact due to cardiac motion drastically affects DWI results. The overall goals of this thesis were thus to evaluate the effect of the pseudo-hepatic anisotropy artefact on the quality of diffusion tensor (DT) and IVIM metrics, and to identify potential solutions. Intra- and inter-session DTI repeatability was evaluated in healthy human livers while varying the number of diffusion encoding gradients (NGD) and the number of signal averages (NSA). Although no further advantage was observed when increasing NGD beyond 6 directions, increased NSA improved intra- and inter-session repeatability. The pseudo-hepatic artefact resulted in increased fractional anisotropy (FA) and tensor eigenvalues (λ1, λ2, λ3), most prominently in the left liver lobe during systole of the cardiac cycle. Without taking advantage of tensor directional information, increasing the acquired NGD slightly improved IVIM fit quality, thus helping to minimize the pseudo-hepatic artefact. Combining IVIM and DTI resulted in FA values closer to the hypothesized value of 0.0, which, based on liver microstructure, is most logical. Although both IVIM-DTI and DTI-IVIM exhibited similar fit R² values, the latter failed more often, especially near major blood vessels. Thus, IVIM-DTI was concluded to be more robust and therefore the better approach. / Thesis / Doctor of Philosophy (PhD)
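For reference, fractional anisotropy is computed from the diffusion-tensor eigenvalues with the standard formula; the short sketch below is a generic illustration of that calculation (not code from the thesis), applicable wherever FA and the eigenvalues λ1, λ2, λ3 are reported.

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """Standard FA from the three diffusion-tensor eigenvalues.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||,
    ranging from 0 (isotropic diffusion) to 1 (fully anisotropic).
    """
    lam = np.array([l1, l2, l3], dtype=float)
    mean = lam.mean()
    denom = np.sqrt((lam ** 2).sum())
    if denom == 0.0:
        return 0.0
    return float(np.sqrt(1.5) * np.sqrt(((lam - mean) ** 2).sum()) / denom)

# Example: a nearly isotropic voxel, as hypothesized for healthy liver tissue.
print(fractional_anisotropy(1.1e-3, 1.0e-3, 0.9e-3))  # small FA, close to 0
```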
186

An Analysis of Materials of Differential Type

Misra, Bijoy 04 1900 (has links)
An investigation of general Materials of Differential Type [MDT], and Motions With Constant Stretch History [MCSH] is presented. The Rivlin-Ericksen tensors A_n are shown to result from a Taylor series expansion of the relative strain tensor C_t(T). Internal constraint in MDT is discussed. General Solutions of Motions of Differential Type are worked out. Dynamically possible stresses are found for certain irrotational motions. Theorems regarding necessary and sufficient conditions for MCSH are proved. A class of MCSH is introduced, and an approximate MCSH is suggested. Necessary equations regarding gradients of a scalar-valued tensor function are derived. / Thesis / Master of Engineering (MEngr)
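The expansion mentioned above is standard in continuum mechanics and, under the usual smoothness assumptions, reads (with T the time argument of the relative strain tensor):

```latex
% Taylor expansion of the relative strain tensor about T = t, with the
% Rivlin-Ericksen tensors A_n appearing as the expansion coefficients:
C_t(T) = I + \sum_{n \ge 1} \frac{(T - t)^n}{n!}\, A_n(t),
\qquad
A_n(t) = \left.\frac{\partial^n}{\partial T^n}\, C_t(T)\right|_{T = t}.
```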
187

NLCviz: Tensor Visualization and Defect Detection in Nematic Liquid Crystals

Mehta, Ketan 05 August 2006 (has links)
Visualization and exploration of nematic liquid crystal (NLC) data is a challenging task due to the multidimensional and multivariate nature of the data. Simulation study of an NLC consists of multiple timesteps, where each timestep computes scalar, vector, and tensor parameters on a geometrical mesh. Scientists developing an understanding of liquid crystal interaction and physics require tools and techniques for effective exploration, visualization, and analysis of these data sets. Traditionally, scientists have used a combination of different tools and techniques like 2D plots, histograms, cut views, etc. for data visualization and analysis. However, such an environment does not provide the required insight into NLC datasets. This thesis addresses two areas of the study of NLC data---understanding of the tensor order field (the Q-tensor) and defect detection in this field. Tensor field understanding is enhanced by using a new glyph (NLCGlyph) based on a new design metric which is closely related to the underlying physical properties of an NLC, described using the Q-tensor. A new defect detection algorithm for 3D unstructured grids based on the orientation change of the director is developed. This method has been used successfully in detecting defects for both structured and unstructured models with varying grid complexity.
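As a rough sketch of director-based defect detection (the thesis's precise criterion is not reproduced here, so the simple threshold test below is an assumption), one can flag mesh edges whose endpoint directors differ by more than a tolerance, remembering that a nematic director n is physically equivalent to -n:

```python
import numpy as np

def flag_defect_edges(directors, edges, angle_tol_deg=30.0):
    """Flag mesh edges where the director orientation changes sharply.

    directors     : (num_nodes, 3) array of unit director vectors
    edges         : iterable of (i, j) node-index pairs defining mesh edges
    angle_tol_deg : orientation change (degrees) above which an edge is flagged
    Works for structured or unstructured grids, since only edge connectivity is used.
    """
    directors = np.asarray(directors, dtype=float)
    flagged = []
    cos_tol = np.cos(np.radians(angle_tol_deg))
    for i, j in edges:
        # |dot| handles the head-tail symmetry of the nematic director (n ~ -n).
        alignment = abs(np.dot(directors[i], directors[j]))
        if alignment < cos_tol:
            flagged.append((i, j))
    return flagged
```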
188

FOUR-YEAR EVOLUTION OF BRAIN TISSUE INTEGRITY USING DIFFUSION TENSOR IMAGING IN MULTIPLE SCLEROSIS

Ontaneda, Daniel 27 August 2012 (has links)
No description available.
189

A Framework for Performance Optimization of Tensor Contraction Expressions

Lai, Pai-Wei January 2014 (has links)
No description available.
190

Object Trackers Performance Evaluation and Improvement with Applications using High-order Tensor

Pang, Yu January 2020 (has links)
Visual tracking is one of the fundamental problems in computer vision. The topic has been widely explored and has attracted a great amount of research effort. Over the decades, hundreds of visual tracking algorithms, or trackers for short, have been developed, and a large number of public datasets are available alongside them. As the number of trackers grows, how to evaluate which tracker is better becomes a common problem, and many metrics have been proposed together with numerous evaluation datasets. In my research work, we first present a tracking application: tracking multiple objects in a restricted scene at a very low frame rate. It poses a unique challenge in that the image quality is low and we cannot assume consecutive frames are close together in time. We design a framework that uses background subtraction and object detection, and then apply template matching to achieve tracking by detection. While exploring applications of tracking algorithms, we observed that when authors compare their proposed tracker with others, there are unavoidable subjective biases: it is non-trivial for authors to optimize other trackers, while they can reasonably tune their own tracker to its best. Our assumption is that authors typically run other trackers with default settings, so the performances of those trackers are less biased. We therefore apply a leave-their-own-tracker-out strategy to weigh the performances of the other trackers, and we derive four metrics to justify the results. Besides biases in evaluation, the datasets used as ground truth may not be perfect either. Because they are labeled by human annotators, they are prone to label errors, especially under partial visibility and deformation. We demonstrate some human errors in existing datasets and propose smoothing techniques to detect and correct them. We use a two-step adaptive image alignment algorithm to find the canonical view of a video sequence, and then apply different techniques to smooth the trajectories to varying degrees. The results show this can slightly improve the trained model, but it would overfit if overcorrected. Having developed a clear understanding of, and reasonable approaches to, the visual tracking scenario, we apply these principles to multi-target tracking. We formulate the task as a multi-dimensional assignment problem and encode the motion information in a high-order tensor framework. We propose to solve it using rank-1 tensor approximation and obtain the solution efficiently with a tensor power iteration algorithm. The approach applies to pedestrian tracking, aerial video tracking, and curvilinear structure tracking in medical video. Furthermore, the proposed framework also fits the affinity measurement of multiple objects simultaneously: we propose the Multiway Histogram Intersection to obtain similarities among the histograms of more than two targets. Using the tensor power iteration solution, we show the method can be applied in several multi-target tracking applications. / Computer and Information Science
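The rank-1 tensor approximation mentioned above can be computed with the classical higher-order power iteration; the sketch below is a generic 3-mode version under that standard formulation (variable names, the convergence tolerance, and the random initialization are assumptions, not taken from the dissertation).

```python
import numpy as np

def rank1_power_iteration(T, num_iters=100, tol=1e-8, rng=None):
    """Best rank-1 approximation of a 3-mode tensor T via higher-order power iteration.

    Returns unit factors (u, v, w) and a scalar lam such that
    T is approximately lam * outer(u, v, w).
    """
    rng = rng or np.random.default_rng(0)
    I, J, K = T.shape
    u, v, w = (rng.standard_normal(n) for n in (I, J, K))
    u, v, w = (x / np.linalg.norm(x) for x in (u, v, w))
    lam = 0.0
    for _ in range(num_iters):
        # Contract T against two of the factors to update the third, then normalize.
        u = np.einsum('ijk,j,k->i', T, v, w)
        u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w)
        v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v)
        lam_new = np.linalg.norm(w)
        w /= lam_new
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return u, v, w, lam
```

In a multi-target setting of this kind, the tensor entries are typically association affinities across frames, and the recovered factors serve as soft assignment scores that are subsequently discretized into track assignments.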
