21

Adaptive Curvature for Stochastic Optimization

January 2019 (has links)
abstract: This thesis presents a family of adaptive curvature methods for gradient-based stochastic optimization. In particular, a general algorithmic framework is introduced along with a practical implementation that yields an efficient, adaptive curvature gradient descent algorithm. To this end, a theoretical and practical link between curvature matrix estimation and shrinkage methods for covariance matrices is established. The use of shrinkage improves estimation accuracy of the curvature matrix when data samples are scarce. This thesis also introduces several insights that result in data- and computation-efficient update equations. Empirical results suggest that the proposed method compares favorably with existing second-order techniques based on the Fisher or Gauss-Newton matrix and with adaptive stochastic gradient descent methods on both supervised and reinforcement learning tasks. / Dissertation/Thesis / Masters Thesis Computer Science 2019
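The link between curvature estimation and covariance shrinkage can be illustrated with a minimal sketch (assuming a Ledoit-Wolf-style shrinkage toward a scaled identity; the thesis's actual estimator and update equations may differ):

```python
import numpy as np

def shrunk_curvature(grads, shrinkage=0.1):
    """Estimate a curvature matrix from a mini-batch of per-sample
    gradients, shrinking the empirical second-moment matrix toward a
    scaled identity to compensate for scarce samples."""
    G = np.asarray(grads)                     # shape: (n_samples, dim)
    C = G.T @ G / G.shape[0]                  # empirical curvature estimate
    mu = np.trace(C) / C.shape[0]             # average eigenvalue
    return (1 - shrinkage) * C + shrinkage * mu * np.eye(C.shape[0])

def precondition(grad, grads, damping=1e-3):
    """One adaptive-curvature descent direction: solve (C + damping*I) d = g."""
    C = shrunk_curvature(grads)
    return np.linalg.solve(C + damping * np.eye(C.shape[0]), grad)
```

With `shrinkage=0` this reduces to the raw empirical estimate; with `shrinkage=1` it degenerates to a scaled-identity preconditioner, i.e. plain (rescaled) gradient descent.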
22

Multiple surface segmentation using novel deep learning and graph based methods

Shah, Abhay 01 May 2017 (has links)
The task of automatically segmenting 3-D surfaces representing object boundaries is important in quantitative analysis of volumetric images, which plays a vital role in numerous biomedical applications. For the diagnosis and management of disease, segmentation of images of organs and tissues is a crucial step in the quantification of medical images. Segmentation finds the boundaries or, in the 3-D case, the surfaces that separate regions, tissues or areas of an image, and it is essential that these boundaries approximate the true boundary, as typically delineated by human experts, as closely as possible. Recently, graph-based methods with a global optimization property have been studied and used for various applications. Specifically, the state-of-the-art graph search (optimal surface segmentation) method has been successfully used for various such biomedical applications. Despite their widespread use for image segmentation, real-world medical image segmentation problems often pose difficult challenges in which graph-based segmentation methods in their purest form may not be able to perform the segmentation task successfully. This doctoral work has a twofold objective. 1) To identify medical image segmentation problems which are difficult to solve using existing graph-based methods, and to develop novel methods that employ graph search as a building block to improve segmentation accuracy and efficiency. 2) To develop a novel multiple surface segmentation strategy using deep learning which is more computationally efficient and generic than the existing graph-based methods, while eliminating the need for the human expert intervention required by current surface segmentation methods. The developed method is possibly the first of its kind that does not require any human-expert-designed operations.
To accomplish the objectives of this thesis work, a comprehensive framework of graph-based and deep learning methods is proposed to fulfill the following three aims. First, an efficient, automated and accurate graph-based method is developed to segment surfaces which have steep changes in surface profiles and abrupt distance changes between two adjacent surfaces. The developed method is applied and validated on intra-retinal layer segmentation of Spectral Domain Optical Coherence Tomography (SD-OCT) images of eyes with glaucoma, Age-Related Macular Degeneration and Pigment Epithelium Detachment. Second, a globally optimal graph-based method is developed to attain subvoxel and super-resolution accuracy for the multiple surface segmentation problem while imposing convex constraints. The developed method was applied to layer segmentation of SD-OCT images of normal eyes and to vessel walls in Intravascular Ultrasound (IVUS) images. Third, a deep learning based multiple surface segmentation method is developed which is more generic, more computationally efficient, and eliminates the human expert interventions (such as transformation design, feature extraction, parameter tuning and constraint modelling) required in varying capacities by existing surface segmentation methods. The developed method was applied to SD-OCT images of normal and diseased eyes to validate its superior segmentation performance, computational efficiency and generic nature compared to the state-of-the-art graph search method.
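As an illustration of the graph-search building block, a minimal 2-D dynamic-programming version of single-surface segmentation (one boundary row per image column, with a hard smoothness constraint) can be sketched as follows; the thesis's methods operate on 3-D volumes with richer constraints and multiple interacting surfaces:

```python
import numpy as np

def segment_surface(cost, max_delta=1):
    """Find the minimum-cost 'surface' (one row index per column) in a
    2-D cost image, subject to |row[c+1] - row[c]| <= max_delta,
    via dynamic programming over columns."""
    rows, cols = cost.shape
    dp = cost[:, 0].astype(float).copy()      # best cost ending at each row
    back = np.zeros((rows, cols), dtype=int)  # backpointers for path recovery
    for c in range(1, cols):
        new = np.full(rows, np.inf)
        for r in range(rows):
            lo, hi = max(0, r - max_delta), min(rows, r + max_delta + 1)
            j = lo + int(np.argmin(dp[lo:hi]))  # best feasible predecessor
            new[r] = dp[j] + cost[r, c]
            back[r, c] = j
        dp = new
    r = int(np.argmin(dp))
    path = [r]
    for c in range(cols - 1, 0, -1):          # walk the backpointers
        r = back[r, c]
        path.append(r)
    return path[::-1]
```

The globally optimal graph-search formulation solves the same kind of problem as a minimum-cost closed set / maximum-flow computation, which scales to 3-D and to multiple coupled surfaces.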
23

Extremal sextic truncated moment problems

Yoo, Seonguk 01 May 2011 (has links)
Inverse problems naturally occur in many branches of science and mathematics. An inverse problem entails finding the values of one or more parameters using the values obtained from observed data. A typical example of an inverse problem is the inversion of the Radon transform. Here a function (for example of two variables) is deduced from its integrals along all possible lines. This problem is intimately connected with image reconstruction for X-ray computerized tomography. Moment problems are a special class of inverse problems. While the classical theory of moments dates back to the beginning of the 20th century, the systematic study of truncated moment problems began only a few years ago. In this dissertation we will first survey the elementary theory of truncated moment problems, and then focus on those problems with cubic column relations. For a degree-2n real d-dimensional multisequence β ≡ β^(2n) = {β_i : i ∈ Z_+^d, |i| ≤ 2n} to have a representing measure μ, it is necessary for the associated moment matrix M(n) to be positive semidefinite, and for the algebraic variety V_β associated to β to satisfy rank M(n) ≤ card V_β, as well as the following consistency condition: if a polynomial p(x) ≡ Σ_{|i|≤2n} a_i x^i vanishes on V_β, then Λ(p) := Σ_{|i|≤2n} a_i β_i = 0. In 2005, Professor Raúl Curto collaborated with L. Fialkow and M. Möller to prove that in the extremal case (rank M(n) = card V_β), positivity and consistency are sufficient for the existence of a (unique, rank M(n)-atomic) representing measure. In joint work with Professor Raúl Curto we have considered cubic column relations in M(3) of the form (in complex notation) Z³ = itZ + uZ̄, where u and t are real numbers. For (u,t) in the interior of a real cone, we prove that the algebraic variety V_β consists of exactly 7 points, and we then apply the above-mentioned solution of the extremal moment problem to obtain a necessary and sufficient condition for the existence of a representing measure.
This requires a new representation theorem for sextic polynomials in Z and Z̄ which vanish on the 7-point set V_β. Our proof of this representation theorem relies on two successive applications of the Fundamental Theorem of Linear Algebra. Finally, we use the Division Algorithm from algebraic geometry to extend this result to other situations involving cubic column relations.
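The necessary conditions summarized in the abstract can be restated compactly (same notation as above):

```latex
% Necessary conditions for a representing measure \mu for
% \beta \equiv \beta^{(2n)} = \{\beta_i\}_{i \in \mathbb{Z}_+^d,\ |i| \le 2n}:
\begin{align*}
  &M(n) \succeq 0,
  \qquad
  \operatorname{rank} M(n) \le \operatorname{card} \mathcal{V}_\beta, \\
  &\text{(consistency)}\quad
  p(x) \equiv \sum_{|i| \le 2n} a_i x^i \ \text{vanishes on } \mathcal{V}_\beta
  \ \Longrightarrow\
  \Lambda(p) := \sum_{|i| \le 2n} a_i \beta_i = 0,
\end{align*}
% with the extremal case defined by
% \operatorname{rank} M(n) = \operatorname{card} \mathcal{V}_\beta.
```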
24

Thioredoxin and Oxidative Stress

Gregory, Mary Sarah-Jane, n/a January 2004 (has links)
The experiments described in this thesis involve the expression and characterisation of recombinant truncated thioredoxin (tTrx) and the potential involvement of thioredoxin (Trx) in the cellular responses to oxidative stress. Truncated Trx (80 amino acids) was expressed from a plasmid containing the ORF for tTrx that had been introduced into E. coli BL-21(DE3) cells. The protein was initially extracted using a combination of high concentrations of urea, high pH levels, and multiple sonication steps to remove the tTrx from inclusion bodies formed during expression. This procedure produced a stable solution of tTrx. Purification of tTrx from this protein solution required anion exchange chromatography followed by gel permeation in a HPLC system to obtain fully purified, recombinant tTrx, which allowed further characterisation studies to be undertaken. An initial investigation into tTrx was performed to determine some basic physical, biochemical and functional aspects of this hitherto relatively undefined protein. Analysis by sedimentation equilibrium indicated that freshly prepared tTrx forms a single species with a molecular weight of 18.8 kDa. This value indicates that recombinant tTrx naturally forms a dimer in solution, which was shown to be non-covalent in nature and stable. The capacity of tTrx to reduce protein disulphide bonds was determined using the insulin reduction assay. Results show that tTrx lacks this particular redox ability. The rate of oxidation at 4°C was analysed using free thiol determination, sedimentation equilibrium and SDS-PAGE patterning. Results indicated a steady rise in the degree of oxidation of tTrx over an eight-day period. After six days the oxidised protein consistently displayed the presence of intramolecular disulphide bonds. Covalently linked disulphide dimers and higher molecular weight oligomers were detectable after eight days of oxidation.
An investigation of the reducing capacity of the basic Trx system determined that fully oxidised tTrx was unable to act alone as a substrate for thioredoxin reductase (TR). However, when reduced Trx was added to the system, it appeared capable of acting as an electron donor to the oxidised tTrx in order to reduce disulphide groups. Recombinant tTrx was successfully radiolabelled with Trans 35S-methionine/cysteine for use in cell association studies. No evidence was found to indicate the presence of a receptor for tTrx on either MCF-7 or U-937 cells. The findings suggest that a low level of non-specific binding of tTrx to these cell lines occurs, rather than a classical ligand-binding mechanism, thus suggesting the absence of a cell surface receptor for tTrx. The role that Trx may play in the cellular responses to oxidative stress was also investigated. The chemical oxidants hydrogen peroxide (H2O2) and diamide were used to establish an in vitro model of oxidative stress for the choriocarcinoma cytotrophoblast cell line JEG-3. Cellular function was assessed in terms of membrane integrity, metabolic activity and the ability to synthesise new DNA following exposure to these oxidants. Results indicated that both agents were capable of causing cells to undergo oxidative stress without inducing immediate apoptosis or necrosis. Initially, JEG-3 cells exposed to 38μM or 75μM H2O2 or 100μM diamide were shown to display altered cell metabolism and DNA synthesis without loss of cell viability or membrane integrity. Cells were also shown to be capable of some short-term recovery but later lapsed into a more stressed state. Expression levels of Trx were studied to determine whether this type of chemical stress caused a change in intracellular protein levels. Both cELISA and western blotting results indicated that only cells exposed to 100μM diamide displayed any significant increase in Trx protein levels, after 6 or 8 hrs of exposure to the oxidant.
Further studies over a longer time-frame were also performed. These found that when JEG-3 cells were exposed to 18μM H2O2 or 200μM diamide over 12-48 hrs, a positive correlation between increasing endogenous Trx protein levels and a decline in cell proliferation was observed. Cytotrophoblast cells, which are responsible for implantation and placentation, are susceptible to oxidative stress in vivo, and their anti-oxidant capacity is fundamental to the establishment of pregnancy. The findings obtained during these studies suggest that Trx plays a role in this process.
25

Analysis of the phase space, asymptotic behavior and stability for heavy symmetric top and tippe top

Sköldstam, Markus January 2004 (has links)
<p>In this thesis we analyze the phase space of the heavy symmetric top and the tippe top. These tops are two examples of physical systems for which the usefulness of integrals of motion and invariant manifolds, in phase space analysis, can be illustrated.</p><p>In the case of the heavy symmetric top, simplified proofs of stability of the vertical rotation have been perpetuated by successive textbooks during the last century. In these proofs correct perturbations of integrals of motion are missing. This may seem harmless since the deduced threshold value for stability is correct. However, perturbations of first integrals are essential in rigorous proofs of stability of motions for both tops.</p><p>The tippe top is a toy that has the form of a truncated sphere equipped with a little peg. When spun fast on the spherical bottom its center of mass rises above its geometrical center and after a few seconds the top is spinning vertically on the peg. We study the tippe top through a sequence of embedded invariant manifolds to unveil the structure of the top's phase space. The last manifold, consisting of the asymptotic trajectories, is analyzed completely. We prove that trajectories in this manifold attract solutions in contact with the plane of support at all times and we give a complete description of their stability/instability properties for all admissible choices of model parameters and of the initial conditions.</p> / Report code: LiU-TEK-LIC-2004:35.
26

Multi-Mode Floating-Point Multiply-Add Fused Unit for Low-Power Applications

Yu, Kee-khuan 01 August 2011 (has links)
In digital signal processing and multimedia applications, floating-point (FP) multiplication and addition are the most commonly used operations, and FP multiplications are frequently followed by FP additions. Therefore, to achieve high performance and low cost, multiplication and addition are usually combined into a single unit, known as the FP Multiply-Add Fused (MAF) unit. Meanwhile, mobile devices are developing rapidly, and for such devices performance and power efficiency are major design concerns, so mechanisms that reduce energy consumption become increasingly important. We therefore propose a multi-mode FP MAF based on the concepts of iterative multiplication and truncated addition, which offers different operating modes with different error levels. The proposed MAF supports a total of seven modes: three for FP multiply-accumulate operations, and two each for single FP multiplication and single FP addition. The three multiply-accumulate modes have errors of 0%, 0.328% and 1.107%, where the 0% mode matches the standard IEEE 754 single-precision FP Multiply-Add Fused operation. For FP multiplication and FP addition, users can choose between error modes of 0% and 0.328%, and of 0% and 0.781%, respectively; again, the 0% modes match the standard IEEE 754 single-precision FP operations. Compared with a standard IEEE 754 single-precision FP MAF, the proposed multi-mode FP MAF architecture requires 4.5% less area at the cost of about 22% additional delay to achieve the multi-mode capability. To demonstrate the power efficiency of the proposed FP MAF, it is used to perform the FP MAF, FP multiplication, and FP addition operations in an RGB-to-YUV format conversion application.
Experimental results show that the proposed multi-mode FP MAF can significantly reduce power consumption when the modes with error are adopted.
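The accuracy/cost trade-off behind truncation can be illustrated with a toy fixed-point multiplier that zeroes low-order operand bits before multiplying (purely illustrative; the thesis's iterative multiplication and truncated addition scheme is more elaborate and operates on FP significands):

```python
def truncated_mult(a, b, frac_bits=16, kept_bits=8):
    """Fixed-point multiply that keeps only the top `kept_bits`
    fractional bits of each operand, trading accuracy for a smaller
    partial-product array in hardware."""
    drop = frac_bits - kept_bits
    ai = (int(a * (1 << frac_bits)) >> drop) << drop   # zero low bits
    bi = (int(b * (1 << frac_bits)) >> drop) << drop
    return (ai * bi) / (1 << (2 * frac_bits))

# Demo: a small relative error in exchange for fewer partial products.
exact = 0.712345 * 0.391234
approx = truncated_mult(0.712345, 0.391234)
rel_err = abs(exact - approx) / exact
```

In hardware, dropping operand or partial-product bits shrinks the multiplier array, which is where the area and power savings of the error-tolerant modes come from.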
27

Design of a Table-Driven Function Evaluation Generator Using Bit-Level Truncation Methods

Tseng, Yu-ling 30 August 2011 (has links)
Function evaluation is one of the key arithmetic operations in many applications, including 3D graphics and stereo. Among various designs of hardware-based function evaluators, piecewise polynomial approximation methods are the most popular; they interpolate the function curve in each sub-interval using polynomials, with the polynomial coefficients of each sub-interval stored in an entry of a ROM. Conventional piecewise methods usually determine the bit-widths of each ROM entry and of the multipliers and adders by analyzing the various error sources separately, including the polynomial approximation error, coefficient quantization errors, truncation errors of arithmetic operations, and the final rounding error. In this thesis, we present a new piecewise function evaluation design that considers all the error sources together. By combining all the error sources during approximation, quantization, truncation and rounding, we can efficiently reduce the area cost of the ROM and of the corresponding arithmetic units. The proposed method is applied to piecewise function evaluators with both uniform and non-uniform segmentation.
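A software sketch of the table-based approach (degree-1 segments with naively quantized coefficients, not the jointly optimized design proposed in the thesis) looks like:

```python
import math

def build_table(f, lo, hi, segments=64, coeff_bits=16):
    """Degree-1 piecewise approximation: per segment store quantized
    (c0, c1) so that f(x) ~ c0 + c1*(x - x0). A toy stand-in for the
    coefficient ROM of a hardware evaluator."""
    table, step, scale = [], (hi - lo) / segments, 1 << coeff_bits
    for s in range(segments):
        x0, x1 = lo + s * step, lo + (s + 1) * step
        c1 = (f(x1) - f(x0)) / step                 # secant slope
        c0 = f(x0)
        table.append((round(c0 * scale) / scale,    # coefficient quantization
                      round(c1 * scale) / scale))
    return table, lo, step

def evaluate(table_info, x):
    """Index the 'ROM' by segment, then evaluate the degree-1 polynomial."""
    table, lo, step = table_info
    s = min(int((x - lo) / step), len(table) - 1)
    c0, c1 = table[s]
    return c0 + c1 * (x - (lo + s * step))

tbl = build_table(math.sin, 0.0, math.pi / 2)
```

In hardware, each error source in this pipeline (approximation, coefficient quantization, arithmetic truncation, final rounding) consumes part of the accuracy budget, which is what the joint analysis in the thesis exploits.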
28

Design of a CORDIC Function Generator Using Table-Driven Function Evaluation with Bit-Level Truncation

Hsu, Wei-Cheng 10 September 2012 (has links)
Function evaluation is one of the key arithmetic operations in many applications, including 3D graphics and stereo. Among various designs of hardware-based function evaluation methods, piecewise polynomial approximation is the most popular approach; it interpolates the function curve in each sub-interval using polynomials, with the polynomial coefficients of each sub-interval stored in an entry of a lookup-table ROM. Conventional piecewise methods usually determine the bit-widths of each ROM entry and of the multipliers and adders by analyzing the various error sources separately, including the polynomial approximation error, coefficient quantization errors, truncation errors of arithmetic operations, and the final rounding error. In this thesis, we present a new piecewise function evaluation design that considers all the error sources together. By combining all the error sources during approximation, quantization, truncation and rounding, we can efficiently reduce the area cost of the ROM and of the corresponding arithmetic units in the design of CORDIC processors.
29

Improved Bit-Level Truncation with Joint Error Analysis for Table-Based Function Evaluation

Lin, Shin-hung 12 September 2012 (has links)
Function evaluation is often used in many science and engineering applications. In order to reduce computation time, different hardware implementations have been proposed to accelerate function evaluation. Table-based piecewise polynomial approximation is one of the major methods used in hardware function evaluation designs, requiring only simple hardware components to achieve the desired precision. The piecewise polynomial method approximates the original function values in each partitioned subinterval using low-degree polynomials with coefficients stored in look-up tables. Errors are inevitably introduced in such hardware implementations. Conventional error analysis in piecewise polynomial methods considers four types of error sources: the polynomial approximation error, the coefficient quantization error, the arithmetic truncation error, and the final rounding error. The typical design approach is to pre-allocate a maximum allowable error budget to each individual hardware component so that the total error induced by these individual errors satisfies the required bit accuracy. In this thesis, we present a new design approach that jointly considers all the error sources when designing the hardware components, including the look-up tables and the arithmetic units, so that the total area cost is reduced compared to previously published designs.
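The conventional per-component budgeting described above amounts to a feasibility check of the summed worst-case errors against the output precision (a simplified bound for illustration, not the thesis's joint analysis):

```python
def meets_accuracy(approx_err, quant_err, trunc_err, out_frac_bits):
    """Check whether the worst-case sum of the component errors plus
    the final rounding error (half an ulp) stays below one ulp of the
    output format, i.e. the result is faithfully rounded."""
    ulp = 2.0 ** (-out_frac_bits)
    total = approx_err + quant_err + trunc_err + 0.5 * ulp
    return total < ulp
```

Pre-allocating fixed fractions of the budget to each component is simple but conservative; jointly sizing the table and arithmetic bit-widths lets one component spend budget another does not need, which is the source of the area savings claimed above.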
30

The Comparison of Parameter Estimation with Application to Massachusetts Health Care Panel Study (MHCPS) Data

Huang, Yao-wen 03 June 2004 (has links)
In this paper we propose two simple algorithms to estimate the parameter vector β and the baseline survival function in the Cox proportional hazards model, with application to the Massachusetts Health Care Panel Study (MHCPS) data (Chappell, 1991), which are left-truncated and interval-censored. We find that, in the estimation of β and the baseline survival function, the Kaplan-Meier algorithm is uniformly better than the Empirical algorithm. The Kaplan-Meier algorithm is also uniformly more powerful than the Empirical algorithm in testing whether two groups of survival functions are the same. We also define a distance measure D and compare the performance of the two algorithms through β and D.
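For reference, the classical Kaplan-Meier product-limit estimator for right-censored data, which underlies the first algorithm, can be sketched as follows (the paper's left-truncated, interval-censored setting requires an extension of this building block):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate for right-censored data: at each
    distinct event time t, update S(t) *= (1 - d/n), where d is the
    number of events at t and n the number at risk just before t.
    `events` holds 1 for an observed event, 0 for censoring."""
    data = sorted(zip(times, events))
    at_risk, s, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t, j, d = data[i][0], i, 0
        while j < len(data) and data[j][0] == t:   # group ties at time t
            d += data[j][1]
            j += 1
        if d:                                      # only events change S
            s *= 1.0 - d / at_risk
            curve.append((t, s))
        at_risk -= (j - i)                         # events + censorings leave
        i = j
    return curve
```

For example, `kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])` drops the survival estimate at times 1, 2 and 4, while the censoring at time 3 only reduces the risk set.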