
Quantitative analysis and segmentation of knee MRI using layered optimal graph segmentation of multiple objects and surfaces

Kashyap, Satyananda 01 December 2016
Knee osteoarthritis is one of the most debilitating aging diseases, as it causes loss of cartilage in the knee joint. It reduces quality of life and increases the burden on health care costs. With no disease-modifying osteoarthritis drug currently available, there is an immediate need to understand the factors triggering the onset and progression of the disease. Developing robust segmentation techniques and quantitative analysis helps identify potential imaging-based biomarkers that indicate the onset and progression of osteoarthritis. This thesis developed knee MRI segmentation algorithms in 3D and longitudinal 3D (4D) based on the layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) framework. A hierarchical random forest classifier algorithm was developed to improve the cartilage cost functions of the LOGISMOS framework; the new cost function design significantly improved segmentation accuracy over existing state-of-the-art methods. As the disease progresses, more artifacts appear similar to cartilage in MRI. 4D LOGISMOS segmentation was therefore developed to simultaneously segment multiple time points of a single patient, incorporating information from earlier time points when the knee is still relatively healthy in the early stage of the disease. Our experiments showed consistently higher segmentation accuracy across all time points than 3D LOGISMOS segmentation of each time point alone. The proposed fully automated segmentation algorithms are not 100% accurate, especially for patients with severe osteoarthritis, and require interactive correction. An interactive technique called just-enough interaction (JEI) was developed, which adds a fast correction step to the automated LOGISMOS, speeding up corrections substantially over slice-by-slice manual editing while maintaining high accuracy. JEI editing modifies the graph nodes instead of the boundary surfaces of the bones and cartilage, providing globally optimally corrected results. 3D JEI was extended to 4D JEI, allowing simultaneous visualization and interaction across multiple time points of the same patient. Further quantitative analysis tools were developed to study thickness losses. A nomenclature-compliant sub-plate detection algorithm was developed to quantify thickness in the smaller load-bearing regions of the knee and help understand the varying rates of thickness loss across regions. Regression models were developed to predict thickness at a later follow-up from the thickness information available in the LOGISMOS segmentation of the patient's current MRI scans. In addition, non-cartilage-based imaging biomarker quantification was developed to analyze bone shape changes between progressing and non-progressing osteoarthritic populations; the algorithm quantified statistically significant local shape changes between the two populations. Overall, this work improved the state of the art in segmentation of the bones and cartilage of the femur and tibia. Interactive 3D and 4D JEI allow fast correction of the segmentations, significantly improving accuracy while performing many times faster than manual editing. Furthermore, the quantitative analysis tools robustly analyze the segmentations, providing measurable metrics of osteoarthritis progression.
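A minimal sketch of the learned-cost idea described above, assuming scikit-learn's random forest and synthetic node features; it illustrates turning classifier probabilities into graph node costs and is not the thesis implementation.

```python
# Illustrative sketch (not the author's code): use a random-forest
# classifier's probability output as a boundary cost for graph-based
# surface segmentation. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training set: one feature vector per graph node
# (e.g. intensity, gradient magnitude, distance to the bone surface).
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 1] > 0.5).astype(int)   # 1 = cartilage boundary, 0 = background

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# For new nodes along each graph column, convert the boundary
# probability into a cost: low cost where the classifier is confident.
X_nodes = rng.normal(size=(50, 3))
p_boundary = forest.predict_proba(X_nodes)[:, 1]
node_cost = 1.0 - p_boundary

# A graph-search stage (e.g. LOGISMOS) would then pick, per column, the
# surface position minimising the summed node costs subject to smoothness
# and inter-surface separation constraints.
print(node_cost[:5])
```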

Mechanical analysis of lung CT images using nonrigid registration

Cao, Kunlin 01 May 2012
Image registration plays an important role in pulmonary image analysis. Accurate registration is challenging when the lungs undergo large deformations. Registration results estimate local tissue movement and are useful for studying lung mechanical quantities. In this thesis, we propose a new registration algorithm and a registration scheme to solve lung CT matching problems. Approaches to studying lung function are discussed and demonstrated through a practical application. The overall objective of our project is to develop image registration techniques and analysis approaches that measure lung function at high resolution. We design a nonrigid volumetric registration algorithm to capture lung motion from a pair of intrasubject CT images acquired at different inflation levels. This registration algorithm preserves both parenchymal tissue volume and a vesselness measure, and is regularized by a linear elasticity cost. Validation methods for lung CT matching are introduced and used to evaluate the performance of different registration algorithms. Evaluation shows that the feature-based vesselness constraint efficiently improves registration accuracy near the lung boundaries and in the lung base. Meanwhile, a new scheme for solving complex registration problems is introduced that uses both surface and volumetric registration. The first step of this scheme registers the boundaries of the two images using surface registration. The resulting boundary displacements are extended to the entire ROI domain using the Element Free Galerkin Method (EFGM) based on weighted extended B-splines (WEB-splines). These displacement fields are used as initial conditions for the tissue volume- and vessel-preserving nonrigid registration over the object domain. Both B-splines and WEB-splines are used to parameterize the transformations. Our algorithms achieve high accuracy and provide reasonable lung function maps: the mean errors on landmarks, vessel locations, and fissure planes are on the order of 1 mm (sub-voxel level). Furthermore, we establish methods based on the registration-derived transformation to analyze mechanical quantities and measure regional lung function. The proposed registration method and lung function measurements are applied in a practical application to detect mechanical alterations in the lung following bronchoalveolar lavage, with satisfactory results that demonstrate the applicability of our approaches.
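One standard way to turn a registration-derived transformation into a regional lung function map is the determinant of the Jacobian of the transformation, which measures local volume change. The sketch below uses a synthetic displacement field and plain NumPy; it illustrates the general approach rather than the exact mechanical quantities computed in the thesis.

```python
# Hedged sketch: regional volume change from a displacement field via the
# Jacobian determinant of the transformation (identity + displacement gradient).
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """disp: displacement field of shape (3, Z, Y, X) in mm."""
    grads = [np.gradient(disp[i], *spacing) for i in range(3)]  # dU_i / dx_j
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)  # identity + gradient
    return np.linalg.det(J)

# Toy field: uniform 10% expansion along z only.
disp = np.zeros((3, 8, 8, 8))
disp[0] = 0.1 * np.arange(8)[:, None, None]
detJ = jacobian_determinant(disp)
print(detJ.mean())  # ~1.1 -> local volume expansion
```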

β-CATENIN REGULATION OF ADULT SKELETAL MUSCLE PLASTICITY

Wen, Yuan 01 January 2018
Adult skeletal muscle is highly plastic and responds readily to environmental stimuli. One of the most commonly used methods to study skeletal muscle adaptations is immunofluorescence microscopy. By analyzing images of adult muscle cells, also known as myofibers, one can quantify changes in skeletal muscle structure and function (e.g., hypertrophy and fiber type). Skeletal muscle samples are typically cut in transverse or cross sections, and antibodies against sarcolemmal or basal lamina proteins are used to label the myofiber boundaries. The quantification of hundreds to thousands of myofibers per sample is accomplished either manually or semi-automatically using generalized pathology software, and such approaches become exceedingly tedious. In the first study, I developed MyoVision, a robust, fully automated software package dedicated to skeletal muscle immunohistological image analysis. The software has been made freely available to muscle biologists to alleviate the burden of routine image analyses; to date, more than 60 technicians, students, postdoctoral fellows, faculty members, and others have requested it. Using MyoVision, I was able to accurately quantify the effects of β-catenin knockout on myofiber hypertrophy. In the second study, I tested the hypothesis that myofiber hypertrophy requires β-catenin to activate c-myc transcription and promote ribosome biogenesis. Recent evidence in both mice and humans suggests a close association between ribosome biogenesis and skeletal muscle hypertrophy. Using an inducible mouse model of skeletal myofiber-specific genetic knockout, I obtained evidence that β-catenin is important for myofiber hypertrophy, although its role in ribosome biogenesis appears to be dispensable for mechanical overload-induced muscle growth. Instead, β-catenin may be necessary for promoting the translation of growth-related genes through activation of ribosomal protein S6. Unexpectedly, we detected a novel, enhancing effect of myofiber β-catenin knockout on the resident muscle stem cells, or satellite cells: in the absence of myofiber β-catenin, satellite cells activate and proliferate earlier in response to mechanical overload. Consistent with the role of satellite cells in muscle repair, the enhanced recruitment of satellite cells led to significantly improved regeneration after chemical injury. The novelty of these findings resides in the fact that the genetic perturbation was extrinsic to the satellite cells, which is all the more surprising because the current literature focuses heavily on mechanisms intrinsic to satellite cells. As such, this model of myofiber β-catenin knockout may contribute significantly to a better understanding of the mechanisms of satellite cell priming, with implications for regenerative medicine.
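As a rough illustration of the kind of measurement MyoVision automates (labelling myofibers from a membrane stain and computing fiber cross-sectional areas), the scikit-image sketch below uses a synthetic binary stain; it is not MyoVision's actual code, and the size threshold is an arbitrary placeholder.

```python
# Minimal sketch: label fiber interiors from a binary membrane stain
# and measure per-fiber cross-sectional area (CSA).
import numpy as np
from skimage import measure, morphology

# Hypothetical binary image: True where the membrane stain
# (e.g. laminin/dystrophin) is positive, False inside fibers.
membrane = np.zeros((200, 200), dtype=bool)
membrane[::40, :] = True
membrane[:, ::40] = True

fibers = ~membrane                                        # fiber interiors
fibers = morphology.remove_small_objects(fibers, min_size=50)
labels = measure.label(fibers)

areas = [r.area for r in measure.regionprops(labels)]
print(f"{labels.max()} fibers, mean CSA = {np.mean(areas):.1f} px^2")
```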

Characterization of Computed Tomography Radiomic Features using Texture Phantoms

Shafiq ul Hassan, Muhammad 05 April 2018
Radiomics treats images as quantitative data and promises to improve cancer prediction in radiology and therapy response assessment in radiation oncology. However, a number of fundamental problems need to be solved before radiomic features can be applied in the clinic. The first step in computed tomography (CT) radiomic analysis is the acquisition of images using selectable acquisition and reconstruction parameters, and radiomic features have shown large variability due to variation in these parameters. It is therefore important to develop methods that address the variability in radiomic features attributable to each CT parameter. To this end, texture phantoms provide a stable geometry and stable Hounsfield units (HU) with which to characterize radiomic features with respect to acquisition and reconstruction parameters. In this project, normalization methods were developed to address these variability issues in CT radiomics using texture phantoms. In the first part of the project, variability in radiomic features due to voxel size was addressed. A voxel size resampling method is presented as a preprocessing step for imaging data acquired with variable voxel sizes; after resampling, the variability due to voxel size in 42 radiomic features was reduced significantly. A voxel size normalization is presented to address the intrinsic voxel-size dependence of some key radiomic features; after normalization, 10 features became robust as a function of voxel size. Some of these features had been identified as predictive biomarkers in diagnostic imaging or as useful in response assessment in radiation therapy, yet they were found to be intrinsically dependent on voxel size (which also implies dependence on lesion volume). Normalization factors were also developed to address the intrinsic dependence of texture features on the number of gray levels; after normalization, the variability due to gray levels in 17 texture features was reduced significantly. In the second part of the project, the voxel size and gray level (GL) normalizations developed in the phantom studies were tested on actual lung cancer tumors. Eighteen patients with non-small cell lung cancer and varying tumor volumes were studied and compared with phantom scans acquired on 8 different CT scanners. Eight out of 10 features showed high (Rs > 0.9) Spearman rank correlations with voxel size before normalization and low (Rs < 0.5) correlations after. Likewise, texture features were unstable (ICC < 0.6) before and highly stable (ICC > 0.9) after gray level normalization. This work showed that voxel size and GL normalizations derived from a texture phantom also apply to lung cancer tumors, and it highlights the importance and utility of investigating the robustness of CT radiomic features using CT texture phantoms. Another contribution of this work is the development of correction factors to address variability in radiomic features due to reconstruction kernels. Reconstruction kernels and tube current contribute to noise texture in CT, and most texture features were sensitive to the correlated noise texture produced by different kernels. In this work, the noise power spectrum (NPS) was measured on 5 CT scanners using a standard ACR phantom to quantify the correlated noise texture, and the variability in texture features across kernels was reduced by applying the NPS peak frequency and the region-of-interest (ROI) maximum intensity as correction factors.

Most texture features were radiation dose independent but strongly kernel dependent, as demonstrated by a significant shift in NPS peak frequency among kernels. After correction, the robustness of 19 features improved by 30% to 78%. In conclusion, most texture features are sensitive to imaging parameters such as reconstruction kernel, reconstruction field of view (FOV), and slice thickness, and all reconstruction parameters contribute to inherent noise in CT images. The problem can be partly solved by quantifying noise texture in CT radiomics using a texture phantom and an ACR phantom. Texture phantoms should be a prerequisite to patient studies, as they provide a stable geometry and HU distribution with which to characterize radiomic features and provide ground truth for multi-institutional validation studies.
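A hedged sketch of the two preprocessing steps described above, voxel-size resampling and gray-level quantization; the target spacing and number of gray levels are illustrative assumptions, not the values used in the study.

```python
# Illustrative preprocessing before texture-feature extraction:
# resample to a common voxel size, then rebin HU values into a
# fixed number of gray levels.
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a CT volume to a common (e.g. 1 mm) voxel size."""
    factors = [s / ns for s, ns in zip(spacing, new_spacing)]
    return zoom(volume, factors, order=1)

def quantize_gray_levels(volume, n_levels=64):
    """Rebin HU values into a fixed number of gray levels before
    computing texture matrices (GLCM, GLRLM, ...)."""
    vmin, vmax = volume.min(), volume.max()
    binned = np.floor((volume - vmin) / (vmax - vmin + 1e-9) * n_levels)
    return np.clip(binned, 0, n_levels - 1).astype(np.int32)

ct = np.random.normal(0, 50, size=(40, 64, 64))          # fake HU volume
iso = resample_isotropic(ct, spacing=(2.5, 0.98, 0.98))   # assumed original spacing (mm)
print(iso.shape, quantize_gray_levels(iso).max())
```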

Dynamic Modelling, Measurement and Control of Co-rotating Twin-Screw Extruders

Elsey, Justin Rae January 2003
Co-rotating twin-screw extruders are unique and versatile machines that are used widely in the plastics and food processing industries. Due to the large number of operating variables and design parameters available for manipulation and the complex interactions between them, it cannot be claimed that these extruders are currently being optimally utilised. The most significant improvement to the field of twin-screw extrusion would be through the provision of a generally applicable dynamic process model that is both computationally inexpensive and accurate. This would enable product design, process optimisation and process controller design to be performed cheaply and more thoroughly on a computer than can currently be achieved through experimental trials. This thesis is divided into three parts: dynamic modelling, measurement and control. The first part outlines the development of a dynamic model of the extrusion process which satisfies the above mentioned criteria. The dynamic model predicts quasi-3D spatial profiles of the degree of fill, pressure, temperature, specific mechanical energy input and concentrations of inert and reacting species in the extruder. The individual material transport models which constitute the dynamic model are examined closely for their accuracy and computational efficiency by comparing candidate models amongst themselves and against full 3D finite volume flow models. Several new modelling approaches are proposed in the course of this investigation. The dynamic model achieves a high degree of simplicity and flexibility by assuming a slight compressibility in the process material, allowing the pressure to be calculated directly from the degree of over-fill in each model element using an equation of state. Comparison of the model predictions with dynamic temperature, pressure and residence time distribution data from an extrusion cooking process indicates a good predictive capability. The model can perform dynamic step-change calculations for typical screw configurations in approximately 30 seconds on a 600 MHz Pentium 3 personal computer. The second part of this thesis relates to the measurement of product quality attributes of extruded materials. A digital image processing technique for measuring the bubble size distribution in extruded foams from cross sectional images is presented. It is recognised that this is an important product quality attribute, though difficult to measure accurately with existing techniques. The present technique is demonstrated on several different products. A simulation study of the formation mechanism of polymer foams is also performed. The measurement of product quality attributes such as bulk density and hardness in a manner suitable for automatic control is also addressed. This is achieved through the development of an acoustic sensor for inferring product attributes using the sounds emanating from the product as it leaves the extruder. This method is found to have good prediction ability on unseen data. The third and final part of this thesis relates to the automatic control of product quality attributes using multivariable model predictive controllers based on both direct and indirect control strategies. In the given case study, indirect control strategies, which seek to regulate the product quality attributes through the control of secondary process indicators such as temperature and pressure, are found to cause greater deviations in product quality than taking no corrective control action at all. 
Conversely, direct control strategies are shown to give tight control over the product quality attributes, provided that appropriate product quality sensors or inferential estimation techniques are available.
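The slight-compressibility idea described in the first part can be illustrated in a few lines: pressure in each model element follows from its degree of over-fill through a simple equation of state. The bulk modulus and fill degrees below are made-up numbers, not values from the thesis.

```python
# Sketch only: pressure from degree of over-fill via a linear
# equation of state for a slightly compressible process material.
import numpy as np

BULK_MODULUS = 2.0e9      # Pa, assumed effective stiffness of the melt
ATMOSPHERIC = 1.0e5       # Pa, pressure in partially filled elements

def element_pressure(fill_degree):
    """fill_degree: material volume / element volume for each element."""
    fill = np.asarray(fill_degree, dtype=float)
    over = np.clip(fill - 1.0, 0.0, None)   # only over-filled elements pressurise
    return ATMOSPHERIC + BULK_MODULUS * over

print(element_pressure([0.4, 1.0, 1.002, 1.01]))
```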

Filamentous cyanobacteria in the Baltic Sea - spatiotemporal patterns and nitrogen fixation

Almesjö, Lisa January 2007
Summer blooms of filamentous, diazotrophic cyanobacteria are typical of the Baltic Sea Proper and are dominated by Aphanizomenon sp. and the toxic Nodularia spumigena. Although they occur every summer, the blooms vary greatly in timing and spatial distribution, making monitoring difficult and imprecise. This thesis studies how the spatial variability of Baltic cyanobacterial blooms influences estimates of abundance, vertical and horizontal distribution, and N₂-fixation. Implications for sampling and monitoring of cyanobacterial blooms are also discussed.

The results of the thesis confirm the importance of diazotrophic cyanobacteria in providing nitrogen for summer production in the Baltic Proper. They also highlight the large spatial and temporal variation in these blooms and argue that improved spatial coverage and replication could make monitoring data more useful for demonstrating time trends and for identifying the factors regulating the blooms. The vertical distribution of Aphanizomenon and Nodularia was found to be spatially variable, probably as a combination of species-specific adaptations and ambient weather conditions. Vertical migration in Aphanizomenon was more important towards the end of summer and is probably regulated by a trade-off between P availability and light and temperature.

Diversifying Demining : An Experimental Crowdsourcing Method for Optical Mine Detection / Diversifiering av minröjning : En experimentell crowdsourcingmetod för optisk mindetektering

Andersson, David January 2008
This thesis explores the concept of crowdsourcing and the power of diversity, applied to optical mine detection. The idea is to complement computer algorithms with the human eye and the wide, diverse workforce available on the Internet to detect mines.

The theory of diversity in problem solving is discussed, in particular the Diversity Trumps Ability Theorem and the Diversity Prediction Theorem, and how they could be applied to tasks such as contrast interpretation and area reduction, respectively.

A simple contrast interpretation experiment was carried out comparing a lay crowd with a crowd of experts: both examined extracts from hyperspectral images and classified the number of objects or mines and the type of terrain. Due to the poor participation rate of the expert group and an erroneous experiment introduction, the experiment did not yield statistically significant results, so no conclusion is drawn.

Experiment improvements are proposed, as well as possible future applications. / Multi Optical Mine Detection System
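For readers unfamiliar with the Diversity Prediction Theorem mentioned above, the following toy calculation (invented numbers, not data from the experiment) shows the identity it states: the crowd's squared error equals the average individual squared error minus the diversity of the predictions.

```python
# Numerical check of the Diversity Prediction Theorem on toy data.
import numpy as np

truth = 12.0                                   # e.g. true number of mines in an image
predictions = np.array([9.0, 15.0, 11.0, 14.0, 10.0])

crowd = predictions.mean()
crowd_error = (crowd - truth) ** 2
avg_individual_error = ((predictions - truth) ** 2).mean()
diversity = ((predictions - crowd) ** 2).mean()

print(crowd_error, avg_individual_error - diversity)  # the two values are equal
```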

Camera Based Navigation : Matching between Sensor reference and Video image

Olgemar, Markus January 2008
Aircraft navigation typically relies on an Inertial Navigation System (INS) and a Global Navigation Satellite System (GNSS). In navigational warfare the GNSS can be jammed, so a third navigation system is needed. The system examined in this thesis is camera-based navigation: the position is determined from a video camera and a sensor reference. This thesis addresses the matching between the sensor reference and the video image.

Two methods have been implemented: normalized cross correlation and position determination through a homography. Normalized cross correlation creates a correlation matrix. The other method uses point correspondences between the images to determine a homography between them and obtains the position from the homography; the more point correspondences, the better the position determination.

The results have been promising: the methods found the correct position when the Euler angles of the UAV were known. Of the tested methods, normalized cross correlation performed best.
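A hedged sketch of the two matching approaches described above, using OpenCV with synthetic images and hand-picked point correspondences; it illustrates the general idea rather than the thesis implementation.

```python
# Sketch: (1) normalized cross correlation, (2) position via homography.
import cv2
import numpy as np

rng = np.random.default_rng(1)
reference = rng.integers(0, 255, (480, 640), dtype=np.uint8)   # sensor reference map
frame = reference[120:240, 200:360].copy()                     # simulated video image

# 1) Normalized cross correlation: slide the video image over the
#    reference and take the peak of the correlation surface.
ncc = cv2.matchTemplate(reference, frame, cv2.TM_CCORR_NORMED)
_, score, _, top_left = cv2.minMaxLoc(ncc)                     # expect (200, 120)

# 2) Homography from point correspondences (e.g. matched features):
#    maps video-image coordinates into the reference map.
pts_frame = np.float32([[10, 10], [150, 15], [140, 110], [15, 105]])
pts_ref = np.float32([[210, 130], [350, 135], [340, 230], [215, 225]])
H, inliers = cv2.findHomography(pts_frame, pts_ref, cv2.RANSAC, 3.0)
center = cv2.perspectiveTransform(np.float32([[[80, 60]]]), H)
print(score, top_left, center.ravel())
```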

Computer Assisted Coronary CT Angiography Analysis : Disease-centered Software Development

Wang, Chunliang January 2009
The substantial advances of coronary CTA have led to a boom in the use of this technique in recent years, which poses a challenge to radiologists through the increasing number of exams and the large amount of data for each patient. The main goal of this study was to develop a computer tool that facilitates coronary CTA analysis by combining knowledge of medicine and image processing. Firstly, a competing fuzzy connectedness tree algorithm was developed to segment the coronary arteries and extract centerlines for each branch. The new algorithm, an extension of the "virtual contrast injection" method, preserves the low-density soft tissue around the coronary arteries, which reduces the possibility of introducing false positive stenoses during segmentation. Secondly, this algorithm was implemented in open-source software in which multiple visualization techniques were integrated into an intuitive user interface to facilitate user interaction and provide good overviews of the processing results. Considerable effort was put into optimizing the computational speed of the algorithm to meet clinical requirements. Thirdly, an automatic seeding method that removes the rib cage and recognizes the aortic root was introduced into the interactive segmentation workflow to further minimize the user interaction required during post-processing. The automatic procedure is carried out as soon as the images are received, saving users time once they open the data. Vessel enhancement and quantitative 2D vessel contour analysis are also included in this version of the software. In our preliminary experience, visually accurate segmentation results for the major branches were achieved in 74 cases (42 cases reported in paper II and 32 cases in paper III) using our software with limited user interaction. On 128 branches of 32 patients, the average overlap between the centerline created in our software and the manually created reference standard was 96.0%, and the average distance between them was 0.38 mm, lower than the mean voxel size. The automatic procedure ran for 3-5 minutes as a single-threaded background application, and interactive processing took 3 minutes on average with the latest version of the software. In conclusion, the presented software provides fast and automatic coronary artery segmentation and visualization, and the accuracy of the centerline tracking was found to be acceptable when compared with manually created centerlines.
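The centerline evaluation reported above (overlap against a manually created reference and mean point-to-point distance) can be illustrated with the following sketch; the point sets are synthetic and the 1 mm overlap threshold is an assumption, not necessarily the criterion used in the papers.

```python
# Sketch of centerline evaluation: % overlap and mean distance
# between an automatic centerline and a manual reference.
import numpy as np
from scipy.spatial import cKDTree

auto = np.column_stack([np.linspace(0, 50, 200),
                        np.zeros(200), np.zeros(200)])                   # automatic centerline (mm)
reference = auto + np.random.default_rng(2).normal(0, 0.3, auto.shape)   # manual reference

dists, _ = cKDTree(reference).query(auto)     # nearest reference point per auto point
overlap = np.mean(dists < 1.0) * 100          # % of points within 1 mm (assumed threshold)
print(f"overlap {overlap:.1f}%, mean distance {dists.mean():.2f} mm")
```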

Three dimensional object recognition for robot conveyor picking

Wikander, Gustav January 2009
Shape-based matching (SBM) is a method for matching objects in greyscale images. It extracts edges from search images and matches them to a model using a similarity measure. In this thesis we extend SBM to find the tilt and height position of the object in addition to the z-plane rotation and x-y position. The search is conducted using a scale pyramid to improve search speed. A 3D match can be obtained for small tilt angles by using SBM on height data and extending it with additional steps that calculate the tilt of the object. The full pose is useful for picking objects with an industrial robot.

The tilt of the object is calculated using a RANSAC plane estimator. After the 2D search, the differences in height between all corresponding points of the model and the live image are calculated; by fitting a plane to these differences, the tilt of the object can be estimated. Using this tilt, the model edges are tilted to improve the matching at the next scale level.

The problems that arise from occlusion and missing data have been studied. Missing and erroneous data were thresholded manually after tests showed that automatic filling of missing data did not noticeably improve the matching; automatic filling can introduce new false edges and remove true ones, lowering the score.

Experiments were conducted with objects placed at increasing tilt angles. The results show that the matching algorithm is object dependent and that correct matches are almost always found for tilt angles of less than 10 degrees. This is very similar to the original 2D SBM, because the model edges do not change much for such small angles. For tilt angles up to about 25 degrees most objects can be matched, and for well-behaved objects correct matches can be obtained at tilt angles of up to 40 degrees.
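A rough sketch of the tilt-estimation step described above: fit a plane to the model-to-scene height differences with a simple RANSAC loop and read the tilt off the plane normal. The data, threshold, and iteration count are illustrative assumptions, not the thesis implementation.

```python
# Sketch: RANSAC plane fit z = a*x + b*y + c to height differences,
# then convert the plane slope into a tilt angle.
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.5, rng=np.random.default_rng(3)):
    """Fit z = a*x + b*y + c to (x, y, z) points, robust to outliers."""
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        A = np.column_stack([sample[:, 0], sample[:, 1], np.ones(3)])
        try:
            a, b, c = np.linalg.solve(A, sample[:, 2])
        except np.linalg.LinAlgError:
            continue                                   # degenerate (collinear) sample
        residuals = np.abs(points[:, 0] * a + points[:, 1] * b + c - points[:, 2])
        inliers = np.count_nonzero(residuals < threshold)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b, c), inliers
    return best_model

# Synthetic height differences: the object is tilted ~5.7 degrees about y.
rng = np.random.default_rng(3)
xy = rng.uniform(-50, 50, (300, 2))
z = 0.1 * xy[:, 0] + rng.normal(0, 0.2, 300)
a, b, c = ransac_plane(np.column_stack([xy, z]))
tilt_deg = np.degrees(np.arctan(np.hypot(a, b)))
print(f"estimated tilt: {tilt_deg:.1f} degrees")
```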
