411

Color Persistent Anisotropic Diffusion of Images

Åström, Freddie; Felsberg, Michael; Lenz, Reiner January 2011 (has links)
Techniques from the theory of partial differential equations are often used to design filter methods that are locally adapted to the image structure. These techniques are usually applied to gray-value images. The extension to color images is non-trivial, and the choice of an appropriate color space is crucial. The RGB color space is often used although it is known that the space of human color perception is best described in terms of non-Euclidean geometry, which is fundamentally different from the structure of the RGB space. Instead of the standard RGB space, we use a simple color transformation based on the theory of finite groups. It is shown that this transformation reduces the color artifacts originating from diffusion processes on RGB images. The developed algorithm is evaluated on a set of real-world images, and it is shown that our approach exhibits fewer color artifacts than state-of-the-art techniques. Our approach also preserves details in the image for a larger number of iterations. / Original Publication: Åström Freddie, Felsberg Michael and Lenz Reiner, Color Persistent Anisotropic Diffusion of Images, 2011, Image Analysis, SCIA conference, 23-27 May 2011, Ystad, Sweden, 262-272. http://dx.doi.org/10.1007/978-3-642-21227-7_25 Copyright: Springer
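As context for the kind of diffusion the paper adapts, the sketch below shows one explicit step of classic scalar (Perona-Malik) anisotropic diffusion on a single image channel. This is a generic illustration, not the paper's method: the group-theoretic color transform that is the paper's actual contribution is not reproduced, and the parameter names (`kappa`, `dt`) are illustrative choices.

```python
import numpy as np

def perona_malik_step(u, kappa=0.1, dt=0.2):
    """One explicit Perona-Malik diffusion step on a single image channel.

    A minimal sketch of scalar anisotropic diffusion; the paper's
    group-theoretic color transform is applied before diffusing and is
    not reproduced here.
    """
    # Finite-difference gradients toward the four neighbors.
    dN = np.roll(u, -1, axis=0) - u
    dS = np.roll(u, 1, axis=0) - u
    dE = np.roll(u, -1, axis=1) - u
    dW = np.roll(u, 1, axis=1) - u
    # Edge-stopping function: diffuse strongly in flat regions, weakly at edges.
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
```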
412

I bilderböckernas värld : En analys av text och bild i Sven Nordqvists böcker / In the World of Picture Books : An Analysis of Text and Image in Sven Nordqvist's Books

Masar, Terése January 2012 (has links)
The essay aims to highlight the picture-book illustrator as an artist and, above all, to characterize Sven Nordqvist's style. This is done through three different analyses: an iconotextual analysis investigating the interplay of text and image in Nordqvist's books; an image analysis seeking to identify distinctive stylistic traits in the books' illustrations; and a further analysis examining the relationship between the cover, endpapers, title page, and the story itself.
413

Människosonens beständighet : Bildanalys av två surrealistiska konstverk / The Persistence of the Son of Man : Image Analysis of Two Surrealist Artworks

Pettersson, Emma January 2013 (has links)
The purpose of this work is to describe, on the basis of the literature and image analysis, the artistic working methods of Salvador Dalí and René Magritte, and from this to create an original visual interpretation. The methods used are literature study, image analysis, and artistic interpretation. Salvador Dalí's method in The Persistence of Memory builds on boundlessness and fantasy, with distinct semiotic elements; the viewer decides what is seen, whether to take the paintings at face value and simply experience what is painted, or to sink into a world of innovation, blurred boundaries, and freedom from all rules. René Magritte's method in The Son of Man builds on natural and credible features; his painting appears real, yet with a touch of fantasy and only sparse semiotic elements.
414

Impact of Glycemic Therapy on Myocardial Sympathetic Neuronal Integrity and Left Ventricular Function in Insulin Resistant Diabetic Rats: Serial Evaluation by 11C-meta-Hydroxyephedrine Positron Emission Tomography

Thackeray, James 19 September 2012 (has links)
Diagnosis of diabetes mellitus, presence of hyperglycemia, and/or insulin resistance confer cardiovascular risk, particularly for diastolic dysfunction. Diabetes is associated with elevated myocardial norepinephrine (NE) content, enhanced sympathetic nervous system (SNS) activity, altered resting heart rate, and depressed heart rate variability. Positron emission tomography (PET) using the NE analogue [11C]meta-hydroxyephedrine ([11C]HED) provides an index of myocardial sympathetic neuronal integrity at the NE reuptake transporter (NET). The hypothesis of this project is that (i) hyperglycemia imparts heightened sympathetic tone and NE release, leading to abnormal sympathetic neuronal function in the hearts of diabetic rats, and (ii) these abnormalities may be reversed or prevented by treatments to normalize glycemia. Sprague Dawley rats were rendered insulin resistant by high fat feeding and diabetic by a single dose of streptozotocin (STZ). Diabetic rats were treated for 8 weeks with insulin, metformin or rosiglitazone, starting from either 1 week (prevention) or 8 weeks (reversal) after STZ administration. Sympathetic neuronal integrity was evaluated longitudinally by [11C]HED PET. Echocardiography measures of systolic and diastolic function were completed at serial timepoints. Plasma NE levels were evaluated serially and expression of NET and β-adrenoceptors were tested at the terminal endpoints. Diabetic rats exhibited a 52-57% reduction of [11C]HED standardized uptake value (SUV) at 8 weeks after STZ, with a parallel 2.5-fold elevation of plasma NE and a 17-20% reduction in cardiac NET expression. These findings were confirmed by ex vivo biodistribution studies. Transmitral pulse wave Doppler echocardiography established an extension of mitral valve deceleration time and elevated early to atrial velocity ratio, suggesting diastolic dysfunction. Subsequent treatment with insulin but not metformin restored glycemia, reduced plasma NE by 50%, normalized NET expression, and recovered [11C]HED SUV towards non-diabetic age-matched control. Diastolic dysfunction in these rats persisted. By contrast, early treatment with insulin, metformin, or rosiglitazone delayed the progression of diastolic dysfunction, but had no effect on elevated NE and reduced [11C]HED SUV in diabetic rats, potentially owing to a latent decrease in blood glucose. In conclusion, diabetes is associated with heightened circulating and tissue NE levels which can be effectively reversed by lowering glycemia with insulin. Noninvasive interrogation of sympathetic neuronal integrity using [11C]HED PET may have added value in the stratification of cardiovascular risk among diabetic patients and in determining the myocardial effects of glycemic therapy.
415

Implementation and evaluation of motion correction for quantitative MRI

Larsson, Jonatan January 2010 (has links)
Image registration is the process of aligning two images such that their mutual features overlap, which is of great importance in several medical applications. In 2008 a novel method for simultaneous T1, T2 and proton density quantification was suggested, belonging to the field of quantitative Magnetic Resonance Imaging (qMRI). In qMRI, parameters are quantified by a pixel-to-pixel fit of the image intensity as a function of different MR scanner settings. The quantification requires several volumes of different intensities to be aligned; if the patient moves during data acquisition, the datasets will not be aligned and the results are degraded. Since the quantification takes several minutes, there is a considerable risk of patient movement. In this master thesis, three image registration methods are presented and compared in terms of robustness and speed. The phase-based algorithm was tailored to this problem and limited to finding rigid motion. The other two registration algorithms, originating from the Statistical Parametric Mapping (SPM) package, were used as references. The results show that the pixel-to-pixel fit is greatly improved in the datasets where motion was found. In the comparison between the different methods, the phase-based algorithm turned out to be both the fastest and the most robust.
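To illustrate the frequency-domain idea behind phase-based registration, the sketch below estimates a pure translation between two images via classic phase correlation. It is a generic illustration under simplifying assumptions (integer shift, periodic boundaries); the thesis's phase-based algorithm handles full rigid motion, which this sketch does not.

```python
import numpy as np

def phase_correlation(fixed, moving):
    """Estimate the integer translation between two images via phase correlation.

    A minimal frequency-domain registration sketch; the thesis's phase-based
    method estimates full rigid motion, which is omitted here.
    """
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    # Normalized cross-power spectrum: magnitudes cancel, the phase difference remains.
    cross = F * np.conj(M)
    R = cross / (np.abs(cross) + 1e-12)
    corr = np.real(np.fft.ifft2(R))
    # The correlation peak gives the shift, wrapped by the FFT's periodicity.
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return dy, dx
```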
416

Improving cancer subtype diagnosis and grading using clinical decision support system based on computer-aided tissue image analysis

Chaudry, Qaiser Mahmood 02 January 2013 (has links)
This research focuses on the development of a clinical decision support system (CDSS) based on cellular and tissue image analysis and classification, with the goal of improving consistency and facilitating the clinical decision-making process. In a typical cancer examination, pathologists make a diagnosis by manually reading morphological features in patient biopsy images, in which cancer biomarkers are highlighted using different staining techniques. This process is subject to the pathologist's training and experience, especially when the same cancer has several subtypes (e.g., benign versus malignant) and the same cancer tissue biopsy contains heterogeneous morphologies in different locations. The variability in pathologists' manual readings may result in varying cancer diagnoses and treatments. This Ph.D. research aims to reduce the subjectivity and variation in traditional histopathological reading of patient tissue biopsy slides through Computer-Aided Diagnosis (CAD). Using CAD, quantitative molecular profiles of cancer biomarkers in stained biopsy images are obtained by extracting and analyzing texture and cellular structure features. In addition, cancer subtype classification and semi-automatic grade scoring (i.e., clinical decision making) can be performed with improved consistency over a large number of cancer subtype images. CAD tools have their own limitations, however, and in certain cases clinicians prefer systems that are flexible and account for their individuality by providing some control, rather than a fully automated system. Therefore, to introduce a CDSS into health care, we need to understand users' perspectives and preferences regarding the new information technology. This forms the basis for this research, in which we aim to present the quantitative information acquired through image analysis, annotate the images, and provide suitable visualization to facilitate decision making in a clinical setting.
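As one concrete example of the texture features such pipelines commonly extract, the sketch below computes a few Haralick-style gray-level co-occurrence (GLCM) descriptors for a tissue tile using scikit-image. This is a generic illustration, not the thesis's specific feature set or pipeline; the function name and the choice of 8 gray levels are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(gray_tile):
    """Compute a small set of Haralick-style texture features for one tissue tile.

    A generic sketch of texture feature extraction; the thesis's actual
    feature set and classifier are not reproduced here.
    """
    # Quantize to 8 gray levels to keep the co-occurrence matrix small and robust.
    levels = 8
    q = (gray_tile.astype(np.float64) / 256 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```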
417

Manifolds in Image Science and Visualization

Brun, Anders January 2007 (has links)
A Riemannian manifold is a mathematical concept that generalizes curved surfaces to higher dimensions, giving a precise meaning to concepts like angle, length, area, volume and curvature. A glimpse of the consequences of a non-flat geometry is given on the sphere, where the shortest path between two points – a geodesic – is along a great circle. Unlike in Euclidean space, the angle sum of geodesic triangles on the sphere is always larger than 180 degrees. Signals and data found in applied research are sometimes naturally described by such curved spaces. This dissertation presents basic research and tools for the analysis, processing and visualization of such manifold-valued data, with a particular emphasis on future applications in medical imaging and visualization. Two-dimensional manifolds, i.e. surfaces, enter naturally into the geometric modelling of anatomical entities, such as the human brain cortex and the colon. In advanced algorithms for processing of images obtained from computed tomography (CT) and ultrasound imaging (US), images themselves and derived local structure tensor fields may be interpreted as two- or three-dimensional manifolds. In diffusion tensor magnetic resonance imaging (DT-MRI), the natural description of diffusion in the human body is a second-order tensor field, which can be related to the metric of a manifold. A final example is the analysis of shape variations of anatomical entities, e.g. the lateral ventricles in the brain, within a population by describing the set of all possible shapes as a manifold. Work presented in this dissertation includes: a probabilistic interpretation of intrinsic and extrinsic means in manifolds; a Bayesian approach to filtering of vector data, removing noise from sampled manifolds and signals; and principles for the storage of tensor field data and learning a natural metric for empirical data. The main contribution is a novel class of algorithms called LogMaps, for the numerical estimation of log_p(x) from empirical data sampled from a low-dimensional manifold or geometric model embedded in Euclidean space. The log_p(x) function has been used extensively in the literature for processing data in manifolds, including applications in medical imaging such as shape analysis. However, previous approaches have been limited to manifolds where closed-form expressions of log_p(x) have been known. The introduction of the LogMap framework allows for a generalization of the previous methods. The application of LogMaps to texture mapping, tensor field visualization, medial locus estimation and exploratory data analysis is also presented. / The electronic version is corrected for grammatical and spelling errors.
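For intuition about what log_p(x) computes, the sketch below gives the well-known closed-form log map on the unit sphere — one of the cases with a known closed form that the LogMap framework generalizes beyond to empirically sampled manifolds. The function name is an illustrative choice, not the dissertation's notation.

```python
import numpy as np

def sphere_log(p, x):
    """Closed-form log map on the unit sphere S^n embedded in R^(n+1).

    Returns the tangent vector at p pointing toward x whose length equals
    the geodesic distance; a known closed-form case that LogMaps generalize
    to empirically sampled manifolds.
    """
    p = p / np.linalg.norm(p)
    x = x / np.linalg.norm(x)
    cos_theta = np.clip(np.dot(p, x), -1.0, 1.0)
    theta = np.arccos(cos_theta)      # geodesic distance on the unit sphere
    if np.isclose(theta, 0.0):
        return np.zeros_like(p)       # log_p(p) = 0
    # Project x onto the tangent plane at p, then rescale to geodesic length.
    v = x - cos_theta * p
    return theta * v / np.linalg.norm(v)
```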
418

Methods and models for 2D and 3D image analysis in microscopy, in particular for the study of muscle cells / Metoder och modeller för två- och tredimensionell bildanalys inom mikroskopi, speciellt med inriktning mot muskelceller

Karlsson Edlund, Patrick January 2008 (has links)
Many research questions in biological research lead to numerous microscope images that need to be evaluated. Here, digital image cytometry, i.e., quantitative, automated or semi-automated analysis of the images, is an important and rapidly growing discipline, and this thesis presents contributions to that field. The work has been carried out in close cooperation with biomedical research partners, successfully solving real-world problems. The world is 3D, and modern imaging methods such as confocal microscopy provide 3D images. Hence, a large part of the work has dealt with the development of new and improved methods for quantitative analysis of 3D images, in particular of fluorescently labeled skeletal muscle cells. A geometrical model for robust segmentation of skeletal muscle fibers was developed. Images of the multinucleated muscle cells were pre-processed using a novel spatially modulated transform, producing images with reduced complexity and facilitating easy nuclei segmentation. Fibers from several mammalian species were modeled, and features were computed based on cell nuclei positions. Features such as myonuclear domain size and nearest-neighbor distance were shown to correlate with body mass and femur length. Human muscle fibers from young and old males and females were related to fiber type and the extracted features, where myonuclear domain size variation was shown to increase with age irrespective of fiber type and gender. A segmentation method for severely clustered point-like signals was developed and applied to images of fluorescent probes, quantifying the amount and location of mitochondrial DNA within cells. A synthetic cell model was developed to provide a controllable gold standard for performance evaluation of both expert manual and fully automated segmentations; the proposed method matches the correctness achieved by manual quantification. An interactive segmentation procedure was successfully applied to treated testicle sections of boar, showing how a common industrial plastic softener significantly affects testosterone concentrations.
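As a small illustration of one positional feature mentioned above, the sketch below computes the nearest-neighbor distance for each nucleus from its 3D centroid coordinates. It is a generic sketch, not the thesis's implementation; the thesis's full feature set (e.g., myonuclear domain size from a fiber model) is not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_distances(nuclei_xyz):
    """Nearest-neighbor distance for each nucleus, given an (N, 3) array of
    3D centroid positions.

    A generic sketch of one positional feature; the thesis's full feature
    set is not reproduced here.
    """
    tree = cKDTree(nuclei_xyz)
    # k=2 because the closest point to each nucleus is itself (distance 0).
    dists, _ = tree.query(nuclei_xyz, k=2)
    return dists[:, 1]
```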
419

Graphical Model Inference and Learning for Visual Computing

Komodakis, Nikos 08 July 2013 (has links) (PDF)
Computational vision and image analysis is a multidisciplinary scientific field that aims to make computers "see" in a way comparable to human perception. It is currently one of the most challenging research areas in artificial intelligence. In this regard, the extraction of information from the vast amount of visual data available today, as well as the exploitation of the resulting information space, is one of the greatest challenges of our time. To address this challenge, this thesis describes a very general computational framework for performing efficient inference and learning for visual perception based on very rich and powerful models.
420

Evaluation of FFT Based Cross-Correlation Algorithms for Particle Image Velocimetry

Gilbert, Ross January 2002 (has links)
In the current study, the four most common Particle Image Velocimetry (PIV) cross-correlation algorithms were evaluated by measuring the displacement of particles in computer-generated images. The synthetic images were employed to compare the methods since the particle diameter, density, and intensity could be controlled, removing some of the uncertainty found in images collected during experiments, e.g., parallax, 3-D motion, etc. The most important parameter controlled in the synthetic images was the particle motion. Six different displacement functions were applied to move the particles between images: uniform translation, step, sawtooth, sinusoid, line source and line vortex. The four algorithms, which all use the fast Fourier transform (FFT) to perform the cross-correlation, were evaluated against four criteria: (1) spatial resolution, (2) dynamic range, (3) accuracy and (4) robustness. The uniform translation images determined the least error possible with each method, of which the deformed FFT proved to be the most accurate. The super resolution FFT and deformed FFT methods could not properly measure the infinite displacement gradient in the step images, due to the interpolation of the displacement vector field used by each method around the step. However, the predictor-corrector FFT scheme, which does not require interpolation when determining the interrogation area offset, successfully measured the infinite displacement gradient in the step images. The smaller interrogation areas used by the super resolution FFT scheme proved to be the best at capturing the high-frequency finite displacement gradients in the sawtooth and sinusoid images. Also shown in the sawtooth and sinusoid images is the positional bias error introduced by assuming the measured particle displacement occurs at the centre of the interrogation area. The deformed FFT method produced the most accurate results for the source and vortex images, which both contained displacement gradients in multiple directions. Experimentally obtained images were also evaluated to verify the results derived from the synthetic images. The flow in a multiple-grooved channel, using both water and air as the fluid medium in separate experiments, was measured and compared to DNS simulations reported by Yang. The mean velocity, average vorticity and turbulent fluctuations determined from both experiments using the deformed FFT method compared very well to the DNS calculations.
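The sketch below shows the basic FFT-based cross-correlation step that all four evaluated algorithms share: correlate a pair of interrogation windows via the convolution theorem and take the correlation peak as the mean particle displacement. It is a minimal illustration under simplifying assumptions (integer peak, no window deformation, offset prediction, or subpixel fit — the refinements that distinguish the four schemes).

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Estimate the mean particle displacement between two interrogation
    windows using FFT-based cross-correlation.

    A minimal sketch of the common core of the four evaluated algorithms;
    their distinguishing refinements are omitted.
    """
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    # Circular cross-correlation via the convolution theorem.
    corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    corr = np.fft.fftshift(corr)
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    # Displacement is the peak offset from the window centre.
    dy = peak_y - a.shape[0] // 2
    dx = peak_x - a.shape[1] // 2
    return dy, dx
```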
