  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
  Our metadata is collected from universities around the world.
91

Novel dictionary learning algorithm for accelerating multi-dimensional MRI applications

Bhave, Sampada Vasant 01 December 2016 (has links)
The clinical utility of multi-dimensional MRI applications like multi-parameter mapping and 3D dynamic lung imaging is limited by long acquisition times. Quantification of multiple tissue MRI parameters has been shown to be useful for early detection and diagnosis of various neurological diseases and psychiatric disorders; it also provides useful information about disease progression and treatment efficacy. Dynamic lung imaging enables the diagnosis of abnormalities in respiratory mechanics in dyspnea and of regional lung function in pulmonary diseases such as chronic obstructive pulmonary disease (COPD) and asthma. However, the need to acquire multiple contrast-weighted images (in multi-parameter mapping) or multiple time points (in dynamic lung imaging) increases scan time considerably, making these applications less practical in the clinical setting. To achieve reasonable scan times, there are often trade-offs between SNR and resolution. Since most MRI images are sparse in a known transform domain, they can be recovered from fewer samples. Several compressed sensing schemes have been proposed that exploit the sparsity of the signal in pre-determined transform domains (e.g., the Fourier transform) or exploit the low-rank structure of the data. However, these methods perform sub-optimally in the presence of inter-frame motion, since the pre-determined dictionary does not account for the motion and the rank of the data is considerably higher. Such methods rely on a two-step approach: they first estimate the dictionary from low-resolution data, and then, using these basis functions, estimate the coefficients by fitting the measured data to the signal model. The main focus of this thesis is accelerating multi-parameter mapping and 3D dynamic lung imaging to achieve the desired volume coverage and spatio-temporal resolution. 
We propose a novel dictionary learning framework, the blind compressed sensing (BCS) scheme, to recover the underlying data from undersampled measurements; the underlying signal is represented as a sparse linear combination of basis functions from a learned dictionary. We also provide an efficient implementation using a variable splitting technique that reduces the computational complexity by up to 15-fold. In both multi-parameter mapping and 3D dynamic lung imaging, comparisons of the BCS scheme with other schemes indicate superior performance, as it provides a richer representation of the data. The BCS reconstructions yield high-accuracy parameter maps for parameter imaging and diagnostically relevant image series for characterizing respiratory mechanics in pulmonary imaging.
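The alternation at the heart of such blind dictionary-learning schemes — update the sparse coefficients with the dictionary fixed, then update the dictionary with the coefficients fixed — can be illustrated with a toy sketch. This is plain NumPy on fully-sampled data, using a single ISTA step for the sparse-coding stage; all function names are ours, and it illustrates the general idea rather than the thesis's actual BCS algorithm or its variable-splitting implementation:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink entries toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def blind_sparse_factorization(X, n_atoms=8, lam=0.1, n_iter=50, seed=0):
    """Toy alternating minimization for X ~ D @ U with sparse U.

    A simplified stand-in for the blind compressed sensing idea: both
    the dictionary D and the sparse coefficients U are learned from the
    data itself rather than fixed in advance.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    D = rng.standard_normal((n, n_atoms))
    D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
    U = np.zeros((n_atoms, m))
    for _ in range(n_iter):
        # Sparse-coding step: one ISTA update on U with D fixed.
        step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)
        U = soft_threshold(U - step * D.T @ (D @ U - X), step * lam)
        # Dictionary step: least squares on D with U fixed, then renormalize.
        D = X @ np.linalg.pinv(U)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, U
```

In the actual undersampled setting the data-fidelity term would compare Fourier samples of D @ U against the measurements, but the alternation structure is the same.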
92

Effects of using corpora and online reference tools on foreign language writing: a study of Korean learners of English as a second language

Koo, Kyosung 01 January 2006 (has links)
The general aim of this study is to better understand aspects of using reference tools for writing and to identify technologies that can assist foreign language writers. The specific purpose of this study is to look closely at how English as a Second Language (ESL) students from Korea use a corpus as a reference tool in conjunction with dictionaries when paraphrasing English newspaper articles. The participants were Korean graduate students with advanced English proficiency (N=10). Their task was to paraphrase an English newspaper article. The results show that purposes for using a concordancing program include collocations, definitions, context, and parts of speech. The subjects looked for a variety of information in a concordancing program, including prepositions, authentic samples, and the context in which the search terms were used. Reasons for using dictionaries include definitions, parts of speech, and sample sentences. The most common strategy was to combine reference tools, while the second most common was to use a specific search word. Subjects who used more than one tool for a search or performed multiple searches were more successful in finding what they were looking for. A concordancing program enabled users to see multiple examples of everyday language use. By using the concordancing program, learners were able to see words that were used most frequently, their patterns, and collocations. Learners took more responsibility for their language learning, as they became researchers in their own right. They gained confidence as L2 writers as they had inside access to linguistic resources. The subjects became more independent and were able to solve their own writing and linguistic problems as they became more aware through the use of authentic texts. In this study, the subjects found the corpora to be useful for sentence-level composition and revision. Overall, the use of reference tools led to an improvement in the accuracy of writing. 
A concordancing program played an important role in defining the structure and context of English phrases and sentences.
93

Novel adaptive reconstruction schemes for accelerated myocardial perfusion magnetic resonance imaging

Lingala, Sajan Goud 01 December 2013 (has links)
Coronary artery disease (CAD) is one of the leading causes of death in the world. In the United States alone, it is estimated that approximately every 25 seconds a new CAD event occurs, and approximately every minute someone dies of one. Detecting CAD in its early stages is critical to reducing mortality rates. Magnetic resonance imaging of myocardial perfusion (MR-MPI) has received significant attention over the last decade due to its ability to provide a unique view of microcirculatory blood flow in the myocardial tissue through the coronary vascular network. The ability of MR-MPI to detect changes in microcirculation during the early stages of ischemic events makes it a useful tool for identifying myocardial tissue that is alive but at risk of dying. However, this technique is not yet fully established clinically due to fundamental limitations imposed by MRI device physics. The limitations of current MRI schemes often make it challenging to simultaneously achieve high spatio-temporal resolution, sufficient spatial coverage, and good image quality in myocardial perfusion MRI. Furthermore, the acquisitions are typically set up to acquire images during breath holding, which often results in motion artifacts due to improper breath-hold patterns. This dissertation develops novel image reconstruction methods, in conjunction with non-Cartesian sampling, for reconstructing dynamic MRI data from highly accelerated / under-sampled Fourier measurements. The reconstruction methods are based on adaptive signal models that represent the dynamic data using few model coefficients. Three novel adaptive reconstruction methods are developed and validated: (a) low-rank and sparsity based modeling, (b) blind compressed sensing, and (c) motion-compensated compressed sensing. The developed methods are applicable to a wide range of dynamic imaging problems. 
In the context of MR-MPI, this dissertation demonstrates that the developed methods can enable free-breathing myocardial perfusion MRI acquisitions with high spatio-temporal resolution (< 2 mm x 2 mm, 1 heartbeat) and slice coverage (up to 8 slices).
94

Cardiac motion estimation in ultrasound images using a sparse representation and dictionary learning / Estimation du mouvement cardiaque en imagerie ultrasonore par représentation parcimonieuse et apprentissage de dictionnaire

Ouzir, Nora 16 October 2018 (has links)
Les maladies cardiovasculaires sont de nos jours un problème de santé majeur. L'amélioration des méthodes liées au diagnostic de ces maladies représente donc un réel enjeu en cardiologie. Le coeur étant un organe en perpétuel mouvement, l'analyse du mouvement cardiaque est un élément clé pour le diagnostic. Par conséquent, les méthodes dédiées à l'estimation du mouvement cardiaque à partir d'images médicales, plus particulièrement en échocardiographie, font l'objet de nombreux travaux de recherches. Cependant, plusieurs difficultés liées à la complexité du mouvement du coeur ainsi qu'à la qualité des images échographiques restent à surmonter afin d'améliorer la qualité et la précision des estimations. Dans le domaine du traitement d'images, les méthodes basées sur l'apprentissage suscitent de plus en plus d'intérêt. Plus particulièrement, les représentations parcimonieuses et l'apprentissage de dictionnaires ont démontré leur efficacité pour la régularisation de divers problèmes inverses. Cette thèse a ainsi pour but d'explorer l'apport de ces méthodes, qui allient parcimonie et apprentissage, pour l'estimation du mouvement cardiaque. Trois principales contributions sont présentées, chacune traitant différents aspects et problématiques rencontrées dans le cadre de l'estimation du mouvement en échocardiographie. Dans un premier temps, une méthode d'estimation du mouvement cardiaque se basant sur une régularisation parcimonieuse est proposée. Le problème d'estimation du mouvement est formulé dans le cadre d'une minimisation d'énergie, dont le terme d'attache aux données est construit avec l'hypothèse d'un bruit de Rayleigh multiplicatif. Une étape d'apprentissage de dictionnaire permet une régularisation exploitant les propriétés parcimonieuses du mouvement cardiaque, combinée à un terme classique de lissage spatial. Dans un second temps, une méthode robuste de flux optique est présentée. 
L'objectif de cette approche est de robustifier la méthode d'estimation développée au premier chapitre de manière à la rendre moins sensible aux éléments aberrants. Deux régularisations sont mises en oeuvre, imposant d'une part un lissage spatial et de l'autre la parcimonie des champs de mouvements dans un dictionnaire approprié. Afin d'assurer la robustesse de la méthode vis-à-vis des anomalies, une stratégie de minimisation récursivement pondérée est proposée. Plus précisément, les fonctions employées pour cette pondération sont basées sur la théorie des M-estimateurs. Le dernier travail présenté dans cette thèse explore une méthode d'estimation du mouvement cardiaque exploitant une régularisation parcimonieuse combinée à un lissage à la fois dans les domaines spatial et temporel. Le problème est formulé dans un cadre général d'estimation de flux optique. La régularisation temporelle proposée impose des trajectoires de mouvement lisses entre images consécutives. De plus, une méthode itérative d'estimation permet d'incorporer les trois termes de régularisations, tout en rendant possible le traitement simultané d'un ensemble d'images. Dans cette thèse, les contributions proposées sont validées en employant des images synthétiques et des simulations réalistes d'images ultrasonores. Ces données avec vérité terrain permettent d'évaluer la précision des approches considérées, et de souligner leur compétitivité par rapport à des méthodes de l'état de l'art. Pour démontrer la faisabilité clinique, des images in vivo de patients sains ou atteints de pathologies sont également considérées pour les deux premières méthodes. Pour la dernière contribution de cette thèse, i.e., exploitant un lissage temporel, une étude préliminaire est menée en utilisant des données de simulation. / Cardiovascular diseases have become a major healthcare issue. Improving the diagnosis and analysis of these diseases has thus become a primary concern in cardiology. 
The heart is a moving organ that undergoes complex deformations. Therefore, the quantification of cardiac motion from medical images, particularly ultrasound, is a key part of the techniques used for diagnosis in clinical practice. Thus, significant research efforts have been directed toward developing new cardiac motion estimation methods. These methods aim at improving the quality and accuracy of the estimated motions. However, they are still facing many challenges due to the complexity of cardiac motion and the quality of ultrasound images. Recently, learning-based techniques have received a growing interest in the field of image processing. More specifically, sparse representations and dictionary learning strategies have shown their efficiency in regularizing different ill-posed inverse problems. This thesis investigates the benefits that such sparsity and learning-based techniques can bring to cardiac motion estimation. Three main contributions are presented, investigating different aspects and challenges that arise in echocardiography. Firstly, a method for cardiac motion estimation using a sparsity-based regularization is introduced. The motion estimation problem is formulated as an energy minimization, whose data fidelity term is built using the assumption that the images are corrupted by multiplicative Rayleigh noise. In addition to a classical spatial smoothness constraint, the proposed method exploits the sparse properties of the cardiac motion to regularize the solution via an appropriate dictionary learning step. Secondly, a fully robust optical flow method is proposed. The aim of this work is to take into account the limitations of ultrasound imaging and the violations of the regularization constraints. In this work, two regularization terms imposing spatial smoothness and sparsity of the motion field in an appropriate cardiac motion dictionary are also exploited. 
In order to ensure robustness to outliers, an iteratively re-weighted minimization strategy is proposed using weighting functions based on M-estimators. As a last contribution, we investigate a cardiac motion estimation method using a combination of sparse, spatial and temporal regularizations. The problem is formulated within a general optical flow framework. The proposed temporal regularization enforces smoothness of the motion trajectories between consecutive images. Furthermore, an iterative groupwise motion estimation allows us to incorporate the three regularization terms, while enabling the processing of the image sequence as a whole. Throughout this thesis, the proposed contributions are validated using synthetic and realistic simulated cardiac ultrasound images. These datasets with available ground truth are used to evaluate the accuracy of the proposed approaches and show their competitiveness with state-of-the-art algorithms. In order to demonstrate clinical feasibility, in vivo sequences of healthy and pathological subjects are considered for the first two methods. A preliminary investigation is conducted for the last contribution, i.e., exploiting temporal smoothness, using simulated data.
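The iteratively re-weighted minimization described above can be illustrated in isolation with a small robust-regression sketch. This hedged NumPy example uses Huber-type M-estimator weights on a toy linear model; it stands in for the general IRLS idea, not the authors' optical-flow implementation, and all names are ours:

```python
import numpy as np

def huber_weights(r, delta=1.0):
    # M-estimator weights: quadratic loss near zero, linear in the tails,
    # so large residuals (outliers) are progressively down-weighted.
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > delta
    w[mask] = delta / a[mask]
    return w

def irls(A, b, n_iter=30, delta=1.0):
    """Iteratively re-weighted least squares for robust linear fitting.

    Each iteration solves a weighted least-squares problem whose weights
    come from the current residuals, shrinking the influence of outliers.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS initialization
    for _ in range(n_iter):
        r = A @ x - b
        w = huber_weights(r, delta)
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x
```

In the motion-estimation setting, A and b would come from the linearized optical-flow data term, and the same re-weighting would also be applied to the regularization residuals.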
95

Lärande bildlexikon : Ett interaktivt sätt att lära sig

Gustafsson, Jessica January 2009 (has links)
Ett bildlexikon ger en bildlig stimulans och gör lärande till en interaktiv lek. Men bildlexikon är inte enbart för barn, de skapas även för ungdomar och vuxna. De kan berätta historier och sagor på ett underhållande sätt, även återge historia. Deras syfte bestäms enbart av deras skapare. Det denna rapport handlar om är skapandet av ett bildlexikon. Syftet med detta lexikon är att utbilda yngre människor inom ämnet vardag, allt som man kan komma att stöta på inom vardagen finns i detta lexikon. Det företag som står för detta bildlexikon är Euroway Media Business AB, de har länge jobbat med hemsidor men vill nu ge sig in på nya territorier där de kan utöka sina kunskaper. Bildlexikonet kommer sedan att göras om till en applikation, i ett senare projekt, och integreras på en hemsida som för tillfället är under konstruktion. Det som är viktigt att tänka på vid skapandet av ett bildlexikon är att ha en röd tråd igenom hela arbetet, göra många undersökningar för att hålla koll på att man är på rätt spår och till sist vara kreativ. Olika väl använda metoder – som till exempel enkäter och persona - kom att användas under projektet för att samla in data, undersöka målgruppen och för att utvärdera lexikonet. Resultatet blev ett strukturerat lexikon med tillräckligt många bilder och kategorier för att det ska vara utbildande. Rent grafiskt blev det tilltalande och framhäver innehållet i lexikonet. / A picture dictionary gives a figurative stimulus and makes learning an interactive game. But picture dictionaries are not just for children; they are also made for young people and adults. They can tell stories and fairytales in an amusing way; they can even retell history. Their purpose is decided only by their creator. This report is about the creation of a picture dictionary. The purpose of this dictionary is to educate young people about everyday life; anything you can encounter in everyday life is in this dictionary. 
The company behind this dictionary is Euroway Media Business AB; they have long worked with websites but now want to expand into new territory and broaden their knowledge. The dictionary will be made into an application in a later project, which will eventually be integrated into a website that is currently under construction. Important things to keep in mind when designing a picture dictionary are to have a main theme throughout the project, to carry out frequent surveys to stay on track, and finally to be creative. Several well-established methods – for example a poll and a persona – were used during the project to collect data, examine the target group and evaluate the dictionary. The result is a well-structured dictionary with enough pictures and categories to be educational. Graphically, it is appealing and highlights the dictionary's content.
96

CredProxy: A Password Manager for Online Authentication Environments

Golrang, Mohammad Saleh 20 December 2012 (has links)
Internet users are increasingly required to sign up for online services and establish accounts before receiving service from websites. On the one hand, generating strong usernames and passwords is a difficult task for the user. On the other hand, memorizing strong passwords is by far more problematic for the average user. Thus, the average user tends to use weak passwords and to reuse passwords across websites, which makes several attacks feasible. Under these circumstances, the use of password managers is beneficial, since they unburden the user from the task of memorizing credentials. However, password managers have a number of weaknesses. This thesis is mainly aimed at alleviating some of the intrinsic weaknesses of password managers. We propose three cryptographic protocols that can improve the security of password managers while enhancing user convenience. We also present the design of a phishing- and Man-in-the-Browser-resistant password manager which best fits into our scheme. Furthermore, we present our novel virtual on-screen keyboard and keypad, which are designed to provide strong protection against threats such as keylogging and shoulder surfing.
97

Automatic extraction of translation patterns from bilingual legal corpus

Inagaki, Yasuyoshi, Matsubara, Shigeki, Ohara, Makoto 26 October 2003 (has links)
No description available.
98

Kernelized Supervised Dictionary Learning

Jabbarzadeh Gangeh, Mehrdad 24 April 2013 (has links)
The representation of a signal using a learned dictionary instead of predefined operators, such as wavelets, has led to state-of-the-art results in various applications such as denoising, texture analysis, and face recognition. The area of dictionary learning is closely associated with sparse representation, meaning that the signal is represented using few atoms in the dictionary. Despite recent advances in the computation of a dictionary using fast algorithms such as K-SVD, online learning, and cyclic coordinate descent, which make computing a dictionary from millions of data samples computationally feasible, the dictionary is mainly computed using unsupervised approaches such as k-means. These approaches learn the dictionary by minimizing the reconstruction error without taking the category information into account, which is not optimal for classification tasks. In this thesis, we propose a supervised dictionary learning (SDL) approach that incorporates class-label information into the learning of the dictionary. To this end, we propose to learn the dictionary in a space where the dependency between the signals and their corresponding labels is maximized. To maximize this dependency, the recently introduced Hilbert-Schmidt independence criterion (HSIC) is used. The learned dictionary is compact and has a closed form, which makes the proposed approach fast. We show that it outperforms other unsupervised and supervised dictionary learning approaches in the literature on real-world data. Moreover, the main advantage of the proposed SDL approach is that it can easily be kernelized, particularly by incorporating a data-driven kernel, such as a compression-based kernel, into the formulation. In this thesis, we propose a novel compression-based (dis)similarity measure. The proposed measure utilizes a 2D MPEG-1 encoder, which takes into consideration the spatial locality and connectivity of pixels in the images. 
The proposed formulation has been carefully designed based on the MPEG encoder's functionality; by design, it solely uses P-frame coding to find the (dis)similarity among patches/images. We show that the proposed measure works properly on both small and large patch sizes on textures. Experimental results show that incorporating the proposed measure as a kernel into our SDL significantly improves the performance of supervised pixel-based texture classification on Brodatz and outdoor images compared to other compression-based dissimilarity measures, as well as state-of-the-art SDL methods. It also improves computation speed by about 40% compared to its closest rival. Finally, we extended the proposed SDL to multiview learning, where more than one representation of a dataset is available. We propose two different multiview approaches: one fuses the feature sets in the original space and then learns the dictionary and sparse coefficients on the fused set; the other learns one dictionary and the corresponding coefficients in each view separately, and then fuses the representations in the space of the learned dictionaries. We show that the proposed multiview approaches benefit from the complementary information in multiple views, and we investigate the relative performance of these approaches in the application of emotion recognition.
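The empirical HSIC used above to measure the dependency between signals and labels has a compact closed form: trace(KHLH)/(n-1)^2, where K and L are kernel matrices over the signals and labels and H is the centering matrix. A minimal sketch of this estimator (our own NumPy code, not the thesis's implementation):

```python
import numpy as np

def hsic(K, L):
    """Empirical HSIC between kernel matrix K (signals) and L (labels).

    HSIC = trace(K H L H) / (n - 1)^2, with H = I - (1/n) 11^T the
    centering matrix. Larger values indicate stronger dependence
    between the two views.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In the SDL setting, the dictionary would be chosen to maximize this quantity between the transformed signals and a label kernel, so that signals of the same class end up represented by similar atoms.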
99

Exploring Krapf's dictionary

Miehe, Gudrun, Firsching, Henrike 15 August 2012 (has links) (PDF)
This collection summarizes the items on society, history and culture from Krapf's famous dictionary which may be of some interest to today's audience. The idea of arranging the sometimes idiosyncratic Swahili for modern use came up during preparations for the Krapf Workshop held on 11 September 2007 at Fort Jesus in Mombasa. The lemmas found in this first comprehensive Swahili dictionary were checked against Frederick Johnson's Standard dictionary of 1939. In addition, the dictionary by Charles Sacleux of 1939 and the revised version of Krapf's dictionary by Harry Kerr Binns (1925) served as sources of information. With the exception of those entries which Krapf had already marked with a question mark, all entries were selected that are not found in Johnson, or that are described differently or in less depth than in Krapf's work.
100

TUKI 2004. Kamusi ya Kiswahili Sanifu. Toleo la Pili. [A standard Swahili dictionary. Second edition]. Nairobi: Oxford University Press. xviii, 477 pp. ISBN 0195732227. (ca. 15000 ThS/ 15.- €)

Herms, Irmtraud 14 August 2012 (has links) (PDF)
Book review: In 2004 the long-awaited second edition of the Standard Swahili - Swahili Dictionary, edited by the Institute of Kiswahili Research (TUKI) at the University of Dar es Salaam, appeared. With this publication TUKI has once again confirmed its leading role in the field of Swahili lexicography in East Africa. The dictionary is up to date, containing new words and phrases in use in East Africa to cope with developments in science and technology, society, economics and globalization.
