101

Kamusi ya Kiswahili sanifu in test:

Hurskainen, Arvi 15 October 2012 (has links) (PDF)
The paper describes a computer system for testing the coherence and adequacy of dictionaries. The system is also well suited for retrieving lexical material in context from computerized text archives. Results are presented from a series of tests made with Kamusi ya Kiswahili Sanifu (KKS), a monolingual Swahili dictionary. The test of the internal coherence of KKS shows that the text itself contains several hundred words for which there is no entry in the dictionary. Examples and frequency numbers of the most often occurring words are given. The adequacy of KKS was also tested with a corpus of nearly one million words; 1.32% of words in book texts were not recognized by KKS, and in newspaper texts the share was 2.24%. The higher number in newspaper texts is partly due to the numerous names occurring in news articles. Some statistical results are given on the frequencies of word forms not recognized by KKS. The tests show that although KKS covers the modern vocabulary quite well, there are several areas where the dictionary should be improved. The internal coherence is far from satisfactory, and there are more than a thousand rather common words in prose text which are not included in KKS. The system described in this article is an effective tool for detecting problems and for retrieving lexical data in context for missing words.
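The coherence test described above amounts to checking every word form in a text against the dictionary's entry list and tallying the misses. A minimal sketch of that idea (function and variable names are illustrative, not from the paper):

```python
from collections import Counter

def coverage_report(corpus_tokens, lexicon, top_n=5):
    """Report corpus word forms that have no entry in the lexicon.

    Returns (miss_rate, most_common_missing): the fraction of tokens
    not recognized, and the top-N missing forms with frequencies.
    """
    counts = Counter(t.lower() for t in corpus_tokens)
    missing = {w: c for w, c in counts.items() if w not in lexicon}
    total = sum(counts.values())
    miss_rate = sum(missing.values()) / total if total else 0.0
    top = Counter(missing).most_common(top_n)
    return miss_rate, top

# Toy example: a 3-entry "dictionary" tested against a 6-token text.
lexicon = {"kamusi", "ya", "kiswahili"}
tokens = ["Kamusi", "ya", "Kiswahili", "sanifu", "sanifu", "ya"]
rate, top = coverage_report(tokens, lexicon)
```

Run over a million-word corpus, the same miss rate corresponds to the 1.32%/2.24% figures reported in the paper.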
102

Image/Video Deblocking via Sparse Representation

Chiou, Yi-Wen 08 September 2012 (has links)
Blocking artifact, characterized by visually noticeable changes in pixel values along block boundaries, is a common problem in block-based image/video compression, especially at low-bitrate coding. Various post-processing techniques have been proposed to reduce blocking artifacts, but they usually introduce excessive blurring or ringing effects. This paper proposes a self-learning-based image/video deblocking framework that formulates deblocking as an MCA (morphological component analysis)-based image decomposition problem via sparse representation. The proposed method first decomposes an image/video frame into low-frequency and high-frequency parts by applying the BM3D (block-matching and 3D filtering) algorithm. The high-frequency part is then decomposed into a "blocking component" and a "non-blocking component" by performing dictionary learning and sparse coding based on MCA. As a result, the blocking component can be removed from the image/video frame successfully while preserving most original image/video details. Experimental results demonstrate the efficacy of the proposed algorithm.
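The first stage of the pipeline, splitting a frame into low- and high-frequency parts that sum back to the original, can be illustrated on a toy 1-D signal. Here a plain moving average stands in for BM3D, which the paper actually uses; the point is only the additive decomposition:

```python
def split_low_high(signal, radius=1):
    """Split a 1-D signal into low- and high-frequency parts using a
    simple moving average (a stand-in for the BM3D step in the paper,
    used here only to illustrate the additive decomposition)."""
    n = len(signal)
    low = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = signal[lo:hi]
        low.append(sum(window) / len(window))
    high = [s - l for s, l in zip(signal, low)]
    return low, high

# A signal with an abrupt "block boundary" jump in the middle.
sig = [10.0, 10.0, 10.0, 30.0, 30.0, 30.0]
low, high = split_low_high(sig)
rec = [l + h for l, h in zip(low, high)]   # low + high reconstructs exactly
```

Note how the high-frequency part concentrates at the jump, which is precisely where the paper's dictionary-based step then separates blocking from genuine detail.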
103

Ein Beitrag zum Entwurf industrieller Datenbanksysteme / A Contribution to the Design of Industrial Database Systems

Rössel, Mike 11 July 2009 (has links) (PDF)
The goal of this dissertation is the design of an industrial database system (DBS). Industrially deployed DBSs essentially require only two properties that conventional DBSs support insufficiently: real-time capability and the capability for continuous operation. The central element of this work is the treatment of the real-time capability of the DBS. Under certain preconditions it is possible to implement a hard real-time DBS (RZ-DBS). Apart from the real-time manager, no fundamentally new algorithms need to be implemented for this; some DBMS algorithms merely have to be adapted to, or optimized for, the real-time conditions. Since data sets evolve dynamically, all real-time requirements and conditions must be stored in the DBS itself; the data dictionary is a natural place for this. A fully implemented RZ-DBS is able to guarantee compliance with the real-time requirements on its own. The DBS was partially field-tested in industry.
104

Sparse coding for machine learning, image processing and computer vision

Mairal, Julien 30 November 2010 (has links) (PDF)
We study in this thesis a particular machine learning approach to representing signals that consists of modelling data as linear combinations of a few elements from a learned dictionary. It can be viewed as an extension of the classical wavelet framework, whose goal is to design such dictionaries (often orthonormal bases) adapted to natural signals. An important success of dictionary learning methods has been their ability to model natural image patches and the performance of the image denoising algorithms it has yielded. We address several open questions related to this framework: How can the dictionary be optimized efficiently? How can the model be enriched by adding structure to the dictionary? Can current image processing tools based on this method be further improved? How should one learn the dictionary when it is used for a task other than signal reconstruction? How can it be used for solving computer vision problems? We answer these questions with a multidisciplinary approach, using tools from statistical machine learning, convex and stochastic optimization, image and signal processing, computer vision, and also optimization on graphs.
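The core operation underlying this line of work, sparse coding against a fixed dictionary, can be sketched with a greedy matching-pursuit loop. This is a simplified illustration, not the optimization methods of the thesis, and it assumes unit-norm atoms:

```python
def matching_pursuit(signal, dictionary, n_atoms=2):
    """Greedy sparse coding: represent `signal` as a combination of a
    few dictionary atoms (a simplified sketch of the sparse-coding step
    in dictionary learning; atoms are assumed unit-norm)."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_atoms):
        # Pick the atom most correlated with the current residual.
        best, best_dot = None, 0.0
        for j, atom in enumerate(dictionary):
            d = sum(r * a for r, a in zip(residual, atom))
            if abs(d) > abs(best_dot):
                best, best_dot = j, d
        if best is None:
            break
        coeffs[best] = coeffs.get(best, 0.0) + best_dot
        residual = [r - best_dot * a for r, a in zip(residual, dictionary[best])]
    return coeffs, residual

# Toy dictionary: two orthonormal atoms in R^2.
D = [[1.0, 0.0], [0.0, 1.0]]
x = [3.0, -2.0]
code, res = matching_pursuit(x, D)
```

Dictionary *learning* then alternates such a coding step with updates to the atoms themselves so that codes stay sparse across a whole dataset.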
105

Šarlio Fransua Lomono (L'Homond) „Epitome Historiae Sacrae“: S. Stanevičiaus ir S. Daukanto leksikos ypatumai / Charles François L'Homond's „Epitome Historiae Sacrae“: lexical peculiarities of the translations by S. Stanevičius and S. Daukantas

Kirkaitė, Birutė 28 June 2005 (has links)
The topic of this master's thesis is Charles François L'Homond's "Epitome Historiae Sacrae": lexical peculiarities of the translations by S. Daukantas and S. Stanevičius. The main aim of the work is to compile a Lithuanian–Latin index ("Žodrodys") of S. Daukantas's translation of "Epitome Historiae Sacrae". Its tasks are: 1) to delimit the field and object of the work and to review other authors' studies of "Epitome Historiae Sacrae"; 2) to discuss the main linguistic peculiarities of "Žodrodys"; 3) to compare S. Daukantas's index with S. Stanevičius's Lithuanian–Lowland index. The methods used are literature review, analysis, comparison and synthesis. The thesis consists of introductory remarks, which survey S. Daukantas's linguistic works and the authors who have written about "Žodrodys" (J. Kruopas, G. Subačius and other researchers), and the main part, "Discussion of 'Žodrodys'", which comprises three chapters, followed by conclusions, a summary, a list of references and a few appendices. The first chapter discusses the dialect peculiarities and spelling of S. Daukantas's "Žodrodys", reviewing the systems of vowels, digraphs and diphthongs, and consonants, as well as S. Daukantas's dialect morphology. The second chapter reviews the lexis of S. Daukantas's "Žodrodys", discussing the thematic groups of the dictionary and the lexis by origin. In the third chapter the lexis of S. Daukantas's "Epitome historiae sacrae" is compared with S. Stanevičius's "Historyia... [to full text]
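The comparison in the third chapter, contrasting which word forms appear in each translator's index, is in essence a set comparison. A minimal sketch (the word lists below are invented placeholders, not data from the thesis):

```python
def compare_indexes(index_a, index_b):
    """Compare two word indexes of the same source text: which entries
    they share and which are unique to each (illustrative only)."""
    a, b = set(index_a), set(index_b)
    return {
        "shared": sorted(a & b),
        "only_a": sorted(a - b),
        "only_b": sorted(b - a),
    }

daukantas = ["diena", "metai", "žemė"]
stanevicius = ["diena", "metai", "dangus"]
diff = compare_indexes(daukantas, stanevicius)
```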
106

Review: Kyallo Wadi Wamitila. 2003. kamusi ya fasihi. istilahi na nadharia

Diegner, Lutz 23 July 2012 (has links) (PDF)
The 6th National Book Fair in Nairobi, Kenya, in September 2003 saw a new publication in the field of Swahili literary studies that should draw the attention of Swahili scholars in and outside of East Africa: the first comprehensive literary dictionary in the Swahili language. Kyallo Wadi Wamitila, who is currently Senior Lecturer for Swahili Literature and Literary Theory at the University of Nairobi, has committed more than a decade of meticulous research to compiling this major work. It comprises roughly 1,300 entries, arranged alphabetically, ranging from adhidadi (antonym) to muhakati (mimesis), tashtiti (satire) and zila (tragic flaw).
107

CredProxy: A Password Manager for Online Authentication Environments

Golrang, Mohammad Saleh 20 December 2012 (has links)
Internet users are increasingly required to sign up for online services and establish accounts before receiving service from websites. On the one hand, generating strong usernames and passwords is a difficult task for the user. On the other hand, memorizing strong passwords is far more problematic for the average user. Thus, the average user tends to use weak passwords and to reuse passwords across websites, which makes several attacks feasible. Under these circumstances, password managers are beneficial, since they unburden the user from the task of memorizing credentials. However, password managers have a number of weaknesses. This thesis is mainly aimed at alleviating some of the intrinsic weaknesses of password managers. We propose three cryptographic protocols which can improve the security of password managers while enhancing user convenience. We also present the design of a phishing- and man-in-the-browser-resistant password manager which best fits into our scheme. Furthermore, we present our novel virtual on-screen keyboard and keypad, which are designed to provide strong protection against threats such as keylogging and shoulder surfing.
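The credential-generation burden mentioned above is exactly what a password manager automates. A minimal stdlib sketch of that one role (this is generic practice, not the CredProxy protocols proposed in the thesis):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16, alphabet=ALPHABET):
    """Generate a cryptographically strong random password using the
    OS entropy source via `secrets` (illustrates the credential-
    generation role of a password manager; not the thesis's scheme)."""
    if length < 12:
        raise ValueError("refusing to generate a weak (short) password")
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password(16)
```

Because the manager, not the user, remembers the result, every site can get a unique high-entropy password, removing the reuse problem the abstract describes.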
108

Nonparametric Bayesian Models for Joint Analysis of Imagery and Text

Li, Lingbo January 2014 (has links)
It has become increasingly important to develop statistical models to manage large-scale, high-dimensional image data. This thesis presents novel hierarchical nonparametric Bayesian models for joint analysis of imagery and text, and consists of two main parts. The first part concerns single-image processing. We first present a spatially dependent model for simultaneous image segmentation and interpretation. Given a corrupted image, by imposing spatial inter-relationships within the imagery, the model not only improves reconstruction performance but also yields smooth segmentation. We then develop an online variational Bayesian algorithm for dictionary learning on large-scale datasets, based on online stochastic optimization with a natural-gradient step, and show that the dictionary is learned simultaneously with image reconstruction on large natural images containing tens of millions of pixels. The second part applies dictionary learning to the joint analysis of multiple images and text in order to infer relationships among images. We show that feature extraction and image organization with annotation (when available) can be integrated by unifying dictionary learning and hierarchical topic modeling, and present image organization in both "flat" and hierarchical constructions. Compared with traditional algorithms, in which feature extraction is separated from model learning, our algorithms not only fit the datasets better but also provide richer and more interpretable structures for the images.
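The online-learning idea, updating the dictionary a little after each incoming sample instead of refitting from scratch, can be sketched with a plain stochastic update. This is an online k-means style simplification; the thesis uses a natural-gradient step within a variational Bayesian model:

```python
def online_update(dictionary, sample, step=0.1):
    """One online learning step: assign `sample` to its closest atom
    and nudge that atom toward the sample (an online k-means style
    simplification of the online dictionary update in the thesis)."""
    def dist2(atom, s):
        return sum((a - b) ** 2 for a, b in zip(atom, s))
    j = min(range(len(dictionary)), key=lambda k: dist2(dictionary[k], sample))
    dictionary[j] = [a + step * (s - a) for a, s in zip(dictionary[j], sample)]
    return j

# Stream of 2-D "patches" drawn near two centers; atoms track them.
D = [[0.0, 0.0], [10.0, 10.0]]
stream = [[0.5, 0.5], [9.5, 10.5], [0.2, -0.1], [10.2, 9.8]]
for patch in stream:
    online_update(D, patch)
```

Each sample touches only one atom and is then discarded, which is what makes such schemes viable on datasets with tens of millions of pixels.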
109

Kernelized Supervised Dictionary Learning

Jabbarzadeh Gangeh, Mehrdad 24 April 2013 (has links)
The representation of a signal using a learned dictionary instead of predefined operators, such as wavelets, has led to state-of-the-art results in various applications such as denoising, texture analysis, and face recognition. The area of dictionary learning is closely associated with sparse representation, which means that the signal is represented using few atoms in the dictionary. Despite recent advances in the computation of a dictionary using fast algorithms such as K-SVD, online learning, and cyclic coordinate descent, which make the computation of a dictionary from millions of data samples computationally feasible, the dictionary is mainly computed using unsupervised approaches such as k-means. These approaches learn the dictionary by minimizing the reconstruction error without taking into account the category information, which is not optimal in classification tasks. In this thesis, we propose a supervised dictionary learning (SDL) approach by incorporating information on class labels into the learning of the dictionary. To this end, we propose to learn the dictionary in a space where the dependency between the signals and their corresponding labels is maximized. To maximize this dependency, the recently introduced Hilbert–Schmidt independence criterion (HSIC) is used. The learned dictionary is compact and has a closed form; the proposed approach is fast. We show that it outperforms other unsupervised and supervised dictionary learning approaches in the literature on real-world data. Moreover, the main advantage of the proposed SDL approach is that it can be easily kernelized, particularly by incorporating a data-driven kernel, such as a compression-based kernel, into the formulation. In this thesis, we propose a novel compression-based (dis)similarity measure. The proposed measure utilizes a 2D MPEG-1 encoder, which takes into consideration the spatial locality and connectivity of pixels in the images. The proposed formulation has been carefully designed based on MPEG encoder functionality. To this end, by design, it solely uses P-frame coding to find the (dis)similarity among patches/images. We show that the proposed measure works properly on both small and large patch sizes on textures. Experimental results show that incorporating the proposed measure as a kernel into our SDL significantly improves the performance of supervised pixel-based texture classification on Brodatz and outdoor images compared to other compression-based dissimilarity measures, as well as state-of-the-art SDL methods. It also improves the computation speed by about 40% compared to its closest rival. Finally, we extended the proposed SDL to multiview learning, where more than one representation is available for a dataset. We propose two different multiview approaches: one fusing the feature sets in the original space and then learning the dictionary and sparse coefficients on the fused set; and the other learning one dictionary and the corresponding coefficients in each view separately, and then fusing the representations in the space of the learned dictionaries. We show that the proposed multiview approaches benefit from the complementary information in multiple views, and investigate the relative performance of these approaches in the application of emotion recognition.
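The dependence measure at the core of this approach, the biased empirical HSIC estimator trace(KHLH)/(n-1)^2, fits in a few lines. This toy version uses linear kernels on 1-D data, where it reduces to a squared cross-covariance; the thesis pairs HSIC with richer kernels, such as the compression-based one:

```python
def hsic(x, y):
    """Biased empirical HSIC with linear kernels:
    trace(K H L H) / (n-1)^2, where K, L are Gram matrices of x and y
    and H = I - (1/n) 11^T is the centering matrix."""
    n = len(x)
    def gram(v):
        return [[a * b for b in v] for a in v]
    def center(M):
        # Double-centering: (HMH)_ij = M_ij - rowmean_i - colmean_j + grandmean
        row = [sum(r) / n for r in M]
        col = [sum(M[i][j] for i in range(n)) / n for j in range(n)]
        tot = sum(row) / n
        return [[M[i][j] - row[i] - col[j] + tot for j in range(n)]
                for i in range(n)]
    K, L = center(gram(x)), center(gram(y))
    trace = sum(K[i][j] * L[j][i] for i in range(n) for j in range(n))
    return trace / (n - 1) ** 2

x = [1.0, 2.0, 3.0, 4.0]
y_dep = [2.0, 4.0, 6.0, 8.0]      # strongly dependent on x
y_indep = [1.0, -1.0, 1.0, -1.0]  # weakly related to x
h_dep, h_indep = hsic(x, y_dep), hsic(x, y_indep)
```

A dictionary learned in a space maximizing this quantity ties signals to their labels, which is the "supervised" part of SDL.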
110

Towards Scalable Analysis of Images and Videos

Zhao, Bin 01 September 2014 (has links)
With the widespread availability of low-cost devices capable of photo shooting and high-volume video recording, we are facing an explosion of both image and video data. The sheer volume of such visual data poses both challenges and opportunities for machine learning and computer vision research. In image classification, most previous research has focused on small to medium-scale data sets containing objects from dozens of categories. However, we can easily access images spanning thousands of categories. Unfortunately, despite the well-known advantages and recent advancements of multi-class classification techniques in machine learning, complexity concerns have driven most research on such super-large-scale data sets back to simple methods such as nearest-neighbor search and one-vs-one or one-vs-rest approaches. Facing an image classification problem with such a huge task space, it is no surprise that these classical algorithms, often favored for their simplicity, are brought to their knees not only by the training time and storage cost they incur, but also by their conceptual awkwardness in massive multi-class paradigms. Therefore, our goal is to directly address the scale of image data: not only the large number of training images and high-dimensional image features, but also the large task space. Specifically, we present algorithms capable of efficiently and effectively training classifiers that can differentiate tens of thousands of image classes. As with images, one of the major difficulties in video analysis is the huge amount of data, in the sense that videos can be hours long or even endless. However, it is often true that only a small portion of a video contains important information. Consequently, algorithms that can automatically detect unusual events within streaming or archival video would significantly improve the efficiency of video analysis and save valuable human attention for only the most salient content. Moreover, given lengthy recorded videos, such as those captured by digital cameras on mobile phones or by surveillance cameras, most users do not have the time or energy to edit them so that only the most salient and interesting parts are kept. To this end, we also develop an algorithm for automatic video summarization without human intervention. Finally, we extend our research on video summarization to a supervised formulation, where users are asked to generate summaries for a subset of a class of videos of similar nature. Given such manually generated summaries, our algorithm learns the preferred storyline within the given class of videos and automatically generates summaries for the remaining videos in the class, capturing the storyline of the manually summarized videos.
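The unusual-event idea, flagging frames that deviate from what the video has shown so far, can be sketched with a running-mean novelty score. This is a crude stand-in for the streaming detection described above; the thesis uses far richer models:

```python
def novelty_scores(frames):
    """Score each frame by its Euclidean distance from the running
    average of all frames seen so far (a toy streaming novelty score;
    frames are feature vectors of equal length)."""
    scores, mean, seen = [], None, 0
    for f in frames:
        if mean is None:
            mean, seen = list(f), 1
            scores.append(0.0)
            continue
        d = sum((a - m) ** 2 for a, m in zip(f, mean)) ** 0.5
        scores.append(d)
        seen += 1
        mean = [m + (a - m) / seen for m, a in zip(mean, f)]
    return scores

# Four nearly identical "frames" and one outlier in the middle.
frames = [[1.0, 1.0], [1.0, 1.1], [1.1, 1.0], [5.0, 5.0], [1.0, 1.0]]
scores = novelty_scores(frames)
peak = max(range(len(scores)), key=scores.__getitem__)
```

The frame with the peak score is the candidate "unusual event"; a summarizer would keep such frames and discard the redundant rest.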
