31

Near Sets in Set Pattern Classification

Uchime, Chidoteremndu Chinonyelum 06 February 2015 (has links)
This research focuses on the extraction of visual set patterns in digital images, using relational properties such as nearness and similarity measures, as well as descriptive properties such as texture, colour and image gradient directions. The problem considered in this thesis is the application of topology to visual set pattern discovery and, consequently, pattern generation. A visual set pattern is a collection of motif patterns generated from distinct points in the set called seed motifs. Each motif pattern is a descriptive neighbourhood of a seed motif, that is, a set of points that are descriptively near the seed motif. A new similarity distance measure based on the dot product of image feature vectors is introduced for image classification with the generated visual set patterns. An application of this approach to pattern generation can be useful in content-based image retrieval and image classification.
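The abstract does not spell out the dot-product-based measure itself; purely as an illustration, a normalized dot product (cosine similarity) turned into a distance between feature vectors might look like the following sketch, where the feature vectors are hypothetical.

```python
import numpy as np

def dot_product_distance(u, v):
    """Distance derived from the normalized dot product (cosine
    similarity) of two feature vectors: 0 for parallel vectors,
    1 for orthogonal ones."""
    cos_sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return 1.0 - cos_sim

# Illustrative texture/colour/gradient features for two image patches
a = np.array([0.8, 0.1, 0.3])
b = np.array([0.7, 0.2, 0.4])
print(dot_product_distance(a, b))  # small value -> descriptively near
```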
32

Αυτόματη ταυτοποίηση βιομετρικών χαρακτηριστικών : εφαρμογή στα δακτυλικά αποτυπώματα / Automatic identification of biometric characteristics: application to fingerprints

Ουζούνογλου, Αναστασία 17 September 2012 (has links)
Automatic fingerprint identification is a difficult and multidimensional problem that has occupied many researchers, and a large number of techniques have been developed for it. The difficulty lies in the fact that fingerprint images are largely degraded, and in some cases only part of the fingerprint is available rather than the complete image. In this thesis, two methods for automatic fingerprint identification are proposed: a) identification based on registration (alignment) techniques, and b) identification based on Kohonen's Self-Organizing Maps, with the minutiae points of the fingerprint images defined as the neurons of the map. In addition, particular emphasis was placed on the pre-processing of the fingerprint images through the development and application of suitable image processing techniques, in order to enhance image quality and to extract the minutiae used for fingerprint matching. The fingerprint data used in this thesis come from the VeriFingerSample_DB database of Neurotechnology and the DB3 database of the FVC2004 fingerprint verification competition. The quantitative evaluation of the proposed methods was based on the Equal Error Rate (EER) criterion. According to this criterion, the identification method based on Self-Organizing Maps outperformed every registration-based method that was applied.
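The abstract does not show how the EER criterion is computed; as a hedged sketch, the equal error rate can be estimated from genuine and impostor match scores as the operating point where the false accept and false reject rates cross. The score distributions below are illustrative, not the thesis's data.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER: sweep a decision threshold and find where the
    false reject rate (genuine scores below threshold) meets the
    false accept rate (impostor scores at or above threshold)."""
    best_gap, best_eer = 1.0, None
    for t in np.linspace(0.0, 1.0, 1001):
        frr = np.mean(genuine < t)    # genuines wrongly rejected
        far = np.mean(impostor >= t)  # impostors wrongly accepted
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2.0
    return best_eer

# Illustrative match scores (higher = more similar)
genuine = np.random.beta(5, 2, 1000)   # genuine pairs score high
impostor = np.random.beta(2, 5, 1000)  # impostor pairs score low
print(equal_error_rate(genuine, impostor))
```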
33

Biometrie s využitím snímků duhovky / Biometry based on iris images

Tobiášová, Nela January 2014 (has links)
Biometric techniques are well known and widespread nowadays. In this context, biometry means automated person recognition using anatomical features; this work uses the iris as that feature. Iris recognition is regarded as the most promising technique of all because of its non-invasiveness and low error rate. The inventor of iris recognition is John G. Daugman, whose work underlies almost all current published work on this technology. This thesis is concerned with biometry based on iris images. The principles of biometric methods based on iris images are described in the first part. The first practical part proposes and implements two methods that localize the inner iris boundary. The third part presents the design and implementation of iris image processing for classifying persons. The last chapter evaluates the experimental results and compares them with several well-known methods.
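The abstract credits Daugman; for reference, his classical integro-differential operator for locating the circular iris boundaries (a standard point of comparison, not necessarily the localization method developed in this thesis) searches over center (x_0, y_0) and radius r for the strongest blurred radial change of the contour-averaged intensity I(x, y):

```latex
\max_{(r,\, x_0,\, y_0)} \left|\, G_\sigma(r) \ast \frac{\partial}{\partial r}
\oint_{r,\, x_0,\, y_0} \frac{I(x, y)}{2\pi r}\, ds \,\right|
```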
34

Algoritmisk jämförelse av musiksmak och personliga värderingar : Med användning av Spotifys Web API / Algorithmic comparison of music taste and personal values: Using Spotify's Web API

Lundberg, Hampus January 2020 (has links)
Earlier research shows that there is a connection between music taste and social attraction between people: shared music taste often implies shared personal values, and shared personal values can mean a greater chance of social attraction. The goal of the project has been to find out whether music taste is correlated with personal values, and which algorithms could be used to calculate that correlation. A model is defined for a theoretically perfect matching algorithm, against which the studied algorithms are tested and compared in practice. The study, which is divided into three parts, first investigates the algorithms using computer-generated test data. The first part uses data in the form of integers (the number of occurrences of a music preference) and the second uses binary data (presence or absence of a music preference). The third part uses real user data from 13 participants, drawn from Spotify and from a survey on personal values. The results show no apparent correlation between personal values and music taste, most likely because of the data sets; clearer results may require more detailed and structured user data than was collected and used in this study.
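The abstract omits the data collection step; as a rough sketch of how integer music-preference counts could be pulled from Spotify's Web API (token acquisition is omitted, and the genre-counting scheme is an assumption of this sketch, not necessarily the study's method):

```python
import requests
from collections import Counter

# Assumes an OAuth access token with the user-top-read scope
TOKEN = "..."  # placeholder; obtained via Spotify's authorization flow

def top_genre_counts(token, limit=50):
    """Fetch a user's top artists and tally their genre labels,
    yielding integer occurrence counts of music preferences."""
    resp = requests.get(
        "https://api.spotify.com/v1/me/top/artists",
        headers={"Authorization": f"Bearer {token}"},
        params={"limit": limit, "time_range": "medium_term"},
    )
    resp.raise_for_status()
    counts = Counter()
    for artist in resp.json()["items"]:
        counts.update(artist["genres"])
    return counts

print(top_genre_counts(TOKEN).most_common(10))
```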
35

Counting prime polynomials and measuring complexity and similarity of information

Rebenich, Niko 02 May 2016 (has links)
This dissertation explores an analogue of the prime number theorem for polynomials over finite fields, as well as its connection to the necklace factorization algorithm known as the T-transform and the string complexity measure T-complexity. Specifically, a precise asymptotic expansion for the prime polynomial counting function is derived. The approximation given is more accurate than previous results in the literature while requiring very little computational effort. In this context, asymptotic series expansions for the Lerch transcendent, Eulerian polynomials, the truncated polylogarithm, and polylogarithms of negative integer order are also provided. The expansion formulas developed are general and have applications in numerous areas beyond the enumeration of prime polynomials. A bijection between the equivalence classes of aperiodic necklaces and monic prime polynomials is used to derive an asymptotic bound on the maximal T-complexity value of a string. Furthermore, the statistical behaviour of uniform random sequences factored via the T-transform is investigated, and an accurate probabilistic model for short necklace factors is presented. Finally, a T-complexity-based conditional string complexity measure is proposed and used to define the normalized T-complexity distance, which measures similarity between strings. The T-complexity distance is proven not to be a metric; however, it can be computed in linear time and space, making it a suitable choice for large data sets. / Graduate
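The exact count that the dissertation's asymptotics refine is classical: the number of monic prime (irreducible) polynomials of degree n over F_q equals the number of aperiodic necklaces and is given by Gauss's formula, sketched below.

```python
def mobius(n):
    """Mobius function mu(n) via trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # n has a squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def prime_poly_count(q, n):
    """Monic irreducible polynomials of degree n over F_q:
    (1/n) * sum over d | n of mu(d) * q^(n/d)  (Gauss's formula)."""
    total = sum(mobius(d) * q ** (n // d)
                for d in range(1, n + 1) if n % d == 0)
    return total // n

# Over F_2: 2, 1, 2, 3, 6, 9 irreducibles of degrees 1..6
print([prime_poly_count(2, n) for n in range(1, 7)])
```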
36

Schedulability Tests for Real-Time Uni- and Multiprocessor Systems: Focusing on Partitioned Approaches

Müller, Dirk 19 February 2014 (has links)
This work makes significant contributions in the field of sufficient schedulability tests for rate-monotonic scheduling (RMS) and their application to partitioned RMS. The goal is to maximize the achievable utilization, in the worst or average case, for a given number of processors. This scenario is more realistic than the dual case of minimizing the number of processors needed for a given task set, since the hardware is normally fixed. Sufficient schedulability tests are useful for quick estimates of task set schedulability in automatic system-synthesis tools and in online scheduling, where exact schedulability tests are too slow. In particular, the approach of Accelerated Simply Periodic Task Sets (ASPTSs) and the concept of circular period similarity are cornerstones of the improvements in the success ratio of such schedulability tests. To the best of the author's knowledge, this is the first application of circular statistics in real-time scheduling. Finally, the thesis discusses the use of sharp total utilization thresholds for partitioned EDF, enabling a constant-time admission control with a controlled residual risk.
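The thesis's own tests are not reproduced in the abstract; the classical sufficient utilization test for RMS by Liu and Layland, the baseline such tests improve upon, is easy to sketch (the task set below is made up):

```python
def rms_sufficient(tasks):
    """Liu-Layland sufficient (not necessary) schedulability test for
    rate-monotonic scheduling: n tasks are schedulable if their total
    utilization does not exceed n * (2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)  # (execution time, period)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Hypothetical task set: (worst-case execution time, period)
tasks = [(1, 4), (2, 8), (1, 10)]
print(rms_sufficient(tasks))  # 0.25 + 0.25 + 0.1 = 0.6 <= 0.7798 -> True
```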
37

Compression et inférence des opérateurs intégraux : applications à la restauration d’images dégradées par des flous variables / Approximation and estimation of integral operators : applications to the restoration of images degraded by spatially varying blurs

Escande, Paul 26 September 2016 (has links)
The restoration of images degraded by spatially varying blurs is a problem of increasing importance. It is encountered in many applications, such as astronomy, computer vision and light-sheet microscopy, where images can be a billion pixels in size. Spatially varying blurs can be modelled by linear integral operators H that map a sharp image u to its blurred version Hu. After discretization of the image on a grid of N pixels, H can be viewed as a matrix of size N x N. For the targeted applications, storing this matrix would require an exabyte of memory. This simple observation illustrates the difficulties of the problem: i) the storage of a huge volume of data, and ii) the prohibitive computational cost of matrix-vector products. The problem suffers from the curse of dimensionality. In addition, in many applications the blur operator is unknown or only partially known. There are therefore two complementary but closely related problems: the approximation and the estimation of blurring operators. Most of the work of this thesis is dedicated to developing new models and computational methods to address these issues.
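The abstract motivates but does not show the scale of the problem; the arithmetic, together with a common matrix-free workaround (blending a few stationary convolutions with smooth interpolation weights, an illustration rather than the thesis's actual models), is sketched below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Why compression is unavoidable: a dense blur matrix for an image of
# N pixels has N^2 entries. At one byte per entry:
N = 10**9                       # a one-gigapixel image
print(N**2 / 1e18, "exabytes")  # 1.0 -> dense storage is hopeless

def varying_blur(u, sigmas, weights):
    """Approximate a spatially varying blur as a pixelwise-weighted
    blend of K stationary Gaussian blurs: Hu ~ sum_k w_k * (h_k * u)."""
    out = np.zeros_like(u)
    for sigma, w in zip(sigmas, weights):
        out += w * gaussian_filter(u, sigma)
    return out

u = np.random.rand(256, 256)
# weights vary smoothly left-to-right and sum to one at every pixel
w1 = np.linspace(0.0, 1.0, 256)[None, :] * np.ones((256, 1))
blurred = varying_blur(u, sigmas=[1.0, 4.0], weights=[1.0 - w1, w1])
```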
