311 |
'n Masjienleerbenadering tot woordafbreking in Afrikaans / A machine learning approach to hyphenation in Afrikaans. Fick, Machteld, 06 1900 (has links)
Text in Afrikaans / The aim of this study was to determine the level of success achievable with a purely pattern-based approach to hyphenation in Afrikaans. The machine learning techniques artificial neural
networks, decision trees and the TEX algorithm were investigated since they can be trained
with patterns of letters from word lists for syllabification and decompounding.
A lexicon of Afrikaans words was extracted from a corpus of electronic text. To obtain lists
for syllabification and decompounding, words in the lexicon were respectively syllabified and
compound words were decomposed. From each list of ±183 000 words, ±10 000 words were
reserved as testing data and the rest was used as training data.
A recursive algorithm for decompounding was developed. In this algorithm, all words that match entries in a reference list (the lexicon) are extracted by string matching from the beginning and the end of each word. Splitting points are then determined from the lengths of the matched beginning and end words. The algorithm was extended by addressing the shortcomings of this basic procedure.
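A minimal sketch of the matching idea follows (illustrative only: it greedily matches lexicon prefixes by recursion, whereas the thesis algorithm matches from both the beginning and the end of words and resolves splitting points by word length; the function names and toy lexicon are invented):

```python
def decompound(word, lexicon, min_len=3):
    """Recursively split a compound into lexicon words.

    Greedy longest-prefix-first search; a stand-in for the thesis's
    two-sided matching and length-based splitting-point rule.
    """
    if word in lexicon:
        return [word]
    for i in range(len(word) - min_len, min_len - 1, -1):
        head, tail = word[:i], word[i:]
        if head in lexicon:
            rest = decompound(tail, lexicon, min_len)
            if rest is not None:
                return [head] + rest
    return None  # no decomposition found

lexicon = {"woord", "verdeling", "lettergreep"}
print(decompound("woordverdeling", lexicon))  # ['woord', 'verdeling']
```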
Artificial neural networks and decision trees were trained and variations of both were examined
to find optimal syllabification and decompounding models. Patterns for the TEX algorithm
were generated by using the program OPatGen. Testing showed that the TEX algorithm
performed best on both syllabification and decompounding tasks, with 99.56% and 99.12% accuracy,
respectively. It can therefore be used for hyphenation in Afrikaans with little risk of
hyphenation errors in printed text. The performance of the artificial neural network was lower,
but still acceptable, with 98.82% and 98.42% accuracy for syllabification and decompounding, respectively. The decision tree, with accuracy of 97.91% on syllabification and 90.71% on decompounding, was found to be too risky to use for either of the tasks.
A combined algorithm was developed in which words are first decompounded using the TEX algorithm and then syllabified with both the TEX algorithm and the neural network, after which the results are combined. This algorithm reduced the number of errors made by the TEX algorithm by 1.3% but missed more hyphens. Testing the algorithm on Afrikaans publications showed the risk of hyphenation errors to be ±0.02% for text assumed to have an average of ten words per line. / Decision Sciences / D. Phil. (Operational Research)
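One conservative way to combine two hyphenators, given here as a hypothetical sketch since the abstract does not state the exact combination rule, is to accept only the split points on which both methods agree; this reduces wrong hyphens at the cost of missed ones, matching the trade-off reported above.

```python
def combine_hyphens(tex_points, nn_points):
    """Keep only split positions proposed by both hyphenators.

    Arguments are sets of in-word split indices, e.g. {4, 7}.
    Agreement filtering trades missed hyphens for fewer errors.
    """
    return tex_points & nn_points

# Hypothetical split points for one word from the two models:
print(combine_hyphens({4, 7, 10}, {4, 7, 11}))  # {4, 7}
```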
|
312 |
Τεχνικές ελέγχου ορθής λειτουργίας με έμφαση στη χαμηλή κατανάλωση ισχύος / VLSI testing techniques focused on low power dissipation. Μπέλλος, Μάτσιεϊ, 25 June 2007 (has links)
The dissertation focuses on testing the correct operation of VLSI circuits while also taking power dissipation into account. The proposed techniques concern: a) compression of a test set in an embedded testing environment that uses external testers, b) test set embedding in a built-in self-test environment, and c) reduction of power and energy consumption in an external testing environment. Test data compression is based on the observation that a test vector can be produced from the previous one by replacing some of its parts. Compression improves further when the test vectors are reordered and the flip-flops of the scan chain are reordered as well. If the flip-flop reordering is based on the transition frequency of neighboring flip-flops, a reduction in power dissipation is also achieved. For built-in self-test, the problem of test set embedding was studied, and efficient test pattern generators based on linear feedback shift registers combined with XOR gate trees, or on shift registers combined with OR gate trees, were proposed. When the circuits under test have a regular structure, such as residue number system adders, generator circuits that exploit the regular form of the test set are proposed. Finally, for external testing, test vector ordering methods with vector repetition are proposed to reduce power consumption. These methods are based on selecting appropriate minimum spanning trees and on modifying the appropriate repeated vectors, achieving considerable savings in energy as well as in average and peak power dissipation.
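The spanning-tree ordering can be pictured with a small sketch (one standard heuristic, assuming equal-length bit strings and Hamming distance as the edge weight; the dissertation's exact tree selection and vector-repetition handling are more elaborate):

```python
def hamming(a, b):
    # Number of differing bits between two equal-length test vectors.
    return sum(x != y for x, y in zip(a, b))

def mst_order(vectors):
    """Order test vectors by building a minimum spanning tree (Prim)
    over Hamming distances and reading it in DFS preorder, so that
    consecutive vectors differ in few bits and cause few transitions."""
    n = len(vectors)
    in_tree, adj = {0}, {i: [] for i in range(n)}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: hamming(vectors[e[0]], vectors[e[1]]))
        adj[u].append(v)
        in_tree.add(v)
    order, stack = [], [0]
    while stack:
        u = stack.pop()
        order.append(u)
        stack.extend(reversed(adj[u]))
    return [vectors[i] for i in order]

print(mst_order(["0101", "0111", "1111", "0001"]))
```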
|
313 |
Adaptive Waveletmethoden zur Approximation von Bildern / Adaptive wavelet methods for the approximation of images. Tenorth, Stefanie, 08 July 2011 (has links)
No description available.
|
314 |
Οργάνωση και διαχείριση βάσεων εικόνων βασισμένη σε τεχνικές εκμάθησης δεδομένων πολυσχιδούς δομής / Organization and management of image databases based on manifold learning techniques. Μακεδόνας, Ανδρέας, 22 December 2009 (has links)
The subject of this doctoral thesis is color image processing using graph-theoretic methods, image retrieval, and image database management and organization in a reduced feature space using pattern recognition analysis, with multimedia applications.
The author approached the subject while retaining its generality and addressing the following points:
1. Development of techniques for extracting image visual attributes based on low-level features (color and texture information), to be used for image similarity and retrieval tasks.
2. Calculation of metrics and distances in the feature space.
3. Study of the image manifolds created in the selected feature space.
4. Application of dimensionality reduction techniques and production of biplots.
5. Application of the proposed methodologies using perceptual image distances.
Graph theory and pattern recognition methodologies were employed to provide novel solutions both for color image retrieval from image databases and for the organization and management of such databases. The thesis brings image processing closer to methods drawn from graph theory, statistics, and pattern recognition. Throughout the work, particular emphasis was placed on finding the right balance between system effectiveness and efficiency when applying the proposed algorithmic procedures. Extensive experimental results at all stages of the study demonstrate the enhanced performance of the proposed methodologies.
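Points 2 and 4 can be illustrated together with a generic sketch (random vectors stand in for the thesis's color/texture descriptors, and classical MDS stands in for its dimensionality reduction methods):

```python
import numpy as np

def classical_mds(D, dims=2):
    """2-D embedding from a pairwise distance matrix D
    (classical MDS: double centering + eigendecomposition)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dims]         # largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

feats = np.random.rand(10, 16)               # toy feature vectors
D = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
print(classical_mds(D).shape)                # (10, 2) layout for a biplot-style view
```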
|
315 |
Inteligência computacional no sensoriamento a fibra ótica / Computational intelligence applied to optical fiber sensing. Negri, Lucas Hermann, 20 February 2017 (has links)
CAPES; CNPq; Fundação Araucária; FINEP / This thesis presents new optical fiber sensing methodologies that employ computational intelligence approaches to improve sensing performance. In particular, artificial neural networks (multilayer perceptrons), support vector regression, differential evolution, and compressive sensing methods were employed with fiber Bragg grating transducers. The artificial neural networks, support vector regression, and fiber Bragg gratings were used to determine the location of a load applied to a polymethyl methacrylate (acrylic) sheet. A new method based on differential evolution is proposed to solve the inverse scattering problem in fiber Bragg gratings, imposing constraints so that the problem can be solved without phase information from the reflected signal. A method for detecting multiple loads placed on a metal sheet is also proposed. In this method, the metal sheet is supported by iron rings containing fiber Bragg gratings, and compressive sensing methods are employed to solve the resulting underdetermined inverse problem. Further developments of the method replaced the iron rings with silicone blocks and introduced a new reconstruction method based on compressive sensing and differential evolution. Experimental results show that the proposed computational methods improve optical fiber sensing and can enhance the spatial resolution of the system without increasing the number of transducers.
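The underdetermined inverse problem is the standard compressive sensing setting: few transducer readings, many candidate load positions, and a sparsity prior. The sketch below is a generic illustration with invented dimensions (iterative soft thresholding; the thesis's calibration and its differential-evolution-based reconstruction are not reproduced here):

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Iterative soft thresholding for min ||Ax - y||^2 + lam * ||x||_1,
    recovering a sparse load vector x from a few sensor readings y."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 32))             # 8 transducers, 32 candidate sites
x_true = np.zeros(32)
x_true[[5, 20]] = [1.0, 0.7]                 # two loads
x_hat = ista(A, A @ x_true)
print(np.round(x_hat, 2))                    # peaks near positions 5 and 20
```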
|
316 |
Modelos de compressão de dados para classificação e segmentação de texturas / Data compression models for texture classification and segmentation. Honório, Tatiane Cruz de Souza, 31 August 2010 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / This work analyzes methods for texture image classification and segmentation using models built by lossless data compression algorithms. Two data compression algorithms are evaluated: Prediction by Partial Matching (PPM) and Lempel-Ziv-Welch (LZW), both of which had been applied to texture classification in previous works. The textures are pre-processed
using histogram equalization. The classification method is divided into two stages. In the
learning stage or training, the compression algorithm builds statistical models for the
horizontal and the vertical structures of each class. In the classification stage, samples of
textures to be classified are compressed using models built in the learning stage, sweeping the
samples horizontally and vertically. A sample is assigned to the class that obtains the highest
average compression. Classifier tests were performed using the Brodatz texture album. The classifiers were tested for various context sizes (in the PPM case), numbers of samples, and training sets. For some combinations of these parameters, the classifiers achieved 100% correct classification. Texture segmentation was performed only with PPM. Initially,
the horizontal models are created using eight texture samples of size 32 x 32 pixels for each class, with a PPM context of maximum size 1. The images to be segmented are compressed using the class models, initially in blocks of size 64 x 64 pixels. If none of the models achieves a compression ratio within a predetermined interval, the block is divided into four blocks of size 32 x 32. The process is repeated until some model reaches a compression ratio in the range defined for the block size in question. If a block reaches size 4 x 4, it is classified as belonging to the class of the model that achieved the highest
compression ratio. / Este trabalho se propõe a analisar métodos de classificação e segmentação de texturas
de imagens digitais usando algoritmos de compressão de dados sem perdas. Dois algoritmos
de compressão são avaliados: o Prediction by Partial Matching (PPM) e o Lempel-Ziv-Welch
(LZW), que já havia sido aplicado na classificação de texturas em trabalhos anteriores. As
texturas são pré-processadas utilizando equalização de histograma. O método de classificação
divide-se em duas etapas. Na etapa de aprendizagem, ou treinamento, o algoritmo de
compressão constrói modelos estatísticos para as estruturas horizontal e vertical de cada
classe. Na etapa de classificação, amostras de texturas a serem classificadas são comprimidas
utilizando modelos construídos na etapa de aprendizagem, varrendo-se as amostras na
horizontal e na vertical. Uma amostra é atribuída à classe que obtiver a maior compressão
média. Os testes dos classificadores foram feitos utilizando o álbum de texturas de Brodatz.
Os classificadores foram testados para vários tamanhos de contexto (no caso do PPM),
amostras e conjuntos de treinamento. Para algumas das combinações desses parâmetros, os
classificadores alcançaram 100% de classificações corretas. A segmentação de texturas foi
realizada apenas com o PPM. Inicialmente, são criados os modelos horizontais usados no
processo de segmentação, utilizando-se oito amostras de texturas de tamanho 32 x 32 pixels
para cada classe, com o contexto PPM de tamanho máximo 1. As imagens a serem
segmentadas são comprimidas utilizando-se os modelos das classes, inicialmente, em blocos
de tamanho 64 x 64 pixels. Se nenhum dos modelos conseguir uma razão de compressão em
um intervalo pré-definido, o bloco é dividido em quatro blocos de tamanho 32 x 32. O
processo se repete até que algum modelo consiga uma razão de compressão no intervalo de
razões de compressão definido para o tamanho do bloco em questão, podendo chegar a blocos
de tamanho 4 x 4 quando o bloco é classificado como pertencente à classe do modelo que
atingiu a maior taxa de compressão.
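The classification principle, assigning a sample to the class whose model compresses it best, can be sketched with a generic compressor (zlib below is only a stand-in: the thesis trains PPM and LZW models and sweeps samples horizontally and vertically):

```python
import zlib

def extra_bytes(class_corpus, sample):
    """Bytes needed to encode the sample after the class corpus;
    smaller means the sample matches the class statistics better."""
    base = len(zlib.compress(class_corpus, 9))
    return len(zlib.compress(class_corpus + sample, 9)) - base

def classify(sample, corpora):
    # Pick the class whose model yields the highest compression of the sample.
    return min(corpora, key=lambda c: extra_bytes(corpora[c], sample))

corpora = {"stripes": b"ababab" * 200, "checks": b"aabbaabb" * 150}
print(classify(b"abababababab", corpora))    # expected: 'stripes'
```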
|
318 |
Komprese dat / Data compression. Krejčí, Michal, January 2009 (has links)
This thesis deals with lossless and lossy methods of data compression and their possible applications in measurement engineering. The first part of the thesis is a theoretical treatment that introduces the basic terminology, the reasons for data compression, the use of data compression in common practice, and the classification of compression algorithms. The practical part deals with the implementation of the compression algorithms in Matlab and LabWindows/CVI.
|
319 |
A Compressed Data Collection System For Use In Wireless Sensor Networks. Erratt, Newlyn S., 06 March 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / One of the most common goals of a wireless sensor network is to collect sensor data. The goal of this thesis is to provide an easy-to-use and energy-efficient system for deploying data-collection sensor networks. There are numerous challenges associated with deploying a wireless sensor network for collecting sensor data; among these are reducing energy consumption and the fact that users interested in collecting data may not be familiar with software design. This thesis presents a complete system, comprising the Compressed Data-stream Protocol and a general gateway for data collection in wireless sensor networks, which aims to provide an easy-to-use, energy-efficient, and complete system for data collection in sensor networks. The Compressed Data-stream Protocol is a transport-layer compression protocol whose primary goal, in this work, is to reduce energy consumption. The radio is one of the most expensive energy consumers in a wireless sensor node, and the Compressed Data-stream Protocol has been shown in simulations to reduce the energy used on transmission and reception by around 26%. The general gateway has been designed to make customization simple without requiring deep knowledge of sensor networks and software development. This, along with the modular nature of the Compressed Data-stream Protocol, enables the creation of an easy-to-deploy, easy-to-configure sensor network for data collection. Findings show that the individual components work well and that the system as a whole performs without errors. This system, the components of which will eventually be released as open source, provides a platform for researchers purely interested in the gathered data to deploy a sensor network without being restricted to specific hardware vendors.
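The abstract does not spell out the protocol's wire format; as a loose, hypothetical illustration of why transport-layer compression pays off for slowly varying sensor readings, consecutive samples can be delta-encoded so that most transmitted values are small:

```python
def delta_encode(samples):
    """First reading followed by successive differences; slowly varying
    sensor data yields many small deltas that are cheap to transmit."""
    return samples[:1] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(encoded):
    # Rebuild the original stream by cumulative summation.
    out = encoded[:1]
    for d in encoded[1:]:
        out.append(out[-1] + d)
    return out

readings = [1000, 1002, 1001, 1003, 1003]
enc = delta_encode(readings)                  # [1000, 2, -1, 2, 0]
assert delta_decode(enc) == readings
```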
|
320 |
Development of a continuous condition monitoring system based on probabilistic modelling of partial discharge data for polymeric insulation cables. Ahmed, Zeeshan, 09 August 2019 (has links)
Partial discharge (PD) measurement is widely accepted as an efficient online insulation condition assessment method for high voltage equipment. Two experimental PD measurement setups were established to study how partial discharge characteristics vary as the insulation degrades, in terms of the physical phenomena taking place at the PD sources, up to the point of failure. Probabilistic lifetime modeling techniques based on classification, regression, and multivariate time series analysis were applied to a system of PD response variables: average charge, pulse repetition rate, average charge current, and largest repetitive discharge magnitude over the data acquisition period. Experimental lifelong PD data obtained from samples subjected to accelerated degradation was used to study the dynamic trends and relationships among these response variables. Distinguishable data clusters detected by the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm allow state-of-the-art modeling techniques to be examined on PD data. The response behavior of the trained models makes it possible to distinguish the different stages of insulation degradation. In parallel with the classification and regression models, a multivariate time series approach was used to forecast PD activity (the PD response variables corresponding to insulation degradation). Observed values and forecast means lie within the 95% confidence intervals over a definite forecast horizon, which demonstrates the soundness and accuracy of the models. A life-prediction model based on the cointegration relations among the response variables, with trained model responses correlated against experimentally evaluated time-to-breakdown values and well-known physical discharge mechanisms, can be used to set an early-warning alarm trigger and is a step towards long-term continuous monitoring of partial discharge activity. Furthermore, this dissertation also proposes an effective PD monitoring system based on wavelet and deflate compression techniques for optimal data acquisition, as well as a large-scale data reduction algorithm that minimizes PD data size and retains only the useful PD information. The recorded useful information can thus be used not only for post-fault diagnostics but also for improving the performance of the modelling algorithms and for accurate threshold detection.
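As an illustration of the wavelet reduction step (a single-level Haar transform with detail thresholding on a synthetic pulse; the dissertation's actual wavelet choice, threshold selection, and deflate stage are assumptions not detailed in the abstract):

```python
import numpy as np

def haar_compress(x, keep=0.1):
    """One-level Haar transform; keep the largest `keep` fraction of
    detail coefficients and zero the rest (zeros then compress well)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail coefficients
    cut = np.quantile(np.abs(d), 1 - keep)
    return a, np.where(np.abs(d) >= cut, d, 0.0)

def haar_reconstruct(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

t = np.linspace(0.0, 1.0, 256)
pulse = np.exp(-((t - 0.5) ** 2) / 1e-4)      # synthetic PD-like pulse
a, d = haar_compress(pulse)
print(np.max(np.abs(pulse - haar_reconstruct(a, d))))  # small reconstruction error
```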
|
Page generated in 0.0934 seconds