181

Low-Complexity Decoding and Construction of Space-Time Block Codes

Natarajan, Lakshmi Prasad January 2013 (has links) (PDF)
Space-Time Block Coding is an efficient communication technique used in multiple-input multiple-output wireless systems. The complexity with which a Space-Time Block Code (STBC) can be decoded is important from an implementation point of view, since it directly affects the receiver complexity and speed. In this thesis, we address the problem of designing low-complexity decoding techniques for STBCs, and of constructing STBCs that achieve high rate and full diversity with these decoders. This thesis is divided into two parts; the first is concerned with the optimal decoder, viz. the maximum-likelihood (ML) decoder, and the second with non-ML decoders. An STBC is said to be multigroup ML decodable if the information symbols encoded by it can be partitioned into several groups such that each symbol group can be ML decoded independently of the others, thereby admitting low-complexity ML decoding. In this thesis, we first give a new framework for constructing STBCs with low ML decoding complexity using codes over the Klein group, and show that almost all known low-ML-decoding-complexity STBCs can be obtained by this method. Using this framework, we then construct new full-diversity STBCs that have the least known ML decoding complexity for a large set of choices of the number of transmit antennas and the rate. We then introduce the notion of Asymptotically-Good (AG) multigroup ML decodable codes: families of multigroup ML decodable codes whose rate increases linearly with the number of transmit antennas. We give constructions of full-diversity AG multigroup ML decodable codes for each number of groups g > 1. For g > 2, these are the first instances of g-group ML decodable codes that are AG or that have rate more than 1. For g = 2 and identical delay, the new codes match the known families of AG codes in terms of rate. In the final section of the first part, we show that the upper triangular matrix R encountered during the sphere decoding of STBCs can be rank-deficient, leading to higher sphere-decoding complexity, even when the rate is less than the minimum of the number of transmit antennas and the number of receive antennas. We show that all known AG multigroup ML decodable codes suffer from such rank deficiency, and we explicitly derive the sphere-decoding complexities of most known AG multigroup ML decodable codes. In the second part of this thesis, we first study a low-complexity non-ML decoder introduced by Guo and Xia, called the Partial Interference Cancellation (PIC) decoder. We give a new full-diversity criterion for PIC decoding of STBCs that is equivalent to the criterion of Guo and Xia, but easier to check. We then show that Distributed STBCs (DSTBCs) used in wireless relay networks can be full-diversity PIC decoded, and we give a full-diversity criterion for the same. We then construct full-diversity PIC decodable STBCs and DSTBCs that give higher rate and better error performance than known multigroup ML decodable codes of similar decoding complexity, and that include other known full-diversity PIC decodable codes as special cases. Finally, inspired by a low-complexity, essentially-ML decoder given by Sirianunpiboon et al. for the two- and three-antenna Perfect codes, we introduce a new non-ML decoder called the Adaptive Conditional Zero-Forcing (ACZF) decoder, which includes the technique of Sirianunpiboon et al. as a special case.
We give a full-diversity criterion for ACZF decoding, and show that the Perfect codes for two, three and four antennas, the Threaded Algebraic Space-Time code, and the 4-antenna, rate-2 code of Srinath and Rajan satisfy this criterion. Simulation results show that the proposed decoder performs identically to ML decoding for these five codes. These STBCs, together with ACZF decoding, have the best error performance with the least complexity among all known STBCs for four or fewer transmit antennas.
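To make the multigroup (here, single-symbol) ML decoding idea above concrete, the following minimal sketch decodes the classic 2x1 Alamouti STBC, the simplest code of this kind, in Python/NumPy. It is an illustration under standard textbook assumptions, not one of the decoders constructed in the thesis; the constellation, channel and noise values are arbitrary.

```python
# Minimal sketch of multigroup (here: single-symbol) ML decoding, using the
# classic 2x1 Alamouti STBC. The combining step decouples the two information
# symbols, so each is ML-decoded independently: a linear search per symbol
# instead of a joint search over all symbol pairs.
import numpy as np

rng = np.random.default_rng(0)
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

# One Alamouti codeword X = [[s1, -conj(s2)], [s2, conj(s1)]]
s = rng.choice(QPSK, size=2)
X = np.array([[s[0], -np.conj(s[1])],
              [s[1],  np.conj(s[0])]])

h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)  # 2x1 channel
noise = 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = h @ X + noise                      # received signal over two time slots

# Alamouti combining: the symbols decouple into two independent estimates,
# z_k = (|h0|^2 + |h1|^2) * s_k + noise.
z1 = np.conj(h[0]) * y[0] + h[1] * np.conj(y[1])
z2 = np.conj(h[1]) * y[0] - h[0] * np.conj(y[1])

# Per-symbol ML decision: nearest scaled constellation point, per group.
gain = np.linalg.norm(h) ** 2
s_hat = [QPSK[np.argmin(np.abs(z - gain * QPSK))] for z in (z1, z2)]
print("sent:   ", s)
print("decoded:", np.array(s_hat))
```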
182

Space-Time Block Codes With Low Sphere-Decoding Complexity

Jithamithra, G R 07 1900 (has links) (PDF)
One of the most popular ways to exploit the advantages of a multiple-input multiple-output (MIMO) system is space-time block coding. A space-time block code (STBC) is a finite set of complex matrices whose entries consist of the information symbols to be transmitted. A linear STBC is one in which the information symbols are linearly combined to form a two-dimensional code matrix. A well-known method of maximum-likelihood (ML) decoding of such STBCs is the sphere decoder (SD). In this thesis, new constructions of STBCs with low sphere-decoding complexity are presented, and various ways of characterizing and reducing the sphere-decoding complexity of an STBC are addressed. The construction of low-sphere-decoding-complexity STBCs is tackled using irreducible matrix representations of Clifford algebras, cyclic division algebras and crossed-product algebras. Complexity-reduction algorithms for the constructed STBCs are explored using tree-based search algorithms. Considering an STBC as a vector space over the set of weight matrices, the problem of characterizing the sphere-decoding complexity is addressed using quadratic form representations. The main results are as follows. A sub-class of fast-decodable STBCs known as Block Orthogonal STBCs (BOSTBCs) is explored. A set of sufficient conditions for obtaining BOSTBCs is given. It is then explained how the block orthogonal structure of these codes can be exploited to reduce the SD complexity of the STBC using a depth-first tree search algorithm. Bounds on the SD complexity reduction and its relationship with the block orthogonal structure are then addressed. A set of constructions of BOSTBCs is presented next, using Clifford unitary weight designs (CUWDs), coordinate-interleaved orthogonal designs (CIODs), cyclic division algebras and crossed-product algebras, which shows that many codes existing in the literature exhibit the block orthogonal property. Next, the dependence of the SD complexity on the ordering of the information symbols is discussed, following which a quadratic form representation known as the Hurwitz-Radon quadratic form (HRQF) of an STBC is presented, which depends solely on the weight matrices of the STBC and their ordering. It is then shown that the SD complexity is a function only of the weight matrices defining the code and their ordering, and not of the channel realization (even though the equivalent channel used by the SD depends on the channel realization). It is also shown that the SD complexity is completely captured in a single matrix obtained from the HRQF. Also, for a given set of weight matrices, an algorithm to obtain an ordering of them that leads to the least SD complexity is presented using the HRQF matrix.
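As an illustration of the HRQF idea summarized above, the sketch below computes an HRQF-style matrix under one common formulation from the literature, M[p, q] = ||A_p A_q^H + A_q A_p^H||_F^2 over the weight matrices A_k; the exact definition used in the thesis may differ, so treat this as an assumption of the sketch. Off-diagonal zeros mark real-symbol pairs that decouple during sphere decoding; the Alamouti weight matrices, for which the matrix is fully diagonal, serve as the test case.

```python
# HRQF-style matrix over the weight matrices of an STBC (formulation assumed:
# M[p, q] = ||A_p A_q^H + A_q A_p^H||_F^2). A zero entry means the pair of
# real symbols is Hurwitz-Radon orthogonal, which produces zeros in the R
# matrix of the QR decomposition and hence lower sphere-decoding complexity.
import numpy as np

j = 1j
# Weight matrices of the Alamouti code for the real symbols x1..x4,
# where s1 = x1 + j*x2 and s2 = x3 + j*x4.
A = [np.array([[1, 0], [0, 1]], dtype=complex),
     np.array([[j, 0], [0, -j]]),
     np.array([[0, -1], [1, 0]], dtype=complex),
     np.array([[0, j], [j, 0]])]

K = len(A)
M = np.zeros((K, K))
for p in range(K):
    for q in range(K):
        M[p, q] = np.linalg.norm(A[p] @ A[q].conj().T + A[q] @ A[p].conj().T) ** 2

print(np.round(M, 6))
# A fully diagonal M (as printed here for Alamouti) means every real symbol
# decodes independently: the least possible sphere-decoding complexity.
```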
183

Návrh a implementace řídícího programu pro CNC obráběcí stroj prostřednictvím B&R Automation / Design and implementation of control program for CNC machine via B&R Automation

Vavrík, Michal January 2020 (has links)
The aim of this diploma thesis is the conversion of a conventional milling machine into a CNC milling machine using a programmable logic controller. The theoretical part of the thesis contains a description of machine tools and Industry 4.0, an overview of the companies Optimum Maschinen and B&R Automation, including their products, and a description of the G-code programming language. The practical part begins by describing the selected hardware, its wiring, and the tuning of the motors. It then explains the method of motor control and the processing of CNC programs written in G code. The following chapters discuss the creation of industrial visualizations for machine control and of a digital twin for testing purposes. The conclusion evaluates the results and indicates possibilities for future expansion of the machine and its integration into an automated cell in the sense of Industry 4.0.
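As a hedged illustration of the G-code processing mentioned above, and not of the actual B&R Automation Studio implementation, the following Python sketch parses a tiny subset of G code (G0/G1 linear moves, modal feed rate, comments) into motion commands; the program lines are invented examples.

```python
# Minimal, illustrative G-code line parser: a sketch of the kind of CNC
# program processing the thesis describes, NOT the B&R implementation.
import re

WORD = re.compile(r"([A-Z])(-?\d+\.?\d*)")

def parse_line(line, state):
    """Update machine state from one G-code line; return a move or None."""
    line = re.sub(r"\(.*?\)|;.*", "", line).upper()   # strip both comment styles
    words = dict((w, float(v)) for w, v in WORD.findall(line))
    if "G" in words:
        state["motion"] = int(words["G"])             # modal motion mode
    if "F" in words:
        state["feed"] = words["F"]                    # modal feed rate
    target = {ax: words[ax] for ax in "XYZ" if ax in words}
    if target and state.get("motion") in (0, 1):      # rapid or linear move
        state["pos"].update(target)
        return dict(state["pos"]), state.get("feed")
    return None

state = {"pos": {"X": 0.0, "Y": 0.0, "Z": 0.0}, "motion": None, "feed": None}
program = ["G0 X10 Y10 (rapid to start)",
           "G1 Z-2 F200 ; plunge",
           "G1 X50 Y10"]
for ln in program:
    move = parse_line(ln, state)
    if move:
        print("move to", move[0], "at feed", move[1])
```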
184

Modul digitálního signálového procesoru pro ruční RFID čtečku / Digital Signal Processor for Handheld RFID Reader

Benetka, Miroslav January 2008 (has links)
This diploma thesis deals with the design and realization of a digital signal processor module for a handheld RFID reader working in the UHF band. It utilises the special-purpose EM4298 chip for RFID signal processing. The module is controlled by an ATmega32L microcontroller, which communicates with a PC over the USB bus. The EM4298 is configured by a service program, which also processes the identification data received from tags. The source code for the microcontroller is created in AVR Studio 4.13, and the source code for the service program is created in C++ Builder 6.0. The thesis further covers the design and realization of an analog interface and a UHF transceiver for wireless communication with tags. The WEBENCH program, freely available on the internet, was used for the analog interface design, and PSpice 10.0 was used to verify the parameters of the analog interface. The UHF transceiver is built around a MAX2903 chip (transmitter), an AD8347 chip (receiver), and transmitting and receiving antennas.
185

E-noses equipped with Artificial Intelligence Technology for diagnosis of dairy cattle disease in veterinary / E-nose utrustad med Artificiell intelligens teknik avsedd för diagnos av mjölkboskap sjukdom i veterinär

Haselzadeh, Farbod January 2021 (has links)
The main goal of this project, carried out at Neurofy AB, was to develop an AI recognition algorithm (also known as a gas sensing algorithm, or simply a recognition algorithm) with the ability to detect or predict dairy cattle diseases using odor signal data gathered, measured and provided by a Gas Sensor Array (GSA), also known as an electronic nose or simply an e-nose, developed by the company. The project faced two major challenges: first, to overcome the noise and errors in the odor signal data, since the e-nose is intended to be used in an environment with different conditions than a laboratory, for instance in a bail (a stall for milking cows) with varying humidity and temperature; and second, to find a feature extraction method appropriate for the GSA. Normalization and Principal Component Analysis (PCA) are two classic methods that are not only intended for re-scaling and reducing the features of a data set in the pre-processing phase of developing an odor identification algorithm, but are also thought to reduce the effect of noise in odor signal data. However, applying classic approaches such as PCA for feature extraction and dimensionality reduction led to the loss of valuable data, which made odor classification difficult. A new method was therefore developed, in place of PCA in the feature extraction stage, to handle the noise in the odor signal data and to perform dimensionality reduction without losing valuable data. This method, consisting of signal segmentation and an encoder-decoder autoencoder, made it possible to overcome the noise issues in the data sets; it is also a more appropriate feature extraction method, since the AI gas recognition algorithm achieves better prediction accuracy with it than with PCA. The autoencoder was evaluated by monitoring its learning rate. For classifying and predicting odors, several classifiers were investigated, among others Logistic Regression (LR), Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), Random Forest Classifier (RFC) and MultiLayer Perceptron (MLP). The best predictions were obtained by the MLP classifier. To validate the predictions obtained by the new AI recognition algorithm, several validation methods were applied, such as cross-validation, accuracy score, balanced accuracy score, precision score, recall score, and learning curves. The new AI recognition algorithm is able to diagnose three different dairy cattle diseases with an accuracy of 96% despite the limited number of samples.
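A small sketch of the classifier comparison and validation step described above, using scikit-learn on synthetic data that stands in for autoencoder-extracted odor features (the real Neurofy data and labels are not reproduced here):

```python
# Compare the classifiers named in the abstract with cross-validated
# balanced accuracy on synthetic "odor feature" data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 3 classes ~ 3 dairy cattle diseases; few samples, as noted in the abstract.
X, y = make_classification(n_samples=120, n_features=32, n_informative=12,
                           n_classes=3, random_state=0)

classifiers = {
    "LR":  LogisticRegression(max_iter=2000),
    "SVM": SVC(),
    "LDA": LinearDiscriminantAnalysis(),
    "RFC": RandomForestClassifier(random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)   # normalization, then classifier
    scores = cross_val_score(pipe, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```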
186

Implementace detektoru klíčových slov do mobilního telefonu (Symbian 60) / Keyword Spotting Implementation to Mobil Phone (Symbian 60)

Cipr, Tomáš Unknown Date (has links)
Keyword spotting is one of the many applications of automatic speech recognition. Its purpose is to determine the places in a given utterance at which any of the specified words were spoken. Keyword spotting has great potential to enhance the performance of new applications as well as existing ones; an example is voice control of a mobile phone. With the arrival of OS Symbian on the market, it is even possible for an end user to implement keyword spotting for a mobile phone on his or her own. The thesis describes the theoretical prerequisites for keyword spotting and its implementation. First, OS Symbian is presented with respect to the given task. Then each step of the keyword spotting process is described. Finally, the object design of the keyword spotter is presented, followed by a description of the implementation. The thesis concludes with a review of the results and notes on possible improvements.
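The thesis builds a full speech-recognition-based spotter; as a simpler stand-in that still shows the core idea of scoring every position of an utterance against a keyword model, here is a hedged Python sketch of sliding-window template matching with dynamic time warping over generic feature frames (e.g., MFCCs). The data is synthetic and the DTW approach is a substitution, not the thesis's method.

```python
# Sliding-window keyword spotting via dynamic time warping (DTW).
import numpy as np

def dtw(a, b):
    """Length-normalized DTW distance between two feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for k in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[k - 1])
            D[i, k] = cost + min(D[i - 1, k], D[i, k - 1], D[i - 1, k - 1])
    return D[n, m] / (n + m)

def spot(utterance, template, hop=5):
    """Return (start_frame, score) of the best-matching window."""
    w = len(template)
    hits = [(s, dtw(utterance[s:s + w], template))
            for s in range(0, len(utterance) - w + 1, hop)]
    return min(hits, key=lambda h: h[1])

rng = np.random.default_rng(1)
template = rng.standard_normal((20, 13))              # fake keyword, 20 frames
utterance = rng.standard_normal((200, 13))
utterance[90:110] = template + 0.1 * rng.standard_normal((20, 13))  # embed it
print(spot(utterance, template))                      # best hit near frame 90
```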
187

Segmentation and Depth Estimation of Urban Road Using Monocular Camera and Convolutional Neural Networks / Segmentering och djupskatting av stadsväg med monokulär kamera

Djikic, Addi January 2018 (has links)
Deep learning for safe autonomous transport is rapidly emerging. Fast and robust perception for autonomous vehicles will be crucial for future navigation in urban areas with high traffic and human interplay. Previous work focuses on extracting full-image depth maps, or on finding specific road features such as lanes. However, in urban environments lanes are not always present, and sensors such as LiDAR with 3D point clouds provide only a rather sparse depth perception of the road, with demanding algorithmic approaches. In this thesis we derive a novel convolutional neural network that we call AutoNet. It is designed as an encoder-decoder network for pixel-wise depth estimation of the drivable free space of an urban road, using only a monocular camera, handled as a supervised regression problem. AutoNet is also constructed as a classification network that solely classifies and segments the drivable free space in real time with monocular vision, handled as a supervised classification problem, which proves to be a simpler and more robust solution than the regression approach. We also implement the state-of-the-art neural network ENet for comparison, which is designed for fast real-time semantic segmentation and fast inference. The evaluation shows that AutoNet outperforms ENet on every performance metric, but is slower in terms of frame rate. Optimization techniques are proposed for future work on how to increase the frame rate of the network while maintaining its robustness and performance. All training and evaluation is done on the Cityscapes dataset. New ground-truth labels for road depth perception are created for training, using a novel approach of fusing pre-computed depth maps with semantic labels. Data collection with a Scania vehicle, mounted with a monocular camera, is conducted to test the final derived models. The proposed AutoNet shows promising state-of-the-art performance with regard to both road depth estimation and road classification.
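As a hedged illustration of the encoder-decoder design described above, and not AutoNet's actual architecture (whose layer configuration is not given in the abstract), here is a minimal PyTorch sketch of a network for per-pixel prediction on road images, with a sigmoid/BCE head for binary free-space segmentation; swapping the head to a linear output and an L1/L2 loss turns it into per-pixel depth regression.

```python
# Tiny encoder-decoder for per-pixel prediction; a sketch, not AutoNet.
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, out_channels=1):
        super().__init__()
        self.encoder = nn.Sequential(          # downsample by 4x
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(          # upsample back to input size
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, out_channels, 2, stride=2),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

net = TinyEncoderDecoder()
img = torch.randn(2, 3, 128, 256)              # batch of RGB road images
logits = net(img)                              # shape (2, 1, 128, 256)
# Binary free-space segmentation: per-pixel BCE on the logits.
target = torch.randint(0, 2, (2, 1, 128, 256)).float()
loss = nn.BCEWithLogitsLoss()(logits, target)
print(logits.shape, float(loss))
```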
188

Augmenting High-Dimensional Data with Deep Generative Models / Högdimensionell dataaugmentering med djupa generativa modeller

Nilsson, Mårten January 2018 (has links)
Data augmentation is a technique that can be performed in various ways to improve the training of discriminative models. Recent developments in deep generative models offer new ways of augmenting existing data sets. In this thesis, a framework for augmenting annotated data sets with deep generative models is proposed, together with a method for quantitatively evaluating the quality of the generated data sets. Using this framework, two data sets for pupil localization were generated with different generative models, including both well-established models and a novel model proposed for this purpose. The novel model was shown, both qualitatively and quantitatively, to generate the best data sets. A set of smaller experiments on standard data sets also revealed cases where this generative model could improve the performance of an existing discriminative model. The results indicate that generative models can be used to augment or replace existing data sets when training discriminative models.
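A minimal sketch of the augmentation framework described above, with a Gaussian mixture standing in for the deep generative model so the example stays self-contained (the thesis itself uses deep models; the GMM is a substitution). Synthetic samples are drawn per class and appended to the training set of a discriminative model.

```python
# Generative augmentation: fit one generative model per class, sample new
# labeled data, retrain the discriminative model, compare test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def augment(X_tr, y_tr, n_extra=200):
    """Fit a per-class generative model and sample new labeled data."""
    Xs, ys = [X_tr], [y_tr]
    for c in np.unique(y_tr):
        gm = GaussianMixture(n_components=2, random_state=0).fit(X_tr[y_tr == c])
        Xs.append(gm.sample(n_extra)[0])
        ys.append(np.full(n_extra, c))
    return np.vstack(Xs), np.concatenate(ys)

base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
X_aug, y_aug = augment(X_tr, y_tr)
aug = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
print("baseline accuracy: ", base.score(X_te, y_te))
print("augmented accuracy:", aug.score(X_te, y_te))
```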
189

Překlad z češtiny do angličtiny / Czech-English Translation

Petrželka, Jiří January 2010 (has links)
This diploma thesis describes the principles of statistical machine translation and demonstrates how to assemble the Moses statistical machine translation system. In the preparation phase, freely available bilingual Czech-English corpora are surveyed. An empirical analysis of the time requirements of multi-threaded word-alignment tools demonstrates that MGIZA++ can achieve up to a five-fold speed-up, while PGIZA++ achieves up to an eight-fold speed-up (compared to GIZA++). Three methods of morphological pre-processing of the Czech training data are tested using simple unfactored models. While simple lemmatization can decrease BLEU, more sophisticated approaches usually increase BLEU. The positive effects of morphological pre-processing diminish as the corpus size grows. The relationship between other corpus characteristics (size, genre, additional data) and the resulting BLEU score is measured empirically. The final system is trained on the CzEng 0.9 corpus and evaluated on a test sample from the WMT 2010 workshop.
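Since the experiments above are reported in BLEU, the following self-contained Python sketch implements sentence-level BLEU (uniform 4-gram weights, single reference, crude smoothing); it illustrates the metric itself, not Moses's internal scorer.

```python
# Minimal sentence-level BLEU: clipped n-gram precisions + brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum(min(c, r[g]) for g, c in h.items())   # clipped counts
        total = max(sum(h.values()), 1)
        log_prec += math.log(max(overlap, 1e-9) / total) / max_n  # crude smoothing
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))      # brevity penalty
    return bp * math.exp(log_prec)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))   # 1.0
print(bleu("a cat sat on mat", "the cat sat on the mat"))         # < 1.0
```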
190

Medical image captioning based on Deep Architectures / Medicinsk bild textning baserad på Djupa arkitekturer

Moschovis, Georgios January 2022 (has links)
Diagnostic Captioning is described as "the automatic generation of a diagnostic text from a set of medical images of a patient collected during an examination" [59], and it can assist inexperienced doctors and radiologists in reducing clinical errors, or help experienced professionals increase their productivity. In this context, tools that would help medical doctors produce higher-quality reports in less time could be of high interest for medical imaging departments, and could also significantly impact deep learning research within the biomedical domain, which makes the field particularly interesting to industry practitioners and researchers alike. In this work, we attempted to develop Diagnostic Captioning systems based on novel Deep Learning approaches, to investigate to what extent neural networks are capable of performing medical image tagging, as well as of automatically generating a diagnostic text from a set of medical images. Towards this objective, the first step is concept detection, which boils down to predicting the relevant tags for X-ray images, whereas the ultimate goal is caption generation. To this end, we participated in the ImageCLEFmedical 2022 evaluation campaign, addressing both the concept detection and the caption prediction tasks by developing baselines based on deep neural networks, including image encoders, classifiers and text generators, in order to get a quantitative measure of the proposed architectures' performance [28]. Our contribution to the evaluation campaign, submitted as part of this work on behalf of the NeuralDynamicsLab group at the KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science, ranked 4th in the former task and 5th in the latter [55, 68] among 12 groups, with submissions within the top 10 best performing in both tasks.
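A hedged sketch of the concept-detection step described above: multi-label tag prediction for X-ray images with an image encoder and a sigmoid head in PyTorch. The toy CNN and the vocabulary size are assumptions for illustration, not the architectures submitted to ImageCLEFmedical 2022.

```python
# Concept detection as multi-label classification: one sigmoid logit per tag.
import torch
import torch.nn as nn

NUM_CONCEPTS = 100                                  # assumed tag vocabulary size

encoder = nn.Sequential(                            # toy image encoder
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, NUM_CONCEPTS)                  # one logit per concept

xray = torch.randn(4, 1, 224, 224)                  # batch of grayscale X-rays
logits = head(encoder(xray))
labels = torch.randint(0, 2, (4, NUM_CONCEPTS)).float()
loss = nn.BCEWithLogitsLoss()(logits, labels)       # independent per-tag loss
predicted_tags = torch.sigmoid(logits) > 0.5        # threshold to get the tags
print(logits.shape, float(loss), int(predicted_tags.sum()))
```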
