321

Turbo codes

Yan, Yun January 1999 (has links)
No description available.
322

Convolutional Codes with Additional Structure and Block Codes over Galois Rings

Szabo, Steve January 2009 (has links)
No description available.
323

Semantic Segmentation of Building Materials in Real World Images Using 3D Information / Semantisk segmentering av byggnadsmaterial i verkliga världen med hjälp av 3D information

Rydgård, Jonas, Bejgrowicz, Marcus January 2021 (has links)
The increasing popularity of drones has made it convenient to capture a large number of images of a property, which can then be used to build a 3D model. The condition of buildings can be analyzed to plan renovations. This creates an interest in automatically identifying building materials, a task well suited for machine learning. With access to drone imagery of buildings as well as depth maps and normal maps, we created a dataset for semantic segmentation. Two different convolutional neural networks were trained and evaluated to see how well they perform material segmentation. DeepLabv3+, which uses RGB data, was compared to Depth-Aware CNN, which uses RGB-D data. Our experiments showed that DeepLabv3+ achieved higher mean intersection over union. To investigate whether the information in the depth maps and normal maps could give a performance boost, we conducted experiments with an encoding we call HMN: horizontal disparity, magnitude of the normal parallel with the ground, and the normal component along the gravity direction. This three-channel encoding was used to jointly train two CNNs, one with RGB and one with HMN, and then sum their predictions. This led to improved results for both DeepLabv3+ and Depth-Aware CNN.
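The late-fusion step described above (one network trained on RGB, one on the HMN encoding, and their predictions summed) can be sketched as below. This is a minimal illustration, not the authors' code: the model objects, tensor shapes, and the argmax over summed logits are assumptions about how such a fusion is typically wired up in PyTorch.

```python
import torch
import torch.nn as nn

def fused_segmentation(rgb_model: nn.Module, hmn_model: nn.Module,
                       rgb: torch.Tensor, hmn: torch.Tensor) -> torch.Tensor:
    """Late fusion: sum per-class logits from an RGB branch and an HMN branch.

    rgb, hmn: (batch, 3, H, W) tensors; both models are assumed to return
    (batch, classes, H, W) logits over the material classes.
    """
    rgb_model.eval()
    hmn_model.eval()
    with torch.no_grad():
        logits = rgb_model(rgb) + hmn_model(hmn)   # element-wise sum of class scores
    return logits.argmax(dim=1)                    # per-pixel material label
```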
324

Iterative full-genome phasing and imputation using neural networks

Rydin, Lotta January 2022 (has links)
In this project, a model based on a convolutional neural network has been developed with the aim of imputing missing genotype data. The model was based on an existing autoencoder that was modified into a U-Net structure. The network was trained and used iteratively, with the intention that the result would improve with each iteration. To do this, the output of the model was used as the input to the next iteration. The data used in this project were diploid genotypes, which were phased into haploids and then run separately through the network. In each iteration, new haploids were generated from the output haploids and used as input in the next iteration. The results showed that imputation accuracy improved slightly with every iteration, but it did not surpass the same model trained for a single iteration. Further work is needed to make the model more useful.
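A minimal sketch of the iterative imputation loop described above, where each round's output becomes the next round's input while observed calls are kept fixed. The `model.predict` interface, the placeholder filling, and the number of iterations are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def iterative_impute(model, genotypes: np.ndarray, mask: np.ndarray,
                     n_iter: int = 5) -> np.ndarray:
    """Repeatedly impute missing haploid genotypes, feeding each round's
    output back in as the next round's input.

    genotypes: (n_samples, n_markers) array with missing sites filled with a placeholder.
    mask:      boolean array, True where the original call was observed.
    model:     any object with a predict() method returning imputed markers.
    """
    current = genotypes.copy()
    for _ in range(n_iter):
        predicted = model.predict(current)
        # keep observed calls fixed; only missing sites are updated each round
        current = np.where(mask, genotypes, predicted)
    return current
```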
325

Evaluation of Temporal Convolutional Networks for Nanopore DNA Sequencing

Stymne, Jakob, Welin Odeback, Oliver January 2020 (has links)
Nanopore sequencing, a recently developed method for DNA sequencing, involves applying a constant electric field over a membrane and translocating single-stranded DNA molecules through membrane pores. This results in an electrical signal that depends on the structure of the DNA. The aim of this project is to train and evaluate a non-causal temporal convolutional neural network that accurately translates such raw electrical signals into the corresponding nucleotide sequence. The training dataset is sampled from the E. coli bacterial genome and the phage Lambda virus. We implemented and evaluated several different temporal convolutional architectures. A network with five residual blocks of five convolutional layers each yields the best performance, with a prediction accuracy of 76.1% on unseen test data. This result indicates that a temporal convolutional network could be an effective way to sequence DNA data. / Bachelor's thesis in electrical engineering 2020, KTH, Stockholm
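A rough sketch of the reported best architecture (five residual blocks with five convolutional layers each) is shown below. The channel counts, dilation pattern, 'same' padding used to make the network non-causal, and the five output classes are assumptions for illustration; the abstract only fixes the block and layer counts.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Non-causal residual block: a stack of 1-D convolutions with a skip connection."""
    def __init__(self, channels: int, kernel_size: int = 3, n_layers: int = 5):
        super().__init__()
        layers = []
        for i in range(n_layers):
            dilation = 2 ** i                      # growing receptive field (assumed)
            layers += [
                nn.Conv1d(channels, channels, kernel_size,
                          padding=dilation * (kernel_size - 1) // 2,  # 'same' padding -> non-causal
                          dilation=dilation),
                nn.BatchNorm1d(channels),
                nn.ReLU(),
            ]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return torch.relu(x + self.body(x))        # residual connection

# assumed overall shape: five residual blocks over the raw current signal,
# followed by a per-position classifier over the nucleotide alphabet plus a blank
model = nn.Sequential(
    nn.Conv1d(1, 64, kernel_size=3, padding=1),
    *[ResidualBlock(64) for _ in range(5)],
    nn.Conv1d(64, 5, kernel_size=1),
)
```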
326

Identification and quantification of concrete cracks using image analysis and machine learning

AVENDAÑO, JUAN CAMILO January 2020 (has links)
Nowadays, inspections of civil engineering structures are performed manually at close range in order to assess damage. This requires specialized equipment that tends to be expensive and often requires closing the bridge. Manual inspections are also time-consuming and can be a source of risk for the inspectors. Moreover, they are subjective and highly dependent on the inspector's state of mind, which reduces their accuracy. Image-based inspections using cameras or unmanned aerial vehicles (UAVs) combined with image processing have been used to overcome the challenges of traditional manual inspections. This type of inspection has also been studied with machine learning algorithms to improve the detection of damage, in particular cracks. This master's thesis presents an approach that combines different aspects of the inspection, from data acquisition, through crack detection, to the quantification of essential parameters. Both digital cameras and a UAV were used for data acquisition. A convolutional neural network (CNN) is used to identify cracks, and different quantification methods are then explored to determine crack width and length. The results are compared with control measurements to determine the accuracy of the method. The CNN produces few to no false negatives when identifying cracks. The quantification of the identified cracks achieves its highest accuracy for 0.2 mm cracks.
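One common way to quantify crack length and width from a segmentation mask, consistent with the pipeline described above, is to skeletonize the mask and read widths off a distance transform. This sketch uses scikit-image and SciPy and is an assumed approach, not necessarily the quantification method used in the thesis.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def crack_dimensions(mask: np.ndarray, mm_per_pixel: float):
    """Estimate crack length and maximum width from a binary segmentation mask.

    mask: 2-D boolean array where True marks pixels classified as crack.
    Length is approximated by the skeleton pixel count; width by twice the
    distance from each skeleton pixel to the nearest background pixel.
    """
    skeleton = skeletonize(mask)
    length_mm = skeleton.sum() * mm_per_pixel
    distance = ndimage.distance_transform_edt(mask)          # distance to background
    widths_mm = 2.0 * distance[skeleton] * mm_per_pixel      # local crack diameter
    return length_mm, float(widths_mm.max()) if widths_mm.size else 0.0
```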
327

Segmentation of cancer epithelium using nuclei morphology with Deep Neural Network / Segmentering av cancerepitel utifrån kärnmorfologi med djupinlärning

Sharma, Osheen January 2020 (has links)
Bladder cancer (BCa) is the fourth most commonly diagnosed cancer in men and the eighth most common in women. It is an abnormal growth of tissue that develops in the bladder lining. Histological analysis of bladder tissue facilitates diagnosis and serves as an important tool for research. To better understand the molecular profile of bladder cancer and to detect predictive and prognostic features, microscopy methods such as immunofluorescence (IF) are used to investigate the characteristics of bladder cancer tissue. For this project, a new method is proposed to segment cancer epithelium using nuclei morphology captured with IF staining. The method is implemented using deep learning algorithms and the performance achieved is compared with the literature. The dataset is stained for nuclei (DAPI) and with a marker for cancer epithelium (panEPI), which was used to create the ground truth. Three popular convolutional neural networks (CNNs), namely U-Net, Residual U-Net and VGG16, were implemented to perform the segmentation task on the tissue microarray dataset. In addition, a transfer learning approach was tested with the VGG16 network pre-trained on the ImageNet dataset. The performance of the three networks was compared using 3-fold cross-validation. The Dice accuracies achieved were 83.32% for U-Net, 88.05% for Residual U-Net and 82.73% for VGG16. These findings suggest that segmentation of cancerous tissue regions using only nuclear morphology is feasible with high accuracy. Computer vision methods that better utilize the nuclear morphology captured by the nuclear stain are promising approaches to digitally augment conventional IF marker panels, and therefore offer improved resolution of molecular characteristics in research settings.
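The evaluation described above (Dice accuracy under 3-fold cross-validation) can be sketched as follows. The `build_model`, `fit`, and `predict` interfaces and the 0.5 threshold are placeholders; only the Dice metric and the 3-fold split come from the abstract.

```python
import numpy as np
from sklearn.model_selection import KFold

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (epithelium vs. background)."""
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def cross_validate(images, masks, build_model, n_splits: int = 3):
    """3-fold cross-validation loop; build_model() returns a fresh, trainable network."""
    scores = []
    for train_idx, val_idx in KFold(n_splits=n_splits, shuffle=True).split(images):
        model = build_model()
        model.fit(images[train_idx], masks[train_idx])        # placeholder training call
        preds = model.predict(images[val_idx]) > 0.5          # assumed probability output
        scores.append(np.mean([dice_score(p, t) for p, t in zip(preds, masks[val_idx])]))
    return scores
```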
328

Automated Interpretation of Abnormal Adult Electroencephalograms

Lopez de Diego, Silvia Isabel January 2017 (has links)
Interpretation of electroencephalograms (EEGs) is a process that is still dependent on the subjective analysis of the examiner. The interrater agreement, even for relevant clinical events such as seizures, can be low. For instance, the differences between interictal, ictal, and post-ictal EEGs can be quite subtle. Before making such low-level interpretations of the signals, neurologists often classify EEG signals as either normal or abnormal. Even though the characteristics of a normal EEG are well defined, there are some factors, such as benign variants, that complicate this decision. However, neurologists can make this classification accurately by only examining the initial portion of the signal. Therefore, in this thesis, we explore the hypothesis that high performance machine classification of an EEG signal as abnormal can approach human performance using only the first few minutes of an EEG recording. The goal of this thesis is to establish a baseline for automated classification of abnormal adult EEGs using state of the art machine learning algorithms and a big data resource – The TUH EEG Corpus. A demographically balanced subset of the corpus was used to evaluate performance of the systems. The data was partitioned into a training set (1,387 normal and 1,398 abnormal files), and an evaluation set (150 normal and 130 abnormal files). A system based on hidden Markov Models (HMMs) achieved an error rate of 26.1%. The addition of a Stacked Denoising Autoencoder (SdA) post-processing step (HMM-SdA) further decreased the error rate to 24.6%. The overall best result (21.2% error rate) was achieved by a deep learning system that combined a Convolutional Neural Network and a Multilayer Perceptron (CNN-MLP). Even though the performance of our algorithm still lags human performance, which approaches a 1% error rate for this task, we have established an experimental paradigm that can be used to explore this application and have demonstrated a promising baseline using state of the art deep learning technology. / Electrical and Computer Engineering
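A minimal sketch of a CNN-MLP hybrid of the kind reported as the best system: a convolutional front end extracts features from multichannel EEG windows and a multilayer perceptron makes the normal/abnormal decision. The channel count, layer sizes, and pooling are illustrative assumptions rather than the architecture used in the thesis.

```python
import torch
import torch.nn as nn

class CNNMLP(nn.Module):
    """CNN feature extractor over multichannel EEG windows followed by an MLP classifier."""
    def __init__(self, n_channels: int = 22, n_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # collapse the time axis
        )
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_classes),                # normal / abnormal logits
        )

    def forward(self, x):                            # x: (batch, channels, samples)
        return self.mlp(self.cnn(x))
```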
329

Deep Convolutional Neural Networks for Multiclassification of Imbalanced Liver MRI Sequence Dataset

Trivedi, Aditya January 2020 (has links)
Application of deep learning in radiology has the potential to automate workflows, support radiologists with decision support, and provide patients a logic-based algorithmic assessment. Unfortunately, medical datasets are often not uniformly distributed due to a naturally occurring imbalance. For this research, a multi-class classification of liver MRI sequences for imaging of hepatocellular carcinoma (HCC) was conducted on a highly imbalanced clinical dataset using a deep convolutional neural network. We compared four multi-class classifiers: Models A and B (both trained on the imbalanced training data), Model C (trained on augmented training images) and Model D (trained on undersampled training images). Data augmentation (45-degree rotation and horizontal and vertical flips) and random undersampling were performed to tackle class imbalance. HCC, the third most common cause of cancer-related mortality [1], can be diagnosed with high specificity using Magnetic Resonance Imaging (MRI) with the Liver Imaging Reporting and Data System (LI-RADS). Each individual MRI sequence reveals different characteristics that are useful to determine the likelihood of HCC. We developed a deep convolutional neural network for the multi-class classification of imbalanced MRI sequences, which will aid in building a model that applies LI-RADS to diagnose HCC. Radiologists use these MRI sequences to identify specific LI-RADS features; automatic sequence classification helps automate part of the LI-RADS process, and further applications of machine learning to LI-RADS will likely depend on it as a first step. Our study included an imbalanced dataset of 193,868 images containing 10 MRI sequences: in-phase (IP) chemical shift imaging, out-of-phase (OOP) chemical shift imaging, T1-weighted post-contrast imaging (C+, C-, C-C+), fat-suppressed T2-weighted imaging (T2FS), T2-weighted imaging, Diffusion Weighted Imaging (DWI), Apparent Diffusion Coefficient map (ADC) and in-phase/out-of-phase (IPOOP) imaging. Models A, B, C and D achieved macro-average F1 scores of 0.97, 0.96, 0.95 and 0.93, respectively. Model A showed higher classification scores than the models trained using data augmentation and undersampling. / Thesis / Master of Science (MSc)
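Two of the ingredients above, random undersampling to the size of the rarest class and the macro-averaged F1 metric, can be sketched as follows; the function names and seed handling are illustrative assumptions.

```python
import numpy as np
from collections import Counter
from sklearn.metrics import f1_score

def random_undersample(X: np.ndarray, y: np.ndarray, seed: int = 0):
    """Randomly undersample every class down to the size of the rarest class."""
    rng = np.random.default_rng(seed)
    minority = min(Counter(y).values())
    keep = np.concatenate([
        rng.choice(np.where(y == label)[0], size=minority, replace=False)
        for label in np.unique(y)
    ])
    return X[keep], y[keep]

def macro_f1(y_true, y_pred) -> float:
    """Macro-averaged F1 weights all 10 sequence classes equally, regardless of frequency."""
    return f1_score(y_true, y_pred, average="macro")
```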
330

Joint random linear network coding and convolutional code with interleaving for multihop wireless network

Susanto, Misfa, Hu, Yim Fun, Pillai, Prashant January 2013 (has links)
Error control techniques are designed to ensure reliable data transfer over unreliable communication channels that are frequently subjected to channel errors. In this paper, the effect of applying a convolutional code to the Scattered Random Network Coding (SRNC) scheme over a multi-hop wireless channel was studied. An interleaver was implemented for bit scattering in SRNC with the purpose of dividing the encoded data into protected blocks and vulnerable blocks, achieving error diversity within one modulation symbol while randomising errored bits in both blocks. By combining the interleaver with the convolutional encoder, the network decoder in the receiver has a sufficient number of correctly received network-coded blocks to perform the decoding process efficiently. Extensive simulations were carried out to study the performance of three systems: 1) SRNC with convolutional encoding and interleaving; 2) SRNC alone; and 3) a system with neither convolutional encoding nor interleaving. Simulation results in terms of block error rate for a 2-hop wireless transmission scenario over an Additive White Gaussian Noise (AWGN) channel are presented. The results show that the system with interleaving and convolutional coding achieves coding gains of at least 1.29 dB and 2.08 dB on average at a block error rate of 0.01, compared with systems 2 and 3 respectively.
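The encode-then-interleave chain described above can be illustrated with a generic rate-1/2 convolutional encoder followed by a block interleaver. The (7, 5) generator polynomials and the interleaver depth are stand-ins; the paper's actual code and interleaver parameters are not given in the abstract.

```python
import numpy as np

def conv_encode(bits: np.ndarray, g1: int = 0o7, g2: int = 0o5) -> np.ndarray:
    """Rate-1/2 convolutional encoder (constraint length 3, generators 7 and 5 in octal)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | int(b)) & 0b111     # shift in the new bit
        out += [bin(state & g1).count("1") % 2,     # parity from generator g1
                bin(state & g2).count("1") % 2]     # parity from generator g2
    return np.array(out, dtype=np.uint8)

def block_interleave(bits: np.ndarray, depth: int) -> np.ndarray:
    """Write row-wise into a depth-column matrix and read column-wise,
    spreading burst errors across coded blocks."""
    n_rows = -(-len(bits) // depth)                 # ceiling division
    padded = np.zeros(n_rows * depth, dtype=bits.dtype)
    padded[:len(bits)] = bits
    return padded.reshape(n_rows, depth).T.flatten()
```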
