11. Quantum convolutional stabilizer codes. Chinthamani, Neelima. 30 September 2004.
Quantum error correction codes were introduced as a means to protect quantum information from decoherence and operational errors. Based on their approach to error control, error correcting codes can be divided into two different classes: block codes and convolutional codes. There has been significant development towards finding quantum block codes since they were first discovered in 1995. In contrast, quantum convolutional codes have remained largely uninvestigated. In this thesis, we develop the stabilizer formalism for quantum convolutional codes. We define distance properties of these codes and give a general method for constructing an encoding circuit from a set of generators of the stabilizer of a quantum convolutional stabilizer code. The resulting encoding circuit enables online encoding of the qubits, i.e., the encoder does not have to wait for the input transmission to end before starting the encoding process. We develop the quantum analogue of the Viterbi algorithm. The quantum Viterbi algorithm (QVA) is a maximum likelihood error estimation algorithm whose complexity grows linearly with the number of encoded qubits. A variation of the quantum Viterbi algorithm, the Windowed QVA, is also discussed. Using the Windowed QVA, we can estimate the most likely error without waiting for the entire received sequence.
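For readers unfamiliar with Viterbi decoding, the sketch below illustrates the classical dynamic-programming search of which the QVA is the quantum analogue: the best-metric path through a trellis is extended one received symbol at a time, so complexity grows linearly with the sequence length. The two-state trellis and branch metric are toy values assumed for illustration; this is not the quantum formulation developed in the thesis.

```python
# Illustrative classical Viterbi decoder: finds the maximum-likelihood state
# sequence through a trellis. The QVA applies the same dynamic-programming
# idea to error estimation for quantum convolutional codes; the toy trellis
# below (states, branch metric) is purely an assumed example.
import math

def viterbi(observations, states, transitions, branch_metric):
    """transitions[s] lists successor states of s; branch_metric(s, t, obs)
    returns a log-likelihood for taking branch s->t given one observation."""
    # survivor[t] = (accumulated metric, path) of the best path ending in t
    survivor = {s: (0.0, [s]) for s in states}
    for obs in observations:
        updated = {}
        for s, (metric, path) in survivor.items():
            for t in transitions[s]:
                cand = metric + branch_metric(s, t, obs)
                if t not in updated or cand > updated[t][0]:
                    updated[t] = (cand, path + [t])
        survivor = updated
    # Complexity is linear in the number of trellis sections, mirroring the
    # linear scaling of the QVA in the number of encoded qubits.
    return max(survivor.values(), key=lambda v: v[0])

# Toy two-state example with a noisy-bit observation model.
states = [0, 1]
transitions = {0: [0, 1], 1: [0, 1]}
bm = lambda s, t, obs: math.log(0.9 if obs == t else 0.1)
metric, path = viterbi([0, 1, 1, 0], states, transitions, bm)
```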
12. Dual domain decoding of high rate convolutional codes for iterative decoders. Srinivasan, Sudharshan. January 2008.
This thesis addresses the problem of decoding high rate convolutional codes directly, without resorting to puncturing. High rate codes are necessary for applications that require high bandwidth efficiency, such as high data rate communication systems and magnetic recording systems. Convolutional (rate k/n) codes, used as component codes for turbo codes, are preferred for their regular trellis structure and the resulting ease of decoding. However, the branch complexity of the (primal) code trellis increases exponentially with k for k/(k+1) codes, quickly making decoding on the code trellis impractical as the code rate increases. Puncturing is the method traditionally used for generating high rate codes; it keeps the decoding complexity nearly the same over a wide range of code rates, since the same 'mother' code decoder is used at the receiver and only the puncturing and depuncturing pattern is altered as the code rate changes. However, puncturing puts a constraint on the search for the best possible high rate code, thereby resulting in a performance penalty, particularly at high SNRs.
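As an illustration of the puncturing procedure that the thesis seeks to avoid, the sketch below derives a rate-2/3 stream from a rate-1/2 mother code by deleting coded bits according to a fixed pattern. The generator polynomials and the puncturing pattern are standard textbook values assumed for the example, not necessarily those analyzed in the thesis.

```python
# Minimal sketch of puncturing a rate-1/2 convolutional code up to rate 2/3.
# The generators (133, 171 octal) and the puncturing pattern are assumed
# textbook values used only for illustration.
import numpy as np

G = [0o133, 0o171]          # rate-1/2 mother code, constraint length 7
K = 7

def conv_encode(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in G:
            out.append(bin(state & g).count("1") & 1)  # parity of the tapped bits
    return out

# Puncturing pattern for rate 2/3: of every four coded bits (two input bits),
# transmit the bits marked 1 and delete the bit marked 0.
PATTERN = [1, 1, 0, 1]

def puncture(coded):
    return [c for i, c in enumerate(coded) if PATTERN[i % len(PATTERN)]]

info = list(np.random.randint(0, 2, 100))
coded = conv_encode(info)          # 200 coded bits (rate 1/2)
sent = puncture(coded)             # 150 coded bits (rate 2/3)
```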
13. Real-Time Contactless Heart Rate Estimation from Facial Video. Qiu, Ying. 26 October 2018.
With the increase in health consciousness, noninvasive body monitoring has attracted growing interest among researchers. Heart rate is one of the most important pieces of physiological information, and in recent years researchers have estimated it remotely from facial videos. Although progress has been made over the past few years, there are still some limitations, such as processing time that grows with the desired accuracy and the lack of comprehensive and challenging datasets for use and comparison. Recently, it was shown that heart rate information can be extracted from facial videos by spatial decomposition and temporal filtering. Inspired by this, a new framework is introduced in this thesis for remotely estimating the heart rate under realistic conditions by combining spatial and temporal filtering with a convolutional neural network. Our proposed approach outperforms the benchmark on the MMSE-HR dataset in terms of both average heart rate estimation and short-term heart rate estimation, and high consistency in short-term heart rate estimation is observed between our method and the ground truth.
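The spatial-decomposition-and-temporal-filtering step referenced above can be sketched as follows: average a facial region of interest per frame, band-pass the resulting trace around plausible heart-rate frequencies, and read off the dominant spectral peak. The 0.7-4 Hz passband, frame rate, and per-frame ROI averaging are common remote-photoplethysmography assumptions rather than the exact settings of the proposed framework, which additionally feeds the filtered signal to a convolutional neural network.

```python
# Sketch of extracting a heart-rate estimate from a facial video trace by
# temporal band-pass filtering; the ROI averaging, 0.7-4 Hz band and frame
# rate are illustrative assumptions, not the thesis's exact pipeline.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_bpm(roi_means, fps=30.0, band=(0.7, 4.0)):
    """roi_means: 1-D array of per-frame mean intensity over the face ROI."""
    x = roi_means - np.mean(roi_means)
    b, a = butter(3, [band[0] / (fps / 2), band[1] / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    spectrum = np.abs(np.fft.rfft(filtered)) ** 2
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak_hz = freqs[mask][np.argmax(spectrum[mask])]
    return 60.0 * peak_hz   # beats per minute

# Example with a synthetic 75 bpm pulse buried in noise.
t = np.arange(0, 10, 1 / 30.0)
trace = 0.05 * np.sin(2 * np.pi * 1.25 * t) + np.random.normal(0, 0.02, t.size)
bpm = estimate_bpm(trace)   # roughly 75
```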
14. Enhanced Approach for the Classification of Ulcerative Colitis Severity in Colonoscopy Videos Using CNN. Sure, Venkata Leela. 08 1900.
Ulcerative colitis (UC) is a chronic inflammatory disease characterized by periods of relapses and remissions, affecting more than 500,000 people in the United States. To achieve the therapeutic goals of UC, which are to first induce and then maintain disease remission, doctors need to evaluate the severity of a patient's UC. However, it is very difficult to evaluate the severity of UC objectively because of the non-uniform nature of symptoms and the large variations in their patterns. To address this, in our previous works we developed two different approaches, one using image textures and the other using a convolutional neural network (CNN), to objectively measure and classify the severity of UC presented in optical colonoscopy video frames. However, we found that the image-texture-based approach could not handle a large number of variations in symptom patterns, and the CNN-based approach could not achieve very high accuracy. In this work, we improve our CNN-based approach in two ways to provide better classification accuracy: we add more thorough and essential preprocessing, and we generate more classes to accommodate the large variations in symptom patterns. The experimental results show that the proposed preprocessing can improve the overall accuracy of evaluating the severity of UC.
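A minimal sketch of the kind of frame preprocessing and CNN classification described above is given below; the crop size, normalization statistics, backbone choice, and number of severity classes are illustrative assumptions rather than the configuration used in the proposed approach.

```python
# Minimal sketch of preprocessing colonoscopy frames and classifying UC
# severity with a CNN; crop size, normalization statistics, backbone
# (ResNet-18) and number of severity classes are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import transforms, models

NUM_CLASSES = 4   # assumed number of severity grades

preprocess = transforms.Compose([
    transforms.CenterCrop(480),              # assumed crop to drop the scope border
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # severity head (to be trained)

def classify_frame(pil_frame):
    x = preprocess(pil_frame).unsqueeze(0)   # add batch dimension
    with torch.no_grad():
        logits = model(x)
    return int(logits.argmax(dim=1))         # predicted severity class index
```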
15. Mitotic cell detection in H&E stained meningioma histopathology slides. Cheng, Huiwen. 12 1900.
Indiana University-Purdue University Indianapolis (IUPUI) / Meningiomas represent more than one-third of all primary central nervous system (CNS) tumors and can be classified into three WHO (World Health Organization) grades according to clinical aggressiveness and risk of recurrence. A key component of meningioma grading is the mitotic count, defined as the number of cells in the process of dividing (i.e., undergoing mitosis) at a specific point in time. Currently, mitosis counting is done manually by a pathologist looking at 10 consecutive high-power fields (HPF) on a glass slide under a microscope, which is an extremely laborious and time-consuming process. The goal of this thesis is to investigate the use of computerized methods to automate the detection of mitotic nuclei with limited labeled data. We built computational methods to detect and quantify the histological features of mitotic cells on whole-slide images, mimicking the pathologist's workflow. Since we did not have enough training data from meningioma slides, we learned mitotic cell features from publicly available breast cancer datasets and evaluated prediction accuracy on meningioma slides. We use either handcrafted features that capture certain morphological, statistical, or textural attributes of mitoses, or features learned with convolutional neural networks (CNNs). Handcrafted features are inspired by domain knowledge, while the data-driven VGG16 models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. Our work on detection of mitotic cells shows 100% recall, 9% precision, and a 0.17 F1 score, while the detection using VGG16 achieves 71% recall, 73% precision, and a 0.77 F1 score. Finally, this research on automated image analysis could drastically increase diagnostic efficiency and reduce inter-observer variability and errors in pathology diagnosis, which would allow fewer pathologists to serve more patients while maintaining diagnostic accuracy and precision. These methodologies will increasingly transform the practice of pathology, allowing it to mature into a quantitative science.
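The VGG16-based, data-driven pathway described above can be sketched as a pretrained backbone that extracts features from candidate cell patches, followed by a small binary head that separates mitotic from non-mitotic patches. The patch size, frozen backbone, and head design are assumptions for illustration, not the thesis's exact training setup.

```python
# Sketch of VGG16-based mitosis patch classification: a frozen pretrained
# backbone plus a small binary head. Patch size, frozen weights and head
# design are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
for p in backbone.parameters():
    p.requires_grad = False                  # reuse features learned on other data

head = nn.Sequential(                        # small head to be trained on labeled patches
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(512, 64), nn.ReLU(),
    nn.Linear(64, 2),                        # mitotic vs. non-mitotic
)

def classify_patches(patches):
    """patches: float tensor of shape (N, 3, 224, 224), ImageNet-normalized."""
    with torch.no_grad():
        feats = backbone(patches)            # (N, 512, 7, 7) feature maps
    return head(feats).argmax(dim=1)         # 1 = predicted mitotic
```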
16. Performance Impact on Neural Network with Partitioned Convolution Implemented with GPU Programming / Partitioned Convolution in Neuron Network. Lee, Bill. January 2021.
For input data of a homogeneous type, the standard form of a convolutional neural network is normally constructed with universally applied filters to identify global patterns. However, for certain datasets, there are identifiable trends and patterns within subgroups of the input data. This research proposes a convolutional neural network that deliberately partitions input data into groups to be processed with unique sets of convolutional layers, thus identifying the underlying features of individual data groups. Training and testing data are built from historical stock market prices and preprocessed so that the generated datasets are suitable for both the standard and the proposed convolutional neural network. The author of this research also developed a software framework that can construct neural networks to perform the necessary testing. The calculation logic was implemented using parallel programming and executed on an NVIDIA graphics processing unit, thus allowing tests to be executed without expensive hardware. Tests were executed on 134 datasets to benchmark the performance of the standard and the proposed convolutional neural network. Test results show that the partitioned convolution method is capable of performance that rivals its standard counterpart. Further analysis indicates that more sophisticated methods of building datasets, larger sets of training data, or more training epochs can further improve the performance of the partitioned neural network. For suitable datasets, the proposed method could be a viable replacement or supplement to the standard convolutional neural network structure. / Thesis / Master of Applied Science (MASc) / A convolutional neural network is a machine learning tool that allows complex patterns in datasets to be identified and modelled. For datasets whose input consists of the same type of data, a convolutional neural network is often architected to identify global patterns. This research explores the viability of partitioning input data into groups and processing them with separate convolutional layers so that unique patterns associated with individual subgroups of input data can be identified. The author of this research built suitable test datasets and developed a parallel-computation-enabled framework that can construct both the standard and the proposed convolutional neural networks. The test results show that the proposed structure is capable of performance that matches its standard counterpart. Further analysis indicates that there are potential methods to further improve the performance of partitioned convolution, making it a viable replacement or supplement to standard convolution.
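The partitioning idea can be sketched in a few lines: split the input channels into groups, pass each group through its own convolutional stack, and concatenate the resulting features before a shared classifier. The group count, layer widths, and the use of PyTorch (rather than the author's custom GPU framework) are assumptions made only for illustration.

```python
# Sketch of a partitioned-convolution network: each subgroup of input
# channels gets its own convolutional stack before a shared classifier.
# Group count, channel widths and the use of PyTorch (instead of the
# author's custom CUDA framework) are assumptions for illustration.
import torch
import torch.nn as nn

class PartitionedConvNet(nn.Module):
    def __init__(self, in_channels=4, groups=2, classes=3):
        super().__init__()
        assert in_channels % groups == 0
        per_group = in_channels // groups
        # One independent convolutional branch per data subgroup.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(per_group, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            for _ in range(groups)
        ])
        self.classifier = nn.Linear(16 * groups, classes)
        self.groups = groups

    def forward(self, x):                    # x: (batch, in_channels, length)
        chunks = torch.chunk(x, self.groups, dim=1)
        feats = [branch(c).flatten(1) for branch, c in zip(self.branches, chunks)]
        return self.classifier(torch.cat(feats, dim=1))

model = PartitionedConvNet()
out = model(torch.randn(32, 4, 64))          # e.g. 4 price-derived series per sample
```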
17. Implementation of a Forward Error Correction Technique using Convolutional Encoding with Viterbi Decoding. Rawat, Sachin. 30 June 2004.
No description available.
18. Regularization, Uncertainty Estimation and Out of Distribution Detection in Convolutional Neural Networks. Krothapalli, Ujwal K. 11 September 2020.
Classification is an important task in the field of machine learning, and when classifiers are trained on images, a variety of problems can surface during inference. 1) The recent trend of using convolutional neural networks (CNNs) for various machine learning tasks has borne many successes, and CNNs are surprisingly expressive in their learning ability due to their large number of parameters and numerous stacked layers. This increased model complexity also increases the risk of overfitting to the training data. Increasing the size of the training data using synthetic or artificial means (data augmentation) helps CNNs learn better by reducing the amount of overfitting and producing a regularization effect that improves generalization of the learned model. 2) CNNs have proven to be very good classifiers and generally localize objects well; however, the loss functions typically used to train classification CNNs do not penalize the inability to localize an object, nor do they take into account an object's relative size in the given image when producing confidence measures. 3) Convolutional neural networks always produce outputs in the space of the learned classes with high confidence when predicting the class of a given image, regardless of what the image consists of. For example, an ImageNet-1K-trained CNN cannot indicate that a given image contains none of the objects it was trained on when it is provided with an image of a dinosaur (not an ImageNet category) or an image with the main object cut out of it (context only). We approach these three problems using bounding box information and by learning to produce high-entropy predictions on out-of-distribution classes.
To address the first problem, we propose a novel regularization method called CopyPaste. The idea behind our approach is that images from the same class share similar context and can be 'mixed' together without affecting the labels. We use bounding box annotations that are available for a subset of ImageNet images. We consistently outperform the standard baseline and explore the idea of combining our approach with other recent regularization methods as well. We show consistent performance gains on PASCAL VOC07, MS-COCO and ImageNet datasets.
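A minimal sketch of this same-class mixing is given below: the bounding-box crop from a donor image is pasted into a host image of the same class, and the label is left unchanged. The tensor layout and the random placement policy are illustrative assumptions that omit details of the full method.

```python
# Minimal sketch of CopyPaste-style same-class mixing: paste the bounding-box
# crop from a donor image onto a host image of the same class; the label is
# unchanged. Tensor layout and placement policy are illustrative assumptions.
import torch

def copypaste(host, donor, donor_box):
    """host, donor: (C, H, W) tensors of the same class.
    donor_box: (x1, y1, x2, y2) bounding box of the donor's object."""
    x1, y1, x2, y2 = donor_box
    crop = donor[:, y1:y2, x1:x2]
    _, H, W = host.shape
    h, w = crop.shape[1], crop.shape[2]
    # Paste at a random location that keeps the crop inside the host image.
    top = torch.randint(0, H - h + 1, (1,)).item()
    left = torch.randint(0, W - w + 1, (1,)).item()
    mixed = host.clone()
    mixed[:, top:top + h, left:left + w] = crop
    return mixed          # the shared class label of host/donor is reused

mixed = copypaste(torch.rand(3, 224, 224), torch.rand(3, 224, 224), (40, 60, 140, 180))
```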
For the second problem we employ objectness measures to learn meaningful CNN predictions. Objectness is a measure of likelihood of an object from any class being present in a given image. We present a novel approach to object localization that combines the ideas of objectness and label smoothing during training. Unlike previous methods, we compute a smoothing factor that is adaptive based on relative object size within an image.
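A sketch of this adaptive smoothing is shown below: the soft target mixes the one-hot label with a uniform distribution, and the mixing weight grows as the object occupies a smaller fraction of the image. The specific linear mapping from box-area fraction to smoothing strength is an assumption made for illustration.

```python
# Sketch of adaptive label smoothing driven by relative object size: smaller
# objects get stronger smoothing (lower target confidence). The linear mapping
# from box-area fraction to smoothing strength is an illustrative assumption.
import torch

def adaptive_smooth_target(label, num_classes, box, image_size, max_smooth=0.5):
    """box: (x1, y1, x2, y2); image_size: (H, W)."""
    x1, y1, x2, y2 = box
    H, W = image_size
    objectness = ((x2 - x1) * (y2 - y1)) / float(H * W)   # fraction of image covered
    eps = max_smooth * (1.0 - objectness)                  # smaller object -> more smoothing
    target = torch.full((num_classes,), eps / num_classes)
    target[label] += 1.0 - eps
    return target                                          # probabilities sum to 1

# A small object (~10% of the image) yields a noticeably softened target.
t = adaptive_smooth_target(label=3, num_classes=10, box=(0, 0, 100, 100), image_size=(316, 316))
```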
We present extensive results using ImageNet and OpenImages to demonstrate that CNNs trained using adaptive label smoothing are much less likely to be overconfident in their predictions, as compared to CNNs trained using hard targets. We train CNNs using objectness computed from bounding box annotations that are available for the ImageNet dataset and the OpenImages dataset. We perform extensive experiments with the aim of improving the ability of a classification CNN to learn better localizable features and show object detection performance improvements, calibration and classification performance on standard datasets. We also show qualitative results using class activation maps to illustrate the improvements.
Lastly, we extend the second approach to train CNNs with images belonging to out-of-distribution and context-only classes, using a uniform distribution of probability over the set of target classes for such images. This is a novel way to use uniform smooth labels, as it allows the model to learn better confidence bounds. We sample 1000 classes (mutually exclusive to the 1000 classes in ImageNet-1K) from the larger ImageNet dataset comprising about 22K classes. We compare our approach with standard baselines and provide entropy and confidence plots for in-distribution and out-of-distribution validation sets. / Doctor of Philosophy / Categorization is an important task in everyday life. Humans can effortlessly classify objects in pictures. Machines can also be trained to classify objects in images. With the tremendous growth in the area of artificial intelligence, machines have surpassed human performance for some tasks. However, there are plenty of challenges for artificial neural networks. Convolutional Neural Networks (CNNs) are a type of artificial neural network. 1) Sometimes, CNNs simply memorize the samples provided during training and fail to work well with images that are slightly different from the training samples. 2) CNNs have proven to be very good classifiers and generally localize objects well; however, the objective functions typically used to train classification CNNs do not penalize the inability to localize an object, nor do they take into account an object's relative size in the given image. 3) Convolutional neural networks always produce an output in the space of the learned classes with high confidence when predicting the class of a given image, regardless of what the image consists of. For example, an ImageNet-1K (a popular dataset) trained CNN cannot indicate that a given image contains none of the objects it was trained on when it is provided with an image of a dinosaur (not an ImageNet category) or an image with the main object cut out of it (background only).
We approach these three different problems using object position information and by learning to produce low-confidence predictions on out-of-distribution classes.
To address the first problem, we propose a novel regularization method called CopyPaste. The idea behind our approach is that images from the same class share similar context and can be 'mixed' together without affecting the labels. We use bounding box annotations that are available for a subset of ImageNet images. We consistently outperform the standard baseline and explore the idea of combining our approach with other recent regularization methods as well. We show consistent performance gains on PASCAL VOC07, MS-COCO and ImageNet datasets.
For the second problem we employ objectness measures to learn meaningful CNN predictions. Objectness is a measure of likelihood of an object from any class being present in a given image. We present a novel approach to object localization that combines the ideas of objectness and label smoothing during training. Unlike previous methods, we compute a smoothing factor that is adaptive based on relative object size within an image.
We present extensive results using ImageNet and OpenImages to demonstrate that CNNs trained using adaptive label smoothing are much less likely to be overconfident in their predictions, as compared to CNNs trained using hard targets. We train CNNs using objectness computed from bounding box annotations that are available for the ImageNet dataset and the OpenImages dataset. We perform extensive experiments with the aim of improving the ability of a classification CNN to learn better localizable features and show object detection performance improvements, calibration and classification performance on standard datasets. We also show qualitative results to illustrate the improvements.
Lastly, we extend the second approach to train CNNs with images belonging to out-of-distribution and context-only classes, using a uniform distribution of probability over the set of target classes for such images. This is a novel way to use uniform smooth labels, as it allows the model to learn better confidence bounds. We sample 1000 classes (mutually exclusive to the 1000 classes in ImageNet-1K) from the larger ImageNet dataset comprising about 22K classes. We compare our approach with standard baselines on 'in distribution' and 'out of distribution' validation sets.
19. Deep Convolutional Neural Networks for Segmenting Unruptured Intracranial Aneurysms from 3D TOF-MRA Images. Boonaneksap, Surasith. 07 February 2022.
Despite facing technical issues (e.g., overfitting, vanishing and exploding gradients), deep neural networks have the potential to capture complex patterns in data. Understanding how depth impacts neural network performance is vital to the advancement of novel deep learning architectures. By varying hyperparameters on two sets of architectures with different depths, this thesis aims to examine whether there are any potential benefits to developing deep networks for segmenting intracranial aneurysms from 3D TOF-MRA scans in the ADAM dataset. / Master of Science / With the technologies we have today, people are constantly generating data. In this pool of information, gaining insight into the data proves to be extremely valuable. Deep learning is one method that allows for automatic pattern recognition by iteratively reducing the disparity between a model's prediction and the ground truth. Complex models can learn complex patterns, and such models introduce challenges. This thesis explores whether deep neural networks stand to gain improvements despite these challenges. The models are trained to segment intracranial aneurysms from volumetric images.
20. Artificial Intelligence For Mitigation Against Array Perturbations In Direction Of Arrival Estimation. Shaham, Mathew. 01 June 2024.
In Direction of Arrival (DOA) estimation with digital arrays, unknown Gaussian-distributed element location perturbations have detrimental effects on the performance of traditional DOA estimation techniques. This work proposes an artificial intelligence (AI) approach as a solution to this problem. A Deep Convolutional Neural Network (DCNN) is proposed, and network parameters, classification networks, and how the DCNN is applied to the DOA problem are studied. It is shown that this AI-based approach is successful in estimating the DOA with perturbed arrays where traditional approaches fail.
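The setup implied above can be sketched as follows: array snapshots are generated from a uniform linear array whose element positions are perturbed by Gaussian noise, the sample covariance matrix is formed as the network input, and a small convolutional classifier maps it to a DOA bin. The array size, perturbation scale, SNR, and network shape are assumptions for illustration only.

```python
# Sketch of the DOA-under-perturbation setup: generate snapshots from a
# uniform linear array whose element positions are perturbed by Gaussian
# noise, form the sample covariance matrix, and feed its real/imaginary
# parts to a small CNN that classifies the DOA into angular bins. Array
# size, perturbation scale and network shape are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

M, SNAPSHOTS, BINS = 8, 200, 121          # elements, snapshots, 1-degree bins over +/- 60 deg

def covariance(doa_deg, perturb_std=0.05, snr_db=10):
    pos = np.arange(M) * 0.5 + np.random.normal(0, perturb_std, M)   # half-wavelength spacing, perturbed
    a = np.exp(2j * np.pi * pos * np.sin(np.deg2rad(doa_deg)))       # steering vector
    s = (np.random.randn(SNAPSHOTS) + 1j * np.random.randn(SNAPSHOTS)) / np.sqrt(2)
    noise_pow = 10 ** (-snr_db / 10)
    n = np.sqrt(noise_pow / 2) * (np.random.randn(M, SNAPSHOTS) + 1j * np.random.randn(M, SNAPSHOTS))
    x = np.outer(a, s) + n
    R = x @ x.conj().T / SNAPSHOTS
    return np.stack([R.real, R.imag]).astype(np.float32)             # (2, M, M) network input

dcnn = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * M * M, BINS),                       # one logit per angular bin
)

logits = dcnn(torch.from_numpy(covariance(doa_deg=17.0)).unsqueeze(0))
```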