131 |
An investigative study of the applicability of the convolution method to geophysical tomography
Chin, Kimberley Germaine January 1985 (has links)
No description available.
|
132 |
Semi-parametric Bayesian Models Extending Weighted Least Squares
Wang, Zhen 31 August 2009 (has links)
No description available.
|
133 |
Detection of myocardial infarction based on novel deep transfer learning methods for urban healthcare in smart cities
Alghamdi, A., Hammad, M., Ugail, Hassan, Abdel-Raheem, A., Muhammad, K., Khalifa, H.S., Abd El-Latif, A.A. 20 March 2022 (has links)
Yes / One of the common cardiac disorders is a cardiac attack called myocardial infarction (MI), which occurs due to the blockage of one or more coronary arteries. Timely treatment of MI is important, and even a slight delay results in severe consequences. The electrocardiogram (ECG) is the main diagnostic tool to monitor and reveal MI signals. The complex nature of MI signals, along with noise, poses challenges to doctors for accurate and quick diagnosis. Manually studying large amounts of ECG data can be tedious and time-consuming. Therefore, there is a need for methods to automatically analyze ECG data and make a diagnosis. A number of studies have addressed MI detection, but most of these methods are computationally expensive and face the problem of overfitting when dealing with real data. In this paper, an effective computer-aided diagnosis (CAD) system is presented to detect MI signals using a convolutional neural network (CNN) for urban healthcare in smart cities. Two types of transfer learning techniques are employed to retrain the pre-trained VGG-Net (fine-tuning, and VGG-Net as a fixed feature extractor), yielding two new networks, VGG-MI1 and VGG-MI2. In the VGG-MI1 model, the last layer of the VGG-Net model is replaced with a layer specific to our requirements, and various functions are optimized to reduce overfitting. In the VGG-MI2 model, one layer of the VGG-Net model is selected as a feature descriptor of the ECG images to describe them with informative features. Considering the limited availability of the dataset, the ECG data is augmented, which increased the classification performance. The standard, well-known Physikalisch-Technische Bundesanstalt (PTB) Diagnostic ECG database is used for validation of the proposed framework. It is evident from the experimental results that the proposed framework achieves a high accuracy that surpasses existing methods.
In terms of accuracy, sensitivity, and specificity, VGG-MI1 achieved 99.02%, 98.76%, and 99.17%, respectively, while VGG-MI2 achieved an accuracy of 99.22%, a sensitivity of 99.15%, and a specificity of 99.49%. / This project was funded by the University of Jeddah, Jeddah, Saudi Arabia (project number: UJ-02-018-ICGR).
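The "fixed feature extractor" idea behind VGG-MI2 — freeze the pre-trained layers and train only a small classifier head on top of the frozen features — can be sketched in miniature. The sketch below is a hedged illustration, not the authors' VGG pipeline: `frozen_extractor`, the 64-dimensional "ECG" vectors, and all dimensions are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_extractor(x, W_fixed):
    # Stand-in for a pre-trained network used as a fixed feature
    # extractor: these weights are never updated during training.
    return np.maximum(0.0, x @ W_fixed)  # ReLU features

# Hypothetical data: 200 "ECG images" flattened to 64-dim vectors,
# with binary labels (MI vs. healthy control).
X = rng.normal(size=(200, 64))
W_fixed = rng.normal(size=(64, 32)) / 8.0   # frozen "pre-trained" weights
true_w = rng.normal(size=32)
y = (frozen_extractor(X, W_fixed) @ true_w > 0).astype(float)

# Only the small classification head is trained (logistic regression
# on the frozen features).
feats = frozen_extractor(X, W_fixed)
w, b = np.zeros(32), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid
    w -= 0.2 * feats.T @ (p - y) / len(y)        # cross-entropy gradient
    b -= 0.2 * np.mean(p - y)

acc = np.mean(((feats @ w + b) > 0) == y.astype(bool))
print(f"training accuracy of the trained head: {acc:.2f}")
```

In a real setting, `frozen_extractor` would be replaced by the activations of an intermediate layer of a pre-trained VGG-Net, with the same pattern of frozen weights and a small trainable head.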
|
134 |
Development of Surrogate Model for FEM Error Prediction using Deep Learning
Jain, Siddharth 07 July 2022 (has links)
This research is a proof-of-concept study to develop a surrogate model, using deep learning (DL), to predict the solution error for a given model with a given mesh. We take von Mises stress contours as input and predict two different types of error indicator contours, namely (i) the von Mises error indicator (MISESERI) and (ii) the energy density error indicator (ENDENERI). Error indicators are designed to identify areas of the solution domain where the gradient has not been properly captured; they use the spatial gradient distribution of the existing solution on a given mesh to estimate the error. Because of poor meshing and the nature of the finite element method, these error indicators are leveraged to study and reduce errors in the finite element solution using an adaptive remeshing scheme. Adaptive remeshing is an iterative and computationally expensive process that reduces the error computed during the post-processing step. To overcome this limitation, we propose a data-driven replacement: an image-processing-based surrogate model that solves an image-to-image regression problem using convolutional neural networks (CNNs), taking a 256 × 256 color image of the von Mises stress contour as input and producing the required error indicator as output. To train this model with good generalization performance, we developed four different geometries for each of three case studies: (i) a quarter plate with a hole, (ii) a simply supported plate with multiple holes, and (iii) a simply supported stiffened plate. The research is implemented in a three-phase approach: phase I involves the design and development of a CNN trained on stress contour images with their corresponding von Mises stress values volume-averaged over the entire domain.
Phase II involves developing a surrogate model to perform image-to-image regression, and the final phase III extends the capabilities of phase II to make the surrogate model more generalized and robust. The final surrogate model, trained on the global dataset of 12,000 images, consists of three autoencoders, one encoder-decoder assembly, and two multi-output regression neural networks. With a training error of less than 1%, the neural network shows good memorization and generalization performance. Our final surrogate model takes 15.5 hours to train and less than a minute to predict the error indicators on testing datasets. This study can therefore be considered a good first step toward developing an adaptive remeshing scheme using deep neural networks. / Master of Science / This research is a proof-of-concept study to develop an image-processing-based neural network (NN) model to solve an image-to-image regression problem. In finite element analysis (FEA), error indicators are used to study and reduce errors arising from poor meshing and the nature of the finite element method. For this research, we predict two different types of error indicator contours using stress images as inputs to the NN model. In popular FEA packages, an adaptive remeshing scheme is used to optimize mesh quality by iteratively computing error indicators, making the process computationally expensive. To overcome this limitation, we propose replacing it with convolutional neural networks (CNNs), which are particularly suited to image-based data. To train our CNN model with good generalization performance, we developed four different geometries with varying load cases. The research is implemented in a three-phase approach: phase I involves the design and development of a CNN model to perform initial training on small image sizes. Phase II involves developing an assembled neural network to perform image-to-image regression, and the final phase III extends the capabilities of phase II for more generalized and robust results. With a training error of less than 1%, the neural network shows good memorization and generalization performance. Our final surrogate model takes 15.5 hours to train and less than a minute to predict the error indicators on testing datasets. This study can therefore be considered a good first step toward developing an adaptive remeshing scheme using deep neural networks.
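The core operation of such an image-to-image CNN is spatial convolution mapping one contour image to another of the same size. A minimal sketch of that building block (assuming a toy 8 × 8 "stress contour" and a Laplacian kernel as a gradient-sensitive stand-in for a learned filter — not the thesis's actual network) is:

```python
import numpy as np

def conv2d_same(img, kernel):
    # Minimal 'same'-padded 2-D convolution (cross-correlation),
    # the basic building block of an image-to-image CNN.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A hypothetical 8x8 "stress contour" mapped to an equally sized
# "error indicator" map. The Laplacian kernel responds to the spatial
# gradient distribution, much as error indicators flag regions where
# the gradient is poorly captured.
stress = np.fromfunction(lambda i, j: (i - 4.0) ** 2 + (j - 4.0) ** 2, (8, 8))
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
indicator = conv2d_same(stress, laplacian)
print(indicator.shape)  # same spatial size as the input
```

A trained network stacks many such convolutions with learned kernels (the encoder-decoder assembly above), but each layer preserves this image-in, image-out structure.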
|
135 |
FIR implementation on FPGA: investigate the FIR order on SDA and PDA algorithms
Migdadi, Hassan S.O., Abd-Alhameed, Raed, Obeidat, Huthaifa A.N., Noras, James M., Qaralleh, E.A.A., Ngala, Mohammad J. January 2015 (has links)
No / Finite impulse response (FIR) digital filters are extensively used due to their key role in various digital signal processing (DSP) applications. Several attempts have been made to develop hardware realizations of FIR filters characterized by implementation complexity, precision, and high speed. Field-programmable gate arrays (FPGAs) offer a reconfigurable platform for realizing FIR filters and are on the verge of revolutionizing digital signal processing. Many front-end DSP algorithms, such as FFTs and FIR or IIR filters, are now most often realized on FPGAs. Modern FPGA families provide DSP arithmetic support with fast-carry chains that are used to implement multiply-accumulates (MACs) at high speed, with low overhead and low cost. In this paper, serial and parallel distributed arithmetic (DA) realizations of FIR filters are discussed in terms of hardware cost and resource utilization.
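The serial-DA idea the abstract refers to can be illustrated in software: instead of multipliers, a lookup table holds every possible partial sum of the coefficients, and one bit-plane of the input samples is processed per clock with a shift-accumulate. The sketch below is a behavioral model under simplifying assumptions (unsigned integer inputs, hypothetical coefficients), not an HDL implementation.

```python
def fir_da_serial(h, x, nbits=8):
    # Serial distributed-arithmetic FIR tap computation: a 2^N-entry
    # LUT replaces the N multipliers; one input bit-plane per cycle.
    # Unsigned integer inputs are assumed here for simplicity.
    N = len(h)
    # LUT entry `addr` = sum of h[k] for which bit k of addr is set.
    lut = [sum(h[k] for k in range(N) if (addr >> k) & 1)
           for addr in range(1 << N)]
    acc = 0
    for b in range(nbits):                  # one clock per input bit
        addr = 0
        for k in range(N):                  # gather bit b of each sample
            addr |= ((x[k] >> b) & 1) << k
        acc += lut[addr] << b               # shift-accumulate
    return acc

h = [3, 1, 4, 2]               # hypothetical integer coefficients
x = [17, 250, 9, 128]          # current window of 8-bit samples
direct = sum(hk * xk for hk, xk in zip(h, x))   # MAC reference
assert fir_da_serial(h, x) == direct
print(direct)
```

Parallel DA trades LUT size for speed by processing several bit-planes per cycle; the serial form above takes `nbits` cycles per output but needs no multipliers at all.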
|
136 |
An evaluation of coded wavelet for multicarrier modulation with OFDM
Anoh, Kelvin O.O., Ghazaany, Tahereh S., Hussaini, Abubakar S., Abd-Alhameed, Raed, Jones, Steven M.R., Rodriguez, Jonathan January 2013 (has links)
No / Orthogonal frequency division multiplexing (OFDM) is prominent in wireless communication systems, and methods for improving the performance of OFDM-based systems are actively sought. One such method is error correction coding; another is a better multicarrier modulation kernel. In this work, convolutional error correction coding with interleaving is introduced into a wavelet multicarrier modulation OFDM system (wavelet-OFDM) to improve performance as the signal traverses multipath and noisy transmission channels. This is compared with FFT-based multicarrier modulation (FFT-OFDM). Results show that coded wavelet-OFDM saves more than half of the transmit power compared with uncoded wavelet-OFDM. It is also shown that interleaved and non-interleaved coded wavelet-OFDM clearly outperform their interleaved and non-interleaved coded FFT-OFDM counterparts, respectively.
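The "convolutional coding with interleaving" stage placed before the multicarrier modulator can be sketched as follows. This is a generic illustration with a standard rate-1/2, constraint-length-3 encoder (generators 7, 5 in octal) and a simple block interleaver; the paper does not specify these exact parameters, so treat them as assumptions.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    # Rate-1/2 convolutional encoder, constraint length 3
    # (generator polynomials 7 and 5 in octal).
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.append(bin(state & g1).count("1") % 2)  # parity of taps g1
        out.append(bin(state & g2).count("1") % 2)  # parity of taps g2
    return out

def block_interleave(bits, rows=4):
    # Block interleaver: write row-wise, read column-wise,
    # spreading channel burst errors across the codeword.
    cols = len(bits) // rows
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

data = [1, 0, 1, 1, 0, 0]
coded = conv_encode(data)                     # 12 coded bits for 6 data bits
interleaved = block_interleave(coded, rows=4)
print(len(coded), len(interleaved))
```

The interleaved stream would then feed the wavelet (or FFT) multicarrier modulator; at the receiver, deinterleaving restores bit order before Viterbi decoding.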
|
137 |
Probabilistic Modeling of Airborne Spherical Object for Robotic Limbs Implementation Using Artificial Intelligence
Pham, Binh 01 January 2024 (links) (PDF)
In recent years, the technological space has experienced a proliferation of generative AI models. A prominent type is the language-model-based chatbot, whose primary function is to generate answers to questions from an extensive database and to sustain a stream of conversation at various levels of complexity. The databases underlying these models encompass diverse data types: text (e.g., ChatGPT), audio (e.g., PlayHT), or images (e.g., DALL-E 2). The intricate process involves neural networks that undergo pre-training on the database, produce results from the network architecture, are fine-tuned to create coherent results, use probability estimation to produce contextually correct results, and iteratively refine the generated answers. This proposal aims to delve deep into the probability estimation process of generative AI models. A specific focus is predicting an airborne object's trajectory in order to understand how to adapt and adjust robotic limbs, enabling them to intercept and capture the object with some degree of precision.
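The trajectory-prediction task at the heart of the proposal can be sketched with a simple estimation example: fit a ballistic model to noisy early observations and extrapolate the interception point. All numbers below (launch height, speed, noise level, catch height) are hypothetical, and the least-squares fit stands in for whatever probabilistic estimator the final system would use.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a ball follows ballistic motion; the robot sees
# noisy height measurements during the first 0.8 s of flight and must
# predict when the ball returns to catch height (y = 1.0 m).
g = 9.81
y0, v0 = 1.0, 8.0                       # launch height (m), vertical speed (m/s)
t_obs = np.linspace(0.0, 0.8, 40)
y_true = y0 + v0 * t_obs - 0.5 * g * t_obs ** 2
y_meas = y_true + rng.normal(scale=0.01, size=t_obs.size)   # sensor noise

# Least-squares fit of the quadratic model y(t) = a t^2 + b t + c
# to the noisy observations.
a, b, c = np.polyfit(t_obs, y_meas, deg=2)

# Predicted catch time: larger root of a t^2 + b t + (c - 1.0) = 0.
t_catch = max(np.roots([a, b, c - 1.0]).real)
print(f"predicted catch time: {t_catch:.3f} s (true: {2 * v0 / g:.3f} s)")
```

A full system would update this estimate recursively (e.g., with a Kalman filter) as new observations arrive, and drive the limb toward the predicted interception point.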
|
138 |
Probing sequence-level instructions for gene expression / Étude des instructions pour l’expression des gènes présentes dans la séquence ADN
Taha, May 28 November 2018 (has links)
La régulation des gènes est fortement contrôlée afin d’assurer une large variété de types cellulaires ayant des fonctions spécifiques. Ces contrôles prennent place à différents niveaux et sont associés à différentes régions génomiques régulatrices. Il est donc essentiel de comprendre les mécanismes à la base des régulations géniques dans les différents types cellulaires, dans le but d’identifier les régulateurs clés. Plusieurs études tentent de mieux comprendre les mécanismes de régulation en modulant l’expression des gènes par des approches épigénétiques. Cependant, ces approches sont basées sur des données expérimentales limitées à quelques échantillons, et sont à la fois coûteuses et chronophages. Par ailleurs, les constituants nécessaires à la régulation des gènes au niveau des séquences ne peuvent pas être capturés par ces approches. L’objectif principal de cette thèse est d’expliquer l’expression des ARNm en se basant uniquement sur les séquences d’ADN. Dans une première partie, nous utilisons le modèle de régression linéaire avec pénalisation Lasso pour prédire l’expression des gènes par l’intermédiaire des caractéristiques de l’ADN comme la composition nucléotidique et les sites de fixation des facteurs de transcription. La précision de cette approche a été mesurée sur plusieurs données provenant de la base de données TCGA et nous avons trouvé des performances similaires aux modèles ajustés aux données expérimentales. Nous avons montré que la composition nucléotidique a un impact majeur sur l’expression des gènes. De plus, l’influence de chaque région régulatrice est évaluée et l’effet du corps de gène, spécialement les introns, semble être clé dans la prédiction de l’expression. En seconde partie, nous présentons une tentative d’amélioration des performances du modèle. D’abord, nous considérons inclure dans le modèle les interactions entre les différentes variables et appliquer des transformations non linéaires sur les variables prédictives.
Cela induit une légère augmentation des performances du modèle. Pour aller plus loin, des modèles d’apprentissage profond sont étudiés. Deux types de réseaux de neurones sont considérés : les perceptrons multicouches et les réseaux de convolution. Les paramètres de chaque réseau sont optimisés. Les performances des deux types de réseaux semblent être plus élevées que celles du modèle de régression linéaire pénalisée par Lasso. Les travaux de cette thèse nous ont permis (i) de démontrer l’existence d’instructions au niveau de la séquence en relation avec l’expression des gènes, et (ii) de fournir différents cadres de travail basés sur des approches complémentaires. Des travaux complémentaires sont en cours, en particulier sur le deep learning, dans le but de détecter des informations supplémentaires présentes dans les séquences. / Gene regulation is tightly controlled to ensure a wide variety of cell types and functions. These controls take place at different levels and are associated with different genomic regulatory regions. A key challenge is to understand how the gene regulation machinery works in each cell type and to identify the most important regulators. Several studies attempt to understand the regulatory mechanisms by modeling gene expression using epigenetic marks. Nonetheless, these approaches rely on experimental data that are limited to a few samples, costly, and time-consuming. Besides, the important component of gene regulation encoded at the sequence level cannot be captured by these approaches. The main objective of this thesis is to explain mRNA expression based only on DNA sequence features. In a first work, we use Lasso-penalized linear regression to predict gene expression using DNA features such as transcription factor binding sites (motifs) and nucleotide compositions. We measured the accuracy of our approach on several datasets from the TCGA database and found performance similar to that of models fitted with experimental data. In addition, we show that the nucleotide compositions of different regulatory regions have a major impact on gene expression. Furthermore, we rank the influence of each regulatory region and show a strong effect of the gene body, especially introns. In a second part, we try to increase the performance of the model. We first consider adding interactions between nucleotide compositions and applying non-linear transformations to predictive variables. This induces a slight increase in model performance. To go one step further, we then train deep neural networks. We consider two types of neural networks: multilayer perceptrons and convolutional networks. The hyperparameters of each network are optimized. The performance of both types of networks appears slightly higher than that of a Lasso-penalized linear model. In this thesis, we were able to (i) demonstrate the existence of sequence-level instructions for gene expression and (ii) provide different frameworks based on complementary approaches. Additional work is ongoing, in particular in the deep learning direction, with the aim of detecting additional information present in the sequence.
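Lasso-penalized regression of the kind used in the first part selects a sparse subset of predictive features via an L1 penalty. A self-contained sketch (coordinate descent with soft-thresholding, on synthetic stand-ins for sequence features such as motif counts — the dimensions and penalty value are hypothetical, not the thesis's TCGA setting) is:

```python
import numpy as np

rng = np.random.default_rng(2)

def lasso_cd(X, y, lam, n_iter=200):
    # Lasso regression by cyclic coordinate descent with
    # soft-thresholding; columns of X are assumed standardized.
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]        # partial residual
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

# Hypothetical stand-in: 40 sequence-derived features (e.g. motif
# counts, nucleotide composition), only 3 of which are truly predictive.
n, p = 200, 40
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[[0, 5, 17]] = [2.0, -1.5, 1.0]
y = X @ w_true + rng.normal(scale=0.1, size=n)

w_hat = lasso_cd(X, y, lam=20.0)
print("non-zero coefficients:", np.flatnonzero(np.abs(w_hat) > 1e-6))
```

The L1 penalty drives the coefficients of irrelevant features exactly to zero, which is what makes the fitted model interpretable as a ranking of regulatory features.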
|
139 |
Moments method for random matrices with applications to wireless communication. / La méthode des moments pour les matrices aléatoires avec application à la communication sans fil
Masucci, Antonia Maria 29 November 2011 (has links)
Dans cette thèse, on étudie l'application de la méthode des moments pour les télécommunications. On analyse cette méthode et on montre son importance pour l'étude des matrices aléatoires. On utilise le cadre des probabilités libres pour analyser cette méthode. La notion de produit de convolution/déconvolution libre peut être utilisée pour prédire le spectre asymptotique de matrices aléatoires qui sont asymptotiquement libres. On montre que la méthode des moments est un outil puissant même pour calculer les moments/moments asymptotiques de matrices qui n'ont pas la propriété de liberté asymptotique. En particulier, on considère des matrices aléatoires gaussiennes de taille finie et des matrices de Vandermonde aléatoires. On développe en série entière la distribution des valeurs propres de différents modèles, par exemple les distributions de Wishart non-centrales et aussi les distributions de Wishart avec des entrées corrélées de moyenne nulle. Le cadre d'inférence pour les matrices de dimensions finies est suffisamment souple pour permettre des combinaisons de matrices aléatoires. Les résultats que nous présentons sont implémentés en code Matlab en générant des sous-ensembles, des permutations et des relations d'équivalence. On applique ce cadre à l'étude des réseaux cognitifs et des réseaux à forte mobilité. On analyse les moments de matrices de Vandermonde aléatoires avec des entrées sur le cercle unitaire. On utilise ces moments et les détecteurs à expansion polynomiale pour décrire des détecteurs à faible complexité du signal transmis par des utilisateurs mobiles à une station de base (ou avec deux stations de base) représentée par des réseaux linéaires uniformes. / In this thesis, we focus on the analysis of the moments method, showing its importance in the application of random matrices to wireless communication. This study is conducted in the free probability framework.
The concept of free convolution/deconvolution can be used to predict the spectrum of sums or products of random matrices which are asymptotically free. In this framework, we show that the moments method is very appealing and powerful for deriving the moments/asymptotic moments in cases where the property of asymptotic freeness does not hold. In particular, we focus on Gaussian random matrices with finite dimensions and structured matrices such as Vandermonde matrices. We derive the explicit series expansion of the eigenvalue distribution of various models, such as noncentral Wishart distributions as well as correlated zero-mean Wishart distributions. We describe an inference framework flexible enough to be applied to repeated combinations of random matrices. The results that we present are implemented by generating subsets, permutations, and equivalence relations. We developed a Matlab routine to perform convolution or deconvolution numerically in terms of a set of input moments. We apply this inference framework to the study of cognitive networks, as well as to the study of wireless networks with high mobility. We analyze the asymptotic moments of random Vandermonde matrices with entries on the unit circle. We use them, together with polynomial expansion detectors, to design a low-complexity linear MMSE decoder that recovers the signal transmitted by mobile users to a base station (or two base stations) represented by uniform linear arrays.
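The asymptotic moments the abstract refers to can be checked numerically in a simple case. For W = (1/N) X X^H with X an N × N standard complex Gaussian matrix, the asymptotic moments m_k = E[tr(W^k)]/N converge to the Catalan numbers 1, 2, 5, 14, … (the square-case Marchenko-Pastur moments). The sketch below is a Monte-Carlo illustration of this known fact, not the thesis's Matlab inference code; in Python rather than Matlab:

```python
import numpy as np

rng = np.random.default_rng(3)

# Empirical moments of W = (1/N) X X^H for an N x N standard complex
# Gaussian X; they should approach the Catalan numbers 1, 2, 5, ...
N = 500
X = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
W = (X @ X.conj().T) / N

moments = [np.trace(np.linalg.matrix_power(W, k)).real / N for k in (1, 2, 3)]
print([round(m, 3) for m in moments])   # close to [1, 2, 5]
```

The moments method works in the other direction too: from measured moments, free deconvolution recovers the moments of an unobserved factor, which is what enables the inference applications described above.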
|
140 |
Compressed Convolutional Neural Network for Autonomous Systems
Durvesh Pathak (5931110) 17 January 2019 (has links)
The word “Perception” seems intuitive and may be the most straightforward problem for the human brain: as children we are trained to classify images and detect objects, but for computers it can be a daunting task. Giving intuition and reasoning to a computer that has mere capabilities to accept and process commands is a big challenge. However, recent leaps in hardware development, sophisticated software frameworks, and mathematical techniques have made it a little less daunting, if not easy. Various applications are built around the concept of “Perception”, and they require substantial computational resources, expensive hardware, and sophisticated software frameworks. Building a perception application for an embedded system is an entirely different ballgame. An embedded system is a culmination of hardware, software, and peripherals developed for specific tasks, with imposed constraints on memory and power. Therefore, applications developed for these systems must respect those memory and power constraints.

Before 2012, “Perception” problems such as classification and object detection were solved using algorithms with manually engineered features. In recent years, instead of manually engineering the features, these features are learned through learning algorithms. The game-changing convolutional neural network architecture proposed in 2012 by Alex K provided tremendous momentum in the direction of pushing neural networks for perception. This thesis is an attempt to develop a convolutional neural network architecture for embedded systems, i.e., an architecture that has a small model size and competitive accuracy. State-of-the-art architectures are recreated using the fire module concept to reduce the model size of the architecture. The proposed compact models are feasible for deployment on embedded devices such as the Bluebox 2.0. Furthermore, attempts are made to integrate the compact convolutional neural network with object detection pipelines.
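The model-size saving from the fire module (a 1×1 "squeeze" layer feeding parallel 1×1 and 3×3 "expand" layers, as in SqueezeNet) comes down to parameter counting. The sketch below illustrates this with hypothetical channel counts, not the thesis's actual layer configuration:

```python
def conv_params(c_in, c_out, k):
    # Weights in a k x k convolution layer (biases ignored).
    return c_in * c_out * k * k

def fire_params(c_in, squeeze, expand):
    # Fire module: a 1x1 'squeeze' layer followed by parallel
    # 1x1 and 3x3 'expand' layers, each with `expand` filters,
    # so the module outputs 2 * expand channels.
    return (conv_params(c_in, squeeze, 1)          # squeeze 1x1
            + conv_params(squeeze, expand, 1)      # expand 1x1
            + conv_params(squeeze, expand, 3))     # expand 3x3

# Replacing a plain 3x3 layer (128 -> 128 channels) with a fire module
# using 16 squeeze filters and 64 + 64 expand filters (same 128 output
# channels):
plain = conv_params(128, 128, 3)                   # 147,456 weights
fire = fire_params(128, squeeze=16, expand=64)     # 12,288 weights
print(plain, fire, f"{plain / fire:.1f}x smaller")
```

The squeeze layer keeps the expensive 3×3 convolutions operating on few channels, which is where the roughly 12× reduction in this example comes from.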
|