301

Convolution and Autoencoders Applied to Nonlinear Differential Equations

Borquaye, Noah 01 December 2023 (has links) (PDF)
Autoencoders, a type of artificial neural network, have gained recognition among researchers in many fields, especially machine learning, because of their broad applicability to learning data representations from inputs. Recently, researchers have explored extending autoencoders to the solution of nonlinear differential equations. Algorithms and methods employed in an autoencoder framework include sparse identification of nonlinear dynamics (SINDy), dynamic mode decomposition (DMD), Koopman operator theory, and singular value decomposition (SVD). These approaches represent linear transformations by matrix multiplication, whereas machine learning algorithms often represent them by convolution. In our work, we adapt these approaches to the system identification and forecasting of solutions of nonlinear differential equations by replacing matrix multiplication with a convolution transformation. In particular, we develop a convolution-based approach to dynamic mode decomposition and discuss its application to problems not otherwise solvable.
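
For context, the sketch below shows the standard matrix-based step of exact dynamic mode decomposition, the linear-algebra core that a convolution-based variant would replace; the snapshot data and truncation rank are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def exact_dmd(X, Xp, r):
    """Exact DMD: fit a linear map A with Xp ~= A @ X, truncated to rank r.

    X, Xp: (n_states, n_snapshots) arrays of consecutive snapshots.
    Returns the DMD eigenvalues and modes.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Reduced operator: A_tilde = U* Xp V S^{-1}
    A_tilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    # Exact DMD modes
    modes = Xp @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Illustrative data: snapshots of a damped travelling wave (assumed example).
x = np.linspace(0, 10, 200)
t = np.linspace(0, 4, 81)
data = np.array([np.exp(-0.1 * tk) * np.sin(x - 2 * tk) for tk in t]).T
eigvals, modes = exact_dmd(data[:, :-1], data[:, 1:], r=6)
print(np.abs(eigvals))  # decay of the wave shows up as |lambda| < 1
```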
302

Probing the Structure of Ionised ISM in Lyman-Continuum-Leaking Green Pea Galaxies with MUSE

Nagar, Chinmaya January 2023 (has links)
Lyman continuum (LyC) photons are known to be responsible for reionising the universe after the end of the Dark Ages, during the period called the Epoch of Reionisation (EoR). These high-energy photons are thought to originate predominantly from young, hot, massive stars in the earliest galaxies, with contributions from high-energy sources such as quasars and AGN, but their origins are not yet well understood and remain highly debated. Detecting LyC photons from early galaxies near the EoR is not possible, as they are completely absorbed by the intergalactic medium (IGM) on their way to us; this has prompted the development of indirect diagnostics that estimate the LyC contribution of such galaxies by studying their analogues at low redshift. In this study, we probe the ionised interstellar medium (ISM) of seven Green Pea galaxies through spatially resolved [O III] λ5007/[O II] λ3727 (O32) and [O III] λ5007/Hα λ6562 (O3Hα) emission-line ratio maps, using data from the Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope (VLT). Of the two ratios, the former has proven to be a successful diagnostic for predicting Lyman continuum emitters (LCEs). Alongside the line-ratio maps, the surface brightness profiles of the galaxies are studied to examine the spatial distribution of the emission lines and the regions from which they originate. The resulting maps indicate whether the ISM of each galaxy is ionisation-bounded or density-bounded. Our analysis reveals that a subset of the galaxies with ionisation-bounded ISM exhibits pronounced ionisation channels in the outer regions; these channels are potential pathways through which Lyman continuum photons may escape. For density-bounded ISM, the ionised ISM extends well beyond the stellar regions into the halos of the galaxies, highlighting their potential contribution to the ionising photon budget during the EoR. The findings emphasise the importance of spatially resolved ISM studies in understanding the mechanisms facilitating the escape of LyC photons.
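
As an illustration of the diagnostic described above, the sketch below assembles a spatially resolved O32 map from two emission-line flux images; the array shapes, signal-to-noise cut, and toy fluxes are assumptions for the example, not values from the study.

```python
import numpy as np

def line_ratio_map(flux_num, flux_den, err_num, err_den, snr_min=3.0):
    """Pixel-by-pixel emission-line ratio map (e.g. O32 = [O III]5007 / [O II]3727).

    Pixels where either line falls below the S/N cut are masked with NaN.
    """
    good = (flux_num / err_num >= snr_min) & (flux_den / err_den >= snr_min)
    ratio = np.full(flux_num.shape, np.nan)
    ratio[good] = flux_num[good] / flux_den[good]
    return ratio

# Assumed toy inputs: 50x50 spaxel flux maps with uniform errors.
rng = np.random.default_rng(0)
o3 = np.abs(rng.normal(10.0, 2.0, (50, 50)))
o2 = np.abs(rng.normal(4.0, 1.0, (50, 50)))
o32 = line_ratio_map(o3, o2, err_num=np.full((50, 50), 1.0),
                     err_den=np.full((50, 50), 1.0))
print(np.nanmedian(o32))  # high O32 hints at density-bounded, leaking regions
```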
303

Fast and Accurate Image Feature Detection for On-The-Go Field Monitoring Through Precision Agriculture. Computer Predictive Modelling for Farm Image Detection and Classification with Convolution Neural Network (CNN)

Abdullahi, Halimatu S. January 2020 (has links)
This study aimed to develop a novel end-to-end plant diagnosis model that analyses plant health conditions in near real time, to optimise production on farmland while keeping farming intensive yet environmentally safe. First, field research was conducted to determine the extent of the problems faced by farmers in agricultural production; this allowed us to refine the research statement and the level of technology involved in the production processes. The advantages of unmanned aerial systems were exploited for the continuous monitoring of farm plantations, to develop automated and accurate measures of farm conditions. To this end, the thesis applies precision agriculture (PA) technology as a data-based management system that accounts for spatial variation by using the Global Positioning System, Geographical Information Systems, remote sensing, yield monitors, mapping, and guidance systems for variable-rate applications. An unmanned aerial vehicle fitted with optical and radiometric sensors was used to obtain high-spectral-resolution images of plantation status during the normal production/growth cycle. An ensemble of classifiers, with a Convolutional Neural Network (CNN) as an off-the-shelf feature extractor, was then trained on these images to build an end-to-end feature detection and multiclass classification system for overall plant health conditions. Previous work has concentrated on using CNNs as off-the-shelf feature extractors and training models to detect plant diseases only, at a given point in time, which makes it difficult to implement comprehensive real-time PA systems; to date, no research had developed an end-to-end model for an overall plant diagnosis system. Applying the pretrained model to new images showed that it can accurately predict any plant condition with an average accuracy of 97%.
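
A minimal sketch of the off-the-shelf feature-extractor pattern the abstract describes, with a pretrained CNN feeding an ensemble classifier; the choice of ResNet-18, the random forest, and the toy data are illustrative assumptions rather than the thesis's exact pipeline.

```python
import torch
import torchvision
from sklearn.ensemble import RandomForestClassifier

# Pretrained CNN with its classification head removed: a fixed feature extractor.
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(images):
    """images: float tensor (N, 3, 224, 224), ImageNet-normalised."""
    return backbone(images).numpy()

# Assumed toy batch standing in for labelled field images.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 4, (32,)).numpy()   # e.g. 4 plant-health classes

features = extract_features(images)           # (32, 512) feature vectors
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features, labels)
print(clf.predict(features[:5]))
```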
304

Tracking Under Countermeasures Using Infrared Imagery

Modorato, Sara January 2022 (has links)
Object tracking, where the goal is to follow a target through all frames of a sequence, can be done in numerous ways; the ground-truth bounding box is used to initialise the tracking algorithm. For military applications, tracking can be performed on infrared imagery, which works even without illumination. Objects such as aircraft can deploy countermeasures to impede tracking. Because countermeasures most often impact mainly one wavelength band, using two different wavelength bands for tracking can counteract their effect. The dataset was created from simulations; the countermeasures applied to it are flares and Directional Infrared Countermeasures (DIRCMs). Many object tracking algorithms are based on discriminative correlation filters (DCF). The thesis investigated the DCF-based trackers STRCF and ECO on the created dataset, analysing both trackers with one and with two wavelength bands. The following features were investigated for both trackers: grayscale, Histogram of Oriented Gradients (HOG), and pre-trained deep features. The results indicated that using two wavelength bands instead of one improved the performance of both trackers on sequences with countermeasures. HOG features, deep features, or a combination of both improved the performance of the two-band STRCF tracker; likewise, deep features improved the two-band ECO tracker. The drawback of using two wavelength bands and more features is a lower frame rate.
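
The sketch below illustrates the general idea behind DCF tracking and two-band fusion with a minimal single-frame MOSSE-style filter; it is not the STRCF or ECO tracker studied in the thesis, and all data and parameters are assumed for the example.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """Single-frame MOSSE-style correlation filter (conjugate form H*)."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H, patch):
    """Correlation response map of the filter on a new patch."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

# Assumed toy data: the same target seen in two IR wavelength bands.
size = 64
yy, xx = np.mgrid[:size, :size]
gauss = lambda cy, cx, s: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * s**2))
band1 = gauss(32, 32, 4) + 0.05 * np.random.default_rng(1).normal(size=(size, size))
band2 = gauss(32, 32, 6) + 0.05 * np.random.default_rng(2).normal(size=(size, size))
target = gauss(32, 32, 2)                      # desired sharp response peak

H1, H2 = train_filter(band1, target), train_filter(band2, target)
# Fuse the two bands by averaging their response maps: a flare that
# corrupts one band leaves the other band's response intact.
resp = 0.5 * respond(H1, np.roll(band1, (3, 5), axis=(0, 1))) \
     + 0.5 * respond(H2, np.roll(band2, (3, 5), axis=(0, 1)))
print(np.unravel_index(np.argmax(resp), resp.shape))  # peak follows the shift
```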
305

Compression et inférence des opérateurs intégraux : applications à la restauration d’images dégradées par des flous variables / Approximation and estimation of integral operators : applications to the restoration of images degraded by spatially varying blurs

Escande, Paul 26 September 2016 (has links)
The restoration of images degraded by spatially varying blurs is a problem of growing importance, encountered in applications such as astronomy, computer vision, and light-sheet fluorescence microscopy, where images can reach a billion pixels. Spatially varying blurs can be modelled by linear integral operators H that map a sharp image u to its blurred version Hu. After discretization on a grid of N pixels, H can be viewed as a matrix of size N x N; for the targeted applications, storing this matrix would require on the order of an exabyte of memory. This simple observation illustrates the difficulties of the problem: (i) the storage of a huge volume of data, and (ii) the prohibitive computational cost of matrix-vector products. The problem suffers from the curse of dimensionality. Moreover, in many applications the blur operator is unknown or only partially known. There are therefore two complementary and closely related problems, the approximation and the estimation of blurring operators, which must be addressed together. This thesis develops new models and numerical methods to treat both.
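
The exabyte figure follows from simple arithmetic, sketched below under the assumption of double-precision entries and a one-gigapixel image (both illustrative):

```python
# Dense storage of the discretised blur operator H for N = 1e9 pixels:
N = 10 ** 9                  # pixels in a one-gigapixel image (assumed)
bytes_per_entry = 8          # double-precision float (assumed)
dense = N * N * bytes_per_entry
print(dense / 1e18, "exabytes")          # -> 8.0 exabytes

# A stationary (spatially invariant) blur, by contrast, is a single
# convolution kernel, and Hu costs O(N log N) with the FFT, not O(N^2).
```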
306

Praktické ukázky zpracování signálů / Practical examples of signal processing

Hanzálek, Pavel January 2019 (has links)
The thesis focuses on the topic of signal processing. Using practical examples, it aims to show the use of individual signal processing operations from a practical point of view. For each of the selected operations, an application with a graphical interface is created in MATLAB for easier use. Each chapter first analyses its operation from a theoretical point of view and then demonstrates, through a practical example, how the operation is used in practice. The individual applications are described mainly in terms of how they are operated and the results they can produce. The results of the practical part are presented in the appendix of the thesis.
307

Fast, Parallel Techniques for Time-Domain Boundary Integral Equations

Kachanovska, Maryna 27 January 2014 (has links) (PDF)
This work addresses the question of the efficient numerical solution of time-domain boundary integral equations with retarded potentials arising in problems of acoustic and electromagnetic scattering. The convolutional form of the time-domain boundary operators makes it possible to discretize them with Runge-Kutta convolution quadrature. This method combines Laplace-transform and time-stepping approaches and requires the fundamental solution to be known explicitly only in the Laplace domain. Recent numerical and analytical studies have revealed excellent properties of Runge-Kutta convolution quadrature, e.g. high convergence order, stability, and low dissipation and dispersion. As a model problem, we consider wave scattering in three dimensions. The convolution quadrature discretization of the indirect formulation for the three-dimensional wave equation leads to a lower triangular Toeplitz system of equations, each entry of which is a boundary integral operator with a kernel defined by convolution quadrature. In this work we develop an efficient method of almost linear complexity for the solution of this system, based on an existing recursive algorithm. The latter requires the construction of many discretizations of the Helmholtz boundary single-layer operator for a wide range of complex wavenumbers. This leads to two main problems: the need to construct many dense matrices and to evaluate many singular and near-singular integrals. The first problem is overcome by the use of data-sparse techniques, namely the high-frequency fast multipole method (HF FMM) and H-matrices. The applicability of both techniques for the discretization of the Helmholtz boundary single-layer operator with complex wavenumbers is analyzed. It is shown that the presence of decay can favorably affect the length of the fast multipole expansions and thus reduce the matrix-vector multiplication times. The performance of H-matrices and the HF FMM is compared for a range of complex wavenumbers, and a strategy for choosing between the two techniques is suggested. The second problem, namely the assembly of many singular and near-singular integrals, is solved by the use of the Huygens principle. In this work we prove that the kernels of the boundary integral operators $w_n^h(d)$ ($h$ is the time step and $t_n=nh$ is the time) exhibit exponential decay outside of the neighborhood of $d=nh$ (a consequence of the Huygens principle). The size of the support of these kernels for fixed $h$ increases with $n$ as $n^a$, $a<1$, where $a$ depends on the order of the Runge-Kutta method and is (typically) smaller for Runge-Kutta methods of higher order. Numerical experiments demonstrate that the theoretically predicted values of $a$ are quite close to optimal. We show how this property can be used in the recursive algorithm to construct the near-field for only a few matrices, while for the rest only the far-field is assembled. The resulting method solves the three-dimensional wave scattering problem with asymptotically almost linear complexity. The efficiency of the approach is confirmed by extensive numerical experiments.
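
For readers unfamiliar with convolution quadrature, the sketch below computes multistep (BDF2-based) CQ weights from a Laplace-domain kernel by sampling the generating function on a small circle and inverting with the FFT, then checks them on a kernel whose convolution is known in closed form. The kernel K(s) = 1/s and all parameters are illustrative assumptions; the thesis itself uses Runge-Kutta convolution quadrature for the wave equation.

```python
import numpy as np

def bdf2_cq_weights(K, h, N):
    """First N+1 convolution quadrature weights w_n, defined by
    sum_n w_n z^n = K(delta(z)/h) with delta(z) = (1-z) + (1-z)**2/2 (BDF2).

    The power series is inverted by sampling K(delta/h) on a circle of
    radius rho and applying the FFT (Lubich's recipe)."""
    L = 2 * (N + 1)
    rho = 1e-8 ** (1.0 / L)            # contour radius balancing round-off
    z = rho * np.exp(2j * np.pi * np.arange(L) / L)
    vals = K(((1 - z) + 0.5 * (1 - z) ** 2) / h)
    w = np.fft.fft(vals) / (L * rho ** np.arange(L))
    return np.real(w[: N + 1])

# Check on K(s) = 1/s: its kernel is k(t) = 1, so the convolution of k with
# g(t) = t is t^2/2, and the CQ sums should reproduce that.
h, N = 0.01, 200
w = bdf2_cq_weights(lambda s: 1.0 / s, h, N)
t = h * np.arange(N + 1)
approx = np.convolve(w, t)[: N + 1]    # discrete sums over w_{n-j} * t_j
print(np.max(np.abs(approx - t ** 2 / 2)))   # small second-order error
```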
308

Spatial information and end-to-end learning for visual recognition / Informations spatiales et apprentissage bout-en-bout pour la reconnaissance visuelle

Jiu, Mingyuan 03 April 2014 (has links)
In this thesis, we present our research on visual recognition and machine learning, with a particular emphasis on representation learning, i.e. the learning of feature extractors, and on learning them jointly with the prediction model of the task at hand, such as object recognition, human action recognition, or object segmentation. Two types of visual recognition problem are investigated: action recognition and human body part segmentation. Our objective is to incorporate spatial information, such as the label configuration in feature space or the spatial layout of labels, into an end-to-end framework to improve recognition performance. For human action recognition, we apply the bag-of-words (BoW) model, whose dictionary is classically learned in an unsupervised, standalone manner, and reformulate it as a neural network so that feature extraction and class prediction form a single global model trained end-to-end; the codebook is thereby learned in a supervised way, integrating the class labels of the training set. We propose two learning algorithms for this model: the first is based on classical error backpropagation, with the codewords adjusted by gradient descent; the second proceeds by cluster reassignments in the Voronoi diagram computed in feature space. We demonstrate the effectiveness of the proposed algorithms on the standard KTH human action dataset.
For human body part segmentation, we treat segmentation as a classification problem in which a classifier acts on each pixel. Two machine learning frameworks are adopted: randomized decision forests and convolutional neural networks. In both frameworks, we integrate a priori information on the spatial part layout, in terms of pairs of labels or pairs of pixels, into the training procedure to make the classifier more discriminative. Contrary to existing methods, the spatial relations are used only during training; pixelwise classification at test time is unchanged, so the classification rate improves with no increase in test-time computational complexity. Three algorithms are proposed: (i) the spatial part layout is integrated into the randomized decision forest training procedure; (ii) spatial pre-training is proposed for feature learning in ConvNets; (iii) spatial learning is proposed for classification with logistic regression (LR) or a multilayer perceptron (MLP).
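
A minimal sketch of the first idea, recasting the bag-of-words codebook as a differentiable layer so that class labels shape the codewords through backpropagation; the soft-assignment encoding, layer sizes, and toy data are assumptions for illustration, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class SoftBoW(nn.Module):
    """Bag-of-words encoding as a differentiable layer: local descriptors are
    softly assigned to learnable codewords, then pooled into one histogram."""

    def __init__(self, n_codewords, dim, n_classes, tau=10.0):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(n_codewords, dim))
        self.tau = tau
        self.classifier = nn.Linear(n_codewords, n_classes)

    def forward(self, descriptors):
        # descriptors: (batch, n_local, dim), e.g. local motion features
        diff = descriptors.unsqueeze(2) - self.codebook   # (B, P, K, D)
        d2 = (diff ** 2).sum(-1)                          # squared distances
        assign = torch.softmax(-self.tau * d2, dim=-1)    # soft assignment
        hist = assign.mean(dim=1)                         # average pooling
        return self.classifier(hist)

# Toy supervised training loop: labels now shape the codebook via backprop.
model = SoftBoW(n_codewords=64, dim=32, n_classes=6)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 100, 32)                  # 8 clips, 100 descriptors each
y = torch.randint(0, 6, (8,))
for _ in range(10):
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```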
309

Measurement of effective diffusivity : chromatographic method (pellets & monoliths)

Zhang, Runtong January 2013 (has links)
This thesis aims to determine the effective diffusivity (Deff) of a porous material, γ-alumina, using an unsteady-state method with two inert gases at ambient conditions and no reaction. For porous materials, Deff is important because it determines the rate at which reactants transfer to the surface of the pores. Once Deff is known, the apparent tortuosity factor of γ-alumina is calculated using the parallel pore model. The apparent tortuosity factor is important because: (a) it can be used to back-calculate Deff at reacting conditions; (b) once Deff with reactions is known, the Thiele modulus and hence the global reaction rate can be calculated; (c) it is also needed for modelling purposes (e.g. modelling a packed-bed column or a catalytic combustion reactor packed with porous γ-alumina in various shapes, or with monoliths). Experimental measurements were performed to determine the effective diffusivity of a binary pair of non-reacting gases (He in N2, and N2 in He) in spherical γ-alumina pellets (1 mm diameter) and in γ-alumina washcoated monoliths (washcoat thickness 20 to 60 µm, on 400 cpsi (cells per square inch) cordierite support). The method is based on the chromatographic technique, in which a gas flows through a tube packed with the sample to be tested. A pulse of tracer gas is injected (e.g. using sample loops of 0.1, 0.2, or 0.5 ml), and the response at the outlet of the packed bed is monitored over time with an on-line mass spectrometer. For the spherical pellets, the tube i.d. was 13.8 mm and the packed-bed depths were 200 and 400 mm; for the monoliths, the tube i.d. was 7 mm and the packed lengths were 500 and 1000 mm. When the chromatographic technique was applied to the monoliths, experimental errors were significant and the data were very difficult to interpret. However, the technique worked well with the spherical pellets: the effective diffusivity of He in N2 was 0.75–1.38 × 10⁻⁷ m² s⁻¹, and that of N2 in He was 1.81–3.10 × 10⁻⁷ m² s⁻¹. Back-calculating the apparent tortuosity factor with the parallel pore model gave values between 5 and 9.5 for the pellets.
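
As a sketch of the parallel pore model back-calculation mentioned above, combining bulk and Knudsen diffusion in the pores and scaling by porosity over tortuosity; every numerical input below (porosity, mean pore radius, bulk diffusivity, Deff) is an assumed illustrative value, not data from the thesis.

```python
import numpy as np

R, T = 8.314, 298.0          # J mol^-1 K^-1, K (ambient)

def knudsen_diffusivity(r_pore, M):
    """Knudsen diffusivity D_K = (2/3) * r * sqrt(8RT / (pi M)) for pore
    radius r_pore [m] and molar mass M [kg/mol]."""
    return (2.0 / 3.0) * r_pore * np.sqrt(8 * R * T / (np.pi * M))

# Parallel pore model: 1/D_pore = 1/D_AB + 1/D_K,  D_eff = (eps/tau) * D_pore.
# Assumed illustrative values for He diffusing in N2 inside gamma-alumina:
eps = 0.6                    # pellet porosity (assumed)
r_pore = 2e-9                # mean pore radius, 2 nm (assumed)
D_AB = 7.0e-5                # bulk He-N2 diffusivity at ambient, m^2 s^-1
D_eff = 1.3e-7               # effective diffusivity, m^2 s^-1 (assumed)

D_K = knudsen_diffusivity(r_pore, M=4.0e-3)        # helium
D_pore = 1.0 / (1.0 / D_AB + 1.0 / D_K)
tau = eps * D_pore / D_eff   # back-calculated apparent tortuosity factor
print(f"D_K = {D_K:.2e} m2/s, tau = {tau:.1f}")    # tau ~ 7.5 here
```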
310

Hybridní hluboké metody pro automatické odpovídání na otázky / Hybrid Deep Question Answering

Aghaebrahimian, Ahmad January 2019 (has links)
Title: Hybrid Deep Question Answering Author: Ahmad Aghaebrahimian Institute: Institute of Formal and Applied Linguistics Supervisor: RNDr. Martin Holub, Ph.D., Institute of Formal and Applied Linguistics Abstract: As one of the oldest tasks of Natural Language Processing, Question Answering is one of the most exciting and challenging research areas with lots of scientific and commercial applications. Question Answering as a discipline in the conjunction of computer science, statistics, linguistics, and cognitive science is concerned with building systems that automatically retrieve answers to questions posed by humans in a natural language. This doctoral dissertation presents the author's research carried out in this discipline. It highlights his studies and research toward a hybrid Question Answering system consisting of two engines for Question Answering over structured and unstructured data. The structured engine comprises a state-of-the-art Question Answering system based on knowledge graphs. The unstructured engine consists of a state-of-the-art sentence-level Question Answering system and a word-level Question Answering system with results near to human performance. This work introduces a new Question Answering dataset for answering word- and sentence-level questions as well. Starting from a...
