11

Design and Evaluation of Convolutional Networks for Video Analysis of Bee Traffic

Vats, Prateek 01 August 2019 (has links)
Colony Collapse Disorder (CCD) has been a major threat to bee colonies around the world, threatening the pollination of vital human food crops. The decline in bee population can have tragic consequences for humans as well as for the bees and the ecosystem. Bee health has been a cause of urgent concern for farmers and scientists around the world for at least a decade, but a specific cause for the phenomenon has yet to be conclusively identified. A normal hive inspection can be very disruptive for the bee colony, as the hive must be disassembled to visually assess hive health from the inside by collecting larvae and egg data. This work uses machine learning and computer vision methodologies to develop techniques that monitor hive health without disrupting the bee colony residing in the hive. Bee traffic refers to the number of bees moving in a given area in front of the hive over a given period of time, and is related to forager traffic, the number of bees moving out of the beehive. Forager traffic is a crucial factor in determining and monitoring food availability, food demand, colony age structure, the impact of pesticides, and other influences on beehives. This work focuses on estimating bee traffic levels in a given hive and associating this information with data collected through manual beehive inspections.
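By way of illustration, a minimal sketch (in PyTorch) of the kind of frame-level convolutional classifier such a pipeline might use, mapping a single video frame to one of three traffic levels. The architecture, input size, and class count here are assumptions for the sketch, not the networks designed and evaluated in the thesis:

    import torch
    import torch.nn as nn

    class TrafficLevelCNN(nn.Module):
        """Toy frame-level classifier: one RGB frame -> {low, medium, high}."""
        def __init__(self, num_classes: int = 3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 32x32 -> 16x16
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).flatten(1))

    frame = torch.randn(1, 3, 64, 64)   # a single 64x64 RGB frame
    logits = TrafficLevelCNN()(frame)   # shape: (1, 3)

In practice a video-analysis system would also exploit motion across consecutive frames, since bee traffic is defined over a period of time; this sketch shows only the convolutional building block.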
12

A Class of Univalent Convolutions of Harmonic Mappings

Romney, Matthew Daniel 05 July 2013 (has links) (PDF)
A planar harmonic mapping is a complex-valued function ƒ : D → C of the form ƒ(x+iy) = u(x,y) + iv(x,y), where u and v are both real harmonic. Such a function can be written as ƒ = h + g̅, where h and g are both analytic; the function w = g'/h' is called the dilatation of ƒ. This thesis considers the convolution, or Hadamard product, of planar harmonic mappings that are the vertical shears of the canonical half-plane mapping p(z) = z/(1-z) with respective dilatations e^(iθ)z and e^(iρ)z, θ, ρ ∈ ℝ. We prove that any such convolution is univalent. We also derive a convolution identity that extends this result to shears of p(z) = z/(1-z) in other directions.
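For context, the convolution referred to here extends the Hadamard (coefficientwise) product of analytic functions to harmonic mappings; in the notation standard in this literature (an assumption, not quoted from the thesis):

    f_1 * f_2 = h_1 * h_2 + \overline{g_1 * g_2},
    \qquad
    h_1 * h_2 = \sum_{n=0}^{\infty} a_n b_n z^n
    \quad \text{for } h_1 = \sum_{n=0}^{\infty} a_n z^n, \ h_2 = \sum_{n=0}^{\infty} b_n z^n.

Univalence of such a convolution is not automatic (the Hadamard product of univalent harmonic mappings need not be univalent), which is what makes results of this kind non-trivial.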
13

Hardy-Littlewood Maximal Functions

Vaughan, David 09 1900 (has links)
The principal object of this study is to find weak and strong type estimates concerning functions in weighted Lp spaces and their maximal functions. We also apply these results to the study of convolution integrals. / Thesis / Master of Science (MSc)
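For background, the Hardy-Littlewood maximal function and its classical unweighted mapping properties (standard definitions, not quoted from the thesis):

    Mf(x) = \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)| \, dy,
    \qquad
    \bigl|\{ x : Mf(x) > \lambda \}\bigr| \le \frac{C}{\lambda} \, \|f\|_{L^1}
    \quad (\lambda > 0),

the latter being the weak type (1,1) estimate; M is also bounded on L^p for 1 < p ≤ ∞ (strong type). The weighted setting studied in the thesis asks when such estimates survive after Lebesgue measure is replaced by a weighted measure w(x) dx.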
14

On Sylvester's Theorem

Hanchin, Terence G. 29 April 2010 (has links)
No description available.
15

Application of digital signal processing methods to very high frequency omnidirectional range (VOR) signals in the design of an airborne flight measurement system

Tye, Thomas N. January 1996 (has links)
No description available.
16

Asymptotic expansion for the L¹ Norm of N-Fold convolutions

Stey, George C. 27 March 2007 (has links)
No description available.
17

Detecting Public Transit Service Disruptions Using Social Media Mining and Graph Convolution

Zulfiqar, Omer 09 June 2021 (has links)
In recent years we have seen an increase in the number of public transit service disruptions due to aging infrastructure, system failures, and the regular need for maintenance. With the rapid growth in the usage of these transit networks, there is an increasing need for timely detection of such disruptions. Any disruption in these networks can lead to delays with major implications for daily passengers. Most current disruption detection systems either do not operate in real time or lack transit network coverage. The theme of this thesis is to leverage Twitter data for earlier detection of service disruptions. This work develops a pure data mining approach and two approaches that use Graph Neural Networks to identify transit-disruption-related information in Tweets from a live Twitter stream related to the Washington Metropolitan Area Transit Authority (WMATA) metro system. After developing three models to represent the data corpus, a Dynamic Query Expansion model, a Tweet-GCN, and a Tweet-Level GCN, we performed various experiments and benchmark evaluations against existing baseline models to justify the efficacy of our approaches. With average accuracies of approximately 87.3% and 89.9%, the Tweet-GCN and Tweet-Level GCN not only prove superior for basic NLP text classification but also outperform the other models in identifying transit disruptions. / Master of Science / Millions of people worldwide rely on public transit networks for their daily commutes and day-to-day movements. With the growth in the number of people using the service, there has been an increase in the number of daily passengers affected by service disruptions. This thesis proposes and develops three approaches to aid in the timely detection of these disruptions. We developed a pure data mining approach along with two deep learning models that use neural networks and live data from Twitter to identify these disruptions. The data mining approach uses a set of disruption-related input keywords to identify similar keywords within the live Twitter data. By collecting historical data, we were able to create deep learning models that represent the vocabulary of disruption-related Tweets in the form of a graph. A graph is a collection of data values in which the data points are connected to one another based on their relationships. A longer chain of connections between two words indicates a weak relationship; a shorter chain indicates a stronger one. In our graph, words with similar contextual meanings are connected to each other over shorter distances than words with different meanings. A neural network is then used as a classifier that scans this graph to learn the semantic relationships within our data. This learned information can then be used to accurately classify disruption-related Tweets within a pool of random Tweets. Once all the proposed approaches were developed, a benchmark evaluation was performed against other existing text classification techniques to justify their effectiveness. The final results indicate that the proposed graph-based models achieved higher accuracy than the data mining model and also outperformed all the other baseline models. Our Tweet-Level GCN had the highest accuracy, at 89.9%.
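A hedged sketch of the core operation such graph neural models build on: one layer of the standard graph-convolution propagation rule H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), shown here in PyTorch on a toy graph. The thesis's Tweet-GCN and Tweet-Level GCN are not reproduced; every name and dimension below is an assumption for illustration:

    import torch

    def gcn_layer(A: torch.Tensor, H: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
        """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
        A_hat = A + torch.eye(A.size(0))          # add self-loops
        d = A_hat.sum(dim=1)                      # node degrees
        D_inv_sqrt = torch.diag(d.pow(-0.5))      # D^{-1/2}
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
        return torch.relu(A_norm @ H @ W)

    # Toy graph: 4 word/Tweet nodes with 8-dim features, 16 hidden units
    A = torch.tensor([[0., 1, 0, 1],
                      [1., 0, 1, 0],
                      [0., 1, 0, 0],
                      [1., 0, 0, 0]])
    H = torch.randn(4, 8)
    W = torch.randn(8, 16)
    print(gcn_layer(A, H, W).shape)  # torch.Size([4, 16])

Stacking two such layers and ending with a softmax over the node (or document) representations yields the usual GCN text classifier.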
18

Handling Invalid Pixels in Convolutional Neural Networks

Messou, Ehounoud Joseph Christopher 29 May 2020 (has links)
Most neural networks use a normal convolutional layer that assumes all input pixels are valid. However, pixels added to the input through padding introduce extra information that was not initially present and can therefore be considered invalid. Invalid pixels can also occur inside the image, where they are referred to as holes in completion tasks like image inpainting. In this work, we look for a method that can handle both types of invalid pixels. We compare, on the same test bench, two methods previously used to handle invalid pixels outside the image (Partial and Edge convolutions) and one method designed for invalid pixels inside the image (Gated convolution). We show that Partial convolution performs best in image classification, while Gated convolution has the advantage in semantic segmentation. As for hotel recognition with masked regions, none of the methods seems appropriate for generating embeddings that leverage the masked regions. / Master of Science / A module at the heart of deep neural networks built for Artificial Intelligence is the convolutional layer. When multiple convolutional layers are used together with other modules, a Convolutional Neural Network (CNN) is obtained. CNNs can be used for tasks such as image classification, where they tell whether the object in an image is, for example, a chair or a car. Most CNNs use a normal convolutional layer that assumes that all parts of the image fed to the network are valid. However, most models zero-pad the image at the beginning to maintain a certain output shape; zero padding is equivalent to adding a black frame around the image. These added pixels introduce information that was not initially present and can therefore be considered invalid. Invalid pixels can also occur inside the image, where they are referred to as holes in completion tasks like image inpainting, in which the network is asked to fill these holes and produce a realistic image. In this work, we look for a method that can handle both types of invalid pixels. We compare, on the same test bench, two methods previously used to handle invalid pixels outside the image (Partial and Edge convolutions) and one method designed for invalid pixels inside the image (Gated convolution). We show that Partial convolution performs best in image classification, while Gated convolution has the advantage in semantic segmentation. As for hotel recognition with masked regions, none of the methods seems appropriate for generating embeddings that leverage the masked regions.
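A hedged sketch of the partial-convolution update, as formulated in the inpainting literature this work draws on: the convolution is computed over valid pixels only, rescaled by the fraction of valid pixels in each window, and the mask is updated. Layer shapes and the toy data are assumptions for illustration:

    import torch
    import torch.nn.functional as F

    def partial_conv2d(x, mask, weight, bias, padding=1):
        """Partial convolution: convolve only valid pixels, rescale by the
        valid-pixel count per window, and update the mask.
        x: (N, C, H, W); mask: (N, 1, H, W) in {0, 1}; weight: (C_out, C, k, k)."""
        k = weight.shape[-1]
        out = F.conv2d(x * mask, weight, bias=None, padding=padding)
        # Count valid pixels in each k x k window with an all-ones kernel.
        valid = F.conv2d(mask, torch.ones(1, 1, k, k), padding=padding)
        has_valid = (valid > 0).float()
        scale = (k * k) / valid.clamp(min=1.0)           # sum(1) / sum(M)
        out = out * scale * has_valid + bias.view(1, -1, 1, 1) * has_valid
        return out, has_valid                            # output and updated mask

    # Toy usage: one 3-channel 8x8 image with a 3x3 hole in the mask
    x = torch.randn(1, 3, 8, 8)
    mask = torch.ones(1, 1, 8, 8)
    mask[:, :, 2:5, 2:5] = 0
    w, b = torch.randn(4, 3, 3, 3), torch.zeros(4)
    y, new_mask = partial_conv2d(x, mask, w, b)

Gated convolution, by contrast, learns a soft per-channel gate from the input instead of propagating a hard binary mask.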
19

Approches complémentaires pour une classification efficace des textures / Complementary Approaches for Efficient Texture Classification

Nguyen, Vu Lam 29 May 2018 (has links)
This thesis investigates complementary approaches to classifying texture images with no prior knowledge of the acquisition conditions. It begins by proposing a Local Binary Pattern (LBP) variant for efficient texture classification. In this method, a statistical approach to static texture representation is developed that incorporates the complementary quantity of information in image intensities into LBP-based operators. We name this LBP variant the completed local entropy binary patterns (CLEBP). CLEBP captures the distribution of relationships between statistical measures of image data randomness, calculated over all pixels within a local structure. Without any pre-learning process or additional parameters to be learned, the CLEBP descriptors convey both global and local information about a texture while being robust to external variations. Furthermore, we use biologically-inspired filtering (BF), which simulates the human retina, as a preprocessing step. We show that our approach and conventional LBP have complementary strengths, and that combining them yields better results than either considered separately. Experimental results on four texture databases (Outex, KTH-TIPS-2b, CUReT, and UIUC) show that our approach outperforms contemporary methods.

We then introduce a feature-combination framework for texture classification. In this framework, we combine low-dimensional, rotation- and scale-invariant LBP features with the handcrafted scattering network (ScatNet). Experimental results show that the proposed approach is capable of extracting rich features at multiple orientations and scales. Textures are modeled by concatenating the histogram of LBP codes with the mean values of the ScatNet coefficients. We also propose using BF preprocessing to enhance the robustness of the LBP features. We demonstrate by experiment that features extracted with the proposed framework achieve superior performance compared with their traditional counterparts when benchmarked on real-world databases containing many classes with significant imaging variations.

In addition, we propose a novel handcrafted network called the normalized convolution network. It is inspired by the ScatNet model, with two important modifications: first, normalized convolution substitutes for standard convolution to extract richer texture features; second, instead of the mean values of the network coefficients, the Fisher vector is used as the aggregation method. Experiments show that the proposed network achieves competitive classification results on many difficult texture benchmarks.

Finally, throughout the thesis, we show by experiment that the proposed approaches achieve good classification results while requiring few computational resources.
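For reference, a minimal sketch of the basic 8-neighbor LBP operator on which the proposed descriptors build; the entropy-based completion (CLEBP) and the biologically-inspired filtering are the thesis's contributions and are not reproduced here:

    import numpy as np

    def lbp_8neighbor(img: np.ndarray) -> np.ndarray:
        """Basic LBP: each interior pixel gets an 8-bit code, one bit per
        neighbor whose intensity is >= the center pixel's intensity."""
        c = img[1:-1, 1:-1]                       # center pixels
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(c, dtype=np.uint8)
        for k, (dy, dx) in enumerate(offsets):    # clockwise neighbors
            nb = img[1 + dy: img.shape[0] - 1 + dy,
                     1 + dx: img.shape[1] - 1 + dx]
            code |= (nb >= c).astype(np.uint8) << k
        return code

    # A texture descriptor is then the histogram of codes over the image.
    img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
    hist = np.bincount(lbp_8neighbor(img).ravel(), minlength=256)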
20

Contributions à la modélisation et à l'inférence des fonctions aléatoires non-stationnaires de second ordre / Contributions to modelling and inference of second order non-stationary random functions

Fouedjio Kameni, Migraine Francky 15 December 2014 (has links)
Stationary random functions have been successfully applied in geostatistical applications for decades. The underlying spatial dependence structure of the random function is represented by a stationary variogram or covariance. In some instances, however, there is little reason to expect the spatial dependence structure to be stationary over the whole region of interest. In this thesis, two non-stationary modelling approaches for random functions are considered: space deformation and stochastic convolution. For each of them, we develop a statistical methodology for estimating the non-stationary spatial dependence structure in the context of a single realization. We also show how spatial predictions and conditional simulations can be carried out in this non-stationary framework. The developed inference methods capture varying spatial structures while guaranteeing the global consistency of the final model. The assessment of their performance on both synthetic and real datasets shows that they outperform stationary methods according to several criteria. Beyond prediction, they can also serve as a tool for exploratory analysis of non-stationarity.
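The two approaches have standard formulations in the geostatistics literature (notation assumed here, not quoted from the thesis). Space deformation posits stationarity after warping coordinates, while stochastic convolution lets the smoothing kernel vary with location:

    \operatorname{Cov}\bigl(Z(x), Z(y)\bigr) = C_0\bigl(\|\phi(x) - \phi(y)\|\bigr)
    \qquad \text{(space deformation)},

    Z(x) = \int_{\mathbb{R}^d} K_x(u) \, W(\mathrm{d}u)
    \qquad \text{(stochastic convolution)},

where \phi is a bijective deformation of the domain, C_0 a stationary covariance, K_x a location-dependent kernel, and W a white-noise measure.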
