171

3D face biometrics through learning of geometric features: application to face recognition and gender classification

Ballihi, Lahoucine 12 May 2012 (has links) (PDF)
Face biometrics has recently attracted growing interest from the scientific community and the biometrics industry because it is natural, contactless and non-intrusive. Nevertheless, the performance of systems based on 2D images is affected by several types of variability, such as pose, lighting conditions, occlusions and facial expressions. With the availability of 3D cameras able to capture three-dimensional shape, which is less sensitive to changes in illumination and pose, much research has turned to this new modality. New challenges appear, however, such as deformations of the facial shape caused by expressions and the computation time required by the proposed approaches. This thesis addresses these issues by coupling Riemannian geometry with machine learning techniques to obtain 3D face biometrics that are efficient and robust to expression changes. After a preprocessing step, we represent facial surfaces by collections of 3D curves that capture their shape locally. We use an existing geometric framework to obtain the "optimal" deformations between curves, as well as the distances separating them on a Riemannian manifold (the shape space of curves). We then apply learning techniques to determine the most relevant curves for two face-biometrics applications: identity recognition and gender classification. The results obtained on the FRGC v2 reference benchmark, and their comparison with state-of-the-art work, confirm the value of coupling local shape analysis in a geometric framework (which makes it possible to compute means, etc.) with learning techniques (boosting, etc.) to gain both computation time and performance.
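A minimal sketch of this curve-based pipeline is given below. It assumes the elastic shape analysis uses a square-root velocity (SRV) style representation with uniformly sampled, pre-aligned curves (no reparameterization or rotation alignment, unlike the full framework), and uses placeholder data; boosting over per-curve distances then highlights the most relevant curves.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def srv(curve):
    # Square-root velocity representation of a discretized 3D curve
    # (n_points x 3); simplified: uniform sampling, no reparameterization.
    vel = np.gradient(curve, axis=0)
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    q = vel / np.sqrt(np.maximum(speed, 1e-8))
    return q / np.linalg.norm(q)                 # scale-normalized

def curve_shape_distance(c1, c2):
    # Approximate elastic shape distance: arc length on the unit sphere
    # of SRV functions (alignment steps of the full framework omitted).
    q1, q2 = srv(c1), srv(c2)
    return float(np.arccos(np.clip(np.sum(q1 * q2), -1.0, 1.0)))

def face_features(probe_curves, gallery_curves):
    # One feature per curve: distance between corresponding curves of the
    # probe and gallery faces; boosting then weights the curves.
    return np.array([curve_shape_distance(p, g)
                     for p, g in zip(probe_curves, gallery_curves)])

# Toy gender-classification example with placeholder data: each face is a
# collection of 15 curves sampled at 50 points.
rng = np.random.default_rng(0)
faces = rng.normal(size=(40, 15, 50, 3))
template = rng.normal(size=(15, 50, 3))
X = np.array([face_features(f, template) for f in faces])
y = rng.integers(0, 2, 40)
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)  # decision stumps as weak learners
print(clf.feature_importances_)   # larger weights ~ more relevant curves
```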
172

Towards Climate Based Early Warning and Response Systems for Malaria

Sewe, Maquins Odhiambo January 2017 (has links)
Background: Great strides have been made in combating malaria; however, the indicators in sub-Saharan Africa still do not show promise for elimination in the near future, as malaria infections still result in high morbidity and mortality among children. The abundance of the malaria-transmitting mosquito vectors in these regions is driven by climate suitability. In order to achieve malaria elimination by 2030, strengthening of surveillance systems has been advocated. Based on malaria surveillance and climate monitoring, forecasting models may be developed for early warnings. Therefore, in this thesis, we strove to illustrate the use of malaria surveillance and climate data for policy and decision making by assessing the association between weather variability (from ground and remote sensing sources) and malaria mortality, and by building malaria admission forecasting models. We further propose an economic framework for integrating forecasts into an operational surveillance system for evidence-based decision making and resource allocation.

Methods: The studies were based in the Asembo, Gem and Karemo areas of the KEMRI/CDC Health and Demographic Surveillance System in Western Kenya. The lagged association of rainfall and temperature with malaria mortality was modeled using generalized additive models, while distributed lag non-linear models were used to explore the relationship between weekly malaria mortality and the remote sensing variables land surface temperature (LST), normalized difference vegetation index (NDVI) and rainfall. Generalized additive models, with and without boosting, were used to develop malaria admission forecasting models for lead times of one to three months. We developed a framework for incorporating forecast output into the economic evaluation of response strategies at different lead times, including uncertainties. The forecast output could be either an alert based on a threshold or the absolute number of predicted cases. In both situations, interventions at each lead time could be evaluated by the derived net benefit function, with uncertainty incorporated by simulation.

Results: We found that the environmental factors correlated with malaria mortality with varying latencies. In the first paper, where we used ground weather data, the effect of mean temperature was significant from a lag of 9 weeks, with risks higher for mean temperatures above 25 °C. The effect of cumulative precipitation was delayed and began at a lag of 5 weeks; weekly total rainfall of more than 120 mm resulted in an increased risk of mortality. In the second paper, using remotely sensed data, the effect of precipitation was consistent in the three areas, increasing for weekly total rainfall above 40 mm and then declining at 80 mm of weekly rainfall. NDVI below 0.4 increased the risk of malaria mortality, while day LST above 35 °C increased the risk of malaria mortality, with shorter lags for weeks of high LST. The lag effect of precipitation was more delayed for precipitation values below 20 mm, starting at week 5, with shorter lag effects for weeks of higher precipitation. The effect of NDVI values above 0.4 was more delayed and protective, with shorter lag effects for NDVI below 0.4. For all lead times in the malaria admission forecasting modelling in the third paper, the boosted regression models provided better prediction accuracy.
The economic framework in the fourth paper presented a probability function of the net benefit of response measures, where the best response at a particular lead time corresponded to the one with the highest probability, and absolute value, of a net benefit surplus.

Conclusion: We have shown that the lagged relationships between environmental variables and malaria health outcomes follow the expected biological mechanism, in which presentation of cases follows the onset of specific weather conditions and climate variability. This relationship guided the development of predictive models, showcased with the malaria admissions model. Further, we developed an economic framework connecting the forecasts to response measures in situations with considerable uncertainty. The thesis work has thus contributed to several important components of early warning systems, including risk assessment, the use of surveillance data for prediction, and a method for identifying cost-effective response strategies. We recommend that economic evaluation become standard in the implementation of early warning systems, to guide the long-term sustainability of such health protection programs.
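As an illustration of the forecasting component described above, the sketch below builds lagged weather covariates and fits a generic boosted regression model to predict admissions about a month ahead. It uses synthetic weekly data and scikit-learn's gradient boosting; the thesis itself used generalized additive models with and without boosting, and the lag range and horizon here are only assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# Weekly surveillance frame: admissions plus weather covariates (toy data).
rng = np.random.default_rng(1)
n = 260                                     # five years of weekly records
df = pd.DataFrame({
    "admissions": rng.poisson(30, n).astype(float),
    "rainfall":   rng.gamma(2.0, 20.0, n),  # mm per week
    "mean_temp":  24 + 3 * np.sin(np.arange(n) * 2 * np.pi / 52),
})

# Lagged covariates: effects were reported at lags of roughly 5-12 weeks,
# so expose those lags explicitly to the learner.
for lag in range(5, 13):
    df[f"rain_lag{lag}"] = df["rainfall"].shift(lag)
    df[f"temp_lag{lag}"] = df["mean_temp"].shift(lag)

horizon = 4                                 # ~1 month ahead
df["target"] = df["admissions"].shift(-horizon)
df = df.dropna()

X = df.drop(columns=["target", "admissions"])
y = df["target"]
split = int(0.8 * len(df))                  # chronological train/test split
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X.iloc[:split], y.iloc[:split])
pred = model.predict(X.iloc[split:])
print("MAE:", mean_absolute_error(y.iloc[split:], pred))
```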
173

Quantitative Retrieval of Organic Soil Properties from Visible Near-Infrared Shortwave Infrared (Vis-NIR-SWIR) Spectroscopy Using Fractal-Based Feature Extraction.

Liu, Lanfa, Buchroithner, Manfred, Ji, Min, Dong, Yunyun, Zhang, Rongchung 27 March 2017 (has links) (PDF)
Visible and near-infrared diffuse reflectance spectroscopy has been demonstrated to be a fast and cheap tool for estimating a large number of chemical and physical soil properties, and effective features extracted from the spectra are crucial for correlating with these properties. We adopt a novel methodology for feature extraction from soil spectra based on fractal geometry. The spectrum can be divided into multiple segments with different step-window pairs. For each segmented spectral curve, the fractal dimension value was calculated using variation estimators with power indices 0.5, 1.0 and 2.0. The fractal feature is then generated by multiplying the fractal dimension value by the spectral energy. To assess and compare the performance of the newly generated features, we used organic soil samples from the large-scale European Land Use/Land Cover Area Frame Survey (LUCAS). Gradient-boosting regression models, built with the XGBoost library on the soil spectral library, were developed to estimate N, pH and soil organic carbon (SOC) contents. Features generated by the variogram estimator performed better than the two other estimators and principal component analysis (PCA). The estimation results were, for SOC: coefficient of determination (R²) = 0.85, root mean square error (RMSE) = 56.7 g/kg, ratio of percent deviation (RPD) = 2.59; for pH: R² = 0.82, RMSE = 0.49, RPD = 2.31; and for N: R² = 0.77, RMSE = 3.01 g/kg, RPD = 2.09. Even better results could be achieved when fractal features were combined with PCA components. Fractal features generated by the proposed method can improve estimation accuracies of soil properties while maintaining the original spectral curve shape.
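A minimal sketch of the feature-extraction idea follows, assuming a simple oscillation-based variation estimator of the fractal dimension for each fixed-width spectral segment (the paper varies step-window pairs and power indices 0.5, 1.0 and 2.0), multiplied by the segment's spectral energy and fed to an XGBoost regressor. The spectra and target values are synthetic placeholders, not LUCAS data, and the estimator formula is one common formulation rather than the paper's exact implementation.

```python
import numpy as np
from xgboost import XGBRegressor

def variation_fractal_dimension(segment, p=1.0, windows=(1, 2, 4, 8)):
    # p-variation estimator: D ~ 2 - slope/p, where the slope comes from
    # regressing log of the mean p-th power oscillation on log window size.
    logs_w, logs_v = [], []
    for w in windows:
        osc = [segment[i:i + w + 1].max() - segment[i:i + w + 1].min()
               for i in range(len(segment) - w)]
        logs_w.append(np.log(w))
        logs_v.append(np.log(np.mean(np.power(osc, p)) + 1e-12))
    slope = np.polyfit(logs_w, logs_v, 1)[0]
    return 2.0 - slope / p

def fractal_features(spectrum, step=50):
    # Split the reflectance spectrum into fixed-width segments and return,
    # per segment, fractal dimension x spectral energy.
    feats = []
    for start in range(0, len(spectrum) - step + 1, step):
        seg = spectrum[start:start + step]
        energy = np.sum(seg ** 2)
        feats.append(variation_fractal_dimension(seg) * energy)
    return np.array(feats)

# Toy usage: synthetic spectra standing in for Vis-NIR-SWIR samples.
rng = np.random.default_rng(0)
spectra = rng.random((100, 500))                 # 100 samples x 500 bands
X = np.vstack([fractal_features(s) for s in spectra])
y = rng.gamma(2.0, 20.0, 100)                    # placeholder SOC values (g/kg)
model = XGBRegressor(n_estimators=200, learning_rate=0.05, max_depth=4)
model.fit(X, y)
print(model.predict(X[:3]))
```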
174

Technology-independent CMOS op amp in minimum channel length

Sengupta, Susanta 13 July 2004 (has links)
The performance of analog integrated circuits depends on the fabrication technology. Digital circuits are scalable by nature: the same circuit can be moved from one technology to another with improved performance. In analog integrated circuits, by contrast, the circuit components must be re-designed to maintain the desired performance across different technologies. Moreover, digital circuits can use minimum feature-size (short-channel-length) devices for better performance, whereas analog circuits are still designed with channel lengths larger than the minimum feature size. The research in this thesis aims at understanding the impact of technology scaling and short-channel-length devices on the performance of analog integrated circuits. The operational amplifier (op amp) is chosen as the example circuit for investigation. The performance of conventional op amps is studied across different technologies for short channel lengths, and techniques to develop technology-independent op amp architectures are proposed. In this research, three op amp architectures have been developed whose performance is relatively independent of the technology and the channel length. They are made scalable, and the same op amp circuits are scaled from a 0.25 um CMOS technology to a 0.18 um CMOS technology with the same components. They are designed to achieve large small-signal gain, constant unity-gain bandwidth and constant phase margin, and they use short-channel-length transistors. Current-feedback, gm-boosted CMOS source followers are also developed and used in the buffered versions of these op amps.
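The technology dependence discussed above can be illustrated with first-order hand calculations: transconductance, and hence unity-gain bandwidth and phase margin, shifts when the same sizing is carried to a different process. The numbers below are generic textbook assumptions rather than values from the thesis, and the long-channel square law is itself only a rough approximation for short-channel devices.

```python
import numpy as np

def gm_square_law(mu_cox, w_over_l, i_d):
    # MOSFET transconductance in saturation (long-channel square law).
    return np.sqrt(2.0 * mu_cox * w_over_l * i_d)

def unity_gain_bandwidth(gm_input, c_comp):
    # GBW of a two-stage Miller-compensated op amp, in Hz.
    return gm_input / (2.0 * np.pi * c_comp)

def phase_margin(gbw_hz, second_pole_hz):
    # Approximate phase margin from GBW and the non-dominant pole.
    return 90.0 - np.degrees(np.arctan(gbw_hz / second_pole_hz))

# Same sizing and bias current evaluated with two assumed process parameters.
for tech, mu_cox in [("0.25 um", 110e-6), ("0.18 um", 170e-6)]:   # A/V^2, assumed
    gm = gm_square_law(mu_cox, w_over_l=20.0, i_d=100e-6)
    gbw = unity_gain_bandwidth(gm, c_comp=2e-12)
    pm = phase_margin(gbw, second_pole_hz=400e6)
    print(f"{tech}: gm = {gm*1e3:.2f} mS, GBW = {gbw/1e6:.1f} MHz, PM = {pm:.1f} deg")
```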
175

Automatic detection of human falls based on spatio-temporal descriptors: method definition, performance evaluation and real-time implementation

Charfi, Imen 21 October 2013 (has links) (PDF)
We propose a supervised method for real-time detection of human falls that is robust to changes of viewpoint and environment. The first part of the work makes available online the DSFD video dataset, recorded in four different locations and containing a large number of manual annotations suitable for comparing methods. We also defined an evaluation metric that adapts to the nature of the video stream and the duration of a fall while taking real-time constraints into account. In a second step, we built and evaluated the spatio-temporal descriptors STHF, computed from geometric attributes of the moving shape in the scene and their transformations, and derived an optimized fall descriptor through an attribute-selection procedure. Robustness to changes of environment was evaluated using SVMs and Boosting. Performance is further improved by updating the training with fall-free videos recorded in the target environment. Finally, we implemented this detector on an embedded system comparable to a smart camera, based on a Zynq SoC. An Algorithm-Architecture Adequacy approach allowed us to obtain a good trade-off between classification performance and processing time.
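A minimal sketch of the classification stage is given below. It assumes a much simpler descriptor than the STHF of the thesis (a few geometric attributes of the tracked bounding box aggregated over a sliding window, on placeholder data) and compares an SVM with AdaBoost, mirroring the two classifiers evaluated above.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def window_descriptor(boxes):
    # Illustrative spatio-temporal descriptor: per-frame geometric attributes
    # of the silhouette's bounding box (aspect ratio, centroid height,
    # vertical velocity) summarized over a sliding window of frames.
    x, y, w, h = boxes.T                      # boxes: (n_frames, 4)
    aspect = h / np.maximum(w, 1e-6)
    cy = y + h / 2.0
    vy = np.gradient(cy)
    feats = []
    for s in (aspect, cy, vy):
        feats += [s.mean(), s.std(), s.min(), s.max()]
    return np.array(feats)

# Toy data: 80 windows of 20 bounding boxes each, labeled fall / no-fall.
rng = np.random.default_rng(0)
X = np.vstack([window_descriptor(rng.random((20, 4)) * 100) for _ in range(80)])
y = rng.integers(0, 2, 80)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=100))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```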
176

Use of fragment-based approaches for the design, synthesis and development of new EthR inhibitors as a new strategy to fight tuberculosis

Villemagne, Baptiste 28 September 2012 (has links)
Tuberculosis (TB) remains the leading cause of death due to a single infective agent, with more than 1.5 million people killed each year. In 2011, the World Health Organization (WHO) estimated that one third of the world's population is infected with Mycobacterium tuberculosis, the pathogen responsible for the disease. A worldwide resurgence of TB has been observed since the late 1980s, driven both by the emergence of resistant strains and by the HIV epidemic, which predisposes to the disease. In 2000, EthR, a mycobacterial transcriptional repressor, was identified as a key modulator of ethionamide (ETH) bioactivation. ETH is one of the main second-line drugs used to treat drug-resistant strains. In 2009, it was shown that co-administration of ETH with drug-like inhibitors of EthR boosted ETH activity threefold in a mouse model of TB infection, thus validating the target for a new therapeutic strategy. This work deals with the discovery and optimisation of new EthR inhibitors, starting from a small molecule, called a "fragment", co-crystallized with the protein. We combined in silico screening, in vitro evaluation of the hit compounds, study of co-crystal structures and medicinal chemistry in three complementary approaches, called "fragment growing", "fragment merging" and "fragment linking", which led to the discovery of very potent inhibitors. Based on these results, we are currently selecting a candidate for new in vivo experiments.
177

Vision-based moving pedestrian recognition from imprecise and uncertain data

Zhou, Dingfu 05 December 2014 (has links)
Vision-based Advanced Driver Assistance Systems (ADAS) are a complex and challenging task in real-world traffic scenarios. ADAS aim at perceiving and understanding the surrounding environment of the ego-vehicle and providing the necessary assistance to the driver when facing emergencies. In this thesis, we focus on detecting and recognizing moving objects, because their dynamics make them more unpredictable, and therefore more dangerous, than static ones. Detecting these objects, estimating their positions and recognizing their categories are important for ADAS and autonomous navigation. Consequently, we propose to build a complete system for moving-object detection and recognition based on vision sensors only. The proposed approach can detect any kind of moving object based on two adjacent frames only. The core idea is to detect the moving pixels by using the Residual Image Motion Flow (RIMF). The RIMF is defined as the residual image changes caused by moving objects once the camera motion has been compensated. In order to robustly detect all kinds of motion and remove false-positive detections, uncertainties in the ego-motion estimation and disparity computation must also be considered. The main steps of the algorithm are the following: first, the relative camera pose is estimated by minimizing the sum of the reprojection errors of matched features, and its covariance matrix is computed using a first-order error-propagation strategy. Next, a motion likelihood for each pixel is obtained by propagating the uncertainties of the ego-motion and disparity to the RIMF. Finally, the motion likelihood and the depth gradient are used in a graph-cut-based approach to obtain the moving-object segmentation. At the same time, the bounding boxes of moving objects are generated based on the U-disparity map. After obtaining the bounding box of a moving object, we classify it as a pedestrian or not. Compared to supervised classification algorithms (such as boosting and SVMs), which require a large number of labeled training instances, our proposed semi-supervised boosting algorithm is trained with only a few labeled instances and many unlabeled ones. The labeled instances are first used to estimate the probabilistic class labels of the unlabeled instances using Gaussian mixture models, after a dimension-reduction step performed via principal component analysis. Then, we apply a boosting strategy on decision stumps trained using the soft-labeled instances. The performance of the proposed method is evaluated on several state-of-the-art classification datasets, as well as on a pedestrian detection and recognition problem. Finally, both the moving-object detection and recognition algorithms are tested on the public KITTI image dataset, and the experimental results show that the proposed methods achieve good performance in different urban driving scenarios.
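The semi-supervised labeling step can be sketched as follows, under simplifying assumptions: PCA for dimension reduction, a two-component Gaussian mixture assigning probabilistic labels to the unlabeled samples, and boosted decision stumps trained with those soft labels used as sample weights (a stand-in for the thesis's specific boosting scheme). The data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import AdaBoostClassifier

# Few labeled instances, many unlabeled ones (toy two-class data).
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(0, 1, (30, 50)), rng.normal(3, 1, (30, 50))])
y_lab = np.array([0] * 30 + [1] * 30)
X_unlab = np.vstack([rng.normal(0, 1, (300, 50)), rng.normal(3, 1, (300, 50))])

# Dimension reduction, then a GMM fitted on the unlabeled projections.
pca = PCA(n_components=10).fit(np.vstack([X_lab, X_unlab]))
Z_lab, Z_unlab = pca.transform(X_lab), pca.transform(X_unlab)
gmm = GaussianMixture(n_components=2, random_state=0).fit(Z_unlab)

# Align mixture components with classes using the labeled data.
comp_of_class = [np.bincount(gmm.predict(Z_lab[y_lab == c]), minlength=2).argmax()
                 for c in (0, 1)]
proba = gmm.predict_proba(Z_unlab)[:, comp_of_class]   # soft class labels

# Boosted stumps on labeled + soft-labeled data; soft labels enter as weights.
X_all = np.vstack([Z_lab, Z_unlab])
y_all = np.concatenate([y_lab, proba.argmax(axis=1)])
w_all = np.concatenate([np.ones(len(y_lab)), proba.max(axis=1)])
clf = AdaBoostClassifier(n_estimators=100).fit(X_all, y_all, sample_weight=w_all)
print("training accuracy:", clf.score(X_all, y_all))
```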
178

Predicting inter-frequency measurements in an LTE network using supervised machine learning: a comparative study of learning algorithms and data processing techniques

Sonnert, Adrian January 2018 (has links)
With increasing demands on network reliability and speed, network suppliers need to make their communication algorithms more efficient. Frequency measurements are a core part of mobile network communications; making them more efficient would improve many network processes such as handovers, load balancing, and carrier aggregation. This study examines the possibility of using supervised learning to predict the signal of inter-frequency measurements by investigating various learning algorithms and pre-processing techniques. We found that random forests have the highest predictive performance on this data set, at 90.7% accuracy. In addition, we have shown that undersampling and varying the discriminator are effective techniques for increasing performance on the positive class on frequencies where the negative class is prevalent. Finally, we present hybrid algorithms in which the learning algorithm for each model depends on attributes of the training data set. These algorithms are far more efficient in terms of memory and run-time without heavily sacrificing predictive performance.
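The two data-processing techniques highlighted above, undersampling the prevalent negative class and varying the discriminator (the decision threshold), can be sketched on a synthetic imbalanced set with a random forest; the features, class ratio and thresholds below are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score

# Synthetic imbalanced data: many negatives, few positives.
rng = np.random.default_rng(0)
X_neg, X_pos = rng.normal(0, 1, (5000, 20)), rng.normal(0.7, 1, (250, 20))
X = np.vstack([X_neg, X_pos])
y = np.concatenate([np.zeros(len(X_neg)), np.ones(len(X_pos))]).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Undersample the negative class in the training set to match the positives.
neg_idx = rng.choice(np.where(y_tr == 0)[0], size=(y_tr == 1).sum(), replace=False)
keep = np.concatenate([neg_idx, np.where(y_tr == 1)[0]])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr[keep], y_tr[keep])

# Vary the discriminator: trade precision for recall on the positive class.
proba = clf.predict_proba(X_te)[:, 1]
for thr in (0.3, 0.5, 0.7):
    pred = (proba >= thr).astype(int)
    print(f"thr={thr}: recall={recall_score(y_te, pred):.2f} "
          f"precision={precision_score(y_te, pred):.2f}")
```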
179

Machine learning strategies for multi-step-ahead time series forecasting

Ben Taieb, Souhaib 08 October 2014 (has links)
How much electricity is going to be consumed for the next 24 hours? What will be the temperature for the next three days? What will be the number of sales of a certain product for the next few months? Answering these questions often requires forecasting several future observations from a given sequence of historical observations, called a time series.

Historically, time series forecasting has been mainly studied in econometrics and statistics. In the last two decades, machine learning, a field that is concerned with the development of algorithms that can automatically learn from data, has become one of the most active areas of predictive modeling research. This success is largely due to the superior performance of machine learning prediction algorithms in many different applications as diverse as natural language processing, speech recognition and spam detection. However, there has been very little research at the intersection of time series forecasting and machine learning.

The goal of this dissertation is to narrow this gap by addressing the problem of multi-step-ahead time series forecasting from the perspective of machine learning. To that end, we propose a series of forecasting strategies based on machine learning algorithms.

Multi-step-ahead forecasts can be produced recursively by iterating a one-step-ahead model, or directly using a specific model for each horizon. As a first contribution, we conduct an in-depth study to compare recursive and direct forecasts generated with different learning algorithms for different data generating processes. More precisely, we decompose the multi-step mean squared forecast errors into the bias and variance components, and analyze their behavior over the forecast horizon for different time series lengths. The results and observations made in this study then guide us for the development of new forecasting strategies.

In particular, we find that choosing between recursive and direct forecasts is not an easy task since it involves a trade-off between bias and estimation variance that depends on many interacting factors, including the learning model, the underlying data generating process, the time series length and the forecast horizon. As a second contribution, we develop multi-stage forecasting strategies that do not treat the recursive and direct strategies as competitors, but seek to combine their best properties. More precisely, the multi-stage strategies generate recursive linear forecasts, and then adjust these forecasts by modeling the multi-step forecast residuals with direct nonlinear models at each horizon, called rectification models. We propose a first multi-stage strategy, that we called the rectify strategy, which estimates the rectification models using the nearest neighbors model. However, because recursive linear forecasts often need small adjustments with real-world time series, we also consider a second multi-stage strategy, called the boost strategy, that estimates the rectification models using gradient boosting algorithms that use so-called weak learners.

Generating multi-step forecasts using a different model at each horizon provides a large modeling flexibility. However, selecting these models independently can lead to irregularities in the forecasts that can contribute to increase the forecast variance. The problem is exacerbated with nonlinear machine learning models estimated from short time series.
To address this issue, and as a third contribution, we introduce and analyze multi-horizon forecasting strategies that exploit the information contained in other horizons when learning the model for each horizon. In particular, to select the lag order and the hyperparameters of each model, multi-horizon strategies minimize forecast errors over multiple horizons rather than just the horizon of interest.

We compare all the proposed strategies with both the recursive and direct strategies. We first apply a bias and variance study, then we evaluate the different strategies using real-world time series from two past forecasting competitions. For the rectify strategy, in addition to avoiding the choice between recursive and direct forecasts, the results show that it performs better than, or at least close to, the best of the recursive and direct forecasts in different settings. For the multi-horizon strategies, the results emphasize the decrease in variance compared to single-horizon strategies, especially with linear or weakly nonlinear data generating processes. Overall, we found that the accuracy of multi-step-ahead forecasts based on machine learning algorithms can be significantly improved if an appropriate forecasting strategy is used to select the model parameters and to generate the forecasts.

Lastly, as a fourth contribution, we participated in the Load Forecasting track of the Global Energy Forecasting Competition 2012. The competition involved a hierarchical load forecasting problem where we were required to backcast and forecast hourly loads for a US utility with twenty geographical zones. Our team, TinTin, ranked fifth out of 105 participating teams, and we were awarded an IEEE Power & Energy Society award. / Doctorate in Sciences, Computer Science specialization
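The recursive and direct strategies compared throughout the dissertation can be sketched with generic learners on a synthetic series, as below; the rectify and boost strategies additionally fit per-horizon models to the residuals of the recursive linear forecasts, which is not shown here. The embedding order, horizon and models are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

def embed(series, lags):
    # Lag embedding: row j is series[j:j+lags], target is series[j+lags].
    X = np.column_stack([series[i:i + len(series) - lags] for i in range(lags)])
    return X, series[lags:]

rng = np.random.default_rng(0)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 25) + 0.1 * rng.normal(size=len(t))
lags, H = 8, 6                             # embedding order and max horizon

X, y = embed(series, lags)
base = LinearRegression().fit(X, y)        # one-step-ahead linear model

def recursive_forecast(last_window, h):
    # Recursive strategy: iterate the one-step model, feeding predictions back.
    window = list(last_window)
    preds = []
    for _ in range(h):
        preds.append(base.predict([window[-lags:]])[0])
        window.append(preds[-1])
    return np.array(preds)

# Direct strategy: one model per horizon h, trained on (window -> value h ahead).
direct = []
for h in range(1, H + 1):
    Xh = np.column_stack([series[i:i + len(series) - lags - h + 1]
                          for i in range(lags)])
    yh = series[lags + h - 1:]
    direct.append(GradientBoostingRegressor(n_estimators=100).fit(Xh, yh))

last = series[-lags:]
print("recursive:", recursive_forecast(last, H))
print("direct:   ", [m.predict([last])[0] for m in direct])
```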
180

Kernel-Based Pathway Approaches for Testing and Selection

Friedrichs, Stefanie 25 September 2017 (has links)
No description available.
