41

Introduction à l’apprentissage automatique en pharmacométrie : concepts et applications / Introduction to machine learning in pharmacometrics: concepts and applications

Leboeuf, Paul-Antoine 05 1900 (has links)
Machine learning offers tools to address the problems of today and tomorrow. Recent breakthroughs in computational science and the emergence of the big data phenomenon have brought machine learning to the forefront in both academia and society. Its recent achievements in natural language processing, computer vision and medicine speak for themselves, and the list of sciences and fields that benefit from machine learning techniques is long. However, attempts to bring it together with pharmacometrics and related sciences remain timid and few. The aim of this Master's thesis is to explore the potential of machine learning in the pharmaceutical sciences. This was done by applying machine learning techniques and methods to situations in clinical pharmacology and pharmacometrics. The project was divided into three parts. The first part proposes an algorithm to strengthen the reliability of the covariate pre-selection step of a population pharmacokinetic model. A random forest and XGBoost were used to support the screening of covariates, and their relative variable-importance indicators correctly identified all the covariates that influenced the various parameters of the reference PK model. The second part confirms that plasma concentrations can be estimated with methods different from those currently used in pharmacokinetics; the same algorithms were selected and their fit for the task was appreciable. The third part confirms that machine learning methods can be used to predict complex relationships typical of clinical pharmacology. Again, the random forest and XGBoost achieved an appreciable fit.
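As a rough illustration of the covariate pre-screening step described above, the sketch below ranks candidate covariates by the relative importance a tree ensemble assigns to them. It is not the thesis's actual pipeline: the covariate names, the simulated clearance values and the use of scikit-learn's RandomForestRegressor are assumptions (XGBoost's XGBRegressor exposes the same feature_importances_ attribute and could be ranked the same way).

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical covariate table for a population PK analysis.
rng = np.random.default_rng(0)
n = 200
covariates = pd.DataFrame({
    "weight_kg": rng.normal(75, 12, n),
    "age_yr": rng.uniform(20, 80, n),
    "crcl_ml_min": rng.normal(95, 25, n),
    "sex": rng.integers(0, 2, n),
})
# Hypothetical individual clearance values (e.g. empirical Bayes estimates from
# a base popPK model); simulated here so the example is self-contained.
clearance = (0.6 * covariates["weight_kg"]
             + 0.3 * covariates["crcl_ml_min"]
             + rng.normal(0, 5, n))

# Rank covariates by relative importance for pre-selection.
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(covariates, clearance)
ranking = pd.Series(rf.feature_importances_, index=covariates.columns)
print(ranking.sort_values(ascending=False))
```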
42

Data Driven Energy Efficiency of Ships

Taspinar, Tarik January 2022 (has links)
Decreasing the fuel consumption, and thus the greenhouse gas emissions, of vessels has emerged as a critical topic for both ship operators and policy makers in recent years. The speed of a vessel has long been recognized as having the highest impact on fuel consumption, and proposed measures such as "speed optimization" and "speed reduction" remain under discussion at the International Maritime Organization. The aim of this study is to develop a speed optimization model using time-constrained genetic algorithms (GA). The study also presents the application of machine learning (ML) regression methods to build a model for predicting the fuel consumption of vessels. The local outlier factor algorithm is used to eliminate outliers in the prediction features. In the boosting and tree-based regression methods, overfitting is observed after hyperparameter tuning, and early stopping is applied to the overfitted models. In this study, speed is also found to be the most important feature in the fuel consumption prediction models. The GA evaluation results show that random modifications of the default speed profile can increase GA performance, and thus fuel savings, more than constant speed limits during voyages. The GA results also indicate that high crossover rates and low mutation rates can increase fuel savings. Further research is recommended that includes fuel and bunker prices to determine fuel efficiency more accurately.
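A minimal sketch of two of the steps mentioned in the abstract, outlier removal with the local outlier factor and boosted-tree regression with early stopping, is given below. The voyage features, the simulated fuel figures and the scikit-learn estimator choices are assumptions made only so the example runs; the thesis's own models and data differ.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical voyage records: [speed_knots, draft_m, wind_speed_mps] -> fuel (t/day).
rng = np.random.default_rng(1)
X = rng.normal([14.0, 9.0, 8.0], [2.0, 0.5, 4.0], size=(500, 3))
y = 0.08 * X[:, 0] ** 3 + 2.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 3, 500)

# 1) Drop outliers in the prediction features with the local outlier factor.
mask = LocalOutlierFactor(n_neighbors=20).fit_predict(X) == 1
X, y = X[mask], y[mask]

# 2) Fit a boosted-tree regressor with early stopping to curb overfitting:
#    training stops once the score on an internal validation split stalls.
model = GradientBoostingRegressor(
    n_estimators=2000, learning_rate=0.05,
    validation_fraction=0.2, n_iter_no_change=20, random_state=0,
)
model.fit(X, y)
print("trees actually grown:", model.n_estimators_)
```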
43

Multi-level Safety Performance Functions For High Speed Facilities

Ahmed, Mohamed 01 January 2012 (has links)
High speed facilities are considered the backbone of any successful transportation system; Interstates, freeways, and expressways carry the majority of daily trips on the transportation network. Although these roads are considered relatively the safest, they still experience many crashes, many of them severe, which not only affect human lives but can also have tremendous economic and social impacts. These facts underscore the need to enhance the safety of high speed facilities to ensure better and more efficient operation. Safety problems can be assessed through several approaches that help mitigate crash risk on a long- and short-term basis. Therefore, the main focus of the research in this dissertation is to provide a risk assessment framework to promote safety and enhance mobility on freeways and expressways. Multi-level Safety Performance Functions (SPFs) were developed at the aggregate level using historical crash data and the corresponding exposure and risk factors to identify and rank sites with promise (hot spots). Additionally, SPFs were developed at the disaggregate level utilizing real-time weather data collected from meteorological stations located along the freeway section as well as traffic flow parameters collected from different detection systems such as Automatic Vehicle Identification (AVI) and Remote Traffic Microwave Sensors (RTMS). These disaggregate SPFs can identify real-time risks due to turbulent traffic conditions and their interactions with other risk factors. In this study, two main datasets were obtained from two different regions; they comprise historical crash data, roadway geometric characteristics, aggregate weather and traffic parameters, and real-time weather and traffic data. At the aggregate level, Bayesian hierarchical models with spatial and random effects were compared with Poisson models to examine the safety effects of roadway geometrics on crash occurrence along freeway sections that feature mountainous terrain and adverse weather. At the disaggregate level, a framework for a proactive safety management system was developed using traffic data collected from AVI and RTMS, real-time weather, and geometric characteristics. Different statistical techniques were implemented, ranging from classical frequentist classification approaches, which relate an event (crash) occurring at a given time to a set of real-time risk factors, to more advanced models. A Bayesian updating approach, which updates beliefs about the behavior of the parameters with prior knowledge to achieve more reliable estimation, was implemented. A relatively recent and promising machine learning technique, Stochastic Gradient Boosting, was also used to calibrate several models on datasets collected from mixed detection systems as well as real-time meteorological stations. The results of this study suggest that both levels of analysis are important: the aggregate level provides a good understanding of the different safety problems and supports the development of policies and countermeasures to reduce the total number of crashes, while at the disaggregate level, real-time safety functions support a more proactive traffic management system that not only enhances the performance of high speed facilities and the whole traffic network but also provides safer mobility for people and goods.
In general, the proposed multi-level analyses are useful in providing roadway authorities with detailed information on where countermeasures must be implemented and when resources should be devoted. The study also shows that traffic data collected from different detection systems can be a useful asset that should be utilized appropriately, not only to alleviate traffic congestion but also to mitigate increased safety risks. The overall proposed framework can maximize the benefit of the existing archived data for freeway authorities as well as for road users.
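The "Stochastic Gradient Boosting" technique mentioned above can be sketched with scikit-learn, where setting subsample below 1.0 makes each tree fit on a random fraction of the training rows. Everything else in the snippet (the feature names, the simulated crash labels, the hyperparameters) is a hypothetical stand-in for the dissertation's real-time datasets.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical real-time records: [avg_speed, speed_variance, occupancy, visibility].
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 4))
# Simulated crash indicator: risk grows with speed variance and low visibility.
p = 1 / (1 + np.exp(-(0.8 * X[:, 1] - 0.6 * X[:, 3] - 2.0)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# subsample < 1.0 makes the boosting "stochastic": each tree sees a random
# fraction of the training rows, which usually improves generalization.
clf = GradientBoostingClassifier(subsample=0.7, n_estimators=300,
                                 learning_rate=0.05, random_state=0)
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```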
44

Machine Learning based Predictive Data Analytics for Embedded Test Systems

Al Hanash, Fayad January 2023 (has links)
Organizations gather enormous amounts of data and analyze them to extract insights that help them make better decisions. Predictive data analytics is a crucial subfield of data analytics that aims at making accurate predictions, extracting insights from data by using machine learning algorithms. This thesis applies supervised learning algorithms to perform predictive data analytics on an embedded test system at the Nordic Engineering Partner company. Predictive maintenance is a concept often used in manufacturing industries that refers to predicting asset failures before they occur. The machine learning algorithms used in this thesis are support vector machines, multi-layer perceptrons, random forests, and gradient boosting. Both binary and multi-class classifiers were fitted, and cross-validation, sampling techniques, and confusion matrices were used to measure their performance accurately. In addition to accuracy, the recall, precision, F1, Cohen's kappa, MCC, and ROC AUC metrics are reported. The fitted prediction models achieve high accuracy.
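A hedged sketch of the evaluation setup described above (cross-validation plus several metrics and a confusion matrix) is shown below. The synthetic dataset, the random forest baseline and the particular scorer list are illustrative assumptions, not the thesis's embedded test data; Cohen's kappa, which the thesis also reports, would need a custom scorer and is omitted here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate, cross_val_predict
from sklearn.metrics import confusion_matrix

# Stand-in dataset; in the thesis this would be features logged by the test system.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)

# Cross-validated scores on several metrics at once.
scores = cross_validate(clf, X, y, cv=5,
                        scoring=["accuracy", "precision", "recall", "f1",
                                 "matthews_corrcoef", "roc_auc"])
for name, vals in scores.items():
    if name.startswith("test_"):
        print(name, vals.mean().round(3))

# Confusion matrix built from out-of-fold predictions.
print(confusion_matrix(y, cross_val_predict(clf, X, y, cv=5)))
```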
45

Toward an application of machine learning for predicting foreign trade in services – a pilot study for Statistics Sweden

Unnebäck, Tea January 2023 (has links)
The objective of this thesis is to investigate the possibility of using machine learning at Statistics Sweden within the Foreign Trade in Services (FTS) statistic to predict the likelihood that a unit conducts foreign trade in services. The FTS survey is a sample survey for which there is no natural frame to sample from. Therefore, prior to sampling, a frame is constructed manually each year, starting from a register of all Swedish companies and agencies and narrowing it down, in a rule-based manner, to contain only the units classified as likely to trade in services during the year to come. An automatic procedure that would enable reliable predictions is requested. To this end, three different machine learning methods have been analyzed: two rule-based methods (random forest and extreme gradient boosting) and one distance-based method (k nearest neighbors). The models arising from these methods are trained and tested on historically sampled units for which it is known whether they traded or not. The results indicate that the two rule-based methods perform well in classifying likely traders: the random forest model is better at finding traders, while the extreme gradient boosting model is better at finding non-traders. Studying different metrics for the models also reveals interesting patterns. Furthermore, the results indicate that when training the rule-based models, the year in which the training data was sampled needs to be taken into account. This means that cross-validation with random folds should not be used, but rather grouped cross-validation based on year. By including a feature that mirrors the state of the economy, the model can adapt its rules to it, meaning that the rules learned on training data can be extended to years beyond the training period. Based on the observed results, the final recommendation is to further develop and investigate the performance of the random forest model.
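The grouped cross-validation by sampling year recommended above can be expressed with scikit-learn's GroupKFold, as in the sketch below. The features, labels and years are randomly generated placeholders for the confidential frame data; only the validation scheme itself is the point.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

# Hypothetical frame: features per enterprise, labelled trader / non-trader,
# together with the year in which each unit was sampled.
rng = np.random.default_rng(3)
X = rng.normal(size=(3000, 6))
y = rng.binomial(1, 0.3, size=3000)
year = rng.choice([2018, 2019, 2020, 2021, 2022], size=3000)

# Grouped CV: every fold holds out whole sampling years, so rules learned on
# some years are tested on unseen years instead of on random rows.
cv = GroupKFold(n_splits=5)
clf = RandomForestClassifier(n_estimators=400, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, groups=year, scoring="f1")
print("per-year-fold F1:", scores.round(3))
```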
46

Analýza a klasifikace dat ze snímače mozkové aktivity / Data Analysis and Classification from the Brain Activity Detector

Jileček, Jan January 2019 (has links)
This thesis aims to implement methods for recording EEG data obtained with the OpenBCI Ultracortex IV neural activity headset. It also describes neurofeedback and methods for obtaining data from the motor cortex for further analysis, and reviews the machine learning algorithms best suited to the presented problem. Multiple training and testing datasets are created, together with a tool for recording the brain activity of a headset-wearing test subject who is visually presented with cognitive challenges on a screen. A neurofeedback demo application was developed, presented, and later used to calibrate new test subjects. The next part is data analysis, which aims to discriminate between left- and right-hand movement intention signatures in the motor cortex. Multiple classification methods are used and their utility is reviewed.
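A minimal sketch of a left- versus right-hand classification pipeline of the kind discussed above might band-pass the epochs to the mu/beta band, take log-variance features per channel and feed them to a linear classifier. The sampling rate, channel count, epoch shape and classifier choice below are assumptions; real OpenBCI recordings and the thesis's own methods would differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical epochs: (n_trials, n_channels, n_samples) at 250 Hz; labels 0=left, 1=right.
rng = np.random.default_rng(4)
fs = 250
epochs = rng.normal(size=(120, 8, 500))
labels = rng.integers(0, 2, 120)

# Band-pass each epoch to the mu/beta band (8-30 Hz), where motor intent shows up.
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, epochs, axis=-1)

# Simple features: log-variance of each channel within the epoch.
features = np.log(filtered.var(axis=-1))

# Linear discriminant analysis as a lightweight left/right classifier.
print(cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5).mean())
```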
47

How Certain Are You of Getting a Parking Space? : A deep learning approach to parking availability prediction / Maskininlärning för prognos av tillgängliga parkeringsplatser

Nilsson, Mathias, von Corswant, Sophie January 2020 (has links)
Traffic congestion is a severe problem in urban areas, leading to greenhouse gas emissions and air pollution. In general, drivers lack knowledge of the location and availability of free parking spaces in cities. This leads to people driving around searching for parking, and about one-third of traffic congestion in cities is due to drivers searching for an available parking space. In recent years, various solutions for providing parking information ahead of time have been proposed; the vast majority have been applied in large cities such as Beijing and San Francisco. This thesis was conducted in collaboration with Knowit and Dukaten to predict parking occupancy in car parks one hour ahead in the relatively small city of Linköping. To make the predictions, this study investigated the possibility of using long short-term memory networks and gradient boosting regression trees, trained on historical parking data. To enhance decision making, the predictive uncertainty was estimated using the Monte Carlo dropout approach for the former and quantile regression for the latter. The study reveals that both models can predict parking occupancy ahead of time and that they excel in different contexts. The inclusion of exogenous features can improve prediction quality; more specifically, incorporating the hour of the day improved the models' performance, while weather features did not contribute much. As for uncertainty, Monte Carlo dropout was shown to be sensitive to parameter tuning in order to obtain good uncertainty estimates.
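The quantile-regression route to predictive uncertainty mentioned above can be sketched with scikit-learn's GradientBoostingRegressor using the quantile loss: one model per quantile yields a point forecast and an interval around it. The features and the simulated occupancy data below are placeholders, not the Linköping car park data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical history: [hour_of_day, weekday, occupancy_now] -> occupancy in 1 hour.
rng = np.random.default_rng(5)
X = np.column_stack([rng.integers(0, 24, 3000),
                     rng.integers(0, 7, 3000),
                     rng.uniform(0, 1, 3000)])
y = np.clip(0.7 * X[:, 2] + 0.01 * X[:, 0] + rng.normal(0, 0.05, 3000), 0, 1)

# One model per quantile: the 10th and 90th percentiles bound the forecast,
# the median serves as the point prediction.
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                       random_state=0).fit(X, y)
          for q in (0.1, 0.5, 0.9)}

x_new = np.array([[17, 4, 0.65]])          # Friday 17:00, 65 % occupied now
lo, med, hi = (models[q].predict(x_new)[0] for q in (0.1, 0.5, 0.9))
print(f"predicted occupancy in 1 h: {med:.2f} (80 % interval {lo:.2f}-{hi:.2f})")
```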
48

Peak shaving optimisation in school kitchens : A machine learning approach

Alhoush, George, Edvardsson, Emil January 2022 (has links)
With the increasing electrification of today's society, the electrical grid is experiencing growing pressure from demand. One factor that affects the stability of the grid is the time intervals at which power demand is at its highest, referred to as peak demand. This project was conducted to reduce peak demand through a process called peak shaving, relieving some of this pressure through the use of batteries and renewable energy. By doing so, the users of such systems could reduce the installation cost of their electrical infrastructure as well as their electricity bills. Peak shaving was implemented in this project using machine learning algorithms that predict the daily power consumption of school kitchens from their food menus; the predictions are then fed to an algorithm that steers a battery accordingly. The findings are compared with another system installed by a company to decide whether the algorithm has adequate accuracy and performance. The simulation results were promising, as the algorithm was able to detect the vast majority of the peaks and perform peak shaving intelligently. Based on the graphs and values presented in this report, it can be concluded that the algorithm is ready to be implemented in the real world, with the potential to contribute to a long-term sustainable electrical grid while saving money for the user.
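A simple way to picture the battery-steering step described above is a threshold rule: discharge when the predicted load exceeds the threshold, recharge otherwise. The function below is a toy sketch under assumed battery parameters and an assumed hourly load profile; the project's actual control algorithm and the company's comparison system are not reproduced here.

```python
import numpy as np

def peak_shave(predicted_load_kw, threshold_kw, capacity_kwh, max_power_kw, dt_h=1.0):
    """Discharge the battery whenever the predicted load exceeds the threshold,
    recharge below it; returns the grid load after shaving."""
    soc = capacity_kwh            # assume the battery starts fully charged
    grid = []
    for load in predicted_load_kw:
        if load > threshold_kw:   # shave the peak
            discharge = min(load - threshold_kw, max_power_kw, soc / dt_h)
            soc -= discharge * dt_h
            grid.append(load - discharge)
        else:                     # recharge during low-demand hours
            charge = min(threshold_kw - load, max_power_kw, (capacity_kwh - soc) / dt_h)
            soc += charge * dt_h
            grid.append(load + charge)
    return np.array(grid)

# Hypothetical hourly kitchen load predicted from the day's menu.
load = np.array([5, 6, 8, 25, 40, 38, 20, 10, 6, 5], dtype=float)
print(peak_shave(load, threshold_kw=20, capacity_kwh=60, max_power_kw=30))
```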
49

Predicting Workforce in Healthcare : Using Machine Learning Algorithms, Statistical Methods and Swedish Healthcare Data / Predicering av Arbetskraft inom Sjukvården genom Maskininlärning, Statistiska Metoder och Svenska Sjukvårdsstatistik

Diskay, Gabriel, Joelsson, Carl January 2023 (has links)
This study examines the use of machine learning models to predict workforce trends in the Swedish healthcare sector. Using a linear regression model, a Gradient Boosting Regressor model, and an exponential smoothing model, the research aims to provide insights that can support macroeconomic considerations and to give a deeper understanding of the Beveridge curve in the context of the healthcare sector. Despite some challenges with the data, the goal is to improve the accuracy and efficiency of policy-making around the labor market. The results of this study demonstrate the potential of machine learning for forecasting in an economic context, although inherent limitations and ethical considerations are taken into account.
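Of the three models named above, exponential smoothing is the most self-contained to sketch. The snippet below fits a Holt (additive-trend) exponential smoothing model with statsmodels on a synthetic quarterly headcount series and forecasts two years ahead; the series itself and the quarterly frequency are assumptions, not Swedish healthcare data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical quarterly headcount series for a healthcare occupation.
idx = pd.date_range("2015-01-01", periods=36, freq="QS")
rng = np.random.default_rng(6)
headcount = pd.Series(10_000 + 40 * np.arange(36) + rng.normal(0, 150, 36), index=idx)

# Holt's linear-trend exponential smoothing; forecast eight quarters ahead.
fit = ExponentialSmoothing(headcount, trend="add", seasonal=None).fit()
print(fit.forecast(8).round())
```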
50

Analysis of the human corneal shape with machine learning

Bouazizi, Hala 01 1900 (has links)
This thesis aims to investigate the best conditions in which the anterior corneal surfaces of normal corneas can be preprocessed, classified and predicted using geometric modeling (GM) and machine learning (ML) techniques. The focus is on the anterior corneal surface, which is mainly responsible for the refractive power of the cornea. Dealing with preprocessing, the first study (Chapter 2) examines the conditions in which GM can best be applied to reduce the dimensionality of a dataset of corneal surfaces to be used in ML projects. Four types of geometric models of corneal shape were tested with regard to their accuracy and processing time: two polynomial (P) models, the Zernike polynomial (ZP) and spherical harmonic polynomial (SHP) models, and two corresponding rational function (R) models, the Zernike rational function (ZR) and spherical harmonic rational function (SHR) models. SHP and ZR are both known to be more accurate corneal shape models than ZP for the same number of coefficients, but which of SHP and ZR is the more accurate? And is an SHR model, which is both an SH model and an R model, even more accurate? Also, does modeling accuracy come at the cost of processing time, an important issue when testing the large datasets required in ML projects? Focusing on low values of J (the number of model coefficients) to address these issues under the dimensionality constraints that apply in ML tasks, it was found, based on a number of evaluation tools, that SH models were more accurate than their Z counterparts, that R models were more accurate than their P counterparts, and that the SH advantage was more important than the R advantage. Processing time curves as a function of J showed that P models were processed in quasilinear time, R models in polynomial time, and that Z models were faster than SH models. Therefore, while SHR was the most accurate geometric model, it was also the slowest (a problem that can partly be remedied by applying a preoptimization procedure). ZP was the fastest model and, with normal corneas, it remains an interesting option for testing and development, especially for clustering tasks, due to its transparent interpretability.
The best compromise between accuracy and speed for ML preprocessing is SHP. The classification of corneal shapes with clinical parameters has a long tradition, but the visualization of their effects on the corneal shape with group maps (average elevation maps, standard deviation maps, average difference maps, etc.) is relatively recent. In the second study (Chapter 3), we constructed an atlas of average elevation maps for different clinical variables (including geometric, refraction and demographic variables) that can be instrumental in the evaluation of ML task inputs (datasets) and outputs (predictions, clusters, etc.). A large dataset of normal adult anterior corneal surface topographies recorded in the form of 101×101 elevation matrices was first preprocessed by geometric modeling to reduce the dimensionality of the dataset to a small number of Zernike coefficients found to be optimal for ML tasks. The modeled corneal surfaces of the dataset were then grouped in accordance with the clinical variables available in the dataset, transformed into categorical variables. An average elevation map was constructed for each group of corneal surfaces of each clinical variable, in both their natural (non-normalized) and normalized states, by averaging their modeling coefficients to obtain an average surface and representing this average surface, in reference to the best-fit sphere, in a topographic elevation map. To validate the atlas thus constructed in both its natural and normalized modalities, ANOVA tests were conducted for each clinical variable of the dataset to verify their statistical consistency with the literature before verifying whether the corneal shape transformations displayed in the maps were themselves visually consistent. This was the case. The possible uses of such an atlas are discussed. The third study (Chapter 4) is concerned with the use of a dataset of geometrically modeled corneal surfaces in an ML clustering task. The unsupervised classification of corneal surfaces is recent in ophthalmology. Most of the few existing studies on corneal clustering resort to feature extraction (as opposed to geometric modeling) to achieve the dimensionality reduction of the dataset. The goal is usually to automate the process of corneal diagnosis, for instance by distinguishing irregular corneal surfaces (keratoconus, Fuchs, etc.) from normal surfaces and, in some cases, by classifying irregular surfaces into subtypes. Complementary to these corneal clustering studies, the proposed study resorts mainly to geometric modeling to achieve dimensionality reduction and focuses on normal adult corneas in an attempt to identify their natural groupings, possibly in combination with feature extraction methods. Geometric modeling was based on Zernike polynomials, known for their interpretative transparency and sufficiently accurate for normal corneas. Different types of clustering methods were evaluated in pretests to identify those most effective at producing neatly delimited clusters that are clearly interpretable. Their evaluation was based on clustering scores (to identify the best number of clusters), polar charts and scatter plots (to visualize the modeling coefficients involved in each cluster), average elevation maps and average profile cuts (to visualize the average corneal surface of each cluster), and statistical cluster comparisons on different clinical parameters (to validate the findings with reference to the clinical literature).
K-means, applied to geometrically modeled surfaces without feature extraction, produced the best clusters, both for natural and normalized surfaces. While the clusters produced with natural corneal surfaces were based on corneal curvature, those produced with normalized surfaces were based on the corneal axis. In each case, the best number of clusters was four. The importance of curvature and axis as grouping criteria in the distribution of corneal data is discussed. The fourth study presented in this thesis (Chapter 5) explores the ML paradigm to verify whether accurate predictions of normal corneal shapes can be made from clinical data, and how. The database of normal adult corneal surfaces was first preprocessed by geometric modeling to reduce its dimensionality into short vectors of 12 to 20 Zernike coefficients, found to be in the range of appropriate numbers to achieve optimal predictions. The nonlinear regression methods examined from the scikit-learn library were gradient boosting, Gaussian process, kernel ridge, random forest, k-nearest neighbors, bagging, and multilayer perceptron. The predictors were based on the clinical variables available in the database, including geometric variables (best-fit sphere radius, white-to-white diameter, anterior chamber depth, corneal side), refraction variables (sphere, cylinder, axis) and demographic variables (age, gender). Each possible combination of regression method, set of clinical variables (used as predictors) and number of Zernike coefficients (used as targets) defined a regression model in a prediction test. All the regression models were evaluated based on their mean RMSE score (establishing the distance between the predicted corneal surfaces and the raw topographic true surfaces), and the best ones were further validated on the dataset randomized 20 times to determine the most performant among them. The best model was then qualitatively assessed based on an atlas of predicted and true average elevation maps, by which the predicted surfaces could be visually compared to the true surfaces on each of the clinical variables used as predictors. It was found that the best regression model was gradient boosting using all available clinical variables as predictors and 16 Zernike coefficients as targets. The most explanatory predictor was the best-fit sphere radius, followed by the side and refractive variables. The average elevation maps of the true anterior corneal surfaces and of the surfaces predicted by this model were remarkably similar for each clinical variable.
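A hedged sketch of the winning approach described above, gradient boosting predicting a vector of Zernike coefficients from clinical variables, is given below. The clinical features and coefficient targets are randomly generated placeholders, one booster is fitted per coefficient via MultiOutputRegressor, and the RMSE here is computed on the coefficients directly rather than on reconstructed corneal surfaces as in the thesis.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical data: 8 clinical predictors (best-fit sphere radius, diameter, age...)
# and 16 Zernike coefficients per cornea as the regression targets.
rng = np.random.default_rng(7)
X = rng.normal(size=(1500, 8))
Y = X @ rng.normal(size=(8, 16)) * 0.1 + rng.normal(0, 0.02, size=(1500, 16))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Gradient boosting handles one target at a time; MultiOutputRegressor fits one
# booster per Zernike coefficient.
model = MultiOutputRegressor(GradientBoostingRegressor(random_state=0))
model.fit(X_tr, Y_tr)

rmse = np.sqrt(mean_squared_error(Y_te, model.predict(X_te)))
print("RMSE over the 16 predicted coefficients:", round(rmse, 4))
```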
