101

Prediktivní analýza - postup a tvorba prediktivních modelů / Predictive Analytics - Process and Development of Predictive Models

Praus, Ondřej January 2013 (has links)
This master's thesis focuses on predictive analytics, a type of analysis that uses historical data and predictive models to forecast future phenomena. The main goal of the thesis is to describe predictive analytics and its process from both a theoretical and a practical point of view. A secondary goal is to implement a predictive analytics project in a major insurance company operating in the Czech market and to improve the current state of detection of fraudulent insurance claims. The thesis is divided into a theoretical and a practical part. The theoretical part describes the process of predictive analytics and selected types of predictive models. The practical part describes the implementation of predictive analytics in a company: first the data-organization techniques used in datamart development, then the predictive models implemented on data from the prepared datamart. The thesis includes examples of problems together with their solutions. Its main contribution is the detailed description of the project implementation, whose level of detail makes the field of predictive analytics easier to understand. A further contribution of the successfully implemented predictive analytics is the improved detection of fraudulent insurance claims.
102

Information diffusion, information and knowledge extraction from social networks / Diffusion d'information, extraction d'information et de connaissance dans les réseaux sociaux

Hoang, Thi Bich Ngoc 28 September 2018 (has links)
La popularité des réseaux sociaux a rapidement augmenté au cours de la dernière décennie. Selon Statista, environ 2 milliards d'utilisateurs utiliseront les réseaux sociaux d'ici janvier 2018 et ce nombre devrait encore augmenter au cours des prochaines années. Tout en gardant comme objectif principal de connecter le monde, les réseaux sociaux jouent également un rôle majeur dans la connexion des commerçants avec les clients, les célébrités avec leurs fans, les personnes ayant besoin d'aide avec les personnes désireuses d'aider, etc. Le succès de ces réseaux repose principalement sur l'information véhiculée ainsi que sur la capacité de diffusion des messages dans les réseaux sociaux. Notre recherche vise à modéliser la diffusion des messages ainsi qu'à extraire et à représenter l'information des messages dans les réseaux sociaux. Nous introduisons d'abord une approche de prédiction de la diffusion de l'information dans les réseaux sociaux. Plus précisément, nous prédisons si un tweet va être re-tweeté ou non ainsi que son niveau de diffusion. Notre modèle se base sur trois types de caractéristiques: basées sur l'utilisateur, sur le temps et sur le contenu. Nous avons évalué notre modèle sur différentes collections correspondant à une douzaine de millions de tweets. Nous avons montré que notre modèle améliore significativement la F-mesure par rapport à l'état de l'art, à la fois pour prédire si un tweet va être re-tweeté et pour prédire le niveau de diffusion. La deuxième contribution de cette thèse est de fournir une approche pour extraire des informations dans les microblogs. Plusieurs informations importantes sont incluses dans un message relatif à un événement, telles que la localisation, l'heure et les entités associées. Nous nous concentrons sur l'extraction de la localisation qui est un élément primordial pour plusieurs applications, notamment les applications géospatiales et les applications liées aux événements.
Nous proposons plusieurs combinaisons de méthodes existantes d'extraction de localisation dans des tweets en ciblant des applications soit orientées rappel soit orientées précision. Nous présentons également un modèle pour prédire si un tweet contient une référence à un lieu ou non. Nous montrons que nous améliorons significativement la précision des outils d'extraction de lieux lorsqu'ils se focalisent sur les tweets que nous prédisons contenir un lieu. Notre dernière contribution présente une base de connaissances permettant de mieux représenter l'information d'un ensemble de tweets liés à des événements. Nous combinons une collection de tweets de festivals avec d'autres ressources issues d'Internet pour construire une ontologie de domaine. Notre objectif est d'apporter aux utilisateurs une image complète des événements référencés au sein de cette collection. / The popularity of online social networks has rapidly increased over the last decade. According to Statista, approximately 2 billion users used social networks in January 2018, and this number is still expected to grow in the coming years. While serving their primary purpose of connecting people, social networks also play a major role in connecting marketers with customers, famous people with their supporters, and people who need help with people willing to help. The success of online social networks relies mainly on the information the messages carry as well as the speed at which messages spread through the network. Our research aims at modeling message diffusion and at extracting and representing information and knowledge from messages on social networks. Our first contribution is a model to predict the diffusion of information on social networks. More precisely, we predict whether a tweet is going to be diffused or not and the level of the diffusion. Our model is based on three types of features: user-based, time-based, and content-based features.
Evaluated on various collections totaling a dozen million tweets, our model significantly improves effectiveness (F-measure) compared to the state of the art, both when predicting whether a tweet is going to be retweeted and when predicting the level of retweeting. The second contribution of this thesis is an approach to extract information from microblogs. While a message about an event includes several pieces of important information, such as location, time, and related entities, we focus on location, which is vital for several applications, especially geo-spatial applications and applications linked to events. We proposed different combinations of existing methods to extract locations in tweets, targeting either recall-oriented or precision-oriented applications. We also defined a model to predict whether a tweet contains a location or not. We showed that the precision of location extraction tools is significantly improved on the tweets we predict to contain a location, compared to extraction from all tweets. Our last contribution presents a knowledge base that better represents information from a set of tweets on events. We combined a tweet collection with other Internet resources to build a domain ontology. The knowledge base aims at bringing users a complete picture of the events referenced in the tweet collection (we considered the CLEF 2016 festival tweet collection).
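The three feature families and the F-measure evaluation described above can be sketched as follows. This is an illustrative reconstruction: the field names (`followers`, `hour_of_day`, etc.) are assumptions, not the thesis's actual schema.

```python
def extract_features(tweet: dict) -> list:
    """Build a flat feature vector from one tweet record, grouping the three
    feature families the thesis describes (names are hypothetical)."""
    user = [
        tweet["followers"],           # user-based: audience size
        tweet["friends"],             # user-based: following count
        int(tweet["verified"]),       # user-based: verified account flag
    ]
    time = [
        tweet["hour_of_day"],         # time-based: posting hour (0-23)
        int(tweet["is_weekend"]),     # time-based: weekend flag
    ]
    content = [
        len(tweet["text"]),           # content-based: message length
        tweet["text"].count("#"),     # content-based: hashtag count
        int("http" in tweet["text"]), # content-based: contains a link
    ]
    return user + time + content

def f_measure(tp: int, fp: int, fn: int) -> float:
    """F1 score, the effectiveness metric used to compare against the state of the art."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

A classifier (e.g. a tree ensemble or logistic regression) would then be trained on such vectors to predict the retweet label and the diffusion level.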
103

Determination of micro-meso-macro damage mechanisms in geopolymer concrete using non-destructive techniques

Azarsa, Peiman 15 January 2021 (has links)
Cement-based concrete is one of the main construction materials, widely used in many applications due to its strength, durability, reflectivity, and versatility. However, the production of cement, the primary ingredient of concrete, releases 1.8 Gt of carbon dioxide (CO2) into the environment; it is estimated that producing one ton of cement releases one ton of CO2 into the atmosphere. For this reason, this work aims to create a concrete that could serve as an alternative to cement-based concrete. Geopolymer concrete (GPC) is an eco-friendly construction material and an alternative to conventional concrete, produced by reacting aluminate- and silicate-bearing constituents with a caustic activator (i.e., sodium-based or potassium-based). Both potassium and sodium activators have been considered generally safe ingredients by the FDA, subject to several good-manufacturing-practice conditions of use. These activators are used in various applications including concrete and food (as a stabilizer and thickening agent), as well as in making soap, as an electrolyte in alkaline batteries, and in electroplating, lithography, and paint and varnish removers. Medically, they are widely used in the wet-mount preparation of clinical specimens for microscopic visualization of fungi and fungal elements in skin, hair, nails, and even vaginal secretions; these activator solutions have also been found to be a safe and effective treatment for plane warts. Despite developments in the literature on GPC made from precursors such as fly ash and slag, GPC made from fly ash and bottom ash has not been extensively researched.
In this study, attempts have been made to produce a unique mix proportion for potassium-based GPC made from fly ash and bottom ash and to investigate various mechanical properties of this type of GPC, including elastic modulus, freeze-thaw resistance, heavy-metal leachability, and corrosion, in both laboratory and real environmental conditions using Non-Destructive Tests (NDTs). / Graduate / 2021-12-15
104

[en] PREDICTING THE ACQUISITION OF RESISTANT PATHOGENS IN ICUS USING MACHINE LEARNING TECHNIQUES / [pt] PREVENDO A AQUISIÇÃO DE PATÓGENOS RESISTENTES EM UTIS UTILIZANDO TÉCNICAS DE APRENDIZADO DE MÁQUINA

LEILA FIGUEIREDO DANTAS 01 February 2021 (has links)
[pt] As infecções por bactérias Gram-negativas Resistentes aos Carbapenêmicos (CR-GNB) estão entre as maiores preocupações atuais da área da saúde, especialmente em Unidades de Terapia Intensiva (UTI), e podem estar associadas ao aumento do tempo de hospitalização, morbidade, custos e mortalidade. Esta tese tem como objetivo desenvolver uma abordagem abrangente e sistemática aplicando técnicas de aprendizado de máquina para construir modelos para prever a aquisição de CR-GNB em UTIs de hospitais brasileiros. Propusemos modelos de triagem para detectar pacientes que não precisam ser testados e um modelo de risco que estima a probabilidade de pacientes de UTI adquirirem CR-GNB. Aplicamos métodos de seleção de características, técnicas de aprendizado de máquina e estratégias de balanceamento para construir e comparar os modelos. Os critérios de desempenho escolhidos para avaliação foram Negative Predictive Value (NPV) e Matthews Correlation Coefficient (MCC) para o modelo de triagem e Brier score e curvas de calibração para o modelo de risco de aquisição de CR-GNB. A estatística de Friedman e os testes post hoc de Nemenyi foram usados para testar a significância das diferenças entre as técnicas. O método de ganho de informações e a mineração de regras de associação avaliam a importância e a força entre os recursos. Nosso banco de dados reúne dados de pacientes, antibióticos e microbiologia de cinco hospitais brasileiros de 8 de maio de 2017 a 31 de agosto de 2019, envolvendo pacientes hospitalizados em 24 UTIs adultas. As informações do laboratório foram usadas para identificar todos os pacientes com teste positivo ou negativo para CR-GNB, A. baumannii, P. aeruginosa ou Enterobacteriaceae. Há um total de 539 testes positivos e 7.462 negativos, resultando em 3.604 pacientes com pelo menos um exame após 48 horas de hospitalização. Dois modelos de triagem foram propostos ao tomador de decisão do hospital.
O modelo da floresta aleatória reduz aproximadamente 39 por cento dos testes desnecessários e prevê corretamente 92 por cento dos positivos. A rede neural evita testes desnecessários em 64 por cento dos casos, mas 24 por cento dos testes positivos são classificados incorretamente. Os resultados mostram que as estratégias de amostragem tradicional, SMOTEBagging e UnderBagging obtiveram melhores resultados. As técnicas lineares como Regressão Logística com regularização apresentam bom desempenho e são mais interpretáveis; elas não são significativamente diferentes dos classificadores mais complexos. Para o modelo de risco de aquisição, o Centroides Encolhidos Mais Próximos é o melhor modelo com um Brier score de 0,152 e um cinto de calibração aceitável. Desenvolvemos uma validação externa a partir de 624 pacientes de dois outros hospitais da mesma rede, encontrando bons valores de Brier score (0,128 e 0,079) em ambos. O uso de antibióticos e procedimentos invasivos, principalmente ventilação mecânica, são os atributos mais importantes e significativos para a colonização ou infecção de CR-GNB. Os modelos preditivos podem ajudar a evitar testes de rastreamento e tratamento inadequado em pacientes de baixo risco. Políticas de controle de infecção podem ser estabelecidas para controlar a propagação dessas bactérias. A identificação de pacientes que não precisam ser testados diminui os custos hospitalares e o tempo de espera do laboratório. Concluímos que nossos modelos apresentam bom desempenho e parecem suficientemente confiáveis para prever um paciente com esses patógenos. Esses modelos preditivos podem ser incluídos no sistema hospitalar. A metodologia proposta pode ser replicada em diferentes ambientes de saúde.
/ [en] Infections by Carbapenem-Resistant Gram-negative bacteria (CR-GNB) are among the most significant contemporary health concerns, especially in intensive care units (ICUs), and may be associated with increased hospitalization time, morbidity, costs, and mortality. This thesis aims to develop a comprehensive and systematic approach applying machine-learning techniques to build models that predict CR-GNB acquisition in ICUs of Brazilian hospitals. We proposed screening models to detect ICU patients who do not need to be tested and a risk model that estimates an ICU patient's probability of acquiring CR-GNB. We applied feature selection methods, machine-learning techniques, and balancing strategies to build and compare the models. The performance criteria chosen to evaluate the models were Negative Predictive Value (NPV) and Matthews Correlation Coefficient (MCC) for the screening model, and the Brier score and calibration curves for the CR-GNB acquisition risk model. Friedman's statistic and Nemenyi post hoc tests were used to test the significance of differences among techniques. The information gain method and association rule mining assess the importance of and strength among features. Our database gathers patient, antibiotic, and microbiology data from five Brazilian hospitals from May 8th, 2017 to August 31st, 2019, involving patients hospitalized in 24 adult ICUs. Laboratory information was used to identify all patients with a positive or negative test for carbapenem-resistant GNB, A. baumannii, P. aeruginosa, or Enterobacteriaceae. We have a total of 539 positive and 7,462 negative tests, resulting in 3,604 patients with at least one exam after 48 hours of hospitalization. We proposed two screening models to the hospital's decision-makers. The random forest model would reduce approximately 39 percent of the unnecessary tests and correctly predict 92 percent of positives.
The neural network model avoids unnecessary tests in 64 percent of the cases, but 24 percent of positive tests are misclassified as negatives. Our results show that the traditional sampling, SMOTEBagging, and UnderBagging approaches obtain better results. Linear techniques such as logistic regression with regularization give relatively good performance and are more interpretable; they are not significantly different from the more complex classifiers. For the acquisition risk model, Nearest Shrunken Centroids is the best model, with a Brier score of 0.152 and an acceptable calibration belt. We performed an external validation on 624 patients from two other hospitals in the same network, finding good Brier score values in both (0.128 and 0.079). The use of antibiotics and invasive procedures, especially mechanical ventilation, are the most important attributes for colonization or infection by CR-GNB. The predictive models can help avoid screening tests and inappropriate treatment in patients at low risk. Infection control policies can be established to control the spread of these bacteria. Identifying patients who do not need to be tested decreases hospital costs and laboratory waiting times. We concluded that our models perform well and seem sufficiently reliable to predict whether a patient carries these pathogens. These predictive models can be included in the hospital system, and the proposed methodology can be replicated in different healthcare settings.
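The two screening criteria named above, Negative Predictive Value and the Matthews Correlation Coefficient, are simple functions of the confusion matrix; a minimal sketch:

```python
import math

def npv(tn: int, fn: int) -> float:
    """Negative Predictive Value: of all patients the model clears
    (predicted negative), the fraction who are truly negative."""
    return tn / (tn + fn)

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient: a balanced summary of the
    confusion matrix, robust to the heavy class imbalance seen here
    (539 positive vs. 7,462 negative tests)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den
```

A high NPV is what makes it safe to skip testing the patients a screening model clears, which is exactly the use case described above.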
105

Incorporating Shear Resistance Into Debris Flow Triggering Model Statistics

Lyman, Noah J 01 December 2020 (has links) (PDF)
Several regions of the Western United States use statistical binary classification models to predict and manage debris flow initiation probability after wildfires. As the occurrence of wildfires and high-intensity rainfall events has increased, so has the frequency of development in the steep, mountainous terrain where these events arise. This intersection brings an increasing need to derive improved results from existing models, or to develop new models, to reduce the economic and human impacts that debris flows may bring. Any improvement to these models could also ease data collection and processing and simplify implementation in new areas. Existing models generally rely on inputs that are functions of rainfall intensity, fire effects, terrain type, and surface characteristics. However, no variable in these models directly accounts for the shear stiffness of the soil. This property, considered together with the loading state of the sediment, informs the likelihood of particle dislocation, contractive or dilative volume changes, and the downslope movement that triggers debris flows. This study proposes incorporating shear wave velocity (in the form of slope-based thirty-meter shear wave velocity, Vs30) to account for this shear stiffness. As is common in seismic soil liquefaction analysis, shear stiffness is measured via shear wave velocity, the speed at which a vertically propagating horizontal shear wave travels through sediment. This spatially mapped variable allows broad coverage of the watersheds of interest. A logistic regression is then used to compare the new variable against what is currently used in predictive post-fire debris flow triggering models.
The resulting models showed improvement in some measures of statistical utility, evaluated through receiver operating characteristic (ROC) curves and threat-score analysis, a method of ranking models based on true/false positive and negative results. However, on other metrics the integration of Vs30 offers utility similar to current models, suggesting that this input could benefit from further refinement. Suggestions are offered to improve the use of Vs30 through in-situ measurements of surface shear wave propagation, integrated into Vs30 datasets through a possible transfer function. A discussion of the input variables and their impact on the resulting models is also included.
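A hedged sketch of the kind of logistic triggering model described above, with a Vs30 term added for shear stiffness, together with the threat score used to rank models. The coefficients and feature set are placeholders for illustration, not fitted values from this study:

```python
import math

def trigger_probability(rain_intensity: float, burned_fraction: float, vs30: float,
                        b0: float = -4.0, b1: float = 0.15,
                        b2: float = 2.0, b3: float = -0.003) -> float:
    """Hypothetical logistic triggering model. The negative b3 coefficient
    encodes the idea that stiffer slopes (higher Vs30) are less likely to
    produce a debris flow; all coefficients here are illustrative."""
    z = b0 + b1 * rain_intensity + b2 * burned_fraction + b3 * vs30
    return 1.0 / (1.0 + math.exp(-z))

def threat_score(tp: int, fp: int, fn: int) -> float:
    """Threat score (critical success index): TP / (TP + FP + FN),
    ignoring true negatives, which dominate in rare-event prediction."""
    return tp / (tp + fp + fn)
```

In practice the coefficients would be fitted to observed triggering/non-triggering events, and candidate models with and without the Vs30 term would be compared via ROC curves and threat scores.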
106

Sublingual drug delivery: In vitro characterization of barrier properties and prediction of permeability

Goswami, Tarun 01 January 2008 (has links) (PDF)
Sublingual administration of drugs offers advantages including avoidance of first-pass metabolism and quick absorption into the systemic circulation. Despite being one of the oldest routes of drug delivery, there is a dearth of literature characterizing the barrier properties of the sublingual mucosa. The aim of this research was therefore to gain insight into the barrier properties of the porcine sublingual mucosa. The studies in this dissertation focused on an important aspect of sublingual permeation: the dependence of permeability on physicochemical properties of the permeant, such as degree of ionization, distribution coefficient, and molecular weight/size. Further, the sublingual permeation data for model compounds were used to develop a predictive model, which provided an understanding of the descriptors important for sublingual drug delivery. A series of β-blockers were employed as model drugs to study the dependence of permeability on lipophilicity across the sublingual mucosa. Eight different β-blockers with log D (distribution coefficient) values ranging from -1.30 to 1.37 were used. The most hydrophilic drug, atenolol, showed the lowest permeability, (0.19 ± 0.04) × 10^-6 cm/sec, and the most lipophilic drug, propranolol, showed the highest, (38.25 ± 4.30) × 10^-6 cm/sec. The log-log plot of permeability coefficient against distribution coefficient was linear. It was concluded that increased lipophilicity improves partitioning into the lipid bilayers of the sublingual mucosa, which increases drug permeation. Because the sublingual mucosa contains a significant amount of polar lipids bound to water molecules, it was hypothesized that hydrophilic or ionized permeants would also show significant permeation across the sublingual mucosa.
The objective of this research was to study the effect of ionization on permeation across the sublingual mucosa using the model drug nimesulide. Based on the relationship between the permeability coefficient and the distribution coefficient of nimesulide at different pH values, the lipoidal route was suggested as the dominant transport route for nimesulide across the sublingual mucosa. The contribution of each ionic species of nimesulide to the total drug flux was quantitatively delineated. The ionized species contributes significantly to the total flux, reaching almost 90% at a pH where the drug was almost completely ionized. Polyethylene glycols (PEGs) were used as model permeants to study the dependence of permeability on molecular weight. An inverse relationship between molecular weight and permeability coefficient was observed and used to estimate the molecular weight cutoff of the sublingual mucosa, estimated to be around 1675 daltons. Further, the Renkin function was used to estimate the theoretical pore size of the sublingual mucosa, which was estimated to be around 30–53 Å based on two separate calculations using the radius of gyration and the Stokes-Einstein radius of the PEG molecules, respectively. No specific model exists in the literature to predict in vitro sublingual drug permeability. In this dissertation, such a model was developed and validated by performing permeation studies of 14 small molecules across the porcine sublingual mucosa. It was shown that lipophilicity (log D at pH 6.8) and the number of hydrogen bond donors (HBD) were the most significant descriptors affecting sublingual permeability.
Research conducted in this dissertation provided an in-depth understanding of the barrier properties of the porcine sublingual mucosa and of the role of different physicochemical properties in sublingual transport. Such an understanding will hopefully expand the pool of suitable lead candidates for sublingual delivery.
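The linear log-log relationship between permeability and lipophilicity reported above can be illustrated by fitting a line through the two endpoints the abstract gives (atenolol and propranolol). This is a two-point sketch, not the dissertation's actual regression over all eight β-blockers:

```python
import numpy as np

# Endpoints from the abstract: log D and permeability (cm/sec) for the most
# hydrophilic (atenolol) and most lipophilic (propranolol) beta-blockers.
log_D = np.array([-1.30, 1.37])
perm = np.array([0.19e-6, 38.25e-6])
log_P = np.log10(perm)

# Linear fit in log-log space: log10(P) = slope * log D + intercept.
slope, intercept = np.polyfit(log_D, log_P, 1)

def predict_permeability(logD: float) -> float:
    """Interpolate permeability (cm/sec) for a given log D under the
    assumed linear log-log relationship."""
    return 10 ** (intercept + slope * logD)
```

The positive slope expresses the abstract's conclusion: higher lipophilicity improves partitioning into the mucosal lipid bilayers and thus permeability.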
107

Enhancing Fuzzy Associative Rule Mining Approaches for Improving Prediction Accuracy. Integration of Fuzzy Clustering, Apriori and Multiple Support Approaches to Develop an Associative Classification Rule Base

Sowan, Bilal I. January 2011 (has links)
Building an accurate and reliable prediction model for different application domains is one of the most significant challenges in knowledge discovery and data mining. This thesis focuses on building and enhancing a generic predictive model for estimating a future value by extracting association rules (knowledge) from a quantitative database. The model is applied to several data sets obtained from different benchmark problems, and the results are evaluated through extensive experimental tests. The thesis presents an incremental development process for the prediction model with three stages. First, a Knowledge Discovery (KD) model is proposed that integrates Fuzzy C-Means (FCM) with the Apriori approach to extract Fuzzy Association Rules (FARs) from a database, building a Knowledge Base (KB) for predicting a future value. The KD model has been tested on two road-traffic data sets. Second, the initial model is extended with a diversification method that improves the reliability of the FARs and identifies the best and most representative rules. The resulting Diverse Fuzzy Rule Base (DFRB) maintains high-quality, diverse FARs, offering a more reliable and generic model. The model uses FCM to transform quantitative data into fuzzy data, while a Multiple Support Apriori (MSapriori) algorithm is adapted to extract the FARs from the fuzzy data. Correlation values for these FARs are calculated, and efficient filtering of the FARs is performed as a post-processing step. FAR diversity is maintained through clustering of the FARs, based on the sharing function technique used in multi-objective optimization. The best and most diverse FARs are retained as the DFRB for use within a Fuzzy Inference System (FIS) for prediction. The third stage of development proposes a hybrid prediction model called the Fuzzy Associative Classification Rule Mining (FACRM) model.
This model integrates the improved Gustafson-Kessel (G-K) algorithm, the proposed Fuzzy Associative Classification Rules (FACR) algorithm, and the proposed diversification method. The improved G-K algorithm transforms quantitative data into fuzzy data, while the FACR algorithm generates significant rules (Fuzzy Classification Association Rules, FCARs) by employing the improved multiple-support threshold, associative classification, and vertical scanning format approaches. These FCARs are then filtered by calculating the correlation value and the distance between them. The advantage of the proposed FACRM model is that it builds a generalized prediction model able to deal with different application domains. The FACRM model is validated using benchmark data sets from the University of California, Irvine (UCI) machine learning repository and the KEEL (Knowledge Extraction based on Evolutionary Learning) repository, and its results are compared with those of other existing prediction models. The experimental results show that the error rate and generalization performance of the proposed model are better than those of the commonly used models on the majority of data sets. A new feature selection method called Weighting Feature Selection (WFS) is also proposed, aiming to improve the performance of the FACRM model. Prediction performance is improved by minimizing the prediction error and reducing the number of generated rules. The prediction results of FACRM with WFS have been compared with those of the FACRM and Stepwise Regression (SR) models on different data sets. The performance analysis and comparative study show that the proposed prediction model provides an effective approach that can be used within a decision support system. / Applied Science University (ASU) of Jordan
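Two of the building blocks mentioned above, association-rule support/confidence and fuzzy set membership, can be sketched in a few lines. The triangular membership function is a simplified stand-in for the FCM/G-K-derived fuzzy partitions used in the thesis:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset` --
    the quantity Apriori-style algorithms threshold on."""
    itemset = frozenset(itemset)
    return sum(itemset <= set(t) for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Confidence of the rule antecedent -> consequent:
    support(antecedent ∪ consequent) / support(antecedent)."""
    both = set(antecedent) | set(consequent)
    return support(both, transactions) / support(antecedent, transactions)

def triangular_membership(x, a, b, c):
    """Degree of membership in a triangular fuzzy set rising from a,
    peaking at b, and falling to c -- a simplified stand-in for the
    clustering-derived fuzzy partitions described in the thesis."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
```

In the thesis's pipeline, quantitative attributes would first be fuzzified (here, via membership functions), and rules would then be mined over the fuzzy items with per-item minimum supports rather than the single global threshold shown here.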
108

Développement et validation d’un modèle d’apprentissage machine pour la détection de potentiels donneurs d’organes

Sauthier, Nicolas 08 1900 (has links)
Le processus du don d’organes, crucial pour la survie de nombreux patients, ne répond pas à la demande croissante. Il dépend d’une identification, par les cliniciens, des potentiels donneurs d’organes. Cette étape est imparfaite et manque entre 30% et 60% des potentiels donneurs d’organes et ce indépendamment des pays étudiés. Améliorer ce processus est un impératif à la fois moral et économique. L’objectif de ce mémoire était de développer et valider un modèle afin de détecter automatiquement les potentiels donneurs d’organes. Pour ce faire, les données cliniques de l’ensemble des patients adultes hospitalisés aux soins intensifs du CHUM entre 2012 et 2019 ont été utilisées. 103 valeurs de laboratoires temporelles différentes et 2 valeurs statiques ont été utilisées pour développer un modèle de réseaux de neurones convolutifs entrainé à prédire les potentiels donneurs d’organes. Ce modèle a été comparé à un modèle fréquentiste linéaire non temporel. Le modèle a par la suite été validé dans une population externe cliniquement distincte. Différentes stratégies ont été comparées pour peaufiner le modèle dans cette population externe et améliorer les performances. Un total de 19 463 patients, dont 397 donneurs potentiels, ont été utilisés pour développer le modèle et 4 669, dont 36 donneurs potentiels, ont été utilisés pour la validation externe. Le modèle démontrait une aire sous la courbe ROC (AUROC) de 0.966 (IC95% 0.949-0.981), supérieure au modèle fréquentiste linéaire (AUROC de 0.940, IC95% 0.908-0.969, p=0.014). Le modèle était aussi supérieur dans certaines sous populations d’intérêt clinique. Dans le groupe de validation externe, l’AUROC du modèle de réseaux de neurones était de 0.820 (0.682-0.948) augmentant à 0.874 (0.731-0.974) à l’aide d’un ré-entrainement. Ce modèle prometteur a le potentiel de modifier et d’améliorer la détection des potentiels donneurs d’organes.
D’autres étapes de validation prospectives et d’amélioration du modèle, notamment l’ajout de données spécifiques, sont nécessaires avant une utilisation clinique de routine. / The organ donation process, though crucial for many patients' survival, does not meet the increasing demand. Its efficiency depends on the identification of potential organ donors by clinicians. This imperfect step misses between 30% and 60% of potential organ donors, independently of the countries studied. Improving that process is a moral and economic imperative. The main goal of this work was to address that limiting step by developing and validating a predictive model that automatically detects potential organ donors. Clinical data from all patients hospitalized between 2012 and 2019 in the CHUM critical care units were extracted. The temporal evolution of 103 types of laboratory analysis and 2 static clinical variables was used to develop and test a convolutional neural network (CNN) trained to predict potential organ donors. This model was compared to a non-temporal logistic model as a baseline. The CNN model was then validated in a clinically distinct external population, and strategies to fine-tune the network were compared to improve performance in this external cohort. 19,463 patients, including 397 potential organ donors, were used to create the model, and 4,669 patients, including 36 potential organ donors, served as the external validation cohort. The CNN model performed better, with an AUROC of 0.966 (95% CI 0.949-0.981), compared to the logistic model (AUROC of 0.940, 95% CI 0.908-0.969; p=0.014). The CNN model was also superior in specific subpopulations of increased clinical interest. In the external validation cohort, the CNN model's AUROC was 0.820 (0.682-0.948) and could be improved to 0.874 (0.731-0.974) after fine-tuning. This promising model could change the detection of potential organ donors for the better.
More studies are however required to improve the model, by adding more types of data, and to validate prospectively the mode before routine clinical usage.
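The thesis itself contains no code; as a rough illustration of the kind of architecture the abstract describes (a convolutional network over temporal laboratory values, with static covariates appended before a donor-probability output), a minimal numpy forward-pass sketch follows. All dimensions, parameter values, and the static-feature encoding are hypothetical, and a real implementation would use a deep-learning framework with learned weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    # x: (channels, time), w: (filters, channels, kernel), b: (filters,)
    c, t = x.shape
    f, _, k = w.shape
    out = np.empty((f, t - k + 1))
    for i in range(t - k + 1):
        # Each filter sums over all lab channels and the kernel window.
        out[:, i] = np.tensordot(w, x[:, i:i + k], axes=([1, 2], [0, 1])) + b
    return np.maximum(out, 0.0)  # ReLU

def predict_donor_prob(labs, static, params):
    h = conv1d(labs, params["w1"], params["b1"])   # temporal features
    pooled = h.mean(axis=1)                        # global average pooling
    z = np.concatenate([pooled, static])           # append static covariates
    logit = z @ params["w2"] + params["b2"]
    return 1.0 / (1.0 + np.exp(-logit))            # sigmoid -> probability

# Hypothetical sizes: 8 lab channels over 24 time steps (the actual model
# used 103 temporal lab values and 2 static variables).
params = {
    "w1": rng.normal(scale=0.1, size=(4, 8, 3)),
    "b1": np.zeros(4),
    "w2": rng.normal(scale=0.1, size=6),
    "b2": 0.0,
}
labs = rng.normal(size=(8, 24))
static = np.array([65.0, 1.0])  # e.g. age, sex (assumed encoding)
p = predict_donor_prob(labs, static, params)
```

Convolving along the time axis is what lets such a model react to trajectories in lab values (trends, spikes) rather than single measurements, which a non-temporal logistic regression cannot capture.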
109

AI-Powered Network Traffic Prediction / AI baserad prediktering av nätverkstraffik

Bolakhrif, Amin January 2021 (has links)
In this Internet and big data era, resource management has become a crucial task to ensure quality of service for users in modern wireless networks. Accurate and timely Internet traffic information is essential to many computer networking applications that enable high network performance, facilitating admission control, congestion control, anomaly detection, and bandwidth allocation. In radio networks, these mechanisms are typically handled by features such as Carrier Aggregation, Inter-Frequency Handover, and Predictive Scheduling. Since these mechanisms often take time and cost radio resources, it is desirable to enable them only for users expected to gain from them. Network traffic flow prediction is the problem of forecasting aspects of an ongoing traffic flow in order to mobilize networking mechanisms that ensure both user experience quality and sound resource management. These aspects include the expected size of an active traffic flow, its expected duration, and the anticipated number of packets within the flow; forecasting individual packet sizes and arrival times can also be beneficial. The widespread availability of Internet flow data allows machine learning algorithms to learn the complex relationships in network traffic and form models capable of forecasting traffic flows. This study proposes a deep-learning-based flow prediction method built on a residual neural network (ResNet) for regression. The proposed model architecture accurately predicts the packet count, size, and duration of flows using only the information available at the arrival of the first packet. It also outperforms traditional machine learning methods such as linear regression and decision trees, as well as conventional deep neural networks.
The results indicate that the proposed method can predict the general magnitude of flows with high accuracy, yielding precise magnitude classifications, provided that IP addresses are available.
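The core idea the abstract names, a ResNet-style network used for regression, can be sketched in miniature: identity skip connections around small fully connected blocks, ending in a multi-target regression head. This is not the thesis's actual network; the feature names, layer sizes, and the choice of three regression targets are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def residual_block(x, w1, b1, w2, b2):
    # Two linear layers with ReLU; the skip connection adds the input back,
    # which is the defining feature of a ResNet-style block.
    h = np.maximum(x @ w1 + b1, 0.0)
    return np.maximum(x + (h @ w2 + b2), 0.0)

def predict_flow(features, params):
    h = np.maximum(features @ params["w_in"] + params["b_in"], 0.0)
    for w1, b1, w2, b2 in params["blocks"]:
        h = residual_block(h, w1, b1, w2, b2)
    # Regression head: [packet count, total bytes, duration] (assumed targets).
    return h @ params["w_out"] + params["b_out"]

d, hdim = 6, 16  # hypothetical feature and hidden dimensions
params = {
    "w_in": rng.normal(scale=0.1, size=(d, hdim)), "b_in": np.zeros(hdim),
    "blocks": [(rng.normal(scale=0.1, size=(hdim, hdim)), np.zeros(hdim),
                rng.normal(scale=0.1, size=(hdim, hdim)), np.zeros(hdim))
               for _ in range(2)],
    "w_out": rng.normal(scale=0.1, size=(hdim, 3)), "b_out": np.zeros(3),
}
# Hypothetical first-packet features: size, protocol, port, time of day, ...
first_packet = rng.normal(size=d)
pred = predict_flow(first_packet, params)
```

The skip connections let gradients flow through many layers during training, which is why residual networks can be made deeper than plain feed-forward networks without degrading; predicting all three flow quantities from one shared trunk matches the abstract's single-model, first-packet setting.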
110

Bayesian Model Checking Methods for Dichotomous Item Response Theory and Testlet Models

Combs, Adam 02 April 2014 (has links)
No description available.
