About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
52

Processo alternativo para obtenção de tetrafluoreto de urânio a partir de efluentes fluoretados da etapa de reconversão de urânio / Dry uranium tetrafluoride process preparation using the uranium hexafluoride reconversion process effluents

SILVA NETO, JOAO B. da 09 October 2014 (has links)
Chemical processing starting from uranium hexafluoride (UF6) allows flexibility in the production of fuels based on uranium silicide (U3Si2) and triuranium octoxide (U3O8). Work is currently under way at IPEN-CNEN/SP on processing fuels with a high uranium concentration by replacing U3O8 with U3Si2. To obtain U3Si2, two routes can be considered for preparing the raw material, uranium tetrafluoride (UF4): reduction of the uranium present in the hydrolyzed UF6 solution using stannous chloride (SnCl2), or hydrofluorination of uranium dioxide (UO2) obtained from ammonium uranyl tricarbonate (TCAU). This work describes a procedure for obtaining uranium tetrafluoride (UF4) using as raw material the filtrates generated during the preparation of certain compounds in the uranium hexafluoride (UF6) reconversion processes, specifically ammonium uranyl peroxyfluoride (APOFU). The filtrates consist mainly of a solution with high concentrations of ammonium (NH4+) and fluoride (F-) ions and a low concentration of uranium. The process described aims primarily at recovering the NH4F and the uranium, as UF4, through crystallization of ammonium bifluoride (NH4HF2) followed, in a later step, by its addition to UO2, where fluorination and decomposition take place. The UF4 obtained was characterized chemically and physically and will be recycled to the metallic uranium production unit to obtain U3Si2, used as fuel for the IEA-R1m reactor. / Dissertation (Master's) / IPEN/D / Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
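The core chemistry of the dry route can be sketched in two steps. The abstract gives no explicit equations, so the stoichiometry below is an assumption based on commonly reported ammonium bifluoride routes: NH4F recovered from the filtrate is concentrated and crystallized as NH4HF2, which is then added to UO2, where fluorination (via intermediate ammonium uranium fluorides) and thermal decomposition yield UF4.

$$2\,\mathrm{NH_4F} \xrightarrow{\;\Delta\;} \mathrm{NH_4HF_2} + \mathrm{NH_3}\uparrow$$

$$\mathrm{UO_2} + 2\,\mathrm{NH_4HF_2} \longrightarrow \mathrm{UF_4} + 2\,\mathrm{NH_3}\uparrow + 2\,\mathrm{H_2O}\uparrow$$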
53

Habitat models to predict wetland bird occupancy influenced by scale, anthropogenic disturbance, and imperfect detection

Glisson, Wesley J., Conway, Courtney J., Nadeau, Christopher P., Borgmann, Kathi L. 06 1900 (has links)
Understanding species-habitat relationships for endangered species is critical for their conservation. However, many studies have limited value for conservation because they fail to account for habitat associations at multiple spatial scales, anthropogenic variables, and imperfect detection. We addressed these three limitations by developing models for an endangered wetland bird, Yuma Ridgway's rail (Rallus obsoletus yumanensis), that examined how the spatial scale of environmental variables, inclusion of anthropogenic disturbance variables, and accounting for imperfect detection in validation data influenced model performance. These models identified associations between environmental variables and occupancy. We used bird survey and spatial environmental data at 2473 locations throughout the species' U.S. range to create and validate occupancy models and produce predictive maps of occupancy. We compared habitat-based models at three spatial scales (100, 224, and 500 m radii buffers) with and without anthropogenic disturbance variables using validation data adjusted for imperfect detection and an unadjusted validation dataset that ignored imperfect detection. The inclusion of anthropogenic disturbance variables improved the performance of habitat models at all three spatial scales, and the 224-m-scale model performed best. All models exhibited greater predictive ability when imperfect detection was incorporated into validation data. Yuma Ridgway's rail occupancy was negatively associated with ephemeral and slow-moving riverine features and high-intensity anthropogenic development, and positively associated with emergent vegetation, agriculture, and low-intensity development. Our modeling approach accounts for common limitations in modeling species-habitat relationships and creating predictive maps of occupancy probability and, therefore, provides a useful framework for other species.
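To make the "imperfect detection" idea concrete, below is a minimal sketch of the standard single-season occupancy likelihood that underlies this kind of analysis; the detection histories and the use of NumPy/SciPy are illustrative assumptions, not the authors' actual code.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical detection histories: rows = sites, columns = repeat surveys
# (1 = species detected). The study's real data spans 2473 locations.
Y = np.array([[1, 0, 1],
              [0, 0, 0],
              [1, 1, 0],
              [0, 1, 0],
              [0, 0, 0]])

def neg_log_lik(params, Y):
    """Single-season occupancy model: psi = P(occupied), p = P(detect | occupied)."""
    psi, p = 1.0 / (1.0 + np.exp(-params))  # logit-scale params -> probabilities
    n = Y.shape[1]
    d = Y.sum(axis=1)
    # A site with >= 1 detection is certainly occupied; an all-zero history is
    # either occupied-but-missed (psi * (1-p)^n) or truly unoccupied (1 - psi).
    lik = np.where(d > 0,
                   psi * p**d * (1 - p)**(n - d),
                   psi * (1 - p)**n + (1 - psi))
    return -np.log(lik).sum()

res = minimize(neg_log_lik, x0=np.zeros(2), args=(Y,))
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-res.x))
print(f"occupancy = {psi_hat:.2f}, detection = {p_hat:.2f}")
```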
54

Learning with Complex Performance Measures: Theory, Algorithms and Applications

Narasimhan, Harikrishna January 2016 (has links) (PDF)
We consider supervised learning problems, where one is given objects with labels, and the goal is to learn a model that can make accurate predictions on new objects. These problems abound in applications, ranging from medical diagnosis to information retrieval to computer vision. Examples include binary or multiclass classification, where the goal is to learn a model that can classify objects into two or more categories (e.g. categorizing emails as spam or non-spam); bipartite ranking, where the goal is to learn a model that can rank relevant objects above irrelevant ones (e.g. ranking documents by relevance to a query); and class probability estimation (CPE), where the goal is to predict the probability of an object belonging to different categories (e.g. the probability of an internet ad being clicked by a user). In each case, the accuracy of a model is evaluated in terms of a specified 'performance measure'. While there has been much work on designing and analyzing algorithms for different supervised learning tasks, we have a complete understanding only for settings where the performance measure of interest is the standard 0-1 or a loss-based classification measure. These performance measures have a simple additive structure, and can be expressed as an expectation of errors on individual examples. However, in many real-world applications, the performance measure used to evaluate a model is often more complex, and does not decompose into a sum or expectation of point-wise errors. These include the binary or multiclass G-mean used in class-imbalanced classification problems; the F1-measure and its multiclass variants popular in text retrieval; and the (partial) area under the ROC curve (AUC) and precision@k employed in ranking applications. How does one design efficient learning algorithms for such complex performance measures, and can these algorithms be shown to be statistically consistent, i.e. shown to converge in the limit of infinite data to the optimal model for the given measure? How does one develop efficient learning algorithms for complex measures in online/streaming settings where the training examples need to be processed one at a time? These are the questions we seek to address in this thesis.

Firstly, we consider the bipartite ranking problem with the AUC and partial AUC performance measures. We start by understanding how bipartite ranking with AUC is related to the standard 0-1 binary classification and CPE tasks. It is known that a good binary CPE model can be used to obtain both a good binary classification model and a good bipartite ranking model (formally, in terms of regret transfer bounds), and that a binary classification model does not necessarily yield a CPE model. However, not much is known about the other directions. We show that in a weaker sense (where the mapping needed to transform a model from one problem to another depends on the underlying probability distribution), a good bipartite ranking model for AUC can indeed be used to construct a good binary classification model, and also a good binary CPE model. Next, motivated by the increasing number of applications (e.g. biometrics, medical diagnosis, etc.) where performance is measured not in terms of the full AUC but in terms of the partial AUC between two false positive rates (FPRs), we design batch algorithms for optimizing the partial AUC in any given FPR range. Our algorithms optimize structural support vector machine based surrogates, which, unlike those for the full AUC, do not admit a straightforward decomposition into simpler terms. We develop polynomial-time cutting plane solvers for solving the optimization, and provide experiments to demonstrate the efficacy of our methods. We also present an application of our approach to predicting chemotherapy outcomes for cancer patients, with the aim of improving treatment decisions.

Secondly, we develop algorithms for optimizing (surrogates for) complex performance measures in the presence of streaming data. A well-known method for solving this problem for standard point-wise surrogates, such as the hinge surrogate, is the stochastic gradient descent (SGD) method, which performs point-wise updates using unbiased gradient estimates. However, this method cannot be applied to complex objectives, as here one can no longer obtain unbiased gradient estimates from a single point. We develop a general stochastic method for optimizing complex measures that avoids point-wise updates, and instead performs gradient updates on mini-batches of incoming points. The method is shown to provably converge for any performance measure that satisfies a uniform convergence requirement, such as the partial AUC, precision@k and F1-measure, and in experiments is often several orders of magnitude faster than the state-of-the-art batch methods, while achieving similar or better accuracies. Moreover, for specific complex binary classification measures that are concave functions of the true positive rate (TPR) and true negative rate (TNR), we are able to develop stochastic (primal-dual) methods that can indeed be implemented with point-wise updates, using an adaptive linearization scheme. These methods admit convergence rates that match the rate of the SGD method, and are again several times faster than the state-of-the-art methods.

Finally, we look at the design of consistent algorithms for complex binary and multiclass measures. For binary measures, we consider the practically popular plug-in algorithm, which constructs a classifier by applying an empirical threshold to a suitable class probability estimate, and provide a general methodology for proving the consistency of these methods. We apply this technique to show consistency for the F1-measure and, under a continuity assumption on the distribution, for any performance measure that is monotonic in the TPR and TNR. For the case of multiclass measures, a simple plug-in method is no longer tractable, as in place of a single threshold parameter one needs to tune at least as many parameters as the number of classes. Using an optimization viewpoint, we provide a framework for designing learning algorithms for multiclass measures that are general functions of the confusion matrix, and as an instantiation, provide an efficient and provably consistent algorithm based on the bisection method for multiclass measures that are ratio-of-linear functions of the confusion matrix (e.g. micro F1). The algorithm outperforms the state-of-the-art SVMPerf method in terms of both accuracy and running time.

Overall, in this thesis we have looked at various aspects of complex performance measures used in supervised learning problems, leading to several new algorithms that are often significantly better than the state-of-the-art, to an improved theoretical understanding of the performance measures studied, and to novel real-life applications of the algorithms developed.
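As a concrete illustration of the plug-in approach described above, here is a minimal sketch (assuming scikit-learn and synthetic data, neither of which comes from the thesis): fit any class probability estimator, then select the empirical threshold that maximizes F1 on held-out data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical imbalanced data standing in for a real task.
X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# Step 1: fit any reasonable class probability estimator (CPE).
cpe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = cpe.predict_proba(X_val)[:, 1]

# Step 2: choose the empirical threshold that maximizes F1 on held-out data.
thresholds = np.linspace(0.05, 0.95, 19)
f1s = [f1_score(y_val, scores >= t) for t in thresholds]
best = int(np.argmax(f1s))
print(f"plug-in threshold = {thresholds[best]:.2f}, F1 = {f1s[best]:.3f}")
```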
55

Prediction of Credit Risk using Machine Learning Models

Isaac, Philip January 2022 (has links)
This thesis investigates different machine learning (ML) models and their performance in order to find the best-performing model for predicting credit risk at a specific company. Since granting credit to corporate customers is part of this company's core business, managing credit risk is of high importance. As of today, the company has only one credit risk measurement, obtained from an external company, and the goal is to find a model that outperforms this measurement. The study considers two ML models, Logistic Regression (LR) and eXtreme Gradient Boosting (XGBoost). This thesis shows that both methods perform better than the external risk measurement, with LR achieving the best overall performance. One of the most important analyses in this thesis was handling the dataset and finding the best-suited combination of features for the ML models to use.
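A minimal sketch of such a model comparison, assuming scikit-learn, the xgboost package, and synthetic stand-in data (the thesis's actual dataset and feature engineering are company-internal):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Hypothetical imbalanced "default / no default" data.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# Fit both model families and compare discriminative power via AUC.
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("xgboost", XGBClassifier(n_estimators=200, max_depth=3))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```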
56

Bankruptcy prediction models on Swedish companies

Charraud, Jocelyn, Garcia Saez, Adrian January 2021 (has links)
Bankruptcies have been a sensitive topic around the world for over 50 years. From their research, the authors found that only a few bankruptcy studies have been conducted in Sweden, and even fewer on the topic of bankruptcy prediction models. This thesis investigates the performance of the Altman, Ohlson, and Zmijewski bankruptcy prediction models on all active Swedish companies during the years 2017 and 2018. The study intends to shed light on some of the most famous bankruptcy prediction models and to explore their predictive abilities and usability in Sweden. Its second purpose is to create two models from the most significant variables of the three models studied and to test their predictive power, with the aim of producing models designed for Swedish companies. We identified a research gap in Sweden, where bankruptcy prediction models have been rather unexplored, particularly these three models, and a second gap regarding the time period: few studies on bankruptcy prediction models have been conducted after the financial crisis of 2007-08. To achieve the purpose of the study, we conducted quantitative research using secondary data gathered from the Serrano database, following an abductive approach within a positivist paradigm. The research contributes to the current field of knowledge by analyzing the models' results on Swedish companies through the liquidity theory, the solvency and insolvency theory, the pecking order theory, the profitability theory, the cash flow theory, and the contagion effect. The results aligned with the liquidity, solvency and insolvency, and profitability theories. We found that the Altman model performed worst of the three, followed by the Ohlson model, which showed mixed results depending on the statistical analysis; the Zmijewski model performed best. The performance and predictive power of the two new models were significantly higher than those of the three models studied.
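For reference, the oldest of the three models can be written down directly. The sketch below uses the coefficients from Altman's original 1968 paper; the example inputs are hypothetical.

```python
# Original Altman (1968) Z-score, one of the three models the thesis evaluates.
def altman_z(working_capital: float, retained_earnings: float, ebit: float,
             market_value_equity: float, sales: float,
             total_assets: float, total_liabilities: float) -> float:
    """Z < 1.81 suggests the 'distress' zone; Z > 2.99 the 'safe' zone."""
    return (1.2 * working_capital / total_assets
            + 1.4 * retained_earnings / total_assets
            + 3.3 * ebit / total_assets
            + 0.6 * market_value_equity / total_liabilities
            + 1.0 * sales / total_assets)

# Hypothetical balance-sheet figures, in arbitrary currency units.
print(f"Z = {altman_z(50, 120, 40, 300, 500,
                      total_assets=400, total_liabilities=250):.2f}")
```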
57

Landslide Susceptibility Analysis Using Open Geo-spatial Data and Frequency Ratio Technique / Jordskredkänslighetsanalys med hjälp av öppen geo-spatial data och frekvenskvotsteknik

YORULMAZ, TARIK EMRE January 2022 (has links)
Landslide susceptibility maps are useful for spatial decision-making to minimize the loss of lives and property. There are many studies on developing landslide susceptibility maps using methods such as the Analytic Hierarchy Process, Weight of Evidence, and Logistic Regression. Commonly, the geospatial data required for such analysis (such as land cover and soil type maps) are only locally available and pertinent to small case studies. Transferable and scalable approaches utilizing publicly available, large-scale (i.e., global or continental) datasets are necessary to develop susceptibility maps in areas where local data are not available or when large-scale analysis is required. To develop such approaches, a systematic comparison between locally available, fine-resolution datasets and larger-scale, openly available but coarser-resolution datasets is essential. The objective of this study is to investigate the efficiency of globally available public data for landslide susceptibility mapping by comparing its performance with that of data provided by local institutions. For this purpose, the Göta river valley in Sweden and the country of Rwanda were selected as study areas: the Göta river valley was used for the comparison of local and open data, while Rwanda was used to assess the efficiency of open-data analysis and the transferability of the framework. The landslide impact factors selected for this study are elevation, slope, soil type, land cover, precipitation, lithology, distance to roads, and distance to the drainage network. Landslide susceptibility maps were prepared using the state-of-the-art Frequency Ratio method. Validation using the prediction rate curve technique shows area-under-curve values of 92.9% and 90.2% for the local and open data analyses of the Göta river valley, respectively, and 83.1% for the open data analysis of Rwanda. The results show that globally available open data demonstrate strong potential for landslide susceptibility mapping when high-resolution local data are not available.
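The Frequency Ratio method itself is simple enough to sketch: for each class of each impact factor, FR is the share of landslide cells falling in that class divided by the share of total area the class covers, and cell-wise sums of FR values across factors give the susceptibility index. A minimal sketch with hypothetical raster data:

```python
import numpy as np

# Hypothetical rasters flattened to 1-D: a categorical impact factor
# (e.g. land cover class per cell) and a binary landslide inventory.
rng = np.random.default_rng(0)
factor = rng.integers(0, 4, size=10_000)   # class id per cell
landslide = rng.random(10_000) < 0.02      # True = landslide cell

# FR > 1 means the class is more landslide-prone than average.
for cls in np.unique(factor):
    in_cls = factor == cls
    pct_slides = landslide[in_cls].sum() / landslide.sum()
    pct_area = in_cls.sum() / factor.size
    print(f"class {cls}: FR = {pct_slides / pct_area:.2f}")
```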
58

Discriminative Articulatory Feature-based Pronunciation Models with Application to Spoken Term Detection

Prabhavalkar, Rohit Prakash 27 September 2013 (has links)
No description available.
59

Predicting Community-based Methadone Maintenance Treatment (MMT) Outcome

Stones, George 07 January 2013 (has links)
This was a retrospective study of a community-based methadone maintenance treatment (MMT) program in Toronto. Participants (N = 170) were federally sentenced adult male offenders admitted to this voluntary program between 1997 and 2009 while subject to community supervision following incarceration. The primary investigation examined correlates of treatment responsivity, with principal outcome measures including MMT clients' rates of (i) illicit drug use and (ii) completion of conditional (parole) or statutory release (SR). For a subset (n = 74), recidivism rates were examined after a 9-year interval. Findings included strong convergent evidence from logistic regression and ROC analyses that an empirically and theoretically derived set of five variables was a stable and highly significant (p < .001) predictor of release outcome. Using five factors related to risk (work/school status, security level of the releasing institution, total PCL-R score, history of institutional drug use, and days at risk), release outcome was predicted with an overall classification accuracy of 88%, with high specificity (86%) and sensitivity (89%). The logistic regression model generated an R2 of .55 and the accompanying AUC was .89, both substantial. Work/school status had an extremely large positive association with successful completion of community supervision, accounting for more than half of the total variance explained by the five-factor model and increasing the estimated odds of successful release outcome more than 15-fold. Also, while in the MMT program, clients' risk-taking behaviour was significantly moderated, with low overall base rates of illicit drug use, yet the rate of parole/SR revocation (71%) was high. The 9-year follow-up showed a high overall mortality rate (15%). Revocation of release while in the MMT program was associated with a significantly higher rate of, and more violent, recidivism at follow-up. Results are discussed within the context of: (a) Andrews and Bonta's psychology of criminal conduct; (b) the incompatibility of a harm-reduction treatment model with an abstinence-based parole decision-making model; (c) changing drug-use profiles among MMT clients; (d) a strength-based approach to correctional intervention focusing on educational and vocational retraining initiatives; and (e) the creation of a user-friendly case-based screening algorithm for predicting release outcome for new releases.
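To unpack the "15-fold" figure in model terms: in logistic regression, a predictor's odds ratio is the exponential of its coefficient, so (with $\beta_1$ labeling the unreported work/school coefficient, purely for illustration):

$$\operatorname{logit} P(\text{success}) = \beta_0 + \beta_1 x_{\text{work/school}} + \cdots + \beta_5 x_5, \qquad e^{\beta_1} > 15 \;\Rightarrow\; \beta_1 > \ln 15 \approx 2.71.$$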
60

Targeting mycophenolate mofetil for graft-versus-host disease prophylaxis after allogenic blood stem cell transplantation / Pharmakokinetisches Targeting von Mycophenolat mofetil zur GvHD-Prophylaxe nach allogener Stammzelltransplantation

Häntzschel, Ingmar 01 July 2011 (has links) (PDF)
