41

A Comparison of Manual and Automated Grammatical Precoding on the Accuracy of Automated Developmental Sentence Scoring

Janis, Sarah Elizabeth 01 May 2016 (has links)
Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that evaluates and scores a child's use of standard American-English grammatical rules within complete sentences. Automated DSS programs have the potential to increase efficiency and reduce the time required for DSS analysis. The present study examines the accuracy of one automated DSS software program, DSSA 2.0, compared with manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). Additionally, this study seeks to determine the source of error in the automated score by comparing DSSA 2.0 analyses given manually versus automatically assigned grammatical tags as input. The overall accuracy of DSSA 2.0 was 86%; the accuracy of individual grammatical category point-value scores varied greatly. No statistically significant difference was found between the two DSSA 2.0 input conditions (manual vs. automated tags), suggesting that the underlying grammatical tagging is not the primary source of error in DSSA 2.0 analysis.
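As a rough illustration of how sentence-level agreement between manual and automated DSS scores might be computed (the scores and the comparison below are invented for the sketch, not taken from the study):

```python
# Hypothetical illustration: agreement between manual and automated DSS
# sentence scores. The sample values are invented.
manual_scores    = [8, 10, 6, 12, 9, 7, 11, 5]
automated_scores = [8,  9, 6, 12, 9, 7, 10, 5]

# Proportion of sentences where the automated score matches the manual score exactly.
exact_matches = sum(m == a for m, a in zip(manual_scores, automated_scores))
accuracy = exact_matches / len(manual_scores)

# Mean absolute deviation shows how far automated scores drift when they miss.
mean_abs_error = sum(abs(m - a) for m, a in zip(manual_scores, automated_scores)) / len(manual_scores)

print(f"Exact agreement: {accuracy:.0%}")
print(f"Mean absolute error: {mean_abs_error:.2f} DSS points")
```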
42

Möjligheten för småföretagande att få banklån : En jämförelsestudie mellan tre banker på mindre orter / The possibility for small businesses to obtain bank loans: a comparative study of three banks in smaller towns

Ibrisagic, Sabina January 2019 (has links)
The purpose of the study is to identify factors that affect the ability of small business owners to obtain bank loans in smaller towns. The study uses a qualitative method based on three interviews with respondents at banks located in smaller towns, combined with participant observation, as the author works at one of the banks. The collected material shows that local knowledge plays a major role in the banks' credit granting, together with the small business owners' history of sound financial conduct. The banks also use scoring, through which they assess the borrowers' repayment capacity. In addition, the banks consider it important that small businesses present a business idea that is realistic and sustainable. These criteria are the same regardless of which customer the bank meets.
43

Effects of Dual Language Learning on Early Language and Literacy Skills in Low Income Preschool Students

Tápanes, Vanessa 02 July 2007 (has links)
This paper presents a framework for literacy skill development in both monolingual and dual language learners. The purpose of this study was to identify differences that may exist between monolingual and dual language learners' performance on literacy tasks before they have had significant exposure to the preschool curriculum. The sample included 78 monolingual language learners and 44 dual language learners who were assessed using the Woodcock Language Proficiency Battery-Revised (WLPB-R). To account for the split vocabulary common in dual language learners, a conceptual scoring technique was used (Bedore, Pena, Garcia, & Cortez, 2005). The research design was causal-comparative, investigating the effects of dual language learning on letter knowledge, concepts of print, vocabulary, listening comprehension, and broad language development. Findings from two multivariate analyses of variance indicated significant differences between monolingual and dual language learners on early language and literacy skills. This study contributes to the literature on dual language development and the use of appropriate scoring methods. In particular, the outcomes provide guidance on best practices for assessing dual language learners to identify learning and language difficulties.
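As background on the conceptual scoring idea cited above, a minimal sketch (items and responses invented; this is not the study's scoring code): the child receives credit for a vocabulary concept named correctly in either language, rather than in English alone.

```python
# Hypothetical conceptual scoring of bilingual vocabulary items.
responses = [
    {"item": "dog",   "english_correct": True,  "spanish_correct": False},
    {"item": "chair", "english_correct": False, "spanish_correct": True},
    {"item": "apple", "english_correct": False, "spanish_correct": False},
]

monolingual_score = sum(r["english_correct"] for r in responses)           # English-only credit
conceptual_score  = sum(r["english_correct"] or r["spanish_correct"]        # credit in either language
                        for r in responses)

print(monolingual_score, conceptual_score)   # 1 vs. 2 for this invented item set
```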
44

Prediction Markets: Theory and Applications

Ruberry, Michael Edward 18 October 2013 (has links)
In this thesis I offer new results on how we can acquire, reward, and use accurate predictions of future events. Some of these results are entirely theoretical, improving our understanding of strictly proper scoring rules (Chapter 3) and extending strict properness to include cost functions (Chapter 4). Others are more applied, such as developing a practical cost function for the [0, 1] interval (Chapter 5), exploring how to design simple and informative prediction markets (Chapter 6), and using predictions to make decisions (Chapter 7). / Engineering and Applied Sciences
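As background only (standard textbook definitions, not the thesis's own notation): a scoring rule is strictly proper when truthful reporting is the unique maximizer of expected score.

```latex
% A scoring rule S(p, i) pays a forecaster who reported distribution p when
% outcome i occurs. S is strictly proper if, for every belief q and every p \neq q,
\[
  \mathbb{E}_{i \sim q}\bigl[S(q, i)\bigr] \;>\; \mathbb{E}_{i \sim q}\bigl[S(p, i)\bigr].
\]
% Two classical strictly proper examples, the logarithmic and quadratic scores:
\[
  S_{\log}(p, i) = \log p_i,
  \qquad
  S_{\mathrm{quad}}(p, i) = 2p_i - \sum_{j} p_j^{2}.
\]
```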
45

Análise de determinantes da inadimplência (pessoa física) tomadores de crédito: uma abordagem econométrica / Analysis of the determinants of default among individual credit borrowers: an econometric approach

Evanessa Maria Barbosa de Castro Lima 19 April 2004 (has links)
nÃo hà / Sendo a intermediaÃÃo financeira a principal atividade dos bancos, alocando recursos de clientes superavitÃrios a clientes deficitÃrios, à na incerteza quanto ao carÃter e a capacidade de pagamento dos clientes que se estabelece o risco e com ele a necessidade de se buscar novas alternativas para se proteger de perdas potenciais, que podem refletir em menores lucros para as instituiÃÃes. AlÃm da subjetividade dos analistas de crÃdito, o uso de modelos quantitativos, baseados em prÃticas estatÃsticas, economÃtricas e matemÃticas, vÃm cada vez mais se firmando nos mercados como ferramenta de apoio aos gestores de crÃdito na tomada de decisÃo. VÃrios modelos de avaliaÃÃo de risco sÃo adotados pelas instituiÃÃes, modelos de credit scoring, behavioral scoring, sÃo exemplos destes modelos. O modelo de credit scoring tem sido um dos mais usados, em especial para concessÃo de crÃdito a pessoas fÃsicas. Os modelos de credit scoring utilizam tÃcnicas como a anÃlise de discriminantes, programaÃÃo matemÃtica, econometria, redes neurais, entre outras, para atravÃs da anÃlise de caracterÃsticas particulares dos indivÃduos, estabelecer uma mÃtrica de separaÃÃo de bons e maus pagadores, atribuindo probabilidades diferentes de inadimplÃncia aos mesmos. A presente dissertaÃÃo tem como objetivo central analisar os determinantes de inadimplÃncia (pessoa fÃsica), usando uma abordagem economÃtrica com base no modelo Logit. O modelo utilizado foi um modelo para aprovaÃÃo de crÃdito na abertura de conta corrente, partindo de um estudo com uma amostra de 308 observaÃÃes (cadastros pessoas fÃsicas), baseados na experiÃncia real de uma instituiÃÃo financeira, cujo objetivo à atingir uma taxa de aprovaÃÃo de crÃdito tal que a receita mÃdia depois das perdas de emprÃstimos seja maximizada. / In the financial intermediation, banks focus on its main activity, allocating resources from clients with surplus to deficit clients. The uncertainty related to the characteristics or payment capacity of the clients establishes the risk and the need to search for new alternatives to protect the institutions from potential losses, which may reflect on lower profits. Besides the subjective issue of credit analysts, the use of quantitative models, based on statistical, mathematical or econometric practices are becoming an important tool to support credit managers on the decision making process. There are several models of risk evaluation, which are adopted by financial institutions such as the credit scoring and the behavioral scoring models. The credit-scoring model has been widely used, especially on the concession of individual credit. The credit scoring model uses techniques such as discriminant analysis, mathematic programming, econometrics, neural networks, among others, to analyze particular characteristics of individuals where it establishes a metric separation of good and bad payers, therefore providing different nonpayment status to each. This present dissertation has the main objective of analyzing the determinants of nonpayment status (individuals), using an econometric approach based on the Logit model. The model utilized was a model for approval of credit in the opening from the bill shackle, starting from a study with 308 observations (physical registers Persons), based in the real experience of a financial institution, whose objective is he reach a credit approval rate such that the medium prescription after the losses of loans be maximized.
46

Técnicas de classificação aplicadas a credit scoring: revisão sistemática e comparação / Classification techniques applied to credit scoring: a systematic review and comparison

Renato Frazzato Viana 18 December 2015 (has links)
With the growing demand for credit, and the growing volume of bank transactions and stored data, it is very important to assess the risk of each credit operation. Before granting credit to a customer, the lender must estimate the probability that the customer will not repay the loan; credit scoring techniques are applied to this task, and good risk-evaluation tools help institutions avoid losses. This work presents a literature review of credit scoring, aiming to give an overview of the many techniques employed in the field. In addition, a computational simulation study is carried out to compare the behavior of several of the techniques presented.
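To make the comparison concrete, a rough sketch of the kind of study described above: several classifiers evaluated on the same data by cross-validated AUC. The data are synthetic and the model list is illustrative; the work's own datasets and techniques may differ.

```python
# Illustrative comparison of classifiers for a credit-scoring-like task on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)      # imbalanced classes, as in default data

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest":       RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF)":           SVC(probability=True, random_state=0),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```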
47

Drug design in silico : criblage virtuel de protéines à visée thérapeutique / Drug design in silico: virtual screening of proteins of therapeutic interest

Elkaïm, Judith 20 December 2011 (has links)
The process of drug discovery is long and tedious, and hit rates are relatively low. Identifying candidates through experimental testing is expensive and requires extensive knowledge of the target protein's mechanisms of action in order to design efficient assays. Virtual screening can considerably accelerate the process by quickly evaluating libraries of thousands of compounds to determine which are the most likely to bind a target, and the field has produced some success stories over the last few years. The objectives of this work were, first, to compare common tools and strategies for structure-based virtual screening and, second, to apply those tools to therapeutic target proteins, notably ones implicated in cancer. To evaluate the docking and scoring programs available, the protein kinase GSK3 and a test set of known ligands were used as a model for methodological studies. In particular, the influence of protein flexibility was explored by docking against relaxed structures of the receptor or by allowing torsions on the side chains of residues in the binding site. Studies of the automatic generation of 3D ligand structures and of consensus scoring also assessed the usefulness and relevance of these tools in a virtual screening workflow. A virtual screen of the human protein Pontin, an ATPase implicated in tumor cell growth for which no inhibitors were known, allowed compounds from commercial databases to be prioritized. These compounds were tested in an enzymatic assay through a collaboration, and four of them proved able to inhibit the ATPase activity of Pontin. Screening of in-house databases of ligands designed and synthesized in the team also provided an original inhibitor. In contrast, a study of human sPLA2-X, a phospholipase whose catalytic activity depends on a Ca2+ ion located in the active site, revealed the limits of our docking tools, which could not handle the metal ion, and highlighted the need for additional tools.
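As a rough illustration of the consensus-scoring idea mentioned above, one common approach is rank-by-rank aggregation of several scoring functions; the scores and compound names below are invented and not taken from the thesis.

```python
# Hypothetical rank-based consensus scoring over three scoring functions.
# More negative score = better predicted binding; lower consensus rank = better compound.
from scipy.stats import rankdata

scores = {                      # one score per compound, per scoring function (invented)
    "score_fn_A": [-9.1, -7.4, -8.8, -6.2],
    "score_fn_B": [-55.0, -60.2, -49.9, -40.1],
    "score_fn_C": [-10.3, -11.0, -9.0, -7.7],
}
compounds = ["cpd1", "cpd2", "cpd3", "cpd4"]

# Rank each scoring function separately (rank 1 = most negative = best),
# then average the ranks to obtain the consensus ranking.
ranks = [rankdata(vals) for vals in scores.values()]
consensus = [sum(r[i] for r in ranks) / len(ranks) for i in range(len(compounds))]

for cpd, c in sorted(zip(compounds, consensus), key=lambda t: t[1]):
    print(cpd, c)
```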
48

Scoring pour le risque de crédit : variable réponse polytomique, sélection de variables, réduction de la dimension, applications / Scoring for credit risk : polytomous response variable, variable selection, dimension reduction, applications

Vital, Clément 11 July 2016 (has links)
The objective of this thesis was to explore scoring as it is used in banking, and more precisely to study how to control credit risk. The diversification and globalization of banking activity in the second half of the twentieth century led to regulations requiring banks to hold capital covering the risk they take. These regulations also require them to model various risk indicators, among them the probability of default, i.e. the probability that a client will find himself unable to repay his debt. Predicting this probability requires defining a variable of interest called the risk criterion, which distinguishes "good payers" from "bad payers". In a more formal statistical setting, this means modeling a variable with values in {0,1} from a set of explanatory variables, a problem treated in practice as a scoring problem. Scoring consists in defining functions, called scoring functions, that condense the information contained in the explanatory variables into a real-valued score. Such a function should induce the same ordering of the individuals as the posterior probability of the model: individuals with a high probability of being "good" receive a high score, and those with a high probability of being "bad" (and thus a high risk for the bank) receive a low score. Performance criteria such as the ROC curve and the AUC quantify how relevant the ordering produced by a scoring function is. The reference method for obtaining scoring functions is logistic regression, which we present here. A major issue in credit scoring is variable selection: banks have large databases gathering all the sociodemographic and behavioral information they hold on their clients, and not all of these variables help explain the risk criterion. To address this we considered the Lasso, which constrains the coefficients of the model so that the least significant ones are set to zero; we applied it to linear and logistic regression, together with an extension called the Group Lasso, which selects groups of explanatory variables rather than individual variables. We then considered the case where the response variable is no longer binary but polytomous, i.e. has more than two possible response levels. The first step was to extend the definition of scoring given for the binary case to the polytomous case. We then presented several regression methods adapted to this new setting: a generalization of binary logistic regression, semi-parametric methods, and an application of the Lasso principle to polytomous logistic regression. Finally, the last chapter applies some of the methods discussed in the manuscript to real datasets, confronting them with the actual needs of the company.
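For concreteness, the Lasso and Group Lasso penalties mentioned above can be written in their standard form for logistic regression; the notation here is generic background, not the thesis's own.

```latex
% L1-penalized (Lasso) logistic regression: maximize the penalized log-likelihood,
% with lambda >= 0 controlling how many coefficients are shrunk exactly to zero.
\[
  \hat{\beta} \;=\; \arg\max_{\beta_0,\,\beta}\;
  \sum_{i=1}^{n} \Bigl[\, y_i\bigl(\beta_0 + x_i^{\top}\beta\bigr)
      - \log\bigl(1 + e^{\beta_0 + x_i^{\top}\beta}\bigr) \Bigr]
  \;-\; \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert .
\]
% The Group Lasso replaces the L1 penalty by a sum of Euclidean norms over
% predefined groups g of coefficients, selecting whole groups of variables:
\[
  \lambda \sum_{g=1}^{G} \sqrt{p_g}\,\bigl\lVert \beta_{g} \bigr\rVert_{2}.
\]
```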
49

PAS positivity of erythroid precursor cells is associated with a poor prognosis in newly diagnosed myelodysplastic syndrome patients / 新たに診断された骨髄異形成症候群患者のPAS陽性赤芽球は不良な予後に関連する

Masuda, Kenta 23 July 2018 (has links)
Kyoto University / 0048 / New system, course doctorate / Doctor of Human Health Sciences / Degree No. 甲第21305号 / 人健博第61号 / 新制||人健||5 (University Library) / Kyoto University Graduate School of Medicine, Human Health Sciences / Examining committee: Professor 足立 壯一 (chair), Professor 藤井 康友, Professor 羽賀 博典 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Human Health Sciences / Kyoto University / DFAM
50

Adaptation des techniques actuelles de scoring aux besoins d'une institution de crédit : le CFCAL-Banque / Adaptation of current scoring techniques to the needs of a credit institution : the Crédit Foncier et Communal d'Alsace et de Lorraine (CFCAL-banque)

Kouassi, Komlan Prosper 26 July 2013 (has links)
In the exercise of their functions, financial institutions face a variety of risks, among them credit risk, market risk and operational risk. These risks stem not only from the nature of the institutions' activities but also from external factors, and the instability of those factors makes the institutions vulnerable to financial risks that they must, for their survival, be able to identify, analyze, quantify and manage appropriately. Among these risks, credit risk is the one banks fear most, given its capacity to trigger a systemic crisis. The probability that an individual moves from a riskless state to a risky state is therefore central to many economic questions; in a credit institution, it takes the form of the probability that a borrower moves from being a "good risk" to a "bad risk". To quantify it, credit institutions increasingly rely on credit scoring models. This thesis focuses on current credit scoring techniques tailored to the needs of a credit institution, the CFCAL-banque, which specializes in mortgage-backed loans. We present in particular two nonparametric models (SVM and GAM) and compare their classification performance with that of the logit model traditionally used in banks. Our results show that SVMs perform better if one looks only at overall predictive accuracy; they nevertheless exhibit lower sensitivities than the logit and GAM models. In other words, they predict defaulting borrowers less well. In the current state of our research we recommend the GAM models, which admittedly have lower overall predictive accuracy than SVMs but give more balanced sensitivities, specificities and prediction performance. While not exhaustive on scoring techniques for credit risk management, this thesis, by highlighting targeted credit scoring models, applying them to real mortgage credit data, and confronting them through their classification performance, makes an empirical and methodological contribution to research on credit scoring models.
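The sensitivity/specificity trade-off discussed above can be made concrete with a small sketch; the labels and predictions below are invented and only illustrate why overall accuracy can hide poor sensitivity on the rare defaulting class.

```python
# Hypothetical illustration: accuracy vs. sensitivity and specificity for an
# imbalanced default-prediction task.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # 1 = defaulting borrower (rare class)
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]   # invented model output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)   # 0.9 -- looks good overall
sensitivity = tp / (tp + fn)                    # 0.5 -- half the defaulters are missed
specificity = tn / (tn + fp)                    # 1.0

print(accuracy, sensitivity, specificity)
```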
