131

A Study of Stravinsky's Use of the Trombone in Selected Works

Drew, George Ronald 08 1900 (has links)
The primary objectives of this paper are to examine Stravinsky's use of the trombone and to study the different methods of scoring for the instrument that he employed in his works. To make this discussion more meaningful, the first chapter contains a brief history of the use and development of the instrument from the fifteenth century up to 1900. In the second chapter Stravinsky's major works are divided into three periods, and each is discussed individually. The general characteristics common to the related major works are pointed out to provide an understanding of each period. The remainder of the paper is devoted to a study of Stravinsky's trombone scoring in three of his major works, one from each period. A concluding chapter summarizes his writing for the trombone as exemplified by these three works and surveys the scoring for trombone in some of his other works.
132

Brain Dysfunction Indication on the Bender-Gestalt Test: a Validation of the Embree/Butler Scoring System

Henderson, J. Louise 12 1900 (has links)
The Embree/Butler scoring system served as the criterion for ascertaining brain dysfunction on the protocols of 100 subjects: 50 had been diagnosed by health professionals as having brain dysfunction, and 50 had been diagnosed as having no brain dysfunction. In comparing the hospital's diagnoses with those of the Embree/Butler method, the data strongly supported the hypothesis that the Embree/Butler scoring system effectively discriminated (chi-square = 77.99, p < .01) between those with organic brain syndrome (or cerebral dysfunction) and those with a psychiatric classification. A point-biserial correlation was used to examine the relationship between diagnosis and score. A cutoff score above 14 produced the fewest false-negative and false-positive evaluations.
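A minimal sketch of the kind of validation described above — a chi-square test of predicted versus actual diagnosis and a point-biserial correlation between score and diagnosis — assuming entirely hypothetical scores rather than the thesis data:

```python
# Hedged sketch: validating a dichotomous scoring system against known diagnoses.
# The scores and group labels below are hypothetical, not the thesis data.
import numpy as np
from scipy.stats import chi2_contingency, pointbiserialr

rng = np.random.default_rng(0)
# 50 subjects diagnosed with brain dysfunction, 50 without (1 = dysfunction).
diagnosis = np.repeat([1, 0], 50)
# Hypothetical Embree/Butler scores; the dysfunction group tends to score higher.
scores = np.concatenate([rng.normal(20, 5, 50), rng.normal(9, 4, 50)])

cutoff = 14  # classify "dysfunction" when the score exceeds the cutoff
predicted = (scores > cutoff).astype(int)

# Chi-square test on the 2x2 table of predicted vs. actual diagnosis.
table = np.array([[np.sum((predicted == 1) & (diagnosis == 1)),
                   np.sum((predicted == 1) & (diagnosis == 0))],
                  [np.sum((predicted == 0) & (diagnosis == 1)),
                   np.sum((predicted == 0) & (diagnosis == 0))]])
chi2, p, _, _ = chi2_contingency(table)

# Point-biserial correlation between the continuous score and the diagnosis.
r_pb, p_pb = pointbiserialr(diagnosis, scores)

false_neg = np.sum((predicted == 0) & (diagnosis == 1))
false_pos = np.sum((predicted == 1) & (diagnosis == 0))
print(f"chi2={chi2:.2f} (p={p:.4f}), r_pb={r_pb:.2f}, FN={false_neg}, FP={false_pos}")
```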
133

A novel knowledge discovery based approach for supplier risk scoring with application in the HVAC industry

Chuddher, Bilal Akbar January 2015 (has links)
This research has led to a novel methodology for the assessment and quantification of supply risks in the supply chain. The research has built on advanced Knowledge Discovery techniques and has resulted in a software implementation of the methodology. The methodology developed and presented here resembles well-known consumer credit scoring methods, as it leads to a similar metric, or score, for assessing a supplier's reliability and the risk of conducting business with that supplier. However, the focus is on a wide range of operational metrics rather than just the financial ones that credit scoring techniques typically use. The core of the methodology is the application of Knowledge Discovery techniques to extract the likelihood of possible risks from a range of available datasets. In combination with cross-impact analysis, those datasets are examined to establish the inter-relationships and mutual connections among the factors that are likely to contribute to the risks associated with particular suppliers. This approach is called conjugation analysis. The resulting parameters become the inputs to a logistic regression, which leads to a risk scoring model. The outcome of the process is a standardized risk score, analogous to the well-known consumer risk scoring model better known as the FICO score. The proposed methodology has been applied to an air conditioning manufacturing company. Two models have been developed. The first identifies supply risks based on data about purchase orders and selected risk factors; with this model the likelihoods of delivery failures, quality failures and cost failures are obtained. The second model builds on the first but also uses actual data about supplier performance to identify the risks of conducting business with particular suppliers; its aim is to provide quantitative measures of an individual supplier's risk level. The supplier risk scoring model was tested on data acquired from the company. It achieved 86.2% accuracy, and the area under the curve (AUC) was 0.863, well above the 0.5 threshold required for model validity, indicating the model's validity and reliability on future data. The numerical studies conducted with real-life datasets have demonstrated the effectiveness of the proposed methodology and system as well as its potential for industrial adoption.
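The scoring step can be illustrated with a short, hedged sketch: a logistic regression over hypothetical supplier metrics whose predicted failure probability is rescaled to a FICO-like score and evaluated by AUC. The feature names, data and scaling are assumptions for illustration, not the author's implementation:

```python
# Hedged sketch of a supplier risk score built from a logistic regression, in the
# spirit of the methodology described above. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 500
# Hypothetical per-supplier operational metrics aggregated from purchase orders.
X = np.column_stack([
    rng.normal(5, 2, n),     # average delivery delay (days)
    rng.uniform(0, 0.1, n),  # quality failure rate
    rng.normal(0, 1, n),     # cost variance (standardised)
])
# Hypothetical label: 1 if the supplier caused a delivery/quality/cost failure.
logit = 0.4 * X[:, 0] + 25 * X[:, 1] + 0.8 * X[:, 2] - 3.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
p_fail = model.predict_proba(X)[:, 1]
print("AUC:", round(roc_auc_score(y, p_fail), 3))

# Rescale the failure probability to a FICO-like score (higher = lower risk).
risk_score = 300 + (850 - 300) * (1 - p_fail)
print("example supplier scores:", np.round(risk_score[:5]).astype(int))
```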
134

A Combined Motif Discovery Method

Lu, Daming 06 August 2009 (has links)
A central problem in bioinformatics is to find the binding sites of regulatory motifs. This challenging problem provides a platform for applying a variety of data mining methods. In the work described here, a combined motif discovery method that uses mutual information and Gibbs sampling was developed. A new scoring schema was introduced that incorporates mutual information and joint information content. Simulated tempering was embedded into classic Gibbs sampling to avoid local optima. The method was applied to 18 DNA sequences containing CRP binding sites validated by Stormo, and the results were compared with BioProspector. Based on the results, the new scoring schema overcomes the limitation that the basic PWM model captures only single-position information. Simulated tempering proved to adaptively adjust the search strategy and showed much greater resistance to local optima.
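For context, a minimal sketch of the single-position PWM scoring that the combined method extends, with hypothetical aligned sites; the mutual-information terms, Gibbs sampling and simulated tempering of the thesis are not reproduced here:

```python
# Hedged sketch of PWM-based motif scoring with information content, illustrating the
# kind of single-position scoring that the combined method above builds upon.
import numpy as np

BASES = "ACGT"

def pwm_from_sites(sites, pseudocount=0.5):
    """Position weight matrix (probabilities) from aligned binding sites."""
    L = len(sites[0])
    counts = np.full((4, L), pseudocount)
    for s in sites:
        for j, b in enumerate(s):
            counts[BASES.index(b), j] += 1
    return counts / counts.sum(axis=0)

def information_content(pwm, background=0.25):
    """Total information content (bits) relative to a uniform background."""
    return float(np.sum(pwm * np.log2(pwm / background)))

def score_site(pwm, site, background=0.25):
    """Log-odds score of one candidate site under the PWM."""
    return sum(np.log2(pwm[BASES.index(b), j] / background) for j, b in enumerate(site))

# Hypothetical aligned sites (not the CRP data used in the thesis).
sites = ["TTGACA", "TTGATA", "TTTACA", "CTGACA"]
pwm = pwm_from_sites(sites)
print("IC (bits):", round(information_content(pwm), 2))
print("score of TTGACA:", round(score_site(pwm, "TTGACA"), 2))
```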
135

Testovanie vybraných investičných stratégií / Testing of selected investment strategies

Hrmo, Michal January 2010 (has links)
In my thesis I will try to compare the profitability of investment strategies based on the books of eight famous financial gurus. I will explain the process of selecting stocks for the model portfolios and describe its pitfalls and the ideas hidden behind the strategies. I will evaluate the performance of the model portfolios under current market conditions by observing their development. I will try to explain the trends observed in the stocks' movements not only in terms of the criteria of the tested strategies, but also in terms of important company news that occurred at the time of observation. I will look at the chosen strategies from a short-term point of view; the observation will last several weeks. The outcome of my work should be my own scoring model for finding undervalued stocks, based on the strategies and criteria that prove successful during my observations.
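As an illustration only, a hedged sketch of a simple criteria-based scoring model of this kind, where each stock earns one point per satisfied screening criterion; the metrics and thresholds are hypothetical, not taken from the thesis or the gurus' books:

```python
# Hedged sketch of a criteria-based stock scoring model: one point per satisfied
# screening criterion. Tickers, fundamentals and thresholds are hypothetical.
import pandas as pd

# Hypothetical fundamentals for a few tickers.
stocks = pd.DataFrame({
    "ticker":         ["AAA", "BBB", "CCC"],
    "pe_ratio":       [9.5, 22.0, 14.0],
    "debt_to_equity": [0.4, 1.8, 0.6],
    "dividend_yield": [0.035, 0.0, 0.021],
})

# Screening criteria (assumed for illustration): low P/E, low leverage, pays a dividend.
criteria = {
    "low_pe":       stocks["pe_ratio"] < 15,
    "low_debt":     stocks["debt_to_equity"] < 1.0,
    "has_dividend": stocks["dividend_yield"] > 0.02,
}

stocks["score"] = pd.DataFrame(criteria).sum(axis=1)
print(stocks.sort_values("score", ascending=False)[["ticker", "score"]])
```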
136

Improvement of the software systems development life cycle of the credit scoring process at a financial institution through the application of systems engineering

Meyer, Nadia 11 October 2016 (has links)
A Research Report Submitted to the Faculty of Engineering and the Built Environment in partial fulfilment of the Requirements for the degree of Master of Science in Engineering / The research centred on improving the current software systems development life cycle (SDLC) of the credit scoring process at a financial institution based on systems engineering principles. The research sought ways to improve the current software SDLC in terms of cost, schedule and performance. This paper proposes an improved software SDLC that conforms to the principles of systems engineering. As decisioning has been automated in financial institutions, various processes are developed according to a software SDLC in order to ensure accuracy and validity thereof. This research can be applied to various processes within financial institutions where software development is conducted, verified and tested. A comparative analysis between the current software SDLC and a recommended SDLC was performed. Areas within the current SDLC that did not comply with systems engineering principles were identified. These inefficiencies were found during unit testing, functional testing and regression testing. An SDLC is proposed that conforms to systems engineering principles and is expected to reduce the current SDLC schedule by 20 per cent. Proposed changes include the sequence of processes within the SDLC, increasing test coverage by extracting data from the production environment, filtering and sampling data from the production environment, automating functional testing using mathematical algorithms, and creating a test pack for regression testing which adequately covers the software change. / MT2016
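One of the proposed changes — sampling production data to build an adequate regression-test pack — can be sketched roughly as stratified sampling over hypothetical decision records; the field names and strata are assumptions, not the institution's actual data model:

```python
# Hedged sketch: build a regression-test pack by stratified sampling of production
# records so that the cases affected by a software change are represented.
import pandas as pd

def build_test_pack(production: pd.DataFrame, strata: list[str],
                    per_stratum: int, seed: int = 0) -> pd.DataFrame:
    """Sample up to `per_stratum` records from each combination of stratum values."""
    return (production
            .groupby(strata, group_keys=False)
            .apply(lambda g: g.sample(n=min(per_stratum, len(g)), random_state=seed)))

# Hypothetical production extract of credit-scoring decisions.
production = pd.DataFrame({
    "product":  ["card", "card", "loan", "loan", "loan", "card"],
    "decision": ["accept", "decline", "accept", "accept", "decline", "accept"],
    "score":    [612, 480, 700, 655, 505, 590],
})
test_pack = build_test_pack(production, strata=["product", "decision"], per_stratum=1)
print(test_pack)
```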
137

Métodos híbridos em docagem molecular: implementação, validação e aplicação / Hybrid methods in molecular docking: implementation, validation and application

Muniz, Heloisa dos Santos 13 June 2018 (has links)
Modeling the interactions between macromolecules and ligands still faces several challenges in the computer-aided drug design area. Despite the growth of the field, subjects such as receptor flexibility, scoring functions and solvation are still under intense investigation in the scientific community. In order to analyze the interactions of thousands or millions of complexes, a good balance between the computational cost and the accuracy of the calculation methods in molecular docking programs is essential. LiBELa (Ligand Binding Energy Landscape) is a molecular docking program with a hybrid approach, i.e., it uses both ligand and receptor information during docking. Initially, the steric and electrostatic characteristics of a reference binder (crystallographic, for example) are used in similarity and overlay calculations, yielding a pre-optimized initial conformation of the ligand under test. Then, within the receptor's active site, the interaction energy is minimized using energetic potentials. Four force-field-based scoring functions were tested and optimized, composed of van der Waals and Coulomb potentials and an empirical solvation function called the Stouten-Verkhivker (SV) function. The flexibility of the system was treated by generating conformers that sample the degrees of freedom of ligands described as semi-rigid, and by attenuated potentials that smooth the interaction energy surface, allowing interactions at interatomic distances that would otherwise be repulsive. As a starting point, the methods implemented in LiBELa performed satisfactorily in cross- and self-docking tests, showing that it is an efficient tool for reproducing crystallographic binding modes, equivalent to or even better than the reference programs. Through enrichment tests on the DUD, DUDE and CM-DUD datasets, the dielectric constant and the solvation and attenuation terms were systematically optimized; these tests also allowed a comparison between the scoring functions, including the attenuation and solvation terms. In these tests LiBELa showed improvements of 39% and 15% over a purely receptor-based program (DOCK 6.6), relative to the mean area under the curve on a semi-logarithmic scale for the DUDE and DUD databases, respectively. Although the SV solvation function implemented in LiBELa showed good correlation with experimental data (r = 0.72) and with the Zou GB/SA solvation method implemented in DOCK6 (r = 0.88), it did not show significant correlation with the GB/SA and PB/SA methods implemented in AmberTools. Compared with the other scoring functions tested in LiBELa, the functions with a solvation correction showed worse enrichment, except for some specific targets. Finally, molecular docking experiments using LiBELa were conducted with a β-galactosidase from the GH42 family, whose structure was solved in our group. The results allowed conclusions about how the binding mode affects the preference for disaccharides with distinct glycosidic bonds, consistent with experimental data from kinetic binding assays.
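A rough, hedged sketch of a force-field-style pair term (van der Waals plus Coulomb) with a simple softening factor illustrates the kind of potentials discussed above; the parameters are illustrative and this is not LiBELa's actual implementation:

```python
# Hedged sketch of a force-field-style pair scoring term with optional softening.
import numpy as np

def pair_energy(r, eps=0.1, rmin=3.5, qi=0.3, qj=-0.4, dielectric=4.0, soft=0.0):
    """Interaction energy (kcal/mol-like units) between two atoms at distance r (angstrom).

    soft > 0 smooths the short-range repulsion, allowing close contacts that a
    hard potential would reject (an 'attenuated' potential).
    """
    r_eff = np.sqrt(r**2 + soft)                               # softened distance
    lj = eps * ((rmin / r_eff)**12 - 2 * (rmin / r_eff)**6)    # van der Waals (LJ 12-6)
    coulomb = 332.0 * qi * qj / (dielectric * r_eff)           # Coulomb electrostatics
    return lj + coulomb

for r in (2.0, 3.5, 5.0):
    print(f"r={r:.1f} A  hard={pair_energy(r):8.3f}  soft={pair_energy(r, soft=4.0):8.3f}")
```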
138

Métodos de categorização de variáveis preditoras em modelos de regressão para variáveis binárias / Categorization methods for predictor variables in binary regression models

Silva, Diego Mattozo Bernardes da 13 June 2017 (has links)
Regression models for binary response variables are very common in several areas of knowledge. The model most used in these situations is the logistic regression model, which assumes that the logit of the probability of a certain event is a linear function of the predictor variables. When this assumption is not reasonable, common alternatives are to transform the predictor variables and/or to add quadratic or cubic terms to the model. The problem with this approach is that it hinders the interpretation of the model parameters, and in some areas it is fundamental that they be interpretable. A common alternative, therefore, is to categorize the quantitative covariates. This work proposes two new classes of categorization methods for continuous variables in regression models for binary responses. The first class of methods is univariate and seeks to maximize the association between the response variable and the categorized covariate using measures of association for qualitative variables. The second class of methods is multivariate and incorporates the dependence structure among the covariates through the joint categorization of all predictor variables. To evaluate performance, we applied the proposed classes of methods and four existing categorization methods to three credit scoring databases and two simulated scenarios. The results on the real databases suggest that the proposed univariate class performs better than the existing methods when we compare the predictive power of the logistic regression model. The results on the simulated databases suggest that both proposed classes perform better than the existing methods. Regarding computational performance, the multivariate method is inferior and the univariate method is superior to the existing methods.
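The univariate idea can be sketched as follows, under the assumption of simulated data and Cramér's V as the association measure: candidate quantile binnings of a continuous predictor are compared and the one most associated with the binary response is kept. This is a simplification, not the thesis's exact procedure:

```python
# Hedged sketch of univariate categorization: pick the binning of a continuous
# predictor that maximizes its association (Cramér's V) with a binary response.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x_cat, y):
    table = pd.crosstab(x_cat, y)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k)) if k > 0 else 0.0

rng = np.random.default_rng(1)
x = rng.normal(size=2000)                                                # continuous predictor
y = (rng.uniform(size=2000) < 1 / (1 + np.exp(-1.5 * x))).astype(int)   # binary response

# Compare quantile binnings and keep the one with the strongest association.
best = max(
    ((bins, cramers_v(pd.qcut(x, q=bins, duplicates="drop"), y)) for bins in range(2, 11)),
    key=lambda t: t[1],
)
print(f"best number of categories: {best[0]} (Cramer's V = {best[1]:.3f})")
```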
139

Análise discriminante com mistura de variáveis categóricas e contínuas / Discriminant Analysis with Mixed Categorical and Continuous Data

Sanda, Rene 22 June 1990 (has links)
The purpose of this dissertation is to present, analyze and compare the most established Discriminant Analysis techniques in the presence of mixed categorical and continuous data.
140

Role of timeouts in table tennis examined

Karlsson, Michaela, Sandéhn, Alexandra January 2019 (has links)
The purpose of the present study was to examine the role of timeouts in competitive elite table tennis in relation to psychological momentum (PM). To that end, archival data from top international elite matches (N = 48) was first examined to gather information on when timeouts are most often taken and whether they have any objective influence on subsequent performance (set outcome and, ultimately, match outcome). Secondly, similar archival data for Swedish League matches (N = 36) was examined, and interviews with elite coaches from the highest Swedish league (N = 6) at these matches were carried out to gain further knowledge and understanding of the role and use of timeouts in competitive elite table tennis. Findings showed that timeouts were mostly called following a sequence of three consecutive lost points; that is, coaches used timeouts to break negative PM. However, findings also showed that these timeouts had no objective impact on either set or match outcomes; that is, sets and matches were ultimately lost. Future research examining the subjective coach-player experience surrounding timeouts is needed to comprehend potential 'secondary' purposes when calling timeouts and, subsequently, to fully understand the role of timeouts in table tennis.
