21 |
How Can we Derive Consensus Among Various Rankings of Marketing Journals? Theußl, Stefan; Reutterer, Thomas; Hornik, Kurt. 15 October 2010
The identification of high-quality journals often serves as a basis for assessing research contributions. In this context, rankings have become an increasingly popular vehicle for deciding on incentives for researchers, promotions, tenure, or even library budgets. These rankings are typically based on the judgments of peers or domain experts, or on scientometric methods (e.g., citation frequencies, acceptance rates). Depending on which of these approaches, or which combination of them, is followed, the results diverge to a greater or lesser extent. This paper addresses the issue of how to construct suitable aggregates of (subsets of) these rankings. We present an optimization-based consensus ranking approach and apply the proposed method to a subset of marketing-related journals from the Harzing Journal Quality List. Our results show that even though the journals are not uniformly ranked, it is possible to derive a consensus ranking with considerable agreement among the individual rankings. In addition, we explore regional differences in consensus rankings. / Series: Research Report Series / Department of Statistics and Mathematics
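To make the aggregation idea concrete, here is a minimal sketch of one optimization-based consensus ranking, Kemeny aggregation by exhaustive search over candidate orderings; the paper's exact formulation is not reproduced here, and the journal abbreviations and expert rankings below are hypothetical.

```python
# A minimal sketch of Kemeny consensus ranking: find the ordering that
# minimizes total Kendall (pairwise-disagreement) distance to all input
# rankings. Exhaustive search is feasible only for a few items.
from itertools import permutations

def kendall_distance(r1, r2):
    """Count pairwise disagreements between two rankings (lists of items, best first)."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    items = list(pos1)
    return sum(
        1
        for i in range(len(items))
        for j in range(i + 1, len(items))
        if (pos1[items[i]] - pos1[items[j]]) * (pos2[items[i]] - pos2[items[j]]) < 0
    )

def kemeny_consensus(rankings):
    """Return the candidate ordering with minimal total distance to all rankings."""
    items = rankings[0]
    return min(
        permutations(items),
        key=lambda cand: sum(kendall_distance(list(cand), r) for r in rankings),
    )

# Hypothetical expert rankings of four journals, best first.
rankings = [
    ["JM", "JMR", "MktSci", "IJRM"],
    ["JMR", "JM", "MktSci", "IJRM"],
    ["JM", "MktSci", "JMR", "IJRM"],
]
print(kemeny_consensus(rankings))
```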
|
22 |
A data envelopment analysis model for evaluating the efficiency, representativeness, and prestige of the journals in the SCImago and Qualis/CAPES databases, by area. BEZERRA, Bruna Rafaella Sales Claudino. 26 February 2016
Previous issue date: 2016-02-26 / CAPES / The assessment of the quality of journal databases is extremely important to education and research systems, since it supports actions that result in better-qualified people and institutions able to meet the demands of the public and private sectors. The most common lines of action are the evaluation of graduate programs and investment in high-level human-resource development in the country and abroad, among others. The author proposes a unified methodology for classifying the efficiency of journals from several areas of knowledge, applying data envelopment analysis (DEA) to the journals in the SCImago database. Another contribution of this work is the proposal of three indicators: a representativeness indicator (R); a prestige indicator for Brazilian publications per area (P); and an indicator of the association between journal efficiency and the Qualis/CAPES ranking (E). These indicators were used to compare the Qualis/CAPES database with the international SCImago database. Our analysis shows that, in general, the representativeness of the Brazilian publications evaluated in Qualis/CAPES is low relative to the number of journals in the SCImago database. On the other hand, the mean efficiency of the journals evaluated in Qualis/CAPES is, in most areas, higher than the mean efficiency of the corresponding area in the SCImago database, suggesting that Brazilian researchers in those areas are selective with respect to journal efficiency. Finally, the analysis of the E indicator suggests that there is no association between the Qualis/CAPES journal ranking and journal efficiency in terms of output per cost.
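As an illustration of the DEA machinery the thesis builds on, here is a minimal sketch of the input-oriented CCR multiplier model solved as a linear program, assuming SciPy is available; the thesis's exact DEA formulation may differ, and the journal input/output data below are hypothetical.

```python
# A minimal sketch of CCR (input-oriented, multiplier form) DEA:
# maximize weighted outputs of unit o, subject to its weighted inputs
# equaling 1 and no unit exceeding efficiency 1 under the same weights.
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Efficiency of unit o, given inputs X (n_units x n_in) and outputs Y (n_units x n_out)."""
    n, n_in = X.shape
    n_out = Y.shape[1]
    # Decision variables: output weights u (n_out), then input weights v (n_in).
    c = np.concatenate([-Y[o], np.zeros(n_in)])               # minimize -u.y_o
    A_ub = np.hstack([Y, -X])                                 # u.y_j - v.x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(n_out), X[o]]).reshape(1, -1)  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_out + n_in))
    return -res.fun

# Hypothetical journals: input = publication cost index; outputs = citations, h-index.
X = np.array([[1.0], [2.0], [1.5]])
Y = np.array([[100, 20], [150, 25], [90, 15]])
for o in range(len(X)):
    print(f"journal {o}: efficiency = {dea_ccr_efficiency(X, Y, o):.3f}")
```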
|
23 |
Methodological and analytical considerations on ranking probabilities in network meta-analysis: Evaluating comparative effectiveness and safety of interventions. Daly, Caitlin Helen. January 2020
Network meta-analysis (NMA) synthesizes all available direct (head-to-head) and indirect evidence on the comparative effectiveness of at least three treatments and provides coherent estimates of their relative effects. Ranking probabilities are commonly used to summarize these estimates and provide comparative rankings of treatments. However, the reliability of ranking probabilities as summary measures has not been formally established and treatments are often ranked for each outcome separately. This thesis aims to address methodological gaps and limitations in current literature by providing alternative methods for evaluating the robustness of treatment ranks, establishing comparative rankings, and integrating ranking probabilities across multiple outcomes. These novel tools, addressing three specific objectives, are developed in three papers.
The first paper presents a conceptual framework for quantifying the robustness of treatment ranks and for elucidating potential sources of lack of robustness. Cohen's kappa is proposed for quantifying the agreement between two sets of ranks based on NMAs of the full data and of a subset of the data. A leave-one-study-out strategy was used to illustrate the framework with empirical data from published NMAs, where ranks based on the surface under the cumulative ranking curve (SUCRA) were considered. Recommendations for using this strategy to evaluate sensitivity or robustness to concerning evidence are given.
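The agreement check in the first paper can be illustrated with a minimal sketch: unweighted Cohen's kappa between the treatment ranks produced by an NMA of the full data and by an NMA with one study left out. The thesis may use a weighted variant; the ranks below are hypothetical.

```python
# A minimal sketch of Cohen's kappa on two sets of integer treatment ranks:
# observed agreement corrected for agreement expected from the marginals.
from collections import Counter

def cohens_kappa(ranks_full, ranks_subset):
    n = len(ranks_full)
    observed = sum(a == b for a, b in zip(ranks_full, ranks_subset)) / n
    f1, f2 = Counter(ranks_full), Counter(ranks_subset)
    expected = sum(f1[k] * f2.get(k, 0) for k in f1) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical SUCRA-based ranks for five treatments.
full = [1, 2, 3, 4, 5]          # ranks from the full-data NMA
loo  = [1, 3, 2, 4, 5]          # ranks after leaving one study out
print(cohens_kappa(full, loo))  # 0.5 with these hypothetical ranks
```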
When two or more cumulative ranking curves cross, treatments with large probabilities of ranking the best, second best, third best, etc. may rank worse than treatments with smaller corresponding probabilities based on SUCRA. This limitation of SUCRA is addressed in the second paper through the proposal of partial SUCRA (pSUCRA) as an alternative measure for ranking treatments. pSUCRA is adopted from the partial area under the receiver operating characteristic curve in diagnostic medicine and is derived to summarize relevant regions of the cumulative ranking curve.
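For intuition, here is a minimal sketch of SUCRA computed from a matrix of ranking probabilities, together with a partial-area variant restricted to the top ranks in the spirit of the pSUCRA proposal; the thesis's exact pSUCRA definition and normalization are not reproduced here, and the probabilities below are hypothetical.

```python
# A minimal sketch of SUCRA and a partial-area variant over the top ranks.
import numpy as np

def sucra(rank_probs):
    """rank_probs: (treatments x ranks) matrix, entry (j, k) = P(treatment j has rank k+1).
    SUCRA_j = mean of the cumulative ranking curve over the first a-1 ranks."""
    cum = np.cumsum(rank_probs, axis=1)      # P(rank k or better)
    a = rank_probs.shape[1]
    return cum[:, : a - 1].sum(axis=1) / (a - 1)

def partial_sucra(rank_probs, top=2):
    """Area under the cumulative ranking curve over the first `top` ranks only,
    normalized to [0, 1] (an assumed normalization)."""
    cum = np.cumsum(rank_probs, axis=1)
    return cum[:, :top].sum(axis=1) / top

# Hypothetical ranking probabilities for three treatments.
P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
print(sucra(P), partial_sucra(P, top=1))
```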
Knowledge users are often faced with the challenge of making sense of large volumes of NMA results presented across multiple outcomes. This may be further complicated if the comparative rankings on each outcome contradict each other, leading to subjective final decisions. The third paper addresses this limitation through a comprehensive methodological framework for integrating treatments’ ranking probabilities across multiple outcomes. The framework relies on the area inside spie charts representing treatments’ performances on all outcomes, while also incorporating the outcomes’ relative importance. This approach not only provides an objective measure of the comparative ranking of treatments across multiple outcomes, but also allows graphical presentation of the results, thereby facilitating straightforward interpretation.
All contributions in this thesis provide objective means to improve the use of comparative treatment rankings in NMA. Further extensive evaluations of these tools are required to assess their validity in empirical and simulated networks of different size and sparseness. / Thesis / Doctor of Philosophy (PhD) / Decisions on how to best treat a patient should be informed by all relevant evidence comparing the benefits and harms of available options. Network meta-analysis (NMA) is a statistical method for combining evidence on at least three treatments and produces a coherent set of results. Nevertheless, NMA results are typically presented separately for each health outcome (e.g., length of hospital stay, mortality) and the volume of results can be overwhelming to a knowledge user. Moreover, the results can be contradictory across multiple outcomes. Statistics that facilitate the ranking of treatments may aid in easing this interpretative burden while limiting subjectivity. This thesis aims to address methodological gaps and limitations in current ranking approaches by providing alternative methods for evaluating the robustness of treatment ranks, establishing comparative rankings, and integrating ranking probabilities across multiple outcomes. These
contributions provide objective means to improve the use of comparative treatment rankings in NMA.
|
24 |
A COMPONENT RANKING FRAMEWORK FOR MORE RELIABLE SOFTWARE. Chaudhari, Dhyanesh. 10 September 2013
Software components are meant to be reusable and flexible by design. These characteristics, among others, continue to attract developers to adopt a component (typically designed elsewhere) into their systems. However, software components are also vulnerable to reliability and security problems due to the existence of non-obvious faults. We believe that a systematic approach to detecting the failures of a component and prioritizing components by those failures can help developers decide on appropriate solutions to improve reliability. In this thesis, we present a framework that helps developers detect and rank component failures systematically so that more reliable software can be achieved. The proposed framework monitors critical components within an instrumented system, detects failures based on specifications, and uses the failure data together with input from developers to rank the components. This information lets developers decide whether reliability can be improved by trivial code modification or requires advanced reliability techniques. A prototype is designed along with a number of failure scenarios to detect specific failure types within a component. Four major failure types (value, timing, commission, and omission) are detected and used to rank software components. We conducted an experimental evaluation using two subject systems to assess the effectiveness of the proposed framework and to measure its performance overhead. Our experimental results show that the approach can benefit system developers by prioritizing components for effective maintenance with minimal overhead. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2013-09-09 23:08:02.035
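To make the ranking step concrete, here is a minimal sketch that scores components by weighted counts of the four detected failure types; the framework's actual scoring scheme is not reproduced here, and the weights and component data below are assumptions for illustration.

```python
# A minimal sketch: rank components by a weighted sum of observed failures,
# with assumed per-type weights for value, timing, commission, omission.
from dataclasses import dataclass, field

WEIGHTS = {"value": 3.0, "timing": 2.0, "commission": 2.5, "omission": 1.5}

@dataclass
class Component:
    name: str
    failures: dict = field(default_factory=dict)  # failure type -> observed count

    def score(self):
        return sum(WEIGHTS[t] * n for t, n in self.failures.items())

components = [
    Component("parser", {"value": 4, "omission": 1}),
    Component("scheduler", {"timing": 3}),
    Component("cache", {"commission": 1, "value": 1}),
]
# Rank components for maintenance attention, worst offenders first.
for c in sorted(components, key=Component.score, reverse=True):
    print(c.name, c.score())
```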
|
25 |
Variable selection for general transformation models. / CUHK electronic theses & dissertations collection. January 2011
General transformation models are a class of semiparametric survival models. They generalize simple transformation models, offering more flexibility for modeling data arising in statistical practice, and include many popular survival models as special cases, e.g., proportional hazards Cox regression models, proportional odds models, generalized probit models, frailty survival models, and heteroscedastic hazard regression models. Although the maximum marginal likelihood estimate of the parameters in general transformation models with interval-censored data performs very well, its large-sample properties remain open. In this thesis, we consider this problem and use a discretization technique to establish the large-sample properties of maximum marginal likelihood estimates with interval-censored data. / In general, many covariates are collected into a model to reduce possible model bias, yielding a high-dimensional regression model in which some non-significant variables may also be included. One of the tasks in building an efficient survival model is therefore to select the significant variables. In this thesis, we focus on variable selection for general transformation models with ranking data, right-censored data, and interval-censored data. Ranking data are widely seen in epidemiological studies, population pharmacokinetics, and economics. Right-censored data are the most common data in clinical trials. Interval-censored data are another common data type in medical, financial, epidemiological, demographic, and sociological studies. For example, a patient visits a doctor on a prespecified schedule; if the event of interest had not occurred by the previous visit but has occurred by the current one, the exact occurrence time is censored within the interval bracketed by the two consecutive visit dates. Based on a rank-based penalized log-marginal-likelihood approach, we propose a uniform variable selection procedure for all three types of data mentioned above. In the penalized marginal likelihood function, we consider non-concave and adaptive LASSO (ALASSO) penalties. For the non-concave penalties, we adopt the HARD thresholding, SCAD, and LASSO penalties. ALASSO is an extended version of LASSO; its key feature is that it assigns weights to effects adaptively according to the importance of the corresponding covariates, and it has therefore received much attention recently. By incorporating a Markov chain Monte Carlo stochastic approximation (MCMC-SA) algorithm, we also propose a uniform algorithm to find the rank-based penalized maximum marginal likelihood estimates. Based on a numerical approximation of the marginal likelihood function, we propose two evaluation criteria---approximated GCV and BIC---to select proper tuning parameters. Using this procedure, we can not only select the important variables but also estimate the corresponding effects simultaneously. An advantage of the proposed procedure is that it is baseline-free and censoring-distribution-free. Under some regularity conditions and with proper penalties, we establish the √n-consistency and oracle properties of the penalized maximum marginal likelihood estimates. We illustrate the proposed procedure with simulation studies and real data examples. Finally, we extend the procedure to the analysis of stratified survival data.
/ Keywords: General transformation models; Marginal likelihood; Ranking data; Right censored data; Interval censored data; Variable selection; HARD; SCAD; LASSO; ALASSO; Consistency; Oracle. / Li, Jianbo. / Adviser: Minggao Gu. / Source: Dissertation Abstracts International, Volume: 73-06, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 104-111). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
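Among the penalties listed above, SCAD has a simple closed form; below is a minimal sketch of the SCAD penalty for a single coefficient, following Fan and Li's formulation with the conventional default a = 3.7. The thesis applies such penalties inside a marginal likelihood, which is not reproduced here.

```python
# A minimal sketch of the SCAD (smoothly clipped absolute deviation)
# penalty: LASSO-like near zero, quadratic transition, then constant,
# which reduces bias for large coefficients.
def scad_penalty(theta, lam, a=3.7):
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1))
    return (a + 1) * lam**2 / 2

# Small coefficients are penalized like LASSO; large ones are capped.
for theta in (0.1, 0.5, 2.0, 5.0):
    print(theta, round(scad_penalty(theta, lam=0.5), 4))
```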
|
26 |
Modelling and analysis of ranking data with misclassification. January 2007
Chan, Ho Wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (leaves 56). / Abstracts in English and Chinese. / Contents: Abstract; Acknowledgement; Chapter 1: Introduction; Chapter 2: Model; Chapter 3: Implementation by Mx (Examples 1 and 2); Chapter 4: Covariance structure analysis; Chapter 5: Simulation (Simulations 1 and 2); Chapter 6: Discussion; Appendix A: Mx input script for ranking data with p = 4; Appendix B: Selection matrices for ranking data with p = 4; Appendix C: Mx input script for ranking data with p = 3; Appendix D: Mx input script for p = 4 with covariance structure; References.
|
27 |
Top-k ranking with uncertain data. Wang, Chonghai. 06 1900
The goal of top-k ranking is to rank individuals so that the best k of them can be determined. Depending on the application domain, an individual can be a person, a product, an event, or just a collection of data or information for which an ordering makes sense.
In the context of databases, top-k ranking has been studied in two distinct directions, depending on whether the stored information is certain or uncertain. In the former case, past research has focused on efficient query processing. In the latter case, a number of semantics based on possible worlds have been proposed and computational mechanisms investigated for what are called uncertain or probabilistic databases, where a tuple is associated with a membership probability indicating the level of confidence in the stored information.
In this thesis, we study top-k ranking with uncertain data in two general areas. The first is pruning for the computation of top-k tuples in a probabilistic database. We investigate the theoretical basis and practical means of pruning for the recently proposed unifying framework based on parameterized ranking functions; as such, our results are applicable to a wide range of ranking functions. We show experimentally that pruning can generate orders-of-magnitude performance gains. In the second area of our investigation, we study the problem of top-k ranking for objects with multiple attributes whose values are modeled by probability distributions and constraints. We formulate a theory of top-k ranking for objects by characterizing what constitutes the strength of an object, and show that a number of previous proposals for top-k ranking are special cases of our theory. We carry out a limited study of the computation of top-k objects under our theory. We reveal a close connection between top-k ranking in this context and high-dimensional geometry studied in mathematics: in particular, the problem of computing the volumes of high-dimensional polyhedra defined by linear inequalities is a special case of top-k ranking of objects, and as such, the algorithms formulated for the former can be employed for the latter under the same conditions.
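As a toy illustration of ranking tuples that carry membership probabilities, here is a minimal sketch that ranks by expected score; this is deliberately simpler than the parameterized ranking functions and possible-worlds semantics studied in the thesis, and all tuple data below are hypothetical.

```python
# A minimal sketch of one simple top-k semantics for uncertain tuples:
# rank by expected score (query score x membership probability).
from dataclasses import dataclass

@dataclass
class UncertainTuple:
    id: str
    score: float   # ranking score from the query
    prob: float    # membership probability of the tuple

def top_k(tuples, k):
    return sorted(tuples, key=lambda t: t.score * t.prob, reverse=True)[:k]

data = [UncertainTuple("t1", 0.9, 0.5),
        UncertainTuple("t2", 0.7, 0.9),
        UncertainTuple("t3", 0.8, 0.6)]
print([t.id for t in top_k(data, 2)])   # ['t2', 't3'] by expected score
```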
|
28 |
Model-based decision trees for ranking data. Lee, Hong (李匡). January 2010
published_or_final_version / Statistics and Actuarial Science / Doctoral / Doctor of Philosophy
|
29 |
Analysis of ranking data with covariates. Lam, Hon-kwan (林漢坤). January 1998
published_or_final_version / Statistics / Master / Master of Philosophy
|
30 |
A robustness study of Gupta's subset selection procedure. Petit, Timothy Mark. 05 1900
No description available.
|