  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

QS Ranking Methodologies

Newman, Janson 19 April 2018 (has links)
Lecture held on 19 April 2018 at the facilities of the Universidad Peruana de Ciencias Aplicadas (UPC), San Isidro campus, Lima, Peru. Event sponsored by Universidad and FIPES. / A lecture on the importance of university rankings: the case of the QS World University Rankings.
2

Ranking for Decision Making: Fairness and Usability

Kuhlman, Caitlin A. 06 May 2020 (has links)
Today, ranking is the de facto way that information is presented to users in automated systems, which are increasingly used for high-stakes decision making. Such ranking algorithms are typically opaque, and users have no control over the ranking process. When complex datasets are distilled into simple rankings, patterns in the data are exploited that may not reflect the user's true preferences, and can even include subtle encodings of historical inequalities. Therefore, it is paramount that the user's preferences and fairness objectives are reflected in the rankings generated. This research addresses concerns around the fairness and usability of ranking algorithms. The dissertation is organized in two parts. Part one investigates the usability of interactive systems for automatic ranking. The aim is to better understand how to capture user knowledge through interaction design, and to empower users to generate personalized rankings. A detailed requirements analysis for interactive ranking systems is conducted. Then alternative preference elicitation techniques are evaluated in a crowdsourced user study. The study reveals surprising ways in which collection interfaces may prompt users to organize more data, thereby requiring minimal effort to create sufficient training data for the underlying machine learning algorithm. Following from these insights, RanKit is presented: a system for personalized ranking that automatically generates rankings based on user-specified preferences among a subset of items. Explanatory features give feedback on the impact of user preferences on the ranking model and the confidence of predictions. A case study demonstrates the utility of this interactive tool. In part two, metrics for evaluating the fairness of rankings are studied in depth, and a new problem of fair ranking by consensus is introduced.
Three group fairness metrics are presented: rank equality, rank calibration, and rank parity, which together cover a broad spectrum of fairness considerations, from proportional representation to error-rate similarity across groups. These metrics are designed using a pairwise evaluation strategy to adapt algorithmic fairness concepts previously applicable only to classification. The metrics are employed in the FARE framework, a novel diagnostic tool for auditing rankings which exposes tradeoffs between different notions of fairness. Next, different ways of measuring a single definition of fairness are evaluated in a comparative study of state-of-the-art statistical parity metrics for ranking. This study identifies a core set of parity metrics which all behave similarly with respect to group advantage, reflecting well an intuitive definition of unfairness. However, the analysis also reveals that under relaxed assumptions about group advantage, different ways of measuring it yield different fairness results. Finally, I introduce a new problem of fair ranking by consensus among multiple decision makers. A family of algorithms is presented that solves this open problem of guaranteeing fairness for protected groups of candidates while still producing a good aggregation of the base rankings. Exact solutions are presented, as well as a method that guarantees fairness with minimal approximation error. Together, this research expands the utility of ranking algorithms to support fair decision making.
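The pairwise evaluation strategy behind group fairness metrics such as rank parity can be illustrated with a short sketch. This is a hypothetical simplification for illustration only, not the FARE implementation: for every pair of items from different groups, check which group's member is ranked higher.

```python
def rank_parity(ranking, groups):
    """Pairwise parity sketch: fraction of mixed-group pairs in which
    the group-0 member is ranked above the group-1 member.

    `ranking` is a list of item ids ordered best-to-worst;
    `groups` maps each item id to 0 or 1.
    """
    wins = total = 0
    for i, a in enumerate(ranking):
        for b in ranking[i + 1:]:          # a is ranked above b
            if groups[a] != groups[b]:     # only mixed-group pairs count
                total += 1
                if groups[a] == 0:
                    wins += 1
    return wins / total if total else 0.5  # 0.5 means perfect parity

# Example: items 0 and 1 in group 0; items 2 and 3 in group 1.
print(rank_parity([0, 2, 1, 3], {0: 0, 1: 0, 2: 1, 3: 1}))  # → 0.75
```

A value far from 0.5 in either direction signals that one group systematically outranks the other, which is the intuition the parity-style metrics in the abstract formalize.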
3

Topics in the statistical aspects of simulation

McDonald, Joshua L. 07 January 2016 (has links)
We apply various variance reduction techniques to the estimation of Asian averages and options, and propose an easy-to-use quasi-Monte Carlo method that can provide significant variance reduction with minimal increase in computational time. We also extend these techniques to estimate higher moments of the Asian averages, and then use the estimated moments to efficiently implement Gram-Charlier based estimators for the probability density functions of Asian averages and options. Finally, we investigate a ranking and selection application that uses post hoc analysis to determine how the circumstances of procedure termination affect the probability of correct selection.
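One classical variance reduction technique for path-dependent estimates like the Asian average is antithetic variates. The sketch below is illustrative only; it is not the quasi-Monte Carlo method of the thesis, and all model parameters are invented. Each Gaussian driving sequence is paired with its mirror image, and the two resulting path averages are averaged together.

```python
import math
import random

def asian_average(z, s0=100.0, r=0.05, sigma=0.2, dt=1 / 12):
    """Arithmetic average of a geometric Brownian motion path
    driven by the standard normal increments in z."""
    s, total = s0, 0.0
    for x in z:
        s *= math.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * x)
        total += s
    return total / len(z)

def estimate(n_paths=2000, n_steps=12, antithetic=False, seed=1):
    """Return (mean, sample variance) of the Asian-average estimator."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_paths):
        z = [rng.gauss(0, 1) for _ in range(n_steps)]
        if antithetic:
            # Pair each path with its mirror image and average the two;
            # the strong negative covariance lowers the estimator variance.
            samples.append(0.5 * (asian_average(z)
                                  + asian_average([-x for x in z])))
        else:
            samples.append(asian_average(z))
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
    return mean, var
```

Because the Asian average is monotone in each normal increment, the antithetic pairing is guaranteed to reduce variance here; the quasi-Monte Carlo approach of the thesis pursues the same goal with low-discrepancy sequences instead.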
4

Development of a scheduler with random choices

Τόλλος, Αθανάσιος 10 March 2014 (has links)
The modern world of networks and the internet demands very high interconnection speeds across networks of every scale: home and local area networks (LANs), campus networks, metropolitan area networks (MANs), wide area networks (WANs), and internet core networks. All of these networks rely heavily on switches and routers to carry network traffic from its origin to its destination across many intermediate networks. At the core of switches and routers lies the scheduler: an algorithm implemented in the hardware of the device that decides how information is forwarded from input to output, after another mechanism has determined the output port. The scheduler's importance is evident from the range of problems it must solve, among them contention among inputs for the same output, input-output matching, minimal delay for traffic in transit, operational stability, throughput maximization, and fairness in serving inputs and outputs. This thesis presents the ROLM (Randomized On-Line Matching) family of scheduling algorithms, which implements randomness efficiently and effectively. Compared with existing competing implementations, which use deterministic rather than randomized decision methods, these algorithms achieve low packet-forwarding delay, and therefore high throughput, as well as good fairness properties. These results stem from the family's core algorithm, Ranking, which computes a maximum input-output matching.
These algorithms randomly select inputs to forward to the outputs they request, a choice that can lead to high-speed schedulers, with speeds determined by the implementation technology at hand and the speed of the network links. The Ranking algorithm is implemented in both software and hardware, on ATMEL's FPSLIC platform. This platform contains an 8-bit processor, the AVR, and a field-programmable gate array (FPGA) on the same board, fabricated in the same technology, so measurements of the two implementations are directly comparable. The program developed, for both the software and the hardware implementation, takes the switch size as a parameter. Characteristics such as speed, decision time, area, and the number of I/O ports are measured and compared for switches of size four inputs by four outputs (4x4), 8x8, 16x16, and 32x32.
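The idea of randomized input selection in a crossbar scheduler can be sketched in software as a simple greedy randomized matcher. This is a hypothetical illustration of the general technique, not the ROLM Ranking algorithm itself, which computes a maximum matching in hardware.

```python
import random

def randomized_match(requests, seed=0):
    """Greedy randomized matching sketch for a crossbar switch.

    Visit inputs in random order; each input claims a random free
    output among those it requests. `requests[i]` is the set of
    output ports that input port i wants to send to. Returns a dict
    mapping matched inputs to their assigned outputs.
    """
    rng = random.Random(seed)
    inputs = list(range(len(requests)))
    rng.shuffle(inputs)                    # random service order of inputs
    taken, match = set(), {}
    for i in inputs:
        free = [o for o in requests[i] if o not in taken]
        if free:
            o = rng.choice(free)           # random choice among free outputs
            match[i] = o
            taken.add(o)
    return match

# A fully loaded 4x4 switch: every input requests every output.
print(randomized_match([{0, 1, 2, 3}] * 4))
```

When every input requests every output, this greedy pass always yields a perfect matching; under partial request patterns it yields a maximal (though not necessarily maximum) one, which is where more careful algorithms such as Ranking improve on it.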
5

Ad-hoc Holistic Ranking Aggregation

Saleeb, Mina January 2012 (has links)
Data exploration is one of the major processes that enables users to analyze massive amounts of data in order to find the most important and relevant information. Aggregation and ranking are two of the most frequently used tools in data exploration, and the interaction between them has been studied widely from different perspectives. This thesis presents a comprehensive survey of that interaction and introduces Holistic Ranking Aggregation, a new form of it. Finally, various algorithms are proposed to efficiently process ad-hoc holistic ranking aggregation for both monotone and generic scoring functions.
6

Factors Affecting Students' Choices of Senior High Schools without Following the School Ranking by Joint Entrance Exam

Tu, Yu-ming 06 July 2011 (has links)
President Ma Ying-jeou proclaimed the 12-year compulsory education plan on January 1st, 2011 (the 100th year of the Republic of China). From 2014 onward, both senior high and vocational schools will require no tuition, and most of them can be attended without the requirement of students passing an entrance exam. This policy marks a milestone in Taiwan's high school admission system. In the future, students graduating from junior high schools may choose a school they favor, rather than having no choice but to attend the one according to their exam results, as was the practice in the past. Purposive sampling was adopted, with the freshmen in eight public senior high schools in Kaohsiung as the subjects; two classes in each school were sampled with a questionnaire based on four dimensions: "background of the senior high schools", "influencing factors occurring during the process of choosing a school", "main information channels to better understand the senior high schools", and "related consultations on how to choose a senior high school." The aim was to compare the school-choosing behaviors of students who do not follow the conventional school ranking as determined by the entrance exam with those who do, in order to explore the factors that affect the choice process of those who do not follow the practice. The study results show that: 1. Both the students who follow the practice and those who do not value the dimension "background of the senior high schools" the most, including "ratio of students entering a university", "school image (reputation)", and "school ranking in accordance with the entrance exam result". 2.
Those who do and do not follow the practice differ in choosing a school in terms of four aspects: "whether there is a classmate attending the same senior high school", "whether background information on the senior high school is available", "whether the senior high schools hold recruitment activities on the students' campus", and "whether related consultation data is issued by the junior high schools they are attending." Through a logistic regression analysis, it was found that three aspects, "classmate", "background information", and "consultation data", are significantly predictive of the school-choosing behavior of both groups of students. Based on the study results, suggestions are proposed regarding senior high and vocational schools' planning of future marketing strategies and junior high schools' provision of consultations about choosing a senior high school. In addition, suggestions are advanced to education administrative organizations for the implementation of the 12-year compulsory education. Finally, suggestions for follow-up studies are also listed.
7

Some topics in modeling ranking data

Qi, Fang, 齊放 January 2014 (has links)
Many applications of ranking data analysis arise in fields such as psychology, economics, and politics, and over the past decade many models for ranking data have been proposed. AdaBoost has proved to be a very successful technique for generating a strong classifier from weak ones; it can be viewed as forward stagewise additive modeling using the exponential loss function. Motivated by this, a new AdaBoost algorithm is developed for ranking data. Taking into consideration the ordinal structure of ranking data, I propose measures based on the Spearman/Kendall distance to evaluate classifiers, instead of the usual misclassification rate. Several ranking datasets are tested with the new algorithm, and the results show that it outperforms traditional algorithms. The distance-based model assumes that the probability of observing a ranking depends on the distance between the ranking and its central ranking. Predictions for ranking data can be made by combining the distance-based model with the well-known k-nearest-neighbor (kNN) method. This model can be improved by weighting the neighbors according to their distances to the central ranking and weighting the features according to their relative importance. For the feature weighting, a revised version of the traditional ReliefF algorithm is proposed. The experimental results show that the new algorithm is better suited to the ranking data problem. Error-correcting output codes (ECOC) are widely used to solve multi-class learning problems by decomposing the multi-class problem into several binary classification problems. Several ECOCs for ranking data are proposed and tested. By combining these ECOCs with traditional binary classifiers, a predictive model for ranking data with high accuracy can be built.
While the mixture of factor analyzers (MFA) is a useful tool for analyzing heterogeneous data, it cannot be directly applied to ranking data due to the special discrete ordinal structure of rankings. I fill this gap by extending MFA to accommodate both complete and incomplete/partial ranking data. Both simulated and real examples are studied to illustrate the effectiveness of the proposed MFA methods. / published_or_final_version / Statistics and Actuarial Science / Doctoral / Doctor of Philosophy
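The Kendall distance mentioned in the abstract above, the count of item pairs ordered differently by two rankings, is straightforward to compute directly. A minimal sketch (`kendall_distance` is a hypothetical helper name; the item ids are invented for illustration):

```python
def kendall_distance(r1, r2):
    """Kendall distance: number of item pairs that the two rankings
    place in opposite relative order (discordant pairs).

    Each ranking is a list of the same item ids, best first.
    """
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    items = list(pos1)
    d = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            # A negative product means the pair flips order between rankings.
            if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0:
                d += 1
    return d

print(kendall_distance([1, 2, 3], [3, 2, 1]))  # → 3 (fully reversed)
```

Using a distance like this as the evaluation measure, rather than a 0/1 misclassification rate, credits a predicted ranking for being close to the true one even when it is not exactly right.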
8

Factor analysis for ranking data

Lo, Siu-ming, 盧小皿 January 1998 (has links)
published_or_final_version / abstract / toc / Statistics and Actuarial Science / Master / Master of Philosophy
9

Estimation methods for rank data

徐兆邦, Chui, Shiu-bong. January 2000 (has links)
published_or_final_version / Statistics and Actuarial Science / Master / Master of Philosophy
10

Measuring interestingness of documents using variability

KONDI CHANDRASEKARAN, PRADEEP KUMAR 01 February 2012 (has links)
The amount of data we deal with is being generated at an astronomical pace. With rapid technological advances in data storage, storing and transmitting copious amounts of data has become easy and hassle-free. However, exploring that abundant data and finding the interesting items has always been a major challenge and a cumbersome process for people in all industrial sectors. A model that ranks data by interest can save much of the time spent on large amounts of data. In this research we concentrate specifically on ranking the text documents in corpora according to "interestingness". We design a state-of-the-art empirical model to rank documents according to "interestingness". The model is cost-efficient, fast, and automated to an extent that requires minimal human intervention. We identify different categories of documents based on word-usage patterns, which in turn classifies them as interesting, mundane, or anomalous documents. The model is a novel approach that does not depend on the semantics of the words used in a document, but on the repetition of words and the rate of introduction of new words. The design is generic: it can be applied to a document corpus of any size from any domain, and it can be used to rank new documents introduced into the corpus. We formulate a couple of normalization techniques that can be used to neutralize the impact of variable document length. We use three approaches, namely dictionary-based data compression, analysis of the rate of new word occurrences, and Singular Value Decomposition (SVD). To test the model we use a variety of corpora, namely: US Diplomatic Cable releases by Wikileaks, US Presidents' State of the Union Addresses, the Open American National Corpus, and 20 Newsgroups articles. The techniques have various pre-processing steps which are fully automated.
We compare the results of the three techniques and examine the level of agreement between each pair of techniques using the Jaccard coefficient. This approach can also be used to detect unusual and anomalous documents within the corpus. The results also contradict the assumptions made by Simon and Yule in deriving an equation for a general text generation model. / Thesis (Master, Computing) -- Queen's University, 2012-01-31 15:28:04.177
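The pairwise agreement measure used here, the Jaccard coefficient, is simple to state: the size of the intersection of two sets divided by the size of their union. A minimal sketch, with invented document identifiers for illustration:

```python
def jaccard(a, b):
    """Jaccard coefficient of two collections: |A ∩ B| / |A ∪ B|.
    Returns 1.0 for two empty collections (identical by convention)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

# Agreement between the top documents flagged by two techniques
# (hypothetical document ids): 2 shared out of 4 distinct → 0.5.
print(jaccard(["doc1", "doc2", "doc3"], ["doc2", "doc3", "doc4"]))  # → 0.5
```

Applied to, say, the top-k documents each technique ranks as most interesting, a coefficient near 1 indicates the techniques largely agree, while a value near 0 indicates they surface different documents.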
