About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Obesity Rankings, Do They Add Up?

McCabe, Annie Michelle 08 December 2008 (has links)
No description available.
182

Improving Web Search Ranking Using the Internet Archive

Li, Liyan 02 June 2020 (has links)
Current web search engines retrieve results based only on the latest content of the web pages stored in their indices, even though many web resources update frequently. We explore possible techniques and data sources for improving web search result ranking using the historical content changes of web pages. We compare web pages with their previous versions and separately model the texts and relevance signals in the newly added, retained, and removed parts. We particularly examine the Internet Archive, the largest web archiving service to date, for its effectiveness in improving web search performance. We experiment with several retrieval techniques, including language modeling approaches that use refined document and query representations built by comparing current web pages to previous versions, and learning-to-rank methods for combining relevance features across page versions. Experimental results on two large-scale retrieval datasets (ClueWeb09 and ClueWeb12) suggest that using web page content change history to improve web search performance is promising. However, the effectiveness achievable at present is limited by the practical coverage of the Internet Archive and by how much of the content relevant to a query actually changes over time. Our work is a first step toward a promising area combining web search and web archiving, and it opens new opportunities for commercial search engines and web archiving services.

/ Master of Science /

Current web search engines rank documents based only on the most recent version of the web pages stored in their databases, even though many web resources update frequently. We explore possible techniques and data sources for improving web search result ranking using the historical content changes of web pages. We compare web pages with their previous versions and extract the newly added, retained, and removed parts. We examine the Internet Archive in particular, currently the largest web archiving service, for its effectiveness in improving web search performance. We experiment with several retrieval techniques, including language modeling approaches that use refined document and query representations built by comparing current web pages to previous versions, and learning-to-rank methods for combining relevance features across page versions. Experimental results on two large-scale retrieval datasets (ClueWeb09 and ClueWeb12) suggest that using web page content change history to improve web search performance is promising. However, the effectiveness achievable at present is limited by the practical coverage of the Internet Archive and by how much of the content relevant to a query actually changes over time. Our work is a first step toward a promising area combining web search and web archiving, and it opens new opportunities for commercial search engines and web archiving services.
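The version comparison described in this abstract can be illustrated with a short, hedged sketch: the function below partitions a page's current text against an archived snapshot into added, retained, and removed token spans, which could then feed separate language models or learning-to-rank features. The use of Python's difflib and the function name are illustrative assumptions, not the thesis's actual implementation.

```python
# Illustrative sketch (not the thesis's code): partition a page's current text
# against an archived snapshot into added, retained, and removed token spans.
from difflib import SequenceMatcher

def partition_versions(archived_text: str, current_text: str):
    """Split tokens into (added, retained, removed) using a token-level diff."""
    old_tokens = archived_text.split()
    new_tokens = current_text.split()
    added, retained, removed = [], [], []
    matcher = SequenceMatcher(a=old_tokens, b=new_tokens, autojunk=False)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            retained.extend(new_tokens[j1:j2])
        elif op == "insert":
            added.extend(new_tokens[j1:j2])
        elif op == "delete":
            removed.extend(old_tokens[i1:i2])
        else:  # "replace": old span removed, new span added
            removed.extend(old_tokens[i1:i2])
            added.extend(new_tokens[j1:j2])
    return added, retained, removed

if __name__ == "__main__":
    old = "conference program to be announced soon"
    new = "conference program now available with keynote schedule"
    print(partition_versions(old, new))
```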
183

The rank analysis of triple comparisons

Pendergrass, Robert Nixon 12 March 2013 (has links)
General extensions of the probability model for paired comparisons, which was developed by R. A. Bradley and M. E. Terry, are considered. Four generalizations to triple comparisons are discussed. One of these models is used to develop methods of analysis of data obtained from the ranks of items compared in groups of size three. / Ph. D.
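For orientation, the paired-comparison model of Bradley and Terry, and one well-known way of extending it to a ranking of three items (the Luce/Plackett-Luce decomposition), are written out below as a hedged sketch; the dissertation considers four generalizations of its own, which need not coincide with this form.

```latex
% Hedged illustration, not a quotation of the dissertation's four models.
% With positive worth parameters \pi_i, the Bradley--Terry model for paired
% comparisons and one common extension to a full ranking of three items are:
\[
  P(i \succ j) = \frac{\pi_i}{\pi_i + \pi_j},
  \qquad
  P(i \succ j \succ k)
    = \frac{\pi_i}{\pi_i + \pi_j + \pi_k}\cdot\frac{\pi_j}{\pi_j + \pi_k}.
\]
```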
184

The curve through the expected values of order statistics with special reference to problems in nonparametric tests of hypotheses

Chow, Bryant January 1965 (has links)
The expected value of the s-th largest of n ranked variates from a population with probability density f(x) occurs often in the statistical literature, and especially in the theory of nonparametric statistics. A new expression for this value will be obtained for any underlying density f(x), but emphasis will be placed on normal scores. A finite series representation, the individual terms of which are easy to calculate, will be obtained for the sum of squares of normal scores. The derivation of this series demonstrates a technique which can also be used to obtain the expected value of Fisher's measure of correlation as well as the expected value of the Fisher-Yates test statistic under an alternative hypothesis. / Ph. D.
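For reference, the classical integral expression for this expectation, the standard order-statistic result that the abstract's finite-series representation reworks, is shown below; the dissertation's own series form is not reproduced here.

```latex
% Standard order-statistic result: expected value of the s-th largest of n
% i.i.d. variates with density f and distribution function F (the
% dissertation's finite-series representation is not reproduced here).
\[
  E\!\left[X_{(n-s+1)}\right]
    = \frac{n!}{(s-1)!\,(n-s)!}
      \int_{-\infty}^{\infty} x\,[F(x)]^{\,n-s}\,[1-F(x)]^{\,s-1}\, f(x)\, dx .
\]
```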
185

Mapping IS failure factors on PRINCE2® stages: an application of Interpretive Ranking Process (IRP)

Hughes, D.L., Dwivedi, Y.K., Rana, Nripendra P. 25 September 2020 (has links)
The social, political and cultural issues faced by organisations and their senior management teams in the delivery and adoption of strategic projects are highly complex and problematic. Despite a mature body of literature, increasing levels of practitioner certification, application of standards and numerous government initiatives, improvements in success have been minimal. In this study, we analyse the key underlying factors surrounding the failure of Information Systems (IS) projects and explore the merits of articulating a narrative that focuses on senior management embracing practical pessimism. Specifically, we develop a hypothesis, supported by an empirical study, that leverages experts' views on the dominance and interrelationships between failure factors within PRINCE2® project stages using an Interpretive Ranking Process. Our findings establish how the concept of dominance between individual failure factors can enable senior management to make key, informed and timely decisions that could potentially influence project outcomes, based on an empirically derived, interpretive predictive framework.
186

Predicting the “helpfulness” of online consumer reviews

Singh, J.P., Irani, S., Rana, Nripendra P., Dwivedi, Y.K., Saumya, S., Kumar Roy, P. 25 September 2020 (has links)
Online shopping is increasingly becoming people's first choice, as it is convenient to choose products based on other customers' reviews. Even for moderately popular products, thousands of reviews are constantly being posted on e-commerce sites. Such a large and continuously growing volume of data is a big-data challenge for both online businesses and consumers, and it makes it difficult for buyers to go through all the reviews before making a purchase decision. In this research, we have developed machine learning models that predict the helpfulness of consumer reviews using several textual features such as polarity, subjectivity, entropy, and reading ease. The model automatically assigns a helpfulness value to a review as soon as it is posted on the website, so that the review gets a fair chance of being viewed by other buyers. The results of this study will help buyers write better reviews, assist other buyers in making their purchase decisions, and help businesses improve their websites.
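A minimal sketch of the kind of feature extraction and model described above is given below, assuming the TextBlob, textstat, and scikit-learn libraries are available; the paper's actual feature set, data, and models may differ, and the reviews and helpfulness values shown are hypothetical.

```python
# Illustrative sketch only; not the paper's implementation.
import math
from collections import Counter

from textblob import TextBlob
import textstat
from sklearn.ensemble import RandomForestRegressor

def word_entropy(text: str) -> float:
    """Shannon entropy (bits) of the word distribution in the review."""
    words = text.lower().split()
    if not words:
        return 0.0
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def review_features(text: str):
    blob = TextBlob(text)
    return [
        blob.sentiment.polarity,             # polarity in [-1, 1]
        blob.sentiment.subjectivity,         # subjectivity in [0, 1]
        word_entropy(text),                  # lexical entropy
        textstat.flesch_reading_ease(text),  # reading ease score
    ]

# Toy training data: reviews paired with hypothetical helpful-vote ratios.
reviews = ["Great battery life and solid build quality.",
           "Bad.",
           "Works as described; shipping was quick and packaging was fine."]
helpfulness = [0.9, 0.1, 0.6]

X = [review_features(r) for r in reviews]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, helpfulness)
print(model.predict([review_features("Decent product, arrived on time.")]))
```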
187

A methodological critique of the Interpretive Ranking Process for examining IS project failure

Hughes, L., Dwivedi, Y.K., Rana, Nripendra P. 27 September 2020 (has links)
This research critically analyzes the Interpretive Ranking Process (IRP) using an illustrative, empirically derived case study of IS project failure to articulate a deeper understanding of the method. The findings emphasize the suitability of the method for a number of practical applications, but also highlight its limitations for problems with larger matrices. The IRP steps for deriving dominance between IS project failure factors are judged to be methodical and systematic, enabling clear dominance interactions to be established.
188

Using Texture Features To Perform Depth Estimation

Kotha, Bhavi Bharat 22 January 2018 (has links)
There is a great need in real-world applications for estimating depth through electronic means without human intervention. Many methods, such as LiDAR and radar, help in autonomously obtaining depth measurements. One of the most researched topics in the field of depth measurement is computer vision, which applies techniques to 2D images to achieve the desired result. Among the many 3D vision techniques, stereovision is a field where a great deal of research is being done to solve this kind of problem. Human vision is an important part of the inspiration for, and the research performed in, this field. Stereovision gives a very high spatial resolution of depth estimates, which is used for obstacle avoidance, path planning, object recognition, and similar tasks. Stereovision makes use of the two images in an image pair. These images are taken with two cameras from different views, and the two images are processed to obtain depth information. Processing stereo images has been one of the most intensively pursued research topics in computer vision. Many factors affect the performance of this approach, such as computational efficiency, depth discontinuities, lighting changes, correspondence and correlation, and electronic noise. An algorithm is proposed which uses texture features obtained with Laws' energy masks and a multi-block approach to perform correspondence matching between a stereo pair of images with a wide baseline. This is followed by forming disparity maps to obtain the relative depth of pixels in the image. This approach is also compared with current state-of-the-art algorithms. A robust method to score and rank stereo algorithms is also proposed, providing a simple way for researchers to rank algorithms according to their application needs.

/ Master of Science /

There is a great need in real-world applications for estimating depth through electronic means without human intervention. Many methods, such as LiDAR and radar, help in autonomously obtaining depth measurements. One of the most researched topics in the field of depth measurement is computer vision, which applies techniques to 2D images to achieve the desired result. Among the many 3D vision techniques, stereovision is a field where a great deal of research is being done to solve this kind of problem. Human vision is an important part of the inspiration for, and the research performed in, this field. A large variety of algorithms are being developed to measure the depth of ideally every point in the pictured scene, giving a very high spatial resolution compared with other methods. Real-world needs for depth estimation and the benefits of stereo vision are the main driving forces behind this approach. Stereovision gives a very high spatial resolution, which is used for obstacle avoidance, path planning, object recognition, and similar tasks. Stereovision uses image pairs taken from two cameras with different perspectives to estimate depth. The two images in an image pair are taken with two cameras from different views (a translational change in viewpoint), and the two images are processed to obtain depth information. The software tool developed in this work implements a new approach to correspondence matching for depth estimation using stereo vision concepts; it is written in MATLAB. The tool's efficiency was evaluated using standard techniques, which are described in detail. The evaluation also used images collected with a pair of stereo cameras, with a tape measure used to hand-measure the depth of an object. A scoring method has also been proposed to rank algorithms in the field of stereo vision.
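The Laws texture-energy step mentioned in the abstract can be sketched as follows. This is a generic illustration in Python (the thesis's tool is written in MATLAB), using the classic 5-tap Laws vectors; the window size, the absolute-mean energy measure, and the random test image are assumptions, not the thesis's exact multi-block matching pipeline.

```python
# Illustrative sketch of Laws' texture energy features. Assumes NumPy and SciPy.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Classic 1-D Laws vectors: Level, Edge, Spot, Ripple.
L5 = np.array([ 1.,  4., 6.,  4.,  1.])
E5 = np.array([-1., -2., 0.,  2.,  1.])
S5 = np.array([-1.,  0., 2.,  0., -1.])
R5 = np.array([ 1., -4., 6., -4.,  1.])

def laws_energy_maps(gray: np.ndarray, window: int = 15) -> np.ndarray:
    """Return a stack of texture-energy maps, one per 2-D Laws mask."""
    gray = gray.astype(float)
    # Remove local illumination with a moving-average subtraction.
    gray = gray - uniform_filter(gray, size=window)
    maps = []
    for v in (L5, E5, S5, R5):
        for h in (L5, E5, S5, R5):
            mask = np.outer(v, h)            # 5x5 separable Laws mask
            filtered = convolve(gray, mask, mode="reflect")
            energy = uniform_filter(np.abs(filtered), size=window)
            maps.append(energy)
    return np.stack(maps, axis=-1)           # H x W x 16 feature vector per pixel

if __name__ == "__main__":
    img = np.random.rand(64, 64)             # stand-in for a rectified stereo patch
    print(laws_energy_maps(img).shape)       # (64, 64, 16)
```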
189

Visualizing and modeling partial incomplete ranking data

Sun, Mingxuan 23 August 2012 (has links)
Analyzing ranking data is an essential component of a wide range of important applications, including web search and recommendation systems. Rankings are difficult to visualize or model due to the computational difficulties associated with the large number of items. Partial or incomplete rankings introduce further difficulties, since approaches that work well for typical types of rankings do not apply generally to all types. While the analysis of ranking data has a long history in statistics, the construction of an efficient framework for analyzing incomplete ranking data (with or without ties) is currently an open problem. This thesis addresses the problem of scalability in visualizing and modeling partial incomplete rankings. In particular, we propose a distance measure for top-k rankings with three properties: (1) it is a metric, (2) it emphasizes top ranks, and (3) it is computationally efficient. Given the distance measure, the data can be projected into a low-dimensional continuous vector space via multidimensional scaling (MDS) for easy visualization. We further propose a non-parametric model for estimating distributions of partial incomplete rankings. For the non-parametric estimator, we use a triangular kernel that is a direct analogue of the Euclidean triangular kernel. The computations for large n are simplified using combinatorial properties and generating functions associated with symmetric groups. We show that our estimator is computationally efficient for rankings of arbitrary incompleteness and tie structure. Moreover, we propose an efficient learning algorithm for constructing a preference elicitation system from partial incomplete rankings, which can be used to solve cold-start problems in ranking recommendations. The proposed approaches are examined in experiments with real search engine and movie recommendation data.
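A hedged sketch of the visualization step is given below: it embeds a handful of top-k rankings in two dimensions with MDS over a precomputed dissimilarity matrix. The simple positional-displacement distance used here is only a stand-in for (not a reproduction of) the thesis's proposed top-k metric, and the item ids and rankings are made up for illustration.

```python
# Illustrative sketch: embed top-k rankings in 2-D via MDS over a precomputed
# distance matrix. Assumes NumPy and scikit-learn.
import numpy as np
from sklearn.manifold import MDS

def topk_distance(a, b):
    """Average positional displacement; items missing from a list get position k."""
    k = max(len(a), len(b))
    pos_a = {item: i for i, item in enumerate(a)}
    pos_b = {item: i for i, item in enumerate(b)}
    items = set(a) | set(b)
    return sum(abs(pos_a.get(x, k) - pos_b.get(x, k)) for x in items) / len(items)

rankings = [
    [0, 1, 2],        # each list is a top-k ranking over item ids
    [1, 0, 3],
    [4, 2, 0],
    [3, 4, 1],
]
n = len(rankings)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = topk_distance(rankings[i], rankings[j])

embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(D)
print(embedding)
```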
190

Optimal randomized and non-randomized procedures for multinomial selection problems

Tollefson, Eric Sander 20 March 2012 (has links)
Multinomial selection problem procedures are ranking and selection techniques that aim to select the best (most probable) alternative based upon a sequence of multinomial observations. The classical formulation of the procedure design problem is to find a decision rule for terminating sampling. The decision rule should minimize the expected number of observations taken while achieving a specified indifference zone requirement on the prior probability of making a correct selection when the alternative configurations are in a particular subset of the probability space called the preference zone. We study the constrained version of the design problem in which there is a given maximum number of allowed observations. Numerous procedures have been proposed over the past 50 years, all of them suboptimal. In this thesis, we find via linear programming the optimal selection procedure for any given probability configuration. The optimal procedure turns out to be necessarily randomized in many cases. We also find via mixed integer programming the optimal non-randomized procedure. We demonstrate the performance of the methodology on a number of examples. We then reformulate the mathematical programs to make them more efficient to implement, thereby significantly expanding the range of computationally feasible problems. We prove that there exists an optimal policy which has at most one randomized decision point and we develop a procedure for finding such a policy. We also extend our formulation to replicate existing procedures. Next, we show that there is very little difference between the relative performances of the optimal randomized and non-randomized procedures. Additionally, we compare existing procedures using the optimal procedure as a benchmark, and produce updated tables for a number of those procedures. Then, we develop a methodology that guarantees the optimal randomized and non-randomized procedures for a broad class of variable observation cost functions -- the first of its kind. We examine procedure performance under a variety of cost functions, demonstrating that incorrect assumptions regarding marginal observation costs may lead to increased total costs. Finally, we investigate and challenge key assumptions concerning the indifference zone parameter and the conditional probability of correct selection, revealing some interesting implications.
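As a hedged illustration of the quantity these procedures are designed around, the sketch below uses Monte Carlo simulation to estimate the probability of correct selection for the simplest fixed-sample rule (take n observations and select the most frequent cell, breaking ties at random) under a given multinomial configuration. The thesis's linear and mixed integer programming formulations are not reproduced here, and the slippage configuration shown is just an example.

```python
# Illustrative sketch (not the thesis's LP/MIP formulation): Monte Carlo
# estimate of the probability of correct selection (PCS) for a fixed-sample
# multinomial selection rule.
import numpy as np

def estimate_pcs(p, n_obs, n_reps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    p = np.asarray(p, dtype=float)
    best = int(np.argmax(p))                       # the truly most probable cell
    correct = 0
    for _ in range(n_reps):
        counts = rng.multinomial(n_obs, p)
        winners = np.flatnonzero(counts == counts.max())
        if int(rng.choice(winners)) == best:       # random tie-break
            correct += 1
    return correct / n_reps

# Example: a slippage configuration with largest-to-second-largest ratio 1.6.
theta = 1.6
p = np.array([theta, 1.0, 1.0]) / (theta + 2.0)
print(estimate_pcs(p, n_obs=20))
```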
