
Ordering, Indexing, and Searching Semantic Data: A Terminology Aware Index Structure

Pound, Jeffrey January 2008 (has links)
Indexing data for efficient search is a core problem in many domains of computer science. As applications centered around semantic data sources become more common, the need for more sophisticated indexing and querying capabilities arises. In particular, searching for specific information in the presence of a terminology or ontology (i.e., a set of logic-based rules that describe concepts and their relations) is especially important, as the information a user seeks may exist only as an entailment of the explicit data by means of the terminology. This variant on traditional indexing and search problems forms the foundation of a range of possible technologies for semantic data. In this work, we propose an ordering language for specifying partial orders over semantic data items modeled as descriptions in a description logic. We then show how these orderings can be used as the basis of a search tree index for processing \emph{concept searches} in the presence of a terminology. We study in detail the properties of the orderings and the associated index structure, and also explore a relationship between ordering descriptions called \emph{order refinement}. A sound and complete procedure for deciding refinement is given. We also empirically evaluate a prototype implementation of our index structure, validating its potential efficacy in semantic query problems.
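The central idea, that a query answer may exist only as an entailment of the explicit data through the terminology, can be illustrated with a much simpler stand-in for the thesis's description-logic machinery: a set of subclass rules plus asserted concept memberships. All names below are invented for illustration.

```python
# Minimal sketch of terminology-aware search (not the thesis's machinery):
# a query for a concept must also return items that are only *entailed*
# members via the terminology's subclass rules.

def entailed_concepts(concept, subclass_of):
    """All superconcepts of `concept` under the transitive closure of the rules."""
    seen, stack = set(), [concept]
    while stack:
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        stack.extend(subclass_of.get(c, ()))
    return seen

def concept_search(query, assertions, subclass_of):
    """Items whose asserted concept entails membership in `query`."""
    return {item for item, concept in assertions.items()
            if query in entailed_concepts(concept, subclass_of)}

# Tiny invented terminology and data set.
subclass_of = {"PhDThesis": ["Thesis"], "Thesis": ["Document"]}
assertions = {"d1": "PhDThesis", "d2": "Report"}
```

Here a search for `Document` returns `d1`, which is asserted only as a `PhDThesis`: the answer is an entailment of the terminology, not an explicit fact.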

Efficient Pattern Search in Large, Partial-Order Data Sets

Nichols, Matthew January 2008 (has links)
The behaviour of a large, distributed system is inherently complex. One step towards making this behaviour more understandable to a user involves instrumenting the system and collecting data about its execution. We can model the data as traces (representing various sequential entities in the system, such as single-threaded processes) that contain both events local to the trace and communication events involving another trace. Visualizing this data provides a modest benefit to users, as it makes basic interactions in the system clearer and, with some user effort, more complex interactions can be determined. Unfortunately, visualization by itself is not an adequate solution, especially for large numbers of events and complex interactions among traces. A search facility can make this event data far more useful. Previous work has produced various frameworks and algorithms that could form the core of such a search facility; however, shortcomings in the completeness of the frameworks and in the efficiency of the algorithms resulted in an inconsistent, incomplete, and inefficient solution. This thesis takes steps to remedy this situation. We propose a provably-complete framework for determining precedence between sets of events and propose additions to a previous pattern-specification language so it can specify a wider variety of search patterns. We improve the efficiency of the existing search algorithm, and provide a new, more efficient algorithm that processes a pattern in a fundamentally different way. Furthermore, the various proposed improvements have been implemented and are analysed empirically.
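One standard way to decide precedence between events in partial-order trace data (illustrative of the setting, not necessarily the framework the thesis proposes) is to timestamp each event with a vector clock and apply the happened-before test:

```python
def precedes(v1, v2):
    """Happened-before test on vector timestamps: v1 -> v2 iff v1 <= v2
    componentwise and the two timestamps differ somewhere."""
    return all(a <= b for a, b in zip(v1, v2)) and v1 != v2

def concurrent(v1, v2):
    """Events are concurrent when neither precedes the other."""
    return not precedes(v1, v2) and not precedes(v2, v1)

# Three traces; each event carries one counter per trace (invented values).
send    = (1, 0, 0)   # trace 0 sends a message
receive = (1, 1, 0)   # trace 1 receives it, so `send` precedes `receive`
other   = (0, 0, 1)   # an unrelated event on trace 2
```

The componentwise comparison is exactly what makes the data a partial order: `send` and `other` are ordered with respect to neither each other nor any common event.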

Is Your Brand Going Out of Fashion? A Quantitative, Causal Study Designed to Harness the Web for Early Indicators of Brand Value

Cole, Maureen S 01 August 2012 (has links)
Can Internet search query data be a relevant predictor of financial measures of brand value? Can it enrich existing brand valuation tools and provide more timely insights to brand managers? Along with the financially based motivation to estimate the value of a brand for accounting purposes, marketers want to demonstrate the "accountability" of marketing activity and to respond quickly to customers' perception of the brand in order to maintain competitive advantage and value. The usefulness of the "consumer information processing" framework for brand, consumer, and firm forecasting is examined. To develop our hypotheses, we draw from the growing body of work relating web searches to real-world outcomes to determine whether search queries for a brand are causal to, and potentially predictive of, brand, consumer, and firm value. The contribution to the current literature is that search queries can predict perception, whereas previous research in this nascent area predicted behavior and events. We ground the research in two strands of theory: the theoretical background on brand valuation, and an in-depth review of how scholars have used search query data as a predictive measure across several disciplines, including economics and the health sciences. From a practitioner perspective, unlike traditional valuation methods, search query data for brands is more timely, actionable, and inclusive.

Model for Auditing Search Engine Optimization for E-business

Schooner, Patrick January 2010 (has links)
E-commerce combines web technology with business economics. Over the last 10 years, the online visibility of such enterprises has come to rely heavily on the relationship between their own online sales platforms and search engines, which drive traffic consisting of prospective customers intent on acquiring products or services related to their needs. In 2008, an Internet behavioural analysis showed that over 90% of Swedish Internet users use search engines at least once a week, indicating that online visibility through search engines is now a crucial aspect of business marketing. Several applications exist within the technical field of online marketing to improve the relationship between e-commerce platforms and search engines, one being Search Engine Optimization (SEO). As a subset of online marketing, SEO consists mainly of three subareas: organic search engine optimization (organic SEO), search engine marketing (SEM), and social media optimization (SMO). How search engines actually crawl and index web content is hidden behind the business secrets of the individual search engines operating online, leaving SEO auditors and operators to test systematically, by trial and error, for optimal settings. The first part of this thesis unfolds SEO theory obtained from online sources, acclaimed literature, and articles to identify the settings SEO auditors and operators may use as tools to improve the visibility and accessibility of live websites to search engines. The second part forms a theory-driven work model (called the "PS Model") for working systematically with SEO: a structure for implementation and ways to measure the improvements. The third part of the thesis evaluates the PS Model using a case study in which the model is implemented.
The case study uses a website (referred to in this thesis as "BMG") owned by a company active in the biotechnological research and development field in Sweden (referred to as "BSG"). At the start of January 2010 the site was in need of SEO improvements, as its relationship with the search engine Google had stagnated, leaving several vital documents outside Google's index, and the relevancy between performed search queries and site-wide keywords had declined. The focus of this thesis is on bringing forth a work model that takes in the essential parts of SEO (organic SEO, SEM, and SMO) and implementing it on the BMG platform to improve the website's online visibility and accessibility to search engines (mainly Google), thus resolving the stagnated situation identified in January 2010 by the BMG site owners and, consequently, validating the PS Model. In May 2010 it was shown that the PS Model did improve site-wide indexing at Google, and search queries containing BMG's main set of keywords improved in terms of relevancy (higher placement on search result pages).

Sökmotoroptimering : Metoder för att förbättra sin placering i Googles sökresultat

Allard, Sebastian, Nilsson, Björn January 2010 (has links)
This paper is a literature study on search engine optimization (SEO) considering the leader of the search engine market: Google. There is an introductory background description of Google and its methods of crawling the Internet and indexing web pages, along with a brief review of the famous PageRank algorithm. The purpose of this paper is to describe the most important methods for improving rankings in Google's result lists. These methods can be categorized as on-page methods, tied to the website being optimized, or off-page methods external to the website, such as link development. Furthermore, the most common unethical methods, known as "black hat", are described, which is the secondary purpose of the text. The discussion that follows concerns the practical implications of SEO and personal reflections on the matter. Finally, there is a quick look at the expanding market of handheld devices connected to the Internet, and at mobile search as an initial area of research.
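The PageRank algorithm reviewed in the introduction can be sketched as a power iteration over the link graph. The damping factor of 0.85 and the tiny example graph below are illustrative defaults, not values from the paper:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank on a dict {page: [outgoing links]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a (1 - damping) share of uniformly teleported rank.
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outgoing in links.items():
            if not outgoing:
                # Dangling page: spread its rank uniformly over all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outgoing:
                    new[q] += damping * rank[p] / len(outgoing)
        rank = new
    return rank

# Toy link graph: "a" is linked to by both other pages.
links = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(links)
```

The iteration converges to a probability distribution, so the ranks sum to 1; in this toy graph the most linked-to page, `a`, receives the highest rank.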

Evolving Cuckoo Search : From single-objective to multi-objective

Lidberg, Simon January 2011 (has links)
This thesis aims to produce a novel multi-objective algorithm based on Cuckoo Search by Dr. Xin-She Yang. Cuckoo Search is a promising nature-inspired meta-heuristic optimization algorithm, but it is currently only able to solve single-objective optimization problems. After an introduction, a number of theoretical points are presented as a basis for deciding which algorithms to hybridize Cuckoo Search with. These are then reviewed in detail and verified against current benchmark algorithms to evaluate their efficiency. To test the proposed algorithm in a new setting, a real-world combinatorial problem is used. The proposed algorithm is then used as an optimization engine for a simulation-based system and compared against a current implementation.
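For reference, the single-objective Cuckoo Search that the thesis starts from combines Lévy-flight perturbations around good solutions with abandonment of a fraction of the worst nests. Below is a minimal sketch; the parameter values, the Lévy-step approximation, and the sphere objective are illustrative choices, not taken from the thesis:

```python
import math
import random

def levy_step(beta=1.5):
    """Approximate Levy-distributed step length (Mantegna's method)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, pa=0.25, iterations=200, lo=-5.0, hi=5.0):
    """Minimise f over [lo, hi]^dim with a basic single-objective Cuckoo Search."""
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iterations):
        # New candidate: Levy flight around a random nest, biased toward the best.
        i = random.randrange(n_nests)
        new = [min(hi, max(lo, x + 0.01 * levy_step() * (x - b)))
               for x, b in zip(nests[i], best)]
        j = random.randrange(n_nests)
        if f(new) < f(nests[j]):
            nests[j] = new
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        nests.sort(key=f)
        for k in range(int(n_nests * (1 - pa)), n_nests):
            nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
        best = min(nests + [best], key=f)
    return best

sphere = lambda x: sum(v * v for v in x)   # classic benchmark objective
solution = cuckoo_search(sphere, dim=2)
```

Extending this to multiple objectives, the subject of the thesis, requires replacing the single `min(..., key=f)` comparisons with a dominance relation over objective vectors.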

Differential Equations and Depth First Search for Enumeration of Maps in Surfaces

Brown, Daniel January 1999 (has links)
A map is an embedding of the vertices and edges of a graph into a compact 2-manifold such that the remainder of the surface has components homeomorphic to open disks. With the goal of proving the Four Colour Theorem, Tutte began the field of map enumeration in the 1960s. His methods included developing the edge deletion decomposition, developing and solving a recurrence and functional equation based on this decomposition, and developing the medial bijection between two equinumerous infinite families of maps. Beginning in the 1980s, Jackson, Goulden and Visentin applied algebraic methods to the enumeration of non-planar and non-orientable maps, obtaining results of interest for mathematical physics and algebraic geometry and formulating the Quadrangulation Conjecture and the Map-Jack Conjecture. A special case of the former is solved by Tutte's medial bijection. The latter uses Jack symmetric functions, which are a topic of active research. In the 1960s, Walsh and Lehman introduced a method of encoding orientable maps. We develop a similar method, based on depth first search and extended to non-orientable maps. With this, we develop a bijection that extends Tutte's medial bijection and partially solves the Quadrangulation Conjecture. Walsh extended Tutte's recurrence for planar maps to a recurrence for all orientable maps. We further extend the recurrence to include non-orientable maps, and express it as a partial differential equation satisfied by the generating series. By appropriately interpolating the differential equation and applying the depth first search method, we construct a parameter that empirically fulfils the conditions of the Map-Jack Conjecture, and we prove some of its predicted properties. Arques and Beraud recently obtained a continued fraction form of a specialisation of the generating series for maps. We apply the depth first search method with an ordinary differential equation to construct a bijection whose existence is implied by the continued fraction.

Adaptive Comparison-Based Algorithms for Evaluating Set Queries

Mirzazadeh, Mehdi January 2004 (has links)
In this thesis we study a problem that arises in answering boolean queries submitted to a search engine. A search engine usually stores the set of IDs of documents containing each word in a pre-computed sorted order, and to evaluate a query like "computer AND science" it must compute the intersection of the sets of documents containing the words "computer" and "science". More complex queries result in more complex set expressions. In this thesis we consider the problem of evaluating a set expression with union and intersection as operators and ordered sets as operands. We explore properties of comparison-based algorithms for the problem. A <i>proof of a set expression</i> is the set of comparisons that a comparison-based algorithm performs before it can determine the result of the expression. We discuss the properties of proofs of set expressions and, based on how complex the smallest proofs of a set expression <i>E</i> are, we define a measure of how difficult <i>E</i> is to compute. We then design an algorithm that is adaptive to the difficulty of the input expression, and we show that its running time is roughly proportional to that difficulty, where the factor is roughly logarithmic in the number of operands of the input expression.
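A concrete example of a comparison-based algorithm whose cost adapts to the input, illustrative of the setting rather than the thesis's own algorithm, is intersecting two sorted posting lists with galloping (doubling) search, which performs few comparisons when the inputs are only lightly interleaved:

```python
from bisect import bisect_left

def gallop(arr, target, lo):
    """Insertion point for target in sorted arr[lo:], found with doubling steps
    followed by a binary search inside the bracketed range."""
    step, hi = 1, lo
    while hi < len(arr) and arr[hi] < target:
        lo = hi
        hi += step
        step *= 2
    return bisect_left(arr, target, lo, min(hi, len(arr)))

def intersect(a, b):
    """Intersection of two sorted lists; comparison count adapts to how much
    the lists interleave, rather than always scanning both fully."""
    if len(a) > len(b):
        a, b = b, a            # iterate over the shorter list
    out, pos = [], 0
    for x in a:
        pos = gallop(b, x, pos)  # resume from the previous match position
        if pos < len(b) and b[pos] == x:
            out.append(x)
    return out
```

When one list is tiny and the other huge, the galloping probe skips most of the large list, which is the intuition behind "easy" expressions having small proofs.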

Collaboration During Visual Search

Malcolmson, Kelly January 2006 (has links)
Three experiments examine how collaboration influences visual search performance. Working with a partner or on their own, participants reported whether a target was present or absent in briefly presented search displays. The search performance of individuals working together (collaborative pairs) was compared to the pooled responses of the individuals working alone (nominal pairs). Collaborative pairs were less likely than nominal pairs to correctly detect a target and they were less likely to make false alarms. Signal detection analyses revealed that collaborative pairs were more sensitive to the presence of the target and had a more conservative response bias than the nominal pairs. This pattern was observed when the search difficulty was increased and when the presence of another individual was matched across pairs. The results are discussed in the context of task sharing, social loafing and current theories of visual search.
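The signal detection analyses mentioned above compute sensitivity (d') and response bias (the criterion c) from hit and false-alarm rates via the inverse normal CDF. The rates below are invented for illustration; they merely mimic the reported pattern, with collaborative pairs both more sensitive and more conservative:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Standard equal-variance SDT: d' = z(H) - z(FA), c = -(z(H) + z(FA)) / 2.
    Positive c indicates a conservative bias (reluctance to say 'present')."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical rates: collaborative pairs hit less often but false-alarm far less.
d_collab, c_collab = dprime_and_criterion(0.80, 0.05)
d_nominal, c_nominal = dprime_and_criterion(0.90, 0.30)
```

With these illustrative rates the collaborative pair has the higher d' (more sensitive) and the larger positive c (more conservative), matching the qualitative result described in the abstract.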

Classification-Based Adaptive Search Algorithm for Video Motion Estimation

Asefi, Mahdi January 2006 (has links)
A video sequence consists of a series of frames. In order to compress the video for efficient storage and transmission, the temporal redundancy among adjacent frames must be exploited. A frame is selected as the reference frame, and subsequent frames are predicted from it using a technique known as motion estimation. Real videos contain a mixture of slow and fast motion content. Among block matching motion estimation algorithms, the full search algorithm is known for its superior performance over other matching techniques. However, this method is computationally very expensive. Several fast block matching algorithms (FBMAs) have been proposed in the literature with the aim of reducing computational cost while maintaining the desired quality, but all of these methods are sub-optimal: no fixed fast block matching algorithm can efficiently remove the temporal redundancy of video sequences with wide-ranging motion content. An adaptive fast block matching algorithm, called classification-based adaptive search (CBAS), is therefore proposed. A Bayes classifier is applied to classify motion into slow and fast categories, and an appropriate search strategy is applied to each class. The algorithm switches between different search patterns according to the motion content of the video frames. The proposed technique outperforms conventional stand-alone fast block matching methods in terms of both peak signal-to-noise ratio (PSNR) and computational complexity. In addition, a new hierarchical method for detecting and classifying shot boundaries in video sequences is proposed, based on information theoretic classification (ITC). ITC relies on the likelihood of transmitting a data point's class label to the data points in its vicinity; it maximizes the global transmission of true class labels and classifies frames into cuts and non-cuts.
Applying the same rule, the non-cut frames are further classified into two categories: arbitrary shot frames and gradual transition frames. CBAS is applied to the proposed shot detection method to handle camera and object motion. Experimental evidence demonstrates that our method can detect shot breaks with high accuracy.
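The full search baseline that the fast algorithms approximate exhaustively evaluates a distortion measure, commonly the sum of absolute differences (SAD), at every candidate displacement in a search window. A minimal sketch on plain 2-D lists follows; the block size, window radius, and toy frames are illustrative, not the thesis's experimental setup:

```python
def sad(frame, ref, bx, by, dx, dy, n):
    """Sum of absolute differences between the n x n block at (bx, by) in `frame`
    and the block displaced by (dx, dy) in the reference frame `ref`."""
    return sum(abs(frame[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(n) for i in range(n))

def full_search(frame, ref, bx, by, n=4, w=2):
    """Exhaustive block matching over a +/-w search window; returns the motion
    vector (dx, dy) that minimises SAD."""
    h, width = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            # Skip displacements whose reference block falls outside the frame.
            if not (0 <= by + dy and by + dy + n <= h and
                    0 <= bx + dx and bx + dx + n <= width):
                continue
            cost = sad(frame, ref, bx, by, dx, dy, n)
            if cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best

# Toy 10x10 frames: a textured 4x4 block moves one pixel to the right.
ref = [[0] * 10 for _ in range(10)]
frame = [[0] * 10 for _ in range(10)]
for j in range(4):
    for i in range(4):
        ref[3 + j][3 + i] = (i + 1) * (j + 1) * 10
        frame[3 + j][4 + i] = (i + 1) * (j + 1) * 10
mv = full_search(frame, ref, bx=4, by=3)
```

The cost of this exhaustive scan, (2w+1)^2 SAD evaluations per block, is exactly what fast algorithms such as CBAS avoid by sampling only a few candidate displacements chosen per motion class.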
