111 |
Morgondagens marknadsföring [Tomorrow's Marketing]. Mangs, Melinda, January 2007 (has links)
Purpose/Aim: The purpose of this study was to investigate future marketing channels from the perspective of professional marketers. Material/Method: The study is based on interviews with six professional marketers. Main results: Traditional marketing is not being put aside, but it needs to be combined with new methods. There are several new and exciting ways to gain the audience's attention, all depending on the purpose of the campaign. Mobile technology is considered an emerging channel, and defining the target group is a key issue.
112 |
Optimizing Performance of Internet Advertising Campaigns. Vrsecky, Jiri, 27 July 2012 (has links)
This thesis closely examines the Internet advertising techniques and tools that can be used to promote a new product on the market. The goal of the thesis is to measure marketing campaign effectiveness and to optimize advertising campaigns. Relevant KPIs were chosen to create evaluation matrices, identify successful advertising channels and build an efficient Internet marketing mix. The advertising activities described in the thesis helped a Czech company increase its online sales.
The practical part of the thesis describes the process of introducing a new product to the Czech market. In the initial stage, several analyses were carried out to identify market conditions, the competition and the ideal consumer. Based on the results, a market entry strategy and a web development plan were created. An e-shop and other supporting web pages were developed with the aim of selling products through the Internet channel. The websites follow best practices for web presentation design, search engine optimization and web audience measurement.
During the project, several marketing campaigns were launched and their results were monitored with Google Analytics. Selected marketing activities were examined in detail: search engine optimization, the POEM (paid, owned and earned media) model, and Pay Per Click (PPC) advertising were used as field experiments. The partial results of each field experiment, as well as the overall results of all marketing activities, are summarized in the conclusion.
The thesis presents a comprehensive overview of the marketing tools and channels on the Czech market. It also summarizes best practices for website development and for optimizing content for visitors and search engines. A comparison of advertising activities across three PPC advertising networks helps to define a PPC advertising strategy for the Czech market. Based on the findings, an optimized Internet marketing mix was created with the aim of increasing campaign effectiveness. Suggestions for optimizing the website's content and recommendations for future marketing activities were summarized to support the company's future project development.
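To make the KPI-based evaluation concrete, here is a minimal, hypothetical sketch of the kind of per-channel evaluation matrix such a campaign review might build. The channel names and all figures are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch of a per-channel KPI evaluation matrix for PPC campaigns.
# Channel names and all numbers are invented for illustration only.

channels = {
    "network_a": {"impressions": 120_000, "clicks": 1_800, "cost": 5_400.0, "conversions": 54},
    "network_b": {"impressions": 200_000, "clicks": 3_200, "cost": 9_600.0, "conversions": 80},
    "network_c": {"impressions": 150_000, "clicks": 1_200, "cost": 3_000.0, "conversions": 18},
}

def kpis(stats):
    """Derive standard campaign KPIs from raw channel statistics."""
    clicks, cost, conv = stats["clicks"], stats["cost"], stats["conversions"]
    return {
        "CTR": clicks / stats["impressions"],           # click-through rate
        "CPC": cost / clicks,                           # cost per click
        "CR":  conv / clicks,                           # conversion rate
        "CPA": cost / conv if conv else float("inf"),   # cost per acquisition
    }

for name, stats in channels.items():
    row = kpis(stats)
    print(f"{name}: CTR={row['CTR']:.2%}  CPC={row['CPC']:.2f}  "
          f"CR={row['CR']:.2%}  CPA={row['CPA']:.2f}")
```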
113 |
Recuperação de informação em jornais on-line: percepção sobre atributos de pesquisa em mecanismos de busca / Information retrieval in online newspapers: perceptions of search attributes in search engines. Antonio Paulo Carretta, 23 September 2015 (has links)
This study examines the organization and retrieval of information in online newspapers' repositories. It highlights aspects of the hypermedia medium, the informative structure of the digital document and the genres of online journalistic content; addresses the notion of memory as an attribute for activating and connecting information in the Web context; describes the basic structure of search engines; and profiles journalists in the context of digital convergence. To investigate potential difficulties in searching and retrieving information, an exploratory inspection of the similar search-engine interfaces of selected national and foreign newspapers was carried out, together with an online questionnaire to identify how expert users, the journalists, perceive the internal search engines they use in their work routine. As a result, the study discusses the sensitivities of search attributes, technical standards of information processing, shortcomings of the search process, and satisfaction factors for information retrieval in the digital environment.
114 |
Algoritmos para avaliação de confiança em apontadores encontrados na Web / Algorithms for Assessing the Reliability of Links Found on the Web. Souza, Jucimar Brito de, 23 April 2009 (has links)
Search engines have become an essential tool for Web users. They use link-analysis algorithms to explore the Web's link structure and estimate a popularity score for each page, treating each link as a vote of quality; this information feeds the search engine's ranking algorithms. However, many links found on the Web cannot be taken as genuine quality votes and instead introduce noise into the ranking. Examples of such noisy links include repeated links, links produced by page duplication, and spam. This work aims to detect noise in the link structure stored in search engine collections. The impact of the noisy-link detection methods developed here was studied in scenarios where page reputation is computed with either the PageRank or the Indegree algorithm. The experiments showed improvements of up to 68.33% in Mean Reciprocal Rank (MRR) for navigational queries and of up to 35.36% for randomly selected navigational queries when the search engine uses PageRank.
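For reference, the Mean Reciprocal Rank metric reported above averages, over all queries, the reciprocal of the rank at which the first relevant page appears. A minimal sketch follows; the rank values are hypothetical and only illustrate how such improvements would be computed, not the thesis' actual figures.

```python
# Minimal sketch of Mean Reciprocal Rank (MRR); the rank lists are hypothetical.

def mean_reciprocal_rank(first_relevant_ranks):
    """Ranks of the first relevant result per query; None = nothing relevant found."""
    return sum(1.0 / r if r else 0.0 for r in first_relevant_ranks) / len(first_relevant_ranks)

# Hypothetical ranks for five navigational queries, before and after noisy-link removal.
baseline = mean_reciprocal_rank([2, 3, None, 4, 5])   # reputation from the raw link graph
cleaned  = mean_reciprocal_rank([1, 1, 5, 2, 3])      # reputation after noise detection

print(f"MRR baseline={baseline:.3f}  cleaned={cleaned:.3f}  "
      f"improvement={(cleaned - baseline) / baseline:.1%}")
```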
115 |
Exploring Search Engine Optimization (SEO) Techniques for Dynamic Websites. Kanwal, Wasfa, January 2011 (has links)
Context: With the growing number of online businesses, Search Engine Optimization (SEO) has become vital for capitalizing on a business, because SEO is a key factor in marketing an online business. SEO is the process of optimizing a website so that it ranks well on Search Engine Result Pages (SERPs). Dynamic websites are commonly used for e-commerce because they are easier to update and expand; however, they are subject to indexing-related problems. Objectives: This research aims to examine and address the indexing-related issues of dynamic websites. To achieve this, I explore indexing considerations for dynamic websites, investigate SEO tools for running an SEO campaign on three major search engines (Google, Yahoo and Bing), experiment with SEO techniques, and determine to what extent dynamic websites can be made search-engine friendly on these major search engines. Methods: A detailed literature survey is performed to evaluate the existing knowledge on SEO for dynamic websites. Empirical experiments are then conducted to address the indexing problems of dynamic websites and to evaluate the SEO techniques used in those experiments. Results: It is found that the major search engines, including Google, cannot fully index dynamic websites. I used SEO techniques explored during this study to help dynamic webpages get indexed by the major search engines. The experimental results reflect the effectiveness of SEO techniques, including URL encoding and friendly URLs, on the major search engines. Conclusions: Dynamic websites are subject to indexing-related problems and require additional SEO effort to appear in SERPs. Not all SEO techniques are equally effective on all search engines in improving the indexing of dynamic webpages; each implemented technique leaves a different impression on the major search engines (Google, Yahoo, Bing, Ask, and AOL). The encoded-URLs technique is effective on all major search engines, but Yahoo and Bing prefer friendly URLs over typical parameterized URLs. Therefore, investing in how dynamic URLs are presented can pay off when a dynamic website needs to be indexed by search engines other than Google.
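As an illustration of the friendly-URL idea examined above, the sketch below shows one common rewriting scheme: a parameterized, dynamic product URL is mapped to a keyword-bearing slug URL. The URL pattern and the catalogue are assumptions made for illustration, not the exact setup used in the study.

```python
# One possible friendly-URL scheme (illustrative; not the study's exact setup):
# map a dynamic, parameter-driven URL to a static-looking, keyword-bearing URL.
import re
from urllib.parse import urlparse, parse_qs

def friendly_url(dynamic_url, product_names):
    """e.g. /product.php?id=42&cat=7  ->  /products/42-garden-hose-25-m"""
    parts = urlparse(dynamic_url)
    params = parse_qs(parts.query)
    product_id = params["id"][0]
    name = product_names.get(product_id, "item")
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    return f"/products/{product_id}-{slug}"

catalog = {"42": "Garden Hose 25 m"}          # hypothetical product catalogue
print(friendly_url("/product.php?id=42&cat=7", catalog))
```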
116 |
Designing and implementing an architecture for single-page applications in Javascript and HTML5. Petersson, Jesper, January 2012 (has links)
A single-page application is a website that retrieves all needed components in a single page load. The intention is a user experience that feels more like a native application than a website. Single-page applications written in Javascript are becoming more and more popular, but as the size of an application grows, so does its complexity; a good architecture or a suitable framework is therefore needed. The thesis begins by analyzing a number of design patterns suitable for applications with a graphical user interface. Based on a composition of these design patterns, an architecture targeting single-page applications was designed. The architecture was designed to make applications easy to develop, test and maintain. Initial loading time, data synchronization and search engine optimization were also important aspects that were considered. A framework based on the architecture was implemented, tested and compared against other frameworks available on the market. The implemented framework is modular and supports routing, templates and a number of drivers for communicating with a server-side database. The modules were designed with a variant of the Model-View-Controller (MVC) pattern, in which a presentation model is introduced between the controller and the view. This allows unit tests to bypass the user interface and instead communicate directly with the core of the application. After minification and compression, the size of the framework is only 14.7 kB including all its dependencies, which results in a low initial loading time. Finally, a solution that allows a Javascript application to be indexed by a search engine is presented. It is based on PhantomJS, which produces a static snapshot that can be served to the search engines. The solution is fast, scalable and easy to maintain.
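The snapshot-serving idea can be summarized with a small sketch: requests identified as coming from a search engine crawler receive the pre-rendered HTML produced by a headless browser such as PhantomJS, while ordinary visitors receive the Javascript application shell. The crawler detection rule and the snapshot directory layout below are assumptions for illustration, not details taken from the thesis.

```python
# Simplified sketch of serving pre-rendered snapshots to crawlers.
# Crawler detection and the snapshot layout are assumptions for illustration.
from pathlib import Path

CRAWLER_TOKENS = ("googlebot", "bingbot", "yandexbot", "duckduckbot")
SNAPSHOT_DIR = Path("snapshots")          # pre-rendered HTML, one file per route
APP_SHELL = "<!doctype html><div id='app'></div><script src='app.js'></script>"

def is_crawler(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(token in ua for token in CRAWLER_TOKENS)

def respond(path: str, user_agent: str) -> str:
    """Return snapshot HTML for crawlers, the single-page app shell otherwise."""
    if is_crawler(user_agent):
        name = path.strip("/").replace("/", "_") or "index"
        snapshot = (SNAPSHOT_DIR / name).with_suffix(".html")
        if snapshot.exists():
            return snapshot.read_text(encoding="utf-8")
    return APP_SHELL

print(respond("/articles/42", "Mozilla/5.0 (compatible; Googlebot/2.1)")[:40])
```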
117 |
Document Clustering Interface. Johnson, Samuel, January 2014 (has links)
This project created a first prototype interface for a document clustering search engine. The goal is to meet the needs of people with reading difficulties while also being a useful tool for general users who are trying to find relevant but easy-to-read documents. The hypothesis is that minimizing the amount of text and focusing on graphical representation will make the service easier to use for all users. The interface was developed using previously established personas and was evaluated by general users (i.e. not users with reading disabilities) in order to see whether the interface was easy to use and to understand without tooltips and tutorials. The results showed that even though the participants understood the interface and found it intuitive, they felt some information was still missing, such as an explanation of the readability indexes and of how they determine readability.
118 |
Online Institutions, Markets, and Democracy. Hong, Sounman, 01 June 2017 (has links)
In this dissertation, I explore the implications of advances in information and communication technology for democracy. In particular, I examine the roles of online institutions (search engines, news aggregators, and social media) in information readership and political outcomes. In Chapter 1, I show that information consumption is more concentrated and polarized in online news traffic than in offline newspaper circulation. I then show that this pattern arises not because online traffic better reflects people's demand, but because online institutions generate a cascade. Using this evidence, I argue that online institutions produce a trade-off between the benefits people gain when accessing information and the costs of the cascade. In Chapter 3, I compare information consumption patterns across various online institutions. In Chapter 2, I explain why the cascade may become increasingly significant over time. An increase in Internet users implies not only a reduced digital divide but also an even more concentrated and polarized pattern of online information consumption, since, ceteris paribus, the magnitude of the cascade grows with the number of Internet users. I then empirically show a positive association between the traffic to an online institution and the estimated magnitude of the cascade observed on that site. In Chapter 4, I show that the observed concentration and polarization of online information consumption may affect political outcomes: if this consumption pattern affects political behavior, the same pattern should appear in measurable political outcomes. I test this prediction by investigating the association between U.S. Representatives' use of Twitter and their fundraising. The evidence suggests that, after politicians started using Twitter, the individual contributions they collected became more concentrated, ideologically polarized, and geographically diverse. Finally, I discuss the implications of these findings for political equality, polarization, and democracy. In sum, online institutions may make political outcomes more concentrated and polarized. Given that a significant part of the observed concentration and polarization can be attributed to the cascade effect, this dissertation challenges the notion that Internet-mediated political actions or communications will necessarily promote democracy.
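One way to make "concentration" of readership measurable is a Herfindahl-Hirschman-style index over outlets' traffic shares. The dissertation does not prescribe this particular index; the sketch below is only an illustration, and every share figure in it is invented.

```python
# Illustration only: a Herfindahl-Hirschman-style concentration index over
# outlet traffic shares. The dissertation does not prescribe this exact
# measure, and every figure below is invented.

def concentration_index(visits):
    """Sum of squared traffic shares; higher means more concentrated readership."""
    total = sum(visits)
    return sum((v / total) ** 2 for v in visits)

offline_circulation = [30, 25, 20, 15, 10]    # hypothetical print market shares
online_traffic      = [60, 20, 10, 6, 4]      # same outlets, hypothetical web visits

print(f"offline HHI = {concentration_index(offline_circulation):.3f}")
print(f"online  HHI = {concentration_index(online_traffic):.3f}")   # higher value
```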
119 |
Using Wikipedia Knowledge and Query Types in a New Indexing Approach for Web Search Engines. Al-Akashi, Falah Hassan Ali, January 2014 (has links)
The Web comprises a vast quantity of text. Modern search engines struggle to index it independently of the structure of queries and the type of Web data, and commonly use indexing based on the Web's graph structure to identify high-quality relevant pages. However, despite the apparent widespread use of these algorithms, Web indexing based on human feedback and document content remains controversial. Many fundamental questions need to be addressed, including: How many types of domains/websites are there on the Web? What type of data is in each type of domain? For each type, which segments/HTML fields in the documents are most useful? What are the relationships between the segments? How can Web content be indexed efficiently across all document configurations? Our investigation of these questions has led to a novel way of using Wikipedia to find the relationships between query structures and document configurations throughout the document indexing process, and of using them to build an efficient index that allows fast indexing and searching and optimizes the retrieval of highly relevant results. We consider the top page on the ranked list to be highly important in determining the type of a query. Our aim is to design a powerful search engine with a strong focus on making the first page highly relevant to the user and on retrieving other pages based on that first page. By processing the user query against the Wikipedia index and determining the type of the query, our approach can trace the path of a query in our index and retrieve specific results for each type.
We use two kinds of data to increase the relevancy and efficiency of the ranked results: offline and real-time. Traditional search engines find it difficult to use these two kinds of data together, because building a real-time index from social data and integrating it with the index for the offline data is difficult in a traditional distributed index.
As a source of offline data, we use data from the Text Retrieval Conference (TREC) evaluation campaign. The Web track at TREC offers researchers the chance to investigate different retrieval approaches for Web indexing and searching. The crawled offline dataset makes it possible to design powerful search engines that extend current methods, and to evaluate and compare them.
We propose a new indexing method based on the structures of the queries and the content of documents. Our search engine uses a core index for offline data and a hash index for real-time data, which leads to improved performance. The TREC Web track evaluation of our experiments showed that our approach can be successfully employed for different types of queries. We evaluated our search engine on different sets of queries from the TREC 2010, 2011 and 2012 Web tracks. Our approach achieved very good results on the TREC 2010 training queries. On the TREC 2011 testing queries, our approach was one of the six best compared with all other approaches (including those that used the very large corpus of 500 million documents), and it was second best compared with approaches that, like ours, used only part of the corpus (50 million documents). On the TREC 2012 testing queries, our approach was second best compared with all the approaches, and first compared only with systems that used the 50-million-document subset.
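The core-index-plus-hash-index idea can be pictured with a small sketch: query terms are looked up both in a prebuilt inverted ("core") index over the offline collection and in an in-memory hash index that absorbs real-time documents, and the postings are merged. This is a simplification with an invented term-overlap scoring, not the thesis' actual implementation.

```python
# Simplified sketch (not the thesis' implementation): merge postings from a
# prebuilt "core" inverted index over offline documents with an in-memory
# hash index holding real-time documents. Scoring is a plain term-overlap count.
from collections import defaultdict

core_index = {                       # offline collection: term -> document ids
    "wikipedia": {"clueweb-001", "clueweb-007"},
    "indexing":  {"clueweb-007", "clueweb-015"},
}
realtime_index = defaultdict(set)    # filled as real-time documents arrive

def add_realtime(doc_id, text):
    for term in text.lower().split():
        realtime_index[term].add(doc_id)

def search(query):
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc in core_index.get(term, set()) | realtime_index.get(term, set()):
            scores[doc] += 1
    return sorted(scores, key=scores.get, reverse=True)

add_realtime("tweet-42", "new wikipedia indexing approach")
print(search("wikipedia indexing"))  # offline and real-time hits ranked together
```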
120 |
Analýza komunikačního mixu společnosti Memos Software s.r.o. / Analysis of the communication mix of the Memos Software s.r.o. company. Pikartová, Ivana, January 2007 (has links)
The thesis focuses on analyzing the communication mix of Memos Software s.r.o., a company that develops custom software. Before the main analysis, a theoretical part explains the traditional components of the communication mix and then turns to the communication mix carried out through the Internet. The practical part introduces the analyzed company and describes the current state of its communication mix. Based on the analysis, recommendations are given at the end of the thesis with the aim of remedying the identified shortcomings and optimizing the company's communication mix.