41

Investigating the Correlation Between Marketing Emails and Receivers Using Unsupervised Machine Learning on Limited Data: A comprehensive study using state-of-the-art methods for text clustering and natural language processing

Pettersson, Christoffer January 2016
The goal of this project is to investigate any correlation between marketing emails and their receivers using machine learning and only a limited amount of initial data. The data consist of roughly 1,200 emails and 98,000 receivers of these. Initially, the emails are grouped together based on their content using text clustering. They contain no information regarding prior labeling or categorization, which creates the need for an unsupervised learning approach that uses solely the raw text-based content as data. The project investigates state-of-the-art concepts like bag-of-words for calculating term importance and the gap statistic for determining an optimal number of clusters. The data is vectorized using term frequency-inverse document frequency (tf-idf) to determine the importance of terms relative to the document and to all documents combined. An inherent problem of this approach is high dimensionality, which is reduced using latent semantic analysis in conjunction with singular value decomposition. Once the resulting clusters have been obtained, the most frequently occurring terms for each cluster are analyzed and compared. Due to the absence of initial labeling, an alternative approach is required to evaluate the clusters' validity. To do this, the receivers in each cluster who actively opened an email are collected and investigated. Each receiver has attributes describing their purpose for using the service, along with some personal information. Once these were gathered and analyzed, the conclusion could be drawn that it is possible to find distinguishable connections between the resulting email clusters and their receivers, but only to a limited extent. Receivers from the same cluster showed similar attributes that distinguished them from the receivers of other clusters. Hence, the resulting email clusters and their receivers are specific enough to distinguish themselves from each other but too general to handle more detailed information. With more data, this could become a useful tool for determining which users of a service should receive a particular email, increasing the conversion rate and thereby reaching more relevant people based on previous trends.
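The pipeline this abstract describes maps naturally onto standard tooling: tf-idf vectorization, dimensionality reduction via truncated SVD (LSA), k-means clustering, and the gap statistic for choosing the number of clusters. The sketch below is a minimal illustration of that pipeline, not the thesis code; the `load_emails` helper, the component count, and the candidate range for the number of clusters are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

emails = load_emails()  # hypothetical loader; the thesis used ~1,200 emails

# tf-idf: weigh terms relative to one document and to the whole corpus
X = TfidfVectorizer(stop_words="english").fit_transform(emails)

# LSA: reduce the high-dimensional tf-idf space with truncated SVD
X_lsa = TruncatedSVD(n_components=min(100, X.shape[1] - 1)).fit_transform(X)

def gap_statistic(data, k, n_refs=10, seed=0):
    """Gap statistic: expected log within-cluster dispersion under a
    uniform reference distribution minus the dispersion on real data."""
    rng = np.random.default_rng(seed)
    def dispersion(d):
        return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(d).inertia_
    lo, hi = data.min(axis=0), data.max(axis=0)
    ref = [np.log(dispersion(rng.uniform(lo, hi, size=data.shape)))
           for _ in range(n_refs)]
    return np.mean(ref) - np.log(dispersion(data))

# pick the k with the largest gap, then cluster and inspect top terms
gaps = {k: gap_statistic(X_lsa, k) for k in range(2, 15)}
best_k = max(gaps, key=gaps.get)
labels = KMeans(n_clusters=best_k, n_init=10).fit_predict(X_lsa)
```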
42

A Smart Tale: An Examination of the Smart City Phenomena through the Lens of a Case Study

Habib, Abdulrahman 08 1900
This dissertation addresses research questions related to defining a smart city and the associated activities. The general research question is explored via three related studies, whose findings are presented in the results section as three essays that collectively examine the smart city phenomenon as it has emerged within the City. Essay 1 assesses the building of municipal open data capability. The study proposes an Opendata Roadmap Framework to enhance an organization's dynamic capability. The results provide a valuable practical framework to help cities develop open data capability, as well as a comparative benchmark for similar initiatives in other regional cities and across the nation. Essay 2 measures residents' understanding of and beliefs about smart cities. This portion of the research used a qualitative method that included interviewing residents and city officials to understand their definition of a smart city and what they believe makes a city smart. The interviews focused on understanding resident engagement because it is an important characteristic of a smart city, and the gap between city officials' and residents' understanding was examined. In addition, the interviews helped identify essential factors associated with smart cities, such as trust in government, perceived security, perceived privacy, trust in technology, and perceived monetary value. Essay 3 examines the acceptance of smart city technologies and the factors that affect their adoption. This essay uses the insights from the other two essays to propose a smart city extension of the Unified Theory of Acceptance and Use of Technology (UTAUT), labelled Smart City UTAUT (SC-UTAUT). The proposed model was tested using a survey method; the 1,786 valid responses were used to test the proposed structural equation model using Smart PLS. Results show a significant relationship between trust in technology, trust in government, perceived monetary value, effort expectancy, self-efficacy, and behavioral intention. The dissertation concludes with a summary of how the three essays make a cumulative contribution to the literature, as well as practical guidance for becoming a smart city.
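For readers curious what testing such a model looks like in code, here is a rough sketch of the SC-UTAUT structural model expressed as a covariance-based SEM with Python's semopy package. This is an approximation only: the dissertation used partial least squares via Smart PLS, the indicator names (tt1, tg1, ...) and the data file are invented placeholders, and the actual survey items are not given in the abstract.

```python
import pandas as pd
from semopy import Model

# Measurement model (invented 3-item scales) plus the structural paths
# the abstract reports as significant predictors of behavioral intention.
desc = """
TrustTech =~ tt1 + tt2 + tt3
TrustGov  =~ tg1 + tg2 + tg3
Value     =~ pv1 + pv2 + pv3
Effort    =~ ee1 + ee2 + ee3
Efficacy  =~ se1 + se2 + se3
Intention =~ bi1 + bi2 + bi3
Intention ~ TrustTech + TrustGov + Value + Effort + Efficacy
"""

df = pd.read_csv("sc_utaut_responses.csv")  # hypothetical survey export
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```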
43

Avaliação automática de questões discursivas usando LSA / Automatic assessment of essay questions using LSA

SANTOS, João Carlos Alves dos 05 February 2016
This work investigates the use of a model based on Latent Semantic Analysis (LSA) for the automatic assessment of short answers, averaging 25 to 70 words, to essay questions. With the emergence of virtual learning environments, research on automatic grading has become more relevant, since it enables low-cost mechanical grading of open-ended questions. In addition, automatic grading provides instant feedback and eliminates the work of manual correction, making it possible to run virtual classes with large numbers of students (hundreds or thousands). Research on automatic text assessment has been under way since the 1960s, but only in the current decade has it reached the accuracy required for practical use in educational institutions. For end users to have confidence, the research challenge is to develop assessment systems that are robust and whose accuracy approaches that of human graders. Although some studies point in this direction, many aspects remain to be explored. One is the use of bigrams with LSA: even though bigrams contribute little to accuracy, they contribute to robustness, which we can define as reliability, because they take the order of words within the text into account. Seeking to refine an LSA model toward better accuracy and greater robustness, we worked in four directions: first, we included word bigrams in the LSA model; second, we combined unigram and bigram co-occurrence models using multiple linear regression; third, we added an adjustment step to the LSA model's scores based on the number of words in the assessed answers; fourth, we analyzed the distribution of the scores assigned by the LSA model against those of human graders. To evaluate the results, we compared the system's accuracy against the accuracy of human graders, checking how closely the system approaches a human grader. We used an LSA model with five steps: 1) preprocessing, 2) weighting, 3) singular value decomposition, 4) classification, and 5) model adjustments. For each step, alternative strategies that influenced the final accuracy were explored. In the experiments we obtained an accuracy of 84.94% in a comparative evaluation against human experts, where the accuracy correlation between the human experts themselves was 84.93%. In the domain studied, the automatic assessment technology achieved results close to those of the human graders, showing that it is reaching a degree of maturity suitable for use in automatic assessment systems in virtual learning environments.
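As an illustration of the approach described above, the sketch below scores student answers by LSA similarity to a reference answer, separately for unigrams and bigrams, and combines the two models with multiple linear regression; the answer-length feature stands in for the word-count adjustment step. The helper functions (`load_answers`, `load_reference`, `load_scores`) and the component counts are hypothetical; this is not the thesis implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression
from sklearn.metrics.pairwise import cosine_similarity

def lsa_similarity(texts, reference, ngram):
    """Score each answer by cosine similarity to a reference answer
    in an LSA space built from n-grams of the given order."""
    vec = TfidfVectorizer(ngram_range=(ngram, ngram))
    X = vec.fit_transform(texts + [reference])
    k = min(50, X.shape[0] - 1, X.shape[1] - 1)
    Z = TruncatedSVD(n_components=k).fit_transform(X)
    return cosine_similarity(Z[:-1], Z[-1:]).ravel()

answers = load_answers()       # hypothetical: student answers
reference = load_reference()   # hypothetical: model answer
human_scores = load_scores()   # hypothetical: human graders' scores

# Features: unigram LSA similarity, bigram LSA similarity, answer length
uni = lsa_similarity(answers, reference, 1)
bi = lsa_similarity(answers, reference, 2)
length = np.array([len(a.split()) for a in answers])
features = np.column_stack([uni, bi, length])

# Multiple linear regression combines the unigram and bigram models,
# with answer length supporting the word-count adjustment step
reg = LinearRegression().fit(features, human_scores)
predicted = reg.predict(features)
```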
44

Metody sumarizace dokumentů na webu / Methods of Document Summarization on the Web

Belica, Michal January 2013
The work deals with the automatic summarization of documents in HTML format; Czech has been chosen as the language of the web documents. The project focuses on text summarization algorithms. The work also covers document preprocessing for summarization and the conversion of text into a representation suitable for summarization algorithms. General text mining is briefly discussed, but the project is mainly focused on automatic document summarization. Two simple summarization algorithms are introduced; the main attention is then paid to an advanced algorithm that uses latent semantic analysis. The result of the work is the design and implementation of a summarization module for the Python language. The final part of the work contains an evaluation of the summaries generated by the implemented summarization methods and the author's subjective comparison of them.
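The abstract does not reproduce the algorithm itself, but LSA-based summarization of the kind it refers to is commonly implemented along these lines: build a term-sentence matrix, decompose it with SVD, and score each sentence by its weight in the strongest latent topics (the Steinberger-Ježek variant). A minimal sketch, with the topic-count heuristic as an assumption:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def lsa_summarize(sentences, num_sentences=3):
    """Rank sentences with LSA: build a term-sentence matrix, apply SVD,
    and score each sentence by the norm of its SVD-scaled coordinates
    in the reduced topic space."""
    # terms x sentences matrix of raw term counts
    X = CountVectorizer().fit_transform(sentences).T.toarray().astype(float)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = max(1, len(s) // 2)  # keep the strongest half of the latent topics
    scores = np.sqrt(((s[:k, None] * Vt[:k]) ** 2).sum(axis=0))
    top = sorted(np.argsort(scores)[-num_sentences:])  # keep document order
    return [sentences[i] for i in top]
```

In use, the sentences would come from a preprocessed HTML document, e.g. `lsa_summarize(sentences, num_sentences=10)` to extract a ten-sentence summary.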
