211

Cloud enabled data analytics and visualization framework for health-shock prediction

Mahmud, S. January 2016
Health-shock can be defined as a health event that causes severe hardship to a household through the financial burden of healthcare payments and the income lost through inability to work. It is one of the most prevalent shocks faced by people in underdeveloped and developing countries. In Pakistan especially, policy makers and the healthcare sector face an uphill battle in dealing with health-shock due to the lack of a publicly available dataset and an effective data analytics approach. To address this problem, this thesis presents a data analytics and visualization framework for health-shock prediction based on a large-scale health informatics dataset. The framework is built on Amazon Web Services cloud computing services integrated with Geographical Information Systems (GIS) to facilitate the capture, storage, indexing and visualization of big data for different stakeholders using smart devices. The data was collected through offline questionnaires and an online mobile-based system through Begum Memhooda Welfare Trust (BMWT). All data was coded in the online system for analysis and visualization. To develop a predictive model for health-shock, a user study was conducted to collect a multidimensional dataset from 1000 households in rural and remotely accessible regions of Pakistan, focusing on their health, access to healthcare facilities and social welfare, as well as economic and environmental factors. The collected data was used to generate a predictive model using a fuzzy rule summarization technique, which provides stakeholders with interpretable linguistic rules explaining the causal factors affecting health-shock. The evaluation of the proposed system in terms of the interpretability and accuracy of the generated data models for classifying health-shock shows promising results.
The prediction accuracy of the fuzzy model, based on k-fold cross-validation of the data samples, is above 89% in predicting health-shock from the given factors. Such a framework will not only help the government and policy makers to manage and mitigate health-shock effectively and in a timely manner, but will also provide a low-cost, flexible, scalable, and secure architecture for data analytics and visualization. Future work includes extending this study to form Pakistan's first publicly available health informatics tool to help government and healthcare professionals form policies and healthcare reforms. This study has implications at a national and international level for facilitating large-scale health data analytics through cloud computing, minimizing the resource commitments needed to predict and manage health-shock.
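The k-fold scheme used to validate the fuzzy model can be sketched in a few lines. The following is a generic illustration of k-fold cross-validation, not the thesis's actual fuzzy classifier; the toy dataset and the midpoint-threshold model in the example are purely hypothetical stand-ins.

```python
import random

def k_fold_indices(n_samples, k, seed=42):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(samples, labels, train_fn, predict_fn, k=10):
    """Mean accuracy over k rounds, each holding out one fold for testing."""
    folds = k_fold_indices(len(samples), k)
    scores = []
    for i, test_idx in enumerate(folds):
        # Train on every fold except the i-th, then score on the held-out fold.
        train_idx = [j for m, fold in enumerate(folds) if m != i for j in fold]
        model = train_fn([samples[j] for j in train_idx],
                         [labels[j] for j in train_idx])
        hits = sum(predict_fn(model, samples[j]) == labels[j] for j in test_idx)
        scores.append(hits / len(test_idx))
    return sum(scores) / k

# Toy, linearly separable data: a threshold classifier stands in for the fuzzy model.
samples = [0, 1, 2, 3, 4, 15, 16, 17, 18, 19]
labels = [x >= 10 for x in samples]

def train_fn(xs, ys):
    pos = [x for x, y in zip(xs, ys) if y]
    neg = [x for x, y in zip(xs, ys) if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2  # midpoint threshold

accuracy = cross_validate(samples, labels, train_fn,
                          lambda thr, x: x >= thr, k=5)  # → 1.0 on this toy data
```

On real survey data the per-fold scores vary, and their mean is the kind of figure the above 89% refers to.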
212

Měření a zvyšování efektivity internetových kampaní / Measuring and improving effectiveness of internet campaigns

Zdarsa, Jan January 2010
This master's thesis deals with evaluating the effectiveness of internet campaigns. The theoretical and methodological part presents the online marketing channels and business models widely used in internet marketing, the setting of the right goals before a campaign launch, and the most widely used web analytics tools, including their technological difficulties and the correct procedure for evaluating campaigns. The practical part demonstrates the significant difference that multi-touch attribution makes when evaluating the effectiveness of internet campaigns. A secondary aim is to show that internet marketing is not measured as well as marketers think.
213

Návrh na optimalizáciu internetového predaja Triola a.s. / Optimalization recommendations for internet sales of Triola company

Dvořáková, Ivana January 2013
This diploma thesis demonstrates the importance of internet sales in today's world, using the example of a real company, Triola, which specializes in lingerie manufacturing. The objective of the thesis is a behavioral analysis of the company's customers, followed by an evaluation of this knowledge resulting in recommendations on how to optimize internet sales and how to enhance the site's design and functionality to achieve this. The analysis and its evaluation are both based on the latest research in this field, which describes customers' motivations for internet shopping and the correlation between web design and impulsive shopping. Part of the analysis is also a comparison of behavioral trends among customers worldwide, customers in the Czech Republic, and customers of Triola, aiming to identify trends that are likely to affect Triola in the future. The thesis also presents important findings about Triola's customers and the possible evolution of internet sales, and shows different approaches Triola can take to make internet shopping a pleasant experience for its customers and, as a result, to raise internet sales.
214

Vliv vývojových trendů na řešení projektu BI / The influence of trends in BI project

Kapitán, Lukáš January 2012
The aim of this thesis is to analyse the trends occurring in Business Intelligence. It examines, summarises and judges each trend from the point of view of its real-world usability and its influence on each phase of a Business Intelligence implementation. Each of these trends has positives and negatives that can influence the statements in the evaluation; these factors are taken into consideration and analysed as well. The advantages and disadvantages of the trends arise especially in the areas of economic cost and technical difficulty. The main aim is to compare the methods of implementing Business Intelligence with current trends in BI. To achieve this, a few crucial points were set: to investigate recent trends in BI and to define the implementation methods in the broadest terms. The expected benefit of this thesis is the aforementioned investigation and analysis of trends in the area of Business Intelligence and their use in implementation.
215

On-line marketing so zameraním na kampane prostredníctvom Google / Online marketing with focus on Google as a marketing tool

Bokaová, Katarína January 2012
Internet advertising is becoming more and more important, and media planners' budgets are moving from TV and print to the internet. Given the young age of the internet, and especially of online marketing, we can assume that this trend will grow stronger. This work addresses one of the biggest players on the internet, Google, and especially its tool for creating PPC ads. The first part covers the most important aspects of the current internet marketing sphere, and the second part is devoted to the specific topic of creating campaigns through Google AdWords and analysing their subsequent success using established metrics.
216

Advanced Analytics in Retail Banking in the Czech Republic / Prediktívna analytika v retailovom bankovníctve v Českej republike

Búza, Ján January 2014
Advanced analytics and big data allow a more complete picture of customers' preferences and demands. Through this deeper understanding, organizations of all types are finding new ways to engage with existing and potential customers. Research shows that companies using big data and advanced analytics in their operations have productivity and profitability rates 5 to 6 percent higher than their peers. At the same time, it is almost impossible to find a banking institution in the Czech Republic exploiting the potential of data analytics to its full extent. This thesis therefore focuses on exploring opportunities for banks that are applicable in the local context, taking into account technological and financial limitations as well as the market situation. The author conducts interviews with bank managers and management consultants familiar with the topic in order to evaluate theoretical concepts and best practices from around the world from the perspective of the Czech market environment, to assess the capability of local banks to exploit them, and to identify the main obstacles that stand in the way. Based on this, a general framework is proposed for bank managers who would like to use advanced analytics.
217

Visualization of intensional and extensional levels of ontologies / Visualização de níveis intensional e extensional de ontologias

Silva, Isabel Cristina Siqueira da January 2014
Visualization techniques have been used for the representation of ontologies to allow the comprehension of concepts and properties in specific domains. Techniques for visualizing ontologies should be based on effective graphical representations and interaction techniques that support user tasks related to different entities and aspects. Ontologies can be very large and complex due to the many levels of the class hierarchy as well as diverse attributes. In this work we propose a multiple, coordinated views approach for exploring the intensional and extensional levels of an ontology. We use linked tree structures that capture the hierarchical nature of parts of the ontology while preserving the different categories of classes. We also present a novel use of the Degree of Interest notion to reduce the complexity of the representation itself while drawing the user's attention to the main concepts for a given task. Through an automatic analysis of ontology aspects, we place the main concept in focus, distinguishing it from unnecessary information and facilitating the analysis and understanding of correlated data. In order to synchronize the proposed views, which can be easily adapted to different user tasks, and to implement this new Degree of Interest calculation, we developed an interactive ontology visualization tool called OntoViewer. OntoViewer was developed following an iterative cycle of refining designs and gathering user feedback, and the final version was evaluated by ten experts. As another contribution, we devised a set of guidelines to help the design and evaluation of visualization techniques for both the intensional and extensional levels of ontologies.
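The thesis introduces its own Degree of Interest calculation without detailing it in the abstract. As background, the classic Furnas formulation it builds on scores each node as a priori interest (shallower nodes matter more) minus tree distance from the current focus. A minimal sketch over a hypothetical class tree (node names invented for illustration):

```python
def depth(parent, node):
    """Edges from node up to the root (the dict maps each node to its parent; the root is absent)."""
    d = 0
    while node in parent:
        node = parent[node]
        d += 1
    return d

def tree_distance(parent, a, b):
    """Edges on the unique path between a and b in a rooted tree."""
    def chain(n):
        out = [n]
        while n in parent:
            n = parent[n]
            out.append(n)
        return out
    ca, cb = chain(a), chain(b)
    lca = next(n for n in ca if n in cb)  # lowest common ancestor
    return ca.index(lca) + cb.index(lca)

def degree_of_interest(parent, node, focus):
    """Furnas-style DOI: a priori interest (negative depth) minus distance to the focus."""
    return -depth(parent, node) - tree_distance(parent, node, focus)

# Hypothetical ontology fragment: Animal is the root class.
tree = {"Mammal": "Animal", "Fish": "Animal", "Dog": "Mammal", "Cat": "Mammal"}
```

With the focus on "Dog", sibling and cousin classes score lower and can be elided or de-emphasized, which is the pruning effect the text describes.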
218

Automated feature synthesis on big data using cloud computing resources

Saker, Vanessa January 2020
The data analytics process has many time-consuming steps. Combining data that sits in a relational database warehouse into a single relation while aggregating important information in a meaningful way and preserving relationships across relations, is complex and time-consuming. This step is exceptionally important as many machine learning algorithms require a single file format as an input (e.g. supervised and unsupervised learning, feature representation and feature learning, etc.). An analyst is required to manually combine relations while generating new, more impactful information points from data during the feature synthesis phase of the feature engineering process that precedes machine learning. Furthermore, the entire process is complicated by Big Data factors such as processing power and distributed data storage. There is an open-source package, Featuretools, that uses an innovative algorithm called Deep Feature Synthesis to accelerate the feature engineering step. However, when working with Big Data, there are two major limitations. The first is the curse of modularity - Featuretools stores data in-memory to process it and thus, if data is large, it requires a processing unit with a large memory. Secondly, the package is dependent on data stored in a Pandas DataFrame. This makes the use of Featuretools with Big Data tools such as Apache Spark, a challenge. This dissertation aims to examine the viability and effectiveness of using Featuretools for feature synthesis with Big Data on the cloud computing platform, AWS. Exploring the impact of generated features is a critical first step in solving any data analytics problem. If this can be automated in a distributed Big Data environment with a reasonable investment of time and funds, data analytics exercises will benefit considerably. In this dissertation, a framework for automated feature synthesis with Big Data is proposed and an experiment conducted to examine its viability. 
Using this framework, an infrastructure was built to support the process of feature synthesis on AWS, making use of S3 storage buckets, Elastic Compute Cloud (EC2) instances, and an Elastic MapReduce cluster. A dataset of 95 million customers, 34 thousand fraud cases and 5.5 million transactions across three different relations was then loaded into the distributed relational database on the platform. The infrastructure was used to show how the dataset could be prepared to represent a business problem, and Featuretools was used to generate a single feature matrix suitable for inclusion in a machine learning pipeline. The results show that the approach was viable. The feature matrix produced 75 features from 12 input variables and was time-efficient, with a total end-to-end run time of 3.5 hours and a cost of approximately R 814 (approximately $52). The framework can be applied to a different set of data and allows analysts to experiment on a small section of the data until a final feature set is decided; they can then easily scale the feature matrix to the full dataset. This ability to automate feature synthesis, iterate and scale up will save time in the analytics process while providing a richer feature set for better machine learning results.
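Deep Feature Synthesis essentially automates relational roll-ups like the one below. This is a dependency-free sketch of the idea, not the Featuretools API: the relation and column names are invented, and only the feature-naming style (e.g. SUM(transactions.amount)) mimics the package's convention.

```python
from collections import defaultdict
from statistics import mean

def synthesize_features(customers, transactions, key="customer_id"):
    """One level of aggregation-based feature synthesis: roll child rows
    (transactions) up into new columns on the parent relation (customers)."""
    grouped = defaultdict(list)
    for t in transactions:
        grouped[t[key]].append(t["amount"])
    matrix = []
    for c in customers:
        amounts = grouped.get(c[key], [])
        matrix.append({
            **c,
            "COUNT(transactions)": len(amounts),
            "SUM(transactions.amount)": sum(amounts),
            "MEAN(transactions.amount)": mean(amounts) if amounts else 0.0,
            "MAX(transactions.amount)": max(amounts, default=0.0),
        })
    return matrix

# Tiny illustration: two customers, one of whom has no transactions.
customers = [{"customer_id": 1}, {"customer_id": 2}]
transactions = [{"customer_id": 1, "amount": 10.0},
                {"customer_id": 1, "amount": 30.0}]
feature_matrix = synthesize_features(customers, transactions)
```

Real Deep Feature Synthesis stacks such aggregation primitives across multiple relations and depths, which is how feature matrices like the 75-feature one described above are built.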
219

Three-Component Visual Summary: A Design to Support Casual Experts in Making Data-Driven Decisions

Calvin Yau (8746482) 24 April 2020
Recent advancements in data-collecting technologies have posed new opportunities and challenges for making data-driven decisions. While visual analytics can be a powerful tool for exploring large datasets and extracting relevant insights to support data-driven decisions, many decision-makers lack the time or the technical expertise to use visual analytics effectively. It is more common for data analysts to explore data through visual analytics and report their findings to the decision-makers. However, the communication gap between data analysts and decision-makers limits the decision-makers' ability to make optimal data-driven decisions. I present a Three-Component Visual Summary to allow accurate and efficient extraction of insights relevant to the decisions and to provide context to validate the insights retrieved. The Three-Component Visual Summary design creates visual summaries by combining visual representations of representative data, analytical highlights, and the data envelope. This design incorporates a high-level summary, the relevant analytical insights, and detailed explorations into one coherent visual representation, which addresses the potential training gaps and limited time available for visual analytics. I demonstrate how the design can be applied to four major data types commonly used in commercial visual analytics tools. The evaluations show that the design allows more accurate and efficient knowledge retrieval and a more comprehensive understanding of the data and of the insights generated, making it more accessible to decision-makers who are casual experts. Finally, I summarize the insights gained from the design process and the feedback received, and provide a list of recommendations for designing a Three-Component Visual Summary.
220

An Empirical Analysis of Network Traffic: Device Profiling and Classification

Anbazhagan, Mythili Vishalini 02 July 2019
Time and again we have seen the Internet grow and evolve at an unprecedented scale. The number of online users in 1995 was 40 million, but by 2020 the number of online devices is predicted to reach 50 billion, roughly 7 times the human population on Earth. Until now, the revolution was in the digital world; now it is happening in the physical world we live in. IoT devices are employed in all sorts of environments: domestic houses, hospitals, industrial spaces, nuclear plants and more. Since they are deployed in many mission-critical or even life-critical environments, their security and reliability are of paramount importance, because compromising them can lead to grave consequences. IoT devices are, by nature, different from conventional Internet-connected devices like laptops and smartphones: they have small memory, limited storage and low processing power, and they operate with little to no human intervention. Hence it becomes very important to understand IoT devices better. How do they behave in a network? How different are they from traditional Internet-connected devices? Can they be identified from their network traffic? Is it possible for anyone to identify them just by looking at the network data that leaks outside the network, without even joining it? That is the aim of this thesis. To the best of our knowledge, no study has collected data from outside the network, without joining it, with the intention of finding out whether IoT devices can be identified from such data. We also identify parameters that separate IoT from non-IoT devices. We then group similar devices manually, and afterwards automatically using clustering algorithms. This helps in grouping devices of a similar nature and creating a profile for each kind of device.
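The abstract does not say which clustering algorithms were used. As an illustration only, a plain k-means over per-device traffic features (the feature choice here is hypothetical) is enough to separate small, regular beacon-like traffic from large, bursty general-purpose traffic:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid, recompute
    centroids as cluster means, and repeat until assignments stabilize."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data itself
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        new = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, clusters

# Hypothetical per-device features: (mean packet size in bytes, packets per minute).
features = [(60, 2), (64, 3), (58, 2), (900, 40), (950, 38), (880, 42)]
centroids, clusters = kmeans(features, k=2)
```

On these toy features the two clusters recover the small-packet, low-rate group and the large-packet, high-rate group, which is the kind of device profile the thesis builds per group.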
