121

“De är kompisar” (“They are friends”): A study of the opportunities to extract business value from the growing amounts of data generated by the Internet of Things

Bjellman, Evelina, Gunnarsson, Anton January 2016
No description available.
122

Investigation into the opportunities presented by big data for the 4C Group

Spence, William MacDonald 04 1900
Thesis (MBA)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: The telecommunications industry generates vast amounts of data on a daily basis. The exponential growth in this industry has increased the number of nodes that generate data on a near real-time basis, and the processing power required to handle all this information has grown as well. Organisations in other industries have experienced the same growth in information processing, and, in recent years, professionals in the Information Systems (IS) industry have come to refer to these challenges as the concept of Big Data (BD). This theoretical research investigated the definition of big data as given by several leading players in the industry. It further focussed on several key areas of the big data era: i) the common attributes of big data; ii) how organisations respond to big data; and iii) the opportunities that big data presents to organisations. A selection of case studies is presented to determine what other players in the IS industry do to exploit big data opportunities. The study showed that the concept of big data has emerged because IT infrastructure struggles to cope with the increased volume, variety and velocity of the data being generated, and that organisations find it difficult to incorporate the results of new and advanced mining and analytical techniques into their operations in order to extract the maximum value from their data. The study further found that big data affects each component of the modern-day computer-based information system, and the exploration of several practical cases highlighted how different organisations have addressed the big data phenomenon in their IS environments. Using this information, the study investigated the 4C Group's business model and identified some key opportunities for this IT vendor in the big data era. As the 4C Group has positioned itself across the ICT value chain, big data presents several good opportunities to explore in all components of the IS. While training and consulting can establish the 4C Group as a big-data-knowledgeable vendor, enhancements to its application software functionality can provide additional big data opportunities. And since true big data value only arises from the use of data in daily decision-making processes, offering IaaS would enable the 4C Group's clients to achieve the elusive goal of becoming data-driven organisations.
123

När magin försvinner (When the magic disappears): A study of the significance of artificial intelligence for companies

Ljungberg, Emil, Norberg, Fredrik January 2015
This thesis examines the phenomenon of artificial intelligence and the function it currently serves for companies. The first part of the thesis consists of a pilot study in which the concept of artificial intelligence (AI) is clarified. The pilot study builds a better understanding of AI, which is combined with theory on technology implementation in companies to create an analytical model; in the second part of the thesis, this model is applied to the case company Zecobyte. The conclusions reached are that Zecobyte works with AI technology, but that the technology does not really differ much from any other kind of IT technology. The company develops innovative products that exploit the big data phenomenon, and their customer benefits likewise do not differ markedly from those generated by other IT technology. We also note that AI holds considerable potential and conclude by opening the way for further studies in the area.
124

Modelos de negocios basados en datos (Data-driven business models): The challenges of big data in Latin America

Alcaíno Ruiz, Myrla, Arenas Miranda, Valeska, Gutiérrez Baeza, Felipe 11 1900
Seminar for the degree of Ingeniero Comercial, Mención Administración / This study aims to identify the main data-driven business models developed by startups in Latin America and to determine the characteristics that define each of the models found. The sample comprised 82 companies from 7 countries listed on AngelList and Start-Up Chile. Among the specific objectives, the study seeks to establish which innovation patterns these companies are developing and how they might evolve. The methodology follows the CRISP-DM technique. Among its main stages, Data Understanding stands out: there the lines of investigation pursued in this work are explained, specifically the qualitative analysis, conducted through semi-structured interviews, and the quantitative analysis, conducted through the examination of company web pages. The interview results were examined through content analysis, and the data collected from the web analysis were processed with three types of algorithms: K-means cluster analysis, X-means, and decision trees. The results were represented on a positioning map. Regarding the findings, the interviews showed that, with respect to innovation patterns, these companies do not match any of the patterns previously identified in the work of Parmar, Mackenzie, Cohn and Gann (2014); however, most highlight the importance of collecting information from different industries, the relevance of partnering with other companies, and the possibility of standardising some of their processes to sell them as independent products. From the web analysis, six different types of business models were identified, characterised by performing more than one key activity, focusing on more complex analyses than those reported in the earlier work of Hartmann et al. (2014), and carrying out monitoring processes. Both internal and external data sources are used, and the main value proposition concerns the delivery of information and knowledge. The great majority of the companies examined target the B2B segment. Additionally, a new key activity was identified, related to advising client companies once the results of the data processing have been delivered.
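The study's quantitative step clusters startups into six business-model types from features coded off their websites. As an illustration only, here is a minimal sketch of that kind of clustering step using Weka's SimpleKMeans; the thesis does not specify its tooling, and the file name startups.arff and the feature coding are hypothetical:

```java
import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class StartupClusters {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF file: one row per startup, numeric features
        // coded from the web-page analysis (key activities, data sources, ...)
        Instances data = new DataSource("startups.arff").getDataSet();

        SimpleKMeans kmeans = new SimpleKMeans();
        kmeans.setNumClusters(6);   // six business-model types, as found in the study
        kmeans.setSeed(42);         // fixed seed for reproducible clusters
        kmeans.buildClusterer(data);

        System.out.println(kmeans); // centroids summarise each model's traits
        for (int i = 0; i < data.numInstances(); i++) {
            System.out.printf("startup %d -> cluster %d%n",
                    i, kmeans.clusterInstance(data.instance(i)));
        }
    }
}
```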
125

Customer Churn Prediction Using Big Data Analytics

Tanneedi, Naren Naga Pavan Prithvi January 2016
Customer churn is always a grievous issue for the telecom industry, as customers do not hesitate to leave if they don't find what they are looking for. They want competitive pricing, value for money and, above all, high-quality service. Customer churn is directly related to customer satisfaction, and since the cost of customer acquisition is far greater than the cost of customer retention, retention is a crucial business priority. There is no standard model that accurately addresses the churn of global telecom service providers. Big data analytics with machine learning was found to be an efficient way to identify churn. This thesis aims to predict customer churn using big data analytics, namely a J48 decision tree in WEKA, a Java-based benchmark tool. Three datasets from various sources were considered: the first contains a telecom operator's six-month aggregate data-usage volumes for active and churned users; the second contains globally surveyed data; and the third comprises individual weekly data-usage records for 22 Android customers, along with average quality, annoyance and churn scores provided by accompanying theses. Statistical analyses and J48 decision trees were produced for the three datasets. In the statistics of the normalised volumes, autocorrelations were small; the confidence intervals were overlapping and close together, so no significant differences and no strong trends could be observed. The decision-tree analysis achieved accuracies of 52%, 70% and 95% for the three data sources respectively. Data preprocessing, data normalisation and feature selection proved highly influential, while monthly data volumes showed little decision power. Average quality, churn risk and, to some extent, annoyance scores may point out a probable churner. Weekly data volumes with a customer's recent history and attributes such as age, gender, tenure, bill, contract and data plan are pivotal for churn prediction.
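Since the abstract names its classifier (a J48 decision tree in WEKA), a minimal sketch of that setup may help; the dataset file churn.arff, its attributes and the parameter values below are hypothetical stand-ins, not the thesis's actual data:

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ChurnJ48 {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF file: per-customer usage volumes plus a nominal
        // class attribute churned = {yes, no} in the last column.
        Instances data = new DataSource("churn.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        J48 tree = new J48(); // WEKA's C4.5 decision-tree implementation
        tree.setOptions(new String[] {"-C", "0.25", "-M", "2"}); // default pruning

        // 10-fold cross-validation estimates accuracy, analogous to the
        // per-dataset accuracy figures the thesis reports.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());

        tree.buildClassifier(data); // final model trained on all data
        System.out.println(tree);   // print the learned tree rules
    }
}
```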
126

Implementing a Lambda Architecture to perform real-time updates

Gudipati, Pramod Kumar January 1900
Master of Science / Department of Computing and Information Sciences / William Hsu / The Lambda Architecture is a paradigm for big data that balances throughput, latency and fault tolerance in data processing. No single tool provides a complete solution with high accuracy, low latency and high throughput, which motivates combining a set of tools and techniques to build a complete big data system. The Lambda Architecture defines a set of layers into which these tools and techniques fit: the batch layer, the serving layer and the speed layer. Each layer satisfies a set of properties and builds on the functionality provided by the layers beneath it. The batch layer stores the master dataset, an immutable, append-only set of raw data, and precomputes results using a distributed processing system, such as Hadoop or Apache Spark, that can handle large quantities of data. The speed layer captures and processes new data as it arrives in real time. The serving layer contains a parallel query engine that takes results from both the batch and speed layers and responds to queries in real time with low latency. Stack Overflow is a question-and-answer forum with a huge user community and millions of posts, and it has grown rapidly over the years. This project demonstrates the Lambda Architecture by constructing a data pipeline that adds a new “Recommended Questions” section to the Stack Overflow user profile and updates the suggested questions in real time. Various statistics, such as trending tags and user performance numbers such as UpVotes and DownVotes, are also shown in the user dashboard by querying the batch processing layer.
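To make the layer interplay concrete, below is a minimal, illustrative sketch in plain Java of the serving layer's merge step; it is not taken from the project, and all class and method names are invented. Batch results replace the batch view wholesale, the speed layer records events as they arrive, and a query combines both views:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a serving-layer merge over hypothetical batch and
// speed views, both keyed by tag name.
public class ServingLayer {
    // Batch view: counts precomputed over the immutable master dataset
    // (e.g., by a Hadoop/Spark job); replaced wholesale after each batch run.
    private volatile Map<String, Long> batchView = new ConcurrentHashMap<>();
    // Speed view: increments for events that arrived since the last batch run.
    private final Map<String, Long> speedView = new ConcurrentHashMap<>();

    // The speed layer calls this for each new event, in real time.
    public void recordEvent(String tag) {
        speedView.merge(tag, 1L, Long::sum);
    }

    // The batch layer calls this when a recomputation finishes; the speed
    // view is cleared because its events are now in the batch view. A real
    // system coordinates this swap carefully to avoid missing or double-
    // counting events that arrive during the handover.
    public void swapBatchView(Map<String, Long> freshView) {
        batchView = new ConcurrentHashMap<>(freshView);
        speedView.clear();
    }

    // Queries merge both views: complete batch results plus recent events,
    // giving low-latency answers that stay up to date.
    public long count(String tag) {
        return batchView.getOrDefault(tag, 0L) + speedView.getOrDefault(tag, 0L);
    }
}
```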
127

Green Clusters

Vašut, Marek January 2015
The thesis evaluates the viability of reducing the power consumption of a contemporary computer cluster by using more power-efficient hardware components. The cluster in question runs a Map-Reduce algorithm implementation, and the worker nodes consist of either systems with an ARM CPU or systems that combine an ARM CPU and an FPGA in a single package. The behaviour of such a cluster is discussed from both the performance and the power-consumption perspective. The text discusses the problems and peculiarities of integrating ARM-based and especially combined ARM-FPGA-based systems into the Map-Reduce framework. The performance of the Map-Reduce framework itself is evaluated to identify the gravest performance bottlenecks when using the framework in an environment with ARM systems.
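For readers unfamiliar with the framework under test, a minimal Map-Reduce job of the kind such worker nodes would execute is sketched below, using the canonical Hadoop word-count pattern; it is illustrative only, and the thesis's actual workload is not specified here:

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // Map phase: each worker node (ARM or ARM-FPGA) tokenizes its input
  // split and emits (word, 1) pairs.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sums the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```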
128

Fostering collaboration amongst business intelligence, business decision makers and statisticians for the optimal use of big data in marketing strategies

De Koker, Louise January 2019
Philosophiae Doctor - PhD / The aim of this study was to propose a model of collaboration adaptable for the optimal use of big data in an organisational environment. There is a paucity of knowledge on such collaboration, and the research addressed this gap. More specifically, the research attempted to establish whether leadership, trust and knowledge sharing influence collaboration among the stakeholders identified at large organisations. The conceptual framework underlying this research was informed by collaboration theory and organisational theory. It was assumed that effective collaboration in the optimal use of big data is possibly associated with leadership, knowledge sharing and trust. These concepts were scientifically hypothesised to determine whether such associations exist within the context of big data. The study used a mixed-methods approach, combining a qualitative with a quantitative study. The qualitative study took the form of in-depth interviews with senior managers from different business units at a retail organisation in Cape Town. The quantitative study was an online survey of senior marketing personnel at JSE-listed companies from various industries in Cape Town. A triangulation methodology was adopted, with additional in-depth interviews of big data and analytics experts from both South Africa and abroad, to strengthen the research. The findings of the research indicate the changing role of the statistician in the era of big data and the new discipline of data science. They also confirm the importance of leadership, trust and knowledge sharing in ensuring effective collaboration. Of the three hypotheses tested, two were confirmed. While collaboration has been applied in many areas, unexpected findings of this research were the role the chief data officer plays in fostering collaboration among stakeholders in the optimal use of big data in marketing strategies, as well as the importance of organisational structure and culture for effective collaboration in the context of big data and data science in large organisations. The research has contributed to knowledge by extending the theory of collaboration to the domain of big data in the organisational context, with the proposal of an integrated model of collaboration in the context of big data. This model was grounded in the data collected from various sources, establishing the crucial new role of the chief data officer as part of the executive leadership and the main facilitator of collaboration in the organisation. In the proposed model, collaboration among the specified stakeholders, led by the chief data officer, occurs both horizontally with peers and vertically with specialists at different levels within the organisation. The application of such a model should facilitate successful collaborative efforts in data science, in the form of financial benefits to the organisation through the optimal use of big data.
129

Measuring metadata quality

Király, Péter 24 June 2019
No description available.
130

A competition policy for the digital age : An analysis of the challenges posed by data-driven business models to EU competition law

Sahlstedt, Andreas January 2019
The increasing volume and value of data in online markets, along with tendencies towards market concentration, make this an interesting research topic in the field of competition law. The purpose of this thesis is to evaluate how EU competition law could adapt to the challenges brought on by big data, particularly in relation to Art. 102 TFEU and the EUMR. Furthermore, this thesis analyses the intersection between privacy regulations and competition law. The characteristics of online markets and data are presented in order to describe the specific challenges that arise in online markets. By analysing previous case law of the ECJ as well as the Bundeskartellamt's Facebook investigation, this thesis concludes that privacy concerns could potentially be addressed within an EU competition law procedure. Such an approach might be particularly warranted in markets where privacy is a key parameter of competition. However, a departure from the traditionally price-centric enforcement of competition law is required in order to adequately address privacy concerns. The research presented in this thesis demonstrates the decreasing importance of market shares in the assessment of a dominant position in online markets, owing to the dynamic character of such markets. An increased focus on entry barriers appears necessary, and data can constitute an important such barrier. Additionally, consumer behaviour constitutes a source of market power in online markets, which warrants a shift towards behavioural economic analysis. The turnover thresholds of the EUMR do not appear to adequately capture data-driven mergers, as illustrated by the Facebook/WhatsApp merger; thresholds based on other parameters are therefore necessary. The value of data also increases the potential anticompetitive effects of vertical and conglomerate mergers, warranting an increased focus on such mergers.
