121

När magin försvinner: En studie om artificiell intelligens betydelse för företag (When the magic disappears: A study of the significance of artificial intelligence for companies)

Ljungberg, Emil, Norberg, Fredrik January 2015 (has links)
This thesis examines the phenomenon of artificial intelligence and the function it currently fulfils for companies. The first part of the thesis consists of a pilot study in which the concept of artificial intelligence (AI) is clarified. The pilot study creates a better understanding of AI, which is combined with theory on technology implementation in companies to build an analytical model; in the second part of the thesis, this model is applied to the case company Zecobyte. The conclusions reached are that Zecobyte works with AI technology, but that the technology does not really differ markedly from any other kind of IT technology. The company develops innovative products that exploit the Big Data phenomenon, with various customer benefits which likewise do not differ markedly from those generated by other IT technology. We also note that AI holds considerable potential, and we close by opening the way for further studies in the area.
122

Modelos de negocios basados en datos: desafíos del Big Data en Latinoamérica (Data-driven business models: Big Data challenges in Latin America)

Alcaíno Ruiz, Myrla, Arenas Miranda, Valeska, Gutiérrez Baeza, Felipe 11 1900 (has links)
Seminar for the degree of Ingeniero Comercial, Mención Administración / This study aims to identify the main data-driven business models developed by startups in Latin America and to determine which characteristics define each of the models found. The sample comprised a universe of 82 companies from 7 countries, listed on AngelList and Start-Up Chile. Among the specific objectives, the study seeks to establish which innovation patterns these companies are developing and how they might evolve. The methodology follows the CRISP-DM process. Among its main stages, Data Understanding stands out; it lays out the lines of investigation this work pursued, specifically a qualitative strand, based on semi-structured interviews, and a quantitative strand, based on the analysis of web pages. The interview results were examined by means of content analysis, and the data collected from the web analysis were processed with three kinds of algorithms: K-means cluster analysis, X-means cluster analysis and decision trees. The results were represented on a positioning map. Regarding innovation patterns, the interviews showed that these companies do not identify with any of the patterns previously described by Parmar, Mackenzie, Cohn, and Gann (2014); however, most highlight the importance of collecting information from different industries, the relevance of partnering with other companies, and the possibility of standardizing some of their processes in order to sell them as stand-alone products. From the web analysis, six different types of business models were identified, characterized by performing more than one key activity, focusing on more complex analyses than those reported in the earlier work of Hartmann et al. (2014), and carrying out monitoring processes. Both internal and external data sources are used, and the main value proposition concerns the delivery of information and knowledge. The vast majority of the companies examined target the B2B segment. Additionally, a new key activity was identified, related to advising client companies once the results of the data processing have been delivered.
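For readers unfamiliar with the clustering step named above, the following is a minimal sketch of the K-means procedure applied to two-dimensional feature vectors of the kind that might be derived from web pages. It is an illustration only, not the study's implementation: the toy data, k = 2 and the naive centroid initialization are all assumptions.

    import java.util.Arrays;

    // Toy K-means over 2-D feature vectors (hypothetical data, k = 2, naive init).
    public class KMeansSketch {
        public static void main(String[] args) {
            double[][] points = {{1, 2}, {1.5, 1.8}, {5, 8}, {8, 8}, {1, 0.6}, {9, 11}};
            double[][] centroids = {points[0].clone(), points[2].clone()};
            int[] assignment = new int[points.length];
            for (int iter = 0; iter < 100; iter++) {
                // Assignment step: attach every point to its nearest centroid.
                for (int i = 0; i < points.length; i++) {
                    assignment[i] =
                        sqDist(points[i], centroids[0]) <= sqDist(points[i], centroids[1]) ? 0 : 1;
                }
                // Update step: move each centroid to the mean of its assigned points.
                for (int c = 0; c < centroids.length; c++) {
                    double sx = 0, sy = 0;
                    int n = 0;
                    for (int i = 0; i < points.length; i++) {
                        if (assignment[i] == c) { sx += points[i][0]; sy += points[i][1]; n++; }
                    }
                    if (n > 0) centroids[c] = new double[]{sx / n, sy / n};
                }
            }
            System.out.println("assignments: " + Arrays.toString(assignment));
            System.out.println("centroids:   " + Arrays.deepToString(centroids));
        }

        // Squared Euclidean distance; the ordering is the same as for the true distance.
        static double sqDist(double[] a, double[] b) {
            double dx = a[0] - b[0], dy = a[1] - b[1];
            return dx * dx + dy * dy;
        }
    }

X-means, also used in the study, extends this procedure by choosing the number of clusters automatically rather than fixing k in advance.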
123

Customer Churn Prediction Using Big Data Analytics

TANNEEDI, NAREN NAGA PAVAN PRITHVI January 2016 (has links)
Customer churn is always a grievous issue for the telecom industry, as customers do not hesitate to leave if they don't find what they are looking for. They want competitive pricing, value for money and, above all, high-quality service. Customer churn is directly related to customer satisfaction, and since the cost of customer acquisition is far greater than the cost of customer retention, retention is a crucial business priority. There is no standard model that accurately addresses the churn issues of global telecom service providers. Big Data analytics with machine learning was found to be an efficient way of identifying churn. This thesis aims to predict customer churn using Big Data analytics, namely a J48 decision tree in the Java-based benchmark tool WEKA. Three datasets from different sources were considered: the first contains a telecom operator's six-month aggregate data-usage volumes for active and churned users; the second contains globally surveyed data; and the third comprises individual weekly data-usage records of 22 Android customers, together with their average quality, annoyance and churn scores from accompanying theses. Statistical analyses and J48 decision trees were produced for all three datasets. In the statistics of the normalized volumes, autocorrelations were small, giving reliable confidence intervals; but the confidence intervals were overlapping and close together, so little significance could be established and no strong trends could be observed. The decision-tree analysis achieved accuracies of 52%, 70% and 95% for the three data sources respectively. Data preprocessing, data normalization and feature selection proved highly influential. Monthly data volumes showed little decision power. Average quality, churn risk and, to some extent, annoyance scores may point to a probable churner. Weekly data volumes with a customer's recent history and attributes such as age, gender, tenure, bill, contract and data plan are pivotal for churn prediction.
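To make the tooling concrete, the sketch below trains and cross-validates a J48 tree through WEKA's Java API, the combination the thesis names. It is not the thesis pipeline: the file name churn.arff, the class attribute being last, and the J48 options shown (WEKA's defaults) are assumptions.

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class ChurnJ48 {
        public static void main(String[] args) throws Exception {
            // Load a churn dataset in WEKA's ARFF format (file name is hypothetical).
            Instances data = new DataSource("churn.arff").getDataSet();
            // Assume the churn/no-churn label is the last attribute.
            data.setClassIndex(data.numAttributes() - 1);

            // C4.5-style decision tree; -C is the pruning confidence,
            // -M the minimum number of instances per leaf (WEKA defaults).
            J48 tree = new J48();
            tree.setOptions(new String[] {"-C", "0.25", "-M", "2"});
            tree.buildClassifier(data);

            // 10-fold cross-validation, as is conventional for modest datasets.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(tree, data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
            System.out.println(tree); // the pruned tree in readable form
        }
    }

The summary printed by Evaluation is the source of accuracy figures of the kind the thesis reports for its three datasets.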
124

Implementing a Lambda Architecture to perform real-time updates

Gudipati, Pramod Kumar January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / William Hsu / The Lambda Architecture is a paradigm for big data that balances throughput, latency and fault tolerance in data processing. No single tool provides a complete solution with good accuracy, low latency and high throughput, which motivates combining a set of tools and techniques to build a complete big data system. The Lambda Architecture defines a set of layers into which those tools and techniques fit: the Batch Layer, the Speed Layer and the Serving Layer. Each layer satisfies a set of properties and builds upon the functionality provided by the layers beneath it. The Batch Layer is where the master dataset is stored, an immutable, append-only set of raw data; it also precomputes results using a distributed processing system, such as Hadoop or Apache Spark, that can handle large quantities of data. The Speed Layer captures and processes new data as it arrives in real time. The Serving Layer contains a parallel-processing query engine, which takes results from both the Batch and Speed Layers and responds to queries in real time with low latency. Stack Overflow is a question-and-answer forum with a huge user community and millions of posts, and it has grown rapidly over the years. This project demonstrates the Lambda Architecture by constructing a data pipeline that adds a new "Recommended Questions" section to the Stack Overflow user profile and updates the suggested questions in real time. Various statistics, such as trending tags and user performance numbers such as UpVotes and DownVotes, are also shown in the user dashboard by querying the batch processing layer.
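To make the layer interaction concrete, here is a minimal sketch, not taken from the project, of how a serving layer can answer a count-style query by merging a precomputed batch view with the speed layer's realtime view. The counter maps and method names are illustrative assumptions.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of a Lambda Architecture serving layer: a query merges a
    // precomputed batch view with the speed layer's realtime view.
    class ServingLayerSketch {
        // Batch view: recomputed periodically from the immutable master dataset.
        private volatile Map<String, Long> batchView = new HashMap<>();
        // Realtime view: updated incrementally as new events arrive.
        private final Map<String, Long> realtimeView = new ConcurrentHashMap<>();

        // Speed layer hook: fold one new event into the realtime view.
        void onEvent(String key) {
            realtimeView.merge(key, 1L, Long::sum);
        }

        // Batch layer hook: swap in a freshly recomputed batch view and reset
        // the realtime view, which now only needs to cover newer data.
        void publishBatchView(Map<String, Long> fresh) {
            batchView = fresh;
            realtimeView.clear();
        }

        // Serving layer query: answer = f(batch view) merged with f(speed view).
        long count(String key) {
            return batchView.getOrDefault(key, 0L) + realtimeView.getOrDefault(key, 0L);
        }
    }

A production serving layer would version the two views so that the swap and the reset happen atomically; the sketch glosses over that.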
125

Green Clusters

Vašut, Marek January 2015 (has links)
The thesis evaluates the viability of reducing the power consumption of a contemporary computer cluster by using more power-efficient hardware components. The cluster in question runs a Map-Reduce algorithm implementation, and the worker nodes consist of either systems with an ARM CPU or systems which combine an ARM CPU and an FPGA in a single package. The behavior of such a cluster is discussed from both the performance side and the power consumption side. The text discusses the problems and peculiarities of integrating ARM-based, and especially combined ARM-FPGA-based, systems into the Map-Reduce framework. The performance of the Map-Reduce framework itself is evaluated to identify the gravest performance bottlenecks when using the framework in an environment with ARM systems.
126

Fostering collaboration amongst business intelligence, business decision makers and statisticians for the optimal use of big data in marketing strategies

De Koker, Louise January 2019 (has links)
Philosophiae Doctor - PhD / The aim of this study was to propose a model of collaboration adaptable for the optimal use of big data in an organisational environment. There is a paucity of knowledge on such collaboration, and the research addressed this gap. More specifically, the research attempted to establish whether leadership, trust and knowledge sharing influence collaboration among the stakeholders identified at large organisations. The conceptual framework underlying this research was informed by collaboration theory and organisational theory. It was assumed that effective collaboration in the optimal use of big data is possibly associated with leadership, knowledge sharing and trust; these concepts were scientifically hypothesised to determine whether such associations exist within the context of big data. The study used a mixed-methods approach, combining a qualitative with a quantitative study. The qualitative study took the form of in-depth interviews with senior managers from different business units at a retail organisation in Cape Town. The quantitative study was an online survey conducted with senior marketing personnel at JSE-listed companies from various industries in Cape Town. A triangulation methodology was adopted, with additional in-depth interviews of big data and analytics experts from both South Africa and abroad, to strengthen the research. The findings indicate the changing role of the statistician in the era of big data and the new discipline of data science. They also confirm the importance of leadership, trust and knowledge sharing in ensuring effective collaboration: of the three hypotheses tested, two were confirmed. Although collaboration has been applied in many areas, unexpected findings of this research were the role the chief data officer plays in fostering collaboration among stakeholders in the optimal use of big data in marketing strategies, as well as the importance of organisational structure and culture for effective collaboration in the context of big data and data science in large organisations. The research has contributed to knowledge by extending the theory of collaboration to the domain of big data in the organisational context, through the proposal of an integrated model of collaboration in the context of big data. This model was grounded in the data collected from various sources, establishing the crucial new role of the chief data officer as part of the executive leadership and the main facilitator of collaboration in the organisation. In the proposed model, collaboration among the specified stakeholders, led by the chief data officer, occurs both horizontally with peers and vertically with specialists at different levels within the organisation. The application of such a model should facilitate successful collaborative efforts in data science, in the form of financial benefits to the organisation through the optimal use of big data.
127

Measuring metadata quality

Király, Péter 24 June 2019 (has links)
No description available.
128

A competition policy for the digital age : An analysis of the challenges posed by data-driven business models to EU competition law

Sahlstedt, Andreas January 2019 (has links)
The increasing volume and value of data in online markets, along with tendencies towards market concentration, make it an interesting research topic in the field of competition law. The purpose of this thesis is to evaluate how EU competition law could adapt to the challenges brought on by big data, particularly in relation to Art. 102 TFEU and the EUMR. Furthermore, this thesis analyses the intersection between privacy regulations and competition law. The characteristics of online markets and data are presented in order to describe precisely the challenges which arise in those markets. By analysing previous case law of the ECJ as well as the Bundeskartellamt's Facebook investigation, this thesis concludes that privacy concerns could potentially be addressed within an EU competition law procedure. Such an approach might be particularly warranted in markets where privacy is a key parameter of competition. However, a departure from the traditionally price-centric enforcement of competition law is required in order to adequately address privacy concerns. The research presented in this thesis demonstrates the decreasing importance of market shares in the assessment of a dominant position in online markets, due to the dynamic character of such markets. An increased focus on entry barriers appears to be necessary, and data can constitute an important such barrier. Additionally, consumer behaviour constitutes a source of market power in online markets, which warrants a shift towards behavioural economic analysis. The turnover thresholds of the EUMR do not appear to adequately capture data-driven mergers, as illustrated by the Facebook/WhatsApp merger; thresholds based on other parameters are therefore necessary. The value of data also increases the potential anticompetitive effects of vertical and conglomerate mergers, warranting an increased focus on such mergers.
129

Microservices in data intensive applications

Remeika, Mantas, Urbanavicius, Jovydas January 2018 (has links)
The volumes of data which Big Data applications have to process are constantly increasing, which requires the development of highly scalable systems. The microservices architecture is considered one solution to this scalability problem, yet the literature on practices for building scalable data-intensive systems is still lacking. This thesis investigates and presents the benefits and drawbacks of using a microservices architecture in big data systems. Moreover, it presents other practices used to increase scalability: containerization, shared-nothing architecture, data sharding, load balancing, clustering, and stateless design. Finally, an experiment comparing the performance of a monolithic application and a microservices-based application was performed. The results show that with an increasing load, the microservices application performs better than the monolith. However, to cope with constantly growing amounts of data, additional techniques should be used together with microservices.
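One of the practices listed above, stateless design, can be illustrated in a few lines of Java using only the JDK's built-in HTTP server. The endpoint and port are illustrative assumptions, not code from the thesis experiment.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    // A deliberately stateless endpoint: nothing about the caller is kept in
    // process memory, so any replica behind a load balancer can serve any request.
    public class StatelessService {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/health", exchange -> {
                byte[] body = "ok".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start(); // session data, if any, would live in a shared store instead
        }
    }

Because no per-client state lives in the process, instances can be added or removed freely behind a load balancer, which is what gives a microservices deployment its scaling headroom under growing load.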
130

Inteligência cibernética e uso de recursos semânticos na detecção de perfis falsos no contexto do Big Data (Cyber intelligence and the use of semantic resources in the detection of fake profiles in the context of Big Data)

Oliveira, José Antonio Maurilio Milagre de. January 2016 (has links)
Advisor: José Eduardo Santarem Segundo / Committee: Ricardo César Gonçalves Sant'Ana / Committee: Mário Furlaneto Neto / Abstract: The development of the Internet has turned the virtual world into an endless repository of information. Every day in the information society, people interact, capture and pour data into the most diverse social-network tools and Web environments. We are facing Big Data: an inexhaustible quantity of data of inestimable value, but difficult to process. The amount of information that can be extracted from these large Web data repositories is beyond measure. One of the great current challenges of the Internet of "Big Data" is dealing with falsehoods and fake profiles in social tools, which cause alarm, upheaval and significant financial losses worldwide. Cyber intelligence and computer forensics aim to investigate events and establish facts by extracting data from the network. Information Science, in turn, concerned with questions involving the retrieval, processing, interpretation and presentation of information, offers elements which, when applied in this context, can improve the collection and processing of large volumes of data for the detection of fake profiles. Thus, through the present literature-review, documentary and exploratory research, international studies on the detection of fake profiles in social networks were reviewed, investigating the techniques and technologies applied and, above all, their limitations. The work also presents contributions from areas of Information Science and... (Complete abstract: click the electronic access link below) / Master's
