331

La société connectée : contribution aux analyses sociologiques des liens entre technique et société à travers l'exemple des outils médiatiques numériques / The connected society : a contribution to the sociological analysis of the links between technology and society through the example of digital media tools

Huguet, Thibault 20 February 2017
Under way for several decades, the development of digital technology has left a deep imprint on the minds and bodies of our contemporary societies. More than a mere social phenomenon, it is generally agreed that we are now witnessing a genuine "anthropological mutation". Yet while analyses of the links between technology and society have long been shaped by deterministic perspectives, this thesis explores the close dynamic relations that make a technology eminently social and a society intrinsically technical. Adopting a resolutely comprehensive approach, this research seeks to bring out the significations and systems of meaning that surround the use of digital media tools, at both the macro-social and the micro-social scale, in order to explain causally the place we grant to this specific category of objects. The dynamics at work, at the individual as well as the collective level, are examined socio-logically, in turn from historical, philosophical, economic, political, social, and cultural perspectives. As artefact-symbols of our present-day societies (total social objects), digital media are the technical tools through which we organize the contemporaneity of our relationship with the world: we therefore treat them as a sociological prism through which the connected society can be grasped.
332

Computer vision for continuous plankton monitoring / Visão computacional para o monitoramento contínuo de plâncton

Damian Janusz Matuszewski 04 April 2014
Plankton microorganisms form the base of the marine food web and play a major role in the drawdown of atmospheric carbon dioxide. Because they are highly sensitive to environmental changes, they make it possible to notice (and potentially counteract) such changes earlier than any other means. They therefore not only influence the fishing industry but are also frequently used to analyze changes in exploited coastal areas and the impact of those interferences on the local environment and climate. Consequently, there is a strong need for highly efficient systems that allow long-term, large-volume observation of plankton communities; these would provide a better understanding of the role of plankton in the global climate and help maintain a fragile environmental equilibrium. The sensors used typically produce huge amounts of data that must be processed efficiently, without intensive manual work by specialists. This thesis presents a new general-purpose system for particle analysis in large volumes. It was designed and optimized for continuous plankton monitoring, but it can readily serve as a versatile tool for analyzing moving fluids, or in any other application in which the targets to be detected and identified move in a unidirectional flux. The system comprises three stages: data acquisition, target detection, and target identification. Dedicated optical hardware records images of small particles immersed in the water flux. Target detection uses a Visual Rhythm-based method that greatly accelerates processing and allows higher volume throughput: it detects, counts, and measures the organisms passing in front of the camera. The software also saves cropped plankton images, which both greatly reduces the required storage space and provides the input for automatic identification. To ensure maximal performance (up to 720 MB/s), the algorithm was implemented in CUDA for GPGPU. The method was tested on a large dataset and compared with an alternative frame-by-frame approach. The resulting plankton images were used to build a classifier for automatically identifying organisms in plankton analysis experiments, supported by dedicated feature-extraction software. Various subsets of the 55 shape characteristics were tested with different off-the-shelf learning models; the best accuracy, approximately 92%, was obtained with Support Vector Machines, a result comparable to average expert manual identification. This work was developed under joint supervision with Professor Rubens Lopes (IO-USP).
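The classification stage described in this abstract (shape descriptors fed to off-the-shelf learners, with an RBF-kernel SVM performing best) lends itself to a short sketch. The following is a minimal, hypothetical illustration of that pipeline, not the thesis code: the feature values, the five class labels, and the hyperparameters are synthetic stand-ins; only the overall shape (55 descriptors per organism, scaling, SVM, held-out evaluation) follows the approach the abstract describes.

```python
# Minimal sketch of the plankton classification stage: shape descriptors
# -> scaled features -> RBF-kernel SVM. Feature values and labels below are
# synthetic stand-ins; the thesis used 55 shape characteristics extracted
# from cropped plankton images, where the features carry real signal.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_features = 1000, 55          # 55 shape descriptors per organism
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 5, size=n_samples)    # e.g. 5 plankton taxa (hypothetical)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Scaling matters for RBF SVMs; C and gamma would be tuned in practice.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```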
333

Apprentissage ciblé et Big Data : contribution à la réconciliation de l'estimation adaptative et de l’inférence statistique / Targeted learning in Big Data : bridging data-adaptive estimation and statistical inference

Zheng, Wenjing 21 July 2016
This dissertation develops robust semiparametric methods for complex parameters that emerge at the interface of causal inference and biostatistics, with applications to epidemiological and medical research in the era of Big Data. Specifically, it addresses two statistical challenges in bridging the gap between data-adaptive estimation and statistical inference. The first challenge arises in maximizing the information learned from randomized controlled trials (RCTs) through adaptive trial designs. We present a framework to construct and analyze group-sequential, covariate-adjusted, response-adaptive (CARA) RCTs that admits data-adaptive approaches both in constructing the randomization schemes and in estimating the conditional response model. This framework adds to the existing literature on CARA RCTs by allowing flexible options in both design and analysis and by providing effect estimates that remain robust even under model mis-specification. The second challenge arises in obtaining a central limit theorem when data-adaptive estimation is used for the nuisance parameters. Taking as target parameter the marginal risk difference of the outcome under a binary treatment, we propose a Cross-validated Targeted Minimum Loss Estimator (CV-TMLE), which augments the classical TMLE with a sample-splitting procedure. The proposed CV-TMLE inherits the double robustness and efficiency properties of the classical TMLE, and achieves asymptotic linearity under minimal conditions, avoiding the Donsker class condition.
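To make the sample-splitting idea concrete, the sketch below implements a simplified cross-fitted doubly robust (AIPW-type) estimator of the marginal risk difference; CV-TMLE refines this scheme with an additional targeting step, which is omitted here. The simulated data, the gradient-boosting nuisance models, and the two-fold split are illustrative assumptions, not the estimator studied in the thesis.

```python
# Cross-fitted doubly robust (AIPW) estimate of the marginal risk difference
# E[Y(1)] - E[Y(0)] under a binary treatment A. Nuisance parameters (outcome
# regression and propensity score) are fit on one fold and evaluated on the
# other, mirroring the sample-splitting that lets CV-TMLE avoid Donsker-type
# conditions. Data and models are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n = 4000
W = rng.normal(size=(n, 3))                       # baseline covariates
A = rng.binomial(1, 1 / (1 + np.exp(-W[:, 0])))   # binary treatment
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * A + W[:, 1]))))  # binary outcome

psi_terms = np.empty(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(W):
    # Propensity score g(W) = P(A=1 | W), fit on the training fold only.
    g = GradientBoostingClassifier().fit(W[train], A[train])
    g_hat = np.clip(g.predict_proba(W[test])[:, 1], 0.01, 0.99)
    # Outcome regression Q(A, W) = P(Y=1 | A, W).
    Q = GradientBoostingClassifier().fit(
        np.column_stack([A[train], W[train]]), Y[train])
    Q1 = Q.predict_proba(np.column_stack([np.ones(len(test)), W[test]]))[:, 1]
    Q0 = Q.predict_proba(np.column_stack([np.zeros(len(test)), W[test]]))[:, 1]
    a, y = A[test], Y[test]
    # Efficient influence function terms for the risk difference.
    psi_terms[test] = (Q1 - Q0
                       + a / g_hat * (y - Q1)
                       - (1 - a) / (1 - g_hat) * (y - Q0))

est = psi_terms.mean()
se = psi_terms.std(ddof=1) / np.sqrt(n)
print(f"risk difference: {est:.3f} +/- {1.96 * se:.3f}")
```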
334

Fast and slow machine learning / Apprentissage automatique rapide et lent

Montiel López, Jacob 07 March 2019
The Big Data era has revolutionized the way data is created and processed. In this context, multiple challenges arise from the massive amount of data that must be handled and processed efficiently in order to extract knowledge. This thesis explores the symbiosis of batch and stream learning, traditionally treated in the literature as antagonists, focusing on the problem of classification from evolving data streams. Batch learning is a well-established approach in machine learning based on a finite sequence: first data is collected, then predictive models are created, then the model is applied. Stream learning, by contrast, considers data as infinite, rendering learning a continuous (never-ending) task. Furthermore, data streams can evolve over time, meaning that the relationship between the features and the corresponding response (the class, in classification) can change. We propose a systematic framework to predict over-indebtedness, a real-world problem with significant implications in modern society. The two versions of the early-warning mechanism (batch and stream) outperform the baseline performance of the solution implemented by Groupe BPCE, the second-largest banking institution in France. Additionally, we introduce a scalable model-based imputation method for missing data in classification, which casts the imputation problem as a set of classification/regression tasks solved incrementally. We present a unified framework that serves as a common learning platform where batch and stream methods can interact positively, and we show that batch methods can be trained efficiently in the stream setting under specific conditions. We also propose an adaptation of the Extreme Gradient Boosting (XGBoost) algorithm to evolving data streams; the proposed adaptive method generates and updates the ensemble incrementally using mini-batches of data. Finally, we introduce scikit-multiflow, an open-source framework that fills the gap in Python for a development/research platform for learning from evolving data streams.
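The batch/stream contrast can be made concrete with the prequential (test-then-train) protocol that stream learners follow: each arriving example is first used to test the model, then to update it. The sketch below is a generic illustration of that protocol using scikit-learn's SGDClassifier as a stand-in incremental learner on a synthetic drifting stream; it is not code from the thesis or from scikit-multiflow, and the drift point and weight vectors are invented for the example.

```python
# Prequential (test-then-train) evaluation on a synthetic data stream:
# every example is first predicted, then used to update the model. This is
# the core loop behind stream-learning frameworks such as scikit-multiflow.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
classes = np.array([0, 1])
# loss name per scikit-learn >= 1.1 ("log" in older versions)
model = SGDClassifier(loss="log_loss")
correct, seen = 0, 0

for t in range(10_000):
    x = rng.normal(size=(1, 5))
    # Concept drift: the decision boundary changes halfway through the stream.
    w = np.array([1, -1, 0.5, 0, 0]) if t < 5_000 else np.array([0, 0, 0.5, -1, 1])
    y = (x @ w > 0).astype(int)                # true label, shape (1,)
    if seen > 0:                               # test first ...
        correct += int(model.predict(x)[0] == y[0])
    model.partial_fit(x, y, classes=classes)   # ... then train
    seen += 1

print(f"prequential accuracy: {correct / (seen - 1):.3f}")
```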
335

Framtidens ERP system - Implementering av affärssystem : Förbättrad passform genom maskininlärning? / The future of ERP systems - Implementation of ERP systems : An improved fit through machine learning?

Lobo Roca, Andres, Stefanovic, Alexander January 2021
ERP suppliers constantly innovate in order to stay competitive. To remain competitive, companies must keep developing ERP systems together with machine learning, big data, and analytics; using these techniques in combination helps vendors build automated functions for their customers. This study takes a qualitative approach, analyzing two companies that deliver ERP systems to different industries, with a focus on machine learning, analytics, and ERP systems. The study builds an understanding of these key concepts and how they work together, and of how ERP suppliers work with ERP systems, machine learning, and analytics, in order to assess whether the fit of a system can be improved with the help of machine learning. The study shows that there are obstacles companies must deal with when working data-driven. This study is written in Swedish.
336

Digitala verktyg i revisionsprocessen : En kvalitativ jämförelse mellan stora och små byråer / Digital tools in the audit process : a qualitative comparison of large and small audit firms

Todorovic, Ljubisa, Hoxha, Timi January 2021
Digitalization affects society extensively, and the audit profession is no exception: auditing is changing as a result of digitalization, which expresses itself through the various digital tools that can be used in the audit process. These tools aim to simplify the audit process and make it more efficient. The purpose of this study was to compare how large and small audit firms use digital tools in the audit process. A qualitative method was chosen, and the empirical findings were developed by interviewing auditors from large and small firms. Previous literature, definitions of key concepts, and theories such as institutional theory, the TOE framework, and diffusion of innovation together formed the theoretical framework used to analyze the findings. The findings indicate that large and small firms use digital tools in similar ways and often in the same parts of the audit process; the reviewing phase is where all of the interviewed auditors benefit most from digital tools. Digital tools make the auditor's work more efficient, allowing the auditor to focus on more complex parts of the audit. The difference lies in which digital tools are used: small firms tend to buy external digital tools, while large firms develop their own. It is also clear that digital tools are an important part of the auditor's daily work.
337

The Major Challenges in DDDM Implementation: A Single-Case Study : What are the Main Challenges for Business-to-Business MNCs to Implement a Data-Driven Decision-Making Strategy?

Varvne, Matilda, Cederholm, Simon, Medbo, Anton January 2020
Over the past years, the value of data and DDDM has increased significantly as technological advances have made it possible to store and analyze large amounts of data at a reasonable cost. This has produced completely new business models that have disrupted whole industries. DDDM allows businesses to base their decisions on data rather than on gut feeling. Existing literature provides a general view of the major challenges corporations encounter when implementing a DDDM strategy; however, as the field is still rather new, the challenges identified remain very general, and many corporations, especially B2B MNCs selling consumer goods, seem to struggle with the implementation. Hence, a single-case study of such a corporation, here called Alpha, was carried out to explore its major challenges in this process. Semi-structured interviews revealed four major findings: two of them, execution and organizational culture, are supported in the existing literature, while two additional findings, associated with organizational structure and consumer behavior data, were discovered in the case of Alpha. Based on this, we conclude that B2B MNCs selling consumer goods face the challenges of identifying local markets as front-runners for strategies such as becoming more data-driven, and of finding a way to retrieve consumer behavior data. These two challenges can serve as a starting point for managers implementing DDDM strategies in B2B MNCs selling consumer goods in the future.
338

Social Media und Banken – Die Reaktionen von Facebook-Nutzern auf Kreditanalysen mit Social Media Daten / Social media and banks – Facebook users' reactions to credit analyses based on social media data

Thießen, Friedrich, Brenger, Jan Justus, Kühn, Annemarie, Gliem, Georg, Nake, Marianne, Neuber, Markus, Wulf, Daniel 14 March 2017
The trend toward analyzing every conceivable data set for commercial purposes is unstoppable. Banks and fintechs are trying to use social media data to assess the creditworthiness of potential customers. The research question is how social media users react when they realize that their bank evaluates personal social media profiles. A survey of 271 respondents was conducted to investigate this problem. The results are as follows: respondents are able to reflect rationally on the reasons for the development and the logic behind big data analyses. They recognize the advantages but also see risks. Opening one's social media profile to a bank should not lead to individual disadvantages; instead, people expect an advantage from opening their profiles voluntarily. An important minority of 20 to 30% strictly opposes any commercial use of social media data, which is felt to be unethical and unfair. When people realize that they cannot prevent the commercial use of their private data, they begin to manipulate it, and manipulation becomes more extensive as respondents learn critical details of big data analyses. Those who see their private data exploited commercially consider it fair to respond in kind, a tit-for-tat mentality that pushes society as a whole in a commercial direction. In sum, banks are advised not to pioneer this development but to wait and see what experience fintechs gather: banks face higher opportunity costs in the form of lost customer trust, since they depend on good customer relations for related products.
339

Business intelligence för beslutsstöd inom telekommunikationsbolag : Nyttjandet av Business intelligence för att effektivisera affärsprocesser / Business intelligence for decision support in the telecommunications sector

El-Najjar, Lin, Ilic, Filip January 2020
The value of data is growing at an increasing rate, and the growing amount of available data has enabled business intelligence to take great strides in its development. Organizations use data to make parts of, or entire, operations more efficient, and business intelligence supports them in managing that data, mainly to create decision support. Business intelligence is, however, a broad topic that can be affected by factors such as Big Data or cloud computing and can be applied in different ways. Previous studies show that only a few organizations have succeeded in increasing profitability after implementing business intelligence. This study therefore aims to create a deeper understanding of how business intelligence is used within a telecommunications company to create decision support connected to making business processes more efficient. The choice of industry and organization is based on the fact that telecommunications is one of the most data-intensive industries. The thesis draws on previous research, used to understand the challenges, benefits, and future potential of the subject, and on theories used to understand key factors such as information systems and the combination of business intelligence and Business Process Management. The results were derived from semi-structured interviews with respondents who work within a telecommunications company, and they serve to confirm the theory and answer the research questions.
340

Přístup k big data na základě "refusal to supply" judikatury Soudního dvora EU / Access to big data under the "refusal to supply" case-law of the Court of Justice of the EU

Ochodek, Tomáš January 2019
Access to big data under the "refusal to supply" case-law of the Court of Justice of the EU. Abstract: This thesis examines access to so-called big data from the perspective of EU competition law. It asks whether, and if so to what extent, the "refusal to supply" case-law developed by the Court of Justice of the EU can be used to gain access to big data held by a dominant undertaking. The thesis finds that, under certain conditions, all the necessary steps can be fulfilled for one undertaking to demand access to big data under a dominant undertaking's control. It therefore first discusses the factors affecting so-called online platforms, which often find themselves in the position of dominant undertakings with respect to access to big data, analyzing network effects, the impact of data analysis on platform efficiency, and the issue of multi-sided markets in connection with the position of online platforms. It then assesses the individual steps which, taken together, lead to classifying a dominant undertaking's refusal of access to big data as an abuse of its dominant position. From the point...
