  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
481

The role of taxonomies in knowledge management

Fouché, Marie-Louise 30 June 2006 (has links)
The knowledge economy has brought new challenges for organisations. Accessing data and information in a logical manner is a critical component of information and knowledge management, and taxonomies are viewed as a solution that facilitates such access. The aim of this research was to investigate the role of taxonomies within organisations that utilise a knowledge management framework or strategy. An interview process was used to gain insight from leading organisations into the use of taxonomies within the knowledge management environment. Organisations are starting to use taxonomies to manage multi-sourced environments and to facilitate the appropriate sourcing of the organisation's intellectual capital. Based on the research, it is clear that taxonomies will play a central role in the coming years in helping to manage the complexity of the organisation's environment and in easing access to relevant information. / Information Science / M.Inf.
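Although the study is qualitative, the retrieval mechanism it credits taxonomies with is easy to picture in code. Below is a minimal illustrative sketch (hypothetical class and sample terms, not drawn from the thesis): documents tagged with a taxonomy path become findable through any broader term on that path, which is the "logical access" the abstract describes.

```python
# Minimal sketch of taxonomy-based retrieval (illustrative only):
# a document tagged with a path in a topic taxonomy can be found
# by querying any broader (ancestor) term.

from collections import defaultdict

class Taxonomy:
    def __init__(self):
        self.docs_by_node = defaultdict(set)

    def tag(self, doc_id, path):
        # Tagging "a/b/c" also indexes the doc under "a" and "a/b",
        # so broader queries still find it.
        parts = path.split("/")
        for i in range(1, len(parts) + 1):
            self.docs_by_node["/".join(parts[:i])].add(doc_id)

    def search(self, node):
        return sorted(self.docs_by_node[node])

tax = Taxonomy()
tax.tag("doc1", "knowledge-management/taxonomies")
tax.tag("doc2", "knowledge-management/intellectual-capital")
print(tax.search("knowledge-management"))             # ['doc1', 'doc2']
print(tax.search("knowledge-management/taxonomies"))  # ['doc1']
```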
482

Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates

Idris, Muhammad 05 March 2019 (has links) (PDF)
Responsive analytics are rapidly taking over from the post-fact approaches that dominate traditional data warehousing. Recent advances in analytics demand placing analytical engines at the forefront of the system, reacting to updates arriving at high speed in order to detect patterns, trends, and anomalies. Such solutions find applications in financial systems, industrial control systems, business intelligence, and online machine learning, among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results, or their basic elements, in a query language, and the main task is then to maintain query results efficiently under frequent updates. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis where the data is refreshed periodically and in batches, while stream processing solutions process streams of data from transient sources as flows of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental query evaluation, which is based on the relational incremental view maintenance model and mostly focuses on queries featuring equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they process queries that mostly feature comparisons of temporal attributes (e.g. timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded size. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints; hence these systems mostly process inequality joins. As a starting point for our research, we postulate that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just as in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. The existing approaches to dynamic evaluation in both kinds of systems present a trade-off between the two: systems that avoid materialization of query (sub)results incur high update latency, and systems that materialize (sub)results incur a high memory footprint. We are interested in building a model that addresses this trade-off. In particular, we overcome it by devising a practical dynamic evaluation algorithm for queries that appear in both kinds of systems, together with a main-memory data representation that allows query (sub)results to be enumerated without materialization and can be maintained efficiently under updates.
We call this representation the Dynamic Constant-delay Linear Representation (DCLR). We devise DCLRs with the following properties: (1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a subclass of queries); (2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); (3) they take space linear in the size of the database; (4) they can be maintained efficiently under updates. We first study DCLRs with these properties for the class of acyclic conjunctive queries featuring equi-joins with projections, and present a dynamic evaluation algorithm called the Dynamic Yannakakis (DYN) algorithm. We then generalize DYN to the class of acyclic queries featuring multi-way theta-joins with projections, calling the result Generalized DYN (GDYN). The working of DYN and GDYN over DCLRs is based on a particular variant of join trees, called Generalized Join Trees (GJTs), that guarantees the above properties of DCLRs. We define GJTs and present algorithms to test a conjunctive query featuring theta-joins for acyclicity and to generate GJTs for such queries. To this end, we extend the classical GYO algorithm from testing conjunctive queries with equalities for acyclicity to testing conjunctive queries featuring multi-way theta-joins with projections, and we further extend it to generate GJTs for queries that are acyclic. GDYN is hence a unified framework based on DCLRs that enables processing of queries appearing in streaming systems as well as in BI systems in a unified main-memory model, and that addresses the space-time trade-off. We instantiate GDYN for the particular case where all theta-joins involve only equalities and inequalities, and call this instantiation IEDYN. We implement DYN and IEDYN as query compilers that generate executable programs in the Scala programming language, providing all the necessary data structures along with their maintenance and enumeration methods in a continuous stream processing model. We evaluate DYN and IEDYN against state-of-the-art BI and streaming systems on both industrial and synthetically generated benchmarks, and show that they outperform existing systems by over an order of magnitude in both memory footprint and update processing time. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
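The space-time trade-off described above can be made concrete with a toy sketch (generic delta-rule view maintenance in Python; this is not the thesis's DYN algorithm). Maintaining an equi-join view incrementally means processing only the delta of each update against indexed state rather than recomputing the join; the sketch materializes the full result, which is precisely the memory cost that DCLRs avoid while still supporting enumeration.

```python
# Toy incremental maintenance of the equi-join R(a,b) JOIN S(b,c)
# under inserts: generic delta processing, not the DYN algorithm.

from collections import defaultdict

R_by_b = defaultdict(list)   # index of R on the join key b
S_by_b = defaultdict(list)   # index of S on the join key b
view = []                    # materialized join result

def insert_R(a, b):
    R_by_b[b].append(a)
    # Delta rule: a new R-tuple joins only with matching S-tuples.
    for c in S_by_b[b]:
        view.append((a, b, c))

def insert_S(b, c):
    S_by_b[b].append(c)
    for a in R_by_b[b]:
        view.append((a, b, c))

insert_R(1, "x")
insert_S("x", 9)
insert_S("x", 7)
print(view)  # [(1, 'x', 9), (1, 'x', 7)]
```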
483

企業導入顧客關係管理決策之研究 / Decision on Adopting Customer Relationship Management (CRM) in Enterprises

陳巧佩, Chen, Chiao-Pei Unknown Date (has links)
隨著競爭環境日漸嚴苛，顧客需求日趨多元，企業需要有更有效率的方式來管理與顧客間的關係，顧客關係管理(Customer Relationship Management, CRM)成為企業關心的熱門議題。但過去的學者研究中，僅止於針對影響企業導入CRM系統與否之決策進行研究，然而CRM對於企業經營顧客關係之重要性已被企業所認同，同時面對如此牽涉到龐大人力、資金，與時間投入的系統導入計畫，在導入過程中仍會面臨許多重要決策，因此本研究將針對影響導入過程的相關因素，並經由個案研討的實際驗證，提出具解釋力的理論架構。 經由過去文獻的整理，本研究以Rogers(1983)的創新擴散模型為基礎，將影響CRM導入過程的因素分為認知階段的環境面與組織面因素，以及說服階段認知的創新特質，另外系統供應商及顧問公司則扮演特殊的角色，同時在說服、決策及實行階段皆產生影響。其中環境面又包含競爭強度，需求不確定性及產業環境變化速度；組織面因素包含規模、結構化程度、以及高階主管態度。而認知的創新特質則包含相容性以及系統特性。另外則以資訊與資料的蒐集、累積與儲存、吸收與整理、展現與應用等為被影響的研究變數。並選取目前導入最積極的產業中之標竿企業，包含證券、人壽、行動電話系統業者、電子電腦公司，及網際網路服務提供業者等六家企業，進行深入之訪談，期能從這些正在導入的公司之經驗裡，發現可供參考與依循的準則。 經由對個案公司的深入訪談之後，本研究之研究架構有重大修正。認知階段的影響因素除了環境與組織構面仍存在外，從組織構面中抽離複雜度，並與產品特質、顧客基礎，公司依賴業務人員推廣業務及提供服務的程度共同組成新構面—業務面因素，而系統與顧問公司的選擇則成為企業採用CRM系統時重要的決策因素之一。更重要的是，本研究經由訪談對探討主題有更廣泛而完整的延伸，導入過程可以導入積極度、建置組織，以及建置方式來描述。導入積極度代表導入的速度；建置組織裡包含以專案小組的方式來推動，以及高階主管的參與程度；建置方式則包含CRM要素的建置優先順序、以及委外程度。 研究發現，環境面的因素與組織面的因素皆同時影響到CRM導入過程的積極度及建置方式裡的優先順序；而新增的業務面的因素則同時對CRM導入過程的積極度及建置方式裡的優先順序及自建或委外的選擇有較顯著影響。至於認知的創新特質中，不論是專案管理或是高階參與，其影響到的皆為CRM在建置單位上的特性。而協力單位則因為企業對於系統特性的要求有所不同，而同時成為決策結果與影響導入方式的變數。 本研究期望能藉由結合學術理論與實務應用的方案，提供給正在導入或評估規劃中的企業實用的考慮方向與實際例證，以協助企業在導入過程中自我檢測，選擇最適當的導入程序及設計組織相關的配套措施，俾使導入過程順利而達到預期目標。 / To cope with ever-fiercer competition and increasingly diverse customer requirements, enterprises need more efficient methods to manage their relationships with customers. Previous research focused mainly on the factors affecting the decision whether or not to adopt a CRM system. Yet the importance of CRM for managing customer relationships is now widely recognized, and implementing such extensive software, which involves massive investments of human resources, capital, and time, still confronts the enterprise with many critical decisions during the adoption process. This research therefore aims at extracting the relevant factors affecting the adoption process and proposes an explanatory framework verified by empirical case studies. The study is based on Rogers' (1983) Innovation Diffusion Model and divides the factors affecting the CRM adoption process between the knowledge and persuasion stages, with environmental and organizational factors in the knowledge stage and perceived characteristics of the innovation in the persuasion stage, while system suppliers and consulting firms play a special role across the persuasion, decision, and implementation stages. Environmental factors include competition intensity, demand uncertainty, and the speed of industry change. Organizational factors consist of size, structure, and managerial attitude. Perceived characteristics of the innovation comprise compatibility and system characteristics. The collection, storage, mining, and visualization of data serve as the dependent variables. Six benchmark companies in the securities, life insurance, cellular telephone system, electronics and computer, and Internet service provider industries were selected as cases. The research framework was substantially revised after the investigation. In the knowledge stage, complexity was extracted from the organizational dimension and integrated with product attributes, customer base, and the corporation's dependence on sales representatives to form a new factor called the business dimension, while the choice of system vendor and consultant emerged as one of the key decisions in adopting a CRM system. Moreover, the dependent variables were extended to cover adoption activeness, the implementing organization, and the implementation manner: adoption activeness denotes the speed of adoption; the implementing organization covers the use of a project team and the degree of top-management participation; the implementation manner covers the priority order in which CRM components are built and the degree of outsourcing. The research shows that environmental and organizational factors affect adoption activeness and build priority, and that the business factor influences adoption activeness, build priority, and outsourcing decisions.
Project management and managerial participation, which characterize the implementing organization, are affected by the perceived characteristics of the innovation. Through the integration of theory and empirical data, this research hopes to provide direction for examining the CRM adoption process and designing the accompanying organizational measures, so as to facilitate fulfilment of the adoption objectives.
484

A data management and analytic model for business intelligence applications

Banda, Misheck 05 1900 (has links)
Most organisations use several data management and business intelligence solutions, on-premise and/or cloud-based, to manage and analyse their constantly growing business data. Challenges faced by organisations nowadays include, but are not limited to, growth limitations, big data, and inadequate analytics, computing, and data storage capabilities. Although these organisations are in most cases able to generate reports and dashboards for decision-making, effective use of their business data and an appropriate business intelligence solution could achieve and sustain informed decision-making and allow competitive reaction to the dynamic external environment. A data management and analytic model is proposed on which organisations could rely for decisive guidance when planning to procure and implement a unified business intelligence solution. To arrive at a sound model, the literature was reviewed by studying business intelligence extensively and by exploring and developing various deployment models and architectures (naïve, on-premise, and cloud-based), which revealed their benefits and challenges. The outcome of the literature review was the development of a hybrid business intelligence model and accompanying architecture, the main contribution of the study. In order to assess the state of business intelligence utilisation, and to validate and improve the proposed architecture, two case studies targeting users and experts were conducted using quantitative and qualitative approaches. The case studies established that the decision to procure and implement a successful business intelligence solution rests on a number of crucial elements, such as applications, devices, tools, business intelligence services, data management, and infrastructure. The findings further recognised that the proposed hybrid architecture is the solution for managing complex organisations with serious data challenges. / Computing / M. Sc. (Computing)
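As an illustration only (all names below are hypothetical, not from the dissertation), the routing idea at the heart of a hybrid on-premise/cloud BI architecture can be sketched as a facade that dispatches each query to a backend according to data-residency rules:

```python
# Illustrative sketch of hybrid BI query routing (hypothetical names):
# a single facade dispatches queries to an on-premise store or a cloud
# service based on per-dataset residency rules.

class OnPremiseStore:
    def query(self, q):
        return f"on-premise result for {q!r}"

class CloudService:
    def query(self, q):
        return f"cloud result for {q!r}"

class HybridBIFacade:
    def __init__(self, rules):
        self.rules = rules  # dataset -> "onprem" | "cloud"
        self.backends = {"onprem": OnPremiseStore(), "cloud": CloudService()}

    def query(self, dataset, q):
        backend = self.backends[self.rules.get(dataset, "cloud")]
        return backend.query(q)

bi = HybridBIFacade({"payroll": "onprem", "weblogs": "cloud"})
print(bi.query("payroll", "SELECT avg(salary) FROM staff"))
print(bi.query("weblogs", "SELECT count(*) FROM hits"))
```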
486

Une approche automatisée basée sur des contraintes d’intégrité définies en UML et OCL pour la vérification de la cohérence logique dans les systèmes SOLAP : applications dans le domaine agri-environnemental / An automated approach based on integrity constraints defined in UML and OCL for the verification of logical consistency in SOLAP systems : applications in the agri-environmental field

Boulil, Kamal 26 October 2012 (has links)
Les systèmes d'Entrepôts de Données et OLAP spatiaux (EDS et SOLAP) sont des technologies d'aide à la décision permettant l'analyse multidimensionnelle de gros volumes de données spatiales. Dans ces systèmes, la qualité de l'analyse dépend de trois facteurs : la qualité des données entreposées, la qualité des agrégations et la qualité de l'exploration des données. La qualité des données entreposées dépend de critères comme la précision, l'exhaustivité et la cohérence logique. La qualité d'agrégation dépend de problèmes structurels (e.g. les hiérarchies non strictes qui peuvent engendrer le comptage en double des mesures) et de problèmes sémantiques (e.g. agréger les valeurs de température par la fonction Sum peut ne pas avoir de sens considérant une application donnée). La qualité d'exploration est essentiellement affectée par des requêtes utilisateur inconsistantes (e.g. quelles ont été les valeurs de température en URSS en 2010 ?). Ces requêtes peuvent engendrer des interprétations erronées des résultats. Cette thèse s'attaque aux problèmes d'incohérence logique qui peuvent affecter les qualités de données, d'agrégation et d'exploration. L'incohérence logique est définie habituellement comme la présence de contradictions dans les données. Elle est typiquement contrôlée au moyen de Contraintes d'Intégrité (CI). Dans cette thèse nous étendons d'abord la notion de CI (dans le contexte des systèmes SOLAP) afin de prendre en compte les incohérences relatives aux agrégations et requêtes utilisateur. Pour pallier les limitations des approches existantes concernant la définition des CI SOLAP, nous proposons un Framework basé sur les langages standards UML et OCL. Ce Framework permet la spécification conceptuelle et indépendante des plates-formes des CI SOLAP et leur implémentation automatisée. Il comporte trois parties : (1) Une classification des CI SOLAP. (2) Un profil UML implémenté dans l'AGL MagicDraw, permettant la représentation conceptuelle des modèles des systèmes SOLAP et de leurs CI. (3) Une implémentation automatique qui est basée sur les générateurs de code Spatial OCL2SQL et UML2MDX qui permet de traduire les spécifications conceptuelles en code au niveau des couches EDS et serveur SOLAP. Enfin, les contributions de cette thèse ont été appliquées dans le cadre de projets nationaux de développement d'applications (S)OLAP pour l'agriculture et l'environnement. / Spatial Data Warehouse (SDW) and Spatial OLAP (SOLAP) systems are Business Intelligence (BI) technologies allowing for interactive multidimensional analysis of huge volumes of spatial data. In such systems, the quality of analysis mainly depends on three components: the quality of the warehoused data, the quality of data aggregation, and the quality of data exploration. The warehoused data quality depends on elements such as accuracy, completeness, and logical consistency. The data aggregation quality is affected by structural problems (e.g., non-strict dimension hierarchies that may cause double-counting of measure values) and semantic problems (e.g., summing temperature values does not make sense in many applications). The data exploration quality is mainly affected by inconsistent user queries (e.g., what are the temperature values in the USSR in 2010?), which can lead to meaningless interpretations of query results. This thesis addresses the problems of logical inconsistency that may affect the data, aggregation, and exploration qualities in SOLAP systems.
Logical inconsistency is usually defined as the presence of incoherencies (contradictions) in data; it is typically controlled by means of Integrity Constraints (IC). In this thesis, we extend the notion of IC (in the SOLAP domain) in order to take aggregation and query incoherencies into account. To overcome the limitations of existing approaches to the definition of SOLAP IC, we propose a framework based on the standard languages UML and OCL. Our framework permits a platform-independent conceptual design and an automatic implementation of SOLAP IC. It consists of three parts: (1) a SOLAP IC classification; (2) a UML profile, implemented in the CASE tool MagicDraw, allowing for the conceptual design of SOLAP models and their IC; (3) an automatic implementation based on the code generators Spatial OCL2SQL and UML2MDX, which transforms the conceptual specifications into code. Finally, the contributions of this thesis have been experimented and validated in the context of French national projects aiming at developing (S)OLAP applications for agriculture and the environment.
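As a rough illustration of the query-level constraints the framework targets (the thesis generates such checks from UML/OCL specifications; the Python below is only a hand-written analogue with sample validity data), a temporal integrity constraint can reject the abstract's own example of an inconsistent query, temperatures in the USSR in 2010:

```python
# Hand-written analogue of a SOLAP query integrity constraint: a
# dimension member may only be queried within its period of validity.
# The validity data below is illustrative.

VALIDITY = {"USSR": (1922, 1991), "Russia": (1991, 9999)}

def check_query(country: str, year: int) -> None:
    start, end = VALIDITY[country]
    if not start <= year <= end:
        raise ValueError(f"inconsistent query: {country} is not valid in {year}")

check_query("Russia", 2010)      # passes silently
try:
    check_query("USSR", 2010)    # the abstract's example of an inconsistent query
except ValueError as err:
    print(err)
```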
487

Mise en place d'un Système d'Information Décisionnel pour le suivi et la prévention des épidémies / Implementation of decision information system for monitoring and preventing epidemics

Younsi, Fatima-Zohra 17 February 2016 (has links)
Les maladies infectieuses représentent aujourd'hui un problème majeur de santé publique. Devant l'augmentation des résistances bactériennes, l'émergence de nouveaux pathogènes et la propagation rapide de l'épidémie, le suivi et la surveillance de la transmission de la maladie devient particulièrement importants. Face à une telle menace, la société doit se préparer à l'avance pour réagir rapidement et efficacement si une telle épidémie est déclarée. Cela nécessite une mise en place des dispositifs de suivi et de prévention. Dans ce contexte, nous nous intéressons, dans le présent travail, à l'élaboration d'un Système d'Information Décisionnel Spatio-temporel pour le suivi et la surveillance du phénomène de propagation de l'épidémie de la grippe saisonnière au sein de la population de la ville d'Oran (Algérie). L'objectif de ce système est double : il consiste, d'une part, à comprendre comment l'épidémie se propage par l'utilisation du réseau social Small World (SW) et du modèle à compartiments d'épidémie SEIR (Susceptible-Exposed-Infected-Removed), et d'autre part, à stocker dans un entrepôt les données multiples tout en les analysant par un outil d'analyse en ligne de donnée Spatiale dit SOLAP (Spatial On-Line Analytical Processing). / Today, infectious diseases represent a major public health problem. With the increase in bacterial resistance, the emergence of new pathogens, and the rapid spread of epidemics, monitoring and surveillance of disease transmission become particularly important. In the face of such a threat, society must prepare in advance to respond quickly and effectively if an outbreak is declared, which requires setting up monitoring and prevention mechanisms. In this context, this work develops a spatio-temporal decision support system for monitoring and surveilling the spread of the seasonal influenza epidemic in the population of Oran (a city in Algeria). The objective of this system is twofold: on the one hand, to understand how the epidemic spreads through the population by using the SEIR (Susceptible-Exposed-Infected-Removed) compartmental model within a Small World network; and on the other hand, to store the various data in a data warehouse and analyse them with a spatial online analysis tool, SOLAP (Spatial On-Line Analytical Processing).
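For orientation, a deterministic discrete-time version of the SEIR compartmental model named in the abstract is sketched below (the thesis couples SEIR with a Small World contact network, which this simplified version omits; the parameter values are illustrative, not taken from the thesis):

```python
# Discrete-time SEIR sketch (deterministic compartments only; the
# network-based transmission used in the thesis is not modelled here).
# beta, sigma, gamma are illustrative rates, not fitted values.

def seir(S, E, I, R, beta=0.3, sigma=0.2, gamma=0.1, days=100):
    N = S + E + I + R
    history = [(S, E, I, R)]
    for _ in range(days):
        new_exposed   = beta * S * I / N   # S -> E via contacts
        new_infected  = sigma * E          # E -> I after incubation
        new_recovered = gamma * I          # I -> R (removed)
        S -= new_exposed
        E += new_exposed - new_infected
        I += new_infected - new_recovered
        R += new_recovered
        history.append((S, E, I, R))
    return history

peak = max(seir(999_990, 0, 10, 0), key=lambda state: state[2])
print(f"peak infected: {peak[2]:.0f}")
```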
488

Designing conventional, spatial, and temporal data warehouses: concepts and methodological framework

Malinowski Gajda, Elzbieta 02 October 2006 (has links)
Decision support systems are interactive, computer-based information systems that provide data and analysis tools in order to better assist managers at different levels of an organization in the process of decision making. Data warehouses (DWs) have been developed and deployed as an integral part of decision support systems.

A data warehouse is a database that stores the high volumes of historical data required for analytical purposes. This data is extracted from operational databases, transformed into a coherent whole, and loaded into a DW during the extraction-transformation-loading (ETL) process.

DW data can be dynamically manipulated using on-line analytical processing (OLAP) systems. DW and OLAP systems rely on a multidimensional model that includes measures, dimensions, and hierarchies. Measures are usually numeric additive values used for the quantitative evaluation of different aspects of an organization. Dimensions provide different analysis perspectives, while hierarchies allow measures to be analyzed at different levels of detail.

Nevertheless, designers as well as users currently find it difficult to specify the multidimensional elements required for analysis. One reason is the lack of conceptual models for DW and OLAP system design that would allow data requirements to be expressed on an abstract level without considering implementation details. Another problem is that many kinds of complex hierarchies arising in real-world situations are not addressed by current DW and OLAP systems.

In order to help designers build conceptual models for decision-support systems and to help users better understand the data to be analyzed, in this thesis we propose the MultiDimER model: a conceptual model for representing multidimensional data for DW and OLAP applications. Our model is mainly based on existing ER constructs, for example entity types, attributes, and relationship types with their usual semantics, allowing the common concepts of dimensions, hierarchies, and measures to be represented. It also includes a conceptual classification of the different kinds of hierarchies existing in real-world situations and proposes graphical notations for them.

On the other hand, users of DW and OLAP systems now also demand the inclusion of spatial data. The advantage of using spatial data in the analysis process is widely recognized, since its visualization reveals patterns that are difficult to discover otherwise.

However, although DWs typically include a spatial or location dimension, this dimension is usually represented in an alphanumeric format. Furthermore, there is still a lack of systematic study analyzing the inclusion as well as the management of hierarchies and measures that are represented using spatial data.

With the aim of satisfying the growing requirements of decision-making users, we extend the MultiDimER model by allowing spatial data to be included in the different elements composing the multidimensional model. The novelty of our contribution lies in the fact that a multidimensional model is seldom used for representing spatial data. To succeed with our proposal, we applied the research achievements in the field of spatial databases to the specific features of a multidimensional model.

The spatial extension of a multidimensional model raises several issues, to which we refer in this thesis, such as the influence of different topological relationships between the spatial objects forming a hierarchy on the procedures required for measure aggregation, the aggregation of spatial measures, and the inclusion of spatial measures without the presence of spatial dimensions, among others.

Moreover, one of the important characteristics of multidimensional models is the presence of a time dimension for keeping track of changes in measures. However, this dimension cannot be used to model changes in other dimensions. Usual multidimensional models are therefore not symmetric in the way they represent changes for measures and dimensions. Further, there is still a lack of analysis indicating which concepts already developed for providing temporal support in conventional databases can be applied, and be useful, for the different elements composing a multidimensional model.

In order to handle temporal changes to all elements of a multidimensional model in a similar manner, we introduce a temporal extension of the MultiDimER model. This extension is based on research in the area of temporal databases, which have been successfully used for modeling time-varying information for several decades. We propose the inclusion of different temporal types, such as valid and transaction time, which are obtained from source systems, in addition to the DW loading time generated in DWs. We use this temporal support for a conceptual representation of time-varying dimensions, hierarchies, and measures. We also refer to specific constraints that should be imposed on time-varying hierarchies and to the problem of handling multiple time granularities between source systems and DWs.

Furthermore, the design of DWs is not an easy task. It requires considering all phases from requirements specification to final implementation, including the ETL process. It should also take into account that the inclusion of different data items in a DW depends on both users' needs and data availability in source systems. Currently, however, designers must rely on their experience, due to the lack of a methodological framework that considers these aspects.

In order to assist developers during the DW design process, we propose a methodology for the design of conventional, spatial, and temporal DWs. We refer to the different phases of requirements specification and conceptual, logical, and physical modeling. We include three different methods for requirements specification, depending on whether users, operational data sources, or both are the driving force in the process of requirements gathering, and we show how each method leads to the creation of a conceptual multidimensional model. We also present the logical and physical design phases, covering DW structures and the ETL process.

To ensure the correctness of the proposed conceptual models, i.e. with conventional, spatial, and time-varying data, we formally define them, providing their syntax and semantics. With the aim of assessing the usability of our conceptual model, including the representation of different kinds of hierarchies as well as spatial and temporal support, we present real-world examples.

Pursuing the goal that the proposed conceptual solutions can be implemented, we include their logical representations using relational and object-relational databases. / Doctorat en sciences appliquées / info:eu-repo/semantics/nonPublished
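One of the complex-hierarchy cases such a classification must cover, the non-strict hierarchy, is easy to demonstrate (illustrative data; the bilingual-city example is in the spirit of the thesis's discussion, not quoted from it): when a child member has two parents, a naive rollup counts its measure twice.

```python
# Rollup sketch: in a non-strict hierarchy (a member with two parents),
# naively summing a measure up the hierarchy double-counts it. This is
# one reason complex hierarchies need explicit conceptual treatment.

from collections import defaultdict

parents = {"Geneva": ["French-speaking", "German-speaking"],  # non-strict
           "Lausanne": ["French-speaking"]}
sales = {"Geneva": 100, "Lausanne": 50}

rollup = defaultdict(int)
for city, amount in sales.items():
    for region in parents[city]:
        rollup[region] += amount

print(dict(rollup))           # {'French-speaking': 150, 'German-speaking': 100}
print(sum(rollup.values()))   # 250, although total base sales are only 150
```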
489

Méthodologies pour la création de connaissances relatives au marché chinois dans une démarche d'Intelligence Économique : application dans le domaine des biotechnologies agricoles / Methodologies for building knowledge about the Chinese market in a business intelligence approach : application in the field of agricultural biotechnologies

Guénec, Nadège 02 July 2009 (has links)
Le décloisonnement des économies et l’accélération mondiale des échanges commerciaux ont, en une décennie à peine, transformés l’environnement concurrentiel des entreprises. La zone d’activités s’est élargie en ouvrant des nouveaux marchés à potentiels très attrayants. Ainsi en est-il des BRIC (Brésil, Russie, Inde et Chine). De ces quatre pays, impressionnants par la superficie, la population et le potentiel économique qu’ils représentent, la Chine est le moins accessible et le plus hermétique à notre compréhension de par un système linguistique distinct des langues indo-européennes d’une part et du fait d’une culture et d’un système de pensée aux antipodes de ceux de l’occident d’autre part. Pourtant, pour une entreprise de taille internationale, qui souhaite étendre son influence ou simplement conserver sa position sur son propre marché, il est aujourd’hui absolument indispensable d’être présent sur le marché chinois. Comment une entreprise occidentale aborde-t-elle un marché qui de par son altérité, apparaît tout d’abord comme complexe et foncièrement énigmatique ? Six années d’observation en Chine, nous ont permis de constater les écueils dans l’accès à l’information concernant le marché chinois. Comme sur de nombreux marchés extérieurs, nos entreprises sont soumises à des déstabilisations parfois inimaginables. L’incapacité à « lire » la Chine et à comprendre les enjeux qui s’y déroulent malgré des effets soutenus, les erreurs tactiques qui découlent d’une mauvaise appréciation du marché ou d’une compréhension biaisée des jeux d’acteurs nous ont incités à réfléchir à une méthodologie de décryptage plus fine de l’environnement d’affaire qui puisse offrir aux entreprises françaises une approche de la Chine en tant que marché. Les méthodes de l’Intelligence Economique (IE) se sont alors imposées comme étant les plus propices pour plusieurs raisons : le but de l’IE est de trouver l’action juste à mener, la spécificité du contexte dans lequel évolue l’organisation est prise en compte et l’analyse se fait en temps réel. Si une approche culturelle est faite d’interactions humaines et de subtilités, une approche « marché » est dorénavant possible par le traitement automatique de l’information et de la modélisation qui s’en suit. En effet, dans toute démarche d’Intelligence Economique accompagnant l’implantation d’une activité à l’étranger, une grande part de l’information à portée stratégique vient de l’analyse du jeu des acteurs opérants dans le même secteur d’activité. Une telle automatisation de la création de connaissance constitue, en sus de l’approche humaine « sur le terrain », une réelle valeur ajoutée pour la compréhension des interactions entre les acteurs car elle apporte un ensemble de connaissances qui, prenant en compte des entités plus larges, revêtent un caractère global, insaisissable par ailleurs. La Chine ayant fortement développé les technologies liées à l’économie de la connaissance, il est dorénavant possible d’explorer les sources d’information scientifiques et techniques chinoises. Nous sommes en outre convaincus que l’information chinoise prendra au fil du temps une importance de plus en plus cruciale. Il devient donc urgent pour les organisations de se doter de dispositifs permettant non seulement d’accéder à cette information mais également d’être en mesure de traiter les masses d’informations issues de ces sources. 
Notre travail consiste principalement à adapter les outils et méthodes issues de la recherche française à l'analyse de l'information chinoise en vue de la création de connaissances élaborées. L'outil MATHEO, apportera par des traitements bibliométriques une vision mondiale de la stratégie chinoise. TETRALOGIE, outil dédié au data-mining, sera adapté à l'environnement linguistique et structurel des bases de données scientifiques chinoises. En outre, nous participons au développement d'un outil d'information retreival (MEVA) qui intègre les données récentes des sciences cognitives et oeuvrons à son application dans la recherche de l'information chinoise, pertinente et adéquate. Cette thèse étant réalisée dans le cadre d'un contrat CIFRE avec le Groupe Limagrain, une application contextualisée de notre démarche sera mise en œuvre dans le domaine des biotechnologies agricoles et plus particulièrement autour des enjeux actuels de la recherche sur les techniques d'hybridation du blé. L'analyse de ce secteur de pointe, qui est à la fois une domaine de recherche fondamentale, expérimentale et appliquée donne actuellement lieu à des prises de brevets et à la mise sur le marché de produits commerciaux et représente donc une thématique très actuelle. La Chine est-elle réellement, comme nous le supposons, un nouveau territoire mondial de la recherche scientifique du 21e siècle ? Les méthodes de l'IE peuvent-elles s'adapter au marché chinois ? Après avoir fourni les éléments de réponses à ces questions dans les deux premières parties de notre étude, nous poserons en troisième partie, le contexte des biotechnologies agricoles et les enjeux mondiaux en terme de puissance économico-financière mais également géopolitique de la recherche sur l'hybridation du blé. Puis nous verrons en dernière partie comment mettre en œuvre une recherche d'information sur le marché chinois ainsi que l'intérêt majeur en terme de valeur ajoutée que représente l'analyse de l'information chinoise / The rise of globalization, including technological innovations and the dismantling of trade barriers, has spurred the steady acceleration of global trade and, in barely a decade, has transformed the competitive environment of enterprises. The area of activity has been expanded by the emergence of new markets with very attractive potential, the BRIC countries (Brazil, Russia, India and China) among them. Of these four countries, all impressive in the size, population, and economic potential they represent, China is the least accessible and the most closed to our understanding, owing on the one hand to a linguistic system radically different from the Indo-European languages, and on the other to a culture and a system of thought at odds with those of Western countries. Yet for a company of international size that wants to extend its influence, or simply to maintain its position on its own market, it is now essential to be present on the Chinese market. How does a Western company approach a market that appears at first inherently complex and enigmatic because of its otherness? Six years of observation in China allowed us to identify the pitfalls in access to information about the Chinese market. As in many foreign markets, our companies are subject to sometimes unimaginable destabilization.
The inability to "read" China and understand what is at stake there despite sustained efforts, and the tactical errors that arise from a misjudgement of the market or a biased understanding of the players involved, led us to consider a finer methodology for deciphering the business environment, one that could offer French companies an approach to China as a market. The methods of Business Intelligence (BI) proved the most suitable, for several reasons: the goal of BI is to find the right action to take, the specific context in which the organization evolves is taken into account, and the analysis is done in real time. If a cultural approach is made of human interactions and subtleties, a market approach is now possible through the automatic processing of information and the modelling that follows. In any business intelligence process accompanying the establishment of an activity abroad, a large part of the strategic information comes from analysing the game of the players operating in the same sector of activity. Such automation of knowledge creation constitutes, in addition to the human approach in the field, real added value for understanding the interactions between players, because it provides a body of knowledge which, by taking larger entities into account, has a global character that would otherwise remain out of reach. Because China has highly developed the technologies linked to the knowledge economy, it is now possible to explore Chinese scientific and technical sources of information. We are moreover convinced that Chinese information will take on an increasingly crucial importance over time. It is therefore urgent for organizations to acquire solutions that not only give access to this information but are also able to handle the masses of information coming from these sources. The aim of this thesis is mainly to adapt the tools and methods developed by French university research to the analysis of Chinese information in order to create elaborate knowledge. The MATHEO tool will provide, through bibliometric treatments, a worldwide view of the Chinese strategy. TETRALOGIE, a tool dedicated to data mining, will be tailored to the linguistic environment and structure of Chinese scientific databases. In addition, we participate in the development of an information retrieval tool (MEVA) that integrates recent findings in cognitive science, and we work on its application to the retrieval of relevant and appropriate Chinese information. As this thesis was carried out under a CIFRE university-enterprise contract with the Limagrain Group, a contextualized application of our approach is implemented in the field of agricultural biotechnology, and in particular around current issues in research on wheat hybridization techniques. The analysis of this leading-edge sector, which is at once an area of fundamental, experimental, and applied research, and which currently gives rise to patent filings and the marketing of commercial products, is thus a highly topical subject. Is China really, as we suppose, a new world territory of scientific research for the 21st century? Can the methods of BI be adapted to the Chinese market? After providing answers to these questions in the first two parts of our study, the third part will describe the global context of agricultural biotechnologies and what is at stake in research on wheat hybridization, in terms of economic and financial power but also geopolitically.
The fourth and last part will then show how to implement information retrieval on the Chinese market, and the major added value that the analysis of Chinese information represents.
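As a minimal illustration of the kind of bibliometric treatment such tools automate (invented sample records; this is not MATHEO's or TETRALOGIE's actual code), a co-word analysis reduces to counting keyword co-occurrences across records:

```python
# Co-word (co-occurrence) sketch: count keyword pairs across records
# to map a research landscape. The records below are invented examples.

from itertools import combinations
from collections import Counter

records = [
    {"wheat", "hybridization", "male-sterility"},
    {"wheat", "hybridization", "patent"},
    {"wheat", "male-sterility"},
]

cooc = Counter()
for keywords in records:
    for pair in combinations(sorted(keywords), 2):
        cooc[pair] += 1

for pair, n in cooc.most_common(3):
    print(pair, n)
```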
490

Dashboardy - jejich analýza a implementace v prostředí SAP Business Objects / An analysis and implementation of Dashboards within SAP Business Objects 4.0/4.1

Kratochvíl, Tomáš January 2013 (has links)
The diploma thesis focuses on the analysis and classification of dashboards and on their subsequent implementation in the SAP Dashboards and Web Intelligence tools. Its main goal is an analysis of dashboards for different areas of company management according to the chosen solution architecture. Another goal is to take into account the principles of dashboard use within a company, including a comparison of indicators. In the theoretical part, the author further defines the data life cycle within Business Intelligence and decomposes the particular dashboard types. The theoretical part closes with an important chapter on data quality, the data quality process and its improvement, and the use of SAP Best Practices and KBAs (Knowledge Base Articles) for the BI tools published by SAP. The implementation part backs up the theory and is divided into three chapters according to the selected architecture: using multi-source systems, using SAP InfoSets/Query, and using a data warehouse or data mart as the architecture for reporting purposes. The detailed implementation sections should help the reader form an opinion on the different architectures, and especially on the differences between the BI tools within SAP Business Objects. Each architecture section ends with its pros and cons.
