161

Um plano de métricas para monitoramento de projetos scrum / A metrics plan for monitoring Scrum projects

Spies, Eduardo Henrique 15 March 2013 (has links)
Agile methods have earned their place in both industry and academia and are increasingly used. Focused on frequent deliveries to customers, these methods struggle to keep control and maintain efficient communication, especially in larger projects with many collaborators. Software engineering techniques have proved valuable for increasing predictability and bringing more discipline to this kind of project. This work presents a metrics program for SCRUM and an extension of a Data Warehousing environment for monitoring projects. It thus provides a consistent repository that can be used as a historical reference of projects and for exploring metrics in different dimensions, easing control over every aspect of a project's progress.
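To make the idea of a metrics repository concrete, here is a minimal sketch in Python, not the thesis's actual design: the table names, metrics, and SQLite backend are all assumptions. It loads sprint measures into a tiny star schema and explores them with a running average, the kind of historical reference the abstract describes.

```python
import sqlite3

# A minimal sketch of a sprint-metrics star schema (names are illustrative,
# not the schema proposed in the thesis).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_project (project_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_sprint  (sprint_id INTEGER PRIMARY KEY, number INTEGER,
                              project_id INTEGER REFERENCES dim_project);
    CREATE TABLE fact_sprint_metrics (
        sprint_id INTEGER REFERENCES dim_sprint,
        planned_points REAL, delivered_points REAL, defects INTEGER
    );
""")
conn.execute("INSERT INTO dim_project VALUES (1, 'Demo')")
conn.executemany("INSERT INTO dim_sprint VALUES (?, ?, 1)", [(1, 1), (2, 2)])
conn.executemany("INSERT INTO fact_sprint_metrics VALUES (?, ?, ?, ?)",
                 [(1, 30, 24, 5), (2, 28, 27, 2)])

# Velocity per sprint plus a running historical average: the kind of
# metric exploration a consistent repository makes cheap.
for row in conn.execute("""
    SELECT s.number, f.delivered_points,
           AVG(f.delivered_points) OVER (ORDER BY s.number) AS running_avg
    FROM fact_sprint_metrics f JOIN dim_sprint s USING (sprint_id)
"""):
    print(row)
```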
162

Contribution à la prévention des risques liés à l’anesthésie par la valorisation des informations hospitalières au sein d’un entrepôt de données / Contributing to preventing anesthesia adverse events through the reuse of hospital information in a data warehouse

Lamer, Antoine 25 September 2015 (has links)
Introduction: Hospital Information Systems (HIS) record and manage millions of data items related to patient care every day: biological results, vital signs, drug administrations, care pathways, and so on. These data are stored by operational applications that provide medical staff with remote access and a comprehensive view of the Electronic Health Record. They may also serve other purposes, such as clinical research or public health, particularly when integrated into a data warehouse; the main difficulty of such projects is reusing data for a purpose other than the one for which they were recorded. Several studies have shown a statistical link between compliance with quality indicators for anesthesia care and patient outcome during the hospital stay. At the University Hospital of Lille, these quality indicators, as well as patient comorbidities during the post-operative period, could be computed from data collected by several applications of the HIS. The objective of this work is to integrate the data recorded by these operational applications in order to conduct clinical research studies.

Methods: First, the quality of the data registered by the operational applications is evaluated with methods presented in the literature or developed in this work. The data quality problems highlighted by the evaluation are then handled during the integration step of the ETL process. New data are computed and aggregated to provide quality-of-care indicators. Finally, two case studies demonstrate the usability of the system.

Results: The relevant data from the HIS applications have been integrated into an anesthesia data warehouse. The system stores information about hospital stays and interventions performed since 2010 (drug administrations, intervention steps, vital signs, care pathways, ...). Aggregated data were computed and supported two clinical research studies. The first showed a statistical link between hypotension at anesthesia induction and patient outcome, and established predictive factors for that hypotension. The second evaluated compliance with ventilation quality indicators and their impact on respiratory comorbidities.

Discussion: The data warehouse, together with the integration and cleaning methods developed in this work, allows retrospective statistical analyses on more than 200,000 interventions. The system can be extended to other source applications within the CHRU of Lille, and also to the anesthesia records used by other care institutions.
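As a hedged illustration of the data-quality step described in the Methods above (the rule, threshold, and field names are invented for the example; they are not the thesis's actual indicators), a check-then-integrate pass might look like this in Python:

```python
import pandas as pd

# Illustrative vital-sign records as a source application might export them.
df = pd.DataFrame({
    "intervention_id": [1, 1, 2, 2],
    "timestamp": pd.to_datetime(["2015-01-01 08:00", "2015-01-01 08:05",
                                 "2015-01-02 09:00", "2015-01-02 09:05"]),
    "systolic_bp": [120.0, -10.0, 85.0, 300.0],  # two implausible values
})

# Quality rule (assumed, for illustration): systolic blood pressure outside
# a plausible physiological range is treated as a recording artifact.
valid = df["systolic_bp"].between(30, 280)
print(f"quality report: {len(df) - valid.sum()} of {len(df)} rows rejected")

# Handling during integration: drop invalid measurements, then aggregate to
# per-intervention indicators for the warehouse fact table.
clean = df[valid]
indicators = clean.groupby("intervention_id")["systolic_bp"].agg(["min", "mean"])
print(indicators)
```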
163

Evangelist Marketing of the CloverETL Software

Štýs, Miroslav January 2011 (has links)
The diploma thesis Evangelist Marketing of the CloverETL Software proposes a new marketing strategy for an ETL tool, CloverETL. The theoretical part comprises chapters two and three. Chapter two attempts to cover the term ETL, which, as a separate component of the Business Intelligence architecture, is not given much space in the literature. Chapter three introduces evangelist marketing and explains its origins and best practices. The practical part introduces the company Javlin, a.s. and its CloverETL software product. After an assessment of the current marketing strategy, a new strategy is proposed, built on the pillars of evangelist marketing. Finally, the benefits of the new approach are discussed in light of stats and data, mostly Google Analytics outputs.
164

Utilização de data warehouses para gerenciar dados de redes de sensores sem fio que monitoram polinizadores. / The use of data warehouse to manage data from wireless sensors network that monitor pollinators.

Ricardo Augusto Gomes da Costa 19 August 2011 (has links)
This work applies the data warehouse concept to the aggregation, management, and presentation of data collected by wireless sensor networks that monitor pollinators. Scientific experiments that use such networks for habitat monitoring generate a volume of data that must be processed and analyzed before it can help researchers and other stakeholders. These data, managed and correlated with information from other sources, can support decision making and feed back into further experiments. To evaluate the proposal, a model was developed for extracting, transforming, and normalizing the data collected by wireless sensor networks, also covering the load into the data warehouse. The model considered tabulated data from wireless sensor networks used in experiments with bees, together with data from other sources on beekeeping, which are important for obtaining more accurate data warehouse views. The use of a data warehouse in this context proved to be a viable and useful alternative: it eased the retrieval of consolidated data about the experiment, important for researchers' decision making, and reduced the time stakeholders spend extracting this information compared with the traditional analysis of spreadsheets.
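A small sketch of the extraction-transformation-normalization model the abstract mentions, with all field names, the calibration formula, and the hourly grain assumed for illustration:

```python
import pandas as pd

# Raw readings as a wireless sensor network might tabulate them
# (node id, unix timestamp, raw temperature count) -- all names assumed.
raw = pd.DataFrame({
    "node": ["n1", "n1", "n2"],
    "ts": [1313750400, 1313754000, 1313750400],
    "raw_temp": [512, 530, 498],
})

# Transform: decode timestamps and convert raw counts to degrees Celsius
# (a made-up linear calibration, standing in for sensor-specific decoding).
raw["measured_at"] = pd.to_datetime(raw["ts"], unit="s")
raw["temp_c"] = raw["raw_temp"] * 0.1 - 27.0

# Normalize and aggregate into an hourly view: the consolidated form the
# warehouse serves to researchers instead of raw spreadsheets.
hourly = (raw.set_index("measured_at")
             .groupby("node")["temp_c"]
             .resample("1h").mean()
             .reset_index())
print(hourly)
```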
165

Datový sklad pro vzájemně nekompatibilní verze systému EPOS / Data Warehouse for Incompatible Versions of EPOS system

Kyšková, Lucia January 2016 (has links)
This bachelor’s thesis builds on experience and knowledge gained in the field of database systems and business intelligence. Its result is a data warehouse, with supporting business intelligence components, for two incompatible versions of the EPOS system (electronic cash desk checking system).
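The core of such a warehouse is a mapping from each incompatible source version onto one unified schema. A toy sketch, with invented field names since the abstract does not show the real EPOS layouts:

```python
# Two incompatible exports of the same sale record (field names invented).
sale_v1 = {"id": 7, "amount_czk": 250, "sold": "2016-01-05"}
sale_v2 = {"sale_id": 7, "total": 250.0, "currency": "CZK", "date": "2016-01-05"}

def to_warehouse(record: dict) -> dict:
    """Map either source version onto the unified warehouse row."""
    if "amount_czk" in record:  # version 1 layout
        return {"sale_id": record["id"], "total": float(record["amount_czk"]),
                "currency": "CZK", "sold_on": record["sold"]}
    return {"sale_id": record["sale_id"], "total": record["total"],  # version 2
            "currency": record["currency"], "sold_on": record["date"]}

# Both versions land in the same warehouse representation.
assert to_warehouse(sale_v1) == to_warehouse(sale_v2)
```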
166

Large Scale ETL Design, Optimization and Implementation Based On Spark and AWS Platform

Zhu, Di January 2017 (has links)
Nowadays, the amount of data generated by users within an Internet product grows exponentially: clickstreams from website applications with millions of users, geospatial information from GIS-based apps on Android and iPhone, sensor data from cars and other electronic equipment, and so on. These sources can yield billions of events every day, so it is essential that insights can be extracted from them, for example for monitoring systems, fraud detection, user behavior analysis, and feature verification. Nevertheless, technical issues emerge accordingly: heterogeneity, massive volume, and miscellaneous requirements for using the data across different dimensions make it much harder to design data pipelines and to transform and persist the data in a data warehouse. There are, undeniably, traditional ways to build ETLs, from mainframes [1] and RDBMSs to MapReduce and Hive. Yet with the emergence and popularization of the Spark framework and AWS, this procedure can evolve into a more robust, efficient, less costly, and easier-to-implement architecture for collecting massive data, building dimensional models, and running analytics. Drawing on the advantage of working within a car transportation company, where billions of user behavior events arrive every day, this thesis contributes an exploratory way of building and optimizing ETL pipelines based on AWS and Spark, and compares it with current mainstream data pipelines from different aspects.
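A minimal PySpark sketch of the kind of clickstream aggregation such a pipeline performs; the column names are assumptions, and on AWS the input would come from S3 (e.g. a JSON or Parquet path) rather than an in-memory DataFrame:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Local session for the sketch; on AWS this would run on EMR and read from
# S3, e.g. spark.read.json("s3://bucket/clickstream/...") -- path assumed.
spark = SparkSession.builder.master("local[*]").appName("etl-sketch").getOrCreate()

events = spark.createDataFrame(
    [("u1", "click", "2017-05-01 10:00:00"),
     ("u1", "view",  "2017-05-01 10:00:30"),
     ("u2", "click", "2017-05-01 11:15:00")],
    ["user_id", "event_type", "ts"])

# Transform: parse timestamps, then aggregate to an hourly dimensional view,
# ready to be persisted (e.g. as partitioned Parquet) in the warehouse layer.
hourly = (events
          .withColumn("hour", F.date_trunc("hour", F.to_timestamp("ts")))
          .groupBy("hour", "event_type")
          .count())
hourly.show()
spark.stop()
```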
167

Automating the multidimensional design of data warehouses

Romero Moral, Oscar 09 February 2010 (has links)
Previous experiences in the data warehouse field have shown that the multidimensional conceptual schema of the data warehouse must be derived from a hybrid approach, i.e., one that considers both the end-user requirements and the data sources as first-class citizens. As in any other system, requirements guarantee that the system devised meets the end-user's needs. In addition, since data warehouse design is a reengineering process, it must consider the underlying data sources of the organization: (i) to guarantee that the data warehouse can be populated from data available within the organization, and (ii) to allow the end-user to discover unknown additional analysis capabilities.

Several methods for supporting the data warehouse modeling task have been proposed, but they suffer from significant drawbacks. In short, requirement-driven approaches assume that requirements are exhaustive (and therefore do not consider the data sources to contain alternative interesting evidence for analysis), whereas data-driven approaches (those leading the design task from a thorough analysis of the data sources) rely on discovering as much multidimensional knowledge as possible from the data sources and, as a consequence, generate too many results, which mislead the user. Furthermore, automating the design task is essential in this scenario: it removes the dependency on an expert's ability to properly apply the chosen method, as well as the need to analyze the data sources, a tedious and time-consuming task that can be unfeasible for large databases. Current automatable methods follow a data-driven approach, whereas current requirement-driven approaches overlook automation, since they tend to work with requirements at a high level of abstraction. The same situation repeats in the data-driven and requirement-driven stages of current hybrid approaches, which suffer from the same drawbacks as the pure approaches.

In this thesis we introduce two approaches for automating the multidimensional design of the data warehouse: MDBE (Multidimensional Design Based on Examples) and AMDO (Automating the Multidimensional Design from Ontologies). Both were devised to overcome the limitations of current approaches. Importantly, they start from opposite initial assumptions, but both consider the end-user requirements and the data sources as first-class citizens.

1. MDBE follows a classical approach, in which the end-user requirements are well known beforehand. It benefits from the knowledge captured in the data sources, but guides the design task from the requirements and, consequently, can work with semantically poorer data sources: given high-quality end-user requirements, we can guide the process from the knowledge they contain and overcome data sources that capture the domain poorly.

2. AMDO, as its counterpart, assumes a scenario in which the available data sources are semantically rich. The process is therefore guided by a thorough analysis of the data sources, which is then adapted to shape the output according to the end-user requirements. In this context, having high-quality data sources compensates for the lack of expressive end-user requirements.

Together, our methods establish a combined and comprehensive framework that can be used to decide, according to the inputs available in each scenario, which approach to follow. For example, we cannot follow the same approach in a scenario where the end-user requirements are clear and well known as in one where they are not evident or cannot easily be elicited (as may happen when users are not aware of the analysis capabilities of their own sources). Interestingly, the need for requirements beforehand is softened by having semantically rich data sources; lacking those, requirements gain relevance for extracting the multidimensional knowledge from the sources. We therefore claim to provide two approaches whose combination is exhaustive with regard to the scenarios discussed in the literature.
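To illustrate the data-driven side of the problem, and emphatically not the MDBE or AMDO algorithms themselves, here is a toy heuristic of the kind such methods automate: tables referencing several others and carrying numeric columns become fact candidates, and the referenced tables become dimension candidates.

```python
# A toy data-driven heuristic in the spirit of automated multidimensional
# design (NOT MDBE or AMDO, only an illustration of the idea). The schema
# below is invented for the example.
schema = {
    "sales":   {"fks": ["product", "store", "date"], "numeric": ["amount", "qty"]},
    "product": {"fks": [], "numeric": []},
    "store":   {"fks": [], "numeric": []},
    "date":    {"fks": [], "numeric": []},
}

def fact_candidates(schema: dict, min_fks: int = 2) -> dict:
    """Tables with >= min_fks foreign keys and numeric columns are fact
    candidates; their referenced tables are dimension candidates."""
    out = {}
    for table, meta in schema.items():
        if len(meta["fks"]) >= min_fks and meta["numeric"]:
            out[table] = {"measures": meta["numeric"], "dimensions": meta["fks"]}
    return out

print(fact_candidates(schema))
# {'sales': {'measures': ['amount', 'qty'],
#            'dimensions': ['product', 'store', 'date']}}
```

Requirements then come in to prune or reshape the candidates, which is exactly the over-generation problem the abstract attributes to purely data-driven methods.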
168

Applying data warehouse and on-line analytic processing techniques on human resource management

Kuo, Li-Fang 24 June 2004 (has links)
In this rapidly changing new economy, network technology has brought significant reform to enterprise operations. Human resource information systems, developed on the back of information technology, have gradually become an essential management instrument: systematic and statistical analysis, presented through visual tables and charts (such as analytic forms or statistical sketches), gives executives and human resource managers scientific and concrete data to support their decisions. The data warehouse is a technology for data storage in which data can not only be compiled but also decomposed, merged, and intersected across different ranges and layers; On-Line Analytical Processing (OLAP) and data mining can then extract further information, providing applicable messages for decision makers. In recent years, the data warehouse has therefore gradually become the main data source of Decision Support Systems (DSS). This research establishes a prototype data warehouse for human resource management that meets decision makers' basic need to query related statistics quickly. It extracts data from a human resource information system database, establishes a corresponding multidimensional data model, and applies data warehouse and OLAP technology so that, over the Internet, decision makers can flexibly and rapidly query the statistics they need, enhancing the quality and timeliness of decisions. The research delivers the following benefits:
1. Convenient data querying: drag-and-drop operations let users retrieve data rapidly and conveniently.
2. Multidimensional data analysis: OLAP supports multidimensional queries, cross analyses, and variation comparisons, giving managers clearer reference material for decision making.
3. Flexible access to needed information: users can change dimensions at will to obtain the information they need, increasing query flexibility.
4. Access through the network: the system is web-based, so queries can be made through a browser over the network, enhancing the system's mobility and convenience.
5. Database examination: OLAP statistical outcomes can be used to check the database for correctness and completeness.
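A pivot table is the simplest working model of the multidimensional analysis the abstract describes. The sketch below, with an invented HR dataset (the thesis's actual HR model is not given), puts dimensions on the axes and aggregated measures in the cells; swapping dimensions corresponds to OLAP slice-and-dice:

```python
import pandas as pd

# Illustrative HR records (fields assumed for the example).
hr = pd.DataFrame({
    "department": ["R&D", "R&D", "Sales", "Sales"],
    "year":       [2003, 2004, 2003, 2004],
    "headcount":  [12, 15, 8, 9],
    "turnover":   [1, 2, 3, 1],
})

# A pivot table acts as a small OLAP cube: department and year are the
# dimensions, headcount and turnover the aggregated measures.
cube = pd.pivot_table(hr, index="department", columns="year",
                      values=["headcount", "turnover"], aggfunc="sum")
print(cube)
```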
169

Data Warehouse Change Management Based on Ontology

Tsai, Cheng-Sheng 12 July 2003 (has links)
This thesis provides a solution to the schema change problem. In a data warehouse system, schema changes in a data source break the consistency between the data sources and the data warehouse, rendering the warehouse obsolete. We developed three stages to handle schema changes occurring in databases: change detection, diagnosis, and handling. Recommendations are generated by the DB-agent and passed to the DW-agent to notify the DBA of what a schema change affects in the star schema, and where. The study mainly handles seven kinds of schema change in a relational database, covering both non-adding and adding changes. In our experiments, a non-adding schema change achieves a high correct mapping rate using traditional mappings between the data warehouse and the database. An adding schema change, by contrast, involves many uncertainties in diagnosis and handling; for this reason, we compare the similarity between an added relation or attribute and the ontology's concepts or concept attributes to generate a good recommendation. The evaluation results show that the proposed approach can detect these schema changes correctly and give the DBA appropriate recommendations about them.
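A hedged sketch of the detection and diagnosis stages (the schema snapshots and ontology are invented, and simple string similarity stands in for the thesis's richer ontology matching):

```python
from difflib import SequenceMatcher

# Two snapshots of a source database schema (names assumed).
old_schema = {"customer": ["id", "name", "city"]}
new_schema = {"customer": ["id", "name", "city", "birth_dt"],
              "orders":   ["id", "customer_id", "total"]}

# A tiny stand-in for the ontology's concept attributes.
ontology = {"client": ["identifier", "full_name", "city", "birth_date"]}

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

# Detection: diff the two schema snapshots.
added_tables = set(new_schema) - set(old_schema)
added_attrs = {t: set(new_schema[t]) - set(old_schema.get(t, []))
               for t in new_schema if t in old_schema}

# Diagnosis of an adding change: match the new attribute against ontology
# concept attributes to recommend where it fits in the star schema.
for attr in added_attrs.get("customer", set()):
    best = max(ontology["client"], key=lambda c: similarity(attr, c))
    print(f"new attribute {attr!r} resembles ontology attribute {best!r} "
          f"(score {similarity(attr, best):.2f})")
print("new tables detected:", added_tables)
```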
170

To probe deeply into the Customer Relationship Management strategy and operation flow of life insurance: the case of Nan Shan Life Insurance Co., Ltd.

Hsiao, Chen-Nung 28 July 2003 (has links)
Due to the rapid development of information technology in recent years, the transparency of life insurance contents and pricing has intensified competition in the industry. A life insurance product is an intangible contract: its sale rests on the company's image and reputation and on the customer's long-term trust, and it carries a promise and a responsibility to the client. Life insurance marketing therefore differs from that of other industries; it is an intangible deal. With the whole environment changing and competition intensifying, life insurance has shifted from a seller's market to a buyer's market. It is said that acquiring a new account costs roughly six times (estimates range from five to ten times) as much as maintaining an existing one, so the industry must pay more attention to current accounts while gradually attracting new clients. The task of greatest urgency is to make good use of the customer concept, that is, to take good care of CRM, enhancing customer loyalty and satisfaction so as to keep existing clients and encourage them to introduce new accounts. CRM is therefore the most important part of life insurance. Customer databases used to be largely incomplete; in the e-century, IT services make it possible to practice deep, one-to-one CRM and cope with the competition. Looking into the 21st century, customer groups and markets have diversified, reorienting the life insurance market from commodity-oriented to customer-based. Policyholders now expect, and are sensitive to, the value of products and services: they want custom-made offers and voluntary participation rather than passively accepting whatever is offered. Consequently, design-in service and one-to-one products must become the new marketing strategy.

This case study on CRM examines how the four big steps of Peppers and Rogers' model and the 5W framework are applied to strategy execution and operation flow, and how the four concepts of the Customer Process Cycle Model are learned and executed to achieve the company's strategic CRM targets. The findings of the research are:

1. Nan Shan Life Insurance Co., Ltd. especially stresses the function and operation of its call center, with significant achievements; it is the most important area and the elite of its CRM.
2. Through CRM system integration and continuous collection of customer information, the system can assess and update customer value. With data warehouse and data mining concepts and techniques, it records and analyzes customer behavior patterns and identifies target markets, so that service and marketing strategies can be corrected in time (project marketing).
3. Regarding customer segmentation, the information Nan Shan collects does not readily support value-based segmentation, only need-based segmentation. It is hard to establish the value of each segment to the company, but customer needs can still be used to match suitable services and products to customers.
4. Since interaction between insurer and customer is driven mainly by people, the sales representative plays the key role. Nan Shan's CRM system requires reps to close deals using e-tools and IT, and has been quite successful in this respect.
5. The IT skeleton of the CRM is very complete and provides extensive channels for accessing data, with attention to service items, convenience, and direct contact with customers. Technology removes space-time limits on communication: (1) www.nanshanlife.com.tw; (2) e-mail; (3) telephone (call center); (4) mobile phone news flashes; (5) sales representatives; (6) mail or DM.
6. Nan Shan's e-tooling is well constructed, and it excels at providing design-in products and services: ten marketing projects were launched within one year, all of them derived from customer-segment analysis. However, training for field sales reps needs reinforcement; with so many projects launched in a short period, reps cannot fully absorb them in time, the projects fail to become CI and CK, and customers' purchasing habits are affected.
7. Customer information will not be completely collected if reps are not practiced in operating the CRM system.

The research brings up the following proposals:

(1) Share the company's current situation and other information, such as its investment operations, with policyholders via e-mail or the Internet. Treating customers as shareholders or partners earns their trust, and they in turn take pride in being Nan Shan policyholders.
(2) Recommend the use of ES. Since 2002, reps have preferred to sell investment-linked policies; if the education system is combined with ES for financial planning, a rep can quickly become a financial specialist.
(3) Reps are the interface with customers, yet campaigns for projected products with smaller bonuses consistently underperform. Linking these campaigns to the balanced scorecard to evaluate performance would put appropriate pressure on reps to meet CRM execution targets.
(4) Nan Shan not only performs very well on the four steps of the CRM flow but can also serve as a pattern for peers who want to achieve company targets through CRM; results can be even better if they consider their own company culture, background, and market demand and modify the approach accordingly.
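As a loose illustration of the need-based segmentation discussed in finding 3 (the fields and rules are invented for the example; the case company's actual data mining is far richer), a toy segmentation might look like:

```python
import pandas as pd

# Invented policyholder records for the sketch.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "age":         [28, 45, 45, 62],
    "dependents":  [0, 2, 0, 1],
    "has_policy":  ["savings", "life", "investment", "life"],
})

def need_segment(row) -> str:
    """Assign a need-based segment from simple, assumed rules."""
    if row["dependents"] > 0:
        return "protection"   # family income-protection needs
    return "wealth" if row["age"] < 50 else "retirement"

customers["segment"] = customers.apply(need_segment, axis=1)
print(customers[["customer_id", "segment"]])
```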
