  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Automating the multidimensional design of data warehouses

Romero Moral, Oscar 09 February 2010 (has links)
Previous experiences in the data warehouse field have shown that the data warehouse multidimensional conceptual schema must be derived from a hybrid approach: i.e., by considering both the end-user requirements and the data sources as first-class citizens. As in any other system, requirements guarantee that the system devised meets the end-user needs. In addition, since the data warehouse design task is a reengineering process, it must consider the underlying data sources of the organization: (i) to guarantee that the data warehouse can be populated from data available within the organization, and (ii) to allow the end-user to discover unknown additional analysis capabilities. Currently, several methods for supporting the data warehouse modeling task have been proposed. However, they suffer from significant drawbacks. In short, requirement-driven approaches assume that requirements are exhaustive (and therefore do not consider that the data sources may contain alternative interesting evidence of analysis), whereas data-driven approaches (i.e., those leading the design task from a thorough analysis of the data sources) rely on discovering as much multidimensional knowledge as possible from the data sources.
As a consequence, data-driven approaches generate too many results, which mislead the user. Furthermore, automating the design task is essential in this scenario, as it removes the dependency on an expert's ability to properly apply the chosen method, as well as the need to analyze the data sources, which is a tedious and time-consuming task (and can be unfeasible when working with large databases). In this sense, current automatable methods follow a data-driven approach, whereas current requirement-driven approaches overlook process automation, since they tend to work with requirements at a high level of abstraction. Indeed, this scenario is repeated in the data-driven and requirement-driven stages of current hybrid approaches, which suffer from the same drawbacks as pure data-driven or requirement-driven approaches. In this thesis we introduce two different approaches for automating the multidimensional design of the data warehouse: MDBE (Multidimensional Design Based on Examples) and AMDO (Automating the Multidimensional Design from Ontologies). Both approaches were devised to overcome the limitations of current approaches. Importantly, our approaches start from opposite initial assumptions, but both consider the end-user requirements and the data sources as first-class citizens. 1. MDBE follows a classical approach, in which the end-user requirements are well known beforehand. This approach benefits from the knowledge captured in the data sources, but guides the design task according to the requirements and, consequently, is able to handle semantically poorer data sources. In other words, given high-quality end-user requirements, we can guide the process from the knowledge they contain and overcome the drawbacks of data sources that are of poor quality from a semantic point of view. 2. AMDO, as a counterpart, assumes a scenario in which the available data sources are semantically richer.
Thus, the proposed approach is guided by a thorough analysis of the data sources, whose output is then adapted and shaped according to the end-user requirements. In this context, given high-quality data sources, we can overcome the lack of expressive end-user requirements. Importantly, our methods establish a combined and comprehensive framework that can be used to decide, according to the inputs available in each scenario, which approach is best to follow. For example, we cannot follow the same approach in a scenario where the end-user requirements are clear and well known as in one where they are not evident or cannot be easily elicited (e.g., this may happen when the users are not aware of the analysis capabilities of their own sources). Interestingly, the need to have requirements beforehand is softened by the availability of semantically rich data sources; lacking that, requirements gain relevance for extracting the multidimensional knowledge from the sources. We therefore claim to provide two approaches whose combination proves exhaustive with regard to the scenarios discussed in the literature.
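The hybrid idea, letting user requirements filter candidate facts and dimensions discovered in the data sources, can be sketched very roughly as follows. The catalog, table names, and selection heuristic are illustrative assumptions for this sketch, not the internals of MDBE or AMDO:

```python
# Illustrative sketch of a hybrid (requirements + sources) candidate search.
# The catalog contents and requirement terms are made-up examples.

CATALOG = {
    # table: (columns, foreign keys as {column: referenced_table})
    "sales":   ({"id", "amount", "qty", "date_id", "product_id"},
                {"date_id": "date", "product_id": "product"}),
    "date":    ({"date_id", "day", "month", "year"}, {}),
    "product": ({"product_id", "name", "category"}, {}),
}

def fact_candidates(catalog, required_measures):
    """A table is a fact candidate if it references other tables (potential
    dimensions) and contains at least one measure named in the requirements."""
    candidates = {}
    for table, (cols, fks) in catalog.items():
        measures = cols & required_measures
        if fks and measures:
            candidates[table] = {"measures": sorted(measures),
                                 "dimensions": sorted(set(fks.values()))}
    return candidates

print(fact_candidates(CATALOG, {"amount", "qty"}))
# → {'sales': {'measures': ['amount', 'qty'], 'dimensions': ['date', 'product']}}
```

With exhaustive requirements this behaves requirement-driven (few, targeted candidates); with an empty requirement set a purely data-driven variant would instead enumerate every table, which is exactly the "too many results" problem the abstract describes.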
242

Applying data warehouse and on-line analytic processing techniques on human resource management

Kuo, Li-Fang 24 June 2004 (has links)
In this rapidly changing new economy, network technology has brought significant reform to enterprise operations. Human resource information systems have developed alongside information technology and have gradually become a necessary management instrument: by adopting systematic and statistical analysis, presented through visual graphics and tables (such as analytical forms or statistical charts), they provide executives and human resource managers with scientific, concrete data for decision support. The data warehouse is a technology for data storage that not only compiles data but also decomposes, merges, and cross-tabulates it across different ranges and layers; On-Line Analytical Processing (OLAP) or data mining can then be applied to obtain further information, providing useful input for decision makers. For this reason, the data warehouse has in recent years gradually become the main data source of Decision Support Systems (DSS). This research attempts to establish a prototype data warehouse for human resource management, meeting decision makers' basic need to quickly query related statistics. It extracts data from the human resource information system database, establishes a corresponding multidimensional data model, and applies data warehouse and OLAP technology so that, via the Internet, decision makers can flexibly and quickly query the statistics they need, enhancing the quality and timeliness of decisions. The benefits established by this research are as follows: 1. Convenient data querying: drag-and-drop operations let users query data quickly and conveniently. 2. Multidimensional data analysis: since OLAP supports multidimensional queries, different cross-analyses and variation comparisons give managers clearer reference material for decision making. 3. Flexible access to needed information: users can change dimensions at will to obtain the information they need, increasing query flexibility. 4. Network access: the system is web-based, so queries can be made through a browser over the network, enhancing the system's mobility and convenience. 5. Database examination: OLAP statistical results can be used to check the database for correctness and completeness.
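The multidimensional analysis described above (changing dimensions at will and aggregating a measure) can be sketched with a tiny in-memory roll-up. The HR fields and figures below are invented for illustration and are not taken from the thesis:

```python
from collections import defaultdict

# Toy HR records; the field names and values are illustrative only.
RECORDS = [
    {"dept": "IT", "year": 2003, "gender": "F", "headcount": 4},
    {"dept": "IT", "year": 2003, "gender": "M", "headcount": 6},
    {"dept": "HR", "year": 2003, "gender": "F", "headcount": 3},
    {"dept": "IT", "year": 2004, "gender": "M", "headcount": 7},
]

def rollup(records, dims, measure="headcount"):
    """Aggregate a measure over the chosen dimensions (an OLAP roll-up)."""
    totals = defaultdict(int)
    for r in records:
        key = tuple(r[d] for d in dims)
        totals[key] += r[measure]
    return dict(totals)

# Slicing and dicing amounts to changing the dimension list:
print(rollup(RECORDS, ["dept"]))          # → {('IT',): 17, ('HR',): 3}
print(rollup(RECORDS, ["dept", "year"]))  # → {('IT', 2003): 10, ('HR', 2003): 3, ('IT', 2004): 7}
```

A real OLAP server precomputes and indexes these aggregates, but the user-facing operation, re-aggregating the same facts along a different set of dimensions, is the one shown here.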
243

Data Warehouse Change Management Based on Ontology

Tsai, Cheng-Sheng 12 July 2003 (has links)
In this thesis, we provide a solution to the schema change problem. In a data warehouse system, if schema changes occur in a data source, the overall system loses consistency between the data sources and the data warehouse; such schema changes render the data warehouse obsolete. We have developed three stages to handle schema changes occurring in databases: change detection, diagnosis, and handling. Recommendations are generated by the DB-agent to inform the DW-agent, notifying the DBA of what a schema change affects in the star schema and where. In this study, we mainly handle seven kinds of schema changes in a relational database, covering not only non-adding schema changes but also adding schema changes. In our experiments, non-adding schema changes show a high correct mapping rate when using traditional mappings between a data warehouse and a database. Adding schema changes, however, involve many uncertainties in diagnosis and handling; for this reason, we compare the similarity between an added relation or attribute and the ontology's concepts or concept attributes to generate a good recommendation. The evaluation results show that the proposed approach is capable of detecting these schema changes correctly and of recommending the changes to the DBA appropriately.
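The change-detection stage can be illustrated with a minimal diff between two schema snapshots. The dict-based representation, relation names, and change labels are assumptions made for this sketch, not the agent protocol described in the thesis:

```python
def diff_schemas(old, new):
    """Detect added/dropped relations and attributes between two snapshots.
    Schemas are dicts mapping relation name -> set of attribute names."""
    changes = []
    for rel in new.keys() - old.keys():          # relations only in the new snapshot
        changes.append(("add_relation", rel))
    for rel in old.keys() - new.keys():          # relations that disappeared
        changes.append(("drop_relation", rel))
    for rel in old.keys() & new.keys():          # attribute-level changes
        for attr in new[rel] - old[rel]:
            changes.append(("add_attribute", rel, attr))
        for attr in old[rel] - new[rel]:
            changes.append(("drop_attribute", rel, attr))
    return sorted(changes)

old = {"customer": {"id", "name"}, "order": {"id", "total"}}
new = {"customer": {"id", "name", "email"}, "shipment": {"id", "date"}}
print(diff_schemas(old, new))
# → [('add_attribute', 'customer', 'email'), ('add_relation', 'shipment'), ('drop_relation', 'order')]
```

Dropping changes are mechanical to diagnose, as the abstract notes; the adding changes detected here ("shipment", "email") are precisely the ones that would then be matched against ontology concepts to produce a recommendation.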
244

To probe deeply into the Customer Relationship Management strategy and operation flow of life Insurance.-ex. Nan Shan life Insurance Co, LTD.

Hsiao, Chen-Nung 28 July 2003 (has links)
Due to the strong development of information technology (IT) in recent years, the transparency of life insurance contents, knowledge, and pricing has intensified competition in the industry. A life insurance product is only an intangible contract; it relies on company image, reputation, and long-term customer trust, and represents a promise and responsibility to clients. The marketing of life insurance differs from that of other industries because the deal itself is intangible. Owing to changes in the overall environment and the resulting fierce competition, life insurance has become a buyer's market rather than a seller's. It is said that the cost of acquiring a new account is roughly five to ten times that of maintaining an existing customer. The industry therefore has to pay more attention to the accounts it already holds while gradually attracting new clients. The most urgent task is to make good use of the customer concept: to manage CRM well and to enhance customer loyalty and satisfaction, so as to retain clients and encourage them to introduce new accounts. CRM is therefore the most important part of life insurance. Previously, most customer databases were incomplete. Now, in the e-century, we can take advantage of IT to do one-to-one CRM in depth and so cope with the competition. Looking into the 21st century, customer groups and markets have changed, reshaping the market orientation of the life insurance industry from commodity-oriented to customer-based.
Moreover, the insured now expect much more value from products and services than before and are very sensitive to both. They want custom-made offers and want to participate in shaping them; they no longer accept offers passively. Consequently, designed-in service and one-to-one products must become the new marketing strategy. This case study examines how the four big steps of Peppers and Rogers' model and the 5W framework can be applied to probe the execution of tactics and operation flow, and how the four concepts of the Customer Process Cycle Model can be applied to achieve the company's CRM strategy targets. The findings of this research are: 1. Nan Shan Life Insurance Co, LTD especially stresses the function and operation of its call center, with significant achievements; this is the most important area and the core of its CRM. 2. Through CRM system integration and the continual collection of customer information, the system can understand and update customer value. Moreover, with the concepts and techniques of CRM data warehousing and data mining, it can record and analyze customer behavior patterns and identify target markets, so as to correct service and marketing strategies in time (to carry out project marketing). 3. Regarding customer segmentation, given the kinds of information that Nan Shan Life Insurance Co, LTD collects, it is not easy to segment customers by value; they can only be segmented by need. While it is difficult to determine the value of each segment for the company, customer demand can still be used to match suitable services and products to customers. 4. Since human interaction is the main driver between insurer and customer, the sales representative plays a key role in this business. The CRM system of Nan Shan Life Insurance Co, LTD requests that representatives
close deals using e-tools and IT, and they have been quite successful in terms of efficiency. 5. The IT skeleton of the CRM is very complete and provides extensive channels for data access. Attention is paid to service items and convenience for the insured, to direct contact with customers, and to finding every chance to contact them. There are many ways to communicate effectively with customers through technology, with no space-time limitation: (1) www.nanshanlife.com.tw, (2) e-mail, (3) telephone (call center), (4) mobile phone news flashes, (5) sales representatives, (6) mail or DM. 6. The e-tooling of Nan Shan Life Insurance Co, LTD is well constructed, and the company is also excellent at providing designed-in products and services. Ten marketing projects were presented within one year, all developed after customer segmentation analysis. However, training for field sales representatives has to be reinforced: because too many projects were presented within a short period, the representatives could not comprehend them fully and in time, the projects failed to become customer intelligence and customer knowledge, and customers' purchasing habits were not changed. 7. Customer information will not be completely collected if representatives are not practiced in operating the CRM. The following proposals are brought up after the research: (1) Share the company's current situation and other information, such as investment operations, with the insured via e-mail or the Internet. Treating customers as shareholders or partners earns their trust, and they in turn will be proud to be policyholders of Nan Shan Life Insurance. (2) Adopt an expert system (ES): since 2002, representatives have preferred to sell investment-linked policies; if the education system can be combined with an ES for financial planning, representatives can quickly become financial specialists. (3) The representative
is the interface for communicating with customers, yet performance is consistently poor on campaigns for project products that carry smaller bonuses. Linking these campaigns with the balanced scorecard to evaluate performance, and applying appropriate pressure on representatives, can achieve the execution efficiency and targets of CRM. (4) Nan Shan Life Insurance Co, LTD not only performs very well on the four steps of the CRM flow but can also serve as a pattern for companies in the same business that wish to achieve their targets through CRM. It would be even better if such companies considered their own culture, background, and market demand, and modified the approach to make it their own.
245

Data Quality in Data Warehouses: a Case Study

Bringle, Per January 1999 (has links)
<p>Companies today experience problems with poor data quality in their systems. Because of the enormous amounts of data in companies, the data has to be of good quality if companies want to take advantage of it. Since the purpose of a data warehouse is to gather information from several databases for decision support, it is absolutely vital that its data is of good quality. There exist several ways of determining or classifying data quality in databases. In this work, the data quality management in a large Swedish company's data warehouse is examined through a case study, using a framework specialized for data warehouses. The quality of data is examined from a syntactic, semantic, and pragmatic point of view. The results of the examination are then compared with a similar, previously conducted case study in order to find differences and similarities.</p>
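The syntactic and semantic viewpoints mentioned above can be illustrated with a small audit routine. The rules, field names, and sample rows are invented for this sketch and are not the framework used in the case study:

```python
import re

# Toy customer rows as they might arrive in a staging area; values are strings.
ROWS = [
    {"id": "1", "email": "a@example.com", "age": "34"},
    {"id": "2", "email": "not-an-email",  "age": "-5"},
]

RULES = {
    # syntactic check: does the value match the expected format?
    "email": lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    # semantic check: is the value plausible in the real world?
    "age":   lambda v: v.isdigit() and 0 <= int(v) <= 120,
}

def audit(rows, rules):
    """Return (row id, field) pairs that violate a quality rule."""
    return [(row["id"], field)
            for row in rows
            for field, ok in rules.items()
            if not ok(row[field])]

print(audit(ROWS, RULES))  # → [('2', 'email'), ('2', 'age')]
```

The pragmatic viewpoint (whether the data is fit for the decision at hand) is harder to mechanize, which is one reason such audits are usually combined with user interviews, as in the case study.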
246

Datalager : identifiering av motiv

Henriksson, Niklas January 2000 (has links)
<p>The main purpose of this work is to illuminate and identify the motives for implementing a data warehouse. A data warehouse is a technique for building decision support systems in which the end users retrieve their own information. The task of a decision support system is to give decision makers (end users) information about all or parts of an organization. The introduction explains in more detail what a decision support system and a data warehouse are, gives examples of contexts in which data warehouses are used, and describes the historical development towards data warehouses.</p><p>To identify the motives, interviews were conducted at four companies that have implemented data warehouses. At these companies, the project managers responsible for the implementations were interviewed, and at one company four end users were interviewed as well.</p><p>The results of the interviews show that the most common motives are seen from an end-user perspective. Among them are making information accessible and allowing end users to adapt their information to their own area of business.</p>
247

Datalager : endast för storföretag?

Johansson, Tomas January 2001 (has links)
<p>The purpose of this report is to investigate whether data warehouses, a kind of database that serves as a decision support system, can be used by smaller companies and organizations in the near future. Data warehousing has previously been a concern mainly for large companies; smaller companies are rarely mentioned in data warehouse contexts. Data warehouse technology is new, which means it is still developing at a high pace.</p><p>The answer is sought through a literature study and an empirical study; in the latter, a number of data warehouse developers are consulted. The literature study focuses on four perspectives: size, benefit, economy, and future possibilities. From these perspectives, smaller companies' possibilities to use data warehouses are studied. The empirical study focuses on the spread of data warehouses in Sweden and on smaller companies' possibilities to use them.</p><p>The results show that opportunities for smaller companies to use data warehouses already exist today. Even so, implementing a data warehouse is a complex process, and the work with the data warehouse does not end after implementation.</p>
248

Hur beräknas den ekonomiska avkastningen för en datalagerinvestering?

Wåhlgren, Yvonne January 2002 (has links)
<p>The purpose of this work is to investigate whether return calculations for a data warehouse investment should be made, how they can be made, and whether they are made in practice. It also investigates whether general costing methods can be used by companies of any size that intend to start data warehouse investment projects. Data warehouse technology is still developing rapidly, which often entails high development costs for data warehouse investments.</p><p>The investigation is based on a combined document study and questionnaire survey. The document study illuminates the problems associated with the area. The questionnaire survey targets various larger organizations that use data warehouses today, such as banks, postal services, and grocery retailers; they are asked whether return calculations are made in their organization and, if so, how they are performed.</p><p>The analysis and the results indicate that the problem has no simple solution and that some kind of return calculation should be used. The difficulty lies in valuing the potential benefits that a data warehouse can generate.</p>
249

Faktorer som orsakar misslyckade data warehousingprojekt

Åström, Mattias January 2002 (has links)
<p>Since so many organizations and companies use data warehouses today, owning a data warehouse can no longer be counted as a strategic advantage; it is a strategic necessity for companies and organizations. Introducing a data warehouse, however, requires great commitment from everyone involved in the project, and a failure costs the company considerable resources and capital.</p><p>This work investigates which factors contribute most to the failure of data warehousing projects. A literature study is conducted, examining failed cases as well as literature describing authors' general views on factors with the potential to cause a failed data warehousing project. The investigation results in a list of factors considered to have great potential to cause failure. Identifying these factors gives organizations and companies the opportunity to prevent them at an early stage of the project, which can save large economic resources.</p>
250

Data warehouse development : An opportunity for business process improvement

Holgersson, Jesper January 2002 (has links)
<p>Many of today’s organizations are striving to find ways to make faster and better decisions about their business. One way to achieve this is to develop a data warehouse, offering novel features such as data mining and ad hoc querying on data collected and integrated from many of the computerized systems used in the organization. A data warehouse is of vital interest for decision makers and may reduce uncertainty in decision making. The relationship between data warehousing and business processes may be used at the pre-deployment stage of a data warehouse project, i.e. during the actual development of the data warehouse, as an opportunity to change business processes in an organization. This may then result in improved business processes that in turn may result in a better performing data warehouse. By focusing on the pre-deployment stage instead of the post-deployment stage, we believe that the costs for development will decrease, since needs for changes that are detected early in a development project would probably be detected anyway, but at a later stage, where changes in the business processes may require restructuring the finished data warehouse. We are therefore interested in which factors may cause a need for changes in the business processes during the pre-deployment stage of a data warehouse project, the types of business processes affected, and whether there is any correspondence between the factors that trigger changes and the business processes affected.</p><p>Based on a literature survey and an interview study, general triggering factors for changing business processes have been identified, such as needs for new organizational knowledge and for prioritization of goals. We have also found that needs for changes more often concern supporting processes than other types of business processes. 
We have also found a general correspondence at a type level between triggering factors and affected business processes.</p><p>In combination with the results and conclusions presented, we have also identified propositions for future work, which will refine and confirm the ideas presented here.</p>
