About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. It is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium or country archive and want to be added, details can be found on the NDLTD website.
381

Detekce pojistných podvodů / Detection of Insurance Fraud

Minár, Tomáš January 2012 (has links)
This thesis focuses on the detection of potential insurance fraud using Business Intelligence (BI) and on its practical application to real data from compulsory and accident insurance. It covers the basic concepts of the insurance business, the individual layers of BI architecture, and the implementation process in detail, from data transformation through the use of advanced analytical methods to the presentation of the acquired information.
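As a loose illustration of the kind of analytical step such a BI pipeline ends in, the following sketch trains a small decision-tree classifier to flag suspicious claims. The features, thresholds and data are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch of a fraud-flagging step, not the thesis's actual pipeline.
# Feature names and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic claim features: [claim_amount, days_since_policy_start, prior_claims]
X = np.array([
    [1200, 400, 0],
    [9800,  15, 2],
    [ 300, 900, 0],
    [7600,  30, 3],
    [ 450, 700, 1],
    [8900,  10, 4],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = claim later confirmed fraudulent

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

new_claim = [[8500, 20, 2]]
print("fraud suspicion:", model.predict(new_claim)[0])
```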
382

Rizikové chování ETL procesů v prostředí datového skladu / Risk Behaviour of ETL Processes in a Data Warehouse

Košinová, Kateřina January 2015 (has links)
This thesis deals with the risk behaviour of ETL processes in a data warehouse. The first part defines ETL processes and states the aim of the thesis. The second part covers the theoretical background needed to create a data warehouse, the definition of ETL processes, and the discovery of potential risks. The third part identifies the potential risks of ETL processes through risk analysis and assessment, and proposes controls for them. The fourth part concentrates on modifying the ETL processes to prevent these risks; an important part of this chapter is an emergency plan containing the steps that must be taken if a risk materializes. The fifth part summarizes the knowledge gained during the analysis and development.
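The abstract stays at the level of risk analysis; as a hedged illustration of the kind of control it argues for, the sketch below shows a toy ETL load step that validates rows, quarantines rejects and reconciles row counts. All column names and validation rules are assumptions, not taken from the thesis.

```python
# Minimal sketch of a defensive ETL load step: validate rows, quarantine the
# rejects for inspection, and reconcile row counts before anything reaches
# the warehouse. Column names and rules are illustrative assumptions.
def validate(row):
    # Reject rows with a missing business key or a non-numeric amount.
    has_key = bool(row.get("customer_id"))
    amount_ok = row.get("amount", "").replace(".", "", 1).isdigit()
    return has_key and amount_ok

def load(rows):
    clean, rejected = [], []
    for row in rows:
        (clean if validate(row) else rejected).append(row)
    # Risk control: reconcile counts so silently dropped rows are impossible.
    assert len(rows) == len(clean) + len(rejected)
    return clean, rejected

rows = [
    {"customer_id": "1001", "amount": "10.50"},
    {"customer_id": "",     "amount": "7.00"},   # missing key -> quarantined
    {"customer_id": "1002", "amount": "n/a"},    # bad amount  -> quarantined
]
clean, rejected = load(rows)
print(len(clean), "rows loaded,", len(rejected), "rows quarantined")
```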
383

Využití data miningu v personální agentuře / Utilization of Data Mining for Personnel Agency

Ondruš, Erik January 2017 (has links)
This master's thesis examines the use of data mining for segmentation and for predicting the onboarding of candidates at a recruitment agency. The results should help make the company's order-processing more effective and enable a more personal approach to candidates. The first chapter covers the essential theoretical foundations of Business Intelligence, data warehouses, data mining and marketing. An analysis of the current state follows, focusing on the key processes involved in handling an order. The last chapter presents the proposed solution and its implementation on the Microsoft SQL Server 2014 platform, and concludes with proposals for using data mining in direct marketing.
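The thesis builds its mining models on Microsoft SQL Server 2014; purely as an illustration of the segmentation technique it names, the following sketch clusters candidates with k-means on synthetic features. The feature choice and data are assumptions for illustration only.

```python
# Illustrative sketch of candidate segmentation with k-means on synthetic
# features (years of experience, expected salary in thousands, interview score).
# This shows the general technique, not the thesis's SQL Server implementation.
import numpy as np
from sklearn.cluster import KMeans

candidates = np.array([
    [ 1, 30, 55],
    [ 2, 35, 60],
    [10, 90, 80],
    [12, 95, 85],
    [ 5, 50, 70],
    [ 6, 55, 72],
])

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(candidates)
print(segments)  # segment label per candidate
```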
384

A Systematic Approach for Tool-Supported Performance Management of Engineering Education

Traikova, Aneta 26 November 2019 (has links)
Performance management of engineering education emerges from the need to assure the proper training of future engineers in order to meet the constantly evolving expectations and challenges of the engineering profession. The accreditation process ensures that engineering graduates are adequately prepared for their professional careers and responsibilities by verifying that they possess an expected set of mandatory graduate attributes. Accreditation bodies require engineering programs to have systematic performance management that informs a continuous improvement process. Unfortunately, the vast diversity of engineering disciplines, the variety of information systems, and the large number of actors involved make this task challenging and complex. We performed a systematic literature review of jurisdictions around the world that carry out accreditation and examined how universities across Canada, the US and other countries have addressed tool support for performance management of engineering education. Our initial systematic approach for tool-supported performance management evolved from this review, and we then refined it through an iterative process combining action research and design science research. We developed a prototype, Graduate Attribute Information Analysis (GAIA), in collaboration with the School of Electrical Engineering and Computer Science at the University of Ottawa to support a systematic approach to the accreditation of three engineering programs. This thesis contributes a systematic approach, a tool that supports it, a set of related data transformations, and a tool-assessment checklist. Our systematic approach for tool-supported performance management addresses system architecture, a common continuous improvement process and a common set of key performance indicators, and identifies the performance management forms and reports needed to analyze graduate attribute data. The data transformation and analysis techniques we demonstrate ensure the accurate analysis of statistical and historical trends.
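As a hedged illustration of one of the data transformations such a tool automates, the sketch below rolls per-student rubric scores up into a program-level distribution per graduate attribute. The attribute names and rubric levels are assumptions for illustration, not GAIA's actual schema.

```python
# Hedged sketch of a graduate-attribute roll-up: per-student rubric scores are
# aggregated into a program-level distribution per attribute. Names and levels
# are illustrative assumptions only.
from collections import Counter, defaultdict

# (student, attribute, rubric level 1-4) assessment records
records = [
    ("s1", "Problem Analysis", 3), ("s2", "Problem Analysis", 4),
    ("s3", "Problem Analysis", 2), ("s1", "Communication", 4),
    ("s2", "Communication", 3),    ("s3", "Communication", 3),
]

distribution = defaultdict(Counter)
for _, attribute, level in records:
    distribution[attribute][level] += 1

for attribute, counts in distribution.items():
    total = sum(counts.values())
    shares = {level: round(n / total, 2) for level, n in sorted(counts.items())}
    print(attribute, shares)
```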
385

SPSS Modeler Integration mit IBM DB2 Analytics Accelerator

Nentwig, Markus 27 February 2018 (has links)
This thesis describes an architecture approach developed at IBM as part of a feasibility study. It enables the IBM DB2 Analytics Accelerator, a data warehouse appliance, to build data mining models with the corresponding algorithms directly on the accelerator via adapted interfaces. In addition to this description, the thesis presents the current use of the DB2 Analytics Accelerator and its surrounding environment, from database systems to the System z mainframe. Building on this, practical use cases are presented in which intelligent methods are applied to stored customer data to build statistical models. For this process, the underlying data is first prepared and adapted and then searched for new relationships in the central data mining step.
386

Adaptive website recommendations with AWESOME

Thor, Andreas, Golovin, Nick, Rahm, Erhard 16 October 2018 (has links)
Recommendations are crucial for the success of large websites. While there are many ways to determine recommendations, the relative quality of these recommenders depends on many factors and is largely unknown. We present the architecture and implementation of AWESOME (Adaptive website recommendations), a data warehouse-based recommendation system. It allows the coordinated use of a large number of recommenders to automatically generate website recommendations. Recommendations are dynamically selected by efficient rule-based approaches utilizing continuously measured user feedback on presented recommendations. AWESOME supports a completely automatic generation and optimization of selection rules to minimize website administration overhead and quickly adapt to changing situations. We propose a classification of recommenders and use AWESOME to comparatively evaluate the relative quality of several recommenders for a sample website. Furthermore, we propose and evaluate several rule-based schemes for dynamically selecting the most promising recommendations. In particular, we investigate two-step selection approaches that first determine the most promising recommenders and then apply their recommendations for the current situation. We also evaluate one-step schemes that try to directly determine the most promising recommendations.
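The following sketch illustrates the two-step selection idea described above: step one picks the recommender with the best observed feedback for the current context, step two serves that recommender's suggestions. The feedback figures, context keys and recommenders are invented for illustration and do not represent the AWESOME implementation.

```python
# Sketch of two-step recommendation selection: choose the most promising
# recommender from accumulated feedback, then apply it. All data is synthetic.

# Accumulated feedback: context -> recommender -> (times shown, times accepted)
feedback = {
    "product_page": {"content_based": (1000, 80), "collaborative": (1000, 120)},
    "start_page":   {"content_based": (500, 10),  "top_seller":    (500, 45)},
}

recommenders = {
    "content_based": lambda user: ["item-a", "item-b"],
    "collaborative": lambda user: ["item-c", "item-d"],
    "top_seller":    lambda user: ["item-e"],
}

def recommend(context, user):
    # Step 1: pick the recommender with the highest acceptance rate in this context.
    best = max(feedback[context],
               key=lambda r: feedback[context][r][1] / feedback[context][r][0])
    # Step 2: apply the chosen recommender to the current request.
    return best, recommenders[best](user)

print(recommend("product_page", user="u42"))
```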
387

Datenintegration und Wissensgewinnung für lokale Learning Health Systems am Beispiel einer Zentralen Notaufnahme / Data Integration and Knowledge Discovery for Local Learning Health Systems: The Example of an Emergency Department

Rauch, Jens 26 August 2020 (has links)
Learning Health Systems (LHS) are socio-technical systems that deliver health-related services and use information technology to generate new knowledge from data in order to continuously improve healthcare. With the increasing digitalization of healthcare, data is being produced in many places that can be used to generate knowledge in an LHS. This, however, requires an IT infrastructure that integrates the data and provides suitable algorithms for knowledge discovery. The widespread approach of building such infrastructures within large consortia of institutions has so far not delivered the desired results. This thesis therefore starts instead from a single organizational unit, the central emergency department of a hospital, and develops an IT infrastructure for a local Learning Health System. It addresses questions of data integration and data analysis. First, it asks how heterogeneous, semantically time-varying, longitudinal health data can be integrated flexibly at the data model level. Second, it examines how two concrete analytical use cases can be realized on the integrated health data: which subgroups of patients with frequent visits (frequent users) can be identified and what risk of return is associated with particular diagnoses, and what can be said about the arrival behaviour and case complexity of frail, elderly patients. To answer these questions, data extraction and integration followed the data warehouse approach. Data from the hospital information system of the Klinikum Osnabrück was integrated with hospital quality data, case classification data, and weather, air quality and traffic data. For the data integration, the Entity-Attribute-Value/Data Vault model (EAV/DV) was developed as a new modelling approach. The analyses were carried out with a data mining method for factorizing patient characteristics and with statistical methods of time series analysis. Four distinct subgroups of frequently returning patients emerged, and the relative risk of return could be estimated for individual diagnoses. The time series analysis showed pronounced differences in the arrival behaviour of frail, elderly patients compared with all other patients. A higher case complexity was confirmed but was generally not dependent on the time of day. The EAV/DV modelling approach for longitudinal health data eased the integration of heterogeneous and temporally changing data through flexible data schemas within the data warehouse. The analytical models can be continuously updated with new data from the hospital information system and thus realize knowledge generation from data in line with the LHS approach. They can support decisions on better staff resource planning and on targeted outreach to resource-intensive patients in the emergency department. The presented implementation of an IT infrastructure shows, for the organizational unit of a central emergency department, how knowledge generation from data can be put into practice in a local Learning Health System. The rapid prototypical implementation and the successful generation of knowledge on the substantive questions demonstrate that the chosen bottom-up approach is viable and can reasonably be extended further.
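As a minimal illustration of one of the reported analyses, the sketch below estimates the relative risk of a return visit for a single diagnosis from simple contingency counts. The figures are synthetic, and the thesis's actual models (factorization of patient characteristics, time series analysis) are considerably richer.

```python
# Illustrative relative-risk estimate for a return visit given a diagnosis,
# computed from synthetic contingency counts; not the thesis's actual model.
def relative_risk(returned_with, total_with, returned_without, total_without):
    risk_with = returned_with / total_with
    risk_without = returned_without / total_without
    return risk_with / risk_without

# Synthetic example: patients with vs. without a given diagnosis
print(round(relative_risk(returned_with=60, total_with=200,
                          returned_without=150, total_without=1800), 2))
```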
388

Integrace Business Inteligence nástrojů do IS / Integration of Business Intelligence Tools into IS

Novák, Josef January 2009 (has links)
This Master's thesis deals with the integration of Business Intelligence tools into an information system. It introduces the concepts of BI, data warehouses and OLAP analysis, as well as knowledge discovery from databases, especially association rule mining. The chapters devoted to the practical part of the thesis describe the design and implementation of the resulting application and the technologies applied, such as Microsoft SQL Server 2005.
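As a self-contained illustration of the association rule mining the abstract names, the sketch below computes support and confidence over a few made-up transactions; the thesis itself realizes this on Microsoft SQL Server 2005, not in Python.

```python
# Minimal sketch of association rule mining: support and confidence over
# single-item rules X -> Y. Transactions are invented for illustration.
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

items = {i for t in transactions for i in t}
for x, y in combinations(items, 2):
    sup = support({x, y})
    if sup >= 0.5 and support({x}) > 0:
        conf = sup / support({x})
        if conf >= 0.6:
            print(f"{x} -> {y} (support={sup:.2f}, confidence={conf:.2f})")
```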
389

A Dementia Care Mapping (DCM) data warehouse as a resource for improving the quality of dementia care. Exploring requirements for secondary use of DCM data using a user-driven approach and discussing their implications for a data warehouse

Khalid, Shehla January 2016 (has links)
The secondary use of Dementia Care Mapping (DCM) data, if that data were held in a data warehouse, could contribute to global efforts to monitor and improve the quality of dementia care. This qualitative study identifies requirements for the secondary use of DCM data within a data warehouse using a user-driven approach. The thesis critically analyses various technical methodologies and then argues for, and demonstrates the applicability of, a modified grounded theory as a user-driven methodology for data warehouse development. Interviews were conducted with 29 DCM researchers, trainers and practitioners in three phases; 19 interviews were face to face and the rest took place via Skype or telephone, with individual interviews lasting 45-60 minutes on average. The interview data was systematically analysed using open, axial and selective coding and constant comparison. The data highlighted benchmarking, mapper support and research as three potential secondary uses of DCM data within a data warehouse. DCM researchers raised concerns about the quality and security of DCM data for secondary use, which led to requirements for additional provenance, ethical and contextual data to be held in the warehouse alongside DCM data to support research. The data was also used to identify three main factors that can influence the quality and availability of DCM data for secondary use: the individual mapper, the organization, and electronic data management. The study concludes with recommendations for designing a future DCM data warehouse.
390

Routeplanner: a model for the visualization of warehouse data

Gouws, Patricia Mae 31 December 2008 (has links)
This study considers the development and use of a model of the visualization process that transforms data in a warehouse into the required insight. In the context of this study, 'visualization process' refers to a step-wise methodology for developing enhanced insight by using visualization techniques. The model, named RoutePlanner, was developed by the researcher from a theoretical perspective and was then used and evaluated in practice in the domain of insurance brokerage. The study highlights the proposed model, which comprises stages for the identification of the relevant data, the selection of visualization methods and the evaluation of the visualizations, undergirded by a set of practical guidelines. To determine the effect of using RoutePlanner, an experiment was conducted to test a theory, and the model's practical utility was assessed in an evaluation-of-use study. The goal of this study is to present the RoutePlanner model and the effect of its use. / Theoretical Computing / M.Sc. (Information Systems)
