141

Research challenges for energy data management (panel)

Pedersen, Torben Bach, Lehner, Wolfgang 11 August 2022 (has links)
This panel paper aims to initiate discussion at the Second International Workshop on Energy Data Management (EnDM 2013) on the important research challenges within energy data management. The authors are the panel organizers; additional panelists will be recruited before the workshop.
142

High-Performance Processing of Continuous Uncertain Data

Tran, Thanh Thi Lac 01 May 2013 (has links)
Uncertain data has arisen in a growing number of applications such as sensor networks, RFID systems, weather radar networks, and digital sky surveys. The fact that the raw data in these applications is often incomplete, imprecise and even misleading has two implications: (i) the raw data is not suitable for direct querying, and (ii) feeding the uncertain data into existing systems produces results of unknown quality. This thesis presents a system for uncertain data processing that has two key functionalities: (i) capturing and transforming raw noisy data into rich, queryable tuples that carry the attributes needed for query processing with quantified uncertainty, and (ii) performing query processing on such tuples, capturing the changes in uncertainty as data passes through various query operators. The proposed system considers data naturally captured by continuous distributions, which is prevalent in sensing and scientific applications. The first part of the thesis addresses data capture and transformation by proposing a probabilistic modeling and inference approach. Since this task is application-specific and requires domain knowledge, the approach is demonstrated for RFID data from mobile readers. More specifically, the proposed solution involves an inference and cleaning substrate that transforms raw RFID data streams into object-location tuple streams, where locations are inferred from raw noisy data and their uncertain values are captured by probability distributions. The second and main part of this thesis examines query processing for uncertain data modeled by continuous random variables. The proposed system includes new data models and algorithms for relational processing, with a focus on aggregation and conditioning operations. For operations of high complexity, optimizations including approximations with guaranteed error bounds are considered. Complex queries involving a mix of operations are then addressed by query planning, which, given a query, finds an efficient plan that meets user-defined accuracy requirements. Besides relational processing, this thesis also provides support for user-defined functions (UDFs) on uncertain data, which aims to compute the output distribution given uncertain input and a black-box UDF. The proposed solution employs a learning-based approach using Gaussian processes to compute approximate output with error bounds, together with a suite of optimizations for high performance in online settings such as data stream processing and interactive data analysis. The techniques proposed in this thesis are thoroughly evaluated using both synthetic data with controlled properties and various real-world datasets from the domains of severe weather monitoring, object tracking using RFID readers, and computational astrophysics. The experimental results show that these techniques can yield high accuracy, keep up with stream speeds, and outperform existing techniques such as Monte Carlo sampling for many important workloads.
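As a rough illustration of the UDF support described in this abstract, the sketch below fits a Gaussian-process surrogate to a handful of evaluations of a toy black-box function and pushes an uncertain (Gaussian) input through it, with plain Monte Carlo sampling through the exact function as the baseline. The toy UDF, the scikit-learn API, and all parameter values are illustrative assumptions, not the thesis's actual implementation, which additionally derives error bounds and online optimizations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical black-box UDF; in the thesis this is arbitrary user code.
def udf(x):
    return np.sin(3 * x) + 0.5 * x

# Fit a GP surrogate from a small number of UDF evaluations.
X_train = np.linspace(-2, 2, 15).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, udf(X_train).ravel())

# Uncertain input modeled as a continuous (Gaussian) random variable.
rng = np.random.default_rng(0)
x_samples = rng.normal(loc=0.3, scale=0.2, size=5000).reshape(-1, 1)

mc_out = udf(x_samples).ravel()   # baseline: Monte Carlo through the exact UDF
gp_out = gp.predict(x_samples)    # surrogate: the same samples through the cheap GP mean

print("Monte Carlo output mean/std:", mc_out.mean(), mc_out.std())
print("GP surrogate output mean/std:", gp_out.mean(), gp_out.std())
```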
143

Towards Data Governance for International Dementia Care Mapping (DCM). A Study Proposing DCM Data Management through a Data Warehousing Approach.

Khalid, Shehla January 2010 (has links)
Information Technology (IT) plays a vital role in improving health care systems by enhancing quality, efficiency, safety, security and collaboration, and by informing decision making. Dementia, a decline in mental ability which affects memory, concentration and perception, is a key issue in health and social care, given the current context of an aging population. The quality of dementia care is noted as an international area of concern. Dementia Care Mapping (DCM) is a systematic observational framework for assessing and improving dementia care quality, and it has been used internationally as both a research and practice development tool. However, despite the success of DCM and the annual generation of a huge amount of data on dementia care quality, it lacks a governance framework based on modern IT solutions for data management. Such a framework would provide the organisations using DCM with a systematic way of storing, retrieving and comparing data over time, to monitor progress or trends in care quality. Data Governance (DG) refers to the application of policies and accountabilities to data management in an organisation; data management covers the availability, usability, quality, integrity and security of the organisation's data according to its users and requirements. This novel multidisciplinary study proposes a comprehensive solution for governing DCM data by introducing a data management framework based on a data warehousing approach. Original contributions have been made through the design and development of a data management framework, describing the DCM international database design and DCM data warehouse architecture. These data repositories provide the acquisition and storage solutions for DCM data. The designed DCM data warehouse allows various analytical applications to be applied for multidimensional analysis, and different queries are applied to demonstrate its functionality. A case study is also presented to explain the clustering technique applied to the DCM data, and the performance of the DCM data governance framework is demonstrated through this case study's clustering results. The results are encouraging and open up discussion for further analysis.
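The multidimensional analysis the abstract refers to can be pictured as roll-ups over a small star schema: a fact table of DCM observations joined to care-home and time dimensions. The sketch below is a minimal, self-contained illustration using Python's built-in sqlite3; all table names, column names, and scores are invented for the example and are not taken from the thesis's actual warehouse design.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_care_home (home_id INTEGER PRIMARY KEY, home_name TEXT, country TEXT);
CREATE TABLE dim_time      (time_id INTEGER PRIMARY KEY, year INTEGER, quarter INTEGER);
CREATE TABLE fact_dcm_obs  (home_id INTEGER, time_id INTEGER,
                            participant_id INTEGER, wellbeing_score REAL);
""")
con.executemany("INSERT INTO dim_care_home VALUES (?, ?, ?)",
                [(1, "Home A", "UK"), (2, "Home B", "SE")])
con.executemany("INSERT INTO dim_time VALUES (?, ?, ?)",
                [(1, 2009, 4), (2, 2010, 1)])
con.executemany("INSERT INTO fact_dcm_obs VALUES (?, ?, ?, ?)",
                [(1, 1, 101, 1.5), (1, 2, 101, 2.5),
                 (2, 1, 202, -0.5), (2, 2, 202, 0.5)])

# Roll-up: average well-being score per care home per year, the kind of
# trend-over-time comparison the governance framework is meant to support.
query = """
    SELECT h.home_name, t.year, AVG(f.wellbeing_score) AS avg_score
    FROM fact_dcm_obs f
    JOIN dim_care_home h ON f.home_id = h.home_id
    JOIN dim_time t      ON f.time_id = t.time_id
    GROUP BY h.home_name, t.year
    ORDER BY h.home_name, t.year
"""
for row in con.execute(query):
    print(row)
```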
144

Evaluation of Machine Learning techniques for Master Data Management

Toçi, Fatime January 2023 (has links)
In organisations, duplicate customer master data is a recurring problem. Duplicate records can result in errors, complications, and inefficiency, since they frequently arise from dissimilar systems or inadequate data integration. Because the problem is further complicated by client information changing over time, prompt detection and correction are essential. In addition to improving data quality, eliminating duplicate information also improves business processes, boosts customer confidence, and supports better decision making. This master's thesis explores the application of machine learning to the field of Master Data Management. The main objective of the project is to assess how machine learning may improve the accuracy and consistency of master data records, supporting the improvement of data quality within enterprises by managing issues such as duplicate customer data. One research question is whether machine learning can be used to improve the accuracy of customer data; another is whether it can be used to investigate scientific models for customer analysis when cleaning data with machine learning. The study's process comprises four steps: dimension identification, algorithm selection, parameter value selection, and output analysis. As ground truth for the project, 22,000 was established as the correct number of clusters for the clustering algorithms, representing the number of unique customers. Given this, the best-performing algorithm based on the number of clusters and the silhouette score metric turned out to be KMeans with 22,000 clusters and a silhouette score of 0.596, followed by BIRCH with 22,000 clusters and a silhouette score of 0.591.
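As a small, self-contained illustration of the comparison reported above, the sketch below clusters synthetic stand-ins for vectorized customer records with KMeans and BIRCH and ranks them by silhouette score. The dataset, feature dimensionality, and cluster count are deliberately tiny compared with the thesis's 22,000-cluster setting, and the use of scikit-learn is an assumption rather than the thesis's documented toolchain.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, Birch
from sklearn.metrics import silhouette_score

# Synthetic stand-in for vectorized customer master data records.
X, _ = make_blobs(n_samples=2000, centers=40, n_features=8, random_state=0)

n_clusters = 40  # the thesis uses 22,000, one cluster per unique customer
models = [
    ("KMeans", KMeans(n_clusters=n_clusters, n_init=10, random_state=0)),
    ("BIRCH", Birch(n_clusters=n_clusters)),
]
for name, model in models:
    labels = model.fit_predict(X)
    print(f"{name:7s} silhouette score = {silhouette_score(X, labels):.3f}")
```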
145

Realizing and Satisfying Informational Requirements throughout the Customer Journey: A Case Study on the Industrial Manufacturing Industry

Santos, Kenneth, Törnros, Rasmus January 2023 (has links)
This thesis examines the informational requirements of customers throughout the customer journey within the industry for industrial facilitating goods and explores how to manage data to meet these requirements. The research adopts a qualitative research design with a case study approach, using semi-structured interviews supplemented with secondary survey data. Thematic analysis and descriptive and correlational analysis were employed to analyze data. The study identifies digital touchpoints as crucial areas for understanding data requirements and data exchange and highlights the importance of data quality and searchability in enhancing customer satisfaction. The research findings simultaneously emphasize the need for a balance between physical and digital touchpoints and the significance of qualitative human interactions in generating positive customer experience. Managerial implications include the importance of updating and delivering the appropriate data sets for different customer roles, investing in business web-presence to facilitate effective customer interactions, and maintaining a balance between physical and digital touchpoints.
146

Time Data Management: A case study focused on finding solutions for "Hard to determine" activities in manual assembly

Lingman, Viktor, Osman, Mohamed January 2022 (has links)
The manufacturing industry is changing and new technology is constantly evolving. With new technology, new challenges arise. One common challenge many industries face is the standardisation of processes: to improve productivity, standardisation is required, yet some manual activities cannot be standardised. These activities are the main barrier to automation in manufacturing and have been defined with the term "Hard to determine" (HTD). The concept has been developed from the time data management principle of "time determination", a step within the method that involves collecting time data in order to determine how long an assembly task takes. This research has been carried out on behalf of Youngkuk Jeong, an assistant professor at the Royal Institute of Technology.
With this in mind, the following three research questions have been asked and answered:
● What defines a hard to determine activity?
● What can be used to gather time data on hard to determine activities?
● Is it possible to perform automatic time data gathering on a hard to determine activity?
Several study visits were made to a company that wants to solve its problems with HTD activities in its manual assembly lines. In addition to the study visits, interviews were conducted with relevant engineers and employees responsible for time data management. The report also includes an experimental case study with an assembly task similar to that of the visited company, and an experiment was performed to test whether automatic time data gathering of the case study task is possible. The results detail a definition of HTD activities:
● the time the task takes to complete varies;
● it involves handling unpredictable materials;
● it uses several types of materials depending on the customer order;
● the activity is difficult to standardise.
These indicators help to determine whether a task is HTD and can also be used as a list of criteria when identifying such activities. Currently, the company measures all its activities with a stopwatch, including HTD activities, although there is also the possibility of using different types of sensors. The experiment compares manually collected time data from a stopwatch with automatically collected time data from a camera using tracking software, as sketched below. The results indicate that automatic time data collection using cameras and hand-tracking software is a useful alternative for companies looking for innovative automation solutions for HTD activities.
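The camera-based alternative the experiment evaluates can be approximated with off-the-shelf hand tracking. The sketch below is a minimal illustration that times an activity by counting the video frames in which a hand is detected; the video file name is hypothetical, and OpenCV plus MediaPipe Hands stand in for the unnamed tracking software used in the thesis. A manually stopwatched time for the same clip would provide the baseline the experiment compares against.

```python
import cv2
import mediapipe as mp

cap = cv2.VideoCapture("assembly_task.mp4")      # hypothetical recording of one HTD activity
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)

active_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:              # a hand is visible, so the operator is working
        active_frames += 1

cap.release()
hands.close()
print(f"Automatically measured task time: {active_frames / fps:.1f} s")
```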
147

Hierarchical and Semantic Data Management and Querying for Patient Records and Personal Photos

Elliott, Brendan David January 2009 (has links)
No description available.
148

Supporting Data-Intensive Scientific Computing on Bandwidth and Space Constrained Environments

Bicer, Tekin 18 August 2014 (has links)
No description available.
149

Geometric and Statistical Summaries for Big Data Visualization

Chaudhuri, Abon January 2013 (has links)
No description available.
150

Data Management and Data Processing Support on Array-Based Scientific Data

Wang, Yi 08 October 2015 (has links)
No description available.
