211

Förhoppningar och förutsättningar : En undersökning om datastyrning i praktiken i offentlig sektor på kommunal nivå (Hopes and Prerequisites: A Study of Data Governance in Practice in the Public Sector at the Municipal Level)

Norin, Andreas January 2023 (has links)
Data has been recognized as an important resource in the public sector. As a result, expectations regarding data and its potential use cases have increased. In Sweden, key stakeholders have emphasized the importance of data for data-driven purposes. However, the amount of data collected, stored and analyzed has increased dramatically, which from an organizational perspective places new demands on how public organizations govern and control their data management. In this study, data governance in the public sector at the municipal level was investigated with the purpose of creating an overview of how data governance occurs within municipalities. The study was divided into two phases: the first consisted of interviews with a specific municipality, while the second consisted of a survey distributed to the IT managers (or equivalent) of all 290 Swedish municipalities. The results show that the current approach to data governance presents several challenges in the municipal sector. The results also indicate that the municipal sector may be more focused on information governance than on data governance, and that there may be a lack of knowledge about the distinction between data and information. The findings suggest that the municipal sector lacks the prerequisites for managing data as a valuable and strategic asset, which is a concern given the public sector's ambitions for data to generate and enhance services for society.
212

Product Data Management in New Product Introduction : A Qualitative Case Study of Ericsson, PIM RBS Kista, Sweden

LARSSON, KRISTOFER, VIDLUND, FREDRIK January 2014 (has links)
In today's market there is increasing pressure on companies to reduce their time-to-market and lower their costs whilst maintaining high product quality. As a result, manufacturing firms have to develop and produce products faster, at lower cost, and with increased quality in order to maintain their competitiveness. The information and communications technology (ICT) market is a fast-changing market, which makes the development process all the more important. The management of product data is an important aspect of the product development process, but also one of the most challenging. Product data and product data management (PDM) are important aspects of the new product introduction (NPI) process and, in turn, the product development process. This research is based on a case study conducted at PIM (Product Introduction and Maintenance) RBS (Radio Base Station) Kista. PIM RBS Kista is a lead site responsible for NPI and product development for certain appointed products within Ericsson, a world-leading multinational corporation in the ICT industry.
In alignment with the research focus, the main processes used within PIM RBS Kista to gather, store, and use product data during product development in the NPI process have been described and analysed in order to contribute to the PDM research field. The focus has been on the Operations department, in which new products are realised during different activities and from which product data is the main output. The processes identified and analysed provide insight into how PDM is used during product realisation and its connection to the production shop floor. The thesis also discusses the main complications within the case organisation and suggests improvements regarding the current PDM processes and the systems and tools used.
213

A Dementia Care Mapping (DCM) data warehouse as a resource for improving the quality of dementia care. Exploring requirements for secondary use of DCM data using a user-driven approach and discussing their implications for a data warehouse

Khalid, Shehla January 2016 (has links)
The secondary use of Dementia Care Mapping (DCM) data, if that data were held in a data warehouse, could contribute to global efforts in monitoring and improving dementia care quality. This qualitative study identifies requirements for the secondary use of DCM data within a data warehouse using a user-driven approach. The thesis critically analyses various technical methodologies and then argues for, and demonstrates the applicability of, a modified grounded theory as a user-driven methodology for a data warehouse. Interviews were conducted with 29 DCM researchers, trainers and practitioners in three phases. Nineteen interviews were held face to face and the rest via Skype or telephone, with individual interviews lasting 45-60 minutes on average. The interview data was systematically analysed using open, axial and selective coding techniques and constant comparison methods. The study data highlighted benchmarking, mapper support and research as three perceived potential secondary uses of DCM data within a data warehouse. DCM researchers identified concerns regarding the quality and security of DCM data for secondary uses, which led to requirements for additional provenance, ethical and contextual data to be included in a warehouse alongside DCM data in order to support secondary use of this data for research. The study data was also used to extrapolate three main factors that can influence the quality and availability of DCM data for secondary uses: the individual mapper, the organisation, and electronic data management. The study makes further recommendations for designing a future DCM data warehouse.
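To make the reported requirements concrete, the sketch below models what a single warehouse record might look like with provenance, ethical and contextual metadata attached to a DCM observation. This is a hypothetical illustration only: the thesis does not publish a schema, and every field name here is an assumption derived from the requirements summarised above.

```python
# Hypothetical sketch of a DCM warehouse record carrying the provenance,
# ethical and contextual metadata the study identifies as requirements.
# Field names are illustrative assumptions, not the thesis's schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Provenance:
    mapper_id: str                 # who produced the mapping data
    mapper_training: str           # e.g. "basic" or "advanced" DCM training
    mapping_date: date
    capture_tool: Optional[str] = None  # electronic capture tool, if any

@dataclass
class Context:
    care_setting: str              # e.g. "residential care home"
    organisation_id: str
    mapping_purpose: str           # e.g. "practice development" or "research"

@dataclass
class DcmRecord:
    participant_code: str          # pseudonymised identifier
    behaviour_category: str        # DCM behaviour category code
    mood_engagement: int           # DCM mood/engagement value, -5..+5 scale
    consent_for_secondary_use: bool  # the ethical flag the study calls for
    provenance: Provenance
    context: Context

record = DcmRecord("P-017", "A", 3, True,
                   Provenance("mapper-42", "advanced", date(2016, 3, 1)),
                   Context("residential care home", "org-07", "research"))
```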
214

Increasing Data Driven Processes at an Industrial Company - A project performed at Scania

Bayati, Arastoo January 2022 (has links)
Amidst a world with emerging problems in workforce shortages, environmental impact, and energy crises, it has become essential to increase production efficiency. Data-driven processes using artificial intelligence and machine learning can be a step towards the solution. Nevertheless, these applications rely heavily on data scientists being able to create high-quality models. Complications can arise because the data is normally generated in conjunction with processes outside the data scientist's competency. Therefore, it is of great importance that the personnel working in proximity to the data generation are instilled with some competency in data science, so that they can not only communicate with and aid data scientists in their work but also perform data analysis themselves. Combining the results of a literature review with discussions with experts in the fields of production and data science, a five-step plan was created that engineers can follow to have a value-adding impact when working with data scientists. The content of this paper relates to an industrial setting, namely Scania, where the project was performed, but in essence the method can be used by anyone working with high-volume data.
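The thesis's five-step plan is not reproduced in the abstract, but the kind of first-pass data profiling it envisions engineers performing can be sketched. The column names and the `profile` helper below are invented for illustration, assuming pandas is available:

```python
# A generic first-pass data-quality profile of the kind a shop-floor engineer
# might run before handing data to a data scientist. The thesis's actual
# five-step plan is not reproduced here, and the column names are invented.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise type, completeness and cardinality per column."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "missing_pct": df.isna().mean() * 100,
        "n_unique": df.nunique(),
    })

measurements = pd.DataFrame({
    "machine_id": ["M1", "M1", "M2", None],
    "torque_nm": [51.2, 50.9, None, 49.8],
    "cycle_time_s": [12.1, 12.3, 11.9, 12.0],
})
print(profile(measurements))
```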
215

Parallel and Distributed Databases, Data Mining and Knowledge Discovery

Valduriez, Patrick, Lehner, Wolfgang, Talia, Domenico, Watson, Paul 17 July 2023 (has links)
Managing and efficiently analysing the vast amounts of data produced by a huge variety of data sources is one of the big challenges in computer science. The development and implementation of algorithms and applications that can extract information diamonds from these ultra-large, and often distributed, databases is a key challenge for the design of future data management infrastructures. Today's data-intensive applications often suffer from performance problems and an inability to scale to high numbers of distributed data sources. Therefore, distributed and parallel databases have a key part to play in overcoming resource bottlenecks, achieving guaranteed quality of service and providing system scalability. The increased availability of distributed architectures, clusters, Grids and P2P systems, supported by high-performance networks and intelligent middleware, provides parallel and distributed databases and digital repositories with a great opportunity to cost-effectively support key everyday applications. Further, there is the prospect of data mining and knowledge discovery tools adding value to these vast new data resources by automatically extracting useful information from them.
216

A novel nomenclature for the identification of ground truth in medical imaging data : Design, implementation and integration in a large knowledge database

Realini, Edoardo January 2023 (has links)
The annotation of medical images is a critical task for many downstream applications. However, the lack of a unified annotation nomenclature has resulted in inconsistency and ambiguity in the storage and use of such data. In this thesis, we propose and evaluate a novel annotation nomenclature for medical images. Our nomenclature is designed to be intuitive, easy to use, and easy to expand. We also developed a knowledge database for storing large medical image datasets that integrates the new nomenclature. The database is implemented as a server application exposing REST APIs, which allows users to upload and download datasets, query the data based on the annotations, and integrate the system into existing frameworks. We conducted a user study to assess the usability characteristics of the label nomenclature and its integration in the new system. The results collected from the user base are positive: the nomenclature was well received, and the users rated the usability of the whole system positively.
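As an illustration of the kind of REST interface described above, the sketch below exposes dataset upload and annotation-based querying using Flask. The endpoint paths, parameter names and in-memory store are assumptions for the sake of example; the thesis's actual API is not documented here.

```python
# Minimal sketch of a REST interface for dataset upload plus annotation-based
# querying. Endpoint paths, parameter names and the in-memory store are
# illustrative assumptions, not the API of the system built in the thesis.
from flask import Flask, jsonify, request

app = Flask(__name__)
datasets = {}  # dataset_id -> {"name": ..., "annotations": [...]}

@app.post("/datasets")
def upload_dataset():
    body = request.get_json()
    dataset_id = str(len(datasets) + 1)
    datasets[dataset_id] = {
        "name": body["name"],
        "annotations": body.get("annotations", []),  # nomenclature labels
    }
    return jsonify({"id": dataset_id}), 201

@app.get("/datasets")
def query_datasets():
    # e.g. GET /datasets?annotation=tumour_boundary
    label = request.args.get("annotation")
    hits = {k: v for k, v in datasets.items()
            if label is None or label in v["annotations"]}
    return jsonify(hits)

if __name__ == "__main__":
    app.run(port=8000)
```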
217

Was sind FAIRe Daten? (What Are FAIR Data?)

Nagel, Stefanie 29 February 2024 (has links)
The so-called FAIR principles have by now become established as a standard requirement in research data management. In funding applications and reports, researchers must explain how they manage and publish research data in accordance with the FAIR principles. More and more journals and publishers likewise require their authors to share their research data in line with the FAIR principles in order to ensure the reproducibility and verifiability of their results. This article briefly summarises what the acronym FAIR actually means and what researchers should pay attention to in this context.
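For reference, FAIR stands for Findable, Accessible, Interoperable and Reusable. The minimal metadata record below illustrates how each letter commonly maps to concrete fields; the field set is a simplified assumption, not a formal standard schema, and the identifier is a placeholder.

```python
# FAIR = Findable, Accessible, Interoperable, Reusable. A minimal metadata
# record illustrating how each letter commonly maps to concrete fields.
# The field set is a simplified assumption, not a formal standard schema,
# and the identifier below is a placeholder, not a real DOI.
record = {
    # Findable: a persistent identifier and rich descriptive metadata
    "identifier": "https://doi.org/10.XXXX/example",
    "title": "Example survey dataset",
    "keywords": ["research data management", "FAIR"],
    # Accessible: retrievable over a standard, open protocol
    "access_url": "https://example.org/datasets/42",
    "access_protocol": "HTTPS",
    # Interoperable: open formats and shared vocabularies
    "format": "text/csv",
    "metadata_vocabulary": "Dublin Core",
    # Reusable: a clear licence and documented provenance
    "license": "CC-BY-4.0",
    "provenance": "Collected 2023; processing steps documented in README",
}
```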
218

Modern Anomaly Detection: Benchmarking, Scalability and a Novel Approach

Pasupathipillai, Sivam 27 November 2020 (has links)
Anomaly detection consists in automatically detecting the most unusual elements in a data set. Anomaly detection applications emerge in domains such as computer security, system monitoring, fault detection, and wireless sensor networks. The strategic importance of detecting anomalies in these domains makes anomaly detection a critical data analysis task. Moreover, the contextual nature of anomalies, among other issues, makes anomaly detection a particularly challenging problem. Anomaly detection has received significant research attention in the last two decades. Much effort has been invested in the development of novel algorithms for anomaly detection. However, several open challenges still exist in the field. This thesis presents our contributions toward solving these challenges. These contributions include: a methodological survey of the recent literature, a novel benchmarking framework for anomaly detection algorithms, an approach for scaling anomaly detection techniques to massive data sets, and a novel anomaly detection algorithm inspired by the law of universal gravitation. Our methodological survey highlights open challenges in the field, and it provides some motivation for our other contributions. Our benchmarking framework, named BAD, tackles the problem of reliably assessing the accuracy of unsupervised anomaly detection algorithms. BAD leverages parallel and distributed computing to enable massive comparison studies and hyperparameter tuning tasks. The challenge of scaling unsupervised anomaly detection techniques to massive data sets is well known in the literature. In this context, our contributions are twofold: we investigate the trade-offs between a single-threaded implementation and a distributed approach considering price-performance metrics, and we propose an approach for scaling anomaly detection algorithms to arbitrary data volumes. Our results show that, when high scalability is required, our approach can handle arbitrarily large data sets without significantly compromising detection accuracy. We conclude our contributions by proposing a novel algorithm for anomaly detection, named Gravity. Gravity identifies anomalies by considering the attraction forces among massive data elements. Our evaluation shows that Gravity is competitive with other popular anomaly detection techniques on several benchmark data sets. Additionally, the properties of Gravity make it preferable in cases where hyperparameter tuning is challenging or unfeasible.
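The abstract does not specify Gravity's formulation, but the underlying intuition can be sketched: treat each data element as a unit mass, sum the inverse-square attraction it receives from all other elements, and flag weakly attracted elements as anomalies. The toy example below is one plausible reading of that idea, not the thesis's actual algorithm.

```python
# Toy sketch of a gravitation-inspired anomaly score, assuming unit masses:
# each point attracts every other with force ~ 1/distance^2, and points that
# receive little total attraction are flagged as anomalous. This is one
# plausible reading of the idea, not the thesis's actual Gravity algorithm.
import numpy as np

def gravity_scores(X: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Return the summed attraction each point receives (lower = more anomalous)."""
    diff = X[:, None, :] - X[None, :, :]   # pairwise difference vectors
    d2 = (diff ** 2).sum(axis=-1)          # squared pairwise distances
    np.fill_diagonal(d2, np.inf)           # a point does not attract itself
    return (1.0 / (d2 + eps)).sum(axis=1)  # total attraction per point

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)), [[8.0, 8.0]]])
scores = gravity_scores(X)
print("most anomalous index:", scores.argmin())  # expected: 50, the outlier
```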
219

Data Curation Perspectives and Practices of Researchers at Kent State University’s Liquid Crystal Institute: A Case Study

Shakeri, Shadi 27 November 2013 (has links)
No description available.
220

Cloudwave: A Cloud Computing Framework for Multimodal Electrophysiological Big Data

Jayapandian, Catherine Praveena 02 September 2014 (has links)
No description available.
