261

Enhancing understanding of company-wide product data management in ICT companies

Kropsu-Vehkaperä, H. (Hanna) 24 April 2012
Abstract Data is becoming an increasingly critical success factor as business processes rely more and more on information systems. Product data is required to produce, sell, deliver, and invoice a product in information systems. Traditionally, product data and product data management (PDM) studies have focused on product development and related activities, with less attention being paid to PDM in other lifecycle phases. The purpose of this doctoral dissertation is to clarify the challenges and prerequisites for company-wide PDM. The study covers the entire product lifecycle and provides potential solutions for developing company-wide PDM and for enhancing the understanding of PDM as a company-wide activity. The study was realised by collecting and analysing data from ICT companies that are seeking better ways to manage a wide product range, technologically complex products and comprehensive solutions by enhancing their data management practices. Practitioners' empirical experiences and perceptions are seen to have increased knowledge of company-wide PDM. The study adopted a case study approach and used interviews as the main data collection method. This study indicates that company managers have already realised that successful business operations require a higher-level understanding of products and related product data. In practice, however, several challenges hinder the ability to achieve the goal of higher-level business-driven PDM. These challenges include product harmonisation, PDM process development requirements and information systems development requirements. The results of this research indicate that product harmonisation is required to better support efficient product data management. Understanding the true nature of product data, that is, a combination of product master data and other general product data, and understanding the content of product data from different stakeholder perspectives are prerequisites for functional company-wide PDM. Higher-level product decisions have a significant impact on product data management. Extensive product ranges require general guidelines in order to be manageable, especially as even single products are complex. The results of this study indicate that companies should follow a top-down approach when developing their PDM practices. The results also indicate that companies require a generic product structure in order to support unified product management. The main implication of this dissertation is the support it provides for managers in terms of developing truly company-wide product data management practices. / Tiivistelmä Data has become an important business success factor as business processes rely ever more heavily on information systems. Product-related data is essential so that a product can be manufactured, sold, delivered and invoiced. Product data and its management have traditionally been examined from a product development perspective, whereas this study seeks to understand product data management covering the aforementioned company functions as well. The goal of this study is to identify the challenges and prerequisites for developing company-wide product data management practices. Managing product data as a company-wide activity requires an understanding of the different actors who use product data, of the nature of the data, and of how the data is used in different processes. The study was carried out in ICT companies that, by improving their product data practices, seek ways to manage a wide product range, technologically complex products and comprehensive solutions.
Practitioners' experiences and perceptions are of primary importance in increasing knowledge of company-wide product data management. The study was carried out using case study methods, with interviews as the main data collection method. This study shows that the development of business-driven product data management is a topical issue in companies. The study identifies numerous challenges that have prevented business-driven product data management from being achieved. These challenges are: harmonisation of the product across the company's different functions, requirements for developing product data management processes, and requirements for developing information systems. According to the results, harmonisation of the product portfolio is one of the prerequisites for efficient product data management. Company-wide product data management also requires understanding the true nature of product data, which consists of product master data and other product data. In addition, it is essential to understand the content of product data from the perspective of its actual users. The results of this study also show that the development of product data management should proceed according to a top-down principle. The results also suggest that a generic product structure supports unified product management practices. Together with the descriptions and models presented in the work, these results provide support for developing product data management practices company-wide.
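The generic product structure recommended in this dissertation is not detailed in the abstract. As a minimal sketch, assuming such a structure separates company-wide product master data from function-specific product data, the idea could be modelled as follows; all class and field names are illustrative assumptions, not the dissertation's actual model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch: product master data shared by all functions, plus
# function-specific product data attached to the same generic structure.
# Names and values are invented for the example.

@dataclass
class ProductMasterData:
    product_id: str          # the identifier shared by all functions and systems
    name: str
    lifecycle_state: str     # e.g. "in development", "active", "ramp-down"

@dataclass
class ProductItem:
    master: ProductMasterData
    # other product data, keyed by the business function that owns it
    function_data: Dict[str, dict] = field(default_factory=dict)
    components: List["ProductItem"] = field(default_factory=list)

router = ProductItem(
    master=ProductMasterData("PRD-1001", "Access Router X", "active"),
    function_data={
        "sales": {"list_price": 1200.0, "sales_package": "X-Bundle"},
        "delivery": {"lead_time_days": 14},
        "invoicing": {"tax_class": "standard"},
    },
)
print(router.master.product_id, sorted(router.function_data))
```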
262

Gestion et visualisation de données hétérogènes multidimensionnelles : application PLM à la neuroimagerie / Management and visualisation of heterogeneous multidimensional data: PLM application to neuroimaging

Allanic, Marianne 17 December 2015
Neuroimaging faces difficulties in analysing and reusing the growing mass of heterogeneous data it produces. The provenance of the data is complex – multi-subject, multi-analysis, multi-temporality – and these data are only partially stored, limiting the possibilities for multimodal and longitudinal studies. In particular, functional brain connectivity is analysed to understand how the different areas of the brain work together. The acquired and processed data must be managed along several dimensions, such as acquisition time, the time between acquisitions, or the subjects and their characteristics. The objective of this thesis is to enable the exploration of complex relationships between heterogeneous data, which breaks down into two axes: (1) how to manage the data and their provenance, and (2) how to visualise multidimensional data structures. The contribution of our work is organised around three proposals, presented after a state of the art on heterogeneous data management and graph visualisation. The BMI-LM (Bio-Medical Imaging – Lifecycle Management) data model structures the management of neuroimaging data according to the stages of a study and takes the evolving nature of research into account by associating specific classes with generic objects. The implementation of this model within a PLM (Product Lifecycle Management) system shows that the concepts developed over the past twenty years by the manufacturing industry can be reused for managing neuroimaging data. GMDs (Dynamic Multidimensional Graphs) are introduced to represent complex relationships between data that evolve along several dimensions, and the JGEX (Json Graph EXchange) format was created to allow the storage and exchange of GMDs between applications. The OCL (Overview Constraint Layout) method enables the visual and interactive exploration of GMDs. It relies on partial preservation of the user's mental map and on alternating complete and reduced views of the data. The OCL method is applied to the study of the resting-state functional brain connectivity of 231 subjects represented as GMDs – brain areas are represented by the nodes and connectivity measures by the edges – according to age, gender and laterality: the GMDs are obtained by applying processing chains to MRI acquisitions in the PLM system. The results show two main benefits of using the OCL method: (1) the identification of global trends along one or more dimensions, and (2) the highlighting of local changes between GMD states. / The neuroimaging domain is confronted with issues in analyzing and reusing the growing amount of heterogeneous data it produces. Data provenance is complex – multi-subjects, multi-methods, multi-temporalities – and the data are only partially stored, restricting multimodal and longitudinal studies. In particular, functional brain connectivity is studied to understand how areas of the brain work together. Raw and derived imaging data must be properly managed according to several dimensions, such as acquisition time, time between two acquisitions, or subjects and their characteristics.
The objective of the thesis is to allow the exploration of complex relationships between heterogeneous data, which is addressed in two parts: (1) how to manage data and provenance, and (2) how to visualize structures of multidimensional data. The contributions follow a logical sequence of three propositions, which are presented after a survey of research in heterogeneous data management and graph visualization. The BMI-LM (Bio-Medical Imaging – Lifecycle Management) data model organizes the management of neuroimaging data according to the phases of a study and takes into account the evolving nature of research thanks to specific classes associated with generic objects. The application of this model in a PLM (Product Lifecycle Management) system shows that concepts developed over the past twenty years for the manufacturing industry can be reused to manage neuroimaging data. GMDs (Dynamic Multidimensional Graphs) are introduced to represent complex dynamic relationships of data, as well as the JGEX (Json Graph EXchange) format, which was created to store and exchange GMDs between software applications. The OCL (Overview Constraint Layout) method allows interactive and visual exploration of GMDs. It is based on partial preservation of the user's mental map and on alternating complete and reduced views of the data. The OCL method is applied to the study of the resting-state functional brain connectivity of 231 subjects, represented as GMDs – the areas of the brain are the nodes and connectivity measures the edges – according to age, gender and laterality: the GMDs are computed through processing workflows applied to MRI acquisitions in the PLM system. The results show two main benefits of using the OCL method: (1) identification of global trends along one or more dimensions, and (2) highlighting of local changes between GMD states.
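The abstract names JGEX as a JSON format for storing and exchanging dynamic multidimensional graphs, but does not give its schema. The snippet below is only a guess at what such a serialization could look like: every field name and value is an illustrative assumption, not the actual JGEX specification.

```python
import json

# Hypothetical sketch of a dynamic multidimensional graph (GMD) serialized as
# JSON, in the spirit of the JGEX format mentioned above. Field names such as
# "dimensions", "states" and "edges" are assumptions for illustration only.
gmd = {
    "dimensions": {"age": [25, 45, 65], "gender": ["F", "M"]},
    "nodes": [
        {"id": "precuneus"},
        {"id": "posterior_cingulate"},
    ],
    # one graph state per combination of dimension values
    "states": [
        {
            "coordinates": {"age": 25, "gender": "F"},
            "edges": [
                {"source": "precuneus", "target": "posterior_cingulate",
                 "weight": 0.72},  # a functional connectivity measure
            ],
        },
    ],
}

print(json.dumps(gmd, indent=2))
```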
263

Kvalita dat a efektivní využití rejstříků státní správy / Data Quality and Effective Use of Registers of State Administration

Rut, Lukáš January 2009
This diploma thesis deals with the registers of state administration in terms of data quality. The main objective is to analyze ways of evaluating data quality and to apply an appropriate method to the data in the business register. Another objective is to analyze the possibilities for data cleansing and data quality improvement and to propose a solution for the inaccuracies found in the business register. The last goal of this paper is to analyze approaches to assigning identifiers to persons and to choose a suitable key for identifying persons in the registers of state administration. The thesis is divided into several parts. The first one is an introduction to the sphere of registers of state administration. It closely analyzes several selected registers, especially in terms of which data they contain and how they are updated. A description of the legislative changes that come into effect in the middle of 2010 is a major contribution of this part. Special attention is dedicated to the impact of these changes from the data quality point of view. The next part deals with the problem of identifiers for legal entities and natural persons. This section presents possible solutions for identifying entities in the data from the registers. The third part analyzes ways to determine data quality. The method called data profiling is described in detail and applied in an extensive data quality analysis of the business register. Correct metadata and information about incorrect data are the outputs of this analysis. The last chapter deals with possibilities for solving data quality problems. Three solution variants are proposed and compared. The paper as a whole represents compact material on how to solve problems with the effective use of the data contained in the registers of state administration. Nevertheless, the proposed solutions and described approaches can be used in many other projects dealing with data quality.
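Data profiling of the kind applied to the business register typically computes column-level statistics such as completeness, uniqueness and frequent values. The following is a minimal sketch of that idea; the records and field names are invented for illustration, since the real register schema and the profiling toolset used in the thesis are not described in the abstract.

```python
from collections import Counter

# Toy register records: one duplicate row, one missing identifier, one
# invalid date, so that the profile has something to reveal.
records = [
    {"id": "27074358", "name": "Alfa s.r.o.", "founded": "2001-05-14"},
    {"id": "27074358", "name": "Alfa s.r.o.", "founded": "2001-05-14"},
    {"id": None,       "name": "Beta a.s.",   "founded": "31.2.2005"},
]

def profile(rows, column):
    """Report completeness, uniqueness and the most frequent values of one column."""
    values = [r.get(column) for r in rows]
    non_null = [v for v in values if v is not None]
    return {
        "completeness": len(non_null) / len(values),
        "uniqueness": len(set(non_null)) / len(non_null) if non_null else 0.0,
        "top_values": Counter(non_null).most_common(3),
    }

for col in ("id", "name", "founded"):
    print(col, profile(records, col))
```

Such per-column metrics are the "correct metadata and information about incorrect data" that a profiling pass produces; validity rules (for example date-format checks) would be layered on top in the same way.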
264

Master Data Management, Integrace zákaznických dat a hodnota pro business / Master Data Management, Customer Data Integration and value for business

Rais, Filip January 2009
This thesis is focused on the Master Data Management (MDM) and Customer Data Integration (CDI) area and its main domains. It also refers to the various theoretical directions that can be found in this area of expertise. It summarizes the main aspects and domains and presents different perspectives on the referenced principles. It is exhaustive background research in the area of Master Data Management, with an emphasis on practical use and with references to the author's experience and opinions. A secondary focus is the business value of Master Data Management initiatives. The thesis presents a thought concept for initiating an MDM project. The reason for such a concept is the current situation, in which companies struggle to determine the actual benefits of MDM initiatives. There is overall agreement on the necessity of such initiatives, but the struggle lies in determining the actual measurable impact on a company's revenue or profit. Since an MDM initiative is more of an enabling function than a direct revenue function, the benefit is less straightforward and therefore harder to determine. This work describes different layers and the mapping of business requirements through those layers, creating a transparent linkage between enabling functions and revenue-generating ones. Emphasis is given to financial benefit calculation, measurability, and the responsibilities of business and IT departments. To underline certain conclusions, the thesis also presents real-world interviews with possible stakeholders of an MDM initiative within a company. These representatives were selected as key drivers of such an initiative. The interviews map their recognition of MDM and related terms, and also focus on their reasons for and expectations of MDM. The representatives were selected to represent business and IT departments equally, which reveals an interesting clash of views and expectations.
265

Nejčastější problémy s testovacími daty a možnosti jejich řešení / The most common test data problems and possible solutions

Langrová, Kamila January 2014
This thesis is focused on testing, test data, the most frequent test data issues and solutions to these issues. The theoretical part explains testing, test data and test data management. The thesis categorizes testing by type, class and method, and categorizes test data. It also introduces the differences between manual and automated testing. The practical part introduces the survey questionnaire and the execution of a survey on the most frequent test data issues and their solutions. This part includes the survey description, the formulation of its goals and its evaluation. The contribution of this thesis is an integrated view of testing, test data and their importance across the whole testing domain, as well as the obstacles that testing practitioners have to deal with. The thesis also contributes a summary of solutions to test data issues and of ways to prevent or handle these issues.
266

Storage Format Selection and Optimization for Materialized Intermediate Results in Data-Intensive Flows

Munir, Rana Faisal 01 February 2021
Modern organizations produce and collect large volumes of data that need to be processed repeatedly and quickly to gain business insights. For such processing, Data-intensive Flows (DIFs) are typically deployed on distributed processing frameworks. The DIFs of different users have many computation overlaps (i.e., parts of the processing are duplicated), thus wasting computational resources and increasing the overall cost. The output of these computation overlaps (known as intermediate results) can be materialized for reuse, which, if done properly, reduces cost and saves computational resources. Furthermore, the way such outputs are materialized must be considered, as different storage layouts (i.e., horizontal, vertical, and hybrid) can be used to reduce the I/O cost. In this PhD work, we first propose a novel approach for automatically materializing the intermediate results of DIFs through a multi-objective optimization method, which can tackle multiple and conflicting quality metrics. Next, we study the behavior of the different DIF operators that are the first to process the loaded materialized results. Based on this study, we devise a rule-based approach that decides the storage layout for materialized results based on the subsequent operation types. Despite improving the cost in general, the heuristic rules do not consider the amount of data read while making the choice, which can lead to a wrong decision. Thus, we design a cost model that is capable of finding the right storage layout for every scenario. The cost model uses data and workload characteristics to estimate the I/O cost of a materialized intermediate result with different storage layouts and chooses the one with minimum cost. The results show that storage layouts help to reduce the loading time of materialized results and, overall, improve the performance of DIFs. The thesis also focuses on the optimization of the configurable parameters of hybrid layouts. We propose ATUN-HL (Auto TUNing Hybrid Layouts), which, based on the same cost model and given the workload and the characteristics of the data, finds the optimal values for the configurable parameters of hybrid layouts (i.e., Parquet). Finally, the thesis also studies the impact of parallelism in DIFs and hybrid layouts. Our proposed cost model helps to devise an approach for fine-tuning the parallelism by deciding the number of tasks and machines to process the data. Thus, the cost model proposed in this thesis enables choosing the best possible storage layout for materialized intermediate results, tuning the configurable parameters of hybrid layouts, and estimating the number of tasks and machines for the execution of DIFs. / Modern companies produce and collect large volumes of data that must be processed repeatedly and quickly in order to gain business insights. For processing these data, data-intensive flows (DIFs) are typically deployed on distributed systems such as MapReduce. The DIFs of different users overlap to a large extent, so that much work is performed multiple times, resources are wasted, and the overall cost increases. To counteract this effect, the intermediate results of the DIFs can be materialized for later reuse. Above all, the different storage layouts (horizontal, vertical and hybrid) must be taken into account here.
This doctoral thesis proposes a novel approach for automatically materializing the intermediate results of DIFs through a multi-objective optimization method that is able to handle conflicting quality metrics. Furthermore, the interaction between different operator types and different storage layouts is examined. Based on this study, a rule-based approach is proposed that determines the storage layout for materialized results based on the subsequent operation types. Although the overall cost of executing the DIFs generally improves, the heuristic approach is not able to take the amount of data read into account when selecting the storage layout, which in some cases can lead to wrong decisions. For this reason, a cost model is developed with which the right storage layout can be found for every scenario. The cost model estimates, from data and workload characteristics, the I/O cost of a materialized intermediate result with different storage layouts and selects the one with minimal cost. The results show that storage layouts shorten the loading time of materialized results and improve the performance of DIFs overall. The thesis also addresses the optimization of the configurable parameters of hybrid layouts. Concretely, the so-called ATUN-HL (Auto TUNing Hybrid Layouts) approach is developed, which, based on the same cost model and taking the workload and the characteristics of the data into account, finds the optimal values for the configurable parameters of Parquet, an implementation of hybrid layouts. Finally, this thesis also examines the effects of parallelism in DIFs and hybrid layouts. For this purpose, an approach is developed that is able to determine automatically the number of tasks and machines required to process the data. In summary, the cost model proposed in this thesis makes it possible to determine the best possible storage layout for materialized intermediate results, to set the configurable parameters of hybrid layouts, and to estimate the number of tasks and machines for the execution of DIFs.
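As a rough illustration of the cost-based selection idea described above (estimate the I/O cost of reading a materialized result under each storage layout and keep the cheapest), here is a toy sketch. The cost formulas and parameters are simplified placeholders invented for the example, not the thesis's actual cost model.

```python
# Toy cost-based layout chooser: estimate bytes read per layout for one scan
# of a materialized intermediate result and pick the minimum.

def estimate_io_cost(layout, rows, row_size_bytes, projected_fraction, selectivity):
    """Very rough I/O estimate (bytes read) for one scan; placeholder formulas."""
    full_scan = rows * row_size_bytes
    if layout == "horizontal":   # row-oriented: whole rows are read regardless of projection
        return full_scan
    if layout == "vertical":     # column-oriented: only the projected columns are read
        return full_scan * projected_fraction
    if layout == "hybrid":       # e.g. Parquet: column pruning plus row-group skipping
        return full_scan * projected_fraction * max(selectivity, 0.1)
    raise ValueError(f"unknown layout: {layout}")

def choose_layout(rows, row_size_bytes, projected_fraction, selectivity):
    layouts = ("horizontal", "vertical", "hybrid")
    costs = {l: estimate_io_cost(l, rows, row_size_bytes,
                                 projected_fraction, selectivity) for l in layouts}
    return min(costs, key=costs.get), costs

# A downstream operator that projects 20% of the columns and keeps 10% of the rows.
best, costs = choose_layout(rows=10_000_000, row_size_bytes=200,
                            projected_fraction=0.2, selectivity=0.1)
print(best, costs)
```

The real model additionally uses data and workload characteristics gathered from the DIFs, but the decision rule is the same: compute a cost per candidate layout and materialize with the cheapest one.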
267

Data hiding algorithms for healthcare applications

Fylakis, A. (Angelos) 12 November 2019
Abstract Developments in information technology have had a big impact on healthcare, producing vast amounts of data and increasing the demands associated with their secure transfer, storage and analysis. To serve these demands, biomedical data need to carry patient information and records, or even the extra biomedical images or signals required for multimodal applications. The proposed solution is to host this information in the data themselves using data hiding algorithms, introducing imperceptible modifications to achieve two main purposes: increasing data management efficiency and enhancing the security aspects of confidentiality, reliability and availability. Data hiding achieves this by embedding the payload in objects, including components such as authentication tags, without requiring extra space or modifications in repositories. The proposed methods address two research problems. The first is the hospital-centric problem of providing efficient and secure management of data in hospital networks. This includes combining multimodal data in single objects. The host data were biomedical images and sequences intended for diagnosis, meaning that even non-visible modifications can cause errors. Thus, a determining restriction was reversibility. Reversible data hiding methods remove the introduced modifications upon extraction of the payload. Embedding capacity was another priority that determined the proposed algorithms. To meet those demands, the algorithms were based on the Least Significant Bit Substitution and Histogram Shifting approaches. The second was the patient-centric problem, including user authentication and the issues of secure and efficient data transfer in eHealth systems. Two novel solutions were proposed. The first method uses data hiding to increase the robustness of face biometrics in photos, where, due to the high robustness requirements, a periodic pattern embedding approach was used. The second method protects sensitive user data collected by smartphones. In this case, to meet the low computational cost requirements, the method was based on Least Significant Bit Substitution. In conclusion, the proposed algorithms introduced novel data hiding applications and demonstrated competitive embedding properties in existing applications. / Tiivistelmä Modern healthcare systems produce large amounts of data, which emphasises the requirements related to the secure transfer, storage and analysis of that data. To meet these requirements, biomedical data must contain patient information and records, and even the additional biomedical images and signals needed in multimodal applications. The proposed solution is to embed this information into the data using data hiding methods, in which imperceptible changes are made to achieve two goals: increasing the efficiency of data management and improving the security aspects related to confidentiality, reliability and availability. Data hiding achieves this by embedding the payload, including components such as authentication tags, without extra space requirements or changes to databases. The proposed methods solve two research problems. The first is the hospital-centric problem of providing efficient and secure data management in hospital networks. This includes combining multimodal data into a single entity. The data carriers were biomedical images and sequences intended for diagnosis, where even invisible changes can cause errors.
Hence the defining constraint was reversibility. Reversible data hiding methods remove the introduced changes when the payload is extracted. Embedding capacity was another goal that shaped the proposed algorithms. To meet these requirements, the algorithms were based on least significant bit substitution and histogram shifting. The second was the patient-centric problem, which includes user authentication as well as the challenges of secure and efficient data transfer in eHealth systems. Two new solutions were proposed. The first of them uses data hiding to improve the robustness of face biometrics in photographs; due to the high robustness requirement, a periodic pattern embedding method was used. The second method protects the sensitive user data collected by smartphones. In this case, to achieve a low computational cost, the method was based on least significant bit substitution. In summary, the proposed algorithms introduced new data hiding applications and demonstrated competitive embedding properties in existing applications.
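Least Significant Bit Substitution, one of the two embedding approaches named above, replaces the lowest bit of each cover sample with a payload bit. The sketch below shows the generic, non-reversible form of the technique on an 8-bit grayscale image; it is an illustration of the general idea, not the specific reversible, capacity-optimised algorithms developed in the thesis.

```python
import numpy as np

def embed(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant bits of an 8-bit image."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("payload exceeds embedding capacity")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bytes: int) -> bytes:
    """Read n_bytes of payload back out of the LSBs."""
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
payload = b"patient:12345"        # e.g. a record identifier or authentication tag
stego = embed(cover, payload)
assert extract(stego, len(payload)) == payload
# Each modified pixel changes by at most 1, which keeps the change imperceptible.
print("maximum pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))
```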
268

Uso de la ciencia de datos en las organizaciones / Use of data science in the organizations

Chafloque Arévalo, Wilmert Ulises, Zaravia Mioca, Ana Cecilia 20 August 2021
It is often said that data is the oil of the 21st century, but data alone does not produce value, so companies need to efficiently manage and analyse the large volumes of data generated by a society connected to a variety of devices. Therefore, given the importance that data science has acquired, this research focuses on presenting and analysing the different positions, within a time frame from 2012 to 2021, regarding the use of data science in organizations. The authors reviewed agree that there is a correlation between companies' performance and their use of data, which is why organizations are obliged to manage and analyse large volumes of data efficiently. However, the speed with which data change makes the adaptation process in organizations slow, and valuable information is lost. In view of this, organizations must implement data analysis mechanisms, promote a productive organizational culture and foster democratic leadership with a view to improving technological and staff capabilities, and, finally, establish regulations that delimit the scope of the use of personal data in order to avoid the risk of data breaches. / It is often said that data is the oil of the 21st century, but data alone does not produce value, so it is necessary for companies to efficiently manage and analyze the large volumes of data that are generated by a society connected to a variety of devices. Therefore, taking into account the importance that data science has acquired, this research focuses on presenting and analyzing the different positions, in a time frame between 2012 and 2021, regarding the use of data science in organizations. The reviewed authors agree that there is a correlation between companies' performance and the use of data, which is why organizations are obliged to efficiently manage and analyze large volumes of data. However, the speed with which data change makes the adaptation process in organizations slow and valuable information is lost. Faced with this, organizations must implement data analysis mechanisms, promote a productive organizational culture and foster democratic leadership with a view to improving technological and personnel capacities, and, finally, establish regulations that delimit the scope of the use of personal data in order to avoid the risk of data breaches. / Trabajo de Suficiencia Profesional
269

Implementace nových vzduchových jističů ABB SACE Emax 2 do produktové řady nízkonapěťových rozváděčů MNS / Implementation of the new ABB SACE Emax 2 air circuit breaker series in a low-voltage MNS switchgear product portfolio

Studený, Michal January 2014
The Master's Thesis comprises a summary of the differences between the existing design of the MNS switchgear series and the innovative proposal of the AGOMIN project. The introduction presents the CAD program SolidWorks and its extension for enterprise data management, SolidWorks Enterprise PDM. The body of the paper deals with the new ABB SACE Emax 2 range of air circuit breakers and their implementation into the product line of low-voltage switchgear. Also included are the other improvements that the AGOMIN project brings together with the implementation of the circuit breakers.
270

The relationship between Research Data Management and Virtual Research Environments

Van Wyk, Barend Johannes January 2018
The aim of the study was to compile a conceptual model of a Virtual Research Environment (VRE) that indicates the relationship between Research Data Management (RDM) and VREs. The outcome of this study was that VREs are ideal platforms for the management of research data.
In the first part of the study, a literature review was conducted by focusing on four themes: VREs and other concepts related to VREs; VRE components and tools; RDM; and the relationship between VREs and RDM. The first theme included a discussion of definitions of concepts, approaches to VREs, their development, aims, characteristics, similarities and differences of concepts, an overview of the e-Research approaches followed in this study, as well as an overview of concepts used in this study. The second theme consisted of an overview of developments of VREs in four countries (United Kingdom, USA, The Netherlands, and Germany), an indication of the differences and similarities of these programmes, and a discussion of the concept of research lifecycles, as well as VRE components. These components were then matched with possible tools, as well as with research lifecycle stages, which led to the development of a first conceptual VRE framework. The third theme included an overview of the definitions of the concepts 'data' and 'research data', as well as RDM and related concepts, an investigation of international developments with regard to RDM, an overview of the differences and similarities of approaches followed internationally, and a discussion of RDM developments in South Africa. This was followed by a discussion of the concept 'research data lifecycles', their various stages, corresponding processes and the roles various stakeholders can play in each stage. The fourth theme consisted of a discussion of the relationship between research lifecycles and research data lifecycles, a discussion of the role of RDM as a component within a VRE, the management of research data by means of a VRE, as well as the presentation of a possible conceptual model for the management of research data by means of a VRE. This literature review was conducted as a background and basis for this study.
In the second part of the study, the research methodology was outlined. The chosen methodology entailed a non-empirical part consisting of a literature study, and an empirical part consisting of two case studies from a South African university. The two case studies were specifically chosen because each used different methods in conducting research. One case study used natural-science-oriented data and laboratory/experimental methods, and the other human-oriented data and survey instruments. The proposed conceptual model derived from the literature study was assessed through these case studies, and the feedback received was used to modify and/or enhance the conceptual model.
The contribution of this study lies primarily in the presentation of a conceptual VRE model with distinct component layers and generic components, which can be used as a technological and collaborative framework for the successful management of research data. / Thesis (DPhil)--University of Pretoria, 2018. / National Research Foundation / Information Science / DPhil / Unrestricted
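The abstract describes the conceptual model only at a high level. As a loose illustration of what a mapping of generic VRE component layers to research lifecycle stages can look like, consider the sketch below; the stages, layers and example tools listed are common examples assumed for illustration, not the components actually derived in the thesis.

```python
# Illustrative mapping of research lifecycle stages to generic VRE component
# layers and example tools. All entries are assumptions for the example, not
# the thesis's actual conceptual model.
vre_model = {
    "planning":                {"layer": "collaboration",  "components": ["project wiki", "data management plan tool"]},
    "data collection":         {"layer": "data capture",   "components": ["survey instrument", "instrument interface"]},
    "processing and analysis": {"layer": "computation",    "components": ["workflow engine", "statistics packages"]},
    "preservation":            {"layer": "data management","components": ["repository", "metadata editor"]},
    "publication and reuse":   {"layer": "dissemination",  "components": ["publication portal", "persistent identifiers"]},
}

for stage, spec in vre_model.items():
    print(f"{stage:>25}: {spec['layer']:<16} -> {', '.join(spec['components'])}")
```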
