41

Archive und Forschungsdaten – sechs Kernfragen aus Sicht der Archive / Archives and research data – six key questions from the archives' perspective

Klindworth, Elisabeth 27 May 2022 (has links)
Im Zuge einer zunehmend digital ausgerichteten Forschung werden Fragen der langfristigen Sicherung und Nachnutzung von Forschungsdaten immer wichtiger, insbesondere auch im Kontext des Aufbaus einer Nationalen Forschungsdateninfrastruktur (NFDI). Archive sind fest etablierte Akteure der Forschungsinfrastruktur und verfügen bereits über langjährige Erfahrung in der Datenarchivierung. Im Online-Forum zum Thema 'Archive und Forschungsdaten' wird untersucht, wie der Begriff 'Forschungsdaten' in den Archiven verstanden wird. Darauf aufbauend werden Handlungsfelder für Archive identifiziert und die Rolle der Archive in der Forschungsdatenmanagement-Community diskutiert. Ein Austausch und eine partnerschaftliche Zusammenarbeit mit allen Akteuren in der Forschungsdatenlandschaft werden dabei angestrebt, damit effiziente und pragmatische Lösungen für die langfristige Nutzbarkeit von Forschungsdaten etabliert werden können. / In the course of increasingly digitally oriented research, questions of long-term preservation and subsequent use of research data are becoming more and more important, especially in the context of establishing a National Research Data Infrastructure (NFDI). Archives are firmly established players in the research infrastructure and already have many years of experience in data archiving. The online forum on 'Archives and Research Data' examines how the term 'research data' is understood in the archives. Based on this, fields of action for archives are identified and the role of archives in the research data management community is discussed. The aim is to exchange information and cooperate in a spirit of partnership with all stakeholders in the research data landscape so that efficient and pragmatic solutions for the long-term usability of research data can be established.
42

Regional Water Quality Data Viewer Tool: An Open-Source Tool to Support Research Data Access

Dolder, Danisa 07 June 2021 (has links)
Water quality data collection, storage, and access are difficult tasks, and significant work has gone into methods to store and disseminate these data. We present a tool that does not replace but extends and leverages existing systems to disseminate research data in a simple way. In the United States, the federal government maintains two systems to fill that role for hydrological data: the U.S. Geological Survey (USGS) National Water Information System (NWIS) and the U.S. Environmental Protection Agency (EPA) Storage and Retrieval System (STORET), since superseded by the Water Quality Portal (WQP). The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has developed the Hydrologic Information System (HIS) to standardize search and discovery of these data as well as other observational time series datasets. Additionally, CUAHSI developed and maintains HydroShare.org as a web portal for researchers to store and share hydrology data in a variety of formats, including spatial geographic information system data. We present the Tethys Platform-based Water Quality Data Viewer (WQDV) web application, which uses these systems to provide researchers and local monitoring organizations with a simple method to archive, view, analyze, and distribute water quality data. WQDV provides an archive for non-official or preliminary research data and access to data that have been collected but need to be distributed prior to review or inclusion in the state database. WQDV can also accept subsets of data downloaded from other sources, such as the EPA WQP. WQDV helps users understand what local data are available and how they relate to the data in larger databases. WQDV presents data in spatial (maps) and temporal (time series graphs) forms to help users analyze and potentially screen the data sources before export for additional analysis. WQDV provides a convenient method for interim data to be widely disseminated and easily accessible in the context of a subset of official data. We present WQDV using a case study of data from Utah Lake, Utah, United States of America.
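As a rough illustration of the kind of programmatic access to official repositories that a tool like WQDV builds on, the minimal sketch below queries the Water Quality Portal's REST service with Python. It is a hedged example: the endpoint, parameter names, and the chosen state and characteristic values are assumptions made for demonstration and are not taken from the thesis.

```python
# Minimal sketch: download a subset of water quality results from the Water Quality
# Portal (WQP). Endpoint and parameters are assumptions for illustration only; check
# the WQP web-services documentation for the authoritative interface.
import io

import pandas as pd
import requests

WQP_RESULT_URL = "https://www.waterqualitydata.us/data/Result/search"  # assumed endpoint

params = {
    "statecode": "US:49",                # Utah (assumed FIPS-style state code)
    "characteristicName": "Phosphorus",  # assumed characteristic of interest
    "startDateLo": "01-01-2015",         # assumed MM-DD-YYYY date format
    "startDateHi": "12-31-2020",
    "mimeType": "csv",
}

response = requests.get(WQP_RESULT_URL, params=params, timeout=120)
response.raise_for_status()

# Load the CSV payload into a DataFrame so it can be screened before further analysis.
results = pd.read_csv(io.StringIO(response.text), low_memory=False)
print(results.head())
```

A viewer such as WQDV would then layer maps and time-series plots on top of data retrieved this way or uploaded directly by local monitoring organizations.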
43

Readiness for research data management in the life sciences at the University of the Witwatersrand

Potgieter, Salomé 13 April 2023 (has links) (PDF)
Because of the importance of Research Data Management (RDM) in the life sciences, where vast amounts of research data in different complex formats are being produced, this study aimed to assess the state of RDM readiness in the life sciences at Wits to ascertain what support is needed with regard to RDM. In order to achieve this aim, the current RDM practices and needs of researchers, as well as the challenges they face, were investigated. The Jisc Research Data Lifecycle (Jisc, 2021a) was used to guide the literature review, frame data collection, analyse the data and inform some of the main findings and recommendations. A mixed methods approach and an explanatory sequential design were used to achieve the research objectives. For the quantitative phase of the research, an online questionnaire was used to collect data. As the total target population (282) was relatively small, a census was conducted. The questionnaire was administered using SurveyMonkey software. During the qualitative part of the research, semi-structured interviews were used to explain the quantitative results. Five participants were purposively sampled to take part in interviews. MS Excel was used to analyse the quantitative data, whilst the qualitative data were analysed through thematic analysis. The study showed that life sciences researchers at Wits have adopted many RDM practices and are increasingly becoming aware of the importance of the openness of data. However, they are dealing with similar RDM issues as their peers worldwide. Results highlighted challenges including, amongst others, the lack of an RDM policy as well as the lack of, or unawareness of, appropriate RDM training and support at Wits. As formal implementation of RDM still needs to take place at Wits, it is recommended that Wits put an RDM policy in place, followed by suitable RDM infrastructure and awareness-raising about current services.
44

Agreement between routine and research measurement of infant height and weight

Bryant, M., Santorelli, G., Fairley, L., Petherick, E.S., Bhopal, R.S., Lawlor, D.A., Tilling, K., Howe, L.D., Farrar, D., Cameron, N., Mohammed, Mohammed A., Wright, J., Born in Bradford Childhood Obesity Scientific Group January 2015 (has links)
In many countries, routine data relating to the growth of infants are collected as a means of tracking health and illness up to school age. These data have the potential to be used in research. For health monitoring and research, data should be accurate and reliable. This study aimed to determine the agreement between length/height and weight measurements from routine infant records and researcher-collected data. Methods: Height/length and weight at ages 6, 12 and 24 months from the longitudinal UK birth cohort (Born in Bradford; n=836–1280) were compared with routine data collected by health visitors within 2 months of the research data (n=104–573 for different comparisons). Data were age adjusted and compared using Bland-Altman plots. Results: There was agreement between data sources, albeit weaker for height than for weight. Routine data tended to underestimate length/height at 6 months (0.5 cm (95% CI −4.0 to 4.9)) and overestimate it at 12 (−0.3 cm (95% CI −0.5 to 4.0)) and 24 months (0.3 cm (95% CI −4.0 to 3.4)). Routine data slightly overestimated weight at all three ages (range −0.04 kg (95% CI −1.2 to 0.9) to −0.04 kg (95% CI −0.7 to 0.6)). Limits of agreement were wide, particularly for height. Differences were generally random, although routine data tended to underestimate length in taller infants and underestimate weight in lighter infants. Conclusions: Routine data can provide an accurate and feasible method of data collection for research, though wide limits of agreement between data sources may be observed. Differences could be due to methodological issues but may relate to variability in clinical practice. Continued provision of appropriate training and assessment is essential for health professionals responsible for collecting routine data. / Open Access article
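For readers unfamiliar with the agreement statistics quoted above, the following is a minimal sketch of how a Bland-Altman bias and 95% limits of agreement are typically computed; the measurement values in it are invented for illustration and are not the Born in Bradford data.

```python
# Bland-Altman sketch with invented length measurements (cm); not the study data.
import numpy as np

routine  = np.array([67.2, 70.1, 72.4, 68.8, 71.0, 69.3])  # e.g. health-visitor records
research = np.array([67.8, 69.9, 73.0, 69.5, 71.4, 69.0])  # e.g. researcher-collected

diff = routine - research
bias = diff.mean()      # mean difference between the two sources
sd = diff.std(ddof=1)   # sample standard deviation of the differences

# 95% limits of agreement: bias +/- 1.96 standard deviations of the differences.
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.2f} cm, 95% limits of agreement = ({lower:.2f}, {upper:.2f}) cm")
```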
45

Understanding the Knowledge, Skills, and Abilities (KSAs) of Data Professionals in United States Academic Libraries

Khan, Hammad Rauf 12 1900 (has links)
This study applies the knowledge, skills, and abilities (KSA) framework for eScience professionals to data service positions in academic libraries. Understanding the KSAs needed to provide data services is of crucial concern. The current study looks at the KSAs of data professionals working in United States academic libraries. An exploratory sequential mixed method design was adopted to discover the KSAs. The study was divided into two phases: a qualitative content analysis of 260 job advertisements for data professionals in Phase 1, and distribution of a self-administered online survey to data professionals working in academic libraries' research data services (RDS) in Phase 2. The KSAs discovered through the content analysis of the 260 job ads and the survey responses from 167 data professionals were analyzed separately, and then a Spearman rank-order correlation was conducted in order to triangulate the data and compare results. The results provide evidence on what hiring managers seek through job advertisements in terms of KSAs and which KSAs data professionals find to be important for working in RDS. The Spearman rank-order correlation found strong agreement between the job advertisement KSAs and data professionals' perceptions of the KSAs.
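As a small illustration of the triangulation step described above, the sketch below computes a Spearman rank-order correlation between two rankings of the same set of KSAs; the rank values are invented for demonstration and do not come from the study.

```python
# Sketch of comparing two rankings of the same KSAs with a Spearman rank-order
# correlation. The rankings below are invented for illustration.
from scipy.stats import spearmanr

job_ad_rank = [1, 2, 3, 4, 5, 6, 7, 8]   # rank of each KSA in job advertisements
survey_rank = [2, 1, 3, 5, 4, 6, 8, 7]   # rank of the same KSA in the practitioner survey

rho, p_value = spearmanr(job_ad_rank, survey_rank)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # rho near 1 indicates strong agreement
```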
46

Semantic representation of provenance and contextual information in scientific research

Brahaj, Armand 15 November 2016 (has links)
Die Computer- und Informationstechnologie ist eine der größten Errungenschaften des letzten Jahrhunderts -- eine Revolution, welche die Art und Weise beeinflusst, auf die wir im täglichen Leben auf technische und soziale Probleme reagieren. Obwohl diese Technologien bereits Forschungsaktivitäten an sich beeinflussen, so ist zu erwarten, dass sie auch einen Einfluss auf das Publizieren und Teilen von Forschungsergebnissen haben werden. Bisher wurden in wissenschaftlichen Publikationen nur in geringem Maße Daten beigefügt. Forschungsförderungseinrichtungen drängen zu konkreten Lösungen zum Verbreiten, Teilen und Wiederverwenden von Forschungsergebnissen. Berichte wie “Riding the Wave - How Europe can gain from the rising tide of scientific data” der High Level Expert Group on Scientific Data der Europäischen Kommission zeichnen eine Vision, bei der die Herausforderungen einer Diversität an Datenformaten, Menschen und Gemeinschaften durch die Anwendung technischer, semantischer und sozialer Eigenschaften der Interoperabilität vermieden werden. Diese Forschung adressiert derartige Herausforderungen aus einer technischen Perspektive. Fokus dieser Arbeit ist die Exploration eines neuartigen Ansatzes zur Unterstützung der Kuration (Sichtung und Korrektur) von Forschungsdaten mittels der Entwicklung einer Methodologie und mittels der Definition eines automatischen Datenkurationsprozesses, in welchem Daten auf einfache Weise annotiert werden können. Ein Beitrag besteht in einem formalen Modell (COSI), welches die Integration großer Mengen an Metadaten erlaubt, welche als logische Konzepte behandelt werden können anstatt nur als Literale. Diese Konzepte werden in einer Ontologie definiert, welche, unter anderem, Inferenzen und Schlussfolgerungen ermöglicht. Der zweite Beitrag dieser Arbeit besteht in einer pragmatischen Lösung, die es erlaubt, Metadaten on-the-fly zu annotieren. / Computational and information technology is one of the biggest advancements of the last century, a revolution that is influencing the way we approach social and technical problems in our day-to-day life. While these technologies have already influenced research activity per se, it is to be expected that these innovations will significantly influence the publishing and sharing of scientific results as well. So far, scientific publications have relied on limited result data attached inline in research paper publications. Institutions supporting research are pushing for concrete solutions that allow the dissemination, sharing and reuse of research results. Reports such as “Riding the Wave - How Europe can gain from the rising tide of scientific data” of the High Level Expert Group on Scientific Data, European Commission (High Level Expert Group on Scientific Data, October 2010) present a vision where the challenges of diverse data formats, people and communities are avoided through the application of technical, semantic and social features of interoperability. This research is an effort to address similar concerns from a technical perspective. The focus of this research is the exploration of a novel approach to supporting research data curation by developing a method and defining an automated data curation process in which data can be easily annotated. As a contribution, this work offers a formal model (COSI) that allows integration of plentiful metadata that can be treated as logical concepts and not merely as literals. These concepts are defined in an ontology that allows, among other operations, inference and reasoning. The second contribution of this work is a pragmatic solution that facilitates annotation of metadata on the fly. This solution is referred to as sheer curation and shows how data can be annotated (based on COSI) and published while investigations are executed.
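To make the contrast between metadata stored as plain literals and metadata treated as logical concepts more concrete, here is a brief sketch using the Python rdflib library; the namespace, class and property names are hypothetical placeholders, not the actual COSI ontology from the thesis.

```python
# Sketch: annotate provenance metadata as ontology concepts rather than bare literals.
# The namespace and terms below are hypothetical placeholders, not the real COSI model.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

COSI = Namespace("http://example.org/cosi#")  # hypothetical namespace
g = Graph()
g.bind("cosi", COSI)

dataset = URIRef("http://example.org/data/experiment-42")

# Literal-only annotation: the instrument is just a string; nothing can be inferred from it.
g.add((dataset, COSI.instrumentLabel, Literal("mass spectrometer")))

# Concept-based annotation: the instrument is a typed resource, so an ontology can
# support inference (class hierarchies, shared instruments across datasets, etc.).
instrument = URIRef("http://example.org/instruments/ms-01")
g.add((instrument, RDF.type, COSI.Instrument))
g.add((dataset, COSI.generatedWith, instrument))

print(g.serialize(format="turtle"))
```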
47

Ein längeres Leben für Deine Daten! / Let your data live longer!

Schäfer, Felix 20 April 2016 (has links) (PDF)
Data life cycle and research data management plans are just two of many key terms used in the present discussion about digital research data. But what do they mean - on the one hand for an individual scholar and on the other hand for a digital infrastructure like IANUS? The presentation will try to explain some of these terms and will show how IANUS is dealing with them in order to enhance the reusability of unique data. The presentation starts with an overview of the different disciplines, research methods and types of data which together characterise modern research on ancient cultures. Digital data is produced in nearly all scientific processes and has gained a dominant role, as the stakeholder analysis and the evaluation of test data collections carried out by IANUS in 2013 clearly demonstrate. Nevertheless, in spite of their high relevance, digital files and folders are at risk with regard to their accessibility and reusability in the near and distant future. Not only do storage devices, software applications and file formats become slowly but steadily obsolete; the relevant information (i.e. the metadata) needed to understand all the produced bits and bytes will also get lost over the years. Therefore, pressing questions are how we can prevent – or at least reduce – a foreseeable loss of digital information, and what we will do with all the results that do not find their way into publications. As a discipline-specific national center for research data in archaeology and ancient studies, IANUS tries to answer these questions and to establish different services in this context. The slides give an overview of the center's structure, its state of development and its planned targets. The primary service (scheduled for autumn 2016) will be the long-term preservation, curation and publication of digital research data to ensure its reusability, and it will be open to any person and institution. One already existing offering is the “IT-Empfehlungen für den nachhaltigen Umgang mit digitalen Daten in den Altertumswissenschaften”, which provides information and advice about data management, file formats and project documentation. Furthermore, it offers instructions on how to deposit data collections for archiving and dissemination. Here, external experts are cordially invited to contribute and write missing recommendations as new authors.
48

Computational Methods for Discovering and Analyzing Causal Relationships in Health Data

Liang, Yiheng 08 1900 (has links)
Publicly available datasets in health science are often large and observational, in contrast to experimental datasets, where a small number of observations are collected in controlled experiments. The causal relationships among variables in an observational dataset are yet to be determined. However, there is significant interest in health science in discovering and analyzing causal relationships from health data, since identified causal relationships can greatly help medical professionals prevent diseases or mitigate their negative effects. Recent advances in Computer Science, particularly in Bayesian networks, have initiated a renewed interest in causality research. Causal relationships can possibly be discovered by learning network structures from data. However, the number of candidate graphs grows at a more-than-exponential rate with the number of variables. Exact learning to obtain the optimal structure is thus computationally infeasible in practice. As a result, heuristic approaches are imperative to alleviate the computational difficulty. This research provides effective and efficient learning tools for local causal discovery and novel methods of learning causal structures in combination with background knowledge. Specifically, in the direction of constraint-based structural learning, polynomial-time algorithms for constructing causal structures are designed using first-order conditional independence. Algorithms for efficiently discovering non-causal factors are developed and proved correct. In addition, when the background knowledge is partially known, methods of graph decomposition are provided so as to reduce the number of conditioned variables. Experiments on both synthetic data and real epidemiological data indicate that the provided methods are applicable to large-scale datasets and scalable for causal analysis in health data. Following the research methods and experiments, this dissertation discusses the reliability of causal discoveries in computational health science research, complexity, and implications for health science research.
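The constraint-based learning described above rests on conditional independence tests. The sketch below shows one common way such a first-order test can be carried out, via partial correlation and a Fisher z-test, on synthetic data; it illustrates the general technique under stated assumptions and is not the specific algorithms developed in the dissertation.

```python
# Sketch of a first-order conditional-independence test of the kind used in
# constraint-based causal structure learning: is X independent of Y given Z?
# Synthetic data only; real algorithms combine many such tests to build a causal graph.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)   # X depends on Z
y = 0.8 * z + rng.normal(size=n)   # Y depends on Z but not directly on X

def partial_corr(x, y, z):
    """Correlation of X and Y after linearly regressing Z out of both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

r = partial_corr(x, y, z)
# Fisher z-transform gives an approximate test with one conditioning variable.
stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - 1 - 3)
p_value = 2 * (1 - norm.cdf(abs(stat)))
print(f"partial correlation = {r:.3f}, p = {p_value:.3f}")
# A large p-value is consistent with X and Y being independent given Z,
# so a constraint-based learner would not draw an edge between X and Y.
```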
49

Finite memory estimation and control of finite probabilistic systems.

Platzman, L. K. (Loren Kerry), 1951- January 1977 (has links)
Bibliography : leaves 196-200. / Thesis (Ph. D.)--Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science, 1977. / Microfiche copy available in the Institute Archives and Barker Engineering Library. / by Loren Kerry Platzman. / Ph.D.
50

Statistik der öffentlichen Unternehmen in Deutschland : die Datenbasis / Statistics of public enterprises in Germany: the data basis

Dietrich, Irina, Strohe, Hans Gerhard January 2011 (has links)
Öffentliche Unternehmen werden in Adäquation zum wirtschaftlichen und politischen Verständnis an Hand des Finanz- und Personalstatistikgesetzes operationalisierbar definiert und sowohl gegenüber öffentlichen Behörden als auch gegenüber privaten Unternehmen abgegrenzt. Dabei wird gezeigt, dass keine Deckungsgleichheit, aber eine stückweise Überlappung mit dem Sektor Staat besteht. Dadurch gewinnt ein Teil der öffentlichen Unternehmen Bedeutung für die Volkswirtschaftliche Gesamtrechnung, insbesondere für den öffentlichen Schuldenstand und damit für die Konvergenzkriterien im Rahmen der Wirtschafts- und Währungsunion. Die amtliche Statistik gewinnt die Daten für die Statistik öffentlicher Unternehmen in Totalerhebung aus den Jahresabschlüssen dieser Unternehmen einschließlich ihrer Gewinn- und Verlustrechnung. Die Statistik öffentlicher Unternehmen übertrifft damit in ihrer Ausführlichkeit und Tiefe die meisten anderen Fachstatistiken. Dem steht der Nachteil der relativ späten Verfügbarkeit gegenüber. Der Wissenschaft steht die Statistik in Form einer formal anonymisierten Datei an Wissenschaftlerarbeitsplätzen in den Forschungsdatenzentren der Statistischen Ämter des Bundes und der Länder zur Verfügung. Der Anonymisierungsprozess bedeutet eine weitere Verzögerung der Verfügbarkeit der Daten und steht zusammen mit strengen Geheimhaltungsvorschriften in den Forschungsdatenzentren im Widerspruch zur gebotenen Transparenz und der vorgeschriebenen Offenlegung der Bilanzen im öffentlichen Sektor. / In line with the economic and political understanding of the term, public enterprises are given an operationalisable definition on the basis of the German Finance and Personnel Statistics Act and are delimited both from public authorities and from private enterprises. It is shown that they are not congruent with the general government sector but overlap with it in part. As a result, a subset of public enterprises becomes relevant for the national accounts, in particular for the level of public debt and thus for the convergence criteria within the framework of the Economic and Monetary Union. Official statistics obtain the data for the statistics of public enterprises as a full census from the annual financial statements of these enterprises, including their profit and loss accounts. The statistics of public enterprises thus exceed most other specialised statistics in detail and depth. This is offset by the disadvantage of relatively late availability. For research purposes, the statistics are available as a formally anonymised file at researcher workstations in the research data centres of the Statistical Offices of the Federation and the Länder. The anonymisation process means a further delay in the availability of the data and, together with strict confidentiality rules in the research data centres, stands in contrast to the transparency required and the mandatory disclosure of balance sheets in the public sector.
