351

Interactive visualization tools for spatial data & metadata

Antle, Alissa N. 11 1900
In recent years, the focus of cartographic research has shifted from the cartographic communication paradigm to the scientific visualization paradigm. With this shift has come a resurgence of cognitive research that is invaluable in guiding the design and evaluation of effective cartographic visualization tools. The design of new tools that allow effective visual exploration of spatial data and data quality information in a resource management setting is critical if decision-makers and policy setters are to make accurate and confident decisions that will have a positive long-term impact on the environment. The research presented in this dissertation integrates the results of previous research in spatial cognition, visualization of spatial information and online map use in order to explore the design, development and experimental testing of four interactive visualization tools that can be used to simultaneously explore spatial data and data quality. Two are traditional online tools (side-by-side and sequenced maps) and two are newly developed tools (an interactive "merger" bivariate map and a hybrid of the merger map and the hypermap). The key research question is: are interactive visualization tools, such as interactive bivariate maps and hypermaps, more effective for communicating spatial information than less interactive tools such as sequenced maps? A methodology was developed in which subjects used the visualization tools to explore a forest species composition map and an associated data quality map in order to perform a range of map-use tasks. The tasks focused on an imaginary land-use conflict in a small region of mixed boreal forest in Northern Alberta. Subject responses in terms of performance (accuracy and confidence) and preference were recorded and analyzed. Results show that theory-based, well-designed interactive tools facilitate improved performance across all tasks, but that there is an optimal matching between specific tasks and tools. The results are generalized into practical guidelines for software developers. The use of confidence as a measure of map-use effectiveness is verified. In this experimental setting, individual differences (in preference, ability, gender, etc.) did not significantly affect performance. / Arts, Faculty of / Geography, Department of / Graduate
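A rough illustration of the "merger" bivariate map idea (not Antle's implementation): cross-classify a data raster with its data-quality raster so both can be read from a single layer. A minimal Python/NumPy sketch with toy 3x3 inputs:

```python
import numpy as np

# Toy rasters: species-composition class (1-3) and data-quality class (1-3)
# for a small grid; real inputs would come from GIS layers.
species = np.array([[1, 2, 3],
                    [2, 2, 1],
                    [3, 1, 2]])
quality = np.array([[3, 3, 1],
                    [2, 1, 2],
                    [1, 3, 3]])

# A bivariate "merged" map assigns each cell one of 3 x 3 = 9 combined
# classes, so data and data quality appear in one symbolized layer.
bivariate = (species - 1) * 3 + (quality - 1)  # class codes 0..8
print(bivariate)

# A 9-entry legend (e.g., hue for species, saturation for quality) would
# then be attached to `bivariate` in a mapping package.
```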
352

Requirements of a web-based geographic information system clearinghouse

Mearns, Martie Alèt 12 September 2012
M.Inf. / Users of geographic information systems (GIS) are often challenged by the identification, location and overall accessibility of the digital data used in GIS applications. Selecting appropriate data from the large volumes available, gaining access to that data and establishing distribution of data from one central source are necessary tasks for improving the dissemination of GIS data. They are difficult tasks, however, because many users are unaware of the full range of available digital GIS data. A mechanism that could assist in improving access to digital GIS data is the Web-based GIS clearinghouse. This study was initiated to determine the requirements of GIS clearinghouses for optimum accessibility to digital GIS data. A literature study was conducted to investigate the nature of the data used in GIS clearinghouses, current trends in GIS data on the Web and the unique characteristics of the Web that can increase accessibility to digital GIS data. A selection of clearinghouses was then evaluated to determine variables that could be translated into criteria for a model for the evaluation of GIS clearinghouses. This model can act as a working document or check-list for users evaluating GIS clearinghouses, and for designers creating new or improving existing GIS clearinghouses.
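The check-list model lends itself to a simple weighted scorer. A hedged Python sketch of that idea, with all criterion names and weights hypothetical (the actual criteria are derived in the thesis):

```python
# Hypothetical criteria in the spirit of the thesis's check-list model.
CRITERIA = {
    "metadata_standard_compliance": 3,  # weight reflects assumed importance
    "search_by_spatial_extent":     3,
    "data_format_documentation":    2,
    "download_or_order_mechanism":  2,
    "contact_and_update_info":      1,
}

def score(clearinghouse: dict) -> float:
    """Weighted share of satisfied criteria, between 0.0 and 1.0."""
    total = sum(CRITERIA.values())
    met = sum(w for name, w in CRITERIA.items() if clearinghouse.get(name))
    return met / total

example = {
    "metadata_standard_compliance": True,
    "search_by_spatial_extent": True,
    "download_or_order_mechanism": True,
}
print(f"score: {score(example):.2f}")  # 8 of 11 weighted points -> 0.73
```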
353

Analýza problémov dlhodobej archivácie dokumentov a prístupy k ich riešeniu / Analysis of current archiving problems and approach to their solutions

Klimko, Jozef January 2010
The thesis deals with the long-term preservation of documents and the issues related to it. At the beginning of the thesis, terms such as paper document and digital document are defined, along with the lifecycle a document has to go through. The next parts are devoted to selected standards and recommendations, metadata, the most widely used approaches in this area (migration and emulation) and the most used technologies; the last part of this section is dedicated to security measures. The survey deals with still-active projects that started at the beginning of last year and that show new trends and possibilities in this area. From twelve projects, four of particular interest (ARCOMEM, ENSURE, TIMBUS, SCAPE) were selected; these are subsequently described from different perspectives, compared and evaluated.
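Migration, one of the two approaches surveyed, converts documents to current formats while recording what was done. A minimal Python sketch of that idea, with a placeholder converter and an event record loosely modelled on PREMIS-style preservation metadata (an assumption, not taken from the thesis):

```python
import hashlib, json, shutil
from datetime import datetime, timezone
from pathlib import Path

def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def migrate(source: Path, target: Path, convert) -> dict:
    """Run one format migration and return a preservation event record."""
    before = checksum(source)
    convert(source, target)  # e.g., an external DOC -> PDF/A converter
    return {
        "eventType": "migration",
        "eventDateTime": datetime.now(timezone.utc).isoformat(),
        "source": {"path": str(source), "sha256": before},
        "outcome": {"path": str(target), "sha256": checksum(target)},
    }

# Identity "conversion" keeps the sketch runnable without external tools.
Path("report.txt").write_text("original document content")
event = migrate(Path("report.txt"), Path("report_migrated.txt"),
                lambda src, dst: shutil.copy(src, dst))
print(json.dumps(event, indent=2))
```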
354

A Netcentric Scientific Research Repository

Harrington, Brian 12 1900
The Internet and networks in general have become essential tools for disseminating information. Search engines have become the predominant means of finding information on the Web and all other data repositories, including local resources. Domain scientists regularly acquire and analyze images generated by equipment such as microscopes and cameras, resulting in complex image files that need to be managed in a convenient manner. This type of integrated environment has been recently termed a netcentric scientific research repository. I developed a number of data manipulation tools that allow researchers to manage their information more effectively in a netcentric environment. The specific contributions are: (1) A unique interface for management of data including files and relational databases. A wrapper for relational databases was developed so that the data can be indexed and searched using traditional search engines. This approach allows data in databases to be searched with the same interface as other data. Furthermore, this approach makes it easier for scientists to work with their data if they are not familiar with SQL. (2) A Web services based architecture for integrating analysis operations into a repository. This technique allows the system to leverage the large number of existing tools by wrapping them with a Web service and registering the service with the repository. Metadata associated with Web services was enhanced to allow this feature to be included. In addition, an improved binary to text encoding scheme was developed to reduce the size overhead for sending large scientific data files via XML messages used in Web services. (3) Integrated image analysis operations with SQL. This technique allows for images to be stored and managed conveniently in a relational database. SQL supplemented with map algebra operations is used to select and perform operations on sets of images.
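The dissertation's improved encoding scheme is not reproduced here, but the overhead it targets is easy to demonstrate: Base64, the usual way to embed binary data in XML messages, inflates payloads by about a third. A short Python comparison against Base85 (denser, though its alphabet would need escaping inside XML):

```python
import base64, os

payload = os.urandom(1_000_000)  # stand-in for a large scientific image file

encodings = {
    "base64": base64.b64encode(payload),  # ~33% larger than the input
    "base85": base64.b85encode(payload),  # ~25% larger, but not XML-safe as-is
}
for name, encoded in encodings.items():
    print(f"{name}: {len(encoded) / len(payload):.2%} of original size")
```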
355

Organization is sharing: from eScience to personal information management / Organização é compartilhamento: de eScience para gestão de informação pessoal

Senra, Rodrigo Dias Arruda, 1974- 12 October 2012
Advisor: Claudia Bauzer Medeiros / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Information sharing has always been a key issue in any kind of joint effort. Paradoxically, with the data deluge, the more information available, the harder it is to design and implement solutions that effectively foster such sharing. This thesis analyzes distinct aspects of sharing - from eScience-related environments to personal information. As a result of this analysis, it provides answers to some of the problems encountered, along three axes. The first, SciFrame, is a specific framework that describes systems or processes involving scientific digital data manipulation, serving as a descriptive pattern to help system comparison. The adoption of SciFrame to describe distinct scientific virtual environments allows identifying commonalities and points for interoperation. The second contribution addresses the specific problem of communication between arbitrary systems and services provided by distinct database platforms, via the use of so-called database descriptors or DBDs. These descriptors help decouple applications from the services, thereby enhancing sharing across applications and databases. The third contribution, Organographs, provides means to deal with multifaceted information organization. It addresses problems of sharing personal information by exploiting the way we organize such information. Here, rather than trying to provide means to share the information itself, the unit of sharing is the organization of the information. By designing and sharing organographs, distinct groups provide each other dynamic, reconfigurable views of how information is organized, thereby promoting interoperability and reuse. Organographs are an innovative approach to hierarchical data management. These three contributions are centered on the basic idea of building and sharing hierarchical organizations. Part of these contributions was validated by case studies and, in the case of organographs, by an actual implementation. / Doctorate / Computer Science / Doctor in Computer Science
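Organographs are described above only conceptually. A toy Python sketch of the core idea, that the hierarchy rather than the documents is the unit of sharing, with the tag-query mechanism and all names purely hypothetical:

```python
import json

# A toy "organograph": a shareable hierarchy whose leaves carry queries
# that select items from a flat local collection.
organograph = {
    "Research": {
        "Field data": {"query": ["sensor", "2012"]},
        "Papers":     {"query": ["draft"]},
    },
    "Teaching": {"query": ["course"]},
}

documents = [
    {"name": "lake_readings.csv", "tags": ["sensor", "2012"]},
    {"name": "thesis_ch3.tex",    "tags": ["draft"]},
    {"name": "syllabus.pdf",      "tags": ["course"]},
]

def materialize(node, docs):
    """Resolve a shared hierarchy into a view over local documents."""
    if "query" in node:
        wanted = set(node["query"])
        return [d["name"] for d in docs if wanted <= set(d["tags"])]
    return {label: materialize(child, docs) for label, child in node.items()}

# Two groups sharing the same organograph get the same *organization*,
# each materialized over their own documents.
print(json.dumps(materialize(organograph, documents), indent=2))
```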
356

[en] ADAPTIVE ELECTRONIC GUIDE APPLICATION BASED ON GINGA-NCL / [pt] APLICAÇÃO ADAPTATIVA DE GUIA ELETRÔNICO UTILIZANDO O GINGA-NCL

Felipe Nogueira Barbara de Oliveira 14 February 2011
[en] One of the consequences of the digitalization of TV systems is the increase in the number of available channels and, with it, in the number of services that can be offered to viewers. With so much content available, applications are needed to help viewers find what they want to watch. These applications are called EPGs (Electronic Program Guides). Most work related to EPGs focuses either on the development of recommendation systems or on the design of EPG user interfaces. A recommendation system integrated with an EPG adapts the information to be presented based on the viewer's preferences; the EPG application itself is responsible for gathering information and generating the guide. Usually this EPG application can be changed only through sporadic updates. As far as the author knows, no existing work supports adapting the application at presentation time, that is, changing the algorithms used without interrupting the EPG presentation. This dissertation discusses the importance of real-time adaptation and presents an EPG implementation based on the support offered by Ginga-NCL. The application's modular architecture supports dynamic adaptation through a meta-service responsible for these tasks.
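The dissertation realizes adaptation through a Ginga-NCL meta-service; the underlying idea, replacing the guide-generation algorithm without interrupting the presentation, can be sketched in Python as a hot-swappable strategy (an illustration, not the actual implementation):

```python
import threading

class EPG:
    """Keeps rendering while the generation algorithm is hot-swapped."""
    def __init__(self, strategy):
        self._strategy = strategy
        self._lock = threading.Lock()

    def set_strategy(self, strategy):  # what the meta-service would invoke
        with self._lock:
            self._strategy = strategy

    def render(self, programs):
        with self._lock:
            return self._strategy(programs)

by_time   = lambda ps: sorted(ps, key=lambda p: p["start"])
by_rating = lambda ps: sorted(ps, key=lambda p: -p["rating"])

programs = [{"title": "News", "start": 20, "rating": 3.1},
            {"title": "Film", "start": 22, "rating": 4.7}]
guide = EPG(by_time)
print([p["title"] for p in guide.render(programs)])  # presentation order
guide.set_strategy(by_rating)   # adapted without stopping the presentation
print([p["title"] for p in guide.render(programs)])  # new algorithm in use
```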
357

Modelo de gestión documental para empresas que brinden servicios informáticos a proyectos de investigación académica / Enterprise content management model for companies that provide computer services to academic research projects

Huanachin Yancce, Edith Fiorela; Chaffo Vega, Renzo Mauricio Renato 28 October 2019
Organizations that provide computer services for the development of academic research projects are not using Enterprise Content Management (ECM), despite the complexity of the document management associated with this activity. ECM makes it possible to store, deliver and manage content more agilely and thus provide better service. In this paper, a document management model is proposed that presents the phases for implementing this technology and thereby gaining new capabilities in the organization. To validate the model, a tool was implemented in two virtual companies, analyzing the response times to service requests before and after the implementation. In addition, a survey using a Likert scale was conducted to measure stakeholder satisfaction. / Thesis
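A minimal sketch of the validation step, mean response times before and after the implementation plus mean Likert satisfaction, using purely illustrative numbers (the paper's measurements are not reproduced here):

```python
from statistics import mean

# Illustrative figures only; the study measured real service requests in
# two virtual companies before and after introducing the ECM tool.
before_hours = [52, 47, 61, 39, 55]
after_hours  = [18, 22, 15, 25, 20]
likert       = [4, 5, 4, 4, 5, 3, 4]  # 1-5 stakeholder satisfaction ratings

improvement = 1 - mean(after_hours) / mean(before_hours)
print(f"mean response time: {mean(before_hours):.1f}h -> "
      f"{mean(after_hours):.1f}h ({improvement:.0%} faster)")
print(f"mean satisfaction: {mean(likert):.1f}/5")
```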
358

Enhancing clinical data management and streamlining organic phase DNA isolation protocol in the Pre-Cancer Genomic Atlas cohort

Potter, Austin 23 November 2020
In the age of big data, thoughtful management and harmonization of clinical metadata and sample processing in translational research are critical for effective data generation, integration, and analysis. These steps enable cutting-edge discoveries and strengthen the overall conclusions that may come from complex multi-omic translational research studies. The focus of my thesis has been on harmonizing the clinical metadata collected as part of the lung Pre-Cancer Genome Atlas (PCGA), in addition to expanding the use of banked samples. The lung PCGA study included longitudinally collected samples and data from participants in a high-risk lung cancer-screening program at Roswell Park Comprehensive Cancer Center (Roswell) in Buffalo, NY. Clinical metadata for this study was collected over many years at Roswell, and subsets of this data were shared with Boston University Medical Campus (BUMC) for the lung PCGA study. During the study, additional clinical metadata was acquired and shared with BUMC to complement the analysis of genomic profiling of DNA and RNA, as well as protein staining of tissue. With regard to the PCGA study, my thesis has two aims: 1) curate the clinical metadata received from Roswell during the PCGA study to enhance both its accessibility to current investigators and collaborators and the reproducibility of results; 2) test methods to isolate DNA from remnant samples to expand the use of banked samples for genomic profiling. We hypothesized that accomplishing these goals would allow for increased use of the clinical metadata, enhanced reproducibility of the results, and expansion of the samples available for DNA sequencing. The clinical metadata received from Roswell was consolidated into a single source that is continually updated and available for export for future research use. These metadata management efforts led to increased use among the members of our laboratory and collaborators working with the lung PCGA cohort. Additionally, the curation of metadata has allowed for improved analysis, reproducibility, and increased awareness of the current inventory of remaining samples. During the process of lung PCGA clinical metadata curation, a physical inventory of the remaining samples revealed remnant organic-phase samples. Therefore, in addition to my work associated with clinical metadata, the second goal of my thesis focuses on DNA isolation from remnant banked biological samples from the lung PCGA cohort. In the first phase of the lung PCGA, RNA was to be isolated exclusively from fresh-frozen endobronchial biopsy samples, and formalin-fixed paraffin-embedded (FFPE) biopsy samples were to be used for DNA isolation. DNA isolation from the FFPE samples was unsuccessful. However, from the RNA isolation, the remaining organic phase was banked and could potentially serve as a source of DNA. The organic phase of this isolation contained cell debris, proteins, and, as previously mentioned, DNA. We hypothesized that current protocols for organic-phase DNA isolation might yield adequate quantities of DNA for genomic profiling. Using immortalized cell culture lines to establish the methodology, numerous organic-phase DNA isolation protocols were tested. During subsequent validation using the remaining organic-phase samples from the lung PCGA cohort, the protocol yielded varied results, suggesting that further optimization to increase DNA purity is required. The ability to isolate DNA from these valuable samples will enhance progress in the lung PCGA study. The aims of this thesis, involving the curation of clinical metadata and the generation of additional DNA samples for DNA profiling, have had a significant impact on the PCGA study and future expansions of this work.
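Aim 1, consolidating metadata subsets received over time into a single continually updated, exportable source, amounts to repeated keyed merges. A hedged pandas sketch with hypothetical column names (not the actual PCGA schema):

```python
import pandas as pd

# Hypothetical metadata subsets received at different times, keyed on a
# shared sample identifier.
batch_2017 = pd.DataFrame({"sample_id": ["S01", "S02", "S03"],
                           "histology": ["normal", "dysplasia", "normal"]})
batch_2019 = pd.DataFrame({"sample_id": ["S02", "S03", "S04"],
                           "smoking_pack_years": [30, 45, 12]})

# Outer-merge into one master table; each newly received batch is merged
# the same way, and the result is exported for investigators.
master = batch_2017.merge(batch_2019, on="sample_id", how="outer")
master.to_csv("clinical_metadata_master.csv", index=False)
print(master)
```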
359

Systém pro správu sbírek fotografií / System for Management of Photographic Collections

Čermák, Pavel January 2014
This thesis deals with managing digital photos by means of the metadata they contain. It describes the structure of the JFIF, TIFF and RAW image formats and of the EXIF format for storing metadata in photos. The next part describes the design and implementation of a simple photo management application whose main functionality is bulk editing of the EXIF metadata in photos. The thesis concludes with a validation of the results and a discussion of options for further extension.
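The thesis application is custom, but bulk EXIF editing of the kind it provides can be sketched with the piexif library, here setting the Artist tag across a folder of JPEGs (an illustration, not the thesis code):

```python
import glob
import piexif

# Bulk-set the Artist tag on every JPEG in a folder.
for path in glob.glob("photos/*.jpg"):
    exif_dict = piexif.load(path)                   # EXIF IFDs as nested dicts
    exif_dict["0th"][piexif.ImageIFD.Artist] = b"Pavel Cermak"
    piexif.insert(piexif.dump(exif_dict), path)     # write back into the file
    print("updated", path)
```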
360

OntoStudyEdit: a new approach for ontology-based representation and management of metadata in clinical and epidemiological research

Uciteli, Alexandr; Herre, Heinrich January 2015
Background: The specification of metadata in clinical and epidemiological study projects involves significant expense. The validity and quality of the collected data depend heavily on a precise and semantically correct representation of their metadata. The various research organizations that plan and coordinate studies specify the required metadata differently, depending on many conditions, e.g., on the study management software used. The latter does not always meet the needs of a particular research organization, e.g., with respect to the relevant metadata attributes and structuring possibilities. Methods: The objective of the research set forth in this paper is the development of a new approach for ontology-based representation and management of metadata. The basic features of this approach are demonstrated by the software tool OntoStudyEdit (OSE). The OSE is designed and developed according to the three-ontology method. This method for developing software is based on the interactions of three different kinds of ontologies: a task ontology, a domain ontology and a top-level ontology. Results: The OSE can be easily adapted to different requirements, and it supports an ontologically founded representation and efficient management of metadata. Metadata specifications can be imported from various sources; they can be edited with the OSE, and they can be exported to several formats used, e.g., by different study management software. Conclusions: Advantages of this approach are the adaptability of the OSE by integrating suitable domain ontologies, the ontological specification of mappings between the import/export formats and the domain ontology, the specification of study metadata in a uniform manner and its reuse in different research projects, and intuitive data entry for non-expert users.
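The OSE's internal model is not specified in this abstract; a toy Python sketch of the general idea, metadata items typed by domain-ontology classes and exported toward one of several target formats, with all IRIs, names and the export mapping hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class MetadataItem:
    """A study metadata item typed by a domain-ontology class."""
    name: str
    ontology_class: str  # IRI of a class in an integrated domain ontology
    attributes: dict = field(default_factory=dict)

items = [
    MetadataItem("systolic_bp", "http://example.org/onto#BloodPressure",
                 {"unit": "mmHg", "datatype": "integer"}),
    MetadataItem("smoker", "http://example.org/onto#SmokingStatus",
                 {"datatype": "boolean"}),
]

def export_csv_schema(items):
    """One of several export mappings, e.g., toward study-software formats."""
    return "\n".join(f'{i.name},{i.attributes.get("datatype", "string")}'
                     for i in items)

print(export_csv_schema(items))
```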
