About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Modelagem e implementação de banco de dados clínicos e moleculares de pacientes com câncer e seu uso para identificação de marcadores em câncer de pâncreas / Database design and implementation of clinical and molecular data of cancer patients and its application for biomarker discovery in pancreatic cancer

Bertoldi, Ester Risério Matos 20 October 2017
Pancreatic ductal adenocarcinoma (PDAC) is a neoplasm that is difficult to diagnose early, and its treatment has not advanced substantially over the last decade. Next-generation sequencing (NGS) technologies may bring important advances in the search for new diagnostic markers for PDAC and may also contribute to the development of personalised therapies. Databases are powerful tools for the integration, standardization and storage of large volumes of information. The objective of this study was to design and implement a relational database (CaRDIGAn - Cancer Relational Database for Integration and Genomic Analysis) that integrates publicly available data from NGS experiments on samples of different histopathological types of PDAC with data generated by our group at IQ-USP, facilitating comparison between the two sources. The functionality of CaRDIGAn was demonstrated by retrieving clinical and gene expression data for patients from lists of candidate genes, either associated with mutations in the KRAS oncogene or differentially expressed in tumours in RNA-seq data generated by our group. The retrieved data were used for survival curve analysis, which identified 11 genes with prognostic potential in pancreatic cancer, illustrating the tool's potential to facilitate the analysis, organization and prioritization of new biomarker targets for the molecular diagnosis of PDAC.
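The listing does not reproduce the CaRDIGAn schema itself. As a rough sketch of the kind of relational design the abstract describes (patients linked to gene expression, queried by candidate-gene lists), the following Python/sqlite3 example uses entirely hypothetical table and column names:

```python
# Minimal sketch of the kind of relational schema the abstract describes
# (patients plus gene expression), NOT the actual CaRDIGAn schema.
# All table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id    TEXT PRIMARY KEY,
    vital_status  TEXT,              -- 'alive' or 'deceased'
    survival_days INTEGER,           -- follow-up time
    kras_mutated  INTEGER            -- 1 if a KRAS mutation was reported
);
CREATE TABLE expression (
    patient_id TEXT REFERENCES patient(patient_id),
    gene       TEXT,
    tpm        REAL                  -- normalized expression value
);
""")

# Toy rows standing in for public NGS cohorts plus in-house RNA-seq data.
conn.executemany("INSERT INTO patient VALUES (?,?,?,?)",
                 [("P1", "deceased", 210, 1), ("P2", "alive", 730, 0)])
conn.executemany("INSERT INTO expression VALUES (?,?,?)",
                 [("P1", "GENE_A", 12.4), ("P2", "GENE_A", 3.1)])

# Retrieve expression plus survival data for a candidate-gene list --
# the kind of query that would feed a survival-curve analysis.
candidates = ["GENE_A"]
rows = conn.execute(
    """SELECT e.gene, e.tpm, p.survival_days, p.vital_status
       FROM expression e JOIN patient p USING (patient_id)
       WHERE e.gene IN (%s)""" % ",".join("?" * len(candidates)),
    candidates).fetchall()
print(rows)
```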
132

On fast and space-efficient database normalization : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand

Koehler, Henning January 2007
A common approach in designing relational databases is to start with a relation schema, which is then decomposed into multiple subschemas. A good choice of subschemas can often be determined using integrity constraints defined on the schema. Two central questions arise in this context. The first issue is which decompositions should be called "good", i.e., what normal form should be used. The second issue is how to find a decomposition into the desired form. These questions have been the subject of intensive research since relational databases came to life. A large number of normal forms have been proposed, and methods for their computation given. However, some of the most popular proposals still have problems:
- algorithms for finding decompositions are inefficient
- dependency-preserving decompositions do not always exist
- decompositions need not be optimal w.r.t. redundancy/space/update anomalies
We address these issues in this work by:
- designing efficient algorithms for finding dependency-preserving decompositions
- proposing a new normal form which minimizes overall storage space
This new normal form is then characterized syntactically, and shown to extend existing normal forms.
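As background for the decomposition problem the abstract discusses, here is a compact, textbook-style sketch of dependency-preserving 3NF synthesis. It is not code from the dissertation, and it assumes the input FD set is already a minimal cover:

```python
# Textbook sketch of dependency-preserving decomposition via 3NF synthesis.
# Assumes the FD set is already a minimal cover; not code from the thesis.

def closure(attrs, fds):
    """Attribute closure of `attrs` under functional dependencies `fds`."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def synthesize_3nf(schema, fds):
    """Group FDs by left-hand side into subschemas; add a key schema if
    no subschema contains a candidate key of the original relation."""
    groups = {}
    for lhs, rhs in fds:
        groups.setdefault(frozenset(lhs), set()).update(rhs)
    subschemas = [set(lhs) | rhs for lhs, rhs in groups.items()]
    # Losslessness requires some subschema to contain a candidate key.
    if not any(closure(s, fds) >= set(schema) for s in subschemas):
        key = set(schema)          # shrink the full schema to a key greedily
        for a in sorted(schema):
            if closure(key - {a}, fds) >= set(schema):
                key -= {a}
        subschemas.append(key)
    return subschemas

# Example: R(A,B,C,D) with A -> BC and C -> D
fds = [("A", "BC"), ("C", "D")]
print(synthesize_3nf("ABCD", fds))  # [{'A','B','C'}, {'C','D'}] (set order may vary)
```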
133

Formal design of data warehouse and OLAP systems : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand

Zhao, Jane Qiong January 2007
A data warehouse is a single data store where data from multiple data sources is integrated for online analytical processing (OLAP) across an entire organisation. The rationale for being single and integrated is to ensure a consistent view of organisational business performance, independent of the different angles of business perspectives. Due to its wide coverage of subjects, data warehouse design is a highly complex, lengthy and error-prone process. Furthermore, the business analytical tasks change over time, which results in changes in the requirements for the OLAP systems. Thus, data warehouse and OLAP systems are rather dynamic, and the design process is continuous. In this thesis, we propose a method that is integrated, formal and application-tailored, to overcome the complexity problem, deal with the system dynamics, and improve the quality of the system and its chances of success. Our method comprises three important parts: the general ASM method with types, the application-tailored design framework for data warehouses and OLAP, and the schema integration method with a set of provably correct refinement rules.

By using the ASM method, we are able to model both data and operations in a uniform conceptual framework, which enables us to design an integrated approach to data warehouse and OLAP design. The freedom given by the ASM method allows us to model the system at an abstract level that is easy to understand for both users and designers. More specifically, the language allows us to use terms from the user domain, not biased by the terms used in computer systems. The pseudo-code-like transition rules, which give the simplest form of operational semantics in ASMs, offer designers a familiar closeness to programming languages. Furthermore, these rules are rooted in mathematics, which helps improve the quality of the system design. By extending the ASMs with types, the modelling language is tailored for data warehousing with terms that are well developed for data-intensive applications, which makes it easy to model schema evolution as refinements in dynamic data warehouse design.

By providing the application-tailored design framework, we break down the design complexity by business processes (also called subjects in data warehousing) and by design concerns. By designing the data warehouse by subjects, our method resembles Kimball's "bottom-up" approach; however, with the schema integration method, it resolves the stovepipe issue of that approach. By building up a data warehouse iteratively in an integrated framework, our method not only results in an integrated data warehouse, but also resolves the issues of complexity and delayed ROI (Return On Investment) in Inmon's "top-down" approach. By dealing with user change requests in the same way as new subjects, and by modelling data and operations explicitly in a three-tier architecture, namely the data sources, the data warehouse and the OLAP (online analytical processing) layer, our method facilitates dynamic design with system integrity. By introducing a notion of refinement specific to schema evolution, namely schema refinement, to capture the notion of schema dominance in schema integration, we are able to build a set of correctness-proven refinement rules. This set of refinement rules simplifies the designer's work in verifying design correctness. Nevertheless, we do not aim for a complete set, since there are many different ways to perform schema integration, nor do we prescribe a single way of integration, so as to allow designer-favoured designs. Furthermore, given its flexibility, our method can easily be extended to address new emerging design issues.
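For readers unfamiliar with ASMs, the following minimal Python sketch illustrates the flavour of the guarded, parallel transition rules the abstract alludes to, using a toy warehouse-refresh rule; the thesis's typed ASM language is not reproduced here:

```python
# Illustrative sketch of an ASM-style transition rule: guards are evaluated
# on the OLD state, and all fired updates are applied together (the classic
# ASM update-set semantics). Generic ASM machinery, not the thesis's method.

# State: abstract locations mapped to values.
state = {"source_new_rows": [("2024-01", 100)], "warehouse": []}

def step(state):
    """One ASM step over the warehouse-refresh toy rule."""
    updates = {}
    # Rule: if new rows arrived at the source, load them into the warehouse
    # and clear the staging location, in parallel.
    if state["source_new_rows"]:
        updates["warehouse"] = state["warehouse"] + state["source_new_rows"]
        updates["source_new_rows"] = []
    new_state = dict(state)
    new_state.update(updates)   # apply the whole update set at once
    return new_state

state = step(state)
print(state)  # {'source_new_rows': [], 'warehouse': [('2024-01', 100)]}
```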
134

Gerenciamento de anotações de biosseqüências utilizando associações entre ontologias e esquemas XML / Management of biosequence annotations using associations between ontologies and XML schemas

Teixeira, Marcus Vinícius Carneiro 26 May 2008
Bioinformatics aims at providing computational tools to support the development of genome research. Among those tools are annotation systems and Database Management Systems (DBMSs) which, associated with ontologies, allow the formalization of both domain concepts and data schemas. The data yielded by genome research are often textual, lack a regular structure, and require schema evolution. Due to these characteristics, semi-structured DBMSs offer great potential for handling such data. Thus, this work presents an architecture for biosequence annotation based on persisting the annotated data in XML databases. Within this architecture, special attention was given to database design and to the manual annotation task performed by researchers. Hence, the architecture provides an interface that uses an ontology-driven model for the modelling and generation of XML schemas, as well as a prototype manual annotation interface that uses molecular biology domain ontologies such as the Gene Ontology and the Sequence Ontology. These interfaces were tested by users experienced in Bioinformatics and Databases, who answered questionnaires to evaluate them. The answers gave good assessments on issues such as usefulness and speeding up database design. The proposed architecture aims to extend and improve Bio-TIM, an annotation system developed by the Database Group of the Computer Science Department of the Federal University of São Carlos (UFSCar).
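As an illustration of the ontology-driven schema generation idea (not the thesis's actual mapping rules), one might derive an XML Schema fragment from a list of ontology term labels, as in this hypothetical sketch:

```python
# Sketch of deriving an XML Schema fragment from ontology term labels, in
# the spirit of the ontology-driven modelling interface described above.
# The term list, element names and mapping rule are hypothetical.
import xml.etree.ElementTree as ET

XS = "http://www.w3.org/2001/XMLSchema"
ET.register_namespace("xs", XS)

def schema_from_terms(root_name, terms):
    """Map each ontology term to an element declaration in a sequence."""
    schema = ET.Element(f"{{{XS}}}schema")
    complex_type = ET.SubElement(
        ET.SubElement(schema, f"{{{XS}}}element", name=root_name),
        f"{{{XS}}}complexType")
    seq = ET.SubElement(complex_type, f"{{{XS}}}sequence")
    for term in terms:
        # Ontology term labels become element names (simplified mapping).
        ET.SubElement(seq, f"{{{XS}}}element",
                      name=term.replace(" ", "_"), type="xs:string")
    return schema

# e.g. terms borrowed from the Sequence Ontology's vocabulary
terms = ["gene", "coding sequence", "exon"]
print(ET.tostring(schema_from_terms("annotation", terms), encoding="unicode"))
```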
136

The Effects of Team Dynamics Training on Conceptual Data Modeling Task Performance

Menking, Ricky Arnold 12 1900
Database modeling is a complex conceptual topic often taught through the use of project-based teams. One of the problems with the use of project-based teams in university courses is determining whether this is the most effective use of instructor and student time and effort. This study therefore investigated the impact of providing team dynamics training, prior to the commencement of short-duration project-based conceptual data modeling projects, on individual data modeling task performance (DMTP) outcomes and on team cohesiveness. The literature review encompassed conceptual data design modeling, the use of a project-based team approach, team dynamics and cohesion, self-efficacy, gender, and diversity. The research population consisted of 75 university students at a North American (Canadian) university pursuing a business program requiring an information systems course in which database design components are taught. Analysis of the collected data revealed a statistically significant inverse relationship between the provision of team dynamics training and individual DMTP. However, no statistically significant relationship was found between team dynamics training and team cohesion. This study therefore calls into question the value of team dynamics training on learning outcomes in the case of very short-duration project-based teams involved in conceptual data modeling tasks. Additional research would need to clarify what aspects of this particular experiment might have contributed to these results.
137

The usability of a computer-based Statistics Data and Story Library in the South African context

Basson, Elizabeth Maria 04 February 2002
Vista University is known in South Africa as a historically disadvantaged or black university. It is a multi-campus university (it has eight campuses throughout South Africa) and caters for learners from historically disadvantaged backgrounds. The Department of Mathematics and Statistics holds an annual meeting to coordinate the activities of the department across all eight campuses; attendance is compulsory for all lecturers from all the campuses. Every year the same problem arises: drawing up examination papers of a uniform standard across all the campuses. It is a very frustrating task for the compiler of the papers to get contributions from the lecturers that are submitted on time, in the agreed format and of an acceptable standard. During the 2000 meeting it was unanimously agreed that the long-term solution to the problem would be a database of questions in the agreed format and of an acceptable standard. Because the lecturers are spread across South Africa, this database must be available through Vista's Intranet. The development of such a product would involve a great deal of time and energy, and the most important question to ask is whether the lecturers would use it. The solution is to design a prototype of the product: a database with a Web-based portal, populated with a sample of questions. The usability of such a database must be determined to ensure the effectiveness of the final product. The aim of this study is to determine the usability of the product, after implementing a prototype of a Web-based Statistics Data and Story Library in the South African context (hereafter referred to as SSS).
138

A book management system eLibrary

Song, Shanpeng 01 January 2004
"eLibrary" is a book management software application that runs on Microsoft Windows platforms. The software incorporates a Windows Explorer like interface and XML/XSL to display book details. The purpose of this project is to build a full-featured, commerical-quality software package to help people manage their books (either printed or electronic). The goal is for eLibrary to be a complete solution for people who wish to build their own personal electronic library catalog.
139

Personal Software Process (PSP) Scriber

Tsao, Heng-Jui 01 January 2002
Personal Software Process (PSP) Scriber is a Web-based software engineering tool designed to implement an automatic system for performing PSP. The basis of this strategy is a set of tools to facilitate collection and analysis of development data. By analyzing the collected data, the developer can make informed, accurate decisions about their individual development effort.
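To make the notion of "development data" concrete, here is a hypothetical sketch of a PSP-style time-recording log and a simple time-in-phase analysis; the field names are illustrative, not taken from PSP Scriber:

```python
# Illustrative sketch of the kind of development data PSP tools collect:
# a time-recording log per process phase, summarized into time-in-phase.
from collections import defaultdict

time_log = [
    # (phase, minutes)
    ("design", 45),
    ("code", 90),
    ("test", 30),
    ("code", 25),
]

minutes_by_phase = defaultdict(int)
for phase, minutes in time_log:
    minutes_by_phase[phase] += minutes

total = sum(minutes_by_phase.values())
for phase, minutes in minutes_by_phase.items():
    print(f"{phase}: {minutes} min ({100 * minutes / total:.0f}%)")
```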
140

White Board

Alemu, Getahun 01 January 2003
This project designs and implements a tool to improve how coursework information is made available in educational systems.
