  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
731

DNA Microarray Data Analysis and Mining: Affymetrix Software Package and In-House Complementary Packages

Xu, Lizhe 19 December 2003 (has links)
Data management and analysis represent a major challenge for microarray studies. In this study, Affymetrix software was used to analyze HIV-infection data. The microarray analysis shows remarkably different results when different parameters provided by the software are used. This highlights the need for a standardized analysis tool, incorporating biological information about the genes, to better interpret microarray studies. To address the data-management problem, in-house programs, including scripts and a database, were designed. The in-house programs were also used to overcome problems and inconveniences discovered during the data analysis, including management of the gene lists. The database provides rapid connection to many online public databases, as well as integration of the original microarray data, relevant publications, and other useful information. The in-house programs allow investigators to process and analyze full Affymetrix microarray data efficiently.
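The gene-list management the abstract mentions can be pictured with a minimal sketch: merging probe-set lists from separate analyses and generating links into public databases. The probe-set IDs and URL patterns below are illustrative assumptions, not the thesis's actual scripts.

```python
# Hypothetical sketch of gene-list management for Affymetrix analyses.
# Probe-set IDs and URL patterns are illustrative assumptions.

def merge_gene_lists(*lists):
    """Union of probe-set IDs across analyses, preserving first-seen order."""
    seen, merged = set(), []
    for lst in lists:
        for probe in lst:
            if probe not in seen:
                seen.add(probe)
                merged.append(probe)
    return merged

def public_db_links(probe_id):
    """Example links into public databases (URL patterns are placeholders)."""
    return {
        "genbank": f"https://www.ncbi.nlm.nih.gov/nuccore/?term={probe_id}",
        "netaffx": f"https://example.org/netaffx?probeset={probe_id}",
    }

run1 = ["1007_s_at", "1053_at"]   # gene list from one analysis
run2 = ["1053_at", "117_at"]      # gene list from another analysis
merged = merge_gene_lists(run1, run2)
```

A real pipeline would also carry per-probe annotations, but the idea is the same: one merged list, with each entry hyperlinked out to public resources.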
732

Classifier System Learning of Good Database Schema

Tanaka, Mitsuru 07 August 2008 (has links)
This thesis presents an implementation of a learning classifier system that learns good database schemas. The system is implemented in Java using the NetBeans development environment, which provides good control over the GUI components. The system contains four components: a user interface, a rule and message system, an apportionment-of-credit system, and genetic algorithms. The input to the system is a set of simple database schemas, and the objective of the classifier system is to retain the good schemas, which are represented by classifiers. The learning classifier system is given some basic knowledge of database concepts and rules. The results showed that the system could weed out bad schemas and keep the good ones.
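The select-and-evolve loop such a system relies on can be sketched in a few lines. This is a toy genetic-algorithm skeleton only: the bit encoding and the "good schema" fitness rule below are my assumptions, not the thesis's actual Java implementation.

```python
import random

# Toy sketch: evolve bit-vector "schemas" toward a fitness rule.
# Encoding and fitness are illustrative assumptions.
random.seed(0)

def fitness(schema):
    # Assumed rule: reward having a key column (bit 0 set) and
    # penalize redundant columns (other set bits).
    return (2 if schema[0] == 1 else 0) - sum(schema[1:]) * 0.5

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(schema, rate=0.1):
    return [bit ^ 1 if random.random() < rate else bit for bit in schema]

def evolve(population, generations=30):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]   # keep the "good" schemas
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in parents]
        population = parents + children
    return max(population, key=fitness)

pop = [[random.randint(0, 1) for _ in range(6)] for _ in range(20)]
best = evolve(pop)
```

A full learning classifier system would add the rule-and-message and credit-apportionment machinery the abstract lists; this shows only the genetic-algorithm core.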
733

A model for the evaluation of risks and control features in ORACLE 7

08 September 2015 (has links)
M.Com. / The proliferation of computers and advances in technology have introduced a number of new management and control considerations. The inherent complexity of these environments has also increased the need to evaluate the adequacy of controls from an audit perspective. Given the increasing use of database management systems as the backbone of information-processing applications, and the inherent complexity and diversity of these environments, the auditor faces the challenge of deciding whether, and to what extent, reliance may be placed on the data contained in these databases...
734

Agilní správa databáze / Agile database management

Kotyza, David January 2009 (has links)
This diploma thesis focuses on agile management of relational databases. The goal is to provide a detailed analysis of the changes performed on a daily basis by DBAs and software developers, and to describe how these changes can heavily affect the performance of a database system and its data. The first part (chapters 2 and 3) describes the principles of the best-known development methodologies. The second part (chapter 4) describes the basic steps of agile strategies that are often used in application solutions. Finally, the third part (chapter 5 and following) gives detailed information about commonly performed database tasks.
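A common building block of agile database management is the versioned migration: each daily schema change becomes a script applied exactly once, so the database stays reproducible. The sketch below illustrates that idea with sqlite3; table and column names are illustrative, not from the thesis.

```python
import sqlite3

# Minimal migration runner: each schema change is a versioned script
# applied exactly once. Names are illustrative assumptions.

MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    current = conn.execute("SELECT MAX(v) FROM schema_version").fetchone()[0] or 0
    for version, ddl in MIGRATIONS:
        if version > current:              # apply only unseen changes
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version (v) VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # safe to re-run: already-applied versions are skipped
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
```

Because the runner is idempotent, every developer and every environment converges on the same schema, which is the point of treating database changes agilely.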
735

Desenvolvimento de metodologia de avaliação de egressos de um programa de mestrado em pesquisa clínica

Desiderio, Tamiris Mariani Pereira January 2019 (has links)
Advisor: Ana Silvia Sartori Barraviera Seabra Ferreira / Abstract: Professional graduate programs have been gaining ground in institutions worldwide and represent a watershed between the traditional academic model and the more recent needs of the scientific, technological, and productive innovation system. As a result, there is growing demand from entrants who already work in the labor market. The success of graduates, whether measured by institutional placement, employability and salaries, or other variables, is of great importance if programs are to improve their methodologies and learn more about the field in which they operate. The objective of this study was to develop a system for evaluating the profile and success of graduates of the Master's Program in Clinical Research – FMB/CEVAP, contributing to its continuing improvement and meeting the needs of Unesp and of governmental bodies such as CAPES. To this end, adaptations were made to a graduate-evaluation instrument, which was validated using the DELPHI methodology through consultation with two groups of experts in Clinical Research, selected from lists of researchers who met criteria such as publications in Clinical Research, institutional affiliation with research establishments, and participation in Clinical Research projects. The participants rated the questions on a LIKERT scale. After statistical analysis of the responses and verification of the experts' agreement on the instrument, an online platform will be made available and applied to students of the Postgraduate Course in Clinical Research (Professional Master's Degree) of FMB/CEVAP – UNESP in 2017 and subsequent years. / Master's
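A Delphi round with Likert-scored items typically retains an item once enough experts agree on it. The sketch below shows one common way to compute that; the 70% threshold and the "4 or 5 counts as agreement" convention are assumptions for illustration, not the thesis's actual criteria.

```python
# Illustrative Delphi-style consensus check on 1-5 Likert ratings.
# Threshold and agreement convention are assumptions.

def consensus(ratings, threshold=0.7):
    """True if enough experts rate the item 4 or 5 on a 1-5 Likert scale."""
    agreeing = sum(1 for r in ratings if r >= 4)
    return agreeing / len(ratings) >= threshold

item_a = [5, 4, 4, 5, 3]   # 4 of 5 experts agree -> item kept
item_b = [2, 3, 4, 2, 5]   # 2 of 5 experts agree -> item revised next round
```

Items that fail the threshold are reworded and sent back to the panel, which is what makes Delphi an iterative validation method.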
736

A Systems Approach to Rule-Based Data Cleaning

Amr H Ebaid (6274220) 10 May 2019 (has links)
<div>High quality data is a vital asset for several businesses and applications. With flawed data costing billions of dollars every year, the need for data cleaning is unprecedented. Many data-cleaning approaches have been proposed in both academia and industry. However, there are no end-to-end frameworks for detecting and repairing errors with respect to a set of <i>heterogeneous</i> data-quality rules.</div><div><br></div><div>Several important challenges exist when envisioning an end-to-end data-cleaning system: (1) It should deal with heterogeneous types of data-quality rules and interleave their corresponding repairs. (2) It can be extended by various data-repair algorithms to meet users' needs for effectiveness and efficiency. (3) It must support continuous data cleaning and adapt to inevitable data changes. (4) It has to provide user-friendly interpretable explanations for the detected errors and the chosen repairs.</div><div><br></div><div>This dissertation presents a systems approach to rule-based data cleaning that is <b>generalized</b>, <b>extensible</b>, <b>continuous </b>and <b>explaining</b>. This proposed system distinguishes between a <i>programming interface</i> and a <i>core </i>to address the above challenges. The programming interface allows the user to specify various types of data-quality rules that uniformly define and explain what is wrong with the data, and how to fix it. Handling all the rules as black-boxes, the core encapsulates various algorithms to holistically and continuously detect errors and repair data. The proposed system offers a simple interface to define data-quality rules, summarizes the data, highlights violations and fixes, and provides relevant auditing information to explain the errors and the repairs.</div>
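The "rules as black boxes" idea in this abstract can be pictured with a tiny sketch: each data-quality rule only reports violations and proposes repairs, and the core applies every rule through the same interface. The class and field names below are my illustration of the concept, not Smoke's actual API.

```python
# Hedged sketch of heterogeneous data-quality rules treated as black boxes.
# Interface and rule names are illustrative assumptions.

class Rule:
    def violations(self, rows):      # -> indices of offending rows
        raise NotImplementedError
    def repair(self, row):           # -> repaired row
        raise NotImplementedError

class NotNull(Rule):
    """One concrete rule: a field must not be missing."""
    def __init__(self, field, default):
        self.field, self.default = field, default
    def violations(self, rows):
        return [i for i, r in enumerate(rows) if r.get(self.field) is None]
    def repair(self, row):
        return {**row, self.field: self.default}

def clean(rows, rules):
    for rule in rules:               # the core treats every rule uniformly
        for i in rule.violations(rows):
            rows[i] = rule.repair(rows[i])
    return rows

data = [{"city": "Rome"}, {"city": None}]
cleaned = clean(data, [NotNull("city", "unknown")])
```

A real system must also interleave repairs from conflicting rules and explain its choices, which is exactly the hard part the dissertation addresses; this sketch only shows the uniform interface.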
737

S čepičkou nebo bez čepičky? Iniciace translace eukaryot se zaměřením na oportunního patogena C. albicans / To cap or not to cap? Eukaryotic translation initiation with a special interest in human opportunistic pathogen C. albicans

Feketová, Zuzana January 2011 (has links)
Candida albicans is a serious human opportunistic pathogen, causing severe health complications in immunocompromised patients. To the best of my knowledge, it is the only organism that survives with unmethylated cap structures on the 5' ends of its mRNA molecules. Using a functional assay, I demonstrated that orf19.7626 codes for the C. albicans translation initiation factor 4E (Ca4E). We could not confirm our hypothesis that Ca4E is responsible for recognition of the unmethylated cap in our model organism S. cerevisiae. Candida sp. also possesses another rather unusual feature: an ambiguous CUG codon. In most cases CUG is decoded as serine, but sometimes also as leucine, giving rise to a so-called "statistical proteome". One CUG codon is also part of the mRNA coding for the Ca4E protein, so two versions of Ca4E (Ca4ELeu and Ca4ESer) might occur in C. albicans simultaneously. Both are able to rescue deletion of the S. cerevisiae eIF4E gene, but they confer temperature sensitivity on the heterologous host, a phenotype more pronounced with the Ca4ELeu version. We observed a milder temperature-sensitive phenotype after co-expression of Ca4E together with C. albicans eIF4G (Ca4G). Conformational coupling between eIF4E and eIF4G leads to enhanced affinity of eIF4E for the cap...
738

Imersão de espaços métricos em espaços multidimensionais para indexação de dados usando detecção de agrupamentos / Embedding of metric spaces in multidimensional spaces for data indexing using cluster detection

Paterlini, Adriano Arantes 28 March 2011 (has links)
The success of Database Management Systems (DBMSs) in applications with traditional data (numbers and short texts) has encouraged their use in new types of applications that require the manipulation of complex data. Time series, scientific data, and multimedia data are examples of complex data. Several application fields, such as medical informatics, have demanded solutions for managing complex data. Complex data can also be studied by means of knowledge discovery in databases (KDD) techniques, using appropriate clustering algorithms. However, these algorithms have a high computational cost, which hinders their use on large data sets. The techniques already developed in the databases research field for indexing metric spaces usually treat the data set as uniform, without taking into account the existence of clusters in the data, so the structures try to maximize query efficiency over the entire set simultaneously. Similarity queries, however, are often limited to a specific region of the data set. In this context, this dissertation proposes a new access method able to index metric data efficiently, especially for sets containing clusters. It also proposes a new algorithm for detecting clusters in metric data that makes the choice of the medoid of a given subset of elements more efficient. The experimental results show that the proposed algorithms FAMES and M-FAMES can be used for cluster detection in complex data and outperform PAM, CLARA, and CLARANS in both effectiveness and efficiency. Moreover, similarity queries performed with the proposed metric access method FAMESMAM proved especially appropriate for data sets with clusters.
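To make the medoid-selection problem concrete, here is a textbook PAM-style k-medoids search, the baseline the abstract compares against. This is not FAMES or M-FAMES, only the classical swap-based idea they improve on; it works on any metric, here absolute difference on numbers.

```python
# Textbook PAM-style k-medoids (the baseline, not the thesis's algorithm).
# dist can be any metric; here plain absolute difference.

def total_cost(points, medoids, dist):
    """Sum of each point's distance to its nearest medoid."""
    return sum(min(dist(p, m) for m in medoids) for p in points)

def k_medoids(points, k, dist, iters=10):
    medoids = list(points[:k])            # naive seeding
    for _ in range(iters):
        for i in range(k):
            for p in points:              # try swapping each point in
                trial = medoids[:i] + [p] + medoids[i + 1:]
                if total_cost(points, trial, dist) < total_cost(points, medoids, dist):
                    medoids = trial
    return sorted(medoids)

data = [1, 2, 3, 20, 21, 22]              # two obvious clusters
medoids = k_medoids(data, 2, lambda a, b: abs(a - b))
```

The nested swap loop is exactly the high computational cost the dissertation mentions: every candidate swap rescans the whole data set, which is what motivates faster medoid selection in FAMES.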
739

The Universal Sports Database

Chang, Lawrence January 2008 (has links)
Thesis advisor: David R. Martin / With vast amounts of data in the world, organization becomes a challenge. The success of data-driven web services (IMDb, YouTube, Google Maps, Wikipedia, et cetera) hinges on their ability to present information in an intuitive manner with user-friendly interfaces. One area that lacks such a service is sports statistics. Given the ubiquitous appeal of sports, a solution to this problem could be universally beneficial. Many sites have statistics for different sports, but all of them have limitations. Since there is very little continuity among sports, statistics are represented disparately. There are several problems with this approach. Any time the informational structure needs to change, the entire database and interface must change with it. Moreover, there can never be a single interface if different sports have different schemas, leading to an unfriendly user experience. My system uses a unique schema capable of representing statistics from any sport, no matter how unusual. Adding new statistics to a sport to reflect rule changes, or adding a new sport altogether, is seamless. In addition, the web interface is generated by Rails, so it changes automatically with the schema. Challenges included developing a universal sports schema and testing it sufficiently to demonstrate its generality. Finding and extracting the data to populate the database also presented difficulties. / Thesis (BS) — Boston College, 2008. / Submitted to: Boston College. College of Arts and Sciences. / Discipline: Computer Science. / Discipline: College Honors Program.
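One well-known way to achieve the sport-agnostic schema described above is to store each statistic as its own row rather than as a column in a per-sport table. The sqlite3 sketch below illustrates that pattern; it is my illustration of the generality goal, and the actual Rails schema in the thesis may differ.

```python
import sqlite3

# Sketch of a sport-agnostic statistics schema: one row per statistic,
# so no schema change is needed for a new sport or a new stat.
# Table layout and sample data are illustrative assumptions.

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE stat (
    sport  TEXT,
    entity TEXT,
    name   TEXT,
    value  REAL)""")

conn.executemany("INSERT INTO stat VALUES (?, ?, ?, ?)", [
    ("baseball",   "Player A", "batting_avg", 0.358),
    ("basketball", "Player B", "ppg",         30.1),
    ("baseball",   "Player A", "home_runs",   301),
])

# A brand-new sport or statistic is just another row:
conn.execute("INSERT INTO stat VALUES ('curling', 'Team A', 'ends_won', 42)")

sports = [row[0] for row in conn.execute(
    "SELECT DISTINCT sport FROM stat ORDER BY sport")]
```

The trade-off of this row-per-statistic design is that queries must pivot rows back into columns, which is the kind of burden a framework-generated interface (as with Rails here) can absorb.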
740

Physical Plan Instrumentation in Databases: Mechanisms and Applications

Psallidas, Fotis January 2019 (has links)
Database management systems (DBMSs) are designed to compile SQL queries into physical plans that, when executed, produce the queries' results. Building on this functionality, an ever-increasing number of application domains (e.g., provenance management, online query optimization, physical database design, interactive data profiling, monitoring, and interactive data visualization) seek to operate on how queries are executed by the DBMS, for purposes ranging from debugging and data explanation to optimization and monitoring. Unfortunately, DBMSs provide little, if any, support to facilitate the development of this class of important application domains. As a result, database application developers and database system architects either rewrite the database internals in ad hoc ways; work around the SQL interface, if possible, with inevitable performance penalties; or even build new databases from scratch only to express and optimize their domain-specific application logic over how queries are executed. To address this problem in a principled manner, in this dissertation we introduce a prototype DBMS, Smoke, that exposes instrumentation mechanisms in the form of a framework that allows external applications to manipulate physical plans. Intuitively, a physical plan is the underlying representation a DBMS uses to encode how a SQL query will be executed, and providing instrumentation mechanisms at this representation level lets applications express and optimize their logic over how queries are executed. With such an instrumentation-enabled DBMS in place, we then consider how to express and optimize applications whose logic depends on how queries are executed.
To best demonstrate the expressive and optimization power of instrumentation-enabled DBMSs, we express and optimize applications across several important domains, including provenance management, interactive data visualization, interactive data profiling, physical database design, online query optimization, and query discovery. Expressivity-wise, we show that Smoke can express known techniques, introduce novel semantics on known techniques, and introduce new techniques across domains. Performance-wise, we show, case by case, that Smoke is on par with, or up to several orders of magnitude faster than, state-of-the-art imperative and declarative implementations of important applications across domains. As such, we believe our contributions provide evidence for, and form the basis of, a class of instrumentation-enabled DBMSs aimed at expressing and optimizing applications whose core logic operates over how queries are executed.
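The core idea of instrumenting a physical plan can be shown with a toy iterator-style plan: an observer is spliced between operators so an external application (here a row recorder standing in for provenance capture) sees every intermediate row without changing query results. This mimics the concept only; Smoke's actual interfaces are not reproduced here.

```python
# Toy physical plan built from Python generators, with an instrumentation
# wrapper spliced between operators. Operator names are illustrative.

def scan(rows):                       # leaf physical operator
    yield from rows

def filter_op(child, pred):           # filtering physical operator
    return (r for r in child if pred(r))

def instrument(op_output, observer):
    """Wrap an operator's output stream; the observer sees every row."""
    for row in op_output:
        observer.append(row)          # e.g., provenance capture
        yield row

observed = []
plan = filter_op(instrument(scan([1, 2, 3, 4]), observed),
                 lambda x: x % 2 == 0)
result = list(plan)                   # executing the plan drives the wrapper
```

Because the wrapper sits at the plan level rather than behind the SQL interface, the application observes intermediate rows that SQL alone could never expose, which is the capability the dissertation argues for.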
