151

The nested universal relation database model /

Levene, M. January 1900 (has links)
Revision of the author's thesis (Ph. D.)--Birkbeck College, 1990. / Includes bibliographical references (p. [163]-173) and index.
152

A knowledgebase of stress responsive gene regulatory elements in Arabidopsis thaliana

Adam, Muhammed Saleem January 2011 (has links)
Stress responsive genes play a key role in shaping the manner in which plants process and respond to environmental stress. Their gene products are linked to DNA transcription and its consequent translation into a response product. However, whilst these genes play a significant role in manufacturing responses to stressful stimuli, transcription factors coordinate access to these genes, specifically by accessing a gene's promoter region, which houses transcription factor binding sites. Here transcriptional elements play a key role in mediating responses to environmental stress, where each transcription factor binding site may constitute a potential response to a stress signal. Arabidopsis thaliana, a model organism, can be used to identify the mechanism of how transcription factors shape a plant's survival in a stressful environment. Whilst there are numerous plant stress research groups globally, there is a shortage of publicly available stress responsive gene databases. In addition, a number of previous databases, such as the Generation Challenge Programme's comparative plant stress-responsive gene catalogue, Stresslink and DRASTIC, have become defunct, whilst others have stagnated. There is currently a single Arabidopsis thaliana stress response database, STIFDB, which was launched in 2008 and only covers abiotic stresses as handled by major abiotic stress responsive transcription factor families. Its data was sourced from microarray expression databases, contains numerous omissions as well as erroneous entries, and has not been updated since its inception. The Dragon Arabidopsis Stress Transcription Factor database (DASTF) was developed in response to the current lack of stress response gene resources. A total of 2333 entries were downloaded from SWISSPROT, manually curated and imported into DASTF. The entries represent 424 transcription factor families.
Each entry has a corresponding SWISSPROT, ENTREZ GENBANK and TAIR accession number. The 5′ untranslated regions (UTRs) of 417 families were scanned against TRANSFAC's binding site catalogue to identify binding sites. The relational database consists of two tables, a transcription factor table and a transcription factor family table, called DASTF_TF and TF_Family respectively. Using a two-tier client-server architecture, a webserver was built with PHP, APACHE and MYSQL, and the data was loaded into these tables with a PYTHON script. The DASTF database contains 60 entries which correspond to biotic stress and 167 which correspond to abiotic stress, while 2106 respond to biotic and/or abiotic stress. Users can search the database using text, family, chromosome and stress type search options. Online tools such as HMMER, CLUSTALW, BLAST and HYDROCALCULATOR have been integrated into the DASTF database. Users can upload sequences to identify which transcription factor family their sequences belong to by using HMMER. The website can be accessed at http://apps.sanbi.ac.za/dastf/ and two updates per year are envisaged.
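The abstract describes a two-table relational layout (DASTF_TF and TF_Family). A minimal sketch of that layout, with illustrative column names and made-up sample values (the actual DASTF schema and accession data are not given in the abstract):

```python
import sqlite3

# Hypothetical sketch of the two-table design named in the abstract.
# Column names and sample values are assumptions for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TF_Family (
    family_id   INTEGER PRIMARY KEY,
    family_name TEXT NOT NULL
);
CREATE TABLE DASTF_TF (
    tf_id         INTEGER PRIMARY KEY,
    swissprot_acc TEXT,
    genbank_acc   TEXT,
    tair_acc      TEXT,
    stress_type   TEXT CHECK (stress_type IN ('biotic', 'abiotic', 'both')),
    family_id     INTEGER REFERENCES TF_Family(family_id)
);
""")

# Illustrative rows, then the kind of grouped count behind the
# biotic/abiotic breakdown reported in the abstract.
conn.execute("INSERT INTO TF_Family VALUES (1, 'ExampleFamily')")
conn.execute(
    "INSERT INTO DASTF_TF VALUES (1, 'ACC00001', 'GB000001', 'AT0G00000', 'abiotic', 1)"
)
rows = conn.execute(
    "SELECT stress_type, COUNT(*) FROM DASTF_TF GROUP BY stress_type"
).fetchall()
print(rows)  # [('abiotic', 1)]
```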
153

E-model: event-based graph data model theory and implementation

Kim, Pilho 06 July 2009 (has links)
The necessity of managing disparate data models is increasing within all IT areas. Emerging hybrid relational-XML systems are under development in this context to support both relational and XML data models. However, there are ever-growing needs for adequate data models for text and multimedia applications, which require proper storage, and their capability to coexist and collaborate with other data models is as important as that of a relational-XML hybrid model. This work proposes a new data model named E-model that supports rich relations and reflects the dynamic nature of information. The E-model introduces abstract data typing objects and rules of relation that support: (1) the notion of time in object definition and relation, (2) multiple-type relations, (3) complex schema modeling methods using a relational directed acyclic graph, and (4) interoperation with popular data models. To implement the E-model prototype, extensive data operation APIs have been developed on top of relational databases. In processing dynamic queries, the prototype achieves an order of magnitude improvement in speed compared with popular data models. Based on these APIs, a new language named EML is proposed. EML extends the SQL-89 standard with various E-model features: (1) unstructured queries, (2) unified object namespaces, (3) temporal queries, (4) ranking orders, (5) path queries, and (6) semantic expansions. The E-model system can interoperate with popular data models through its rich relations and flexible structure to support complex data models. It can act as a stand-alone database server, provide materialized views for interoperation with other data models, or co-exist with established database systems as a centralized online archive or a proxy database server.
The current prototype was implemented on top of a relational database, which allows it to benefit significantly from established database engines in application development.
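The E-model's core idea, as the abstract describes it, is time-aware, typed relations between objects. A minimal sketch of that notion (class and method names are invented for illustration, not the actual E-model API):

```python
from dataclasses import dataclass
from datetime import datetime

# Sketch of time-stamped, typed relations between objects in the spirit
# of the E-model abstract; names are illustrative assumptions.
@dataclass(frozen=True)
class Relation:
    source: str
    target: str
    rel_type: str
    valid_from: datetime

class EModelStore:
    def __init__(self):
        self.relations: list[Relation] = []

    def relate(self, source, target, rel_type, when):
        self.relations.append(Relation(source, target, rel_type, when))

    def neighbors(self, source, rel_type, as_of):
        # Temporal query: relations of a given type valid at a point in time.
        return [r.target for r in self.relations
                if r.source == source and r.rel_type == rel_type
                and r.valid_from <= as_of]

store = EModelStore()
store.relate("doc1", "topicA", "tagged", datetime(2008, 1, 1))
store.relate("doc1", "topicB", "tagged", datetime(2010, 1, 1))
print(store.neighbors("doc1", "tagged", datetime(2009, 1, 1)))  # ['topicA']
```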
154

Modélisation et construction des bases de données géographiques floues et maintien de la cohérence de modèles pour les SGBD SQL et NoSQL / Modeling and construction of fuzzy geographic databases with supporting models consistency for SQL and NoSQL database systems

Soumri Khalfi, Besma 12 June 2017 (has links)
Today, research on the storage and integration of spatial data is an important strand that revitalizes research on data quality. Taking into account the imperfection of geographic data, particularly imprecision, adds real complexity. Along with increasing data-centred quality requirements (accuracy, completeness, currency), the need for intelligible, logically consistent information is constantly growing. From this point of view, we are interested in Imprecise Geographic Databases (IGDBs) and their logical consistency. This work proposes solutions for modeling and building consistent IGDBs for SQL and NoSQL database systems. Existing conceptual design methods for imprecise geographic data do not satisfactorily meet real-world modeling needs. We present an extended version of the F-Perceptory approach for IGDB design. To generate a coherent definition of imprecise geographic objects and build the IGDB in a relational system, we present a set of rules for automatic model transformation; based on these rules, we develop a process that generates the physical model from the fuzzy conceptual model. We implement these solutions as a prototype called FPMDSG. For document-oriented NoSQL systems, we present a logical model called Fuzzy GeoJSON to better express the structure of imprecise geographic data. In addition, these systems lack support for data consistency; we therefore present a validation methodology for consistent storage. The proposed solutions are implemented as a schema-driven validation pipeline based on a Fuzzy GeoJSON schema and semantic constraints.
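The abstract does not give the Fuzzy GeoJSON structure itself, but the idea of a GeoJSON-like document carrying membership degrees for imprecise boundaries, plus a semantic-constraint check, can be sketched as follows (field names such as "fuzzyGeometry" and "membership" are assumptions for illustration):

```python
import json

# Hypothetical fuzzy-GeoJSON-style document: a Feature whose geometry
# carries membership degrees in [0, 1] for imprecise boundaries.
feature = {
    "type": "Feature",
    "properties": {"name": "flood zone"},
    "fuzzyGeometry": {
        "type": "FuzzyPolygon",
        # Inner ring: certainly inside; outer ring: only possibly inside.
        "rings": [
            {"membership": 1.0,
             "coordinates": [[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]]},
            {"membership": 0.4,
             "coordinates": [[-1, -1], [2, -1], [2, 2], [-1, 2], [-1, -1]]},
        ],
    },
}

def validate(doc):
    # Semantic constraint: degrees lie in [0, 1] and decrease outwards
    # from the certain core to the uncertain fringe.
    degrees = [r["membership"] for r in doc["fuzzyGeometry"]["rings"]]
    return (all(0.0 <= d <= 1.0 for d in degrees)
            and degrees == sorted(degrees, reverse=True))

# The document round-trips as ordinary JSON, then passes validation.
doc = json.loads(json.dumps(feature))
print(validate(doc))  # True
```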
155

The design of a database of resources for rational therapy

Steyn, Genevieve Lee 06 1900 (has links)
The purpose of this study is to design a database of resources for rational therapy. An investigation of the current health situation and the reorientation towards primary health care (PHC) in South Africa evidenced the need for a database of resources that would meet the demand for rational therapy information made on the Helderberg College Library by various user groups, as well as contribute to the national health information infrastructure. Rational therapy is viewed as an approach within PHC that is rational, common-sense, holistic and credible, focusing on the prevention and maintenance of health. A model of the steps in database design was developed. A user study identified users' requirements for the design, and the conceptual schema was developed. The entities, attributes, relationships and policies were presented and graphically summarised in an Entity-Relationship (E-R) diagram. The conceptual schema is the blueprint for further design and implementation of the database. / Information Science / M.Inf.
156

Evaluer et améliorer la qualité de l'information: herméneutique des bases de données administratives / Evaluating and improving information quality: a hermeneutics of administrative databases

Boydens, Isabelle January 1998 (has links)
Doctorate in Philosophy and Letters / info:eu-repo/semantics/nonPublished
157

Online horizontal partitioning of heterogeneous data

Herrmann, Kai, Voigt, Hannes, Lehner, Wolfgang 30 November 2020 (has links)
In an increasing number of use cases, databases face the challenge of managing heterogeneous data. Heterogeneous data is characterized by a quickly evolving variety of entities without a common set of attributes. These entities do not show enough regularity to be captured in a traditional database schema. A common solution is to centralize the diverse entities in a universal table. Usually, this leads to a very sparse table. Although today's techniques allow efficient storage of sparse universal tables, query efficiency is still a problem. Queries that address only a subset of attributes have to read the whole universal table, including many irrelevant entities. A solution is to use a partitioning of the table, which allows pruning partitions of irrelevant entities before they are touched. Creating and maintaining such a partitioning manually is very laborious or even infeasible, due to the enormous complexity. Thus, an autonomous solution is desirable. In this article, we define the Online Partitioning Problem for heterogeneous data. We sketch how an optimal solution for this problem can be determined based on hypergraph partitioning. Although it leads to the optimal partitioning, the hypergraph approach is inappropriate for an implementation in a database system. We present Cinderella, an autonomous online algorithm for horizontal partitioning of heterogeneous entities in universal tables. Cinderella is designed to keep its overhead low by operating online; it incrementally assigns entities to partitions while they are touched anyway during modifications. This enables a reasonable physical database design at runtime instead of static modeling.
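The incremental idea behind this kind of online partitioning can be sketched in a few lines: at modification time, each incoming entity goes to the partition whose attribute set overlaps it most, and a new partition is opened only when nothing overlaps. This is an illustrative simplification, not the actual Cinderella algorithm or its cost model:

```python
# Sketch of online horizontal partitioning in the spirit of the abstract
# (greatest-attribute-overlap assignment); not the actual Cinderella
# algorithm. All names and thresholds are illustrative assumptions.
class OnlinePartitioner:
    def __init__(self, max_partitions=4):
        self.max_partitions = max_partitions
        self.partitions = []  # each: {"attrs": set, "rows": list}

    def insert(self, entity: dict):
        attrs = set(entity)
        best, best_score = None, -1
        for part in self.partitions:
            score = len(attrs & part["attrs"])  # shared attributes
            if score > best_score:
                best, best_score = part, score
        # Open a fresh partition when nothing overlaps (capacity permitting).
        if best is None or (best_score == 0
                            and len(self.partitions) < self.max_partitions):
            best = {"attrs": set(), "rows": []}
            self.partitions.append(best)
        best["attrs"] |= attrs
        best["rows"].append(entity)

p = OnlinePartitioner()
p.insert({"title": "a", "author": "x"})
p.insert({"title": "b", "pages": 10})      # shares "title" -> partition 1
p.insert({"director": "y", "runtime": 90}) # no overlap -> partition 2
print(len(p.partitions))  # 2
```

A query touching only `director`/`runtime` could then prune the book-like partition entirely, which is the point of the partitioning.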
158

[en] PARALLEL PROGRAMING IN THE REDIS KEY-VALUE DATASTORE / [pt] PROGRAMAÇÃO PARALELA NO BANCO DE DADOS CHAVE-VALOR REDIS

JUAREZ DA SILVA BOCHI 12 April 2016 (has links)
[en] Redis is an open source key-value database that supports Lua scripts, but its implementation is single-threaded. Long-running scripts are discouraged because script evaluation is blocking, which may cause service-level deterioration. Applying the M:N threading model, which combines user-level and operating-system threads, we added to Redis the ability to run scripts in parallel, leveraging all server cores. With the use of Lua coroutines, we implemented a scheduler able to allocate and suspend user-level tasks on the available cores without the need to change the scripts' source code. The M:N model allowed us to protect the programmer from the natural complexities that arise from parallel programming, such as synchronizing access to shared resources and scheduling tasks onto different operating-system threads.
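The suspend/resume mechanics of user-level tasks described above can be illustrated with Python generators standing in for Lua coroutines. A real M:N scheduler multiplexes such tasks over several OS threads; this single-loop sketch only shows the cooperative scheduling part, and all names are illustrative:

```python
from collections import deque

# User-level task: each `yield` suspends it and returns control to the
# scheduler, without the task's code knowing about the scheduler.
def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"

class Scheduler:
    def __init__(self):
        self.ready = deque()  # run queue of suspended tasks

    def spawn(self, coro):
        self.ready.append(coro)

    def run(self):
        trace = []
        while self.ready:
            coro = self.ready.popleft()
            try:
                trace.append(next(coro))  # resume until next suspension
                self.ready.append(coro)   # round-robin re-queue
            except StopIteration:
                pass                      # task finished; drop it
        return trace

sched = Scheduler()
sched.spawn(task("a", 2))
sched.spawn(task("b", 2))
print(sched.run())  # ['a:0', 'b:0', 'a:1', 'b:1']
```

Interleaving happens at the yield points, so tasks never need locks for state they only touch between suspensions, which mirrors the "protect the programmer" goal stated in the abstract.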
159

OperomeDB: database of condition specific transcription in prokaryotic genomes and genomic insights of convergent transcription in bacterial genomes

Chetal, Kashish 27 October 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / My thesis comprises two projects: (1) a database of operons predicted from high-throughput sequencing datasets for bacterial genomes, and (2) genomic and mechanistic insights into convergent transcription in bacterial genomes. In the first project we developed a database of operon predictions for bacterial genomes. We took RNA-seq datasets from NCBI, covering distinct experimental conditions for each bacterial genome, and predicted operons with Rockhopper. Currently the database contains nine bacterial organisms for which operons were predicted. The user interface is simple and easy to use for visualizing, downloading and querying the data: users can browse the reference genome, the genes it contains, and the operons predicted from different RNA-seq datasets. In the second project, we studied the genomic and mechanistic basis of convergent transcription in bacterial genomes. Convergent gene pairs with overlapping head-to-head configuration are widespread across both eukaryotic and prokaryotic genomes. They are believed to contribute to the regulation of genes at both the transcriptional and post-transcriptional levels, although the factors contributing to their abundance across genomes, and the mechanistic basis for their prevalence, are poorly understood. In this study, we explore the role of various factors contributing to convergent overlapping transcription in bacterial genomes. Our analysis shows that the proportion of convergent overlapping gene pairs (COGPs) in a genome is affected by endospore formation, bacterial habitat, oxygen requirement, GC content and temperature range.
In particular, we show that bacterial genomes thriving in specialized habitats, such as thermophiles, exhibit a high proportion of COGPs. Our results also show that the density distribution of COGPs across genomes is highest for shorter overlaps, with increased conservation of distances for decreasing overlaps. Our study further reveals that COGPs frequently contain stop codon overlaps, with the middle base position exhibiting mismatches between complementary strands. Functional analysis using Clusters of Orthologous Groups (COG) annotations suggested that cell motility, cell metabolism, storage and cell signaling are enriched among COGPs, suggesting their role in processes beyond regulation. Our analysis provides genomic insights into this underappreciated regulatory phenomenon, allowing a refined understanding of its contribution to bacterial phenotypes.
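A convergent overlapping gene pair is two adjacent genes on opposite strands transcribed towards each other, with overlapping 3′ ends. A minimal sketch of detecting such pairs from gene coordinates (the coordinates below are invented for illustration; the thesis's actual pipeline is not described at this level of detail):

```python
# Sketch: find convergent overlapping gene pairs (COGPs) in a list of
# genes sorted by start coordinate. Coordinates are 1-based inclusive
# and invented for illustration.
def find_cogps(genes):
    """genes: list of (name, start, end, strand) tuples, sorted by start."""
    pairs = []
    for g1, g2 in zip(genes, genes[1:]):
        # Convergent (head-to-head 3' ends): forward gene followed by
        # a reverse gene transcribed back towards it.
        convergent = g1[3] == "+" and g2[3] == "-"
        overlap = g1[2] - g2[1] + 1  # bases the two genes share
        if convergent and overlap > 0:
            pairs.append((g1[0], g2[0], overlap))
    return pairs

genes = [
    ("geneA", 100, 520, "+"),
    ("geneB", 505, 900, "-"),    # overlaps geneA's 3' end by 16 bp
    ("geneC", 950, 1200, "+"),
    ("geneD", 1300, 1500, "-"),  # convergent with geneC but no overlap
]
print(find_cogps(genes))  # [('geneA', 'geneB', 16)]
```

Overlap lengths collected this way give the short-overlap-dominated density distribution the abstract refers to.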
160

Unlocking the Potential of Biodiversity Data : Managing and Sharing Data from Biodiversity Surveys / Att låsa upp potentialen av biodiversitets data : Hantering och delning av data från Naturvärdesinventeringar

Karlsson, Tom, Rådberg, Anton January 2023 (has links)
Biodiversity surveys conducted by consulting firms generate valuable biodiversity data. However, the full potential of this data remains untapped without proper structuring and integration into a collaborative network of shared data. This thesis addresses the lack of studies exploring data management in biodiversity surveys and the specific requirements for data structuring. The objective of this Bachelor's thesis is to introduce the concepts of data management in biodiversity surveys and collaborative data sharing, which have gained scientific popularity due to advancements in data-driven technologies and growing public interest in environmental sustainability. Using a qualitative and exploratory approach based on the Design Science Research paradigm, this research draws general conclusions on data management practices in biodiversity surveys and identifies the requirements for new data standards. The findings serve as a starting point for ongoing efforts toward establishing a standardised structure for biodiversity survey data. Such a structure would enable meaningful insights through data-driven technologies and facilitate integration into a collaborative network of shared data. By addressing the gap in understanding data management and data standards in biodiversity surveys, this research contributes to the effective utilisation of biodiversity data, ultimately supporting the achievement of sustainability goals.
