581

LymphTF Database - A Database of Transcription Factor Activity in Lymphocyte Development

Childress, Paul 26 July 2006 (has links)
Submitted to the faculty of the Bioinformatics Graduate Program in partial fulfillment of the requirements for the degree Master of Science in the School of Informatics, Indiana University, September 2005 / Study of the transcriptional regulation of lymphocyte development has advanced greatly in the past 15 years. Owing to improved techniques and intense interest in the topic, a great many interactions between transcription factors and their target genes have been described. For B and T cells, a clearer picture is emerging of how they start from a common progenitor cell and progressively restrict their potential to yield many different types of terminally differentiated cells. As B and T cells develop, both follow roughly similar paths: an early stepwise progression leads to later stages where multiple developmental options are available. To progress through this developmental program, both require proper anatomical location and successful rearrangement of germ-line DNA to produce the plethora of antibodies and T cell receptors seen in the immune system. Because the amount of information is quickly becoming more than researchers can assimilate, a knowledge gap has opened between what is known about transcription factor activities during this process and what any one individual can recall. To help fill this gap, we have created the LymphTF Database. This database holds interactions between individual transcription factors and their specific targets at a given developmental time. It is our hope that storing the interactions in developmental time will allow elucidation of the regulatory networks that guide the process. Work for this project also included construction of a custom data-entry web page that automates many tasks associated with populating the database tables. The tables are related in multiple ways to allow storage of incomplete information on transcription factor activity, without having to replace existing records as details become available. The LymphTF DB is a relational MySQL database that can be accessed freely on the web at http://www.iupui.edu/~tfinterx/.
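The abstract describes the database only at a high level. The following is a minimal, hypothetical sketch of what a relational schema with these properties could look like; all table and column names are assumptions for illustration, not the actual LymphTF DB schema (which is implemented in MySQL).

```python
import sqlite3

# Hypothetical sketch in the spirit of the description above:
# transcription factors, target genes, and interactions tied to a
# developmental stage. SQLite is used only to keep the example
# self-contained; names are illustrative, not the real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transcription_factor (
    tf_id INTEGER PRIMARY KEY,
    name  TEXT NOT NULL
);
CREATE TABLE target_gene (
    gene_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL
);
CREATE TABLE developmental_stage (
    stage_id INTEGER PRIMARY KEY,
    lineage  TEXT NOT NULL,   -- e.g. 'B' or 'T'
    name     TEXT NOT NULL    -- e.g. 'pro-B', 'DN2'
);
-- stage_id and effect are nullable so an interaction can be recorded
-- before its timing or direction is known, then updated in place later
-- instead of being replaced.
CREATE TABLE interaction (
    interaction_id INTEGER PRIMARY KEY,
    tf_id    INTEGER NOT NULL REFERENCES transcription_factor(tf_id),
    gene_id  INTEGER NOT NULL REFERENCES target_gene(gene_id),
    stage_id INTEGER REFERENCES developmental_stage(stage_id),
    effect   TEXT             -- e.g. 'activates' or 'represses'
);
""")
```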
582

A Database Supported Modeling Environment for Pandemic Planning and Course of Action Analysis

Ma, Yifei 24 June 2013 (has links)
Pandemics, such as the 2009 H1N1 and 2003 SARS outbreaks, can significantly impact public health and society. In addition to analyzing historical epidemic data, computational simulation of epidemic propagation processes and disease control strategies can help us understand the spatio-temporal dynamics of epidemics in the laboratory. Consequently, the public can be better prepared and the government can control future epidemic outbreaks more effectively. Recently, epidemic propagation simulation systems, which use high performance computing technology, have been proposed and developed to understand disease propagation processes. However, run-time infection situation assessment and intervention adjustment, two important steps in modeling disease propagation, are not well supported in these simulation systems. In addition, these simulation systems are computationally efficient, but most of them have limited capabilities in terms of modeling interventions in realistic scenarios. In this dissertation, we focus on building a modeling and simulation environment for epidemic propagation and propagation control strategies. The objective of this work is to design a modeling environment that supports the previously missing functions while performing well on expected features such as modeling fidelity, computational efficiency, and modeling capability. Our proposed methodologies for building such a modeling environment are: 1) decoupled and co-evolving models for disease propagation, situation assessment, and propagation control strategy, and 2) assessing situations and simulating control strategies using relational databases. Our motivation for exploring these methodologies is as follows: 1) decoupled and co-evolving models allow us to design modules for each function separately and make the design of this complex modeling system simpler, and 2) simulating propagation control strategies using relational databases improves the modeling capability of the environment and the productivity of the people using it. To evaluate our proposed methodologies, we have designed and built a loosely coupled and database-supported epidemic modeling and simulation environment. With detailed experimental results and realistic case studies, we demonstrate that our modeling environment provides the missing functions and greatly enhances many expected features, such as modeling capability, without significantly sacrificing computational efficiency and scalability. / Ph. D.
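The abstract does not give implementation details; the sketch below is a hedged illustration of the general idea of run-time situation assessment expressed against a relational database. The table layout, the attack-rate measure, and the 5% threshold are all assumptions for illustration, not the system described in the dissertation.

```python
import sqlite3

# Illustrative only: run-time situation assessment as a SQL query over
# simulated infection records, used to decide where an intervention
# (e.g. a school closure) might be triggered. Schema and threshold are
# assumptions, not the dissertation's actual design.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person    (person_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE infection (person_id INTEGER, day INTEGER);
""")
# ... a simulation engine would populate these tables at each timestep ...

def regions_to_intervene(conn, day, threshold=0.05):
    """Return regions whose cumulative attack rate exceeds the threshold."""
    rows = conn.execute("""
        SELECT p.region,
               1.0 * COUNT(DISTINCT i.person_id)
                   / COUNT(DISTINCT p.person_id) AS attack_rate
        FROM person p
        LEFT JOIN infection i
               ON i.person_id = p.person_id AND i.day <= ?
        GROUP BY p.region
        HAVING 1.0 * COUNT(DISTINCT i.person_id)
                   / COUNT(DISTINCT p.person_id) > ?
    """, (day, threshold))
    return [region for region, _ in rows]
```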
583

The optimization of Database queries by using a dynamic caching policy on the application side of a system

Granbohm, Martin, Nordin, Marcus January 2019 (has links)
With IP traffic and data sets continuously growing, and IT companies becoming more and more dependent on large data sets, it is more important than ever to optimize the load time of queries. IT companies have also become more aware of the importance of delivering content quickly to the end user, because slower response times can affect the perceived quality of a product or system, which in turn can have a negative impact on revenue. In this paper, we develop and implement a new dynamic cache management system, with the cache on the application side of the system, and test it against well-established caching policies. By studying known caching strategies and research that takes the current database load into account, with attributes such as a query's historical frequency, and incorporating this into our algorithm, we developed a dynamic caching policy that uses a logarithmic calculation involving historical query frequency together with query response time to compute a weight for a specific query. The weight gives priority in relation to the other queries residing within the cache, which yields a performance increase over existing caching policies. The results show an 11-12% performance increase over LRU, a 15% performance increase over FIFO, and a substantial performance increase over using the database directly with MySQL caching both enabled and disabled.
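The abstract does not give the exact weighting formula; the sketch below is one plausible reading, assuming weight = log(1 + historical frequency) x response time and eviction of the lowest-weight entry. It illustrates the general approach, not the authors' implementation.

```python
import math

class WeightedQueryCache:
    """Hedged sketch of a frequency/latency-weighted cache (not the
    authors' exact algorithm): each cached query gets a weight of
    log(1 + historical frequency) * observed response time, and the
    lowest-weight entry is evicted first. The formula is an assumption
    based on the abstract's description."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # query -> (result, frequency, response_time)

    def _weight(self, frequency, response_time):
        return math.log(1 + frequency) * response_time

    def get(self, query):
        """Return a cached result and bump the query's frequency, or None."""
        if query in self.entries:
            result, freq, rt = self.entries[query]
            self.entries[query] = (result, freq + 1, rt)
            return result
        return None

    def put(self, query, result, response_time):
        """Cache a result, evicting the lowest-weight entry if full."""
        if query not in self.entries and len(self.entries) >= self.capacity:
            victim = min(
                self.entries,
                key=lambda q: self._weight(self.entries[q][1], self.entries[q][2]),
            )
            del self.entries[victim]
        freq = self.entries[query][1] if query in self.entries else 1
        self.entries[query] = (result, freq, response_time)
```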
584

Low-latency Estimates for Window-Aggregate Queries over Data Streams

Bhat, Amit 01 January 2011 (has links)
Obtaining low-latency results from window-aggregate queries can be critical to certain data-stream processing applications. Due to a DSMS's lack of control over incoming data (typically, because of delays and bursts in data arrival), timely results for a window-aggregate query over a data stream cannot be obtained with guarantees about the results' accuracy. In this thesis, I propose a technique, which I term prodding, to obtain early result estimates for window-aggregate queries over data streams. The early estimates are obtained in addition to the regular query results. The proposed technique aims to maximize the contribution to a result-estimate computation from all the stateful operators across a multi-level query plan. I evaluate the benefits of prodding using real-world and generated data streams having different patterns in data arrival and data values. I conclude that, in various DSMS applications, prodding can generate low-latency estimates of window-aggregate query results. The main factors affecting the degree of inaccuracy in such estimates are: the aggregate function used in a query, the patterns in arrivals and values of stream data, and the aggressiveness of demanding the estimates. The utility of the estimates obtained using prodding should be optimized by tuning the aggressiveness of result-estimate demands to the specific latency and accuracy needs of a business, considering any available knowledge about patterns in the incoming data.
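The abstract does not spell out how an estimate is formed; the toy function below merely illustrates the flavor of an early window-aggregate estimate, assuming a SUM aggregate scaled by the fraction of the window observed so far and roughly uniform arrivals. It is not the prodding mechanism itself.

```python
def early_sum_estimate(partial_sum, window_elapsed_fraction):
    """Illustrative only (not the thesis's prodding technique): estimate
    the final SUM of a time window by scaling the running partial sum by
    the inverse of the fraction of the window observed so far. Assumes
    roughly uniform arrivals; bursty streams would need a better model."""
    if window_elapsed_fraction <= 0:
        return None  # nothing observed yet, so no estimate
    return partial_sum / window_elapsed_fraction

# Example: 40% of the window has elapsed with a running sum of 120,
# giving an estimated final sum of 300.
print(early_sum_estimate(120, window_elapsed_fraction=0.4))
```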
585

Continuation and discontinuation of benzodiazepine prescriptions: A cohort study based on a large claims database in Japan / ベンゾジアゼピン処方の継続と中止:大規模レセプトデータを用いたコホート研究

Takeshima, Nozomi 23 May 2016 (has links)
Kyoto University / 0048 / New system, doctoral course / Doctor of Medicine / 甲第19890号 / 医博第4139号 / 新制||医||1016(附属図書館) / 32967 / Department of Medicine, Graduate School of Medicine, Kyoto University / (Chief examiner) Professor 川上 浩司, Professor 福原 俊一, Professor 村井 俊哉 / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
586

Multipaged implementation of MRDS on UNIX

Pal, Jatinder. January 1984 (has links)
No description available.
587

Implementing QT-selectors and updates for a primary memory version of Aldat

Tsakalis, Maria. January 1987 (has links)
No description available.
588

Implementation of a domain algebra and a functional syntax for a relational database system

Van Rossum, Ted. January 1983 (has links)
No description available.
589

Algorithms and data structures for the implementation of a relational database system

Orenstein, J. A. January 1982 (has links)
No description available.
590

Practical and consistent database replication

Lin, Yi, 1972- January 2007 (has links)
No description available.
