  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
881

VPU MIF studentų elektroninis žiniaraštis / An Electronic Database of the Faculty of Mathematics and Informatics of Vilnius Pedagogical University

Šerplė, Jurgita 16 August 2007 (has links)
Šiuo darbu buvo siekiama įgyvendinti tikslą – sukurti Vilniaus Pedagoginio Universiteto Matematikos ir informatikos fakultetui elektroninį žiniaraštį, kuris palengvintų darbą su studentų informacija. Sistemos kūrimui buvo pasirinkta atviro kodo programinė įranga PHP, MySQL ir Apache. Teorinėje dalyje yra apžvelgiama programinė įranga, organizuojamas darbas ir aprašoma įgyvendinta sistema. Buvo savarankiškai išmokta dirbti su pasirinkta programine įranga ir sukurta sistema „elektroninis žiniaraštis“. / This work set out to create an electronic register for the Faculty of Mathematics and Informatics of Vilnius Pedagogical University that would ease work with student information. The open-source stack of PHP, MySQL and Apache was chosen for building the system. The theoretical part reviews the software, describes how the work was organised, and documents the implemented system. The chosen software was mastered independently, and the "elektroninis žiniaraštis" ("electronic register") system was created.
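The abstract above describes a PHP/MySQL register; as a rough illustration of the kind of student-record schema such a system rests on, here is a minimal sketch with SQLite standing in for MySQL. All table and column names are invented for illustration, not taken from the thesis.

```python
import sqlite3

# Illustrative two-table schema: students and their grades.
# Names and values are assumptions; SQLite stands in for the MySQL backend.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    faculty TEXT NOT NULL
);
CREATE TABLE grade (
    student_id INTEGER REFERENCES student(id),
    course     TEXT NOT NULL,
    mark       INTEGER CHECK (mark BETWEEN 1 AND 10)
);
""")
conn.execute("INSERT INTO student VALUES (1, 'J. Serple', 'Mathematics and Informatics')")
conn.execute("INSERT INTO grade VALUES (1, 'Databases', 9)")
row = conn.execute(
    "SELECT s.name, g.course, g.mark FROM student s JOIN grade g ON g.student_id = s.id"
).fetchone()
print(row)  # ('J. Serple', 'Databases', 9)
```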
882

Lock-based concurrency control for XML

Ahmed, Namiruddin. January 2006 (has links)
As XML gains popularity as the standard data representation model, there is a need to store, retrieve and update XML data efficiently. McXML is a native XML database system that has been developed at McGill University and represents XML data as trees. McXML supports both read-only queries and six different kinds of update operations. To support concurrent access to documents in the McXML database, we propose a concurrency control protocol called LockX which applies locking to the nodes in the XML tree. LockX maximizes concurrency by considering the semantics of McXML's read and write operations in its design. We evaluate the performance of LockX as we vary factors such as the structure of the XML document and the proportion of read operations in transactions. We also evaluate LockX's performance on the XMark benchmark [16] after extending it with suitable update operations [13]. Finally, we compare LockX's performance with two snapshot-based concurrency control protocols (SnaX, OptiX) that provide a committed snapshot of the data for client operations.
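The abstract does not spell out LockX's lock modes, so the following is only a simplified sketch of the underlying idea of node-granularity locking in an XML tree: each operation locks its target node and the ancestors on the path to the root, acquiring in a fixed root-first order to avoid deadlock. The class and function names are invented for this illustration.

```python
import threading

# Simplified node-granularity locking on an XML tree.
# Every access takes an exclusive lock on the target node and all its ancestors,
# always in root-to-leaf order so two transactions cannot deadlock on the path.
class XMLNode:
    def __init__(self, tag, parent=None):
        self.tag, self.parent, self.children = tag, parent, []
        self.lock = threading.Lock()
        if parent:
            parent.children.append(self)

def path_to_root(node):
    while node:
        yield node
        node = node.parent

def lock_node_access(node):
    path = list(path_to_root(node))[::-1]  # root first: fixed acquisition order
    for n in path:
        n.lock.acquire()
    return path

def unlock(path):
    for n in reversed(path):
        n.lock.release()

root = XMLNode("catalog")
item = XMLNode("item", root)
held = lock_node_access(item)
print([n.tag for n in held])  # ['catalog', 'item']
unlock(held)
```

LockX itself goes further, tailoring lock modes to the semantics of McXML's read operation and its six kinds of updates, which is what lets it admit more concurrency than this exclusive-only sketch.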
883

Automated Storage Layout for Database Systems

Ozmen, Oguzhan 08 1900 (has links)
Modern storage systems are complex. Simple direct-attached storage devices are giving way to storage systems that are flexible, network-attached, consolidated and virtualized. Today, storage systems have their own administrators, who use specialized tools and expertise to configure and manage storage resources. As a result, database administrators are no longer in direct control of the design and configuration of their database systems' underlying storage resources. This introduces problems because database physical design and storage configuration are closely related tasks, and the separation makes it more difficult to achieve a good end-to-end design. For instance, the performance of a database system depends strongly on the storage layout of database objects, such as tables and indexes, and the separation makes it hard to design a storage layout that is tuned to the I/O workload generated by the database system. In this thesis we address this problem and attempt to close the information gap between database and storage tiers by addressing the problem of predicting the storage (I/O) workload that will be generated by a database management system. Specifically, we show how to translate a database workload description, together with a database physical design, into a characterization of the I/O workload that will result. Such a characterization can directly be used by a storage configuration tool and thus enables effective end-to-end design and configuration spanning both the database and storage tiers. We then introduce our storage layout optimization tool, which leverages such workload characterizations to generate an optimized layout for a given set of database objects. We formulate the layout problem as a non-linear programming (NLP) problem and use the I/O characterization as input to an NLP solver. 
We have incorporated our I/O estimation technique into the PostgreSQL database management system and our layout optimization technique into a database layout advisor. We present an empirical assessment of the cost of both tools as well as the efficacy and accuracy of their results.
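As a toy stand-in for the layout problem described above, the sketch below assigns database objects with estimated I/O rates to storage devices so as to minimise the load on the busiest device. The thesis formulates this as a continuous non-linear program handed to an NLP solver; the brute-force discrete version here only illustrates the inputs and the objective, and the object names and rates are invented.

```python
from itertools import product

# Estimated I/O rates per database object (requests/sec) -- invented numbers,
# standing in for the I/O workload characterization the thesis derives.
io_rate = {"orders_table": 120.0, "orders_index": 80.0, "customers": 40.0}
devices = ["disk0", "disk1"]
objects = sorted(io_rate)

def max_load(assignment):
    # Objective: the load on the most heavily loaded device.
    load = {d: 0.0 for d in devices}
    for obj, dev in zip(objects, assignment):
        load[dev] += io_rate[obj]
    return max(load.values())

# Exhaustive search over all object-to-device assignments (fine for 3 objects;
# the real tool uses an NLP solver instead).
best = min(product(devices, repeat=len(objects)), key=max_load)
layout = dict(zip(objects, best))
print(layout, max_load(best))
```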
884

Public Health in Georgia, An Internet Advocacy Tool: A Capstone Project

Garcia, Patricia B 01 August 2010 (has links)
Local Public Health programs are at the frontline of Georgia’s struggle to prevent disease, prolong citizens’ lives, and promote health. In recent history, neither Georgia’s citizens nor the state government have fully understood the breadth of the Public Health system and all its beneficiaries. Unfortunately, this lack of comprehension about the scope of Public Health programs has led to a significant decrease in support and funding. This capstone project describes the systematic development of an online educational portal that is a central tool in the Public Health advocacy campaign in Georgia, “Partner-Up for Public Health”. An electronic database of Public Health statistics for all of Georgia’s counties (n=159) was created using secondary sources. The database presents data on four primary domains: geographic/population descriptive statistics, broad social determinants of health, health indicators, and health outcomes. Within these domains there are a total of twenty-one indices. This project is important because it collects Public Health information into one centralized location for easy retrieval and delivers its content without technical jargon. A hallmark of the online portal is that it mobilizes the information and tools Georgians need to advocate for local Public Health action, programs, funding, and political attention.
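A record in the county database might be organised along the four domains named in the abstract, for example as below. The domain grouping comes from the abstract; the individual indices and values are illustrative placeholders, not the project's actual data.

```python
# One county record grouped by the four primary domains from the abstract.
# The index names and numbers are invented placeholders.
county_record = {
    "county": "Fulton",
    "geography_population": {"population": 920_581},
    "social_determinants": {"pct_below_poverty": 15.2},
    "health_indicators": {"adult_obesity_pct": 27.1},
    "health_outcomes": {"infant_mortality_per_1000": 7.8},
}
domains = [k for k in county_record if k != "county"]
print(len(domains))  # 4
```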
885

Duomenų bazių projektavimo metodų ir priemonių analizė / Investigation of database design methods and tools

Šafranovič, Jekaterina 13 June 2005 (has links)
Database design methods and tools are investigated in this paper. The action-diagram construction process is examined, and three data design methodologies, IDEF1X, IE and Chen's, are considered. The main concepts are described and classified in detail, the problems of database design are stated, and table normal forms are presented. On the basis of this theory, a model of a company's employee income-tax database was designed, using a four-step Chen diagram procedure to build the data model. The database was then implemented in the MS Access environment, and a program for importing data from MS Excel was developed.
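As an illustration of the kind of normalized income-tax schema the paper designs, the sketch below uses SQLite in place of MS Access. The table names, column names, and the flat tax rate are assumptions made for the example, not the paper's actual design.

```python
import sqlite3

# Illustrative normalized schema: one employee row, one payment row per month.
# SQLite stands in for the MS Access environment used in the work.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE salary_payment (
    employee_id INTEGER REFERENCES employee(id),
    month       TEXT NOT NULL,
    gross       REAL NOT NULL
);
""")
conn.execute("INSERT INTO employee VALUES (1, 'A. Jonaitis')")
conn.execute("INSERT INTO salary_payment VALUES (1, '2005-01', 2000.0)")

TAX_RATE = 0.33  # assumed flat rate for the example only
(gross,) = conn.execute("SELECT gross FROM salary_payment").fetchone()
print(round(gross * TAX_RATE, 2))  # 660.0
```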
886

Kosminio žingsninio strateginio žaidimo kūrimas. Duomenų bazės projektavimas ir realizavimas / Development of a Space Turn-Based Strategy Game: Database Design and Implementation

Valčiukas, Remigijus 07 September 2010 (has links)
Darbas skirtas sukurti invariantišką kosminių žingsninių strateginių žaidimų duomenų bazę. Darbo metu buvo atlikta žaidimo modelio analizė, pagal kurią buvo suprojektuota duomenų bazė tinkanti bet kokiam žaidimo modeliui. Taip pat žaidimo varikliuke buvo sukurtas modulis dirbti su duomenų baze bei atlikus duomenų bazės efektyvumo tyrimą realizuotas duomenų bazės optimizavimo metodas konkrečiam žaidimui. / The purpose of this work is to create a space turn-based strategy game database that is invariant to the game model. Based on an analysis of the game model, a database suitable for any game model was designed. A module for working with the database was also implemented in the game engine and, after a study of database efficiency, a database optimisation method was implemented for a specific game.
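The abstract states that the database is invariant to the game model but not how that invariance is achieved. One common way to get it is an entity-attribute-value layout, sketched below; whether the thesis used exactly this pattern is an assumption, so treat this as a generic illustration.

```python
import sqlite3

# Entity-attribute-value sketch: any game object becomes an entity row plus
# arbitrary attribute rows, so the schema never changes with the game model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity    (id INTEGER PRIMARY KEY, kind TEXT NOT NULL);
CREATE TABLE attribute (entity_id INTEGER REFERENCES entity(id),
                        name TEXT NOT NULL, value TEXT NOT NULL);
""")
conn.execute("INSERT INTO entity VALUES (1, 'ship')")
conn.executemany("INSERT INTO attribute VALUES (1, ?, ?)",
                 [("speed", "4"), ("attack", "7")])
attrs = dict(conn.execute(
    "SELECT name, value FROM attribute WHERE entity_id = 1"))
print(attrs)  # {'speed': '4', 'attack': '7'}
```

The abstract's closing remark about a per-game optimisation step fits this pattern: an EAV layout is flexible but slow to query, so a concrete game would typically get materialised columns or indexes on its hot attributes.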
887

Atvirojo kodo duomenų bazių serverių analizė aptarnavimo paslaugų teikimo valdymo sistemos pagrindu / Analysis of open source database servers using service management system

Jankevičius, Vytautas 26 May 2006 (has links)
Jankevičius, Vytautas (2006). “Analysis of open source database servers using service management system”. MA graduation paper. Kaunas: Faculty of Informatics, Kaunas University of Technology. 49 p. The aim of this work is to analyse popular open-source database servers and determine which is optimal for a service management system. To reach this goal, the following tasks were carried out: • a theoretical analysis of open-source database servers was performed; • similarities and differences between the open-source database servers were determined; • the functional features of service management systems were analysed and compared; • a special class and methods needed for the research were designed; • the optimal database server for the service management system was determined using that class and those methods. The work comprises 49 pages: the first part is 13 pages, the second 13 pages, and the third 14 pages, with 3 tables and 32 figures. The first part reviews and compares service management systems and open-source database servers. The second part presents the design of the service management system. The third part reports the results of the open-source database server study.
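A minimal sketch of the kind of benchmark harness such a comparison needs: run the same workload against each candidate server and time it. Here an in-memory SQLite database stands in for the open-source servers the thesis actually measures, and the workload is invented.

```python
import sqlite3
import time

# Run one fixed workload against a connection and time the query phase.
# In the thesis's setting the same harness would be pointed at each
# candidate server (MySQL, PostgreSQL, ...) in turn.
def run_workload(conn, rows=1000):
    conn.execute("CREATE TABLE ticket (id INTEGER PRIMARY KEY, status TEXT)")
    conn.executemany("INSERT INTO ticket (status) VALUES (?)",
                     [("open",)] * rows)
    start = time.perf_counter()
    (n,) = conn.execute(
        "SELECT COUNT(*) FROM ticket WHERE status = 'open'").fetchone()
    return n, time.perf_counter() - start

conn = sqlite3.connect(":memory:")
count, elapsed = run_workload(conn)
print(count, elapsed >= 0.0)  # 1000 True
```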
888

Duomenų modeliavimo ir schemos tikrinimo metodika / Data Modeling and Schema Testing Methodology

Paulauskas, Vytautas 11 January 2007 (has links)
We present a methodology for automatically testing database schemas. It can be used to create a new CASE tool or to extend an existing one, and the schema tests can be integrated into the process of generating the physical database model. This makes schema validation considerably easier and lowers the likelihood of errors in the early stages of modelling. According to our analysis, very few tools cover such a methodology, so in the course of this work we designed and implemented a new schema-testing tool, “DbTestAddin”, which works as a Microsoft Office Visio 2003 add-in. With this tool any database schema can easily be tested and compared with a real database. In the future, several updates and functional enhancements could make the tool still more attractive to use.
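The validation step at the heart of such a tool can be sketched as comparing the columns a model expects with what the live database actually has. DbTestAddin itself runs inside Microsoft Office Visio 2003; this standalone check is only an illustration, with invented table and column names.

```python
import sqlite3

# Expected schema as the model declares it (invented example).
expected = {"customer": {"id", "name", "email"}}

# Live database that drifted from the model: the email column is missing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

def schema_diff(conn, expected):
    """Return, per table, the expected columns missing from the live schema."""
    diff = {}
    for table, cols in expected.items():
        actual = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
        if actual != cols:
            diff[table] = cols - actual
    return diff

print(schema_diff(conn, expected))  # {'customer': {'email'}}
```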
889

Automatic Identification of Protein Characterization Articles in support of Database Curation

Denroche, Robert 01 February 2010 (has links)
Experimentally determining the biological function of a protein is a process known as protein characterization. Establishing the role a specific protein plays is a vital step toward fully understanding the biochemical processes that drive life in all its forms. In order for researchers to efficiently locate and benefit from the results of protein characterization experiments, the relevant information is compiled into public databases. To populate such databases, curators, who are experts in the biomedical domain, must search the literature to obtain the relevant information, as the experiment results are typically published in scientific journals. The database curators identify relevant journal articles, read them, and then extract the required information into the database. In recent years the rate of biomedical research has greatly increased, and database curators are unable to keep pace with the number of articles being published. Consequently, maintaining an up-to-date database of characterized proteins, let alone populating a new database, has become a daunting task. In this thesis, we report our work to reduce the effort required from database curators in order to create and maintain a database of characterized proteins. We describe a system we have designed for automatically identifying relevant articles that discuss the results of protein characterization experiments. Classifiers are trained and tested using a large dataset of abstracts, which we collected from articles referenced in public databases, as well as small datasets of manually labeled abstracts. We evaluate both a standard and a modified naïve Bayes classifier and examine several different feature sets for representing articles. Our findings indicate that the resulting classifier performs well enough to be considered useful by the curators of a characterized protein database. / Thesis (Master, Computing) -- Queen's University, 2010-01-28 18:45:17.249
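The standard multinomial naïve Bayes classifier evaluated in the thesis can be sketched over bag-of-words counts as below. The training snippets are invented toy data, not the thesis's curated abstract datasets, and the thesis's modified variant and richer feature sets are not shown.

```python
import math
from collections import Counter

# Toy labeled abstracts: "relevant" means the article reports a protein
# characterization experiment. These four snippets are invented.
train = [
    ("protein function characterized by enzymatic assay", "relevant"),
    ("kinase activity measured in purified protein", "relevant"),
    ("genome sequence assembly pipeline described", "irrelevant"),
    ("survey of database curation workflows", "irrelevant"),
]

counts = {"relevant": Counter(), "irrelevant": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(text):
    # Multinomial naive Bayes with Laplace (add-one) smoothing,
    # uniform class priors, scored in log space.
    vocab = len(set().union(*(set(c) for c in counts.values())))
    def log_score(label):
        c, total = counts[label], sum(counts[label].values())
        return sum(math.log((c[w] + 1) / (total + vocab))
                   for w in text.split())
    return max(counts, key=log_score)

print(classify("enzymatic assay of purified protein"))  # relevant
```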
890

Evaluation of a Statistical Model-Based Prediction of Mercury Concentrations in Ontario Sport Fish

DeLong, Eric 12 September 2012 (has links)
Since the mid-1970s, the Ontario (Canada) Ministry of the Environment (OMOE) has been collecting data on fish tissue mercury (Hg) contamination in provincial waterbodies. By 2004, approximately 160,000 fish from 86 species at over 1,600 sites had been tested for Hg. This large database is primarily used to issue advisories for safe human fish consumption via publication of the biennial Guide to Eating Ontario Sport Fish. Analysis to uncover spatio-temporal trends while maximising the use of data points is complicated by the non-random, heterogeneous sampling design. The National Descriptive Model for Mercury in Fish (NDMMF), developed by the United States Geological Survey (USGS), is a statistical model of Hg concentrations that can potentially mitigate these challenges by separating the spatio-temporal variability of fish-[Hg] sampling while accounting for the effects of species, size, and fish sample portion type. However, the NDMMF has not been fully exploited, likely for lack of rigorous evaluation. We conduct the first detailed investigation of the ability of the NDMMF to reproduce the observed fish-[Hg] in cool-water walleye (Sander vitreus) and warm-water yellow perch (Perca flavescens). Approximately two-thirds of both walleye and yellow perch [Hg]-length relationships could be accurately predicted using the NDMMF. For these cases, a majority (>85%) of the estimates fall within the same consumption advisory categories as the interpolated [Hg] value based on the observed data for an average-length fish. For the remaining instances, where the NDMMF fish [Hg]-length relationships differ significantly from those of the observed data, the NDMMF nevertheless yields similar results, with a majority (>75%) of [Hg] estimates still falling within the same consumption advisory categories.
For the small fraction of instances with inaccurate advisory categorization, the cases of conservative over-prediction (<18%) would be of little human health concern, as these would recommend fewer meals than the observed data suggest. For the few instances when [Hg] is under-predicted (<11%), the human health concern is relatively minor because the advisory classification is almost never (<1%) more than one category less restrictive. / Thesis (Master, Biology) -- Queen's University, 2012-08-29 15:31:54.349
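The NDMMF relates fish [Hg] to length among other factors; the simplest piece of such a model, a log-linear [Hg]-length regression fitted by ordinary least squares, can be sketched as below. The data points are invented, not OMOE measurements, and the full NDMMF additionally models site, year, species, and sample-portion effects.

```python
import math

# Toy measurements: mercury concentration rising roughly exponentially
# with fish length. These numbers are invented for illustration.
lengths = [30.0, 40.0, 50.0, 60.0]   # cm
hg      = [0.10, 0.18, 0.32, 0.57]   # ppm

# Ordinary least squares on log-transformed [Hg]: ln(Hg) = a + b * length.
y = [math.log(v) for v in hg]
n = len(lengths)
mx, my = sum(lengths) / n, sum(y) / n
slope = (sum((x - mx) * (v - my) for x, v in zip(lengths, y))
         / sum((x - mx) ** 2 for x in lengths))
intercept = my - slope * mx

def predict_hg(length):
    """Predicted [Hg] (ppm) at a given fish length (cm)."""
    return math.exp(intercept + slope * length)

print(round(predict_hg(45.0), 3))  # 0.239
```

A prediction like this for an average-length fish is what gets mapped onto a consumption advisory category and compared against the category implied by the observed data.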
