861

Automatic Physical Design for XML Databases

Elghandour, Iman January 2010 (has links)
Database systems employ physical structures such as indexes and materialized views to improve query performance, potentially by orders of magnitude. It is therefore important for a database administrator to choose the appropriate configuration of these physical structures (i.e., the appropriate physical design) for a given database. Deciding on the physical design of a database is not an easy task, and a considerable amount of research exists on automatic physical design tools for relational databases. XML database systems are increasingly being used for managing highly structured XML data, and support for XML data is being added to commercial relational database systems. This raises the important question of how to choose the appropriate physical design (i.e., the appropriate set of physical structures) for an XML database. Relational automatic physical design tools are not adequate for this task, so new research is needed in this area. In this thesis, we address the problem of automatic physical design for XML databases: the process of automatically selecting the best set of physical structures for a given database and a given query workload representing the client application's usage patterns of the data. We focus on recommending two types of physical structures: XML indexes and relational materialized views of XML data. For each of these structures, we study the recommendation process and present a design advisor that automatically recommends a configuration of physical structures given an XML database and a workload of XML queries. The recommendation process is divided into four main phases: (1) enumerating candidate physical structures, (2) generalizing candidate structures to generate further candidates that are useful to queries not seen in the given workload but similar to the workload queries, (3) estimating the benefit of the various candidate structures, and (4) selecting the best set of candidate structures for the given database and workload. We present a design advisor for recommending XML indexes, one for recommending materialized views, and an integrated design advisor that recommends both indexes and materialized views. A key characteristic of our advisors is that they are tightly coupled with the query optimizer of the database system and rely on the optimizer for enumerating and evaluating physical designs whenever possible. This makes our techniques suitable for any database system that complies with a set of minimum requirements listed in the thesis. We have implemented the index, materialized view, and integrated advisors in a prototype version of IBM DB2 V9, which supports both relational and XML data, and we experimentally demonstrate the effectiveness of their recommendations using this implementation.
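One simple way to picture the selection phase (4) is a greedy choice of candidate structures by estimated benefit per unit of storage under a disk-space budget. The sketch below is a hypothetical illustration only; the candidate names, benefit numbers, budget, and the greedy rule itself are assumptions, not the advisor's actual algorithm or interface.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str        # e.g. an XML index or a materialized-view definition
    benefit: float   # estimated workload cost reduction (the thesis obtains this from the optimizer)
    size_mb: float   # estimated storage cost

def select_structures(candidates, budget_mb):
    """Greedy sketch of phase (4): pick structures by benefit density until the budget is spent."""
    chosen, used = [], 0.0
    for c in sorted(candidates, key=lambda c: c.benefit / c.size_mb, reverse=True):
        if c.benefit > 0 and used + c.size_mb <= budget_mb:
            chosen.append(c)
            used += c.size_mb
    return chosen

# Illustrative candidates; a real advisor would produce these in phases (1)-(3).
candidates = [
    Candidate("idx_/order/item/@price", benefit=120.0, size_mb=40.0),
    Candidate("idx_//customer/name",    benefit=45.0,  size_mb=10.0),
    Candidate("mv_orders_by_region",    benefit=200.0, size_mb=150.0),
]
print([c.name for c in select_structures(candidates, budget_mb=100.0)])
```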
862

Risk of Stroke in Older Women Treated for Early Invasive Breast Cancer, Tamoxifen vs. Aromatase Inhibitors: A Population-Based Retrospective Cohort Study

Wijeratne, Don Thiwanka Dilshan 30 December 2010 (has links)
Tamoxifen and aromatase inhibitors are treatment options for women with breast cancer, and evidence on the risk of stroke is important in choosing between these two options. A systematic review of two randomized controlled trials and their nine related trial reports showed different methods of adverse event reporting and inconsistent estimates of stroke risk. In an observational cohort study of 5443 Ontario women aged 66 years or older with early-stage breast cancer, 86 ischemic stroke events (1.6%) occurred during 5 years of follow-up. There was no statistically significant difference in the risk of stroke between the hormone therapy groups [adjusted HR for tamoxifen compared to AI 1.330 (0.810, 2.179)]. Results were similar across cardiovascular disease risk groups and were robust to different follow-up periods and analytic methods. This study suggests that there is no significant difference in stroke risk between these treatment options.
863

Tinklinių duomenų bazių sandara tinklo paslaugų suradimui / Grid Database Structure for Web Service Discovery

Čupačenko, Aleksandr 24 September 2004 (has links)
Grids are collaborative distributed Internet systems characterized by large scale, heterogeneity, lack of central control, multiple autonomous administrative domains, unreliable components and frequent dynamic change. In such systems, it is desirable to maintain and query dynamic and timely information about active participants such as services, resources and user communities. The web services vision promises that programs are made more flexible, adaptive and powerful by querying Internet databases (registries) at runtime in order to discover information and network-attached building blocks, enabling the assembly of distributed higher-level components. In support of this vision, we introduce the Web Service Discovery Architecture (WSDA) and the hyper registry, a centralized database node for the discovery of dynamic distributed content. The hyper registry supports XQueries over a tuple set drawn from a dynamic XML data model.
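The abstract does not show the registry's query interface. As a rough illustration of the kind of discovery query such a registry answers, the sketch below filters a small XML tuple set of service descriptions with an XPath expression; the element names, and the use of Python's ElementTree rather than a full XQuery engine, are assumptions made only for the example.

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical registry snapshot: one <tuple> per published participant.
registry_xml = """
<registry>
  <tuple type="service"><endpoint>http://gridnode1/storage</endpoint><kind>storage</kind></tuple>
  <tuple type="service"><endpoint>http://gridnode2/compute</endpoint><kind>compute</kind></tuple>
  <tuple type="resource"><endpoint>http://gridnode3/cpu</endpoint><kind>compute</kind></tuple>
</registry>
"""

root = ET.fromstring(registry_xml)
# Discovery query: all service tuples that describe compute capabilities.
for t in root.findall("./tuple[@type='service']"):
    if t.findtext("kind") == "compute":
        print(t.findtext("endpoint"))
```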
864

Paskirstytų duomenų bazių replikavimo komponentas / Replication component of distributed databases

Zemblys, Tomas 05 January 2005 (has links)
Many organizations today use relational databases (RDBs). These databases may come from different vendors (Oracle, MS SQL Server, etc.), run on different operating systems and use different software. The main problem is that organizations that want to collaborate must exchange data, and when they use different data structures this exchange is very hard or even impossible. Integrating different databases requires a common standard that can be adapted to each of them, and one such standard is XML. The main purpose of this project was to analyse two methods for eliciting functional requirements (UML and Communication Action Loops), possible architectural decisions, and the separate replication streams of distributed databases. One of the possible architectural decisions is replication using XML, and an appropriate software tool was created on this basis. Using this tool, an administrator can manipulate and exchange data between two databases (MS SQL Server and Oracle) via XML. When the same data must be kept in two different databases, this project uses XML to write it to both.
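A minimal sketch of the replication idea, under assumptions not taken from the abstract (a hypothetical customers table, SQLite standing in for the MS SQL Server and Oracle connections): rows are exported from the source database into a vendor-neutral XML document, which is then imported into the target database.

```python
import sqlite3  # stand-in for the MS SQL Server / Oracle connections used in the project
import xml.etree.ElementTree as ET

def export_to_xml(conn, table):
    """Serialize every row of `table` into a vendor-neutral XML document."""
    root = ET.Element(table)
    for row_id, name in conn.execute(f"SELECT id, name FROM {table}"):
        row = ET.SubElement(root, "row")
        ET.SubElement(row, "id").text = str(row_id)
        ET.SubElement(row, "name").text = name
    return ET.tostring(root, encoding="unicode")

def import_from_xml(conn, xml_doc):
    """Replay the XML document into the target database."""
    for row in ET.fromstring(xml_doc):
        conn.execute("INSERT OR REPLACE INTO customers (id, name) VALUES (?, ?)",
                     (int(row.findtext("id")), row.findtext("name")))
    conn.commit()

src, dst = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO customers VALUES (1, 'ACME')")
import_from_xml(dst, export_to_xml(src, "customers"))
print(dst.execute("SELECT * FROM customers").fetchall())
```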
865

Korporatyvinės įmonės duomenų saugyklos modelio sudarymas ir tyrimas / Corporative enterprise data storage model development and analysis

Buškauskaitė, Laima 16 August 2007 (has links)
Modern business uses huge amounts of data, but within a company these data remain mere ballast unless they can be analysed and properly interpreted. Only data analysis with specialised software tools can sift the useful grains out of raw information and turn them into valuable knowledge that becomes the basis for correct business decisions. Using OLAP (On-Line Analytical Processing) tools, a data warehouse is created that allows data to be analysed quickly and conveniently, including data obtained from the different business management systems used in geographically distant units of an enterprise. What is OLAP? The term describes software products that allow business information to be analysed comprehensively in real time: interaction with such systems is interactive, and answers even to computationally demanding queries arrive within a few seconds. OLAP technology provides the means to transform and store information, create and execute queries, and generate a graphical user interface. OLAP systems are one of many products in the Business Intelligence group, and a considerable number of such systems exist, from simple MS SQL OLAP cubes to systems such as "Business Objects", "Cognos", "Corporate Planner" and "Microstrategy". Because OLAP systems are becoming more affordable, companies face the issue of how to choose the best product and then design and implement an OLAP system according to their business requirements. This research compares different OLAP systems at the functional and data-structure level in order to design and implement an OLAP system; the purpose of the work is to create and analyse OLAP data warehouse models of large corporations.
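The core OLAP operation the abstract alludes to, aggregating a measure across several dimensions of a cube, can be sketched as follows; the sales data, dimension names and the use of pandas are illustrative assumptions rather than anything taken from the thesis.

```python
import pandas as pd

# Hypothetical fact table: one row per sale, with two dimensions and one measure.
sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [100.0, 150.0, 80.0, 120.0],
})

# A two-dimensional slice of the cube: total revenue by region and quarter.
cube = sales.pivot_table(values="revenue", index="region", columns="quarter",
                         aggfunc="sum", margins=True, margins_name="Total")
print(cube)
```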
866

Ryšių su klientais valdymo sistema metaduomenų pagrindu / Metadata based CRM system

Krugiškis, Mantas 28 January 2008 (has links)
The main goal of this project is to create an information system based on a metadata level: an analysis of the existing portal market in Lithuania showed that current portals lack certain functional capabilities, with complicated portal management and data upload, difficult and expensive redesign, slow operation and high maintenance costs. The logical architecture of the portal, a model of system behaviour (using sequence diagrams), a relational database schema, a system testing model and the metabase level were designed; the system model was tested, logical and other mistakes were corrected, and a user manual was prepared. The resulting portal, "Būsto portalas" ("The Dwelling Portal"), makes it easier to change and extend the data structure, to upload new information, to change user capabilities and for users themselves to work with the portal. Thanks to the metabase level, data are accessed faster, and it is easier for the administrator to redesign the whole portal for a different purpose when its information domain changes. With a simple content management system, even an inexperienced administrator can maintain the portal, because the portal's operating logic is separated from the data it uses.
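The abstract does not describe the metabase schema, but the general idea of a metadata-driven portal, where the structure of entities is described as data instead of being hard-coded, can be sketched roughly as follows; the entity and attribute names are hypothetical.

```python
# Minimal metadata-driven storage sketch: the "metabase" describes entity types,
# so adding a new field is a data change rather than a schema redesign.
metabase = {
    "Listing": ["title", "price", "rooms"],   # hypothetical entity for a dwelling portal
}

records = []

def add_record(entity, **values):
    schema = metabase[entity]
    unknown = set(values) - set(schema)
    if unknown:
        raise ValueError(f"{entity} has no attributes {unknown}")
    records.append({"entity": entity, **values})

add_record("Listing", title="2-room flat", price=85000, rooms=2)

# Extending the structure at runtime: only the metadata is updated.
metabase["Listing"].append("district")
add_record("Listing", title="House", price=150000, rooms=5, district="Žirmūnai")
print(records)
```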
867

Požymių erdvės mažinimo metodų kokybės tyrimas / Comparison of methods for features space reduction

Vaišnoraitė, Giedrė 16 August 2007 (has links)
The aim of this master's thesis is to compare feature reduction methods for classification, which transform a given feature set into a set of lower dimension; the quality of classification in the transformed feature space must not suffer. The process of finding features that meet given constraints out of a large group of features is called feature reduction, and it divides into feature selection and feature extraction techniques. Feature selection picks the independent features that provide sufficient information for a satisfactory separation between the situations we want to discriminate, and the physical values of the selected features remain unchanged; redundant features may be identified by a feature clustering and selection algorithm, or the features with the highest correlation may be removed. Removing similar features implies faster training of the subsequent classifiers on the reduced feature space. Feature extraction works the other way round: the features are projected onto a reduced feature space by some transformation function, so the transformed features no longer carry the same physical meaning as in the original space. The transformation is an analytical function, and the challenge is to find a representative and informative transformation for the given feature set; very well known techniques are principal component analysis (PCA) and dimensionality reduction by auto-associative mapping using MLP neural networks. Four feature space reduction methods were analysed in this work, and for the first time a fuzzy integral method with a full measure was used for feature selection. All methods were used with four publicly available databases of real data; for each database, the hypothesis of equal mean values was tested, i.e. two different mean classification errors were compared and judged similar or different using Student's t-test and its T statistic. All experimental results are presented in figures and summarised in tables, and the conclusions give a short description of them. The methods were applied to the very well known k-nearest neighbor... [to full text]
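As an illustration of the evaluation protocol described above (reduce the feature space, classify, and compare mean classification errors with Student's t-test), the following sketch uses scikit-learn and SciPy on a bundled dataset; the dataset, the number of principal components and the cross-validation setup are assumptions, not the thesis's actual experimental design.

```python
from scipy import stats
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5)

# Classification error with the original features vs. a PCA-reduced space.
full = 1 - cross_val_score(make_pipeline(StandardScaler(), knn), X, y, cv=10)
reduced = 1 - cross_val_score(
    make_pipeline(StandardScaler(), PCA(n_components=5), knn), X, y, cv=10)

# Student's t-test on the paired mean classification errors of the two settings.
t_stat, p_value = stats.ttest_rel(full, reduced)
print(f"mean error full={full.mean():.3f} reduced={reduced.mean():.3f} "
      f"t={t_stat:.2f} p={p_value:.3f}")
```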
868

Patobulintos objektinio modelio transformacijos į reliacinių duomenų bazių schemas UML CASE įrankiuose / Complete transformations of object models to relational database schemas in UML CASE tools

Maslauskas, Raimondas 16 August 2007 (has links)
The goal of this work is the creation of complete transformation algorithms from object models to relational databases and their implementation in UML CASE tools. Complete transformations are understood as transformations that can map all constructs of the object model onto relational database constructs, something existing CASE tools do not do. Object orientation has enabled tools that create the object model of a software system together with a database for storing information about its objects: most software engineering tools can create an object-oriented model and transform it into a relational database (RDB) model, after which program code is generated from the object model and an SQL script from the RDB model. An analysis of CASE tools shows, however, that class models are not completely transformed into relational database schemas, i.e. none of the existing tools performs a full transformation. The proposed solution is to take the transformations of existing CASE tools and extend them up to the theoretical model of a complete transformation from the class model into the relational database model. The analysis of prototypes of the created algorithms showed that this is feasible, and an experimental implementation in the UML CASE tool Magic Draw confirmed the effectiveness of the solution; the final result is an improved CASE tool with complete transformations from object models to relational database schemas.
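A toy sketch of the kind of object-to-relational mapping such a transformation performs (each class becomes a table, each to-one association becomes a foreign key); the class definitions and mapping rules shown are simplified assumptions, not the complete transformation algorithms developed in the thesis.

```python
# Toy class model: each class has typed attributes and optional to-one associations.
class_model = {
    "Customer": {"attrs": {"name": "VARCHAR(100)"}, "refs": {}},
    "Order":    {"attrs": {"total": "DECIMAL(10,2)"}, "refs": {"customer": "Customer"}},
}

def to_relational(model):
    """Map every class to a table; map each to-one association to a foreign key."""
    ddl = []
    for cls, spec in model.items():
        cols = ["  id INTEGER PRIMARY KEY"]
        cols += [f"  {name} {sql_type}" for name, sql_type in spec["attrs"].items()]
        for ref, target in spec["refs"].items():
            cols.append(f"  {ref}_id INTEGER REFERENCES {target.lower()}(id)")
        ddl.append(f"CREATE TABLE {cls.lower()} (\n" + ",\n".join(cols) + "\n);")
    return "\n".join(ddl)

print(to_relational(class_model))
```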
869

Reliacinių duomenų bazių saugumo modelio tyrimas / The research on security model of relational databases

Brobliauskas, Žilvinas 28 August 2009 (has links)
This master's thesis by Žilvinas Brobliauskas presents a theoretical study of a multilateral security model for relational databases. The results include the main requirements formulated for such a model; a proposed model that allows search (range queries) and the sum and average aggregate functions to be evaluated over encrypted numeric data without decrypting it on the RDBMS side; and the identified advantages and disadvantages of the proposed model. A demonstration program implementing the model is provided as a proof of concept.
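The abstract does not say which cryptographic construction the proposed model uses. As one well-known way of obtaining sums (and hence averages) over ciphertexts without decrypting anything on the server, the sketch below uses a toy Paillier-style additively homomorphic scheme with tiny, insecure parameters; it illustrates the general idea only and is not the thesis's model.

```python
import math, random

# Toy Paillier keypair with tiny fixed primes -- insecure, for illustration only.
p, q = 293, 433
n, n2, g = p * q, (p * q) ** 2, p * q + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
def L(x): return (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)                  # modular inverse of L(g^lam mod n^2)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The server can multiply ciphertexts to obtain the encrypted SUM without ever seeing plaintext.
salaries = [1200, 950, 1870]
encrypted_sum = math.prod(encrypt(s) for s in salaries) % n2
total = decrypt(encrypted_sum)
print(total, total / len(salaries))   # SUM and AVG are recovered only on the client side
```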
870

Lituanistikos informacinė sistema / Information system “Lituanistika”

Kučiukas, Tomas 26 August 2010 (has links)
The research area of this work is software design methods that allow existing systems to be extended, and their application to a library information system; the research object is the design and development of a bibliographic database of scientific publications matching the concept of Lithuanian studies (lituanistika). The aim of the work is to improve the possibilities for evaluating and disseminating such publications and to automate their submission, peer review and dissemination processes. International abstract and bibliographic databases in the social sciences and humanities cover the most important and significant documents in their fields; they are organised thematically (e.g. MLA International Bibliography, CSA Sociological Abstracts) and/or regionally (e.g. British Humanities Index), and searching them is an effective way to find information quickly about studies performed worldwide and about published scholarly works in these disciplines. The number of Lithuanian academic journals and research publications in international databases has been growing in recent years, but Lithuanian efforts to develop such databases remain only fragmentary. An international database of Lithuanian studies in the social sciences and humanities would make the work of Lithuanian scholars more visible at the national and international level and support recognition of their excellence; it should be built on the best practices of international social sciences and humanities databases.
