  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Μαζική ανάλυση δεδομένων κυτταρομετρίας ροής με τη χρήση σχεσιακών βάσεων δεδομένων / Mass analysis of flow cytometry data using relational databases

Αθανασοπούλου, Πολυξένη 31 August 2012 (has links)
Η κυτταρομετρία ροής (Flow Cytometry–FC), είναι μία σύγχρονη αυτοματοποιημένη τεχνική ανάλυσης των φυσικοχημικών χαρακτηριστικών των κυττάρων και των σωματιδίων, η οποία επιτρέπει την μεμονωμένη μέτρησή τους, καθώς διέρχονται σε νηματική ροή από ένα σταθερό σημείο, που προσπίπτει ακτίνα laser. Η ουσιαστική χρήση της FC είναι η προσφορά της σε διάγνωση και παρακολούθηση ασθενών με νοσήματα που συνοδεύονται από παρουσία παθολογικών κυττάρων σε διάφορα βιολογικά υγρά ή και στερεούς ιστούς κατάλληλα επεξεργασμένους. Το αποτέλεσμα της κυτταρομετρικής ανάλυσης είναι μία πληθώρα μετρήσεων φθορισμού, καθώς και των δύο μετρήσεων πρόσθιου (Forward Scatter–FS) και πλάγιου (Side Scatter-SS) σκεδασμού, που εξαρτώνται από τα φυσικά χαρακτηριστικά κάθε κυττάρου. Μετά την ανάλυση των δεδομένων από τον Ηλεκτρονικό Υπολογιστή (Η/Υ) του κυτταρομετρητή, τα αποτελέσματα παρουσιάζονται υπό τη μορφή μονοπαραμετρικών ή πολυπαραμετρικών κατανομών. Στην ανάλυση που χρησιμοποιήθηκε (με χρήση 5 φθοριοχρωμάτων), ο κυτταρομετρητής ροής παρήγαγε 7 τιμές για κάθε ένα από τα 30.000 κύτταρα περίπου που μετρήθηκαν σε κάθε πρωτόκολλο. Με τη χρήση των Η/Υ μπορούμε να αναλύσουμε γρήγορα και αξιόπιστα όλον αυτό τον μεγάλο όγκο δεδομένων εφαρμόζοντας μοντέλα βάσεων δεδομένων. Η βασική δομή του σχεσιακού μοντέλου δεδομένων αναπαριστάται με ένα πίνακα, στον οποίο αποθηκεύονται δεδομένα, σε στήλες και γραμμές, τα οποία αφορούν μία συγκεκριμένη οντότητα. Οι σχέσεις των πινάκων περιγράφουν τoν τρόπο σύνδεσης διαφορετικών οντοτήτων, οι οποίες συνδυαστικά δημιουργούν λογικούς πίνακες, που με τη σειρά τους περιγράφουν πιο σύνθετες οντότητες. Κατά αυτόν τον τρόπο μπορούμε να κάνουμε περαιτέρω συγκρίσεις μεταξύ των εξετάσεων των ασθενών, που ίσως καταλήξουν σε ευνοϊκά συμπεράσματα, όσον αφορά την πρόγνωση και την θεραπεία κυρίως των νεοπλασματικών νοσημάτων του αίματος. Ο ρόλος της FC σε αιματολογικά νοσήματα όπως τα μυελοδυσπλαστικά σύνδρομα (ΜΔΣ) είναι ακόμα υπό διερεύνηση. 
Τα ΜΔΣ είναι νοσήματα με σημαντική κλινική και αιματολογική ετερογένεια, κάτι που καθιστά σαφή την ανάγκη μαζικής ανάλυσης των δεδομένων τους, για την αναγνώριση ομοιόμορφων υποομάδων με κοινά γνωρίσματα και άρα ενός πληροφοριακού μοντέλου ανάλυσης που θα διευκολύνει την λήψη των κατάλληλων θεραπευτικών επιλογών. Η παρούσα εργασία ασχολείται με τις πολυπαραμετρικές εξετάσεις των ΜΔΣ, την πληροφορία των οποίων είναι ικανή να παρέχει η FC. Θα γίνει προσπάθεια να καταγραφούν αναλυτικά όλα τα απαραίτητα βήματα, έτσι ώστε σε δεύτερο χρόνο να αναλυθεί μαζικά όλη αυτή η πληροφορία μέσω ενός σχεσιακού μοντέλου βάσεων δεδομένων. / Flow cytometry (FC) is a modern automated technique for analysing the physicochemical characteristics of cells and particles, which allows them to be measured individually as they pass in a single-file stream through a fixed point illuminated by a laser beam. The principal use of FC is in the diagnosis and monitoring of patients with diseases characterised by the presence of abnormal cells in various biological fluids, or in suitably processed solid tissues. The result of a cytometric analysis is a wealth of fluorescence measurements, together with the forward-scatter (FS) and side-scatter (SS) measurements, which depend on the physical characteristics of each cell. After the data have been analysed by the cytometer's computer, the results are presented as single-parameter or multi-parameter distributions. In the analysis used here (with 5 fluorochromes), the flow cytometer produced 7 values for each of the approximately 30,000 cells measured in each protocol. With computers we can analyse this large volume of data quickly and reliably by applying database models. The basic structure of the relational data model is the table, which stores data in columns and rows relating to a specific entity. Relationships between tables describe how different entities are connected; in combination they form logical tables, which in turn describe more complex entities. In this way, further comparisons can be made between patients' examinations, which may lead to useful conclusions regarding the prognosis and treatment of neoplastic blood diseases in particular. The role of FC in haematological diseases such as the myelodysplastic syndromes (MDS) is still under investigation. MDS are diseases of considerable clinical and haematological heterogeneity, which makes clear the need for mass analysis of their data in order to identify uniform subgroups with common features, and hence for an information model of analysis that will support the choice of appropriate treatment. The present work deals with the multi-parameter MDS examinations whose information FC is able to provide. All the necessary steps are recorded in detail, so that at a later stage all of this information can be analysed en masse through a relational database model.
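The relational structure the abstract describes — events stored in rows and columns, keyed to the protocol they belong to, with gating expressed as queries — can be sketched as follows. This is only an illustration of the idea; the table names, column names, and measurement values are hypothetical, not taken from the thesis:

```python
import sqlite3

# Hypothetical schema: one row per measured cell (event), keyed to the
# protocol (tube) it was acquired in. Names and values are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE protocol (
    protocol_id INTEGER PRIMARY KEY,
    patient_id  TEXT NOT NULL,
    run_date    TEXT NOT NULL
);
CREATE TABLE event (
    event_id    INTEGER PRIMARY KEY,
    protocol_id INTEGER NOT NULL REFERENCES protocol(protocol_id),
    fs REAL, ss REAL,                                 -- forward / side scatter
    fl1 REAL, fl2 REAL, fl3 REAL, fl4 REAL, fl5 REAL  -- 5 fluorochromes
);
""")

conn.execute("INSERT INTO protocol VALUES (1, 'P001', '2012-08-31')")
conn.executemany(
    "INSERT INTO event (protocol_id, fs, ss, fl1, fl2, fl3, fl4, fl5) "
    "VALUES (1, ?, ?, ?, ?, ?, ?, ?)",
    [(520.0, 310.0, 1.2, 0.4, 2.2, 0.1, 0.9),
     (610.0, 420.0, 0.8, 1.9, 0.3, 0.2, 1.1)],
)

# A "gate" expressed relationally: count events inside a scatter window.
n, = conn.execute(
    "SELECT COUNT(*) FROM event WHERE fs BETWEEN 500 AND 600 AND ss < 400"
).fetchone()
print(n)
```

Comparisons across patients' examinations then become joins between `protocol` and `event` rather than manual re-analysis of each acquisition file.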
82

Návrh databáze pro připojení systému SAP jako zdroje dat pro webovou aplikaci / Database design for connecting SAP as a data source for a Web application

MARHOUN, Lukáš January 2016 (has links)
The thesis deals with connecting the SAP ERP system to a local MS SQL Server database using SAP BI tools, with data synchronization between the systems and advanced use of the T-SQL language to prepare data for web applications and reports written in PHP. The thesis contains a brief overview of the SAP system and of the possibilities for connecting to it. The general principles of the described solution can be used in conjunction with other systems and programming languages.
83

Modelagem e implementação de banco de dados clínicos e moleculares de pacientes com câncer e seu uso para identificação de marcadores em câncer de pâncreas / Database design and implementation of clinical and molecular data of cancer patients and its application for biomarker discovery in pancreatic cancer

Ester Risério Matos Bertoldi 20 October 2017 (has links)
O adenocarcinoma pancreático (PDAC) é uma neoplasia de difícil diagnóstico precoce e cujo tratamento não tem apresentado avanços expressivos desde a última década. As tecnologias de sequenciamento de nova geração (next generation sequencing - NGS) podem trazer importantes avanços para a busca de novos marcadores para diagnóstico de PDACs, podendo também contribuir para o desenvolvimento de terapias individualizadas. Bancos de dados são ferramentas poderosas para integração, padronização e armazenamento de grandes volumes de informação. O objetivo do presente estudo foi modelar e implementar um banco de dados relacional (CaRDIGAn - Cancer Relational Database for Integration and Genomic Analysis) que integra dados disponíveis publicamente, provenientes de experimentos de NGS de amostras de diferentes tipos histopatológicos de PDAC, com dados gerados por nosso grupo no IQ-USP, facilitando a comparação entre os mesmos. A funcionalidade do CaRDIGAn foi demonstrada através da recuperação de dados clínicos e dados de expressão gênica de pacientes a partir de listas de genes candidatos, associados com mutação no oncogene KRAS ou diferencialmente expressos em tumores identificados em dados de RNAseq gerados em nosso grupo. Os dados recuperados foram utilizados para a análise de curvas de sobrevida que resultou na identificação de 11 genes com potencial prognóstico no câncer de pâncreas, ilustrando o potencial da ferramenta para facilitar a análise, organização e priorização de novos alvos biomarcadores para o diagnóstico molecular do PDAC. / Pancreatic ductal adenocarcinoma (PDAC) is a type of cancer that is difficult to diagnose early, and its treatment has not improved significantly over the last decade. Next-generation sequencing (NGS) technology may contribute to the discovery of new biomarkers, the development of diagnostic strategies and applications in personalised therapy. Databases are powerful tools for the integration, normalization and storage of large volumes of data. The main objective of this study was the design and implementation of a relational database, CaRDIGAn (Cancer Relational Database for Integration and Genomic Analysis), to integrate publicly available data from NGS experiments on PDAC patients with data generated by our group at IQ-USP, allowing comparison between the two data sources. Its functionality was tested by retrieving clinical and expression data for lists of candidate genes, either differentially expressed in our samples or associated with KRAS mutation. The output of these queries was used to fit patient survival curves, which led to the identification of 11 genes potentially useful for PDAC prognosis. CaRDIGAn is thus a tool for data storage and analysis, with promising applications in the identification and prioritization of new biomarkers for molecular diagnosis of PDAC.
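The survival-curve step mentioned above is typically a Kaplan-Meier product-limit estimate over (time, event) pairs retrieved from the database. A minimal pure-Python sketch follows; the observation data are made up for illustration, not CaRDIGAn output:

```python
# Minimal Kaplan-Meier product-limit estimator (illustrative data only).
# event = 1 means a death was observed; event = 0 means the patient was
# censored (last seen alive) at that time.
def kaplan_meier(observations):
    """Return [(time, survival_probability)] at each observed death time."""
    observations = sorted(observations)
    n_at_risk = len(observations)
    survival, curve = 1.0, []
    i = 0
    while i < len(observations):
        t = observations[i][0]
        deaths = sum(1 for time, ev in observations if time == t and ev == 1)
        removed = sum(1 for time, ev in observations if time == t)
        if deaths:
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
        i += removed
    return curve

data = [(2, 1), (3, 0), (5, 1), (5, 1), (8, 0), (11, 1)]
curve = kaplan_meier(data)
print(curve)
```

Fitting one curve per candidate-gene expression group (high vs. low) and comparing them is what allows potential prognostic genes to be ranked.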
84

Metadados de Bancos de Dados Relacionais: Extração e Exposição com o Protocolo OAI-PMH / Metadata of Relational Databases: Extraction and Exposition with the OAI-PMH Protocol

KOWATA, Elisabete Tomomi 11 September 2011 (has links)
Information about a particular subject can be stored in different repositories such as databases, digital libraries, spreadsheets, text files, web pages, etc. In this context of heterogeneous data sources, querying (possibly in natural language), integrating information and promoting interoperability are tasks that depend, among other factors, on the prior knowledge a user has of each information source: its location, its owner, a description of its content, and so on. More specifically, in the case of databases, this information is usually not stored in the catalogue of the database management system; to obtain it, one must resort to the knowledge of the database administrator. Another such factor is the absence of search engines for databases on the web that could access and expose the information in those repositories, since the data are confined to the organizations themselves. In an information-sharing environment, it is highly relevant to make accessible the metadata that describe a data source, regardless of the medium and format in which it is stored. This study describes a mechanism to promote the interoperability of relational databases with other information sources through the extraction and exposure of metadata using the OAI-PMH protocol. / Informações sobre um determinado assunto podem estar armazenadas em diferentes repositórios como banco de dados, bibliotecas digitais, planilhas eletrônicas, arquivos textos, páginas na web etc. Nesse contexto de fontes de dados heterogêneas, consultar, possivelmente em linguagem natural, integrar informações e promover interoperabilidade são tarefas que dependem, dentre outros fatores, do conhecimento prévio que um usuário tem sobre a localização, o proprietário, a descrição do conteúdo de cada fonte de informação.
Mais especificamente, no caso de bancos de dados, essas informações não são, em geral, armazenadas no catálogo de um sistema gerenciador de bancos de dados; para obtê-las é necessário recorrer ao conhecimento do administrador desse banco. Outro fator que evidencia essa dependência é a ausência de mecanismos de busca a bancos de dados na web que acessam e tornam disponíveis as informações contidas nesses repositórios, devido ao fato desses dados estarem limitados às próprias organizações. Em um ambiente de compartilhamento de informações, é altamente relevante tornar possível o acesso aos metadados que descrevem uma fonte de dados, independentemente do meio e do formato em que esteja armazenada. Este trabalho tem como objetivo descrever um mecanismo para promover interoperabilidade de bancos de dados relacionais com outras fontes de informações, por meio da extração e exposição dos metadados usando o protocolo OAI-PMH.
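The two halves of the mechanism — reading schema metadata out of the DBMS catalogue and wrapping it in a harvestable record — can be sketched briefly. This is an illustration of the idea only, assuming a hypothetical table and a simplified Dublin-Core-like payload such as an OAI-PMH repository would serve for GetRecord; it is not the thesis's actual mapping:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical source table whose schema we want to expose as metadata.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE paciente (id INTEGER PRIMARY KEY, nome TEXT, nascimento TEXT)"
)

def table_metadata(conn, table):
    """Read (column, type) pairs from the DBMS catalogue (SQLite PRAGMA)."""
    cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
    return [(name, ctype) for _, name, ctype, *_ in cols]

def to_dc(table, columns):
    """Wrap the schema as a Dublin-Core-like XML record."""
    record = ET.Element("oai_dc:dc", {
        "xmlns:oai_dc": "http://www.openarchives.org/OAI/2.0/oai_dc/",
        "xmlns:dc": "http://purl.org/dc/elements/1.1/",
    })
    ET.SubElement(record, "dc:title").text = table
    for name, ctype in columns:
        ET.SubElement(record, "dc:description").text = f"{name}:{ctype}"
    return ET.tostring(record, encoding="unicode")

xml_record = to_dc("paciente", table_metadata(conn, "paciente"))
print(xml_record)
```

A harvester can then discover what the database holds without any access to the database administrator's private knowledge.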
85

Establishing a computer-based data system for early communication intervention in South Africa

Kritzinger, Alta M. (Aletta Margaretha) 19 March 2004 (has links)
The study identifies the increase in populations at risk for communication disorders, worldwide and in South Africa, as one of the reasons for research to develop early communication intervention (ECI) services as a societal responsibility in South Africa. Since ECI is largely an unknown entity in the South African health system, despite sharing several of its objectives, the dire need for data on at-risk populations validates the development of a computer-based relational data system as a 21st-century research tool for ECI. Underpinnings for the development of a research database for ECI were obtained from the use of database management systems for early intervention in the USA, identified as the leader in the application of database technology in the field of speech-language pathology. The aim of the study was to develop and establish a computerized database system to describe the characteristics of young children at risk for communication disorders enrolled in an existing ECI programme. Using a descriptive survey as research design, a rich description of 153 subjects and their families was obtained. The findings relating to the subjects' multiple-risk profiles revealed results not extensively described or emphasized in the literature, indicating the in-depth analysis that becomes possible when a database approach to research is utilized. The complex risk profile found in the subgroup of subjects with cleft lip and palate is one example calling for further investigation. The results also indicated the critical importance of early identification of risk events throughout a child's life to improve the efficacy of ECI services. Further results emphasized the important role of parents in identifying the early signs of risk for communication disorders in their children, provided they are equipped with the necessary knowledge. A conceptual framework for the early identification of risks for communication disorders is proposed for best practice in ECI in South Africa.
The study concluded that the CHRIB database system was successfully applied in the empirical research and is now established as a versatile 21st century research tool to be utilized in second generation research in ECI in South Africa. / Thesis (DPhil(Communication Pathology))--University of Pretoria, 2005. / Speech-Language Pathology and Audiology / Unrestricted
86

Návrh systému pro účely administrativy fotbalového svazu / Design of a Football Association System for Administration Purposes

Vařacha, Jan January 2015 (has links)
This master's thesis aims to design a suitable system, based on a relational database, for the administrative activities of the District Football Association. The relational database should be managed primarily by the association secretary and, to a lesser extent, by members of the association's specialist committees. The database should be able to hold all the information and records that have so far been handled on paper (match fixtures, fines, clubs' fees, players' punishments, etc.). Routine administrative work, such as reading, inserting, deleting and updating data, will be carried out through a web interface and should not place any special demands on users' computer skills.
87

Analýza dat na sociálních sítích s využitím dolování dat / Analysis of Data on Social Networks Based on Data Mining

Fešar, Marek January 2014 (has links)
The thesis presents general principles of data mining and also focuses on the specific needs of social networks. Certain social networks, chosen with respect to their popularity and availability to Czech users, are discussed from various points of view, with the benefits and drawbacks of each mentioned. Afterwards, one suitable API is selected for further analysis. The project explains harvesting data via the Twitter API and the process of mining data from this particular network. The design of a mining algorithm inspired by density-based clustering methods is described. The implementation is explained in its own chapter, preceded by a thorough explanation of the MVC architectural pattern. In the end, some examples of the use of the gathered knowledge are shown, as well as possibilities for future extensions.
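The density-based clustering idea the abstract refers to can be illustrated with a minimal DBSCAN-style algorithm in pure Python. This is a generic sketch of the method family, not the thesis's actual algorithm; the points and the `eps`/`min_pts` parameters are made up:

```python
from math import dist

# Minimal DBSCAN-style clustering of 2-D points. A point with at least
# min_pts neighbours within eps (itself included) is a core point; clusters
# grow from core points, and -1 marks noise.
def dbscan(points, eps=1.0, min_pts=3):
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1           # noise (may be claimed by a cluster later)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in neigh if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point adopted by the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            neigh_j = [k for k in range(len(points)) if dist(points[j], points[k]) <= eps]
            if len(neigh_j) >= min_pts:
                queue.extend(k for k in neigh_j if labels[k] is None)
    return labels

pts = [(0, 0), (0.5, 0), (0, 0.5), (0.4, 0.4),
       (10, 10), (10.3, 10), (10, 10.4), (50, 50)]
labels = dbscan(pts, eps=1.0, min_pts=3)
print(labels)
```

Applied to harvested tweets, the coordinates would instead be feature vectors (e.g. derived from time, location, or content similarity), but the clustering mechanics are the same.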
88

TSQL2 interpret nad post-relačními databázemi v Oracle Database / Processor of TSQL2 on Post-Relational Databases in Oracle Database

Szkandera, Jan January 2011 (has links)
This thesis focuses on temporal databases and their multimedia and spatial extensions. The introduction summarizes results in the area of temporal database research: the key concepts of the TSQL2 language and the post-relational extensions of Oracle Database are introduced. The main part of the thesis is the design of an interpreter that acts as a layer between the user application and the relational database. The next part discusses the enforcement of integrity constraints in temporal databases. The result of this work is a functional interpreter of the TSQL2 language that is able to store post-relational data.
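The core job of such an interpreter layer is to rewrite temporal constructs into standard SQL over explicit validity-time columns. A deliberately tiny sketch of that idea follows: it handles a single made-up `VALIDTIME PERIOD [a - b)` pattern with a string rewrite, whereas a real TSQL2 processor parses a full grammar:

```python
import re
import sqlite3

# Rewrite one TSQL2-like construct, VALIDTIME PERIOD [a - b), into a
# standard-SQL half-open-interval overlap predicate over vt_start/vt_end.
# Illustrative only; not the thesis's actual grammar or translation rules.
def rewrite_validtime(query):
    def repl(m):
        a, b = m.group(1), m.group(2)
        return f"vt_start < '{b}' AND vt_end > '{a}'"
    return re.sub(r"VALIDTIME PERIOD \[(\S+) - (\S+)\)", repl, query)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE salary (emp TEXT, amount INT, vt_start TEXT, vt_end TEXT)")
conn.executemany("INSERT INTO salary VALUES (?, ?, ?, ?)", [
    ("ana", 1000, "2010-01-01", "2011-01-01"),
    ("ana", 1200, "2011-01-01", "2012-01-01"),
])

tsql2 = ("SELECT amount FROM salary WHERE emp = 'ana' "
         "AND VALIDTIME PERIOD [2011-06-01 - 2011-07-01)")
rows = conn.execute(rewrite_validtime(tsql2)).fetchall()
print(rows)
```

The application keeps writing temporal queries, and the layer makes them executable on an ordinary relational engine.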
89

Performance benchmarking of data-at-rest encryption in relational databases

Istifan, Stewart, Makovac, Mattias January 2022 (has links)
This thesis measures how relational database management systems that use data-at-rest encryption with varying AES key lengths perform in terms of transaction throughput, through a controlled experiment. By measuring the effect in a series of load tests followed by statistical analysis, the impact of adopting a specific data-at-rest encryption algorithm could be shown. The results of the experiment were measured as the average transactional throughput of SQL operations. An OLTP workload in the benchmarking tool HammerDB was used to generate a transactional workload, which in turn was used to run load tests on SQL databases encrypted with different AES key lengths. The data gathered from these tests then underwent statistical analysis to either retain or reject the stated hypotheses. The statistical analysis of the different versions of the AES algorithm showed no significant difference in transaction throughput in the load-test results for MariaDB; statistically significant differences were, however, shown to exist when running the same tests on MySQL. These results answer the research question, "Is there a significant difference in transaction throughput between the AES-128, AES-192, and AES-256 algorithms used to encrypt data-at-rest in MySQL and MariaDB?". The conclusion is that the statistical evidence indicates a significant difference in transactional throughput between AES algorithms in MySQL but not in MariaDB. This led us to investigate transactional database performance between MySQL and MariaDB further, measuring a specific type of transaction to determine whether there was a difference in performance between the databases themselves when using the same encryption algorithm. The statistical evidence confirmed that MariaDB vastly outperformed MySQL in transactional throughput.
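A statistical comparison of two throughput samples of the kind described is commonly done with Welch's t-test. The sketch below computes the t statistic and approximate degrees of freedom in pure Python; the transactions-per-minute numbers are invented for illustration and are not HammerDB results from the thesis:

```python
from statistics import mean, variance

# Welch's t statistic for two independent samples with (possibly) unequal
# variances, plus the Welch-Satterthwaite degrees-of-freedom approximation.
def welch_t(a, b):
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Invented transactions-per-minute samples for two key lengths.
aes128 = [10210, 10380, 10120, 10290, 10340]
aes256 = [9830, 9940, 9760, 9890, 9910]
t, df = welch_t(aes128, aes256)
print(round(t, 2), round(df, 1))
```

The t statistic is then compared against the critical value of the t distribution at `df` degrees of freedom to keep or reject the no-difference hypothesis.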
90

Generátor databázové vrstvy aplikací / Application Database Layer Generator

Kuboš, Jaroslav Unknown Date (has links)
This diploma thesis deals with the design and implementation of a framework for developing the database persistence layer. The framework is easy to use while keeping the code elegant. It supports object-oriented programming features such as inheritance and collections; further features include object versioning and lazy loading. Object metadata are obtained through the reflection facilities of the .NET framework. The framework does not use string literals for identification (of classes or attributes), even in object queries, so most checks are done by the compiler.
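The reflection idea — deriving table names, columns, and insert statements from class metadata so that no identifier appears as a string literal at the call site — can be sketched in Python using dataclass introspection. The original framework targets .NET; this illustrates only the principle, with hypothetical class and type mappings:

```python
import sqlite3
from dataclasses import astuple, dataclass, fields

# Hypothetical persistent class; the schema below is derived from it.
@dataclass
class Person:
    id: int
    name: str
    age: int

_SQL_TYPES = {int: "INTEGER", str: "TEXT", float: "REAL"}

def create_table(conn, cls):
    """Generate CREATE TABLE from class metadata via reflection."""
    cols = ", ".join(f"{f.name} {_SQL_TYPES[f.type]}" for f in fields(cls))
    conn.execute(f"CREATE TABLE {cls.__name__} ({cols})")

def save(conn, obj):
    """Generate a parameterized INSERT from the object's own fields."""
    cls = type(obj)
    marks = ", ".join("?" for _ in fields(cls))
    conn.execute(f"INSERT INTO {cls.__name__} VALUES ({marks})", astuple(obj))

conn = sqlite3.connect(":memory:")
create_table(conn, Person)
save(conn, Person(1, "Ada", 36))
row = conn.execute("SELECT id, name, age FROM Person").fetchone()
print(row)
```

Because the mapping flows from the class definition, renaming a field is caught at the definition site rather than failing at runtime inside a hand-written SQL string, which is the compile-time-checking benefit the abstract describes.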
