991

FiLDB : An Architecture for Securely Connecting Databases to the Internet

Hermansson, Tobias January 2001 (has links)
Information systems are becoming ever more numerous and contain ever more information. Many information systems contain information about people that is secret or sensitive. Such information should not be allowed to leak from a database. This problem grows as databases are made available via the Internet. There have been a number of publicised occasions where hackers have bypassed security barriers and obtained information that was not intended to be publicly available. There have also been cases where system administrators have made mistakes, so that classified information was published on the Internet. The FiLDB architecture uses existing technology together with new components to provide an environment in which databases can be connected to the Internet without losing security. Two databases, with physical separation between them, are used as a security measure. Secret information is stored only in an internal database, which is separated from the Internet. An external database contains the information that is to be used from the Internet, and hence sensitive information is not stored in this database.
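A minimal sketch of the idea, not taken from the thesis: assuming a hypothetical person table, the internal database keeps the full record including sensitive columns, while the external, Internet-facing database holds only the public subset.

-- Internal database (not reachable from the Internet): full record.
CREATE TABLE person (
    person_id   INT PRIMARY KEY,
    name        VARCHAR(100),
    email       VARCHAR(100),
    ssn         CHAR(12),        -- sensitive: never leaves the internal database
    diagnosis   VARCHAR(200)     -- sensitive: never leaves the internal database
);

-- External database (Internet-facing): only the non-sensitive subset,
-- populated by a controlled export from the internal database.
CREATE TABLE person_public (
    person_id   INT PRIMARY KEY,
    name        VARCHAR(100),
    email       VARCHAR(100)
);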
992

Principles of a Central Database for System Interfaces during Train Development

Lännhult, Peter January 2011 (has links)
This thesis has developed a database solution for storing interface data for the different systems in a train; the interface data is used in the design of data communication between the systems in the vehicles. The database solution focuses on the following problems: revision control of project-related data, consistency of interface data between documentation and the database, the possibility to roll the database back to an earlier revision, and the possibility to extract delta documents between two revisions in the database. To demonstrate the database solution, a user interface program was created that communicates with the database. Revision control has been solved by dividing the project-related data into three sections: an approved, a modified, and a revised section. The approved section always contains the latest approved data, so that data can be read even while it is under revision. The modified section contains data that is currently being changed. Obsolete data is stored in the revised section. To avoid inconsistency between interface data stored in Word documents and in the database, the data is extracted from the database and inserted into tables in the Word documents; the documents contain bookmarks where the tables are inserted. Algorithms for rolling back the database to an earlier revision and for extracting delta documents were created, but they are not implemented in the user interface program. As a result of this thesis, the interface data is revision controlled and no data is removed from the database during the change process; data is only moved between sections with different flags and revision numbers. Data is removed only if the database is rolled back to an earlier revision. The functionality to transfer data from the database into tables in Word documents has been verified.
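As an illustration of the three-section revision model described above (a sketch under assumed names, not the schema from the thesis), each row could carry a section flag and a revision number, and a change would move rows between sections instead of deleting them:

-- Hypothetical table; names and columns are illustrative only.
CREATE TABLE interface_signal (
    signal_id   INT          NOT NULL,
    system_name VARCHAR(50)  NOT NULL,
    description VARCHAR(200),
    revision    INT          NOT NULL,
    section     VARCHAR(10)  NOT NULL,   -- 'APPROVED', 'MODIFIED' or 'REVISED'
    PRIMARY KEY (signal_id, revision)
);

-- Reading always uses the approved section, even while a change is in progress.
SELECT signal_id, system_name, description
FROM interface_signal
WHERE section = 'APPROVED';

-- Approving a change: the old approved row becomes revised (obsolete),
-- and the modified row becomes the new approved revision.
UPDATE interface_signal SET section = 'REVISED'  WHERE signal_id = 42 AND section = 'APPROVED';
UPDATE interface_signal SET section = 'APPROVED' WHERE signal_id = 42 AND section = 'MODIFIED';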
993

Matching in MySQL : A comparison between REGEXP and LIKE

Carlsson, Emil January 2012 (has links)
When searching for data in multiple datasets, there is a risk that not all datasets are of the same type. Some might be in XML format; others might use a relational database. This can discourage developers from searching two separate datasets, because crafting different search methods for different datasets can be time consuming. One option that is often overlooked is the use of regular expressions. Once a search expression has been created, it can be used in a majority of database engines in a "WHERE" clause and also in other kinds of data sources, such as XML. This option is, however, at best poorly documented, and few tests have been made of how it performs against traditional database search methods such as "LIKE". Multiple experiments comparing "LIKE" and "REGEXP" in MySQL have been performed for this paper. The results of these experiments show that the possible overhead of using regular expressions can be justified by the gain of using only one search phrase over several data sources.
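For reference, the two operators compared in the paper look like this in MySQL; the articles table and the patterns below are hypothetical examples, not the paper's test data:

-- Pattern matching with LIKE: % matches any sequence of characters.
SELECT title FROM articles WHERE title LIKE '%database%';

-- The same match expressed with REGEXP; the regular expression can also be
-- reused against non-SQL sources such as XML documents.
SELECT title FROM articles WHERE title REGEXP 'database';

-- REGEXP allows richer patterns, e.g. matching 'database' or 'databases'
-- only at the start of the title.
SELECT title FROM articles WHERE title REGEXP '^databases?';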
994

Open source : evaluation of database modeling CASE

Othman, Bassam January 2003 (has links)
Open source software is becoming increasingly popular and many organizations use it; examples include Apache (used by over 50% of the world's web servers) and Linux (a popular operating system). There are mixed opinions about the quality of this type of software. The purpose of this study is to evaluate the quality of open source CASE-tools and compare it with the quality of proprietary CASE-tools. The evaluation concerns tools used for database modeling, where the DDL-generation capabilities of these tools are studied. The study is performed as a case study in which one open source tool (two, after experiencing difficulties with the first tool) and one proprietary tool are studied. The results of this study indicate that open source database modeling CASE-tools are not yet ready to challenge proprietary tools. However, software developed as open source usually evolves rapidly (compared to proprietary software), and a more mature open source tool could emerge in the near future.
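For context, DDL generation means turning a data model into CREATE TABLE statements such as the following; this is an illustrative example, not output from any of the evaluated tools:

CREATE TABLE customer (
    customer_id INT         NOT NULL,
    name        VARCHAR(80) NOT NULL,
    PRIMARY KEY (customer_id)
);

CREATE TABLE customer_order (
    order_id    INT  NOT NULL,
    customer_id INT  NOT NULL,
    order_date  DATE NOT NULL,
    PRIMARY KEY (order_id),
    FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
);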
995

Characterising and predicting amyloid mutations in proteins

Gardner, Allison January 2016 (has links)
A database, AmyProt, was developed that collated details of 32 human amyloid proteins associated with disease and 488 associated mutations and polymorphisms, of which 316 are classified as amyloid. A detailed profile of the mutations was developed in terms of location within domains and secondary structures of the proteins and the functional effects of the mutations. The data was used to test the hypothesis that mutations enhancing amyloidosis in human amyloid proteins have distinctive characteristics, in terms of specific location within proteins and physico-chemical characteristics, which differentiate them from non-amyloid-forming polymorphisms in amyloid proteins and from disease mutations and polymorphisms in non-amyloid disease-linked proteins. The aim was to use these characteristics to train a prediction algorithm for amyloid mutations that would provide more accurate predictions than current general disease prediction tools and amyloid prediction tools that focus on aggregating regions. 66 location-specific features and changes upon mutation in 366 amino acid propensities, derived from the amino acid index database AAindex, were analysed. A significant proportion of mutations were located within aggregating regions; however, the majority of mutations were not associated with these regions. An analysis of motifs showed that amyloid mutations had a significant association with transmembrane helix motifs such as GxxxG. Statistical analysis of substitution mutations, using substitution matrices, showed that amyloid mutations have a decrease in α-helix propensity and overall secondary structure propensity compared to the disease mutations and the disease and amyloid polymorphisms. Machine learning was used to reduce the large set of features to a set of 18 features. These included location near transmembrane helices; secondary structure features; transmembrane and extracellular domains; and four amino acid propensities: knowledge-based membrane propensity scale from 3D helix, α-helix propensity, partition coefficient, and normalized frequency of coil. The AmyProt mutations and non-amyloid polymorphisms were used to train and test the novel amyloid mutation prediction tool, AmyPred, the first tool developed purely to predict amyloid mutations. AmyPred predicts the amyloidogenicity of mutations as a consensus by majority vote (CMV) and mean probability (CMP) of 5 classifiers. Validation of AmyPred with 27 amyloid mutations and 20 non-amyloid mutations from the APP, Tau and TTR proteins gave classification accuracies of 0.7/0.71 (CMV/CMP), with an MCC of 0.4 (CMV) and 0.41 (CMP). AmyPred outperformed other tools such as SIFT (0.37), PolyPhen (0.36), and the amyloid consensus prediction tool MetAmyl (0.13). Finally, AmyPred was used to analyse p53 mutations to characterize amyloid and non-amyloid mutations within this protein.
996

A Non-functional evaluation of NoSQL Database Management Systems

Landbris, Johan January 2015 (has links)
NoSQL is essentially a family name for all database management systems (DBMS) that are not relational DBMS. The fast growth of social networks has led to a huge amount of unstructured data, which NoSQL DBMS are supposed to handle better than relational DBMS. Most published comparisons are between relational DBMS and NoSQL DBMS; in this paper, the comparison instead concerns non-functional properties of different types of NoSQL DBMS. Three of the most common NoSQL types are Document Stores, Key-Value Stores and Column Stores. The most widely used DBMS of those types are MongoDB, Redis and Apache Cassandra. After working with the databases and performing YCSB benchmarking, the conclusion is that if the database must handle an enormous amount of data, Cassandra is most probably the best choice. If speed is the most important property and all data fits in memory, Redis is probably the best suited database. If the database needs to be flexible and versatile, MongoDB is probably the best choice.
997

A database design for IDE

Yichong, Zhou, Chenxi, Zhang January 2014 (has links)
The thesis is the culmination of an academic degree and an important stepping stone for the student on the way to employment. Academic and industrial institutions rely on thesis students to explore research directions that may otherwise be overlooked. Consequently, an efficient process for connecting students with supervisors and relevant, viable thesis proposals is crucial for students, for academia as well as for the industry. A database can serve as the basis of a software application to facilitate such a process. Support for tackling concerns such as data persistence, redundancy and security, which are challenges in most application designs, is built into common database systems. In this work, we investigate how a database system can be leveraged as the foundation for an application that connects students with thesis proposals and supervisors.
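A minimal sketch of what such a database could contain; the tables and columns below are assumptions for illustration, not the design developed in the thesis:

CREATE TABLE student (
    student_id  INT          NOT NULL,
    name        VARCHAR(80)  NOT NULL,
    programme   VARCHAR(80),
    PRIMARY KEY (student_id)
);

CREATE TABLE supervisor (
    supervisor_id INT         NOT NULL,
    name          VARCHAR(80) NOT NULL,
    department    VARCHAR(80),
    PRIMARY KEY (supervisor_id)
);

CREATE TABLE thesis_proposal (
    proposal_id   INT          NOT NULL,
    title         VARCHAR(200) NOT NULL,
    supervisor_id INT          NOT NULL,
    student_id    INT,                      -- NULL until a student is assigned
    PRIMARY KEY (proposal_id),
    FOREIGN KEY (supervisor_id) REFERENCES supervisor (supervisor_id),
    FOREIGN KEY (student_id)    REFERENCES student (student_id)
);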
998

Données de tests non fonctionnels de l'ombre à la lumière : une approche multidimensionnelle pour déployer une base de données / On the Highlighting of Non-Functional Test Data : A Multidimensional Approach for Database Deployment

Brahimi, Lahcene 03 July 2017 (has links)
Choosing an appropriate database management system (DBMS) and/or execution platform for a given database (DB) is complex and tends to be time- and effort-intensive, since this choice has an important impact on the satisfaction of non-functional requirements (e.g., temporal performance or energy consumption). Indeed, a large number of tests have been performed to assess the quality of developed DBs. This assessment often involves metrics associated with non-functional requirements, and it leads to a mine of tests covering all phases of the DB design life cycle. Tests and their environments are usually published in scientific articles or on dedicated websites such as the Transaction Processing Council (TPC). This thesis therefore takes a special interest in capitalizing on and reusing performed tests to reduce and master the complexity of the DBMS/platform selection process. By analyzing the tests closely, we identify that each test concerns the data set, the execution platform, the addressed non-functional requirements, the queries used, etc. We propose an approach for conceptualizing and persisting all of these dimensions as well as the test results. Consequently, this thesis makes the following contributions: (1) a design model based on descriptive, prescriptive and ontological concepts to make the different dimensions explicit; (2) a multidimensional repository to store the test environments and their results; (3) a decision-making methodology based on a recommender system for DBMS and platform selection.
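As an illustration of the multidimensional idea (a hedged sketch, not the repository schema from the thesis), test results could be stored as a fact table surrounded by dimension tables for the data set, the platform, the DBMS and the non-functional requirement:

CREATE TABLE dim_dataset     (dataset_id  INT PRIMARY KEY, name VARCHAR(80), size_gb INT);
CREATE TABLE dim_platform    (platform_id INT PRIMARY KEY, cpu VARCHAR(80), ram_gb INT);
CREATE TABLE dim_dbms        (dbms_id     INT PRIMARY KEY, name VARCHAR(80), version VARCHAR(20));
CREATE TABLE dim_requirement (req_id      INT PRIMARY KEY, name VARCHAR(80)); -- e.g. response time, energy

CREATE TABLE fact_test_result (
    test_id        INT PRIMARY KEY,
    dataset_id     INT REFERENCES dim_dataset     (dataset_id),
    platform_id    INT REFERENCES dim_platform    (platform_id),
    dbms_id        INT REFERENCES dim_dbms        (dbms_id),
    req_id         INT REFERENCES dim_requirement (req_id),
    measured_value DOUBLE PRECISION
);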
999

Proposing Molecularly Targeted Therapies Using an Annotated Drug Database Querying Algorithm in Cutaneous Melanoma

Aaron Pavlik, Schneider, Phillip, Cropp, Cheryl January 2015 (has links)
Class of 2015 Abstract / Objectives: The aim of this study was to develop a computational process capable of hypothesizing potential chemotherapeutic agents for the treatment of skin cutaneous melanoma, given an annotated chemotherapy molecular target database and patient-specific genetic tumor profiles. Methods: Aberrational profiles for a total of 246 melanoma patients indexed by the Cancer Genome Atlas (TCGA), for whom complete somatic mutational, mRNA expression, and protein expression data were available, were queried against an annotated targeted therapy database using Visual Basic for Applications and Python in conjunction with Microsoft Excel. Identities of positively and negatively associated therapy-profile matches were collected and ranked. Results: Subjects included in the analysis were predominantly Caucasian (93%), non-Hispanic (95.9%), female (59%), and characterized as having stage III clinical disease (37.4%). The most frequently occurring positive and negative therapy associations were determined to be 17-AAG (tanespimycin; 42.3%) and sorafenib (41.9%), respectively. Mean total therapy hypotheses per patient did not differ significantly with regard to either positive or negative associations (p=0.1951 and 0.4739 by one-way ANOVA, respectively) when stratified by clinical melanoma stage. Conclusions: The developed process does not appear to offer discernibly different therapy hypotheses amongst clinical stages of cutaneous melanoma based upon genetic data alone. The therapy-matching algorithm may be useful in quickly retrieving potential therapy hypotheses based upon the genetic characteristics of one or many subjects specified by the user.
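A rough illustration of the matching idea in SQL (the authors used VBA and Python with Excel; the tables below are hypothetical examples, not the study's data model):

-- Hypothetical tables: patient_aberration lists altered genes per patient,
-- drug_target annotates each therapy with the gene it targets and the
-- direction of the association (positive or negative).
SELECT pa.patient_id,
       dt.drug_name,
       dt.association            -- 'positive' or 'negative'
FROM   patient_aberration AS pa
JOIN   drug_target        AS dt ON dt.gene_symbol = pa.gene_symbol
ORDER  BY pa.patient_id, dt.drug_name;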
1000

Drug Therapy Interactions with New Oral Anticoagulants in Oncology Patients: a Retrospective Database Analysis 2013 - 2015

Blaskowsky, Jeffrey, Odeh, Adam, Stuntz, Tyler, McBride, Ali January 2016 (has links)
Class of 2016 Abstract / Objectives: To identify common and serious drug-drug interactions involving novel anticoagulant drugs in cancer patients. Subjects: 60 patients who were treated at the Banner University of Arizona Cancer Center between November 1, 2013 and April 1, 2015 with rivaroxaban, dabigatran, or apixaban. Methods: A retrospective chart review was performed for patients who received a NOAC (novel oral anticoagulant) to determine whether a medication regimen contained a drug-drug interaction involving the NOAC. Results: When analyzing the DDIs involving rivaroxaban, dabigatran, and apixaban, Micromedex® detected a total of 123 interactions, compared to Lexicomp®, which detected 111 interactions. When using Lexicomp®, there were 59 (32%) instances of no detected interactions, 19 (10%) moderate interactions, 27 (15%) major interactions, and 65 (36%) contraindicated DDIs with rivaroxaban. When using Micromedex®, there were 47 (26%) instances where no interaction was detected, 4 (2%) moderate interactions, and 119 (65%) major interactions, and no interactions were classified as contraindicated with rivaroxaban. Lexicomp® detected 3 (50%) interactions as major and found no DDIs in 3 (50%) instances for dabigatran, and detected 1 (7%) moderate, 2 (14%) major and 6 (43%) contraindicated interactions for apixaban. Micromedex® detected 3 (50%) interactions as major and found no DDIs in 3 (50%) instances for dabigatran, and detected 12 (86%) interactions as major and found no DDIs in 2 (14%) instances for apixaban. Conclusions: There was significant variation in DDI detection between the current literature [4,5] and the drug information databases Lexicomp® and Micromedex®; however, most interactions detected were major or contraindicated.
