  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
621

Data analysis and creation of epigenetics database

Desai, Akshay A. 21 May 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This thesis is aimed at creating a pipeline for analyzing DNA methylation epigenetics data, together with a data model structured well enough to store the pipeline's analysis results. In addition to storing the results, the model is designed to hold information that helps researchers draw meaningful epigenetic conclusions from those results. Current major epigenetics resources such as PubMeth, MethyCancer, MethDB and NCBI's Epigenomics database fail to provide a holistic view of epigenetics. They provide datasets produced by different analysis techniques, which raises an important data-integration issue. These resources also omit numerous factors defining the epigenetic nature of a gene, and some struggle to keep their stored data up-to-date, which has diminished their validity and coverage of epigenetics data. In this thesis we tackle a major branch of epigenetics: DNA methylation. As a case study to prove the effectiveness of our pipeline, we used stage-wise DNA methylation and expression raw data for lung adenocarcinoma (LUAD) from the TCGA data repository. The pipeline helped us identify progressive methylation patterns across different stages of LUAD, as well as some key targets with potential as drug targets. Along with the results from the methylation data analysis pipeline, we combined data from various online data reserves such as the KEGG, GO, UCSC and BioGRID databases, which helped us overcome the shortcomings of existing data collections and present a complete resource for studying DNA methylation epigenetics data.
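As a rough illustration of the "progressive methylation pattern" idea in the abstract, the sketch below flags genes whose average methylation rises or falls monotonically across tumor stages. The gene names, beta values, and threshold are hypothetical, not data or logic from the thesis.

```python
# Hypothetical mean beta values (methylation fraction, 0..1) per gene,
# one value for each LUAD stage I..IV.
stage_betas = {
    "GENE_A": [0.10, 0.25, 0.40, 0.60],  # steadily gains methylation
    "GENE_B": [0.50, 0.48, 0.52, 0.49],  # essentially flat
    "GENE_C": [0.70, 0.55, 0.35, 0.20],  # steadily loses methylation
}

def progressive(betas, min_step=0.05):
    """Classify a gene as progressively hyper- or hypomethylated
    when every stage-to-stage change exceeds min_step in the same
    direction; otherwise return None."""
    diffs = [b - a for a, b in zip(betas, betas[1:])]
    if all(d >= min_step for d in diffs):
        return "hyper"
    if all(d <= -min_step for d in diffs):
        return "hypo"
    return None

patterns = {gene: progressive(b) for gene, b in stage_betas.items()}
print(patterns)  # {'GENE_A': 'hyper', 'GENE_B': None, 'GENE_C': 'hypo'}
```

A real pipeline would apply such a criterion per CpG site with statistical testing across samples, but the monotonic-trend check captures the stage-wise comparison the abstract describes.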
622

Traditional Chinese medical clinic system

Liu, Chaomei 01 January 2004 (has links)
The Chinese Medical Clinic System is designed to help acupuncturists and their assistants record and store information. The system maintains and schedules appointments and lets users view patient diagnoses effectively. It will be implemented on a desktop PC connected to the internet to facilitate the acupuncturists' record-keeping.
623

Databáze s obchodně-marketingovými informacemi a jejich využití z pohledu marketingového pracovníka / Business and marketing information databases and their usage from marketing manager's point of view

Pospíšilová, Petra January 2011 (has links)
The thesis examines business and marketing information databases, available on the Czech market, that cover economic subjects registered in the Czech Republic. It traces the rise and development of databases in general, deals with databases of businesses, and provides an overview of the most important providers of company information and their information products. It then analyzes two selected company databases and describes practical examples of their use from a marketing manager's point of view.
624

Supporting Advanced Queries on Scientific Array Data

Ebenstein, Roee A. 18 December 2018 (has links)
No description available.
625

Using a Data Warehouse as Part of a General Business Process Data Analysis System

Maor, Amit 01 January 2016 (has links)
Data analytics queries often involve aggregating over massive amounts of data in order to detect trends, make predictions about future data, and support business decisions. As such, it is important that a database management system (DBMS) handling data analytics queries perform well when those queries involve massive amounts of data. A data warehouse is a DBMS designed specifically to handle data analytics queries. This thesis describes the data warehouse Amazon Redshift and how it was used to design a data analysis system for Laserfiche, a software company that provides each of its clients a system to store and process business process data. Through the 2015-16 Harvey Mudd College Clinic project, the Clinic team built a data analysis system that provides Laserfiche clients with near real-time reports containing analyses of their business process data. This thesis discusses the advantages of Redshift's data model and physical storage layout, as well as the Redshift features that directly benefit the data analysis system.
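The aggregation workload the abstract describes can be pictured with a toy group-and-summarize pass over business-process events. The department names and timings are invented; in a warehouse such as Redshift this would be a single SQL `GROUP BY` executed server-side over a columnar store rather than client-side Python.

```python
from collections import defaultdict

# Hypothetical business-process events: (department, elapsed_seconds).
events = [
    ("invoicing", 120), ("invoicing", 180), ("hr", 300),
    ("invoicing", 90), ("hr", 240),
]

# The kind of aggregate query a data warehouse accelerates:
# scan one column, group by a key, and summarize each group.
totals = defaultdict(lambda: [0, 0])  # dept -> [count, total_seconds]
for dept, secs in events:
    totals[dept][0] += 1
    totals[dept][1] += secs

# Average processing time per department.
report = {dept: total / count for dept, (count, total) in totals.items()}
print(report)  # {'invoicing': 130.0, 'hr': 270.0}
```

Columnar layouts pay off here because only the grouping key and the measured column need to be read from disk, which is why such systems handle trend reports over massive tables well.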
626

Databasskydd / Protection of databases

Axhamn, Johan January 2016 (has links)
The capacity to assemble, store, and make available information in databases is ever growing. This development has accelerated in recent decades, driven by the advent and increased use of digital networks. Already at an early stage, it led to demands for legal protection of databases. In most countries databases have been protected in national legislation based on copyright principles. However, this kind of protection has been regarded as insufficient. The reason for this is that copyright protection only covers the selection or arrangement of the contents of the database. By rearranging the contents, it is possible to avoid liability for copyright infringement. To address the specific needs of producers of databases, the then European Community adopted a directive in 1996 on the legal protection of databases. The Directive aims to harmonise copyright protection for databases and to introduce a new, sui generis, right for the legal protection of databases. The sui generis right protects the investments in obtaining, verifying, and presenting the contents of a database. It has been described in the literature as one of the most complex intellectual property rights ever established; its complexity resides in the unclear relationship between the requirements for protection and the content and scope of protection. This dissertation describes, analyses, compares and systematises the legal protection for databases as provided for in the EU Database Directive, both in relation to copyright and sui generis protection and in relation to the intellectual property system in general and principles and rules on unfair competition. The study also describes and analyses the Directive as implemented into Swedish law.
To do this, it makes use of relevant legal sources, with particular account taken of relevant sources of EU law such as the Directive itself, adjacent directives in the field of copyright and related rights, as well as unfair competition law and the case law and legal method developed by the Court of Justice of the European Union. The study also draws on underlying theories of intellectual property protection and unfair competition law, as well as arguments based on unjust enrichment and pure economic loss. The study establishes how the sui generis right serves as a legal hybrid between traditional intellectual property rights and protection against unfair competition. The structure of the right resembles traditional intellectual property rights, with requirements for protection, provisions on exclusive rights, exceptions and limitations and a term of protection. At the same time, the content and scope of protection provide measures similar to those countering unfair competition with aspects of protection against pure economic loss. The right protects against certain activities carried out in the market rather than providing protection for a traditional object of intellectual property law. When implementing the Directive, the Swedish legislator overlooked these aspects of the sui generis right, creating legal uncertainties when interpreting and applying the national legislation. The study concludes with a look forward and suggestions for future research.
627

Modeling and Querying Evidential Databases / Modélisation et exploitation des bases de données évidentielles

Bousnina, Fatma Ezzahra 11 June 2019 (has links)
The theory of belief functions (a.k.a. the Evidence Theory) offers powerful tools to model and handle imperfect pieces of information. Thus, it provides an adequate framework able to represent conjointly uncertainty, imprecision and ignorance. In this context, data are stored in a specific database model called evidential databases. An evidential database includes two levels of uncertainty: (i) attribute-level uncertainty, expressed via degrees of truthfulness about the hypotheses in attributes; (ii) tuple-level uncertainty, expressed through an interval of confidence about the existence of the tuple in the table. An evidential database itself can be modeled in two forms: (i) the compact form, represented as a set of attributes and a set of tuples; (ii) the possible worlds' form, represented as a set of candidate databases where each candidate is a possible representation of the imperfect compact database. Querying the possible worlds' form is a fundamental step in validating the querying methods over the compact one. In fact, a model is said to be a strong representation system when results of querying its compact form are equivalent to results of querying its possible worlds' form. This thesis focuses on the foundations of evidential databases in both modeling and querying. The main contributions are summarized as follows: (i) Modeling and querying the compact evidential database (EDB): we implement the compact evidential database (EDB) using an object-relational design, which allows querying the database model with relational operators. We also propose the formalism, the algorithms and the experiments for other types of queries, the evidential top-k and the evidential skyline, which we apply over a real dataset extracted from TripAdvisor. (ii) Modeling the possible worlds' form of (EDB): we model the possible worlds' form of the evidential database (EDB) by treating both levels of uncertainty (the tuple level and the attribute level). (iii) Modeling and querying the evidential conditional database (ECD): after proving that the evidential database (EDB) is not a strong representation system, we develop a new evidential conditional database model named (ECD), and present the formalism for querying both the compact and the possible worlds' forms of the (ECD) in order to evaluate the querying methods under relational operators. Finally, we discuss the results of these querying methods and the specificities of the (ECD) model.
628

Compartilhamento de objetos compostos entre bases de dados orientadas a objetos / Sharing composite objects in object-oriented databases

Ferreira, João Eduardo 05 July 1996 (has links)
This work presents a technique for sharing data between object-oriented databases in design environments. Sharing proceeds in three phases: separation, evolution and data integration. Whenever a block of data needs to be shared between the original and the product database, it is spread among both, resulting in two blocks: one in the original database and another in the receiving one, identified as the product of the sharing process. During the evolution phase these blocks are not required to be kept identical. Six types of links, established during the separation process, drive the updates: read-only, isolated, snapshot, mutually exclusive, independent and on-line. The original and product databases, both restricted by the rules imposed by the type of link between them, can evolve separately. After a while they may enter a reintegration process, which uses composite objects as the control units. The presented concepts can be applied to any data model that manages the composition of its objects; the SIRIUS data model is used to exemplify them.
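The six link types could be modeled as an enumeration paired with a rule table consulted before any update. The permission table below is a simplified, hypothetical reading of the link names, not the SIRIUS model's actual semantics.

```python
from enum import Enum

# The six link types named in the abstract.
class Link(Enum):
    READ_ONLY = "read-only"
    ISOLATED = "isolated"
    SNAPSHOT = "snapshot"
    MUTUALLY_EXCLUSIVE = "mutually exclusive"
    INDEPENDENT = "independent"
    ON_LINE = "on-line"

# May the (original, product) copies of a shared composite object evolve?
# Illustrative interpretation only.
MAY_UPDATE = {
    Link.READ_ONLY: (True, False),           # product copy is frozen
    Link.ISOLATED: (True, True),             # both evolve independently
    Link.SNAPSHOT: (True, True),             # product starts as a snapshot
    Link.MUTUALLY_EXCLUSIVE: (False, True),  # only one side at a time
    Link.INDEPENDENT: (True, True),
    Link.ON_LINE: (True, True),              # changes visible immediately
}

def can_update(link, side):
    """Check whether the given side ('original' or 'product') may
    modify its copy under this link type."""
    original_ok, product_ok = MAY_UPDATE[link]
    return original_ok if side == "original" else product_ok

print(can_update(Link.READ_ONLY, "product"))  # False
```

A reintegration step would then reconcile the two copies according to the same link type, using the composite object as the unit of comparison.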
629

Predicting inter-organizational knowledge satisfaction through knowledge conversion and task characteristics in a minority-owned business

Ward, Terrence L. 01 January 2009 (has links)
Knowledge management has been studied extensively from the single-organization (intra-organizational) perspective for many years. Although the literature on intra-organizational knowledge is extensive, gaps remain with regard to knowledge shared by multiple organizations (inter-organizational knowledge). Inter-organizational knowledge satisfaction is gained when the organizations successfully embody the knowledge gained through cooperation and crystallize that knowledge within the organization. The problem addressed in this study is the lack of a model for predicting inter-organizational knowledge satisfaction from task characteristics and the knowledge conversion process. The purpose of the study was to predict inter-organizational knowledge satisfaction for a contract company, and the research question addressed how task characteristics and knowledge conversion can predict it. The theoretical frameworks include Nonaka's theory of organizational knowledge creation and Becerra-Fernandez and Sabherwal's theory of task characteristics. The study uses a correlational research design with multiple linear regression as the data analysis method. An online questionnaire was administered to all executives, first- and mid-level managers, and professionals. The predictor variables, task characteristics and knowledge conversion, were used to predict inter-organizational knowledge satisfaction (IOKS) and accounted for 35.3% of the variance in the IOKS score. This study contributes to social change by helping organizations gain a competitive advantage through developing and implementing creative and timely knowledge management initiatives to achieve inter-organizational knowledge satisfaction.
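The "35.3% of the variance" figure is the regression model's R-squared: the share of the outcome's variance explained by the predictors. The sketch below computes R-squared for a two-predictor model; the survey scores and coefficients are invented for illustration and are not the study's data or fitted model.

```python
from statistics import mean

# Hypothetical survey rows: (task characteristics TC,
# knowledge conversion KC, outcome IOKS), all on a 1-5 scale.
rows = [
    (3.0, 4.0, 3.6), (2.0, 2.5, 2.4), (4.5, 4.0, 4.2),
    (3.5, 3.0, 3.1), (1.5, 2.0, 2.1),
]

def predict(tc, kc, b0=0.5, b1=0.4, b2=0.45):
    """Linear model IOKS ~ b0 + b1*TC + b2*KC (illustrative coefficients)."""
    return b0 + b1 * tc + b2 * kc

y = [r[2] for r in rows]
y_hat = [predict(tc, kc) for tc, kc, _ in rows]
y_bar = mean(y)

# R^2 = 1 - SS_res / SS_tot: the fraction of outcome variance the
# predictors explain (the study reports 35.3% for its fitted model).
ss_res = sum((a - p) ** 2 for a, p in zip(y, y_hat))
ss_tot = sum((a - y_bar) ** 2 for a in y)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))
```

In the study itself the coefficients would come from fitting the regression to the questionnaire responses rather than being chosen by hand.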
630

Asset Reuse of Images From a Repository

Herman, Deirdre 01 January 2011 (has links)
According to Markus's theory of reuse, when digital repositories are deployed to collect and distribute organizational assets, they supposedly help ensure accountability, extend information exchange, and improve productivity. Such repositories require a large investment due to the continuing costs of hardware, software, user licenses, training, and technical support. The problem addressed in this study was the lack of evidence in the literature on whether users in fact reuse enough digital assets in repositories to justify the investment. The objective of the study was to investigate the organizational value of repositories, to better inform the architectural, construction, software, and other industries whether repositories are worth the investment. This study examined asset reuse of medical images at a health information publisher. The research question focused on the amount of asset reuse over time, determined from existing repository transaction logs generated over an 8-year period by all users. A longitudinal census data analysis was performed on the entire dataset of 85,250 transaction logs. The results showed that 42 users downloaded assets, including 11,059 images, indicating that the repository saw sufficient use at this publisher of about 80 employees. Of those images, 1,443 medical images were reused for new product development, a minimal asset reuse rate of 13%. Assistants (42%), writers (20%), and librarians (16%) were the primary users of the repository. Collectively, these results demonstrated the value of repositories in improving organizational productivity, through reuse of existing digital assets such as medical images to avoid unnecessary duplication costs, in support of social change and economic transformation.
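The headline reuse rate follows directly from the two counts the abstract reports, as a quick check shows:

```python
# Figures reported in the study: images downloaded from the repository
# over 8 years, and those later reused in new product development.
downloaded = 11_059
reused = 1_443

reuse_rate = reused / downloaded
print(f"{reuse_rate:.0%}")  # 13%
```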
