691

Database design and implementation of clinical and molecular data of cancer patients and its application for biomarker discovery in pancreatic cancer

Ester Risério Matos Bertoldi 20 October 2017 (has links)
Pancreatic ductal adenocarcinoma (PDAC) is a cancer that is difficult to diagnose early, and its treatment has not improved substantially over the last decade. Next-generation sequencing (NGS) technologies may contribute to the discovery of new diagnostic biomarkers for PDAC and to the development of personalised therapies. Databases are powerful tools for the integration, standardization and storage of large volumes of information. The objective of the present study was to design and implement a relational database, CaRDIGAn (Cancer Relational Database for Integration and Genomic Analysis), that integrates publicly available data from NGS experiments on samples of different histopathological subtypes of PDAC with data generated by our group at IQ-USP, facilitating comparison between the two sources. CaRDIGAn's functionality was demonstrated by retrieving clinical and gene expression data for patients from lists of candidate genes, either associated with mutations in the KRAS oncogene or differentially expressed in tumours identified in RNA-seq data generated by our group. The retrieved data were used for survival curve analysis, which identified 11 genes with prognostic potential in pancreatic cancer, illustrating the tool's capacity to support the analysis, organization and prioritization of new biomarker candidates for the molecular diagnosis of PDAC.
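A minimal sketch of the kind of query-and-analyse workflow described above, assuming a hypothetical schema (patient, gene and expression tables); the table and column names are illustrative, not the actual CaRDIGAn design:

import sqlite3
from statistics import median

def expression_groups(db_path: str, gene_symbol: str):
    # Join clinical follow-up data with expression values for one candidate gene.
    con = sqlite3.connect(db_path)
    rows = con.execute(
        """
        SELECT p.patient_id, p.survival_days, p.vital_status, e.value
        FROM patient AS p
        JOIN expression AS e ON e.patient_id = p.patient_id
        JOIN gene AS g ON g.gene_id = e.gene_id
        WHERE g.symbol = ?
        """,
        (gene_symbol,),
    ).fetchall()
    con.close()
    # Split patients by median expression; the two groups can then be compared
    # with a Kaplan-Meier / log-rank analysis to flag prognostic genes.
    cutoff = median(r[3] for r in rows)
    high = [(r[1], r[2]) for r in rows if r[3] >= cutoff]
    low = [(r[1], r[2]) for r in rows if r[3] < cutoff]
    return high, low  # (survival time, event flag) pairs per group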
692

Database-Assisted Analysis and Design of Wind Loads on Rigid Buildings

Habte, Filmon Fesehaye 06 July 2016 (has links)
The turbulent nature of the wind flow, coupled with additional turbulence created by the wind-building interaction, results in highly non-uniform, fluctuating wind loading on building envelopes. This is true even for simple rectangular symmetric buildings. Building codes and standards should reflect the information on which they are based as closely as possible, and this should be achieved without making the building codes too complicated and/or bulky. However, given the complexity of wind loading on low-rise buildings, its codification can be difficult, and it often entails significant inconsistencies. This has motivated the development of alternative design methods, such as the Database-Assisted Design (DAD) methodology, that can produce more accurate and risk-consistent estimates of wind loads or their effects. In this dissertation, the DAD methodology for rigid structures has been further developed into a design tool capable of automatically helping to size member cross sections that closely meet codified strength and serviceability requirements. This was achieved by integrating the wind engineering and structural engineering phases of designing for wind and gravity loads. Results obtained using this method showed DAD's potential for practical use in structural design. Different methods of synthesizing aerodynamic and climatological data were investigated, and the effects of internal pressure in structural design were also studied in the context of DAD. This dissertation also addressed the issues of (i) insufficiently comprehensive aerodynamic databases for various types of building shapes, and (ii) the large size of existing aerodynamic databases, both of which can significantly limit the extent to which the DAD methodology is used in engineering practice. This research is part of an initiative to renew the way we evaluate wind loads and perform designs. It is transformative insofar as it enables designs that are safe and economical owing to the risk-consistency inherent in DAD, meaning that enough structural muscle is provided to assure safe behavior, while fat is automatically eliminated in the interest of economy and CO2 footprint reduction.
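As a rough illustration of the member-sizing step (under assumptions of our own, not the dissertation's algorithm), automated sizing can be reduced to picking, for each member, the lightest catalogued cross section whose capacity covers the peak demand extracted from the aerodynamic/climatological database:

from dataclasses import dataclass

@dataclass
class Section:
    name: str
    weight: float      # self-weight per unit length (proxy for cost)
    capacity: float    # factored resistance for the governing load effect

def size_member(peak_demand: float, catalog: list[Section]) -> Section:
    # Keep only sections whose demand-to-capacity ratio does not exceed 1.0,
    # then return the lightest adequate one.
    feasible = [s for s in catalog if peak_demand / s.capacity <= 1.0]
    if not feasible:
        raise ValueError("no catalog section satisfies the demand")
    return min(feasible, key=lambda s: s.weight)

catalog = [Section("W14x22", 22, 150.0), Section("W14x30", 30, 210.0)]
print(size_member(180.0, catalog).name)   # -> W14x30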
693

Sample synopses for approximate answering of group-by queries

Lehner, Wolfgang, Rösch, Philipp 22 April 2022 (has links)
With the amount of data in current data warehouse databases growing steadily, random sampling is continuously gaining in importance. In particular, interactive analyses of large datasets can greatly benefit from the significantly shorter response times of approximate query processing. Typically, these analytical queries partition the data into groups and aggregate the values within the groups. Further, with the commonly used roll-up and drill-down operations, a broad range of group-by queries is posed to the system, which makes the construction of highly specialized synopses difficult. In this paper, we propose a general-purpose sampling scheme that is biased in order to answer group-by queries with high accuracy. While existing techniques focus on the size of a group when computing its sample size, our technique is based on its standard deviation. The basic idea is that the more homogeneous a group is, the fewer representatives are required in order to give a good estimate. With an extensive set of experiments, we show that our approach reduces both the estimation error and the construction cost compared to existing techniques.
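A minimal sketch of the allocation principle, assuming a fixed sample budget; the formula below is an illustration of standard-deviation-proportional allocation, not the authors' exact scheme:

import random
from statistics import pstdev

def stddev_biased_sample(groups: dict[str, list[float]], budget: int):
    # Allocate the budget in proportion to each group's standard deviation,
    # so homogeneous groups contribute few representatives.
    weights = {g: pstdev(vals) or 1e-9 for g, vals in groups.items()}
    total = sum(weights.values())
    sample = {}
    for g, vals in groups.items():
        k = max(1, round(budget * weights[g] / total))   # at least one row per group
        sample[g] = random.sample(vals, min(k, len(vals)))
    return sample

# A constant group needs almost no rows; a noisy group absorbs most of the budget.
groups = {"A": [10.0] * 500, "B": [random.gauss(50, 20) for _ in range(500)]}
print({g: len(s) for g, s in stddev_biased_sample(groups, 100).items()})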
694

Multi-Model Snowflake Schema Creation

Gruenberg, Rebecca 25 April 2022 (has links)
No description available.
695

Database Selection Process in Very Small Enterprises in Software Development : A Case Study examining Factors, Methods, and Properties

Adolfsson, Teodor, Sundin, Axel January 2023 (has links)
This thesis investigates the database model selection process in VSEs, looking into how priorities and needs differ from what is proposed by existing theory in the area. The study was conducted as a case study of a two-person company engaged in developing various applications and performing consulting tasks. Data was collected through two semi-structured interviews. The first interview aimed to understand the company's process for selecting a database model, while the second focused on obtaining their perspective on any differences between their selection process and the theoretical recommendations and suggested methodology. The purpose was to investigate the important factors involved in the process and explore why and how they deviated from what the theory proposes. The study concludes that VSEs have different priorities compared to larger enterprises. Factors such as transaction volume do not need to be weighed heavily at the scale of a VSE. It is more important to look at the total cost of the database solution, including making sure that the selected technology is sufficiently efficient to use in development and relatively easy to maintain. Regarding selection methodology, it was concluded that the time investment required to determine the best available database solution can be better spent elsewhere in the enterprise, and that finding a good-enough solution to get the wheels off the ground is likely a more profitable aim.
696

The use of frames in database modeling

Sweet, Barbara Moore. January 1984 (has links)
Call number: LD2668 .T4 1984 S93 / Master of Science
697

The use of null values in a relational database to represent incomplete and inapplicable information

Wilson, Maria Marshall. January 1985 (has links)
Call number: LD2668 .T4 1985 W547 / Master of Science
698

MANAGING MULTI-VENDOR INSTRUMENTATION SYSTEMS WITH ABSTRACTION MODELS

Lockard, Michael T., Garling, James A. Jr 10 1900 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / The quantity and types of measurements and measurement instrumentation required for a test are growing. This paper describes a methodology to define and program multi-vendor instrumentation using abstraction models in a database that allows new instrumentation to be defined rapidly. This allows users to support multiple vendors’ systems while using a common user interface to define instrumentation networks, bus catalogs, measurements, pulse code modulated (PCM) formats, and data processing requirements.
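A hedged sketch of the abstraction-model idea with hypothetical class and field names: vendor-neutral measurement definitions are stored once, and per-vendor adapters translate them into device programming commands through a common interface.

from abc import ABC, abstractmethod

class VendorAdapter(ABC):
    @abstractmethod
    def program(self, measurement: dict) -> str:
        """Translate an abstract measurement definition into vendor commands."""

class VendorA(VendorAdapter):
    def program(self, measurement: dict) -> str:
        # Hypothetical command syntax for one vendor's data acquisition unit.
        return f"SET {measurement['name']} RATE={measurement['sample_rate']}"

def program_network(measurements: list[dict], adapters: dict[str, VendorAdapter]):
    # One abstract definition drives every vendor's configuration.
    return [adapters[m["vendor"]].program(m) for m in measurements]

cmds = program_network(
    [{"name": "ENGINE_N1", "sample_rate": 128, "vendor": "A"}],
    {"A": VendorA()},
)
print(cmds)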
699

A RELATIONAL APPROACH FOR MANAGING LARGE FLIGHT TEST PARAMETER LISTS

Penna, Sérgio D., Espeschit, Antônio Magno L. 10 1900 (has links)
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada / The number of aircraft parameters used in flight-testing has constantly increased over the years, and there is no sign that this situation will change in the near future. On the contrary, in modern, software-driven, digital avionic systems, all sorts of parameters circulate through digital buses and can be transferred to on-board data acquisition systems more easily than those converted from traditional analog transducers, facilitating the request for more and more parameters to be acquired, processed, visualized, stored and retrieved at any given time. The constant imbalance between the number of parameters engineers believe to be “sufficient” for developing and troubleshooting systems in a new aircraft, which tends to grow with aircraft complexity, and the associated cost of instrumenting a test prototype accordingly, which tends to grow beyond budget limits, pushes for new creative ways of handling both tendencies without compromising the ease of performing an engineering analysis directly from flight test data. This paper presents an alternative for handling large collections of flight test parameters through a relational approach, particularly in two important scenarios: the very basic creation and administration of the traditional “Flight Test Parameter List” and the transmission of selected data over a telemetry link for visualization in a Ground Station.
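A small illustrative sketch (hypothetical tables, not the authors' schema) of the relational idea: parameters live in a single master table, and a "Flight Test Parameter List" or a telemetry subset is simply a named selection over it.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE parameter (param_id INTEGER PRIMARY KEY, name TEXT, units TEXT, source_bus TEXT);
CREATE TABLE param_list (list_id INTEGER, param_id INTEGER REFERENCES parameter);
""")
con.executemany("INSERT INTO parameter VALUES (?, ?, ?, ?)",
                [(1, "ALT_BARO", "ft", "ARINC429"), (2, "EGT_1", "degC", "analog")])
con.executemany("INSERT INTO param_list VALUES (?, ?)", [(10, 1)])  # list 10 = telemetry subset

# The telemetry subset is a query, not a hand-maintained document.
rows = con.execute("""
    SELECT p.name, p.units FROM parameter p
    JOIN param_list l ON l.param_id = p.param_id WHERE l.list_id = 10
""").fetchall()
print(rows)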
700

Ontological approach for database integration

Alalwan, Nasser Alwan January 2011 (has links)
Database integration is one of the research areas that has gained a great deal of attention from researchers. Its goal is to represent the data from different database sources in one unified form. To achieve database integration, two obstacles must be faced: the first is the distribution of the data, and the second is its heterogeneity. The Web addresses the distribution problem, and for heterogeneity there are several approaches that can be used, such as data warehouses and federated databases. The problem with these two approaches is the lack of semantics; our approach therefore exploits the Semantic Web methodology. The hybrid ontology method can be used to solve the database integration problem. In this method, two elements are available, the source (database) and the domain ontology, but the local ontology is missing; to ensure the success of this method, the local ontologies must be produced. Our approach obtains the semantics from the logical model of the database to generate the local ontology; validation and enhancement are then acquired from the semantics obtained from the conceptual model of the database. The approach is thus applied in two phases: a generation phase and a validation-enrichment phase. In the generation phase, we use reverse engineering techniques to capture the semantics hidden in the SQL definitions, reproduce the logical model of the database, and apply our transformation system to generate an ontology in which classes, relationships and axioms are produced. The process of class creation comprises many rules working together to produce classes. Our rules address problems such as fragmentation and hierarchy, eliminate the superfluous classes arising from multi-valued attribute relations, and handle neglected cases such as relationships with additional attributes; a final class-creation rule covers generic relation cases. The rules for relationships between concepts are generated while eliminating the relationships between integrated concepts. Finally, rules covering relationship and attribute constraints transform those constraints into axioms in the ontological model. The formal rules of our approach are domain independent, and the approach produces a generic ontology that is not restricted to a specific ontology language. The rules take into account the gap between the database model and the ontological model; therefore, some database constructs have no equivalent in the ontological model. The second phase consists of validation and enrichment. The best way to validate the transformation result is to use the semantics obtained from the conceptual model of the database: in the validation phase, the domain expert captures missing or superfluous concepts (classes or relationships), while in the enrichment phase the generalisation method can be applied to classes that share common attributes, and complex or composite attributes can be represented as classes. We implemented the transformation system in a tool called SQL2OWL in order to show the correctness and functionality of our approach. The evaluation of the system showed the success of the proposed approach and proceeded through several techniques. First, a comparative study was carried out between the results produced by our approach and those of similar approaches. The second evaluation technique was a weighted scoring system that specifies the criteria affecting the transformation system. The final evaluation technique was a score scheme: we assessed the quality of the transformation system by applying a compliance measure in order to show the strength of our approach compared to existing approaches. Finally, the measures of success considered were system scalability and completeness.
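A minimal sketch of one family of transformation rules of the kind described (table to class, column to datatype property, foreign key to object property), emitting Turtle-like triples; this illustrates the idea only and is not the SQL2OWL implementation:

def table_to_owl(table: str, columns: list[str], foreign_keys: dict[str, str]) -> list[str]:
    # Each relational table becomes an OWL class.
    triples = [f":{table} rdf:type owl:Class ."]
    for col in columns:
        if col in foreign_keys:
            # A foreign key becomes an object property pointing at the referenced class.
            triples.append(f":{table}_{col} rdf:type owl:ObjectProperty ; "
                           f"rdfs:domain :{table} ; rdfs:range :{foreign_keys[col]} .")
        else:
            # A plain column becomes a datatype property on the class.
            triples.append(f":{table}_{col} rdf:type owl:DatatypeProperty ; rdfs:domain :{table} .")
    return triples

print("\n".join(table_to_owl("Patient", ["name", "hospital_id"], {"hospital_id": "Hospital"})))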
