1

Representação de variabilidade estrutural de dados por meio de famílias de esquemas de banco de dados / Representing structural data variability using families of database schemas

Rodrigues, Larissa Cristina Moraes, 09 December 2016
Different organizations within the same application domain usually have very similar data requirements. Nevertheless, each organization also has specific needs that must be considered in the design and development of database systems for that domain. These specific needs result in structural variations in the data of organizations within the same domain. Traditional database conceptual modeling techniques (such as the Entity-Relationship Model - ERM - and the Unified Modeling Language - UML) do not allow this variability to be expressed in a single data schema. To address this problem, this master's thesis proposes a new conceptual modeling method based on Database Feature Diagrams (DBFDs). The method was designed to support the creation of families of conceptual database schemas. A family of conceptual database schemas comprises all possible variations of conceptual database schemas for a particular application domain. DBFDs extend the concept of Feature Diagrams used in Software Product Line Engineering. Through DBFDs, it is possible to generate customized conceptual database schemas that address the specific needs of users or organizations while ensuring a standardized treatment of the data requirements of an application domain. A Web tool called DBFD Creator was also developed to facilitate the use of the new modeling method and the creation of DBFDs. To evaluate the proposed method, a case study was conducted in the domain of neuroscience experimental data; it showed that the method is feasible for modeling the data variability of a real application domain. In addition, an exploratory study was conducted with a group of participants who received training, executed tasks, and filled out evaluation questionnaires about the modeling method and its supporting software tool. The results showed that the method is reproducible and that the tool has good usability, properly supporting the execution of the method's step-by-step procedure.
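The record does not reproduce the DBFD notation itself. As a rough illustration of the underlying idea only, the sketch below models a feature tree whose selections derive individual schema variants of a family; all names (Feature, derive_schema, the neuroscience fragment) are hypothetical and stand in for the thesis's actual notation, which is not shown here.

```python
# Illustrative sketch only: a simplified feature tree whose selections
# yield members of a schema family. Names and structure are hypothetical,
# not the thesis's actual DBFD method.

from dataclasses import dataclass, field

@dataclass
class Feature:
    """A node in a simplified database feature tree."""
    name: str                      # maps to an entity or attribute group
    mandatory: bool = True         # mandatory features appear in every schema
    children: list["Feature"] = field(default_factory=list)

def derive_schema(root: Feature, selected: set[str]) -> list[str]:
    """Walk the tree and collect the entities of one family member.

    A feature is included if it is mandatory or explicitly selected;
    otherwise its whole subtree is pruned.
    """
    if not (root.mandatory or root.name in selected):
        return []
    schema = [root.name]
    for child in root.children:
        schema += derive_schema(child, selected)
    return schema

# Hypothetical fragment of a neuroscience-experiment domain:
experiment = Feature("Experiment", children=[
    Feature("Subject"),                          # required in every variant
    Feature("EEGRecording", mandatory=False),    # variation point
    Feature("FMRIScan", mandatory=False),        # variation point
])

# Two organizations derive two different schemas from the same family:
print(derive_schema(experiment, {"EEGRecording"}))  # ['Experiment', 'Subject', 'EEGRecording']
print(derive_schema(experiment, {"FMRIScan"}))      # ['Experiment', 'Subject', 'FMRIScan']
```

The variation points (optional features) are what distinguish one organization's schema from another's within the same family, while the mandatory features guarantee the standardized core.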
2

Vysoce výkonné analýzy / High Performance Analytics

Kalický, Andrej, January 2013
This thesis explains the Big Data phenomenon, which is characterised by the rapid growth of the volume, variety and velocity of data as information assets, and which drives a paradigm shift in analytical data processing. The thesis aims to provide a summary and overview, with a complete and consistent picture, of the area of High Performance Analytics (HPA), including the problems and challenges at the state of the art of advanced analytics. The overview of HPA introduces the classification, characteristics and advantages of specific HPA methods utilising various combinations of system resources. In the practical part of the thesis, an experimental assignment focuses on the analytical processing of a large dataset using an analytical platform from SAS Institute. The experiment demonstrates the convenience and benefits of In-Memory Analytics (a specific HPA method) by evaluating the performance of different analytical scenarios and operations.
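The experiment itself runs on a SAS platform, which is not reproduced here. As a loose analogy only, the sketch below contrasts a fully in-memory aggregation with a chunked, out-of-core pass over the same hypothetical file; this is the trade-off that In-Memory Analytics exploits, with column names and the file path invented for illustration.

```python
# Rough analogy of the in-memory vs. out-of-core trade-off the experiment
# evaluates. The file path and the "category"/"value" columns are
# hypothetical; the thesis's actual SAS setup is not reproduced.

import time
import pandas as pd

CSV_PATH = "large_dataset.csv"   # hypothetical large input file

# In-Memory Analytics style: load the whole dataset into RAM once,
# then run the aggregation directly on the in-memory structure.
t0 = time.perf_counter()
df = pd.read_csv(CSV_PATH)
in_memory = df.groupby("category")["value"].mean()
t_mem = time.perf_counter() - t0

# Out-of-core style: stream the file in chunks, keeping only running
# totals in memory; bounded memory footprint, but more I/O overhead.
t0 = time.perf_counter()
sums, counts = {}, {}
for chunk in pd.read_csv(CSV_PATH, chunksize=100_000):
    grouped = chunk.groupby("category")["value"]
    for key, s in grouped.sum().items():
        sums[key] = sums.get(key, 0.0) + s
    for key, c in grouped.count().items():
        counts[key] = counts.get(key, 0) + c
out_of_core = {k: sums[k] / counts[k] for k in sums}
t_ooc = time.perf_counter() - t0

print(f"in-memory: {t_mem:.2f}s, out-of-core: {t_ooc:.2f}s")
```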
3

Assessment of structural damage using operational time responses

Ngwangwa, Harry Magadhlela, 31 January 2006
The problem of vibration-induced structural faults has been a real one in engineering over the years. Left unchecked, it has led to the unexpected failure of many structures, causing both economic losses and loss of human life. For over forty years, therefore, structural damage identification has been an important research area for engineers. There has been a thrust to develop global structural damage identification techniques to complement and/or supplement the long-practised local experimental techniques, and studies have shown that vibration-based techniques are particularly potent in this respect. Most existing vibration-based techniques monitor changes in modal properties such as natural frequencies, damping factors and mode shapes of the structural system to infer the presence of structural damage. The literature also reports techniques that monitor changes in other vibration quantities, such as frequency response functions, transmissibility functions and time-domain responses. However, none of these techniques provides a complete identification of structural damage. This study presents a damage detection technique based on operational response monitoring, which can identify all four levels of structural damage and be implemented as a continuous structural health monitoring technique. The technique is based on monitoring changes in internal data variability measured by a test statistic, the χ²O value. Structural normality is assumed when the χ²Om value calculated from a fresh set of measured data is within the limits prescribed by a threshold χ²OTH value; abnormality is assumed when this threshold is exceeded. The quantity of damage is determined by matching the χ²Om value with the χ²Op values predicted using a benchmark finite element model. The use of χ²O values is noted to provide better sensitivity to structural damage than the natural frequency shift technique. A numerical study showed that the sensitivity of the proposed technique ranged from three to a thousand times that of the natural frequencies. Results from a laboratory structure showed that accurate estimates of damage quantity and remaining service life could be achieved for crack lengths of less than 0.55 times the structural thickness, the range over which linear elastic fracture mechanics theory remains applicable. The study therefore achieved its main objective of identifying all four levels of structural damage using operational response changes. / Dissertation (MSc (Mechanics))--University of Pretoria, 2007. / Mechanical and Aeronautical Engineering
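The abstract names the χ² monitoring scheme without detail. A minimal sketch of that threshold idea follows, with simulated data, a hypothetical baseline variance and threshold, and no finite element model; the thesis derives its predicted χ²Op values from a benchmark FE model, which is not reproduced here.

```python
# Minimal sketch of threshold-based chi-squared monitoring as described in
# the abstract. The data, baseline variance, and threshold are all
# hypothetical stand-ins for quantities the thesis obtains from the healthy
# structure and a benchmark finite element model.

import numpy as np

def chi_squared_statistic(response: np.ndarray, baseline_var: float) -> float:
    """Variability of an operational response window relative to a
    healthy-state baseline variance: sum((x - mean)^2) / baseline_var."""
    return float(np.sum((response - response.mean()) ** 2) / baseline_var)

rng = np.random.default_rng(0)
baseline_var = 1.0          # variance estimated from the healthy structure
chi2_threshold = 130.0      # hypothetical chi2_OTH limit for n = 100 samples

# Fresh operational measurement windows (simulated; damage inflates variance).
healthy = rng.normal(0.0, 1.0, size=100)
damaged = rng.normal(0.0, 1.5, size=100)

for label, window in [("healthy", healthy), ("damaged", damaged)]:
    chi2_m = chi_squared_statistic(window, baseline_var)
    status = "normal" if chi2_m <= chi2_threshold else "abnormal"
    print(f"{label}: chi2_m = {chi2_m:.1f} -> {status}")
```

In the thesis's scheme, exceeding the threshold triggers the second step: matching the measured statistic against model-predicted values to quantify the damage rather than merely detect it.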
4

Uncertainty in life cycle costing for long-range infrastructure. Part I: leveling the playing field to address uncertainties

Scope, Christoph, Ilg, Patrick, Muench, Stefan, Guenther, Edeltraud, 25 August 2021
Purpose: Life cycle costing (LCC) is a state-of-the-art method to analyze investment decisions in infrastructure projects. However, uncertainties inherent in long-term planning call the credibility of LCC results into question. Previous research has not systematically linked sources of uncertainty with the methods that address them. Part I of this series develops a framework to collect and categorize different sources of uncertainty and addressing methods. This systematization is a prerequisite for further analyzing the suitability of methods and levels the playing field for Part II.

Methods: Past reviews have dealt with selected issues of uncertainty in LCC. However, none has systematically collected uncertainties and linked methods to address them, and no comprehensive categorization has been published to date. Part I addresses these two research gaps by conducting a systematic literature review. In a rigorous four-step approach, we first scrutinized major databases. Second, we performed a practical and methodological screening to identify 115 relevant publications in total, mostly case studies. Third, we applied content analysis using MAXQDA. Fourth, we illustrated the results and drew conclusions on the research gaps.

Results and discussion: We identified 33 sources of uncertainty and 24 addressing methods. Sources of uncertainty were categorized according to (i) their origin, i.e., parameter, model, and scenario uncertainty, and (ii) the nature of the uncertainty, i.e., aleatoric or epistemic. The methods to address uncertainties were classified as deterministic, probabilistic, possibilistic, or other. With regard to sources of uncertainty, lack of data and data quality was analyzed most often, and most of the uncertainties discussed were located in the use stage. With regard to methods, sensitivity analyses were applied most widely, while more complex methods such as Bayesian models were used less frequently. Data availability and the individual expertise of LCC practitioners foremost influence the selection of methods.

Conclusions: This article complements existing research by providing a thorough systematization of uncertainties in LCC. An unambiguous categorization of uncertainties is difficult and overlaps occur; such a systematizing approach is nevertheless necessary for further analyses and levels the playing field for readers not yet familiar with the topic. Part I concludes the following: first, an investigation of which methods are best suited to address a certain type of uncertainty is still outstanding; second, an analysis of the types of uncertainty that have been insufficiently addressed in previous LCC cases is still missing. Part II will focus on these research gaps.
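As a concrete illustration of the probabilistic methods the review classifies, the sketch below propagates parameter uncertainty through a simple discounted life cycle cost by Monte Carlo simulation. All cost figures, distributions, and the discount-rate range are hypothetical and are not drawn from the reviewed studies.

```python
# Minimal Monte Carlo sketch of probabilistic LCC: propagate parameter
# uncertainty (annual maintenance cost, energy price, discount rate) into
# the distribution of net present cost. All figures are hypothetical.

import numpy as np

rng = np.random.default_rng(42)
n_runs, horizon = 10_000, 40          # simulation runs, service life in years

capex = 1_000_000.0                                    # fixed initial cost
maint = rng.normal(20_000, 4_000, size=n_runs)         # uncertain annual cost
energy = rng.lognormal(np.log(10_000), 0.3, size=n_runs)
rate = rng.uniform(0.02, 0.05, size=n_runs)            # uncertain discount rate

years = np.arange(1, horizon + 1)
# Discount each year's recurring cost and sum over the horizon, vectorized
# over simulation runs: shape (n_runs, horizon) -> (n_runs,).
discount = (1.0 + rate[:, None]) ** -years[None, :]
lcc = capex + ((maint + energy)[:, None] * discount).sum(axis=1)

print(f"mean LCC: {lcc.mean():,.0f}")
print(f"5th-95th percentile: {np.percentile(lcc, 5):,.0f} - {np.percentile(lcc, 95):,.0f}")
```

A deterministic sensitivity analysis, the most widely applied method per the review, would instead vary one parameter at a time around its base value; the Monte Carlo approach shown here captures joint parameter variation at the cost of requiring distributional assumptions.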
