  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

An approach to open virtual commissioning for component-based automation

Kong, Xiangjun January 2013 (has links)
Increasing market demands for highly customised products, with shorter time-to-market and at lower prices, are forcing manufacturing systems to be built and operated in more efficient ways. In order to overcome some of the limitations of traditional methods of automation system engineering, this thesis focuses on the creation of a new approach to Virtual Commissioning (VC). In current VC approaches, virtual models are driven by pre-programmed PLC control software. These approaches are still time-consuming and heavily reliant on control expertise, as the required programming and debugging activities are mainly performed by control engineers. Another current limitation is that virtual models validated during VC are difficult to reuse due to a lack of tool-independent data models. Therefore, in order to maximise the potential of VC, new VC approaches and tools are needed to address these limitations. The main contributions of this research are: (1) a new approach, and the related engineering tool functionality, for directly deploying PLC control software based on component-based VC models and reusable components; and (2) tool-independent common data models for describing component-based virtual automation systems, enabling data reuse.
32

Máquina e modelo de dados dedicados para aplicações de engenharia / A Data model and a database machine for engineering applications

Traina Junior, Caetano 03 December 1986 (has links)
This work deals with two research areas: data modeling for Database Management Systems, and the development of Database Machines. Accordingly, it is divided into two parts. The first analyzes existing data models and, based on the characteristics required by engineering database applications, defines the Object Representation Model. The second part analyzes existing database machine architectures and proposes a new dedicated architecture intended to support the intrinsic parallelism of the algorithms developed to implement the presented data model. A survey of relevant results in both areas is included, and a thorough discussion concludes the work.
33

Modeling and Querying Graph Data

Yang, Hong 12 March 2009 (has links)
Databases are used in many applications, spanning virtually the entire data processing services industry. The data in many database applications can be most naturally represented as a graph structure consisting of various types of nodes and edges with several properties. These graph data can be classified into four categories: social networks describing the relationships between individuals and/or groups of people (e.g. genealogy, networks of co-authorship among academics); information networks in which the structure of the network reflects the structure of the information stored in the nodes (e.g. citation networks among academic papers); geographic networks, providing geographic information about public transport systems, airline routes, etc.; and biological networks (e.g. biochemical networks, neuronal networks). Analyzing such networks to obtain the information users are interested in requires a number of typical queries. Many of these query patterns cut across the categories above: finding nodes with certain properties on a path or in a graph, finding the distance between nodes, finding sub-graphs, enumerating paths, etc. However, classical query languages such as SQL and OQL are ill-suited to these types of queries. Therefore, a data model that can effectively represent graph objects and their properties, and a query language that empowers users to answer queries across multiple categories, are needed. In this research, a graph data model and a query language are proposed to resolve these issues. The proposed graph data model is an object-oriented graph data model which aims to represent graph objects and their properties for various applications.
The graph query language empowers users to query graph objects and their properties in a graph with specified conditions. The capability to specify the relationships among the entities composing the queried sub-graph makes the language more flexible than others.
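As an illustration of the kind of query such a language targets, the following is a hypothetical Python sketch (not the model or query language proposed in the thesis) of a property graph with typed edges and a shortest-distance query:

```python
from collections import deque

class Graph:
    """A minimal property graph: nodes carry attribute dicts, edges carry a type."""
    def __init__(self):
        self.nodes = {}   # node id -> properties
        self.adj = {}     # node id -> list of (neighbor id, edge type)

    def add_node(self, nid, **props):
        self.nodes[nid] = props
        self.adj.setdefault(nid, [])

    def add_edge(self, a, b, etype):
        self.adj[a].append((b, etype))
        self.adj[b].append((a, etype))   # undirected for this sketch

    def distance(self, src, dst):
        """Length of the shortest path between two nodes (BFS), or None."""
        seen, frontier = {src}, deque([(src, 0)])
        while frontier:
            node, d = frontier.popleft()
            if node == dst:
                return d
            for nbr, _ in self.adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append((nbr, d + 1))
        return None

# A tiny co-authorship network: two people linked through one paper.
g = Graph()
g.add_node("alice", kind="person")
g.add_node("p1", kind="paper")
g.add_node("bob", kind="person")
g.add_edge("alice", "p1", "authored")
g.add_edge("bob", "p1", "authored")
print(g.distance("alice", "bob"))  # co-authors are two hops apart: 2
```

A real graph query language would express such distance and sub-graph queries declaratively rather than through hand-written traversals.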
34

Correcting for CBC model bias. A hybrid scanner data - conjoint model.

Natter, Martin, Feurstein, Markus January 2001 (has links) (PDF)
Choice-Based Conjoint (CBC) models are often used for pricing decisions, especially when scanner data models cannot be applied. To date, it has been unclear how CBC models perform in terms of forecasting real-world shop data. In this contribution, we measure the performance of a Latent Class CBC model not by means of an experimental hold-out sample but via aggregate scanner data. We find that the CBC model does not accurately predict real-world market shares, thus leading to wrong pricing decisions. In order to improve its forecasting performance, we propose a correction scheme based on scanner data. Our empirical analysis shows that the hybrid method improves the performance measures considerably. (author's abstract) / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
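The abstract does not specify the authors' correction scheme, so the following is a purely illustrative sketch (the blending rule, names, and numbers are all assumptions): one simple way to adjust model-based share forecasts toward observed scanner shares is a convex combination with a calibration weight:

```python
def correct_shares(cbc_shares, scanner_shares, lam):
    """Blend CBC-predicted market shares with observed scanner shares.

    lam = 0 keeps the raw CBC forecast; lam = 1 trusts scanner data fully.
    The result is renormalized so the shares sum to one.
    """
    blended = [lam * s + (1 - lam) * c
               for c, s in zip(cbc_shares, scanner_shares)]
    total = sum(blended)
    return [b / total for b in blended]

cbc = [0.50, 0.30, 0.20]        # shares predicted by the conjoint model
scanner = [0.35, 0.40, 0.25]    # shares observed in store-level scanner data
print(correct_shares(cbc, scanner, 0.5))
```

In practice the calibration weight would be estimated from historical data, e.g. by minimizing forecast error on past periods.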
35

Data Management in an Object-Oriented Distributed Aircraft Conceptual Design Environment

Lu, Zhijie 16 January 2007 (has links)
Aircraft conceptual design, as the first design stage, provides a major opportunity to compress design cycle time and is the cheapest stage at which to make design changes. However, traditional aircraft conceptual design programs, which are monolithic, cannot satisfactorily meet new design requirements due to their lack of domain flexibility and analysis scalability. Therefore, a next-generation aircraft conceptual design environment (NextADE) is needed. To build the NextADE, the framework and the data management problem are the two major problems that must be addressed at the forefront; solving them, particularly the data management problem, is the focus of this research. In this dissertation, a distributed object-oriented framework is first formulated and tested for the NextADE. To improve interoperability and simplify the integration of heterogeneous application tools, data management must be tackled. Taking into account the characteristics of aircraft conceptual design data, a robust, extensible object-oriented data model is then proposed within the distributed object-oriented framework. By overcoming the shortcomings of the traditional approach to modeling aircraft conceptual design data, this data model makes it possible to capture specific, detailed information of aircraft conceptual design without sacrificing generality. Based upon this data model, a prototype of the data management system, one of the fundamental building blocks of the NextADE, is implemented using state-of-the-art information technologies. To demonstrate the efficacy of the proposed framework and data management system, the NextADE is initially implemented by integrating the prototype data management system with the other building blocks of the design environment via a general-purpose integration software package.
As experiments, two case studies are conducted in the integrated design environments. One is based upon a simplified conceptual design of a notional conventional aircraft; the other is a simplified conceptual design of an unconventional aircraft. As a result of the experiments, the proposed framework and the data management approach are shown to be feasible solutions to the research problems.
36

Towards tool support for phase 2 in 2G

Stefánsson, Vilhjálmur January 2002 (has links)
When systematically adopting a CASE (Computer-Aided Software Engineering) tool, an organisation evaluates candidate tools against a framework of requirements and selects the most suitable tool for usage. A method, called 2G, has been proposed that aims at developing such frameworks based on the needs of a specific organisation.

This method includes a pilot evaluation phase, where state-of-the-art CASE tools are explored with the aim of gaining more understanding of the requirements that the organisation adopting CASE tools puts on candidate tools. This exploration results in certain output data, parts of which are used in interviews to discuss the findings of the tool exploration with the organisation. This project has focused on identifying the characteristics of these data, and subsequently on hypothesising a representation of the data, with the aim of providing guidelines for future tool support for the 2G method.

The approach to reaching this aim was to conduct a case study of a new application of the pilot evaluation phase, which resulted in data that could subsequently be analysed to identify their characteristics. This resulted in a hypothesised data representation, which was found to fit the data from the conducted application well, although certain situations were identified that the representation might not be able to handle.
37

Duomenų loginių struktūrų išskyrimas funkcinių reikalavimų specifikacijos pagrindu / Data logical structure segregation on the ground of a functional requirements specification

Jučiūtė, Laura 25 May 2006 (has links)
This master's thesis shows the place of data modelling in the information systems life cycle and the importance of data model quality for effective IS exploitation. Referring to the results of an analysis of the research literature, the reasons why the data modelling process should be automated are presented, and current automation solutions are described. As the main purpose of this work, an original data modelling method is described, and a software prototype that automates one step of that method, schema integration, is introduced.
38

Objektinių ir reliacinių schemų integracijos modelis / Model for integrating object and relational schemas

Bivainis, Vytenis 02 September 2008 (has links)
This work investigates the problem of integrating and reconciling object and relational schemas. Object-oriented programming languages are currently the most popular for software development, but the data to be manipulated is usually stored in relational databases, so it is important to map the structures used in programs to relational database structures. In enterprise information systems, data is often stored in several repositories, creating a need to integrate them; federated databases, built on a canonical data model, are used for this purpose. This work describes a model for integrating object and relational schemas. A semantically minimal canonical data model is proposed, consisting of attributes and constraints: functional, join/projection, and subset dependencies. Transformations from relational and object schemas to the canonical schema are described, together with an algorithm for integrating canonical schemas and transformations of the canonical schema into structural types (using a modified synthesis algorithm) and into OWL. The proposed algorithms give unambiguous results and can be partially automated. The modified synthesis algorithm gives better results than the standard one because it takes join/projection dependencies into account. The algorithms can be used for schema integration, and to recover a conceptual schema or object structures from a relational schema.
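Functional dependencies are the core constraint type in such a canonical model, and a standard building block of any synthesis-style algorithm is the attribute-closure computation. The sketch below shows the generic textbook algorithm (not the thesis's modified synthesis algorithm):

```python
def attribute_closure(attrs, fds):
    """Compute the closure of a set of attributes under functional dependencies.

    fds is a list of (lhs, rhs) pairs, each side a set of attribute names.
    An FD (lhs, rhs) fires when all of lhs is already in the closure.
    """
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closure and not rhs <= closure:
                closure |= rhs
                changed = True
    return closure

# id -> name and name -> dept: id determines everything transitively.
fds = [({"id"}, {"name"}), ({"name"}, {"dept"})]
print(attribute_closure({"id"}, fds))  # {'id', 'name', 'dept'} (in some order)
```

Synthesis algorithms use closures like this to group dependencies into relation (or structural-type) schemes.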
39

Assisting in the reuse of existing materials to build adaptive hypermedia

Zemirline, Nadjet 12 July 2011 (has links) (PDF)
Nowadays, there is a growing demand for personalization, and the "one-size-fits-all" approach to hypermedia systems is no longer applicable. Adaptive hypermedia (AH) systems adapt their behavior to the needs of individual users. However, due to the complexity of the authoring process and the range of skills it requires from authors, only a few such systems have been proposed. In recent years, considerable effort has gone into assisting authors in creating their own AH; however, as explained in this thesis, some problems remain. In this thesis, we tackle two particular problems. The first concerns the integration of authors' materials (information and user profiles) into the models of existing systems, allowing authors to directly reuse existing reasoning and execute it on their own materials. We propose a semi-automatic merging/specialization process to integrate an author's model into a model of an existing system. Our objectives are twofold: to support the definition of mappings between elements of an existing system's model and elements of the author's model, and to help create consistent and relevant models that integrate the two and take the mappings between them into account. The second problem concerns the adaptation specification, famously the hardest part of the authoring process for adaptive web-based systems. We propose the EAP framework, with three main contributions: a set of elementary adaptation patterns for adaptive navigation, a typology organizing the proposed patterns, and a semi-automatic process for generating adaptation strategies based on the use and combination of patterns. Our objective is to allow adaptation strategies to be defined easily, at a high level, by combining simple ones.
Furthermore, we have studied the expressivity of some existing solutions for specifying adaptation against the EAP framework, and discuss, based on this study, the pros and cons of various design decisions for an ideal adaptation language. We propose a unified vision of adaptation and adaptation languages, based on the analysis of these solutions and our framework, together with a study of adaptation expressivity and of the interoperability between the solutions, resulting in an adaptation typology. The unified vision and the adaptation typology are not limited to the solutions analysed and can be used to compare and extend other approaches in the future. Besides these theoretical qualitative studies, the thesis also describes implementations and experimental evaluations of our contributions in an e-learning application.
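An elementary adaptation pattern of the kind the EAP framework composes can be pictured as a small condition-action rule over the user model. The sketch below uses hypothetical names, not the thesis's pattern syntax: it annotates navigation links according to whether the user's profile covers each link's prerequisite concepts, a classic adaptive-navigation technique:

```python
def annotate_links(links, prerequisites, known_concepts):
    """Mark a link 'recommended' when the user's profile covers all of its
    prerequisite concepts, else 'not-ready'."""
    annotated = {}
    for link in links:
        required = prerequisites.get(link, set())
        annotated[link] = "recommended" if required <= known_concepts else "not-ready"
    return annotated

prereqs = {"advanced-queries": {"sql-basics"}, "intro": set()}
user_knows = {"html"}  # hypothetical user model: set of mastered concepts
print(annotate_links(["intro", "advanced-queries"], prereqs, user_knows))
# {'intro': 'recommended', 'advanced-queries': 'not-ready'}
```

A strategy in the spirit of the framework would then be a combination of several such elementary rules (annotation, hiding, reordering) applied over the same user model.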
40

Evaluation of functional data models for database design and use

Kulkarni, Krishnarao Gururao January 1983 (has links)
The problems of design, operation, and maintenance of databases using the three most popular database management systems (hierarchical, CODASYL/DBTG, and relational) are well known. Users of these systems have to make conscious and often complex mappings between real-world structures and the data structuring options (data models) the systems provide. In addition, much of the semantics associated with the data either is not expressed at all or gets embedded procedurally in application programs in an ad-hoc way. In recent years, a large number of data models (called semantic data models) have been proposed with the aim of simplifying database design and use. However, the lack of usable implementations of these proposals has so far inhibited the widespread use of these concepts. The present work reports on an effort to evaluate and extend one such semantic model by means of an implementation. It is based on the functional data model proposed earlier by Shipman (SHIP81); we call our extension the Extended Functional Data Model (EFDM). EFDM, like Shipman's proposal, is a marriage of three advanced modelling concepts found in both database and artificial intelligence research: the concept of an entity to represent an object in the real world, the concept of a type hierarchy among entity types, and the concept of derived data for modelling procedural knowledge. The functional notation of the model lends itself to high-level data manipulation languages, in which data selection is expressed simply as function application. Further, the functional approach makes it possible to incorporate general-purpose computation facilities in the data languages without having to embed them in procedural languages. In addition to providing the usual database facilities, the implementation also provides a mechanism to specify multiple user views of the database.
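In the functional style the abstract describes, a query is just function application over entities. The following is a rough Python analogy (not DAPLEX or EFDM syntax, which the thesis defines) of the three concepts named above: entities, a type hierarchy, and derived data:

```python
class Entity:
    """Base entity type; subclassing models the type hierarchy."""

class Person(Entity):
    def __init__(self, name, salary):
        self.name, self.salary = name, salary

class Manager(Person):                      # Manager is a subtype of Person
    def __init__(self, name, salary, reports):
        super().__init__(name, salary)
        self.reports = reports              # stored function: Manager -> [Person]

def team_payroll(m):
    """Derived data: computed from stored functions rather than stored itself."""
    return m.salary + sum(p.salary for p in m.reports)

ann = Person("Ann", 30000)
bob = Person("Bob", 32000)
eve = Manager("Eve", 45000, [ann, bob])

# Data selection is expressed simply as function application:
print(team_payroll(eve))  # 107000
```

In a functional data language the same derived function could appear directly in queries, with general-purpose computation available without dropping into a host procedural language.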
