1

Novel processes for smart grid information exchange and knowledge representation using the IEC common information model

Hargreaves, Nigel January 2013 (has links)
The IEC Common Information Model (CIM) is of central importance in enabling smart grid interoperability. Its continual development aims to meet the needs of the smart grid for semantic understanding and knowledge representation for a widening domain of resources and processes. With smart grid evolution the importance of information and data management has become an increasingly pressing issue not only because far more data is being generated using modern sensing, control and measuring devices but also because information is now becoming recognised as the ‘integral component’ that facilitates the optimal flexibility required of the smart grid. This thesis looks at the impacts of CIM implementation upon the landscape of smart grid issues and presents research from within National Grid contributing to three key areas in support of further CIM deployment. Taking the issue of Enterprise Information Management first, an information management framework is presented for CIM deployment at National Grid. Following this the development and demonstration of a novel secure cloud computing platform to handle such information is described. Power system application (PSA) models of the grid are partial knowledge representations of a shared reality. To develop the completeness of our understanding of this reality it is necessary to combine these representations. The second research contribution reports on a novel methodology for a CIM-based model repository to align PSA representations and provide a knowledge resource for building utility business intelligence of the grid. The third contribution addresses the need for greater integration of information relating to energy storage, an essential aspect of smart energy management. It presents the strategic rationale for integrated energy modeling and a novel extension to the existing CIM standards for modeling grid-scale energy storage. Significantly, this work has already contributed to a larger body of work on modeling Distributed Energy Resources currently under development at the Electric Power Research Institute (EPRI) in the USA.
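To make the idea of a CIM extension concrete, the sketch below declares a hypothetical grid-scale storage class as a specialisation of the standard CIM PowerSystemResource concept using rdflib. This is a minimal illustration only: the namespace URI, the extension class and the property names are assumptions, not the extension actually proposed in the thesis or adopted by EPRI.

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, XSD

CIM = Namespace("http://iec.ch/TC57/CIM#")            # illustrative CIM namespace
EXT = Namespace("http://example.org/storage-ext#")    # hypothetical extension namespace

g = Graph()
g.bind("cim", CIM)
g.bind("ext", EXT)

# Hypothetical extension class: a grid-scale battery modelled as a
# specialisation of the existing CIM PowerSystemResource concept.
g.add((EXT.BatteryStorageUnit, RDF.type, RDFS.Class))
g.add((EXT.BatteryStorageUnit, RDFS.subClassOf, CIM.PowerSystemResource))

# Hypothetical property carrying the unit's rated energy capacity (MWh).
g.add((EXT.ratedEnergyCapacity, RDF.type, RDF.Property))
g.add((EXT.ratedEnergyCapacity, RDFS.domain, EXT.BatteryStorageUnit))
g.add((EXT.ratedEnergyCapacity, RDFS.range, XSD.float))

# One instance, as it might appear in a CIM-based model repository.
unit = EXT["storage-unit-001"]
g.add((unit, RDF.type, EXT.BatteryStorageUnit))
g.add((unit, EXT.ratedEnergyCapacity, Literal(50.0, datatype=XSD.float)))

print(g.serialize(format="turtle"))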
2

The development of a semantic model for the interpretation of mathematics including the use of technology

Peters, Michael January 2010 (has links)
The semantic model developed in this research was a response to the difficulty a group of mathematics learners had with conventional mathematical language and their interpretation of mathematical constructs. In order to develop the model, ideas from linguistics, psycholinguistics, cognitive psychology, formal languages and natural language processing were investigated. This investigation led to the identification of four main processes: the parsing process, syntactic processing, semantic processing and conceptual processing. The model showed the complex interdependency between these four processes and provided a theoretical framework in which the behaviour of the mathematics learner could be analysed. The model was then extended to cover the use of technological artefacts in the learning process. To facilitate this aspect of the research, the theory of instrumentation was incorporated into the semantic model. The conclusion of this research was that although the cognitive processes were interdependent, they could develop at different rates until mastery of a topic was achieved. It also found that the introduction of a technological artefact into the learning environment added another layer of complexity, both in terms of the learning process and the underlying relationship between the four cognitive processes.
3

Frazeologismy s komponenty - somatismy v ruštině a češtině / Phraseological units with somatic components in Russian and Czech

Vašíčková, Anastázie January 2016 (has links)
The subject of the work is phraseological units with the somatic components «head» and «heart». The phraseological units of each group are analyzed from a semantic and structural perspective. An etymological explanation is introduced, and the phraseological units are then illustrated within a Russian context. Conclusions about the Russian and Czech languages are drawn by exploring the differences and similarities between the phraseological units.
4

Analysis and Modeling of the Structure of Semantic Dynamics in Texts

Ren, Zhaowei January 2017 (has links)
No description available.
5

Towards a new approach for enterprise integration : the semantic modeling approach

Radhakrishnan, Ranga Prasad 01 February 2005
Manufacturing today has become a matter of the effective and efficient application of information technology and knowledge engineering. Manufacturing firms' success depends to a great extent on information technology, which emphasizes the integration of the information systems used by a manufacturing enterprise. This integration is also called enterprise application integration (here the term application means information systems or software systems). The methodology for enterprise application integration, in particular enterprise application integration automation, has been studied for at least a decade; however, no satisfactory solution has been found. Enterprise application integration is becoming even more difficult due to the explosive growth of various information systems as a result of ever increasing competition in the software market. This thesis aims to provide a novel solution to enterprise application integration. The semantic data model concept that evolved in database technology is revisited and applied to enterprise application integration. This has led to two novel ideas developed in this thesis. First, an ontology of an enterprise with five levels (following the data abstraction: generalization/specialization) is proposed and represented using the Unified Modeling Language (UML). Second, both the ontology for the enterprise functions and the ontology for the enterprise applications are modeled to allow automatic processing of information back and forth between these two domains. The approach with these novel ideas is called the enterprise semantic model approach. The thesis presents a detailed description of the enterprise semantic model approach, including the fundamental rationale behind the enterprise semantic model, the ontology of enterprises with levels, and a systematic way towards the construction of a particular enterprise semantic model for a company. A case study is provided to illustrate how the approach works and to show the high potential of solving the existing problems within enterprise application integration.
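As a rough illustration of the five-level generalization/specialization idea, the Python sketch below builds a small fragment of such a hierarchy. The helper class and the concept names are hypothetical; they only indicate how concepts could be chained from the most general level down to levels that map onto concrete application features, and do not reproduce the ontology defined in the thesis.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OntologyNode:
    """A concept at one abstraction level of an enterprise ontology."""
    name: str
    level: int                                    # 1 = most general, 5 = most specific
    parent: Optional["OntologyNode"] = None
    children: List["OntologyNode"] = field(default_factory=list)

    def specialize(self, name: str) -> "OntologyNode":
        """Add a more specific concept one level below this one."""
        child = OntologyNode(name=name, level=self.level + 1, parent=self)
        self.children.append(child)
        return child

# Hypothetical fragment on the enterprise-function side of the ontology.
enterprise = OntologyNode("Enterprise", level=1)
production = enterprise.specialize("Production Management")
scheduling = production.specialize("Shop-Floor Scheduling")
# ... further specialization continues down to level 5, where concepts
# would be matched against features of concrete enterprise applications.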
6

A Systems Engineering-based semantic model to support “Product-Service System” life cycle / Un modèle sémantique basé sur l'ingénierie des systèmes pour supporter le cycle de vie des systèmes "Produit-Service"

Maleki, Elaheh 21 December 2018 (has links)
Product-service systems (PSS) result from the integration of heterogeneous components covering both tangible and intangible aspects (mechanical, electrical, software, process, organization, etc.). The process of developing a PSS is highly collaborative, involving a wide variety of stakeholders. This interdisciplinary nature requires standardized semantic repositories to handle the multitude of business views and to facilitate the integration of all heterogeneous components into a single system. This is even more complex in the case of customizable PSS, which predominate in the industrial sector. Despite the many methodologies in the literature, management of PSS development processes remains limited in the face of this complexity. In this context, Systems Engineering (SE) could be an advantageous solution given its proven qualities for the modeling and management of complex systems. This thesis explores the potential of Systems Engineering (SE) as a conceptual foundation for representing, in an integrated way, the various business perspectives associated with the PSS life cycle. In this context, a meta-model for PSS is proposed and verified in industrial cases. An ontological model is also presented as an application of part of the meta-model to structure the common repository of the ICP4Life platform.
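A minimal sketch of what a PSS meta-model fragment could look like in code is given below. The classes and the example system are hypothetical stand-ins for the SE-based meta-model developed in the thesis, which is not reproduced here; they merely show how tangible and intangible components might be composed into one system and traced back to shared requirements.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    identifier: str
    text: str

@dataclass
class ProductComponent:
    name: str                                     # tangible part, e.g. a machine or sensor
    satisfies: List[Requirement] = field(default_factory=list)

@dataclass
class ServiceComponent:
    name: str                                     # intangible part, e.g. a maintenance service
    satisfies: List[Requirement] = field(default_factory=list)

@dataclass
class ProductServiceSystem:
    name: str
    products: List[ProductComponent] = field(default_factory=list)
    services: List[ServiceComponent] = field(default_factory=list)

# Hypothetical example: one requirement satisfied jointly by a product and a service.
req = Requirement("R1", "Guarantee 95% machine availability")
pss = ProductServiceSystem(
    "Compressed-air-as-a-service",
    products=[ProductComponent("Compressor", satisfies=[req])],
    services=[ServiceComponent("Predictive maintenance", satisfies=[req])],
)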
7

Model Development for Autonomous Short-Term Adaptation of Cobots' Motion Speed to Human Work Behavior in Human-Robot Collaboration Assembly Stations

Jeremy Amadeus Deniz Askin (11625070) 26 July 2022 (has links)
Manufacturing flexibility and human-centered designs are promising approaches to meeting the demand for individualized products. Human-robot assembly cells still lack flexibility and adaptability (VDI, 2017) because they rely on static control architectures (Bessler et al., 2020). Autonomous adaptation to human operators over short time horizons increases the willingness to work with cobots. In addition, monotonous static assembly work does not accommodate the human way of working. Therefore, Human-Robot Collaboration (HRC) workstations require adaptation to varying work behavior arising from human mental and physical conditions (Weiss et al., 2021). The thesis presents the development of a cyber-physical HRC assembly station.

Moreover, the thesis includes an experimental study investigating the influence of a cobot's speed on human work behavior. The Cyber-Physical System (CPS) integrates the experiment's findings with an event-based software architecture and a semantic knowledge representation. The work thereby focuses on demonstrating the feasibility of the CPS and the semantic model, which allow the system to self-adapt. Finally, the conclusion identifies the need for further research on human work behavior detection and fuzzy decision models. Such detection and decision models could improve self-adaptation in human-centered assembly systems.
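The sketch below illustrates, under simplifying assumptions, how an event-based speed-adaptation rule of the kind described could look: when an assembly-step event reports that the operator was slower or faster than planned, the cobot's speed override is rescaled within fixed bounds. The event fields, scaling rule and bounds are illustrative assumptions, not the thesis's actual decision model.

from dataclasses import dataclass

@dataclass
class AssemblyEvent:
    """Hypothetical event emitted when the operator finishes an assembly step."""
    operator_cycle_time_s: float      # observed time for the last step
    nominal_cycle_time_s: float       # planned time for that step

def adapt_cobot_speed(event: AssemblyEvent,
                      current_speed: float,
                      min_speed: float = 0.4,
                      max_speed: float = 1.0) -> float:
    """Return a new speed override (fraction of nominal speed) for the cobot.

    If the operator is slower than planned, the cobot slows down; if faster,
    it speeds up, always staying inside safety-certified bounds.
    """
    ratio = event.nominal_cycle_time_s / event.operator_cycle_time_s
    proposed = current_speed * ratio
    return max(min_speed, min(max_speed, proposed))

# Operator took 12 s on a step planned for 10 s -> the cobot slows down.
new_speed = adapt_cobot_speed(AssemblyEvent(12.0, 10.0), current_speed=0.8)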
8

Systém pro zpracování dat z regulátoru HAWK firmy Honeywell / HAWK controller data processing system

Dostál, Jiří January 2017 (has links)
This thesis deals with the development of program components for collecting semantically labeled data from Honeywell's Hawk controller. The basic principles and capabilities of development with the Niagara Framework, on which Hawk is based, are explained. Lastly, the specific components and the external database application are described.
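As an illustration only, the Python sketch below shows the general shape of such a collection pipeline: semantically tagged point values are read from a controller and written to an external database. The actual thesis components run inside the Java-based Niagara Framework, which is not shown here; the reader function, tag strings and table schema are hypothetical.

import sqlite3
import time

def read_points():
    """Hypothetical stand-in for reading tagged point values from the controller."""
    return [
        ("AHU1/SupplyTemp", "sensor:temperature:supply-air", 18.4),
        ("AHU1/FanStatus", "cmd:fan:status", 1.0),
    ]

# External database application: store each sample with its semantic tag.
conn = sqlite3.connect("hawk_data.db")
conn.execute("""CREATE TABLE IF NOT EXISTS samples (
                    ts REAL, point TEXT, tag TEXT, value REAL)""")

for point, tag, value in read_points():
    conn.execute("INSERT INTO samples VALUES (?, ?, ?, ?)",
                 (time.time(), point, tag, value))
conn.commit()
conn.close()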
9

Transfer Learning in Deep Structured Semantic Models for Information Retrieval / Kunskapsöverföring mellan datamängder i djupa arkitekturer för informationssökning

Zarrinkoub, Sahand January 2020 (has links)
Recent approaches to IR include neural networks that generate query and document vector representations. The representations are used as the basis for document retrieval and are able to encode semantic features if trained on large datasets, an ability that sets them apart from classical IR approaches such as TF-IDF. However, the datasets necessary to train these networks are not available to the owners of most search services in use today, since those services do not have enough users. Thus, methods for enabling the use of neural IR models in data-poor environments are of interest. In this work, a bag-of-trigrams neural IR architecture is used in a transfer learning procedure in an attempt to increase performance on a target dataset by pre-training on external datasets. The target dataset is WikiQA, and the external datasets are Quora's Question Pairs, Reuters' RCV1 and SQuAD. When considering individual model performance, pre-training on Question Pairs and fine-tuning on WikiQA gives the best individual models. However, when considering average performance, pre-training on the chosen external datasets results in lower performance on the target dataset, both when all datasets are used together and when they are used individually, with different average performance depending on the external dataset used. On average, pre-training on RCV1 and Question Pairs gives the lowest and highest average performance respectively, when considering only the pre-trained networks. Surprisingly, the performance of an untrained, randomly generated network is high, beating all pre-trained networks on average and performing on par with BM25. The best performing model on average is a neural IR model trained on the target dataset without prior pre-training.
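To clarify the bag-of-trigrams representation underlying this kind of architecture, the sketch below hashes letter trigrams into fixed-size query and document vectors and compares them with cosine similarity. In the thesis's architecture these sparse vectors are the input to a neural network that produces dense semantic vectors; that network is omitted here, and the vector dimensionality and example texts are assumptions.

import numpy as np

def letter_trigrams(text: str):
    """Split each word into letter trigrams, e.g. 'cat' -> '#ca', 'cat', 'at#'."""
    for word in text.lower().split():
        padded = f"#{word}#"
        for i in range(len(padded) - 2):
            yield padded[i:i + 3]

def trigram_vector(text: str, dim: int = 4096) -> np.ndarray:
    """Bag-of-trigrams vector via feature hashing into a fixed dimensionality."""
    v = np.zeros(dim)
    for tri in letter_trigrams(text):
        v[hash(tri) % dim] += 1.0
    return v

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Compare the raw trigram vectors directly, just to show the representation.
query = trigram_vector("capital of norway")
doc = trigram_vector("oslo is the capital and largest city of norway")
print(cosine(query, doc))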
