501

SweetDeal: Representing Agent Contracts With Exceptions using XML Rules, Ontologies, and Process Descriptions

GROSOF, BENJAMIN, POON, TERRENCE C. 16 September 2003 (has links)
SweetDeal is a rule-based approach to the representation of business contracts that enables software agents to create, evaluate, negotiate, and execute contracts with substantial automation and modularity. It builds upon the situated courteous logic programs knowledge representation in RuleML, the emerging standard for Semantic Web XML rules. Here, we newly extend the SweetDeal approach by also incorporating process knowledge descriptions whose ontologies are represented in DAML+OIL (the close predecessor of W3C's OWL, the emerging standard for Semantic Web ontologies), thereby enabling more complex contracts with behavioral provisions, especially for handling exception conditions (e.g., late delivery or non-payment) that might arise during the execution of the contract. This provides a foundation for representing and automating deals about services – in particular, about Web Services, so as to help search, select, and compose them. We give a detailed application scenario of late delivery in manufacturing supply chain management (SCM). In doing so, we draw upon our new formalization of process ontology knowledge from the MIT Process Handbook, a large, previously existing repository used by practical industrial process designers. Our system is the first to combine the emerging Semantic Web standards for knowledge representation of rules (RuleML) and ontologies (DAML+OIL/OWL) with each other, moreover for a practical e-business application domain, and further to do so with process knowledge. This also newly fleshes out the evolving concept of Semantic Web Services. A prototype (soon public) i
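To make the idea of a contract rule with an exception handler concrete, the following minimal Python sketch illustrates a late-delivery provision of the general kind the abstract describes. It is not taken from SweetDeal (which encodes such rules as situated courteous logic programs in RuleML); the contract fields and penalty scheme are invented for illustration.

```python
from datetime import date

def late_delivery_rule(contract, delivery_date):
    """Illustrative exception-handling rule: if delivery is late,
    apply a penalty discount to the agreed price.
    Fields and penalty scheme are hypothetical, not SweetDeal's."""
    days_late = (delivery_date - contract["promised_date"]).days
    if days_late > 0:
        penalty = min(0.05 * days_late, 0.30)   # cap the penalty at 30%
        contract["price"] = round(contract["price"] * (1 - penalty), 2)
        contract["exceptions"].append(("late_delivery", days_late))
    return contract

contract = {"promised_date": date(2003, 9, 1), "price": 1000.0, "exceptions": []}
print(late_delivery_rule(contract, date(2003, 9, 10)))
```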
502

Modelos de representación de arquetipos en sistemas de información sanitarios [Models for representing archetypes in health information systems]

Menárguez Tortosa, Marcos 29 May 2013 (has links)
This doctoral thesis presents an ontology-based approach for representing the dual-model architecture of the Electronic Health Record. Representing archetypes in OWL enables: 1) the definition and implementation of a reasoning-based method for evaluating archetype quality, 2) the definition of a methodology and a framework for the interoperability of clinical content models, and 3) the application of model-driven software development techniques and tools for the automatic generation of health information systems from archetypes.
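As a rough illustration of reasoning-based quality checking over OWL models (the thesis's actual archetype ontology and quality criteria are not reproduced here), a sketch using the owlready2 library might look as follows; the ontology path is hypothetical.

```python
from owlready2 import get_ontology, sync_reasoner, default_world

# Hypothetical OWL file containing archetype definitions expressed in OWL.
onto = get_ontology("file:///tmp/archetypes.owl").load()

# Run a description-logic reasoner (owlready2 bundles a Java-based one);
# unsatisfiable archetype classes indicate modelling errors.
with onto:
    sync_reasoner()

for cls in default_world.inconsistent_classes():
    print("Inconsistent archetype class:", cls)
```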
503

Semantics and Knowledge Engineering for Requirements and Synthesis in Conceptual Design: Towards the Automation of Requirements Clarification and the Synthesis of Conceptual Design Solutions

Christophe, François 27 July 2012 (has links) (PDF)
This thesis suggests the use of tools from the disciplines of Computational Linguistics and Knowledge Representation, with the idea that such tools would enable the partial automation of two processes of Conceptual Design: the analysis of requirements and the synthesis of concepts of solution. The viewpoint on Conceptual Design developed in this research is based on the systematic methodologies developed in the literature. The evolution of these methodologies has provided precise descriptions of the tasks to be achieved by the designing team in order to achieve successful design. The argument of this thesis is therefore that it is possible to create computer models of some of these tasks in order to partially automate the refinement of the design problem and the exploration of the design space. In Requirements Engineering, the definition of requirements consists in identifying the needs of various stakeholders and formalizing them into design specifications. During this task, designers face the problem of having to deal with individuals of different expertise who express their needs with different levels of clarity. This research tackles this issue for requirements expressed in natural language (in this case English). The analysis of needs is carried out at different linguistic levels: lexical, syntactic and semantic. The lexical level deals with the meaning of the words of a language. Syntactic analysis addresses the construction of sentences, i.e. the grammar of a language. The semantic level aims at establishing the specific meaning of words in the context of a sentence. This research makes extensive use of a semantic atlas based on the concept of clique from graph theory. This concept enables the computation of distances between a word and its synonyms. Additionally, a methodology and a similarity metric were defined for clarifying requirements at the syntactic, lexical and semantic levels. This methodology integrates tools from research collaborators. In the synthesis process, a Knowledge Representation of the concepts necessary for enabling computers to create concepts of solution was developed. Such concepts are: function, input/output flow, generic organs, behaviour, and components. The semantic atlas is also used at that stage to enable a mapping between functions and their solutions; it works as the interface between the concepts of this Knowledge Representation.
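To make the clique-based semantic-atlas idea more concrete, here is a small illustrative sketch (not the thesis's actual atlas or similarity metric) that builds a toy synonym graph with networkx, extracts its maximal cliques, and scores word similarity by shared clique membership.

```python
import networkx as nx

# Toy synonym graph: an edge means two words are listed as synonyms.
G = nx.Graph()
G.add_edges_from([
    ("need", "requirement"), ("requirement", "specification"),
    ("need", "demand"), ("demand", "requirement"),
    ("specification", "description"),
])

cliques = list(nx.find_cliques(G))   # maximal cliques ~ tight sense clusters

def clique_similarity(w1, w2):
    """Illustrative similarity: fraction of cliques containing both words."""
    shared = sum(1 for c in cliques if w1 in c and w2 in c)
    either = sum(1 for c in cliques if w1 in c or w2 in c)
    return shared / either if either else 0.0

print(cliques)
print(clique_similarity("need", "requirement"))
```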
504

A model of pulsed signals between 100MHz and 100GHz in a Knowledge-Based Environment

Fitch, Phillip January 2009 (has links)
The thesis describes a model of electromagnetic pulses from sources between 100 MHz and 100 GHz for use in the development of Knowledge-Based systems. Each pulse, including its modulations, is described as it would be seen by a definable receiving system. The model has been validated against a range of Knowledge-Based systems including a neural network, a learning system and an expert system.
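The abstract does not give the model's fields, but a receiver-centric pulse descriptor of the kind described could be sketched as follows; the field names, units and example values are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReceivedPulse:
    """One pulse as seen by a definable receiving system (illustrative fields)."""
    carrier_hz: float          # carrier frequency, 100 MHz .. 100 GHz
    pulse_width_s: float       # pulse duration in seconds
    pri_s: float               # pulse repetition interval
    amplitude_dbm: float       # received power
    modulations: List[str] = field(default_factory=list)  # e.g. ["chirp", "BPSK"]

    def in_band(self, lo_hz: float = 100e6, hi_hz: float = 100e9) -> bool:
        return lo_hz <= self.carrier_hz <= hi_hz

p = ReceivedPulse(carrier_hz=9.4e9, pulse_width_s=1e-6, pri_s=1e-3,
                  amplitude_dbm=-60.0, modulations=["chirp"])
print(p.in_band())
```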
505

Modelo para estruturação e representação de diálogos em fórum de discussão [A model for structuring and representing dialogues in discussion forums]

Buiar, José Antônio 16 October 2012 (has links)
Adapting traditional face-to-face teaching practices to distance learning introduces several changes in school practice. Since direct contact between educator and students is absent, technological artifacts are needed to replace direct interaction. The discussion forum is one such artifact: it acts as a catalyst for communication among the participants and can be an important instrument in the educational process. However, the unstructured nature of forum text messages hampers their use in individual student assessment, and analysing and qualifying the content of the stored messages is a major challenge for the instructor. The absence of a formal structure for representing students' concepts, beliefs and ideas can be pointed to as one of the elements contributing to this challenge. This research proposes a model for structuring and representing forum messages. The structuring considers three aspects of a message: i) the concepts presented, ii) who presented them, and iii) when they were presented. To validate the model, a computer program implementing the concepts of the Structuring and Representation of Forum Messages Model was developed and tested on a forum of the Moodle virtual environment. From this structured representation, a map or guide is generated that can be accessed by the professor or instructor and used as a support tool for analysing or assessing the Moodle forum as a whole or each student's individual participation.
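A minimal Python sketch of the three-aspect structuring described above (which concept, who presented it, and when) might look as follows; the record fields, grouping and example data are illustrative assumptions, not the thesis's implementation.

```python
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class ForumConcept:
    """One structured record: which concept, who presented it, and when."""
    concept: str
    author: str
    posted_at: datetime

def build_map(records):
    """Group extracted concepts per student, ordered in time,
    as a simple stand-in for the map/guide the model generates."""
    by_author = defaultdict(list)
    for r in sorted(records, key=lambda r: r.posted_at):
        by_author[r.author].append((r.posted_at, r.concept))
    return dict(by_author)

records = [
    ForumConcept("inheritance", "alice", datetime(2012, 10, 1, 10, 0)),
    ForumConcept("polymorphism", "alice", datetime(2012, 10, 2, 9, 30)),
    ForumConcept("inheritance", "bob", datetime(2012, 10, 1, 14, 5)),
]
print(build_map(records))
```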
507

A new framework for a technological perspective of knowledge management

Botha, Antonie Christoffel 26 June 2008 (has links)
Rapid change is a defining characteristic of our modern society. This has a huge impact on society, governments, and businesses. Businesses are forced to fundamentally transform themselves to survive in a challenging economy. Transformation implies change in the way business is conducted, in the way people perform their contribution to the organisation, and in the way the organisation perceives and manages its vital assets – which increasingly are built around the key assets of intellectual capital and knowledge. The latest management tool and realisation of how to respond to the challenges of the economy in the new millennium is the idea of "knowledge management" (KM). In this study we have focused on synthesising the many confusing points of view about the subject area, such as: a) different focus points or perspectives; b) different definitions and positioning of the subject; and c) a bewildering number of definitions of what knowledge is and what KM entails. Popular, magazine-like sources blur the distinction between subjects and concepts such as: knowledge versus information versus data; the difference between information management and knowledge management; the tools available to tackle the issues in this field of study and practice; and the role technology plays versus the huge hype from some journalists and within the vendor community. Today there appears to be a lack of a coherent set of frameworks to abstract, comprehend, and explain this subject area, let alone to build successful systems and technologies with which to apply KM. The study comprises two major parts: 1) In the first part the study investigates the concepts, elements, drivers, and challenges related to KM. A set of models for comprehending these issues and notions is contributed as we consider intellectual capital, organisational learning, communities of practice, and best practices. 2) The second part focuses on the technology perspective of KM. Although KM is primarily concerned with non-technical issues, this study concentrates on the technical issues and challenges. A new technology framework for KM is proposed to position and relate the different KM technologies as well as the two key applications of KM, namely knowledge portals and knowledge discovery (including text mining). It is concluded that KM and related concepts and notions need to be understood firmly as well as effectively positioned and employed to support the modern business organisation in its quest to survive and grow. The main thesis is that KM technology is a necessary but insufficient prerequisite and a key enabler for successful KM in a rapidly changing business environment. / Thesis (PhD (Computer Science))--University of Pretoria, 2010. / Computer Science / unrestricted
508

USING REINFORCEMENT LEARNING FOR ACTIVE SHOOTER MITIGATION

Robert Eugen Bott (11791199) 20 December 2021 (has links)
This dissertation investigates the value of deep reinforcement learning (DRL) within an agent-based model (ABM) of a large open-air venue. The intent is to reduce civilian casualties in an active shooting incident (ASI). There has been a steady increase of ASIs in the United States of America for over 20 years, and some of the most casualty-producing events have been in open spaces and open-air venues. More research should be conducted within the field to help discover policies that can mitigate the threat of a shooter in extremis. This study uses the concept of dynamic signage, controlled by a DRL policy, to guide civilians away from the threat and toward a safe exit in the modeled environment. It was found that a well-trained DRL policy can significantly reduce civilian casualties as compared to baseline scenarios. Further, the DRL policy can assist decision makers in determining how many signs to use in an environment and where to place them. Finally, research using DRL in the ASI space can yield systems and policies that will help reduce the impact of active shooters during an incident.
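As a drastically simplified illustration of learning a sign-control policy by reinforcement (the dissertation uses deep RL inside a full agent-based venue model, not this tabular sketch), the following Python snippet learns which exit a dynamic sign should point toward for each coarse shooter location; the states, actions, rewards and one-step update are invented for illustration.

```python
import random

# Toy stand-in for the ABM: states are coarse shooter locations, actions are
# which exit the dynamic sign points civilians toward.
STATES = ["shooter_north", "shooter_south"]
ACTIONS = ["point_to_east_exit", "point_to_west_exit"]

def reward(state, action):
    # Fewer simulated casualties when civilians are routed away from the shooter.
    good = {"shooter_north": "point_to_east_exit",
            "shooter_south": "point_to_west_exit"}
    return 1.0 if good[state] == action else -1.0

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2

for episode in range(2000):
    s = random.choice(STATES)
    a = (random.choice(ACTIONS) if random.random() < epsilon
         else max(ACTIONS, key=lambda act: Q[(s, act)]))
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])   # one-step, bandit-style update

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda act: Q[(s, act)]))
```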
509

AI on the Edge with CondenseNeXt: An Efficient Deep Neural Network for Devices with Constrained Computational Resources

Priyank Kalgaonkar (10911822) 05 August 2021 (has links)
The research work presented in this thesis proposes a neoteric variant of deep convolutional neural network architecture, CondenseNeXt, designed specifically for ARM-based embedded computing platforms with constrained computational resources. CondenseNeXt is an improved version of CondenseNet, the baseline architecture whose roots can be traced back to ResNet. CondenseNeXt replaces the group convolutions in CondenseNet with depthwise separable convolutions and introduces group-wise pruning, a model compression technique, to prune (remove) redundant and insignificant elements that either are irrelevant or do not affect the performance of the network upon disposition. Cardinality, a new dimension added to the existing spatial dimensions, and a class-balanced focal loss function, a weighting factor inversely proportional to the number of samples, have been incorporated into the design of CondenseNeXt's algorithm in order to relieve the harsh effects of pruning. Furthermore, extensive analyses of this novel CNN architecture were performed on three benchmark image datasets: CIFAR-10, CIFAR-100 and ImageNet, by deploying the trained weights onto an ARM-based embedded computing platform, the NXP BlueBox 2.0, for real-time image classification. The outputs are observed in real time in RTMaps Remote Studio's console to verify the correctness of the classes being predicted. CondenseNeXt achieves state-of-the-art image classification performance on the three benchmark datasets: CIFAR-10 (4.79% top-1 error), CIFAR-100 (21.98% top-1 error) and ImageNet (7.91% single-model, single-crop top-5 error), with up to a 59.98% reduction in forward FLOPs compared to CondenseNet. CondenseNeXt can also achieve a final trained model size of 2.9 MB, albeit at the cost of a 2.26% accuracy loss. Thus, CondenseNeXt performs image classification on ARM-based computing platforms with outstanding efficiency, without requiring CUDA-enabled GPU support.
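For readers unfamiliar with the key building block mentioned above, the following is a generic PyTorch sketch of a depthwise separable convolution (a per-channel depthwise convolution followed by a 1x1 pointwise convolution); it is not CondenseNeXt's actual implementation, and the kernel size, normalization and activation choices are assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Generic depthwise separable convolution: a per-channel (depthwise) 3x3
    convolution followed by a 1x1 pointwise convolution that mixes channels."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)                    # NCHW input
print(DepthwiseSeparableConv(32, 64)(x).shape)    # torch.Size([1, 64, 56, 56])
```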
510

Leakage Conversion For Training Machine Learning Side Channel Attack Models Faster

Rohan Kumar Manna (8788244) 01 May 2020 (has links)
Recent improvements in the area of the Internet of Things (IoT) have led to extensive utilization of embedded devices and sensors. Hence, the need for the safety and security of these devices also increases proportionately with their utilization. In the last two decades, the side-channel attack (SCA) has become a massive threat to interconnected embedded devices. Moreover, extensive research has led to the development of many different forms of SCA for extracting the secret key by utilizing various kinds of leakage information. Lately, machine learning (ML) based models have been more effective in breaking complex encryption systems than other types of SCA models. However, these ML or deep learning (DL) models require a lot of data for training, which cannot be collected while attacking a device in a real-world situation. Thus, in this thesis, we try to solve this issue by proposing the new technique of leakage conversion. In this technique, we convert high signal-to-noise ratio (SNR) power traces to low-SNR averaged electromagnetic traces. In addition, we show how artificial neural networks (ANNs) can learn various non-linear dependencies of features in leakage information, which cannot be done by adaptive digital signal processing (DSP) algorithms. Initially, we successfully convert traces in the time interval of 80 to 200, as the cryptographic operations occur in that time frame. Next, we show the successful conversion of traces lying in any time frame, as well as having random key and plaintext values. Finally, to validate our leakage conversion technique and the generated traces, we successfully implement correlation electromagnetic analysis (CEMA) with an approximate minimum traces to disclosure (MTD) of 480.
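A minimal sketch of a trace-to-trace regressor of the kind described (a neural network mapping a window of power-trace samples to the corresponding window of averaged EM samples) is shown below in PyTorch; the window length, layer sizes and synthetic data are assumptions for illustration, not the thesis's setup.

```python
import torch
import torch.nn as nn

WINDOW = 120   # e.g. samples 80..200, where the cryptographic operation occurs

# Simple MLP mapping one power-trace window to one EM-trace window.
model = nn.Sequential(
    nn.Linear(WINDOW, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, WINDOW),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

power = torch.randn(512, WINDOW)                    # placeholder power traces
em = 0.3 * power + 0.1 * torch.randn_like(power)    # placeholder EM targets

for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(power), em)   # regress EM window from power window
    loss.backward()
    opt.step()

print("final MSE:", loss.item())
```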
