1

A context-based name resolution approach for semantic schema integration

BELIAN, Rosalie Barreto 31 January 2008
One of the aims of the Semantic Web is to provide a wide variety of services from different domains on the Web. Most of these services are collaborative, with tasks that rely on decision-making processes. These decisions, in turn, are better grounded when they take into account as much information as possible about the tasks being executed. This scenario encourages the development of techniques and tools for information integration that address the heterogeneity of data sources. The mediation-based architecture used in information integration systems aims to isolate the user from the distributed data sources through an intermediate software layer called the mediator. In an information integration system, the mediator uses a global schema to execute user queries, which are reformulated into sub-queries according to the local schemas of the data sources. A schema integration process generates the global schema (the mediation schema) by integrating the individual schemas of the data sources. The main problem in schema integration is the heterogeneity of the local data sources, which makes semantic resolution essential: purely structural and syntactic integration methods are of little use unless the real meaning of the schema elements has first been identified. A schema integration process produces an integrated global schema and a set of inter-schema mappings, and usually comprises a few basic steps: pre-integration, schema comparison, mapping, schema unification and generation of the mediation schema. In schema integration, name resolution is the process that determines which real-world entity a given schema element refers to, taking into account the available semantic information. The semantic information needed for name resolution is generally obtained from generic and/or domain-specific vocabularies. Element names can have different meanings depending on the semantic context to which they are related, so the use of contextual information, in addition to domain information, can make the interpretation of elements more precise, allowing their meaning to change according to a given context. This work proposes a context-based name resolution approach for schema integration. One of its strengths is the modelling and use of the contextual information required for name resolution at different stages of the schema integration process. The contextual information is modelled using an ontology, which favours inference mechanisms and the sharing and reuse of information. In addition, this work proposes a simple and extensible schema integration process, so that its development could concentrate mainly on the requirements related to name resolution. This process was developed for a mediation-based information integration system that adopts the GAV approach and uses XML as the common model for data exchange and for integrating Web data sources.
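
To make the idea of context-based name resolution concrete, here is a minimal Python sketch. The vocabulary, contexts and concept names are invented for illustration; the thesis models this information with an ontology and inference mechanisms, not a lookup table.

```python
# Toy sense inventory: for an element name, candidate concepts indexed by
# the contextual cues (neighbouring element names) that suggest each sense.
SENSES = {
    "title": [
        ({"book", "author", "isbn"}, "bibliographic:WorkTitle"),
        ({"employee", "salary", "department"}, "hr:JobTitle"),
    ],
    "name": [
        ({"customer", "address"}, "crm:PersonName"),
        ({"product", "price"}, "catalog:ProductName"),
    ],
}

def resolve(element: str, context: set[str]) -> str | None:
    """Pick the sense whose contextual cues best overlap the element's context."""
    best, best_score = None, 0
    for cues, concept in SENSES.get(element.lower(), []):
        score = len(cues & context)
        if score > best_score:
            best, best_score = concept, score
    return best

# The same element name resolves to different real-world concepts
# depending on the schema context it appears in.
print(resolve("title", {"employee", "salary"}))  # hr:JobTitle
print(resolve("title", {"book", "isbn"}))        # bibliographic:WorkTitle
```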
2

The Training Deficiency in Corporate America: Training Security Professionals to Protect Sensitive Information

Johnson, Kenneth Tyrone 01 January 2017
Senior leaders need to understand both internal and external training approaches before creating a training plan for security professionals who protect sensitive information. The purpose of this qualitative case study was to explore the training strategies telecommunication industry leaders use to ensure security professionals can protect sensitive information. The population consisted of 3 senior leaders at a large telecommunication company in Dallas, Texas, that secures a large volume of sensitive information. The conceptual framework on which this study was based was the security risk planning model. Semistructured interviews and document reviews supported the findings of this study. Thematic analysis yielded 3 major themes: security training is required for all professionals, different approaches to training are beneficial, and internal and external training should complement each other. The findings revealed that senior leaders used different variations of training programs to train security professionals in protecting sensitive information. The senior leaders' highest priority was ensuring that all personnel accessing the network received proper training. The findings may contribute to social change by enhancing area schools' technology programs with evolving cybersecurity technology, helping children detect and eradicate threats before any sensitive information is lost.
3

Modelling QoS in IoT applications

Awan, Irfan U., Younas, M., Naveed, W. January 2015
The Internet of Things (IoT) aims to enable the interconnection of a large number of smart devices (things) using a combination of networking and computing technologies. But an influx of interconnected things places greater demand on the underlying communication networks and affects the quality of service (QoS). This paper investigates the QoS of delay-sensitive things and the corresponding traffic they generate over the network. Things such as security alarms and cameras generate delay-sensitive information that must be communicated in real time. Such things have heterogeneous features with limited buffer capacity, storage and processing power. The commonly used best-effort service model is therefore not an attractive mechanism for handling delay-sensitive traffic. This paper proposes a cost-effective analytical model for a finite-capacity queueing system with pre-emptive resume service priority and a push-out buffer management scheme. Based on the analytical model, various simulation results are generated in order to analyse the mean queue length and the blocking probability of high- and low-priority traffic for systems of various capacities.
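
The paper derives its results analytically, but the queueing discipline it describes can be illustrated with a small discrete-event simulation. The Python sketch below is an assumed, simplified rendering: Poisson arrivals, exponential service (under which pre-emptive resume priority reduces to always serving the highest-priority nonempty class), a shared finite buffer of size K, and push-out of queued low-priority packets by arriving high-priority packets. All rates and the buffer size are illustrative, not values from the paper.

```python
import heapq
import random

def simulate(lam_hi=0.4, lam_lo=0.5, mu=1.2, K=10, n_events=500_000, seed=1):
    """Estimate blocking probabilities for high- and low-priority traffic."""
    rng = random.Random(seed)
    q = {"hi": 0, "lo": 0}                # packets waiting or in service
    arrivals = {"hi": 0, "lo": 0}
    blocked = {"hi": 0, "lo": 0}
    # event list: (time, kind) with kind in {"hi", "lo", "dep"}
    ev = [(rng.expovariate(lam_hi), "hi"), (rng.expovariate(lam_lo), "lo")]
    heapq.heapify(ev)
    for _ in range(n_events):
        t, kind = heapq.heappop(ev)
        if kind == "dep":
            # With exponential service, pre-emptive resume is equivalent to
            # always serving the highest-priority nonempty class.
            if q["hi"] > 0:
                q["hi"] -= 1
            elif q["lo"] > 0:
                q["lo"] -= 1
            if q["hi"] + q["lo"] > 0:     # start the next service
                heapq.heappush(ev, (t + rng.expovariate(mu), "dep"))
        else:
            arrivals[kind] += 1
            rate = lam_hi if kind == "hi" else lam_lo
            heapq.heappush(ev, (t + rng.expovariate(rate), kind))
            if q["hi"] + q["lo"] < K:
                if q["hi"] + q["lo"] == 0:  # server idle: begin service now
                    heapq.heappush(ev, (t + rng.expovariate(mu), "dep"))
                q[kind] += 1
            elif kind == "hi" and q["lo"] > 0:
                q["lo"] -= 1              # push-out: evict a low-priority packet
                q["hi"] += 1
                blocked["lo"] += 1        # the evicted packet counts as lost
            else:
                blocked[kind] += 1        # buffer full, arrival dropped
    return {c: blocked[c] / max(arrivals[c], 1) for c in ("hi", "lo")}

print(simulate())  # high-priority loss should be far below low-priority loss
```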
4

Sharing Sensitive Information with Parents: A Guide for Discussing Speech and Language Concerns

Boggs, Teresa 01 January 2009
No description available.
5

Sharing Sensitive Information with Parents: A Guide for Discussing Speech, Language, and Developmental Concerns

Boggs, Teresa 01 January 2000
No description available.
6

Sharing Sensitive Information with Parents: A Guide for Early Childhood Educators

Boggs, Teresa 01 January 2011
No description available.
7

Web authenticity

Sedaghat, Soroush, University of Western Sydney, School of Computing and Information Technology January 2002
Advancements in web technology, such as the integration of operating platforms, the use of mobile applets and connectivity with databases in an encompassing web environment, have promoted the development of complex web-based systems for enterprise computing. This integration also brings together, under the web environment, the inherent security flaws and associated problems of these technologies. These flaws, when exploited, can lead to system compromises of various kinds, including theft of security-sensitive information, system take-over, fabrication of fake electronic documents and illegal alteration of web site contents. Successful, prompt and appropriate responses of these complex web-based systems to security threats and breaches, such as document tampering and misrepresentation by illegal users, are therefore imperative in promoting users' willingness and confidence when interacting with these systems. Accordingly, this has become one of the major concerns in the development, management and operation of web-based systems. This thesis also studies the authentication of dynamic web content through the application of a one-time digital signature. To this effect, relevant concepts and possible approaches are discussed. Efficient authentication and verification of dynamic content in the web environment is suggested as a separate research area and a direction for future work. / Master of Science (Honours) Computing
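
The abstract does not name a concrete one-time signature scheme, so as an illustration here is a Python sketch of the classic Lamport one-time signature, which can sign a single message (for example, one rendering of a dynamic page) and whose key must never be reused. This is a standard textbook construction, not the thesis's own design.

```python
import hashlib
import secrets

def keygen(n_bits=256):
    # Secret key: one random pair per message-hash bit.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(n_bits)]
    # Public key: the hash of every secret value.
    pk = [[hashlib.sha256(x).digest() for x in pair] for pair in sk]
    return sk, pk

def bits(msg, n_bits=256):
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(n_bits)]

def sign(sk, msg):
    # Reveal one secret per bit of the message hash; signing a second
    # message with the same key leaks enough secrets to allow forgery.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(hashlib.sha256(s).digest() == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
page = b"<html>dynamic content rendered for this request</html>"
sig = sign(sk, page)
assert verify(pk, page, sig)                   # authentic content verifies
assert not verify(pk, page + b"tampered", sig)  # tampering is detected
```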
8

Towards ensuring scalability, interoperability and efficient access control in a triple-domain grid-based environment

Nureni Ayofe, Azeez January 2012
Philosophiae Doctor - PhD / The high rate of grid computing adoption, in both academia and industry, has posed challenges regarding efficient access control, interoperability and scalability. Although several methods have been proposed to address these challenges, none has proven to be completely efficient and dependable. To tackle them, a novel access control architecture framework for a triple-domain grid-based environment, modelled on role-based access control, was developed. The framework assumes three domains, each with an independent Local Security Monitoring Unit, plus a Central Security Monitoring Unit that monitors security for the entire grid. The architecture was evaluated and implemented using G3S, a grid security services simulator, with a meta-query language for cross-domain queries and Java Runtime Environment 1.7.0.5 for implementing the workflows that define the model's tasks. The simulation results show that the developed architecture is reliable and efficient when measured against the observed parameters and entities. The proposed access control framework also proved to be interoperable and scalable within the parameters tested.
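
As a rough illustration of the architecture's division of labour, the Python sketch below models role-based access control with per-domain local monitors and a central monitor that routes cross-domain requests. The class names, roles and policy are assumptions for illustration; they do not reproduce the thesis's G3S implementation.

```python
# Toy role-to-permission policy shared across the grid.
ROLE_PERMS = {
    "researcher": {"read"},
    "admin": {"read", "write", "delete"},
}

class LocalMonitor:
    """Per-domain security monitor: enforces the domain's own RBAC policy."""
    def __init__(self, domain, user_roles):
        self.domain = domain
        self.user_roles = user_roles  # user -> role within this domain

    def check(self, user, action):
        role = self.user_roles.get(user)
        return role is not None and action in ROLE_PERMS.get(role, set())

class CentralMonitor:
    """Grid-wide monitor: routes cross-domain requests to the target domain."""
    def __init__(self, local_monitors):
        self.locals = {m.domain: m for m in local_monitors}

    def authorize(self, user, home, target, action):
        # A cross-domain request must satisfy the *target* domain's policy;
        # the home domain is carried along for auditing in this sketch.
        monitor = self.locals.get(target)
        return monitor is not None and monitor.check(user, action)

grid = CentralMonitor([
    LocalMonitor("A", {"alice": "admin"}),
    LocalMonitor("B", {"alice": "researcher"}),
    LocalMonitor("C", {}),
])
print(grid.authorize("alice", home="A", target="B", action="write"))  # False
print(grid.authorize("alice", home="A", target="B", action="read"))   # True
```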
9

Návrh koncepce prevence ztráty dat / Design of conception of data loss prevention

Brejla, Tomáš January 2011
This work deals with the creation of a concept for implementing processes and software tools designed to prevent sensitive data from leaking out of an organization's infrastructure. It consists of three key parts. The first describes the theoretical basis of the work: what data loss prevention is, where it comes from, why it is necessary to deal with it, and what its goals are. It also describes how it fits into the wider area of corporate ICT security. The risks associated with leakage of sensitive data are defined, along with possible solutions and the problems those solutions bring. The first part also analyzes the current state of data loss prevention in organizations. Organizations are divided according to their size, and for each group the most common weaknesses and risks are listed. It is evaluated how organizations currently handle data loss prevention and how they cover the issue both procedurally and in terms of software tools. The second part focuses directly on the software tools. It characterizes the operating principle of these systems and explains their network architecture. Current trends in the development of data loss prevention tools are described and evaluated, and possible further development is outlined. The tools are divided into categories depending on the features they offer and on how those categories cover the needs of organizations; a toy illustration of the shared operating principle follows this abstract. At the end of the second part, software solutions from the leading vendors on the market are compared against actual experience, with a focus on their strengths and weaknesses. The third part presents the core content. It joins the two previous sections, and the result is an overall concept for implementing data loss prevention, broken down along several dimensions: processes, time, and company size. The beginning of the third part describes what precedes a data loss prevention implementation and what organizations should be careful about. It defines how organizations should set their own expectations so that the project remains manageable. The main point is a procedure for implementing data loss prevention, from creating a strategy and choosing a solution through to implementing that solution and the related processes. The end of the third part deals with the legal and personnel issues that are closely bound up with a DLP implementation. Recommendations are made based on an analysis of legal standards, and these recommendations are added to the framework approach for HR staff. Finally, the benefits of implementing data loss prevention are named, and the created concept is summarized as a list of best practices.
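
The toy illustration mentioned above: a DLP tool's core mechanism at a network or endpoint egress point is to inspect outgoing content against patterns describing sensitive data and apply a policy action. The patterns and actions below are invented for this sketch; real DLP products add document fingerprinting, classifiers and protocol awareness.

```python
import re

# Each policy: (name, pattern describing sensitive data, action on match).
POLICIES = [
    ("credit_card", re.compile(r"\b(?:\d[ -]?){13,16}\b"), "block"),
    ("czech_birth_number", re.compile(r"\b\d{6}/\d{3,4}\b"), "block"),
    ("internal_label", re.compile(r"\bCONFIDENTIAL\b"), "quarantine"),
]

def inspect(message: str) -> str:
    """Return the action for the first matching policy, or 'allow'."""
    for name, pattern, action in POLICIES:
        if pattern.search(message):
            print(f"policy '{name}' matched -> {action}")
            return action
    return "allow"

print(inspect("Quarterly figures attached."))            # allow
print(inspect("Card: 4111 1111 1111 1111, exp 09/27"))   # block
```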
