1

Open-source environmental scanning and risk assessment in the statutory counterespionage milieu

Duvenage, Petrus Carolus 23 May 2011 (has links)
The research focuses on the utilisation of open-source information in augmentation of the all-source counterespionage endeavour. The study has the principal objective of designing, contextualising and elucidating a micro-theoretical framework for open-source environmental scanning within the civilian, statutory counterespionage sphere. The research is underpinned by the central assumption that the environmental scanning and contextual analysis of overt information will enable the identification, description and prioritisation of espionage risks that would not necessarily have emerged through the statutory counterespionage process in which secretly collected information predominates. The environmental scanning framework is further assumed to offer a theoretical foundation for surmounting a degenerative counterespionage spiral driven by an over-reliance on classified information. Flowing from the central assumption, five further assumptions formulated and tested in the research are the following: (1) A methodically demarcated referent premise enables the focusing and structuring of the counterespionage environmental scanning process amid the exponential proliferation of overt information. (2) Effective environmental scanning of overt information for counterespionage necessitates a distinctive definition of ‘risk’ and ‘threat’, as these are interlinked yet different concepts; current notions of ‘threat’ and ‘risk’ are therefore asserted to be inadequate for feasible employment within an overt counterespionage environmental scanning framework. (3) A framework for overt counterespionage environmental scanning has as its primary requirement the ability to identify diverse risks, descriptively and predictively, on a strategic as well as a tactical level. (4) The degree of adversity in the relationship between a government and an adversary constitutes the principal indicator and determinant of an espionage risk. (5) The logical accommodation of a framework for overt counterespionage environmental scanning necessitates a distinctive counterintelligence cycle, as existing conceptualisations of the intelligence cycle are inadequate. The study’s objective and the testing of these five assumptions are pursued on both the theoretical and pragmatic-utilitarian levels. The framework for counterespionage open-source environmental scanning and risk assessment is presented as part of a multilayered unison of alternative theoretical propositions on the all-source intelligence, counterintelligence and counterespionage processes. It is furthermore advanced from the premise of an alternative proposition on an integrated approach to open-source intelligence. On a pragmatic-utilitarian level, the framework’s design is informed, and its application elucidated, through an examination of the 21st-century espionage reality confronting the nation state, contemporary statutory counterintelligence measures and the ‘real-life’ difficulties of open-source intelligence confronting practitioners. Although subject to certain qualifications, the assumptions are in the main validated by the research. The research furthermore affirms this as an exploratory thesis in a largely unexplored field. / Thesis (Ph.D)--University of Pretoria, 2010. / Political Sciences / Unrestricted
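The abstract's fourth assumption, that the degree of adversity between a government and an adversary is the principal determinant of espionage risk, lends itself to a brief illustration. The sketch below is not drawn from the thesis; the scoring weights, field names, and the `ScannedItem` structure are hypothetical, intended only to show how overt items gathered by environmental scanning might be prioritised by an adversity-driven risk score.

```python
from dataclasses import dataclass

@dataclass
class ScannedItem:
    """A single overt (open-source) item produced by environmental scanning."""
    source: str          # e.g. news report, academic paper, tender notice
    actor: str           # state or non-state actor the item concerns
    adversity: float     # hypothetical 0-1 degree of adversity towards the government
    capability: float    # hypothetical 0-1 estimate of collection capability
    exposure: float      # hypothetical 0-1 exposure of the protected interest

def espionage_risk(item: ScannedItem) -> float:
    """Toy risk score: adversity is weighted as the principal determinant,
    in the spirit of the abstract's fourth assumption; other factors are secondary."""
    return 0.6 * item.adversity + 0.25 * item.capability + 0.15 * item.exposure

def prioritise(items: list[ScannedItem]) -> list[ScannedItem]:
    """Rank scanned items from highest to lowest estimated espionage risk."""
    return sorted(items, key=espionage_risk, reverse=True)

if __name__ == "__main__":
    items = [
        ScannedItem("tender notice", "Actor A", adversity=0.9, capability=0.7, exposure=0.4),
        ScannedItem("academic paper", "Actor B", adversity=0.3, capability=0.8, exposure=0.6),
    ]
    for it in prioritise(items):
        print(f"{it.actor}: risk={espionage_risk(it):.2f}")
```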
2

Qualitative reinforcement for man-machine interactions / Renforcements naturels pour la collaboration homme-machine

Nicart, Esther 06 February 2017 (has links)
We model a document treatment chain as a Markov decision process and use reinforcement learning to allow the agent to learn to build adapted chains on the fly and to improve them continuously. We build a platform that lets us measure the impact on learning of various models, web services, algorithms, parameters, and so on. We apply it in an industrial context, specifically to a chain designed to extract events from massive volumes of documents drawn from web pages and other open sources. We aim to reduce the workload of human analysts, with the agent learning to improve the chain guided by their feedback on the extracted events. To this end, we explore different types of feedback, from numerical feedback, which requires significant calibration, to qualitative feedback, which is much more intuitive and requires little or no calibration. We conduct experiments, first with numerical feedback, and then show that qualitative feedback still allows the agent to learn effectively. / Information extraction (IE) is defined as the identification and extraction of elements of interest, such as named entities, their relationships, and their roles in events. For example, a web-crawler might collect open-source documents, which are then processed by an IE treatment chain to produce a summary of the information contained in them. We model such an IE document treatment chain as a Markov Decision Process, and use reinforcement learning to allow the agent to learn to construct custom-made chains 'on the fly', and to continuously improve them. We build a platform, BIMBO (Benefiting from Intelligent and Measurable Behaviour Optimisation), which enables us to measure the impact on the learning of various models, algorithms, parameters, etc. We apply this in an industrial setting, specifically to a document treatment chain which extracts events from massive volumes of web pages and other open-source documents. Our emphasis is on minimising the burden of the human analysts, from whom the agent learns to improve, guided by their feedback on the events extracted. For this, we investigate different types of feedback, from numerical rewards, which require a lot of user effort and tuning, to partially and even fully qualitative feedback, which is much more intuitive and demands little to no user intervention. We carry out experiments, first with numerical rewards, then demonstrate that intuitive feedback still allows the agent to learn effectively. Motivated by the need to rapidly propagate the rewards learnt at the final states back to the initial ones, even during exploration, we propose Dora, an improved version of Q-Learning.
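As an illustration of the approach the abstract describes, the following is a minimal sketch, not code from the thesis or the BIMBO platform: it models a toy document treatment chain as a Markov decision process and applies tabular Q-learning, mapping a qualitative user judgement ("good"/"bad") to a scalar reward. All state, action, and reward names are assumptions made for the example.

```python
import random
from collections import defaultdict

# Toy MDP: states are stages of a document treatment chain,
# actions are the processing modules that can be applied at each stage.
ACTIONS = {
    "raw": ["clean_html", "ocr"],
    "cleaned": ["ner_fast", "ner_precise"],
    "entities": ["extract_events"],
}
TERMINAL = "done"

def step(state: str, action: str) -> str:
    """Hypothetical deterministic transitions through the chain."""
    return {"raw": "cleaned", "cleaned": "entities", "entities": TERMINAL}[state]

def qualitative_reward(feedback: str) -> float:
    """Map an analyst's qualitative judgement to a scalar reward."""
    return {"good": 1.0, "neutral": 0.0, "bad": -1.0}[feedback]

def analyst_feedback(trajectory: list[str]) -> str:
    """Stand-in for the human analyst: prefers the more precise NER module."""
    return "good" if "ner_precise" in trajectory else "bad"

Q = defaultdict(float)          # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state, trajectory, visited = "raw", [], []
    while state != TERMINAL:
        acts = ACTIONS[state]
        if random.random() < epsilon:                      # explore
            action = random.choice(acts)
        else:                                              # exploit
            action = max(acts, key=lambda a: Q[(state, a)])
        visited.append((state, action))
        trajectory.append(action)
        state = step(state, action)
    # Reward only arrives at the end of the chain, from qualitative feedback.
    reward = qualitative_reward(analyst_feedback(trajectory))
    # Q-learning updates swept backwards over the episode.
    next_max = 0.0
    for s, a in reversed(visited):
        Q[(s, a)] += alpha * (reward + gamma * next_max - Q[(s, a)])
        next_max = max(Q[(s, b)] for b in ACTIONS[s])
        reward = 0.0   # intermediate steps carry no immediate reward

print(max(ACTIONS["cleaned"], key=lambda a: Q[("cleaned", a)]))  # learns to prefer ner_precise
```

The backward sweep over each episode echoes, in spirit, the thesis's stated motivation of rapidly propagating end-of-chain rewards back towards initial states; it is not the Dora algorithm itself.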
3

Towards a transdisciplinary cyber forensics geo-contextualization framework

Mohammad Meraj Mirza (16635918) 04 August 2023 (has links)
Technological advances have a profound impact on people and the world in which they live. People regularly use a wide range of smart devices, such as Internet of Things (IoT) devices, smartphones, and wearables, all of which store and use location data. With this explosion of technology, these devices have come to play an essential role in digital forensics and crime investigations. Digital forensic professionals are increasingly able to acquire and assess many types of data, and location data has therefore become essential for responders, practitioners, and digital investigators handling cases that rely heavily on devices that collect data about their users. When performing any digital/cyber forensic investigation, it is very beneficial, and often critical, to answer the six Ws questions (i.e., who, what, when, where, why, and how) using location data recovered from digital devices, such as where the suspect was at the time of the crime or deviant act; such evidence can help convict a suspect or prove their innocence. However, many digital forensic standards, guidelines, and tools, and even the NIST NICE Workforce Framework for Cybersecurity, lack full coverage of what location data can be, how to use such data effectively, and how to perform spatial analysis. Although current digital forensic frameworks recognize the importance of location data, only a limited number of data sources (e.g., GPS) are considered sources of location in these frameworks. Moreover, most digital forensic frameworks and tools have yet to introduce geo-contextualization techniques and spatial analysis into the digital forensic process, which could aid investigations and provide more information for decision-making. As a result, significant gaps remain in the digital forensics community, driven by a lack of understanding of how to properly curate geodata. This research was therefore conducted to develop a transdisciplinary framework that addresses the limitations of previous work and explores opportunities for handling geodata recovered from digital evidence, improving how geodata are maintained and how the most value is drawn from them, using an iPhone case study. The findings of this study demonstrate the potential value of geodata in digital forensic investigations when the created transdisciplinary framework is used. The study also discusses the implications for digital spatial analytical techniques and multi-intelligence domains, including location intelligence and open-source intelligence, which aid investigators and generate a deeper understanding of device users' spatial, temporal, and spatio-temporal patterns.
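To make the idea of geo-contextualization more concrete, here is a minimal sketch, not taken from the thesis, showing one way recovered location records (from, say, an iPhone extraction) might be normalised and grouped into simple spatio-temporal clusters. The record fields, thresholds, and the `haversine` helper are assumptions for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationRecord:
    """A normalised location artifact recovered from a device extraction."""
    source: str        # e.g. photo EXIF, cached map tile, app database
    lat: float
    lon: float
    timestamp: datetime

def haversine(a: LocationRecord, b: LocationRecord) -> float:
    """Great-circle distance in metres between two records."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

def spatio_temporal_clusters(records, max_dist_m=200.0, max_gap=timedelta(minutes=30)):
    """Group records into visits: consecutive points close in both space and time."""
    clusters, current = [], []
    for rec in sorted(records, key=lambda r: r.timestamp):
        if current and (haversine(current[-1], rec) > max_dist_m
                        or rec.timestamp - current[-1].timestamp > max_gap):
            clusters.append(current)
            current = []
        current.append(rec)
    if current:
        clusters.append(current)
    return clusters

if __name__ == "__main__":
    recs = [
        LocationRecord("photo EXIF", 40.4237, -86.9212, datetime(2023, 5, 1, 9, 0)),
        LocationRecord("app database", 40.4239, -86.9210, datetime(2023, 5, 1, 9, 10)),
        LocationRecord("cached tile", 40.4260, -86.9300, datetime(2023, 5, 1, 13, 0)),
    ]
    for i, cluster in enumerate(spatio_temporal_clusters(recs), 1):
        print(f"visit {i}: {len(cluster)} records, "
              f"{cluster[0].timestamp} -> {cluster[-1].timestamp}")
```

Grouping points into visits like this is one simple way to surface the spatial, temporal, and spatio-temporal patterns the abstract refers to before any richer geo-contextual or multi-intelligence analysis is applied.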
