491.
'n Ondersoek na die geskiktheid van 'n datavloeiverwerker as 'n herstruktureerbare spesiale verwerker [An investigation into the suitability of a dataflow processor as a restructurable special-purpose processor]
Loubser, Nicolas Johan, 11 1900
Thesis (MEng) -- Stellenbosch University, 1984.
ABSTRACT (translated from the Afrikaans): This thesis investigates the suitability of a dataflow processor to serve as a restructurable special-purpose processor. The operation of a dataflow processor model is explained in terms of dataflow concepts. The shortcomings of the model, namely the lack of mechanisms for data-structure handling, input/output and re-entrancy, are highlighted and possible solutions are given. A modified dataflow model, which includes both structure handling and input/output mechanisms, is proposed. Re-entrancy is achieved by means of a data-packet naming method. To investigate the programmability and restructurability of the model, it was decided to simulate a dataflow processor. The model was simulated on a VAX 11/780 computer using the high-level language PASCAL and operating-system calls. Parallel processing concepts in both software and architecture are demonstrated.
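The packet-naming method is what makes re-entrancy safe: tokens from different activations of the same code carry different tags, and a node fires only on operands that share a tag. A minimal Python sketch of such a tagged-token firing rule (illustrative only, not the thesis's simulator, which was written in PASCAL; the node and port names are invented):

```python
# Sketch of a tagged-token dataflow firing rule: a node fires only once
# all operands carrying the same tag have arrived. Tags keep packets
# from different activations apart, which is one way to get re-entrancy.

from collections import defaultdict

class DataflowNode:
    def __init__(self, name, arity, op):
        self.name = name
        self.arity = arity                # operands needed before firing
        self.op = op                      # function applied on firing
        self.waiting = defaultdict(dict)  # tag -> {port: value}

    def receive(self, tag, port, value):
        """Accept a data packet; fire when a tag's operand set is complete."""
        self.waiting[tag][port] = value
        if len(self.waiting[tag]) == self.arity:
            operands = self.waiting.pop(tag)
            result = self.op(*(operands[p] for p in sorted(operands)))
            print(f"{self.name} fired for tag {tag}: {result}")

add = DataflowNode("add", 2, lambda a, b: a + b)
add.receive(tag=0, port=0, value=3)
add.receive(tag=1, port=0, value=10)   # a second activation, kept separate
add.receive(tag=0, port=1, value=4)    # completes tag 0 -> fires with 7
add.receive(tag=1, port=1, value=20)   # completes tag 1 -> fires with 30
```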
492.
Deriving distributed garbage collectors from distributed termination algorithms
Norcross, Stuart John, January 2004
This thesis concentrates on the derivation of a modularised version of the DMOS distributed garbage collection algorithm and the implementation of this algorithm in a distributed computational environment. DMOS appears to exhibit a unique combination of attractive characteristics for a distributed garbage collector, but the original algorithm is known to contain a bug and, prior to this work, lacked a satisfactory, understandable implementation. The relationship between distributed termination detection algorithms and distributed garbage collectors is central to this thesis. A modularised DMOS algorithm is developed using a previously published distributed garbage collector derivation methodology that centres on mapping centralised collection schemes to distributed termination detection algorithms. In examining the utility and suitability of the derivation methodology, a family of six distributed collectors is developed and an extension to the methodology is presented. The research work described in this thesis incorporates the definition and implementation of a distributed computational environment based on the ProcessBase language and a generic definition of a previously unimplemented distributed termination detection algorithm called Task Balancing. The role of distributed termination detection in the DMOS collection mechanisms is defined through a process of step-wise refinement. The implementation of the collector is achieved in two stages: the first stage defines the implementation of two distributed termination mappings with the Task Balancing algorithm; the second stage defines the DMOS collection mechanisms.
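The central idea, mapping a collection scheme's completeness condition onto a distributed termination detection algorithm, can be made concrete with a far simpler scheme than the Task Balancing algorithm the thesis implements. The Python sketch below uses weight throwing (credit recovery) instead, with all names invented for the example: the controller knows the distributed computation has terminated exactly when all the credit it issued has been returned.

```python
# Sketch of termination detection by weight throwing: every task holds
# some credit; spawning a subtask splits the credit (no message to the
# controller is needed); finishing returns it. Fractions avoid the
# rounding that floats would introduce.

from fractions import Fraction

class Controller:
    def __init__(self):
        self.outstanding = Fraction(0)

    def start_task(self):
        self.outstanding += Fraction(1)
        return Fraction(1)

    def return_credit(self, credit):
        self.outstanding -= credit

    @property
    def terminated(self):
        return self.outstanding == 0

class Worker:
    def __init__(self, controller):
        self.controller = controller

    def run(self, credit, spawn=0):
        # Each spawn gives half the remaining credit to a child;
        # the rest goes back to the controller when the task finishes.
        children = []
        for _ in range(spawn):
            credit /= 2
            children.append(credit)
        self.controller.return_credit(credit)
        return children

ctrl = Controller()
w = Worker(ctrl)
child_credits = w.run(ctrl.start_task(), spawn=2)  # root spawns two children
for c in child_credits:
    w.run(c)                                       # children finish
print(ctrl.terminated)                             # True: all credit returned
```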
493.
Interrupt-generating active data objects
Clayton, Peter Graham, January 1990
An investigation is presented into an interrupt-generating object model which is designed to reduce the effort of programming distributed memory multicomputer networks. The object model is aimed at the natural modelling of problem domains in which a number of concurrent entities interrupt one another as they lay claim to shared resources. The proposed computational model provides for the safe encapsulation of shared data, and incorporates inherent arbitration for simultaneous access to the data. It supplies a predicate triggering mechanism for use in conditional synchronization and as an alternative mechanism to polling. Linguistic support for the proposal requires a novel form of control structure which is able to interface sensibly with interrupt-generating active data objects. The thesis presents the proposal as an elemental language structure, with axiomatic guarantees which enforce safety properties and aid in program proving. The established theory of CSP is used to reason about the object model and its interface. An overview is presented of a programming language called HUL, whose semantics reflect the proposed computational model. Using the syntax of HUL, the application of the interrupt-generating active data object is illustrated. A range of standard concurrent problems is presented to demonstrate the properties of the interrupt-generating computational model. Furthermore, the thesis discusses implementation considerations which enable the model to be mapped precisely onto multicomputer networks, and which sustain the abstract programming level provided by the interrupt-generating active data object in the wider programming structures of HUL.
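The predicate-triggering mechanism can be pictured as follows: a shared object runs registered handlers as a side effect of an update that makes a predicate true, so clients are interrupted rather than polling, while a lock arbitrates simultaneous access to the encapsulated data. A minimal Python sketch (HUL's actual syntax is not reproduced here; the class and names are invented):

```python
# Sketch of an "active" shared object: updates are serialised by a lock
# (the arbitration the model's encapsulation promises), and any trigger
# whose predicate becomes true fires its handler as an interrupt-like
# notification instead of requiring clients to poll.

import threading

class ActiveDataObject:
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()
        self._triggers = []  # (predicate, handler) pairs

    def on(self, predicate, handler):
        with self._lock:
            self._triggers.append((predicate, handler))

    def update(self, fn):
        """Mutate the encapsulated value; fire any newly true predicates."""
        with self._lock:
            self._value = fn(self._value)
            fired = [h for p, h in self._triggers if p(self._value)]
            value = self._value
        for handler in fired:  # handlers run outside the lock to avoid deadlock
            handler(value)

buffer_level = ActiveDataObject(0)
buffer_level.on(lambda v: v >= 3, lambda v: print(f"interrupt: level {v}"))
for _ in range(4):
    buffer_level.update(lambda v: v + 1)  # prints once the level reaches 3, then at 4
```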
494.
A framework for scoring and tagging NetFlow data
Sweeney, Michael John, January 2019
With the increase in link speeds and the growth of the Internet, the volume of NetFlow data generated has increased significantly over time, and processing these volumes has become a challenge, more specifically a Big Data challenge. With the advent of technologies and architectures designed to handle Big Data volumes, researchers have investigated their application to the processing of NetFlow data. This work builds on prior work, in which a scoring methodology was proposed for identifying anomalies in NetFlow data, by proposing and implementing a system that allows for automatic, real-time scoring through the adoption of Big Data stream processing architectures. The first part of the research looks at the means of event detection using the scoring approach, implementing it as a number of individual, standalone components, each responsible for detecting and scoring a single type of flow trait. The second part is the implementation of these scoring components in a framework, named Themis, capable of handling high volumes of data with low latency processing times. This was tackled using tools, technologies and architectural elements from the world of Big Data stream processing. The framework was shown to achieve good flow throughput at low processing latencies on a single low-end host. The successful demonstration of the framework on a single host opens the way to leveraging the scaling capabilities afforded by the architectures and technologies used. This gives weight to the possibility of using this framework for real-time threat detection using NetFlow data from larger networked environments.
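The scoring approach is additive: each standalone component inspects one trait of a flow and contributes a score, and the per-flow total flags anomalous traffic. A minimal Python sketch of that structure (the traits, thresholds and field names are invented for the example and are not taken from Themis):

```python
# Sketch of per-trait scoring components over flow records: each
# component scores exactly one trait, and a flow's total score is the
# sum over all components.

from dataclasses import dataclass

@dataclass
class Flow:
    src: str
    dst: str
    dst_port: int
    packets: int
    bytes: int

def score_small_packets(flow):
    """Many tiny packets can indicate scanning behaviour."""
    return 5 if flow.packets > 100 and flow.bytes / flow.packets < 64 else 0

def score_rare_port(flow):
    """Traffic to an uncommon destination port scores a small amount."""
    return 3 if flow.dst_port not in (25, 53, 80, 443) else 0

COMPONENTS = [score_small_packets, score_rare_port]

def score_flow(flow):
    return sum(component(flow) for component in COMPONENTS)

print(score_flow(Flow("10.0.0.5", "192.0.2.7", 31337, packets=500, bytes=20000)))  # 8
```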
495.
'n Bestuurshulpmiddel vir die evaluering van 'n maatskappy se rekenaarsekerheidsgraad [A management aid for evaluating a company's computer security level]
Von Solms, Rossouw, 13 May 2014
M.Sc. (Informatics)
Information is power. Any organization must secure and protect all of its information assets. Management is responsible for the well-being of the organization and consequently for computer security. Management must become and stay involved with the computer security situation of the organization, because the existence of any organization depends on an effective information system. One way in which management can stay continually involved in and committed to the computer security situation of the organization is the periodic evaluation of computer security. The results of this evaluation process can initiate appropriate actions to increase computer security in the areas where it is needed. For effective management involvement, a tool is needed to aid management in monitoring the status of the implementation of computer security on a regular basis. The main objective of this dissertation is to develop such a management tool. The dissertation consists of three parts, namely a framework for effective computer security evaluation, the definition of the criteria to be included in the tool and, lastly, the tool itself. The framework (chapters 1 to 6) defines the basis on which the tool (chapters 7 to 9) is built, e.g. that computer security controls need to be cost-effective and should aid the organization in accomplishing its objectives. The framework is based on a two-dimensional graph: firstly, the various risk areas in which computer security should be applied and, secondly, the severity of controls in each of these areas. The tool identifies numerous risk areas critical to the security of the computer and its environment. Each of these risk areas needs to be evaluated to find out how well it is secured. From these results an overall picture of the computer security situation emerges. The tool is presented as a spreadsheet containing a number of questions. The built-in formulae in the spreadsheet perform calculations resulting in an assessment of the computer security situation. The results of the security evaluation can be used by management to take appropriate actions regarding the computer security situation.
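The evaluation arithmetic behind such a spreadsheet can be sketched in a few lines: each risk area receives a score computed from its questionnaire answers, that score is weighted by the importance of the controls in the area, and the weighted scores combine into one overall rating. In the Python sketch below the areas, weights and answers are invented for illustration:

```python
# Sketch of the spreadsheet's calculation: per-area compliance averaged
# from questionnaire answers (each answer scored 0..1), then combined
# into a single weighted overall rating.

# (area, weight, answers); weights sum to 1.0
areas = [
    ("physical access",     0.2, [1.0, 0.5, 1.0]),
    ("user authentication", 0.4, [0.5, 0.0, 1.0, 0.5]),
    ("backup and recovery", 0.4, [1.0, 1.0]),
]

overall = 0.0
for name, weight, answers in areas:
    area_score = sum(answers) / len(answers)
    overall += weight * area_score
    print(f"{name}: {area_score:.0%}")

print(f"overall security rating: {overall:.0%}")  # 77% for these answers
```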
496.
SoDA: a model for the administration of separation of duty requirements in workflow systems
Perelson, Stephen, January 2001
The increasing reliance on information technology to support business processes has emphasised the need for information security mechanisms. This, however, has resulted in an ever-increasing workload in terms of security administration. Security administration encompasses the activity of ensuring the correct enforcement of access control within an organisation. Access rights and their allocation are dictated by the security policies within an organisation. As such, security administration can be seen as a policy-based approach. Policy-based approaches promise to lighten the workload of security administrators. Separation of duties is one of the principles cited as a criterion when setting up these policy-based mechanisms. Different types of separation of duty policies exist. They can be categorised into policies that can be enforced at administration time, viz. static separation of duty requirements, and policies that can be enforced only at execution time, viz. dynamic separation of duty requirements. This dissertation deals with the specification of both static and dynamic separation of duty requirements in role-based workflow environments. It proposes a model for the specification of separation of duty requirements, the expressions of which are based on set theory. The model focuses, furthermore, on the enforcement of static separation of duty. The enforcement of static separation of duty requirements is modelled in terms of invariant conditions. The invariant conditions specify restrictions upon the elements allowed in the sets representing access control requirements. The sets are themselves expressed as database tables within a relational database management system. Algorithms that stipulate how to verify additions to or deletions from these sets can then be executed within the database management system. A prototype was developed in order to demonstrate the concepts of this model. This prototype shows how the proposed model could function and demonstrates its effectiveness.
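The invariant-condition idea translates directly into set terms: adding a role to a user is permitted only if the user's resulting role set contains no conflicting pair. A condensed Python sketch (in the model itself the sets are relational tables and the verification runs inside the database management system; the role names here are invented):

```python
# Sketch of a static separation-of-duty invariant: an assignment is
# rejected if it would give one user both roles of a conflicting pair.

conflicting = {frozenset({"claim_approver", "claim_auditor"})}
user_roles = {"alice": {"claim_approver"}}

def assign(user, role):
    """Add a role to a user unless it violates a static SoD invariant."""
    roles = user_roles.setdefault(user, set())
    for pair in conflicting:
        if role in pair and roles & (pair - {role}):
            raise ValueError(
                f"SoD violation: {user} may not hold both "
                + " and ".join(sorted(pair)))
    roles.add(role)

assign("alice", "clerk")              # permitted
try:
    assign("alice", "claim_auditor")  # conflicts with claim_approver
except ValueError as e:
    print(e)
```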
497.
Computer audit concerns in the client-server environment
Streicher, Rika, 13 February 2014
M. Com. (Computer Auditing)
Client-server and peer-to-peer technologies have taken the world by storm. Dramatic changes have taken place in the information technology of organisations that have opted to follow this trend in the quest for greater flexibility and access for all those connected. Though technology has already had far-reaching effects on business, many changes are yet to be seen. The threats associated with the continuing developments in computer technology have resulted in many traditional internal control processes changing forever. Although it is widely recognised that client-server technology brings with it new threats and risks, and that internal control processes have to change to address these threats and risks, not all areas have been addressed yet. It is therefore clear that computer audit has a role to play. The main objective of this short dissertation is to shed some light on the problem described above: how will the changes wrought by client-server technology affect the traditional audit approach? In other words, how will the computer auditor narrow the gap that has opened between traditional, established audit procedures and an audit approach that meets the new challenges of the client-server environment? This will be achieved by pinpointing the audit concerns that arise due to the fundamental differences between the traditional systems environment and the new client-server environment...
498.
Development of a MIAME-compliant microarray data management system for functional genomics data integration
Oelofse, Andries Johannes, 22 August 2007
No abstract available / Dissertation (MSc (Bioinformatics))--University of Pretoria, 2007. / Biochemistry / MSc / unrestricted
499.
Highly concurrent vs. control flow computing models
Marshall, Robert Clarence, January 1982
Typescript (photocopy).
500.
A virtual-community-centric model for coordination in the South African public sector
Thomas, Godwin Dogara Ayenajeh, January 2014
Organizations constantly face challenges owing to limited resources. To take advantage of new opportunities and to mitigate possible risks, they look for new ways to collaborate by sharing knowledge and competencies. Coordination among partners is critical in order to achieve success. The segmented South African public sector is no different. Driven by the desire to ensure proper service delivery in this sector, various government bodies and service providers play different roles towards the attainment of common goals. This is easier said than done, given the complexity of the distributed environment. Heterogeneity, autonomy and the increasing need to collaborate provoke the need to develop an integrative and dynamic coordination support service system in the South African public sector. Thus, the research looks to theories, concepts and existing coordination practices to ground the process of development. To inform the design of the proposed artefact, the research employs an interdisciplinary approach championed by coordination theory to review coordination-related theories and concepts. The effort accounts for coordination constructs that characterize and transform the problem and solution spaces. Thus, requirements are made explicit for identifying coordination breakdowns and their resolution. Furthermore, how coordination in a distributed environment is supported in practice is considered from a socio-technical perspective, in an effort to account holistically for coordination support. Examining existing solutions identified shortcomings that, if addressed, can help to improve solutions for coordination, which are often rigidly and narrowly defined. The research argues that introducing a mediating technological artefact, conceived through virtual-community and service lenses, can serve as a solution to the problem. By adopting a design-science research paradigm, the research develops a model as a primary artefact to support coordination from a collaboration standpoint. The suggestions from theory and practice, together with the unique case requirements identified through a novel case analysis framework, form the basis of the model design. The operation of the proposed model calls for a supporting architecture that employs a design pattern which divides a complex whole into smaller, simpler parts, with the aim of reducing system complexity. Four fundamental functions of the supporting architecture are introduced and discussed as they would support the operation and activities of the proposed collaboration lifecycle model, which is geared towards streamlining coordination in a distributed environment. As part of the model development, knowledge contributions are made in several ways. Firstly, an analytical instrument is presented that can be used by an enterprise architect or business analyst to study the coordination status quo of a collaborative activity in a distributed environment. Secondly, a lifecycle model is presented as a meta-process model with activities geared towards streamlining the coordination of dynamic collaborative activities or projects. Thirdly, an architecture is offered that will enable the technical virtual-community-centric, context-aware environment that hosts the process-based operations. Finally, the validation tool, which represents the applied contribution of the research and promises possible adaptation to similar circumstances, is presented.
The artefacts contribute towards a design theory in IS research for the development and improvement of coordination support services in a distributed environment such as the South African public sector.