71 |
Comparison of the real-time transaction processing capability of CICS/6000 and ORACLE7. Geldenhuys, Jan Harm Steenkamp 30 September 2014 (has links)
M.Sc. (Informatics) / Please refer to full text to view abstract
|
72 |
Factors influencing the effective use of computer-based training at Eskom. Viljoen, Charl Julius 10 February 2014 (has links)
M.Com. (Industrial Psychology) / Due to a present shortage and a projected future shortage of instructional staff, the electricity utility of South Africa, Eskom, introduced the PLATO computer-based training system in an effort to increase the productivity of instructors. The author acted as project leader when the PLATO system was installed in Eskom during June 1979. Eskom thus became the first company in South Africa to use the PLATO system for industrial training. Training is delivered by means of a mainframe Cyber system at Eskom's Head Office, Megawatt Park, situated in Sandton. The mainframe is connected via the Eskom network to 300 student terminals at 88 learning stations throughout the country. Training is offered at all power stations, distribution head offices, and other main buildings where at least 200 employees are based. It was found that the PLATO system was not used equally effectively at all sites, and it was decided to conduct an analysis of the factors that may affect the effective utilisation of the system, in order to eliminate the negative factors and strengthen the positive ones. The research hypothesis stated that there are many factors that influence the effectiveness of the system, both individually and in combination with other factors. It was, however, felt that the factors arising from the management and administration of the system would have the greatest influence on its effective utilisation.
|
73 |
Real time defect detection in welds by ultrasonic means. Lu, Yicheng January 1992 (has links)
A computer-controlled weld quality assurance system has been developed to detect weld defects ultrasonically while welding is in progress. This system, which includes a flash analogue-to-digital converter with built-in memories to store sampled data, a peak-character extractor, and a welding process controller, enables welding processes to be controlled automatically and welding defects to be detected concurrently with welding. In this way, weld quality can be satisfactorily assured if no defect is detected, and welding cost is minimised either by preventing similar defects from occurring or by stopping the welding process if repair is necessary. This work demonstrated that the high-temperature field around the weld pool was the major source of difficulty and unreliability in defect detection during welding, and had to be taken into account in ultrasonic welding control. The high temperatures not only influence the ultrasonic characteristic parameters that form the criteria for judging and assessing defects, but also introduce noise into the signals. The signal-averaging technique and statistical analysis based on B-scan data proved effective in increasing the signal-to-noise ratio and in judging and assessing weld defects. The hardware and software for the system are described in this work. Using this system, real-time A-scan signals can be displayed on screen; A-scan, B-scan, or three-dimensional results can be printed on paper or stored on disk; and weld quality assessment can thus be fully computerised.
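The signal-averaging step mentioned in this abstract can be illustrated with a minimal sketch. This is an invented toy simulation (the A-scan length, defect position, and noise level are assumptions for the example), not the thesis's acquisition code: averaging N repeated captures leaves the correlated defect echo intact while uncorrelated noise shrinks roughly by a factor of sqrt(N).

```python
import random

def noisy_ascan(n_samples, defect_pos, noise_sd, rng):
    """One simulated A-scan: a unit-amplitude defect echo at
    defect_pos, buried in Gaussian noise."""
    return [(1.0 if i == defect_pos else 0.0) + rng.gauss(0, noise_sd)
            for i in range(n_samples)]

def average_scans(scans):
    """Point-wise average of repeated captures; uncorrelated noise
    standard deviation shrinks roughly by sqrt(len(scans))."""
    n = len(scans[0])
    return [sum(s[i] for s in scans) / len(scans) for i in range(n)]

rng = random.Random(0)
scans = [noisy_ascan(200, 50, 0.5, rng) for _ in range(64)]
avg = average_scans(scans)
# After averaging 64 captures, the defect echo at index 50 stands
# clearly above the residual noise floor.
print(max(range(200), key=lambda i: avg[i]))
```

In a single capture the echo can be indistinguishable from noise; the averaged trace recovers it, which is the idea behind using averaging before defect judgement.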
|
74 |
Analyzing communication flow and process placement in Linda programs on transputers. De-Heer-Menlah, Frederick Kofi 28 November 2012 (has links)
With the evolution of parallel and distributed systems, users from diverse disciplines have looked to these systems as a solution to their ever-increasing needs for computer processing resources. Because parallel processing systems currently require a high level of expertise to program, many researchers are investing effort into developing programming approaches which hide some of the difficulties of parallel programming from users. Linda is one such parallel paradigm: it is intuitive to use and provides a high level of decoupling between the distributable components of parallel programs. In Linda, efficiency becomes a concern of the implementation rather than of the programmer. There is a substantial overhead in implementing Linda, an inherently shared-memory model, on a distributed system. This thesis describes a compile-time analysis of tuple space interactions which reduces run-time matching costs and permits the distribution of the tuple space data. A language-independent module is presented which partitions the tuple space data and suggests appropriate storage schemes for the partitions so as to optimise Linda operations. The thesis also discusses hiding the network topology from the user by automatically allocating Linda processes and tuple space partitions to nodes in the network of transputers. This is done using a fast placement algorithm developed for Linda.
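As a rough illustration of the programming model (not of the thesis's transputer implementation), a toy tuple space with Linda-style operations can be sketched in a few lines. Note that real Linda's `in` and `rd` block until a matching tuple appears; this non-blocking sketch simply returns `None` when nothing matches.

```python
class TupleSpace:
    """Toy Linda tuple space: out() deposits a tuple, rd() matches
    without removing, inp() matches and removes. A None field in a
    template is a wildcard; any other field must match exactly."""
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        self.tuples.append(tup)

    def _match(self, template, tup):
        return (len(template) == len(tup) and
                all(t is None or t == f for t, f in zip(template, tup)))

    def rd(self, template):
        return next((t for t in self.tuples if self._match(template, t)), None)

    def inp(self, template):
        t = self.rd(template)
        if t is not None:
            self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("point", 3, 4))
ts.out(("point", 5, 12))
print(ts.rd(("point", 3, None)))   # -> ('point', 3, 4)
print(ts.inp(("point", 5, None)))  # removes ('point', 5, 12)
print(ts.rd(("point", 5, None)))   # -> None
```

The compile-time analysis the thesis describes works because templates like `("point", 3, None)` constrain which tuples can ever match, so the tuple space can be partitioned and distributed by matching pattern.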
|
75 |
Defending Against Adversarial Attacks Using Denoising Autoencoders. Rehana Mahfuz (8617635) 24 April 2020 (has links)
Gradient-based adversarial attacks on neural networks threaten extremely critical applications such as medical diagnosis and biometric authentication. These attacks use the gradient of the neural network to craft imperceptible perturbations which are added to the test data in an attempt to decrease the accuracy of the network. We propose a defense to combat such attacks, which can be modified to reduce the training time of the network by as much as 71%, and can be further modified to reduce the training time of the defense by as much as 19%. Further, we address the threat of uncertain behavior on the part of the attacker, a threat previously overlooked in the literature, which mostly considers white-box scenarios. To combat uncertainty on the attacker's part, we train our defense with an ensemble of attacks, each generated with a different attack algorithm and using gradients of distinct architecture types. Finally, we discuss how to prevent the attacker from breaking the defense by estimating the gradient of the defense transformation.
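The gradient-based perturbation this abstract refers to can be illustrated with the fast gradient sign method (FGSM) on a toy logistic model. The weights, input, and epsilon below are invented for the example, and real attacks target deep networks rather than a single linear unit; the principle, stepping each input feature in the direction that increases the loss, is the same.

```python
import math

def predict(w, b, x):
    """Sigmoid output of a single logistic unit."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

def input_gradient(w, b, x, y):
    """Gradient of the cross-entropy loss w.r.t. the *input* (not the
    weights): dL/dx_i = (p - y) * w_i for a logistic model."""
    p = predict(w, b, x)
    return [(p - y) * wi for wi in w]

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method: perturb each feature by eps in the
    sign direction of the loss gradient."""
    g = input_gradient(w, b, x, y)
    return [xi + eps * (1 if gi > 0 else -1) for xi, gi in zip(x, g)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1               # clean example, true label 1
x_adv = fgsm(w, b, x, y, eps=0.6)
print(predict(w, b, x) > 0.5)      # True: clean input classified as 1
print(predict(w, b, x_adv) > 0.5)  # False: the perturbation flips it
```

A denoising-autoencoder defense of the kind proposed here sits in front of the classifier and maps `x_adv` back toward the clean data manifold before prediction.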
|
76 |
Techniques for Managing Irregular Control Flow on GPUs. Jad Hbeika (5929730) 25 June 2020 (has links)
<p>GPGPU is a highly multithreaded throughput architecture that can deliver high speed-ups for regular applications while remaining energy efficient. In recent years there has been much focus on tuning irregular applications and/or the GPU architecture to achieve similar benefits for irregular applications, as well as efforts to extract data parallelism from task-parallel applications. In this work we tackle both problems.</p><p>The first part of this work tackles the problem of control divergence on GPUs. The GPGPU SIMT execution model is ineffective for workloads with irregular control flow because GPGPUs serialize the execution of divergent paths, which leads to a loss of thread-level parallelism (TLP). Previous works focused on creating new warps based on the control path threads follow, creating different warps for the different paths, or running multiple narrower warps in parallel. While all previous solutions showed speedups for irregular workloads, they imposed some performance loss on regular workloads. In this work we propose a more fine-grained approach to exploit <i>intra-warp</i> convergence: rather than threads executing the same code path, <i>opcode-convergent threads</i> execute the same instruction, but with potentially different operands. Based on this new definition we find that divergent control blocks within a warp exhibit substantial opcode convergence. We build a compiler that analyzes divergent blocks and identifies the common streams of opcodes. We modify the GPU architecture so that these common instructions are executed as convergent instructions. Using software simulation, we achieve a 17% speedup over the baseline GPGPU for irregular workloads and incur no performance loss on regular workloads.</p><p>In the second part we suggest techniques for extracting data parallelism from irregular, task-parallel applications in order to take advantage of the massive parallelism provided by the GPU. 
Our technique involves dividing each task into multiple sub-tasks, each performing less work and touching a smaller memory footprint. Our framework performs locality-aware scheduling that minimizes the memory footprint of each warp (a set of threads executing in lock-step). We evaluate our framework on three task-parallel benchmarks and show that we can achieve significant speedups over optimized GPU code.</p>
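The splitting-and-grouping idea in the second part can be sketched on the host side. This is a schematic illustration only (the tasks, chunk size, and span-based footprint metric are invented for the example), not the GPU framework itself: sub-tasks that touch nearby memory are grouped into the same warp so each warp's footprint stays small.

```python
def split_task(touched, chunk):
    """Divide one task (the list of element indices it touches) into
    sub-tasks that each touch at most `chunk` elements."""
    return [touched[i:i + chunk] for i in range(0, len(touched), chunk)]

def span(warp):
    """Footprint proxy: the address range a warp's sub-tasks cover."""
    addrs = [a for sub in warp for a in sub]
    return max(addrs) - min(addrs) + 1

def schedule(subtasks, warp_size, locality_aware):
    """Group sub-tasks into warps; the locality-aware variant first
    sorts them by the lowest address they touch."""
    order = sorted(subtasks, key=min) if locality_aware else subtasks
    return [order[i:i + warp_size] for i in range(0, len(order), warp_size)]

# Two tasks, each touching one dense region plus a distant region.
tasks = [[0, 1, 2, 3, 100, 101], [4, 5, 6, 7, 102, 103]]
subs = [s for t in tasks for s in split_task(t, 4)]
naive = schedule(subs, 2, locality_aware=False)
aware = schedule(subs, 2, locality_aware=True)
print([span(w) for w in naive])  # wide footprints: [102, 100]
print([span(w) for w in aware])  # tight footprints: [8, 4]
```

Splitting first is what makes the grouping possible: the original tasks each straddle two distant regions, but their sub-tasks are region-local and can be co-scheduled with neighbors.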
|
77 |
Analyzing Sensitive Data with Local Differential Privacy. Tianhao Wang (10711713) 30 April 2021 (has links)
<div>Vast amounts of sensitive personal information are collected by companies, institutions and governments. A key technological challenge is how to effectively extract knowledge from data while preserving the privacy of the individuals involved. In this dissertation, we address this challenge from the perspective of privacy-preserving data collection and analysis. We focus on a technique called local differential privacy (LDP) and study several of its aspects. </div><div><br></div><div><br></div><div>In particular, this thesis serves as a comprehensive study of multiple aspects of the LDP field. We investigate the following seven problems: (1) We study LDP primitives, i.e., the basic mechanisms used to build LDP protocols. (2) We then study the problem where the domain size is very large (e.g., larger than $2^{32}$), in which finding the values with high frequency is a challenge because one cannot feasibly enumerate all values. (3) Another interesting setting is when each user possesses a set of values instead of a single private value. (4) With the basic problems visited, we then aim to make LDP protocols practical for real-world scenarios. We investigate the case where each user's data is high-dimensional (e.g., in a census survey, each user answers multiple questions), and the goal is to recover the joint distribution among the attributes. (5) We also build a system that lets companies issue SQL queries over data protected under LDP, where each user is associated with some public weights and holds some private values; an LDP version of the values is sent to the server from each user. (6) To further increase the accuracy of LDP, we study how to add post-processing steps to protocols to make them consistent while achieving high accuracy for a wide range of tasks, including frequencies of individual values, frequencies of the most frequent values, and frequencies of subsets of values. 
(7) Finally, we investigate a different model of LDP called the shuffler model. While users still use LDP algorithms to report their sensitive data, there now exists a semi-trusted shuffler that shuffles the users' reports and then sends them to the server. This model provides better utility, but at the cost of trusting that the shuffler does not collude with the server.</div>
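A concrete example of an LDP primitive of the kind studied in (1) is generalized randomized response: each user reports their true value with a probability calibrated to epsilon, and otherwise reports a random other value; the server inverts the known perturbation to get unbiased frequency estimates. The domain, epsilon, and counts below are invented for illustration.

```python
import math
import random

def perturb(value, domain, epsilon, rng):
    """Generalized randomized response: report the true value with
    probability p = e^eps / (e^eps + d - 1), otherwise a uniformly
    random *other* value from the domain."""
    d = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + d - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def estimate_counts(reports, domain, epsilon):
    """Unbiased frequency estimates: invert the known perturbation,
    since E[raw_v] = n_v * p + (n - n_v) * q."""
    d, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + d - 1)
    q = (1 - p) / (d - 1)
    raw = {v: sum(1 for r in reports if r == v) for v in domain}
    return {v: (raw[v] - n * q) / (p - q) for v in domain}

rng = random.Random(1)
domain = ["a", "b", "c", "d"]
truth = ["a"] * 6000 + ["b"] * 3000 + ["c"] * 1000
reports = [perturb(v, domain, epsilon=2.0, rng=rng) for v in truth]
est = estimate_counts(reports, domain, epsilon=2.0)
# Estimates recover the true counts (6000 / 3000 / 1000 / 0) up to
# sampling noise, even though every individual report is randomized.
print({v: round(est[v]) for v in domain})
```

No single report reveals a user's value with confidence, yet aggregate frequencies remain accurate; this accuracy-privacy trade-off is exactly what the post-processing and shuffler-model work in (6) and (7) aim to improve.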
|
78 |
COMPARING SOCIAL ENGINEERING TRAINING IN THE CONTEXT OF HEALTHCARE. Giovanni Ordonez (12481197) 03 May 2022 (has links)
<p>Social Engineering attacks have been a rising issue in recent years, affecting a multitude of industries. One industry that has been of great interest to hackers is healthcare, due to the high value of patient information. Social Engineering attacks are common mainly because of their ease of execution and the high probability of victimization. A popular way of combating Social Engineering attacks is to increase users' ability to detect indicators of attack, which requires a level of cybersecurity education. While the number of cybersecurity training programs is increasing, Social Engineering attacks remain very successful. Education programs therefore need to be improved to effectively increase users' ability to notice indicators of attack. This research aimed to answer the question: which teaching method results in the greatest learning gains for understanding Social Engineering concepts? Text-based, gamification, and adversarial thinking teaching methods were investigated by delivering lessons on an online platform to a sample of Purdue students. Both the text-based and adversarial thinking methods showed significant improvement in the understanding of Social Engineering concepts within the student sample. A follow-up test did not identify a single best method among the three. However, this study did find two teaching methods that can be used to develop training programs to help decrease the total number of successful Social Engineering attacks across industries. </p>
|
79 |
Office automation. Stutz, Peter January 1989 (has links)
Bibliography: p. 100-104. / Office automation systems have become an essential tool for the operation of the modern office. With the emphasis of the modern office on efficiency and ease of communication, office automation systems have become the backbone of successful businesses. COSNET is a prototype office automation system designed and implemented at the University of Cape Town; it runs on personal computers linked to an NCR UNIX TOWER, which acts as the host. This dissertation investigates the facilities supported by several office automation systems and describes COSNET's features. This prototype supports many of the facilities offered by large office automation systems. COSNET allows the user to define any MS-DOS based editor or word processor, and uses a simple editor for the creation of mail. The electronic filing facility allows documents to be created, filed, retrieved and deleted, providing users with the features necessary for document exchange. A user may set access permissions on each of his documents, granting other users either read or write access to a specific document. The mail facility lets the user read, file, forward, delete and print a message, and supports classification of mail. A calendar facility is used as an electronic diary and stores all the user's schedules, which may be viewed in daily, weekly, or monthly display modes. Read and write access to the calendar can be set by the user, in order to allow other users to manipulate his schedules. Any MS-DOS based application software can be added to COSNET. This facility allows the COSNET user to configure the office automation system to simulate the office environment. COSNET thus supports most of the features required of an office automation system.
|
80 |
Practical Type and Memory Safety Violation Detection Mechanisms. Yuseok Jeon (9217391) 29 August 2020 (has links)
System programming languages such as C and C++ are designed to give the programmer full control over the underlying hardware. However, this freedom comes at the cost of type and memory safety violations, which may allow an attacker to compromise applications.
In particular, type safety violation, also known as type confusion, is one of the major attack vectors used to corrupt modern C++ applications. In past years, several type confusion detectors have been proposed, but they are severely limited by high performance overhead, low detection coverage, and high false positive rates. To address these issues, we propose HexType and V-Type. First, we propose HexType, a tool that provides low-overhead disjoint metadata structures and compiler optimizations, and handles specific object allocation patterns. Compared to prior work, HexType significantly improves detection coverage and reduces performance overhead. In addition, HexType discovers new type confusion bugs in real-world programs such as Qt and Apache Xerces-C++. However, although HexType significantly reduces performance overhead and improves detection coverage, it still has considerable overhead from managing the disjoint metadata structure and tracking individual objects, and it has false positives from imprecise object tracking. To address these issues, we propose a further advanced mechanism, V-Type, which forcibly changes non-polymorphic types into polymorphic types so that all objects maintain type information. By doing this, V-Type removes the burden of tracking object allocation and deallocation and of managing a disjoint metadata structure, which reduces performance overhead and improves detection precision.
Another major attack vector is memory safety violations, which attackers can exploit by accessing out-of-bounds or deleted memory. For memory safety violation detection, combining a fuzzer with sanitizers is a popular and effective approach. However, we find that the heavy metadata structures of current sanitizers hinder fuzzing effectiveness. Thus, we introduce FuZZan, which optimizes sanitizer metadata structures for fuzzing. Consequently, FuZZan improves fuzzing throughput, helping the tester discover more unique paths in the same amount of time and find bugs faster.
In conclusion, my research aims to eliminate critical and common C/C++ memory and type safety violations through practical program analysis techniques. Toward this goal, these three projects contribute to our community's ability to effectively detect type and memory safety violations.
|