301 |
The design of a consumer information system in the supermarket environment. Berman, Moira Elaine. January 1979.
The purpose of this thesis is to explore the possibility of creating and maintaining a database in the public domain.
The concepts considered relate to the general computerized storage of consumer goods information and the dissemination of this information to the public. The focus, however, is on a Consumer Information System (CIS) in the grocery industry, with emphasis on price data.
The major topics discussed include the advent of the Universal Product Code, the subsequent introduction of automated checkout and scanning systems in supermarkets, interest groups involved, one possible design of the CIS, and the feasibility of such a system. The system is designed to meet a minimum set of objectives of the interest groups.
Based on the analysis, the development of a CIS is feasible, subject to the mutual cooperation of the interest groups involved. Suggestions are made with regard to the practical implementation of the ideas generated. Future implications and possible research constitute the final sections of the thesis. / Sauder School of Business / Graduate
|
302 |
Die evaluering van 'n aantal kriptologiese algoritmes [The evaluation of a number of cryptological algorithms]. Van der Bank, Dirk Johannes. 18 March 2014.
M.Sc. (Computer Science) / The main themes of this thesis are the characteristics of natural language, cryptographic algorithms to encipher natural language, and possible figures of merit with which to compare different cryptographic algorithms. The characteristics of natural language and the influence they have on cryptographic algorithms are investigated. The entropy function of Shannon is used extensively to evaluate the different models that can be constructed to simulate natural language. Natural language redundancy is investigated and quantified by the entropy function. The influence this redundancy has on the theoretical security of different algorithms is tabulated, with Shannon's unicity distance used as the measure of security for this purpose. Already at this early stage, the unicity distance is shown not to be a very accurate measure of the real (practical) security of cryptographic ciphers. The cryptographic algorithms discussed in this thesis are arbitrarily divided into three groups: classical algorithms, public key algorithms and computer algorithms. The classical algorithms include cryptographic techniques such as transposition and character substitution. Well-known ciphers such as the Playfair and Hill encipherment schemes are also included as classical cryptographic techniques. A special section is devoted to the use of, and cryptanalytic techniques for, polyalphabetic ciphers. The public key ciphers are divided into three main groups: knapsack ciphers, RSA-type ciphers and discrete logarithmic systems. Except for the discrete logarithmic cipher, several examples of the other two groups are given. Examples of knapsack ciphers are the Merkle-Hellman knapsack, the Graham-Shamir knapsack and Shamir's random knapsack.
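As an illustration of the entropy and unicity distance concepts mentioned above, the following minimal Python sketch computes the first-order (single-character) entropy of a text sample and Shannon's unicity distance for a simple substitution cipher. The 1.5 bits/character entropy figure and the 26! key space are illustrative assumptions, not values taken from the thesis.

```python
import math
from collections import Counter

def shannon_entropy(text):
    """First-order Shannon entropy (bits per character) of a text sample."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def unicity_distance(key_space_bits, alphabet_size, entropy_per_char):
    """Shannon's unicity distance H(K)/D, where D is the per-character
    redundancy of the plaintext language."""
    redundancy = math.log2(alphabet_size) - entropy_per_char
    return key_space_bits / redundancy

sample = "the quick brown fox jumps over the lazy dog " * 20
print(f"sample entropy: {shannon_entropy(sample):.2f} bits/char")

# A simple substitution cipher over 26 letters has a key space of 26!,
# roughly 88.4 bits; assuming English carries about 1.5 bits/char of
# information, the redundancy is about 3.2 bits/char.
print(f"unicity distance: "
      f"{unicity_distance(math.log2(math.factorial(26)), 26, 1.5):.1f} characters")
```

With these assumed figures the unicity distance comes out near 28 characters, which illustrates the thesis's point: such a small theoretical bound says little about the practical effort required to break the cipher.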
|
303 |
A cryptographically secure protocol for key exchange. Herdan, David Errol. 11 September 2014.
M.Sc. (Computer Science) / Since the emergence of electronic communication, scientists have strived to make these communication systems as secure as possible. Classical cryptographic methods provided secrecy, with the proviso that the courier delivering the keys could be trusted; this method of key distribution proved to be too inefficient and costly. A 'cryptographic renaissance' was brought about by the advent of public key cryptography, in which the message key consists of a pair of mathematically complementary keys instead of the symmetric keys of its forerunner. Classical cryptographic techniques were by no means obsolete, however, as 'hybrid' systems proved very effective: the computationally expensive public key techniques allow both parties to share a secret, and the more efficient symmetric algorithms actually encrypt the message. New technology led, however, to new difficulties, and the problem of key management arose. Various protocols emerged as solutions to the key distribution problem, each with its own advantages and disadvantages. The aim of this work is to critically review these protocols, analyse their shortfalls and attempt to design a protocol that overcomes them. The class of protocols reviewed is that of the so-called 'strong authentication' protocols, in which interaction between the message sender and recipient is required.
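To make the hybrid idea concrete, here is a minimal, deliberately toy Python sketch: an unauthenticated Diffie-Hellman exchange establishes a shared secret, which is then hashed into a symmetric key for the bulk encryption step. The small Mersenne prime and the XOR keystream are placeholders for illustration only; a real protocol of the kind reviewed in the thesis would use standardized parameters, a proper symmetric cipher and, crucially, authentication of the exchanged values.

```python
import hashlib
import secrets

# Toy group parameters: 2**127 - 1 is a Mersenne prime, far too small for
# real use, but enough to show the shape of the exchange.
P = 2**127 - 1
G = 5

# Each party picks a private exponent and publishes G**x mod P.
a = secrets.randbelow(P - 2) + 1
b = secrets.randbelow(P - 2) + 1
A = pow(G, a, P)
B = pow(G, b, P)

# Both parties derive the same shared secret from the other's public value.
shared_a = pow(B, a, P)
shared_b = pow(A, b, P)
assert shared_a == shared_b

# Hybrid step: hash the shared secret into a symmetric key and encrypt the
# message with it (the XOR keystream stands in for a real symmetric cipher).
key = hashlib.sha256(str(shared_a).encode()).digest()
message = b"session established"
ciphertext = bytes(m ^ k for m, k in zip(message, key))
print(ciphertext.hex())
```

Without authentication, an exchange like this is open to a man-in-the-middle attack, which is precisely the gap that the 'strong authentication' protocols reviewed in the thesis are meant to close.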
|
304 |
Die funksie van die eksterne ouditeur in die veranderende ouditsituasie meegebring deur die elektroniese verwerking van handelsdata met spesiale verwysing na die indeling van interne beheerpunte [The function of the external auditor in the changing audit situation brought about by the electronic processing of commercial data, with special reference to the classification of internal control points]. Pretorius, Jacobus Petrus Steyn. 23 September 2014.
M.Com. (Auditing) / Please refer to full text to view abstract
|
305 |
Understanding the performance of healthcare services: a data-driven complex systems modeling approach. Tao, Li. 13 February 2014.
Healthcare is of critical importance in maintaining people's health and wellness. It has attracted policy makers, researchers, and practitioners around the world to find better ways to improve the performance of healthcare services. One of the key indicators for assessing that performance is how accessible and timely the services are for specific groups of people in distinct geographic locations and in different seasons, which is commonly reflected in the so-called wait times of services. Wait times involve multiple related impact factors, called predictors, such as demographic characteristics, service capacities, and human behaviors. Some impact factors, especially individuals' behaviors, may have mutual interactions, which can lead to tempo-spatial patterns in wait times at a systems level. The goal of this thesis is to gain a systematic understanding of healthcare services by investigating the causes and corresponding dynamics of wait times. This thesis presents a data-driven complex systems modeling approach to investigating the causes of tempo-spatial patterns in wait times from a self-organizing perspective. As the predictors of wait times may have direct, indirect, and/or moderating effects, referred to as complex effects, a Structural Equation Modeling (SEM)-based analysis method is proposed to discover the complex effects from aggregated data. Existing regression-based analysis techniques are only able to reveal pairwise relationships between observed variables, whereas this method allows us to explore the complex effects of observed and/or unobserved (latent) predictors on wait times simultaneously. This thesis then considers how to estimate the variations in wait times with respect to changes in specific predictors and their revealed complex effects. An integrated projection method using the SEM-based analysis, projection, and a queuing model analysis is developed. Unlike existing studies that make projections based primarily on pairwise relationships between variables, or on queuing model-based discrete event simulations, the proposed method enables us to make a more comprehensive estimate by taking into account the complex effects exerted by multiple observed and latent predictors, and thus gain insights into the variations in the estimated wait times over time. This thesis further presents a method for designing and evaluating service management strategies to improve wait times, which are determined by service management behaviors. Our proposed strategy for allocating time blocks in operating rooms (ORs) incorporates historical feedback information about ORs and can adapt to unpredictable changes in patient arrivals and hence shorten wait times. Existing time block allocations are somewhat ad hoc and are based primarily on the allocations in previous years, and thus result in inefficient use of service resources. Finally, this thesis proposes a behavior-based autonomy-oriented modeling method for modeling and characterizing the emergent tempo-spatial patterns at a systems level by taking into account the underlying individuals' behaviors with respect to various impact factors. This method uses multi-agent Autonomy-Oriented Computing (AOC), a computational modeling and problem-solving paradigm with a special focus on addressing the issues of self-organization and interactivity, to model heterogeneous individuals (entities), autonomous behaviors, and the mutual interactions between entities and certain impact factors.
The proposed method therefore eliminates to a large extent the strong assumptions that are used to define the stochastic properties of patient arrivals and services in stochastic modeling methods (e.g., the queuing model and discrete event simulation), and those of fixed relationships between entities that are held by system dynamics methods. The method is also more practical than agent-based modeling (ABM) for discovering the underlying mechanisms for emergent patterns, as AOC provides a general principle for explicitly stating what fundamental behaviors of and interactions between entities should be modeled. To demonstrate the effectiveness of the proposed systematic approach to understanding the dynamics and relevant patterns of wait times in specific healthcare service systems, we conduct a series of studies focusing on the cardiac care services in Ontario, Canada. Based on aggregated data that describe the services from 2004 to 2007, we use the SEM-based analysis method to (1) investigate the direct and moderating effects that specific demand factors, in terms of certain geodemographic profiles, exert on patient arrivals, which indirectly affect wait times; and (2) examine the effects of these factors (e.g., patient arrivals, physician supply, OR capacity, and wait times) on the wait times in subsequent units in a hospital. We present the effectiveness of integrated projection in estimating the regional changes in service utilization and wait times in cardiac surgery services in 2010-2011. We propose an adaptive OR time block allocation strategy and evaluate its performance based on a queuing model derived from the general perioperative practice. Finally, we demonstrate how to use the behavior-based autonomy-oriented modeling method to model and simulate the cardiac care system. We find that patients' hospital selection behavior, hospitals' service adjusting behavior, and their interactions via wait times may account for the emergent tempo-spatial patterns that are observed in the real-world cardiac care system. In summary, this thesis emphasizes the development of a data-driven complex systems modeling approach for understanding wait time dynamics in a healthcare service system. This approach will provide policy makers, researchers, and practitioners with a practically useful method for estimating the changes in wait times in various "what-if" scenarios, and will support the design and evaluation of resource allocation strategies for better wait times management. By addressing the problem of characterizing emergent tempo-spatial wait time patterns in the cardiac care system from a self-organizing perspective, we have provided a potentially effective means for investigating various self-organized patterns in complex healthcare systems.
Keywords: Complex Healthcare Service Systems, Wait Times, Data-Driven Complex Systems Modeling, Autonomy-Oriented Computing (AOC), Cardiac Care
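As a small illustration of the queuing-model component mentioned above, the following sketch estimates the expected wait in an M/M/c queue with the Erlang C formula. The arrival rate, service rate, and number of operating rooms are hypothetical figures, and the formula is a generic textbook calculation rather than the thesis's own perioperative model.

```python
import math

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Expected time in queue for an M/M/c system, via the Erlang C formula.
    The rates and the returned wait share the same time unit."""
    offered_load = arrival_rate / service_rate
    utilization = offered_load / servers
    if utilization >= 1:
        raise ValueError("unstable queue: utilization must be below 1")
    # Probability that an arriving patient has to wait at all (Erlang C).
    summation = sum(offered_load**k / math.factorial(k) for k in range(servers))
    tail = offered_load**servers / (math.factorial(servers) * (1 - utilization))
    p_wait = tail / (summation + tail)
    return p_wait / (servers * service_rate - arrival_rate)

# Hypothetical figures: 8 surgical cases arriving per week, each OR team
# completing 1.1 cases per week, 9 ORs available.
print(f"expected wait: {erlang_c_wait(8, 1.1, 9):.2f} weeks")
```

A data-driven approach of the kind the thesis proposes would replace the fixed rates above with values estimated from observed patient arrivals and service behaviors.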
|
306 |
Test Case Generation According to the Binary Search Strategy. Beydeda, Sami; Gruhn, Volker. 08 November 2018.
One of the important tasks during software testing is the generation of test cases. Unfortunately, existing approaches to test case generation often have problems limiting their use. A problem of dynamic
test case generation approaches, for instance, is that a large number of iterations can be necessary to obtain test cases. This article introduces a formal framework for the application of the well-known search strategy of binary search in path-oriented test case generation and explains the binary search-based test case generation (BINTEST) algorithm.
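The following Python sketch conveys the general flavour of search-based test case generation with binary search; it is not the authors' BINTEST algorithm itself, and the branch predicate and input domain are hypothetical. The key assumption, as in the framework the article formalizes, is that the searched condition behaves monotonically over the chosen interval.

```python
def binary_search_test_input(predicate, lo, hi, tolerance=1e-6):
    """Find an input satisfying `predicate`, assuming the predicate is
    monotonic (False below some boundary, True above it) on [lo, hi]."""
    if predicate(lo):
        return lo
    if not predicate(hi):
        raise ValueError("predicate never satisfied on this interval")
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if predicate(mid):
            hi = mid
        else:
            lo = mid
    return hi  # smallest found input (up to tolerance) that takes the target branch

# Hypothetical condition guarding the path under test in some unit:
# the branch of interest is taken once the computed discount exceeds 30.0.
covers_target_branch = lambda price: price * 0.15 > 30.0
print(f"generated test input: "
      f"{binary_search_test_input(covers_target_branch, 0.0, 1000.0):.4f}")
```

On this interval the search converges in about thirty iterations, which illustrates why a binary search strategy can need far fewer iterations than other dynamic test case generation techniques.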
|
307 |
Advances in categorical data clustering. Zhang, Yiqun. 29 August 2019.
Categorical data are common in various research areas, and clustering is a prevalent technique used to analyse them. However, two challenging problems are encountered in categorical data clustering analysis. The first is that most categorical data distance metrics were actually proposed for nominal data (i.e., a categorical data set that comprises only nominal attributes), ignoring the fact that ordinal attributes are also common in various categorical data sets. As a result, these nominal data distance metrics cannot account for the order information of ordinal attributes and may thus inappropriately measure the distances for ordinal data (i.e., a categorical data set that comprises only ordinal attributes) and mixed categorical data (i.e., a categorical data set that comprises both ordinal and nominal attributes). The second problem is that most hierarchical clustering approaches were actually designed for numerical data and have very high computation costs; that is, time complexity O(N^2) for a data set with N data objects. These issues have presented huge obstacles to the clustering analysis of categorical data. To address the ordinal data distance measurement problem, we studied the characteristics of ordered possible values (also called 'categories' interchangeably in this thesis) of ordinal attributes and propose a novel ordinal data distance metric, which we call the Entropy-Based Distance Metric (EBDM), to quantify the distances between ordinal categories. The EBDM adopts cumulative entropy as a measure to indicate the amount of information in the ordinal categories and simulates the thinking process of changing one's mind between two ordered choices to quantify the distances according to the amount of information in the ordinal categories. The order relationship and the statistical information of the ordinal categories are both considered by the EBDM for more appropriate distance measurement. Experimental results illustrate the superiority of the proposed EBDM in ordinal data clustering. In addition to designing an ordinal data distance metric, we further propose a unified categorical data distance metric that is suitable for distance measurement of all three types of categorical data (i.e., ordinal data, nominal data, and mixed categorical data). The extended version uniformly defines distances and attribute weights for both ordinal and nominal attributes, by which the distances measured for the two types of attributes of mixed categorical data can be directly combined to obtain the overall distances between data objects with no information loss. Extensive experiments on all three types of categorical data sets demonstrate the effectiveness of the unified distance metric in clustering analysis of categorical data. To address the hierarchical clustering problem of large-scale categorical data, we propose a fast hierarchical clustering framework called Growing Multi-layer Topology Training (GMTT). The most significant merit of this framework is its ability to reduce the time complexity of most existing hierarchical clustering frameworks from O(N^2) to O(N^1.5) without sacrificing the quality (i.e., clustering accuracy and hierarchical details) of the constructed hierarchy. By design, the GMTT framework is applicable to categorical data clustering simply by adopting a categorical data distance metric.
To make the GMTT framework suitable for processing streaming categorical data, we also provide an incremental version of GMTT that can dynamically incorporate new inputs into the hierarchy via local updating. Theoretical analysis proves that the GMTT frameworks have time complexity O(N^1.5). Extensive experiments show the efficacy of the GMTT frameworks and demonstrate that they achieve more competitive categorical data clustering performance by adopting the proposed unified distance metric.
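The next sketch illustrates the general idea behind an entropy-weighted ordinal distance of the kind described above: the gap between two ordered categories is scored by the information content of the categories passed when "changing one's mind" from one choice to the other. The exact EBDM definition in the thesis is more elaborate; the attribute values and weighting below are simplified assumptions.

```python
import math
from collections import Counter

def entropy_weighted_distance(values, ordered_categories):
    """Return a distance function over ordered categories, where the distance
    between two categories accumulates the self-information of every category
    lying between them (inclusive). A simplified sketch, not the thesis's EBDM."""
    n = len(values)
    freq = Counter(values)
    info = {c: -math.log2(freq[c] / n) if freq[c] else 0.0
            for c in ordered_categories}

    def distance(c1, c2):
        i, j = sorted((ordered_categories.index(c1), ordered_categories.index(c2)))
        if i == j:
            return 0.0
        return sum(info[ordered_categories[k]] for k in range(i, j + 1))

    return distance

# Hypothetical ordinal attribute with skewed category frequencies.
ratings = ["low"] * 50 + ["medium"] * 30 + ["high"] * 20
dist = entropy_weighted_distance(ratings, ["low", "medium", "high"])
print(dist("low", "medium"), dist("medium", "high"), dist("low", "high"))
```

Because rarer categories carry more information, spans that cross them are scored as larger distances, which is the intuition that distinguishes an entropy-based ordinal metric from a plain rank difference.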
|
308 |
Determining the Factors Influential in the Validation of Computer-based Problem Solving Systems. Morehead, Leslie Anne. 01 January 1996.
Examination of the literature on methodologies for verifying and validating complex computer-based Problem Solving Systems led to a general hypothesis that there exist measurable features of systems that are correlated with the best testing methods for those systems. Three features (Technical Complexity, Human Involvement, and Observability) were selected as the basis of the current study. A survey of systems currently operating in over a dozen countries explored relationships between these system features, test methods, and the degree to which systems were considered valid. Analysis of the data revealed that certain system features and certain test methods are indeed related to reported levels of confidence in a wide variety of systems. A set of hypotheses was developed, focused in such a way that they correspond to linear equations that can be estimated and tested for significance using statistical regression analysis. Of 24 tested hypotheses, 17 were accepted, resulting in 49 significant models predicting validation and verification percentages, using 37 significant variables. These models explain between 28% and 86% of total variation. Interpretation of these models (equations) leads directly to useful recommendations regarding system features and types of validation methods that are most directly associated with the verification and validation of complex computer systems. The key result of the study is the identification of a set of sixteen system features and test methods that are multiply correlated with reported levels of verification and validation. Representative examples are:
• People are more likely to trust a system if it models a real-world event that occurs frequently.
• A system is more likely to be accepted if users were involved in its design.
• Users prefer systems that give them a large choice of output.
• The longer the code, or the greater the number of modules, or the more programmers involved on the project, the less likely people are to believe a system is error-free and reliable.
From these results recommendations are developed that bear strongly on proper resource allocation for testing computer-based Problem Solving Systems. Furthermore, they provide useful guidelines on what should reasonably be expected from the validation process.
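A minimal sketch of the kind of regression the hypotheses correspond to is given below. The data are synthetic and the three feature scores (technical complexity, human involvement, observability) are stand-ins for the study's survey variables; none of the coefficients reflect the thesis's actual results.

```python
import numpy as np

# Synthetic stand-in survey: one row per system, with hypothetical 1-10 scores
# for the three features and a reported validation percentage as the response.
rng = np.random.default_rng(0)
n = 40
complexity = rng.uniform(1, 10, n)
involvement = rng.uniform(1, 10, n)
observability = rng.uniform(1, 10, n)
validation_pct = (60 - 2.0 * complexity + 1.5 * involvement
                  + 2.5 * observability + rng.normal(0, 5, n))

# Ordinary least-squares fit of
#   validation % = b0 + b1*complexity + b2*involvement + b3*observability
X = np.column_stack([np.ones(n), complexity, involvement, observability])
coef, residuals, *_ = np.linalg.lstsq(X, validation_pct, rcond=None)
r_squared = 1 - residuals[0] / np.sum((validation_pct - validation_pct.mean()) ** 2)
print("coefficients:", np.round(coef, 2), " R^2:", round(float(r_squared), 2))
```

Estimating and significance-testing equations of this form against the survey responses is what allows hypotheses about system features and test methods to be accepted or rejected.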
|
309 |
Hierarchical decomposition of polygons with applications. ElGindy, Hossam A. January 1985.
No description available.
|
310 |
The introduction of computer networking and activities in K-12 classrooms: a case study of a secondary school. Silva, Marcos, 1953-. January 1998.
No description available.
|