941

Towards computer-based analysis of clinical electroencephalograms

Doyle, Daniel John January 1974
Two approaches to the automatic analysis of clinical electroencephalograms (EEGs) are considered with a view towards classifying clinical EEGs as normal or abnormal. The first approach examines the variability of various EEG features in a population of astronaut candidates known to be free of neurological disorders by constructing histograms of these features; unclassified EEGs of subjects in the same age group are examined by comparing their feature values to the histograms of this neurologically normal group. The second approach employs the techniques of automatic pattern recognition for classification of clinical EEGs. A set of 57 EEG records designated normal or abnormal by clinical electroencephalographers is used to evaluate pattern recognition systems based on stepwise discriminant analysis. In particular, the efficacy of using various feature sets in such pattern recognition systems is evaluated in terms of estimated classification error probabilities (Pe). The results of the study suggest a potential for the development of satisfactory automatic systems for the classification of clinical EEGs. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
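The screening idea in the first approach can be sketched in a few lines: build a histogram (reduced here to empirical percentile bounds) of a feature in the neurologically normal reference group, then flag an unclassified record whose feature value falls outside that reference range. The feature name, reference values and cut-offs below are purely illustrative assumptions, not data from the thesis.

    import numpy as np

    # Hypothetical reference feature values (e.g., alpha-band power) from
    # neurologically normal subjects; in the thesis these came from astronaut candidates.
    rng = np.random.default_rng(0)
    normal_alpha_power = rng.normal(loc=50.0, scale=8.0, size=200)

    def percentile_bounds(reference, lower=2.5, upper=97.5):
        # Empirical bounds of the reference (normal-group) histogram.
        return np.percentile(reference, [lower, upper])

    def screen_record(feature_value, reference):
        # Flag a record as atypical if its feature falls outside the reference bulk.
        lo, hi = percentile_bounds(reference)
        return "atypical" if not (lo <= feature_value <= hi) else "within normal range"

    print(screen_record(49.0, normal_alpha_power))   # likely within normal range
    print(screen_record(85.0, normal_alpha_power))   # likely atypical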
942

A study of the motor unit potential for application to the automatic analysis of clinical EMG signals

Boyd, David Colin January 1976
A computer model of the human single motor unit potential has been created for the purpose of developing methods of automated analysis in clinical electromyography. This approach was taken in order to examine the effects of pathological changes on the electromyographic potentials. A comprehensive review of the previous methods of automatic analysis of clinical EMG signals described in the literature has been presented and discussed, together with the relevant work on the production and detection of electrical activity with intramuscular electrodes. A methodology has been devised for the collection and preprocessing of the electromyographic signals and an EMG data base established at U.B.C. An interactive graphics routine was developed to display the EMG waveform and allow the extraction of single motor unit potentials for further analysis. A computer model has been proposed for the generation of single motor unit potentials observed during clinical EMG examinations of the normal biceps brachii muscle. This model was based on physiological findings. In the model the single fiber activity was represented by a dipole current source and the motor unit was constructed from a uniform random array of fibers. Motor unit potentials generated from this array were examined at various points both inside and outside the array and the effects of single fiber axial dispersion were investigated. The simulated motor unit potentials generated by the model have been compared with existing data from multielectrode studies in biceps brachii. The hypothesis that there is a variation in motor unit potential shape at successive discharges was investigated and the model employed for this purpose. It has been shown that for the normal motor unit potential, one major contributor to the shape variance is electromyographic jitter. The predictions from the model were compared with human experimental data. These results reveal that the variance may be a useful diagnostic indicator, although further research is warranted. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
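A minimal sketch of the modelling idea: sum the contributions of a random array of fibres, each with its own electrode distance and arrival time, and re-draw only the timing jitter at each discharge so that successive potentials vary in shape. The single-fibre waveform below is a generic biphasic pulse standing in for the dipole current source used in the thesis, and every numerical value is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10e-3, 500)                   # 10 ms observation window

    def fiber_potential(t, delay, dist):
        # Stand-in single-fibre potential: a biphasic (Gaussian-derivative) pulse whose
        # amplitude falls off with electrode-fibre distance.  The thesis models the fibre
        # as a dipole current source; this shape is only an illustrative substitute.
        tau = 0.3e-3
        x = (t - delay) / tau
        return -x * np.exp(-x * x) / (dist + 0.1)

    # One motor unit: a fixed random array of fibres around the electrode.
    n_fibers = 50
    dists = rng.uniform(0.2, 2.0, n_fibers)            # assumed electrode-to-fibre distances (mm)
    arrival = rng.normal(2e-3, 0.2e-3, n_fibers)       # axial dispersion of arrival times

    def discharge(jitter_sd=50e-6):
        # Each discharge re-draws only the timing jitter, so successive potentials
        # from the same unit differ slightly in shape.
        delays = arrival + rng.normal(0.0, jitter_sd, n_fibers)
        return sum(fiber_potential(t, d, r) for d, r in zip(delays, dists))

    mup1, mup2 = discharge(), discharge()
    print(float(np.max(np.abs(mup1 - mup2))))          # shape variation between discharges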
943

The design of a consumer information system in the supermarket environment

Berman, Moira Elaine January 1979
The purpose of this thesis is to explore the possibility of creating and maintaining a database in the public domain. The concepts considered relate to general computerized storage of consumer goods information, allowing dissemination of this information to the public. The focus, however, is on a Consumer Information System (CIS) in the grocery industry, with emphasis on price data. The major topics discussed include the advent of the Universal Product Code, the subsequent introduction of automated checkout and scanning systems in supermarkets, the interest groups involved, one possible design of the CIS, and the feasibility of such a system. The system is designed to meet a minimum set of objectives of the interest groups. Based on the analysis, the development of a CIS is feasible, subject to the mutual cooperation of the interest groups involved. Suggestions are made with regard to the practical implementation of the ideas generated. Future implications and possible research constitute the final sections of the thesis. / Business, Sauder School of / Graduate
944

An integrated data system for wildlife management

Kale, Lorne Wayne January 1979
In 1975 the British Columbia Fish and Wildlife Branch implemented the Management Unit system for controlling and monitoring wildlife harvests in the province. This change in management boundaries should have been accompanied by an intensified data handling system, so that accurate and reliable management indices could be produced for each M.U. This thesis describes a data system that was developed in response to Region 1 blacktailed deer management needs and offers a new approach to wildlife data system management. The proposed system integrates field contact and hunter questionnaire data, and allows managers to monitor the effects of their policy decisions. Management strategies can be tested by manipulating exploitation parameters, such as bag limits and season lengths, to determine their effect on specific wildlife populations. In addition, the system restores and upgrades obsolete data files, thus allowing past harvest trends to be applied to new management zones. Flexibility, for both anticipated changes in resource stratification and unanticipated data needs, is also preserved. Biologists require management estimates for specific areas within M.U.s to manage wildlife effectively at the M.U. level. Each of the 15 M.U.s in Region 1 has been subdivided into between 5 and 32 subunits, depending on area and geography. The 246 subunits in total attempt to partition large unmanageable wildlife resources into separate populations of manageable size. A location list, or computerized gazetteer, was used to automatically assign hunt location descriptions to appropriate M.U.s and subunits. New techniques for hunter sample estimates are proposed in this thesis. Mark-recapture methods for determining sampling intensities and the partitioning of large resident areas into resident M.U.s can improve estimates. Different methods for treating multiple mailing stage data are also presented. The data system described in this thesis consists of two parts: 1) the establishment of master data files and 2) the retrieval of data from those files. Five subsystems of FORTRAN computer programs control the input of Fish and Wildlife harvest data and manipulate them into master data files. The information retrieval is accomplished by standard statistical packages, such as SPSS. A hierarchical file structure is used to store the harvest data, thus most wildlife management data requests can be answered directly. The 1975 Region 1 blacktailed deer harvest data were used to test the sampling assumptions in both the hunter sample and field contact programs. Significant differences between resident M.U.s were found for hunter sample sampling intensity, percentage response, percentage sampled, and percentage of hunters among respondents. Significant differences were established in the percentage hunter success in different resident M.U.s and for different mailing phases. The 1975 field contact program produced a non-uniform distribution of contacts with respect to M.U.s. Highly significant differences between the percentage of licence holders checked from different resident M.U.s were also found. Kills for field checked hunters who also responded to the hunter sample questionnaire were compared to kills reported on the questionnaire. Numerous irregularities, including unreported kills, misreported kills, and totals exceeding bag limits, were found and a minimum error rate of about 20% was calculated.
Known buck kills were generally (87.9%) reported as bucks, while does were reported correctly only 74% of the time, and fawns only 48.0% of the time. The format of the 1975 deer hunter questionnaire is suspected to have influenced those error rates. Successful and unsuccessful hunters had different probabilities of responding to the hunter questionnaire: only 48.0% of unsuccessful hunters responded, while 59.6% of successful hunters reported. Hunter sample harvest estimates using different estimation methods were compared to known kills in two Vancouver Island subunits. During the 1975 season, 88 deer were shot in subunit 1-5-3 (Nanaimo River), while 140 were shot in subunit 1-5-7 (Northwest Bay). All estimated kills were considerably higher than the known harvest, with the marked success-phase mailing estimation method producing the lowest estimates: 170 deer (193%) for subunit 1-5-3 and 179 deer (127%) for subunit 1-5-7. Although the total estimated deer kill for Vancouver Island remained relatively constant from 1964 to 1974, the same data, when analysed by M.U. and subunit, showed decreasing harvests in some M.U.s and subunits which were balanced by increasing kills in others. The data system proposed in this thesis provides an opportunity for B.C. wildlife management to develop an effective management framework for B.C.'s valuable wildlife resources. However, to do so the proposed system, or one with similar capabilities, must be implemented and supported by the B.C. Fish and Wildlife Branch. / Land and Food Systems, Faculty of / Graduate
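The mark-recapture idea mentioned above can be illustrated with the classical Lincoln-Petersen estimator: treat field-checked hunters as the marked group, questionnaire respondents as the second capture, and hunters appearing in both as recaptures. The counts below are invented for illustration, and the thesis's own estimator may differ in detail.

    def lincoln_petersen(marked, caught, recaptured):
        # Classical Lincoln-Petersen population estimate: hunters checked in the field
        # are the "marked" group, questionnaire respondents the second "capture", and
        # hunters appearing in both the recaptures.
        return marked * caught / recaptured

    # Hypothetical numbers for one resident M.U.
    field_checked = 420        # hunters contacted through field checks
    respondents = 1900         # hunters returning the mail questionnaire
    in_both = 60               # field-checked hunters who also responded

    n_hunters = lincoln_petersen(field_checked, respondents, in_both)
    print(round(n_hunters))                        # estimated hunters in the M.U.
    print(round(respondents / n_hunters, 3))       # implied questionnaire sampling intensity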
945

Shannon’s information theory in hydrologic network design and estimation

Husain, Tahir January 1979
The hydrologic basin and its data collection network is treated as a communication system. The spatial and temporal characteristics of the hydrologic events throughout the basin are represented as a message source, and this message is transmitted by the network stations to a data base. A measure of the basin information transmitted by the hydrologic network is derived using Shannon's multivariate information. An optimum network station selection criterion, based on Shannon's methodology, is established and is shown to be independent of the estimation of the events at ungauged locations. Multivariate information transmission for the hydrologic network is initially computed using the discrete entropy concept. The computation of the multivariate entropy is then extended to the case of variables represented by continuous distributions. Bivariate and multivariate forms of the normal and lognormal distributions and the bivariate forms of the gamma, extreme value and exponential probability density functions are considered. Computational requirements are substantial when dealing with large numbers of grid points in the basin representation, and in the combinatorial search for optimum networks. Computational aids are developed which reduce the computational load to a practical level. The performance of optimal information transmission networks is compared with networks designed by existing methods. The ability of Shannon's theory to cope with the multivariate nature of the output from a network is shown to provide network designs with generally superior estimation performance. Although the optimal information transmission criterion avoids the necessity of specifying the estimators for events at ungauged locations, the criterion can also be applied to the determination of optimal estimators. The applicability of the information transmission criterion in determining optimal estimation parameters is demonstrated for simple and multiple linear regression and Kalman filter estimation. The information transmission criterion is also applied to design the least-cost network where a choice of instrument precision exists. / Applied Science, Faculty of / Civil Engineering, Department of / Graduate
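A small sketch of the discrete-entropy computation underlying the network criterion: quantise the records at each candidate station, estimate joint entropies from histogram counts, and score a candidate network by the information it transmits about a withheld location, I(network; target) = H(network) + H(target) - H(network, target). The synthetic records, bin count and station layout are assumptions made only for this example.

    import itertools
    import numpy as np

    def joint_entropy(*columns, bins=4):
        # Histogram-based joint entropy (bits) of one or more quantised records,
        # mirroring the discrete multivariate-entropy computation named in the abstract.
        digitized = np.stack(
            [np.digitize(c, np.histogram_bin_edges(c, bins)[1:-1]) for c in columns], axis=1)
        _, counts = np.unique(digitized, axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def transmitted_information(network_cols, target_col, bins=4):
        # I(network; target) = H(network) + H(target) - H(network, target)
        return (joint_entropy(*network_cols, bins=bins)
                + joint_entropy(target_col, bins=bins)
                - joint_entropy(*network_cols, target_col, bins=bins))

    # Hypothetical monthly records at four correlated grid points in a basin.
    rng = np.random.default_rng(2)
    base = rng.normal(size=240)
    stations = [base + rng.normal(scale=s, size=240) for s in (0.3, 0.5, 0.8, 1.0)]

    # Choose the two-station network transmitting the most information about point 3,
    # treated here as the ungauged location.
    best = max(itertools.combinations(range(3), 2),
               key=lambda idx: transmitted_information([stations[i] for i in idx], stations[3]))
    print("best two-station network:", best)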
946

Die evaluering van 'n aantal kriptologiese algoritmes

Van der Bank, Dirk Johannes 18 March 2014
M.Sc. (Computer Science) / The main themes of this thesis are the characteristics of natural language, cryptographic algorithms to encipher natural language, and possible figures of merit with which to compare different cryptographic algorithms. In this thesis the characteristics of natural language and the influence these have on cryptographic algorithms are investigated. The entropy function of Shannon is used extensively to evaluate the different models that can be constructed to simulate natural language. Natural language redundancy is investigated and quantified by the entropy function. The influence this redundancy has on the theoretic security of different algorithms is tabulated. Shannon's unicity distance is used as a measure of security for this purpose. The unicity distance is shown, already at this early stage, to be not a very accurate measure of the real (practical) security of cryptographic ciphers. The cryptographic algorithms discussed in this thesis are arbitrarily divided into three groups: classical algorithms, public key algorithms and computer algorithms. The classical algorithms include cryptographic techniques such as transposition and character substitution. Well known ciphers such as the Playfair and Hill encipherment schemes are also included as classical cryptographic techniques. A special section is devoted to the use and cryptanalysis of polyalphabetic ciphers. The public key ciphers are divided into three main groups: knapsack ciphers, RSA-type ciphers and discrete logarithm systems. Except for the discrete logarithm cipher, several examples of the other two groups are given. Examples of knapsack ciphers are the Merkle-Hellman knapsack, the Graham-Shamir knapsack and Shamir's random knapsack.
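Shannon's unicity distance mentioned above is simply the key entropy divided by the per-character redundancy of the plaintext language, U = H(K)/D. A short worked sketch, taking the commonly quoted D of about 3.2 bits per character for English (the thesis derives its own redundancy estimates):

    import math

    def unicity_distance(key_space, redundancy_bits_per_char=3.2):
        # Shannon's unicity distance U = H(K) / D: the ciphertext length beyond which a
        # cipher with key entropy H(K) is, in theory, uniquely breakable, given the
        # per-character redundancy D of the plaintext language.
        return math.log2(key_space) / redundancy_bits_per_char

    print(round(unicity_distance(26)))                    # Caesar shift: roughly 1-2 characters
    print(round(unicity_distance(math.factorial(26))))    # simple substitution: roughly 28 characters
    print(round(unicity_distance(2**56)))                 # 56-bit key: roughly 18 characters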
947

A cryptographically secure protocol for key exchange

Herdan, David Errol 11 September 2014
M.Sc. (Computer Science) / Since the emergence of electronic communication, scientists have strived to make these communication systems as secure as possible. Classical cryptographic methods provided secrecy, with the proviso that the courier delivering the keys could be trusted. This method of key distribution proved to be too inefficient and costly. A 'cryptographic renaissance' was brought about with the advent of public key cryptography, in which the message key consists of a pair of mathematically complementary keys instead of the symmetric keys of its forerunner. Classical cryptographic techniques were by no means obsolete, as the idea of using 'hybrid' systems proved to be very effective: the tedious public key techniques allow both parties to share a secret, and the more efficient symmetric algorithms actually encrypt the message. New technology brought new difficulties, however, and the problems of key management arose. Various protocols started emerging as solutions to the key distribution problem, each with its own advantages and disadvantages. The aim of this work is to critically review these protocols, analyse the shortfalls and attempt to design a protocol which will overcome these shortfalls. The class of protocols reviewed is the so-called 'strong authentication' protocols, whereby interaction between the message sender and recipient is required.
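The hybrid idea described above (a public key exchange to establish a shared secret, then a symmetric cipher for the bulk message) can be sketched with the textbook Diffie-Hellman toy parameters p = 23, g = 5. This only illustrates the shared-secret mechanism: real parameters are far larger, and the authentication added by the protocols reviewed in the thesis is omitted here.

    import hashlib
    import secrets

    # Toy Diffie-Hellman exchange over the classic textbook group p = 23, g = 5.
    p, g = 23, 5

    a = secrets.randbelow(p - 2) + 1          # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1          # Bob's private exponent
    A = pow(g, a, p)                          # Alice sends A to Bob
    B = pow(g, b, p)                          # Bob sends B to Alice

    k_alice = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
    k_bob = hashlib.sha256(str(pow(A, b, p)).encode()).digest()
    assert k_alice == k_bob                   # both sides now hold the same symmetric key

    # Hybrid step: the shared key drives an illustrative stream cipher for the message.
    message = b"meet at noon"
    keystream = hashlib.sha256(k_alice + b"stream").digest()[:len(message)]
    print(bytes(m ^ k for m, k in zip(message, keystream)).hex())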
948

Die funksie van die eksterne ouditeur in die veranderende ouditsituasie meegebring deur die elektronieseverwerking van handelsdata met spesiale verwysing na die indeling van interne beheerpunte

Pretorius, Jacobus Petrus Steyn 23 September 2014
M.Com. (Auditing) / Please refer to full text to view abstract
949

Information security management : processes and metrics

Von Solms, Rossouw 11 September 2014
PhD. (Informatics) / Organizations become daily more dependent on information. Information is captured, processed, stored and distributed by the information resources and services within the organization. These information resources and services should be secured to ensure a high level of availability, integrity and privacy of this information at all times. This process is referred to as Information Security Management. The main objective of this thesis is to identify all the processes that constitute Information Security Management and to define a metric through which the information security status of the organization can be measured and presented. It is necessary to identify an individual or a department which will be responsible for introducing and managing the information security controls to maintain a high level of security within the organization. The position and influence of this individual, called the Information Security Officer, and/or department within the organization, is described in chapter 2. The various processes and subprocesses constituting Information Security Management are identified and grouped in chapter 3. One of these processes, Measuring and Reporting, is currently very ill-defined, and few guidelines and/or tools exist to help the Information Security Officer to perform this task. For this reason the rest of the thesis is devoted to providing an effective means to enable the Information Security Officer to measure and report the information security status in an effective way...
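One simple shape such a measuring-and-reporting aid could take is a weighted roll-up of per-control compliance scores into a single status figure. The control areas, scores and weights below are invented purely to illustrate the idea; the thesis defines its own processes and metric.

    # Hypothetical compliance scores (0-100) and importance weights for a few control areas.
    controls = {
        "access control":       {"score": 80, "weight": 3},
        "backup and recovery":  {"score": 65, "weight": 2},
        "security awareness":   {"score": 40, "weight": 1},
        "network security":     {"score": 90, "weight": 3},
    }

    def security_status(controls):
        # Weighted average of control scores: one simple way to roll individual
        # measurements up into a single reportable information security status figure.
        total_weight = sum(c["weight"] for c in controls.values())
        return sum(c["score"] * c["weight"] for c in controls.values()) / total_weight

    print(f"overall information security status: {security_status(controls):.1f}%")
    weakest = min(controls, key=lambda name: controls[name]["score"])
    print(f"weakest area: {weakest} ({controls[weakest]['score']})")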
950

Mining continuous classes using evolutionary computing

Potgieter, Gavin 22 July 2005
Data mining is the term given to knowledge discovery paradigms that attempt to infer knowledge, in the form of rules, from structured data using machine learning algorithms. Specifically, data mining attempts to infer rules that are accurate, crisp, comprehensible and interesting. There are not many data mining algorithms for mining continuous classes. This thesis develops a new approach for mining continuous classes. The approach is based on a genetic program, which utilises an efficient genetic algorithm approach to evolve the non-linear regressions described by the leaf nodes of individuals in the genetic program's population. The approach also optimises the learning process by using an efficient, fast data clustering algorithm to reduce the training pattern search space. Experimental results from both algorithms are compared with results obtained from a neural network. The experimental results of the genetic program are also compared against a commercial data mining package (Cubist). These results indicate that the genetic algorithm technique is substantially faster than the neural network, and produces comparable accuracy. The genetic program produces substantially less complex rules than those of both the neural network and Cubist. / Dissertation (MSc)--University of Pretoria, 2006. / Computer Science / unrestricted
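The two optimisations named in the abstract, clustering to shrink the training pattern set and a genetic algorithm to evolve a leaf-node regression, can be sketched together on synthetic data. K-means and a plain mutation-and-selection GA are stand-ins chosen for brevity; the thesis's own clustering algorithm, genetic program structure and parameters are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic training data for a continuous class: y = 3*x1 - 2*x2 + noise.
    X = rng.uniform(-1, 1, size=(400, 2))
    y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 400)

    # Step 1 (stand-in for the thesis's fast clustering): reduce the training
    # patterns to a handful of cluster centroids with plain k-means.
    def kmeans(X, y, k=20, iters=20):
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                                for j in range(k)])
        y_centers = np.array([y[labels == j].mean() if np.any(labels == j) else 0.0
                              for j in range(k)])
        return centers, y_centers

    Xc, yc = kmeans(X, y)

    # Step 2 (stand-in for the GA evolving a leaf-node regression): evolve the
    # coefficients (w1, w2, bias) of a linear model by mutation and selection.
    def fitness(w):
        pred = Xc @ w[:2] + w[2]
        return -np.mean((pred - yc) ** 2)            # negative MSE: higher is better

    pop = rng.normal(0, 1, size=(30, 3))
    for _ in range(200):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-10:]]                     # keep the 10 fittest
        children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, 3))
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(w) for w in pop])]
    print(np.round(best, 2))                         # should end up close to [3, -2, 0]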
