691

Towards a happy ending for girls and computing?

Gansmo, Helen Jøsok January 2004 (has links)
Once upon a time in the promised land of the Information Society there was a widespread concern for the dreaded Digital Divide. In the wide range of modern stories and "fairytales" in our technoromantic era, access to and knowledge about computers always seem to hold the key to a prosperous future, while deprivation of such access will doom you to the far side of the Digital Divide. ICT (information and communication technology) thus seems to be the driving force of the 21st century as electricity was for the 20th. As a result, the policymakers of Norway, even if, or just because, they rule a small country, are afraid of falling behind on the golden route to the future, and thus aspire to be on the right side of the digital divide. This accords with general concerns in other countries and the EU about lagging behind, since they have observed that Japan and the USA have been leading the way en route to the Information Society (Selwyn 2002, Servaes and Heinderyckx 2002). This is also a double drama because, as in the traditional fairytales, princesses are in special danger or have wandered off and must be rescued by the heroes: …knowledge in technology must be shared by all groups in order to prevent new differences from developing in the presuppositions for participation. Dissemination must thus proceed so that it does not consolidate traditional gender divisions where girls are raised to believe that "women do not understand" natural science and technology. Girls and computing has been a topic in Norwegian public discourse since "once upon a time" around 25 years ago, and through this collection of articles I investigate various stories about the girls-and-computing problem at different sites and look into how these stories relate to each other.
692

Identification of abnormal ST segments in electrocardiograms using fast Fourier transform analysis

McCutchan, Larry J. January 1975 (has links)
Electrocardiogram (EKG) signals were digitized and the data analyzed with a fast Fourier transform computer program. The signals were amplified with a differential-input EKG amplifier and converted to a frequency with a model 8038 function generator. The output frequency response was linear from 150 kHz to 300 kHz for an input voltage range of four volts. The frequency was recorded as a function of time with a Nuclear Data 2200 multichannel analyzer operated in the multiscale mode utilizing a dwell time of four msec per channel. Digitized EKG data for 17 subjects were obtained in this manner. Previously digitized data for 29 patients were also obtained from the Public Health Service. Discrete Fourier transform analysis was performed on the data and the power spectrum was investigated for diagnostic use. The presence of ST depression in the EKG trace was found to be accompanied by a significantly larger harmonic amplitude coefficient at n = 2 and significantly lower harmonic amplitude coefficients for n = 13 through 20 than for normal EKGs. Diagnostic criteria were developed based on these power spectrum coefficients for the identification of EKG traces with abnormal ST segments.
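The screening idea can be sketched in a few lines of Python: compute the harmonic amplitude coefficients of a digitized trace with an FFT and compare the n = 2 and n = 13 through 20 coefficients against thresholds. The function name, the placeholder thresholds, and the synthetic trace below are illustrative assumptions, not the thesis's actual diagnostic criteria.

```python
import numpy as np

def st_segment_flag(ekg, a2_thresh=0.3, band_thresh=0.1):
    """Flag a digitized EKG trace whose power spectrum matches the reported
    pattern for ST depression: a large harmonic amplitude coefficient at
    n = 2 and small coefficients for n = 13 through 20.
    Thresholds here are placeholders, not the thesis's derived criteria."""
    x = np.asarray(ekg, dtype=float)
    x = x - x.mean()                          # remove the DC component
    coeffs = np.abs(np.fft.rfft(x)) / len(x)  # harmonic amplitude coefficients
    a2 = coeffs[2]                            # coefficient at n = 2
    band = coeffs[13:21]                      # coefficients at n = 13..20
    return a2 > a2_thresh and band.mean() < band_thresh

# Synthetic one-second trace sampled at 4 ms per point, purely for illustration.
t = np.arange(0, 1.0, 0.004)
trace = np.sin(2 * np.pi * 2 * t) + 0.05 * np.sin(2 * np.pi * 15 * t)
print(st_segment_flag(trace))
```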
693

Process control using an OPTOMUX control board

Sabri, Dina O. January 1987 (has links)
In this thesis, process control concepts were used to develop software that could be adapted to a real-world situation. The software was used to control a simple temperature-regulating experiment, which demonstrated the use of OPTOMUX analog and digital input/output devices in controlling a process. The goal of the experiment was to use the input/output devices to control the temperature of a box within specified tolerances for a designated period of time. To accomplish optimal use of the equipment and optimal control, a mathematical model was derived to predict the behavior of the process under control. The pattern observed while the temperature was increasing toward room temperature closely resembled an exponential function; for temperatures above room temperature the curve approximated a square root function. The pattern followed when decreasing the temperature was exponential throughout. The time required to collect all of the significant data was two hours in the case of increasing temperature and one hour in the case of decreasing temperature; beyond these time limits the temperature remained essentially constant. The maximum temperature that could be reached was six degrees above room temperature and the minimum two degrees below room temperature.
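As a rough illustration of the control task, here is a minimal on/off control loop in Python. The FakeBox class is a toy first-order thermal stand-in for the OPTOMUX-instrumented box so the sketch runs without hardware; the class and function names are hypothetical, and neither the thesis's mathematical model nor its actual I/O calls are reproduced here.

```python
class FakeBox:
    """Toy stand-in for the instrumented box: a first-order thermal model
    whose heater drives it a few degrees above room temperature and whose
    cooling pulls it about two degrees below (cf. the limits reported above)."""
    def __init__(self, room_temp=22.0):
        self.temp = room_temp
        self.room = room_temp
        self.heater_on = False

    def read_temperature(self):
        drive = 6.0 if self.heater_on else -2.0
        self.temp += 0.05 * (self.room + drive - self.temp)
        return self.temp

    def set_heater(self, on):
        self.heater_on = on

def hold_temperature(box, setpoint, tolerance, steps):
    """Simple on/off control: hold the box within setpoint +/- tolerance."""
    for _ in range(steps):
        t = box.read_temperature()
        if t < setpoint - tolerance:
            box.set_heater(True)       # too cold: heater on
        elif t > setpoint + tolerance:
            box.set_heater(False)      # too warm: heater off
    return box.read_temperature()

print(hold_temperature(FakeBox(), setpoint=26.0, tolerance=0.5, steps=500))
```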
694

A computerized working environment for retail pharmacists

Van Ostrand, Rita A. January 1987 (has links)
The purpose of this study was to investigate how well the computer hardware/software industry was meeting the needs of the retail pharmacist. The needs were determined by a survey of 1000 Indiana pharmacists. A reply rate of 22% revealed that the most important problems pharmacists faced with their computer systems were slow data access, lengthy backup times, no drug-interaction checking, and no multitasking. Hardware and software means of addressing these problems were studied, and the currently available systems were evaluated in terms of them. It was found that while most systems adequately addressed some of these problems, no system addressed all of them. Some of the systems offered multitasking but were much too expensive for the small pharmacy. A system can be designed that meets all of these needs, without neglecting the basic needs of pharmacists, at a very reasonable cost.
695

Data mining and classical statistics

Luo, Man January 2004 (has links)
This study introduces an overview of data mining. It suggests that methods derived from classical statistics are an integral part of data mining. However, there are substantial differences between the two areas. Classical statistical models and the non-statistical models used in data mining, such as regression trees and artificial neural networks, are presented to emphasize their distinct approaches to extracting information from data. In summary, this research provides some background on data mining and the role classical statistics plays in it.
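To make the contrast concrete, the short Python sketch below fits a classical linear regression and a regression tree to the same synthetic, non-linear data. The data, model settings, and scikit-learn usage are illustrative assumptions and do not come from the thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Synthetic data with a non-linear relationship (illustration only).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(200, 1))
y = np.sin(x).ravel() + rng.normal(scale=0.2, size=200)

# Classical statistical model: ordinary least-squares linear regression.
linear = LinearRegression().fit(x, y)

# Data-mining model: a regression tree, which partitions the input space
# instead of assuming a global functional form.
tree = DecisionTreeRegressor(max_depth=4).fit(x, y)

print("linear R^2:", round(linear.score(x, y), 3))
print("tree   R^2:", round(tree.score(x, y), 3))
```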
696

The status of data-processing services in the public schools of Indiana with supporting case studies revealing patterns of organization of existing regional data-processing centers in adjoining states

Wagner, Ivan D. January 1970 (has links)
The purpose of the study was to determine the status of electronic data processing services and/or functions in the public schools of Indiana. To provide additional insight into the organizational structure of a data processing center, four existing regional data processing centers serving groups of school districts in adjoining states through a centrally located computer facility were examined. The procedures used in collecting and analyzing the data for the study were as follows: (1) appropriate data gathering instruments were designed, (2) a questionnaire accompanied by a cover letter explaining the purpose of the study was mailed to each superintendent of all school corporations in Indiana, (3) a personal interview was conducted with the administrative personnel directly responsible for the operation of each of the four selected regional data processing centers, (4) data obtained from the questionnaire and the personal interviews were compiled, tabulated, analyzed, and presented, and (5) the findings, conclusions, and recommendations for further study were presented. The superintendents were asked to respond to the question, "Do you presently utilize data processing (computer and punch card) equipment in any part of the operation and/or administration of your school district?" Those school administrators responding "YES" were then asked to indicate the services and functions accomplished in their school corporation, the method of obtaining data processing services, and the level of their satisfaction with those services. All of the superintendents were asked to indicate their future plans for the use of data processing services in their local school corporation(s). The systematic analysis of the procedures necessary for implementing educational data processing services for a group of school districts in adjoining states included the following elements: (1) background information, (2) objectives, (3) historical development, (4) equipment, (5) personnel, (6) services and/or functions, (7) orientation procedures, (8) advantages of a central computer facility, (9) disadvantages of a central computer facility, and (10) future plans. The findings indicated the following major general conclusions to be appropriate. There has been only limited development toward the organization of educational data processing systems throughout the state of Indiana. This situation lends itself to collective exploration by interested school administrators toward cooperative arrangements for obtaining data processing services, regardless of the size of the school district. Qualified and competent personnel must be responsible for the operation of the data processing center. The director of the data processing center should be thoroughly familiar with all facets of school administration so that he may use electronic data processing services to facilitate the instructional program. A central computer facility organized to provide electronic data processing services to a group of school districts is capable of supporting more sophisticated computer equipment at a reduction in cost, provided the equipment operates at a near-capacity work load. It is possible for a group of school districts to benefit from the developmental experience of existing data processing centers.
The amount of time once considered necessary to progress from feasibility of operation to full-scale implementation has been greatly reduced by administrators' capitalizing on the systematic developments of existing regional data processing centers. School districts involved in data processing can benefit from the efforts of an administrative officer whose function is to systematically analyze administrative operations. This individual is referred to as a systems analyst or operations analyst. Teleprocessing has great potential for providing electronic data processing services to school districts regardless of size. Teleprocessing provides school administrators with direct access to sophisticated electronic data processing equipment and computer programs which have been successfully developed and implemented in other school districts. The Indiana Department of Education should assume the leadership in establishing regional service centers throughout Indiana to provide comprehensive educational data processing services to local school districts. Colleges and universities should serve as a catalyst of ideas for the intelligent use of electronic data processing services to perform administrative and instructional functions.
697

Mining frequent itemsets from uncertain data: extensions to constrained mining and stream mining

Hao, Boyu 19 July 2010 (has links)
Most studies on frequent itemset mining focus on mining precise data. However, there are situations in which the data are uncertain. This leads to the mining of uncertain data. There are also situations in which users are only interested in frequent itemsets that satisfy user-specified aggregate constraints. This leads to constrained mining of uncertain data. Moreover, floods of uncertain data can be produced in many other situations. This leads to stream mining of uncertain data. In this M.Sc. thesis, we propose algorithms to deal with all these situations. We first design a tree-based mining algorithm to find all frequent itemsets from databases of uncertain data. We then extend it to mine databases of uncertain data for only those frequent itemsets that satisfy user-specified aggregate constraints and to mine streams of uncertain data for all frequent itemsets. Experimental results show the effectiveness of all these algorithms.
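The quantity underlying all three settings is the expected support of an itemset in an uncertain database, where each item carries an existential probability. The Python sketch below computes it by brute-force enumeration under the usual item-independence assumption; it does not reproduce the thesis's tree-based algorithm, constraint handling, or stream processing, and the sample database and minimum expected support are made up for illustration.

```python
from itertools import combinations
from collections import defaultdict

def expected_supports(transactions, minsup, max_size=2):
    """Expected support of itemsets in an uncertain database, where each
    transaction maps items to existential probabilities. Items are assumed
    independent, as is standard in expected-support-based mining."""
    exp_sup = defaultdict(float)
    for probs in transactions:
        items = sorted(probs)
        for k in range(1, max_size + 1):
            for itemset in combinations(items, k):
                p = 1.0
                for item in itemset:
                    p *= probs[item]       # P(itemset present in this transaction)
                exp_sup[itemset] += p
    return {s: v for s, v in exp_sup.items() if v >= minsup}

# Each transaction: item -> probability of actually being present.
db = [{"a": 0.9, "b": 0.6}, {"a": 0.8, "c": 0.4}, {"a": 0.5, "b": 0.7}]
print(expected_supports(db, minsup=1.0))
```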
698

Frequent pattern mining of uncertain data streams

Jiang, Fan January 2011 (has links)
When dealing with uncertain data, users may not be certain about the presence of an item in the database. For example, due to inherent instrumental imprecision or errors, data collected by sensors are usually uncertain. In various real-life applications, uncertain databases are not necessarily static; new data may arrive continuously and at a rapid rate. These uncertain data can arrive in batches, which form a data stream. To discover useful knowledge in the form of frequent patterns from streams of uncertain data, algorithms have been developed that use the sliding window model for processing and mining data streams. However, for some applications, the landmark window model and the time-fading model are more appropriate. In this M.Sc. thesis, I propose tree-based algorithms that use the landmark window model or the time-fading model to mine frequent patterns from streams of uncertain data. Experimental results show the effectiveness of these algorithms.
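A minimal sketch of the time-fading idea, in Python and only at the level of single items: each incoming batch decays the expected supports accumulated so far by a factor alpha before adding its own contribution, so older batches weigh less. The class name, the decay factor, and the toy batches are assumptions for illustration; the thesis's tree-based algorithms mine full patterns, not just items.

```python
class TimeFadingCounter:
    """Time-fading model over a stream of uncertain transaction batches."""
    def __init__(self, alpha=0.9):
        self.alpha = alpha        # fading factor applied per batch
        self.exp_sup = {}         # item -> decayed expected support

    def add_batch(self, batch):
        # Fade everything seen so far.
        for item in self.exp_sup:
            self.exp_sup[item] *= self.alpha
        # Add the expected support contributed by the new batch.
        for probs in batch:       # each transaction: item -> existential probability
            for item, p in probs.items():
                self.exp_sup[item] = self.exp_sup.get(item, 0.0) + p

    def frequent(self, minsup):
        return {i: s for i, s in self.exp_sup.items() if s >= minsup}

c = TimeFadingCounter(alpha=0.8)
c.add_batch([{"a": 0.9, "b": 0.5}])
c.add_batch([{"a": 0.7}, {"b": 0.6, "c": 0.3}])
print(c.frequent(minsup=1.0))
```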
699

Mining frequent patterns from uncertain data with MapReduce

Hayduk, Yaroslav 04 April 2012 (has links)
Frequent pattern mining from uncertain data allows data analysts to mine frequent patterns from probabilistic databases, within which each item is associated with an existential probability representing the likelihood of the presence of the item in the transaction. When compared with precise data, the solution space for mining uncertain data is often much larger due to the probabilistic nature of uncertain databases. Thus, uncertain data mining algorithms usually take substantially more time to execute. Recent studies show that the MapReduce programming model yields significant performance gains for data mining algorithms, which can be mapped to the map and reduce execution phases of MapReduce. An attractive feature of MapReduce is fault-tolerance, which permits detecting and restarting failed jobs on working machines. In this M.Sc. thesis, I explore the feasibility of applying MapReduce to frequent pattern mining of uncertain data. Specifically, I propose two algorithms for mining frequent patterns from uncertain data with MapReduce.
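To illustrate how expected-support computation maps onto the two MapReduce phases, here is a small in-memory Python simulation: the map step emits (item, existential probability) pairs per transaction and the reduce step sums them per item. It is a sketch of the programming model only, not Hadoop code and not the thesis's actual algorithms; the sample database and the minimum expected support of 1.0 are made-up values.

```python
from collections import defaultdict

def map_phase(transaction):
    """Map: emit (item, existential probability) pairs for one uncertain transaction."""
    return list(transaction.items())

def reduce_phase(pairs):
    """Reduce: sum the emitted probabilities per item to obtain expected supports."""
    totals = defaultdict(float)
    for item, p in pairs:
        totals[item] += p
    return dict(totals)

db = [{"a": 0.9, "b": 0.5}, {"a": 0.7, "c": 0.4}, {"b": 0.8}]
emitted = [pair for txn in db for pair in map_phase(txn)]
expected_support = reduce_phase(emitted)
frequent_singletons = {i: s for i, s in expected_support.items() if s >= 1.0}
print(frequent_singletons)
```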
700

Achievement, background and commitment: classifications of biographical data in personnel selection

Drakeley, Russell John January 1988 (has links)
No description available.
