  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Complete Randomized Cutting Plane Algorithms for Propositional Satisfiability

Hansen, Stephen Lee 01 January 2000 (has links)
The propositional satisfiability problem (SAT) is a fundamental problem in computer science and combinatorial optimization. A considerable number of prior researchers have investigated SAT, and much is already known concerning limitations of known algorithms for SAT. In particular, some necessary conditions are known, such that any algorithm not meeting those conditions cannot be efficient. This paper reports research to develop and test a new algorithm that meets the currently known necessary conditions. In chapter three, we give a new characterization of the convex integer hull of SAT and two new algorithms for finding strong cutting planes. We also show the importance of choosing which vertex to cut, and present heuristics to find a vertex that allows a strong cutting plane. In chapter four, we describe an experiment to implement a SAT-solving algorithm using the new algorithms and heuristics, and to examine their effectiveness on a set of problems. In chapter five, we describe the implementation of the algorithms and present computational results. For an input SAT problem, the output of the implemented program provides either a witness to satisfiability or a complete cutting plane proof of unsatisfiability. The description, implementation, and testing of these algorithms yield both empirical data to characterize the performance of the new algorithms and additional insight to further advance the theory. We conclude from the computational study that cutting plane algorithms are efficient for the solution of a large class of SAT problems.
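The dissertation's own algorithms are not reproduced in the abstract, but the integer-programming view of SAT it builds on can be sketched in a few lines (a minimal illustration under standard assumptions, not the author's code): each clause becomes a linear inequality over 0/1 variables, where a positive literal x contributes x and a negated literal contributes 1 - x, and the clause requires the sum to be at least 1.

```python
# Minimal sketch: SAT as 0/1 integer programming. A clause is a list of
# nonzero ints: +i for x_i, -i for NOT x_i. The clause (l1 v l2 v ...)
# maps to the inequality sum(terms) >= 1, with NOT x contributing 1 - x.

def clause_to_inequality(clause):
    """Return (coeffs, rhs) meaning sum(coeffs[v] * x_v) >= rhs."""
    coeffs, rhs = {}, 1
    for lit in clause:
        var = abs(lit)
        if lit > 0:
            coeffs[var] = coeffs.get(var, 0) + 1
        else:
            coeffs[var] = coeffs.get(var, 0) - 1
            rhs -= 1          # the constant 1 from (1 - x) moves to the right side
    return coeffs, rhs

def satisfies(assignment, clauses):
    """Check a 0/1 assignment (dict var -> 0/1) against all inequalities."""
    for clause in clauses:
        coeffs, rhs = clause_to_inequality(clause)
        if sum(c * assignment[v] for v, c in coeffs.items()) < rhs:
            return False
    return True

# (x1 v ~x2) and (x2 v x3)
clauses = [[1, -2], [2, 3]]
print(satisfies({1: 1, 2: 1, 3: 0}, clauses))   # True
print(satisfies({1: 0, 2: 1, 3: 0}, clauses))   # False
```

A cutting plane algorithm works over the linear relaxation of exactly these inequalities, adding new valid inequalities ("cuts") until an integer solution or an infeasibility proof emerges.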
22

A Use-Case Model for a Knowledge Management System to Facilitate Disaster Relief Operations

Eudy, Eileen 01 January 2004 (has links)
There are numerous disaster relief agencies poised to respond to disasters; however, coordinating the activities of these diverse and dispersed entities and capitalizing on their knowledge assets can be a challenge. All of these agencies are dedicated to serving survivors of disasters, but they at times lack the coordination necessary to respond efficiently. The Virginia Voluntary Organizations Active in Disaster (VOAD) is an umbrella organization of existing agencies dedicated to working closely with other organizations to improve service and minimize duplication during disaster operations. To better cope with disasters, the Virginia VOAD needs to develop knowledge management strategies to coordinate its resources. The goal of this study was to design a use-case model of a web-based knowledge management system to support state and local level disaster recovery planning and operations in the aftermath of a disaster. The focus of this study was to support the disaster field office (DFO) operations. The use-case methodology outlined in the Rational Unified Process and supported by the Unified Modeling Language notation provided the means of systematically discovering and documenting system requirements. The resulting model provides a framework for a knowledge management system that has been adapted to the disaster recovery domain. Evaluation and validation of the model has shown this to be a viable concept. It is anticipated that this model could serve as the basis for developing a prototype knowledge management system that may also be adapted to similar state and local VOAD chapters around the country.
23

The Extraction of Classification Rules and Decision Trees from Independence Diagrams

Kerbs, Robert W. 01 January 2001 (has links)
Databases are growing exponentially in many application domains. Timely construction of models that represent identified patterns and regularities in the data facilitate the prediction of future events based upon past performance. Data mining can promote this process through various model building techniques. The goal is to create models that intuitively represent the data and perhaps aid in the discovery of new knowledge. Most data mining methods rely upon either fully-automated information-theoretic or statistical algorithms. Typically, these algorithms are non-interactive, hide the model derivation process from the user, require the assistance of a domain expert, are application-specific, and may not clearly translate detected relationships. This paper proposes a visual data mining algorithm, BLUE, as an alternative to present data mining techniques. BLUE visually supports the processes of classification and prediction by combining two visualization methods. The first consists of a modification to independence diagrams, called BIDS, allowing for the examination of pairs of categorical attributes in relational databases. The second uses decision trees to provide a global context from which a model can be constructed. Classification rules are extracted from the decision trees to assist in concept representation. BLUE uses the abilities of the human visual system to detect patterns and regularities in images. The algorithm employs a mechanism that permits the user to interactively backtrack to previously visited nodes to guide and explore the creation of the model. As a decision tree is induced, classification rules are simultaneously extracted. Experimental results show that BLUE produces models that are more comprehensible when compared with alternative methods. These experimental results lend support for future studies in visual data mining.
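BLUE itself is not reproduced in the abstract, but the rule-extraction step it describes (reading each root-to-leaf path of a decision tree as an IF-THEN classification rule) can be illustrated with a toy tree. The tree below is hypothetical, chosen only to show the traversal:

```python
# Each root-to-leaf path of a decision tree becomes one IF-THEN rule.
# Internal nodes are {'attr': name, 'branches': {value: subtree}};
# leaves are class-label strings.

def extract_rules(node, conditions=()):
    """Yield (conditions, label) pairs, one per root-to-leaf path."""
    if isinstance(node, str):                      # leaf: emit one rule
        yield list(conditions), node
        return
    for value, subtree in node['branches'].items():
        yield from extract_rules(subtree, conditions + ((node['attr'], value),))

tree = {'attr': 'outlook',
        'branches': {'sunny': {'attr': 'humidity',
                               'branches': {'high': 'no', 'normal': 'yes'}},
                     'overcast': 'yes'}}

for conds, label in extract_rules(tree):
    body = ' AND '.join(f'{a} = {v}' for a, v in conds)
    print(f'IF {body} THEN class = {label}')
```

In an interactive system like the one described, this extraction would run incrementally as the user grows or backtracks the tree, so the rule set always mirrors the current model.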
24

A Self-Adaptive Evolutionary Negative Selection Approach for Anomaly Detection

Gonzalez, Luis J. 01 January 2005 (has links)
Forrest et al. (1994; 1997) proposed a negative selection algorithm, also termed the exhaustive detector generating algorithm, for various anomaly detection problems. The negative selection algorithm was inspired by the thymic negative selection process that is intrinsic to natural immune systems, consisting of screening and deleting self-reactive T-cells, i.e., those T-cells that recognize self-cells. The negative selection algorithm takes considerable time (exponential in the size of the self-data) and produces redundant detectors. This time/size limitation motivated the development of different approaches to generate the set of candidate detectors. A reasonable way to find suitable parameter settings is to let an evolutionary algorithm determine the settings itself by using self-adaptive techniques. The objective of the research presented in this dissertation was to analyze, explain, and demonstrate that a novel evolutionary negative selection algorithm for anomaly detection (in non-stationary environments) can generate competent non-redundant detectors with better computational time performance than the NSMutation algorithm when the mutation step size of the detectors is self-adapted.
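The exhaustive (generate-and-test) baseline the abstract refers to is simple to sketch: candidate detectors are drawn at random and discarded if they match any self string. The sketch below uses the common r-contiguous-bits matching rule; string length and the parameter r are illustrative choices, not the dissertation's settings.

```python
import random

# Sketch of exhaustive negative selection with r-contiguous-bits matching.

def matches(a, b, r):
    """True if binary strings a and b agree on >= r contiguous positions."""
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n_detectors, length, r, rng):
    """Keep random candidates that match no self string (negative selection)."""
    detectors = set()
    while len(detectors) < n_detectors:
        cand = ''.join(rng.choice('01') for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):
            detectors.add(cand)
    return detectors

rng = random.Random(42)
self_set = {'110010', '011011'}
detectors = generate_detectors(self_set, 5, 6, 4, rng)

# An element is flagged anomalous if any detector matches it; by
# construction, detectors never match the self set.
anomalous = lambda x: any(matches(x, d, 4) for d in detectors)
print(all(not anomalous(s) for s in self_set))   # True
```

The cost the abstract criticizes is visible here: as the self set grows, ever more random candidates are rejected, which is what the evolutionary, self-adaptive variant is designed to avoid.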
25

A Study of the Relationships Between End-User Information Systems Problems and Helpdesk Critical Success Factors in Higher Education

Parrott, Richard Dale 01 January 2005 (has links)
In the last fifteen years, information technology (IT) customer support has increased in importance within higher education. The pervasiveness of computers and technology on the campus has allowed students, staff, and faculty to perform a multitude of tasks by controlling their own environments and setting their own priorities. Qualified professional system and user support services, however, have lagged behind demand. The problem investigated in this study was end-users' satisfaction with the higher education helpdesk and how end-users' satisfaction levels affected a helpdesk manager's performance on critical success factors and goals. In this study, the first goal was to identify the critical success factors (CSF) for the higher education academic helpdesk manager. The second goal was to assess the relationships of CSFs to problems associated with end-user satisfaction levels within a higher education environment. The population of interest included all accredited higher education institutions (as of the publishing date of the 2003 Higher Education Directory). The researcher used a random sample of 1,765 from the list of 4,282 profiles in the 2003 Higher Education Directory (http://www.hepinc.com). The survey instrument was an online questionnaire implemented as an HTML form. Eight research questions and eight hypotheses were developed.
Specifically, the researcher conducted the following statistical analyses: (a) descriptive statistics for the variables of interest, (b) a Chi-square test between the respondents and non-respondents to check for non-response bias, (c) a factor analysis to identify CSF constructs and helpdesk problems, (d) multiple regression to determine the relationship between CSFs and helpdesk problems, using the helpdesk problem constructs identified from the factor analysis as dependent variables and the helpdesk CSFs as independent variables, (e) MANOVA to determine the relationship between CSFs and the stage of growth of the helpdesk, and (f) seven ratios to serve as CSF performance indicators.
26

A Study to Determine the Predictors of Success in a Distance Doctoral Program

Wilkinson, Carole E. 01 January 2002 (has links)
The doctoral major of Computing Technology in Education at Nova Southeastern University (NSU) has the lowest graduation rate of the doctoral majors in the Graduate School of Computer and Information Sciences (GSCIS) as well as the entire university (Atherton, 1998). The goal of this study was to determine if there were factors that influence the persistence and graduation rates of this major. The study used student learning style and locus of control to determine if these factors could be used to answer two questions. First, are there differences between students who complete the coursework and those who do not finish? Second, are there differences between students who, after completing the coursework, complete the dissertation and those who do not? The participants of the study were doctoral students enrolled in the Computing Technology in Education major at the GSCIS. Student data was collected via a data collection form, the Kolb Learning Style Inventory, and the Nowicki-Strickland Locus of Control instrument. After being tracked longitudinally, student attrition and graduation rate were analyzed using the aforementioned data. The study showed promising results by demonstrating the usefulness of learning style and locus of control as factors to consider when addressing attrition from a distance education program. Specifically, learning style proved to be an efficient predictor of coursework completion, while an individual's locus of control was predictive of their graduation from the program.
27

Wireless Networking Technologies in the Contexts of Energy Efficiency and Internet Interoperability

Manchala, Dhruv 01 January 2016 (has links)
At the heart of the Internet of Things revolution is wireless networking. This paper examines wireless networking technologies for the Internet of Things from the two most important, yet conflicting, perspectives: energy efficiency and interoperability with the "traditional" Internet stack (WiFi/TCP/IP). We begin by establishing the need for both energy-efficient and Internet-compatible wireless technologies for the Internet of Things. We then analyze sources of energy consumption at the various layers of wireless networking technologies, and examine alternative energy-efficient protocols and the most popular IoT wireless technologies, Bluetooth Low Energy (BLE) and IEEE 802.15.4.
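A back-of-the-envelope duty-cycle model is the usual starting point when comparing low-power wireless protocols like those the abstract names: average current is the time-weighted mean of active and sleep currents. All figures below are hypothetical, chosen only to show the shape of the calculation.

```python
# Duty-cycle energy model: a radio that wakes for active_ms out of every
# period_ms draws active_ma while awake and sleep_ma while asleep.

def average_current_ma(active_ma, sleep_ma, active_ms, period_ms):
    """Time-weighted average current in milliamps."""
    duty = active_ms / period_ms
    return active_ma * duty + sleep_ma * (1 - duty)

# e.g. 15 mA radio awake 3 ms per 1000 ms period, 0.005 mA asleep
avg = average_current_ma(15.0, 0.005, 3.0, 1000.0)
print(round(avg, 4))   # 0.05 (mA average draw)
```

The model makes the trade-off in the abstract concrete: lengthening the period cuts average current but raises latency, which is exactly the tension between IoT radios like BLE/802.15.4 and always-on WiFi/TCP/IP connectivity.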
28

Development of an Expert System Prototype for Dyslexia and Attention Deficit Hyperactivity Disorder

Gil, Tony, Jr. 01 January 1996 (has links)
The primary purpose of this study was to identify the relevant information and to develop a prototype expert system to assist in the assessment of specific psychological disorders that may be directly or indirectly the cause of some learning disabilities. This tool will assist teachers as well as parents in recognizing the symptoms of a possible disorder, in identifying the tests that are available to confirm the existence of the disorder, and in minimizing the effects of the disorder to the greatest extent possible through early diagnosis and intervention. In order to develop a prototype expert system, this study was limited to Dyslexia and Attention Deficit Hyperactivity Disorder (ADHD). The prototype was developed with the assistance of psychiatrists and psychologists. The prototype was field-tested, achieving ninety-five percent correct diagnoses.
29

Examining Statistical Process Control as a Method of Temporal Data Mining

Brown, Herbert Earle Mathias 01 January 2007 (has links)
A methodology based on statistical process control was examined for the data mining problem of anomaly detection. This methodology does not suffer from many of the limitations of other data mining techniques often proposed for anomaly detection. This research demonstrated that statistical process control has sound theoretical backing, has a linear time complexity, is accurate in classifying anomalies, and is able to identify novel information. Furthermore, it was shown that the contemporaneous use of numerous univariate statistical process control charts can address the prevalent problem of class imbalance. This research found that techniques based on statistical process control are an effective method of temporal anomaly detection. Statistical process control based algorithms were developed and tested on a food industry complaint database of both frequent and infrequent Poisson distributed events. In applications of statistical process control, Shewhart charts are often used in conjunction with either exponentially weighted moving average or cumulative sum charts for detecting both large and small shifts in a process quickly. This research compared exponentially weighted moving average charts and cumulative sum charts, each used with Shewhart charts, for the purpose of data mining. For the trial database considered, the cumulative sum based method was preferred, finding significantly more events of interest. Considerations for the design, setup, and maintenance of statistical process control based anomaly detection algorithms were also examined. The relationships between a confusion matrix, often used for binary classification, and typical measures of statistical process control performance, e.g., average run length and average time to signal, were derived. In addition, a general process of adapting statistical process control charts for a data mining task was developed.
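One of the techniques the abstract names, a one-sided upper CUSUM chart on count data, is easy to sketch. The reference value k and decision interval h below are illustrative choices, not the dissertation's tuned parameters, and the count series is fabricated for the example.

```python
# One-sided upper CUSUM chart for detecting an upward shift in event rate.
# S_i = max(0, S_{i-1} + x_i - k); crossing the decision interval h signals
# a sustained shift, after which the statistic is reset (common practice).

def cusum_upper(counts, k, h):
    """Return indices where the upper CUSUM statistic exceeds h."""
    s, signals = 0.0, []
    for i, x in enumerate(counts):
        s = max(0.0, s + x - k)
        if s > h:
            signals.append(i)
            s = 0.0
    return signals

# stable Poisson-like complaint counts, then a shift upward
counts = [2, 1, 3, 2, 2, 1, 2, 6, 7, 5, 6]
print(cusum_upper(counts, k=3.0, h=4.0))   # [8, 10]
```

Because the statistic accumulates small excesses over k, a CUSUM chart catches modest but persistent rate increases that a Shewhart chart (which looks at one observation at a time) would miss, which is why the two are paired in practice.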
30

Knowledge Management: Evaluating Strategies and Processes Used in Higher Education

McCarthy, Anita F. 01 January 2006 (has links)
The promise of Knowledge Management (KM), coupled with ever-growing academic and intellectual resources, has led Higher Education Institutions (HEI) to explore strategies aimed at increasing knowledge-based activities with common organizational goals. The goal of this dissertation was to ascertain whether the KM process used in business and industry is applicable in the field of higher education. This study investigated the integration of KM initiatives into the organizational culture of HEIs by utilizing a case study method. The qualitative methods used in this study are designed to gain a deeper understanding of KM processes within HEIs. The study began with the case study selection. The approach used the related research fields of knowledge management, learning organizations, knowledge managing systems, teaching and learning theories, and balanced scorecard theory to explore the case. Prior to the examination of the case, a modified Delphi study was conducted to develop questions applicable to KM in the higher education environment. Through triangulation, three types of evidence (questionnaires, interviews, and document analysis) were used to analyze and report the results of using KM at an HEI. The results demonstrated several important findings: first, teaching and learning can be enhanced by using KM; that is, what the institution knows is easily shared among all members when KM is used. Second, respondents reported that the KM development was of significant help for knowledge workers at an HEI, especially in the area of research. While the results revealed strong support for KM usage at JSU, there was also a recognition of the weakness of specific KM performance results in some aspects of the KM program, especially in the areas that required knowledge sharing among different departments. Lastly, recommendations for further research are offered in order to help identify successful KM initiatives in HEIs.
