251.
Structural Design of a Fast Convergence Algorithm: A Semi-Universal Routing Protocol. Kengni, Gabriel, 01 January 2000.
The function of a routing algorithm is to guide packets through the communication network to their correct destinations. The goal of this study was to design and analyze a loop-free, fast-convergence routing algorithm. The introduction presented the general framework on which the algorithm was based; this general model formed the basis of a proposal for a routing protocol suited to heterogeneous environments. The research focused on link-state algorithms.
The subject of this investigation was a new routing algorithm called the fast convergence algorithm (FEA). FEA not only reduced the number of cases in which a temporary routing loop could occur but was also designed to be loop-free at every instant. The hierarchical characteristics of FEA allowed the new algorithm to handle aggregation of routing information, a technique that is mandatory for accommodating the growing number of Internet users. FEA was shown to converge in finite time after an arbitrary sequence of link-cost or topological changes and to outperform the loop-free routing algorithms previously proposed.
In FEA, each router maintained a subset of the topology corresponding to the links used by its neighbor routers in their preferred paths to known destinations. From that subset of topology information, routers derived their own preferred paths and communicated the corresponding link-state information to their neighbors. Simulations showed FEA to be much more efficient than the diffusing update algorithm and the shortest-path algorithm in terms of the speed, communication, and processing overhead required to converge to correct routing tables. FEA's correctness was verified for arbitrary types of routing when correct and deterministic algorithms were used to select preferred paths at each router. To increase the responsiveness of the routing protocol and to guarantee the quality of service required by users, FEA integrated routing and congestion-control mechanisms.
This feature ensured that packets admitted into the packet-switched network were delivered unless a resource failure occurred, providing a high level of performance for network flows. Update messages from a node were sent only to its neighbors. Each such message contained a distance vector of one or more entries; each entry specified the length of the selected path to a network destination, together with an indication of whether the entry constituted an update, a query, or a reply to a previous query.
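The update message just described lends itself to a compact illustration. The following is a minimal sketch under stated assumptions; the type names, fields, and routines are hypothetical and do not reflect the dissertation's actual message format.

```python
# Hypothetical sketch of a distance-vector update message: each entry
# carries a destination, the length of the selected path, and whether it
# is an update, a query, or a reply to a previous query.
from dataclasses import dataclass
from enum import Enum, auto

class EntryKind(Enum):
    UPDATE = auto()   # unsolicited change to a selected path
    QUERY = auto()    # ask neighbors about a destination
    REPLY = auto()    # answer to a previous query

@dataclass(frozen=True)
class VectorEntry:
    destination: str    # network destination identifier
    path_length: float  # length (cost) of the selected path
    kind: EntryKind

def send_to_neighbors(node, neighbors, entries):
    """Deliver the message only to direct neighbors, never network-wide."""
    message = {"from": node, "entries": list(entries)}
    return {n: message for n in neighbors}

msg = send_to_neighbors("router-A", ["router-B", "router-C"],
                        [VectorEntry("net-10", 3.0, EntryKind.UPDATE),
                         VectorEntry("net-42", float("inf"), EntryKind.QUERY)])
print(msg)
```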
252.
Active Database Rule Set Reduction by Knowledge Discovery. Kerdprasop, Kittisak, 01 January 1999.
The advent of active databases enhances the functionality of conventional passive databases. A large number of applications benefit from active database systems because of their powerful active rule languages and rule-processing algorithms. With the power of active rules, data manipulation operations can be executed automatically when certain events occur and certain conditions are satisfied. Active rules can also impose uniform, consistent constraints on the database, independent of the applications, that no application can violate.
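The event-condition-action behavior described above can be sketched in a few lines. This is an illustrative example only; the rule, table, and constraint are hypothetical and are not drawn from the dissertation.

```python
# Hypothetical event-condition-action (ECA) rule: a salary update request
# is the event; the new value shrinking the salary is the condition; the
# rejection is the action, so no application can violate the constraint.
db = {"salary": {"alice": 900}}

def on_update_salary(employee, new_salary):
    if new_salary < db["salary"][employee]:      # condition
        raise ValueError("active rule: salaries may not decrease")
    db["salary"][employee] = new_salary          # action

on_update_salary("alice", 1000)    # event fires, condition holds, accepted
print(db)                          # {'salary': {'alice': 1000}}
```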
The additional database functionality offered by active rules, however, comes at a price. Defining and maintaining a large set of active rules is not a straightforward task for database designers. Moreover, the termination property of an active rule set is difficult to detect because of the subtle interactions among active rules. This dissertation proposed a novel approach that applies machine learning techniques to discover a new, simplified set of active rules. The termination property of the discovered rule set is guaranteed via the stratification technique. The approach is proposed in the context of relational active databases; it is an attempt to assist database designers by providing a facility to analyze and refine active rules at design time.
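One standard way to reason about the termination property mentioned above is to check the rule triggering graph for cycles; stratification, as used in the dissertation, is a related but stricter discipline, so the sketch below is an illustrative stand-in rather than the dissertation's method.

```python
# Depth-first search for a cycle in a triggering graph that maps each
# rule to the rules its action may trigger; an acyclic graph implies the
# rule set terminates.
def has_cycle(triggers):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {rule: WHITE for rule in triggers}

    def visit(rule):
        color[rule] = GRAY
        for succ in triggers[rule]:
            if color[succ] == GRAY or (color[succ] == WHITE and visit(succ)):
                return True
        color[rule] = BLACK
        return False

    return any(color[rule] == WHITE and visit(rule) for rule in triggers)

print(has_cycle({"r1": ["r2"], "r2": []}))      # False: terminates
print(has_cycle({"r1": ["r2"], "r2": ["r1"]}))  # True: may loop forever
```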
The main algorithm of active rule discovery is called the ARD algorithm. Its usefulness was verified by running it on sample sets of active rules. The results, the corresponding new sets of active rules, were analyzed on the basis of the size and complexity of the discovered rule sets. Size was measured as the number of active rules; complexity was measured as the number of transition states, that is, the changes in database state that result from rule execution. The experimental results revealed that with the proposed approach, the numbers of active rules and transition states were reduced by 61.11% and 40%, respectively, on average.
253.
The Application of Inductive Logic Programming to Support Semantic Query Optimization. Kerdprasop, Nittaya, 01 January 1999.
Inductive logic programming (ILP) is a recently emerging subfield of machine learning that aims at overcoming the limitations of most attribute-value learning algorithms by adopting a more powerful language of first-order logic. Employing successful learning techniques of ILP to learn interesting characteristics among database relations is of particular interest to the knowledge discovery in databases research community.
However, most existing ILP systems are general-purpose learners, which means users must know how to tune several factors of an ILP learner to best suit the task at hand. One factor with great impact on the efficiency of ILP learning is the specification of the language bias, a restriction on the format (or syntax) of clauses allowed in the hypothesis space. If the language is too weak, the search space is very large and learning efficiency suffers. Conversely, if the language is too strong, the search space is so small that many interesting rules may be excluded from consideration.
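The trade-off between weak and strong bias can be made concrete by counting the clause bodies a bias admits. A minimal sketch, assuming hypothetical candidate literals and a bias that simply caps body length:

```python
# A body-length cap is one of the simplest language biases: the weaker
# the bias (longer bodies allowed), the larger the hypothesis space.
from itertools import combinations

literals = ["parent(X,Y)", "parent(Y,Z)", "male(X)", "female(X)", "older(X,Y)"]

def clause_space(max_body_len):
    """All clause bodies the bias admits, as combinations of literals."""
    return [body for n in range(1, max_body_len + 1)
            for body in combinations(literals, n)]

print(len(clause_space(2)))  # strong bias: 15 candidates, rules may be missed
print(len(clause_space(4)))  # weak bias: 30 candidates, slower search
```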
The purpose of this dissertation was to develop an algorithm that generates a potentially useful language bias, one appropriate for the task of inducing semantic constraints from database relations. These constraints are a major source of semantic knowledge for semantic query optimization in database query processing. The efficiency of the proposed algorithm was verified experimentally: the form of language bias specification output by the algorithm was tested on the ILP system CLAUDIEN and compared with a number of alternative language bias specifications.
The learning results were compared on the basis of the number of rules discovered, the quality of those rules, the total time spent learning them, and the size of the search space. The experimental results showed that the proposed algorithm is helpful for the induction of semantic rules.
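To see why induced constraints matter for semantic query optimization, consider a sketch in which a learned constraint lets the optimizer drop a redundant predicate. The constraint and query below are hypothetical examples, not results from the dissertation.

```python
# Semantic query optimization: a predicate that a learned constraint
# already guarantees can be removed from the query before execution.
def optimize(query_preds, constraints):
    implied = {conclusion for premise, conclusion in constraints
               if premise in query_preds}
    return [p for p in query_preds if p not in implied]

# Learned constraint: every manager earns more than 50000.
constraints = [("job = 'manager'", "salary > 50000")]
query = ["job = 'manager'", "salary > 50000", "dept = 'sales'"]
print(optimize(query, constraints))  # the salary test is dropped as redundant
```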
254.
The Effects of Computerized Tools in Teaching English Composition for Basic Writers. Kime, Harold A., 01 January 1992.
This experimental project studied the effects of computerized revision tools--word processing, grammar checkers, and spell checkers--on the writing of 32 college freshmen in remedial English composition. The students were divided into two groups. Each group received 13 weeks of instruction, meeting five days per week. On Mondays, Wednesdays, and Fridays, they met as a group to receive traditional composition and grammar instruction. On Tuesdays and Thursdays, they met for in-class revision and writing practice. The experimental group used computers to perform all revisions while the control group revised using paper and pen. Analysis of the scores from two final compositions graded holistically determined that the experimental group performed significantly better in surface aspects of writing and revising, including style and appropriateness, and grammar and punctuation. However, in subsurface areas of writing and revising, including organization and presentation, audience awareness, and style and appropriateness, the experimental group did not perform significantly better than the control group.
Analysis of the data from pre- and posttest scores on the written English Expression Placement Test revealed that the experimental group made no greater gains in formal rule acquisition. The use of computers during the revising process did not prove beneficial to rule acquisition; therefore, some doubt exists as to whether the new revising processes transfer to revising when computers are not available.
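The kind of pre/post group comparison reported above is commonly run as a two-sample t-test on gain scores. A minimal sketch with placeholder numbers, not the study's data:

```python
# Independent two-sample t-test on gain scores (posttest minus pretest);
# the gains below are fabricated placeholders for illustration only.
from scipy import stats

experimental_gains = [4, 6, 3, 5, 7, 2, 5, 4]
control_gains = [3, 5, 4, 6, 2, 5, 3, 4]

t, p = stats.ttest_ind(experimental_gains, control_gains)
print(f"t = {t:.3f}, p = {p:.3f}")  # p >= 0.05: no significant difference
```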
255.
Computer Applications at The Village Mailbox. King, Daniel E., 01 January 1993.
This dissertation covers the design, evaluation, and distribution of a survey questionnaire that collected the information needed to develop a "Small Business Owner's Guide to Computer Applications." Working closely with the manager/owner of The Village Mailbox in Portsmouth, Virginia, the author identified computer application areas and developed the questionnaire, which was then validated through pilot testing. The questionnaire was distributed in three mailings to ensure maximum participation. Responses were divided into two groups (strata), and non-franchised responses were compared with franchised responses. Using descriptive statistics, the responses were analyzed for response rate, Pearson r, significance, degrees of freedom, standard deviation, and z-test score and significance level, and the Kuder-Richardson KR-21 formula was applied to obtain a reliability score.
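Of the statistics listed, the Kuder-Richardson KR-21 reliability estimate has a compact closed form: KR21 = (k / (k - 1)) * (1 - M(k - M) / (k * s^2)) for a test of k items with total-score mean M and variance s^2. A minimal sketch with hypothetical scores, not the survey's data:

```python
# KR-21 reliability from the number of items and the mean and variance
# of total scores; the scores below are hypothetical placeholders.
from statistics import mean, pvariance

def kr21(k, total_scores):
    m, var = mean(total_scores), pvariance(total_scores)
    return (k / (k - 1)) * (1 - m * (k - m) / (k * var))

scores = [18, 22, 25, 17, 20, 23, 19, 24]  # totals out of k = 30 items
print(round(kr21(30, scores), 3))          # 0.166 for these placeholders
```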
The collected survey data were analyzed and conclusions drawn. These conclusions led to the development of a guide to assist owners of small businesses offering mailing services as they weigh decisions about computer applications. The guide was developed to address the problem statement of this study, to give small business owners a logical approach to computer applications, and to serve as a planning tool where computer applications relate to mailing-service businesses.
The survey and the data collected can be generalized to small businesses offering mailing services, which should find the study and its results a handy source of information about computer applications. Applying this information in any other environment would require research that includes information specific to that environment; the information collected here was meant for use only in the area of small businesses offering mailing services.
256.
A Model for In-house Development of Computer-Based Training. King, Diane N., 01 January 1999.
The goal of this dissertation was to develop a practical reference guide that trainers and courseware developers can use as an on-the-job performance support tool to develop effective multimedia, computer-based training (CBT) courseware. The foundation for the project was a model that corporate training departments can apply to develop in-house expertise in multimedia training production. The purpose was to develop tools for teaching in-house trainers how to design and develop multimedia-based training courseware following accepted principles and models of adult learning, instructional design, and multimedia CBT software design. The framework of the proposed model consisted of five phases: tools, standards, templates, staff development, and support. The model includes guidelines for selecting software and hardware tools to support the effort; developing and documenting guidelines and standards for CBT design; creating templates to facilitate CBT authoring; planning and implementing a staff development program that teaches trainers how to apply the principles of adult learning and the instructional design model to multimedia training and how to use pre-authoring and authoring software tools; and providing support to novice CBT designers.
257.
A Software Development Life-Cycle Model for Web-Based Application Development. King, Barbara M., 01 January 2004.
Software development life-cycle (SDLC) models have been held to play a critical role in improving software quality, by guiding tasks in the software development process, since they were formally introduced and embraced in the 1970s. Many organizations have attempted to deploy SDLC methodologies with the intent of improving the software development process from conception through implementation to delivery. Numerous established software development models exist, including the classic Waterfall life-cycle model, the Spiral model, Prototyping, Evolutionary models (e.g., Staged, Phased, and Timebox), object-oriented design (OOD) processes (e.g., the Rational Unified Process), and agile processes (e.g., eXtreme Programming [XP]). The design and development of web-based applications introduced new problems and requirements that did not exist when traditional SDLC models were put into practice. This research presents empirical software development practice data pertaining to web-based application development.
The goal of this project was to answer the question, "What is the general paradigm of an SDLC model for web-based application development?" The focus of the project was to derive an empirical SDLC model for web-based application development. Data on current practices were collected via a web-based application, which study participants used to describe the SDLC of their own web-based application development processes. The empirical model was derived from the data these participants provided about current professional practice.
The results of this research showed that although the web-based application development life cycle does parallel traditional SDLCs in some phases, there were enough differences that no existing model fits exactly. A modified version of the classic Waterfall model, incorporating some of the Spiral model's repetitiveness and adding optional phases, best met the situational requirements of web-based application development.
258.
Comparison of Social Presence in Voice-based and Text-based Asynchronous Computer Conferencing. King, Karen D., 01 January 2008.
Social presence in asynchronous computer conferencing has become an increasingly important factor in establishing high-quality online learning environments. The levels of social presence exhibited in asynchronous computer conferences influence students' perceptions of learning and satisfaction in a Web-based course. Evidence in the literature supports the use of text-based asynchronous computer conferences to enhance learning in online environments. Recently, faculty teaching online courses have begun to use voice-based asynchronous conferencing tools, with little research to support the appropriateness of the medium.
A quasi-experimental design framed this examination of levels of social presence, as measured by interaction patterns, in voice-based and text-based asynchronous computer conferences. Content transcripts of voice-based and text-based asynchronous computer conferences from one human physiology course at a state university in the southeastern United States were analyzed qualitatively. The analysis was based on the affective, communicative reinforcement, and cohesive interactions as defined by Rourke, Anderson, Garrison, and Archer, and a social density score was derived from each transcript. A multivariate analysis of variance was conducted to determine whether there were significant differences in levels of social presence between voice-based and text-based asynchronous computer conferences.
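A multivariate analysis of variance of this shape can be sketched with statsmodels, taking the three interaction categories as dependent variables and the conference medium as the factor. The data frame below holds hypothetical social-density scores, not the study's transcripts:

```python
# MANOVA with medium (voice vs. text) as the factor and the three
# interaction categories as dependent variables; values are hypothetical.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    "medium":    ["voice"] * 4 + ["text"] * 4,
    "affective": [2, 3, 2, 4, 6, 5, 7, 6],
    "reinforce": [3, 2, 4, 3, 6, 7, 5, 6],
    "cohesive":  [5, 6, 7, 6, 4, 3, 4, 3],
})
fit = MANOVA.from_formula("affective + reinforce + cohesive ~ medium", data=df)
print(fit.mv_test())  # Wilks' lambda and related tests for the medium effect
```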
Results showed significantly higher levels of affective and communicative reinforcement interactions in the text-based asynchronous computer conferences. The voice-based conferences contained higher levels of cohesive interaction patterns, although the difference was not statistically significant. Deploying voice-based technology as a pedagogical tool comes at considerable cost to higher education institutions, and these tools are often marketed on the effectiveness of the technology in a learning environment. According to this study, however, there is no apparent benefit to using voice-based rather than text-based tools to facilitate asynchronous computer conferences in a Web-based learning environment.
259.
A Method for the Application of Computer Analytic Tools to Clinical Research: Neural Networks Analysis of Liver Function Tests to Assist in the Differential Diagnosis of Liver Disease. Kirchner, John P., 01 January 1997.
Medical educators expend a great deal of effort teaching the use of computers in clinical practice, but they provide little training in the methods and utility of computer applications in clinical research. In this paper I present a method for the application of computer analytic tools to clinical research. The steps described lead the student through the standard clinical research process and demonstrate the decision making that must take place at each step of any clinical research program. The example is based upon a valid and unique research question from my particular field, medical hepatology. Liver disease represents a significant cause of morbidity and mortality throughout the world; to reduce its impact significantly, such disorders must be recognized with great accuracy early in their course. The diagnosis of liver disease is frequently difficult and expensive, even for specialists in this area.
Standard liver function tests have the potential to assist the physician as accurate, reliable, inexpensive, and easily accessible tools for the differential diagnosis of liver disease. However, the current applications of liver function tests for this purpose offer the clinician only limited reliability and validity. Clinical research efforts aimed at improving the precision of liver function tests, through techniques such as test panels, test ratios, multivariate statistical methods, and traditional expert systems, have all had limited success and acceptance. The research described in this paper resulted in the development of a probabilistic neural network program able to classify 109 sets of liver function tests into one of eighteen possible diagnostic categories with a precision of over 90%. The neural network developed through this research should serve as an efficient tool for the clinician in the management of patients with liver disease, and it should act as a stimulus for further research in the application of neural network tools to clinical medical research.
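The core of a probabilistic neural network is small enough to sketch: a Parzen-window density estimate per diagnostic class, with the class of highest estimated density winning. The features and labels below are random placeholders standing in for liver function test panels, and sigma is an assumed smoothing parameter, not a value from the dissertation:

```python
# Probabilistic neural network (Parzen-window classifier): sum a Gaussian
# kernel over each class's training panels and pick the densest class.
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=1.0):
    scores = {}
    for label in np.unique(y_train):
        pts = X_train[y_train == label]
        d2 = np.sum((pts - x) ** 2, axis=1)          # squared distances
        scores[label] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))      # 40 panels of 5 liver test values each
y = rng.integers(0, 3, size=40)   # 3 stand-in diagnostic categories
print(pnn_predict(X, y, X[0]))    # class with highest estimated density
```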
260.
An Exploratory Qualitative Study for the Design and Implementation of an Educational Mentoring Program for At-Risk Students. Kistler, John, 01 January 1990.
This exploratory qualitative study arose from the need to provide computer exposure to at-risk junior high students being tutored in a business-school partnership with a high-tech company, and to help both the tutors and students develop deeper personal relationships during the tutoring.
The study's objective was to determine whether the use of a mentoring model, or parts of such a model derived from studies in business, is an effective strategy for providing both computer and relationship skill-building experiences. A black male student and a Hispanic female student, members of an interschool club designed to provide academic enrichment in math and science, were paired with two white male computer engineers from Digital Equipment Corporation. These relationship teams met both informally and in formal mentoring sessions over a period of ten to twelve weeks, during which various strategies from the proposed mentoring model were investigated and analyzed. An organizational development model was used to identify the problems and build the proposed mentor model before the mentoring began.
Case study methodology was used to collect and analyze the data, most of which were generated from a series of observations and interviews conducted by the writer. The study examined the following mentoring-related areas: general requirements for establishing mentoring, prementoring educational intervention, mentoring functions, mentoring phases, gender and ethnic considerations in mentoring, mentoring and the at-risk student, mentoring and self-esteem, and mentoring's worth as a strategy. The male student-mentor team successfully established a close personal relationship and completed a computer-related project, and the student's self-esteem showed improvement on a standard inventory. The female student-mentor team was unsuccessful in both areas, though the student's self-esteem appeared unaffected. As a result of the study, a revised mentor model was developed; it is included in the study's appendices. The project confirmed that high-tech mentoring could be a useful strategy in education and is worthy of future study.