441
A Model for Cultural Resistance in Business Process Re-engineering Failure. Beebe, Larry E., 01 January 1997.
The need for a new way of conducting organizational business has been identified as essential to remaining competitive. Increasingly, businesses and organizations have turned to redesigning or re-engineering operational business processes to improve performance and competitiveness. Business process re-engineering (BPR) has become a methodology that management uses when radical change is required in organizational practices. Despite the widespread implementation of BPR, most projects have failed. A major reason for re-engineering failure is cultural resistance.
The evidence about culture in re-engineering suggests that the majority of BPR projects are implemented by cross-functional, multi-disciplinary teams, so those teams were the focus of the research. A review of the literature failed to provide significant guidelines that management could use to address cultural resistance. Accordingly, it was necessary to examine social issues in order to determine what management could do to reduce cultural resistance in BPR teams.
The hypothesis was that cultural resistance in BPR implementations can be reduced and that a model can be developed to effectively guide management intervention in the implementation of BPR. Findings suggested that cultural resistance could be reduced if the correct combination of team characteristics is present: openness and candor, leadership that does not dominate, decisions by consensus, understood and accepted goals, assessment of progress and results, a comfortable atmosphere, common access to information, and a win-win approach to conflict. Results indicate that these characteristics can be measured and relationships established using the Myers-Briggs Type Indicator, the Belbin Leadership Model, and the Motivating Potential Score. The QFD Matrix was demonstrated to provide a sound approach for assessment and for establishing relationships. Committees and a Pilot Group provided feedback during the development of the model.
It seems clear that BPR methodology, with a credible plan for social re-engineering implementation, can play a significant role in gaining competitive advantage in the modern organization. BPR without consideration of social or cultural factors is likely to meet significant resistance. This resistance will produce disappointing re-engineering implementation results, wasting vital organizational resources.
442
The Effect of Computer-Based Accounting Practice Sets on the Achievement of Introductory College Accounting Students. Bernard, Bryce A., 01 January 2002.
The purpose of this study was to measure the effectiveness of using a computer-based practice set to teach college students enrolled in an introductory accounting course the processes, procedures, and records used in an accounting system. The study addressed the ongoing concern of accounting educators about the effectiveness of computer-based accounting practice sets by comparing test results for a group of students enrolled in a lower-division accounting principles course. The course was structured with a common lecture component and two accounting lab sections. Students in one lab section completed a manual accounting practice set, and students in the other lab section completed a computer-based accounting practice set. All students were pretested at the beginning of the semester and post-tested at the end of the semester. The results of this study indicate that the difference in treatment had no significant impact on post-test scores.
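The abstract does not name the statistical procedure used to compare the two lab sections; as a minimal sketch, an independent-samples t-test on post-test scores, with purely illustrative data, might look like this:

```python
# Hypothetical comparison of post-test scores between the manual and
# computer-based practice-set lab sections. The test choice and the data
# are assumptions; the abstract reports only "no significant impact".
from scipy import stats

manual_scores = [72, 68, 81, 75, 70, 77, 73, 69]       # illustrative
computer_scores = [74, 70, 79, 76, 72, 75, 71, 73]     # illustrative

t_stat, p_value = stats.ttest_ind(manual_scores, computer_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p > 0.05 would indicate no significant difference between treatments
```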
443
Knowledge Discovery by Attribute-Oriented Approach Under Directed Acyclic Concept Graph (DACG). Bi, Wenyi, 01 January 2001.
Knowledge discovery in databases (KDD) is an active and promising research area with potentially high payoffs in business and scientific applications. The great challenge of knowledge discovery in databases is to process large quantities of raw data automatically, to identify the most significant and meaningful patterns, and to present this knowledge in an appropriate form for decision making and other purposes. In previous research, attribute-oriented induction implemented the artificial intelligence "learning from examples" paradigm. This method integrates traditional database operations to extract rules from database systems. The key techniques in attribute-oriented induction are attribute generalization and undesirable-attribute removal. Attribute generalization is implemented by replacing a low-level concept with its corresponding high-level concept.
The core of this approach is a concept hierarchy: a linear tree schema, built on each individual and independent domain (attribute), that controls concept generalization.
Because such a linear structure confines concepts to each independent domain, this topology leads to a learning process without the capability of conditional concept generalization. It is therefore unable to extract the rich knowledge implied in the different directions of a non-linear concept schema.
Although some recent improvements have extended the basic attribute-oriented induction (BAOI) approach, they have shortcomings. For example, rule-based attribute-oriented induction has to invoke a backtracking algorithm to tackle the information-loss problem, whereas path-id generalization has to transform each data value in the database (at great cost) into its corresponding path id in order to perform generalization on the path-id relation instead.
To overcome the above limitations, we propose a non-linear concept schema, the Directed Acyclic Concept Graph (DACG), to extend the power of BAOI in order to discover knowledge across multiple domains conditionally. By utilizing graph theory, the DACG can be transformed into an equivalent linear concept tree, which is a linear concept schema. Additionally, we apply functional mappings, which map values from multiple domains to high-level concepts in their codomains, to implement concept generalization. Therefore, our approach overcomes the limitations of BAOI and enriches the spectrum of learned patterns.
Even though concept learning under a non-linear concept schema is substantially more complicated than under the linear concept tree of BAOI, this research shows that our approach is feasible and practical. In addition to presenting the theoretical discussion in this dissertation, our solution has been implemented both in Java (JDK 1.2) with Oracle 8i under Solaris on Ultra 450 machines and in PL/SQL in Oracle 8i under Windows 2000 to generalize rich knowledge from live production databases.
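To make the core operation concrete, the following is a minimal sketch of attribute-oriented generalization under a simple linear concept hierarchy, the case that the DACG extends; the hierarchy, tuples, and attribute names are hypothetical.

```python
# Attribute-oriented generalization: climb each value one level up its
# concept hierarchy, then merge duplicate tuples and track their support.
from collections import Counter

# Linear concept hierarchy for the "city" attribute (hypothetical)
city_parent = {"Miami": "Florida", "Tampa": "Florida",
               "Dallas": "Texas", "Austin": "Texas"}

tuples = [("Miami", "CS"), ("Tampa", "CS"), ("Dallas", "Math"), ("Austin", "CS")]

generalized = Counter((city_parent[city], major) for city, major in tuples)
for (region, major), support in generalized.items():
    print(f"({region}, {major}) support={support}")
# A DACG would instead allow a value to generalize along multiple paths,
# conditioned on other attributes, rather than to a single fixed parent.
```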
444
Web Information System (WIS): Information Delivery Through Web Browsers. Bianco, Joseph, 01 January 2000.
The Web Information System (WIS) is a new type of Web browser capable of retrieving and displaying the physical attributes (retrieval time, age, size) of a digital document. In addition, the WIS can display the status of Hypertext Markup Language (HTML) links using an interface that is easy to use and interpret. The WIS also has the ability to dynamically update HTML links, thereby informing the user regarding the status of the information.
The first generation of World Wide Web browsers allowed for the retrieval and rendering of HTML documents for reading and printing. These browsers also provided basic management of HTML links, which are used to point to often-used information. Unfortunately, HTML links are static in nature: other than serving as a locator for information, an HTML link provides no other useful data. Because of the elusive characteristics of electronic information, document availability, document size (page length), and the absolute age of the information can only be assessed after retrieval.
WIS addresses the shortcomings of the Web by using a different approach to delivering digital information within a Web browser. By attributing the physical parameters of printed documentation such as retrieval time, age, and size to digital information, the WIS makes using online information easier and more productive than the current method.
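The abstract does not specify the retrieval mechanism; one plausible sketch, assuming the attributes are gathered with an HTTP HEAD request (Content-Length for size, Last-Modified as a proxy for age, wall-clock time for retrieval time), follows.

```python
# Hypothetical WIS-style attribute probe for a single link.
import time
import urllib.request

def link_attributes(url: str) -> dict:
    start = time.monotonic()
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request, timeout=10) as response:
        elapsed = time.monotonic() - start
        size = response.headers.get("Content-Length")     # document size
        age = response.headers.get("Last-Modified")       # proxy for age
    return {"retrieval_time_s": round(elapsed, 3), "size_bytes": size,
            "last_modified": age}

print(link_attributes("https://example.com/"))
```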
445
A Model for Developing Interactive Instructional Multimedia Applications for Electronic Music Instructors. Biello, Antonio D., 01 January 2005.
This study investigated methods for designing a procedural model for the development of interactive multimedia applications for electronic music instruction. The model, structured as a procedural guide, was derived from methodologies synthesized from related research areas and documented as a reference for educators, instructional designers, and product developers building interactive multimedia applications.
While the model was designed primarily for junior college electronic music students, it has the potential for generalization to other related disciplines. A Formative Committee consisting of five experts in the areas of education, music education, cognitive psychology, and institutional research assisted in the development of a set of criteria for the model. Utilizing the Nominal Group Technique, the committee examined, evaluated, and scored the efficacy of each proposed criterion according to its relevance to the model. Criteria approved by the committee and the researcher were incorporated in the model design.
A Design Committee comprised of five experts in the areas of instructional design, media/interaction design, behavioral psychology, and electronic music evaluated and validated the criteria set established by the Formative Committee. The validation was realized through surveys and formative feedback on that criteria set.
Prototype instantiations of the process model were an integral part of the model development process. Prototypes derived from the model were used to test the efficacy of the model criteria. A Development Committee comprised of members of the Formative and Design committees examined and evaluated prototype instantiations.
Recommendations for improvements were implemented in the model design. A Pilot Study was conducted by the Development Committee to assist the product development process and to evaluate the efficacy of the model. As a result of the study, a number of suggestions proposed by the committee were implemented for further improvement of the model. A Summative Committee comprised of educational experts having significant experience in educational research examined the efficacy of the model criteria established and validated by the Formative and Design committees. The Summative Committee evaluated the model and made recommendations for improving the model.
Founded on a set of criteria, the electronic music model was successfully developed and evaluated by a team of professionals. The Development and Summative Committees were satisfied with the results of this study and the criteria developed for the model design were deemed to be complete and relevant to the model. The results of this study suggest that instruction based on this model will support the unique learning needs of students having diverse cultural, learning, and educational backgrounds.
446
Evolutionary Algorithm for Generation of Air Pressure and Lip Pressure Parameters for Automated Performance of Brass Instruments. Bilitski, James A., 01 January 2006.
The artificial mouth is a robotic device that simulates a human mouth. It consists of moveable lips and an adjustable air supply. The uses of an artificial mouth include research on physical modeling of the lips and automatic performance. Automatic performance of a musical instrument means playing the instrument without direct human interaction; typically, mechanics and robotics are used instead.
In this study, the use of a genetic algorithm to compute air pressure and lip pressure values so that the artificial mouth can correctly play five notes on a brass instrument is investigated. In order to properly play a brass instrument, a player must apply proper tension between the lips and proper airflow so that the lips vibrate at the proper frequency. A player changes notes on a brass instrument by depressing keys and changing lip pressure and air flow. This study investigated a machine learning approach to finding lip pressure and air pressure parameters so that an artificial mouth could play five notes of a scale on a trumpet. A fast search algorithm was needed because it takes about 4 seconds to measure the frequency produced by each combination of pressure parameters. This measurement is slow because of the slow-moving mechanics of the system and a delay produced while the notes are measured for pitch. Two different mouthpieces were used to investigate the ability to adapt to different mouthpieces. The algorithm started with a randomly generated population and evolved the lip pressure and air pressure parameters with an evolutionary algorithm using crossover and mutation designed for the knowledge scheme in this application. The efficiency of this algorithm was compared to an exhaustive search. Experimentation was performed using various combinations of genetic parameters, including population size, crossover rate, and mutation rate. The evolutionary search was shown to be about 10 times faster than the exhaustive search because the evolutionary algorithm searches only a very small portion of the search space. A recommendation for future research is to conduct further experimentation to determine better crossover and mutation rates.
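A minimal sketch of this kind of evolutionary search follows; the two-gene genome, parameter ranges, genetic-operator details, and the stand-in frequency function are all assumptions, since the real fitness evaluation measured pitch on hardware at roughly 4 seconds per trial.

```python
# Evolve (air_pressure, lip_pressure) toward a target pitch.
import random

TARGET_HZ = 440.0

def measured_frequency(air: float, lip: float) -> float:
    # Stand-in for the slow hardware measurement described in the abstract
    return 300.0 + 0.8 * air + 0.5 * lip

def fitness(genome):
    air, lip = genome
    return -abs(measured_frequency(air, lip) - TARGET_HZ)  # closer = fitter

population = [(random.uniform(0, 200), random.uniform(0, 200)) for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                              # truncation selection
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        child = (a[0], b[1])                               # simple gene-swap crossover
        if random.random() < 0.1:                          # mutation
            child = (child[0] + random.gauss(0, 5.0),
                     child[1] + random.gauss(0, 5.0))
        children.append(child)
    population = parents + children

print("best genome:", max(population, key=fitness))
```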
447
The Development of Reliable Metrics to Measure the Efficiency of Object-Oriented Dispatching Using Ada 95, a High-Level Language Implementing Hard-Deadline Real-Time Programming. Bingue, Eugene W. P., 01 January 2002.
The purpose of this study is to produce a metric that accurately captures the effects of real-time dispatching using object-oriented (OO) programming applied in the maintenance phase of the life cycle of hard real-time systems. The hypothesis presented is that object-oriented programming constructs can be applied in a manner that will have beneficial life-cycle maintenance effects while avoiding adverse timing side effects. This study will use complexity-measurement instruments to calculate cyclomatic complexity. This study will examine the dispatching time of each program and utilize utilities to calculate the number of machine cycles for each program component. Coding techniques will be presented for various program design dilemmas that examine the object-oriented dispatching features.
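As a rough illustration of one measure named above, the sketch below estimates McCabe's cyclomatic complexity, V(G) = decision points + 1, by counting branch keywords in an Ada source string; real metric instruments parse the syntax tree, so a keyword count like this is only an approximation.

```python
# Approximate cyclomatic complexity of Ada source by keyword counting.
import re

def cyclomatic_complexity(source: str) -> int:
    source = re.sub(r"\bend\s+if\b", "", source)   # block closers are not decisions
    decisions = re.findall(r"\b(if|elsif|while|when)\b", source)
    return len(decisions) + 1

ada_snippet = """
if Speed > Limit then
   Brake;
elsif Speed < Minimum then
   Accelerate;
end if;
"""
print(cyclomatic_complexity(ada_snippet))  # 3: two decision points + 1
```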
448
Author-Statement Citation Analysis Applied as a Recommender System to Support Non-Domain-Expert Academic Research. Blazek, Rick, 01 January 2007.
This study will investigate the use of citation indexing to provide expert recommendations to domain-novice researchers. Prioritizing the result set returned from an electronic academic library query is both an essential task and a significant start-up burden for a domain-novice researcher. Current literature reveals many attempts to provide recommender systems in support of research. However, these systems rely on some form of relevance feedback from the user, an expectation the domain-novice researcher is unable to satisfy. Additional research demonstrates that a network of expert recommendations is available in each collection of academic documents. A power-law distribution, Lotka's law, has been found to characterize the citation network in large collections of academic domain documents.
The issue under study is whether the network of recommendations found in a relatively small collection of academic documents reveals a citation density that conforms to the distribution pattern of large collections. This study will use a descriptive, comparative methodology to answer this question. The study will use Lotka's law to form a predicted density and distribution for comprehensive domain collections. Next, the study will calculate an actual concentration and distribution from a sample population. The sample population will be a result-set returned from a general query to an academic collection.
The two indexes and distributions will be statistically compared to ascertain whether the actual density is equivalent to the predicted. If the sample set does not conform to normative Lotkian density, it will demonstrate an unnatural bias and therefore not qualify as an appropriate set of recommendations for guiding domain novice research.
The null hypothesis is that the actual density will be statistically equal to the predicted index. If this expectation is met, the result will be a set of expert recommendations that is user-independent and provides domain-relevant expert prioritization. A recommender system based on such recommendations would significantly improve the early research tasks of a domain novice by overcoming the identified start-up problem. It would remove the burden of expertise required when a domain novice seeks to use the result set from a novice query effectively. This experiment will also test an alternative hypothesis by isolating smaller subsets of the sample and testing the citation density of each using a factorial orthogonal design. This experiment will attempt to determine the minimal population size valid for the predicted density index. It is anticipated that a sample size below the lower bound for distribution validity will be unambiguously identified by actual indexes significantly below that of the standard.
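For reference, Lotka's law predicts that the number of sources contributing (or cited) exactly n times falls off roughly as C/n^2. A minimal sketch of the predicted-versus-actual comparison follows, with hypothetical observed counts and a chi-square goodness-of-fit test standing in for whatever statistic the study ultimately uses.

```python
# Compare observed citation-frequency counts against a Lotka prediction.
from scipy import stats

observed = [120, 32, 14, 7, 4]               # sources cited exactly n times, n = 1..5
C, a = observed[0], 2.0                      # anchor C to the n = 1 bin; assume a = 2
expected = [C / n ** a for n in range(1, 6)]

# Rescale so expected and observed totals match, as chi-square requires
scale = sum(observed) / sum(expected)
expected = [e * scale for e in expected]

chi2, p = stats.chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")     # a large p is consistent with Lotka's law
```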
449
Computer Anxiety: Its Related Characteristics and Its Relationship to Achievement in Computer Literacy of Slippery Rock University Students. Boettner, Linda M., 01 January 1991.
This study was designed to investigate what effects the completion of a computer literacy course had on computer-related anxiety, what factors were correlated with computer anxiety, and what relation computer anxiety had to achievement in computer literacy. The possible correlates of computer anxiety considered in this study were gender, the number of semesters of previous computer experience, the number of university credit hours completed, and cumulative quality point average. Analyses were conducted to identify any differences in computer anxiety levels among groups of subjects with different declared major areas of study.
Slippery Rock University undergraduates (N = 325) who were enrolled in the university's computer literacy course in the 1991 spring semester were surveyed before and after completing the course. Data about the subjects' computer anxiety levels and achievement in computer literacy were collected by means of standardized tests, and demographic data for the subjects were gathered through a questionnaire and through the university's mainframe computer. Hypotheses were tested at the 0.05 significance level using either a point-biserial correlation coefficient, a Pearson product-moment correlation coefficient, a t-test for paired variates, or an analysis of variance. Because the analysis of variance indicated differences among the groups with different major areas of study, the Scheffé test was applied to identify which pairs of groups differed. Of the possible correlates of computer anxiety tested, only gender and the number of university credit hours completed were found to be not significantly related to computer anxiety. The number of semesters of previous computer experience was inversely related to computer anxiety, and both cumulative quality point average and achievement in computer literacy were determined to be positively correlated with computer anxiety. Differences in the mean computer anxiety levels of the groups of subjects were identified. Based upon the results of this study, several curricular recommendations were made. Recommendations for future study suggested expanding the study to encompass more semesters and a larger population of subjects.
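As an illustration of two of the analyses named above, a short sketch with purely illustrative data follows: a Pearson correlation for a continuous correlate and a point-biserial correlation for the binary gender variable.

```python
# Illustrative versions of two of the study's correlational tests.
from scipy import stats

anxiety = [55, 60, 48, 70, 52, 65, 58, 62]
prior_semesters = [3, 1, 4, 0, 3, 1, 2, 1]     # semesters of computer experience
gender = [0, 1, 0, 1, 0, 1, 1, 0]              # binary coding

r, p = stats.pearsonr(prior_semesters, anxiety)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")     # study found an inverse relation

r_pb, p_pb = stats.pointbiserialr(gender, anxiety)
print(f"point-biserial r = {r_pb:.2f}, p = {p_pb:.3f}")
```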
450
Finding a Fitness Function to be Used with Genetic Algorithms to Solve a Protein Folding Problem: The ab initio Prediction of a Protein Using Torsion Angles. Bohonak, Noni McCullough, 01 January 2000.
This dissertation shows that the ab initio prediction of a protein using torsion angles will work with the correct fitness function. It shows that the work can be done on a high-end workstation using a small model of a protein. It was based on the previous work of Dr. Steffen Schulze-Kremer, who achieved limited success with a faulty fitness function and a massively parallel system. The purpose of this work was not only to find the solution but also to demonstrate how rapidly advancing technology will permit this type of research to move from costly parallel systems, nuclear magnetic resonance, and x-ray crystallography to a less costly microcomputer system. To accomplish this, the code was run with Microsoft's Visual C++ (version 6) on Intel systems running at 220 MHz, 550 MHz, and 700 MHz with 40 MB, 512 MB, and 256 MB of memory. The results of this work will pave the way for further research in this area on less costly hardware.
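A minimal sketch of the representation the title describes, a genome of backbone torsion angles (phi, psi) per residue scored by a fitness function, follows; the toy "energy" term and the random-search baseline are assumptions, since finding a fitness function that actually guides the search was the dissertation's contribution.

```python
# Torsion-angle genome for a small protein model, with a toy fitness function.
import random

N_RESIDUES = 10

def random_genome():
    # One (phi, psi) backbone torsion-angle pair per residue, in degrees
    return [(random.uniform(-180.0, 180.0), random.uniform(-180.0, 180.0))
            for _ in range(N_RESIDUES)]

def fitness(genome):
    # Hypothetical stand-in: reward angles near the alpha-helical region
    # (phi ~ -60, psi ~ -45); a real fitness function scores 3-D conformation
    return -sum((phi + 60.0) ** 2 + (psi + 45.0) ** 2 for phi, psi in genome)

# Random-search baseline; a GA would evolve genomes with crossover and mutation
best = max((random_genome() for _ in range(1000)), key=fitness)
print("best fitness found:", fitness(best))
```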