681 |
An inquiry concerning some variables which influence cotton pricing in the United States
Smith, Novel John, III, 01 August 1965 (has links)
No description available.
|
682 |
Combining schema and instance information for integrating heterogeneous databases: An analytical approach and empirical evaluation
Zhao, Huimin, January 2002 (has links)
Determining the semantic correspondences among heterogeneous data sources is critical to their semantic integration, yet it is a complex, resource-consuming task that demands automated support. In this dissertation, we propose a comprehensive approach to detecting both schema-level and instance-level semantic correspondences from heterogeneous data sources. Semantic correspondences on the two levels are identified alternately and incrementally in an iterative procedure. Statistical cluster analysis methods and the Self-Organizing Map (SOM) neural network method are used first to identify similar schema elements (i.e., relations and attributes). Based on the identified schema-level correspondences, classification techniques drawn from statistical pattern recognition, machine learning, and artificial neural networks are then used to identify matching tuples. Multiple classifiers are combined in various ways, such as bagging, boosting, concatenating, and stacking, to improve classification accuracy. Statistical analysis techniques, such as correlation and regression, are then applied to a preliminary integrated data set to evaluate the relationships among schema elements more accurately. Improved schema-level correspondences are fed back into the identification of instance-level correspondences, resulting in a loop in the overall procedure. An empirical evaluation using real-world and simulated data demonstrates the utility of the proposed multi-level, multi-technique approach to detecting semantic correspondences from heterogeneous data sources.
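The classifier-combination step this abstract describes can be illustrated with a minimal sketch, assuming simple majority voting over toy string matchers. The matcher names, rules, and labels below are invented for illustration and are not taken from the dissertation:

```python
from collections import Counter

def vote(classifiers, record_pair):
    """Return the majority label over all classifiers for one record pair."""
    labels = [clf(record_pair) for clf in classifiers]
    return Counter(labels).most_common(1)[0][0]

# Three toy matchers that decide whether two name strings denote the same entity.
exact     = lambda p: "match" if p[0] == p[1] else "non-match"
case_free = lambda p: "match" if p[0].lower() == p[1].lower() else "non-match"
prefix    = lambda p: "match" if p[0][:4].lower() == p[1][:4].lower() else "non-match"

# Two of the three matchers agree, so the combined decision is "match".
print(vote([exact, case_free, prefix], ("Smith", "smith")))
```

In the dissertation the combined decisions come from trained classifiers fused via bagging, boosting, concatenating, or stacking; plain voting is only the simplest member of that family.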
|
683 |
Searching and mining the Web for personalized and specialized information
Chau, Michael C., January 2003 (has links)
With the rapid growth of the Web, users are often faced with the problem of information overload and find it difficult to search for relevant and useful information on the Web. Besides general-purpose search engines, there exist alternative approaches that can help users search the Web more effectively and efficiently. Personalized search agents and specialized search engines are two such approaches. The goal of this dissertation is to study how machine learning and artificial intelligence techniques can be used to improve these approaches. A system development research process was adopted as the methodology in this dissertation. In the first part of the dissertation, five personalized search agents, namely CI Spider, Meta Spider, Cancer Spider, Nano Spider, and Collaborative Spider, were developed. These spiders combine Web searching with techniques such as noun phrasing, text clustering, and multi-agent technologies to help satisfy users' information needs in different domains and contexts. Individual experiments were designed and conducted to evaluate the proposed approach, and the experimental results showed that the prototype systems performed better than, or comparably to, traditional search methods. The second part of the dissertation investigates how artificial intelligence techniques can be used to facilitate the development of specialized search engines. A Hopfield Net spider was proposed to locate Web URLs relevant to a given domain. A feature-based machine-learning text classifier was also proposed to filter Web pages. A prototype system was built for each approach. Both systems were evaluated, and the results demonstrated that both outperformed traditional approaches. This dissertation makes two main contributions.
First, it demonstrated how machine learning and artificial intelligence techniques can be used to improve and enhance the development of personalized search agents and specialized search engines. Second, it provided a set of tools that support users' Web searching and Web mining activities in various contexts.
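The Hopfield Net spider works by repeatedly expanding the most promising unvisited URLs. A minimal best-first sketch of that idea, assuming a toy in-memory link graph with precomputed relevance scores (the real spider derives scores from page content through spreading activation, which is not reproduced here):

```python
import heapq

# Toy link graph: page -> (relevance score, outgoing links). In the real
# spider, scores come from analyzing page content; here they are given.
GRAPH = {
    "seed": (1.0, ["a", "b"]),
    "a":    (0.9, ["c"]),
    "b":    (0.2, ["d"]),
    "c":    (0.8, []),
    "d":    (0.1, []),
}

def crawl(seed, limit=3, threshold=0.5):
    """Visit up to `limit` pages, always expanding the highest-scored
    unvisited URL first; return the pages whose score passes the threshold."""
    frontier = [(-GRAPH[seed][0], seed)]   # max-heap via negated scores
    visited, relevant = set(), []
    while frontier and len(visited) < limit:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        score, links = GRAPH[url]
        if score >= threshold:
            relevant.append(url)
        for nxt in links:
            if nxt not in visited:
                heapq.heappush(frontier, (-GRAPH[nxt][0], nxt))
    return relevant

# The low-scored branch under "b" is never expanded within the budget.
print(crawl("seed"))
```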
|
684 |
Facilitating knowledge discovery by integrating bottom-up and top-down knowledge sources: A text mining approach
Leroy, Gondy A., January 2003 (has links)
This dissertation aims to discover synergistic combinations of top-down (ontologies), interactive (relevance feedback), and bottom-up (machine learning) knowledge encoding techniques for text mining. The strength of machine learning techniques lies in their coverage and efficiency, because they can discover new knowledge without human intervention. The output, however, is often imprecise and irrelevant. Human knowledge, encoded top-down or interactively, may remedy this. The research question addressed is whether knowledge discovery can become more precise and relevant with hybrid systems. Three different combinations are evaluated. The first study investigates an ontology, the Unified Medical Language System (UMLS), combined with an automatically created thesaurus to dynamically adjust the thesaurus's output. The augmented thesaurus was added to a medical meta-search portal as a keyword suggester and compared with the unmodified thesaurus and UMLS. Users preferred the hybrid approach; thus, the combination of the ontology with the thesaurus was better than either component separately. The second study investigates implicit relevance feedback combined with genetic algorithms designed to adjust user queries for online searching. These were compared with pure relevance feedback algorithms. Users were divided into groups based on their overall performance. The genetic algorithm significantly helped low achievers, but hindered high achievers. Thus, the interactively elicited knowledge from relevance feedback was judged insufficient to guide machine learning for all users. The final study investigates ontologies combined with two natural language processing techniques: a shallow parser and an automatically created thesaurus. Both capture relations between phrases in biomedical text. Qualified researchers found all terms to be precise; however, terms that belonged to ontologies were more relevant. Parser relations were all precise.
Thesaurus relations were less precise, but precision improved for relations that had their terms represented in ontologies. Thus, this integration of ontologies with natural language processing provided good results. In general, it was concluded that top-down encoded knowledge could be effectively integrated with bottom-up encoded knowledge for knowledge discovery in text. This is particularly relevant to business fields, which are text and knowledge intensive. In the future, it will be worthwhile to extend the parser and also to test similar hybrid approaches for data mining.
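The second study's idea of letting a genetic algorithm adjust queries based on relevance feedback can be sketched roughly as follows. Everything here, the fitness function, vocabulary, and parameters, is a made-up stand-in for the dissertation's actual design:

```python
import random

random.seed(42)

# Terms the user's relevance feedback implicitly favors, and a tiny vocabulary.
RELEVANT_TERMS = {"gene", "protein", "pathway"}
VOCAB = ["gene", "protein", "pathway", "car", "stock", "music"]

def fitness(weights):
    """Reward query-term weights that emphasize the relevant terms."""
    top = {t for t, w in zip(VOCAB, weights) if w > 0.5}
    return len(top & RELEVANT_TERMS) - len(top - RELEVANT_TERMS)

def evolve(pop_size=20, generations=30):
    """Evolve term-weight vectors with elitist selection, one-point
    crossover, and single-gene mutation."""
    pop = [[random.random() for _ in VOCAB] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(VOCAB))  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(len(VOCAB))    # point mutation
            child[i] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sorted(t for t, w in zip(VOCAB, best) if w > 0.5))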
|
685 |
Principles and methodology for computer-assisted instruction (CAI) design
Crews, Janna Margarette, January 2004 (has links)
As the role of computer-assisted instruction (CAI) rapidly expands in the educational and training efforts of all types of organizations, the need for well-designed, learner-centered CAI continues to grow. The CAI design principles and methodology proposed herein provide systems designers with a framework for designing effective, learner-centered CAI systems that support learning with information technologies. Implementing the framework should lead to CAI that better supports learners in the development of their mental schemas and, ultimately, in achieving their learning objectives. The primary goals of this research are twofold: first, to derive a theoretically and empirically based set of CAI design principles directed at purposefully exploiting the unique capabilities of information technology to help learners develop their mental schemas; second, to codify a methodology for implementing these principles in the systems analysis and design process. Both goals were accomplished as follows. First, a literature review was undertaken to uncover features important for designing CAI to improve learning. Concurrently, the design features and functionality of several existing CAI systems were reviewed. A field study of one distinctive CAI system was conducted to investigate and substantiate its effectiveness. Results indicated that learners using the CAI improved their achievement significantly more than learners who did not, and learners attributed their improved performance to using the CAI. Based on the literature review, the review of existing CAI, and the results of the field study, a set of principles and a methodology for designing CAI were derived. The design principles and methodology focus the CAI design process on supporting learners' development of their mental schemas. Finally, we designed, developed, and implemented a prototype Web-based multimedia training system in accordance with the proposed CAI design principles.
As a partial instantiation of the proposed principles and methodology, this prototype CAI provides a proof of concept. The design and effectiveness of the prototype CAI have been tested in a series of experiments. The corroborating evidence from these studies indicates that the prototype CAI is well designed, usable, and an effective training tool. The demonstrated success of the prototype provides evidence of the merits of the proposed principles and methodology.
|
686 |
Experimental investigation of the development, maintenance, and disintegration of trust between anonymous agents
Murphy, Ryan O'Loughlin, January 2004 (has links)
In this dissertation the dynamics of trust between anonymous agents playing iterated trust dilemmas are examined. Two separate experimental institutions are used. The first is a three-player Centipede Game; the second is a novel institution called the Real Time Trust Game. Both games have structures in which the Pareto-optimal outcomes and the Nash equilibria are diametrically opposed. Mutually anonymous individuals play the games for real pecuniary rewards. A variety of experimental manipulations are employed to determine their relative effects on individuals' decisions, as well as on population dynamics. General findings confirm the fragility of trust in these barren institutions; they are barren in the sense that the normal social mechanisms that typically facilitate trust (credible communication, reputation, potential retaliation) are unavailable to players. As a consequence, most of the results display a slow breakdown of trust between players, with the rate of decay being subject to experimental manipulation. Nonetheless, individual differences are substantial, and a handful of players resist playing narrowly rational strategies, exhibiting "hard-core" cooperative tendencies.
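The tension these games exploit can be made concrete with a standard two-player centipede parameterization; this is a textbook illustration, and the dissertation's three-player game and actual payoffs differ:

```python
def centipede_payoffs(rounds):
    """Payoffs (taker, other) if play stops at round r, with the pot
    doubling each round from an initial (4, 1) split."""
    return [(4 * 2**r, 1 * 2**r) for r in range(rounds)]

# Stopping later enlarges both players' payoffs, so passing is mutually
# beneficial...
payoffs = centipede_payoffs(4)
print(payoffs)

# ...yet at every node the mover earns more by taking now (4 * 2**r) than
# by letting the opponent take next round (1 * 2**(r + 1)), so backward
# induction unravels cooperation all the way to round 0.
```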
|
687 |
Data allocation and query optimization in large scale distributed databases
Zhou, Zehai, 1962-, January 1996 (has links)
Distributed database technology is expected to have a significant impact on data processing in the upcoming years because distributed database systems have many potential advantages over centralized systems for geographically distributed organizations. Data allocation and query optimization are two of the most important aspects of distributed database design. Data allocation involves placing a database and the applications that run against it at the multiple sites of a network. It is a very complex problem consisting of two processes: data fragmentation and fragment allocation. Data fragmentation involves partitioning each relation into a group of fragment relations, while fragment allocation deals with distributing these fragmented relations across the sites of the distributed system. Query optimization includes designing algorithms that analyze and convert queries into a set of data manipulation operations. Both the data allocation and query optimization problems are NP-hard in nature and notoriously difficult to solve. We have attempted to combine the two highly interrelated and interactive decision processes in data allocation by formulating them as integer programs, taking into consideration different constraints and under various assumptions. Various solution methods are discussed and a new linearization method is investigated. We next analyze the query optimization problem and reduce it to a join ordering problem. Several heuristics and a genetic algorithm have been developed for solving the join ordering problem. Computational experiments on these algorithms were conducted and solution qualities compared. The computational experiments show that the suggested linearization method performs clearly and consistently better than a currently widely used method, and that heuristics and genetic algorithms are viable methods for solving the query optimization problem.
It is anticipated that the models and solution methods developed in this study for data allocation and query optimization in distributed database systems may be of practical as well as theoretical use. Nevertheless, much more needs to be done to solve the distributed database design problems and realize their potential benefits. Our models and solution methods can be the starting point for the eventual resolution of these complex problems in large scale distributed database systems.
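One family of join-ordering heuristics the abstract alludes to can be sketched as a greedy smallest-intermediate-result search. The cardinalities and selectivities below are invented, and the dissertation's own heuristics and genetic algorithm are not reproduced:

```python
# Toy statistics: relation cardinalities and pairwise join selectivities.
CARD = {"R": 1000, "S": 100, "T": 10}
SEL = {frozenset("RS"): 0.01, frozenset("ST"): 0.1, frozenset("RT"): 0.05}

def join_size(joined, right, size):
    """Estimated cardinality of joining the current intermediate result
    (relations in `joined`, cardinality `size`) with relation `right`."""
    sel = 1.0
    for r in joined:
        sel *= SEL.get(frozenset({r, right}), 1.0)
    return size * CARD[right] * sel

def greedy_order(relations):
    """Start from the smallest relation; repeatedly add the relation that
    yields the smallest estimated intermediate result."""
    remaining = set(relations)
    first = min(remaining, key=CARD.get)
    order, joined, size = [first], {first}, CARD[first]
    remaining.remove(first)
    while remaining:
        nxt = min(remaining, key=lambda r: join_size(joined, r, size))
        size = join_size(joined, nxt, size)
        order.append(nxt)
        joined.add(nxt)
        remaining.remove(nxt)
    return order

print(greedy_order(["R", "S", "T"]))  # starts from T, the smallest relation
```

A genetic algorithm would instead search over permutations of the relations, using the same cost estimate as its fitness function.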
|
688 |
Maintaining legitimacy through public organizational discourse: Crisis and communication in the United States airline industry
Massey, Joseph Eric, 1964-, January 1997 (has links)
Organizations are beginning to realize the importance of consistent communication with their constituencies. Several organizations have experienced negative consequences from producing inconsistent messages to their publics. This dissertation provides an investigation of the effects of message consistency on perceptions of organizational legitimacy. Legitimacy is the perception that an organization is good and has a right to continue operations. It is viewed as an important variable in the study of organizations, since organizations that are not perceived as legitimate face internal and external threats that could lead to their demise. Image management theory, crisis management theory, and niche-width theory are relied on in this investigation to examine the effects of message consistency on organizational legitimacy. Image management theory holds that organizations have images in much the same way that people do, and it is therefore incumbent on organizations to engage in strategic communication behaviors designed to influence perceptions of their image. The end goal of image management is the production and maintenance of legitimate organizational status. During crisis events an organization's image is threatened, and therefore at no time is its legitimacy more salient. Crisis management theory provides the explanatory framework and the context in which to study consistent discourse and perceptions of legitimacy. Finally, niche-width theory argues that different organizational forms develop in response to different environmental conditions. Two particular types of organizations, generalists and specialists, are found in most organizational fields. Generalist organizations have many resources and are equipped to deal with much variety in their environment. Specialist organizations, on the other hand, have few resources and are better equipped to deal with particular aspects of their environment.
Niche-width theory is incorporated into the dissertation to determine whether the type of organization (specialist vs. generalist) affects perceptions of organizational legitimacy. These theories provide the foundation for the empirical investigation in this dissertation. Several hypotheses were generated from these theories, and support was found for all but one. Results suggest that organizations experiencing crisis should produce consistent messages to both internal and external publics in order to be perceived as legitimate.
|
689 |
Successful behaviors in information systems development teams
Glynn, Melissa Sue, 1969-, January 1998 (has links)
This dissertation research examines the impact of leadership, cohesion, information sharing, and the application of group support systems on information systems development (ISD) project quality and project team satisfaction. Research has identified that, after 40 years of developing information systems, there are still widespread difficulties in delivering systems on time and on budget. The research objective of this study is to examine group-level processes to understand how ISD team behavior can affect quality. A group support system was introduced as a sensemaking treatment to increase team performance. The following research questions were identified: (1) What is the impact of cohesion on project quality? (2) What is the impact of leadership on project quality? (3) What is the impact of information sharing on project quality? (4) What is the impact of cohesion on team satisfaction? (5) What is the impact of leadership on team satisfaction? (6) What is the impact of information sharing on team satisfaction? (7) Is there a relationship between group support systems use and project quality? (8) Can group support systems enable sensemaking activities? A longitudinal experiment was conducted with subjects enrolled in four sections of an upper-division Management Information Systems course in Systems Analysis and Design in consecutive semesters. Lectures and class activities were identical in all four sections, except that group support system (GSS) technology was used by the second-semester classes, the treatment group. Student teams in all sections completed a semester-long ISD project.
|
690 |
Dynamic schema evolution in a heterogeneous database environment: A graph theoretic approach
Ganesan, Shankaranarayanan, January 1998 (has links)
The objective of this dissertation is to create a theoretical framework and mechanisms for automating dynamic schema evolution in a heterogeneous database environment. The structure, or schema, of databases changes over time. The management of dynamic schema evolution means accommodating changes to the schema without loss of existing data and without significantly affecting the day-to-day operation of the database. To address the problem of schema evolution in a heterogeneous database environment, we first propose a comprehensive taxonomy of schema changes and examine their implications. We then propose a formal methodology for managing schema evolution using graph theory, with a well-defined set of operators and graph-based algorithms for tracking and propagating schema changes. We show that these operators and algorithms preserve the consistency and correctness of the schema following the changes. The complete framework is embedded in a prototype software system called SEMAD (Schema Evolution Management ADvisor). We evaluate the system's usefulness through exploratory case studies in two different heterogeneous database domains, namely a university database environment and a scientific database environment used by atmospheric scientists and hydrologists. The results of the exploratory case studies supported the hypothesis that SEMAD does help database administrators in their tasks. The results indicate that SEMAD helps administrators identify and incorporate changes better than performing these tasks manually. An important overhead cost in SEMAD is creating the semantic data model, capturing the metadata associated with the model, and defining the mapping information that relates the model to the set of underlying databases. This task is a one-time effort performed at the beginning; subsequent changes are captured incrementally by SEMAD.
However, the benefits of using SEMAD in dynamically managing schema evolution appear to offset this overhead cost.
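The kind of change propagation SEMAD automates can be pictured with a minimal dependency-graph sketch. The element names and the single dependency relation here are invented for illustration; SEMAD's operators and algorithms are far richer:

```python
# Schema elements form a dependency graph: element -> elements it is derived from.
DEPENDS_ON = {
    "view_salary_report": ["table_employee"],
    "view_dept_totals":   ["view_salary_report", "table_department"],
}

def affected_by(element):
    """Return all elements transitively affected by a change to `element`."""
    hit, stack = set(), [element]
    while stack:
        cur = stack.pop()
        for child, parents in DEPENDS_ON.items():
            if cur in parents and child not in hit:
                hit.add(child)
                stack.append(child)
    return sorted(hit)

# A change to the base table ripples through both dependent views.
print(affected_by("table_employee"))
```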
|