691 |
Facilitating interoperability among heterogeneous geographic database systems: A theoretical framework, a prototype system, and evaluation
Park, Jinsoo January 1999 (has links)
The objective of this research is to develop a formal semantic model, theoretical framework and methodology to facilitate interoperability among distributed and heterogeneous geographic database systems (GDSs). The primary research question is how to identify and resolve various data- and schematic-level conflicts among such information sources. Set theory is used to formalize the semantic model, which supports explicit modeling of the complex nature of geographic data objects. The semantic model is used as a canonical model for conceptual schema design and integration. The intension (including structure, integrity rules and meta-properties) of the database schema is captured in the semantic model. A comprehensive framework classifying various semantic conflicts is proposed. This framework is then used as a basis for automating the detection and resolution of semantic conflicts among heterogeneous databases. A methodology for conflict detection and resolution is proposed to develop an interoperable system environment. The methodology is based on the concept of a "mediator." Several types of semantic mediators are defined and developed to achieve interoperability. An ontology is developed to capture various semantic conflicts. The metadata and ontology are stored in a common repository and manipulated by description logic-based operators. A query processing technique is developed to provide uniform and integrated access to the multiple heterogeneous databases. Logic is employed to formalize our methodology, which provides a unified view of the underlying representational and reasoning formalism for the semantic mediation process. A usable prototype system is implemented to provide proof of the concept underlying this work. The system has been integrated with the Internet and can be accessed through any Java-enabled web browser.
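The mediator-based resolution of schematic naming conflicts described above can be illustrated with a minimal sketch. The ontology entries and attribute names below are hypothetical, invented for illustration; the dissertation's actual ontology and description logic operators are far richer:

```python
# A toy "semantic mediator": a shared ontology maps synonymous attribute
# names from heterogeneous GDS schemas onto a common concept.
ONTOLOGY = {
    "elevation": {"elevation", "altitude", "height_above_sea_level"},
    "parcel_id": {"parcel_id", "lot_number", "apn"},
}

def mediate(attribute_a, attribute_b):
    """Return the shared ontology concept if the two attribute names
    denote the same concept; return None if the conflict is unresolved."""
    for concept, synonyms in ONTOLOGY.items():
        if attribute_a in synonyms and attribute_b in synonyms:
            return concept
    return None

print(mediate("altitude", "elevation"))  # elevation
print(mediate("altitude", "apn"))        # None
```

A real mediator would also resolve data-level conflicts (units, datums, formats), but the ontology lookup captures the core idea of mapping local schema terms to shared concepts.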
Finally, the usefulness of our methodology and the system is evaluated using three different cases that represent different application domains. Various heterogeneous geospatial datasets and non-geographic datasets are used during the evaluation phase. The results of the evaluation suggest that correct identification and construction of both schema and ontology-schema mapping knowledge play very important roles in achieving interoperability at both the data and schema levels. The research adopts a multi-methodological approach that incorporates set theory, logic, prototyping, and case study.
|
692 |
Building fuzzy front-end decision support systems for new product information in global telecommunication markets: A measure theoretical approach
Liginlal, Divakaran January 1999 (has links)
In today's highly competitive business environment, innovation and new product introduction are recognized as the sustaining forces of corporate success. The early phases of new product development, collectively known as the 'front-end', are crucial to the success of new products. Building a fuzzy front-end decision support system, balancing the needs for analytical soundness and model robustness while incorporating decision-maker's subjectivity and adaptability to different business situations, is a challenging task. A process model and a structural model focusing on the different forms of uncertainties involved in new product introduction in a global telecommunication market are presented in this dissertation. Fuzzy measure theory and fuzzy set theory are used to build a quantitative model of the executive decision-process at the front-end. Solutions to the problem of exponential complexity in defining fuzzy measures are also proposed. The notion of constrained fuzzy integrals demonstrates how the fuzzy measure-theoretical model integrates resource allocation in the presence of project interactions. Forging links between business strategies and expert evaluations of critical success factors is attempted through fuzzy rule-based techniques in the framework of the proposed model. Interviews with new product managers of several American business firms have confirmed the need for building an intelligent front-end decision support system for new product development. The outline of a fuzzy systems development methodology and the design of a proof-of-concept prototype serve as significant contributions of this research work toward this end. In the context of executive decision making, a usability inspection of the prototype is carried out and results are discussed.
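The fuzzy measure-theoretical aggregation at the heart of such a model can be sketched with a discrete Choquet integral, which generalizes the weighted average to non-additive measures and so can capture interactions among success factors. The criteria, scores, and measure values below are invented for illustration and are not taken from the dissertation:

```python
def choquet_integral(values, measure):
    """Discrete Choquet integral of criterion scores `values`
    (dict: criterion -> score in [0, 1]) with respect to the fuzzy
    measure `measure` (dict: frozenset of criteria -> weight)."""
    # Order criteria by score, highest first.
    ordered = sorted(values, key=values.get, reverse=True)
    total = 0.0
    for i, criterion in enumerate(ordered):
        # Score of the next criterion down (0 past the last one).
        nxt = values[ordered[i + 1]] if i + 1 < len(ordered) else 0.0
        # Weight of the coalition of the i+1 highest-scoring criteria.
        coalition = frozenset(ordered[:i + 1])
        total += (values[criterion] - nxt) * measure[coalition]
    return total

# Two interacting success factors with a non-additive measure:
# g({market}) + g({tech}) < g({market, tech}) models synergy.
scores = {"market": 0.8, "tech": 0.5}
g = {frozenset({"market"}): 0.4,
     frozenset({"tech"}): 0.3,
     frozenset({"market", "tech"}): 1.0}
print(choquet_integral(scores, g))  # 0.3*0.4 + 0.5*1.0 = 0.62
```

Because the measure on the pair exceeds the sum of the singleton weights, the aggregate rewards projects that score well on both factors together, which a simple weighted average cannot express.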
A computational analysis, based upon methods of tactical systems simulation, measures the rank order consistency of the fuzzy measure theoretical approach in comparison with two competing fuzzy multiple attribute decision models under structural variations of the underlying models. The results demonstrate that (1) the modeling of the fuzzy numbers representing the linguistic variables, (2) the selection of the granularity of the linguistic scales, and (3) the selection of the model dimensions significantly affect the quality of the decisions suggested by the decision aid. A comprehensive plan for future validation of the decision aid is also presented.
|
693 |
Software inspections: Collaboration and feedback
Rodgers, Thomas Lee January 1999 (has links)
This dissertation studies the impact of collaboration and feedback on software inspection productivity. Used as a software-engineering validation technique, software inspections can be a cost-effective method for identifying latent issues (defects) within design documents and program code. For over two years, Baan Company has used a generalized Electronic Meeting System (EMS, also referred to as GroupWare) to support software inspections and reported EMS inspections to be more productive than face-to-face paper-based inspections (Genuchten, 1998). Validation of this phenomenon and initial development of a potentially more effective specialized EMS (SEMS) tool form the basis of this dissertation. Explanations of the collaborative phenomenon are presented within a theoretical framework along with testable hypotheses. The framework is derived from Media Synchronicity Theory (Dennis and Valacich) and Focus Theory of Productivity (Briggs and Nunamaker). Two main research questions are explored. (1) Do collaboration tools improve software inspection productivity? (2) Can feedback dimensions that significantly improve productivity be identified and incorporated within software inspections? The first research question is supported. In a detailed reevaluation of the Baan study, EMS inspections are shown to be 32% more efficient than paper-based inspections. During the subsequent period, the results were more pronounced, with EMS inspections being 66% more efficient even controlling for inspector proficiency. Significantly more conveying communication than convergent communication occurs during inspection meetings. EMS inspections enable more deliberation, less attention for communication, and more attention for information access compared to face-to-face paper-based inspections. The second research question is explored. Surveys and analysis probe some previously unexplored feedback dimensions (review rate, inspector proficiency and inspection process maturity).
Experienced inspectors are surveyed regarding process maturity, inspector proficiency, and collaborative aspects of inspections. Preparation and review rates are necessary but not sufficient to explain productivity. Inspector proficiency is perceived to be important and multi-dimensional. Participation by highly proficient inspectors resulted in 49-76% more effective inspections. Significant inspection process variations exist within mature development organizations. Based on theory and experiences, the SEMS inspection tool is developed and a quasi-experiment proposed. Initial results using the SEMS inspection tool are reported and suggestions made for future enhancements.
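The efficiency comparisons above reduce to a defects-found-per-person-hour metric. A small sketch with hypothetical counts (the raw numbers below are made up, not taken from the Baan study) shows how a figure such as "32% more efficient" arises:

```python
def inspection_efficiency(defects_found, person_hours):
    """Efficiency of an inspection: latent issues found per person-hour."""
    return defects_found / person_hours

# Hypothetical inspection data for two inspection modes.
ems_eff = inspection_efficiency(33, 10.0)    # EMS-supported meetings
paper_eff = inspection_efficiency(25, 10.0)  # face-to-face, paper-based
improvement = (ems_eff - paper_eff) / paper_eff
print(f"{improvement:.0%} more efficient")  # 32% more efficient
```

Controlling for inspector proficiency, as the reevaluation did, would mean comparing these ratios within strata of comparable inspectors rather than over the pooled data.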
|
694 |
The antecedents of trust in a manager: The subordinate tells the story of time
Cherry, Bennett Wayne January 2000 (has links)
Trust is considered essential for effective relationships both in the workplace and outside the workplace. Unfortunately, there is a paucity of empirical support for how interpersonal trust is actually developed between a manager and subordinate. This research examines this development by empirically testing antecedents of trust including time as an important moderator. In earlier proposed models of trust, time has either been left out entirely or implicitly subsumed in other factors. This research also investigates the impact of different sources of information that subordinates use in determining the trustworthiness of their manager. Multiple research methods are used to address the research questions. First, in-depth interviews with employees were conducted to determine whether the proposed trust model includes all of the important factors that influence employees' trust in their supervisors. Following this, a scenario study was developed to test a portion of the model that deals with the sources of information that subordinates use in assessing a manager's trustworthiness. Finally, two samples of employees responded to a comprehensive questionnaire that uncovered the factors hypothesized to influence trust in their manager. The results from these multiple studies produce a surprisingly simple result: the trust that an employee has in his/her manager is developed through word-of-mouth or reputational information and frequent interaction with the manager. Although a moderator model was proposed and tested, the results, nevertheless, suggest that a more parsimonious model is possible.
|
695 |
The demands-control model in fast-food restaurants: Effects of emotional labor, customer treatment, demands, control, and support
Richmond, Sandra Mansell, 1944- January 1997 (has links)
In this cross-sectional field study of a fast-food organization, self-report data provided by workers and interview data from managers were used to assess the effects of the work environment on fast-food worker attitudes and behavior. Job demands, worker control and management support (Karasek & Theorell, 1990) were the predictor variables in this research. Additional job demands of emotional labor and customer behavior were measured and tested. Results indicated that control, emotional labor and management support were negatively associated with reported stress and positively associated with reported satisfaction and commitment. Additionally, customer behavior and demands were positively associated with reported stress and customer behavior was negatively associated with reported satisfaction and commitment.
|
696 |
The diffusion of the Internet in China
Foster, William Abbott January 2001 (has links)
The number of Internet users in China has grown from 8.9 million users in 1999 to 22 million in 2001. However, estimates of users alone do not give an adequate picture of the Internet in China. The Global Diffusion of the Internet (GDI) Project has developed a framework for looking at Internet diffusion at a country level across six dimensions. The Chinese government made the decision in 1996 to allow two organizations to run interconnecting networks that provide commercial global Internet connectivity. Under a strategy known as "letting the sons compete", it has authorized more and more state owned organizations to run competing interconnecting networks. Under this state-coordinated competition, China has diffused rapidly along all the dimensions of the global diffusion of the Internet framework. A world class backbone infrastructure is being built by multiple carriers. Almost all government agencies and most major businesses have a Web presence. However, though the infrastructure is being built and the cost of access is dropping rapidly, most organizations have not yet significantly redesigned their business processes to take advantage of the Internet.
|
697 |
Economics of patent policy in the digital economy
Kim, Taeha January 2002 (has links)
Advances in information technology (IT) have enabled the design and development of innovations in software and computer-assisted business methods. Firms attempt to leverage these innovations to gain competitive advantages through cost reduction or quality improvements, and often pass some benefits to consumers. However, such competitive advantages are increasingly difficult to sustain because IT-enabled innovations are becoming easier to copy or imitate. Competitors can use reverse engineering or decryption techniques to discover how an innovation operates, modify the original, and distribute the amended innovation as a new product. Unfortunately, the ability of competing firms to imitate quickly and cheaply may reduce the incentives for firms to incur the cost to innovate. Much literature discusses ways government may induce firms to innovate and thus increase current and future social welfare. One tool available to government to provide such incentives is patent protection, i.e., granting the innovator an exclusive right to the innovation. One goal of patent policy is to maximize social welfare by providing incentives to innovate while simultaneously maintaining a competitive market. Policymakers disagree over how to balance these two often-conflicting goals. Much of the disagreement is based on what factors the government may control to provide protection for innovating firms and the socially optimal level of patent protection. Determining optimal protection policy is a non-trivial task. The complexity arises from stakeholders who may have contradictory objectives and a menu or mix of options. Of course, it is difficult to reach the first-best solution because the social planner does not have full access to information about firms and consumers.
This dissertation reviews economic theories of patent protection, addresses problems and issues related to the progress of IT, and investigates how a specific set of patent policies affects the incentives for firms to develop technological innovations and the way developed innovations are adopted and diffused throughout the marketplace. This work builds on, and contributes to, the literatures of information systems, economics, public policy, and law, and provides valuable insights for regulators responsible for designing and evaluating patent systems and for firms competing strategically under these systems.
|
698 |
Language- and domain-independent knowledge maps: A statistical phrase indexing approach
Ong, Thian-Huat January 2004 (has links)
The global economy increases the need for multilingual systems, while each domain has a large repository of knowledge, particularly explicit knowledge usually captured in text. The speed at which textual information is produced has exceeded the speed at which a person can process it, so an automated approach to alleviating the information overload problem is needed. Unlike structured data in databases, unstructured text cannot be readily understood and processed by computers. This dissertation aims to create a language- and domain-independent approach to automatically generating hierarchical knowledge maps that enable users to browse and understand the concepts hidden in the underlying knowledge sources. A system development research methodology was adopted to build and evaluate prototype systems to study the research questions. In order to process textual knowledge, a statistical phrase indexing algorithm was proposed and applied to the Chinese language. Next, the algorithm was extended to process multiple languages and domains. Lastly, the results of the algorithm were further applied in a case study using the dissertation's proposed automated framework for generating hierarchical knowledge maps from a Chinese news collection. This dissertation has two main contributions. First, it demonstrated that an automated approach is effective in creating knowledge maps for users to browse the underlying knowledge. The approach combines a statistical phrase extraction algorithm for representing textual knowledge with neural networks for clustering related concepts and visualization. Second, it provided a set of language- and domain-independent tools to extract phrases from textual knowledge in order to support text mining applications.
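A statistical, language-independent phrase extraction step of the kind described can be sketched with pointwise mutual information (PMI) over adjacent tokens. The thresholds and the toy corpus below are illustrative assumptions; the dissertation's actual indexing algorithm may differ:

```python
import math
from collections import Counter

def candidate_phrases(tokens, min_count=2, min_pmi=1.0):
    """Score adjacent token pairs by pointwise mutual information;
    frequent, high-PMI pairs are candidate phrases. Works on words or,
    for languages without word delimiters such as Chinese, on characters."""
    n = len(tokens)
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    phrases = {}
    for (a, b), count in bigrams.items():
        if count < min_count:
            continue  # too rare to index reliably
        p_ab = count / (n - 1)
        p_a, p_b = unigrams[a] / n, unigrams[b] / n
        pmi = math.log(p_ab / (p_a * p_b))
        if pmi >= min_pmi:
            phrases[(a, b)] = pmi
    return phrases

tokens = "data mining finds patterns and data mining supports analysis".split()
print(candidate_phrases(tokens))  # ('data', 'mining') is the only candidate
```

Because the statistics are computed purely from token co-occurrence, nothing in the procedure depends on the language or the domain of the text, which is the property the dissertation's approach relies on.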
|
699 |
An automatic text mining framework for knowledge discovery on the Web
Chung, Wingyan January 2004 (links)
As the World Wide Web proliferates, the amounts of data and information available have outpaced human ability to analyze them. Information overload is becoming ever more serious. Effectively and efficiently discovering knowledge on the Web has become a challenge. This dissertation investigates an automatic text mining framework for knowledge discovery on the Web. It consists of five generic steps: collection, conversion, extraction, analysis, and visualization. The framework's input is Web data; its output is the knowledge discovered after applying the steps. Combinations of data and text mining techniques were used to assist human analysis in different scenarios. The research question was how knowledge discovery can be enhanced by using the framework. Three empirical studies applying the framework to business intelligence applications were conducted. First, the framework was applied to building a business intelligence search portal that provides meta-searching, Web page summarization, and result categorization. The portal was found to perform comparably to existing search engines in searching and browsing. Users liked its search and analysis capabilities. Thus, the framework can be used to analyze and integrate information distributed in heterogeneous sources. Second, the framework was applied to developing two browsing methods for clustering and visualizing business Web pages. In terms of precision, recall and accuracy, both outperformed list and map displays of search engine results. Users strongly favored the methods' usability and quality. Thus, the framework facilitated exploration of business intelligence from numerous results. Third, the framework was applied to classifying Web pages into different business stakeholder types. Experimental results showed that the framework could effectively help classify certain frequently appearing stakeholder types (e.g., partners).
Users strongly preferred the efficiency and capability of this application. Thus, the framework helped identify and extract business stakeholder relationships. In conclusion, our framework alleviated information overload and enhanced human analysis on the Web effectively and efficiently. The research thereby contributes to developing a useful and comprehensive framework for knowledge discovery on the Web and to achieving better understanding of human-computer interaction.
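The five generic steps of the framework can be sketched as a minimal pipeline. The helper names and the toy pages below are illustrative assumptions, not the dissertation's actual components:

```python
import re
from collections import Counter

def collect(sources):
    # Collection: in practice, spidering the Web; here, pages are given.
    return list(sources)

def convert(pages):
    # Conversion: strip HTML tags and normalize to lowercase tokens.
    return [re.sub(r"<[^>]+>", " ", page).lower().split() for page in pages]

def extract(docs, top_k=2):
    # Extraction: most frequent terms per document as crude key terms.
    return [[t for t, _ in Counter(doc).most_common(top_k)] for doc in docs]

def analyze(keyword_lists):
    # Analysis: aggregate key-term frequencies across the collection.
    return Counter(term for kws in keyword_lists for term in kws)

def visualize(stats):
    # Visualization: a plain-text summary stands in for a real display.
    return ", ".join(f"{term} ({count})" for term, count in stats.most_common())

pages = collect(["<p>stock market rises as market rallies</p>",
                 "<p>market analysts see stock gains</p>"])
print(visualize(analyze(extract(convert(pages)))))
```

Each real study in the dissertation swaps in heavier machinery at one or more stages (meta-search for collection, summarization and clustering for analysis, map displays for visualization), but the staged data flow is the same.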
|
700 |
Determining user interface effects of superficial presentation of dialog and visual representation of system objects in user directed transaction processing systems
Cooney, Vance January 2001 (has links)
At the point of sale in retail businesses, employees are a problem, one comprising high turnover, unmet consumer expectations and lost sales, among other things. One of the traditional strategies used by human resource departments to cope with employee behavior, or "misbehavior," has been to strictly script employee/customer interactions. Another, more recent, approach has been the development of systems to replace the human worker; in other words, to effect transactions directly between customers and an information system. In these systems one determinant of public acceptance may be the system's affect, whether that affect is "human-like" or takes some other form. Human-like affect can be portrayed by the use of multimedia presentation and interaction techniques to depict "employees" in familiar settings, as well as by incorporating elements of human exchange (e.g., having the system use the customer's name in dialogs). The field of Human-Computer Interaction, which informs design decisions for such multimedia systems, is still evolving, and research on the application of multimedia to User Interfaces for automated transaction processing of this type is just beginning. This dissertation investigates two dimensions of User Interface design that bear on the issue of emulating "natural human" transactions, using a laboratory experiment employing a 2 x 2 factorial design. The first dimension investigated is personalization, a theoretical construct derived from social role theory and applied in marketing. It is, briefly, the inclusion of scripted dialog crafted to make the customer feel a transaction is personalized. In addition to using the customer's name, scripts might call for ending a transaction with the ubiquitous "Have a nice day!" The second dimension investigated is the "richness" of representation of the UI.
Richness is here defined as the degree of realism of visual presentation in the interface and bears on the concept of direct manipulation. An object's richness could vary from a text based description of the object to a full motion movie depicting the object. The design implications of the presence or absence of personalization at varying levels of richness in a prototype UI simulating a fast food ordering system are investigated. The results are presented.
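The 2 x 2 factorial design can be illustrated by estimating main effects and the interaction from cell means. The ratings below are made-up values, not the experiment's data:

```python
def factorial_effects(cells):
    """Main effects and interaction for a 2x2 factorial design.
    cells: dict (personalization, richness) -> list of ratings,
    with 0 = absent/low and 1 = present/high on each factor."""
    mean = lambda xs: sum(xs) / len(xs)
    m = {cell: mean(ratings) for cell, ratings in cells.items()}
    # Main effect of each factor: mean difference between its levels.
    pers = ((m[(1, 0)] + m[(1, 1)]) - (m[(0, 0)] + m[(0, 1)])) / 2
    rich = ((m[(0, 1)] + m[(1, 1)]) - (m[(0, 0)] + m[(1, 0)])) / 2
    # Interaction: does richness matter more when dialog is personalized?
    interaction = ((m[(1, 1)] - m[(1, 0)]) - (m[(0, 1)] - m[(0, 0)])) / 2
    return pers, rich, interaction

cells = {(0, 0): [2, 2], (0, 1): [3, 3],
         (1, 0): [4, 4], (1, 1): [7, 7]}
print(factorial_effects(cells))  # (3.0, 2.0, 1.0)
```

A nonzero interaction term here means the effect of richness depends on whether personalization is present, which is exactly the kind of joint effect a factorial design is built to detect.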
|