171

Ontologie naturalisée et ingénierie des connaissances / Naturalized ontology and Knowledge Engineering

Zarebski, David, 15 November 2018
«What do I need to know about something to know it?» It is no wonder that such a general, hard-to-grasp, riddle-like question remained the exclusive domain of a single discipline for centuries: Philosophy. In this context, an account of the primitive components of reality – the so-called "world's furniture" – and of their relations is called an Ontology. This book investigates the emergence of similar questions in two different though related fields, namely Artificial Intelligence and Knowledge Engineering. We show that the way these disciplines apply an ontological methodology to cognition or to knowledge representation is not a mere analogy but raises a set of relevant questions and challenges from both an applied and a speculative point of view. More specifically, we suggest that some of the technical answers to the issues raised by Big Data invite us to revisit many traditional philosophical positions concerning the role of language and common-sense reasoning in thought, or the existence of a mind-independent structure of reality.
172

Uma abordagem centrada no usuário para compartilhamento e gerenciamento de dados entre aplicações web. / A user-centric approach to data management and sharing between Web applications.

Dominicini, Cristina Klippel, 01 August 2012
In recent years, it has become much easier for users to create and disseminate their data on the Web. In this context, a large amount of user data is created and used in a distributed way across multiple web applications. This distribution of data has created new needs for the user, who now has to manage data hosted on different sites and to share data between sites in a protected manner. However, the mechanisms that exist on the Web today cannot meet these needs, and the user is forced to rely on mechanisms that are time-consuming, cause data replication, are not sufficiently secure, or provide only limited control over their data. The objective of this thesis is therefore to propose a user-centric architecture for managing data on the Web and sharing that data with web applications. The proposed architecture is based on a detailed specification of the system requirements, using a method suited to the Web environment. To ensure that the proposed system is secure, an analysis is performed to identify its vulnerabilities and threats and to produce a mitigation plan. The technical feasibility of the proposed architecture is shown through the implementation of a proof-of-concept prototype. Finally, this prototype is used to show that the system meets the proposed objectives and requirements.
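To make the idea of user-centric sharing concrete, the following is a minimal sketch of a user-managed grant store that a hosting site could consult before releasing a data item to a requesting web application; the names and the flow are illustrative assumptions, not the architecture proposed in the thesis.

    # Hypothetical sketch: the user records which application may read which
    # data item, and the hosting site checks that grant before sharing.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Grant:
        owner: str   # user who owns the data
        item: str    # identifier of the data item
        app: str     # application allowed to access it
        scope: str   # e.g. "read"

    class GrantStore:
        def __init__(self):
            self._grants = set()

        def allow(self, owner, item, app, scope="read"):
            self._grants.add(Grant(owner, item, app, scope))

        def revoke(self, owner, item, app, scope="read"):
            self._grants.discard(Grant(owner, item, app, scope))

        def is_allowed(self, owner, item, app, scope="read"):
            return Grant(owner, item, app, scope) in self._grants

    store = GrantStore()
    store.allow("alice", "photos/2012", "print-service.example")
    print(store.is_allowed("alice", "photos/2012", "print-service.example"))  # True
    store.revoke("alice", "photos/2012", "print-service.example")
    print(store.is_allowed("alice", "photos/2012", "print-service.example"))  # False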
173

Gestion multisite de workflows scientifiques dans le cloud / Multisite management of scientific workflows in the cloud

Liu, Ji, 03 November 2016
Large-scale in silico scientific experiments generally contain multiple computational activities to process big data. Scientific Workflows (SWfs) enable scientists to model these data processing activities. Since SWfs deal with large amounts of data, data-intensive SWfs are an important issue. In a data-intensive SWf, the activities are related by data or control dependencies, and one activity may consist of multiple tasks that process different parts of the experimental data. In order to execute data-intensive SWfs automatically, Scientific Workflow Management Systems (SWfMSs) can be used to exploit High Performance Computing (HPC) environments provided by a cluster, grid or cloud. In addition, SWfMSs generate provenance data for tracing the execution of SWfs. Since a cloud offers stable services, diverse resources, and virtually infinite computing and storage capacity, it becomes an interesting infrastructure for SWf execution. Clouds basically provide three types of services, i.e. Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). SWfMSs can be deployed in the cloud using Virtual Machines (VMs) to execute data-intensive SWfs. With the pay-as-you-go model, cloud users do not need to buy physical machines, and the maintenance of the machines is ensured by the cloud providers. Nowadays, a cloud is typically made of several sites (or data centers), each with its own resources and data. Since a data-intensive SWf may process data distributed at different sites, SWf execution should be adapted to multisite clouds, using distributed computing and storage resources. In this thesis, we study methods to execute data-intensive SWfs in a multisite cloud environment. Some SWfMSs already exist, but most of them are designed for computer clusters, grids or a single cloud site, and the existing approaches are limited to static computing resources or single-site execution. We propose SWf partitioning algorithms and a task scheduling algorithm for SWf execution in a multisite cloud. Our proposed algorithms can significantly reduce the overall SWf execution time in a multisite cloud. In particular, we propose a general solution based on multi-objective scheduling for executing SWfs in a multisite cloud. The solution is composed of a cost model, a VM provisioning algorithm, and an activity scheduling algorithm. The VM provisioning algorithm uses our cost model to generate VM provisioning plans for executing SWfs at a single cloud site. The activity scheduling algorithm enables SWf execution with minimum cost, composed of execution time and monetary cost, in a multisite cloud. We carried out extensive experiments, and the results show that our algorithms can considerably reduce the overall cost of SWf execution in a multisite cloud.
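As an illustration of how a cost model can combine execution time and monetary cost, the sketch below ranks candidate VM provisioning plans with a weighted-sum formulation; the weights, normalization constants and plan format are assumptions for illustration, not the thesis's actual model.

    # Minimal sketch: score each provisioning plan by a weighted sum of its
    # normalized execution time and monetary cost, then pick the cheapest.
    from dataclasses import dataclass

    @dataclass
    class Plan:
        name: str
        exec_time_s: float    # estimated SWf execution time under this plan
        monetary_cost: float  # estimated cost of the VMs used by this plan

    def total_cost(plan, w_time=0.5, w_money=0.5, t_ref=3600.0, m_ref=10.0):
        """Weighted multi-objective cost; t_ref and m_ref normalize the objectives."""
        return w_time * (plan.exec_time_s / t_ref) + w_money * (plan.monetary_cost / m_ref)

    def best_plan(plans, **weights):
        return min(plans, key=lambda p: total_cost(p, **weights))

    plans = [Plan("2 small VMs", 5400, 4.0), Plan("8 large VMs", 1500, 12.0)]
    print(best_plan(plans, w_time=0.8, w_money=0.2).name)  # favors the faster plan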
174

D-CAPE: A Self-Tuning Continuous Query Plan Distribution Architecture

Sutherland, Timothy Michael, 05 May 2004
The study of systems for querying data streams, coined Data Stream Management Systems (DSMS), has gained in popularity over the last several years. This new area of research for the database community includes work on sensor networks, network intrusion detection, and the monitoring of medical, stock, or weather feeds. With this popularity come increased performance expectations: larger and faster data streams, larger and more complex query plans, and high volumes of possibly small queries. Due to the finite resources of a single query processor, future Data Stream Management Systems must distribute their workload to multiple query processors working together in a synchronized manner. This thesis discusses a new Distributed Continuous Query System (D-CAPE), developed here at WPI, that has the ability to distribute query plans over a large cluster of machines. We describe the architecture of the new system, policies for query plan distribution that improve overall performance, and techniques for self-tuning query plan re-distribution. D-CAPE is designed to be as flexible as possible for future research. It includes a multi-tiered architecture that scales to a large number of query processors. D-CAPE has also been designed to minimize the cost of the communication network by bundling synchronization messages, thus minimizing the packets sent between query processors. These messages are also incremental at run-time to help minimize the communication cost of D-CAPE. The architecture allows for the flexible incorporation of different distribution algorithms and operator reallocation policies. D-CAPE provides an operator reallocation algorithm that can seamlessly move one or more operators across any query processors in our computing cluster. We do so by creating "pipes" between query processors to allow the data streams to flow, and then filling these pipes with data streams once execution begins. Operator redistribution is accomplished by systematically reconnecting these pipes so as not to interrupt the data flow. Experimental evaluation using our real prototype system (not just simulation) shows that executing a query plan distributed over multiple machines causes no more overhead than processing it on a single centralized query processor, even for rather lightly loaded machines. Further, we find that distributing a query plan among a cluster of query processors can boost performance to up to twice that of a centralized DSMS. We conclude that the limitation of each query processor within the distributed network of cooperating processors lies not primarily in the volume of the data or the number of query operators, but rather in the number of data connections per processor and the allocation of the stateful, and thus most costly, operators. We also find that the overhead of distributing query operators is very low, allowing for potentially frequent dynamic redistribution of query plans during execution.
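To illustrate the kind of decision a self-tuning redistribution policy must make, the sketch below moves the largest operator that still narrows the load gap from the most loaded query processor to the least loaded one; this is a generic load-balancing heuristic written for illustration, not D-CAPE's actual reallocation algorithm.

    # Hypothetical rebalancing step over an operator allocation.
    def rebalance_step(allocation, op_cost):
        """allocation: {processor: set of operators}; op_cost: {operator: load estimate}.
        Returns (operator, source, destination) for one move, or None."""
        load = {p: sum(op_cost[o] for o in ops) for p, ops in allocation.items()}
        src = max(load, key=load.get)
        dst = min(load, key=load.get)
        gap = load[src] - load[dst]
        # requiring cost <= gap / 2 keeps the move from overshooting the balance
        movable = [o for o in allocation[src] if op_cost[o] <= gap / 2]
        if src == dst or not movable:
            return None
        op = max(movable, key=lambda o: op_cost[o])
        allocation[src].remove(op)
        allocation[dst].add(op)
        return op, src, dst

    cluster = {"qp1": {"join", "agg", "filter"}, "qp2": {"project"}}
    costs = {"join": 8.0, "agg": 3.0, "filter": 1.0, "project": 1.0}
    print(rebalance_step(cluster, costs))  # ('agg', 'qp1', 'qp2')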
176

Information Integration in a Grid Environment: Applications in the Bioinformatics Domain

Radwan, Ahmed M., 16 December 2010
Grid computing emerged as a framework for supporting complex operations over large datasets; it enables the harnessing of large numbers of processors working in parallel to solve computing problems that typically spread across various domains. We focus on the problems of data management in a grid/cloud environment. The broader context of designing a service-oriented architecture (SOA) for information integration is studied, identifying the main components for realizing this architecture. The BioFederator is a web-services-based data federation architecture for bioinformatics applications. Based on collaborations with bioinformatics researchers, several domain-specific data federation challenges and needs are identified. The BioFederator addresses these challenges and provides an architecture that incorporates a series of utility services addressing issues such as automatic workflow composition, domain semantics, and the distributed nature of the data. The design also incorporates a series of data-oriented services that facilitate the actual integration of data. Schema integration is a core problem in the BioFederator context. Previous methods for schema integration rely on the exploration, implicit or explicit, of the multiple design choices that are possible for the integrated schema. Such exploration relies heavily on user interaction; thus, it is time consuming and labor intensive. Furthermore, previous methods have ignored the additional information that typically results from the schema matching process, that is, the weights and, in some cases, the directions associated with the correspondences. We propose a more automatic approach to schema integration that is based on the use of directed and weighted correspondences between the concepts that appear in the source schemas. A key component of our approach is a ranking mechanism for the automatic generation of the best candidate schemas. The algorithm gives more weight to schemas that combine concepts with higher similarity or coverage; thus, it makes certain decisions that would otherwise likely be taken by a human expert. We show that the algorithm runs in polynomial time and has good performance in practice. The proposed methods and algorithms are compared to state-of-the-art approaches. The BioFederator design, services, and usage scenarios are discussed, and we demonstrate how our architecture can be leveraged in real-world bioinformatics applications. We performed a whole-human-genome annotation for nucleosome exclusion regions; the resulting annotations were studied and correlated with tissue specificity, gene density, and other important gene regulation features. We also study data processing models in grid environments. MapReduce is a popular parallel programming model that is proven to scale. However, using low-level MapReduce for general data processing tasks poses the problem of developing, maintaining, and reusing custom low-level user code. Several frameworks have emerged to address this problem; they share a top-down approach, where a high-level language is used to describe the problem semantics and the framework translates this description into MapReduce constructs. We highlight several issues in the existing approaches and propose a refined MapReduce model that addresses the maintainability and reusability issues without sacrificing the low-level controllability offered by directly writing MapReduce code. We present MapReduce-LEGOS (MR-LEGOS), an explicit model for composing MapReduce constructs from simpler components, namely "Maplets", "Reducelets" and, optionally, "Combinelets". Maplets and Reducelets are standard MapReduce constructs that can be composed into aggregated constructs describing the problem semantics. This composition can be viewed as defining a micro-workflow inside the MapReduce job. Using the proposed model, complex problem semantics can be defined in the encompassing micro-workflow provided by MR-LEGOS while keeping the building blocks simple. We discuss the design details, its main features, and usage scenarios. Through experimental evaluation, we show that the proposed design is highly scalable and has good performance in practice.
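The composition idea behind such a model can be pictured with a small sketch; the maplet/reducelet interfaces below are hypothetical stand-ins chosen for illustration, not the MR-LEGOS API itself.

    # Hypothetical sketch: chain simple "maplets" into one mapper and pair it
    # with a "reducelet", forming a micro-workflow that one MapReduce job runs.
    from collections import defaultdict

    def compose_maplets(*maplets):
        """Each maplet maps one record to an iterable of (key, value) pairs."""
        def mapper(record):
            items = [record]
            for maplet in maplets:
                items = [out for item in items for out in maplet(item)]
            return items
        return mapper

    # Hypothetical building blocks: tokenize lines, lowercase keys, sum counts.
    tokenize = lambda line: [(w, 1) for w in line.split()]
    lowercase = lambda kv: [(kv[0].lower(), kv[1])]
    sum_reducelet = lambda key, values: (key, sum(values))

    mapper = compose_maplets(tokenize, lowercase)
    pairs = [kv for line in ["Grid grid cloud"] for kv in mapper(line)]
    groups = defaultdict(list)          # grouping the framework would normally do
    for k, v in pairs:
        groups[k].append(v)
    print([sum_reducelet(k, vs) for k, vs in groups.items()])  # [('grid', 2), ('cloud', 1)]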
177

A generic information platform for product families

Sivard, Gunilla, January 2001
The research work detailed in this dissertation relates to the computer representation of information concerning product families and product platforms. Common to competitive companies today is the quest to design products and processes that meet a large variety of customer needs, in a short time and with few resources. One way to succeed in this endeavor is to plan for the variety and design a modular, or adaptive, product family based on a common platform of resources. To further increase the efficiency of delivering customized products in time, a computer-processable model of the family is created, which is used to realize a customer-specific product variant during the order phase. The objective of this research is to define a generally applicable model of product family information for the purpose of supporting various applications and achieving an efficient utilization of information. The approach is to define a model of the product family according to the theory of Axiomatic Design, which reflects the trace from various requirements to functions and to different properties and components of the product. By representing design information in a generally applicable format, this information can be reused when building the configuration models of the order phase. By adapting the model to an existing standard, information exchange between systems is supported, and access is provided to information concerning detailed physical parts as well as constructs addressing use and version management. Contributions include a description of a model architecture with reusable functional solutions, interfaces, structures, and interrelations between platform solutions and the product family. Further, it is described how to extend and model the domains and interrelations of Axiomatic Design in an information model adapted to the product modeling standard ISO 10303-214.
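As a rough illustration of keeping an Axiomatic-Design-style trace from requirements to platform components in machine-readable form, the structure below is a hypothetical simplification, not the information model defined in the dissertation:

    # Hypothetical sketch: functional requirements (FRs) mapped to design
    # parameters (DPs) and the platform components that realize them, so an
    # order-phase configurator can reuse the design-phase information.
    from dataclasses import dataclass, field

    @dataclass
    class DesignParameter:
        name: str
        component: str   # platform component realizing this parameter

    @dataclass
    class FunctionalRequirement:
        description: str
        dps: list = field(default_factory=list)  # DPs satisfying this FR

    @dataclass
    class ProductFamily:
        name: str
        frs: list = field(default_factory=list)

        def components_for(self, fr_description):
            """Trace an FR to the platform components that realize it."""
            return [dp.component for fr in self.frs
                    if fr.description == fr_description for dp in fr.dps]

    family = ProductFamily("scooter", frs=[
        FunctionalRequirement("provide propulsion",
                              [DesignParameter("engine 50cc", "engine_A50")])])
    print(family.components_for("provide propulsion"))  # ['engine_A50']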
178

Robustness in Automatic Physical Database Design

El Gebaly, Kareem, January 2007
Automatic physical database design tools rely on ``what-if'' interfaces to the query optimizer to estimate the execution time of the training query workload under different candidate physical designs. The tools use these what-if interfaces to recommend physical designs that minimize the estimated execution time of the input training workload. Minimizing estimated execution time alone can lead to designs that are not robust to query optimizer errors and workload changes. In particular, if the optimizer makes errors in estimating the execution time of the workload queries, the recommended physical design may actually degrade the performance of these queries; in this sense, the physical design is risky. Furthermore, if the production queries are slightly different from the training queries, the recommended physical design may not benefit them at all; in this sense, the physical design is not general. We define Risk and Generality as two new measures aimed at evaluating the robustness of a proposed physical database design, and we show how to extend the objective function optimized by a generic physical design tool to take these measures into account. We have implemented a physical design advisor in PostgreSQL, and we use it to experimentally demonstrate the usefulness of our approach. We show that our two new metrics result in physical designs that are more robust, which means that the user can implement them with a higher degree of confidence. This is particularly important as we move towards truly zero-administration database systems, in which a DBA does not have the opportunity to vet the recommendations of the physical design tool before they are applied.
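One simple way to picture an objective function extended with such measures is a penalized score like the sketch below; the weighting scheme and the [0, 1] normalization of Risk and Generality are assumptions made for illustration, not the definitions used in the thesis.

    # Hypothetical scoring: besides the estimated workload time, penalize risky
    # designs and reward general ones; the advisor picks the lowest score.
    def design_score(est_time, risk, generality, w_risk=0.3, w_gen=0.3):
        """Lower is better. est_time: optimizer's estimate for the training
        workload; risk and generality are assumed normalized to [0, 1]."""
        return est_time * (1.0 + w_risk * risk) * (1.0 + w_gen * (1.0 - generality))

    candidates = {
        "index_set_A": (120.0, 0.8, 0.2),   # fast on paper, but risky and narrow
        "index_set_B": (150.0, 0.1, 0.7),   # slower estimate, more robust
    }
    best = min(candidates, key=lambda d: design_score(*candidates[d]))
    print(best)  # "index_set_B" under these weights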
180

A Study of the Implementation of Collaborative Product Commerce System in Taiwan

Chen, Kuan-Hua, 28 January 2004
In recent years, severe global market competition, rising manpower costs, decreasing foreign trade, and low-cost labor in China and Southeast Asian countries (e.g. Vietnam) have forced Taiwanese manufacturers, such as the motorcycle industry, to move their production factories to those countries. This phenomenon has drawn the Taiwanese government's attention, and the crisis has encouraged industry to develop higher-level R&D and design centers. Official agencies (e.g. the Industrial Development Bureau) also provide enterprises with financial support for R&D and design, with New Product Development (NPD) among the major subsidized items. Take the Taiwanese motorcycle industry as an example: NPD is an essential competitive strategy for an enterprise. On the one hand, this strategy has helped the Taiwanese motorcycle industry become independent of Japanese technical domination; on the other hand, it has created differentiation in the motorcycle industry and consolidated the foundation for entering international markets. This is also what domestic manufacturing enterprises strive for. The NPD process involves several stages; for example, Cooper (1994) proposed seven stages: product composition, initial evaluation, concept design, product development, product testing, engineering trial production, and limited market launch. Many units or departments participate in NPD; in the motorcycle development process, they include merchandise planning, sales, R&D, manufacturing, mold design, and quality control departments, and even parts suppliers or motorcycle agents. This collaborative development approach has the advantages of pooling expertise for better results and of fostering cooperation. However, it still suffers from time and effort wasted on inter-departmental interaction and manual data exchange, and from problems of data accuracy (e.g. versions of design drawings). To cope with these problems, most enterprises rely on existing IT systems, such as simple e-mail, or the more complex ERP (enterprise resource planning) or PDM (product data management) systems. Each of these systems has its own focus: e-mail supports communication; ERP integrates manufacturing, human resources, finance, and marketing information; PDM emphasizes engineering data management. From the NPD point of view, these systems provide only partial functions and cannot support the requirements of the entire collaborative process. For example, e-mail does not support synchronous communication; ERP lacks the design models required by the R&D department; PDM holds only engineering data, so other departments must develop additional software to obtain related data. The swift progress of IT, together with the cooperation demands of business operations among enterprises, departments, and individuals, has drawn attention to collaborative commerce, which can also remedy the disadvantages of e-mail, ERP, or PDM in the NPD process. Collaborative commerce comprises collaborative planning, collaborative marketing, collaborative product commerce (or development), and collaborative service; among these, collaborative product development is the most closely tied to NPD. The main vendors are currently PTC, HP, and IBM. The merits of such systems lie in effectively controlling the NPD process, establishing NPD operation standards, and accumulating experience in new product design and manufacturing.
For example, in 1995 Airbus in France used PTC Windchill for collaborative aircraft design, and in 1999 a Taiwanese manufacturer, under government subsidy, applied such a system to the design of a new motorcycle model. The main objective of introducing collaborative product development software is to apply IT to support the NPD process. The introduction process is a critical period in which an enterprise learns whether the adoption will succeed, and adaptation during this process is the key factor in determining the success or failure of the IT. Therefore, some scholars discuss the IT introduction process from the viewpoint of adaptation. Leonard-Barton (LB) (1988) proposed a mutual adaptation model between technology and organization to resolve misalignments that arise during the introduction process in the technology (the original IT specification), the delivery system (training courses), and the performance criteria (the impact on activities). Susman et al. (2003) argue that when collaborative technology is used, misalignments between the technology and the work, the team, and the organization should be resolved. DeSanctis and Poole (1994) proposed adaptive structuration theory (AST), which emphasizes appropriation with respect to technology, work, organizational environment, and group: the higher the appropriation, the higher the decision performance. Tyre and Orlikowski (1994) hold that technology adaptation is not gradual and continuous but highly discontinuous; discrepancy events occur intermittently during adaptation, and each such event gives the enterprise an opportunity to review the suitability of the existing process or a means to modify it. Although these studies provide important results, the research of Majchrzak et al. (2000) on the new-technology introduction process shows that they still cannot clearly describe all the phenomena involved. Hence, Majchrzak et al. studied adaptation during a rocket design project that used collaborative technology (such as e-mail, data sharing, or electronic whiteboards). Their work connects collaborative technology and NPD, but it examined only small and simple collaborative technologies (such as e-mail) and lacks results for large, complex collaborative product development software. Moreover, although their results concern the application of NPD, they do not cover the adaptation that occurs at each NPD stage (such as engineering trial production). Furthermore, the mature experience of western companies, such as Airbus in France, in introducing collaborative product development software for NPD is worth consulting, but the specific conditions of different countries must be taken into account. In Taiwan, cases in which collaborative product development software supports NPD are still rare, and such introduction experiences are worth investigating in depth for the reference of other enterprises. Therefore, this study selects a case of a Taiwanese manufacturer that introduced collaborative product development software and, drawing on related adaptation theories (such as the LB model, AST, and discrepancy events), thoroughly investigates the adaptation conditions and analyzes the results before, during, and after the introduction.
