11

Multiple ontology integration: reuse and conflict management

Cobe, Raphael Mendes de Oliveira 10 December 2014 (has links)
Knowledge reuse is a key task in any system development. Nevertheless, careless knowledge reuse may generate outcomes that conflict with the system's goal, leading such systems to behave unpredictably. With that in mind, we studied the consequences of knowledge reuse in ontologies based on description logics, focusing mainly on conflicts arising from ontology merging. We investigated and compared the features that current ontology development tools provide for handling these conflicts, and how the theoretical literature proposes to deal with the same issues. We developed both a logical and a software framework, organized as a process, that aims to help the ontology designer resolve conflicts arising from ontology merging. The process groups tasks that are normally described separately in the literature; we believe that unifying these approaches yields a better resolution of merging conflicts. We concentrated our efforts on algorithms for building maximal sub-ontologies in which such conflicts do not occur, as well as on ordering those sets according to relevance criteria commonly discussed in the literature. These algorithms were implemented in software and tested on both automatically generated and real data.
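The core algorithmic task named in this abstract, building maximal conflict-free sub-ontologies and ordering them, can be pictured with a small sketch. This is not the thesis's algorithm: it is a brute-force enumeration assuming axioms are hashable values, with `is_consistent` standing in for a call to a description-logic reasoner; practical systems use far more efficient diagnosis-based methods.

```python
from itertools import combinations

def maximal_consistent_subsets(axioms, is_consistent):
    """Enumerate maximal subsets of `axioms` accepted by `is_consistent`.
    Brute force, exponential in len(axioms); for illustration only."""
    found = []
    for size in range(len(axioms), 0, -1):
        for subset in combinations(axioms, size):
            s = frozenset(subset)
            if any(s <= m for m in found):
                continue  # already inside a larger consistent subset
            if is_consistent(s):
                # Maximal: every consistent strict superset would have
                # been found at a larger size and caught s above.
                found.append(s)
    return found

def rank_by_size(subsets):
    # One common relevance criterion from the literature: prefer
    # sub-ontologies that retain more of the merged axioms.
    return sorted(subsets, key=len, reverse=True)
```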
12

Distributed Clustering for Scaling Classic Algorithms

Hore, Prodip 01 July 2004 (has links)
Clustering large data sets has recently emerged as an important area of research. The ever-increasing size of data sets and the poor scalability of clustering algorithms have drawn attention to distributed clustering for partitioning large data sets. Centrally pooling distributed data can be expensive, and there may be constraints on data sharing between distributed locations due to the privacy, security, or proprietary nature of the data. In this work we propose an algorithm to cluster large-scale data sets without centrally pooling the data. Data at distributed sites are clustered independently, i.e., without any communication among the sites. After partitioning the data at each local site, only the centroids of each site are sent to a central location, so the bandwidth cost in a wide-area network scenario is very low. The distributed sites exchange neither cluster labels nor individual data features, providing a framework for privacy-preserving distributed clustering. Centroids from each local site form an ensemble of centroids at the central site. Our assumption is that data at all distributed locations are drawn from the same underlying distribution, so the set of centroids obtained by partitioning the data at each location gives partial information about the positions of the cluster centroids in that distribution. The problem of finding a global partition from the limited knowledge in the centroid ensemble can then be viewed as the problem of reaching a global consensus on the positions of the cluster centroids. This consensus is reached by grouping the centroids into consensus chains and computing the weighted mean of the centroids in each consensus chain to represent a global cluster centroid. Each example is then assigned to its nearest global centroid by Euclidean distance. Experimental results show that the quality of clusters generated by our algorithm is similar to that of clusters generated by clustering all the data at once. We show that the examples disputed between the clusters generated by our algorithm and those generated by clustering all the data at once lie on cluster borders, as expected. We also propose a centroid-filtering algorithm that improves the partitions formed by our algorithm.
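A minimal sketch of this pipeline follows. The consensus-chain step is stood in for by a weighted k-means over the ensemble of local centroids (weights are local cluster sizes), a simplification of the thesis's method; note that k-means centroids under sample weights are exactly weighted means, matching the abstract's use of weighted means over consensus chains. `scikit-learn` supplies the per-site k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

def distributed_kmeans(sites, k):
    """Cluster each site locally, then build global centroids centrally.
    `sites` is a list of (n_i, d) arrays; only k centroids and k
    cluster sizes leave each site, so the raw data stays local."""
    centroids, weights = [], []
    for X in sites:
        km = KMeans(n_clusters=k, n_init=10).fit(X)
        centroids.append(km.cluster_centers_)
        weights.append(np.bincount(km.labels_, minlength=k))
    ensemble = np.vstack(centroids)            # (num_sites * k, d)
    w = np.concatenate(weights).astype(float)  # local cluster sizes
    # Stand-in for the consensus-chain step: weighted k-means over the
    # centroid ensemble; each global centroid is the weighted mean of
    # the local centroids grouped with it.
    consensus = KMeans(n_clusters=k, n_init=10).fit(ensemble, sample_weight=w)
    return consensus.cluster_centers_

def assign(X, global_centroids):
    """Label each example by its nearest global centroid (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - global_centroids[None, :, :], axis=2)
    return d.argmin(axis=1)
```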
13

A user interface for the ontology merging tool SAMBO

Abdulahad, Bassam, Lounis, Georgios January 2004 (has links)
Ontologies have become an important tool for representing data in a structured manner. Merging ontologies allows the creation of ontologies that can later be composed into larger ontologies, as well as the recognition of patterns and similarities between ontologies. Ontologies are nowadays used in many areas, including bioinformatics. In this thesis, we present a desktop version of SAMBO, a system for merging ontologies represented in the languages OWL and DAML+OIL. The system was developed in Java with JDK (Java Development Kit) 1.4.2. The user can open a file locally or from the network and can merge ontologies using suggestions generated by the SAMBO algorithm. SAMBO provides a user-friendly graphical interface that guides the user through the merging process.
14

Similarity-Driven Cluster Merging Method for Unsupervised Fuzzy Clustering

Xiong, Xuejian, Tan, Kian Lee 01 1900 (has links)
In this paper, a similarity-driven cluster merging method is proposed for unsupervised fuzzy clustering. The cluster merging method is used to resolve the problem of cluster validation. Starting with an overspecified number of clusters, pairs of similar clusters are merged based on the proposed similarity-driven cluster merging criterion. The similarity between clusters is calculated from a fuzzy cluster similarity matrix, and an adaptive threshold is used for merging. In addition, a modified generalized objective function is used for prototype-based fuzzy clustering. The function includes the p-norm distance measure as well as principal components of the clusters; the number of principal components is determined automatically from the data being clustered. The performance of this unsupervised fuzzy clustering algorithm is evaluated in several experiments on an artificial data set and a gene expression data set. / Singapore-MIT Alliance (SMA)
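The merging loop can be made concrete with a hedged sketch. The paper's fuzzy cluster similarity matrix and adaptive threshold are not specified in the abstract, so the sketch substitutes cosine similarity between membership rows and a mean-plus-one-std threshold; only the control flow (merge the most similar pair until the threshold stops you) reflects the described method. A convenient property: adding two membership rows keeps each example's memberships summing to one.

```python
import numpy as np

def merge_similar_clusters(U):
    """Merge rows of a fuzzy membership matrix U (clusters x examples)
    while the most similar pair exceeds an adaptive threshold."""
    while U.shape[0] > 1:
        norms = np.linalg.norm(U, axis=1)
        S = (U @ U.T) / np.outer(norms, norms)   # cluster similarity matrix
        np.fill_diagonal(S, 0.0)
        off = S[~np.eye(S.shape[0], dtype=bool)]
        tau = off.mean() + off.std()             # adaptive threshold (stand-in)
        i, j = np.unravel_index(S.argmax(), S.shape)
        if S[i, j] <= tau:
            break
        merged = U[i] + U[j]                     # merge the most similar pair
        U = np.vstack([np.delete(U, [i, j], axis=0), merged])
    return U
```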
15

A Practical Approach to Merging Multidimensional Data Models

Mireku Kwakye, Michael 30 November 2011 (has links)
Schema merging is the process of incorporating data models into an integrated, consistent schema from which query solutions satisfying all incorporated models can be derived. The efficiency of such a process relies on the effective semantic representation of the chosen data models, as well as on the mapping relationships between the elements of the source data models. Consider a scenario where, as a result of company mergers or acquisitions, a number of related but possibly disparate data marts need to be integrated into a global data warehouse. The ability to retrieve data across these disparate but related data marts poses an important challenge. Intuitively, forming an all-inclusive data warehouse involves the tedious tasks of identifying related fact and dimension table attributes, as well as designing a schema merge algorithm for the integration. Additionally, evaluating the combined set of correct answers to queries likely to be independently posed to such data marts becomes difficult to achieve. Model management refers to a high-level, abstract programming language designed to efficiently manipulate schemas and mappings. In particular, model management operations such as match, compose mappings, apply functions, and merge offer a way to handle the above-mentioned data integration problem within the domain of data warehousing. In this research, we introduce a methodology, based on model management, for integrating star-schema source data marts into a single consolidated data warehouse. Our methodology develops three main streamlined steps to facilitate the generation of a global data warehouse: we adopt techniques for deriving attribute correspondences and for schema mapping discovery, and we formulate and design a merge algorithm based on multidimensional star schemas, which is the core contribution of this research. Our approach focuses on delivering a polynomial-time solution suited to the expected volume of data and its associated large-scale query processing. The experimental evaluation shows that an integrated schema, alongside instance data, can be derived based on the type of mappings adopted in the mapping discovery step. Adopting Global-And-Local-As-View (GLAV) mapping models delivered a maximally-contained or exact representation of all fact and dimension instance data tuples needed in query processing on the integrated data warehouse. Additionally, different forms of conflicts, such as semantic conflicts between related or unrelated dimension entities and descriptive conflicts between differing attribute data types, were encountered and resolved in the developed solution. Finally, this research highlights some critical and inherent issues regarding functional dependencies in mapping models, integrity constraints at the source data marts, and multi-valued dimension attributes. These issues were encountered during the integration of the source data marts and when evaluating queries processed on the merged data warehouse against those on the independent data marts.
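The first of the three steps, deriving attribute correspondences, can be pictured with a toy matcher. This is an assumption-laden stand-in, not the thesis's match operator: it pairs fact and dimension attribute names purely by string similarity, whereas the actual technique also draws on semantics and data types. The example names and the 0.7 threshold are illustrative.

```python
from difflib import SequenceMatcher

def attribute_correspondences(attrs_a, attrs_b, threshold=0.7):
    """Pair attributes of two star-schema data marts by name similarity."""
    def sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()
    matches = []
    for a in attrs_a:
        best = max(attrs_b, key=lambda b: sim(a, b))
        if sim(a, best) >= threshold:
            matches.append((a, best, round(sim(a, best), 2)))
    return matches

# e.g. attribute_correspondences(["CustomerKey", "SalesAmount"],
#                                ["CustKey", "Sales_Amt"])
```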
16

Scalable on-demand streaming of stored complex multimedia

Zhao, Yanping 09 August 2004 (has links)
Previous research has developed a number of efficient protocols for streaming popular multimedia files on demand to potentially large numbers of concurrent clients. These protocols can achieve server bandwidth usage that grows much slower than linearly with the file request rate and with the inverse of the client start-up delay. This thesis makes three main contributions to the design and performance evaluation of such protocols. The first contribution is an investigation of the network bandwidth requirements for scalable on-demand streaming. The results suggest that the minimum required network bandwidth typically scales as K/ln(K) as the number of client sites K increases for a fixed request rate per client site, and as ln(N/(ND+1)) as the total file request rate N increases or the client start-up delay D decreases, for a fixed number of sites. Multicast delivery trees configured to minimize network bandwidth usage rather than latency are found to reduce the minimum required network bandwidth only modestly. Furthermore, it is possible to achieve close to the minimum possible network and server bandwidth usage simultaneously with practical scalable delivery protocols. Second, the thesis addresses scalable on-demand streaming of a more complex type of media than is typically considered, namely variable bit rate (VBR) media. A lower bound on the minimum required server bandwidth for scalable on-demand streaming of VBR media is derived. The lower bound analysis motivates the design of a new immediate-service protocol termed VBR bandwidth skimming (VBRBS), which uses constant bit rate streaming when sufficient client storage space is available, yet fruitfully exploits knowledge of the VBR profile. Finally, the thesis proposes non-linear media containing parallel sequences of data frames, among which clients can dynamically select at designated branch points, and investigates the design and performance issues in scalable on-demand streaming of such media. Lower bounds on the minimum required server bandwidth for various non-linear media scalable on-demand streaming approaches are derived, practical scalable delivery protocols for non-linear media are developed, and, as a proof of concept, a simple scalable delivery protocol is implemented in a non-linear media streaming prototype system.
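The two scaling expressions quoted in the abstract are easy to explore numerically. The snippet below simply evaluates K/ln(K) and ln(N/(ND+1)) exactly as stated, with constant factors omitted; the printout makes visible how the second expression saturates near ln(1/D) as the request rate N grows.

```python
import math

def network_bw_scaling(K):
    """Stated scaling with the number of client sites K
    (fixed per-site request rate): K / ln(K)."""
    return K / math.log(K)

def server_bw_scaling(N, D):
    """Stated scaling with total request rate N and start-up delay D:
    ln(N / (N*D + 1)); approaches ln(1/D) as N grows."""
    return math.log(N / (N * D + 1))

for N in (10, 100, 1000, 10000):
    print(N, round(server_bw_scaling(N, D=0.01), 2))
# -> 2.21, 3.91, 4.51, 4.6 : saturating near ln(1/0.01) = 4.61
```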
19

Extraction of Contextual Knowledge and Ambiguity Handling for Ontology in Virtual Environment

Lee, Hyun Soo 2010 August 1900 (has links)
This dissertation investigates the extraction of knowledge from a known environment. Virtual ontology, the extracted knowledge, is defined as a structure of a virtual environment with semantics. While many existing 3D reconstruction approaches can generate virtual environments without structure and related knowledge, the Metaearth architecture is proposed as a more descriptive data structure for virtual ontology. The architecture consists of four layers: the virtual space layer represents interactions and relationships between virtual components; the library layers support the design of large-scale virtual environments with less redundancy; the mapping layer links the library layer to the virtual space layer; and the ontology layer provides the context for the extracted knowledge. The dissertation suggests two construction methodologies. The first generates a scene structure from a 2D image; unlike other scene understanding techniques, it generates scene ontology without prior knowledge or human intervention, using a new and effective fuzzy color-based over-segmentation method as an intermediate step. The second generates virtual ontology with 3D information from multi-view scenes. The many ambiguities in extracting 3D information are resolved by a new fuzzy dynamic programming (FDP) method; the hybrid of FDP and 3D reconstruction generates more accurate virtual ontology with 3D information. A virtual model equipped with virtual ontology allows contextual knowledge to be mapped into the Metaearth architecture via the proposed isomorphic matching method. The suggested procedure enables the automatic and autonomous processing demanded in virtual interaction analysis with far less effort and computational time.
20

Discuss the International Merging Activities in Human Resource Management Point of View-According to the Case of TFT-LCD Industry at Taiwan and Japan

Yang, Lih-Shine 09 February 2004 (has links)
First of all, we emphasize the global market status of TFT-LCD, which is also our government's most important economic investment in the near future. We believe it is necessary for Taiwan to cooperate with Japan to reach the global No. 1 position in the TFT-LCD industry, since Japan initiated and masters many of the related technologies. A-Company, however, gained the right opportunity for multinational combination by making good use of several key factors: organization integration, communication, and human resource management. Because TFT-LCD products have short life cycles and fast-advancing technology, A-Company must greatly expand its investment annually. Although the company has independent technologies, it still has to rapidly enhance its competitiveness in global purchasing, global distribution, and global service through cooperation with other technical corporations. Thus, how to take advantage of both companies' internal resources to create even higher value is meaningful to A-Company. This research shows that organization integration, personal factors, organizational commitment, and individual defensiveness affect merger outcomes. We designed a questionnaire and used SPSS for quantitative analysis to strengthen the reliability and validity of our research, and we present the final results and conclusions for A-Company's reference. The questionnaire reveals that A-Company employees consider high-level managers to play quite an important role in organization integration; managers' communication and handling of human resource management directly affect merger outcomes. Nevertheless, the company appears not to have participated in consolidation planning before the combination, nor to have elaborated on communication afterwards, which led to negative effects after the combination. From this, we see that manpower is the most important resource for a consolidating business. Our conclusion is that many failure factors of overseas and domestic business combinations also apply to the TFT-LCD company, and we therefore believe that the success factors of its merger can be imitated by other industries as well. It is a pity that A-Company did not make good use of academic research and successful measures from other businesses. In sum, the special TFT-LCD industry involves not only multinational cooperation but also the merging of multicultural groups, which suggests that the participation of academia will speed up this industry's growth.
