  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
611

Knowledge-enhanced text classification : descriptive modelling and new approaches

Martinez-Alvarez, Miguel January 2014 (has links)
The knowledge available to be exploited by text classification and information retrieval systems has changed significantly, both in nature and in quantity, in recent years. Nowadays, there are several sources of information that can potentially improve the classification process, and systems should be able to adapt to incorporate multiple sources of available data in different formats. This is especially important in environments where the required information changes rapidly and its utility may be contingent on timely implementation. For these reasons, the importance of adaptability and flexibility in information systems is growing rapidly. Current systems are usually developed for specific scenarios; as a result, significant engineering effort is needed to adapt them when new knowledge appears or the information needs change. This research investigates the use of knowledge within text classification from two different perspectives. On the one hand, it applies descriptive approaches to the seamless modelling of text classification, focusing on knowledge integration and complex data representation. The main goal is to achieve a scalable and efficient approach to rapid prototyping for text classification that can incorporate different sources and types of knowledge, and to minimise the gap between the mathematical definition and the modelling of a solution. On the other hand, it improves steps of the classification process where knowledge exploitation has traditionally not been applied. In particular, this thesis introduces two classification sub-tasks, namely Semi-Automatic Text Classification (SATC) and Document Performance Prediction (DPP), and several methods to address them. SATC focuses on selecting the documents that are most likely to be wrongly assigned by the system, so that they can be classified manually while the rest are labelled automatically.
Document Performance Prediction estimates the classification quality that will be achieved for a document, given a classifier. In addition, we also propose a family of evaluation metrics to measure degrees of misclassification, and an adaptive variation of k-NN.
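A minimal sketch of the SATC routing step described above, assuming the classifier exposes a per-document confidence score (the tuple layout and the 0.8 threshold are illustrative, not the thesis's actual method):

```python
def route_documents(scored_docs, confidence_threshold=0.8):
    """Split documents into an auto-labelled set and a manual-review set.

    scored_docs: iterable of (doc_id, predicted_label, confidence) tuples,
    where confidence is the classifier's estimate in [0, 1].
    """
    auto, manual = [], []
    for doc_id, label, confidence in scored_docs:
        if confidence >= confidence_threshold:
            # confident enough: accept the system's label automatically
            auto.append((doc_id, label))
        else:
            # likely misclassification: send to a human annotator
            manual.append(doc_id)
    return auto, manual
```

Lowering the threshold trades annotation effort for a higher risk of accepting wrong labels, which is exactly the trade-off the SATC evaluation metrics would have to quantify.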
612

Personagem e narrativa no documentário Jogo de cena

Baumhardt, Virgínia Caetano 24 January 2011 (has links)
This study aims to understand to what extent the construction of the character in the documentary Jogo de cena (2007), by Eduardo Coutinho, contributes to the understanding of the film's narrative. The theoretical approach to the character proceeds from two perspectives: the content of the testimonies and the way they are realized in the film. The second theoretical axis concerns the study of cinematic narrative, taking into account aspects such as description, the imaginary, and narration. The study also examines the concept of documentary and how it is understood in the work of director Eduardo Coutinho. The methodology used is film analysis, whose function is to dissect the research object in order to reconstruct it from the theoretical point of view formulated in the study.
613

A probabilistic and incremental model for online classification of documents : DV-INBC

Rodrigues, Thiago Fredes January 2016 (has links)
Recentemente, houve um aumento rápido na criação e disponibilidade de repositórios de dados, o que foi percebido nas áreas de Mineração de Dados e Aprendizagem de Máquina. Este fato deve-se principalmente à rápida criação de tais dados em redes sociais. Uma grande parte destes dados é feita de texto, e a informação armazenada neles pode descrever desde perfis de usuários a temas comuns em documentos como política, esportes e ciência, informação bastante útil para várias aplicações. Como muitos destes dados são criados em fluxos, é desejável a criação de algoritmos com capacidade de atuar em grande escala e também de forma on-line, já que tarefas como organização e exploração de grandes coleções de dados seriam beneficiadas por eles. Nesta dissertação um modelo probabilístico, on-line e incremental é apresentado, como um esforço em resolver o problema apresentado. O algoritmo possui o nome DV-INBC e é uma extensão ao algoritmo INBC. As duas principais características do DV-INBC são: a necessidade de apenas uma iteração pelos dados de treino para criar um modelo que os represente; não é necessário saber o vocabulário dos dados a priori. Logo, pouco conhecimento sobre o fluxo de dados é necessário. Para avaliar a performance do algoritmo, são apresentados testes usando datasets populares. / Recently, the fields of Data Mining and Machine Learning have seen a rapid increase in the creation and availability of data repositories, mainly due to the rapid creation of such data in social networks. A large part of this data consists of text documents, and the information stored in these texts can range from a description of a user profile to common textual topics such as politics, sports and science, information that is very useful for many applications. Moreover, since much of this data is created in streams, scalable and on-line algorithms are desirable, because tasks like the organisation and exploration of large document collections would benefit from them.
In this thesis an incremental, on-line and probabilistic model for document classification is presented, in an effort to tackle this problem. The algorithm is called DV-INBC and is an extension of the INBC algorithm. The two main characteristics of DV-INBC are: only a single scan over the training data is necessary to create a model of it, and the data vocabulary need not be known a priori. Therefore, little knowledge about the data stream is needed. To assess its performance, tests using well-known datasets are presented.
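DV-INBC itself is not reproduced in this record, but the two properties the abstract highlights (a single scan over the training data, and a vocabulary that grows on the fly) are shared by a plain incremental multinomial naive Bayes classifier, sketched here as an illustration; the class and method names are ours, not the thesis's:

```python
from collections import defaultdict
import math

class IncrementalNB:
    """Single-pass, incrementally trained multinomial naive Bayes.

    The vocabulary grows as new words arrive, so nothing about the
    data stream needs to be known in advance."""

    def __init__(self):
        self.class_docs = defaultdict(int)                    # docs seen per class
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.class_words = defaultdict(int)                   # total words per class
        self.vocab = set()
        self.total_docs = 0

    def update(self, tokens, label):
        # one training example is seen exactly once and never revisited
        self.class_docs[label] += 1
        self.total_docs += 1
        for t in tokens:
            self.word_counts[label][t] += 1
            self.class_words[label] += 1
            self.vocab.add(t)

    def predict(self, tokens):
        best, best_lp = None, float("-inf")
        v = len(self.vocab) or 1
        for c in self.class_docs:
            lp = math.log(self.class_docs[c] / self.total_docs)
            for t in tokens:
                # Laplace smoothing handles words never seen for class c
                lp += math.log((self.word_counts[c][t] + 1) /
                               (self.class_words[c] + v))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

Because `update` only increments counters, the model can be trained on a stream and queried at any point in between updates.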
614

Legal Design

Kuk, Michal January 2019 (has links)
The legal world finds itself in a situation where rising demand and new technologies force it to seek new ways to practise law. It is no longer sufficient to provide bare legal services without acknowledging context and users' actual needs. Some degree of automation and cost-cutting of basic legal work has found its way into legal practice. In line with trends towards transparency and openness in state administration, demand is also rising to make law more affordable. The legal world was able to resist change for a long time, owing to lawyers' lack of motivation to change and to clients' ignorance of the possibilities, based on their limited legal understanding. However, the situation is starting to change, as can be seen in the rising number of legal-innovation start-ups. There are many possible solutions to these new challenges, and one of them is Legal Design. This discipline tries to implement design methodology in legal services with the goal of creating valuable innovations. Fundamental to it is a human- or user-centred approach, in order to provide solutions that better suit users. For example, it aims to develop the contract from an ex post problem-solving tool into a relationship-building tool, and the judicial system not only to decide disputes but also to strengthen a sense of...
615

Modelagem gerativa para sumarização automática multidocumento / Generative modeling for multi-document summarization

Jorge, María Lucía Del Rosario Castro 09 March 2015 (has links)
A Sumarização Multidocumento consiste na produção automática de um único sumário a partir de um conjunto de textos que tratam de um mesmo assunto. Essa tarefa vem se tornando cada vez mais importante, já que auxilia o processamento de grandes volumes de informação, permitindo destacar a informação mais relevante para o usuário. Nesse trabalho, são propostas e exploradas modelagens baseadas em Aprendizado Gerativo, em que a tarefa de Sumarização Multidocumento é esquematizada usando o modelo Noisy-Channel e seus componentes de modelagem de língua, de transformação e decodificação, que são apropriadamente instanciados para a tarefa em questão. Essas modelagens são formuladas com atributos superficiais e profundos. Em particular, foram definidos três modelos de transformação, cujas histórias gerativas capturam padrões de seleção de conteúdo a partir de conjuntos de textos e seus correspondentes sumários multidocumento produzidos por humanos. O primeiro modelo é relativamente mais simples, pois é composto por atributos superficiais tradicionais; o segundo modelo é mais complexo, pois, além de atributos superficiais, adiciona atributos discursivos monodocumento; finalmente, o terceiro modelo é o mais complexo, pois integra atributos superficiais, de natureza discursiva monodocumento e semântico-discursiva multidocumento, pelo uso de informação proveniente das teorias RST e CST, respectivamente. Além desses modelos, também foi desenvolvido um modelo de coerência (ou modelo de língua) para sumários multidocumento, que é projetado para capturar padrões de coerência, tratando alguns dos principais fenômenos multidocumento que a afetam. Esse modelo foi desenvolvido com base no modelo de entidades e com informações discursivas. Cada um desses modelos foi inferido a partir do córpus CSTNews de textos jornalísticos e seus respectivos sumários em português.
Finalmente, foi desenvolvido também um decodificador para realizar a construção do sumário a partir das inferências obtidas. O decodificador seleciona o subconjunto de sentenças que maximizam a probabilidade do sumário de acordo com as probabilidades inferidas nos modelos de seleção de conteúdo e o modelo de coerência. Esse decodificador inclui também uma estratégia para evitar que sentenças redundantes sejam incluídas no sumário final. Os sumários produzidos a partir dessa modelagem gerativa são comparados com os sumários produzidos por métodos estatísticos do estado da arte, os quais foram implementados, treinados e testados sobre o córpus. Utilizando-se avaliações de informatividade tradicionais da área, os resultados obtidos mostram que os modelos desenvolvidos neste trabalho são competitivos com os métodos estatísticos do estado da arte e, em alguns casos, os superam. / Multi-document Summarization consists in automatically producing a single summary from a set of source texts that share a common topic. This task is becoming increasingly important, since it supports the processing of large volumes of information, making it possible to highlight the most relevant information for users. In this work, generative modeling approaches are proposed and investigated, in which the Multi-document Summarization task is modeled through the Noisy-Channel framework and its components: the language model, the transformation model and the decoder, which are properly instantiated for the task at hand. These models are formulated with shallow and deep features. In particular, three main transformation models were defined, establishing generative stories that capture content-selection patterns from sets of source texts and their corresponding human multi-document summaries.
The first model is the least complex, since its features are traditional shallow features; the second model is more complex, incorporating single-document discursive features (given by RST) in addition to the features of the first model; finally, the third model is the most complex, since it incorporates multi-document discursive features (given by CST) in addition to the features of models 1 and 2. Besides these models, a coherence model (represented by the Noisy-Channel's language model) was also developed for multi-document summaries. This model, unlike the transformation models, aims at capturing coherence patterns in multi-document summaries. It was built on the Entity-based Model and incorporates discursive knowledge in order to capture coherence patterns, exploring multi-document phenomena. Each of these models was trained on the CSTNews corpus of journalistic texts and their corresponding summaries. Finally, a decoder that searches for the summary maximizing the probability under the estimated models was developed. The decoder selects the subset of sentences that maximizes the estimated probabilities, and includes an additional mechanism for treating redundancy in the decoding process by using discursive information from the CST. The produced summaries are compared with those produced by state-of-the-art statistical methods, which were also trained and tested on the CSTNews corpus. The evaluation was carried out using traditional informativeness measures, and the results showed that the generative models developed in this work are competitive with the state-of-the-art statistical models and, in some cases, outperform them.
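The decoding step described above can be illustrated with a greedy sketch. Here `scores` stands in for the probabilities inferred by the content-selection and coherence models, and a simple Jaccard word overlap stands in for the thesis's CST-based redundancy treatment (both substitutions are ours, for illustration only):

```python
def greedy_decode(sentences, scores, max_sents=5, redundancy_threshold=0.5):
    """Greedily build a summary: repeatedly take the highest-scoring
    sentence that is not too similar to anything already selected."""

    def jaccard(a, b):
        wa, wb = set(a.split()), set(b.split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    ranked = sorted(zip(sentences, scores), key=lambda p: p[1], reverse=True)
    summary = []
    for sent, _ in ranked:
        if len(summary) >= max_sents:
            break
        # redundancy check: skip sentences overlapping a selected one
        if all(jaccard(sent, s) < redundancy_threshold for s in summary):
            summary.append(sent)
    return summary
```

An exact decoder would search over sentence subsets instead of ranking greedily, which is where the probability maximization in the thesis becomes a combinatorial problem.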
616

Documentação e internacionalismo em Paul Otlet / -

Moura, Amanda Pacini de 15 September 2015 (has links)
O trabalho investiga a relação entre documentação e internacionalismo na obra de Paul Otlet (1868-1944), com foco particular no papel do internacionalismo sobre a formação das problemáticas e soluções em torno do documento. Adotaram-se como procedimentos metodológicos levantamento, revisão e análise bibliográfico-documental, constituindo-se um corpus da produção de Otlet e de textos de seus intérpretes. Observa-se a centralidade da Primeira Guerra Mundial na argumentação de Otlet quanto ao funcionamento da vida social, e os modelos descritivos baseados na biologia, na físico-química industrial e no racionalismo pelos quais ele compreende a dinâmica social. Expõe-se seu diagnóstico de crise de crescimento e adaptação das estruturas sociopolíticas frente à internacionalização - a insuficiência do Estado-nação, a proliferação das associações internacionais, a necessidade de uma Sociedade das Nações -, e aponta-se como o internacionalismo se manifesta em seu pensamento tanto como fato quanto como posição política. Discute-se o entendimento de Otlet quanto à documentação como um fenômeno sociotécnico, observando como sua construção fundamenta e fundamenta-se sobre uma visão evolucionária do homem e da sociedade. Observa-se sua articulação das figuras de matéria e força para descrever a ação da documentação sobre o pensamento humano, expondo o documento como condição material para as possibilidades de comunicação duradoura, construção de conhecimento objetivo e em última instância de coesão social. Demonstra-se o contexto do entre-guerras como o momento em que Otlet buscara viabilizar institucionalmente a relação entre documentação e internacionalismo por meio de uma nova estrutura organizacional, o Palais mondial (mais tarde Mundaneum), e pela demanda de reconhecimento pela Liga das Nações das demandas sociais por cooperação intelectual internacional. 
Expõe-se como Otlet conectaria assim o desenvolvimento de consenso e a possibilidade de ação democrática ao desenvolvimento do conhecimento e à organização dos documentos. Aponta-se, por fim, a interdependência entre documentação e internacionalismo em Otlet como exemplo da necessidade de se considerar os elementos políticos e sociais subjacentes às concepções teóricas e técnicas na Ciência da Informação. / This research investigates the relationship between documentation and internationalism in Paul Otlet's (1868-1944) thought, focusing specifically on how internationalism informs the problematics and solutions surrounding the document. The methods employed were bibliographic and documentary survey, review and analysis of a corpus of Otlet's texts, as well as texts from his interpreters. It observes the centrality of the First World War in Otlet's reasoning concerning the workings of social life, and the descriptive models based on biology, industrial physics and chemistry, and rationalism through which he understood social dynamics. It exposes his diagnosis of a crisis of social growth and adaptation to internationalization - the insufficiency of the Nation-State, the proliferation of international associations, the need for a Society of Nations - and it establishes how internationalism manifests in his thought both as a fact and as a political position. It discusses Otlet's understanding of documentation as a sociotechnical phenomenon, following how its construction supports and is supported by an evolutionary view of man and society. It observes how he employs the images of matter and force to describe the effect of documentation on human thought, pointing out the document as the material condition for the possibilities of sustained communication, the development of objective knowledge and, ultimately, social cohesion.
It demonstrates how in the years between the World Wars Otlet aimed to establish institutionally the connection between documentation and internationalism, both by conceiving a new organizational structure, the Palais mondial (later Mundaneum), and by arguing for the League of Nations' recognition of the social demands for international intellectual cooperation. It exposes how Otlet thus connected the development of social consensus and the possibility of democratic action to the development of knowledge and the organization of documents. Finally, it points out the interdependence between documentation and internationalism in Otlet's thought as an example of the need to consider the political and social elements underlying theoretical and technical conceptions in Information Science.
617

Descrição arquivística de documentos fotográficos em sistemas informatizados / Archival description of photographic documents in computerized systems.

Elisa Maria Lopes Chaves 10 December 2018 (has links)
Nas instituições arquivísticas, as imagens estão cada vez mais acessíveis através da web. A pesquisa analisa a descrição dos documentos fotográficos digitais neste contexto, sejam eles produtos da digitalização da imagem física, isto é, produzida em processos analógicos, ou das imagens nato-digitais. Visa analisar o acesso em ambientes virtuais, através da padronização das normas de descrição arquivística. Para isso, foi utilizado o AtoM, software desenvolvido pelo Conselho Internacional de Arquivo (CIA), ferramenta totalmente voltada para web, que segue padrões de normas arquivísticas como a ISAD(G). Com o objetivo de complementar as especificidades do documento fotográfico, analisamos a ferramenta Sepiades, desenvolvida pelo CIA para descrever coleções fotográficas, atendendo às normas arquivísticas. Através da análise das duas ferramentas, realizada por meio de pesquisa bibliográfica e documental, verificamos que o AtoM é a ferramenta mais indicada para a descrição dos documentos. Como resultado, geramos um quadro para descrição de documentos fotográficos arquivísticos com base no AtoM, compatibilizado com os parâmetros do modelo Sepiades. / In archival institutions, images are increasingly accessible through the web. The research analyzes the description of digital photographic documents in this context, whether they are products of the digitization of physical images, that is, images produced by analogue processes, or born-digital images. It aims to analyze access in virtual environments through the standardization of archival description norms. For this purpose, AtoM was used: software developed by the International Council on Archives (ICA), a fully web-based tool that follows archival description standards such as ISAD(G). In order to address the specificities of the photographic document, we analyzed the Sepiades tool, developed by the ICA to describe photographic collections in accordance with archival standards.
Through the analysis of the two tools, carried out by means of bibliographical and documentary research, we verified that AtoM is the more suitable tool for the description of the documents. As a result, we generated a framework for describing archival photographic documents based on AtoM, made compatible with the parameters of the Sepiades model.
618

Ant colony optimization based clustering for data partitioning.

January 2005 (has links)
Woo Kwan Ho.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 148-155).
Abstracts in English and Chinese.

Contents --- p.ii
Abstract --- p.iv
Acknowledgements --- p.vii
List of Figures --- p.viii
List of Tables --- p.x
Chapter 1 Introduction --- p.1
Chapter 2 Literature Reviews --- p.7
  2.1 Block Clustering --- p.7
  2.2 Clustering XML by structure --- p.10
    2.2.1 Definition of XML schematic information --- p.10
    2.2.2 Identification of XML schematic information --- p.12
Chapter 3 Bi-Tour Ant Colony Optimization for diagonal clustering --- p.15
  3.1 Motivation --- p.15
  3.2 Framework of Bi-Tour Ant Colony Algorithm --- p.21
  3.3 Re-order of the data matrix in BTACO clustering method --- p.27
    3.3.1 Review of Ant Colony Optimization --- p.29
    3.3.2 Bi-Tour Ant Colony Optimization --- p.36
  3.4 Determination of partitioning scheme --- p.44
    3.4.1 Weighed Sum of Error (WSE) --- p.48
    3.4.2 Materialization of partitioning scheme via hypothetic matrix --- p.50
    3.4.3 Search of best-fit hypothetic matrix --- p.52
    3.4.4 Dynamic programming approach --- p.53
    3.4.5 Heuristic partitioning approach --- p.57
  3.5 Experimental Study --- p.62
    3.5.1 Data set --- p.63
    3.5.2 Study on DP Approach and HP Approach --- p.65
    3.5.3 Study on parameter settings --- p.69
    3.5.4 Comparison with GA-based & hierarchical clustering methods --- p.81
  3.6 Chapter conclusion --- p.90
Chapter 4 Application of BTACO-based clustering in XML database system --- p.93
  4.1 Introduction --- p.93
  4.2 Overview of normalization and vertical partitioning in relational DB design --- p.95
    4.2.1 Normalization of relational models in database design --- p.95
    4.2.2 Vertical partitioning in database design --- p.98
  4.3 Clustering XML documents --- p.100
  4.4 Proposed approach using BTACO-based clustering --- p.103
    4.4.1 Clustering XML documents by structure --- p.103
    4.4.2 Clustering XML documents by user transaction patterns --- p.109
    4.4.3 Implementation of Query Manager for our experimental study --- p.114
  4.5 Experimental Study --- p.118
    4.5.1 Experimental Study on the clustering by structure --- p.118
    4.5.2 Experimental Study on the clustering by user access patterns --- p.133
  4.6 Chapter conclusion --- p.141
Chapter 5 Conclusions --- p.143
  5.1 Contributions --- p.144
  5.2 Future works --- p.146
Bibliography --- p.148
Appendix I --- p.156
Appendix II --- p.168
  Index tables for Profile A --- p.168
  Index tables for Profile B --- p.171
Appendix III --- p.174
619

Efficient Xpath query processing in native XML databases. / CUHK electronic theses & dissertations collection

January 2007 (has links)
As XML (eXtensible Markup Language) becomes a universal medium for data exchange over the Internet, efficient XML query processing is now the focus of considerable research and development activity. This thesis describes work toward efficient XML query evaluation and optimization in native XML databases.

An XML query can be decomposed into a sequence of structural joins (e.g., parent/child and ancestor/descendant) and content joins; structural join optimization is therefore a key to improving join-based evaluation. We optimize structural joins with two orthogonal methods: a partition-based method exploits the spatial properties of XML encodings by projecting them onto a plane, and a location-based method improves the structural join by accurately pruning all irrelevant nodes, i.e., those that cannot produce results.

XML indexes are widely studied as a way to evaluate XML queries and, in particular, to accelerate join-based approaches. Index-based approaches outperform join-based approaches (e.g., holistic twig joins) when the query matches the index. Existing XML indexes can support only a small set of XML queries because of the variety of XML query shapes: a query may involve the child axis only, both the child axis and branches, or an additional descendant-or-self axis, but only at the query root. We propose novel indexes to efficiently support a much wider range of XML queries (with /, //, [], *).

A general XML index can itself be sizable, leading to low efficiency. To alleviate this, frequently asked queries can be indexed by the database system; these are referred to as views. Answering queries using materialized views is always cheaper than evaluating them over the base data. Traditional techniques solve this problem by considering only a single view; we approach it by exploiting the potential relationships among multiple views, which can be used together to answer a given query. Experiments show that a significant performance gain can be achieved from multiple views.
/ Tang, Nan. / "December 2007." / Advisers: Kam-Fei Wong; Jeffrey Xu Yu. / Source: Dissertation Abstracts International, Volume: 69-08, Section: B, page: 4861. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 152-163). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
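The ancestor/descendant structural joins mentioned in this abstract are commonly evaluated over region encodings, where each node carries a (start, end) interval and A is an ancestor of D iff A.start < D.start and D.end < A.end. A minimal sketch of such a join (not the thesis's partition- or location-based optimizations) could look like this:

```python
def structural_join(ancestors, descendants):
    """Ancestor/descendant join over (start, end) region encodings.

    Both input lists are assumed sorted by start position; node A is an
    ancestor of node D iff A.start < D.start and D.end < A.end."""
    out = []
    for a_start, a_end in ancestors:
        for d_start, d_end in descendants:
            if d_start > a_end:
                break  # sorted by start: no later descendant can qualify
            if a_start < d_start and d_end < a_end:
                out.append(((a_start, a_end), (d_start, d_end)))
    return out
```

Real engines replace the inner scan with stack-based merging to reach linear time, which is the kind of cost the pruning methods in the thesis aim to reduce further.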
620

An adaptive communication mechanism for heterogeneous distributed environments using XML and servlets.

January 2001 (has links)
Cheung Wing Hang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 107-112).
Abstracts in English and Chinese.

Abstract --- p.ii
Abstract in Chinese --- p.iv
Acknowledgments --- p.v
Chapter 1 Introduction --- p.1
  1.1 Firewall Issue in Distributed Systems --- p.2
  1.2 Heterogeneous Communication Protocols --- p.4
  1.3 Translator for Converting Interface Definition to Flexible XML --- p.8
  1.4 An Implementation of a Scalable Mediator Query System --- p.9
  1.5 Our Contributions --- p.9
  1.6 Outline of This Thesis --- p.10
Chapter 2 Related Work and Technologies --- p.12
  2.1 Overview of XML Technology --- p.12
    2.1.1 XML Basic Syntax --- p.13
    2.1.2 DTD: The Grammar Book --- p.15
    2.1.3 Representing Complex Data Structures --- p.17
  2.2 Overview of Java Servlet Technology --- p.18
  2.3 Overview of Simple Object Access Protocol --- p.20
  2.4 Overview of XML-RPC --- p.21
  2.5 Overview of XIOP --- p.22
Chapter 3 Using XML and Servlets to Support CORBA Calls --- p.24
  3.1 Objective --- p.24
  3.2 General Concept of Our Mechanism --- p.25
    3.2.1 At Client Side --- p.27
    3.2.2 At Server Side --- p.28
  3.3 Data in Transmission --- p.30
    3.3.1 Using XML --- p.30
    3.3.2 Format of Messages in Transmission --- p.30
  3.4 Supporting Callbacks in CORBA Systems --- p.33
    3.4.1 What is callback? --- p.33
    3.4.2 Enhancement to Allow Callbacks --- p.34
  3.5 Achieving Transparency with Add-on Components --- p.37
Chapter 4 A Translator to Convert CORBA IDL to XML --- p.39
  4.1 Introduction to CORBA IDL --- p.39
  4.2 Mapping from IDL to XML --- p.40
    4.2.1 IDL Basic Data Types --- p.41
    4.2.2 IDL Complex Data Types --- p.42
    4.2.3 IDL Interface --- p.48
    4.2.4 Attributes --- p.48
    4.2.5 Operations (Methods) --- p.49
    4.2.6 Exceptions --- p.50
    4.2.7 Inheritance --- p.51
    4.2.8 IDL Modules --- p.52
    4.2.9 A Sample Conversion --- p.52
  4.3 Making a Request or Response --- p.53
  4.4 Code Generation for Add-on Components --- p.54
    4.4.1 Generation of Shadow Objects --- p.54
    4.4.2 Generation of Servlet Components --- p.55
Chapter 5 Communication in Heterogeneous Distributed Environments --- p.58
  5.1 Objective --- p.58
  5.2 General Concept --- p.60
  5.3 Case Study 1 - Distributed Common Object Model --- p.61
    5.3.1 Brief Overview of Programming in DCOM --- p.61
    5.3.2 Mapping the Two Different Interface Definitions --- p.63
    5.3.3 Sample Architecture of Communicating Between DCOM and CORBA --- p.66
  5.4 Case Study 2 - Java Remote Methods Invocation --- p.67
    5.4.1 Brief Overview of Programming in Java RMI --- p.67
    5.4.2 Mapping the Two Different Interface Definitions --- p.69
    5.4.3 Sample Architecture of Communicating Between JavaRMI and CORBA --- p.71
  5.5 Be Generic: Binding with the WEB --- p.72
Chapter 6 Building a Scalable Mediator-based Query System --- p.74
  6.1 Objectives --- p.74
  6.2 Introduction to Our Mediator-based Query System --- p.76
    6.2.1 What is mediator? --- p.76
    6.2.2 The Architecture of our Mediator Query System --- p.77
    6.2.3 The IDL Design of the Mediator System --- p.79
    6.2.4 Components in the Query Mediator System --- p.80
  6.3 Helping the Mediator System to Expand Across the Firewalls --- p.83
    6.3.1 Implementation --- p.83
    6.3.2 Across Heterogeneous Systems with DTD --- p.87
  6.4 Adding the Callback Feature to the Mediator System --- p.89
  6.5 Connecting our CORBA System with Other Environments --- p.90
    6.5.1 Our Query System in DCOM --- p.91
    6.5.2 Our Query System in Java RMI --- p.92
    6.5.3 Binding Heterogeneous Systems --- p.93
Chapter 7 Evaluation --- p.95
  7.1 Performance Statistics --- p.95
    7.1.1 Overhead in other methods --- p.97
  7.2 Means for Enhancement --- p.98
    7.2.1 Connection Performance of HTTP --- p.98
    7.2.2 Transmission Data Compression --- p.99
    7.2.3 Security Concern --- p.99
  7.3 Advantages of Using Our Mechanism --- p.101
  7.4 Disadvantages of Using Our Mechanism --- p.102
Chapter 8 Conclusion --- p.104
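This record lists only the chapter structure, but the mechanism at its core (Chapters 3 and 4: marshal a CORBA call into an XML message and POST it over HTTP to a server-side servlet, so the call can cross firewalls) can be sketched as below. The message layout here is an illustrative assumption, since the thesis defines its own DTD for requests:

```python
import xml.etree.ElementTree as ET

def marshal_call(obj_name, method, args):
    """Serialize a remote method call as an XML request document,
    as it would be POSTed over HTTP to a server-side servlet that
    forwards the call to the target CORBA object."""
    root = ET.Element("request")
    ET.SubElement(root, "object").text = obj_name
    ET.SubElement(root, "method").text = method
    params = ET.SubElement(root, "params")
    for a in args:
        # record the parameter's type so the servlet can unmarshal it
        ET.SubElement(params, "param", type=type(a).__name__).text = str(a)
    return ET.tostring(root, encoding="unicode")
```

Because the payload travels as an ordinary HTTP POST, it passes through firewalls that would block IIOP traffic, which is exactly the motivation stated in Chapter 1.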
