71 |
Planejamento e gestão da pesquisa e da inovação : conceitos e instrumentos / Planning and managing research and innovation : concepts and tools
Bin, Adriana, 1977- 08 July 2008 (has links)
Orientador: Sérgio Luiz Monteiro Salles Filho / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Geociencias
Previous issue date: 2008 / Resumo: A discussão sobre o planejamento e a gestão de organizações públicas de pesquisa constitui o tema central da tese. Motivado pela percepção sobre a importância crescente dos múltiplos objetivos e funções que estas organizações vêm desempenhando no âmbito dos sistemas de pesquisa e de inovação dos quais elas fazem parte, assim como da emergência de padrões cada vez mais colaborativos de execução de atividades de produção do conhecimento, o trabalho visa identificar as especificidades e premissas que devem ser consideradas na execução de seus processos de planejamento e gestão, assim como alguns métodos e instrumentos mais adequados para tal. A Parte I da tese trata do tema de forma mais abrangente, analisando, do ponto de vista conceitual e metodológico, as especificidades e premissas para o planejamento e gestão de atividades de ciência, tecnologia e inovação. Tal análise baseia-se nas particularidades que estas atividades apresentam, assim como na compreensão da evolução histórica relacionada com sua organização e institucionalização. Resulta, por sua vez, na identificação do caráter indeterminado e multi-institucional destas atividades, assim como do perfil profissional diferenciado que as distingue. A Parte II, focada nas organizações públicas de pesquisa, identifica as especificidades da gestão pública e suas implicações, que em conjunto com as
especificidades associadas às atividades de ciência, tecnologia e inovação, são importantes para o planejamento e a gestão destas organizações. Os direcionamentos gerais que devem ser buscados pelas organizações de pesquisa na condução de processos abrangentes de planejamento e na constituição de seus modelos de gestão, assim como o potencial de utilização de um conjunto de métodos e instrumentos como suporte para tais direcionamentos são também discutidos nesta parte do trabalho. Uma conclusão
fundamental que resulta de toda a discussão é a da interpretação dos esforços de planejamento e gestão a partir da mesma lógica que guia o entendimento sobre as atividades de ciência, tecnologia e inovação, lógica esta baseada em uma perspectiva
evolucionária e institucional. Por conseguinte, deriva-se a importância do aprendizado organizacional que abarca a atribuição de significado e valor a estas práticas, de forma a torná-las mais legítimas e resilientes ao longo do tempo / Abstract: The main theme of the thesis is the discussion about planning and managing public research organizations. Based on the increasing perception of the multiple objectives and roles that these organizations perform in research and innovation systems, and on the emergence of more collaborative patterns of knowledge production, this work aims to identify the specificities and premises that have to be considered in the execution of planning and managing processes, as well as adequate methods and tools to do so. Part I deals with this theme in a more comprehensive way, analyzing, from a conceptual and a methodological point of view, the specificities and premises to plan and manage science, technology and innovation activities. This analysis is based on the particularities of these activities and on the interpretation of the historical evolution of their organization and institutionalization. It results in the identification of the indeterminacy and multi-institutionality of these activities, as well as of the professional profile that distinguishes them. Part II, focused on public research organizations, identifies the specificities of public management and their implications, which, in addition to the specificities of science, technology and innovation activities, are important to plan and manage these organizations.
General directions to guide their comprehensive planning processes and the constitution of their management models, as well as the potential application of some methods and tools that support these directions, are also discussed in this work. A fundamental conclusion that results from the whole discussion is the interpretation of planning and managing efforts with the same evolutionary and institutional logic that guides the understanding of science, technology and innovation activities. In consequence, the importance of organizational learning can be derived, encompassing the attribution of significance and value to these practices and turning them into more legitimate and resilient efforts over time / Doutorado / Doutor em Política Científica e Tecnológica
|
72 |
Enabling Internet-Scale Publish/Subscribe In Overlay Networks
Rahimian, Fatemeh January 2011 (has links)
As the amount of data in today's Internet grows, users are exposed to too much information, which becomes increasingly difficult to comprehend. Publish/subscribe systems address this problem by providing loosely-coupled communication between producers and consumers of data in a network. Data consumers, i.e., subscribers, are provided with a subscription mechanism to express their interest in a subset of the data, so that they are notified only when data matching their subscription is generated by the producers, i.e., publishers. Most publish/subscribe systems today are based on the client/server architectural model. However, to provide the publish/subscribe service at large scale, companies either have to invest huge amounts of money in over-provisioning resources, or are prone to frequent service failures. Peer-to-peer overlay networks are attractive alternative solutions for building Internet-scale publish/subscribe systems. However, scalability comes with a cost: a published message often needs to traverse a large number of uninterested (unsubscribed) nodes before reaching all its subscribers. We refer to this undesirable traffic as relay overhead. Without careful consideration, the relay overhead can sharply increase resource consumption for the relay nodes (in terms of bandwidth transmission cost, CPU, etc.) and could ultimately lead to rapid deterioration of the system's performance once the relay nodes start dropping messages or choose to permanently abandon the system. To mitigate this problem, some solutions use an unbounded number of connections per node, while others limit the expressiveness of the subscription scheme. In this thesis we introduce two systems, called Vitis and Vinifera, for the topic-based and content-based publish/subscribe models, respectively. Both systems are gossip-based and significantly decrease the relay overhead. We utilize novel techniques to cluster together nodes that exhibit similar subscriptions.
In the topic-based model, distinct clusters are constructed for each topic, while clusters in the content-based model are fuzzy and do not have explicit boundaries. We augment these clustered overlays with links that facilitate routing in the network. We construct a hybrid system by injecting structure into an otherwise unstructured network. The resulting structures resemble navigable small-world networks, which span clusters of nodes that have similar subscriptions. The properties of such overlays make them an ideal platform for efficient data dissemination in large-scale systems. The systems require only a bounded node degree and, as we show through simulations, they scale well with the number of nodes and subscriptions and remain efficient under highly complex subscription patterns, high publication rates, and even in the presence of failures in the network. We also compare both systems against some state-of-the-art publish/subscribe systems. Our measurements show that both Vitis and Vinifera significantly outperform their counterparts in various subscription and churn scenarios, under both synthetic workloads and real-world traces. / QC 20111114
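The topic-based model above can be sketched in a few lines. The toy broker below (plain Python, not the gossip-based Vitis implementation) shows why only matching subscribers should be notified, and why any node that forwards a message without subscribing to its topic is pure relay overhead:

```python
from collections import defaultdict

class TopicBroker:
    """Toy topic-based publish/subscribe broker (illustrative only)."""
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # Only subscribers of this exact topic are notified. In a
        # peer-to-peer overlay, every uninterested node on the path
        # from publisher to subscribers is relay overhead, which
        # clustering similar subscriptions aims to minimize.
        callbacks = self._subs.get(topic, [])
        for cb in callbacks:
            cb(message)
        return len(callbacks)

received = []
broker = TopicBroker()
broker.subscribe("weather", received.append)
broker.subscribe("sports", lambda m: None)
n = broker.publish("weather", "rain expected")  # notifies 1 subscriber
```

In a distributed setting the broker's routing table is spread across peers, which is where the clustered, small-world overlay described in the abstract comes in.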
|
73 |
Occupancy Sensor System : For Context-aware Computing
Hübinette, Daniel January 2007 (has links)
This master's thesis project, "Occupancy Sensor System", was conducted at Kungliga Tekniska Högskolan (KTH), Stockholm, Sweden, during the period 2007-04-24 – 2007-12-17. The goal of the project was to design an occupancy sensor system that determines whether more than one person is present in a defined region. The output of this system is for use in a context-aware system at the KTH Center for Wireless Systems (Wireless@KTH). The system is important because context-aware systems need specific input concerning the occupancy of spaces, and because this thesis has focused on a problem that enables new, complex, and interesting services. Additionally, the specific problem of determining not only occupancy, but whether this occupancy is zero, one, or many, has not been widely examined previously. The significance of zero occupants indicating an empty room has already been recognized as having economic and environmental value in terms of heating, ventilation, air-conditioning, and lighting. However, there has not been an effort to differentiate between a person being alone and more than one person being present. A context-aware system might use this latter information to infer that a meeting is taking place in a meeting room, that a class is taking place in a classroom, or that an individual is alone in a conference room, classroom, etc., thus enabling context-aware services to change their behavior based upon the differences in these situations. An occupancy sensor system prototype was designed to monitor a boundary by using a thermal detector, a gumstix computer, an analog-to-digital converter prototype board, a laptop computer, and a context broker. The testing and evaluation of the system proved it to be sound. However, there are still further improvements and tests to be made. These improvements include: dynamic configuration of the system, communication between the different system entities, detection algorithms, and code improvements.
Tests measuring accuracy of a detection algorithm and determining optimal detector placement need to be performed. The next step is to design applications that use the context information provided from the occupancy sensor system and expand the system to use multiple detectors. / Examensarbetet "Occupancy Sensor System" genomfördes på Kungliga Tekniska Högskolan (KTH), Stockholm, Sverige, under perioden 2007-04-24 – 2007-12-17. Målet med examensarbetet var att designa ett sensorsystem, som avgör om ett rum är befolkat med fler än en person i ett definierat område. Resultatet av detta system är till för användning i ett kontextmedvetet system som finns i KTH Center for Wireless Systems (Wireless@KTH). Systemet är viktigt eftersom det finns ett behov för specifik input till kontextmedvetna system som berör befolkning av rum och eftersom detta examensarbete har fokuserat på ett problem som möjliggör nya komplexa och intressanta tjänster. Dessutom har det inte tidigare undersökts i vidare bemärkelse hur man kan avgöra om ett rum befolkats av noll, en eller flera personer. Betydelsen av att ett rum är obefolkat har redan ansetts ha ekonomiskt och miljöbetingat värde vad gäller uppvärming, ventilation, luftkonditionering och belysning. Däremot har det inte gjorts ansträngningar att differentiera mellan att en ensam person eller flera är närvarande. Ett kontextmedvetet system skulle kunna använda den senare nämnda informationen för att dra slutsatsen att ett möte pågår i ett mötesrum, en lektion är igång i ett klassrum o.s.v. Detta möjliggör i sin tur för kontextmedvetna tjänster att ändra på sina beteenden baserat på skillnaderna i dessa situationer. En prototyp utvecklades för att övervaka en gräns genom användningen av en termisk detektor, gumstixdator, analog till digital signalkonverterare, bärbar dator och en context broker (kontextförmedlare). Testningar och utvärderingar av systemet visade att systemet var dugligt. 
Flera förbättringar och tester behöver dock göras i framtiden. Dessa förbättringar inkluderar: dynamisk konfiguration av systemet, kommunikation mellan de olika systementiteterna, detektionsalgoritmer och kodförbättringar. Återstående tester inkluderar mätning av en detektionsalgoritms tillförlitlighet samt optimal placering av detektorer. Nästa steg är att utveckla applikationer som använder kontextinformationen från systemet samt att utveckla systemet till att kunna använda flera detektorer.
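The zero/one/many distinction at the heart of the system can be illustrated with a minimal crossing counter. This is a hypothetical simplification: the actual prototype must first derive crossing events and their direction from thermal-detector readings.

```python
def classify_occupancy(crossings):
    """Classify occupancy of a region as 'zero', 'one', or 'many' from
    a sequence of boundary-crossing events: +1 for a person entering,
    -1 for a person leaving. Illustrative only; a real detector must
    also cope with noise and simultaneous crossings."""
    count = 0
    for c in crossings:
        count = max(0, count + c)  # clamp: a missed entry must not go negative
    if count == 0:
        return "zero"
    return "one" if count == 1 else "many"

# Two people enter, then one leaves: one occupant remains
state = classify_occupancy([+1, +1, -1])  # -> "one"
```

A context-aware service would subscribe to changes of this three-valued state rather than to raw sensor readings.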
|
74 |
Efficient Content-Based Publish/Subscribe Systems for Dynamic and Large-Scale Networked Applications
Zhao, Yaxiong January 2012 (has links)
This thesis presents the design and evaluation of content-based publish/subscribe systems for efficient content dissemination and sharing in dynamic and large-scale networked applications. The rapid development of network technologies and the continuous investment in network infrastructure have realized a ubiquitous platform for sharing information. However, there is a lack of efficient protocols and software that can utilize these resources to support novel networked applications. In this thesis, we explore the possibility of content-based publish/subscribe as an efficient communication substrate for dynamic and large-scale networked applications. Although content-based publish/subscribe has been used extensively in many small-to-medium scale systems, there are no Internet-scale applications that utilize this technology. The research reported in this thesis investigates the technical challenges, and their solutions, of applying content-based publish/subscribe in various applications in mobile networks and the Internet. We apply content-based publish/subscribe to interest-driven information sharing for smartphone networks. We design efficient approximate content matching algorithms and data structures. We study how to construct optimal publish/subscribe overlay networks. We propose architecture designs that make Internet content-based publish/subscribe robust. We also design a name resolution system that enables content discovery in the Internet. These techniques are evaluated comprehensively in realistic simulation studies, and some of them are further evaluated on the PlanetLab testbed with prototype implementations. / Computer and Information Science
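Content-based matching, which the thesis approximates for efficiency, can be sketched as predicate evaluation over event attributes. The exact variant is shown here; the thesis's approximate algorithms and data structures trade a little precision for speed.

```python
def matches(subscription, event):
    """Exact content-based matching: a subscription is a list of
    (attribute, operator, value) predicates, and an event (a dict of
    attributes) matches when every predicate holds. Illustrative
    sketch, not the thesis's data structures."""
    ops = {
        "=": lambda a, b: a == b,
        "<": lambda a, b: a < b,
        ">": lambda a, b: a > b,
    }
    return all(
        attr in event and ops[op](event[attr], value)
        for attr, op, value in subscription
    )

sub = [("type", "=", "quote"), ("price", "<", 100)]
event = {"type": "quote", "symbol": "XYZ", "price": 95}
ok = matches(sub, event)  # -> True
```

At Internet scale, evaluating every subscription against every event like this is too slow, which motivates the approximate matching and indexing structures the abstract mentions.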
|
75 |
On a Self-Organizing MANET Event Routing Architecture with Causal Dependency Awareness
Pei, Guanhong 07 January 2010 (links)
Publish/subscribe (P/S) is a communication paradigm of growing popularity for information dissemination in large-scale distributed systems. The weak coupling between information producers and consumers in P/S systems is attractive for loosely coupled and dynamic network infrastructures such as ad hoc networks. However, achieving end-to-end timeliness and reliability properties when P/S events are causally dependent is an open problem in ad hoc networks.
In this thesis we present an architecture design that can effectively support timely and reliable delivery of events, including causally related events, in ad hoc environments, and especially in mobile ad hoc networks (MANETs); we evaluate its benefits and compare it with past work. With observations from both a realistic application model and simulation experiments, we reveal causal dependencies among events and their significance in a typical notional system. We also examine and propose engineering methodologies to further tailor an event-based system to facilitate its self-reorganizing and self-reconfiguration capabilities. Our design features a two-layer structure, including novel distributed algorithms and mechanisms for P/S tree construction and maintenance. Trace-based experimental simulation studies illustrate our design's effectiveness in cases both with and without causal dependencies. / Master of Science
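Causal dependency between events can be made precise with vector clocks. The sketch below uses this standard technique to show how a causally-aware broker could decide delivery order; it is not necessarily the mechanism the thesis itself employs.

```python
def happened_before(a, b):
    """True iff the event stamped with vector clock `a` causally
    precedes the one stamped `b`. Clocks are dicts mapping node id to
    an event counter, with zero entries omitted. Standard vector-clock
    comparison, shown only to illustrate causal dependency."""
    keys = set(a) | set(b)
    dominated = all(a.get(k, 0) <= b.get(k, 0) for k in keys)
    strictly = any(a.get(k, 0) < b.get(k, 0) for k in keys)
    return dominated and strictly

e1 = {"n1": 1}            # published by node n1
e2 = {"n1": 1, "n2": 1}   # published by n2 after it observed e1
# A causally-aware broker delivers e1 before e2 at every subscriber
```

If neither `happened_before(a, b)` nor `happened_before(b, a)` holds, the events are concurrent and may be delivered in any order.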
|
76 |
Scalable view-based techniques for web data : algorithms and systems / Techniques efficaces basées sur des vues matérialisées pour la gestion des données du Web : algorithmes et systèmes
Katsifodimos, Asterios 03 July 2013 (has links)
Le langage XML, proposé par le W3C, est aujourd’hui utilisé comme un modèle de données pour le stockage et l’interrogation de grands volumes de données dans les systèmes de bases de données. En dépit d’importants travaux de recherche et du développement de systèmes efficaces, le traitement de grands volumes de données XML pose encore des problèmes de performance dus à la complexité et à l’hétérogénéité des données ainsi qu’à la complexité des langages courants d’interrogation XML. Les vues matérialisées sont employées depuis des décennies dans les bases de données afin de raccourcir les temps de traitement des requêtes. Elles peuvent être considérées comme les résultats de requêtes pré-calculées, que l’on réutilise afin d’éviter de recalculer (complètement ou partiellement) une nouvelle requête. Les vues matérialisées ont fait l’objet de nombreuses recherches, en particulier dans le contexte des entrepôts de données relationnelles. Cette thèse étudie l’applicabilité de techniques de vues matérialisées pour optimiser les performances des systèmes de gestion de données Web, et en particulier XML, dans des environnements distribués. Dans cette thèse, nous apportons trois contributions. D’abord, nous considérons le problème de la sélection des meilleures vues à matérialiser dans un espace de stockage donné, afin d’améliorer la performance d’une charge de travail de requêtes. Nous sommes les premiers à considérer un sous-langage de XQuery enrichi avec la possibilité de sélectionner des noeuds multiples et à de multiples niveaux de granularités.
La difficulté dans ce contexte vient de la puissance expressive et des caractéristiques du langage des requêtes et des vues, et de la taille de l’espace de recherche de vues que l’on pourrait matérialiser. Alors que le problème général a une complexité prohibitive, nous proposons et étudions un algorithme heuristique et démontrons ses performances supérieures par rapport à l’état de l’art. Deuxièmement, nous considérons la gestion de grands corpus XML dans des réseaux pair à pair, basés sur des tables de hachage distribuées. Nous considérons la plateforme ViP2P dans laquelle des vues XML distribuées sont matérialisées à partir des données publiées dans le réseau, puis exploitées pour répondre efficacement aux requêtes émises par un pair du réseau. Nous y avons apporté d’importantes optimisations orientées sur le passage à l’échelle, et nous avons caractérisé la performance du système par une série d’expériences déployées dans un réseau à grande échelle. Ces expériences dépassent de plusieurs ordres de grandeur les systèmes similaires en termes de volumes de données et de débit de dissémination des données. Cette étude est à ce jour la plus complète concernant une plateforme de gestion de contenus XML déployée entièrement et testée à une échelle réelle. Enfin, nous présentons une nouvelle approche de dissémination de données dans un système d’abonnements, en présence de contraintes sur les ressources CPU et réseau disponibles; cette approche est mise en oeuvre dans le cadre de notre plateforme Delta. Le passage à l’échelle est obtenu en déchargeant le fournisseur de données de l’effort de répondre à une partie des abonnements.
Pour cela, nous tirons profit de techniques de réécriture de requêtes à l’aide de vues afin de diffuser les données de ces abonnements à partir d’autres abonnements. Notre contribution principale est un nouvel algorithme qui organise les vues dans un réseau de dissémination d’information multi-niveaux ; ce réseau est calculé à l’aide d’outils techniques de programmation linéaire afin de passer à l’échelle pour de grands nombres de vues, respecter les contraintes de capacité du système, et minimiser les délais de propagation des informations. L’efficacité et la performance de notre algorithme sont confirmées par notre évaluation expérimentale, qui inclut l’étude d’un déploiement réel dans un réseau WAN. / XML was recommended by W3C in 1998 as a markup language to be used by device- and system-independent methods of representing information. XML is nowadays used as a data model for storing and querying large volumes of data in database systems. In spite of significant research and systems development, many performance problems are raised by processing very large amounts of XML data. Materialized views have long been used in databases to speed up queries. Materialized views can be seen as precomputed query results that can be re-used to evaluate (part of) another query, and have been a topic of intensive research, in particular in the context of relational data warehousing. This thesis investigates the applicability of materialized view techniques to optimize the performance of Web data management tools, in particular in distributed settings, considering XML data and queries. We make three contributions. We first consider the problem of choosing the best views to materialize within a given space budget in order to improve the performance of a query workload. Our work is the first to address the view selection problem for a rich subset of XQuery.
The challenges we face stem from the expressive power and features of both the query and view languages and from the size of the search space of candidate views to materialize. While the general problem has prohibitive complexity, we propose and study a heuristic algorithm and demonstrate its superior performance compared to the state of the art. Second, we consider the management of large XML corpora in peer-to-peer networks, based on distributed hash tables (or DHTs, in short). We consider a platform leveraging distributed materialized XML views, defined by arbitrary XML queries, filled in with data published anywhere in the network, and exploited to efficiently answer queries issued by any network peer. This thesis has contributed important scalability-oriented optimizations, as well as a comprehensive set of experiments deployed in a country-wide WAN. These experiments exceed those of similar competitor systems by orders of magnitude in terms of data volumes and data dissemination throughput, and are thus the most advanced to date in understanding the performance behavior of DHT-based XML content management in real settings. Finally, we present a novel approach for scalable content-based publish/subscribe (pub/sub, in short) in the presence of constraints on the available computational resources of data publishers. We achieve scalability by off-loading subscriptions from the publisher, and leveraging view-based query rewriting to feed these subscriptions from the data accumulated in others. Our main contribution is a novel algorithm for organizing subscriptions in a multi-level dissemination network in order to serve large numbers of subscriptions, respect capacity constraints, and minimize latency. The efficiency and effectiveness of our algorithm are confirmed through extensive experiments and a large deployment in a WAN.
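The view-selection problem described above has the flavor of a knapsack: pick views whose combined size fits the space budget while maximizing benefit to the query workload. A greedy benefit-per-size heuristic can sketch the problem shape; this is purely illustrative, as the thesis's heuristic also handles view interactions and query rewriting, which the sketch ignores.

```python
def select_views(candidates, budget):
    """Greedy heuristic for view selection: rank candidate views by
    estimated benefit per unit of storage, then keep taking views
    while they fit in the space budget. Knapsack-style sketch only;
    benefit and size estimates are assumed given."""
    ranked = sorted(candidates,
                    key=lambda v: v["benefit"] / v["size"],
                    reverse=True)
    chosen, used = [], 0
    for v in ranked:
        if used + v["size"] <= budget:
            chosen.append(v["name"])
            used += v["size"]
    return chosen

views = [
    {"name": "v1", "size": 40, "benefit": 100},  # 2.5 benefit/unit
    {"name": "v2", "size": 70, "benefit": 120},  # ~1.7 benefit/unit
    {"name": "v3", "size": 30, "benefit": 90},   # 3.0 benefit/unit
]
picked = select_views(views, budget=80)  # -> ["v3", "v1"]
```

Because one view can partially answer the queries another view serves, real benefit estimates are not independent, which is one reason the general problem has prohibitive complexity.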
|
77 |
A Social Semantic Web System for Coordinating Communication in the Architecture, Engineering & Construction Industry
Zhang, Jinyue 08 March 2011 (links)
The AEC industry has long been in need of effective modes of information exchange and knowledge sharing, but their practice in the industry is still far from satisfactory. In order to maintain their competence in a highly competitive environment and a globalized market, many organizations in the AEC industry have aimed at a move towards the development of learning organizations. Knowledge management has been seen as an effective way to have every member of an organization engaged in learning at all levels. At the very centre of knowledge management and learning is knowledge sharing through effective communication. Unfortunately, however, there is a big gap in the AEC industry between existing practice and the ideal in this area.
In order to effectively coordinate information and knowledge flow in the AEC industry, this present research has developed a framework for an information system – a Construction Information and Knowledge Protocol/Portal (CIKP) which integrates within it a publish/subscribe system, Semantic Web technology, and Social Web concepts. Publish/subscribe is an appropriate many-to-many, people-to-people communication paradigm for handling a highly fragmented industry such as construction. In order to enrich the expressiveness of publications and subscriptions, Semantic Web technology has been incorporated into this system through the development of ontologies as a formal and interoperable form of knowledge representation. This research first involved the development of a domain-level ontology (AR-Onto) to encapsulate knowledge about actors, roles, and their attributes in the AEC industry. AR-Onto was then extended and tailored to create an application-level ontology (CIKP-Onto) which has been used to support the semantics in the CIKP framework. Social Web concepts have been introduced to enrich the description of publications and subscriptions. Our aim has been to break down linear communication through social involvement and encourage a culture of sharing, and in the end, the CIKP framework has been developed to specify desired services in communicating information and knowledge, applicable technical approaches, and more importantly, the functions required to satisfy the needs of a variety of service scenarios.
|
79 |
A Content-Oriented Architecture for Publish/Subscribe Systems
Chen, Jiachen 16 March 2015 (links)
No description available.
|
80 |
Hardware Architecture of an XML/XPath Broker/Router for Content-Based Publish/Subscribe Data Dissemination Systems
El-Hassan, Fadi 25 February 2014 (links)
The dissemination of various types of data faces ongoing challenges with the growing need to access manifold information. Since interest in content is what drives data networks, some new technologies and ideas attempt to cope with these challenges by developing content-based rather than address-based architectures. The Publish/Subscribe paradigm can be a promising approach toward content-based data dissemination, especially since it provides total decoupling between publishers and subscribers. However, in content-based publish/subscribe systems, subscriptions are expressive and information is delivered based on matching this expressive content, which poses considerable performance challenges.

This dissertation explores a hardware solution for disseminating data in content-based publish/subscribe systems. The solution consists of an efficient hardware architecture of an XML/XPath broker that can route information, based on its content, either to other XML/XPath brokers or to ultimate users. A network of such brokers represents an overlay structure for XML content-based publish/subscribe data dissemination systems. Each broker can simultaneously process many XPath subscriptions, efficiently parse XML publications, and subsequently forward the notifications that result from high-performance matching processes. At the core of the broker architecture lies an XML parser that utilizes a novel Skeleton CAM-Based XML Parsing (SCBXP) technique, in addition to an XPath processor and a high-performance matching engine. Moreover, the broker employs effective mechanisms for content-based routing, so that subscriptions, publications, and notifications are routed through the network based on content.

The inherent reconfigurability of the broker’s hardware gives the system architecture the capability of residing in any FPGA device of moderate logic density. Furthermore, such a system-on-chip architecture is upgradable if any future hardware add-ons are needed. The current architecture is nonetheless mature and can effectively be implemented on an ASIC device.

Finally, this thesis presents and analyzes the experiments conducted on an FPGA prototype implementation of the proposed broker/router. The experiments cover tests of the SCBXP alone and of two phases of development of the whole broker. The corresponding results indicate the high performance that the parsing, storing, matching, and routing processes involved can achieve.
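In software terms, the broker's parse-then-match-then-forward pipeline corresponds to evaluating each stored XPath subscription against an incoming publication. The following Python analogue uses the standard library's limited XPath support purely for illustration; the thesis performs these steps in hardware via the SCBXP parser and a dedicated matching engine.

```python
import xml.etree.ElementTree as ET

def route(publication_xml, subscriptions):
    """Parse an XML publication and return the ids of the XPath
    subscriptions it matches, i.e., which subscribers to notify.
    Software sketch of the broker pipeline; note that ElementTree
    supports only a subset of XPath."""
    root = ET.fromstring(publication_xml)
    return [sub_id for sub_id, xpath in subscriptions
            if root.findall(xpath)]

pub = "<stock><name>XYZ</name><price>95</price></stock>"
subs = [("s1", ".//price"), ("s2", ".//volume")]
notify = route(pub, subs)  # -> ["s1"]
```

Re-parsing and re-matching every publication against every subscription is exactly the hot path the hardware architecture accelerates.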
|