  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Using web services for customised data entry

Deng, Yanbo January 2007 (has links)
Scientific databases often need to be accessed from a variety of different applications. There are usually many ways to retrieve and analyse data already in a database. However, it can be more difficult to enter data that was originally stored in different sources and formats (e.g. spreadsheets, other databases, statistical packages). This project focuses on investigating a generic, platform-independent way to simplify the loading of databases. The proposed solution uses Web services as middleware to supply essential data management functionality such as insertion, update, deletion and retrieval of data. These functions allow application developers to easily customise their own data entry applications according to local data sources, formats and user requirements. We implemented a Web service to support loading data into the Germinate database at the New Zealand Institute of Crop & Food Research (CFR). We also provided language-specific client toolkits to help developers invoke the Web service. The toolkits allow applications to be easily customised for different platforms. In addition, we developed sample applications to help end users load data from their project data sources via the Web service. The Web service approach was evaluated through user and developer trials. The feedback from the developer trial showed that using Web services as middleware is a useful approach that allows developers and competent end users to customise data entry with minimal effort. More importantly, the customised client applications enabled end users to load data directly from their project spreadsheets and databases, significantly reducing the effort required to export or transform the source data.
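The middleware pattern this abstract describes (a shared Web service exposing insert/update/delete/retrieve operations, with thin client toolkits mapping local spreadsheet formats onto it) can be sketched roughly as follows. The class, operation and field names are hypothetical, not the actual Germinate service API, and a stub stands in for the real network call:

```python
import csv
import io

class GerminateClient:
    """Hypothetical client toolkit wrapping the data-entry Web service.

    The transport is a pluggable callable standing in for the real
    SOAP/HTTP invocation; operation and field names are illustrative."""

    def __init__(self, transport):
        # transport: callable(operation, payload) -> response dict
        self.transport = transport

    def insert(self, table, record):
        return self.transport("insert", {"table": table, "record": record})

    def load_csv(self, table, text, mapping):
        # Map local spreadsheet columns onto database fields, then push
        # each row through the shared middleware insert operation.
        results = []
        for row in csv.DictReader(io.StringIO(text)):
            record = {db_field: row[col] for col, db_field in mapping.items()}
            results.append(self.insert(table, record))
        return results

# Stub transport standing in for the real Web service call.
def stub_transport(operation, payload):
    return {"ok": True, "operation": operation, "table": payload["table"]}

client = GerminateClient(stub_transport)
out = client.load_csv(
    "accessions",
    "name,origin\nkoru,NZ\nrua,NZ\n",
    {"name": "accession_name", "origin": "country"},
)
print(len(out), out[0]["ok"])  # 2 True
```

Because the column mapping is just a dictionary supplied by the caller, each local data source can be accommodated without changing the service itself, which is the customisation the abstract emphasises.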
242

Putting data delivery into context: Design and evaluation of adaptive networking support for successful communication in wireless self-organizing networks

Carneiro Viana, Aline 14 December 2011 (has links) (PDF)
This document is devoted to the research I have carried out over the past six years on the design and evaluation of wireless networking systems, and is the result of a number of collaborations. In particular, my main objective has been to support reliable data delivery in wireless self-organizing networks. The central question guiding my research activities has been the following: "what are the network services underlying the sound design of wireless communication strategies in self-organizing network systems (fixed or mobile)?". Self-organizing networks (WSONs) have intrinsic characteristics and consequently require specific solutions that distinguish them from traditional graph-based networks. The different types of WSONs require targeted adaptive services to cope with their nature (i.e., mobility, resource limitations, the unreliability of wireless communication, ...) and to match their operation to the environment. Influenced by such observations, my research activities have been guided by the main objective of providing network-level support for reliable data delivery in wireless self-organizing networks. The research directions I have developed with my colleagues in this context are classified as "node-level" and "network-level" adaptive services, distinguished by the level at which adaptation is considered. My contributions related to the first category of services concern localization and neighborhood discovery services. Owing to page limitations, this manuscript is, however, devoted to the research I have conducted on network-level adaptive services.
It is therefore structured in three main chapters corresponding to three classes of network services: topology management services, data management services, and routing and forwarding services. My first contribution concerns topology management services, which are realized through node adaptation (imposing a hierarchy on the network via clustering, or removing nodes from the network graph by switching them off) and through controlled mobility, which affects both the presence of nodes and links and the quality of links in the network graph. Based on node adaptation, the SAND protocol and the VINCOS and NetGeoS systems were proposed, addressing energy conservation and the self-structuring of wireless sensor networks (WSNs), respectively. Then, based on controlled mobility, proposals related to the design of the Hilbert trajectory and the Cover protocol were presented. They focus on deployment solutions for area coverage with mobile nodes and were designed to periodically monitor a geographic area or to cover mobile sensor nodes (targets). Regarding data management services, my contributions relate to data gathering, which involves data distribution solutions with organization-related objectives, and data dissemination, where data flows are directed into the network. To this end, the DEEP and Supple protocols were designed for wireless sensor networks, while FairMix and VIP delegation focus on information dissemination in social wireless networks.
In particular, to improve data dissemination, FairMix and VIP delegation exploit the similarity of the social interests of people or groups in fixed networks, or the social aspect of their wireless interactions in mobile networks. Finally, my work on adaptive forwarding services tackles the problem of opportunistic connectivity in delay-tolerant wireless networks. In this context, the Seeker and GrAnt protocols were designed; they use, respectively, the contact history between nodes (contact and communication patterns) and the properties of the nodes' social networks to predict future encounters and better adjust forwarding decisions. In view of the new communication possibilities and the dynamic changes observed in wireless networks over recent years, my research activities have progressively shifted from connected self-organizing networks to intermittently connected and opportunistic networks. Accordingly, my perspectives for future research are: (1) to take advantage of the uncontrolled mobility patterns of pervasive mobile devices to improve collaborative sensing efforts; (2) to look more deeply into techniques for generating social graphs from traces describing contacts between nodes; (3) to study which factors have a (positive or negative) impact on the success of information dissemination in mobile social networks; and (4) to study the possibility of adapting network coding to information dissemination in mobile social networks.
243

Discovering and Tracking Interesting Web Services

Rocco, Daniel J. (Daniel John) 01 December 2004 (has links)
The World Wide Web has become the standard mechanism for information distribution and scientific collaboration on the Internet. This dissertation research explores a suite of techniques for discovering relevant dynamic sources in a specific domain of interest and for managing Web data effectively. We first explore techniques for discovery and automatic classification of dynamic Web sources. Our approach utilizes a service class model of the dynamic Web that allows the characteristics of interesting services to be specified using a service class description. To promote effective Web data management, the Page Digest Web document encoding eliminates tag redundancy and places structure, content, tags, and attributes into separate containers, each of which can be referenced in isolation or in conjunction with the other elements of the document. The Page Digest Sentinel system leverages our unique encoding to provide efficient and scalable change monitoring for arbitrary Web documents through document compartmentalization and semantic change request grouping. Finally, we present XPack, an XML document compression system that uses a containerized view of an XML document to provide both good compression and efficient querying over compressed documents. XPack's queryable XML compression format is general-purpose, does not rely on domain knowledge or particular document structural characteristics for compression, and achieves better query performance than standard query processors using text-based XML. Our research expands the capabilities of existing dynamic Web techniques, providing superior service discovery and classification services, efficient change monitoring of Web information, and compartmentalized document handling. DynaBot is the first system to combine a service class view of the Web with a modular crawling architecture to provide automated service discovery and classification. 
The Page Digest Web document encoding represents Web documents efficiently by separating the individual characteristics of the document. The Page Digest Sentinel change monitoring system utilizes the Page Digest document encoding for scalable change monitoring through efficient change algorithms and intelligent request grouping. Finally, XPack is the first XML compression system that delivers compression rates similar to existing techniques while supporting better query performance than standard query processors using text-based XML.
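The container separation at the heart of the Page Digest encoding can be illustrated with a toy parser. This sketch only splits tags, attributes and text content into separate lists so each can be referenced in isolation; the real encoding described in the dissertation also captures document structure and eliminates tag redundancy:

```python
from html.parser import HTMLParser

class PageDigest(HTMLParser):
    """Toy illustration of the Page Digest idea: tags, attributes and
    text content go into separate containers. Not the actual encoding."""

    def __init__(self):
        super().__init__()
        self.tags, self.attrs, self.text = [], [], []

    def handle_starttag(self, tag, attrs):
        # Structure container: tag names, with attributes kept apart.
        self.tags.append(tag)
        self.attrs.append(dict(attrs))

    def handle_data(self, data):
        # Content container: text only, no markup.
        if data.strip():
            self.text.append(data.strip())

d = PageDigest()
d.feed('<html><body><p class="x">hello</p><p>world</p></body></html>')
print(d.tags)  # ['html', 'body', 'p', 'p']
print(d.text)  # ['hello', 'world']
```

A change monitor in the spirit of Sentinel could then diff only the container a request cares about (say, text content) without touching tags or attributes at all.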
244

Resilient Reputation and Trust Management: Models and Techniques

Xiong, Li 26 August 2005 (has links)
The continued advances in service-oriented computing and global communications have created a strong technology push for online information sharing and business transactions among enterprises, organizations and individuals. While these communities offer enormous opportunities, they also present potential threats due to a lack of trust. Reputation systems provide a way of building trust through social control by harnessing community knowledge in the form of feedback. Although feedback-based reputation systems help community participants decide whom to trust and encourage trustworthy behavior, they also introduce vulnerabilities due to potential manipulation by dishonest or malicious players. Therefore, building an effective and resilient reputation system remains a big challenge for the wide deployment of service-oriented computing. This dissertation proposes a decentralized, reputation-based trust-supporting framework called PeerTrust, focusing on models and techniques for resilient reputation management against vulnerabilities related to feedback aggregation, especially feedback sparsity with potential feedback manipulation, feedback oscillation, and loss of feedback privacy. This dissertation research has made three unique contributions to building a resilient decentralized reputation system. First, we develop a core reputation model with important trust parameters and a coherent trust metric for quantifying and comparing the trustworthiness of participants, together with decentralized strategies for implementing the trust model in an efficient and secure manner. Second, we develop techniques countering potential vulnerabilities associated with feedback aggregation, including a similarity inference scheme to counter feedback sparsity with potential feedback manipulation, and a novel metric based on a Proportional, Integral, and Derivative (PID) model to handle strategic oscillating behavior of participants.
Third but not least, we develop privacy-conscious trust management models and techniques to address the loss of feedback privacy. We develop a set of novel probabilistic decentralized privacy-preserving computation protocols for important primitive operations, and show how feedback aggregation can be divided into individual steps that utilize these primitive protocols, using as an example a reputation algorithm based on kNN classification. We perform experimental evaluations of each of the proposed schemes and show the feasibility, effectiveness, and cost of our approach. The PeerTrust framework presents an important step forward in developing attack-resilient reputation and trust systems.
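The core idea of feedback-weighted trust aggregation can be sketched in a few lines. The formula below is a simplified stand-in for the PeerTrust metric (a credibility-weighted average of satisfaction ratings); the full model also incorporates transaction context and community context factors, and all names here are illustrative:

```python
def trust(peer, feedback, credibility):
    """Credibility-weighted average of the satisfaction ratings a peer
    received. A dishonest rater with low credibility contributes little,
    which is what makes the aggregate resilient to manipulated feedback."""
    entries = feedback.get(peer, [])
    if not entries:
        return 0.0
    num = sum(sat * credibility[rater] for rater, sat in entries)
    den = sum(credibility[rater] for rater, _ in entries)
    return num / den if den else 0.0

# feedback: peer -> list of (rater, satisfaction in [0, 1]).
feedback = {"p1": [("a", 1.0), ("b", 0.2)]}
credibility = {"a": 0.9, "b": 0.1}  # rater b is heavily discounted
print(round(trust("p1", feedback, credibility), 2))  # 0.92
```

With equal weighting the score would be 0.6; discounting the low-credibility rater pulls it back toward the honest rating, illustrating why credibility estimation is central to resilience.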
245

Power System Data Compression For Archiving

Das, Sarasij 11 1900 (has links)
Advances in electronics, computer and information technology are fueling major changes in the area of power system instrumentation. More and more microprocessor-based digital instruments are replacing older types of meters. The extensive deployment of digital instruments is generating vast quantities of data, creating information pressure in utilities. Legacy SCADA-based data management systems do not support the management of such huge volumes of data. As a result, utilities either have to delete the metered information or store it on compact discs or tape drives, which are unreliable. At the same time, the traditional integrated power industry is going through a deregulation process. The market principle is forcing competition between power utilities, which in turn demands a higher focus on profit and competitive edge. To optimize system operation and planning, utilities need better decision-making processes, which depend on the availability of reliable system information. It is becoming clear to utilities that information is a vital asset, so they are now keen to store and use as much information as they can. Existing SCADA-based systems do not allow data to be stored for more than a few months. In this dissertation, therefore, the effectiveness of compression algorithms in compressing real-time operational data is assessed. Both lossy and lossless compression schemes are considered. For lossless compression, two schemes are proposed: Scheme 1 is based on arithmetic coding and Scheme 2 on run-length coding. Both schemes have two stages, the first of which is common to both: consecutive data elements are decorrelated using linear predictors. The output of the linear predictor, called the residual sequence, is coded by arithmetic coding in Scheme 1 and by run-length coding in Scheme 2. Three types of arithmetic coding are considered in this study: static, decrement and adaptive.
Among these, static and decrement coding are two-pass methods, where the first pass is used to collect symbol statistics and the second to code the symbols; the adaptive method uses only one pass. With the arithmetic-coding-based schemes, the average compression ratio achieved is around 30 for voltage data, around 9 for frequency data, around 14 for VAr generation data, around 11 for MW generation data and around 14 for line flow data. In Scheme 2, Golomb-Rice coding is used to compress the run lengths. With Scheme 2, the average compression ratio achieved is around 25 for voltage data, around 7 for frequency data, around 10 for VAr generation data, around 8 for MW generation data and around 9 for line flow data. The arithmetic-coding-based method aims mainly at a high compression ratio; the Golomb-Rice-based method does not compress as well, but is computationally much simpler. For lossy compression, a method based on principal component analysis (PCA) is used: from the data set, a few uncorrelated variables are derived and stored. The compression ratio of the PCA-based scheme is around 105-115 for voltage data, around 55-58 for VAr generation data, around 21-23 for MW generation data and around 27-29 for line flow data. This shows that the voltage parameter is amenable to better compression than the other parameters. Data for five system parameters (voltage, line flow, frequency, MW generation and MVAr generation) of the Southern regional grid of India have been considered for this study. One of the aims of this thesis is to argue that collected power system data can be put to other uses as well; in particular, we show that even mining the small amount of practical data collected from SRLDC reveals some interesting system behavior patterns.
A noteworthy feature of the thesis is that all the studies have been carried out considering data of practical systems. It is believed that the thesis opens up new questions for further investigations.
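The two-stage structure of the lossless schemes (linear prediction followed by entropy coding of the residual sequence) can be illustrated with a first-order predictor and plain run-length coding. The final Golomb-Rice or arithmetic coding of the output is omitted for brevity, and the sample data is invented, not from the SRLDC data set:

```python
def residuals(samples):
    # First-order linear predictor: predict each sample as the previous
    # one; the residual sequence is the prediction error.
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def run_length(seq):
    # Run-length code the residuals. A real Scheme 2 would then apply
    # Golomb-Rice codes to the runs; Scheme 1 would instead feed the
    # residuals to an arithmetic coder.
    runs, count = [], 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((seq[-1], count))
    return runs

# Slowly varying voltage-like data decorrelates into long zero runs,
# which is why voltage compresses so much better than other parameters.
volts = [400, 400, 400, 401, 401, 401, 401, 400]
res = residuals(volts)
print(res)              # [400, 0, 0, 1, 0, 0, 0, -1]
print(run_length(res))  # [(400, 1), (0, 2), (1, 1), (0, 3), (-1, 1)]
```

The predictor turns a nearly constant signal into a residual stream dominated by zeros, exactly the kind of skewed symbol distribution that arithmetic and Golomb-Rice coders exploit.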
246

On the Design of Socially-Aware Distributed Systems

Kourtellis, Nicolas 01 January 2012 (has links)
Social media services and applications enable billions of users to share an unprecedented amount of social information, which is further augmented by location and collocation information from mobile phones, and can be aggregated to provide an accurate digital representation of the social world. This dissertation argues that extracted social knowledge from this wealth of information can be embedded in the design of novel distributed, socially-aware applications and services, consequently improving system response time, availability and resilience to attacks, and reducing system overhead. To support this thesis, two research avenues are explored. First, this dissertation presents Prometheus, a socially-aware peer-to-peer service that collects social information from multiple sources, maintains it in a decentralized fashion on user-contributed nodes, and exposes it to applications through an interface that implements non-trivial social inferences. The system's socially-aware design leads to multiple system improvements: 1) it increases service availability by allowing users to manage their social information via socially-trusted peers, 2) it improves social inference performance and reduces message overhead by exploiting naturally-formed social groups, and 3) it reduces the opportunity of attackers to influence application requests. These performance improvements are assessed via simulations and a prototype deployment on a local cluster and on a worldwide testbed (PlanetLab) under emulated application workloads. Second, this dissertation defines the projection graph, the result of decentralizing a social graph onto a peer-to-peer system such as Prometheus, and studies the system's network properties and how they can be used to design more efficient socially-aware distributed applications and services. 
In particular: 1) it analytically formulates the relation between centrality metrics such as degree centrality, node betweenness centrality, and edge betweenness centrality in the social graph and in the emerging projection graph, 2) it experimentally demonstrates on real networks that for small groups of users mapped on peers, there is high association of social and projection graph properties, 3) it shows how these properties of the (dynamic) projection graph can be accurately inferred from the properties of the (slower changing) social graph, and 4) it demonstrates with two search application scenarios the usability of the projection graph in designing social search applications and unstructured P2P overlays. These research results lead to the formulation of lessons applicable to the design of socially-aware applications and distributed systems for improved application performance such as social search, data dissemination, data placement and caching, as well as for reduced system communication overhead and increased system resilience to attacks.
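The projection graph itself is easy to sketch: map each social-graph node to its hosting peer and connect two peers whenever a social edge crosses them. The node-to-peer mapping below is illustrative, not taken from the dissertation:

```python
from collections import defaultdict

def projection_graph(social_edges, node_to_peer):
    # Two peers are adjacent in the projection graph when at least one
    # social edge has its endpoints hosted on those two peers.
    peer_edges = set()
    for u, v in social_edges:
        pu, pv = node_to_peer[u], node_to_peer[v]
        if pu != pv:
            peer_edges.add((min(pu, pv), max(pu, pv)))
    return peer_edges

social = [("alice", "bob"), ("bob", "carol"),
          ("carol", "dave"), ("alice", "carol")]
mapping = {"alice": "P1", "bob": "P1", "carol": "P2", "dave": "P3"}

edges = projection_graph(social, mapping)
degree = defaultdict(int)  # degree centrality in the projection graph
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
print(sorted(edges))  # [('P1', 'P2'), ('P2', 'P3')]
print(dict(degree))   # {'P1': 1, 'P2': 2, 'P3': 1}
```

Note how the peer hosting the socially central user (carol on P2) ends up central in the projection graph as well, the kind of association between social and projection properties that the dissertation quantifies.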
247

The Long Tail of hydroinformatics: implementing biological and oceanographic information in hydrologic information systems

Hersh, Eric Scott 01 February 2013 (has links)
Hydrologic Information Systems (HIS) have emerged as a means to organize, share, and synthesize water data. This work extends current HIS capabilities by providing additional capacity and flexibility for marine physical and chemical observations data and for freshwater and marine biological observations data. These goals are accomplished in two broad and disparate case studies: an HIS implementation for the oceanographic domain as applied to the offshore environment of the Chukchi Sea, a region of the Alaskan Arctic, and a separate HIS implementation for the aquatic biology and environmental flows domains as applied to Texas rivers. These case studies led to the development of a new four-dimensional data cube to accommodate biological observations data, with axes of space, time, species, and trait, a new data model for biological observations, an expanded ontology and data dictionary for biological taxa and traits, and an expanded chain-of-custody approach for improved data source tracking. A large number of small studies across a wide range of disciplines comprise the "Long Tail" of science. This work builds upon the successes of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) by applying HIS technologies to two new Long Tail disciplines: aquatic biology and oceanography. In this regard the research improves our understanding of how to deal with collections of biological data stored alongside sensor-based physical data. Based on the results of these case studies, a common framework for water information management for terrestrial and marine systems has emerged, consisting of Hydrologic Information Systems for observations data, Geographic Information Systems for geographic data, and Digital Libraries for documents and other digital assets. It is envisioned that the next generation of HIS will comprise these three components and will thus actually be a Water Information System of Systems.
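The four-dimensional data cube for biological observations can be sketched as a record indexed by the four axes, with cube slices recovered by filtering on any axis. Field names and the sample records here are illustrative, not the published data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BioObservation:
    """One cell of the four-dimensional biological data cube:
    space, time, species, and trait index an observed value."""
    site: str       # space axis (sampling location)
    time: str       # time axis (ISO date, for simplicity)
    species: str    # species axis (taxon name)
    trait: str      # trait axis (what was measured)
    value: float

obs = [
    BioObservation("Rio Grande km 12", "2010-06-01",
                   "Macrhybopsis aestivalis", "length_mm", 42.0),
    BioObservation("Rio Grande km 12", "2010-06-01",
                   "Macrhybopsis aestivalis", "count", 17.0),
]

# Slicing the cube along the trait axis:
lengths = [o.value for o in obs if o.trait == "length_mm"]
print(lengths)  # [42.0]
```

This is what distinguishes biological observations from sensor time series: the species and trait axes are first-class dimensions rather than attributes folded into a variable name, which is why a dedicated data model and taxa/trait ontology were needed.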
248

Phylogenetic studies of the vesicular fusion machinery / Phylogenetische Studien der vesikulären Fusionsmaschinerie

Kienle, Nickias 12 July 2010 (has links)
No description available.
249

Detection of malicious user communities in data networks

Moghaddam, Amir 04 April 2011 (has links)
Malicious users in data networks may form social interactions to create communities in abnormal ways that deviate from the communication standards of a network. As a community, these users may perform many illegal tasks such as spamming, denial-of-service attacks, spreading confidential information, or sharing illegal content. They may use different methods to evade existing security systems, such as session splicing, polymorphic shell code, changing port numbers, and basic string manipulation. One way to masquerade traffic is to change its data rate patterns or to use very low (trickle) data rates for communication; the latter is the focus of this research. Network administrators consider these communities of users a serious threat. In this research, we propose a framework that not only detects abnormal data rate patterns in a stream of traffic by using a type of neural network, the Self-Organizing Map (SOM), but also detects and reveals the community structure of these users for further decisions. Through a set of comprehensive simulations, it is shown that the suggested framework is able to detect these malicious user communities with low false negative and false positive rates. We further discuss ways of improving the performance of the neural network by studying the size of the SOMs.
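The detection idea, training a self-organizing map on normal data-rate patterns so that unseen trickle rates produce large quantization errors, can be sketched as follows. The map size, one-dimensional scalar features, and thresholds are all illustrative; the framework in the thesis uses richer traffic features and adds community detection on top:

```python
import random

random.seed(0)

def train_som(data, n_nodes=4, epochs=50, lr=0.5):
    """Minimal 1-D self-organizing map over scalar data-rate samples."""
    nodes = [random.choice(data) for _ in range(n_nodes)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)  # decaying learning rate
        for x in data:
            # Best-matching unit, then pull it and its lattice
            # neighbours toward the sample.
            best = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
            for i in range(n_nodes):
                h = 1.0 if i == best else 0.5 if abs(i - best) == 1 else 0.0
                nodes[i] += rate * h * (x - nodes[i])
    return nodes

def quantization_error(nodes, x):
    # Distance to the best-matching unit; large errors flag rate
    # patterns the map never saw during training.
    return min(abs(n - x) for n in nodes)

# Train on normal data rates (say, kb/s); trickle rates were never seen.
normal_rates = [random.uniform(90, 110) for _ in range(200)]
som = train_som(normal_rates)
print(quantization_error(som, 100.0) < 15)  # True: normal traffic fits
print(quantization_error(som, 1.0) > 50)    # True: trickle rate is anomalous
```

Thresholding the quantization error gives the anomaly flag; the community-structure step would then group the flagged flows by their endpoints, which is beyond this sketch.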
250

Sistema gerenciador de documentação de projeto / A design trace management system

Soares, Sandro Neves January 1996 (has links)
The complexity of electronic systems design, due to the number of tools involved, the enormous volume of generated data and the complex nature of that data, was the main cause of the appearance of frameworks at the end of the 1980s. Frameworks are platforms that support the development of design environments and whose main purpose is to free VLSI designers from the supplementary tasks in the design process, such as data management, making it possible to direct efforts exclusively toward obtaining better results, in shorter time and at lower cost. To this end, many techniques have been used in the implementation of frameworks. One of these techniques is known as design steps documentation. Design steps documentation is a resource used to keep the design history (usually, executed tools and generated data). It has been widely used in various frameworks, but none of them takes full advantage of this resource: some use the design steps documentation only in data management services, while others use it only in design management services. The proposal of this work, then, is to create a system that takes full advantage of the design steps documentation, providing information and services to other sub-systems of the framework to complement their functionality, making them more powerful.
