241

Discovering and Tracking Interesting Web Services

Rocco, Daniel J. (Daniel John) 01 December 2004
The World Wide Web has become the standard mechanism for information distribution and scientific collaboration on the Internet. This dissertation research explores a suite of techniques for discovering relevant dynamic sources in a specific domain of interest and for managing Web data effectively. We first explore techniques for discovery and automatic classification of dynamic Web sources. Our approach utilizes a service class model of the dynamic Web that allows the characteristics of interesting services to be specified using a service class description. To promote effective Web data management, the Page Digest Web document encoding eliminates tag redundancy and places structure, content, tags, and attributes into separate containers, each of which can be referenced in isolation or in conjunction with the other elements of the document. The Page Digest Sentinel system leverages our unique encoding to provide efficient and scalable change monitoring for arbitrary Web documents through document compartmentalization and semantic change request grouping. Finally, we present XPack, an XML document compression system that uses a containerized view of an XML document to provide both good compression and efficient querying over compressed documents. XPack's queryable XML compression format is general-purpose, does not rely on domain knowledge or particular document structural characteristics for compression, and achieves better query performance than standard query processors using text-based XML. Our research expands the capabilities of existing dynamic Web techniques, providing superior service discovery and classification services, efficient change monitoring of Web information, and compartmentalized document handling. DynaBot is the first system to combine a service class view of the Web with a modular crawling architecture to provide automated service discovery and classification. The Page Digest Web document encoding represents Web documents efficiently by separating the individual characteristics of the document. The Page Digest Sentinel change monitoring system utilizes the Page Digest document encoding for scalable change monitoring through efficient change algorithms and intelligent request grouping. Finally, XPack is the first XML compression system that delivers compression rates similar to existing techniques while supporting better query performance than standard query processors using text-based XML.
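A minimal sketch of the containerized idea behind Page Digest, assuming a simplified model in which a parsed document is split into separate structure, tag, content, and attribute containers, with each distinct tag name stored only once; the class and field names here are hypothetical illustrations, not the dissertation's actual encoding.

```python
from html.parser import HTMLParser

class PageDigest(HTMLParser):
    """Hypothetical sketch: separate a document into containers,
    storing each distinct tag name once (tag redundancy elimination)."""
    def __init__(self):
        super().__init__()
        self.tags = []        # distinct tag names, each stored once
        self.structure = []   # element order as indices into self.tags
        self.content = []     # text nodes, in document order
        self.attributes = []  # (tag index, attribute list) pairs

    def handle_starttag(self, tag, attrs):
        if tag not in self.tags:      # repeated tags add no new entries
            self.tags.append(tag)
        idx = self.tags.index(tag)
        self.structure.append(idx)    # structure references tags by index
        if attrs:
            self.attributes.append((idx, attrs))

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.content.append(text)

digest = PageDigest()
digest.feed("<html><body><p class='a'>Hello</p><p>World</p></body></html>")
print(digest.tags)       # ['html', 'body', 'p'] -- 'p' stored once
print(digest.structure)  # [0, 1, 2, 2]
print(digest.content)    # ['Hello', 'World']
```

Each container can then be read in isolation (for example, the content container alone for text search) or joined with the others to reconstruct the document, which is what enables the change monitoring and compression uses described above.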
242

Resilient Reputation and Trust Management: Models and Techniques

Xiong, Li 26 August 2005
The continued advances in service-oriented computing and global communications have created a strong technology push for online information sharing and business transactions among enterprises, organizations, and individuals. While these communities offer enormous opportunities, they also present potential threats due to a lack of trust. Reputation systems provide a way to build trust through social control by harnessing community knowledge in the form of feedback. Although feedback-based reputation systems help community participants decide whom to trust and encourage trustworthy behavior, they also introduce vulnerabilities due to potential manipulation by dishonest or malicious players. Building an effective and resilient reputation system therefore remains a big challenge for the wide deployment of service-oriented computing. This dissertation proposes a decentralized reputation-based trust supporting framework called PeerTrust, focusing on models and techniques for resilient reputation management against feedback aggregation related vulnerabilities, especially feedback sparsity with potential feedback manipulation, feedback oscillation, and loss of feedback privacy. This dissertation research has made three unique contributions toward building a resilient decentralized reputation system. First, we develop a core reputation model with important trust parameters and a coherent trust metric for quantifying and comparing the trustworthiness of participants, along with decentralized strategies for implementing the trust model in an efficient and secure manner. Second, we develop techniques countering potential vulnerabilities associated with feedback aggregation, including a similarity inference scheme to counter feedback sparsity with potential feedback manipulation, and a novel metric based on a Proportional-Integral-Derivative (PID) model to handle strategic oscillating behavior of participants. Third, and not least, we develop privacy-conscious trust management models and techniques to address the loss of feedback privacy, including a set of novel probabilistic decentralized privacy-preserving computation protocols for important primitive operations. We show how feedback aggregation can be divided into individual steps that utilize these primitive protocols through an example reputation algorithm based on kNN classification. We perform experimental evaluations for each of the proposed schemes and show the feasibility, effectiveness, and cost of our approach. The PeerTrust framework presents an important step forward in developing attack-resilient reputation trust systems.
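A toy sketch of the PID idea for dampening oscillating behavior, assuming a trust score built from the current feedback average (proportional term), the accumulated feedback history (integral term), and the recent change in feedback (derivative term); the weights, function name, and example numbers are illustrative, not PeerTrust's actual formulation.

```python
def pid_trust(feedback_history, kp=0.6, ki=0.3, kd=0.1):
    """Illustrative PID-style trust score over a time series of
    per-window average feedback ratings in [0, 1]."""
    current = feedback_history[-1]                            # proportional
    integral = sum(feedback_history) / len(feedback_history)  # accumulated history
    derivative = (feedback_history[-1] - feedback_history[-2]
                  if len(feedback_history) > 1 else 0.0)      # recent change
    return kp * current + ki * integral + kd * derivative

# An oscillating peer that builds up reputation and then misbehaves is
# penalized by the derivative term when its rating drops sharply:
print(pid_trust([0.9, 0.9, 0.9, 0.2]))  # ~0.27
print(pid_trust([0.9, 0.9, 0.9, 0.9]))  # ~0.81 for a consistently good peer
```

The integral term keeps one bad window from erasing a long good history, while the derivative term makes a sudden behavior flip immediately visible, which is what blunts the milk-then-misbehave strategy.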
243

Power System Data Compression For Archiving

Das, Sarasij 11 1900
Advances in electronics, computer, and information technology are fueling major changes in power system instrumentation. More and more microprocessor-based digital instruments are replacing older types of meters. The extensive deployment of digital instruments is generating vast quantities of data, creating information pressure in utilities. Legacy SCADA-based data management systems do not support the management of such huge volumes of data, so utilities either have to delete the metered information or store it on compact discs or tape drives, which are unreliable. At the same time, the traditionally integrated power industry is going through a deregulation process. The market principle is forcing competition between power utilities, which in turn demands a higher focus on profit and competitive edge. To optimize system operation and planning, utilities need better decision-making processes, which depend on the availability of reliable system information. It is becoming clear to utilities that information is a vital asset, and they are now keen to store and use as much information as they can; existing SCADA-based systems, however, do not allow more than a few months of data to be stored. This dissertation therefore assesses the effectiveness of compression algorithms in compressing real-time operational data. Both lossy and lossless compression schemes are considered. For the lossless method, two schemes are proposed: Scheme 1 is based on arithmetic coding and Scheme 2 on run-length coding. Both schemes have two stages, the first of which is common to both: consecutive data elements are decorrelated using linear predictors. The output of the linear predictor, called the residual sequence, is coded by arithmetic coding in Scheme 1 and by run-length coding in Scheme 2. Three different types of arithmetic coding are considered in this study: static, decrement, and adaptive. Static and decrement coding are two-pass methods, where the first pass collects symbol statistics and the second codes the symbols; adaptive coding uses only one pass. With the arithmetic coding based schemes, the average compression ratio achieved is around 30 for voltage data, around 9 for frequency data, around 14 for VAr generation data, around 11 for MW generation data, and around 14 for line flow data. In Scheme 2, Golomb-Rice coding is used to compress the run lengths, and the average compression ratio achieved is around 25 for voltage data, around 7 for frequency data, around 10 for VAr generation data, around 8 for MW generation data, and around 9 for line flow data. The arithmetic coding based method aims mainly at achieving a high compression ratio; the Golomb-Rice based method does not achieve as good a compression ratio, but it is computationally much simpler than arithmetic coding. For the lossy method, a principal component analysis (PCA) based compression scheme is used: from the data set, a few uncorrelated variables are derived and stored. The compression ratio of the PCA-based scheme is around 105-115 for voltage data, around 55-58 for VAr generation data, around 21-23 for MW generation data, and around 27-29 for line flow data, showing that the voltage parameter is amenable to better compression than the other parameters.
Data for five system parameters - voltage, line flow, frequency, MW generation, and MVAr generation - of the Southern regional grid of India have been considered for the study. One aim of this thesis is to argue that collected power system data can be put to other uses as well; in particular, we show that even mining the small amount of practical data collected from SRLDC reveals some interesting system behavior patterns. A noteworthy feature of the thesis is that all the studies have been carried out on data from practical systems. It is believed that the thesis opens up new questions for further investigation.
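A hedged sketch of the Scheme 2 pipeline described above: a linear predictor decorrelates consecutive samples, and the (sign-remapped) residuals are Golomb-Rice coded. The first-order predictor, the zigzag mapping, the parameter k, and the sample values are illustrative choices, not the thesis's tuned settings.

```python
def predict_residuals(samples):
    """First-order linear predictor: each sample is predicted by the
    previous one; the residual sequence is what gets entropy-coded."""
    return [samples[0]] + [samples[i] - samples[i - 1]
                           for i in range(1, len(samples))]

def zigzag(n):
    """Map signed residuals to non-negative integers for Rice coding."""
    return n << 1 if n >= 0 else ((-n) << 1) - 1

def golomb_rice_encode(value, k):
    """Golomb-Rice code of a non-negative integer with parameter k:
    unary-coded quotient, then a k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# Slowly varying voltage readings (scaled to integers) compress well
# because the residuals cluster near zero:
voltages = [4001, 4002, 4002, 4003, 4001]
residuals = predict_residuals(voltages)
bits = "".join(golomb_rice_encode(zigzag(r), k=2) for r in residuals[1:])
print(residuals)                          # [4001, 1, 0, 1, -2]
print(bits, len(bits), "bits for 4 residuals")  # 12 bits vs. 64 raw at 16 bits each
```

This also makes the trade-off in the abstract concrete: the Rice coder is a few integer operations per symbol, whereas an arithmetic coder must maintain symbol statistics and interval state, buying its higher compression ratio with more computation.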
244

On the Design of Socially-Aware Distributed Systems

Kourtellis, Nicolas 01 January 2012
Social media services and applications enable billions of users to share an unprecedented amount of social information, which is further augmented by location and collocation information from mobile phones, and can be aggregated to provide an accurate digital representation of the social world. This dissertation argues that extracted social knowledge from this wealth of information can be embedded in the design of novel distributed, socially-aware applications and services, consequently improving system response time, availability and resilience to attacks, and reducing system overhead. To support this thesis, two research avenues are explored. First, this dissertation presents Prometheus, a socially-aware peer-to-peer service that collects social information from multiple sources, maintains it in a decentralized fashion on user-contributed nodes, and exposes it to applications through an interface that implements non-trivial social inferences. The system's socially-aware design leads to multiple system improvements: 1) it increases service availability by allowing users to manage their social information via socially-trusted peers, 2) it improves social inference performance and reduces message overhead by exploiting naturally-formed social groups, and 3) it reduces the opportunity of attackers to influence application requests. These performance improvements are assessed via simulations and a prototype deployment on a local cluster and on a worldwide testbed (PlanetLab) under emulated application workloads. Second, this dissertation defines the projection graph, the result of decentralizing a social graph onto a peer-to-peer system such as Prometheus, and studies the system's network properties and how they can be used to design more efficient socially-aware distributed applications and services. In particular: 1) it analytically formulates the relation between centrality metrics such as degree centrality, node betweenness centrality, and edge betweenness centrality in the social graph and in the emerging projection graph, 2) it experimentally demonstrates on real networks that for small groups of users mapped on peers, there is high association of social and projection graph properties, 3) it shows how these properties of the (dynamic) projection graph can be accurately inferred from the properties of the (slower changing) social graph, and 4) it demonstrates with two search application scenarios the usability of the projection graph in designing social search applications and unstructured P2P overlays. These research results lead to the formulation of lessons applicable to the design of socially-aware applications and distributed systems for improved application performance such as social search, data dissemination, data placement and caching, as well as for reduced system communication overhead and increased system resilience to attacks.
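A small sketch of the projection-graph construction, assuming the networkx library: a social graph is partitioned across peers, and the projection graph connects two peers whenever they host socially-linked users. The toy graph and the user-to-peer grouping are arbitrary illustrations, not data from the dissertation.

```python
import networkx as nx

# Toy social graph: users a-f with friendship edges.
social = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"),
                   ("d", "e"), ("e", "f"), ("b", "e")])

# Illustrative mapping of users onto three peers.
peer_of = {"a": "p1", "b": "p1", "c": "p2", "d": "p2", "e": "p3", "f": "p3"}

# Projection graph: an edge between two peers whenever any of their
# hosted users are connected in the social graph.
projection = nx.Graph()
projection.add_nodes_from(set(peer_of.values()))
for u, v in social.edges():
    if peer_of[u] != peer_of[v]:
        projection.add_edge(peer_of[u], peer_of[v])

# Compare centralities in the social graph and its projection; the
# dissertation's point is that, for small groups per peer, the two
# sets of properties are highly associated.
print(nx.degree_centrality(social))
print(nx.betweenness_centrality(projection))
```

Because the social graph changes more slowly than peer membership, estimating projection-graph centralities from social-graph centralities, as in point 3 above, avoids recomputing metrics over the dynamic overlay.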
245

The Long Tail of hydroinformatics: implementing biological and oceanographic information in hydrologic information systems

Hersh, Eric Scott 01 February 2013
Hydrologic Information Systems (HIS) have emerged as a means to organize, share, and synthesize water data. This work extends current HIS capabilities by providing additional capacity and flexibility for marine physical and chemical observations data and for freshwater and marine biological observations data. These goals are accomplished in two broad and disparate case studies – an HIS implementation for the oceanographic domain as applied to the offshore environment of the Chukchi Sea, a region of the Alaskan Arctic, and a separate HIS implementation for the aquatic biology and environmental flows domains as applied to Texas rivers. These case studies led to the development of a new four-dimensional data cube to accommodate biological observations data with axes of space, time, species, and trait, a new data model for biological observations, an expanded ontology and data dictionary for biological taxa and traits, and an expanded chain-of-custody approach for improved data source tracking. A large number of small studies across a wide range of disciplines comprise the “Long Tail” of science. This work builds upon the successes of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) by applying HIS technologies to two new Long Tail disciplines: aquatic biology and oceanography. In this regard, this research improves our understanding of how to deal with collections of biological data stored alongside sensor-based physical data. Based on the results of these case studies, a common framework for water information management for terrestrial and marine systems has emerged, consisting of Hydrologic Information Systems for observations data, Geographic Information Systems for geographic data, and Digital Libraries for documents and other digital assets. It is envisioned that the next generation of HIS will comprise these three components and will thus actually be a Water Information System of Systems.
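A minimal sketch of the space-time-species-trait data cube described above, assuming a plain Python mapping keyed by the four axes; the field names, site identifiers, and observation values are invented for illustration, not the thesis's actual schema or data.

```python
from collections import namedtuple

# One biological observation, indexed by the four cube axes plus a value.
Observation = namedtuple("Observation",
                         ["site", "time", "species", "trait", "value"])

cube = {}

def add(obs):
    """Index an observation by its (space, time, species, trait) key."""
    cube[(obs.site, obs.time, obs.species, obs.trait)] = obs.value

def slice_by_species(species):
    """One face of the cube: every observation of a given species."""
    return {k: v for k, v in cube.items() if k[2] == species}

add(Observation("ChukchiSea-12", "2010-08-15", "Boreogadus saida",
                "length_cm", 14.2))
add(Observation("BrazosRiver-3", "2011-06-01", "Macrhybopsis storeriana",
                "count", 27))
print(slice_by_species("Boreogadus saida"))
```

Fixing the key structure this way is what lets marine and freshwater biological observations sit in one store and be sliced along any axis, in the same way sensor observations are sliced by site and time in a conventional HIS.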
246

Phylogenetic studies of the vesicular fusion machinery

Kienle, Nickias 12 July 2010
No description available.
247

Detection of malicious user communities in data networks

Moghaddam, Amir 04 April 2011
Malicious users in data networks may form social interactions to create communities in abnormal fashions that deviate from the communication standards of a network. As a community, these users may perform many illegal tasks such as spamming, denial-of-service attacks, spreading confidential information, or sharing illegal content. They may use different methods to evade existing security systems, such as session splicing, polymorphic shellcode, changing port numbers, and basic string manipulation. One way to masquerade traffic is to change the data rate patterns or to use very low (trickle) data rates for communication; the latter is the focus of this research. Network administrators consider these communities of users a serious threat. In this research, we propose a framework that not only detects abnormal data rate patterns in a stream of traffic using a type of neural network, the Self-Organizing Map (SOM), but also detects and reveals the community structure of these users for further decisions. Through a set of comprehensive simulations, it is shown that the suggested framework is able to detect these malicious user communities with low false negative and false positive rates. We further discuss ways of improving the performance of the neural network by studying the size of the SOM.
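A compact numpy sketch of the SOM-based idea: train a small map on feature vectors from normal flows, then flag flows whose best-matching unit is unusually distant (a large quantization error), which is how a trickle-rate flow stands out. The two-feature representation, map size, and threshold rule are invented for illustration, not the thesis's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, rows=5, cols=5, epochs=20, lr=0.5, sigma=1.5):
    """Minimal Self-Organizing Map: a grid of weight vectors pulled
    toward input samples, with a Gaussian neighborhood around the winner."""
    w = rng.random((rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(epochs):
        decay = np.exp(-t / epochs)          # shrink learning rate and radius
        for x in rng.permutation(data):
            d = np.linalg.norm(w - x, axis=2)
            win = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
            h = np.exp(-((grid - win) ** 2).sum(axis=2)
                       / (2 * (sigma * decay) ** 2))      # neighborhood weights
            w += lr * decay * h[..., None] * (x - w)
    return w

def quantization_error(w, x):
    """Distance from a sample to its best-matching unit."""
    return np.linalg.norm(w - x, axis=2).min()

# Per-flow features: [mean data rate, rate variance] (illustrative).
normal = rng.normal([100.0, 10.0], [5.0, 2.0], size=(200, 2))
som = train_som(normal)
threshold = max(quantization_error(som, x) for x in normal)

trickle_flow = np.array([2.0, 0.1])   # very low (trickle) data rate
print(quantization_error(som, trickle_flow) > threshold)  # flagged as abnormal
```

Flows flagged this way would then feed the second stage of the framework, which groups the flagged users to reveal their community structure.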
248

A design trace management system

Soares, Sandro Neves January 1996
The complexity of VLSI design, due to the number of tools involved, the enormous volume of data generated, and the complex nature of that data, was the main cause of the appearance of frameworks at the end of the 1980s. Frameworks are platforms that support the development of design environments and whose main purpose is to free VLSI designers from supplementary tasks in the design process, such as data management, making it possible to direct efforts exclusively toward obtaining better results in less time and at lower cost. To this end, many techniques have been used in the implementation of frameworks. One of these techniques is known as design steps documentation. Design steps documentation is a resource used to keep the design history (usually, the tools executed and the data generated). It has been widely used in work related to frameworks, but none of that work takes full advantage of the resource: some use the design steps documentation only in data management services, others only in design management services. The proposal of this work, then, is to create a system that takes full advantage of the design steps documentation, providing information and services to other sub-systems of the framework to complement their functionality, making them broader and more powerful.
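A hedged sketch of what one entry in such a design history could look like, assuming a trace of executed tools together with the data they consumed and produced; the record fields, tool names, and file names are invented for illustration, not the thesis's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DesignStep:
    """One entry in the design history: the tool that ran and the
    data it consumed and produced."""
    tool: str
    inputs: list
    outputs: list
    timestamp: datetime = field(default_factory=datetime.now)

trace = []
trace.append(DesignStep("synthesis", ["alu.vhd"], ["alu.ngc"]))
trace.append(DesignStep("place_and_route", ["alu.ngc"], ["alu.ncd"]))

# The same trace serves both service families the abstract contrasts:
# data management can ask which step produced a given file, and design
# management can replay the tool sequence.
producer = next(s for s in trace if "alu.ncd" in s.outputs)
print(producer.tool)  # place_and_route
```

Keeping one shared trace that both data management and design management query is exactly the "full advantage" the work argues earlier systems failed to take.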
249

Quality Data Management in the Next Industrial Revolution : A Study of Prerequisites for Industry 4.0 at GKN Aerospace Sweden

Erkki, Robert, Johnsson, Philip January 2018
The so-called Industry 4.0 is commonly denoted by its agitators as the fourth industrial revolution and promises to turn the manufacturing sector on its head. However, all that glimmers is not gold, and in the backwash of hefty consultant fees questions arise: What are the drivers behind Industry 4.0? Which barriers exist? How does one prepare one's manufacturing procedures in anticipation of the (if ever) coming era? What is the Internet of Things, and what file sizes are characterised as big data? To answer these questions, this thesis aims to resolve the ambiguity surrounding the definitions of Industry 4.0, as well as to clarify the fuzziness of a data-driven manufacturing approach, that is, the comprehensive usage of data, including collection and storage, quality control, and analysis. To do so, this thesis was carried out as a case study at GKN Aerospace Sweden (GAS). Through interviews and observations, as well as a literature review of the subject, the thesis examined different processes' data-driven needs from a quality management perspective. The findings show that the collection of quality data at GAS is mainly concerned with explicitly stated customer requirements; as such, the data available for the examined processes proves inadequate for multivariate analytics. The transition towards a data-driven state of manufacturing involves a five-stage process wherein data collection through sensors is seen as a key enabler for multivariate analytics and deepened process knowledge. Together, these efforts form the prerequisites for Industry 4.0. To effectively start the transition towards Industry 4.0, near-term recommendations for GAS include: capturing all data, with emphasis on process data; improving the accessibility of data; and ultimately taking advantage of advanced analytics. Collectively, these undertakings pave the way for the actual improvements of Industry 4.0, such as digital twins, machine cognition, and process self-optimization. Finally, due to the delimitations of the case study, the findings are generalizable only to companies with similar characteristics, i.e. complex processes with low volumes.
