1 |
A new approach to implementing atomic data types. Wu, Zhixue, January 1993 (has links)
No description available.
|
2 |
Design and Analysis of a Highly Efficient File Server Group. Liu, Feng-jung, 29 January 2005 (has links)
The IT community has increasingly come to view storage as a resource that should be shared among computer systems and managed independently of the computer systems it serves. Moreover, the explosive growth of Web content has drawn increasing attention to two major challenges for network file systems: scalability and high availability. Improving system reliability and availability, achieving the expected reduction in operational expenses, and reducing the burden of system management have therefore become essential issues. A basic technique for improving the reliability of a file system is to mask the effects of failures through replication, with consistency control protocols ensuring consistency among the replicas.
In this dissertation, we leverage the concept of an intermediate file handle to hide the heterogeneity of the underlying file systems. A monolithic server system, however, suffers from poor utilization because it neither checks dependences between writes nor manages out-of-order requests. Building on the intermediate file handle, we therefore propose an efficient data consistency control scheme that eliminates unnecessary waits for independent NFS writes, improving the efficiency of the file server group. In addition, we propose a simple load-sharing mechanism for NFS clients to improve system throughput and the utilization of replicas. Finally, experimental results demonstrate the efficiency of the proposed consistency control mechanism and load-sharing policy. Above all, ease of implementation is our main design consideration.
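As a concrete illustration of the kind of dependence checking described above, the sketch below, which is not taken from the dissertation, assumes each write request carries a file handle, a byte offset and a length, and treats two writes as dependent only when they touch overlapping bytes of the same file; all names are illustrative.

from dataclasses import dataclass

@dataclass
class WriteRequest:
    file_handle: str   # identifies the target file (e.g., an intermediate file handle)
    offset: int        # starting byte offset of the write
    length: int        # number of bytes written

def depends_on(w1, w2):
    # Two writes conflict only if they target the same file and their byte ranges overlap.
    if w1.file_handle != w2.file_handle:
        return False
    return w1.offset < w2.offset + w2.length and w2.offset < w1.offset + w1.length

def must_wait(new_write, pending):
    # A new write waits only for the pending writes it actually depends on;
    # independent writes can be dispatched immediately instead of queuing behind unrelated requests.
    return any(depends_on(new_write, p) for p in pending)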
|
3 |
Improving data quality: data consistency, deduplication, currency and accuracy. Yu, Wenyuan, January 2013 (has links)
Data quality is one of the key problems in data management. An unprecedented amount of data has been accumulated and has become a valuable organizational asset, and the value of this data relies greatly on its quality. However, real-life data is often dirty: it may be inconsistent, duplicated, stale, inaccurate or incomplete, which reduces its usability and increases the cost of doing business. Hence the need to improve data quality, which comprises five central issues: data consistency, data deduplication, data currency, data accuracy and information completeness. This thesis presents the results of our work on the first four of these issues.
The first part of the thesis investigates incremental verification of data consistency in distributed data. Given a distributed database D, a set S of conditional functional dependencies (CFDs), the set V of violations of the CFDs in D, and updates ΔD to D, the problem is to find, with minimum data shipment, the changes ΔV to V in response to ΔD. Although the problems are intractable, we show that they are bounded: there exist algorithms to detect errors whose computational cost and data shipment are both linear in the size of ΔD and ΔV, independent of the size of the database D. We provide such incremental algorithms for both vertically and horizontally partitioned data, and show that the algorithms are optimal.
The second part of the thesis studies the interaction between record matching and data repairing. Record matching, the main technique underlying data deduplication, aims to identify tuples that refer to the same real-world object; repairing makes a database consistent by fixing errors in the data using constraints. Most data cleaning systems treat these as separate processes, based on heuristic solutions. Our studies show, however, that repairing can effectively help identify matches, and vice versa. To capture this interaction, we propose a uniform framework that seamlessly unifies repairing and matching operations to clean a database based on integrity constraints, matching rules and master data.
The third part of the thesis presents our study of finding certain fixes, i.e., fixes that are guaranteed to be correct, for data repairing. Data repairing methods based on integrity constraints are normally heuristic: they may not find certain fixes and, worse still, may even introduce new errors when attempting to repair the data. This makes them unsuitable for repairing critical data such as medical records, in which a seemingly minor error often has disastrous consequences. We propose a framework and an algorithm to find certain fixes, based on master data, a class of editing rules and user interactions, and we have developed a prototype system.
The fourth part of the thesis introduces the inference of data currency and consistency for conflict resolution, where data currency aims to identify the current values of entities, and conflict resolution combines tuples that pertain to the same real-world entity into a single tuple and resolves conflicts, itself an important issue for data deduplication. We show that data currency and consistency help each other in resolving conflicts, study a number of associated fundamental problems, and develop an approach to conflict resolution that infers data currency and consistency.
The last part of the thesis reports our study of data accuracy, focusing on the longstanding relative accuracy problem: given tuples t1 and t2 that refer to the same entity e, determine whether t1[A] is more accurate than t2[A], i.e., whether t1[A] is closer than t2[A] to the true value of attribute A of e. We introduce a class of accuracy rules and an inference system with a chase procedure to deduce relative accuracy, and study the related fundamental problems. We also propose a framework and algorithms for inferring accurate values with user interaction.
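As a toy illustration of the conditional functional dependencies (CFDs) used in the first part, the following sketch, my own and not the thesis's incremental algorithm, checks a single CFD (X -> Y, tp): tuples that match the pattern tp and agree on the X attributes must also agree on Y, and violating tuple pairs are reported.

from collections import defaultdict

def cfd_violations(tuples, X, Y, pattern):
    # `pattern` maps attributes to required constants; '_' acts as a wildcard.
    def matches(t):
        return all(v == '_' or t.get(a) == v for a, v in pattern.items())

    groups = defaultdict(list)
    for i, t in enumerate(tuples):
        if matches(t):
            groups[tuple(t[a] for a in X)].append(i)

    violations = []
    for idxs in groups.values():
        for i in idxs:
            for j in idxs:
                if i < j and any(tuples[i][a] != tuples[j][a] for a in Y):
                    violations.append((i, j))
    return violations

# Example CFD: ([country, zip] -> [city]) holding only where country = 'UK'.
rows = [
    {'country': 'UK', 'zip': 'EH8', 'city': 'Edinburgh'},
    {'country': 'UK', 'zip': 'EH8', 'city': 'London'},   # conflicts with the first tuple
]
print(cfd_violations(rows, ['country', 'zip'], ['city'], {'country': 'UK'}))  # [(0, 1)]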
|
4 |
Data Consistency Checks on Flight Test Data. Mueller, G., 10 1900 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / This paper reflects the principal results of a study performed internally by Airbus's flight test centers. The purpose of the study was to share the body of knowledge concerning data consistency checks among all Airbus business units. An analysis of the test process is followed by the identification of the process stakeholders involved in ensuring data consistency. The main part of the paper lists several different possibilities for improving data consistency; it is left to the discretion of the reader to determine the appropriateness of these methods.
|
5 |
Data Consistency and Conflict Avoidance in a Multi-User CAx Environment. Moncur, Robert Aaron, 19 July 2012 (has links) (PDF)
This research presents a new method to preserve data consistency in a multi-user CAx environment. The method includes three types of constraints that constrain and control both features and users across an entire multi-user CAx platform. The first type locks or reserves features so that only one user at a time can edit a given feature. The second type, collaborative feature constraints, allows flexible constraining of each individual feature in a model and of the data that defines it. The third type, collaborative user constraints, allows user permissions and user actions to be constrained individually or as a group while providing as much flexibility as possible. To further present the method, mock-ups and suggested implementation guidelines are given. To demonstrate its effectiveness, a proof-of-concept implementation was built using the CATIA Connect multi-user CAD prototype developed at BYU, and usage examples show how the method provides important tools that increase the collaborative capabilities of a multi-user CAx system. By using the suggested method, design teams will be able to better control how their data is used and edited, maintaining data consistency and preventing data conflicts and data misuse.
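The first constraint type, reserving a feature so that only one user can edit it at a time, might be realized by a server-side registry along the lines of the sketch below; the class and method names are hypothetical and are not part of CATIA Connect.

class FeatureReservations:
    """Hypothetical server-side registry of feature reservations."""

    def __init__(self):
        self._owner = {}   # feature_id -> user_id currently holding the reservation

    def reserve(self, feature_id, user_id):
        # Grant the reservation only if the feature is free or already held by this user.
        holder = self._owner.get(feature_id)
        if holder is None or holder == user_id:
            self._owner[feature_id] = user_id
            return True
        return False       # another user is editing the feature; reject the request

    def release(self, feature_id, user_id):
        if self._owner.get(feature_id) == user_id:
            del self._owner[feature_id]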
|
6 |
A simulation framework to ensure data consistency in sensor networks. Shah, Nikhil Jeevanlal, January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / The objective of this project is to address the problem of data consistency in sensor network applications. An application may involve data gathered from several sources and delivered to multiple sinks, resulting in multiple data streams with several sources and sinks per stream, and several inter-stream constraints may have to be satisfied to ensure data consistency. In this report, we model this problem as one of variable sharing between the components of an application and propose a framework for implementing variable sharing in a distributed sensor network. The framework defines the notion of variable sharing in component-based systems and lets the application designer specify data consistency constraints. Given an application, a tool identifies the various types of shared variables it contains, and, given the shared variables and the consistency constraints, an infrastructure implements them, including tools that synthesize the code to be deployed on each node in the physical topology. The infrastructure has been built for the TinyOS platform, and we have evaluated the framework on several examples using the TOSSIM simulator.
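Purely to illustrate what such a specification might look like, and not as the framework's actual syntax, a shared variable and an inter-stream consistency constraint could be declared as follows; every identifier here is hypothetical.

# Hypothetical application specification: one value produced by two source
# clusters and consumed by two sinks, with a constraint tying the streams together.
shared_variables = {
    "avg_temperature": {
        "writers": ["sensor_cluster_A", "sensor_cluster_B"],   # data sources
        "readers": ["sink_1", "sink_2"],                       # data sinks
    },
}

consistency_constraints = [
    # Inter-stream constraint: every sink must observe the same version of the value.
    {"variable": "avg_temperature", "type": "all_readers_see_same_version"},
]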
|
7 |
Criteria for Data Consistency Evaluation Prior to Modal Parameter Estimation. Patil, Vivek, 04 October 2021 (has links)
No description available.
|
8 |
Conflict Management and Model Consistency in Multi-user CAD. Hepworth, Ammon Ikaika, 01 August 2014 (has links) (PDF)
The NSF Center for e-Design, Brigham Young University (BYU) site has re-architected Computer Aided Design (CAD) tools, enabling multiple users to concurrently create, modify and view the same CAD part or assembly. This technology allows engineers, designers and manufacturing personnel to contribute simultaneously to the design of a part or assembly in real time, enabling parallel work environments within the CAD system. Multi-user systems are only as robust and efficient as their methods for managing conflicts and preserving model consistency. Conflicts occur in multi-user CAD when multiple users operate on the same or dependent geometry. Some conflicts lead to model inconsistencies, meaning that users' instances of the model are no longer identical; others cause redundant work or waste in the design process. This dissertation presents methods to avoid and resolve conflicts that lead to model inconsistency and waste. An automated feature reservation method is presented that prevents multiple users from simultaneously editing the same feature, thus avoiding conflicts. In addition, a method is presented that keeps copies of the model consistent across distributed CAD clients by enforcing that modeling operations occur in the same order on all clients; in cases of conflict, the conflicting operations are preserved locally for manual resolution by the user. An efficient model consistency method provides consistent references to the topological entities in a CAD model, ensuring that operations are applied consistently on all models, and an integrated task management system avoids conflicts arising from differing user design intent. Implementations and results of each method are presented. The results show that the methods effectively manage conflicts and ensure model consistency, providing a solution for a robust multi-user CAD system.
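The ordering-based consistency method can be pictured with the rough sketch below, my own illustration rather than the dissertation's implementation: each client applies modeling operations strictly in a global sequence order, assumed here to be assigned by a central server, so that every copy of the model evolves identically.

class ClientModel:
    def __init__(self):
        self.next_seq = 0    # next global sequence number this client expects
        self.pending = {}    # out-of-order operations buffered until their turn
        self.history = []    # operations applied so far, in global order

    def receive(self, seq, operation):
        # Buffer the operation and apply everything that is now in sequence.
        self.pending[seq] = operation
        while self.next_seq in self.pending:
            self._apply(self.pending.pop(self.next_seq))
            self.next_seq += 1

    def _apply(self, operation):
        self.history.append(operation)   # a real client would update CAD geometry here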
|
9 |
Hit and Bandwidth Optimal Caching for Wireless Data Access Networks. Akon, Mursalin, January 2011 (has links)
For many data access applications, the availability of the most up-to-date information is a fundamental and rigid requirement. In spite of many technological improvements, wireless channels (or bandwidth) remain the scarcest, and hence most expensive, resources in wireless networks, and data access from remote sites depends heavily on them. Owing to affordable smart mobile devices and the tremendous popularity of various Internet-based services, demand for data from these devices is growing very fast, and in many cases it is becoming impossible for wireless data service providers to satisfy that demand with current network infrastructures. An efficient caching scheme at the client side can alleviate the problem by reducing the amount of data transferred over the wireless channels. However, an update event makes the associated cached data objects obsolete and useless to applications. The frequencies of data updates, as well as of data accesses, therefore play essential roles in cache access and replacement policies. Intuitively, frequently accessed but infrequently updated objects should be given higher preference when deciding what to preserve in the cache. Modeling this intuition is challenging, however, particularly in a network environment where updates are injected by both the server and the clients, distributed all over the network.
In this thesis, we strive to make three inter-related contributions. Firstly, we propose two enhanced cache access policies. These policies ensure strong consistency of the cached data objects through proactive or reactive interactions with the data server, and at the same time collect information about the access and update frequencies of hosted objects to facilitate efficient deployment of the cache replacement policy. Secondly, we design a replacement policy that acts as the decision maker when a new object must be accommodated in a fully occupied cache. The statistical information collected by the access policies drives this decision-making process, which is modeled around the idea of preserving frequently accessed but less frequently updated objects in the cache. Thirdly, we show analytically that a cache management scheme combining the proposed replacement policy with either of the cache access policies guarantees an optimal amount of data transmission by increasing the number of effective hits in the cache system.
Results from both the analysis and our extensive simulations demonstrate that the proposed policies outperform the popular Least Frequently Used (LFU) policy in terms of both effective hits and bandwidth consumption. Moreover, our flexible system model makes the proposed policies equally applicable to applications for existing 3G networks as well as the upcoming LTE, LTE Advanced and WiMAX wireless data access networks.
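The preference for frequently accessed but infrequently updated objects can be sketched as a simple scoring rule, as below; this only illustrates the intuition, and the exact statistics and scoring used in the thesis may differ.

class FrequencyAwareCache:
    """Toy client cache that evicts the object with the lowest access-to-update ratio."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}        # object_id -> cached value
        self.accesses = {}    # object_id -> observed access count
        self.updates = {}     # object_id -> observed update count

    def _score(self, oid):
        # Higher score = more reads per update = more worth keeping.
        return self.accesses.get(oid, 0) / (1 + self.updates.get(oid, 0))

    def get(self, oid):
        self.accesses[oid] = self.accesses.get(oid, 0) + 1
        return self.data.get(oid)           # None indicates a cache miss

    def notify_update(self, oid):
        # An update at the server invalidates the cached copy.
        self.updates[oid] = self.updates.get(oid, 0) + 1
        self.data.pop(oid, None)

    def put(self, oid, value):
        if oid not in self.data and len(self.data) >= self.capacity:
            victim = min(self.data, key=self._score)   # evict the least valuable object
            del self.data[victim]
        self.data[oid] = value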
|