About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
31

Synthesis of New Magnetic Nanocomposite Materials for Data Storage

Alamri, Haleema January 2012 (has links)
The confinement of magnetic nanoparticles of Prussian blue analogues (PBAs) has been achieved using mesostructured silica as a matrix. The PBAs have the general formula AxMy[M'(CN)n]z, where A is an alkali metal cation, M is CoII, NiII, or SmIII, and M' is CoII. The two reactions were run in parallel and led to a mesostructured silica matrix that contains PBA nanoparticles homogeneously distributed within the silica framework. Starting from the synthesis initially reported for Co3[Fe(CN)6]2 magnetic nanoparticles, the research conducted for this thesis extends the approach to other compounds, including lanthanides such as Sm, and studies the influence of different parameters (pH, concentration). Because these nanocomposites are potentially good candidates for the preparation of bimetallic nanoparticles and oxides through controlled thermal treatment, the second goal of the research was to employ an adapted thermal treatment to prepare metal and metal oxide nanoparticles from the PBAs directly embedded in the silica matrix. To this end, the influence of the thermal treatment (temperature, time, atmosphere) on the nature and structure of the resulting materials was investigated, with a focus on the potential use of the combustion of the organic templates as an in-situ reducing agent. For some compounds, the preparation of bimetallic nanoparticles was successful. The method was then tentatively applied to the preparation of specific Sm:Co bimetallic compounds, which are well known as among the best permanent-magnet materials currently available.
32

Design and implementation of a networked hard disk

Ζαγκλής, Νικόλας 01 July 2015 (has links)
The aim of this diploma thesis is the creation of a networked hard disk based on the Zedboard embedded system and the Linux operating system. Using the Microsoft iSCSI client, data will be read from and written to the board, which plays the role of the server. For this implementation, the board must be programmed appropriately according to the iSCSI network storage protocol, so that it can exchange data with the client. The final goal is the networked writing and reading of data to and from the Zedboard's DRAM, carried out on top of TCP/IP and the iSCSI network storage protocol.
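The abstract describes the board acting as an iSCSI target: a server that accepts block read/write commands over TCP/IP and applies them to its DRAM. As a rough sketch of that role only (the real iSCSI protocol defines its own PDU format, login phases, and SCSI command set, none of which are reproduced here), the following toy Python block server answers fixed-size read and write requests against an in-memory buffer; the opcode layout and block size are illustrative assumptions.

```python
import socket
import struct

BLOCK_SIZE = 512
NUM_BLOCKS = 2048
storage = bytearray(BLOCK_SIZE * NUM_BLOCKS)  # stands in for the board's DRAM

def handle(conn):
    """Serve requests of the form: 1-byte opcode (0=read, 1=write), 4-byte block index."""
    while True:
        header = conn.recv(5)
        if len(header) < 5:
            break
        op, lba = struct.unpack(">BI", header)
        offset = (lba % NUM_BLOCKS) * BLOCK_SIZE
        if op == 0:                        # READ: send one block back
            conn.sendall(storage[offset:offset + BLOCK_SIZE])
        elif op == 1:                      # WRITE: receive one block and store it
            data = b""
            while len(data) < BLOCK_SIZE:
                chunk = conn.recv(BLOCK_SIZE - len(data))
                if not chunk:
                    return
                data += chunk
            storage[offset:offset + BLOCK_SIZE] = data
            conn.sendall(b"\x00")          # acknowledge the write

if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 3260))            # 3260 is the port registered for iSCSI
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn:
            handle(conn)
```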
33

Measuring the applicability of Open Data Standards to a single distributed organisation: an application to the COMESA Secretariat

Munalula, Themba 01 January 2008 (has links)
Open data standardization has many known benefits, including the availability of tools for standard encoding formats, interoperability among systems, and long-term preservation of data. Mark-up languages and their use on the World Wide Web have made data sharing easier still. The Extensible Markup Language (XML), in particular, has succeeded due to its simplicity and ease of use. Its primary purpose is to facilitate the sharing of data across different information systems, particularly systems connected via the Internet. Whether open and standardized or not, organizations generate data daily. Offline exchange of documents and data is undertaken using existing formats that are typically defined by the organizations that generate the data in the documents. With the Internet, data exchange has created a direct need for interoperability and comparability. As much as standardization is the accepted approach for online data exchange, little is understood about how well a specific organization's data "fits" a given data standard. This dissertation develops data metrics that represent the extent to which data standards can be applied to an organization's data. The research identified key issues that affect data interoperability or the feasibility of a move towards interoperability, and tested the unwritten rule that organizations tend to design data requirements more around internal needs than interoperability needs. Essentially, by generating metrics over a number of data attributes, the research quantified the gap that exists between organizational data and data standards. Key data attributes, i.e. completeness, concise representation, relevance and complexity, were selected and used as the basis for metric generation. In addition to the attribute-based metrics, hybrid metrics representing a measure of the "goodness of fit" of the source data to standard data were generated. Regarding the completeness attribute, it was found that most Common Market for Eastern and Southern Africa (COMESA) head office data clusters had lower than desired metrics, matching the gap highlighted above. The same applied to the concise representation attribute: most data clusters had more concise representation in the COMESA data than in the data standard. The complexity metrics confirmed that the number of data elements is a key determinant in any move towards the adoption of data standards, a fact also borne out by the magnitude of the hybrid metrics, which to some extent depended on the complexity metrics. An additional contribution of the research was the inclusion of expert users' weights on the data elements and the recalculation of all metrics. A comparison with the unweighted metrics yielded a mixed picture. Among the completeness metrics, and for the data retention rate in particular, increases were recorded for data clusters in which greater weight was allocated to mapped elements than to unmapped ones; the same applied to the relative elements ratio. The complexity metrics showed general declines when user-weighted elements were used in the computation instead of unweighted elements, again because these metrics depend on the number of elements: in the unweighted case the weights were effectively even, while in the weighted case some elements were given lower weights by the expert users, leading to an overall decline in the metric.
A number of implications emerged for COMESA. First, COMESA would have to determine the extent to which its source data rely on data sources for which international standards are being promoted. Secondly, an inventory of the users and collectors of the data COMESA uses is necessary in order to determine who would benefit from a standards-based information system. Thirdly, from an organizational perspective, COMESA needs to designate a team to guide the process of creating such a standards-based information system. Lastly, there is a need for involvement in the consortia responsible for these data standards, which has implications for organizational resources. In totality, this research provided a methodology for determining the feasibility of a move towards standardization, making it possible to answer the critical first-stage questions that such a move raises.
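To make the flavour of these metrics concrete, here is a minimal sketch of a weighted completeness ratio: the (weighted) fraction of a data cluster's elements that can be mapped onto the standard. The formula and the element names are illustrative assumptions, not the dissertation's exact definitions.

```python
def completeness(elements, mapped, weights=None):
    """Weighted fraction of a cluster's elements that map onto the data standard."""
    if weights is None:
        weights = {e: 1.0 for e in elements}   # unweighted case: all elements equal
    total = sum(weights[e] for e in elements)
    kept = sum(weights[e] for e in elements if e in mapped)
    return kept / total if total else 0.0

# Hypothetical trade-data cluster: three of four elements map to the standard.
cluster = ["trade_value", "partner_country", "hs_code", "internal_ref"]
mapped = {"trade_value", "partner_country", "hs_code"}
print(completeness(cluster, mapped))           # 0.75 unweighted
print(completeness(cluster, mapped, {
    "trade_value": 2.0, "partner_country": 1.0,
    "hs_code": 1.0, "internal_ref": 0.5,
}))                                            # ~0.89: mapped elements weighted up
```

As the abstract notes, weighting mapped elements more heavily than unmapped ones raises such a metric, while metrics that depend on raw element counts tend to fall once uneven expert weights replace an even distribution.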
34

An investigation of 3D simulation and electronic medical records for gait data

Alfalah, Salsabeel Fayiz Mohammad January 2013 (has links)
No description available.
35

DSFS: a data storage facilitating service for maximizing security, availability, performance, and customizability

Bilbray, Kyle 12 January 2015 (has links)
The objective of this thesis is to study methods for the flexible and secure storage of sensitive data in an unaltered cloud. While current cloud storage providers make guarantees on the availability and security of data once it enters their domain, clients are not given any options for customization. All availability and security measures, along with any resulting performance hits, are applied to all requests, regardless of the data's sensitivity or the client's wishes. In addition, once a client's data enters the cloud, it becomes vulnerable to different types of attacks. Other cloud users may access or disrupt the availability of their peers' data, and clients cannot be protected from the cloud providers themselves in the event of a malicious administrator or government directive. Current solutions use combinations of known encoding schemes and encryption techniques to provide confidentiality from peers and sometimes from the cloud service provider, but they follow an all-or-nothing model: a client either uses the security methods of their system or does not, regardless of how much protection and availability the client's data actually needs. Our approach, the Data Storage Facilitating Service (DSFS), provides a basic set of proven protection schemes with configurable parameters that encode input data into a number of fragments and intelligently scatter them across the target cloud. A client may choose the encoding scheme most appropriate for the sensitivity of their data. If none of the supported schemes is sufficient for the client's needs, or the client has their own custom encoding, DSFS can accept already-encoded fragments and perform secure placement. Evaluation of our prototype service demonstrates clear trade-offs in performance between the different levels of security that encoding provides, allowing clients to weigh the cost of protection against the importance of their data. This degree of flexibility is unique to DSFS and turns it into a secure storage facilitator that can help clients as much or as little as required. We also see a significant effect on overhead from the service's location relative to its cloud when we compare the performance of our own setup with that of a commercial cloud service.
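The core mechanism the abstract describes is encoding data into fragments and scattering them across the cloud. As one hedged illustration of such a scheme (DSFS's actual encodings are not specified in the abstract), the sketch below uses XOR-based splitting, in which any subset of fewer than all fragments reveals nothing about the data, and assigns each fragment to a different storage node; the node names are hypothetical.

```python
import os

def xor_all(chunks):
    """Bytewise XOR of equal-length byte strings."""
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

def split_xor(data: bytes, n: int):
    """Split data into n fragments; all n are needed to reconstruct it,
    and any n-1 of them are indistinguishable from random noise."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    last = xor_all(shares + [data])
    return shares + [last]

def join_xor(fragments):
    return xor_all(fragments)

# Scatter one fragment per storage node, then reconstruct from all of them.
nodes = ["node-a", "node-b", "node-c"]
fragments = split_xor(b"sensitive record", len(nodes))
placement = dict(zip(nodes, fragments))
assert join_xor([placement[n] for n in nodes]) == b"sensitive record"
```

A scheme like this maximizes confidentiality at the cost of availability (losing any node loses the data); erasure codes make the opposite trade, which is exactly the kind of per-client choice the thesis argues for.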
37

Accurate Hardware RAID Simulator

Weng, Darrin Kalung 01 June 2013 (has links)
Computer data storage is growing at an astonishing rate. With cloud computing and the growth of the Internet, enterprise storage has been predicted to grow at rates as high as 300% per year. To fulfill this need, technologies such as Redundant Array of Independent Disks (RAID) are used in industry today. Not only does RAID increase I/O performance, but it also provides redundancy measures to protect against hardware failure. Even though RAID has existed for some time and is well understood, proprietary optimizations such as the command scheduling and cache strategies employed by current RAID controllers are not well known. This thesis presents a model for RAID 5 that incorporates these features and describes the overall function of hardware RAID controllers. A Python implementation of this model, the Accurate Hardware RAID Simulator (AHRS), is also presented and validated against a current hardware RAID controller. It is shown that AHRS can reproduce the behavior of a hardware RAID system with an average accuracy of 97.92% compared to an LSI hardware RAID controller.
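For readers unfamiliar with the scheme being simulated: RAID 5 stripes data across the disks and stores, for each stripe, one parity block that is the bytewise XOR of the stripe's data blocks, rotating the parity's position from stripe to stripe. The sketch below shows the parity computation and single-disk recovery; the rotation formula is one common layout and an assumption, since the abstract does not say which layout the simulated controller uses.

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR across equal-sized blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def parity(data_blocks):
    """RAID 5 parity block for one stripe."""
    return xor_blocks(data_blocks)

def recover(surviving_blocks):
    """XOR of the surviving blocks (data + parity) rebuilds the missing one."""
    return xor_blocks(surviving_blocks)

def parity_disk(stripe: int, num_disks: int) -> int:
    """Disk index holding parity for a given stripe (left-asymmetric rotation)."""
    return (num_disks - 1) - (stripe % num_disks)

# A 4-disk array: each stripe has 3 data blocks and 1 parity block.
d = [b"\x01\x02", b"\x10\x20", b"\x0f\xf0"]
p = parity(d)
assert recover([d[0], d[2], p]) == d[1]      # the disk holding d[1] has failed
assert parity_disk(0, 4) == 3 and parity_disk(1, 4) == 2
```

The controller behaviors the thesis models (command scheduling, caching strategies) sit on top of this basic layout and are what make real controllers hard to predict.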
38

Carbon Coated Tellurium for Optical Data Storage

Abbott, Jonathan D. 16 December 2009 (has links) (PDF)
A highly durable optical disk has been developed for data archiving. This optical disk uses tellurium as the write layer and carbon as a dielectric and oxidation-prevention layer. The sandwich-style CTeC film was deposited on polycarbonate and silicon substrates by plasma sputtering. The films were then characterized by SEM, TEM, EELS, ellipsometry, and ToF-SIMS, among other techniques, and were tested for writability and longevity. Results show that the films are uniform in physical structure, stable, and able to form permanent pits. Data was written to a disk and successfully read back in a commercial DVD drive.
39

Interactive Techniques Between Collaborative Handheld Devices and Wall Displays

Schulte, Daniel Leon 12 August 2013 (has links) (PDF)
Handheld device users want to work collaboratively on large wall-sized displays with other handheld device users. However, no software frameworks exist to support this type of collaborative activity. This thesis introduces a collaborative application framework that allows users to collaborate with each other across handheld devices and large wall displays. The framework comprises a data synchronization system and a set of generic interactive techniques that can be utilized by applications. The data synchronization system allows data to be synchronized across multiple handheld devices and wall displays, and the interactive techniques enable users to create data items and to form relationships between those data items. The framework is evaluated by creating two sample applications and by conducting a set of interactive user-study tasks. The data recorded from these evaluations shows that the framework is easy to extend and that, with minimal training, the generic interactive techniques are easy to learn and effective.
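The abstract names two framework pieces: synchronized data items and relationships between them. As a toy sketch of what one replica of such a store might look like (the thesis's actual synchronization protocol is not described in the abstract, so the last-write-wins merge here is purely an assumption):

```python
import time
import uuid

class ItemStore:
    """One replica of a shared store: data items plus relationships,
    merged between devices by last-write-wins timestamps."""

    def __init__(self):
        self.items = {}      # item id -> (timestamp, payload)
        self.links = set()   # (from_id, to_id) relationships between items

    def create_item(self, payload):
        item_id = str(uuid.uuid4())
        self.items[item_id] = (time.time(), payload)
        return item_id

    def relate(self, from_id, to_id):
        self.links.add((from_id, to_id))

    def merge_from(self, other):
        """Pull changes from another replica, e.g. a wall display."""
        for item_id, (ts, payload) in other.items.items():
            if item_id not in self.items or self.items[item_id][0] < ts:
                self.items[item_id] = (ts, payload)
        self.links |= other.links

# A handheld creates a note; the wall display syncs it and links it to a board.
handheld, wall = ItemStore(), ItemStore()
note = handheld.create_item({"text": "sticky note"})
board = wall.create_item({"title": "brainstorm board"})
wall.merge_from(handheld)
wall.relate(board, note)
```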
40

REST API to Access and Manage Geospatial Pipeline Integrity Data

Francis, Alexandra Michelle 01 June 2015 (has links) (PDF)
Today's economy and infrastructure are dependent on raw natural resources, like crude oil and natural gas, that are optimally transported through a network of hundreds of thousands of miles of pipelines throughout America [28]. A damaged pipe can negatively affect thousands of homes and businesses, so it is vital that pipes are monitored and quickly repaired [1]. Ideally, pipeline operators would detect damage before it occurs, but ensuring the integrity of the vast number of pipes by hand is unrealistic and would take an impractical amount of time and manpower [1]. Natural disasters, like earthquakes, as well as construction are just two of the events that could potentially threaten the integrity of pipelines.

Due to the diverse collection of data sources, the necessary geospatial data is scattered across different physical locations, stored in different formats, and owned by different organizations. Pipeline companies do not have the resources to manually gather all input factors needed to make a meaningful analysis of the land surrounding a pipe.

Our solution to this problem is a single, centralized system that can be queried for all necessary geospatial data and related information in a standardized and desirable format. The service reduces client-side computation time by having our system find, ingest, parse, and store the data from potentially hundreds of repositories in varying formats. An online web service fulfills all of these requirements and allows easy remote access for critical analysis of the data through computer-based decision support systems (DSS).

Our system, the REST API for Pipeline Integrity Data (RAPID), is a multi-tenant REST API that uses HTTP to provide an online and intuitive set of functions for DSS. RAPID's API allows DSS to access and manage data stored in a geospatial database through a supported Django web framework. Full documentation of the design and implementation of RAPID's API is given in this thesis, supplemented with background and validation of the completed system.
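As a hedged sketch of how a DSS client might consume a RAPID-style service (the endpoint path, parameters, and response shape here are illustrative assumptions, not RAPID's documented API):

```python
import requests

BASE = "https://rapid.example.org/api/v1"   # hypothetical deployment URL

def features_near(layer: str, lat: float, lon: float, radius_m: int):
    """Fetch geospatial features of one layer within a radius of a point."""
    resp = requests.get(
        f"{BASE}/layers/{layer}/features",
        params={"lat": lat, "lon": lon, "radius": radius_m, "format": "geojson"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()   # assumed to be a GeoJSON FeatureCollection

# e.g. fault lines within 5 km of a point on a pipeline route
faults = features_near("fault-lines", 35.3, -120.6, 5000)
for feature in faults.get("features", []):
    print(feature["properties"])
```

The value of a centralized API like this is that the DSS never touches the hundreds of heterogeneous upstream repositories; it issues one standardized query and receives one standardized format back.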
