21

Data storage security for cloud computing using elliptic curve cryptography

Buop, George Onyango January 2020 (has links)
Institutions and enterprises are moving towards greater service availability and managed risk while, at the same time, aiming to reduce cost. Cloud computing is a growing technology, thriving in the fields of information communication and data storage. With the proliferation of online activity, more and more information is saved as data every day, which means that more data is being stored in the cloud than ever before. Data stored online often holds private information, such as addresses, payment details and medical documentation, and so becomes a target for cyber criminals. There is therefore a growing need to protect these data from threats such as data breach and leakage, data loss, and account takeover or hijacking, among others. Cryptography refers to techniques for securing information and communication, based on mathematical concepts and algorithms that transform messages in ways that are hard to decipher. Cryptography is one of the techniques by which data stored in the cloud can be protected, as it provides the security properties of confidentiality and integrity. This research investigates the security issues that affect storage of data in the cloud. This thesis also discusses previous research work and the currently available technology and techniques used for securing data in the cloud. It then presents a novel scheme for securing data stored in the cloud using the Elliptic Curve Integrated Encryption Scheme (ECIES), which provides confidentiality and integrity. The scheme also uses Identity-Based Cryptography (IBC) for more efficient key management. The proposed scheme combines the security of Identity-Based Cryptography (IBC), a Trusted Cloud (TC), and Elliptic Curve Cryptography (ECC) to reduce system complexity and provide more security for cloud computing applications. The research shows that it is possible to securely store confidential user data on a public cloud such as Amazon S3 or Windows Azure Storage without the need to trust the cloud provider and with minimal overhead in processing time. The results of implementing the proposed scheme show faster and more efficient operation for key generation as well as encryption and decryption. The difference in the time taken for these operations results from the use of the ECC algorithm, which has a small key size and is therefore highly efficient compared with other types of asymmetric cryptography. The results obtained show that the scheme is more efficient when compared with other comparable techniques in the literature.
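
The ECIES construction named in the abstract is a hybrid scheme: an ephemeral elliptic-curve key agreement derives a symmetric key that encrypts and authenticates the payload. A minimal sketch of that pattern, assuming the Python `cryptography` package, the P-256 curve, and an HKDF/AES-GCM pairing (the thesis does not specify a curve, library, or symmetric primitives):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def ecies_encrypt(recipient_pub, plaintext: bytes):
    # Ephemeral ECDH: a fresh key pair is generated for every message
    eph_priv = ec.generate_private_key(ec.SECP256R1())
    shared = eph_priv.exchange(ec.ECDH(), recipient_pub)
    # Derive a symmetric key from the shared secret
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"ecies-demo").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)  # AEAD also gives integrity
    return eph_priv.public_key(), nonce, ciphertext

def ecies_decrypt(recipient_priv, eph_pub, nonce, ciphertext):
    shared = recipient_priv.exchange(ec.ECDH(), eph_pub)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"ecies-demo").derive(shared)
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# Usage: the cloud stores only (eph_pub, nonce, ciphertext); the provider never sees the key.
recipient = ec.generate_private_key(ec.SECP256R1())
eph_pub, nonce, ct = ecies_encrypt(recipient.public_key(), b"patient record")
assert ecies_decrypt(recipient, eph_pub, nonce, ct) == b"patient record"
```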
22

Elektronické datové úložiště / Electronic data storage system

Valkovič, Marek January 2009 (has links)
The work presents the design and real-world implementation of an information system serving as an electronic disk with web-based access and administration. The task is solved using the PHP scripting language and the MySQL relational database management system. The study examines PHP and SQL databases, states basic facts, and explains how the two are connected to create a single complex system. Issues of an internet-based payment system are also considered. The proposed system features complete file management capabilities; separate access rights can be set for individual users, and the administrator of the application can display several interesting statistics. The results of the work are demonstrated on the final web application.
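
At the heart of such an electronic disk is the data model that ties stored files to per-user access rights. Below is a minimal sketch of that model with a hypothetical schema; the thesis itself used PHP and MySQL, and SQLite is used here only to keep the example self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the MySQL database
conn.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE files  (id INTEGER PRIMARY KEY, owner INTEGER REFERENCES users(id),
                     path TEXT, size INTEGER);
CREATE TABLE rights (user_id INTEGER REFERENCES users(id),
                     file_id INTEGER REFERENCES files(id),
                     can_read INTEGER, can_write INTEGER,
                     PRIMARY KEY (user_id, file_id));
""")

def can_read(user_id: int, file_id: int) -> bool:
    # Owners always have access; other users need an explicit grant
    row = conn.execute("SELECT owner FROM files WHERE id = ?", (file_id,)).fetchone()
    if row and row[0] == user_id:
        return True
    row = conn.execute("SELECT can_read FROM rights WHERE user_id = ? AND file_id = ?",
                       (user_id, file_id)).fetchone()
    return bool(row and row[0])
```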
23

A Neuroimaging Web Interface for Data Acquisition, Processing and Visualization of Multimodal Brain Images

Lizarraga, Gabriel M 12 October 2018 (has links)
Structural and functional brain images are essential modalities that allow medical experts to learn about the different functions of the brain. These images are typically inspected visually by experts. Many software packages are available to process medical images, but they are complex, difficult to use, and hardware intensive. As a consequence, this dissertation proposes a novel Neuroimaging Web Services Interface (NWSI) as a series of processing pipelines on a common platform to store, process, visualize and share data. The NWSI system is made up of password-protected, interconnected servers accessible through a web interface. The web interface driving the NWSI is based on Drupal, a popular open-source content management system. Drupal provides a user-based platform in which the core code for the security and design tools is updated and patched frequently. New features can be added via modules while keeping the core software secure and intact. The web server architecture allows for the visualization of results and the downloading of tabulated data, and several forms are available to capture clinical data. The processing pipeline starts with a FreeSurfer (FS) reconstruction of T1-weighted MRI images. Subsequently, PET, DTI, and fMRI images can be uploaded. The web server captures uploaded images and performs essential functions, while processing occurs on supporting servers. The computational platform is responsive and scalable. The current pipeline for PET processing calculates all regional Standardized Uptake Value ratios (SUVRs). The FS and SUVR calculations have been validated using Alzheimer's Disease Neuroimaging Initiative (ADNI) results posted at the Laboratory of Neuro Imaging (LONI). The NWSI system provides access to a calibration process through the centiloid scale, consolidating Florbetapir and Florbetaben tracers in amyloid PET images. The interface also offers onsite access to machine learning algorithms and introduces new heat maps that augment expert visual rating of PET images. NWSI has been piloted using data and expertise from Mount Sinai Medical Center, the 1Florida Alzheimer’s Disease Research Center (ADRC), Baptist Health South Florida, Nicklaus Children's Hospital, and the University of Miami. All results were obtained using our processing servers in order to maintain data validity, consistency, and minimal processing bias.
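
The regional SUVR in the PET pipeline is simply the mean tracer uptake in a target region divided by the mean uptake in a reference region. A minimal sketch of that calculation, assuming a NumPy uptake array aligned with a segmentation label map; the region labels and the single reference region are illustrative, not taken from the dissertation:

```python
import numpy as np

def regional_suvr(pet: np.ndarray, labels: np.ndarray, regions: dict, ref_label: int):
    """Mean uptake per region divided by mean uptake in the reference region."""
    ref_mean = pet[labels == ref_label].mean()
    return {name: pet[labels == lab].mean() / ref_mean for name, lab in regions.items()}

# Toy example: a 3-voxel 'image' with two target regions and a reference region
pet    = np.array([1.8, 2.2, 1.0])
labels = np.array([10,  11,  99])
print(regional_suvr(pet, labels, {"precuneus": 10, "frontal": 11}, ref_label=99))
```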
24

EXTENSION OF A COMMON DATA FORMAT FOR REAL-TIME APPLICATIONS

Wegener, John A., Davis, Rodney L. 10 1900 (has links)
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California / The HDF5 (Hierarchical Data Format) data storage family is an industry-standard container format, developed by the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, that allows data to be stored in a common format and retrieved by a wide range of common tools. The HDF5 data storage family includes HDF-Time History, intended for data processing, and HDF-Packet, intended for real-time data collection; each is an extension to the basic HDF5 format that defines data structures and associated interrelationships optimized for its particular purpose. HDF-Time History, developed jointly by Boeing and NCSA, is in the process of being adopted throughout the Boeing test community and by its external partners. The Boeing/NCSA team is currently developing HDF-Packet to support real-time streaming applications, such as airborne data collection and recording of received telemetry. The advantage is a significant cost reduction, since storing the data in its final format avoids conversion between a myriad of recording and intermediate formats. In addition, by eliminating intermediate file translations and conversions, data integrity is maintained from recording through processing and archival storage. HDF5 is also a general-purpose wrapper into which processed data and other documentation (such as calibrations) can be stored, making the final data file self-documenting. This paper describes the basics of HDF-Time History, the extensions required to support real-time acquisition with HDF-Packet, and implementation issues unique to real-time acquisition. It also describes potential future implementations for data acquisition systems in different segments of the test data industry.
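
The append-as-you-acquire pattern that HDF-Packet targets can be illustrated with an extendable HDF5 dataset. Below is a minimal sketch using the `h5py` package and a made-up packet record layout; the actual HDF-Packet structures are defined by the Boeing/NCSA extension and are not reproduced here:

```python
import h5py
import numpy as np

# Hypothetical packet record: a timestamp plus a fixed-size payload
packet_dtype = np.dtype([("t", "f8"), ("payload", "u1", (64,))])

with h5py.File("telemetry.h5", "w") as f:
    # Extendable dataset: grows one record at a time as packets arrive
    dset = f.create_dataset("packets", shape=(0,), maxshape=(None,),
                            dtype=packet_dtype, chunks=(1024,))
    for i in range(10):                       # stand-in for a real-time acquisition loop
        rec = np.zeros(1, dtype=packet_dtype)
        rec["t"] = float(i)
        rec["payload"] = np.arange(64, dtype=np.uint8)
        dset.resize(dset.shape[0] + 1, axis=0)
        dset[-1] = rec[0]                     # append the new packet in place
```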
25

ENTERPRISE DATA MANAGEMENT SYSTEMS

Garling, James, Cahill, David 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper discusses ongoing regulatory effects on efforts aimed at developing data infrastructures that assist test engineers in achieving information superiority and in maintaining their information, and on possible architectural frameworks for reconciling the engineers' needs with the regulatory requirements. Because current commercial-off-the-shelf (COTS) Enterprise Content Management (ECM) systems are targeted primarily at business environments such as back-office applications, the financial sector, and manufacturing, these COTS systems do not provide sufficient focus for managing the unique aspects of flight test data and associated artifacts (documents, drawings, pretest data, etc.). This paper presents our ongoing efforts to deploy a storage-infrastructure-independent enterprise data management system for maintaining vital, up-to-date information and for managing the archival of such data.
26

NATO ADVANCED DATA STORAGE STANDARD STANAG 4575

Feuer, Gary 10 1900 (has links)
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / NATO NAFAG Air Group IV (AG IV) established the NATO Advanced Data Storage Technical Support Team (NADS TST) to investigate the technology and to develop an interface Standardization Agreement (STANAG) for recording, storage, and exchange of imagery data. Government agencies and industry involved in these technologies are participating in this effort.
27

AN INTEGRATED APPROACH TO DATA ACQUISITION AND DATA PROCESSING FOR INDUSTRIAL APPLICATIONS

Schmalz, Axel 10 1900 (has links)
International Telemetering Conference Proceedings / October 22-25, 1984 / Riviera Hotel, Las Vegas, Nevada / The requirements for data acquisition systems grow rapidly with the progress of technology, and increasingly complex test instruments become available. Integrating instruments and computers into an operational measurement system, however, becomes more difficult and expensive as requirements increase. A family of instruments was developed that can perform complex measurement tasks without a large integration effort, since it provides a large number of compatible hardware and software modules for conditioning and converting signals into digital form, and for data storage, data transmission, and data pre-processing.
28

A PERSONAL TELEMETRY STATION

Hui, Yang, Shanzhong, Li, Qishan, Zhang 10 1900 (has links)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / In this paper, a PCM telemetry system based on a personal computer is presented, and some important methods used to realize the system are introduced, such as a new kind of all-digital PLL bit synchronizer and a way to solve the problem of high-rate data storage. Our main idea is to implement the basic parts of the PCM telemetry system (except the receiver) as PC cards compatible with the EISA bus, which turns a PC and its resources into a telemetry station. Finally, a laboratory prototype with a data rate of up to 3.2 Mbps is built.
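
An all-digital PLL bit synchronizer recovers bit timing by nudging a phase counter so that data transitions line up with bit boundaries and then sampling at mid-bit. The abstract does not describe the authors' design, so the toy first-order transition-tracking loop below is only a generic sketch of the idea:

```python
def bit_sync(samples, sps, kp=0.1):
    """Toy all-digital PLL bit synchronizer for NRZ PCM.
    samples: real-valued NRZ waveform; sps: nominal samples per bit."""
    bits, phase, prev, sampled = [], 0.0, samples[0], False
    for x in samples[1:]:
        phase += 1.0
        if (x > 0) != (prev > 0):              # data transition detected
            err = phase if phase < sps / 2 else phase - sps
            phase -= kp * err                  # first-order loop correction toward the boundary
        if not sampled and phase >= sps / 2:   # mid-bit decision point
            bits.append(1 if x > 0 else 0)
            sampled = True
        if phase >= sps:                       # bit boundary: wrap the counter
            phase -= sps
            sampled = False
        prev = x
    return bits

# Usage on a synthetic NRZ stream: 1 0 1 1 at 8 samples per bit
waveform = [1.0] * 8 + [-1.0] * 8 + [1.0] * 16
print(bit_sync(waveform, sps=8))
```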
29

Storage Physics and Noise Mechanism in Heat-Assisted Magnetic Recording

Li, Hai 01 September 2016 (has links)
As cloud computing and massive-data machine learning are applied pervasively, ultra-high-volume data storage serves as the foundation block. Every day, nearly 2.5 quintillion bytes (50000 GB/second in 2018) of data is created and stored, and the Hard Disk Drive (HDD) carries a major part of this load. However, despite the remarkable evolution of HDD technology during the past 50 years, conventional Perpendicular Magnetic Recording (PMR), the state-of-the-art HDD technique, is losing momentum in increasing storage density because of the recording trilemma. To overcome this, Heat-Assisted Magnetic Recording (HAMR) was initially proposed in the 1990s. After years of advancement, recent industrial demonstrations have shown the potential of HAMR to break the theoretical limit of PMR. However, to take full advantage of HAMR and realize its commercialization, quite a few technical challenges remain, which motivated this thesis work. Via thermally coupled micromagnetic simulation based upon the Landau-Lifshitz-Bloch (LLB) equation, the entire dynamic recording process has been studied systematically. A fundamental recording-physics theorem is established which elegantly interprets previously conflicting experimental observations. The thermally induced field dependence of performance, due to incomplete switching and erase-after-write, is proposed for the first time and validated in an industrial lab. The combined effects form the ultimate physical limit of this technology. Meanwhile, the theorem predicts novel noise origins, such as the Curie temperature distribution and the temperature distribution, which are key properties previously ignored. To enhance performance, the use of a higher thermal gradient, a magnetically stiffer medium, an optimal field, and similar measures has been suggested based upon the theorem. Furthermore, a novel concept, the Recording Time Window (RTW), has been proposed. It correlates tightly with performance and serves as a unified optimization standard, summarizing almost all primary parameters. After being validated via spin-stand testing, the theorem has been applied to provide solutions for guiding medium design and relaxing the field and heating requirements, which helps solve the issues around the writer limit and thermal reliability. Additionally, a cross-track varying field has been proposed to solve the well-known transition-curvature issue, which may increase storage density by 50%.
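
The recording trilemma referred to above pits thermal stability (a large enough K_u·V/(k_B·T) for roughly ten-year retention) against writability (the switching field grows with K_u). A back-of-the-envelope sketch of that trade-off, using assumed FePt-like material parameters that are illustrative only and not taken from the thesis:

```python
import math

kB = 1.380649e-23          # Boltzmann constant, J/K
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

# Assumed FePt-like medium at room temperature (illustrative values only)
Ku = 5.0e6                 # uniaxial anisotropy energy density, J/m^3
Ms = 1.1e6                 # saturation magnetization, A/m
T  = 300.0                 # storage temperature, K

# Thermal stability: require Ku*V / (kB*T) >= 60 for ~10-year retention
V_min = 60 * kB * T / Ku
d_min = V_min ** (1.0 / 3.0)            # edge of an equivalent cubic grain
print(f"minimum stable grain size ~ {d_min * 1e9:.1f} nm")

# Writability: anisotropy (switching) field Hk = 2*Ku / (mu0*Ms)
Hk = 2 * Ku / (mu0 * Ms)
print(f"mu0*Hk ~ {mu0 * Hk:.1f} T  (write heads deliver roughly 1 T)")
# The gap between these two numbers is why HAMR heats the medium toward its Curie
# temperature, where Ku collapses, so the bit can be written and then cooled back
# into a thermally stable state.
```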
30

AUTOMATIC GENERATION OF WEB APPLICATIONS AND MANAGEMENT SYSTEM

Zhou, Yu 01 March 2017 (has links)
One of the major difficulties in web application design is the tediousness of constructing new web pages from scratch. In traditional web application projects, designers usually design and implement the project step by step, in detail. My project is called “automatic generation of web applications and management system.” This web application generator can generate generic and customized web applications based on software engineering theories. A flow-driven methodology, expressed in Business Process Model and Notation (BPMN), drives the project. The modules of the project are: database, web server, HTML page, functionality, financial analysis model, customer, and BPMN. BPMN is the most important part of the entire project, because most of the work and data flow depends on the BPMN flow engine. There are two ways to use the system. One is to go to the main page, choose a web application template, and click the generate button. The other is for customers to request special orders; the project then provides suitable software development methodologies to follow. After a software development life cycle, customers receive their required product.
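
The template path described above (pick a template on the main page, then click generate) amounts to filling a page skeleton with customer-supplied values. A minimal sketch of that idea with a hypothetical template and fields, not taken from the project itself:

```python
from string import Template

# Hypothetical page skeleton standing in for one of the project's web app templates
PAGE_TEMPLATE = Template("""<!DOCTYPE html>
<html>
  <head><title>$title</title></head>
  <body>
    <h1>$title</h1>
    <p>$description</p>
  </body>
</html>""")

def generate_page(title: str, description: str) -> str:
    """Fill the chosen template with customer-provided values."""
    return PAGE_TEMPLATE.substitute(title=title, description=description)

print(generate_page("Online Store", "A generated storefront landing page."))
```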
