211

Disaggregated Zoned Namespace for Multi-tenancy Scenarios

Ramakrishnapuram Selvanathan, Subhalakshmi 22 May 2024 (has links)
The traditional block-based interface used in flash-based Solid State Drives (SSDs) imposes limitations on performance and endurance due to write amplification and garbage collection overheads. In response to these challenges, NVMe Zoned Namespace (ZNS) devices introduce a novel storage interface organized into zones, optimizing garbage collection and reducing write amplification. This research explores and profiles ZNS device characteristics to help users understand and exploit these devices effectively. Additionally, the study investigates the integration of ZNS devices into disaggregated storage frameworks to improve resource utilization, proposing server-side management features that simplify client operations and minimize overhead. By offering insights for the future development and optimization of ZNS-based storage solutions, this work contributes to advancing storage technology and addressing the shortcomings of traditional block-based interfaces. Through extensive experimentation and analysis, the study sheds light on optimal configurations and deployment strategies for ZNS-based storage solutions. / Master of Science / Traditional storage drives, like those found in computers and data centers, face challenges that limit their performance and durability. These challenges stem from the way data is stored and managed within the drives, resulting in inefficiencies known as write amplification and garbage collection overheads. To address these issues, a new type of storage device called NVMe Zoned Namespace (ZNS) has been developed. ZNS devices organize data more intelligently, grouping it into specific areas called zones. This organization reduces inefficiencies and improves performance. This research explores the characteristics of ZNS devices and how they can be used more effectively. By better understanding and using these devices, we can improve the way data is stored and accessed, leading to faster and more reliable storage solutions. Additionally, this research looks at how ZNS devices can be integrated into larger storage systems to make better use of available resources. Ultimately, this work contributes to advancing storage technology and overcoming the limitations of traditional storage interfaces. We aim to uncover the best ways to deploy and optimize ZNS-based storage solutions for a variety of applications.
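
As a rough illustration of the zone semantics the abstract describes, the following minimal Python sketch models a sequential-write-required zone with a write pointer and an explicit reset, the mechanism that lets ZNS devices avoid device-side garbage collection. The class and its parameters are illustrative assumptions, not a real ZNS API.

```python
# Minimal model of ZNS zone semantics (illustrative only; not a real ZNS API).
class Zone:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.write_pointer = 0          # next writable block offset within the zone
        self.data = [None] * capacity_blocks

    def append(self, block) -> int:
        """Writes are only legal at the write pointer (sequential-write-required)."""
        if self.write_pointer >= self.capacity:
            raise IOError("zone full: must reset before reuse")
        offset = self.write_pointer
        self.data[offset] = block
        self.write_pointer += 1
        return offset                   # device reports where the block landed

    def reset(self):
        """Host-managed reclamation: the whole zone is erased at once, so the
        device never has to relocate live data behind the host's back."""
        self.write_pointer = 0
        self.data = [None] * self.capacity

zone = Zone(capacity_blocks=4)
for payload in ("a", "b", "c"):
    print("wrote at offset", zone.append(payload))
zone.reset()                            # the host decides when to reclaim
```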
212

Effect of certain parameters on response time of an Oracle database

Aihe, David Osemeahon 01 April 2001 (has links)
No description available.
213

A novel Adaptive Filtering approach to Drive File Identification for Service Environment Replication

Balasubramanya, Bharath 27 November 2024 (has links)
Service Environment Replication refers to the process of using test machines to apply controlled dynamic loads to test articles in order to replicate the operating conditions the articles were designed for. Such test machines therefore require dynamic time-series commands that drive the actuators so as to replicate the responses of the actual dynamic system, measured separately in its service environment. This thesis proposes a novel adaptive filtering approach, the Pulse Train Filtered-x Least Mean Square (PT-Fx-LMS) algorithm, for waveform generation and drive file identification (DFID), based on methods developed for Active Noise and Vibration Control. Simulation studies on test benches with varying degrees of nonlinearity validate the ability of the proposed algorithm to converge rapidly to a dynamic solution in a small number of iterations. The PT-Fx-LMS algorithm is also shown to enable targeted iteration over isolated time slices within the data set, which challenge conventional iterative DFID techniques. Further modifications to the algorithm are proposed that use a completely offline workflow, based on the estimated dynamics of the plant and an empirical termination criterion, to improve performance and ensure stability of the adaptive process. The resulting architecture is applicable to a wide array of dynamic systems with single or multiple actuators and sensors. Experimental validation of the proposed algorithm is conducted using an acoustic setup to replicate target sound fields for a wide array of configurations. / Doctor of Philosophy / Testing an article in the environment where it is designed to operate can be a time-consuming and expensive process without laboratory-based, repeatable testing environments. The goal of these test rigs is to replicate the service environment in order to design, develop, and validate the article under test. Manufacturers have developed different methods over the years to replicate such environments within the confines of a laboratory, where the most important task is to generate the control signals that drive the actuators on the test rig to induce the required responses from the dynamic system under test. The objective of this thesis is to develop a novel time-domain algorithm that iteratively derives the control signals required to replicate the responses of the dynamic system on a simulated test bench in as few iterations as possible, thereby saving computation time, experiment time, and cost. The proposed algorithm is compared against conventional methods for deriving these control signals, and further improvements to the proposed method are suggested to improve performance, stability, safety, and ease of workflow on the test rig.
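
For readers unfamiliar with the Filtered-x LMS family the thesis builds on, the sketch below implements conventional single-channel FxLMS in Python/NumPy: the reference signal is filtered through an estimate of the secondary (plant) path before driving the weight update. The pulse-train extension and offline workflow proposed in the thesis are not reproduced here; the plant models and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
x = rng.standard_normal(N)                               # reference signal
s = np.array([0.8, 0.3, -0.1])                           # true secondary path (assumed)
s_hat = s.copy()                                         # secondary-path estimate (assumed perfect)
d = np.convolve(x, [0.5, -0.4, 0.2], mode="full")[:N]    # target response (assumed primary path)

L = 8                                                    # adaptive filter length
w = np.zeros(L)
mu = 0.01                                                # step size (assumed)
x_buf = np.zeros(L)                                      # recent reference samples
xf_buf = np.zeros(L)                                     # recent filtered-x samples
y_hist = np.zeros(len(s))                                # output history for plant convolution

xf = np.convolve(x, s_hat, mode="full")[:N]              # filtered-x: reference through plant estimate

for n in range(N):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
    y = w @ x_buf                                        # controller (drive) output
    y_hist = np.roll(y_hist, 1); y_hist[0] = y
    e = d[n] - s @ y_hist                                # error after the physical secondary path
    xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf[n]
    w += mu * e * xf_buf                                 # FxLMS weight update

print("final error magnitude:", abs(e))
```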
214

Completing the Picture : Fragments and Back Again

Karresand, Martin January 2008 (has links)
Better methods and tools are needed in the fight against child pornography. This thesis presents a method for file type categorisation of unknown data fragments, a method for reassembly of JPEG fragments, and the requirements put on an artificial JPEG header for viewing reassembled images. To enable empirical evaluation of the methods, a number of tools based on them have been implemented. The file type categorisation method identifies JPEG fragments with a detection rate of 100% and a false positive rate of 0.1%. The method uses three algorithms: Byte Frequency Distribution (BFD), Rate of Change (RoC), and 2-grams. The algorithms are designed for different situations, depending on the requirements at hand. The reconnection method correctly reconnects 97% of a Restart (RST) marker enabled JPEG image fragmented into 4 KiB pieces. When dealing with fragments from several images at once, the method correctly connects 70% of the fragments on the first iteration. Two parameters in a JPEG header are crucial to the quality of the image: the size of the image and the sampling factor (actually factors) of the image. The size can be found using brute force, and the sampling factors only take on three different values. Hence it is possible to use an artificial JPEG header to view all or parts of an image. The only requirement is that the fragments contain RST markers. The results of the evaluations show that it is possible to find, reassemble, and view JPEG image fragments with high certainty.
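
A minimal sketch of the Byte Frequency Distribution idea the abstract mentions: build a normalized 256-bin histogram per fragment and compare it to per-type centroids. The training data and the idea of a tuned distance threshold are assumptions; the thesis combines BFD with Rate of Change and 2-gram statistics, which are omitted here.

```python
from collections import Counter

def byte_frequency_distribution(fragment: bytes) -> list[float]:
    """Normalized 256-bin histogram of byte values in a fragment."""
    counts = Counter(fragment)
    n = len(fragment)
    return [counts.get(b, 0) / n for b in range(256)]

def manhattan(p: list[float], q: list[float]) -> float:
    return sum(abs(a - b) for a, b in zip(p, q))

def centroid(samples: list[bytes]) -> list[float]:
    """Per-type centroid averaged over known training fragments (assumed data)."""
    dists = [byte_frequency_distribution(s) for s in samples]
    return [sum(col) / len(dists) for col in zip(*dists)]

# Toy stand-in for real JPEG training fragments.
jpeg_centroid = centroid([bytes([0xFF, 0xD8]) + bytes(range(256)) * 4])
unknown = bytes(range(256)) * 4
score = manhattan(byte_frequency_distribution(unknown), jpeg_centroid)
print("distance to JPEG centroid:", round(score, 4))  # below a tuned threshold => classify as JPEG
```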
215

Log File Categorization and Anomaly Analysis Using Grammar Inference

Memon, Ahmed Umar 28 May 2008 (has links)
In the information age of today, vast amounts of sensitive and confidential data are exchanged over an array of different mediums. Accompanying this phenomenon is a comparable increase in the number and types of attacks aimed at acquiring this information. Information security and data consistency have hence become critically important. Log file analysis has proven to be a good defense mechanism, as logs provide an accessible record of network activities in the form of server-generated messages. However, manual analysis is tedious and prohibitively time consuming. Traditional log analysis techniques, based on pattern matching and data mining approaches, are ad hoc and cannot readily adapt to different kinds of log files. The goal of this research is to explore the use of grammar inference for log file analysis in order to build a more adaptive, flexible, and generic method for message categorization, anomaly detection, and reporting. The grammar inference process employs robust parsing, island grammars, and source transformation techniques. We test the system by using three different kinds of log file training sets as input, inferring a grammar and generating message categories for each set. We detect anomalous messages in new log files using the inferred grammar as a catalog of valid traces, and present a reporting program to extract instances of specified message categories from the log files. / Thesis (Master, Computing) -- Queen's University, 2008-05-22
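
As a simplified stand-in for the grammar-inference pipeline (robust parsing and island grammars are well beyond a snippet), the sketch below categorizes log messages by masking variable fields into wildcards and treating each resulting template as a category; a message whose template was never seen in training is flagged as anomalous. The masking rules and sample messages are assumptions, not the thesis's actual method.

```python
import re

def template(message: str) -> str:
    """Mask variable fields (IPs, hex, numbers) so messages collapse into categories."""
    msg = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<IP>", message)   # IPv4 addresses
    msg = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", msg)             # hex literals
    msg = re.sub(r"\b\d+\b", "<NUM>", msg)                        # decimal numbers
    return msg

training = [
    "Accepted connection from 10.0.0.5 port 4422",
    "Accepted connection from 10.0.0.9 port 5110",
    "Connection closed by 10.0.0.5 port 4422",
]
known_categories = {template(m) for m in training}                # the "catalog of valid traces"

new_logs = [
    "Accepted connection from 10.0.0.7 port 6001",                # matches a known category
    "Failed password for root from 10.0.0.66 port 9999",          # unseen template -> anomaly
]
for line in new_logs:
    status = "ok" if template(line) in known_categories else "ANOMALY"
    print(f"{status}: {line}")
```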
216

A HIGHLY RELIABLE NON-VOLATILE FILE SYSTEM FOR SMALL SATELLITES

Nimmagadda, Rama Krishna 01 January 2008 (has links)
Recent advancements in solid-state memories have resulted in packing several gigabytes (GB) of memory into tiny, postage-stamp-sized memory cards. Of late, Secure Digital (SD) cards have become a de facto standard for portable handheld devices. They have a growing presence in almost all embedded applications where huge volumes of data need to be handled and stored, and for the same reason SD cards are widely used in space applications. Using SD cards in space applications requires robust, radiation-hardened cards and highly reliable, fault-tolerant file systems to manage them. The present work is focused on developing a highly reliable, fault-tolerant, SD-card-based FAT16 file system for space applications.
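
To make the FAT16 angle concrete, the sketch below parses the key BIOS Parameter Block fields from a boot sector, including the number of FAT copies that a fault-tolerant implementation can cross-check for consistency. The offsets follow the standard FAT16 layout; the synthetic boot sector values are assumptions for illustration.

```python
import struct

def parse_fat16_bpb(boot_sector: bytes) -> dict:
    """Extract key BIOS Parameter Block fields (standard FAT16 offsets)."""
    return {
        "bytes_per_sector":    struct.unpack_from("<H", boot_sector, 11)[0],
        "sectors_per_cluster": boot_sector[13],
        "reserved_sectors":    struct.unpack_from("<H", boot_sector, 14)[0],
        "num_fats":            boot_sector[16],   # redundant FAT copies to cross-check
        "root_entries":        struct.unpack_from("<H", boot_sector, 17)[0],
        "sectors_per_fat":     struct.unpack_from("<H", boot_sector, 22)[0],
    }

# Synthetic boot sector for illustration (values are assumptions).
bs = bytearray(512)
struct.pack_into("<H", bs, 11, 512)   # 512 bytes per sector
bs[13] = 4                            # 4 sectors per cluster
struct.pack_into("<H", bs, 14, 1)     # 1 reserved sector
bs[16] = 2                            # two FAT copies (the usual redundancy)
struct.pack_into("<H", bs, 17, 512)   # 512 root directory entries
struct.pack_into("<H", bs, 22, 64)    # 64 sectors per FAT

print(parse_fat16_bpb(bytes(bs)))
```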
217

A hypertext graph theory reference system

Islam, Mustafa R. January 1993 (has links)
The G-Net system is being developed by the members of the G-Net research group under the supervision of Dr. K. Jay Bagga. The principal objective of the G-Net system is to provide an integrated tool for dealing with various aspects of graph theory. The system is divided into two parts: GETS (Graph theory Experiments Tool Set), which provides a set of tools to experiment with graph theory, and HYGRES (HYpertext Graph theory Reference Service), the second subcomponent, which aids graph theory study and research. In this research a hypertext application is built to present graph theory concepts, graph models, and algorithms. In other words, HYGRES (Guide Version) provides the hypertext facilities for organizing a graph theory database in a very natural and interactive way. A hypertext application development tool called Guide is used to implement this version of HYGRES. This project integrates the existing version of GETS so that it can also provide important services to HYGRES. The motivation behind this project is to study the initial criteria for developing a hypertext system that can be used for future development of a stand-alone version of the G-Net system. / Department of Computer Science
218

Analysis of multiple software releases of AFATDS using design metrics

Bhargava, Manjari January 1991 (has links)
The development of high-quality software the first time greatly depends upon the ability to judge the potential quality of the software early in the life cycle. The Software Engineering Research Center design metrics research team at Ball State University has developed a metrics approach for analyzing software designs. Given a design, these metrics highlight stress points and determine overall design quality. The purpose of this study is to analyze multiple software releases of the Advanced Field Artillery Tactical Data System (AFATDS) using design metrics. The focus is on examining the transformations of design metrics across three releases of AFATDS to determine the relationship of design metrics to the complexity and quality of a maturing system. The software selected as a test case for this research is the Human Interface code from Concept Evaluation Phase releases 2, 3, and 4 of AFATDS. To automate the metric collection process, a metric tool called the Design Metric Analyzer was developed. Analysis of the design metrics data indicated that the standard deviation and mean of the metric were higher for release 2, relatively lower for release 3, and again higher for release 4. This suggests a decrease in complexity and an improvement in the quality of the software from release 2 to release 3, and an increase in complexity in release 4. Dialog with project personnel regarding the design metrics confirmed most of these observations. / Department of Computer Science
219

A hypertext application and system for G-net and the complementary relationship between graph theory and hypertext

Sawant, Vivek Manohar January 1993 (has links)
Many areas of computer science use graph theory and thus benefit from research in graph theory. Some of the important activities involved in graph theory work are the study of concepts, algorithm development, and theorem proving. These can be facilitated by providing computerized tools for graph drawing, algorithm animation, and access to graph theory information bases. Project G-Net is aimed at developing a set of such tools. Based on an analysis of users' requirements, Project G-Net has chosen to provide the tools in hypertext form. The project is presently developing a hypertext application and a hypertext system for providing the above set of tools. In the process of this development, various issues pertaining to hypertext authoring, hypertext usability, and the application of graph theory to hypertext are being explored. The focus of this thesis is on proving that the hypertext approach is the most appropriate for realizing the goals of the G-Net project. The author was involved in the research that went into the analysis of requirements, the design of the hypertext application and system, and the investigation of the complementary relationship between graph theory and hypertext. / Department of Computer Science
220

DELPHIN 6 Climate Data File Specification, Version 1.0

Nicolai, Andreas January 2017 (has links)
This paper describes the file format of the climate data container used by the DELPHIN, THERAKLES and NANDRAD simulation programs. The climate data container format holds a binary representation of annual and continuous climatic data needed for hygrothermal transport and building energy simulation models. The content of the C6B format is roughly equivalent to the EPW climate data format.

Contents:
1 Introduction
1.1 General File Layout
1.2 Principle Data Types
2 Magic Header and File Version
2.1 Version Number Encoding
3 Meta Data Section
4 Data Section
4.1 Cyclic annual data
4.2 Non-cyclic/continuous data
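
The exact byte layout is given in the specification itself; the sketch below only illustrates the general pattern of a magic-header-plus-version binary container of the kind the outline describes. The magic bytes, version encoding, and field order here are assumptions, not the real C6B specification.

```python
import struct

# Illustrative reader/writer for a magic-header + version binary container.
# MAGIC and the "<HH" major/minor encoding are ASSUMPTIONS, not the C6B layout.
MAGIC = b"C6B\x00"

def write_container(path: str, major: int, minor: int, payload: bytes) -> None:
    with open(path, "wb") as f:
        f.write(MAGIC)                               # magic header identifies the format
        f.write(struct.pack("<HH", major, minor))    # assumed version number encoding
        f.write(payload)                             # meta data + data sections would follow

def read_container(path: str):
    with open(path, "rb") as f:
        if f.read(4) != MAGIC:
            raise ValueError("not a recognized container")
        major, minor = struct.unpack("<HH", f.read(4))
        return (major, minor), f.read()

write_container("demo.c6b", 1, 0, b"\x00" * 16)
print(read_container("demo.c6b")[0])                 # -> (1, 0)
```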
