41

Viability and Implementation of a Vector Cryptography Extension for RISC-V

Skelly, Jonathan W 01 June 2022 (has links) (PDF)
RISC-V is an open-source instruction-set architecture (ISA) forming the basis of thousands of commercial and experimental microprocessors. The Scalar Cryptography extension, ratified in December 2021, added scalar instructions targeting common hashing and encryption algorithms, including SHA2 and AES. The next step forward for the RISC-V ISA in the field of cryptography and digital security is the development of vector cryptography instructions. This thesis examines whether vector implementations of existing RISC-V scalar cryptography instructions can viably be added to the existing vector instruction format, and what improvements they offer for the execution of the SHA2 and AES algorithms. Vector cryptography instructions vaeses, vaesesm, vaesds, vaesdsm, vsha256sch, and vsha256hash are proposed to optimize AES encryption and decryption, SHA256 message scheduling, and SHA256 hash rounds, with pseudocode, assembly examples, and a full 32-bit instruction format for each. Both algorithms stand to benefit greatly from vector instructions, which reduce computation time, code length, and instruction-memory utilization thanks to large operand sizes and frequently repeated functions. As a proof of concept for the proposed vector cryptography operations, a full vector-based AES-128 encryption and SHA256 message-schedule generation are performed on the 32-bit RISC-V Ibex processor and 128-bit Vicuna vector coprocessor in the Vivado simulation environment. Excluding loads and stores for a fair comparison, the new Vector Cryptography extension completes a full encryption round in a single instruction versus sixteen with the scalar extension, and generates eight SHA256 message-schedule double-words in a single instruction versus the forty required by the scalar extension. These represent 93.75% and 97.5% reductions in required instructions and memory for these functions, respectively, at a hardware cost of 19.4% more LUTs and 1.44% more flip-flops on the modified Vicuna processor compared to the original.
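For context, here is a minimal Python sketch of the SHA256 message-schedule recurrence (per FIPS 180-4) that the proposed vsha256sch instruction would compute several words at a time; this is illustrative only, not the thesis's hardware implementation.

```python
# Minimal sketch of the SHA-256 message-schedule expansion (FIPS 180-4).
# A scalar ISA computes each expanded word with a chain of individual
# rotate/shift/add instructions; a vector instruction like the proposed
# vsha256sch would produce several schedule words per instruction.

MASK = 0xFFFFFFFF

def rotr(x, n):
    """32-bit rotate right."""
    return ((x >> n) | (x << (32 - n))) & MASK

def sigma0(x):
    return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)

def sigma1(x):
    return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10)

def message_schedule(block):
    """Expand a 16-word (512-bit) block into the 64-word schedule W."""
    w = list(block)
    for t in range(16, 64):
        w.append((sigma1(w[t - 2]) + w[t - 7]
                  + sigma0(w[t - 15]) + w[t - 16]) & MASK)
    return w

schedule = message_schedule(list(range(16)))
print(len(schedule))  # 64 words; 48 of them computed by the recurrence
```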
42

Low Cost NeuroChairs

Pike, Frankie 01 December 2012 (has links) (PDF)
Electroencephalography (EEG) was formerly confined to clinical and research settings, with the necessary hardware costing thousands of dollars. In the last five years a number of companies have produced simple electroencephalographs priced below $300 and available directly to consumers. These have stirred the imaginations of enthusiasts and brought the prospect of "thought-controlled" devices ever closer to reality. While these new devices are largely targeted at video games and toys, active research has pursued enabling people suffering from debilitating diseases to control wheelchairs. A number of neurochairs have come to fruition, offering a truly hands-free mobility solution, but whether their results can be replicated with emerging low-cost products, and thus become a viable option for more people, remains an open question. This thesis examines existing research in the field of EEG-based assistive technologies, puts current consumer-grade hardware to the test, and explores the possibility of a system designed from the ground up to cost only a fraction of currently completed research prototypes.
43

Analysis of System Reliability as a Capital Investment

Williams, Albert J. 01 January 1978 (has links) (PDF)
This report, "Analysis of System Reliability as a Capital Investment", is an analysis of radar system reliability of two similar tracking radar systems as a capital investment. It describes the two tracking radar systems and calculates the mission failures rates based upon field failure data. Additionally, an analysis of a simulation program written in FORTRAN is performed which treats system reliability as a capital investment based on 335 electronic systems that were fabricated with a reliability program versus 564 electronic systems fabricated without a reliability program. The data from the two tracking radar systems, one with reliability program and the other without, is incorporated in the computer program to verify the conclusions of the author of the computer simulation program.
44

Extending Service Oriented Architecture Using Generic Service Representatives

Najafi, Mehran 04 1900 (has links)
Service-Oriented Architecture (SOA) divides the enterprise application layer of an enterprise system into components (services) that map directly onto the business functionality of the enterprise. Web services, which are based on message exchanges, are the most widely adopted SOA technology; they provide web-accessible programs and devices and have been widely promoted for cloud computing environments. However, different types of web services are required to model actual services in the business domain. In particular, enterprises (business providers such as banks, health care providers, and insurance companies) usually send agents or other personnel (e.g., representatives, installers, maintainers, and trainers) to client sites to perform required services. An enterprise agent can be modeled as a software agent: a computer program that cannot be transmitted efficiently in communication messages. The lack of an efficient way to model the transmission of enterprise agents in traditional message-based technologies restricts the application and usage of service-oriented architectures. The central problem addressed in this thesis is the need for an efficient SOA model of enterprise agents that enables service providers to process client data locally at the client side.

To address this problem, the thesis models enterprise agents in SOA with a generic software agent called the Service Representative, which resides at the client side and can be customized by different service providers to process client data locally. To employ a service representative, the thesis also proposes a new type of web service called the Task Service. While a traditional web service, called a Data Service, processes client data entirely at the server side, a task service can process client data and resources partially or completely at the client side using a service representative. Each task service assigns the generic service representative a task with three components: task model, task knowledge, and task data. These components map onto business components (business process models, business rules and actions, and business data) and can therefore be transmitted efficiently in service messages.

The combination of a service representative and task services gives service providers an executable platform at the client side. The client does not need to reveal its data, so privacy and security are maintained; large volumes of client data are processed locally, reducing network traffic; and real-time and event-triggered web services can be developed on the proposed approach.

The main contributions and novelty of this research are: i) a domain-independent computational model of enterprise agents in SOA that supports a wide variety of client-processing tasks; ii) client-side web services that are compatible with typical server-side web services and comparable to other client-side processing technologies; iii) extensions of the SOA architecture with novel generic components, including the service representative, the competition desk, and the service composition certifier; iv) a formal model of client-side and server-side web services based on their construction from business components; v) empirical evaluations of the web service model in a number of applications, using a prototype system; and vi) application of the developed model to several target domains, including healthcare. Furthermore, because client-side and server-side web services are complementary, a decision support model is provided to help service developers choose the best service type for a web service. / Doctor of Science (PhD)
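As an illustration only (the class and method names below are hypothetical, not from the thesis), here is a Python sketch of the three-part task hand-off the abstract describes: a provider ships a task model, task knowledge, and task data to a generic client-side representative, which executes the task against local client data.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Task:
    """The three task components named in the abstract."""
    model: list                 # task model: ordered process steps
    knowledge: dict             # task knowledge: a rule/action per step
    data: dict = field(default_factory=dict)  # task data from the provider

class ServiceRepresentative:
    """Generic client-side agent, customized per provider via the Task it receives."""

    def __init__(self, client_data):
        self.client_data = client_data      # stays local; never sent to the server

    def execute(self, task: Task):
        context = {**task.data, "client": self.client_data}
        for step in task.model:
            context = task.knowledge[step](context)  # apply the rule for this step
        return context                      # only the result leaves the client

# Hypothetical usage: a provider ships a scoring task to the client side.
rep = ServiceRepresentative(client_data={"income": 52_000, "debts": 8_000})
task = Task(
    model=["score"],
    knowledge={"score": lambda ctx: {
        "approved": ctx["client"]["income"] > 4 * ctx["client"]["debts"]}},
)
print(rep.execute(task))  # client data was processed locally
```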
45

Fog Computing with Go: A Comparative Study

Butterfield, Ellis H 01 January 2016 (has links)
The Internet of Things is a recent computing paradigm, defined by networks of highly connected things – sensors, actuators and smart objects – communicating across networks of homes, buildings, vehicles, and even people. The Internet of Things brings with it a host of new problems, from managing security on constrained devices to processing never-before-seen amounts of data. While cloud computing might be able to keep up with current data processing and computational demands, it is unclear whether it can be extended to the requirements brought forth by the Internet of Things. Fog computing provides an architectural solution to some of these problems by introducing a layer of intermediary nodes within what is called an edge network, separating the local object networks from the Cloud. These edge nodes provide interoperability, real-time interaction, routing, and, if necessary, computational delegation to the Cloud. This paper evaluates Go, a distributed-systems language developed at Google, against the requirements set forth by Fog computing. Methodologies from previous literature are replicated and benchmarked in order to assess the viability of Go in the edge nodes of a Fog computing architecture.
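As an illustration of the edge-node role described above (written in Python purely for consistency with the other sketches here, and with hypothetical names throughout), a minimal node that makes latency-critical decisions locally and delegates only compact summaries to the Cloud:

```python
import statistics

CLOUD_BATCH = 100  # delegate to the cloud once this many readings accumulate

class EdgeNode:
    """Hypothetical fog edge node: local real-time decisions, batched cloud delegation."""

    def __init__(self, cloud_upload):
        self.buffer = []
        self.cloud_upload = cloud_upload  # callable standing in for a cloud API

    def ingest(self, reading: float):
        self.buffer.append(reading)
        if reading > 100.0:               # real-time local decision: no cloud round-trip
            print(f"alert: reading {reading} out of range")
        if len(self.buffer) >= CLOUD_BATCH:
            # Send a compact summary rather than raw data: less network traffic.
            self.cloud_upload({"mean": statistics.mean(self.buffer),
                               "max": max(self.buffer)})
            self.buffer.clear()

node = EdgeNode(cloud_upload=print)       # print stands in for the cloud endpoint
for r in range(120):
    node.ingest(float(r))
```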
46

Towards Design and Analysis For High-Performance and Reliable SSDs

Xia, Qianbin 01 January 2017 (has links)
NAND Flash-based Solid State Disks (SSDs) have many attractive technical merits, such as low power consumption, light weight, shock resistance, tolerance of hotter operating regimes, and extraordinarily high random-read performance, which have made SSDs immensely popular and widely employed in environments ranging from portable devices and personal computers to large data centers and distributed data systems. However, current SSDs still suffer from several critical inherent limitations, such as the inability to update in place, asymmetric read and write performance, slow garbage-collection processes, limited endurance, and degraded write performance with the adoption of MLC and TLC techniques. To alleviate these limitations, we propose optimizations at both the outside application layer and the SSD's internal layer. Because SSDs strike a good compromise between performance and price, they are widely deployed as second-level caches between DRAM and hard disks to boost system performance. Due to special properties of SSDs such as internal garbage collection and limited lifetime, optimizations designed for traditional cache devices like DRAM and SRAM might not work consistently for an SSD-based cache. At the application layer, our work therefore focuses on integrating these special properties of SSDs into the optimization of SSD caches. Our work also addresses the increased Flash write latency and ECC complexity introduced by MLC and TLC technologies, based on analysis of real-world workloads.
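As one illustration of the kind of SSD-aware cache optimization described (a sketch of a generic idea, not the thesis's design): an admission policy that spends the flash's limited write/erase cycles only on blocks seen more than once, so one-hit accesses never cost an SSD write.

```python
from collections import OrderedDict

class WriteAwareSSDCache:
    """Admit a block to the SSD cache only on its second access (ghost-list filter)."""

    def __init__(self, capacity, ghost_capacity=4096):
        self.capacity = capacity
        self.ghost_capacity = ghost_capacity
        self.cache = OrderedDict()   # block_id -> data, in LRU order
        self.ghost = OrderedDict()   # recently seen block_ids; no data stored

    def get(self, block_id, read_from_disk):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)      # LRU hit
            return self.cache[block_id]
        data = read_from_disk(block_id)
        if block_id in self.ghost:                # second access: worth an SSD write
            del self.ghost[block_id]
            self.cache[block_id] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict least recently used block
        else:                                     # first access: remember, don't admit
            self.ghost[block_id] = None
            if len(self.ghost) > self.ghost_capacity:
                self.ghost.popitem(last=False)
        return data
```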
47

Using Machine Learning to Detect Malicious URLs

Cheng, Aidan 01 January 2017 (has links)
There is a need for a better predictive model that reduces the number of malicious URLs being sent through emails. Such a system should learn from existing metadata about URLs, and ideally from its own predictions: if it predicts a URL to be malicious, and that URL is deemed safe by the sandboxing environment, the predictor should refine its model to account for this data. The problem, then, is to construct a model with these characteristics that can make predictions for the vast number of URLs being processed. Given that the current system does not employ machine learning methods, we investigate multiple such models and summarize which of them might be worth pursuing at scale.
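A minimal sketch of the feedback loop described above, assuming illustrative metadata features and placeholder URLs (none of this is from the thesis): the model is refined online whenever the sandbox verdict contradicts a prediction.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def featurize(url: str) -> np.ndarray:
    """Reduce a URL to a few simple, illustrative metadata features."""
    return np.array([[len(url),
                      url.count("."),
                      url.count("-"),
                      int(any(c.isdigit() for c in url))]])

clf = SGDClassifier(loss="log_loss")
# Seed the model with a small labeled set (0 = benign, 1 = malicious).
seed = [("https://example.com/home", 0),
        ("http://paypa1-login.example.xyz/acct.verify", 1)]
X = np.vstack([featurize(u) for u, _ in seed])
clf.partial_fit(X, [y for _, y in seed], classes=[0, 1])

def predict_and_refine(url: str, sandbox_verdict: int) -> int:
    """Predict, then learn online whenever the sandbox disagrees."""
    x = featurize(url)
    pred = int(clf.predict(x)[0])
    if pred != sandbox_verdict:          # sandbox contradicted us: refine the model
        clf.partial_fit(x, [sandbox_verdict])
    return pred

print(predict_and_refine("http://free-prizes.example.biz/win1", 1))
```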
48

Advanced Text Analytics and Machine Learning Approach for Document Classification

Anne, Chaitanya 19 May 2017 (has links)
Text classification is used for information extraction and retrieval from a given text, and it has become an important step in managing the vast, ever-expanding number of records available in digital form. This thesis addresses the problem of classifying patent documents into fifteen categories or classes, some of which overlap with others for practical reasons. For the development of the classification model using machine learning techniques, useful features have been extracted from the given documents; these features are used both to classify patent documents and to generate useful tag-words. The overall objective of this work is to systematize NASA’s patent management by developing a set of automated tools that can assist NASA in managing and marketing its portfolio of intellectual property (IP), and enable easier discovery of relevant IP by users. We have identified an array of applicable methods: k-Nearest Neighbors (kNN), two variations of the Support Vector Machine (SVM) algorithm, and two tree-based classification algorithms, Random Forest and J48. The major research steps in this work consist of filtering techniques for variable selection, information gain and feature correlation analysis, and training and testing potential models using effective classifiers. Further, the obstacles associated with imbalanced data were mitigated by adding synthetic data where appropriate, which resulted in a superior SVM-based classifier.
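A minimal scikit-learn sketch of the SVM-based pipeline family the abstract evaluates, with feature extraction, an information-gain-style filter, and a linear SVM; the documents and labels below are placeholders, not NASA data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),       # extract text features
    ("select", SelectKBest(mutual_info_classif, k=10)),     # information-gain-style filter
    ("svm", LinearSVC()),                                   # linear SVM classifier
])

docs = ["method for thermal protection of spacecraft surfaces",
        "neural network apparatus for adaptive flight control",
        "coating composition for high temperature insulation",
        "machine learning system for autonomous navigation"]
labels = ["materials", "software", "materials", "software"]

pipeline.fit(docs, labels)
print(pipeline.predict(["adaptive control software for aircraft"]))
```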
49

Mitigating Interference During Virtual Machine Live Migration through Storage Offloading

Stuart, Morgan S 01 January 2016 (has links)
Today's cloud landscape has evolved computing infrastructure into a dynamic, high-utilization, service-oriented paradigm. This shift has enabled the commoditization of large-scale storage and distributed computation, allowing engineers to tackle previously untenable problems without large upfront investment. A key enabler of flexibility in the cloud is the ability to transfer running virtual machines across subnets or even datacenters using live migration. However, live migration can be a costly process, one with the potential to interfere with other applications not involved in the migration. This work investigates storage interference through experimentation with real-world systems and well-established benchmarks. To address migration interference in general, a buffering technique is presented that offloads the migration's reads, eliminating interference in the majority of scenarios.
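A simplified sketch of one way such read offloading can work (an assumption for illustration, not the thesis's implementation): stage the region the migration will read into a memory buffer ahead of time, so the migration's reads stop competing with co-located workloads for the shared disk.

```python
class MigrationReadBuffer:
    """Serve a migration's reads from pre-staged memory instead of the contended disk."""

    def __init__(self, disk_read, chunk_size=4 * 1024 * 1024):
        self.disk_read = disk_read      # callable: (offset, length) -> bytes
        self.chunk_size = chunk_size
        self.staged = {}                # chunk start offset -> bytes

    def stage(self, start, length):
        """Pre-fill the buffer (e.g., during idle disk periods) before migration reads arrive."""
        for off in range(start, start + length, self.chunk_size):
            self.staged[off] = self.disk_read(off, self.chunk_size)

    def read(self, offset, length):
        base = (offset // self.chunk_size) * self.chunk_size
        chunk = self.staged.get(base)
        if chunk is None:               # miss: fall back to the contended disk
            return self.disk_read(offset, length)
        start = offset - base
        return chunk[start:start + length]

# Example with an in-memory "disk" standing in for real block storage.
disk = bytes(range(256)) * (1 << 16)
buf = MigrationReadBuffer(lambda off, n: disk[off:off + n])
buf.stage(0, len(disk))
assert buf.read(4096, 16) == disk[4096:4112]
```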
50

Analysis and Detection of Heap-based Malwares Using Introspection in a Virtualized Environment

Javaid, Salman 13 August 2014 (has links)
Malware detection and analysis is a major part of computer security. There is an arms race between security experts and malware developers: one side develops techniques to secure computer systems, the other finds ways to circumvent them. In recent years, process heap-based attacks have increased significantly. These attacks exploit the system under attack via the heap, typically using a heap-spraying attack. The main drawback of existing techniques is that they either consume too many resources or are complicated to implement. Our work in this thesis focuses on new methods that offload process heap analysis for guest Virtual Machines (VMs) to the privileged domain using Virtual Machine Introspection (VMI) in a cloud environment. VMI provides a seamless, non-intrusive way of observing the memory and state of VMs that is invisible to malware and raises no red flags for it.
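A toy sketch of one heap-spray heuristic in the spirit of the above (hypothetical, not the thesis's detector): heap spraying fills the heap with many near-identical blocks, so a high ratio of duplicate pages in a heap snapshot, read out-of-band via VMI, flags a process for deeper analysis.

```python
from collections import Counter

PAGE = 4096

def spray_suspicion(heap_snapshot: bytes, threshold=0.40):
    """Return (suspicious, duplicate_ratio) for a raw heap snapshot."""
    pages = [heap_snapshot[i:i + PAGE]
             for i in range(0, len(heap_snapshot), PAGE)]
    if not pages:
        return False, 0.0
    counts = Counter(pages)                      # identical pages hash together
    duplicates = sum(c for c in counts.values() if c > 1)
    ratio = duplicates / len(pages)
    return ratio >= threshold, ratio

# Example: a "sprayed" heap where the same 4 KiB block repeats 200 times.
block = b"\x90" * (PAGE - 32) + b"SHELLCODE-MARKER".ljust(32, b"\x00")
print(spray_suspicion(block * 200))   # -> (True, 1.0)
```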
