101

Decentralized Machine Learning On Blockchain: Developing A Federated Learning Based System

Sridhar, Nikhil 01 December 2023 (has links) (PDF)
Traditional Machine Learning (ML) methods usually rely on a central server to perform ML tasks. However, these methods have problems like security risks, data storage issues, and high computational demands. Federated Learning (FL), on the other hand, spreads out the ML process. It trains models on local devices and then combines them centrally. While FL improves computing and customization, it still faces the same challenges as centralized ML in security and data storage. This thesis introduces a new approach combining Federated Learning and Decentralized Machine Learning (DML), which operates on an Ethereum Virtual Machine (EVM)-compatible blockchain. The blockchain's security and decentralized nature help improve transparency, trust, scalability, and efficiency. The main contributions of this thesis include:
1. Redesigning a semi-centralized system with enhanced privacy and the Multi-KRUM algorithm, following the work of Shayan et al.
2. Developing a new decentralized framework that supports both standard and deep-learning FL, using the InterPlanetary File System (IPFS) and Ethereum Virtual Machine (EVM)-compatible smart contracts.
3. Assessing how well the system defends against common data poisoning attacks, using a version of Multi-KRUM that is better at detecting outliers.
4. Applying privacy methods to securely combine data from different sources.
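The poisoning defense above centers on Multi-KRUM aggregation. As a rough illustration of that selection rule only (not the thesis's blockchain-based implementation), the NumPy sketch below scores each client update by its distance to its closest neighbors and averages only the lowest-scoring ones; the array shapes and the values of f and m are assumptions for the example.

```python
# Illustrative Multi-KRUM aggregation sketch (not the thesis implementation).
# `updates` holds flattened client weight vectors, f is the assumed number of
# Byzantine clients, and m is how many low-score updates to average.
import numpy as np

def multi_krum(updates: np.ndarray, f: int, m: int) -> np.ndarray:
    """updates: (n, d) array of client weight vectors."""
    n = updates.shape[0]
    # Pairwise squared Euclidean distances between client updates.
    diffs = updates[:, None, :] - updates[None, :, :]
    dists = np.sum(diffs ** 2, axis=-1)
    k = n - f - 2                                # neighbours counted per score
    scores = np.empty(n)
    for i in range(n):
        others = np.delete(dists[i], i)          # exclude the self-distance
        scores[i] = np.sum(np.sort(others)[:k])  # sum of the k smallest distances
    selected = np.argsort(scores)[:m]            # keep the m most "central" updates
    return updates[selected].mean(axis=0)

# Example: 10 well-behaved updates plus 2 outliers that get filtered out.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(10, 5))
poisoned = rng.normal(5.0, 0.1, size=(2, 5))
print(multi_krum(np.vstack([honest, poisoned]), f=2, m=6))
```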
102

ACTION : Adaptive Cache Block Migration in Distributed Cache Architectures

Mummidi, Chandra Sekhar 20 October 2021 (has links)
Increasing number of cores in chip multiprocessors (CMP) result in increasing traffic to last-level cache (LLC). Without commensurate increase in LLC bandwidth, such traffic cannot be sustained resulting in loss of performance. Further, as the number of cores increases, it is necessary to scale up the LLC size; otherwise, the LLC miss rate will rise, resulting in a loss of performance. Unfortunately, for a unified LLC with uniform cache access time, access latency increases with cache size, resulting in performance loss. Previously, researchers have proposed partitioning the cache into multiple smaller caches interconnected by a communication network which increases aggregate cache bandwidth but causes non-uniform access latency. Such a cache architecture is called non-uniform cache architecture (NUCA). While NUCA addresses the LLC bandwidth issue, partitioning by itself does not address the access latency problem. Consequently, researchers have previously considered data placement techniques to improve access latency. However, earlier data placement work did not account for the frequency with which specific memory references are accessed. A major reason for that is access frequency for all memory references is difficult to track. In this research, we present a hardware-assisted solution called ACTION (Adaptive Cache Block Migration) to track the access frequency of individual memory references and prioritize their placement closer to the affine core. ACTION mechanism implements cache block migration when there is a detectable change in access frequencies due to a change in the program phase. To keep the hardware overhead low, ACTION counts access references in the LLC stream using a simple and approximate method, and uses simple algorithms for placement and migration. We tested ACTION on a 4-core CMP with a 5x5 mesh LLC network implementing a partitioned D-NUCA against workloads exhibiting distinct asymmetry in cache block access frequency. Our simulation results indicate that ACTION can improve CMP performance by as much as 8% over the state-of-the-art (SOTA) solutions.
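As a purely behavioural sketch of the idea of counting accesses approximately and migrating hot blocks toward the requesting core (not the ACTION hardware design; the class name, thresholds, and counter width are invented for illustration):

```python
# Behavioural sketch of frequency-guided block migration in a partitioned LLC.
from collections import defaultdict

class BankModel:
    def __init__(self, num_banks, hot_threshold=16, counter_max=255):
        self.num_banks = num_banks
        self.hot_threshold = hot_threshold
        self.counter_max = counter_max
        self.counters = defaultdict(int)   # approximate per-block access counters
        self.home_bank = {}                # current bank holding each migrated block

    def bank_of(self, block_addr):
        # Default static placement: address-interleaved across banks.
        return self.home_bank.get(block_addr, block_addr % self.num_banks)

    def access(self, core_id, block_addr, nearest_bank_of_core):
        # Saturating counter keeps the (hypothetical) hardware cost low.
        c = min(self.counters[block_addr] + 1, self.counter_max)
        self.counters[block_addr] = c
        # Migrate a "hot" block toward the bank closest to the requesting core.
        if c >= self.hot_threshold:
            self.home_bank[block_addr] = nearest_bank_of_core[core_id]
        return self.bank_of(block_addr)

    def phase_change(self):
        # Age counters on a program-phase change so stale hotness decays.
        for b in self.counters:
            self.counters[b] //= 2

banks = BankModel(num_banks=25)                  # e.g. a 5x5 mesh of LLC banks
nearest = {0: 0, 1: 4, 2: 20, 3: 24}             # corner bank nearest each core
for _ in range(20):
    banks.access(core_id=1, block_addr=0x40, nearest_bank_of_core=nearest)
print(banks.bank_of(0x40))                       # hot block migrated to bank 4
```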
103

WORKBENCH FOR MODELING AND OPTIMIZATION OF DIVERSE NETWORKS

Aziz, Malik Junaid 04 1900 (has links)
This work describes an architecture which enables experiments in the optimization of networks that represent systems in diverse application domains, e.g. multi-product food production plants, gasoline blending and shipment, heat exchanger networks in refineries, etc. The prototype implementation is a web-based workbench (NOPT). The design of the workbench enables instantiation of different application domains via attributes describing the entities (materials, energy) flowing through network arcs, and via node models relevant to the domain. From data describing the network attributes, NOPT generates a mathematical model described by a set of linear equations and lets the user select appropriate solution algorithms. Multi-step composite algorithms, each step solving a subnetwork or an entire network for specific time periods, can be constructed with input from the user. Some of the steps in an algorithm can be non-linear procedures which compute specific model parameters. Hence, the architecture enables solution of bilinear systems of the type "x*y" (e.g. energy balances) by first solving for "x" (e.g. mass flows) from some other set of equations (e.g. mass balances) and then solving for "y" once "x" is known. The current architecture of NOPT also supports the inclusion of external node models, which lets users import their own customized node models into the workbench via a feature called User Node.
/ Master of Computer Science (MCS)
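A minimal sketch of that bilinear decoupling, assuming a tiny made-up three-stream network: the mass balances are solved first for the flows x, after which the energy balances, bilinear in terms of the form x*y, become linear in the per-stream quantities y.

```python
# Illustrative two-step solve for a bilinear "x*y" system (made-up coefficients).
import numpy as np

# Step 1: mass balances are linear in the flows x: A_m @ x = b_m.
A_m = np.array([[1.0, -1.0,  0.0],   # node 1 balance
                [0.0,  1.0, -1.0],   # node 2 balance
                [1.0,  0.0,  0.0]])  # specified feed flow
b_m = np.array([0.0, 0.0, 10.0])
x = np.linalg.solve(A_m, b_m)

# Step 2: with x known, energy balances of the form sum(x_i * y_i) = q become
# linear in the per-stream specific enthalpies y: A_e(x) @ y = b_e.
A_e = np.array([[x[0], -x[1],  0.0],
                [0.0,   x[1], -x[2]],
                [1.0,   0.0,   0.0]])  # specified feed enthalpy
b_e = np.array([0.0, 0.0, 100.0])
y = np.linalg.solve(A_e, b_e)
print("flows:", x, "specific enthalpies:", y)
```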
104

A Service Oriented Architecture for Performance Support Systems

Bokhari, Asghar Ali Syed 05 1900 (has links)
This thesis documents research encompassing the design of dynamic electronic performance support systems. Essentially, an Electronic Performance Support System (EPSS) is complex distributed software that provides on-the-job support in order to facilitate task performance within some particular target application domain. In view of the rapid pace of change in current business and industrial environments, the conventional practice of issuing a new EPSS release every few years to incorporate changes is no longer practical. An EPSS is required to adapt to changes as soon as possible and without the need for major code modification. This is accomplished by creating a design in which task-specific knowledge is not hard-coded in the software but is extracted on the fly. The design also enables loose coupling among the different modules of the system so that functionalities may be added, removed, modified or extended with minimum disruption. In this thesis we show how to combine service-oriented architecture with the concepts of software agents to achieve a software architecture that provides the required agility. Traditionally, the Unified Modeling Language (UML), which lacks formal semantics, has been the tool of choice for the design and analysis of such systems, which means formal analysis techniques cannot be used for verification of UML models, whereas software engineering practice requires analysis and verification at an early stage in the development process. In this thesis we present an algorithm to transform UML statechart models into Object Coloured Petri (OCP) nets, which have a strong mathematical foundation and can be implemented by standard tools such as Design/CPN for simulation and dynamic analysis in order to verify behavioural properties of the model. We show how to apply this technique to verify some of the desirable behavioural properties of the proposed EPSS architecture. To demonstrate the feasibility of our approach we have successfully implemented a prototype of an EPSS based on the proposed design.

The main contributions of this research are:
1. Proposed an anthropomorphic architecture for a dynamic PSS.
2. Combined the concepts of service-oriented architecture and software agents to achieve dynamic updating of task-specific knowledge and minimal coupling between different modules of complex software, allowing painless evolution.
3. Brought formal methods to the design phase in the development of agent-based software systems by proposing an algorithm to transform UML state diagrams into OCP nets for dynamic analysis.
4. Modelled the dynamic creation and deletion of objects/agents using OCP net concepts and Design/CPN.
5. Proposed an architecture that can be used for creating families of agile PSS.
/ Doctor of Philosophy (PhD)
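As a much-simplified illustration of the kind of mapping such a transformation performs, the sketch below turns a small statechart (states, initial state, event-labelled transitions) into a plain place/transition net. The thesis's algorithm targets Object Coloured Petri nets and Design/CPN and handles far more (events, guards, object creation and deletion); all structures here are ad hoc.

```python
# Toy statechart-to-Petri-net mapping: states become places, each statechart
# transition becomes a net transition moving a token from source to target.
from dataclasses import dataclass, field

@dataclass
class Statechart:
    states: list
    initial: str
    transitions: list          # (source_state, event, target_state)

@dataclass
class PetriNet:
    places: list = field(default_factory=list)
    transitions: list = field(default_factory=list)  # (name, inputs, outputs)
    marking: dict = field(default_factory=dict)       # place -> token count

def statechart_to_net(sc: Statechart) -> PetriNet:
    net = PetriNet(places=list(sc.states))
    net.marking = {s: (1 if s == sc.initial else 0) for s in sc.states}
    for src, event, dst in sc.transitions:
        net.transitions.append((event, [src], [dst]))
    return net

sc = Statechart(states=["Idle", "Serving", "Updating"],
                initial="Idle",
                transitions=[("Idle", "request", "Serving"),
                             ("Serving", "done", "Idle"),
                             ("Idle", "newKnowledge", "Updating"),
                             ("Updating", "loaded", "Idle")])
print(statechart_to_net(sc))
```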
105

Efficient Scaling of a Web Proxy Cluster

Zhang, Hao 27 October 2017 (has links) (PDF)
With the continuing growth in network traffic and increasing diversity in web content, web caching, together with various network functions (NFs), has been introduced to enhance security, optimize network performance, and save expenses. In a large enterprise network with tens of thousands of users, a single proxy server cannot handle the volume of requests, so requests are processed by a group of proxies. When multiple web cache proxies work as a cluster, they talk with each other and share cached objects using the Internet Cache Protocol (ICP), which leads to poor scalability. This thesis describes the development of a framework that provides efficient management of a distributed web cache. A controller is introduced into the cluster of proxy servers and becomes responsible for managing the objects shared within the cluster. By obtaining knowledge of global state from the controller, proxy servers working in the group do not need to query their neighbors' storage. This reduces traffic in the cluster and saves the computing resources of the associated proxy servers. Evaluation on a caching proxy benchmark shows that our approach achieves superior scalability in comparison to an ICP-based web caching cluster.
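A toy sketch of the controller idea, with invented class and method names: each proxy registers the objects it caches with the controller and, on a local miss, performs a single directory lookup instead of ICP-querying every neighbor.

```python
# Directory-based object sharing in a proxy cluster (illustrative only).
class Controller:
    def __init__(self):
        self.directory = {}                 # object key -> id of the proxy holding it

    def register(self, key, proxy_id):
        self.directory[key] = proxy_id

    def locate(self, key):
        return self.directory.get(key)      # None means no peer has it

class Proxy:
    def __init__(self, proxy_id, controller):
        self.id = proxy_id
        self.controller = controller
        self.cache = {}

    def get(self, key, fetch_from_origin):
        if key in self.cache:                       # local hit
            return self.cache[key]
        peer = self.controller.locate(key)          # one directory lookup,
        if peer is not None and peer != self.id:    # no N-way ICP query
            return f"fetched {key} from proxy {peer}"
        obj = fetch_from_origin(key)                # miss everywhere: go to origin
        self.cache[key] = obj
        self.controller.register(key, self.id)
        return obj

ctrl = Controller()
p1, p2 = Proxy(1, ctrl), Proxy(2, ctrl)
print(p1.get("/index.html", lambda k: f"origin copy of {k}"))
print(p2.get("/index.html", lambda k: f"origin copy of {k}"))  # resolved via controller
```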
106

Stock Price Movement Prediction Using Sentiment Analysis and Machine Learning

Wang, Jenny Zheng 01 June 2021 (has links) (PDF)
Stock price prediction is of strong interest to both researchers and investors but remains a challenging task. Recently, sentiment analysis and machine learning have been adopted for stock price movement prediction. In particular, retail investors' sentiment from online forums has shown its power to influence the stock market. In this paper, a novel system was built to predict stock price movement for the following trading day. The system includes a web scraper, an enhanced sentiment analyzer, a machine learning engine, an evaluation module, and a recommendation module. The system can automatically select the best prediction model from four state-of-the-art machine learning models (Long Short-Term Memory, Support Vector Machine, Random Forest, and Extreme Gradient Boosting) based on the acquired data and the models' performance. Moreover, stock market lexicons were created using large-scale text mining on the Yahoo Finance Conversation boards and natural language processing. Experiments using the top 30 stocks on Yahoo users' watchlists and a randomly selected stock from NASDAQ were performed to examine the system performance and the proposed methods. The experimental results show that incorporating sentiment analysis can improve prediction for stocks with a large daily discussion volume. The Long Short-Term Memory model outperformed the other machine learning models when using both price and sentiment analysis as inputs. In addition, the Extreme Gradient Boosting (XGBoost) model achieved the highest accuracy using the price-only feature on low-volume stocks. Last but not least, the models using the enhanced sentiment analyzer outperformed the VADER sentiment analyzer by 1.96%.
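The "pick the best model on held-out accuracy" step can be pictured with the scikit-learn sketch below, which is only an approximation of the system: GradientBoostingClassifier stands in for XGBoost, the LSTM branch is omitted, and the features and labels are synthetic placeholders for price and sentiment inputs.

```python
# Sketch of automatic model selection by held-out accuracy (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))        # e.g. lagged returns plus sentiment scores
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # up/down
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
}
scores = {}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    scores[name] = accuracy_score(y_te, model.predict(X_te))

best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```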
107

A Federation Of Sentries: Secure And Efficient Trusted Hardware Element Communication

Ward, Blake A 01 June 2024 (has links) (PDF)
Previous work introduced TrustGuard, a design for a containment architecture that allows only the results of the correct execution of approved software to be output. A containment architecture prevents results from malicious hardware or software from being communicated externally. At the core of TrustGuard is a trusted, pluggable device that sits on the path between an untrusted processor and the outside world. This device, called the Sentry, is responsible for validating the correctness of all communication before it leaves the system. This thesis seeks to leverage the correctness guarantees that the Sentry provides to enable efficient, secure communication between two systems, each protected by its own Sentry. The thesis reviews the literature for methods of enabling secure communication between two computer-Sentry pairs and categorizes the pieces of the solution into three parts: attestation, establishing a tunnel, and communicating securely. Attestation in this context provides evidence of identity. The thesis then proposes a new configurable design for a secure network architecture, which includes a new version of the Sentry with a hardware accelerator for secure symmetric encryption, ring oscillator-based physically unclonable functions, and random number generators for attestation and key generation. These design elements are evaluated based on how they might affect the overall system in terms of resource constraints, performance impact, and scalability.
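A conceptual sketch of the challenge-response attestation and session-key derivation steps is shown below; it is not the proposed hardware design. The "PUF" is simulated by a provisioned shared secret, and the AES tunnel and hardware accelerator are out of scope for the illustration.

```python
# Challenge-response attestation and symmetric session-key derivation (toy model).
import hmac, hashlib, secrets

class SentrySketch:
    def __init__(self, device_secret: bytes):
        self._secret = device_secret          # stand-in for a PUF-derived key

    def respond(self, challenge: bytes) -> bytes:
        # Attestation response: keyed MAC over the verifier's fresh challenge.
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

    def derive_session_key(self, challenge: bytes, response: bytes) -> bytes:
        # Both sides derive the same symmetric key once attestation succeeds.
        return hashlib.sha256(self._secret + challenge + response).digest()

shared = secrets.token_bytes(32)              # provisioning step (illustrative only)
a, b = SentrySketch(shared), SentrySketch(shared)

challenge = secrets.token_bytes(16)           # A challenges B with a fresh nonce
response = b.respond(challenge)
assert hmac.compare_digest(response, a.respond(challenge))   # A verifies B
key_a = a.derive_session_key(challenge, response)
key_b = b.derive_session_key(challenge, response)
assert key_a == key_b                          # shared key for the secure tunnel
print("session key established:", key_a.hex()[:16], "...")
```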
108

Energy Efficient Spintronic Device for Neuromorphic Computation

Azam, Md Ali 01 January 2019 (has links)
Future computing will require significant development of new computing device paradigms. This is motivated by CMOS devices reaching their technological limits, the need for non-von Neumann architectures, and the energy constraints of wearable technologies and embedded processors. The first device proposal, an energy-efficient voltage-controlled domain wall device for implementing an artificial neuron and synapse, is analyzed using micromagnetic modeling. By controlling domain wall motion using spin transfer or spin orbit torques in association with voltage-generated strain control of perpendicular magnetic anisotropy in the presence of the Dzyaloshinskii-Moriya interaction (DMI), different positions of the domain wall are realized in the free layer of a magnetic tunnel junction to program different synaptic weights. Additionally, an artificial neuron can be realized by combining this DW device with a CMOS buffer. The second neuromorphic device proposal is inspired by the brain. The membrane potential of many neurons oscillates in a subthreshold, damped fashion, and the neurons fire when excited by an input frequency that nearly equals their eigenfrequency. We investigate a theoretical implementation of such "resonate-and-fire" neurons by utilizing the magnetization dynamics of a fixed magnetic skyrmion-based free layer of a magnetic tunnel junction (MTJ). Voltage control of magnetic anisotropy or voltage-generated strain results in expansion and shrinking of the skyrmion core, which mimics the subthreshold oscillation. Finally, we show that such resonate-and-fire neurons have potential application in coupled nanomagnetic oscillator-based associative memory arrays.
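The resonate-and-fire behaviour can be pictured with a generic damped-oscillator toy model, sketched below; the parameters are arbitrary and do not model the skyrmion/MTJ physics, only the qualitative frequency selectivity.

```python
# A damped oscillator driven at an input frequency "fires" when its amplitude
# crosses a threshold, responding most strongly near its natural frequency.
import numpy as np

def resonate_and_fire(drive_freq, f0=1.0, damping=0.05, amp=0.2,
                      threshold=1.0, t_end=60.0, dt=1e-3):
    w0, wd = 2 * np.pi * f0, 2 * np.pi * drive_freq
    x, v = 0.0, 0.0
    for t in np.arange(0.0, t_end, dt):
        # Semi-implicit Euler step of a driven, damped harmonic oscillator.
        a = -2 * damping * w0 * v - w0**2 * x + amp * w0**2 * np.sin(wd * t)
        v += a * dt
        x += v * dt
        if abs(x) > threshold:
            return t                      # time of first "spike"
    return None                           # subthreshold: never fires

for f in (0.5, 0.9, 1.0, 1.1, 2.0):
    print(f"drive at {f:.1f} x f0 -> first spike at", resonate_and_fire(f))
```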
109

Memory-Aware Scheduling for Fixed Priority Hard Real-Time Computing Systems

Chaparro-Baquero, Gustavo A 21 March 2018 (has links)
As a major component of a computing system, memory has been a key performance and power-consumption bottleneck in computer system design. While processor speeds have kept rising dramatically, the overall performance improvement of the entire system is limited by how fast the memory can feed instructions and data to the processing units (the so-called memory wall problem). Increasing transistor density and surging access demands from a rapidly growing number of processing cores have also significantly elevated the power consumption of the memory system. In addition, interference among memory accesses from different applications and processing cores significantly degrades computation predictability, which is essential to ensure timing specifications in real-time system design. Recent IC technologies (such as 3D-IC technology) and emerging data-intensive real-time applications (such as Virtual Reality/Augmented Reality, Artificial Intelligence, and the Internet of Things) further amplify these challenges. We believe that it is not simply desirable but necessary to adopt a joint CPU/memory resource management framework to deal with these grave challenges. In this dissertation, we focus on how to schedule fixed-priority hard real-time tasks with memory impacts taken into consideration. We target the fixed-priority real-time scheduling scheme since it is one of the most commonly used strategies in practical real-time applications. Specifically, we first develop an approach that takes into consideration not only the execution time variations with cache allocations but also the task period relationships, showing a significant improvement in the feasibility of the system. We further study how to guarantee timing constraints for hard real-time systems under CPU and memory thermal constraints. We first study the problem under an architecture model with a single core and its main memory individually packaged. We develop a thermal model that captures the thermal interaction between the processor and memory, and incorporate the periodic resource server model into our scheduling framework to guarantee both the timing and thermal constraints. We then extend our research to multi-core architectures with processing cores and memory devices integrated into a single 3D platform. To the best of our knowledge, this is the first research that can guarantee hard deadline constraints for real-time tasks under temperature constraints for both processing cores and memory devices. Extensive simulation results demonstrate that our proposed scheduling can significantly improve the feasibility of hard real-time systems under thermal constraints.
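For the fixed-priority setting, feasibility checks build on the classic response-time recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j. The sketch below implements that standard analysis only; in a memory-aware scheme such as this one, cache allocation and memory effects would enter through each task's worst-case execution time C_i. The task set is illustrative.

```python
# Standard fixed-priority response-time analysis (deadlines equal to periods).
import math

def response_time(tasks):
    """tasks: list of (C, T) pairs sorted by priority, highest first.
    Returns the response time of each task, or None if a deadline is missed."""
    results = []
    for i, (C_i, T_i) in enumerate(tasks):
        R = C_i
        while True:
            # Interference from all higher-priority tasks released during R.
            interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
            R_next = C_i + interference
            if R_next > T_i:
                return None          # unschedulable at this priority level
            if R_next == R:
                break                # fixed point reached
            R = R_next
        results.append(R)
    return results

tasks = [(1, 4), (2, 6), (3, 12)]    # (WCET, period) pairs, rate-monotonic order
print(response_time(tasks))          # [1, 3, 10] -> all deadlines met
```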
110

Cyber Profiling for Insider Threat Detection

Udoeyop, Akaninyene Walter 01 August 2010 (has links)
Cyber attacks against companies and organizations can result in high-impact losses that include damaged credibility, exposed vulnerabilities, and financial losses. Until the 21st century, insiders were often overlooked as suspects for these attacks. The 2010 CERT Cyber Security Watch Survey attributes 26 percent of cyber crimes to insiders. Numerous real insider attack scenarios suggest that during, or directly before, the attack, the insider begins to behave abnormally. We introduce a method to detect abnormal behavior by profiling users. We utilize the k-means and kernel density estimation algorithms to learn a user's normal behavior and establish normal user profiles based on behavioral data. We then compare user behavior against the normal profiles to identify abnormal patterns of behavior.
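A small sketch of that profiling pipeline, assuming synthetic two-dimensional behaviour features and scikit-learn's KMeans and KernelDensity (the thesis derives its features from real user activity data):

```python
# Learn "normal" profiles with k-means and a kernel density estimate, then flag
# observations that are far from every profile or fall in low-density regions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(2)
normal = rng.normal(loc=[0, 0], scale=0.5, size=(300, 2))   # training: normal behaviour

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)
kde = KernelDensity(bandwidth=0.5).fit(normal)
threshold = np.quantile(kde.score_samples(normal), 0.05)    # 5th-percentile log-density

def is_abnormal(sample: np.ndarray) -> bool:
    low_density = kde.score_samples(sample.reshape(1, -1))[0] < threshold
    far_from_profiles = np.min(
        np.linalg.norm(km.cluster_centers_ - sample, axis=1)) > 2.0
    return low_density or far_from_profiles

print(is_abnormal(np.array([0.1, -0.2])))   # typical behaviour -> False
print(is_abnormal(np.array([4.0, 4.0])))    # far from any learned profile -> True
```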
