41

A Neural Network Approach to Border Gateway Protocol Peer Failure Detection and Prediction

White, Cory B. 01 December 2009 (has links) (PDF)
The size and speed of computer networks continue to expand at a rapid pace, as do the corresponding errors, failures, and faults inherent within such extensive networks. This thesis introduces a novel approach that interfaces Border Gateway Protocol (BGP) computer networks with neural networks to learn the precursor connectivity patterns that emerge prior to a node failure. Details of the design and construction of a framework that utilizes neural networks to learn and monitor BGP connection states, as a means of detecting and predicting BGP peer node failure, are presented. Moreover, this framework is used to monitor a BGP network, and a suite of tests is conducted to establish this neural network approach as a viable strategy for predicting BGP peer node failure. In all performed experiments, both of the proposed neural network architectures succeed in memorizing and utilizing the network connectivity patterns. Lastly, a discussion of the framework's generic design shows how other types of networks and alternate machine learning techniques can be accommodated with relative ease.
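
To make the idea concrete, the following is a minimal sketch, not the thesis's own framework, of training a small neural network to flag a BGP peer as likely to fail from a short window of its recent FSM connection states. The state encoding, window length, toy data, and use of a scikit-learn classifier are illustrative assumptions.

```python
# Illustrative sketch only (the thesis builds its own framework): map a window of BGP
# peer connection-state observations to a "failure soon" label with a small neural net.
import numpy as np
from sklearn.neural_network import MLPClassifier

WINDOW = 5  # number of consecutive state observations per sample (assumed)
STATES = {"Idle": 0, "Connect": 1, "Active": 2,
          "OpenSent": 3, "OpenConfirm": 4, "Established": 5}

# toy training data: sequences of peer FSM states and whether the peer failed shortly after
sequences = [
    (["Established"] * WINDOW, 0),
    (["Established", "Established", "Active", "Idle", "Active"], 1),
    (["Established", "Active", "Connect", "Active", "Idle"], 1),
    (["Established", "Established", "Established", "OpenConfirm", "Established"], 0),
]
X = np.array([[STATES[s] for s in seq] for seq, _ in sequences], dtype=float) / 5.0
y = np.array([label for _, label in sequences])

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# score a newly observed window of states for a monitored peer
window = ["Established", "Active", "Active", "Idle", "Idle"]
print(model.predict_proba(np.array([[STATES[s] for s in window]], dtype=float) / 5.0))
```
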
42

CUDA Web API: Remote Execution of CUDA Kernels Using Web Services

Becker, Massimo J 01 June 2012 (has links) (PDF)
Massively parallel programming is a rapidly growing field following the recent introduction of general-purpose GPU computing. Modern graphics processors from NVIDIA and AMD have massively parallel architectures that can be used for such applications as 3D rendering, financial analysis, physics simulations, and biomedical analysis. These massively parallel systems are exposed to programmers through interfaces such as NVIDIA's CUDA, OpenCL, and Microsoft's C++ AMP. These frameworks expose functionality primarily through either C or C++. In order to use these massively parallel frameworks, programs must be run on machines equipped with massively parallel hardware. These requirements limit the flexibility of new massively parallel systems. This paper explores the possibility that massively parallel systems can be exposed through web services in order to facilitate their use from remote systems written in other languages. To explore this possibility, an architecture is put forth, with requirements and a high-level design, for building a web service that can overcome the limitations of existing tools and frameworks. The CUDA Web API is built using Python, PyCUDA, NumPy, JSON, and Django to meet the requirements set forth. Additionally, a client application, CUDA Cloud, is built and serves as an example web service client. The CUDA Web API's performance and functionality are validated using a common matrix multiplication algorithm implemented using different languages and tools. Performance tests show runtime improvements for larger datasets using the CUDA Web API for remote CUDA kernel execution over serial implementations. This paper concludes that existing limitations associated with GPGPU usage can be overcome with the specified architecture.
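
As an illustration of the remote-execution idea, here is a hedged sketch of a handler that compiles and runs a CUDA kernel received as JSON using PyCUDA. The request fields (kernel_source, kernel_name, input) are hypothetical and do not reflect the actual CUDA Web API schema; a real service would also validate input and manage GPU contexts per request.

```python
# Minimal sketch of serving a remotely submitted CUDA kernel (illustrative only).
import json
import numpy as np
import pycuda.autoinit            # initializes a CUDA context on the serving host
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

def handle_kernel_request(request_json: str) -> str:
    """Compile kernel source received as JSON, run it on one input array, return JSON."""
    req = json.loads(request_json)
    mod = SourceModule(req["kernel_source"])       # JIT-compile the CUDA C source
    kernel = mod.get_function(req["kernel_name"])
    a = np.array(req["input"], dtype=np.float32)
    out = np.empty_like(a)
    kernel(cuda.In(a), cuda.Out(out),              # copy in, launch, copy result back
           block=(min(len(a), 256), 1, 1), grid=(1, 1))
    return json.dumps({"output": out.tolist()})
```
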
43

An Exploratory Analysis of Twitter Keyword-Hashtag Networks and Knowledge Discovery Applications

Hamed, Ahmed A 01 January 2014 (has links)
The emergence of social media has impacted the way people think, communicate, behave, learn, and conduct research. In recent years, a large number of studies have analyzed and modeled this social phenomenon. Driven by commercial and social interests, social media has become an attractive subject for researchers, and new models, algorithms, and applications that address specific domains and solve distinct problems have emerged. In this thesis, we propose a novel network model and a path mining algorithm called HashnetMiner to discover implicit knowledge that is not easily exposed using other network models. Our experiments using HashnetMiner have demonstrated anecdotal evidence of drug-drug interactions when applied to a drug reaction context. The proposed research comprises three parts built upon the common theme of utilizing hashtags in tweets. (1) Digital Recruitment on Twitter: we build an expert system shell for two different studies, a nicotine patch study where the system reads streams of tweets in real time and decides whether to recruit the senders to participate in the study, and an environmental health study where the system identifies individuals who can participate in a survey using Twitter. (2) Does Social Media Big Data Make the World Smaller? This work provides an exploratory analysis of large-scale keyword-hashtag (K-H) networks generated from Twitter. We use two different measures: the number of vertices that connect any two keywords, and the eccentricity of keyword vertices, a well-known centrality and shortest path measure. Our analysis shows that K-H networks conform to the phenomenon of the shrinking world and expose hidden paths among concepts. (3) We pose the following biomedical web science question: can patterns identified in Twitter hashtags provide clinicians with a powerful tool to extrapolate new medical therapies and/or drugs? We present a systematic network mining method, HashnetMiner, that operates on networks of medical concepts and hashtags. To the best of our knowledge, this is the first effort to present biomedical web science models and algorithms that address such a question by means of data mining and knowledge discovery using hashtag-based networks.
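
As a toy illustration of the keyword-hashtag (K-H) network idea, not the HashnetMiner implementation itself, the sketch below builds a small network with networkx and inspects a connecting path between two keywords and the eccentricity of a keyword vertex. The tweets and concept names are made up.

```python
# Build a tiny keyword-hashtag network and examine paths/eccentricity (illustrative only).
import networkx as nx

G = nx.Graph()
# hypothetical tweets: each links the keywords it mentions to the hashtags it carries
tweets = [
    (["warfarin"], ["#bloodthinner", "#bruising"]),
    (["aspirin"],  ["#bloodthinner", "#headache"]),
    (["bruising"], ["#bruising", "#sideeffect"]),
]
for keywords, hashtags in tweets:
    for kw in keywords:
        for tag in hashtags:
            G.add_edge(("keyword", kw), ("hashtag", tag))

# shortest keyword-to-keyword path through shared hashtags (a "hidden path" between concepts)
print(nx.shortest_path(G, ("keyword", "warfarin"), ("keyword", "aspirin")))

# eccentricity of a keyword vertex within its connected component
component = G.subgraph(nx.node_connected_component(G, ("keyword", "warfarin")))
print(nx.eccentricity(component, ("keyword", "warfarin")))
```
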
44

Rethinking the I/O Stack for Persistent Memory

Chowdhury, Mohammad Ataur Rahman 28 March 2018 (has links)
Modern operating systems have been designed around the hypotheses that (a) memory is both byte-addressable and volatile and (b) storage is block-addressable and persistent. The arrival of new Persistent Memory (PM) technologies has made these assumptions obsolete. Despite much recent work in this space, the need for consistently sharing PM data across multiple applications remains an urgent, unsolved problem, and simple yet powerful operating system support remains elusive. In this dissertation, we propose and build the Region System, a high-performance operating system stack for PM that implements usable consistency and persistence for application data. The region system provides support for consistently mapping and sharing data resident in PM across user application address spaces. The region system introduces a novel IPI-based PMSYNC operation, which ensures atomic persistence of mapped pages across multiple address spaces. This allows applications to consume PM using the well-understood and much-desired memory-like model with an easy-to-use interface. Next, we propose a metadata structure without any redundant metadata to reduce CPU cache flushes. The high-performance design minimizes the expensive PM ordering and durability operations by embracing a minimalistic approach to metadata construction and management. To strengthen the case for the region system, we analyze different types of applications to identify their dependence on memory-mapped data usage, and propose user-level libraries LIBPM-R and LIBPMEMOBJ-R to support shared persistent containers. The user-level libraries, along with the region system, demonstrate a comprehensive end-to-end software stack for consuming PM devices.
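
For contrast, the conventional memory-mapped model that the region system improves upon can be sketched with POSIX-style mmap and msync. The snippet below is that conventional analogue in Python, not the region system or PMSYNC (which is a kernel-level, IPI-based primitive), and the PM device path is hypothetical.

```python
# Conventional mmap/msync-style consumption of a (hypothetically) PM-backed file.
import mmap
import os

fd = os.open("/mnt/pmem0/shared_region", os.O_RDWR | os.O_CREAT, 0o644)
os.ftruncate(fd, 4096)              # size the backing region
buf = mmap.mmap(fd, 4096)           # byte-addressable view of the persistent data
buf[0:5] = b"hello"                 # update in place, memory-like
buf.flush()                         # msync: persist the mapped page
buf.close()
os.close(fd)
```
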
45

Using Statistical Methods to Determine Geolocation Via Twitter

Wright, Christopher M. 01 May 2014 (has links)
With the ever-expanding usage of social media websites such as Twitter, it is possible to use statistical inquiries to infer a person's geographic location solely from the content of their tweets. In a study done in 2010, Zhiyuan Cheng was able to detect the location of a Twitter user within 100 miles of their actual location 51% of the time. While this may seem like an already significant finding, the study was done while Twitter was still finding its footing. In 2010, Twitter had 75 million registered users; as of March 2013, it has around 500 million. In this thesis, my own dataset was collected and, using Excel macros, compared against Cheng's results to see whether they have changed over the three years since his study. If Cheng's 51% accuracy can be achieved more efficiently using a simpler methodology, this could have a significant impact on Homeland Security and cyber security measures.
46

Optimizing Main Memory Usage in Modern Computing Systems to Improve Overall System Performance

Campello, Daniel Jose 20 June 2016 (has links)
Operating systems use fast, CPU-addressable main memory to maintain an application's temporary data as anonymous data and to cache copies of persistent data stored in slower block-based storage devices. However, the use of this faster memory comes at a high cost. Therefore, several techniques have been proposed in the literature to use main memory more efficiently. In this dissertation we introduce three distinct approaches to improve overall system performance by optimizing main memory usage. First, DRAM and host-side caching of file system data are used to speed up virtual machine performance in today's virtualized data centers. Clustering VM images that share identical pages, coupled with data deduplication, has the potential to optimize main memory usage, since it provides more opportunity for sharing resources across processes and across different VMs. In our first approach, we study the use of content and semantic similarity metrics and a new algorithm to cluster VM images and place them on hosts where, through deduplication, we improve main memory usage. Second, while careful VM placement can improve memory usage by eliminating duplicate data, caches in current systems employ complex machinery to manage the cached data. Writing data to a page not present in the file system page cache causes the operating system to synchronously fetch the page into memory, blocking the writing process. We address this limitation with a new approach to managing page writes that buffers the written data elsewhere in memory and unblocks the writing process immediately. This buffering allows the system to service file writes faster and with fewer memory resources. In our last approach, we investigate the use of emerging byte-addressable persistent memory technology to extend main memory as a less costly alternative to exclusively using expensive DRAM. We motivate and build a tiered memory system in which persistent memory and DRAM co-exist, providing improved application performance at lower cost and power consumption, with the goal of placing the right data in the right memory tier at the right time. The proposed approach seamlessly performs page migration across memory tiers as access patterns change and/or to handle tier memory pressure.
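
The second approach's core idea, unblocking the writer by buffering written data in memory and merging it into the cache later, can be illustrated with a toy user-level sketch. The real mechanism lives inside the operating system's page cache, so the queue and dict below are stand-ins and not the dissertation's implementation.

```python
# Toy illustration of non-blocking page writes: the writer enqueues data and returns
# immediately; a background thread later merges it into a stand-in "page cache".
import queue
import threading

write_buffer = queue.Queue()
page_cache = {}                      # stand-in for the file system page cache

def background_flusher():
    while True:
        path, offset, data = write_buffer.get()
        page = page_cache.setdefault((path, offset // 4096), bytearray(4096))
        start = offset % 4096
        page[start:start + len(data)] = data      # the slow fetch/merge happens here
        write_buffer.task_done()

threading.Thread(target=background_flusher, daemon=True).start()

def non_blocking_write(path, offset, data):
    write_buffer.put((path, offset, data))        # returns immediately; no blocking fetch

non_blocking_write("/tmp/example.dat", 0, b"hello")
write_buffer.join()                               # wait only when durability is needed
```
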
47

Routing Protocol Performance Evaluation for Mobile Ad-hoc Networks

Lopez-Fernandez, Pedro A. 01 January 2008 (has links)
Currently, MANETs are a very active area of research, due to their great potential to provide networking capabilities where it is not feasible to have a fixed infrastructure in place, or to complement existing infrastructure. Routing in this kind of network is much more challenging than in conventional networks, due to its mobile nature and limited power and hardware resources. The most practical way to conduct routing studies of MANETs is by means of simulators such as GloMoSim. GloMoSim was utilized in this research to investigate various performance statistics and draw comparisons among different MANET routing protocols, namely AODV, LAR (augmenting DSR), FSR (also known as Fisheye), WRP, and the Bellman-Ford algorithm. The network application used was FTP, and the network traffic was generated with tcplib [Danzig91]. The performance statistics investigated were application bytes received, normalized application bytes received, routing control packets transmitted, and application byte delivery ratio. The scenarios tested consisted of an airborne application at a high (26.8 m/s) and a low (2.7 m/s) speed on a 2000 m x 2000 m domain, with 36, 49, 64, 81, and 100 nodes, and radio transmit power levels of 7.005, 8.589, and 10.527 dBm. Nodes were paired up in fixed client-server couples, with 10% and 25% of the nodes being clients and the same quantity being servers. AODV and LAR showed a significant margin of performance advantage over the remaining protocols in the scenarios tested.
48

Taxonomy of synchronization and barrier as a basic mechanism for building other synchronization from it

Braginton, Pauline 01 January 2003 (has links)
A Distributed Shared Memory (DSM) system consists of several computers that share a memory area and has no global clock. Therefore, an ordering of events in the system is necessary. Synchronization is a mechanism for coordinating activities between processes, which are program instantiations in the system.
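
As an example of the kind of primitive such a taxonomy covers, here is a minimal sketch of a reusable barrier built from a condition variable, so no thread proceeds past wait() until all parties have arrived. It is illustrative only and not drawn from the thesis.

```python
# Reusable barrier built from a condition variable (illustrative sketch).
import threading

class SimpleBarrier:
    def __init__(self, parties: int):
        self.parties = parties
        self.count = 0
        self.generation = 0
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            gen = self.generation
            self.count += 1
            if self.count == self.parties:        # last arrival releases everyone
                self.count = 0
                self.generation += 1
                self.cond.notify_all()
            else:
                while gen == self.generation:     # guard against spurious wakeups
                    self.cond.wait()

barrier = SimpleBarrier(3)

def worker(i):
    print(f"worker {i} arrived")
    barrier.wait()
    print(f"worker {i} released")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```
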
49

Motivation and Learning of Non-Traditional Computing Education Students in a Web-based Combined Laboratory

Green, Michael Jesse 01 January 2015 (has links)
Hands-on experiential learning activities are an important component of computing education disciplines. Laboratory environments provide learners access to real-world equipment for completing experiments. Local campus facilities are commonly used to host laboratory classes. While campus facilities afford hands-on experience with real equipment, high maintenance costs, restricted access, and limited flexibility diminish laboratory effectiveness. Web-based simulation and remote laboratory formats have emerged as low-cost options that allow open access and learner control. Simulation lacks fidelity, and remote laboratories are considered too complex for novice learners. A web-based combined laboratory format incorporates the benefits of each format while mitigating the shortcomings. Relatively few studies have examined the cognitive benefits of web-based laboratory formats in meeting computing education students' goals. A web-based combined laboratory model that incorporates motivation strategies was developed to address non-traditional computing education students' preferences for control of pace and access to learning. Internal validation of the laboratory model was conducted using pilot studies and Delphi expert review techniques. A panel of instructors from diverse computing education backgrounds reviewed the laboratory model, and panel recommendations guided enhancement of the model design.
50

A method of evaluation of high-performance computing batch schedulers

Futral, Jeremy Stephen 01 January 2019 (has links)
According to Sterling et al., a batch scheduler, also called workload management, is an application or set of services that provides a method to monitor and manage the flow of work through the system [Sterling01]. The purpose of this research was to develop a method to assess the execution speed of workloads that are submitted to a batch scheduler. While previous research exists, this research is different in that more complex jobs were devised that fully exercised the scheduler with established benchmarks. This research is important because the reduction of latency, even if it is minuscule, can lead to massive savings of electricity, time, and money over the long term. This is especially important in the era of green computing [Reuther18]. The methodology used to assess these schedulers involved the execution of custom automation scripts. These custom scripts were developed as part of this research to automatically submit custom jobs to the schedulers, take measurements, and record the results. Multiple experiments were conducted throughout the course of the research. These experiments were designed to apply the methodology and assess the execution speed of a small selection of batch schedulers. Due to time constraints, the research was limited to four schedulers. The measurements taken during the experiments were wall time, RAM usage, and CPU usage. These measurements captured the utilization of system resources by each of the schedulers. The custom scripts were executed using 1, 2, and 4 servers to determine how well a scheduler scales with network growth. The experiments were conducted on local school resources. All hardware was similar and was co-located within the same data center. While the schedulers investigated in the experiments are agnostic to whether the system is a grid, cluster, or supercomputer, the investigation was limited to a cluster architecture.
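
A hedged sketch of the kind of automation script described, submitting a job, waiting for completion, and recording wall time to a results file, might look like the following. The sbatch --wait invocation assumes Slurm as the scheduler under test; the evaluated schedulers, job scripts, and file names here are illustrative, not the author's actual setup.

```python
# Submit a job to a batch scheduler, wait for it to finish, and log wall time (sketch).
import csv
import subprocess
import time

def run_timed_job(script_path: str, results_csv: str) -> None:
    start = time.perf_counter()
    # --wait blocks until the job finishes, so wall time includes queueing and execution
    subprocess.run(["sbatch", "--wait", script_path], check=True)
    wall_time = time.perf_counter() - start
    with open(results_csv, "a", newline="") as f:
        csv.writer(f).writerow([script_path, f"{wall_time:.3f}"])

run_timed_job("benchmark_job.sh", "results.csv")
```
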
