  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A simulative analysis of the robustness of Smart Grid networks and a summary of the security aspects

Kubler, Sarah Marie January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Caterina M. Scoglio / The need for reliable and fast communication in the power grid is growing and becoming more critical. With the Smart Grid initiative, an increasing number of intelligent devices, such as smart meters and new sensors, are being added to the grid. Traffic demand on the communications network increases as these new devices are added, which can cause issues such as longer delays, dropped packets, and equipment failure. The power grid relies on this data to function properly: unless it receives correct and timely data, it will lose reliability and will not be able to provide customers with power. The current communications network architecture therefore needs to be evaluated and improved. In this thesis, a simulator program is developed to study the communications network. The simulation model, written in C++, models the components of the communications network, and the simulation results provide insight into how to design the network so that the system is robust to failures. We use the simulator to study different topologies of the communications network. The communications network often has a topology similar to that of the power grid, because of rights-of-way and equipment ownership. Modifying the topology of the communications network slightly can improve its performance. Security of the communications network is also an important aspect. Without security protocols, there is a risk of successful attacks on the network, whether from malicious users of the network or from entities outside it. These attacks may lead to damaged equipment, loss of power to consumers, network overload, loss of data, and loss of privacy. This thesis presents a short overview of the major issues related to the security of the communications network. 
The Department of Electrical and Computer Engineering (ECE) at Kansas State University (K-State), in collaboration with Burns and McDonnell, is developing a Smart Grid lab to be housed within the department. The lab will consist of both power grid equipment and network communication equipment. This thesis describes similar labs and then presents the initial plan for the K-State lab, which is currently in the planning stage.
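The failure analysis described in this abstract can be sketched compactly. The thesis's simulator is written in C++; the stdlib-Python sketch below only illustrates the underlying idea, and the six-node ring topology, the added cross-link, and the choice of failed nodes are all hypothetical examples, not topologies from the thesis:

```python
from collections import deque

def largest_component(adj, removed=frozenset()):
    """Size of the largest connected component after removing some nodes (BFS)."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

# Two hypothetical topologies over the same six nodes: a ring mirroring the
# power grid, and the same ring with one added cross-link (the kind of
# "slight modification" the thesis studies).
ring = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
ring_plus = {n: list(v) for n, v in ring.items()}
ring_plus[0].append(3); ring_plus[3].append(0)

# Fail nodes 1 and 4 simultaneously and compare surviving connectivity.
print(largest_component(ring, {1, 4}))       # → 2 (ring splits into islands)
print(largest_component(ring_plus, {1, 4}))  # → 4 (cross-link keeps nodes together)
```

Under this assumed two-node failure, the single added cross-link doubles the surviving network, mirroring the claim that small topology changes can improve robustness.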
2

Modeling, simulations, and experiments to balance performance and fairness in P2P file-sharing systems

Li, Yunzhao January 1900 (has links)
Doctor of Philosophy / Department of Electrical and Computer Engineering / Don Gruenbacher / Caterina Scoglio / In this dissertation, we investigate research gaps still existing in P2P file-sharing systems: the necessity of fairness maintenance during the content-information publishing/retrieving process, and the effect of stranger policies on P2P fairness. First, through a wide range of measurements in the KAD network, we present the impact of a poorly designed incentive fairness policy on the performance of looking up content information. The KAD network, designed to help peers publish and retrieve sharing information, adopts distributed hash table (DHT) technology and is integrated into the aMule/eMule P2P file-sharing network. We develop a distributed measurement framework that employs multiple test nodes running on the PlanetLab testbed. During the measurements, the routing tables of around 20,000 peers are crawled and analyzed, and more than 3,000,000 pieces of source-location information from the publishing tables of multiple peers are retrieved and the corresponding peers contacted. Based on these measurements, we show that the routing table is well maintained, while the maintenance policy for the source-location-information publishing table is not well designed. Both the current maintenance schedule for the publishing table and the poor incentive policy on publishing peers result in low availability of the publishing table, which in turn causes low lookup performance in the KAD network. Moreover, we propose three possible solutions to address these issues: a self-maintenance scheme with a short renewal interval, a chunk-based publishing/retrieving scheme, and a fairness scheme. Second, using both numerical analyses and agent-based simulations, we evaluate the impact of different stranger policies on system performance and fairness. We find that the most restrictive stranger policy yields the best fairness at the cost of performance degradation. 
Performance and fairness do not vary consistently across stranger policies: a trade-off exists between controlling free-riding and maintaining system performance. Thus, P2P designers must handle strangers carefully according to their individual design goals. We also show that BitTorrent prefers to maintain fairness with a highly restrictive stranger policy, while aMule/eMule's fully rewarding stranger policy promotes free-riders' benefit.
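KAD's lookups, like other Kademlia-style DHTs, locate the peers responsible for a key by XOR distance between identifiers. As a minimal sketch of that lookup primitive, assuming a toy 8-bit ID space and made-up peer IDs (real KAD IDs are 128-bit):

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia-style distance between two node/content IDs."""
    return a ^ b

def closest_peers(target: int, peers, k: int = 2):
    """Return the k peers whose IDs are XOR-closest to the target key --
    the peers a lookup contacts when publishing or retrieving source info."""
    return sorted(peers, key=lambda p: xor_distance(p, target))[:k]

# Hypothetical 8-bit ID space for illustration.
peers = [0b0010_1100, 0b1100_0001, 0b0010_1111, 0b0111_0000]
print(closest_peers(0b0010_1010, peers))  # → [47, 44]
```

Publishing-table availability matters precisely because retrieval contacts only these XOR-closest peers: if their tables expire or they refuse strangers, the lookup fails even though the content exists elsewhere.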
3

Unobtrusive ballistocardiography using an electromechanical film to obtain physiological signals from children with autism spectrum disorder

Rubenthaler, Steve January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Steven Warren / Polysomnography is a method to obtain physiological signals from individuals with potential sleep disorders. Such physiological data, when acquired from children with autism spectrum disorders, could allow caregivers and child psychologists to identify sleep disorders and other indicators of nighttime well-being that affect their quality of life and ability to learn. Unfortunately, traditional polysomnography is not well suited for children with autism spectrum disorder because they commonly have an aversion to unfamiliar objects – in this case, the numerous wires and electrodes required to perform a full polysomnograph. Therefore, an innovative, unobtrusive method for gathering relevant physiological data must be designed. This report discusses several methods for obtaining a ballistocardiogram (BCG), which is a representation of the ballistic forces created by the heart during the cardiac cycle. A ballistocardiograph design is implemented using an electromechanical film placed under the center of a bed sheet. While an individual sleeps on the bed, the circuitry attached to the film extracts and amplifies the BCG data, which are then streamed to a computer through a LabVIEW interface and stored in a text file. These data are analyzed with a MATLAB algorithm that uses autocorrelation and linear predictive coding in the time domain to sharpen the signal. Frequency-domain peaks are then extracted to determine the average heart rate every ten seconds. Initial tests involved four participants (student members of the research team) who lay in four positions: on their back, stomach, right side, and left side, yielding 16 unique data sets. Each participant lay in at least one position that allowed for accurate tracking of heart rate, with seven of the 16 signals demonstrating heart rates with less than 2% error when compared to heart rates acquired with a commercial pulse oximeter. 
The stomach position appeared to offer the lowest total error, while lying on the right side offered the highest total error. Overall, heart rates acquired from this initial set of participants exhibited an average error of approximately 2.5% for all four positions.
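The periodicity-based heart-rate estimation described above can be approximated in a few lines. The thesis's pipeline uses MATLAB with linear predictive coding and frequency-domain peak extraction; the stdlib-Python sketch below uses only a time-domain autocorrelation peak and a synthetic 72 bpm signal, so it is illustrative, not the report's algorithm:

```python
import math

def heart_rate_autocorr(signal, fs, min_bpm=40, max_bpm=180):
    """Estimate heart rate by locating the autocorrelation peak within
    the physiologically plausible lag range."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    lo = int(fs * 60 / max_bpm)          # shortest plausible beat interval
    hi = int(fs * 60 / min_bpm)          # longest plausible beat interval
    best_lag, best_val = lo, float('-inf')
    for lag in range(lo, min(hi, n - 1)):
        val = sum(x[i] * x[i + lag] for i in range(n - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return 60.0 * fs / best_lag          # lag in samples -> beats per minute

# Synthetic 1.2 Hz (72 bpm) BCG-like signal sampled at 50 Hz for 10 s.
fs = 50
sig = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(10 * fs)]
print(heart_rate_autocorr(sig, fs))  # within a couple of bpm of 72
```

On real BCG data the heartbeat is buried in respiration and motion artifacts, which is why the thesis sharpens the signal before peak extraction rather than relying on raw autocorrelation alone.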
4

The effects of hardware acceleration on power usage in basic high-performance computing

Amsler, Christopher January 1900 (has links)
Master of Science / Department of Electrical Engineering / Dwight Day / Power consumption has become a large concern in many systems, including portable electronics and supercomputers. Creating efficient hardware that can do more computation with less power is highly desirable. This project explores one avenue toward this goal by hardware-accelerating a conjugate gradient solve using a Field Programmable Gate Array (FPGA). The method uses three basic operations frequently: dot product, weighted vector addition, and sparse matrix-vector multiply. Each operation was accelerated on the FPGA. A power monitor was also implemented to measure the power consumption of the FPGA during each operation under several different implementations. Results showed that hardware-accelerating the dot product reduces execution time relative to a software-only approach. However, the more memory-intensive operations were slower under the current hardware-acceleration architecture.
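The three kernels named in the abstract are standard building blocks of a conjugate gradient solve. A plain-Python sketch of their reference semantics follows; the thesis implements these on an FPGA, and the CSR (compressed sparse row) storage format used here is an assumption for illustration:

```python
def dot(x, y):
    """Dot product: the reduction the FPGA accelerated most effectively."""
    return sum(a * b for a, b in zip(x, y))

def waxpy(alpha, x, y):
    """Weighted vector addition: returns alpha*x + y elementwise."""
    return [alpha * a + b for a, b in zip(x, y)]

def spmv(values, col_idx, row_ptr, x):
    """Sparse matrix-vector multiply on a CSR-stored matrix --
    the most memory-intensive of the three kernels."""
    y = []
    for r in range(len(row_ptr) - 1):
        y.append(sum(values[k] * x[col_idx[k]]
                     for k in range(row_ptr[r], row_ptr[r + 1])))
    return y

# 2x2 sparse matrix [[4, 0], [1, 3]] in CSR form.
vals, cols, rows = [4, 1, 3], [0, 0, 1], [0, 1, 3]
print(dot([1, 2], [3, 4]))             # → 11
print(waxpy(2, [1, 1], [0, 5]))        # → [2, 7]
print(spmv(vals, cols, rows, [1, 2]))  # → [4, 7]
```

The contrast in the results makes sense in these terms: `dot` streams two dense arrays and reduces, while `spmv` performs an indirect load (`x[col_idx[k]]`) per nonzero, which stresses memory bandwidth rather than arithmetic.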
5

Custom biomedical sensors for application in wireless body area networks and medical device integration frameworks

Li, Kejia January 1900 (has links)
Doctor of Philosophy / Department of Electrical & Computer Engineering / Steve Warren / The U.S. health care system is one of the most advanced and costly systems in the world. The health services supply/demand gap is being enlarged by the aging population coupled with shortages in the traditional health care workforce and new information technology workers. This will not change if the current medical system adheres to the traditional hospital-centered model. One promising solution is to incorporate patient-centered, point-of-care test systems that promote proactive and preventive care by utilizing technology advancements in sensors, devices, communication standards, engineering systems, and information infrastructures. Biomedical devices optimized for home and mobile health care environments will drive this transition. This dissertation documents research and development focused on biomedical device design for this purpose (including a wearable wireless pulse oximeter, motion sensor, and two-thumb electrocardiograph) and, more importantly, their interactions with other medical components, their supporting information infrastructures, and processing tools that illustrate the effectiveness of their data. The GumPack concept and prototype introduced in Chapter 2 addresses these aspects, as it is a sensor-laden device, a host for a local body area network (BAN), a portal to external integration frameworks, and a data processing platform. GumPack sensor-component design (Chapters 3 and 4) is oriented toward surface applications (e.g., touch and measure), an everyday-carry form factor, and reconfigurability. Onboard tagging technology (Chapters 5 and 6) enhances sensor functionality by providing, e.g., a signal quality index and confidence coefficient for itself and/or next-tier medical components (e.g., a hub). 
Sensor interaction and integration work includes applications based on the GumPack design (Chapters 7 through 9) and the Medical Device Coordination Framework (Chapters 10 through 12). A high-resolution, wireless BAN is presented in Chapter 8, followed by a new physiological use case for pulse wave velocity estimation in Chapter 9. The collaborative MDCF work is transitioned to a web-based Hospital Information Integration System (Chapter 11) by employing database, AJAX, and Java Servlet technology. Given the preceding sensor designs and the availability of information infrastructures like the MDCF, medical platform-oriented devices (Chapter 12) could be an innovative and efficient way to design medical devices for hospital and home health care applications.
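The abstract mentions that onboard tagging supplies a signal quality index for next-tier components but does not define the metric here. The sketch below is therefore a purely hypothetical stand-in: an SQI computed as the fraction of beat-to-beat intervals that stay close to their median, which a hub could use to weight or discard incoming data:

```python
def signal_quality_index(beat_intervals_ms, tolerance=0.15):
    """Hypothetical SQI: fraction of beat-to-beat intervals within
    `tolerance` of the median interval (1.0 = clean, 0.0 = unusable).
    Illustrative only -- not the metric from the dissertation."""
    s = sorted(beat_intervals_ms)
    median = s[len(s) // 2]
    ok = sum(1 for iv in beat_intervals_ms
             if abs(iv - median) <= tolerance * median)
    return ok / len(beat_intervals_ms)

clean = [820, 810, 825, 815, 830]   # steady ~73 bpm rhythm
noisy = [820, 300, 825, 1900, 830]  # two artifact-corrupted intervals
print(signal_quality_index(clean))  # → 1.0
print(signal_quality_index(noisy))  # → 0.6
```

The point of such a tag, as the abstract frames it, is that quality assessment travels with the data, so a hub can weigh each sensor's readings by confidence instead of trusting them equally.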
6

Effect of memory access and caching on high performance computing

Groening, James January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Dwight Day / High-performance computing is often limited by memory access. As speeds increase, processors are often waiting on data transfers to and from memory. Classic memory controllers focus on delivering sequential memory as quickly as possible, which increases the performance of instruction reads and sequential data reads and writes. However, many applications in high-performance computing include random memory access, which can limit the performance of the system. Techniques such as scatter/gather can improve performance by allowing nonsequential data to be written and read in a single operation. Caching can also improve performance by storing some of the data in memory local to the processor. In this project, we evaluate the benefits of different cache configurations, varying both the cache line size and the total cache size. Although a range of benchmarks is typically used to test performance, we focus on a conjugate gradient solver, HPCCG. The HPCCG program incorporates many of the elements of common benchmarks used in high-performance computing and relates better to a real-world problem. Results show that the performance of a cache configuration can depend on the size of the problem: smaller problems can benefit more from a larger cache, while a smaller cache may be sufficient for larger problems.
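The sequential-versus-random effect described above is easy to reproduce with a toy model. The direct-mapped cache below, with its line count, line size, and address streams, is an illustrative assumption and not the project's configuration:

```python
import random

class DirectMappedCache:
    """Toy direct-mapped cache: tracks hit rate over a stream of byte addresses."""
    def __init__(self, num_lines, line_size):
        self.num_lines, self.line_size = num_lines, line_size
        self.tags = [None] * num_lines   # block number cached in each line
        self.hits = self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        block = addr // self.line_size   # which memory block the address is in
        line = block % self.num_lines    # which cache line that block maps to
        if self.tags[line] == block:
            self.hits += 1
        else:
            self.tags[line] = block      # miss: fill the line

    @property
    def hit_rate(self):
        return self.hits / self.accesses

random.seed(0)
seq_cache = DirectMappedCache(num_lines=64, line_size=8)
rnd_cache = DirectMappedCache(num_lines=64, line_size=8)
for addr in range(4096):                # sequential sweep over 4 KiB
    seq_cache.access(addr)
for _ in range(4096):                   # random accesses over the same range
    rnd_cache.access(random.randrange(4096))
print(seq_cache.hit_rate)   # → 0.875 (exactly one miss per 8-byte line)
print(rnd_cache.hit_rate)   # much lower: most accesses miss
```

Sequential access misses once per line fill and then hits for the rest of the line, while random access over a working set larger than the cache keeps evicting lines before they are reused, which is the behavior HPCCG's sparse accesses expose at scale.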
7

Cross-domain sentiment classification using grams derived from syntax trees and an adapted naive Bayes approach

Cheeti, Srilaxmi January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / There is an increasing amount of user-generated information in online documents, including user opinions on various topics and products such as movies, DVDs, kitchen appliances, etc. To make use of such opinions, it is useful to identify the polarity of the opinion, in other words, to perform sentiment classification. The goal of sentiment classification is to classify a given text/document as positive, negative, or neutral based on the words present in the document. Supervised learning approaches have been successfully used for sentiment classification in domains that are rich in labeled data. Some of these approaches make use of features such as unigrams, bigrams, sentiment words, adjective words, syntax trees (or variations of trees obtained using pruning strategies), etc. However, for some domains the amount of labeled data can be relatively small and we cannot train an accurate classifier using the supervised learning approach. Therefore, it is useful to study domain adaptation techniques that can transfer knowledge from a source domain that has labeled data to a target domain that has little or no labeled data, but a large amount of unlabeled data. We address this problem in the context of product reviews, specifically reviews of movies, DVDs and kitchen appliances. Our approach uses an Adapted Naive Bayes classifier (ANB) on top of the Expectation Maximization (EM) algorithm to predict the sentiment of a sentence. We use grams derived from complete syntax trees or from syntax subtrees as features when training the ANB classifier. More precisely, we extract grams from syntax trees corresponding to sentences in either the source or target domains. 
To be able to transfer knowledge from source to target, we identify generalized features (grams) using the frequently co-occurring entropy (FCE) method, and represent the source instances using these generalized features. The target instances are represented with all grams occurring in the target, or with a reduced gram set obtained by removing infrequent grams. We experiment with different types of grams in a supervised framework in order to identify the most predictive types of grams, and further use those grams in the domain adaptation framework. Experimental results on several cross-domain tasks show that domain adaptation approaches that combine source and target data (a small amount of labeled data and some unlabeled data) can help learn classifiers for the target that are better than those learned from the labeled target data alone.
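A minimal sketch of the naive Bayes core underlying this approach follows, with plain unigrams standing in for the syntax-tree grams and without the EM/FCE domain-adaptation machinery (the tiny training set is made up; class priors are equal here, so they are omitted from the score):

```python
from collections import defaultdict
import math

def train_nb(docs):
    """Multinomial naive Bayes: per-class gram counts and totals.
    Unigrams stand in for the syntax-tree grams used in the thesis."""
    counts = {'pos': defaultdict(int), 'neg': defaultdict(int)}
    totals = {'pos': 0, 'neg': 0}
    vocab = set()
    for label, words in docs:
        for w in words:
            counts[label][w] += 1
            totals[label] += 1
            vocab.add(w)
    return counts, totals, vocab

def classify(counts, totals, vocab, words):
    """Pick the class with the highest Laplace-smoothed log-likelihood."""
    best, best_lp = None, float('-inf')
    for label in counts:
        lp = sum(math.log((counts[label][w] + 1) /
                          (totals[label] + len(vocab))) for w in words)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [('pos', ['great', 'movie']), ('pos', ['loved', 'it']),
        ('neg', ['terrible', 'movie']), ('neg', ['boring', 'plot'])]
model = train_nb(docs)
print(classify(*model, ['great', 'movie']))   # → pos
print(classify(*model, ['boring', 'movie']))  # → neg
```

In the adapted (ANB) setting, EM would re-estimate these counts using unlabeled target-domain reviews, and the FCE-selected generalized grams would replace the raw source vocabulary so that source counts remain informative in the target domain.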
8

Understanding and implementing different modes of pacemaker

Kurcheti, Krishna Kiran January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / John Hatcliff / The heart is a specialized muscle that contracts regularly and continuously, pumping blood to the body and the lungs. The heart's natural pacemaker, the SA node, drives this pumping action by generating a flow of electricity through the heart. These electrical impulses cause the atria and ventricles to contract and thereby pump blood to different parts of the body. Malfunction of the SA node disturbs the heart's rhythm so that it beats fewer than 60 times a minute, a condition known as bradycardia. It can also lead to ventricular arrhythmia, which disrupts the ability of the ventricles to pump blood effectively. This can cause a loss of all blood pressure, leading to cardiac arrest and eventually death. To restore the heart's natural healthy rhythm, an artificial pacemaker is necessary. A pacemaker adapts to the present condition of the heart and responds by either pacing it or simply sensing it: it paces whenever there is a problem in the heart's electrical activity and inhibits pacing when a proper intrinsic beat occurs. A pacemaker can operate in various modes based on the condition of the heart. The ventricles and atria are paced individually in some modes, such as VOO, VVT, VVI, AOO, AAT, and AAI, and paced together in others, such as DVI, DDI, DDD, and DDDR, as the heart requires. The main goals of this report are to explain the various modes, their nomenclature, and their working strategy; to develop pseudocode; and to implement the VOO, AOO, VVI, AAI, VVT, and AAT modes using an academic, dual-chamber pacemaker.
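As an illustration of the mode logic described above, here is one hypothetical decision step of VVI behavior (pace the Ventricle, sense the Ventricle, Inhibit on an intrinsic beat). The 1000 ms lower-rate interval is an assumed parameter for illustration, not a value from the report:

```python
def vvi_step(ms_since_last_ventricular_event,
             lower_rate_interval_ms=1000, sensed_intrinsic_beat=False):
    """One decision step of sketched VVI logic: inhibit on a sensed
    intrinsic beat, pace when the escape interval elapses without one."""
    if sensed_intrinsic_beat:
        return 'inhibit'   # the heart beat on its own; restart the timer
    if ms_since_last_ventricular_event >= lower_rate_interval_ms:
        return 'pace'      # escape interval elapsed with no intrinsic beat
    return 'wait'          # keep sensing until the interval elapses

print(vvi_step(400, sensed_intrinsic_beat=True))  # → inhibit
print(vvi_step(1000))                             # → pace
print(vvi_step(600))                              # → wait
```

The other single-chamber modes vary only the letters' meanings: VOO paces unconditionally (no sensing, no inhibition), while VVT triggers a pace on a sensed beat instead of inhibiting; the atrial modes (AOO, AAI, AAT) apply the same logic to the atrium.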
9

Genetic network parameter estimation using single and multi-objective particle swarm optimization

Morcos, Karim M. January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Sanjoy Das / Stephen M. Welch / Multi-objective optimization problems deal with finding a set of candidate optimal solutions to be presented to the decision maker. In industry, this could be the problem of finding alternative car designs given the usually conflicting objectives of performance, safety, environmental friendliness, ease of maintenance, and price, among others. Despite the significance of this problem, most widely used non-evolutionary algorithms cannot find a set of diverse and nearly optimal solutions due to the huge size of the search space. At the same time, the solution set produced by most currently used evolutionary algorithms lacks diversity. The present study investigates a new optimization method for solving multi-objective problems based on the widely used swarm-intelligence approach, Particle Swarm Optimization (PSO). Compared to other approaches, the proposed algorithm converges relatively fast while maintaining a diverse set of solutions. The investigated algorithm, Partially Informed Fuzzy-Dominance (PIFD) based PSO, uses a dynamic network topology and fuzzy dominance to guide the swarm toward non-dominated solutions. The proposed algorithm has been tested on four benchmark problems and other real-world applications to ensure proper functionality and assess overall performance. The multi-objective gene regulatory network (GRN) problem entails the minimization of the coefficient of variation of modified photothermal units (MPTUs) across multiple sites along with the total sum of similarity background between ecotypes. Results throughout this study show that the investigated algorithm performs well, exhibiting rapid convergence while maintaining diversity.
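PIFD's fuzzy dominance generalizes the crisp Pareto dominance relation that underlies all multi-objective ranking. A minimal sketch of the crisp version and the resulting front filter follows; the candidate objective vectors (performance loss, cost) are made up for illustration:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization): a is no worse
    in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def non_dominated(front):
    """Filter a set of objective vectors down to its Pareto front --
    the diverse candidate set presented to the decision maker."""
    return [p for p in front
            if not any(dominates(q, p) for q in front if q != p)]

# Hypothetical (performance-loss, cost) pairs for candidate designs.
candidates = [(1, 5), (2, 2), (3, 3), (5, 1)]
print(non_dominated(candidates))  # → [(1, 5), (2, 2), (5, 1)]
```

Crisp dominance gives only a yes/no answer; fuzzy dominance, as used in PIFD, instead grades *how strongly* one solution dominates another, which provides a smoother ranking signal for steering the swarm when few solutions crisply dominate each other.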
