291. Evaluation of Two-Dimensional Codes for Digital Information Security in Physical Documents
Chen, Shuai, 17 July 2015
Paper documents are still frequently used and exchanged in daily life, and safely managing confidential paper information such as medical and financial records has become an increasing challenge. If a patient's medical diagnosis is stolen or discarded without shredding, his or her private information is leaked, and some companies and organizations do not pay enough attention to the problem, leaving their customers to suffer the loss. In this thesis, I design a hybrid system that addresses this problem effectively and economically by integrating physical document properties with digital security technology, offering a new approach to processing sensitive paper information. Building on this system, I examine different QR code sizes and versions, compare their attributes and relationships, and determine the best QR code version for a given amount of data in a given printed area. Finally, I implement these findings in the CryptoPaper Word plugin and verify its functionality with several test cases.
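As a rough illustration of the size-versus-version trade-off discussed above, the sketch below (not code from the thesis) picks the smallest QR version whose byte-mode capacity at error-correction level M fits a payload and still prints within a given width; the abridged capacity table comes from the public QR specification, while the 0.33 mm module size and 40 mm width limit are illustrative assumptions.

```python
# Illustrative sketch: choose the smallest QR version that holds n_bytes at
# error-correction level M (byte mode) and still fits a given printed width.
# Capacities abridged from the public QR specification; other values assumed.
CAPACITY_M_BYTES = {1: 14, 2: 26, 5: 84, 10: 213, 20: 666, 40: 2331}

def modules_per_side(version: int) -> int:
    """A version-v QR symbol is (17 + 4*v) modules on each side."""
    return 17 + 4 * version

def pick_version(n_bytes, module_mm=0.33, max_width_mm=40.0):
    """Return (version, printed side length in mm), or None if nothing fits."""
    for version in sorted(CAPACITY_M_BYTES):
        if CAPACITY_M_BYTES[version] < n_bytes:
            continue
        side_mm = modules_per_side(version) * module_mm
        if side_mm <= max_width_mm:
            return version, side_mm
    return None

print(pick_version(200))   # (10, ~18.8 mm) under these assumptions
```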

292. Design and Implementation of a High Performance Network Processor with Dynamic Workload Management
Duggisetty, Padmaja, 23 November 2015
The Internet plays a crucial part in today's world. Whether for personal communication, business transactions, or social networking, the Internet is used everywhere, so the speed of the communication infrastructure plays an important role. As the number of users has grown, network usage has grown with it: data rates ramped up from a few Mb/s to Gb/s in less than a decade, and the network infrastructure needed a major upgrade to support them. Technological advancements have enabled communication links such as optical fibers to support these high bandwidths, but processing speed at the nodes has not kept pace. This created a need for specialized packet-processing devices that can match increasing line rates, which led to the emergence of network processors, devices that are both programmable and flexible. To support the growing number of Internet applications, the single-core network processor has evolved into a multi-core or many-core processor with many cores on a single chip, improving packet-processing speed and hence the performance of a network node. Multi-core network processors cater to high-bandwidth networks by exploiting the inherent packet-level parallelism in network traffic, but they still face intrinsic challenges such as load balancing: to maximize throughput, traffic must be distributed evenly across all the cores.

This thesis describes a multi-core network processor with dynamic workload management. A multi-core network processor that performs multiple applications is designed to act as a test bed for an effective workload management algorithm. The algorithm distributes the workload evenly across all available cores to maximize the performance of the network processor. Runtime statistics from all the cores are collected and updated at run time to decide which application each core should perform, and when an overloaded core is detected, the applications assigned to the cores are reassigned. For testing purposes, we built a flexible and reusable platform on the NetFPGA 10G board, which uses an FPGA-based approach to prototyping network devices. The performance of the workload management algorithm is evaluated by measuring the throughput of the system under varying workloads.
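A minimal sketch of the kind of threshold-based reassignment described above is shown below; the per-core utilization metric, the 0.85 threshold, and the application names are illustrative assumptions rather than the algorithm actually implemented in the thesis.

```python
# Sketch of threshold-based reassignment: when the busiest core exceeds an
# overload threshold, swap its application with that of the least loaded core.
# The metric, threshold, and application names are illustrative assumptions.
OVERLOAD = 0.85

def rebalance(core_load, core_app):
    busiest = max(core_load, key=core_load.get)
    idlest = min(core_load, key=core_load.get)
    if core_load[busiest] > OVERLOAD and busiest != idlest:
        core_app[busiest], core_app[idlest] = core_app[idlest], core_app[busiest]
    return core_app

loads = {"core0": 0.95, "core1": 0.40, "core2": 0.55, "core3": 0.30}
apps = {"core0": "ipv4_forward", "core1": "encrypt", "core2": "nat", "core3": "encrypt"}
print(rebalance(loads, apps))   # the overloaded core0's application moves to idle core3
```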

293. Network Virtualization and Emulation using Docker, OpenvSwitch and Mininet-based Link Emulation
Prabhu, Narendra, 18 December 2020
With the advent of virtualization and artificial intelligence, research on networked systems has progressed substantially. As the technology matures, we expect growth not only in systems research but also in the network-of-systems domain, so it is paramount that we understand and develop methodologies to connect and communicate among the plethora of devices and systems that exist today. One such area is mobile ad hoc and space communication, where a myriad of environmental and physical conditions further complicate the task of networking. Developing and testing such systems is an important step, given the large investment required to build communication arrangements of this scale. We address two important aspects of network emulation in this work. First, we propose a network emulation framework that emulates the functioning of a hierarchical software-defined network; one use case is described using a mobile ad hoc network (MANET) topology within a single system by leveraging contemporary network virtualization technologies. We present various aspects of the network, such as dynamic communication in the software domain, and provide a novel approach that builds on existing emulation techniques. The second part of the thesis presents a dynamic network link emulator, which enables run-time reconfiguration of link properties such as bandwidth, delay, and packet loss for networked systems using simulation software. We characterize the link emulator through tests on a hardware and software testbed. Through this thesis, we aim to make a small yet crucial contribution to the niche area of software-defined networks.
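The dynamic link emulation described above can be approximated with Mininet's traffic-controlled links, as in the sketch below; this is a minimal illustration assuming Mininet is installed and run with root privileges, and the bandwidth, delay, and loss values are illustrative, not figures from the thesis.

```python
# Minimal sketch of dynamic link emulation with Mininet's TCLink.
# Requires Mininet and root privileges; all link parameters are illustrative.
from mininet.net import Mininet
from mininet.link import TCLink

net = Mininet(link=TCLink, controller=None)
h1, h2 = net.addHost('h1'), net.addHost('h2')
link = net.addLink(h1, h2, bw=10, delay='5ms', loss=1)   # initial link profile
net.start()
net.pingAll()

# Re-shape the same link at run time, e.g. to model a degraded MANET/space hop.
link.intf1.config(bw=1, delay='200ms', loss=5)
link.intf2.config(bw=1, delay='200ms', loss=5)
net.pingAll()
net.stop()
```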

294. Rules Based Analysis Engine for Application Layer IDS
Scrobonia, David, 01 May 2017
Web application attack volume, complexity, and costs have risen as people, companies, and entire industries move online. Defenses against malicious activity have traditionally been implemented at the network or host layer. While this helps detect some attacks, it does not provide the granularity to see malicious behavior occurring at the application layer. The AppSensor project, an application-level intrusion detection system (IDS), is an example of a tool that operates in this layer: it monitors users within the application by observing activity in suspicious areas that traditional network-layer tools cannot see. This thesis aims to improve the state of web application security by supporting the development of the AppSensor project. Specifically, it contributes a rules-based analysis engine that provides a new method for determining whether suspicious activity constitutes an attack. The rules-based method aggregates information from multiple sources into a logical rule to identify malicious activity, rather than relying on a single source of information. The engine is designed to offer administrators more flexible configuration and more accurate results than the incumbent analysis engine. Tests indicate that the new engine should not hamper the performance of AppSensor, and use cases highlight how rules can be leveraged for more accurate results.
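To make the rule idea concrete, the generic sketch below (deliberately not AppSensor's actual Java API) flags an attack only when several detection points fire for the same user within a time window; the detection-point names, counts, and window length are illustrative assumptions.

```python
# Generic sketch of a rules-based check: an attack is flagged only when every
# detection point in the rule fires often enough for the same user within a
# time window. Names, thresholds, and the window are illustrative assumptions.
from collections import defaultdict

WINDOW_SECONDS = 60
RULE = {"failed_login": 5, "forced_browsing": 3}   # detection point -> min count

def users_triggering_rule(events, now):
    """events: iterable of (timestamp, user, detection_point) tuples."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, user, point in events:
        if now - ts <= WINDOW_SECONDS:
            counts[user][point] += 1
    return {user for user, c in counts.items()
            if all(c[p] >= n for p, n in RULE.items())}

events = [(t, "alice", "failed_login") for t in range(100, 106)] + \
         [(t, "alice", "forced_browsing") for t in (110, 120, 130)]
print(users_triggering_rule(events, now=140))   # {'alice'} under these settings
```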

295. Internet Infrastructures for Large Scale Emulation with Efficient HW/SW Co-design
Gula, Aiden K., 20 October 2021
Connected systems are becoming more ingrained in our daily lives with the advent of cloud computing, the Internet of Things (IoT), and artificial intelligence. As technology progresses, we expect the number of networked systems to rise along with their complexity. As these systems become more intricate, it becomes paramount to understand their interactions and nuances. In particular, mobile ad hoc networks (MANETs) and swarm communication systems exhibit added complexity due to a multitude of environmental and physical conditions, and testing these types of systems is challenging and incurs high engineering and deployment costs. In this work, we propose a scalable MANET emulation framework using virtualized Internet infrastructures that generalizes to an assortment of application spaces with diverse attributes. We then quantify the architecture using various evaluation techniques to determine both feasibility and scalability. Finally, we develop a hardware offload engine for virtualized network systems that builds upon recent work in the field.

296. Quantifying Parkinson's Disease Symptoms Using Mobile Devices
Aylward, Charles R., 01 December 2016
Current assessments for evaluating the progression of Parkinson’s Disease are largely qualitative and based on small sets of data obtained from occasional doctor-patient interactions. There is a clinical need to improve the techniques used for mitigating common Parkinson’s Disease symptoms. Available data sets for researching the disease are minimal, hindering advancement toward understanding the underlying causes and effectiveness of treatment and therapies. Mobile devices present an opportunity to continuously monitor Parkinson’s Disease patients and collect important information regarding the severity of symptoms. The evolution of digital technology has opened doors for clinical research to extend beyond the clinic by incorporating complex sensors in commonly used devices. Leveraging these sensors to quantify characteristic Parkinson’s Disease symptoms may drastically improve patient care and the reliability of symptom assessment.
The goal of this project is to design and develop a system for measuring and analyzing the cardinal symptoms of Parkinson’s Disease using mobile devices. An application for the iPhone and Apple Watch is developed, using the devices' sensors to collect data while the patient performs motor tasks. Assessments for tremor, bradykinesia, and postural instability are implemented to mimic the Unified Parkinson's Disease Rating Scale (UPDRS) evaluations normally performed by a neurologist. The application connects to a cloud-based server to transfer the collected data for remote access and analysis. Example MATLAB analyses demonstrate potential approaches for extracting meaningful measures that can be used to monitor the progression of Parkinson’s Disease and the effectiveness of treatment and therapies. High-level verification testing shows the general efficacy of the assessment tasks. The system design lays the groundwork for a mobile-device-based assessment tool that objectively measures Parkinson’s Disease symptoms.
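As an illustration of the kind of analysis such data enables, the sketch below estimates the fraction of accelerometer signal power falling in a parkinsonian tremor band; the 100 Hz sampling rate and the 4-6 Hz band are common illustrative values rather than parameters taken from the thesis, and the code is a generic Python sketch, not the project's MATLAB analysis.

```python
# Sketch of tremor-band power estimation from a wrist accelerometer signal.
# The 100 Hz rate and 4-6 Hz band are illustrative, not values from the thesis.
import numpy as np

FS = 100.0   # samples per second (assumed)

def tremor_band_power(accel, band=(4.0, 6.0)):
    """Fraction of signal power inside the tremor band."""
    accel = accel - np.mean(accel)                 # remove DC/gravity offset
    spectrum = np.abs(np.fft.rfft(accel)) ** 2
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / FS)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()

t = np.arange(0, 10, 1.0 / FS)
simulated = 0.3 * np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(t.size)
print(tremor_band_power(simulated))   # close to 1 for a 5 Hz-dominated signal
```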

297. Relationship Management Communications by NHL Teams on Twitter
Baker, Kelsey M., 29 May 2019
The sports industry is massive, bolstered by its relationship with media. A recent development in the sports industry is the advent of social media, which offers the potential for two-way communication between sports organizations and their relevant stakeholders. Relationship management theory helps cultivate an understanding of social media as a vehicle for value creation for an organization and its stakeholders. This thesis is a content analysis of relationship communications strategies on Twitter using the accounts of five National Hockey League (NHL) teams.

This study builds upon existing literature by identifying the stakeholder groups targeted on Twitter by NHL teams, defining subcategories in relationship management communications, and comparing the strategies and tactics used among five NHL teams. Results indicate that players are the most common internal stakeholder identified within this study, while sponsors are the most popular external stakeholder. Interactivity is not a major driver of social media content, but when teams do engage in some form of interaction, they most often do so by mentioning a stakeholder or stakeholder group within a tweet. Among relationship management communications strategies, NHL Twitter accounts most often provide announcements directly related to team performance. Engagement metrics show that team promotions receive the greatest number of replies and retweets. Four of the five NHL teams in this study are very similar in their use of relationship management communications strategies and identification of relevant stakeholders; the San Jose Sharks account differs the most, placing the greatest emphasis on fan interaction and brand personification. Overall, this thesis contributes to knowledge about social media in the sports industry by providing an in-depth look at the stakeholders and communications strategies identified among NHL teams on Twitter.

298. Blind Front-end Processing of Dynamic Multi-channel Wideband Signals
Jackson, Kevin, 01 May 2016
In wireless digital communications, the sender and receiver typically know the modulation scheme with which they will be communicating. Automatic modulation identification is the ability to identify the modulation used in a communication system with little to no prior knowledge of the modulation scheme. Many modulation identification techniques rest on strong assumptions, including that the input signal has been brought down to baseband, that the carrier frequency is known, and that the signal is narrowband (i.e., neighboring signals in the wideband are excluded). This work provides the blind processing of an arbitrary wideband signal needed to satisfy such assumptions. The challenges for such a front end, or pre-processor, include detecting signals that can appear at any frequency, with any bandwidth, at any given time, and for any arbitrary duration. The system takes as input a wideband signal containing a random number of sub-signals, each turning on and off at random times and occupying random locations in the frequency domain. Its output is a collection of signals, one per sub-signal, brought down to baseband, isolated in the frequency and time domains, nominally sampled, and accompanied by estimates of key parameters.
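The sketch below illustrates two of the front-end steps described above, crude energy detection of active sub-bands followed by mixing a detected sub-band down to baseband for filtering and decimation; the sample rate, detection threshold, and filter length are illustrative assumptions, not parameters from the thesis.

```python
# Sketch of two front-end steps: crude energy detection of occupied bands and
# down-conversion of one detected sub-band to baseband with filtering and
# decimation. Sample rate, threshold, and filter length are illustrative.
import numpy as np
from scipy import signal

FS = 1_000_000.0   # wideband sample rate in Hz (assumed)

def detect_active_bins(x, thresh_db=10):
    """Return frequencies whose PSD sits thresh_db above the median noise floor."""
    f, psd = signal.welch(x, fs=FS, nperseg=4096)
    return f[psd > np.median(psd) * 10 ** (thresh_db / 10)]

def isolate_subband(x, center_hz, bw_hz, decim=10):
    """Mix one sub-signal to baseband, low-pass it, and decimate."""
    n = np.arange(len(x))
    baseband = x * np.exp(-2j * np.pi * center_hz * n / FS)
    taps = signal.firwin(129, bw_hz / 2, fs=FS)
    return signal.lfilter(taps, 1.0, baseband)[::decim]

# Example: a 100 kHz tone buried in noise is detected and brought to baseband.
t = np.arange(int(0.01 * FS)) / FS
x = np.cos(2 * np.pi * 100e3 * t) + 0.1 * np.random.randn(t.size)
print(detect_active_bins(x)[:3])
y = isolate_subband(x, center_hz=100e3, bw_hz=20e3)
```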

299. Analysis of Relay-based Cellular Systems
Negi, Ansuya, 01 January 2006
Relays can be used in cellular systems to increase coverage as well as to reduce the total power consumed by mobiles in a cell. The latter benefit is particularly useful for mobiles operating on a depleted battery. The relay can be a mobile, a car, or any other device with the appropriate communication capabilities. In this thesis we analyze the impact of using relays under different situations. We first consider the problem of reducing the total power consumed in the system by employing relays intelligently. We find that in a simulated, fully random mobile cellular network for CDMA (Code Division Multiple Access), significant energy savings, ranging from 1.76 dB to 8.45 dB, are possible.

In addition to reducing power needs, relays can increase the coverage area of a cell by enabling mobiles located in dead spots to place relayed calls. We note that the use of relays can increase the useful service area by about 10% in realistic scenarios. We observe that areas with heavy building density have a greater need for relays than areas with low building density, yet the chance of finding relays is greater in low-density areas. Because having more available idle nodes helps in choosing relays, we conclude that, unlike present-day implementations of cellular networks, the base station should admit more mobiles (beyond the capacity of the cell) even if they are not placing calls, since they can serve as relays.

One constraint of using relays is the potential to add interference in the same cell and in neighboring cells, particularly if the relays are not under power control. Based on our analysis, we conclude that in interference-limited systems such as CDMA, relays must be under power control; otherwise total capacity is reduced by the creation of more dead spots. Thus, we believe that either the base station should be responsible for allocating relays or the relays should be provided with enough intelligence to perform power control of the downlink. Finally, we show how the utility of data services can be increased by the use of relays.
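A back-of-the-envelope sketch of why relaying can save transmit power is given below; the simple distance-power-law model and the path-loss exponent of 4 are illustrative assumptions, not the CDMA simulation model used in the thesis.

```python
# Toy comparison of direct versus two-hop transmit power under a simple
# distance^alpha path-loss model. The model and alpha = 4 are illustrative
# assumptions, not the simulation used in the thesis.
import math

ALPHA = 4.0   # assumed path-loss exponent

def tx_power(distance):
    """Relative power needed to reach `distance` (unit power at distance 1)."""
    return distance ** ALPHA

def relay_gain_db(d_direct, d_to_relay, d_relay_to_bs):
    direct = tx_power(d_direct)
    relayed = tx_power(d_to_relay) + tx_power(d_relay_to_bs)
    return 10 * math.log10(direct / relayed)

# Mobile 1 km from the base station with a relay roughly halfway:
print(relay_gain_db(1.0, 0.55, 0.5))   # several dB of savings in this toy model
```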

300. Accelerated Iterative Algorithms with Asynchronous Accumulative Updates on a Heterogeneous Cluster
Gubbi Virupaksha, Sandesh, 23 March 2016
In recent years, with the exponential growth of web-based applications, the amount of data generated has increased tremendously. Quick and accurate analysis of this 'big data' is indispensable for making better business decisions and reducing operational costs. The challenges modern data centers face in processing big data are manifold: keeping the pace of processing up with increased data volume and velocity, dealing with system scalability, and reducing energy costs. Today's data centers employ a variety of distributed computing frameworks running on clusters of commodity hardware, built around general-purpose processors, to process big data. Although these frameworks have improved big-data processing speed, there is still an opportunity to increase it further. FPGAs, which are designed for computationally intensive tasks, are promising processing elements that can increase processing speed. In this thesis, we discuss how FPGAs can be integrated into a cluster of general-purpose processors running iterative algorithms to obtain high performance.

We designed a heterogeneous cluster composed of FPGAs and CPUs and ran various benchmarks, such as PageRank, Katz, and Connected Components, to measure its performance. Performance improvement in terms of execution time was evaluated against a homogeneous cluster of general-purpose processors and a homogeneous cluster of FPGAs. We built multiple four-node heterogeneous clusters with different configurations by varying the number of CPUs and FPGAs.

We studied the effects of load balancing between CPUs and FPGAs. On a 2 CPU + 2 FPGA cluster configuration with an unbalanced load ratio, we obtained speedups of 20X, 11.5X, and 2X for the PageRank, Katz, and Connected Components benchmarks, respectively, against a 4-node homogeneous CPU cluster. We also studied the effect of input graph partitioning and showed that a Multilevel-KL partitioned input graph yields improvements of 11%, 26%, and 9% for the Katz, PageRank, and Connected Components benchmarks, respectively, over a randomly partitioned graph on a 2 CPU + 2 FPGA cluster.
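For readers unfamiliar with accumulative updates, the sketch below shows a delta-based PageRank loop of the kind such asynchronous engines accelerate, processing vertices in arbitrary order without a global barrier; the graph, damping factor, and tolerance are illustrative, and this is a plain software sketch, not the FPGA/CPU implementation from the thesis.

```python
# Sketch of accumulative (delta-based) PageRank: each vertex folds incoming
# deltas into its rank and forwards d * delta / out_degree to its neighbors,
# in any order and with no global barrier. All values here are illustrative.
D = 0.85   # damping factor

def accumulative_pagerank(out_edges, tol=1e-6):
    rank = {v: 0.0 for v in out_edges}
    delta = {v: 1.0 - D for v in out_edges}      # initial mass at every vertex
    active = set(out_edges)
    while active:
        v = active.pop()                         # arbitrary order (asynchronous)
        change, delta[v] = delta[v], 0.0
        rank[v] += change
        share = D * change / max(len(out_edges[v]), 1)
        for u in out_edges[v]:
            delta[u] += share
            if delta[u] > tol:
                active.add(u)
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(accumulative_pagerank(graph))
```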