151.
Multi-Temporal-Spectral Land Cover Classification for Remote Sensing Imagery Using Deep Learning
Sustainability research of the environment depends on accurate land cover information over large areas. Even with the increased number of satellite systems and sensors acquiring data with improved spectral, spatial, radiometric and temporal characteristics, and with the new data distribution policy, most existing global land cover datasets were still derived from a single-date multi-spectral remotely sensed image using pixel-based classifiers with low accuracy. The bottleneck to improving that accuracy is the development of accurate and effective image classification techniques. By incorporating the spatial and multi-temporal information of remote sensing images alongside their multi-spectral information, and by modeling their spatial and temporal interdependence, I propose three deep network systems tailored for medium-resolution remote sensing data. On a test site in the Florida Everglades area (771 square kilometers), the proposed deep systems achieve significant improvements in classification accuracy over most existing pixel-based classifiers. The proposed patch-based recurrent neural network (PB-RNN) system, pixel-based recurrent neural network system and patch-based convolutional neural network system achieve 97.21%, 87.65% and 89.26% classification accuracy respectively, while a pixel-based single-image neural network (NN) system achieves only 64.74%. By combining the proposed deep networks with the huge collection of medium-resolution remote sensing data, I believe that much more accurate land cover datasets can be produced over large areas. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester 2018. / July 20, 2018. / Includes bibliographical references. / Xiuwen Liu, Professor Directing Dissertation; Xiaojun Yang, University Representative; Gary Tyson, Committee Member; Peixiang Zhao, Committee Member.
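To make the patch-based multi-temporal input concrete, here is a minimal sketch of how per-pixel patch sequences might be assembled from a co-registered image stack before being fed to a recurrent classifier. The array shapes, patch size, and function name are illustrative assumptions, not the dissertation's actual code.

```python
import numpy as np

def extract_patch_sequences(image_stack, patch_size=5):
    """Turn a co-registered multi-temporal stack into per-pixel patch sequences.

    image_stack: array of shape (T, H, W, B) -- T dates, H x W pixels, B bands.
    Returns an array of shape (H', W', T, patch_size, patch_size, B), where
    each pixel is represented by the sequence of patches centered on it
    across acquisition dates.
    """
    t, h, w, b = image_stack.shape
    r = patch_size // 2
    out = np.empty((h - 2 * r, w - 2 * r, t, patch_size, patch_size, b),
                   dtype=image_stack.dtype)
    for i in range(r, h - r):
        for j in range(r, w - r):
            # One spatial patch per date: a recurrent classifier consumes
            # these in temporal order, keeping spatial and temporal context.
            out[i - r, j - r] = image_stack[:, i - r:i + r + 1, j - r:j + r + 1, :]
    return out

# Example: 4 dates of a 6-band image; each pixel becomes a sequence of
# four 5x5x6 patches.
stack = np.random.rand(4, 64, 64, 6).astype(np.float32)
print(extract_patch_sequences(stack).shape)  # (60, 60, 4, 5, 5, 6)
```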
152.
Sensor Systems and Signal Processing Algorithms for Wireless Applications
The demand for high performance wireless networks and systems has grown rapidly over the last decade. This dissertation addresses three systems designed to improve the efficiency, reliability and security of wireless systems. To improve efficiency and reliability, we propose two algorithms, CSIFit and CSIApx, to compress the Channel State Information (CSI) of Wi-Fi networks with Orthogonal Frequency Division Multiplexing (OFDM) and Multiple Input Multiple Output (MIMO). We evaluated these algorithms with both experimental and synthesized CSI data. Our work on CSIApx confirmed that we can achieve very good compression ratios with very little loss of accuracy, at a fraction of the complexity of current state-of-the-art compression methods. The second system is a sensor-based application that reliably detects falls inside homes. An automatic fall detection system has tremendous value to the well-being of seniors living alone. We design and implement MultiSense, a novel fall detection system with the following desirable features. First, it does not require the user to wear any device and is therefore convenient for seniors. Second, it has been tested in typical settings, including living rooms and bathrooms, and has shown very good accuracy. Third, it is built with inexpensive components, with an expected hardware cost of around $150 to cover a typical room. MultiSense does not require any training data and is less invasive than similar systems. Our evaluation showed that MultiSense produced no false negatives, i.e., it detected every fall, while producing no false positives in a daily-use test. We therefore believe MultiSense can accurately detect human falls and can be extremely helpful to seniors living alone. Lastly, TBAS is a spoof detection method designed to improve the security of wireless networks. TBAS is based on two facts: 1) different transmitting locations likely result in different wireless channels, and 2) the drift in channel state information within a short time interval should be bounded. We implemented TBAS on Microsoft's SORA platform as well as on commodity wireless cards, and tested its performance in typical Wi-Fi environments with different levels of channel mobility. Our results show that TBAS is very accurate on 3-by-2 MIMO systems and above: it has a very low false positive ratio, where a false positive occurs when two packets from the same user are misclassified as coming from different users, while also maintaining a very low false negative ratio of 0.1%, where a false negative occurs when two packets from different users are misclassified as coming from the same user. We believe our experimental findings can serve as a guideline for future systems that deploy TBAS. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester 2018. / July 12, 2018. / channel state, networks, security, wireless / Includes bibliographical references. / Zhenghao Zhang, Professor Directing Dissertation; Ming Yu, University Representative; Piyush Kumar, Committee Member; Xiuwen Liu, Committee Member.
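The CSI-drift idea behind TBAS can be illustrated with a small, hedged sketch: under the stated assumption that one transmitter's channel drifts only slightly within a short interval, a bounded-distance test separates same-user packets from spoofed ones. The normalization, distance metric, and threshold below are illustrative choices, not the dissertation's implementation.

```python
import numpy as np

def tbas_same_transmitter(csi_a, csi_b, threshold=0.1):
    """TBAS-style check on CSI from two packets received close in time.

    csi_a, csi_b: complex arrays of shape (n_tx, n_rx, n_subcarriers).
    A different transmit location almost surely yields a very different
    channel, so a large drift flags a possible spoof.
    """
    # Normalize out transmit power and a global phase so only the channel
    # "shape" is compared (metric and threshold are illustrative).
    a = csi_a / np.linalg.norm(csi_a)
    b = csi_b / np.linalg.norm(csi_b)
    b = b * np.exp(-1j * np.angle(np.vdot(a, b)))
    drift = np.linalg.norm(a - b)
    return drift < threshold  # True: likely same user; False: possible spoof

# Toy usage: a 3x2 MIMO link with 56 subcarriers; the second packet is a
# slightly noisy copy (same user) versus an independent channel (spoof).
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 2, 56)) + 1j * rng.normal(size=(3, 2, 56))
print(tbas_same_transmitter(h, h + 0.01 * rng.normal(size=h.shape)))   # True
print(tbas_same_transmitter(h, rng.normal(size=h.shape)
                            + 1j * rng.normal(size=h.shape)))          # False
```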
153.
Design and Evaluation of Networking Techniques for the Next Generation of Interconnection Networks
High performance computing (HPC) and data center systems have undergone rapid growth in recent years. To meet the current and future demands of compute- and data-intensive applications, these systems require the integration of a large number of processors, storage and I/O devices through high-speed interconnection networks. In massively scaled HPC systems and data centers, the performance of the interconnect is a major defining factor for the performance of the entire system. Interconnect performance depends on a variety of factors, including but not limited to topological characteristics, routing schemes, resource management techniques and technological constraints. In this dissertation, I explore several approaches to improving the performance of large-scale networks. First, I investigate the topological properties of a network and their effect on the performance of the system under different workloads. Based on a detailed analysis of graph structures, I identify a well-known graph as a potential topology of choice for the next generation of large-scale networks. Second, I study the behavior of adaptive routing on the current generation of supercomputers based on the Dragonfly topology and highlight the fact that the performance of adaptive routing on such networks can be enhanced by using detailed information about the communication pattern. I develop a novel approach for identifying the traffic pattern and then use this information to improve the performance of adaptive routing on Dragonfly networks. Finally, I investigate the possible advantages of utilizing emerging software defined networking (SDN) technology in the high performance computing domain. My findings show that by leveraging SDN, we can achieve near-optimal rate allocation for communication patterns in an HPC cluster, which can remove the need for expensive adaptive routing schemes and simplify the control plane on the next generation of supercomputers. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester 2018. / April 18, 2018. / high performance computing, Interconnection networks, performance evaluation, Routing design, software defined networking, Topology design / Includes bibliographical references. / Xin Yuan, Professor Directing Dissertation; Fengfeng Ke, University Representative; Ashok Srinivasan, Committee Member; Gary Tyson, Committee Member.
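As a rough illustration of pattern-aware adaptive routing (not the dissertation's actual algorithm), the sketch below biases a UGAL-style minimal/non-minimal decision by the identified traffic pattern. The queue model, weighting, and bias rule are assumptions for illustration only.

```python
import random

def choose_route(min_queue, nonmin_queue, pattern):
    """Pick minimal vs. non-minimal (Valiant-style) routing for one packet.

    min_queue / nonmin_queue: occupancy of the candidate output queues.
    pattern: 'uniform' or 'adversarial', as identified from traffic history.
    """
    # Non-minimal paths are roughly twice as long, so their queue is
    # weighted by 2; a detected adversarial pattern biases the decision
    # toward spreading load non-minimally before queues saturate.
    bias = 0 if pattern == 'uniform' else 4
    return 'minimal' if min_queue + bias <= 2 * nonmin_queue else 'non-minimal'

random.seed(1)
for pattern in ('uniform', 'adversarial'):
    picks = [choose_route(random.randint(0, 10), random.randint(0, 5), pattern)
             for _ in range(1000)]
    # The adversarial bias yields noticeably more non-minimal choices.
    print(pattern, picks.count('non-minimal'))
```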
154.
A Comprehensive Study of Portability Bug Characteristics in Desktop and Android Applications
Since 2008, the Android ecosystem has been tremendously popular with consumers, developers, and manufacturers due to the open nature of the operating system and its compatibility with and availability on a range of devices. This, however, comes at a cost. The variety of available devices and the speed of evolution of the Android system itself add layers of fragmentation to the ecosystem that developers must navigate. Yet this phenomenon is not unique to the Android ecosystem, impacting desktop applications like Apache Tomcat and Google Chrome as well. As fragmentation of a system grows, so does the burden on developers to produce software that can execute on a wide variety of potential device, environment, and system combinations, while in reality developers cannot anticipate every possible scenario. This thesis presents the first empirical study characterizing portability bugs in both desktop and Android applications. Specifically, we examined 228 randomly selected bugs from 18 desktop and Android applications for the common root causes, manifestation patterns, and fix strategies used to combat portability bugs. Our study reveals several commonalities among the bugs and platforms, including: (1) 92.14% of all bugs examined are caused by an interaction with a single dependency, (2) 53.13% of all bugs examined are caused by an interaction with the system, and (3) 33.19% of all bugs examined are fixed by adding a direct or indirect check against the dependency causing the bug. These results provide guidance for techniques and strategies to help developers and researchers identify and fix portability bugs. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Summer Semester 2018. / July 20, 2018. / characteristic study, portability bugs / Includes bibliographical references. / Adrian Nistor, Professor Directing Thesis; Sonia Haiduc, Committee Member; David Whalley, Committee Member.
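The most common fix strategy identified, adding a direct check against the offending dependency, looks roughly like the following Python analogue. The locking scenario and module choices are hypothetical examples, not code from the applications studied.

```python
import sys

def acquire_lock(path):
    """Guard a platform-dependent dependency with a direct check."""
    if sys.platform.startswith('win'):
        # msvcrt file locking exists only on Windows.
        import msvcrt
        handle = open(path, 'a')
        msvcrt.locking(handle.fileno(), msvcrt.LK_NBLCK, 1)
        return handle
    else:
        # fcntl is the POSIX counterpart and is unavailable on Windows;
        # without the check above, importing it there raises ImportError.
        import fcntl
        handle = open(path, 'a')
        fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return handle
```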
155.
DAGDA: Decoupling Address Generation from Loads and Stores
DAGDA exposes to the compiler some of the hidden operations that the hardware performs for loads and stores, in order to save energy and increase performance. We decouple the micro-operations for loads and stores into two operations: the first, the "prepare to access memory" instruction, or "pam", checks whether a line is resident in the level-one data cache (L1 DC) and, if it is, determines its way in the L1 DC data array. The second operation performs the actual data access. This split both saves energy through compiler optimization techniques and improves performance, because "pam" operations are a natural way of prefetching data into the L1 DC. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Summer Semester 2018. / May 4, 2018. / Includes bibliographical references. / David B. Whalley, Professor Directing Thesis; Xiuwen Liu, Committee Member; Gary Tyson, Committee Member.
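The division of labor between "pam" and the data access can be sketched with a toy cache model; the class, geometry, and fill policy below are illustrative assumptions, not the thesis's microarchitecture.

```python
class ToyL1DataCache:
    def __init__(self, num_sets=64, num_ways=4, line_bytes=32):
        self.num_sets, self.num_ways, self.line_bytes = num_sets, num_ways, line_bytes
        self.tags = [[None] * num_ways for _ in range(num_sets)]

    def pam(self, addr):
        """'Prepare to access memory': tag lookup only. Returns the way
        holding the line, or None on a miss (a natural prefetch hook)."""
        set_idx = (addr // self.line_bytes) % self.num_sets
        tag = addr // (self.line_bytes * self.num_sets)
        for way, t in enumerate(self.tags[set_idx]):
            if t == tag:
                return way
        self.tags[set_idx][0] = tag   # simplistic fill on miss
        return None

    def access(self, addr, way):
        """Actual data access: touches a single, known way. The energy
        saving comes from skipping the parallel tag/way comparisons."""
        set_idx = (addr // self.line_bytes) % self.num_sets
        return ('read', set_idx, way)

cache = ToyL1DataCache()
way = cache.pam(0x1000)      # the compiler hoists this ahead of the load
if way is None:              # miss: the line starts filling ("prefetch")
    way = cache.pam(0x1000)  # by reload time the line is resident
print(cache.access(0x1000, way))
```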
156.
Deep: Dependency Elimination Using Early Predictions
Conditional branches have traditionally been a performance bottleneck for most processors. The high frequency of branches in code, coupled with expensive pipeline flushes on mispredictions, makes branches expensive instructions worth optimizing. Conditional branches have historically inhibited compilers from applying optimizations across basic block boundaries due to the forks in control flow that they introduce. This thesis describes a systematic way of generating paths (traces) of branch-free code at compile time by decomposing branching and verification operations to eliminate the dependence of a branch on its preceding compare instruction. This explicit decomposition allows us to move comparison instructions past branches and to merge pre- and post-branch code. The paths generated at compile time can provide additional opportunities for conventional optimizations such as common subexpression elimination, dead assignment elimination and instruction selection. Moreover, this thesis describes a way of coalescing multiple branch instructions within innermost loops to produce longer basic blocks that offer further optimization opportunities. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Summer Semester 2018. / July 20, 2018. / Includes bibliographical references. / David Whalley, Professor Directing Thesis; Xin Yuan, Committee Member; Weikuan Yu, Committee Member.
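A minimal sketch of compile-time trace generation over a control-flow graph is shown below; the CFG encoding is a hypothetical simplification of the thesis's intermediate representation.

```python
def traces(cfg, entry):
    """Enumerate branch-free paths from entry to blocks with no successors.

    cfg maps a basic-block label to its successor labels; a block with two
    successors is a conditional branch, the fork that normally stops
    cross-block optimization.
    """
    paths, stack = [], [[entry]]
    while stack:
        path = stack.pop()
        succs = cfg.get(path[-1], [])
        if not succs:
            paths.append(path)       # a complete straight-line trace
        for s in succs:
            if s not in path:        # ignore back edges for this sketch
                stack.append(path + [s])
    return paths

# Diamond CFG: B1 branches to B2/B3, both rejoin at B4. Each resulting
# trace can be optimized as one extended block, with the decomposed
# compare hoisted early and the branch verified late.
cfg = {'B1': ['B2', 'B3'], 'B2': ['B4'], 'B3': ['B4'], 'B4': []}
for t in traces(cfg, 'B1'):
    print(' -> '.join(t))
```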
157.
staDFA: An Efficient Subexpression Matching Method
The main task of a lexical analyzer such as Lex [20], Flex [26] or RE/Flex [34] is to tokenize a given input file within reasonable time and with limited storage requirements. Hence, most lexical analyzers use Deterministic Finite Automata (DFA) to tokenize input, ensuring that the running time of the lexical analyzer is linear (or close to linear) in the size of the input. However, DFA constructed from Regular Expressions (RE) are inadequate to indicate the positions and/or extents, within a matching string, of a given subexpression of the regular expression. This means that all implementations of trailing contexts in DFA-based lexical analyzers, including Lex, Flex and RE/Flex, produce incorrect results. For a string in the input (the lexeme) that matches a token's regular expression pattern, it is not always possible to tell the position of the part of the lexeme that matches a subexpression of the regular expression. For example, the string abba matches the pattern a b*/b a, but the position of the trailing context b a of the pattern in the string abba cannot be determined by a DFA-based matcher in the aforementioned lexical analyzers. There are algorithms based on Nondeterministic Finite Automata (NFA) that match subexpressions accurately. However, these algorithms are costly to execute, relying on backtracking or breadth-first search and running in non-linear time, with polynomial or even exponential worst-case complexity. A tagged DFA-based approach (TDFA) was pioneered by Ville Laurikari [15] to efficiently match subexpressions. However, TDFA are not perfectly suitable for lexical analyzers, since tagged DFA edges require sets of memory updates, which hampers the performance of DFA edge traversals when matching input. I introduce a new DFA-based algorithm for efficient subexpression matching that performs memory updates in DFA states: the Store-Transfer-Accept Deterministic Finite Automaton (staDFA). In my proposed algorithm, the subexpression matching positions and/or extents are stored in a Marker Position Store (MPS). The MPS is updated while the input is tokenized to provide the positions/extents of the sub-match. Compression techniques for DFA, such as Hopcroft's method [14] and default transitions [18, 19], can be applied to staDFA; for instance, this thesis provides a modified Hopcroft's method for the minimization of staDFA. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Summer Semester 2018. / July 20, 2018. / Includes bibliographical references. / Robert A. van Engelen, Professor Directing Thesis; David Whalley, Committee Member; An-I Andy Wang, Committee Member.
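A toy, hand-built illustration of the staDFA idea for the pattern a b*/b a follows. The states and marker actions are constructed by hand for this one pattern; they are not the thesis's general staDFA construction algorithm.

```python
def run_stadfa(text):
    """Match a b*/b a while recording where the trailing context starts."""
    # transitions: (state, symbol) -> (next_state, marker_action).
    # 'store' writes the current input position into the Marker Position
    # Store (MPS); repeated stores transfer the marker forward so it ends
    # at the last 'b', which is where the trailing context 'ba' begins --
    # exactly the position a plain DFA cannot report.
    delta = {
        ('S0', 'a'): ('S1', None),
        ('S1', 'b'): ('S2', 'store'),
        ('S2', 'b'): ('S2', 'store'),
        ('S2', 'a'): ('S3', None),    # S3 is the accepting state
    }
    state, mps = 'S0', {}
    for pos, ch in enumerate(text):
        state, action = delta[(state, ch)]
        if action == 'store':
            mps['trailing_context'] = pos
    return state == 'S3', mps

ok, mps = run_stadfa('abba')
print(ok, mps)  # True {'trailing_context': 2} -- lexeme 'ab', context 'ba'
```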
158.
Securing Systems by Vulnerability Mitigation and Adaptive Live Patching
The number and variety of digital devices are increasing tremendously in today's world. However, as code size soars, hidden vulnerabilities become a major threat to user security and privacy. Vulnerability mitigation, detection, and patch generation are key protection mechanisms against attacks and exploits. In this dissertation, we first explore the limitations of existing solutions. For vulnerability mitigation in particular, currently deployed address space layout randomization (ASLR) has the drawbacks that the process is randomized only once and that each segment is moved as a whole. This design makes a program particularly vulnerable to information leaks. For vulnerability detection, many existing solutions can only detect the symptoms of attacks instead of locating the underlying exploited vulnerabilities, since the manifestation of an attack does not always coincide with the exploited vulnerability. For patch generation targeting a large number of different devices, current schemes fail to meet the requirements of timeliness and adaptiveness. To tackle these limitations, this dissertation introduces the design and implementation of three countermeasures. First, we present Remix, an effective and efficient on-demand live randomization system, which randomizes the basic blocks of each function during runtime to provide higher entropy and stronger protection against code reuse attacks. Second, we propose Ravel, an architectural approach to pinpointing vulnerabilities from attacks. It leverages a record & replay mechanism to reproduce attacks in a lab environment and uses the program's memory access patterns to locate the targeted vulnerabilities, which may be of a variety of types. Lastly, we present KARMA, a multi-level live patching framework for Android kernels with minor performance overhead. The patches are written in a high-level memory-safe language and can be adapted to thousands of different Android kernels. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester 2018. / January 23, 2018. / Android, ASLR, Patch, Randomization, System Security, Vulnerability / Includes bibliographical references. / Zhi Wang, Professor Directing Dissertation; Ming Yu, University Representative; Xiuwen Liu, Committee Member; An-I Andy Wang, Committee Member.
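The invariant that makes live basic-block randomization safe, namely that control transfers resolve through targets rather than fixed layout, can be illustrated with a heavily simplified toy. Real Remix rewrites machine code and fixes up branch targets; this sketch only shuffles an interpreted block list.

```python
import random

# A tiny "function" as labeled basic blocks; each block returns the label
# of its successor (None terminates). Layout order is irrelevant to
# semantics, which is what on-demand re-randomization exploits.
blocks = {
    'entry': lambda env: env.update(x=env['x'] + 1) or 'check',
    'check': lambda env: 'done' if env['x'] >= 3 else 'entry',
    'done':  lambda env: None,
}

def run(blocks, start, env):
    label = start
    while label is not None:
        label = blocks[label](env)
    return env

def remix(blocks):
    """On-demand live randomization: shuffle block placement (dict order
    stands in for memory layout); labels keep the semantics intact while
    raising the entropy an attacker must overcome."""
    items = list(blocks.items())
    random.shuffle(items)
    return dict(items)

env = {'x': 0}
for _ in range(3):               # periodically re-randomize at "runtime"
    blocks = remix(blocks)
print(run(blocks, 'entry', env))  # {'x': 3} regardless of layout
```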
159.
Modeling and Comparison of Large-Scale Interconnect Designs
Modern day high performance computing (HPC) clusters and data centers require a large number of computing and storage elements to be interconnected. Interconnect performance is considered a major bottleneck to the overall performance of such systems. Due to the massive scale of the network, interconnect designs are often evaluated and compared through models. My research focuses on developing scalable yet accurate methods to model large-scale interconnects and their architectural components. Such models are applied to investigate the performance characteristics of different components of interconnect systems, including the topology, the routing scheme, and the network control/management scheme. Then, through multiple experimental studies, I apply the newly developed modeling techniques to evaluate the performance of novel interconnect technologies and thus validate the case for their adoption in current and future generations of interconnected systems. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester 2018. / April 18, 2018. / data center network, high performance computing, Interconnection, performance modeling, routing, topology / Includes bibliographical references. / Xin Yuan, Professor Directing Dissertation; Fengfeng Ke, University Representative; Sudhir Aggarwal, Committee Member; Robert van Engelen, Committee Member; Scott Pakin, Committee Member.
160.
Appearance-Based Classification and Recognition Using Spectral Histogram Representations and Hierarchical Learning for OCA
This thesis is composed of two parts. Part one is on appearance-based classification and recognition using spectral histogram representations. We present a unified method for appearance-based applications, including texture classification, 2D object recognition, and 3D object recognition, using spectral histogram representations. Based on a generative process, the representation is derived by partitioning the frequency domain into small disjoint regions and assuming independence among the regions. This gives rise to a set of filters and a representation consisting of the marginal distributions of those filters' responses. We provide generic evidence for its effectiveness in characterizing object appearance through statistical sampling, and in classification by visualizing images in the spectral histogram space. We use a multilayer perceptron as the classifier and propose a filter selection algorithm that maximizes performance over the training samples. A distinct advantage of the representation is that it can be used effectively for different classification and recognition tasks. The claim is supported by experiments and comparisons in texture classification, face recognition, and appearance-based 3D object recognition. The marked improvement over existing methods justifies the effectiveness of the generative process and the derived spectral histogram representation. Part two is on hierarchical learning for Optimal Component Analysis (OCA). Optimization problems on manifolds such as the Grassmann and Stiefel manifolds have recently been a subject of active research. However, the learning process can be slow when the dimension of the data is large. As a learning example on the Grassmann manifold, OCA provides a general subspace formulation, and a stochastic optimization algorithm is used to learn optimal bases. We propose a technique called hierarchical learning that can reduce the learning time of OCA dramatically. Hierarchical learning decomposes the original optimization problem into several levels according to a specially designed hierarchical organization, and the dimension of the data is reduced at each level using a shrinkage matrix. The learning process starts from the lowest level with an arbitrary initial point. The following approach is then applied recursively: (i) optimize the recognition performance in the reduced space, using the expanded optimal basis learned from the next lower level as the initial condition, and (ii) expand the optimal subspace to the bigger space in a pre-specified way. By applying this decomposition procedure recursively, a hierarchy of layers is formed. We show that the optimal performance obtained in the reduced space is maintained after the expansion. Therefore, the learning process of each level starts with a good initial point obtained from the next lower level. This speeds up the original algorithm significantly, since the learning is performed mainly in reduced spaces and the computational complexity is greatly reduced at each iteration. The effectiveness of hierarchical learning is illustrated on two popular datasets, where the computation time is reduced by a factor of about 30 compared to the original algorithm. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Spring Semester, 2005. / Date of Defense: March 22, 2005. / Object Recognition, Computer Vision, Feature Extraction / Includes bibliographical references. / Xiuwen Liu, Professor Directing Thesis; David Whalley, Committee Member; Kyle Gallivan, Committee Member.
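A minimal sketch of a spectral histogram representation under the stated assumptions (disjoint frequency-domain regions, marginal distributions of filter responses) might look like this in NumPy; the annular band partition and bin counts are illustrative choices, not the thesis's exact filters.

```python
import numpy as np

def spectral_histogram(image, n_bands=6, n_bins=11):
    """Partition the 2-D frequency domain into disjoint annular bands,
    treat each band as one filter, and describe the image by the marginal
    histogram of each filter's spatial response."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy, xx)
    edges = np.linspace(0, radius.max() + 1e-9, n_bands + 1)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (radius >= lo) & (radius < hi)        # one disjoint region
        response = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
        hist, _ = np.histogram(response, bins=n_bins, density=True)
        feats.append(hist)                           # marginal distribution
    return np.concatenate(feats)                     # classifier input

img = np.random.rand(64, 64)
print(spectral_histogram(img).shape)  # (66,) = 6 bands x 11 bins
```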