1

A high-speed color-based object detection algorithm: quay crane collision warning device

Gao, Xiang, 08 July 2016
Safety and efficiency are the most important factors in handling container cranes at ports all over the world. Rapid economic growth has led to a large increase in the number of quay cranes in operation over the past decades, which has been paired with a rising number of crane incidents. Crane operation becomes even more difficult with larger cranes, as the safety of these operations depends solely on the experience of the operator. This heightens the demand for additional safety-assistance devices. In this project, a camera-based image processing design is introduced. By detecting the container being handled and adjacent containers at high speed, the system can predict a potential collision and issue a warning before the operator actually realizes the risk.

The proposed Edge Approaching Detection algorithm, combined with detection in the Hue, Saturation, and Value (HSV) color space, is the key to this design. The combination of the two algorithms makes it much faster to detect color-based objects at high speed and in real time. By taking advantage of HSV's efficiency, the computation required by traditional object detection is reduced dramatically; in this paper, this computation is compared in terms of frames per second (FPS). As a result, accuracy is improved, speed is increased, and, where possible, switching to a cheaper platform that is still powerful enough for a specific deployment can reduce costs.
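As a rough illustration of the color-based stage, the sketch below thresholds a frame in HSV space with OpenCV and keeps large contours as candidate containers. The HSV bounds, minimum area, and helper name are illustrative assumptions, not the thesis's actual pipeline or its Edge Approaching Detection algorithm.

```python
import cv2
import numpy as np

def detect_colored_objects(frame_bgr, hsv_lo=(5, 100, 100), hsv_hi=(25, 255, 255), min_area=500):
    """Return bounding boxes of regions that fall inside an HSV color range.

    The HSV bounds (roughly orange, a common container color) and the
    minimum contour area are illustrative guesses, not thesis values.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # Remove speckle noise so small reflections do not become detections.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```

Because an HSV threshold is a per-pixel operation followed by cheap contour extraction, it runs at high FPS even on modest hardware, which is the efficiency the abstract refers to.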
2

Optimizing task assignment for collaborative computing over heterogeneous network devices

Kao, Yi-Hsuan, 30 July 2016
The Internet of Things promises to enable a wide range of new applications involving sensors, embedded devices, and mobile devices. Unlike traditional cloud computing, where centralized, powerful servers offer high-quality computing service, in the era of the Internet of Things abundant computational resources are distributed across the network. These devices are not as powerful as servers, but they are easier to access, with faster setup and short-range communication. However, because of the energy, computation, and bandwidth constraints on smart things and other edge devices, it will be imperative to collaboratively run computation-intensive applications that no single device can support individually. Since many IoT applications, such as data processing, can be divided into multiple tasks, we study the problem of assigning such tasks to multiple devices, taking into account their capabilities as well as the costs and latencies associated with both task computation and data communication over the network.

A system that leverages collaborative computing over the network faces a highly variable run-time environment. For example, the resources released by a device may suddenly decrease due to a change of state in local processes, or the channel quality may degrade due to mobility. Hence, such a system has to learn the available resources, be aware of changes, and flexibly adapt its task assignment strategy to make efficient use of these resources.

We take a step-by-step approach to achieve these goals. First, we assume that the amounts of resources are deterministic and known. We formulate a task assignment problem that aims to minimize the application latency (system response time) subject to a single cost constraint, so that the available resources are not overused. Second, we consider that each device has its own cost budget, and our new multi-constrained formulation attributes the cost to each device separately. Moving a step further, we assume that the amounts of resources are stochastic processes with known distributions, and solve a stochastic optimization with a strong QoS constraint: instead of providing a guarantee on the average latency, our task assignment strategy guarantees that p% of the time the latency is less than t, where p and t are arbitrary numbers. Finally, we assume that the amounts of run-time resources are unknown and stochastic, and design online algorithms that learn the unknown information within a limited amount of time and make competitive task assignments.

We aim to develop algorithms that make decisions efficiently at run-time; that is, the computational complexity should be as light as possible so that running the algorithm does not incur considerable overhead. For optimizations based on a known resource profile, we show that these problems are NP-hard and propose polynomial-time approximation algorithms with performance guarantees, where the performance loss caused by a sub-optimal strategy is bounded. For the online learning formulations, we propose lightweight algorithms for both stationary and non-stationary environments and show their competitiveness by comparing their performance with the optimal offline policy (solved by assuming the resource profile is known).

We perform comprehensive numerical evaluations, including simulations based on trace data measured at application run-time, and validate our analysis of the algorithms' complexity and performance against the numerical results. In particular, we compare our algorithms with existing heuristics and show that in some cases the performance loss incurred by a heuristic is considerable due to its sub-optimal strategy. Hence, we conclude that to efficiently leverage the distributed computational resources over the network, it is essential to formulate a sophisticated optimization problem that captures the practical scenarios well, and to provide an algorithm that is light in complexity and suggests a good assignment strategy with a performance guarantee.
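To make the deterministic formulation concrete, a minimal sketch follows: it exhaustively assigns a small chain of tasks to devices, minimizing end-to-end latency under a total cost budget. The device names, latency and cost numbers, and the brute-force search are illustrative simplifications; the dissertation's actual algorithms are polynomial-time approximations, not enumeration.

```python
from itertools import product

# Hypothetical profile: per-device compute latency/cost for each task, plus
# a link latency charged whenever consecutive tasks change device.
TASKS = ["decode", "filter", "classify"]
DEVICES = ["phone", "gateway", "edge_server"]
compute_latency = {  # seconds, illustrative numbers only
    "decode":   {"phone": 0.08, "gateway": 0.05, "edge_server": 0.02},
    "filter":   {"phone": 0.12, "gateway": 0.07, "edge_server": 0.03},
    "classify": {"phone": 0.30, "gateway": 0.15, "edge_server": 0.05},
}
compute_cost = {
    "decode":   {"phone": 1, "gateway": 2, "edge_server": 4},
    "filter":   {"phone": 1, "gateway": 2, "edge_server": 4},
    "classify": {"phone": 2, "gateway": 4, "edge_server": 8},
}
LINK_LATENCY = 0.04  # charged per device-to-device handoff

def best_assignment(budget):
    """Minimize chain latency subject to a single total-cost constraint."""
    best = None
    for assign in product(DEVICES, repeat=len(TASKS)):
        cost = sum(compute_cost[t][d] for t, d in zip(TASKS, assign))
        if cost > budget:
            continue  # violates the cost constraint
        latency = sum(compute_latency[t][d] for t, d in zip(TASKS, assign))
        latency += LINK_LATENCY * sum(a != b for a, b in zip(assign, assign[1:]))
        if best is None or latency < best[0]:
            best = (latency, assign, cost)
    return best

print(best_assignment(budget=10))
```

The search space grows exponentially with the number of tasks, which is exactly why the NP-hardness result and the approximation algorithms in the dissertation matter.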
3

Image Super-Resolution Enhancements for Airborne Sensors

Woods, Matthew, 23 December 2016
This thesis discusses the application of advanced digital signal and image processing techniques, particularly the technique known as super-resolution (SR), to enhance the imagery produced by cameras mounted on an airborne platform such as an unmanned aircraft system (UAS). SR is an image processing technology applicable to any digital, pixelated camera that is physically limited by construction to sample a scene with a discrete m x n pixel array. The straightforward objective of SR is to use mathematics and signal processing to overcome this physical limitation of the m x n array and emulate the "capabilities" of a camera with a higher-density km x kn (k > 1) pixel array. The exact meaning of "capabilities" in the preceding sentence is application dependent.

SR is a well-studied field, starting with the seminal 1984 paper by Huang and Tsai. Since that time, a multitude of papers, books, and software solutions have been published on the subject. However, although it shares many common aspects with other applications, imaging from airborne platforms brings a number of unique challenges, as well as opportunities, that are neither addressed nor exploited by the current state of the art. These include wide field-of-view imagery, optical distortion, oblique viewing geometries, spectral variety from the visible band through the infrared, constant ego-motion, and the availability of supplementary information from inertial measurement sensors. Our primary objective in this thesis is to extend the field of SR by addressing these areas. In our research experiments, we make significant use of both simulated imagery and real video collected from a number of flying platforms.
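As a hedged illustration of the basic multi-frame idea (not the thesis's airborne-specific methods), the sketch below performs naive shift-and-add SR: low-resolution frames with known sub-pixel shifts are placed onto a k-times denser grid and averaged. The nearest-cell placement and the toy data are simplifying assumptions.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, k=2):
    """Naive shift-and-add super-resolution on a k-times denser grid.

    frames: list of (m, n) arrays; shifts: per-frame (dy, dx) sub-pixel
    offsets in low-resolution pixels, assumed known (e.g. from image
    registration or, on an airborne platform, inertial measurements).
    """
    m, n = frames[0].shape
    acc = np.zeros((k * m, k * n))
    cnt = np.zeros((k * m, k * n))
    for img, (dy, dx) in zip(frames, shifts):
        # Map each LR sample to its nearest high-resolution cell.
        ys = (np.arange(m)[:, None] * k + round(dy * k)).clip(0, k * m - 1)
        xs = (np.arange(n)[None, :] * k + round(dx * k)).clip(0, k * n - 1)
        np.add.at(acc, (ys, xs), img)
        np.add.at(cnt, (ys, xs), 1.0)
    return acc / np.maximum(cnt, 1)  # cells no frame touched stay zero

# Toy usage: four half-pixel-shifted decimations of one high-res image.
hr = np.random.rand(64, 64)
frames = [hr[0::2, 0::2], hr[0::2, 1::2], hr[1::2, 0::2], hr[1::2, 1::2]]
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
print(shift_and_add_sr(frames, shifts, k=2).shape)  # (64, 64)
```

The availability of inertial measurement data on an airborne platform, noted in the abstract, is precisely what can supply the `shifts` input without a full image-registration step.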
4

Efficient Distributed Rendezvous Schemes and Spectrum Management for Cognitive Radio Networks

Li, Ji, 19 April 2017
Cognitive radio has emerged as a technology for realizing dynamic spectrum access by dynamically configuring transmission parameters. In a cognitive radio network (CRN), there are two types of users: primary users (PUs) and secondary users (SUs). PUs are the licensed users, the traditional wireless users who can access a specific licensed spectrum band. SUs are unlicensed users equipped with cognitive radios that can opportunistically use currently unoccupied channels to transmit, but they have to vacate channels for returning PUs and then switch to other available channels to continue transmitting. When two SUs want to establish a link, they have to meet on the same channel, which must be available to both of them simultaneously. This process is called rendezvous.

Past research on rendezvous focused only on designing the channel hopping sequence for the rendezvous process while ignoring practical problems such as rendezvous in wide-band CRNs, rendezvous without a predetermined sender and receiver, rendezvous with directional antennas, and how to maximize the number of common available channels. In this dissertation, we propose five schemes to realize efficient rendezvous and spectrum management under these practical considerations in different scenarios. We first propose a rendezvous and communication framework for wide-band CRNs. We then propose two efficient rendezvous schemes that require no predetermined sender and receiver. Next, we propose a rendezvous scheme specifically for SUs equipped with directional antennas. Finally, we propose a power control protocol to maximize the number of common available channels. All of the proposed schemes realize both efficient rendezvous and spectrum management under practical assumptions in different scenarios.
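To give a feel for the rendezvous process, the toy simulation below has two SUs blindly hop to random channels from their own available sets until they land on the same channel in the same slot. This random-hopping baseline is purely illustrative; the dissertation's schemes use designed hopping sequences with guaranteed rendezvous, which this sketch does not provide.

```python
import random

def time_to_rendezvous(avail_a, avail_b, max_slots=10_000, seed=0):
    """Toy blind-rendezvous simulation: in each time slot, each SU hops to
    a uniformly random channel from its own available set; rendezvous
    happens when both pick the same channel. Returns the slot index,
    or None if it never happens within max_slots.
    """
    rng = random.Random(seed)
    if not set(avail_a) & set(avail_b):
        return None  # no common available channel, rendezvous is impossible
    for slot in range(max_slots):
        if rng.choice(avail_a) == rng.choice(avail_b):
            return slot
    return None

# Two SUs whose local PU activity leaves them different available channels.
print(time_to_rendezvous([1, 2, 3, 5], [2, 3, 4, 6]))
```

The early-exit check also shows why the dissertation's final contribution matters: a power control protocol that enlarges the set of common available channels directly shortens expected time to rendezvous.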
5

Spectral Clustering for Electrical Phase Identification Using Advanced Metering Infrastructure Voltage Time Series

Blakely, Logan, 14 February 2019
The increasing demand for and prevalence of distributed energy resources (DER) such as solar power, electric vehicles, and energy storage present a unique set of challenges for integration into a legacy power grid, and accurate models of the low-voltage distribution system are critical for accurate simulation of DER. Labeling of the phase connections for each customer in a utility model is one area of grid topology that is known to contain errors, with implications for the safety, efficiency, and hosting capacity of a distribution system. This research presents a methodology for phase identification of customers using only advanced metering infrastructure (AMI) voltage time series. This thesis proposes to use Spectral Clustering, combined with a sliding-window ensemble method for handling a long-term time-series dataset that includes missing data, to group customers within a lateral by phase. The clustering phase predictions validate over 90% of the existing phase labels in the model and identify customers whose current phase labels are incorrect. Within this dataset, the methodology produces consistent, high-quality results, verified by checking the clustering phase predictions against the underlying topology of the system, as well as by selected examples verified using satellite and street-view images publicly available in Google Earth. Further analysis of the Spectral Clustering predictions shows that they not only validate and improve the phase labels in the utility model, but also show potential for detecting other types of topology errors, such as mislabeled connections between customers and transformers, unlabeled residential solar power, unlabeled transformers, and customers with incomplete information in the model. These results indicate excellent potential for further development of this methodology as a tool for validating and improving existing utility models of the low-voltage side of the distribution system.
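A minimal single-window sketch of the clustering step follows, using scikit-learn. It takes pairwise correlation of customer voltage time series as the affinity, on the premise that customers on the same phase see correlated voltage fluctuations. The sliding-window ensemble and missing-data handling described in the thesis are omitted here; the affinity choice is an assumption for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_phases(voltages, n_phases=3):
    """Group customers by phase from AMI voltage time series.

    voltages: (n_customers, n_timesteps) array of voltage magnitudes,
    assumed complete (the thesis additionally handles missing data).
    Returns an integer phase-cluster label per customer.
    """
    corr = np.corrcoef(voltages)          # pairwise Pearson correlation
    affinity = np.clip(corr, 0.0, 1.0)    # affinities must be non-negative
    model = SpectralClustering(n_clusters=n_phases,
                               affinity="precomputed",
                               random_state=0)
    return model.fit_predict(affinity)
```

Running this per window and taking a majority vote across windows is one plausible reading of the sliding-window ensemble idea, trading single-window noise for long-term consistency.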
6

A method of moments analysis of microstructured optical fibers

Arvas, Serhend, January 2009
Thesis (Ph.D.), Syracuse University, 2009. Publication number: AAT 3381559.
7

Path enumeration & redundancy removal for timing optimization

Khoury, Nancy, January 2009
Thesis (Ph.D.), Syracuse University, 2009. Publication number: AAT 3381581.
8

Wearable human activity recognition systems

Ameri-Daragheh, Alireza, 12 September 2015
In this thesis, we focus on designing wearable human activity recognition (WHAR) systems. As a first step, we conducted a thorough survey of publications in this area from the past ten years. We then propose a general-purpose architecture for designing the software of WHAR systems. Among the various applications of these wearable systems, we chose to work on a wearable virtual fitness coach that can recognize the types and intensities of the warm-up exercises an athlete performs. We first propose a basic hardware platform for implementing the WHAR software. The software design was then carried out in two phases. In the first phase, we focused on four simple activities to be recognized by the wearable device. We used the Weka machine learning tool to build a mathematical model that recognizes the four activities with an accuracy of 99.32%. Moreover, we propose an algorithm to measure the intensity of the activities with an accuracy of 93%. In the second phase, we focused on eight complex warm-up exercises. After building the mathematical model, the WHAR system could recognize the eight activities with an accuracy of 95.60%.
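A common WHAR pipeline is to slice the accelerometer stream into overlapping windows, compute simple statistics per window, and feed them to a classifier. The sketch below follows that pattern; the window size, features, and the use of scikit-learn in place of Weka are illustrative choices, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc_xyz, win=128, step=64):
    """Slice a (T, 3) accelerometer stream into overlapping windows and
    compute per-axis mean, standard deviation, and energy per window.
    Window length and features are illustrative assumptions.
    """
    feats = []
    for start in range(0, len(acc_xyz) - win + 1, step):
        w = acc_xyz[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), (w ** 2).mean(0)]))
    return np.array(feats)

# Hypothetical usage with labeled recordings of each warm-up exercise:
#   X = window_features(stream)            # stream: (T, 3) float array
#   clf = RandomForestClassifier(random_state=0).fit(X, labels)
#   clf.predict(window_features(new_stream))
```

The per-window energy term, `(w ** 2).mean(0)`, is also a natural starting point for the intensity measurement the abstract mentions, since more vigorous movement yields larger acceleration magnitudes.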
9

Post-silicon Functional Validation with Virtual Prototypes

Cong, Kai, 27 August 2015
Post-silicon validation has become a critical stage in the system-on-chip (SoC) development cycle, driven by increasing design complexity, a higher level of integration, and decreasing time-to-market. According to recent reports, post-silicon validation comprises more than 50% of the overall development effort for a 65 nm SoC. Though post-silicon validation covers many aspects, ranging from the electronic properties of the hardware to the performance and power consumption of whole systems, a central task remains validating the functional correctness of both the hardware and its integration with software. There are several key challenges to achieving accelerated, low-cost post-silicon functional validation. First, there is only limited silicon observability and controllability; second, there is no good test coverage estimation over a silicon device; third, it is difficult to generate good post-silicon tests before a silicon device is available; fourth, there are no effective software robustness testing approaches to ensure the quality of hardware/software integration.

We propose a systematic approach to accelerating post-silicon functional validation with virtual prototypes. Post-silicon test coverage is estimated in the pre-silicon stage by evaluating test cases on the virtual prototypes. This analysis is first conducted on the initial test suite assembled by the user and subsequently on an expanded test suite that includes automatically generated test cases. Based on the coverage statistics of the initial test suite on the virtual prototypes, test cases are generated automatically to improve coverage. In the post-silicon stage, our approach supports coverage evaluation of test cases on silicon devices to ensure the fidelity of the early coverage evaluation. The generated test cases are issued to silicon devices to detect inconsistencies between virtual prototypes and silicon devices using conformance checking. We further extend the test case generation framework to generate and inject fault scenarios with virtual prototypes for driver robustness testing. Besides virtual prototype-based fault injection, we develop an automatic driver fault injection approach that supports runtime fault generation and injection for driver robustness testing. Since virtual prototypes enable early driver development, our automatic driver fault injection approach can be applied to driver testing in both the pre-silicon and post-silicon stages.

For a preliminary evaluation, we applied our coverage evaluation and test generation to several network adapters and their virtual prototypes. We conducted coverage analysis for a suite of common tests on both the virtual prototypes and the silicon devices. The results show that our approach can estimate test coverage with high fidelity. Based on the coverage estimation, we employed our automatic test generation approach to generate additional tests. When the generated test cases were issued to both virtual prototypes and silicon devices, we observed significant coverage improvement, and we detected 20 inconsistencies between virtual prototypes and silicon devices, each of which reveals a defect in either a virtual prototype or a silicon device. After applying the virtual prototype-based fault injection approach to the virtual prototypes of three widely used network adapters, we generated and injected thousands of fault scenarios and found 2 driver bugs. For automatic driver fault injection, we applied our approach to 12 widely used drivers with either virtual prototypes or silicon devices and found 28 distinct bugs.
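To illustrate the flavor of driver-facing fault injection (not the dissertation's actual framework), the sketch below wraps a virtual prototype's register file and randomly corrupts reads, so a driver under test has its error-handling paths exercised. The class, its interface, and the fault model are hypothetical.

```python
import random

class FaultyRegisterFile:
    """Toy fault-injection wrapper around a virtual prototype's registers.

    With probability p_fault, a read returns a bit-flipped value, emulating
    a transient hardware fault. Injections are logged so a failing driver
    run can be reproduced deterministically from the same seed.
    """

    def __init__(self, registers, p_fault=0.01, seed=0):
        self.registers = registers      # e.g. {offset: value}
        self.rng = random.Random(seed)  # fixed seed => reproducible faults
        self.p_fault = p_fault
        self.injected = []              # (offset, flipped_bit) log

    def read(self, offset):
        value = self.registers.get(offset, 0)
        if self.rng.random() < self.p_fault:
            bit = self.rng.randrange(32)
            value ^= 1 << bit           # flip one random bit of the read
            self.injected.append((offset, bit))
        return value

    def write(self, offset, value):
        self.registers[offset] = value  # writes pass through unmodified
```

Deterministic replay from the injection log is what makes a crash found this way debuggable, which is one reason fault injection pairs so naturally with virtual prototypes rather than physical silicon.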
10

Document and natural image applications of deep learning

Kang, Le, 31 October 2015
A tremendous amount of digital visual data is collected every day, and we need efficient and effective algorithms to extract useful information from that data. Considering the complexity of visual data and the expense of human labor, we expect algorithms to have enhanced generalization capability and to depend less on domain knowledge. While many topics in computer vision have benefited from machine learning, some document analysis and image quality assessment problems have still not found the best way to utilize it. In the context of document images, a compelling need exists for reliable methods to categorize and extract key information from captured images. In natural image content analysis, accurate quality assessment has become a critical component of many applications. Most current approaches, however, rely on heuristics designed from human observation of severely limited data. These approaches typically work only on specific types of images and are hard to generalize to the complex data found in real applications.

This dissertation addresses the challenges of processing heterogeneous visual data by applying effective learning methods that model the data directly, with minimal preprocessing and feature engineering. We focus on three important problems: text line detection, document image categorization, and image quality assessment. The data we work with typically contain unconstrained layouts, styles, or noise, resembling real data from applications. First, we present a graph-based method that learns line structure from training data for text line segmentation in handwritten document images, together with a general framework for detecting multi-oriented scene text lines using Higher-Order Correlation Clustering. Our method depends less on domain knowledge and is robust to variations in fonts and languages. Second, we introduce a general approach to document image genre classification using Convolutional Neural Networks (CNNs), which largely reduces the need for hand-crafted features or domain knowledge. Third, we present CNN-based methods for general-purpose No-Reference Image Quality Assessment (NR-IQA). Our methods bridge the gap between NR-IQA and CNNs and open the door to a broad range of deep learning methods. With excellent local quality estimation ability, our methods demonstrate state-of-the-art performance on both distortion identification and quality estimation.
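A minimal PyTorch sketch of the patch-based NR-IQA idea follows: a small CNN regresses a quality score per patch, and the image-level score is the mean over patches. The layer sizes and patch size are illustrative assumptions, not the dissertation's actual architecture.

```python
import torch
import torch.nn as nn

class PatchIQA(nn.Module):
    """Small CNN that regresses a quality score from a 32x32 grayscale
    patch, in the spirit of CNN-based NR-IQA. Sizes are illustrative.
    Input: (N, 1, 32, 32) normalized patches; output: (N, 1) scores."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # global average pooling
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))

    def forward(self, x):
        return self.head(self.features(x))

# Image-level score as the mean of patch predictions, a common convention
# in patch-based NR-IQA; the patches here are random stand-in data.
model = PatchIQA()
patches = torch.randn(8, 1, 32, 32)
print(model(patches).mean().item())
```

Training such a model against subjective quality scores requires no hand-crafted distortion features, which is the gap between NR-IQA and CNNs that the abstract refers to.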
