61 |
Monitoring vehicle entry and departure through location-based services. Chopra, Varun Nikhil G. 10 December 2015 (has links)
<p> Shipping ports and terminals typically experience long periods of heavy traffic. Congested terminal areas affect both the surrounding community and the port itself: they can extend commuters' travel times, delay shipments of goods, and create safety hazards. Location-based services such as geofencing can measure when a vehicle enters and exits a terminal. These entry and departure times can be reported to the terminals, with the goal of finding an efficient way to reduce traffic conditions. With vehicle travel times in hand, representatives of the terminals at the port could develop more efficient operating processes. To gather adequate travel times, a system consisting of two applications was built: a server-side application that exposes REST endpoints, and a client-side application that consumes those endpoints. This study provides an analysis and an implementation showing how location-based services establish a means of measuring the entry and exit times of vehicles moving through geofenced gates.</p>
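The geofencing mechanism described above can be illustrated with a minimal sketch (Python; the coordinates, function names, and circular-gate model are illustrative assumptions, not the thesis's actual REST-based system): a time-ordered stream of GPS fixes is turned into timestamped enter/exit events for one gate.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two WGS-84 points.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def gate_events(fixes, gate_lat, gate_lon, radius_m):
    """Turn a time-ordered stream of (timestamp, lat, lon) fixes into
    ('enter'|'exit', timestamp) events for one circular geofenced gate."""
    events, inside = [], False
    for ts, lat, lon in fixes:
        now_inside = haversine_m(lat, lon, gate_lat, gate_lon) <= radius_m
        if now_inside and not inside:
            events.append(("enter", ts))
        elif inside and not now_inside:
            events.append(("exit", ts))
        inside = now_inside
    return events
```

A server-side endpoint would persist these events per vehicle; the dwell time at the gate is then simply the difference between paired exit and enter timestamps.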
62 |
Image content in shopping recommender systems for mobile users. Zuva, Tranos. January 2012 (has links)
Thesis (D.Tech. in Computer Systems Engineering) -- Tshwane University of Technology, 2012. / Generating recommendations from a recommender system is an arduous problem in general, and more arduous still for mobile users, because of the limitations of the mobile devices on which the recommendations are displayed. Mobile devices with integrated cameras can offer online services to a global community wherever their users are located. A mobile user expects to receive a limited number of probable recommendations from a shopping recommender system within a few seconds, and those recommendations must be reasonably accurate to the user's needs. To achieve this objective, a proposed client-server architecture for an image-content-based shopping recommender system over wireless mobile devices was implemented. The system performs a query using an external image captured by the mobile device's camera, then generates a set of recommendations that is viewed on the device's Internet browser. The image content used to improve recommendation generation is the shape, extracted using level-set and active-contour-without-edges methods. An algorithm was found to represent the extracted shape so that it is invariant to Euclidean and affine transformations and robust to occlusion and clutter. This invariant shape representation was then used to characterise sale items for effective recommendation generation, and a suitable distance measure was used to evaluate image similarity for retrieval. Experimental results were generated and analyzed to test the efficacy of the shape representation and matching algorithm. Finally, the image-content recommender system for mobile users was simulated and evaluated by users.
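The invariance property described above, a shape representation unaffected by translation, scale, and choice of starting point, can be sketched with one classical technique, a centroid-distance Fourier descriptor. This is an illustrative stand-in under stated assumptions, not the level-set/active-contour pipeline the thesis actually uses.

```python
import cmath
import math

def shape_signature(contour, k=8):
    """Descriptor for a closed contour given as a list of (x, y) points:
    centroid distances (translation invariance), normalised by their mean
    (scale invariance), then low-order Fourier magnitudes, which discard
    the phase shift introduced by a change of starting point."""
    n = len(contour)
    cx = sum(x for x, _ in contour) / n
    cy = sum(y for _, y in contour) / n
    r = [math.hypot(x - cx, y - cy) for x, y in contour]
    mean = sum(r) / n
    r = [v / mean for v in r]               # scale invariance
    mags = []
    for f in range(1, k + 1):               # skip the DC term (always 1 after scaling)
        c = sum(r[i] * cmath.exp(-2j * math.pi * f * i / n) for i in range(n))
        mags.append(abs(c) / n)
    return mags

def descriptor_distance(a, b):
    # Euclidean distance between descriptors, usable for retrieval ranking.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Two contours of the same shape at different positions and scales yield near-identical descriptors, while a differently proportioned shape yields a measurably larger distance.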
63 |
Beneath the Attack Surface. Mowery, Keaton. 18 August 2015 (has links)
<p> Computer systems are often analyzed as purely virtual artifacts, a collection of software operating on a Platonic ideal of a computer. When software is executed, it runs on actual hardware: an increasingly complex web of analog physical components and processes, cleverly strung together to present an illusion of pure computation. When an abstract software system is combined with individual hardware instances to form functioning systems, the overall behavior varies subtly with the hardware. These minor variations can change the security and privacy guarantees of the entire system, in both beneficial and harmful ways. We examine several such security effects in this dissertation. </p><p> First, we look at the fingerprinting capability of JavaScript and HTML5: when invoking existing features of modern browsers, such as JavaScript execution and 3-D graphics, how are the results affected by underlying hardware, and how distinctive is the resulting fingerprint?</p><p> Second, we discuss AES side channel timing attacks, a technique to extract information from AES encryption running on hardware. We present several reasons why we were unable to reproduce this attack against modern hardware and a modern browser.</p><p> Third, we examine positive uses of hardware variance: namely, seeding Linux's pseudorandom number generator at kernel initialization time with true entropy gathered during early boot. We examine the utility of these techniques on a variety of embedded devices, and give estimates for the amount of entropy each can generate.</p><p> Lastly, we evaluate a cyberphysical system: one which combines physical processes and analog sensors with software control and interpretation. Specifically, we examine the Rapiscan Secure 1000 backscatter X-ray full-body scanner, a device for looking under a scan subject's clothing, discovering any contraband secreted about their person. 
We present a full security analysis of this system, including its hardware, software, and underlying physics, and show how an adaptive, motivated adversary can completely subvert the scan to smuggle contraband, such as knives, firearms, and plastic explosives, past a Secure 1000 checkpoint. These attacks are entirely based upon understanding the physical processes and sensors which underlie this cyberphysical system, and involve adjusting the contraband's location and shape until it simply disappears.</p>
64 |
Reference architecture representation environment (RARE) : systematic derivation and evaluation of domain-specific, implementation-independent software architectures. Graser, Thomas Jeffrey, 1962- 14 March 2011 (has links)
Not available / text
65 |
Computer interfaces for data communications. 王漢江, Wong, Hon-kong, Kenneth. January 1975 (has links)
published_or_final_version / Electrical Engineering / Master / Master of Philosophy
66 |
Efficient ray tracing architectures. Spjut, Josef Bo. 22 October 2015 (has links)
<p> This dissertation presents computer architecture designs that are efficient for ray tracing based rendering algorithms. The primary observation is that ray tracing maps better to independent thread issue hardware designs than it does to the dependent thread and data designs used in most commercial architectures. While independent thread issue causes extra overhead in the fetch and issue stages of the pipeline, the number of computation resources required can be reduced by sharing less frequently used execution units. Furthermore, since all the threads run a single program on multiple data (SPMD), thread processors can share instruction and data caches. Ray tracing needs read-only access to the scene data during each frame, so caches can be optimized for reading, and traditional cache coherence protocols are unnecessary for maintaining coherent memory access. The resultant image exists as a write-only frame buffer, allowing memory writes to bypass the cache entirely, preventing cache pollution and increasing the performance of smaller caches. </p><p> Commercial real-time rendering systems lean heavily on high-performance graphics processing units (GPUs) that use the rasterization and z-buffer algorithms for rendering. A single pass of rasterization throws out much of the global scene information by streaming the surface data that a ray tracer keeps resident in memory. As a result, ray tracing more naturally supports rendering effects involving global information, such as shadows, reflections, refractions and camera lens effects. Rasterization has a time complexity of approximately <i>O</i>(<i>N</i> log <i>P</i>), where <i>N</i> is the number of primitive polygons and <i>P</i> is the number of pixels in the image. Ray tracing, in contrast, has a time complexity of <i>O</i>(<i>P</i> log <i>N</i>), so ray tracing scales better to large scenes with many primitive polygons, allowing for increased surface detail. Finally, once the number of pixels reaches its limit, ray tracing should exceed the performance of rasterization by allowing the number of objects to increase with less of a penalty on performance.</p>
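The complexity comparison above can be made concrete with a few lines of arithmetic (a deliberately crude cost model that takes the abstract's asymptotic claims at face value and ignores all constant factors): at a fixed display resolution P, rasterization's N log P cost eventually overtakes ray tracing's P log N as the polygon count N grows.

```python
import math

def rasterize_cost(n_polys, n_pixels):
    # O(N log P): every primitive is streamed through the pipeline.
    return n_polys * math.log2(n_pixels)

def raytrace_cost(n_polys, n_pixels):
    # O(P log N): one traversal of a log-depth spatial hierarchy per pixel.
    return n_pixels * math.log2(n_polys)

pixels = 1920 * 1080                      # fixed display resolution
for polys in (10**4, 10**6, 10**9):
    ratio = rasterize_cost(polys, pixels) / raytrace_cost(polys, pixels)
    print(f"N = {polys:>10}: rasterize/raytrace cost ratio = {ratio:.3f}")
```

At 10^4 polygons rasterization is far cheaper under this model; by 10^9 polygons ray tracing wins by orders of magnitude, which is the scaling argument the abstract makes.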
67 |
Fast modular exponentiation using residue domain representation: A hardware implementation and analysis. Nguyen, Christopher Dinh. 01 March 2014 (has links)
<p> Using modular exponentiation as an application, we engineered on FPGA fabric, and analyzed, the first implementation of two arithmetic algorithms in Reduced-Precision Residue Number Systems (RP-RNS): the partial-reconstruction algorithm and the quotient-first scaling algorithm. Residue number systems (RNS) provide an alternative to binary representation for computation. They offer fully parallel addition, subtraction, and multiplication; however, base extension, division, and sign detection become harder operations. Phatak's RP-RNS uses a time-memory trade-off to achieve O(lg N) running time for base extension and scaling, where N is the bit-length of the operands, compared with Kawamura's Cox-Rower architecture and its derivatives, which, to the best of our knowledge, take O(N) steps and therefore O(N) delay. We implemented the fully parallel RP-RNS architecture based on Phatak's description and architecture diagrams. Our design decisions included distributing the lookup tables among the channels, removing the adder trees, and removing parallel table access, thus trading speed for size. In retrospect, we should have hosted the tables in memory off the FPGA. We measured FPGA utilization, storage size, and cycle counts. The data we present, though less than optimal, confirm the theoretical trends calculated by Phatak: FPGA utilization grows proportionally to K log(K), where K is the number of hardware channels; storage grows proportionally to O(N<sup>3</sup> lg lg N); and, when using Phatak's recommendations, cycle count grows proportionally to O(lg N). Our contributions include documentation of our design, architecture, and implementation; a detailed testing methodology; and performance data based on our implementation, enabling others to replicate our implementation and findings.</p>
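The channel parallelism that makes RNS attractive, and the reconstruction step that makes it hard, can be sketched as follows (small illustrative moduli; exact Chinese Remainder Theorem reconstruction is shown for clarity, whereas RP-RNS specifically replaces this step with table-driven approximate reconstruction):

```python
from math import prod

MODULI = (13, 17, 19, 23)        # pairwise-coprime channel moduli
M = prod(MODULI)                 # dynamic range of the system

def to_rns(x):
    # Each channel independently holds x mod m_i.
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Channel-wise: no carries cross channels, so channels run in parallel.
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def rns_mul(a, b):
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    # CRT reconstruction: the serial, expensive step that base extension,
    # scaling, and sign detection all hinge on in a real RNS design.
    total = 0
    for x, m in zip(r, MODULI):
        Mi = M // m
        total += x * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return total % M
```

A hardware modular exponentiator would square and multiply entirely in the channel-wise representation, invoking scaling/base extension (the part RP-RNS accelerates) only where intermediate results must be reduced.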
68 |
PLC code vulnerabilities through SCADA systems. Valentine, Sidney E. 15 June 2013 (has links)
<p> Supervisory Control and Data Acquisition (SCADA) systems are widely used in automated manufacturing and in all areas of our nation's infrastructure. Applications range from chemical processes and water treatment facilities to oil and gas production and electric power generation and distribution. Current research on SCADA system security focuses on the primary SCADA components and targets network-centric attacks. Security risks from attacks against peripheral devices such as Programmable Logic Controllers (PLCs) have not been sufficiently addressed. Our research results address the need to develop PLC applications that are correct, safe and secure. This research provides an analysis of software safety and security threats. We develop countermeasures that are compatible with existing PLC technologies, and we study both intentional and unintentional software errors and propose methods to prevent them. The main contributions of this dissertation are: (1) a taxonomy of software errors and attacks in ladder logic; (2) models of ladder logic vulnerabilities; (3) security design patterns that avoid software vulnerabilities and incorrect practices; and (4) a proof-of-concept static analysis tool which detects vulnerabilities in PLC code and recommends corresponding design patterns.</p>
69 |
Result Distribution in Big Data Systems. Cheelangi, Madhusudan. 09 August 2013 (has links)
<p> We are building a Big Data Management System (BDMS) called <b>AsterixDB</b> at UCI. Since AsterixDB is designed to operate on large volumes of data under highly concurrent workloads, the results of its queries can be very large, so we need a specialized mechanism to manage these query results and deliver them to clients. In this thesis, we present an architecture and an implementation of a new result distribution framework that is capable of handling large volumes of results under high-concurrency workloads. We present the various components of this result distribution framework and show how they interact with each other to manage large volumes of query results and deliver them to clients. We also discuss various result distribution policies that are possible with our framework and compare their performance through experiments. </p><p> We have implemented a REST-like HTTP client interface on top of the result distribution framework to allow clients to submit queries and obtain their results. This client interface provides two modes from which clients can choose to read their query results: synchronous mode and asynchronous mode. In synchronous mode, query results are delivered to a client as a direct response to its query, within the same request-response cycle. In asynchronous mode, a query handle is instead returned to the client as the response to its query. The client can store the handle and send another request later, including the query handle, to read the result of the query whenever it wants. The architectural support for these two modes is also described in this thesis. We believe that the result distribution framework, combined with this client interface, successfully meets the result management demands of AsterixDB. </p>
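The two delivery modes can be sketched as follows (illustrative class and method names, not AsterixDB's actual API): synchronous submission returns the result within the same call, while asynchronous submission returns a handle that the client redeems, once, in a later request.

```python
import uuid

class ResultStore:
    """Minimal sketch of the two result-delivery modes. A real system would
    execute queries in the background and stream large results in pages."""

    def __init__(self):
        self._results = {}

    def submit_sync(self, query, execute):
        # Result delivered in the same request-response cycle.
        return execute(query)

    def submit_async(self, query, execute):
        # Return an opaque handle; the result is stored for later pickup.
        handle = str(uuid.uuid4())
        self._results[handle] = execute(query)
        return handle

    def fetch(self, handle):
        # One-shot read: None signals an unknown or already-read handle.
        return self._results.pop(handle, None)
```

The one-shot `fetch` mirrors a key concern of any result distribution policy: once a result is delivered, the system can reclaim the storage it occupied.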
70 |
A Framework for Quality of Service and Fault Management in Service-Oriented Architecture. Zhang, Jing. 20 August 2013 (has links)
<p>Service-Oriented Architecture (SOA) provides a powerful yet flexible paradigm for integrating distributed services into business processes to perform complex functionalities. However, this flexibility, together with environmental uncertainties, makes system performance difficult to manage. In this dissertation, a quality of service (QoS) management framework is designed and implemented to support reliable service delivery in SOA. The framework covers runtime process performance monitoring, faulty service diagnosis, and process recovery. During runtime, the QoS management system provides a mechanism to detect performance issues, identify the root cause(s) of problems, and repair a process by replacing faulty services. </p><p> To reduce the burden of monitoring all services, only a set of the most informative services is monitored at runtime, and several monitor selection algorithms are designed to choose monitoring locations wisely. Three diagnosis algorithms, including Bayesian network (BN) diagnosis, dependency matrix based (DM) diagnosis, and a hybrid diagnosis, are designed for root cause identification. DM diagnosis does not require process execution history and has a lower time complexity than BN diagnosis; however, BN diagnosis usually achieves better accuracy. The hybrid diagnosis integrates DM and BN diagnosis to obtain a good diagnosis result while eliminating a large portion of the diagnosis cost of BN diagnosis. Moreover, heuristic strategies can be used in hybrid diagnosis to further improve its efficiency. </p><p> We have implemented a prototype of the QoS and fault management framework in the Llama middleware. The thesis presents the design and implementation of the diagnosis engine and the adaptation manager (for process reconfiguration) in Llama. The diagnosis engine identifies root-cause services and triggers the adaptation manager, which decides on a service-replacement solution. System performance is studied using realistic services deployed on networked servers. Both simulation results and the system performance study show that our monitoring, diagnosis, and recovery approaches are practical and efficient. </p>
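A dependency-matrix style diagnosis can be sketched as follows (an illustrative simplification assuming a single faulty service, not the dissertation's exact DM algorithm): a candidate root cause must lie on every failing monitor's path and on no passing monitor's path.

```python
def dm_diagnose(dependency, observations):
    """dependency[m] is the set of services monitor m observes;
    observations[m] is True when monitor m reports a failure.
    Returns the set of services consistent with all observations,
    under a single-fault assumption."""
    failing = [dependency[m] for m, bad in observations.items() if bad]
    passing = [dependency[m] for m, bad in observations.items() if not bad]
    if not failing:
        return set()                         # nothing failed: no fault to localize
    # The faulty service is observed by every failing monitor...
    candidates = set.intersection(*map(set, failing))
    # ...and by no monitor that reported success.
    for ok in passing:
        candidates -= set(ok)
    return candidates
```

This also shows why monitor placement matters: the more distinguishable the monitors' dependency sets, the smaller the surviving candidate set, which is the intuition behind selecting only the most informative services to monitor.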