91
Efficient Travel Decision Making Using Web Application Based on MVC Architecture
Soni, Keval 10 May 2017 (has links)
Software Engineering is a dynamic field in which web applications deliver services to users over the Internet. Web applications are constantly updated to incorporate new functionality and improve software quality. Web applications are commonly developed using the Model-View-Controller (MVC) architecture, which divides the application into separate modules: the user interface, an intermediate controller, and data persistence. This modular approach achieves loose coupling between the user interface and the business logic, making the application easier to maintain and allowing its parts to be upgraded independently.

The project showcases the benefits of using the MVC architecture, the Spring MVC Framework, and RESTful service integration. It reflects on the system analysis and design models built using Object-Oriented Analysis and Design methodology and Unified Modeling Language (UML) diagrams.

The application presents travel data from different travel modes in a unified view for efficient decision making. Users can schedule future travel plans and be notified about those plans.
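To make the separation concrete, here is a minimal, hedged Python sketch of the MVC pattern the abstract describes; it is illustrative only (the project itself uses Java and the Spring MVC Framework), and all class and method names below are hypothetical.

```python
# Minimal MVC sketch: the controller mediates between the model (data/persistence)
# and the view (presentation), so either side can change independently.

class TravelModel:
    """Model: holds travel options and, hypothetically, talks to persistence."""
    def __init__(self):
        self._options = []

    def add_option(self, mode, duration_min, cost_usd):
        self._options.append({"mode": mode, "duration_min": duration_min, "cost_usd": cost_usd})

    def cheapest(self):
        return min(self._options, key=lambda o: o["cost_usd"])


class TravelView:
    """View: renders data it is handed; knows nothing about storage or business rules."""
    def render(self, option):
        return f"Best option: {option['mode']} ({option['duration_min']} min, ${option['cost_usd']})"


class TravelController:
    """Controller: receives a request, queries the model, passes the result to the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_cheapest_request(self):
        return self.view.render(self.model.cheapest())


if __name__ == "__main__":
    model = TravelModel()
    model.add_option("bus", 55, 4.50)
    model.add_option("train", 35, 9.00)
    print(TravelController(model, TravelView()).handle_cheapest_request())
```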
92
Automated Color and Location Comparison as a Timestamp Recognition Method in Images
Wegrzyn, Jeremy 07 June 2017 (has links)
A method to recognize the source camera, or family of cameras, from a photograph's timestamp would be a useful tool for image authentication. The proposed method uses the color and location of the timestamp in a photo of dubious origin and compares those characteristics to the known values for various camera makes and models. The method produces accurate results on the test images, correctly recognizing photos from families of cameras with known characteristics and excluding incorrect families. Recognition becomes more challenging when the test image has been heavily compressed or reduced in resolution, as happens when images are uploaded to social media platforms. By comparing additional characteristics, such as the spacing between characters, and by using a larger database of comparison data, this method would become an even more useful step in image authentication.
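A minimal sketch of the comparison idea described above: measure the timestamp's dominant color and its location in the frame, then keep only the camera families whose known values are consistent with both. The profile values, tolerance, and function names are assumptions for illustration, not data from the thesis.

```python
# Hedged sketch: compare a photo's timestamp color and location against
# known camera-family profiles. Profile data and thresholds are made up.

CAMERA_PROFILES = {
    "family_A": {"rgb": (255, 128, 0), "corner": "bottom_right"},   # orange timestamp
    "family_B": {"rgb": (255, 255, 0), "corner": "bottom_left"},    # yellow timestamp
}

def color_distance(c1, c2):
    """Euclidean distance in RGB space."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def match_camera_family(timestamp_rgb, timestamp_corner, max_color_dist=60.0):
    """Return families whose known color and location are consistent with the photo."""
    matches = []
    for family, profile in CAMERA_PROFILES.items():
        if profile["corner"] != timestamp_corner:
            continue  # a location mismatch excludes the family outright
        if color_distance(profile["rgb"], timestamp_rgb) <= max_color_dist:
            matches.append(family)
    return matches

# Example: a photo with an orange timestamp in the bottom-right corner.
print(match_camera_family((250, 120, 10), "bottom_right"))  # -> ['family_A']
```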
93
Angular Component Library Comparison
Cebrian, Michael Christopher 02 June 2017 (has links)
The purpose of this study is to aid web developers in choosing which component library to integrate with their web-based Angular project. Angular is a new platform, and many of the existing component libraries are still under active development or were only recently released, making it difficult for developers to know which component library would best fit their project. This study reviews many factors that influence a developer's choice of library, including the size of the community, the number of components available, the quality of the documentation, the payload size increase, and load-time performance. The study shows that the most popular projects are not the most performant and lack key features, while some much less popular libraries perform better and offer a better set of components. Developers looking for the best combination of performance and features should consider ngx-bootstrap or Angular Material Design Lite.
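One hedged way to act on the factors listed above is a simple weighted decision matrix; the libraries other than ngx-bootstrap, the metric values, and the weights in this sketch are placeholders, not the study's measurements.

```python
# Hedged sketch: rank component libraries by a weighted score over the study's
# criteria. All numbers below are illustrative placeholders.

CRITERIA_WEIGHTS = {           # higher weight = more important to this developer
    "community_size": 0.2,
    "component_count": 0.2,
    "documentation": 0.2,
    "payload_score": 0.2,      # higher = smaller payload-size increase
    "load_time_score": 0.2,    # higher = faster load time
}

LIBRARIES = {                  # all metrics normalized to [0, 1]; placeholders only
    "ngx-bootstrap": {"community_size": 0.6, "component_count": 0.7, "documentation": 0.8,
                      "payload_score": 0.9, "load_time_score": 0.9},
    "library_X":     {"community_size": 0.9, "component_count": 0.5, "documentation": 0.6,
                      "payload_score": 0.4, "load_time_score": 0.5},
}

def score(metrics):
    return sum(CRITERIA_WEIGHTS[c] * metrics[c] for c in CRITERIA_WEIGHTS)

for name, metrics in sorted(LIBRARIES.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(metrics):.2f}")
```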
94
Algorithms and Techniques for Managing Extensibility in Cyber-Physical Systems
Pradhan, Subhav Man 22 November 2016 (has links)
Over the past decades, the distributed computing paradigm has evolved from small, mostly homogeneous clusters to the current notion of ubiquitous computing, which consists of dynamic and heterogeneous resources at large scale. Recent advances in edge computing devices have produced sophisticated, resourceful devices equipped with a variety of sensors and actuators. These devices can be used to connect the physical world with the cyber world. As such, the future of ubiquitous computing is cyber-physical in nature, and Cyber-Physical Systems (CPS) will play a crucial role in it. CPS are engineered systems that integrate cyber and physical components, where the cyber components include computation and communication resources and the physical components represent physical systems. CPS can be considered a special type of ubiquitous system that combines control theory, communications, and real-time computing with embedded applications that interact with the physical world. However, realizing this future of ubiquitous computing requires investigating and understanding the limitations of traditional CPS, which were not designed for large-scale, dynamic environments comprising resources with distributed ownership and a requirement to support continuous evolution and operation. Hence, the goal is to transition from traditional CPS to next-generation CPS that support extensibility: a view of CPS as a collection of heterogeneous subsystems with distributed ownership and the capability to evolve dynamically and continuously throughout their lifetime while supporting continuous operation.
This dissertation first identifies key properties and challenges of next-generation, extensible CPS. The four key properties of extensible CPS are: (1) resource dynamism, (2) resource heterogeneity, (3) multi-tenancy with respect to hosted applications, and (4) possible remote deployment of resources. These properties give rise to various challenges. This dissertation primarily focuses on the challenges arising from the dynamic, multi-tenant, and remotely deployed nature of extensible CPS; it also proposes a solution to address resource heterogeneity. Overall, the dissertation presents four contributions: (1) a resilient deployment and reconfiguration infrastructure to manage remotely deployed extensible CPS, (2) a mechanism to establish secure interactions across distributed applications, (3) a holistic management solution that uses a self-reconfiguration mechanism to achieve autonomous resilience, and (4) an initial approach toward a generic computation model for heterogeneous applications.
95
Human Activity Analysis using Multi-modalities and Deep Learning
Zhang, Chenyang 23 November 2016 (has links)
With the widespread adoption of video recording devices and sharing platforms, visual media has become a significant part of everyday life. To better organize and understand this tremendous amount of visual data, computer vision and machine learning have become the key technologies. Among the topics in computer vision research, human activity analysis is one of the most challenging and promising areas; it is dedicated to detecting, recognizing, and understanding the context and meaning of human activities in visual media. This dissertation focuses on two aspects of human activity analysis: 1) how to utilize a multi-modality approach, including depth sensors and traditional RGB cameras, for human action modeling; and 2) how to utilize more advanced machine learning technologies, such as deep learning and sparse coding, to address more sophisticated problems such as attribute learning and automatic video captioning.

To explore the use of depth cameras, we first present a depth camera-based image descriptor called the histogram of 3D facets (H3DF), its application to human action and hand gesture recognition, and a holistic depth video representation for human actions. To unify inputs from both depth and RGB cameras, the dissertation then presents a joint framework that models human affect from both facial expressions and body gestures with a multi-modality fusion framework. We then present deep learning-based frameworks for human attribute learning and automatic video captioning. Compared to human action detection and recognition, automatic video captioning is more challenging because it involves complex language models and visual context. Extensive experiments on several public datasets demonstrate that the frameworks proposed in this dissertation outperform state-of-the-art approaches.
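As a hedged illustration of the multi-modality fusion idea (not the dissertation's actual H3DF descriptor or network architecture), the sketch below fuses RGB and depth features by concatenation before a simple classifier; the feature extractors are stubs and all dimensions, class counts, and data are assumptions.

```python
# Hedged sketch: feature-level fusion of RGB and depth descriptors for
# action recognition. Feature extractors are stubbed with random data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def rgb_features(n):    # stand-in for appearance/motion features from RGB frames
    return rng.normal(size=(n, 128))

def depth_features(n):  # stand-in for a depth descriptor such as a 3D-facet histogram
    return rng.normal(size=(n, 64))

n_train = 200
X = np.hstack([rgb_features(n_train), depth_features(n_train)])  # fused representation
y = rng.integers(0, 5, size=n_train)                             # 5 toy action classes

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy on toy data:", clf.score(X, y))
```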
96
Decoupled Vector-Fetch Architecture with a Scalarizing Compiler
Lee, Yunsup 03 September 2016 (has links)
As we approach the end of conventional technology scaling, computer architects are forced to incorporate specialized and heterogeneous accelerators into general-purpose processors for greater energy efficiency. Among the prominent accelerators that have recently become more popular are data-parallel processing units, such as classic vector units, SIMD units, and graphics processing units (GPUs). Surveying a wide range of data-parallel architectures and their parallel programming models and compilers reveals an opportunity to construct a new data-parallel machine that is highly performant and efficient yet remains a favorable compiler target with the same level of programmability as the others.

In this thesis, I present the Hwacha decoupled vector-fetch architecture as the basis of a new data-parallel machine. I reason through the design decisions while describing its programming model, microarchitecture, and LLVM-based scalarizing compiler that efficiently maps OpenCL kernels to the architecture. The Hwacha vector unit is implemented in Chisel as an accelerator attached to a RISC-V Rocket control processor within the open-source Rocket Chip SoC generator. Using complete VLSI implementations of Hwacha, including a cache-coherent memory hierarchy in a commercial 28 nm process and simulated LPDDR3 DRAM modules, I quantify the area, performance, and energy consumption of the Hwacha accelerator. These numbers are then validated against an ARM Mali-T628 MP6 GPU, also built in a 28 nm process, using a set of OpenCL microbenchmarks compiled from the same source code with our custom compiler and ARM's stock OpenCL compiler.
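To illustrate what a scalarizing compiler looks for, here is a small, hedged Python model of a saxpy-style data-parallel kernel (not the thesis's LLVM or Chisel code): the operand `a` is uniform across work-items, so a scalarizing compiler can keep it in a single scalar register rather than replicating it across vector lanes.

```python
# Hedged illustration of scalarization in a data-parallel kernel: operands that
# are uniform across work-items (here `a`) can live in scalar registers, while
# per-work-item values (x[i], y[i]) occupy vector lanes.

def saxpy_kernel(i, a, x, y):
    # Per-work-item body, as an OpenCL-style kernel would express it.
    # `a` is the same for every work-item  -> scalar operand (hoisted once).
    # `x[i]`, `y[i]` differ per work-item  -> vector operands (one element per lane).
    y[i] = a * x[i] + y[i]

def run_data_parallel(a, x, y):
    # A vector machine executes the kernel body for many i at once;
    # this sequential loop only models the semantics.
    for i in range(len(x)):
        saxpy_kernel(i, a, x, y)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 10.0, 10.0, 10.0]
run_data_parallel(2.0, x, y)
print(y)  # [12.0, 14.0, 16.0, 18.0]
```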
97
Graph-based approaches to resolve entity ambiguity
Pershina, Maria 17 September 2016 (has links)
Information extraction is the task of automatically extracting structured information from unstructured or semi-structured machine-readable documents. One of its challenges is to resolve ambiguity between entities, either in a knowledge base or in text documents. There are many variations of this problem, known under different names such as coreference resolution, entity disambiguation, entity linking, and entity matching. For example, coreference resolution decides whether two expressions refer to the same entity; entity disambiguation maps an entity mention to the appropriate entity in a knowledge base (KB); entity linking focuses on inferring that two entity mentions in one or more documents refer to the same real-world entity even if they do not appear in a KB; and entity matching (also called record deduplication, entity resolution, or reference reconciliation) merges database records that refer to the same object.

Resolving ambiguity and finding proper matches between entities is an important step for many downstream applications, such as data integration, question answering, and relation extraction. The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains, posing a scalability challenge for information extraction systems. Tools for automatically aligning these knowledge bases would make it possible to unify many sources of structured knowledge and to answer complex queries. However, the efficient alignment of large-scale knowledge bases still poses a considerable challenge.

This dissertation studies various aspects of, and settings for, resolving ambiguity between entities. A new scalable, domain-independent, graph-based approach utilizing Personalized PageRank is developed for entity matching across large-scale knowledge bases and evaluated on datasets of 110 million and 203 million entities. A new model for entity disambiguation between a document and a knowledge base, utilizing a document graph and effectively filtering out noise, is proposed, and the corresponding datasets are released. A competitive result of 91.7% micro-accuracy on the benchmark AIDA dataset is achieved, outperforming the most recent state-of-the-art models. A new technique based on a paraphrase detection model is proposed to recognize name variations for an entity in a document, and the corresponding training and test datasets are made publicly available. Finally, an approach integrating the graph-based entity disambiguation model with this technique is presented for the entity linking task and evaluated on a dataset for the Text Analysis Conference Entity Discovery and Linking task.
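A minimal sketch of Personalized PageRank, the core primitive behind the graph-based matching described above; the toy graph, seed set, and parameters are illustrative and bear no relation to the dissertation's 100-million-entity experiments.

```python
# Hedged sketch: Personalized PageRank by power iteration on a toy entity graph.
# Scores concentrate on nodes well connected to the personalization (seed) set,
# which is the signal a graph-based matching approach can exploit.

def personalized_pagerank(graph, seeds, alpha=0.85, iters=50):
    nodes = list(graph)
    # Teleport distribution restricted to the seed entities.
    p = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(p)
    for _ in range(iters):
        new_rank = {n: (1 - alpha) * p[n] for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                continue  # dangling nodes simply lose their mass in this sketch
            share = alpha * rank[n] / len(out)
            for m in out:
                new_rank[m] += share
        rank = new_rank
    return rank

# Toy graph given as adjacency lists.
graph = {
    "Paris":         ["France", "Eiffel_Tower"],
    "France":        ["Paris"],
    "Eiffel_Tower":  ["Paris"],
    "Paris_Hilton":  ["Hilton_Hotels"],
    "Hilton_Hotels": ["Paris_Hilton"],
}

scores = personalized_pagerank(graph, seeds={"France"})
# Disambiguate a mention "Paris" given the context entity "France":
print(scores["Paris"] > scores["Paris_Hilton"])  # True: the city reading wins
```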
98
UV3: Re-engineering the Unravel program slicing tool
Lamoreaux, Candace M. 21 September 2016 (has links)
Static program slicing is a technique used to analyze code for single points of failure or errors that could cause catastrophic events in a software system. This analysis technique is especially useful in large-scale systems where a software failure could have very serious consequences.

In 1995 the National Institute of Standards and Technology (NIST) created a Computer-Aided Software Engineering (CASE) tool called Unravel, a static program slicing tool used to evaluate the safety and integrity of software. Because of the outdated libraries it depends on, Unravel can no longer be run on modern computer systems. This project re-engineers the original Unravel application so that it can run on modern computer systems.

The re-engineered version of the program, called Unravel V3 (UV3), implements all the functional requirements of the original program but provides a more modern user interface and moves the program from the procedural language C to the object-oriented language C#.
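A minimal sketch of the backward-slicing idea that Unravel implements: starting from a slicing criterion (a variable at a statement), repeatedly pull in the statements it depends on. The toy program and its hand-built dependence graph below are assumptions; the real tool derives dependences from C source.

```python
# Hedged sketch: backward static slice over a hand-built dependence graph.
# Each statement maps to the statements it depends on (data and control dependences).
#
# Toy program:
#   1: n = read()    2: i = 1    3: s = 0    4: t = 0
#   5: while i <= n:
#   6:     s = s + i
#   7:     t = t + 2 * i
#   8:     i = i + 1
#   9: print(s)

deps = {
    1: [], 2: [], 3: [], 4: [],
    5: [1, 2, 8],          # loop condition reads n and i
    6: [3, 6, 2, 8, 5],    # s reads prior s and i, controlled by the loop
    7: [4, 7, 2, 8, 5],    # t reads prior t and i, controlled by the loop
    8: [2, 8, 5],
    9: [3, 6, 5],          # printed s comes from statements 3 or 6
}

def backward_slice(criterion):
    """Collect every statement that can affect the slicing criterion."""
    slice_set, worklist = set(), [criterion]
    while worklist:
        stmt = worklist.pop()
        if stmt in slice_set:
            continue
        slice_set.add(stmt)
        worklist.extend(deps[stmt])
    return sorted(slice_set)

# Statements 4 and 7 (the unused variable t) fall outside the slice on print(s).
print(backward_slice(9))  # -> [1, 2, 3, 5, 6, 8, 9]
```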
99
A control plane for low power lossy networks
Pope, James H. 24 January 2017 (has links)
Low-power, lossy networks (LLNs) are becoming a critical infrastructure for future applications that are ad hoc and untethered for periods of years. Applications enabled by LLNs include the Smart Grid, data center power control, industrial networks, and building and home automation systems. LLNs intersect a number of research areas, including the Internet of Things (IoT), Cyber-Physical Systems (CPSs), and Wireless Sensor Networks (WSNs). A number of LLN applications, such as industrial sensor networks, require quality-of-service guarantees. These applications are not currently supported by LLN routing protocols that allow dynamic changes in the network structure, specifically the standardized IPv6-based Routing Protocol for Low-Power and Lossy Networks (RPL).

I developed the Coordinated Routing for Epoch-based Stable Tree (CREST) control plane infrastructure to address this problem, enabling better quality-of-service guarantees by providing a stable routing tree. The framework assumes efficient and reliable information collection and dissemination mechanisms. Using a medium-sized LLN, I showed that the control plane allowed an example application that requires a stable routing tree to meet its goal.

To address dissemination scalability, I developed and demonstrated the centralized Heuristic Approach for Spanning Caterpillar Trees (HASTE) and distributed Deal algorithms, and the associated Radiate protocol, to improve reliability and further reduce the number of transmissions required for network broadcasts. Reliability is improved by using passive acknowledgments and a retransmission scheme. The number of transmissions is reduced by restricting retransmissions to non-leaf nodes and generating maximally leafy spanning trees. Using testbeds with up to 340 nodes, the approach was shown to use between one third and one fifth of the transmissions required by standard flooding techniques.

This research bridges the gap between performance-sensitive LLN applications and the current set of LLN routing protocols. Furthermore, it provides usable implementations for the research and industry communities.
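A small, hedged sketch of the transmission-saving idea behind the broadcast work described above: build a spanning tree, let only interior (non-leaf) nodes retransmit, and compare the cost to naive flooding where every node rebroadcasts. A plain BFS tree stands in for the maximally leafy trees that HASTE actually seeks, and the topology is made up.

```python
# Hedged sketch: broadcast cost under "only interior nodes retransmit" vs flooding.
# Uses a BFS spanning tree as a stand-in for a maximally leafy spanning tree.
from collections import deque

adjacency = {            # toy multihop LLN topology
    "A": ["B", "C"],
    "B": ["A", "D", "E"],
    "C": ["A", "E"],
    "D": ["B", "F"],
    "E": ["B", "C", "F"],
    "F": ["D", "E"],
}

def bfs_tree(root):
    """Return a parent map describing a BFS spanning tree rooted at `root`."""
    parent, queue = {root: None}, deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

parent = bfs_tree("A")
interior = {p for p in parent.values() if p is not None}  # root and relay nodes
tree_broadcast_tx = len(interior)                          # each transmits exactly once
flooding_tx = len(adjacency)                               # every node rebroadcasts once

print(f"tree-based broadcast: {tree_broadcast_tx} transmissions")
print(f"naive flooding:       {flooding_tx} transmissions")
```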
100
Fabbed to Sense: Integrated Design of Geometry and Sensing Algorithms for Interactive Objects
Savage, Valkyrie Arline 02 February 2017 (has links)
Task-specific tangible input devices, like video game controllers, improve user speed and accuracy in input tasks compared to the more general-purpose touchscreen or mouse and keyboard. However, while modifying a graphical user interface (GUI) to accept mouse and keyboard inputs for new and specific tasks is relatively easy and requires only software knowledge, tangible input devices are challenging to prototype and build.

Rapid prototyping digital fabrication machines, such as vinyl cutters, laser cutters, and 3D printers, now permeate the design process for such devices. Using these tools, designers can realize a new tangible design faster than ever. In a typical design process, however, these machines are not used to create the interaction in these interactive product prototypes: they merely create the shell, case, or body, leaving the designer to assemble and program electronics for sensing a user's input in an entirely separate process. What are the most cost-effective, fast, and flexible ways of sensing rapid-prototyped input devices? In this dissertation, we investigate how 2D and 3D models for input devices can be automatically generated or modified in order to employ standard, off-the-shelf sensing techniques for adding interactivity to those objects: we call this "fabbing to sense."

We describe the capabilities of modern rapid prototyping machines, linking these abilities to potential sensing mechanisms where possible. We then plunge more deeply into three examples of sensing/fabrication links, building analysis and design tools that help users design, fabricate, assemble, and use input devices sensed through these links. First, we discuss Midas, a tool for building capacitive sensing interfaces on non-screen surfaces, like the back of a phone. Second, we describe Lamello, a technique that generates laser-cut and 3D-printed tine structures and simulates their vibrational frequencies for training-free audio sensing. Finally, we present Sauron, a tool that automatically modifies the interior of 3D input models to allow sensing via a single embedded camera. We demonstrate each technique's flexibility to be used for many types of input devices through a series of example objects.
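As a hedged illustration of the Lamello idea mentioned above, the sketch below estimates the first-mode resonant frequency of a rectangular cantilever tine using standard Euler-Bernoulli beam theory; the material constants and dimensions are assumptions, and Lamello's actual simulation pipeline may differ.

```python
# Hedged sketch: first-mode resonant frequency of a rectangular cantilever tine
# (Euler-Bernoulli beam theory). Material values and dimensions are assumptions.
import math

def tine_frequency_hz(length_m, thickness_m, width_m, youngs_modulus_pa, density_kg_m3):
    lam1 = 1.875104                                  # first-mode eigenvalue for a cantilever
    area = width_m * thickness_m                     # cross-sectional area
    inertia = width_m * thickness_m ** 3 / 12.0      # second moment of area
    return (lam1 ** 2 / (2 * math.pi)) * math.sqrt(
        youngs_modulus_pa * inertia / (density_kg_m3 * area * length_m ** 4)
    )

# Assumed laser-cut acrylic tine: 30 mm long, 3 mm thick, 5 mm wide.
f = tine_frequency_hz(0.030, 0.003, 0.005, youngs_modulus_pa=3.2e9, density_kg_m3=1180.0)
print(f"estimated first-mode frequency: {f:.0f} Hz")  # roughly in the audible range
```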