111.
Online Distributed Depository Selection in Opportunistic Device-to-Device Networks
Bashar, A. M. A. Elman, 01 December 2016
Device-to-device (D2D) communication is a new paradigm in cellular networks that enhances network performance by increasing spectral efficiency and reducing communication delay. Efficient data dissemination is indispensable for supporting many D2D applications such as content distribution and location-aware advertisement. In this work, I investigate a new data dissemination problem in which the receivers are not explicitly known and data must be disseminated to them within a probabilistic delay budget. I propose to exploit data depositories, which can temporarily house data and deliver them to interested receivers upon request. I formally formulate the delay-constrained profit maximization problem for data deposition in D2D networks and show its NP-hardness. Under the unique mobile opportunistic network setting, a practical solution to such a problem must be distributed, localized, and online. To this end, I introduce three algorithms: Direct Online Selection of 1-Depository, Direct Online Selection of L-Depositories, and Mixed Online Selection of L-Depositories. To demonstrate and evaluate the system, I implement a prototype on Google Nexus handsets and conduct experiments for five weeks. I further carry out simulations based on real-world mobility traces to evaluate large-scale networks and network settings that are impractical to test experimentally.
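The abstract names its three online selection algorithms without detailing them; as a purely illustrative sketch (not the thesis algorithms), the secretary-style threshold rule below captures the flavor of online, irrevocable depository selection. All names and parameters are hypothetical.

```python
import random

def online_select_one_depository(encounters, observe_fraction=0.37):
    """Illustrative threshold-based online selection of one depository.

    `encounters` is a sequence of (node_id, utility) pairs that arrive
    one at a time; each candidate must be accepted or rejected
    irrevocably on arrival. The rule observes an initial fraction of
    arrivals to set a threshold, then accepts the first later candidate
    that beats it.
    """
    encounters = list(encounters)
    cutoff = max(1, int(len(encounters) * observe_fraction))

    # Phase 1: observe only, recording the best utility seen so far.
    threshold = max(u for _, u in encounters[:cutoff])

    # Phase 2: accept the first candidate exceeding the threshold.
    for node_id, utility in encounters[cutoff:]:
        if utility > threshold:
            return node_id
    return encounters[-1][0]  # fallback: nothing beat the threshold

# Hypothetical usage: nodes scored by, e.g., expected contact rate.
print(online_select_one_depository(
    (f"node{i}", random.random()) for i in range(50)))
```

An L-depository variant would additionally track a budget of L acceptances and the probabilistic delay constraint.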
112.
Hybrid Mission Planning with Coalition Formation
Dukeman, Anton Leo, 27 March 2017
Robotic systems have proven effective in many domains. Some domains, such as mass casualty response, require close coupling between humans and robots that can adapt to the environment and the tasks at hand. The coalition formation problem allocates a coalition of agents to each task, but does not produce executable plans. The planning problem creates executable plans, but its difficulty scales with the number of agents and tasks. A hybrid solution to both problems can produce executable plans for the assigned tasks while satisfying computational resource constraints. Four solution tools are presented and evaluated on four test domains, including a novel domain simulating the immediate response to a tornado by local government agencies. Each domain and problem was implemented in a new problem description language combining planning and coalition formation.
Planning alone, an existing approach, produces high-quality plans by considering all possible interactions between tasks and agents simultaneously; however, it requires large amounts of time and memory, both of which are constrained in real-world applications. The coalition formation then planning tool factors the problem to reduce the required computational resources, but coalition formation cannot be relied upon to produce executable coalitions in all cases. The relaxed plan coalition augmentation tool addresses non-executable coalitions by selecting the agent(s) required to make a coalition executable. The final tool, task fusion, addresses reduced solution quality by selecting tasks and coalitions that are better planned together. The relaxed plan coalition augmentation tool solved at least as many problems as planning alone while averaging much lower computational resource usage. The task fusion tool solved more problems than planning alone, but its plan quality and resource usage were mixed compared to relaxed plan coalition augmentation.
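As an illustration of the coalition formation step described above, here is a hypothetical greedy capability-matching sketch (not Dukeman's algorithm; all names are invented):

```python
def greedy_coalitions(tasks, agents):
    """Illustrative greedy coalition formation: for each task, add the
    unassigned agent contributing the most still-missing capabilities
    until the task's requirements are covered.

    `tasks`  : dict task_name -> set of required capabilities
    `agents` : dict agent_name -> set of provided capabilities
    Returns  : dict task_name -> list of assigned agents (a coalition)
    """
    free = dict(agents)
    coalitions = {}
    for task, required in tasks.items():
        missing, coalition = set(required), []
        while missing and free:
            # Pick the free agent covering the most missing capabilities.
            best = max(free, key=lambda a: len(free[a] & missing))
            if not free[best] & missing:
                break  # no remaining agent helps this task
            missing -= free.pop(best)
            coalition.append(best)
        coalitions[task] = coalition  # may be incomplete if agents run out
    return coalitions

# Hypothetical tornado-response instance.
tasks = {"clear_debris": {"lift", "sense"}, "triage": {"medic"}}
agents = {"ugv1": {"lift"}, "uav1": {"sense"}, "medic_team": {"medic"}}
print(greedy_coalitions(tasks, agents))
```

Coalitions formed this way can be non-executable (the capabilities match, but no valid plan exists), which is precisely the failure the relaxed plan coalition augmentation tool repairs.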
113.
Analyzing User Comments On YouTube Coding Tutorial Videos
Poche, Elizabeth Heidi, 30 March 2017
Video coding tutorials enable expert and novice programmers to visually observe real developers write, debug, and execute code. Previous research in this domain has focused on helping programmers find relevant content in coding tutorial videos and on understanding the motivation and needs of content creators. In this thesis, we focus on the link connecting programmers who create coding videos with their audience. More specifically, we analyze user comments on YouTube coding tutorial videos. Our main objective is to help content creators effectively understand the needs and concerns of their viewers, and thus respond to those concerns faster and deliver higher-quality content. A dataset of 6,000 comments sampled from 12 YouTube coding videos is used to conduct our analysis. Important user questions and concerns are then automatically classified and summarized. The results show that Support Vector Machines can detect useful viewer comments on coding videos with an average accuracy of 77%. The results also show that SumBasic, an extractive frequency-based summarization technique with redundancy control, can sufficiently capture the main concerns present in viewers' comments.
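The thesis does not publish its code; the sketch below shows, under assumed column names and labels, how an SVM comment classifier of the kind described is typically built with scikit-learn. The toy data stands in for the annotated comments.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Hypothetical labeled data: comment text with a binary "useful" label,
# standing in for the 6,000 annotated comments described above.
comments = ["How do I fix this import error at 3:42?",
            "First!", "Great video", "Why does the loop start at 1?"]
labels = [1, 0, 0, 1]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("svm", LinearSVC(C=1.0)),
])

# With a realistically sized dataset, k-fold cross-validation yields the
# kind of average accuracy figure the thesis reports (77%).
scores = cross_val_score(clf, comments, labels, cv=2)
print(scores.mean())
```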
114.
Empirical Estimation of Intra-Voxel Structure with Persistent Angular Structure and Q-ball Models of Diffusion Weighted MRI
Nath, Vishwesh, 05 April 2017
The diffusion tensor model is non-specific in regions where micrometer-scale structural patterns are inconsistent at the millimeter scale (i.e., brain regions with pathways that cross, bend, branch, fan, etc.). Numerous models have been proposed to represent crossing fibers and complex intra-voxel structure from in vivo diffusion weighted magnetic resonance imaging (e.g., high angular resolution diffusion imaging, HARDI). Here, we present an empirical comparison of two HARDI approaches, persistent angular structure MRI (PAS-MRI) and Q-ball, using a newly acquired reproducibility dataset. Briefly, a single subject was scanned 11 times with 96 diffusion weighted directions and 10 reference volumes for each of two b-values (1000 and 3000 s/mm², for a total of 2144 volumes). Empirical reproducibility of intra-voxel fiber fractions (number/strength of peaks), angular orientation, and fractional anisotropy was compared with metrics from a traditional tensor analysis approach, focusing on b-values of 1000 s/mm² and 3000 s/mm². PAS-MRI is shown to be more reproducible than Q-ball and offers advantages at low b-values. However, there are substantial and biologically meaningful differences between the estimated intra-voxel structures, both across analysis methods and across b-values. Hence, it is premature to perform meta-analyses or combine results across HARDI studies that use different analysis models or acquisition sequences.
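To reproduce this kind of comparison with open-source tools, a minimal sketch using the DIPY library is shown below. It uses DIPY's constant-solid-angle Q-ball model and a standard tensor fit (PAS-MRI has no widely available Python implementation); the file names are placeholders, and keyword names vary somewhat across DIPY versions.

```python
import nibabel as nib
import numpy as np
from dipy.core.gradients import gradient_table
from dipy.data import get_sphere
from dipy.direction import peaks_from_model
from dipy.io.gradients import read_bvals_bvecs
from dipy.reconst.dti import TensorModel
from dipy.reconst.shm import CsaOdfModel

# Placeholder paths; any HARDI acquisition with bvals/bvecs will do.
data = nib.load("dwi_b3000.nii.gz").get_fdata()
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")
gtab = gradient_table(bvals, bvecs)

# Q-ball (constant solid angle variant): per-voxel ODF peaks yield the
# intra-voxel fiber fractions and angular orientations compared above.
qball = CsaOdfModel(gtab, sh_order=6)
peaks = peaks_from_model(qball, data, get_sphere("repulsion724"),
                         relative_peak_threshold=0.5,
                         min_separation_angle=25)

# Traditional tensor fit for the fractional anisotropy baseline.
fa = TensorModel(gtab).fit(data).fa
print(peaks.peak_dirs.shape, np.nanmean(fa))
```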
115.
Computational semantics: a study of a class of verbs
Fisher, John N. D., January 1974
No description available.
116.
Building Efficient Large-Scale Big Data Processing Platforms
Wang, Jiayin, 20 June 2017
In the era of big data, many cluster platforms and resource management schemes have been created to satisfy the increasing demand for processing large volumes of data. A typical big data processing job consists of multiple stages, and each stage represents a generically defined data operation such as filtering or sorting. To parallelize job execution in a cluster, each stage comprises a number of identical tasks that can be launched concurrently on multiple servers. Practical clusters often involve hundreds or thousands of servers processing large batches of jobs. Resource management, which governs cluster resource allocation and job execution, is critical to system performance.

Generally speaking, there are three main challenges in resource management for these big data processing systems. First, with various tasks pending from different jobs and stages, it is difficult to determine which ones deserve priority for execution, given the tasks' differing characteristics such as resource demand and execution time. Second, there are dependencies among tasks that can run concurrently: for any two consecutive stages of a job, the output data of the former stage is the input data of the latter, and resource management has to comply with such dependencies. Third, the performance of cluster nodes is inconsistent; in practice, the run-time performance of every server varies, so resource management must dynamically adjust allocation according to each server's performance changes.

Resource management in existing platforms and prior work often relies on fixed user-specific configurations and assumes consistent performance on each node; the resulting performance, however, is not satisfactory under various workloads. This dissertation explores new approaches to improving the efficiency of large-scale big data processing platforms. In particular, run-time dynamic factors are carefully considered when the system allocates resources. New algorithms are developed to collect run-time data and predict the characteristics of jobs and the cluster. We further develop resource management schemes that dynamically tune the resource allocation for each stage of every running job in the cluster. The findings and techniques in this dissertation should provide valuable insights into similar problems in the research community.
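The dissertation's own schedulers are not reproduced here, but the following hypothetical sketch illustrates the general idea of prioritizing pending tasks by resource demand and estimated runtime while respecting stage dependencies. All names, scores, and numbers are invented for illustration.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    score: float                              # lower score = scheduled sooner
    name: str = field(compare=False)
    demand: float = field(compare=False)      # e.g., normalized CPU+memory
    est_runtime: float = field(compare=False)
    stage: int = field(compare=False)

def schedule(tasks, capacity, finished_stages):
    """Greedy pass: launch runnable tasks in score order until cluster
    capacity is exhausted. A task is runnable only when the previous
    stage of its job has finished (the stage dependency above)."""
    heap = [t for t in tasks if t.stage == 0 or t.stage - 1 in finished_stages]
    heapq.heapify(heap)
    launched = []
    while heap and capacity > 0:
        t = heapq.heappop(heap)
        if t.demand <= capacity:
            capacity -= t.demand
            launched.append(t.name)
    return launched

# Hypothetical scoring: favor short, light tasks (a smallest-job-first flavor).
pending = [Task(d * r, n, d, r, s) for n, d, r, s in
           [("map1", 0.2, 30, 0), ("map2", 0.3, 40, 0), ("reduce1", 0.5, 60, 1)]]
print(schedule(pending, capacity=1.0, finished_stages=set()))
```

A run-time-aware variant would recompute each task's score from the measured performance of the server it would land on, which is the dynamic adjustment the third challenge calls for.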
117.
Designing a Customer Relationship Management (CRM) System for the Home Automation Industry
Schleich, Christopher W., 20 April 2017
In recent years, businesses have begun storing their data in the Cloud due to an increased demand for having business information accessible at remote locations. A large portion of the technology used to store company data in the Cloud is generic software with features applicable to most types of industries. However, the home automation industry does not fit this mold.

Home automation companies' employees spend most of their time working away from the office. As such, they need business information, such as contacts, tasks, work orders, and purchase and change orders, accessible from laptops, phones, and tablets. Most companies in the industry use D-Tools System Integrator, a Windows-based application used to generate contracts and sell home automation technologies to clients. However, not only does D-Tools System Integrator fail to provide the tools needed to manage daily operations once a contract is signed, it is also incompatible with non-Windows computers. A newly developed system helps fix these shortcomings.

Automation Pro is an online Customer Relationship Management (CRM) system designed by Christopher Schleich, tailored specifically for the home automation industry. Automation Pro is built with the ASP.NET and MVC frameworks using Microsoft Visual Studio. The goal of the system is to provide all of the tools that D-Tools System Integrator fails to deliver, in a user interface that automatically resizes based on the device accessing the system. The design and implementation of Automation Pro is the focus of this project.
118.
Scalable, situationally aware visual analytics and applications
Eaglin, Todd, 20 April 2017
There is a need to understand large and complex datasets to provide better situational awareness, in order to make timely, well-informed, actionable decisions in critical environments. Such environments include emergency evacuations of large buildings, indoor routing in buildings during emergencies, large-scale critical infrastructure for disaster planning and first responders, LiDAR analysis for coastal planning in disaster situations, and social media data for health-related analysis. I introduce novel work and applications in real-time interactive visual analytics in these domains. I also detail techniques, systems, and tools across a range of disciplines, from GPU computing for real-time analysis to machine learning for interactive analysis on mobile and web-based platforms.
119.
Formal reasoning in software-defined networks
Reitblatt, Mark, 01 March 2017
This thesis presents an end-to-end approach for building computer networks whose behavior can be reasoned about and verified formally. In it, we present: a high-level specification language, based on regular expressions over network paths, for describing the desired forwarding behavior of networks, together with a tool that automatically verifies network forwarding policies; an approach to building formally verified compilers and runtimes that preserve the semantics of forwarding policies written in a network programming language; and a technique for updating network configurations while preserving correctness.
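To make the "regular expressions over network paths" idea concrete, here is a toy sketch (not the thesis's language or verifier): a policy is an ordinary regex over switch names, and verification checks that every forwarding path from source to destination matches it. The topology and policy are invented.

```python
import re

def simple_paths(graph, src, dst, path=None):
    """Yield all simple (loop-free) paths from src to dst in a digraph
    given as {node: [successors]}."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:
            yield from simple_paths(graph, nxt, dst, path)

def satisfies(graph, src, dst, policy_regex):
    """Check that every forwarding path from src to dst matches a policy
    written as a regular expression over switch names."""
    pattern = re.compile(policy_regex)
    return all(pattern.fullmatch(" ".join(p))
               for p in simple_paths(graph, src, dst))

# Hypothetical topology and policy: all host1 -> host2 traffic must
# traverse the firewall switch "fw".
topo = {"host1": ["s1"], "s1": ["fw", "s2"], "fw": ["s2"], "s2": ["host2"]}
print(satisfies(topo, "host1", "host2", r"host1 .*fw.* host2"))
# False: the path host1 -> s1 -> s2 -> host2 bypasses fw.
```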
120.
Integrated Timing Analysis and Verification of Component-based Distributed Real-time Systems
Kumar, Pranav Srinivas, 19 October 2016
Distributed real-time embedded systems that address safety- and mission-critical requirements are applied in a variety of heterogeneous domains today, e.g., avionics, automotive systems, locomotives, and industrial control systems. The volume and complexity of such software grow every day, driven by an assortment of factors, including challenging system requirements such as resilience to hardware and software faults and support for remote deployment and repair. To mitigate the software complexity in such systems, model-driven, component-based software engineering has become an accepted practice. Integrating appropriate modeling and analysis techniques into the design of such systems helps ensure predictable, dependable, and safe operation upon deployment. The research presented in this dissertation has led to a methodology for modeling and analyzing the temporal behavior of such distributed component-based applications in order to verify system-level timing properties such as worst-case response times and the absence of deadline violations. Our approach relies on formalizing the structure and behavior of component-based applications using Colored Petri Nets (CPN), i.e., modeling the component assembly, operation scheduling, thread execution, etc., and analyzing the temporal behavior of the overall system using simulation, state space analysis, and model checking techniques. To bridge the gap between the system model and the analysis model, we have developed a modeling language to describe the business logic of component operations. From the overall system model and the per-operation business logic models, a CPN timing analysis model is fully generated for analysis. The generality of the modeling principles used shows the applicability of this method to a wide range of similar systems. We have also developed methods to structurally reduce the CPN and improve the scalability and performance of analysis for medium-to-large-scale systems. Lastly, the results obtained from CPN analysis have been validated by executing experimental component assemblies on a cyber-physical systems testbed, a cluster of 32 BeagleBone Black boards. Results show that the worst-case response times of component operations calculated by the CPN analysis are close, conservative estimates of real-world execution times.
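As background for the kind of system-level timing property verified here (classical response-time analysis for fixed-priority preemptive scheduling, not the CPN method itself), worst-case response times satisfy the fixed-point equation R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j. A minimal sketch of the iteration, with an invented task set:

```python
import math

def worst_case_response_times(tasks):
    """Classical response-time analysis: iterate
    R_i = C_i + sum_{j higher priority} ceil(R_i / T_j) * C_j
    to a fixed point.

    `tasks` is a list of (C, T) pairs (worst-case execution time, period),
    ordered from highest to lowest priority. Returns one R per task, or
    None where the iteration exceeds the task's period (a deadline
    violation, assuming deadline == period)."""
    results = []
    for i, (C_i, T_i) in enumerate(tasks):
        R = C_i
        while True:
            interference = sum(math.ceil(R / T_j) * C_j
                               for C_j, T_j in tasks[:i])
            R_next = C_i + interference
            if R_next > T_i:
                results.append(None)  # unschedulable at this priority
                break
            if R_next == R:
                results.append(R)     # fixed point reached
                break
            R = R_next
    return results

# Hypothetical task set: (C, T) in milliseconds, highest priority first.
print(worst_case_response_times([(1, 4), (2, 6), (3, 12)]))  # [1, 3, 10]
```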