21.
Optimizing task assignment for collaborative computing over heterogeneous network devices. Kao, Yi-Hsuan, 30 July 2016
<p> The Internet of Things promises to enable a wide range of new applications involving sensors, embedded devices, and mobile devices. Unlike traditional cloud computing, where centralized, powerful servers offer high-quality computing services, the era of the Internet of Things brings abundant computational resources distributed over the network. These devices are not as powerful as servers, but they are easier to access, with faster setup and short-range communication. However, because of energy, computation, and bandwidth constraints on smart things and other edge devices, it is imperative to collaboratively run computation-intensive applications that no single device can support on its own. Since many IoT applications, such as data processing, can be divided into multiple tasks, we study the problem of assigning these tasks to multiple devices, taking into account their capabilities as well as the costs and latencies associated with both task computation and data communication over the network.</p><p> A system that leverages collaborative computing over the network faces a highly variable run-time environment. For example, the resources released by a device may suddenly decrease due to changes in the states of local processes, or channel quality may degrade due to mobility. Hence, such a system has to learn the available resources, be aware of changes, and flexibly adapt its task assignment strategy to make efficient use of these resources.</p><p> We take a step-by-step approach to achieve these goals. First, we assume that the amount of resources is deterministic and known. We formulate a task assignment problem that aims to minimize the application latency (system response time) subject to a single cost constraint, so that the available resources are not overused. Second, we consider that each device has its own cost budget, and our new multi-constrained formulation attributes the cost to each device separately.
Moving a step further, we assume that the amount of resources follows stochastic processes with known distributions, and solve a stochastic optimization problem with a strong QoS constraint. That is, instead of providing a guarantee on the average latency, our task assignment strategy guarantees that <i>p</i>% of the time the latency is less than <i>t</i>, where <i>p</i> and <i>t</i> are arbitrary numbers. Finally, we assume that the amount of run-time resources is unknown and stochastic, and design online algorithms that learn the unknown information within a limited amount of time and make competitive task assignments. </p><p> We aim to develop algorithms that make decisions efficiently at run-time. That is, the computational complexity should be as low as possible so that running the algorithm does not incur considerable overhead. For optimizations based on a known resource profile, we show that these problems are NP-hard and propose polynomial-time approximation algorithms with performance guarantees, where the performance loss caused by a sub-optimal strategy is bounded. For the online learning formulations, we propose lightweight algorithms for both stationary and non-stationary environments, and show their competitiveness by comparing their performance with the optimal offline policy (solved by assuming the resource profile is known).</p><p> We perform comprehensive numerical evaluations, including simulations based on trace data measured at application run-time, and validate our analysis of the algorithms' complexity and performance against the numerical results. In particular, we compare our algorithms with existing heuristics and show that in some cases the performance loss incurred by a heuristic is considerable due to its sub-optimal strategy.
Hence, we conclude that to efficiently leverage the distributed computational resources over the network, it is essential to formulate a sophisticated optimization problem that captures practical scenarios well, and to provide an algorithm that is light in complexity and suggests a good assignment strategy with a performance guarantee.</p>
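As a concrete illustration of the cost-constrained formulation described above (not the dissertation's own approximation algorithm), the following sketch brute-forces assignments of tasks to devices, minimizing a simple additive latency subject to a total cost budget. The latency and cost tables, the additive latency model, and the budget are all hypothetical:

```python
from itertools import product

def best_assignment(n_tasks, n_devices, latency, cost, budget):
    """Exhaustively search assignments of tasks to devices, minimizing
    total latency subject to a total cost budget (toy model: latency is
    a plain sum, ignoring communication and parallelism)."""
    best, best_plan = float("inf"), None
    for plan in product(range(n_devices), repeat=n_tasks):
        total_cost = sum(cost[d][t] for t, d in enumerate(plan))
        if total_cost > budget:
            continue  # violates the cost constraint
        total_latency = sum(latency[d][t] for t, d in enumerate(plan))
        if total_latency < best:
            best, best_plan = total_latency, plan
    return best, best_plan

# Two devices, three tasks: device 0 is fast but costly, device 1 cheap but slow.
latency = [[1, 1, 2], [3, 4, 5]]
cost    = [[4, 4, 5], [1, 1, 1]]
best, plan = best_assignment(3, 2, latency, cost, budget=7)
```

The exhaustive search is exponential in the number of tasks; the dissertation's point is precisely that polynomial-time approximations with bounded loss are needed once instances grow.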
22.
Enforcing Security Policies On GPU Computing Through The Use Of Aspect-Oriented Programming Techniques. AlBassam, Bader, 03 August 2016
<p> This thesis presents a new security policy enforcer designed for securing parallel computation on CUDA GPUs. We show how the very features that make a GPGPU desirable have already been utilized in existing exploits, reinforcing the need for security protections on a GPGPU. An aspect weaver was designed for CUDA with the goal of utilizing aspect-oriented programming for security policy enforcement. Empirical testing verified the ability of our aspect weaver to enforce various policies. Furthermore, a performance analysis demonstrated that using this policy enforcer imposes no significant performance impact over manual insertion of policy code. Finally, future research goals are presented through a plan of work. We hope that this thesis will provide long-term research goals to guide the field of GPU security.</p>
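The weaver itself targets CUDA source, but the underlying aspect-oriented idea can be sketched in a few lines of Python as a conceptual analogue only: cross-cutting policy code is woven around a compute kernel without modifying its body. The decorator, the bounds policy, and the kernel below are hypothetical stand-ins, not the thesis's weaver:

```python
import functools

def enforce_bounds(max_len):
    """Aspect-style advice: reject inputs that exceed a length policy
    before the wrapped 'kernel' runs (a stand-in for woven device code)."""
    def aspect(kernel):
        @functools.wraps(kernel)
        def woven(data, *args, **kwargs):
            if len(data) > max_len:
                raise PermissionError("policy violation: input too large")
            return kernel(data, *args, **kwargs)
        return woven
    return aspect

@enforce_bounds(max_len=4)
def square_all(data):          # the unmodified computation
    return [x * x for x in data]
```

The point of the aspect-oriented style is that the policy lives entirely outside `square_all`; swapping or tightening a policy never touches kernel code.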
23.
Cybersecurity of wireless implantable medical devices. Ash, Sarah L., 14 June 2016
<p> Wireless implantable medical devices are used to improve and prolong the lives of persons with critical medical conditions. The World Society of Arrhythmias reported that 133,262 defibrillators had been implanted in the United States in 2009 (NBC News, 2012). With the convenience of wireless technology comes the possibility of wireless implantable medical devices being accessed by unauthorized persons with malicious intent. Each year, the Food and Drug Administration (FDA) collects information on medical device failures and has found a substantial increase in the number of failures each year (Sametinger, Rozenblit, Lysecky, & Ott, 2015). Mark Goodman, founder of the Future Crimes Institute, wrote an article regarding wireless implantable medical devices (2015). According to Goodman, approximately 300,000 Americans are implanted with wireless implantable medical devices including, but not limited to, cardiac pacemakers and defibrillators, cochlear implants, neurostimulators, and insulin pumps. Upwards of 2.5 million people depend on wireless implantable medical devices to control potentially life-threatening diseases and complications. A 2012 study completed by the Freedonia Group projected that the need for wireless implantable medical devices would increase 7.7 percent annually, creating a 52-billion-dollar business by 2015 (Goodman, 2015). This capstone project will examine the current cybersecurity risks associated with wireless implantable medical devices. The research will identify potential security threats, current security measures, and consumers' responsibilities and risks once they acquire wireless implantable medical devices. Keywords: Cybersecurity, Professor Christopher M. Riddell, critical medical conditions, FDA, medical device failures, risk assessment, wireless networks.</p>
24.
Exploring Web Simplification for People with Cognitive Disabilities. Hoehl, Jeffery Arthur, 08 June 2016
<p> The web has become more than a supplementary information resource; it is a valuable and pervasive tool for nearly all aspects of daily life, including social and community participation, health promotion, creative pursuits, education, and employment opportunities. However, the web is not yet easily accessible to all people, particularly those with cognitive disabilities, who encounter many challenges with access and use of the web, including limited accessibility of online content and difficulty with content comprehension. Furthermore, little is documented about how individuals with cognitive disabilities who currently use the web are overcoming, or being inhibited by, these challenges. Much of what is documented is anecdotal or generalized as broad technical guidance rather than providing methods to empower individual end users. This research explores which websites people with cognitive disabilities use and do not use, and what challenges and successes they encounter with those websites. We developed the SimpleWebAnywhere tool to address the above research needs and to serve as a technology probe to determine how content simplification affects web use by people with cognitive disabilities. We explored personalizable content transformation techniques, including advertisement removal, content extraction, and text-to-speech, to make webpages easier to use and comprehend. We found that many people with cognitive disabilities frequently access the web for long periods of time, despite popular opinion to the contrary. Web access is preferred via mobile platforms, such as smartphones and tablet computers. Users had a strong preference for entertainment content largely comprised of images, videos, and games, but did not necessarily have difficulty using or understanding long, complex textual content.
An inter-community approach of combining existing open-source software to provide personalized content manipulations was found to be an effective method for improving web accessibility for people with cognitive disabilities.</p>
25.
FHIR: Cell-Level Security and Real-Time Access with Accumulo. Ruiz, Daniel Alfonso, 03 June 2016
<p> The American Recovery and Reinvestment Act (ARRA) requires the adoption of Electronic Medical Records (EMRs) for seventy percent of the primary care provider population by 2014. Furthermore, by 2015 providers are expected to be utilizing EHRs in compliance with the “meaningful use” definition [28], or they can face financial penalties under Medicare. In addition to this momentous task, EMR data has stringent security requirements. It is largely due to these security requirements that medical information is being digitized. However, sharing information, even with entitled parties, is often slow or non-existent because of information silos. Fast Healthcare Interoperability Resources (FHIR) is an emerging information sharing standard designed to aid in tearing down these silos. The focus of this thesis is to show how FHIR can be further improved by allowing for cell-level security. Additionally, this thesis introduces novel ways in which vast amounts of FHIR resources can be stored and queried in real time with Accumulo. It does so by utilizing and improving on the Dynamic Distributed Dimensional Data Model (D4M) schema [9] to better allow for “real-time” REST queries of FHIR-compliant data. Pagination is necessary for the system to remain real-time, since some queries can have millions or even billions of positive hits. To satisfy this requirement, a new approach to Accumulo pagination is laid out that increases performance, flexibility, and control. All tests are performed on an m4.2xlarge Amazon EC2 instance.</p>
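The pagination idea can be illustrated independently of Accumulo: serve fixed-size pages from a sorted key space and hand back the last key served as a continuation token, so the next request seeks directly past it instead of re-scanning. This sketch uses a plain sorted Python list rather than Accumulo's client API, and the key format is hypothetical:

```python
import bisect

def paginate(sorted_keys, start_after, page_size):
    """Return one page of keys strictly after 'start_after', plus a
    continuation token (the last key served, or None when exhausted)."""
    i = bisect.bisect_right(sorted_keys, start_after)
    page = sorted_keys[i:i + page_size]
    token = page[-1] if len(page) == page_size else None
    return page, token

keys = [f"patient/{n:04d}" for n in range(10)]  # hypothetical row keys
page1, tok1 = paginate(keys, "", 4)   # first page: patient/0000..0003
page2, tok2 = paginate(keys, tok1, 4) # resumes after the token
```

Because each request carries only a token rather than an offset, the server never has to skip over earlier results, which is what keeps response time flat even when a query matches millions of rows.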
26.
Bayesian generative modeling for complex dynamical systems. Guan, Jinyan, 08 June 2016
<p> This dissertation presents a Bayesian generative modeling approach for complex dynamical systems, applied to emotion-interaction patterns within multivariate data collected in social psychology studies. While dynamical models have been used by social psychologists in recent years to study complex psychological and behavioral patterns, most of these studies have been limited by the use of regression methods to fit the model parameters from noisy observations. These regression methods mostly rely on estimates of derivatives from the noisy observations, and thus easily overfit and fail to predict future outcomes. A Bayesian generative model solves this problem by integrating prior knowledge of where the data comes from with the observed data through posterior distributions. It allows the development of theoretical ideas and mathematical models to be independent of inference concerns. Moreover, Bayesian generative statistical modeling allows the model to be evaluated on its predictive power, rather than on the residual-error reduction of regression methods, preventing overfitting in social psychology data analysis. </p><p> Within the proposed Bayesian generative modeling approach, this dissertation uses the State Space Model (SSM) to model the dynamics of emotion interactions. Specifically, it tests the approach on a class of psychological models aimed at explaining the emotional dynamics of interacting couples in committed relationships. The latent states of the SSM are continuous real numbers that represent the levels of the true emotional states of both partners. One can obtain the latent states at all subsequent time points by evolving a differential equation (typically a coupled linear oscillator (CLO)) forward in time from a known initial state at the starting time. The multivariate observed states include self-reported emotional experiences and physiological measurements of both partners during the interactions.
To test whether well-being factors, such as body weight, can help to predict emotion-interaction patterns, we construct functions that determine the prior distributions of the CLO parameters of individual couples based on existing emotion theories. In addition, we allow a single latent state to generate multivariate observations, and learn the group-shared coefficients that specify the relationship between the latent states and the multivariate observations. </p><p> Furthermore, we model the nonlinearity of the emotional interaction by allowing smooth changes (drift) in the model parameters. By restricting the stochasticity to the parameter level, the proposed approach models the dynamics of longer periods of social interaction, under the assumption that the interaction dynamics vary slowly and smoothly over time. It achieves this by applying Gaussian Process (GP) priors with smooth covariance functions to the CLO parameters. We also propose to model emotion regulation patterns as clusters of the dynamical parameters. To infer the parameters of the proposed Bayesian generative model from noisy experimental data, we develop a Gibbs sampler that learns the parameters of the patterns from a set of training couples. </p><p> To evaluate the fitted model, we develop a multi-level cross-validation procedure that learns the group-shared parameters and distributions from training data and tests the learned models on held-out testing data. During testing, we use the learned shared model parameters to fit the individual CLO parameters to the first 80% of the time points of the testing data by Monte Carlo sampling, and then predict the states of the last 20% of the time points. Evaluating models with cross-validation makes it possible to detect whether complex models are overfitted to noisy observations and fail to generalize to unseen data.
We test our approach on both synthetic data generated by the generative model and real data collected in multiple social psychology experiments. The proposed approach has the potential to model other complex behaviors, since the generative model is not restricted to particular forms of the underlying dynamics.</p>
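To make the CLO component concrete, the following sketch integrates one common textbook form of a two-partner coupled linear oscillator forward in time with semi-implicit Euler steps. The parameter names, the exact coupling form, and the values are illustrative, not those fitted in the dissertation:

```python
import numpy as np

def simulate_clo(freq, damp, couple, x0, v0, dt=0.01, steps=2000):
    """Integrate a two-partner coupled linear oscillator:
        x_i'' = -freq[i]*x_i - damp[i]*x_i' + couple[i]*(x_j - x_i)
    using semi-implicit Euler. Returns the latent-state trajectory."""
    x, v = np.array(x0, float), np.array(v0, float)
    traj = [x.copy()]
    for _ in range(steps):
        a = -freq * x - damp * v + couple * (x[::-1] - x)  # x[::-1] swaps partners
        v = v + dt * a          # update velocity first (semi-implicit)
        x = x + dt * v
        traj.append(x.copy())
    return np.array(traj)

traj = simulate_clo(freq=np.array([1.0, 1.5]), damp=np.array([0.1, 0.1]),
                    couple=np.array([0.3, 0.3]), x0=[1.0, -0.5], v0=[0.0, 0.0])
```

In the generative framing, a trajectory like `traj` plays the role of the latent emotional states; observations would then be noisy functions of it, and inference runs in the reverse direction.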
27.
Characterize the Difficulties that International Computer Science Students Face. Alharbi, Eman, 04 May 2016
<p> International Computer Science students, who form the majority of students in engineering colleges in the U.S. (Anderson, 2013), face many difficulties and barriers that remain unknown and unexpressed. Hiding these struggles may affect the quality of their education, and causes the struggles to recur with each incoming cohort of students. We conducted a qualitative study to discover the barriers that international Computer Science students face and their special needs. The data was collected by interviewing international Computer Science students and some of their instructors at the University of Colorado at Colorado Springs (UCCS). The study found that international Computer Science students face English-language barriers along the following dimensions: listening to and understanding lectures, participating and expressing ideas, presenting, writing, and reading. Moreover, students identified another set of difficulties, namely technical barriers related to their educational background and their ability to use advanced software tools.</p>
28.
An integrated modelling framework for the design and construction of distributed messaging systems. Makoond, Bippin Lall, January 2008
Having evolved to gain the capabilities of a computer and the inherent characteristic of mobility, mobile phones have transcended into the realm of the Internet, forcing mobile telecommunications to experience the phenomenon of IP convergence. Within the wide spectrum of mobile services, the messaging business has emerged as the most promising candidate for exploiting the Internet, owing to its adaptability and growing popularity. However, mobile operators have to change the way they traditionally handle message logistics, transforming their technologies while adhering to aspects of quality of service. To keep up with the growth in messaging, which in the UK alone reached 52 billion messages in 2007, and with the increased complexity of the messages, there is an urgent need to move away from traditional monolithic architectures and to adopt distributed and autonomous systems. The aim of this thesis is to propose and validate the implementation of a new distributed messaging infrastructure that will sustain the dynamics of the mobile market by providing innovative technological resolutions to the common problems of quality modelling, communication, evolution, and resource management within mobile telecoms. Designing such systems requires techniques found not only in classical software engineering but also in the scientific method, statistics, and economics; hence the problem of combining these tools in a logical and meaningful manner. To address this problem, we propose a new blended modelling approach which is at the heart of the research process model. We formulate a class of problems that categorises problem attributes into an information system and assess each requirement against a quality model. To ensure that quality is imprinted in the design of the distributed messaging system, we formulate dynamic models and simulation methods to measure the QoS capabilities of the system, particularly in terms of communication and distributed resource management.
The outcomes of extensive simulation enabled the design of predictive models to build a system for capacity. A major contribution of this work relates to the problem of integrating the aspect of evolution within the communication model. We propose a new multi-criteria decision-making mechanism called the BipRyt algorithm, which essentially preserves the quality model of the system as it grows in size and evolves in complexity. The decision-making process is based on the availability of computational resources, associated rules of usage, and defined rules for a group of users or the system as a whole. The algorithm allows for local and global optimisation of resources during the system life cycle while managing conflicts among the rules, such as race conditions and resource starvation. Another important contribution relates to the process of organising and managing nodes over distributed shared memory. We design the communication model in the shape of a grid architecture, which empowers the concept of single-point management of the system (without being a single point of failure), using the same discipline as managing an information system. The distributed shared memory is implemented over the concept of RDMA, where the system runs at very high performance and low latency while preserving requirements such as high availability and horizontal scalability. A working prototype of the grid architecture is presented, which compares different network technologies against a set of quality metrics for validation purposes.
29.
Gamification in Introductory Computer Science. Behnke, Kara Alexandra, 31 December 2015
<p> This thesis investigates the impact of gamification on student motivation and learning in several introductory computer science educational activities. The use of game design techniques in education offers the potential to make learning more motivating and more enjoyable for students. However, the design, implementation, and evaluation of game elements that actually realize this promise remain a largely unmet challenge. This research examines whether the introduction of game elements into the curriculum positively impacts student motivation and intended learning outcomes for entry-level computer science education, across four settings that apply similar game design techniques in different introductory computer science educational contexts. The results of these studies are evaluated using mixed methods to compare the effects of game elements on student motivation and learning in both formal and non-formal learning environments.</p>
30.
Sketch-based skeleton-driven 2D animation and motion capture. Pan, Junjun, January 2009
This research is concerned with the development of a set of novel sketch-based skeleton-driven 2D animation techniques, which allow the user to produce realistic 2D character animation efficiently. The technique consists of three parts: sketch-based skeleton-driven 2D animation production, 2D motion capture, and a cartoon animation filter. In traditional 2D animation production, key-frames are drawn manually by experienced animators, a laborious and time-consuming process. With the proposed techniques, the user only inputs one image of a character and sketches a skeleton for each subsequent key-frame. The system then deforms the character according to the sketches and produces the animation automatically. To perform 2D shape deformation, a variable-length needle model is developed, which divides the deformation into two stages: skeleton-driven deformation and nonlinear deformation in joint areas. This approach preserves local geometric features and global area during animation. Compared with existing 2D shape deformation algorithms, it reduces computational complexity while still yielding plausible deformation results. To capture the motion of a character from existing 2D image sequences, a 2D motion capture technique is presented. Since this technique is skeleton-driven, the motion of a 2D character is captured by tracking the joint positions. Using both geometric and visual features, this problem can be solved by optimization, which prevents self-occlusion and feature disappearance. After tracking, the motion data are retargeted to a new character using the deformation algorithm proposed in the first part. This facilitates the reuse of the characteristics of motion contained in existing moving images, making the process of cartoon generation easy for artists and novices alike. Subsequent to the 2D animation production and motion capture, a "Cartoon Animation Filter" is implemented and applied.
Following the classic animation principles, this filter processes two types of cartoon input: a single frame of a cartoon character and motion capture data from an image sequence. It adds anticipation and follow-through to the motion, with related squash-and-stretch effects.
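A minimal version of such a filter can be sketched as follows: exaggerate a 1-D motion curve by subtracting its smoothed second derivative, which automatically introduces a small dip before a sudden move (anticipation) and an overshoot after it (follow-through). This is an illustrative re-implementation of the general idea, not the thesis's filter; the kernel width and strength are arbitrary:

```python
import numpy as np

def cartoon_filter(signal, sigma=3.0, strength=1.0):
    """Exaggerate a 1-D motion curve by subtracting its Gaussian-smoothed
    second derivative (a Laplacian-of-Gaussian-like response)."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    gauss = np.exp(-t**2 / (2 * sigma**2))
    gauss /= gauss.sum()                      # normalized smoothing kernel
    smoothed = np.convolve(signal, gauss, mode="same")
    second_deriv = np.gradient(np.gradient(smoothed))
    return signal - strength * second_deriv

step = np.concatenate([np.zeros(50), np.ones(50)])  # a sudden move
filtered = cartoon_filter(step)
```

Applied to the step input, the output dips slightly below zero just before the move and overshoots above one just after it, which is exactly the anticipation and follow-through behavior the filter is meant to add.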