171
Compile time processing of Ada task constructs for real-time programming (Unknown Date)
Compile-time preprocessing can be used to schedule systems of periodic tasks written in an Ada subset. Applied to a task system, this can both test feasibility and simplify runtime scheduling. This can be viewed as both a program verification technique and a more efficient implementation of Ada tasks. / Two models of Ada tasking are considered. Both consist of periodic Ada tasks with relative deadlines. The first model assumes that all rendezvous are determined at compile time (deterministic rendezvous) and allows arbitrary task start times. Task systems conforming to this model are transformed into systems of independent tasks allowing earliest deadline scheduling. The second model allows more Ada-like nondeterministic rendezvous but requires that all tasks start at the same time. Task systems conforming to this model are processed by enumerating equivalent systems with deterministic rendezvous until one is schedulable. The result in either case is a set of periodic independent tasks. These can be scheduled directly at runtime using the earliest deadline scheduling algorithm, or scheduling can be simulated at compile time to create a cyclic deterministic schedule. / The results of experimental studies of the preprocessing of Ada programs conforming to the nondeterministic rendezvous model are presented. The first study uses a simulation program which processes a skeletal Ada task system and prints out a valid schedule for it if one exists. The second involves implementing a program which translates Ada programs into executable deterministic task systems. / Source: Dissertation Abstracts International, Volume: 52-03, Section: B, page: 1544. / Major Professor: Theodore P. Baker. / Thesis (Ph.D.)--The Florida State University, 1991.
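As background on the scheduling step, a minimal sketch of earliest-deadline-first simulation over one hyperperiod for independent periodic tasks is given below (in Python rather than Ada, with a hypothetical Task record and task set; it illustrates the standard EDF idea, not the dissertation's preprocessor):

```python
from math import gcd
from functools import reduce
from dataclasses import dataclass

@dataclass
class Task:
    name: str       # identifier (hypothetical example data)
    period: int     # release interval
    wcet: int       # worst-case execution time
    deadline: int   # relative deadline (here assumed <= period)

def edf_simulate(tasks):
    """Simulate earliest-deadline-first over one hyperperiod.
    Returns the cyclic slot-by-slot schedule, or None on a deadline miss."""
    hyper = reduce(lambda a, b: a * b // gcd(a, b), (t.period for t in tasks))
    jobs = []                                   # active jobs: [remaining, abs_deadline, name]
    schedule = []
    for t in range(hyper):
        for task in tasks:
            if t % task.period == 0:            # periodic job release
                jobs.append([task.wcet, t + task.deadline, task.name])
        if any(t >= d for _, d, _ in jobs):     # unfinished work past its deadline
            return None
        if jobs:
            jobs.sort(key=lambda j: j[1])       # earliest absolute deadline first
            jobs[0][0] -= 1
            schedule.append(jobs[0][2])
            if jobs[0][0] == 0:
                jobs.pop(0)
        else:
            schedule.append(None)               # idle slot
    return schedule if not jobs else None       # leftover work means a miss at the boundary

# Hypothetical task set; it is feasible, so a 20-slot cyclic schedule is printed.
print(edf_simulate([Task("T1", 4, 1, 4), Task("T2", 5, 2, 5), Task("T3", 10, 2, 10)]))
```

A None result signals an unschedulable set; otherwise the returned slot list is the kind of cyclic deterministic schedule the abstract mentions.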
172
Improving the Effectiveness of Performance Analysis for HPC by Using Appropriate Modeling and Simulation Schemes (Unknown Date)
Performance modeling and simulation of parallel applications are critical performance analysis techniques in High Performance Computing (HPC). Efficient and accurate performance modeling and simulation can aid the tuning and optimization of current systems as well as the design of future HPC systems. As HPC applications and systems increase in size, efficient and accurate performance modeling and simulation of parallel applications are becoming increasingly challenging. In general, simulation yields higher accuracy than modeling at the cost of long simulation time. This dissertation aims at developing effective performance analysis techniques for the next generation of HPC systems. Since modeling is often orders of magnitude faster than simulation, the idea is to separate HPC applications into two types: 1) those for which modeling produces performance results similar to simulation, and 2) those for which simulation yields more meaningful information about application performance than modeling. By using modeling for the first type of application and simulation for the rest, the efficiency of performance analysis can be significantly improved. The contribution of this thesis is three-fold. First, a comprehensive study of the performance and accuracy trade-offs between modeling and simulation on a wide range of HPC applications is performed. The results indicate that for the majority of HPC applications, modeling and simulation yield similar performance results. This lays the foundation for improving performance analysis on HPC systems by selecting between modeling and simulation for each application. Second, a scalable and fast classification technique (MFACT) is developed based on Lamport's logical clock; it provides fast diagnosis of MPI application performance bottlenecks and assists in application tuning and optimization on current and future HPC systems. MFACT also classifies HPC applications into bandwidth-bound, latency-bound, communication-bound, and computation-bound. Third, built upon MFACT, statistical methods are introduced to classify HPC applications, for a given system configuration, into the two types: those that need simulation and those for which modeling is sufficient. The classification techniques and tools enable effective performance analysis for future HPC systems and applications without losing accuracy. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester 2017. / November 6, 2017. / Application, Communication, HPC, Performance modeling, performance simulation / Includes bibliographical references. / Xin Yuan, Professor Directing Dissertation; Fengfeng Ke, University Representative; Zhenghao Zhang, Committee Member; Sonia Haiduc, Committee Member; Scott Pakin, Committee Member.
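For context on the clock MFACT builds on, a minimal sketch of Lamport's logical clock for message-passing processes is shown below (this is textbook background, not the MFACT tool itself):

```python
class Process:
    """Minimal Lamport logical clock for one message-passing process."""
    def __init__(self, pid):
        self.pid = pid
        self.clock = 0

    def local_event(self):
        self.clock += 1                 # any internal event advances the clock
        return self.clock

    def send(self):
        self.clock += 1                 # sending is an event
        return self.clock               # timestamp piggybacked on the message

    def receive(self, msg_timestamp):
        self.clock = max(self.clock, msg_timestamp) + 1   # merge, then tick
        return self.clock

# Toy usage: p0 sends to p1; the receive is logically ordered after the send.
p0, p1 = Process(0), Process(1)
ts = p0.send()
print(p1.receive(ts))                   # prints 2
```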
173
Game Based Visual-to-Auditory Sensory Substitution Training (Unknown Date)
There has been a great deal of research devoted to computer-vision-related assistive technologies. Unfortunately, this area of research has not produced many usable solutions: the long cane and the guide dog are still far more useful than most of these devices. Through the push for advanced mobile and gaming systems, new low-cost hardware has become available for building innovative and creative assistive technologies. These technologies have been used in sensory substitution projects that attempt to convert vision into either auditory or tactile stimuli, and these projects have reported some degree of measurable success. Most of them focused on converting either image brightness or depth into auditory signals. This research was devoted to the design and creation of a video game simulator capable of supporting research and training for sensory substitution concepts that convert vision into auditory stimuli. The simulator was used to perform direct comparisons between some of the popular sensory substitution techniques as well as to explore new conversion concepts. A study of 42 participants tested different techniques for image simplification and found that depth-to-tone sensory substitution may be more usable than brightness-to-tone substitution. The study also showed that 3D game simulators can be used in lieu of building costly prototypes for testing new sensory substitution concepts. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester 2015. / November 6, 2015. / auditory vision, image to tone conversion, Sensory Substitution Training, Serious Gaming, Vision Impairment / Includes bibliographical references. / Gary Tyson, Professor Directing Dissertation; Gordon Erlebacher, University Representative; Xiuwen Liu, Committee Member; Margareta Ackerman, Committee Member.
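As a rough illustration of what a depth-to-tone conversion can look like, the sketch below maps one row of depth values to an audio sweep; the frequency and amplitude ranges are illustrative assumptions, not the mappings evaluated in the study:

```python
import numpy as np

def depth_to_tone(depth_row, sample_rate=44100, col_duration=0.05,
                  f_min=200.0, f_max=2000.0, max_depth=5.0):
    """Map one left-to-right row of depth values (meters) to an audio sweep.
    Nearer surfaces get a higher pitch and louder volume; all ranges here are
    illustrative assumptions, not the study's settings."""
    n = int(sample_rate * col_duration)
    t = np.arange(n) / sample_rate
    chunks = []
    for d in depth_row:
        nearness = 1.0 - min(d, max_depth) / max_depth     # 1 = close, 0 = far
        freq = f_min + nearness * (f_max - f_min)
        amp = 0.2 + 0.8 * nearness
        chunks.append(amp * np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks)

# Hypothetical depth scan: an obstacle about 1 m away near the middle of the view.
audio = depth_to_tone(np.array([4.0, 3.5, 1.0, 1.2, 3.8, 4.5]))
# `audio` can be written to a WAV file or played back with any audio library.
```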
174
Bio-signal data gathering, management and analysis within a patient-centred health care context. Munnoch, Robert Alexander, January 2017.
The healthcare service is under pressure to do more with less, and changing the way the service is modelled could be the key to saving resources and increasing efficacy. This change could be made possible by patient-centric care models. Such a model would include straightforward, easy-to-use telemonitoring devices and a flexible data management structure. The structure would maintain its state by ingesting many sources of data, then tracking this data through cleaning and processing into models and estimates, to obtain values from the data that the patient could use. By automating the data management, the system can become less disease-focused and more health-focused, preventative in nature, and can allow patients to be more proactive and involved in their care. This work presents the development of a new device together with a data management and analysis system that utilises the data from this device and supports data processing, along with two examples of its use: signal quality assessment and blood pressure estimation. This system could aid in the creation of patient-centric telecare systems.
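The ingest, clean, process, estimate structure described above can be pictured as a short chain of stages; the sketch below is a hypothetical illustration in which the moving-average filter and the placeholder blood pressure estimator are assumptions, not the thesis design:

```python
from statistics import mean

def ingest(sources):
    """Collect raw samples from each telemonitoring source (hypothetical flat format)."""
    return [sample for source in sources for sample in source]

def clean(samples, window=3):
    """Smooth the raw signal with a simple moving average, standing in for the
    real signal-quality processing stage."""
    return [mean(samples[max(0, i - window + 1):i + 1]) for i in range(len(samples))]

def estimate_blood_pressure(cleaned):
    """Placeholder estimator mapping a cleaned signal to one summary value;
    a real estimator would use a validated physiological model."""
    return 80 + 0.5 * mean(cleaned)

readings = [[60, 62, 61], [59, 63, 64]]        # hypothetical readings from two devices
print(estimate_blood_pressure(clean(ingest(readings))))
```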
175
High Performance Multi-core Transaction Processing via Deterministic Execution. Faleiro, Jose Manuel, 19 March 2019.
The increasing democratization of server hardware with multi-core CPUs and large main memories has been one of the dominant hardware trends of the last decade. "Bare metal" servers with tens of CPU cores and over 100 gigabytes of main memory have been available for several years now. Recently, this large-scale hardware has also become available via the cloud; for instance, Amazon EC2 now provides instances with 64 physical CPU cores. Database systems, with their roots in uniprocessors and a paucity of main memory, have unsurprisingly been found wanting on modern hardware.

In addition to changes in hardware, database systems have had to contend with changing application requirements and deployment environments. Database systems have long provided applications with an interactive interface, in which an application can communicate with the database over several round-trips in the course of a single request. A large class of applications, however, does not require interactive interfaces and is unwilling to pay the performance cost associated with overly flexible interfaces. Some of these applications have eschewed database systems altogether in favor of high-performance key-value stores.

Finally, modern applications are deployed at ever increasing scales, often serving hundreds of thousands to millions of simultaneous clients. These large-scale deployments are more prone to errors due to consistency issues in their underlying database systems. Ever since their inception, database systems have allowed applications to trade off consistency for performance, and often nudge applications towards weak consistency. When deployed at scale, weak consistency exposes latent consistency-related bugs, in the same way that failures are more likely to occur at scale. Nearly every widely deployed database system provides applications with weak consistency by default, and its widespread use in practice significantly complicates application development, leading to latent Heisenbugs that are only exposed in production.

This dissertation proposes and explores the use of deterministic execution to address these concerns. Database systems have traditionally been non-deterministic; given an input list of transactions, the final state of the database, which corresponds to some totally ordered execution of transactions, is dependent on non-deterministic factors such as thread scheduling decisions made by the operating system and failures. Deterministic execution, on the other hand, ensures that the database's final state is always determined by its input list of transactions; in other words, the input list of transactions is the same as the total order of transactions that determines the database's state.

While non-deterministic database systems expend significant resources in determining valid total orders of transactions, we show that deterministic systems can exploit simple and low-cost up-front total ordering of transactions to execute and schedule transactions much more efficiently. We show that deterministic execution enables low-overhead, highly parallel scheduling mechanisms that can address the performance limitations of existing database systems on modern hardware. Deterministic database systems are designed based on the assumption that applications can submit their transactions as one-shot prepared transactions instead of multiple round-trips.

Finally, we attempt to understand the fundamental reason for the observed performance differences between various consistency levels in database systems and, based on this understanding, show that we can exploit deterministic execution to provide strong consistency at a cost that is competitive with that offered by weak consistency levels.
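The core property, that the final database state is a function of the input transaction order alone, can be caricatured in a few lines; the single-node, one-key-per-transaction layout below is a simplification for illustration, not the dissertation's scheduler:

```python
from collections import defaultdict

def deterministic_execute(transactions, num_partitions=4):
    """Toy deterministic executor: the input list fixes the total order, so the final
    state depends only on that order. Each transaction here touches a single key,
    which lets every partition apply its share independently while still respecting
    the global sequence (a deliberate simplification)."""
    store = {}
    partitions = defaultdict(list)
    for seq, (key, update) in enumerate(transactions):
        partitions[hash(key) % num_partitions].append((seq, key, update))
    # Each partition list is already sorted by the global sequence number, so the
    # partitions can be applied one by one (or by parallel workers) with the same result.
    for plist in partitions.values():
        for _, key, update in plist:
            store[key] = update(store.get(key, 0))
    return store

# Hypothetical one-shot transactions submitted as (key, update function) pairs.
txns = [("x", lambda v: v + 10), ("y", lambda v: v + 1), ("x", lambda v: v * 2)]
print(deterministic_execute(txns))    # {'x': 20, 'y': 1} regardless of worker scheduling
```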
176
A Compositional Automation Engine for Verifying Complex System Software. Wu, Xiongnan Newman, 19 March 2019.
Formal verification is the only known way of building bug-free or hacker-proof programs. However, due to its prohibitive associated costs, formal verification has rarely been considered an option in building robust large-scale system software. Practical system software normally consists of highly correlated, interdependent subsystems with complex invariants that need to be globally maintained. To reason about the correctness of a program, we not only need to show that the program in consideration satisfies the invariants and the specification, but also prove that the invariants cannot be accidentally broken by other parts of the system, e.g., via pointer manipulation. Furthermore, we often have snippets of code that temporarily break the invariants and re-establish them later, which makes reasoning about such code more complex. Even worse, much complex system software contains device drivers, which brings the major challenge of handling device interrupts, and consists of multiple threads running on multiple CPUs concurrently. This forces us to further reason about arbitrary interactions and interleaved executions among different devices, interrupts, and programs running on different CPUs, which can quickly make the verification task intractable.

In this dissertation, we present a compositional and powerful automation engine for effectively verifying complex system software. It is compositional because it focuses solely on providing strong automation support for verifying functional correctness properties of C source programs, while taking the memory isolation and invariant properties as given, and separately provides a systematic approach for guaranteeing the isolation among different in-memory data and proving invariants entirely at the logical level. The engine also contains a novel way of representing devices and drivers, and a simulation-based approach for turning the low-level interrupt model into an equivalent abstract model suitable for reasoning about interruptible code. Furthermore, the engine provides a new way of representing concurrently shared state as a sequence of external I/O events, allowing us to verify concurrent programs as if they were sequential, and provides a separate logical framework to effectively reason about interleaved executions. This modular design allows us to reason about each aspect of the system separately; while each individual reasoning task looks remarkably simple, the proofs can be combined into proofs of properties about complex system software. An OS kernel is a typical example of complex low-level system software with highly interdependent modules. To illustrate the effectiveness of our approach, using all of these tools, we have developed a fully verified, feature-rich operating system kernel with a machine-checkable proof in the Coq proof assistant.
177
Real-Time Inverse Lighting for Augmented Reality Using a Dodecahedral Marker. Straughn, Glen K., 21 March 2019.
Lighting is a major factor in the perceived realism of virtual objects, and thus lighting virtual objects so that they appear to be illuminated by real-world light sources, a process known as inverse lighting, is a crucial component of creating realistic augmented reality images. This work presents a new, real-time inverse lighting method that samples the light reflected off of a regular, twelve-sided (dodecahedral) 3D object to estimate the direction of a scene's primary light source. Using the light sample results, each visible face of the dodecahedron is determined to be either in light or in shadow. One or more light vectors are then calculated for each face: if the face is in light, its surface normal vector is used as a light direction vector; if the face is in shadow, its surface normal is reflected across the normal vector of every adjacent illuminated face. If a shadowed face is not adjacent to any illuminated faces, its normal vector is reversed instead. These light vectors are then averaged to produce a vector pointing to the primary light source in the environment. The method is designed with special consideration for ease of use, requiring no configuration stages.
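The per-face rule described above translates almost directly into code; in the sketch below, the faces structure (unit normals, lit flags, adjacency of visible faces) is a hypothetical input format, and the function is an illustration of the described averaging rather than the thesis implementation:

```python
import numpy as np

def estimate_light_direction(faces):
    """Average per-face light vectors from a dodecahedral probe, following the
    per-face rule described above. `faces` is a hypothetical list of dicts with a
    unit 'normal', a boolean 'lit' flag, and 'adjacent' indices of visible faces."""
    votes = []
    for face in faces:
        n = face["normal"]
        if face["lit"]:
            votes.append(n)                              # lit face: normal points toward the light
        else:
            lit_neighbors = [faces[j]["normal"] for j in face["adjacent"] if faces[j]["lit"]]
            if lit_neighbors:
                for m in lit_neighbors:                  # reflect the shadowed normal across each lit neighbor's normal
                    votes.append(2 * np.dot(n, m) * m - n)
            else:
                votes.append(-n)                         # no lit neighbor: reverse the normal
    light = np.mean(votes, axis=0)
    return light / np.linalg.norm(light)                 # unit vector toward the primary light source

# Hypothetical two-face example: one lit face, one shadowed neighbor.
faces = [
    {"normal": np.array([0.0, 1.0, 0.0]), "lit": True,  "adjacent": [1]},
    {"normal": np.array([1.0, 0.0, 0.0]), "lit": False, "adjacent": [0]},
]
print(estimate_light_direction(faces))
```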
178
Indoor Scene 3D Modeling with Single Image. Fan, Chuanmao, 08 March 2019.
3D modeling is a fundamental and very important research area in computer vision and computer graphics. One specific category of this research field is indoor scene 3D modeling. Many efforts have been devoted to its development, but this particular type of modeling is far from mature. Some researchers have focused on single-view reconstruction, which reconstructs a 3D model from a single 2D indoor image. This is based on the Manhattan world assumption, which states that structure edges are usually parallel to the X, Y, and Z axes of the Cartesian coordinate system defined in a scene. Parallel lines, when projected to a 2D image, become straight lines that converge to a vanishing point. Single-view reconstruction uses these constraints to do 3D modeling from a 2D image only. However, this is not an easy task due to the lack of depth information in the 2D image. With the development and maturity of 3D imaging methods such as stereo vision, structured light triangulation, and laser stripe triangulation, devices that give 2D images associated with depth information, forming so-called RGBD images, are becoming more popular. Processing of RGB color images and depth images can be combined to ease the 3D modeling of indoor scenes. Two such methods are developed in this thesis for comparison: the first is region growing segmentation, and the second is RANSAC planar segmentation performed directly in 3D. Results are compared and the 3D modeling is illustrated. The 3D modeling stage consists of plane labeling; automatic detection of floor, wall, and boundary points; partitioning of wall domains using the automatically detected walls and wall boundary points in the 2D image; and construction of the 3D model by extruding from the boundary points obtained on the floor plane. Tests were conducted to verify the method.
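For the RANSAC planar segmentation route, a bare-bones single-plane fit over a point cloud might look like the sketch below; the threshold, iteration count, and synthetic test cloud are arbitrary assumptions, and a full pipeline would repeat the fit to peel off multiple planes:

```python
import numpy as np

def ransac_plane(points, iters=200, threshold=0.01, rng=np.random.default_rng(0)):
    """Fit one dominant plane to an Nx3 point cloud with RANSAC.
    Returns ((unit normal n, offset d) for the plane n.x + d = 0, inlier mask);
    the threshold and iteration count are illustrative assumptions."""
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:              # degenerate (near-collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -np.dot(n, sample[0])
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Hypothetical cloud: a noisy floor plane (z near 0) plus scattered clutter points.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0, 2, 500), rng.uniform(0, 2, 500),
                         rng.normal(0, 0.003, 500)])
clutter = rng.uniform(0, 2, (100, 3))
model, mask = ransac_plane(np.vstack([floor, clutter]))
print(model[0], mask.sum())                       # normal close to [0, 0, +/-1]
```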
179
A Unified Characterization of Runtime Verification Systems as Patterns of Communication. Swords, Cameron, 23 March 2019.
Runtime verification, as a field, provides tools to describe how programs should behave during execution, allowing programmers to inspect and enforce properties about their code at runtime. The field has produced a wide variety of approaches to inspecting and ensuring correct behavior, from instrumenting individual values in order to verify global program behavior to writing ubiquitous predicates that are checked over the course of execution. Although each verification approach has its own merits, each has also historically required ground-up development of an entire framework to support such verification.

In this work, we start with software contracts as a basic unit of runtime verification, exploring the myriad approaches to enforcing them, ranging from the straightforward pre-condition and post-condition verification of Eiffel to lazy, optional, and parallel enforcement strategies, and present a unified approach to understanding them, while also opening the door to as-yet-undiscovered strategies. By observing that contracts are fundamentally about communication between a program and a monitor, we reframe contract checking as communication between concurrent processes. This brings out the underlying relations between widely studied verification strategies, including strict and lazy enforcement as well as concurrent approaches, and suggests new contracts and strategies. We introduce a concurrent core calculus (with proofs of type safety), show how each of these contract verification strategies may be encoded in it, and demonstrate a proof (via simulation) of correctness for one such encoding.

After demonstrating this unified framework for contract verification strategies, we extend it with meta-strategy operators: strategy-level operations that take one or more strategies (plus additional arguments) as input and produce new verification behaviors. We use these extended behavioral constructs to optimize contract enforcement, reason about global program behavior, and even perform runtime instrumentation, ultimately developing multiple runtime verification behaviors using our communication-based view of interaction.

Finally, we introduce an extensible, Clojure-based implementation of our framework, demonstrating how our approach fits into a modern programming language by recreating our contract verification and general runtime verification examples.
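The communication-based view, in which a monitored program sends values to a separate checking process, can be caricatured with a thread and a queue; the strict and fire-and-forget modes below are illustrative stand-ins for the enforcement strategies discussed, not encodings from the dissertation's calculus:

```python
import threading, queue, time

class Monitor:
    """A checker running concurrently with the program: each contract check is a
    message sent to the monitor. A toy stand-in for the concurrent calculus."""
    def __init__(self):
        self.inbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            predicate, value, reply = self.inbox.get()
            ok = predicate(value)
            if reply is not None:
                reply.put(ok)                        # strict mode: the program is waiting
            elif not ok:
                print(f"contract violated on {value!r}")   # async mode: report concurrently

    def check_strict(self, predicate, value):
        reply = queue.Queue()
        self.inbox.put((predicate, value, reply))
        if not reply.get():                          # block until the monitor answers
            raise ValueError(f"contract violated on {value!r}")
        return value

    def check_async(self, predicate, value):
        self.inbox.put((predicate, value, None))     # fire and forget; keep computing
        return value

monitor = Monitor()
is_positive = lambda n: n > 0
x = monitor.check_strict(is_positive, 5)             # enforced before the value is used
y = monitor.check_async(is_positive, -3)             # violation surfaces asynchronously
time.sleep(0.1)                                      # give the daemon monitor a moment to report
```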
180
A Comparative Analysis of In-House and Offshore Software Development by Using Agile Methodologies at the Design/Code Phase of Software Development: An Empirical Study. Nardelli, Robert, 06 April 2019.
Offshore software projects have been common for a few decades and were once thought to be the answer to the software development issues that plagued in-house developers. Even with many recent advances in software development and communication, many projects are still compromised in some way. This dissertation analyzes in-house and offshore projects that were conducted using the waterfall methodology to determine the real source of the issues. The main hypothesis is that implementing agile, at least in part, at the design/code phase of software development will not only reduce or eliminate the issues identified under waterfall, but also show that development problems are independent of whether a project is developed offshore or in-house. This study also shows that project stakeholders who are in the process of migrating to agile development are more comfortable when agile is initially implemented at only one phase of the process.