561. Network-Calculus-Based Analysis of Power Management in Video Sensor Networks
Cao, Yanchuan, 30 December 2008
This thesis considers two important issues for video sensor networks: (1) timely delivery of captured video streams and (2) energy-efficient network design. Based on network calculus, it presents a unified analytical framework that can quantitatively weigh the tradeoff between these two factors. In particular, it derives the service curve and the buffer and delay bounds under single-hop and multi-hop scenarios for different power management policies. To the best of our knowledge, this is the first work that extends network calculus theory to the domain of wireless networks while accounting for power management and wireless interference. Our analysis has been validated through experiments on a video sensor network testbed.
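For readers unfamiliar with network calculus, the flavor of such bounds can be seen with the textbook token-bucket/rate-latency pair. The sketch below is illustrative only: the duty-cycle adjustment is a common rough approximation, not the service curve the thesis derives, and all numbers are invented.

```python
# Illustrative only: textbook network-calculus bounds for a token-bucket
# arrival curve a(t) = b + r*t and a rate-latency service curve
# beta(t) = R * max(t - T, 0). A duty-cycled radio is modeled here by
# scaling the service rate and stretching the latency; the thesis derives
# its own service curves, which this sketch does not reproduce.

def delay_bound(b, r, R, T):
    """Worst-case delay for token-bucket traffic over a rate-latency server."""
    assert R > r, "stability requires service rate > sustained arrival rate"
    return T + b / R

def backlog_bound(b, r, R, T):
    """Worst-case buffer occupancy (backlog) under the same curves."""
    return b + r * T

# Hypothetical numbers: 2 Mb/s video with 0.5 Mb bursts over a radio that
# sleeps 60% of the time (duty cycle d = 0.4), base rate 10 Mb/s.
b, r = 0.5, 2.0                        # Mb, Mb/s
d, R_base, T_base = 0.4, 10.0, 0.01    # duty cycle, Mb/s, s
R_eff, T_eff = d * R_base, T_base / d  # crude duty-cycling approximation

print(delay_bound(b, r, R_eff, T_eff))    # ~0.15 seconds
print(backlog_bound(b, r, R_eff, T_eff))  # ~0.55 megabits of buffer
```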
562. MODELING, SIMULATION, AND VERIFICATION OF BIOCHEMICAL PROCESSES USING STOCHASTIC HYBRID SYSTEMS
Riley, Derek David, 08 April 2009
Formal modeling and analysis methods hold great promise for furthering discovery and innovation in biochemical systems. Domain experts from physicians to chemical engineers can use computational modeling and analysis tools to clarify and demystify complex systems. However, developing accurate and efficient modeling methodologies and analysis techniques for biochemical systems poses several challenges: simulation is difficult because of the complex dynamics, exhaustive verification is computationally expensive for large systems, and Monte Carlo methods are inaccurate and inefficient when rare events are present.
This dissertation uses Stochastic Hybrid Systems (SHS) for modeling and analysis because they can formally capture the complex dynamics of a large class of biochemical systems. An advanced fixed-step simulation technique is developed for SHS that employs improved boundary crossing detection using probabilistic sampling. Further, an adaptive time-stepping simulation method for SHS is implemented to improve accuracy and efficiency. An exhaustive verification method for SHS based on dynamic programming is developed as a tool for analyzing reachability properties over the entire state space, and a parallelization of the verification method is developed to improve efficiency. Reachability analysis can also be performed using Monte Carlo methods, so Monte Carlo methods for SHS are implemented. A variance reduction method called MultiLevel Splitting (MLS) is developed for SHS that improves accuracy and efficiency in the presence of rare events, and parameter selection methods are created to help determine appropriate MLS configuration parameters.
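As a hedged illustration of the variance-reduction idea, here is a generic fixed-effort splitting scheme for a one-dimensional diffusion. It is not the thesis's SHS implementation: the dynamics, levels, and effort per stage are invented, and real SHS trajectories involve discrete mode switches this sketch omits.

```python
import random

def simulate_until(x0, level, horizon, step=0.01, drift=-0.5, vol=1.0):
    """Run one Euler-Maruyama path from x0; return (reached, crossing state)."""
    x, t = x0, 0.0
    while t < horizon:
        x += drift * step + vol * (step ** 0.5) * random.gauss(0, 1)
        t += step
        if x >= level:
            return True, x
    return False, None

def multilevel_splitting(levels, n_per_stage=1000, horizon=5.0):
    """Fixed-effort splitting: estimate P(path ever reaches levels[-1])
    as the product of conditional stage probabilities."""
    entry_states = [0.0] * n_per_stage       # all paths start at the origin
    prob = 1.0
    for level in levels:
        hits = []
        for _ in range(n_per_stage):
            x0 = random.choice(entry_states)  # resample a stage entry state
            reached, x = simulate_until(x0, level, horizon)
            if reached:
                hits.append(x)
        if not hits:
            return 0.0                        # estimator died out
        prob *= len(hits) / n_per_stage       # conditional stage probability
        entry_states = hits                   # split from the survivors
    return prob

# Hypothetical rare event: a downward-drifting quantity nevertheless
# reaching 3.0; direct Monte Carlo would need on the order of 1/p samples.
print(multilevel_splitting(levels=[1.0, 2.0, 3.0]))
```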
Realistic case studies are used to demonstrate the modeling capabilities of SHS and the proposed analysis methods. The case studies include models of sugar cataract development in the lens of a human eye, a commercial biodiesel production system, glycolysis (a cellular energy conversion mechanism found in every living cell), and the water and electrolyte balance system in humans. These case studies are used to present experimental results for the analysis methods developed in this work.
563. A Two-Level Event Brokering Architecture for Information Dissemination in Vehicular Networks
Devkota, Tina, 27 March 2009
Emerging applications in the realm of Intelligent Transportation Systems, such as traffic congestion management and vehicle rerouting, weather alerts, and infotainment via multimedia streaming, require timely and reliable dissemination of bulky data. Vehicle mobility, unreliable wireless communication, and the short range of communication with infrastructure entities, such as road-side units (RSUs), make it hard to satisfy these application properties in a scalable manner. For example, a vehicle's high mobility gives it less than a minute of connectivity with an RSU whose range is just 1,000 meters, and with vehicles moving in clusters, each vehicle is served for only a fraction of its connected time.
This thesis makes three contributions. Using simulation results, we show that this time window is too short for multimedia applications to work, regardless of the cluster's speed. We analyze our simulation results and conclude that cooperation among cars in a cluster and among RSUs along the road has great potential for overcoming these challenges. We then propose a two-level event brokering framework. At the first level, a collection of RSUs and Internet-based nodes forms a static brokering network that cooperates to estimate the expected connectivity windows of vehicles with different RSUs, and accordingly segments and disseminates application-level information to the RSUs along the vehicles' paths. At the second level, vehicle density is leveraged to superimpose a brokering network over the dynamic vehicular ad-hoc network to disseminate information scalably. Events are routed not only by content and interested subscribers but also by their priority level.
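To make the first-level planning idea concrete, here is a toy sketch with invented numbers: the RSU names, rates, and the `share` parameter are hypothetical, and the thesis's actual connectivity-window estimation is not reproduced.

```python
def connectivity_window(rsu_range_m, speed_mps):
    """Seconds a vehicle spends in one RSU's coverage (chord ~ 2 * range)."""
    return 2.0 * rsu_range_m / speed_mps

def segment_plan(payload_bits, rsus, speed_mps, share=1.0):
    """Assign payload segments to successive RSUs along the vehicle's path.

    rsus: list of (name, range_m, rate_bps); share is the fraction of each
    window actually granted to this vehicle (clustered traffic => share < 1).
    """
    plan, remaining = [], payload_bits
    for name, rng, rate in rsus:
        if remaining <= 0:
            break
        window = connectivity_window(rng, speed_mps) * share
        chunk = min(remaining, window * rate)
        plan.append((name, chunk))
        remaining -= chunk
    return plan, remaining  # remaining > 0: route can't deliver it all

# Hypothetical: a 50 MB stream, a 35 m/s vehicle (under a minute per
# 1,000 m RSU), three RSUs at 6 Mb/s, each vehicle granted a quarter of
# its window inside the cluster.
rsus = [("RSU-A", 1000, 6e6), ("RSU-B", 1000, 6e6), ("RSU-C", 1000, 6e6)]
print(segment_plan(50 * 8e6, rsus, speed_mps=35.0, share=0.25))
```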
564. Agile Techniques for Developing and Evaluating Large-scale Component-based Distributed Real-time and Embedded Systems
Hill, James H, 10 April 2009
Agile techniques are a promising approach to facilitating the development of large-scale component-based distributed real-time and embedded (DRE) systems. Conventional agile techniques help ensure the functional concerns of such systems continuously throughout the software lifecycle. Ensuring the quality-of-service (QoS) concerns of large-scale component-based DRE systems with conventional agile techniques, however, is hard, due in large part to an observed gap between agile development and component-based software engineering (CBSE) caused by the serialized-phasing development problem.

This dissertation presents novel techniques in the form of algorithms, analytics, patterns, and tools that bridge the gap between agile development and CBSE for large-scale DRE systems and overcome the serialized-phasing development problem. Furthermore, this dissertation shows how our techniques facilitate evaluating QoS concerns continuously throughout the software lifecycle. The techniques discussed in this dissertation have been validated in the context of representative large-scale component-based DRE systems from production projects in several mission-critical domains.
565. A Modular Computer Program for the Acquisition and Analysis of Biomagnetic Signals Using SQUID Magnetometers
Irimia, Andrei, 02 August 2004
The study of bioelectric and biomagnetic activity in the human gastrointestinal (GI) tract is of great interest in clinical research because pathological conditions can be detected from electric and magnetic field recordings. The magnetogastrogram (MGG) and magnetoenterogram (MENG) can be recorded using superconducting quantum interference device (SQUID) magnetometers, which are the most sensitive magnetic flux-to-voltage converters currently available. To address the urgent need for powerful acquisition and analysis software faced by many researchers and clinicians in this area, an integrative and modular computer program was developed for the acquisition, processing, and analysis of GI SQUID signals. In addition to a robust hardware implementation for efficient data acquisition, a number of signal processing and analysis modules were developed to serve in a variety of clinical procedures and scientific investigations. Implemented software features include data processing and visualization, waterfall plots of signal frequency spectra, and spatial maps of GI signal frequencies. Moreover, a software tool providing powerful 3D visualizations of GI signals was created using realistic models of the human torso and internal organs. Through the novelty of our modular and integrative approach to GI signal analysis and our highly realistic depiction of gastric and intestinal signals originating in the human body, these methods for biomagnetic field analysis aim to set the standard in gastroenterological research and possibly help transform clinical GI diagnosis via biomagnetic signal analysis.
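A minimal sketch of the waterfall-spectrum feature follows, assuming a uniformly sampled signal and the typical gastric rhythm of roughly 3 cycles per minute; the program's actual implementation, window lengths, and band limits are not reproduced here.

```python
import numpy as np

def waterfall(signal, fs, win_s=60.0, step_s=15.0, fmax_cpm=15.0):
    """Short-time FFT 'waterfall' of a GI magnetic signal.

    Returns (times_s, freqs_cpm, spectra): one row of spectral magnitudes
    per window, restricted to the low-frequency GI band (cycles/minute).
    """
    win, step = int(win_s * fs), int(step_s * fs)
    freqs_hz = np.fft.rfftfreq(win, d=1.0 / fs)
    keep = freqs_hz * 60.0 <= fmax_cpm           # convert Hz -> cycles/min
    rows, times = [], []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        seg = seg - seg.mean()                    # remove DC offset
        rows.append(np.abs(np.fft.rfft(seg * np.hanning(win)))[keep])
        times.append(start / fs)
    return np.array(times), freqs_hz[keep] * 60.0, np.array(rows)

# Hypothetical recording: 30 min at 10 Hz with a 3 cycles/min gastric rhythm.
fs = 10.0
t = np.arange(0.0, 1800.0, 1.0 / fs)
mgg = np.sin(2 * np.pi * (3.0 / 60.0) * t) + 0.5 * np.random.randn(t.size)
times, freqs, spectra = waterfall(mgg, fs)
print(spectra.shape, freqs[spectra.mean(axis=0).argmax()])  # peak near 3 cpm
```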
566. Component-based Fault Tolerance for Distributed Real-Time and Embedded Systems
Wolf, Friedhelm, 20 April 2009
Component middleware has become increasingly important in distributed real-time and embedded (DRE) systems. DRE systems are characterized by resource constraints and stringent quality-of-service (QoS) requirements. Growing demands on system dependability in turn increase the importance of fault tolerance as a QoS aspect.

Research on fault tolerance in DRE systems has focused mainly on replication and recovery at the granularity of single distributed objects and processes. Component middleware provides higher-level abstractions, such as a container infrastructure, means to assemble components into larger units of functionality, and standardized deployment mechanisms. These mechanisms provide new opportunities to standardize fault tolerance, but also pose new challenges, such as efficient synchronization of internal component state, failure correlation across groups of components, and configuration of fault-tolerance properties per component.

This thesis makes three contributions to research on component-based fault tolerance. First, we present Components with HEterogeneous State Synchronization (CHESS), a mechanism for component state replication that enables flexible use of the most appropriate communication mechanism. Second, we present COmponent Replication based on Failover Units (CORFU), which provides fail-stop behavior and fault correlation across groups of components. Third, we present an evaluation of the proposed solutions in comparison to existing object fault-tolerance methods.

These results show that DRE systems based on component middleware ease the burden of application development by providing middleware support for fault tolerance at the level of components. The results also quantify the performance trade-off compared to object-level fault tolerance and show that it is acceptable for many DRE systems.
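A toy sketch of the fail-stop, group-failover semantics described above: the component names are invented, and CORFU itself operates inside the component middleware's deployment infrastructure rather than in application code as shown here.

```python
class Component:
    def __init__(self, name):
        self.name, self.alive = name, True

class FailoverUnit:
    """Fail-stop group: one member failing takes down the whole unit and
    activates the entire backup replica group together, so correlated
    components never run in a half-failed state. (State synchronization,
    which CHESS addresses, is omitted entirely in this sketch.)"""

    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup
        self.active = primary

    def report_failure(self, component):
        if component in self.active and self.active is self.primary:
            for c in self.primary:            # fail-stop: shut down the
                c.alive = False               # whole correlated group
            self.active = self.backup         # promote backups as a unit
            print("failover unit promoted backup group")

planner, sensor = Component("planner"), Component("sensor-fusion")
planner_b, sensor_b = Component("planner'"), Component("sensor-fusion'")
unit = FailoverUnit([planner, sensor], [planner_b, sensor_b])
unit.report_failure(sensor)                   # correlated failover of both
```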
567. Perceptual Analysis of Motion Blur Techniques in Distribution Ray Tracing
Bedikian, Raffi Agop, 25 April 2011
The goal of this work was to improve computer graphics by better understanding how internal rendering parameters affect the visual perception of animation. The scope of our work was constrained to the perception of object motion in animated sequences, specifically the motion blur effect that is generated to mimic the shutter and exposure of a film camera. Although this motion blur occurs as a natural consequence in real-world cameras due to the nonzero time needed for the image to be fully exposed, it is generated artificially through a variety of techniques in computer graphics in order to make the generated animation appear more realistic. In this work, we constrain the scope of our examination of motion blur to ray-traced rendering. We examine in detail the perceptual aspects of different techniques for generating ray-traced motion blur via experiments designed to isolate the various parameters in order to determine their effectiveness.
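For concreteness, here is a minimal sketch of how distribution ray tracing typically produces motion blur, by jittering each ray's time sample across the shutter interval; the `moving_edge` scene, sample counts, and stratification choice are invented for illustration and are not the thesis's experimental setup.

```python
import random

def time_samples(n, shutter_open=0.0, shutter_close=1.0, stratified=True):
    """Pick per-ray time samples across the shutter interval.

    Distribution ray tracing averages radiance over these times; stratified
    sampling gives more even temporal coverage than pure uniform sampling,
    trading one kind of visual artifact (noise) against another (banding).
    """
    span = shutter_close - shutter_open
    if stratified:
        return [shutter_open + span * (i + random.random()) / n
                for i in range(n)]
    return [shutter_open + span * random.random() for _ in range(n)]

def render_pixel(trace, n=16, stratified=True):
    """Average the shaded result over jittered times (the motion-blur step)."""
    ts = time_samples(n, stratified=stratified)
    return sum(trace(t) for t in ts) / n

# Hypothetical scene: an edge whose coverage of this pixel depends on its
# time-varying position; blur emerges from averaging over the shutter.
moving_edge = lambda t: 1.0 if 0.3 + 0.4 * t > 0.5 else 0.0
print(render_pixel(moving_edge))  # ~0.5: the edge smears across the pixel
```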
568. A REAL-TIME, EVENT-BASED DRIVER ALERT SYSTEM FOR ACCIDENT AVOIDANCE DUE TO RED LIGHT RUNNING
Thopte, Deepti Suresh, 20 July 2009
Traffic signals provide the key mechanism for regulating traffic and promoting safe, efficient flow at intersections. However, incidents of red light running can compromise the safety of the system: research shows that many drivers violate red signals, placing themselves and other road users at risk of serious collisions. These incidents stem mostly from the well-known dilemma zone problem, in which a driver cannot decide whether braking would bring the vehicle to a stop before the intersection or whether to accelerate and cross it. Other factors, such as wet road conditions, make the problem worse. This thesis presents the Intelligent Traffic Light (ITL), a cyber-physical system that combines information and communications technology with the transportation infrastructure to address red light running. ITL estimates when a traffic light will change to red and warns the drivers of approaching vehicles about when and by how much to slow down to avoid running the red light. ITL can also account for road conditions, which increases its usability in challenging weather.
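As an illustration of the kinematics underlying the dilemma zone (standard stopping-distance formulas with invented parameter values, not ITL's actual decision logic):

```python
def dilemma_zone(v, yellow_s, d_to_stopline, t_react=1.0, decel=3.4,
                 accel=0.0, width=20.0):
    """Classify a vehicle's options when the light is about to turn.

    can_stop:  reaction + braking distance fits before the stop line.
    can_clear: the vehicle passes stop line + intersection width before red.
    Both False => the classic dilemma zone; a warning system's job is to
    alert early enough (and by how much to slow) so this state never occurs.
    Units: m, s, m/s, m/s^2; decel=3.4 m/s^2 is a common comfortable rate,
    and wet roads (lower usable decel) widen the zone.
    """
    d_stop = v * t_react + v * v / (2.0 * decel)
    d_clear = v * yellow_s + 0.5 * accel * yellow_s ** 2
    can_stop = d_stop <= d_to_stopline
    can_clear = d_clear >= d_to_stopline + width
    return can_stop, can_clear

# Hypothetical: 20 m/s (~45 mph), 4 s yellow, 70 m from the stop line.
print(dilemma_zone(v=20.0, yellow_s=4.0, d_to_stopline=70.0))
# (False, False): the driver can neither stop nor clear -- dilemma zone.
```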
569. INFORMATION ABSTRACTION VISUALIZATION FOR HUMAN-ROBOT INTERACTION
Humphrey, Curtis Michael, 28 July 2009
Future emergency incident responses, including Chemical, Biological, Radiological, Nuclear, and Explosive (CBRNE) incidents, will incorporate robots. The ability to interact with robots and understand the resulting volumes of information requires a system of human-robot interfaces employing directable visualizations that provide information immediacy, relevancy, and sharing appropriate for each human's responsibilities.
This dissertation conducted two modified Cognitive Task Analyses (CTAs) of the CBRNE incident response. The Cognitive Information Flow Analysis (CIFA) was developed to combine the CTA results and to analyze the path of information as it passes through, and is transformed by, the system at different human-robot interaction (HRI) user levels. These analyses (i.e., the modified CTAs and CIFA) collectively informed the HRI design and development.
The primary contributions of this dissertation are the development and evaluation of two novel visualization techniques that present immediate, relevant, and shared information from the robots to the human users in the system of human-robot interfaces. The General Visualization Abstraction (GVA) algorithm, the first technique, provides information immediacy and relevancy by displaying the most useful information at any given moment, rewarding information that is either historically and currently relevant or novel and emerging. The Decision Information Abstracted to a Relevant Encapsulation (DIARE) concept, the second technique, supports decision-making by representing prior event information as a defined volume in the visualization's information space and encapsulating that volume in an explicit visual object that can be shared across time and users.
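As a loose, hypothetical sketch of the kind of scoring such an algorithm might use (the weights, half-life, and reward model below are invented, not the GVA algorithm itself):

```python
import math, time

def score(item, now, half_life_s=30.0, w_rel=0.6, w_nov=0.4):
    """Toy scoring in the spirit described above: reward information that
    is either historically-and-currently relevant or novel and emerging.
    item: dict with 'relevance_events' (timestamps of moments the item
    mattered) and 'first_seen' (when it appeared)."""
    decay = math.log(2) / half_life_s
    relevance = sum(math.exp(-decay * (now - t))     # recent relevance
                    for t in item["relevance_events"])  # events count most
    novelty = math.exp(-decay * (now - item["first_seen"]))
    return w_rel * relevance + w_nov * novelty

def select_for_display(items, k=5):
    """Show only the k highest-scoring items at this moment, so the most
    useful information wins limited display space."""
    now = time.time()
    return sorted(items, key=lambda it: score(it, now), reverse=True)[:k]
```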
User evaluations were conducted for both visualization techniques. The GVA algorithm's evaluation results indicate that it can reduce cognitive workload, increase situational awareness, and improve performance at two different HRI user levels. The DIARE concept results indicate that participants were able to rapidly and accurately ascertain what had happened previously, with good memory recall. Together, these two visualization techniques can assist decision-makers using directable visualizations, such as those used in HRI, by offering an effective method of sharing and providing real-time, relevant information.
570. Distributed Diagnosis of Continuous Systems: Global Diagnosis Through Local Analysis
Roychoudhury, Indranil, 04 August 2009
Early detection and isolation of faults is crucial for ensuring system safety and efficiency. Online diagnosis schemes are usually integrated with fault-adaptive control schemes to mitigate fault effects and avoid catastrophic consequences. These diagnosis approaches must be robust to uncertainties, such as sensor and process noise, to be effective in real-world applications. They must also address the drawbacks of centralized diagnosis schemes, such as large memory and computational requirements, single points of failure, and poor scalability. Finally, to be effective, fault diagnosis schemes must be capable of diagnosing different fault types, such as incipient (slow) and abrupt (fast) faults in system parameters.

This dissertation addresses the above problems by developing: (i) a unified qualitative diagnosis framework for incipient and abrupt faults in system parameters; (ii) a distributed, qualitative diagnosis approach, in which each diagnoser generates globally correct diagnosis results without a centralized coordinator, communicating minimal measurement information and no partial diagnosis results to other diagnosers; (iii) a centralized Bayesian diagnosis scheme that combines our qualitative diagnosis approach with a dynamic Bayesian network (DBN)-based diagnosis scheme; and (iv) a distributed DBN-based diagnosis scheme, in which the global DBN is systematically factored into structurally observable, independent DBN factors that are decoupled across time, so that the random variables in each factor are conditionally independent of those in all other factors, given a subset of communicated measurements that are converted into system inputs. This allows the combined qualitative and DBN-based diagnosis scheme to be implemented on each DBN factor; the factors operate independently with a minimal number of shared measurements to generate globally correct diagnosis results locally, without a centralized coordinator and without communicating any partial diagnosis results to other diagnosers. The correctness and effectiveness of these diagnosis approaches are demonstrated by applying the qualitative diagnosis approaches to the Advanced Water Recovery System developed at NASA Johnson Space Center, and the DBN-based diagnosis schemes to a complex, twelfth-order electrical system.
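A simplified illustration of qualitative fault isolation in general follows; the residuals and fault signatures are invented, not drawn from the thesis's case studies, and the real framework also reasons about incipient versus abrupt deviation profiles.

```python
# Each fault candidate predicts a sign (+, -, 0) for the deviation of each
# measurement residual from its nominal value; observed symptom signs then
# prune the candidate set until the true fault is isolated.

SIGNATURES = {                     # fault -> {residual: expected sign}
    "pump_degradation":  {"flow": "-", "pressure": "-"},
    "pipe_blockage":     {"flow": "-", "pressure": "+"},
    "sensor_bias_flow":  {"flow": "+", "pressure": "0"},
}

def isolate(candidates, observed):
    """Drop every candidate inconsistent with an observed symptom sign."""
    return {f for f in candidates
            if all(SIGNATURES[f].get(res, "0") == sign
                   for res, sign in observed.items())}

faults = set(SIGNATURES)
faults = isolate(faults, {"flow": "-"})      # pump & blockage both fit
faults = isolate(faults, {"pressure": "+"})  # only the blockage remains
print(faults)                                # {'pipe_blockage'}
```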