481 |
Vector Graphics for Real-time 3D Rendering. Qin, Zheng. January 2009.
Algorithms are presented that enable the use of vector graphics representations
of images in texture maps for real-time 3D rendering.
Vector graphics images are resolution independent and
can be zoomed arbitrarily without losing detail
or crispness. Many important types of images, including text and
other symbolic information, are best represented in vector form. Vector
graphics textures can also be used as transparency mattes to augment
geometric detail in models via trim curves.
Spline curves are used to represent boundaries around regions
in standard vector graphics representations, such as PDF and SVG.
Antialiased rendering of such content can be obtained by thresholding
implicit representations of these curves.
The distance function is an especially useful implicit representation.
Accurate distance function computations would also allow the implementation
of special effects such as embossing.
Unfortunately, computing the true distance to higher order spline curves
is too expensive for real-time rendering.
Therefore, normally either the distance is approximated
by normalizing some other implicit representation
or the spline curves are approximated with simpler primitives.
In this thesis, three methods for
rendering vector graphics textures in real time are introduced,
based on various approximations of the distance computation.
The first and simplest approach to the distance computation
approximates curves with line segments.
Unfortunately, approximation with line segments gives only C0 continuity.
In order to improve smoothness, spline curves can also be approximated
with circular arcs.
This approximation has C1 continuity and computing the distance
to a circular arc is only slightly more expensive than
computing the distance to a line segment.
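As an illustration of the two closed-form queries just described, the following is a minimal CPU-side Python sketch of point-to-segment and point-to-arc distance, plus smoothstep thresholding of a signed distance for antialiased coverage. The function names, the filter-width parameter, and the restriction to non-wrapping arcs are assumptions of this sketch; the actual system evaluates equivalent logic in a GPU shader.

```python
import math

def dist_to_segment(p, a, b):
    """Unsigned distance from point p to the line segment ab (closed form)."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy or 1.0   # guard against a degenerate segment
    # Parameter of the projection of p onto the supporting line, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def dist_to_arc(p, center, radius, theta0, theta1):
    """Unsigned distance from p to a circular arc; assumes theta0 < theta1 and
    that the arc does not cross the +/-pi discontinuity of atan2."""
    px, py = p
    cx, cy = center
    ang = math.atan2(py - cy, px - cx)
    if theta0 <= ang <= theta1:
        # Closest point lies on the arc itself: distance to the full circle applies.
        return abs(math.hypot(px - cx, py - cy) - radius)
    # Otherwise the closest point is one of the two arc endpoints.
    e0 = (cx + radius * math.cos(theta0), cy + radius * math.sin(theta0))
    e1 = (cx + radius * math.cos(theta1), cy + radius * math.sin(theta1))
    return min(math.hypot(px - e0[0], py - e0[1]),
               math.hypot(px - e1[0], py - e1[1]))

def coverage(signed_distance, filter_width):
    """Antialiased coverage obtained by thresholding a signed distance with smoothstep."""
    t = max(0.0, min(1.0, 0.5 - signed_distance / filter_width))
    return t * t * (3.0 - 2.0 * t)
```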
Finally, an iterative algorithm
is discussed that has good performance in practice and can compute the
distance to any parametrically differentiable curve
(including polynomial splines of any order)
robustly. This algorithm is demonstrated in the context of a system
capable of real-time rendering of SVG content in a texture map on a GPU.
Data structures and acceleration algorithms in the context of massively
parallel GPU architectures are also discussed.
These data structures and acceleration structures allow arbitrary vector
content (with space-variant complexity, and overlapping regions) to be
represented in a random-access texture.
|
482 |
Real Time PCR Protocol Development for Rapid and Low Cost Quantification of Baculovirus and for Monitoring Progression of Infection. George, Steve. January 2010.
The work presented in this thesis aims to further the understanding and implementation of the Baculovirus Expression Vector System (BEVS) for varied uses such as protein production and viral vector production. To this end, three projects have been presented, two of which deal with methods to quantify baculovirus titres and the last deals with tracking baculovirus transcripts in infected insect cells.
The first project examined assumption-free analysis as a method for analysing Real Time PCR data, in order to enable direct comparison of baculovirus titres between samples without the need for a traditional standard curve. It concluded that assumption-free analysis was well suited for this purpose, and that fold differences in baculovirus titres obtained using this method corresponded to real differences in sample titres.
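As a rough illustration of how a standard-curve-free comparison can work, the sketch below computes a fold difference directly from two quantification cycles (Cq) and per-reaction amplification efficiencies. The function name and the numeric values are hypothetical; in assumption-free analysis these quantities are typically derived from each reaction's own amplification curve rather than supplied as inputs.

```python
def fold_difference(cq_a, cq_b, eff_a, eff_b):
    """Relative titre of sample A versus sample B from quantification cycles (Cq)
    and per-reaction amplification efficiencies (eff = 2.0 means perfect doubling)."""
    # The initial template amount is proportional to eff ** (-Cq), so the ratio of
    # two samples can be formed without any external standard curve.
    return (eff_a ** -cq_a) / (eff_b ** -cq_b)

# Hypothetical numbers: sample A crosses the threshold ~2 cycles earlier at equal
# efficiency, so it contains roughly 4x more baculovirus genomes.
print(fold_difference(cq_a=18.0, cq_b=20.0, eff_a=2.0, eff_b=2.0))  # ~4.0
```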
The second project aimed to develop a cheap and reliable method for sample preparation for Real Time PCR which would remove the need for the use of commercially available extraction kits. Samples were subjected to various combinations of Triton X-100 at different concentrations and different numbers of freeze/thaw cycles in order to determine the combination which would provide the best baculovirus genome exposure. One of these combinations was found to be at least as good as commercially available kits in reliably extracting baculovirus DNA and providing baculovirus titres that are at least as accurate.
The third project was a preliminary study examining the effects of multiplicity of infection on the levels of baculovirus Gp-64 transcript in insect cell culture. The study concludes that at high multiplicities of infection, there seems to be no increase in baculovirus transcripts when the multiplicity of infection is further increased. This study served to allow for familiarization with tracking transcript levels, and the principles and techniques demonstrated here will form the basis for an exhaustive future study on the same subject.
|
483 |
Reducing the effect of network delay on tightly-coupled interaction. Stuckel, Dane Joshua. 31 March 2008.
Tightly-coupled interaction is shared work in which each person's actions immediately and continuously influence the actions of others. Tightly-coupled interaction is a hallmark of expert behaviour in face-to-face activity, but becomes extremely difficult to accomplish in distributed groupware. The main cause of this difficulty is network delay: even amounts as small as 100 ms disrupt people's ability to synchronize their actions with another person. To reduce the effects of delay on tightly-coupled interaction, I introduce a new technique called Feedback-Feedthrough Synchronization (FFS). FFS causes visual feedback from an action to occur at approximately the same time for both the local and the remote person, preventing one person from getting ahead of the other in the coordinated interaction. I tested the effects of FFS on group performance in several delay conditions, and my study showed that FFS substantially improved users' performance: accuracy was significantly improved at all levels of delay, without a noticeable increase in perceived effort or frustration. Techniques like FFS that support the requirements of tightly-coupled interaction provide new means for improving the usability of groupware that operates on real-world networks.
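One plausible way to realize the core idea of FFS is sketched below in Python: local visual feedback is held back by the estimated one-way network delay so that it becomes visible at roughly the same moment the remote person sees the feedthrough. The class, its timing model, and the single fixed delay estimate are illustrative assumptions, not the thesis's implementation.

```python
import time
from collections import deque

class FeedbackSynchronizer:
    """Sketch of feedback-feedthrough synchronization: local feedback for an input
    event is delayed by the estimated one-way network delay, so it appears at about
    the same time as the corresponding feedthrough on the remote side."""

    def __init__(self, estimated_one_way_delay_s):
        self.delay = estimated_one_way_delay_s
        self.pending = deque()  # queue of (release_time, event)

    def on_local_input(self, event, send):
        send(event)  # the feedthrough message is transmitted immediately
        self.pending.append((time.monotonic() + self.delay, event))

    def due_feedback(self):
        """Return events whose delayed local feedback should now be drawn."""
        now = time.monotonic()
        ready = []
        while self.pending and self.pending[0][0] <= now:
            ready.append(self.pending.popleft()[1])
        return ready
```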
|
484 |
Optimization and validation of the method lactose intolerance genotyping with real-time PCR. Stenberg, Jenny. January 2011.
Primary lactose intolerance has been associated with a single nucleotide polymorphism located upstream of the lactase gene. The most common diagnostic tests for lactose intolerance are time-consuming, and the patient is not allowed to eat and drink for 12 hours before the test is carried out. A method that establishes the genotype would be an easier way of diagnosing lactose intolerance compared to phenotypic lactose intolerance tests. Optimization and validation of a previously published method were performed with real-time polymerase chain reaction (PCR). We used whole blood from de-identified blood donors. During the optimization and validation we used a positive control of genotype C/T from Laboratoriemedicin Västernorrland, Sundsvall. The whole blood was extracted using the MagNA Pure LC instrument. The reagent used was KAPA PROBE FAST qPCR Master Mix. The optimized real-time PCR program was established as 95°C for 3 min, followed by 50 cycles of 95°C for 3 s, 55°C for 20 s with detection, and 72°C for 15 s. The optimal probe concentration was found to be 0.2 µM and the optimal primer concentration 0.5 µM. This genotyping method is a good first-stage screening test for lactose intolerance. Before it can be used as a routine method, further validation will be necessary to ensure that the results can be evaluated in an easy and secure way.
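For clarity, the optimized protocol can be written out as a small configuration structure. The values below are transcribed from the abstract; the field names are illustrative.

```python
# Optimized real-time PCR programme from the abstract, expressed as a config sketch.
LACTASE_GENOTYPING_QPCR = {
    "initial_denaturation": {"temp_c": 95, "hold": "3 min"},
    "cycles": 50,
    "per_cycle": [
        {"step": "denature", "temp_c": 95, "hold": "3 s"},
        {"step": "anneal",   "temp_c": 55, "hold": "20 s", "detect": True},
        {"step": "extend",   "temp_c": 72, "hold": "15 s"},
    ],
    "probe_conc_uM": 0.2,   # optimal probe concentration
    "primer_conc_uM": 0.5,  # optimal primer concentration
}
```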
|
485 |
Evaluation and Comparison of Beamforming Algorithms for Microphone Array Speech Processing. Allred, Daniel Jackson. 11 July 2006.
Recent years have brought many new developments in the processing of speech and acoustic signals.
Despite this, the process of acquiring those signals has gone largely unchanged.
Adding spatial diversity to the repertoire of signal acquisition has long been known to offer
advantages for processing signals further. Mobile devices were previously unable to handle the computation required to process these additional streams of information, but current processing capabilities are such that the extra workload introduced by multiple sensors on a mobile device is not over-burdensome. How these extra data streams can best be handled is still an open question. The present work examines one type of spatial processing technique, known as beamforming. A microphone array test platform is constructed and verified using a number of beamforming algorithms. Issues related to speech acquisition through microphone arrays are discussed. The algorithms used for verification are presented in detail and compared to one another.
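The simplest baseline among such beamforming algorithms is delay-and-sum. The NumPy sketch below assumes a far-field source, known microphone geometry, and integer-sample delays; these are simplifications for illustration rather than details of the thesis's test platform.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_direction, fs, c=343.0):
    """Time-domain delay-and-sum beamformer.
    signals: (n_mics, n_samples) array; mic_positions: (n_mics, 3) in metres;
    look_direction: unit vector pointing from the array toward the source;
    fs: sample rate in Hz; c: speed of sound in m/s."""
    n_mics, n_samples = signals.shape
    # Far-field model: a microphone further along the look direction hears the
    # wavefront earlier, so it must be delayed more to align all channels.
    delays = mic_positions @ look_direction / c
    delays -= delays.min()
    output = np.zeros(n_samples)
    for m in range(n_mics):
        shift = int(round(delays[m] * fs))  # integer-sample approximation
        output[shift:] += signals[m, :n_samples - shift]
    return output / n_mics
```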
|
486 |
Worst Case Execution Time Analysis Support for the ARM Processor Using GCC. Yen, Cheng-Yu. 09 August 2010.
This thesis presents a tool for obtaining worst-case execution time (WCET) guarantees for ARM processors. This tool is an interface between ARM's GCC compiler and the SWEET WCET analyzer. SWEET is an open-source static analyzer that derives a guaranteed upper bound on the WCET of a program.
The WCET of a program is an important metric in real-time systems. The task scheduler must decide how much time to allot to each process; if the allotted time exceeds the WCET, the process can be guaranteed to always finish in time. Although the WCET value is therefore useful, it is difficult to find. However, for the purpose of guaranteeing that a process finishes on time, an upper bound on the WCET suffices. Static program analysis has been proposed as a method to derive such an upper bound on the WCET, by conservatively approximating the runtime of the individual parts of a complete program. SWEET is one such static analyzer.
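To illustrate the idea of a conservative bound, the toy sketch below charges every basic block its worst-case cost, takes the longest path through an acyclic (loop-expanded) control-flow graph, and checks the result against a deadline. The graph, costs, and clock rate are hypothetical, and SWEET's actual analysis is far more sophisticated than this.

```python
def wcet_upper_bound(cost, succ, entry):
    """Conservative WCET bound: charge every block its worst-case cost and take
    the longest path through an acyclic control-flow graph (loops are assumed
    to have been expanded using known loop bounds)."""
    memo = {}
    def longest_from(block):
        if block not in memo:
            nexts = succ.get(block, [])
            memo[block] = cost[block] + (max(longest_from(n) for n in nexts) if nexts else 0)
        return memo[block]
    return longest_from(entry)

def schedulable(wcet_cycles, cpu_hz, period_s):
    """A task can safely be admitted if its WCET bound fits within its allotted period."""
    return wcet_cycles / cpu_hz <= period_s

# Hypothetical three-block task: a 600-cycle loop body bounded at 100 iterations.
cost = {"init": 400, "loop": 600 * 100, "finish": 900}
succ = {"init": ["loop"], "loop": ["finish"]}
bound = wcet_upper_bound(cost, succ, "init")             # 61300 cycles
print(schedulable(bound, cpu_hz=100e6, period_s=1e-3))   # True: fits a 1 ms period at 100 MHz
```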
Our tool works inside ARM-GCC, extracting all of the information that SWEET needs about the program's behavior. Our tool then packages this information into SWEET's ALF format. The tool has been tested and works correctly for every input source we have tried (including all 34 benchmarks from the WCET benchmark suite [1]).
This work was funded by Taiwan's National Science Council, grant NSC 97-2218-E-110-003.
|
487 |
A Memory-Realistic SPM Allocator with WCET/ACET Tunable Performance. Bai, Jia-yu. 16 September 2010.
Real-time systems often use scratchpad memory (SPM) instead of cache, because SPM allows a program's run time to be more predictable. Real-time systems need predictable run times because they must schedule programs to finish within specific deadlines. A deadline should be larger than its program's worst-case execution time (WCET).
Our laboratory is conducting ongoing research into SPM allocation for reducing the WCET of a program. Compared to our previous work, this thesis improves our memory model, our allocation algorithms, our real-time support, and our measurement benchmarks and platform.
Our key accomplishments in this thesis are: 1) to add, for the first time in the literature, true WCETmeas analysis to an SPM allocator, 2) to modestly improve the performance of our previous allocator, and 3) to greatly increase its applicability by extending the method to support recursive programs.
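At its simplest, choosing which objects to place in SPM can be viewed as a knapsack problem over estimated WCET savings. The sketch below is a deliberately simplified illustration (object names, sizes, and savings are hypothetical); a real WCET-aware allocator must re-analyze the program as placements change which path is the worst case.

```python
def spm_allocate(objects, spm_size):
    """0/1 knapsack sketch: objects is a list of (name, size_bytes, wcet_saving_cycles).
    Returns the objects to place in SPM to maximize the (static) WCET saving.
    A real WCET-aware allocator re-runs WCET analysis after each placement,
    because moving an object can shift the worst-case path; this toy ignores that."""
    n = len(objects)
    best = [[0] * (spm_size + 1) for _ in range(n + 1)]
    for i, (_, size, saving) in enumerate(objects, start=1):
        for cap in range(spm_size + 1):
            best[i][cap] = best[i - 1][cap]
            if size <= cap:
                best[i][cap] = max(best[i][cap], best[i - 1][cap - size] + saving)
    # Backtrack through the table to recover the chosen set of objects.
    chosen, cap = [], spm_size
    for i in range(n, 0, -1):
        if best[i][cap] != best[i - 1][cap]:
            name, size, _ = objects[i - 1]
            chosen.append(name)
            cap -= size
    return chosen

# Hypothetical objects competing for a 3 KiB scratchpad.
objects = [("fft_buffer", 2048, 900), ("isr_stack", 512, 400), ("lut", 1024, 650)]
print(spm_allocate(objects, spm_size=3072))  # ['lut', 'fft_buffer']
```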
|
488 |
The Implementation and Applications of Multi-pattern Matching Algorithm over General Purpose GPU. Cheng, Yan-Hui. 08 July 2011.
As technology advances, we use a variety of computing equipment in our daily lives, in research, and at work to help us process frequently used data. The types and quantities of such data keep growing, ranging from satellite imaging and genetic engineering to global climate forecasting and complex event processing. Certain types of data require both accuracy and timeliness; that is, we want to find particular data in a shorter time.
According to an MIT Technology Review report published in August 2010, complex event processing has become a new research area, and data search is part of it. Data search often means data comparison: given specified keywords or other key information we are looking for, we design a pattern matching algorithm to find the results within a shorter time, or even in real time.
The purpose of our research is to use a general-purpose GPU, the NVIDIA Tesla C2050, and its parallel computing architecture to parallelize pattern matching. Finally, we construct a service to handle a large amount of real-time data. We also run performance tests and compare the results with the well-known software "Apache Solr" to identify the differences and possible applications in the future.
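One common way to parallelize multi-pattern matching is to give each thread a chunk of the input plus a lookahead of (longest pattern length minus one) characters, so matches straddling chunk boundaries are not lost. The Python sketch below illustrates that partitioning on the CPU; it is an assumption about the general approach rather than the thesis's CUDA implementation.

```python
def match_chunk(chunk_text, patterns, scan_len):
    """Naive multi-pattern scan of start positions in chunk_text[0:scan_len]
    (stand-in for the work of one GPU thread); chunk_text carries extra lookahead."""
    hits = []
    for pos in range(scan_len):
        for pat in patterns:
            if chunk_text.startswith(pat, pos):
                hits.append((pos, pat))
    return hits

def parallel_multi_pattern_match(text, patterns, n_threads=4):
    """Split the input into chunks; each chunk carries (max pattern length - 1)
    extra characters of overlap so a match straddling a boundary is still found,
    and is found exactly once (start positions are never duplicated)."""
    overlap = max(len(p) for p in patterns) - 1
    chunk = (len(text) + n_threads - 1) // n_threads
    hits = []
    for t in range(n_threads):
        start = t * chunk
        if start >= len(text):
            break
        end = min(len(text), start + chunk)
        piece = text[start:min(len(text), end + overlap)]  # chunk plus lookahead
        for pos, pat in match_chunk(piece, patterns, end - start):
            hits.append((start + pos, pat))
    return hits

print(parallel_multi_pattern_match("abracadabra", ["abra", "cad"], n_threads=3))
# [(0, 'abra'), (4, 'cad'), (7, 'abra')]
```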
|
489 |
Microbial bioremediation and monitoring of a TCE-contaminated site. Li, Kuan-hsun. 11 July 2011.
The goal of this study was to use molecular biology techniques to assess and monitor the efficacy of bioremediation at a trichloroethene (TCE) polluted site. We added emulsified hydrogen-releasing materials to stimulate onsite microbial growth and the biodegradation of TCE; this process is known as enhanced bioremediation. In this study, two bioremediation sites were treated anaerobically. Groundwater samples were taken periodically for microbial analysis. Denaturing gradient gel electrophoresis (DGGE) was used to evaluate the variations in microbial community structures during the in situ groundwater remediation. The DGGE DNA bands were sequenced to determine the 16S rRNA gene sequences and identify the dominant bacterial species. In addition, we used Dehalococcoides spp. 16S rRNA genes as the targets for real-time PCR. Results show that the emulsified hydrogen-releasing materials could enhance anaerobic reductive dechlorination. After addition of emulsified hydrogen-releasing materials, the concentrations of volatile organic compounds (i.e., TCE, 1,1-DCE and VC) decreased. In the microbial analysis, the diversity of the microbial community increased after nutrient supplementation. According to the DNA sequencing results, 31 bacterial species related to TCE degradation were found (i.e., Acidovorax sp., Burkholderiales, Pseudomonas sp., β-proteobacterium, Comamonadaceae, iron-reducing bacterium, Hydrogenophilaceae, Clostridium sp., Geobacter sp., Rhodoferax ferrireducens, Dehalospirillum multivorans and Dehalococcoides spp.). Dehalococcoides spp. can be used as a biomarker to evaluate the efficacy of anaerobic bioremediation at a TCE-contaminated site. Therefore, we quantified Dehalococcoides populations to evaluate the bioremediation capacity after addition of emulsified hydrogen-releasing materials to groundwater. Results reveal that Dehalococcoides cell numbers were 4.47×10³ to 8.26×10⁴ CFU/liter at site A and 4.60×10² to 9.31×10⁷ CFU/liter at site B. These data indicate that the addition of emulsified substrate increased the growth of the total Dehalococcoides population under anaerobic conditions. Overall, results from this study demonstrate that microbial analysis and quantification of Dehalococcoides at different time points can provide useful information for carrying out bioremediation.
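Cell-number estimates such as those above are typically obtained by converting real-time PCR quantification cycles (Cq) through a standard curve fitted from serial dilutions. The sketch below shows that conversion with made-up calibration values, not the study's own curve.

```python
def copies_from_cq(cq, slope, intercept):
    """Convert a quantification cycle (Cq) to gene copies via a standard curve
    Cq = slope * log10(copies) + intercept fitted from serial dilutions.
    The slope and intercept used below are illustrative, not the study's calibration."""
    return 10 ** ((cq - intercept) / slope)

# Hypothetical calibration: slope -3.32 (about 100% efficiency), intercept 38.0.
for cq in (30.0, 25.0, 20.0):
    print(cq, round(copies_from_cq(cq, slope=-3.32, intercept=38.0)))
```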
|
490 |
Integrating the SWEET WCET Analyzer into ARM-GCC with Extra WCFP Information to Enable WCET-Targeted Compiler Optimizations. Hao, Wen-Chuan. 23 December 2011.
Finding the worst-case execution time (WCET) on a hard real-time system is extremely important. Only static WCET analysis can give an upper bound on the WCET that guarantees the deadline; however, industrial practice still relies on measurement-based WCET analysis, even for many hard real-time systems, because static analysis tools are not yet a mature technology.
We use SWEET (SWEdish Execution Time tool) to provide WCET analysis support for the ARM. SWEET is a static WCET analyzer developed by the Mälardalen Real-Time Research Center (MRTC). We modified ARM-GCC to produce input files in the specific formats SWEET requires: ALF, TCD, and MAP. In addition, to support WCET-targeted optimization and to address the over-optimization issue, we modified SWEET to output additional worst-case flow path (WCFP) information along with the second worst-case path.
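This extra information amounts to knowing not only the worst-case flow path but also the runner-up, so the compiler can tell when optimizing the current WCFP would merely promote the second path to worst case. The toy sketch below enumerates the paths of a small hypothetical acyclic CFG and returns the two costliest; it illustrates the concept only and is not SWEET's algorithm.

```python
import heapq

def two_worst_paths(cost, succ, entry):
    """Return the two largest path costs (with their paths) from entry to any leaf
    of an acyclic control-flow graph. cost: {block: cycles}, succ: {block: [blocks]}."""
    def paths_from(block):
        if not succ.get(block):
            return [([block], cost[block])]
        out = []
        for nxt in succ[block]:
            for path, c in paths_from(nxt):
                out.append(([block] + path, cost[block] + c))
        return out
    return heapq.nlargest(2, paths_from(entry), key=lambda pc: pc[1])

# Hypothetical diamond CFG: shaving a few cycles off the 'then' block alone cannot
# push the WCET bound below the second-worst path's 16 cycles.
cost = {"entry": 5, "then": 20, "else": 8, "exit": 3}
succ = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"]}
for path, cycles in two_worst_paths(cost, succ, "entry"):
    print(cycles, "->".join(path))   # 28 entry->then->exit, then 16 entry->else->exit
```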
Testing with benchmark files from [1] shows that our modified ARM-GCC creates correct input files for SWEET, and that the modified SWEET produces the additional worst-case information.
|