11 |
A LABORATORY INSTRUMENT COMPUTER. Kim, Yong Chin. January 1982 (has links)
No description available.
|
12 |
A Cinematographic Comparison of Two Long-Hang Kip Techniques on the Horizontal Bar. Cox, Pamela S. 08 1900 (has links)
This study used cinematography to determine differences in velocity, acceleration, moments of force, and body centers of gravity at four different positions of two techniques of the long-hang kip. Three female gymnasts performed five attempts of each technique: the traditional method, with an arch in the lower back at the end of the forward swing and an approximate shoulder angle of 180 degrees or more; and the newer method, with no arch in the lower back and an approximate shoulder angle of 90 degrees or less. Three USGF-rated judges scored the kips; because the judges could not distinguish between the two techniques in their trials, two subjects were eliminated. Major differences occurred in the swing extension, with the newer technique producing more velocity and a higher center of gravity throughout the movement.
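As a rough illustration of the cinematographic method described above (not the thesis's own procedure; all values and segment parameters below are invented), velocity and acceleration can be derived from digitized landmark coordinates by finite differences, and the whole-body center of gravity computed as a mass-weighted mean of segment centers:

```python
# Illustrative sketch: kinematics from digitized film coordinates.
import numpy as np

def kinematics(positions, fps):
    """positions: (n_frames, 2) array of a landmark's x, y in metres."""
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)      # m/s
    acceleration = np.gradient(velocity, dt, axis=0)   # m/s^2
    return velocity, acceleration

def body_cg(segment_cgs, mass_fractions):
    """Whole-body center of gravity as the mass-weighted mean of segment CGs."""
    w = np.asarray(mass_fractions)[:, None]
    return (np.asarray(segment_cgs) * w).sum(axis=0) / w.sum()

# hip marker digitized from film at 100 frames per second (invented data)
hip = np.array([[0.00, 1.20], [0.02, 1.22], [0.05, 1.26], [0.09, 1.31]])
v, a = kinematics(hip, fps=100)
```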
|
13 |
A novel NN paradigm for the prediction of hematocrit value during blood transfusion. Unknown Date (has links)
During the Leukocytapheresis (LCAP) process used to treat patients suffering from acute Ulcerative Colitis, medical practitioners have to continuously monitor the Hematocrit (Ht) level in the blood to ensure it is within the acceptable range. The work done as part of this thesis attempts to create an early warning system that can be used to predict if and when the Ht values will deviate from the acceptable range. To do this, we developed an algorithm based on the Group Method of Data Handling (GMDH) and compared it to other neural network algorithms, in particular the Multilayer Perceptron (MLP). The standard GMDH algorithm captures the fluctuation very well, but there is a time lag that produces larger errors when compared to the MLP. To address this drawback, we modified the GMDH algorithm to reduce the prediction error and produce more accurate results. / by Jay Thakkar. / Pagination error. "References" should be leaves 63-67, and pagination end with leaf 67. / Thesis (M.S.C.S.)--Florida Atlantic University, 2011. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2011. Mode of access: World Wide Web.
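To make the GMDH idea concrete, here is a minimal sketch of a single layer of the classic algorithm, not the thesis's modified variant: every pair of inputs feeds a quadratic polynomial neuron fit by least squares, and the neurons with the lowest error on a hold-out set survive as inputs to the next layer. All shapes and the `keep` count are illustrative:

```python
# One classic GMDH layer (textbook form, assumptions as stated above).
import numpy as np
from itertools import combinations

def poly_features(xi, xj):
    # Ivakhnenko's quadratic polynomial basis for a pair of inputs
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    candidates = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        A = poly_features(X_train[:, i], X_train[:, j])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        val_pred = poly_features(X_val[:, i], X_val[:, j]) @ coef
        err = float(np.mean((val_pred - y_val) ** 2))
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]   # survivors become the next layer's inputs
```

Stacking such layers until the validation error stops improving yields the self-organizing network GMDH is known for; the thesis's modification targets the time lag this baseline exhibits.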
|
14 |
Fuzzycuda: interactive matte extraction on a GPU. Unknown Date (has links)
Natural matte extraction is a difficult and generally unsolved problem. Generating a matte from a nonuniform background traditionally requires a tediously hand-drawn matte. This thesis studies recent methods that require the user to place only modest scribbles identifying the foreground and the background. This research demonstrates a new GPU-based implementation of the recently introduced FuzzyMatte algorithm. Interactive matte extraction was achieved on a CUDA-enabled G80 graphics processor. Experimental results demonstrate improved performance over the previous CPU-based version. An in-depth analysis of experimental data from the GPU and CPU implementations is provided. The design challenges of porting a variant of Dijkstra's shortest distance algorithm to a parallel processor are considered. / by Joel Gibson. / Thesis (M.S.C.S.)--Florida Atlantic University, 2008. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2008. Mode of access: World Wide Web.
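As a sketch of the core computation (a serial CPU version, not the thesis's CUDA implementation), scribble-based matting of this kind can be viewed as a Dijkstra-style propagation of a "fuzzy distance" outward from the user's scribbles; the affinity cost below is a placeholder, not FuzzyMatte's actual formulation:

```python
# Dijkstra-style fuzzy-distance propagation from scribbles (CPU sketch).
import heapq
import numpy as np

def fuzzy_distance(image, seeds):
    """image: (H, W) grey values in [0, 1]; seeds: list of (row, col) scribble pixels."""
    H, W = image.shape
    dist = np.full((H, W), np.inf)
    heap = [(0.0, r, c) for r, c in seeds]
    for _, r, c in heap:
        dist[r, c] = 0.0
    heapq.heapify(heap)
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                # placeholder affinity: cost grows with intensity difference
                nd = d + abs(image[nr, nc] - image[r, c])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```

Running this once from foreground scribbles and once from background scribbles, then comparing the two distance maps per pixel, yields a matte; the serial priority queue at its heart is precisely what makes the parallel GPU port challenging.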
|
15 |
Qualitative Performance Analysis for Large-Scale Scientific Workflows. Buneci, Emma. 30 May 2008 (has links)
Today, large-scale scientific applications are both data-driven and distributed. To support the scale and inherent distribution of these applications, significant heterogeneous and geographically distributed resources are required over long periods of time to ensure adequate performance. Furthermore, the behavior of these applications depends on a large number of factors related to the application, the system software, the underlying hardware, and other running applications, as well as potential interactions among these factors.

Most Grid application users are primarily concerned with obtaining the result of the application as fast as possible, without worrying about the details involved in monitoring and understanding factors affecting application performance. In this work, we aim to provide application users with a simple and intuitive performance evaluation mechanism during the execution time of their long-running Grid applications or workflows. Our performance evaluation mechanism provides a qualitative and periodic assessment of the application's behavior by informing the user whether the application's performance is expected or unexpected. Furthermore, it can help improve overall application performance by informing and guiding fault-tolerance services when the application exhibits persistent unexpected performance behaviors.

This thesis addresses the hypotheses that, in order to qualitatively assess application behavioral states in long-running scientific Grid applications: (1) it is necessary to extract temporal information from performance time series data, and (2) it is sufficient to extract variance and pattern as specific examples of temporal information. Evidence supporting these hypotheses can lead to the ability to qualitatively assess the overall behavior of the application and, if needed, to offer a most likely diagnostic of the underlying problem.

To test the stated hypotheses, we develop and evaluate a general "qualitative performance analysis" framework that incorporates (a) techniques from time series analysis and machine learning to extract and learn, from data, structural and temporal features associated with application performance in order to reach a qualitative interpretation of the application's behavior, and (b) mechanisms and policies to reason over time and across the distributed resource space about the behavior of the application.

Experiments with two scientific applications from meteorology and astronomy, comparing signatures generated from instantaneous values of performance data with those generated from temporal characteristics, support the former hypothesis: temporal information must be extracted from performance time series data to accurately interpret the behavior of these applications. Furthermore, temporal signatures incorporating variance and pattern information generated for these applications have distinct characteristics during well-performing versus poor-performing executions. This leads to the framework's accurate classification of instances of similar behaviors, which represents supporting evidence for the latter hypothesis. The proposed framework's ability to generate a qualitative assessment of performance behavior for scientific applications, using temporal information present in performance time series data, represents a step towards simplifying and improving the quality of service for Grid applications. / Dissertation
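A minimal sketch of the temporal-signature idea, under assumptions made purely for illustration (the window length, the two features, and the reference signatures are invented, not the dissertation's): summarize a performance time series by windowed variance and a simple trend feature as a stand-in for pattern, then label a run by its nearest learned signature:

```python
# Temporal signatures from performance time series (illustrative sketch).
import numpy as np

def temporal_signature(series, window=30):
    w = np.asarray(series)[-window:]
    variance = w.var()                               # variance feature
    slope = np.polyfit(np.arange(len(w)), w, 1)[0]   # crude trend/pattern feature
    return np.array([variance, slope])

def classify(series, signatures):
    """signatures: dict mapping a label to a reference signature vector."""
    sig = temporal_signature(series)
    return min(signatures, key=lambda k: np.linalg.norm(sig - signatures[k]))

# invented reference signatures for two behavioral states
refs = {"expected": np.array([0.05, 0.0]), "unexpected": np.array([1.5, -0.4])}
label = classify(np.random.normal(1.0, 0.2, 200), refs)
```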
|
16 |
Verification of Ionospheric tomography using MIDAS over Grahamstown, South Africa. Katamzi, Zama Thobeka. January 2008 (has links)
Global Positioning System (GPS) satellites and receivers are used to derive total electron content (TEC) from the time delay and phase advance of the radio waves as they travel through the ionosphere. TEC is defined as the integral of the electron density along the satellite-receiver signal path. Electron density profiles can be determined from these TEC values using ionospheric tomographic inversion techniques such as the Multi-Instrument Data Analysis System (MIDAS). This thesis reports on a study aimed at evaluating the suitability of ionospheric tomography as a tool to derive one-dimensional electron density profiles, using the MIDAS inversion algorithm over Grahamstown, South Africa (33.30°S, 26.50°E). The evaluation was done by using ionosonde data from the Louisvale (28.50°S, 21.20°E) and Madimbo (22.40°S, 30.90°E) stations to create empirical orthonormal functions (EOFs). These EOFs were used by MIDAS in the inversion process to describe the vertical variation of the electron density. Profiles derived from the MIDAS algorithm were compared with profiles obtained from the International Reference Ionosphere (IRI) 2001 model and with ionosonde profiles from the Grahamstown ionosonde station. The optimised MIDAS profiles show good agreement with the Grahamstown ionosonde profiles. The South African Bottomside Ionospheric Model (SABIM) was used to set the limits within which MIDAS was producing accurate peak electron density (NmF2) values and to define accuracy in this project, with the understanding that the national model (SABIM) is currently the best model for the Grahamstown region. Analysis shows that MIDAS produces accurate results during the winter season, which had the lowest root mean square (rms) error of 0.37 × 10¹¹ e/m³ and an approximately 86% chance of producing an NmF2 value closer to the actual NmF2 than the national model SABIM. MIDAS was also found to produce accurate NmF2 values at 12h00 UT, with an approximately 88% chance of producing an accurate NmF2 value, which may deviate from the measured value by 0.72 × 10¹¹ e/m³. In conclusion, ionospheric tomographic inversion techniques show promise in the reconstruction of electron density profiles over South Africa, and are worth pursuing further in the future.
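For illustration, a sketch of the EOF step under stated assumptions (synthetic data; MIDAS's actual inversion is far more involved): the EOFs can be taken as the leading left singular vectors of a matrix of ionosonde electron density profiles, and a vertical profile is then represented as a linear combination of those EOFs:

```python
# EOF basis from ionosonde profiles via SVD (synthetic-data sketch).
import numpy as np

profiles = np.random.rand(80, 365)       # 80 height bins x 365 daily profiles (synthetic)
mean_profile = profiles.mean(axis=1)
U, s, Vt = np.linalg.svd(profiles - mean_profile[:, None], full_matrices=False)
eofs = U[:, :3]                          # leading 3 EOFs capture the vertical variation

# least-squares fit of a measured profile onto the EOF basis
measured = profiles[:, 0]
coeffs, *_ = np.linalg.lstsq(eofs, measured - mean_profile, rcond=None)
reconstructed = eofs @ coeffs + mean_profile
rms = np.sqrt(np.mean((reconstructed - measured) ** 2))   # rms error, as in the comparison above
```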
|
17 |
Analyzing and Evaluating the Resilience of Scheduling Scientific Applications on High Performance Computing Systems using a Simulation-based Methodology. Sukhija, Nitin. 09 May 2015 (has links)
Large-scale systems provide a powerful computing platform for solving large and complex scientific applications. However, the inherent complexity, heterogeneity, wide distribution, and dynamism of the computing environments can lead to performance degradation of the scientific applications executing on these computing systems. Load imbalance arising from a variety of sources, such as application, algorithmic, and systemic variations, is one of the major contributors to this performance degradation. In general, load balancing is achieved via scheduling. Moreover, frequently occurring resource failures drastically affect the execution of applications running on high performance computing systems. Therefore, the study of integrated scheduling and fault-tolerance mechanisms for guaranteeing that applications deployed on computing systems are resilient to failures becomes of paramount importance. Recently, several research initiatives have started to address the issue of resilience. However, these efforts have focused more on achieving system-level resilience, with less emphasis on resilience at the application level. It is therefore increasingly important to extend the concept of resilience to scheduling techniques at the application level, establishing a holistic approach that addresses the performability of these applications on high performance computing systems. This can be achieved by developing a comprehensive modeling framework for evaluating the resiliency of such techniques on heterogeneous computing systems, assessing the impact of failures as well as workloads in an integrated way. This dissertation presents an experimental methodology based on discrete event simulation for the analysis and evaluation of the resilience of scheduling scientific applications on high performance computing systems. With the aid of this methodology, a wide class of dependencies between the application and the computing system is captured within a deterministic model for quantifying the performance impact expected from changes in application and system characteristics. The results obtained by employing the proposed simulation-based performance prediction framework enable an introspective design and investigation of scheduling heuristics, and support reasoning about how best to optimize various, often antagonistic, objectives, such as minimizing application makespan and maximizing reliability.
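A minimal sketch of the kind of simulation loop such a methodology rests on, under stated assumptions (greedy earliest-free-machine scheduling, independent task failures with a fixed probability, illustrative task costs; this is not the dissertation's actual framework): each completion or failure is an event on a time-ordered queue, failed tasks are rescheduled, and the resulting makespan reflects both load balance and failures:

```python
# Discrete-event sketch: scheduling with task failures and rescheduling.
import heapq
import random

def simulate(task_costs, n_machines, p_fail=0.1, seed=1):
    rng = random.Random(seed)
    free_at = [0.0] * n_machines              # next time each machine is idle
    pending = list(task_costs)
    events, now, makespan = [], 0.0, 0.0
    while pending or events:
        while pending:                        # greedy dispatch to earliest-free machine
            cost = pending.pop()
            m = min(range(n_machines), key=free_at.__getitem__)
            start = max(free_at[m], now)
            free_at[m] = start + cost
            heapq.heappush(events, (start + cost, cost, m))
        now, cost, m = heapq.heappop(events)  # advance to the next completion event
        if rng.random() < p_fail:
            pending.append(cost)              # resilience: failed task is rescheduled
        else:
            makespan = max(makespan, now)
    return makespan

print(simulate([3, 5, 2, 7, 4, 6], n_machines=3))
```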
|
18 |
Three-dimensional spatial variation in tropical forest structure. Yoder, Carrie L. 01 July 2000 (has links)
No description available.
|
19 |
A Study of Microwave Curing of Underfill Using Open and Closed Microwave Ovens. Thakare, Aditya. 14 April 2015 (has links)
As the demand for microprocessors increases, with more and more consumers using integrated circuits in their daily lives, the industry is under growing pressure to ramp up production.
To speed up the manufacturing process, new approaches seek to change certain aspects of it. Microwaves have been tried as an alternative to conventional ovens for curing the polymers used as underfills and encapsulants in integrated circuit packages. Microwaves, however, being electromagnetic waves, have a non-uniform energy distribution in different settings, causing burning or incomplete curing of the polymers.
In this study, we compare the two main types of microwave ovens proposed for curing these polymers. To limit the study and obtain comparable results, both microwaves were limited to propagating a single mode, TE10. The first is a closed microwave cavity using air as the propagation medium; the second is an open microwave oven with a PTFE cavity that uses an evanescent field to provide energy.
The air-filled closed cavity was studied with different orientations of a substrate placed inside it to find the best-case scenario for the curing process. This scenario was then compared with the best-case scenario found for a sample cured in the evanescent field.
This comparison showed an advantage for the open microwave in the maximum field present, leading to higher localized energy absorption and higher temperatures in the substrate; however, this case also led to a higher temperature gradient. The substrate cured in the closed microwave has a lower temperature gradient, but also a lower maximum field, which leads to a slower cure.
In the TE10 mode, therefore, the closed microwave has an overall advantage: its heating process is only slightly slower than that of the open cavity, while its temperature gradient is significantly lower.
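For reference, a small sketch of the TE10 quantities this comparison rests on, assuming a standard WR-340 rectangular guide purely for illustration: the cutoff frequency is f_c = c/(2a) for a guide of broad-wall width a, and the transverse electric field varies as sin(πx/a), which is why energy absorption peaks at the center of the guide and falls off toward the walls:

```python
# TE10 cutoff frequency and transverse field profile (WR-340 assumed).
import numpy as np

c = 299_792_458.0                 # speed of light, m/s
a = 0.08636                       # WR-340 broad-wall width, m (2.45 GHz band)
f_cutoff = c / (2 * a)            # ~1.74 GHz; TE10 propagates at 2.45 GHz

x = np.linspace(0, a, 5)
field_profile = np.sin(np.pi * x / a)   # relative |E| across the guide width
print(f"cutoff {f_cutoff / 1e9:.2f} GHz, field profile {np.round(field_profile, 2)}")
```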
|
20 |
LB_Migrate: a dynamic load balancing library for scientific applications. Chaube, Rohit Kailash. January 2007 (has links)
Thesis (M.S.)--Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
|