761 |
The Determination of Lead and Cadmium in Tobacco Using High Performance Liquid Chromatography and Dithiocarbamate Derivatization. Klein, Mark Stephen 07 May 2011 (has links) (PDF)
A reversed-phase high-performance liquid chromatography (HPLC) method has been developed that is capable of resolving lead and cadmium diethyldithiocarbamate complexes. The method does not require hazardous solvents to optimize peak shape and resolution; the mobile phase consists of methanol, water, and a surfactant. Tobacco samples were chemically oxidized and reacted with sodium diethyldithiocarbamate reagent prior to analysis by the HPLC method. The lead diethyldithiocarbamate determination was compromised by a significant interference in the analysis; lead determinations in 10 foreign and domestic tobacco samples ranged from 27.14 to 134.84 μg/g. The cadmium diethyldithiocarbamate determination was not adversely affected by interferences; cadmium determinations in the same 10 foreign and domestic tobacco samples ranged from 0.89 to 6.96 μg/g. The tobacco samples were also analyzed using atomic absorption spectrometry. Foreign tobacco brands that contained clove as a spice showed lower levels of cadmium and lead.
|
762 |
Determination of Hydroquinone in Cosmetic Creams by High Performance Liquid Chromatography. Liu, Fuyou 14 August 2007 (has links) (PDF)
Hydroquinone is one of the most commonly used whitening agents in cosmetics. A high performance liquid chromatography (HPLC) method was developed and validated for the quantitative determination of hydroquinone in creams. Validation parameters such as linearity, precision, accuracy, limit of detection (LOD), and limit of quantitation (LOQ) were determined. HPLC was carried out in reversed-phase mode on an RP-C18 column with a water-methanol (70:30, pH 5.0) mobile phase. The calibration was linear over the range of 2.0-40.0 μg/mL with a correlation coefficient of 0.9998. The LOD and LOQ were 0.16 and 0.53 μg/mL, respectively. The precision of the method was satisfactory, with a coefficient of variation below 2.2%. The recovery values were in the range of 92.4 to 99.0%. The method is sensitive, fast, and simple, and it has been successfully applied to the determination of hydroquinone in cosmetic creams. The results obtained agreed well with the percentages given by the manufacturers.
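For illustration, detection and quantitation limits of this kind are commonly derived from the calibration curve; the sketch below assumes the standard ICH estimator (LOD = 3.3σ/S, LOQ = 10σ/S, with σ the residual standard deviation and S the slope), which the abstract does not confirm, and the calibration points in it are invented rather than thesis data.

```python
import numpy as np

def calibration_limits(concentrations, responses):
    """Estimate LOD/LOQ from a linear calibration curve.

    Assumes the common ICH approach: LOD = 3.3*sigma/S and LOQ = 10*sigma/S,
    where S is the calibration slope and sigma is the standard deviation of
    the regression residuals.
    """
    conc = np.asarray(concentrations, dtype=float)
    resp = np.asarray(responses, dtype=float)

    slope, intercept = np.polyfit(conc, resp, 1)      # least-squares line
    residuals = resp - (slope * conc + intercept)
    sigma = np.sqrt(np.sum(residuals ** 2) / (len(conc) - 2))
    r = np.corrcoef(conc, resp)[0, 1]                 # correlation coefficient

    return {"slope": slope, "r": r,
            "LOD": 3.3 * sigma / slope,
            "LOQ": 10.0 * sigma / slope}

# Hypothetical standards (ug/mL) and peak areas, for illustration only.
print(calibration_limits([2, 5, 10, 20, 40], [21, 52, 104, 209, 415]))
```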
|
763 |
Predictability of Optimal Core Distribution Based on Weight and Speedup. Eriksson, Rasmus January 2022 (has links)
Efficient use of hardware resources is a vital part of getting good results within high performance computing. This thesis explores the predictability of the optimal CPU-core distribution between two tasks running in parallel on a shared-memory machine, with the aim of reaching the shortest possible total runtime. The predictions are based on the weight and speedup of each task, taking into account the CPU-frequency decrease that comes with a growing number of active cores in modern CPUs. The weight of a task is the number of floating point operations needed to compute it to completion. The Intel oneAPI Math Kernel Library is used to create a set of different tasks, where each task consists of a single call to a dgemm routine. Two prediction algorithms for optimal core distribution are presented and used in this thesis. Their predictions are compared to the fastest distribution observed by either running the tasks back-to-back, with each using all available cores, or running the tasks simultaneously in two parallel regions. Experimental results suggest that there is merit to this method: the better of the two algorithms predicted the core distribution that resulted in the fastest run in 14 out of 15 cases.
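A minimal sketch of the kind of prediction described above, not the thesis's actual algorithms: it assumes a simple makespan model in which each task's runtime is its weight divided by (cores × clock frequency), with a user-supplied frequency model capturing the clock decrease as more cores become active; the task weights, core count, and frequency curve in the example are illustrative assumptions.

```python
def predict_split(w1, w2, total_cores, freq):
    """Pick the core split (c1, c2) that minimizes the predicted makespan
    of two tasks running side by side on a shared-memory machine.

    w1, w2 : task weights (floating point operations)
    freq   : freq(active_cores) -> clock frequency in Hz, modelling the
             frequency drop as more cores become active
    """
    def runtime(weight, cores, active_total):
        # Idealized model: work spread evenly over `cores` cores, each
        # clocked at the frequency dictated by the total active cores.
        return weight / (cores * freq(active_total))

    best = None
    for c1 in range(1, total_cores):
        c2 = total_cores - c1
        makespan = max(runtime(w1, c1, total_cores),
                       runtime(w2, c2, total_cores))
        if best is None or makespan < best[0]:
            best = (makespan, c1, c2)

    # Baseline: run the tasks back-to-back, each using all available cores.
    back_to_back = (runtime(w1, total_cores, total_cores)
                    + runtime(w2, total_cores, total_cores))
    return {"split": best[1:], "parallel_time": best[0],
            "back_to_back_time": back_to_back}

# Illustrative numbers: two dgemm-like tasks of 2e12 and 5e11 flops on a
# 16-core CPU whose clock drops linearly from 3.5 GHz to 2.5 GHz under load.
print(predict_split(2e12, 5e11, 16,
                    freq=lambda n: 3.5e9 - (n - 1) * (1.0e9 / 15)))
```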
|
764 |
Research on High-performance and Scalable Data Access in Parallel Big Data Computing. Yin, Jiangling 01 January 2015
To facilitate big data processing, many dedicated data-intensive storage systems such as the Google File System (GFS), the Hadoop Distributed File System (HDFS) and the Quantcast File System (QFS) have been developed. Currently, the Hadoop Distributed File System (HDFS) [20] is the state-of-the-art and most popular open-source distributed file system for big data processing. It is widely deployed as the bedrock for many big data processing systems/frameworks, such as the script-based Pig system, MPI-based parallel programs, graph processing systems, and Scala/Java-based Spark frameworks. These systems/applications employ parallel processes/executors to speed up data processing within scale-out clusters. Job or task schedulers in parallel big data applications such as mpiBLAST and ParaView can maximize the usage of computing resources such as memory and CPU by tracking resource consumption/availability for task assignment. However, since these schedulers do not take the distributed I/O resources and global data distribution into consideration, the data requests from parallel processes/executors in big data processing will unfortunately be served in an imbalanced fashion on the distributed storage servers. These imbalanced access patterns among storage nodes arise because a) unlike conventional parallel file systems, which use striping policies to evenly distribute data among storage nodes, data-intensive file systems such as HDFS store each data unit, referred to as a chunk or block file, in several copies placed by a relatively random policy, which can result in an uneven data distribution among storage nodes; and b) under the data retrieval policy in HDFS, the more data a storage node contains, the higher the probability that the storage node will be selected to serve the data. Therefore, on nodes serving multiple chunk files, the data requests from different processes/executors will compete for shared resources such as the hard disk head and network bandwidth. Because of this, the makespan of the entire program can be significantly prolonged and the overall I/O performance will degrade. The first part of my dissertation seeks to address aspects of these problems by creating an I/O middleware system and designing matching-based algorithms to optimize data access in parallel big data processing. To address the problem of remote data movement, we develop an I/O middleware system, called SLAM, which allows MPI-based analysis and visualization programs to benefit from locality reads, i.e., each MPI process can access its required data from a local or nearby storage node. This can greatly improve execution performance by reducing the amount of data movement over the network. Furthermore, to address the problem of imbalanced data access, we propose a method called Opass, which models the data read requests issued by parallel applications to cluster nodes as a graph data structure in which edge weights encode the demands on load capacity. We then employ matching-based algorithms to map processes to data so that data access is achieved in a balanced fashion. The final part of my dissertation focuses on optimizing sub-dataset analyses in parallel big data processing. Our proposed methods can benefit different analysis applications with various computational requirements, and experiments on different cluster testbeds show their applicability and scalability.
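As a rough illustration of the balanced process-to-data mapping that Opass aims for (the real system uses matching-based algorithms over HDFS block locations; the greedy least-loaded heuristic and the replica map below are assumptions made for the example):

```python
from collections import defaultdict

def assign_reads(chunk_replicas):
    """Assign each chunk read to one node holding a replica, keeping the
    number of requests served per storage node as balanced as possible.

    chunk_replicas : dict mapping chunk id -> list of nodes storing a replica
    """
    load = defaultdict(int)   # requests assigned to each node so far
    assignment = {}

    # Serve the most constrained chunks (fewest replicas) first.
    for chunk, nodes in sorted(chunk_replicas.items(), key=lambda kv: len(kv[1])):
        node = min(nodes, key=lambda n: load[n])   # least-loaded replica holder
        assignment[chunk] = node
        load[node] += 1
    return assignment, dict(load)

# Illustrative replica placement of 6 chunks over 3 storage nodes.
replicas = {"c1": ["n1", "n2"], "c2": ["n1"], "c3": ["n1", "n3"],
            "c4": ["n2", "n3"], "c5": ["n1", "n2"], "c6": ["n3"]}
print(assign_reads(replicas))
```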
|
765 |
Research In High Performance And Low Power Computer Systems For Data-intensive Environment. Shang, Pengju 01 January 2011
The evolution of computer science and engineering is always motivated by the requirements for better performance, power efficiency, security, user interface (UI), etc. {CM02}. The first two factors are potential tradeoffs: better performance usually requires better hardware, e.g., CPUs with a larger number of transistors or disks with higher rotation speed; however, the increasing number of transistors on a single die or chip leads to super-linear growth in CPU power consumption {FAA08a}, and a change in disk rotation speed has a quadratic effect on disk power consumption {GSK03}. We propose three new systematic approaches, as shown in Figure 1.1 (Research Work Overview): Transactional RAID, data-affinity-aware data placement (DAFA), and modeless power management, to tackle the performance problem in database systems and in large-scale clusters or cloud platforms, and the power management problem in Chip Multi Processors, respectively.
The first design, Transactional RAID (TRAID), is motivated by the fact that, in recent years, more storage system applications have employed transaction processing techniques to ensure data integrity and consistency. In transaction processing systems (TPS), the log is a form of redundancy that ensures transaction ACID (atomicity, consistency, isolation, durability) properties and data recoverability. Furthermore, highly reliable storage systems, such as redundant arrays of inexpensive disks (RAID), are widely used as the underlying storage for databases to guarantee system reliability and availability with high I/O performance. However, databases and storage systems tend to implement their fault-tolerance mechanisms independently, from their own perspectives {GR93, Tho05}, leading to potentially high overhead. We observe the overlapping redundancies between the TPS and RAID systems, and propose a novel reliable storage architecture called Transactional RAID (TRAID). TRAID deduplicates this overlap by logging only one compact version (the XOR results) of the recovery references for the updated data. It minimizes the amount of log content as well as the log flushing overhead, thereby boosting overall transaction processing performance. At the same time, TRAID guarantees comparable RAID reliability and the same recovery correctness and ACID semantics as traditional transaction processing systems.
On the other hand, the emerging myriad of data-intensive applications places a demand on high-performance computing resources with massive storage. Academia and industry pioneers have been developing big data parallel computing frameworks and large-scale distributed file systems (DFS) that are widely used to facilitate high-performance runs of data-intensive applications, such as bio-informatics {Sch09}, astronomy {RSG10}, and high-energy physics {LGC06}. Our recent work {SMW10} reported that data distribution in a DFS can significantly affect the efficiency of data processing and hence overall application performance, especially for applications with sophisticated access patterns. For example, Yahoo's Hadoop {refg} clusters employ a random data placement strategy for load balance and simplicity {reff}, which allows MapReduce {DG08} programs to access all the data (without distinguishing interest locality) at full parallelism. Our work focuses on Hadoop systems. We observed that data distribution is one of the most important factors affecting parallel programming performance; however, default Hadoop adopts a random data distribution strategy that does not consider data semantics, specifically data affinity. We propose a Data-Affinity-Aware (DAFA) data placement scheme to address this problem. DAFA builds a history data access graph to exploit the data affinity. According to the data affinity, DAFA re-organizes data to maximize the parallelism of the affinitive data, subject to the overall load balance. This enables DAFA to realize the maximum number of map tasks with data locality.
Besides system performance, power consumption is another important concern of current computer systems. In the U.S. alone, the energy that could be saved by servers corresponds to 3.17 million tons of carbon dioxide, or the emissions of 580,678 cars {Kar09}. However, the goals of high performance and low energy consumption are at odds with each other. An ideal power management strategy should be able to respond dynamically to changes (linear, nonlinear, or not captured by any model) in workloads and system configuration without violating the performance requirement. We propose a novel power management scheme called MAR (modeless, adaptive, rule-based) for multiprocessor systems to minimize CPU power consumption under performance constraints. By using richer feedback factors, e.g. the I/O wait, MAR is able to accurately describe the relationships among core frequencies, performance, and power consumption. We adopt a modeless control approach to reduce the complexity of system modeling. MAR is designed for CMP (Chip Multi Processor) systems, employing multi-input/multi-output (MIMO) theory and per-core DVFS (Dynamic Voltage and Frequency Scaling).
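As a rough illustration of the rule-based, per-core DVFS idea behind MAR (the actual controller, its rule set, and its MIMO feedback design are not reproduced here; the thresholds, frequency steps, and utilization/I/O-wait readings below are invented for the example):

```python
def next_frequency(freq, available, utilization, io_wait,
                   util_high=0.85, util_low=0.50, io_high=0.30):
    """Choose the next frequency for one core from simple rules.

    freq        : current core frequency (Hz), must be in `available`
    available   : sorted list of frequencies the core supports
    utilization : fraction of time the core spent doing useful work
    io_wait     : fraction of time the core spent waiting on I/O
    """
    idx = available.index(freq)
    if io_wait > io_high:
        # Core is mostly stalled on I/O: a lower clock barely hurts performance.
        idx = max(idx - 1, 0)
    elif utilization > util_high:
        # Compute bound: raise the clock to protect performance.
        idx = min(idx + 1, len(available) - 1)
    elif utilization < util_low:
        # Mostly idle: lower the clock to save power.
        idx = max(idx - 1, 0)
    return available[idx]

freqs = [1.2e9, 1.6e9, 2.0e9, 2.4e9, 2.8e9]
print(next_frequency(2.4e9, freqs, utilization=0.40, io_wait=0.35))  # -> 2.0 GHz
```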
|
766 |
An Architecture For High-performance Privacy-preserving And Distributed Data Mining. Secretan, James 01 January 2009
This dissertation discusses the development of an architecture and associated techniques to support privacy-preserving and distributed data mining. The field of Distributed Data Mining (DDM) attempts to solve the challenges inherent in coordinating data mining tasks with databases that are geographically distributed, through the application of parallel algorithms and grid computing concepts. The closely related field of Privacy Preserving Data Mining (PPDM) adds the dimension of privacy to the problem, trying to find ways that organizations can collaborate to mine their databases collectively while preserving the privacy of their records. Developing data mining algorithms for DDM and PPDM environments can be difficult, and there is little software to support it. In addition, because these tasks can be computationally demanding, taking hours or even days to complete, organizations should be able to take advantage of high-performance and parallel computing to accelerate them. Unfortunately, no existing framework provides all of these services easily for a developer. In this dissertation such a framework, called APHID (Architecture for Private, High-performance Integrated Data mining), is developed to support the creation and execution of DDM and PPDM applications. The architecture allows users to flexibly and seamlessly integrate cluster and grid resources into their DDM and PPDM applications. The architecture is scalable and is split into highly decoupled services to ensure flexibility and extensibility. This dissertation first develops a comprehensive example algorithm, a privacy-preserving Probabilistic Neural Network (PNN), which serves as a basis for analyzing the difficulties of DDM/PPDM development. The privacy-preserving PNN is the first such PNN in the literature, and provides not only a practical algorithm ready for use in privacy-preserving applications, but also a template for other data-intensive algorithms and a starting point for analyzing APHID's architectural needs. After analyzing the difficulties in the PNN algorithm's development, as well as the shortcomings of the systems surveyed, this dissertation presents the first concrete programming model joining high-performance computing resources with a privacy-preserving data mining process. Unlike many of the existing PPDM development models, the platform of services is language independent, allowing layers and algorithms to be implemented in popular languages (Java, C++, Python, etc.). An implementation of a PPDM algorithm is developed in Java utilizing the new framework. Performance results are presented, showing that APHID can enable highly simplified PPDM development while speeding up the resource-intensive parts of the algorithm.
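For context, a minimal sketch of the plain (non-private) Probabilistic Neural Network decision rule that the privacy-preserving version builds on; in a distributed setting each party would compute its per-class kernel sums locally and combine them through a secure-summation protocol rather than sharing raw records (the data and parameters below are illustrative, and this is not APHID's actual implementation):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Classic PNN / Parzen-window classifier: each training point contributes
    a Gaussian kernel, and the class with the largest averaged kernel density
    at x wins."""
    scores = {}
    for label in np.unique(train_y):
        pts = train_X[train_y == label]
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get), scores

# Tiny illustrative data set (not from the dissertation).
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.2, 0.1]), X, y))
```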
|
767 |
Developing a sustainable ultra-high performance concrete using seawater and sea-sand in combination with super-fine stainless wires. Yu, F., Dong, S., Li, L., Ashour, Ashraf, Ding, S., Han, B., Ou, J. 09 March 2023 (has links)
Utilizing seawater and sea-sand for producing ultra-high performance concrete (UHPC) can substantially reduce raw material costs and alleviate the current shortage of freshwater and river sand resources in coastal and marine areas. However, the corrosion risk to reinforcing fibers inside UHPC caused by chlorides in seawater and sea-sand cannot be ignored. In this study, a new type of sustainable UHPC composed of seawater and desalinated sea-sand (UHPSSC), reinforced with a stainless profile, super-fine stainless wire (SSW), was developed, and its mechanical properties and chloride content were studied. The results show that SSWs do not rust after immersion in seawater. The flexural and compressive strengths of UHPSSC incorporating 1.5% SSWs are 13.8 MPa and 138.6 MPa, respectively, and the flexural toughness of UHPSSC is increased by 428.9%, meeting the basic mechanical requirements of UHPC. The high specific surface area of the SSWs and the enrichment of silica fume on their surface enhance the interfacial bond between fiber and matrix, allowing the SSWs' reinforcing mechanisms to be fully exploited, as evidenced by the decrease of the Ca/Si ratio at the SSW surface. The C-S-H gels with a high Ca/Si ratio within the ITZ, as well as Friedel's salt, are conducive to immobilizing chlorides, blocking the migration of chlorides through the matrix and further mitigating the risk of long-term chloride corrosion of the SSWs. Overall, utilizing seawater and desalinated sea-sand in combination with SSWs can produce UHPC with improved strength and toughness, making it a suitable choice for applications where high durability and long-term mechanical performance are required.
|
768 |
Self-sensing ultra-high performance concrete: A review. Guo, Y., Wang, D., Ashour, Ashraf, Ding, S., Han, B. 02 November 2023 (has links)
Ultra-high performance concrete (UHPC) is an innovative cementitious composite that has been widely applied in numerous structural projects because of its superior mechanical properties and durability. However, ensuring the safety of UHPC structures creates an urgent need for technology to continuously monitor and evaluate their condition during their extended periods of service. Self-sensing ultra-high performance concrete (SSUHPC) extends the functionality of the UHPC system by integrating conductive fillers into the UHPC matrix, allowing it to address these demands with great potential and superiority. By measuring and analyzing the relationship between the fractional change in resistivity (FCR) and external stimuli (force, stress, strain), SSUHPC can effectively monitor crack initiation and propagation as well as damage events in UHPC structures, thus offering a promising pathway for structural health monitoring (SHM). Research on SSUHPC has attracted substantial interest from both academic and engineering practitioners in recent years, and this paper aims to provide a comprehensive review of the state of the art of SSUHPC. It offers a detailed overview of the material composition, mechanical properties, self-sensing capabilities, and underlying mechanisms of SSUHPC with various functional fillers. Furthermore, based on recent advancements in SSUHPC technology, the paper concludes that SSUHPC has superior self-sensing performance under tensile load but poor self-sensing performance under compressive load. The mechanical and self-sensing properties of UHPC depend substantially on the type and dosage of functional fillers. In addition, the practical engineering SHM application of SSUHPC, particularly in the context of large-scale structures, faces certain challenges, such as environmental effects on the response of SSUHPC. Therefore, further extensive investigation and empirical validation are still required to bridge the gap between laboratory research and real engineering applications of SSUHPC.
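A minimal sketch of the fractional-change-in-resistivity signal that self-sensing measurements rely on, using the standard definition FCR = (R - R0)/R0; the resistance readings in the example are invented for illustration.

```python
def fractional_change_in_resistivity(resistance, baseline=None):
    """Compute FCR(t) = (R(t) - R0) / R0, reported here in percent.

    resistance : sequence of measured electrical resistance values
    baseline   : unloaded reference resistance R0 (defaults to the first sample)
    """
    r0 = resistance[0] if baseline is None else baseline
    return [100.0 * (r - r0) / r0 for r in resistance]

# Hypothetical resistance readings (ohms) of a specimen under increasing load.
print(fractional_change_in_resistivity([120.0, 121.5, 123.2, 126.0]))
```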
|
769 |
Can High Performance Work Systems Transfer Organizational Citizenship Behavior from A Discretionary to A Sustainable Advantage? The Questions of How, Why, and When. Wang, Chun-Hsiao 06 1900 (has links)
One issue that has been neglected and is gaining currency in the organizational citizenship behavior (OCB) literature is the extent to which individuals consider OCB to be part of the job (OCB role definition). A recent meta-analytic review reveals that employees are more likely to perform OCB when they define OCB as in-role rather than as extra-role. However, little attention has been paid to the influence of organizational practices on employee OCB role definition. This neglect is of particular relevance because researchers have argued that how employees view their role obligations is likely to be shaped by purposeful organizational practices. Thus, this paper focuses on the effects of high-performance work systems (HPWS) on employee OCB role definition.
This paper adopts multiple theoretical perspectives (e.g., social exchange, organizational identification, ability-motivation-opportunity, and trust) to understand how, why, and when HPWS cause employees to expand their job requirements to include OCBs such as helping and voice. Using multisource data collected in four waves from 208 supervisor-employee dyads in Taiwan, I examined the following: (a) the direct effect of employee-experienced HPWS on employee helping and voice role definitions, (b) the mediating roles of employee helping and voice role definitions in the relationships between employee-experienced HPWS and actual employee helping and voice, (c) the mediating roles of employee social exchange and organizational identification perceptions toward the organization, as well as employee efficacy, instrumentality, and autonomy perceptions toward helping and voice, in the relationships between employee-experienced HPWS and OCB role definitions, (d) the direct effect of employee trust in supervisor on employee helping and voice role definitions, and (e) the moderating role of employee trust in supervisor in the relationships between employee-experienced HPWS and employee helping and voice role definitions. The results confirm the direct effects of employee-experienced HPWS and trust in supervisor; the mediating effects of employee helping and voice role definitions and of employee efficacy, instrumentality, and autonomy perceptions toward helping and voice; and the moderating effects of employee trust in supervisor, such that the effects of employee-experienced HPWS on employee helping and voice role definitions were stronger when trust in supervisor was high than when it was low. Implications for research and practice are discussed. / Dissertation / Doctor of Philosophy (PhD)
|
770 |
The role of Reynolds number in the fluid-elastic instability of cylinder arrays. Ghasemi, Ali 05 1900 (has links)
The onset of fluid-elastic instability in cylinder arrays is usually thought to depend primarily on the mean flow velocity, the Scruton number and the natural frequency of the cylinders. Currently, there is considerable evidence from experimental measurements and computational fluid dynamics (CFD) simulations that the Reynolds number is also an important parameter. However, the available data are not sufficient to understand or quantify this effect. In this study we use a high-resolution pseudo-spectral scheme to solve the 2-D penalized Navier-Stokes equations in order to accurately model turbulent flow past a cylinder array. To uncover the Reynolds number effect we perform simulations that vary the Reynolds number independently of the flow velocity at a fixed Scruton number, and then analyze the cylinder responses. The computational complexity of our algorithm grows with the Reynolds number; we therefore developed a high-performance parallel code which allows us to simulate high Reynolds numbers at a reasonable computational cost.
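For reference, a standard volume-penalization (Brinkman) form of the incompressible Navier-Stokes equations, of the kind such pseudo-spectral solvers typically discretize; the thesis's exact formulation and parameters may differ:

```latex
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  &= -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u}
     - \frac{\chi(\mathbf{x},t)}{\eta}\bigl(\mathbf{u}-\mathbf{u}_{s}\bigr),\\
\nabla\cdot\mathbf{u} &= 0,
\end{aligned}
```

where χ is the mask function (1 inside the cylinders, 0 in the fluid), η ≪ 1 is the penalization parameter, and u_s is the local cylinder velocity; as η → 0 the penalization term enforces the no-slip condition on the moving cylinders.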
The simulations reveal that increasing the Reynolds number has a strong destabilizing effect for staggered arrays. For the in-line array, the Reynolds number still affects the instability threshold, but the effect is not monotonic with increasing Reynolds number. In addition, our findings suggest that geometry is also an important factor, since at low Reynolds numbers the critical flow velocity in the staggered array is considerably higher than in the in-line case. This study helps to better predict how the onset of fluid-elastic instability depends on Reynolds number and reduces uncertainties in experimental data, which usually do not account for the effect of Reynolds number. / Thesis / Master of Science (MSc)
|