311 |
THE EFFECTS OF A MEDICINE BALL TRAINING PROGRAM ON RUNNING ECONOMY. Rasicci, Veronica Michelle, 21 July 2017 (has links)
No description available.
|
312 |
High-Performance Sparse Matrix-Multi Vector Multiplication on Multi-Core Architecture. Singh, Kunal, 15 August 2018 (has links)
No description available.
|
313 |
Drag and pressure die flow effects on the production and properties of a Rayon-Nylon skin-core type composite fiber. Rabe, Richard L., January 1987 (has links)
No description available.
|
314 |
Determining the Correlation Between Core Performance and Golf Swing Kinematics and Kinetics. Yontz, Nicholas Allen, 22 October 2010 (has links)
No description available.
|
315 |
Transforming and Optimizing Irregular Applications for Parallel Architectures. Zhang, Jing, 12 February 2018 (has links)
Parallel architectures, including multi-core processors, many-core processors, and multi-node systems, have become commonplace, as it is no longer feasible to improve single-core performance through increasing its operating clock frequency. Furthermore, to keep up with the exponentially growing desire for more and more computational power, the number of cores/nodes in parallel architectures has continued to dramatically increase. On the other hand, many applications in well-established and emerging fields, such as bioinformatics, social network analysis, and graph processing, exhibit increasing irregularities in memory access, control flow, and communication patterns. While multiple techniques have been introduced into modern parallel architectures to tolerate these irregularities, many irregular applications still execute poorly on current parallel architectures, as their irregularities exceed the capabilities of these techniques. Therefore, it is critical to resolve irregularities in applications for parallel architectures. However, this is a very challenging task, as the irregularities are dynamic, and hence, unknown until runtime.
To optimize irregular applications, many approaches have been proposed to improve data locality and reduce irregularities through computational and data transformations. However, there are two major drawbacks in these existing approaches that prevent them from achieving optimal performance. First, these approaches use local optimizations that exploit data locality and regularity locally within a loop or kernel. However, in many applications, there is hidden locality across loops or kernels. Second, these approaches use "one-size-fits-all'' methods that treat all irregular patterns equally and resolve them with a single method. However, many irregular applications have complex irregularities, which are mixtures of different types of irregularities and need differentiated optimizations. To overcome these two drawbacks, we propose a general methodology that includes a taxonomy of irregularities to help us analyze the irregular patterns in an application, and a set of adaptive transformations to reorder data and computation based on the characteristics of the application and architecture.
By extending our adaptive data-reordering transformation from a single node, we propose a data-partitioning framework to resolve the load imbalance problem of irregular applications on multi-node systems. Unlike existing frameworks, which use "one-size-fits-all" methods to partition the input data by a single property, our framework provides a set of operations to transform the input data by multiple properties and generates the desired data-partitioning codes by composing these operations into a workflow. / Ph. D. / Irregular applications, which present unpredictable and irregular patterns of data accesses and computation, are increasingly important in well-established and emerging fields, such as biological data analysis, social network analysis, and machine learning, to deal with large datasets. On the other hand, current parallel processors, such as multi-core CPUs (central processing units), GPUs (graphics processing units), and computer clusters (i.e., groups of connected computers), are designed for regular applications and execute irregular applications poorly. Therefore, it is critical to optimize irregular applications for parallel processors. However, it is a very challenging task, as the irregular patterns are dynamic, and hence, unknown until application execution. To overcome this challenge, we propose a general methodology that includes a taxonomy of irregularities to help us analyze the irregular patterns in an application, and a set of adaptive transformations to reorder data and computation for exploring hidden regularities based on the characteristics of the application and processor. We apply our methodology to several important and complex irregular applications as case studies to demonstrate that it is effective and efficient.
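The data-reordering idea described above can be pictured with a minimal sketch (an illustration under assumptions, not code from the dissertation): reorder the rows of a CSR sparse matrix by nonzero count, so that rows of similar length are grouped together and an irregular sparse kernel sees more regular work per chunk.

```python
import numpy as np

def reorder_rows_by_nnz(indptr, indices, data):
    """Reorder CSR rows by descending nonzero count.

    Grouping rows of similar length improves load balance and memory
    regularity for irregular sparse kernels. This is only a simple
    instance of data reordering; the dissertation's adaptive
    transformations are more general.
    """
    nrows = len(indptr) - 1
    nnz_per_row = np.diff(indptr)
    perm = np.argsort(-nnz_per_row, kind="stable")  # longest rows first

    # Rebuild the CSR arrays in permuted row order.
    new_indptr = np.zeros(nrows + 1, dtype=indptr.dtype)
    new_indptr[1:] = np.cumsum(nnz_per_row[perm])
    new_indices = np.empty_like(indices)
    new_data = np.empty_like(data)
    for out_row, src_row in enumerate(perm):
        s, e = indptr[src_row], indptr[src_row + 1]
        o = new_indptr[out_row]
        new_indices[o:o + (e - s)] = indices[s:e]
        new_data[o:o + (e - s)] = data[s:e]
    return new_indptr, new_indices, new_data, perm
```

A kernel that consumes the reordered matrix uses `perm` to map its row-wise results back to the original ordering.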
|
316 |
The Core Value Compass: visually evaluating the goodness of brands that do good. Yoganathan, Vignesh, McLeay, F., Osburg, V-S., Hart, D., September 2017 (has links)
Yes / Brands that do good for society as well as for themselves are motivated by the core values they espouse, which necessitates a better understanding of the qualities a true core value must possess. The inherent tension within brands that do good, between commercial interests that increase competitiveness and societal interests that are closely linked to the brand's authenticity, has largely been overlooked. Hence, we develop and demonstrate a relatively easy-to-apply visual tool for evaluating core values based on a set of 'goodness' criteria derived from extant theory. The Core Value Compass adopts a paradox-based, evolutionary perspective by incorporating the inherent tensions within true core values and classifying them according to their temporal orientation. Thus, we contribute towards a better understanding of the underlying tensions of core values and provide a practical tool that paves the way for improved, and indeed ethical, corporate branding strategies. Furthermore, we demonstrate the Compass's application using the case of a public sector brand, which is a quintessential brand that does good, and thereby also contribute to the nascent theoretical discourse on public sector branding. This paper thus adds to notable attempts to bridge the gap between theory and practice in core values-based corporate branding.
|
317 |
INVESTIGATION OF CENOZOIC CRUSTAL EXTENSION INFERRED FROM SEISMIC REFLECTION PROFILES AND FIELD RELATIONS, SE ARIZONA. Arca, Mehmet Serkan, January 2009 (has links)
Mid-Tertiary metamorphic core complexes in the Basin and Range province of the western North American Cordillera are characterized by large-magnitude extensional deformation. Numerous models have been proposed for the kinematic evolution of these metamorphic core complexes. Such models generally invoke footwall isostatic rebound due to tectonic denudation, and the presence of a weak middle crust capable of flow at mid-crustal levels. In popular models of Cordilleran-style metamorphic core-complex development, initial extension occurs along a breakaway fault, which subsequently is deformed into a synform and abandoned in response to isostatic rebound, with new faults breaking forward in the dominant transport direction. In southeast Arizona, the Catalina and Pinaleño Mountains core complexes have been pointed to as type examples of this model. In this study, the "traditional" core-complex model is tested through analysis of field relations and geochronological age constraints, and by interpretation of seismic reflection profiles along a transect incorporating these core complexes. Elements of these linked core-complex systems, from southwest to northeast, include the Tucson Basin, the Santa Catalina-Rincon Mountains, the San Pedro trough, the Galiuro Mountains, the Sulphur Springs Valley, the Pinaleño Mountains, and the Safford Basin. A new digital compilation of geological data across highly extended terranes, in conjunction with reprocessing and interpretation of a suite of industry 2-D seismic reflection profiles oriented nearly sub-parallel to the direction of regional extension, illuminates subsurface structural features related to Cenozoic crustal extension and provides new constraints on the evolution of core complexes in southeast Arizona. The main objective is to develop a new kinematic model for mid-Tertiary extension and core-complex evolution in southeast Arizona that incorporates new geological and geophysical observations.
Geological and seismological data indicate that viable alternative models explain observations at least as well as previous core-complex models. In contrast to the "traditional" model often employed for these structures, our models suggest that the southwest- and northeast-dipping normal-fault systems on the flanks of the Galiuro Mountains extend to mid-crustal depths beneath the San Pedro trough and Sulphur Springs Valley, respectively. In our interpretations and models, these oppositely vergent fault systems are not the breakaway faults for the Catalina and Pinaleño detachment systems.
|
318 |
Qualification du calcul de la puissance des coeurs de réacteurs à plaques : développement et application d'une nouvelle approche géostatistique / Qualification of the power profile for slabs core reactors: development and application of a new approach based on geostatistics. Simonini, Giorgio, 04 October 2012 (links)
This thesis contributes to the qualification of the NARVAL neutronics calculation scheme, dedicated to plate-type reactor cores. In particular, the objective is to develop innovative methods that use the novel experimental data from the HIPPOCAMPE program to evaluate the accuracy of the computed power profile. The difficulty stems from the location of the instrumentation (fission chambers placed between the assemblies) and from the heterogeneities characteristic of this type of core (plate geometry, solid burnable and control poisons). Two approaches were pursued. The first consists in "combining then extrapolating" the observed C/E discrepancies to determine the uncertainties associated with the power factors; for this we used the "P/A" method, traditionally employed for power-generating PWRs but never before applied to plate cores. The second instead reconstructs a power map to be used as a reference (calculation versus "reconstructed experiment" comparison); here we focused on geostatistical techniques. Having found that both methods give satisfactory results (errors comparable to the target experimental uncertainty), we extended the work by exploring further developments, in particular a new hybrid method (combining geostatistical techniques with the P/A method) that further improves the qualification of the power profile (standard deviation of the C/E discrepancies consistent with experimental observation). / The aim of this doctoral thesis work is to contribute to the experimental validation of a neutron physics code, called NARVAL, devoted to the analysis of slab core reactors.
The primary objective is to develop innovative methods to validate the computed power map starting from the original experimental data provided by the HIPPOCAMPE campaign. The particular position of the instrumentation (fission chambers located between the assemblies) and the strong heterogeneities characterising this specific core design (slab geometry, burnable and control neutron absorbers in solid state) represent the main challenge of this work. Two different approaches are investigated: the first consists in "combining and extrapolating" the observed calculated/experimental results in order to evaluate the uncertainty of the power coefficients. Among different solutions, the "P/A" method is chosen: it is usually employed in conventional PWR plant analysis and has never before been applied to slab cores. The second approach aims to reconstruct a power map that can be used as a direct reference for code validation; for this, geostatistical techniques are selected. Both methods provide satisfactory results, as the estimated errors are in good agreement with the experimental uncertainty target. Building on this, a new hybrid method, applying the geostatistical techniques to the P/A scheme, is investigated and developed. The good agreement between the experimental and estimated validations of the computed power map attests to the noteworthy performance of this new method.
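The geostatistical reconstruction step can be pictured with a minimal ordinary-kriging sketch (illustrative only: the exponential covariance model and the `range_`/`sill` values are assumptions, not the thesis's actual variogram model): scattered detector readings are interpolated onto query points using weights constrained to sum to one.

```python
import numpy as np

def ordinary_kriging(xy, values, query, range_=2.0, sill=1.0):
    """Estimate a field at `query` points from scattered readings.

    Minimal ordinary-kriging sketch; the covariance model and its
    parameters are illustrative, not those used in the thesis.
    """
    def cov(d):
        return sill * np.exp(-d / range_)  # assumed exponential model

    n = len(xy)
    d_ss = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    # Ordinary-kriging system: sample covariances plus a Lagrange
    # row/column enforcing that the weights sum to one (unbiasedness).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d_ss)
    A[n, n] = 0.0

    out = np.empty(len(query))
    for i, q in enumerate(query):
        d_sq = np.linalg.norm(xy - q, axis=1)
        b = np.append(cov(d_sq), 1.0)
        w = np.linalg.solve(A, b)
        out[i] = w[:n] @ values
    return out
```

With no nugget effect, the estimator reproduces the measured value exactly at each sample location, which is the property that makes the reconstructed map usable as a reference for code validation.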
|
319 |
Planning Considerations Of Tall Buildings: Service Core Configuration And Typologies. Keskin, Zeynep, 01 November 2012 (links) (PDF)
In general, tall buildings, some of which are termed "skyscrapers", are among the typical and almost unavoidable features of metropolitan cities. There has been a competitive race to construct ever-taller buildings since the birth of the famous Home Insurance Building in Chicago, which is still considered the pioneer of modern tall buildings. With the increase in the height and capacity of tall buildings, efficient service core design has become a pressing need, primarily because of the circulation volume of occupants, since height has an adverse effect on the size and capacity of the service core. This thesis investigates the features of service cores that play an important role in the planning considerations of tall building design, and their effect on architectural, structural and sustainable design. Within this context, a classification of service cores based on their location in architectural design is proposed.
|
320 |
Promoting Engagement Through Instructional Practices Using the Common Core State Standards For Mathematics. Spears, Tyler S., 19 December 2013 (links)
No description available.
|