141 |
Massive Crowd Simulation With Parallel Processing. Yilmaz, Erdal, 01 February 2010 (has links) (PDF)
This thesis analyzes how parallel processing on the Graphics Processing Unit (GPU) can be used for massive crowd simulation, not only for rendering but also for the computational power required for realistic simulation. The extreme population sizes in massive crowd simulation introduce an extra computational load that is difficult to meet using Central Processing Unit (CPU) resources alone. The thesis presents specific methods and approaches that maximize the throughput of GPU parallel computing while using the GPU as the main processor for massive crowd simulation.
The methodology introduced in this thesis makes it possible to simulate and visualize hundreds of thousands of virtual characters in real-time. In order to achieve two orders of magnitude speedups by using GPU parallel processing, various stream compaction and effective memory access approaches were employed.
To simulate crowd behavior, fuzzy logic functionality was implemented on the GPU from scratch. This implementation is capable of computing more than half a billion fuzzy inferences per second.
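As an illustration of the kind of data-parallel fuzzy inference described above, the sketch below evaluates a toy Mamdani-style rule base over many agents in one vectorized pass. NumPy stands in for the GPU kernels here, and the membership functions, rules, and agent counts are hypothetical, not those of the thesis.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function, vectorized over x."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def infer_speed(dist):
    """Toy Mamdani inference: agent speed from distance to nearest neighbor.
    Rules: near -> slow (0.2), medium -> cruise (0.6), far -> fast (1.0).
    Defuzzified as the weight-averaged rule output."""
    w_near = tri(dist, -1.0, 0.0, 5.0)
    w_med = tri(dist, 0.0, 5.0, 10.0)
    w_far = tri(dist, 5.0, 10.0, 20.0)
    num = 0.2 * w_near + 0.6 * w_med + 1.0 * w_far
    den = w_near + w_med + w_far
    return num / np.maximum(den, 1e-9)

# Evaluate 100000 agents in one vectorized call (GPU analogue: one kernel launch)
dists = np.random.default_rng(0).uniform(0.0, 10.0, 100000)
speeds = infer_speed(dists)
```

A GPU version replaces the NumPy array operations with per-agent kernel threads; the point of the sketch is that each inference is independent, which is what makes the throughput scale with the number of parallel processors.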
|
142 |
Massive Higher Derivative Gravity Theories. Gullu, Ibrahim, 01 December 2011 (has links) (PDF)
In this thesis massive higher derivative gravity theories are analyzed in some detail.
One-particle scattering amplitude between two covariantly conserved sources mediated by a graviton exchange
is found at tree-level in D dimensional (Anti)-de Sitter and flat spacetimes for the most
general quadratic curvature theory augmented with the Pauli-Fierz mass term. From the amplitude expression, the
Newtonian potential energies are calculated for various cases.
Also, from this amplitude and the propagator structure, a three-dimensional unitary theory is identified. In the second part of the thesis,
this three-dimensional unitary theory is studied in more detail from a canonical point of view. The general higher-order action is written
in terms of gauge-invariant functions both in flat and de Sitter backgrounds. The analysis is extended by
adding static sources, spinning masses and the gravitational Chern-Simons term separately to the theory in
the case of flat spacetime. For all cases the microscopic spectrum and the masses are found. In the discussion of curved
spacetime, the masses are found in the relativistic and non-relativistic limits. In the Appendix,
some useful calculations that are frequently used in the bulk of the thesis are given.
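For orientation, the following is a hedged sketch in standard textbook form (not the thesis's general D-dimensional results) of the Pauli-Fierz mass term mentioned above and the well-known D = 4 flat-space potential that massive graviton exchange produces:

```latex
% Pauli-Fierz mass term for a spin-2 field h_{\mu\nu} (standard form):
S_{\rm PF} = -\frac{m^2}{4}\int d^D x\, \left( h_{\mu\nu} h^{\mu\nu} - h^2 \right),
\qquad h \equiv h^{\mu}{}_{\mu}.

% In the D = 4 flat-space limit, tree-level graviton exchange between two
% static point sources of masses m_1, m_2 gives a Yukawa-type potential
% energy carrying the van Dam-Veltman-Zakharov factor 4/3:
U(r) = -\frac{4}{3}\,\frac{G\, m_1 m_2}{r}\, e^{-m r},

% which does not reduce to the massless result -G m_1 m_2 / r as m \to 0
% (the vDVZ discontinuity).
```

The relative tensor structure inside the mass term (the coefficient -1 of h^2) is exactly what the amplitude computation probes: any other coefficient introduces a ghost.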
|
143 |
The Effect on Taiwan Investment in China's Western Region--A Study of Development Strategy and Location Factors. Wu, Li-Sheng, 14 June 2001 (has links)
The objective of this thesis is to explore the development strategy and location factors that affect Taiwanese firms' investment in China's western region.
In the first step of this research, we review the literature and analyze China's development strategy for the western region. The main theories adopted in this study include "Regional Growth Theory", "Location Theory", and "Location Policy".
In the second step, we use a questionnaire survey to analyze the investment intentions of Taiwanese firms toward China's western region. The statistical analysis of the questionnaire data includes t-tests, ANOVA tests, Likert summated-scale analysis, factor analysis, and cluster analysis. These methods examine the relationship between location selection and the investment intentions of Taiwanese firms in China's western region.
Through this research, the conclusions of this thesis are as follows. Firstly, the "Spot-Axle Model" strategy is adopted for the development of the western region; therefore, locations with railway access have high priority in western development.
Secondly, a total of 585 questionnaires were mailed to Taiwanese firms in 12 industries in China; 78 firms responded, and 70 of the returned questionnaires were valid. The results of the questionnaire survey are presented below:
1) In Taiwanese firms' views, Sichuan Province, Chongqing Municipality, Yunnan Province, and Shaanxi Province are the best regions for investment in China's western region, and the best timing is during the period 2001-2010.
2) The certain factors affecting investment are labor, property policies, communications, and infrastructure, while the market is an uncertain factor.
3) Taiwanese firms weigh location factors differently between the eastern and western regions. They attach importance to market and labor factors in the eastern region, while infrastructure, communications, and preferential policies are the important factors in the western region.
4) According to the factor analysis, 76% of investment considerations can be explained by 10 factors: quantity and price of labor and land, communications, public facilities, culture, agglomeration, markets, minerals, policies, economics, and the natural environment, selected from the 36 factors included in this study.
5) We use cluster analysis to analyze the choice of location by Taiwanese firms. The results show that 58.6% of the samples belong to the "quantity and price of labor and land–market oriented" category, and 18.6% belong to the "nature–infrastructure oriented" category.
|
144 |
The Youngest Massive Stars of the Magellanic Clouds: HEBs and Their Environment (Étoiles massives les plus jeunes des Nuages de Magellan). Meynadier, Frédéric, 17 June 2005 (has links) (PDF)
This thesis is devoted to the study of "high-excitation blobs" (HEBs), a characteristic but still poorly understood phase of massive star formation. These objects are compact HII regions of the Magellanic Clouds that are observable in the optical domain. Through high angular resolution observations (HST, as well as image restoration from ground-based telescopes), I identified the different stellar populations associated with the blobs. Near-IR observations (VLT) also allowed me to probe the extremely heterogeneous environment of these objects. In addition, a spectroscopic study allowed me to define a new category of these objects: low-excitation blobs (LEBs). This data set enabled a detailed study of several physical properties of these objects and underlines the interest of studying them with the instruments now under construction (ALMA, JWST, etc.).
|
145 |
VALUE STREAM MAPPING – A CASE STUDY OF CONSTRUCTION SUPPLY CHAIN OF PREFABRICATED MASSIVE TIMBER FLOOR ELEMENT. Marzec, Cindy; Gustavsson, Joachim, January 2007 (has links)
The purpose of this Master's thesis is to study how the value stream mapping concept can be applied along the construction supply chain for prefabricated massive timber floor elements. Identification and quantification of waste are the starting points for proposing suggestions on how to reduce and/or eliminate it. In order to use value stream mapping along the construction supply chain, pertinent data have been collected and analyzed. To conduct the value stream mapping, the first three steps of the lean thinking principles in construction have been followed. The first step defines the customer and customer value, as well as the value for the delivery team and how it is specified in the product. The second step identifies the value stream by defining the resources and activities needed to manufacture, deliver and install the floor elements, using the VSMM methodology; in addition, current practice should be standardized and key component suppliers defined and located. The third and last step identifies non-value-adding activities, in other words waste, and produces suggestions on how to remove and/or reduce it. Waste from product defects, transportation waste and waste of waiting were found in the construction supply chain. Propositions to reduce and/or eliminate waste were to implement more careful planning of the manufacturing process and production schedule, to apply lean production principles in the manufacturing facility, and to decrease and/or eliminate storage time. The study has shown that in the supply chain of massive timber floor elements at Limnologen there is large potential to lower costs and increase customer value, as value-added time accounted for only 2% of the total time.
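The closing 2% figure is a ratio of value-added time to total lead time. A minimal sketch of that computation, with invented process steps and durations rather than the Limnologen data, might look like:

```python
# Hypothetical process steps for a timber floor element (hours): each step is
# (name, total_time, value_added_time).  Numbers are illustrative only.
steps = [
    ("raw material storage",   240.0, 0.0),
    ("CNC machining",            6.0, 6.0),
    ("element assembly",         8.0, 8.0),
    ("finished goods storage", 360.0, 0.0),
    ("transport to site",       12.0, 0.0),
    ("on-site installation",     4.0, 4.0),
]

total_time = sum(t for _, t, _ in steps)        # total lead time
value_added = sum(va for _, _, va in steps)     # time that transforms the product
va_ratio = value_added / total_time             # the VSM headline metric
```

In this toy timeline storage dominates the lead time, which is exactly the pattern a value stream map is meant to expose: the waste-reduction proposals above (tighter scheduling, less storage) attack the non-value-adding terms of this sum.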
|
146 |
Superluminous supernovae: theory and observations. Chatzopoulos, Emmanouil, 25 October 2013 (has links)
The discovery of superluminous supernovae in the past decade has challenged our understanding of explosive stellar death. Subsequent extensive observations of superluminous supernova light curves and spectra have provided some insight into the nature of these events. We present observations of one of the most luminous self-interacting supernovae ever observed, the hydrogen-rich SN 2008am, discovered by the Robotic Optical Transient Search Experiment Supernova Verification Project with the ROTSE-IIIb telescope at McDonald Observatory. We provide theoretical modeling of superluminous supernova light curves and fit the models to a number of observed events and similar transients in order to understand the mechanism responsible for the vast amounts of energy emitted by these explosions. The models we investigate include energy deposition by the radioactive decay of massive amounts of nickel-56, interaction of the supernova ejecta with a dense circumstellar medium, and magnetar spin-down. To probe the nature of superluminous supernova progenitor stars, we study the evolution of massive stars, including important effects such as rotation and magnetic fields, and perform multi-dimensional hydrodynamics simulations of the resulting explosions. The effects of rotational mixing are also studied in solar-type secondary stars in cataclysmic variable binary systems in order to explain some carbon-depleted members of this class. We find that most superluminous supernovae can be explained by violent interaction of the SN ejecta with >1 Msun dense circumstellar shells ejected by the progenitor stars in the decades preceding the SN explosion. / text
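One of the competing mechanisms above, radioactive-decay heating, is often sketched with the standard 56Ni → 56Co → 56Fe parametrization. The snippet below computes only the instantaneous energy deposition, with no photon diffusion, so it is a sketch of the power source rather than the thesis's full light-curve models; the coefficients are the commonly quoted values per solar mass of nickel.

```python
import math

# Instantaneous heating from the 56Ni -> 56Co -> 56Fe decay chain, using the
# standard e-folding times of 8.8 d (56Ni) and 111.3 d (56Co).
TAU_NI = 8.8    # 56Ni e-folding time [days]
TAU_CO = 111.3  # 56Co e-folding time [days]

def decay_luminosity(t_days, m_ni_msun):
    """Heating rate in erg/s for m_ni_msun solar masses of initial 56Ni."""
    l_ni = 6.45e43 * math.exp(-t_days / TAU_NI)   # 56Ni decay term
    l_co = 1.45e43 * math.exp(-t_days / TAU_CO)   # 56Co daughter term
    return m_ni_msun * (l_ni + l_co)
```

A light curve requiring ~1e44 erg/s at peak some 50 days after explosion would, by this formula, demand tens of solar masses of nickel, which is one way the nickel-only model is stress-tested against circumstellar interaction and magnetar alternatives.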
|
147 |
Small cell and D2D offloading in heterogeneous cellular networks. Ye, Qiaoyang, 08 September 2015 (has links)
Future wireless networks are evolving to become ever more heterogeneous, including small cells such as picocells and femtocells, and direct device-to-device (D2D) communication that bypasses base stations (BSs) altogether to share stored and personalized content. Conventional user association schemes are unsuitable for heterogeneous networks (HetNets), due to the massive disparities in transmit power and capabilities of different BSs. To make the most of the new low-power infrastructure and D2D communication, it is desirable to facilitate and encourage users to be offloaded from the macro BSs. This dissertation characterizes the gain in network performance (e.g., the rate distribution) from offloading users to small cells and the D2D network, and develops efficient user association, resource allocation, and interference management schemes aiming to achieve the performance gain. First, we optimize the load-aware user association in HetNets with single-antenna BSs, which bridges the gap between the optimal solution and a simple small cell biasing approach. We then develop a low-complexity distributed algorithm that converges to a near-optimal solution with a theoretical performance guarantee. Simulation results show that the biasing approach loses surprisingly little with appropriate bias factors, and there is a large rate gain for cell-edge users. This framework is then extended to a joint optimization of user association and resource blanking at the macro BSs – similar to the enhanced intercell interference coordination (eICIC) proposed in the global cellular standards, 3rd Generation Partnership Project (3GPP). Though the joint problem is nominally combinatorial, by allowing users to associate to multiple BSs, the problem becomes convex. We show both theoretically and through simulation that the optimal solution of the relaxed problem still results in a mostly unique association. Simulation shows that resource blanking can further improve the network performance. 
Next, the above framework with single-antenna transmission is extended to HetNets with BSs equipped with large antenna arrays operating in the massive MIMO regime. MIMO techniques enable another interference management option: serving users simultaneously from multiple BSs, termed joint transmission (JT). This chapter formulates a unified utility maximization problem to optimize user association with JT and resource blanking, from which an efficient dual-subgradient-based algorithm approaching optimal solutions is developed. Moreover, a simple scheduling scheme is developed to implement near-optimal solutions. We then change direction slightly to develop a flexible and tractable framework for D2D communication in the context of a cellular network. The model is applied to study both shared and orthogonal resource allocation between D2D and cellular networks. Analytical SINR distributions and average rates are derived and applied to maximize the total throughput, under an assumption of interference randomization via time and/or frequency hopping, which can be viewed as an optimized lower bound on more sophisticated scheduling schemes. Finally, motivated by the benefits of cochannel D2D links, this dissertation investigates interference management for D2D links sharing cellular uplink resources. Showing that the problem of maximizing network throughput while guaranteeing the service of cellular users is non-convex and hence intractable, we instead propose a distributed approach that is computationally efficient and requires minimal coordination. The key algorithmic idea is a pricing mechanism, whereby BSs optimize and transmit a signal depending on the interference to D2D links, which then play a best response (i.e., selfishly) to this signal. Numerical results show that our algorithms converge quickly, have low overhead, and achieve a significant throughput gain, while maintaining the quality of cellular links at a predefined service level. / text
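The small cell biasing approach mentioned above can be sketched in a few lines: each user associates with the base station maximizing bias-weighted received power, so a large bias on a low-power picocell offloads nearby users from the macro. The geometry, powers, path-loss exponent, and bias below are illustrative assumptions, not the dissertation's simulation setup.

```python
import math

# BS tuples: (x, y, transmit_power_watts, association_bias).  A bias > 1 on
# the low-power small cell expands its effective coverage ("range expansion").
bss = [
    (0.0, 0.0, 40.0, 1.0),   # macro BS
    (80.0, 0.0, 1.0, 16.0),  # picocell with a ~12 dB (16x) bias
]

def received_power(user, bs, alpha=3.5):
    """Simple path-loss model: P_rx = P_tx * d^(-alpha)."""
    x, y, p_tx, _ = bs
    d = max(math.hypot(user[0] - x, user[1] - y), 1.0)
    return p_tx * d ** (-alpha)

def associate(user):
    """Index of the BS maximizing bias-weighted received power."""
    return max(range(len(bss)),
               key=lambda i: bss[i][3] * received_power(user, bss[i]))
```

A user at (55, 0) receives more raw power from the macro but is pushed onto the picocell by the bias; that is exactly the offloading the load-aware optimization tunes, and why the blanking of macro resources (eICIC) is needed to protect such range-expanded users.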
|
148 |
The G305 star forming complex: a panoramic view of the environment and star formation. Hindson, Luke Paul, January 2012 (has links)
This thesis presents molecular line and radio continuum observations of the giant molecular cloud (GMC) complex known as G305. The energy input from high-mass stars, in the form of powerful winds and ionising radiation, is one of the primary feedback mechanisms in GMCs. This feedback is thought to play a dual role, both dispersing and destroying the natal environment and also sweeping up and compressing molecular gas, potentially triggering new episodes of star formation. Despite their importance to the evolution of GMCs and galaxies as a whole, the physical processes behind the formation and evolution of high-mass stars remain poorly understood. We therefore set out to obtain wide-field observations of the ionised and molecular environment to study the impact of high-mass stars on the evolution of G305. Observations conducted with the Mopra telescope of the molecular gas, traced by NH3 in the (1,1), (2,2) and (3,3) transitions and by CO (12CO, 13CO and C18O J = 1–0), reveal the reservoir for future star formation in G305 and allow the physical properties and kinematics of the region to be studied. We identify 15 large molecular clouds and 57 smaller molecular clumps towards G305. The physical properties of the molecular gas are consistent with G305 being amongst the most massive and vigorous star-forming regions in the Galaxy. We find a total molecular gas mass of 2.5–6.5 × 10^5 Msun, indicating that there is a large reservoir for future star formation. By considering virial equilibrium within the molecular clumps, we discover that only 14% of the molecular clumps in G305 are gravitationally unstable; however, these clumps contain >30% of the molecular mass in G305, suggesting there is scope for considerable future star formation. To study the ionised environment towards G305, we have obtained some of the largest and most detailed wide-area mosaics with the Australia Telescope Compact Array to date.
These radio continuum observations were performed simultaneously at 5.5 and 8.8 GHz, and by applying two imaging techniques we are able to resolve HII regions from the ultra-compact to the classical evolutionary phase. This has allowed high-mass star formation within G305 to be traced over the extent and lifetime of the complex. We discover that more than half of the observable total ionising flux in G305 is associated with embedded high-mass star formation around the periphery of a central cavity that has been driven into the molecular gas by a cluster of optically visible massive stars. By considering the contribution of embedded and visible massive stars to the observed radio continuum, we suggest that more than 45 massive stars exist within G305. Combining these two studies with evidence of recent and ongoing star formation provides the most in-depth view of G305 to date and allows the star formation history and the impact of high-mass stars to be investigated. We find compelling morphological evidence that triggering is responsible for at least some of the observed high-mass star formation, and we construct a star formation history for the region.
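The virial-equilibrium test mentioned above reduces to computing a virial parameter per clump. A hedged sketch follows, with invented clump measurements rather than the G305 catalogue values.

```python
# Virial parameter of a molecular clump: alpha = 5 * sigma^2 * R / (G * M).
# alpha below ~2 indicates the clump is gravitationally bound and unstable
# to collapse.  Clump values here are illustrative, not survey measurements.
G = 4.302e-3  # gravitational constant in pc (km/s)^2 / Msun

def virial_parameter(sigma_kms, radius_pc, mass_msun):
    """Dimensionless ratio of kinetic to gravitational energy (up to factors)."""
    return 5.0 * sigma_kms**2 * radius_pc / (G * mass_msun)

clumps = [
    ("clump A", 1.2, 0.8, 2.0e3),  # (name, sigma [km/s], R [pc], M [Msun])
    ("clump B", 2.5, 1.5, 1.0e3),
]
unstable = [name for name, s, r, m in clumps
            if virial_parameter(s, r, m) < 2.0]
```

A finding such as "14% of clumps are unstable but hold >30% of the mass" falls out of exactly this kind of per-clump bookkeeping: the massive clumps tend to be the bound ones.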
|
149 |
Practical Precoding Design for Modern Multiuser MIMO Communications. Liang, Le, 08 December 2015 (has links)
The use of multiple antennas to improve the reliability and capacity of wireless communication has been around for a while, leading to the concept of multiple-input multiple-output (MIMO) communications. To realize the full potential of MIMO, precoding design has been recognized as a crucial component. This thesis aims to design multiuser MIMO precoders of practical interest that achieve high reliability and capacity under various real-world constraints, such as inaccurate channel information at the transmitter and hardware complexity. Three prominent cases are considered, which constitute the main evolution directions of current cellular standards and future 5G cellular communications. First, in a relay-assisted multiuser MIMO system, heavily quantized channel information obtained through limited feedback contributes to noticeable rate loss compared to when perfect channel information is available. This thesis derives an upper bound to characterize the system throughput loss caused by channel quantization error, and then develops a feedback quality control strategy to keep the rate loss within a bounded range. Second, in a massive multiuser MIMO channel, the large array size makes it difficult to support each antenna with a dedicated radio frequency chain, rendering high-dimensional baseband precoding infeasible. To address this challenge, a low-complexity hybrid precoding scheme is designed that divides the precoding into two cascaded stages: low-dimensional baseband precoding and high-dimensional phase-only processing in the radio frequency domain. Its performance is characterized in closed form and demonstrated through computer simulations. Third, in a mmWave multiuser MIMO scenario, smaller wavelengths make it possible to incorporate large numbers of antenna elements into a compact form.
However, the hardware challenges are even greater, as mixed-signal processing at mmWave frequencies is more complex and power-consuming. This thesis takes advantage of channel sparsity to enable a simplified precoding scheme that steers the beam for each user towards its dominant propagation paths in the radio frequency domain only. The proposed scheme comes at significantly reduced complexity and is shown to be capable of achieving highly desirable performance based on asymptotic rate analysis. / Graduate
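A minimal sketch of the two-stage hybrid idea described above: a unit-modulus (phase-only) RF matrix, implementable with analog phase shifters, followed by a small digital baseband matrix. Matched-filter phases plus zero-forcing are used here as one simple generic choice, not the specific scheme proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_users = 64, 4                      # antennas, single-antenna users
H = (rng.standard_normal((n_users, n_tx))
     + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

# Stage 1 (RF): phase-only matrix, one column per user, unit-modulus entries
F_rf = np.exp(1j * np.angle(H.conj().T))   # n_tx x n_users

# Stage 2 (baseband): zero-forcing on the small effective channel
H_eff = H @ F_rf                           # n_users x n_users
F_bb = np.linalg.pinv(H_eff)
F = F_rf @ F_bb
F /= np.linalg.norm(F)                     # total power normalization
```

Only `n_users` RF chains are needed instead of `n_tx`, since the digital stage works on the n_users-dimensional effective channel; the phase-only constraint on the RF stage is what makes the hardware cheap.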
|
150 |
Distributed and Multiphase Inference in Theory and Practice: Principles, Modeling, and Computation for High-Throughput Science. Blocker, Alexander Weaver, 18 September 2013 (has links)
The rise of high-throughput scientific experimentation and data collection has introduced new classes of statistical and computational challenges. The technologies driving this data explosion are subject to complex new forms of measurement error, requiring sophisticated statistical approaches. Simultaneously, statistical computing must adapt to larger volumes of data and new computational environments, particularly parallel and distributed settings. This dissertation presents several computational and theoretical contributions to these challenges. In chapter 1, we consider the problem of estimating the genome-wide distribution of nucleosome positions from paired-end sequencing data. We develop a modeling approach based on nonparametric templates that controls for variability due to enzymatic digestion. We use this to construct a calibrated Bayesian method to detect local concentrations of nucleosome positions. Inference is carried out via a distributed HMC algorithm that scales linearly in complexity with the length of the genome being analyzed. We provide MPI-based implementations of the proposed methods, stand-alone and on Amazon EC2, which can provide inferences on an entire S. cerevisiae genome in less than 1 hour on EC2. We then present a method for absolute quantitation from LC-MS/MS proteomics experiments in chapter 2. We present a Bayesian model for the non-ignorable missing data mechanism induced by this technology, which includes an unusual combination of censoring and truncation. We provide a scalable MCMC sampler for inference in this setting, enabling full-proteome analyses using cluster computing environments. A set of simulation studies and actual experiments demonstrate this approach's validity and utility. We close in chapter 3 by proposing a theoretical framework for the analysis of preprocessing under the banner of multiphase inference. Preprocessing forms an oft-neglected foundation for a wide range of statistical and scientific analyses. 
We provide some initial theoretical foundations for this area, including distributed preprocessing, building upon previous work in multiple imputation. We demonstrate that multiphase inferences can, in some cases, even surpass standard single-phase estimators in efficiency and robustness. Our work suggests several paths for further research into the statistical principles underlying preprocessing. / Statistics
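As a concrete precedent for the multiphase pooling discussed above, the classical combining rules of multiple imputation (Rubin's rules) can be sketched as follows; the per-dataset estimates and variances are made-up numbers for illustration.

```python
def rubin_pool(estimates, variances):
    """Pool m per-imputation (estimate, within-variance) pairs via Rubin's rules."""
    m = len(estimates)
    q_bar = sum(estimates) / m                    # pooled point estimate
    w = sum(variances) / m                        # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    total_var = w + (1.0 + 1.0 / m) * b           # Rubin's total variance
    return q_bar, total_var

# Three hypothetical completed datasets (m = 3)
q, t = rubin_pool([1.9, 2.1, 2.0], [0.04, 0.05, 0.045])
```

The total variance exceeds the average within-imputation variance whenever the phase-one outputs disagree, which is the basic accounting that multiphase inference generalizes: uncertainty introduced by preprocessing must be propagated, not discarded.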
|