241 |
Imaginário e design: resignificação do jogo eletrônico por meio da linguagem expressiva
Sato, Adriana Kei Ohashi 14 September 2007 (has links)
Previous issue date: 2007-09-14 / This work analyzes design as an object that responds to the expectations of the user (the gamer) on the basis of the collective imaginary, which in turn originates in a specific socio-cultural context. As symbolic beings, humans seek meaning for their existence by attributing meanings to objects; society interprets those meanings, creating values, behaviors, and cultural references. The work presents fundamental concepts about design and its presence in the socio-cultural context, with design understood as the language that, drawing on the collective imaginary, interprets and re-signifies the electronic game.
|
242 |
The magmatic-hydrothermal architecture of the Archean Volcanic Massive Sulfide (VMS) System at Panorama, Pilbara, Western Australia
Drieberg, Susan L. January 2003 (has links)
[Truncated abstract. Formulae and special characters can only be approximated here. Please see the pdf version of this abstract for an accurate representation.] The 3.24 Ga Panorama VMS District, located in the Pilbara Craton of Western Australia, is exposed as a cross-section through subvolcanic granite intrusions and a coeval submarine volcanic sequence that hosts Zn-Cu mineralization. The near-complete exposure across the district, the very low metamorphic grade, and the remarkable preservation of primary igneous and volcanic textures provide an unparalleled opportunity to examine the P-T-X-source evolution of a VMS ore-forming system and to assess the role of the subvolcanic intrusions as heat sources and/or metal contributors to the overlying VMS hydrothermal system. Detailed mapping of the Panorama VMS District has revealed seven major vein types related to the VMS hydrothermal system or to the subvolcanic intrusions. (1) Quartz-chalcopyrite veins, hosted in granophyric granite immediately beneath the granite-volcanic contact, formed prior to main-stage VMS hydrothermal convection and were precipitated from mixed H2O-CO2-NaCl-KCl fluids with variable salinities (2.5 to 8.5 wt% NaCl equiv). (2) Quartz-sericite veins, ubiquitous across the top 50 m of the volcanic sequence, formed from Archean seawater with a salinity of 9.7 to 11.2 wt% NaCl equiv at temperatures of 90° to 135°C. These veins formed synchronously with the regional feldspar-sericite-quartz-ankerite alteration during seawater recharge into the main-stage VMS hydrothermal convection cells. (3) Quartz-pyrite veins hosted in granophyric granite, and (4) quartz-carbonate-pyrite veins hosted in andesite-basalt, also formed from relatively unevolved Archean seawater (5.5 to 10.1 wt% NaCl equiv; 150° to 225°C), but during the collapse of the VMS hydrothermal system, when cool, unmodified seawater invaded the top of the subvolcanic intrusions.
(5) Quartz-topaz-muscovite greisen, (6) quartz-chlorite-chalcopyrite vein greisen, and (7) hydrothermal Cu-Zn-Sn veins are hosted in the subvolcanic intrusions. Primary H2O-NaCl-CaCl2 fluid inclusions in the vein greisens were complex high temperature hypersaline inclusions (up to 590°C and up to 56 wt% NaCl equiv). The H2O-CO2-NaCl fluid inclusions in the Cu-Zn-Sn veins have variable salinities, ranging from 4.9 to 14.1 wt% NaCl equiv, and homogenization temperatures ranging from 160° to 325°C. The hydrothermal quartz veins and magmatic metasomatic phases in the subvolcanic intrusions were formed from a magmatic-hydrothermal fluid that had evolved through wallrock reactions, cooling, and finally mixing with seawater-derived VMS hydrothermal fluids.
|
243 |
Análise de redes sociais em comunidades virtuais emergentes de jogos on-line por meio de coleta de dados automatizada
Rodrigues, Lia Carrari 11 February 2009 (has links)
Previous issue date: 2009-02-11 / Fundo Mackenzie de Pesquisa / The worldwide popularity of online games has led to virtual communities of hundreds of people, formed through daily interaction in games called Massive Multiplayer Online Role-Playing Games (MMORPGs). These communities are culturally diverse and permeated by social ties established in different ways within the game's virtual world. This research studied communities of the game World of Warcraft through social network analysis. The goal of this approach was to define a typology of social ties and to study behavioral patterns linked to the structure of these networks. This involved building an efficient data-collection system for the network, as well as analysis tools, drawing on several theories: graph theory, Kohonen networks, algebra, and software engineering. With this methodology, it was verified that this kind of organization can be characterized as an emergent complex adaptive system.
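The tie typology and structural measures described above can be sketched in a few lines. Everything below (player names, tie types, the interaction log) is invented for illustration; the thesis collected real interaction data automatically from World of Warcraft.

```python
from collections import defaultdict

# Hypothetical interaction log: (player_a, player_b, tie_type); repeated
# contact between the same pair strengthens (weights) their tie.
interactions = [
    ("Ana", "Bruno", "guild"),
    ("Ana", "Carla", "trade"),
    ("Bruno", "Carla", "party"),
    ("Carla", "Diego", "guild"),
    ("Diego", "Ana", "party"),
]

edges = {}                      # undirected tie -> {"weight", "tie"}
neighbors = defaultdict(set)    # adjacency sets of the social graph
for a, b, tie in interactions:
    key = frozenset((a, b))
    if key in edges:
        edges[key]["weight"] += 1
    else:
        edges[key] = {"weight": 1, "tie": tie}
    neighbors[a].add(b)
    neighbors[b].add(a)

# Two basic structural measures used in social network analysis
degree = {p: len(ns) for p, ns in neighbors.items()}
n = len(neighbors)
density = 2 * len(edges) / (n * (n - 1))  # fraction of possible ties present

print(degree)             # {'Ana': 3, 'Bruno': 2, 'Carla': 3, 'Diego': 2}
print(round(density, 2))  # 0.83
```

A real pipeline would feed thousands of logged interactions into the same structure before computing centrality and clustering measures.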
|
244 |
Metodologias ativas de aprendizagem interferem no desempenho de estudantes / Active learning methods interfere in student performance
Iara Yamamoto 16 September 2016 (links)
This research analyzes factors that support the use of active learning methods to improve student performance and foster meaningful learning. The act of learning is non-transferable: only the individual can learn, and no one can learn for another, but students' interest can be stimulated by exploring new learning opportunities centered on their own activity. This includes hybridization, the blending of techniques and tools that support and energize learning by combining classroom and virtual teaching environments, here using MOOCs under the flipped-classroom model. To evaluate how the active learning methodology affects performance, university students from two private institutions in the field of Applied Social Sciences took part in the study; after one semester, they answered two statistically validated questionnaires: the Academic Motivation Scale and a Learning Strategies Scale for online environments. Multivariate techniques (principal component analysis and cluster analysis) were used to identify motivation groups. To characterize the motivation groups obtained, group frequencies and principal-component means were calculated, along with statistical-significance markings for t-tests of mean differences, and regression models were used to assess students' final averages (grades) according to the covariates (participation in the course on the MOOC platform, class, motivation group, and gender). The main results show that the choice of an active learning method using the MOOC platform raised final grades in all groups relative to students who did not access the platform and thus did not take part in the whole process; this effect was even stronger for motivation group 1 (motivated by academic excellence), and PU-1 outperformed PU-2. The success of introducing active methodologies is directly related to the involvement of all actors in the process, especially educational institutions and teachers, in forming individuals able to transform their lives, their surroundings, and our society.
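The grouping step can be illustrated with a toy clustering run. The thesis applies principal component analysis to the validated-scale responses before clustering; the sketch below skips the PCA step and clusters two made-up motivation scores directly, so every number here is hypothetical.

```python
import math
import random

# Hypothetical questionnaire scores per student: (intrinsic, extrinsic) motivation
students = [(4.5, 2.0), (4.8, 1.8), (4.2, 2.2),   # high intrinsic motivation
            (1.9, 4.6), (2.1, 4.4), (1.7, 4.8)]   # high extrinsic motivation

def kmeans(points, k, iters=25, seed=0):
    """Minimal k-means: returns a cluster label for each point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        labels = [min(range(k), key=lambda c: math.dist(p, centers[c]))
                  for p in points]
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return labels

labels = kmeans(students, k=2)
print(labels)  # the first three students share one label, the last three the other
```

With well-separated groups like these, the two recovered clusters correspond to the "motivation groups" whose mean grades are then compared.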
|
245 |
[pt] DETECÇÃO DE SINAIS EM SISTEMAS MIMO MASSIVOS / [en] SIGNAL DETECTION IN MASSIVE MIMO SYSTEMS
ALVARO JAVIER ORTEGA 26 April 2016 (links)
[en] This M.Sc. dissertation compares some of the most promising signal detection techniques for making large-scale MIMO systems viable, in terms of performance (bit error rate) and complexity (average number of flops required per received symbol vector). Classical detection techniques were also considered, in order to highlight the performance of the new techniques relative to the old ones. In addition, new structures for list-based SIC detectors (i.e., with multiple branches) were proposed and investigated, achieving better performance with lower complexity than previously proposed detectors of this kind. Three scenarios were considered in the comparison of the algorithms: (i) single user, with independent and identically distributed complex Gaussian channel gains, i.e., propagation subject only to Rayleigh fading; (ii) multiple users with correlated channels, accounting for small- and large-scale path loss in a system with a centralized antenna; and (iii) multiple users with correlated channels, accounting for small- and large-scale path loss in a system with a distributed antenna.
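To make the detection problem concrete, here is a minimal zero-forcing (ZF) sketch for a 2x2 real-valued channel with BPSK symbols. ZF is only the simplest classical baseline in such comparisons; the list-based SIC detectors studied in the dissertation add ordered cancellation over multiple candidate branches on top of this recover-and-slice idea. The channel matrix and symbols below are made up.

```python
# Known 2x2 real channel and transmitted BPSK symbols (all values made up)
H = [[1.0, 0.4],
     [0.3, 0.9]]
x = [1.0, -1.0]

# Noiseless received vector y = H x
y = [H[0][0] * x[0] + H[0][1] * x[1],
     H[1][0] * x[0] + H[1][1] * x[1]]

# Zero forcing: apply the channel inverse, H^-1 = (1/det) [[d, -b], [-c, a]]
a, b = H[0]
c, d = H[1]
det = a * d - b * c
x_hat = [(d * y[0] - b * y[1]) / det,
         (-c * y[0] + a * y[1]) / det]

# Slice each soft estimate to the nearest BPSK symbol
symbols = [1.0 if v >= 0 else -1.0 for v in x_hat]
print(symbols)  # [1.0, -1.0]
```

Counting the multiplications and divisions in a detector like this, per received vector, is exactly the flop metric used in the complexity comparison.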
|
246 |
Stochastic Geometry Perspective of Massive MIMO Systems
Parida, Priyabrata 27 September 2021 (links)
Owing to its ability to improve both spectral and energy efficiency of wireless networks, massive multiple-input multiple-output (mMIMO) has become one of the key enablers of the fifth-generation (5G) and beyond communication systems. For successful integration of this promising physical layer technique in the upcoming cellular standards, it is essential to have a comprehensive understanding of its network-level performance. Over the last decade, stochastic geometry has been instrumental in obtaining useful system design insights of wireless networks through accurate and tractable theoretical analysis. Hence, it is only natural to consider modeling and analyzing the mMIMO systems using appropriate statistical constructs from the stochastic geometry literature and gain insights for its future implementation.
With this broader objective in mind, we first focus on modeling a cellular mMIMO network that uses fractional pilot reuse to mitigate the sole performance-limiting factor of mMIMO networks, namely, pilot contamination. Leveraging constructs from the stochastic geometry literature, such as Johnson-Mehl cells, we derive analytical expressions for the uplink (UL) signal-to-interference-and-noise ratio (SINR) coverage probability and average spectral efficiency for a random user. From our system analysis, we present a partitioning rule for the number of pilot sequences to be reserved for the cell-center and cell-edge users that improves the average cell-edge user spectral efficiency while achieving similar cell-center user spectral efficiency with respect to unity pilot reuse. In addition, using the analytical approach developed for the cell-center user performance evaluation, we study the performance of a small cell system where user and base station (BS) locations are coupled. The impact of distance-dependent UL power control on the performance of an mMIMO network with unity pilot reuse is analyzed and subsequent system design guidelines are also presented.
Next, we focus on the performance analysis of the cell-free mMIMO network, which is a distributed implementation of the mMIMO system that leads to the second and third contributions of this dissertation. Similar to the cellular counterpart, the cell-free systems also suffer from pilot contamination due to the reuse of pilot sequences throughout the network. Inspired by a hardcore point process known as the random sequential adsorption (RSA) process, we develop a new distributed pilot assignment algorithm that mitigates the effect of pilot contamination by ensuring a minimum distance among the co-pilot users. This pilot assignment scheme leads to the construction of a new point process, namely the multilayer RSA process. We study the statistical properties of this point process both in one and two-dimensional spaces by deriving approximate but accurate expressions for the density and pair correlation functions. Leveraging these new results, for a cell-free network with the proposed RSA-based pilot assignment scheme, we present an analytical approach that determines the minimum number of pilots required to schedule a user with probabilistic guarantees. In addition, to benchmark the performance of the RSA-based scheme, we propose two optimization-based centralized pilot allocation schemes using linear programming principles. Through extensive numerical simulations, we validate the efficacy of the distributed and scalable RSA-based pilot assignment scheme compared to the proposed centralized algorithms.
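The minimum-distance idea behind the RSA-based pilot assignment can be sketched as a greedy rule: give each user a pilot that no user within a hardcore distance already holds. The user positions, pilot budget, and distance below are all hypothetical; the actual scheme in the dissertation is more elaborate and comes with probabilistic scheduling guarantees.

```python
import math
import random

random.seed(1)
users = [(random.random(), random.random()) for _ in range(30)]  # unit square
n_pilots = 6
d_min = 0.25   # hardcore distance enforced between co-pilot users

assignment = {}
for u in users:
    # pilots already held by users closer than d_min are forbidden
    taken = {assignment[v] for v in assignment if math.dist(u, v) < d_min}
    free = [p for p in range(n_pilots) if p not in taken]
    if free:
        assignment[u] = random.choice(free)
    # else: the user is left unscheduled in this round, which is where the
    # "minimum number of pilots for probabilistic guarantees" question arises

# Sanity check: no two co-pilot users are closer than d_min
ok = all(assignment[u] != assignment[v] or math.dist(u, v) >= d_min
         for u in assignment for v in assignment if u != v)
print(ok, f"({len(assignment)}/{len(users)} users scheduled)")
```

Because each user only needs to know its nearby neighbors' pilots, a rule of this shape is distributed and scalable, which is the property benchmarked against the centralized linear-programming schemes.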
Apart from pilot contamination, another impediment to the performance of a cell-free mMIMO network is the limited fronthaul capacity between the baseband unit and the access points (APs). In our fourth contribution, using appropriate stochastic geometry-based tools, we model and analyze the downlink of such a network for two different implementation scenarios. In the first scenario, we consider a finite network where each AP serves all the users in the network. In the second scenario, we consider an infinite network where each user is served by a few nearby APs in order to limit the load on fronthaul links. From our analyses, we observe that for the finite network, the achievable average system sum-rate is a strictly quasi-concave function of the number of users in the network, which serves as a key guideline for scheduler design for such systems. Further, for the user-centric architecture, we observe that there exists an optimal number of serving APs that maximizes the average user rate.
The fifth and final contribution of this dissertation focuses on the potential improvement that is possible by the use of mMIMO in citizen broadband radio service (CBRS) spectrum sharing systems. As a first concrete step, we present comprehensive modeling and analysis of this system with omni-directional transmissions. Our model takes into account the key guidelines by the Federal Communications Commission for co-existence between licensed and unlicensed networks in the 3.5 GHz CBRS frequency band. Leveraging the properties of the Poisson hole process and Matern hardcore point process of type II, a.k.a. ghost RSA process, we analytically characterize the impact of different system parameters on various performance metrics such as medium access probability, coverage probability, and area spectral efficiency. Further, we provide useful system design guidelines for successful co-existence between these networks. Building upon this omni-directional model, we also characterize the performance benefits of using mMIMO in such a spectrum sharing network. / Doctor of Philosophy / The emergence of cloud-based video and audio streaming services, online gaming platforms, instantaneous sharing of multimedia contents (e.g., photos, videos) through social networking platforms, and virtual collaborative workspace/meetings require the cellular communication networks to provide high data-rate as well as reliable and ubiquitous connectivity. These constantly evolving requirements can be met by designing a wireless network that harmoniously exploits the symbiotic co-existence among different types of cutting-edge wireless technologies. One such technology is massive multiple-input multiple-output (mMIMO), whose core idea is to equip the cellular base stations (BSs) with a large number of antennas that can be leveraged through appropriate signal processing algorithms to simultaneously accommodate multiple users with reduced network interference. 
For successful deployment of mMIMO in the upcoming cellular standards, i.e., fifth-generation (5G) and beyond systems, it is necessary to characterize its performance in a large-scale wireless network taking into account the inherent spatial randomness in the BS and user locations. To achieve this goal, in this dissertation, we propose different statistical methods for the performance analysis of mMIMO networks using tools from stochastic geometry, which is a field of mathematics related to the study of random patterns of points.
One of the major deployment issues of mMIMO systems is pilot contamination, which is a form of coherent network interference that degrades user performance. The main reason behind pilot contamination is the reuse of pilot sequences, which are a finite number of known signal waveforms used for channel estimation between a user and its serving BS. Further, the effect of pilot contamination is more severe for the cell-edge users, which are farther from their own BSs. An efficient scheme to mitigate the effect of pilot contamination is fractional pilot reuse (FPR). However, the efficiency of this scheme depends on the pilot partitioning rule that decides the fraction of total pilot sequences that should be used by the cell-edge users. Using appropriate statistical constructs from the stochastic geometry literature, such as Johnson-Mehl cells, we present a partitioning rule for efficient implementation of the FPR scheme in a cellular mMIMO network.
Next, we focus on the performance analysis of the cell-free mMIMO network. In contrast to the cellular network, where each user is served by a single BS, in a cell-free network each user can be served by multiple access points (APs), which have less complex hardware compared to a BS. Owing to this cooperative and distributed implementation, there are no cell-edge users. Similar to the cellular counterpart, the cell-free systems also suffer from pilot contamination due to the reuse of pilot sequences throughout the network. Inspired by a hardcore point process known as the random sequential adsorption (RSA) process, we develop a new distributed pilot assignment algorithm that mitigates the effect of pilot contamination by ensuring a minimum distance among the co-pilot users. Further, we show that the performance of this distributed pilot assignment scheme is appreciable compared to different centralized pilot assignment schemes, which are algorithmically more complex and difficult to implement in a network. Moreover, this pilot assignment scheme leads to the construction of a new point process, namely the multilayer RSA process. We derive the statistical properties of this point process both in one and two-dimensional spaces.
Further, in a cell-free mMIMO network, the APs are connected to a centralized baseband unit (BBU) that performs the bulk of the signal processing operations through finite capacity links, such as fiber optic cables. Apart from pilot contamination, another implementational issue associated with the cell-free mMIMO systems is the finite capacity of fronthaul links that results in user performance degradation. Using appropriate stochastic geometry-based tools, we model and analyze this network for two different implementation scenarios. In the first scenario, we consider a finite network where each AP serves all the users in the network. In the second scenario, we consider an infinite network where each user is served by a few nearby APs. As a consequence of this user-centric implementation, for each user, the BBU only needs to communicate with fewer APs thereby reducing information load on fronthaul links. From our analyses, we propose key guidelines for the deployment of both types of scenarios.
The type of mMIMO systems that are discussed in this work will be operated in the sub-6 GHz frequency range of the electromagnetic spectrum. Owing to the limited availability of spectrum resources, usually, spectrum sharing is encouraged among different cellular operators in such bands. One such example is the citizen broadband radio service (CBRS) spectrum sharing systems proposed by the Federal Communications Commission (FCC). The final contribution of this dissertation focuses on the potential improvement that is possible by the use of mMIMO in the CBRS systems. As our first step, using tools from stochastic geometry, we model and analyze this system with a single antenna at the BSs. In our model, we take into account the key guidelines by the FCC for co-existence between licensed and unlicensed operators. Leveraging properties of the Poisson hole process and hardcore process, we provide useful theoretical expressions for different performance metrics such as medium access probability, coverage probability, and area spectral efficiency. These results are used to obtain system design guidelines for successful co-existence between these networks. We further highlight the potential improvement in the user performance with multiple antennas at the unlicensed BS.
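The Matérn type-II hardcore process mentioned in the CBRS analysis above is simple to simulate: each point receives a random mark and survives only if no other point within the hardcore radius carries a smaller mark. The point count and radius below are arbitrary illustration values.

```python
import math
import random

random.seed(7)
radius = 0.1     # hardcore radius
pts = [(random.random(), random.random(), random.random())  # (x, y, mark)
       for _ in range(200)]

# Matérn type-II thinning: keep a point only if no point within `radius`
# carries a smaller mark (smaller mark = "arrived earlier")
kept = [p for p in pts
        if not any(q is not p
                   and q[2] < p[2]
                   and math.dist(p[:2], q[:2]) < radius
                   for q in pts)]

# By construction, retained points are pairwise at least `radius` apart
assert all(math.dist(p[:2], q[:2]) >= radius
           for p in kept for q in kept if p is not q)
print(len(pts), "->", len(kept))
```

The ratio of kept to proposed points is essentially the medium access probability that the hardcore model is used to characterize.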
|
247 |
Caractérisation des discontinuités dans des ouvrages massifs en béton par la diagraphie électrique de résistivité
Taillet, Elodie January 2014 (links)
Résumé : Le vieillissement des ouvrages en béton est une préoccupation majeure affectant la pérennité et l’efficacité des structures. Le maître d’ouvrage se doit de maintenir les fonctions d’usage de la structure tout en gardant une gestion économique efficace. L’objectif final de ces travaux de recherche est, donc de pouvoir renseigner sur l’état global de fissuration de la structure afin d’aider le maître d’ouvrage à respecter ses engagements.
Dans cette optique, cette thèse développe une nouvelle technique aidant à la quantification de l’état des ouvrages massifs en béton. Elle s’appuie, pour cela, sur la méthode non-destructive de résistivité électrique en surface, connue pour sa sensibilité face à des facteurs révélateurs d’une altération. Toutefois, à cause de sa dépendance entre la profondeur d’investigation et la résolution, la méthode ne peut pas garantir de l’état global d’un ouvrage. De ce fait, il a été décidé d’utiliser la résistivité électrique via des forages préexistants dans la structure (diagraphie électrique). L’outil utilisé est une sonde en dispositif normal réservée jusqu’à présent pour la prospection pétrolière et hydrogéologique. En plus d’une prospection en profondeur via le forage, cette sonde peut acquérir des informations sur un rayon de 3.2m autour du forage. Cependant, à mesure que le volume de béton sondé augmente, la résolution décroit. La difficulté est donc de pouvoir exploiter les capacités de prospection de la sonde tout en sachant que la résolution faillit. Il s’agit de contourner le problème en maîtrisant les concepts de la diagraphie et son nouveau milieu d’application.
Cette thèse est basée sur une première approche numérique permettant d’apporter des corrections sur les données de terrain et de déterminer la sensibilité de l’outil face à de l’endommagement d’ouverture plurimillimétrique à centimétrique. Ceci est validé par des mesures réalisées sur une écluse de la Voie Maritime du Saint-Laurent. Une étude numérique de la réponse de l’outil en fonction des paramètres de fissure tels que l’ouverture, le contraste entre la résistivité de la discontinuité et du béton, et l’extension est réalisée. Elle permet de construire une base de données afin de développer une méthode pour la caractérisation de l’endommagement. Cette méthode s’appuie sur ces réponses diagraphiques pour retrouver les paramètres de fissure recherchés (problème inverse). Nous procédons tout d’abord par une analyse préliminaire se basant sur un croisement des informations apportées par les différentes électrodes de la sonde puis nous optimisons les résultats par la méthode de recuit simulé. La méthode, ainsi développée est ensuite appliquée à un deuxième ouvrage pour en déterminer l’état interne. Ces travaux détectent plusieurs zones endommagées et caractérisent l’une d’elles par une ouverture centimétrique et une extension comprise entre 1.6m et 3.2m.
Ces travaux prometteurs, attestent d’un premier diagnostic interne des ouvrages massifs en béton, un enjeu qui restait sans réponses satisfaisantes jusqu’à maintenant. // Abstract : The aging of concrete structures is a major problem affecting their sustainability and their efficiency. The owner must maintain the structure serviceability and provide cost-effective management. The goal of this work is to provide detailed information about the state of cracking inside the structure in order to assist the owner to meet its commitments.
In this context, this thesis develops a new technology to assess the condition of mass concrete structures. It relies on a non-destructive method based on electrical resistivity measured from surface, known for its sensitivity to factors associated with concrete deterioration. However, because of its dependence between the investigation depth and the resolution, the method cannot assess the overall state of a structure. Therefore, it was decided to use the electrical resistivity through preexisting boreholes in the structure (electrical logging). The tool used is a normal probe, which has been traditionally used for oil and hydrogeological exploration. In addition to the investigation in depth via boreholes, this probe can get information over a radius of 3.2m around the borehole. However, as the probing volume of concrete increases, the resolution decreases. Difficulty is to use the exploration abilities of the tool, knowing that the resolution is limited. This is to get around the problem by mastering logging concepts and its new application environment. This thesis is based on a first numerical approach to make corrections on field data and to determine the tool sensitivity with regard to the multi-millimeter and centimeter crack size
damage. This was validated with measurements made on a full-size lock located on the St. Lawrence Seaway. A numerical study of the tool response versus the discontinuities parameters such as the crack aperture, the resistivity contrast between the discontinuity and the concrete, and the extension was done. It allowed building a database used to develop a method for the characterization of the damage. This method is based on the tool responses to find the crack parameters (inverse problem). First, we proceed with a preliminary analysis based on a cross of information provided by the different electrodes of the probe then we optimize the results by the method of simulated annealing. The characterization method is applied to another structure to quantify its internal state. These studies detect several damaged areas and characterize one of them by a centimeter aperture and an extension between 1.6m and 3.2m.
This work constitutes a first internal diagnosis of massive concrete structures, an issue that had so far lacked a satisfactory answer.
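The inverse step described above, estimating crack parameters by minimizing the misfit between observed and modeled probe responses with simulated annealing, can be sketched as follows. This is a minimal illustration only: the closed-form forward model, the electrode spacings, and all numerical values are invented for the example; the thesis itself relies on a database of numerical simulations.

```python
import math
import random

# Toy forward model (hypothetical): predicted apparent resistivity for one
# electrode spacing, given crack aperture a (m), resistivity contrast k
# between crack and concrete, and radial extension e (m).
def forward(a, k, e, spacing):
    return 1000.0 * (1.0 + k * a * min(e, spacing) / spacing)

SPACINGS = [0.4, 0.8, 1.6, 3.2]  # normal-probe electrode spacings (m), assumed

def misfit(params, observed):
    a, k, e = params
    return sum((forward(a, k, e, s) - obs) ** 2 for s, obs in zip(SPACINGS, observed))

def anneal(observed, steps=20000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x = [0.005, 0.5, 1.0]  # initial guess: 5 mm aperture, contrast 0.5, e = 1 m
    best, best_cost = list(x), misfit(x, observed)
    cost = best_cost
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-6   # linear cooling schedule
        cand = [x[0] + rng.gauss(0, 0.002),
                x[1] + rng.gauss(0, 0.05),
                x[2] + rng.gauss(0, 0.2)]
        cand = [max(v, 1e-4) for v in cand]  # keep parameters physical
        c = misfit(cand, observed)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if c < cost or rng.random() < math.exp((cost - c) / t):
            x, cost = cand, c
            if c < best_cost:
                best, best_cost = list(x), c
    return best, best_cost

# synthetic "observed" data from known parameters (1 cm aperture, e = 2.0 m)
true = (0.01, 0.8, 2.0)
obs = [forward(*true, s) for s in SPACINGS]
est, err = anneal(obs)
print(est, err)
```

Note that the recovered aperture and contrast are individually non-unique (only their product enters this toy forward model), which mirrors why the thesis cross-references several electrodes before the optimization step.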
|
248 |
Nucleosynthesis in stellar models across initial masses and metallicities and implications for chemical evolution
Ritter, Christian Heiko 25 April 2017 (has links)
Tracing the element enrichment of the Universe requires understanding the element production in stellar models, which is not well understood, particularly at low metallicity. In this thesis a variety of nucleosynthesis processes in stellar models across initial masses and metallicities is investigated, and their relevance for chemical evolution is explored.
Stellar nucleosynthesis is investigated in asymptotic giant branch (AGB) models and massive star models with initial masses between 1 M⊙ and 25 M⊙ for metal fractions of Z = 0.02, 0.01, 0.006, 0.001 and 0.0001. A yield grid with elements from H to Bi is calculated; it serves as input for chemical evolution simulations. AGB models are computed towards the end of the AGB phase, and massive star models are calculated until core collapse, followed by explosive core-collapse nucleosynthesis. The simulations include convective boundary mixing in all AGB star models and feature efficient hot-bottom burning and hot dredge-up in AGB models, as well as predictions of both heavy elements and CNO species under hot-bottom burning conditions. H-ingestion events in the low-mass, low-Z AGB model with an initial mass of 1 M⊙ at Z = 0.0001 result in the production of large amounts of heavy elements. In super-AGB models, H ingestion could potentially lead to the intermediate neutron-capture process.
To model the chemical enrichment and feedback of simple stellar populations in hydrodynamic simulations and semi-analytic models of galaxy formation, the SYGMA module is created, and its functionality is verified through a comparison with a widely adopted code. A comparison of the ejecta of simple stellar populations based on the yields of this work with a commonly adopted yield set shows up to factors of 3.5 and 4.8 less C and N enrichment from AGB stars at low metallicity, which is attributed to complete stellar models, the modeling of the AGB stage, and hot-bottom burning in super-AGB stars. Analysis of two different core-collapse supernova fallback prescriptions shows that the total amount of Fe enrichment by massive stars differs by up to a factor of two at Z = 0.02.
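The core bookkeeping of a simple-stellar-population module such as SYGMA, weighting per-star yields by an initial mass function (IMF) and summing the ejecta, can be sketched as follows. The yield table, mass-bin edges, and the choice of a Salpeter IMF are illustrative assumptions for this sketch, not the grids or the IMF used in the thesis.

```python
import math

ALPHA = 2.35  # Salpeter IMF slope: dN/dm ∝ m^(-alpha), assumed here

# Hypothetical ejected C mass (Msun) per star, keyed by initial mass (Msun).
YIELDS_C = {1.0: 0.003, 3.0: 0.010, 5.0: 0.015, 15.0: 0.20, 25.0: 0.45}

def imf_number(m_lo, m_hi, norm=1.0):
    """Number of stars formed in [m_lo, m_hi] for a power-law IMF."""
    a = 1.0 - ALPHA
    return norm * (m_hi**a - m_lo**a) / a

def ssp_ejecta(yields, edges):
    """Total ejected mass of one species from a simple stellar population."""
    total = 0.0
    for m, (lo, hi) in zip(sorted(yields), edges):
        total += yields[m] * imf_number(lo, hi)  # yield per star x star count
    return total

# mass-bin edges bracketing each tabulated initial mass (assumed)
edges = [(0.8, 2.0), (2.0, 4.0), (4.0, 8.0), (8.0, 20.0), (20.0, 30.0)]
print(ssp_ejecta(YIELDS_C, edges))
```

The sketch makes the comparison logic in the text concrete: swapping one yield table for another while holding the IMF fixed directly changes the summed C and N ejecta of the population.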
Insights into the chemical evolution at very low metallicity, as motivated by observations of extremely metal-poor stars, require understanding the H-ingestion events common in stellar models of low metallicity. The occurrence of H-ingestion events in super-AGB stars is investigated and identified as a possible site for the production of heavy elements through the intermediate neutron-capture process. The peculiar abundances of some C-enhanced metal-poor stars are explained with simple models of the intermediate neutron-capture process. Initial efforts to model this heavy-element production in 3D hydrodynamic simulations are presented.
For the first time, the nucleosynthesis of interacting convective O and C shells in massive star models is investigated in detail. 1D calculations based on input from 3D hydrodynamic simulations of the O shell show that such interactions can boost the production of the odd-Z elements P, Cl, K and Sc if the large entrainment rates associated with O-C shell mergers are assumed. In stellar evolution models, such shell mergers lead to overproduction factors beyond 1 dex for these elements and p-process overproduction factors above 1 dex for 130,132Ba and heavier isotopes. Chemical evolution models are able to reproduce the Galactic abundance trends of these odd-Z elements if O-C shell mergers occur in more than 50% of all massive stars. / Graduate
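The "dex" overproduction factors quoted above are logarithmic: an enhancement "beyond 1 dex" means more than a factor of ten relative to the initial abundance. A one-line definition makes this explicit; the abundance values below are placeholders, not thesis data.

```python
import math

def overproduction_dex(mass_fraction_model, mass_fraction_initial):
    """Overproduction factor in dex: log10 of model over initial abundance."""
    return math.log10(mass_fraction_model / mass_fraction_initial)

# A factor-of-20 enhancement corresponds to log10(20) ≈ 1.30 dex,
# i.e. "beyond 1 dex" in the language of the abstract.
print(overproduction_dex(2.4e-9, 1.2e-10))
```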
|
249 |
Vysoce výkonné analýzy / High Performance Analytics
Kalický, Andrej January 2013 (has links)
This thesis explains the Big Data phenomenon, which is characterised by the rapid growth of the volume, variety and velocity of data (information assets) and drives a paradigm shift in analytical data processing. The thesis aims to provide a summary and overview, giving a complete and consistent picture of the area of High Performance Analytics (HPA), including the problems and challenges at the pioneering state of the art of advanced analytics. The overview of HPA introduces the classification, characteristics and advantages of specific HPA methods utilising various combinations of system resources. In the practical part of the thesis, an experimental assignment focuses on the analytical processing of a large dataset using an analytical platform from SAS Institute. The experiment demonstrates the convenience and benefits of In-Memory Analytics (a specific HPA method) by evaluating the performance of different analytical scenarios and operations.
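The benefit of In-Memory Analytics that the experiment evaluates can be illustrated in miniature: once a dataset is loaded into RAM, repeated analytical queries avoid re-scanning the stored data. The tiny CSV table and column names below are hypothetical, and Python stands in for the SAS platform used in the thesis.

```python
import csv
import io
import statistics

rows = "region,sales\nEU,10\nUS,20\nEU,30\nUS,40\n"

def disk_style_mean(stream_factory, region):
    """Out-of-core style: re-scan the 'stored' data for every query."""
    total = n = 0.0
    for rec in csv.DictReader(stream_factory()):
        if rec["region"] == region:
            total += float(rec["sales"])
            n += 1
    return total / n

# In-memory style: parse and load once, then answer many queries from RAM.
table = list(csv.DictReader(io.StringIO(rows)))

def in_memory_mean(region):
    return statistics.mean(float(r["sales"]) for r in table if r["region"] == region)

print(disk_style_mean(lambda: io.StringIO(rows), "EU"))  # scans every call
print(in_memory_mean("EU"))                              # reads cached table
```

Both styles return the same answer; the difference the experiment measures is that the in-memory path pays the parsing and I/O cost only once across many scenarios.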
|
250 |
Nastavení virtuální ekonomiky uvnitř MMOG / Setting of the virtual economy within the MMOG
Husárik, Braňko January 2010 (has links)
The thesis is dedicated to the proper setup of the virtual economy of a Massively Multiplayer Online Game, so that the game can compete in this area with other products on the market. Besides setting up the economy on an example, the thesis includes a market analysis of computer and console games, a comparison of virtual economies with real ones, and a comparison of selected virtual economies based on defined criteria. The primary goal is to devise a procedure for setting up the virtual economy and to verify it. Further goals are an analysis of the games market, a comparison of game-market economies, and a demonstration of web game development. Game development depends on a feasibility analysis, which is also included in the thesis. The work also verifies the correctness of the settings according to the defined criteria.
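A common building block when tuning such a virtual economy is balancing currency "faucets" (e.g. quest rewards) against "sinks" (e.g. fees and repairs) so that the money supply stabilizes instead of inflating without bound. The following toy simulation is not from the thesis; all rates and player counts are invented for illustration.

```python
def simulate(days, players, faucet_per_player, sink_rate, supply=0.0):
    """Daily money supply under a fixed faucet and a proportional sink."""
    history = []
    for _ in range(days):
        supply -= supply * sink_rate           # sink: proportional drain
        supply += players * faucet_per_player  # faucet: fixed daily injection
        history.append(supply)
    return history

# With a proportional sink the supply converges toward faucet / sink_rate,
# here 1000 * 50 / 0.05 = 1,000,000 currency units:
h = simulate(days=365, players=1000, faucet_per_player=50.0, sink_rate=0.05)
print(round(h[-1]))
```

A purely fixed sink (a constant drain per day) would instead let the supply grow linearly whenever the faucet exceeds it, which is one way such toy models motivate proportional sinks in economy design.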
|