61

Gradually truncated power-law distribution applied to education: the entrance examination of the Academia da Força Aérea

Schinaider, Sidney Jorge. January 2006
Advisor: Hari Mohan Gupta / Committee: Gerson Antonio Santarine / Committee: Osvaldo Missiato / Abstract: Science and mathematics education is of great importance for society in view of recent technological and social progress. In the present work we study the statistical distribution of the marks obtained by candidates in the entrance examination (vestibular) of the Air Force Academy (Academia da Força Aérea), which trains officers for the Brazilian Air Force and is located in Pirassununga, São Paulo, Brazil, over the period 1999-2004. Our objective is to find characteristics of the learning process in each discipline of the examination. The admission examination consists of four disciplines, Physics, Mathematics, English, and Portuguese, all with objective questions. The best-ranked candidates are selected according to the number of places determined by the Air Force Command. We show clearly that in Physics, Mathematics, and English the distribution of marks follows a Gradually Truncated Power Law, as has also been reported earlier for the Exact and Biological Sciences taken together in university entrance examinations. In Portuguese the distribution is Normal. We explain these results by noting that understanding new material in Physics, Mathematics, and English (a foreign language) depends on material taught previously, whereas in Portuguese (the native language) each chapter is relatively independent and does not require knowledge of previous chapters. We also present suggestions to improve science and mathematics education at the high-school level. / Master's
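The "Gradually Truncated Power Law" named in this abstract is commonly written as a power-law body multiplied by a stretched-exponential factor that switches on beyond a cutoff. A minimal sketch, assuming that functional form; the parameter names and values here are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def gtpl_density(x, alpha, x_c, k, beta=1.0):
    """Unnormalized gradually truncated power law (illustrative form):
    a pure power-law body, smoothly suppressed beyond the cutoff x_c."""
    x = np.asarray(x, dtype=float)
    excess = np.clip((x - x_c) / k, 0.0, None)  # zero below x_c: no truncation there
    return x ** -(alpha + 1.0) * np.exp(-(excess ** beta))

# The tail decays as a power law up to x_c, then bends down gradually
# rather than being sharply truncated.
x = np.linspace(1.0, 50.0, 500)
p = gtpl_density(x, alpha=1.5, x_c=20.0, k=10.0)
```

Fitting such a form to the mark histograms, versus fitting a Gaussian, is what distinguishes the cumulative disciplines (Physics, Mathematics, English) from Portuguese in this study.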
62

Modelling multi-phase non-Newtonian flows using incompressible SPH

Xenakis, Antonios January 2016
Non-Newtonian fluids are of great scientific interest due to their range of physical properties, which arise from the characteristic shear stress-shear rate relation of each fluid. The applications of non-Newtonian fluids are widespread and occur in many industrial (e.g. lubricants, suspensions, paints) and environmental (e.g. mud, ice, blood) problems, often involving multiple fluids. In this study, the novel technique of Incompressible Smoothed Particle Hydrodynamics (ISPH) with shifting (Lind et al., J. Comput. Phys., 231(4):1499-1523, 2012) is extended beyond the state of the art to model non-Newtonian and multi-phase flows. The method is used to investigate important problems of both environmental and industrial interest. The proposed methodology is based on a recent ISPH algorithm with shifting, with the introduction of an appropriate stress formulation. The new method is validated for both Newtonian and non-Newtonian fluids, in closed-channel and free-surface flows. Applications in complex moulding flows are conducted and compared to previously published results. Validation includes comparison with other computational techniques such as weakly compressible SPH (WCSPH) and the Control Volume Finite Element method. Importantly, the proposed method offers improved pressure results over state-of-the-art WCSPH methods, while retaining accurate prediction of the flow patterns. Having validated the single-phase non-Newtonian ISPH algorithm, the thesis develops a new extension to multi-phase flows. The method is applied to both Newtonian/Newtonian and Newtonian/non-Newtonian problems. Validations against a novel semi-analytical solution of a two-phase Poiseuille Newtonian/non-Newtonian flow, the Rayleigh-Taylor instability, and a submarine landslide are considered. It is shown that the proposed method offers improvements in the description of interfaces and in the prediction of the flow fields of demanding multi-phase flows with both environmental and industrial applications. Finally, the Lituya Bay landslide and tsunami is examined. The problem is approached initially on the real length scales and compared with state-of-the-art computational techniques. Moreover, a detailed investigation is carried out aiming at the full reproduction of the experimental findings. With the introduction of a k-ε turbulence model, a simple saturation model, and correct experimental initial conditions, significant improvements over the state of the art are shown, achieving an accurate representation of both the landslide and the wave run-up. The computational method proposed in this thesis is an entirely novel ISPH algorithm capable of modelling highly deforming non-Newtonian and multi-phase flows, and in many cases it shows improved accuracy and experimental agreement compared with the current state-of-the-art WCSPH and ISPH methodologies. The variety of problems examined in this work shows that the proposed method is robust and can be applied to a wide range of applications with potentially high societal and economic impact.
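A common constitutive choice for non-Newtonian solvers is the Ostwald-de Waele (power-law) relation between shear stress and shear rate. A minimal sketch of the effective viscosity, assuming that model (not necessarily the one used in the thesis) and a regularisation floor added here for numerical safety:

```python
import numpy as np

def power_law_viscosity(shear_rate, K, n, gamma_min=1e-6):
    """Effective viscosity mu_eff = K * gamma_dot**(n - 1) for a power-law fluid.
    n < 1: shear-thinning, n > 1: shear-thickening, n = 1: Newtonian (mu = K).
    The floor gamma_min avoids the singularity at zero shear rate when n < 1."""
    gamma = np.maximum(np.abs(shear_rate), gamma_min)
    return K * gamma ** (n - 1.0)

# Shear stress then follows as tau = mu_eff * gamma_dot:
tau = power_law_viscosity(2.0, K=0.5, n=0.6) * 2.0
```

In an SPH discretisation this viscosity would be evaluated per particle from the local strain-rate tensor; the scalar version above only illustrates the constitutive relation.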
63

Analysis of laminar forced convection in circular ducts subjected to the effects of axial conduction and radiation

Veloso, Dhiego Luiz de Andrade 08 August 2015
With the great technological advances experienced by humanity, in-depth knowledge of real heat-transfer processes becomes essential, as does the need to analyse them quantitatively. The present work studies heat transfer in laminar forced convection in the thermal entrance region of a circular tube, considering the effects of axial conduction in the fluid and of radiation, since at low Peclet numbers these effects play an important role in heat-transfer problems and their omission introduces a significant error in the computed heat-transfer rate. The first part of this work considers slug flow, for which an exact analytical solution is discussed. The second part considers flow described by the power-law model, proposing an approximate analytical solution and a numerical solution and comparing the two. The hybrid numerical-analytical method known as the Generalized Integral Transform Technique (GITT) is used to solve the energy equation. The temperature field and the local Nusselt number are calculated for several Peclet numbers under a boundary condition of the first kind. The results, presented as tables and graphs, show the influence of the Peclet number and the power-law index on the temperature profile and the Nusselt number, and are in full agreement with the scientific literature.
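As a hedged aid, the extended Graetz problem described here is usually governed by an energy equation of the following form in cylindrical coordinates, with the axial-conduction term retained. The notation below is assumed for illustration, and the radiation effect, which enters through a source term or the boundary conditions, is omitted:

```latex
u(r)\,\frac{\partial T}{\partial z}
  = \alpha\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\,\frac{\partial T}{\partial r}\right)
  + \frac{\partial^{2} T}{\partial z^{2}}\right],
\qquad
u(r) = U_{\max}\left[1-\left(\frac{r}{R}\right)^{(n+1)/n}\right]
```

The \(\partial^{2}T/\partial z^{2}\) term is the axial conduction that is negligible only at high Peclet number; slug flow (the first part of the work) corresponds to constant u(r), while the power-law profile above, with index n, recovers the parabolic Newtonian profile at n = 1.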
64

Anomalous diffusion of microbeads in cells in the high-frequency regime

Adriana Valerio 07 November 2017
The aim of this master's thesis is to characterize experimentally the anomalous diffusion of microbeads in cells with high temporal resolution. The microbeads are coated with a peptide so that they bind to integrins, a specific cell-surface receptor linked to the cytoskeleton (CSK); thus when the cytoskeleton moves, the microbeads move with it. Most works in the literature use active techniques, which apply a disturbance to the cell in order to investigate the movement of the beads, unlike the passive technique used in this work. The advantage of the passive technique is that it makes it possible to observe diffusion without external active factors acting, because the cytoskeleton is already an environment subjected to forces from the cell's own motors, which has been related to anomalous behavior in the mean squared displacement. When the mean squared displacement (MSD) of the microbeads is calculated, the beads show both subdiffusive and superdiffusive regimes, and both have characteristics that can be considered anomalous behavior. Our goal was to study anomalous diffusion at high frequencies, using a camera that reaches up to 1000 frames per second (fps) for short observation times, or around 200 fps sustainable over long times, in order to evidence the anomalous behavior. We show that the microbead movement follows a power law for normalized displacements |Z| > 3, indicating that the phenomenon is scale-free and supporting the hypothesis that the cellular cytoplasm behaves as a soft glassy material. In addition, since the movement analysis is based on image analysis of the microbead positions, we propose a method to estimate the error in those positions.
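A minimal sketch of the passive-microrheology computation underlying these results: the time-averaged MSD of a bead trajectory and the power-law exponent of MSD(τ) ∝ τ^β, with β < 1 subdiffusive and β > 1 superdiffusive. The function names and the fit over all lags are illustrative assumptions, not the thesis's actual pipeline:

```python
import numpy as np

def time_averaged_msd(x, y, max_lag):
    """Time-averaged MSD of a 2-D bead trajectory sampled at a fixed interval."""
    lags = np.arange(1, max_lag + 1)
    msd = np.empty(len(lags))
    for i, lag in enumerate(lags):
        dx, dy = x[lag:] - x[:-lag], y[lag:] - y[:-lag]
        msd[i] = np.mean(dx ** 2 + dy ** 2)
    return lags, msd

def diffusion_exponent(lags, msd, dt):
    """Slope beta of log MSD versus log lag time:
    beta ~ 1 diffusive, < 1 subdiffusive, > 1 superdiffusive."""
    beta, _ = np.polyfit(np.log(lags * dt), np.log(msd), 1)
    return beta
```

At 1000 fps the accessible lag times start at 1 ms, which is what pushes the analysis into the high-frequency regime the thesis targets.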
65

Determination of the momentum distribution in trapped atomic superfluids: turbulent and non-turbulent regimes

Guilherme de Guzzi Bagnato 23 July 2013
Classical turbulence is a chaotic phenomenon that is difficult to study because it consists of the merging and overlapping of random vortices, which hinders its mathematical description. Quantum turbulence (QT), although also chaotic, is composed of quantized vortices, which favor experimental control and theoretical definition. Although experimental evidence of QT has been obtained in liquid helium systems, its characterization in Bose-Einstein condensates (BEC) has not yet been fully accomplished. In this work, we study the momentum distribution of turbulent and non-turbulent BECs expanded in time of flight. To produce the quantum-degenerate sample experimentally, we used evaporative cooling of 87Rb atoms, previously cooled in a purely magnetic QUIC trap. Quantum turbulence was produced through a pair of excitation coils capable of applying an oscillatory perturbation to the previously condensed cloud. The trapped sample is diagnosed by absorption imaging during free expansion of the cloud. During the expansion, both the condensed and the turbulent clouds reached an asymptotic value of the aspect ratio, indicating isotropic evolution. From this result, we developed a theoretical method to determine the isotropic projection of the momentum distribution from the experimentally produced image. Through symmetry arguments and an integral transform, we recovered the three-dimensional momentum density from the projection and then determined the kinetic energy spectrum of the cloud, observing a power-law scaling over a narrow range of momenta. This scaling law had been predicted theoretically for quantum systems and measured for superfluid helium, but here it is evidenced in a BEC for the first time. The results thus support the existence of quantum turbulence in a quantum-degenerate sample, introducing BECs as alternative candidates to superfluid liquid helium for the study of this phenomenon.
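As a hedged aid: for an isotropic momentum distribution n(k), the kinetic energy spectrum used in such analyses is typically assembled from the standard relation below. This is an illustrative textbook form, not a formula quoted from the thesis:

```latex
E(k)\,dk = \frac{\hbar^{2}k^{2}}{2m}\,n(k)\,4\pi k^{2}\,dk
```

A power-law range E(k) ∝ k^{-δ} in this spectrum is the quantum analogue of the classical turbulent cascade, for which Kolmogorov's E(k) ∝ k^{-5/3} is the usual benchmark.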
66

Analysis of energy based signal detection

Lehtomäki, J. (Janne) 29 November 2005
Abstract: The focus of this thesis is on the binary signal detection problem, i.e., deciding whether a signal or signals are present or not. Depending on the application, the signal to be detected can be either unknown or known. Detection is based on some function of the received samples, which is compared to a threshold: if the threshold is exceeded, it is decided that signal(s) are present. Energy detectors (radiometers) are often used due to their simplicity and good performance. The main goal here is to develop and analyze energy based detectors as well as power-law based detectors. Different possibilities for setting the detection threshold for a quantized total power radiometer are analyzed. The main emphasis is on methods that use reference samples. In particular, the cell-averaging (CA) constant false alarm rate (CFAR) threshold setting method is analyzed. Numerical examples show that the CA strategy offers the desired false alarm probability, whereas a more conventional strategy gives values that are too high, especially with a small number of reference samples. A new performance analysis of a frequency-sweeping channelized radiometer is presented. The total power radiometer outputs from different frequencies are combined using logical-OR, sum, and maximum operations. An efficient method is presented for accurately calculating the likelihood ratio used in optimal detection. The effects of fading are also analyzed. Numerical results show that although sweeping increases the probability of intercept (POI), the final probability of detection is not increased if the number of observed hops is large. The performance of a channelized radiometer is studied when different CFAR strategies are used to set the detection threshold. The proposed iterative methods for setting the detection threshold are the forward consecutive mean excision (FCME) method with the CA scaling factors in the final detection decision (FCME+CA), the backward consecutive mean excision (BCME) method with the CA scaling factors in detection (BCME+CA), and a method that uses the CA scaling factors for both censoring and detection (CA+CA). Numerical results show that iterative CFAR methods may improve detection performance compared to baseline methods. Finally, a method is presented to set the threshold of a power-law detector that uses a nonorthogonal transform. The mean, variance, and skewness of the decision variable in the noise-only case are derived and used to find a shifted log-normal approximation for the distribution of the decision variable. The accuracy of this method is verified through simulations.
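A minimal sketch of the two building blocks analysed here: the total-power radiometer statistic and a CA-CFAR threshold built from reference samples. The scaling factor below assumes exponentially distributed cell statistics (e.g., single complex Gaussian noise samples); for other cell sizes or quantized radiometers the factor differs, which is part of what the thesis analyses:

```python
import numpy as np

def energy_statistic(samples):
    """Total-power radiometer: sum of squared magnitudes of the received samples."""
    return np.sum(np.abs(samples) ** 2)

def ca_cfar_threshold(reference_energies, pfa):
    """Cell-averaging CFAR threshold from N noise-only reference energies.
    For exponential cell statistics, Pfa = (1 + a)**(-N)  =>  a = Pfa**(-1/N) - 1,
    and the threshold is a times the sum of the reference cells."""
    n = len(reference_energies)
    a = pfa ** (-1.0 / n) - 1.0
    return a * np.sum(reference_energies)

# Decide "signal present" when the test statistic exceeds the threshold.
rng = np.random.default_rng(0)
noise = (rng.normal(size=17) + 1j * rng.normal(size=17)) / np.sqrt(2)
cells = np.abs(noise) ** 2          # exponential under complex Gaussian noise
refs, test = cells[:16], cells[16]
detected = test > ca_cfar_threshold(refs, pfa=1e-3)
```

The point of the CA strategy, reflected in the abstract's numerical results, is that the threshold adapts to the measured noise level, so the false alarm probability stays at the design value even with few reference samples.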
67

Nonlinear Response and Avalanche Behavior in Metallic Glasses

Riechers, Birte 07 June 2017
No description available.
68

Neural Network Approach for Predicting the Failure of Turbine Components

Bano, Nafisa January 2013
Turbine components operate under severe loading conditions and at high and varying temperatures that result in thermal stresses in the presence of temperature gradients created by hot gases and cooling air. Moreover, static and cyclic loads as well as the motion of rotating components create mechanical stresses. The combined effect of complex thermo-mechanical stresses promotes nucleation and propagation of cracks that give rise to fatigue and creep failure of the turbine components. Therefore, the relationship between thermo-mechanical stresses, chemical composition, heat treatment, resulting microstructure, operating temperature, material damage, and potential failure modes, i.e. fatigue and/or creep, needs to be well understood and studied. Artificial neural networks are promising candidate tools for such studies: they are fast, flexible, efficient, and accurate tools for modelling highly non-linear multi-dimensional relationships, and they reduce the need for experimental work and time-consuming regression analysis. Therefore, separate neural network models for γ' precipitate strengthened Ni based superalloys have been developed for predicting the γ' precipitate size, thermal expansion coefficient, fatigue life, and hysteresis energy. The accumulated fatigue damage is then estimated as the product of hysteresis energy and fatigue life. The models for γ' precipitate size, thermal expansion coefficient, and hysteresis energy converge very well and match experimental data accurately. The fatigue life proved to be the most challenging aspect to predict, and fracture mechanics proved to be a potentially necessary supplement to neural networks. The model for fatigue life converges well, but relatively large errors are observed, partly due to the generally large statistical variations inherent to fatigue life. The deformation mechanism map for 1.23Cr-1.2Mo-0.26V rotor steel has been constructed using dislocation glide, grain boundary sliding, and power law creep rate equations. The constructed map is verified with experimental data points and neural network results. Although the existing set of experimental data points for neural network modeling is limited, there is an excellent match with the boundaries constructed using rate equations, which validates the deformation mechanism map.
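The power-law creep rate equation used for deformation-mechanism maps is conventionally the Norton-Arrhenius form; the sketch below uses that standard form with purely illustrative parameter values, not the values fitted for the 1.23Cr-1.2Mo-0.26V rotor steel:

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol K)

def power_law_creep_rate(stress, temp_k, A, n, Q):
    """Steady-state creep rate: eps_dot = A * sigma**n * exp(-Q / (R*T)).
    A: material constant (units depend on n), n: stress exponent
    (typically 3-8 for metals), Q: creep activation energy in J/mol."""
    return A * stress ** n * np.exp(-Q / (R_GAS * temp_k))

# Illustrative values only: creep rate at 100 MPa and 823 K.
rate = power_law_creep_rate(stress=100.0, temp_k=823.0, A=1e-10, n=5.0, Q=3.0e5)
```

On a deformation-mechanism map, the boundary between power-law creep and a neighbouring regime is drawn where the two rate equations predict equal strain rates.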
69

Modeling and predicting time series of social activities with fat-tailed distributions

Miotto, José Maria 17 August 2016
Fat-tailed distributions, characterized by the relation P(x) ∝ x^{−α−1}, are an emergent statistical signature of many complex systems, and in particular of social activities. These fat-tailed distributions are the outcome of dynamical processes that, unlike the shape of the distributions, are in most cases unknown. Knowledge of these processes' properties sheds light on how the events in these fat tails, i.e. extreme events, appear and whether it is possible to anticipate them. In this Thesis, we study how to model the dynamics that lead to fat-tailed distributions and the possibility of accurate prediction in this context. To approach these problems, we focus on the study of attention to items (such as videos, forum posts, or papers) on the Internet, since human interactions through online media leave digital traces that can be analysed quantitatively. We collected four sets of time series of online activity that show fat tails, and we characterize them. Of the many features that items in the datasets have, we need to know which ones are the most relevant to describe the dynamics, in order to include them in a model; we select the features that show high predictability, i.e. the capacity to make an accurate prediction based on that information. To quantify predictability we propose measuring the quality of the optimal forecasting method for extreme events, and we construct this measure. Applying these methods to data, we find that more extreme events (i.e. higher values of activity) are systematically more predictable, indicating that the possibility of discriminating successful items is enhanced. The simplest model of the dynamics of activity relates the increment of activity linearly to the last recorded value of activity. This starting point is known as proportional effect, a celebrated and widely used class of growth models in complex systems, which leads to a fat-tailed distribution of activity. On the one hand, we show that this process can be described and generalized in the framework of Stochastic Differential Equations (SDE) with Normal noise; moreover, we formalize the methods to estimate the parameters of such an SDE. On the other hand, we show that the fluctuations of activity resulting from these models are not compatible with the data. We propose a model with proportional effect and Lévy-distributed noise, which proves superior in describing the fluctuations around the average of the data and in predicting the possibility of an item becoming an extreme event. However, it is possible to model the dynamics using more than just the last value of activity; we generalize the growth models used previously and perform an analysis indicating that the most relevant variable for a model is the last increment in activity. We propose a new model using only this variable and the fat-tailed noise, and we find that, on our data, this model is superior to the previous models, including the one we proposed. These results indicate that, even if present, the relevance of proportional effect as a generative mechanism for fat-tailed distributions is greatly reduced, since the dynamical equations of our models contain this feature in the noise. The implications of this new interpretation of growth models for the quantification of predictability are discussed, along with applications to other complex systems.
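A minimal sketch of the two model ingredients compared here: proportional (Gibrat-like) growth with the noise switched between Normal and heavy-tailed Lévy-stable. The update rule, parameter values, and the positivity clamp are illustrative assumptions, not the estimated models from the thesis:

```python
import numpy as np
from scipy.stats import levy_stable

def simulate_growth(n_items, n_steps, a=0.05, noise="normal", alpha=1.5, seed=0):
    """Proportional effect: x_{t+1} = x_t * (1 + a + eta_t), so the increment
    is proportional to the current activity. Heavy-tailed eta makes the
    resulting activity distribution fat-tailed much faster."""
    rng = np.random.default_rng(seed)
    x = np.ones(n_items)
    for _ in range(n_steps):
        if noise == "normal":
            eta = rng.normal(0.0, 0.1, size=n_items)
        else:  # Levy-stable noise with tail index alpha < 2
            eta = levy_stable.rvs(alpha, 0.0, scale=0.05, size=n_items,
                                  random_state=rng)
        x *= np.maximum(1.0 + a + eta, 1e-9)  # clamp keeps activity positive
    return x

final = simulate_growth(n_items=10_000, n_steps=50, noise="levy")
```

Comparing the empirical activity fluctuations against these two noise choices is, in essence, the test that led the thesis to favour the Lévy-noise model over the Normal-noise SDE.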
70

ZipThru: A software architecture that exploits Zipfian skew in datasets for accelerating Big Data analysis

Ejebagom J Ojogbo (9529172) 16 December 2020
In the past decade, Big Data analysis has become a central part of many industries including entertainment, social networking, and online commerce. MapReduce, pioneered by Google, is a popular programming model for Big Data analysis, famous for its easy programmability due to automatic data partitioning, fault tolerance, and high performance. The majority of MapReduce workloads are summarizations, where the final output is a per-key "reduced" version of the input, highlighting a shared property of each key in the input dataset.

While MapReduce was originally proposed for massive data analyses on networked clusters, the model is also applicable to datasets small enough to be analyzed on a single server. In this single-server context the intermediate tuple state generated by mappers is saved to memory, and only after all Map tasks have finished are reducers allowed to process it. This Map-then-Reduce sequential mode of execution leads to distant reuse of the intermediate state, resulting in poor locality for memory accesses. In addition, the size of the intermediate state is often too large to fit in the on-chip caches, leading to numerous cache misses as the state grows during execution, further degrading performance. It is well known, however, that many large datasets used in these workloads possess a Zipfian/Power Law skew, where a minority of keys (e.g., 10%) appear in a majority of tuples/records (e.g., 70%).

I propose ZipThru, a novel MapReduce software architecture that exploits this skew to keep the tuples for the popular keys on-chip, processing them on the fly and thus improving reuse of their intermediate state and curtailing off-chip misses. ZipThru achieves this using four key mechanisms: 1) concurrent execution of both Map and Reduce phases; 2) holding only the small, reduced state of the minority of popular keys on-chip during execution; 3) using a lookup table built from pre-processing a subset of the input to distinguish between popular and unpopular keys; and 4) load balancing the concurrently executing Map and Reduce phases to efficiently share on-chip resources.

Evaluations using Phoenix, a shared-memory MapReduce implementation, on 16- and 32-core servers reveal that ZipThru incurs 72% fewer cache misses on average over traditional MapReduce while achieving average speedups of 2.75x and 1.73x on the two machines respectively.
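A minimal sketch of the skew property ZipThru exploits: under a Zipfian key distribution, a small fraction of distinct keys accounts for most tuples, so a small on-chip table of hot keys can absorb most of the reduction work. The Zipf parameter and thresholds below are illustrative, not measurements from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
keys = rng.zipf(a=1.5, size=1_000_000)          # Zipfian stream of keys
uniq, counts = np.unique(keys, return_counts=True)
order = np.argsort(counts)[::-1]                 # most popular keys first

hot = order[: max(1, len(uniq) // 10)]           # top 10% of distinct keys
hot_share = counts[hot].sum() / counts.sum()
print(f"top 10% of keys cover {hot_share:.0%} of tuples")

# A ZipThru-style pipeline would reduce these hot keys on the fly in a small
# on-chip table and defer only the long tail to the Map-then-Reduce path.
```

This is the same 10%-of-keys/70%-of-tuples pattern the abstract cites as motivation.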
