761 |
Data Analytics for Statistical Learning. Komolafe, Tomilayo A., 05 February 2019
The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. Big data is a widely-used term without a clear definition. The difference between big data and traditional data can be characterized by four Vs: velocity (speed at which data is generated), volume (amount of data generated), variety (the data can take on different forms), and veracity (the data may be of poor or unknown quality). As many industries begin to recognize the value of big data, organizations try to capture it through means such as side-channel data in a manufacturing operation, unstructured text data reported by healthcare personnel, various demographic information about households from census surveys, and the range of communication data that defines communities and social networks.
Big data analytics generally follows this framework: first, a digitized process generates a stream of data; this raw data stream is pre-processed to convert it into a usable format; finally, the pre-processed data is analyzed using statistical tools. In this stage, called statistical learning of the data, analysts have two main objectives: (1) develop a statistical model that captures the behavior of the process from a sample of the data, and (2) identify anomalies in the process.
However, several open challenges still exist in this framework for big data analytics. Recently, data types such as free-text data are also being captured. Although many established processing techniques exist for other data types, free-text data comes from a wide range of individuals and is subject to variations in syntax, grammar, language, and colloquialisms that require substantially different processing approaches. Once the data is processed, open challenges still exist in the statistical learning step of understanding the data.
Statistical learning aims to satisfy two objectives: (1) develop a model that highlights general patterns in the data, and (2) create a signaling mechanism to identify whether outliers are present in the data. Statistical modeling is widely utilized, and researchers have created a variety of statistical models to explain everyday phenomena such as energy usage behavior, traffic patterns, and stock market behavior, among others. However, new applications of big data with increasingly varied designs present interesting challenges. Consider the example of free-text analysis posed above. There is renewed interest in modeling free-text narratives from sources such as online reviews, customer complaints, and patient safety event reports into intuitive themes or topics. As previously mentioned, documents describing the same phenomena can vary widely in their word usage and structure.
Another recent interest area of statistical learning is using the environmental conditions in which people live, work, and grow to infer their quality of life. It is well established that social factors play a role in overall health outcomes; however, the clinical application of these social determinants of health is a recent and open problem. These examples are just a few of many in which new applications of big data pose complex challenges requiring thoughtful and inventive approaches to processing, analyzing, and modeling data.
Although a large body of research exists in the area of anomaly detection, increasingly complicated data sources (such as side-channel data or network-based data) present equally convoluted challenges. For effective anomaly detection, analysts define parameters and rules so that, when large collections of raw data are aggregated, pieces of data that do not conform are easily noticed and flagged.
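As a minimal sketch of this rule-based flagging idea (not the dissertation's technique; the control-limit rule, threshold, and synthetic data below are illustrative assumptions), observations can be flagged when they deviate from the sample mean by more than a fixed number of standard deviations:

```python
import numpy as np

def flag_anomalies(x, k=3.0):
    """Flag points more than k standard deviations from the mean."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    return np.abs(x - mu) > k * sigma

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=1000)
data[::250] += 15.0  # inject a few synthetic anomalies
print(np.flatnonzero(flag_anomalies(data)))  # indices of flagged points
```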
In this work, I investigate the different steps of the data analytics framework and propose improvements for each step, paired with practical applications, to demonstrate the efficacy of my methods. This work focuses on the healthcare, manufacturing, and social-networking industries, but the materials are broad enough to have wide applications across data analytics generally. My main contributions can be summarized as follows:
• In the big data analytics framework, raw data initially goes through a pre-processing step. Although many pre-processing techniques exist, several challenges remain in pre-processing text data, and I develop a pre-processing tool for text data.
• In the next step of the data analytics framework, there are challenges in both statistical modeling and anomaly detection:
o I address the research area of statistical modeling in two ways:
- There are open challenges in defining models to characterize text data. I introduce a community extraction model that autonomously aggregates text documents into intuitive communities/groups.
- In health care, it is well established that social factors play a role in overall health outcomes; however, developing a statistical model that characterizes these relationships is an open research area. I develop statistical models for generalizing relationships between the social determinants of health of a cohort and general medical risk factors.
o I address the research area of anomaly detection in two ways:
- A variety of anomaly detection techniques already exist; however, some of these methods lack a rigorous statistical investigation, making them ineffective for practitioners. I identify critical shortcomings of a proposed network-based anomaly detection technique and introduce methodological improvements.
- Manufacturing enterprises, which are now more connected than ever, are vulnerable to anomalies in the form of cyber-physical attacks. I develop a sensor-based side-channel technique for anomaly detection in a manufacturing process. / PhD / The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. The fields of manufacturing and healthcare are two examples of industries currently undergoing significant transformations due to the rise of big data. The addition of large sensor systems is changing how parts are manufactured and inspected, and the prevalence of Health Information Technology (HIT) systems is changing the way healthcare services are delivered. These industries are turning to big data analytics in the hopes of acquiring many of the benefits other sectors are experiencing, including reduced cost, improved safety, and boosted productivity. However, many challenges exist along the framework of big data analytics, from pre-processing raw data, to statistical modeling of the data, to identifying anomalies present in the data or process. This work offers significant contributions in each of the aforementioned areas and includes practical real-world applications.
|
762 |
Quantitative Methods of Statistical Arbitrage. Boming Ning (18414465), 22 April 2024
Statistical arbitrage is a prevalent trading strategy that takes advantage of the mean-reverting property of spreads constructed from pairs or portfolios of assets. Utilizing statistical models and algorithms, statistical arbitrage exploits and capitalizes on pricing inefficiencies between securities or within asset portfolios.

In Chapter 2, we propose a framework for constructing diversified portfolios with multiple pairs trading strategies. In our approach, several pairs of co-moving assets are traded simultaneously, and capital is dynamically allocated among the pairs based on the statistical characteristics of the historical spreads. This allows us to further consider various portfolio designs and rebalancing strategies. Working with empirical data, our experiments suggest significant benefits of diversification within the proposed framework.

In Chapter 3, we explore an optimal timing strategy for trading price spreads exhibiting mean-reverting characteristics. A sequential optimal stopping framework is formulated to analyze the optimal timings for both entering and subsequently liquidating positions, while accounting for transaction costs. We then leverage a refined signature optimal stopping method to solve this sequential optimal stopping problem, thereby unveiling the precise entry and exit timings that maximize gains. Our framework operates without any predefined assumptions regarding the dynamics of the underlying mean-reverting spreads, offering adaptability to diverse scenarios. Numerical results demonstrate its superior performance when compared with conventional mean reversion trading rules.

In Chapter 4, we introduce a model-free, reinforcement learning based framework for statistical arbitrage. For the construction of mean reversion spreads, we establish an empirical reversion time metric and optimize asset coefficients by minimizing this empirical mean reversion time. In the trading phase, we employ a reinforcement learning framework to identify the optimal mean reversion strategy. Diverging from traditional mean reversion strategies that focus primarily on price deviations from a long-term mean, our methodology constructs the state space to encapsulate recent trends in price movements. Additionally, the reward function is tailored to reflect the unique characteristics of mean reversion trading.
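As a hedged illustration of the basic mechanics underlying such strategies (a textbook rolling z-score rule on a synthetic spread, not the methods developed in these chapters; the thresholds, window, and simulated dynamics are assumptions), consider:

```python
import numpy as np

def zscore_pairs_signal(spread, window=60, entry=2.0, exit=0.5):
    """Classic pairs-trading rule: short the spread when its rolling
    z-score exceeds +entry, go long below -entry, flatten near zero."""
    n = len(spread)
    position = np.zeros(n)
    for t in range(window, n):
        hist = spread[t - window:t]
        z = (spread[t] - hist.mean()) / hist.std(ddof=1)
        if z > entry:
            position[t] = -1.0      # spread rich: short it
        elif z < -entry:
            position[t] = 1.0       # spread cheap: long it
        elif abs(z) < exit:
            position[t] = 0.0       # mean reached: flatten
        else:
            position[t] = position[t - 1]  # otherwise hold
    return position

# Simulate a mean-reverting (AR(1), Ornstein-Uhlenbeck-like) spread.
rng = np.random.default_rng(1)
s = np.zeros(2000)
for t in range(1, 2000):
    s[t] = 0.95 * s[t - 1] + rng.normal(scale=0.1)

pos = zscore_pairs_signal(s)
pnl = np.sum(pos[:-1] * np.diff(s))  # toy P&L, ignoring transaction costs
print(f"toy P&L: {pnl:.3f}")
```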
|
763 |
Statistical Foundations of Operator Learning. Nelsen, Nicholas Hao, January 2024 (PDF)
This thesis studies operator learning from a statistical perspective. Operator learning uses observed data to estimate mappings between infinite-dimensional spaces. It does so at the conceptually continuum level, leading to discretization-independent machine learning methods when implemented in practice. Although this framework shows promise for physical model acceleration and discovery, the mathematical theory of operator learning lags behind its empirical success. Motivated by scientific computing and inverse problems where the available data are often scarce, this thesis develops scalable algorithms for operator learning and theoretical insights into their data efficiency.
The thesis begins by introducing a convergent operator learning algorithm that is implementable on a computer with controlled complexity. The method is based on linear combinations of function-valued random features, enjoys efficient training via convex optimization, and accurately approximates nonlinear solution operators of parametric partial differential equations. A statistical analysis derives state-of-the-art error bounds for the method and establishes its robustness to errors stemming from noisy observations and model misspecification. Next, the thesis tackles fundamental statistical questions about how problem structure, data quality, and prior information influence learning accuracy. Specializing to a linear setting, a sharp Bayesian nonparametric analysis shows that continuum linear operators, such as the integration or differentiation of spatially varying functions, are provably learnable from noisy input-output pairs. The theory reveals that smoothing operators are easier to learn than unbounded ones and that training with rough or high-frequency input data improves sample complexity. When only specific linear functionals of the operator's output are the primary quantities of interest, the final part of the thesis proves that the smoothness of the functionals determines whether learning directly from these finite-dimensional observations carries a statistical advantage over plug-in estimators based on learning the entire operator. To validate the findings beyond linear problems, the thesis develops practical deep operator learning architectures for nonlinear mappings that send functions to vectors, or vice versa, and shows their corresponding universal approximation properties. Altogether, this thesis advances the reliability and efficiency of operator learning for continuum problems in the physical and data sciences.
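A minimal numerical sketch of the random feature idea described above, assuming a discretized 1-D setting, an illustrative smoothing target operator, and standard random Fourier features with ridge regression (none of these specific choices are taken from the thesis, whose function-valued construction is more general):

```python
import numpy as np

# Learn an operator G: u -> G(u) on a 1-D grid from input-output pairs,
# using random Fourier features of the discretized input plus ridge
# regression (a finite-dimensional stand-in for function-valued features).
rng = np.random.default_rng(0)
m = 64                                   # grid points
x = np.linspace(0.0, 1.0, m)
dx = x[1] - x[0]

def target_operator(u):
    """Illustrative smoothing operator: antiderivative of u."""
    return np.cumsum(u) * dx

def sample_input(n):
    """Random smooth inputs as random Fourier series with decaying spectrum."""
    ks = np.arange(1, 11)
    coeffs = rng.normal(size=(n, 10)) / ks
    return coeffs @ np.sin(np.outer(ks, np.pi * x))

n_train = 200
U = sample_input(n_train)                # shape (n_train, m)
Y = np.array([target_operator(u) for u in U])

# Random features phi(u) = cos(W u + b), then ridge-regress Y on phi.
p = 512                                  # number of random features
W = rng.normal(scale=0.5, size=(p, m)) * dx
b = rng.uniform(0, 2 * np.pi, size=p)
Phi = np.cos(U @ W.T + b)                # shape (n_train, p)
lam = 1e-6
C = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ Y)

# Evaluate on a fresh input function.
u_test = sample_input(1)[0]
pred = np.cos(W @ u_test + b) @ C
truth = target_operator(u_test)
print(f"relative L2 error: {np.linalg.norm(pred - truth) / np.linalg.norm(truth):.3f}")
```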
|
764 |
A statistical theory of the epilepsies. Thomas, Kuryan, January 1988
A new physical and mathematical model for the epilepsies is proposed, based on the theory of bond percolation on finite lattices. Within this model, the onset of seizures in the brain is identified with the appearance of spanning clusters of neurons engaged in the spurious and uncontrollable electrical activity characteristic of seizures. It is proposed that the ratio of excitatory to inhibitory synapses can be identified with a bond probability, and that the bond probability is a randomly varying quantity displaying Gaussian statistics. The consequences of the proposed model for the treatment of the epilepsies are explored.
The nature of the data on the epilepsies which can be acquired in a clinical setting is described. It is shown that such data can be analyzed to provide preliminary support for the bond percolation hypothesis, and to quantify the efficacy of anti-epileptic drugs in a treatment program. The results of a battery of statistical tests on seizure distributions are discussed.
The physical theory of the electroencephalogram (EEG) is described, and extant models of the electrical activity measured by the EEG are discussed, with an emphasis on their physical behavior. A proposal is made to explain the difference between the power spectra of electrical activity measured with cranial probes and with the EEG. Statistical tests on the characteristic EEG manifestations of epileptic activity are conducted, and their results described.
Computer simulations of a correlated bond percolating system are constructed. It is shown that the statistical properties of the results of such a simulation are strongly suggestive of the statistical properties of clinical data.
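A minimal sketch of such a simulation, assuming an uncorrelated bond percolation model on a small square lattice with a Gaussian-fluctuating bond probability and a union-find spanning test (the lattice size, mean, and variance are illustrative; the dissertation's correlated model is more elaborate):

```python
import numpy as np

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def spans(L, p, rng):
    """Open each bond of an LxL square lattice with probability p;
    return True if a cluster connects the top row to the bottom row."""
    parent = list(range(L * L))
    for r in range(L):
        for c in range(L):
            i = r * L + c
            if c + 1 < L and rng.random() < p:   # horizontal bond
                union(parent, i, i + 1)
            if r + 1 < L and rng.random() < p:   # vertical bond
                union(parent, i, i + L)
    top = {find(parent, c) for c in range(L)}
    return any(find(parent, (L - 1) * L + c) in top for c in range(L))

# Bond probability varies randomly (Gaussian) near the 2-D bond
# percolation threshold p_c = 0.5, as the model proposes for seizure onset.
rng = np.random.default_rng(0)
p_t = np.clip(rng.normal(loc=0.48, scale=0.05, size=200), 0.0, 1.0)
seizures = sum(spans(32, p, rng) for p in p_t)
print(f"spanning ('seizure') fraction: {seizures / 200:.2f}")
```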
The study finds no contradictions between the predictions of the bond percolation model and the observed properties of the available data. Suggestions are made for further research and for techniques based on the proposed model which may be used for tuning the effects of anti-epileptic drugs. / Ph. D.
|
765 |
Statistics of Tensor Forms. Son, Jaesung, January 2025
Statistics of tensor forms appear in various contexts and provide a useful way to model dependence, which naturally arises in network data. In this thesis, we study the statistics of two different tensor forms.
In the first part of the thesis, we derive central limit theorem results which exhibit a fourth moment phenomenon: the fourth moment converging to 3 implies the convergence of the statistic to a normal distribution. We also establish the converse, which provides an if and only if condition for asymptotic normality. The settings and results are readily applied to the monochromatic subgraph count in the problem of graph coloring.
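In schematic form (standard notation for the phenomenon, assumed here rather than taken from the thesis), for a statistic T_n standardized to have mean 0 and variance 1, the result reads:

```latex
T_n \xrightarrow{\;d\;} \mathcal{N}(0,1)
\quad \Longleftrightarrow \quad
\mathbb{E}\!\left[T_n^4\right] \longrightarrow 3,
```

where 3 is precisely the fourth moment of the standard normal distribution.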
The second part of the thesis compares the relative efficiency of the maximum likelihood estimator (MLE) and the maximum pseudolikelihood estimator (MPLE) for particular p-tensor Ising models. Specifically, we show that in the graph case, i.e. when p = 2, the MLE and MPLE are equally Bahadur efficient. For high-order tensors, i.e. p ≥ 3, we show a two-layer phase transition in which the MPLE is less Bahadur efficient than the MLE in certain regimes of the parameter space, depending also on the magnitudes of the null and alternate parameters.
|
766 |
A course in statistical engineering. Campean, Felician; Grove, Daniel M.; Henshall, Edwin, January 2005
A course in statistical engineering has recently been added to the Ford Motor Company's Technical Education Program. The aim was to produce materials suitable for use by Ford but which could also be promoted by the UK's Royal Statistical Society within the university sector. The course is built around a sequence of realistic tasks dictated by the flow of the product creation process. Its structure and content are thus driven by engineering need rather than statistical method, promoting constructivist learning. Before describing the course content, we review the changing role of the engineer and comment on the relationships between Systems Engineering, Design for Six Sigma, and Statistical Engineering. We give details of a case study which plays a crucial role in the course. We focus on some important features of the development process and conclude with a discussion of the approach we have taken and possible future developments.
|
767 |
A practical introduction to medical statistics. Scally, Andy J., 16 October 2013
Medical statistics is a vast and ever-growing field of academic endeavour, with direct application to developing the robustness of the evidence base in all areas of medicine. Although the complexity of available statistical techniques has continued to increase, fuelled by the rapid data-processing capabilities of even desktop/laptop computers, medical practitioners can go a long way towards creating, critically evaluating, and assimilating this evidence with an understanding of just a few key statistical concepts. While the concepts of statistics and ethics are not common bedfellows, it should be emphasised that a statistically flawed study is also an unethical study.[1] This review outlines some of these key concepts and explains how to interpret the output of some commonly used statistical analyses. Examples are confined to two-group tests on independent samples, using both a continuous and a dichotomous/binary outcome measure.
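As a hedged illustration of the two kinds of analysis the review covers (the data are synthetic and the SciPy calls are an assumed implementation, not taken from the article), a two-group comparison might look like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Continuous outcome: independent-samples t-test (e.g., blood pressure).
group_a = rng.normal(loc=120, scale=12, size=50)
group_b = rng.normal(loc=126, scale=12, size=50)
t, p_cont = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p_cont:.4f}")

# Dichotomous outcome: chi-squared test on a 2x2 table
# (e.g., improved / not improved, by treatment arm).
table = np.array([[30, 20],
                  [18, 32]])
chi2, p_bin, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_bin:.4f}")
```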
|
768 |
The Statistical Bootstrap Model. Hamer, Christopher John, January 1972 (PDF)
A review is presented of the statistical bootstrap model of Hagedorn and Frautschi. This model is an attempt to apply the methods of statistical mechanics in high-energy physics, while treating all hadron states (stable or unstable) on an equal footing. A statistical calculation of the resonance spectrum on this basis leads to an exponentially rising level density ρ(m) ~ c m^(-3) e^(β₀m) at high masses.
In the present work, explicit formulae are given for the asymptotic dependence of the level density on quantum numbers, in various cases. Hamer and Frautschi's model for a realistic hadron spectrum is described.
A statistical model for hadron reactions is then put forward, analogous to the Bohr compound nucleus model in nuclear physics, which makes use of this level density. Some general features of resonance decay are predicted. The model is applied to the process of NN annihilation at rest with overall success, and explains the high final state pion multiplicity, together with the low individual branching ratios into two-body final states, which are characteristic of the process. For more general reactions, the model needs modification to take account of correlation effects. Nevertheless it is capable of explaining the phenomenon of limited transverse momenta, and the exponential decrease in the production frequency of heavy particles with their mass, as shown by Hagedorn. Frautschi's results on "Ericson fluctuations" in hadron physics are outlined briefly. The value of β₀ required in all these applications is consistently around (120 MeV)^(-1), corresponding to a "resonance volume" whose radius is very close to ƛ_π. The construction of a "multiperipheral cluster model" for high-energy collisions is advocated.
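For reference, the quoted spectrum and the temperature it implies can be written as (a conventional presentation of the Hagedorn form; the prefactor c and the power of m vary between formulations):

```latex
\rho(m) \;\sim\; c\, m^{-3}\, e^{\beta_0 m},
\qquad
T_0 = \beta_0^{-1} \approx 120\ \text{MeV},
```

so that the thermodynamic partition function diverges for temperatures above T₀, the maximum ("Hagedorn") temperature of hadronic matter.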
|
769 |
Statistics in Air Transportation. Chen, Gong, 18 December 2024
Civil aviation demands punctual and efficient commercial flights. Flight delays adversely affect passengers, airlines, airports, and the environment (Cook and Tanner, 2015; Cook, Tanner, and Lawes, 2012). Flight delays are typically characterized as the time difference between the actual departure/arrival time of an aircraft and its scheduled departure/arrival time (EUROCONTROL, 2018). Air transportation functions within a complex system and delays are influenced by a multitude of factors. At its core, delays arise due to an imbalance between demand and capacity, where the demand exceeds the available capacity (EUROCONTROL, 2018; Technology Assessment, 1984; Wells and Young, 2004). Air Traffic Flow Management (ATFM) can adjust the demand and balance the imbalance between demand and capacity to achieve a better equilibrium (EUROCONTROL, 2023; Odoni, 1987; Ball et al., 2003; Bertsimas, Lulli, and Odoni, 2011; Murca, 2018; Xu et al., 2020). This dissertation encompasses applications of statistical methods in air transport, such as landing time predictions and weather variable interpolations to enhance ATFM, as well as delay propagation inferences among airports to comprehend patterns of delay transmission, all aiming to understand and mitigate flight delays.
Efficient ATFM requires accurate monitoring and prediction of the current capacity and demand imbalance status. Accurate prediction of flight delay helps airports monitor operations better, make more informed decisions, and increase airport efficiency (Fricke and Schultz, 2009; Lordan, Sallan, and Valenzuela-Arroyo, 2016; Wang et al., 2021). Besides delay prediction, landing time prediction also improves resource monitoring. Many machine learning methods are available to predict landing time. Chapter 2 compares the accuracy of different machine learning methods to predict landing time at Zurich Airport by cross-validation errors. Important factors contributing to the landing time prediction are also identified. The results showcase the effectiveness of decision tree methods in accurately predicting landing times, which helps improve the management of runways and resources at the local airport.
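A minimal sketch of that kind of cross-validated comparison (scikit-learn estimators on synthetic features; the predictors, data, and tuning in the actual study are of course richer, and the feature names here are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical predictors: distance to runway (km), ground speed (kt),
# wind speed (kt), traffic count in the approach sector.
X = np.column_stack([
    rng.uniform(5, 150, n),
    rng.uniform(140, 300, n),
    rng.uniform(0, 40, n),
    rng.integers(0, 15, n),
])
# Synthetic "minutes to landing" with nonlinear structure plus noise.
y = X[:, 0] / (X[:, 1] / 60) + 0.1 * X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 1, n)

for name, model in [("linear", LinearRegression()),
                    ("forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: CV RMSE = {rmse:.2f} min")
```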
Besides a warning of delays, rerouting can prevent delays by exploring alternative flight routes, which involves re-planning trajectories to bypass congested airspace and hotspots. Weather information serves as a critical input for trajectory planners. The question pertains to choosing interpolation methods to extend the weather data available at 1-degree grid points defined by latitudes, longitudes, and pressure levels with high accuracy. Chapter 3 explores different interpolation techniques for crucial weather variables such as temperature, wind speed, and wind direction. These methods, including Ordinary Kriging, the radial basis function method, neural networks, and decision trees, are compared using cross-validation interpolation errors. A Monte Carlo simulation of a trajectory from Prague to Tunis is conducted to examine the impact of input weather data and the interpolation method (Ordinary Kriging) on planned trajectories. Even though errors in GFS data and Ordinary Kriging are inevitable, the inaccuracy of the data has a minor impact on the planned trajectory.
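A compact sketch of one such interpolation check, assuming SciPy's radial basis function interpolator on synthetic temperature samples (the grid, kernel, hold-out split, and synthetic field are illustrative, not the chapter's GFS setup):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic "temperature" field sampled at scattered (lat, lon) points.
pts = rng.uniform([-10, 0], [10, 20], size=(200, 2))
temp = 15 - 0.5 * pts[:, 0] + 2 * np.sin(pts[:, 1] / 5) + rng.normal(0, 0.3, 200)

# Hold-out check: fit on 150 points, evaluate on the remaining 50.
rbf = RBFInterpolator(pts[:150], temp[:150], kernel="thin_plate_spline")
pred = rbf(pts[150:])
rmse = np.sqrt(np.mean((pred - temp[150:]) ** 2))
print(f"hold-out RMSE: {rmse:.2f} K")
```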
Flight delays negatively affect passengers, airlines, airports, and the environment. Besides mitigating delays at individual airports and for specific flights, considering the potential propagation of delays from other airports is necessary. Assessing delay propagation among airports in the network contributes to understanding the systemic impact of delays. Analyzing delay propagation assists in understanding the patterns of delay transmission and identifying potential strategies for mitigation. Graph network theory has enabled the construction of delay propagation networks to understand the delay transmission pattern using time series data (Belkoura and Zanin, 2016; Zanin, Belkoura, and Zhu, 2017; Du et al., 2018; Mazzarisi et al., 2020b; Xiao et al., 2020; Wang et al., 2020; Jia et al., 2022). However, inferring connections from time series data using statistical methods can introduce biases resulting from excluding airports (Belkoura and Zanin, 2016; Zanin, Belkoura, and Zhu, 2017; Du et al., 2018) or false positives from inappropriate statistical methods (Mazzarisi et al., 2020b), consequently overestimating propagation. Overestimation of delay propagation can undermine the credibility of the reported results, as it becomes difficult to discern whether the observed delay propagation is driven by inaccurate inference. Chapter 4 infers Granger causality among airports while avoiding the overestimation of propagation from excluding airports and false positives. The "one-standard-error" rule (Hastie et al., 2009) is recommended to mitigate a high false positive rate during parameter tuning. It is found that the choice of data inputs for model training influences the delay propagation inference results. When early arrivals and punctual flights are included, the observed delay propagation among airports can stem from correlations among punctual and early arrivals rather than delayed flights. In contrast to recent research (Xiao et al., 2020; Jia et al., 2022), this study unveils that large airports exert a substantial influence on the delay propagation network.
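A brief sketch of the one-standard-error rule itself, applied to lasso regularization-path tuning with scikit-learn (synthetic data; the dissertation applies the rule within its Granger-causality pipeline, which is not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]          # only three true predictors
y = X @ beta + rng.normal(size=n)

cv = LassoCV(cv=5, random_state=0).fit(X, y)
mean_mse = cv.mse_path_.mean(axis=1)  # per-alpha mean CV error
se_mse = cv.mse_path_.std(axis=1, ddof=1) / np.sqrt(cv.mse_path_.shape[1])
i_min = mean_mse.argmin()

# One-SE rule: largest alpha (sparsest model) whose mean CV error is
# within one standard error of the minimum; alphas_ is sorted descending.
threshold = mean_mse[i_min] + se_mse[i_min]
i_1se = np.flatnonzero(mean_mse <= threshold).min()
print(f"alpha_min = {cv.alphas_[i_min]:.4f}, alpha_1se = {cv.alphas_[i_1se]:.4f}")
```

Choosing the sparser one-SE model trades a small amount of in-sample fit for fewer spurious edges, which is exactly the false-positive concern raised above.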
In summary, this work aims to enhance our understanding of and mitigate flight delays. Chapters 2 and 3 focus on delay mitigation, while Chapter 4 contributes to our understanding of delay interactions.
|
770 |
Atitude e motivação em relação ao desempenho acadêmico de alunos do curso de graduação em administração em disciplinas de estatística / Attitude and motivation in relation to the academic performance of undergraduate students in management courses in statistics. Viana, Gustavo Salomão, 31 October 2012
In a society that places an emphasis on knowledge, it becomes important to analyze the significant amount of information contained in databases, aiming to transform it into usable knowledge for both commercial and scientific purposes. Management emerges as an area in which a great multiplicity of statistical applications is possible, matching the very competencies and abilities centered on the administrator's decision-making process. In this sense, Statistics becomes an important tool in finance, marketing, production, and human resources. However, an issue of great relevance lies in the statistical training of management professionals, considering the problems involved in teaching such content in undergraduate courses. Observing, therefore, the problems involved in teaching Statistics in the undergraduate Management program, and considering the existence of instruments for measuring attitude toward Statistics as well as academic motivation, a research opportunity arose: investigating how attitude toward Statistics and academic motivation interact with students' academic performance in Statistics courses. To achieve the objective of the present work, a quantitative study was conducted by administering the Survey of Attitudes Toward Statistics (SATS) and the Academic Motivation Scale (Échelle de Motivation en Éducation, EMA) to 278 students from two public Management schools. In the models relating academic motivation and attitude toward Statistics (independent variables) to performance (dependent variable), the best model for course grade showed low explanatory power (adjusted R² = 7.3%), with only Affect, Extrinsic Motivation - introjection, Extrinsic Motivation - external control, and Intrinsic Motivation - experiencing stimuli emerging as significant predictors. However, the model with self-perceived performance as the dependent variable showed considerable explanatory power (adjusted R² = 45.5%), with only Affect, Cognitive Competence, and Extrinsic Motivation - introjection emerging as significant predictors. Cluster analysis showed that one of the three groups formed had higher and less dispersed values, both for course grade and for self-perceived performance. Given the analysis of the results, it was possible to conclude that, in general, the group of students with the greatest interest in Finance presented the highest scores both for attitude toward Statistics and for academic motivation.
|