  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Co-operative learning in the teaching of mapwork to geography students in tertiary education

Tshibalo, Azwindini Ernest
This study investigates the use of co-operative learning in the teaching of mapwork to Geography students in tertiary education. Diverse methods of teaching Geography mapwork, as well as theories of learning relevant to the teaching of mapwork, are discussed. Co-operative learning, and how it can be employed in the teaching of mapwork, is fully explained. The study revealed that the co-operative learning method can help students to achieve higher marks in mapwork. It is an instructional method that uses small groups of students working together to meet educational goals. The approach relies on interaction and interdependence and is thus especially suited to higher-level conceptual tasks requiring problem-solving and decision-making. / Psychology of Education / M. Ed. (Psychology of Education)
82

ADAPTIVE RELAXED SYNCHRONIZATION THROUGH THE USE OF SUPERVISED LEARNING METHODS

ANDRE LUIS CAVALCANTI BUENO 31 July 2018
Parallel computing systems have become pervasive, being used to interact with the physical world and to process large amounts of data from various sources. Continuous improvement of computational performance is therefore essential to keep up with the growing volume of information that needs to be processed. Some of these applications admit lower quality in the final result in exchange for increased execution performance. This work evaluates the feasibility of using supervised learning methods to ensure that the Relaxed Synchronization technique, used to increase execution performance, provides results within acceptable error limits. To do so, we created a methodology that uses some input data to assemble test cases that, when executed, provide representative input values for training supervised learning methods. This way, when the user runs the application (in the same training environment) with a new input, the trained classification algorithm suggests the relaxed synchronization factor best suited to the application/input/execution-environment triple. We applied this methodology to some well-known parallel applications and showed that, by combining Relaxed Synchronization with supervised learning methods, it was possible to stay within the agreed maximum error rate. In addition, we evaluated the performance gain obtained with this technique for a number of scenarios in each application.
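As a concrete illustration of the methodology described in this abstract, the sketch below trains a classifier on profiled runs and then suggests a relaxation factor for a new input. It is a minimal sketch only: the feature set (input size, thread count), the labels, and the choice of scikit-learn's RandomForestClassifier are assumptions made here for illustration, not the thesis's actual design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical profiling results from the generated test cases: each row
# describes one run as [input size, thread count]; the label is the largest
# relaxation factor whose observed error stayed within the agreed bound.
X_train = np.array([
    [1_000, 4],
    [10_000, 8],
    [100_000, 16],
    [1_000_000, 32],
])
y_train = np.array([1, 2, 4, 8])   # best safe relaxation factor per run

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# At run time (same application and environment as in training), the trained
# model suggests a relaxation factor for a previously unseen input.
new_input = np.array([[50_000, 16]])
print("suggested relaxation factor:", clf.predict(new_input)[0])
```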
83

L'apprentissage du français langue étrangère facilité par la technologie [Technology-facilitated learning of French as a foreign language] (French)

Watt, Liezl-marie 18 February 2003
This thesis concentrates on previous and current methods of learning French as a foreign language. This understanding helps to trace how rapidly foreign-language teaching has evolved. Alongside this evolution, the thesis also gives a brief overview of the exponential development of technology, focusing specifically on how technology has created a new way of learning. The aim of this thesis is to determine whether the French language classroom needs to be adapted to the learning technologies currently in use. The thesis also shows that, since people differ and each generation differs in its learning preferences, technology can help to bridge the ever-growing gap between the learner and the learning material, because people learn and work in different ways. Given the evidence that generations differ from one another and that the current young generation is referred to as the Net generation, it is clearly shown that this generation prefers to learn with technology. The correct mix of learning methods, learning technologies and learning styles is one that is humanly impossible to achieve in a conventional way. It is on this basis that the thesis argues that the right e-learning technology should form an integral part of the new language classroom, as it is the only way to ensure that learning stays current and adaptive and continues to play an important part in the evolution of mankind. Furthermore, a brief study is conducted on the current and prospective use of e-learning technologies in the French language classroom in South Africa. / Thesis (MA (French))--University of Pretoria, 2004. / Modern European Languages / unrestricted
84

Vzdělávání a rozvoj zaměstnanců v organizaci / Learning and development of employees in an organization

Machů, Lucie January 2014
The aim of this master's thesis is to explore the process of learning and development of workers in a Czech industrial company that manufactures tires. The thesis is divided into a theoretical and a practical part. The theoretical part explains the methods, processes and principles of staff learning and development. The practical part analyses the principles and methods used in the education and development of workers in a particular enterprise, focusing on administrative and manual workers. Based on this analysis, measures to improve the process of staff education and development are proposed.
85

Reconfigurable Microwave/Millimeter-Wave Filters: Automated Tuning and Power Handling Analysis

Pintu Adhikari (11640121) 03 November 2021
In recent years, intelligent devices such as smartphones and self-driving cars have become ubiquitous in daily life, and wireless communication is thus increasingly omnipresent. To utilize the electromagnetic spectrum efficiently, automatically reconfigurable, software-controlled radio transceivers are drawing a great deal of attention. Implementing a reconfigurable radio transceiver requires automatically tunable RF front-end components such as tunable filters. Over the last decade, tunable filters have shown promising performance with high quality factor (Q), a wide tuning range, and high power handling. However, most existing tunable filters are manually adjusted. This research work therefore focuses on developing a novel automatic, software-driven tuning technique for continuously tunable microwave and millimeter-wave filters.

First, a K-band continuously tunable bandpass filter is demonstrated with contactless printed circuit board (PCB) tuners. Then, an automatic tuning technique based on deep Q-learning is proposed and realized to tune a filter with contactless tuners automatically. Two-pole, three-pole, and four-pole bandpass filters are experimentally tested, without any human intervention, to prove the feasibility of the tuning technique. For the first time, unlike with a look-up table, the filters can be continuously tuned at a practically infinite number of frequencies inside the tuning range.

Next, a K/Ka-band tunable absorptive bandstop filter (ABSF) is designed and fabricated in low-cost PCB technology. In contrast to a reflective bandstop filter, an ABSF is preferred for interference mitigation due to its deeper notch and lower reflection. However, the absorbed power may limit the filter's power handling. Therefore, lastly, this dissertation presents a theoretical and experimental comparative analysis of power handling capability (PHC) between a reflective bandstop filter and an absorptive bandstop filter.
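The deep Q-learning tuner itself is not reproduced in the abstract, so the sketch below substitutes the simplest tabular Q-learning analogue: an agent nudges a discretized tuner position and is rewarded for closing the gap to a target response. The position-based stand-in for the filter, the reward, and all constants are invented for illustration; the dissertation trains a deep Q-network against physical contactless PCB tuners, not this simulation.

```python
import random

POSITIONS = 50          # discretized tuner positions
ACTIONS = (-1, +1)      # nudge the tuner down or up one step
TARGET = 37             # position whose response matches the target frequency

def reward(pos):
    # Closer to the target response -> higher (less negative) reward.
    return -abs(pos - TARGET)

# Tabular Q-values: one entry per (position, action) pair.
q = {(s, a): 0.0 for s in range(POSITIONS) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    s = random.randrange(POSITIONS)
    for _ in range(100):
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), POSITIONS - 1)
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward(s2) + gamma * best_next - q[(s, a)])
        s = s2
        if s == TARGET:
            break

# After training, follow the greedy policy from an arbitrary start position.
s, path = 0, [0]
while s != TARGET and len(path) < POSITIONS:
    a = max(ACTIONS, key=lambda act: q[(s, act)])
    s = min(max(s + a, 0), POSITIONS - 1)
    path.append(s)
print("tuning trajectory:", path)
```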
86

HIGHER ORDER OPTIMIZATION TECHNIQUES FOR MACHINE LEARNING

Sudhir B. Kylasa (5929916) 09 December 2019
First-order methods such as Stochastic Gradient Descent are the methods of choice for solving non-convex optimization problems in machine learning. These methods primarily rely on the gradient of the loss function to estimate the descent direction. However, they have a number of drawbacks, including convergence to saddle points (as opposed to minima), slow convergence, and sensitivity to parameter tuning. In contrast, second-order methods, which use curvature information in addition to the gradient, have theoretically been shown to achieve faster convergence rates. In machine learning applications they offer faster (quadratic) convergence, stability with respect to parameter tuning, and robustness to problem conditioning. In spite of these advantages, first-order methods are commonly used because of their simplicity of implementation and low per-iteration cost; the need to generate and use curvature information in the form of a dense Hessian matrix makes each iteration of a second-order method more expensive.

In this work, we address three key problems associated with second-order methods: (i) what is the best way to incorporate curvature information into the optimization procedure; (ii) how do we reduce the operation count of each iteration in a second-order method while maintaining its superior convergence properties; and (iii) how do we leverage high-performance computing platforms to significantly accelerate second-order methods. To answer the first question, we propose and validate the use of Fisher information matrices in second-order methods to significantly accelerate convergence. The second question is answered through statistical sampling techniques that suitably sample matrices to reduce per-iteration cost without impacting convergence. The third question is addressed through the use of graphics processing units (GPUs) in distributed platforms to deliver state-of-the-art solvers.

Through our work, we show that our solvers improve significantly over state-of-the-art optimization techniques for training machine learning models. We demonstrate improvements in training time (over an order of magnitude in wall-clock time), generalization properties of learned models, and robustness to problem conditioning.
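A minimal sketch of the two ideas highlighted above, curvature from a Fisher/Gauss-Newton-type matrix and statistical sampling to cut per-iteration cost, applied to logistic regression. The sample fraction, ridge term, and iteration count are illustrative assumptions, not the dissertation's actual solvers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.1 * rng.normal(size=n) > 0).astype(float)
w = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))  # clip avoids overflow

for it in range(10):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n                 # full gradient of the logistic loss

    # Sample 10% of the rows to approximate the curvature matrix cheaply;
    # for logistic regression this Gauss-Newton matrix equals the Fisher matrix.
    idx = rng.choice(n, size=n // 10, replace=False)
    Xs, ps = X[idx], p[idx]
    weights = ps * (1.0 - ps)                # per-sample curvature weights
    H = (Xs * weights[:, None]).T @ Xs / len(idx) + 1e-3 * np.eye(d)

    w -= np.linalg.solve(H, grad)            # Newton-type update

p = sigmoid(X @ w)
loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
print(f"final training loss: {loss:.4f}")
```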
87

THE GAME CHANGER: ANALYTICAL METHODS FOR ENERGY DEMAND PREDICTION UNDER CLIMATE CHANGE

Debora Maia Silva (10688724) 22 April 2021
Accurate prediction of electricity demand is a critical step in balancing the grid. Many factors influence electricity demand. Among these, climate variability has been the most pressing in recent times, challenging the resilient operation of the grid, especially during climatic extremes. In this dissertation, fundamental challenges related to accurate characterization of the climate-energy nexus are presented in Chapters 2-4, as described below.

Chapter 2 explores the cost of neglecting the role of humidity in predicting summer-time residential electricity consumption. Analysis of electricity demand in the CONUS region demonstrates that although surface temperature, the most widely used metric for characterizing heat stress, is an important factor, it is not sufficient for accurately characterizing cooling demand. The chapter proceeds to show significant underestimations of the climate sensitivity of demand, both in the observational space and under climate change. Specifically, the analysis reveals underestimations as high as 10-15% across CONUS, especially in high-energy-consuming states such as California and Texas.

Chapter 3 takes a critical look at one of the most widely used metrics, Cooling Degree Days (CDD), often calculated with an arbitrary set-point temperature of 65°F (18.3°C) that ignores variations due to different patterns of electricity consumption across regions and climate zones. In this chapter, updated set points are derived from historical electricity consumption data across the country at the state level. The analysis demonstrates significant variation, as high as ±25%, between the derived set points and the conventional value of 65°F. Moreover, the CDD calculation is extended to account for the role of humidity, in light of the lessons learned in the previous chapter. The results reveal that under climate change scenarios, air-temperature-based CDD underestimates thermal comfort by as much as approximately 22%.

The predictive analytics conducted in Chapters 2 and 3 reveal a significant challenge in characterizing the climate-demand nexus: the ability to capture variability at the upper tails. Chapter 4 explores this challenge, with the specific goal of developing an algorithm to increase prediction accuracy at the higher quantiles of the demand distributions. Specifically, Chapter 4 presents a data-centric approach at the utility level (as opposed to the state-level analyses of the previous chapters), focusing on the high-energy-consuming states of California and Texas. The developed algorithm shows a general improvement of 7% in mean prediction accuracy and an improvement of 15% for 90th-quantile predictions.
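For reference, the conventional CDD computation criticized in Chapter 3 is a one-liner, shown below together with a toy humidity-aware variant. The 65°F set point is the conventional value discussed above; the dew-point adjustment and its weight are purely illustrative assumptions, not the metric derived in the dissertation.

```python
def cdd(daily_mean_temps_f, set_point=65.0):
    """Conventional CDD: degrees above the set point, summed over days."""
    return sum(max(0.0, t - set_point) for t in daily_mean_temps_f)

def cdd_humid(daily_temp_dewpoint_f, set_point=65.0, humidity_weight=0.3):
    """Toy humidity-aware CDD: raise each day's effective temperature when
    the dew point is high, so muggy days register more cooling demand."""
    total = 0.0
    for temp, dew_point in daily_temp_dewpoint_f:
        effective = temp + humidity_weight * max(0.0, dew_point - 55.0)
        total += max(0.0, effective - set_point)
    return total

# Four hypothetical July days as (mean temperature, dew point) in Fahrenheit.
july = [(88, 70), (91, 72), (85, 60), (79, 50)]
print("conventional CDD:   ", cdd([t for t, _ in july]))
print("humidity-aware CDD: ", cdd_humid(july))
```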
88

Predictive Quality Analytics

Salim A Semssar (11823407) 03 January 2022
Quality drives customer satisfaction, improved business performance, and safer products. Reducing waste and variation is critical to the financial success of organizations. Today, it is common to see Lean and Six Sigma used as the two main strategies for improving quality. As advances in information technology enable the use of big data, defect-reduction and continuous-improvement philosophies will benefit and even prosper. Predictive Quality Analytics (PQA) is a framework in which risk assessment and machine learning technology help detect anomalies in the entire ecosystem, not just in the manufacturing facility. PQA serves as an early warning system that directs resources to where help and mitigation actions are most needed. In a world where limited resources are the norm, focused action on the significant few defect drivers can be the difference between success and failure.
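A minimal sketch of the early-warning idea: score incoming process measurements against in-spec history and flag anomalies so that scarce resources go to the significant few drivers. The use of scikit-learn's IsolationForest and the feature names are assumptions made for illustration; the thesis does not prescribe a specific model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical in-spec history: [temperature, pressure, cycle time] per part.
normal_runs = rng.normal(loc=[200.0, 30.0, 12.0],
                         scale=[2.0, 0.5, 0.3], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=1).fit(normal_runs)

# New measurements arriving from anywhere in the ecosystem (plant, suppliers).
batch = np.array([
    [201.0, 30.2, 12.1],   # typical run
    [214.0, 27.0, 14.5],   # drifting process - should raise an alert
])
flags = detector.predict(batch)            # -1 = anomaly, +1 = normal
scores = detector.score_samples(batch)     # lower score = more anomalous
for row, flag, score in zip(batch, flags, scores):
    status = "ALERT" if flag == -1 else "ok"
    print(f"{row} -> {status} (score {score:.3f})")
```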
89

Large Eddy Simulations of a Back-step Turbulent Flow and Preliminary Assessment of Machine Learning for Reduced Order Turbulence Model Development

Biswaranjan Pati (11205510) 30 July 2021
Accuracy in turbulence modeling remains a hurdle to the widespread use of Computational Fluid Dynamics (CFD) as a tool for furthering fluid dynamics research. Meanwhile, computational power remains a significant concern for solving real-life wall-bounded flows, which exhibit a wide range of length and time scales. The tools for turbulence analysis at our disposal, in decreasing order of accuracy, include Direct Numerical Simulation (DNS), Large Eddy Simulation (LES), and Reynolds-Averaged Navier-Stokes (RANS) based models. While DNS and LES will remain exorbitantly expensive options for simulating high-Reynolds-number flows for the foreseeable future, RANS is and continues to be a viable option in commercial and academic endeavors. In the first part of the present work, flow over the back-step test case is solved, and parametric studies of the recirculation length (X_r), coefficient of pressure (C_p), and coefficient of skin friction (C_f) are presented and validated against experimental results. The back-step setup was chosen as the test case because turbulence modeling of flow past a backward-facing step has been pivotal in better understanding separated flows. Turbulence modeling is performed on the test case using RANS (k-ε and k-ω models) and LES, for different values of the Reynolds number (Re ∈ {2, 2.5, 3, 3.5} × 10^4) and expansion ratios (ER ∈ {1.5, 2, 2.5, 3}). The LES results show good agreement with experimental results, and the discrepancy between the RANS results and the experimental data is highlighted. The results obtained in the first part reveal a pattern of under-prediction when RANS-based models are used to analyze canonical setups such as the backward-facing step. The LES results closely match the experimental data, as mentioned above, which makes them an excellent source of training data for the machine learning analysis outlined in the second part. The highlighted discrepancy and the inability of the RANS model to accurately predict significant flow properties create the need for a better model. The purpose of the second part of the present study is to make systematic efforts to minimize the error between flow properties from RANS modeling and experimental data, as seen in the first part. A machine learning model is constructed to predict the eddy viscosity parameter (μ_t) as a function of turbulent kinetic energy (TKE) and dissipation rate (ε) derived from LES data, effectively serving as an ad hoc eddy-viscosity-based turbulence model. The machine learning model does not perform well on the flow domain as a whole, but a zonal analysis yields better predictions of eddy viscosity; among the zones, the area in the vicinity of the recirculation zone gives the best result. The obtained results point to the need for zonal analysis to improve the machine learning model's performance, which will enable improved RANS predictions through the development of a reduced-order turbulence model.
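A sketch of the ad hoc data-driven closure described above: regress eddy viscosity on k and ε using LES-derived samples. Here the "LES data" are synthesized from the standard k-ε relation μ_t = ρ C_μ k²/ε plus noise, so the learned map has a known ground truth; the regressor choice and all constants besides C_μ = 0.09 are illustrative assumptions, and the thesis trains on real LES fields, zone by zone.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
rho, c_mu = 1.0, 0.09

# Synthetic "LES" samples: mu_t = rho * C_mu * k^2 / eps, plus 5% noise.
k = rng.uniform(0.01, 1.0, size=2000)      # turbulent kinetic energy
eps = rng.uniform(0.1, 5.0, size=2000)     # dissipation rate
mu_t = rho * c_mu * k**2 / eps
mu_t *= 1.0 + 0.05 * rng.normal(size=mu_t.shape)

model = RandomForestRegressor(n_estimators=200, random_state=2)
model.fit(np.column_stack([k, eps]), mu_t)

# Query the surrogate where a RANS solver would otherwise use the algebraic
# closure; a zonal version would fit one such model per flow region.
print("predicted mu_t:", model.predict(np.array([[0.5, 2.0]]))[0])
print("algebraic mu_t:", rho * c_mu * 0.5**2 / 2.0)
```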
90

The study of emotional intelligence among students: master's thesis

Белобородов, А. М., Beloborodov, A. M. January 2015
The thesis presents the results of an empirical study of emotional intelligence on a sample of psychology students and managers, and describes the features of socio-psychological training and seminar sessions as active methods for developing emotional intelligence.
