331

Generating Synthetic Training Data with Stable Diffusion

Rynell, Rasmus, Melin, Oscar January 2023 (has links)
The usage of image classification in various industries has grown significantly in recent years. There are, however, challenges concerning the data used to train such models. In many cases the training data is difficult and expensive to obtain. Furthermore, dealing with image data may come with additional problems such as privacy concerns. In recent years, synthetic image generation models such as Stable Diffusion have seen significant improvement. Using solely a textual description, Stable Diffusion is able to generate a wide variety of photorealistic images. In addition to textual descriptions, conditioning models such as ControlNet have enabled the use of additional grounding information, such as canny edge and segmentation images. This thesis investigates whether synthetic images generated by Stable Diffusion can be used effectively in training an image classifier. To find the most effective method for generating training data, multiple conditioning methods are investigated and evaluated. The results show that it is possible to generate high-quality training data using several conditioning techniques. The best performing method was using canny edge grounded images to augment already existing data. Extending two classes with additional synthetic data generated by the best performing method achieved the highest average F1-score increase of 0.85 percentage points compared with a baseline trained solely on real images.
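As a rough illustration of the canny-edge-grounded generation described above, the sketch below uses the Hugging Face diffusers library; the model checkpoints, Canny thresholds, prompt, and file names are illustrative assumptions, not the thesis's actual configuration.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Canny-edge ControlNet on top of Stable Diffusion v1.5 (assumed checkpoints)
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

# Ground the generation in the edges of an existing training image
real = np.array(Image.open("real_sample.jpg").convert("RGB"))
edges = cv2.Canny(cv2.cvtColor(real, cv2.COLOR_RGB2GRAY), 100, 200)
condition = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 3-channel edge map

# One synthetic image per call; the prompt and class name are hypothetical
synthetic = pipe("a photorealistic photo of a <class name>",
                 image=condition, num_inference_steps=30).images[0]
synthetic.save("synthetic_sample.png")
```

Augmenting a class then amounts to looping this generation over the existing images of that class.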
332

Data Augmentation GUI Tool for Machine Learning Models

Sharma, Sweta 30 October 2023 (has links)
The industrial production of semiconductor assemblies is subject to high requirements. As a result, several tests are needed in terms of component quality. In the long run, manual quality assurance (QA) is often connected with higher expenditures. Using a technique based on machine learning, some of these tests may be carried out automatically. Deep neural networks (NN) have shown to be very effective in a diverse range of computer vision applications. Especially convolutional neural networks (CNN), a subset of NN, are an effective tool for image classification. Deep NNs have the disadvantage of requiring a significant quantity of training data to reach excellent performance. When the dataset is too small, a phenomenon known as overfitting can occur. Massive amounts of data cannot be supplied in certain contexts, such as the production of semiconductors. This is especially true given the relatively low number of rejected components in this field. In order to prevent overfitting, a variety of image augmentation methods may be used to artificially create training images. However, many of those methods are not applicable in certain fields. For this thesis, Infineon Technologies AG provided images of a semiconductor component generated by an ultrasonic microscope. The image set contains a sufficient number of good components and a minority of rejected components, with good components defined as components that have passed quality control and rejected components as components that contain a defect and did not pass quality control. The accomplishment of the project, the efficacy with which it is carried out, and its level of quality may depend on a number of factors; however, selecting the appropriate tools is one of the most important of these factors, because it enables significant time and resource savings while also producing the best results. We demonstrate a data augmentation graphical user interface (GUI) tool for the domain of image processing. Using this method, the dataset size has been increased while maintaining the accuracy-time trade-off and optimizing the robustness of deep learning models. The purpose of this work is to develop a user-friendly tool that incorporates traditional, advanced, and smart data augmentation, image processing, and machine learning (ML) approaches. More specifically, the techniques mainly used are zooming, rotation, flipping, cropping, GANs, fusion, histogram matching, autoencoders, image restoration, and compression. The work focuses on implementing and designing a MATLAB GUI for data augmentation and ML models. The thesis was carried out for Infineon Technologies AG in order to address a challenge that all semiconductor industries experience. The key objective is not only to create an easy-to-use GUI, but also to ensure that its users do not need advanced technical experience to operate it. This GUI may run on its own as a standalone application, which may be deployed anywhere for the purposes of data augmentation and classification. The objective is to streamline the working process and make it easy to complete the quality assurance job even for those who are not familiar with data augmentation, machine learning, or MATLAB. In addition, the research investigates the benefits of data augmentation and image processing, as well as the possibility that these factors might contribute to an improvement in the accuracy of AI models.
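The thesis's tool is a MATLAB GUI; as a language-neutral sketch covering only the classical transforms it lists (zooming, rotation, flipping, cropping), the following Python snippet shows how a minority class could be expanded. All ranges and file names are assumptions.

```python
import random
from PIL import Image

def augment(img: Image.Image) -> Image.Image:
    """One random combination of the classical transforms listed above."""
    if random.random() < 0.5:
        img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)   # flipping
    img = img.rotate(random.uniform(-15, 15))                  # rotation
    w, h = img.size
    zoom = random.uniform(1.0, 1.2)                            # zooming = crop + resize
    cw, ch = int(w / zoom), int(h / zoom)
    left, top = random.randint(0, w - cw), random.randint(0, h - ch)
    return img.crop((left, top, left + cw, top + ch)).resize((w, h))  # cropping

# Expand the minority (rejected) class tenfold; file names are hypothetical
originals = [Image.open(p) for p in ["rejected_001.png", "rejected_002.png"]]
augmented = [augment(im) for im in originals for _ in range(10)]
for i, im in enumerate(augmented):
    im.save(f"rejected_aug_{i:03d}.png")
```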
333

Extraction of gating mechanisms from Markov state models of a pentameric ligand-gated ion channel

Karalis, Dimitrios January 2021 (has links)
GLIC is a transmembrane proton-gated pentameric ligand-gated ion channel (pLGIC) that is found in the prokaryote Gloeobacter violaceus. GLIC is the prokaryotic homolog to several receptors that are found in the nervous system of many eukaryotic organisms.
These receptors are targets for the development of pharmaceutical drugs that interfere with the gating of these channels - such drugs include anesthetics and stimulants. Understanding the mechanism of a drug's target is a high priority for the development of a novel medicine. However, eukaryotic pLGICs are complex to analyse, because some of them are heteromeric, have more domains, and because of their post-translational modifications (PTMs). GLIC, on the other hand, has a simpler structure, and it is enough to study the structure of only one subunit - since all subunits are identical. Several possible gating mechanisms have been proposed by the scientific community, but the complete gating of GLIC remains unclear. The goal of this project is to implement machine learning (ML) to discover novel gating mechanisms through computational approaches. The starting data was extracted from previous research where computational tools like unbiased molecular dynamics (MD), elastic network-driven Brownian dynamics (eBDIMS), and Markov state models (MSMs) were used. With those tools, the protein was simulated as wild-type and with a gain-of-function mutation at two different pH values. Five macrostates were constructed: two open, two closed, and one intermediate. In this project another ML tool was used: KL divergence. This tool was used to score the difference between the distance distributions of one open and one closed macrostate. The starting data was used to create a tensor that stored all residue-residue distances. Each residue pair had its own metadata, which in turn was used to yield the distance distributions of all five pre-built MSMs. The average KL scores between the two states of interest were then calculated and used to filter out residue pairs with overlapping distance distributions. To make sure that the residues within a pair can interact with each other, all residue pairs with very high minimum and average distances were filtered out as well. The residue pairs that remained were then evaluated across all five macrostates. Important novel mechanisms discovered in this project through both the KL divergence and the macrostate distributions involved the M2-M3 loop of one subunit and both the β8-β9 loop/N-terminal β9 strand and the pre-M1/N-terminal M1 region of the neighboring subunit. The β8-β9 loop (Loop F) also showed high KL scores with the β1-β2 and β6-β7 (Pro-loop) loops, with decreasing distances upon the channel's opening. Other notable gating mechanisms involved the pairing of residues from the β4-β5 loop (Loop A) with residues from the β1 and β6 strands, as well as the kink of the pore-lining helix. KL divergence proved to be a valuable tool for filtering the available data, and the novel mechanisms can prove useful both to the academic community that seeks to unravel the complete gating mechanism of GLIC and to pharmaceutical companies searching for new binding sites within the molecule for new drugs.
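As a sketch of the KL-based filtering step described above (not the thesis's actual code), the following computes a symmetrized KL score between the open- and closed-state distance distributions of one residue pair; the bin count, cutoffs, symmetrization, and the synthetic stand-in data are assumptions.

```python
import numpy as np
from scipy.stats import entropy

def kl_score(d_open, d_closed, bins=50, eps=1e-10):
    """Symmetrized KL divergence between the open- and closed-state
    distance distributions of a single residue pair."""
    lo = min(d_open.min(), d_closed.min())
    hi = max(d_open.max(), d_closed.max())
    p, _ = np.histogram(d_open, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(d_closed, bins=bins, range=(lo, hi), density=True)
    p, q = p + eps, q + eps              # avoid log(0) in empty bins
    return 0.5 * (entropy(p, q) + entropy(q, p))

# d_open / d_closed: per-frame distances of one residue pair in each macrostate
rng = np.random.default_rng(0)
d_open = rng.normal(12.0, 0.8, size=5000)    # synthetic stand-in data (angstroms)
d_closed = rng.normal(9.5, 0.6, size=5000)
print(kl_score(d_open, d_closed))
# Pairs with low scores (overlapping distributions), or with large minimum and
# average distances (residues that cannot interact), would be filtered out.
```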
334

Optimal Precoder Design and Block-Equal QRS Decomposition for ML Based Successive Cancellation Detection

Fang, Dan 10 1900 (has links)
The multiple-input multiple-output (MIMO) channel model is very useful for the representation of a wide range of wireless communication systems. This thesis addresses the joint design of a precoder and a receiver for a MIMO channel model, in a scenario in which perfect channel state information (CSI) is available at both ends. We develop a novel framework for the transmitting-receiving procedure. Under the proposed framework, the receiver decomposes the channel matrix using a block QR decomposition, where Q is a unitary matrix and R is a block upper triangular matrix. The optimal maximum likelihood (ML) detection process is employed within each diagonal block of R. Then, the detected block of symbols is substituted and subtracted sequentially according to the block-QR-decomposition-based successive cancellation. On the transmitting end, the expression for the probability of error under ML detection is chosen as the design criterion to formulate the precoder design problem. This thesis presents a design of MIMO transceivers in the particular case of 4 transmitting and 4 receiving antennas with full CSI knowledge on both sides. In addition, a closed-form expression for the optimal precoder matrix is obtained for channels satisfying certain conditions. For channels not satisfying these conditions, a numerical method is applied to obtain the optimal precoder matrix. / Master of Applied Science (MASc)
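A minimal numerical sketch of block-QR-based successive cancellation with per-block ML detection is given below; it uses a plain QR decomposition and an unprecoded QPSK example for illustration, whereas the thesis designs the precoder jointly and uses its block-equal QRS decomposition.

```python
import numpy as np
from itertools import product

def block_qr_sc_detect(H, y, constellation, block=2):
    """Block-QR successive cancellation with exhaustive ML per diagonal block.
    Assumes the number of transmit antennas is divisible by `block`."""
    Q, R = np.linalg.qr(H)           # H = QR; a block partition of R is block upper triangular
    z = Q.conj().T @ y
    n = H.shape[1]
    s = np.zeros(n, dtype=complex)
    for start in range(n - block, -1, -block):
        idx = slice(start, start + block)
        # cancel the blocks already detected below the current one
        r = z[idx] - R[idx, start + block:] @ s[start + block:]
        best, best_metric = None, np.inf
        for cand in product(constellation, repeat=block):  # ML search over the block
            c = np.array(cand)
            m = np.linalg.norm(r - R[idx, idx] @ c) ** 2
            if m < best_metric:
                best, best_metric = c, m
        s[idx] = best
    return s

# Illustrative 4x4 channel with a QPSK alphabet (not the thesis's precoded system)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
x = qpsk[np.random.randint(4, size=4)]
y = H @ x + 0.05 * (np.random.randn(4) + 1j * np.random.randn(4))
print(block_qr_sc_detect(H, y, qpsk, block=2))
```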
335

Two New Applications of Tensors to Machine Learning for Wireless Communications

Bhogi, Keerthana 09 September 2021 (has links)
With the increasing number of wireless devices and the phenomenal amount of data being generated by them, there is a growing interest in the wireless communications community to complement traditional model-driven design approaches with data-driven machine learning (ML)-based solutions. However, managing large-scale multi-dimensional data while maintaining the efficiency and scalability of ML algorithms has been a challenge. Tensors provide a useful framework to represent multi-dimensional data in an integrated manner by preserving relationships in data across different dimensions. This thesis studies two new applications of tensors to ML for wireless communications where the tensor structure of the concerned data is exploited in novel ways. The first contribution of this thesis is a tensor learning-based low-complexity precoder codebook design technique for a full-dimension multiple-input multiple-output (FD-MIMO) system with a uniform planar antenna (UPA) array at the transmitter (Tx) whose channel distribution is available through a dataset. Represented as a tensor, the FD-MIMO channel is decomposed using a tensor decomposition technique to obtain an optimal precoder which is a function of the Kronecker product (KP) of two low-dimensional precoders, corresponding to the horizontal and vertical dimensions of the FD-MIMO channel. From the design perspective, we derive a criterion for optimal product precoder codebooks using the obtained low-dimensional precoders. We show that this product codebook design problem is an unsupervised clustering problem on a Cartesian Product Grassmann Manifold (CPM), where the optimal cluster centroids form the desired codebook. We further simplify this clustering problem to a K-means algorithm on the low-dimensional factor Grassmann manifolds (GMs) of the CPM, which correspond to the horizontal and vertical dimensions of the UPA, thus significantly reducing the complexity of precoder codebook construction compared to existing codebook learning techniques. The second contribution of this thesis is a tensor-based bandwidth-efficient gradient communication technique for federated learning (FL) with convolutional neural networks (CNNs). Concisely, FL is a decentralized ML approach that allows an ML model to be trained jointly at the server using the data generated by distributed users, coordinated by the server, by sharing only the local gradients and not the raw data. Here, we focus on efficient compression and reconstruction of convolutional gradients at the users and the server, respectively. To reduce the gradient communication overhead, we compress the sparse gradients at the users to obtain their low-dimensional estimates using a compressive sensing (CS)-based technique and transmit them to the server for joint training of the CNN. We exploit the natural tensor structure of convolutional gradients to demonstrate the correlation of a gradient element with its neighbors. We propose a novel prior for the convolutional gradients that captures the described spatial consistency along with their sparse nature in an appropriate way. We further propose a novel Bayesian reconstruction algorithm based on the Generalized Approximate Message Passing (GAMP) framework that exploits this prior information about the gradients. Through numerical simulations, we demonstrate that the developed gradient reconstruction method improves the convergence of the CNN model.
/ Master of Science / The increase in the number of wireless and mobile devices has led to the generation of massive amounts of multi-modal data at the users in various real-world applications, including wireless communications. This has led to an increasing interest in machine learning (ML)-based data-driven techniques for communication system design. The native setting of ML is centralized, where all the data is available on a single device. However, the distributed nature of the users and their data has also motivated the development of distributed ML techniques. Since the success of ML techniques is grounded in their data-based nature, there is a need to maintain the efficiency and scalability of the algorithms to manage the large-scale data. Tensors are multi-dimensional arrays that provide an integrated way of representing multi-modal data. Tensor algebra and tensor decompositions have enabled the extension of several classical ML techniques to tensor-based ML techniques in various application domains such as computer vision, data mining, image processing, and wireless communications. Tensor-based ML techniques have been shown to improve the performance of ML models because of their ability to leverage the underlying structural information in the data. In this thesis, we present two new applications of tensors to ML for wireless applications and show how the tensor structure of the concerned data can be exploited and incorporated in different ways. The first contribution is a tensor learning-based precoder codebook design technique for full-dimension multiple-input multiple-output (FD-MIMO) systems, where we develop a scheme for designing low-complexity product precoder codebooks by identifying and leveraging a tensor representation of the FD-MIMO channel. The second contribution is a tensor-based gradient communication scheme for a decentralized ML technique known as federated learning (FL) with convolutional neural networks (CNNs), where we design a novel bandwidth-efficient gradient compression-reconstruction algorithm that leverages a tensor structure of the convolutional gradients. The numerical simulations in both applications demonstrate that exploiting the underlying tensor structure in the data provides significant gains in their respective performance criteria.
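As a small sketch of the Kronecker-product precoder structure exploited in the first contribution (the array dimensions and steering angles below are assumptions, not the thesis's configuration):

```python
import numpy as np

# Assumed UPA dimensions
Nh, Nv = 8, 4

def steering(n, angle):
    """Unit-norm steering vector for an n-element uniform linear array."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(angle)) / np.sqrt(n)

# Low-dimensional horizontal and vertical precoders
w_h = steering(Nh, 0.3)
w_v = steering(Nv, 0.1)

# Full-dimension product precoder: the KP structure the channel decomposition yields
w = np.kron(w_h, w_v)                         # length Nh * Nv
assert abs(np.linalg.norm(w) - 1.0) < 1e-9    # KP of unit-norm factors is unit-norm

# The codebook design then reduces to clustering the horizontal and vertical
# factors separately (K-means on the factor Grassmann manifolds), instead of
# clustering full length-Nh*Nv precoders directly.
```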
336

ON THE INTERACTION BETWEEN SOFTWARE ENGINEERS AND DATA SCIENTISTS WHEN BUILDING MACHINE LEARNING-ENABLED SYSTEMS

GABRIEL DE ANDRADE BUSQUIM 18 June 2024 (has links)
In recent years, Machine Learning (ML) components have been increasingly integrated into the core systems of organizations. Engineering such systems presents various challenges from both a theoretical and practical perspective. One of the key challenges is the effective interaction between actors with different backgrounds who need to work closely together, such as software engineers and data scientists. This work presents three studies investigating the current interaction and collaboration dynamics between these two roles in ML projects. Our first study depicts an exploratory case study with four practitioners with experience in software engineering and data science of a large ML-enabled system project. In our second study, we performed complementary interviews with members of two teams working on ML-enabled systems to acquire more insights into how data scientists and software engineers share responsibilities and communicate. Finally, our third study consists of a focus group where we validated the relevance of this collaboration during multiple tasks related to ML-enabled systems and assessed recommendations that can foster the interaction between the actors.
Our studies revealed several challenges that can hinder collaboration between software engineers and data scientists, including differences in technical expertise, unclear definitions of each role's duties, and the lack of documents that support the specification of the ML-enabled system. Potential solutions to address these challenges include encouraging team communication, clearly defining responsibilities, and producing concise system documentation. Our research contributes to understanding the complex dynamics between software engineers and data scientists in ML projects and provides insights for improving collaboration and communication in this context. We encourage future studies investigating this interaction in other projects.
337

ENHANCED MULTIPLE DENSE LAYER EFFICIENTNET

Aswathy Mohan (18806656) 03 September 2024 (has links)
<p dir="ltr">In the dynamic and ever-evolving landscape of Artificial Intelligence (AI), the domain of deep learning has emerged as a pivotal force, propelling advancements across a broad spectrum of applications, notably in the intricate field of image classification. Image classification, a critical task that involves categorizing images into predefined classes, serves as the backbone for numerous cutting-edge technologies, including but not limited to, automated surveillance, facial recognition systems, and advanced diagnostics in healthcare. Despite the significant strides made in the area, the quest for models that not only excel in accuracy but also demonstrate robust generalization across varied datasets, and maintain resilience against the pitfalls of overfitting, remains a formidable challenge.</p><p dir="ltr">EfficientNetB0, a model celebrated for its optimized balance between computational efficiency and accuracy, stands at the forefront of solutions addressing these challenges. However, the nuanced complexities of datasets such as CIFAR-10, characterized by its diverse array of images spanning ten distinct categories, call for specialized adaptations to harness the full potential of such sophisticated architectures. In response, this thesis introduces an optimized version of the EffciientNetB0 architecture, meticulously enhanced with strategic architectural modifications, including the incorporation of an additional Dense layer endowed with 512 units and the strategic use of Dropout regularization. These adjustments are designed to amplify the model's capacity for learning and interpreting complex patterns inherent in the data.</p><p dir="ltr">Complimenting these architectural refinements, a nuanced two-phase training methodology is also adopted in the proposed model. This approach commences with the initial phase of training where the base model's pre-trained weights are frozen, thus leveraging the power of transfer learning to secure a solid foundational understanding. The subsequent phase of fine-tuning, characterized by the selective unfreezing of layers, meticulously calibrates the model to the intricacies of the CIFAR-10 dataset. This is further bolstered by the implementation of adaptive learning rate adjustments, ensuring the model’s training process is both efficient and responsive to the nuances of the learning curve.</p><p><br></p>
338

Optimizing decision support in business management through an artificial intelligence study: An in-depth survey of effective AI techniques for better business decisions

Sakhai, Aram January 2024 (has links)
This study examines how artificial intelligence (AI) can optimize decision support in business management through the analysis of unstructured data. By reviewing concepts such as business management, Business Intelligence (BI), AI, and machine learning (ML), the study highlights how these technologies can improve organizations' decision-making processes. Business management aims to coordinate and optimize the parts of an organization to reach common goals. AI (NLP, ML), and particularly through BI, plays a decisive role by improving efficiency and quality. BI collects and analyzes business information, while ML enables automatic learning from data. The study's problem area identifies the challenge of handling large amounts of unstructured data. Despite AI's potential to improve decision-making, its full potential has not yet been realized. By examining the effective use of AI for unstructured data, the study contributes to a better understanding of how AI can improve decision support. The qualitative approach used semi-structured interviews with IT experts to gather insights into the use of AI in decision-making. The respondents described how AI analyzes data, predicts trends, optimizes processes, and personalizes customer experiences. AI also automates time-consuming tasks, which increases efficiency and frees up time for strategic work. The results show that AI can improve data quality, automate processes, and provide deeper insights into customer behavior and market trends. AI's ability to handle unstructured data enables the identification of trends and patterns that would otherwise be difficult to detect. Challenges with AI implementation include system integration and the need for technical expertise. In summary, the study shows that AI has great potential to optimize decision support in business management through the analysis of unstructured data.
339

Physics-informed Hyper-networks

Abhinav Prithviraj Rao (18865099) 23 June 2024 (has links)
<p dir="ltr">There is a growing trend towards the development of parsimonious surrogate models for studying physical phenomena. While they typically offer less accuracy, these models bypass the computational costs of numerical methods, usually by multiple orders of magnitude, allowing statistical applications such as sensitivity analysis, stochastic treatments, parametric problems, and uncertainty quantification. Researchers have explored generalized surrogate frameworks leveraging Gaussian processes, various basis function expansions, support vector machines, and neural networks. Dynamical fields, represented through time-dependent partial differential equation, pose a particular hardship for existing frameworks due to their high dimensional representation, and possibly multi-scale solutions.</p><p dir="ltr">In this work, we present a novel architecture for solving time-dependent partial differential equations using co-ordinate neural networks and time-marching updates through hyper-networks. We show that it provides a temporally meshed and spatially mesh-free solution which are causally coherent as justified through a theoretical treatment of Lie groups. We showcase results on some benchmark problems in computational physics while discussing their performance against similar physics-informed approaches like physics-informed DeepOnets and Physics informed neural networks.</p>
340

Generalist primary school teachers: from perceived competencies to actual competencies for teaching music

Jaccard, Sylvain 16 April 2018 (has links)
The objective of this research was to explore the perceived competencies as well as the actual competencies in music teaching of generalist teachers in the French-speaking primary schools of the canton of Bern (Switzerland). Since music teaching in this region has always been the responsibility of generalists, it was relevant to determine whether they feel sufficiently competent to teach it and to what degree they actually are, all the more so as current reforms in Switzerland are fundamentally reconsidering the appropriate teacher model for primary school. To this end, the researcher proceeded by triangulation of research instruments: a questionnaire offered to the entire study population (N = 721, of whom 184 responded), interviews conducted with a sample of volunteers (n = 21), and the filming of a lesson, designed for the purposes of the research, given by 14 volunteers drawn from the interviews. Three experts assessed the actual competencies of these 14 volunteers. Various descriptive, correlational, inferential, and multivariate statistical tests were performed to determine the participants' perceived competencies in their multidisciplinary context and to estimate the relationships between perceived competencies and actual competencies. Content analysis of the interviews helped shed light on the statistical data obtained. The results indicate that music education is among the disciplines in which generalists feel least competent, especially when they do not teach it. It appears, however, that generalists who do teach a discipline - music education included - feel fairly competent to do so. A significant correlation emerged between the experts' assessment of the participants' actual competence and their sense of competence. While perceived competencies are significantly correlated with the participants' musical background, actual competencies are correlated above all with their current musical practice. The results also indicate that the current model of music teaching by the generalist should not be fundamentally rejected, but that close collaboration with a specialist could potentially improve the quality of music teaching and compensate for the weaknesses of generalists.
