371 |
System Parameter Adaptation Based on Image Metrics for Automatic Target Detection. Kurekli, Kenan, 01 June 2004
Automatic object detection is a challenging field which has been evolving over decades. The application areas span many domains such as robotics inspection, medical imaging, military targeting, and reconnaissance. Some of the most concentrated efforts in automatic object detection have been in the military domain, where most of the problems deal with automatic target detection and scene analysis in the outdoors using a variety of sensors.
One of the critical problems in Automatic Target Detection (ATD) systems is multi-scenario adaptation. Most ATD systems developed to date perform unpredictably, i.e., well in certain scenarios and poorly in others. Unless ATD systems can be made adaptable, their utility in battlefield missions remains questionable.
This thesis describes a methodology that adapts parameterized ATD systems using image metrics as the scenario changes, so that the system can maintain better performance. The methodology uses experimentally obtained performance models, which are functions of image metrics and system parameters, to optimize performance measures of the ATD system. Optimization is achieved by adapting system parameters to incoming image metrics, based on the performance models, as the system operates in the field. A simple ATD system is also proposed in this work to describe and test the methodology.
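The adaptation loop described above can be sketched in a few lines. Everything here is illustrative rather than taken from the thesis: the performance model, the image metrics (contrast and clutter), and the single tunable threshold parameter are hypothetical stand-ins for the experimentally fitted models and parameter sets the methodology actually uses.

```python
# Hypothetical sketch of the adaptation loop: a performance model, fitted
# offline from experiments, maps (image metrics, system parameters) to a
# predicted detection score; at run time the parameter is re-chosen for
# each incoming image's metrics.

# Toy performance model (illustrative only): detection score as a function
# of image contrast/clutter metrics and a detector threshold parameter.
def performance_model(contrast, clutter, threshold):
    return contrast * (1.0 - clutter) - abs(threshold - clutter)

THRESHOLDS = [round(0.1 * k, 1) for k in range(1, 10)]  # candidate settings

def adapt_parameters(contrast, clutter):
    """Pick the threshold that maximises the modelled performance."""
    return max(THRESHOLDS,
               key=lambda t: performance_model(contrast, clutter, t))

# As the scenario metrics change, the selected parameter tracks them.
best = adapt_parameters(contrast=0.8, clutter=0.3)
```

In the thesis's formulation the performance models are multivariate and experimentally fitted; the point of the sketch is only the structure of the loop: measure metrics, query the model, reset parameters.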
|
372 |
A recourse-based solution approach to the design of fuel cell aeropropulsion systems. Choi, Taeyun Paul, 01 April 2008
The past few decades have witnessed a growing interest in the engineering communities in approaching the handling of imperfect information from a quantitatively justifiable angle. In the aerospace engineering domain, the movement to develop creative avenues to solving engineering problems nondeterministically has emerged in the field of aerospace systems design. Inspired by statistical data modeling and numerical analysis techniques that used to be relatively foreign to the designers of aerospace systems, a variety of strategies leveraging the probabilistic treatment of uncertainty have been, and continue to be, reported. Although each method differs in the sequence in which probabilistic analysis and numerical optimization are performed, a common motif in all of them is the lack of any built-in provisions to compensate for infeasibilities that occur during optimization. Constraint violations are either strictly prohibited or held to an acceptable probability threshold, implying that most hitherto developed probabilistic design methods promote an avoid-failure approach to developing aerospace systems under uncertainty.
It is the premise of this dissertation that such a dichotomous structure of addressing imperfections is hardly a realistic model of how product development unfolds in practice. From a time-phased view of engineering design, it is often observed that previously unknown parameters become known with the passing of each design milestone, and their effects on the system are realized. Should these impacts happen to be detrimental to critical system-level metrics, then a compensatory action is taken to remedy any unwanted deviations from the target or required bounds, rather than starting the process completely anew. Anecdotal accounts of numerous real-world design projects confirm that such remedial actions are commonly practiced means to ensure the successful fielding of aerospace systems. Therefore, formalizing the remedial aspect of engineering design into a new methodological capability would be the next logical step towards making uncertainty handling more pragmatic for this generation of engineers.
In order to formulate a nondeterministic solution approach that capitalizes on the practice of compensatory design, this research introduces the notion of recourse. Within the context of engineering an aerospace system, recourse is defined as a set of corrective actions that can be implemented in stages later than the current design phase to keep critical system-level figures of merit within the desired target ranges, albeit at some penalty. The terminology is inspired by the concept of the same name in the field of statistical decision analysis, where it refers to an action taken by a decision maker to mitigate the unfavorable consequences caused by uncertainty realizations. Recourse programs also introduce the concept of stages to optimization formulations, and allow each stage to encompass as many sequences or events as determined necessary to solve the problem at hand. Together, these two major premises of classical stochastic programming provide a natural way to embody not only the remedial, but also the temporal and nondeterministic aspects of aerospace systems design.
A two-part strategy, which partitions the design activities into stages, is proposed to model the bi-phasal nature of recourse. The first stage is defined as the time period in which an a priori design is identified before the exact values of the uncertain parameters are known. In contrast, the second stage is a period occurring some time after the first stage, when an a posteriori correction can be made to the first-stage design, should the realization of uncertainties impart infeasibilities. Penalizing costs are attached to the second-stage corrections to reflect the reality that getting it done right the first time is almost always less costly than fixing it after the fact. Consequently, the goal of the second stage becomes identifying an optimal solution with respect to the second-stage penalty, given the first-stage design, as well as a particular realization of the random parameters. This two-stage model is intended as an analogue of the traditional practice of monitoring and managing key Technical Performance Measures (TPMs) in aerospace systems development settings. Whenever an alarmingly significant discrepancy between the demonstrated and target TPM values is noted, it is generally the case that the most cost-effective recourse option is selected, given the available resources at the time, as well as scheduling and budget constraints.
One obvious weakness of the two-stage strategy as presented above is its limited applicability as a forecasting tool. Not only can the second stage not be invoked without a first-stage starting point, but the second-stage solution also differs from one specific outcome of uncertainties to another. What would be more valuable, given the time-phased nature of engineering design, is the capability to perform an anticipatory identification of an optimum that is also expected to incur the least costly recourse option in the future. It is argued that such a solution is in fact a more balanced alternative than robust, probabilistically maximized, or chance-constrained solutions, because it represents trading design optimality in the present against the potential costs of future recourse. Therefore, it is further proposed that the original two-stage model be embedded inside a larger design loop, so that the realization of numerous recourse scenarios can be simulated for a given first-stage design. The repetitive procedure at the second stage is necessary for computing the expected cost of recourse, which converges to its mathematical expectation as per the strong law of large numbers. The feedback loop then communicates this information to the aggregate-level optimizer, whose objective is to minimize the sum of the first-stage metric and the expected cost of future corrective actions. The resulting stochastic solution is a design that is well hedged against the uncertain consequences of later design phases, while at the same time being less conservative than a solution designed to more traditional deterministic standards.
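The embedded two-stage loop reads naturally as a sampling-based stochastic program. The sketch below is a toy analogue, not the dissertation's formulation: the first-stage cost, the penalty rate, and the uniform demand distribution are invented, and the outer optimizer is a simple grid search standing in for the aggregate-level optimizer.

```python
# A minimal two-stage recourse model in the spirit described above (all
# numbers hypothetical). First stage: choose a capacity x at unit cost.
# Second stage: once demand is realised, buy any shortfall y at a penalty
# rate. The expected recourse cost is estimated by Monte Carlo sampling,
# and the outer loop minimises first-stage cost plus expected recourse.
import random

COST, PENALTY = 1.0, 3.0  # penalty > cost: fixing it later is pricier

def recourse_cost(x, demand):
    """Optimal second-stage correction given design x and realised demand."""
    shortfall = max(0.0, demand - x)
    return PENALTY * shortfall

def expected_total_cost(x, scenarios):
    expected_recourse = sum(recourse_cost(x, d) for d in scenarios) / len(scenarios)
    return COST * x + expected_recourse

random.seed(0)
scenarios = [random.uniform(50.0, 150.0) for _ in range(5000)]

# Crude outer optimisation over a grid of first-stage designs.
candidates = range(50, 151)
x_star = min(candidates, key=lambda x: expected_total_cost(x, scenarios))
```

The hedged solution lands strictly between the optimistic extreme (design for mean demand and pay recourse often) and the conservative one (design for worst-case demand and overpay up front), which is exactly the compromise the dissertation argues for.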
As a proof-of-concept demonstration, the recourse-based solution approach is presented as applied to a contemporary aerospace engineering problem of interest - the integration of fuel cell technology into uninhabited aerial systems. The creation of a simulation environment capable of designing three system alternatives based on Proton Exchange Membrane Fuel Cell (PEMFC) technology and another three systems leveraging upon Solid Oxide Fuel Cell (SOFC) technology is presented as the means to notionally emulate the development process of this revolutionary aeropropulsion method. Notable findings from the deterministic trade studies and algorithmic investigation include the incompatibility of the SOFC based architectures with the conceived maritime border patrol mission, as well as the thermodynamic scalability of the PEMFC based alternatives. It is the latter finding which justifies the usage of the more practical specific-parameter based approach in synthesizing the design results at the propulsion level into the overall aircraft sizing framework. The ensuing presentation on the stochastic portion of the implementation outlines how the selective applications of certain Design of Experiments, constrained optimization, Surrogate Modeling, and Monte Carlo sampling techniques enable the visualization of the objective function space. The particular formulations of the design stages, recourse, and uncertainties proposed in this research are shown to result in solutions that are well compromised between unfounded optimism and unwarranted conservatism. In all stochastic optimization cases, the Value of Stochastic Solution (VSS) proves to be an intuitively appealing measure of accounting for recourse-causing uncertainties in an aerospace systems design environment.
|
373 |
Simultaneous approach to model building and process design using experimental design: application to chemical vapor deposition. Wissmann, Paul J., 25 August 2008
In this thesis a tool for the experimental design of batch processes is presented, specifically to aid in the development of a process model. Current experimental design methods are either empirical in nature, requiring very little understanding of the underlying phenomena but offering no path toward a more fundamental understanding of the process, or model-based, assuming the model is correct and attempting to better define the model parameters or to discriminate between models.
This new paradigm for experimental design allows process optimization and process model development to occur simultaneously. The methodology evaluates multiple models as a check on whether the models are capturing the trend in the experimental data. A new tool for experimental design developed here, called the grid algorithm, constrains the experimental region to potential optimal points of the user-defined objective function for the process. It accomplishes this by using the confidence interval on the objective function value, which is calculated from the prediction of the best-performing model among a set of models at the predicted optimal point.
This new experimental design methodology is tested first on simulated data. The first simulation fits a model to data generated by the modified Himmelblau function (MHF). The second simulation fits multiple models to data generated to simulate a film growth process. In both simulations the grid algorithm leads to improved prediction at the optimal point and better sampling of the region around the optimal point.
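As a rough illustration of the grid-constraining idea on a Himmelblau-style test function like the one used in the first simulation (the predictive model, the confidence half-width, and the grid are assumptions here, not the thesis's actual algorithm):

```python
# Hypothetical sketch of the grid-constraining idea: keep only grid points
# whose predicted objective value lies within a confidence band of the
# current best prediction, so the next experiments concentrate near the
# predicted optimum.
import numpy as np

def himmelblau(x, y):  # stand-in for the best model's prediction
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

# Candidate experimental region: a coarse grid over the design space.
xs, ys = np.meshgrid(np.linspace(-6, 6, 61), np.linspace(-6, 6, 61))
pred = himmelblau(xs, ys)

best = pred.min()
half_width = 20.0  # assumed confidence half-width on the objective

# Constrained region: points statistically indistinguishable from the optimum.
mask = pred <= best + half_width
candidates = np.column_stack([xs[mask], ys[mask]])
```

The surviving `candidates` cluster around the function's minima, which mirrors the reported behaviour: improved prediction at the optimal point and denser sampling of the region around it.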
This experimental design method was then applied to an actual chemical vapor deposition system. The films were analyzed using atomic force microscopy (AFM) to find the resulting film roughness. The methodology was then applied to design experiments using models to predict roughness. The resulting experiments were designed in a region constrained by the grid algorithm and were close to the predicted optimum of the process. We found that the roughness of a thin film depended on the substrate temperature but also showed a relationship to the nucleation density of the thin film.
|
374 |
Modelling the cutting process and cutting performance in abrasive waterjet machining with controlled nozzle oscillation. Xu, Shunli, January 2006
Abrasive waterjet (AWJ) cutting is one of the most recently developed manufacturing technologies. It is superior to many other cutting techniques in processing various materials, particularly difficult-to-cut materials, and is being increasingly used in various industries. However, its cutting capability in terms of the depth of jet penetration and kerf quality is the major obstacle limiting its further application. More work is required to fully understand the cutting process and cutting mechanism, and to optimise cutting performance. This thesis presents a comprehensive study of the controlled nozzle oscillation technique aimed at increasing the cutting performance in AWJ machining. In order to understand the current state and development of AWJ cutting, an extensive literature review was carried out. It found that the reported studies on controlled nozzle oscillation cutting are primarily about the use of large oscillation angles of 10 degrees or more. Nozzle oscillation in the cutting plane with such large oscillation angles results in theoretical geometrical errors on the component profile in contouring. No published attempt has been found to study oscillation cutting at small angles, although this is a common application in practice. In particular, there is no reported research on the integration of the nozzle oscillation technique into AWJ multipass cutting, which is expected to significantly enhance the cutting performance. An experimental investigation is first undertaken to study the major cutting performance measures in AWJ single-pass cutting of an 87% alumina ceramic with controlled nozzle oscillation at small angles. The trends and characteristics of the cutting performance quantities with respect to the process parameters, as well as the mechanisms by which nozzle oscillation affects the cutting performance, are analysed.
It has been shown that, as with oscillation cutting at large angles, oscillation at small angles can have an equally significant impact on the cutting performance. When the optimum cutting parameters are used for both nozzle oscillation and normal cutting, the former can statistically increase the depth of cut by 23% and the smooth depth of cut by 30.8%, and reduce kerf surface roughness by 11.7% and kerf taper by 54%. It has also been found that if the cutting parameters are not selected properly, nozzle oscillation can reduce some major cutting performance measures. In order to correctly select the process parameters and to optimise the cutting process, mathematical models for the major cutting performance measures have then been developed. The predictive models for the depth of cut in both normal cutting and oscillation cutting are developed using a dimensional analysis technique. Mathematical models for the other major cutting performance measures are developed with the aid of an empirical approach. These mathematical models are verified both qualitatively and quantitatively against the experimental data. The assessment reveals that the developed models conform well to the experimental results and can provide an effective means for the optimum selection of process variables in AWJ cutting with nozzle oscillation. A further experimental investigation of AWJ cutting of alumina ceramics is carried out in order to study the application of the AWJ oscillation technique in multipass cutting. While a high nozzle traverse speed with multiple passes can achieve overall better cutting performance than a low traverse speed with a single pass in the same elapsed time, it has been found that different combinations of nozzle traverse speed and number of passes significantly affect the cutting process. The optimum combination of nozzle traverse speed and number of passes is determined to achieve the maximum depth of cut.
It has also been demonstrated that multipass cutting with a low nozzle traverse speed in the first pass and a comparatively high traverse speed for the following passes is a sensible choice for a small kerf-taper requirement. When nozzle oscillation is incorporated into multipass cutting, it can greatly increase the depth of cut and reduce kerf taper. The predictive models for the depth of cut in both multipass normal cutting and multipass oscillation cutting are finally developed. With the help of dimensional analysis, models of the incremental cutting depth for each individual pass are derived from the depth of cut models developed for single-pass cutting. The depth of cut for a multipass cutting operation is then modelled as the sum of the incremental cutting depths from each pass. A numerical analysis has verified the models and demonstrated the adequacy of their predictions. The models provide an essential basis for the development of optimization strategies for the effective use of AWJ cutting technology when the multipass cutting technique is used with controlled nozzle oscillation.
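The structure of the multipass depth model, total depth as the sum of per-pass increments derived from a single-pass model, can be sketched as follows. The coefficients and the single-pass model form are invented for illustration; the thesis derives its models by dimensional analysis from experimental data.

```python
# Illustrative sketch of the multipass depth model's structure: the total
# depth of cut is the sum of per-pass increments, with each increment
# computed from an assumed single-pass model whose effective standoff
# grows as earlier passes deepen the kerf (coefficients are hypothetical).
def single_pass_depth(traverse_speed, standoff):
    """Toy empirical model: depth falls with traverse speed and standoff."""
    return 10.0 / (traverse_speed ** 0.5 * (1.0 + 0.05 * standoff))

def multipass_depth(traverse_speed, n_passes, standoff=2.0):
    total = 0.0
    for _ in range(n_passes):
        # each pass cuts from the bottom of the kerf left by earlier passes
        total += single_pass_depth(traverse_speed, standoff + total)
    return total

# e.g. comparing one slow pass against several fast passes in equal elapsed time:
slow = multipass_depth(traverse_speed=1.0, n_passes=1)
fast = multipass_depth(traverse_speed=4.0, n_passes=4)
```

Even with these made-up coefficients the sketch reproduces the qualitative finding: several fast passes can out-cut one slow pass in the same elapsed time, with diminishing increments as the kerf deepens.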
|
375 |
Evaluation of the implementation of a preferred music intervention for reducing agitation and anxiety in institutionalised elders with dementia. Sung, Huei-Chuan (Christina), January 2006
There is some evidence for the efficacy of preferred music for agitation in elders with dementia; however, little is known about its effectiveness on agitation when implemented by nursing staff in long-term care facilities, and even less about the use of preferred music for managing anxiety in those with dementia. This quasi-experimental study aimed to evaluate the implementation of a preferred music intervention, delivered by nursing staff, on agitation and anxiety in institutionalised elders with dementia. The sample comprised 57 elders with dementia residing in two building complexes with similar care routines and staffing in a large Taiwanese residential care facility. The two building complexes were randomly assigned as the experimental and control groups. Nursing staff in the experimental group received a facilitation program to prepare them to implement the preferred music intervention, whereas nursing staff in the control group received no facilitation program. The music intervention, based on each resident's music preferences, was then provided by the trained nursing staff to 32 experimental residents twice a week for six weeks, while the 25 residents in the control group received the usual standard care without music. All residents were assessed with the Cohen-Mansfield Agitation Inventory (CMAI) for overall agitation and three subtypes of agitated behaviours, and with the Rating of Anxiety in Dementia scale for anxiety, at baseline and week 6. Additionally, the modified CMAI measured the 30-minute occurrence of agitation at baseline, session 4, and session 12. The results indicate that institutionalised elders with dementia who received six weeks of the preferred music intervention implemented by trained nursing staff had significant reductions over time in overall agitation, the three subtypes of agitated behaviours, anxiety, and the 30-minute occurrence of agitation, compared with those who received the usual standard care without music.
Preferred music shows promise as a strategy for reducing agitation and anxiety in those with dementia when implemented by trained nursing staff. Such an intervention can be incorporated into routine activities to improve the quality of care provided by nursing staff and the quality of life of those with dementia in long-term care settings. The study results provide clinically relevant evidence that contributes to closing the gap between research and practice.
|
376 |
Society as a laboratory: Donald T. Campbell and the history of social experimentation. Bartholomée, Yvette, January 2004
Thesis (doctoral), Rijksuniversiteit te Groningen, 2004. Cover title. Includes bibliographical references (p. 157-171).
|
377 |
Sustainable Lighting: Designed Considering Emotional Aspects. Maila, Reetta, January 2008
Global warming challenges designers to pay attention to the environmental effects of manufacturing when designing new products. This degree project was a personal challenge to uphold ethical responsibility as a designer and to consider emotional aspects of design while aiming to create pleasurable lighting for the home environment.
The underpinning idea of the project was to promote the use of recycled materials and an environmentally friendly light source, aiming to create a sustainable, manufacturable everyday product. High-power LED technology was chosen for its energy efficiency, flexibility, and particularly long life-cycle. Recycled plastic and fibre cardboard were chosen for the lamp shades; both of these recycled materials can be broken down and recycled again after use.
Emotional design was the leading theory in the design process. The intention was to consider different levels of emotional aspects when defining the main characteristics of the lamp: alongside usability and aesthetics, the focus was on the semiotics of the product and its usage context. The lighting was designed to evoke pleasurable feelings in users who desire an active, urban life-style but who are simultaneously worried about global warming.
Both of the lighting designs are for a dining context. They are intended to create a pleasurable atmosphere around a dining table while separating the party around the table from the rest of the space; other lights can be dimmed or switched off when it is time to gather around the table, accentuating the illumination and the feeling of togetherness.
Inspiration for the project came from sustainability and from contemporary thoughts and trends embodied in maps. The products turned out to be silent statements on today's global world: Antarctica refers to glacial retreat, while Town symbolises the importance of people's own origins in this globalised world.
|
378 |
Modelagem geoestatística em quatro formações florestais do Estado de São Paulo / Geostatistical modeling in four forest formations of São Paulo State. Oda-Souza, Melissa, 18 September 2009
In many ecological studies the distribution of living organisms was once considered random, uniform, or oriented along a single gradient. It is now known, however, that organisms can be aggregated in patches, arranged along gradients, or organized into other kinds of spatial structure. Describing and incorporating spatial structure has therefore become increasingly necessary for understanding ecological phenomena. This work discusses aspects of sampling and of modeling the structure of spatial continuity, through model-based geostatistics, in four forest formations of São Paulo State. In each formation a permanent plot of 320 × 320 m was installed, and every tree within the plot with diameter greater than or equal to 5 cm was mapped, georeferenced, measured, and identified. The fitted geostatistical models showed that the perceived structure of spatial dependence was influenced by the size and shape of the sampling unit: square 20 × 20 m plots best described the structure of spatial continuity, while rectangular plots captured the variability of the forest. The four formations exhibited distinct spatial structures, with the Savanna and Dense Rain formations showing more pronounced spatial structure than the Seasonal Semideciduous and Restinga formations. Finally, comparing the estimates produced by the design-based approach (classical sampling theory) and the model-based approach (geostatistics) in simulation studies showed that, even under spatial dependence, the classical estimators provide equally valid estimates and confidence intervals.
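A minimal sketch of the core ingredient of the model-based approach used above is an exponential semivariogram, a standard way fitted geostatistical models encode the strength and range of spatial dependence. The parameter values below are illustrative, not estimates from the four forests.

```python
# Exponential semivariogram: semivariance gamma(h) rises with lag distance h
# from the nugget toward the sill, reaching ~95% of the sill at the
# practical range (all parameter values hypothetical).
import math

def exp_semivariogram(h, nugget, sill, practical_range):
    """gamma(h) for the exponential model."""
    if h == 0.0:
        return 0.0
    return nugget + (sill - nugget) * (1.0 - math.exp(-3.0 * h / practical_range))

# Semivariance rises with lag distance and levels off near the sill:
near = exp_semivariogram(10.0, nugget=0.1, sill=1.0, practical_range=100.0)
far = exp_semivariogram(300.0, nugget=0.1, sill=1.0, practical_range=100.0)
```

A formation with "more pronounced" spatial structure corresponds to a small nugget relative to the sill and a long practical range; a formation with weak structure has semivariance close to the sill at all lags.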
|
379 |
Estudo de método multirresíduo para determinação de agrotóxicos em águas superficiais por SPE e GC-MS/MS / Study of a multiresidue method for the determination of pesticides in surface waters by SPE and GC-MS/MS. Hermann, Alessandro, 30 July 2013
Modern agriculture has come to use a variety of techniques and inputs in order to minimize production losses and meet the growing demand for food, combining productivity and profitability. However, the indiscriminate use of these inputs (pesticides and fertilizers) can cause considerable environmental damage. A study by the Brazilian National Health Surveillance Agency on the pesticide market shows that consumption in Brazil is expanding, putting the country on alert regarding the potential for environmental contamination arising from pesticide use. The present work studies a multiresidue method for the determination of pesticides using solid-phase extraction (SPE) and quantification by gas chromatography coupled to tandem mass spectrometry, GC-(TQ)-MS/MS, and also evaluates the storage of compounds directly on the extraction cartridges over different time periods. To optimize the extraction procedure, a factorial design was used, assessing three sorbents (Oasis® HLB, Strata C18, and Strata-X®), varying proportions of methanol in dichloromethane, and different pH values. The optimized extraction procedure uses an SPE cartridge with the polymeric sorbent Oasis® HLB (60 mg/3 mL) and sample filtration through a 0.22 μm nylon membrane. The cartridge is conditioned with 3 mL of methanol, followed by 3 mL of ultrapure water and 3 mL of ultrapure water at pH 6.2; 100 mL of sample, previously acidified to pH 6.2, is then percolated. Elution uses a 45:55 (v/v) MeOH:DCM mixture in two 1 mL additions, with two minutes of contact with the sorbent before elution. The figures of merit for the proposed method were satisfactory: linear calibration curves with r2 greater than 0.99 over the range 5-200 μg L-1, with LODm and LOQm values of 0.03 and 0.10 μg L-1, respectively. Recoveries between 70 and 120% were found for 39 of the 43 pesticides evaluated, with precision (RSD) ≤ 20%. The matrix effect was evaluated and exceeded 10% for most compounds. The method was applied to 10 samples from reservoirs in different municipalities of the Colonial Northwest region of Rio Grande do Sul, and pesticide residues were found in eight of them. The method proved effective for the determination of pesticide residues by SPE and GC-(TQ)-MS/MS and is suitable for routine analysis.
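The validation arithmetic reported above (calibration linearity, recovery against the 70-120% acceptance window, and RSD precision) can be sketched as follows. The data values are invented for illustration; only the acceptance criteria mirror those stated in the abstract.

```python
# Hedged sketch of method-validation figures of merit: r2 of a linear
# calibration fit, percent recovery of a spiked sample, and RSD precision.
# All numeric data below are invented for illustration.
import statistics

def linear_fit_r2(x, y):
    """Coefficient of determination for an ordinary least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1.0 - ss_res / ss_tot

# Calibration levels (ug/L) and detector responses (arbitrary units):
levels = [5, 10, 25, 50, 100, 200]
areas = [102, 198, 505, 1010, 1985, 4020]
r2 = linear_fit_r2(levels, areas)

# Recovery of a spiked sample and its precision across replicates:
spiked, measured = 50.0, [47.1, 49.0, 45.8, 48.2]
recovery = 100.0 * statistics.mean(measured) / spiked
rsd = 100.0 * statistics.stdev(measured) / statistics.mean(measured)
acceptable = 70.0 <= recovery <= 120.0 and rsd <= 20.0
```

These are the same checks applied per analyte in the abstract: r2 > 0.99 for linearity, recovery within 70-120%, and RSD ≤ 20% for precision.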
|
380 |
Etayage de l'activité de conception expérimentale par un EIAH pour apprendre la notion de métabolisme cellulaire en terminale scientifique / Scaffolding the experimental design activity with a TEL system to learn cellular metabolism in upper secondary school. Bonnat, Catherine, 10 July 2017
The aim of this thesis is to scaffold the experimental design activity through the use of a Technology Enhanced Learning (TEL) system. Experimental design is one part of the inquiry process and is the subject of extensive research, both because the activity promotes learning and because it is a complex task that gives rise to well-identified difficulties. The situation we chose is the demonstration of alcoholic fermentation, a topic covered in the life and earth sciences specialty of the final-year science class. Students must design an experiment to demonstrate this metabolism.
The first step of the thesis was a didactic modelling of the knowledge at stake. For this we worked within the Anthropological Theory of Didactics (ATD), and more precisely its praxeological approach (Bosch & Chevallard, 1999). A first result is the modelling of a reference praxeology, built from an epistemological analysis of the knowledge involved and of institutional expectations. This analysis also served to identify the difficulties of the situation, related both to the knowledge at stake and to the experimental procedure itself.
To help students with this difficulty-prone activity, we use scaffolding supports carried by the LabBook platform. This TEL system structures experimental reports through several tools (text, spreadsheet, drawing, protocol) made available to students; its protocol tool, "Copex", allows an experimental protocol to be pre-structured. The second step of the thesis was to propose a protocol pre-structured into steps, actions, and action parameters that addresses the difficulties identified a priori, and to implement it in LabBook while respecting the constraints of the TEL system.
The next step was to test, in the classroom, how effectively these difficulties were addressed. We ran two sets of experiments in final-year science classes in three different high schools, collecting the students' productions and their answers to questionnaires (pre-test and post-test). The analysis of the results showed that the proposed activity promotes learning of the concepts at stake and changes students' conceptions. Concerning protocol design, the proposed pre-structuring helps students produce relevant and communicable protocols, showing that students are able to use the pre-structured protocol tool in LabBook. From the personal praxeologies modelled a priori, we identified students' personal praxeologies in the data. These analyses allowed us to refine the proposed situation and to validate the pre-structuring of the protocol into steps and parameterized actions. Finally, we make recommendations for an automatic diagnosis of students' errors, with the aim of producing elaborated feedback based on the praxeological model developed in the thesis.
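The pre-structuring of a protocol into steps, actions, and action parameters can be sketched as a simple hierarchical data model. The classes, field names, and the fermentation fragment below are illustrative assumptions, not the actual representation used by Copex or LabBook:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A single experimental action with its adjustable parameters."""
    name: str
    parameters: dict[str, str] = field(default_factory=dict)

@dataclass
class Step:
    """A protocol step grouping an ordered list of actions."""
    name: str
    actions: list[Action] = field(default_factory=list)

@dataclass
class Protocol:
    """A pre-structured protocol: ordered steps of parameterized actions."""
    title: str
    steps: list[Step] = field(default_factory=list)

# Hypothetical fragment of a protocol demonstrating alcoholic fermentation
protocol = Protocol(
    title="Demonstrating alcoholic fermentation",
    steps=[
        Step(
            name="Prepare the yeast suspension",
            actions=[
                Action("add_yeast", {"mass": "1 g", "medium": "glucose solution"}),
                Action("set_conditions", {"temperature": "30 C", "oxygen": "absent"}),
            ],
        ),
        Step(
            name="Measure metabolic products",
            actions=[
                Action("measure", {"quantity": "CO2 released", "interval": "5 min"}),
            ],
        ),
    ],
)
```

Fixing the steps and action slots in advance while leaving the parameter values open is one way such a structure can both constrain students' productions and expose the parameter choices on which an automatic error diagnosis could operate.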
|