331 |
ALD of Copper and Copper Oxide Thin Films for Applications in Metallization Systems of ULSI Devices. Waechtler, Thomas, Oswald, Steffen, Roth, Nina, Lang, Heinrich, Schulz, Stefan E., Gessner, Thomas 15 July 2008 (has links) (PDF)
<p>
As a possible alternative for growing seed layers
required for electrochemical Cu deposition of
metallization systems in ULSI circuits,
the atomic layer deposition (ALD) of Cu is
under consideration. To avoid drawbacks related
to plasma-enhanced ALD (PEALD), thermal growth
of Cu has been proposed via two-step processes
that form copper oxide films by ALD and
subsequently reduce them.
</p>
<p>
This talk, given at the 8th International
Conference on Atomic Layer Deposition
(ALD 2008), held in Bruges, Belgium from
29 June to 2 July 2008, summarizes the results
of thermal ALD experiments from
[(<sup><i>n</i></sup>Bu<sub>3</sub>P)<sub>2</sub>Cu(acac)]
precursor and wet O<sub>2</sub>. The precursor is of particular
interest as it is a liquid at room temperature
and thus easier to handle than frequently
utilized solids such as Cu(acac)<sub>2</sub>,
Cu(hfac)<sub>2</sub> or
Cu(thd)<sub>2</sub>. Furthermore, the substance is
non-fluorinated, which helps avoid a major
source of the adhesion issues repeatedly observed
in Cu CVD.
</p>
<p>
As a result of the ALD experiments, we obtained composites of metallic and
oxidized Cu on Ta
and TaN, as determined by
angle-resolved XPS analyses. While smooth,
adherent films were grown on TaN within an ALD
window up to about 130°C, cluster formation due to
self-decomposition of the precursor was observed
on Ta. We also found a considerable
dependence of the growth on the degree of
nitridation of the TaN. In contrast, smooth
films could be grown up to 130°C on SiO<sub>2</sub>
and Ru, although in the latter case the ALD window
only extends to about 120°C. To apply the ALD
films as seed layers in subsequent electroplating
processes, several reduction processes are
under investigation. Thermal and plasma-assisted
hydrogen treatments are studied, as well as
thermal treatments in vapors of isopropanol,
formic acid, and aldehydes. So far, these
attempts have been most promising with formic
acid at temperatures between 100 and 120°C,
also offering the benefit of avoiding
agglomeration of the very thin ALD films on
Ta and TaN. In this respect, the process
sequence shows potential for depositing
ultra-thin, smooth Cu films at temperatures
below 150°C.
</p>
|
332 |
PaVo Un tri parallèle adaptatif / PaVo: an adaptive parallel sort. Durand, Marie 25 October 2013 (has links) (PDF)
Demanding gamers acquire, as early as possible, a graphics card capable of satisfying their thirst for immersion in games whose precision, realism and interactivity grow ever more intense over time. Since the advent of graphics cards dedicated to general-purpose computing, gamers are no longer their only customers. We first analyse the benefit of these specific parallel architectures for large-scale physical simulations. This study allows us to highlight one bottleneck in particular that limits the performance of the simulations. Consider a typical case: the cracks of a complex structure, such as a reinforced-concrete dam, can be modelled by a set of particles. The cohesion of the simulated material is ensured by the interactions between them. Each particle is represented in memory by a set of physical parameters that must be read for every force computation between two particles. Thus, for the computations to be fast, the data of particles that are close in space must be close in memory. Otherwise, the number of cache misses increases and the memory bandwidth limit may be reached, especially in parallel, bounding performance. The challenge is to maintain the organization of the data in memory throughout the simulation despite the movements of the particles. Standard sorting algorithms are not suitable because they systematically sort all the elements. Moreover, they work on dense structures, which implies many data movements in memory. We propose PaVo, a so-called adaptive sorting algorithm, that is, one able to take advantage of the pre-existing order in a sequence. In addition, PaVo maintains gaps in the structure, distributed so as to reduce the number of memory movements required.
We present an extensive experimental study and compare the results obtained with several well-known sorting algorithms. Reducing memory accesses matters even more for large-scale simulations on parallel architectures. We detail a parallel version of PaVo and evaluate its benefit. To account for the irregularity of the applications, the workload is balanced dynamically by work stealing. We propose to distribute the data in memory automatically so as to take advantage of hierarchical architectures. Tasks are pre-assigned to cores to exploit this distribution, and we adapt the stealing engine to favour steals of tasks whose data are close in memory.
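PaVo's gap-placement and work-stealing machinery is specific to the thesis, but the adaptivity it builds on can be sketched. The Python toy below is an illustration only (it uses run detection and merging, a different mechanism from PaVo's gapped structure): it finds the pre-existing sorted runs in a sequence and merges them, so a nearly sorted particle array costs far less to repair than a full sort.

```python
import heapq

def natural_runs(seq):
    """Split seq into maximal non-decreasing runs: the pre-existing order
    an adaptive sort exploits. A nearly sorted input yields very few runs."""
    runs, start = [], 0
    for i in range(1, len(seq)):
        if seq[i] < seq[i - 1]:
            runs.append(seq[start:i])
            start = i
    runs.append(seq[start:])
    return runs

def adaptive_sort(seq):
    """k-way merge of the natural runs; work shrinks with pre-sortedness.
    Returns the sorted list and the number of runs that were found."""
    runs = natural_runs(seq)
    return list(heapq.merge(*runs)), len(runs)
```

On an almost-sorted input such as `[1, 2, 3, 0, 5, 6]` only two runs exist, so the merge touches each element once instead of performing a full comparison sort.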
|
333 |
Medindo a predisposição para a tecnologia / Measuring predisposition toward technology. Bernardi Junior, Plinio 18 February 2008 (has links)
Previous issue date: 2008-02-18T00:00:00Z / Existe a expectativa de que cada indivíduo absorva de forma rápida e satisfatória os avanços tecnológicos para que possa usufruir dos seus benefícios e permanecer competitivo no mercado de trabalho. Mesmo que o foco principal da maior parte das pesquisas esteja no alcance de benefícios para as empresas com o uso de tecnologia, a intenção de comportamento do indivíduo representa o passo inicial para a sua adoção. No entanto, ao mesmo tempo em que se percebe a evolução das tecnologias em benefício das pessoas, também existem evidências no sentido de um sentimento de frustração com a tecnologia. Nenhum estudo é conclusivo sobre a identificação das variáveis que afetam o desenvolvimento das percepções e intenções para a tecnologia. Além disso, a maior parte dos modelos foi testada em países desenvolvidos ou em camadas sociais superiores da população. O propósito principal dessa tese é apresentar forma alternativa de medir a Predisposição para a Tecnologia, que seja aplicável não apenas em situações específicas, mas também para toda a gama da população. O trabalho faz uso das ferramentas da Teoria de Resposta ao Item para a proposição e validação de uma nova escala de Predisposição para a Tecnologia, que se mostrou bastante consistente e coerente. A nova escala possui a vantagem de ter maior poder discriminante, especialmente para as classes de menor nível educacional e de renda. Além disso, a escala criada apresenta mais informação com um número reduzido de itens, o que pode representar reduções de custo e tempo de aplicação dos questionários. / There is an expectation that each individual absorb, in a fast and satisfactory way, technological advances so that he/she can take advantage of their benefits and remain competitive in the job market. 
Although the main focus of most research is on the technological benefits for businesses using technology, the individual's behavioural intention represents the first step towards its adoption. However, even as technology is perceived as evolving for the benefit of people, there are also clear signs of a sense of frustration with technology. No study is conclusive about the identification of the variables that affect the development of perceptions and intentions toward technology. Moreover, most of the models were tested in developed countries or in high-social-level populations. The main purpose of this thesis is to present an alternative way of measuring Technology Predisposition that applies not only in specific situations but also to the full range of the population. This study makes use of the tools of Item Response Theory for the proposition and validation of a new Technology Predisposition scale, which proved to be quite consistent and coherent. The new scale has the advantage of greater discriminative power, especially for the lower educational and income classes. Moreover, the created scale provides more information with a smaller number of items, which may represent reductions in the cost and time of administering the questionnaires.
|
334 |
Uso da Teoria Clássica dos Testes – TCT e da Teoria de Resposta ao Item – TRI na avaliação da qualidade métrica de testes de seleção / Use of Classical Test Theory (CTT) and Item Response Theory (IRT) in the evaluation of the metric quality of selection tests. MAIA, José Leudo January 2009 (has links)
MAIA, José Leudo. Uso da Teoria Clássica dos Testes – TCT e da Teoria de Resposta ao Item – TRI na avaliação da qualidade métrica de testes de seleção. 2009. 325f. Tese (Doutorado em Educação) – Universidade Federal do Ceará, Faculdade de Educação, Programa de Pós-Graduação em Educação Brasileira, Fortaleza-CE, 2009.
Previous issue date: 2009 / This doctoral work proposes to use Classical Test Theory (CTT) and Item Response Theory (IRT) as instruments for evaluating the metric quality of selection tests, under four aspects of investigation: construct validity analysis; psychometric analysis of the items; differential item functioning (DIF); and the information function. To this end, data from the results of the Portuguese and Mathematics tests of the 2007 entrance examination of the Universidade Estadual do Ceará (UECE) were used, in which 20,016 candidates competed for 38 undergraduate programmes in the state capital alone. The following software was used to process these data: SPSS v15; BILOG-MG v3.0; MULTILOG FOR WINDOWS v1.0; and TESTFACT v4.0. The first step was to verify the dimensionality of these tests, using the Kaiser-Guttman method, the scree plot, and the method of factor loadings and communalities of the factor matrix. The Portuguese test was found to exhibit multidimensional characteristics and was therefore discarded, as it did not meet the basic assumptions of unidimensionality and local independence of the items. The Mathematics test, however, showed unidimensional behaviour and became the focus of this work. The construct validity analysis was carried out with Cronbach's alpha and Kuder-Richardson coefficients, both yielding values of 0.685, in addition to the factor-loading method, with loadings between 0.837 and 0.960, indicating strong internal consistency. The psychometric analysis of the items was carried out using the difficulty, discrimination and guessing indices for both theories, indicating that this is a test of medium difficulty, with good discriminative behaviour and a low guessing rate.
The DIF analysis was carried out, by candidate gender, using the Delta-plot, Mantel-Haenszel, logistic regression and beta-comparison methods, yielding statistically non-significant results, from which it was concluded that the test does not behave differently across genders. The analysis of the test's information function showed that it is particularly valid for candidates with ability around 0.8750 and that, at a 95% confidence level, 49.3% of the candidates would meet this indication. It was also observed that 90.6% of the candidates, in both procedures, showed the same ability level, indicating quite reasonable convergence between the results generated by CTT and IRT; in the sample study, however, IRT identified that 9.4% of the candidates showed greater aptitude for higher education than those selected by CTT.
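The construct-validity step above relies on Cronbach's alpha (the thesis reports 0.685, computed with SPSS/BILOG). As a hedged illustration of how such a coefficient is obtained from a raw score matrix, not the thesis's actual pipeline, a minimal computation is:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a matrix scores[person][item] of item scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(scores[0])  # number of items

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

For perfectly correlated items the coefficient reaches 1; for items varying independently of the total it falls toward 0, which is why values near 0.7, like the one reported, are read as acceptable internal consistency.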
|
335 |
Optimisation des performances et de la robustesse d’un électrolyseur à hautes températures / Optimization of the performances and the robustness of an electrolyser at high temperatures. Usseglio-Viretta, François 05 October 2015 (has links)
La réponse thermique, électrochimique et mécanique d'un électrolyseur de la vapeur d'eau à haute température (EVHT) a été analysée dans ce travail. Pour ce faire, une approche de modélisation multi-physique et multi-échelle a été employée : • Un modèle local, à l'échelle de la microstructure des électrodes, a été utilisé pour analyser le comportement électrochimique apparent des électrodes de la cellule d'électrolyse étudiée. Le fonctionnement du système au sein d'un empilement de plusieurs cellules a ensuite été analysé grâce à un modèle thermoélectrochimique à l'échelle macroscopique de l'EVHT. Un élément de validation expérimentale du modèle accompagne les résultats. • Un modèle thermomécanique pour le calcul de l'état de contrainte de l'EVHT a été développé. Celui-ci tient compte des phénomènes physiques intrinsèques à la cellule et à son fonctionnement sous courant à hautes températures et à ceux imputables aux interactions mécaniques entre la cellule et son environnement. Les données manquantes nécessaires à l'exécution des modèles ont été obtenues par la caractérisation et par des calculs d'homogénéisation de la microstructure tridimensionnelle des électrodes. Par ailleurs le comportement viscoplastique du matériau de la cathode a été mis évidence par des essais de fluage en flexion quatre points. L'étude a permis de définir un domaine de fonctionnement optimal garantissant des performances électrochimiques élevées avec des niveaux de température acceptables. Des propositions visant à réduire l'endommagement mécanique du système ont également été produites. / The thermal, electrochemical and mechanical response of a high temperature steam electrolyzer (HTSE) has been analyzed in this work. To this end, a multi-physics and multi-scale modelling approach has been employed: • A local model, at the microstructure scale of the electrodes, has been used to analyze the apparent electrochemical behavior of the electrodes related to the studied electrolysis cell. 
System operation within a stack of several cells has then been analyzed using a thermoelectrochemical model at the macroscopic scale of the HTSE. An element of experimental validation of the model accompanies the results. • A thermomechanical model for the calculation of the stress state of the HTSE has been developed. This model accounts for the physical phenomena intrinsic to the cell and its operation under current at high temperatures, as well as those ascribable to the mechanical interactions between the cell and its environment. The missing data required to run the models have been obtained by characterization and homogenization calculations of the three-dimensional microstructure of the electrodes. Besides, the viscoplastic behavior of the cathode material has been demonstrated by four-point bending creep tests. The study made it possible to define an optimal operating zone, ensuring both high electrochemical performance and acceptable temperature levels. Proposals aiming to reduce the mechanical damage of the system have also been produced.
|
336 |
Scalable Sparse Bayesian Nonparametric and Matrix Tri-factorization Models for Text Mining Applications. Ranganath, B N January 2017 (has links) (PDF)
Hierarchical Bayesian models and matrix factorization methods provide an unsupervised way to learn the latent components of grouped or sequence data. For example, in document data, a latent component corresponds to a topic, with each topic a distribution over a vocabulary of words. For many applications, there exist sparse relationships between the domain entities and the latent components of the data. Traditional approaches to topic modelling do not take these sparsity considerations into account. Modelling these sparse relationships helps in extracting relevant information, leading to improvements in topic accuracy and to scalable solutions. In our thesis, we explore these sparsity relationships for different applications such as text segmentation, topical analysis and entity resolution in dyadic data, through Bayesian and matrix tri-factorization approaches, proposing scalable solutions.
In our first work, we address the problem of segmentation of a collection of sequence data, such as documents, using probabilistic models. Existing state-of-the-art hierarchical Bayesian models are connected to the notion of Complete Exchangeability or Markov Exchangeability. Bayesian nonparametric models based on the notion of Markov Exchangeability, such as HDP-HMM and Sticky HDP-HMM, allow very restricted permutations of latent variables in grouped data (topics in documents), which in turn leads to computational challenges for inference. At the other extreme, models based on Complete Exchangeability, such as HDP, allow arbitrary permutations within each group or document, and inference is significantly more tractable as a result, but segmentation is not meaningful using such models. To overcome these problems, we explored a new notion of exchangeability called Block Exchangeability that lies between Markov Exchangeability and Complete Exchangeability, for which segmentation is meaningful but inference is computationally less expensive than for both Markov and Complete Exchangeability. Parametrically, Block Exchangeability requires a sparser number of transition parameters, linear in the number of states, compared to the quadratic order for Markov Exchangeability, which is still less than that for Complete Exchangeability, where the parameters are of the order of the number of documents. For this, we propose a nonparametric Block Exchangeable Model (BEM) based on the new notion of Block Exchangeability, which we have shown to be a superclass of Complete Exchangeability and a subclass of Markov Exchangeability. We propose a scalable inference algorithm for BEM to infer the topics for words and the segment boundaries associated with topics for a document, using a collapsed Gibbs sampling procedure.
Empirical results show that BEM outperforms state-of-the-art nonparametric models in terms of scalability and generalization ability, and shows nearly the same segmentation quality on a news dataset, a product review dataset and a synthetic dataset. Interestingly, we can tune the scalability by varying the block size through a parameter in our model, for a small trade-off in segmentation quality.
In addition to exploring the association between documents and words, we also explore sparse relationships for dyadic data, where associations between one pair of domain entities such as (documents, words) and associations between another pair such as (documents, users) are completely observed. We motivate the analysis of such dyadic data by introducing an additional discrete dimension, which we call topics, and explore sparse relationships between the domain entities and the topics, such as user-topic and document-topic relationships respectively.
In our second work, for this problem of sparse topical analysis of dyadic data, we propose a formulation using sparse matrix tri-factorization. This formulation requires sparsity constraints not only on the individual factor matrices, but also on the product of two of the factors. To the best of our knowledge, this problem of sparse matrix tri-factorization has not been studied before. We propose a solution that introduces a surrogate for the product of factors and enforces sparsity on this surrogate, as well as on the individual factors, through L1-regularization. The resulting optimization problem is efficiently solvable in an alternating minimization framework over sub-problems involving individual factors using the well-known FISTA algorithm. For the sub-problems that are constrained, we use a projected variant of the FISTA algorithm. We also show that our formulation leads to independent sub-problems when solving for a factor matrix, thereby supporting parallel implementation and leading to a scalable solution. We perform experiments over bibliographic and product review data to show that the proposed framework, based on the sparse tri-factorization formulation, results in better generalization ability and factorization accuracy compared to baselines that use sparse bi-factorization.
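The FISTA updates mentioned above follow a standard pattern: a gradient step, soft-thresholding (the proximal operator of the L1 penalty), and a momentum extrapolation. The sketch below is a generic FISTA for an L1-regularized least-squares sub-problem, under the assumption that each factor update reduces to this form; it is not the thesis implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 -- the step that produces sparsity."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """Minimal FISTA for min_x (1/2)||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

With `A` the identity, the solution is exactly the soft-thresholded data, which makes the behaviour of the L1 step easy to verify by hand.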
Even though the second work performs sparse topical analysis for dyadic data, finding sparse topical associations for the users, user references with different names could belong to the same entity, and those with the same name could belong to different entities. The problem of entity resolution is widely studied in the research community, where the goal is to identify the real users associated with the user references in the documents.
Finally, we focus on the problem of entity resolution in dyadic data, where associations between one pair of domain entities such as documents-words and associations between another pair such as documents-users are observed, an example of which is bibliographic data. In our final work, for this problem of entity resolution in bibliographic data, we propose a Bayesian nonparametric Sparse Entity Resolution Model (SERM), exploring the sparse relationships between the grouped data, involving a grouping of the documents, and the topic/author entities in the group. Further, we also exploit the sparseness between an author entity and the associated author aliases. Grouping of the documents is achieved with the stick-breaking prior for Dirichlet processes (DP). To achieve sparseness, we propose a solution that introduces separate Indian Buffet process (IBP) priors over the topics and the author entities for the groups, and a k-NN mechanism for selecting author aliases for the author entities. We propose a scalable inference for SERM by appropriately combining a partially collapsed Gibbs sampling scheme as in the Focused Topic Model (FTM), the inference scheme used for the parametric IBP prior, and the k-NN mechanism. We perform experiments over bibliographic datasets, Citeseer and Rexa, to show that the proposed SERM model improves the accuracy of entity resolution by finding relevant author entities through modelling sparse relationships, and is scalable when compared to the state-of-the-art baseline.
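The stick-breaking prior used above to group documents has a compact generative form: each weight takes a Beta-distributed fraction of the stick left over by its predecessors. As a hedged sketch (truncated for illustration; SERM's actual inference is the collapsed sampler described above):

```python
import random

def stick_breaking_weights(alpha, n_atoms, seed=0):
    """Truncated stick-breaking (GEM) weights for a Dirichlet process:
    beta_k ~ Beta(1, alpha);  pi_k = beta_k * prod_{j<k} (1 - beta_j)."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        beta_k = rng.betavariate(1.0, alpha)   # fraction of the remaining stick
        weights.append(remaining * beta_k)
        remaining *= (1.0 - beta_k)
    return weights
```

Smaller `alpha` concentrates mass on the first few atoms (few document groups), while larger `alpha` spreads it out; the truncation leaves a small unassigned remainder, which is why the weights sum to slightly less than one.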
|
337 |
A reta de Euler e a circunferência dos nove pontos: um olhar algébrico / The Euler line and the nine-point circle: an algebraic view. Souto, Antonio Marcos da Silva 12 August 2013 (has links)
Previous issue date: 2013-08-12 / This work is the result of research on the Euler line and the nine-point circle. The software GeoGebra was used to illustrate the geometric constructions and to present some practical activities on the notable points of the triangle, the Euler line and the nine-point circle to high school students. The core of the work, however, lies in the proofs, using Modern Algebra and Linear Algebra, of the existence and properties of the objects of this research, especially the universal property of points in the plane, which is fundamental in these proofs. / Este trabalho é o resultado de uma pesquisa sobre a reta de Euler e a circunferência
dos nove pontos. Foi utilizado o software geogebra para ilustrar as construções
geométricas e apresentar algumas atividades práticas para o estudo dos pontos notá-
veis do triângulo, da reta de Euler e da circunferência dos nove pontos aos estudantes
do Ensino Médio. Todavia, o trabalho se baseou nas demonstrações, com o uso da
Álgebra Moderna e da Álgebra Linear, da existência e das propriedades do objeto
desta pesquisa, sobretudo da propriedade universal dos pontos no plano, fundamental
nestas demonstrações.
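The collinearity this work proves algebraically can also be checked numerically with coordinates. The Python sketch below is an illustration, not material from the dissertation; it computes the centroid, circumcenter, orthocenter and nine-point centre, using the vector identity H = A + B + C - 2O, which holds when O is the circumcenter.

```python
def circumcenter(A, B, C):
    """Circumcenter of triangle ABC from the standard coordinate formula."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy)

def euler_points(A, B, C):
    """Centroid G, circumcenter O, orthocenter H and nine-point centre N."""
    G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
    O = circumcenter(A, B, C)
    # Euler's vector relation: H = A + B + C - 2O (since OH = OA + OB + OC)
    H = (A[0] + B[0] + C[0] - 2*O[0], A[1] + B[1] + C[1] - 2*O[1])
    # The nine-point centre is the midpoint of segment OH
    N = ((O[0] + H[0]) / 2, (O[1] + H[1]) / 2)
    return G, O, H, N
```

For the right triangle (0,0), (4,0), (0,3) the orthocenter is the right-angle vertex and all four points fall on one line, which a cross-product test confirms.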
|
338 |
Uso da Teoria Clássica dos Testes – TCT e da Teoria de Resposta ao Item – TRI na avaliação da qualidade métrica de testes de seleção / Use of Classical Test Theory (CTT) and Item Response Theory (IRT) in the evaluation of the metric quality of selection tests. José Leudo Maia 18 December 2009 (has links)
Fundação de Amparo à Pesquisa do Estado do Ceará / This doctoral work proposes to use Classical Test Theory (CTT) and Item Response Theory (IRT) as instruments for evaluating the metric quality of selection tests, under four aspects of investigation: construct validity analysis; psychometric analysis of the items; differential item functioning (DIF); and the information function. To this end, data from the results of the Portuguese and Mathematics tests of the 2007 entrance examination of the Universidade Estadual do Ceará (UECE) were used, in which 20,016 candidates competed for 38 undergraduate programmes in the state capital alone. The following software was used to process these data: SPSS v15; BILOG-MG v3.0; MULTILOG FOR WINDOWS v1.0; and TESTFACT v4.0. The first step was to verify the dimensionality of these tests, using the Kaiser-Guttman method, the scree plot, and the method of factor loadings and communalities of the factor matrix. The Portuguese test was found to exhibit multidimensional characteristics and was therefore discarded, as it did not meet the basic assumptions of unidimensionality and local independence of the items. The Mathematics test, however, showed unidimensional behaviour and became the focus of this work. The construct validity analysis was carried out with Cronbach's alpha and Kuder-Richardson coefficients, both yielding values of 0.685, in addition to the factor-loading method, with loadings between 0.837 and 0.960, indicating strong internal consistency. The psychometric analysis of the items was carried out using the difficulty, discrimination and guessing indices for both theories, indicating that this is a test of medium difficulty, with good discriminative behaviour and a low guessing rate.
The DIF analysis was carried out, by candidate gender, using the Delta-plot, Mantel-Haenszel, logistic regression and beta-comparison methods, yielding statistically non-significant results, from which it was concluded that the test does not behave differently across genders. The analysis of the test's information function showed that it is particularly valid for candidates with ability around 0.8750 and that, at a 95% confidence level, 49.3% of the candidates would meet this indication. It was also observed that 90.6% of the candidates, in both procedures, showed the same ability level, indicating quite reasonable convergence between the results generated by CTT and IRT; in the sample study, however, IRT identified that 9.4% of the candidates showed greater aptitude for higher education than those selected by CTT.
|
339 |
Transposição da Teoria da Resposta ao Item: uma abordagem pedagógica / Transposition of Item Response Theory: a pedagogical approach. Eder Alencar Silva 23 June 2017 (has links)
Este trabalho tem por objetivo apresentar a Teoria da Resposta ao Item (TRI), por meio de uma abordagem pedagógica, aos professores da educação básica, que mencionaram esta necessidade por meio de pesquisa realizada pelo autor. Levar parte do conhecimento teórico que embasa esta teoria ao conhecimento do docente, principalmente a construção da curva de probabilidade de acerto do item, favorecerá a compreensão, a análise e o monitoramento do processo avaliativo educacional. Este material apresenta as principais definições e conceitos da avaliação externa em larga escala, além de fornecer insumos para a compreensão das suposições realizadas para aplicação da metodologia. Neste sentido, o texto foi estruturado de forma a apresentar didaticamente as etapas do processo de implementação de uma avaliação, desde a construção do item até a apuração e divulgação dos resultados. Todo enfoque será dado à construção do modelo da TRI com um parâmetro (dificuldade do item), também conhecido como modelo de Rasch, o que simplifica e facilita a compreensão da metodologia. O modelo utilizado nas avaliações externas em larga escala (modelo com três parâmetros) será introduzido a partir das considerações realizadas na abordagem que explicita o pensamento da construção do modelo de um parâmetro. Acredita-se que esta compreensão possa colaborar com o professor na exploração das habilidades/competências dos alunos durante os anos escolares. / This study aims to present Item Response Theory (IRT), through a pedagogical approach, to basic education teachers, who mentioned this need in a survey conducted by the author. Bringing part of the theoretical knowledge that underpins this theory to teachers, especially the construction of the probability curve of a correct response to an item, will favor the understanding, analysis and monitoring of the educational evaluation process.
This material presents the main definitions and concepts of large-scale external assessment, besides providing inputs for understanding the assumptions made in applying the methodology. In this sense, the text was structured so as to present didactically the stages of the implementation process of a large-scale assessment, from item construction to the calculation and dissemination of results. The focus is on the construction of the one-parameter IRT model (item difficulty), also known as the Rasch model, which simplifies and facilitates the understanding of the methodology. The model used in large-scale external assessments (the three-parameter model) is then introduced from the considerations made in the approach that lays out the reasoning behind the construction of the one-parameter model. It is believed that this understanding can help teachers explore students' skills and competences throughout the school years.
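The item characteristic curves discussed above are simple to compute. As an illustrative sketch (not code from the thesis), the one-parameter Rasch curve and the three-parameter logistic variant used in large-scale assessments can be written as:

```python
import math

def rasch_p(theta, b):
    """One-parameter (Rasch) item characteristic curve: probability of a
    correct response given ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def three_pl_p(theta, a, b, c):
    """Three-parameter logistic model: a = discrimination, b = difficulty,
    c = pseudo-guessing floor (the chance of a correct blind guess)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

When ability equals difficulty (theta = b), the Rasch curve passes through 0.5, which is the reference point teachers can use to read item difficulty off the curve; some formulations also include a scaling constant (often 1.7) inside the exponent, omitted here for clarity.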
|
340 |
Etude structurale des nanotubes de carbone double parois. / Structural study of double-walled carbon nanotubes. Ghedjatti, Ahmed 29 January 2016 (has links)
Le nanotube de carbone double parois représente le cas idéal pour étudier la nature de l'interaction entre parois des tubes multiparois. En partant d'échantillons dispersés de DWNTs synthétisés par CVD, nous avons pu, grâce à la microscopie électronique en transmission haute résolution (METHR), établir une procédure robuste de détermination structurale des configurations. Il apparaît alors que certaines configurations structurales sont privilégiées alors que d'autres sont interdites, mettant en évidence les effets du couplage interparoi. À partir de simulations Monte Carlo réalisées sur des DWNTs de configurations interdites, nous avons montré que le tube interne modifie sa structure pour atteindre une stabilité énergétique, ce que nous avons pu rapprocher d'observations expérimentales. Pour étudier les propriétés électroniques des DWNTs observés expérimentalement, nous avons corrélé les techniques de METHR et d'absorption optique pour analyser des populations différenciées de tubes en nombre de parois, diamètre et nature électronique, grâce à la technique DGU (Ultracentrifugation de Gradient de Densité). À l'issue de trois tris successifs, nous avons pu isoler une population de tubes double parois pure à 95% et dont les tubes extérieurs sont de nature semi-conducteur à 90%. / The double-walled carbon nanotube represents the ideal case for investigating the nature of the interaction between the walls of multiwall tubes. Starting from dispersed samples of DWNTs synthesized by CVD, we have established, using high-resolution transmission electron microscopy (HRTEM), a robust procedure for the structural determination of configurations. A statistical study shows that some structural configurations are favored while others are forbidden, highlighting the effects of inter-wall coupling. To go further, we have performed atomic-scale Monte Carlo simulations on DWNTs with forbidden configurations.
As a result, we have shown that the inner tube changes its structure to achieve energetic stability, in good agreement with experimental observations. To study the electronic properties of the experimentally observed DWNTs, we correlated HRTEM and optical absorption techniques to analyze tube populations differentiated by number of walls, diameter and electronic nature, thanks to the DGU technique (Density Gradient Ultracentrifugation). After three successive sorting steps, we isolated a population of double-walled tubes of 95% purity, in which 90% of the outer tubes are semiconducting.
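The diameters and electronic character sorted for above follow directly from a tube's chiral indices (n, m). As a hedged sketch of these textbook relations (not code from the thesis), using the graphene lattice constant of about 0.246 nm:

```python
import math

A_LATTICE = 0.246  # graphene lattice constant in nm (C-C bond ~0.142 nm times sqrt(3))

def tube_diameter(n, m):
    """Diameter of an (n, m) carbon nanotube in nm: d = a*sqrt(n^2+nm+m^2)/pi."""
    return A_LATTICE * math.sqrt(n*n + n*m + m*m) / math.pi

def is_metallic(n, m):
    """Zone-folding rule of thumb: a tube is metallic when (n - m) is a
    multiple of 3, otherwise semiconducting (the character DGU sorts for)."""
    return (n - m) % 3 == 0
```

For example, the armchair (10,10) tube comes out at roughly 1.36 nm and metallic, while the zigzag (10,0) tube is semiconducting; pairing such inner and outer indices is how allowed DWNT configurations are tabulated.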
|