1 |
Seismic Vulnerability Assessment Using Artificial Neural Networks. Guler, Altug, 01 June 2005 (has links) (PDF)
In this study, an alternative seismic vulnerability assessment model is developed. For this purpose, one of the most popular artificial intelligence techniques, Artificial Neural Network (ANN), is used.
Many ANN models are generated using four different network training functions, 1 to 50 hidden neurons, and combinations of structural parameters (number of stories, normalized redundancy score, overhang ratio, soft-story index, normalized total column area, and normalized total wall area) to achieve the best assessment performance.
The Duzce database is used throughout the thesis for training the ANN. A neural network simulator is developed in Microsoft Excel using the weights and parameters obtained from the best model created in the Duzce damage-database studies. The Afyon, Erzincan, and Ceyhan databases are simulated using the developed simulator. A recently created database, Zeytinburnu, is used for projection purposes: a seismic vulnerability assessment of 3043 buildings in the Zeytinburnu area is conducted using the proposed procedure.
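The Excel simulator described above is, in essence, a stored-weight forward pass through the trained network. A minimal sketch of such a simulator, with randomly invented weights and feature values purely for illustration (a real simulator would load the weights exported from the trained Duzce model):

```python
import numpy as np

# Hypothetical weights for a 6-input, 3-hidden-neuron, 1-output network.
# A real simulator would use the weights exported from the trained ANN.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 6)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

def simulate(features):
    """Forward pass: tanh hidden layer, sigmoid output mapped to [0, 1]."""
    h = np.tanh(W1 @ features + b1)          # hidden-layer activations
    z = (W2 @ h + b2)[0]                     # output pre-activation
    return 1.0 / (1.0 + np.exp(-z))          # vulnerability score in [0, 1]

# Illustrative feature vector: [stories, redundancy score, overhang ratio,
# soft-story index, normalized column area, normalized wall area]
building = np.array([5.0, 0.4, 0.1, 0.8, 0.002, 0.001])
score = simulate(building)
```

Once the weight matrices are fixed, each building's score is just two matrix products and two nonlinearities, which is why the procedure ports cleanly to a spreadsheet.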
|
2 |
Development and assessment of computer-game-like tests of human cognitive abilities. McPherson, Jason, January 2008 (has links)
The present thesis describes the development and assessment of two computer-game-like tests designed to measure two cognitive abilities currently of considerable interest to many researchers: processing speed (Gs) and working memory (WM). It is hoped that such tests could provide a unique and important addition to the range of tests currently employed by researchers interested in these constructs. The results of five separate studies are presented across three published papers. In Paper 1-Study 1 (N = 49) a speeded computerized coding test (Symbol Digit) using the mouse as the response device was assessed. Because speeded tests are thought to be highly sensitive to response methods (Mead & Drasgow, 1994), it was deemed important to first assess how a mouse response method might affect the underlying construct validity of a speeded coding test independently of whether it was game-like. Factor analytic results indicated that the computerized coding test loaded strongly on the same factor as paper-and-pencil measures of Gs. For Paper 2-Study 1 (N = 68) a more computer-game-like version of Symbol Digit, Space Code, was developed. Development of Space Code involved providing a cover story, replacing code symbols with ‘spaceship’ graphics, situating the test within an overall ‘spaceship cockpit’, and adding numerous other graphical and aural embellishments to the task. Factor analytic results indicated that Space Code loaded strongly on a Gs factor but also on a factor composed of visuo-spatial (Gv) ability tests. This finding was further investigated in the subsequent study. Paper 2-Study 2 (N = 74) involved a larger battery of ability marker tests, and a range of additional computer-game-like elements was added to Space Code: a scoring system, a timer with voice-synthesized countdowns, aversive feedback for errors, and background music.
Factor analysis indicated that, after a general factor was extracted, Space Code loaded on the same factor as paper-and-pencil measures of Gs and did not load on a factor composed of non-speeded Gv tests. Paper 3-Study 1 (N = 74) was aimed at assessing a computer-game-like test of WM (Space Matrix) and further assessing Space Code within a broader network of tests. Space Matrix used a dual-task format combining a simple version of Space Code with a visually presented memory task based on the Dot Matrix test (Miyake, Friedman, Rettinger, Shah, & Hegarty, 2001). The cover story and scoring system for Space Code were expanded to incorporate this additional memory element. Factor analysis indicated that Space Matrix loaded on the same first-order factor as standard WM tests and the Raven’s Advanced Progressive Matrices (Gf). Space Code loaded substantially on the second-order factor but only weakly on each of two first-order factors interpreted as Gs and WM/Gf. A final study is presented (Paper 3-Study 2) in which Space Code and Space Matrix were administered to a school-aged sample (N = 94). Space Matrix exhibited construct validity as well as predictive validity (as a predictor of school grades), while results for Space Code were less encouraging. Space Matrix and Raven’s Progressive Matrices showed comparable relationships to school grades for Mathematics, English, and Science subjects. It is concluded that the development of computer-game-like tests represents a promising new format for research and applied assessment of known cognitive abilities. / http://proxy.library.adelaide.edu.au/login?url= http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1342350 / Thesis (Ph.D.) -- University of Adelaide, School of Psychology, 2008
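The factor-analytic approach used throughout these studies can be illustrated with a small sketch: fitting a two-factor model to a synthetic test battery in which two tests share a speed (Gs) factor and two share a visuo-spatial (Gv) factor. All data and loadings here are simulated for illustration, not taken from the thesis:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 500
gs = rng.normal(size=n)   # latent processing-speed factor
gv = rng.normal(size=n)   # latent visuo-spatial factor

# Four hypothetical tests: two load on Gs, two on Gv, plus unique noise.
battery = np.column_stack([
    0.8 * gs + 0.3 * rng.normal(size=n),   # paper-and-pencil coding test
    0.8 * gs + 0.3 * rng.normal(size=n),   # game-like coding test
    0.8 * gv + 0.3 * rng.normal(size=n),   # mental rotation test
    0.8 * gv + 0.3 * rng.normal(size=n),   # paper folding test
])

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(battery)
loadings = fa.components_.T   # tests x factors loading matrix
```

Tests that load strongly on the same rotated factor are interpreted as measuring the same construct; this is the logic behind concluding that Space Code "loaded on the same factor as paper-and-pencil measures of Gs".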
|
5 |
Estrutura Fatorial do WISC-III em crianças com dificuldades de aprendizagem: uma validação em amostra brasileira / FACTOR STRUCTURE OF THE WISC-III FOR CHILDREN WITH LEARNING DISABILITIES: A BRAZILIAN VALIDATION. Vidal, Francisco Antonio Soto, 30 April 2010 (has links)
The adaptation of a psychological instrument to another cultural environment requires that its norms, validity, and reliability be re-examined. Although the WISC-III has already been adapted to the Brazilian context, further studies verifying its construct validity are needed when it is used with clinical groups. This work contributes to that line of research by investigating which factorial model is most appropriate for Brazilian children with learning disabilities (LD). A total of 263 WISC-III test protocols were analyzed, from public-school students referred by their teachers for psychological evaluation because of difficulties in reading, writing, and/or arithmetic. The statistical techniques of Exploratory Factor Analysis and Confirmatory Factor Analysis were applied. Besides corroborating the factor structure defined in the Brazilian standardization, this study agrees with international research in identifying the four-factor model as the best fit for the LD clinical group. Although two three-factor models were also identified as comparably advantageous in fit, parsimony, and theoretical interpretability, the four-factor structure is the most suitable for clinical interpretation of the scores expressing the LD group's cognitive abilities, since it allows the existing WISC-III norms for the general population to be used.
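The choice between three- and four-factor models is a model-selection problem. One simple way to frame it is to compare held-out log-likelihood across candidate factor counts; the sketch below does this on synthetic subtest scores (the actual study used CFA fit indices, and all data here are invented, so this is an illustrative stand-in rather than the thesis procedure):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 263  # same sample size as the study, data entirely synthetic

# Synthetic stand-in for 13 subtest scores driven by 4 latent factors.
latent = rng.normal(size=(n, 4))
mixing = rng.normal(size=(4, 13))
scores = latent @ mixing + 0.5 * rng.normal(size=(n, 13))

def mean_cv_loglik(k):
    """Mean held-out log-likelihood of a k-factor model (5-fold CV)."""
    return cross_val_score(FactorAnalysis(n_components=k), scores).mean()

best_k = max(range(1, 7), key=mean_cv_loglik)
```

A model with too few factors underfits and one with too many fits noise, so the held-out likelihood peaks near the true dimensionality; parsimony and interpretability then break ties, as the abstract notes.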
|
6 |
Ist der Mehrfachwahl-Wortschatz-Test Version A (MWT-A) zur Schätzung des prämorbiden Intelligenzniveaus geeignet? Überprüfung an einer konsekutiven Stichprobe einer Demenz-Spezialambulanz [Is the Multiple-Choice Vocabulary Test, Version A (MWT-A) suitable for estimating premorbid intelligence level? An examination in a consecutive sample from a specialized dementia outpatient clinic]. Binkau, Sabrina, 09 August 2016
Vocabulary tests have long been used to estimate premorbid intelligence level in the neuropsychological assessment of dementia; however, doubts exist about the validity of such tests. The present study examines whether the Multiple-Choice Vocabulary Test, Version A (Mehrfachwahl-Wortschatz-Test, MWT-A) is valid for assessing premorbid intelligence level. Data from 821 patients of a specialized outpatient clinic for dementia (memory clinic), covering the whole spectrum of cognitive impairment, were evaluated using analysis of variance with premorbid intelligence level (MWT-A) as the dependent variable and extent of global cognitive impairment (Mini-Mental State Examination, MMSE: mean = 25.2, SD = 3.9) as the independent variable. The latter was divided into six MMSE groups (29–30, 28, 27, 25–26, 22–24, 5–21). In the case of pathologically relevant global cognitive impairment (24–26 MMSE points), the MWT-A underestimates the premorbid intelligence level; this effect is moderated neither by age nor by education. The results indicate that the MWT-A is unsuitable for estimating premorbid intelligence level in neuropsychological assessments of cognitively impaired patients or patients with dementia.
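The core analysis here is a one-way ANOVA on MWT-A scores across MMSE groups. A small illustrative sketch with synthetic data (all means, SDs, and group sizes are invented to mimic the reported underestimation pattern, not taken from the study):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Synthetic MWT-A IQ estimates for three illustrative MMSE groups; the lower
# mean in the pathological range mimics the reported underestimation effect.
group_29_30 = rng.normal(105, 12, size=200)   # cognitively intact
group_25_26 = rng.normal(98, 12, size=150)    # pathological range
group_05_21 = rng.normal(96, 12, size=100)    # marked impairment

f_stat, p_value = f_oneway(group_29_30, group_25_26, group_05_21)
```

A significant F statistic means the mean premorbid IQ estimate differs by impairment level, contradicting the assumption that a vocabulary-based estimate stays stable as dementia progresses.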
|