| Nome: | Descrição: | Tamanho: | Formato: |
|---|---|---|---|
| | | 2.76 MB | Adobe PDF |
Advisor(s)
Abstract(s)
Introduction
The use of games as a tool to support the learning process has been studied for years, and the existence of dozens of effective practical examples of games being used in early childhood education, and even in professional training, cannot be questioned. However, only a small number of studies examine the ability of games to assess students' knowledge, and even the most significant of them often arrive at solutions that, however effective, run into implementation conflicts, such as in practical development and in the analysis of the collected data. This study proposes to explore a simpler approach, one that fits more easily into the assessment systems already present in schools, by applying a playful digital layer to more traditional tests while also allowing the collection of simple, non-invasive data that teachers can interpret to better understand students' knowledge and difficulties.
In this study, the three-parameter logistic Item Response Theory model (3PL IRT) is applied and analyzed to ensure the reliability of the test results and to determine which data could be used alongside it to reinforce its effectiveness. The study also examines the impacts this testing method could have on students, including but not limited to increases or decreases in test anxiety, as well as trial implementations of such a system in school settings.
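The 3PL model referred to above gives the probability of a correct response as a logistic function of the examinee's ability, with a lower asymptote for guessing. A minimal sketch in Python follows; the parameter values in the example are illustrative, not the study's calibrated items:

```python
import math

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response: ability theta,
    discrimination a, difficulty b, guessing parameter c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A four-option multiple-choice item (c = 0.25 from blind guessing),
# average difficulty (b = 0), moderate discrimination (a = 1.2),
# evaluated for an examinee whose ability equals the item's difficulty:
print(p_3pl(0.0, 1.2, 0.0, 0.25))  # → 0.625
```

At theta = b the logistic term is 1/2, so the probability is midway between the guessing floor c and 1, which is why the example prints 0.625.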
Methodology
In the first phase, a review of academic articles was conducted in four areas: Video Games in Education, to identify how games are already commonly used in schools; IRT, to ensure confidence in the test data; Test Anxiety, to find a connection between it and students' test performance; and the Effectiveness of Video Games in treating anxiety.
In the second phase, games developed in 2019 by the NPC (Núcleo de Produção Criativa) of Centro Universitário Farias Brito, under the leadership of Professor Bruno Saraiva and the supervision of Ricardo Wagner, Coordinator of the Computer Science programs, were modified and, with proper authorization, used in in-person trials with students of the Game Design class. These students took a multiple-choice written test, played the game, and filled out an online form in which they shared their experiences with the traditional paper-and-pencil testing system and with the games, in some cases comparing the two directly. A remote trial was also conducted with a separate group of subjects, in which the earlier paper-and-pencil test was adapted into an online form; due to its nature, however, the data collected this way are less reliable than those from the in-person trials.
The collected data were then analyzed by considering the results in the game and in the paper-and-pencil test and comparing them with the experiences shared through the forms, keeping the data and analyses of the in-person and remote trials separate because of the differences in how they were administered.
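As an illustration of how item scores of this kind can be turned into an ability estimate under the 3PL model, the sketch below performs a simple grid-search maximum-likelihood estimate of theta. The item parameters are made-up values for demonstration, not those of the tests described here:

```python
import math

def p_3pl(theta, a, b, c):
    # 3PL probability of a correct response
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def estimate_theta(responses, items):
    """Grid-search maximum-likelihood estimate of ability theta in [-4, 4].
    responses: list of 0/1 item scores; items: list of (a, b, c) tuples."""
    grid = [i / 100.0 for i in range(-400, 401)]
    def log_lik(theta):
        ll = 0.0
        for score, (a, b, c) in zip(responses, items):
            p = p_3pl(theta, a, b, c)
            ll += math.log(p) if score else math.log(1.0 - p)
        return ll
    return max(grid, key=log_lik)

# Illustrative item parameters (discrimination a, difficulty b, guessing c)
items = [(1.0, -1.0, 0.20), (1.2, 0.0, 0.25), (0.9, 1.0, 0.20)]
print(estimate_theta([1, 1, 0], items))
```

Production IRT software uses numerical optimizers rather than a grid, but the grid keeps the likelihood maximization explicit and easy to inspect.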
Results and Analysis
In the in-person trials, the proportion of participants who reported frequent cases of Test Anxiety (TA) was higher than that of participants who reported little or none. Overall, a marked improvement was observed when comparing the written assessment with the game-based assessment, with Analytic Geometry reaching a 60% improvement. The volunteers showed good acceptance of the proposal and believe it would have had a positive influence on their results had it been applied when they were children. However, it was noted that, for the volunteers, the games still felt like taking an assessment, raising the possibility of reduced effectiveness in practice.
The remote trials showed the opposite: the proportion of participants who reported frequent cases of TA was lower than that of participants who reported little or none. Again, an improvement in results was observed, except in Analytic Geometry, which showed a decline; this decline may stem not from competence but from the technical state of the platform used for the tests. The remote volunteers also showed greater rejection of the platform used, as well as of the idea that it would have positively influenced their results had it been applied during their childhood. Even so, like the in-person volunteers, and despite not agreeing on every point, they held a favorable opinion of using games as tests, with the majority preferring to play rather than take a written test if allowed to choose.
Conclusion and Future Work
A significant performance improvement was observed for most volunteers, but the problems found in the platform, the unsuitability of the remote medium for the tests, and the small sample size of both groups make any theories and questions raised in this study inconclusive. They nonetheless open room for debate as precedents for future studies: considering the improvement in results from the written test to the game, the proposed method may well prove effective. More extensive studies are needed, with a far larger sample, preferably with children, which would require partnering with an educational institution willing to alternate this method with traditional tests so that the results can be compared over one or more years. Also needed is the development of a new platform, this one designed entirely for these tests rather than adapted from a salvaged solution from another, similar project.
The use of 3PL IRT proved essential to guaranteeing the validity of the data. Combining it with extra data collected from the volunteers, namely item completion and response times, allowed a better understanding of the results and of the participants' behavior, and the model is by default compatible with the grading systems already used in schools today. For future studies and tests, a direct integration with the API of data-analysis platforms, such as ChatGPT, is also seen as necessary to ease and speed up the output of the processed results, although this directly increases maintenance costs after implementation.
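The integration mentioned above could be sketched as a request payload sent to an external chat-style analysis API. Everything below is a hypothetical shape for illustration: the model identifier, message schema, and field names are assumptions, not the platform or format actually used in this study, and the sketch only builds the payload rather than performing a network call:

```python
import json

def build_analysis_request(student_id, theta, item_times):
    """Assemble a hypothetical analysis request: the processed IRT ability
    estimate and per-item response times are summarized into a prompt that
    an external data-analysis API could turn into a teacher-readable report.
    The model name and message fields are illustrative placeholders."""
    summary = (
        f"Student {student_id}: estimated ability (theta) = {theta:.2f}; "
        f"mean item response time = {sum(item_times) / len(item_times):.1f}s."
    )
    return json.dumps({
        "model": "example-analysis-model",  # hypothetical model identifier
        "messages": [
            {"role": "system",
             "content": "Summarize assessment results for a teacher."},
            {"role": "user", "content": summary},
        ],
    })

print(build_analysis_request("A01", 0.73, [12.0, 18.5, 9.5]))
```

Keeping the payload construction separate from the API client also makes the maintenance-cost trade-off explicit: swapping providers only changes the transport code, not the summarization of the IRT results.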
Description
Keywords
Anxiety in the Testing Environment; Anxiety in the School Environment; Digital Games; Games in Education; Games in the Treatment of Anxiety; Item Response Theory
