Large language models have rapidly become a widespread and much-discussed technology. These models are trained on comprehensive text corpora, equipping them to perform tasks such as text recognition, translation, prediction, and generation. However, their application in education is controversial, as they are known to produce content that can be misleading or erroneous.1 This poses a challenge to young students’ ability to discern between reliable and unreliable information generated by artificial intelligence (AI).2 Particularly in physics, advanced large language models demonstrate remarkable abilities, for example, supporting teachers3 or students during inquiry-based learning.4 One example of a generative AI model is DALL-E 2, which can create novel images from text prompts. Many of these realistic-looking images circulate widely on the web, making it difficult to shield young people from exposure to AI-generated content. It is therefore essential to foster critical thinking in the classroom in this era of misinformation. Instead of viewing this as a disadvantage, one can use it as an opportunity to design exercises that strengthen students’ critical thinking and discussion skills. In this paper, we propose using DALL-E 2 as a tool to improve students’ critical thinking by engaging them in peer discussions about the accuracy of the generated images. The exercise thus has two aims: (i) to train students to discriminate between physically correct and incorrect depictions of natural phenomena, thereby building physical concept knowledge (here, the refraction of light); and (ii) to enhance students’ critical thinking skills in the context of generative AI.

In general, optical phenomena cover a range of topics well suited to enhancing critical thinking via AI-based image generation: they are observable in nature, so it is easy for students to engage with the images. In this example, we demonstrate how DALL-E can be used to generate photorealistic images of refraction phenomena. Refraction occurs when light crosses from a medium with refractive index n1 into a medium with another refractive index n2, because of the difference in the phase velocities v1 and v2. Snell’s law relates the ratio of the sines of the angle of incidence θ1, measured between the incident beam and the normal, and the angle of refraction θ2 to the ratio of the refractive indices:
sin θ1 / sin θ2 = n2/n1 = v1/v2.   (1)

If n2 > n1, the light is refracted toward the normal, and if n1 > n2, the light is refracted away from the normal. In nature and our daily life, refraction phenomena play a central role, for instance, in rainbows or optical instruments, such as glasses, or when observing objects in water.
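As a quick numerical illustration of Eq. (1), the refraction angle for a beam entering water from air can be computed; this is a sketch of ours, and the refractive indices used are standard textbook values rather than numbers from this paper:

```python
import math

def refraction_angle(theta1_deg: float, n1: float, n2: float) -> float:
    """Return the refraction angle theta2 in degrees from Snell's law:
    n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        # No refracted beam exists (total internal reflection).
        raise ValueError("total internal reflection: no refracted beam")
    return math.degrees(math.asin(s))

# Light entering water (n ≈ 1.33) from air (n ≈ 1.00) at 45°:
theta2 = refraction_angle(45.0, 1.00, 1.33)
# theta2 ≈ 32°: the beam bends toward the normal, i.e., theta2 < theta1.
```

For n2 > n1 the function always returns θ2 < θ1 (refraction toward the normal), while for n1 > n2 it returns θ2 > θ1 up to the critical angle, matching the rule stated above.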

In this example, we leveraged the application programming interface (API) of OpenAI to integrate DALL-E 2 into a web page.5 The API acts as a bridge that enables the use of DALL-E 2 in our own learning environment. The web page presents the task and lets students create novel, unique images and record correct and incorrect aspects of those images. This exercise can be implemented in a peer-discussion format, so that students jointly explore, identify, and discuss physically correct and incorrect features of each new AI-generated image. The web page consists of an input field where students enter a brief prompt describing the image DALL-E should generate, such as “refraction of light in nature.” After an image is generated, it is displayed in an output window, where multiple images can be generated and arranged. Students can select two images that caught their interest, which then appear alongside two input fields: one for noting incorrect features of the image and one for describing what is accurate. After writing down their reflections, students can save their responses to a PDF file and send it to the teacher, who can then select a few images together with the identified correct and incorrect features and discuss them with the whole class.
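The image-generation step behind such a web page can be sketched as follows. This is a minimal illustration against the public OpenAI image-generation endpoint; the helper names, default parameters, and overall structure are our assumptions, not the authors’ actual implementation:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/images/generations"

def build_request(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    """Assemble the JSON body for a DALL-E 2 image-generation request."""
    return {"model": "dall-e-2", "prompt": prompt, "n": n, "size": size}

def generate_image(prompt: str, api_key: str) -> str:
    """Send the request and return the URL of the first generated image."""
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"][0]["url"]
```

A web front end would pass the student’s prompt (e.g., “refraction of light in nature”) to `generate_image` and display the returned image URL in the output window.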

Figure 1 presents three examples of images generated by DALL-E, all related to the phenomenon of refraction. The choice of subject matter is at the discretion of the teacher, who can guide students toward creating images on specific topics; in this instance, we selected refraction. The prompts provided to DALL-E are indicated in the figure legend. We observed that many of the images created by DALL-E were not physically accurate. For instance, Fig. 1(a) depicts two pencils passing through the walls of a glass of water, accompanied by a description from DALL-E stating, “Two pencils partially submerged in water appear bent at the water-air interface due to a change in the direction of light passing through the interface.” This description, however, is not shown to the students and is solely preserved in our records. In this case, it is obvious that a pencil cannot pass through the wall of a glass of water, but beyond that, it is noticeable that the green pencil enters the water at a nonzero angle of incidence θ1, so the angle of refraction should satisfy θ2 < θ1. However, the depicted angle of refraction appears to be equal to the angle of incidence or even slightly larger.
Fig. 1.

Examples of images generated by the generative AI model DALL-E 2 illustrating the refraction phenomena. Prompts: (a) “Refraction.” (b) “Refraction in nature.” (c) “Refraction in nature.” Editor’s Note: TPT policy is not to publish images of unknown provenance, which is a concern with AI-generated images. However, in this case, we feel that the educational merit outweighs the potential copyright concerns.

In some cases, the images returned by DALL-E appeared very realistic but contained subtle errors that may be difficult for nonexperts to spot. For example, Fig. 1(b) shows a rainbow generated by the mist of a waterfall, with its corresponding secondary rainbow above it. This image appears very realistic. However, an expert would easily recognize that the color sequence of a secondary rainbow should be reversed, with red on the bottom and violet on top, which is not the case in this image. The description of the generated image reads, “A waterfall with sunlight passing through the water, causing a beautiful spectrum of colors to appear in the mist.” Other generated images look very realistic and do not depict any incorrect optical phenomena, such as Fig. 1(c), whose description reads, “A rainbow appears in the sky after a rainstorm.”

In conclusion, with the guidance of educators, technology such as DALL-E 2 can be used to reinforce critical thinking. The examples presented here relate specifically to the refraction of light, but this approach could also be applied to other subjects and areas of physics. The task of spotting DALL-E’s mistakes may support both conceptual understanding and critical thinking in the context of generative AI. We therefore suggest explicitly targeting critical thinking about generative AI together with the spotting of correct and incorrect aspects. Moreover, we observed that this task is well suited to peer or group work, as it prompts students to verbalize and discuss both the optical concepts and the errors generative AI makes. Additionally, because all generated images are unique, a teacher can discuss the positive and negative aspects identified by different groups in a classroom setting.

1. S. Küchemann et al., “Are large multimodal foundation models all we need? On opportunities and challenges of these models in education,” EdArXiv, DOI: (2024).

2. S. Küchemann, S. Steinert, J. Kuhn, K. Avila, and S. Ruzika, “Large language models—Valuable tools that require a sensitive integration into teaching and learning physics,” Phys. Teach. 62, 400–402 (2024).

3. K. E. Avila, S. Steinert, S. Ruzika, J. Kuhn, and S. Küchemann, “Using ChatGPT for teaching physics,” Phys. Teach. 62, 536–537 (2024).

4. S. Steinert, K. E. Avila, J. Kuhn, and S. Küchemann, “Using GPT-4 as a guide during inquiry-based learning,” Phys. Teach. 62, 618–619 (2024).

AI Physics Tools (AI@TPT) features similarly structured short papers (generally less than 1000 words) describing tried and tested classroom examples using AI applications. Submissions should be sent to Jochen Kuhn and Stefan Küchemann ([email protected]).

Published open access through an agreement with Technische Informationsbibliothek