
Is Artificial Intelligence opening Pandora’s box?

In English, the metaphor of Pandora’s box is often used to describe a new technology or a new way of doing things whose consequences we cannot fully foresee. Many people might describe Artificial Intelligence (AI) in just this way. Some react with concern or fear, seeing only the negatives of AI. Others get so excited about what AI makes possible that they overlook the downsides that can come along with this tool. So how should we in vernacular media approach the matter of AI?

AI as a Tool

At IMS, we believe AI should be seen as a tool that can help improve our work, but that it should not take over our work, or take over us as people. As with any new tool, it is good to take time to think about how it could be used well, to learn from others who have already begun to experiment, and to research outcomes of AI that we perhaps never dreamed of. Above all, I believe it is good to ask God for wisdom, as he gives generously to those who ask (James 1:5).

Include Others

As people who are excited about technology, we can get so wrapped up in the potential of what might be possible that we overlook the downsides of a new technology. Therefore, as we start using AI, let us make sure that we include our colleagues and the communities we work with in the process. They will see advantages and challenges in AI that we would never see.

What is Possible So Far?

AI is able to help with all sorts of tasks—audio improvement, video creation, image creation, and more. Here are just a few examples of what has been done so far:

Image Creation

A colleague needed an image of the valley of dry bones described in Ezekiel 37. This is what the AI program created:

The potential for the creation of images using AI is huge. This will make life a lot easier if we need images for our work. However, the questions that come to mind are: What happens to the local artists we have been using until now for our projects? Will the ease of making pictures online reduce community ownership in our projects? How can we have a good balance of creating materials we need while making sure we continue to encourage community ownership?

Video Creation

A couple of months ago I experimented with a site called HeyGen, which allows you to produce studio-quality videos with AI-generated avatars and voices. Here is a sample video I produced using the free version. So how did I create this?

  1. I recorded myself talking for 30 seconds
  2. HeyGen then created an “avatar” of me from that material
  3. Then I typed in the script of my video
  4. HeyGen created the video you see using my avatar and my voice

Most people who know me well would probably be able to recognize that the video is not quite me. Those who don’t know me might think I really did record this video!

Again, the potential of this video creation is huge, especially for creating teaching videos. One can create an avatar and then create lots of videos just using scripts, instead of having to film oneself and edit, etc. The cost of using HeyGen to create a lot of material is around $50 a month. The big question that comes to mind is: In a world where it is becoming more and more difficult to distinguish between real and fake images or videos or information, how can we clearly label AI-created products?

Audio Improvement

There is AI software that can help clean up your audio; for example, the Adobe podcast enhancer, which works quite well. However, you need to be ready to pay around $100 a year to use it regularly. Other audio enhancers charge similarly. But if an audio enhancer does a good job of the initial clean-up, would it help speed up our processes? Here it is important to weigh the cost of the software against the quality of the output.
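To put the price points mentioned above on the same footing, it helps to compare annual costs. A quick sketch (using the approximate figures cited in this article; actual pricing will vary and should be checked against each vendor):

```python
# Compare the annual cost of the two subscription prices cited above.
# Figures are the approximate ones mentioned in the text, not current pricing.

heygen_monthly_usd = 50    # HeyGen, per month
enhancer_annual_usd = 100  # Adobe podcast enhancer, per year

heygen_annual_usd = heygen_monthly_usd * 12

print(f"HeyGen: ~${heygen_annual_usd} per year")        # ~$600 per year
print(f"Audio enhancer: ~${enhancer_annual_usd} per year")
```

Seen annually, the video tool costs roughly six times the audio tool, which is the kind of comparison worth making before committing a project budget.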

Image generated with AI on March 22, 2024

Things to be Aware Of

  1. Most, if not all, cloud-based AI software will use whatever data you input into it for machine learning; i.e., they will use the data that you upload to help train the software further. In the past, software used to be on your laptop, and whatever you created stayed on the laptop until you sent it somewhere else over the internet or distributed it somehow. With AI, the creation often takes place online and is simultaneously used for improving the software; e.g., Adobe says in their General Terms of Use: “Our automated systems may analyze your Content and Creative Cloud Customer Fonts (defined in section 3.10 [Creative Cloud Customer Fonts] below) using techniques such as machine learning in order to improve our Services and Software and the user experience.”[2] Questions to ask ourselves: Did I receive consent from the people I recorded or filmed to upload their voices online? Did I make them aware that their voices might be used in such a way? Am I ready for the information I input into AI to be used for machine learning?
  2. Information that is already online is being used to train AI software. We might think that what we are doing is of little interest to anyone else; in fact, our audio recordings are already being used to train AI software. For example, Meta is using Bible recordings to train its speech recognition software in many languages: “Collecting audio data for thousands of languages was our first challenge because the largest existing speech datasets cover 100 languages at most. To overcome this, we turned to religious texts, such as the Bible, that have been translated in many different languages and whose translations have been widely studied for text-based language translation research. These translations have publicly available audio recordings of people reading these texts in different languages. As part of the MMS project, we created a dataset of readings of the New Testament in more than 1,100 languages, which provided on average 32 hours of data per language.”[3]

Ten years ago I would never have dreamed that the audio recordings I have helped produce would end up being used for such a purpose, and my consent forms didn’t necessarily reflect that possibility. So how do we as vernacular media workers make sure the people we work with are aware of the possible uses of their voices?

The aim of this article is not to frighten anyone, but to raise awareness and to encourage you to think further about AI and what might or might not be possible in your context. My challenge to you is to think about how you can use AI well. If I am going to use it, how can I use it in a way that blesses and encourages the people around me and respects them for who they are? AI can be a gift, but used badly it could also cause much damage. Let us go to God and ask for His wisdom and discernment. I would encourage you to take some time this week to think about how you might use AI in your context. If you have any questions or would like to talk further about what I have written, feel free to contact me. I’d love to hear from you.

Further Reading:

Why Are Many Churches Still Fearful or Indifferent to Artificial Intelligence (AI)? (Part 1)

Fear and Caution

Why Are Many Churches Still Fearful or Indifferent to Artificial Intelligence (AI)? (Part 2)

Augment and Enhance

AI Hallucinations, Chatbots, and the Truth of Holy Scripture, Jeremy Hodes, EMQ  59:3, 2023


[2] Accessed 5 January 2024

[3] Accessed 5 January 2024

Jo Clifford, a vernacular media consultant, is the International Coordinator of International Media Services (IMS). She worked and lived in Tanzania for over 10 years. Currently, she is based in Germany, but continues to coordinate the vernacular media work for SIL Tanzania. She can be reached at

This article first appeared in the International Media Services newsletter IMN Issue 143, March 2024, © SIL International, all rights reserved, and is used with permission. For future inquiries, please write to

