Bea Gilbert and Lucy Rowland
June 2024

Guest editing Environmental Scientist with ChatGPT: An interview

An image generated by the AI programme DreamStudio, showing a tall robot-like figure walking through a dreamy landscape with fireflies in the foreground

In this interview, environmental SCIENTIST editors Bea Gilbert and Lucy Rowland discuss how they experimented with the AI tools ChatGPT and DreamStudio for the latest issue of the journal.


You decided to use ChatGPT to guest edit and write the editorial for the June issue of environmental SCIENTIST, ‘Where Green Meets Machine’. What were your motivations for this? 

Bea Gilbert: Our main motivation for trying out ChatGPT was experimentation, though it also felt appropriate for a couple of other reasons. Firstly, this journal edition focuses on the emergence of digital technologies, within which ChatGPT has a high profile. Secondly, environmental scientists want to stay on top of the ways that technology can support their work. We felt it would be especially interesting to see the kind of quality and depth of articles that ChatGPT was able to suggest or write. It was also interesting to see how it could synthesise what is published online on these topics, and to consider whether it made our research process more (or less) efficient. 

Lucy Rowland: I agree. The fact that the AI (artificial intelligence) is trained on a vast swathe of the internet and can present summaries of topics in seconds could be a game-changer. It takes time to research and narrow down topics that we want to cover in each journal – to make sure that we've got a good range of disciplines, and that all the articles are relevant and timely. In theory, ChatGPT is a really great way to experience other kinds of research methods, and we wanted to see if it could pick up things that we wouldn't necessarily have access to, or topics that we would miss because they might be in obscure corners of the internet. So this was an interesting experiment.

What was the process for using ChatGPT to inform the issue, and what kind of prompts did you use?

Lucy Rowland: We each used slightly differently worded prompts, but the main thing we asked ChatGPT to do was provide a list of 10 topics or ideas about new digital technologies with environmental applications, with an emphasis on AI. We often found that it would come up with things that didn't necessarily fit that brief: for example, when we asked for digital technologies, it would sometimes suggest things that were mechanical, robotic or biological. 

So, we had to ask follow-up questions after receiving those initial lists of topics and clarify things as we went. When we had enough ideas, we took the process out of ChatGPT and used our usual research methods, following up on those topics to see which ideas were strong and worth taking forward. We quickly realised that some topics sounded really exciting, but perhaps weren’t viable as articles for an environmental science journal because there wasn't enough evidence of ‘real world’ application of those technologies yet. Many of the technologies ChatGPT raised were still in the ‘potential’ space and had not yet been widely used or studied. We did have to supplement the lists with our own ideas: things that we found through our own usual methods of research.
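[A note for readers who might want to reproduce this kind of iterative topic prompting: the editors worked in the ChatGPT web interface, but the same back-and-forth can be scripted against the OpenAI API. The sketch below is illustrative only – the model name and prompt wording are assumptions, not the exact prompts used.]

```python
# Illustrative sketch only: the editors used the ChatGPT web interface, not the API.
# The model name and prompt wording are assumptions chosen for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user", "content": (
        "Provide a list of 10 topics or ideas about new digital technologies "
        "with environmental applications, with an emphasis on AI."
    )},
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Follow-up clarification, mirroring the process described above: keep the
# conversation history so the model refines its list rather than restarting.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": (
    "Several of these are mechanical or biological rather than digital. "
    "Please replace them with ideas that have a clear digital or AI focus."
)})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```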

Bea Gilbert: If there was a topic that ChatGPT suggested that seemed promising, we would then ask whether it could give us a real-life case study or example. Sometimes it would provide us with the name of an organisation; however, most of these were in the United States. They were also usually quite big corporate organisations, which might have run a project or scheme with an environmental arm, rather than developing or adapting technologies for environmental purposes specifically.

What problems did you face? 

Bea Gilbert: We never really knew if the topics ChatGPT produced just came from its own selection bias, as it sweeps the internet for information and bundles it together. You can’t be sure whether the information it’s giving you is there because it simply appears the most often, because larger organisations have the money to promote their products, or because a suggestion is actually the most salient and representative of a promising technology or idea. We also worried that it might not be able to find emerging or new concepts, because these might not be widely talked about (and therefore perhaps demoted within the dataset), or they could be in development somewhere without much of a public profile. We also found ChatGPT could be quite obstinate with the topics it suggested: it could give you 10 topics, none of which were suitable, yet after further prompting to give more of a digital focus, it might just double down on topics (like bio-transplants in coral, which is totally irrelevant). It seemed keen on repeating the same topics.

Lucy Rowland: I imagine there's a kind of economic bias in that, because it kept coming up with micro-robotics ideas for agriculture. I imagine that is an area that's getting a lot of investment. This topic obviously has a significant emerging industry behind it, so there's going to be that kind of bias latent within the system. As you mentioned, it scrapes information from the internet, but there are definitely more exciting developments in research and ideas happening offline as well, which it obviously can’t capture. It wouldn’t capture, for example, the latest conference and all of the things being discussed there, and all the nuances within those discussions.

I remember it specifically pushing concepts like robot trees and robot bees, and I think that's a reflection of what humans find exciting: robot versions of things that already exist, and robot solutions to our current problems, like deforestation or poor air quality within cities. Robotic trees are being developed to address problems like air quality, but on digging further into these topics, we found only a couple of prototypes; the ideas were still very theoretical, even though they had made a big splash in the news headlines. I think that's why there was a push towards those kinds of topics: they got traction on the internet, and ChatGPT took that media interest as fact, and as proof that something was a valid concept.

Bea Gilbert: Another issue is that it can almost reverse engineer the research process. When you’re conducting research yourself, you either go out into the field, or you might be at your desk reading books and journals, pulling out different ideas and information, and you have to use your brain to filter and decide what's reputable. ChatGPT just chooses the topics for you: it acts as a search engine, but at the same time it flips the research process on its head and takes away the act of choice. This is the opposite of how we would normally see things play out. I also wonder if the algorithm could be intentionally exploited by those who have a vested interest, financial or otherwise, in topics like micro-robotics.

Lucy Rowland: I think that's a possibility. The technology is advancing so quickly, it's difficult to know how many people are actually keeping up with it to that extent, but you make a very good point that if you lose the ability to make decisions yourself, you can also lose the whole process of critical thinking if you start and end with ChatGPT or similar software. As you say, it reverse engineers the research process and presents you with a neatly packaged set of information that you then take forward, but it has eliminated that crucial process of discarding ideas and picking out the ones that are worth pursuing. So you lose agency in that regard, whether you're a researcher, an editor, or anyone working in the environmental professions.

It reminds me a bit of how educators responded when Wikipedia first became massive. The emphasis was always on using Wikipedia as a starting point, but also on going back and finding the source information, and then gathering your own ideas from those sources. It's a good source for a collection of ideas, but to make sure you fully understand a topic and have formed an opinion outside of what is being presented to you, the individual research process is essential.

I remember we were struggling to find someone for a very specific topic initially, and I asked ChatGPT to give me a list of five UK-based researchers in this area. It gave me the names, and once I started researching, I realised these people didn't actually exist; they were just characters based on an amalgamation of real people. I went back to ChatGPT and asked whether these people were real; it gave me that sort of stock answer it has, which is ‘I've made a mistake’. That wasted a lot of time!

How much editing did you need to do on the editorial that was written by ChatGPT? What about the topics list that it presented? 

Bea Gilbert: We only cut a few sentences from the editorial, as opposed to amending the tone or content. For the topics list, an important aspect was that we kept slightly changing the prompt, or demanding more from ChatGPT, whether that was asking for more detail or for article ideas in certain areas, such as the marine space.

Lucy Rowland: The editorial didn’t need much work; we mostly focused on making it readable and succinct. The topics list needed a lot more refining, however: as we’ve discussed, the problems inherent in using ChatGPT became apparent as we worked through the list of topics, so in the end, the list of articles was more 'informed' by ChatGPT than 'defined' by it.

You also used AI to create the cover artwork. How was that process?

Bea Gilbert: I experimented with a number of programs, such as Midjourney, DALL-E and OpenArt, until I worked out which most successfully produced art in the style that we envisaged, and which let us dictate the image proportions and quality. DreamStudio has the option to adjust a lot of parameters, including an ‘Image Strength’ function, which controls the extent (as a percentage) to which a generated image is based on a previously made image. This was important for inching towards a great option; many AI art programs will create something wildly different with every iteration of the same prompt. This is really fun to play with, but from an editorial perspective, a greater degree of control is important, particularly where the program is used to ‘tweak’ what it has already created, which mirrors the briefing and draft-amending process that one would expect with a human artist.
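[DreamStudio is built on Stability AI's Stable Diffusion models, which are also exposed through a public API, so the same ‘Image Strength’ control can be driven programmatically. The sketch below assumes Stability AI's v1 image-to-image REST endpoint; the engine ID, prompt and parameter values are illustrative, not those used for the cover.]

```python
# Minimal sketch of image-to-image generation with an explicit image strength,
# assuming Stability AI's v1 REST API (the service behind DreamStudio).
# Engine ID, prompt and parameter values are illustrative assumptions.
import base64
import os
import requests

ENGINE = "stable-diffusion-xl-1024-v1-0"  # assumed engine ID
URL = f"https://api.stability.ai/v1/generation/{ENGINE}/image-to-image"

response = requests.post(
    URL,
    headers={
        "Authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
        "Accept": "application/json",
    },
    files={"init_image": open("previous_draft.png", "rb")},
    data={
        "init_image_mode": "IMAGE_STRENGTH",
        # The DreamStudio UI shows this as a percentage; the API takes 0-1.
        # Higher values keep more of the previous image, giving the
        # incremental 'tweaking' control described above.
        "image_strength": 0.65,
        "text_prompts[0][text]": "a dreamy pond at dusk, fireflies, gentle and eerie",
        "cfg_scale": 7,
        "samples": 1,
        "steps": 30,
    },
)
response.raise_for_status()

# Each artifact in the response is base64-encoded image data.
for i, artifact in enumerate(response.json()["artifacts"]):
    with open(f"cover_option_{i}.png", "wb") as f:
        f.write(base64.b64decode(artifact["base64"]))
```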

For the cover image itself, we initially came up with a few ideas ourselves: the hands of God and Adam from Michelangelo’s 'The Creation of Adam' reimagined as humanoid robot hands, a ‘glitching’ natural scene, hikers approaching a mountain made of data, and more. Then we experimented with asking ChatGPT to write a prompt, which didn’t go very well. When we asked it to make the art prompt more specific, it instead made the prompt very complex. This was also interesting, because it gave an insight into how natural language processing models may, or may not, be able to differentiate between words that don’t have drastically different meanings on paper but can produce very different results depending on the context in which they are used. Humans understand this nuance more readily.

Eventually, I just thought ‘let’s try having a pond’; most importantly, I asked for this alongside a tone: ‘dreamy’. The result was instantly beautiful and captured what I think is an enthralling cover – gentle and inviting whilst also being quite eerie. I've found that AI artists are, in general, better at following emotive, atmospheric instructions than discrete, object-based ones. They are very good at visualising ‘dreamy’ or ‘harsh’ or ‘lush’, but will omit requested details such as ‘a fox is emerging from a bush’ if a lot of detail has already been demanded, or if the image is busy ‘concentrating’ on another figure or concept. 

What are the ethical implications of using ChatGPT/other AI programmes in these ways? 

Bea Gilbert: I think there could be ethical problems with using the technologies in this way if you're not filtering their output. We did let the programmes take the reins in some ways, but under our supervision.

Lucy Rowland: I agree, we wouldn't have got far if we'd let the programmes govern the entire process, and if we hadn’t been critical or hadn’t used our own perspectives to shape the topics. If we had, we would have ended up with a very vague and “mushy” issue of the journal. 

It also seems like there are ethical questions relating to most of the digital technologies covered in various ways within this issue. There are complex ethical dimensions to everything, and we can't really talk about the 'pros' without talking about the 'cons'.  

The ethical question I felt slightly more hesitant about, on a personal level, was the idea that we were outsourcing the cover art to an AI programme. Creating the cover art is a creative endeavour that we would usually pay a designer or illustrator to produce for us, so that felt like a more tangible ethical quandary, because we were outsourcing a piece of work to a computer for free rather than paying someone to do it. There is also the well-publicised question about whether it is actually basing its design on another human artist, or an amalgamation of artists, which raises copyright issues that are being widely discussed elsewhere. I don't really know if we arrived at a satisfactory way of approaching this, but as part of the experiment, we wanted to follow that line of thinking and see where it took us. For this one issue, it allowed us to explore that aspect of the questions around human creativity, which was interesting.

Would you use it again in the future? What lessons might there be for others? 

Bea Gilbert: In terms of lessons for others, we know that we must vet the ideas ChatGPT comes up with in the same way that we vet ideas from our authors. As normal practice, when we notice that a source referenced in an article is really old, we check it to see whether it’s still relevant. So, we just applied the same principle to ChatGPT here, and I think this is a good place to start.

Lucy Rowland: Yes, your own judgment should always be the main driver of how you use ChatGPT. That can also mean recognising where your own knowledge might not be as strong, and choosing to use ChatGPT as a starting point to inform where you decide to go with your ideas. 

Bea Gilbert: I don’t think we’d use ChatGPT for another journal necessarily, and definitely not for anything creative. On a personal level, using it for anything social or artistic – anything that I was emotionally invested in – would be completely off the table. However, in terms of finding a shortcut to getting an initial burst of ideas to riff on, it could be helpful in future. It’s a bit like brainstorming with other people: other people’s brains are useful because they see things in a way that you might not. But ChatGPT does inherently lack that originality of thought; it’s a weaker version of a human brain in terms of imaginative capacity, so it’s impossible for any editorial process to begin and end with ChatGPT. 

Lucy Rowland: I don't see myself using ChatGPT as a big part of my current work, mainly because of it suggesting fake people and sending me down that rabbit hole: I resented that! I also enjoy the aspect of this work that entails finding threads of stories and research to follow from different platforms and information streams. Pursuing these is a really positive part of the work we do, so I think I’m less likely to use it in that way.

It does behave as another entity to bounce things off though, and it can be good at reflecting your own ideas back at you in new ways. I would use it to summarise things I've come up with myself, as I often struggle to cut down or put a complex topic in a succinct, reader-friendly way. The results of this kind of prompt will still need a bit of editing, because as we know, ChatGPT writes in a very vague and non-specific way. 

Bea Gilbert: Yeah, it’s a “pinch of salt” type approach: you want to lightly touch on what these programmes might have to offer, but use perhaps 5% – maximum – of what they might produce. 

Lucy Rowland: Agreed. Retaining that critical part of your brain that questions everything, judges everything, and has a healthy scepticism is really important when it comes to using ChatGPT and other AI software. 

Bea Gilbert: Exactly, and remaining sceptical not just about the information it's giving, but also about its motivations. It will be shaped by the biases in its training data and by the people who have programmed it: not in a conspiratorial way, but these influences are inevitable.

Lucy Rowland: Yeah, it's never going to be a neutral, non-human entity, because humans have shaped it, the internet has shaped it, and just like humans, the internet is biased and flawed. It's not a neutral thing in itself, and I think that's important to remember when you’re using AI software in most contexts.


To read the latest issue of environmental SCIENTIST, ‘Where Green Meets Machine’, and our journal archive, visit the journal homepage.