Climate Tech, AI, ChatGPT, and Venture Capital – Part 2

Could ChatGPT pitch a new startup?

In the first post, we started to explore whether we could use a tool like ChatGPT to come up with new climate-focused research ideas. It came up with some interesting ideas, but made some critical errors. Could it go a step further?

Over the years, I’ve gotten to see many thousands of pitches for breakthrough Climate Tech ideas. Is there a chance one of them was generated by AI?

To start, I asked ChatGPT about a real phenomenon, hydrodynamic cavitation (https://en.wikipedia.org/wiki/Cavitation#Hydrodynamic_cavitation)

So far so good – now let’s take it a step further and make up a quantum version:

Great news! Google Scholar agrees that quantum hydrodynamic cavitation doesn’t exist (or, at least, that it hasn’t been discovered yet).

One of the most interesting ways to play with ChatGPT is to “jailbreak” it to get it to step outside of some of its constraints.

One way to do that is to ask it to “pretend”.

At this point, I hadn’t prompted the system about anything related to climate tech or clean energy, so I was intrigued that it had offered that idea and probed further:

ChatGPT Applies for a Grant

At the Breakthrough Energy Fellows program, we invite talented researchers from around the world to apply for our program. 

I walked ChatGPT through the first stage of our application process to see how credible its application could be — but remember, this is for a totally made up, fictional technology that DOES NOT EXIST!

Given that we know the technology doesn’t exist, let’s explore those citations.

This I did not expect! Others have shared examples of hallucinated citations, but in this case ChatGPT claims to be aware that these are made-up. 

Fortunately for this exercise, ChatGPT is eager to please, as I guided it through a few more application questions:

Would ChatGPT get the grant? 

At first pass, these responses all seem plausible, even human. The answers are light on detail, and while that may not get a reviewer excited, vagueness alone doesn’t set off alarm bells about a totally made-up phenomenon. Of course, the fake citations would be the most reliable red flag.
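One way a reviewer (or an application portal) could automate that red-flag check is to verify that each cited DOI actually resolves in a bibliographic index. Below is a minimal sketch, assuming the citations carry DOIs; the Crossref endpoint is one real public index, but the helper names and the injectable `resolver` parameter (included so the logic can be exercised offline) are illustrative choices, not part of any real grant-review system.

```python
import urllib.request
import urllib.error


def doi_exists(doi, resolver=None):
    """Return True if the DOI is found, False otherwise.

    `resolver` lets callers inject their own lookup (e.g. a cached
    index, or a stub for offline testing). By default, query the
    public Crossref REST API, which returns HTTP 404 for unknown DOIs.
    """
    if resolver is None:
        def resolver(d):
            url = "https://api.crossref.org/works/" + urllib.parse.quote(d)
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.status == 200
            except urllib.error.HTTPError:
                return False
    return resolver(doi)


def flag_suspect_citations(dois, resolver=None):
    """Return the DOIs that fail to resolve: candidates for
    hallucinated citations that merit a human look."""
    return [d for d in dois if not doi_exists(d, resolver)]
```

A non-resolving DOI doesn’t prove fabrication (indexes have gaps), but a reference list where several citations fail this check would be exactly the kind of signal a reviewer should chase down by hand.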

It seems unlikely that an application for a made-up topic could make it through a rigorous review process, and once the application starts to ask for experimental results and evidence that the phenomenon is real, the hallucination would become clear.

Winners and Losers

Presumably a researcher with a well-established reputation would be less likely to jeopardize it by pursuing grant funding for a completely made-up phenomenon. Does that increase the value of reputation? Does it make things more difficult for emerging researchers without a track record?

What seems almost certain is that real scientists working on real breakthroughs (not fake concepts) will use a tool like ChatGPT to apply for grants. Researchers often spend weeks writing grant applications. Often their collaborators help write sections and edit. And some researchers, companies, and municipalities hire grant writers (https://www.science.org/content/article/day-life-grant-writer) — would this be any different? Are the ethical questions different?

Since first running this test late last year, OpenAI has already released a newer version of the tool. The existence of tools like this changes the kinds of questions that should be asked and the level of evidence that grant makers in the sciences will require.