OpenAI recently announced CriticGPT, a new AI model that critiques ChatGPT responses to help the human trainers of GPT models better evaluate outputs during reinforcement learning from human feedback (RLHF). According to OpenAI, CriticGPT isn’t perfect, but it does help trainers catch more problems than they do on their own.
But is adding more AI into the quality step such a good idea? In the latest episode of our podcast, we spoke with Rob Whiteley, CEO of Coder, about this idea.
Here is an edited and abridged version of that conversation:
A lot of people are working with ChatGPT, and we’ve heard all about hallucinations and all the other problems, you know, violating copyrights by plagiarizing things and so on. So OpenAI, in its wisdom, decided to have an untrustworthy AI checked by another AI that we’re now supposed to trust will be better than the first one. So is that a bridge too far for you?
On the surface, if you need to pin me down to a single answer, I would say yes, it’s probably a bridge too far. However, where things get interesting is really your degree of comfort in tuning an AI with different parameters. What I mean by that is, yes, logically, if you have an AI that is producing inaccurate results and then you ask it to essentially check itself, you’re removing a critical human in the loop. The vast majority of customers I talk to stick to an 80/20 rule: about 80% of the work can be produced by an AI or a GenAI tool, but that last 20% still requires a human.
And so on the surface, I worry that if you become lazy and say, okay, I can now leave that last 20% to the system to check itself, then I think we’ve wandered into dangerous territory. But if there’s one thing I’ve learned about these AI tools, it’s that they’re only as good as the prompt you give them. If you are very specific about what that AI tool can check or not check — for example, look for coding errors, look for logic fallacies, look for bugs, do not hallucinate, do not lie, and if you do not know what to do, prompt me — there are things you can essentially make explicit instead of implicit, which will have a much better effect.
The question is, do you even have access to the prompt, or is this a self-healing thing happening in the background? To me, it really comes down to, can you still direct the machine to do your bidding, or is it now just semi-autonomous, working in the background?
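To make that idea concrete, here is a minimal sketch of what making the checks explicit could look like, assuming the standard OpenAI Python SDK. The model name and review criteria are illustrative, not OpenAI’s actual CriticGPT prompt.

```python
# A hypothetical "explicit critic" prompt: the reviewing model is told exactly
# what to check and what to do when it is unsure, rather than judging freely.
# Assumes the OpenAI Python SDK (pip install openai); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

REVIEW_INSTRUCTIONS = (
    "You are a code reviewer. Look only for coding errors, logic flaws, and bugs. "
    "Do not invent issues, do not speculate beyond the code shown, and do not rewrite it. "
    "If you are unsure whether something is a problem, ask the developer instead of guessing."
)

def critique(code_snippet: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": REVIEW_INSTRUCTIONS},
            {"role": "user", "content": code_snippet},
        ],
    )
    return resp.choices[0].message.content

print(critique("def add(a, b):\n    return a - b"))
```

Whether a hosted checker like CriticGPT exposes its prompt at all is exactly the question Whiteley raises.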
So how much of this do you think is just people kind of rushing into AI really quickly?
We are definitely in a classic hype bubble when it comes to the technology. Where I see it is, a company says, I want to enable my developers to use Copilot or some GenAI tool, and then victory is declared too early: okay, we’ve now made it available. First of all, if you can even track its usage (and many companies can’t), you’ll see a big spike. The question is, what about week two? Are people still using it? Are they using it regularly? Are they getting value from it? Can you correlate its usage with outcomes like bugs or build times?
And so to me, we are in a ready-fire-aim moment where I think a lot of companies are just rushing in. It kind of feels like cloud 20 years ago, where it was the answer regardless. And then as companies went in, they realized, wow, this is actually expensive, or the latency is too high. But now we’re committed, so we’re going to do it anyway.
I do fear that companies have jumped in. Now, I’m not a GenAI naysayer. There is value, and I do think there are productivity gains. I just think, like any technology, you have to make a business case, have a hypothesis, test it with a pilot group, and then roll it out based on results, not just open the floodgates and hope.
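As a rough illustration of the usage tracking he describes (week-two retention, correlation with outcomes like bugs or build times), here is a minimal sketch in pandas. The CSV file and its column names are hypothetical, and a correlation is only a signal, not proof of value.

```python
# Hypothetical weekly metrics: one row per week with GenAI adoption and delivery
# outcomes. Column names are illustrative, not from any real telemetry schema.
import pandas as pd

metrics = pd.read_csv("weekly_metrics.csv")
# expected columns: week, active_copilot_users, bugs_filed, avg_build_minutes

# Did usage hold up after the week-one spike, or did it fall off?
print(metrics[["week", "active_copilot_users"]])

# Does adoption move together with the outcomes we actually care about?
print(metrics["active_copilot_users"].corr(metrics["bugs_filed"]))
print(metrics["active_copilot_users"].corr(metrics["avg_build_minutes"]))
```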
Of the developers you speak with, how are they viewing AI? Are they looking at it as, oh wow, this is a great tool that’s really going to help me? Or is it, oh, this is going to take my job away? Where are most people falling on that?
Coder is a software company, so of course I employ a lot of developers, and we did an internal poll. What we found was that 60% were using it and happy with it, about 20% had tried it but abandoned it, and 20% hadn’t even picked it up. So first of all, for a technology that’s relatively new, that’s already approaching pretty good saturation.
For me, the value is there and the adoption is there, but it’s the 20% that used it and abandoned it that scares me. Why? Was it for psychological reasons, like, I don’t trust this? Was it for UX reasons? Was it that it didn’t fit my developer flow? If we could get to a point where 80% of developers are getting value from it (we’re never going to get 100%), I think we can put a stake in the ground and say this has transformed the way we develop code. I think we’ll get there, and we’ll get there shockingly fast. I just don’t think we’re there yet.
I think that’s an important point you make about keeping humans in the loop, which circles back to the original premise of AI checking AI. It sounds like the role of developers will morph a little bit. As you said, some are using it for things like documentation and are still writing the code themselves. Others will look to the AI to generate the code, and then they’ll become the reviewer of the code the AI writes.
Some of the more advanced users, both among my customers and in my own company, were individual contributors before AI. Now they’re almost like a team lead, where they’ve got multiple coding bots and they’re asking them to perform tasks, almost like pair programming, but not one-to-one; it’s one-to-many. So they’ll have one writing code, one writing documentation, one assessing a code base, one writing code on a different project, because they’re signed into two projects at the same time.
So absolutely, I do think developer skill sets need to change. I think a soft-skill revolution needs to occur, where developers are a little more attuned to things like communicating, giving requirements, checking quality, and motivating, which, believe it or not, studies show actually produces better results from the AI. So I think there is a definite skill set that will create a new — I hate to use the term 10x — higher-functioning developer, and I don’t think it’s going to be, do I write the best code in the world? It’s more, can I achieve the best outcome, even if I have to direct a small virtual team to achieve it?
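The “one developer directing a small virtual team” pattern he describes could look something like the sketch below, with separate tasks fanned out to parallel model calls and every result still reviewed by the human lead. The tasks, prompts, and model name are illustrative, and it assumes the OpenAI Python SDK.

```python
# One developer, many bots: fan independent tasks out to parallel model calls,
# then review each result. Tasks, prompts, and model name are illustrative.
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI()

TASKS = {
    "write_code": "Implement a Python function that parses ISO-8601 timestamps.",
    "write_docs": "Draft a short README section describing the timestamp parser.",
    "review_code": "List the riskiest parts of this module: <module omitted>.",
}

def run_task(name: str, prompt: str):
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return name, resp.choices[0].message.content

with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
    for name, output in pool.map(lambda item: run_task(*item), TASKS.items()):
        print(f"--- {name} ---\n{output}\n")  # the human lead still reviews each result
```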