
I tested Google Bard's newest coding skills. It didn't go well...again

Given how good ChatGPT is, Google's answer is … troubling. Even though Google says it improved Bard, there are still major issues.
Written by David Gewirtz, Senior Contributing Editor
Rafael Henrique/SOPA Images/LightRocket via Getty Images

Previously, we discussed how Bard can provide some coding help to programmers, but couldn't code. That's changed. As of April 21, Google announced Bard can code. But can it code well?

Also: Every major AI feature announced at Google I/O 2023

I first published this article on April 24, 2023, and my overall assessment was that it didn't go well. It's now the middle of May, and during the Google I/O livestream, speakers announced that Bard's coding prowess had improved even more. So, I'm testing it again, using the exact same prompts I did in April and updating each section with Bard's newest results. I'll show you how the two versions compare.

To come up with an answer, I ran some of the coding tests I gave to ChatGPT. We'll see how Bard does this time.

Writing a simple WordPress plugin - worse in May

My initial foray into ChatGPT coding was with a WordPress PHP plugin that provided some functionality my wife needed on her website. It was a simple request, merely asking for some submitted lines to be sorted and de-duped, but when ChatGPT wrote it, it gave my wife a tool that helped her save time on a repetitive task she does regularly for work.

Also: The best AI art generators to try

Here's the prompt:

Write a PHP 8 compatible WordPress plugin that provides a text entry field where a list of lines can be pasted into it and a button, that when pressed, randomizes the lines in the list and presents the results in a second text entry field with no blank lines and makes sure no two identical entries are next to each other (unless there's no other option)…with the number of lines submitted and the number of lines in the result identical to each other. Under the first field, display text stating "Line to randomize: " with the number of nonempty lines in the source field. Under the second field, display text stating "Lines that have been randomized: " with the number of non-empty lines in the destination field.
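For reference, the core logic the prompt asks for — shuffle the lines, drop blanks, keep the count identical, and avoid putting two identical lines next to each other unless there's no other option — is compact. Here's a minimal Python sketch of that logic (my own illustration, not output from either chatbot):

```python
import random
from collections import Counter

def randomize_lines(text: str) -> list[str]:
    """Shuffle the non-empty lines of text so that no two identical
    lines end up adjacent, whenever such an ordering exists."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    counts = Counter(lines)
    result = []
    while counts:
        # Candidates: any remaining line that differs from the one
        # just placed.
        choices = [ln for ln in counts if not result or ln != result[-1]]
        if not choices:
            # Only copies of the last line remain -- no other option,
            # so allow a repeat, as the prompt permits.
            choices = list(counts)
        # Prefer the most frequent candidate to avoid painting
        # ourselves into a corner; break ties randomly.
        top = max(counts[ln] for ln in choices)
        pick = random.choice([ln for ln in choices if counts[ln] == top])
        result.append(pick)
        counts[pick] -= 1
        if counts[pick] == 0:
            del counts[pick]
    return result
```

The greedy "most frequent first" choice is what keeps duplicates apart: if the most common line isn't placed early and often, its copies inevitably pile up at the end.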

And here's the generated code that Bard wrote in April:

Screenshot by David Gewirtz/ZDNET

At first glance, it looks good. But not for long. The UI is not formatted properly. Worse, the plugin doesn't work: clicking the Randomize button just clears both fields. That's it.

Screenshot by David Gewirtz/ZDNET

Now, that was the April result. Today's result is oh, so much worse. At least the code ran in April. Here's what the interface looks like now:

Screenshot by David Gewirtz/ZDNET

So, yeah, now the plugin won't even activate. What's worse, the code itself is really bad, with very basic language errors. To understand this, you need to know how PHP is presented in code. 

Also: I used ChatGPT to write the same routine in these ten obscure programming languages

PHP is used for making websites, so PHP code is often intermixed with pure HTML. To separate and define the PHP, it's placed inside open and close tags. As the graphic below shows, the tags marked in green define the beginning of PHP code, while the tags marked in red define the end. Technically, you can leave out the final closing tag, but only that one.

David Gewirtz/ZDNET

So here are the last few lines of what Bard generated in its new-and-supposedly-improved version:

Screenshot by David Gewirtz/ZDNET

Note that the open tags aren't closed. They're just left hanging. Nobody should get that wrong; it's about as basic and fundamental as PHP coding skills get.

By contrast, ChatGPT built a fully functional plugin right out of the gate.

Fixing some code - better in May

Next, back in April I tried a routine I'd previously fed into ChatGPT that came from my actual programming workflow. I was debugging some JavaScript code and found that I had an input validator that didn't handle decimal values. It would accept integers, but if someone tried to feed in dollars and cents, it failed.

Also: I asked ChatGPT, Bing, and Bard what worries them. Google's AI went Terminator on me

Back then, I fed Bard the same prompt I fed ChatGPT, and this is what resulted:

Screenshot by David Gewirtz/ZDNET

The code generated here was much longer than what came back from ChatGPT. That's because Bard didn't use any regular expressions in its response, and instead gave back the kind of very simple script you'd expect from a first-year programming student.

Also: How to use ChatGPT to write Excel formulas

Also, like something you'd expect from a first-year programming student, that April version was wrong. It properly validated the value to the left of the decimal point, but allowed any value (including letters and symbols) to the right of it.

Now, here in May, Bard is using regular expressions to validate the values.

Screenshot by David Gewirtz/ZDNET

That's an interesting change. And this code works. So in this instance, the new version is an improvement over the previous Bard coding capability.
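To give a sense of what regex-based validation of this kind looks like, here's a minimal Python sketch (my own illustration of the technique — the actual code in question was JavaScript, and this isn't Bard's output). The pattern requires an integer part and, if a decimal point appears, one or two digits after it:

```python
import re

# Accept plain integers ("42") or dollars-and-cents values ("12.34").
# If a decimal point is present, it must be followed by 1-2 digits;
# letters, symbols, or a trailing bare "." are rejected.
MONEY_RE = re.compile(r"\d+(\.\d{1,2})?")

def is_valid_amount(value: str) -> bool:
    """Return True only if the whole string is a valid amount."""
    return MONEY_RE.fullmatch(value.strip()) is not None
```

The point of `fullmatch` (or anchoring the pattern) is exactly the bug the April version had: without it, junk to the right of the decimal point can slip through because only the left portion of the string gets matched.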

Finding a bug - worse in May

During that same programming day, I encountered a PHP bug that was truly frustrating me. When you call a function, you often pass parameters. You need to write the function to be able to accept the number of parameters the originating call sends to it.

Also: How to use Midjourney to generate amazing images

As far as I could tell, my function was sending the right number of parameters, yet I kept getting an incorrect parameter count error. Here's the prompt:

Screenshot by David Gewirtz/ZDNET

When I fed the problem into ChatGPT, the AI correctly identified that I needed to change code in the hook (the interface between my function and the main application) to account for parameters. It was absolutely correct and saved me from tearing out my hair.
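This class of bug is easy to reproduce in any language: the hook (the dispatcher) forwards a fixed number of arguments to your callback, and if that number doesn't match what the callback declares, you get a parameter-count error — even though your own code looks correct. Here's a minimal Python sketch of the idea, using hypothetical names rather than my actual WordPress code:

```python
def dispatch(callback, *args, accepted_args=1):
    """A toy hook: forwards only as many arguments as it was told
    the callback accepts -- extra arguments are silently dropped."""
    return callback(*args[:accepted_args])

def on_donation(donation_id, amount):
    """A callback that genuinely needs two parameters."""
    return f"{donation_id}: {amount}"

# dispatch(on_donation, "D42", 19.99) raises TypeError: the hook
# forwards only one argument unless accepted_args is raised.
# dispatch(on_donation, "D42", 19.99, accepted_args=2) succeeds.
```

The fix lives in the hook registration, not the callback — which is exactly what ChatGPT spotted and Bard didn't.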

Back in April, I passed Bard the same problem, and here was its answer:

Screenshot by David Gewirtz/ZDNET

Back then, Bard simply told me that the problem I was having was a mismatch of parameters, and I needed to pass the donation ID. That was a wrong answer. 

Also: I'm using ChatGPT to help me fix code faster, but at what cost?

This time, Bard got even more simplistic. It recommended I add a parameter to the function. But when it showed the "before" code, that code wasn't what I provided. The before code it provided was missing a parameter, which Bard apparently removed, just so it could recommend adding it back in.

Screenshot by David Gewirtz/ZDNET

For the record, I looked at all three of Bard's drafts for this answer, and they were all wrong.

'Hello, world' test - the same in both months

Over the past month or so, I've been asking ChatGPT to generate code in 12 popular programming languages (including Python) to display "Hello, world" ten times, and to determine whether it was morning, afternoon, or evening here in Oregon. ChatGPT succeeded with the mainstream languages. I also asked it to do the same in ten more obscure languages.

Also: This new technology could blow away GPT-4 and everything like it

Last month, I fed the same prompt to Bard. I just picked one language to test, asking it to generate some Python code:

Screenshot by David Gewirtz/ZDNET

Here, Bard provided the same answer in April and in May, and in both cases it missed providing a space after the number in the loop. But it was close and the code generally worked.
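For readers who want to see what the test actually asks for, here's a Python sketch of a correct answer (my own, not Bard's or ChatGPT's output) — including the space after the number that Bard kept missing. I'm using the America/Los_Angeles zone for Oregon:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def hello_report(now=None):
    """Build ten 'Hello, world' lines, each numbered with a space
    after the number, and say whether it is morning, afternoon,
    or evening in Oregon."""
    lines = [f"{i} Hello, world" for i in range(1, 11)]
    now = now or datetime.now(ZoneInfo("America/Los_Angeles"))
    if now.hour < 12:
        part = "morning"
    elif now.hour < 18:
        part = "afternoon"
    else:
        part = "evening"
    return lines, part
```

It's a trivial exercise, which is exactly why it makes a useful baseline: any model that stumbles here isn't ready for real work.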

So, can Bard code?

Bard can definitely write code. But in three of my four tests in April, the code it wrote didn't work properly. In May, some of the code was substantially worse than in April, while one test showed considerable improvement.

To be honest, there's no way I would rely on Bard to code. And I feel more strongly about that after Google's supposed improvements than before.

Also: Generative AI is changing your tech career path. What to know

I'll tell you this. If I were hiring a programmer and gave them the above four assignments as a way of testing their programming skills, and they returned the same results as Bard, I wouldn't hire them.

Right now, Bard can write code, sort of. In April, I said it was like a first-year programming student who would probably get a C grade for the semester. Now, it's more like a first-year programming student who shows some sophistication in programming technique, but whose overall performance is so bad, I'd probably have to fail them. Ouch!

Given how good ChatGPT is, Google's answer is … troubling.


You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
