How real-world programmers are actually using A.I. to write code

With the latest advances in artificial intelligence catered toward developers, such as GitHub Copilot and ChatGPT, the concern that programmers will soon be replaced by sentient robots feels all too real.

But personally, I don't see it. At least not just yet. I use ChatGPT daily to help me with my work, and so far it hasn't been able to write a complex component on its own without a substantial amount of prompting and correcting along the way.

And even when it mostly gets it right, there might be one or two deprecated functions that need replacing, a query that's not quite correct, or an assumption that's completely wrong. That really just goes to show that A.I. programmers aren't quite ready to hit the mainstream and run billion-dollar tech companies.

But it isn't all terrible. I have been able to use it as a supplement to my own code writing, and overall it has helped shave a few hours off my workload every week. Here's how I'm personally using it as a software developer working at a small startup.

Implementing APIs

Multiple times during your stint at a company, you're going to find yourself reading through someone else's API documentation, trying to figure out why a mysterious 204 response pops up on every third run, but only on Tuesdays.

I can't even count how many hours I've spent attempting to debug that very kind of issue. Typically it's all trial and error, because once you've read the one paragraph in the official docs that covers it, there really isn't anywhere else to look when it still fails to work. Sometimes Stack Overflow helps (less so these days), but depending on how niche the issue is, you might be out of luck in terms of getting anywhere fast.

But assuming that the API you are implementing hasn't been completely rewritten in the past few months, there is a good chance that ChatGPT can at least help you get started with a basic request.

In my experience, you don't even really need to prime the model ahead of time with resources or links to source material, as ChatGPT already has a fair amount of that data baked into its training.
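Asking for a starter request against a typical REST endpoint usually yields something along these lines. To be clear, the endpoint, token, and response shape below are all hypothetical; this is just a sketch of the kind of boilerplate the model hands back:

```typescript
// Hypothetical starter request, the kind of boilerplate ChatGPT produces.
// The endpoint URL, token, and Invoice shape are all assumptions.
interface Invoice {
  id: string;
  amount: number;
  status: string;
}

async function fetchInvoices(apiToken: string): Promise<Invoice[]> {
  const response = await fetch("https://api.example.com/v2/invoices", {
    headers: {
      Authorization: `Bearer ${apiToken}`,
      Accept: "application/json",
    },
  });

  // Surface non-2xx responses instead of silently parsing an error body
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status} ${response.statusText}`);
  }

  return (await response.json()) as Invoice[];
}
```

It's rarely perfect, but it's a working skeleton you can correct against the real docs instead of starting from a blank file.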

Just recently I was in that same situation, working with an API that was less than a year old and not yet fully documented. Worse still, it was already on V2.0. The official docs were very short, with no real-world examples to be found. After emailing the development team and going back and forth for a few days, I decided to just ask ChatGPT about it. To my mild surprise, it gave me a fully functional implementation, but one written against V1.0.

At least for the time being, I was able to get it running, since V1.0 was still live. It took less than 15 minutes to implement the sample code. So while it didn't fully solve my problem in the moment (they updated the V2.0 documentation later on), it was by far the most helpful resource I had in that situation.
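In practice, that just meant pinning the base URL to the older version until the newer docs caught up. A minimal sketch of that idea, with hypothetical version paths:

```typescript
// Hypothetical: pin to the still-functional V1.0 base URL until the
// V2.0 documentation is usable, keeping the version switch in one place.
const API_VERSION = "v1"; // flip to "v2" once its docs stabilize
const BASE_URL = `https://api.example.com/${API_VERSION}`;

async function getResource(path: string, apiToken: string): Promise<unknown> {
  const response = await fetch(`${BASE_URL}/${path}`, {
    headers: { Authorization: `Bearer ${apiToken}` },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```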

Reading documentation

There is a fair amount of great documentation on the internet for various libraries and frameworks. But there is some relatively bad documentation as well: usually very verbose, light on code examples, and spread across multiple pages that may or may not follow any real order or logic.

As a programmer, I typically want to start writing code as soon as I can, because there are deadlines, and because there's going to be debugging time needed at some point. So good documentation is always something I look for.

Being able to feed ChatGPT a URL, or even just the pasted contents of an API doc, has been huge for getting started.
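You can do the same thing programmatically. Here's a minimal sketch using the official openai Node package, assuming you have OPENAI_API_KEY set in your environment; the model name and prompt wording are just placeholders:

```typescript
// Minimal sketch: pipe raw documentation text to the model and ask for
// a summary plus a starter example. Assumes OPENAI_API_KEY is set; the
// model name is an assumption, swap in whichever one you use.
import OpenAI from "openai";
import { readFileSync } from "node:fs";

const client = new OpenAI();

async function summarizeDocs(docPath: string): Promise<string> {
  const docText = readFileSync(docPath, "utf8");
  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "You are a concise technical assistant." },
      {
        role: "user",
        content: `Summarize this API documentation and show one basic request example:\n\n${docText}`,
      },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```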

I'll say now that it doesn't always work. APIs do change, and docs sometimes contradict themselves (see the section above), so it can be difficult for a model to figure out what's actually going on. That still requires good old-fashioned human neurons firing billions of times per second.

And the challenge really is that humans make mistakes. I've read documentation where the parameters listed are named incorrectly, required in a specific format with no context given, or depend on other fields that aren't listed at all. Without knowing the intent of the person who wrote the docs, I'd wager that not even A.I. could figure it out.

Writing documentation

This is probably an area that most programmers struggle with on corporate projects. They either don't do it at all, or they write way too much. Very few land somewhere in the middle, which is where you'd want to be.

I'm guilty of both. Early in my career I would write way too much, and my notes were challenging to read and not very useful to the other developers going over them. As the "senior" title started to get attached to my name, documentation grew scarcer, mainly because there was always some fire to put out somewhere, and time is money.

But I think this is definitely where A.I. can lend a helping hand, because it can both read your code and annotate it fairly well. And you can keep asking it to rework the wording until you get the desired outcome.
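As a rough illustration, hand the model an undocumented helper and it will usually come back with something like this (the function itself is a made-up example):

```typescript
/**
 * Calculates the total price of a cart, applying a percentage discount
 * and then sales tax on the discounted amount.
 *
 * @param subtotal - Cart subtotal before any adjustments, in dollars.
 * @param discountRate - Discount as a fraction (0.1 means 10% off).
 * @param taxRate - Sales tax as a fraction (0.08 means 8%).
 * @returns The final amount owed, rounded to two decimal places.
 */
function cartTotal(subtotal: number, discountRate: number, taxRate: number): number {
  const discounted = subtotal * (1 - discountRate);
  const taxed = discounted * (1 + taxRate);
  return Math.round(taxed * 100) / 100;
}
```

The generated wording tends to be serviceable rather than brilliant, but serviceable beats the empty docblock most of us were going to write instead.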

Oftentimes, people forget that being a programmer isn't solely about writing code. There's maintenance, documentation, reporting, and analytics always going on in the background. But most programmers wish they could spend more time writing code.

Writing repetitive code

A big portion of my day, personally, is spent writing what I would consider repetitive code, where only a few parameters change to meet very specific business requirements. That can be anything from reading from a database and inserting records to logging errors and validating user input.

Sometimes, whatever you need to code up is only a slight variation on something you've already written. You could refactor your code, add some fancy new parameters and a few default values, and make everything more scalable. Or you can take 30 seconds to copy, paste, and tweak the parameters, because odds are this function is going to need continuous changes anyway.

Ideally, of course, we should always be refactoring and repeating ourselves less. But realistically, that small refactoring step could affect hundreds of other functions, break things in unforeseen ways, and cost far too much time and money to implement.

What's a developer to do? Well, I personally have been asking ChatGPT to extract utility functions and refactor my React components where possible. And while it's not perfect, it's pretty darn good overall. At minimum, it can save you 15 straight minutes of typing in a matter of seconds.
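The kind of refactor I mean is usually small: two near-duplicate functions collapsed into one parameterized helper. A hypothetical before-and-after, where the names and validation rules are made up for illustration:

```typescript
// Before (hypothetical): validateEmail() and validateUsername() were
// near-identical copy/pasted functions. After: one parameterized helper.
interface FieldRule {
  pattern: RegExp;
  maxLength?: number;
}

function validateField(value: string, rule: FieldRule): string | null {
  const trimmed = value.trim();
  if (trimmed.length === 0) return "Field is required";
  if (rule.maxLength && trimmed.length > rule.maxLength) return "Too long";
  if (!rule.pattern.test(trimmed)) return "Invalid format";
  return null; // null means the value passed validation
}

// Each call site now supplies only what actually varies.
const emailError = validateField("walter@example.com", {
  pattern: /^[^@\s]+@[^@\s]+\.[^@\s]+$/,
  maxLength: 254,
});
const usernameError = validateField("walter_g", {
  pattern: /^[a-z0-9_]+$/i,
  maxLength: 30,
});
```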

One caveat though: the code produced isn't always correct. I've been a programmer long enough to spot issues quickly when I see them. It isn't so much that the code is wrong, since it builds and compiles just fine; it's that there are edge cases the model doesn't account for, which means you could end up with issues down the line.

Improving performance on SQL queries

Pretty much every website on the internet these days relies on a database to function. And these databases are made up of a lot of tables, and even more queries. If you're working with large datasets and your server is limited in power, you're going to end up with performance issues.

As of late, I've been using ChatGPT to help me optimize my queries for performance and speed. And the results... were interesting. They weren't good. But they weren't bad. But they definitely weren't good.

Because the queries I work on are relatively complicated and long-winded, it was challenging at first to notice that the "improvements" I was given were mostly wrong. Initially I thought my speed issues were solved once and for all, but alas, it was not to be.

After running the queries several times (and they were definitely fast), I noticed that the result set was mostly wrong, something that could easily have been missed had I not been staring at the original dataset for so long. Still, it helped me think about the query a bit differently. I got the gist of what the model was trying to accomplish, and it was enough to nudge me in the right direction.
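To give a flavor of that failure mode, here's a classic example of the kind of subtle rewrite to watch for (the tables are hypothetical, not my actual schema): swapping a LEFT JOIN for an INNER JOIN scans fewer rows and does run faster, but customers with zero orders silently vanish from the report.

```sql
-- Original (hypothetical): keeps every customer, even with zero orders.
SELECT c.id, c.name, COUNT(o.id) AS order_count
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
GROUP BY c.id, c.name;

-- "Optimized" suggestion: faster, but drops customers with no orders,
-- so the result set is no longer the same report.
SELECT c.id, c.name, COUNT(o.id) AS order_count
FROM customers c
INNER JOIN orders o ON o.customer_id = c.id
GROUP BY c.id, c.name;
```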

In closing, I'm still not worried about losing my job to a robot just yet. I hear more concern about programmers losing their jobs from non-programmers than I do from programmers these days, and understandably so, as it makes for a great headline. But writing production-level code on a 10-year-old system, with hundreds of stakeholders and decision makers, while keeping up with a dynamic market that changes every minute of every day, isn't quite as simple as many make it out to be.

Walter Guevara is a Computer Scientist, software engineer, startup founder and previous mentor for a coding bootcamp. He has been creating software for the past 20 years.
