Ever since machines learned to talk through chatbots, there has been a fear that they would inevitably take our jobs. Everything from customer support to medical diagnosis to software development was supposedly at risk of being automated: if a database of information existed somewhere, with the right algorithms to match it to a given question or situation, then you were in trouble. But so far, much of that has been little more than a fanciful story and a good movie plot. Because as it turns out, doing any of those things is kind of difficult, and humans can barely get them right themselves.
Which is why ChatGPT is kind of a big deal. If you haven't been following the latest AI and bot news, here is a quick synopsis.
ChatGPT is an AI tool, developed by OpenAI, that gives users the ability to ask questions in natural language and get a response in natural language as well. ChatGPT is advanced enough to answer follow-up questions and to challenge your incorrect premises. It also limits its answers in areas where morality or ethics might be involved.
And while you could ask it for today's weather or to add two numbers together, you probably shouldn't, because it isn't really designed for that.
What can it do?
Surprisingly, ChatGPT can't actually tell you the current weather, as it lacks the ability to browse the internet. That doesn't stop it from knowing an enormous amount of information, however, as it has been trained on a large portion of the web. ChatGPT is based on GPT-3, a model with 175 billion parameters requiring around 800GB of storage.
More importantly though, the model has also been trained on an enormous amount of code (through public repositories), meaning that developers can ask for actual code snippets, or even have ChatGPT review non-functional code and look for mistakes. So far, from my personal testing, it does a pretty good job at not only giving you correct code, but also explaining it.
For example, I recently had the following conversation with ChatGPT:
"Can you write a JavaScript function that validates email formats?"
And in less time than it would have taken me to type the query into Google and search through Stack Overflow, I had an answer, with a full breakdown (simple as it was), right before my eyes. And because I already knew the answer ahead of time, I was impressed with the overall quality of the snippet that it provided me.
And you might be wondering (or thinking) whether the code quality is usable in the real world. So here is the exact snippet that I received:
function validateEmail(email) {
  var re = /^(([^<>()[\]\\.,;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
  return re.test(email);
}
I can only guess that the regex given is valid, but assuming that it is, this is probably the most straightforward way of validating email address formats. The only interesting thing to note is that ChatGPT chose to declare the variable with var instead of let.
But ChatGPT didn't only give me that function; it also gave me an example of how to call it, which was much appreciated, as that is often missing when searching through solutions online.
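For illustration, a call like the one it showed me looks roughly like this (my reconstruction rather than ChatGPT's verbatim output, with the function repeated so the snippet runs on its own):

```javascript
// validateEmail repeated from the snippet above so this runs standalone
function validateEmail(email) {
  var re = /^(([^<>()[\]\\.,;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
  return re.test(email);
}

console.log(validateEmail("jane.doe@example.com")); // true
console.log(validateEmail("not-an-email"));         // false
```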
As an added challenge, I also asked ChatGPT to give me the same function, but this time, to avoid using a regular expression. And because the model can recall previous queries, I didn't have to prefix my question with any kind of context. I simply asked:
"Can you do it without Regex?"
And, as it turns out, it definitely can.
function validateEmail(email) {
  // Check that the email contains at least one '@' character and one '.' character
  if (email.indexOf('@') === -1 || email.indexOf('.') === -1) {
    return false;
  }
  // Split the email into parts using '@' as the separator
  var parts = email.split('@');
  // Check that the email contains only one '@' character
  if (parts.length !== 2) {
    return false;
  }
  // Check that the first part (the part before '@') is not empty
  if (parts[0].length === 0) {
    return false;
  }
  // Check that the second part (the part after '@') contains at least one '.' character
  if (parts[1].indexOf('.') === -1) {
    return false;
  }
  // Check that the second part (the part after '@') does not start or end with a '.' character
  if (parts[1][0] === '.' || parts[1][parts[1].length - 1] === '.') {
    return false;
  }
  // If all of the above checks pass, the email is valid
  return true;
}
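To sanity-check it, I ran the function against a handful of obviously good and bad inputs (my own quick test, not part of ChatGPT's output; the function is repeated in condensed form so the snippet runs on its own):

```javascript
// Condensed repeat of the non-regex validator above so this runs standalone
function validateEmail(email) {
  if (email.indexOf('@') === -1 || email.indexOf('.') === -1) return false;
  var parts = email.split('@');
  if (parts.length !== 2) return false;
  if (parts[0].length === 0) return false;
  if (parts[1].indexOf('.') === -1) return false;
  if (parts[1][0] === '.' || parts[1][parts[1].length - 1] === '.') return false;
  return true;
}

console.log(validateEmail("test@example.com"));  // true
console.log(validateEmail("no-at-sign.com"));    // false (no '@')
console.log(validateEmail("user@@example.com")); // false (two '@' characters)
console.log(validateEmail("user@example."));     // false (domain ends with '.')
```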
And this aspect of ChatGPT is what impresses me the most. Because normally, using a search engine, I'd have to qualify every query with the full context in order to get a valid response.
But not here. In fact, I could keep the conversation going about this specific code snippet as long as I had valid questions to ask.
And secondly, you have to appreciate the fact that the code comes loaded with comments explaining each major portion. That's definitely not something that is guaranteed if you were to search for the code yourself.
You might be thinking that this is a relatively straightforward question and that clearly the model just pasted code it found on GitHub. And you might be right, to some extent.
Except that I also tried to search for this code myself, and in typical fashion found various answers on different websites, each with long lists of pros and cons added by many different developers. Some of those snippets were broken or missing context, and if you're a relatively new developer trying to grow your skill set in the corporate world, it's these challenges that will inevitably slow you down, because a lot of the answers seemed to be the "right" ones.
To close out my conversation about email validation with ChatGPT, I asked it whether the given code snippet would work for every type of email. And the answer kind of surprised me, because I already knew it. The answer is that there is no foolproof way to validate emails. A regular expression can offer higher coverage, but that depends on the exact expression, and it also depends on changes to the official email address specification.
And that is exactly what ChatGPT responded with. It was even able to reference both snippets in its explanation of why one of them is better to use. That extra step is what makes this model a game changer for the software development industry.
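To make that concrete, here is one input where the two snippets disagree (a test case of my own, not from the conversation; both functions are repeated in condensed form so the comparison runs on its own):

```javascript
// Regex version from earlier, condensed
function validateEmailRegex(email) {
  var re = /^(([^<>()[\]\\.,;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
  return re.test(email);
}

// Non-regex version from earlier, condensed
function validateEmailNoRegex(email) {
  if (email.indexOf('@') === -1 || email.indexOf('.') === -1) return false;
  var parts = email.split('@');
  if (parts.length !== 2) return false;
  if (parts[0].length === 0) return false;
  if (parts[1].indexOf('.') === -1) return false;
  if (parts[1][0] === '.' || parts[1][parts[1].length - 1] === '.') return false;
  return true;
}

// A space in the local part slips past the simple checks but not the regex
console.log(validateEmailNoRegex("jane doe@example.com")); // true
console.log(validateEmailRegex("jane doe@example.com"));   // false
```

Neither result means one function is "correct"; it just shows how much coverage depends on the exact checks being performed.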
Because oftentimes, the missing ingredient in most code snippets found online is best practices, business logic, and discussion of standards. Which is why "copy and pasting" has become a common meme when discussing a programmer's day-to-day workload.
Overall, I was left with a positive attitude towards our robotic coding counterparts.
What it can't do
You would assume that a chatbot would be able to give an answer to any and all questions asked. But that is not so in this case. For one, ChatGPT isn't designed to do that: it can't browse the internet in real time and give you a weather report or stock information.
And it also can't give you answers to questionable queries, particularly if the answer involves breaking the law in any way.
ChatGPT also won't be your ticket to financial freedom, as it doesn't give out financial advice. When I asked it if it was smart to invest in cryptocurrency, it essentially answered that I should consult my financial advisor, which realistically is a pretty good answer.
And lastly, ChatGPT can't really tell you what the "best" code to use is, as that is usually a subjective question. But it will give you statistics about certain best-case situations, and that might be enough to help you make a specific choice.
For example, when asked what the most popular programming language is, it informed me that this depends on a variety of parameters, including industry and location, but based on a given survey it provided me with five languages.
Final impressions
Overall, I'm very impressed with ChatGPT in its current stage. It definitely isn't the all-knowing oracle that many make it out to be online, but I don't believe that that's its intent.
It's designed to mimic natural language when given a query, and to remember the conversation as it progresses. And that's very different from just storing general knowledge.
I also don't believe that it is going to take away programmer jobs in the near future, as it still requires some level of developer knowledge to be used effectively.
If anything, I can definitely see it helping developers that are just starting out or that lack more senior mentorship in their work environments. Sometimes, knowing why something should be done a certain way is much more important than knowing how.
However, I've only spent a handful of hours with ChatGPT, and as such, I can't recommend using the given code snippets in your production code. Not because the code is invalid or insecure, but because as a developer, a big part of your role is to verify and assess any code that you use, just as you would with any code found online.