A big part of my blog as of late is the Typing Test area that I created over a year ago to test my own words-per-minute. In that time it's garnered some popularity, and thousands of people now use it every week to practice and to test their own typing skills.
Needless to say, it's something that I now need to maintain, update, and, on occasion, debug. A while back, as I was adding features and testing things out, I noticed something strange. If you clicked on "Try again" once or twice after completing a test, everything was just fine for the most part. The state would reset and the rendering function would reload the test as expected.
But around the 3rd or 4th "Try again" instance, things got slow. And by the 5th or 6th attempt, it was pretty much unusable. And that's a problem, because from the looks of it, traffic was picking up and people were doing multiple tests per session.
The code itself for generating the tests isn't the most complex in the world. I essentially create a 'database' of words using a standard string in JavaScript as follows:
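Something along these lines; the actual list contains thousands of words, so this is a heavily trimmed-down sketch:

```javascript
// A heavily trimmed-down sketch of the word 'database': one long,
// space-delimited string of words (the real list has thousands)
const wordString = 'the of and to in is you that it he was for on are as with his they at';
```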
And depending on the kind of test that a user selects, I filter that giant list of words down to the specific difficulty level. The best part is that it's quick, at least when it works. I don't have to fetch data from a database or wait for API calls to finish in order to start the test.
Each instance of a 'test' is then added to an array, along with the 'Formula' required to filter the list of words. And this array is my final database.
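Each entry looked roughly like this; the test names and filtering criteria here are illustrative stand-ins, not my real ones:

```javascript
// A rough sketch of the 'database' of tests: each entry pairs some
// metadata with the Formula that filters the word string down to
// that test's own word list
const tests = [
  {
    Name: 'Beginner',
    Words: '',
    Formula: function () {
      wordString.split(' ').forEach(word => {
        if (word.length <= 4) {
          this.Words += word + ' ';
        }
      });
    }
  },
  {
    Name: 'Advanced',
    Words: '',
    Formula: function () {
      wordString.split(' ').forEach(word => {
        if (word.length > 4) {
          this.Words += word + ' ';
        }
      });
    }
  }
];
```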
This was supposed to be a temporary database until I built out something more formal, but it ended up working so well that I kept it. Let's get back to the error though, because the error is in that snippet above, it's just somewhat hard to spot.
The Formula function for each test is where the actual list of words is built for any specific test. Its main job is to select only the words that match certain criteria and to build a new string, Words, from those words. Pretty straightforward.
Dissecting the error
When I first noticed that something was awry with the test reload sequence, a few things came to mind:
- Performance degrades around the 3rd or 4th test
- There were zero console errors or warnings
- The test still works and gives an accurate WPM
- It works on Firefox for some reason, but not on a Chromium-based browser
Unfortunately, the list of symptoms seemed very random, and no bullet point was really related to any other. This made things difficult to figure out.
The fact that it worked on Firefox but not on Chromium was the most confusing part, and what really caused the majority of the headaches, because I wasn't using any Firefox-specific functionality. This code was as vanilla as it could get.
Leveraging A.I. to help
After hundreds of console.log's on almost every line, and thousands of "Aha, I think I found it" moments, I decided to let ChatGPT take a stab at my code. So I fed it the entire script, told it the symptoms, and let it do its magic.
The biggest problem with using A.I., though, is that it mainly does what you tell it to. If you ask it to "find the issues" with something, it's going to do just that. And so it recommended a moderately sized list of things to consider.
Nothing on the list struck me as the culprit, but I humored it anyway, because at this point days (maybe weeks) had passed and I wasn't any closer to finding an answer.
But even after rewriting almost every function in the hopes that I was just missing something obvious, I wasn't any closer to a solution.
For the time being, if a user wanted to `retry()` a test, I would just reload the page. It wasn't the smoothest transition from test session to test session, but at least it wouldn't crash the browser.
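The stopgap amounted to something like this:

```javascript
// Temporary workaround: skip the in-place state reset entirely
// and just reload the whole page on 'Try again'
function retry() {
  window.location.reload();
}
```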
But I still needed to find a real solution, mainly for my sanity. I could go a few days without thinking about this silly issue, but as soon as I fired up the script just to take a look, I'd be caught up in that trap again.
Finding the solution
I knew that if I was going to find the actual solution, I needed to sit down for 2-3 hours of uninterrupted time (no Googling, no ChatGPT). Just me, VS Code, and the browser. And so I started where the issue began: the "Try again" sequence.
As mentioned above, this part is relatively straightforward. I reset all of the state variables, reset the DOM elements to their defaults, and re-generate the words and characters to use for the test.
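Roughly speaking, it looked like this; the state variables and helper names are illustrative stand-ins for the real ones:

```javascript
// A rough sketch of the 'Try again' sequence
function retry() {
  // reset the state variables
  typedCharacters = 0;
  errorCount = 0;
  startTime = null;

  // reset the DOM elements to their defaults
  wordContainer.innerHTML = '';
  wpmDisplay.textContent = '0';

  // re-generate the words/characters to use for the new test
  currentTest.Formula();
  renderWords(currentTest.Words);
}
```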
But I knew that somehow this code was the issue, even though it managed to function properly in Firefox.
And in case you missed it, I'll point out the issue right now. It's in this line:
```javascript
currentTest.Formula();
```
The Formula() function from my word database that generates the list of words is the culprit. Here is the implementation once again for reference:
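(The same illustrative sketch as before, condensed to the relevant part.)

```javascript
Formula: function () {
  // appends every matching word onto this.Words
  wordString.split(' ').forEach(word => {
    if (word.length <= 4) {
      this.Words += word + ' ';
    }
  });
}
```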
The main problem with the implementation is that the Words property is never reset after a test session ends, so it keeps growing after each and every test. A simple `this.Words = ''` before the forEach would have saved me hours of, what amounts to, staring at the code.
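Applied to the sketch above, the fix is just:

```javascript
Formula: function () {
  // the fix: clear out the previous session's words
  // before rebuilding the list
  this.Words = '';
  wordString.split(' ').forEach(word => {
    if (word.length <= 4) {
      this.Words += word + ' ';
    }
  });
}
```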
The result was that on each and every test session, thousands upon thousands of `<div>` elements were added to the DOM, causing the browser to stutter and eventually give up.
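To make the connection explicit: the test renders one element per character, so the DOM grows in lockstep with the Words string. A hypothetical rendering step might look like this:

```javascript
// One element per character: if Words keeps growing, so does the
// number of nodes appended on every 'Try again'
function renderWords(words) {
  for (const ch of words) {
    const el = document.createElement('div');
    el.textContent = ch;
    wordContainer.appendChild(el);
  }
}
```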
Why was it hard to find?
All of this really brings me to the main point of my ranting about a simple initialization error in a JavaScript-based typing test: oftentimes, the most complex bugs that an application develops are the simplest to fix. It's just that we don't know where they are.
Firefox's performance also didn't help, because it took that giant list of words without skipping a beat and loaded everything just fine, time and time again. So kudos to Firefox.
It also didn't help that my database of words was essentially a separate JavaScript file that I never bothered to look at, because I incorrectly assumed that the issue must be in the test logic itself. So my clever hack of filtering words based on the specified test came back to bite me in the end.
But it's finally fixed and I am now able to sleep again. At least until something else breaks. If you ever find yourself on the other end of a bug that you can't explain, it might be good to look anywhere except where you've already been looking.