Creating a Visual Comparison Tool for Front-End Developers 🔬
James Ives
Regressions are the worst, and they’re inevitable in any software development cycle. We can mitigate their occurrences as much as possible through automated testing, but getting 100% test coverage is a time-consuming task and sometimes not viable. Something I’ve run into a lot as a Front-End developer is visual regressions, where a change made to a stylesheet seems fine at first glance but later causes chaos somewhere else in the application. As the length and complexity of a project increases, making sure your styles don’t regress can be a tricky task.
I’ve used a couple of tools in the past that show visual diffs between two pages, but I’ve always wondered what it would take to make my own version. So join me as I dive down the rabbit hole once again to create my own CRUD (Create, Read, Update, Delete) application to do exactly that. For this project I’ll be using Node.js, React and Redux.
Creating the API
The first order of business was setting up a database and creating an API to store the tests. To handle this I decided to utilize Express for my routes and MongoDB for the database. To get started I laid out the database connections using the Mongoose package, requiring the path from the /config/database.config.js file. Once the connection has been established I required the routes for the API and then started Express.
Afterwards I started setting up the API endpoints. As I intend for this to be a CRUD application I started with the usual suspects: endpoints to create a test, retrieve all tests, find a specific one, update one, and delete one. Inside my routes/tests.route.js file I exported a module that defined these endpoints and tied them to functions that would later be defined within controllers/tests.controller.js.
I knew what type of information I’d need in the database in order to run a test, so the next step was creating a schema for the test object before I started interfacing with the database. I set this up within models/tests.model.js. The schema tells the database what fields it can expect; however, it doesn’t define which fields are required. I’ll set that up in the controller within the tests.create and tests.update methods.
Because the tests will rely on a Node service to run, I’d need some sort of interface that would allow the Front-End to trigger a process on the backend. Therefore, in addition to the endpoints I’d already created, I set up two more: one that runs all tests, and one that runs a specific test.
The intention here is to fire a Node function using the test data in the database when the endpoint is requested, generating the visual diff images. From an API perspective it returns a success or failure boolean when it’s done processing, so we can inform the Front-End of the status. If this application were being deployed to a server there would need to be more consideration given to performance and authentication, but for local use only it’s fine.
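The two run handlers could look something like this. The route paths and handler names are assumptions; the screenshot module is stubbed and the database lookup is injected so the example is self-contained, but the shape — await the test data, await the run, respond with the success flag — is the point.

```javascript
// Stand-in for the real screenshot/diff pipeline, which resolves with a
// success flag once every test has been processed.
async function capture(tests) {
  return { success: tests.length > 0 };
}

// findTests stands in for the Mongo queries (e.g. Test.find / Test.findById).
function makeRunHandlers(findTests) {
  return {
    // GET /tests/run — run every test in the database
    runAll: async (req, res) => {
      const tests = await findTests();
      const { success } = await capture(tests);
      res.send({ success });
    },
    // GET /tests/run/:id — run a single test
    runOne: async (req, res) => {
      const tests = await findTests(req.params.id);
      const { success } = await capture(tests);
      res.send({ success });
    },
  };
}
```

Because the handlers await the capture call, the HTTP response is naturally delayed until the diff images have been generated.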
Generating the Diffs
To generate the diff images I used two Node libraries. One is Puppeteer, which provides a high-level API to control the Chrome browser; I’ll be using this to take the page screenshots. The other is Pixelmatch, a library by Mapbox which highlights pixel differences between two images. I started off by creating and exporting a module to capture the initial browser shots. This module accepts an array of test data as its argument and returns a success flag once it has finished creating the visual diff images on the local disk. In order to achieve this I had the module return a promise.
As it’s expecting an array of data, I need to fire off the runTest function for every item in the array, so I’ll lean even harder on promises. Using Array.map I converted my array of data into an array of promises. That way I can pass the mapped array into a Promise.all statement and resolve the wrapping promise once every item in the array has finished processing.
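The fan-out described above can be sketched as follows. runTest is stubbed here (the real version drives Puppeteer, covered below); the mapping and the wrapping promise are the part this illustrates.

```javascript
// Stub: the real runTest launches a browser and screenshots the pages.
function runTest(test) {
  return Promise.resolve({ test: test.name, success: true });
}

function capture(tests) {
  return new Promise((resolve, reject) => {
    // Array.map turns the array of test data into an array of promises...
    const pending = tests.map((test) => runTest(test));

    // ...which Promise.all collects, resolving the wrapping promise once
    // every item in the array is done processing.
    Promise.all(pending)
      .then((results) => resolve({ success: results.every((r) => r.success) }))
      .catch(reject);
  });
}
```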
I also added an if statement at the start of the block that creates the folder to hold the images if it doesn’t already exist; without it, a user accidentally deleting the folder would error out the entire process and leave the application broken.
The goal of the runTest function is to start a headless Chrome browser using Puppeteer and then do the following:
1. Set the viewport width if a size value is provided.
2. Navigate to the live page and wait for a few seconds to let any animations finish playing.
3. Take a screenshot of the live page.
4. Do the same for the dev page.
5. Generate the pixel comparison overlay using the live and dev page images.
6. Close the browser and return.
The documentation for Puppeteer is quite clear, and getting an initial proof of concept going was quite simple.
This initial pass of the function does mostly everything I need it to, except there’s a problem, primarily in the error handling department. If I run through this function and provide it two valid paths, it completes and everything is fine; however, if I provide at least one invalid path, it errors and doesn’t pass back any usable data for our API response. To improve this I utilized try, catch and finally. In the example below I set up catch cases on the goto calls and throw an error if one is triggered, bubbling it up to the wrapping catch case where I set the success property to false. If execution reaches the end of the try block, success is set to true instead, and the finally block runs either way so the browser is always closed.
The compare function that gets fired near the end of runTest also resolves a promise when it’s done processing, signalling the async function to move onto the next step. The code is mostly unchanged from the Pixelmatch README example.
With these building blocks in place I can now call the capture module when our endpoint is requested, and trigger a response with usable data. The response will be delayed until the runTest function has been fired for every item in the array.
For the Front-End I decided to make a simple application using React and Redux that will display a list of tests, with a test page to display our diffs, and a form page to create/edit the tests. Using Redux and Axios I began by creating a series of action creators that will fire off the API calls to the Node service I just created.
Some of the action creators I created require a reducer to bind the response to the Redux state, so in the reducers/index.js file I added some. In the below example all represents all of the tests in the API /tests endpoint, whereas test represents the current test that is being viewed. testValidation will display the result of a test that has just finished running, and will allow us to access the success flag from the API response.
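Those three reducers might look like this; the action type strings are assumptions matching hypothetical action creators, and the combineReducers wiring is shown as a comment.

```javascript
// `all`: every test from the /tests endpoint, for the list page.
const all = (state = [], action) => {
  switch (action.type) {
    case 'FETCH_TESTS':
      return action.payload;
    default:
      return state;
  }
};

// `test`: the test currently being viewed.
const test = (state = null, action) => {
  switch (action.type) {
    case 'FETCH_TEST':
      return action.payload;
    default:
      return state;
  }
};

// `testValidation`: the result of the test that just finished running,
// exposing the success flag from the API response.
const testValidation = (state = [], action) => {
  switch (action.type) {
    case 'RUN_TEST':
      return [action.payload];
    default:
      return state;
  }
};

// In reducers/index.js these would be combined with Redux's combineReducers:
// export default combineReducers({ all, test, testValidation });
```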
With the action creators and reducers set up I can utilize the Redux connect method to bind the Redux state to the React component props. In the example below I fire off the fetchTests action creator before the component mounts within componentWillMount(), then use connect() with a mapStateToProps function to bind that state to the component props, allowing me to access this.props.tests within the render() method.
As there needs to be a page to view a specific test, I’ll need to do something similar to the example above. The primary change is that I’ll run fetchTest instead, using the id in the page route as the argument. As I pushed the user to the /tests/:id path within TestIndex using react-router, I can harness that data by accessing this.props.params.id. You can learn more about how react-router works in its documentation.
I also need to create a way to actually run the tests, and delete them if desired. Just as I’ve run fetchTest, I can extend the same functionality to use the other action creators on button presses. You can see in the onRunClick() method that I’m calling the runTest action creator, then checking in the success handler whether the first index of testValidation has success: false; this way I can display an error to the user if there was an issue with the test. I can also use the this.state.running boolean to display a loading bar while the test runs, for a better user experience.
I also needed a component to display the visual diffs, as that’s what this entire application is all about! For this I created a simple component which accepts the src and overlay paths as props, handed down by the TestShow component. I then set up a simple click handler which toggles the image source to the overlay. If there’s an error with the test, for instance if it fails to run or hasn’t been run at all yet, the onError handler is triggered, setting the image to a placeholder.
The last order of business is creating a form which can create and edit the test data and submit it to the database. As I already have an action creator for this, I just need to push an object into the function. I decided to use the redux-form library, as it has some great built-in tools for form validation. Whenever we touch the text boxes the validate function runs, checking that we’re not leaving any required fields empty. There are also checks set up on the backend that require the name, live and dev fields to be filled out, otherwise the endpoint will reject the request.
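The validate function itself is a plain function from form values to an errors object; the error messages are assumptions, but the required fields (name, live, dev) mirror the backend checks described above.

```javascript
// Validate the form values; redux-form displays a field's error message
// once that field has been touched.
const validate = (values) => {
  const errors = {};

  ['name', 'live', 'dev'].forEach((field) => {
    if (!values[field]) {
      errors[field] = `Enter a ${field} value.`;
    }
  });

  return errors;
};

// The form component is then wrapped so redux-form knows to run validate:
// reduxForm({ form: 'TestForm', validate })(TestForm)
```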
I wanted this component to be reusable for both the new test and edit use cases, so I set it up to accept an onSubmit function as a prop, which will be our action creator that will submit the form data to the database. You can see in the export statement that I’m setting up the Redux form here, and even requiring validate from the TestForm component so it knows to run it.
With that setup I could now include the <TestNew /> or <TestEdit /> component in a route and have them reuse the same form with varying functionality. The primary difference between the two of them is that TestEdit starts off with some initial state that is sourced from the test that is currently being edited.
After numerous rounds of testing and making sure everything worked together, I finally ended up with an application that worked! This post was intended to give you insight into my thought process while building this project; if you’d like to go into more detail you can check out the source on GitHub.
As always if you have any questions or feedback feel free to leave a comment, or reach out on Twitter or via my contact form.
James Ives is a Full-Stack developer from London currently living and working in the United States for The Washington Post.