At the beginning of any technology project, from crafting a new website to developing an industry-first product, there is a core set of assumptions. Assumptions vary in size and are by nature subjective, often based on a mix of data and opinion, so a crucial piece of any development is confirming or reevaluating your assumptions as early as possible. A product that continues to build on incorrect assumptions is destined for failure.

An assumption might look something like:

  • Developing a one-page checkout to replace the multi-step process for our eCommerce website will reduce the number of cart abandonments.
  • Our new system will predominantly be used on a desktop because our target users are office-based.

Following a standard product development process, from proof of concept to minimum viable product and beyond to a fully-fledged product, businesses have ample opportunity to test these assumptions in different ways and, most importantly, generate feedback that influences decision-making.

But before, during, and even after development, how can we gather feedback from target and active users?

Establish test groups early

It might sound like something more fitting of an enterprise-sized organisation, but small businesses and even single-person startups can establish test groups for new development, too.

Having a test group of 5-10 people, which could be anything from staff members across different departments down to friends and family, gives you the foundations for gathering feedback as early as possible on your development. For industry-specialised developments, where your test group participants won’t always have domain knowledge, their feedback on whether a tool is intuitive is crucial.

With a small group established, set regular intervals for the participants to provide feedback to you, either in a group setting or individually. This feedback, particularly when delivered in a group setting, is likely to fuel further conversation and second-stage feedback that you can use to iterate on your development so far.

Utilise behaviour analytics

Gathering direct, communicated feedback from users and even test groups is notoriously difficult. People are busy and the time it takes to share feedback, particularly if an experience is poor, is time that’s better invested in completing the same action with one of your competitors.

Assuming that your users will consistently and willingly give you feedback is not a reliable strategy, which is where automated feedback tools – the likes of behavioural analytics – come into play.

Behaviour analytics, in the context of web and software development, is the practice of tracking your users’ activity. It often focuses on aspects like:

  • How far users scroll when reading a page, which indicates where in the content the answer or action they’re looking for sits,
  • Where users position and move their cursor on a desktop, with movement towards the top left often signalling an intent to leave the page quickly,
  • Which pages users spend the most time on, which can indicate either genuinely engaging content or a roadblock in the process where the next step to take is unclear,
  • The last action a user takes before leaving the website or application.

Using a combination of tools like Google Analytics and Hotjar, you can quickly integrate behaviour analytics into your development from an early stage and begin to track the activity of your users (with their consent, of course).
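As a minimal sketch of what this tracking can look like under the hood, the snippet below reports scroll depth as users move down a page. It assumes Google Analytics’ gtag.js snippet is already installed; the thresholds and the scroll_depth event name are illustrative rather than a prescribed schema, and tools like Hotjar capture this kind of signal for you without any custom code.

```typescript
// Illustrative scroll-depth tracking, assuming gtag.js is already loaded.
declare function gtag(...args: unknown[]): void;

const thresholds = [25, 50, 75, 100]; // percentages of the page scrolled
const reported = new Set<number>();   // avoid sending the same threshold twice

window.addEventListener("scroll", () => {
  const scrollable = document.documentElement.scrollHeight - window.innerHeight;
  if (scrollable <= 0) return; // page fits in the viewport, nothing to measure

  const percent = (window.scrollY / scrollable) * 100;
  for (const threshold of thresholds) {
    if (percent >= threshold && !reported.has(threshold)) {
      reported.add(threshold);
      // Custom event name and parameter are assumptions for this sketch.
      gtag("event", "scroll_depth", { depth_percent: threshold });
    }
  }
});
```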

See what’s hot and what’s not with heatmaps

Behaviour analytics, by tracking where your users are on any given screen or page, allows you to create aggregated heatmaps: a red-blue overlay on a screenshot of your entire page that shows where your users as a whole tend to spend their time.

As an example, if the vast majority of your users remain ‘above the fold’ on a given screen (that is, they do not scroll down at all), then the top of your page would be shown in red as the hot area, transitioning to blue, the cold area, the further down the page you go, which in essence means nearly nobody has viewed or interacted with that content.

Whilst it’s somewhat fair to assume (coming back to assumptions) that above the fold is most likely always going to be the primary hot region, a heatmap with enough data can also show you where at the top of the page users tend to interact. If you have key action buttons spread evenly between the left and right of the screen, a heatmap will show you which one users spent more time near, helping you position your call to action in line with user activity and intent.

Heatmaps are data-driven and are a great way to challenge your assumptions on how you believe your users act when using your product.
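To make the aggregation behind a heatmap concrete, here is a minimal sketch that buckets cursor positions into a coarse grid. The grid size and the sendToAnalytics helper are assumptions for illustration; in practice a tool like Hotjar handles collection, aggregation, and rendering of the overlay for you.

```typescript
// Illustrative heatmap data collection: bucket cursor positions into a grid.
// sendToAnalytics is a hypothetical helper standing in for your own endpoint.
declare function sendToAnalytics(payload: { type: string; counts: number[] }): void;

const GRID_COLS = 20;
const GRID_ROWS = 20;
const counts: number[] = new Array(GRID_COLS * GRID_ROWS).fill(0);

document.addEventListener("mousemove", (event: MouseEvent) => {
  // Convert the page coordinates into a grid cell and count the hit.
  const col = Math.min(
    GRID_COLS - 1,
    Math.floor((event.pageX / document.documentElement.scrollWidth) * GRID_COLS),
  );
  const row = Math.min(
    GRID_ROWS - 1,
    Math.floor((event.pageY / document.documentElement.scrollHeight) * GRID_ROWS),
  );
  counts[row * GRID_COLS + col] += 1;
});

// Flush the aggregated counts every 30 seconds; hotter cells render redder in the overlay.
window.setInterval(() => {
  sendToAnalytics({ type: "heatmap-sample", counts });
}, 30_000);
```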

Utilise A/B testing

A/B testing is a crucial tool for gathering insight into user activity. It’s particularly useful when a product already exists and you want to redevelop an existing feature by trialling a new way of doing things with a subset of your users. Through this method, you’ll see which option, A or B, delivers your preferred action more frequently or more quickly.

A prime example is a website’s checkout process. Earlier this year, Shopify released their one-page checkout solution which merchants can optionally enable. The alternative is a more traditional three-step checkout process, which many customers are arguably more familiar with. As a retailer, you may choose to A/B test both checkout options on the basis that:

  1. The one-page checkout means there are fewer pages to click through before a purchase is completed, at the expense of the checkout page being longer and requiring a user to scroll.
  2. The three-page checkout is what our website customers are familiar with; it’s simple to progress through, but requires the user to complete three pages of information before their purchase is confirmed.

A case can be made for both solutions, so the ideal way to test which is the best for conversion rates would be through an A/B test.
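As a rough sketch of how such a split might be wired up, the snippet below assigns each visitor to one of the two checkout variants at random and keeps that assignment stable across visits. The storage key, variant names, and trackEvent helper are assumptions for illustration; dedicated A/B testing tools handle assignment, persistence, and statistical reporting for you.

```typescript
// Illustrative 50/50 variant assignment persisted in localStorage.
// trackEvent is a hypothetical helper for recording which variant was shown.
declare function trackEvent(name: string, params: Record<string, string>): void;

type Variant = "one_page_checkout" | "three_step_checkout";
const STORAGE_KEY = "checkout_variant";

function getCheckoutVariant(): Variant {
  const stored = window.localStorage.getItem(STORAGE_KEY) as Variant | null;
  if (stored) return stored; // returning visitors keep the same experience

  const variant: Variant =
    Math.random() < 0.5 ? "one_page_checkout" : "three_step_checkout";
  window.localStorage.setItem(STORAGE_KEY, variant);
  return variant;
}

// Route the user and record the assignment so completed purchases can later
// be compared between the two groups.
const variant = getCheckoutVariant();
trackEvent("checkout_variant_assigned", { variant });
```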

Address your audience

We’ve already covered that getting users to willingly give valuable feedback is notoriously difficult, but that doesn’t mean it should be ignored entirely. It simply should not be relied on as your only source of feedback.

The most common way of addressing an audience is in the form of post-action feedback. For example, asking the user a simple yes-no question along the lines of “Did you find the checkout process was simple?” once they’ve completed a purchase.
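A minimal sketch of that kind of post-action prompt is shown below; recordFeedback is a hypothetical helper standing in for whatever endpoint or analytics event you use to store the answer, and a real implementation would use an unobtrusive in-page widget rather than a blocking dialog.

```typescript
// Illustrative post-checkout yes/no prompt.
// recordFeedback is a hypothetical helper for storing the response.
declare function recordFeedback(question: string, answer: "yes" | "no"): void;

function askPostCheckoutQuestion(): void {
  const question = "Did you find the checkout process was simple?";
  // window.confirm keeps the sketch dependency-free; swap in your own UI.
  const answer: "yes" | "no" = window.confirm(question) ? "yes" : "no";
  recordFeedback(question, answer);
}

// Call once the purchase confirmation screen has loaded.
askPostCheckoutQuestion();
```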

You can also address your audience on a much wider scale through the likes of surveying, which typically takes place as part of researching your initial product development. Whilst this does not always provide feedback on your development directly, it provides valuable insight into what your users often expect from a system or product for it to bring them value.