I had a very interesting Friday last week, and I honestly wasn’t expecting it.
Some of the eQ team visited the Digital Accessibility Centre (DAC) on Wednesday to observe testing of our live eQ product along with some prototypes we are currently working on. During their visit they each made notes and observations about how things were performing, then pulled them together in a shared Google Sheet on Thursday. As an aside, this approach worked really well and we’ll be using it again next time.
At Friday morning’s stand-up the team were running through the board as usual when we hit the card for the DAC testing, sitting in the Research Outcomes column. After a brief nod towards the associated Google Sheet, we agreed to review it quickly, immediately after the stand-up was complete, thinking it wouldn’t take long and not wanting the outcomes to linger.
What followed was a completely unplanned and unexpected full-day session working through the findings (we finished just before 4 o’clock). We generated a wealth of small, actionable changes: some that would immediately improve the system, and others that identified areas needing new prototypes and additional research.
Although this was a full-day session, there was a real sense that the time was well spent. We left the room late in the afternoon, all slightly exhausted from the marathon stint but still buzzing. It felt good: a sense that we’re doing something right and that it’s going to make a difference.
I caught up on the inevitable inbox queue that accompanies being away from the desk all day, then went to get a coffee. On returning I found the team huddled around one of the developers’ desks, listening to VoiceOver on an iPhone and trying out different approaches to one of the areas that hadn’t tested well. I looked around and realised that hardly anyone was left in the office (it now being what I would consider late for a Friday!). Suffice it to say, we were inspired.
To me this speaks volumes about the power of testing and research with real users; it energised and mobilised the team in a way that a list of requirements never could. I wasn’t present during the actual testing, but the effect is infectious, and I truly found myself drawn in.
I will admit that we should have been doing this kind of accessibility testing (with real users) much earlier in the product lifecycle, and as a result we had accumulated a lot of potential debt. Going forward, we’re committed to testing little and often rather than in large, infrequent batches. To support this we’re discussing with DAC how we can undertake quick remote testing sessions.
I expect that as the team becomes more familiar with what does and doesn’t work for accessibility in our service, we will become more adept at putting together a better initial solution and spotting likely problem areas sooner. However, it is clear to me and the team, from this round of testing, that there is no substitute for testing with real users, or for the power that comes with it.