The Digital Publishing technology stack – Part 3: Our frontend systems

In part one we looked at our hosting platform and how we build, deploy and manage services in our test and production environments. And in part two we looked at our backend systems, and how we import, store, publish and export data.

In the third and final part, we’ll look at our frontend systems, and how we build internal tooling and web services for publishing and accessing our content.

Part 3: Our frontend systems

We have a few independent frontend products which work in very different ways.

The ONS website

The ONS website is mostly rendered server-side using a mixture of Go and Java – most of it is in two frontend components known as babbage and dp-frontend-renderer. We also have dp-frontend-router, which is responsible for handling and routing all inbound web traffic.
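
To give a flavour of what that routing looks like, here's a minimal sketch in Go of path-prefix routing in the style of dp-frontend-router. The ports, paths and backend names are illustrative assumptions, not our real configuration:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// proxyTo returns a handler that forwards requests to the given backend URL.
func proxyTo(rawURL string) http.Handler {
	target, err := url.Parse(rawURL)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	// Hypothetical backends: the legacy monolith and a newer dedicated service.
	babbage := proxyTo("http://localhost:8080")
	datasets := proxyTo("http://localhost:20000")

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Send paths owned by newer services to their own controllers;
		// everything else falls through to the legacy monolith.
		if strings.HasPrefix(r.URL.Path, "/datasets") {
			datasets.ServeHTTP(w, r)
			return
		}
		babbage.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":80", nil))
}
```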

Babbage is a monolithic service which contains most of our handlers/controllers, templates, and lots of other supporting code for making API calls, handling search and supporting our publishing process. We’re no longer actively developing Babbage, so any new or updated website functionality is built using our new architectural approach (essentially, microservices with clearly defined purposes and responsibilities).

Our new frontend controllers typically belong to their own service – for example, our new “Filter a dataset” service is powered by dp-frontend-filter-dataset-controller – while the templates belong to the dp-frontend-renderer component. This has resulted in much cleaner code, and gives full ownership of specific functionality to a clearly defined service.
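
To make that split concrete, here's a hedged sketch of what a controller handler might look like: it builds a page model and asks the renderer service for the HTML. The endpoint, port and model fields are made up for illustration – the real dp-frontend-renderer API may well differ:

```go
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
)

// filterPage is a hypothetical page model sent to the renderer;
// the real model will differ.
type filterPage struct {
	Title      string   `json:"title"`
	Dimensions []string `json:"dimensions"`
}

func filterHandler(w http.ResponseWriter, r *http.Request) {
	model := filterPage{
		Title:      "Filter a dataset",
		Dimensions: []string{"time", "geography", "age"},
	}

	body, err := json.Marshal(model)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// POST the model to the renderer service and stream its HTML back out.
	resp, err := http.Post("http://localhost:20010/dataset-filter",
		"application/json", bytes.NewReader(body))
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	io.Copy(w, resp.Body)
}

func main() {
	http.HandleFunc("/filters", filterHandler)
	log.Fatal(http.ListenAndServe(":20001", nil))
}
```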

The website uses HTML5, CSS3 and JavaScript, but we need to support old browsers like Internet Explorer 9, so we’re sometimes a bit limited in the technologies we can use, or in which parts of the HTML, CSS and ECMAScript specs we can rely on.

Some of the website is a bit heavyweight – for example, we use jQuery for simple things we could have written ourselves, and we import vast amounts of JavaScript and CSS into every page, even when most of it isn’t being used. Our frontend performance is already pretty good, but we know we can do more, so we’re looking at ways to optimise this, particularly for people using assistive technologies, and for people on low-powered devices or high-latency, low-bandwidth networks.

The developer site

The developer site is (or will be) built as static HTML pages and hosted on Amazon S3.

The current (prototype) developer site is built with DapperDox, but we’ve found it difficult to customise and extend for what we need, and it means running, monitoring and maintaining server-side components for something that really isn’t very dynamic.

Our new beta developer site (built as part of the “Filter a dataset” service) uses Go templates to generate static HTML pages directly from our OpenAPI specs, and is published to Amazon S3 and hosted using CloudFront. We’ll be using progressive enhancement to add additional functionality where possible.
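
Here's a simplified sketch of that idea: reading an OpenAPI spec and rendering a static page with Go's html/template, ready for a deploy step to sync to S3. A real implementation would use a proper OpenAPI parser – this just pulls out a couple of fields to show the shape of it:

```go
package main

import (
	"html/template"
	"log"
	"os"

	"gopkg.in/yaml.v2"
)

// spec models just enough of an OpenAPI document for this sketch.
type spec struct {
	Info struct {
		Title       string `yaml:"title"`
		Description string `yaml:"description"`
	} `yaml:"info"`
	Paths map[string]interface{} `yaml:"paths"`
}

var page = template.Must(template.New("api").Parse(`<!DOCTYPE html>
<html>
<head><title>{{.Info.Title}}</title></head>
<body>
  <h1>{{.Info.Title}}</h1>
  <p>{{.Info.Description}}</p>
  <ul>{{range $path, $ops := .Paths}}<li><code>{{$path}}</code></li>{{end}}</ul>
</body>
</html>`))

func main() {
	raw, err := os.ReadFile("openapi.yaml")
	if err != nil {
		log.Fatal(err)
	}

	var s spec
	if err := yaml.Unmarshal(raw, &s); err != nil {
		log.Fatal(err)
	}

	// Write the static page to disk; a deploy step would sync the output
	// to S3, with CloudFront in front of it.
	out, err := os.Create("index.html")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if err := page.Execute(out, s); err != nil {
		log.Fatal(err)
	}
}
```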

This significantly reduces our running costs, is far simpler to understand and maintain, makes updating it and running multiple versions much easier, and hopefully gives a nicer user experience. We’ve been user testing it, and had some really positive feedback, but we know we’ve still got more work to do before it becomes our production developer site.

Our content management system

Our CMS (Florence) is a client-side React app – plus a legacy section built with jQuery – with some server-side proxying in Go to talk to our APIs.
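
As a rough sketch of that proxying layer (the paths, cookie and header names, port and upstream address are all illustrative assumptions), the Go side might look something like this: the React app calls same-origin paths under /api/, and the server forwards them to the backend API with the user’s access token attached:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical backend API the CMS talks to.
	upstream, err := url.Parse("http://localhost:22000")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	http.Handle("/api/", http.StripPrefix("/api", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Promote the session cookie set at login to an auth header
		// before forwarding the request upstream.
		if c, err := r.Cookie("access_token"); err == nil {
			r.Header.Set("Authorization", "Bearer "+c.Value)
		}
		proxy.ServeHTTP(w, r)
	})))

	// Everything else serves the built React bundle.
	http.Handle("/", http.FileServer(http.Dir("./build")))

	log.Fatal(http.ListenAndServe(":8081", nil))
}
```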

Florence is used internally by our publishing, editorial and data visualisation teams for preparing content for publication, and by other parts of the organisation to preview and sign off publications.

Since we’re in a known environment, we can rely on our users having the latest browsers, a (reasonably) fast network, JavaScript enabled, and known screen sizes. This means we can use far newer technologies and much heavier libraries without worrying as much about the capabilities of our users’ devices.

It also means our users are typically in the same building as us, which makes it much easier to identify, diagnose and fix bugs, and gives us plenty of opportunities for user research and testing.

Florence is currently a mixture of a clearly structured React app and some legacy jQuery-heavy concatenated JavaScript files. We’ve replaced significant parts of the jQuery code with React, but we haven’t had time to look at the “workspace” yet – that’s the most complex (and also the most business-critical) part of the product, where nearly all of the content preparation is done (like creating and editing pages, uploading files or publishing data visualisations).

We’re planning on breaking this apart slowly – for example, the functionality required for “Filter a dataset” has been built using React and lives outside the workspace – and we expect to do something similar when we take on the bulletin redesign work next year.

Our developer dashboard

We also have a “developer dashboard” which lives on GitHub, is entirely client-side, and uses Google Firebase for near-realtime updates from our Concourse servers. It actually uses AWS Lambda to fetch data from Concourse and write it to Firebase, but we’d like to integrate it directly into our CI pipelines.
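
The data path is roughly: a Lambda polls Concourse, writes the result to Firebase, and the dashboard subscribes to Firebase for its near-realtime updates. A minimal sketch of the Lambda side might look like this – the Concourse and Firebase URLs are placeholders, and we’re assuming an unauthenticated Concourse endpoint and Firebase’s Realtime Database REST API:

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"net/http"

	"github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context) error {
	// Fetch pipeline state from Concourse (placeholder URL; a real setup
	// would likely need authentication).
	resp, err := http.Get("https://ci.example.com/api/v1/pipelines")
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}

	// Write it to the Firebase Realtime Database REST API; clients
	// subscribed to /pipelines see the change almost immediately.
	req, err := http.NewRequestWithContext(ctx, http.MethodPut,
		"https://example-dashboard.firebaseio.com/pipelines.json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	put, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer put.Body.Close()

	if put.StatusCode != http.StatusOK {
		return fmt.Errorf("firebase write failed: %s", put.Status)
	}
	return nil
}

func main() {
	lambda.Start(handler)
}
```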

It provides information on our daily releases and build pipelines, and was built as a 10% time project by our frontend engineers.

Although we haven’t had much time to focus on this, it lives permanently on wall displays in our office and already provides a huge amount of value.

What we’ve been doing

We’ve still got some code left over from the alpha and beta projects that came before the website launch. This is becoming a maintenance burden, so we’re working hard to refactor and replace as much as we can – building new services and APIs, and generally cleaning things up a bit at a time.

But there are still more surprises hiding in our frontend codebases that we haven’t found or had time to fix yet – things like HTML templates making backend API calls, search API clients which return formatted HTML rather than structured data, and monolithic web services which use obscure, undocumented web frameworks that don’t offer the flexibility or reliability we’d expect.

Over the past year we’ve been working on the “Filter a dataset” service and refactoring Florence to introduce React and remove a lot of the legacy code. That means we haven’t been able to focus as much on the website or developer site as we’d have liked, but lots of other discovery and alpha work has been going on to explore the future work we’d like to do – some of which has already been applied to the existing website, like the redesigned bulletin headers.

At the moment these products are all owned by a small team of frontend engineers, which means regular context switching and increases the cognitive demand on our engineers when making changes that span multiple systems.

Some of the tools and services we use in frontend engineering:

  • HTML5 and CSS3 – standard markup and styling languages
  • Node and NPM – JavaScript packages and build tools
  • Sass – CSS preprocessor
  • Webpack – JavaScript module and asset builds
  • React – a client-side web application framework
  • Go – for some of our server-side code, routing and templating
  • jQuery – a JavaScript library for dynamic web development
  • BrowserStack – virtualised browser testing environment
  • Puppeteer – browser-based acceptance testing in Chrome
  • Selenium – browser-based acceptance testing in most browsers

Our plans

Over the next couple of years we’ll be building on the work we’ve already done – continuing to refactor Florence, extending the beta developer site and getting the “Filter a dataset” service live. We’ll also be adding geography support to the ONS website and taking on the bulletin redesign work.

Our goal is to have our web and CMS products owned and maintained by separate teams. We’ve put a lot of thought into this, and we feel it will let us give each product the focus it deserves, and give us more time for regular maintenance, bug fixes and feature delivery.

To achieve this, we’re also planning on scaling up our frontend engineering team – we’ve recently hired a new engineer who’ll join us in March, and we’re hoping there are more roles to come in future!
