The Role of Tech In Marketing: Vigilance

Josh Berry-Jenkins - Technical Director
Written on October 3rd, 2022. Last updated on October 10th, 2023.
Welcome to our first in-depth post on what we here at Bind Media dub the Five Pillars of tech within digital marketing. If you haven’t already read the lead-up post, you can find it here.
Today we are focusing on the first key aspect of tech in marketing: the pillar we describe as Vigilance. To begin with, let’s talk about exactly what we mean by that.

What is Vigilance?

Vigilance has been a core foundational part of digital marketing for a long time: keeping a close eye on budgets, watching for fluctuations in impressions or conversions, and sounding the alarm when something is not as it should be. But with the shift towards ever greater levels of automation, ever-increasing budgets and client demands for returns, marketers can easily find themselves with an overflowing plate.
This is where Vigilance comes into play for the tech team: it’s essentially fighting automation with automation. “But Google Ads already has systems to tell you about fluctuations!” I hear you cry. That’s true, and many platforms have built-in controls that marketers should be making use of on a daily basis, but here are my counterpoints:
  • Most of these platforms work independently, meaning each only gives part of the picture
  • Their alerts arrive either in-platform or by email, both of which are over-saturated mediums
  • Some of these platforms are missing crucial data from other relevant data sources
  • There is often no way to run custom calculations on this data
  • There is often no way to adjust the sensitivity of anomaly detection

Handling Vigilance

Fine, so we understand the need for vigilance and the limitations of the built-in tools, but what are the alternative solutions?
One route is to go with an external tool, for example the likes of Opteo for Google Ads insights. This automates vigilance and integrates it with different mediums such as Slack, and it lets you adjust what feedback you actually get at an account level, but sadly it doesn’t help with other platforms or with connecting external data sources. There are other tools, of course, across a range of platforms and price points, but here are my main general issues with them as a whole:
  • Overly platform specific
  • Lacking in customisation options
  • Too expensive for what they give
  • Require a large amount of time or money to set up and maintain
They don’t all have all of these issues, of course, and for some marketers these tools may be a useful band-aid to help shore up their overall vigilance capabilities, but I believe the role of tech in this evolving world of digital marketing is the key to doing better.

Bind Media’s Approach

So how are we handling vigilance? What solution do we have that actually ticks all the right boxes?
Firstly, I don’t want to bog this post down with tonnes of technical detail. Every company is unique, both in the resources it has available and in its specific requirements, but here is a simplified overview of how we handle Vigilance within Bind Media.
As with all good data sources, it starts with ETL (Extract, Transform, Load). We pull data from a client’s various marketing API sources into a central SQL repository, where every metric that can be combined is combined. This gives us a full client-level data source, meaning all the information pertinent to running a Vigilance initiative sits in one single place (a simplified sketch follows the list below).
  • Multiple ad platforms’ data stored in one place
  • Ability to do custom calculations (via SQL)
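To make that concrete, here is a heavily simplified sketch of what one such ETL step might look like in Google Apps Script. The reporting endpoint, project, dataset and field names are illustrative placeholders rather than our production setup; the point is simply that each platform’s metrics get normalised into a shared schema before landing in BigQuery.

```javascript
// Simplified ETL sketch (Google Apps Script, with the BigQuery advanced service enabled).
// Endpoint, project, dataset and field names below are placeholders for illustration.
function loadDailyAdMetrics() {
  var projectId = 'your-gcp-project';     // placeholder GCP project
  var datasetId = 'marketing_warehouse';  // placeholder central repository
  var tableId = 'daily_platform_metrics';

  // Extract: pull yesterday's stats from a (hypothetical) ad platform reporting API.
  var response = UrlFetchApp.fetch('https://example-ads-api.com/v1/report?date=yesterday', {
    headers: {
      Authorization: 'Bearer ' + PropertiesService.getScriptProperties().getProperty('ADS_API_TOKEN')
    },
    muteHttpExceptions: true
  });
  var report = JSON.parse(response.getContentText());

  // Transform: normalise the platform's rows into a shared schema so metrics are combinable.
  var rows = report.rows.map(function (r) {
    return { json: {
      report_date: r.date,
      platform: 'example_ads',
      client_id: r.accountId,
      impressions: r.impressions,
      clicks: r.clicks,
      conversions: r.conversions,
      spend: Number(r.costMicros) / 1e6
    } };
  });

  // Load: stream the normalised rows into the central BigQuery table.
  BigQuery.Tabledata.insertAll({ rows: rows }, projectId, datasetId, tableId);
}
```

A scheduled run of one such function per platform is enough to keep the central repository topped up each day.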
With the main data view in place, we can combine additional data via further ETL if desired, such as important internal CRM metrics like the number of Qualified Leads or Lead Classification, meaning we get to enrich this data before we analyse it. We also combine this with a simple client settings table that allows us to adjust anomaly sensitivity per client; what is technically an anomaly for one client may be more or less of a worry for another (a sketch of this enrichment follows the list below).
  • Adding missing crucial data from other relevant data sources
  • Ability to adjust anomaly sensitivity levels
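As an illustration of that enrichment step, the sketch below builds a combined view joining the ad platform metrics with a CRM table and a client settings table. The table and column names (crm_daily, client_settings, anomaly_sensitivity and so on) are assumptions made for the example, not our exact schema.

```javascript
// Illustrative sketch: create an enriched daily view in BigQuery from Apps Script.
// All table and column names are placeholders for the example.
function createEnrichedDailyView() {
  var sql =
    'CREATE OR REPLACE VIEW `your-gcp-project.marketing_warehouse.daily_enriched` AS\n' +
    'SELECT\n' +
    '  m.report_date,\n' +
    '  m.client_id,\n' +
    '  SUM(m.spend)                      AS spend,\n' +
    '  SUM(m.conversions)                AS conversions,\n' +
    '  ANY_VALUE(c.qualified_leads)      AS qualified_leads,     -- CRM enrichment\n' +
    '  ANY_VALUE(s.anomaly_sensitivity)  AS anomaly_sensitivity  -- per-client tuning\n' +
    'FROM `your-gcp-project.marketing_warehouse.daily_platform_metrics` m\n' +
    'LEFT JOIN `your-gcp-project.marketing_warehouse.crm_daily` c\n' +
    '  USING (report_date, client_id)\n' +
    'LEFT JOIN `your-gcp-project.marketing_warehouse.client_settings` s\n' +
    '  USING (client_id)\n' +
    'GROUP BY m.report_date, m.client_id';

  BigQuery.Jobs.query({ query: sql, useLegacySql: false }, 'your-gcp-project');
}
```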
Great, we have pretty much all the data we require, but how do we turn this into a proper Vigilance system that integrates with our team of marketers?

Implementation Example

We decided to go down the Google route, making use of Cloud architecture and Google Apps Script libraries. This gave us access to Cloud Functions to run Node.js or Python libraries against the data we were storing in Google BigQuery if required, and the flexibility to analyse that data in ways such as sliding weighted Z-scores for combined budget variations, or anomalies in the CPAs of Qualified Leads, via SQL or more complex means (a simplified example query follows).
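To give a flavour of what that analysis can look like, here is a simplified anomaly query run from Apps Script via the BigQuery service. It uses a plain 28-day rolling window rather than the weighted variant, reuses the illustrative table and column names from the earlier sketches, and lets the per-client sensitivity setting scale the Z-score threshold.

```javascript
// Simplified spend-anomaly check: compare yesterday's spend against a 28-day
// rolling mean and standard deviation per client. Names are illustrative.
function getSpendAnomalies() {
  var sql =
    'WITH stats AS (\n' +
    '  SELECT\n' +
    '    client_id,\n' +
    '    report_date,\n' +
    '    spend,\n' +
    '    anomaly_sensitivity,\n' +
    '    AVG(spend)    OVER w AS rolling_avg,\n' +
    '    STDDEV(spend) OVER w AS rolling_sd\n' +
    '  FROM `your-gcp-project.marketing_warehouse.daily_enriched`\n' +
    '  WINDOW w AS (\n' +
    '    PARTITION BY client_id ORDER BY report_date\n' +
    '    ROWS BETWEEN 28 PRECEDING AND 1 PRECEDING\n' +
    '  )\n' +
    ')\n' +
    'SELECT client_id, report_date, spend,\n' +
    '       SAFE_DIVIDE(spend - rolling_avg, rolling_sd) AS z_score\n' +
    'FROM stats\n' +
    'WHERE report_date = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)\n' +
    // anomaly_sensitivity defaults to 1.0 in this sketch; a higher value means fewer alerts.
    '  AND ABS(SAFE_DIVIDE(spend - rolling_avg, rolling_sd)) > 3 * anomaly_sensitivity';

  var result = BigQuery.Jobs.query({ query: sql, useLegacySql: false }, 'your-gcp-project');
  return (result.rows || []).map(function (row) {
    return { clientId: row.f[0].v, date: row.f[1].v, spend: row.f[2].v, zScore: row.f[3].v };
  });
}
```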
Combining this with our sensitivity settings and our enriched data set meant we could quickly assess client data for meaningful disruptions. Having the analysis alone is not enough, however; we also need a means of disseminating this information accurately.
As such, the final step was connecting this through to the marketing team. We decided on a lightweight solution: Apps Script triggers pull the latest analysis at key points in the day, and based on the results we send the relevant information to the marketing team via a Slackbot chat message (a sketch of the notification step follows). This proved quick to build and easy to adjust.
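Here is a hedged sketch of that notification step. For brevity it posts to a Slack incoming webhook rather than sending direct messages through a bot, and the webhook property name and message format are assumptions for the example.

```javascript
// Post a short anomaly summary to Slack via an incoming webhook (placeholder property name).
function notifySlack(anomalies) {
  if (!anomalies.length) return; // nothing unusual: stay quiet

  var webhookUrl = PropertiesService.getScriptProperties().getProperty('SLACK_WEBHOOK_URL');
  var lines = anomalies.map(function (a) {
    return '• *' + a.clientId + '* spend Z-score ' + Number(a.zScore).toFixed(1) +
           ' on ' + a.date + ' (spend: ' + a.spend + ')';
  });

  UrlFetchApp.fetch(webhookUrl, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify({ text: 'Morning anomaly check:\n' + lines.join('\n') })
  });
}
```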
Here is the order of events for the morning anomaly detection, for example:
  • The trigger runs early in the morning, before work
  • Apps Script uses the BigQuery library to call the SQL views
  • BigQuery runs the anomaly analysis and returns the data to the calling Apps Script
  • The script packages the data and sends it to the Slack API
  • The digital marketing team gets a breakdown of anomalous events waiting for them as direct messages first thing in the morning
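Wiring that flow together can be as simple as a single time-based trigger. The sketch below assumes the illustrative helper functions from the earlier snippets: run installMorningTrigger() once, and Apps Script then calls morningAnomalyCheck() on a daily schedule.

```javascript
// One-off setup: schedule the daily morning check before the team starts work.
function installMorningTrigger() {
  ScriptApp.newTrigger('morningAnomalyCheck')
    .timeBased()
    .everyDays(1)
    .atHour(6)
    .create();
}

// The scheduled job: query BigQuery for anomalies and push the summary to Slack.
function morningAnomalyCheck() {
  notifySlack(getSpendAnomalies());
}
```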
You can then layer in levels of anomaly severity as well; for example, we run further triggers throughout the day for any high-priority anomalies, to ensure our marketing team is always up to date. Additionally, this could all be run via Google Cloud architecture through Cloud Functions and Pub/Sub if SQL analysis proved insufficient.

Closing Thoughts

Vigilance has always been important, but the increased complexity of digital marketing across multiple platforms, and the need for more accurate and meaningful data, mean that Vigilance should be a key pillar for the technical team of any digital marketing agency in 2022. We’re able to pull off the above with a team of two and minimal budget overhead, for a solution that is bespoke and worthy of our clients.
It’s worth mentioning that there are many other features that can be rolled into a technical vigilance initiative. Some might want to use spellcheck APIs to check ad copy, others to monitor Quality Scores or crawl landing pages for changes that might break tracking. But hopefully this example case has helped open up the possibilities in your imagination.
As always, I’d love to hear your thoughts. What is your team building? How important do you believe using tech for vigilance is in today’s digital marketing environment? Reach out to our team today to get your free proposal!
Josh Berry-Jenkins - Technical Director
I’m Josh and I fill the role of Technical Director at Bind Media. I spend an ungodly amount of time tangled in deep analytical webs using Google’s suite of web analytics tools such as Google Tag Manager, Google Analytics and Google Apps Script (to name a few). You’ll generally find me being drip-fed copious amounts of coffee in a dark room, face brightly illuminated by multiple screens.
