Server-side GTM on Cloud Run with Cloud Load Balancing

Now, that’s a mouthful of a title!

Quite some time has passed since Simo and Mark wrote their super insightful articles about using GCP’s Cloud Run instead of App Engine to deploy server-side GTM containers, an approach that is well on its way to becoming (or already is?) the standard way of running server-side GTM.

If you haven’t read the articles mentioned above, I advise you to do so first as we will build on their knowledge. Or not, who am I to tell you what to do anyway?

I will try not to reiterate too many of the points Simo and Mark made, but one of many (possible) reasons for utilizing Cloud Run as opposed to App Engine is the fact that you can manage a multi-region setup in one Google Cloud project.

With App Engine, once you select a region at creation, congratulations: you just got married to that region! The only way to get divorced is to hop over to another Google Cloud project and never look back. Not recommended behavior!

Naturally, there is nothing wrong with sticking to one region for your project, but there might be some cases where it is very useful to be able to send requests and responses from different regions based on the proximity to the user’s location.

Most notably when a website has a lot of traffic from all over the world, it could be worthwhile to use the closest servers of Google (or any provider) to reduce latency. If you care about that sort of thing, that is.

A Load Balancer takes care of distributing the incoming traffic by routing it to the backend service based on routing rules and proximity.

If the logic seems very similar to a Content Delivery Network (CDN)…well, it certainly is.

Keen eyes you have there!

Let’s take a look at the steps that are required to set up Load Balancing with Cloud Run for deploying server-side GTM.

  1. Reserve an external static IP address in GCP
  2. Create Cloud Run services (manually or with shell script)
  3. Create a Load Balancer
    1. Frontend configuration
    2. Backend configuration
    3. Routing rules
  4. Add an A record to the DNS of your domain
  5. Modify the transport URL in the client-side GTM container

Disclaimer: running Cloud Run and Cloud Load Balancing is not free. Cloud Run can scale to zero if you set the minimum instances to be 0 (at your own risk), but configuring a forwarding rule for the Load Balancer has a minimum service charge. If you want to learn more about the pricing, check the documentation.

Without further ado, let’s dive into the setup!

Reserve an external static IP address in GCP

Navigate to IP Addresses in GCP and make sure you are in the right project.

Click on ‘Reserve external static address’.

Choose an appropriate name that you feel most comfortable with.

For a global load balancer, you will have to choose the ‘Premium’ network service tier.

Leave the IP version at IPv4.

Set the type to Global.

GCP reserve external IP address
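If you prefer the command line over clicking through the console, the same reservation can be sketched with gcloud. The address name here is an example; pick whatever matches your own naming convention:

```shell
# Reserve a global external IPv4 address on the Premium tier
# (required for a global load balancer). Name is an example.
gcloud compute addresses create gtm-lb-ip \
  --global \
  --ip-version=IPV4 \
  --network-tier=PREMIUM

# Print the reserved address so you can note it down for the DNS step later.
gcloud compute addresses describe gtm-lb-ip \
  --global \
  --format="get(address)"
```

Keep that IP handy: you will point your subdomain’s A record at it in step 4.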

Create Cloud Run services (manually or with shell script)

For this step, you can either create Cloud Run services manually or use Simo’s Cloud Shell script that creates a tagging and a debug service for you. While I recommend using the shell script if it’s your first time setting up a Cloud Run service, let me show you how you would do it manually.

Navigate to Cloud Run.

Click ‘Create service’.

By clicking on it, you will automatically enable the Cloud Run Admin API.

Set container image URL to:
gcr.io/cloud-tagging-10302018/gtm-cloud-image:stable

If you use Load Balancing with Cloud Run, I presume you want to manage a multi-region setup for which I recommend following a naming convention.

For example: gtm-server-*region*, where you create a service for every region you want to manage (gtm-server-eu-west3, gtm-server-us-east1 etc.) and a separate gtm-server-debug, which is only used for *drumroll* debugging. When you create the debug server, choose a region closest to the lovely people who are actually debugging.

Also, feel free to follow your own heart and logic when it comes to naming conventions. As long as they make sense.

GCP Cloud Run debug service

First, we set up the debug server.

Leave the ‘CPU is only allocated during request processing’ option checked.

Autoscaling: for the debug server a minimum of 1 (or 0 if you like to live dangerously) and a maximum of 1 should be fine. But hey, you do you.

Also, make sure that internet access is granted to your service (choose ‘All’) and allow unauthenticated invocations.

You can set the memory to 256 MiB as it’s only a debug server.

Don’t forget to add
RUN_AS_PREVIEW_SERVER = true and
CONTAINER_CONFIG = {{your server-side GTM container config}}
environment variables!
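For reference, the whole debug service can also be deployed in one gcloud command instead of the console. This is a sketch under assumptions: the service name, region, and the container-config placeholder are examples you would replace with your own values:

```shell
# Deploy the debug (preview) server. Replace <your container config>
# with the config string from your server-side GTM container settings.
gcloud run deploy gtm-server-debug \
  --image=gcr.io/cloud-tagging-10302018/gtm-cloud-image:stable \
  --region=europe-west3 \
  --platform=managed \
  --allow-unauthenticated \
  --memory=256Mi \
  --min-instances=1 \
  --max-instances=1 \
  --set-env-vars="RUN_AS_PREVIEW_SERVER=true,CONTAINER_CONFIG=<your container config>"
```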

GCP Cloud Run service settings

Click ‘Create’ and wait for the service to be created.

Make note of the URL of the debug server (you’ll need it for the PREVIEW_SERVER_URL environment variable of the production service(s)).

For the production service, we can copy the debug service and make a few modifications or just create a new service with the ‘Create service’ button.

Copy Cloud Run service

The settings you need to change:

  1. Service name to gtm-server-*region* or gtm-server-prod or whatever you like
  2. Select appropriate region
  3. Change the minimum and maximum instances (min. 3, max. 6 to mimic the recommended App Engine setup)
  4. Set the Memory to 512 MiB
  5. Replace the RUN_AS_PREVIEW_SERVER variable with PREVIEW_SERVER_URL and set the value to the debug server’s URL (the CONTAINER_CONFIG should be the same as the debug’s)

For a multi-region setup, you would just copy the production service and change the region and the autoscaling settings according to the expected traffic from that region. Repeat the process for every region.
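That repetition lends itself to a loop. A hedged sketch, where the regions, service names, and the debug URL are all examples (swap in the actual URL you noted down for your debug service and your own container config):

```shell
# Deploy one production tagging service per region.
# Regions and the PREVIEW_SERVER_URL value below are placeholders.
for REGION in europe-west3 us-east1; do
  gcloud run deploy "gtm-server-${REGION}" \
    --image=gcr.io/cloud-tagging-10302018/gtm-cloud-image:stable \
    --region="${REGION}" \
    --platform=managed \
    --allow-unauthenticated \
    --memory=512Mi \
    --min-instances=3 \
    --max-instances=6 \
    --set-env-vars="PREVIEW_SERVER_URL=<your debug server URL>,CONTAINER_CONFIG=<your container config>"
done
```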

Create a Load Balancer

Navigate to Cloud Load Balancing in GCP.

Click ‘Create Load Balancer’.

Start configuration for HTTP(S) Load Balancing.

Choose ‘From Internet to my VMs or serverless services’ and ‘Global HTTP(S) Load Balancer’.

If you are wondering what the difference is between the Global and Global (classic) load balancers, check out this documentation.

  1. Frontend configuration
    • Give it an awesome name
    • Choose HTTPS protocol
    • IP version is IPv4
    • Choose the IP address you created in the first step
    • Leave Port as 443 (not like you can change it)
    • Create a Google-managed certificate (unless you want to upload your own for some reason) for the subdomain you intend to use for tagging (e.g.: data.mydomain.com)

The frontend configuration is pretty straightforward, we can’t really mess this up (yes, I will hold your beer).
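The one piece of the frontend worth scripting is the Google-managed certificate, since provisioning it can take a while and you may want to kick it off early. A sketch, assuming the example subdomain from above:

```shell
# Create a Google-managed SSL certificate for the tagging subdomain.
# Certificate name and domain are examples; provisioning only completes
# once the DNS A record (step 4) points at the load balancer's IP.
gcloud compute ssl-certificates create gtm-lb-cert \
  --domains=data.mydomain.com \
  --global
```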

  2. Backend configuration
    • Create a backend service
    • Select ‘Serverless network endpoint group’ as Backend type
    • Under Backends, create serverless NEG
    • Select ‘Cloud Run’ and the region of one of the Cloud Run services you previously created
    • Select the Cloud Run service you created under ‘Service’

For the backend configuration, the process should look similar to the following screenshot.

GCP create serverless network endpoint group

Click ‘Create’.

Add new backends for all the Cloud Run services you created (mind the region!)

GCP Load Balancer - creating backend service

I will confess that I haven’t explored enabling the Cloud CDN for the backends yet.
If you have any experience with it, please do share!

Create a separate backend service (same steps) for the debug server.
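The console hides what is really three resources: a serverless NEG per region, a backend service, and the attachment between them. A gcloud sketch for one region (names and region are examples; repeat the NEG and add-backend commands per region, and again for the debug backend):

```shell
# One serverless NEG per region, pointing at the Cloud Run service there.
gcloud compute network-endpoint-groups create gtm-neg-eu-west3 \
  --region=europe-west3 \
  --network-endpoint-type=serverless \
  --cloud-run-service=gtm-server-europe-west3

# The backend service the load balancer will route traffic to.
gcloud compute backend-services create gtm-backend-prod \
  --load-balancing-scheme=EXTERNAL_MANAGED \
  --global

# Attach the regional NEG to the global backend service.
gcloud compute backend-services add-backend gtm-backend-prod \
  --global \
  --network-endpoint-group=gtm-neg-eu-west3 \
  --network-endpoint-group-region=europe-west3
```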

This should leave you with a backend configuration similar to this.

GCP Load Balancer backend configuration

Routing rules

Set the routing rules based on hosts and paths.

/gtm/debug and /gtm/get_memo should point to the debug server. Alternatively, you could just use /gtm/* (the asterisk serves as a wildcard). The first row of Host 1 and Path 1 is the default service, meaning every host-and-path combination that does not match the other rules will be handled by that service (in our case gtm-server-prod, consisting of the eu-west3 and us-east1 Cloud Run services).
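The same routing rules can be expressed as a URL map in gcloud. A sketch, assuming the example backend service names from the previous step:

```shell
# Everything defaults to the production backend...
gcloud compute url-maps create gtm-url-map \
  --default-service=gtm-backend-prod

# ...except the debug paths, which go to the debug backend.
# The wildcard /gtm/* covers both /gtm/debug and /gtm/get_memo.
gcloud compute url-maps add-path-matcher gtm-url-map \
  --path-matcher-name=gtm-paths \
  --default-service=gtm-backend-prod \
  --path-rules="/gtm/*=gtm-backend-debug"
```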

GCP Load Balancer routing rules

If you want an extra layer of security, you could create a separate path for your tagging services such as /measurement (that you route to the prod service), and create a ‘catch all’ service (with lower computational capabilities) for everything else. This way if someone spams the endpoint and the path is not specified to /measurement, your catch-all service will be spammed instead of your tagging service.

Note: if you do choose to use a separate path, don’t forget to add it to the transport URL in the client-side GTM settings (see here).

If you want proper security though, you could also check out Google Cloud Armor or any similar tool.

Add an A record to the DNS of your domain

Remember the IP address you created in the first step?

Or the SSL certificate you created for your subdomain at the frontend configuration?

This is the part where it all comes together.

Navigate to the DNS settings of your website and add an ‘A record’ pointing to the IP address you created, with the name of the subdomain you created the SSL certificate for.
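Once the record has propagated, you can sanity-check it from your terminal (the subdomain here is the example from earlier):

```shell
# Should print the static IP you reserved in step 1.
dig +short data.mydomain.com A
```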

It would look like this in Cloudflare.

IP mapping in Cloudflare DNS

Note: if you are already using a regular Cloud Run or App Engine implementation, you could use a different subdomain to make migration a bit more manageable, or just time the change of the IP mapping well so that there is no downtime.

Modify the transport URL in the client-side GTM container

At this point, you should have everything in place for the endpoint to work correctly. Now all that is left is to send the events to the Load Balancer endpoint that will route the traffic to the Cloud Run services.

You just have to add the URL of the subdomain you mapped the IP address to (in your DNS) to the GA4 configuration tag.

GA4 server container URL configuration

As always, before going live first test everything in debug mode to make sure everything is working as expected.
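A quick smoke test from the command line can confirm the whole chain (DNS, certificate, load balancer, Cloud Run) before you flip the switch. The subdomain is the example from earlier; the server-side GTM container serves a /healthz endpoint, which should come back with an ok response once everything is wired up:

```shell
# Check that the endpoint is reachable end-to-end over HTTPS.
curl -sS -o /dev/null -w "%{http_code}\n" https://data.mydomain.com/healthz
```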

Closing thoughts

It’s great that Cloud Run opens up points of optimization for the server-side GTM setup that were previously unavailable. I am super excited to see how it will evolve in the future!

Hopefully, this helped you understand and manage the process of setting up a load balancer!

While the process certainly has its caveats, if you follow the documentation and the steps outlined above or if you have mad googling skills (or ChatGPT-ing nowadays), everything should be fine.

If you have any questions or suggestions about this topic, feel free to connect with me on LinkedIn or contact us if you want us to help you out!