You can deploy to multiple regions for resilience, and one of the regions must be configured as the "PRIMARY_REGION". Each region can also be hosted with a different provider: for example, you could host your primary region (and DBaaS service) with Linode and your secondary regions with Linode, Exoscale, or Vultr; equally, you could host your PRIMARY_REGION (and DBaaS service) with Vultr and your secondary regions with Vultr, Digital Ocean, and Linode. You will have noticed that your DBaaS service has to be hosted with whichever provider is assigned the role of PRIMARY_REGION.

You will need a build machine for each region (whether that region's provider is the same as your PRIMARY_REGION provider or not). Autoscaling applies to each region separately: if you want to scale the webservers in your PRIMARY_REGION up from 3 to 5, you adjust the scaling using the helper script on the build machine running in the PRIMARY_REGION, and if you want to adjust the number of webservers in a different region, you adjust it using the helper script on that region's build machine. So you can have 5 webservers running in region 1, 8 webservers running in region 2, and 4 webservers running in region 3, each region with the same or a different provider.

The way things work is that in the PRIMARY_REGION the DBaaS instance runs in the same VPC as the webservers, so access to the DBaaS service is granted by allowing that VPC through the DBaaS firewall. Any secondary region (with the same provider or a different one) is in a different VPC to the DBaaS instance, so access is granted differently: each webserver in a secondary region writes its public IP address to a shared "multi-region" bucket in the S3 datastore, and the DBaaS firewall is then updated to allow access from the public IP addresses that have been written to that bucket. Only IP addresses that are in the restricted-access shared multi-region bucket are allowed access to the DBaaS service. Obviously, accessing the DBaaS service from other regions is a latency hit; you might, for example, be accessing a DBaaS service in London from webservers running in Amsterdam. This latency should be acceptable between relatively close regions (you could, for example, run webservers in Frankfurt, Amsterdam, and London) but would become untenable if your webservers were, say, in Melbourne and your DBaaS instance was running in London. You could try this kind of setup and see how it works for you.
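To make that registration step concrete, here is a minimal sketch in Python, assuming boto3 against an S3-compatible endpoint. The endpoint, bucket name, and key layout are hypothetical placeholders, and the step that actually pushes the allowlist into the DBaaS firewall goes through the provider's API, which is not shown here.

```python
import boto3
import requests

S3_HOST_BASE = "https://eu-central-1.linodeobjects.com"  # hypothetical endpoint
BUCKET = "multi-region-allowlist"                        # hypothetical bucket name

s3 = boto3.client(
    "s3",
    endpoint_url=S3_HOST_BASE,
    aws_access_key_id="YOUR_S3_ACCESS_KEY",
    aws_secret_access_key="YOUR_S3_SECRET_KEY",
)

# Each secondary-region webserver records its own public IP in the shared
# multi-region bucket, one object per webserver.
public_ip = requests.get("https://api.ipify.org", timeout=10).text.strip()
s3.put_object(Bucket=BUCKET, Key=f"webservers/{public_ip}", Body=public_ip.encode())

# The DBaaS firewall allowlist is then rebuilt from the bucket contents.
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix="webservers/")
allowlist = [obj["Key"].split("/", 1)[1] for obj in listing.get("Contents", [])]
print("IPs to allow through the DBaaS firewall:", allowlist)
```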

Another thing to remember is that, as currently configured, the public-IP multi-region bucket is not used to add and remove webserver IPs from the DNS system across all regions (that could be implemented, but isn't yet), so you need to configure things such that reverse proxies run in each region. The reason this is cleaner is that the IP addresses of the reverse proxies are assigned to the DNS system from each region at deployment time and don't then change for the duration of the deployment, and the dynamic assignment of webserver IPs in response to autoscaling events is managed by the reverse proxy machines.
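As an illustration of what the reverse proxy layer does, here is a sketch in Python that rebuilds an nginx-style upstream block from the current set of webserver IPs and reloads the proxy. The registry file path, upstream file, and upstream name are assumptions for the example, not the exact mechanism the deployment uses.

```python
import subprocess

# Suppose an autoscaling event leaves the current backend IPs, one per line,
# in a local registry file on the reverse proxy machine (hypothetical path).
with open("/etc/deployment/webserver-ips.txt") as f:
    backend_ips = [line.strip() for line in f if line.strip()]

# Regenerate the upstream block from the current backend set.
lines = ["upstream webservers {"]
lines += [f"    server {ip}:80;" for ip in backend_ips]
lines.append("}")
with open("/etc/nginx/conf.d/upstream.conf", "w") as f:
    f.write("\n".join(lines) + "\n")

# Reload nginx so added/removed webservers take effect; the proxy's own IP,
# and therefore the DNS record, never changes.
subprocess.run(["nginx", "-s", "reload"], check=True)
```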

The DNS system you use for multi-region deployments should also be Cloudflare. If you pay for Cloudflare DNS you get features such as health checks for the webservers your records point to, meaning that any misbehaving webservers are taken out of rotation, and there's also proper load balancing with a paid Cloudflare service rather than just the round-robin DNS load balancing provided by the free plan. Round robin will likely work fine for you, but ultimately you might want a true load-balancing policy across your webservers.
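For reference, round robin on the free plan just means one A record per reverse proxy, all with the same name. Here is a minimal sketch using Cloudflare's documented v4 DNS records API, with a placeholder zone ID, token, and hostname:

```python
import requests

ZONE_ID = "your-zone-id"      # placeholder
API_TOKEN = "your-api-token"  # placeholder
PROXY_IPS = ["203.0.113.10", "198.51.100.20"]  # one reverse proxy per region

for ip in PROXY_IPS:
    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"type": "A", "name": "www.example.com", "content": ip,
              "ttl": 60, "proxied": False},
        timeout=30,
    )
    resp.raise_for_status()
    assert resp.json()["success"]  # Cloudflare also flags errors in the body
```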

As you know, you can configure your datastores to be multi-region, and you do that through parameters such as S3_ACCESS_KEY and S3_SECRET_KEY, which you can refer to in the specification. Configuring your datastores for multiple regions means that backups, stored assets and so on are replicated between multiple regions. When, for example, you back up a webroot to your datastore with multiple regions configured, a single "Put" operation as seen from the command line automatically writes that backup to multiple regions, which is good for safety, resilience and access. You can also deploy to only a single region while making backups to multiple regions, by configuring S3_ACCESS_KEY and S3_SECRET_KEY as well as S3_LOCATION and S3_HOST_BASE to be multi-region/multi-provider aware; backups are then automatically replicated in a multi-region and possibly multi-provider way, doing everything you can to ensure a reliable backup system for your application source code.

Going multi-provider with your backups also protects you against disasters: stuff can happen, like your provider shutting your account(s) and thus denying you access, and with multi-provider backups you still have access to your application backups via an alternative provider. These are strange times, and it's not unthinkable that you might somehow be denied access to your systems because of some issue or other. You can even back up to S3 services that are not one of the currently supported quartet of providers (Linode, Digital Ocean, Vultr or Exoscale), such as Amazon S3, by setting S3_ACCESS_KEY and S3_SECRET_KEY as well as S3_LOCATION and S3_HOST_BASE to your Amazon S3 account details rather than one of the quartet of providers that I have supported out of the box.
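The idea behind a multi-region/multi-provider "Put" can be sketched as a fan-out of one logical write to every configured endpoint; in practice the tooling does this for you when S3_ACCESS_KEY, S3_SECRET_KEY, S3_LOCATION and S3_HOST_BASE are multi-region aware. The endpoints, bucket, and object names below are illustrative placeholders.

```python
import boto3

ENDPOINTS = [  # hypothetical host_base + credential pairs, one per region/provider
    {"host": "https://eu-central-1.linodeobjects.com", "key": "KEY1", "secret": "SECRET1"},
    {"host": "https://ams3.digitaloceanspaces.com", "key": "KEY2", "secret": "SECRET2"},
]

def multi_region_put(bucket: str, key: str, path: str) -> None:
    """Write one local file to the same bucket/key on every endpoint."""
    for ep in ENDPOINTS:
        s3 = boto3.client(
            "s3",
            endpoint_url=ep["host"],
            aws_access_key_id=ep["key"],
            aws_secret_access_key=ep["secret"],
        )
        s3.upload_file(path, bucket, key)  # same object lands in every region

multi_region_put("backups", "webroot-backup.tar.gz", "/tmp/webroot-backup.tar.gz")
```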

The configuration directory is region-specific: each region that you deploy to has its own configuration directory, and it is these configuration directories which contain the settings for autoscaling and so on.

The webroots are kept in sync across multiple regions by replicating any changes to a local webroot out to the other regions. The way this is done is to sync the webroots to a shared S3 datastore, configured to only overwrite older files that already exist in the datastore, and then pull the changes written to the shared datastore down into each webroot in each region, thus synchronising across multiple regions. The synchronisation takes place every minute, so there is a one-minute lag between webroot updates in a local region and those updates being available in remote webroots. If this isn't suitable for you, you can decrease the one-minute polling interval to something more frequent, but there will always be at least some delay between local webroot updates and their being reflected and actioned in remote regions.
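A sketch of that one-minute cycle, shelling out to s3cmd (whose configuration uses the same access_key / secret_key / host_base style of settings); the webroot path and shared bucket are placeholders:

```python
import subprocess
import time

WEBROOT = "/var/www/html/"
SHARED = "s3://shared-webroot/"  # hypothetical shared multi-region bucket

while True:
    # Push local changes up; sync only transfers files that have changed.
    subprocess.run(["s3cmd", "sync", WEBROOT, SHARED], check=True)
    # Pull down changes written to the shared datastore by the other regions.
    subprocess.run(["s3cmd", "sync", SHARED, WEBROOT], check=True)
    time.sleep(60)  # the source of the roughly one-minute replication lag
```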

If you are mounting your assets from a datastore, those assets are excluded from any webroot syncing; instead, when PERSIST_ASSETS_TO_DATASTORE is set to "1" and the S3 system is multi-region configured, updates to the application assets in one mounted datastore are synchronised to all the other datastores, again on a minute-by-minute basis. Please note that when PERSIST_ASSETS_TO_DATASTORE is set to "1", assets are mounted from the assets directory local to that region; it will never be the case that assets are mounted from a remote region into a local webroot.
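Purely as an illustration of the exclusion, and assuming the assets are mounted at an assets/ subdirectory of the webroot (a hypothetical layout), the webroot sync would skip them like this:

```python
import subprocess

WEBROOT = "/var/www/html/"
SHARED = "s3://shared-webroot/"  # hypothetical shared bucket

# The mounted assets directory is left out of the webroot sync; the asset
# datastores themselves are synchronised separately, region to region.
subprocess.run(
    ["s3cmd", "sync", "--exclude", "assets/*", WEBROOT, SHARED],
    check=True,
)
```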