CMS - duplicate CMS for redundancy/backup

Hi guys,

I’ve just set up a CMS on my own server (a Synology, using Docker) and it works well. I’m wondering if it’s possible to have a secondary CMS that it syncs with, as an off-site backup or for redundancy? I currently have a hosting package on which I can install a Xibo CMS instance; this would be purely a backup in case of catastrophic failure on my server.

Is there any easy way to do this?

This sort of configuration gets complicated very fast (especially if you’re talking active/active). It’s really beyond what we can go into here - the principles of active/active for any web application apply: clustered MySQL, a web proxy in front, etc.

As a baseline (without full active/active or active/passive), it would be good to ensure that backups are being created in your shared folder, and to back up the entire shared folder. That makes restoring to your latest backup much easier.
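For example, on a Docker install, something along these lines - a sketch only, the paths and compose file location here are placeholders you’d adjust to your own setup:

```bash
# Archive the whole shared folder (library + database dumps) to
# somewhere off the box. Paths below are examples only.
cd /volume1/docker/xibo            # wherever your docker-compose.yml lives
docker-compose stop                # optional, for a consistent snapshot
tar -czf /volume1/backups/xibo-shared-$(date +%F).tar.gz shared/
docker-compose start
```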

Don’t forget to make sure you can spin up a replacement server on the same domain name - you don’t want to have to visit every Player to update the CMS URL.

Thanks Dan,

The idea is to have a backup server I can stand up relatively easily, but not instantly. I’m happy with it being down for a bit; I just don’t want to start fresh if I have an issue. DNS isn’t a problem - I have an A record pointing to the CMS at the moment, so redirecting elsewhere is easily done.

Will look at a backup of the shared folder as suggested.

Cheers mate


Hi all,

Just to chime in: at home my dev setup runs five web front ends against a shared DB server (master and slave), with an HAProxy load balancer in front.
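Roughly like this in haproxy.cfg - the names and addresses here are placeholders, not my exact config:

```
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend xibo_http
    bind *:80
    default_backend xibo_nodes

backend xibo_nodes
    balance roundrobin
    server web1 192.168.1.11:80 check
    server web2 192.168.1.12:80 check
    server web3 192.168.1.13:80 check
    server web4 192.168.1.14:80 check
    server web5 192.168.1.15:80 check
```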

One server is my “live” code dump; from that I use a simple scp command to copy the “master” node’s web sites out to the other four.
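The copy script is nothing clever - something along these lines (hostnames and paths are placeholders for my setup):

```bash
# Push the master node's web root out to the other farm members.
SRC=/var/www/cms
for node in web2 web3 web4 web5; do
    scp -r "$SRC" "root@${node}:/var/www/"
done
```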

As far as I can tell, Xibo works fine in this mode, as the image is identical on every server and all instances point at the same DB server.

The only sticking point is that if you make changes via HAProxy, you won’t know which server your files were saved to. So what I do is always connect to the “master” image via IP or a direct DNS name, make my changes there, and then propagate them across the other farm members.

Works well…

Peter

How are you keeping the library and cache in sync between the nodes? That data lives on the filesystem and not in the database.

Hi Alex,

The script file literally makes a full copy of the source web sites to the target servers. If I were to use a real automation tool like Puppet, it probably wouldn’t work with Xibo.

By accident, at the onset I had copied all the web sites on my dev server to the other farm servers, including Xibo.

I can do some testing/verification for you and send you the configs for HAProxy and the simple scp scripts that do the copy, if you think it would be of benefit. If you have a listing of the file system locations that get “touched” when a layout is altered, I can probably change the script to transfer just those folders to the other farm members, rather than copying the full sites’ contents - see the sketch below.
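Something like this, hypothetically - the paths are guesses until I know what actually gets touched:

```bash
# Hypothetical variant: push only the library folder instead of the
# whole web root. Hostnames and paths are guesses.
for node in web2 web3 web4 web5; do
    rsync -az --delete /var/www/cms/library/ "root@${node}:/var/www/cms/library/"
done
```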

Thanks
Peter

Copying the source from one server to another will work for the application itself, but the library must be on shared storage.

Any time you amend a layout, or a Player downloads content from a ticker, or a Player even does its routine collection with the CMS, the contents of the library change. Assuming all your servers are getting traffic all the time, they will all accumulate changes (adds, edits and deletes) in that folder. You can’t sync it periodically, as you’ll get errors in the intervening time.
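In practice that means something like one NFS export mounted at the library path on every node - a rough sketch only, the server name and paths below are placeholders rather than a tested recipe:

```bash
# Mount one shared library export on every web node, so all CMS
# instances read and write the same files. Names/paths are examples.
sudo mount -t nfs nas.example.com:/export/xibo-library /var/www/cms/library

# Or persist it in /etc/fstab:
# nas.example.com:/export/xibo-library  /var/www/cms/library  nfs  defaults  0  0
```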