Docker bridge IP conflict

I have my Xibo server set up in my local office (“Office A”), and it is running without any problems at all. I have a player set up in a remote office (“Office B”). The player in Office B refuses to connect to the server. Troubleshooting revealed that no computer in Office B will connect to the server in Office A, even though they will connect to any other authorized destination on the Office A network. My head has nearly exploded trying to figure out why this might be; there are NO blocks in place in either Office A or Office B that would stop communication with one specific IP and not any others.

As a side note of some significance, I need to point out that I am using 172.20.0.0/24 in Office B, with 172.20.0.1 being the gateway. Office A uses 172.17.1.0/24.

After much tail chasing to no avail, I finally, out of desperation, logged in to my Linux server to see if I could ping from Office A to Office B. Before I actually tried the ping, however, I ran ifconfig on a whim and found this:

br-610d8954f4eb Link encap:Ethernet  HWaddr 02:42:fe:54:5b:1f
          inet addr:172.20.0.1  Bcast:172.20.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:feff:fe54:5b1f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1229263 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1245832 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6958679062 (6.9 GB)  TX bytes:454071718 (454.0 MB)

docker0   Link encap:Ethernet  HWaddr 02:42:23:ba:69:59
          inet addr:172.18.0.1  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 00:15:5d:01:de:04
          inet addr:172.17.1.250  Bcast:172.17.1.255  Mask:255.255.255.0
          inet6 addr: fe80::215:5dff:fe01:de04/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:43956617 errors:0 dropped:876392 overruns:0 frame:0
          TX packets:2609225 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4034068179 (4.0 GB)  TX bytes:6060750487 (6.0 GB)

As you can see, the bridge is configured with the exact same IP as the gateway in Office B (and its 255.255.0.0 mask covers the entire Office B range). I am positive that this is what is causing my lack of communication, but I am completely clueless about how I might fix it, and I’m hoping that someone can guide me.
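
In case it helps anyone else who lands here, a quick way to confirm the conflict from the server itself is to ask the kernel which interface it would use to reach an Office B host (the address below is just an example out of that range):

ip route get 172.20.0.50

If the output names the br-610d8954f4eb interface rather than eth0, the bridge’s route is capturing the Office B traffic locally, which appears to be exactly my situation.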

I would like to point out that I have done some investigation into this issue. Docker Docs has this:

https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/

I can create the daemon.json file mentioned in the above doc, and I can populate it, but I’m not sure exactly what to enter there, which is the main reason I’m asking for assistance. I know the network information required in the bottom part of the sample given, but does it matter what I put in for the bip and fixed-cidr in the first part?

{
  "bip": "192.168.1.5/24",
  "fixed-cidr": "192.168.1.5/25",
  "fixed-cidr-v6": "2001:db8::/64",
  "mtu": 1500,
  "default-gateway": "10.20.1.1",
  "default-gateway-v6": "2001:db8:abcd::89",
  "dns": ["10.20.1.2","10.20.1.3"]
}

The default-gateway and dns lines will be my Office A information, which I know well. What I don’t want to do is break internal communication by assigning a subnet that differs wildly from docker0’s current one, and I don’t see anything that specifies whether this is a concern or not.
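
For the record, my best guess is that something as small as this would be enough to move docker0 onto a harmless range - the 192.168.250.x values below are nothing more than placeholders I picked because we don’t use that range anywhere:

{
  "bip": "192.168.250.1/24",
  "fixed-cidr": "192.168.250.0/25"
}

As I understand it, bip is the address and mask given to the docker0 bridge itself, and fixed-cidr only restricts which addresses containers are handed out within that network, so it has to fall inside the bip subnet - but please correct me if I have that wrong.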

Thank you in advance!

I have discovered that adding daemon.json to /etc/docker does NOT work. I added my values very carefully, but came up empty because Docker wouldn’t restart after adding that file. I did some more digging…

According to Docker Docs with regard to Linux distributions that use systemd:

systemd vs daemon.json

Configuring Docker to listen for connections using both the `systemd` unit file and the `daemon.json` file causes a conflict that prevents Docker from starting.

So, adding the daemon.json file is not the answer. I will continue to pursue the systemd solution suggested in the same Docker Docs page:

Configuring remote access with systemd unit file

    Use the command sudo systemctl edit docker.service to open an override file for docker.service in a text editor.

    Add or modify the following lines, substituting your own values.

    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375

    Save the file.

    Reload the systemctl configuration.

     $ sudo systemctl daemon-reload

    Restart Docker.

    $ sudo systemctl restart docker.service

    Check to see whether the change was honored by reviewing the output of netstat to confirm dockerd is listening on the configured port.

    $ sudo netstat -lntp | grep dockerd
    tcp        0      0 127.0.0.1:2375          0.0.0.0:*               LISTEN      3758/dockerd

If there is nothing that can be done to change the subnet the Docker bridge is using, I believe I can move Office B to a different subnet, thereby removing the conflict, but that will require at least a day trip to reconfigure everything in that office, in addition to reconfiguring the VPN tunnel to Office A. I would prefer to have the option to change things on the server side if at all possible, as that would be considerably easier.

As before, I appreciate any and all help that can be offered.

Thank you!

I’ve not had issues using daemon.json on Ubuntu 18.04 - which uses systemd.

As far as I’m aware that should work. If Docker won’t restart when that file is present, then it’s likely a syntax error in that file.
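
If it happens again, the Docker service log will usually show exactly what it objected to, and the JSON itself can be checked with stock tools - something along these lines (assuming python3 is installed):

sudo journalctl -u docker.service -n 50 --no-pager
python3 -m json.tool /etc/docker/daemon.json

The first command shows why dockerd refused to start; the second will flag any stray comma or quote in daemon.json.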

I think all you’ll need to set is the bip, and dns if you need that. The other values I believe can be left as default.

If that doesn’t resolve it, then I’d head to the Docker forums, where someone may be able to assist you. We do what we can to offer support for it, but ultimately there are limits to what we can offer for free when very specific setups are required.

OK, I’m about ready to let this go and just bite the bullet and change my Office B subnet. Before I drop this completely, though, I wanted to put this out there.

After a LONG morning of digging around the interwebs, I found out that I can push a new IP address onto the bridge using ip addr add <new address>/<prefix> dev <bridge name> (after removing the old address with ip addr del). That worked just fine; I got the bridge to accept the new IP address.
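
The commands were along these lines, with 172.20.0.1/16 being the address the bridge actually had and 172.25.0.1/24 standing in for whatever non-conflicting replacement you choose:

sudo ip addr del 172.20.0.1/16 dev br-610d8954f4eb
sudo ip addr add 172.25.0.1/24 dev br-610d8954f4eb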

I also noticed that my iptables rules were still pointing everything at the wrong subnet. After more long digging, I got iptables reconfigured to match my new subnet. At that point I assumed my troubles were over, but no - restarting Docker completely restored the address from the wrong subnet, including rewriting my iptables rules. So I’m back to square one, but with the sneaking suspicion that the bridge is hard coded somewhere within the Xibo code to use the 172.20.0.0/16 subnet.

I have to believe that the steps outlined in that article are being used here, as there is the default docker bridge, AND a xibo_default bridge:

# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9786f8659831        bridge              bridge              local
692e3272e240        host                host                local
f51b220f1b32        none                null                local
610d8954f4eb        xibo_default        bridge              local

It is absolutely the xibo_default bridge that is giving me the problem:

# ifconfig br-610d8954f4eb
br-610d8954f4eb Link encap:Ethernet  HWaddr 02:42:c0:29:7c:b7
          inet addr:172.20.0.1  Bcast:172.20.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:c0ff:fe29:7cb7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:599 errors:0 dropped:0 overruns:0 frame:0
          TX packets:691 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:291725 (291.7 KB)  TX bytes:223812 (223.8 KB)
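
The same thing is visible from Docker’s own side; inspecting the network shows the subnet it was handed:

# docker network inspect xibo_default | grep -i subnet

That reports the same 172.20.0.0/16 range that ifconfig shows on the bridge.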

I can’t and won’t ask you to change that just for me at this time, but I thought it might be good to document this as completely as possible so someone else won’t go through the same problems.

Would it be possible to add the ability for the user to specify a subnet when installing future versions of Xibo? It would make things a lot easier for anyone who knows their networking and has multiple subnets in use.

Thank you for your time, and I apologize for bringing esoteric stuff to the table (again).

Edit: I see in your docker-compose.yml that you have it configured thusly:

networks:
   default:

So nothing there pins the subnet, and the default behavior of Docker appears to be to pick one on its own. I will continue looking into whether this is something I can edit and change before installing Xibo.
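
From what I can tell so far, a compose file can pin the subnet right there with an ipam block, something along these lines (untested by me at this point; 172.25.0.0/24 is simply the range I would like to use):

networks:
    default:
        ipam:
            config:
                - subnet: 172.25.0.0/24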

Final edit to this post, I promise.

After considerable digging, I finally found documentation from Docker:

When user creates a network without specifying a --subnet, docker will pick a subnet for the network from the static set 172.[17-31].0.0/16 and 192.168.[0-240].0/20 for the local scope networks and from the static set 10.[0-255].[0-255].0/24 for the global scope networks.

That finally explains how my xibo_default network ended up with the 172.20.0.0/16 subnet that overlaps Office B.

Unfortunately, at first glance it looked to me as though my version of Docker did not actually offer the --subnet or --default-address-pool capability, so I thought I was stuck.

My apologies for bothering everyone here, but if this will help someone else understand broken networking at some point, I will feel justified.

My apologies, also, for insinuating that it was something Xibo had hard coded. Such is absolutely not the case; Xibo does not pin any IP address range in its docker-compose.yml, and Docker simply picks one on its own. I hope that a future version of Xibo will allow for specifying one when building the containers for the CMS.

Further investigation shows that --default-address-pool is included, so I expect to uninstall and re-install Xibo in an effort to remove the problem with the errant IP address.
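
On newer Docker releases, my understanding is that the same thing can also be set globally in /etc/docker/daemon.json, so that every network created without an explicit subnet is carved out of a pool you choose - something like this, where the 172.25.0.0/16 base and /24 size are simply my own choices:

{
  "default-address-pools": [
    { "base": "172.25.0.0/16", "size": 24 }
  ]
}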

Final post - I have succeeded in learning what I needed to do to resolve my IP address conflict. I managed to destroy my initial installation before I learned, but sometimes progress requires sacrifice, right? I hope that someone else might be able to learn from this. I also hope that the Xibo team might include this in the documentation for installing Xibo on Ubuntu 16.04 LTS, as it could potentially stave off major problems with installing and using this software, which I have found to be excellent in every way (minus this hiccup, of course, which is clearly a docker issue, not a Xibo issue).

I had to start by creating a separate bridge that utilized my specified subnet. This is done after installing docker, but before attempting to install Xibo:

docker network create --subnet 172.25.0.0/24 xibo-network

The subnet has to be in CIDR notation, and the name at the end cannot contain any spaces. Naming is important, as you will need to refer to that bridge by name in docker-compose.yml, as I will explain in a moment.

The above command created the bridge with a subnet that would not interfere with my company’s multiple subnets. By default Docker allocates its bridges out of the 172.16.0.0/12 private range (anything from 172.16.0.0 through 172.31.255.255 inclusive), so I chose something within that range that we don’t use anywhere, and have documented my choice so we won’t ever try to use it elsewhere.

I then had to slightly alter the docker-compose.yml file included with Xibo. At the bottom of the file, I added the following:

networks:
    default:
        external:
            name: xibo-network

From there, I was able to use the docker-compose up -d command and everything built correctly, using the external bridge successfully.
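
To double-check that everything landed where it should, the network can be inspected once the containers are up:

# docker network inspect xibo-network

The Subnet field should show 172.25.0.0/24, and the Containers section should list the Xibo containers attached to it.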

As a point of interest, docker-compose down completely removes the containers in which Xibo runs, and docker-compose up -d then rebuilds them from the docker-compose.yml file. I wish I had known that tidbit before I started this adventure; I might have been able to resolve this without eventually killing the original installation to the point that I couldn’t reach it, losing all of the content we had built before discovering the subnet problem. (I did find that tidbit afterward in another post here, but it was about a week too late.) It is what it is; we had the content backed up offsite and were able to import it back in, which is something I highly recommend regardless of whether you ever hit a problem like this.
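
One last note on the backup front, since that is what saved me. I won’t claim this is an official procedure, but as a rough sketch of what I plan to do from now on: stop the containers briefly and archive the shared directory that, as far as I can tell, the stock docker-compose.yml maps the database and library into (adjust the path if your layout differs):

docker-compose stop
tar czf xibo-backup-$(date +%F).tar.gz ./shared
docker-compose start

Stopping first keeps the database files consistent while they are copied; docker-compose start brings everything back up afterward.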