Docker installation doesn't start anymore

Dear Alex,

Could you please help me fix this issue? I have the same problem as the topic starter.
The installation is completely fresh, version 1.8.9. After an unexpected power-off yesterday I only see a blank screen when trying to reach the CMS interface. According to the logs it is running maintenance. The log.ibd file had reached 9GB; I truncated it based on your advice in this thread and it is 114MB now.
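For reference, the truncation itself was a statement along these lines, run inside the MySQL container (the table name is my reading of the log.ibd file name; the exact advice is in that other thread):

TRUNCATE TABLE `log`;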

docker-compose logs cms-web shows this

Attaching to xibodocker189_cms-web_1
cms-web_1 | Starting cron
cms-web_1 | Starting webserver
cms-web_1 | AH00558: httpd: Could not reliably determine the server’s fully qualified domain name, using 172.17.0.4. Set the ‘ServerName’ directive globally to suppress this message
cms-web_1 | Waiting for MySQL to start - max 300 seconds
cms-web_1 | MySQL started
cms-web_1 | DBVersion
cms-web_1 | 140
cms-web_1 | Existing Database, checking if we need to upgrade it
cms-web_1 | Configuring Maintenance
cms-web_1 | crontab: can’t open ‘/etc/crontabs/apache’: Permission denied
cms-web_1 | Running maintenance
cms-web_1 | Waiting for MySQL to start - max 300 seconds
cms-web_1 | MySQL started
cms-web_1 | DBVersion
cms-web_1 | 140
cms-web_1 | Existing Database, checking if we need to upgrade it
cms-web_1 | Configuring Maintenance
cms-web_1 | crontab: can’t open ‘/etc/crontabs/apache’: Permission denied
cms-web_1 | Running maintenance
cms-web_1 | Waiting for MySQL to start - max 300 seconds
cms-web_1 | MySQL started
cms-web_1 | DBVersion
cms-web_1 | 140
cms-web_1 | Existing Database, checking if we need to upgrade it
cms-web_1 | Configuring Maintenance
cms-web_1 | crontab: can’t open ‘/etc/crontabs/apache’: Permission denied
cms-web_1 | Running maintenance

docker-compose logs cms-db shows this

cms-db_1 | 2018-05-24 07:00:35 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
cms-db_1 | 2018-05-24 07:00:35 0 [Note] mysqld (mysqld 5.6.40) starting as process 1 …
cms-db_1 | 2018-05-24 07:00:35 1 [Note] Plugin ‘FEDERATED’ is disabled.
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: Using atomics to ref count buffer pool pages
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: The InnoDB memory heap is disabled
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: Memory barrier is not used
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: Compressed tables use zlib 1.2.3
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: Using Linux native AIO
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: Using CPU crc32 instructions
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: Initializing buffer pool, size = 128.0M
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: Completed initialization of buffer pool
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: Highest supported file format is Barracuda.
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: 128 rollback segment(s) are active.
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: Waiting for purge to start
cms-db_1 | 2018-05-24 07:00:35 1 [Note] InnoDB: 5.6.40 started; log sequence number 6236423
cms-db_1 | 2018-05-24 07:00:35 1 [Note] Server hostname (bind-address): ‘*’; port: 3306
cms-db_1 | 2018-05-24 07:00:35 1 [Note] IPv6 is available.
cms-db_1 | 2018-05-24 07:00:35 1 [Note] - ‘::’ resolves to ‘::’;
cms-db_1 | 2018-05-24 07:00:35 1 [Note] Server socket created on IP: ‘::’.
cms-db_1 | 2018-05-24 07:00:35 1 [Warning] Insecure configuration for --pid-file: Location ‘/var/run/mysqld’ in the path is accessible to all OS users. Consider choosing a different directory.
cms-db_1 | 2018-05-24 07:00:35 1 [Warning] ‘proxies_priv’ entry ‘@ root@3c6c194f3377’ ignored in --skip-name-resolve mode.
cms-db_1 | 2018-05-24 07:00:35 1 [Note] Event Scheduler: Loaded 0 events
cms-db_1 | 2018-05-24 07:00:35 1 [Note] mysqld: ready for connections.
cms-db_1 | Version: ‘5.6.40’ socket: ‘/var/run/mysqld/mysqld.sock’ port: 3306 MySQL Community Server (GPL)

I can't show you the XMR address setting without your help. I changed it while the interface was still accessible, but I suspect I did something incorrectly, because I could not receive any screenshots from the Windows Player.

When I hit the same issue last time, I just did a fresh reinstall. Everything was OK until the last power-down.

Thanks a lot for the great product!

If you’re using Docker, then the XMR internal address is set correctly for you out of the box, and then hidden.

The only setting we expose is the external address, and that can’t cause slow startup.

Do you have proof of play stats enabled? If so, perhaps you have a huge backlog of those which it’s busy archiving or deleting?
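If you want to gauge the size of that backlog, one rough way is to count the rows in the stat table from the host (the container, user and database names below are assumptions; take them from your docker ps output and config.env, and you'll be prompted for the MySQL password):

docker exec -it xibodocker189_cms-db_1 mysql -u cms -p cms -e "SELECT COUNT(*) FROM stat;"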

cms-web_1 | crontab: can’t open ‘/etc/crontabs/apache’: Permission denied

That line is somewhat concerning. Have you perhaps tried to modify that file? If that isn't set up correctly, then maintenance (XTR) won't be running normally, and that would explain why you have a huge backlog.

Hi Alex,

Thanks for your answer!

No, I didn't modify any files other than those listed here: https://xibo.org.uk/manual/en/install_docker_linux.html

I did comment out the first two lines in docker-compose.yml (version and services), because Docker had issues with them and wouldn't bring the stack up.

I am not sure about “proof of play stats”; I tried to follow the manuals.

Just for information, in case it is useful:

xibo@xibosrv:/opt/xibo/xibo-docker-1.8.9$ docker -v
Docker version 1.13.1, build 092cba3
xibo@xibosrv:/opt/xibo/xibo-docker-1.8.9$ docker-compose -v
docker-compose version 1.8.0, build unknown

The crontab line concerned me too… I haven't found a solution yet. And there is no folder named “crontabs”, though.

Can you give me remote access to it (TeamViewer or SSH) so I can take a look? DM me the details.

There’s definitely a crontabs folder in the container I have here for 1.8.9, so I suspect yours has been modified in some way.
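You can check from the host, for example (using the container name shown in your logs; adjust if yours differs):

docker exec -ti xibodocker189_cms-web_1 ls -la /etc/crontabs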

Alex, I have updated Docker and docker-compose to the latest versions and made a fresh CMS installation with those lines left uncommented. Let's see how it goes.

Thanks a lot!

You shouldn’t need to start over as a result.

You should be able to use your existing docker-compose.yml, config.env and shared folder.
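For example, from the folder you showed in your prompt earlier, something like this should bring the existing stack back up:

cd /opt/xibo/xibo-docker-1.8.9
docker-compose up -d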

The existing setup kept showing the crontab message in the logs and didn't start, so I thought it was better to start over with the new Docker version.
I still have the previous folder and can go back to it if needed.


Hello! I too am getting the following issue:

Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /.

Reason: Error reading from remote server

I was logged in to the CMS, editing some images in some regions in a layout. I went down to turn on the display that runs the layout, and the display couldn't talk to the server to download the updated layout. I went back to my computer to try to access the CMS and was prompted with the above error.

The Linux box that is running the CMS (xibo-docker-1.8.10) is running okay, but Xibo is not.

docker ps -a shows:

CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS                     PORTS                                NAMES
284acb3efe05        xibosignage/xibo-cms:release-1.8.10   "/entrypoint.sh"         2 hours ago         Up 4 minutes               127.0.0.1:8080->80/tcp               xibo_cms-web_1
102b49b43754        xibosignage/xibo-xmr:release-0.7      "/entrypoint.sh"         2 hours ago         Up 4 minutes               50001/tcp, 0.0.0.0:65500->9505/tcp   xibo_cms-xmr_1
72778f75132d        mysql:5.6                             "docker-entrypoint..."   2 hours ago         Up 4 minutes               3306/tcp                             xibo_cms-db_1
3b452d31c3ae        hello-world                           "/hello"                 12 months ago       Exited (0) 12 months ago                                        xenodochial_sammet

docker-compose logs cms-web shows

Attaching to xibo_cms-web_1
cms-web_1  | Waiting for MySQL to start - max 300 seconds
cms-web_1  | MySQL started
cms-web_1  | DBVersion
cms-web_1  | 141
cms-web_1  | Existing Database, checking if we need to upgrade it
cms-web_1  | Updating settings.php
cms-web_1  | Configuring Maintenance
cms-web_1  | Removing web/install/index.php from production container
cms-web_1  | Running maintenance
cms-web_1  | Waiting for MySQL to start - max 300 seconds
cms-web_1  | MySQL started
cms-web_1  | DBVersion
cms-web_1  | 141
cms-web_1  | Existing Database, checking if we need to upgrade it
cms-web_1  | Configuring Maintenance
cms-web_1  | crontab: can't open '/etc/crontabs/apache': Permission denied
cms-web_1  | Running maintenance
cms-web_1  | Waiting for MySQL to start - max 300 seconds
cms-web_1  | MySQL started
cms-web_1  | DBVersion
cms-web_1  | 141
cms-web_1  | Existing Database, checking if we need to upgrade it
cms-web_1  | Configuring Maintenance
cms-web_1  | crontab: can't open '/etc/crontabs/apache': Permission denied
cms-web_1  | Running maintenance

docker-compose logs cms-db shows

Attaching to xibo_cms-db_1
cms-db_1   | 2018-07-31 09:43:17 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
cms-db_1   | 2018-07-31 09:43:17 0 [Note] mysqld (mysqld 5.6.36) starting as process 1 ...
cms-db_1   | 2018-07-31 09:53:45 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
cms-db_1   | 2018-07-31 09:53:45 0 [Note] mysqld (mysqld 5.6.36) starting as process 1 ...
cms-db_1   | 2018-07-31 12:04:12 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
cms-db_1   | 2018-07-31 12:04:12 0 [Note] mysqld (mysqld 5.6.36) starting as process 1 ...

Any ideas? I can’t work out why it would suddenly just stop working. Thanks.

You need to let maintenance run. On Windows particularly, it can take some time to do so.

It can also be a symptom of incorrectly configured XMR settings, although it’s unlikely, especially if it’s an install that has always been on Docker.

If you just wait, it will start correctly given time.
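While you wait, you can follow its progress in the logs with, for example:

docker-compose logs -f cms-web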

When you say some time, how long do you mean?

This install is running on Linux. It used to be a manual install using Apache/MySQL. When I updated to 1.8.2, I swapped it over to Docker. When 1.8.10 came out, I updated it okay. It has always been fine and the CMS accessible.

When I got this issue the other day, we really needed to change some of the content on one of the displays. After trying lots of things, the only thing I could think of was reverting to the 1.8.2 backup I had made. That resulted in a working system, and I was able to change the display content I needed to.

Yesterday morning I installed some more RAM in the server, thinking it might help the maintenance run quicker, but since the server was turned back on, ‘docker-compose logs cms-web’ still says it is running maintenance and I get the proxy error when I try to access the CMS. I have come in today to find it is still in the same state. Should it take this long?

Thanks

The longest I’ve seen is 5-10 minutes. No longer than that.

If it’s still doing it, you can get a shell in the cms-web container, kill the running PHP process and the webserver will start.

You can then look into why the maintenance task is taking so long to run.

Ah, that would be useful.

I am very new to Docker; I hadn't used it before I updated to 1.8.2. Would you be able to point me to somewhere I can read up on how to get a shell, and then what to do to stop the running PHP process?

Thanks

Run

docker ps

Find the container that has cms-web_1 in the name, then run

docker exec -ti example_cms-web_1 bash

substituting the correct container name. That will get you a shell inside the container. You can then use ps -aux to get a listing of all processes. There will be one php process; note its process ID (PID).

kill -KILL 4

substituting the appropriate process ID.

exit

Will get you back out of the container, and then the container logs should show that Apache has started.
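Putting the whole sequence together, it looks roughly like this (the container name is taken from the docker ps listing earlier in this thread, and the PID 123 is just a placeholder; substitute your own values):

docker exec -ti xibo_cms-web_1 bash
ps -aux
kill -KILL 123
exit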


Thank you Alex!

Now to find out what is causing the maintenance task to take so long…

First thing I’d check is your XMR private address setting. By default it’s hidden in a Docker install, but as this is a conversion from a standard installation, it may well be wrong.

For a Docker install, it should be:

tcp://cms-xmr:50001

Don’t confuse this with the XMR public address which is different.

If you can’t see the XMR private address in the CMS settings (Displays tab I think) then you’ll need to update that directly in the database. To do so, connect to your MySQL container (see How can I run a SQL command when using a Docker Install?) and then run the following SQL:

UPDATE `setting` SET `value`='tcp://cms-xmr:50001', `userChange`=0, `userSee`=0 WHERE `setting`='XMR_ADDRESS';
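For example, from the host that could look like this (the container, user and database names are assumptions; take them from your docker ps output and config.env):

docker exec -it xibo_cms-db_1 mysql -u cms -p cms

and then paste the UPDATE statement above at the mysql> prompt.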

Then go into the CMS, open the Settings page and click Save.

The XMR private address is accessible in settings, and it was set to localhost; I have now set it to what it should be. I noticed there are quite a few duplicate settings on the Displays tab (see attached image), plus Whitelist Load Balancers on the Network tab and Resting Log Level / Elevate Log Until on the Troubleshooting tab.

Thank you so much for the help.

Hmm that looks like some of the upgrade steps have been run multiple times. You can safely delete duplicate entries from the setting table in the database, as long as you leave one of each with the intended values.
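If it helps, you can list which settings are duplicated before deleting anything, using the same table and column names as the UPDATE above, for example:

SELECT `setting`, COUNT(*) AS copies FROM `setting` GROUP BY `setting` HAVING COUNT(*) > 1;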

Thanks for that, I went through and deleted all of the duplicate settings, so that has solved that one.

Am I right in thinking that now, if I request a screenshot, it should do it straight away? It doesn’t here, which makes me think that the XMR setup still isn’t quite right. Are there any other tests I can do?

Just used this to resolve an issue upgrading to 2.0.3:
sudo docker ps -a, sudo docker exec -ti name_cms-web_1 bash, ps, and in my case kill -KILL 61 for one instance of Apache that must have hung.
Access to the CMS was restored, thanks!