Regular maintenance turns on players via WOL

Hello everyone.

We’re using Xibo on Docker for Windows and have everything set up pretty well.
We’ve encountered a problem: if we enable the WOL function on a display, the regular maintenance task turns the displays on regardless of the WOL time set in each display’s configuration.

WOL works perfectly when the player isn’t registered in Xibo, but once the displays are up and running in Xibo, they all get turned on simultaneously when the maintenance task is triggered.

Anyone else experienced this? Should I turn off WOL in Xibo? We would really love this feature to work properly + scheduled WOL (through the scheduler, not fixed time).

Thanks in advance.

You’ve set a Wake on Lan Time for each display in the Edit Display form? What do you have in there? It should be an H:i time string (e.g. 08:00).

Regular maintenance does indeed send WOL packets for displays, but only once the current time is past the WOL time - specifically, it will send a packet each time it runs, from the wake on lan time until midnight.
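For background, a WOL “magic packet” is just a UDP broadcast containing 6 bytes of 0xFF followed by the target MAC address repeated 16 times. A minimal sketch in Python (illustrative only, not Xibo’s implementation):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a WOL magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet on the LAN; UDP port 9 (or 7) is conventional."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))
```

The packet itself carries no schedule - whenever something sends it, the machine wakes - which is why the timing of the sender matters so much here.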

Hi Dan,
My time is set at 08:30, and I’ve tested this entire day - after the designated time in WOL settings for each display.

Just to be clear - you’d expect the wake on lan packet to be sent every 10 minutes from 08:30 until 00:00, but you are seeing it fire between 00:00 and 08:30?

I actually thought that WOL would occur only once at 08:30, as set up, and that’s it.
Could we avoid setting a WOL time on each display and instead schedule a layout with a WOL command, the same way we’d turn them off?

My apologies, I’ve fed you incorrect information.

Each time we send a WOL we record the “last send time” - we only then send if this send time is before the wake on lan time.

I.e. it should fire once and once only, at some time after 08:30, when regular maintenance runs.

After the regular maintenance task has run, hover over the success tick and it will explain what it has done, whether it has sent a command or not.
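In other words, the intended once-per-day behaviour could be sketched like this (Python with illustrative names - this is not Xibo’s actual code):

```python
from datetime import datetime, time

def should_send_wol(last_send: datetime, wol_time: time, now: datetime) -> bool:
    """Send at most once per day: the first maintenance run after the
    configured WOL time fires, and later runs that day do not."""
    todays_wol = datetime.combine(now.date(), wol_time)
    if now < todays_wol:
        return False               # not yet reached today's WOL time
    return last_send < todays_wol  # only if we haven't sent since that time
```

So with a WOL time of 08:30: a run at 08:32 sends (the last send was before 08:30 today), and a run at 08:42 does not (the last send was 08:32).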

Layouts are rendered Player side - so if a player needs waking up by wake on lan, it will never get to run the Layout with the wake on lan command on it.

You can use the commands function to run whatever command you want - you can use cmd.exe and pass any arguments.

Yep, this is what it says on the task:

Wake On LAN

  • Display1 Sleeping
  • Display1 N/A
  • Display1 Sleeping
  • Display1 N/A
  • Display1 Sleeping
  • Display1 N/A
  • Display1 Sleeping
  • Display1 N/A
  • Display1 Sleeping
  • Display1 N/A
  • Done

Build Layouts

  • Done

Tidy Library

  • Done

It seems to me that if I turn off all the displays via a command, maintenance sees they’re sleeping and wakes them up every 5 minutes (at least that’s what happens every time).

Since you wrote that it checks the WOL time on every maintenance run, I’ve set the time on each display to 23:00 and it no longer wakes up the players (Intel NUCs in our case). Is there any way to have WOL scheduled for a certain time, but in such a way that it doesn’t constantly wake them up?

Thanks a lot for your time, Dan, appreciate it.

This means that it did NOT send a WOL packet

Bottom line is this - you set a wake on lan time of 08:30, the maintenance task runs at 08:32, a WOL packet is sent. The maintenance task runs at 08:42, a WOL packet is NOT sent.

You can check the task output to confirm, if a WOL packet is sent it will say “Sent WOL Message. Previous WOL send time: xxxxx”

So could this mean that there is an error in our setup or something like that?
Since I changed the WOL time on each display, they stay rock-solid off.
If I change it to a time before the current time and run the maintenance task, all of the NUCs turn on.

Interesting things are:

  • all displays show up with the same IP address
  • we’re struggling with email notifications - they’re sent even when the display is intentionally turned off AND per-display email notifications are turned off

Also, should I suspect our WOL settings or even our LAN setup? What worries me the most is that with the Xibo server out of the picture, our NUCs only wake up when we want them to. The moment I register them in Xibo, they turn on every 5 minutes (except when I set the WOL time to the future).

Ah, yes, if I change the WOL time to a time in the past, it does indeed show that message:

Wake On LAN

  • XIBO-PU Sent WOL Message. Previous WOL send time: 2018-02-20 15:36:00
  • XIBO-RK Sleeping
  • XIBO-RP Sleeping
  • XIBO-SM Sleeping
  • XIBO-UZ Sent WOL Message. Previous WOL send

After this was sent, if I turn off the display via a command, the next time maintenance runs it will turn the NUC on again, even though it already sent the WOL message at 15:36.

I suppose you’re proxying into the docker container somehow? It might be that the X-Forwarded-For header isn’t being sent by the proxy.

Email alerts are only created if the maintenance email alerts setting is on, the display is set to email alert and the display has gone offline (it was logged in, but now is not)
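As a predicate, all three conditions have to hold (illustrative sketch with made-up names, not Xibo’s actual fields):

```python
def should_send_alert(maintenance_alerts_on: bool,
                      display_email_alert: bool,
                      was_logged_in: bool,
                      is_logged_in: bool) -> bool:
    """An alert needs the global setting on, the per-display setting on,
    and a transition from logged-in to not logged-in."""
    return (maintenance_alerts_on
            and display_email_alert
            and was_logged_in
            and not is_logged_in)
```

If you’re getting alerts with the per-display setting off, that second condition is the one worth double-checking in the display settings.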

Can you show the log from such an event and also tell us what was in your WOL time during that run?

For reference this is the code that runs:

I’m not sure about this - I’d need to research our network more (it’s maintained by a 3rd party).

This is the log after I set the WOL time to a time in the past (08:30, to be exact):

Wake On LAN

  • XIBO-PU Sent WOL Message. Previous WOL send time: 2018-02-20 16:18:00
  • XIBO-RK Sleeping
  • XIBO-RP Sleeping
  • XIBO-SM Sleeping
  • XIBO-UZ Sent WOL Message. Previous WOL send

It seems the text is too long to show the entire message. I’ve only set up a WOL time on these two displays.

I just don’t understand it, I’m afraid - the logic is testing 2018-02-20 16:18:00 < 2018-02-20 08:30:00 and coming out as “true”, and then sending the WOL packet.

It makes no sense :face_with_raised_eyebrow:

Just on the off chance, can you try 08:30:00 just in case it needs the seconds for some reason?

I’ve tried entering 08:30:00 (without quotes) in the WOL time and saved; when I edited the display again it had converted the value to 08:30, and it still wakes the display on the next maintenance task.
Really weird. I’ve gone through the WOL logic carefully and everything seems OK, except that it triggers a WOL packet on every maintenance run unless the time is set in the future.

Yeah, I’ve been over the logic several times myself and I can’t see anything wrong with it.

We need to try and recreate the problem on another environment - @Peter can you schedule something in to do that (we don’t actually need WOL to fire, we just need to make sure the logs are correct)

We’ve recreated the issue and fixed it:

You could patch your /lib/Entity/Display.php file by downloading the one from the commit in that issue and volume mapping it into your container. To do that you’d put it in your xibo-docker root folder and then modify the docker-compose.yml file, adding another volume:

- ./Display.php:/var/www/cms/lib/Entity/Display.php

Okay, I’ve downloaded Display.php from that issue, placed it in C:\Docker (our Docker shared folder with the files), then I’ve edited docker-compose.yml and it looks like this (part of it):

image: xibosignage/xibo-cms:release-1.8.6
volumes:
  - "./shared/cms/custom:/var/www/cms/custom"
  - "./shared/backup:/var/www/backup"
  - "./shared/cms/web/theme/custom:/var/www/cms/web/theme/custom"
  - "./shared/cms/library:/var/www/cms/library"
  - "./shared/cms/web/userscripts:/var/www/cms/web/userscripts"
  - "./Display.php:/var/www/cms/lib/Entity/Display.php"
restart: always

Is this ok? If I understand correctly, now I have to do

docker-compose down
docker-compose up -d

Is it safe to do so without losing anything?

You just need to do the up - docker will detect the changed container and recreate it.

If you want to be completely safe you could back up your shared/ folder first, too.

I’ve done:

docker-compose stop
docker-compose up -d

And I guess everything went well, I just don’t know whether it deployed the file. I tried the WOL settings and it still turns on the player, so my guess is that the file wasn’t deployed. How can I check whether ‘up’ properly mounted the file?

You can exec into the container and look at the file with more:

docker exec -it <container_name> sh
more /var/www/cms/lib/Entity/Display.php