Yum update, Docker broken

Dear Xibo Community

Yesterday, while logged in over SSH, I noticed 964 failed login attempts, so I updated the server with “sudo yum update”. Well, I think I shouldn’t have done that. Now Docker seems broken :face_with_raised_eyebrow:
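
To see exactly what the update changed, yum keeps a transaction history; with no transaction ID it shows the most recent transaction, including which packages were upgraded:

$ sudo yum history info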

[d4dd416@LIMONEDS xibo-docker]$ docker-compose -f cms_custom-ports.yml up -d
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?

If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.

I’m trying to fix this… As far as I can tell, the compose error just means the client can’t reach the Docker daemon’s socket, so the real question is why dockerd won’t start.
On a friend’s suggestion I also disabled the root user by deleting its password, and I added my user to the docker group, but for the moment I don’t have a clue how to restore Xibo.
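
One side note on the docker group: membership changes only apply to new login sessions, so after something like

$ sudo usermod -aG docker d4dd416   # add my user to the docker group

the new group is only picked up after logging out and back in (or by starting a fresh shell with newgrp docker). That isn’t the cause here, though, since the daemon itself is down.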

[d4dd416@LIMONEDS ~]$ systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Fri 2023-04-14 12:02:01 BST; 2min 56s ago
     Docs: https://docs.docker.com
  Process: 4562 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
 Main PID: 4562 (code=exited, status=1/FAILURE)

Apr 14 12:01:59 LIMONEDS.poundhost.com systemd[1]: Failed to start Docker Application Container Engine.
Apr 14 12:01:59 LIMONEDS.poundhost.com systemd[1]: Unit docker.service entered failed state.
Apr 14 12:01:59 LIMONEDS.poundhost.com systemd[1]: docker.service failed.
Apr 14 12:02:01 LIMONEDS.poundhost.com systemd[1]: docker.service holdoff time over, scheduling restart.
Apr 14 12:02:01 LIMONEDS.poundhost.com systemd[1]: Stopped Docker Application Container Engine.
Apr 14 12:02:01 LIMONEDS.poundhost.com systemd[1]: start request repeated too quickly for docker.service
Apr 14 12:02:01 LIMONEDS.poundhost.com systemd[1]: Failed to start Docker Application Container Engine.
Apr 14 12:02:01 LIMONEDS.poundhost.com systemd[1]: Unit docker.service entered failed state.
Apr 14 12:02:01 LIMONEDS.poundhost.com systemd[1]: docker.service failed.
[d4dd416@LIMONEDS ~]$ journalctl -xe
--
-- The result is failed.
Apr 14 12:02:01 LIMONEDS.poundhost.com systemd[1]: Unit docker.service entered failed state.
Apr 14 12:02:01 LIMONEDS.poundhost.com systemd[1]: docker.service failed.
Apr 14 12:02:55 LIMONEDS.poundhost.com sshd[4574]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.yegara.org  user=root
Apr 14 12:02:55 LIMONEDS.poundhost.com sshd[4574]: pam_succeed_if(sshd:auth): requirement "uid >= 1000" not met by user "root"
Apr 14 12:02:58 LIMONEDS.poundhost.com sshd[4574]: Failed password for root from 165.227.228.212 port 58154 ssh2
Apr 14 12:02:58 LIMONEDS.poundhost.com sshd[4574]: Received disconnect from 165.227.228.212 port 58154:11: Bye Bye [preauth]
Apr 14 12:02:58 LIMONEDS.poundhost.com sshd[4574]: Disconnected from 165.227.228.212 port 58154 [preauth]
Apr 14 12:03:33 LIMONEDS.poundhost.com sshd[4577]: reverse mapping checking getaddrinfo for 179-99-212-180.dsl.telesp.net.br [179.99.212.180] failed - POSSIBLE BREAK-IN ATTEMPT!
Apr 14 12:03:33 LIMONEDS.poundhost.com sshd[4577]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=179.99.212.180  user=root
Apr 14 12:03:33 LIMONEDS.poundhost.com sshd[4577]: pam_succeed_if(sshd:auth): requirement "uid >= 1000" not met by user "root"
Apr 14 12:03:36 LIMONEDS.poundhost.com sshd[4577]: Failed password for root from 179.99.212.180 port 52088 ssh2
Apr 14 12:03:36 LIMONEDS.poundhost.com sshd[4577]: Received disconnect from 179.99.212.180 port 52088:11: Bye Bye [preauth]
Apr 14 12:03:36 LIMONEDS.poundhost.com sshd[4577]: Disconnected from 179.99.212.180 port 52088 [preauth]
Apr 14 12:04:06 LIMONEDS.poundhost.com sshd[4579]: Invalid user zxiptv from 165.227.228.212 port 57348
Apr 14 12:04:06 LIMONEDS.poundhost.com sshd[4579]: input_userauth_request: invalid user zxiptv [preauth]
Apr 14 12:04:07 LIMONEDS.poundhost.com sshd[4579]: pam_unix(sshd:auth): check pass; user unknown
Apr 14 12:04:07 LIMONEDS.poundhost.com sshd[4579]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.yegara.org
Apr 14 12:04:09 LIMONEDS.poundhost.com sshd[4579]: Failed password for invalid user zxiptv from 165.227.228.212 port 57348 ssh2
Apr 14 12:04:09 LIMONEDS.poundhost.com sshd[4579]: Received disconnect from 165.227.228.212 port 57348:11: Bye Bye [preauth]
Apr 14 12:04:09 LIMONEDS.poundhost.com sshd[4579]: Disconnected from 165.227.228.212 port 57348 [preauth]
Apr 14 12:04:19 LIMONEDS.poundhost.com sshd[4581]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.193.18  user=root
Apr 14 12:04:19 LIMONEDS.poundhost.com sshd[4581]: pam_succeed_if(sshd:auth): requirement "uid >= 1000" not met by user "root"
Apr 14 12:04:22 LIMONEDS.poundhost.com sshd[4581]: Failed password for root from 159.223.193.18 port 35056 ssh2
Apr 14 12:04:22 LIMONEDS.poundhost.com sshd[4581]: Received disconnect from 159.223.193.18 port 35056:11: Bye Bye [preauth]
Apr 14 12:04:22 LIMONEDS.poundhost.com sshd[4581]: Disconnected from 159.223.193.18 port 35056 [preauth]
Apr 14 12:05:14 LIMONEDS.poundhost.com sshd[4585]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.yegara.org  user=root
Apr 14 12:05:14 LIMONEDS.poundhost.com sshd[4585]: pam_succeed_if(sshd:auth): requirement "uid >= 1000" not met by user "root"
Apr 14 12:05:15 LIMONEDS.poundhost.com sshd[4587]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.193.18  user=root
Apr 14 12:05:15 LIMONEDS.poundhost.com sshd[4587]: pam_succeed_if(sshd:auth): requirement "uid >= 1000" not met by user "root"
Apr 14 12:05:15 LIMONEDS.poundhost.com sshd[4585]: Failed password for root from 165.227.228.212 port 56540 ssh2
Apr 14 12:05:15 LIMONEDS.poundhost.com sshd[4585]: Received disconnect from 165.227.228.212 port 56540:11: Bye Bye [preauth]
Apr 14 12:05:15 LIMONEDS.poundhost.com sshd[4585]: Disconnected from 165.227.228.212 port 56540 [preauth]
Apr 14 12:05:15 LIMONEDS.poundhost.com sshd[4584]: Received disconnect from 49.88.112.118 port 58406:11:  [preauth]
Apr 14 12:05:15 LIMONEDS.poundhost.com sshd[4584]: Disconnected from 49.88.112.118 port 58406 [preauth]
Apr 14 12:05:16 LIMONEDS.poundhost.com sshd[4587]: Failed password for root from 159.223.193.18 port 46356 ssh2
Apr 14 12:05:17 LIMONEDS.poundhost.com sshd[4587]: Received disconnect from 159.223.193.18 port 46356:11: Bye Bye [preauth]
Apr 14 12:05:17 LIMONEDS.poundhost.com sshd[4587]: Disconnected from 159.223.193.18 port 46356 [preauth]
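
The journalctl -xe output is dominated by the brute-force noise; filtering to the unit shows only Docker’s own messages, and clearing the start-limit counter lets systemd try the service again:

$ sudo journalctl -u docker.service --no-pager -n 50   # just the docker.service log, no sshd lines
$ sudo systemctl reset-failed docker.service           # reset the “start request repeated too quickly” state

Running the daemon in the foreground puts the real error straight on the terminal:
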
[d4dd416@LIMONEDS ~]$ sudo dockerd
[sudo] password for d4dd416:
INFO[2023-04-14T13:02:43.847521699+01:00] Starting up
WARN[2023-04-14T13:02:43.848263172+01:00] failed to rename /var/lib/docker/tmp for background deletion: rename /var/lib/docker/tmp /var/lib/docker/tmp-old: file exists. Deleting synchronously
INFO[2023-04-14T13:02:43.848671075+01:00] [core] [Channel #1] Channel created           module=grpc
INFO[2023-04-14T13:02:43.848701717+01:00] [core] [Channel #1] original dial target is: "unix:///run/containerd/containerd.sock"  module=grpc
INFO[2023-04-14T13:02:43.848747698+01:00] [core] [Channel #1] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}  module=grpc
INFO[2023-04-14T13:02:43.848766501+01:00] [core] [Channel #1] Channel authority set to "localhost"  module=grpc
INFO[2023-04-14T13:02:43.848892462+01:00] [core] [Channel #1] Resolver state updated: {
  "Addresses": [
    {
      "Addr": "/run/containerd/containerd.sock",
      "ServerName": "",
      "Attributes": {},
      "BalancerAttributes": null,
      "Type": 0,
      "Metadata": null
    }
  ],
  "ServiceConfig": null,
  "Attributes": null
} (resolver returned new addresses)  module=grpc
INFO[2023-04-14T13:02:43.849012485+01:00] [core] [Channel #1] Channel switches to new LB policy "pick_first"  module=grpc
INFO[2023-04-14T13:02:43.849089543+01:00] [core] [Channel #1 SubChannel #2] Subchannel created  module=grpc
INFO[2023-04-14T13:02:43.849148315+01:00] [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING  module=grpc
INFO[2023-04-14T13:02:43.852883995+01:00] [core] [Channel #1 SubChannel #2] Subchannel picks a new address "/run/containerd/containerd.sock" to connect  module=grpc
INFO[2023-04-14T13:02:43.852909594+01:00] [core] [Channel #1] Channel Connectivity change to CONNECTING  module=grpc
INFO[2023-04-14T13:02:43.853139768+01:00] [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY  module=grpc
INFO[2023-04-14T13:02:43.853166575+01:00] [core] [Channel #1] Channel Connectivity change to READY  module=grpc
INFO[2023-04-14T13:02:43.853944637+01:00] [core] [Channel #4] Channel created           module=grpc
INFO[2023-04-14T13:02:43.853969383+01:00] [core] [Channel #4] original dial target is: "unix:///run/containerd/containerd.sock"  module=grpc
INFO[2023-04-14T13:02:43.853991520+01:00] [core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}  module=grpc
INFO[2023-04-14T13:02:43.854010803+01:00] [core] [Channel #4] Channel authority set to "localhost"  module=grpc
INFO[2023-04-14T13:02:43.854042509+01:00] [core] [Channel #4] Resolver state updated: {
  "Addresses": [
    {
      "Addr": "/run/containerd/containerd.sock",
      "ServerName": "",
      "Attributes": {},
      "BalancerAttributes": null,
      "Type": 0,
      "Metadata": null
    }
  ],
  "ServiceConfig": null,
  "Attributes": null
} (resolver returned new addresses)  module=grpc
INFO[2023-04-14T13:02:43.854135665+01:00] [core] [Channel #4] Channel switches to new LB policy "pick_first"  module=grpc
INFO[2023-04-14T13:02:43.854166369+01:00] [core] [Channel #4 SubChannel #5] Subchannel created  module=grpc
INFO[2023-04-14T13:02:43.854196106+01:00] [core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING  module=grpc
INFO[2023-04-14T13:02:43.854229543+01:00] [core] [Channel #4 SubChannel #5] Subchannel picks a new address "/run/containerd/containerd.sock" to connect  module=grpc
INFO[2023-04-14T13:02:43.854240078+01:00] [core] [Channel #4] Channel Connectivity change to CONNECTING  module=grpc
INFO[2023-04-14T13:02:43.854370544+01:00] [core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY  module=grpc
INFO[2023-04-14T13:02:43.854398438+01:00] [core] [Channel #4] Channel Connectivity change to READY  module=grpc
ERRO[2023-04-14T13:02:44.009457786+01:00] [graphdriver] prior storage driver overlay is deprecated and will be removed in a future release; update the the daemon configuration and explicitly choose this storage driver to continue using it; visit https://docs.docker.com/go/storage-driver/ for more information
INFO[2023-04-14T13:02:44.009766823+01:00] [core] [Channel #1] Channel Connectivity change to SHUTDOWN  module=grpc
INFO[2023-04-14T13:02:44.009808240+01:00] [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to SHUTDOWN  module=grpc
INFO[2023-04-14T13:02:44.009831489+01:00] [core] [Channel #1 SubChannel #2] Subchannel deleted  module=grpc
INFO[2023-04-14T13:02:44.009847138+01:00] [core] [Channel #1] Channel deleted           module=grpc
failed to start daemon: error initializing graphdriver: prior storage driver overlay is deprecated and will be removed in a future release; update the the daemon configuration and explicitly choose this storage driver to continue using it; visit https://docs.docker.com/go/storage-driver/ for more information
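
That last line seems to be the real problem: the updated Docker deprecates the legacy “overlay” storage driver that the existing /var/lib/docker was created with. Reading the message, it sounds like explicitly choosing the driver in /etc/docker/daemon.json might let the daemon start again on the old data, presumably something like:

{
  "storage-driver": "overlay"
}

followed by another sudo systemctl start docker, though I don’t know if that’s a good long-term fix.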

Please, if someone could help, it would be really appreciated.

I noticed one more thing, maybe it’s not useful: the prompt is different when I log in. Now it’s [d4dd416@LIMONEDS ~], but before it was [d4dd416@my ip].


I have a backup of these files and folders:

cms_custom-ports.yml
config.env
shared/backup
shared/cms

If I could be sure that this is enough, it might be easier for me to format the server and reinstall everything.

Is it enough? Is that all the stuff I need?
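
For what it’s worth, if that set is complete (as far as I can tell, the Xibo Docker install keeps all of its state in config.env and the shared/ tree), a single archive taken from the xibo-docker folder would capture it, e.g.:

$ cd xibo-docker
$ tar czf ~/xibo-backup-$(date +%F).tar.gz cms_custom-ports.yml config.env shared/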

OK, I solved it with this:

$ sudo service docker stop
$ sudo mv /var/lib/docker /var/lib/docker.bak   # move the old storage out of the way
$ sudo service docker start                     # the daemon re-creates /var/lib/docker from scratch
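
Moving /var/lib/docker aside throws away all locally stored images and containers, but it lets the daemon re-initialise its storage with a driver the new version still supports (presumably overlay2). Since Xibo keeps its actual data in config.env and shared/, bringing the CMS back up should just be a matter of re-running the original command:

$ docker-compose -f cms_custom-ports.yml up -d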
