I have an annoying problem on my server and Google has been of no help. I have two drives mirrored for the OS through mdadm, and I recently replaced them with larger versions through the normal process of swapping one drive at a time, letting the new drive re-sync, and then growing the RAID in place. Everything is working as expected, with the exception of systemd: it is filling my logs with messages about timing out while trying to locate both of the old drives, which no longer exist. Mdadm itself is perfectly happy with the new storage space and has reported no issues, and since this is a server I can’t just blindly reboot it to get systemd to shut the hell up.
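For reference, the one-at-a-time replacement went roughly like this (the array and device names below are placeholders for illustration, not the actual ones from my setup):

```shell
# Mark the old mirror member failed and pull it from the array
# (/dev/md0, /dev/sdb1, /dev/sdc1 are made-up example names).
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# Partition the new larger disk, then add it and let it re-sync.
mdadm /dev/md0 --add /dev/sdc1
cat /proc/mdstat                   # watch re-sync progress here

# After BOTH members have been replaced, grow the array to use
# the full size of the new disks.
mdadm --grow /dev/md0 --size=max
```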

So what’s the solution here? What can I do to make these error messages go away? Thanks.

[Update] Thanks to everyone who made suggestions below. It looks like I finally found the solution in systemctl daemon-reload, but there is also a lot of other great info here to help with troubleshooting. I’m still trying to learn the systemd stuff, so this has all been greatly appreciated!

  • caseyweederman@lemmy.ca · 9 months ago (edited)

    Right. systemctl list-automounts to find the name, maybe? I’ve never had exactly this problem though.

    Looks like list-automounts is relatively new; try systemctl status --full --all -t mount to see all mount units, and look for your old disks in the output.
    -t automount might also work, but mine is empty, which makes me think this isn’t related to the automount unit type.
    Hopefully this will point us in the right direction though.
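If the status output is too noisy, grepping the journal for device units can narrow things down. A rough sketch (the journal lines and device names below are made up for illustration):

```shell
# Hypothetical sample of the journal noise (device names invented):
cat > /tmp/journal-sample.txt <<'EOF'
systemd[1]: dev-sdb1.device: Job dev-sdb1.device/start timed out.
systemd[1]: Timed out waiting for device dev-sdb1.device.
kernel: md0: resync done.
EOF

# Pull out just the device units systemd is still waiting on:
grep -o 'dev-[a-z0-9]*\.device' /tmp/journal-sample.txt | sort -u
```

Any unit that shows up here but no longer corresponds to a real disk is a candidate for the stale state causing the timeouts.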

    • ShdwdrgnOP · 9 months ago

      That appears to be a success! Thanks for the pointers; I’m still trying to figure out the systemd stuff since I rarely have to touch it.

    • ShdwdrgnOP · 9 months ago

      Ah cool… the ‘full’ status output actually advised running systemctl daemon-reload, which appears to have cleared the errors listed. Based on the timing of previous errors in the log, it will likely be another 20 minutes before the next one would have appeared, so I’m waiting to see what happens now.
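For anyone landing here later, my understanding (worth double-checking against the systemd docs) is that daemon-reload re-runs systemd’s generators and reloads unit state, which drops references to device/mount units that no longer exist on disk. A sketch of the cleanup, with a follow-up check:

```shell
# Reload unit files and re-run generators; stale generated
# units for the removed drives should disappear.
systemctl daemon-reload

# Clear any units left stuck in a "failed" state.
systemctl reset-failed

# Follow the journal to confirm the timeout messages stop
# (-f streams new entries; Ctrl-C to exit).
journalctl -f
```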