• Morethanevil@lemmy.fedifriends.social · 5 months ago

    Cleanup

    Check current disk usage:

    sudo journalctl --disk-usage

    Rotate the journal files (active files are archived so a vacuum can clean them up):

    sudo journalctl --rotate

    Or

    Remove logs older than 2 days:

    sudo journalctl --vacuum-time=2days

    Or

    Remove old logs, keeping only the most recent 100MB:

    sudo journalctl --vacuum-size=100M

    How to read logs:

    Follow the log of a specific service:

    sudo journalctl -fu SERVICE

    Show extended log info for a service and jump to the end of its log:

    sudo journalctl -xeu SERVICE
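
    A couple of related filters that can help when tracking down a noisy service (standard journalctl options; SERVICE is a placeholder as above):

    Show only errors (and worse) from the current boot:

    sudo journalctl -b -p err

    Show what a service logged in the last hour:

    sudo journalctl -u SERVICE --since "1 hour ago"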

  • RobotZap10000@feddit.nl · 5 months ago

    Try 60GB of system logs after 15 minutes of use. My old laptop’s wifi card worked just fine, but it spammed the error log with a corrected error. Adding pci=noaer to the GRUB config fixed it.
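
    For anyone needing to do the same, it looks roughly like this on a GRUB-based distro (a sketch; the existing parameters and the regenerate command vary by distro):

    Append the parameter to the kernel command line in /etc/default/grub:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"

    Then regenerate the config, e.g. on Debian/Ubuntu:

    sudo update-grub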

  • Andrew · 5 months ago

    *cough*80 GiB*cough*

  • hushable@lemmy.world · 5 months ago

    Once I had a mission-critical service crash because the disk got full. It turned out there was a typo in the logrotate config, and as a result the logs were not being cleaned up at all.

    edit: I should add that I used the commands shared in this post to free up space and bring the service back up
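
    Side note: logrotate can dry-run its config and report parse problems, which is a cheap way to catch a typo like that:

    sudo logrotate --debug /etc/logrotate.conf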

  • wildbus8979@sh.itjust.works · 5 months ago

    Fucking blows my mind that journald broke what is essentially the default behavior of every distro’s use of logrotate and no one bats an eye.

    • Regalia@lemmy.blahaj.zone · 5 months ago

      I’m not sure if you’re joking or not, but the behavior of journald is fairly dynamic and can be configured to an obnoxious degree, including compression and sealing.

      By default, the limit works out to 10% of the filesystem size, capped at 4GB:

      SystemMaxUse= and RuntimeMaxUse= control how much disk space the journal may use up at most. SystemKeepFree= and RuntimeKeepFree= control how much disk space systemd-journald shall leave free for other uses. systemd-journald will respect both limits and use the smaller of the two values.

      The first pair defaults to 10% and the second to 15% of the size of the respective file system, but each value is capped to 4G.
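
      If you want a hard cap instead of the percentage-based default, a drop-in along these lines should do it (the 500M value is just an example):

      Create /etc/systemd/journald.conf.d/size.conf containing:

      [Journal]
      SystemMaxUse=500M

      Then restart the daemon:

      sudo systemctl restart systemd-journald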

    • tentacles9999@lemmynsfw.com · 5 months ago

      Still boggles my mind that systemd being terrible is still a debate. Like of all things, wouldn’t text logs make sense?
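
      To be fair, the journal can at least be dumped to plain text when you want it, using one of journalctl's output formats (SERVICE and the file name are placeholders):

      journalctl -u SERVICE -o short-iso > service.log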

  • muhyb@programming.dev · 5 months ago

    This once happened to me on my pi-hole. It’s an old netbook with a 250 GB HDD. Pi-hole stopped working and I checked the netbook. There was a 242 GB log file. :)
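
    For anyone hunting down a monster file like that, plain coreutils will usually point at the offender (nothing pi-hole-specific here):

    sudo du -ah /var/log | sort -rh | head -n 10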

  • zoey@lemmy.blahaj.zone · 5 months ago

    Recently had the Jellyfin log directory take up 200GB; checked the forums and saw someone with the same problem, but with 1TB instead.

    • Agent641@lemmy.world · 5 months ago

      2024-03-28 16:37:12:017 - Everythings fine

      2024-03-28 16:37:12:016 - Everythings fine

      2024-03-28 16:37:12:015 - Everythings fine

  • FiniteBanjo@lemmy.today · 5 months ago

    Windows isn’t great by any means, but I do like the way they have the Event Viewer layout sorted to my tastes.

    • MonkeMischief@lemmy.today · 5 months ago

      True that. Sure, I need to keep my non-professional home sysadmin skills sharp and enjoy getting good at these things, but I wouldn’t mind a better GUI journal reader / configurator thing. KDE has a halfway decent log viewer.

      It might also go a long way towards helping the less sysadmin-for-fun-inclined types troubleshoot.

      Maybe there is one and I just haven’t checked. XD

  • Scribbd@feddit.nl · 5 months ago

    I recently discovered that the company I work for has an S3 bucket with several TB of network flow logs. It contains all network activity of the past 8 years.

    Not because we needed it. No, the lifecycle policy wasn’t configured correctly.
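
    For reference, a minimal expiration rule looks roughly like this; the bucket name, prefix and 90-day retention are placeholders:

    aws s3api put-bucket-lifecycle-configuration \
        --bucket my-flow-logs-bucket \
        --lifecycle-configuration '{"Rules": [{"ID": "expire-flow-logs", "Filter": {"Prefix": "flow-logs/"}, "Status": "Enabled", "Expiration": {"Days": 90}}]}'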

  • alien@lemm.ee · 5 months ago

    I couldn’t tell for a solid minute if the title was telling me to clear the journal or not