Akshunhiro

Members
  • Posts: 25

  1. I just updated but now get this error:

     I'm on 6.11.4.

     EDIT: Fixed by updating to 6.12.6. Had been putting it off due to the TrueNAS ZFS pool, but it's working fine.

  2. Hello, could I please get some help with date formats to set up Shadow Copies?

     I'm still on 6.11.4 with a pool created in TrueNAS and wanted to get Shadow Copies working in Windows, but first I need to work out the correct date format for both my User Scripts script:

     #!/bin/bash
     DATE=$(date +%B%y)
     zfs snapshot -r chimera@auto-$DATE

     (this used to be "zfs snapshot -r chimera@auto-`date +%B%y`" but I just updated it to the above while testing matching the format)

     and my "shadow: format = auto-" line in /boot/config/smb-extra.conf (which only contains the ZFS pool share).

     If I leave "shadow: format = auto-" and "shadow: localtime = yes" then all the previous versions have the same date. Any other combo and I don't see any.

     Here's the output of zfs list -t snapshot showing the current format:

     zfs list -t snapshot
     NAME                           USED  AVAIL  REFER  MOUNTPOINT
     chimera@auto-April23             0B      -   224K  -
     chimera@auto-May23               0B      -   224K  -
     chimera@auto-June23              0B      -   224K  -
     chimera@auto-July23              0B      -   224K  -
     chimera@auto-August23            0B      -   224K  -
     chimera@auto-September23         0B      -   224K  -
     chimera@auto-October23           0B      -   224K  -
     chimera/data@auto-April23     31.0G      -  20.0T  -
     chimera/data@auto-May23        594M      -  21.1T  -
     chimera/data@auto-June23      45.5G      -  21.9T  -
     chimera/data@auto-July23       103G      -  22.3T  -
     chimera/data@auto-August23     118G      -  22.9T  -
     chimera/data@auto-September23 89.8G      -  23.5T  -
     chimera/data@auto-October23   34.7G      -  24.8T  -

     This pool is only hosting media, so I didn't think it'd be worthwhile setting up zfs-auto-snapshot.sh.

     Fixed it with: "shadow: format = auto-%B%y"

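     For anyone else chasing this, here's roughly what the working share stanza in /boot/config/smb-extra.conf looks like. The share name and path are just my setup, and the surrounding shadow_copy2 lines are how I understand the module is normally wired up for ZFS, so treat the whole block as a sketch rather than a confirmed config; the only line I can vouch for is the %B%y format.

     [data]
         path = /mnt/chimera/data
         vfs objects = shadow_copy2
         # ZFS exposes snapshots under <share path>/.zfs/snapshot
         shadow: snapdir = .zfs/snapshot
         # must match the snapshot names, e.g. auto-October23
         shadow: format = auto-%B%y
         shadow: localtime = yes
         shadow: sort = desc
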
  3. Hi everyone,

     Just noticed I've lost a few entries after restoring my container from backup. Unfortunately my array drive died (SSD, so no parity), but I wasn't too worried as I had backups.

     I was keen to see how Vaultwarden would go, as I'd read that the apps and browser extensions keep a copy and sync back when they can. There are only 3 entries missing, but I'm not sure why these didn't update to the container.

     I have 2 other devices, so I disconnected them from the net before opening the app, and I can see the entries there (216 entries vs 213).

     Will the logs show what's happening if I let one of them sync?

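     In case it matters for an answer: my plan is to tail the container's logs while one of the out-of-date devices syncs, something like the below. "vaultwarden" is just what my container happens to be called, and LOG_LEVEL is the Vaultwarden setting I understand controls verbosity, so take those as assumptions on my part.

     #!/bin/bash
     # follow the Vaultwarden container's logs while a device syncs
     docker logs -f --since 10m vaultwarden

     # if that's too quiet, set LOG_LEVEL=debug in the container template,
     # then restart the container and watch again
     # docker restart vaultwarden
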
  4. Just saw the plugin update, wondering whether I need to change anything on my system. I'm on 6.11.4 and have a ZFS pool created in TrueNAS passed through.

  5. Hi all, just wanted to register my interest in adding support for Dell servers. I've got an R520 and had been using a crude script to get by.

     This plugin is installed but I have no idea how to set up a JSON for fan control. Reading is working fine though.

     For whatever reason, my script has randomly stopped working twice now. The logs say it ran, and it outputs/prints the temps, but that's it. No errors seen in iDRAC, and no change if I reboot the iDRAC or even uninstall ipmitool, the plugin, or a combination of all of them.

     I ended up installing PowerEdge-shutup but I'm struggling to fine-tune it as it seems pretty aggressive.

     PowerEdge-shutup with log output.sh
     Original with log output

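     For context, "crude script" means something along these lines. The raw byte sequences are the ones commonly shared for Dell PowerEdge/iDRAC fan control rather than anything I can vouch for across generations, so please treat them as an assumption and test carefully on your own hardware.

     #!/bin/bash
     # read the temperature sensors the BMC exposes
     ipmitool sdr type temperature

     # take manual control of the fans away from the iDRAC
     ipmitool raw 0x30 0x30 0x01 0x00

     # set all fans to roughly 20% duty cycle (0x14 hex = 20)
     ipmitool raw 0x30 0x30 0x02 0xff 0x14

     # hand control back to the iDRAC's automatic curve when done
     ipmitool raw 0x30 0x30 0x01 0x01
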
  6. Ooh, I found it. Was reading up more on where snapshots are stored and was able to navigate to /mnt/chimera/.zfs/snapshot/manual/data, and everything's there.

     It's read-only though, so a standard move is taking just as long as a copy from the array. Anything else I can try? I suspect a zfs send and zfs recv will suffer from the same bottleneck.

     EDIT: Nevermind. It's not an elegant solution since I copied to the root and not a child dataset. Not sure how I managed that, but I'll just copy everything again.

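     For reference, this is the sort of send/receive I had in mind: replicating the root-dataset snapshot into a new child dataset locally instead of copying file by file. The dataset names are from my pool, "chimera/restored" is just a placeholder, and I don't know yet whether it actually avoids the bottleneck.

     #!/bin/bash
     # replicate the manual snapshot of the pool root into a new child dataset
     zfs send chimera@manual | zfs receive chimera/restored

     # the files should then appear under the new dataset's mountpoint
     ls /mnt/chimera/restored
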
  7. Hi all, wondering what I may have done wrong here. I've set up a pool and moved data from the array to the pool, but noticed most of the folders are empty.

     I ran rsync -avh /source /destination and it took about 36 hours to move 15TB.

     Once the transfer had completed, I took a snapshot before renaming the source folder from data to data_old with "mv /mnt/user/data /mnt/user/data_old". I then edited the mountpoint for the pool with "zfs set mountpoint=/mnt/chimera chimera" and symlinked /mnt/user/data to /mnt/chimera/data.

     I saw the free space for the share in the unRAID GUI reflected the available space but, after checking the share via SMB, most folders are empty. Confirmed this was the case in the CLI as well.

     I don't think I can roll back either, as the "refer" is in the pool, not the data dataset. When copying another folder, it seemed to write everything back and not just restore or refer from the snapshot.

     I don't really want to transfer everything all over again. Is there anything I can do?

     zfs list
     NAME          USED   AVAIL  REFER  MOUNTPOINT
     chimera       15.2T  28.2T  14.9T  /mnt/chimera
     chimera/data   312G  28.2T   312G  /mnt/chimera/data

     zfs list -t snapshot
     NAME                 USED  AVAIL  REFER  MOUNTPOINT
     chimera@manual         0B      -  14.9T  -
     chimera/data@manual  719K      -   164G  -

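     Looking at the zfs list output again, the 14.9T REFER sits on the root dataset chimera while chimera/data only refers to 312G, which suggests the rsync landed in the pool root rather than the child dataset the share points at. A couple of commands I'd use to confirm where the files actually are; the commented rsync is only a sketch, with a placeholder path, of what a local re-copy into the child dataset would look like.

     #!/bin/bash
     # show usage per dataset and where each one is mounted
     zfs list -o name,used,refer,mountpoint -r chimera

     # see which directories under the pool root are holding the data
     du -sh /mnt/chimera/* 2>/dev/null

     # local re-copy into the child dataset (placeholder source path, trailing
     # slashes so only the contents are copied):
     # rsync -avh /mnt/chimera/<where-the-files-landed>/ /mnt/chimera/data/
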
  8. Ah yep. Damn, wonder why it's so big then haha.

     So I've tested reverting back to a 20GB btrfs image and Docker is working fine now. I'll make a note of the container configs just in case and purge the /mnt/user/system/docker directory tonight (I assume that's fine to do since it's an option in the GUI? And yes, I'll do it through the GUI).

     I'll then keep Docker set up as a directory instead of an image; I think I changed it because of excess write concerns on the SSD.

     Thanks for your assistance!

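     A quick way to see what's actually eating the space before purging, using the standard Docker CLI (nothing Unraid-specific assumed here):

     #!/bin/bash
     # break down Docker disk usage by images, containers, volumes and build cache
     docker system df -v

     # per-container writable-layer sizes; big numbers usually mean something is
     # writing inside the container instead of to a mapped host path
     docker ps -a -s
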
  9. Oh sorry, missed this. Will give it a try when I get home from work.

  10. Thanks trurl, I didn't, and that was my first mistake.

      I did get a pop-up from Fix Common Problems saying that Docker would break if files were moved. The Docker tab was showing a lot of broken containers with the LT logo and broken link icon. Pretty sure I turned off Docker and VM Manager when I got that pop-up, so I ran the mover again, was left with ~7GB on the old cache drive and, after checking the contents, assumed it was done.

      My problem was likely caused by manually copying (cp -r) the appdata, system & domain folders (these were all originally set to cache:Prefer) to the new cache drive rather than letting the mover do it, as I don't think it retained the correct permissions.

      With the other changes to my setup, I read that having a docker folder rather than the image is a better way to go, so that change was made a while ago.

      I no longer have any vDisks as I recently grabbed a Synology E10M20-T1, so I have an M.2 passed through to each of my VMs. I can't remember where Plex metadata is stored but suspect that's what's contributing to the 132GB.

      The only container I have working is Cloudflare tunnel, but that was set up through the CLI. I tried removing vm_custom_icons but get the same error message when re-installing and attempting to start it. VMs are still working.

      I think I need to purge Docker completely and set it up again, but I'm not sure what the correct method would be. I also tried installing 6.11.0-rc2 to upgrade Docker without any luck, but suspect that's because either system or appdata still have broken permissions and it won't pick up those folders properly.

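      For what it's worth, I think the root cause is that cp -r doesn't preserve ownership or modes, whereas an archive-style copy does. Roughly what I should have run instead; the paths are just examples from my setup, and this is a sketch, not a recommendation over letting the mover handle it.

      #!/bin/bash
      # -a = archive: preserves permissions, ownership, timestamps and symlinks
      cp -a /mnt/cache_old/appdata /mnt/cache/

      # or the rsync equivalent, which can also carry hard links, ACLs and xattrs
      rsync -aHAX /mnt/cache_old/appdata/ /mnt/cache/appdata/
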
  11. So I failed at swapping over my cache SSD.

      I set the cache to Yes and ran the mover, but took a backup of appdata and system anyway. What I failed to do was let the mover automatically move stuff back to the new cache SSD, instead of choosing prefer:cache, to retain the permissions.

      Long story short, permissions are borked and I'm not sure how to fix them. I've tried restoring a backup, but appdata and system are on the array so they continue to have borked permissions. Tried deleting appdata so it'd be recreated, but I still can't get containers to start:

      Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown

      ganymede-diagnostics-20220801-1759.zip

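      One check that might separate broken appdata permissions from a damaged Docker image store: pull a small known-good image and see whether a fresh container can find its own /bin/sh. alpine here is just a throwaway test image, not something from my setup.

      #!/bin/bash
      # pull a tiny image and run a shell from it; if this also fails with the
      # "stat /bin/sh" error, the Docker image/layer store itself is damaged
      docker pull alpine:latest
      docker run --rm alpine:latest /bin/sh -c 'echo container filesystem looks fine'
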
  12. Hey all, sorry to create a new topic but my Google-Fu is failing me.

      Recently set up an unRAID box and it's been great. One annoyance I've noticed though is that SMB stops working after creating a new share, and I need to restart to bring it all back.

      I couldn't find any way to restart smbd manually, but I could be looking for/trying to do the wrong thing. Could someone please advise how to restart the service (if that is in fact the issue) rather than restarting the whole box?

      I'm on 6.9.2.

      Thanks

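      For anyone searching later, this is the sort of thing I was hoping exists. I haven't confirmed the exact commands or paths on 6.9.2, so treat both as assumptions on my part rather than official Unraid advice.

      #!/bin/bash
      # ask the running Samba daemons to re-read smb.conf without dropping connections
      smbcontrol all reload-config

      # or restart Samba outright via the Slackware-style rc script
      /etc/rc.d/rc.samba restart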