Nevis

Members · 16 posts
  1. Is it possible to assign a static IP to an LXC container via the network configuration? Or, even better, is it possible to isolate a container from the local LAN using these settings? I wish to create a Linux container to run a specific program, but I don't want it to have access to my local LAN. It should only have internet access and allow SSH in from the local LAN, if possible. This is how the LXC container network configuration looks by default:

         # Network configuration
         lxc.net.0.type = veth
         lxc.net.0.flags = up
         lxc.net.0.link = br0
         lxc.net.0.name = eth0

     Technically I could use iptables within the container OS, but I would prefer an outside solution for the isolation rather than trusting modifications made to the operating system running inside the container.
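     To make the question more concrete, something along these lines is what I have in mind for the static address part (untested; the key names are taken from the LXC docs, and the addresses are just placeholders):

         # Hypothetical sketch, not from my setup: static address on the existing bridge
         lxc.net.0.type = veth
         lxc.net.0.flags = up
         lxc.net.0.link = br0
         lxc.net.0.name = eth0
         lxc.net.0.ipv4.address = 192.168.1.50/24   # placeholder address
         lxc.net.0.ipv4.gateway = 192.168.1.1       # placeholder gateway

     For the isolation part, I assume pointing lxc.net.0.link at a separate bridge that only has a route to the internet would keep the container off the LAN, but that is exactly the part I'm unsure about.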
  2. Yeah, tell me about it. I pretty much wanted ZFS for the snapshot capabilities, since I got tired of Home Assistant and Nextcloud breaking themselves between updates; sometimes it feels like I don't change anything and they still end up committing seppuku out of pure spite. Then again, I had my own adventure with ZFS when I almost nuked my appdata. Documented that here. Luckily @JorgeB helped me out with it and pointed me in the right direction. But to be more precise: the original dataset dockers/lxc appeared when I made the share on my ZFS drive. But when I first tried to create an LXC container I had the ZFS option on. That created a separate dataset dockers/zfs_lxccontainers/node01 and still gave me the same error I mentioned in previous posts. Then I destroyed the dockers/zfs_lxccontainers/node01 dataset, changed the LXC save setting to directory and tried again. Still got the same error. At that point I started to go through the support thread to check if someone else had run into the same message.
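     For anyone following along, the destroy step was roughly this (from memory, so double-check the exact dataset name with zfs list first):

         # confirm the exact dataset name before touching anything
         zfs list -t all | grep lxc
         # recursively remove the leftover dataset the ZFS save method created
         zfs destroy -r dockers/zfs_lxccontainers/node01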
  3. The dataset got created automatically when I created the share. At first I tried to use ZFS as the save method, but I looked at some of the previous posts where users had created snapshots with the ZFS Master plugin and that had damaged their containers. So I switched to "directory" instead. But I did try to create it with ZFS enabled at first. I'm using Spaceinvader One's script to turn directories into datasets, described in this video. I'm also using his script to sync the dataset to the array; this uses Sanoid. So I figured it's better not to mix in another kind of ZFS snapshotting and instead do a normal backup (directories) and do the snapshotting using the setup I use for my appdata, for instance. The only downside is that I get a single snapshot which contains all my LXC containers, so if I have many in the future and want to roll back, I roll back everything. Unless I use the snapshots which come with the LXC plugin.
  4. Thanks for the super fast reply. Seems like I didn't check the thread thoroughly enough. But that did it: the commands worked and I got my container running.
  5. Sorry if this has been answered here before, but I couldn't find anything similar when searching the forum. I'm trying to set up my first container in LXC and get the following error when I try to create the container: "mkdir: cannot create directory '/var/cache/lxc': Too many levels of symbolic links". Any idea what I can do to fix this?
     01 Error message when trying to create the container
     02 LXC settings
     03 LXC share settings
     04 Drive setup using ZFS, if that has anything to do with it
     05 Settings I used to create the container
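     In case it helps anyone landing here with the same message, this is just generic diagnosis for a symlink loop, not the actual fix (for me the fix was the commands pointed out in the support thread):

         # show each path component and what it links to
         namei -l /var/cache/lxc
         # or inspect the link targets directly
         ls -ld /var/cache /var/cache/lxc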
  6. Edit: Adding this in case someone stumbles on this topic later in the future; read the bottom of the post after the "Edit" section.
     ------------------------------------------------------------------
     Original post:
     Previously the docker snapshots were under dockers/system instead of in the root, if you can call it that (not sure what the correct term is). If I borrow an example screenshot from this reddit thread, ZFS Snapshots creating themselves???
     01 Example of how the docker datasets used to look.
     If this were the case I could just exclude the path with the ZFS Master plugin, see picture 02 from the previous post.
     02 Docker snapshots appear in the root instead of under "system". Appdata appears correctly, where all subdatasets are under "appdata".
     This worked before, when I first converted folders to datasets using Spaceinvader One's video ZFS Essentials: Auto-Converting Folders to Datasets on Unraid. The script can be found on his github. I tried to use the script to somehow change the point where new datasets are created but couldn't get it to work, probably because those are already datasets instead of folders like when I previously used it. I also tried to google ZFS commands to do this, but I'm not exactly sure what I'm looking for.
     ------------------------------------------------------------------
     Edit: Upon further inspection you are right. I checked the text file where I listed all my datasets and there are over 1000 lines of them. That was probably my first problem anyway when I started running out of space on my drives. I assumed the exclusion setting in the ZFS Master plugin actually prevented it from creating datasets altogether for dockers, but it just stops showing them in the dashboard. Welp, the more you know. I'll be switching back to a docker image then. Is there some kind of collective topic on the forum where these kinds of "good to know" information snippets are collected, regarding ZFS? I would like to avoid other settings-related mishaps with ZFS pools.
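     For anyone wanting to check how bad it has gotten on their own pool, a quick count can be done like this (just how I happened to check; the pool name "dockers" is from my setup):

         # how many snapshots exist across all pools
         zfs list -H -t snapshot -o name | wc -l
         # narrow it down to one pool, e.g. the docker-created datasets
         zfs list -H -t snapshot -o name -r dockers | wc -l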
  7. Sorry for the delay; it took a while before I had time to try to figure this out. I think I managed to get my appdata back in place, but I have another problem. I'm using the docker directory option, so whenever I install or update a docker container, ZFS creates multiple datasets for it. Before, this wasn't a problem because all dockers went under the dockers/system share (docker settings in 03), so I could exclude that folder from snapshots (picture 02) and this way prevent my drives from filling with excess snapshots. Is there any way I can exclude these or change the default path ZFS uses for these newly created datasets? When I hover over one of these docker-created datasets (picture 04) it points to the /dockers pool, but the mount point is "legacy".
     01 Legacy mount point
     02 ZFS Master settings
     03 Docker settings
     04 Hover over docker snapshot
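     To show what I mean by the mount point, the property can be checked like this (the name after dockers/ is just a placeholder for one of the docker-created datasets):

         # check where one of the docker-created datasets says it mounts
         zfs get mountpoint dockers/<docker-created-dataset>
         # or list everything on the pool with its mountpoint in one go
         zfs list -r -o name,mountpoint dockers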
  8. I think it would be best to take the latest, if that is enough for the snapshot system to recover the whole thing? That's what I'm curious about, since I don't know the exact logic of how the snapshots work. If we picture the scenario where both my dockers and Wms pools have fresh disks with nothing on them, can the snapshot system recover the whole thing from just the latest snapshot, or does it need the whole set of snapshots for a full recovery? I guess there is no harm if I copy all the snapshots, recover, and let it run a few days to see if everything is back to normal, then create new ones just in case. The script from Spaceinvader One should remove old useless snapshots automatically. After I swap the nvme drives, space shouldn't be an issue.
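     In case it clarifies the question: as far as I understand it (happy to be corrected), a single snapshot can be sent as a full stream on its own, and the older snapshots are only needed for incremental sends. Roughly (pool and snapshot names are placeholders):

         # full replication stream: only the chosen snapshot is needed
         zfs send -R dockers/appdata@latest_snapshot | zfs receive newpool/appdata
         # incremental stream: the older snapshot must already exist on the target
         zfs send -R -i @older_snapshot dockers/appdata@latest_snapshot | zfs receive newpool/appdata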
  9. Yeah, the idea was that if you have a problem with some specific docker you can just roll back that specific dataset instead of all of them. Especially because photoprism and nextcloud consist of a huge amount of small files, recovering those is a huge pain in the butt if you want to recover something not related to them. Also, the snapshots are taken at separate times; in general it should keep 1 monthly, 1 weekly and the last 4 daily ones, if I remember correctly, so you can roll back over a longer period if need be. Also, if you look at the picture in the first post, "Example of the datasets on disk4", it lists 566 snapshots in the disk4 backup share. I don't need to recover them all, but I assume that if they are used for recovering, it would need all the snapshots in the same set.
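     As a concrete example of rolling back only one docker (the dataset and snapshot names here are placeholders, not my actual ones):

         # roll a single docker's dataset back, leaving nextcloud, photoprism etc. untouched
         # (plain rollback only works if the chosen snapshot is the most recent one)
         zfs rollback dockers/appdata/mariadb@example_daily_snapshot
         # -r also discards any snapshots newer than the chosen one
         zfs rollback -r dockers/appdata/mariadb@example_older_snapshot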
  10. CLI isn't my strong suit, but I gave it a go. I had a little trouble getting the list out, since the list is so long that half of it gets wiped from the console scrollback, so I couldn't just copy and paste from the console. I had to figure out how to write it into a text file and then get editing permissions so I could actually give you the list. Anyway, here are the results as attachments:
     zfspools disk4.txt shows which datasets are available on disk4
     zfspools dockers and wms.txt lists the datasets I wish to recover
     I'm not sure how the recovery works, but I listed the newest snapshots in "zfspools dockers and wms.txt". If those are only partial, do we also need the "rest" of the points for recovery? Those can be found in "zfspools disk4.txt" in case we need them. I also had the docker directories under the "system" share on my dockers nvme, but I assume I'm better off just redownloading those from Previous Apps once the appdata recovery is done. But in short, once we are done it should look like this: under the "wms" share: VMS/FreeIPA, VMS/Hassio, VMS/logs; and under the "dockers" share: appdata/Authelia, appdata/EmbyServer, appdata/Logarr, appdata/Nginx, appdata/PuTTY, appdata/QDirStat, appdata/bazarr, appdata/binhex-krusader, appdata/binhex-lidarr, appdata/binhex-qbittorrentvpn, appdata/binhex-radarr, appdata/binhex-sonarr, appdata/clamav, appdata/cloudflared, appdata/code-server, appdata/dirsyncpro, appdata/freescout, appdata/goaccess, appdata/mariadb, appdata/matomo, appdata/meshcentral, appdata/nextcloud, appdata/organizrv2, appdata/paperless-ngx, appdata/photoprism, appdata/pihole, appdata/prowlarr, appdata/recyclarr, appdata/redis, appdata/satisfactory, appdata/scrutiny, appdata/tasmobackupv1, appdata/unmanic, appdata/uptimekuma, appdata/valheim, appdata/vm_custom_icons, appdata/wordpress, appdata/youtubedl-material.
     zfspools disk4.txt
     zfspools dockers and wms.txt
     Or, if it's possible, we could just recover the whole appdata and VMS. Do these also bring along the child datasets under these main datasets?
     dockers_appdata@syncoid_Unraid_2023-09-02:06:00:13-GMT03:00
     wms_VMS@syncoid_Unraid_2023-09-02:06:30:13-GMT03:00
     zfspools.txt contains the whole thing unedited.
     zfspools.txt
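     For reference, the way I got the list into a file was roughly this (writing it to the flash drive so the file is easy to reach and attach was just what happened to work for me):

         # dump every dataset and snapshot into a text file on the flash drive
         zfs list -t all > /boot/zfspools.txt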
  11. Sounds exactly like what I'm looking for. I would like to hear more detailed instructions. By chance, is this something that can be done with ZFS Master, or does this require using the CLI?
  12. After spending a few weekends trying to figure this out, I think I finally managed to get it into a satisfying state. Since there isn't much information about this, I'll add this little guide here in case someone else stumbles into this topic.

     Goal: Upgrade a ZFS drive which isn't in an array, without losing data.

     Prerequisite: A backup of the data on the drive. This can be:
     a) a traditional folder backup done via the "Backup/Restore Appdata" plugin or equivalent
     b) a dataset backup where snapshots are synced to another drive; Spaceinvader One has videos about this, Part A and Part B

     If the reader is interested in starting to use ZFS, you should also check out the other videos in this series:
     1. ZFS Essentials: Reformat (ZFS) and or Upgrade Cache Pools in Unraid
     2. ZFS Essentials: Auto-Converting Folders to Datasets on Unraid
     3. ZFS Essentials: Array Disk Conversion to ZFS or Other Filesystems - No Data Loss, No Parity Break!

     Some other things to note:
     - If you are using ZFS and your dockers are stored on this drive, make sure you are using a docker image instead of the docker directory option. Otherwise you end up like me, with thousands of snapshots filling your drives.
     - If you are using the ZFS Master plugin (which I recommend you do), the exclusion option doesn't prevent the system from creating snapshots; it just stops showing them on Unraid's main page.
     - If you are using custom docker networks, note those down. Also note down your docker and virtual machine settings. I also recommend making a backup of your virtual machines, libvirt and docker image if those are stored on the drive you are changing.
     - If your drive has datasets which are also shares, you can't use mover or unBALANCE to move the data off your disk, or at least I couldn't as of writing this guide. That is why I'm going to skip this step here. But if your data doesn't contain datasets, then you can just do the usual: set the shares to another disk, use mover to move the data off the disk, and move it back once you are done changing the drive.

     I'm going to use my setup here as an example, so replace the disk, pool and share names with your own. This assumes you already have datasets on the drive you want to replace.

     1. Run the following command and note down your datasets: zfs list -t all
        Make sure you have a fresh set of dataset copies on another drive aside from the one you want to upgrade/replace. In my case I have datasets on my dockers drive and on disk4. If your drive doesn't have datasets but just folders, make sure you have a backup on another drive.
     2. Stop the docker and VM services from the Unraid settings, in case those make changes to the drive you are upgrading.
     3. Swap in the new drive, or erase the drive you want to change/upgrade. The erasing part isn't strictly necessary if you are swapping in a completely new drive. In my case I swapped two drives between themselves, so I didn't make changes to the hardware; thus I had to erase the data and start both drives fresh. There is no harm in keeping the old drive as a backup while you are working on this.
     4. Assign the new drives to the pools and select the file system. I used ZFS encrypted.
     5. Start the array and format the drives. Once the array has started: if you want subdatasets, like my docker-specific files appearing under "appdata", use ZFS Master to create those root datasets now, before you start copying files. In my case I used ZFS Master to create the dockers/system dataset where my libvirt and docker image will live.
     6. If you used dataset sync to another drive (if not, skip to step 7):
        6.1 List your datasets with the command: zfs list -t all
        6.2 Move your data to your new disk with the command: zfs send -R docker/appdata@last_snapshot_name | zfs receive destination_zpool_name/appdata
        Note: ZFS Master has a dataset copy function, but as of writing this the plugin didn't support copying a dataset to another pool. Also, when copying a dataset to a new pool, choose the root dataset, which should contain all the child datasets as well. There might also be a better way to use the backups on another drive, so take a look at Spaceinvader One's youtube channel to see if there is a video about it; as of writing this guide, this was the best option I could find. (A rough send/receive sketch is also included below this post.)
     7. If you didn't use datasets but regular folders, use Backup/Restore Appdata to restore your drive data.
     8. Start your VM and docker services.
     9. Create your custom docker networks, if you had them in use and they were wiped in earlier steps, with the command: docker network create *networkname* (without the ** markings). For example: docker network create proxynet
     10. Download your apps from the Apps page -> Previous Apps

     Also thanks to @JorgeB for the commands to get this done.

     -----------------------------------------------------------------------------
     Original post starts here:

     I have gone through Spaceinvader One's videos about ZFS. I have a setup where, in my main array, I have one ZFS drive for snapshot cloning, and I have converted all my nvme drives to ZFS as well. I have three separate pools: cache, Dockers and Wms (it should be vms, for virtual machines, but I brainfarted when I named the pool). See the attached picture. My dockers drive contains both the docker directories themselves and my appdata. All appdata and Wms data are converted to datasets. Currently using Unraid v6.12.3.

     Anyway, the problem: my docker ssd is getting too small. Current usage is around 165GB, and when I enable snapshots it easily fills the 248GB nvme. Since my virtual machines don't need that much space, I wish to switch the nvmes between dockers and Wms. At first I thought I could just do the usual:
     - Disable the docker and virtual machine services
     - Use mover to move the data to the array
     - Swap the disks
     - Use mover to move the data back in place

     Unfortunately I didn't realize this doesn't work with ZFS when the data is in datasets instead of folders, or at least that is what I assume is happening; please correct me if I'm wrong. Mover managed to move some but not all of the data, and it does run, it just seems to do it extremely slowly. So now my situation is: I have some of my appdata on the dockers nvme and some on the array. Can anyone recommend steps for how I should start fixing this up?

     I do have an appdata backup which I made before I turned appdata from folders into datasets. I also have snapshots synced to disk4 from before I started the whole process. So in theory I could format both nvmes (dockers and Wms), swap the disks and recover from the snapshots stored on disk4. The problem is I'm not exactly sure how to do that. Do the snapshots need to be on the dockers nvme before I can use them to recover, or can I just recover directly from disk4 so it all ends up in the original pool? Note: disk4 has synced snapshots; those are originally stored on the dockers nvme and then synced to the array (disk4), the same setup as used in Spaceinvader One's video about the matter.

     If it all goes to hell, I still have the option of nuking both drives and recovering from the appdata backup. Then I'd just need to go through the ZFS dataset process again and download all the dockers from Previous Apps. But I would like to avoid this if possible.

     I posted this to reddit a few days ago but didn't get a reply, so I'm hoping I might get some help here. Here is the link to the reddit post. I attached Unraid diagnostics as well, taken while mover logging was enabled, just in case my guess is wrong about mover not working the same as before when dealing with ZFS and datasets. unraid-diagnostics-20230915-0004.zip
     Array and pool setup. Example of the datasets on disk4
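     For completeness, here is a rough sketch of what recovering from the synced copies on disk4 back into a freshly formatted pool could look like. The disk4/dockers_appdata path and the snapshot name are guesses/placeholders based on my naming; check zfs list for the real names before running anything:

         # 1. Find the synced dataset and the snapshot to restore from
         zfs list -t snapshot -r disk4
         # 2. Send the full replication stream back into the freshly formatted pool
         #    (dataset and snapshot names below are placeholders)
         zfs send -R disk4/dockers_appdata@syncoid_snapshot_name | zfs receive dockers/appdata
         # 3. Check that the child datasets arrived
         zfs list -r dockers/appdata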
  13. Sorry, you are indeed correct. I tested it and I can't reproduce it; I just can't think of any other reason it could happen. I have somehow managed to clear my templates-user folder twice now. Both times it happened when I removed some dockers that I didn't need and cleaned appdata of unnecessary folders using this plugin. Both times I noticed it after running the plugin, but something else must have done it right before, though I have no idea what, since nothing I did should affect the templates folder. Oh well, this is the wrong thread to start troubleshooting that anyway.
  14. I recently cleared my appdata folder of docker folders I no longer needed, and afterwards I noticed that all my dockers had their templates removed. Docker icons weren't loading and I couldn't edit any of them. Luckily I had just made a backup of my flash drive. Is this plugin supposed to completely clear the unraid usb/config/plugins/dockerMan/templates-user/ folder? As far as I know it didn't do this before. Running Unraid 6.9.2 and plugin version 2021.03.10.