Jaster

Everything posted by Jaster

  1. Done that, and I'm running a parity check to see if this was the issue. I was just wondering if SMART is already telling me to replace the disk.
  2. Hi guys, I have a disk with read errors and have attached the SMART report - could you tell me if the disk is still fine or if it needs to be replaced? knowlage-smart-20210409-1441.zip
  3. My server suddenly became unstable - what is happening, and what can I do?
  4. Hi guys, this keeps happening randomly with different VMs: when I try to start a VM, this message pops up. If I restart some running VMs, I can then also start the one that would not start before, and the issue is gone. Super annoying, and it keeps happening.
  5. Corruption can occur at any time, so my question is how to overcome it. A full backup of the drive, sure... However, if this is the risky part about BTRFS, I'd like to cover it with a little less effort (hardware-wise). So... would it be possible to create a backup or an "external" RAID just for the file system? E.g. having the array with all HDDs protected by parity, while the file system resides on a RAID 5/10 pool of SSDs. Or just have a job that backs up the file system on an hourly/daily basis?
  6. I'm not talking about an empty file system. The data is protected by the parity; the file system is not - so why not back it up?
  7. Sorry if it sounds silly, but is there a way to back up/replicate the filesystem without the actual data?
  8. Let's say I run into filesystem corruption - can I just restore the whole disk with parity in place?
  9. So the increased risk is losing the filesystem on a disk, but this can be recovered by the usual parity rebuild? If I want to back up something from another pool, I need to create an entry point based on a disk instead of a share, right? Anything else I'm not seeing? Is there something like a "safe procedure" to migrate from XFS to BTRFS?
  10. How does parity interact with this? Can I still lose a disk and restore it? Shares will not work across disks, so I would rather have to create shares by disk?...
  11. I saw it is possible to run the array with BTRFS drives. As I'm using more BTRFS features (snapshots for backups, etc.), I'd like to know if it is a viable option and whether there is any risk involved in migrating from XFS. Right now I'm running an array with two parity drives (XFS), an NVMe RAID 0 (BTRFS), another SSD RAID 10 (BTRFS) and a backup RAID 10 (BTRFS) with HDDs. My idea would be to integrate the backup RAID into the array and perform backups to the array rather than to a separate instance. Further, I could stripe the SSDs to get more space there and also have backups inside the array. Good idea? Bad idea?
  12. Basically, even if I change the location back to where it was, the VM won't boot anymore..?!
  13. I've been trying to move some VM images (Windows 10) to a BTRFS snapshot folder, but it appears they don't boot anymore. What did I do wrong here?
  14. Starting VMs takes way longer than it did on 6.8.3. It takes about ~7 minutes to get my VMs with passthrough up and running. The Docker and Apps tabs load for about ~30 seconds on every refresh. In general, it feels like VMs are slower overall - I'm running on an XFS cache right now.
  15. New issue: I can't unpin CPUs from Docker containers. When I do so in CPU pinning and press apply, everything seems fine, but after re-entering the pinning settings, everything is back to the way it was. If I go into the container settings for the Docker and change the settings there, it persists.
  16. Hi guys, I see tons of "unexpected GSO type: 0x0, gso_size 35, hdr_len 89" messages - I attached the diagnostics. Anything I can/should do about it? knowlage-diagnostics-20201026-2051.zip
  17. Seems like that does the job, but I don't feel very 'safe' running a beta. Any way to include the driver in a stable build?
  18. Hi guys, I just upgraded to a new mainboard/CPU, and it seems I can't get a network connection. I'm running Unraid 6.8.3. I tried resetting the network.cfg and also tried different DHCP/static settings. Nothing works - booting Windows immediately connects to the router. Current diagnostics attached - what can I do? knowlage-diagnostics-20201020-0207.zip
  19. I am going to run the images on an unassigned NVMe. Should I create a subvol there, or can I use "the whole thing"? As of now, I am at about 1TB+ of VMs. I think I'll check the performance with a spinner and decide whether I want to kill the SSDs or can live with a spinner - which I would only update every 2 weeks or so.
  20. RAID 10 with 4 it is. How would I restore it if one drive dies/drops, etc.?...
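For reference on the restore question: on a btrfs raid10 pool, a lost device is normally rebuilt with `btrfs replace`. This is only a sketch under assumptions - /mnt/pool, the devid, and the device names are placeholders, and on Unraid reassigning the pool slot in the GUI usually drives these steps for you. The script is dry-run by default and only prints the commands:

```shell
# Dry-run by default: prints the commands instead of running them.
# Set RUN='' to execute for real (as root, with the right devices).
RUN=${RUN:-echo}

# find the devid of the missing/failed disk
$RUN btrfs filesystem show /mnt/pool

# if the pool refuses to mount with a device gone, mount it degraded
$RUN mount -o degraded /dev/sdb /mnt/pool

# rebuild onto the new disk (devid 2 and /dev/sdn are placeholders)
$RUN btrfs replace start -f 2 /dev/sdn /mnt/pool
$RUN btrfs replace status /mnt/pool
```

Until the replace finishes, the pool runs with reduced redundancy, so the replacement is best started as soon as the failed disk is identified.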
  21. I'm about to perform the migration. Should I still use the domains share, or should I use a custom share for the images in use? How would/should I set it up? What I'm going to do: use an NVMe for the images themselves. Have a RAID 10 SSD BTRFS cache where I keep the backups, but don't transfer those to the parity array (via mover or whatever). Create a script that creates a new "root backup" every Sunday and creates increments from it on a daily basis. Once I have 5 full weeks, I'll delete the oldest... I'm wondering if I need to set this up file by file or if I can script it somehow on a folder/share basis...?
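The Sunday-full / daily-increment plan above can be scripted on a whole-folder basis with btrfs send/receive, provided the source is a subvolume. A minimal sketch, assuming placeholder paths (/mnt/nvme/domains as the VM subvolume, /mnt/cache/vmbackups as the backup pool) - it is dry-run by default and only records/prints what it would do:

```shell
#!/bin/bash
# Weekly-full / daily-incremental btrfs snapshot rotation (sketch).
# All paths and the retention window are assumptions - adjust to your layout.
SRC=${SRC:-/mnt/nvme/domains}        # subvolume holding the VM images
DST=${DST:-/mnt/cache/vmbackups}     # btrfs pool receiving the backups
KEEP_WEEKS=${KEEP_WEEKS:-5}
DRYRUN=${DRYRUN:-1}                  # set DRYRUN=0 to execute for real

PLAN=()                              # commands we would run
run() { PLAN+=("$1"); if [ "$DRYRUN" = 1 ]; then echo "+ $1"; else eval "$1"; fi; }

today=$(date +%Y-%m-%d)
snap="$SRC/.snapshots/$today"

# btrfs send requires a read-only snapshot
run "btrfs subvolume snapshot -r '$SRC' '$snap'"

if [ "$(date +%u)" = 7 ]; then
    # Sunday: full send - the new "root backup" for the week
    run "btrfs send '$snap' | btrfs receive '$DST'"
else
    # other days: incremental delta against yesterday's snapshot
    parent="$SRC/.snapshots/$(date -d yesterday +%Y-%m-%d)"
    run "btrfs send -p '$parent' '$snap' | btrfs receive '$DST'"
fi

# drop backups older than the retention window (oldest first)
cutoff=$(date -d "-$((KEEP_WEEKS * 7)) days" +%Y-%m-%d)
for s in "$DST"/.snapshots/*; do
    name=$(basename "$s")
    if [ "$name" != '*' ] && [ "$name" \< "$cutoff" ]; then
        run "btrfs subvolume delete '$s'"
    fi
done
echo "planned ${#PLAN[@]} actions"
```

Because snapshots operate on subvolumes, no per-file handling is needed; the whole folder is captured atomically each run.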
  22. That's very detailed, thanks! I'm planning a single NVMe drive for the VMs and using the cache (BTRFS RAID) as the backup location. If I'd like to copy a specific snapshot, can I just "copy" it (e.g. cp or Krusader), or do I need to do some kind of restore? Are the deltas created from the initial snapshot or from the previous one?
  23. I've struggled with BTRFS quite a lot and went away from it in favor of XFS without a RAID. I'd like to give it another try, but do it the "right way". I can use 3 or 4 2TB SSDs, but they aren't the same model. I want a setup where a disk can die/fail and I am still able to restore the data, or even run in degraded mode until I have a replacement... Is this achievable, and if so, how?
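For what it's worth, that goal (one disk can fail and the pool keeps running degraded until replaced) maps to btrfs raid1 for both data and metadata; mixed models and even mixed sizes are fine because btrfs allocates space in chunks. A hedged sketch with placeholder device names and mount point - dry-run by default, and on Unraid the pool UI performs the mkfs for you:

```shell
# Dry-run by default: prints the commands instead of running them.
# Set RUN='' to execute for real (as root, against real devices).
RUN=${RUN:-echo}

# create the pool: mirror data (-d) and metadata (-m) across the SSDs
# /dev/sdx.. are placeholders - mixed models/sizes are fine under raid1
$RUN mkfs.btrfs -f -d raid1 -m raid1 /dev/sdx /dev/sdy /dev/sdz
$RUN mount /dev/sdx /mnt/pool

# verify the profiles and per-device allocation
$RUN btrfs filesystem usage /mnt/pool

# after a single-disk failure, the pool can still be mounted degraded
$RUN mount -o degraded /dev/sdy /mnt/pool
```

raid1 keeps two copies of every chunk on different devices, so any single disk can be lost; usable capacity is roughly half the total rather than "smallest disk times count", which is what makes mismatched drives workable.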