Everything posted by JonathanM

  1. That's exactly how many (if not most) people are currently using unraid: the spinning rust goes in the parity array, and the SSD devices go in the cache pool.
  2. Yes, the primary reason for parity checks is to confirm that the array is capable of reconstructing a failed disk accurately. That covers both mathematical accuracy and physical disk reliability. In a "normal" RAID setup, all disks are spinning all the time and participating more or less equally. In unraid, however, it's perfectly plausible to have disks that are NEVER accessed during day to day activities, because they don't contain any data that anyone currently needs. If one of those drives fails, you wouldn't know it until it was too late, when you were trying to reconstruct a different failed drive. Parity checks provide a way to keep tabs on the health of those seldom used drives.
  3. It was just an idea anyway. My thought process was that even though the CPU may not be vulnerable, the mitigations would still be applied in the code regardless. Honestly, I don't know enough about low level coding to figure it out for myself, so I just wanted to put the theory out there. All these issues seemed to start popping up in roughly the same time frame, so it's tough to tell what is truly causal and what is just coincidental.
  4. Yes, the disk could fail even though it's not being used. This would manifest as a read failure, where the disk reports that it can't return the data at that address. The chances of that happening are vanishingly slim, though. It's much more likely to give an error when trying to read the 0 that was placed there.
  5. You can't, because as far as I know it doesn't exist. However... https://linuxize.com/post/how-to-use-linux-screen/
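
    For reference, the basic screen workflow from that link looks roughly like this (the session name "unraid-job" is just a placeholder):

      # start a named session and run your long task inside it
      screen -S unraid-job
      # detach with Ctrl-a then d; the task keeps running after you log out
      # later, list sessions and reattach from a new login
      screen -ls
      screen -r unraid-job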
  6. So toggling the mitigations doesn't change anything?
  7. Could you please toggle this plugin and check status with all mitigations enabled and disabled?
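
    For reference, one generic way to see the current mitigation status from the unraid console is the kernel's own sysfs report (a standard linux command, not specific to the plugin):

      grep . /sys/devices/system/cpu/vulnerabilities/*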
  8. I wonder if the sqlite thing is related, since we know there have been issues with sqlite on the fuse file system for some users for a long time, but recently it's become a major issue. Perhaps when file system performance falls below some threshold, sqlite reacts poorly and corrupts its database instead of waiting for the operation to complete. Maybe there is a latent bug in sqlite that is being triggered by i/o speed?
  9. https://www.grc.com/shieldsup
  10. Are you getting those results from the unraid console, or another machine on your network?
  11. Well, I just had a cache SSD die, and am in the process of recovering. The restore process went fine, but when I added my previous apps, they ALL got set to auto start, and my staggered delays were all gone, set to 0. Where is that information stored? I can certainly recreate it, and the start order was maintained, but I have a bunch of containers that are started only when I need them, and it would be nice if the auto start state and delay were saved as well. Thanks for this plugin @Squid, it really saved the day.
  12. Not possible. You want single, not RAID0. Plug your numbers in here and change the RAID levels to see what the usable capacity will be. https://carfax.org.uk/btrfs-usage/
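
    As a rough sketch of why single is the right choice capacity-wise (this mirrors the general idea behind that calculator rather than its exact math, and the drive sizes below are made up): btrfs RAID0 stripes every chunk across at least two devices, so with two unequal drives you only get twice the smaller one, while single can fill both drives completely.

      #!/bin/bash
      # hypothetical sizes in GB for a two-device btrfs pool
      big=1000
      small=250
      echo "single: $(( big + small )) GB usable"
      echo "raid0:  $(( 2 * small )) GB usable (limited to striped pairs)"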
  13. Do you have either diagnostics, a status email, or a screenshot of the main gui from before all this started?
  14. I would avoid those tiny drives, for the reasons you are figuring out. They are so small that they don't shed heat very well. A slightly larger drive with a metal case is going to be much more reliable. https://www.kingston.com/us/usb-flash-drives/datatraveler-se9-usb-flash-drive
  15. I can't find solid confirmation with a quick Google search, but some WD Blue SSDs use Marvell controllers, which are known to cause issues with linux. If you continue to have issues whenever you assign that drive, post your diagnostics so we can see which controller it uses.
  16. Back in the day unraid was limited to 2.2TB, so the largest usable drive was effectively 2TB. That changed with the advent of unraid 5.0, I believe, and the new limit came from ReiserFS, at 16TB. Since 6.0, or whenever XFS and BTRFS were added, the limits aren't really in sight any more, at least not for several years. If you are still using ReiserFS you will have performance issues with larger drives; 2TB is already stretching it performance-wise. There is still an issue with older hardware that can't see more than 2.2TB, but that's not anything that unraid can bypass.
  17. I've got a test in progress, I'll let you know the outcome.

      1. Set up container on unraid at location A. - done
      2. Install client on Windows 7 VM running on unraid with urbackup server. - done
      3. Run default backup on VM. No restart, just install and start backup. - done
      4. Configure port forwarding. (All 3 ports, was that necessary?) - done
      5. Spin up blank VM with iso bare metal restore image at location B. - done
      6. Connect to urbackup server, select backup completed in step 3. - done
      7. Restore backup and compare VM at location B. - 6% complete, may be waiting a while due to internet speed. Test cancelled indefinitely. SSD cache at location B disappeared. 😪
  18. The only size limit you will encounter for many more years is hardware compatibility. Unraid software can handle any size disk your hardware can support.
  19. Quoting another user:

      "I have 96GB installed (4x8GB and 4x16GB). My issue is similar in that unraid only shows 32GB as usable and installed; in Windows 10 everything works PERFECTLY and all 96GB show, but 64GB of the RAM is missing in unraid 6.6.6 or 6.7.2."

      If you read what I posted through this thread, you would have seen where I said... which is exactly what you found. Linux in general is more picky about hardware, or, to be more accurate, hardware vendors spend much less time making sure their products are compatible with linux than with windows. Something that "just works" in windows may very well have issues in linux. Unraid doesn't author the linux kernel they use, they just package it.
  20. In reply to "6.7.3 RC?":

      Yes. It's a release series specifically targeting the sqlite corruption issues.
  21. Yeah, it'll work to start and stop containers, VMs, anything you want to do with a command line script. It just takes some time and smarts to set it up; it's not exactly noob friendly. It's also not good for time sensitive stuff: depending on how you set it up, you could be waiting several minutes for the action to be applied. At least you have feedback, as the trigger file can be deleted or modified to indicate success.
  22. This is hardly an ideal solution, but you could script a restart that looks for a specific file to exist on a user share. Run the script on a cron job every minute or so; if the file exists, delete the file and restart the container with docker. Then all you have to do is figure out remote access to a location on the array, which can be handled MANY different ways, some more secure than others. You could, for instance, set up nextcloud with a sync for that user, so that creating a reboot.me file in their nextcloud sync folder triggers the script.
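
    A bare-bones sketch of that idea, assuming the container is named "plexserver" and the trigger file lives on a share called "triggers" (both names are made up), run from cron every minute:

      #!/bin/bash
      # restart the container whenever the trigger file appears, consuming the trigger first
      TRIGGER=/mnt/user/triggers/reboot.me
      if [ -f "$TRIGGER" ]; then
          rm -f "$TRIGGER"
          docker restart plexserver
      fi

    Anything that can write to that share, like the nextcloud sync folder mentioned above, can then create reboot.me to request the restart.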
  23. Are you sure you have the correct cables? There are two varieties (commonly sold as forward and reverse breakout) with physically identical connectors that are wired differently: one goes from individual motherboard ports to a drive cage, the other from a card like yours to individual drives.
  24. Yes and no. I would make a plan to migrate off of reiserfs whenever convenient, and definitely don't format any new disks as reiserfs. There is a whole thread discussing moving your data around to change file systems. Long story short, you need enough free space elsewhere in the array to completely empty your largest drive.
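
    If it helps, the usual shape of that move is a disk-to-disk copy from the console, something like the following (disk numbers are placeholders, and you'd verify the copy before clearing and reformatting the emptied reiserfs disk to XFS):

      # copy everything from the old reiserfs disk to a disk already formatted XFS,
      # preserving permissions, timestamps, and extended attributes
      rsync -avX --progress /mnt/disk1/ /mnt/disk2/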