JonathanM

Everything posted by JonathanM

  1. Since you are seriously thinking about all new parts, my suggestion would be to build the new server without using ANY of the current parts so you can leave it running. Spin up the new server with a trial license, copy your data and such over the network, then when the new box is fully running as you like, either transfer the configuration to your licensed USB stick, or purchase a new license and use the old box as backup, only powering it up to do data copies when desired. If you can leave the old build untouched it will greatly reduce the chances of data loss.
  2. For those of us with servers tucked away, any chance of this working over USB/IP? I'm envisioning perhaps a Raspberry Pi clone of some flavor, joined to the local wifi, cycling between several servers.

  3. Try setting the share to cache:yes, then run the mover. If you enable mover logging before you run it, you can see any error messages in the logs. If you can't figure it out from there, attach diagnostics to your next post and someone can take a look.
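Checking the logs for mover activity can be sketched like this. The helper below is just a grep filter, assuming mover logging has been enabled in the mover settings so its activity lands in the syslog; the path argument lets you point it at a log pulled from a diagnostics zip instead.

```shell
# Filter recent mover-related lines out of a syslog.
# Defaults to the live syslog; pass a path to inspect a saved copy.
show_mover_log() {
    grep -i 'mover' "${1:-/var/log/syslog}" | tail -n 20
}
```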
  4. Just as a general rule, you shouldn't leave file manager containers running. They generally don't have the same access security controls in place, so anyone who stumbles over the container's web UI suddenly has full access without entering any credentials.
  5. Do you have any ports forwarded on your router pointed to Unraid's IP? Are any of your Plex libraries set to remove items after watching? Attaching diagnostics to your next post in this thread may help with clues.
  6. Yeah, it's a common thing. I'd guess perhaps 25% of the posts in this thread are asking about updates, mostly when the base container has been updated to the point that the app no longer runs because it has never been updated. Think of containers like miniature VMs: they contain parts of the OS and support structures as well as the app itself. This specific container doesn't force-update the app, just the base OS and supporting files; it's up to you to keep the app up to date. Many containers also update their main app, but Nextcloud was deemed too fragile or something, and it requires a significant amount of handholding for some updates.
  7. Can we get a display option to show sparse file sizes? It would be nice to see size on disk vs. apparent size.
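The on-disk vs. apparent distinction can be demonstrated from a shell (GNU coreutils `truncate` and `du` assumed):

```shell
# Create a 100 MiB sparse file: it has a 100M apparent size,
# but no data blocks are actually allocated on disk.
truncate -s 100M sparse.img

du -h --apparent-size sparse.img   # reports ~100M (what ls shows)
du -h sparse.img                   # reports ~0 (blocks actually used)
```

A GUI option showing both numbers would make it obvious at a glance which files are sparse.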
  8. That depends on which template you are using. Some support links have recently been broken, so you may have to search for "unraid pihole" and work out which one you are actually running if there is more than one.
  9. In the context of Unraid, the parity array can indeed have zfs and btrfs and xfs all as single devices in the array, each drive in the array has an independent filesystem. Each pool in Unraid only has a single file system type, but you could have a zfs pool, a btrfs pool, and a single device xfs pool.
  10. Memory is fuzzy, but I think that file is created by one of the flash backup packages inside the backup archive, so the only reason it would exist on the USB is if you extracted a backup to the drive at some point. I don't think it's normally on the USB stick, and it's not updated on the drive.
  11. Don't know if it will help make it a thing, but could you do a short feature request to at least put it on the radar? Seems like it should be just a quick addition to the GUI.
  12. Would there be any harm in using the -nvram switch on a VM that didn't "need" it?
  13. I've never seen a correlation made to explain why it affects some and not others.
  14. If you are passing through a physical GPU, it's assumed you will have a monitor connected to that GPU to see the output. Some cards even require some sort of monitor to be detected in order to work at all; in those cases, if you are using some flavor of remote desktop software to access it, you still need a monitor or a monitor-emulation (dummy) plug.
  15. Preferably not, but it's at least something to work with vs. complete loss. "Sorry, we tried everything we could" is preferable to "You have no backup archives at all, so you are hosed."
  16. What is your definition of "best"? In the context of Unraid, best typically means USB 2.0, physically large, all metal, low idle power draw, brand name sticks. Very few drives that meet that criteria are still for sale unfortunately.
  17. Sometimes the partially valid backup contains all that is really needed to recover. The parts that change rapidly may not be "valuable", in the sense that restoring an older copy of some files in the archive may not be a breaking issue vs. not having any data at all. The ability to keep a backup that technically isn't complete, but is complete enough, is better than nothing. Chances are, even if the backup doesn't verify, it's still usable enough for disaster recovery, especially if you keep multiple dates of backups. Some (many?) containers have the option to keep internal database backups, and since those files are pretty much guaranteed to be stable even while the container is running and changing the active database, the "invalid" backup still contains a valid backup made by the app itself that can be used. Making backups of running containers is complicated; it's not a black-and-white issue like your statement seems to imply.
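The "app-internal backup" idea can be sketched with SQLite, which many containers use for their database. SQLite's online `.backup` command produces a consistent copy even while the database is in use, which is why such an internal backup stays valid inside an otherwise "dirty" snapshot of a running container. This is just one example of the pattern; other apps ship their own dump tools, and the paths here are illustrative.

```shell
# Make a consistent copy of a live SQLite database.
# $1 = live database file, $2 = destination copy.
backup_db() {
    sqlite3 "$1" ".backup '$2'"
}
```

Scheduling something like `backup_db /data/app.db /data/backups/app.db` inside the container means your external backup archive always contains at least one consistent copy of the database, regardless of what state the live file was in when the archive was made.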
  18. https://forums.unraid.net/forum/76-deutsch/
  19. Why so often? Default is once a month, typically that's enough. Parity checks are only needed to verify that parity is being kept up to date, and they are useful to ensure that seldom used disks are still completely error free so if a drive fails it can be successfully rebuilt. Parity is kept up to date in realtime, checks are only needed to verify the realtime process is still working as it should.
  20. In the past, the docker engine required a certain filesystem with specific options. An image was a quick way to make sure the correct options were used, without requiring an entire drive to be formatted that way.
  21. I know you are kicking yourself hard enough already, but I do need to point out that using disk encryption can greatly complicate file system corruption issues because there is another layer that has to be perfect. I would NEVER recommend encrypting drives where you aren't keeping current separate backups, it's just too risky. Verify your backup strategy works by restoring random files and comparing them before you start using encryption. Honestly, I wouldn't recommend encryption unless you have a well laid out argument FOR encrypting.