Everything posted by JonathanM

  1. The author is NOT affiliated with Unraid. Any improvements will need to come from third-party solutions. macOS as a VM on non-Apple hardware is NOT supported by Unraid, and will not be unless Apple changes its license agreement to allow it.
  2. Are you sure the change was applied?
  3. At the moment, yes, because we have no way of knowing what a good value would be for any specific setup. It's up to you to set a sane value.
  4. Not optional, mandatory. All changes to the array disks (formatting, file system checks, etc.) MUST be done while the array is started so parity can remain in sync with the changes made.
  5. Speculation here, but it's probably a combination of the new filesystem and the older disks having more fragmentation. Doing a fresh copy to a newly formatted disk solves both issues.
  6. Perfectly normal. Check the box that acknowledges "Yes, I want to do this" and it will become available. We make formatting a multi-step process because, too often, someone with file system corruption will format the drive just to make it mountable again, thinking parity will restore their files, when the correct thing to do is a file system check. Formatting replaces the table of contents with a blank version, and it affects the parity drive as well, so it makes recovering data much harder or impossible. In your case, you genuinely DON'T have a valid filesystem yet, so check the box and apply the format.
  7. You don't. The slave option doesn't apply to variables, only to mapped paths.
  8. It's theoretically possible, but it's a pain. Google Windows Internet Connection Sharing and see if it's worth trying. You will need to partially undo the manual static connection you were told how to set up, but you will still need the knowledge of HOW it was set up so you can add in the connection sharing bit.
  9. Why? It's a simple rename, takes seconds. For instance, if the TV share was on disk1, then to move the files the path would be renamed from /mnt/disk1/TV/shows/seasons to /mnt/disk1/Media/TV/shows/seasons. Like JorgeB said, shares are just root folders on the /mnt/diskX and /mnt/poolname paths. (There is a short sketch of this rename after the list below.)
  10. I don't think the 92xx series has been manufactured by LSI for quite some time, so the only genuine ones are used server pulls. If it's sold as new, it's probably counterfeit.
  11. If you can deal with the support requests for the app itself showing up in your thread, go for it. People continually want support for the app vs. the Unraid implementation, which causes many to abandon the effort. A properly implemented NC container that talks to an external database and runs the internal Collabora server without issues would be great! Even better if you can help migrate people's existing LSIO containers!
  12. In the not-too-distant past, when Linux wasn't as large a target for exploits and Unraid was a NAS only, with very little else running on it, uptimes could easily run a couple of years. Now that security holes are found almost daily, and a large percentage of Unraid instances serve WAN-facing apps, it's almost mandatory to get an update that forces a restart after only a few months.
  13. 🙂 I stand corrected. My perception was based on not seeing support posts for it on a daily basis, vs. the LSIO container, which gets much more traffic. Neither one seems to be monitored by the developers, however; it looks like purely peer-to-peer support.
  14. One very notable exception is the LSIO Nextcloud container: it only updates the supporting packages, never the Nextcloud app itself, which can lead to some serious issues if you don't keep up with the app updates, to the point where the app will no longer start properly after the environment is updated too far. There are other issues with that container as well, so I now recommend using the official NC container, even though there is no support for it here on the forum.
  15. Unless you are planning to mostly fill those data drives with the initial data load, I would advise not using so many data slots. Each empty drive racks up power-on hours and adds unnecessary risk if you do have a drive failure: all drives, even those with totally empty filesystems, participate end to end in the parity equation for rebuilding a failed drive. I typically recommend limiting the parity array's free space to twice the capacity of your largest drive, and adding space only after you fall below that largest drive's worth of space. So, in your case, I recommend reducing free space to 36TB at most, and adding back data slots when the free space falls below 18TB (a quick sketch of that arithmetic follows the list below). Excess capacity is better sitting on the shelf waiting to replace the inevitable drive failure instead of sitting in the array and potentially BEING the next drive failure. Even better is limiting the time drives spend on your shelf and letting that shelf time happen at the manufacturer's end, so when you get the drive you have a longer warranty period and possibly a lower cost per TB. I typically keep one spare tested drive, equal to or larger than the biggest drive in service in any of my Unraid servers; if I have a drive failure I either replace the failed drive directly, or replace whichever drive makes sense and use the pulled good drive to replace the failure in another server. My drives migrate down the line from server to server; I have a backup server that still has some ancient 2TB drives running fine. If one of those fails, my primary server gets an upgrade, and the good drive I replaced in the main server goes into the backup server to replace the failed drive. Tech happiness is a well executed backup system.
  16. Disable it and see. I honestly don't know, but it should be easy enough to test.
  17. The first hurdle is figuring out which files are implicated by a specific parity error. That's not an insignificant challenge. Not insurmountable, but not easy either. Mapping a raw sector address to the file that occupies it, repeating that process for every device in the parity calculation, querying each respective file system for possible hash data, and so on. There are no tools that I'm aware of to automate this, as you say. (A rough sketch of just the lookup step follows the list below.)
  18. Link? How do you know which bit is wrong when all you know is that at least one of them is wrong? Any of the data disks or parity disks could have a bit flipped; how do you pinpoint which one it is when all you know is that the sum is odd when it's supposed to be even? (The short sketch after this list illustrates the problem.)
  19. I moved your post to the proper area, as virtualizing Unraid is not officially supported so very few people are doing it. It's not forbidden or anything, just the pool of users that have experience with issues in virtualized Unraid is rather small. Hopefully by moving your post to the correct area people with experience can help.
  20. That's what I was trying to tell you, sorry I wasn't more explicit in how I tried to explain it.
  21. If that is indeed the case, is there a way to reset the partition layout to one that can be mounted in a general-purpose Linux without erasing the existing format and data?
  22. Have you tried NoMachine instead of VNC or RDP?
  23. And you are not hearing what we are telling you: normally, Unraid XFS drives do NOT show as a Linux RAID member, which implies there is something wrong. I was trying to determine whether the drive currently mounts correctly in Unraid. Given what you are telling us, I don't think it will.
  24. That statement on its own would be enough for me to return the drive, considering the forces needed to make that dent INSIDE a shipping container... however... obviously that changes things. If you could return it and get a perfect drive in exchange for nothing more than some hassle on your part, personally I might return it. But given that the drive seems to be performing, maybe it's not worth the aggravation. How long after accepting and using the drive would you be able to return it without loss to you? I'd definitely be keeping an eye on it.
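
For item 9 above, a minimal sketch of the rename in Python, used here only so the same move can be looped over every array disk in one go. The share names (TV, Media) come from the example in that post and the disk[0-9]* path pattern is an assumption; adjust both to your own setup, stop anything that writes to the share first, and if the share also lives on a pool, repeat the same rename under /mnt/poolname.

    from pathlib import Path

    # Move the top-level TV share folder into Media on every array disk.
    # The pattern disk[0-9]* matches disk1, disk2, ... and nothing else under /mnt.
    for disk in sorted(Path("/mnt").glob("disk[0-9]*")):
        src = disk / "TV"
        dst = disk / "Media" / "TV"
        if src.is_dir() and not dst.exists():
            dst.parent.mkdir(exist_ok=True)   # create /mnt/diskN/Media if missing
            src.rename(dst)                   # same filesystem, so this is an instant rename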
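
The numbers in item 15 follow a simple rule of thumb, sketched below with the 18TB largest drive assumed in that post:

    def free_space_targets(largest_drive_tb):
        """Guideline from item 15: cap array free space at twice the largest
        drive, and add another data slot once free space drops below one
        largest drive's worth."""
        return 2 * largest_drive_tb, largest_drive_tb

    max_free_tb, add_slot_below_tb = free_space_targets(18)
    print(max_free_tb, add_slot_below_tb)   # 36 18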
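
For item 17, a rough sketch of just the sector-to-file lookup, assuming you have already dumped an extent map for every file on the disk (for example from xfs_bmap on XFS) into (path, start sector, length in sectors) tuples; building that map, repeating it per device, and checking any available hashes are the parts with no ready-made tooling.

    def file_for_sector(extents, sector):
        """Return the path whose extent covers the given on-disk sector, or None.

        extents: iterable of (path, start_sector, length_sectors) tuples,
        a hypothetical per-file extent dump such as one parsed from xfs_bmap.
        """
        for path, start, length in extents:
            if start <= sector < start + length:
                return path
        return None

    # Example with a made-up extent map:
    extents = [("/mnt/disk1/Media/a.mkv", 2048, 4096),
               ("/mnt/disk1/Media/b.mkv", 8192, 1024)]
    print(file_for_sector(extents, 3000))   # /mnt/disk1/Media/a.mkv
    print(file_for_sector(extents, 7000))   # None: free space or metadata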
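
And for item 18, a short sketch of why single parity can detect a flipped bit but not locate it: the XOR of the data disks either matches the parity disk or it doesn't, and any one of the devices could equally be the cause of a mismatch.

    from functools import reduce
    from operator import xor

    def mismatch(data_bytes, parity_byte):
        """True if the XOR of the data bytes no longer equals the stored parity."""
        return reduce(xor, data_bytes, 0) != parity_byte

    data = [0b1010, 0b0110, 0b0001]      # the same byte position on three data disks
    parity = reduce(xor, data, 0)        # what the parity disk should hold

    print(mismatch(data, parity))                         # False, everything in sync
    print(mismatch([0b1011, 0b0110, 0b0001], parity))     # True, data disk 1 flipped a bit
    print(mismatch(data, parity ^ 0b0001))                # True, parity disk flipped the same bit
    # Both True cases look identical to a parity check, which is the point:
    # the check says something is wrong, not which device is wrong.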