Everything posted by JonathanM

  1. If you do that Unraid won't start properly. Revert instead to the go file packaged with the installation zip archive.
  2. The application itself cannot make changes to the container template. The template content is up to the publisher of the template, and you can override update changes with an edit to your local template. I forget what exactly needs to change, but a good search should reveal what you need.
  3. The official Plex container doesn't have a support thread on this forum; they have their own support channels. There are several Plex containers that are supported here, maybe try one of those if you can't get the official one to work?
  4. Maybe this?
  5. That's just for the RAM; you also need to verify the CPU / chipset maximum memory speed for your configuration. Many AMD platforms require slower memory speeds for stability.
  6. Did you pull all the old PSU cables out, or did you reuse some of them?
  7. I would also strongly recommend getting fully migrated away from ReiserFS to XFS before you move on to the next stage of getting things running. You will need to systematically copy files off the ReiserFS drives onto your XFS drives, then format the emptied ReiserFS drives to XFS to make room for the next batch of copies. There is a whole thread dedicated to the different methods and techniques used for this.
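As a rough sketch of one copy-then-verify pass of that migration (temp directories below stand in for the real /mnt/diskN mounts so the commands can be tried safely; on the server you would use the actual disk paths, and typically rsync rather than cp):

```shell
# Sketch of one copy-then-verify pass of a ReiserFS -> XFS migration.
# Temp dirs stand in for /mnt/diskN mounts; on a real server the source
# would be the ReiserFS disk and the destination the XFS disk.
SRC=$(mktemp -d)   # stand-in for the ReiserFS source disk
DST=$(mktemp -d)   # stand-in for the XFS destination disk

# Simulate some existing data on the source
mkdir -p "$SRC/Movies"
echo "sample data" > "$SRC/Movies/film.mkv"

# Copy everything, preserving attributes
cp -a "$SRC/." "$DST/"

# Verify before formatting the source; diff exits nonzero on any mismatch
diff -r "$SRC" "$DST" && echo "copy verified, safe to format the source"
```

Only after the verify step succeeds would you format the source drive to XFS and move on to the next batch.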
  8. I'm guessing you hit enter before you finished the post. You can just reply to this thread and finish your thought.
  9. Maybe, depends on the enclosure. USB is not recommended for Unraid's main array for many reasons. I recommend watching a few youtube videos on Unraid so you can get a feel for what it is and isn't. Also, the way Unraid's main array works means all the disks must be accessed simultaneously for parity building and recovery operations, and USB is bad at that type of I/O. Some enclosures and USB controllers also have a bad habit of resetting connections, which can cause a drive to get kicked from the array. Some have managed to get USB to work somewhat, but it's definitely not ideal.
  10. @SpencerJ? It may be a few hours until you can get a response, meanwhile I'd fill out and submit the form on this page
  11. Trying to get the iGPU to function on a board with IPMI has been problematic in the past, I seem to remember @Hoopster may have some experience.
  12. You probably have a share set to cache prefer, which puts the files from the array onto the cache drive, instead of cache yes, which puts new files on the cache and moves them to the array. Attach diagnostics to your next post in this thread if you can't solve it.
  13. If you are erasing the drives after you copy, why bother deleting? It's a good idea to verify the copy after it's done anyway, and you can't do that if you delete the source.
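One hedged way to do that verification before erasing the source is a checksum pass (temp directories below stand in for the real source and destination disks; on the server you would point these at the actual mounts):

```shell
# Checksum-verify a finished copy before erasing the source.
# Temp dirs stand in for the real disks so this is safe to try anywhere.
SRC=$(mktemp -d)
DST=$(mktemp -d)
SUMS=$(mktemp)

echo "important file" > "$SRC/a.txt"
cp -a "$SRC/." "$DST/"

# Hash every file on the source, then check those hashes against the copy;
# md5sum -c exits nonzero if anything is missing or differs
(cd "$SRC" && find . -type f -exec md5sum {} + > "$SUMS")
(cd "$DST" && md5sum -c "$SUMS") && echo "verified"
```

If the check fails for any file, you still have the source intact to re-copy from, which is exactly why deleting early buys you nothing.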
  14. That is not a permanent fix even if it works at the moment; it may not work on subsequent boots, as the sdX designations are subject to change based on many factors out of Unraid's control.
  15. SSDs in the parity array aren't recommended for performance reasons, unless all the array disks are SSDs, including parity, and even then there are limitations.
  16. Are you positive you are monitoring the correct sensor? Sometimes the names are misleading.
  17. FTFY. Yes, I would request a new drive sent with proper packing. Take photos and record the S/N before you return it.
  18. Parity only applies to the main array, not to pools. Pools can be configured with various BTRFS RAID levels, but I don't think it's wise to attempt to expand a pool that currently has errors; you will probably end up worse off than before. I think your best option is to follow the replace cache disk procedure, which does entail using the mover to send the data to the main array by setting any cache prefer or only shares to cache yes, and running the mover. Typically you would need to disable the docker engine and VM services in the configuration, so their tabs are gone from the GUI, before you run the mover. Attach diagnostics to your next post in this thread for better educated opinions.
  19. If you set a new config, add a blank 18TB as parity, add your current 18TB as disk1 and the rest of the drives which have data you want to keep as disk2, disk3, etc., then build parity, that would accomplish what I think you want. You will need to copy data from disk to disk manually, and since you already started with one 18TB in UD, you may as well keep doing that manual process to copy the content of all the disks you want to remove, then only build parity with the drives you want to keep. If you copy instead of move, you can keep the removed drives as backup.
  20. Formatting is NEVER part of a recovery. The emulation includes the file system, so when you format, you tell Unraid to empty the filesystem. Hopefully you can read all the individual disks and recover your data; it doesn't really sound like you had real disk failures, more likely failures to read or write. Posting diagnostics may shed some light on what's currently available, but since you've rebooted multiple times since this started, the original errors that started this are lost.
  21. Edit your top post in the thread and change the title, add [Solved] or something like that.
  22. In my opinion, if you truly need VM primary drives larger than 265GB, then I would add the 1TB (was "1gb" a typo?) as an additional pool dedicated to just the VM primary vdisk(s) (the domains share). Then you can use your current cache drive as storage for the appdata and system shares, and as a temporary home for work in progress. Use the storage array for finished work, archives, and reference materials.
  23. Try enabling UEFI boot on the flash drive by renaming the EFI- folder to EFI. No guarantee it will work, but it may go farther.
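The rename itself is a one-liner; the sketch below uses a temp directory as a stand-in for the flash drive's mount (normally /boot on a running server) so it can be tried safely:

```shell
# Rename EFI- to EFI to enable UEFI boot on the Unraid flash drive.
# A temp dir stands in for the flash mount so this demo touches nothing real.
FLASH=$(mktemp -d)
mkdir "$FLASH/EFI-"            # simulate the shipped, UEFI-disabled layout

mv "$FLASH/EFI-" "$FLASH/EFI"  # the actual fix: drop the trailing dash
ls "$FLASH"                    # should now list: EFI
```

On the real flash drive the folder sits at the root of the device, so you'd do the same rename there (from another machine, or at /boot on the server itself).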