JonathanM · Moderators · Posts: 16,729 · Days Won: 65

Everything posted by JonathanM

  1. The official Plex container doesn't have a support thread on this forum; their own support lives at https://forums.plex.tv/. There are several Plex containers that are supported here, so maybe try one of those if you can't get the official one to work?
  2. Maybe this? https://en.wikipedia.org/wiki/U3_(software)#U3_platform
  3. That's for the RAM; you also need to verify the CPU / chipset maximum speed for your configuration. Many AMD platforms require slower memory speeds for stability.
  4. Did you pull all of the old PSU's cables out, or did you reuse some of them?
  5. I would also strongly recommend getting fully migrated from ReiserFS to XFS before you move on to the next stage of getting things running. You will need to systematically copy files off the ReiserFS drives onto your XFS drives, then format the ReiserFS drives you copied from to XFS to make room for the next batch of copies. There is a whole thread dedicated to the different methods and techniques used for this.
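     As a rough sketch of one copy pass, assuming a ReiserFS source mounted at /mnt/disk2 and an already formatted XFS target at /mnt/disk5 (substitute your own disk numbers):

         # copy everything, preserving ownership, permissions and timestamps, without touching the source
         rsync -av /mnt/disk2/ /mnt/disk5/
         # verification pass: with checksums and --dry-run, any file it lists didn't copy cleanly
         rsync -avc --dry-run /mnt/disk2/ /mnt/disk5/

     Only reformat the source drive to XFS once the verification pass comes back clean.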
  6. I'm guessing you hit enter before you finished the post. You can just reply to this thread and finish your thought.
  7. Maybe, depending on the enclosure. USB is not recommended for Unraid's main array for many reasons. I recommend watching a few YouTube videos on Unraid so you can get a feel for what it is and isn't. Also, https://wiki.unraid.net/Manual/Overview The way Unraid's main array works means all the disks must be accessed simultaneously for parity building and recovery operations, and USB is bad at that type of I/O. Some enclosures and USB controllers also have a bad habit of resetting connections, which can cause a drive to get kicked from the array. Some have managed to get USB to work somewhat, but it's definitely not ideal.
  8. @SpencerJ? It may be a few hours before you can get a response; meanwhile, I'd fill out and submit the form on this page: https://unraid.net/contact
  9. Trying to get the iGPU to function on a board with IPMI has been problematic in the past. I seem to remember that @Hoopster may have some experience.
  10. You probably have a share set to cache prefer, which puts the files from the array onto the cache drive, instead of cache yes, which puts new files on the cache and moves them to the array. Attach diagnostics to your next post in this thread if you can't solve it.
  11. If you are erasing the drives after you copy, why bother deleting? It's a good idea to verify the copy after it's done anyway, and you can't do that if you delete the source.
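      If you want a quick sanity check before erasing anything, here is a sketch assuming the source is still mounted at /mnt/disks/source and the copy landed on /mnt/disk1 (adjust both paths to your setup):

          # recursively compare the two trees; no output means the file contents match
          diff -qr /mnt/disks/source/ /mnt/disk1/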
  12. That is not a permanent fix even if it works at the moment; it may not work on subsequent boots, as the sdX designations are subject to change based on many factors outside of Unraid's control.
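      To see identifiers that don't shuffle between boots, one quick check from the console (a sketch; the output depends on your hardware):

          # the by-id symlinks stay tied to each drive's model and serial, unlike the sdX letters
          ls -l /dev/disk/by-id/ | grep -v part

      Unraid assigns array slots by drive identification rather than by sdX letter, which is why anything keyed to sdX can break on the next boot.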
  13. SSDs in the parity array aren't recommended for performance reasons, unless all the array disks, including parity, are SSDs, and even then there are limitations.
  14. Are you positive you are monitoring the correct sensor? Sometimes the names are misleading.
  15. FTFY. Yes, I would request that a new drive be sent with proper packing. Take photos and record the S/N before you return it.
  16. Parity only applies to the main array, not to pools. Pools can be configured with various BTRFS RAID levels, but I don't think it's wise to attempt to expand a pool that currently has errors; you will probably end up worse off than before. I think your best option is to follow the replace cache disk procedure, which does entail using the mover to send the data to the main array by setting any cache prefer or cache only shares to cache yes and running the mover. Typically you would need to disable the docker engine and VM services in the configuration, so their tabs are gone from the GUI, before you run the mover. Attach diagnostics to your next post in this thread for better informed opinions.
  17. If you set a new config, add a blank 18TB as parity, add your current 18TB as disk1 and the rest of the drives with data you want to keep as disk2, disk3, etc., then build parity, that would accomplish what I think you want. You will need to copy data from disk to disk manually, and since you already started with one 18TB in UD, you may as well keep using that manual process to copy the contents of all the disks you want to remove, then only build parity with the drives you want to keep. If you copy instead of move, you can keep the removed drives as a backup.
  18. Formatting is NEVER part of a recovery. The emulation includes the file system, so when you format, you tell Unraid to empty the filesystem. Hopefully you can read all the individual disks and recover your data; it doesn't really sound like you had real disk failures, more likely failures to read or write. Posting diagnostics may shed some light on what's currently available, but since you've rebooted multiple times, the original errors that started all this are lost.
  19. Edit your top post in the thread and change the title, add [Solved] or something like that.
  20. In my opinion, if you truly need VM primary drives larger than 265GB, then I would add the 1TB (1gb typo?) as an additional pool dedicated to just the VM(s) primary vdisk(s) (the domains share). Then you can use your current cache drive as storage for the appdata and system shares, and as a temporary home for work in progress. Use the storage array for finished work, archives, and reference materials.
  21. Try enabling UEFI boot on the flash drive by renaming the EFI- folder to EFI. No guarantee it will work, but it may go farther.
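      If you'd rather do the rename from the command line than from a file manager, here is a minimal sketch assuming the flash drive is mounted at /boot as usual:

          # renaming EFI- to EFI lets the flash drive boot in UEFI mode
          mv /boot/EFI- /boot/EFI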
  22. Different Jonathan, still from way back though. Do you have access to the local console? Typing diagnostics at the command line will put a zip file on the flash drive.
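      For reference, it's a single command from the console (the exact output location may vary by version, but it ends up on the flash drive):

          # gathers system and array logs into a timestamped zip, typically under the logs folder on the flash
          diagnostics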
  23. Yep. Each drive in the main array is separate. Pools can be defined differently, so that's not necessarily applicable to pools.