Everything posted by JonathanM

  1. Since shares are just a combined view of the individual disks, try rebooting and see if they are recreated.
  2. If IPMI is out, that suggests motherboard issues. IPMI shouldn't be affected by a software crash; it should still allow you to at least connect to IPMI and issue a reset.
  3. You can go down the rabbit hole of giving the containers unique IP addresses, or you can manually add the custom port to each client once. Personally I think the one-time hassle of adding it to each client is far less work than sorting out the various pitfalls that can occur when giving containers unique IPs, but feel free to pursue it.
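For reference, the custom-port approach looks something like the sketch below. The image name, container names, and ports are all illustrative, not from the original post; each container keeps the host's IP and just publishes a different host port:

```shell
# Two instances of the same service sharing the host IP,
# distinguished only by the published host port.
docker run -d --name app1 -p 8080:80 example/app
docker run -d --name app2 -p 8081:80 example/app
```

Each client then gets pointed once at host:8080 or host:8081, with no custom container networking needed.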
  4. When you are passing through the GPU, have you tried connecting the monitor to all the ports, one at a time? If a card has multiple types of outputs, as most do, sometimes only one will actually be active until the drivers are loaded. For instance, if it has two HDMI ports and one DisplayPort, make sure you try all three. Another thing to try is installing NoMachine while the VM still has a working VNC connection, and verifying that you can reach NoMachine from a networked computer; then, when you switch to GPU passthrough, you can check whether NoMachine is up and working.
  5. Both are correct. No need to keep a disk spinning if it's only accessed a few minutes a day, but it makes no sense to spin it down and then access it again 20 minutes later. Try to limit the number of cycles, but keep rarely used disks spun down. Temperature stability is more important, so if you can't keep your disks from heating up excessively while they are spinning, keep them spun up so they don't go from cold to hot to cold. Better to keep them consistently warm at 50°C than cycling from 20°C to 50°C every few hours.
  6. Is the amount of speed increase worth the hassle? https://netcraftsmen.com/just-say-no-to-jumbo-frames/
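Whether jumbo frames are worth it also depends on whether every hop actually supports them. A quick way to check is sketched below; the interface name and router address are examples, not values from the post:

```shell
# Show the current MTU on the NIC (interface name is an example):
ip link show eth0 | grep -o 'mtu [0-9]*'

# Test whether a 9000-byte MTU survives the whole path.
# -M do forbids fragmentation; 8972 = 9000 minus 28 bytes of IP+ICMP headers.
# Replace 192.168.1.1 with your router or target host.
ping -M do -s 8972 -c 3 192.168.1.1
```

If the ping fails with "message too long", some hop is still at 1500 bytes, and jumbo frames will cause silent breakage rather than speedups.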
  7. Tools -> Diagnostics, then attach the zip file to your next post in this thread. If everything else is healthy, you should be able to just rebuild on top of the same drive, but without knowing the health of your system any advice would be premature.
  8. Please attach the images to your post so we don't need to click on an unknown third party site link. Also more text describing what's going on would be helpful.
  9. That doesn't make sense to me. Try unplugging and replugging the Ethernet cable at the server without rebooting and see if you get a response after it renegotiates.
  10. Can you ping both of those addresses from another computer connected to the same router?
  11. Those ping results are from the Unraid terminal? Can you ping 1.1.1.1?
  12. Sorry, I misunderstood your question. Can you ping your router IP address from the console? (192.168.2.1)
  13. Trial keys require internet access to start the array, purchased keys work without internet. Otherwise people could use a trial forever.
  14. USB enclosures sometimes alter drive IDs in unpredictable ways; for instance, yours may be reporting the ID of the first slot that responds, while subsequent drives aren't reported at all. Some USB cages are better than others, but all suffer from bandwidth issues during parity operations. The only external hard drive cages that perform comparably to internal SATA connections with Unraid are either eSATA with one port per drive, or SAS, which allows multiple drives on a single cable. USB suffers from poor connection stability under heavy load, which can cause Unraid to disable drives seemingly at random. You should be able to use Unraid successfully with USB-connected array drives if you forego parity protection and stick to data drives and single-volume pools; any attempt to use parity with multiple USB data drives is going to be a frustrating experience compared to internal HBA/SATA connections.
  15. Have you tried your key in their rig as a test? Have you compared BIOS settings?
  16. Try adding a drive letter to the "change drive letter and path" setup. Don't mess with the existing path, just add another one.
  17. So what do you see when you go to that folder in explorer?
  18. Post the Plex docker run command and the path you are pasting.
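For comparison, a docker run for Plex typically looks something like the sketch below; the paths are illustrative, and your template supplies the real ones. The key point is that Plex only sees the container-side path:

```shell
# Host path (left of the colon) maps to the container path (right).
# Inside Plex you browse to /media, not the host path /mnt/user/media.
docker run -d --name plex \
  -p 32400:32400 \
  -v /mnt/user/media:/media \
  plexinc/pms-docker
```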
  19. Depends on the size of your cache pool and the amount of data you are transferring. If your cache pool has enough free space to hold all the data in this particular batch, then leave cache:YES, and if the mover is scheduled when you aren't using the server, it will all get moved to the array at the slower array speed while you sleep. If you have more data than will fit in the pool, it will either give you an out-of-space error or start copying directly to the slower array when it runs out of space, depending on your settings. Then, when the mover is scheduled, it will move the data from the pool to the array, freeing up the pool space. Either way, the transfer will take longer than if you had just sent it directly to the array in the first place. Writing to the array is slower than writing to the cache pool in a typical system, but the data needs to end up on the array eventually anyway.

      TL;DR: Using a cache pool to write to a parity-protected array share doesn't speed up the parity array; it's just a fast temporary spot to put the data and let the server deal with it later. The data still has to get written at array speed, but you don't have to wait on it.

      Here's an extreme example. You have 10TB of data to transfer and a 128GB cache pool. The first 128GB writes fast, then the transfer slows down and the pool shows full. You manually run the mover; now the array is receiving writes from the mover, and your transfer slows down even further. The mover can't empty the pool as fast as you can fill it, so you are stuck at a snail's pace.
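The extreme example above can be put into rough numbers. The write speed here is an assumption for illustration, not a measurement from the post:

```shell
#!/bin/sh
# Worked version of the 10TB-vs-128GB example, with an assumed
# sustained parity-array write speed (illustrative only).
total_gb=10240   # 10 TB batch
pool_gb=128      # cache pool size
array_mbs=60     # assumed array write speed in MB/s

# Only the first 128 GB lands at pool speed; that's ~1% of the batch.
pool_pct=$(( pool_gb * 100 / total_gb ))

# Everything still has to reach the array at array speed eventually.
direct_hours=$(( total_gb * 1024 / array_mbs / 3600 ))

echo "pool buffers ${pool_pct}% of the batch"
echo "array write time: ~${direct_hours} hours either way"
```

The pool absorbs only a sliver of the batch, which is why the transfer still takes roughly the full array-speed duration.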