Everything posted by JonathanM

  1. memtest also exercises the memory controller and the data path, so it would still be a good test. If it fails, you have a smoking gun to investigate: even if it turns out all your RAM sticks are good, memtest can still fail because the memory timing or voltage set in the BIOS is wrong, or just not stable at the current values. If memtest passes 24 hours with no errors, then you have another data point to help you diagnose things.
  2. New doesn't mean good. The first thing you need to do with new memory is a memtest.
  3. Click on the folder icon, select edit, move the selected containers inside the edit screen. After you do that, you can drag the folder using the drag arrows.
  4. Best bet would be to post in the support thread for that container.
  5. @ITKBI, stop posting new threads with this exact same content. Attach the diagnostics zip file to your next post in THIS thread if you want help.
  6. Probably not; here are the versions available: https://hub.docker.com/r/linuxserver/unifi-controller/tags
  7. Yep. The only downside is the need to manually manage the space, but it's not that difficult to remember to assign the proper pool to the task at hand. Not recommended. Preclear is really only useful for stress testing regular hard drives.
  8. Just FYI, that setting is the worst choice for most use cases.
  9. XFS drives can only be single-member pools. That's not a bad thing in my opinion, considering the next best option for a pool of differing drive sizes is a single-profile BTRFS pool, which gives you the sum of the member sizes at the expense of losing everything on both drives if one fails. You can't add the 4TB to the existing 1TB pool as XFS. What you would do is set up another pool, not add to the existing one, and call it vms2_nvme. I'm not sure if that's what you were trying to say, but if you have 4 SSDs in total I would recommend each having its own named pool, not combining them. The only time I personally would combine multiple drives in a single pool is if they were identical and I was using the mirror (RAID1) function of ZFS or BTRFS. I don't particularly like BTRFS, though.
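     For reference, this is the mirror concept in raw ZFS terms (Unraid builds pools through the GUI, so this is only an illustration; the pool name and device paths here are examples, not your actual hardware):

         # Two identical devices mirrored: the capacity of one, survives one drive failure
         zpool create vms_nvme mirror /dev/nvme0n1 /dev/nvme1n1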
  10. You could:
      - relax the affected share split levels and disk exclusions, and verify your minimum free space settings
      - manually move files from disk to disk to free up the space needed
      - upgrade a disk to a larger model
      - add a disk
      You are going to keep fighting with this until you add more space or delete unwanted items to free up space. Converting h264 media to h265 can free up loads of space as well. I recommend reading through the help tips on the share page in the GUI; it might help you figure out specifically what the immediate issue is, as well as give you tools to manage it on an ongoing basis. You are running out of space, so you will need to actively manage your share allocations until you get more space.
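      As a rough sketch of the h264-to-h265 conversion idea (filenames are placeholders; tune -crf to your quality tolerance, and test on a copy first):

          # Re-encode the video track to h265/HEVC, copy the audio as-is
          ffmpeg -i input.mkv -c:v libx265 -crf 28 -c:a copy output.mkv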
  11. Check disk1 to see if Share 2 was created there and the files never actually moved to a different disk, just a different folder. The user share system is sometimes hard for people to understand, and it's easy to make a mistake while moving things. User shares and disk or pool names shouldn't be mixed in a copy or move command; that can cause data loss if you don't understand what's going on behind the scenes.
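      To illustrate (paths here are hypothetical), keep the source and destination at the same level, either both disk paths or both user share paths:

          # Safe: disk to disk
          rsync -av /mnt/disk1/Share2/ /mnt/disk2/Share2/
          # Risky: a disk path mixed with a user share path for the same share
          # can end up writing a file over itself, destroying it
          # mv /mnt/disk1/Share2/* /mnt/user/Share2/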
  12. Try not to move files off of ReiserFS; it's torturously slow. Why bother moving when you are planning to format it anyway? Copy instead. That way you can verify the copy is identical to the original, and when you are satisfied you have a good copy of everything on the ReiserFS volume, format it. A format only takes seconds, whereas deleting files (which is what a move does, a copy followed by a delete) can take minutes per file if the ReiserFS filesystem is large and well used.
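      A minimal sketch of the copy-then-verify approach (the disk numbers are placeholders for your actual source and destination):

          # Copy everything off the ReiserFS disk
          rsync -av /mnt/disk3/ /mnt/disk4/
          # Verify with a checksum dry run; any file listed still differs between the two
          rsync -avnc /mnt/disk3/ /mnt/disk4/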
  13. Either search in the binhex-plexpass support thread to see if the issue is addressed there, or install the official container instead.
  14. - To mount a share you must use a valid user, not root.
      - To connect to the console, you must use root, not one of the users.
      - Use that path as the rsync destination, assuming it mounted correctly and you see the contents of the Unraid share at that local path.
  15. - Console login is only root; there are no defined users.
      - SMB / NFS is only defined users; no root.
      - Use rsync to transfer with the locally mapped mount location, not the network location.
      To what local path did you mount the Unraid share?
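      Something like this from the other machine (a sketch; the server name, share name, user, and paths are placeholders for your own values):

          # Mount the Unraid SMB share locally with a defined user (not root)
          mkdir -p /mnt/unraid_backup
          mount -t cifs //tower/backup /mnt/unraid_backup -o username=youruser
          # rsync to the local mount point, not the network location
          rsync -av /home/data/ /mnt/unraid_backup/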
  16. https://www.techpowerup.com/80711/dont-yell-at-your-hard-drives-sun-engineer
  17. You are very lucky; there have been multiple instances of people blowing up components by reusing cables they were SURE were compatible. Even the same brand isn't a guarantee, because brands don't always make their own products; they relabel from a few giant manufacturers.
  18. What address and port are you trying in guac? I know a plain VNC client works fine for me using the server IP and the normal 5900, 5901, 5902, etc., depending on what order the VMs were started. The graphics column on the VMS tab shows the port.
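      If you want to double-check which VNC ports are actually listening, something like this from the Unraid console (assuming the usual 590x range):

          # List listening TCP sockets in the VNC port range
          ss -tln | grep ':590'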
  19. /var/lib/docker/unraid-autostart, which is inside the docker.img file, so if that's not mounted you will need to mount it manually to make the edit. Or you can just delete the docker.img file and recreate it; you will need to recreate any custom networks before restoring all your containers with the Previous Apps section of CA. It only takes a few minutes, and you won't lose any data as long as your containers are properly mapped to appdata.
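      A sketch of mounting the image by hand (assuming the default image location; adjust to wherever your docker.img actually lives):

          # Loop-mount the docker image so its contents are reachable
          mkdir -p /mnt/dockertemp
          mount -o loop /mnt/user/system/docker/docker.img /mnt/dockertemp
          # The autostart list is then at /mnt/dockertemp/unraid-autostart
          # Unmount when you are done editing
          umount /mnt/dockertemp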
  20. I think a better index would be linked to a percentage of a normal system build cost. How much does the hardware cost in those countries? I honestly have no clue how much a system costs to build in other countries, but I don't think it's unreasonable to compare the hardware costs to the software costs.
  21. This plugin adds the capability to modify parity checks with scripting. Power outage shutdowns should never trigger a parity check; the UPS should be configured to allow a safe shutdown.