JonathanM

Moderators
  • Posts

    16,723
  • Joined

  • Last visited

  • Days Won

    65

Everything posted by JonathanM

  1. If you told Unraid which disks to use, and which folders are allowed to automatically spread across those disks and which ones are restricted to disks where the folder already exists, where is it supposed to put the overflow after it runs out of space? Returning "no space left on device" is the only answer that makes logical sense if you follow it through; otherwise people would complain that files were being written to locations they didn't want. Unfortunately, split level is a difficult concept to get across in words alone. Many years ago I asked for a graphical representation that would populate a sample path across the disks and show you the valid target disk(s) for any specific path depth with the given share settings, but that was deemed too complex to code, or not enough benefit for the work, something like that.
  2. Tools → Diagnostics, download the zip file and attach it to your next post, as described in the link automatically generated when the word diagnostics is detected in a post.
  3. It means a write to it failed, which could be cables, controller, or disk. Recovery is typically done by replacing the disk and letting the array rebuild it, but without the diagnostics file (preferably downloaded before rebooting after the error occurred) it's impossible to recommend the correct course of action. It could be as simple as reseating the cables and trying again, or there could be other things in play.
  4. Also keep in mind your total capacity is limited to the smaller of the two drives in RAID1, 128GB in your case. The size reported in the webui is wrong, but free space should be correct.
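The capacity point above can be sketched with a toy calculation (the 256GB second drive is a hypothetical example, not from the original post):

```python
# RAID1 mirrors every block onto both devices, so usable space is
# bounded by the smaller member of a mismatched pair.
drives_gb = [128, 256]  # hypothetical pool: 128 GB + 256 GB SSDs
usable_gb = min(drives_gb)
print(usable_gb)  # 128
```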
  5. Have you masked off or disabled the 3.3V reset pins on the drives?
  6. Parity in Unraid, or ANY RAID system, is NOT a backup. There are many ways to lose data, drive failure is only one. You must keep a second (or third) copy on physically separate media.
  7. SWAG is a full NGINX server implementation that many people just happen to use as a reverse proxy. NPM is a specialized NGINX setup designed to make a reverse proxy convenient and easy to configure. It's not designed to do anything BUT reverse proxying, hence the name Nginx PROXY Manager.
  8. That looks like what you told it to do in the pause and resume time settings.
  9. Be cautious doing this, it's easy to forget you've done it and end up causing issues when Unraid upgrades.
  10. Try reducing the resource allocation to the bare minimum and slowly add resources until performance quits improving, then back off a step. For example, set CPU to a single thread, and RAM to 4GB. See how it acts. Increase to 2 threads (single core). Repeat. Most people starve the host for resources, and the host is emulating the motherboard, so it's like putting a bunch of RAM and CPU on a super slow motherboard. The more resources you can leave for the host, the better your I/O throughput should be.
  11. There is a good reason it's referred to as "cutting (bleeding) edge" technology. Serial first adopters have to be a special breed of person.
  12. If you can access the webui with an IP and port, use that in the reverse proxy. I.e., if you can use http://192.168.0.2:8989 to access Sonarr, that is the entry you would use in the nginx configuration to point sonarr.mydomain.com at Sonarr.
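As a rough sketch of what that nginx entry might look like (the server name, IP, and port are taken from the example above; the SSL and header details are assumptions that vary by setup):

```nginx
server {
    listen 443 ssl;
    server_name sonarr.mydomain.com;

    location / {
        # Forward to the same IP:port that works in the browser
        proxy_pass http://192.168.0.2:8989;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

SWAG and NPM generate equivalent blocks for you; the key point is that `proxy_pass` uses the exact address that already works locally.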
  13. Probably not a lot of difference for most people, as the only time new files are created is when a new VM is defined. Personally I use cache:only, as follows. New VMs get set up on the pool, because I want them to be fast. I manually move less-used VMs to one of the main array disks, but since I use the same folder structure they all still appear under the domains share, and since the share is cache:only, mover doesn't try to put them back on the pool like it would if it were cache:prefer. The VMs never know the difference; they run fine from either location.

      If you wanted your newly created VMs to be on the array, then manually moved to the pool when you wanted more performance, you would set domains to cache:no; new VMs would be created on the array, and you could manually move the ones that need more speed to the pool.

      If you want to move things manually, make sure you never mix /mnt/user paths with /mnt/diskX or /mnt/<poolname> paths. You can easily delete data doing things that look correct. As long as you only move between /mnt/diskX and /mnt/<poolname> paths you will be fine. Stay out of /mnt/user for manual file moves unless you educate yourself on exactly how the /mnt/user magic happens.
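The safe move pattern above can be simulated like this (using /tmp stand-ins for the real mount points; on a live system the paths would be /mnt/&lt;poolname&gt; and /mnt/diskX, and the VM name is hypothetical):

```shell
# Stand-ins for /mnt/<poolname> and /mnt/disk1 -- same folder
# structure on both, so the VM still shows up under the share.
mkdir -p /tmp/pool/domains/myvm /tmp/disk1/domains
touch /tmp/pool/domains/myvm/vdisk1.img

# Move disk-to-disk (never through /mnt/user for manual moves).
mv /tmp/pool/domains/myvm /tmp/disk1/domains/
ls /tmp/disk1/domains/myvm
```

Because both locations share the `domains/` structure, the user share view is unchanged after the move.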
  14. If it's an SSD, the quickest way to erase it is blkdiscard /dev/sdX or /dev/nvmeX at the console; find the correct device on the Main tab. The USB GUID is constant, so make a backup of the config folder, format the drive and use the USB creator tool (or manually prepare the drive), then copy the license key from your backup into the config folder on the stick.
  15. Turn on help on the share settings page. What you want is cache: yes, and you need to stop / disable the VM service before you run mover. There should not be a VMs tab in the GUI while you are doing this.
  16. No, after you start the array on the main page, there will be an option to format all unmountable drives.
  17. If you don't format the disks using the switch filesystem trick, they won't be erased.
  18. Did you start the array and format the disks between each change?
  19. I strongly recommend IPMI or some other out of band management system. That somewhat restricts your board (and CPU) choices if the board has built in management, or you can investigate PiKVM if you want to have remote management for a desktop grade system. Given the entire tone of your post however, I think you should be looking at server grade boards with IPMI built in. Perhaps a decommissioned used enterprise rack server would fit what you are asking, be sure you have signature viewing turned on in your profile, and browse around the forums to see what kind of kit others are using.
  20. As ghost82 said, yes, you can copy the disk just like any other disk. Let me rephrase the question that I think you really mean to ask. The answer to that is NO: parity is continually synced with the rest of the data disks, and as soon as it is out of sync, ANY recovery using it will be corrupted beyond use. Parity holds no data; it only completes the set with the data disks, so when any single disk is removed, it can answer the equation formed with the rest of the data disks. It is useless without the rest of the data disks. All this is explained in the link I posted.
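The "parity completes the set" idea can be illustrated with a toy XOR example (a simplification for illustration; this is not a claim about Unraid's exact on-disk parity format):

```python
# Three data disks, each reduced to a single nibble for illustration.
data_disks = [0b1011, 0b0110, 0b1100]

# Parity is the XOR of all data disks -- it holds no file data itself.
parity = 0
for d in data_disks:
    parity ^= d

# If disk 1 dies, its contents come back only by combining parity
# with EVERY surviving data disk. Parity alone tells you nothing.
rebuilt = parity
for i, d in enumerate(data_disks):
    if i != 1:
        rebuilt ^= d

assert rebuilt == data_disks[1]
print(f"parity={parity:04b}, rebuilt disk 1={rebuilt:04b}")
```

This is also why stale parity is worse than useless: change any data disk without updating parity and the rebuilt value above would be garbage.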
  21. Parity disk by itself is worthless, you must have all remaining data disks as well to rebuild a failed drive. https://wiki.unraid.net/Parity Please read.
  22. RAID0 should be 1TB: https://carfax.org.uk/btrfs-usage/ If you really want 1.5TB, I think you need the single profile.
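For a two-device pool, the carfax calculator's numbers can be reproduced with simple arithmetic (assuming a hypothetical 1TB + 0.5TB pair; the two-device formulas below don't generalize directly to larger pools):

```python
# Usable capacity for btrfs profiles on two mismatched drives.
drives = [1000, 500]  # GB: 1 TB + 0.5 TB

single = sum(drives)     # chunks land on one device at a time
raid0  = 2 * min(drives) # stripes need both devices, so the extra
                         # 500 GB on the big drive goes unused
raid1  = min(drives)     # every chunk mirrored on both devices

print(single, raid0, raid1)  # 1500 1000 500
```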