Everything posted by JonathanM

  1. A move operation is a copy followed by a delete. Both steps involve reads and writes, with lots of cycling back and forth between the data area and the file system metadata area on both the source and destination disks. Compound that with the wait introduced by every write updating the corresponding area of the parity disk, and there you go.
  2. Possibly a very small gain. Probably not something you could feel without a stopwatch and a consistent set of test files.
  3. That's perfect. When you get the cache drive set up, temporarily disable the Docker and VM services (not just stop the dockers) so there is no longer a menu item for VM or Docker in the GUI. Then run the mover. It will take all the appdata folders currently scattered across the data disks and move them to the cache drive. After that is complete, you can re-enable the services and start your dockers. They will be none the wiser that they are now on a different physical disk, because the /mnt/user/appdata path is still valid. (A sketch of a before-and-after check follows at the end of this list.)
  4. You misunderstand. cache IS a disk. In the case of multiple drives in a cache pool, it's still presented as a single cache disk. /mnt/user/appdata IS the same as /mnt/cache/appdata, as well as /mnt/disk1/appdata. ALL the array disks plus the cache pool are part of /mnt/user.
  5. That is exactly what user shares are. Every folder in the root of the actual disks is a user share, and they are all merged into a single view in user share mode. Normally you would set appdata's cache setting to Prefer, and the bulk, if not all, of appdata would live on your SSD cache pool.
  6. It will use them all, and for reads that's fine. Writes all update the parity drive, so every time a write is requested, the system has to wait for the parity drive to commit before it can move on to the next write, regardless of which data drive is involved. If you wanted, I suppose you could get a hardware RAID card that is supported, and RAID0 a bunch of SSDs to use as parity.
  7. Yes, something corrupted your go file. Change the capital S back to an & and you should be fine. (A sketch of what the stock go file should look like follows at the end of this list.)
  8. Use one of @binhex's excellent VPN-enabled torrent client dockers. However, since you say your VPN is slow, I'm wondering whether it properly supports torrents by providing an open incoming port. If not, you will not have much luck with it. I recommend PIA; they are the easiest to set up with binhex's dockers and work well, provided you specify an endpoint with open ports.
  9. Not that I'm aware of. Why would you want to do that?
  10. If there is a "previous" folder on your flash drive, open the changes.txt file and post the first line here.
  11. If the data only exists in one place, you don't have a backup. And relying on a large corporation to keep your data safe for you, well, that's your call. You could checksum the files, but if they get altered or corrupted after the fact, you will have no way to repair them. Perhaps use par2 sets with whatever level of redundancy you feel comfortable with. As far as checking that they were uploaded intact, the only way to be sure is to download them, run another checksum, and compare it to the original. Checking them on the destination only gets you so far; you have to bring them back locally to use them, so until you know they can be retrieved intact, you can never be sure. (A sketch of a checksum-and-par2 workflow follows at the end of this list.)
  12. So the data is not important, or easily re-creatable?
  13. The general consensus is once a month for an actively used server.
  14. 6.5.3 is FAR from the newest (6.6.5 as of this writing), and your server has been up for almost a month (27 days). Are you sure you upgraded this server? Do you have 2 servers?
  15. That is very valid for normal and gaming PC use, but for Unraid, where most builds have 3 or more spinning rust drives, the strategy is a little different. Cooling many hard drives effectively calls for strategic airflow management to eliminate dead spots around the drives. Typically that involves arranging things so that ALL the incoming air is forced over the drives, and every other opening is either taped off or set as exhaust. If you are using an HBA that was designed for a rack mount server, you may also need to divert airflow over the HBA heatsink, since rack mount cases typically have noisy, high-velocity fans forcing boatloads of air over the cards, whereas consumer cases typically rely on the cards themselves to provide the airflow, or steal available airflow from the drives. Air will take the path of least resistance, so having any intake fans in the case that are not force-ducted over the drives is likely to reduce drive airflow and increase drive temps dramatically. The limited airflow available through a stack of drives or cages also means that CFM is NOT the primary number to be concerned with; instead you need to shop based on available static pressure. Noise reduction, while nice, should be secondary to keeping your rig healthy.
  16. Because there is an existing support thread, with this exact question answered for you already. When you click on the app's icon in the GUI, the bottom item is "support". It will take you to the appropriate place. If you have a question not already answered in that thread, feel free to add your question to the existing thread, so others with that question can easily find the answer you are given.
  17. Does the system boot off USB if you load other USB-bootable software onto the flash drives? Do the Unraid-prepared USB sticks boot properly in other systems?
  18. Limetech is only going to spend time diagnosing issues with the current release and newer.
  19. Heh. I am using an OLD spare machine as my backup pfsense box. Since the DHCP info was migrated from the hardware to the VM, I just make sure that when I add a new device I fire up the old hardware and make the same changes. I've never had any issues, since the IPs are defined as static DHCP. The guest network drops everyone, but who cares. The only issue I originally had was a poorly defined gateway detection rule in my unifi wireless config. The stoopid access points would turn off their SSIDs if the internet was down, which meant I had to boot up a wired box with a static IP, or find one that had a valid DHCP lease, to manage / troubleshoot the network if the internet gateway dropped. I have since fixed that by defining the watched IP to be my unraid box running the unifi docker; that way the access points stay up even if my cable modem has a heart attack. If I could afford that, I wouldn't have to set up a VM to get decent VPN throughput.
  20. It may not change anything, but if there are options in the BIOS for USB configuration, try changing those around. I seem to remember discussions about changing seemingly meaningless USB settings accomplishing what you want for some boards.
  21. Edit the Debian VM's /etc/fstab file and add the following line (all on one line):
      //unraidservername/unraidsharename /vmmountpoint cifs file_mode=0777,dir_mode=0777,_netdev,username=unraiduser,password=unraiduserpassword,uid=1000,gid=1000 0 0
      Make sure the /vmmountpoint folder, whatever and wherever you choose, has appropriate permissions. (A quick way to create the mount point and test the entry follows at the end of this list.)
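
For the appdata move in item 3, here is a minimal sketch of how you might confirm where appdata lives before and after running the mover. It assumes the default Unraid mount points and the stock mover script location; adjust the paths to your own setup.

    # Rough sketch, assuming default Unraid mount points and the stock mover location.

    # Before the move: appdata folders may be scattered across the data disks.
    ls -d /mnt/disk*/appdata /mnt/cache/appdata 2>/dev/null

    # With the Docker and VM services disabled in Settings, kick off the mover
    # (the same action as the "Move Now" button on the Main tab).
    /usr/local/sbin/mover

    # After it finishes, appdata should only be reported on the cache pool,
    # and the user share path keeps working unchanged.
    ls -d /mnt/disk*/appdata /mnt/cache/appdata 2>/dev/null
    ls /mnt/user/appdata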
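
For the go file fix in item 7: on a stock Unraid install the go file lives at /boot/config/go, and its last line starts emhttp in the background. Assuming only that character was damaged, the repaired file would look roughly like this (a sketch, not the poster's exact file):

    #!/bin/bash
    # /boot/config/go -- stock contents; the trailing "&" backgrounds emhttp.
    /usr/local/sbin/emhttp &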
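
For the verification question in item 11, a minimal sketch of the checksum-and-par2 workflow, assuming the sha256sum and par2 command line tools are available. The file names and the 10% redundancy level are just examples.

    # Record checksums and build a par2 recovery set before uploading.
    sha256sum important/* > important.sha256
    par2 create -r10 important.par2 important/*

    # Later: download the files again and confirm they survived the round trip.
    sha256sum -c important.sha256

    # If anything fails verification, attempt a repair from the par2 set.
    par2 repair important.par2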
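
For the CIFS mount in item 21, a quick way to create the mount point and test the new fstab entry without rebooting. The names are the same placeholders used in the entry above, and the cifs-utils package is assumed to be installed in the Debian VM.

    sudo mkdir -p /vmmountpoint
    sudo chmod 0777 /vmmountpoint   # or whatever permissions suit your setup

    # Mount everything listed in fstab (including the new CIFS entry) and check it.
    sudo mount -a
    df -h /vmmountpoint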