Everything posted by JonathanM

  1. You could probably get it to work, but it's not ideal.
  2. Because of the limited interaction with the stick, endurance is much more important than speed. I recommend https://www.kingston.com/us/usb-flash-drives/datatraveler-se9-usb-flash-drive They have a metal shell, which is good for heat dissipation and physical protection, and use USB2, which helps with compatibility. The 16GB version is more than generous; typically you will use less than 2GB of space for unraid and all the settings and customizations. The vast majority of add-on programs will live on the cache drive. You want to avoid the ultra-miniature USB3 drives, the heat has nowhere to go and they tend to cook themselves.
  3. Do you not have enough free space on your array? If you have less than 80GB free, I'd advise that your first course of action be to add a new array drive or upgrade a current one. If you do have enough space, there are wiki articles and other resources describing how to accomplish exactly what you are talking about using space on the array.
  4. This cannot be resolved here on the forum, you must email support. [email protected]
  5. The OS installs to RAM, not to a drive. The USB boot stick contains the OS in a compressed state, and it is decompressed and installed to RAM fresh every boot.
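     A quick way to see this for yourself (a sketch; paths assume a standard install with the flash drive mounted at /boot):

       ls -lh /boot/bz*   # the compressed OS images (bzimage, bzroot, etc.) live on the stick
       df -h /            # the running root filesystem is RAM-backed, not on any disk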
  6. Hmm. Maybe temporarily pull 1 whole processor and the associated memory. When it reboots, does it sound like a normal power cycle? Is there anything that catches your attention before it actually fully reboots?
  7. If everything is working as designed, then what happened was a read error. The data that was supposed to be there was reconstructed from parity, and written back into place, and the write succeeded. If you haven't rebooted since then, the diagnostics might show exactly what happened. In any case, it wouldn't be the worst idea to run a full SMART test on that disk.
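     From the console, something along these lines would do it (the device name is just an example, substitute your actual disk; you can also start the test from the disk's page in the GUI):

       smartctl -t long /dev/sdX   # start an extended self-test in the background
       smartctl -a /dev/sdX        # check progress and view the results when it finishes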
  8. Memory. Try removing half and running for a while. This kind of thing is usually hardware. Have you been sitting near or in front of it when it power cycled yet?
  9. If you require that services be available in the interim, it gets complex. However, if you can stand to have your dockers and VMs disabled for the duration, then...
     1. Disable the docker and VM services. That means no Docker or VM entries on the main GUI menu; if the menu items are there, the services aren't disabled.
     2. Set all shares to cache: yes. Make a note of which shares you changed, so you can set them back after the new pool is formatted. Any shares currently set to cache: no, only, or prefer may need special treatment: make a list of the ones that actually have content on the cache pool, and list the content in those folders. Those will need to be manually moved back to their locations on the pool.
     3. Run mover. If everything is working properly, the cache pool will be empty after mover finishes. Set all shares to cache: no.
     4. Shut down and physically remove the drives.
     5. Array shares can be used without the cache pool, but no VMs or dockers.
     6. Shut down, install the new cache drives, then assign and format them.
     7. Change the shares back to their original settings. Any that were originally set to no and had content on the cache, manually move that content back. Any that were set to only, change to prefer unless you need a large portion to remain on the array. Example... I personally have my domains share set to cache: only. New VMs get created on the cache, but I have MANY that live on the array, and when I need space on the cache for new VMs, I'll manually move some of the lesser used VMs off the cache and onto the array.
     8. Run mover. Verify that the cache content is the same as it was before you started this odyssey.
     9. Enable the docker and VM services.
     10. Profit!
     If you need VMs and dockers to run while you don't have a cache pool defined, you will have to go through each and every docker and VM setting and configuration to be sure there are no direct references to /mnt/cache. If there are, they must be changed to /mnt/user, or /mnt/diskX, where X is the disk where the content temporarily ended up based on the share allocation rules. Some dockers get tetchy about running from /mnt/user, especially recently with 6.7.X. If you plan to go this route of temporarily running from the array, disable ALL the individual auto starts for each VM and docker before you disable the services. If any of them get accidentally started with a direct reference to /mnt/cache, they will happily think they are starting up for the first time and create their folders in RAM, ignoring their actual config files and creating a mess.
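     If you do go that route, a quick way to hunt for direct /mnt/cache references (a sketch only; the template and libvirt paths are my assumptions about where a stock install keeps them, so verify before relying on it):

       grep -rl '/mnt/cache' /boot/config/plugins/dockerMan/templates-user/   # docker templates
       grep -rl '/mnt/cache' /etc/libvirt/qemu/                               # VM definitions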
  10. This. However, if you have two unraid servers, you could set up a VM with the urbackup client in one to backup to the server container on the other, and tell it to backup your shares. It won't be able to back up open files though.
  11. No. There can be only one operating system in control of the actual hardware. If you wish to run unraid and windows together on the same hardware, windows must be a virtual machine. There is your answer.
  12. Also, just as an aside, the extra steps to maintain drive slots will not work properly if you have a disk assigned to parity2. They only work if you are only using parity1. It doesn't matter if you only have one parity disk; the parity1 and parity2 slots are different and not interchangeable. Parity1 is valid as long as the same disks are still in the array, order is unimportant, so you can swap numbers using the method described. Parity2 relies on disk numbers being constant as well as the same array members, so no swapping without rebuilding parity. If at the end of the procedure you decide you want to reorder the disks, you can always do the new config / trust parity dance at the very end after all the data is moved.
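     If it helps to see why, here's the idea in math form, assuming the two parity levels follow the usual RAID P+Q construction (which matches the behavior described above):

       P = d_1 \oplus d_2 \oplus \cdots \oplus d_n
       Q = g^1 d_1 \oplus g^2 d_2 \oplus \cdots \oplus g^n d_n

     XOR is commutative, so shuffling the d_i leaves P unchanged. In Q, each disk's coefficient g^i is fixed by its slot number i, so swapping two disks changes Q unless those disks happen to hold identical data.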
  13. My fault, I should have told you to skip step 4 as well. The point of the extra steps is to move the drive assignments around so that content that started on disk1 will still be on disk1 when the procedure is done. However, it adds a bunch of complexity that is only needed if you care which disk holds what content. You mentioned earlier that you don't bother forcing content onto any particular disk, so it doesn't matter if the content is on a different number when you are done. Also, the trailing slash thing is a common error. You forgot to put the slash after the source. The last slash on the destination doesn't matter, but you forgot the important one. rsync --options /mnt/disk(source number)/ /mnt/disk(destination number)/ http://qdosmsq.dunbar-it.co.uk/blog/2013/02/rsync-to-slash-or-not-to-slash/
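     To make the difference concrete (disk numbers and flags here are just an example, substitute your own):

       rsync -avX /mnt/disk1/ /mnt/disk2/   # trailing slash on source: copies the CONTENTS of disk1 into disk2
       rsync -avX /mnt/disk1 /mnt/disk2/    # no trailing slash: creates /mnt/disk2/disk1/... which is not what you want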
  14. Same error with the NVMe WD Blue removed from the system?
  15. Personally I think the issue is the extremely poor retention at the connector, and any attempt to bundle the cables is more likely to pull one of the ends out of alignment. The connector must be completely square in all dimensions to make a proper link that will stay connected during normal system vibration.
  16. Your mapped paths are inconsistent between sonarr and nzbget. https://forums.unraid.net/topic/57181-real-docker-faq/page/2/
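     For illustration only (the names and host paths are made up, yours will differ): "consistent" means both containers map the same host folder to the same container path, so a download location reported by one container means the same thing inside the other:

       docker run ... -v /mnt/user/downloads:/downloads nzbget
       docker run ... -v /mnt/user/downloads:/downloads sonarr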
  17. Since I know of no one who puts their server into safe mode and disables everything for their parity checks, I see some value in running a pass in both clean and dirty configurations. Problem is, there is no way to get consistent numbers when things are firing off seemingly randomly. I'm having a hard time wrapping my head around my own question, let alone the correct answer, so here goes: is there a way to test or know for sure that the values obtained running clean are indeed the best values during heavy use? I'm hoping there is a logical explanation that says, "of course, that's how it works".
  18. I personally don't use plex, so I don't have a stake here, but I would like to request on behalf of the less knowledgeable here that you publish a step by step guide to help people migrate. Many use this container purely because it was supported by limetech, so to leave them hanging feels wrong.
  19. I don't know. I use Emby, not Plex, and I know Emby sometimes will add files to the media folders. If Plex is the same way, then it's possible Plex will operate on the source drive after the copy has happened, so the change will be lost. With Emby it wouldn't be a big deal, but the verification step would show differences that you would need to track down. It shouldn't be a show stopper to leave Plex running, but it may slow things down or complicate them a little, or may have no effect. Honestly I'd be tempted to try leaving it running, the possible downsides aren't that big.
  20. Post your docker run commands for sonarr, radarr, nzbget and your torrent client.
  21. Not discounting the value of the suggestion, which I personally have no need for but certainly can see a valid use case, but I'm wondering about your workflow. Normally vdisk files are sparse, so no matter how large or small you allocate, they only take up as much space as actually used by the files inside. So, why not just set up the base images with the size you need to begin with? It's not going to change the amount of space they occupy.
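     Easy to verify (the filename and size are arbitrary examples):

       qemu-img create -f raw vdisk1.img 100G   # created sparse, takes almost no real space up front
       ls -lh vdisk1.img                        # reports the full apparent 100G size
       du -h vdisk1.img                         # shows the tiny amount actually allocated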
  22. That procedure contains the meat of what I've recommended, but it adds some extra steps designed to keep the content on the same drive slot after you are done, and some precautions so that the temporarily duplicated files won't cause issues with ongoing usage during conversion. The OP has indicated he doesn't really care which drive slot contains specific files or shares, so no need to go through the extra steps of swapping slots. Simply keeping accurate track of which drive is source and which drive is destination for each iteration is enough. It does, however, spell out the use of rsync instead of cp, and I personally use the rsync method to do the copy and then the compare before nuking the old source drive and moving on to the next transfer. In the OP's specific case the steps outlined in 10-18 can be skipped; instead, simply change the format type of the last source disk so it becomes the new blank destination disk. As long as you keep track of which disk number is currently being read from and written to, you should be fine.
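     For the compare step, a dry run along these lines works (disk numbers are placeholders; -c forces a checksum comparison and -n keeps it read-only):

       rsync -nrcv /mnt/disk1/ /mnt/disk2/   # any files listed differ between source and destination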
  23. Do you have any includes or excludes set currently?
  24. The array isn't going to be stopped, but if someone has files open, or adds files to a share that is being accessed, things get complicated. If you can tell people not to add or change files, and preferably temporarily not use the server, so much the better. Best is to disable the docker and vm service so nothing is interfering there either. If nobody is going to be accessing the files, then no need for a temporary folder, simply make an identical copy of all the folders. The side effect of this is that the temporarily duplicated files will be hidden from the user share system, so someone could make a change that you would subsequently format away. The file verification step should catch that, but easier not to have to mess with it at all.
  25. It complicates some things, and simplifies others. Is it acceptable to you to temporarily stop all array activity while the migration is happening, or will you be run out of town on a rail if certain people can't access what they want, when they want it?