Everything posted by JonathanM

  1. Preferably in a session guaranteed to stay up, so it can complete. Either use the screen program from a remote terminal, or use the local console on the Unraid box. That command will list any files that don't match, so a successful run gives zero results. I suggest playing with it on a couple of small folders first, so you can get comfortable with how it works. rsync -narcv <source path> <destination path>
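     A minimal sketch of how that might look in practice, with hypothetical disk paths (the screen usage and rsync flags are standard):
     screen -S verify                      # detachable session that survives a dropped SSH connection
     rsync -narcv /mnt/disk1/ /mnt/disk2/  # -n dry run, -a archive, -r recursive, -c checksum compare, -v verbose
     # detach with Ctrl-A then D; reattach later with: screen -r verify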
  2. Based on what I see there, you currently have /mnt/user/appdata/data on the array mapped to /data inside the container. If you want to save to /mnt/disk1/ you need to change the host mapping there, and inside deluge you set it to /data
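     As a sketch, with a hypothetical /mnt/disk1/downloads folder standing in for wherever you want the files to land:
     Host path:      /mnt/disk1/downloads   (edit the existing /mnt/user/appdata/data entry to this)
     Container path: /data                  (leave as-is; Deluge keeps saving to /data)
     # equivalent docker volume flag: -v /mnt/disk1/downloads:/data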
  3. New config, keep none, unassign all disks, preclear all spinning rust, blkdiscard the SSDs, keep the license key file from the config folder, recreate the USB with a new install, put the license key back in the config folder.
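     For the blkdiscard step, a sketch assuming a hypothetical SSD at /dev/sdX (double-check the device letter first, this discards everything on it):
     blkdiscard /dev/sdX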
  4. Set all shares to private or not exported, reboot the troubled machine and try again. Also, type the password into a plain text area first, and copy paste to verify no keyboard malfunction.
  5. That's not the point. trurl asked a specific question, "Which parity was it?" Did you remove parity1 or parity2?
  6. Things I would do: memtest for 24 hours; verify fans and temps; a non-correcting parity check; a file system check on all drives. Anticipate zero errors; any errors would be cause for evaluation and troubleshooting.
  7. rsync -narcv /mnt/diskX/ /mnt/diskY
  8. TDP is a very poor measure of consumption over time for a specific workload. TDP exists solely as a design requirement for how much heat-shedding capacity the heat sink system needs. If you have airflow constraints, a tight overall package, or a harsh temperature environment, then sure, you need to be concerned with TDP. For a home server, a low-TDP version of an otherwise identical chip will typically consume MORE power in the long term, because CPU-heavy tasks are throttled to keep the TDP down while the drives are forced to stay spun up for a longer duration, vs. the higher-TDP processor completing quicker and allowing the server to spin down. It's much more important to look at the overall consumption of the motherboard, RAM, HBA, and GPU than to focus on the processor TDP. Also, keeping the spindle count as low as possible helps tremendously. Running 3 8TB drives straight off the motherboard is going to be hugely more efficient than running 9 2TB drives that have the same usable capacity but require a separate HBA because the motherboard only supports 6 SATA ports. Designing a low power server is complex.
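     Purely illustrative numbers to make the point (assumed wattages, not measurements): at roughly 5-8 W per spinning 3.5" drive plus ~10 W for an HBA, 3 drives off the motherboard idle somewhere around 15-25 W, while 9 drives plus an HBA for the same usable capacity sit around 55-80 W, before spin-down behaviour, extra cabling, and airflow are even considered.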
  9. Not off the top of my head. I don't remember, I let the internet remember for me.
  10. After you do that you will need to rescan / update the database so the files show up in the nextcloud app.
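     A sketch of the rescan, assuming the linuxserver.io container is named nextcloud, occ lives at /config/www/nextcloud/occ, and the web user is abc (adjust all three for your setup):
     docker exec -u abc nextcloud php /config/www/nextcloud/occ files:scan --all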
  11. They are inside the docker image file, it's not particularly easy to get to them, and they will stay there bloating the image size until you delete and recreate it. If you wish to reclaim the space, take a look in the docker FAQ on this forum and follow the directions to recreate the docker image. It's quick and painless, you won't lose anything except the files that were saved inside the image in error.
  12. With that symptom, the first things I would investigate are CPU cooling, PSU issues, and memory issues. If it's hanging so hard that the IPMI is dead, that's almost got to be hardware.
  13. As long as your UPS is properly recognized by either the built in apcupsd in unraid or the third party NUT plugin, you can install either apcupsd or NUT into all the related VM's and PC's, and set them to slave mode over the network. Just be sure that the network path is powered during an outage, and the slaves are all set to finalize their shutdown routine well before the master server is set to go down. I personally have all the slave units set to shut down after a minute or so of no power, and the server set to shut down a few minutes later. Best policy is to keep as much power in the battery as possible, lead acid batteries don't like to be discharged below about 50%, and if you have a second power outage before the battery has a chance to fully recover, it takes roughly 10X the discharge period to recharge.
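     A minimal sketch of the slave side for apcupsd, assuming the master (the Unraid box) is reachable at 192.168.1.10 on the default NIS port 3551 and you want the slave down after about a minute on battery (address and timing are examples):
     # /etc/apcupsd/apcupsd.conf on the slave machine
     UPSCABLE ether
     UPSTYPE net
     DEVICE 192.168.1.10:3551
     TIMEOUT 60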
  14. https://forums.unraid.net/topic/84601-support-spaceinvaderone-macinabox/?do=findComment&comment=784480
  15. If they were previously managed by another controller, you will need to go through the procedure to adopt them into the new controller.
  16. No. If you start the array like that, I'm pretty sure your cache pool data will be gone. I believe the correct thing to do would be to set all cache devices to none and start the array, then stop the array and assign both devices. However, I'm not at all sure about it, so if I were in your shoes I'd put it back like it was, with a single device in cache2, and hopefully you can see your data. If so, I'd follow the normal procedure to replace the cache disk: disable the docker and VM services so they don't show up in the GUI, set all shares to cache yes, and run the mover. My best advice is to wait for @johnnie.black to chime in, he's much more familiar with BTRFS cache pool issues than I am. WHATEVER YOU DO, don't start the array with that "All existing data..." message showing. It WILL erase your cache pool. I stand corrected, thanks to johnnie.black. There have been so many threads of "OMG I JUST LOST MY DATA" involving a cache pool that I'm overly paranoid it won't work as advertised.
  17. If you want the best quality possible, don't convert it, store the raw rips. Unless you are using only lossless compression with no modification of encoding or format, every conversion loses quality. The point of conversion is typically to either modify the encoding or format to achieve the smallest possible file size with acceptable quality. What those settings are is highly personalized.
  18. There may be. For you. What is best for playing on an HD projection screen with 7.1 home theatre audio is not even close to what would be best for someone who only ever watches on their smartphone with headphones on. There's a huge range of options, depending on the target playback device. If you have multiple playback device types, there probably isn't one setting that won't be a compromise. You will have to research and make decisions based on your specific needs, and there are entire forums dedicated to optimizing your library. This isn't one of those forums.
  19. I don't know for certain which user this specific container uses, but LSIO nextcloud uses abc. So, for every instance of www-data, you substitute abc. It's a valid guide, you just need to apply some customization.
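     For example, where a guide has something like chown -R www-data:www-data /var/www/nextcloud/data, inside the LSIO container that would become (path shown is illustrative):
     chown -R abc:abc /data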
  20. To expand on that a little, the two parity slots are completely different. You can't take a drive that was valid in parity1 and assign it to parity2 and have it still be valid, it must be rebuilt. Data drive slot reassignments don't invalidate parity1 as long as the same drives are used, but parity2 must be rebuilt if you swap data slot assignments. Each parity slot can rebuild 1 data slot, independent of the other parity slot. Having only 1 parity drive in parity2 works just fine, it's a bit more complex mathematically, so the CPU works ever so slightly harder. In retrospect, I kind of wish that when the change was made to add the second parity slot that things were renamed to make the difference clearer. Maybe something like "Basic Parity" and "Complex Parity". Maybe not. It's tough to convey all the nuances with simple labels.
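     A sketch of why that is, assuming Unraid's dual parity works like the usual P+Q scheme (my understanding, not an official statement):
     P = D1 ⊕ D2 ⊕ ... ⊕ Dn                        (plain XOR; order doesn't matter, so swapping data slots keeps parity1 valid)
     Q = g^1·D1 ⊕ g^2·D2 ⊕ ... ⊕ g^n·Dn over GF(2^8)  (each slot gets its own coefficient, so swapping data slots invalidates parity2)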
  21. I haven't experimented further yet, I was anticipating a flurry of update / bugfix / activity and figured I'd wait for the next version to have a play.
  22. It's fine. Why are you deleting fields? When I did my test, all I did was pull the container, set the host paths in the template to match my array locations, and on first run changed the data location in calibre itself to /books instead of /config.
  23. No, there are 2 options. Follow the linked directions on the deluge site to build an up to date windows client, or select an older container for unraid using a tag in the repository field of the container setup. Either will require effort on your part, but all the needed information is readily available if you search.
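     A sketch of the second option (image name and tag are placeholders, check your image's tag list for the actual older version): in the container's Repository field, change something like linuxserver/deluge to linuxserver/deluge:<older-version-tag>.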