Everything posted by bryansj

  1. I have five unRAID systems that I babysit. All of them run on retired Dell PowerEdge Rxxx servers, which all use the tg3 driver for the built-in 4-port 1Gb NIC. Since I manage all but my own server remotely, I need to be sure I understand something about the blank tg3.conf. If I create the blank tg3.conf file BEFORE updating to 6.10.2, will the server still have network connectivity after rebooting into the upgrade, or does the file only take effect after upgrading to 6.10.2? The fallback would be doing this through iDRAC, but I reach a couple of these systems over unRAID's WireGuard.
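     For reference, this is roughly what I'd run over SSH before rebooting; a minimal sketch, assuming the override file lives in /boot/config/modprobe.d/ on the flash drive the way the 6.10.2 release notes describe:

        # My understanding is that a blank tg3.conf tells 6.10.2 not to
        # blacklist the tg3 driver; on 6.9 an empty modprobe config is
        # harmless, so creating it ahead of the upgrade should be safe.
        mkdir -p /boot/config/modprobe.d
        touch /boot/config/modprobe.d/tg3.conf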
  2. I've moved these drives across different cards over the years. I just moved them this morning from an H730 Mini to an H730P Mini, if that matters. I've moved other drives from an LSI IT-mode card to various Dell cards with no issue. I haven't moved from a Dell server card to an IT-mode LSI. I'm not really interested in testing unnecessary drive swaps, so I'll have to post an update way down the road when I move to a new server.
  3. My drives were found by unRAID once I marked them as non-RAID in the H730 controller. I'm seeing SMART data in the drive properties. I'm not sure how to determine whether the drive IDs were changed in any way, other than confirming that my array was picked up.
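     One way to double-check, assuming the stock smartctl that ships with unRAID and a made-up device name:

        # smartctl -i prints the model and serial string that unRAID
        # uses to identify array members; if these match the drive
        # assignments, the controller hasn't rewritten the identity.
        smartctl -i /dev/sdb | grep -E 'Device Model|Serial Number'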
  4. I just got an R730XD as well. You can use the H730 RAID controller as-is. You just need to go into each drive in iDRAC or the controller config and mark every drive as "non-RAID". The other option is to change the controller to HBA mode. There is no need for a different controller card or IT mode.
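     If memory serves, the same conversion can be scripted through racadm instead of clicking through iDRAC; the disk and controller FQDDs below are examples only and will differ per slot, so treat this as a sketch rather than exact syntax:

        # List the physical disks to find the real FQDDs first.
        racadm storage get pdisks
        # Stage one bay as non-RAID, then queue the config job
        # (from memory; a staged job plus reboot may be needed instead).
        racadm storage converttononraid:Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1
        racadm jobqueue create RAID.Integrated.1-1 --realtime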
  5. Forcing an update just now fixed it. I'm beginning to wonder why I stick with qbit. They seem to like breaking things often.
  6. Hard linking works fine in unRAID and with qbit; you just can't link across shares. Your downloads and media need to be under the same share, e.g. /media/movies and /media/downloads.
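     Roughly what I mean, with made-up container-side paths:

        # Works: both directories sit under the same share, so they
        # resolve to the same underlying filesystem.
        ln /media/downloads/Movie.mkv /media/movies/Movie.mkv
        # Typically fails with "Invalid cross-device link": hardlinks
        # can't span shares because each share can map to different
        # underlying disks.
        ln /downloads/Movie.mkv /media/movies/Movie.mkv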
  7. I rolled back to linuxserver/qbittorrent:14.3.2.99202101080148-7233-0cbd15890ubuntu18.04.1-ls110 and it worked fine. The logs suggest moving the config folder, but I'll wait. "The legacy data directory '/config/data/qBittorrent/' is used. It is recommended to move its content to '/config/qBittorrent/'"
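     If I do move it later, I'd expect it to be something like this with the container stopped; the paths are taken straight from the log message (these are container-side paths, so on the host they'd sit under the appdata share):

        # Merge the legacy data directory into the new location.
        mkdir -p /config/qBittorrent
        mv /config/data/qBittorrent/* /config/qBittorrent/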
  8. I started as far back as MP3s in the 1990s with Usenet and moved to private trackers a few years ago. You might have a different opinion, but I'm not going back. I'm not talking about crappy public trackers here. I've done seed boxes, but they don't really meet my use case anymore.
  9. I remember from my attempt a couple of years ago that gdrive and downloads didn't get along, but I couldn't remember where the problem was between them. The API ban would cause plenty of headaches. I also remember there was a catch-22 back when Plex could stream straight from a gdrive, before they canned that service. You could point Plex at gdrive and your users would stream from there without using your bandwidth. However, you couldn't encrypt your media, so you risked Google deleting your content. If you do encrypt your media, it has to pass through your pipe to be decrypted, so you are back to using Plex "locally".
  10. I think I just don't run a setup with a problem that this solution solves. First, I'm up to 84TB of local storage. Second, I like my 4K HDR remuxes to direct play on my Shield through my Atmos/DTS:X AVR. Third, I hardlink my downloads and want to seed long term. So if Downloads gets omitted from the upload script, I still have to maintain a local copy for seeding; then I'm just uploading for the hell of it and forcing myself to stream everything in my library from the cloud even though the source copy is still local. A solution could be to not omit Downloads, see if seeding works from the cloud drive share, and stop buying EasyStores on sale. I already had the Google account with unlimited storage and had tried it as a CrashPlan replacement back when they stopped doing their peer-to-peer backup. It turned out that Duplicati sucked, so I just paid for CrashPlan. I decided to dust off the account after coming across the rclone plugin and these scripts. I think with these scripts I could revisit it for backup. I may also consider pointing my NextCloud at it and having it be a Google Drive hybrid of sorts. If anyone has any other ideas on slick ways to use this, let me know.
  11. I got the scripts set up and working. Now I can't decide how I actually want to use them with my setup. I see that the upload script omits "downloads" and pushes the hardlinked movies/tv/etc. to Google Drive. However, I basically permaseed everything and hardlink, so if a file is already using space on my server by seeding in Downloads, the hardlinked copy in my media library doesn't take any additional space in the array. I also don't have symmetric gigabit with Xfinity, so my 40Mbps upload is rather slow. Anyone have a use case like mine? I'm thinking I could replace my CrashPlan backups with this by setting up a backup folder in the mount_mergerfs folder. I'm not really sure what to do, but I do have access to an unlimited Google account.
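     For context, the omission boils down to an rclone filter in the upload script; a sketch with an assumed remote name and local path (not the script's exact flags):

        # Move finished media to the cloud but leave the seeding
        # downloads folder local; --min-age skips files still being
        # written.
        rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
          --exclude "downloads/**" --min-age 15m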
  12. Is there a point in installing the beta version? I just did the regular one and got it mounted and working.
  13. Just migrated to RS and it was painless. First I logged in and exported my vault. Then I stopped the container, opened a new unRAID tab, and started adding the new RS container, copying over the same port number and appdata path. I removed the old container, clicked Done, and the RS container was created. Reverse proxy, vault password, and apps all continued to work.
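     In plain docker terms the swap was roughly the following; the container names, host port, and image are my guesses at typical unRAID defaults, not the exact template values:

        # Retire the old container but keep its appdata volume.
        docker stop bitwarden && docker rm bitwarden
        # Recreate against the same appdata path and host port so the
        # reverse proxy and clients keep working unchanged.
        docker run -d --name bitwardenrs \
          -p 8343:80 \
          -v /mnt/user/appdata/bitwarden:/data \
          bitwardenrs/server:latest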