Jorgen

Everything posted by Jorgen

  1. Hmm, I don't actually read/write to the shares much from my Mac (Catalina), especially big files. But I just did a test and the speeds aren't as fast as I expected for me either. I'm getting 10-20MB/s write, and a bit less on reads (single large file). There are some SMB config tweaks floating around on the forum specifically for Mac transfer speeds; I will look into this to see if it makes a difference and report back. In the meantime, have you enabled "Enhanced macOS interoperability" under Settings/SMB Settings in unRAID? If not, do that and see if it helps (it probably requires a restart of the array at least).
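For reference, the Mac-specific SMB tweaks mentioned usually go in /boot/config/smb-extra.conf on unRAID. This is a hedged sketch, not verified on this setup: the option names come from Samba's vfs_fruit module (which is what the "Enhanced macOS interoperability" toggle configures), and the values are common suggestions rather than known-good settings for this system.

```
# /boot/config/smb-extra.conf - example fragment, adjust to taste
[global]
   # vfs_fruit improves macOS interoperability
   vfs objects = catia fruit streams_xattr
   fruit:metadata = stream
   fruit:posix_rename = yes
   # Often suggested for Mac transfer speed, at the cost of SMB signing
   server signing = disabled
```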
  2. I haven’t looked at your diagnostics yet, but I think you need to give us a bit more info about the scenario.
     - What version of macOS are you using?
     - How are you connecting to the unRAID server? WiFi or ethernet? Are you sure your network is ok?
     - Have you tested whether you get the same or better speeds from a different computer/OS?
     - How are you mounting the share on the Mac? I assume via SMB?
     - What exactly are you doing when you see those speeds? E.g. copying one 10GB file from your Mac to the unRAID share, or copying lots of small files from unRAID to the Mac?
     Sent from my iPhone using Tapatalk
  3. Are you using iCloud to sync photos from your phone? In that case you can do what I do. I have a Mac VM on unRAID that runs for a few hours per week (scheduled starts and stops). On the VM I run Photos, which syncs with iCloud. The storage is a disk image on the unRAID array. So any new photos/videos on my iPhone are immediately synced to iCloud, then synced to unRAID once a week. I also run Time Machine to a separate unRAID disk, which is probably overkill. But I’ve lost irreplaceable photos in the past from phones dying and really don’t want it happening again. In the unlikely scenario that my phone dies AND Apple somehow loses all my iCloud storage at the same time, I’m now only losing a maximum of one week of recent photos. Which I’m ok with.
  4. Probably the simplest solution is to use the binhex privoxy docker to create the VPN tunnel and then configure Jackett to use it as a proxy.
  5. I had problems with rarbg when using binhex-privoxy as a proxy for Sonarr. Maybe worth disabling the proxy as a test, if you are using one?
  6. No, you can change CPU assignments, but the XML contains other custom bits that get wiped out when changing CPUs via the unRAID GUI. So you have two options:
     1. Edit the CPU assignment directly in the XML (swap to advanced view in unRAID), or
     2. Make a copy of the original XML, then edit the CPU assignment in the GUI, save, switch to XML view and paste in all the bits that got removed from the original XML (from memory it’s the OVMF path and the block of custom arguments at the end of the XML).
     Either way you need to understand a bit about the XML structure, and it helps to use a text editor that lets you compare two text files.
  7. I think the commonly accepted workaround is to restart the docker once a day, using CA User Scripts to automate it. No one has ever been able to pinpoint the root cause, and it doesn’t affect everyone.
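The daily-restart workaround can be sketched as a CA User Scripts entry like the one below, with the schedule set to "Daily" in the plugin. The container name here is an assumption for illustration; substitute the name shown on your Docker tab.

```shell
#!/bin/bash
# CA User Scripts entry: restart the container once a day as a
# workaround for the unexplained hang. Container name is an example,
# use the exact name from your unRAID Docker tab.
docker restart binhex-delugevpn
```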
  8. See Q4 here: https://github.com/binhex/documentation/blob/master/docker/faq/delugevpn.md
  9. I think the Controller Hostname/IP setting should be set to the custom:br0 IP of the docker container. It should be the IP that the other devices need to use to connect to the controller. But I’m not sure you need to set that at all when using a custom network.
  10. See Q4 here: https://github.com/binhex/documentation/blob/master/docker/faq/delugevpn.md Alternatively you can enable the plugin and then stop/start the backend server from within the deluge UI. I forget where to find it exactly; it’s in the deluge menu somewhere.
  11. You would have to change the application port from the Radarr UI under Settings/General. Then map the host port to the new application port, but you need to do that on the VPN container when you are using its network (not the Radarr container). I don't believe the container port mapping is actually used at all when using the network from another container. Here's an example to illustrate: if you are using two Radarr containers, you need to add two custom port mappings to the VPN container.
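In plain docker terms, the setup described above might look like the sketch below. This is a hypothetical illustration, not a complete command: the container names, image, and port 7879 are assumptions, and on unRAID you would add the mappings via the VPN container's template rather than docker run.

```shell
# The VPN container publishes BOTH Radarr ports, e.g. in its template:
#   -p 7878:7878   (first Radarr, default application port)
#   -p 7879:7879   (second Radarr, changed under Settings/General)

# Each Radarr container then reuses the VPN container's network stack,
# at which point its own port mappings are ignored:
docker run -d --name radarr2 --net=container:binhex-delugevpn binhex/arch-radarr
```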
  12. The preset you select in the GUI is only used for the file you manually load and process via the GUI. The docker setting controls which preset is used by the automated watch folder. The two are very separate and setting one does not influence the other.
  13. You couldn’t have; Time Machine over SMB was only introduced in macOS 10.12 (Sierra). You probably used AFP to back up to a network share? That works for all macOS versions, but is deprecated and shouldn’t be used. Sounds like the best way for you to get this working is to update to High Sierra. I understand you probably want the backups working before upgrading, so I’m not sure what to suggest. Maybe set up Time Machine to an external HDD temporarily?
  14. I don’t know if this is actually how it works, but it’s how I think about it. And I have no problems running this docker in bridge mode. When running UniFi in a docker in bridge mode, the docker IP is something on the bridge network, e.g. 172.x.x.x. I think the UniFi app detects its own IP and broadcasts it out to all APs that are listening. This is how auto-adopt works. But of course, the AP can’t talk back to UniFi on 172.x.x.x. It needs to use the unRAID host IP. The controller override settings in UniFi neatly fix this problem by broadcasting an IP that the APs can actually reach. So in summary, bridge mode works well but you HAVE TO set the controller override options correctly. Again, this is how I think about it, I don’t know for sure that it’s how it works behind the scenes.
  15. It does, but Time Machine over SMB is only supported from Sierra and up. For what it’s worth, I’m having no problems with Time Machine to SMB unRAID shares from multiple Macs running either High Sierra or Catalina. But the original Time Machine jobs were actually created on High Sierra and the Macs later upgraded to Catalina. Don’t know if that makes a difference.
  16. @mbc0 Jackett has a setting for allowing "External access". Not sure exactly what it does, but if I turn it off I can't get to the Web UI either when using the delugevpn network. If I turn it on the Web UI is accessible again. If you can't get to the UI to change it, you can edit the config file directly in config/ServerConfig.json and restart the container.
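A sketch of the direct config edit described above. The appdata path and the key name are assumptions (check the actual contents of your ServerConfig.json before editing), and the container name is an example.

```shell
# Hypothetical paths/names - verify against your own setup first.
CONFIG=/mnt/user/appdata/binhex-jackett/Jackett/ServerConfig.json

# Stop the container, flip the external-access flag, start it again
docker stop binhex-jackett
sed -i 's/"AllowExternal": false/"AllowExternal": true/' "$CONFIG"
docker start binhex-jackett
```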
  17. And what do the Jackett logs say, like IceNine451 suggested? It's possible for the container to start and use the delugevpn network while the Jackett application itself is having problems. This is what my logs look like on a successful start:
      2020-04-13 13:19:55,749 DEBG 'jackett' stdout output:
      04-13 13:19:55 Info Jackett startup finished in 2.927 s
      2020-04-13 13:19:55,749 DEBG 'jackett' stdout output:
      Hosting environment: Production
      Content root path: /usr/lib/jackett/Content
      Now listening on: http://[::]:9117
      Application started. Press Ctrl+C to shut down.
      And if you run top from the docker console, you should see jackett listed as a process
  18. And just to confirm, your unRAID IP is 192.168.0.33 and you are using v6.8.3? No ad blockers or similar used in your browser?
  19. For what it’s worth, I just set up binhex jackett using the network of binhex delugevpn following spaceinvader’s video. I can access the Jackett UI without problems and have confirmed it is using the VPN tunnel. And I left the port mappings in place for Jackett.
  20. The underlying OS of the dockers isn’t routing through privoxy, but the applications themselves are. So Sonarr will go through the VPN, but a curl command at the OS level won’t. Same in your VM: the browser is using privoxy as a proxy, but not Windows itself. You CAN configure both the VM and the dockers to use privoxy for all network traffic, but it’s not needed for the docker apps you mention, as they have built-in proxy support. Edit: Binhex explained it better
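A quick way to see the OS-level vs proxied distinction is to compare the same request with and without the proxy. The privoxy address and port 8118 here are assumptions (adjust to your container's IP), and ifconfig.io is just one public what's-my-IP service.

```shell
# OS-level request: goes out via your normal WAN IP
curl ifconfig.io

# Same request forced through privoxy: should return the VPN endpoint's IP
curl -x http://192.168.1.10:8118 ifconfig.io
```

If both commands return the same address, the proxy isn't actually being used.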
  21. Just reporting a successful (but long) 3-cycle preclear of a Seagate 8TB archive drive. Triggered with: /usr/local/bin/preclear_binhex.sh -f -c 3 /dev/sdd It's been a few years since I last precleared a drive, and this docker was very easy to use, thanks for making it available! I remember using the script directly before and I was never really sure I had managed to get the right version of it from the forum threads.
  22. You can't set those docker containers to use a proxy, so I guess what you have done is set the application within the container to use the proxy. The curl commands show the container's networking not using the proxy, which is expected. Except for the deluge container itself, where the whole container is using the VPN tunnel (not via a proxy). I don't know how you can test whether Sonarr etc. is actually using the proxy correctly; your best bet might be to look at the application logs to see if something useful is logged. For what it's worth, I am using the deluge proxy for both Sonarr and Radarr, but I haven't actually checked whether the traffic goes via the VPN tunnel or not. But I can't see any reason why it wouldn't.
  23. Oh just realized that the disk is still under warranty! Would these problems qualify it to be returned under warranty? It's a Seagate 8TB Archive drive (SMR)
  24. One of my disks (disk4) suddenly reported going from 0 to 168 errors for SMART 197 Current_Pending_Sector AND 198 Offline_Uncorrectable, both at the same time. Here are the unRAID notifications:
      2019-09-06 08:38 Unraid Disk 4 SMART health [198] Warning [TOWER] - offline uncorrectable is 168 ST8000AS0002-1NA17Z_Z84102P2 (sdd) warning
      2019-09-06 08:38 Unraid Disk 4 SMART health [197] Warning [TOWER] - current pending sector is 168 ST8000AS0002-1NA17Z_Z84102P2 (sdd) warning
      There wasn't much activity on the array at the time, and nothing is logged in the syslog for that time period. I've run a short and an extended SMART self-test. The extended test reported errors:
      Num  Test_Description  Status                    Remaining  LifeTime(hours)  LBA_of_first_error
      # 1  Extended offline  Completed: read failure   90%        18661            2896134552
      # 2  Short offline     Completed without error   00%        18631            -
      Can someone with more experience in this confirm that the disk is indeed on its last legs and should be replaced immediately? Extended SMART report and diagnostics attached, let me know if you need anything else. Thanks in advance
      ST8000AS0002-1NA17Z_Z84102P2-20190908-0923.txt
      tower-diagnostics-20190907-2350.zip
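For anyone wanting to reproduce the self-tests above, these are the standard smartmontools commands. The device name /dev/sdd matches this post but will differ on other systems; check the unRAID GUI or lsblk first.

```shell
# Quick self-test (a few minutes)
smartctl -t short /dev/sdd

# Extended self-test (takes many hours on an 8TB drive)
smartctl -t long /dev/sdd

# Full report, including attributes 197/198 and self-test results
smartctl -a /dev/sdd
```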