shutterbug

Everything posted by shutterbug

  1. Sure enough, five total clicks on the macOS Installer entry, with a reboot each time, got me into the OS. Thank you! (And SpaceInvader One does indeed note this in the video, now that I've gone back and reviewed it. Doh!)
  2. Yes, that's exactly what I see. I'm running on an 8th-gen Core i7 and Unraid 6.9 RC1.
  3. Well, I finally got the Macinabox VM running. I formatted my 107 GB disk and started the Big Sur OS reinstall; it ran for about half an hour and appeared to complete without errors, but when it rebooted I was back at the screen with macOS Base System, macOS Installer, Recovery, UEFI Shell, Shutdown and Reset NVRAM. If I reinstall again, the same thing happens. I seem to be stuck in a loop and never get an option to boot into the OS.
  4. Deleting the custom_ovmf folder was what I needed, thanks so much!
  5. So I had Macinabox working previously (the version before Big Sur was supported) and it worked great. I followed SpaceInvader One's steps in the video precisely to remove my old VM, docker, template, etc., and I've gone through the video several times to make sure I'm not missing a step. My docker is configured exactly as noted in the video (same paths, etc.), but I'm getting this error when I try to fire up the Big Sur VM for the first time: "operation failed: unable to find any master var store for loader: /mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd" I've removed everything and gone through the video several times, all with the same result. The only thing that's not 'stock' on my Unraid is that I have a pool of NVMe SSDs in addition to my cache (running 6.9 RC1), and I have moved my 'domains' directory for VMs to that pool.
  6. Thank you! Is there any reason I wouldn't want to run Privoxy? I never had it on before, but after enabling it and reading up on it, I reconfigured Sonarr and Radarr to pull through the Privoxy port, and this seems like an ideal solution. Just wondering if there are any cons to running it? (I don't use SABnzbd outside of my LAN.)
  7. Sure enough, enabling Privoxy solved it for me as well. I'd never had this enabled previously; as you noted, it worked fine before. Thanks!
  8. Both Sonarr and Radarr show that it is running, and the checks from those tools pass. I've tried to hit the web interface from multiple browsers and in incognito mode; all time out. Unraid has been rebooted and the problem remains.
  9. No auth errors; the WireGuard interface appears to come up and I receive an IP:

2020-12-02 18:12:57,180 DEBG 'start-script' stdout output: [info] Attempting to bring WireGuard interface 'up'...
2020-12-02 18:12:57,189 DEBG 'start-script' stderr output: Warning: `/config/wireguard/wg0.conf' is world accessible
2020-12-02 18:12:57,194 DEBG 'start-script' stderr output: [#] ip link add wg0 type wireguard
2020-12-02 18:12:57,195 DEBG 'start-script' stderr output: [#] wg setconf wg0 /dev/fd/63
2020-12-02 18:12:57,206 DEBG 'start-script' stderr output: [#] ip -4 address add 10.x.x.x dev wg0
2020-12-02 18:12:57,210 DEBG 'start-script' stderr output: [#] ip link set mtu 1420 up dev wg0
2020-12-02 18:12:57,227 DEBG 'start-script' stderr output: [#] wg set wg0 fwmark 51820
2020-12-02 18:12:57,228 DEBG 'start-script' stderr output: [#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
2020-12-02 18:12:57,229 DEBG 'start-script' stderr output: [#] ip -4 rule add not fwmark 51820 table 51820
2020-12-02 18:12:57,230 DEBG 'start-script' stderr output: [#] ip -4 rule add table main suppress_prefixlength 0
2020-12-02 18:12:57,233 DEBG 'start-script' stderr output: [#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
2020-12-02 18:12:57,234 DEBG 'start-script' stderr output: [#] iptables-restore -n
2020-12-02 18:12:57,236 DEBG 'start-script' stderr output: [#] '/root/wireguardup.sh'
2020-12-02 18:12:58,327 DEBG 'start-script' stdout output: [info] Application does not require external IP address, skipping external IP address detection
2020-12-02 18:12:58,328 DEBG 'start-script' stdout output: [info] WireGuard interface 'up'
2020-12-02 18:12:58,329 DEBG 'start-script' stdout output: [info] Application does not require port forwarding or VPN provider is != pia, skipping incoming port assignment
2020-12-02 18:12:58,382 DEBG 'watchdog-script' stdout output: [info] SABnzbd not running
2020-12-02 18:12:58,382 DEBG 'watchdog-script' stdout output: [info] Attempting to start SABnzbd...
2020-12-02 18:12:59,192 DEBG 'watchdog-script' stdout output: [info] SABnzbd process started
[info] Waiting for SABnzbd process to start listening on port 8080...
2020-12-02 18:12:59,403 DEBG 'watchdog-script' stdout output: [info] SABnzbd process is listening on port 8080
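     (Side note on the "world accessible" warning above: that's just a file-permissions complaint about the WireGuard config, not what's blocking the web UI. If I wanted to silence it, I believe something like this from the Unraid console would do it, with the path adjusted to wherever the container's /config is actually mapped in appdata:

     chmod 600 /mnt/user/appdata/<container>/wireguard/wg0.conf

     The path here is a placeholder, not the exact location on my system.)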
  10. Thanks for this. I went through your steps and it got rid of the warning in the logs, but now I have an error and still can't access the web interface. The error in the logs is:

2020-12-02 18:00:18,435 DEBG 'start-script' stderr output: parse error: Invalid numeric literal at line 4, column 0

The only thing that differed from your instructions is that the new variable asked for both a VALUE and a DEFAULT VALUE. I put 'wireguard' in VALUE and left DEFAULT VALUE empty (I also tried putting 'wireguard' only in the DEFAULT VALUE field, with the same result).
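     (For anyone comparing notes: as far as I can tell, that template variable just ends up as an environment variable on the container, so the equivalent on a plain docker run line would look something like

     -e VPN_CLIENT='wireguard'

     The key name is my assumption of what the linked steps define for these VPN containers, so check that post rather than trusting my memory. Leaving DEFAULT VALUE empty should be fine, since the container reads the VALUE field.)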
  11. No, it's been running on port 8080 for months. I verified I'm hitting it at :8080 and that the config shows 8080. (see attached config)
  12. It does actually seem to be working, i.e. Sonarr passes the download client test. I just can't hit the web interface.
  13. Thanks, I went ahead and stepped through #19 and restarted the docker. I still can't load the webpage and the logs still show the same warning I posted above.
  14. I updated my SABnzbdVPN docker this morning and now I can't get the web interface to respond (it just times out on port 8080). I'm seeing the following in the logs:

2020-12-02 13:56:59 DEPRECATED OPTION: --cipher set to 'aes-128-cbc' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'aes-128-cbc' to --data-ciphers or change --cipher 'aes-128-cbc' to --data-ciphers-fallback 'aes-128-cbc' to silence this warning.

Could this be the reason I can't start it, and if so, how do I change it? I use PIA for the VPN and am running Unraid 6.9 beta 35.
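     (For reference, the warning itself spells out the fix: either add the legacy cipher to the data-ciphers list or declare it as the fallback. In the PIA .ovpn file inside the container's openvpn config folder that would be a line like

     data-ciphers AES-256-GCM:AES-128-GCM:AES-128-CBC

     or

     data-ciphers-fallback AES-128-CBC

     The exact .ovpn filename depends on which PIA endpoint is in use, so that part is just illustrative. It's only a deprecation warning, though, so it isn't necessarily what is blocking the web UI.)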
  15. I'm fairly certain I had no choice, i.e. everything was ghosted out and I could not start a rebuild onto the new drive until the 'format drive' box was checked. Oh well; fortunately I have the failing drive and should be able to recover the files from it.
  16. When I say I formatted it, Unraid forced the format. I.e. I powered down, installed the new drive, and when Unraid came back online the array was not started and it noted something along the lines of "to continue click here to format the drive". Once I did that, the rebuild began. This is the same process I've used multiple times to replace a failed drive.
  17. After my last parity check this past weekend I found that I had a drive marked 'drive disabled, contents emulated'. My SMART error count was high. I had a spare drive that I thought was good, popped it in, and began a rebuild; it stopped rebuilding about 2 minutes in and again put this replacement drive in 'drive disabled, contents emulated' mode. So I went out and bought a brand-new drive, installed it, formatted it, and the array rebuild kicked off. It ran for 10+ hrs, and the stats screen shows over 70,000 writes to the new drive. HOWEVER, the drive is entirely empty now that the rebuild has completed. There are no errors that I can see, and it reports the rebuild finished successfully, but not a single byte of data is sitting on this new drive. I don't get it. I'm assuming I've lost data, but I have no idea what. What are my next steps to examine what may have happened? I've attached my diagnostics file.
  18. I neglected to post my Unraid specs for reference:
     Core i7 930
     24 GB DDR3 RAM
     1x 960 GB Crucial SATA SSD
     8x 3.5" SATA drives, ranging from 2-6 TB each, 22 TB total
     1x parity drive (6 TB)
     Intel 1 GbE PCIe NIC
     Mellanox 10 GbE single-port card (installed in a 16x slot)
     2x Adaptec PCIe dual-port SATA cards
  19. Read speeds from the cache drive over 10 gig are where I'd expect them to be, i.e. 325-350 MB/s (the max read speed of the SSD on its current interface). It is writes where it all falls apart.
  20. I have a 10 gig Mellanox card installed in my Unraid box as well as in a Windows 10 PC. They are connected directly (no switch) with a 2 ft copper SFP+ cable. The Windows 10 PC has a Samsung NVMe SSD. When I start a file copy of a 10-30 GB file from the PC to the Unraid cache drive, it starts out running at 700+ MB/s for several seconds and then quickly drops: either it bottoms out at zero, where it will just sit as though it's been paused for 10-30 seconds until it starts up again, or it drops to around gigabit speeds (100ish MB/s) and continues the transfer at that speed (the latter is the more frequent scenario). I'd be happy to see 300-400 MB/s sustained, but most of the time I average worse than a gigabit connection.

I have a second network card in the Unraid box (gigabit), and copies to and from the cache drive using that card are solid and easily saturate my gigabit link without much fuss (112 MB/s). No bottoming out or other performance issues on the 1 gig link; it saturates every time.

I've tried three cache combinations on Unraid. I started out with four 120 GB SSDs in a cache pool, moved to a single 500 GB SSD, and am now on a single 960 GB (modern) SSD. The problem is identical with all of them: terrible 10 gig performance. At first I thought it was a RAM cache filling up, however I've noticed that it doesn't always start at 700 MB/s; sometimes it starts out really slow, 30-40 MB/s, chugs along there for a while, might jump up to somewhere between 200-700 MB/s briefly, then comes plunging back down again. As I'm writing this, I'm getting 32 MB/s writes on the 10 gig link. If I cancel it and start the same copy over my 1 gig connection, it's a solid 112 MB/s.

I've gone through post after post this morning and have tried numerous things, including setting my MTU to 9000 on both NICs, enabling Direct I/O, ensuring my SSD was trimmed, disabling any running dockers or VMs, replacing my SFP+ cable, and checking performance (RAM and CPU usage are low). Nothing has had the slightest impact. I'm running Unraid version 6.5.3 on a Core i7 930 and am out of ideas.
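     (One test not in the list above: isolating the raw network path from the disk. Something like iperf3 takes the SSD out of the equation entirely, assuming it's installed on both ends (e.g. via the Nerd Pack plugin on Unraid and the Windows build on the PC):

     # on the Unraid box
     iperf3 -s

     # on the Windows 10 PC, pointing at the Unraid 10 gig IP (substitute your own address)
     iperf3 -c 10.0.0.10 -t 30

     If that also can't hold multi-gigabit throughput, the problem is in the NIC/driver/cable path rather than the cache drive; if it reports a clean ~9 Gbit/s, the bottleneck is on the write path inside Unraid.)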
  21. I pulled my failing, unassigned SSD, and all errors except the kernel tainted errors were resolved. So the only errors left in the log are a bunch of these:

Jan 19 10:49:44 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G I 4.1.15-unRAID #1
Jan 19 10:49:44 zorg kernel: Call Trace:
Jan 19 10:49:44 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W I 4.1.15-unRAID #1
Jan 19 10:49:44 zorg kernel: Call Trace:
Jan 19 10:49:44 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W I 4.1.15-unRAID #1
Jan 19 10:49:44 zorg kernel: Call Trace:
Jan 19 10:49:44 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W I 4.1.15-unRAID #1
Jan 19 10:49:44 zorg kernel: Call Trace:
Jan 19 10:49:44 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W I 4.1.15-unRAID #1
  22. Yes, sde was part of the cache pool and needs to be replaced; currently it's just unassigned (not part of the pool). SMART is A-OK across the array.
  23. Thank you, I will look into that. Any comments on the errors above? Should I be concerned about these?
  24. Typically I access via UNC paths, but one of my VMs does have mapped drives, yes.
  25. I completed a reiserfsck on each of my drives; all report no corruption found. In looking at the system log that's accessible from the GUI under Tools, I am seeing a slew of errors. Any thoughts on the errors below? What are my next steps? After checking all the drives with reiserfsck and re-running my parity check (which again completes with zero errors), my files are still missing.

NEW DEVELOPMENT: Last night I went to watch something from a folder I was viewing 24 hrs prior, and it is gone as well. The folder and all sub-folders/files are just gone. Probably around 20 files in this case, simply gone. This is in addition to the files missing since I started this post.

Jan 17 10:51:58 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G I 4.1.15-unRAID #1
Jan 17 10:51:58 zorg kernel: Call Trace:
Jan 17 10:51:58 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W I 4.1.15-unRAID #1
Jan 17 10:51:58 zorg kernel: Call Trace:
Jan 17 10:51:58 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W I 4.1.15-unRAID #1
Jan 17 10:51:58 zorg kernel: Call Trace:
Jan 17 10:51:58 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W I 4.1.15-unRAID #1
Jan 17 10:51:58 zorg kernel: Call Trace:
Jan 17 10:51:58 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W I 4.1.15-unRAID #1
Jan 17 10:51:58 zorg kernel: Call Trace:
Jan 17 10:51:58 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W I 4.1.15-unRAID #1
Jan 17 10:51:58 zorg kernel: Call Trace:
Jan 17 10:51:58 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W I 4.1.15-unRAID #1
Jan 17 10:51:58 zorg kernel: Call Trace:
Jan 17 10:51:58 zorg kernel: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W I 4.1.15-unRAID #1
Jan 17 10:51:58 zorg kernel: Call Trace:
Jan 17 10:51:58 zorg kernel: ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x280000 action 0x6
Jan 17 10:51:58 zorg kernel: ata4.00: irq_stat 0x00020002, device error via SDB FIS
Jan 17 10:51:58 zorg kernel: ata4.00: error: { ICRC ABRT }
Jan 17 10:51:58 zorg kernel: ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x280000 action 0x6
Jan 17 10:51:58 zorg kernel: ata4.00: irq_stat 0x00020002, device error via SDB FIS
Jan 17 10:51:58 zorg kernel: ata4.00: error: { ICRC ABRT }
Jan 17 10:51:58 zorg kernel: ata18.00: exception Emask 0x1 SAct 0x0 SErr 0x0 action 0x0
Jan 17 10:51:58 zorg kernel: ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x200000 action 0x6
Jan 17 10:51:58 zorg kernel: ata4.00: irq_stat 0x00020002, device error via SDB FIS
Jan 17 10:51:58 zorg kernel: ata4.00: error: { ICRC ABRT }
Jan 17 10:51:58 zorg kernel: ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x280000 action 0x6
Jan 17 10:51:58 zorg kernel: ata4.00: irq_stat 0x00020002, device error via SDB FIS
Jan 17 10:51:58 zorg kernel: ata4.00: error: { ICRC ABRT }
Jan 17 10:51:58 zorg kernel: blk_update_request: I/O error, dev sde, sector 896
Jan 17 10:51:58 zorg kernel: blk_update_request: I/O error, dev sde, sector 896
Jan 17 10:51:58 zorg kernel: Buffer I/O error on dev sde, logical block 112, async page read
Jan 17 10:51:58 zorg kernel: blk_update_request: I/O error, dev sde, sector 512
Jan 17 10:51:58 zorg kernel: blk_update_request: I/O error, dev sde, sector 512
Jan 17 10:51:58 zorg kernel: Buffer I/O error on dev sde, logical block 64, async page read
Jan 17 10:51:58 zorg kernel: blk_update_request: I/O error, dev sde, sector 32
Jan 17 10:51:58 zorg kernel: blk_update_request: I/O error, dev sde, sector 32
Jan 17 10:51:58 zorg kernel: Buffer I/O error on dev sde, logical block 4, async page read
Jan 17 10:51:58 zorg kernel: blk_update_request: I/O error, dev sde, sector 4096
Jan 17 10:51:58 zorg kernel: Buffer I/O error on dev sde, logical block 512, async page read
Jan 17 10:51:58 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441407
Jan 17 10:51:58 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441407
Jan 17 10:51:58 zorg kernel: Buffer I/O error on dev sde1, logical block 234441344, async page read
Jan 17 10:51:58 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441408
Jan 17 10:51:58 zorg kernel: Buffer I/O error on dev sde1, logical block 234441345, async page read
Jan 17 10:51:58 zorg kernel: Buffer I/O error on dev sde1, logical block 234441346, async page read
Jan 17 10:51:58 zorg kernel: Buffer I/O error on dev sde1, logical block 234441347, async page read
Jan 17 10:51:58 zorg kernel: Buffer I/O error on dev sde1, logical block 234441348, async page read
Jan 17 10:51:58 zorg kernel: Buffer I/O error on dev sde1, logical block 234441349, async page read
Jan 17 10:52:02 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441472
Jan 17 10:52:02 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441472
Jan 17 10:52:02 zorg kernel: Buffer I/O error on dev sde, logical block 29305184, async page read
Jan 17 10:52:02 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441632
Jan 17 10:52:02 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441632
Jan 17 10:52:02 zorg kernel: Buffer I/O error on dev sde, logical block 29305204, async page read
Jan 17 10:52:02 zorg kernel: blk_update_request: I/O error, dev sde, sector 0
Jan 17 10:52:02 zorg kernel: blk_update_request: I/O error, dev sde, sector 0
Jan 17 10:52:02 zorg kernel: Buffer I/O error on dev sde, logical block 0, async page read
Jan 17 10:52:02 zorg kernel: blk_update_request: I/O error, dev sde, sector 8
Jan 17 10:52:02 zorg kernel: Buffer I/O error on dev sde, logical block 1, async page read
Jan 17 10:52:02 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441640
Jan 17 10:52:02 zorg kernel: Buffer I/O error on dev sde, logical block 29305205, async page read
Jan 17 10:52:02 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441640
Jan 17 10:52:02 zorg kernel: Buffer I/O error on dev sde, logical block 29305205, async page read
Jan 17 10:52:02 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441640
Jan 17 10:52:02 zorg kernel: Buffer I/O error on dev sde, logical block 29305205, async page read
Jan 17 10:52:02 zorg kernel: Buffer I/O error on dev sde, logical block 29305205, async page read
Jan 17 10:52:02 zorg kernel: Buffer I/O error on dev sde, logical block 29305205, async page read
Jan 17 10:52:02 zorg kernel: Buffer I/O error on dev sde, logical block 29305205, async page read
Jan 19 07:47:09 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441407
Jan 19 07:47:09 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441407
Jan 19 07:47:09 zorg kernel: Buffer I/O error on dev sde1, logical block 234441344, async page read
Jan 19 07:47:09 zorg kernel: blk_update_request: I/O error, dev sde, sector 234441408
Jan 19 07:47:09 zorg kernel: Buffer I/O error on dev sde1, logical block 234441345, async page read
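     (Going back to the reiserfsck pass mentioned at the top of this post: on Unraid that check is normally run with the array started in Maintenance mode, against each data disk's md device from the console, e.g.

     reiserfsck --check /dev/md1
     reiserfsck --check /dev/md2

     and so on, one per data disk slot. The --check flag is the read-only pass; --rebuild-tree is the destructive repair step and wasn't needed here, since every disk reported no corruption.)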