acosmichippo

Everything posted by acosmichippo

  1. Really depends on the cause. If it was planned maintenance or the decommissioning of a commonly used feature, then that should have been communicated to us ahead of time. If I had known this a few weeks ago I would not have renewed my yearly subscription. Also, it has now been over 24 hours; even if it was an unplanned incident, it seems to be taking them quite a while to fix.
  2. Anyone know why total RAM fluctuates? I would expect the sum of free + cached + used to always add up to my total available RAM (31.3GB in my case), but you can see here that it drops below 29GB around 4am.
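For what it's worth, dashboards usually build their free/cached/used series from /proc/meminfo, and the kernel tracks more categories than those three (buffers, slab, page tables, etc.), so the plotted series don't have to sum to MemTotal. A quick sketch to compare the fields on the server itself (assuming a standard Linux /proc):

```shell
# Print MemTotal plus the fields a typical dashboard graphs, in kB.
# Memory held in Buffers or Slab is counted in none of free/cached/used
# on some dashboards, which can make the visible sum dip below the total.
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached|Slab):' /proc/meminfo
```

If MemTotal itself stays constant while the stacked graph dips, the "missing" RAM is just sitting in one of the categories the graph doesn't plot.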
  3. Yep, Toronto is my default, but that is giving me exit code 56 as well.
  4. Whelp, it hung again after finishing 99% of the parity check (about 24 hours later). Didn't even get to resume Mover this time. So frustrating. I have it running memtest now, so we'll see what happens. I took a look at the syslog messages and didn't see much other than an apparent bad sector on sdd. I'm not sure what timezone the syslog timestamps are in, but I'm in the eastern US and the server crashed sometime around 14:00, yet the syslog entries don't go past 08:16. If someone has time to take a look that would be greatly appreciated. Thanks!

     Sep 5 16:50:08 unraid rsyslogd: [origin software="rsyslogd" swVersion="8.1908.0" x-pid="31634" x-info="https://www.rsyslog.com"] start
     Sep 5 19:54:24 unraid webGUI: Unsuccessful login user acosmichippo from 10.0.0.185
     Sep 5 19:54:34 unraid webGUI: Successful login user root from 10.0.0.185
     Sep 6 00:00:01 unraid Plugin Auto Update: Checking for available plugin updates
     Sep 6 00:00:02 unraid Plugin Auto Update: Community Applications Plugin Auto Update finished
     Sep 6 00:40:45 unraid webGUI: Successful login user root from 10.0.0.185
     Sep 6 03:00:01 unraid Recycle Bin: Scheduled: Files older than 30 days have been removed
     Sep 6 03:07:07 unraid kernel: mdcmd (46): spindown 7
     Sep 6 04:00:01 unraid Docker Auto Update: Community Applications Docker Autoupdate running
     Sep 6 04:00:01 unraid Docker Auto Update: Docker not running. Exiting
     Sep 6 04:00:02 unraid kernel: sd 1:0:2:0: [sdd] tag#818 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Sep 6 04:00:02 unraid kernel: sd 1:0:2:0: [sdd] tag#818 Sense Key : 0x5 [current]
     Sep 6 04:00:02 unraid kernel: sd 1:0:2:0: [sdd] tag#818 ASC=0x21 ASCQ=0x0
     Sep 6 04:00:02 unraid kernel: sd 1:0:2:0: [sdd] tag#818 CDB: opcode=0x42 42 00 00 00 00 00 00 00 18 00
     Sep 6 04:00:02 unraid kernel: print_req_error: critical target error, dev sdd, sector 974765264
     Sep 6 04:00:07 unraid sSMTP[13320]: Creating SSL connection to host
     Sep 6 04:00:07 unraid sSMTP[13320]: SSL connection using TLS_AES_256_GCM_SHA384
     Sep 6 04:00:07 unraid sSMTP[13320]: Authorization failed (535 5.7.8 https://support.google.com/mail/?p=BadCredentials q142sm8074581qke.48 - gsmtp)
     Sep 6 06:00:03 unraid emhttpd: shcmd (1986): /usr/sbin/hdparm -y /dev/sdd
     Sep 6 06:00:04 unraid root:
     Sep 6 06:00:04 unraid root: /dev/sdd:
     Sep 6 06:00:04 unraid root: issuing standby command
     Sep 6 08:16:33 unraid kernel: mdcmd (47): spindown 5
     Sep 6 08:16:36 unraid kernel: mdcmd (48): spindown 6

     edit: memtest completed 2 passes with no failures; rebooted into safe mode and letting the parity check run again.
  5. Hello all, this week I finally tried to resolve the high writes to my cache drive that a lot of people have been experiencing. I decided to reformat my cache as XFS following this guide: https://wiki.unraid.net/Replace_A_Cache_Drive (Docker and VMs are all disabled.)

     Everything went fine until running Mover the second time, to get the data back onto the newly formatted XFS cache. On the first attempt I noticed Mover seemed to be running VERY slowly (around 30MB/s), but I figured that was just because most of the 400GB is Plex metadata. So I let it go overnight and found the server unresponsive in the morning, no ping or anything. I don't keep the server in a place convenient to hook up a monitor, so I had to force a shutdown and take it to my desk to check it out. For some reason it would not boot; it just got stuck on a blinking cursor. This has never been an issue before and I did not change anything in the BIOS, but I decided to check the boot settings anyway. I put the USB stick first and disabled UEFI, and luckily that got it back up and running. Not sure why this suddenly became an issue, so I figured it was just a fluke.

     After I got it back up I saw it had only transferred about 40GB of data (about 10% of the total 400GB cache data). I let the automatic parity check run for over 30 hours (12TB parity), which repaired a few errors, and this morning I started Mover again to resume the transfer, and it has frozen AGAIN. This time it does ping, but the web UI is unresponsive and I can't log in via SSH. I hooked up a monitor before shutting it off, and there was no video output, so I had to hard-reboot again. Luckily the hard reboot worked without getting hung on a blinking cursor like last time. Turns out Mover has now done 300GB of data, so about 75% of the total 400GB. Gonna let the parity check finish again before doing anything else.

     Pretty sure it's not an overheating issue or anything. Mover is going quite slowly while CPU, RAM, and HDDs are barely utilized at all, and I've put the system under MUCH more abuse over the last 4-5 months running all kinds of dockers and a VM. Has anyone seen anything like this before? Any suggestions? I've attached diagnostic files in case anyone has time to take a look. Thanks for reading and thanks for any help!
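In case it helps anyone hitting the same wall: when a transfer is slow but CPU/RAM look idle, a quick way to check whether one disk (or its cable/controller) is the bottleneck is extended iostat while Mover runs. This is just a generic sketch, not anything Unraid-specific (iostat comes from the sysstat package):

```shell
# While Mover is running: a drive showing %util near 100 while only
# pushing ~30MB/s points at that drive or its controller/cable.
if command -v iostat >/dev/null; then
    iostat -x 1 2
fi

# Confirm the mover script itself is still alive (pattern is a guess at
# the process name; "|| true" so an empty match isn't an error):
ps aux | grep '[m]ove' || true
```

A single failing disk retrying reads can stall the whole array at low throughput without ever loading the CPU, which would also fit the sdd errors in the syslog above.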
  6. Hey all, does anyone have experience backing up to B2 on a gigabit connection? Mine seems to max out around 50Mb/s (bits, not bytes, to be clear). I also tested with CloudBerry and Backblaze's own speed test, which look similar. I've seen recommendations to increase the number of threads in the backup software; is that possible with Duplicati?
  7. I have a dumb question. I'm trying to look up the help text for the Folder Caching cache-pressure setting, but... where is the help? Thanks!
  8. Hi all, I tried searching for this but didn't see anything about it; apologies if I missed it. Is it possible to use Jackett as a default search provider for the dashboard? I see it's an option on the dashboard itself, but not an option for a default in Heimdall's settings. Thanks!
  9. Trying to get this working, but from my first bit of googling, it seems ich9 requires pulseaudio, which isn't working on unraid. Am I understanding that correctly?
  10. I know I can mount SMB shares, but I WANT to passthrough. And you don't need to put it in quotes as if I made it up, it's right there in the qemu wiki. I want to access the files in the shares, not unassigned devices.
  11. @Mattyfaz same process worked for me. Hopefully SIO can update the first post, there are a lot of people having to piece those three parts together by skimming this whole thread. Especially the OS section which was not in the YouTube video.
  12. Hey guys, I skimmed the whole post and saw a couple of questions about this, but no answers yet. I'm just trying to pass my unraid shares through to the VM:

      <filesystem type='mount' accessmode='passthrough'>
        <source dir='/mnt/user/'/>
        <target dir='unraid'/>
        <alias name='fs0'/>
        <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </filesystem>

      ...but in the VM I can't find the shares anywhere. I also found this post: But as far as I can tell this does not work in macOS. The tag "unraid" (as I specified in my XML file) is not available to mount in diskutil:

      erik@iMac ~ % diskutil list
      /dev/disk0 (internal, physical):
         #: TYPE NAME SIZE IDENTIFIER
         0: GUID_partition_scheme *2.1 GB disk0
         1: Apple_HFS macOS Base System 2.0 GB disk0s1
      /dev/disk1 (internal, physical):
         #: TYPE NAME SIZE IDENTIFIER
         0: GUID_partition_scheme *268.4 MB disk1
         1: EFI EFI 101.4 MB disk1s1
         2: Linux Filesystem 163.9 MB disk1s2
      /dev/disk2 (internal, physical):
         #: TYPE NAME SIZE IDENTIFIER
         0: GUID_partition_scheme *68.7 GB disk2
         1: EFI EFI 209.7 MB disk2s1
         2: Apple_APFS Container disk3 68.4 GB disk2s2
      /dev/disk3 (synthesized):
         #: TYPE NAME SIZE IDENTIFIER
         0: APFS Container Scheme - +68.4 GB disk3
            Physical Store disk2s2
         1: APFS Volume MacOS - Data 13.0 GB disk3s1
         2: APFS Volume Preboot 80.0 MB disk3s2
         3: APFS Volume Recovery 528.1 MB disk3s3
         4: APFS Volume VM 1.1 MB disk3s4
         5: APFS Volume MacOS 11.0 GB disk3s5

      Has anyone gotten local shares passed through successfully? Thanks!
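For reference, on a Linux guest that same <filesystem> stanza is mounted through the virtio 9p transport using the mount tag from <target dir='unraid'/>; as far as I know macOS ships no 9p client at all, which would explain why the tag never shows up to diskutil. A sketch of what the Linux-side mount looks like (the /mnt/unraid path is just an assumption for illustration):

```shell
# Inside a Linux guest: mount the virtio-9p export whose mount tag is
# "unraid" (from <target dir='unraid'/>). Needs root; mount point is
# arbitrary. This is a guest-VM command fragment, not runnable on the host.
mkdir -p /mnt/unraid
mount -t 9p -o trans=virtio,version=9p2000.L unraid /mnt/unraid
```

So for a macOS guest the practical options seem to be network mounts (SMB/NFS) rather than 9p passthrough.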
  13. Hi all, I'm familiar with qbt from Windows, but a bit new to unraid and Docker. I got this set up and working well with Radarr and Sonarr (all via unraid Docker), but I have one weird issue. I wish I could search this thread in case other people have seen this too, but I can't see a way to do that... Anyway, I can't seem to start any downloads from the Search page. The search runs fine and the download options window pops up, but when I click Download, nothing is added to the Downloads list. Is this a known issue, or does anyone have an idea of what's going on? Thanks for any help!