WackyWRZ

Members
  • Posts

    17
  • Joined

  • Last visited

About WackyWRZ

  • Birthday May 30

Converted

  • Gender
    Male
  • Location
    Raleigh, NC


  1. I've also noticed very odd behavior, and it doesn't work correctly after a restart unless PWM1 is set to "Enabled" in the plugin - even if it's not being used. I've had to enable PWM1 and leave its fields blank for it to work on reboot. Unfortunately I haven't been able to figure out what causes this. A few of us reported the NVMe ignore bug a while back. I'm not great at *nix scripting - or how to fix it - but I did narrow down WHY it happens. The problem is how the plugin interprets the exit code from sdspin. There was a problem a while back where it would spin up drives while polling them, so it now checks first to see whether a drive is spun up. Standard HDDs return a 0 or 1 - NVMe drives return something else, and the autofan plugin only looks for 0 or 1. Here's the bug report on GitHub: https://github.com/bergware/dynamix/issues/83
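     For anyone curious, the check boils down to something like the sketch below. This is my own illustration, not the actual plugin code - sdspin is stubbed out so you can see why a strict 0/1 test ends up dropping NVMe devices.

```shell
#!/bin/bash
# Illustration only - not the real autofan code. sdspin is stubbed so the
# exit-code logic can be followed without hardware: assume HDDs report 0/1
# and NVMe devices report something else, as described above.
sdspin() {
  case "$1" in
    /dev/sd*)   return 0 ;;  # HDD: 0 = spun up, 1 = standby
    /dev/nvme*) return 2 ;;  # NVMe: neither 0 nor 1
  esac
}

check_drive() {
  sdspin "$1"
  case $? in
    0) echo "poll" ;;     # spun up - safe to read its temperature
    1) echo "skip" ;;     # spun down - don't wake it
    *) echo "ignored" ;;  # anything else - autofan drops the device
  esac
}

check_drive /dev/sdb      # HDD -> poll
check_drive /dev/nvme0n1  # NVMe -> ignored
```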
  2. Thanks for putting this docker together! It's working great except for one thing I've noted: when I attempt to stop the container, it doesn't stop correctly. I get an "Execution Error" popup in the GUI about 1 minute after clicking stop (the timeout is 60s), and it shows "Exited (137)" as the status. It starts back up fine though, so I'm not sure anything is actually broken here.
  3. Somewhere along the line I lost the percentage next to the RPM in the footer for System Temp. In the syslog I can see: autofan: Highest disk temp is 36C, adjusting fan speed from: FULL (100% @ 2492rpm) to: 114 (44% @ 1354rpm) But in the Unraid footer I see the fan RPM with a 0% next to it all the time. Any suggestions on what to check? It looks like the System Temperature plugin is supposed to pull the percentage from autofan, but it's getting a 0.
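     If anyone wants to compare what autofan is actually logging against what the footer shows, the percentage can be pulled straight out of the syslog line. A rough one-liner - the sample line is copied from my syslog above, and the grep pattern is just my quick guess at grabbing it:

```shell
#!/bin/bash
# Pull the target fan percentage out of an autofan syslog line.
# Sample line taken from my syslog; adjust the pattern if yours differs.
line='autofan: Highest disk temp is 36C, adjusting fan speed from: FULL (100% @ 2492rpm) to: 114 (44% @ 1354rpm)'

# The last NN% on the line is the speed autofan is changing *to*.
pct=$(echo "$line" | grep -o '[0-9]\+%' | tail -1)
echo "$pct"
```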
  4. You need to set the "Minimum PWM Value" to something high enough to keep them spinning. That's how I fixed mine, you might have to play with the value to find what works. 255 is "Max" - 65 works well for my system.
  5. I had issues for months with backups on my Macs stopping or getting corrupted, and I blamed Acronis. Then I started having problems copying large files to my Unraid box from my Mac: file copies would hang and the SMB share would just disconnect after a short time. It would reconnect fine, until I started a copy again. At that point I figured it was a macOS issue. I came across the post below from MaxRad about Samba 4.15.0 issues in RC2 and downgraded to RC1. I don't know exactly what the Samba problem in 4.15.0 is, or whether it's Mac-specific, but I haven't had a single SMB issue or drop since downgrading 2 weeks ago.
  6. Found this thread while searching for the same issue I've been fighting with the two MacBooks in the household, so I figured I'd add what I've tried. One is always wired with a gigabit connection and the other is always on WiFi - both running Big Sur. I'm using TimeMachineEditor to limit backups over the network to 2x a day, to minimize the drives being spun up. I've also noticed the "bursty" networking mentioned earlier during a backup, and a 1GB backup takes upwards of 20-30 minutes. Even setting it to back up to cache, then running mover later, is painfully slow. The kicker is that if I copy files over SMB with Finder - to either a normal share or even a "Time Machine" share - speeds are great and consistent. In my research I've also seen folks with similar issues on Synology devices.
     I did come across this page https://osxdaily.com/2016/04/17/speed-up-time-machine-by-removing-low-process-priority-throttling/ which has a terminal command to turn off "throttling" of Time Machine backups. Setting it to "0" gives an extreme speed increase for Time Machine, especially on an initial backup. But after setting it back to "1" my incremental backups are still very slow, and it seems leaving it on "0" isn't necessarily recommended.
     I also have a few Windows devices which back up using Veeam beautifully - but Veeam's Mac support is in its infancy and doesn't support full system restores. I've been trialing Acronis True Image on the Macs - pointed at a standard SMB share on Unraid - for the past week, and it has been working great. The initial (250GB) backup took only about 45 mins, and the 2x daily incrementals finish in 10 mins or so. I hate having to use another 3rd-party tool for backups when Time Machine is built to do the job, but it feels like Apple just wants you to pay for iCloud storage and use that.
     If Acronis works out, there's usually a sale on it for $40 / 3 devices or less, so I'll just end up going that route. Coupled with the copy tests, and other NAS users also experiencing issues, it makes me wonder if this isn't so much a "Mac/Unraid" thing as it is Time Machine "working as designed".
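     For reference, the throttle toggle from the osxdaily link is a single sysctl, run on the Mac (not on Unraid). This is quoted from that article, so double-check it on your own machine:

```shell
# Disable macOS low-priority I/O throttling (speeds up Time Machine):
sudo sysctl debug.lowpri_throttle_enabled=0

# Re-enable the default afterwards, since leaving it off isn't recommended:
sudo sysctl debug.lowpri_throttle_enabled=1
```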
  7. Seeing similar behavior. If I stop the array and re-start it (without rebooting the host) my VMs won't start. As soon as I reboot the host and start the array, everything starts as expected. I get QEMU errors about being unable to reserve port 5700 when I try and manually start the VM. ASRock B365-Pro4 and i5-8400 unraidnas-diagnostics-20201028-0942.zip
  8. I'm seeing the same behavior on 6.9.0-b25. If I stop the array, make a change, and re-start it, all dockers that are supposed to autostart do start. I've got 2 VMs (one set to autostart) and that one doesn't start when I start the array.
  9. Thanks for looking at it saarg. I tried linuxserver/emby:amd64-beta and that gave me the correct 4.5.0.13 version. Tried just "beta" again and it rolled back to 4.4.3.0. I opened "advanced" in docker and hit "force update" on the beta tag and then it switched back to 4.5.0.13... Weird, dunno if there was something cached somewhere but that appears to have resolved it.
  10. Could someone tell me if I am doing something wrong trying to update the beta version on this docker? If I set the repository to: linuxserver/emby:beta it for some reason reverts me back to the 4.4.3.0 version of Emby. If I use linuxserver/emby:4.5.0.13-ls48 or linuxserver/emby:4.5.0.12-ls46 it will install the correct version. I'm definitely not a docker expert, but in looking at the "digest" numbers it seems like just the "beta" tag should be working.
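      If someone else hits this, one way to sanity-check it is to compare the digest your local :beta image points at against the pinned tag. A rough sketch - the sample output below is made up for illustration; on a real box you'd feed in the output of `docker images --digests linuxserver/emby` instead:

```shell
#!/bin/bash
# Made-up sample of `docker images --digests` output, just to show the
# comparison; real digests will differ.
sample='linuxserver/emby  beta           sha256:aaaa1111  2 days ago
linuxserver/emby  4.5.0.13-ls48  sha256:bbbb2222  2 days ago'

beta=$(echo "$sample"   | awk '$2=="beta"          {print $3}')
pinned=$(echo "$sample" | awk '$2=="4.5.0.13-ls48" {print $3}')

if [ "$beta" = "$pinned" ]; then
  echo "beta matches the pinned tag"
else
  echo "beta points at a different image"  # a stale local image would explain the rollback
fi
```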
  11. The script is on Page 7 of this post. Link I just have the code below run 2x a day from the "User Scripts" plugin. Change /dev/sdX to whatever your drive is and change SHARENAME to a share on the server.

```shell
#!/bin/bash
# Get the TBW of /dev/sdb
TBWSDB_TB=$(/usr/sbin/smartctl -A /dev/sdX | awk '$0~/LBAs/{ printf "%.1f\n", $10 * 512 / 1024^4 }')
TBWSDB_GB=$(/usr/sbin/smartctl -A /dev/sdX | awk '$0~/LBAs/{ printf "%.1f\n", $10 * 512 / 1024^3 }')
echo "TBW on $(date +"%d-%m-%Y %H:%M:%S") --> $TBWSDB_TB TB, which is $TBWSDB_GB GB." >> /mnt/user/SHARENAME/TBW_sdb.log
```
  12. I started monitoring with S1idney's script and saw almost 500GB/day writes on my drive. I originally switched from Plex official to lsio docker which reduced my writes dramatically to around 150GB/day which is still high. As others have said this isn't the fix, and I saw it start creeping up towards 500GB/day in about a week. Since I only have 1x SSD I decided to reformat to XFS instead of BTRFS earlier this week. I just looked at it today and with nothing else changing outside of the XFS format I'm only showing 20GB writes in total over the last week. Thank you to everyone's research so far in at least finding a workaround and saving my SSD from an early death! Hopefully this gets solved in an update soon.
  13. I tried just upgrading to 6.9b1 as part of my troubleshooting and it behaved exactly the same (all other things being equal).
  14. I've had problems with Realtek NICs not always being able to hit full gigabit speeds when heavily loaded. Intel and Broadcom are two widely used NICs in the enterprise server area and I've used both with unRAID without issue. You can usually pick up a PRO/1000 PT PCIe card on eBay for $15 or less. Sometimes you can find the newer Intel i350 cards for around $30. Either one would be more than good enough for an unRAID system. Just make sure it has the right height bracket on it!
  15. I should have been more specific - wasn't trying to say that's what everyone's issue is, just what I found on my machine.