Everything posted by CS01-HS

  1. I don't know if this is a bug but I wasn't sure where else to post it. I have two spindown problems in RC2 (I've set the default spindown to 15 minutes to help with testing): (1) Unraid tries to spin down the USB disk in Unassigned Devices (good!) but doesn't actually do it. That may be a problem with the drive's USB interface, so I've cron'd a user script to spin it down with hdparm -y /dev/sdb (which works). It looks like unRAID never detects that it's spun down (or a successful return from its own spindown call) because, absent drive activity, I see
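     The cron'd user script mentioned above might look roughly like this. A minimal sketch, not the poster's actual script: /dev/sdb is the USB disk from the post, and the hdparm -C pre-check is my addition so standby is only issued when the drive is actually spun up.

         #!/bin/bash
         # Sketch of a cron'd spindown helper. /dev/sdb is the example device
         # from the post; the hdparm -C state check is an assumption, added so
         # we only request standby for drives that are currently spun up.
         checked=0
         for dev in /dev/sdb; do
           # hdparm -C prints "drive state is:  standby" for a spun-down disk
           state=$(hdparm -C "$dev" 2>/dev/null | awk '/drive state/ {print $NF}')
           if [ "$state" != "standby" ]; then
             hdparm -y "$dev" >/dev/null 2>&1   # request immediate standby
           fi
           checked=$((checked + 1))
         done
         echo "checked ${checked} drive(s)"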
  2. Sorry about that. I meant it sincerely though, an unclean shutdown (and subsequent parity check) should be a big deal. Auto-pause mitigates one small consequence of that - the larger consequence is potential data loss.
  3. That's extreme. I think my initial backup over wireless averaged 7MB/s or at worst 5MB/s. 5MB/s (if my math's right) puts 850GB at around 2 days run time. Cache will function differently depending on the share's setting. If it's cache-yes, share data is written to cache and will remain there until mover's run at which point it's transferred to the (parity-protected) array. If new files are then written they'll be written to the cache and go through the same process. If it's cache-only, all share data, assuming it can fit, will remain permanently on the c
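     The 2-day figure checks out; a quick sanity check of the arithmetic (decimal units, 1 GB = 1000 MB):

         # 850 GB at a sustained 5 MB/s, as in the post
         size_mb=$((850 * 1000))       # 850 GB in MB
         rate_mb_s=5
         secs=$((size_mb / rate_mb_s))
         hours=$((secs / 3600))
         echo "${hours} hours"         # 47 hours, i.e. roughly 2 days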
  4. You can also pause it manually and parity-tuning will handle the resume. If unclean shutdowns are so frequent that the extra click is burdensome, you've got bigger problems.
  5. I'm running the original (more basic) version of this plugin with RC1 and two cache pools. So far so good.
  6. I think so but I'm running RC1 and telegraf and that combination prevents spindown. hdparm -y works for a time.
  7. I don't want to derail the OP's thread but do you mean this generally or specific to BTRFS/SSDs? Like the OP I have a BTRFS RAID1 cache pool so if the risk is low and in any case recovery unlikely it seems like real cost (half the space) for no real benefit.
  8. Instructions for an Email-to-Growl relay to get Growl notifications from any app that supports email alerts, like unRAID. I have this set up on a Pi, but any Debian-based (and maybe other) distro should work. This is a rough first draft, so if you're one of the two people who still use Growl and run into difficulty, let me know and I'll clean it up. NOTE: This assumes you have Growl set up and working on all client machines. I'm not sure where to download 2.x from anymore. If I find the installer I'll make it available. CAUTION: This assumes the target machine DO
  9. Great, added to config/go. Easy fix, thanks. The LED difference is still curious. I always assumed the "access" signal was caught when the MB (or card) detected HD access but this suggests it relies on the drive to say "I have been accessed."
  10. My array has 3 disks, all Seagate ST5000LM000. I have telegraf polling SMART data with a poll interval of 10 seconds. I noticed a chirping every 10 or so seconds, coincident with the drives' access LEDs flashing. I figured the access was due to telegraf and the chirping was the heads parking/un-parking. The SMART reports seem to confirm this, e.g.

      9   Power on hours    0x0032  094  094  000  Old age  Always  Never  5996 (173 29 0)
      193 Load cycle count  0x0032  001  001  000  Old age  Always  Never  454511

      That works out to 75 times/hour so it's probably been going on ever since
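      The 75/hour figure follows directly from those two SMART attributes:

          # Load cycles per power-on hour, using the raw values reported above
          load_cycles=454511
          power_on_hours=5996
          echo $((load_cycles / power_on_hours))   # → 75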
  11. My guess would be to preserve the NOCOW attributes. If your current pool is RAID1 you can use this easier method:
  12. I have a USB drive mounted with Unassigned Devices and 3 SATA disks in a 2nd pool that don't spin down on their own, so I wrote a script to spin them down which runs periodically. Is there a risk? I've been running it for months, but not on the array disks, which (prior to rc1) spun down on their own.
  13. Great. Also, in case it's new/useful info, when drives are spun down with hdparm -y the web interface shows them active despite smartctl and hdparm reporting standby.
  14. Alright, it seems that if drives are spun down by unraid, either manually with the web interface or automatically, calling smartctl wakes them up. If they're spun down with hdparm -y, calling smartctl does NOT wake them up. Is that intended behavior?
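      A monitoring job can sidestep the wake-up by checking drive state first. A minimal sketch, not anything from the report itself: /dev/sdb is an example device, and smartctl's -n standby flag makes smartctl itself bail out rather than wake a drive that reports standby.

          #!/bin/bash
          # Sketch: only poll SMART when the drive isn't already in standby.
          # /dev/sdb is an example; hdparm -C reports the current power state
          # without spinning the drive up.
          dev=/dev/sdb
          state=$(hdparm -C "$dev" 2>/dev/null | awk '/drive state/ {print $NF}')
          if [ "$state" = "standby" ]; then
            result="skipped"
          else
            # -n standby is belt-and-braces: smartctl exits early instead of
            # waking a drive that reports a low-power state
            smartctl -n standby -A "$dev" >/dev/null 2>&1
            result="polled"
          fi
          echo "$dev: $result"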
  15. I'm having the same problem - no spindown with telegraf running or any other app that calls smartctl. hdparm seems to be okay.
  16. This appears to be fixed in 6.9.0-rc1 but I base that only on one successful reboot. I'll keep monitoring and close the report if it doesn't reoccur.
  17. Any update on this? It's still a problem in 6.9.0-rc1
  18. 6.8.3 should work. The Spaceinvader tutorial (did you follow it?) uses that or an earlier version. Max throughput with SMB is fine - I can max out my wireless connection uploading to a share. It's small files, or whatever Time Machine does under the hood, that's painful. Try this benchmark for random read/write: https://www.katsurashareware.com/amorphousdiskmark/ I won't blame Limetech for the slowness until I see a network Time Machine implementation that isn't slow, and I haven't yet. Apple really dropped the ball on a clever and unique feature.
  19. I've been running beta35 since shortly after it was released. Now, for the first time, I see a continuous stream of the following error in syslog:

      Dec 9 10:44:24 NAS kernel: i915 0000:00:02.0: [drm] *ERROR* failed to enable link training
      Dec 9 10:44:28 NAS kernel: i915 0000:00:02.0: [drm] *ERROR* failed to enable link training
      Dec 9 10:44:31 NAS kernel: i915 0000:00:02.0: [drm] *ERROR* failed to enable link training

      It's not in any of my saved syslogs (syslog server) as far back as Sep 19, so beta25? I believe what triggered it was hardware transc
  20. I recently changed Error Logging from Syslog Only to Syslog and Output file. After a check I see the entries in syslog but can't find the "output file" - any idea where it's stored? (I saw a couple of posters ask but didn't see an answer.)
  21. Or maybe Extra Parameters because the iHD driver has to be disabled before startup? I went the script route because I'm not experienced with docker tweaks so if you figure it out let me know!
  22. I use the User Scripts plugin to run this script every hour - although maybe there's a more clever solution using Post Arguments in the docker template?

      #!/bin/bash
      # EmbyServer
      #
      # Verify it's running
      running=`docker container ls | grep EmbyServer | wc -l`
      if [ "${running}" != "0" ]; then
        docker exec EmbyServer /bin/sh -c "mv /lib/dri/iHD_drv_video.so /lib/dri/iHD_drv_video.so.disabled" 2>/dev/null
        if [[ $? -eq 0 ]]; then
          echo "EmbyServer: Detected iHD driver. Disabling and restarting EmbyServer..."
          docker restart EmbyServer
          echo "Done."
        fi
      fi
      exit 0
  23. Yes, but only by passing a whole controller, and performance is terrible, although that may be specific to my convoluted setup.

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </source>
        <alias name='hostdev1'/>
        <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </hostdev>

      I saw a recent post about usb/ip support in the next version which might be an opt
  24. I managed to cause the same error (and freeze my server) playing around with intel_gpu_top in the intel-gpu-tools container so my problem's at a lower level, your container's fine.