CS01-HS

Members
  • Content Count

    288
  • Joined

  • Last visited

Everything posted by CS01-HS

  1. Nice, thank you so much. Do you mind telling me how my diagnostics helped? Maybe there's a hidden UD log that could help me in the future.
  2. Sure, thanks. There's a bunch of spam in my syslog from an incompatible docker, let me know if you want me to clean it up. nas-diagnostics-20201123-1144.zip
  3. I may be an edge case, but in beta35 this (very handy) docker fills up my syslog with the following error until the system's overloaded:
     Nov 23 10:00:10 NAS kernel: bad: scheduling from the idle thread!
     Nov 23 10:00:10 NAS kernel: CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.8.18-Unraid #1
     Nov 23 10:00:10 NAS kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./J5005-ITX, BIOS P1.40 08/06/2018
     Nov 23 10:00:10 NAS kernel: Call Trace:
     Nov 23 10:00:10 NAS kernel: dump_stack+0x6b/0x83
     Nov 23 10:00:10 NAS kernel: dequeue_task_idle+0x21/0x2a
     Nov 23 10:00:10 NAS kernel: __schedule+0
  4. The latest version won't mount my SMB1 share. I think it might not be passing sec=ntlm, but I don't see a mount command in the logs. Any way to debug it? I've tried removing/re-adding the share with "Force all SMB remote shares to SMB v1" set to Yes (as I had it) and to No; neither works.
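Not the plugin's actual code path, but one way to narrow this down is to build the equivalent cifs mount command by hand and run it outside the plugin to see whether sec=ntlm is the missing piece. A sketch with placeholder server IP, share name, and credentials (this only prints the command; run it yourself once the values match your setup):

```shell
# All values below are hypothetical placeholders -- substitute your own.
SERVER="192.168.1.50"   # SMB1 device's IP
SHARE="scans"           # remote share name
OPTS="vers=1.0,sec=ntlm,username=guest,password="
CMD="mount -t cifs //$SERVER/$SHARE /mnt/remotes/$SHARE -o $OPTS"
echo "$CMD"
```

If the manual mount with vers=1.0,sec=ntlm succeeds where the plugin fails, that points at the options the plugin is (or isn't) passing.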
  5. I've updated the instructions to include temperature display.
  6. It happens on multiple computers, from Mojave to Catalina to Big Sur. It worked on all of them with 6.8.3. It broke sometime after I installed the first beta (beta25). I'm pretty certain that's the culprit - the question is whether it's a bug in the release or something went wrong during the install process, maybe something particular to my setup. To answer that it'd be helpful to know whether it's working with the beta for anyone on MacOS.
  7. Reasonable assumption, but from the same MacOS versions, searches on my Raspberry Pi share (with the standard packages) work properly. *Website is the Raspberry Pi share.
  8. In case it's not clear from my description, here it is in pictures: A search of "debian" in my isos share (which clearly contains it) returns no results. Searching the full name debian-10.3.0-amd64-netinst.iso also returns no results. This problem persisted through the upgrade from beta30 to beta35, and from Catalina to Big Sur.
  9. I'm also curious. This line in the release notes, best I can tell, doesn't suggest i915 wasn't included in previous releases, only that it's among those now included - otherwise the modprobe command wouldn't have worked in previous releases, right? If so, there's a new way to load it but the earlier way also works. I confirmed by upgrading, rebooting (without changing the go file) and testing the Emby docker.
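For context, the "earlier way" via the go file looks roughly like the sketch below. The exact lines vary by setup (the chmod and emhttp lines are the commonly posted boilerplate, not anything confirmed from this poster's flash drive), and the example writes to /tmp so it's harmless to run as-is:

```shell
# Stand-in for /boot/config/go on a stock Unraid flash drive (assumed path).
GO="/tmp/go.example"
cat > "$GO" <<'EOF'
#!/bin/bash
# Load the Intel iGPU driver so dockers like Emby can use /dev/dri
modprobe i915
chmod -R 777 /dev/dri
/usr/local/sbin/emhttp &
EOF
grep -c 'modprobe i915' "$GO"
```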
  10. Maybe an oversight by Apple, or maybe intentional. NAS is my unraid server. The closest supported server type I found was macpro-2019-rackmount, so I customize the smb.service with the following script:
      cp -u /etc/avahi/services/smb.service /etc/avahi/services/smb.service.disabled
      cp /boot/extras/avahi/smb.service /etc/avahi/services/
      chmod 644 /etc/avahi/services/smb.service
      touch /etc/avahi/services/smb.service.disabled
      Where /boot/extras/avahi/smb.service looks like:
      <?xml version='1.0' standalone='no'?><!--*-nxml-*-->
      <!DOCTYPE service-group S
  11. No. Just a simple search in the Finder window of a share.
  12. Still a problem in beta35. Am I the only one who searches SMB shares, or are others not experiencing this?
  13. After enabling some disk-related power saving features I occasionally see the error below in the logs. Is it anything to worry about? ata3 is a mechanical disk connected to my motherboard's ASM1062 controller. I don't see any indication of a problem except the log message.
      Nov 13 04:05:04 NAS kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
      Nov 13 04:05:04 NAS kernel: ata3.00: supports DRM functions and may not be fully accessible
      Nov 13 04:05:04 NAS kernel: ata3.00: supports DRM functions and may not be fully accessible
      Nov 13 04:05:04 NAS kernel: ata3.
  14. I do. Diagnostics and saved syslog attached. Parity check starts: 06-11-2020 07:35 finishes: 06-11-2020 21:01 You'll notice a lot of BTRFS messages. What caused the dirty shutdown was a system freeze while running rebalance. syslog-10.0.1.50.log.zip nas-diagnostics-20201107-1753.zip
  15. I thought the error might have been legitimate until I rebooted and it went away. Are the post-reboot diagnostics useful? If they are let me know and I'll upload them. EDIT: Looks like I changed the bug status. Sorry about that, not sure what I changed it from.
  16. 1. Force a dirty shutdown and the consequent parity check on startup
      2. Let the parity check complete
      3. Stop the array
      If you then start the array you're forced to run a parity check, because the completed run isn't recognized. A reboot gets around it.
  17. Ah I see, that's still good power savings. Yes, I have the J5005-mITX with 6 HDDs (4 on the card, 2 on the board) and 2 Cache SDDs. More information is linked in my signature. If you want to see great power savings look at this: https://translate.google.com/translate?sl=auto&tl=en&u=https%3A%2F%2Fwww.computerbase.de%2Fforum%2Fthreads%2Fselbstbau-nas-im-10-zoll-rack.1932675%2F He also posts here:
  18. I'm running a j5005 with an H1110 HBA (cut to fit the x1 slot) and 8 disks: 2 SSD Cache (on the integrated intel), 3 Array disks and a BTRFS Raid 5 Pool of 3 disks (all distributed between the integrated ASMedia and the HBA). It's not a performance server but I don't notice any slowness, although your idle wattage with the new setup (22W) is close to mine. Do you remember what your idle wattage was with the J4105 and HBA? Benchmarking the x1 controller in DiskSpeed does show limitations but not enough to affect usability (although if my controller ran 8 disks i
  19. I saw a few of these hard resetting link errors during my mover run. Thankfully (?) no CRC errors reported. ata3 is a spinning disk attached to an integrated ASM1062 controller. I wonder if it might be related to the power-saving tweaks, because nothing else changed. For now I've disabled them and will see if they reappear. Maybe coincidence, but I'm posting in case others have the same issue.
      Nov 3 23:17:30 NAS move: move: file /mnt/cache/Download/movie_1.mp4
      Nov 3 23:17:33 NAS kernel: ata3.00: exception Emask 0x10 SAct 0x80 SErr 0x4050002 action 0x6 frozen
      Nov 3 23:17:33 NAS
  20. I don't search often so I can't say for certain it's the beta but it used to work consistently and now it doesn't, even in safe mode. No results returned no matter how long I wait. I'm running the latest MacOS Catalina. I also tested in MacOS Mojave (unraid VM), same result. I have a Raspberry Pi shared over SMB where search from the same two clients works fine. Diagnostics from safe mode attached. nas-diagnostics-20201027-1319.zip
  21. I wouldn't take any meaningful risk to save 4W. Would you distinguish between any of the power-saving tweaks (SATA links, I2C, USB, PCI and increasing dirty_writeback) in terms of risk, assuming a UPS/no unexpected power loss?
  22. Your go file additions saved me also about 4W. And it might be coincidence, but my SSD temps dropped about 4°C post-tuning. I've never seen them so cool. There has to be some cost to this, no? I haven't noticed a difference, but nothing's free.
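For reference, go-file additions of the kind being discussed (SATA link power management, less frequent dirty writeback, USB/PCI runtime power management) look roughly like this. A sketch with assumed values, not the poster's exact file; whether each tweak is safe depends on your hardware:

```shell
# SATA link power management: DIPM lets idle links drop to a low-power state.
for p in /sys/class/scsi_host/host*/link_power_management_policy; do
  echo med_power_with_dipm > "$p"
done
# Flush dirty pages less often (kernel default is 500 centiseconds).
echo 1500 > /proc/sys/vm/dirty_writeback_centisecs
# Allow runtime autosuspend for idle USB and PCI devices.
echo auto | tee /sys/bus/usb/devices/*/power/control > /dev/null
echo auto | tee /sys/bus/pci/devices/*/power/control > /dev/null
```

The dirty_writeback increase is the one most often flagged as a trade-off: more unwritten data is at risk on a crash or power loss, which is why a UPS keeps coming up in this thread.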
  23. Sure maybe if/when it gets cleaned up. Just to be clear this isn't related to spin-up groups (which I don't use either), just standard drives. I wanted a way to easily track whether my drives were sleeping/waking too frequently. Someone cleverer might be able to integrate total wakes over the specified time range.
  24. I have a visually pleasing (but technically dirty) solution to my quest for a spin-up graph. I'm new to grafana and find it frustrating, so if anyone has improvements feel free to post them. This is the end result: Setup: 1. Start with a User Scripts script to track drive activity and temperature in influx, set to run every 5 minutes (borrowed from this php version). Replace every XX with your system's settings (default influx port is 8086):
      #!/bin/bash
      # User settings
      INFLUX_IP="XX"
      INFLUX_PORT="XX"
      HOSTNAME="XX"
      # Drive IDs (in /dev/disk/by-id
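The script above is cut off, but the core of this approach is just building an InfluxDB line-protocol string per drive and POSTing it to influx's /write endpoint. A minimal hypothetical sketch (measurement and field names are made up for illustration; the real values would come from hdparm -C and smartctl):

```shell
# Hypothetical values standing in for hdparm/smartctl output:
HOSTNAME="NAS"
DISK="sdb"
ACTIVE=1    # 1 = spinning (hdparm -C says "active/idle"), 0 = standby
TEMP=34     # temperature attribute from smartctl -A
# InfluxDB line protocol: measurement,tag=...,tag=... field=...,field=...
LINE="disk,host=$HOSTNAME,name=$DISK active=$ACTIVE,temp=$TEMP"
echo "$LINE"
# The real script would then POST each line, e.g.:
# curl -s -XPOST "http://$INFLUX_IP:$INFLUX_PORT/write?db=disks" --data-binary "$LINE"
```

Grafana then graphs the active field over time, which gives the spin-up/spin-down picture the post describes.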