Everything posted by landS

  1. Let me know what you find!
  2. The non-PWM hookup allows full RPM (at least I think it is full RPM), the old familiar hum, and disk temps under load in the low 40s :). No more email warnings about 56/57°C spikes! Also, I ordered 2 of the 2000 RPM high static pressure fans. Thanks again garycase
  3. Nice find on the 2000 RPM fan, garycase! The PWM controller is on its way; I'm going to give that a shot first, but will buy 2 of those fans for the significant static pressure improvement if the temps don't go back to their normal levels. The temp spike happened immediately after the 6.3.5 to 6.4 update, so I am also curious about the impact on PWM speed controls with this board. I checked this morning and both fans reported 350ish RPM with all the disks spun down.
  4. I am so thankful – my teething tot fell asleep by 8:40 tonight!
     The case housing the D525 is a Fractal Design R3, while the main server is in an R4. The R3 was purchased in 2011 and I believe the intake fans – Noctua NF-F12 PWM – have been in service ever since. I pulled the case down and confirmed that the cords are not using any of Noctua's bundled Low Noise Adapters. Luckily the X7SPA-HF has IPMI... where I could see that the system fan is running at 770 RPM, or half the 1500 RPM expected! But the IPMI's fan control was not accessible... too bad, as it is on the latest IPMI firmware.
     First I tried the Dynamix System AutoFan plugin (which appears to need Dynamix System Temp... which needs Perl... which needs NerdPack). After all of this, the System AutoFan plugin sadly does NOT recognize this board's PWM controller. So it was time to peek at these very forums for fan speed control... which does not appear to work under 6.4. https://lime-technology.com/forums/topic/10668-x7spa-hf-based-small-perfect-server-build/
     So for now I'll spring for a Noctua PWM controller: https://www.amazon.com/Noctua-NA-FC1-4-pin-PWM-Controller/dp/B072M2HKSN If this still does not solve the issue then it'll be time for fresh fans... I see some Noctua 3000 RPM PWM units. Odd that this did not crop up until the 6.3.5 to 6.4 update...
     edit: console Java redirection was not working... but I remembered a post from 2013 about a standalone KVM app that Supermicro provides, so I was going to check whether the BIOS setting for the fan is set to default... https://lime-technology.com/forums/topic/26551-supermicro-ipmi-view-and-ikvm-setup-blank-kvm-terminal-solution/ And... oh nuts, this too relies upon the latest Java runtime for KVM/console redirection. Back to using the PWM controller.
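     Since the X7SPA-HF's BMC exposes its sensors over IPMI, the fan readings above can also be pulled from a shell rather than the Java console – a minimal sketch using stock ipmitool (the IP and the ADMIN/ADMIN credentials are placeholders for your own BMC settings):
         # Query all fan sensors from the BMC over the network
         ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN sdr type Fan
         # Or locally on the server itself, with the IPMI kernel modules loaded
         modprobe ipmi_si ipmi_devintf
         ipmitool sdr type Fan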
  5. @garycase Sadly no. I had a failed fan, but it must have failed a long time ago. Replaced the fan, and under load (parity check, verify) the drives still creep up to 56°C (more slowly). They dip to 38°C at idle. The controller is a flashed Dell H310. I have 2 120mm Noctua intakes and 2 140mm Noctua exhaust fans. For now I'm considering turning off verify, and I've changed the spindown time from 30 min to 15. Parity check is set to monthly, and backup writes occur 1-2 times/week... so really the temp spike will be ~19 hours, 1 day/week. The disks spin down with no problem now (it was a plugin, though I've forgotten which). Oddly my main server has no drive temp difference – it sits on the shelf next to this unit and uses one of the same HGST drives, though on the onboard SATA controller. Both are dust free with clean air screens.
  6. Good day Crew, The server has been updated to 6.4 stable (from 6.3.5 stable) and has been running for 3 days now. Fix Common Problems just alerted me to an issue that I have never seen before: "Call Traces found on your Server". Diagnostics are attached. Any advice would be greatly appreciated! tower-diagnostics-20180116-1634.zip
  7. These are actually still in my Fractal Define, with 2 120mm intake fans blowing directly over the disks (which checked out fine) and 1 top/rear exhaust which has failed. For now the machine is powered down, and I ordered 2 140mm top-mount exhaust fans to replace the 1 failed rear 120mm. Another oddity that I'll need to track down – remembering that I have no exported shares – is that the disks refused to spin down on 6.4. I could manually trigger it, and within 30 seconds they'd all be back up again. I'd guess it's a plugin, but this machine does not have too many of those enabled. *oh how I love the GUI performance upgrade!
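     For anyone chasing the same symptom: a quick way to separate a chatty plugin from the array itself is to force the spin-down by hand and watch what wakes the disks – a sketch assuming Unraid's stock mdcmd and hdparm (the disk number and device name are examples):
         # Spin down array disk 1 via Unraid's md driver
         /usr/local/sbin/mdcmd spindown 1
         # Or put a specific device into standby and then check its power state
         hdparm -y /dev/sdb
         hdparm -C /dev/sdb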
  8. Ahh... screen is OK, but I love the GUI version! I use it on all mechanical hard drives – for myself, friends, family, and work – when the HDD is new... and again when disks go out of commission... I also use it for fresh Unraid disks.
  9. Even the transfer speeds to an SSD dropped greatly in comparison to v5 writes to an HDD. I posted some steps that could be taken on a Windows machine which allowed near-v5 write speeds (similar steps exist on Linux machines), but that's a pita and wasn't required on v5. Once I moved this to a pure backup NAS for my MAIN Unraid storage, I killed off sharing the shares, and now use UD to mount the MAIN shares and User Scripts to schedule rsync *backups* of the MAIN data. Doing this the speed doesn't matter, but pulling the files is significantly faster than pushing them... near-v5 write times! I was fortunate to run the old tunables script, which helped shave some time off the parity check. All my disks are HGST 4 TB units. Typically my HGST disks hit 48°C during a dual-drive parity check; they are all hovering at 50-52°C for this 18 hour check now that I'm on 6.4... so once it's done I'll need to pull the rig and check the fans to see if it's hardware related or if 6.4 pushes the disks harder. My house certainly is cooler this time of year and I've never seen a disk hit 50°C before. Everything should be dust free.
  10. Does the GUI terminal live in Unraid memory (aka a NerdPack screen replacement), or is it tied to the web session and cancels the running command if closed (aka an SSH replacement)? Thanks!
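      If it does turn out to be session-bound, the usual workaround applies – a minimal sketch using the screen utility (from NerdPack) so a long job survives a closed tab:
          # Start a named session and run the long job inside it
          screen -S backup
          # ... detach with Ctrl-A d; the job keeps running ...
          # Reattach later from any terminal
          screen -r backup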
  11. Smoking! The web GUI has always been a dog on my X7SPA-HF D525. Now it is more responsive than even my fastest Unraid rig on any prior version. Awesome work, folks. Running a backup script to this server now, then I'll manually trigger the mover followed by a dual-drive parity check. I'll update if any snags are hit. @garycase I think the folks just injected some major life support for the Atom... Thanks @limetech and @bonienl
  12. Disk tunables are a must – I'm looking forward to the updated script going public! This shaves a couple of hours off of the parity check. The biggest tweak for SMB writes from a Windows machine (and Linux, depending on what version of Samba is being used) actually requires some changes on the Windows machine itself! I was bitten by the Marvell bug after one of the 6.x updates, and swapped my SASLP card for an LSI. If memory serves, my dual parity drives and my cache drive are on the motherboard headers. I'm running Cache Dirs, Disk Integrity, Recycle Bin, UD, and User Scripts – that's near the limit. I actually no longer have any shares exported via SMB. Instead I have my primary tower's shares mounted via Unassigned Devices, and an rsync job in User Scripts that does not run on parity check day (see the sketch below)! Mover is set to once/day. As a backup NAS, this is still a great device.
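      The parity-check-day guard can also be made dynamic instead of calendar-based – a sketch of a User Scripts wrapper, assuming Unraid's mdcmd status output (mdResyncPos stays at 0 unless a parity operation is running; the rsync line is from my setup above):
          #!/bin/bash
          # Skip the backup while a parity check/rebuild is in progress
          if /usr/local/sbin/mdcmd status | grep -q "mdResyncPos=[1-9]"; then
              echo "Parity operation running - skipping backup"
              exit 0
          fi
          rsync -av --progress --delete-before /mnt/disks/TOWER_Docs/ /mnt/user/Docs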
  13. Tonight I tested blacklisting a PCIe card, which works just fine. Note that you must select a card that works with ODD/DVD/BD/etc... or flash a card to use IDE mode. The following works out of the box because of the chipset it uses (ASMedia ASM1062): StarTech.com 2 Port PCI Express SATA 6 Gbps eSATA Controller Card https://www.amazon.com/gp/product/B00952N2DQ Under flash I added vfio-pci.ids=(my [id] info from System Devices), and I also had to add iommu=pt.
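      For reference, both of those settings land on the append line of syslinux.cfg on the flash drive – a sketch with the ASM1062's usual vendor:device ID filled in (verify yours under Tools > System Devices, as my own ID is elided above):
          label Unraid OS
            menu default
            kernel /bzimage
            append vfio-pci.ids=1b21:0612 iommu=pt initrd=/bzroot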
  14. This makes sense to me... but what does not make sense is that under the Docker settings for storage I have /mnt/user... and under the WebGUI's Details/Manage Files I can back up to Root: /config /defaults /flash /home /lib64 /libexec /media /mnt ... but everything downstream of /mnt is 'broken/blank'.... edit: nm. Now I see a grey scroll bar in the GUI to get down to /storage – barely visible in my browser. Thank you greatly! (this GUI is a massive step backwards)
  15. Thanks a bunch for the assistance @Djoss! Originally everything appeared OK, having run the following (due to migrating from 'that other docker') prior to starting up for the first time:
      cp -a /mnt/user/appdata/CrashPlan /mnt/user/appdata/CrashPlanPRO
      "Just re-select your files, they are under /storage." Under the WebGUI's files I see all the same folders selected: Root>mnt>user CJK1, CompInstalls, etc. ...BUT the size shows a dash, as does the date modified, and a note out to the right indicates 'file is missing'. When I click into the folders, subfolders/files are all missing. Even if I must re-upload it all, that is OK, as it works out to only be 1 TB... Note that on the web portal, if I select any date prior to 2 days ago, all the files still exist for recovery. The log is attached: Log for_ CrashPlanPRO.html
      edit: under the original CrashPlan container I see the storage (/mnt/user) has read/write, while under the CrashPlan PRO container it is marked as read-only. This article highlights the symptoms I am experiencing: https://support.code42.com/CrashPlan/4/Troubleshooting/File_selection_shows_0MB_selected
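      That read-only volume was the thing to check here – in the Unraid Docker UI it is just flipping the /storage mapping's access mode from Read Only to Read/Write; the docker-run equivalent is a one-flag change (a sketch, assuming the /storage mount point and @Djoss's jlesage/crashplan-pro image):
          # Broken: /storage mounted read-only in the PRO container
          docker run -d --name CrashPlanPRO -v /mnt/user:/storage:ro jlesage/crashplan-pro
          # Working, matching the original container: read/write
          docker run -d --name CrashPlanPRO -v /mnt/user:/storage:rw jlesage/crashplan-pro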
  16. Oh boy... woke up this morning, all directories still selected, but now the web UI says the files are missing and zero size... and CrashPlan's website also indicates zero files available for backup. Bummer. My docker's storage location is /mnt/user and I have a few shares selected in CrashPlan for backup. It just... isn't recognizing any subfolders/files in these shares. Given that all the files disappeared from backup on the online portal, I deleted the appdata and restarted fresh. Still no joy. FYI, the online portal's restore/'show deleted' indicates the paths in light grey for yesterday and today... but they are not backing up. grr
  17. @Djoss another legacy CrashPlan docker user here, now on the PRO docker due to the auto-update ending in a black screen. No problems updating nor adding in the memory setting. Thanks!
  18. I would also be highly interested in a more elegant ODD SATA passthrough solution. @jonp @limetech Guys, anything better than the below for this? What I know works (a libvirt sketch follows this list):
      1. Pass through a controller that has been flashed specifically for ODD (issue: a manual flash is a pita, and it eats up an entire expansion slot on the mobo).
         - PCI: Syba SD-SATA-4P flashed to the b5500 BIOS (IDE), 4-port SATA for ODD, Sil3114 chipset
         - PCIe x1: StarTech 2-port SATA for ODD, ASM1062 chipset
      2. Pass through as a USB device (issues: doesn't always work – trying to pass through 2 optical drives this way and the VM doesn't want to start – and reads are slow).
         - Generic USB 2.0 to 7+6 13-pin slimline SATA laptop CD/DVD-ROM optical drive adapter cable. FYI, while the pictures of these tend to show 1 USB port, most take 2 (the extra dongle is for power). https://www.amazon.com/gp/product/B00C574Q5G
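      For the controller route, the flashed card ends up as a PCI hostdev in the VM's XML – a minimal sketch (the 03:00.0 address is a placeholder; use the bus/slot/function your card shows under System Devices):
          <!-- pass the whole SATA controller through to the guest -->
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <source>
              <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
            </source>
          </hostdev>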
  19. Good day @jonp – any hope that 6.4 is going to have the built-in VNC performance issues mitigated? Thanks!
  20. Many ways to skin this cat...
      Add the new disk to the array and MOVE the files from 1 disk to another, OR add the new disk via Unassigned Devices and COPY the files from 1 disk to another (which leaves a backup of the data on the failing HDD). From the terminal/SSH, screen, etc.:
      rsync -arv --progress /mnt/disk2/ /mnt/disks/DiskName/
      Use the appropriate disk numbers – in this example disk2 is your failing disk and DiskName is your new one. Note the trailing slash on the source (so the contents are copied rather than the folder itself), and that /mnt/disks/ is the *root* of the UD plugin while /mnt/ is the *root* of Unraid. Then remove the old disk from the array, run Tools/New Config, and add the new disk.
      You could instead add the disk directly to the array and follow the procedure in the RFS-to-XFS file system conversion guide in the wiki: https://wiki.lime-technology.com/File_System_Conversion
      Side note: my kiddo is rather ill atm, and as such my response may be rather slow in coming.
  21. Mount the new drive via the Unassigned Devices plugin.
      Use the rsync command, or the Midnight Commander GUI, to move everything from the old disk to the new one – I personally think rsync is easier.
      Stop the array. Remove the offending disk from its array slot assignment, add the replacement, run New Config, and start the array.
      Consider running preclear on disks before adding them to the array in the future, to help spot quality issues.
      Strongly consider a parity drive.
  22. Awesome Jonnie! rsync -av --progress --delete-before /mnt/disks/TOWER_Docs/ /mnt/user/Docs allows the correct date/timestamp to roll through! First update is passing through the backup data nicely.
  23. rsync -r -v --progress --delete -s /mnt/disks/TOWER_Docs/ /mnt/user/Docs
      Good day crew – the above command replaces EVERY file in TOWER2/Docs with every file from TOWER/Docs, but with a date/timestamp of the time of the transfer. I need to sync the share (delete files that no longer exist, update files that have changed, leave alone files that have not changed, copy new files/folders, etc.). Any help would be greatly appreciated.
  24. Can this script be duplicated for removing other items? For example, if I replace ".DS_Store" with ".trash-1000", will it remove the .trash-1000 folder and all subfolders/subfiles on each disk when present? As the .trash-1000 ends up in the root of any given share, can the maxdepth be set to 1? Thanks!
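      Not the plugin's actual script, but the general shape of that cleanup would be something like the sketch below – assuming .trash-1000 sits at the root of each share, which is depth 2 from /mnt/diskN (dry-run first, then delete; -prune keeps find from descending into what it just matched):
          # Dry run: list .trash-1000 folders at the root of every share, on every disk
          find /mnt/disk*/ -maxdepth 2 -type d -name ".trash-1000"
          # Once the list looks right, remove them
          find /mnt/disk*/ -maxdepth 2 -type d -name ".trash-1000" -prune -exec rm -rf {} +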
  25. I LOVE this plugin along with UD for automated backups! Thanks Squid
      TOWER is the primary storage device with SMB shares, some private. TOWER2 is a backup of TOWER, with no SMB/NFS shares – this also happens to be a lowly D525! All on TOWER2:
      Use the Unassigned Devices plugin to mount an SMB share from your Unraid source machine for each source to be backed up. On the Unassigned Devices settings page, set the SMB share to yes, hidden, fully private.
      Use the User Scripts plugin and add 2 scripts, which can then be scheduled or run on demand from the plugin:
      #Backup script to write once, never update
      rsync -r -v --progress --ignore-existing -s /mnt/disks/TOWER_Media/Movies/ /mnt/user/Media/Movies
      rsync -r -v --progress --ignore-existing -s /mnt/disks/TOWER_Media/Music/ /mnt/user/Media/Music
      #Backup to sync all
      rsync -av --progress --delete-before /mnt/disks/TOWER_Work/ /mnt/user/Work
      rsync -av --progress --delete-before /mnt/disks/TOWER_Docs/ /mnt/user/Docs
      (Note that my write-once-never-update shares are local files only, as the physical media is all in my crawl space; my sync-all shares have historical snapshots captured via the CrashPlan PRO docker on TOWER.)