RealActorRob

Everything posted by RealActorRob

  1. Ugh. I tried it several times this way with a few variations thrown in and even used 'defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1' Nada.
  2. All I had to do was click the red 'X' by preclear on the Main tab and FORMAT is now orange. Not using it just yet, but looks good to go if/when needed.
  3. I did 'Fix Preclear' on that tools page, and I'm running another preclear but 99.99% sure it wasn't interrupted last time. I also cleared all the preclear reports on that drive, but I suppose those are stored in appdata and not on the precleared drive. Hopefully a complete preclear will fix it, and why not run another one LOL.
  4. Interesting, this is two disks: the second one has no filesystem and neither does the top one. But one has a greyed-out mount option and the other doesn't. All I've done is preclears, so I wonder what I've done differently to the top one... confusing. Destructive mode is obviously enabled. After formatting the second disk: so it's like sdi is in a limbo where it doesn't have a filesystem, but I also never formatted it and the format button isn't available.
  5. Ok, what am I missing? 'Mount' is greyed out. All I want to do is mount/automount the unassigned disk. Not even share it. Just want to use it to play around with Chia mining. One thought: do I have to restart the array to get it to work?
  6. Will be doing another one if the drive passes another preclear or two and the extended SMART test passes. I have a WD Red CMR NAS on order to replace it. Edit: Seagate Ironwolf NAS was a little cheaper and a little higher rated, review-wise.
  7. I'm doing a preclear at the same time as an extended test, which should hammer this drive pretty well.
  8. It wasn't, and I turned FTP off. It had no users enabled and the firewall wasn't opened, so it was pretty secure, but: one less thing. Thought I'd turned off FTP already... might have been doing something with it and forgot to re-disable. I also turned off the mover log to reduce log file size, since mover isn't having issues ATM.
  9. Found this on the device page under SMART log:

     ATA Error Count: 1
     CR = Command Register [HEX]
     FR = Features Register [HEX]
     SC = Sector Count Register [HEX]
     SN = Sector Number Register [HEX]
     CL = Cylinder Low Register [HEX]
     CH = Cylinder High Register [HEX]
     DH = Device/Head Register [HEX]
     DC = Device Command Register [HEX]
     ER = Error register [HEX]
     ST = Status register [HEX]
     Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss
     where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec.
     It "wraps" after 49.710 days.

     Error 1 occurred at disk power-on lifetime: 54381 hours (2265 days + 21 hours)
     When the command that caused the error occurred, the device was active or idle.

     After command completion occurred, registers were:
     ER ST SC SN CL CH DH
     -- -- -- -- -- -- --
     10 51 08 70 c9 a5 02  Error: IDNF

     Commands leading to the command that caused the error were:
     CR FR SC SN CL CH DH DC  Powered_Up_Time  Command/Feature_Name
     -- -- -- -- -- -- -- --  ---------------  --------------------
     2f 00 01 10 00 00 00 00  3d+07:43:39.996  READ LOG EXT
     60 20 08 e8 93 6d 40 00  3d+07:43:39.996  READ FPDMA QUEUED
     60 00 00 e8 8f 6d 40 00  3d+07:43:39.996  READ FPDMA QUEUED
     60 b0 00 38 8c 6d 40 00  3d+07:43:39.991  READ FPDMA QUEUED
     60 00 00 38 88 6d 40 00  3d+07:43:39.986  READ FPDMA QUEUED
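The figures in that log are easy to sanity-check. A quick Python check of the power-on lifetime conversion, plus the 49.710-day wrap, which works out if Powered_Up_Time is the usual 32-bit millisecond counter (my assumption, not stated in the log):

```python
# Sanity-check the numbers in the SMART error log above.

# Power-on lifetime: 54381 hours should equal 2265 days + 21 hours.
days, hours = divmod(54381, 24)
print(f"{days} days + {hours} hours")  # → 2265 days + 21 hours

# "Wraps after 49.710 days" matches a 32-bit millisecond counter:
wrap_days = 2**32 / 1000 / 86400
print(f"wraps after {wrap_days:.3f} days")  # → wraps after 49.710 days
```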
  10. It's from Automounter, which helps keep SMB shares from dropping and/or it's a discovery of services from same. It's helped with my SMB issues doing backups on Mac OS X Mojave.
  11. It's a 4TB I've run other tests on, and all turned out fine. I replaced my working 3TB Parity 2 drive with it for future upgrades. So, this drive got x'd during a rebuild of parity onto it. The drive is in a disk tray on a server, so I doubt this is a 'cable' issue. Preclear went normally. The rebuild had 1 read error, but SMART doesn't show ANY read errors or fails. Running another preclear and an extended test now. A short time ago, at 800 hrs, this also passed an extended test. Diags attached. Thoughts? To sum up: other than the 1 read error on the parity rebuild, I don't see ANY indication of drive problems. I'm sure this is an ex-data-center drive, noting the power-on hours vs. power cycle count. blacktower-diagnostics-20211213-1224.zip
  12. Votes for:

      1. Algorithmic caching of files, perhaps saying "use 50% of an SSD for this." A FIFO: older files get aged out and written to the array. Drobos have a cache "accelerator" bay that presumably does this. Hybrid drives have this too; i.e., the Windows OS probably lives on the SSD portion of the drive since it gets read and written the most.
      2. A certain % of fullness at which the mover triggers, vs. being time-based.
      3. Autobackup to the array of VMs etc. that are on the cache drive. And shouldn't this function really be renamed, since it isn't really caching? It's just fast storage.
      4. Perhaps an asynchronous mirror of the cache drive; i.e., the cache drive is priority read/write, then the SSD gets sync'd to its mirror as speed allows. Example: I could have a 1TB SSD and a 1TB 5400 drive in the array that is a mirror of the SSD. No problem as long as the drive can catch up eventually.
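Suggestions 1 and 2 combine naturally: trigger the mover on a fullness threshold and evict oldest-first until usage drops back under it. A hypothetical sketch of that policy (names and numbers are mine, not actual Unraid mover internals):

```python
# Toy FIFO cache-eviction policy: when fullness crosses a threshold, move
# the oldest files to the array until usage is back under the target.
# Illustrative only — not how the real mover is implemented.

def age_out(files, capacity, target_pct):
    """files: list of (name, size, mtime). Returns names to move to the array."""
    used = sum(size for _, size, _ in files)
    to_move = []
    # Oldest first — FIFO by modification time.
    for name, size, mtime in sorted(files, key=lambda f: f[2]):
        if used <= capacity * target_pct / 100:
            break
        to_move.append(name)
        used -= size
    return to_move

cache = [("vm.img", 40, 100), ("movie.mkv", 30, 50), ("doc.txt", 5, 200)]
# 100GB cache at 75% full, target 50%: the oldest file gets aged out.
print(age_out(cache, 100, 50))  # → ['movie.mkv']
```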
  13. Same drives, same USB stick. I just moved from an X3440 Xeon T310 to an R720XD (Intel Xeon E5-2630 0 @ 2.30GHz), 16GB RAM vs 32GB RAM in the 720. All that said, my parity checks are RADICALLY faster in the 720. Two parity disks, but the X3440 was never above 40% usage or so, so I can't figure out why the older box was so much slower, SOMETIMES. Just put the drives in the 720 and I'm getting 67.5MB/s about 15% in to the check. The 1 error is the same uncorrected one, on one of the parity drives, so I'm just running another check due to the drive move. On the 9th, tho, it ran at 95MB/s (!!), but the speeds are all over the map. Just a puzzler. Before I shut it down it had an uptime of like 3 weeks, and that has been because of power issues. The UPS battery just failed today, so it evidently was erroneously reporting good... which led to the power issues and, I'm sure, the parity issues. And per advice here, now I only do the parity check once per month. I'll probably correct that 1 error on the Q drive, IIRC, the next time I run it.
  14. You've got ALL new drives? I'd just move the existing drives (backup beforehand if you can) and then replace them later one by one after preclearing or doing some sort of burn-in. Then if the drives are really new you don't have to worry about if 1 or 2 die relatively quickly, as occasionally happens. I like to have drives of different brands/ages/etc. as well in case there's a bad lot. I run 2 parity drives as well, since some of my drives are older.
  15. Bummer. I suppose there's a way to autosync the files but it wouldn't be quite the same...
  16. I.e., if I have an SSD of 120GB and a hard drive of 120GB, will the pool operate at the speed of the slower device, or will it write to the SSD and then finish the write to the hard drive later? I'd like redundancy on the cheap. It's not a ton of data at the moment, so in reality it's not going to flood the hard drive. Use case: I have a 120GB SSD and several 320GB WD Green drives I could throw in to mirror it. Although, yes, I should just buy a cheap SSD and probably will, this is a bit academic. But it could come into play, say, if I wanted to save a little and had a 512GB NVMe mirrored by a 1TB drive.
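For what it's worth, a toy model of the question, assuming the pool is a synchronous mirror like the btrfs RAID1 profile Unraid cache pools use (the speeds and function name below are illustrative, not benchmarks):

```python
# Simplified throughput model: a synchronous mirror must commit each write
# to both members, so sustained write speed is bounded by the slower device.
# Numbers are made-up examples, not measurements.

def sync_mirror_write(ssd_mb_s, hdd_mb_s):
    # No write-behind: the pool can only acknowledge at the slower pace.
    return min(ssd_mb_s, hdd_mb_s)

print(sync_mirror_write(500, 120))  # → 120, i.e. the HDD's pace
```

As far as I know btrfs RAID1 has no "catch up later" mode, so the write-behind behavior described in the post would need a separate caching layer (bcache is the usual example) rather than a plain mirrored pool.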
  17. Yup, did that. Now to see if my format is correct to exclude the other node. It'd be nice just to have checkboxes for drilling down to subdirs.
  18. I'm running two storj nodes and that's where all my 'errors' are reported. The exclusions in settings are for top level folders, so now I have to figure out how to exclude a subfolder.... Can I request that feature or am I missing something?
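I don't know the plugin's exclusion syntax either, but conceptually a subfolder exclusion is just pruning that path during the scan walk so nothing under it is ever visited. A hypothetical Python sketch (paths and names are made up, not plugin configuration):

```python
# Illustrative only: skip an excluded subfolder while walking a share,
# which is effectively what a per-subfolder exclusion would do.
import os

def walk_excluding(root, excluded):
    """Yield every file under root, never descending into excluded dirs."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune in place so os.walk doesn't recurse into excluded paths.
        dirnames[:] = [d for d in dirnames
                       if os.path.join(dirpath, d) not in excluded]
        for name in filenames:
            yield os.path.join(dirpath, name)
```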
  19. This *MAY* be Automounter. A tool that keeps shares from disconnecting. Not sure why it would show vsftpd connections other than maybe it's scanning something....
  20. I turned off the FTP server. Not sure why it was on. There were no usernames in the "FTP user(s):" slot; blank. It says user access is disallowed, but if I enable the server with no usernames, I still get connections from an internal machine:

      Nov 9 19:02:42 BlackTower ool www[6129]: /usr/local/emhttp/plugins/dynamix/scripts/ftpusers '1' ''
      Nov 9 19:02:49 BlackTower vsftpd[8488]: connect from 192.168.1.105 (192.168.1.105)
      Nov 9 19:02:53 BlackTower ool www[6129]: /usr/local/emhttp/plugins/dynamix/scripts/ftpusers '0' ''
      Nov 9 19:14:59 BlackTower ool www[2467]: /usr/local/emhttp/plugins/dynamix/scripts/ftpusers '1' ''
      Nov 9 19:15:12 BlackTower vsftpd[25242]: connect from 192.168.1.105 (192.168.1.105)
      Nov 9 19:15:27 BlackTower vsftpd[26488]: connect from 192.168.1.105 (192.168.1.105)
      Nov 9 19:15:32 BlackTower ool www[26467]: /usr/local/emhttp/plugins/dynamix/scripts/ftpusers '0' ''

      Am I hacked? Scanning for viruses/malware now and installing Little Snitch on my Mac Pro, which is the machine it's coming from.
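For anyone else triaging this kind of thing, a quick way to pull the connecting hosts out of a syslog snippet is to match the vsftpd "connect from" lines (the sample lines below are copied from the log above):

```python
# Extract the unique hosts opening vsftpd sessions from a syslog excerpt.
import re

log = """\
Nov 9 19:02:49 BlackTower vsftpd[8488]: connect from 192.168.1.105 (192.168.1.105)
Nov 9 19:15:12 BlackTower vsftpd[25242]: connect from 192.168.1.105 (192.168.1.105)
Nov 9 19:15:27 BlackTower vsftpd[26488]: connect from 192.168.1.105 (192.168.1.105)
"""

ips = re.findall(r"vsftpd\[\d+\]: connect from (\S+)", log)
print(sorted(set(ips)))  # → ['192.168.1.105']
```

Here a single internal address shows up, which fits the post's conclusion that the traffic came from the Mac Pro rather than an outside attacker.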
  21. I don't spin down disks; it would probably increase wear if I did, because my two storj nodes span disks. If storj takes off, it'd be somewhat harder to increase storage, I'd think, if I limited it to certain disks. One thing that would make this easier is if Unraid had a feature to unspan directories. It would also help to be able to remove a disk without losing parity.
  22. The complete File Integrity hash build should be a good test to see if anything's truly wrong since that'll read every file on the array, correct? Backups to the NAS have seen no big issues, so that's good.
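Correct: a full hash build has to read every file end to end, so it doubles as an array-wide read test. A minimal sketch of the idea (Dynamix File Integrity's actual implementation and hash choice may differ; this is just the concept):

```python
# Hash every file under a root — any unreadable file surfaces as an I/O
# error during the read, which is what makes a hash build a read test.
import hashlib
import os

def hash_tree(root):
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so huge files don't need to fit in RAM.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            hashes[path] = h.hexdigest()
    return hashes
```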
  23. SUGGESTION: We need a good guide in the FAQ for handling parity issues, some sort of flow to follow. First, there were power issues, and though I have a UPS, I can't be sure every shutdown was clean; IIRC, some weren't. As of now, UPS load is around 150W with a runtime of 40+ minutes, so shutdowns SHOULD be graceful going forward... can't recall what load I had before, been doing some tweaks. (Crypto mining.) No SMART errors, but I'm running extended tests on all my drives. Dynamix File Integrity is building now; I've been meaning to do that. Sorry, there's no log yet, since I've had to shut down for power issues since the last parity error. The schedule is to run on Sundays. In October, the Sundays were the 3rd/10th/17th/24th/31st. So, assuming the 16th was a power issue, maybe I accidentally ran a correcting check on the 25th, but that means 7 errors popped up on the 29th. Yet none appeared between the 29th and 31st. The 16th must have been power. My server has ECC, so I'm guessing this is not a RAM issue but a dirty-shutdown issue. I have two parity disks. When File Integrity is done, I'll run a check and get some logs. Log:
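One detail in that post is easy to double-check. Assuming this was October 2021 (my inference from the December 2021 diagnostics filename elsewhere in the thread), the calendar agrees with the Sunday dates worked out by hand:

```python
# Verify the Sundays of October 2021 against the post's 3rd/10th/17th/24th/31st.
from datetime import date

sundays = [d for d in range(1, 32) if date(2021, 10, d).weekday() == 6]
print(sundays)  # → [3, 10, 17, 24, 31]
```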
  24. Got it running. Notes and a YouTube video to follow. Using XMRig, CPU-only, just to get it going. First note: 3GB of RAM is enough; have 300+MB left over. I suppose this might not be enough for GPU mining, but it's fine for CPU XMR mining.