micokeman

Members
  • Posts: 59

Everything posted by micokeman

  1. Hello, I'm not sure if anyone can help, but I just updated to the latest build (I believe I was on build 5.1.0). I was able to transfer 4TB from one drive to another three days ago, but after the update, transferring no longer seems to happen. I select one drive in the 'FROM' column and one drive in the 'TO' column and then select all the folders. I go to the next stage and the page loads with all the stats, but the transfer does not seem to be happening; it has no 'end-time'. Last time I did this I immediately saw '28 hours' to go. Now when I try to go to the unRAID main page or the unBalance page ("https://10.10.10.201:6237" or "https://10.10.10.201") I get "The connection has timed out". This has happened twice already; this last time was after a server reboot.
  2. I thought you just edited the title and added [SOLVED] in the front of it.
  3. Thank you! There wouldn't be much writing. Actually, there would be some deleting of duplicate files (same content, but named differently), but not adding.
  4. I am looking to replace/upgrade a current drive from 3 TB to 5 TB. Due to a system freeze (which is another story) the machine had to get rebooted, so it started a parity "check nocorrect". The check returned this:

     Feb 27 12:57:15 VVData kernel: md: recovery thread: PQ incorrect, sector=18302032
     Feb 27 12:57:15 VVData kernel: md: recovery thread: PQ incorrect, sector=18302096
     Feb 27 12:57:15 VVData kernel: md: recovery thread: PQ incorrect, sector=18302104
     Feb 27 12:57:22 VVData kernel: md: recovery thread: P incorrect, sector=18670056
     Feb 27 12:57:22 VVData kernel: md: recovery thread: P incorrect, sector=18670064
     Feb 27 12:57:22 VVData kernel: md: recovery thread: P incorrect, sector=18670072
     Feb 27 12:57:22 VVData kernel: md: recovery thread: P incorrect, sector=18670080
     Feb 27 12:57:22 VVData kernel: md: recovery thread: P incorrect, sector=18670088
     Feb 27 12:57:22 VVData kernel: md: recovery thread: P incorrect, sector=18670096
     Feb 27 12:57:22 VVData kernel: md: recovery thread: P incorrect, sector=18670104
     Feb 27 12:57:22 VVData kernel: md: recovery thread: P incorrect, sector=18670112
     Feb 27 12:57:22 VVData kernel: md: recovery thread: P incorrect, sector=18670120
     Feb 27 12:57:22 VVData kernel: md: recovery thread: P incorrect, sector=18670128

     It finished last night with this:

     Feb 28 18:53:45 VVData kernel: md: sync done. time=107974sec
     Feb 28 18:53:45 VVData kernel: md: recovery thread: completion status: 0

     I then came in this morning expecting to replace the drive and found the parity check at 30.5%! After flipping out and then realising that it was doing its monthly check, I looked at the logs and found this:

     Mar 1 00:00:01 VVData kernel: mdcmd (58): check
     Mar 1 00:00:01 VVData kernel: md: recovery thread: check P Q ...
     Mar 1 00:00:01 VVData kernel: md: using 2304k window, over a total of 4883770532 blocks.
     Mar 1 00:00:04 VVData kernel: md: recovery thread: P corrected, sector=60672
     Mar 1 00:00:04 VVData kernel: md: recovery thread: P corrected, sector=60680
     Mar 1 00:00:04 VVData kernel: md: recovery thread: PQ corrected, sector=60952
     Mar 1 00:03:24 VVData kernel: md: recovery thread: PQ corrected, sector=18302032
     Mar 1 00:03:24 VVData kernel: md: recovery thread: PQ corrected, sector=18302096
     Mar 1 00:03:24 VVData kernel: md: recovery thread: PQ corrected, sector=18302104
     Mar 1 00:03:28 VVData kernel: md: recovery thread: P corrected, sector=18670056
     Mar 1 00:03:28 VVData kernel: md: recovery thread: P corrected, sector=18670064
     Mar 1 00:03:28 VVData kernel: md: recovery thread: P corrected, sector=18670072
     Mar 1 00:03:28 VVData kernel: md: recovery thread: P corrected, sector=18670080
     Mar 1 00:03:28 VVData kernel: md: recovery thread: P corrected, sector=18670088
     Mar 1 00:03:28 VVData kernel: md: recovery thread: P corrected, sector=18670096
     Mar 1 00:03:28 VVData kernel: md: recovery thread: P corrected, sector=18670104
     Mar 1 00:03:28 VVData kernel: md: recovery thread: P corrected, sector=18670112
     Mar 1 00:03:28 VVData kernel: md: recovery thread: P corrected, sector=18670120
     Mar 1 00:03:28 VVData kernel: md: recovery thread: P corrected, sector=18670128
     Mar 1 00:03:28 VVData kernel: md: recovery thread: PQ corrected, sector=18679960
     Mar 1 00:03:28 VVData kernel: md: recovery thread: PQ corrected, sector=18679968
     Mar 1 00:03:28 VVData kernel: md: recovery thread: PQ corrected, sector=18679976
     Mar 1 00:03:28 VVData kernel: md: recovery thread: PQ corrected, sector=18679984
     Mar 1 00:03:28 VVData kernel: md: recovery thread: PQ corrected, sector=18679992
     . . .
     Mar 1 04:39:38 VVData kernel: md: recovery thread: stopped logging

     My questions are:
     • Should I rerun the parity check with correct enabled before I replace the drive?
     • Can I write data to the server while the parity check is running?
     • If I do write data to the server while the parity check is running, won't the parity check be off (since new data is now on the data drives)?
     • Should I wait to replace a drive until there are zero errors after a parity check?

     Thank you for any help!
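The log above shows the check being launched via mdcmd (`mdcmd (58): check`). As a hedged sketch of the two check variants discussed in this post (the exact syntax is assumed from unRAID 6 documentation and should be verified against your build; the commands are only printed here, not executed, so nothing touches a real array):

```shell
#!/bin/sh
# Print (do not run) the two forms of a manual unRAID parity check.
# Syntax is an assumption based on unRAID 6 docs; verify before use.
print_parity_cmds() {
  echo "/root/mdcmd check NOCORRECT   # report-only, like the reboot-triggered check"
  echo "/root/mdcmd check             # correcting check (the default)"
}

print_parity_cmds
```

The distinction matters for the question above: a NOCORRECT run only counts sync errors, while the default run rewrites parity to match the data drives.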
  5. Excellent point! I never really thought about that. As it turns out, I have never purchased more than 3 drives in one shot, so I have been adhering to this without knowing I should, but thank you for that excellent pointer! Good tips there as well! I'm trying to decide how to go about both upgrading our current system and deciding whether we should get a whole new system for backup, or get a new system and use the current one for backup. I'll know by the end of the week.
  6. Thank you for your feedback! I will probably go with the 8TB drives. Not sure if Red or Black yet. "Rules"? What "rules"?!? Sorry, I really can't think of what you might be referring to. Perhaps, but you lose the expandability of the server.
  7. Thanks for that reminder. I've got a few threads I'm going to have to revisit once my restore is done. I can't really proceed to the next step until that is done, and that just takes... time! They are running at about 40 C. That's a bit higher than the other drives. That brings up a REALLY good point! Hard to believe that there are 10 TB drives out there! (Yes, I am a few years behind...) So it looks like Seagate has more options in the 6-10 TB range. I am guessing that the Seagate IronWolf 6TB NAS is comparable to the WD Red. Since I am going to have to get quite a few new drives, I might as well get what I need in the long run. Am I correct that the currently $200.00 Seagate IronWolf 6TB NAS drives https://www.newegg.com/Product/Product.aspx?Item=N82E16822179004 are a good choice? I like to stay about one or two models back, if possible; that way the hardware is already tested, plus the cost is usually reasonable for those older items. Hmmm, I was meaning to look into multiple parity drives, as I just saw that available in version 6 (only recently upgraded). My understanding is that if I have two 8TB parity drives, then that would be a good thing, no? Then up to two drives could fail without losing data. I take it that your idea is that the more drives I have, the greater the chances that more than one will fail; thus the 10TB drives... So I can either have fewer, more costly drives with 1 parity drive, or more smaller drives with 2 parity drives. So back to the Red vs. Black question... If starting new, would Red or Black be better? It seems that Red (Seagate or WD) would be best. Is that correct thinking?
  8. Would you be kind enough to either list or point me to a list of recommended hardware? I want to believe that unRAID will work in a business environment (my environment, specifically), but there was a hard drive failure, and once that was fixed another problem happened, and although these were individual issues, my boss lumps them all together and 'wants a solution that works!'. That means two things: a solution that keeps a mirror of the live data, synced via the internet to either a second unRAID server or a NAS, and a data server that communicates well with Macs. So I am tasked with finding that solution, either unRAID or something else. My feeling is that the drives are the culprit, which brings me to my next question. Would WD Red or WD Black drives be better? Sites seem to say that Black is basically Red+, but that Red is designed for NAS while Black is just faster, and yet WD Blacks https://www.newegg.com/Product/Product.aspx?Item=N82E16822236971 are cheaper on Newegg than WD Reds https://www.newegg.com/Product/Product.aspx?Item=9SIABZ053G6539. Are WD Blacks ALSO good for NAS devices? Another point worth mentioning is that we currently have over 50TB of data; I'd like a solution with about 80TB. Also, we will never have many users connecting simultaneously, but we will have a Windows box and up to 5 Macs connecting. The Windows box could be taken out of the picture if I were able to get excellent transfer rates with a Mac. Any thoughts are eagerly welcomed!
  9. Thank you. Mine is set to 'Auto'. There have been two suggestions to convert my drives to XFS format. The large amount of writing to the drives is probably the problem.
  10. Thank you all for your replies and suggestions!
  11. So formatting each drive would be the best course of action? Edit: Thanks for that very helpful piece of information!
  12. Here is my diagnostics file if it will help any. vvdata-diagnostics-20170110-1152.zip
  13. Hey, thanks for your reply! I actually DID give up on AFP and started using SMB, but that started getting flaky, and I about burst yesterday when I tried to save my single-page Word document to the unRAID server (over SMB) and it refused to connect twice and killed other FTP connections as well. So I started looking into AFP again, but it sounds like that is worse. I am copying (via FTP) four files at a time, and that will be going on consistently for a while as I am restoring data. Our files are usually over 50GB, so it will take a while. I feel like that shouldn't cause these connection issues, but it may be just that: too many connections at one time.
  14. Is that the same as "Tunable (enable NCQ):"? I am thinking that something in my Disk Settings is not set correctly. I am at 4 GB and am researching the best settings.
  15. Were you ever able to get AFP to work successfully? I am having real issues trying to connect my Mac machines to my unRAID server.
  16. Update: My drives are formatted with reiserfs.
  17. Does anyone here have much success using Macs to connect with an unRAID file server? I have been trying (for years now) to get shares (SMB or AFP) to mount with my Macs and it never works well. The first attempt to connect to the SMB share often fails completely and often kills any FTP file transfers I am doing at the time. If it does eventually connect, it often just stops working and then drops. The seemingly relevant logs might be this, but there is nothing else:

     Jan 10 09:20:42 VVData avahi-daemon[3680]: Service "VVData-AFP" (/services/afp.service) successfully established.
     Jan 10 09:23:31 VVData afpd[6127]: dsi_stream_read: len:0, unexpected EOF
     Jan 10 09:24:55 VVData afpd[6482]: transmit: Request to dbd daemon (volume TimeMachines) timed out.
     Jan 10 09:24:55 VVData afpd[6482]: afp_openvol(/mnt/user/TimeMachines): Fatal error: Unable to get stamp value from CNID backend
     Jan 10 09:26:14 VVData afpd[6482]: transmit: Request to dbd daemon (volume Time Machine Backups) timed out.
     Jan 10 09:26:14 VVData afpd[6482]: afp_openvol(/mnt/user/Time Machine Backups): Fatal error: Unable to get stamp value from CNID backend

     I am wondering which shares I should be using, or if I should be formatting the drives to something other than XFS. My current system specs are:

     Model: N/A
     M/B: Supermicro - X8SIL
     CPU: Intel® Core™ i3 CPU 550 @ 3.20GHz
     HVM: Enabled
     IOMMU: Disabled
     Cache: 128 kB, 512 kB, 4096 kB
     Memory: 4 GB (max. installable capacity 16 GB)
     Network: eth0: 1000 Mb/s, full duplex, mtu 1500; eth1: 1000 Mb/s, full duplex, mtu 1500
     Kernel: Linux 4.4.30-unRAID x86_64
     OpenSSL: 1.0.2j
     Uptime: 0 days, 00:35:45

     Any thoughts would be greatly appreciated.
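A follow-up note on the Mac/SMB trouble above: one commonly suggested mitigation, assuming the unRAID build ships Samba 4.2 or later, is enabling Samba's vfs_fruit module, which implements Apple's SMB extensions. A hedged sketch of the kind of fragment that would go in unRAID's "Samba extra configuration" box (the option values are illustrative defaults, not a tested recipe for this server):

```ini
[global]
; Apple compatibility: fruit provides the AAPL SMB2 extensions,
; streams_xattr stores resource forks as xattrs, and catia remaps
; characters that are illegal in Windows filenames.
vfs objects = catia fruit streams_xattr
fruit:metadata = stream
fruit:model = MacSamba
```

This addresses the SMB side only; the afpd/CNID errors in the log are a separate Netatalk issue.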
  18. Thanks for that! I now see that there can be a corrupt file system without physical hard drive corruption. I didn't quite grasp that before. As it turned out, when I remapped disk20, unRAID thought it should be formatted. Fortunately, disk20 had no significant data on it, so I formatted the drive and let the rebuild finish. As of this morning, all is running OK again, all folders are available, and there are no errors. I will now start the recovery process and add drives as necessary. I am thinking that now would be a good time to upgrade to unRAID 6.0.
  19. So here is the log after a reboot:

     Oct 21 12:34:22 VVData emhttp: shcmd (68): set -o pipefail ; mount -t reiserfs -o user_xattr,acl,noatime,nodiratime /dev/md20 /mnt/disk20 |& logger (Drive related)
     Oct 21 12:34:22 VVData kernel: REISERFS (device md20): found reiserfs format "3.6" with standard journal (Routine)
     Oct 21 12:34:22 VVData kernel: REISERFS (device md20): using ordered data mode (Routine)
     Oct 21 12:34:22 VVData kernel: reiserfs: using flush barriers (Drive related)
     Oct 21 12:34:22 VVData kernel: REISERFS (device md20): journal params: device md20, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30 (Routine)
     Oct 21 12:34:22 VVData kernel: REISERFS (device md20): checking transaction log (md20) (Routine)
     Oct 21 12:34:23 VVData logger: mount: /dev/md20: can't read superblock (Drive related)
     Oct 21 12:34:23 VVData emhttp: _shcmd: shcmd (68): exit status: 32 (Drive related)
     Oct 21 12:34:23 VVData emhttp: disk20 mount error: 32 (Errors)
     Oct 21 12:34:23 VVData emhttp: shcmd (69): rmdir /mnt/disk20 (Drive related)
     Oct 21 12:34:23 VVData kernel: REISERFS warning: reiserfs-5089 is_internal: free space seems wrong: level=2, nr_items=4, free_space=3840 rdkey (Minor Issues)
     Oct 21 12:34:23 VVData kernel: REISERFS error (device md20): vs-5150 search_by_key: invalid format found in block 32770. Fsck? (Errors)
     Oct 21 12:34:23 VVData kernel: REISERFS (device md20): Remounting filesystem read-only (Drive related)
     Oct 21 12:34:23 VVData kernel: REISERFS error (device md20): vs-13070 reiserfs_read_locked_inode: i/o failure occurred trying to find stat data of [1 2 0x0 SD] (Errors)
     Oct 21 12:34:23 VVData kernel: REISERFS (device md20): Using r5 hash to sort names (Routine)
     Oct 21 12:34:24 VVData emhttp: shcmd (70): mkdir /mnt/user (Drive related)
     Oct 21 12:34:24 VVData emhttp: shcmd (71): /usr/local/sbin/shfs /mnt/user -disks 16777214 -o noatime,big_writes,allow_other -o remember=0 |& logger (Drive related)
     Oct 21 12:34:24 VVData emhttp: shcmd (72): crontab -c /etc/cron.d -d &> /dev/null (Drive related)
     Oct 21 12:34:24 VVData emhttp: shcmd (73): /usr/local/sbin/emhttp_event disks_mounted (Drive related)
     Oct 21 12:34:24 VVData emhttp_event: disks_mounted (Drive related)
     Oct 21 12:34:25 VVData emhttp: shcmd (74): :>/etc/samba/smb-shares.conf (Drive related)
     Oct 21 12:34:25 VVData avahi-daemon[6737]: Files changed, reloading. (Drive related)
     Oct 21 12:34:26 VVData emhttp: get_config_idx: fopen /boot/config/shares/lost+found.cfg: No such file or directory - assigning defaults (Drive related)
     Oct 21 12:34:26 VVData emhttp: Restart SMB... (Drive related)
     Oct 21 12:34:26 VVData emhttp: shcmd (75): killall -HUP smbd (Minor Issues)
     Oct 21 12:34:26 VVData emhttp: shcmd (76): cp /etc/avahi/services/smb.service- /etc/avahi/services/smb.service (Drive related)
     Oct 21 12:34:26 VVData avahi-daemon[6737]: Files changed, reloading. (Drive related)
     Oct 21 12:34:26 VVData avahi-daemon[6737]: Service group file /services/smb.service changed, reloading. (Drive related)
     Oct 21 12:34:26 VVData emhttp: shcmd (77): ps axc | grep -q rpc.mountd (Drive related)
     Oct 21 12:34:26 VVData emhttp: _shcmd: shcmd (77): exit status: 1 (Drive related)
     Oct 21 12:34:26 VVData emhttp: shcmd (78): /usr/local/sbin/emhttp_event svcs_restarted (Drive related)
     Oct 21 12:34:26 VVData emhttp_event: svcs_restarted (Drive related)
     Oct 21 12:34:26 VVData emhttp: shcmd (79): /usr/local/sbin/emhttp_event started (Drive related)
     Oct 21 12:34:26 VVData emhttp_event: started (Drive related)
     Oct 21 12:34:27 VVData avahi-daemon[6737]: Service "VVData" (/services/smb.service) successfully established. (Drive related)
     Oct 21 12:40:26 VVData emhttp: shcmd (80): set -o pipefail ; mkreiserfs -q /dev/md20 |& logger (Drive related)
     Oct 21 12:40:26 VVData logger: mkreiserfs 3.6.24 (Drive related)
     Oct 21 12:40:26 VVData logger: (Drive related)
     Oct 21 12:42:26 VVData emhttp: shcmd (81): mkdir /mnt/disk20 (Routine)
     Oct 21 12:42:26 VVData emhttp: shcmd (82): set -o pipefail ; mount -t reiserfs -o user_xattr,acl,noatime,nodiratime /dev/md20 /mnt/disk20 |& logger (Drive related)
     Oct 21 12:42:26 VVData kernel: REISERFS (device md20): found reiserfs format "3.6" with standard journal (Routine)
     Oct 21 12:42:26 VVData kernel: REISERFS (device md20): using ordered data mode (Routine)
     Oct 21 12:42:26 VVData kernel: reiserfs: using flush barriers (Drive related)
     Oct 21 12:42:27 VVData kernel: REISERFS (device md20): journal params: device md20, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30 (Routine)
     Oct 21 12:42:27 VVData kernel: REISERFS (device md20): checking transaction log (md20) (Routine)
     Oct 21 12:42:33 VVData kernel: REISERFS (device md20): Using r5 hash to sort names (Routine)
     Oct 21 12:42:33 VVData kernel: REISERFS (device md20): Created .reiserfs_priv - reserved for xattr storage. (Drive related)
     Oct 21 12:42:33 VVData emhttp: resized: /mnt/disk20 (Drive related)
     Oct 21 12:42:34 VVData emhttp: shcmd (83): :>/etc/samba/smb-shares.conf (Drive related)
     Oct 21 12:42:34 VVData avahi-daemon[6737]: Files changed, reloading. (Drive related)
     Oct 21 12:42:34 VVData emhttp: shcmd (84): chmod 777 '/mnt/disk20' (Drive related)
     Oct 21 12:42:34 VVData emhttp: shcmd (85): chown 'nobody':'users' '/mnt/disk20' (Drive related)
     Oct 21 12:42:34 VVData emhttp: get_config_idx: fopen /boot/config/shares/lost+found.cfg: No such file or directory - assigning defaults (Drive related)
     Oct 21 12:42:34 VVData emhttp: Restart SMB... (Drive related)
     Oct 21 12:42:34 VVData emhttp: shcmd (86): killall -HUP smbd (Minor Issues)
     Oct 21 12:42:34 VVData emhttp: shcmd (87): cp /etc/avahi/services/smb.service- /etc/avahi/services/smb.service (Drive related)
     Oct 21 12:42:34 VVData avahi-daemon[6737]: Files changed, reloading. (Drive related)
     Oct 21 12:42:34 VVData avahi-daemon[6737]: Service group file /services/smb.service changed, reloading. (Drive related)
     Oct 21 12:42:34 VVData emhttp: shcmd (88): ps axc | grep -q rpc.mountd (Drive related)
     Oct 21 12:42:34 VVData emhttp: _shcmd: shcmd (88): exit status: 1 (Drive related)
     Oct 21 12:42:34 VVData emhttp: shcmd (89): /usr/local/sbin/emhttp_event svcs_restarted (Drive related)
     Oct 21 12:42:34 VVData emhttp_event: svcs_restarted (Drive related)
     Oct 21 12:42:35 VVData avahi-daemon[6737]: Service "VVData" (/services/smb.service) successfully established. (Drive related)
     Oct 21 12:47:10 VVData sSMTP[8607]: Creating SSL connection to host (Drive related)
     Oct 21 12:47:10 VVData sSMTP[8607]: SSL connection using AES128-SHA (Drive related)
     Oct 21 12:47:12 VVData sSMTP[8607]: Sent mail for root@localhost (221 2.0.0 closing connection a26sm1760895qtb.32 - gsmtp) uid=0 username=root outbytes=8479 (Drive related)

     The line "Oct 21 12:34:23 VVData emhttp: disk20 mount error: 32 (Errors)" puzzles me. Is it really a bad drive?
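On the "Fsck?" prompt in the log above: for a reiserfs volume that will not mount (mount error 32), the usual escalation is reiserfsck run against the md device while the array is started in maintenance mode. A hedged sketch that only prints the commands, so nothing destructive runs by accident (/dev/md20 is the device from this log, and --rebuild-tree should only ever be a last resort, ideally after imaging the disk):

```shell
#!/bin/sh
# Print (do not run) the reiserfs check/repair escalation for a given device.
# Run the printed commands manually, in order, with the array in maintenance mode.
print_repair_steps() {
  dev="$1"
  echo "reiserfsck --check $dev"        # read-only: report problems, change nothing
  echo "reiserfsck --fix-fixable $dev"  # repairs minor corruptions only
  echo "reiserfsck --rebuild-tree $dev" # last resort: rebuilds the filesystem tree
}

print_repair_steps /dev/md20
```

Note that in the log above emhttp went straight to mkreiserfs (a reformat), which is why the disk came back empty.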
  20. Something else I forgot to mention is that when I first fired the server up, one of the critical folders had no data in it. When I noticed this, I looked at the logs and discovered errors on drive 13. When I removed drive 13, the folder displayed the files again. Whew! Just a thought: I have not replaced the RAM yet.
  21. First of all, thank you to those who have gotten back to me with their ideas and comments! Yesterday, I went to my local store and spoke to a rep there (who happens to use unRAID); he did the calculations for my hardware and it came out at about 466W. The PSU is over 5 years old, so it seemed like a good idea to replace it anyway. Also, keeping the load at 50% of the PSU's capacity is a good idea, so I went with an EVGA SuperNOVA 850 G2. I replaced the PSU (fortunately I had some extra Molex splitters, since there weren't enough in the box) and fired the box up. I decided to remap all of the drives, and almost immediately drive 13 dropped off! The logs claimed corruption, so it was dropped. So I remapped the drives WITHOUT drive 13. Now this morning drive 20 has 2258864 errors, and I bet that if I stop the rebuild, I will find that drive 20 has dropped off. I happen to know that there is no data on that drive, but that's beside the point. Sadly, I cannot share anything more at the moment since there are no logs AT ALL! All I see on the logs page is "/usr/bin/tail -f /var/log/syslog" and the 'close' button. Now my concern is not seeing a log. Could it really be the USB stick? I'll make a note here that I do want to upgrade to unRAID 6, but have been waiting to get over the hurdle of drives dropping off first. Is that necessary? Maybe unRAID 6 will reveal more? I don't know.
  22. Thanks! I will look into that. Quite frankly, that makes sense. I recently added more drives (filling up the remaining slots) and started upgrading the drives to 7200 rpm, so there's more power being drawn. Would an 850W PSU be sufficient for 20 drives?
  23. I don't know if anyone can help, but here is what is going on: a little while ago, I noticed that a drive would get write errors. When I stopped the array, I would notice that the drive was no longer available; it just disappeared! At first I thought it was the drive, so I replaced it, but then it happened again. I have replaced the controller card and the cable. I have run a chkdsk on the USB drive, and drives still keep dropping off! Can anyone see where the problem lies? Here is the most recent log; I have noticed that disk 13 and then disk 17 are the ones that drop off. Any help would be MUCH appreciated! syslog-2016-10-19_cropped.txt
  24. Due to an unexpected failure over the weekend, I have had to think about redundancy once again, and one area I just thought of is the boot thumb drive. Has anyone had experience with cloning the drive? Should I get another license and thumb drive and set it to sync automatically, or is one sync good forever? Or is it good until the next drive is swapped? Perhaps there is no need to worry about this, but I was thinking that we've had this thumb drive for about 5 years now; it might be worth backing it up and replacing it. Am I correct that the thumb drive is written to fairly frequently? Any thoughts or suggestions are greatly appreciated!
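On backing up the boot thumb drive raised above: the flash is mounted at /boot on a running unRAID server, so a periodic copy of it onto the array covers the configuration. A minimal sketch (the destination path is illustrative; note also that the unRAID license key is tied to the flash drive's GUID, so restoring to a different stick requires a key transfer as well):

```shell
#!/bin/sh
# Copy the boot flash contents (src) into a dated backup folder (dest).
# Defaults assume unRAID's usual mount points; both can be overridden.
backup_flash() {
  src="${1:-/boot}"
  dest="${2:-/mnt/user/backups/flash-$(date +%Y%m%d)}"
  mkdir -p "$dest" || return 1
  cp -a "$src"/. "$dest"/ || return 1
  echo "backed up $src -> $dest"
}
```

Called with no arguments it copies /boot into a dated folder on a hypothetical backups share; e.g. `backup_flash /boot /mnt/disk1/flashbackup` picks an explicit destination.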
  25. Hello, I haven't done much posting here, but I thought I might chip in my discoveries, as I, too, have had similar issues with drives dropping off. I am currently (still) using 5.0.6 and have found that some drives would randomly get 'read errors'. When I would stop the array, I noticed that the drives were actually not seen at all! So although it was reported as a read/write error, it was really because the system was reading from or writing to a non-existent drive. A reboot usually solved this, but this past weekend it finally happened that, during a rebuild, another drive dropped off! Well, after removing the two 'bad' drives, I found that the software wasn't seeing the correct hardware; it was still seeing one of the old drives! (Which is why formatting was failing.) After multiple attempts to fix the mounting problems, I finally reset the configuration with the drives that were in place. This allowed the software to see the correct drives. I remapped the physical drives to drive numbers, and the server just finished rebuilding the parity. The server is running smoothly once again. I am really hoping that it stays this way for a while. I also disabled spin-down on all the drives. Someone worked out the $$$ savings of spinning the drives down, and it wasn't worth it in a work environment. I am not sure if it helped or not, but I am certain it didn't hurt. I am now recovering the data on the two removed drives using the recovery mentioned here: http://lime-technology.com/forum/index.php?topic=27028.msg236889#msg236889 I hope this helps!