jphipps

Members
  • Posts

    334
  • Joined

  • Last visited

Everything posted by jphipps

  1. That is pretty cool... Just tried loading the package, and the zpool command is giving me: The ZFS modules are not loaded. Try running '/sbin/modprobe zfs' as root to load them. If I try to load the module, it states that it isn't found. Is there anything else that needs to be done outside of installing the plg?
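     For reference, these are the kinds of checks that can confirm whether the module exists for the running kernel (a sketch; nothing plugin-specific is assumed):
         lsmod | grep zfs                              # is the module already loaded?
         find /lib/modules/$(uname -r) -name 'zfs*'    # was a module built for this kernel?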
  2. Figured this issue out... I am running Sophos Anti-Virus, and for some reason it thinks the real-time updating of the page is some sort of malicious code and blocks it. Once I whitelisted the IP, it is working fine now...
  3. I get the popup window, but then it just says "waiting for {ip address}" and never seems to return. I can see the tail command running in the process list. https://www.dropbox.com/s/7pp8peeq20f5962/Screen%20Shot%202015-09-13%20at%2010.34.17%20AM.png?dl=0
  4. On Dashboard, under System Status, where it says "flash : log : docker", what does the percentage for log (the middle number) represent? It shows 9% on both servers.
  5. Not sure if this belongs in this thread or general support, but I noticed under 6.1.2 that the log display through the UI seems to just hang and most of the time never displays. I tested this on both of my upgraded unRAID servers and it happens on both.
  6. These entries from the SMART report don't look too good on the drive:
         197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       120
         198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       120
     You may want to replace that drive, and then run a pre-clear to test it out.
  7. It is not usually good to have any, but I usually keep an eye on it, and if it remains constant I don't worry about it. I have a drive that has had 16 reallocated sectors for about 5 years; the count has remained the same and I haven't encountered any other errors on the drive. You may want to run short and long SMART self-tests to see if they identify any other errors with the drive.
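     For example (a sketch; /dev/sdX is a placeholder for your actual device):
         smartctl -t short /dev/sdX    # short self-test, usually a few minutes
         smartctl -t long /dev/sdX     # extended self-test, can take hours on large drives
         smartctl -a /dev/sdX          # view the results once the tests finish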
  8. If the disks are the same, there will only be a few lines of output. If they vary because of timestamps or permissions, you can change the parameters to exclude those checks. The "-a" option is really "-rlptgoD"; the p, t, g, and o relate to permissions, timestamps, group, and owner. You can also use "-c", which compares by checksum. It is slower, but it checks the actual file contents for the comparison.
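     Putting that together (a sketch; diskX and diskY are placeholders for your actual disks):
         # dry-run comparison by checksum; slower, but verifies actual file contents
         rsync -avnc /mnt/diskX/ /mnt/diskY/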
  9. If you run rsync with the "-n" option, it will do a dry run and tell you what it would do. I use that to verify 2 disks are the same, ex: rsync -avn /mnt/diskX/ /mnt/diskY/ (the trailing slashes make it compare the disks' contents directly). Using "-a" will include checking permissions, so if you have any files with different permissions, they will show up as changed.
  10. Yeah, I have done the same test as jumperalex, and it does power up all the drives. I have also monitored iostat while doing my writes, and it is doing equal IO across all the drives. I found the spin-up is not too bad because I read a lot more than I write, so the drives stay spun down a lot of the time. I have also found a big performance killer is the 2TB and smaller drives. My other server has a mix of 2 and 4 TB drives, and a parity check is dramatically faster once it passes the 2TB mark...
  11. Just ran another test without turbo write on:
          Jeffs-Mac-mini-2:~ jphipps$ dd if=/dev/zero of=/Volumes/Movies/test.out bs=4096 count=6000000
          6000000+0 records in
          6000000+0 records out
          24576000000 bytes transferred in 413.319705 secs (59460025 bytes/sec)
      Only about 60MB/s... I usually leave it turned on because I haven't really seen any downside to leaving it on...
  12. Forgot to mention, I do have turbo write turned on. That test was using a server with 18 data drives and parity with no cache drive in use.
  13. I had done some testing between ReiserFS and XFS, and didn't really see much difference. This is the output of a dd test from OSX to unRAID over NFS:
          Jeffs-Mac-mini-2:~ jphipps$ dd if=/dev/zero of=/Volumes/Movies/test.out bs=4096 count=6000000
          6000000+0 records in
          6000000+0 records out
          24576000000 bytes transferred in 247.419756 secs (99329174 bytes/sec)
      Getting about 99MB/s writing about 24GB, which should exceed the cache of my 8GB machine, and I don't use a cache drive.
  14. I received the same message, but then if you hit update again it seems to go through...
  15. You may want to check the cabling on the new drive, and after booting back up, check the SMART report on the new drive to see if you see any issues with it.
  16. Looks like they both do... But disk6 has thousands of reallocated sectors as well...
  17. Looks like disk2 has some pending sectors, and disk4 has an end-to-end error and a reported uncorrectable error. But it is hard to tell when those happened; I have a few disks that had errors years ago, and I just keep an eye on them and they haven't had an issue since. You might want to run SMART tests (short and long) on those 2 just to make sure they appear OK.
  18. No worries... You can either click on the little thumbs and then check the disk attributes or run a "smartctl --all /dev/sdX" on a command line. Check for things like "Reallocated Sector Ct", "Current Pending Sector", or any of the error counts. If they all look to be 0 you should be good to go forward.
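      For example, to pull just those attributes out of the report (a sketch; sdX is a placeholder):
          smartctl --all /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'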
  19. I meant checking the integrity by just reviewing the SMART reports on all the drives to make sure they don't appear to be failing.
  20. There are positives and negatives to doing the backup first. It would probably take as much time migrating the data to other drives as replacing the failing drive and rebuilding. You would be protecting the data in case another failure occurs before the drive is replaced, but you would be doing a lot of extra IO by moving the data, rebuilding the drive, and then potentially moving the data back. I would think your best path would be to verify the integrity of the other drives via SMART reports to make sure they aren't having any issues, and then replace/rebuild the failed drive. Then you can pre-clear the suspect drive to see how it operates...
  21. From that syslog, it looks like the drive is up, but it probably encountered a write error that disabled it. Since it is disabled, the array is emulating the drive, so you are probably best off replacing it as soon as you can. After you recover to a new disk, you could run a pre-clear to see if the old drive checks out, but with the SMART attributes on that drive, I wouldn't use it for any important data even if it passed...
  22. If you don't have a pre-cleared drive to replace it with, you may want to migrate any important data off the drive to another one. I would also check the SMART reports of your other drives to make sure they are all in good shape.
  23. The drive looks to be in pretty bad shape from the SMART attributes. I would probably replace the drive and let it rebuild. How much data do you have on that drive?
  24. The only thing I would add is using rsync to migrate the data: rsync -av --progress --remove-source-files /mnt/diskX/ /mnt/diskY/ This allows you to easily stop and restart if needed, and it leaves only a single copy of each file on any disk at a time, so there is no issue accessing the files while you are migrating. Some don't like it removing the files from the source as you copy, but if you don't, you will have duplicates in the array for that disk until you remove/format the source disk. I think it took about 24 hours for each 4TB drive to migrate, doing it serially.
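      One thing to note: --remove-source-files deletes the files it has copied but leaves the empty directory tree behind, so you can check for anything left on the source before formatting it (a sketch; diskX is a placeholder):
          # list any files still remaining on the source disk
          find /mnt/diskX -type f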
  25. OK, I've got my appdata_backup.sh file created, but I just want to check 2 things: my flash drive doesn't have a /boot/custom folder; is this a folder named custom in the root of the flash drive? I have created the file; how do I make it executable? Once I've copied the file to the flash drive and modified the go file, can I manually start a backup from the command line? You can create the directory and put the script in it. The reason it has to exist on the flash under some directory is that "/etc" is recreated on reboot, so you must copy the file into the cron directories on each reboot via the go script. The custom directory does not have any special meaning. You would make it executable by running "chmod 755 appdata_backup.sh" to set the execute bits on the file...
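      A minimal sketch of the go file lines this implies, assuming the script lives at /boot/custom/appdata_backup.sh and you want it run daily via cron (the exact paths are assumptions; adjust for your setup):
          # copy the script off the flash on each boot, since /etc is recreated on reboot
          cp /boot/custom/appdata_backup.sh /etc/cron.daily/appdata_backup
          chmod 755 /etc/cron.daily/appdata_backup
      And yes, once it is in place you can also run the script manually from the command line to start a backup.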