chooch

Members
  • Content Count

    85
  • Joined

  • Last visited

Community Reputation

0 Neutral

About chooch

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed

  1. Until you experience issues, leave it be. Many people experience performance issues with Reiser, ranging from slow writes to complete lockups. If your system is running well, there's no need to switch. That is the reason I decided to convert to XFS: normally it would take me around 20 minutes to copy a 20GB file to my server, and recently it was taking a few hours because my disks were getting full. I read through all the posts and am in the process of converting to XFS. I am a very casual user and up until a week ago was running v5 of unRAID. There seem to be a lot of different approaches, and I ended up following RobJ's instructions for the most part, with a few minor changes.
  2. Thanks for the response, and I appreciate the tip about formatting instead of --delete. Part of the reason I am doing it this way is that the drives physically sit in my server top to bottom in order from parity to disk6. They are front loading, so I like to know the physical locations in case I ever need to replace a drive. I don't use user shares and have different data on each disk (disk1 TV, disk2 Blu-ray, disk3 DVD, etc.), so I wasn't sure if I could simply reassign them afterwards to correspond to the physical locations on my server with the method you mentioned earlier. Also, disk6 is new and a different model that will eventually become my parity drive once I am done with the copying. Then my old parity will become disk4 and I will be back to a 5-data-drive server. I was worried I would mess something up with all the reassigning and figured that while this takes longer, it is a safe approach. I know there are easier ways to do this, and you had a really good example earlier; that is where I saw the suggestion to use rsync -avPX. But I barely make any changes to my server and up until this week was running v5. I just use mine to back up my media, and the only reason I started making changes is that the extremely slow disk speed of disk3 is making me go from RFS to XFS. The disk is nearly full, and someone mentioned earlier that RFS can be very slow when full. Time is really not an issue here; I was just trying to take the simple approach for someone who is not an expert with unRAID and didn't want to mess anything up.
  3. Thanks. Would it look something like this? rsync -avPX --delete /mnt/disk2/ /mnt/disk6/
  4. Thanks for the responses, trurl and garycase. I had another question and was hoping someone could help. I am currently moving the data from disk1 -> disk6 (new disk) using rsync -avPX. Then I will format disk1 as XFS and move the data back from disk6 using rsync -avPX. Next I want to do the same thing for disk2: move its data to disk6 and then back after formatting to XFS.
     disk1 (RFS) -> disk6 -> disk1 (XFS)
     then
     disk2 (RFS) -> disk6 -> disk2 (XFS)
     My question is: do I need to delete all the data on disk6 before running rsync -avPX from disk2? Or will that just copy over the existing data on disk6 (disk1's old data from the first transfer)?
  5. This is the approach I am taking. I know it will take longer, but I am not really in a rush. Plus, I don't trust myself with all the reassigning. My current setup is:
     disk1: reiserfs 5TB
     disk2: reiserfs 5TB
     disk3: reiserfs 5TB
     disk4: reiserfs 2TB
     disk5: reiserfs 2TB
     disk6: XFS (newly precleared 5TB drive)
     I plan on using disk6 as the temporary drive and will copy the data from disk1 to it, reformat disk1 to XFS, then copy the data back from disk6. Then rinse and repeat for each other disk. Is this an OK approach? I started tonight using:
     rsync -avPX /mnt/disk1/ /mnt/disk6/
     When I added disk6 it formatted to XFS, since that is my default format and what I want all new drives to be. Is that OK? The reason I ask is because RobJ mentioned it should not be XFS. My current copy rate using rsync seems to be around 36 MB/s. Is that typical? My most recent parity check was around 120 MB/s, and my most recent preclear results were: Pre-Read (105 MB/s), Zeroing (136 MB/s), and Post-Read (48 MB/s).
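One cheap safety net with this round-trip approach: before reformatting the source disk, double-check that the copy on the staging disk is complete. A recursive diff that comes back silent is a reasonable sanity check. A runnable sketch with scratch directories standing in for the real mounts:

```shell
# Scratch dirs stand in for /mnt/disk1 (source) and /mnt/disk6 (the copy).
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "episode 1" > "$SRC/ep1.mkv"
cp -a "$SRC/." "$DST/"    # stands in for the rsync copy made earlier

# diff -r walks both trees; it prints nothing and exits 0 when they match.
if diff -r "$SRC" "$DST" >/dev/null; then
    echo "copy verified"
else
    echo "MISMATCH - do not format the source disk yet"
fi
```

An `rsync -rcn` dry run against the real disks would do the same job with checksums, at the cost of reading everything twice.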
  6. Thanks for the info. I already started the preclear last night; however, once it's done I'll add the patch and upgrade to 6.2.4 again in case I have to do this in the future. I did search before posting. How do you think I came across the thread where the person had the same issue and was using 6.2 beta 18? Sorry I didn't come across the post you mentioned from two days ago. I'm not on this forum a lot, and every forum's search behavior is different. Even on the forums I do spend a lot of time on I can still have issues searching for posts I made, and the one thing I learned is the more you use them, the better you get. But like any other forum, instead of posting a link to that specific thread from two days ago to help me out, you're quick to make assumptions and say "if I would have searched" like I didn't try doing that in the first place.
  7. Reverted back to 6.1.9 and now it's working.
  8. I am trying to do a preclear and I keep getting an error saying "Sorry: Device /dev/xxx is busy.: 1". I am using 6.2.4, and read in another thread that a person using 6.2 beta 18 found the included sfdisk command does not support the -R option. They recommended going back to v6.1.9 to do the preclearing. Is that the case here with 6.2.4, and if so, what is the most recent version of Unraid that can run preclear? That thread was from March, so I wasn't sure if newer versions since then worked with preclear.

     root@Tower:~# cd /boot
     root@Tower:/boot# preclear_disk.sh -l
     ==================================== 1.15
     Disks not assigned to the unRAID array
       (potential candidates for clearing)
     ========================================
          /dev/sdb = ata-TOSHIBA_HDWE150_Y666KF59F57D
     root@Tower:/boot# preclear_disk.sh /dev/sdb
     ./preclear_disk.sh: line 1307: /root/mdcmd: No such file or directory
     sfdisk: invalid option -- 'R'

     Usage:
      sfdisk [options] <dev> [[-N] <part>]
      sfdisk [options] <command>

     Display or manipulate a disk partition table.

     Commands:
      -A, --activate <dev> [<part> ...] list or set bootable MBR partitions
      -d, --dump <dev>                  dump partition table (usable for later input)
      -J, --json <dev>                  dump partition table in JSON format
      -g, --show-geometry [<dev> ...]   list geometry of all or specified devices
      -l, --list [<dev> ...]            list partitions of each device
      -F, --list-free [<dev> ...]       list unpartitioned free areas of each device
      -s, --show-size [<dev> ...]       list sizes of all or specified devices
      -T, --list-types                  print the recognized types (see -X)
      -V, --verify [<dev> ...]          test whether partitions seem correct
          --part-label <dev> <part> [<str>] print or change partition label
          --part-type <dev> <part> [<type>] print or change partition type
          --part-uuid <dev> <part> [<uuid>] print or change partition uuid
          --part-attrs <dev> <part> [<str>] print or change partition attributes

      <dev>                     device (usually disk) path
      <part>                    partition number
      <type>                    partition type, GUID for GPT, hex for MBR

     Options:
      -a, --append              append partitions to existing partition table
      -b, --backup              backup partition table sectors (see -O)
          --bytes               print SIZE in bytes rather than in human readable format
      -f, --force               disable all consistency checking
          --color[=<when>]      colorize output (auto, always or never)
                                  colors are enabled by default
      -N, --partno <num>        specify partition number
      -n, --no-act              do everything except write to device
          --no-reread           do not check whether the device is in use
      -O, --backup-file <path>  override default backup file name
      -o, --output <list>       output columns
      -q, --quiet               suppress extra info messages
      -X, --label <name>        specify label type (dos, gpt, ...)
      -Y, --label-nested <name> specify nested label type (dos, bsd)
      -L, --Linux               deprecated, only for backward compatibility
      -u, --unit S              deprecated, only sector unit is supported
      -h, --help                display this help and exit
      -v, --version             output version information and exit

     Available columns (for -o):
      gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
      dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
      bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
      sgi: Device Start End Sectors Cylinders Size Type Id Attrs
      sun: Device Start End Sectors Cylinders Size Type Id Flags

     For more details see sfdisk(8).

     Sorry: Device /dev/sdb is busy.: 1
     root@Tower:/boot#
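The root cause in logs like the one above is that newer util-linux builds of sfdisk dropped the old -R (re-read partition table) option that preclear_disk.sh 1.15 still calls. Before kicking off a multi-day preclear, it's possible to check what the local sfdisk supports; a guarded sketch (writes its findings to a scratch file, and also handles sfdisk not being installed):

```shell
OUT=/tmp/sfdisk_check.txt
if command -v sfdisk >/dev/null 2>&1; then
    sfdisk --version > "$OUT" 2>&1
    # Pre-rewrite sfdisk lists -R in its help text; the rewritten
    # util-linux versions do not, which is what trips preclear 1.15.
    if sfdisk --help 2>&1 | grep -qw -- '-R'; then
        echo "this sfdisk still supports -R" >> "$OUT"
    else
        echo "this sfdisk has no -R option - preclear 1.15 will fail" >> "$OUT"
    fi
else
    echo "sfdisk not found on this machine" > "$OUT"
fi
cat "$OUT"
```

This only diagnoses the mismatch; the fix is still an updated preclear script or an OS version whose sfdisk matches what the script expects.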
  9. Attached is a syslog from a few days ago, and it appears I am running ReiserFS. What other file systems are there, and is it too late to switch? Or should I just leave a certain % empty if I am stuck with ReiserFS? I have since upgraded to 6.2.4; however, even when I was on 5.0.6, disk 3 still behaved the same way. I will update with a new syslog tonight, but really the only thing that changed was upgrading from 5.0.6 to 6.2.4.

     Dec 12 23:17:38 Tower emhttp: shcmd (20): mkdir /mnt/disk1
     Dec 12 23:17:38 Tower emhttp: shcmd (21): set -o pipefail ; mount -t reiserfs -o user_xattr,acl,noatime,nodiratime /dev/md1 /mnt/disk1 |& logger
     Dec 12 23:17:38 Tower kernel: REISERFS (device md1): found reiserfs format "3.6" with standard journal
     Dec 12 23:17:38 Tower kernel: REISERFS (device md1): using ordered data mode
     Dec 12 23:17:38 Tower kernel: reiserfs: using flush barriers
     Dec 12 23:17:38 Tower kernel: REISERFS (device md1): journal params: device md1, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
     Dec 12 23:17:38 Tower kernel: REISERFS (device md1): checking transaction log (md1)
     Dec 12 23:17:38 Tower kernel: REISERFS (device md1): Using r5 hash to sort names
     Dec 12 23:17:38 Tower emhttp: shcmd (22): mkdir /mnt/disk2
     Dec 12 23:17:38 Tower emhttp: shcmd (23): set -o pipefail ; mount -t reiserfs -o user_xattr,acl,noatime,nodiratime /dev/md2 /mnt/disk2 |& logger
     Dec 12 23:17:38 Tower kernel: REISERFS (device md2): found reiserfs format "3.6" with standard journal
     Dec 12 23:17:38 Tower kernel: REISERFS (device md2): using ordered data mode
     Dec 12 23:17:38 Tower kernel: reiserfs: using flush barriers
     Dec 12 23:17:38 Tower kernel: REISERFS (device md2): journal params: device md2, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
     Dec 12 23:17:38 Tower kernel: REISERFS (device md2): checking transaction log (md2)
     Dec 12 23:17:38 Tower kernel: REISERFS (device md2): Using r5 hash to sort names
     Dec 12 23:17:38 Tower emhttp: shcmd (24): mkdir /mnt/disk3
     Dec 12 23:17:38 Tower emhttp: shcmd (25): set -o pipefail ; mount -t reiserfs -o user_xattr,acl,noatime,nodiratime /dev/md3 /mnt/disk3 |& logger
     Dec 12 23:17:38 Tower kernel: REISERFS (device md3): found reiserfs format "3.6" with standard journal
     Dec 12 23:17:38 Tower kernel: REISERFS (device md3): using ordered data mode
     Dec 12 23:17:38 Tower kernel: reiserfs: using flush barriers
     Dec 12 23:17:38 Tower kernel: REISERFS (device md3): journal params: device md3, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
     Dec 12 23:17:38 Tower kernel: REISERFS (device md3): checking transaction log (md3)
     Dec 12 23:17:38 Tower kernel: REISERFS (device md3): Using r5 hash to sort names
     Dec 12 23:17:38 Tower emhttp: shcmd (26): mkdir /mnt/disk4
     Dec 12 23:17:38 Tower emhttp: shcmd (27): set -o pipefail ; mount -t reiserfs -o user_xattr,acl,noatime,nodiratime /dev/md4 /mnt/disk4 |& logger
     Dec 12 23:17:38 Tower kernel: REISERFS (device md4): found reiserfs format "3.6" with standard journal
     Dec 12 23:17:38 Tower kernel: REISERFS (device md4): using ordered data mode
     Dec 12 23:17:38 Tower kernel: reiserfs: using flush barriers
     Dec 12 23:17:38 Tower kernel: REISERFS (device md4): journal params: device md4, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
     Dec 12 23:17:38 Tower kernel: REISERFS (device md4): checking transaction log (md4)
     Dec 12 23:17:38 Tower kernel: REISERFS (device md4): Using r5 hash to sort names
     Dec 12 23:17:38 Tower emhttp: shcmd (28): mkdir /mnt/disk5
     Dec 12 23:17:38 Tower emhttp: shcmd (29): set -o pipefail ; mount -t reiserfs -o user_xattr,acl,noatime,nodiratime /dev/md5 /mnt/disk5 |& logger
     Dec 12 23:17:39 Tower kernel: REISERFS (device md5): found reiserfs format "3.6" with standard journal
     Dec 12 23:17:39 Tower kernel: REISERFS (device md5): using ordered data mode
     Dec 12 23:17:39 Tower kernel: reiserfs: using flush barriers
     Dec 12 23:17:39 Tower kernel: REISERFS (device md5): journal params: device md5, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
     Dec 12 23:17:39 Tower kernel: REISERFS (device md5): checking transaction log (md5)
     Dec 12 23:17:39 Tower kernel: REISERFS (device md5): Using r5 hash to sort names

     syslog.txt
  10. I have three 5TB Toshiba drives and they are all nearly full (30-200 GB left empty on each of them). Disk 3 has around 150 GB left and is very slow now when I access the drive. I understand that drive performance slows down as a drive gets close to full, but none of my other 5TB drives are acting this slow. Disk 2 actually has less free space than disk 3, and I'm not really seeing any performance issues with that drive. I don't think it's network related, because I used mv -v to move a 20GB movie onto disk 3 last night and it took a few hours. Normally this would take a few minutes, so a few hours is a major change. Also, when you click on disk 3 from Windows or unRAID it can take a very long time to access it and open the folder. My current setup is:
     Parity - 5TB TOSHIBA PH3500U-1I72
     disk1 - 5TB TOSHIBA PH3500U-1I72
     disk2 - 5TB TOSHIBA PH3500U-1I72
     disk3 - 5TB TOSHIBA PH3500U-1I72
     disk4 - 2TB Hitachi Deskstar 5K3000
     disk5 - 2TB Hitachi Deskstar 5K3000
     Would removing the drive and rebuilding possibly help? I'm not an expert when it comes to this and wanted to see if anyone had any suggestions. Is there an optimal % of a drive that should be left empty to avoid performance issues? I do have a brand new 5TB drive I plan on adding soon to replace one of the 2TB drives, so I could always put that in the slot for disk 3 for now and remove the current one for offline testing. I'm only running with 1GB of memory (1GB recently died); however, unRAID is showing that my memory is only being used at 70-75% and CPU at 30-35%, so I didn't think that is the issue. Any help would be appreciated, because while I am a long-time unRAID user, I am far from an expert and just have a basic setup for backing up my media.
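On the "how full is too full" question, it's easy to keep an eye on per-disk free space from the console. A guarded sketch, assuming the stock Unraid /mnt/disk* mount points (it degrades gracefully on a machine without them):

```shell
# Report free space on each array disk; prints a note instead of failing
# when run on a machine without Unraid's /mnt/disk* mounts.
found=0
for d in /mnt/disk[0-9]*; do
    [ -d "$d" ] || continue
    found=1
    # tail -1 keeps just the data row of df's output for this mount.
    df -h "$d" | tail -1
done
[ "$found" -eq 1 ] || echo "no /mnt/disk* mounts on this machine"
```

Watching these numbers per disk (rather than the array total) is what reveals a single nearly-full ReiserFS disk dragging its own writes down.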
  11. Thanks for the response; I will give that a try with -f. I have more RAM and a better processor that I haven't gotten around to installing yet. After I do that I plan on upgrading to 6.2, so I should be able to use diskspeed as normal since I'll have more memory installed.
  12. Thanks for the response; I am running 5.0.6. I ran 2.5 initially, but when I got the error I tried 2.6.1 as well. I went back and ran 2.5 with the -l option, and here is the log it created:

     mdNumDisabled:
     mdNumInvalid: 0
     mdNumMissing: 0
     mdResyncDb:
     sbNumDisks: 6
     mdResyncPos: 0

     /tmp/inventory1.txt
     ==========
     Disk /dev/sda: 999 MB, 999555072 bytes
     255 heads, 63 sectors/track, 121 cylinders
     Units = cylinders of 16065 * 512 = 8225280 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disk identifier: 0x09d34f4f

        Device Boot   Start   End   Blocks    Id  System
     /dev/sda1   *    1       122   976096+   6   FAT16
     Partition 1 has different physical/logical endings:
          phys=(120, 254, 63) logical=(121, 133, 12)

     Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
     1 heads, 63 sectors/track, 62016336 cylinders
     Units = cylinders of 63 * 512 = 32256 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disk identifier: 0x00000000

        Device Boot   Start   End        Blocks       Id  System
     /dev/sdc1        2       62016336   1953514552   83  Linux
     Partition 1 does not end on cylinder boundary.

     Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
     1 heads, 63 sectors/track, 62016336 cylinders
     Units = cylinders of 63 * 512 = 32256 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disk identifier: 0x00000000

        Device Boot   Start   End        Blocks        Id  System
     /dev/sdb1        2       62016336   1953514552+   83  Linux
     Partition 1 does not end on cylinder boundary.

     Disk /dev/sdd: 5001.0 GB, 5000981078016 bytes
     256 heads, 63 sectors/track, 605626 cylinders
     Units = cylinders of 16128 * 512 = 8257536 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disk identifier: 0x00000000

        Device Boot   Start   End      Blocks        Id  System
     /dev/sdd1        1       266306   2147483647+   ee  GPT
     Partition 1 does not start on physical sector boundary.

     Disk /dev/sdf: 5001.0 GB, 5000981078016 bytes
     256 heads, 63 sectors/track, 605626 cylinders
     Units = cylinders of 16128 * 512 = 8257536 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disk identifier: 0x00000000

        Device Boot   Start   End      Blocks        Id  System
     /dev/sdf1        1       266306   2147483647+   ee  GPT
     Partition 1 does not start on physical sector boundary.

     Disk /dev/sde: 5001.0 GB, 5000981078016 bytes
     256 heads, 63 sectors/track, 605626 cylinders
     Units = cylinders of 16128 * 512 = 8257536 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disk identifier: 0x00000000

        Device Boot   Start   End      Blocks        Id  System
     /dev/sde1        1       266306   2147483647+   ee  GPT
     Partition 1 does not start on physical sector boundary.

     Disk /dev/sdg: 5001.0 GB, 5000981078016 bytes
     256 heads, 63 sectors/track, 605626 cylinders
     Units = cylinders of 16128 * 512 = 8257536 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disk identifier: 0x00000000

        Device Boot   Start   End      Blocks        Id  System
     /dev/sdg1        1       266306   2147483647+   ee  GPT
     Partition 1 does not start on physical sector boundary.
     ==========
     Current Unraid slot: (Disk 5) - /dev/sdb

     /tmp/diskspeed.tmp
     ==========
     /dev/sdb:

     ATA device, with non-removable media
          Model Number:       Hitachi HDS5C3020ALA632
          Serial Number:      ML0221F307PX3D
          Firmware Revision:  ML6OA180
          Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6; Revision: ATA8-AST T13 Project D1697 Revision 0b
     Standards:
          Used: unknown (minor revision code 0x0029)
          Supported: 8 7 6 5
          Likely used: 8
     Configuration:
          Logical         max     current
          cylinders       16383   16383
          heads           16      16
          sectors/track   63      63
          --
          CHS current addressable sectors:   16514064
          LBA    user addressable sectors:  268435455
          LBA48  user addressable sectors: 3907029168
          Logical  Sector size:                   512 bytes
          Physical Sector size:                   512 bytes
          device size with M = 1024*1024:     1907729 MBytes
          device size with M = 1000*1000:     2000398 MBytes (2000 GB)
          cache/buffer size  = 26129 KBytes (type=DualPortCache)
          Form Factor: 3.5 inch
          Nominal Media Rotation Rate: 5940
     Capabilities:
          LBA, IORDY(can be disabled)
          Queue depth: 32
          Standby timer values: spec'd by Standard, no device specific minimum
          R/W multiple sector transfer: Max = 16  Current = 16
          Advanced power management level: disabled
          DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
               Cycle time: min=120ns recommended=120ns
          PIO: pio0 pio1 pio2 pio3 pio4
               Cycle time: no flow control=120ns  IORDY flow control=120ns
     Commands/features:
          Enabled Supported:
             *    SMART feature set
                  Security Mode feature set
             *    Power Management feature set
             *    Write cache
             *    Look-ahead
             *    Host Protected Area feature set
             *    WRITE_BUFFER command
             *    READ_BUFFER command
             *    NOP cmd
             *    DOWNLOAD_MICROCODE
                  Advanced Power Management feature set
                  Power-Up In Standby feature set
             *    SET_FEATURES required to spinup after power up
                  SET_MAX security extension
             *    48-bit Address feature set
             *    Device Configuration Overlay feature set
             *    Mandatory FLUSH_CACHE
             *    FLUSH_CACHE_EXT
             *    SMART error logging
             *    SMART self-test
                  Media Card Pass-Through
             *    General Purpose Logging feature set
             *    WRITE_{DMA|MULTIPLE}_FUA_EXT
             *    64-bit World wide name
             *    URG for READ_STREAM[_DMA]_EXT
             *    URG for WRITE_STREAM[_DMA]_EXT
             *    WRITE_UNCORRECTABLE_EXT command
             *    {READ,WRITE}_DMA_EXT_GPL commands
             *    Segmented DOWNLOAD_MICROCODE
                  unknown 119[7]
             *    Gen1 signaling speed (1.5Gb/s)
             *    Gen2 signaling speed (3.0Gb/s)
             *    Gen3 signaling speed (6.0Gb/s)
             *    Native Command Queueing (NCQ)
             *    Host-initiated interface power management
             *    Phy event counters
             *    NCQ priority information
                  Non-Zero buffer offsets in DMA Setup FIS
             *    DMA Setup Auto-Activate optimization
                  Device-initiated interface power management
                  In-order data delivery
             *    Software settings preservation
             *    SMART Command Transport (SCT) feature set
             *    SCT Write Same (AC2)
             *    SCT Error Recovery Control (AC3)
             *    SCT Features Control (AC4)
             *    SCT Data Tables (AC5)
     Security:
          Master password revision code = 65534
                  supported
          not     enabled
          not     locked
                  frozen
          not     expired: security count
          not     supported: enhanced erase
          504min for SECURITY ERASE UNIT.
     Logical Unit WWN Device Identifier: 5000cca369c380d5
          NAA             : 5
          IEEE OUI        : 000cca
          Unique ID       : 369c380d5
     Checksum: correct
     ==========
     Model: [Hitachi HDS5C3020ALA632]
     Serial: [ML0221F307PX3D]
     GB: [1863]
     startpos: [0]
     startposdisp: [0]
     CurrPer: [0]
     Performance testing /dev/sdb (Disk 5) at 0 GB (0%)
     dd if=/dev/sdb of=/dev/null bs=1GB count=1 skip=0 iflag=direct

     /tmp/diskspeed_results.txt
     ==========
     dd: memory exhausted
     ==========
     ratedspeed: []
     Program complete
  13. Anyone know why I am getting this error? When I used diskspeed a year ago it worked great, and the only thing that has changed with regards to my setup is going from 2 GB to 1 GB of memory (one stick went bad).

     Tower login: root
     Linux 3.9.11p-unRAID.
     root@Tower:~# cd /boot
     root@Tower:/boot# diskspeed.sh
     diskspeed.sh for UNRAID, version 2.6.1
     By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV
     Performance testing /dev/sdb (Disk 5) at 0 GB (0%)
     ./diskspeed.sh: line 689: 0 + : syntax error: operand expected (error token is "+ ")
     ./diskspeed.sh: line 720: /tmp/diskspeed.sdb.graph1: No such file or directory
     rm: cannot remove `/tmp/diskspeed.sdb.graph1': No such file or directory
     ./diskspeed.sh: line 819: /tmp/diskspeed.sdb.graph2: No such file or directory
     rm: cannot remove `/tmp/diskspeed.sdb.graph2': No such file or directory
     To see a graph of the drive's speeds, please browse to the current directory and open the file diskspeed.html in your Internet Browser application.
     root@Tower:/boot#

     Attached is a copy of the syslog after doing a reboot. syslog.txt
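The "dd: memory exhausted" line in the earlier diskspeed log is very likely the bs=1GB read that diskspeed.sh issues: dd allocates a buffer of the full block size, and a 1 GB buffer won't fit on a box running with 1 GB of RAM. The same amount of data can be read with a small buffer and a larger count. A sketch against a scratch file (on a real drive you would point if= at /dev/sdX and add iflag=direct to bypass the page cache):

```shell
SCRATCH=/tmp/speedtest.img
# Make a 64 MB scratch file to read back (stands in for the raw disk).
dd if=/dev/zero of="$SCRATCH" bs=1M count=64 2>/dev/null

# Read it with a 1 MB buffer: 64 x 1 MB allocations instead of one 1 GB
# buffer. dd reports its throughput summary on stderr; keep the last line.
dd if="$SCRATCH" of=/dev/null bs=1M count=64 2>&1 | tail -1

rm -f "$SCRATCH"
```

This doesn't fix diskspeed.sh itself, but it shows the failure is a memory-sizing issue rather than a bad drive, which matches the script working again once more RAM is installed.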
  14. Just got done preclearing all three drives, and the temps never got above 35 degrees. That's using an Antec 900 with hot-swap cages and the stock Antec fans. They are running a few degrees hotter (35-37C) during the parity check compared to my 2TB Hitachi CoolSpins (31-33C); however, they are also the top three drives in the case, so that might play a small part. Newegg seems to no longer carry them, but B&H has them for the same price, so I might pick a few more up.
  15. Pulled the trigger and ordered 3. Been needing to upgrade my 2TB drives for some time.