Everything posted by Freddie

  1. I'll work on those 4K dumps. I have the preclear test output available right now. It shows which tests failed and might be more valuable to Joe L.:

     Pre-Clear unRAID Disk /dev/sde
     ################################################################## 1.14
     Model Family:     Seagate Desktop HDD.15
     Device Model:     ST4000DM000-1F2168
     Serial Number:    Z3006WWS
     LU WWN Device Id: 5 000c50 04fce621c
     Firmware Version: CC51
     User Capacity:    4,000,787,030,016 bytes [4.00 TB]

     Disk /dev/sde: 4000.8 GB, 4000787030016 bytes
     255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
     Units = sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disk identifier: 0x00000000

        Device Boot      Start         End      Blocks   Id  System
     /dev/sde1         29344703  3300901822  1635778560    0  Empty
     Partition 1 does not start on physical sector boundary.
     ########################################################################
     failed test 3
     00000 00195 00191 00195
     failed test 5
     failed test 6
     ========================================================================1.14
     ==
     == Disk /dev/sde is NOT precleared
     == 29344703 3271557120 4294967295
     ============================================================================

     And just in case you want to see a successful test (fake_clear in 5.05):

     Pre-Clear unRAID Disk /dev/sde
     ################################################################## 1.14
     Model Family:     Seagate Desktop HDD.15
     Device Model:     ST4000DM000-1F2168
     Serial Number:    Z3006WWS
     LU WWN Device Id: 5 000c50 04fce621c
     Firmware Version: CC51
     User Capacity:    4,000,787,030,016 bytes [4.00 TB]

     Disk /dev/sde: 4000.8 GB, 4000787030016 bytes
     255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
     Units = sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disk identifier: 0x00000000

        Device Boot      Start         End      Blocks   Id  System
     /dev/sde1                1  4294967295  2147483647+   0  Empty
     Partition 1 does not start on physical sector boundary.
     ########################################################################
     ========================================================================1.14
     ==
     == DISK /dev/sde IS PRECLEARED with a GPT Protective MBR
     ==
     ============================================================================

     And as I was posting this, I noticed the partition start value. That should be a big red flag.
  2. I've done a bunch of testing and concluded that preclear is not writing a valid preclear signature on some drives in unRAID 6. My tests included:

     - A 4TB disk precleared in 6.0 beta4 tests as NOT precleared in both 6.0b4 and 5.05. I precleared with all plugins removed, but still had the screen package installed and I had booted into unRAID with Xen and IronicBander's archVM running.
     - A 250GB disk precleared in 6.0 beta4 tests as precleared, even with Xen and my selection of plugins installed.

     I also modified the preclear_disk.sh script to be able to create very dangerous disks that have the preclear signature without actually clearing the whole disk. I call it fake_clear and it allows me to test much quicker.

     - A 250GB disk fake_cleared in 6.0b4 tests as precleared in both 6.0b4 and 5.05.
     - A 4TB disk fake_cleared in 6.0b4 tests as NOT precleared in both 6.0b4 and 5.05. I have run the fake_clear in many 6.0b4 boot configurations including Safe Mode without Xen and with the default go file.
     - A 4TB disk fake_cleared in 5.05 tests as precleared in both 6.0b4 and 5.05.

     I would have guessed the problem is related to disks over 2TB, but tr0910's results seem to contradict this.
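     For anyone who wants to eyeball what the signature check is looking at, here is a rough, read-only sketch (the device name /dev/sdX is a placeholder; the exact byte offsets that preclear_disk.sh tests are in the script itself):

        # Dump the MBR (first 512 bytes) of the disk and inspect it by hand.
        # Read-only; nothing here writes to the disk.
        dd if=/dev/sdX bs=512 count=1 2>/dev/null | hexdump -C

        # Compare with the partition table as fdisk sees it (sector units)
        fdisk -lu /dev/sdX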
  3. I precleared again without preread or postread. I then checked the preclear signature with preclear_disk.sh -t and the drive is reported as NOT precleared. The only things that happened between the finish of the preclear and the signature check were:

     1. Drive spun down due to standby timeout previously set with hdparm -S
     2. Mover ran
     3. During mover, syslog shows one strange line (I think unrelated, but I don't know):
        Mar 30 03:40:38 uwer logger: Cannot stat file /proc/2910/fd/255: No such file or directory
     4. I checked drive power state with hdparm -S

     I think my next step is to boot up a fresh install of unRAID 5.05 and check the preclear signature on the same drive. Then try the cycle again in unRAID 6 beta 4 with addons removed (I currently have apcupsd, SNAP and ntfs-3g installed).
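     For reference, the hdparm options involved look roughly like this (the device name is a placeholder; -S sets the standby timeout, -C queries the current power state):

        # Set the standby (spindown) timeout; 241 means 30 minutes in hdparm's encoding
        hdparm -S 241 /dev/sdX

        # Check the drive's current power state (active/idle vs. standby)
        hdparm -C /dev/sdX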
  4. Or the process of assigning the disk in 6.0 somehow changes the preclear signature. I'm currently preclearing to try again.
  5. I just now tried assigning a precleared drive to the array and I saw the same thing: a warning that starting the array will cause the disk to be cleared. I unassigned the disk before starting the array. I then checked the preclear signature with "preclear_disk.sh -t" and the disk is reported as NOT precleared. In hindsight, I should have checked the preclear signature before assigning it to the array, but the only thing I've done with the drive since preclear is use hdparm to set the spindown time and manually spin it down. The disk was a Seagate 4T precleared on unRAID 6.0 b3 with preclear_disk.sh v1.14. Disk was assigned in unRAID 6.0 b4.
  6. You can run the newperms utility from the command line and specify the directory. For example:

        newperms /path/to/mounted/drive/

     If you are curious, you can also view the newperms script to see the chmod command.
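     For illustration, running it on a single disk and reading the script itself might look like this (the mount point is an example, and the script's location can vary by unRAID version):

        # Apply the new permissions to one mounted disk (path is an example)
        newperms /mnt/disk1

        # Find and read the script to see exactly what it runs
        which newperms
        cat "$(which newperms)"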
  7. The problem is here:

     Mar 14 19:10:19 Tower emhttp: writing GPT on disk (sdf), with partition 1 offset 64, erased: 0
     Mar 14 19:10:19 Tower emhttp: shcmd (23): sgdisk -Z /dev/sdf &> /dev/null
     Mar 14 19:10:19 Tower emhttp: _shcmd: shcmd (23): exit status: 1
     Mar 14 19:10:19 Tower emhttp: shcmd (24): sgdisk -o -a 64 -n 1:64:0 /dev/sdf |& logger
     Mar 14 19:10:19 Tower kernel: mdcmd (33): spinup 0
     Mar 14 19:10:19 Tower kernel: mdcmd (34): spinup 1
     Mar 14 19:10:19 Tower kernel: mdcmd (35): spinup 2
     Mar 14 19:10:19 Tower kernel: mdcmd (36): spinup 3
     Mar 14 19:10:19 Tower kernel: mdcmd (37): spinup 4
     Mar 14 19:10:19 Tower logger: sgdisk: /usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by sgdisk)

     The formatting of the new drive fails because a dependency is missing. You probably have an add-on hiding somewhere. Post a full syslog if you want help finding it.
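     A rough way to chase this down from the console (paths are illustrative and assume a stock Slackware-style layout):

        # List the GLIBCXX versions the installed libstdc++ provides
        # (strings comes from binutils; 'grep -a GLIBCXX' works too if it is missing)
        strings /usr/lib/libstdc++.so.6 | grep GLIBCXX

        # Check whether the library is a symlink pointing into an add-on's files
        ls -l /usr/lib/libstdc++.so.6*

        # Search the flash drive for packages that might install an old libstdc++
        grep -rl libstdc++ /boot/packages /boot/extra 2>/dev/null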
  8. The command should be:

        xl create arch.cfg

     not arch.img.
  9. My guess is that your computer is going into S3 sleep and it is only WOL that is not working. After issuing the sleep command, hit the power button and you should be able to tell whether it is waking from sleep or booting up from scratch (look at the syslog if you are running without a monitor). Your output from ethtool in the original post shows "Wake-on: d". I think it needs to be "Wake-on: g" for magic packet WOL to work.
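     As an illustration, checking and enabling magic-packet WOL with ethtool looks something like this (the interface name eth0 is an assumption, and the setting usually has to be re-applied on each boot, e.g. from the go script):

        # Show the current Wake-on-LAN setting ("Supports Wake-on" and "Wake-on" lines)
        ethtool eth0 | grep -i wake

        # Enable wake on magic packet
        ethtool -s eth0 wol g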
  10. echo -n mem > /sys/power/state

      From here: http://lime-technology.com/forum/index.php?topic=3657.msg262720#msg262720
  11. I've been testing powerdown V2.03 on unRAID 6.0-beta3 and it seems to work quite well. I installed the powerdown plugin from the first post and I do not have apcupsd. I tested with and without Xen enabled at boot. I also tested with and without a VM running.

      I noticed the custom Kxx scripts are executed in reverse order: K99 is first and K00 is last.

      My custom VM shutdown script is quite simple:

         [ -f /var/run/xenstored.pid ] && xl shutdown -a
         sleep 20

      If you only use:

         xl shutdown -a

      the powerdown will continue while the VMs are shutting down and xen processes could get killed before the VM shutdowns are complete.

      I did many powerdown cycles on my HP N40L today and WOL worked just fine.
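      As a variation on the same idea, here is a sketch that polls xl instead of sleeping a fixed 20 seconds (the timeout and poll interval are arbitrary; treat it as an outline rather than a drop-in replacement):

         #!/bin/bash
         # Hypothetical Kxx script: ask all domains to shut down, then wait until
         # only Domain-0 is left (or give up after roughly 60 seconds).
         if [ -f /var/run/xenstored.pid ]; then
             xl shutdown -a
             for i in $(seq 1 30); do
                 # 'xl list' prints a header line plus one line per domain,
                 # so a count of 2 means only Domain-0 remains.
                 [ "$(xl list | wc -l)" -le 2 ] && break
                 sleep 2
             done
         fi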
  12. I think there is a minor discrepancy in readme.txt: I found that ssh is not configured to allow null passwords. But it was easy to change with an sshd_config file in /boot/config/ssh.
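      If anyone wants to do the same, the relevant directive is PermitEmptyPasswords; a minimal sketch, assuming the package applies /boot/config/ssh/sshd_config over the defaults at boot:

         # /boot/config/ssh/sshd_config (fragment)
         # Allow accounts with empty passwords to log in over ssh
         PermitEmptyPasswords yes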
  13. See here: http://lime-technology.com/wiki/index.php/Plugin/webGui/SMB Especially the section toward the end about Creating a Fresh Windows Connection.
  14. I think your allocation is working as expected. You can think of it like a set of wells where a larger disk is like a deeper well. And the only thing that matters is how much empty space there is at the top of the well. For your setup I wouldn't expect the share to start filling disk4 until disk3 reaches 75%. If disk3 were 50% full and disk4 were empty, there would be 1T free on both disks. In this condition, new data will be written to disk3 until the next high-water mark, which is 0.5T free (or 75% used of 2T). Then new data will be written to disk4 until it has less than 0.5T free (or 50% used of 1T).
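      To make the arithmetic concrete, here is a toy simulation of the high-water idea for that two-disk setup (the sizes and file size are made up, and this is only an illustration of the scheme described above, not unRAID's actual code):

         #!/bin/bash
         # Toy model: disk3 is 2000GB and 50% full, disk4 is 1000GB and empty.
         declare -A size=( [disk3]=2000 [disk4]=1000 )
         declare -A used=( [disk3]=1000 [disk4]=0 )
         mark=1000   # high-water mark starts at half the largest disk

         write_file() {   # place one file of $1 GB on the first disk above the mark
             local gb=$1 d
             for d in disk3 disk4; do
                 if (( ${size[$d]} - ${used[$d]} > mark )); then
                     used[$d]=$(( ${used[$d]} + gb ))
                     echo "wrote ${gb}GB to $d (now ${used[$d]}/${size[$d]}GB used)"
                     return
                 fi
             done
             mark=$(( mark / 2 ))   # no disk qualifies: halve the mark and retry
             write_file "$gb"
         }

         for i in $(seq 1 10); do write_file 100; done

      Running it shows the pattern described above: disk3 takes writes until it hits 75% used, then disk4 takes writes until it hits 50% used.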
  15. I recall seeing that error on a fresh unRAID installation. I think the super.dat file is not created until a disk is assigned. You can check to see if the file exists with the command:

         ls /boot/config/super.dat

      Or just assign a disk and see if the error stops showing up in the syslog.
  16. I have been annoyed with the blank screen also. Your question inspired me to figure it out. This command will keep the screen from blanking during the current boot:

         setterm -blank 0

      You might also want to investigate the -powersave and -powerdown options. Looking at the /etc/rc.d/rc.M file, the console might try to enter a deeper power-saving mode after 60 minutes; my monitor does not. I haven't tried putting this in the go script for future boots, but it seems like it should work (see the sketch below). Sorry I can't help with your real problem (IRQ?).
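      A possible go-script addition, untested and assuming the console is on /dev/tty1 (setterm writes escape codes to its output, so it needs to be pointed at the console when run non-interactively):

         # /boot/config/go (fragment) - disable console blanking and power saving at boot
         TERM=linux setterm -blank 0 -powersave off -powerdown 0 >/dev/tty1 </dev/tty1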