Guzzi

Everything posted by Guzzi

  1. AFAIK this is a single-rail power supply and, in my personal experience, it should be fine for the drives. As stated, check cabling and connectors - they cause lots of problems...
  2. Thanks - I mounted the disk in the machine and got kernel errors. It took me some time to find the cause: it was the 16+ drive bug in the current beta release - even though I only had 16 drives (15+1, no cache drive). So it seems the 16+ bug is not only related to the number of drives, but also to the slots! After deleting super.dat (otherwise unRAID always crashed on startup when trying to sync) and moving all drives to the lower slots it works fine ... now syncing... (a rough sketch of that workaround is below)
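     A minimal sketch of the workaround described above, assuming the flash drive is mounted at /boot and super.dat sits at /boot/config/super.dat (the usual location on unRAID, but verify on your release):

         # stop the array via the web GUI first
         cp /boot/config/super.dat /boot/config/super.dat.bak   # keep a backup copy just in case
         rm /boot/config/super.dat                              # unRAID forgets all slot assignments
         # then reassign every drive to the lowest slots in the web GUI and start the array again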
  3. Thanks Joe for the update. I'm trying to "collect" those updates to keep my UnMenu current - I just checked Google Code for the latest version, but it is still 1.1 from 2008. Is there any chance of having one thread where all updates get published? It would make things easier for everybody to find ...
  4. Hi, I did another preclear on a disk that even had syslog errors before, this time with new cables. The syslog is clean now and the preclear completed, but it reports one difference in the seek error rate. Is that something to worry about? Thanks, Guzzi

     ===========================================================================
     =                unRAID server Pre-Clear disk /dev/sdg
     =                       cycle 1 of 1
     = Disk Pre-Clear-Read completed                                 DONE
     = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
     = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
     = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
     = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
     = Step 5 of 10 - Clearing MBR code area                         DONE
     = Step 6 of 10 - Setting MBR signature bytes                    DONE
     = Step 7 of 10 - Setting partition 1 to precleared state        DONE
     = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
     = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
     = Step 10 of 10 - Testing if the clear has been successful.     DONE
     = Disk Post-Clear-Read completed                                DONE
     Disk Temperature: 32C, Elapsed Time: 15:00:48
     ============================================================================
     ==
     == Disk /dev/sdg has been successfully precleared
     ==
     ============================================================================
     S.M.A.R.T. error count differences detected after pre-clear
     note, some 'raw' values may change, but not be an indication of a problem
     58c58
     <   7 Seek_Error_Rate    0x000e   200   200   051    Old_age   Always   -   0
     ---
     >   7 Seek_Error_Rate    0x000e   100   253   051    Old_age   Always   -   0
     ============================================================================
  5. If you were already running unRAID on the newer builds: did you see an improvement regarding the global lock problem (NIC blocked while HDs are spinning up, etc.)?
  6. Well... sorry about causing you extra "trouble", but then I figure you would rather have extra issues uncovered before you move your files... The cost of a few new drives is small compared to the amount of time and effort needed otherwise. I hope your data transfer goes smoothly once you have a set of disks to move it to. From what you've said, your RAID-5 array would have had to deal with the defects on those two old disks at some point... and it might not have been as easy to swap in a new larger drive. Joe L.

     I appreciate the help and the abilities of your tools - I didn't complain, just reported back my experience. Please don't misunderstand me - I am happy to discover the problems in advance instead of having the trouble later, and yes, you're completely right - the price of a new disk is nothing compared to the trouble of a machine and the data on it - that's why I quickly replaced the failing drives with new ones...
  7. Don't you mean... just installing and being surprised, if when something fails :( [...]

     Yes, you're absolutely right - but you noticed my smiley too, didn't you ... It IS a positive thing to get that extended information - I appreciate it - and as you may have seen from my last posts: at least 2 drives from my former Windows RAID-5 do not behave well - and I am more than happy to identify them and throw them out of my box. It's just that I didn't expect all the extra trouble - my initial plan was simply to move the drives from Windows to unRAID, move the data, and be finished ;-)
  8. Thanks for the info - I did some reading, lots of details. Hmmm, I'm not sure I wasn't happier overall before thinking about my HDs - just installing and being surprised if something fails ;-) - just kidding - I like the concept of the preclear script very much - reading and writing the whole HD once before using it in production IS a help in discovering problems in advance. At least I found 2 hard disks behaving strangely - I will have a closer look at them after doing my migration to the healthy drives.
  9. Joe, thanks for your answer - to be honest, I do not fully understand those values - I understand your theory ... Is there some sort of standard set of values to look at when checking drive health? The idea of checking the drives is good - but interpreting the values is difficult (at least for me ;-)). Is it correct that sector reallocation is the thing to focus on?
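     A minimal sketch of the kind of check being asked about, assuming smartmontools is available (it ships with unRAID); the attribute names are the standard SMART ones, but thresholds and scaling vary by vendor, so treat this as a rough health screen rather than a definitive test (sdX is a placeholder):

         # print all SMART attributes and pull out the ones most people watch
         smartctl -a /dev/sdX | egrep "Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error"
         # rising raw values for Reallocated_Sector_Ct or Current_Pending_Sector usually do matter;
         # a growing UDMA_CRC_Error_Count more often points at cabling than at the platters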
  10. Hi, I made another 4 preclears of disks formerly used in another Windows RAID-5. The script reports some differences between pre and post - can you have a look at the seek error rate message and comment on whether it is something to worry about? tnx, Guzzi

     ===========================================================================
     =                unRAID server Pre-Clear disk /dev/sdb
     =                       cycle 1 of 1
     = Disk Pre-Clear-Read completed                                 DONE
     = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
     = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
     = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
     = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
     = Step 5 of 10 - Clearing MBR code area                         DONE
     = Step 6 of 10 - Setting MBR signature bytes                    DONE
     = Step 7 of 10 - Setting partition 1 to precleared state        DONE
     = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
     = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
     = Step 10 of 10 - Testing if the clear has been successful.     DONE
     = Disk Post-Clear-Read completed                                DONE
     Elapsed Time: 21:41:19
     ============================================================================
     ==
     == Disk /dev/sdb has been successfully precleared
     ==
     ============================================================================
     S.M.A.R.T. error count differences detected after pre-clear
     note, some 'raw' values may change, but not be an indication of a problem
     58c58
     <   7 Seek_Error_Rate    0x000e   100   253   051    Old_age   Always   -   0
     ---
     >   7 Seek_Error_Rate    0x000e   200   200   051    Old_age   Always   -   0
     63c63
     < 193 Load_Cycle_Count   0x0032   176   176   000    Old_age   Always   -   72598
     ---
     > 193 Load_Cycle_Count   0x0032   176   176   000    Old_age   Always   -   72599

     ===========================================================================
     =                unRAID server Pre-Clear disk /dev/sdc
     =                       cycle 1 of 1
     = Disk Pre-Clear-Read completed                                 DONE
     = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
     = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
     = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
     = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
     = Step 5 of 10 - Clearing MBR code area                         DONE
     = Step 6 of 10 - Setting MBR signature bytes                    DONE
     = Step 7 of 10 - Setting partition 1 to precleared state        DONE
     = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
     = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
     = Step 10 of 10 - Testing if the clear has been successful.     DONE
     = Disk Post-Clear-Read completed                                DONE
     Elapsed Time: 23:20:10
     ============================================================================
     ==
     == Disk /dev/sdc has been successfully precleared
     ==
     ============================================================================
     S.M.A.R.T. error count differences detected after pre-clear
     note, some 'raw' values may change, but not be an indication of a problem
     58c58
     <   7 Seek_Error_Rate    0x000e   100   253   051    Old_age   Always   -   0
     ---
     >   7 Seek_Error_Rate    0x000e   200   200   051    Old_age   Always   -   0
     63c63
     < 193 Load_Cycle_Count   0x0032   176   176   000    Old_age   Always   -   73100
     ---
     > 193 Load_Cycle_Count   0x0032   176   176   000    Old_age   Always   -   73101

     ===========================================================================
     =                unRAID server Pre-Clear disk /dev/sdd
     =                       cycle 1 of 1
     = Disk Pre-Clear-Read completed                                 DONE
     = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
     = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
     = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
     = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
     = Step 5 of 10 - Clearing MBR code area                         DONE
     = Step 6 of 10 - Setting MBR signature bytes                    DONE
     = Step 7 of 10 - Setting partition 1 to precleared state        DONE
     = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
     = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
     = Step 10 of 10 - Testing if the clear has been successful.     DONE
     = Disk Post-Clear-Read completed                                DONE
     Elapsed Time: 26:25:24
     ============================================================================
     ==
     == Disk /dev/sdd has been successfully precleared
     ==
     ============================================================================
     S.M.A.R.T. error count differences detected after pre-clear
     note, some 'raw' values may change, but not be an indication of a problem
     58c58
     <   7 Seek_Error_Rate    0x000e   100   253   051    Old_age   Always   -   0
     ---
     >   7 Seek_Error_Rate    0x000e   200   200   051    Old_age   Always   -   0
     63c63
     < 193 Load_Cycle_Count   0x0032   173   173   000    Old_age   Always   -   81301
     ---
     > 193 Load_Cycle_Count   0x0032   173   173   000    Old_age   Always   -   81306

     ===========================================================================
     =                unRAID server Pre-Clear disk /dev/sde
     =                       cycle 1 of 1
     = Disk Pre-Clear-Read completed                                 DONE
     = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
     = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
     = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
     = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
     = Step 5 of 10 - Clearing MBR code area                         DONE
     = Step 6 of 10 - Setting MBR signature bytes                    DONE
     = Step 7 of 10 - Setting partition 1 to precleared state        DONE
     = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
     = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
     = Step 10 of 10 - Testing if the clear has been successful.     DONE
     = Disk Post-Clear-Read completed                                DONE
     Elapsed Time: 25:00:20
     ============================================================================
     ==
     == Disk /dev/sde has been successfully precleared
     ==
     ============================================================================
     S.M.A.R.T. error count differences detected after pre-clear
     note, some 'raw' values may change, but not be an indication of a problem
     19,20c19,20
     < Offline data collection status:  (0x82) Offline data collection activity
     < was completed without error.
     ---
     > Offline data collection status:  (0x84) Offline data collection activity
     > was suspended by an interrupting command from host.
     63c63
     < 193 Load_Cycle_Count   0x0032   176   176   000    Old_age   Always   -   72723
     ---
     > 193 Load_Cycle_Count   0x0032   176   176   000    Old_age   Always   -   72724
     ============================================================================
  11. Isn't there still a bug with 16+ drives? Check your syslog for problems - I had errors as soon as I added more than 16 drives (incl. parity, excluding cache).
  12. ... you couldn't see it, because I didn't access the other drive - as soon as I do, e.g. run preclear on it, I get the same messages in the syslog. I do NOT get any of those errors with the other drives (I ran e.g. reiserfsck on all drives except parity). Cabling is always a mess - I had those problems in the pre-unRAID era (Windows RAID-5 with the free Veritas solution) as well - changed SATA cables, changed power cabling, changed the power supply, etc. The worst problem is those splitters that you just touch and you hear the drive spin down and up again - simply because the voltage dropped a bit - this also depends on the brand of the drives, some are more sensitive, some less - at that time I replaced my power cabling, swapping the standard PC stuff for a more solid power distribution, which helped a lot. Anyway, regarding the current situation: I ordered a new drive yesterday, it will be delivered today and it will replace those two "in question" drives. I can then test those drives in another box when I have time and decide whether or not I can continue using them. If they turn out OK, I will throw them into my backup box later. Currently my focus is on getting (or keeping) my main box stable and error-free so I can put it "in the corner and forget about it" ;-)
  13. ... I'm done... I ran the memory test overnight - it passed 8 times without errors - plus I ran reiserfsck on all data drives - all went through without any errors reported. I checked the syslog as well: no errors, neither after boot nor after all those activities. Anything else I can / should do? So it seems those problems are all centered on those 2 drives? If so, I would probably prefer to dispose of them and order 2 new ones - much cheaper than the time it took me to check the whole server ... ;-)
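     For reference, a minimal sketch of the read-only check I mean above; sdX1 is just a placeholder for the data partition, and the filesystem should be unmounted (array stopped) before running it:

         # read-only consistency check, reports problems but does not modify the disk
         reiserfsck --check /dev/sdX1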
  14. The preclear_disk script is very good at thrashing/exercising a disk. As already said, it is far easier to RMA the drives before they are loaded with your data if you find they do not test well. The errors you saw could be because of bad SATA cables, bad power cables/splitters, or even a bad disk controller. But... remember, your SMART report showed an emergency retraction of the heads to a safe landing spot when the drive thought it was losing power in the middle of the preclearing process. That is pretty drastic, as it tries to save itself from a head crash. Is your power supply being overloaded? Are you using a backplane for power distribution? Lots to check out, but at least you are more informed than most Windows OS users. They just blue-screen. Joe L.

     Maybe I wasn't completely clear: I have NO data on those 2 "suspicious" drives (they're unassigned and I didn't mount them except to briefly check that they're empty) - only the array is filled with data (and I haven't encountered problems with those drives so far). The drives are not new - most of them come from my former Windows box and had been running there in RAID-5 for 1-2 years (I hope the warranty isn't over yet ...). I never got BSODs on the Windows box - but I remember drives showing "yellow" once or twice - which was probably the same CRC problem as now. Nevertheless I have to admit there is much more transparency with unRAID and the Linux tools about what is "really" happening - Windows doesn't help you much with that (just "reactivate" the drive, errors are corrected by the RAID layer anyway). BTW: I ran the memory test overnight - it passed 8 times without errors. Will chkdsk the drives when I find the time (currently working with my son on his motorcycle ;-)). The biggest hassle with these many-disk machines (regardless of Windows, Linux, or something else) is power and cabling - and it is very difficult to diagnose. Power might be fine for all normal operations - but if you are accessing a disk and at the same time 20 other disks spin up, it might pull the voltage down - and I have experienced in the past that HDs are VERY sensitive to voltages below 4.8 V on the 5 V rail - to be measured at the drive itself, not somewhere else, because you lose voltage over the cables. Anyway, I thought I was safe, because I operated the Windows box and now the unRAID box with the same power supply but 8 drives fewer... so maybe I'll check the cables again - it seems to be focused on those two ports... So I don't think it's overloaded power-wise, but unRAID is in a different box with different cables and no power backplane, and there might be issues - I have no other choice than to check and solve it - because the plan is to add the remaining 4 disks from the Windows RAID to the unRAID array as soon as the 17+ bug is solved... I hope to soon reach the stage where I can put the box back in the corner and forget about it for the next few years ;-)
  15. Thanks Rob and Joe for the feedback. sdn and sdq are the two drives I currently do not yet have in the array - because they both showed those errors when I first tried setting up the empty array some weeks ago. All other drives are in the array and were fine, showing no errors. Because I didn't trust those 2 drives, I ran the preclear script to be safe - with the result above. It was the very first time I encountered such memory-related errors, never had them before - but you're right, I even had problems accessing Samba shares after this. I restarted the box and everything is fine so far, no errors at all in the syslog (except this DMA stuff on the IDE port - "kernel: atiixp 0000:00:14.1: simplex device: DMA disabled"). BTW: starting preclear on either of those 2 unassigned drives gives me the "ata12.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0" errors above in the log. They do NOT appear during startup. cache_dirs was not started at all - I removed it from the go script and rebooted before I moved the files, so it definitely cannot be responsible for any memory-related issues. I too am worried when I see such things - I think I will remove both drives and test them separately to see if they need to be RMAed. Will also perform a memory test and chkdsk on all drives as recommended, to be sure everything is fine. And yes, there is already data on almost all drives, since I have been moving data over the last weeks. Will post after running the tests. Guzzi
  16. I have 2 GB RAM in the box (from /usr/bin/top -b -n1):

     top - 01:15:04 up  1:13,  0 users,  load average: 3.94, 4.00, 3.73
     Tasks:  73 total,   2 running,  71 sleeping,   0 stopped,   0 zombie
     Cpu(s):  7.8%us, 60.5%sy,  0.0%ni, 22.3%id,  5.0%wa,  0.6%hi,  3.7%si,  0.0%st
     Mem:   1943344k total,  1617648k used,   325696k free,    39868k buffers
     Swap:        0k total,        0k used,        0k free,  1481180k cached

     (I did a reboot after I saw those kernel messages in the syslog - never had that before, only during this specific preclear.) Add-ons: I have disabled cache_dirs to keep memory free while moving data to the box. Here is the go script:

     #!/bin/bash
     # Start the Management Utility
     /usr/local/sbin/emhttp &
     cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c
     # Unraid_Notify (E-Mail Notification)
     #installpkg /boot/packages/socat-1.7.0.0-i486-2bj.tgz
     #installpkg /boot/packages/unraid_notify-2.30-noarch-unRAID.tgz
     installpkg /boot/packages/acpitool-0.4.7-i486-1goa.tgz
     #unraid_notify start
     sleep 30
     # enable wakeup
     /usr/sbin/ethtool -s eth0 wol g
     # Start UnMenu
     /boot/unmenu/uu

     I have to say that I was constantly moving data to the box while clearing the disk - maybe the problems with the disk blocked the copy process? Do I need to upgrade the RAM to 4 GB?
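     A minimal sketch, for reference, of how to see how much of that "used" memory is really reclaimable page cache rather than application memory (free ships with the stock unRAID/Slackware install):

         free -m
         # the "-/+ buffers/cache" line shows usage with cache excluded; a large "cached"
         # figure like the ~1.4 GB in the top output above is normal during heavy copying
         # and is released automatically when programs need the memory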
  17. This is a new one to me... According to a "google" search on "Power-Off_Retract_Count", I got the following:

     [pre]
     # Power-Off_Retract_Count = No. of times the drive was powered off in an emergency, called Emergency Unload.
     # Load_Cycle_Count = This number is highly affected by your power management policies. E.g. too aggressive
     #   power management might put the hard disk to sleep too often. This number is indicative of when your
     #   hard disk parks, unparks, spins up, spins down.
     [/pre]

     So, reading between the lines... unless you powered down the disk while it was being cleared, it *thought* it had lost power, or it really did lose power. It retracted the disk heads in an emergency unload, thinking it had lost power, then loaded them again once it thought power had been restored. I'd check the system log for any other errors while the drive was being cleared. I'd also check any power connectors or "Y" splitters. They can be intermittent. Joe L.

     Hi Joe, checking the power connectors is no problem - I can do that. I checked the syslog several times during the preclear and except in the very first minutes (some drive not ready) there was nothing special. But it seems a lot happened during the post-read - which I do not understand; could you have a look at the log? It covers the complete preclear process from beginning to end!? Thanks, Guzzi
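     A minimal sketch (my own idea, not part of the preclear script) of watching exactly those two attributes while a clear is running, assuming smartctl and watch are available and sdX is the drive being cleared:

         # re-read the attribute table every 10 minutes; a jump in either counter
         # during the clear points at a power event rather than normal wear
         watch -n 600 'smartctl -A /dev/sdX | egrep "Power-Off_Retract|Load_Cycle"'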
  18. Hi, I have successfully precleared a disk, but got the SMART differences below. Is this something I have to worry about, or can I use this disk? I noticed some interface errors in the log at the very beginning, but no errors from the script. Thanks, Guzzi

     ============================================================================
     ==
     == Disk /dev/sdq has been successfully precleared
     ==
     ============================================================================
     S.M.A.R.T. error count differences detected after pre-clear
     note, some 'raw' values may change, but not be an indication of a problem
     62,63c62,63
     < 192 Power-Off_Retract_Count   0x0032   200   200   000    Old_age   Always   -   31
     < 193 Load_Cycle_Count          0x0032   192   192   000    Old_age   Always   -   25344
     ---
     > 192 Power-Off_Retract_Count   0x0032   200   200   000    Old_age   Always   -   32
     > 193 Load_Cycle_Count          0x0032   192   192   000    Old_age   Always   -   25345
     ============================================================================
  19. I understand, it is also what makes it difficult for me to test... Combine that with the fact that the only WD 1TB drive I own is already part of my array (and nearly full), and I have no desire to clear it, and you can see why testing can take as long as it does. Can you do me a favor and let me know the "geometry" of the drive that fails to clear? You can do that by typing: fdisk -l /dev/sdX, where sdX = the actual drive in your array (replace the X with the correct drive letter). Joe L.

     Sure - here you go:

     Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
     1 heads, 63 sectors/track, 31008336 cylinders
     Units = cylinders of 63 * 512 = 32256 bytes
     Disk identifier: 0x00000000

        Device Boot      Start         End      Blocks   Id  System
     /dev/sdc1               2    31008336   976762552+  83  Linux
     Partition 1 does not end on cylinder boundary.

     The "funny" thing is that the pre-read always runs to 100% - so maybe you can check your code for differences in the handling of pre-read and post-read? cheers, Guzzi
  20. I did 2 "preclears" on WD 1 TB drives - on two different servers. Both did hang at 88% - took approx. 25 hours (that's what makes it difficult to just "retest" ;-)) One board was 780G chipset, the other 690 - not sure if the drives were connected to onboard sata (there is some workaround in the kernel for those chipsets, ist't it?) or to sil3114. Maybe this info helps? cheers, Guzzi PS: I did run it in telnet session .... and yes, it stopped updating the screen. Using latest Unraid beta.
  21. Thanks for the hint - argh, I hate those cables. I replaced all the SATA cables some time ago because of problems; maybe I reused some of the old ones, since this is my 2nd unRAID server... tnx anyway, will have a look at this.
  22. This seems to happen once in a while. Most of the time, if you start another pass on the drive it will finish as it should.

     Hmmm, well OK, I cancelled the process and started it on another drive - same size (1 TB WD Green) - and exactly the same thing happens: it hangs at 88% of the post-read, at the same position (888,330,240,000 of ... bytes read). Is this a problem with the WD drives? The first read is OK, all steps of the preclear including writing zeroes are OK, only the last pass ("post-read") always hangs at the same position. Any ideas? The only thing I saw in the log was some errors at the very beginning - while preclear didn't give me any messages or errors. Besides the preclear hanging: should I be worried about the log entries, even though preclear didn't report errors? Log:

     Jul 15 03:44:27 XMS-GMI-01 kernel: ata10.00: exception Emask 0x0 SAct 0x0 SErr 0x280000 action 0x0
     Jul 15 03:44:27 XMS-GMI-01 kernel: ata10.00: BMDMA2 stat 0x6d0009
     Jul 15 03:44:27 XMS-GMI-01 kernel: ata10: SError: { 10B8B BadCRC }
     Jul 15 03:44:27 XMS-GMI-01 kernel: ata10.00: cmd 25/00:00:4f:3b:4f/00:04:4c:00:00/e0 tag 0 dma 524288 in
     Jul 15 03:44:27 XMS-GMI-01 kernel:          res 51/04:3f:10:3e:4f/00:01:4c:00:00/f0 Emask 0x1 (device error)
     Jul 15 03:44:27 XMS-GMI-01 kernel: ata10.00: status: { DRDY ERR }
     Jul 15 03:44:27 XMS-GMI-01 kernel: ata10.00: error: { ABRT }
     Jul 15 03:44:27 XMS-GMI-01 kernel: ata10.00: configured for UDMA/100
     Jul 15 03:44:27 XMS-GMI-01 kernel: ata10: EH complete
     Jul 15 03:44:27 XMS-GMI-01 kernel: sd 10:0:0:0: [sdl] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB)
     Jul 15 03:44:27 XMS-GMI-01 kernel: sd 10:0:0:0: [sdl] Write Protect is off
     Jul 15 03:44:27 XMS-GMI-01 kernel: sd 10:0:0:0: [sdl] Mode Sense: 00 3a 00 00
     Jul 15 03:44:27 XMS-GMI-01 kernel: sd 10:0:0:0: [sdl] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     [...]
     Jul 15 04:24:55 XMS-GMI-01 kernel: ata10.00: exception Emask 0x0 SAct 0x0 SErr 0x280000 action 0x0
     Jul 15 04:24:55 XMS-GMI-01 kernel: ata10.00: BMDMA2 stat 0x6d0009
     Jul 15 04:24:55 XMS-GMI-01 kernel: ata10: SError: { 10B8B BadCRC }
     Jul 15 04:24:55 XMS-GMI-01 kernel: ata10.00: cmd 25/00:00:cf:86:33/00:04:1d:00:00/e0 tag 0 dma 524288 in
     Jul 15 04:24:55 XMS-GMI-01 kernel:          res 51/04:2f:a0:87:33/00:03:1d:00:00/f0 Emask 0x1 (device error)
     Jul 15 04:24:55 XMS-GMI-01 kernel: ata10.00: status: { DRDY ERR }
     Jul 15 04:24:55 XMS-GMI-01 kernel: ata10.00: error: { ABRT }
     Jul 15 04:24:55 XMS-GMI-01 kernel: ata10.00: configured for UDMA/100
     Jul 15 04:24:55 XMS-GMI-01 kernel: ata10: EH complete
     Jul 15 04:24:55 XMS-GMI-01 kernel: sd 10:0:0:0: [sdl] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB)
     Jul 15 04:24:55 XMS-GMI-01 kernel: sd 10:0:0:0: [sdl] Write Protect is off
     Jul 15 04:24:55 XMS-GMI-01 kernel: sd 10:0:0:0: [sdl] Mode Sense: 00 3a 00 00
     Jul 15 04:24:55 XMS-GMI-01 kernel: sd 10:0:0:0: [sdl] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Jul 15 05:21:20 XMS-GMI-01 emhttp: shcmd (103): /usr/sbin/hdparm -y /dev/sdm >/dev/null
  23. I have a question: I started preclear_disk on a drive I wanted to add to my array. I came back tonight expecting it to be finished, but it seems stuck. The telnet screen shows:

     ===========================================================================
     =                unRAID server Pre-Clear disk /dev/sdc
     =                       cycle 1 of 1
     = Disk Pre-Clear-Read completed                                 DONE
     = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
     = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
     = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
     = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
     = Step 5 of 10 - Clearing MBR code area                         DONE
     = Step 6 of 10 - Setting MBR signature bytes                    DONE
     = Step 7 of 10 - Setting partition 1 to precleared state        DONE
     = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
     = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
     = Step 10 of 10 - Testing if the clear has been successful.     DONE
     = Post-Read in progress: 88% complete.
     ( 888,330,240,000 of 1,000,204,886,016 bytes read )
     Elapsed Time: 25:16:35

     ps shows:

     root@XMS-GMI-01:~# ps -ef | grep preclear
     root     20752 27552 11 14:19 pts/0    00:44:03 /bin/bash ./preclear_disk.sh /dev/sdc
     root     21116 21101  0 20:40 pts/1    00:00:00 grep preclear
     root     27552 27244  0 Jul13 pts/0    00:01:06 /bin/bash ./preclear_disk.sh /dev/sdc
     root@XMS-GMI-01:~#

     Is there anything I can do other than restarting the whole thing from the beginning? The unRAID server is alive, I can read and write to it. Thanks, Guzzi
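     A minimal sketch of one way to tell whether the post-read is genuinely stuck or just no longer updating the screen, by watching the kernel's raw I/O counters for the drive (sdc as in the output above; /sys/block/<dev>/stat is standard sysfs):

         cat /sys/block/sdc/stat   # field 1 = reads completed, field 3 = sectors read
         sleep 60
         cat /sys/block/sdc/stat   # if neither number has moved, the post-read really is hung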
  24. Sounds fine. This question applies even if they were two user-shared folders on the LAN. I'm guessing the normal allocation rules you have in place are used.

     You are living dangerously... normally, moving a file into a directory where it already exists will overwrite the original. If you have duplicate files in parallel directories, don't just move folders... you might not end up with what you wanted. It has nothing to do with how the folders are shared on the LAN, it is all in the user-share file-system Tom created. The same file-system logic is used no matter how you move the files. You'll need to test this. Pretty sure the directory structure will be created based on your split-level rules. If not enough space exists, a move might fail. (How gracefully it fails is a different matter... Hopefully, it will not delete the original until the move is successful... but test first with copies of files, not with files that are important.) The mover will move it to the original directory it was supposed to be in... It will create the directory if needed; it will not move a file to a new folder just because you renamed it... (Unless Tom's logic is smart enough to rename the folder on the cache drive too when you rename the user share... Since I don't have a cache drive assigned, you'll need to test that one.) As I said, let us know what it does. Joe L.

     To be honest: I do not want to test this on my "prod" data. I think I will set up a test system with my spare license and do the tests there - it might be a good thing for the future anyway, and I can use the 120 GB drives lying around that are useless for anything else. BTW: the "nice thing" seems to be (?) that moves between user shares and structures do NOT move the file data itself - so the "user share filesystem driver" seems to just handle the directory structures. Thus I don't think the allocation rules could even come into play. But as you know: a broken filesystem structure is not a funny thing - I just tried to recreate a superblock on a 1 TB drive without success, and that's enough ;-) Still, lots of questions; I will set up a separate system to test ... ("beat the curiosity...") cheers, Guzzi PS: I didn't want to "hijack" this thread, sorry - feel free to move it elsewhere under a better-fitting headline - it might be interesting for others as well...
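     A minimal sketch of the kind of harmless test I have in mind, using a throwaway file instead of production data (TestShare is a hypothetical share name; /mnt/user and /mnt/diskN are the standard unRAID user-share and disk-share mount points):

         # create a small test file on a user share, then see which physical disk it landed on
         dd if=/dev/zero of=/mnt/user/TestShare/testfile bs=1M count=10
         ls -l /mnt/disk*/TestShare/testfile 2>/dev/null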