Everything posted by Shonky

  1. I have found that it just takes longer (sometimes significantly), but it does eventually sync both ways. Have you tried using very small text files to test? The delay for up-sync means that I wouldn't make any significant changes on unRAID - I don't know if this is docker- or Dropbox-client-related. Mine is doing this too. I have two machines (ignoring others, but they should not make a difference): my Windows 7 laptop and my unRAID box (and the Dropbox cloud, I guess). Anything I put on the Windows 7 machine syncs fine up to Dropbox and back down to the NAS; no problem there. Anything I put on the unRAID machine only syncs if it's at the top level or one down, i.e. Dropbox/* or Dropbox/somefolder/* are OK, but Dropbox/somefolder/anotherfolder/* does not sync. If Dropbox restarts, the deeper files *might* sync from unRAID (it takes a long time to restart). I seem to get a lot of conflicts, possibly from running portable apps from my Dropbox. I have about 80,000 files synced, for about 25GB. The docker container reports /proc/sys/fs/inotify/max_user_watches as 524288, which should be plenty (a quick check is sketched below). I had similar issues back on unRAID v5 with a plugin. Does everyone else have Dropbox syncing up from unRAID when files are deeper than a couple of folder levels?
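     If you want to compare on your own box, something like this from the container's console shows both numbers (a sketch; the /mnt/user/Dropbox path is an assumption - substitute your own sync folder):

     cat /proc/sys/fs/inotify/max_user_watches   # mine reports 524288
     find /mnt/user/Dropbox -type d | wc -l      # directories Dropbox must watch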
  2. Not quite following that last post, but I think it's saying the same thing as me. I found that the automatic updates seemed to get "stuck" sometimes, so manually restarting the Docker container and just monitoring the log files for the CPVER line was enough to push it up to 4.4.1. If it seemed to stop, just restart it. .ui_info seems to be the only file that needs to be changed on a Windows frontend, I found; .ui_properties seems to be ignored, as does ui_<user name>.properties. Just set the local port to whatever you are tunneling through to the unRAID machine, along with the GUID, and you should be good to go (see the sketch below). One thing I did find was that if I changed any of the mapped paths, it seemed to reset the container's version of CrashPlan back to 4.3.0 and it had to go through the stepped upgrade process again; not sure if that's expected or not. Otherwise, thanks - it's now working after multiple false starts trying to get it to adopt my old install from unRAID v5. Also, should the default path be updated? Currently it's /mnt/sdd/appdata/crashplan, whereas other dockers generally use /mnt/user/appdata/xxxxx. A minor annoyance, but something I had to change multiple times since I was removing and re-installing a few times whilst I attempted to get it to work.
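     For anyone else pointing a Windows frontend at the container, the arrangement is roughly this (a sketch; the hostname, user and local port 4200 are placeholders - 4243 is the port the CrashPlan engine normally listens on):

     # forward a local port on the frontend machine to the engine on unRAID
     ssh -L 4200:localhost:4243 root@tower

     # .ui_info on the frontend then carries that local port plus the engine's GUID, e.g.
     # 4200,aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee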
  3. Yes, the problem is that the line that searches for the duplicate files cannot handle filenames (or folders in this case) with spaces in them. I'm looking at it right now, but my knowledge of regular expressions isn't good enough; even the comment before the code admits as much. The offending bit of code, from line 43 of 25-unmenu-dupe_files.awk:

     cmd="tail -10000 /var/log/syslog | grep \"duplicate object\" | sed \"s/  */ /g\" | cut -d\" \" -f8- |"
     cmd = cmd "sed -e \"s/^\\\\/[^\\\\/]*\\\\/[^\\\\/]*\\\\/\\\\(.*\\\\)/ls -lad \\\\/mnt\\\\/*\\\\/'\\\\1'/\""
     cmd = cmd " | tr \"'\" \"*\" | sort -u | sh - "

     Essentially we need quotes around the argument list to the ls -lad command.
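     Something along these lines is what I mean - the same pipeline with the filename part double-quoted, so the /mnt/*/ glob still expands but spaces survive. This is just a sketch of the idea outside awk, untested against unmenu itself:

     tail -10000 /var/log/syslog | grep "duplicate object" | sed "s/  */ /g" | cut -d" " -f8- \
       | sed -e 's|^/[^/]*/[^/]*/\(.*\)|ls -lad /mnt/*/"\1"|' \
       | sort -u | sh -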
  4. Running unRAID 4.7 on an HP ProLiant MicroServer:

     Processor: Dual-Core AMD Athlon II NEO N36L (1.30 GHz, 15W, 2MB)
     Cache Memory: 2x 1MB Level 2 cache
     Chipset: AMD RS785E/SB820M
     Memory: 1GB (1x1GB) PC3-10600E unbuffered DDR3 ECC, operating at 800MHz
     Network Controller: Embedded NC107i PCI Express Gigabit Ethernet Server Adapter, 1x RJ-45 (10/100/1000)
     Expansion Slots: 1x PCI-e x16 (half height, half length), 1x PCI-e x1 (half height, half length)
     Storage Controller: Integrated SATA controller with embedded RAID (0, 1)
     Graphics: On-board VGA, 128MB shared, supporting 1920x1200 @ 60Hz
     USB 2.0 Ports: 7 total (2 rear, 4 front panel, 1 internal)
     eSATA: 1 rear (Gen 2)
     Standards: ACPI v2.0, PCI 2.3, PXE, WOL, IPMI 2.0, USB 2.0, SATA Gen 2

     I was trying to delete some files (actually move them from one physical disk to another). I found during the move it was setting that particular disk read-only because reiserfs was crashing:

     Jun 8 11:34:39 Mars kernel: REISERFS error (device md2): vs-4080 _reiserfs_free_block: block 260637938: bit already cleared
     Jun 8 11:34:39 Mars kernel: REISERFS (device md2): Remounting filesystem read-only
     Jun 8 11:34:39 Mars kernel: REISERFS error (device md2): vs-4080 _reiserfs_free_block: block 260638041: bit already cleared
     Jun 8 11:34:39 Mars kernel: REISERFS error (device md2): vs-4080 _reiserfs_free_block: block 260643382: bit already cleared
     Jun 8 11:34:40 Mars kernel: ------------[ cut here ]------------
     Jun 8 11:34:40 Mars kernel: WARNING: at fs/reiserfs/journal.c:3358 journal_end+0x5b/0xbc()
     Jun 8 11:34:40 Mars kernel: Hardware name: ProLiant MicroServer
     Jun 8 11:34:40 Mars kernel: Modules linked in: md_mod xor ahci
     Jun 8 11:34:40 Mars kernel: Pid: 3468, comm: rm Tainted: G W 2.6.32.9-unRAID #8
     Jun 8 11:34:40 Mars kernel: Call Trace:
     Jun 8 11:34:40 Mars kernel: [<c102449e>] warn_slowpath_common+0x60/0x77
     Jun 8 11:34:40 Mars kernel: [<c10244c2>] warn_slowpath_null+0xd/0x10
     Jun 8 11:34:40 Mars kernel: [<c10c0547>] journal_end+0x5b/0xbc
     Jun 8 11:34:40 Mars kernel: [<c10af87d>] reiserfs_delete_inode+0x7b/0xae
     Jun 8 11:34:40 Mars kernel: [<c10af802>] ? reiserfs_delete_inode+0x0/0xae
     Jun 8 11:34:40 Mars kernel: [<c107b14f>] generic_delete_inode+0x75/0xdb
     Jun 8 11:34:40 Mars kernel: [<c107b1c6>] generic_drop_inode+0x11/0x4c
     Jun 8 11:34:40 Mars kernel: [<c107aabd>] iput+0x4b/0x4e
     Jun 8 11:34:40 Mars kernel: [<c1074d88>] do_unlinkat+0xb4/0xf9
     Jun 8 11:34:40 Mars kernel: [<c1046c30>] ? call_rcu_sched+0xd/0xf
     Jun 8 11:34:40 Mars kernel: [<c1046c3a>] ? call_rcu+0x8/0xa
     Jun 8 11:34:40 Mars kernel: [<c10372b2>] ? __put_cred+0x3c/0x3e
     Jun 8 11:34:40 Mars kernel: [<c10170af>] ? do_page_fault+0x1db/0x1e4
     Jun 8 11:34:40 Mars kernel: [<c1074ec6>] sys_unlinkat+0x2d/0x2f
     Jun 8 11:34:40 Mars kernel: [<c1002935>] syscall_call+0x7/0xb
     Jun 8 11:34:40 Mars kernel: ---[ end trace b813ccaaf3f13a41 ]---

     It's quite repeatable: as soon as I try to delete a particular file, it crashes. Regardless of the corruption, it shouldn't crash, should it? I've unmounted it and I'm currently running a reiserfsck --fix-fixable on the device and getting these sorts of corrections. What would cause these sorts of errors? There aren't any other indications in the syslog.
     ########### reiserfsck --fix-fixable started at Wed Jun 8 11:35:44 2011 ###########
     Replaying journal: Trans replayed: mountid 99, transid 1892575, desc 1575, len 1, commit 1577, next trans offset 1560
     Trans replayed: mountid 99, transid 1892576, desc 1578, len 1, commit 1580, next trans offset 1563
     Trans replayed: mountid 99, transid 1892577, desc 1581, len 1, commit 1583, next trans offset 1566
     Trans replayed: mountid 99, transid 1892578, desc 1584, len 1, commit 1586, next trans offset 1569
     Replaying journal: Done. Reiserfs journal '/dev/md2' in blocks [18..8211]: 4 transactions replayed
     Checking internal tree..
     bad_directory_item: block 37224453: The directory item [4 38255 0x1 DIR (3)] has a not properly hashed entry (4)
     bad_leaf: block 37224453, item 10: The corrupted item found (4 38255 0x1 DIR (3), len 712, location 1980 entry count 26, fsck need 0, format old)
     bad_directory_item: block 37224453: The directory item [4 72010 0x1 DIR (3)] has a not properly hashed entry (11)
     bad_leaf: block 37224453, item 12: The corrupted item found (4 72010 0x1 DIR (3), len 688, location 1248 entry count 18, fsck need 0, format old)
     bad_path: block 241369145, pointer 97: The used space (58275) of the child block (310476801) is not equal to the (blocksize (4096) - free space (0) - header size (24)) - corrected to (4072)
     bad_indirect_item: block 220451472: The item (27314 207020 0x671001 IND (1), len 1992, location 2104 entry count 0, fsck need 0, format new) has the bad pointer (33) to the block (220400319), which is in tree already - zeroed
     bad_indirect_item: block 164298762: The item (103348 103544 0x1 IND (1), len 736, location 632 entry count 0, fsck need 0, format new) has the bad pointer (29) to the block (159713031), which is in tree already - zeroed
     bad_indirect_item: block 69573480: The item (196168 196190 0x2d37f001 IND (1), len 4048, location 48 entry count 0, fsck need 0, format new) has the bad pointer (990) to the block (69781823), which is in tree already - zeroed
     bad_indirect_item: block 69573486: The item (196168 196190 0x2eb37001 IND (1), len 4048, location 48 entry count 0, fsck need 0, format new) has the bad pointer (234) to the block (69787139), which is in tree already - zeroed
     bad_indirect_item: block 69573486: The item (196168 196190 0x2eb37001 IND (1), len 4048, location 48 entry count 0, fsck need 0, format new) has the bad pointer (594) to the block (69787499), which is in tree already - zeroed
     bad_indirect_item: block 49808069: The item (250227 250228 0xae2c7001 IND (1), len 4048, location 48 entry count 0, fsck need 0, format new) has the bad pointer (283) to the block (50585914), which is in tree already - zeroed
     bad_indirect_item: block 49808069: The item (250227 250228 0xae2c7001 IND (1), len 4048, location 48 entry count 0, fsck need 0, format new) has the bad pointer (724) to the block (50586355), which is in tree already - zeroed
     bad_indirect_item: block 49808073: The item (250227 250228 0xaf297001 IND (1), len 4048, location 48 entry count 0, fsck need 0, format new) has the bad pointer (178) to the block (50589857), which is in tree already - zeroed
     bad_indirect_item: block 49808073: The item (250227 250228 0xaf297001 IND (1), len 4048, location 48 entry count 0, fsck need 0, format new) has the bad pointer (674) to the block (50590353), which is in tree already - zeroed
     bad_indirect_item: block 49808073: The item (250227 250228 0xaf297001 IND (1), len 4048, location 48 entry count 0, fsck need 0, format new) has the bad pointer (997) to the block (50590676), which is in tree already - zeroed

     I don't think it's ever been shut down improperly, so aside from a faulty disk, is there anything else that might cause something like this? Thanks syslog.txt
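     For anyone wanting to run the same check, the sequence is essentially this (with the array stopped so the filesystem is offline; md2 corresponds to disk2 here, and the read-only --check pass first is just my habit, not something unRAID requires):

     umount /dev/md2                      # make sure nothing has it mounted
     reiserfsck --check /dev/md2          # read-only pass to see what's wrong
     reiserfsck --fix-fixable /dev/md2    # then apply the safe corrections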
  5. There was one hard error on one of the drives in the syslog, but that shouldn't have caused it to run slow the whole time (syslog from boot attached). The automated status emails I got showed:

     Parity CHECK/RESYNC in progress, 01.7% complete, est. finish in 1508.5 minutes. Speed: 21197 kb/s.
     Parity CHECK/RESYNC in progress, 05.8% complete, est. finish in 1439.7 minutes. Speed: 21277 kb/s.
     Parity CHECK/RESYNC in progress, 09.9% complete, est. finish in 1235.1 minutes. Speed: 23734 kb/s.
     Parity CHECK/RESYNC in progress, 14.0% complete, est. finish in 1200.4 minutes. Speed: 23299 kb/s.
     Parity CHECK/RESYNC in progress, 18.3% complete, est. finish in 1163.0 minutes. Speed: 22841 kb/s.
     Parity CHECK/RESYNC in progress, 22.5% complete, est. finish in 1157.9 minutes. Speed: 21764 kb/s.
     Parity CHECK/RESYNC in progress, 26.7% complete, est. finish in 1069.1 minutes. Speed: 22317 kb/s.
     Parity CHECK/RESYNC in progress, 30.8% complete, est. finish in 1051.7 minutes. Speed: 21416 kb/s.
     Parity CHECK/RESYNC in progress, 34.7% complete, est. finish in 920.3 minutes. Speed: 23065 kb/s.
     Parity CHECK/RESYNC in progress, 38.7% complete, est. finish in 952.9 minutes. Speed: 20917 kb/s.
     Parity CHECK/RESYNC in progress, 42.5% complete, est. finish in 850.1 minutes. Speed: 21981 kb/s.
     Parity CHECK/RESYNC in progress, 46.6% complete, est. finish in 740.2 minutes. Speed: 23479 kb/s.
     Parity CHECK/RESYNC in progress, 50.5% complete, est. finish in 755.2 minutes. Speed: 21300 kb/s.
     Parity CHECK/RESYNC in progress, 54.4% complete, est. finish in 714.7 minutes. Speed: 20734 kb/s.
     Parity CHECK/RESYNC in progress, 58.3% complete, est. finish in 629.6 minutes. Speed: 21524 kb/s.
     Parity CHECK/RESYNC in progress, 62.4% complete, est. finish in 559.9 minutes. Speed: 21856 kb/s.
     Parity CHECK/RESYNC in progress, 66.1% complete, est. finish in 517.2 minutes. Speed: 21290 kb/s.
     Parity CHECK/RESYNC in progress, 69.9% complete, est. finish in 484.8 minutes. Speed: 20188 kb/s.
     Parity CHECK/RESYNC in progress, 73.6% complete, est. finish in 412.9 minutes. Speed: 20733 kb/s.
     Parity CHECK/RESYNC in progress, 81.7% complete, est. finish in 91.2 minutes. Speed: 65055 kb/s.
     Parity CHECK/RESYNC in progress, 92.7% complete, est. finish in 40.9 minutes. Speed: 57647 kb/s.

     These are spaced hourly, so the speed was consistent to 75% and then it got much faster. I assume that's because the data drives are currently only 1.5TB each and the parity is 2TB, so the last 0.5TB of parity isn't really anything (yet). Now that the parity generation has completed, I've just started a parity check. It is running from the start at about 63MB/sec, which is what was showing before I made all the changes. At least that shows it probably isn't anything to do with the disk change. So the question is: why is the parity *build* about one third the speed of the parity *check*? Thanks syslog.txt
  6. Update: I noticed that the load cycle count had increased significantly. I had just recently turned off the automatic head parking using the wdidle3 tool; it had been set to 8 seconds previously. "Disabling" actually seemed to make it much, much worse: one drive went from about 300k cycles to about 600k whilst trying to rebuild the parity over a few days (basically since my first post). I could see the count going up just by refreshing the unmenu SMART page. So I cancelled the parity check and rebooted into a DOS session to run the wdidle3 program again, but this time I set it to 5 minutes rather than disabled. After rebooting, the speed started around 16000kB/sec and is now running at a fairly consistent 21000kB/sec (rather than jumping up and down like it was before): 30% complete after 7.5 hours. Also, the load cycle counts are no longer increasing. Still not super fast, but is this speed of ~20MB/sec reasonable to see (it will take <1 day to complete instead of 10-20)? I don't remember how long it took originally when I set it up. Thanks for listening.
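     If anyone wants to watch the counter without refreshing unmenu, something like this works from the console (a sketch; /dev/sdb is a placeholder for whichever drive you're watching):

     # poll the SMART attribute once a minute
     watch -n 60 'smartctl -A /dev/sdb | grep -i load_cycle'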
  7. The Seagate is absolutely brand new, never installed anywhere else, and I ran the preclear script on it before using it too. I do have another one installed (currently sde) that I'm running the preclear on today; results attached from the current one. However, when I first noticed the very poor parity build speed, I stopped and went back to using the original parity drive (was sdd) and got similar slow speeds (actually a bit slower). So that kinda rules out the Seagate. Is there a significant difference between a parity build and a parity check apart from writing the parity drive vs reading it? That's about the only thing I can see that has significantly changed. In fact, I still get quite good hdparm results even *whilst* doing the parity build:

     root@Mars:~# hdparm -Tt /dev/sd[abc]
     /dev/sda:
      Timing cached reads: 1540 MB in 2.00 seconds = 770.26 MB/sec
      Timing buffered disk reads: 318 MB in 3.01 seconds = 105.55 MB/sec
     /dev/sdb:
      Timing cached reads: 1118 MB in 2.00 seconds = 558.89 MB/sec
      Timing buffered disk reads: 86 MB in 3.22 seconds = 26.74 MB/sec
     /dev/sdc:
      Timing cached reads: 998 MB in 2.00 seconds = 498.66 MB/sec
      Timing buffered disk reads: 96 MB in 3.09 seconds = 31.09 MB/sec

     preclear_start__5YD2SZRE_2011-06-02.txt preclear_finish__5YD2SZRE_2011-06-02.txt preclear_rpt__5YD2SZRE_2011-06-02.txt
  8. If it's a SMART-type issue, then why did it all of a sudden drop in performance? I got it cheap, I think because of a reseller error, but maybe it was a clearance too. It was quite a good deal for a solid little 5-bay device; there was a massive rush over a few days to get them, and one reseller even bought a bunch of stock and then resold them. smartlogs_sda.txt smartlogs_sdb.txt smartlogs_sdc.txt
  9. I did link to the hardware specs - it's an off-the-shelf system, just added drives and a USB key:

     http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/15351-15351-4237916-4237918-4237917-4248009.html
     http://h18004.www1.hp.com/products/quickspecs/13716_na/13716_na.html

     Processor: Dual-Core AMD Athlon II NEO N36L (1.30 GHz, 15W, 2MB)
     Cache Memory: 2x 1MB Level 2 cache
     Chipset: AMD RS785E/SB820M
     Memory: 1GB (1x1GB) PC3-10600E unbuffered DDR3 ECC, operating at 800MHz
     Network Controller: Embedded NC107i PCI Express Gigabit Ethernet Server Adapter, 1x RJ-45 (10/100/1000)
     Expansion Slots: 1x PCI-e x16 (half height, half length), 1x PCI-e x1 (half height, half length)
     Storage Controller: Integrated SATA controller with embedded RAID (0, 1)
     Graphics: On-board VGA, 128MB shared, supporting 1920x1200 @ 60Hz
     USB 2.0 Ports: 7 total (2 rear, 4 front panel, 1 internal)
     eSATA: 1 rear (Gen 2)
     Standards: ACPI v2.0, PCI 2.3, PXE, WOL, IPMI 2.0, USB 2.0, SATA Gen 2

     Parity check *before* replacing the drive was looking like this, from the automated emails I have set up:

     Parity CHECK/RESYNC in progress, 47.1% complete, est. finish in 187.2 minutes. Speed: 68953 kb/s.
     Parity CHECK/RESYNC in progress, 62.4% complete, est. finish in 140.1 minutes. Speed: 65436 kb/s.
     Parity CHECK/RESYNC in progress, 77.7% complete, est. finish in 87.3 minutes. Speed: 62312 kb/s.

     After the drive replacement:

     Parity CHECK/RESYNC in progress, 1.2% complete, est. finish in 6752.1 minutes. Speed: 4757 kb/s.
     Parity CHECK/RESYNC in progress, 3.1% complete, est. finish in 2147.3 minutes. Speed: 14680 kb/s.
     Parity CHECK/RESYNC in progress, 4.8% complete, est. finish in 2277.5 minutes. Speed: 13602 kb/s.
     Parity CHECK/RESYNC in progress, 5.8% complete, est. finish in 12273.2 minutes. Speed: 2493 kb/s.
     Parity CHECK/RESYNC in progress, 6.7% complete, est. finish in 12582.1 minutes. Speed: 2412 kb/s.
     Parity CHECK/RESYNC in progress, 7.9% complete, est. finish in 12286.9 minutes. Speed: 2435 kb/s.
     Parity CHECK/RESYNC in progress, 8.9% complete, est. finish in 11670.4 minutes. Speed: 2536 kb/s.

     (these are at hourly intervals)
  10. Thanks for the suggestions. I had already fitted the 2TB drives and it was working OK, and I haven't physically touched the machine since. Also, sda through sdd are all in a cage with only one cable; I'd expect lots of errors in the syslog if there were a cable problem, or worst case the drives would simply not be detected. I did preclear the new 2TB drive, although I don't think that is as significant for a parity drive aside from the pre-stressing side of it. The other two 1.5TB drives do have high load cycle counts in SMART (damn WD) but otherwise are working fine. One does show the following in the unMENU status page:

     load_cycle_count=336976
     current_pending_sector=4
     offline_uncorrectable=1
     multi_zone_error_rate=14

     I know this isn't good and I'll probably have to retire these drives soon, but in the meantime I just want my protected array going.
  11. OK, I had a 3-drive system with 3 x 1.5TB WD EADS drives. It was working quite well in an HP ProLiant MicroServer (after being transferred from running as a VM). I was getting good parity check speeds of 60-65MB/sec (checked before changing config). This was using the free unRAID 4.7. Hardware link is below; it's no supercomputer, but perfectly adequate for unRAID, and was cheap at just A$180. Syslog attached, up to about 10 minutes after a reboot and parity generation. The 2TB drives were fitted to preclear for a while and that went along no problem.

     http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/15351-15351-4237916-4237918-4237917-4248009.html

     I wanted more space, so I paid for the Plus key and bought 2 x 2TB Seagate Green drives (the eventual plan is to have 3x1.5TB + 2x2TB for 6.5TB of space). So the first thing was to replace the parity drive with one of those 2TB drives; that's where I'm at now. I stopped the array, replaced the parity drive (with the larger 2TB) and restarted it after checking the "are you sure?" box. So now I have (as an intermediate stage):

     2TB - parity
     2 x 1.5TB - storage

     Parity generation speed is jumping around but is usually less than 5MB/sec, sometimes briefly jumping up to around 10-15MB/sec. e.g. currently:

     Estimated speed: 2,647 KB/sec
     Estimated finish: 12278.5 minutes

     The two 1.5TB data drives seem to be seeking about quite a bit, whereas the parity one isn't (I can hear and feel them); they are different drives, so it's hard to say what the newer parity drive is doing. I did temporarily go back to 3 x 1.5TB and it was even a bit slower than this. There are no hardware-type errors in the syslog. The 2TB gives 120MB/sec+ speeds for reading and writing; the 2 x 1.5TB give about 75-80MB/sec read speed (can't write since they have the current file system on them). This seems excessively slow: current estimates put completion about 10 days away, with no protection in the meantime. Any ideas? I don't know what's changed that could cause this. Thanks Christian syslog.txt
  12. Well, unRAID is Linux-based (Slackware if I remember rightly), so I'd start with a Linux solution if possible rather than BSD.
  13. I got a pretty quick reply from Tom and he pointed me to a newer version: 4.5.1-sastest4 is working fairly well for me so far. I do have some concerns about the virtual machine, though, based on his comments. Tom also mentioned that this version may not pass hard drive information and serial numbers through correctly, which could cause problems if the drives move onto different controllers etc.
  14. Thanks. I did search, but I think I may have only been searching in sub-forums or something. I am a little confused though: I go to the top of this page and use the search box for "HDIO_GET_IDENTITY" minus quotes and it only shows my post. Anyway... that was exactly a year ago and 4.5.1beta was still the next thing; 4.5 only just came out last month... Ah, I see it *was* an old post and you came back and updated it. Cool. The thing is, I'm not using a physical card at all; I'm using what VMware Server emulates as an LSI Logic SAS card. It appears to be the same problem with the driver though, so hopefully the same fix will work. I'll see if I can get on to Tom. Obviously I'm pretty new around here. I assume that is "tomm", known as limetech on the forums? Thanks again Christian
  15. Hi guys, I've been evaluating various forms of NAS and keep coming back to unRAID. I've looked at FreeNAS, Openfiler, Windows Home Server, EON and a couple of others with varying success. Windows Home Server probably came the closest, but I didn't like the redundancy feature effectively halving the available space. However, I'm doing it with a bit of a twist: trying to run it under VMware Server on a Windows 7 x64 host. I realise I'll take a bit of a performance hit, but I really need Windows as a host to use the machine as a desktop. I'm getting a reasonable 25-30MB/sec transfer speed across the network between the host and guest, so I'm happy with that. Unfortunately, VMware Server doesn't seem to support any form of disk larger than exactly 1024GB. See my unanswered post on the VMware forums if you want more exact details:

     http://communities.vmware.com/message/1448042#1448042

     I'm trying to use direct physical access to 3 x 1.5TB SATA drives, at this stage dedicated to unRAID. As IDE drives, I have IRQ timeout problems etc. as discussed above. This happens with virtual disks as well as real disks, and on other Linux distributions like CentOS and on Windows Home Server, so it's a VMware issue. If I limit the direct disk access to <1024GB, unRAID and other OSes work just fine on IDE controllers. So I looked at SCSI devices instead of IDE. The LSI Logic SAS controller works just fine in unRAID in terms of driver support, disk size etc. using direct disk access: unRAID detects the drives no problem, and likewise I've created the direct disk access files with no problem and they work as devices. Unfortunately, unRAID 4.5 falls over when trying to add them to the system as drives. I get errors like this when I select the drives in the unRAID web GUI:

     Jan 7 19:30:54 Mars kernel: md: import disk2: HDIO_GET_IDENTITY ioctl error: -22
     Jan 7 19:30:54 Mars kernel: md: disk2 missing

     Likewise, when I execute "hdparm -i /dev/sda" I get:

     /dev/sda:
     HDIO_GET_IDENTITY failed: Invalid argument

     unRAID does appear to correctly detect the SCSI drives generally in the boot process, e.g.
     Jan 7 19:26:48 Mars kernel: scsi0 : ioc0: LSISAS1068 B0, FwRev=00000000h, Ports=1, MaxQ=128, IRQ=18
     Jan 7 19:26:48 Mars kernel: mptsas: ioc0: attaching ssp device: fw_channel 0, fw_id 0, phy 0, sas_addr 0x5000c298abcffdad
     Jan 7 19:26:48 Mars kernel: scsi 0:0:0:0: Direct-Access VMware, VMware Virtual S 1.0 PQ: 0 ANSI: 2
     Jan 7 19:26:48 Mars kernel: mptsas: ioc0: attaching ssp device: fw_channel 0, fw_id 1, phy 1, sas_addr 0x5000c297d4ee4c2f
     Jan 7 19:26:48 Mars kernel: scsi 0:0:1:0: Direct-Access VMware, VMware Virtual S 1.0 PQ: 0 ANSI: 2
     Jan 7 19:26:48 Mars kernel: mptsas: ioc0: attaching ssp device: fw_channel 0, fw_id 2, phy 2, sas_addr 0x5000c2989218cbc1
     Jan 7 19:26:48 Mars kernel: scsi 0:0:2:0: Direct-Access VMware, VMware Virtual S 1.0 PQ: 0 ANSI: 2
     Jan 7 19:26:48 Mars kernel: sd 0:0:0:0: [sda] 2000000000 512-byte logical blocks: (1.02 TB/953 GiB)
     Jan 7 19:26:48 Mars kernel: sd 0:0:0:0: [sda] Write Protect is off
     Jan 7 19:26:48 Mars kernel: sd 0:0:0:0: [sda] Mode Sense: 5d 00 00 00
     Jan 7 19:26:48 Mars kernel: sd 0:0:0:0: [sda] Cache data unavailable
     Jan 7 19:26:48 Mars kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
     Jan 7 19:26:48 Mars kernel: sd 0:0:0:0: [sda] Cache data unavailable
     Jan 7 19:26:48 Mars kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
     Jan 7 19:26:48 Mars kernel: sda:
     Jan 7 19:26:48 Mars kernel: sd 0:0:1:0: [sdb] 2000000000 512-byte logical blocks: (1.02 TB/953 GiB)
     Jan 7 19:26:48 Mars kernel: sd 0:0:1:0: [sdb] Write Protect is off
     Jan 7 19:26:48 Mars kernel: sd 0:0:1:0: [sdb] Mode Sense: 5d 00 00 00
     Jan 7 19:26:48 Mars kernel: sd 0:0:1:0: [sdb] Cache data unavailable
     Jan 7 19:26:48 Mars kernel: sd 0:0:1:0: [sdb] Assuming drive cache: write through
     Jan 7 19:26:48 Mars kernel: sd 0:0:1:0: [sdb] Cache data unavailable
     Jan 7 19:26:48 Mars kernel: sd 0:0:1:0: [sdb] Assuming drive cache: write through
     Jan 7 19:26:48 Mars kernel: sdb:
     Jan 7 19:26:48 Mars kernel: sd 0:0:2:0: [sdc] 2000000000 512-byte logical blocks: (1.02 TB/953 GiB)
     Jan 7 19:26:48 Mars kernel: sd 0:0:2:0: [sdc] Write Protect is off
     Jan 7 19:26:48 Mars kernel: sd 0:0:2:0: [sdc] Mode Sense: 5d 00 00 00
     Jan 7 19:26:48 Mars kernel: sd 0:0:2:0: [sdc] Cache data unavailable
     Jan 7 19:26:48 Mars kernel: sd 0:0:2:0: [sdc] Assuming drive cache: write through
     Jan 7 19:26:48 Mars kernel: sd 0:0:2:0: [sdc] Cache data unavailable
     Jan 7 19:26:48 Mars kernel: sd 0:0:2:0: [sdc] Assuming drive cache: write through
     Jan 7 19:26:48 Mars kernel: sdc:

     Sooo... is this more a driver issue that might be solved with a new kernel compile or something like that? Could unRAID be forced to work around this issue? Any suggestions? I found one post asking whether this may have been fixed in an updated unRAID release, but no response there.

     (PS: what's with limiting posts to about 10 lines? The editor keeps jumping back to the top whilst typing - running IE8 - very annoying, so I ended up typing it in Notepad.)

     Thanks Christian
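     One more data point worth checking: hdparm's HDIO_GET_IDENTITY is an ATA-only ioctl, so it will always fail against a disk presented as SCSI/SAS, but the SCSI inquiry path may still answer. A sketch, assuming smartctl is on the image:

     # ask via SCSI INQUIRY instead of ATA IDENTIFY
     smartctl -i -d scsi /dev/sda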