rlung

Members

  • Posts: 8
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


  1. Very limited success rate mounting Blu-ray ISOs through user shares over the network (SMB) to Windows boxes using SlySoft Virtual CloneDrive 5450. If successful, the disc mounts normally. If unsuccessful, VCD displays "can't mount p:\03 video\1 movies BLURAY\movietitle.iso!" where 'p' is the mapped user share. The same ISO can mount now, fail to mount 10 minutes later, and vice versa. Navigating to the same ISO directly via diskXX instead of the user share is successful. Copying the ISOs to a local hard drive and then mounting is successful. First noticed in 5.0beta10; before that I was running 4.7 final without issue.
     - navigating to the same problem ISOs directly through the network, \\NORCO\disk8\ALL\03 video\1 movies BLURAY\movietitle.iso = 100% successful
     - mapping the above (\\NORCO\disk8\ALL\03 video\1 movies BLURAY\movietitle.iso) = 100% successful
     - navigating to the problem ISOs in the network user share produces the error "can't mount \\norco\all\03 video\1 movies BLURAY\movietitle.iso!"
     - reran 'New Permissions' in unRAID settings = no change
     - regressed to VCD 5443, 5440 = no change
     - accessed through another Windows box = no change
     - regressed unRAID to 5.0beta9 = 100% successful
     - updated to 5.0beta11 = no change
     update 08/31/11 -- SOLVED --
     - 5.0beta12 = no change
     - created a user with a password and mapped with those same credentials = 100% successful (see the mapping example after this post list)
     - previously I was mapping as root without a password; 5.0beta9 did not have these requirements
  2. This is very interesting ... I'm just starting to play with this feature with large numbers of HDDs ... I'll need to keep a close eye on this and see if I get the same problem ... thanks for the warning (and hopefully Tom can find the bug if there is one).
     Parity rebuild completed, behavior persists. Narrowed it down and can duplicate on demand. This test is on another server running 5b10:
     - 20 data + 1 parity, no cache
     - 1 and only 1 user share labeled 'ALL', utilizing all disks available
     - allocation = fill-up
     - min free space = 30,000,000
     - split = 3
     Free space on all 20 disks is > 30 GB; Windows correctly shows 32.7 TB / 4.58 TB (total / free).
     - 1st disk written with a 4.35 GB file down to < 30 GB free; Windows reports 31.3 / 4.55 (disk 1 = 1.5 TB). Any further writing to this disk does not affect either value beyond the expected.
     - 2nd disk written with 4.35 GB down to < 30 GB; Windows reports 30.0 / 4.52 (disk 2 = 1.5 TB). Further writing to the disk, no abnormal change.
     - 3rd disk written with 4.35 GB down to < 30 GB; Windows reports 28.7 / 4.49 (disk 3 = 1.5 TB). Further writing, no abnormal change.
     The data suggests that user shares discard the total / free space of individual disks as they fall below the min free space threshold (a sketch that mimics this is after this post list). Able to duplicate this without writing by artificially raising / lowering the 'min free space' of the user share and noting what Windows displays.
     update 08/15/11: 5.0beta11 fixed the above
  3. When using a user share with the fill-up method, whenever any drive dips below the threshold, Windows will report both an incorrect total size and incorrect free space on the mapped share.
     16 drives, fill-up, 25,000,000 min free space, split 0
     - while every drive has > 25 GB free, Windows reports 12.9 TB / 1.87 TB ... total / free space (the correct amount)
     - any drive dips < 25 GB; > 24: Windows reports 12.0 / 1.78
     - any drive dips < 24; > 23: Windows reports 10.6 / 1.74
     - any drive dips < 23; > 22: Windows reports 9.77 / 1.69
     - any drive dips < 22; > 21: Windows reports 2.72 / 1.57 (2.72 is not a typo)
     Once a drive's free space increases back above the 'min free space' threshold, normal readings appear. The incorrect reading also continued on another unRAID 5.0b10 server using split 3. Both servers were migrated from a 4.7 environment with no change in settings and did not display this behavior there. Correction: there is one change; both servers currently have no parity drive set. They did in the 4.7 environment. Will set parity, rebuild, and observe. (A quick per-disk check with df is sketched after this post list.)
  4. Yes sir, server has been/is 100% under control.
  5. Do you know the actual size of the HPA in sectors? One way to get this is to examine the Devices page and look at the device identifier of the disk(s) with HPA. The identifier is in parentheses; for example, in the line below the identifier is "sdh":
     disk1 device: pci-0000:00:1f.5-scsi-0:0:0:0 host4 (sdh) ST380811AS_5PS2P6J7
     Now from the console or a telnet session, type this command: hdparm -N /dev/<identifier>
     For the disk above I would type: hdparm -N /dev/sdh
     Please post the output of this command.
     The HPA has been expunged; this is the current output of the 3 disks that had it (hda, sdm, sdl):
     Tower login: root
     Linux 2.6.32.9-unRAID.
     root@Tower:~# hdparm -N /dev/hda
     /dev/hda:
     The running kernel lacks CONFIG_IDE_TASK_IOCTL support for this device.
     READ_NATIVE_MAX_ADDRESS_EXT failed: Invalid argument
     root@Tower:~# hdparm -N /dev/sdm
     /dev/sdm:
     max sectors = 976773168/3694640(976773168?), HPA setting seems invalid (buggy kernel device driver?)
     root@Tower:~# hdparm -N /dev/sdl
     /dev/sdl:
     max sectors = 976773168/3694640(976773168?), HPA setting seems invalid (buggy kernel device driver?)
     root@Tower:~#
     Output of 2 other 500 GB drives that never had HPA (sdk, sdn):
     root@Tower:~# hdparm -N /dev/sdk
     /dev/sdk:
     max sectors = 976773168/3694640(976773168?), HPA setting seems invalid (buggy kernel device driver?)
     root@Tower:~# hdparm -N /dev/sdn
     /dev/sdn:
     max sectors = 976773168/3694640(976773168?), HPA setting seems invalid (buggy kernel device driver?)
     root@Tower:~#
     Now I remember: the difference of 4 KB showed up immediately upon the first boot of 4.7beta1, before '4K alignment' was selected. (A worked example of reading an HPA size out of hdparm -N output is after this post list.)
  6. Upgraded from 4.6 final to 4.7beta1. Three disks that had a known pre-existing HPA showed up differently, all 500 GB. I can't remember if this was before or after choosing '4K alignment' in settings. From 488,385,496 to 488,385,492, a difference of 4 KB. unRAID acknowledged the difference and offered the option to import; I chose yes. Upon further thought I decided to check the filesystems, and sure enough reiserfs was corrupted on the same three disks. I had mirrors of the disks stored offline, used hdat to finally expunge the HPA, then break parity -> reformat -> reimport data -> build parity. My fault for not dealing with the HPA earlier with due diligence.
  7. Same issue here. The array was great on 4.4 final with 16 drives + cache for ~3 months; upgraded to 4.5beta6 and the initial migration was flawless. However, upon adding a 17th drive the system will freeze.
     - downgraded to 4.5beta4 (initial 20-drive support) ... no change
     - added the 17th drive, assigned to disk 18 or 19 ... no change
     - added the 17th drive to different slots on the Norco-4020 (on both another AOC-SAT2-MV8 and the X7SBE controller) ... no change
     - added the 17th drive using 3 different WD5000ABYS (500 GB) drives and a Seagate 320 GB ... no change
     - reformatted the USB key, fresh install of 4.5beta6 (erasing unmenu, dir_cache, etc.) ... no change
     - temporarily disabled parity and added 1 WD5000ABYS ... good, the array completes mounting (adding parity back will freeze)
     - temporarily disabled 2 existing data drives (WD10EACS) and added 2 WD5000ABYS ... good, the array completes mounting (adding another drive will freeze)
     I can see the syslog and it looks similar to ALR's, except without the md12 issue but with a segmentation error; capturing it is proving elusive (a capture tip is after this post list).
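
A note on the fix in post 1: what worked was mapping the user share with explicit, non-root credentials. A minimal sketch from a Windows command prompt, assuming a hypothetical unRAID user named 'mediauser' created in Users settings with a password (the server and share names are from the post; the user name is an assumption):

    rem remove any cached mapping made as root with no password
    net use P: /delete
    rem map the ALL user share to P: with explicit credentials
    net use P: \\NORCO\ALL /user:mediauser /persistent:yes

Windows will prompt for mediauser's password; removing the old root mapping first matters, since a cached connection can otherwise keep being reused.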
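To illustrate the conclusion in post 2 (the user-share totals appear to drop whole disks once their free space falls under the threshold), here is a rough sketch run from the unRAID console that mimics the symptom by summing only the disks still above 'min free space'. It assumes the data disks are mounted at /mnt/disk* and that min free space is expressed in KB; it is only a model of the observed behavior, not the actual user-share code:

    MIN_FREE_KB=30000000   # the share's 'min free space' setting (30,000,000 KB = 30 GB)
    # sum size and free space only for disks whose free space is still above the threshold
    df -P /mnt/disk* | awk -v min=$MIN_FREE_KB \
        'NR>1 && $4 > min {total+=$2; free+=$4} END {printf "%.2f TB total / %.2f TB free\n", total/1024/1024/1024, free/1024/1024/1024}'

If the numbers this prints track what Windows shows for the mapped share while the true per-disk totals do not, that matches the discarding behavior described in the post.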
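For the readings in post 3, a quick way to confirm the individual disks really do have the free space Windows is no longer counting is to compare the per-disk numbers with the user share mount on the server itself. A simple check from the console, assuming the disks and user shares are mounted at /mnt/disk* and /mnt/user as on a stock install:

    df -h /mnt/disk*      # actual size / free space of each data disk
    df -h /mnt/user/ALL   # what the user share filesystem itself reports

If the per-disk totals add up correctly while the share (and the Windows mapping of it) shrinks as drives cross the threshold, the problem is in the user-share reporting rather than on the disks.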
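On the HPA question in post 5: when hdparm -N returns a sane reading of the form "max sectors = current/native", the size of the HPA in sectors is the native maximum minus the current maximum, and multiplying by 512 gives bytes. A worked example with made-up numbers (the disks above returned an invalid reading, so these figures are purely illustrative):

    hdparm -N /dev/sdh
    # example output:  max sectors = 976771120/976773168, HPA is enabled
    # HPA size in sectors = native max - current max
    echo $(( 976773168 - 976771120 ))          # 2048 sectors
    echo $(( (976773168 - 976771120) * 512 ))  # 1048576 bytes (1 MB) hidden by the HPA

The "seems invalid (buggy kernel device driver?)" lines above mean hdparm could not get a trustworthy native max from that kernel/driver combination, so this calculation cannot be applied to that output.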
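Since capturing the syslog in post 7 is proving elusive, one approach that generally works when a machine hard-freezes is to get the log off the RAM disk before the crash, either by copying it to the flash drive or by streaming it to another machine. A couple of hedged examples, assuming a standard unRAID layout with the flash at /boot and the log at /var/log/syslog:

    # copy the current syslog to flash right before the step that triggers the freeze
    cp /var/log/syslog /boot/syslog-before-freeze.txt

    # or, from a telnet/ssh session on another box, watch it live so the last lines
    # are still on that screen when the server dies
    tail -f /var/log/syslog

The copy on /boot survives the reboot, and the tail output on the remote screen preserves whatever was logged up to the moment of the freeze.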