vsonerud

Everything posted by vsonerud

  1. My unRAID system is still running version 6.5.3 and I am planning to upgrade "soon" - but just by chance I stumbled upon this thread, and since I have 2 Seagate ST8000VN0022 drives in my array it seemed like a good idea to use the SeaChest_* utils to disable EPC and lowCurrentSpinup, to be on the safe side. However, when running the SeaChest_Configure executable with --EPCfeature disable, an error occurs stating that it is an unknown option, referring to --help for more information - and running with --help reveals that --EPCfeature is not a valid option (anymore). So, does anyone know why the --EPCfeature option has been removed? Or does anyone have an older version to share? The version I have tried reports the following version information: SeaChest_Configure Version: 2.3.1-4_1_1 X86_64
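Since SeaChest option names have changed between releases, one way to see what a given build actually accepts is to scan its --help output before scripting around it. A minimal sketch; the helper function name and the install path in the comment are my own examples, not anything shipped with SeaChest:

```shell
# Hypothetical helper: check whether a tool's help text mentions an option.
# Feed the help text in as a string so the check can also run against a
# saved copy of the output.
has_option() {
  help_text=$1   # e.g. "$(SeaChest_Configure --help 2>&1)"
  option=$2      # e.g. --EPCfeature
  printf '%s\n' "$help_text" | grep -q -- "$option"
}

# Typical use (path is an example):
#   help="$(/boot/seachest/SeaChest_Configure --help 2>&1)"
#   has_option "$help" "--EPCfeature" || echo "this build has no --EPCfeature"
#   printf '%s\n' "$help" | grep -i epc   # list whatever EPC options it does have
```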
  2. Just tried to connect the 18 TB disk to an onboard SATA port - voila, now it worked like a charm to add it as a parity drive, and a parity rebuild is in progress after starting the array 😃
  3. Hi! I have recently bought a new Western Digital UltraStar 18 TB drive and connected it to my unRAID server. The drive is detected and successfully assigned the /dev/sdi device - and I have successfully run both short and extended SMART self-tests as well as 1 successful preclear cycle using the docker-based binhex preclear plugin. However, when attempting to add it as my new parity drive (instead of my current 8 TB drive) something strange happens. I stop the array - and immediately after I have selected the new 18 TB drive in the drop-down list for the first parity drive slot, the drop-down list refreshes / resets - and nothing ends up being selected. Thereafter the new 18 TB drive is gone from the drop-down list of available drives to select (the old parity drive is the only one available). If I then select the old parity drive, restart the array and repeat the process above, the new 18 TB parity drive is available for selection, but after selecting it the drop-down list resets to nothing and the drive disappears once again from the drop-down list. I have also tried the Tools | New config route - both selecting that I want to preserve all previous assignments, and selecting to preserve only 'Data drives and Cache drives' from previous assignments. In both cases the same as above happens - I am able to select the new 18 TB drive but the drop-down resets to nothing after selecting the drive.
The following excerpt from the syslog appears when attempting to select the new 18 TB drive as parity:

Sep 30 17:27:17 Tower emhttpd: req (25): changeDevice=apply&csrf_token=****************&slotId.0=WDC_WUH721818ALE6L4_2MJ9EKDG
Sep 30 17:27:17 Tower emhttpd: shcmd (26821): rmmod md-mod
Sep 30 17:27:17 Tower kernel: md: unRAID driver removed
Sep 30 17:27:17 Tower emhttpd: shcmd (26822): modprobe md-mod super=/boot/config/super.dat
Sep 30 17:27:17 Tower kernel: md: unRAID driver 2.9.3 installed
Sep 30 17:27:17 Tower emhttpd: Device inventory:
Sep 30 17:27:17 Tower emhttpd: ST8000VN0022-2EL112_ZA1E12T7 (sdj) 512 15628053168
Sep 30 17:27:17 Tower emhttpd: ST8000VN0022-2EL112_ZA15RXGL (sdg) 512 15628053168
Sep 30 17:27:17 Tower emhttpd: HGST_HDN724040ALE640_PK1334PCJWU0JS (sdh) 512 7814037168
Sep 30 17:27:17 Tower emhttpd: HGST_HDN724040ALE640_PK2334PEJ7YKST (sdd) 512 7814037168
Sep 30 17:27:17 Tower emhttpd: HGST_HDN724040ALE640_PK1334PBKDE35S (sde) 512 7814037168
Sep 30 17:27:17 Tower emhttpd: Hitachi_HDS723020BLA642_MN1220F30E225D (sdb) 512 3907029168
Sep 30 17:27:17 Tower emhttpd: HGST_HDN724040ALE640_PK1381PCJZ0VLS (sdf) 512 7814037168
Sep 30 17:27:17 Tower emhttpd: HGST_HDN724040ALE640_PK1381PCKY343S (sdc) 512 7814037168
Sep 30 17:27:17 Tower emhttpd: WDC_WUH721818ALE6L4_2MJ9EKDG (sdi) 512 35156656128
Sep 30 17:27:17 Tower emhttpd: Sony_Storage_Media_1A08012384785-0:0 (sda) 512 3962880
Sep 30 17:27:17 Tower kernel: mdcmd (1): import 0 sdi 64 17578328012 0 WDC_WUH721818ALE6L4_2MJ9EKDG
Sep 30 17:27:17 Tower kernel: md: import disk0: lock_bdev error: -13
Sep 30 17:27:17 Tower kernel: mdcmd (2): import 1 sdf 64 3907018532 0 HGST_HDN724040ALE640_PK1381PCJZ0VLS
Sep 30 17:27:17 Tower kernel: md: import disk1: (sdf) HGST_HDN724040ALE640_PK1381PCJZ0VLS size: 3907018532
Sep 30 17:27:17 Tower kernel: md: disk1 new disk

The 2 highlighted lines above (the mdcmd import of disk0 and the lock_bdev error) are for the problematic new 18 TB drive.
Anyone who has seen anything like this before, or has any good advice on what could be the problem or what I should attempt to do to resolve the matter? The unRAID server is running version 6.5.3, which should work for this drive I think. But I am in the process of attempting to upgrade to 6.10.3. The 18 TB drive is connected to an HBA card which originally was an IBM M1015 card - flashed many years ago with LSI SAS 9211-8i IT-mode firmware (version 15.0.0.0) and BIOS. And as mentioned above I have successfully precleared the drive. It might also be worth mentioning that the server currently has 7 array devices (1 * 8 TB drive for parity, 1 * 8 TB data drive and 5 * 4 TB data drives) as well as a 2 TB cache drive - and the new unassigned 18 TB drive. The 5 * 4 TB data drives and the 2 TB cache drive are still using ReiserFS - and I was planning to convert these drives to XFS after having added the new 18 TB drive for parity, and then using the old 8 TB parity drive for data. ReiserFS apparently has a 16 TB hard limit, but could that affect the attempt to assign a brand new 18 TB unformatted drive as a parity drive? (I have also configured the server to have XFS as the default file system.)
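For what it's worth, the -13 in "lock_bdev error: -13" is the kernel errno for EACCES, which usually suggests something else still holds the disk open when the md driver tries to claim it. A minimal diagnostic sketch one could run from the unRAID shell; the helper function name is my own, and sdi is just the device from the log above:

```shell
# Hypothetical helper: report whether a device name appears in a mounts table.
# The table is passed in explicitly so the check can also run against a
# saved copy of /proc/mounts.
device_in_use() {
  mounts_table=$1   # e.g. "$(cat /proc/mounts)"
  dev=$2            # e.g. sdi
  printf '%s\n' "$mounts_table" | grep -q "/dev/$dev"
}

# Typical use on the server (sdi = the new 18 TB drive):
#   device_in_use "$(cat /proc/mounts)" sdi && echo "sdi is still mounted"
#   lsof /dev/sdi* 2>/dev/null   # any process (e.g. a stale preclear) holding it
```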
  4. @Frank1940 The server is mainly used as a NAS, but has a couple of Docker containers running SABnzbd and Sonarr. No VMs. The following plugins are installed:
* Community Applications (2020.01.09 - unable to update - requires minimum unRAID 6.9.0)
* Dynamix System Statistics (2018.08.29a - unable to update - requires minimum unRAID 6.9.0)
* Fix Common Problems (2019.09.08 - unable to update - requires minimum unRAID 6.7.0)
* Dynamix Active Streams (up-to-date)
* Dynamix Cache Directories (up-to-date)
* Nerd Tools (up-to-date)
* Preclear Disks (up-to-date)
* Statistics (up-to-date)
The array has 7 drives (5 * 4 TB + 2 * 8 TB) plus one 2 TB cache drive. 4 of the 6 data drives in the array still use ReiserFS (the plan is to migrate to XFS).
  5. Hi! I am still running unRAID 6.5.3 on my server and have postponed my upgrade to later versions for too long now. But what is the recommended upgrade path to 6.10.x? Any reason not to go directly from unRAID 6.5.3 to the latest 6.10.x release version?
  6. Hi! I am still running unRAID 5.x on my server and have postponed my upgrade to 6.x for too long now. But what is the recommended upgrade path to 6.x? Any reason not to go directly from unRAID 5.x to the latest 6.4 release version?
  7. Hello! Is it probable that applications running inside docker containers in unRAID 6.x will be able to utilize AES-NI ?
  8. Hello! I am thinking about migrating my current unRAID 5.x server into using virtualization. Ideally I would like to achieve the following:
* Run unRAID on the server (obviously)
* Run a Linux desktop environment (Linux Mint 16/17 for instance)
* Run a Windows 7 VM using VMDirectPath VGA passthrough
* Use only this server for both the Linux & Windows desktop environments (not be dependent on using another computer for remote control of the VM)
I am anxious about the "official" unRAID 6.x virtualization approach with regards to my wishes above. Am I correct in assuming that it will be rather difficult to achieve? If I have understood things correctly, the official unRAID 6.x virtualization approach is to run unRAID as the host OS (Xen dom0)? This would make it difficult to have a complete Linux desktop environment on the same server, yes? Especially since I want to run a Windows 7 VM using VGA passthrough. To achieve a full Linux desktop environment on the same server I would then have to run an additional Linux VM, also utilizing a second discrete graphics card using VMDirectPath passthrough. Has anyone successfully done that? I mean: run unRAID on Xen dom0, and Windows 7 and some Linux distro in their own domU VMs - both using VMDirectPath VGA passthrough? There are also practical limitations - since my current setup is as follows:
* Motherboard: AsRock Z77 Extreme4-M
* CPU: Pentium G860
* HBAs: 2 * IBM M1015 flashed with IT-firmware
The motherboard has 3 * PCIe x16 slots (2 * PCIe 3.0 and 1 * PCIe 2.0), but when populated they will run in x8, x8 and x4 mode. In addition there is 1 PCIe x1 slot. Currently the 2 IBM M1015 HBA cards occupy the 2 PCIe x16 (PCIe 3.0) slots (in x8 mode). Hopefully it will be OK to insert a Radeon HD5450 graphics card in the last PCIe x16 (PCIe 2.0) slot running in x4 mode. (The HD5450 is not a very demanding card I think, and merely the fact that Radeon HD5450 PCIe x1 cards are available kind of indicates that...)
Alternatively I guess I will have to place the HD5450 graphics card in the first PCIe x16 slot and move one of the IBM M1015 cards into the PCIe x16 slot running in x4 mode (is that problematic?). The current Pentium G860 CPU obviously needs to be replaced with a Core i5/i7 CPU (I am leaning towards a Core i7 3770). To achieve my wishes above I have been thinking about doing the following instead:
* Install Linux Mint 16/17 as the host OS
* Install Xen 4.3/4.4
* Run unRAID 5.x as a guest OS in a domU VM
* Run a Windows 7 VM as a guest OS in a domU VM using VMDirectPath VGA passthrough
Any reason why this approach might be a bad idea? What about the "future-proofability" of that approach? What about 64-bit unRAID 6.x availability?
  9. I have just finished building my new unRAID server using the ASRock Z77 Extreme4-M motherboard. No problems so far. The only issue for me was that I had to upgrade to the latest motherboard BIOS to be able to use more than one IBM M1015 controller card (reflashed with LSI IT-firmware). I am running unRAID 5.0 rc12a on the server.
  10. I am having problems using 2 IBM M1015 cards, which I have successfully flashed with P15 IT-firmware, when inserted in my new ASRock Z77 Extreme4-M motherboard. I am only able to get 1 card at a time working, in the "top" (designated PCIE1) PCIe x16 slot. If I insert a second card in the other PCIe x16 slot (designated PCIE3), no connected drives will appear on the second controller (the ones connected to the first controller card appear anyhow). When both of these PCIe x16 slots are populated they will be running in 2 * PCIe x8 mode according to the motherboard manual. I have also tried switching the cards around to make sure that my problems weren't because of one faulty card. With just one single card inserted, it has to be placed in the topmost PCIe x16 slot (designated PCIE1). It kind of seems that either of the cards, when inserted into the middle PCIe x16 slot (designated PCIE3), does not get recognized. When I flashed the 2 IBM M1015 cards with the P15 IT-firmware I did this in another old PC with a non-UEFI motherboard BIOS, and everything seemed to go well during the flashing process. BUT: I flashed both of the cards with the Option ROM included. Should I not have done this? Should I have flashed only 1 card with the Option ROM included? Or neither? Can I now reflash and "get rid of" the Option ROM simply by using: sas2flsh -o -f 2118it.bin ?
UPDATE: I am following up on my own post above. After updating the motherboard BIOS to the latest version (1.50) both controller cards presented themselves and the connected drives, and all seems to work fine.
  11. I was just wondering the same. Typical, when I have been "planning" to upgrade from 4.7 for ages, and today when I am finally going to do it, the download link for the latest release is down.
> try this one... http://download.lime-technology.com/download/
Ah, thank you very much!
  12. I was just wondering the same. Typical, when I have been "planning" to upgrade from 4.7 for ages, and today when I am finally going to do it, the download link for the latest release is down.
  13. I am planning to upgrade one of my old unRAID servers (which has been running 24/7 for the past ~5 years) and I am planning to use an IBM M1015 card flashed with IT-firmware in the new unRAID server. The motherboard for the new unRAID server has not been decided yet, but it would be nice if it could be used for M1015 flashing. However, it does not look too good for several of my candidate boards, which are Intel chipset B75, H77 or Z77-based. Therefore I am trying to find out if any of my old computers may be used. These use the following motherboards/chipsets:
* Epox 9npa+ ultra (nForce4 Ultra) (old unRAID server)
* Asus P5e-vm-hdmi (Intel G45)
* Gigabyte GA-MA785G-D3H (AMD 785G)
  14. I have 2 unRAID servers, the oldest one still running unRAID version 4.4.2. Is there any reason why I shouldn't upgrade to v 4.7? And what can I do to "be prepared for the worst"? What if I have to rollback to the old version, is that unproblematic? Anything in particular I should do in advance before upgrading? The unRAID server is running on an old Epox 9NPA+ Ultra motherboard (Nforce4 ultra chipset) and has been running unRAID flawlessly for almost 3 years.
  15. One of my two unRAID servers is using an Epox 9NPA+Ultra (NForce4 Ultra) with an AMD Athlon 64 X2 3800+ Socket 939 CPU. The server has 8 drives, using 4 drives on the nForce4 Ultra SATA controller, and also has 3 Sil3132 PCI-E x1 controller cards - for a total of max 10 drives. The server has been running for a couple of years or so, and I have transferred HUGE amounts of data to/from the server, including checksum verification, and I have NEVER ever seen a single error.
  16. Just a note related to the above: I just read that: The VT-d specification states that all conventional PCI devices behind a PCIe-to-PCI bridge have to be assigned to the same domain. PCIe devices do not have this restriction. (taken from http://wiki.xensource.com/xenwiki/VTdHowTo )
  17. I am still "in the process" of testing my Epox 9npa+ Ultra - nForce4 Ultra motherboard for unRAID use. I have not concluded yet, but so far I have not been able to produce any bit errors. I have transferred quite a few hundred GB of data back and forth between another PC and unRAID running on this motherboard, without detecting a single bit failure (executing a LOT of binary comparisons across my network between the machines). I am planning to perform more extensive testing pretty soon, especially with more drives in the unRAID array. So far I have only been testing using 1 parity drive and 1 drive for data (1 drive connected to the nForce4 onboard controller and 1 drive on a Sil3132 PCI-E x1 controller card). All of the other drives in the machine are NTFS drives currently in use by Windows XP (which the machine is usually running when not testing unRAID), so it takes some "planning" to do a smooth migration. I have posted about my testing previously in this thread: http://lime-technology.com/forum/index.php?topic=1771.0
  18. Is your board still working ok? Any problems so far? I am considering buying this board for an unRAID machine.
  19. Still performing tests; the file transfer problems are starting to really make me frustrated. Using unRAID 4.3 Final I still have serious problems with LARGE file transfers from Windows Vista/Windows XP machines to unRAID user-shares. They always terminate prematurely. This holds true both through Windows Explorer as well as using Total Commander. I also tested 'Robocopy' (Robust Copy), which is now part of Vista SP1, using the recovery mode, which retries when experiencing network problems. Transferring a 66 GB file never succeeded even with recovery mode. Reading files from the unRAID user-share using Windows Explorer on Windows Vista/XP works fine, with an average transfer speed of 20-25 MB/sec with default settings, and approx 40 MB/sec with the "performance tweak" (blockdev --setra 2048 /dev/md1). BUT, the most interesting finding is a new test I performed: on the unRAID machine, I mounted a Windows share from my Vista machine using 'smbmount'. I then transferred the same 66 GB file as before, from the Vista machine to the unRAID user-share, using a simple 'cp' on the unRAID machine. Result: no problems, and with an average transfer speed of about 10-14 MB/sec. Monitoring the bandwidth throughput on the Vista machine, the network transfer was now really smooth, only varying between 100-120 Mbit/sec. No bursts or spikes anymore. The only really positive result so far is that I do not seem to suffer from data corruption issues with my nForce4 Ultra motherboard. Earlier today I executed a binary file comparison 15 times on a 66 GB file, between the Vista machine where it originated from and the copy on the unRAID user-share where it was copied to. No differences at all.
After doing a lot of data transfers, both successful and with problems as described above, I have checked 'ifconfig' on the unRAID machine: 0 erroneous or failed packets. I am going to test again with unRAID 4.4beta2, where I previously was successful with copying LARGE files from Vista/XP machines to unRAID shares, to see if that is a consistent result when repeated many times. But then again, unRAID 4.4beta2 is not an option to use right now for my system when going "live", because of the new problems mentioned earlier in this thread. I just wish the next beta could be released soon, to see if that problem has been fixed in newer Linux kernels.
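The "performance tweak" mentioned above is just a larger read-ahead on the md device, set in 512-byte sectors. A small sketch of applying it to all md devices at once, plus a tiny converter from sectors to KiB so the numbers are easier to compare (the loop and helper are my own illustration, not part of unRAID):

```shell
# Convert a read-ahead value given in 512-byte sectors to KiB.
ra_kib() {
  echo $(( $1 * 512 / 1024 ))
}

# Apply the tweak to every md device (run as root on the unRAID box):
#   for dev in /dev/md[0-9]*; do
#     blockdev --setra 2048 "$dev"   # 2048 sectors = 1024 KiB read-ahead
#   done
```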
  20. Update: After rebooting my unRAID system into unRAID 4.3 Final I initially get approx. 20-25 MB/sec in read transfer speed from the user-share. When I do the 'performance tweak' the read transfer speed increases to approx 60 MB/sec (average for a 7 GB transferred file, for instance). I then tried something new: I made a shell script on the unRAID machine which performs the following:
* record the time-stamp before starting
* copy a ~6.55 GB file from an NTFS drive mounted using ntfs-3g to my /mnt/user/Media user-share
* record the time-stamp when finished
This bypasses any networking problems/issues between the unRAID machine and my Vista machine, because all data transfers happen on the unRAID machine. The average write speed of the transfer was 5.6 MB/sec. Then I read this file back from the user-share using my Windows Vista machine, and the read transfer speed was approx. 60 MB/sec. And to answer a previous question: after having experienced file write transfer problems from my Windows Vista machine to the user-share on the unRAID machine, ifconfig always reports 0 packets dropped and 0 error packets.
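The timing script described above can be sketched like this; the function name and the paths in the usage comment are examples, not the original script:

```shell
# Time a local copy on the unRAID machine itself (no network involved)
# and report the average rate in MB/s (1 MB = 1048576 bytes).
copy_rate() {
  src=$1; dst=$2
  bytes=$(wc -c < "$src")
  start=$(date +%s)            # time-stamp before starting
  cp "$src" "$dst"             # the actual copy
  end=$(date +%s)              # time-stamp when finished
  secs=$(( end - start ))
  [ "$secs" -eq 0 ] && secs=1  # avoid divide-by-zero on tiny/fast copies
  awk -v b="$bytes" -v s="$secs" 'BEGIN { printf "%.1f MB/s\n", b / s / 1048576 }'
}

# Typical use (paths are examples):
#   copy_rate /mnt/ntfs/bigfile.img /mnt/user/Media/bigfile.img
```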
  21. Hello again RobJ, and thank you for your suggestions. Regarding the reset of my switch, I am not so sure anymore about any significant increase in speed. I have done more testing. First of all I did not load the ntfs-3g module, which was previously manually loaded when executing the earlier file transfers. (Because ntfs-3g as well as the unRAID filesystem driver uses FUSE, I was thinking that to be 100% sure to avoid any side effect from ntfs-3g it would be safest to not load it.) But it had no effect, neither speed-wise nor stability-wise, on the transfers through the user-share. Large file transfers through the user-share do not succeed. Since this still is a test-system, I decided to try the "bleeding edge" 4.4 beta2. Booting the 4.4 beta2 takes a bit longer because of some new strange SATA-related errors. They show up in the syslog like this:

Nov 11 12:02:06 Tower kernel: ata4: link is slow to respond, please be patient (ready=0)
Nov 11 12:02:06 Tower kernel: ata4: COMRESET failed (errno=-16)
Nov 11 12:02:06 Tower kernel: ata4: link is slow to respond, please be patient (ready=0)
Nov 11 12:02:06 Tower kernel: ata4: COMRESET failed (errno=-16)
Nov 11 12:02:06 Tower kernel: ata4: link is slow to respond, please be patient (ready=0)
Nov 11 12:02:06 Tower kernel: ata4: COMRESET failed (errno=-16)
Nov 11 12:02:06 Tower kernel: ata4: limiting SATA link speed to 1.5 Gbps
Nov 11 12:02:06 Tower kernel: ata4: COMRESET failed (errno=-16)
Nov 11 12:02:06 Tower kernel: ata4: reset failed, giving up

And these error messages keep appearing endlessly, every minute or so, even after the unRAID system is completely booted, and one of the drives is missing (that is, the drive missing is one of my NTFS drives - a 500 GB Western Digital RE2 with TLER disabled). BUT: Now I am able to successfully transfer a 66 GB file from my Windows Vista machine to the user-share on the unRAID machine for the very first time.
Previously this has only been working directly through the /mnt/disk1 share. The average transfer speed was ~9 MB/sec. I also monitored the network throughput closely, and even though the speed varied _a lot_, there were considerably fewer periods with VERY low transfer speed. The transfer speed was often very "bursty" though - with high short peaks - sometimes in the 100-300 Mbit/sec range. After the file transfer was successfully finished I tried copying the file back from the unRAID user-share, and much to my dismay and surprise the transfer (read) speed was only approx 6.5 MB/sec. After some time I cancelled the transfer and retried a file transfer directly from the /mnt/disk1 share, but with no significant speed increase. And because I now was using unRAID 4.4 beta2 I did not bother to do the "read speed performance tweak", because it is sort of already done in unRAID 4.4. I then downgraded my unRAID system back to unRAID 4.3 Final and booted up again, and retried file reading from the unRAID disk/user-shares, but still with only approx. 6-7 MB/sec transfer speed. I could swear I have seen MUCH higher read transfer speeds from the /mnt/disk1 share previously. But that may have been with files only some hundred megabytes large - maybe up to 700 MB in size, so I guess maybe there could be caching/buffering mechanisms in place explaining the previous MUCH higher transfer speeds...? Executing: ethtool eth0 correctly shows, for instance: Speed: 1000Mb/s Duplex: Full. I then powered down the unRAID system and booted Windows XP on the machine, and then transferred files from the XP machine to the Vista machine (where I have been doing most of my testing) with transfer speeds around 60 MB/sec. (My 3Com OfficeConnect 8-port gigabit switch is supposed to be good - and I am using almost brand new Cat6 cables.) Attached is my syslog shortly after having booted the unRAID system using the 4.4beta2 release.
My Windows Vista system has SP1 installed (along with all other Vista updates from Microsoft ). And besides, as I mentioned previously I also conducted a test using another Windows XP machine with the exact same transfer (write) problems.
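The numbers in the posts above mix Mbit/s (from the network monitor) and MB/s (from the file-copy dialogs). For reference, a quick converter, assuming 1 MB = 1048576 bytes as Windows reports copy speeds (the helper is my own illustration):

```shell
# Convert a line rate in Mbit/s (decimal megabits) to MB/s
# (binary megabytes, matching Windows file-copy dialogs).
mbit_to_mb() {
  awk -v m="$1" 'BEGIN { printf "%.1f", m * 1000 * 1000 / 8 / 1048576 }'
}

# e.g. the 100-120 Mbit/sec seen on the Vista network monitor is roughly
# 11.9-14.3 MB/sec - consistent with the 10-14 MB/sec copy speeds reported.
```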
  22. Sure, I could try this and I will, although it's a brand new cable similar to the ones used on the other drives having no problems. In addition, if one looks closely at the syslog I attached, 2 drives out of the total 7 drives are seen as SATA-150 only by unRAID. These 2 drives are both Seagate drives. One of them is the brand new 1 TB Seagate assigned as parity drive, connected as the only drive to a Sil3132 PCI-Express x1 controller card. (I also have another similar Sil3132 controller card installed, where there are 2 Samsung 750 GB drives connected, which are recognized as SATA-300, so the controller card should be OK.) The other Seagate drive which is recognized as SATA-150 only by unRAID also does NOT have a jumper set (it is the Windows XP system boot drive). This drive is connected to the onboard nForce4 Ultra SATA controller. And before anyone starts yelling PSU issues: the PSU in my system is a Corsair HX620, which should have no problems at all running 7 drives. (I have also carefully chosen not to connect all of the 7 drives to the same 12 V rail - according to a specification from a Corsair person posting in a web forum, with a picture showing how the 3 different 12 V rails are physically spread across the connectors on this PSU.) I have also been doing some more file transfers, and so far I have seen the following:
* When transferring (writing) to the unRAID array through the /mnt/disk1 share, at least the transfer finishes (using Vista Explorer or Total Commander), even though it can slow down quite a bit from time to time.
* When transferring really large files (I have been using a 66 GB file for most of the file transfers) through the user share I have set up, I have yet to be able to complete a 66 GB file transfer. Total Commander errs out saying 'Disk full', but I guess that is just bad exception handling. The Windows Vista Explorer throws an error saying that there was a network problem writing to the share. The Windows event log shows a 'delayed write failure'.
* In Windows Vista I was able to retry the transfer when failing and continue where the transfer left off. Right before the transfer failed, the network transfer speed had dropped to a few kbit/sec and stayed like that for a while before finally throwing an error. When retrying, the transfer picked up a decent speed again, before failing some time later.
* I also observed during the transfer (using the Vista performance monitor) that the transfer several times dropped to a very low speed (a few KB/sec) but then managed to increase speed again after some time without failing. This could happen several times, until it finally did not manage to pick up speed again and failed.
* I also tried transferring a large file (26 GB) from another Windows XP machine on my local network to the unRAID user-share using Windows Total Commander, but that also failed similarly as on the Windows Vista machine. So at least the problem does not seem to be Windows Vista related.
  23. > You mentioned some file comparison tests, but did not mention any results involving the unRAID server, so I will assume there were no compare errors?
No compare errors so far when comparing files transferred to the unRAID array. But I haven't gotten around to running the comparisons very many times just yet. About the Seagate 1 TB drive and the SATA-150/300 issue: I was very well aware of this fact, but there is NO jumper set on the drive at the moment. A jumper was included with the hard drive, to insert if I wanted to force the drive to SATA 1.5 Gb/sec compatibility mode. So this is really a big mystery.
  24. Hello again! Since my last post I have done quite some more testing:
* Transferred a 66 GB large file (a compressed Acronis True Image system image) from my Windows XP machine to my Windows Vista machine
* Booted the Windows XP machine with a basic unRAID 4.3 Final
* Mounted NTFS drives on the unRAID machine using ntfs-3g
* Shared a drive through Samba using unMenu
* Executed a binary file comparison on the Vista machine with the file on the unRAID Samba share
* Repeated the file comparison 20 times, in other words 20 * 66 GB = 1320 GB
* All file comparisons were successful
* Added 2 brand new 1 TB drives to the unRAID machine
* 1 drive (Samsung 1 TB) was added to the onboard nForce4 Ultra SATA controller
* 1 drive (Seagate 1 TB) was added to a Sil3132 PCI-Express x1 SATA controller card
* The Samsung drive was assigned the first slot in the unRAID array
* The Seagate drive was assigned as parity drive
* Initial formatting executed OK
* Initial parity sync was started
* While parity syncing I started copying a 66 GB file from my Windows Vista machine to the /mnt/disk1 share on the unRAID machine
* This file transfer seemed to run at approx. 16.5 MB/sec (with the initial parity sync in progress at the same time)
* This file transfer took place during the night, and when I woke up the file transfer and parity sync had finished without problems
* I then started a binary file comparison on the 66 GB file between my Windows Vista machine and the same file on the /mnt/disk1 share
* The comparison was supposed to loop 10 times (while I was at work)
* I then initiated file copying of the same 66 GB file 2 more times (simultaneously) to the unRAID /mnt/disk1 share (using another name for those 2 copying sessions, of course)
* I noticed that the file transfer speed as reported by Vista then was approx 1.5 MB/sec for each of the 2 copy sessions
* When returning from work approx 10 hours later, the 2 file transfers had not finished
* Only the first binary file comparison had finished. The second was in progress.
* I cancelled one of the file transfers as well as the file comparisons
* The one remaining file transfer then sped up considerably for a while, then it suddenly slowed down to a crawl
* I then took a look at the syslog and noticed a lot of the following lines:
* Tower kernel: eth0: too many iterations (6) in nv_nic_irq.
* The first occurrence of these lines was right around the time when I initiated the simultaneous file transfers
* This syslog is attached below and is called: syslog-2008-11-10.txt
* I then rebooted the unRAID machine
* Added a user share to try file transferring through this instead
* Did some more file transfers of various files, with strange results:
* File transfers from my Windows Vista machine to my unRAID machine usually start off quite fast and then decrease until the average speed is between 20-30 MB/sec
* Then after some time - sometimes a very short period of time, other times after maybe 20% of the transfer - the speed very suddenly drops dramatically to around 6 MB/sec, and quite often it almost stops completely
* This happens in both Windows Vista Explorer as well as in Windows Total Commander
* Using Total Commander I have also experienced several times that the file copying never really starts, and in the end it throws an error telling me that the disk is full
* Checking the syslog during these problematic transfers, there is absolutely nothing new logged (and no signs of the 'eth0: too many iterations (6) in nv_nic_irq' error from previously)
* I tried transferring through the /mnt/disk1 share as well as through my user share, without any significant difference
* Reading from the unRAID shares always happens at speeds around 65-70 MB/sec
* I then finally initiated a parity sync, which is in progress as I type this, with a sync speed starting at around 100 MB/sec
* I have attached another syslog, starting from the reboot mentioned above and lasting through many problematic file transfers up until the parity sync was started
* The name of this syslog is: syslog-2008-11-10-#2.txt
UPDATE:
* The parity sync completed without errors with a rate=82839K/sec
* I power-"rebooted" my 3Com 8-port Gigabit switch, and voila, suddenly file transfers (writing) seemed to have stabilized around 10 MB/sec - but then suddenly, after 37% transfer of a 9 GB file, the transfer just stopped, and after a while Windows Total Commander told me that 'Disk is full'.
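The repeated binary comparisons described above (20 passes over the same 66 GB file) can be scripted; a minimal sketch using cmp, with the function name and paths as examples only:

```shell
# Compare two copies of a file N times and print how many passes mismatched.
# Repeated passes help catch intermittent corruption, not just a bad copy.
compare_loop() {
  a=$1; b=$2; runs=$3
  fails=0
  i=1
  while [ "$i" -le "$runs" ]; do
    cmp -s "$a" "$b" || fails=$(( fails + 1 ))
    i=$(( i + 1 ))
  done
  echo "$fails"
}

# Typical use (paths are examples):
#   compare_loop /mnt/vista/image.tib /mnt/disk1/image.tib 20
#   -> 0 means no compare errors in any pass
```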
  25. I have used the unmenu web-interface for "pre-testing" on my Windows XP machine (which I am hoping to migrate to unRAID) to aid me in my testing for data corruption issues related to the nForce4 chipset. The plan was to use unmenu for easy access to my existing NTFS drives on the XP machine, and easy Samba sharing in unRAID of those same NTFS-mounted drives. (I would then do a binary file comparison between VERY large files shared through unRAID and the same large files on my Windows Vista machine, to check for data corruption issues.) I performed the following tasks:
* Copied the unmenu files to the flash drive. The instructions talked about putting them in the /boot directory, so I created a boot directory on the flash drive and copied the files there
* Booted the basic unRAID 4.3 Final from the flash drive
* Telnetted into the unRAID machine, and had a look around
* Discovered that the 'boot' directory I had created in the root directory (containing the unmenu files) now was mounted on /boot/boot/
* Moved the unmenu files to /boot on the flash drive from the telnet shell
* Manually started the unmenu interface as per the instructions given here in the forum
* Was able to access the web-interface on http://TOWER:8080
* Selected the 'package manager' and attempted to download and install ntfs-3g. Got an error message about a wrong checksum
* Found out that the URL for ntfs-3g in the configuration file was invalid
* Downloaded a slightly newer ntfs-3g Slackware package from another mirror (did not find the version referenced in the unmenu distribution) to my Vista PC
* Used smbmount on the unRAID machine to access a share on my Windows Vista machine where I had downloaded the alternative ntfs-3g package, and copied it from there into the .../packages directory
* Manually installed the ntfs-3g package
* Then I mounted all of my NTFS drives through the unmenu web-interface, and I could also add them as Samba shares
* BUT: I have 5 SATA drives on my system - 4 of them are connected to the onboard nForce4 controller, whereas 1 drive is connected to a Sil3132 PCI-Express x1 controller card
* I was able to successfully mount ALL of the NTFS partitions on all of the drives, BUT for the one drive connected to the Sil3132 controller card, the unmenu web-interface does not reflect that the drive actually has been mounted (I do get a text message at the bottom confirming that it is mounted, and it is, because I can access the files)
* Since the web-interface somehow does not reflect that one of the drives actually is mounted, I am also not presented with the option of adding it as a Samba share, which is a pity. See the attached image. This is right after I have pressed the 'Mount /dev/sdf1' button, and the disk has been mounted.