ntrlsur

Everything posted by ntrlsur

  1. I am going to guess that the docker image might have gotten corrupted. You might have to redo the config for Docker. That is just a guess though.
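     If it does come to that, here is a rough sketch of the usual recovery path. The docker.img location below is only the common default; check Settings -> Docker for where yours actually lives before removing anything:

     # stop the Docker service from Settings -> Docker, then remove the image file
     rm /mnt/cache/docker.img
     # re-enable Docker (a fresh image is created) and re-add your containers from their templates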
  2. I feel like an idiot. I didn't even notice that I was excluding the same disk I was including.
  3. Figured I would toss in my 2 cents. I am currently running an FX-6300 with 8 gigs of RAM on my unRAID machine. It currently runs 2 VMs: a Debian instance for all of my Usenet stuff (CP, SABnzbd, SickRage, Headphones, etc.) and a Win Server 2008 R2 instance as a DC for my home. It also runs 2 Dockers, 1 Plex and 1 OpenVPN, and there is plenty of horsepower to spare. I am also running a total of 10 disks: 7 storage spinners and 3 SSDs in a cache pool for VMs and Dockers. I haven't noticed any heat issues, but I upgraded my case fans to Noctua NF-F12 fans to keep the case quiet. Checking my CPU temp now, it's running at 25 C.
  4. So this has got my mind stuck. I am currently trying to copy music from one place to another and I keep getting shfs/user: shfs_create: assign_disk: (28) No space left on device errors in the logs. I have plenty of space free on the disks in question. I have attached a screen grab of my share and the disks in question, and I have attached the diagnostics as well. Does anyone have an idea of why I keep catching this error?
  5. For 300 I would pick it up, just because, personally. If you don't buy it, put me in touch with the folks selling it and I will.
  6. That is a lot of enterprise hardware; putting unRAID on it would barely use what that hardware can do. The controller cards in that hardware would determine whether things work or not. unRAID might not see the hardware controller in the R410 or the add-on card for the MD1000; unRAID is stripped down, so a lot of Linux drivers are not installed. Plus that would be a very noisy setup.
  7. Yes, unRAID will allow you to do that. If you break your mirrors, in the Windows world you will have a failed drive. You can take those disks and create an array for unRAID. The minimum number of drives needed for unRAID is technically 1; to get parity protection the minimum is 2: 1 array drive and 1 parity drive. unRAID can grow as you add drives. For your situation I would pull 2 drives from different mirrors. Set 1 drive as array and the other as parity (the largest disk always needs to be the parity drive). Copy the data from one broken mirror to the array. Take the drive that data was copied from in the Windows world and move that to unRAID and grow the array. Once the array has expanded, copy the data from the second mirror set to unRAID. Once that data has been copied, take the final drive from the Windows world and move it to unRAID. You will then have 4.5TB of space in the array and a 1.5TB drive as the parity drive. A rough sketch of the copy step follows.
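     A rough sketch of the copy step, assuming the old mirror member is mounted read-only at /mnt/olddisk (for example via the Unassigned Devices plugin) and the target share is called "media" -- both names are just placeholders:

     # copy everything from the old disk into the unRAID user share, preserving attributes
     rsync -av --progress /mnt/olddisk/ /mnt/user/media/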
  8. I don't think the backplane on the 2950 will support anything other than a Dell card; I believe it is a vendor lock-in issue.
  9. The virus ran and attacked files on a Windows share. All the damage was done from Windows, not from unRAID.
  10. I was trying to find this program the other day, but the link to the download for version 1.3 is broken on the Kodi forum. Do you know where I can find it elsewhere? Here you go. I zipped up my copy and removed the database.
  11. I use a varied selection of tools, but when I was first trying to get things organized I used Ember Media Manager. Great program, and it makes your media library portable. I have attached a screenshot of it for you.
  12. You can do a cache pool. The default RAID state is 1. I remember seeing a post Tom or jonp made that said you can change it to raid0, but after a reboot it defaults back to raid1. I can't find that post though.
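      For reference, here is a hedged sketch of the btrfs conversion that post was describing (back up the pool first, and per that post the setting may not survive a reboot on these releases):

      # convert the cache pool's data profile to raid0, keep metadata at raid1
      btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
      # confirm the resulting profiles
      btrfs filesystem df /mnt/cache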
  13. The screenshot does not show it mounted as cache. Are you sure you have added it as cache?
  14. Maybe we should edit the wiki and indicate there are odd issues with some SAS(2) 9480 controllers, since they are the path of least resistance to setting up, as in they don't need to be flashed and work out of the box. As drives become larger and larger we will eventually do away with controller cards (just as old Adaptec cards are dead) and use onboard controllers, but there will always be users that have a need for extra controllers for full systems. I agree. I got the SAS2 because it was on the wiki and was plug and play.
  15. I would have imagined better performance. Maybe it's the H310? I have a smaller array with a couple of older drives on a SAS2LP; I have attached my diskspeed results for comparison.
  16. I did change my BIOS PCIe power setting after Rob noticed mine was off and it didn't make any difference, but no harm if you try it also. I was planning on making more tests before posting, but I did find what looks like a correlation between the total number of reads and the speed of a parity check. I'm not talking about the different individual disk read numbers that we all assume are normal, but the total read numbers for the same array at the end of a parity check, for example:

      version - avg speed (MB/s) - total reads
      v6b1    - 125.1            - 2.612.645
      v6b14   - 69.7             - 5.249.593
      v6.1.1  - 51.8             - 7.595.330

      I'm going to compare all betas to confirm if it's true for all, and I also plan to test with a different controller like a SASLP to check if the read numbers also change with different versions. It makes sense that more reads = more I/Os = less speed, but this issue doesn't make much sense, so who knows; I also have no clue why there's such a big variation and whether anything can be done to change it. ASPM didn't make a difference on my system, so back to the drawing board it would seem.
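      If anyone wants to grab the raw counters themselves, here is a small sketch. It reads the kernel's per-device statistics and won't necessarily line up 1:1 with the webGui numbers; it is just one way to snapshot reads before and after a check:

      # print reads completed per whole disk (field 4 of /proc/diskstats)
      awk '$3 ~ /^sd[a-z]+$/ {print $3, $4}' /proc/diskstats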
  17. root@Zeus:~# lspci -vv -d 1b4b:*
      01:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev 03)
              Subsystem: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller
              Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
              Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
              Latency: 0, Cache Line Size: 64 bytes
              Interrupt: pin A routed to IRQ 18
              Region 0: Memory at fd240000 (64-bit, non-prefetchable) [size=128K]
              Region 2: Memory at fd200000 (64-bit, non-prefetchable) [size=256K]
              Expansion ROM at fd260000 [disabled] [size=64K]
              Capabilities: [40] Power Management version 3
                      Flags: PMEClk- DSI- D1+ D2- AuxCurrent=375mA PME(D0+,D1+,D2-,D3hot+,D3cold-)
                      Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
              Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
                      Address: 0000000000000000  Data: 0000
              Capabilities: [70] Express (v2) Endpoint, MSI 00
                      DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
                              ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
                      DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                              RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
                              MaxPayload 128 bytes, MaxReadReq 512 bytes
                      DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                      LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s L1, Latency L0 <512ns, L1 <64us
                              ClockPM- Surprise- LLActRep- BwNot-
                      LnkCtl: ASPM L0s Enabled; RCB 64 bytes Disabled- Retrain- CommClk+
                              ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                      LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                      DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR-, OBFF Not Supported
                      DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                      LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                               Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                               Compliance De-emphasis: -6dB
                      LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete-, EqualizationPhase1-
                               EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
              Capabilities: [100 v1] Advanced Error Reporting
                      UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                      UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                      UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                      CESta:  RxErr- BadTLP- BadDLLP+ Rollover- Timeout+ NonFatalErr-
                      CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                      AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
              Capabilities: [140 v1] Virtual Channel
                      Caps:   LPEVC=0 RefClk=100ns PATEntryBits=1
                      Arb:    Fixed- WRR32- WRR64- WRR128-
                      Ctrl:   ArbSelect=Fixed
                      Status: InProgress-
                      VC0:    Caps:   PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
                              Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
                              Ctrl:   Enable+ ID=0 ArbSelect=Fixed TC/VC=01
                              Status: NegoPending- InProgress-
              Kernel driver in use: mvsas
              Kernel modules: mvsas

      Also having speed issues.
  18. I went through the change log for kernel 3.15.1; a bunch of changes happened for btrfs and some other lower-level stuff that was over my head. I am a hardware/network guy, not a software guy. Reviewing the latest posts, what was interesting to me is that EdgarWallace got good results from his parity check, over 100MB/sec. He posted the lspci results for his card and I noticed some differences between his card and mine and flaggart's. The line that stands out to me: Edgar and bkastner, who don't have problems with their SAS2LPs, show

      LnkCtl: ASPM L0s Enabled; RCB 64 bytes Disabled- Retrain- CommClk+ ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

      while mine, flaggart's and opentoe's show

      LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+ ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

      RobJ pointed this out earlier in the thread and for some reason it's sticking in my mind as an issue. I am wondering if Active State Power Management is the problem here. By running lspci -vv | grep ASPM I know that it is for sure disabled on my system. When I get home I will check the BIOS to see if I can enable Active State Power Management and run another parity check. A quick way to check it, and a boot-time workaround, is sketched below.
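      A minimal sketch, assuming you want to confirm the ASPM state from the console and that your platform tolerates forcing it on (pcie_aspm=force is a generic kernel boot parameter, not anything unRAID-specific):

      # show the ASPM state reported for every PCIe device
      lspci -vv | grep -i aspm

      # if the BIOS offers no ASPM switch, one possible experiment is to append
      # pcie_aspm=force to the kernel line in /boot/syslinux/syslinux.cfg, e.g.:
      #   append pcie_aspm=force initrd=/bzroot
      # (remove it again if the system misbehaves)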
  19. One question to the masses: what FS are you running for your array storage and cache? I am running XFS for the array and 3 SSDs in a BTRFS RAID pool. Just wondering if this is an XFS file system issue or what. Checking out the release notes for the Linux kernel that was implemented for beta6 as well, 3.15.0; maybe there is something in there that started to hose us.
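      If anyone wants to report theirs, a one-liner sketch (assuming the usual /mnt/disk* and /mnt/cache mount points):

      # show the file system type of each array disk and the cache
      df -Th /mnt/disk* /mnt/cache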
  20. Updated to 6.1.1 and still very slow parity checks, averaging around 46MB/s. Back to the release notes to see what I can dig up.
  21. Thanks for the heads up. I will grab the update and see if it makes a difference..
  22. I think this is the issue. CONFIG_SCSI_MVSAS_TASKLET: Support for interrupt tasklet (improves mvsas performance) was introduced in beta6. johnnie.black, I'll bet if you can get beta6 to test with you will see the speed drop from the tests you did with beta3. Tom, is there any way we can get a current build without this tweak to test? A sketch for checking whether the running kernel has the option enabled is below.
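      A quick sketch for confirming the option in your running kernel, assuming the build exposes its config via /proc/config.gz (not every kernel does):

      # list the mvsas-related build options of the running kernel
      zcat /proc/config.gz | grep -i mvsas
      # look for CONFIG_SCSI_MVSAS_TASKLET=y in the output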
  23. Well, at least we don't have to worry about the mpt3sas and mpt2sas drivers in unRAID; I don't think the SAS2LP cards use those drivers. Mr. Google has confirmed that the Supermicro AOC-SAS2LP-MV8 is supported by the mvsas driver. Heading off in search of release notes for all the 6.x betas and RCs to see if I can figure out when the driver changed.
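      If you want to double-check which driver your own card is bound to, a small sketch (1b4b is the Marvell vendor ID, as in the lspci output earlier in the thread):

      # show the kernel driver in use for any Marvell controller
      lspci -k -d 1b4b:
      # the 'Kernel driver in use:' line should read mvsas for a SAS2LP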
  24. Hmmm... no VM as in KVM? That's OK. As long as my Dockers work I'm covered. The only time I'll need actual VMs is when I build a $1,500+ unRAID & gaming machine, but that's not in the budget for several years anyway, if ever. I'm just hoping to put about $100-$150 (my budget for the foreseeable future) into what I've got and upgrade past the slow, single-core and RAM-less system I'm currently running. My current system is also out of SATA ports, and the 2950 came with some spare 500GB spinners that I figure would finally let me have a cache drive. A ghetto cache drive, but it's better than nothing, right? I am not sure if the PERC 5i or 6i will go into JBOD mode. I can check for you when I get to the office on Tuesday. They might, and if that's the case then the 2950 will make a nice barebones unRAID machine for Dockers. The only downside is the 2TB disk limit, but you can work around that depending on how much storage you need.