Posts posted by Jayman6014
-
Ahh interesting. Thanks.
Now I know what I need to look into.
-
Hello,
I have a share on Unraid, and I am using NFS to connect multiple VMs to it for file sharing. Every few days, without warning, I get an alert that the NFS share isn't working; when I look in Unraid, the share is missing from the Shares menu. I have to stop the array and start it again to bring the share back. I have attached the diagnostics file, which I took after I stopped the array but before restarting it. I think the share went missing shortly before that.
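When the share drops, a quick check from the Unraid console can show whether the NFS export or the underlying user-share path is what vanished. This is only a rough sketch; the share name `myshare` is a placeholder for the actual share:

```shell
# Run from the Unraid console; "myshare" is a placeholder share name.

# Is the export still registered with the NFS server?
exportfs -v | grep -q myshare && echo "export present" || echo "export missing"

# Is the user-share path still visible?
[ -d /mnt/user/myshare ] && echo "path present" || echo "path missing"
```

If the path is gone but the array is still started, that points at the share/fuse layer rather than the NFS daemon itself.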
-
Hey All,
I am upgrading my server and NAS to 10G fiber. I need a new motherboard for my NAS due to the lack of an extra PCIe x8 slot. I already have two SAS-to-SATA cards in it, and I need to add my 10G fiber NIC.
I don't run any Docker containers; this is for storage only. Anyone have suggestions for a motherboard/CPU? I would like to keep it between $250 and $350.
Alternatively, a motherboard with a lot of PCIe slots that will work with an FM2+ AMD CPU.
-
1 hour ago, johnnie.black said:
There were read errors on the cache device:
Mar 18 12:32:02 NAS1 kernel: sd 12:0:7:0: [sds] tag#511 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Mar 18 12:32:02 NAS1 kernel: sd 12:0:7:0: [sds] tag#511 Sense Key : 0x2 [current]
Mar 18 12:32:02 NAS1 kernel: sd 12:0:7:0: [sds] tag#511 ASC=0x4 ASCQ=0x0
Mar 18 12:32:02 NAS1 kernel: sd 12:0:7:0: [sds] tag#511 CDB: opcode=0x28 28 00 2b 30 01 28 00 00 08 00
Mar 18 12:32:02 NAS1 kernel: print_req_error: I/O error, dev sds, sector 724566312
Mar 18 12:32:02 NAS1 kernel: XFS (sds1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000009734983c
Mar 18 12:32:02 NAS1 kernel: XFS (sds1): Log I/O Error Detected. Shutting down filesystem
Mar 18 12:32:02 NAS1 kernel: XFS (sds1): Please umount the filesystem and rectify the problem(s)
Start by replacing cables.
P.S. There were also read errors on disk3. Do you have system notifications enabled?
Disk 3 has been acting up for a while; I just need to replace it. I didn't know the cache was having errors. I'll take a look at that, thanks.
-
4 minutes ago, trurl said:
To start, same as always
Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
Attached. I took it before I rebooted again.
The Linux boxes do a lot of reads and writes of small files; I wonder if this is the cause.
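If heavy small-file traffic over NFS is the suspect, client-side mount options are one thing to experiment with. A hypothetical fstab entry for one of the Linux boxes; the server name `tower`, share `myshare`, and all option values here are assumptions to tune, not settings recommended in this thread:

```shell
# /etc/fstab on a client VM -- names and values are placeholders.
# actimeo raises the attribute-cache timeout (fewer GETATTR round trips);
# rsize/wsize set the per-request transfer size.
tower:/mnt/user/myshare  /mnt/myshare  nfs  vers=3,rsize=65536,wsize=65536,actimeo=30  0  0
```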
-
Hey All,
Every few hours now my shares just turn off. If I go to the Shares menu, both SMB and NFS are gone and nothing can read or write to them. It takes a full reboot to bring them back online, only to have them fail again a few hours later.
This all seemed to start when I began using NFS from a couple of Linux boxes to the Unraid server.
What do you need from me to troubleshoot this issue? It's getting annoying.
I thought it was Unraid 6.8.3, so I rolled back one version, but I still have the same issues.
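Until the root cause is found, a simple watchdog could timestamp exactly when the shares vanish, so the syslog around that moment can be inspected. This is my own sketch, not an Unraid feature; the share path and log file are placeholders:

```shell
#!/bin/bash
# Hypothetical watchdog sketch: poll a share path from cron and log
# the moment it disappears. SHARE_PATH and LOG are placeholders.
SHARE_PATH="/mnt/user/myshare"
LOG="/boot/share-watch.log"

check_share() {
    # Returns success (0) if the share directory is still present.
    [ -d "$1" ]
}

if check_share "$SHARE_PATH"; then
    : # share still there, nothing to log
else
    echo "$(date '+%F %T') $SHARE_PATH missing" >> "$LOG"
fi
```

Run every minute from cron; the logged timestamp narrows down which syslog entries to look at.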
-
14 hours ago, johnnie.black said:
Diags are after rebooting, but yes, likely the Marvell controller with SATA port multiplier, those are a double no no.
Yeah, I ordered an LSI card on eBay from Art of Server.
-
16 hours ago, johnnie.black said:
Please post the diagnostics: Tools -> Diagnostics
Well, I think I found my issue. I got one of these cards and it worked for a while, but now, when a parity build runs the drives hard at the same time, the card just goes sideways and ends up taking 6 drives offline.
I guess that's my fault for going cheap on the PCIe SATA card.
-
Logs attached. They might not show what you want, as I just rebooted before posting this. I suspect you want me to make it spit out all the errors and post that log; if that's the case, let me know.
-
I have a 750W PSU coming. Right now I have a 600W one, but with overhead it is likely only good for about 530W.
-
Odd one.
The moment I start a parity sync, I start getting errors in the logs, as seen below. If I remove the parity drives and bring the array online, it works without errors. If I add the two parity drives to the normal array, they work as well.
This is on the new RC6.
Parity says it will take 20-30 days to complete right now.
Could this be due to a poor power supply?
It has now disabled one of my parity drives, and the other one is going full speed. I thought it was a bad drive, but I have already changed the drives and the same thing happens.
Nov 16 18:25:23 NAS1 sudo: root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/bash -c /usr/local/emhttp/plugins/unbalance/unbalance -port 6237
Nov 16 18:25:27 NAS1 kernel: ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Nov 16 18:25:27 NAS1 kernel: ata9.00: supports DRM functions and may not be fully accessible
Nov 16 18:25:27 NAS1 kernel: ata9.00: supports DRM functions and may not be fully accessible
Nov 16 18:25:27 NAS1 kernel: ata9.00: configured for UDMA/133
Nov 16 18:25:28 NAS1 kernel: ata14.00: failed to read SCR 1 (Emask=0x40)
Nov 16 18:25:28 NAS1 kernel: ata14.01: failed to read SCR 1 (Emask=0x40)
Nov 16 18:25:28 NAS1 kernel: ata14.02: failed to read SCR 1 (Emask=0x40)
Nov 16 18:25:28 NAS1 kernel: ata14.00: exception Emask 0x100 SAct 0x92490 SErr 0x0 action 0x6 frozen
Nov 16 18:25:28 NAS1 kernel: ata14.00: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.00: cmd 60/40:20:c0:7f:00/05:00:00:00:00/40 tag 4 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:b0:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.00: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.00: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.00: cmd 60/40:38:00:85:00/05:00:00:00:00/40 tag 7 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:b0:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.00: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.00: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.00: cmd 60/40:50:40:8a:00/05:00:00:00:00/40 tag 10 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:b0:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.00: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.00: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.00: cmd 60/40:68:80:8f:00/05:00:00:00:00/40 tag 13 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:b0:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.00: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.00: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.00: cmd 60/40:80:c0:94:00/05:00:00:00:00/40 tag 16 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:b0:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.00: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.00: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.00: cmd 60/80:98:00:9a:00/00:00:00:00:00/40 tag 19 ncq dma 65536 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:b0:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.00: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.01: exception Emask 0x100 SAct 0x124900 SErr 0x0 action 0x6 frozen
Nov 16 18:25:28 NAS1 kernel: ata14.01: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.01: cmd 60/40:40:00:85:00/05:00:00:00:00/40 tag 8 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:b8:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.01: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.01: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.01: cmd 60/40:58:40:8a:00/05:00:00:00:00/40 tag 11 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:b8:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.01: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.01: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.01: cmd 60/40:70:80:8f:00/05:00:00:00:00/40 tag 14 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:b8:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.01: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.01: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.01: cmd 60/40:88:c0:94:00/05:00:00:00:00/40 tag 17 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:b8:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.01: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.01: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.01: cmd 60/80:a0:00:9a:00/00:00:00:00:00/40 tag 20 ncq dma 65536 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:b8:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.01: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.02: exception Emask 0x100 SAct 0x249240 SErr 0x0 action 0x6 frozen
Nov 16 18:25:28 NAS1 kernel: ata14.02: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.02: cmd 60/40:30:c0:7f:00/05:00:00:00:00/40 tag 6 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:f8:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.02: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.02: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.02: cmd 60/40:48:00:85:00/05:00:00:00:00/40 tag 9 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:f8:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.02: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.02: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.02: cmd 60/40:60:40:8a:00/05:00:00:00:00/40 tag 12 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:f8:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.02: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.02: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.02: cmd 60/40:78:80:8f:00/05:00:00:00:00/40 tag 15 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:f8:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.02: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.02: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.02: cmd 60/40:90:c0:94:00/05:00:00:00:00/40 tag 18 ncq dma 688128 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:f8:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.02: status: { DRDY }
Nov 16 18:25:28 NAS1 kernel: ata14.02: failed command: READ FPDMA QUEUED
Nov 16 18:25:28 NAS1 kernel: ata14.02: cmd 60/80:a8:00:9a:00/00:00:00:00:00/40 tag 21 ncq dma 65536 in
Nov 16 18:25:28 NAS1 kernel: res 40/00:f8:30:76:72/00:00:74:00:00/40 Emask 0x100 (unknown error)
Nov 16 18:25:28 NAS1 kernel: ata14.02: status: { DRDY }
Nov 16 18:25:38 NAS1 kernel: ata14.15: softreset failed (1st FIS failed)
-
I have a Dell R510 I want to use with Unraid. I know the PERC 6 cannot be flashed, so can someone point me in the direction of what I should get as a new controller card for this server? And do I need new cables as well?
Cannot install Ubuntu Server in a VM
in VM Engine (KVM)
Posted
I am having this exact same issue: a Dell R530 with a PERC 730P Mini in HBA mode.
UEFI on and off, Q35 and not, BIOS, etc. Nothing seems to work.