reapola

Members
  • Posts: 26

  • Gender: Male
  • Location: Scotland


reapola's Achievements

Noob (1/14) · 0 Reputation

  1. No, still an issue, but it only complains at boot. @manHands good that it worked for you, but I'd rather not start moving data around. @limetech, any ideas?
  2. Did you get this fixed? Noticed the same in my log. Disk1 is a 12TB drive, the others are 8TB, and disk1 was a new install/upgrade in November, so it was a fresh format and parity rebuild.

     Jan 18 23:42:31 Zeus emhttpd: shcmd (37): mkdir -p /mnt/disk1
     Jan 18 23:42:31 Zeus emhttpd: shcmd (38): mount -t xfs -o noatime /dev/md1 /mnt/disk1
     Jan 18 23:42:31 Zeus kernel: SGI XFS with ACLs, security attributes, no debug enabled
     Jan 18 23:42:31 Zeus kernel: XFS (md1): Mounting V5 Filesystem
     Jan 18 23:42:31 Zeus kernel: XFS (md1): Ending clean mount
     Jan 18 23:42:32 Zeus kernel: xfs filesystem being mounted at /mnt/disk1 supports timestamps until 2038 (0x7fffffff)
     Jan 18 23:42:32 Zeus emhttpd: shcmd (39): xfs_growfs /mnt/disk1
     Jan 18 23:42:32 Zeus kernel: XFS (md1): EXPERIMENTAL online shrink feature in use. Use at your own risk!
     Jan 18 23:42:32 Zeus root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
     Jan 18 23:42:32 Zeus root: meta-data=/dev/md1 isize=512 agcount=32, agsize=91553791 blks
     Jan 18 23:42:32 Zeus root: = sectsz=512 attr=2, projid32bit=1
     Jan 18 23:42:32 Zeus root: = crc=1 finobt=1, sparse=1, rmapbt=0
     Jan 18 23:42:32 Zeus root: = reflink=1 bigtime=0 inobtcount=0
     Jan 18 23:42:32 Zeus root: data = bsize=4096 blocks=2929721312, imaxpct=5
     Jan 18 23:42:32 Zeus root: = sunit=1 swidth=32 blks
     Jan 18 23:42:32 Zeus root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Jan 18 23:42:32 Zeus root: log =internal log bsize=4096 blocks=521728, version=2
     Jan 18 23:42:32 Zeus root: = sectsz=512 sunit=1 blks, lazy-count=1
     Jan 18 23:42:32 Zeus root: realtime =none extsz=4096 blocks=0, rtextents=0
     Jan 18 23:42:32 Zeus emhttpd: shcmd (39): exit status: 1
     Jan 18 23:42:32 Zeus emhttpd: shcmd (40): mkdir -p /mnt/disk2
     Jan 18 23:42:32 Zeus emhttpd: shcmd (41): mount -t xfs -o noatime /dev/md2 /mnt/disk2
     Jan 18 23:42:32 Zeus kernel: XFS (md2): Mounting V5 Filesystem
     Jan 18 23:42:32 Zeus kernel: XFS (md2): Ending clean mount
     Jan 18 23:42:32 Zeus kernel: xfs filesystem being mounted at /mnt/disk2 supports timestamps until 2038 (0x7fffffff)
     Jan 18 23:42:32 Zeus emhttpd: shcmd (42): xfs_growfs /mnt/disk2
     Jan 18 23:42:32 Zeus root: meta-data=/dev/md2 isize=512 agcount=8, agsize=244188659 blks
     Jan 18 23:42:32 Zeus root: = sectsz=512 attr=2, projid32bit=1
     Jan 18 23:42:32 Zeus root: = crc=1 finobt=1, sparse=0, rmapbt=0
     Jan 18 23:42:32 Zeus root: = reflink=0 bigtime=0 inobtcount=0
     Jan 18 23:42:32 Zeus root: data = bsize=4096 blocks=1953506633, imaxpct=5
     Jan 18 23:42:32 Zeus root: = sunit=0 swidth=0 blks
     Jan 18 23:42:32 Zeus root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Jan 18 23:42:32 Zeus root: log =internal log bsize=4096 blocks=476930, version=2
     Jan 18 23:42:32 Zeus root: = sectsz=512 sunit=0 blks, lazy-count=1
     Jan 18 23:42:32 Zeus root: realtime =none extsz=4096 blocks=0, rtextents=0
     Jan 18 23:42:32 Zeus emhttpd: shcmd (43): mkdir -p /mnt/disk3
     Jan 18 23:42:32 Zeus emhttpd: shcmd (44): mount -t xfs -o noatime /dev/md3 /mnt/disk3
     Jan 18 23:42:32 Zeus kernel: XFS (md3): Mounting V5 Filesystem
     Jan 18 23:42:32 Zeus kernel: XFS (md3): Ending clean mount
     Jan 18 23:42:32 Zeus kernel: xfs filesystem being mounted at /mnt/disk3 supports timestamps until 2038 (0x7fffffff)
     Jan 18 23:42:32 Zeus emhttpd: shcmd (45): xfs_growfs /mnt/disk3
     Jan 18 23:42:32 Zeus root: meta-data=/dev/md3 isize=512 agcount=8, agsize=244188659 blks
     Jan 18 23:42:32 Zeus root: = sectsz=512 attr=2, projid32bit=1
     Jan 18 23:42:32 Zeus root: = crc=1 finobt=1, sparse=0, rmapbt=0
     Jan 18 23:42:32 Zeus root: = reflink=0 bigtime=0 inobtcount=0
     Jan 18 23:42:32 Zeus root: data = bsize=4096 blocks=1953506633, imaxpct=5
     Jan 18 23:42:32 Zeus root: = sunit=0 swidth=0 blks
     Jan 18 23:42:32 Zeus root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Jan 18 23:42:32 Zeus root: log =internal log bsize=4096 blocks=476930, version=2
     Jan 18 23:42:32 Zeus root: = sectsz=512 sunit=0 blks, lazy-count=1
     Jan 18 23:42:32 Zeus root: realtime =none extsz=4096 blocks=0, rtextents=0
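     Worth noting: a back-of-envelope check (a sketch, not an Unraid tool, using the block counts from the xfs_growfs output in the log above) suggests the md1 filesystem already spans the whole 12 TB drive, so the "No space left on device" message just means there is nothing left to grow into.

     ```python
     # Sanity-check sketch: compute the md1 filesystem size from the
     # xfs_growfs meta-data report above. If it already equals the drive
     # capacity, the grow failure is cosmetic.
     BLOCK_SIZE = 4096             # bsize=4096 from the log
     MD1_DATA_BLOCKS = 2929721312  # data blocks for /dev/md1

     fs_tb = MD1_DATA_BLOCKS * BLOCK_SIZE / 1e12  # decimal TB, as drives are marketed

     print(f"md1 filesystem size: {fs_tb:.2f} TB")  # → md1 filesystem size: 12.00 TB
     ```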
  3. They (the community devs in question) have stated they weren't communicated with behind the scenes as you claimed, which is where this discussion of Unraid vs community developers has erupted from. Glad you've planted your flag, though.
  4. Nope, I'm looking seriously at all the main players again. I migrated from a ReadyNAS Pro over 4 years ago, which was also propped up by its community to work around inbuilt limitations, whether by design or lack of forethought.
  5. Great that integration of 3rd-party modules into the kernel is being helped along by people who don't know how that 'works'. @limetech, this is a situation of your own making which has just been made worse by your replies. I was looking for a project during our 2nd COVID lockdown and I think you've just given me one.
  6. *moved from hardware

     Use case: I've been using Unraid for the last 4 years, and during that time I've upgraded a few times, with the server going from just hosting media files, to being used for gaming/VR, and now full circle back to a NAS host with extra capability. With the 'new WFH normal' I find myself using my own device more for accessing work services, and I'm currently finding the Dell SFF endpoint I use lacking. Over the last few days I've been pricing up another standalone gaming PC build, but now I'm looking at a full compute refresh for my Unraid box so it can be a catch-all device again, which will save some money as I can sell the old kit.

     So far I've looked at a Ryzen 3950X build to keep the physical core count I currently have, but I've seen references to the latency introduced by the CCX design - would Intel be a better choice? Graphics-wise I'm keeping the cost low and looking at a Radeon 5700 XT to pass through, as the 2080 Ti I had previously was overkill for my needs. Anyone been in a similar position, or have any opinions/recommendations on where to go next with the Unraid box?

     Current setup:
     Unraid system: Unraid Server Plus, version 6.9.0-beta25
     Model: Xeon 32Gb RAM 24Tb
     Motherboard: ASRock - EP2C602-4L/D16
     Processor: 2 x Intel® Xeon® CPU E5-2670 0 @ 2.60GHz
     Memory: 64 GB (max. installable capacity 192 GB)
     Storage:
       Cache: 512Gb SSD
       Array: 4 x WDC_WD80EZAZ 8Tb
       Scratch Disk: 512Gb SSD
     Apps in use:

     So this is what I'm looking at so far - https://uk.pcpartpicker.com/user/reapola/saved/fx99rH
     CPU: AMD Threadripper 3960X 3.8 GHz 24-Core Processor
     CPU Cooler: be quiet! Dark Rock Pro TR4 59.5 CFM CPU Cooler
     Motherboard: Gigabyte TRX40 AORUS MASTER EATX sTRX4 Motherboard
     Memory: Corsair Vengeance LPX 64 GB (4 x 16 GB) DDR4-3200 CL16 Memory
     Storage: ADATA XPG SX8200 Pro 2 TB M.2-2280 NVME Solid State Drive
     Video Card: Gigabyte GeForce RTX 2070 SUPER 8 GB WINDFORCE OC 3X Video Card

     With existing parts to be migrated:
     SAS 2308 Controller
     4 x 8TB HDD
     2 x 512Gb SSD

     Anyone know of a comparative Xeon build that would be of similar performance?
  7. "1c | 285 | 8448 | 4224 | 128 | 4216 | 323.3" - wish it was that fast! Any comments on the results? The original settings were put in when I built the server in 2016. I'm currently looking at moving to 8/10TB disks, probably shucking the WD Elements as the price of Reds is ridiculous!

     Unraid 6.x Tunables Tester v4.1 by Pauven
     Tunables Report produced Thu Sep 19 21:27:49 BST 2019
     Run on server: Zeus
     Extra-Long Parity Sync Test

     Current Values: md_num_stripes=4096, md_sync_window=2048, md_sync_thresh=2000
     Global nr_requests=8
     Disk Specific nr_requests Values: sdc=8, sdb=8, sdh=8, sdg=8, sdf=8, sde=8, sdd=8

     --- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10min Duration) ---
     Tst | RAM | stri | win | req | thresh | MB/s
     ----------------------------------------------
     1 | 138 | 4096 | 2048 | 8 | 2000 | 154.7

     --- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10min Duration) ---
     Tst | RAM | stri | win | req | thresh | MB/s
     ----------------------------------------------
     1 | 43 | 1280 | 384 | 128 | 192 | 149.8

     --- TEST PASS 1 (2.5 Hrs - 12 Sample Points @ 10min Duration) ---
     Tst | RAM | stri | win | req | thresh | MB/s | thresh | MB/s | thresh | MB/s
     --------------------------------------------------------------------------------
     1 | 25 | 768 | 384 | 128 | 376 | 151.2 | 320 | 151.1 | 192 | 148.6
     2 | 51 | 1536 | 768 | 128 | 760 | 153.7 | 704 | 153.6 | 384 | 152.3
     3 | 103 | 3072 | 1536 | 128 | 1528 | 155.4 | 1472 | 155.7 | 768 | 153.2
     4 | 207 | 6144 | 3072 | 128 | 3064 | 157.7 | 3008 | 157.8 | 1536 | 156.5

     --- TEST PASS 1_HIGH (40 Min - 3 Sample Points @ 10min Duration) ---
     Tst | RAM | stri | win | req | thresh | MB/s | thresh | MB/s | thresh | MB/s
     --------------------------------------------------------------------------------
     1 | 414 | 12288 | 6144 | 128 | 6136 | 157.7 | 6080 | 157.8 | 3072 | 157.9

     --- TEST PASS 1_VERYHIGH (40 Min - 3 Sample Points @ 10min Duration) ---
     Tst | RAM | stri | win | req | thresh | MB/s | thresh | MB/s | thresh | MB/s
     --------------------------------------------------------------------------------
     1 | 622 | 18432 | 9216 | 128 | 9208 | 157.8 | 9152 | 157.8 | 4608 | 157.9

     --- Using md_sync_window=3072 & md_sync_thresh=window-64 for Pass 2 ---

     --- TEST PASS 2 (10 Hrs - 49 Sample Points @ 10min Duration) ---
     Tst | RAM | stri | win | req | thresh | MB/s
     ----------------------------------------------
     1 | 103 | 3072 | 1536 | 128 | 1472 | 156.3
     2 | 108 | 3200 | 1600 | 128 | 1536 | 156.2
     3 | 112 | 3328 | 1664 | 128 | 1600 | 156.4
     4 | 116 | 3456 | 1728 | 128 | 1664 | 156.7
     5 | 121 | 3584 | 1792 | 128 | 1728 | 156.6
     6 | 125 | 3712 | 1856 | 128 | 1792 | 156.6
     7 | 129 | 3840 | 1920 | 128 | 1856 | 157.1
     8 | 133 | 3968 | 1984 | 128 | 1920 | 157.0
     9 | 138 | 4096 | 2048 | 128 | 1984 | 156.9
     10 | 142 | 4224 | 2112 | 128 | 2048 | 157.1
     11 | 146 | 4352 | 2176 | 128 | 2112 | 157.4
     12 | 151 | 4480 | 2240 | 128 | 2176 | 157.2
     13 | 155 | 4608 | 2304 | 128 | 2240 | 157.2
     14 | 159 | 4736 | 2368 | 128 | 2304 | 157.5
     15 | 164 | 4864 | 2432 | 128 | 2368 | 157.5
     16 | 168 | 4992 | 2496 | 128 | 2432 | 157.3
     17 | 172 | 5120 | 2560 | 128 | 2496 | 157.5
     18 | 177 | 5248 | 2624 | 128 | 2560 | 157.7
     19 | 181 | 5376 | 2688 | 128 | 2624 | 157.5
     20 | 185 | 5504 | 2752 | 128 | 2688 | 157.6
     21 | 190 | 5632 | 2816 | 128 | 2752 | 157.7
     22 | 194 | 5760 | 2880 | 128 | 2816 | 157.6
     23 | 198 | 5888 | 2944 | 128 | 2880 | 157.7
     24 | 203 | 6016 | 3008 | 128 | 2944 | 157.6
     25 | 207 | 6144 | 3072 | 128 | 3008 | 157.9
     26 | 211 | 6272 | 3136 | 128 | 3072 | 157.8
     27 | 216 | 6400 | 3200 | 128 | 3136 | 157.6
     28 | 220 | 6528 | 3264 | 128 | 3200 | 157.7
     29 | 224 | 6656 | 3328 | 128 | 3264 | 157.9
     30 | 229 | 6784 | 3392 | 128 | 3328 | 157.7
     31 | 233 | 6912 | 3456 | 128 | 3392 | 157.7
     32 | 237 | 7040 | 3520 | 128 | 3456 | 157.9
     33 | 242 | 7168 | 3584 | 128 | 3520 | 157.8
     34 | 246 | 7296 | 3648 | 128 | 3584 | 157.7
     35 | 250 | 7424 | 3712 | 128 | 3648 | 157.7
     36 | 255 | 7552 | 3776 | 128 | 3712 | 157.9
     37 | 259 | 7680 | 3840 | 128 | 3776 | 157.8
     38 | 263 | 7808 | 3904 | 128 | 3840 | 157.8
     39 | 267 | 7936 | 3968 | 128 | 3904 | 157.9
     40 | 272 | 8064 | 4032 | 128 | 3968 | 157.7
     41 | 276 | 8192 | 4096 | 128 | 4032 | 157.7
     42 | 280 | 8320 | 4160 | 128 | 4096 | 157.8
     43 | 285 | 8448 | 4224 | 128 | 4160 | 158.0
     44 | 289 | 8576 | 4288 | 128 | 4224 | 157.8
     45 | 293 | 8704 | 4352 | 128 | 4288 | 157.8
     46 | 298 | 8832 | 4416 | 128 | 4352 | 157.9
     47 | 302 | 8960 | 4480 | 128 | 4416 | 157.7
     48 | 306 | 9088 | 4544 | 128 | 4480 | 157.8
     49 | 311 | 9216 | 4608 | 128 | 4544 | 157.8

     --- Using fastest result of md_sync_window=4224 for Pass 3 ---

     --- TEST PASS 3 (12 Hrs - 54 Sample Points @ 10min Duration) ---
     Tst | RAM | stri | win | req | thresh | MB/s
     ----------------------------------------------
     1a | 285 | 8448 | 4224 | 128 | 4223 | 158.0
     1b | 285 | 8448 | 4224 | 128 | 4220 | 157.8
     1c | 285 | 8448 | 4224 | 128 | 4216 | 323.3
     1d | 285 | 8448 | 4224 | 128 | 4212 | 157.8
     1e | 285 | 8448 | 4224 | 128 | 4208 | 157.8
     1f | 285 | 8448 | 4224 | 128 | 4204 | 157.8
     1g | 285 | 8448 | 4224 | 128 | 4200 | 158.0
     1h | 285 | 8448 | 4224 | 128 | 4196 | 157.8
     1i | 285 | 8448 | 4224 | 128 | 4192 | 157.7
     1j | 285 | 8448 | 4224 | 128 | 4188 | 157.8
     1k | 285 | 8448 | 4224 | 128 | 4184 | 157.9
     1l | 285 | 8448 | 4224 | 128 | 4180 | 157.8
     1m | 285 | 8448 | 4224 | 128 | 4176 | 157.8
     1n | 285 | 8448 | 4224 | 128 | 4172 | 158.0
     1o | 285 | 8448 | 4224 | 128 | 4168 | 157.8
     1p | 285 | 8448 | 4224 | 128 | 4164 | 157.7
     1q | 285 | 8448 | 4224 | 128 | 4160 | 157.8
     1r | 285 | 8448 | 4224 | 128 | 2112 | 157.9
     2a | 285 | 8448 | 4224 | 16 | 4223 | 157.0
     2b | 285 | 8448 | 4224 | 16 | 4220 | 157.0
     2c | 285 | 8448 | 4224 | 16 | 4216 | 157.3
     2d | 285 | 8448 | 4224 | 16 | 4212 | 157.0
     2e | 285 | 8448 | 4224 | 16 | 4208 | 157.1
     2f | 285 | 8448 | 4224 | 16 | 4204 | 157.1
     2g | 285 | 8448 | 4224 | 16 | 4200 | 157.3
     2h | 285 | 8448 | 4224 | 16 | 4196 | 157.0
     2i | 285 | 8448 | 4224 | 16 | 4192 | 157.1
     2j | 285 | 8448 | 4224 | 16 | 4188 | 157.2
     2k | 285 | 8448 | 4224 | 16 | 4184 | 157.1
     2l | 285 | 8448 | 4224 | 16 | 4180 | 157.1
     2m | 285 | 8448 | 4224 | 16 | 4176 | 157.1
     2n | 285 | 8448 | 4224 | 16 | 4172 | 157.3
     2o | 285 | 8448 | 4224 | 16 | 4168 | 157.1
     2p | 285 | 8448 | 4224 | 16 | 4164 | 157.0
     2q | 285 | 8448 | 4224 | 16 | 4160 | 157.1
     2r | 285 | 8448 | 4224 | 16 | 2112 | 157.3
     3a | 285 | 8448 | 4224 | 8 | 4223 | 154.9
     3b | 285 | 8448 | 4224 | 8 | 4220 | 155.0
     3c | 285 | 8448 | 4224 | 8 | 4216 | 154.9
     3d | 285 | 8448 | 4224 | 8 | 4212 | 155.1
     3e | 285 | 8448 | 4224 | 8 | 4208 | 154.8
     3f | 285 | 8448 | 4224 | 8 | 4204 | 155.0
     3g | 285 | 8448 | 4224 | 8 | 4200 | 155.2
     3h | 285 | 8448 | 4224 | 8 | 4196 | 155.0
     3i | 285 | 8448 | 4224 | 8 | 4192 | 155.0
     3j | 285 | 8448 | 4224 | 8 | 4188 | 155.1
     3k | 285 | 8448 | 4224 | 8 | 4184 | 155.2
     3l | 285 | 8448 | 4224 | 8 | 4180 | 153.9
     3m | 285 | 8448 | 4224 | 8 | 4176 | 155.0
     3n | 285 | 8448 | 4224 | 8 | 4172 | 155.2
     3o | 285 | 8448 | 4224 | 8 | 4168 | 154.9
     3p | 285 | 8448 | 4224 | 8 | 4164 | 154.9
     3q | 285 | 8448 | 4224 | 8 | 4160 | 155.0
     3r | 285 | 8448 | 4224 | 8 | 2112 | 155.1

     The results below do NOT include the Baseline test of current values.

     The Fastest settings tested give a peak speed of 323.3 MB/s
       md_sync_window: 4224
       md_num_stripes: 8448
       md_sync_thresh: 4216
       nr_requests: 128
     This will consume 285 MB (147 MB more than your current utilization of 138 MB)

     The Thriftiest settings (95% of Fastest) give a peak speed of 323.3 MB/s
       md_sync_window: 4224
       md_num_stripes: 8448
       md_sync_thresh: 4216
       nr_requests: 128
     This will consume 285 MB (147 MB more than your current utilization of 138 MB)

     The Recommended settings (99% of Fastest) give a peak speed of 323.3 MB/s
       md_sync_window: 4224
       md_num_stripes: 8448
       md_sync_thresh: 4216
       nr_requests: 128
     This will consume 285 MB (147 MB more than your current utilization of 138 MB)

     NOTE: Adding additional drives will increase memory consumption.

     In Unraid, go to Settings > Disk Settings to set your chosen parameter values.

     Completed: 21 Hrs 13 Min 42 Sec.

     NOTE: Use the smallest set of values that produce good results. Larger values increase server memory use, and may cause stability issues with Unraid, especially if you have any add-ons or plug-ins installed.

     System Info: Zeus, Unraid version 6.7.2
       md_num_stripes=4096
       md_sync_window=2048
       md_sync_thresh=2000
       nr_requests=8 (Global Setting)
       sbNumDisks=8
     CPU: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
     RAM: System Memory

     Outputting free low memory information...
            total     used      free    shared  buff/cache  available
     Mem:   66000560  1580196   63223520  1143264  1196844  62802068
     Low:   66000560  2777040   63223520
     High:  0         0         0
     Swap:  0         0         0

     SCSI Host Controllers and Connected Array Drives
     ------------------------------------------------------------------------------
     [0] scsi0 usb-storage
         [0:0:0:0] flash sda 15.3GB Cruzer Fit
     [1] scsi1 isci C602 chipset 4-Port SATA Storage Control Unit
         [1:0:0:0] cache sdi 512GB SanDisk SD8SB8U5
     [2] scsi2 ahci
     [3] scsi3 ahci
     [4] scsi4 ahci
     [5] scsi5 ahci
     [6] scsi6 ahci
     [7] scsi7 ahci
     [8] scsi8 mpt2sas SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
         [8:0:0:0] disk1 sdb 4.00TB ST4000VN000-1H41
         [8:0:1:0] parity sdc 4.00TB ST4000VN000-1H41
         [8:0:2:0] disk6 sdd 4.00TB ST4000VN000-1H41
         [8:0:3:0] disk5 sde 4.00TB ST4000VN000-1H41
         [8:0:4:0] disk4 sdf 4.00TB ST4000VN000-1H41
         [8:0:5:0] disk3 sdg 4.00TB ST4000VN000-1H41
         [8:0:6:0] disk2 sdh 4.00TB ST4000VN000-1H41

     *** END OF REPORT ***
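     For what it's worth, the 323.3 MB/s in row 1c looks like a one-off measurement spike (every neighbouring sample sits around 155-158 MB/s), and the tester's "Fastest settings" summary latched onto it. A small sketch of how one might pick the best row while discarding such outliers (this is hypothetical analysis code, not part of Pauven's Tunables Tester; the rows are a subset of the Pass 3 results):

     ```python
     # Sketch: choose the fastest (md_sync_thresh, MB/s) row while treating
     # samples far above the median as measurement glitches.
     from statistics import median

     # (test id, md_sync_thresh, MB/s) - a hypothetical subset of Pass 3 above
     rows = [
         ("1a", 4223, 158.0),
         ("1b", 4220, 157.8),
         ("1c", 4216, 323.3),   # the suspect spike
         ("1d", 4212, 157.8),
         ("1e", 4208, 157.8),
     ]

     med = median(mbps for _, _, mbps in rows)

     # Discard anything more than 20% above the median speed.
     plausible = [r for r in rows if r[2] <= med * 1.2]
     best = max(plausible, key=lambda r: r[2])

     print(best)  # → ('1a', 4223, 158.0)
     ```

     With the spike filtered out, the "best" threshold is essentially a tie across the whole window, which matches the flat ~158 MB/s plateau in the full table.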
  8. I'm on my second Unraid build after having a ReadyNAS Pro for years. My last build (see below), set up in 2017, was based off the Techspot article that year - https://www.techspot.com/review/1155-affordable-dual-xeon-pc/. At the time the CPUs were £180, RAM £200 and the motherboard £300, which for a 32-thread server was a steal. This year I've been upgrading my gaming PC, and now it's time to look at the server. So far it's been mainly housekeeping to reduce the data usage (currently 70% used) in order to replace the HDDs, as they are 4/5 years old now and running 24/7. Are there any similar bargains to be had with more up-to-date server hardware, or do I need to wait on Facebook doing a refresh again?

     Server specs:
     2 x Xeon E5-2670 8-core with Noctua cooling
     ASRock Rack EP2C602-4L/D16 (SSI EEB form factor)
     64Gb DDR3 ECC
     2 x 512Gb SSD (cache)
     SAS controller
     7 x 4Tb HDD (1 parity, 24Tb usable)
     2 x 10Gb SFP
  9. Rolled back to 6.1.9 after a hard lock of the W10 gaming VM, which in turn locked Unraid, necessitating an unclean shutdown. Spent the rest of the night getting the server back up to scratch and creating a new W10 VM. I think at this point I'm "out" with 6.2, after the initial hassle of getting the upgrade installed followed by this.
  10. Upgraded to beta 20 and experienced the same 'label not found' error as on previous betas. Threw caution to the wind, created a new virgin USB unRAID boot disk, and it started fine. Slight issue with Unraid not detecting the filesystem when starting the array, resulting in unmountable drives and a brown-pants moment. Changing from 'auto' to xfs for each affected drive fixed the issue and avoided me crying lots. I've now set up all the Dockers, settings and plugins from scratch, and I'm very happy with the polish that's gone into the updated webUI - good job! Since I'm having to remake my Windows 10 gaming VM - what are the recommended settings with 6.2?
  11. "The problem is happening earlier than your screenshot shows, and your system log attached in previous post is too garbled to read. BTW: how'd you get that log if you can't mount the USB flash? Did you try some of the other USB ports?" The core Linux OS is loading, so SSH is enabled - I logged in and scp'd it across. Garbled how? The file is a tar.bz2 of the whole log directory. Attached is the syslog extracted from the archive. Tried all the USB ports: 1 on the motherboard for hypervisors, 4 on the back, 2 on the front and the 2 USB3s. syslog.txt
  12. In reference to the bug report above, here's the capture from the IPMI console. Edit: This is with a freshly formatted USB stick which was formatted and labelled as per spec using Windows Explorer, and not Rufus as I was previously using. Reverting to 6.1.9 fixes the issue, so it must be kernel- or driver-related. The motherboard is a Supermicro X10SL7-F (http://www.supermicro.co.uk/products/motherboard/Xeon/C220/X10SL7-F.cfm) and the USB stick is connected to a USB2 port which is, as far as I'm aware, connected to the Intel® C222 Express PCH controller.
  13. Same issue here with b19 as with b18: the flash does not mount and therefore the OS doesn't load. I've attached a copy of the log dir, which should hopefully highlight the issue. The attached file is a .tar.bz2. unraid_6.2b19.No_unRAIDlabel_logdir.txt
  14. Other people have reported the same bug in this thread?
  15. Nope, the logs posted before are from a physical install with the USB stick originally on the ESXi/internal USB port. Tried all the USB controllers and a different USB stick.