Leaderboard

Popular Content

Showing content with the highest reputation on 10/10/19 in all areas

  1. mitigations=off. Plugin updated a week or so ago to reflect this.
    2 points
  2. Really considering moving away from unraid at the moment, at least for VMs. Firstly the passthrough: I tried to pass an nvidia gtx 670 through to a Windows 10 VM, and I'm not happy. I have it set up so I can RDP into it, but when I select the GPU it shows it booting and I can't actually get into it. I'm also unable to turn off the Hyper-V option that everyone recommends as a first step; if I set it to off and save changes it re-enables itself. I also have a ubuntu VM, and today out of nowhere it started with no network. Watching it boot I get "a start job is running for raise network", and after 5 minutes it times out. The only changes I've made lately are NIC teaming in round-robin mode (I have a managed switch), however the bridge name remains the same and all VMs show the same bridge name. It just feels like such an uphill struggle to get unraid VMs to work. I want them to "just work", not randomly stop working one day or not work at all :(
    1 point
  3. Good. One of them must have become corrupt when you originally copied it.
    1 point
  4. Nope, nothing special. I did the same thing not too long ago with a Dell H310. unRAID tracks disks by serial number, so you should be able to just insert the card and plug the SATA cables into it, removing them from the MB SATA ports. Just remember to leave SATA SSDs (if you have them) plugged into the MB SATA ports so they are properly TRIMmed.
    1 point
  5. The 'Fix Common Problems' plugin has a "Number of allowed invalid logins per day:" setting.
    1 point
  6. Quite commonly done, and yes, the various icons in the Settings tab will disable whatever unRaid has options for.
    1 point
  7. They are support files for docker and VMs. If you have not been using these you can delete either (or both) of them. They are normally located on the cache for performance reasons.
    1 point
  8. When you take hardware passthru and dependency out of the picture, it becomes very simple and straightforward to set up VMs with Unraid. I have several VMs running and it took literally a couple of minutes to get them up and running. VMs can be complex depending on what you try to achieve and the actual hardware at hand.
    1 point
  9. That looks OK. Normally your VMs and dockers are in the User Shares named appdata, domains, and system. These are cache-prefer by default, which means if they ever overflowed to the array then mover could move them back to cache. But mover can't move open files, so for that to actually work, the docker and VM services have to be stopped. It is really simpler to just make those shares cache-only, but then you have to be careful not to fill up cache (or any other disk, for that matter) or it could corrupt.
     Each user share has a Minimum Free setting which Unraid uses to decide which disk to use when it begins writing a new file. It has no way to know how large a file will become when it chooses a disk to write to, so if a disk has less than Minimum Free it will choose another (see the Minimum Free sketch after this list). You should set Minimum Free larger than the largest file you expect to write to the user share.
     Cache also has a Minimum Free in Global Share Settings. If cache has less than that minimum, Unraid will overflow to the array, but that only works for cache-yes and cache-prefer user shares, as explained in the link I gave.
    1 point
  10. OK. That shows that somehow you have ended up with the same file on both array and cache, which should not happen in normal operation. In such a case mover does not do anything, as it does not know which is the 'good' copy. The most likely cause would be that at some point you started the array without the cache disk present, so the system created fresh copies of these files on the array. You need to decide which copy (if any) of each file you want to keep and remove the other one (see the duplicate-finder sketch after this list).
    1 point
  11. Each user share has a setting which determines how it interacts with the cache pool: it specifies whether new files and directories written on the share can be written onto the Cache disk/pool if present (see the cache-mode sketch after this list).
     No prohibits new files and subdirectories from being written onto the Cache disk/pool.
     Yes indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the Cache disk/pool and onto the array.
     Only indicates that all new files and subdirectories must be written to the Cache disk/pool. If there is insufficient free space on the Cache disk/pool, create operations will fail with an out-of-space status.
     Prefer indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the array and onto the Cache disk/pool.
    1 point
  12. Hi! Since one of my older drives is failing I need to replace it. My current parity is 8TB so I am replacing it with a new 8TB drive. I have been buying WD Reds lately, but since the Seagate IronWolf would be a bit cheaper I was wondering whether that would be a good alternative as well. Any other recommendations for reliable 8TB drives?
    1 point
  13. It was triggered by an unclean shutdown:
     Oct 9 02:29:05 Tower emhttpd: unclean shutdown detected
     I agree with Frank1940 that you should run memtest.
    1 point
  14. No, we do not have many options to map memory slots. To use virtio net, you have to do two things: 1. hotplug a virtio ethernet device instead of the predefined one, and 2. add the boot args debug=0x100 and keepsyms=1.
    1 point
  15. Your server restarted this morning; here is the first line in the syslog:
     Oct 9 02:28:12 Tower kernel: microcode: microcode updated early to revision 0x27, date = 2019-02-26
     You are also getting segfaults near the end of the syslog. I believe these are usually memory related. You might want to run memtest (from the boot menu) unless you have ECC memory. I would also double check that you didn't unlock any of the memory sticks when you were doing the drive changes.
    1 point
  16. Haha, happens to the best of us. The update worked a treat! Thanks for your help. I can now rest easy at night knowing I'm getting full throughput from my drives, all 4.85GB/s of it!!!! Also good to know I have some theoretical headroom of about 3GB/s for future expansion through expanders, if I ever find a case to support that many drives (see the bandwidth arithmetic after this list). Thank you for supporting such a useful tool.
    1 point
  17. I don't know much about cache disks. Obvious suggestion is to fix the error on the disk and then enable plugins/dockers/VMs one at a time until you can determine which one is causing the problem, I suppose.
    1 point
  18. OK, I think I have what I need. Thanks for your help. Did not think to search for moving docker from one to another.
    1 point
  19. At the time those diagnostics were created, were any CPUs showing 100% load? If so, which ones? Also, have you tried booting in safe mode and seeing if this occurs with no plugins/dockers/vms loaded? There does seem to be some corruption on one of your disks:
     Oct 7 23:56:33 Homebase kernel: BTRFS critical (device sdj1): corrupt leaf: root=5 block=1953586397184 slot=84, bad key order, prev (288230376157862467 96 4) current (6150723 96 5)
     ### [PREVIOUS LINE REPEATED 4 TIMES] ###
     Should probably run a check on that one, as it looks like it eventually causes a kernel fault:
     Oct 8 02:02:49 Homebase kernel: BUG: unable to handle kernel NULL pointer dereference at 0000000000000080
     Oct 8 02:02:49 Homebase kernel: PGD 4ad0b1067 P4D 4ad0b1067 PUD 4ad0b0067 PMD 0
     Oct 8 02:02:49 Homebase kernel: Oops: 0000 [#1] SMP NOPTI
     Oct 8 02:02:49 Homebase kernel: CPU: 15 PID: 1848 Comm: fstrim Tainted: P O 4.19.56-Unraid #1
     Oct 8 02:02:49 Homebase kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./X570 Taichi, BIOS P2.10 09/09/2019
     Oct 8 02:02:49 Homebase kernel: RIP: 0010:btrfs_trim_fs+0x166/0x369
     Oct 8 02:02:49 Homebase kernel: Code: 00 00 48 c7 44 24 38 00 00 00 00 49 8b 45 10 48 c7 44 24 40 00 00 00 00 48 c7 44 24 30 00 00 00 00 48 89 44 24 20 48 8b 43 68 <48> 8b 80 80 00 00 00 48 8b 80 f8 03 00 00 48 8b 80 a8 01 00 00 0f
     Oct 8 02:02:49 Homebase kernel: RSP: 0018:ffffc9001294fc90 EFLAGS: 00010297
     Oct 8 02:02:49 Homebase kernel: RAX: 0000000000000000 RBX: ffff888f5db68200 RCX: ffff888fbf604878
     Oct 8 02:02:49 Homebase kernel: RDX: ffff888cac98de80 RSI: ffff888f5d718c00 RDI: ffff888fbf604858
     Oct 8 02:02:49 Homebase kernel: RBP: 0000000000000000 R08: ffff888f5911fa70 R09: ffff888f5911fa68
     Oct 8 02:02:49 Homebase kernel: R10: ffffea0022918ec0 R11: ffff888ffe9e0b80 R12: ffff888fbfafe000
     Oct 8 02:02:49 Homebase kernel: R13: ffffc9001294fd20 R14: 0000000000000000 R15: 0000000000000000
     Oct 8 02:02:49 Homebase kernel: FS: 000014b7fa3ac780(0000) GS:ffff888ffe9c0000(0000) knlGS:0000000000000000
     Oct 8 02:02:49 Homebase kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Oct 8 02:02:49 Homebase kernel: CR2: 0000000000000080 CR3: 00000001a6aa2000 CR4: 0000000000340ee0
     Oct 8 02:02:49 Homebase kernel: Call Trace:
     Oct 8 02:02:49 Homebase kernel: ? dput.part.6+0x24/0xf6
     Oct 8 02:02:49 Homebase kernel: btrfs_ioctl_fitrim.isra.7+0xfe/0x135
     Oct 8 02:02:49 Homebase kernel: btrfs_ioctl+0x4f6/0x28ad
     Oct 8 02:02:49 Homebase kernel: ? queue_var_show+0x12/0x15
     Oct 8 02:02:49 Homebase kernel: ? _copy_to_user+0x22/0x28
     Oct 8 02:02:49 Homebase kernel: ? cp_new_stat+0x14b/0x17a
     Oct 8 02:02:49 Homebase kernel: ? vfs_ioctl+0x19/0x26
     Oct 8 02:02:49 Homebase kernel: vfs_ioctl+0x19/0x26
     Oct 8 02:02:49 Homebase kernel: do_vfs_ioctl+0x526/0x54e
     Oct 8 02:02:49 Homebase kernel: ? __se_sys_newfstat+0x3c/0x5f
     Oct 8 02:02:49 Homebase kernel: ksys_ioctl+0x39/0x58
     Oct 8 02:02:49 Homebase kernel: __x64_sys_ioctl+0x11/0x14
     Oct 8 02:02:49 Homebase kernel: do_syscall_64+0x57/0xf2
     Oct 8 02:02:49 Homebase kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
     Oct 8 02:02:49 Homebase kernel: RIP: 0033:0x14b7fa4de397
     Oct 8 02:02:49 Homebase kernel: Code: 00 00 90 48 8b 05 f9 2a 0d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d c9 2a 0d 00 f7 d8 64 89 01 48
     Oct 8 02:02:49 Homebase kernel: RSP: 002b:00007ffc52c9f358 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
     Oct 8 02:02:49 Homebase kernel: RAX: ffffffffffffffda RBX: 00007ffc52c9f4b0 RCX: 000014b7fa4de397
     Oct 8 02:02:49 Homebase kernel: RDX: 00007ffc52c9f360 RSI: 00000000c0185879 RDI: 0000000000000003
     Oct 8 02:02:49 Homebase kernel: RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000415fd0
     Oct 8 02:02:49 Homebase kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000415740
     Oct 8 02:02:49 Homebase kernel: R13: 00000000004156c0 R14: 0000000000415740 R15: 000014b7fa3ac6b0
     Oct 8 02:02:49 Homebase kernel: Modules linked in: veth xt_CHECKSUM ipt_REJECT ip6table_mangle ip6table_nat nf_nat_ipv6 iptable_mangle ip6table_filter ip6_tables vhost_net tun vhost tap macvlan xt_nat ipt_MASQUERADE iptable_nat nf_nat_ipv4 iptable_filter ip_tables nf_nat xfs dm_crypt algif_skcipher af_alg dm_mod dax md_mod bonding edac_mce_amd kvm_amd nvidia_drm(PO) nvidia_modeset(PO) nvidia(PO) drm_kms_helper btusb btrtl btbcm drm kvm btintel igb bluetooth agpgart syscopyarea sysfillrect crct10dif_pclmul sysimgblt fb_sys_fops crc32_pclmul crc32c_intel ghash_clmulni_intel i2c_piix4 i2c_algo_bit pcbc i2c_core aesni_intel aes_x86_64 crypto_simd wmi_bmof mxm_wmi ahci ecdh_generic cryptd ccp libahci glue_helper wmi button pcc_cpufreq acpi_cpufreq
     Oct 8 02:02:49 Homebase kernel: CR2: 0000000000000080
     Oct 8 02:02:49 Homebase kernel: ---[ end trace 9bdd9e618dc0d9c2 ]---
     Oct 8 02:02:49 Homebase kernel: RIP: 0010:btrfs_trim_fs+0x166/0x369
     Oct 8 02:02:49 Homebase kernel: Code: 00 00 48 c7 44 24 38 00 00 00 00 49 8b 45 10 48 c7 44 24 40 00 00 00 00 48 c7 44 24 30 00 00 00 00 48 89 44 24 20 48 8b 43 68 <48> 8b 80 80 00 00 00 48 8b 80 f8 03 00 00 48 8b 80 a8 01 00 00 0f
     Oct 8 02:02:49 Homebase kernel: RSP: 0018:ffffc9001294fc90 EFLAGS: 00010297
     Oct 8 02:02:49 Homebase kernel: RAX: 0000000000000000 RBX: ffff888f5db68200 RCX: ffff888fbf604878
     Oct 8 02:02:49 Homebase kernel: RDX: ffff888cac98de80 RSI: ffff888f5d718c00 RDI: ffff888fbf604858
     Oct 8 02:02:49 Homebase kernel: RBP: 0000000000000000 R08: ffff888f5911fa70 R09: ffff888f5911fa68
     Oct 8 02:02:49 Homebase kernel: R10: ffffea0022918ec0 R11: ffff888ffe9e0b80 R12: ffff888fbfafe000
     Oct 8 02:02:49 Homebase kernel: R13: ffffc9001294fd20 R14: 0000000000000000 R15: 0000000000000000
     Oct 8 02:02:49 Homebase kernel: FS: 000014b7fa3ac780(0000) GS:ffff888ffe9c0000(0000) knlGS:0000000000000000
     Oct 8 02:02:49 Homebase kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Oct 8 02:02:49 Homebase kernel: CR2: 0000000000000080 CR3: 00000001a6aa2000 CR4: 0000000000340ee0
    1 point
  20. On the Main page, click on the disk icon of a device and it shows the log associated with that disk.
    1 point
  21. Add it after the AppID. This should be a simple fix, I will look into it after work. EDIT: Are you sure this is not a bug in the game itself and they will patch it in the next few days? I've downloaded the stable version and it works without a flaw, then I deleted the whole folder and the docker and installed the latest_experimental, and it's the same as in your screenshot. I even looked to see if the folder structure itself is different, but it's not, it's exactly the same; even the missing steamclient.so is in the main directory... Is this the correct term for the beta build: '-beta latest_experimental', or is '-beta' enough? EDIT2: Fixed the docker, please click 'Check for Updates' on the Docker screen in Unraid and update the servers. The latest experimental build now runs fine.
    1 point
  22. It likely won't support TRIM either. As for the SSDs, one of the most common SSDs, the Samsung 860 EVO, supports DRZAT.
    1 point
  23. IIRC the current super.dat format, where the disk assignments are stored, can't support more than 30 devices. That doesn't mean it couldn't be changed, but likely not without a lot of work.
    1 point
  24. Any ideas when this is coming? Keeping 8 7200rpm SAS drives going all the time feels like a waste...
    1 point
  25. Yes this will for sure be done
    1 point
  26. @unstatic The FiveM Docker will go live in the next few hours.
    1 point
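
The Minimum Free rule described in item 9 can be sketched in a few lines of Python. This is only an illustration of the behaviour as described there, not Unraid's actual allocation code; the disk names, sizes and threshold below are made-up examples.

    # Illustrative sketch of the "Minimum Free" rule from item 9 (not Unraid source
    # code): when a new file is created on a user share, any disk whose free space
    # is below the share's Minimum Free setting is skipped, because Unraid cannot
    # know how large the file will eventually grow.

    def pick_disk(disks, minimum_free_bytes):
        """Return the first disk with more free space than the threshold, else None.

        `disks` is a list of (name, free_bytes) tuples; the real allocator also
        honours the share's allocation method and included/excluded disks.
        """
        for name, free_bytes in disks:
            if free_bytes > minimum_free_bytes:
                return name
        return None  # no eligible disk: the write fails, or overflows for cache

    # Hypothetical numbers: Minimum Free = 50 GiB, largest expected file ~40 GiB.
    disks = [("disk1", 30 * 2**30), ("disk2", 120 * 2**30)]
    print(pick_disk(disks, minimum_free_bytes=50 * 2**30))  # -> disk2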
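The duplicate files discussed in item 10 can be listed with a short Python script before deciding which copy to keep. The /mnt/cache and /mnt/disk1 mount points follow Unraid's usual layout, but the share name appdata and the single array disk are assumptions; adjust both for your system.

    # List relative paths that exist both on the cache and on an array disk, so the
    # 'good' copy can be chosen by hand (see item 10). Run per array disk and share.
    from pathlib import Path

    def relative_files(root):
        """All regular files under `root`, as paths relative to it."""
        return {str(p.relative_to(root)) for p in root.rglob("*") if p.is_file()}

    cache_root = Path("/mnt/cache/appdata")   # assumed share name
    disk_root = Path("/mnt/disk1/appdata")    # repeat for disk2, disk3, ...

    if cache_root.is_dir() and disk_root.is_dir():
        for dup in sorted(relative_files(cache_root) & relative_files(disk_root)):
            print(dup)  # same path on cache and array: keep one copy, remove the other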
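Item 11 quotes the help text for the four 'Use cache' modes; the same rules can be restated as a small decision table in Python. This is purely a restatement of the quoted description, not how Unraid implements it, and the mover behaviour shown for No and Only (do nothing) is implied by the quote rather than spelled out in it.

    # Restatement of the "Use cache" modes quoted in item 11 (No / Yes / Only / Prefer).

    def write_target(mode, cache_has_space):
        """Where a newly created file lands for a given share cache mode."""
        if mode == "no":
            return "array"
        if mode in ("yes", "prefer"):
            return "cache" if cache_has_space else "array"
        if mode == "only":
            return "cache" if cache_has_space else "FAIL: out of space"
        raise ValueError(mode)

    def mover_direction(mode):
        """What the mover does with this share's files when invoked."""
        return {"no": "nothing", "only": "nothing",
                "yes": "cache -> array", "prefer": "array -> cache"}[mode]

    for m in ("no", "yes", "only", "prefer"):
        print(m, "writes to", write_target(m, cache_has_space=False),
              "| mover:", mover_direction(m))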
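The headroom figure in item 16 checks out with simple arithmetic if the controller is assumed to sit on a PCIe 3.0 x8 link; that link width is an assumption made for illustration, not something stated in the post.

    # Rough sanity check of the numbers in item 16. The PCIe 3.0 x8 figure
    # (~0.985 GB/s usable per lane after 128b/130b encoding) is an assumed
    # controller link, not something stated in the original post.
    usable_per_lane_gbs = 0.985
    lanes = 8
    link_bandwidth = usable_per_lane_gbs * lanes      # ~7.88 GB/s

    measured_drive_total = 4.85                       # GB/s, from the post
    headroom = link_bandwidth - measured_drive_total  # ~3.0 GB/s, matching the post

    print(f"link ~{link_bandwidth:.2f} GB/s, headroom ~{headroom:.2f} GB/s")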