Leaderboard

Popular Content

Showing content with the highest reputation on 01/03/19 in Posts

  1. This array... is clean!
     [18421.678196] XFS (sdg1): Mounting V5 Filesystem
     [18421.702969] XFS (sdg1): Ending clean mount
     [18433.061212] mdcmd (236): set md_num_stripes 1280
     [18433.061224] mdcmd (237): set md_sync_window 384
     [18433.061232] mdcmd (238): set md_sync_thresh 192
     [18433.061239] mdcmd (239): set md_write_method
     [18433.061248] mdcmd (240): set spinup_group 0 0
     [18433.061257] mdcmd (241): set spinup_group 1 0
     [18433.061265] mdcmd (242): set spinup_group 2 0
     [18433.061274] mdcmd (243): set spinup_group 3 0
     [18433.061282] mdcmd (244): set spinup_group 4 0
     [18433.061290] mdcmd (245): set spinup_group 5 0
     [18433.061298] mdcmd (246): set spinup_group 6 0
     [18433.061306] mdcmd (247): set spinup_group 7 0
     [18433.061314] mdcmd (248): set spinup_group 8 0
     [18433.061322] mdcmd (249): set spinup_group 9 0
     [18433.061330] mdcmd (250): set spinup_group 10 0
     [18433.061338] mdcmd (251): set spinup_group 11 0
     [18433.061346] mdcmd (252): set spinup_group 12 0
     [18433.061355] mdcmd (253): set spinup_group 13 0
     [18433.061363] mdcmd (254): set spinup_group 14 0
     [18433.061371] mdcmd (255): set spinup_group 15 0
     [18433.061388] mdcmd (256): set spinup_group 29 0
     [18433.184487] mdcmd (257): start STOPPED
     [18433.184721] unraid: allocating 87420K for 1280 stripes (17 disks)
     [18433.206055] md1: running, size: 7814026532 blocks
     [18433.206317] md2: running, size: 3907018532 blocks
     [18433.206546] md3: running, size: 3907018532 blocks
     [18433.206787] md4: running, size: 3907018532 blocks
     [18433.207036] md5: running, size: 3907018532 blocks
     [18433.207294] md6: running, size: 3907018532 blocks
     [18433.207520] md7: running, size: 3907018532 blocks
     [18433.207743] md8: running, size: 3907018532 blocks
     [18433.207980] md9: running, size: 7814026532 blocks
     [18433.208195] md10: running, size: 11718885324 blocks
     [18433.208447] md11: running, size: 7814026532 blocks
     [18433.208663] md12: running, size: 2930266532 blocks
     [18433.208893] md13: running, size: 3907018532 blocks
     [18433.209121] md14: running, size: 3907018532 blocks
     [18433.209339] md15: running, size: 2930266532 blocks
     [18505.068952] XFS (md1): Mounting V5 Filesystem
     [18505.220978] XFS (md1): Ending clean mount
     [18505.241064] XFS (md2): Mounting V5 Filesystem
     [18505.420607] XFS (md2): Ending clean mount
     [18505.524083] XFS (md3): Mounting V5 Filesystem
     [18505.712850] XFS (md3): Ending clean mount
     [18505.807641] XFS (md4): Mounting V4 Filesystem
     [18505.990918] XFS (md4): Ending clean mount
     [18506.007166] XFS (md5): Mounting V5 Filesystem
     [18506.206230] XFS (md5): Ending clean mount
     [18506.276970] XFS (md6): Mounting V5 Filesystem
     [18506.462988] XFS (md6): Ending clean mount
     [18506.528073] XFS (md7): Mounting V4 Filesystem
     [18506.691736] XFS (md7): Ending clean mount
     [18506.735099] XFS (md8): Mounting V5 Filesystem
     [18507.017610] XFS (md8): Ending clean mount
     [18507.085893] XFS (md9): Mounting V5 Filesystem
     [18507.288553] XFS (md9): Ending clean mount
     [18507.393625] XFS (md10): Mounting V5 Filesystem
     [18507.577104] XFS (md10): Ending clean mount
     [18507.819136] XFS (md11): Mounting V5 Filesystem
     [18507.976554] XFS (md11): Ending clean mount
     [18508.106641] XFS (md12): Mounting V5 Filesystem
     [18508.341221] XFS (md12): Ending clean mount
     [18508.430243] XFS (md13): Mounting V5 Filesystem
     [18508.588536] XFS (md13): Ending clean mount
     [18508.660636] XFS (md14): Mounting V5 Filesystem
     [18508.805264] XFS (md14): Ending clean mount
     [18508.865881] XFS (md15): Mounting V5 Filesystem
     [18509.044894] XFS (md15): Ending clean mount
     [18509.134343] XFS (sdb1): Mounting V4 Filesystem
     [18509.288511] XFS (sdb1): Ending clean mount
     Quick final questions:
     1. How can I report this to someone at Unraid so they can look at upgrading the xfsprogs bundled with 6.7? (A console sketch for checking the bundled version appears after this list.)
     2. Since there was a kernel panic and an unclean shutdown, it wants to run a parity sync... there's a checkbox saying "Write corrections to parity"; I assume that means take the disks as gospel and update the parity to match them?
     Parity fix running
    1 point
  2. It's client side, in the browser. The server won't affect it. Different browsers or different computers will have different settings saved.
    1 point
  3. Diagnostics already includes SMART for all disks, syslog, and much more, so there's no need for the separate SMART report. The syslog is also showing some issues with the 1st cache disk. Just curious, why do you have 5 cache disks? SMART for disk3 looks OK. (A console sketch for spot-checking a single disk's SMART appears after this list.) Assuming you aren't getting any warnings for other array disks on the Dashboard page, you can check ALL connections and rebuild the disk to itself: https://wiki.unraid.net/index.php/Troubleshooting#What_do_I_do_if_I_get_a_red_X_next_to_a_hard_disk.3F Not sure what, if anything, should be done about cache other than checking connections, but you can deal with that after the disk3 rebuild. Probably a good idea to quit writing to anything until everything is square again.
    1 point
  4. I had the same problem - doing what another helpful person mentioned above fixes it: go to your Headphones appdata folder, edit config.ini, and change http_host to the value below. (A sed one-liner for this appears after this list.)
     http_host = 0.0.0.0
    1 point
  5. The appdata config path for kodi-headless was set to /mnt/user/MEDIA/, so when you did the app cleanup for that docker and it asked to delete /mnt/user/MEDIA/, you told it yes.
    1 point
  6. Yep, especially when this isn't a general tech forum. It's a forum for users of Unraid, a NAS with the ability to run dockers and virtual machines. When you posted here with no mention of running Unraid, it immediately sent up red flags. This specific section you posted in is not for general tech; you posted in the section for discussing Unraid-specific security issues. Posting in random tech forums about how you like surgeshank, or whatever service you are plugging, is not a good way to win friends. Normal first posts to this forum include things like how you are using Unraid, issues you may be having with the latest version of Unraid, questions about what hardware works well with Unraid, etc. Notice a theme here? This is an Unraid forum, for users of Unraid. Would you like to discuss Unraid instead of arguing about your poor choice of posting placement?
    1 point
  7. Your XML looks good and will pass through IOMMU group 14. Your append line looks fine too. Just as a side note, many people don't realize (myself included, until jonp told me a while ago) that you can just add extra parameters to the end of the line. They don't have to be between the append and the initrd, so the line can look like this, as you have it:
     append xen-pciback.hide=(02:00.0)(02:00.1) initrd=/bzroot
     or with the extra parameter at the end:
     append initrd=/bzroot xen-pciback.hide=(02:00.0)(02:00.1)
     They just have to be on the same line. I think it is less likely to make a typo if you have it at the end. (A sketch of a complete syslinux.cfg boot entry appears after this list.)
    1 point
  8. If they are in separate groups, you can pass one through and let Unraid use the others. If you use the method I mentioned, you use the PCI ID and not the vendor ID. This way you can hide just the part of the card you want to pass through. The only thing different from normal stubbing is that you have to manually add the device tag in the XML; you can't choose it in the other devices list. (A sketch of that device tag appears after this list.)
    1 point
  9. This docker is for people with little to no knowledge about nginx; it was not designed with manual configuration file editing in mind. Some static configuration files are inside the container itself (/etc/nginx), while generated files are stored under the app data folder. (A short console sketch for locating both sets of files appears after this list.) If you want to migrate from the LE docker, you should not try to replicate your config files; instead, use the UI to re-create the same functionality (again, this container doesn't support subfolders yet).
    1 point
  10. I am sorry, but I am going to have to skip this update. I just can't... Any ETA on 6.6.7?
    1 point
  11. So what I did was, in a console:
     cp -a /mnt/user/appdata/. /mnt/disks/SSD/appdata/
     cp -a /mnt/user/system/. /mnt/disks/SSD/system/
     cp -a /mnt/user/domains/. /mnt/disks/SSD/domains/
     Then I changed the settings under VM Manager (libvirt and vdisks) to the new locations, and did the same for Docker (docker.img and appdata). Lastly, I changed the settings for each VM I had already created. I hope this will do it. (A sketch for verifying the copies appears after this list.)
    1 point
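
Sketches for the items above

For item 1: a minimal console sketch for finding out which xfsprogs version Unraid bundles before asking for an upgrade. It assumes only a stock Unraid install, where the XFS tools are already on the PATH.

    # Both tools report the bundled xfsprogs package version.
    xfs_repair -V
    mkfs.xfs -V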
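
For item 3: if you want to spot-check a single drive's SMART from the console anyway (the full diagnostics zip already contains this), a minimal sketch. The device name /dev/sdc is a placeholder for whichever device disk3 maps to on your system.

    # Full SMART identity, health, and attribute report for one disk (placeholder device name).
    smartctl -a /dev/sdc
    # Quick overall pass/fail check only.
    smartctl -H /dev/sdc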
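
For item 4: the same edit can be made from the console with sed. This sketch assumes the Headphones appdata folder is at /mnt/user/appdata/headphones, so adjust the path to match your install.

    # Make Headphones listen on all interfaces (path is an assumption - point it at your appdata folder).
    sed -i 's/^http_host *=.*/http_host = 0.0.0.0/' /mnt/user/appdata/headphones/config.ini
    # Confirm the change took.
    grep '^http_host' /mnt/user/appdata/headphones/config.ini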
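
For item 7: a sketch of how the whole boot entry in /boot/syslinux/syslinux.cfg can look with the hide parameter moved to the end of the append line. The 02:00.0/02:00.1 addresses are taken from the post; your entry may differ.

    # Default Unraid boot entry with the extra parameter appended at the end of the line.
    label Unraid OS
      menu default
      kernel /bzimage
      append initrd=/bzroot xen-pciback.hide=(02:00.0)(02:00.1)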
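
For item 8: a sketch of the manually added device tag, placed inside the <devices> section of the VM's XML. The address values reuse 02:00.1 from item 7 purely as an example; they must match your own lspci output.

    <!-- Pass through a single PCI function by address (example values only). -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
    </hostdev>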
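
For item 9: a short console sketch for locating both sets of files the post describes. The container name and appdata folder are placeholders, so substitute whatever your container is actually called.

    # Placeholder name - set this to your actual container.
    CONTAINER=nginx-proxy-container
    # Static configuration shipped inside the container itself.
    docker exec "$CONTAINER" ls /etc/nginx
    # Generated files kept under the app data folder on the host.
    ls "/mnt/user/appdata/$CONTAINER"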
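
For item 11: a sketch of one way to double-check that each copy is complete before repointing Docker and the VMs. The paths are taken from the post; rsync in dry-run checksum mode only reports differences and changes nothing.

    # Stop the Docker and VM services first so files aren't changing during the compare.
    # -a archive, -n dry run, -c checksum compare, -v list anything that differs or is missing.
    rsync -ancv --delete /mnt/user/appdata/ /mnt/disks/SSD/appdata/
    rsync -ancv --delete /mnt/user/system/ /mnt/disks/SSD/system/
    rsync -ancv --delete /mnt/user/domains/ /mnt/disks/SSD/domains/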