Leaderboard

Popular Content

Showing content with the highest reputation on 07/15/19 in all areas

  1. What hostility? I haven't seen any from anyone but yourself. We are hard at work trying to reproduce and resolve this issue, but you seem to think that because we haven't yet, we're sitting here just twiddling our thumbs. We are not. We have multiple test servers constantly running Plex and injecting new data into it to try and force corruption. It hasn't happened to us once. That leads us to believe that this may be specific to individual setups/hardware, but we haven't figured out why just yet. You have a completely valid method to get back to a working state: roll back to the 6.6.7 release. Otherwise we are continuing to do testing and will provide more for folks to try in this bug report thread as we have ideas to narrow this down. Clearly this issue isn't as widespread as some may think, otherwise I think we'd have an outpouring of users and this thread would be a lot longer than 4 pages at this point. That said, it is a VERY valid concern that we are very focused on resolving, but sometimes things take longer to fix.
    3 points
  2. I just purchased two M.2 drives hoping to use them as a cache pool. The UEFI menu shows both drives, but the unRAID web GUI will only list one of them. A little about my setup:
Motherboard: ASRock X370 Taichi
CPU: Ryzen 5 2600
M.2 drives: 2x XPG SX6000 Lite M.2 2280 512GB PCI-Express 3.0 x4 3D NAND
HBAs: 2x LSI 9207-8i
GPU: ATI FireMV 2250
I tested each drive separately and each is detected in the web GUI in either M.2 slot when installed one at a time. However, unRAID doesn't show both of them when they're both installed, even though the UEFI sees both. After swapping drives and slots, I removed the GPU thinking it might be some odd PCIe lane allocation issue, but the problem persisted. I also updated from 6.7.0 to 6.7.2. Diagnostics are attached. Looking through syslog.txt, it appears to be an issue with both drives using the same NVMe Qualified Name (NQN). I'm researching this at the moment, but hope someone with more experience can weigh in with a possible solution. Any ideas? elysium-diagnostics-20190713-0247.zip
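A quick way to confirm the duplicate-NQN suspicion from the console is sketched below (device names are examples, and nvme-cli may need to be installed separately on unRAID):
# Kernel log entries mentioning the subsystem NQN conflict
grep -i subnqn /var/log/syslog
# The NQN each successfully probed controller is reporting
cat /sys/class/nvme/nvme*/subsysnqn
# Same information via nvme-cli, if available
nvme id-ctrl /dev/nvme0 | grep -i subnqn
If both drives report the same NQN, only one controller will be kept by the kernel, which matches what the web GUI is showing.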
    1 point
  3. Hi @Max, Are you talking about the command from this post? If so, you need to either SSH into unraid (same IP address as the webUI) or use the >_ terminal icon in the webGUI to bring up the command line interface. From there you can paste in the command, but you need to edit it for your own server. First, find your Nvidia GPU UUID in the Nvidia plugin under Settings in the Unraid UI. Lastly, you need to change the directory to wherever the videos you want to convert are stored, such as /mnt/user/myvideos/:input:rw.
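Since the referenced command isn't quoted here, the following is only a hypothetical illustration of its general shape (image name, UUID and paths are placeholders, and it assumes the Nvidia runtime exposed by the plugin):
# Replace the UUID with the one shown by the Nvidia plugin and the
# host path with the share that holds your videos.
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES='GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
  -v /mnt/user/myvideos/:/input:rw \
  some/transcoding-image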
    1 point
  4. Sounds like a classic case of browser protections kicking in. Disable adblockers and popup blockers for the unraid IP.
    1 point
  5. Many thanks for providing this truly excellent resource, very much appreciated. I can't recommend this container enough to all unRAID users who aren't completely confident about their local network security. The first scan here found 58 vulnerabilities just on my unRAID host, one rated HIGH: an open SMB share which I'd accidentally put some files on that I shouldn't have. Anyway, this was really easy to set up, it just takes a fair while on first use. [EDIT] Update: Found another interesting vulnerability, mDNS UDP port 5353 on my PS4 games console, which is now blocked at the firewall! IMPORTANT NOTE: My best advice to other new users is to set the advanced option to restrict the CPU affinity on the container, otherwise it can hammer your system at 100% CPU usage for a short while during the initial install, at the plugin compilation stage. I only noticed when my system fans suddenly went into full spin.
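One way to apply that CPU restriction (an illustrative sketch rather than this container's documented setting; the core numbers are just an example) is Docker's affinity flag, added to the container's Extra Parameters field in the unRAID template:
--cpuset-cpus="2,3"
This limits the container to cores 2 and 3, so the plugin compilation and scans can't occupy every core.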
    1 point
  6. Post the system diagnostics zip file (obtained via Tools->Diagnostics) so we can see what is going on with your system.
    1 point
  7. If you're going to keep the same cache devices there's no need to move anything.
    1 point
  8. With a single parity drive, you can recover from a single data drive failure. If you have two simultaneous failures, you will lose data unless you have dual parity drives.
    1 point
  9. I have also configured the letsencrypt reverse proxy for the subdomain, nessus.subdomain.conf. Note 1: include /config/nginx/auth.conf points towards my Organizr setup; you might not want to use this.
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name nessus.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    include /config/nginx/auth-location.conf;

    location / {
        include /config/nginx/auth.conf;
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_nessus w.x.y.z; ## Change to IP of HOST
        proxy_pass https://$upstream_nessus:8834;
    }
}
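After dropping the new conf into the container's nginx config, you can test and reload nginx without restarting the whole container (a minimal sketch; "letsencrypt" is an assumed container name, yours may differ):
docker exec letsencrypt nginx -t
docker exec letsencrypt nginx -s reload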
    1 point
  10. This did the trick! Didn't notice the alert until now. thanks
    1 point
  11. Both NVMe controllers are detected on the PCI bus; the problem may be fixed by a kernel or NVMe firmware update.
01:00.0 Non-Volatile memory controller [0108]: Realtek Semiconductor Co., Ltd. Device [10ec:5762] (rev 01)
        Subsystem: Realtek Semiconductor Co., Ltd. Device [10ec:5762]
        Kernel driver in use: nvme
        Kernel modules: nvme
21:00.0 Non-Volatile memory controller [0108]: Realtek Semiconductor Co., Ltd. Device [10ec:5762] (rev 01)
        Subsystem: Realtek Semiconductor Co., Ltd. Device [10ec:5762]
        Kernel modules: nvme

Jul 12 21:26:04 Elysium kernel: nvme nvme1: ignoring ctrl due to duplicate subnqn (nqn.2018-05.com.example:nvme:nvm-subsystem-OUI00E04C).
Jul 12 21:26:04 Elysium kernel: nvme nvme1: Removing after probe failure status: -22

https://forums.lenovo.com/t5/ThinkPad-X-Series-Laptops/X1-Extreme-Intel-NVMe-Firmware-Upgrade-NQN-Duplicate-Issue/m-p/4415819#M99048

commit b9453f9bb66e864f8b7d7e112aea475bdd7a4e2b
Author: James Dingwall <[email protected]>
Date:   Tue Jan 8 10:20:51 2019 -0700

    nvme: introduce NVME_QUIRK_IGNORE_DEV_SUBNQN

    [ Upstream commit 6299358d198a0635da2dd3c4b3ec37789e811e44 ]

    If a device provides an NQN it is expected to be globally unique.
    Unfortunately some firmware revisions for Intel 760p/Pro 7600p devices
    did not satisfy this requirement. In these circumstances if a system
    has >1 affected device then only one device is enabled. If this quirk
    is enabled then the device supplied subnqn is ignored and we fallback
    to generating one as if the field was empty. In this case we also
    suppress the version check so we don't print a warning when the quirk
    is enabled.

    Reviewed-by: Keith Busch <[email protected]>
    Signed-off-by: James Dingwall <[email protected]>
    Signed-off-by: Christoph Hellwig <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
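To see whether a drive firmware update has actually landed, the current firmware revision can be read from the console (a minimal sketch; smartctl ships with unRAID, nvme-cli may need to be installed separately, and the device name is an example):
# Firmware revision as reported by SMART
smartctl -a /dev/nvme0 | grep -i firmware
# Or via nvme-cli (the "fr" field)
nvme id-ctrl /dev/nvme0 | grep -i '^fr '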
    1 point
  12. I have an unmountable BTRFS filesystem disk or pool, what can I do to recover my data? Unlike with most other filesystems, btrfs fsck (check --repair) should only be used as a last resort. While it's much better with the latest kernels/btrfs-tools, it can still make things worse. So before doing that, these are the steps you should try, in this order:
Note: if using encryption you need to adjust the path, e.g., instead of /dev/sdX1 it should be /dev/mapper/sdX1
1) Mount filesystem read-only (safe to use)
Create a temporary mount point, e.g.:
mkdir /temp
Now attempt to mount the filesystem read-only.
v6.9.2 and older use:
mount -o usebackuproot,ro /dev/sdX1 /temp
v6.10-rc1 and newer use:
mount -o rescue=all,ro /dev/sdX1 /temp
For a single device: replace X with the actual device, don't forget the 1 at the end, e.g., /dev/sdf1
For a pool: replace X with any of the devices from the pool to mount the whole pool (as long as there are no devices missing), don't forget the 1 at the end, e.g., /dev/sdf1. If the normal read-only recovery mount doesn't work, e.g., because there's a damaged or missing device, you should instead use the options below.
v6.9.2 and older use:
mount -o degraded,usebackuproot,ro /dev/sdX1 /temp
v6.10-rc1 and newer use:
mount -o degraded,rescue=all,ro /dev/sdX1 /temp
Replace X with any of the remaining pool devices to mount the whole pool, don't forget the 1 at the end, e.g., /dev/sdf1. If all devices are present and it doesn't mount with the first device you tried, use the other(s); the filesystem on one of them may be more damaged than on the other(s). Note that if there are more devices missing than the profile permits for redundancy it may still mount, but there will be some data missing, e.g., mounting a 4-device raid1 pool with 2 devices missing will result in missing data.
With v6.9.2 and older, these additional options might also help in certain cases (with or without usebackuproot and degraded); with v6.10-rc1 and newer, rescue=all already uses all these options and more.
mount -o ro,notreelog,nologreplay /dev/sdX1 /temp
If it mounts, copy all the data from /temp to another destination, like an array disk; you can use Midnight Commander (mc on the console/SSH) or your favorite tool. After all data is copied, format the device or pool and restore the data.
2) BTRFS restore (safe to use)
If mounting read-only fails, try btrfs restore; it will try to copy all data to another disk. You need to create the destination folder beforehand, e.g., create a folder named restore on disk2 and then:
btrfs restore -v /dev/sdX1 /mnt/disk2/restore
For a single device: replace X with the actual device, don't forget the 1 at the end, e.g., /dev/sdf1
For a pool: replace X with any of the devices from the pool to recover the whole pool, don't forget the 1 at the end, e.g., /dev/sdf1. If it doesn't work with the first device you tried, use the other(s).
If restoring from an unmountable array device use mdX, where X is the disk number, e.g., to restore disk3:
btrfs restore -v /dev/md3 /mnt/disk2/restore
If the restore aborts due to an error you can try adding -i to the command to skip errors, e.g.:
btrfs restore -vi /dev/sdX1 /mnt/disk2/restore
If it works, check that the restored data is OK, then format the original btrfs device or pool and restore the data.
3) BTRFS check --repair (dangerous to use)
If all else fails, ask for help on the btrfs mailing list or #btrfs on libera.chat. If you don't want to do that, as a last resort you can try check --repair:
If it's an array disk, first start the array in maintenance mode and use mdX, where X is the disk number, e.g., for disk5:
btrfs check --repair /dev/md5
For a cache device (or pool), stop the array and use sdX:
btrfs check --repair /dev/sdX1
Replace X with the actual device (use cache1 for a pool), don't forget the 1 at the end, e.g., /dev/sdf1
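As an illustration of the copy-out in step 1 (a minimal sketch; the destination disk and folder name are just examples), once the filesystem is mounted read-only at /temp you could do:
# Create a destination on an array disk and copy everything across
mkdir -p /mnt/disk2/btrfs_rescue
rsync -avh --progress /temp/ /mnt/disk2/btrfs_rescue/
Verify the copy before formatting the original device or pool.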
    1 point