daddygrant

Everything posted by daddygrant

  1. Thanks... I figured it out after re-reading some of your previous posts on the matter. Here is what I did; there is minor downtime each time the array is stopped to reassign a drive.
     1. Unassign old drive 1 from the pool
     2. Let the cache balance
     3. Assign the new drive to the pool
     4. Let the cache balance
     5. Unassign old drive 2 from the pool
     6. Let the cache balance
     7. Drink beer and enjoy my new larger single drive.
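     In case it helps anyone following the same route, this is roughly how I confirmed each balance had finished from the console before stopping the array for the next swap (assuming the pool is mounted at /mnt/cache; adjust for your setup):
        # list pool members and per-device usage
        btrfs filesystem show /mnt/cache
        btrfs device usage /mnt/cache
        # make sure no balance is still running before the next swap
        btrfs balance status /mnt/cache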
  2. I factory-reset the BIOS, cleared the CMOS, and now I'm back in business. Everything is now appearing in Unraid.
  3. @testdasi I appreciate your assistance. See the attached diagnostics. Card = Intel DC P3520 1.2TB level2-diagnostics-20200210-1211.zip
  4. Hey everyone, I am trying to add an Intel NVMe card to my Unraid setup but it's not showing up in the system. I know the card works, so I figure there is a configuration issue I need to work around. I have an NVMe M.2 drive running perfectly fine in the system as well. This is what I've already tried:
     1. Disabling/enabling all virtualization features
     2. Resetting the card
     3. Swapping the card (I have two)
     4. Moving the card to a different slot
     Any pointers on where to go next?
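     For reference, these are the sort of checks I've been running from the console to see whether Linux detects the card at all (nothing Unraid-specific here):
        # does the card show up on the PCIe bus at all?
        lspci -nn | grep -i -e nvme -e 'non-volatile'
        # does the kernel create a device node for it?
        ls -l /dev/nvme*
        # any driver or firmware complaints during boot?
        dmesg | grep -i nvme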
  5. Hey, I'd like to go from my two 480GB SATA SSD cache drives to a single Intel Enterprise 1.2TB NVMe cache drive. Currently both SATA drives are in a cache pool. I can have all three drives physically connected to facilitate the move. So I need a sanity check on the fastest method to get this done with minimal downtime (I do have backups just in case). Here is what I'm thinking.
     Method 1
     1. Stop array
     2. Set cache slots from 2 to 3
     3. Assign the NVMe to slot 3
     4. Start array
     5. Run a balance (anything special needed?)
     6. Stop array
     7. Unassign the two SATA drives from the cache
     8. Start array
     Method 2 (a friend's suggestion, relying on btrfs magic)
     1. Stop array
     2. Remove both SATA drives from the cache assignment
     3. Assign the NVMe to slot 1
     4. Start array
     Thoughts?
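     As I understand it, Method 2 leans on btrfs being able to add and remove devices from a live pool. The command-line equivalent would be roughly the sketch below; the device names are just examples from my box, and I'd still let Unraid's GUI drive the process rather than doing it by hand.
        # add the NVMe device to the existing cache pool
        btrfs device add /dev/nvme0n1p1 /mnt/cache
        # drop the redundant raid1 profiles so the pool can shrink below two devices
        # (-f is needed because reducing metadata redundancy requires force)
        btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache
        # remove the old SATA SSDs; btrfs migrates their data to the NVMe as it goes
        btrfs device delete /dev/sdb1 /mnt/cache
        btrfs device delete /dev/sdc1 /mnt/cache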
  6. I love how easy it is to add functionality to unRaid through the strong community applications. This platform also lets me reuse all my old drives more efficiently than solutions from QNAP and Synology. In 2020, I would like to see VM backup and restore functionality added via the GUI. Cheers and thank you.
  7. I found the problem. Oddly enough, the local endpoint information went blank. I re-entered the information and now I'm rocking with the LAN access client profile. The client profile for server-only access is still not showing traffic.
  8. Interesting thing. I got it working last night from the phone without issue. Easy as pie. But this morning I added a few more clients and now none can connect, including the phone that worked fine last night. The clients say they are connected, but that isn't reflected on the server and traffic is not passing. Firewall ports and DDNS are good. Any thoughts?
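     If anyone wants to compare notes, this is roughly what I've been checking on the Unraid side with the standard WireGuard tooling; the interface name and port are just the defaults, so adjust for your tunnel.
        # list peers, last handshake time, and transfer counters for the tunnel
        wg show wg0
        # confirm the listen port is actually open on the server
        ss -ulpn | grep 51820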
  9. I turned off the QNAP OpenVPN service and everything is working now. Thank you.
  10. OK. Thank you for confirming my suspicions. I'll disable it and use an openvpn docker for the same duties.
  11. That is correct. The built-in one from QNAP.
  12. I'm experiencing something very similar with my QNAP. I moved from Unraid and it was working, then it was not. I looked at /etc/daemon_mgr.conf but only found
     DAEMON53 = openvpn, start, QNAP_QPKG=QVPN /usr/sbin/openvpn --config /etc/openvpn/server.conf --daemon ovpn-server
     and not the "stop" entry others have removed to solve the problem. The logs look good on the container, but access is a no-go. I'm receiving the error:
     192.168.1.54 took too long to respond.
     Try: Checking the connection, Checking the proxy and the firewall
     ERR_CONNECTION_TIMED_OUT
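     For what it's worth, before touching daemon_mgr.conf I was checking which process actually owned the OpenVPN port, since two servers fighting over it would explain the timeout. This assumes the default port 1194; adjust if yours differs.
        # see whether QNAP's built-in openvpn or the container is bound to the port
        netstat -tulpn | grep 1194
        # check for the built-in server process directly
        ps aux | grep '[o]penvpn'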
  13. Updating the firmware fixed everything for me. Thanks for the tip.
  14. Thanks for the suggestion. I haven't tried that but I will.
  15. I updated but had to revert. My Windows VM with GPU passthrough (Quadro P2000) was unable to start; I received the error "internal error: Unknown PCI header type '127'". I also checked the IOMMU groups and even recreated the VM from scratch, but it was still a no-go. My system has only one video card, so I figured it was a reset bug... I could be wrong.
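     In case it helps anyone hitting the same error, header type '127' (0x7f) generally means the card is answering config-space reads with all 0xFF, which you can see directly; the bus address below is just an example from my box.
        # dump the GPU's PCI config space; rows of "ff" suggest the card never came back from reset
        lspci -s 01:00.0 -xxx | head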
  16. I'm having an issue where the unBALANCE web UI is not coming up even though the plugin is running. Can anyone point me in the right direction?
  17. Good news. I was able to do a btrfs repair and get my server back online. I needed to re-download the docker img file. One odd thing is that when I ran the scrub it detected an "uncorrectable error"... hmm
     scrub status for e5c6c962-9832-4b71-b271-74aadb623225
     scrub started at Tue Jun 26 20:24:35 2018, running for 00:05:41
     total bytes scrubbed: 520.07GiB with 1 errors
     error details: csum=1
     corrected errors: 0, uncorrectable errors: 1, unverified errors: 0
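     For reference, that output came from the standard scrub tooling run against the mounted pool (path assumes the cache is at /mnt/cache):
        # kick off a scrub in the background, then check on it later
        btrfs scrub start /mnt/cache
        btrfs scrub status /mnt/cache
        # per-device error counters accumulated since the last reset
        btrfs device stats /mnt/cache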
  18. I'm checking the status now and this is what I'm getting:
     checking extents
     incorrect offsets 9393 134227121
     bad block 188066988032
     ERROR: errors found in extent allocation tree or chunk allocation
     checking free space cache
     checking fs roots
     checking csums
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258214674432-258214723584 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258214969344-258214977536 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258215268352-258215272448 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258215464960-258215501824 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258215534592-258216222720 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258216493056-258216505344 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258216603648-258216951808 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258217000960-258217066496 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258217099264-258217955328 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258217988096-258218106880 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258218135552-258219134976 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258219167744-258219331584 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258219335680-258220089344 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258220318720-258220400640 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258220404736-258220904448 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258221137920-258221953024 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258222190592-258222215168 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258222465024-258222477312 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258222731264-258222776320 but there is no extent record
     incorrect offsets 9393 134227121
     Error looking up extent record -1
     csum exists for 258222809088-258222956544 but there is no extent record
     ERROR: errors found in csum tree
     Checking filesystem on /dev/nvme1n1p1
     UUID: e5c6c962-9832-4b71-b271-74aadb623225
     found 275468541952 bytes used, error(s) found
     total csum bytes: 0
     total tree bytes: 170672128
     total fs tree bytes: 0
     total extent tree bytes: 170295296
     btree space waste bytes: 25007386
     file data blocks allocated: 182976512
  19. My UnRaid cache went offline overnight. Rebooting or reseating the drive doesn't help. I'm getting "Unmountable file system". Can anyone add some insight?
     Jun 26 17:32:28 Level2 kernel: nvme0n1: p1
     Jun 26 17:32:28 Level2 kernel: BTRFS: device fsid e5c6c962-9832-4b71-b271-74aadb623225 devid 1 transid 93580 /dev/nvme0n1p1
     Jun 26 17:32:56 Level2 emhttpd: INTEL_SSDPEDMX012T7_CVPF7185000Y1P2JGN (nvme0n1) 512 2344225968
     Jun 26 17:33:40 Level2 emhttpd: INTEL_SSDPEDMX012T7_CVPF7185000Y1P2JGN (nvme0n1) 512 2344225968
     Jun 26 17:33:40 Level2 emhttpd: import 30 cache device: (nvme0n1) INTEL_SSDPEDMX012T7_CVPF7185000Y1P2JGN
     Jun 26 17:33:58 Level2 emhttpd: shcmd (139): mount -t btrfs -o noatime,nodiratime /dev/nvme0n1p1 /mnt/cache
     Jun 26 17:33:58 Level2 kernel: BTRFS info (device nvme0n1p1): disk space caching is enabled
     Jun 26 17:33:58 Level2 kernel: BTRFS info (device nvme0n1p1): has skinny extents
     Jun 26 17:33:58 Level2 kernel: BTRFS info (device nvme0n1p1): enabling ssd optimizations
     Jun 26 17:33:58 Level2 kernel: BTRFS info (device nvme0n1p1): the free space cache file (246982639616) is invalid, skip it
     Jun 26 17:33:58 Level2 kernel: BTRFS critical (device nvme0n1p1): corrupt leaf, slot offset bad: block=188066988032, root=1, slot=129
     Jun 26 17:33:58 Level2 kernel: BTRFS critical (device nvme0n1p1): corrupt leaf, slot offset bad: block=188066988032, root=1, slot=129
     Jun 26 17:33:58 Level2 kernel: BTRFS: error (device nvme0n1p1) in btrfs_run_delayed_refs:3089: errno=-5 IO failure
     Jun 26 17:33:58 Level2 kernel: BTRFS info (device nvme0n1p1): delayed_refs has NO entry
     Jun 26 17:33:58 Level2 kernel: BTRFS: error (device nvme0n1p1) in btrfs_replay_log:2476: errno=-5 IO failure (Failed to recover log tree)
     Jun 26 17:33:58 Level2 kernel: BTRFS error (device nvme0n1p1): cleaner transaction attach returned -30
     Jun 26 17:33:58 Level2 kernel: BTRFS info (device nvme0n1p1): space_info 1 has 4316372992 free, is not full
     Jun 26 17:33:58 Level2 kernel: BTRFS info (device nvme0n1p1): space_info total=744111472640, used=739794771968, pinned=0, reserved=262144, may_use=0, readonly=65536
     Jun 26 17:33:58 Level2 root: mount: /mnt/cache: can't read superblock on /dev/nvme0n1p1.
     Jun 26 17:33:58 Level2 kernel: BTRFS error (device nvme0n1p1): open_ctree failed
     Jun 26 17:37:28 Level2 emhttpd: shcmd (191): /usr/sbin/hdparm -y /dev/nvme0n1
     Jun 26 17:37:28 Level2 root: /dev/nvme0n1:
     Jun 26 17:37:31 Level2 emhttpd: shcmd (192): /usr/sbin/hdparm -S0 /dev/nvme0n1
     Jun 26 17:37:31 Level2 root: /dev/nvme0n1:
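     For anyone searching later: before attempting any repair, the read-only recovery options I looked at were roughly the ones below, in this order. The device name is taken from the log above; the restore destination is just an example path on the array, and none of this is a guaranteed fix.
        # try mounting read-only with an older tree root
        mount -o ro,usebackuproot /dev/nvme0n1p1 /mnt/cache
        # read-only consistency check, safe to run on the unmounted device
        btrfs check --readonly /dev/nvme0n1p1
        # last resort before --repair: copy whatever is recoverable off the device
        btrfs restore /dev/nvme0n1p1 /mnt/disk1/cache_rescue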
  20. It may be faster, but if you have a parity disk it could slow you down. I know Unraid warns that SSDs aren't supported in the array, but it should work.
  21. I got everything swapped over to the new SSD for cache. I made sure it was btrfs this time. Thank you, everyone.
  22. The classic method: stop all VMs/Dockers, set the shares to not use the cache, and run the mover to migrate the data to the array. Swap the cache drive, change the selected shares back to using the cache, re-enable the cache, and run the mover again to bring the data back. Finally, re-enable Dockers and VMs. For me it takes about two days, mostly because of Plex.
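     One sanity check worth doing before pulling the old drive: make sure the mover actually emptied the cache, since anything left under /mnt/cache won't survive the swap.
        # should return nothing except possibly empty share folders
        find /mnt/cache -type f | head
        # quick per-share view of what is still sitting on the cache
        du -sh /mnt/cache/* 2>/dev/null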
  23. Thank you. Unfortunately, I checked and my current cache disk is xfs. I have, however, formatted the new disk as btrfs for future migrations. I'm wondering if I can use MC to move the data and then swap the cache. Any other suggestions besides the legacy method?
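     If I do go the manual-copy route instead of the mover, I'd probably use rsync rather than MC so permissions and sparse files come across intact. The paths below are just an example, with the new drive mounted as an unassigned device.
        # copy everything from the old cache to the new drive, preserving metadata
        rsync -avhP --sparse /mnt/cache/ /mnt/disks/newcache/
        # run it a second time after stopping Docker/VMs to pick up anything that changed
        rsync -avhP --sparse --delete /mnt/cache/ /mnt/disks/newcache/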
  24. I currently have a 500GB cache drive that I want to replace with a 1.2TB drive. Both drives are in the server and the 1.2TB one is unassigned. In the past I used the mover to send data to the array and then back to the new cache drive, but that process took a long time. I saw a new method in the FAQ that may work for me since both drives are in the server. Does anyone have experience with this method, and do I need to stop the Dockers or VMs? Is it really that easy, with no data loss? Does the array continue to run during the process (shares/Dockers/VMs)? Stop - Select - Start... is that it? Mind blown if it is.