Pmarszal

Everything posted by Pmarszal

  1. Actually, I just realized the backup is at 6.11, so the update might not be the issue. After restoring from the backup, Unraid works, but as soon as I shut down, the drive stops working.
  2. My PC only has USB 3.0 ports. It worked fine for a year and a half with no issues; this started right after the update to the latest Unraid version.
  3. Hey guys! After upgrading to the latest version, my boot drive stopped working! It was a USB 3.0 drive and I figured that was the problem, so I ran out, got a USB 2.0 drive, and restored my flash backup to it. It booted up correctly with just the registration key missing (as expected); however, after I shut it down, the USB 2.0 drive is now also not being recognized after that first boot. Any ideas?
  4. Where is the mover logging setting? I cannot seem to find it.
  5. Hello, I wanted to loop back around to this issue: I am still having problems with the mover not moving files. I have set all my shares to "most-free" and the mover is still not moving files to the array. I am getting the following log entry when I manually click Move: "Jul 18 07:44:37 unraid emhttpd: shcmd (236075): /usr/local/sbin/mover &> /dev/null &" Any insights or suggestions on how to resolve or diagnose this problem would be greatly appreciated. Thanks! unraid-diagnostics-20230718-0747.zip
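     A minimal sketch for kicking off the mover from a terminal and watching what it does, assuming mover logging is enabled; the /usr/local/sbin/mover path is taken from the log entry above, the rest is standard Unraid tooling:

         # start the mover manually (same command emhttpd runs, per the log entry above)
         /usr/local/sbin/mover &

         # follow the system log to see what, if anything, it actually moves
         tail -f /var/log/syslog | grep -i mover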
  6. I encountered a problem with the cache_transfer drive filling up completely, and I observed a significant amount of read and write activity on Disk 7. Upon examining the array, I discovered that Disk 7 is reported as full at 14TB, yet it still shows 5.8TB of free space. Considering this is a 14TB drive, I'm wondering what steps I should take to resolve this issue. unraid-diagnostics-20230711-0743.zip
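     For reference, a minimal sketch for cross-checking what the filesystem itself reports for Disk 7 from a terminal; /mnt/disk7 is the standard Unraid per-disk mount point and is assumed here:

         # filesystem-level view of Disk 7's size, used, and free space
         df -h /mnt/disk7

         # rough breakdown of where the space on Disk 7 is going
         du -sh /mnt/disk7/* 2>/dev/null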
  7. Weird! Disk 7 is 14TB and shows as full, but the free space shows up as 5.8TB?
  8. Hey Jorge, I have all shares using all disks. I don't have any disk preferences set, and all shares use the "high-water" allocation method. Shouldn't the mover move on to the next non-full drive?
  9. Help needed! I've encountered a problem with my cache drive (cache_transfer - 2TB): it has unexpectedly filled up completely. Normally it operates smoothly, manages files efficiently, and the mover runs as expected. Recently, however, Disk 7 reached its maximum capacity (I think that is when the mover stopped running correctly), and while the mover is running, it seems to be reading and writing a massive amount of data on Disk 7. Is there anyone who has experience with this issue and can provide guidance on how to resolve it? Any assistance would be greatly appreciated! unraid-diagnostics-20230711-0743.zip
  10. After binding the GPU to vfio at boot, the GPU Statistics plugin no longer functions. Is there a solution to make this plugin work while the device is bound to vfio? I require the binding for GPU passthrough to one of my VMs. I appreciate any assistance.
  11. More diagnostic details:
      1. The crash appears to happen only when Docker is enabled.
      2. Fix Common Problems is reporting: "Share appdata set to use pool cache_appdata, but files / folders exist on the cache pool. Either adjust which pool this share should be using or manually move the files with Dynamix File Manager. More Information"
      At this point I am starting over fresh with Docker, so all I am looking to do is a fresh start of Docker. I've deleted the vDisk file and all appdata, but every time I enable Docker it starts, Fix Common Problems reports the above, and then eventually my array crashes (as explained in the post above). I am also getting the following error in the system log: May 16 16:34:32 unraid root: error: /webGui/include/InitCharts.php: wrong csrf_token
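      A minimal sketch for checking whether appdata files are still sitting on the old cache pool versus the cache_appdata pool, which is what the Fix Common Problems warning above is pointing at; the /mnt/cache and /mnt/cache_appdata paths follow Unraid's pool-name mount convention and are assumed here:

          # see what, if anything, remains under appdata on each pool
          ls -la /mnt/cache/appdata 2>/dev/null
          ls -la /mnt/cache_appdata/appdata 2>/dev/null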
  12. I got the VMs to return by replacing libvirt. Thanks! Another issue is that my server array freezes and I have to restart. The server becomes unresponsive with the Unraid "wait" logo; I can click through the tabs, but everything becomes unresponsive, including running a diagnostic. Attached is a log after restarting and starting the array. I appreciate all the help @JorgeB and @itimpi, as I wouldn't even have gotten this far without the community. unraid-diagnostics-20230516-1312.zip
  13. I have a copy of the libvirt image. How do I add the old libvirt? I unfortunately don't have the old appdata.
  14. I did take a flash backup from unraid.net prior to all this... if that helps, but I'm unsure what that would do.
  15. I believe I have worsened the situation, guys. Upon starting, I noticed that all VMs and Docker applications are nowhere to be found. When I explore the directory "/mnt/user/domains," I can locate the VMs. Similarly, when I browse through "/mnt/user/system/docker," I can see the docker.img file. However, when I access the appdata directory, only krusader is visible. I am convinced that I have completely messed up everything! Sigh... I deeply regret my actions. unraid-diagnostics-20230516-1042.zip
  16. Sorry! This might be a stupid question, but how do I do that once the drive is assigned? Obviously for the "cache" drive I just click format, but the other one is assigned and doesn't ask to be reformatted.
  17. The following two were in a pool together:
      Cache - Samsung_SSD_870_QVO_2TB_S6R4NJOR610134Y-2
      Cache_appdata - SAMSUNG_MZVL2512HCJQ-00B00_S675NUOTA07380
      The "Cache" drive needs to be formatted, and the "cache_appdata" drive has the data that was left behind prior to the new config. I am looking to keep them as separate pools: one as cache (2TB) and the other to hold appdata on cache_appdata.
  18. Sorry for the late reply; attached is the diagnostics file. unraid-diagnostics-20230515-1639.zip
  19. Hello everyone, I'm facing an issue with my Unraid server and I could use some help. Recently, I made a mistake by adding a cache drive to my pool that was only a fraction of the size of my other cache drives. Now I'm trying to remove this "smaller" cache drive from the pool, and here are the steps I took:
      First, I moved all the data off the cache drives by setting every share to YES and running the mover. After the mover completed, I disabled the use of cache drives. I also created a new configuration to ensure that the array drives were preserved. Next, I removed the "smaller" cache drive from the pool and created a new pool. However, the old cache pool required me to format it, which was not an issue since I had already moved everything to the array.
      The problem is that the "smaller" cache drive I removed still has some files on it, such as appdata and system files. Currently, the parity drive is in the process of rebuilding, and it will take a couple of days to complete. But I'm starting to wonder if I should have formatted the "smaller" drive before starting up the array, considering that there was still some data remaining on it. Did I make a mistake in the removal process, and how can I resolve this situation without creating further complications? I would greatly appreciate any assistance you can provide. Thank you!
  20. Jorge: Sorry, I'm a newb to my R720. Where in iDRAC do I input this command? I don't see any command window available for me to launch the following command: touch /boot/config/modprobe.d/tg3.conf
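      For context, a minimal sketch of where this command is typically entered: it is an Unraid shell command rather than an iDRAC setting, so the assumed entry point is the web terminal (the >_ icon in the Unraid GUI) or an SSH session to the server:

          # run from the Unraid terminal (web terminal or SSH), not iDRAC;
          # creates an empty tg3 modprobe config on the flash drive, then reboot for it to take effect
          touch /boot/config/modprobe.d/tg3.conf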
  21. What is my best option: disable virtualization or un-blacklist my NIC? Please help! I need virtualization ASAP... unraid-diagnostics-20220529-2101.zip
  22. Thanks guys! It took lots of learning, but I flashed to IT mode and everything is working like a charm!
  23. I'm getting the following error when running OpenVPN-Client: Error: ipv4: FIB table does not exist. I'm also getting the following warnings:
      2022-01-19 19:40:43 WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1582', remote='link-mtu 1569'
      2022-01-19 19:40:43 WARNING: 'tun-mtu' is used inconsistently, local='tun-mtu 1532', remote='tun-mtu 1500'
      2022-01-19 19:40:43 WARNING: 'comp-lzo' is present in local config but missing in remote config, local='comp-lzo'
      2022-01-19 19:40:43 WARNING: 'auth' is used inconsistently, local='auth [null-digest]', remote='auth SHA256'
      2022-01-19 19:40:43 WARNING: 'keysize' is used inconsistently, local='keysize 256', remote='keysize 128'
      2022-01-19 19:40:44 WARNING: You have specified redirect-gateway and redirect-private at the same time (or the same option multiple times). This is not well supported and may lead to unexpected results
      2022-01-19 19:40:44 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
      How can I resolve these?
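      A minimal sketch of client-config directives that would address the warnings above, assuming the remote side's settings are the ones reported in the log (auth SHA256, tun-mtu 1500, no compression) and that the OpenVPN client config file used by the container can be edited:

          tun-mtu 1500     # match the remote's tun-mtu; link-mtu then lines up as well
          auth SHA256      # match the remote's HMAC digest instead of [null-digest]
          auth-nocache     # avoid caching credentials in memory, per the last warning
          # remove or comment out 'comp-lzo'; the remote is not using compression
          # keep only one of redirect-gateway / redirect-private, not both
          # 'keysize' is deprecated; matching the server's cipher normally clears that warning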