merlyn

Members
  • Posts: 38
  • Joined
  • Last visited

  1. Thanks JorgeB. I am currently copying data off the working VMs on the cache to the array, and it will probably take a while; about 4 TB has been copied off already. Any suggestion for how to fix the Linux Mint VM that will no longer boot? I of course have data on it that I need to get off, but since it isn't booting I don't know how to retrieve it, and I don't want to run the wrong repair tool and damage it further. Thanks for any advice you can give.
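     For what it's worth, one way to pull files off a VM that won't boot is to attach its vdisk read-only on the host and copy the data out. A minimal sketch, assuming a raw vdisk at a hypothetical path; check the partition layout before mounting:

        # Attach the vdisk read-only via qemu-nbd (image path is an assumption)
        modprobe nbd max_part=8
        qemu-nbd --read-only -f raw --connect=/dev/nbd0 /mnt/cache/domains/Mint/vdisk1.img
        fdisk -l /dev/nbd0                    # confirm which partition holds the data
        mkdir -p /mnt/recover
        mount -o ro /dev/nbd0p1 /mnt/recover  # p1 assumed from the fdisk output
        mkdir -p /mnt/disk1/vm-rescue         # destination is just an example
        cp -a /mnt/recover/home /mnt/disk1/vm-rescue/
        umount /mnt/recover
        qemu-nbd --disconnect /dev/nbd0

     Since everything is attached and mounted read-only, this can't make the damaged filesystem any worse.
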
  2. So after the balance ran automatically for 15 hours following my reboot, it now shows "no balance found" for /mnt/cache. And of course the VM holding files I do not have backed up is still not working. Do I run a scrub, or a scrub with "repair corrupted blocks" checked? Or boot in maintenance mode and run a check? I have no idea what to do first.
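     The cautious order, as far as I understand it, is read-only first: a scrub with -r reports problems without writing anything, and btrfs check is read-only unless told otherwise. A sketch, assuming the pool is mounted at /mnt/cache:

        # Read-only scrub: verifies checksums, repairs nothing
        # (-B foreground, -d per-device stats, -r read-only)
        btrfs scrub start -Bdr /mnt/cache
        btrfs scrub status /mnt/cache
        # Only if errors show up: rerun without -r so RAID1 can repair from the good copy
        # btrfs scrub start -Bd /mnt/cache
        # In maintenance mode, btrfs check on the unmounted device is read-only by default
        # btrfs check /dev/sdX1    # device name is a placeholder
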
  3. I have 4 SSDs set up as my cache drive in a btrfs pool. I tried to set them up as RAID 10, but I never could get that to work properly, so I believe they are currently in RAID 1; I'm not sure how to even tell which, but it says RAID 1 in the btrfs pool. They have been working fine for months like this. A few days ago I noticed VMs stopped working with VNC; I simply could not get into them, and the VM logs will not even load. Other VMs work fine. I see errors coming up all over the place in the Unraid logs; Unraid is laggy, of course, and generally freezes while it pauses to read from the disks, and I am seeing sectors being relocated. Can someone help me reduce this pool to one drive and figure out which drive is failing? I rebooted this morning, and ever since, the btrfs pool has been running a balance, with about 30 percent left to go; some of my cache drives are showing close to a billion reads and writes since this morning. The silly thing is that all my VMs will fit on any one SSD. help merlyn tower-diagnostics-20201030-1800.zip
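     Two read-only commands answer both questions (which profile, which drive), and a third shrinks the pool once the bad drive is known. A sketch, assuming the pool is mounted at /mnt/cache:

        # The Data/Metadata lines name the profile (RAID1, RAID10, ...)
        btrfs filesystem df /mnt/cache
        # Per-device read/write/flush/corruption counters point at the failing drive
        btrfs device stats /mnt/cache
        # Once the failing drive is identified and the data fits on the rest,
        # remove it; btrfs migrates its data onto the remaining devices
        # btrfs device remove /dev/sdX1 /mnt/cache   # device name is a placeholder
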
  4. Same issue here: "User plex not found" in the logs. Plex launches, but it cannot connect to any of the data on my Unraid tower.
  5. The 9400-16i works fine in Unraid. I just installed 2 of them in my server, and they reduced my parity-check time by about 25 percent over my old 8-port card and the motherboard ports I was using. I used the firmware right out of the box without updating anything, and they work fine.
  6. My ISO path was just /mnt/. As soon as I selected my actual folder, the status changed to running and all is well; the VM tab is no longer blank. Thanks.
  7. Hmmm... I normally think LSI when I think controller card, but now you have me thinking. Mind coming back and posting how it went once the Adaptec ships in? Thanks in advance.
  8. So since I need two 16-port cards for my system, I guess I need to ask: what is THE best card on the market with 16 ports (all internal) currently? Cost is not an issue, so what should I grab for my 4U Unraid server? Is the answer that the 9400-16i is the best, but it's a random guess as to whether it will work or not? I kind of hate to spend a grand to "find out" if it works. What did you decide, michael123?
  9. Running off the cache disk with no user shares enabled... should I delete the cache, just format it, and make it an apps disk?
  10. Yes, all Dockers are installed to a cache disk (no user shares enabled, so it's pretty much an apps disk). I'm thinking I should go SSD since it is an old drive.
  11. No, but excellent question... I have multiple Roku 3s connecting, though not currently active. I will investigate. Thanks.
  12. "Docker was only introduced in v6, hence my question: how did you do the upgrade from v5 to v6?" This has been going on for years. I was on 5, then multiple 6 beta versions (with the Plex plugin), then the 6 RC with the Plex Docker and no plugin. So it all revolves around Plex as the problem, since it is the common denominator.
  13. "This has to be something specific to your hardware and not a bug. If this happened with every Unraid server when rebooting, I am sure there would be a lot more posts about it. What exactly do you have running for Dockers/VMs/plugins? From the sounds of it, you have a plugin or Docker that is holding a share open. I can reproduce this by SSHing into the server and doing something like cd /mnt/user/movies and leaving the terminal there; trying to reboot Unraid will just sit there retrying to unmount shares until I go back to the SSH terminal and get off /mnt/user. As soon as I type cd or cd /boot, it will stop the array without issue." I lost my last post in the move of topics. I deleted all VMs (nothing was loaded, just testing VMs), deleted all plugins (Community plugin), and deleted all Dockers (Serviio and Plex), and as scottc predicted, it will now allow me to stop the array. The Plex Docker is the first suspect that comes to mind since it is the one I have to rebuild all the time. I will update the thread as I go through adding things back one at a time.
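     A minimal sketch of that reproduction, plus a check that finds anything else pinning the shares (the share name is just an example):

        # Park a shell inside a user share...
        cd /mnt/user/movies
        # ...and stopping the array now loops on "retry unmounting user shares".
        # Moving the shell off the mount releases it:
        cd /boot
        # Any other process holding the shares shows up here
        # (-v verbose, -m list processes using the mount)
        fuser -vm /mnt/user
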
  14. No, I don't mind at all. I typed a response in the other thread but never posted it; I'll put it over here next. User shares were always off; I turned them on to test with all drives excluded (no change, still crashed). But long story short, it seems to be a Docker issue? merlyn
  15. Just for fun, I rebooted with PuTTY (no use trying the GUI, 100% failure) and tried the Unraid safe-mode "no plugins loaded" option on boot. Once I started the array, I waited a minute, then hit stop. Same thing: "retry unmounting user shares" over and over again until the GUI crashes hours later. Again, everything is set to manual and no Dockers are loaded. Rebooting through PuTTY is the only option I know (willing to try other suggestions upon request). It was doing this before I even loaded plugins or Dockers; I was hoping it would be fixed in the 6 final release, but I guess not. It did the same thing on the 6 betas (I tried many versions), and something similar on 5, but that was a long time ago. merlyn
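     For anyone in the same spot, a sketch of what to watch from the PuTTY session while the stop loops, assuming the hang really is an open handle somewhere on /mnt/user:

        # Follow the unmount retries live while the array tries to stop
        tail -f /var/log/syslog
        # In a second session, list anything with open files on the user shares
        lsof /mnt/user
        # If nothing is listed and the GUI is gone, reboot from the SSH session
        reboot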