merlyn

Members · 38 posts
Everything posted by merlyn

  1. Thanks JorgeB. I am currently copying data off the working VMs on the cache to the array, which will probably take a while; about 4 TB is copied off already. Any suggestions for how to fix the Linux Mint VM that will no longer boot? I of course have data on it that I need to get off, but since it won't boot I don't know how to retrieve it, and I don't want to run the wrong repair tool and damage it further. Thanks for any advice you can give.
  2. So after the balance that ran automatically when I rebooted finished (it ran for 15 hours), it now shows "no balance found" for /mnt/cache. And of course the VM holding the files I do not have backed up is still not working. Do I run a scrub, or a scrub with "repair corrupted blocks" checked? Or boot into maintenance mode and run a check? I have no idea what to do first.
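For context on the difference (a general sketch, not verified against this particular pool; the mount point is assumed and the device name is a placeholder): a read-only scrub and a read-only check only report damage, they never write, so they are safe first steps.

```shell
POOL=/mnt/cache        # assumed btrfs cache mount point
DEV=/dev/sdb1          # placeholder; list the real members with `btrfs filesystem show`

if command -v btrfs >/dev/null 2>&1 && grep -qs " $POOL " /proc/mounts; then
    # -B: run in foreground, -d: per-device stats, -r: read-only (report, don't repair)
    btrfs scrub start -Bdr "$POOL"
else
    echo "no btrfs pool mounted at $POOL; skipping scrub"
fi

# `btrfs check` needs the filesystem UNMOUNTED (maintenance mode);
# --readonly guarantees it only reports problems:
#   btrfs check --readonly "$DEV"
```

Running the scrub again without `-r` lets it repair bad blocks from the good RAID1 copy, which is the usual next step once the read-only pass shows what is actually wrong.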
  3. I have 4 SSD drives set up as my cache in a btrfs pool. I tried to set them up as RAID 10 but could never get that to work properly, so I believe they are currently RAID 1; I am not sure how to tell for certain, but it says RAID 1 in the btrfs pool settings. They have been working fine for months like this. A few days ago I noticed some VMs stopped working: I simply could not get into them with VNC, and their VM logs will not even load, although other VMs work fine. I see errors coming up all over the place in the unRAID logs, unRAID is laggy, and it generally freezes while it pauses to read from the disks. I am also seeing sectors being relocated. Can someone help me reduce this pool to one drive and figure out which drive is failing? I rebooted this morning, and ever since then the btrfs pool has been running a balance, with about 30 percent left to go; some of my cache drives are showing close to a billion reads and writes since this morning. The silly thing is that all my VMs would fit on any one SSD. Help! merlyn tower-diagnostics-20201030-1800.zip
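As a general note for this kind of problem (a sketch; the mount point is an assumption, not taken from the attached diagnostics): btrfs keeps per-device error counters that usually point straight at the failing pool member, and it can report which RAID profile is really in use.

```shell
POOL=/mnt/cache   # assumed mount point of the cache pool

if command -v btrfs >/dev/null 2>&1 && grep -qs " $POOL " /proc/mounts; then
    # Which profile is the pool actually using (RAID1 vs RAID10)?
    btrfs filesystem df "$POOL"
    # Cumulative read/write/flush/corruption/generation errors per member:
    btrfs device stats "$POOL"
else
    echo "no btrfs pool mounted at $POOL"
fi
```

A member with climbing read or corruption errors is the usual suspect; once the data is safe, shrinking the pool is done with `btrfs device remove <device> <mountpoint>`, one device at a time.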
  4. Same user: "plex" is not found in the logs. Plex launches, but it cannot connect to any of the data on my unRAID tower.
  5. The 9400-16i works fine in unRAID. I just installed two of them in my server, and they reduced my parity-check time by about 25 percent compared to my old 8-port card and the motherboard ports I was using. I used the firmware right out of the box without updating anything, and they work fine.
  6. My ISO path was just /mnt/. As soon as I selected my actual folder, the status changed to running and all is well; the VM tab is no longer blank. Thanks.
  7. Hmmm ... I normally think LSI when I think controller card, but now you have me thinking. Mind coming back and posting how it went once you get the Adaptec shipped in? Thanks in advance.
  8. So since I need two 16-port cards for my system, I guess I need to ask: what is THE best all-internal 16-port card on the market right now? Cost is not an issue; what should I grab for my 4U unRAID server? Is the answer that the 9400-16i is the best, but it's a random guess whether it will work or not? I kind of hate to spend a grand to "find out" whether it works. What did you decide, michael123?
  9. Running off the cache disk with no user shares enabled ... should I delete the cache, format it, and just make it an apps disk?
  10. Yes, all dockers are installed to a cache disk (with no user shares enabled, so it is pretty much an apps disk). I'm thinking I should go SSD, since it is an old drive.
  11. No, but excellent question ... I have multiple Roku 3s connecting, though not currently active. I will investigate; thanks.
  12. "Dockers were only introduced in v6, hence my question: how did you do the upgrade from v5 to v6?" This has been going on for years. I was on 5, then multiple 6 beta versions (with the Plex plugin), then 6 RC with the Plex docker and no plugin. So it all revolves around Plex as the problem, since it is the common denominator.
  13. "This has to be something specific to your hardware and not a bug. If this happened with every unRAID server when rebooting, I am sure there would be a LOT more posts about it. What exactly do you have running for dockers/VMs/plugins? From the sound of it, you have a plugin or docker that is holding a share open. I can reproduce this by SSHing into the server, doing something like cd /mnt/user/movies, leaving the terminal there, and trying to reboot unRAID; it will just sit there trying to unmount shares until I go back to the SSH terminal and get off /mnt/user. As soon as I type cd or cd /boot, it will stop the array without issue." I lost my last post in the move of topics. I deleted all VMs (nothing was loaded, just test VMs), deleted all plugins (Community plugin), and deleted all dockers (Serviio and Plex), and as scottc predicted it will now allow me to stop the array. The Plex docker is the first suspect that comes to mind, since it is the one I have to rebuild all the time. Will update the thread as I go through adding things back one at a time.
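The "something is holding a share open" theory can usually be confirmed from the console before rebooting; a sketch (the path is the standard unRAID user-share mount, assumed to be the one that refuses to unmount):

```shell
SHARE=/mnt/user   # the mount that refuses to unmount

if [ -d "$SHARE" ] && command -v lsof >/dev/null 2>&1; then
    # List every process with an open file or working directory under the share
    lsof +D "$SHARE" 2>/dev/null || echo "nothing holding $SHARE open"
else
    echo "$SHARE not present or lsof unavailable"
fi
```

`fuser -vm /mnt/user` gives a shorter answer to the same question; either one names the process (a docker, a plugin, or a forgotten SSH session) that the array-stop is waiting on.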
  14. No, I don't mind at all. I typed a response in the other thread but never posted it; I will put it over here next. User shares were always off; I turned them on to test with all drives excluded (no change, still crashed). Long story short, it seems to be a docker issue? merlyn
  15. Just for fun, I rebooted with PuTTY (no use trying the GUI, 100% failure) and tried the unRAID safe-mode (no plugins loaded) boot option. Once I started the array, I waited a minute, then hit stop. Same thing: "retry unmounting user shares" over and over again until the GUI crashes hours later. Again, everything is set to manual and no dockers are loaded. Rebooting through PuTTY is the only option I know (willing to try other suggestions on request). This was happening before I even loaded plugins or dockers; I was hoping it would be fixed in 6 final, but I guess not. It did the same thing in the 6 betas (I tried many versions) and something similar in 5, but that was a long time ago. merlyn
  16. "Go to Tools - Diagnostics and post the results." Attached the file as requested. Remember, the system is stable until I reboot; then stopping the array fails 100 percent of the time. When I downloaded this file, no dockers were loaded and the array was started but not in active use. merlyn tower-diagnostics-20150703-1238.zip
  17. LONG-time bug on my system, 100 percent repeatable. Whenever I shut down, the server crashes 100 percent of the time while "shutting down user shares". I have let it go 10-plus hours, and it will finally just crash the whole GUI. This occurs whenever I click to stop the array. I have been able to shut down with PuTTY by typing shutdown -r now (I am open to other options). The system then comes up with an unclean shutdown, and the parity check that starts always shows no errors (I have my system set so that I have to start the array manually). I have tried hitting the stop button right after starting the array, with no dockers loaded; same result, it crashes. 80 percent of the time I have to completely delete the Plex docker and reboot (not even deleting and reinstalling will work), even if I shut it down before stopping the array. So it goes without saying that I never reboot. My config: no user shares enabled (I tried enabling user shares and setting all disks to exclusion; no change), two dockers, Plex (limetech version) and Serviio, both set not to autostart, and the Community plugin. Hardware: the Tamsolutions server that so many of us bought back in the day, with an Intel 5130 CPU, 8 GB of RAM, an X7DBE-X motherboard, 40 TB of storage, and the Pro version of unRAID. Logs? Tell me how and I will provide them, since the GUI locks every time I hit stop on the array. (This has been going on for so many years that I forget which version of 5 it started in.)
  18. I am not sure if it is just me or anyone with user shares turned off. I know I cannot stop the array at any time without the GUI crashing; it will say "unmounting user shares ... retry, retry," etc., until it crashes. The tower is still working, though, since I can SSH in and reboot the server. This started happening when I went to the 6 beta; I never had a problem before this on 5. I have one docker running, Plex, with needo as the repository. The dirty shutdown that resulted destroyed the Plex docker to the point that I cannot use it at all anymore, even after a reinstall; see my post on that topic at the top of page 50: http://lime-technology.com/forum/index.php?topic=33822.735. Can anyone with user shares turned off verify whether they can stop their array? On my machine it causes a GUI crash 100 percent of the time. merlyn
  19. I had Plex working for 2 days. I went to reboot the server, so I hit stop on the array, and the GUI stopped responding after that. Whoops ... I gave it an hour with no response, so I SSH'd in and rebooted the server. My error: I should have shut down the docker apps before hitting stop (I guess). Ever since that dirty shutdown, Plex seems to install, but I cannot get the UI to come up at all. The repository is needo/plex. Things I have tried:
      - deleted the container and rebuilt: UI will not connect
      - deleted the image and the container and rebuilt: UI will not connect
      - deleted the entire disk image, created a new image, and rebuilt Plex: UI will not connect
      Checking the Plex logs, I see pages and pages of this error: WARNING COULDN'T CREATE /config/Library/Application Support, MAKE SURE I HAVE PERMISSON TO DO THAT! mkdir: cannot create directory '/config/Library': Permission denied. Can I assume I have some permissions issues? Can anyone help? I am just learning docker, so I really don't know what to do next. Thanks in advance ... merlyn
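A "Permission denied" on /config after an unclean shutdown usually means the host directory mapped to /config is owned by the wrong user. A hedged sketch of the typical fix (the appdata path is an assumption, not taken from this container's settings; unRAID containers conventionally run as the `nobody:users` account, uid 99 / gid 100):

```shell
APPDATA=/mnt/cache/appdata/plex   # assumed host path mapped to the container's /config

if [ -d "$APPDATA" ]; then
    ls -ld "$APPDATA"          # show current ownership before changing anything
    chown -R 99:100 "$APPDATA" # hand the tree back to nobody:users (uid 99, gid 100)
else
    echo "appdata path $APPDATA not found; adjust to your container's /config mapping"
fi
```

After the chown, the container should be restarted so Plex retries creating /config/Library with the corrected ownership.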
  20. Thank you all for your posts. I have never incorrectly assigned the drives before, so I naturally assumed that might have been the cause of the problem. I did swap 2 data disks; luckily the parity was properly assigned. I checked the export settings, and it is set to yes. That all said, real life got in the way, so I just let my unRAID box sit. I kept checking on it every once in a while, but no change: disk 8 not available. Today, since I have some time, I checked it, and it is suddenly back (I checked about 30 minutes ago and it was still blank over SMB). It currently seems fine. So I guess I did nothing wrong, but there is still a random problem somewhere: a loose cable, a failing hard drive, etc., I don't know. I ran a SMART test on the drive and will post it here; it did pass. It also passed a preclear before I put it in, about a month ago. It's just odd that it never red-balled; it was just blank for a while. Bizarre. I will reboot a bunch of times, do some copying on and off it, and generally check it out. Thank you all. merlyn smart.txt
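For anyone following along, the SMART test mentioned above can be run from the console; a sketch (the device name is a placeholder, matched to the serial number shown on the Main tab):

```shell
DEV=/dev/sdh   # placeholder; identify the real disk by its serial number

if command -v smartctl >/dev/null 2>&1 && [ -b "$DEV" ]; then
    smartctl -t short "$DEV"   # queue a short self-test (completes in ~2 minutes)
    smartctl -a "$DEV"         # attributes, error log, and self-test results
else
    echo "smartctl not installed or $DEV is not a block device"
fi
```

The attributes worth watching for an intermittently vanishing disk are Reallocated_Sector_Ct, Pending_Sector, and UDMA_CRC_Error_Count; a rising CRC count points at the cable rather than the drive.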
  21. OK, in desperation I went back to unRAID 5.05, and I still cannot access the files on disk 8. I really need some help with this, since disk 8 is a loaded 6 TB drive and I really don't want to lose it. To review: I made a mistake when assigning drives in unRAID 6 and incorrectly assigned disk 8 to disk 9 and disk 9 to disk 8. I started the array; all disks were fine except 8 and 9, which had swapped configs, so I set them up correctly and started the array again, but I was still not able to access the files over SMB. I then reverted to a known-good config on unRAID 5.05 and still cannot access the files on disk 8 over SMB. The files appear to be accessible using the GUI file viewer (browse /mnt/disk8). It wants to do a parity check, but I keep canceling it, since I don't know whether a sync will bring my data back or destroy it. Something is hosed in my config, I have no idea how to correct it, and I currently cannot access that 6 TB of data. Thanks in advance for any help you can give me.
  22. OK, so I started the array, and all disks are now OK except disk 8: it shows up blank when I access it through Windows networking. When I access disk 8 using the GUI, I can see all the files, and they seem to be fine. Does that mean I am seeing my files through parity? If I run a parity sync, will it replace parity with the wiped drive, or will it allow the drive to show correctly over SMB? I really have no idea what to do at this point ... I am currently looking through the log ...
  23. OK, so I made a classic mistake and selected the wrong drives when configuring unRAID. When I upgraded from version 5 to version 6, my drives were not configured at all, so as I went through and matched each drive to its serial number, I swapped two by mistake. When I started the array, I could access all but disk 8; that's when I realized I had swapped the assignments of disk 8 and disk 9. I changed them so they are now correct, but both drives are listed as "wrong" in red. I thought I should ask what to do at this point before I screw something up. Should I just start the array with the two drives listed as wrong but actually correct? Help ... merlyn
  24. Get a multimeter and check the voltage output of the power supply; that would be my first guess, not the motherboard. It is probably putting out power, but not enough to power anything up. merlyn
  25. Yes, that was me who quoted 60-plus hours for a preclear. I did two simultaneously; the first disk took 63.43 hours and the second took 62.51 hours. I cannot help you on how fast parity would be with only 6 TB drives, since I am running multiple brands and sizes of drives in my 10-disk unRAID, but it takes approximately 1000 minutes to do a parity check on my system now. merlyn
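As a rough rule of thumb (an estimate, not a benchmark): a parity check has to read every sector of the largest disk, so the time is approximately disk size divided by average sustained read speed. A back-of-envelope sketch, assuming a 6 TB disk and a guessed ~120 MB/s average:

```shell
# Assumed numbers: 6 TB disk, ~120 MB/s average sustained read across the platter
DISK_BYTES=6000000000000
SPEED_BPS=120000000
EST_SECONDS=$(( DISK_BYTES / SPEED_BPS ))
echo "~$(( EST_SECONDS / 3600 )) hours (~$(( EST_SECONDS / 60 )) minutes)"
# prints: ~13 hours (~833 minutes)
```

That lands in the same ballpark as the ~1000 minutes quoted above for a mixed 10-disk array, since the slowest/largest disk dominates.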