RaidUnnewb

Members
  • Posts: 36
  • Joined
  • Last visited



  1. I would like the new pool to be 1 device. Snips showing each step below: current config, array started. Crazy, I don't think I missed this before, but I found it as I was stopping the array for the 20th time today lol. I formatted, and it's now apparently working. I knew it should be this easy; it has been this easy in the past. I don't know why the option wasn't there before, or why I missed it (more likely). I did format a few times while the drive was unassigned, I just hadn't been able to do it while it was in the pool and assigned. Transferring appdata and system back to the array from the other SSD using mover; took about 15 minutes vs the 18 hours (before I gave up) on the old SSD. It hung up transferring back just now, so I let mover run overnight. The server crashed again during the night's auto-restart but recovered, and a parity check was going again, I guess. Cancelled that and used MC to transfer over the system share that mover missed. Turned off array use for the docker shares, and all seems to be working except for 1 of my dockers. No biggie, I'll just restore from a backup. Sorry for the false alarm; I don't know why the option wasn't there or why I didn't see it.
  2. Hello, I have months/years of problems with my server that I am slowly trying to diagnose. My latest effort has been to replace an older SATA SSD with a new NVMe one for my dockers and eventually VMs (I don't have any VMs yet). I told mover to move everything off the SSD onto the array. Mover bugged out and stayed "working" forever, so I used MC and manually moved the data over. Something is up, though; the dockers are all funky and I may have lost everything. Not the end of the world. I am now in the process of removing the SSD from the pool and adding the new NVMe to it, but I keep hitting other problems. The pool is ZFS with compression and autotrim set to on. Every time I start my array I get: "Unmountable: Unsupported or no file system". I have cleared and formatted the drive, and cleared the pool. I even deleted the pool and made a new one with the same name. I added back the old drive plus the new one, and it auto-mirrored (it should have zero bytes on it, but w/e) and appeared to work. I then remove the old drive from the pool and it goes back to 'Unmountable'. Running "zpool import" gives: I don't really care about the data; I just want the drive to work so I can put the data back on it. I have many backups using 'ZFS Master'. The reason for running a ZFS cache drive is to do daily/weekly/monthly backups onto the single ZFS hard disk in my array. The only thing I can think of is that I still see the new NVMe drive under "Historical Devices", where it has been unmounted since installation a while ago (never used, never put data on, never mounted). Any ideas? I will eventually open another topic to discuss the myriad of troubleshooting steps taken over the past few months to try and get my server to stop randomly crashing every few days. I am hoping a bad drive running my dockers is the culprit. Diagnostics included. gigavault-diagnostics-20240403-1051.zip Thank you!
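In case it helps anyone searching later: when a drive keeps coming back "Unmountable" after being removed from and re-added to a pool, leftover ZFS labels from the old pool are a common cause. This is only a sketch that prints the commands rather than running them (so nothing destructive happens by accident); the device path is a placeholder, and on Unraid the labels may live on partition 1 (e.g. /dev/nvme0n1p1), so double-check with lsblk first:

```python
# Sketch: build (and print, don't run) the commands that scrub stale ZFS
# metadata from a drive before re-adding it to a pool. The device path
# below is a placeholder -- verify yours with `lsblk` before running
# anything destructive.

def wipe_commands(dev: str) -> list[str]:
    """Command sequence to clear old pool labels and FS signatures from dev."""
    return [
        f"zpool labelclear -f {dev}",  # remove leftover ZFS pool labels
        f"wipefs -a {dev}",            # erase remaining filesystem signatures
    ]

if __name__ == "__main__":
    for cmd in wipe_commands("/dev/nvme0n1"):  # placeholder device
        print(cmd)
```

Run the printed commands by hand only once you are sure the device node is the right one.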
  3. Copy that, I will make a separate post in a week with the results of the current safe mode experiment. Or do you want me to do it today, so I can copy my last post over and delete it from this thread?
  4. We'll see. It lasted a week or so in safe mode sometime 8 months ago before becoming unresponsive again and requiring a hard reset. Rebooted into safe mode just now; plugins are all gone, but docker was running? So I disabled docker. I used to know when my server stopped working because my internet would go out (pihole); I quickly got tired of that, so I bought a Raspberry Pi and now have that as my primary. So I have been unable to catch the server lately in its 'almost dead' phase, only when it's fully dead and I have to hard reset. In the past 3 weeks I installed most of those dockers, but this problem has been happening for a year. Prior to December I only had pihole/unbound/satisfactory (almost always off) and that's it; everything else is new, and Tdarr specifically seems to kill the server overnight instead of in a few days. Hard locked again last night after I told Tdarr to do some stuff. It's not set to use the CPU at all, just the GPU. Something is using my CPU and I don't know what. I set every docker manually in extra parameters to something similar to: "--memory=2g --cpu-shares=256" During this particular hard lock there is activity showing on this page; all of the CPU threads are dancing around a bit, but no buttons work and the rest of the server is dead. If I were to refresh the page, I wouldn't even get this page. The reboot/shutdown/log/terminal buttons don't work, of course. Pihole was working; the rest of the dockers were unresponsive. Yesterday it locked up and I plugged in a monitor (I can sometimes tell it to reboot via keyboard/monitor), and got this: Any help is appreciated, including mental
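One note on those extra parameters, for anyone copying them: --cpu-shares is only a relative weight that matters when CPUs are contended, so it will not stop a runaway container on an otherwise idle box; --cpus sets a hard ceiling, which is what you want when hunting down mystery CPU load. A small sketch (the values are just examples) that builds the string for Unraid's Extra Parameters field:

```python
# Sketch: assemble a Docker "Extra Parameters" string for Unraid's template
# UI. --memory hard-caps RAM; --cpus hard-caps CPU time (unlike
# --cpu-shares, which is only a relative weight under contention).

def extra_params(memory_gb: int, cpus: float) -> str:
    """Build the flags to paste into a container's Extra Parameters field."""
    return f"--memory={memory_gb}g --cpus={cpus}"

if __name__ == "__main__":
    print(extra_params(2, 1.5))  # example cap: 2 GB RAM, 1.5 CPU cores
```

With a hard --cpus cap per container, whichever one still pegs the CPU in `top` is the likely culprit.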
  5. IS THIS WHY MY SERVER SUCKS??!?!?!! I've had constant problems with Unraid since about day 1, meanwhile my buddy hasn't restarted his server in 6 months... Tonight I finally caught the server right before it went unresponsive. All my shares were gone. Tried disabling docker (can't restart it without my appdata share existing...), tried stopping and restarting the array, nope. Checked some settings: I never had NFS enabled, and I turned off disk shares to be sure (didn't have any). Only installed Tdarr a couple days ago, and I don't have any other Arr's. This is my second full system; I have been throwing money at this problem for a year now and I am still having trouble staying up long enough for a damn parity check, let alone being able to use the server to do actual work. Replaced the flash drive early on, replaced RAM early on. Ended up switching out the motherboard/RAM (again)/CPU, still happening; replaced the motherboard and PSU again 2 days ago, still happening.... I've tried Intel and AMD. I've tried DDR4 and DDR5. I've tried ASRock, Gigabyte and Asus motherboards. Rebooting always works, and I limped by with rebooting every week for a while. But I want to start using my server for server things, not just to empty out some drives from my computer. Windows is more stable than this Linux box..? I can't reboot every night when it takes 3 days to do a parity check... So far in a year I've probably had to hard reboot (kill power) over 30 times... Is this an Unraid problem or a Linux problem? The only thing I haven't done yet is turn off hardlinks, which everyone says not to do because of TRaSH Guides?
  6. Hello, are there any guides on how to get the Mattermost docker working for a beginner? I am reading that I need another docker or something to act as a database, but under the Mattermost docker the "Additional Requirements" field says: "None Listed". Without knowing what to change, my Mattermost docker is running, but when I try to use the WebGUI I get "This site can't be reached". I tried clicking on all the container settings and there aren't any help-tips. I am assuming I am missing whatever needs to be changed in the 'DATASOURCE' and 'APP_HOST' settings, since they have some username/password/IP stuff there, but...... My IT/computer background is as advanced as: watch a YouTube guide, download whatever docker, change the two settings the guide says to, and.... it works. There just doesn't exist a how-to for this thing yet, unless I missed it. Thanks! Edit a couple weeks later: I gave up. Running Nextcloud instead of Mattermost because there are guides on how to set up the database for it.
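For anyone who lands here from Google: Mattermost expects an external database (PostgreSQL), which is why the template's DATASOURCE field has username/password/IP pieces in it. Below is a sketch of the connection-string shape that setting takes; every value here is a placeholder and has to match a Postgres container you run separately:

```python
# Sketch: assemble a PostgreSQL connection string of the shape Mattermost's
# DATASOURCE setting expects. All values (user, password, host, db name)
# are placeholders -- they must match your separately-running Postgres
# container.

def postgres_dsn(user: str, password: str, host: str, db: str,
                 port: int = 5432) -> str:
    """Build a postgres:// URL for Mattermost's DATASOURCE field."""
    return (f"postgres://{user}:{password}@{host}:{port}/{db}"
            f"?sslmode=disable&connect_timeout=10")

if __name__ == "__main__":
    # Hypothetical values for illustration only:
    print(postgres_dsn("mmuser", "mmuser-password", "192.168.1.10", "mattermost"))
```

The "site can't be reached" symptom fits: without a reachable database the Mattermost server process exits, so nothing is listening on the WebGUI port.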
  7. Yes. Thank you, you summed up my dilemma perfectly lol. Though I am further stuck because I want to add my old 1070 GPU to the x16 slot for VM use, so having 4 slots working at x4 speeds is all I need. They just don't make em. But the key to all this, and I hope it helps other people googling, is that a PCIe 3.0 x8 (or whatever) device hooked to a slot wired at PCIe 4.0 x1 speed will only run at PCIe 3.0 x1 speed.
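For anyone who wants to put numbers on that rule: a link negotiates down to the lower generation and the narrower width of the two ends. A quick sketch (per-lane figures are the usual approximate throughputs after encoding overhead; the per-drive speed is just an assumption for illustration):

```python
# Sketch: estimate negotiated PCIe link bandwidth. A link runs at the
# lower generation and the narrower width of device vs slot.
# Approximate per-lane throughput in GB/s, after encoding overhead:
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969}

def link_bandwidth(dev_gen: int, dev_lanes: int,
                   slot_gen: int, slot_lanes: int) -> float:
    """Bandwidth in GB/s of the negotiated link between device and slot."""
    gen = min(dev_gen, slot_gen)       # falls back to the older generation
    lanes = min(dev_lanes, slot_lanes)  # falls back to the narrower width
    return PER_LANE_GBPS[gen] * lanes

if __name__ == "__main__":
    # PCIe 3.0 x8 HBA in a slot wired at 4.0 x1 -> negotiates 3.0 x1:
    print(round(link_bandwidth(3, 8, 4, 1), 3))   # ~0.985 GB/s
    # Assumed: 8 spinning drives at ~0.25 GB/s each want ~2 GB/s
    # during a parity check, so an x1 link is a hard bottleneck.
```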
  8. I have a 7950X and will be purchasing a motherboard. Or I'll use my current one and buy a 7800X3D, maybe, if I can figure out how to get all the PCIe slots I need working. I will need an AM5 mobo with 4 slots: 1 for the GPU, 1 for 10-gig networking, 2 for the HBAs. The case has room for 15 HDDs. Each HBA can take 6. Each HBA is a PCIe x8 card, so I need at least 4 full-size slots to put them into. All the mobos I come across have 3 x16-size slots running in PCIe 4.0 x16/x2/x2 modes or something, and one x1-size slot running at PCIe 3.0 x1. I can't find something with the slots I need. And the 7950X is an expensive chip to just not use.
  9. Sorry for the necro. I've looked through Google for a while now and can't actually find an answer. I have a couple of LSI 9207-8i's and am thinking about just rebuilding my entire system because of Unraid stability problems. I also want to run a GPU for VMs in the future, as well as 1-2 extra PCIe slots for networking/drives. A lot of the AM5 motherboards I am looking at have PCIe 4.0 x16 slots running at x1 speeds. The HBAs are PCIe 3.0. My question is: plugging a 3.0 device into a 4.0 x1 port, will I get 4.0 x1 performance or 3.0 x1 performance? Going from potentially bottlenecked to 'very much bottlenecked' during parity checks/rebuilds. Thanks!
  10. Yes, still no dockers or VMs. It's now been a month and those weird errors haven't appeared again. I deleted a lot of the plugins that could have caused that. Got a new one though...... The parity check is scheduled for the 24th day of the month. The last parity check was Jan 1st. It's the 25th today. The scheduler didn't start a parity check yesterday?
  11. Thanks, yeah... I got excited and did the YouTube search "Top X plugins for Unraid", and just followed one of the listings. Going to remove a bunch and wait and see if it helps. Got a new one today lol: luckily I can still hit the restart button, but the array and pools seem to have disappeared. Can't reach network drives in Windows. The Dashboard seems fine; it's just that everything in Main is gone and can't actually be used. The warning error mentions plugin/unassigned.devices, so I am going to go out on a limb and think that is the one causing all this. Dynamix was shown in my original errors... Hopefully it's the Sleep one that I never used or set up; I really use the other plugins... Hmm, I wonder if it's the Cache Directories (in memory) one, and my memory is throwing errors causing the problem?
  12. Thanks. Yeah, I was able to swap out plugins to get a backup created for the USB. Fresh USB, one of the highest rated from the YouTube Unraid USB showdown video, bought specifically to run the server just a few months ago. Here's my plugin page. I have everything basically at stock settings until I actually start using them. Mainly set up the backup application for the USB/appdata/etc, and currently using Preclear (damn 16TB drives take forever). I'm scratching my head going through the descriptions to find anything even tangentially related to weird webpage errors preventing usage of Unraid through the web portal. Edit: Also using Mover Tuning to run a check every hour to see if the SSD is over a 70% threshold. I'm not using any dockers or VMs yet; kinda wanna get this figured out first before relying on the server to start hosting Plex/game servers/etc. Next order of business while tracking down this problem: finish the parity check (2 more days) from the last forced shutdown, pull the parity drive for RMA (reallocated sectors going up) and replace it with the one undergoing preclear right now, then figure out how to get my UPS to send shutdown signals. Also, find out why the drive I literally just shucked today isn't being detected by Unraid whatsoever, even though the new 16TB one is (same 4-port SATA cable and power being used).
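For reference, the hourly check mentioned above reduces to comparing used space against the total for the mount. A minimal sketch of that comparison (the mount point in the usage comment is a placeholder):

```python
# Sketch: the fill-threshold comparison a scheduled mover check makes --
# is the cache pool more than threshold_pct percent full?

def over_threshold(used: int, total: int, threshold_pct: float = 70.0) -> bool:
    """True when used space exceeds threshold_pct percent of total."""
    return 100.0 * used / total > threshold_pct

# Usage on the server itself (mount point is a placeholder):
#   import shutil
#   u = shutil.disk_usage("/mnt/cache")
#   print(over_threshold(u.used, u.total))

print(over_threshold(71, 100))   # True: 71% full, past the 70% threshold
print(over_threshold(69, 100))   # False: still under the threshold
```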
  13. gigavault-diagnostics-20221228-1146.zip New diagnostics after today's change.
  14. I am removing CA Backup / Restore Appdata, and replacing with Kluth's to resolve a warning.
  15. Yes, it lasted approximately a month until a network issue happened, so I restarted it into normal (non-safe) Unraid. I believe I have fixed the network thing by manually assigning the IP in my modem.