Starlord

Members
  • Content Count

    28
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Starlord

  • Rank
    Member

Converted

  • Gender
    Male


  1. Oof, that's what I was afraid of. The out-of-tree driver has made using the Rocket 750 outside of unraid nearly impossible, and I was worried that might be the issue here. I emailed them about it ages ago with no response.. good to know you guys haven't had any luck either. I'll be avoiding RocketRaid products like the plague from now on. IMO that's unacceptable. Thankfully, the 2 LSI controllers I ordered (similar to the ones in the new Storinators) came in today. I'll be replacing them this afternoon and will report back with results.
  2. The second post has the only relevant bits I was able to spot in the syslog; I'm not seeing anything else out of the ordinary.
  3. Yeah, it's currently on 6.6.7; it was never updated to 6.7. It's been solid for a good 2 years.
  4. BIOS is up to date as of last night. I'll try re-seating the SAS cables into the Rocket, but the drives themselves are attached to a backplane and I've already tried moving slots. I'll run a RAM test too, and since this machine is rack-mounted in its own climate-controlled closet I don't think bumping/moving is an issue. Problem is, this is a production server that handles a site-to-site VPN, so I have to be selective about bringing it down. It might be a few hours before I report back.
  5. Oh, my bad, the system in question is different from the one in my sig. It's a 45-drive Storinator, dual Xeon, real fancy. It's a client's machine, not "mine". I need to edit the personal rig in my sig too; it's out of date. Will do so now.
  6. Don't tell me my Rocket 750's biting the dust...
  7. I'm pulling my hair out at this point. Diagnostics zip attached. So, a disk started throwing SMART errors. No big deal; I followed the wiki and swapped it. Everything seemed happy until about halfway through the parity rebuild, when suddenly my parity drive started reporting read errors.. so I followed the wiki on the parity swap procedure and this time added in 2 parity drives. Ran the rebuild again, considering the original failed drive an acceptable loss, and was prepared to move on. It "finished" but then dropped disk 1 and disk 5 from the array and said they had a bunch of read errors. So now I'm in panic mode. I brought the array offline and booted it in maintenance mode. I ran xfs_repair -v /dev/sdx on both disks; after several hours of just dots scrolling through the terminal, both disks reported that no secondary superblock could be found. So I started the array minus the 2 trouble disks and attempted to mount disk 1, partition 1 manually. I was able to browse files on the disk and started copying them to disk 2. It got about 3/4 of the way through before MC reported an input/output error, and now I can't manually browse disk 2:
```
root@TPC-Abraham:/mnt/disk2# ls
/bin/ls: cannot open directory '.': Input/output error
```
The webGUI reports 2018 read errors on the disk.. I'm running out of disks to move stuff to here.. currently working on a solution to back up what data I have left to the cloud. tpc-abraham-diagnostics-20191007-1318_1.zip
  8. I didn't read it in a way that seemed harsh to me, so no worries! Loading games from a network share has been "fine" for me, but some clients like Battle.net won't even let you install games on a mapped drive. I tried symlinking too, but the client still detected it was a network share and refused to install my games. If I used the SATA drives to store my game drive, I'd have to move my media library to the NVMe drives, which seems even more pointless to me lol. What I ended up doing is just re-balancing the RAID for the cache. This gives me my 3TB image for Windows games (outside of future parity) with 1TB left over for normal cache stuff. I think I'm happy with this for now. Will mark solved.
  9. Well, I'd like to have the option to mount the image in my Linux VM as Proton gets better and better. I work in IT and I don't currently have a girlfriend, so I do alright, but no, not that rich lol. I just happened to stumble on an insane deal on most of these drives. The 4x Intel NVMe SSDs and the 2 Adatas were grabbed at the same time from a local shop that closed, but I got them all on the last day for $190. I've had that Samsung for 2-3 years. My rationale for storing my game library on those NVMe drives is pretty simple: my games library hasn't grown in over a year and is only gonna grow by whatever Cyberpunk uses. So for the most part it's read-only other than updates etc., and I have a whole 3/4 of a terabyte left after I install EVERY game I own. I don't see myself filling that up in the next 5 years unless, like... Cyberpunk 2 comes out and is 750GB. I don't plan on picking up more NVMe drives other than 1 for Pop_OS, and it's not going to be used by unraid. SATA SSDs are cheap enough now that I can pick up 1-2 a pay period to grow the SATA array for my constantly expanding media library. My main OSes are already on NVMe drives, so why not benefit from shorter loading times in games due to the increased r/w over SATA SSDs?
  10. Hello! My problem isn't really a technical one so much as a planning one. I already have unraid doing everything I want/need; I just want to make sure I'm using my setup to its fullest. So I have a bit of a different setup. It's all-SSD: a mixture of 7 1TB PCIe NVMe SSDs of varying brands (Intel, Adata, and 1 Samsung) and 4 SATA SSDs (all SanDisk). All are 1TB. The end goal here is that I'd like to give unraid all of the SATA disks, because I plan on adding 4-12 more in the near future. I've got a nice fat Docker stack that'll run on it as well. The Samsung is already passed through and is the boot disk for my Windows VM. The next NVMe I find a good deal on is going to be dedicated to a Pop_OS VM, and my homelab VMs are just simple ones that can run on images on the SATA array. What I'd like to do is take my 4 Intel NVMe SSDs and store a 3-4TB image holding my entire game library, mounted in my Windows VM only. Problem is, when I assign them as cache drives it seems to stripe on 2 and mirror on 2, reducing the pool size to just 2TB. I know I can change the RAID level via the terminal with btrfs, but is that the best solution to achieve a single mountable disk image across 4 of my fastest drives? I don't really care about parity just yet. I only have 10GB total of data I can't afford to lose, and that's already backed up in half a dozen other places. But in the future I'd love to run a FOG server to make incremental image-level backups of all of my virtual machines; I need to grow my array first, though. Any suggestions? I've attached my hardware profile, and I can start fresh with an entire new config if need be. rasputin.xml
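For reference, the terminal-side RAID level change I mentioned would look roughly like this. This is a sketch under my assumptions (that /mnt/cache is where unraid mounts the pool, and that raid0 data with raid1 metadata is the goal); double-check profiles before and after, since raid0 means one dead device takes the whole pool's data with it.

```shell
# Show the current data/metadata profiles of the cache pool
btrfs filesystem df /mnt/cache

# Convert data to raid0 (stripes across all 4 devices, ~4TB usable,
# no redundancy) while keeping metadata mirrored as raid1 so the
# filesystem structures stay protected. Runs online; can take a while.
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache

# Confirm the new profiles and the usable space afterwards
btrfs filesystem usage /mnt/cache
```

Note this only restripes the existing pool; the data on it stays in place during the balance.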
  11. Disk 6 was initialized into the array but did not have user data on it. But you are correct. That's exactly how he did the disk swap.
  12. I took a look at his system. Here's the diagnostics zip. He re-assigned every drive with a new config when he installed the new parity drive. I pointed Docker at his old image, which is now located on what unraid previously saw as disk 1 but is now disk 2, and Docker straight up refused to start. Didn't have time to help him further today (I know OP IRL). tower-diagnostics-20190129-1923.zip Here's the log from when I tried to start Docker while it was pointed at the old image:
```
Jan 29 19:29:40 Tower emhttpd: shcmd (34312): /usr/local/sbin/mount_image '/mnt/disk2/system/docker/docker.img' /var/lib/docker 60
Jan 29 19:29:41 Tower kernel: BTRFS: device fsid 5932acc8-3d13-469c-ac36-2392ce4dfcc1 devid 1 transid 5090890 /dev/loop3
Jan 29 19:29:41 Tower kernel: BTRFS info (device loop3): disk space caching is enabled
Jan 29 19:29:41 Tower kernel: BTRFS info (device loop3): has skinny extents
Jan 29 19:29:41 Tower kernel: BTRFS error (device loop3): bad tree block start 0 21993553920
Jan 29 19:29:41 Tower kernel: BTRFS error (device loop3): bad tree block start 0 21993553920
Jan 29 19:29:41 Tower kernel: BTRFS warning (device loop3): failed to read root (objectid=4): -5
Jan 29 19:29:41 Tower kernel: BTRFS error (device loop3): open_ctree failed
Jan 29 19:29:41 Tower root: mount: /var/lib/docker: wrong fs type, bad option, bad superblock on /dev/loop3, missing codepage or helper program, or other error.
Jan 29 19:29:41 Tower root: mount error
Jan 29 19:29:41 Tower emhttpd: shcmd (34312): exit status: 1
```
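If anyone wants to poke at that image without emhttpd in the way, something like the following should work. This is just a sketch from me; the loop device name will differ on his box, and given the "bad tree block" errors above I'd expect check to report damage rather than fix anything.

```shell
# Attach the docker.img file to a loop device, read-only so nothing
# can make the btrfs inside it any worse
LOOP=$(losetup --find --show -r /mnt/disk2/system/docker/docker.img)

# Inspect the filesystem inside the image without modifying it
btrfs check --readonly "$LOOP"

# Detach the loop device when done
losetup -d "$LOOP"
```

If check confirms the image is toast, the usual unraid answer is to delete docker.img and let it be recreated, then re-add the containers from their templates.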
  13. I've used my Rocket 750 with unraid since 2016 and I've had zero issues with it. In fact, unraid is the only distro I'm able to get it working in at all.
  14. Forgot to update: with zenstates disabled, all issues still persist.
  15. So I was able to get this working, sending and receiving mail (static IP, PTR record set by my ISP, all ports forwarded and working), but I'm having issues getting it working with my nginx reverse proxy.. I keep getting a 502 error. Here's my proxy conf:
```
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name mail.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_mail mail;
        proxy_pass http://$upstream_mail:4433;
    }
}
```
and here's the container setup
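While I wait on replies, here's how I'd narrow the 502 down from inside Docker. Container names are assumptions on my part ("nginx" for the proxy container, "mail" matching the $upstream_mail hostname in the conf above); a common cause of a 502 with a setup like this is the upstream serving HTTPS on that port while proxy_pass uses http://.

```shell
# 1. Can the proxy container resolve the upstream hostname at all?
#    (the resolver 127.0.0.11 line relies on Docker's embedded DNS)
docker exec nginx ping -c 1 mail

# 2. Is port 4433 actually speaking plain HTTP?
docker exec nginx curl -sv -o /dev/null http://mail:4433/

# 3. Or is it HTTPS? If this succeeds where step 2 fails,
#    proxy_pass needs to be https://$upstream_mail:4433
docker exec nginx curl -skv -o /dev/null https://mail:4433/
```

Both containers also have to be on the same user-defined Docker network for the name "mail" to resolve from the nginx container.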