tbonedude420

Members
  • Posts

    98
  • Joined

  • Last visited

About tbonedude420

  • Birthday December 16

Converted

  • Gender
    Male
  • URL
    https://marue13.com
  • Location
    PA
  • Personal Text
    I R geek.

Recent Profile Visitors

1276 profile views

tbonedude420's Achievements

Apprentice

Apprentice (3/14)

8

Reputation

  1. As it stands right now, there's no pool UUID, the disk cannot be formatted, and I'm stuck. It's 8 a.m. MST and I need some shuteye. I think my next plan of action is to nuke the array and add both NVMes to the normal array so they format as XFS. Stop, remove, and try again? Shrug. I knew I should've changed his name from Loki... bwahahhaa. Thanks @itimpi and @JorgeB. I'll check in on this in a few hours after some much-needed sleep. Latest diagnostics are in the .zip above (0753). I put new RAM on my desk too, so hopefully no one has to wade through mcelog errors 🤣 Hope to hear back about any possible solutions. Again, there's nothing here, literally: just docker.img and libvirt.img, and maybe Dozzle is installed. I am 100% sure I screwed this up somehow; just hoping I can figure it out, never do it again, and maybe help someone else in the future. 😛 (A rough sketch of the signature wipe I have in mind is at the bottom of this post list.)
  2. Hiya @JorgeB, and thanks for the like on U.C.D. Removed NVMe #2 from vmdisks, started the array with the check mark, ran Mover, and made new diagnostics. I see no change; however, I think I need to go back to how it was. I think I should add both devices to the same pool, then remove one, then balance? How do I go about doing that? It's only 28GB, so it should take seconds on NVMe. I feel as if I've goofed something up and I'm not sure how to proceed lol. Should I just make a new HW config and nuke everything? Even after removing it, it shows under UD as a 'pool' device. loki-diagnostics-20230312-0737.zip EDIT: Re-added it to the 'cache' pool. It's spinning its wheels and thinking. Next step is to turn the array off, remove it from 'cache' and do nothing else, then start the array. The balance, I guess, should happen on its own? Then shut down the array again and move it to the vmdrive pool. Hopefully then start the array and format. (Appears it's done thinking, gonna proceed as I typed above.) Will stop, remove the drive but leave 2 slots, and start the array. EDIT2: Did as I said, and now it's really angry bwahaha. I checked format and it errors out saying no pool UUID. Making a new diag.zip now (0753.zip) loki-diagnostics-20230312-0753.zip
  3. Were they part of the same pool in the past? Yes, they were. I removed it after realizing 2x 250GB = 250GB and no real-world gain in performance, at least in my use case. I attempted to switch it from btrfs to XFS and it gave me the option to format, but that was unsuccessful. My guess is they are still linked somehow. Not sure how to proceed. Again, it's all expendable and there's no risk of anything getting lost. This is a server made from spare parts; I use it for a lot of testing. Can and will copy-pasta nuclear destructive lines of text into the command line 🙈 hahahah Edit: Stopped the array, added a second pool and that drive, started the array. Made a new diagnostics.zip after those changes loki-diagnostics-20230312-0714.zip
  4. Yikes, I was afraid of that 🤣 Currently I've been nulling out my syslog and mcelog due to having a slight ECC error. I could null both and then post, in theory? I've got new RAM for the server ready to go; I'm just not looking forward to unracking it. Lol, you know how it is, right?! Hopefully you can see past the ECC errors. bwahahaha loki-diagnostics-20230312-0625.zip
  5. Been a YUGE fan of Unraid for quite some time. Figured I should get involved, and well... I've got a current issue with one of my servers and needed to kill some time 😅 I am a sysadmin; my handle is T-Bone.

Codename: Tank
https://prnt.sc/H2Pw3yfb_psX
Theme: Solarized Dark
Unraid: ...5...8? or so?
CPU: 4x Xeon E7-8880 v2 @ 2.5GHz / 3.1GHz turbo, 15c/30t, 37.5MB cache (60c/120t total)
Motherboard: Supermicro X10QBi
RAM: 384GB DDR3 ECC 1600 MT/s (running at 1333)
Case: Supermicro 4U - CSE-848XA-R3240B
Drive Cage(s): 24-bay hotswap
Power Supply: 4x 1460W Platinum (using two)
Expansion Card(s): LSI SAS2008 (9211-8i)
Cables: SFF-8087
Fans: Like... 10? Will count one day.
Parity Drive: Where we're going, we don't need those.
Data Drives: 15 drives, 4 Seagate, 11 Toshiba (more info below)
Cache Drive: Samsung 970 Evo 500GB NVMe
Total Drive Capacity: 100TB
Drives: I started with Toshiba X300s and loved their performance. They were also one of the cheapest per GB when I started. Moved to Seagate. https://prnt.sc/Fj2_SnxJcl1h
Primary Use: NAS / media streaming / domain controller
Likes: Overpowered. Overbuilt. Overkill.
Dislikes: It's a commercial-grade server that requires commercial-grade hearing protection.
Add-Ons Used: Unassigned Devices, Nerdpack/tools, CA Auto Update, all the Dynamix plugins, Custom Tab, IPMI Support, Enhanced Log Viewer, File Activity, CA Unlimited Width, GPU Statistics, My Servers, Nvidia Driver, NVTOP, Rclone, Theme Engine, Tips and Tweaks, Unbalance, User Scripts
Future Plans: A good dusting.
Boot (peak): 1200W
Idle (avg): ???W (array off), 450W (array on)
Active (avg): 650-700W
Light use (avg): 500-550W

Codename: PhatMicro
https://prnt.sc/Tp5qmuyTu3CJ
Theme: Custom green/black with logos
Unraid: ...6? Ish
CPU: 4x Xeon E7-4890 v2 @ 2.8GHz / 3.4GHz turbo, 15c/30t, 37.5MB cache (60c/120t total)
Motherboard: Supermicro X10QBi
RAM: 768GB DDR3 ECC 1600 MT/s (running at 1333)
Case: Supermicro 4U - CSE-848XA-R3240B
Drive Cage(s): 24-bay hotswap
Power Supply: 4x 1460W Platinum (using two)
Expansion Card(s): 2x LSI SAS3008 (9401-8i, I think...)
Cables: SFF-8643, I think...
Fans: Like... 10? Will count one day.
Parity Drive: Where we're going, we don't need those.
Data Drives: 14 drives, all Seagate (more info below)
Cache Drive: 2x Samsung 980 Pro 1TB NVMe
Total Drive Capacity: 172TB
Drives: All Seagate. Started with IronWolf, moving to Exos. https://prnt.sc/GXLbjaMmRu3D
Primary Use: NAS / media streaming / domain controller / PON controller / remote support / small business
Likes: Overpowered. Overbuilt. Overkill.
Dislikes: It's a commercial-grade server that requires commercial-grade hearing protection.
Add-Ons Used: Unassigned Devices, Nerdpack/tools, CA Auto Update, all the Dynamix plugins, Custom Tab, IPMI Support, Enhanced Log Viewer, File Activity, CA Unlimited Width, GPU Statistics, My Servers, Nvidia Driver, NVTOP, Rclone, Theme Engine, Tips and Tweaks, Unbalance, User Scripts
Future Plans: A good dusting.
Boot (peak): 1200W
Idle (avg): ???W (array off), 600W (array on)
Active (avg): 750W
Light use (avg): 575-600W

Codename: Loki
https://prnt.sc/dV4u6vIFO8nE (ignore the 100% log... hah)
Theme: Nord Dark
Unraid: ...5...8? or so?
CPU: 2x Xeon E5-2670 v1 @ 2.6GHz / 3.3GHz turbo, 8c/16t, 20MB cache (16c/32t total)
Motherboard: ASRock EP2C602-4L/D16
RAM: 64GB DDR3 ECC 1600 MT/s (running at 1600)
Case: Rosewill 4U - RSV-L4000U
Drive Cage(s): 4-bay hotswap
Power Supply: EVGA SuperNOVA G3? 1000W
Expansion Card(s): LSI SAS2008 (9211-8i)
Cables: SFF-8087
Fans: 2x 120mm intake, 2x 92mm CPU, 2x 80mm rear
Parity Drive: Where we're going, we don't need those.
Data Drives: 13 in the array, 14 drives total (more info below)
Cache Drive: 1x Samsung 960 Evo 250GB NVMe, 1x Samsung 970 Evo 250GB NVMe
Total Drive Capacity: 6.7TB
Drives: A hodgepodge. All good and working, but small and useless. Leftovers. All brands, even a RAID0 in the array (the one with the * for temperature). https://prnt.sc/X28JfKWuKJvG
Primary Use: Testing platform / VPN / backup
Likes: DIY. Homebrew. Custom built.
Dislikes: His name is Loki and he causes me grief.
Add-Ons Used: Unassigned Devices, Nerdpack/tools, CA Auto Update, all the Dynamix plugins, Custom Tab, IPMI Support, Enhanced Log Viewer, File Activity, CA Unlimited Width, GPU Statistics, My Servers, Nvidia Driver, NVTOP, Rclone, Theme Engine, Tips and Tweaks, Unbalance, User Scripts
Future Plans: A GPU? Larger and/or working cache drives? Real drives? 😅
Boot (peak): 600W
Idle (avg): ???W (array off), 250W (array on)
Active (avg): 250W
Light use (avg): 175W, maybe

Codename: BKuhl
https://prnt.sc/m8SfpjmZ2Em8
Theme: Custom (old Windows inspired)
Unraid: 6.4, maybe?
CPU: Xeon E5-2680 v3 @ 2.5GHz / 3.3GHz turbo, 12c/24t, 30MB cache
Motherboard: Dell 0K240Y
RAM: 64GB DDR4 ECC 2666 MT/s (running at 2133)
Case: Dell Precision 5820
Drive Cage(s): 6x internal
Power Supply: 425W Gold rated, I think
Expansion Card(s): None, using onboard
Cables: N/A
Fans: 3x 120mm and 1x 92mm CPU, I think
Parity Drive: Where we're going, we don't need those.
Data Drives: 4 drives, all Seagate (more info below)
Cache Drive: SanDisk SD8SB8U 512GB SSD (SATA)
Total Drive Capacity: 36TB
Drives: All Seagate. 2x 10TB IronWolf, 1x 16TB Exos, 1x 3TB SkyHawk. https://prnt.sc/MA8eC7Ovf-f7
Primary Use: NAS / media streaming / NVR controller / home automation
Likes: Prebuilt (sorta)
Dislikes: Not really upgradeable
Add-Ons Used: Unassigned Devices, Nerdpack/tools, CA Auto Update, all the Dynamix plugins, Custom Tab, IPMI Support, Enhanced Log Viewer, File Activity, CA Unlimited Width, GPU Statistics, My Servers, Nvidia Driver, NVTOP, Rclone, Theme Engine, Tips and Tweaks, Unbalance, User Scripts
Future Plans: Rehouse it in a 3U or 4U?
Boot (peak): 250W?
Idle (avg): ???W (array off), 150W? (array on)
Active (avg): 200W?
Light use (avg): 100-150W?

Tank, PhatMicro, and Loki are all physically located in my home. Tank and Loki are mine; PhatMicro is my buddy's server / our company server. BKuhl is located a few miles across town, at a third friend's house.

Network Related
Ubiquiti stuff: EdgeRouter 4, EdgeRouter 4, EdgeSwitch 24 Lite, EdgeRouter 3 Lite, ERX-10, EdgeSwitch 24 (non-Lite), a few UniFi cams, and UniFi WiFi mesh APs. Not my area of expertise (yet), so I'm winging this section. We've got two racks, a lot of switches, IPsec site-to-site VPN, 3G, 4G, 5G, PON, WireGuard VPN, and GRE tunnels. There's also a Synology DS18-something Plus, and a Dell R270-something server, old lab stuff. Raspberry Pi 2 Model B+ 4GB, Raspberry Pi 4 Model B+ 8GB, several laptops, and more. Incoming is a 1Gb line split 5 ways (5 IPs) via a /28. Across town is a /25 and a /24. We can achieve 1Gbps throughput (even on VPN) to all sites. And some 3G/4G/5G and PON stuff on a /22. 10GbE between Tank and PhatMicro via a /30. It's also my job to cable-manage all the things... and well... it always starts out good, but then we always change something. Please, complain below 🤣

My talents include hardware, building, customizing, and administering, plus some scripting and more. Phat's talents are primarily networking, and trying to beat me at my own game. Together we are a force to be reckoned with. We have a business together, now live together, and have future plans of solar and battery backup for the server room. We are both tri-lingual when it comes to OSes, but primarily Windows for gaming. I've got fair-to-strong Linux skills, and Phat is learning that most networking stuff is Linux, so he's catching up fast. We're currently testing a bad-to-the-bone switch, as seen on the small rack, which houses two Xeons and some memory and can run VMs. Some Juniper stuff we've been testing. And more. Future plans of 10GbE from the ISP: they offer 2.5Gb and it's not that much more money, but we're waiting on a response about 10GbE. 10GbE would be ideal seeing we've already got gear, and it doesn't support 2.5Gb (sorta...). Tank and Loki are located on the short rack, PhatMicro and the networking gear on the larger rack. And a hidden Dell R270 or something. Also a hardware KVM. The room has more blinking lights than my Corsair K70 MK.2. 🤓😂

Note: Wattages are estimated, sorta. We've got IPMI, which gives us some information, and we just recently got two of these Emporia kits that give us much more granular information. https://www.emporiaenergy.com/how-the-vue-energy-monitor-works
Note2: Folks may wonder how the two main servers have so much RAM. Google "X10QBi" from Supermicro. Each has 8x memory daughter boards with 12 slots per board. The servers can theoretically hold something like 6TB of RAM.
Note3: My phone has an outer shell that makes night pictures suck. Sorry.
Note4: We don't use parity anymore. Sizes are too large and it takes too long, even with RAID0 and RAID1 for parity drives (don't do this... bad idea lol). Our current solution is a business Dropbox account plus encryption.
  6. My backup/testing server, Loki, is causing me some grief, as he always does. I've got two 250GB NVMe cache drives, one a Samsung 960, one a Samsung 970. GOAL: Have two separate fast drives for different things, i.e. one for docker/appdata use and one for my vmdisks. Reality: Docker, appdata, and vmdisks appear on both drives. I believe btrfs or something else is 'pooling' the devices together. Under 'Pool Devices' one is listed as 'cache' and one as 'vmdrive', but they don't appear independent of each other. I've even tried removing the pool, changing from btrfs to XFS, and redoing things. Currently it won't mount the second drive since it's XFS. Am I doing something wrong here? Did I misunderstand 'Add a second pool', or am I finally going nutty? Everyone loves pictures, so here's one https://prnt.sc/Hnrg_minXscy https://prnt.sc/2VhboljUQJzZ (Array offline) https://prnt.sc/RiexvgJLSOYh (Unassigned Devices) It shows up like it's part of a 'pool', and the data on it, when viewed, is identical to 'cache'. This server and everything on it are expendable; I'm willing to try anything. (There's a quick pool-membership check sketch at the bottom of this post list.) Edit: This is working as expected on my other server, 'PhatMicro' (can be seen in my signature): appdata/system/isos on the first SSD and only domains on the second SSD. GRRR. I don't have much hair left, but I'm willing to pull it all!
  7. Took a short clip if it helps... https://share.getcloudapp.com/bLuARyym I just happened to notice this... 6.9.0 and 6.9.2 are affected for me. Could it be my browser?
  8. Not sure if I should open my own thread, but on my 6.9.0 server Theme Engine is working fine; however, on my two 6.9.2 servers it seems to be broken. Also to note, my 6.9.2 servers have the unraid.net plugin; does that change anything with the Theme Engine from Community Apps? Also, as a few others have noted, I can't seem to maximize my docker/volume mappings. The little down arrow is there, but it doesn't seem to do anything.
  9. Hello. Not an expert, not even close, but I found myself in a similar pickle, and again SpaceInvaderOne came to the rescue. My assumption is you need to further split the IOMMU groups for the quad network adapter; as shown in the picture, they are all responding with the same xxxx:xxxx PCI identifier. I see you tried to append the kernel options; no luck, I'm guessing? I see one entry... shouldn't there be 4, one for each of the NICs on the card? (There's a short snippet for listing IOMMU groups at the bottom of this post list.) Similar thread here.
  10. Is there anything I can provide to help? Logs? Specs? Quick spec: PNY (I think) Quadro M2000 4GB, 4x Xeon E7-8880 v2, 64GB ECC. https://prnt.sc/1082llf Emby shows hardware encoding and working. https://prnt.sc/1082m94 Unraid showing GPU info. https://prnt.sc/1082mjv Unmanic no longer pulling successfully: docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 1 caused \"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: device error: false: unknown device\\n\""": unknown. Steps I've taken: tried without :staging first, ran into problems, came here. Since then I've removed the docker, rm -R'd the folder from the CLI, and tried the staging version. --runtime=nvidia was added, as well as the others, including a line from my Emby docker (NVIDIA_DRIVER_CAPABILITIES) with the value of 'all' (tried with and without that last capabilities line). Thanks for the response! Sorry I didn't get back sooner... Covid time schedules are wonky. EDIT: So, wiped clean, grabbed staging, put in the Nvidia GPU UUID, started the program, set it to my needs, shut down, and restarted the container; all is working. I still don't see progress in nvidia-smi, but it's going anywhere from 50x-100x, whereas before it was going 10-20x. I have also enabled debugging in case this is important later. Please feel free to @ me for my logs before pushing the staging update; not sure if it helps, but totally willing. https://prnt.sc/1087alz I don't see usage in nvidia-smi; however, the card is 'ramping up' as it were, using PCIe 3. Normally when idle it shows 1(3). (Roughly what the working container settings boil down to is sketched at the bottom of this post list.) Question, possibly for its own thread: Handbrake is based on ffmpeg also, so in theory I could copy over my existing tweaked settings into that advanced/extra arguments field, no?
  11. Hello there, and thanks for this app! Do I need to use unmanic:staging to enable HW transcoding? I've added --runtime=nvidia, as well as my GPU UUID via extra parameters, and I don't see anything under watch nvidia-smi. Nvidia Quadro M2000 4GB, if it matters. It seems to work fine with Emby and Handbrake. Not sure if I fat-fingered something ☺️
  12. I just followed the guide as of the date of writing this post, but changed everything from letsencrypt to swag and added the :6 at the end of the template, and all is working fine. Thank you.
  13. Is it possible to add a time delay to a specific docker? Of the roughly 20 I have, MongoDB is at the top of my list and Rocket.Chat all the way at the bottom, and still RC sometimes loads a bit too fast, before the DB has finished starting up. (A sketch of a startup-ordering script is at the bottom of this post list.)
  14. Emby beta confirmed working fine. I will edit the post to mark it solved. At least I know it does work. Thanks @HellDiverUK for the suggestion.
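
Re: the 'no pool UUID' / can't-format problem in posts 1-3. A rough sketch of the signature wipe I have in mind, assuming the stale NVMe shows up as /dev/nvme1n1 (the device name is a placeholder, not taken from the diagnostics) and the array is stopped. blkid, btrfs, and wipefs should all be available from the Unraid console; double-check the device before running anything, since the last two commands are destructive.

```bash
# See which filesystem signature the device still carries
blkid /dev/nvme1n1 /dev/nvme1n1p1

# Show every btrfs filesystem and its member devices; a leftover
# entry here is what would keep the GUI from offering a clean format
btrfs filesystem show

# DESTRUCTIVE: wipe all filesystem signatures on the stale member
# (partition first, then the raw device)
wipefs -a /dev/nvme1n1p1
wipefs -a /dev/nvme1n1
```

After that the drive should show up as blank, and assigning it to its own pool and formatting from the GUI should work.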
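Re: the two-pools-acting-as-one problem in post 6. A quick membership check, and how I understand a member can be dropped from a multi-device btrfs pool. Sketch only; /mnt/cache and the device name are assumptions based on my setup, not exact paths pulled from the diagnostics.

```bash
# If both NVMes show up under one UUID here, the 'cache' and 'vmdrive'
# entries in the GUI are still a single multi-device btrfs filesystem,
# which would explain the identical contents on both drives
btrfs filesystem show

# With that filesystem mounted (Unraid mounts the cache pool at
# /mnt/cache), one member can be removed live; btrfs migrates its
# data to the remaining device before releasing it
btrfs device remove /dev/nvme1n1p1 /mnt/cache

# Confirm a single member remains
btrfs filesystem show /mnt/cache
```

Once the second device is out and wiped (see the sketch above), it can be assigned to the 'vmdrive' pool on its own and formatted fresh.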
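Re: post 9, the quad NIC passthrough. The usual way to see how the groups actually split, run from the host console; it's just a sysfs walk plus lspci. If all four NIC functions land in one group, the ACS override append (pcie_acs_override=downstream,multifunction in the syslinux config) is typically what splits them, as the SpaceInvaderOne video covers.

```bash
#!/bin/bash
# Print every IOMMU group and the PCI devices inside it
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done
```

Each of the four NICs should appear as its own function (e.g. xx:00.0 through xx:00.3); for per-port passthrough they ideally sit in separate groups.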
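Re: posts 10-11, Unmanic hardware transcoding. Roughly what the working template reduces to as a plain docker run, for comparison. The image tag, GPU UUID, and host paths here are placeholders rather than copies of my actual template, so treat it as a sketch of the nvidia-container pieces, not the exact command.

```bash
# Get the GPU UUID the nvidia runtime expects
nvidia-smi -L

# The three things that mattered for me: the nvidia runtime, the GPU
# UUID in NVIDIA_VISIBLE_DEVICES, and NVIDIA_DRIVER_CAPABILITIES=all
docker run -d --name unmanic \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES='GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -v /mnt/user/appdata/unmanic:/config \
  -v /mnt/user/media:/library \
  josh5/unmanic:staging
```

The 'device error: false: unknown device' message in post 10 looks, to me, like what the runtime prints when NVIDIA_VISIBLE_DEVICES doesn't contain a valid UUID, which would fit with the UUID fixing it.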
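Re: post 13, delaying one container until another is up. If a per-container autostart wait on the Docker tab isn't enough (or isn't available on that version), a small User Scripts entry set to run at array start can do the ordering explicitly. The container names (mongodb, rocketchat) are assumptions; substitute whatever the actual containers are called. Older Mongo images ship the mongo shell, newer ones mongosh, so adjust the ping line accordingly.

```bash
#!/bin/bash
# Start the database first (leave its Unraid autostart off so this
# script owns the ordering)
docker start mongodb

# Poll until mongod answers a ping, capped at roughly two minutes
for i in $(seq 1 24); do
    if docker exec mongodb mongo --quiet --eval 'db.runCommand({ ping: 1 })' >/dev/null 2>&1; then
        break
    fi
    sleep 5
done

# Only now bring up the container that depends on the DB
docker start rocketchat
```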