tbonedude420

Everything posted by tbonedude420

  1. As it stands right now, there's no pool UUID, the disk cannot be formatted, and I'm stuck. It's 8am MST, and I need some shuteye. I think my next plan of action is to nuke the array and add both NVMe's to the normal array so they format as XFS. Stop, remove, and try again? Shrug. I knew I should've changed his name from Loki... bwahahhaa. Thanks @itimpi and @JorgeB. I'll check in on this in a few hours after some much-needed sleep. Latest diagnostics are in the .zip above (0753). I put new RAM on my desk too, so hopefully no one has to wade through mcelog errors 🤣 Hope to hear back about any possible solutions. Again, there's nothing here, literally. Just docker.img and libvirt.img. And maybe Dozzle is installed. I am 100% sure I screwed this up somehow; just hoping I can figure it out, never do it again, and maybe help someone else in the future. 😛
  2. Hiya @JorgeB, and thanks for the like on U.C.D. Removed NVMe #2 from vmdisks, started the array with the check mark. Ran Mover. Made new diagnostics. I see no change; however, I think I need to go back to how it was. I think I should add both devices to the same pool, then remove one, then balance? How do I go about doing that? It's only 28GB; it should take seconds on NVMe. I feel as if I've goofed something up, and I'm not sure how to proceed lol. Should I just make a new HW config and nuke everything? Even after removing it, it shows under UD as a 'pool' device. loki-diagnostics-20230312-0737.zip EDIT: Re-added it to the 'cache' array. It's spinning its wheels and thinking. Next step is to turn the array off, remove it from 'cache', do nothing else, and start the array. A balance I guess should happen on its own? Then shut down the array again and move it to the vmdrive pool. Hopefully then start the array and format. (Appears it's done thinking; gonna proceed as I typed above.) Will stop, remove the drive, but leave 2 slots, and start the array. EDIT 2: Did as I said, and now it's really angry bwahaha. I checked format, and it errors out saying no pool UUID. Making a new diag .zip now (0753.zip) loki-diagnostics-20230312-0753.zip
  3. Were they part of the same pool in the past? Yes, they were. I removed it after realizing 2x250s = 250GB and no real-world gain in performance, at least in my use case. I attempted to switch it from btrfs to XFS and it gave me the option to format, but that was unsuccessful. My guess is they are still linked somehow. Not sure how to proceed. Again, it's all expendable and there's no risk of anything getting lost. This is a server made from spare parts; I use it for a lot of testing. Can and will copy-pasta nuclear destructive lines of text into the command line 🙈 hahahah Edit: Stopped the array, added a second pool and that drive, started the array. Made a new diagnostics .zip after those changes loki-diagnostics-20230312-0714.zip
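For future readers hitting the same wall: if the two drives really are "still linked", one possible cleanup is wiping the leftover btrfs signatures off the removed drive so Unraid sees it as a blank disk again. A rough sketch, demoed on a scratch file rather than a real device (the real device path is whatever lsblk shows for your NVMe; wipefs -a is destructive, so double-check the path):

```shell
# Demo on a throwaway file; on the server you'd point DEV at the real
# partition, e.g. /dev/nvme1n1p1 (hypothetical -- verify with lsblk first!).
DEV=$(mktemp)          # stand-in for the old pool member
truncate -s 16M "$DEV"

wipefs "$DEV"          # read-only: list any leftover filesystem signatures
wipefs -a "$DEV"       # erase them, leaving a blank device Unraid can format

rm -f "$DEV"
```

With the stale superblock gone, the drive should no longer show up as part of the old 'pool' under Unassigned Devices.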
  4. Yikes, I was afraid of that 🤣 So currently I've been nulling out my syslog and mcelog due to having a slight ECC error. I could null both and then post, in theory? I've got new RAM for the server ready to go; I'm just not looking forward to unracking it. Lol, you know how it is, right?! Hopefully you can see past the ECC errors. bwahahaha loki-diagnostics-20230312-0625.zip
  5. Been a YUGE fan of Unraid for quite some time. Figured I should get involved, and well... I've got a current issue with one of my servers and needed to kill some time 😅 I am a sysadmin; my handle is T-Bone.

Codename: Tank https://prnt.sc/H2Pw3yfb_psX
Theme: Solarized Dark
Unraid: ...5...8? or so?
CPU: 4x Xeon E7-8880v2 @ 2.5GHz / 3.1GHz turbo, 15c/30t, 37.5MB cache (60c/120t total)
Motherboard: Supermicro X10QBi
RAM: 384GB DDR3 ECC 1600MT/s (running 1333MHz)
Case: Supermicro 4U CSE-848XA-R3240B
Drive Cage(s): 24-bay hotswap
Power Supply: 4x 1460W Platinum (using two)
Expansion Card(s): LSI SAS2008 (9211-8i)
Cables: SFF-8087
Fans: Like... 10? Will count one day.
Parity Drive: Where we're going, we don't need those.
Data Drives: 15 drives; 4 Seagate, 11 Toshiba *More info below*
Cache Drive: Samsung 970 Evo 500GB NVMe
Total Drive Capacity: 100TB
Drives: I started with Toshiba X300s and loved their performance. They were also one of the cheapest per GB when I started. Moved to Seagate. https://prnt.sc/Fj2_SnxJcl1h
Primary Use: NAS / media streaming / domain controller
Likes: Overpowered. Overbuilt. Overkill.
Dislikes: It's a commercial-grade server that requires commercial-grade hearing protection.
Add-Ons Used: Unassigned Devices, Nerdpack/tools, CA Auto Update, all the Dynamix plugins, Custom Tab, IPMI Support, Enhanced Log Viewer, File Activity, CA Unlimited Width, GPU Statistics, My Servers, Nvidia Driver, NVTOP, Rclone, Theme Engine, Tips and Tweaks, Unbalance, User Scripts
Future Plans: A good dusting.
Boot (peak): 1200W
Idle (avg): ???W (array off), 450W (array on)
Active (avg): 650-700W
Light use (avg): 500-550W

Codename: PhatMicro https://prnt.sc/Tp5qmuyTu3CJ
Theme: Custom green/black w/ logos
Unraid: ...6? Ish
CPU: 4x Xeon E7-4890v2 @ 2.8GHz / 3.4GHz turbo, 15c/30t, 37.5MB cache (60c/120t total)
Motherboard: Supermicro X10QBi
RAM: 768GB DDR3 ECC 1600MT/s (running 1333MHz)
Case: Supermicro 4U CSE-848XA-R3240B
Drive Cage(s): 24-bay hotswap
Power Supply: 4x 1460W Platinum (using two)
Expansion Card(s): 2x LSI SAS3008 (9401-8i, I think...)
Cables: SFF-8643, I think...
Fans: Like... 10? Will count one day.
Parity Drive: Where we're going, we don't need those.
Data Drives: 14 drives, all Seagate *More info below*
Cache Drive: 2x Samsung 980 Pro 1TB NVMe
Total Drive Capacity: 172TB
Drives: All Seagate. Started with IronWolf, moving to Exos. https://prnt.sc/GXLbjaMmRu3D
Primary Use: NAS / media streaming / domain controller / PON controller / remote support / small business
Likes: Overpowered. Overbuilt. Overkill.
Dislikes: It's a commercial-grade server that requires commercial-grade hearing protection.
Add-Ons Used: Unassigned Devices, Nerdpack/tools, CA Auto Update, all the Dynamix plugins, Custom Tab, IPMI Support, Enhanced Log Viewer, File Activity, CA Unlimited Width, GPU Statistics, My Servers, Nvidia Driver, NVTOP, Rclone, Theme Engine, Tips and Tweaks, Unbalance, User Scripts
Future Plans: A good dusting.
Boot (peak): 1200W
Idle (avg): ???W (array off), 600W (array on)
Active (avg): 750W
Light use (avg): 575-600W

Codename: Loki https://prnt.sc/dV4u6vIFO8nE (ignore the 100% log... hah)
Theme: Nord Dark
Unraid: ...5...8? or so?
CPU: 2x Xeon E5-2670 v1 @ 2.6GHz / 3.3GHz turbo, 8c/16t, 20MB cache (16c/32t total)
Motherboard: ASRock EP2C602-4L/D16
RAM: 64GB DDR3 ECC 1600MT/s (running 1600MHz)
Case: Rosewill 4U RSV-L4000U
Drive Cage(s): 4-bay hotswap
Power Supply: EVGA SuperNOVA G3? 1000W
Expansion Card(s): LSI SAS2008 (9211-8i)
Cables: SFF-8087
Fans: 2x 120mm intake, 2x 92mm CPU, 2x 80mm rear
Parity Drive: Where we're going, we don't need those.
Data Drives: 13 in the array, 14 drives total *More info below*
Cache Drive: 1x Samsung 960 Evo 250GB NVMe, 1x Samsung 970 Evo 250GB NVMe
Total Drive Capacity: 6.7TB
Drives: A hodge-podge. All good and working, but small and useless. Leftovers. All brands, even a RAID0 in the array (the one with the * for temperature). https://prnt.sc/X28JfKWuKJvG
Primary Use: Testing platform / VPN / backup
Likes: DIY. Homebrew. Custom built.
Dislikes: His name is Loki and he causes me grief.
Add-Ons Used: Unassigned Devices, Nerdpack/tools, CA Auto Update, all the Dynamix plugins, Custom Tab, IPMI Support, Enhanced Log Viewer, File Activity, CA Unlimited Width, GPU Statistics, My Servers, Nvidia Driver, NVTOP, Rclone, Theme Engine, Tips and Tweaks, Unbalance, User Scripts
Future Plans: A GPU? Larger and/or working cache drives? Real drives? 😅
Boot (peak): 600W
Idle (avg): ???W (array off), 250W (array on)
Active (avg): 250W
Light use (avg): 175W maybe

Codename: BKuhl https://prnt.sc/m8SfpjmZ2Em8
Theme: Custom (old-Windows inspired)
Unraid: 6.4 maybe?
CPU: Xeon E5-2680 v3 @ 2.5GHz / 3.3GHz turbo, 12c/24t, 30MB cache
Motherboard: Dell 0K240Y
RAM: 64GB DDR4 ECC 2666MT/s (running 2133MHz)
Case: Dell Precision 5820
Drive Cage(s): 6x internal
Power Supply: 425W Gold rated, I think
Expansion Card(s): None, using onboard
Cables: N/A
Fans: 3x 120mm, and 1x 92mm CPU I think
Parity Drive: Where we're going, we don't need those.
Data Drives: 4 drives, all Seagate *More info below*
Cache Drive: SanDisk SD8SB8U 512GB SSD (SATA)
Total Drive Capacity: 36TB
Drives: All Seagate. 2x 10TB IronWolf, 1x 16TB Exos, 1x 3TB SkyHawk. https://prnt.sc/MA8eC7Ovf-f7
Primary Use: NAS / media streaming / NVR controller / home automation
Likes: Prebuilt (sorta)
Dislikes: Not really upgradeable
Add-Ons Used: Unassigned Devices, Nerdpack/tools, CA Auto Update, all the Dynamix plugins, Custom Tab, IPMI Support, Enhanced Log Viewer, File Activity, CA Unlimited Width, GPU Statistics, My Servers, Nvidia Driver, NVTOP, Rclone, Theme Engine, Tips and Tweaks, Unbalance, User Scripts
Future Plans: Rehouse it in a 3U or 4U?
Boot (peak): 250W?
Idle (avg): ???W (array off), 150W? (array on)
Active (avg): 200W?
Light use (avg): 100-150W?

Tank, PhatMicro, and Loki are all physically located in my home. Tank and Loki are mine; PhatMicro is my buddy's server / our company server. BKuhl is located a few miles across town, at a third friend's house.

Network Related:
Ubiquiti stuff: EdgeRouter 4, EdgeRouter 4, EdgeSwitch ES-24 Lite, EdgeRouter 3 Lite, ERX-10, EdgeSwitch 24 (non-Lite), a few UniFi cams, UniFi WiFi mesh APs.
Not my area of expertise (yet), so I'm winging this section. We've got two racks, a lot of switches, IPsec site-to-site VPN, 3G, 4G, 5G, PON, WireGuard VPN, and GRE tunnels. There's also a Synology DS18-something Plus, and a Dell R270-something server (old lab stuff). Raspberry Pi 2 Model B+ 4GB, Raspberry Pi 4 Model B+ 8GB, several laptops, and more.
Incoming is 1Gb split 5 ways (5 IPs) via a /28. Across town is a /25 and a /24. Can achieve 1Gbps throughput (even on VPN) to all sites. And some 3G/4G/5G and PON stuff on a /22. 10GbE between Tank and PhatMicro via a /30.
It's also my job to cable manage all the things... and well... it always starts out good, but then we always change something. Please, complain below 🤣

My talents include hardware, building, customizing, and administering. Some scripting, and more. Phat's are primarily networking, and trying to beat me at my own game. Together we are a force to be reckoned with. We have a business together, now live together, and have future plans of solar and battery backup for the server room. We are both tri-lingual when it comes to OSes, but primarily Windows for gaming. I've got fair-to-strong Linux skills, and Phat is learning that most networking stuff is Linux, so he's catching up fast. We're currently testing a bad-to-the-bone switch, as seen on the small rack, which houses two Xeons and some memory and has the ability to run VMs. Some Juniper stuff we've been testing. And more. Future plans of 10GbE from the ISP. They offer 2.5Gb and it's not that much more money; waiting on a response about 10GbE. But 10GbE would be ideal, seeing as we've already got the gear, and it doesn't support 2.5Gb (sorta...).

Tank and Loki are located on the short rack. PhatMicro and the networking gear are on the larger rack. And a hidden Dell R270 or something. Also a hardware KVM. The room has more blinking lights than my Corsair K70 MK.2. 🤓😂

Note: Wattages are estimated, sorta. We've got IPMI, which gives us some information, and we just recently got two of these Emporia kits that allow a lot more granular information. https://www.emporiaenergy.com/how-the-vue-energy-monitor-works
Note 2: Folks may wonder how the two main servers have so much RAM. Google the Supermicro X10QBi: we've got 8x memory daughter boards, each with 12 slots. The servers can theoretically hold something like 6TB of RAM.
Note 3: My phone has an outer shell that makes night pictures suck. Sorry.
Note 4: We don't use parity anymore. Sizes are too large and take too long, even in RAID0 and RAID1 for parity drives (don't do this... bad idea lol). Our current solution is a business Dropbox account and encryption.
  6. My backup/testing server, Loki, is causing me some grief, as he always does. I've got two 250GB NVMe cache drives: one a Samsung 960, one a Samsung 970. GOAL: Have two separate fast drives for different things. I.e., one for Docker/appdata use, and one for my vmdisks. Reality: Docker, appdata, and vmdisks appear on both drives. I believe btrfs or something else is 'pooling' the devices together. Under 'Pool Devices' one is listed as 'cache' and one as 'vmdrive', but they don't appear independent of each other. I've even tried removing the pool, changing from btrfs to XFS, and redoing things. Currently it won't mount the second drive since it's XFS. Am I doing something wrong here? Did I misunderstand 'Add a second pool', or am I finally going nutty? Everyone loves pictures, so here's one https://prnt.sc/Hnrg_minXscy https://prnt.sc/2VhboljUQJzZ (array offline) https://prnt.sc/RiexvgJLSOYh (Unassigned Devices) It shows like it's a part of the 'pool', and the data that exists on it when viewing is identical to 'cache'. This server, and everything about it, is expendable. I'm willing to try anything. Edit: This is working as expected on my other server, 'PhatMicro' (can be seen in my signature): appdata/system/isos on the first SSD, and only domains on the second SSD. GRRR. I don't have much hair left, but I'm willing to pull it all!
  7. Took a short clip, if it helps: https://share.getcloudapp.com/bLuARyym I just happened to notice this... 6.9.0 and 6.9.2 are affected for me. Could it be my browser?
  8. Not sure if I should open my own thread, but on my 6.9.0 server Theme Engine is working fine; however, on my two 6.9.2 servers it seems to be broken. Also of note, my 6.9.2 servers have the unraid.net plugin; does this change anything with Theme Engine from Community Apps? Also, as a few others have noted, I can't seem to maximize my docker/volume mappings. The little down arrow is there, but it doesn't seem to do anything.
  9. Hello. Not an expert, not even close, but I found myself in a similar pickle, and again, SpaceInvaderOne was there to the rescue. My assumption is you need to further split the IOMMU groups with regard to the quad network adapter; as shown in the picture, they are all responding with the same xxxx:xxxx PCI identifier. I see you tried to append the kernel options; no luck, I'm guessing? I see one entry... shouldn't there be 4? One for each of the NICs on the card? Similar thread here.
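For anyone else stuck here: the usual way to split stubborn IOMMU groups on Unraid is the ACS override kernel flag. A hedged sketch of the boot-config fragment (the flag and file location are standard Unraid fare, but whether the override is safe or even needed varies per motherboard, so treat this as something to test, not gospel):

```shell
# Unraid syslinux.cfg append line (Main -> Flash -> Syslinux Configuration).
# pcie_acs_override=downstream,multifunction asks the kernel to split IOMMU
# groups more aggressively. Reboot afterwards and re-check Tools -> System
# Devices to see whether each NIC now lands in its own group.
#
#   append pcie_acs_override=downstream,multifunction initrd=/bzroot
```

If the four NICs still share one group after that, the card itself may not isolate its functions, and passing through individual ports won't be possible.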
  10. Is there anything I can provide to help? Logs? Specs? Quick specs: PNY (I think) Quadro M2000 4GB, 4x Xeon 8880v2, 64GB ECC. https://prnt.sc/1082llf Emby shows hardware encoding, and working. https://prnt.sc/1082m94 Unraid showing GPU info. https://prnt.sc/1082mjv Unmanic no longer pulling successfully: docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 1 caused \"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: device error: false: unknown device\\n\""": unknown. Steps I've taken: tried without :staging first, ran into problems, came here. Since then I've removed the Docker container, rm -R'd the folder from the CLI, and tried the staging version. --runtime=nvidia was added, as well as the others, including a line from my Emby docker (NVIDIA_DRIVER_CAPABILITIES) with the value of 'all' (tried with, and without, that last capabilities line). Thanks for the response! Sorry I didn't get back sooner... Covid time schedules are wonky. EDIT: So, wiped clean, grabbed staging, put in the Nvidia GPU UUID, started the program, set it to my needs, shut down, and restarted the container; all is working. I still don't see progress in nvidia-smi, but it's going anywhere from 50x-100x, whereas before it was going 10-20x. I have also enabled debugging in case this is important later. Please feel free to @ me for my logs before pushing the staging update; not sure if it helps, but totally willing. https://prnt.sc/1087alz I don't see usage in nvidia-smi; however, the card is 'ramping up', as it were, using PCIe 3. Normally when idle it shows 1(3). Question, possibly for its own thread: HandBrake is based on ffmpeg also, so in theory I could copy over my existing tweaked settings into that advanced/extra arguments field, no?
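A note for anyone comparing symptoms: that "device error: false: unknown device" line suggests the container received NVIDIA_VISIBLE_DEVICES=false (or something equally wrong) instead of a real GPU UUID, which matches the fix above of putting the UUID in. The working setup maps to roughly this template config (the UUID is a placeholder; yours comes from nvidia-smi -L):

```shell
# Unraid docker template bits (values hypothetical -- substitute your own):
#
#   Extra Parameters: --runtime=nvidia
#   Variable: NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#   Variable: NVIDIA_DRIVER_CAPABILITIES=all
#
# Get the UUID on the host with:
#   nvidia-smi -L
```

If NVIDIA_VISIBLE_DEVICES is missing, empty, or set to anything the driver doesn't recognize, nvidia-container-cli bails with exactly that "unknown device" error.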
  11. Hello there, and thanks for this app! Do I need to use unmanic:staging to enable HW transcoding? I've added --runtime=nvidia, as well as my GPU UUID via extra parameters, and I don't see anything under watch nvidia-smi. Nvidia Quadro M2000 4GB, if it matters. Seems to work fine with Emby and HandBrake. Not sure if I fat-fingered something ☺️
  12. I just followed the guide as of the date of writing this post, but changed everything from letsencrypt to swag and added this :6 at the end of the template, and all is working fine. Thank you
  13. Is it possible to add a time delay to a specific Docker container? Of the roughly 20 I have, MongoDB is at the top of my list and Rocket.Chat all the way at the bottom, and still RC sometimes loads a bit too fast, waiting for the DB to load up.
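One workaround while waiting for a proper answer: disable autostart on the dependent container and start it from a User Scripts job that polls the database's TCP port first. A minimal sketch, assuming MongoDB listens on the default 27017 and the container is literally named rocketchat (both hypothetical; adjust to your setup):

```shell
#!/bin/bash
# wait_for_port HOST PORT [TIMEOUT_SECS] -- poll a TCP port once per second
# using bash's built-in /dev/tcp redirection; returns 0 when it connects,
# 1 if the timeout expires first.
wait_for_port() {
  local host=$1 port=$2 timeout=${3:-60} i
  for ((i = 0; i < timeout; i++)); do
    # The subshell opens the connection (and closes it on exit) if it succeeds.
    (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null && return 0
    sleep 1
  done
  return 1
}

# Usage on the server, e.g. in a User Scripts "At Startup of Array" script
# (host, port, and container name are hypothetical):
#   wait_for_port 127.0.0.1 27017 60 && docker start rocketchat
```

This avoids a fixed sleep: Rocket.Chat starts as soon as MongoDB actually accepts connections, however long that takes.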
  14. Emby beta confirmed working fine. I will edit the post to mark it solved. At least I know it does work. Thanks @HellDiverUK for the suggestion.
  15. Good suggestion. Wanted to try the beta version anyway, so I'll just test both and report back. My hope is it's just not a supported GPU on the regular version, and it has been added in more recent updates/distros. On that note, are there any commands I could/should type out to show more info? lspci only shows it's passed to the container, not necessarily used.
  16. I was hoping it was something that trivial... no dice. The worrying part is I should see them listed as shown in the tutorial... (mine, pictured above, has two blank spaces). Can I console into Emby? ... (goes to test)
  17. Glad to know others are having success (rages inside!) bwahaha. I was hoping it was just as easy... Maybe check over my config settings? Like you said, it was exactly the same, I thought... 🙄
  18. Hey unraiders! Trying to get my Quadro M2000 working with Emby, and having some issues, it seems. I followed https://forums.unraid.net/topic/77813-plugin-linuxserverio-unraid-nvidia/ and had easy success with Plex. Checking watch nvidia-smi from a terminal shows Plex using it. Emby... not so much! Under Emby settings I expected to see my GPU listed as shown in the tutorial, but no dice. Any help greatly appreciated. TIA! Edit: Solved by updating to EmbyServerBeta (from Emby).
  19. Awesome, and thanks for the quick responses! It appears I have more homework to do. I am okay with building a kernel, assuming there is an idiot-101 guide for special ol' me. I will test out the plugin; this setup currently is not mission critical, and I am open to failure and learning. I have a primary system for everything else. EDIT / UPDATE: I hurried home as fast as I could! Alright, so far so good. Oh boy, we're getting somewhere! Yay!!! Success! Thanks again to @testdasi, it truly was a 'Voilà!' moment on beta 25; took less than 10 minutes, including shutdown and restart of the server. !!
  20. "With both, it's critical to note that you should NOT update Unraid using the official GUI. You basically have to wait till the appropriate custom build has been released." Important note, got it. I am on the 6.9 beta; should I drop back to stable first? I ran 6.8 a long time, and it's very stable with my setup. Just wanted to play around with 6.9, though. Sidenote: I blame SpaceInvaderOne!
  21. Hi all, sorry if this isn't the right spot... please move accordingly! It seems HW transcoding has come a long way, and I am ready to jump in with recently decommissioned hardware from work, woohoo! GPU: Quadro M2000 4GB; based on the Elpamsoft list, it should be able to do H.265. I believe it's as straightforward as adding support, and... voilà? I am using a Quadro, so no issues there... right?! Also, I see the linuxserver.io plugin and another from ich777; is there a difference? I think I just need the first plugin. And one last question: since I've got a dual-CPU server, does it matter if I put the GPU on CPU0 or CPU1?
  22. So, hmm... choices to make... I thank you for the reminder about Unassigned Devices. I had checked it out once, but then forgot about it. I am curious as to what kind of performance gains could be had in a 2x250 cache array, but my understanding is it would be 384GB? I would very much be interested in passing a 250 directly to a VM; that's an idea worth some exploration, I think. I will do some testing and report back here, hopefully with geeky pics or something! Many thanks for the response and the reminder on UD. Man, what a good plugin; how could I forget?! A side note: for passthrough stuff, I need to enable IOMMU stuff or something? I may have to refer to some SpaceInvaderOne videos in my near future. I also toyed with a plain-Jane SATA SSD before, used it as write-heavy cache space, and killed it. (Not entirely; a full long format later, plus a firmware update, and she's back up to mostly full speed again.) But it's not something I'm quick to return to anytime soon. The 12TB seems to hold its own as the 'main drive'. Given the size difference, it tends to be the first thing hit for incoming data / downloads / etc.
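On the capacity question: for a two-device btrfs pool, usable space depends on the data profile rather than being a fixed sum, so it wouldn't be 384GB either way. A quick reference (the commands are standard btrfs; the /mnt/cache path is hypothetical, and a convert balance rewrites every block, so let it finish):

```shell
# 2x 250GB btrfs pool, approximate usable space by data profile:
#   raid1 (Unraid's default for multi-device pools) -> ~250GB, blocks mirrored
#   raid0                                           -> ~500GB, striped, no redundancy
#
# Check what a mounted pool is currently using:
#   btrfs filesystem usage /mnt/cache
#
# Convert a mounted pool between profiles (keeps data, changes redundancy):
#   btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
#   btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
```

So the 2x250 experiment trades capacity for redundancy (raid1) or redundancy for speed (raid0), never both at once.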
  23. Hey all! I've currently got a 10-disk, 70TB array, and a 500GB NVMe for cache. The question: would I be better off upgrading to a single 1TB NVMe, or adding in a second 500? In theory there could be more IOPS with two drives, but I'm not quite sure. Speed isn't really a concern, seeing as it's NVMe. I also have 2x 250GB NVMe's with adapters lying around. What would you do? Double up and get a second five hundo? Swap out for a 1TB? Add on 2 more 250s and compare performance? (This is an option since I'm not using my PCI slots for anything else at the moment.) I have either 4 or 5 PCIe slots: 1 for a 9211-8i RAID card, one for the cache drive, and at least 2 more spots open to use. Media programs transcode in RAM, but the library information, poster art, and a single W10 VM all live in cache land. I can provide as much data as needed. Basic specs: Xeon E5-2670 v1 (two), ASRock EP2C602-4L/D16, Unraid 6.8.2.
  24. Many thanks Squid! (My apologies, I was out at lunch.) All working as intended now. I swear I looked through that list two or three times and never saw it until you pointed it out 😪