Everything posted by Caennanu

  1. Gday, Since it was highlighted in the newsletter under use cases, I figured I'd read up on this. The only question I really have is: why would I want to do this just for streaming? With NVENC encoding (only applicable to Nvidia GPUs) the load on the gaming system is next to none (compared to 60% CPU utilization of your server with 16 cores / 32 threads?). Instead, this will load up your local network, which could cause bandwidth issues depending on the network setup, as you 'upload' the stream to the server, which then uploads it to Twitch. That requires 2x the bandwidth compared to streaming directly, not to mention priority QoS settings. Next to this, since everything is virtualized, you (generally) do not have direct access to the virtual machine (a Docker container is still a layer of virtualization, after all). If your gaming system crashes, you have no more control over the stream, since you have no more access, but the stream still goes on until (and if) you regain control? There are a couple more reasons I can think of why I don't think this is a good idea. But I'd love to hear some pros (aside from offloading the recording of the footage).
  2. Gday all, Since I recently lost a whole lot of my data (after investigation, it was eventually all user error), I now have a big lost+found folder. In this folder are a ton of files and folders that have limited / no attributes. I have, however, found that many of them still contain data that can be read. For some reason GIMP seems to be able to detect a whole lot of them and can view their original content. I have also found DROID (Digital Record Object IDentification), which seems to do the same, but it can only create a list and not actually restore the file type en masse. Same with GIMP. So I was wondering: is there perhaps any app available in Community Apps, or in development, that could use the XML generated by the DROID scan to at least 'rename' the file type extension of the files in the lost+found folder? I'm wondering what I can recover, but it's too much to do by hand, and I'm not handy enough to write a script myself.
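To illustrate the kind of script being asked for, here is a minimal sketch in Python. It assumes a DROID export in CSV form with `FILE_PATH` and `MIME_TYPE` columns (verify the headers against your actual export), and the MIME-to-extension mapping is a hypothetical stub that would need extending for the formats in your lost+found:

```python
import csv
import os

# Hypothetical mapping from DROID's MIME_TYPE column to a file extension;
# extend this for whatever formats your DROID scan actually identified.
MIME_EXT = {
    "image/jpeg": ".jpg",
    "image/png": ".png",
    "application/pdf": ".pdf",
}

def rename_from_droid(csv_path, dry_run=True):
    """Read a DROID CSV export and append the guessed extension to each file.

    With dry_run=True (the default) it only prints what it would do,
    so you can sanity-check the plan before touching anything.
    """
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            path = row.get("FILE_PATH", "")
            ext = MIME_EXT.get(row.get("MIME_TYPE", ""))
            # Skip rows with no path, an unknown format, or an already-correct extension.
            if not path or not ext or path.lower().endswith(ext):
                continue
            target = path + ext
            print(f"{path} -> {target}")
            if not dry_run:
                os.rename(path, target)
```

Run it once with `dry_run=True`, eyeball the output, then rerun with `dry_run=False` to actually rename.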
  3. Gday all, Since I'm battling some hardware-related issues (I suspect my 'new' HBA, plus ECC errors caused by improper contact of my EPYC CPU... yes, it's the CPU or board: when swapping memory around, the errors stay with the slot, while reseating the CPU moves the errors to another slot...), I've been rebooting my system and swapping some PCI devices around. What I've noticed, though, is that Unraid assigns my 10G network NICs to random VMs, while they should be part of the bond, configured as active-backup. After this happens, I have to shut down Docker and the VM manager, start them up again, and restart the system to avoid the 'interface down' message. Is this expected behaviour? p.s. I can attach logs, but due to the hard disk disappearing (HBA or cable issue) and the ECC error issue, they're rather large and hard to read.
  4. I'll give that a go, thanks for the suggestion! And indeed . . . the Pro version is a bit pricey.
  5. Does anyone have any recommendations for recovery software I can use either on an Ubuntu live USB or via Windows, where I hook up the drives via an HDD dock? So far I've been able to recover a decent amount of files, but there's no folder structure present at all; it's all individual files. Also, the programs seem to read my VM image files internally, instead of giving me the image file itself. Since I had a spare key for EaseUS laying around, I'm using that, installed on a Windows laptop that I can run 24/7 for the time being. But I had the same 'issue' with Hetman Linux recovery software and Recoverit (I only did the scans with the latter 2, since . . . well, limitations of free software).
  6. So . . . data recovery time. FML.
  7. Alright, that makes sense. Diagnostics coming soon. --- Diagnostics added
  8. Running the manual check now.
  9. So, I'm trying to follow the guide, @itimpi, but I run into an issue at step 4: I do not have the 'check filesystem status' button? (I know the entire array was done in XFS, so I'm following that thread.) And taking the array out of maintenance mode, I get the following
  10. Thank you. If I had followed ChatNoir's advice, I probably would have found that... And good timing at that: the disk that had no issues so far and was clearing yesterday has just finished... as you guessed it, unmountable.
  11. Aye, that's what I thought. But the disks were unmountable (forgot to mention, sorry, it's been a hectic day), thus I had to format them. Would there have been another option when they became unmountable because of a missing file system? Yeah, in terms of recovery, I'm currently running recovery software on the originally failed disks. It seems to be working, but the scan is still in progress (for the next 7 hours...). And that is what I'm currently thinking, or at least hoping for: that it was simply an I/O issue, and that I can recover most of it.
  12. Hope is all I've got. I'll try not to respond too quickly ;)
  13. Yeah... Tell that to the other half, who's missing all of her oh-so-important pictures of the sky, and shoes, and what not
  14. That is what I thought. But why is it clearing the disk now, then? 'Clearing the disk' to me sounds like a format thing?
  15. Alright, I'll give that a shot with the 2 failed drives I have lying about. Nothing should have happened to them other than what I described above. Maybe I'll get lucky.
  16. I only have Windows machines, though. I don't think they read XFS?
  17. Gday all, Before I start my story: at the time of writing I'm a little salty, so excuse the negative undertone. And a disclaimer: in previous posts I have mentioned Machine Check Events from ECC memory. Those were caused by a defective memory module, which has been replaced, and the system had been running fine for 2-3 weeks since replacing it. While replacing the module I also swapped 3 Intertech fans (1200 rpm PWM) for Noctua NF-F12 industrialPPC 3000 rpm fans. The question I'm looking to get answered: where did I go wrong, and how can I prevent this in the future?
Last Monday I noticed 2 of my drives had failed in my 5-disk array with 2 parity, all WD Red 4TB drives, which I found a bit odd. So the first thing I did was check whether the parity had failed. It didn't seem to have; the disks were being emulated. Phew, parity saved my ass... or so I thought. Data was showing on the network, and although a bit slow, I was able to access it. So I ordered 2 new drives and started troubleshooting. Checked all cables and connections; all was fine. VMs and Dockers were running from the cache disk, so at least that hadn't failed, and the CCTV recordings in a 2nd, mirrored pool had no issues. All the WD disks were attached to the onboard SAS controller of my ASRock Rack EPYCD8-2T; the purple disks for my CCTV were also connected to the same controller, but on the 2nd port / breakout cable.
So we wait for the replacement disks: 2x 4TB Seagate drives (I figured a different brand would make them easier to recognize). The disks come in, I turn the server off, replace the supposedly failed disks and start the machine up. The array started offline, as requested, so I could add the 2 new disks to the array and start rebuilding. The rebuild starts, data is still available through emulation. And then... errors galore: every bit read comes back as an error. Oh shhhh....t, something is wrong.
I let it run for a bit longer, figuring maybe it's a parity thing, but the rebuild speed drops from 80 MB/s to the KB range... Alright, this isn't right; something else must be wrong. At this moment I also noticed that the data that was on 1 disk (my collection of movies gathered over the years) was no longer showing. So I hastily made a backup of my most important files to a USB disk, just to be safe. Those are safe. I turned off the machine and checked all the connections once again. Nothing seemed out of place, but something had to be wrong. Since all affected disks were in an HDD tray caddy (for hot-swap purposes), I figured maybe something was wrong with that thing. I removed the caddy, placed all the drives in a regular HDD tray, connected them directly to the power and data cables and booted the machine back up.
I restarted the parity build by swapping the new disks per slot and formatting them. After about 2 hours I reached the 500GB mark where all the errors previously started appearing. No errors this time; the parity rebuild continues and all seems fine. After about 5 hours, something happens: the one data disk that wasn't malfunctioning earlier suddenly has errors and is being emulated. The parity build speed changes from 80 MB/s on average to 1.2 GB/s, 2.3 GB/s, 4.6 GB/s, even topping out around 8 GB/s. Now, we all know SATA drives can be fast, but no way are spinning disks on SATA THIS fast. Still, the rebuild completes shortly after that. Since I had a hunch, I checked the network drive... even though it was accessible, only the shares on the cache were still present, all while the parity disks were up, as well as the 2 new disks, and then the disabled 3rd disk... At this point I basically figured I had lost all my data. So... we reboot, because... I hadn't tried turning it off and on again, right? Now Unraid wants the same disk in positions 1 and 3? Daf...q? When trying to force the correct disk, it gives me the 'wrong' message.
I turned the machine off, hooked all the drives up to my HBA (which until now only used 1 breakout cable, for the SSDs my VMs run on, and those aren't showing any issues) and turned it back on, hoping this would solve the issue. But the same thing happened... crap... Well, then there is only 1 thing left to do, right? New config... So we do a new config, assign all drives where they belong, start the array, and start rebuilding... And this is where I am right now. No disks are being emulated, only the shares on my cache are showing, and Unraid is clearing disk 2 while showing little to no data on disks 1 and 3. But I still have a little bit of hope... according to the header, I'm still using about 34% of my array, which was the state before all of this happened. So now we wait for disk 2 to clear and hopefully rebuild data from parity, but I don't think I'll be that lucky. So... will I get my data back, any data? Where did I go wrong in troubleshooting? What should I have done; maybe a different order, different troubleshooting steps? Because if I don't learn now, I'll never be able to trust this setup again.
  18. Ha! Well, I can confirm it works in Pterodactyl. But you might be better off hosting it from a VM instead, if that is the only thing you want to host. Pterodactyl is more aimed at hosting multiple services at once while keeping a low profile in terms of resources.
  19. Correct, even if you try to apply the update, it won't update. It will throw you an error along the lines of 'was this created through a plugin?'. They will update automatically, if set correctly via the panel. I have the exact same. Also, when running 'docker stats' from the Unraid command line, you will see that the containers made by Pterodactyl all report '0'. As long as they report 0 on the command line from Unraid, the daemon will do exactly the same.
  20. And for me, I've been running into issues again. Issue 1: for some reason, I cannot find my game servers via LAN browsers. This happens for both ARK and Factorio; I have not tested with others. Direct IP connections work fine, but since I want to use clusters for ARK, it would be very nice to have them see each other. (Dino / item transfers work fine, so it's not a config issue on the server side.) Issue 2: for ARK at least, I am not able to download all the mods I'd like. I currently have 7 set in the .ini files, of which 2 fail to download; it doesn't even seem to attempt it anymore. The mods in question are Structures+ and Castles & Keeps Remastered. Even when copying them directly to the server via SFTP or the file manager, they do not get loaded. These 2 mods in particular also cannot be extracted via the file manager when I upload them as zip files, while other (larger) mods have no issue. As a test, I've set up a Windows VM and used Ark Server Manager to run a test server in the same subnet the dockers should be in. There I do not experience issue 1 or 2, so it is specific to the Pterodactyl install (on Unraid). I am aware there are multiple factors at play; I hope I can get some support on this. --- update --- In regards to issue 1: I've found that changing the config.yml file to use host instead of pterodactyl_nw for the network name, driver and network_mode 'solves' the LAN browser issue. Hopefully someone can 'fix' this bit.
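For anyone wanting to replicate the workaround in the update above, this is roughly what the edit in wings' config.yml looks like. This is a sketch based on my own install; the exact field layout may differ between wings versions, so verify the keys against your own file before changing anything:

```yaml
# Excerpt from wings' config.yml (assumed layout) — the three fields I
# switched from pterodactyl_nw to host to make LAN browsers see the servers.
docker:
  network:
    name: host          # was: pterodactyl_nw
    driver: host        # was the default bridge-style driver
    network_mode: host  # was: pterodactyl_nw
```

Note that running game servers on the host network bypasses the isolated Pterodactyl network entirely, so watch out for port conflicts with other containers.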
  21. I had a similar issue. It didn't work until I added the custom docker network configuration in the config.yml. Have you added that part?
  22. So . . . since I am still having the issue, I figured I would come here. Now, the thing is . . . I understand what you're saying and what I need to do. However . . . how do I get to the file? Perhaps this could be a user script?
  23. Gday all, Since this week (I'm assuming, as I don't use the panel too often), I can't access the panel. My game servers were running, but after a Docker reboot they no longer get started automatically. When trying to access the panel, I'm getting a server 500 error. When checking the logs on wings, everything seems fine (version 1.6.3). Checking the panel logs, I keep getting a GET error: GET /api/remote/servers?page=0&per_page=50 HTTP/1.1" status:"500" agent:"Pterodactyl Wings/v1.6.3 (id:8QLdUeRoKVxDnsFk)" for:"-" coming from a source that would be the custom internal network, so I'm guessing that is wings requesting server instance configs. Google doesn't give me any answers, let alone recent ones. I was on Unraid v6.10.0 RC8, upgraded to v6.10.2 RC1, no issues that I know of. Went to v6.10.2 RC2, and now noticed issues. Wings was updated 4 days ago; I don't know when the panel was updated last. I tried re-installing both wings and the panel without result. Would love some help getting the panel back up, as I wanted to set up some new servers. ----- Edit: apparently the panel docker was updated today. Reverting to v1.7.0 worked
  24. That is indeed simple enough. Thanks for explaining though, cause I was lost!
  25. Gday good sir, I've been using this docker for a while to sync my phones to my array. However, I'm looking to expand this by having certain folders sync to essentially create an off-site backup. I want this off-site backup to be on a specific disk outside the array, or a pool of devices outside the array; either works. However, I cannot find a way to 'mount' these. Could you please point me in the direction of how I can mount a secondary target instead of the /media one?