About Devy

  1. Thank you for the reply. Did something else change with the final 6.8 release? On 6.8-rc7 my GPU runs completely fine for days without a crash or anything at all, but on 6.8 (non-beta) I lose the output daily and can't get it back without rebooting the server; none of that happened on the rc7 beta build. Maybe I should just return the GPU and get an Nvidia card. Slightly less performance for 400€, but also far fewer issues.
  2. Hello, after trying to find some clues I couldn't find any. Is there any information regarding the dropping of the Navi reset patch? As one of the few unlucky owners of a 5700 XT, it's kind of sad to have to restart the entire Unraid server from time to time just because the GPU felt like hanging. Obviously this isn't a problem caused directly by Unraid, but I just wanted to ask, since the patch was included in earlier 6.8 betas.
  3. Okay, I tried that, and in fact only about 100 GB are missing in total. I did this on the new disk and it looks pretty good so far: the iso folder is missing, plus one Docker container, and there can't be much else gone. It's a bit annoying to reconfigure the proxy manager and dump the GPU BIOS again, but that's nothing compared to losing everything. Would another program get any more success out of this? Like the UFS Explorer I'm reading about in some posts?
  4. I also have the log of the repair with -n, and it doesn't look "healthy". That was the newer drive I placed in the system AFTER the data rebuild. Warning: there is a lot of text. The log from the earlier drive looked similar, so I guess there isn't much hope, which is pretty sad. xfsrepair.log
  5. Hello. Just to make sure, my previous steps:
  - Rebooted and noticed the file system was suddenly unmountable
  - Stopped the array
  - Started in maintenance mode
  - Ran xfs_repair with the -n command, which reported a lot of errors
  - Got another disk and plugged it into a different SATA cable on a different SATA port (the old one is still plugged in)
  - In maintenance mode, ran a data rebuild that took 7 hours and lasted the whole night; in the morning it was done, but the disk is still unmountable
  Disk log from the new disk: [attached]. Now I'm really unsure what to do. Disk log from the parity: [attached]. The unassigned devices I have all work without any issue. I still want to try not to lose all of the data, and I'm a bit clueless about what steps to take next. This is the current state: the one under Unassigned Devices is the old disk that gave me all the trouble, and the one above, sdd, is the one I used to replace it.
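For anyone following the same steps, the usual XFS check-and-repair sequence on an Unraid array disk looks roughly like this. This is a sketch, not Unraid's official procedure: the device name `/dev/md1` is an assumption (it corresponds to disk 1 with parity protection; repairing the raw `/dev/sdX` device instead would invalidate parity), and the array must be started in maintenance mode first.

```shell
# Dry run: report problems only, change nothing (-n = no modify).
xfs_repair -n /dev/md1   # /dev/md1 is an assumed device name for array disk 1

# If the report looks recoverable, run the actual repair.
xfs_repair /dev/md1

# Only if xfs_repair refuses to run because the log is corrupt:
# -L zeroes the journal and can lose the most recent transactions.
xfs_repair -L /dev/md1
```

After a successful repair, orphaned files typically end up in a `lost+found` directory at the root of that disk, which is worth checking before concluding data is gone.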
  6. Update: I inserted a new disk on a different SATA cable/port, and after a rebuild it showed the new disk as emulated and ran for 7 hours in maintenance mode, just reading parity and writing to the other disk. Now, after all that, it again shows an unmountable file system.
  7. I was running Unraid with just one HDD and one parity disk. Today, after a reboot to install a new GPU, everything was gone, so I checked the Main tab: my disk 1, a WDC drive, showed "unmountable file system". Now I pretty much just have the parity disk. I could install another disk, but would it recover the files from the parity drive if I do? I think after the array started, it tried to run a parity check against the unmountable drive. I'm really scared now. I have backups of the most important things, but it would still hurt and cost me weeks to rebuild everything exactly as it was. The problem in the past with the cache drive was kind of annoying, but that was my fault for not having a UPS. I'm really confused by this suddenly unmountable drive (I also can't mount it via the Unassigned Devices plugin). Here is a log:
  8. I did this. I tried a scrub now, which runs for a bit and then ends with 272 unfixable errors or so on both disks; recover also just leads to an error. I really need a better way. The most annoying part is the mail: since I use mailcow, I still had to run a VM just for that, with one week of mail missing. Maybe I should increase the backup frequency. What annoys me the most is that I was feeling a little bit safe thanks to the RAID 1 cache pool; little did I know it seems useless in this case, because if one drive gets corrupted, it just erases the files on the other disk as well.
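When a btrfs scrub reports unfixable errors on a cache pool, the usual last-resort approach is to pull files off the damaged filesystem rather than repair it in place. A rough sketch, assuming the pool member is `/dev/sdc1` (hypothetical device name) and `/mnt/recovery` is a scratch directory on a healthy disk:

```shell
# Try a read-only mount first; rescue=usebackuproot asks btrfs to fall
# back to an older tree root (on older kernels the option is "-o recovery").
mount -o ro,rescue=usebackuproot /dev/sdc1 /mnt/recovery

# If mounting fails entirely, btrfs restore copies what it can reach
# off the unmounted filesystem without writing to it.
btrfs restore -v /dev/sdc1 /mnt/recovery
```

`btrfs restore` is read-only with respect to the source device, so it can't make the corruption worse; whatever it salvages can then be compared against the week-old backup.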
  9. I will do so, thanks. One thing that's a bit weird for me: what's the next logical step? Add another cache drive? I was feeling a little bit secure, and yet it still shows 168 GB of 250 GB in use. Is there any chance to save a VM that was on there without using the week-old backup? I felt safe with the cache pool, but I guess I was wrong. Is there any hope left, and what would be the next logical step to at least get everything else up and running again? Does my whole cache disappear, or just the one file, if one of my two RAID 1 SSDs fails?
  10. Hello everyone. Today I had a power loss, and after rebooting Unraid everything looked fine at first: my array started, so I wanted to bring my VM back online, but suddenly libvirt failed to start, and Docker failed to start. When checking the settings, it told me it couldn't find the directory. The same goes for all my shares: the Shares tab is completely empty. I can still browse to the shares I created and all the files are in them, but the system share is completely empty. What is the best way to go from here? I included the diagnostics zip: unrawr-diagnostics-20191117-1241.zip
  11. Hello, I want to transfer my home setup all into one "little" box running Unraid as network storage, but also hosting my Debian mail server and, for example, two Windows VMs, or one Mac/Linux VM and one Windows VM. Due to some limitations of my current setup, I'm looking for hardware that offers the following: the ability to pass both of my GPUs through, one to each VM; two USB controllers in total that I can pass through, so each VM gets its own; and enough cores for my setup: one for Unraid, two for my Debian VM, four for VM1, and five or so for VM2 (so at least 12 cores). The tricky part, I've figured out, is that I need a motherboard and CPU combination with good IOMMU groups, and that information isn't easy to find (for me). What breaks my current setup is mainly that my Prime X470-Pro and Ryzen 7 2700X offer no option for a second USB controller and no way to pass through a second GPU.
  12. Hello everyone. So far I'm a happy user, but there is one thing that annoys me a bit, and even after trying a lot of things I just don't know what to do. Problem: I run docker-compose inside a Debian VM, and back on the old server (same setup, but without Unraid, connecting to a normal NAS via NFS) I didn't have a single issue. The problems occur with Nextcloud and Emby, which both access the NFS share; it happens either after a reboot or after downloading a huge game to a different network share on Windows that isn't even on the same disk. My workaround so far has been to restart the Docker containers whenever the error occurs or after a server reboot. Here is my fstab line, which also worked flawlessly on boot until I switched to Unraid: /mnt/cloud nfs rw,sync,hard,intr 0 0. Emby also uses the same folder as Nextcloud for music/videos and so on. Is there something I need to know when using NFS? Maybe it would work if someone has a better way of mounting it; however, I can't use SMB, since Nextcloud requires specific per-user folder share settings and rwx permissions. EDIT: I have a second question: how many parity drives do you use? I currently use 2 parity and 2 data drives to get roughly the same protection as my old RAID 6 NAS, but I'm not sure whether the additional parity drive slows down transfer speeds, for example. I was even thinking of using a normal HDD as a cache disk just so moving files doesn't get so slow from time to time.
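A common cause of NFS mounts breaking after a reboot is the mount being attempted before the network (or the server) is up. A hedged sketch of an fstab entry for the Debian VM; the server IP `192.168.1.10` and export path `/mnt/user/cloud` are placeholders for your actual Unraid share, not values from the post:

```shell
# /etc/fstab -- hypothetical server address and export path
# _netdev delays the mount until the network is up;
# x-systemd.automount mounts lazily on first access, so a slow or
# rebooting server doesn't wedge the VM's boot.
192.168.1.10:/mnt/user/cloud  /mnt/cloud  nfs  rw,hard,_netdev,x-systemd.automount,noatime  0  0
```

Note that `intr` has been a no-op on Linux for years and `sync` can hurt throughput badly over NFS; `hard` alone is the usual recommendation for data safety.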
  13. Hello, I did that. I googled a bit more but still couldn't find a solution; my server pretty much has no internet. It has an ens192 interface and that's the only one. I also changed the Ethernet model from virtio to "e1000-82545em", and in lspci the Ethernet controller is listed as an Intel Corporation controller, which should be fine, I guess. However, if I ping something it tells me "connect: Network is unreachable". It's a Debian VM. EDIT: I solved it. networkctl was the magic word; my network interface wasn't in "interfaces" after moving the VM, so I had to add it manually. Everything works perfectly now.
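For anyone hitting the same issue after moving a Debian VM: when the virtual NIC's name changes, the old stanza in `/etc/network/interfaces` no longer matches and the interface never comes up. A minimal sketch of the fix described above, assuming the new interface is `ens192` and DHCP is in use:

```shell
# Confirm the interface name the VM actually sees.
ip link show

# Then add a stanza for it to /etc/network/interfaces:
#   allow-hotplug ens192
#   iface ens192 inet dhcp

# Bring it up without a reboot.
ifup ens192
```

If the VM uses systemd-networkd instead of ifupdown, `networkctl list` shows which interfaces are managed and a matching `.network` file under `/etc/systemd/network/` does the equivalent job.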
  14. Hello, I'm a bit of a noob, but since moving to Unraid I'm kind of lost with this one function, and I've tried multiple networking approaches. The reason I need my VM on the network is that it's my mail server and I need to set up port forwarding to it from my router.
  15. I actually made some measurements of the watts drawn by my old build, and so far it looks pretty good compared to my old NAS, which also draws quite a bit. I have three options now:
  1: Use my second PC for it entirely and just get some new HDD screws.
  2: Use my main PC (which I would like) headless, giving one GPU to my Linux system and one to my rarely used SSD. It would have two downsides: only space for 2 internal HDDs (I would probably keep the data drives internal and put the backup/safety drives external, if that's possible), and a little less CPU power for gaming, since I'd be running not just my Linux desktop next to Windows like now, but also my data store and my little Debian server.
  3: Accept the performance loss as before, from running a data store and a small Linux server, but get a new PC case to mount all the HDDs internally.