doma_2345 (Members · 48 posts)
Everything posted by doma_2345

  1. This error still shows: Apr 15 16:45:12 35Highfield shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed. What do I do now?
  2. So I have rebooted and my shares are still missing. 35highfield-diagnostics-20210415-1648.zip
  3. Good to know, but also FFS. If I reboot, how does that affect the parity rebuild? Will it start from where it left off, or will it start again?
  4. I had an issue with one disk going offline with no SMART errors; it just became disabled. In trying to fix that and swapping cables around, I ended up with two disabled disks. I started a parity rebuild of those two disks; apparently there is no way to just add them back into the array once they are disabled. Then this afternoon my docker service crashed, which is when all my shares disappeared. My docker service has been crashing fairly regularly since installing 6.9.0. Browsing the disks, the folders are there with data in them, but the shares are missing, and because the shares are missing all access to these files has stopped. I don't want to reboot the server, as this will interrupt the parity rebuild. Any help would be appreciated; diagnostics attached. 35highfield-diagnostics-20210415-1618.zip
  5. I tried swapping cables to see if that was an issue and ended up with two disks disabled. Doh... I don't see any issue with the disk in the SMART test, so I am just going to rebuild and see if it happens again. Thanks for the help.
  6. I have this; I believe it is from before I rebooted. 35highfield-smart-20210414-2238.zip
  7. What do you mean, "is it happening now"? It's still occurring. I just rebooted to see if it would fix the issue; it didn't.
  8. I will check all the connections when I get to the office, but the server doesn't move, so I don't think it could be a connection, and I have never had a cable just die before. The hard drives are connected in sets of four to a SAS backplane, so if it were a cable I would expect to lose four disks, not just one. Please find diagnostic data attached. 35highfield-diagnostics-20210415-0750.zip
  9. Hi, I have had a disk die tonight. It's being emulated successfully, but when I look at the SMART data for the disk I am not sure what I am looking at, and I am trying to work out the reason for it dying. Was I just unlucky, or is there something I am missing in the data?
  10. I have just set this up on a P4000. Can anyone explain to me why the current and reported hashrates are so different? Is this because of dev and pool fees?
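For what it's worth, dev and pool fees alone usually only account for a few percent of the gap; a larger difference is more often short-term variance in shares submitted. A minimal sketch of the fee arithmetic — the 1% dev fee, 1% pool fee, and 24 MH/s local rate are assumed illustrative values, not the poster's actual setup:

```shell
#!/bin/sh
# Each fee skims a fraction of the work credited at the pool, so the
# pool-reported rate should hover around local_rate * (1-dev) * (1-pool).
local_rate_mhs=24.0   # rate the miner reports locally (assumed value)
dev_fee=0.01          # 1% dev fee (assumed)
pool_fee=0.01         # 1% pool fee (assumed)

effective=$(awk -v r="$local_rate_mhs" -v d="$dev_fee" -v p="$pool_fee" \
  'BEGIN { printf "%.2f", r * (1 - d) * (1 - p) }')
echo "$effective"     # 24.0 * 0.99 * 0.99 = 23.52 MH/s
```

If the pool-reported figure is much lower than this, the cause is usually averaging windows or stale shares rather than fees.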
  11. After resolving the missing HttpServer folder error, the latest update broke it again and I had to update my template to remove that fix. That regained me access to the GUI; however, I am now getting the following error. I backed up one of my backup tasks and deleted it, then tried restoring that backup, and the file import failed. I then added the backup task manually, but that also did not resolve the issue. Any ideas?
  12. I have managed to fix this with the following, which I found on the Duplicati forum from 2018.
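The actual fix isn't quoted in the post above. The workaround commonly cited on the Duplicati forum for this error is that /tmp/HttpServer inside the container is left over from a run under a different user, so the web server can no longer write to it; removing it lets Duplicati recreate it with the correct ownership. A sketch only — the container name "duplicati" is an assumption, and the path comes from the error message below:

```shell
# Remove the stale web-server scratch directory inside the container
# (assumed container name: duplicati).
docker exec duplicati rm -rf /tmp/HttpServer

# Restart the container so the embedded HttpServer initialises cleanly.
docker restart duplicati
```

Because /tmp inside the container is wiped on recreation anyway, re-creating the container from the template would have a similar effect.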
  13. I am getting the following error over and over. I have tried restoring the container by re-downloading it, but that does not seem to fix the issue. I am not sure how long this has been going on for, but it stops the container from launching properly. Any ideas?
      A serious error occurred in Duplicati: System.UnauthorizedAccessException: Access to the path "/tmp/HttpServer" is denied.
        at System.IO.Directory.CreateDirectoriesInternal (System.String path) [0x0005e] in <254335e8c4aa42e3923a8ba0d5ce8650>:0
        at System.IO.Directory.CreateDirectory (System.String path) [0x0008f] in <254335e8c4aa42e3923a8ba0d5ce8650>:0
        at HttpServer.HttpServer.Init () [0x0010a] in <bed89f1655ee48029f6d6812f54c58ad>:0
        at HttpServer.HttpServer.Start (System.Net.IPAddress address, System.Int32 port) [0x00026] in <bed89f1655ee48029f6d6812f54c58ad>:0
        at Duplicati.Server.WebServer.Server..ctor (System.Collections.Generic.IDictionary`2[TKey,TValue] options) [0x00215] in <c5f097a49c0a4f1fb0f93cf3f5f218b1>:0
        at Duplicati.Server.Program.StartWebServer (System.Collections.Generic.Dictionary`2[TKey,TValue] commandlineOptions) [0x00000] in <c5f097a49c0a4f1fb0f93cf3f5f218b1>:0
        at Duplicati.Server.Program.RealMain (System.String[] _args) [0x00227] in <c5f097a49c0a4f1fb0f93cf3f5f218b1>:0
      A serious error occurred in Duplicati: System.UnauthorizedAccessException: Access to the path "/tmp/HttpServer" is denied.
  14. So can I safely assume it is balanced, even though it is incorrectly reporting so, and remove the smaller drive?
  15. I am moving from a single 250GB SSD cache disk to two (possibly three) 1TB SSD cache disks. (I have high cache utilisation: my mover currently runs every hour, and I also have my VMs, appdata, system, isos and Plex metadata on unassigned disks and would like to add some redundancy to these.) I have currently installed one of the 1TB drives, set the RAID configuration to 1, and run the balance, with the aim of balancing the drives, then removing the 250GB drive, adding the second 1TB drive, and running the balance again. The balance has run, but I would expect there to be only 250GB available in the cache pool, and for some reason I have 606GB available. This suggests to me that the balance has not completed or the RAID is not set up correctly. (I did try running the balance multiple times, but it still reports 'no balance found'.) Can anyone point me in the right direction?
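On Unraid the cache pool is btrfs, and the GUI's free-space figure for a mixed-size raid1 pool is often misleading, so it can help to inspect the pool from a terminal. A sketch under the assumption that the pool is mounted at /mnt/cache; these are privileged commands against real devices, so check the usage output before converting anything:

```shell
# Show how data and metadata chunks are laid out across the pool devices
# and which btrfs profile (single, raid1, ...) each chunk type uses.
btrfs filesystem usage /mnt/cache

# If data or metadata still report the "single" profile, convert both to
# raid1 explicitly rather than relying on a plain balance.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# Check the progress of a running balance.
btrfs balance status /mnt/cache
```

With raid1 on a 250GB + 1TB pair, usable space is limited by the smaller device, which would explain why the reported free space looks wrong until both 1TB drives are in place.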
  16. Thanks for that. I have a few questions based on your suggestion. 1. Will the data in the new array stay intact? 2. I currently have my shares writing to specific disks; will this need to be updated? I assume so, as the drive numbering will change. 3. Will my array be available during the parity rebuild? 4. My SSDs are currently formatted to the array format; does this matter when using them as unassigned disks? 5. Would there be any harm in leaving them where they are? As I understand it, write performance is impacted, but read performance, which is what matters for VMs and Plex metadata, is not. What is the impact of not having TRIM, and would having them as unassigned drives give them TRIM?
  17. My array currently has two SSDs in it. Because the array doesn't support TRIM, I have been led to believe this is a bad idea. The NVMe drive currently has my appdata and my VM data on it; the other SSD has my Plex metadata on it. I want to move both of these drives to unassigned, but without losing the data and without the drives being emulated by the array, so I can re-map my appdata, VMs and metadata after they have been moved. Is this possible? If not, what would people recommend?
  18. Surely that does not make sense. Having all my appdata, domains, and VMs on an NVMe drive is the fastest way to do it, and the slowness of parity only affects writes, so having it parity protected does not make a difference when running a VM or docker, as it is mostly reads with some small writes. My VMs write large data sets such as games to an unassigned drive, and my Plex metadata is on a separate SSD (parity protected), but again that is mostly reads with some writes. I also use the CA Backup plugin. My issue with the yellow indicator was annoyance rather than worry, as I couldn't work out what file was on there.
  19. I figured I would just use the quick and dirty route and used 'rm -r appdata'. Not sure if this is the correct way, but it has fixed the issue.
  20. There is an empty appdata folder on the cache drive; what's the best way to fix this?
  21. Please find logs attached 35highfield-diagnostics-20201130-1852.zip
  22. My situation appears to be unique, and I haven't come across it on the forum. My appdata folder is on an NVMe drive which is part of the array. The share is set to not use the cache drive, and no part of it is stored on the cache drive; however, my appdata has the orange triangle next to it showing it is unprotected. Any ideas? (Ignore disk 2 in the screenshots; it is doing a parity rebuild.)