Corvus

Everything posted by Corvus

  1. I don't think so. How do I check? If that were the case, wouldn't this be happening all the time, as I usually have multiple users streaming off my server remotely every day?
  2. I'll do it after the parity check finishes tomorrow. In the meantime, can someone please tell me how I could find out how this happens?
  3. Thanks for the tip. However, it's currently greyed out for me, so I'm not able to enter any value. Could it be because the system is currently performing a parity check (after the reboot)?
  4. Hey guys, This is the second time this has happened. The first time, I shrugged it off as an anomaly - maybe a rogue process or a docker I forgot to turn off. However, this time, I have nothing out of the ordinary running. I woke up this morning to find notifications on my Unraid web UI saying:
     Unraid Cache disk disk utilization: 17-01-2024 07:13 Alert [NAS] - Cache disk is low on space (91%) SanDisk_SDSSDHP256G_132803400467 (sdf)
     Unraid Cache disk disk utilization: 17-01-2024 07:15 Alert [NAS] - Cache disk is low on space (92%) SanDisk_SDSSDHP256G_132803400467 (sdf)
     Unraid Cache disk disk utilization: 17-01-2024 07:17 Alert [NAS] - Cache disk is low on space (94%) SanDisk_SDSSDHP256G_132803400467 (sdf)
     Unraid Cache disk disk utilization: 17-01-2024 07:18 Alert [NAS] - Cache disk is low on space (95%) SanDisk_SDSSDHP256G_132803400467 (sdf)
     Unraid Cache disk disk utilization: 17-01-2024 07:19 Alert [NAS] - Cache disk is low on space (96%) SanDisk_SDSSDHP256G_132803400467 (sdf)
     Unraid Cache disk disk utilization: 17-01-2024 07:20 Alert [NAS] - Cache disk is low on space (98%) SanDisk_SDSSDHP256G_132803400467 (sdf)
     Unraid Cache disk disk utilization: 17-01-2024 07:21 Alert [NAS] - Cache disk is low on space (99%) SanDisk_SDSSDHP256G_132803400467 (sdf)
     Unraid Cache disk disk utilization: 17-01-2024 07:23 Alert [NAS] - Cache disk is low on space (100%) SanDisk_SDSSDHP256G_132803400467 (sdf)
     This was followed at 08:25 by another notice: 'Notice [NAS] - Cache disk returned to normal utilization level SanDisk_SDSSDHP256G_132803400467 (sdf)'. I'm guessing this was Mover doing its thing. My cache is 2x 256GB SSDs in RAID 0, making up 512GB total. Right now, the cache is sitting on 223GB free. Every time I've noticed the cache hit 100%, my dockers fail to work until I completely reboot the server. I've only just learned about the Disk Activity plugin, so I unfortunately can't rely on that (unless it happens again soon). The only dockers I had running were Plex, transmission (with no torrents pending or downloading), Radarr, Sonarr, nzbget, bazarr and tautulli. I've attached my diagnostics (I'm yet to reboot the server, however I will do so after I post this). How can I see what was responsible for temporarily eating all my cache, and what was (ostensibly) writing to my array? Please be clear and use step-by-step explanations, as I'm a noob and don't know my way around Linux. Thanks for any help. nas-diagnostics-20240117-1634.zip
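     A minimal sketch of one way to see which top-level folders are taking up space on the cache pool, assuming python3 is installed on the Unraid box (e.g. via NerdTools) and the cache is mounted at the usual /mnt/cache path:

        # Sketch: sum file sizes under each top-level folder of the cache pool,
        # so the share or appdata folder eating the space stands out.
        # Assumes the cache pool is mounted at /mnt/cache (standard Unraid path).
        import os

        CACHE_ROOT = "/mnt/cache"  # assumption: default Unraid cache mount point

        def folder_size(path):
            """Walk a folder and add up the size of every regular file inside it."""
            total = 0
            for dirpath, _dirnames, filenames in os.walk(path, onerror=lambda e: None):
                for name in filenames:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass  # file vanished or unreadable; skip it
            return total

        sizes = []
        for entry in os.scandir(CACHE_ROOT):
            if entry.is_dir(follow_symlinks=False):
                sizes.append((folder_size(entry.path), entry.name))

        # Print the largest folders first, in GiB
        for size, name in sorted(sizes, reverse=True):
            print(f"{size / 1024**3:8.1f} GiB  {name}")

     Pointing CACHE_ROOT at an individual array disk (e.g. /mnt/disk1) would give the same kind of breakdown for whatever ended up on the array.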
  5. Hey all, So I logged into my server and noticed it had started a parity check. When I looked at the uptime, I noticed it had rebooted 10 minutes earlier. There was no electrical outage, and it's constantly connected to a UPS. I got the MCE check system notification, and it told me to install MCElog from NerdPack, although I know that has been deprecated. I tried installing NerdTools, but I couldn't find MCElog in there. Where do I find it? Here's my diagnostics. Can anyone shed some light on what's wrong? nas-diagnostics-20230829-0054.zip
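     A rough way to pull any machine-check lines out of the kernel log while mcelog isn't installed, assuming python3 is available and the script is run as root on the Unraid host (the /var/log/syslog path is the usual Unraid location):

        # Sketch: scan dmesg output and the syslog for machine-check events.
        import re
        import subprocess

        PATTERN = re.compile(r"mce|machine check", re.IGNORECASE)

        def scan_dmesg():
            """Return kernel ring buffer lines that mention machine-check events."""
            out = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
            return [line for line in out.splitlines() if PATTERN.search(line)]

        def scan_syslog(path="/var/log/syslog"):
            """Return syslog lines that mention machine-check events."""
            try:
                with open(path, errors="replace") as fh:
                    return [line.rstrip() for line in fh if PATTERN.search(line)]
            except OSError:
                return []

        for line in scan_dmesg() + scan_syslog():
            print(line)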
  6. Hey guys, So let me put the backstory into point form.
     - I have a BlueIris CCTV system running on another machine with no dedicated GPU.
     - I have outsourced the AI recognition to my Unraid box because it has a GPU.
     - I use CodeProject.AI, and since they don't have an official docker which supports Nvidia GPUs, I followed the advice on this thread (CodeProject AI GPU on unRaid : unRAID (reddit.com)) to use a Deepstack docker as a template and turn it into a CodeProject.AI docker running with my GPU.
     - Everything was working flawlessly.
     - A few days later, I started noticing the Docker utilization warning going up by 1 percentage point per day, starting from 75%.
     - The warnings stopped at 84%.
     - 5 days later, I got a warning that it had now jumped to 100%.
     - Paradoxically, all my dockers still work flawlessly (even CP.AI).
     - Checked Docker container sizes: the total is '18.4 GB', despite my docker image size being 30 GB.
     What's going on? I know it's almost certainly something to do with the CP.AI docker I made, however it doesn't point to any directories on my system, and Docker itself is reporting that just over half of its capacity is used. Not sure if this is relevant, but I've also got a notification saying that CodeProject.AI-Server-GPU needs updating, which is weird because I essentially created the docker. If I press update, will it revert back to the Deepstack docker? I also ran the 'Fix Common Problems' plugin today, and nothing seems out of the ordinary. Here are my logs: nas-diagnostics-20230220-0008.zip Where do I go from here?
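     A small sketch of how one might compare what Docker itself reports against the docker.img utilization warning, assuming python3 is available on the host; both commands used below are standard docker CLI calls:

        # Sketch: print Docker's own view of what it is storing.
        import subprocess

        def run(cmd):
            """Run a command and return its stdout as text (empty string on failure)."""
            result = subprocess.run(cmd, capture_output=True, text=True)
            return result.stdout if result.returncode == 0 else ""

        # Overall breakdown: images, containers, local volumes, build cache
        print(run(["docker", "system", "df"]))

        # Per-container size, including the writable layer; a container whose
        # writable layer keeps growing (logs, temp files, models downloaded
        # inside the container instead of a mapped path) is the usual culprit.
        print(run(["docker", "ps", "-a", "-s", "--format", "{{.Names}}\t{{.Size}}"]))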
  7. Yeah you're right. It's now showing 91MB/s @ 80% complete. I'm guessing it will finish the last 20% faster?
  8. nas-diagnostics-20221219-2157.zip Here you go!
  9. Ok so a new development. I logged onto my NAS this afternoon, and now the old parity drive has a grey square next to it. When I hover over it, it says 'new device, in standby mode (spun down)', despite the fact that it also says 'Reading' in the status, and the fact that I've already set its spin down delay to 'never'. Despite this, it's gone up a percentage point since then, so it looks like something's still happening. The new parity drive is writing *something* at a super slow speed, but Unraid is now showing the 'read' speed of the old parity drive to be stuck at 0.0 B/s. How is Unraid writing data to the new parity drive when it's not reading the old one - or any drive for that matter? P.S. When I click 'spin up all', literally nothing happens.
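     One way to check what the drives are really doing, independent of the icons in the web UI, is to query their power state directly; a sketch assuming python3 and hdparm are present (hdparm normally ships with Unraid) and the script runs as root:

        # Sketch: ask each disk for its real power state with `hdparm -C`
        # instead of trusting the spun-down/active icon in the web UI.
        import glob
        import subprocess

        # /dev/sd? covers SATA/SAS disks; NVMe devices do not spin down.
        for dev in sorted(glob.glob("/dev/sd?")):
            out = subprocess.run(["hdparm", "-C", dev], capture_output=True, text=True).stdout
            # hdparm prints something like " drive state is:  active/idle" or "standby"
            state = out.strip().splitlines()[-1] if out.strip() else "no response"
            print(f"{dev}: {state}")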
  10. So if I understand correctly, what you are saying is that the parity process keeps sending the spin down command, but since the disks keep receiving read commands, the disk keeps rapidly alternating between the spin down/up states, resulting in low transfer speeds? If true, this sounds alarming to me. Wouldn't this rapid, constant switching due to conflicting information put heavy stress on the disks?
  11. Yes I do. Does this open up my options?
  12. That's what I'm saying. The drives aren't even spun down. Every time I look at them, they're active anyway. So this will achieve nothing, *and* I'll have to start over. So there's nothing else I can do? No idea what caused this?
  13. I doubt disabling spin down would fix it. I'm seeing all disks green right now (active), and it's still not going any faster than 25Mb/s. At this rate, it'll take a week! If I stop the procedure now, will I screw up any data/the array?
  14. But disk 1 was the previous parity drive. You mean I'll lose the contents of the old disk that used to be in that slot? What are my options now? At this speed, this will take a week! Why isn't this mentioned in the official Unraid documentation?? I followed it exactly.
  15. Hey guys, So I'm having a similar issue to the OP, where the parity info is copying criminally slowly from my old drive to the new drive. If I do a new config as recommended to the OP earlier in this thread, will I lose any data/shares on the array?
  16. Hey guys, So my existing parity drive is 6TB. I want to replace it with a new, bigger drive (8TB) and demote the ex-parity drive to an array drive, where it will replace an existing 3TB drive. Specs: Intel® Core™ i5-7400, ASRock Z270 Gaming-ITX/ac, 16GB RAM. I followed the procedure exactly as it says in the official Unraid documentation, and so far so good. It's up to the stage where I'm copying parity info from the old drive (which has now been assigned to the array slot that the old 3TB drive used to occupy) to the new parity drive (assigned to the same slot as the old parity drive). I started this process yesterday evening at around 10.30pm. It is now 4pm *the next day* and it's still on 26%! Is this normal? If not, what can I do? I'm scared of stopping/interrupting anything in case I screw up my array. Here are my diags: nas-diagnostics-20221217-1603.zip
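     As a rough sanity check on how long a parity copy like this should take, a back-of-envelope estimate follows; the speeds used are assumptions for illustration, not measurements from this system:

        # Sketch: estimate how long copying 6 TB of parity takes at a given
        # sustained speed. Speeds below are illustrative assumptions.
        PARITY_BYTES = 6 * 10**12  # 6 TB old parity drive

        for mb_per_s in (25, 100, 150):
            seconds = PARITY_BYTES / (mb_per_s * 10**6)
            print(f"{mb_per_s:3d} MB/s -> {seconds / 3600:5.1f} hours")

        # ~100-150 MB/s gives roughly 11-17 hours; at 25 MB/s it balloons to
        # ~67 hours, which is why sitting at 26% after ~17 hours suggests
        # something is throttling the disks rather than a normal copy.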
  17. Ok cool thanks. I'll report back after this is all done. Thanks for your help!
  18. Yeah I do. I tried copying the recovered files back to the affected drive, and the share now successfully reports those files mixed in with the other files on the unaffected drive in the array. Now I just need to do this whole process 4 more times. I forgot that copying to a disk directly bypasses the cache. Although, since I've sparked my own curiosity, what's the answer to this question, and where are the settings that govern whether or not it starts copying to the array directly when the cache is full?
  19. Ok so I've recovered the data from the first drive. I've placed the original drive back into the Unraid array and booted. So far so good. Now I'm trying to copy the data from the spare drive (which is connected to my Windows PC on the same LAN) to this specific original drive. It's important that I do this, because if I allow Unraid to decide which disk to put the data on, it will overwrite recoverable data on the other (now empty) drives which I haven't recovered data from yet. I've enabled disk shares, and I can see the disk among the Samba shares in Windows. Two problems:
      1. Since the original drive is now empty, how do I know what the original file structure was? Disk 6 (which was untouched and still has its files intact) has folders that correspond to the shares I set up in Unraid, but since I set the array to distribute data with the 'high water' setting, it naturally doesn't contain all the files. Should I just recreate this folder structure on the original (now empty) drive and start copying my files into the relevant folders?
      2. When I copy anything to Unraid, it first gets copied to my cache drive. Since the amount of data I'm copying far exceeds the capacity of my cache drive, what will happen once that capacity is reached? Will it just automatically start writing to the disk itself? Or will it stop and tell me it's out of space until I invoke the Mover script? Where can I go to check this setting?
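      Regarding question 2: in Unraid this generally comes down to each share's cache setting and its 'Minimum free space' value in the share's settings. A trivial sketch for keeping an eye on the cache while a big copy runs, assuming python3 is available and the cache pool is mounted at the usual /mnt/cache path:

         # Sketch: print the cache pool's free space every 30 seconds during a
         # large copy, so you can see when it is about to fill up. Stop with Ctrl+C.
         import shutil
         import time

         CACHE_MOUNT = "/mnt/cache"  # assumption: default Unraid cache mount point

         while True:
             usage = shutil.disk_usage(CACHE_MOUNT)
             free_gib = usage.free / 1024**3
             used_pct = 100 * usage.used / usage.total
             print(f"cache: {free_gib:7.1f} GiB free ({used_pct:4.1f}% used)")
             time.sleep(30)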
  20. I see what you mean, but I think our situations aren't that dissimilar. The only difference is in the way the data is stored on the array. Nevertheless, I've found a way to copy the recovered data specifically to that original disk once it's installed back into Unraid. With that considered, I believe the procedure should otherwise be the same.
  21. So then why wouldn't they finish the job? Why would they leave one drive untouched? They had all night to do it. Also why? Just for shits and kicks? What does someone have to gain from doing this?
  22. Right, but I can't see any option to just delete an entire disk from the webUI, assuming someone got access to it.