Everything posted by Abzstrak

  1. disk 5 was in question, so at the least run fsck on it... You're running a single cache with no redundancy and no backups? So, yeah, it's not a matter of IF you will lose data, just WHEN... and that WHEN might be right now. You can run badblocks on an SSD, but honestly that's not a perfect test because of the way SSDs work... you should run fsck on it too. I'd probably go grab whatever tool the manufacturer of the drive provides to test it; the only annoying thing is that might be some stupid Windows utility... My gut feeling is your SSD is dying. Also, you didn't answer when you last trimmed the SSD. PS, there is a plugin that will schedule a backup of your appdata to your array; you should at least do that in the future.
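     For reference, the sort of checks I mean (device name is a placeholder; unraid caches are usually btrfs or XFS, so use whichever matches yours; the filesystem checks want the device unmounted, fstrim wants it mounted):
       btrfs check --readonly /dev/sdX1   # btrfs cache: read-only filesystem check
       xfs_repair -n /dev/sdX1            # XFS cache: dry-run check, changes nothing
       fstrim -v /mnt/cache               # manual TRIM of the mounted cache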
  2. Drives usually die early in their life, or pretty late. I mean, you should recalc parity after you "trusted" the drive that was dropped out, because you can't trust a drive that dropped out of an active array... it might have been fine, but you should recalc parity to be sure. Your idea of the cable isn't bad; they are cheap. Always use new cables when swapping drives or building machines; old ones aren't worth keeping if they could potentially cause any issues at all. memtest would be a good idea too, just to cover bases.
  3. It wouldn't hurt to collect SMART data on all drives and make sure they all look OK. Post here if you are unsure. Do you just have a single cache drive, or is it a pool (raid1?), and when was the last time you trimmed it? badblocks is kinda like memtest for a drive. It can't really fix anything, but it can really help determine if a drive is bad (kinda like memtest does for RAM). I did a quick Google; the Arch wiki has a lot of pretty great info on running badblocks. Obviously you'd not want your array spun up. And be aware that the write test is destructive, but much more thorough; if you use it, you'll have to reconstruct the data from parity. Be aware it takes a while to run, like memtest. Whenever I run badblocks, I usually use the following (the w flag is write, which destroys data): badblocks -svvvwt random /dev/sdX
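     For pulling the SMART data from the command line, smartmontools is the usual route (device name is just a placeholder):
       smartctl -a /dev/sdX   # full report: health status, attributes, error logs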
  4. Why are you confident the drive is fine? SMART is like the check-oil light in a car: hardly 100% accurate, it just tends to give more info. Why did you do a trust array? You should recalc parity at the least now, because that wasn't a great idea. I'd run badblocks on the drive; I like to use random-pattern write tests, but be aware it will clear that drive out. It's the best test I know of to determine if the drive is problematic or not.
  5. RAM should be fine; I'd run badblocks on the drives. The write test is best, but it destroys the data on the drive... probably best to do that though and restore your data from backups. Have you collected SMART data on the drives?
  6. Try enabling the use_ino option. I'm wondering if it will help.
  7. So, out of the Debian man pages: "use_ino: Honor the st_ino field in kernel functions getattr() and fill_dir(). This value is used to fill in the st_ino field in the stat(2), lstat(2), fstat(2) functions and the d_ino field in the readdir(2) function. The filesystem does not have to guarantee uniqueness, however some applications rely on this value being unique for the whole filesystem." If inode numbers are not currently unique across the share, that might piss off SQLite. This sounds worth pursuing. I'm not super familiar with FUSE, but it's supposed to use an internal mechanism instead of inodes; if an app is unaware of that and expects real inodes, I could see it causing the file to become malformed. This would also explain why it's better when kept further from FUSE shares (like /mnt/user). I am on the list of having problems too. I've now added a cache, changed appdata/system to cache-only, and pointed the Plex container there... maybe that will help; it's only been 6 hours since the last corruption. Honestly, I couldn't give a f about rescanning my library, it's annoying, but the DVR schedule, stuff programmed to be recorded that is then lost, etc., is really getting old. Really thinking about just putting OMV back on here.
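     A quick way to check whether a FUSE share is handing out unique inode numbers (paths are just examples from my setup):
       stat -c '%d %i %n' /mnt/user/appdata/*/   # prints device, inode, name per entry
     If two different entries come back with the same device and inode pair, an app like SQLite that keys on st_ino could get confused.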
  8. Yeah, I read the help; the weird thing was that if I set it to cache only, the mover seemingly never tries to move it from the array to the cache pool, which I would expect it to. My drives are all spinning down now the way I wanted, awesome. I mounted my movies and series folders as read-only into Docker, and now the parity drives don't spin up; I can play a show out of a read-only folder and it only spins up the data drive. I only gave RW on the OTA recordings, as I want Plex to be able to delete from there. Thanks for your help.
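     For anyone wanting to do the same, the mounts look roughly like this (container name, image, and paths are just placeholders for my setup):
       docker run -d --name plex \
         -v /mnt/user/Movies:/movies:ro \
         -v /mnt/user/Series:/series:ro \
         -v /mnt/user/OTA:/ota:rw \
         plexinc/pms-docker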
  9. No, I still had them set to cache only. Finally got some time to play with it again; I set it to prefer and now it's moving... I guess the mover doesn't work on cache-only settings?
  10. OK, I put No in both VM manager and Docker... went in and hit Move Now, and nothing... it just sits there and didn't move or do anything. Super annoying. I tried stopping the array and starting it back up, nope. I went ahead and bounced the box, same thing... can't get system to move. I have to assume the mover doesn't want to move it for some valid reason. From /var/log/syslog:
      Jun 11 13:04:15 Athena emhttpd: shcmd (85): /usr/local/sbin/mover |& logger &
      Jun 11 13:04:15 Athena root: mover: started
      Jun 11 13:04:16 Athena root: mover: finished
      Looks like it ran for about a second and stopped... Is there somewhere I can turn up the verbosity of the logs? After the reboot, the Fix Common Problems plugin ran and is giving me this warning: "Share system set to cache-only, but files / folders exist on the array". No kidding... that's what I'm trying to solve, lol. Can I make sure that Docker and VMs aren't running and just move that data to the cache manually?
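     (What I have in mind for the manual route, with Docker and VMs stopped; the disk number is a placeholder for wherever the share actually lives:)
       rsync -avP /mnt/disk1/system/ /mnt/cache/system/
       # verify the copy, then clear the stragglers off the array
       rm -rf /mnt/disk1/system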
  11. Yeah, with 16TB I prefer double parity. All these drives are older; it makes me feel better even though I do have backups.
  12. 6 drives, two of which are parity. Writing to one of the data drives causes the two parity drives to also spin up... therefore 3 of 6 drives spin (half my drives). No, I didn't set Docker and VMs to no, I just shut them all down. Sorry, still new to unraid. I'll try that. Thanks guys.
  13. As I mentioned, I've tried cache only and prefer; neither moves anything from the system share to cache. I have 6 spinners, 3 of which won't sleep, and I think it's because system won't move anything to cache. system only exists on one drive, but I have double parity, so half my drives won't spin down. Moving things to the cache pool seems the obvious answer. I mean... I can rsync it manually, but that seems like a bad idea.
  14. So I added a couple of SSDs for cache, and I've set appdata, domains and system to only be on the cache. I made sure no containers or VMs were running, ran the mover, and everything moved except system. I can't get the sucker to move to cache. Is there a trick to this? I've tried prefer cache and cache only; neither seems to have any effect on the system share. Something must be using it, I just don't know what.
  15. Yep, and it's mounted with default options, which I don't want.
  16. This needs to be done; there are plenty of easy ways to make it happen and maintain the license structure.
  17. Is there a reason iotop is not installed? It's pretty handy for diagnostic purposes, and should be trivial to include in the image... unless it somehow doesn't play nice with the way unraid does storage.
  18. I hadn't considered the swappiness of the containers; I only have two, but I'll look and see if that is somehow related. I think I misunderstood on the RAM drive; I thought you meant to make an additional Docker container just for it, which seemed, well, odd. I planned to just mount a 24GB max tmpfs from unraid and pass that mount to the Docker transcode folder for Plex... The link you provided is interesting though; it appears Docker can mount it as it brings up the container, which would save a step. Now I'm just waiting patiently for my RAM to arrive... then I can start testing it. Regarding the Plex container, I used the Limetech-built template and then changed the repo to the official Plex one... is that not a good idea?
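     Roughly what I'm planning, both variants (mount point, size, and image are my choices, not gospel):
       # host-side tmpfs, passed through as the transcode dir
       mount -t tmpfs -o size=24g tmpfs /mnt/ramdisk
       docker run -d --name plex -v /mnt/ramdisk:/transcode plexinc/pms-docker
       # or let Docker create the tmpfs itself at container start, saving the host-side step
       docker run -d --name plex --tmpfs /transcode:size=24g plexinc/pms-docker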
  19. Yeah, using a plugin seems silly for something so easy and trivial. What benefit is there in a Docker container for doing a one-line mount command for a RAM drive? My swap is not acting the way I expected either... I have swappiness set to 0 and vfs_cache_pressure set to 100, and yet the stupid thing still uses up to 100MB of swap when there is no reason to (~60% RAM usage). I've never had reason to try to get a system to NOT swap unless absolutely necessary; interesting problem. I tried a swappiness of 1, same crap... it insists on using some swap. I don't want it to spin up a drive unless absolutely necessary... ugh.
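     For the record, the knobs I'm poking at (plain sysctl, nothing unraid-specific):
       sysctl vm.swappiness=1            # strongly prefer reclaiming cache over swapping
       sysctl vm.vfs_cache_pressure=100  # default reclaim pressure on dentry/inode caches
       swapoff -a && swapon -a           # flush whatever already landed in swap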
  20. Yeah, this is quite possible... Can you test from another machine? Or temporarily just boot it into Ubuntu or something and try from there?
  21. I like to run badblocks (in write mode) on a questionable drive; it takes a while, but gives me peace of mind... kinda like memtest for block storage. 1 pass is fine, really.
  22. You have single parity, which can recover from a single failure. You have multiple failures, so normal recovery of the array is not gonna happen. You probably need to determine which disks have actual errors (start with the SMART data) and which just have corrupt filesystems. If it's just the filesystem, hopefully you're on XFS; it's pretty resilient... fix the filesystem to recover any data and back it up using your normal methods. You're probably going to end up having to make a new config and restore from backups. Don't use bad hardware for a new config, so make sure it's OK first.
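     A sketch of the filesystem-repair step on unraid, where array disks show up as /dev/mdN (the disk number is a placeholder; start the array in maintenance mode, and always do the dry run first):
       xfs_repair -n /dev/md1   # no-modify mode: report problems, change nothing
       xfs_repair /dev/md1      # the actual repair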
  23. Any cache involved? Or is it writing straight to a single/double parity array? The only way I foresee getting decent speeds would be writing to a btrfs cache pool.
  24. Yeah, seems odd to me, but I could just mount and swapon in there I guess; same effect.
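     (i.e., the usual swapfile dance; the path is arbitrary, and this assumes an XFS-formatted location, since btrfs needs extra care with swapfiles:)
       dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=4096   # 4GB file
       chmod 600 /mnt/cache/swapfile
       mkswap /mnt/cache/swapfile
       swapon /mnt/cache/swapfile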