
testdasi

Members
  • Content Count: 810
  • Joined
  • Last visited
  • Days Won: 1

testdasi last won the day on September 13 2018
testdasi had the most liked content!

Community Reputation: 34 Good
1 Follower

About testdasi
  • Rank: Advanced Member
  • Gender: Undisclosed


  1. Question: are you not afraid that exposing your server to the Internet is rather risky? Or is OpenVPN generally safe?
  2. What do you mean by "stopped being recognized" and "nothing happens"?
  3. +1 same on Chrome. Btw, the OP's avatar next to the post is freaking hilarious! 🤣
  4. Such attitude. Let me simplify it for you. Unraid is a horse. You come to the forum asking if having 2 milking machines will help make it faster and then, after being told a horse is not a cow, you lament that you would have to pay for a horse with fewer features than a free cow. Make sense?
  5. You can try starting each docker one at a time and see which one causes the problem.
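A minimal sketch of that one-at-a-time approach, assuming the standard docker CLI is available on the Unraid console; nothing here is specific to any particular container:

```python
# Start every container one at a time, pausing in between so you can
# watch for the problem. Assumes the `docker` CLI is on PATH.
import subprocess

def docker(*args):
    return subprocess.run(["docker", *args], capture_output=True, text=True).stdout

# List every container, running or stopped.
names = docker("ps", "-a", "--format", "{{.Names}}").split()

for name in names:
    input(f"Press Enter to start '{name}' and watch for the problem...")
    docker("start", name)
```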
  6. It sounds to me like you are trying to make Unraid into a FreeNAS clone, which it isn't. ZFS can be self-repairing because of its RAID design. Bringing the ZFS file system over to Unraid will not automatically make it self-repairing, because Unraid isn't RAID (i.e. what johnnie said a few posts back).
     If you want self-repairing, what you need to request is a partial rebuild feature. Because we can infer where a file is located on a drive, that section (or those sections) of the drive can be reconstructed from the rest of the drives + parity. That feature + the file integrity plugin should make Unraid self-repairing regardless of file system. That is probably more complicated to code on Unraid (because it's not RAID) so it will likely be a while before it's implemented.
     The grass always looks greener on the other side. Having been to both sides, I can tell you the other side certainly looks greener but is full of cow excretion that, if you are not careful, can explode in your face.
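To make the parity idea above concrete, here is a toy sketch assuming single (XOR) parity, which is what Unraid's parity 1 uses; the disk contents are made up and real rebuilds work sector by sector:

```python
# XOR parity: the parity disk stores the XOR of every data disk, so any
# one missing section can be rebuilt from the remaining disks + parity.
data_disks = [b"\x10\x20\x30", b"\x0a\x0b\x0c", b"\xff\x00\x7f"]

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

parity = xor_blocks(data_disks)  # what the parity disk would hold

# "Partial rebuild": reconstruct the section on disk 2 from the others + parity.
rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
assert rebuilt == data_disks[1]
```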
  7. I think with complex issues like this, we need to be scientific and methodical instead of having anyone and everyone reporting problems and telling each other to try this or that. How about this - anyone who reports the problem also reports:
     • What CPU? How much RAM? Array config?
     • Roughly how large is your collection? I think file count, even a rough estimate, is more important here.
     • Have you set your appdata to /mnt/cache (or for those without cache, /mnt/disk1)? If you haven't, we'll ignore you.
     • Do you have a link between Sonarr and Plex? If yes, have you disabled it? If you haven't, we'll ignore you.
     • Do you have automatic library update on change / partial change? If yes, have you set it to hourly? If you haven't, we'll ignore you.
     • This is more controversial: can you rebuild your db from scratch?
     • <add more points as things progress>
     The key idea is to get all affected users within a sufficiently small boundary that a clear pattern can emerge from all the noise. Perhaps we should have limetech run a separate topic with the first post updated with the details of each user reporting the issue. I know the "we'll ignore you" seems harsh but adding noisy info can be worse than not having the info. And to be brutally honest, if you can't be bothered to help yourself, we can't be bothered to help you.
     Now comes the hypothesizing. Reading through this topic again, here is how I would summarize it:
     (1) The issue affects a minority of users and not others.
     (2) The db corruption seems to have no clear pattern (and is not reproducible by those not affected, e.g. limetech).
     (3) Having a cache disk and setting Plex appdata to /mnt/cache seems to help with some users but not others.
     (4) Cutting the link between Plex and Sonarr seems to help.
     (5) Reducing the Plex library scan frequency seems to help.
     Based on the above 5 points, could it be that the affected users already have existing corruption in their db? That kinda would explain (1) and (2), i.e. why Limetech can't reproduce the issue: hardware idiosyncrasies aside, the key difference is that they would either start from a good db or rebuild a brand new db, which naturally makes it good. That would also explain why (4) and (5) help, because fewer interactions reduce the probability of accessing the bad portion of the db. It's harder to explain (3), unless (3) was the cause of the original db corruption: setting the db on /mnt/cache skips shfs, which otherwise can be a bit resource hungry. Perhaps the slower performance causes some writes to be done out of order, which can corrupt the db. This fits even more with why 6.6.7 is good but 6.7.0 isn't: perhaps all the security fixes slow things down just enough to pass the threshold that would lead to corruption.
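For anyone wanting to test the existing-corruption hypothesis on their own db, here is a sketch using SQLite's built-in integrity check; the db path is an assumption (adjust to your appdata layout) and the Plex container should be stopped first:

```python
# Run SQLite's integrity check against the Plex library database.
# A healthy db returns a single row ('ok',); anything else lists problems.
import sqlite3

# Hypothetical path - adjust to wherever your Plex appdata actually lives.
DB = ("/mnt/cache/appdata/plex/Library/Application Support/"
      "Plex Media Server/Plug-in Support/Databases/"
      "com.plexapp.plugins.library.db")

con = sqlite3.connect(DB)
rows = con.execute("PRAGMA integrity_check;").fetchall()
con.close()
print(rows)
```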
  8. Indeed, your tests make it clear that these WX Threadrippers need some intervention for best performance. Unraid KVM is not smart enough to deal with it on its own.
  9. That means nothing. For example, the recent BIOS update for my motherboard cuts the voltage for Precision Boost down a tiny bit so it runs a bit cooler (which I have observed myself and which was widely reported elsewhere). It can still run all-core full load at 3.8GHz stable for days without any issue. Yet it would crash (the entire server!) whenever I tried to merge a panorama using Photoshop in my VM.
  10. That was one year ago. The Ryzen BIOS issue is very recent, like last month. Search the forum for the PSA about that; searching for PSA + Ryzen will probably turn it up.
  11. Has anyone seen this behaviour? I know that if Sonarr tries to do something to a file on a unionfs mount (and because the upload location is RW and the rclone mount is RO), it will make a copy of the file from the rclone mount to the upload location with whatever it tries to change (e.g. a rename or a date change). The problem is there is one particular episode (and only that one!) for which the copy and the original are identical in every way (filename, dates, the data itself, etc.), so I have no idea what Sonarr is trying to change. So you say just upload it to update the file? I did! And once it's done, Sonarr does the exact same thing again. I think this behaviour has always been there; I didn't notice it previously because, before I moved to the gdrive model, the write was done live immediately to storage. The fact that it's a Doctor Who episode makes it even spookier. I'm considering just sodding it and deleting it.
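One way to confirm the two copies really are identical, including the metadata Sonarr might be touching, is to hash and stat both. Both paths below are hypothetical placeholders for the upload-location copy and the rclone original:

```python
# Compare size, mtime, permissions and a SHA-256 digest of both copies.
import hashlib, os

UPLOAD = "/mnt/user/upload/TV/Doctor Who/episode.mkv"  # hypothetical path
RCLONE = "/mnt/user/mount/TV/Doctor Who/episode.mkv"   # hypothetical path

def digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for path in (UPLOAD, RCLONE):
    st = os.stat(path)
    print(path, st.st_size, st.st_mtime, oct(st.st_mode), digest(path))
```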
  12. Install the Dynamix Statistics plugin. It adds a Stats tab with a subtab that shows you network activity in a nice graph and all.
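If you just want a quick number without installing anything, here is a rough sketch of the same idea, sampling /proc/net/dev; the interface name is an assumption, since Unraid boxes often use br0 or bond0 instead of eth0:

```python
# Sample rx/tx byte counters twice, one second apart, and print throughput.
import time

def rx_tx(iface):
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":")[1].split()
                return int(fields[0]), int(fields[8])  # rx_bytes, tx_bytes
    raise ValueError(f"interface {iface} not found")

IFACE = "eth0"  # assumption; check `ip link` for your actual interface
rx1, tx1 = rx_tx(IFACE)
time.sleep(1)
rx2, tx2 = rx_tx(IFACE)
print(f"{IFACE}: {(rx2 - rx1)/1024:.1f} KiB/s down, {(tx2 - tx1)/1024:.1f} KiB/s up")
```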
  13. Do you overclock? Do you have Precision Boost enabled? Ryzen instability is in the rear-view mirror by now, so locking up is more likely due to a RAM and/or CPU overclock.