Everything posted by testdasi

  1. Oops sorry I missed the "instant" keyword. It is not instant because Unraid will try to copy the file to the unionfs location and then delete the original. Unionfs isn't smart enough to realise the RW location is on the same media as your original.
  2. What is your unionfs line in your mount script? That sounds like you set your mount_rclone as the RW location, so when you copy a file over, it tries to upload to gdrive live during the transfer. rclone_upload is instant because it's local until you run the upload script.
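     For reference, mine looks roughly like this (folder names are just my examples, yours will differ) - the point is that the local rclone_upload folder is the RW branch and the rclone mount is RO, so copies land locally and only go to gdrive when the upload script runs:

       unionfs -o cow,allow_other,direct_io,auto_cache,sync_read \
         /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
         /mnt/user/mount_unionfs/google_vfs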
  3. I have a DD-WRT Netgear router that has served me well, but it cannot get over 250Mbps to the internet (it can do gigabit within the network fine). Apparently the processor is not powerful enough to handle gigabit internet. I need some recommendations for another router please. Criteria:
     • Not based on pfSense (or OPNsense etc.). pfSense is too complicated for my simple mind and needs either passing through a NIC to a VM (can't do, no spare PCIe slot available) and/or expensive separate components (e.g. a separate mini computer; the video guides on YT also talk about needing separate switches etc.).
     • Can handle gigabit internet. I took this for granted until finding out that having gigabit LAN ports apparently doesn't guarantee gigabit internet.
     • Not too big (e.g. not 1U server type).
     I can probably use the Netgear as an access point, so I actually don't need the router to have wifi. TIA.
  4. I am also very much interested in the <$200 part. I have looked around but have not found anything at that price point.
  5. Question: are you not afraid that exposing your server to the Internet is rather risky? Or is OpenVPN generally safe?
  6. +1 same on Chrome. Btw, the OP's avatar next to the post is freaking hilarious! 🤣
  7. It sounds to me like you are trying to make Unraid into a FreeNAS clone, which it isn't. ZFS can be self-repairing because of its RAID design. Bringing the ZFS file system over to Unraid will not automatically make it self-repairing, because Unraid isn't RAID (i.e. what johnnie said a few posts back). If you want self-repairing, what you need to request is a partial rebuild feature. Because we can infer where a file is located on a drive, that section (or those sections) of the drive can be reconstructed from the rest of the drives + parity. That feature + the file integrity plugin should make Unraid self-repairing regardless of file system. That is probably more complicated to code on Unraid (because it's not RAID) so it will probably be a while before it's implemented. The grass always looks greener on the other side. Having been to both sides, I can tell you the other side certainly looks greener but is full of cow excretion that, if you are not careful, can explode in your face.
  8. I think with complex issues like this, we need to be scientific and methodical instead of having anyone and everyone reporting problems and telling each other to try this or that. How about this - anyone who reports the problem also reports:
     • What CPU? How much RAM? Array config?
     • Roughly how large is your collection? I think file count, even a rough estimate, is more important here.
     • Have you set your appdata to /mnt/cache (or, for those without cache, /mnt/disk1)? If you haven't, we'll ignore you.
     • Do you have a link between Sonarr and Plex? If yes, have you disabled it? If you haven't, we'll ignore you.
     • Do you have automatic library update on change / partial change? If yes, have you set it to hourly? If you haven't, we'll ignore you.
     • This is more controversial: can you rebuild your db from scratch?
     • <add more points as things progress>
     The key idea is to get all affected users within a sufficiently small boundary that a clear pattern can emerge from all the noise. Perhaps limetech should have a separate topic, with the first post updated with the details for each user reporting the issue. I know the "we'll ignore you" seems harsh, but adding noisy info can be worse than not having the info. And to be brutally honest, if you can't be bothered to help yourself, we can't be bothered to help you.
     Now comes the hypothesising. Reading through this topic again, here is how I would summarise it:
     1. The issue affects a minority of users and not others.
     2. The db corruption seems to have no clear pattern (and is not reproducible by those not affected, e.g. limetech).
     3. Having a cache disk and setting Plex appdata to /mnt/cache seems to help some users but not others.
     4. Cutting the link between Plex and Sonarr seems to help.
     5. Reducing Plex library scan frequency seems to help.
     Based on the above 5 points, could it be that the affected users already have existing corruption in their db? That would kinda explain (1) and (2), i.e. why limetech can't reproduce the issue: hardware idiosyncrasies aside, the key difference is that they would either start from a good db or rebuild a brand new db, which naturally makes it good. It would also explain why (4) and (5) help, because fewer interactions reduce the probability of accessing the bad portion of the db. It's harder to explain (3), unless (3) was the cause of the original db corruption. Setting the db on /mnt/cache skips shfs, which otherwise can be a bit resource hungry. Perhaps the slower performance causes some writes to be done out of order, which can corrupt the db. This also fits with why 6.6.7 is good but 6.7.0 isn't: perhaps all the security fixes slow things down just enough to pass the threshold that leads to corruption.
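     One more thought: to test the "pre-existing corruption" hypothesis, anyone affected could run a quick integrity check on their db before changing anything else. Something like this from the console, or inside the Plex container if sqlite3 isn't on the host (stop the Plex docker first; the path is just an example of the usual appdata layout, adjust to yours):

       sqlite3 "/mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db" "PRAGMA integrity_check;"

     If it prints anything other than "ok", the db was already damaged to begin with.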
  9. Indeed your tests make it clear that these WX Threadrippers need some manual intervention for best performance. Unraid KVM is not smart enough to handle them automatically.
  10. Has anyone seen this behaviour? I know that if Sonarr tries to do something to a file on a unionfs mount (and because the upload location is RW and the rclone mount is RO), it will make a copy of the file from the rclone mount to the upload location with whatever it tried to change (e.g. a rename or a date change). The problem is there is one particular episode (and only that one!) for which the copy and the original are identical in every way (e.g. the filename, dates, the data itself, etc.), so I have no idea what Sonarr is trying to change. You say just upload it to update the file? I did! And once it's done, Sonarr does the exact same thing again. I think this behaviour has always been there; I didn't notice it previously because, before I moved to the gdrive model, writes were done live immediately to storage. The fact that it's a Doctor Who episode makes it even spookier. I'm considering just sodding it and deleting it.
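      For what it's worth, this is the sort of check I've been doing to convince myself the two copies really are identical (paths and filename are just illustrative):

        # checksum + dates of the copy unionfs wrote to the RW (upload) branch vs the original on the rclone mount
        # (the md5sum on the rclone mount streams the whole file down, so it is slow)
        md5sum "/mnt/user/rclone_upload/google_vfs/tv/Doctor Who/episode.mkv" "/mnt/user/mount_rclone/google_vfs/tv/Doctor Who/episode.mkv"
        stat "/mnt/user/rclone_upload/google_vfs/tv/Doctor Who/episode.mkv" "/mnt/user/mount_rclone/google_vfs/tv/Doctor Who/episode.mkv"

      Both checksums and both sets of dates come back the same, hence my confusion.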
  11. Please note that it may or may not resolve your issue until you find out exactly what caused the high API calls. You might want to follow the other users on this topic who have multiple client_ids for different purposes. Then if one client_id is banned because of an accidental overload, you can switch to another.
  12. When running the benchmark, did you also set numatune to force memory allocation to the right NUMA node? Your Node 0 vs Node 1-3 result looks suspiciously like your VM is fully allocated to node 0 only. For a VM that spreads across multiple nodes, you might even want to force memory to be allocated evenly across the different nodes. In my case, I have had to go as far as running 2 dummy VMs to load up 2 nodes by the right amount so that my main VM splits almost 50-50 across 2 nodes (and then shutting down the dummy VMs - thank goodness for CA User Scripts).
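      For reference, the numatune bit is just a few lines in the VM's XML (edit via the XML view of the VM settings). Something along these lines, where the node number should match wherever you pinned the vCPUs (0 here is only an example):

        <numatune>
          <memory mode='strict' nodeset='0'/>
        </numatune>

      You can then check where the guest memory actually landed with something like numastat -p qemu from the console - the per-node columns should match what you intended.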
  13. Looks like you are the only one based on previous posts.
  14. Untick that partial scan option. On the main library page, hover your mouse over Libraries; there should be a button that, when clicked, shows various options, among which is "rescan Plex library" (or something like that). Click on that to manually rescan. It is good practice to have each movie in its own folder under a main Movies folder, e.g.
      Movies\Movie 1
      Movies\Movie 2
      etc.
      I don't use Radarr but I think it does all the organisation for you, so that's a faster option if you know how to use it. I organise things manually, but then I have always done that for years.
  15. I think it's a perfect storm kind of situation. My French is terrible, but I'm guessing you set it up to scan on partial change? When you are uploading files to gdrive, every little change will cause a scan of the folder. You have all the movies in the same folder, which probably means Plex will rescan the entire folder, including files that were not changed (it doesn't know what has changed, just that something changed, so it has to scan to find out). You might want to disable automatic scanning and do it manually while you reorganise your library.
  16. Something is making too many API calls, causing yours to be blocked. You need to check what your various dockers are doing. From previous posts, it looks like Bazarr and Emby / Plex subtitle searches may be the main contributors. I have run Plex and Sonarr refreshes a few times today already, plus calculated how much space I'm using (a lot of API calls to count things), and I'm nowhere close to the limit. Even against the per-100-seconds limit of 1,000, I only get to 20% on the worst day. So your dockers must be doing something very drastic to cause an API ban. You might want to separate that docker onto its own client_id. Once banned, there's nothing you can do but wait till your quota is reset. The reset time is usually midnight US Pacific (where the Google HQ is). (You can see when it resets and how many API calls you have made from your API dashboard - https://console.developers.google.com/apis/dashboard then click on quota.) That is assuming you have set up your own API + OAUTH client_id + shared the team drive with the appropriate account.
  17. If I change the rclone conf (e.g. point an account to a different Team Drive), how do I make rclone remount? I tried the unmount script, which unmounts the tdrive, but I can still see the rclone process running (i.e. it only unmounts, it doesn't actually kill the rclone processes holding the old config). I have been restarting just to be safe, but I thought I'd ask in case there's something I have done wrong.
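      I assume the "proper" way would be something like this before re-running the normal mount script (my mountpoint names, adjust to yours), but corrections welcome:

        # lazily unmount the union mount and the rclone vfs mount underneath it
        fusermount -uz /mnt/user/mount_unionfs/google_vfs
        fusermount -uz /mnt/user/mount_rclone/google_vfs
        # kill any rclone mount process still holding the old config
        pkill -f "rclone mount"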
  18. I have an autoboot script that automatically starts my VMs and dockers in the right sequence (for better NUMA memory allocation, docker dependencies, etc.). The funny thing is, if I set it to run at array start, it gives me an error that it can't communicate with virsh and docker (i.e. the services haven't started yet). However, if I set it to run at FIRST array start, it runs perfectly fine. I even tried making the script wait a number of seconds, and went as far as 5 minutes, and it still gave the same can't-communicate-with-virsh-and-docker error. Maybe the script blocks the docker and virsh services or something? It doesn't affect me much (I typically don't restart the array without a reboot) but I just thought to report it.
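      In case it helps with debugging, I'm thinking of putting a little diagnostic loop like this at the top of the script to see whether the services ever become reachable when launched at array start (log path is just an example):

        #!/bin/bash
        # log how long it takes (if ever) for libvirt and docker to respond
        start=$(date +%s)
        until virsh list >/dev/null 2>&1 && docker info >/dev/null 2>&1; do
          sleep 5
        done
        echo "virsh and docker reachable after $(( $(date +%s) - start ))s" >> /tmp/autoboot_wait.log

      If that line never appears in the log, it would support the theory that the script launched at array start somehow blocks the services from starting.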
  19. It's admin / password. I just tried. To be honest, I prefer jdownloader2 as I can use multiple proxies.
  20. Thank you. It has gone through 800+ GB already without any error, so it looks like step 2 was indeed the step I missed. I'm running a script to sequentially go through 4 subfolders of 700GB each. My current connection means I can only go through about 3 folders/day, so I should not be reaching the limit on any of the accounts. Your control file logic gate was quite elegant, so I repurposed it to make my script run perpetually unless I delete the control file. I just need to "refill" daily and forget about it.
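      In case anyone wants to do the same, the perpetual part is basically just a loop around the rclone move commands that keeps going for as long as the control file exists (file path and remote names are from my setup):

        #!/bin/bash
        CONTROL=/mnt/user/appdata/other/rclone/upload_running
        touch "$CONTROL"   # delete this file to stop the loop after the current pass
        while [ -f "$CONTROL" ]; do
          # one rclone move per 0x subfolder/remote, 01 shown here
          rclone move /mnt/user/rclone_upload/tdrive_01_vfs/ tdrive_01_vfs: --min-age 30m --bwlimit 110000k --tpslimit 3 --delete-empty-src-dirs -vv
          # ...same again for tdrive_02/03/04...
          sleep 600
        done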
  21. Stupid question: does it work cross platform? e.g. Linux docker detecting Windows / MacOS viruses?
  22. Hey! I think I figured out what was wrong! When I do rclone config with the authorise link and the sign-in screen comes up in the browser, I was clicking the main account, because I couldn't see the team drive if I clicked on the account corresponding to the client_id. So your step 2 looks to be the step I missed. Once I added the other accounts as Content Manager, I could see the team drive when clicking on the account corresponding to the client_id. I just did a test transfer for all the accounts and the activity page shows the actions on the corresponding accounts. I've kicked off the upload script and we'll probably know if it works by dinner time! Don't use Krusader to check - it somehow doesn't show unionfs correctly for me either. MC works, Sonarr works, Plex works, Krusader doesn't. Using mc from the console is the most reliable way to check the mount itself, as it cuts out any docker-specific problems.
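      If you want a quick sanity check without even opening mc, both of these from the Unraid console work, completely outside any docker (remote and folder names are my examples):

        # what the remote itself sees, bypassing unionfs and the dockers entirely
        rclone lsd tdrive_vfs:
        # then the union mount that the dockers are actually pointed at
        ls -la /mnt/user/mount_unionfs/google_vfs/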
  23. 1. Yes.
      2. You split this into point 2 and point 3, so perhaps I have missed something here. By "shared" do you mean also adding those emails to Member Access on the Gdrive website, e.g. making those emails Content Manager? Or maybe something else?
      3. Yes. What I did was have a unique email for each + create a project for each (unique project names too) + add the Google Drive API to each project + create unique OAUTH client_ids and secrets for each project API, and use those for the unique remotes.
      4. Yes (see below for a section of my conf).
      5. Yes. I noticed you have --user-agent="unRAID", which I didn't have, so I will try that this morning when my limit is reset.
      A question: when your unique client_id moves things to gdrive, does the gdrive website show your activity (click on the (i) icon, upper right corner under the G Suite logo, then click Activity) as "[name of account] created an item" or does it show as "You created an item"? (As in it literally says "You", not the email address or your name.) For me it shows as "You" regardless of upload account, so maybe that's an indication of something being wrong?
      .rclone.conf
      [gdrive]
      type = drive
      client_id = 829[random stuff].apps.googleusercontent.com
      client_secret = [random stuff]
      scope = drive
      token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}

      [gdrive_media_vfs]
      type = crypt
      remote = gdrive:crypt
      filename_encryption = standard
      directory_name_encryption = true
      password = abcdzyz
      password2 = 1234987

      [tdrive]
      type = drive
      client_id = 401[random stuff].apps.googleusercontent.com
      client_secret = [random stuff]
      scope = drive
      token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}
      team_drive = [team_drive ID]

      [tdrive_vfs]
      type = crypt
      remote = tdrive:crypt
      filename_encryption = standard
      directory_name_encryption = true
      password = abcdzyz
      password2 = 1234987

      [tdrive_01]
      type = drive
      client_id = 345[random stuff].apps.googleusercontent.com
      client_secret = [random stuff]
      scope = drive
      token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}
      team_drive = [team_drive ID]

      [tdrive_01_vfs]
      type = crypt
      remote = tdrive_01:crypt
      filename_encryption = standard
      directory_name_encryption = true
      password = abcdzyz
      password2 = 1234987

      etc...
      rclone move command: I have a command for each of 01, 02, 03, 04 and a folder for each. I ensure that each 0x folder has less than 750GB (about 700GB).
      rclone move /mnt/user/rclone_upload/tdrive_01_vfs/ tdrive_01_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --bwlimit 110000k --tpslimit 3 --min-age 30m
  24. Everything works, except for some reason I am limited to 750GB / day total. Once 1 teamdrive hits the limit, everything else (the main gdrive + the other tdrives) errors out too. Same situation with client IDs: once 1 client ID hits the limit, the rest (6 IDs) are blocked. @DZMM: Looks like my daily limit is enforced even more rigorously than when you reported it back on December 19 last year. But then it seemed to have been lifted for you all of a sudden, based on the December 23 post and our recent PMs. Do you remember making any particular changes? My setup is pretty similar to yours except I have fewer client_IDs and I have basically been using the tdrive exclusively (gdrive only for testing purposes).