testdasi

Members · Posts: 2,812 · Days Won: 17
Everything posted by testdasi

  1. It's admin / password. I just tried. To be honest, I prefer jdownloader2 as I can use multiple proxies.
  2. Thank you. It has gone 800+ GB already without any error, so it looks like step 2 was indeed the step I missed. I'm running a script to sequentially go through 4 subfolders of 700GB each. My current connection means I can only go through about 3 folders/day, so I should not be reaching the limit on any of the accounts. Your control file logic gate was quite elegant, so I repurposed it to make my script run perpetually unless I delete the control file. I just need to "refill" daily and forget about it.
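     For anyone curious, the loop is roughly like the sketch below; the control file location, paths and remote names are made up for illustration, so adjust to your own setup.

         #!/bin/bash
         # Minimal sketch of a "run until the control file is deleted" upload loop.
         CONTROL=/mnt/user/appdata/other/rclone/upload_running   # example location
         touch "$CONTROL"

         while [ -f "$CONTROL" ]; do            # delete this file to stop the loop
             for i in 01 02 03 04; do           # one subfolder + remote per account
                 rclone move "/mnt/user/rclone_upload/tdrive_${i}_vfs/" "tdrive_${i}_vfs:" \
                     --min-age 30m --delete-empty-src-dirs -v
             done
             sleep 3600                         # wait an hour before the next pass
         done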
  3. Stupid question: does it work cross-platform? e.g. a Linux docker detecting Windows / macOS viruses?
  4. Hey! I think I figured out what's wrong! When I do rclone config with the authorise link and the sign-in screen comes up in the browser, I clicked the main account because I can't see the team drive if I click on the account corresponding to the client_id. So your step 2 looks to be the step I missed. Now that I have added the other accounts as Content Manager, I can see the team drive when I click on the account corresponding to the client_id. Just did a test transfer for all the accounts and the activity page shows the actions on the corresponding accounts. I just kicked off the upload script and we'll probably know if it works by dinner time! Don't use Krusader to check. It somehow doesn't show unionfs correctly for me either. MC works, sonarr works, Plex works, Krusader doesn't. Using mc from the console is the most reliable way to check the mount itself, as it cuts out any docker-specific problem.
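     In case it helps, this is roughly what I run from the console to check; the remote name and mount path below are just examples, so substitute your own.

         rclone lsd tdrive_01:            # the team drive folders should be listed for each remote
         mc /mnt/user/mount_rclone        # browse the mount directly, bypassing docker path mappings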
  5. 1. Yes.
     2. You split this into point 2 and point 3, so perhaps I have missed something here. By "shared" do you mean also adding those emails to the Member Access on the Gdrive website, e.g. making those emails Content Manager? Or maybe something else?
     3. Yes. What I did was have a unique email for each + create a project for each (unique project names too) + add the Google Drive API to each project + create a unique OAUTH client_id and secret for each project API, and use those for each unique remote.
     4. Yes (see below for a section of the conf).
     5. Yes. I noticed you have --user-agent="unRAID" which I didn't have, so I will try that this morning when my limit is reset.
     A question: when your unique client_id moves things to gdrive, does the gdrive website show your activity (click on the (i) icon, upper right corner under the G Suite logo, and then click Activity) as "[name of account] created an item" or does it show "You created an item"? (As in it literally says "You", not the email address or your name.) For me it shows "You" regardless of the upload account, so maybe that's an indication of something being wrong?
     .rclone.conf
         [gdrive]
         type = drive
         client_id = 829[random stuff].apps.googleusercontent.com
         client_secret = [random stuff]
         scope = drive
         token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}

         [gdrive_media_vfs]
         type = crypt
         remote = gdrive:crypt
         filename_encryption = standard
         directory_name_encryption = true
         password = abcdzyz
         password2 = 1234987

         [tdrive]
         type = drive
         client_id = 401[random stuff].apps.googleusercontent.com
         client_secret = [random stuff]
         scope = drive
         token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}
         team_drive = [team_drive ID]

         [tdrive_vfs]
         type = crypt
         remote = tdrive:crypt
         filename_encryption = standard
         directory_name_encryption = true
         password = abcdzyz
         password2 = 1234987

         [tdrive_01]
         type = drive
         client_id = 345[random stuff].apps.googleusercontent.com
         client_secret = [random stuff]
         scope = drive
         token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}
         team_drive = [team_drive ID]

         [tdrive_01_vfs]
         type = crypt
         remote = tdrive_01:crypt
         filename_encryption = standard
         directory_name_encryption = true
         password = abcdzyz
         password2 = 1234987

         etc...
     rclone move command
     I have a command for each of 01, 02, 03, 04 and a folder for each. I ensure that each 0x folder has less than 750GB (about 700GB).
         rclone move /mnt/user/rclone_upload/tdrive_01_vfs/ tdrive_01_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --bwlimit 110000k --tpslimit 3 --min-age 30m
  6. Everything works, except for some reason I am limited to 750GB / day in total. Once 1 team drive hits the limit, everything else (main gdrive + the other tdrive) errors out too. Same situation with the client IDs: once 1 client ID hits the limit, the rest (6 IDs) are blocked. @DZMM: Looks like my daily limit is enforced even more rigorously than when you reported it back on December 19 last year. But then it seemed to have been lifted for you all of a sudden, based on the December 23 post and our recent PMs. Do you remember making any particular changes? My setup is pretty similar to yours except for having fewer client_IDs, and I have basically been using tdrive exclusively (gdrive only for testing purposes).
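     Side note for anyone following along: rclone also has a --max-transfer flag, so instead of relying purely on folder sizing you could cap each run directly. A rough sketch (path and remote name are just examples):

         rclone move /mnt/user/rclone_upload/tdrive_01_vfs/ tdrive_01_vfs: \
             --max-transfer 700G --min-age 30m -v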
  7. Not just the controller. A cable can be a problem too. Some years ago I had a disk dropping randomly that turned out to be a bad cable / loose connection. I replaced the cable and all was good. So I recommend you replace the cables first and see if the problems come back before going drastic with the controller.
  8. Oh wow, that's great! Just to double check, that's their Business GSuite right? What about the domain name requirement mentioned by DZMM?
  9. Does this only work with vdisk image? Would it work with a passed-through sata ssd (via device-id)?
  10. How did you get "unlimited cloud space"? I thought it's only 2TB.
  11. Best ports, in decreasing order of "best-ness":
      • Internal USB 2.0 ports. You can buy a cheapo USB 2.0 internal-to-external adapter (basically a small little cable with an internal USB 2.0 plug on one end and an external USB 2.0 socket on the other) and plug the Unraid USB stick in there. Alternatively, you can use your case's USB 2.0 header or some all-in-one internal USB 2.0 hub. Reasons for "best-ness": (1) USB 2.0 has the least compatibility issues when booting (almost none); (2) few people have any other use for internal USB 2.0 headers; and (3) the USB 2.0 motherboard controller can't be passed through to a VM even with ACS multifunction (many have tried and many have failed).
      • Internal USB 3.0 ports. Similar to USB 2.0 above, just that USB 3.0 ports sometimes have boot issues.
      • External USB 3.1 port (the red one at the back). This port can't be passed through to a VM, similar to the internal USB 2.0 ports, but given it's externally available, you might want to use it for some other purpose instead of booting Unraid (about 10s saving in boot time and that's it).
      • External USB 3.0 ports (the 8 blue ones at the back). These are on their own controllers (2 controllers, 4 ports each) and in their own IOMMU groups, so they can be passed through to your VM with ease (needs vfio stubbing; see the sketch after this post). Note that they have identical IDs, so if you stub 1 controller you will end up stubbing both (I vaguely remember there's a way around that so you might want to google it if required).
      • USB 3.1 type C - because why would you want to do that?
      Highly NOT recommended to overclock Unraid. Instability on a server is at best a nuisance (e.g. needing to run a parity check whenever things crash) and at worst causes crippling issues (e.g. data corruption). You are better off using AMD Precision Boost. It's an effort-free, per-core, on-demand overclock. Sure, all-core boost can only reach 3.8GHz, but that's only 5% lower than your best-effort overclock, which more likely than not is less stable and definitely uses a lot more power.
      On a side note, don't use the latest BIOS (AGESA 1.1.0.2); use the one right before it (AGESA 1.1.0.1a). The current BIOS has AMD PB cut the voltage down a little bit, which runs 1-2 degrees cooler but has caused me some unexpected stability issues with certain workloads.
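      On the stubbing point above: one common way (at least on the Unraid version I was running) is adding the controller's vendor:device ID to the vfio-pci.ids parameter on the append line in syslinux.cfg. The ID below is just an example; check yours with lspci first.

          # find the vendor:device ID of the USB controller you want to stub
          lspci -nn | grep -i usb
          # then add it to the append line in /boot/syslinux/syslinux.cfg, e.g.
          # append vfio-pci.ids=1022:145c initrd=/bzroot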
  12. +1 (I'm not the user johnnie mentioned). Didn't know this was a bug and didn't have anything valuable on the disk, so I did the low-tech cp method just to get things done.
  13. First and foremost, you are copying from a user share to a disk share. Be careful there, mate. What speed do you get copying from cache to cache? Did you trim your SSD?
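      If you want to trim manually from the console, something like the line below should do it, assuming your cache is mounted at the usual /mnt/cache:

          fstrim -v /mnt/cache    # trims the cache filesystem and reports how much was trimmed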
  14. Just approach it one step at a time. I did it quite a while ago, so I recommend you wait for @itimpi to respond in case the "off the top of my head" steps below have any issues.
      1. Back up your most critical data.
      2. Quickly test that your HBA cards work with Unraid. Usually it's sufficient to just use an existing server, plug the disks into the HBA and boot up to see if Unraid still recognises all the disks automatically. This will help with the not uncommon situation where the HBA truncates the drive ID so Unraid can't recognise the disk order. If there's a problem, stop and ask questions here.
      3. Identify the base server (call it S1) which things will be migrated into. Make sure everything is ready there (e.g. disks plugged into the HBA, it boots up with the correct config, maybe even run it for a few days to sort out any teething issues etc.).
      4. Now pick the server to be emptied out (call it S2). Plug the disks from S2 into S1, boot up and start the array. DO NOT do a New Config yet! Check that all the S2 disks show up in S1 as unassigned.
      5. Now write down the order of the S1 disks (e.g. the serial numbers as shown on the Main page), stop the array and do a New Config. Add the S1 disks back based on the order you wrote down, then add the S2 disks and start the array back up.
      Theoretically, it should just work like that.
      Tips for step 4: What I did previously was turn off the server and physically disconnect the parity disk. Then when I do the New Config, I can simply leave the Parity slot empty and not have to worry about accidentally assigning a data disk to the Parity slot. Once everything is up and running (except parity), I then reconnect the parity disk and assign it to the parity slot (no need to do another New Config here; the newly-reconnected disk will be the only possible choice for parity), start the array and the parity build process starts automatically.
  15. For the people with a corrupted Plex db: do you have a Marvell controller in your system? Note that it may be built into your motherboard, so check your motherboard spec too. A problem with a controller can also cause data corruption. Not sure if this possibility has been eliminated.
  16. The simplest solution is to use Windows' built-in Remote Desktop to rdp into the Windows VM running the game. It's a free solution in the sense that you don't need to buy additional software for remote access, and it takes minimal configuration effort. Latency actually wasn't too bad for me (Surface 3 over wifi to a wired server). Are your "clients" far away from the server? If they are not (e.g. a game room kind of scenario) then running multiple monitors out of the server will give you the best experience. If they are, then you will really have to research which protocol has the lowest latency. Game streaming over the network isn't unheard of, so there ought to be a better solution out there than RDP.
  17. You misunderstood my point. It doesn't matter if Plex is accessing the idle drive or an active drive; the fact that the CPU is waiting for the drive to respond will exclude other activities. Think of it like this:
      1. Mover tells CPU: write file 1 to disk A
      2. CPU: ok, writing
      3. Plex tells CPU: read file 2 from disk B
      4. CPU: nope, writing
      5. Plex: are you done yet?
      6. CPU: nope, still writing
      7. Plex: are you done yet?
      8. CPU: nope, still writing
      9. Plex: are you done yet?
      10. CPU: nope, still writing
      11. Plex: are you done yet?
      12. CPU: nope, still writing
      13. Plex: are you done yet?
      14. CPU tells Mover: ok, file 1 written to disk A
      15. CPU tells Plex: ok, here is file 2 from disk B
      The entire time from step 4 to step 13, the CPU will report 100% usage because it is fully occupied with writing. Yes, we can talk all day about how it is supposed to be multi-tasking in 2019, but the truth is some actions require the CPU (core)'s full attention, and until they're done it won't be able to do anything else. This is the same reason that if you plug a failing drive or SD card into Windows, it will hang the entire system while trying in vain to read the data.
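      If you want to see it for yourself, the time spent waiting on disks is visible from the console with standard tools (iostat comes from the sysstat package, which may not be installed by default):

          top           # the "wa" figure in the %Cpu(s) line is time spent waiting on I/O
          iostat -x 1   # per-device utilisation and wait times, refreshed every second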
  18. Wrong place to ask, mate. You are asking about VM stuff and this topic is about the plugin.
  19. No, it's octopus' fault. 😂 On a serious note though, that sort of fits together (and of course, if it doesn't, just treat my post as a random fart 😅). It's known that the mover can corrupt data if it's run while the file being moved to the array is being accessed at the same time (e.g. by a Plex media scan). Then it wouldn't be surprising if the appdata backup procedure interferes with the sqlite writing procedure and causes data corruption. I noticed the corruption seems to be reported to happen overnight, when it's highly probable that multiple simultaneous processes are running (e.g. Plex rescanning media, appdata backup, the mover etc.). The fact that some users use /mnt/user adds a red herring to the situation, i.e. we have 2 potential sources of corruption: /mnt/user and the simultaneous read + write. It should be easy to test the hypothesis, I reckon.
  20. Nothing you missed. That controller is just impossible to pass through - or at least I have not seen anyone reporting successfully passing it through. Usually only the "USB 3.0" controller can be passed through. Btw, ACS override does not guarantee that a device can be passed through. It helps, but it's not a guarantee.
  21. I reran my benchmark but saw no major difference vs F11e with a pure CPU workload. If anything it's about 2% slower, which could be due to Windows security patches. On the bright side, that means I can trust the numactl core numbering. I did notice GPU-assisted encoding was significantly better: 4k encoding 15% better, 1080p a whopping 37% better! While I'm happy with that, I don't think it's BIOS related (or maybe it is, I don't know). It does seem to run about 1-2 degrees cooler, similar to the post you quoted.
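      For anyone wanting to sanity-check the core numbering on their own box, these report the NUMA node / CPU layout (numactl has to be installed; lscpu is part of util-linux so it should already be there):

          numactl --hardware    # lists each NUMA node and the CPU numbers that belong to it
          lscpu -e              # one line per logical CPU with its core, socket and node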
  22. Already on latest BIOS. Same problem with previous BIOS.
  23. Interesting. How did you pass through the onboard soundcard to the VM? My IOMMU group has it together with a SATA controller. A quick google says it's the M.2 SATA controller, and lstopo says my SATA drives are all NOT connected to it (naturally, since I only use NVMe M.2). However, it still seems rather risky to vfio-stub it for passthrough. How did you do it? With regards to Plex, I explicitly pin it to the dies without the memory controller and have not had any stuttering. It only stutters when I run 4 simultaneous transcodes on top of Plex (so 1x 4k HDR stream + 4x 1080p streams) on the same set of cores - that is to be expected, I guess. The other time it stuttered was because the transcoding temp was on a failing HDD - switched it to an SSD and it was gone.
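      For reference, this is the sort of thing you can run from the console to dump the IOMMU groups and see what sits with what (standard sysfs + lspci, nothing Unraid-specific):

          # print every PCI device together with its IOMMU group number
          for d in /sys/kernel/iommu_groups/*/devices/*; do
              g=${d#*/iommu_groups/}; g=${g%%/*}
              printf 'IOMMU group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
          done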
  24. ALL containers becoming unresponsive is a different problem. As I mentioned, when the CPU is waiting for the drive to respond, it will report 100% usage, regardless of how powerful your CPU is. A slow-responding drive will cause what you are seeing. In terms of why a drive responds slowly, it may be due to high IO (e.g. repairing + extracting + copying strains an HDD a lot due to repeated seeks) or even a failing drive (that was what happened when my old Hitachi 3TB was about to bite the dust).