axeman

Posts posted by axeman

  1. 5 hours ago, Bjur said:

    Can anyone tell me the best way to move the files back locally from Google?

    I got an email from Google saying they will delete my files within x days.

    I think it depends on your connection. My download speed was a lot faster than my upload speed (1 Gbps down vs. 40 Mbps up), so I just ended up moving groups of folders manually. But I thought a few posts up somebody had noted how to reverse the script so it downloads instead.
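
    For reference, a minimal sketch of what the download direction could look like with plain rclone (hedged and untested; the remote name gdrive_media_vfs: and the local path are just placeholders, so swap in whatever your script actually uses):

    # copy everything from the remote back to the array; real remote name and paths will differ
    rclone copy gdrive_media_vfs: /mnt/user/local/gdrive_media_vfs \
        --transfers 8 --checkers 16 --progress

    It's just the usual upload command with the source and destination swapped (remote first, local second); the tuning flags are optional.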

  2. On 7/6/2023 at 3:30 AM, Kaizac said:

    VPN is not needed. With most providers you're only allowed to stream from one IP address, which is your WAN/VPN IP. So you can have all the streams you want from within your network, as long as they share that IP. Outside your network the same applies, so you have to be careful that someone isn't streaming at home while someone else streams from another IP. But at 3 euros a month, getting another account may not be a bad idea.

    But they have some VPNs whitelisted, so you could if you wanted. You'll have to check their website and your VPN.

     

    Important to do, to protect yourself, is to make sure when using software like Stremio with Torrentio that you only use Debrid links (cached links are preferred, since the torrent has already been downloaded to their servers). You can also use software like Stremio to stream directly from torrents, without Debrid. But you expose yourself that way without a VPN, and the speed is often terrible, so I would really not advise doing that.

     

    Real-Debrid is the biggest for media streaming, with the most 4K quality as well. All-Debrid is also good, but has less 4K material cached (already downloaded on their servers). But since it's so cheap, I have it as a backup as well. They are also very useful for file hosts, when you want to download some file and would otherwise be stuck with a slow transfer speed. All-Debrid has the most file hosts, but if you have both, almost always one of the two has a premium connection and you can get full download speed.

     

    Premiumize is another provider, which I don't have experience with. They are a bit more expensive, but also offer Usenet access, which you can use for downloading if you want to. If you use Kodi with some add-ons you can also get Easynews, which is the only Usenet provider that lets you download and stream simultaneously; the others only offer that for torrents. I'm not a fan of Kodi though, and torrents are plentiful for me.

     

    One project to check out is https://github.com/itsToggle/plex_debrid. They built a script that connects with Plex: you can put in a watchlist or just look up a movie/series in Plex, and it will be cached within a few minutes so you can watch it. This would be interesting for people with multiple Plex users. It's a bit more difficult to set up though, so prepare to spend some time on that.

    Thank you for the very informative write-up, @Kaizac. I will bookmark this and revisit once I solve my current, unrelated issues. 

  3. 10 minutes ago, Kaizac said:

     

    Debrid is not personal cloud storage. It allows you to download torrent files on their servers; often the torrents have already been downloaded by other members. It also gives premium access to a lot of file hosters. So for media consumption you can use certain programs, like add-ons with Kodi or Stremio. With Stremio you install Torrentio, set up your Debrid account, and you have all the media available to you in specific files/formats/sizes/languages. Having your own media library is pretty pointless with this, unless you're a real connoisseur and want to have very specific formats and audio codecs. It also isn't great for non-mainstream audio languages, so you could host those locally when needed.

     

    I still have my library, with both Plex and Emby lifetime, but I almost never use it anymore.

    First time I heard of Debrid was from one of your posts earlier in this thread. How's the privacy on that stuff? Do you need to run through a VPN? 

  4. On 6/25/2023 at 3:51 AM, DZMM said:

    I'm on Enterprise Standard.

     

    I hope I don't have to move to Dropbox as I think it will be quite painful to migrate as I have over a PB stored.

     

    I have a couple of people I could move with I think - @Kaizac just use your own encryption passwords if you're worried about security.

     

    I actually think if I do move, I'll try to encourage people to use the same collection, e.g. just have one pooled library, and if anyone wants to add anything, just use the web-based Sonarr etc. It seems silly all of us maintaining separate libraries when we can have just one.

     

    Some of my friends have done that already in a different way - stopped their local Plex efforts and just use my Plex server.

     

     

    A BIG thank you to you, @DZMM! I had heard from various friends that were doing this - but never thought about it until I saw your post. I didn't really get to take FULL advantage of this, as I changed homes and my internet upload speeds were painfully slow. 

     

    It was fun while it lasted, but Google is known for killing things. They announced at the start of the year that Education accounts were losing unlimited storage, so it was only a matter of time before they came for us. 

     

    I am slowly repatriating my data back to my array. 

     

     

    Anyone else notice that Google's usage is vastly overstated? My usage showed 60TB used when I started downloading. It's dropped to 30TB now, but my local array has only used up 17TB. I ran out and got a bunch of drives thinking I needed 60TB free on my array. Maybe I don't.
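
    For anyone trying to reconcile the numbers, a rough way to compare what Google reports against what has actually landed locally (sketch only; gdrive: and the share path are placeholders for your own remote and share):

    # total size rclone can see on the remote
    rclone size gdrive: --fast-list

    # what has actually landed on the array
    du -sh /mnt/user/media

    One possible explanation for part of the gap, though I haven't confirmed it: trashed files on Google Drive still count against the reported usage, and rclone cleanup gdrive: empties the remote's trash.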

     

  5. On 6/18/2023 at 4:48 PM, DZMM said:

    Is anyone else getting slow upload speeds recently? My script has been performing perfectly for years, but for the last week or so my transfer speed has been awful.
     

    2023/06/18 22:37:15 INFO  : 
    Transferred:          28.538 MiB / 2.524 GiB, 1%, 3 B/s, ETA 22y21w1d
    Checks:                 2 / 3, 67%
    Deleted:                1 (files), 0 (dirs)
    Transferred:            0 / 1, 0%
    Elapsed time:       9m1.3s


    It's been so long since I looked at my script I don't even know what to look at first ;-)

    Have I missed some rclone / gdrive updates? Thanks

     

    I've only been repatriating my data back to my array, as my deadline is 7/10 for the account to go into a read-only state. Perhaps your account hit that state already?

     

    My notices just started about a week or so ago, and they say 60 days, but the deadline is more like 30 days... so it's definitely coming!
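
    One quick way to test whether an account has already flipped to read-only is to try a tiny upload and watch the verbose output (sketch; gdrive: is a placeholder remote name):

    # a read-only / over-quota account should fail this with a clear error in the -vv output
    echo "write test" > /tmp/rclone_write_test.txt
    rclone copy /tmp/rclone_write_test.txt gdrive:write_test -vv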

  6. 6 minutes ago, Kaizac said:

    I'm also curious where you see it. Somehow some people get it and some don't. I think it might depend on whether you have an actual company connected to the account, whether you are using encryption or not, and whether you are storing on team drives or mostly on your personal Google Drive.

     

    I think the cheapest solution is to get a Dropbox Advanced account (you need 3 accounts). You might be able to pool together with others if you trust them. But it also depends on how much you store. Local storage could be more interesting financially.

    I see it now - as soon as I log into my Admin Console, it's right up top. 

     

    I do have an organization tied to it, but only 1 user. Does your organization have more than 1 user? 

  7. Thanks for your work!

     

    Would this be able to sync to Pi-holes where one isn't in a Docker container?

     

    My primary Pi-hole runs as a VM on an ESXi machine. I'd like to sync that to a potential Docker container on Unraid (potential because I need to understand how to get them to sync before deploying). Also, does the "disable Pi-hole for x minutes" setting sync, or is it just the lists that are getting synced?

     

     

  8. On 11/10/2021 at 5:54 AM, JorgeB said:

    It mirrors in the sense that the end result will be the same.

     

    This is for v6.9.x, while it should also work with v6.10.x I didn't test it, and don't like to assume.

     

    You need to have both the old and new replacement devices connected at the same time. If you can have all 4 connected, you can do both replacements and then reset the cache config; if not, do one, reset the cache config, do the other, then reset the cache config again.

     

    First you need to partition the new device; to do that, format it using the UD plugin (you can use any filesystem). Then, with the array started, type the following in the console:

     

    btrfs replace start -f /dev/sdX1 /dev/sdY1 /mnt/cache

     

    Replace X with the source and Y with the target (note the 1 at the end of both). You can check replacement progress with:

     

    btrfs replace status /mnt/cache

     

    When done, and if you have enough SATA ports, you can repeat the procedure for the second device; if not, do the cache reset below and then start over for the other device.

     

    Pool config reset:

    1. Stop the array; if Docker/VM services are using the cache pool, disable them.

    2. Unassign all cache devices.

    3. Start the array to make Unraid "forget" the old cache config.

    4. Stop the array and reassign the current cache devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device).

    5. Re-enable Docker/VMs if needed and start the array.

     

    Hi JorgeB - I'm about to upgrade my cache drive (not pool) from 1TB to 2TB ... I'm on 6.9.2 ... is there a different way to do this if it's just a size upgrade? Should I create a pool and then follow the process outlined above?
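
    In case it helps anyone searching later, here's my rough reading of how the quoted btrfs replace procedure might translate to a single cache device moving to a larger drive. This is just a hedged sketch, not something JorgeB has confirmed for this case; device names are placeholders, and the extra step is growing the filesystem afterwards:

    # new 2TB device partitioned first via the UD plugin, as in the quoted steps
    btrfs replace start -f /dev/sdX1 /dev/sdY1 /mnt/cache
    btrfs replace status /mnt/cache            # wait until it reports "finished"
    btrfs filesystem resize max /mnt/cache     # grow the filesystem onto the larger device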

  9. 1 hour ago, itimpi said:

    You could use the Parity Swap procedure, but in this case that seems directly equivalent to just doing step 1) and then step 3) so you could just go with the simpler approach you outlined.

     

    Any reason you want to run a preclear on the old parity drive? The rebuild overwrites every sector on the drive, so its current contents are irrelevant. The only reason for doing this would be as a stress test of the drive, but since you have already been successfully using it as a parity drive this seems unnecessary; it just takes time and puts extra wear on the drive.

     

    Thanks. I thought we needed to do this prior to adding it to the array. If I don't need to do that, I will skip it. Save time AND wear and tear? Sign me up!

  10. My array has 24 drives, and one of the older ones is failing - we've had a couple of read errors on it, and basically it's aged out (10+ years old)... 

     

    I have dual parity (two 8TB drives). I have a somewhat faster (7200 rpm vs. 5400 rpm) 8TB drive that I got a couple of weeks ago.

     

    Ideally, I'd like to replace one of the parity drives with the faster drive. I get that the speed may not be faster, especially since the other drive isn't matched. However, I figured the next drive I get would be a 7200 rpm to replace the other parity drive.

     

    Then, I'd like to use the old parity drive to replace the failing drive. 

     

    How do I go about doing this? 

     

    1. Replace a parity drive, say Parity 1, with the new drive, and let parity rebuild.

    2. Preclear the old parity drive

    3. For replacing the failing drive, follow the steps at https://wiki.unraid.net/Replacing_a_Data_Drive ...?

     

    Anything I'm missing? 

  11. 2 hours ago, DZMM said:

    My other account, which has my storage on it and just 1 user, got moved to Enterprise Standard for £15.30/mth when the offer period ends, which I can live with, as the effort to move so much data and get everything working again would be massive.


     

    Funny, I started uploading a lot recently, as I'm moving to an area that doesn't offer symmetrical Gig service... One day after I started uploading, I got the email that I was moved. I thought the increase in usage triggered it.

  12. On 5/2/2022 at 5:00 AM, JorgeB said:

    These can be intermittent; since the test passed, the disk is OK for now, but you should keep monitoring, especially these attributes:

     

      1 Raw_Read_Error_Rate     POSR-K   200   200   051    -    0
    200 Multi_Zone_Error_Rate   ---R--   200   200   000    -    126

     

    If they climb you'll likely get more read errors.

     

    Thanks will do!
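
    For reference, a simple way to keep an eye on those two attributes from the console (sketch; /dev/sdX is a placeholder for the disk in question):

    # print the SMART attribute table and pull out the two attributes JorgeB flagged
    smartctl -A /dev/sdX | grep -E 'Raw_Read_Error_Rate|Multi_Zone_Error_Rate'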

  13. 3 hours ago, sol said:

    New email from Google. Anyone else get this? Anyone figured out how to proceed? (edited out some identifying content)

     

    Our records indicate you have OAuth clients that used the OAuth OOB flow in the past.

    Hello Google OAuth Developer,

    We are writing to inform you that OAuth out-of-band (OOB) flow will be deprecated on October 3, 2022, to protect users from phishing and app impersonation attacks.

    What do I need to know?

    Starting October 3, 2022, we will block OOB requests to Google’s OAuth 2.0 authorization endpoint for existing clients. Apps using OOB in testing mode will not be affected. However, we strongly recommend you to migrate them to safer methods as these apps will be immediately blocked when switching to in production status.

    Note: New OOB usage has already been disallowed since February 28, 2022.

    Below are key dates for compliance

    September 5, 2022: A user-facing warning message may be displayed to non-compliant OAuth requests

    October 3, 2022: The OOB flow is blocked for all clients and users will see the error page.

    Please check out our recent blog post about Making Google OAuth interactions safer for more information.

    What do I need to do?

    Migrate your app(s) to an appropriate alternative method by following these instructions:

    Determine your app(s) client type from your Google Cloud project by following the client links below.

    Migrate your app(s) to a more secure alternative method by following the instructions in the blog post above for your client type.

    If necessary, you may request a one-time extension for migrating your app until January 31, 2023. Keep in mind that all OOB authorization requests will be blocked on February 1, 2023.

    The following OAuth client(s) will be blocked on Oct 3, 2022.

    OAuth client list:

    Project ID: rcloneclientid-247***

    Client: 211984046708-hahav9pt2t2v6mc6*********apps.googleusercontent.com

    Thanks for choosing Google OAuth.

    — The Google OAuth Developer Team

     

     

    Post #5 https://forum.rclone.org/t/google-oauth-migration-for-rclone/30545/5

     

    That's from THE lead on rclone. Seems it might not be a big deal.
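
    If a remote does end up needing to be reauthorized against the newer (non-OOB) flow, something along these lines should do it on a reasonably current rclone build (sketch; gdrive: is a placeholder remote name):

    # confirm the rclone build is recent enough to use the loopback auth flow
    rclone version

    # redo the OAuth handshake for the existing remote
    rclone config reconnect gdrive: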