Posts posted by Bjur
-
Thanks for answering. I haven't tried that yet, but out of interest, why would it make a difference?
-
I just replaced my Unraid USB key and copied the settings over, but the Unraid server can't be accessed on the network, either by IP or by name. Any suggestions?
It worked perfectly before and the settings are the same.
-
Hi, I have 5 disks attached to Unraid. 3 of them are a concern to me.
I've attached two pictures of two of the drives. The first is an Intel DC 3500 series, VK000240GWCNP, 240 GB if I remember correctly, which I want to use for cache, and the next is the parity drive, an 8 TB Seagate BarraCuda 3.5". The third, a Samsung 860 EVO, I can't even get any SMART data on.
-
Got hold of support; this can be solved.
-
Hi, I've been having problems with my old USB key, so I wanted to swap it for a new one.
I used the creator tool with the newest stable Unraid, then copied the config folder from the old USB key.
When it started up, it said invalid GUID and key, so I chose 'replace key', but I never received an email with a link to a new key.
And now I can't press 'replace key' anymore.
What do I do?
-
Hi, I hope someone can please help me.
Every 2-3 days I lose access to my Unraid VM; I can't ping it or anything like that, so I've had to reboot and lost the log file.
Sometimes I get a Docker execution error, and one time I simply lost the dockers, so I had to re-enable them.
Is it a corrupt docker file?
Hope you can help?
-
I have a question I hope someone can help with.
Now that Google Drive is read-only, I face a problem with Sonarr: it sometimes deletes a show after finding another version. The problem is that it now downloads locally, since it can't upload anymore, and therefore deletes the version on Google Drive. Can I somehow stop Sonarr from deleting the Google Drive version? Would the recycle bin be an option, or would I not get a chance to move the files back afterwards since it's read-only?
-
Hi, I just had a power cut and after I ran my scripts I can't mount them anymore.
It says:
make[2]: Leaving directory '/tmp/mergerfs/libfuse'
make[2]: *** [Makefile:125: build/fuse_loop.o] Error 1
make[1]: *** [Makefile:104: objects] Error 2
make[1]: Leaving directory '/tmp/mergerfs/libfuse'
make: *** [Makefile:257: libfuse] Error 2
strip: 'build/mergerfs': No such file
/tmp/build-mergerfs: line 18: build/mergerfs: not found
cp: can't stat 'build/mergerfs': No such file or directory
mv: cannot stat '/mnt/user/appdata/other/rclone/mergerfs/mergerfs': No such file or directory
12.09.2023 11:37:06 INFO: *sleeping for 5 seconds
12.09.2023 11:37:11 ERROR: Mergerfs not installed successfully. Please check for errors. Exiting.
Hope someone can help.
Thanks @Josephgrosskopf, the last line made it work again.
-
On 7/5/2023 at 10:41 PM, Kaizac said:
Debrid is not personal cloud storage. It allows you to download torrent files on their servers, often the torrents have already been downloaded by other members. It also gives premium access to a lot of file hosters. So for media consumption you can use certain programs like add-ons with Kodi or Stremio. With Stremio you install Torrentio, setup your Debrid account and you have all the media available to you in specific files/formats/sizes/languages. Having an own Media library is pretty pointless with this, unless you're a real connoisseur and want to have very specific formats and audio codecs. It also isn't great for non-mainstream audio languages, so you could host those locally when needed.
I still got my library with both Plex and Emby lifetime, but I almost never use it anymore.
Thanks, I don't think I will get into torrents, and I'm using Plex, which is good. Plus a good percentage of my content is non-English, so the only option is to buy the drives, since $90 for Dropbox is too expensive for me.
-
On 6/25/2023 at 9:41 AM, Kaizac said:
I think he means the Gsuite to Enterprise Standard switch we all had to do with the rebranding of Google.
But you have Enterprise Standard? And if so, if you go to billing or products. What does it say below your product? For me it says unlimited storage.
Right now I don't have to migrate, but as soon as I do I will go fully local. You can join a Dropbox group, but you would need to trust those people. That is too much of an insecurity for me. So with your storage of "just" 50TB, it would be a no-brainer for me. Just get the drives. You will have repaid them within the year. In the end, Dropbox will also end their unlimited someday and it will be the same problem.
And look at your use case. Is it just Media or mostly backups? Backups can be done with Backblaze and other offerings that aren't too costly.
Media has alternatives in Plex shares and using Debrid services. The last one I'm hugely impressed by. But also that depends on how particular you are about your media.
Thanks. I get an email every day saying that by xx date I need to be below the 5 TB limit or my data will become read-only.
I won't get the drives before that date, so I will just have the data read-only until I find a solution.
The data is mainly media for Plex. I have never heard of Debrid, but is that a worthy alternative to Drive?
-
15 hours ago, Kaizac said:
They already started limiting new Workspace accounts a few months ago to just 5TB per user. But recently they also started limiting older Workspace accounts with fewer than 5 users, sending them a message that they either had to pay for the used storage or the account would go into read-only within 30 days. Paying would be ridiculous, because it would be thousands per month. And even when you purchase more users, so you get the minimum 5, there isn't a guarantee you will actually get your storage limit lifted. People would have to chat with support to request 5TB of additional storage, and would even be asked to explain the necessity; the request was often refused.
So far, I haven't gotten the message yet on my single user Enterprise Standard account. Some speculate that only the accounts using massive storage on the personal gdrive get tagged and not the ones who only store on team drives. I think it probably has to do with your country and the original offering, and Google might be avoiding European countries because they don't want the legal issues. I don't know where everyone is from though, so that also might not be true.
Anyway, when you do get the message, your realistic only alternative is moving to Dropbox or some other more unknown providers. It will be pricey either way.
I got that message. Within 30 days my account will be read-only. When I go to the dashboard it says there may be interruptions after. I came from Gsuite to Workspace Enterprise Standard and I live in Europe. I really don't know what to do.
I have around 50 TB on a Team Drive.
Support said there's nothing I can do; after the date I won't be able to upload anymore. It should still be possible to download or delete data. Maybe I will keep the data for a few months until I get to the point of buying the drives.
What cloud service are you guys migrating to if any?
@DZMM You said you already migrated a year ago?
Thanks for the great work on this and the support through the years, especially @DZMM and @Kaizac. It's a shame it can't continue.
-
On 6/18/2023 at 10:48 PM, DZMM said:
Is anyone else getting slow upload speeds recently? My script has been performing perfectly for years, but for the last week or so my transfer speed has been awful
2023/06/18 22:37:15 INFO :
Transferred:   28.538 MiB / 2.524 GiB, 1%, 3 B/s, ETA 22y21w1d
Checks:        2 / 3, 67%
Deleted:       1 (files), 0 (dirs)
Transferred:   0 / 1, 0%
Elapsed time:  9m1.3s
It's been so long since I looked at my script I don't even know what to look at first
Have I missed some rclone / gdrive updates? Thanks
Don't think so, but it's soon over, so we can't upload anymore :(
-
30 minutes ago, axeman said:
Can you share a screenshot of where you see that? I don't see that on my admin console
It's a very big red square when you log in, saying the organization has no more storage left.
-
It says my account will need to be changed within 52 days or interruptions will be made...
What do you guys do? I'm guessing everyone has the same problem?
-
15 hours ago, francrouge said:
Hi guys, since most of us are going to lose the unlimited gdrive, is there any other provider or config to be able to play from it? Thx
Sent from my Pixel 2 XL using Tapatalk
What do you mean, lose the unlimited 😬?
-
37 minutes ago, KluthR said:
1. Normally, nothing gets deleted, but some users have "special" setups.
To be honest: I would disable USB backup for now, wait for Unraid 6.12 stable and migrate to
I just made the errors pop up correctly; the rest is the same. The new plugin is a complete rewrite. The ca.backup plugin will stop working with Unraid 6.12 (it still works for Unraid up to 6.11.5, though).
Thanks for the quick answer :)
What about:
2. If I want to have 2 destinations: 1 local, which will be a folder in Rclone that syncs with the cloud, and a second one which will be an attached USB drive. Would it be possible to back up to two destinations?
3. Would it also be possible to have docker templates backed up?
Also, would it be possible to not have them as tar files, i.e. uncompressed, with regard to deduplicating backup software?
-
On 1/3/2023 at 3:08 AM, BlueLight said:
My appdata got deleted, I think?
I upgraded to v3 around the same time as upgrading to 6.11.5 and set backups for 4am on the 1st of every month. Even with the red warnings, I misinterpreted them as meaning that the USB flash backup had been deprecated (and taken out on the docker side), so I mindlessly filled in the USB backup location and bing bang boom...
After the backup on the 1st, I woke up to find that Unraid is fine, but something happened within appdata (my suspicion).
Some apps can't start; most apps are running but not making a connection to the server while in their GUI.
e.g.: My NginxProxyManager docker starts up and takes me to the login, but cannot verify my credentials. // My Plex app works fine, but can't connect to libraries. // Krusader opens up but doesn't see any of the configured folders/shares.
I've tried to restore using the backup file that was generated, and it did not work.
I copied my appdata folder to make a duplicate. This backup was right before upgrading to 6.11.5...
I've tried copying this duplicate folder over and I don't see that it made any difference.
What are my options moving forward? Is there any advice on how to make my duplicate appdata folder usable?
Thanks for any help in advance! Happy 2023!
Hi thanks for the plugin.
This post below scares me a little bit.
I've just installed appdata backup 2.5 and Unraid 6.11.5.
1. If I want to back up the USB flash drive and put it in a folder in the USB backup destination, would I then have my appdata folder deleted?
2. If I want to have 2 destinations: 1 local, which will be a folder in Rclone that syncs with the cloud, and a second one which will be an attached USB drive?
3. Would it also be possible to have docker templates backed up?
-
On 11/29/2022 at 10:34 AM, Kaizac said:
Well, we've discussed this a couple of times already in this topic, and it seems there is not one fix for everyone.
What I've done is added to my mount script:
--uid 99
--gid 100
For --umask I use 002 (I think DZMM uses 000, which allows read and write to everyone and which I find too insecure). But that's your own decision.
I've rebooted my server without the mount script active, so just a plain boot without mounting. Then I ran the fix permissions on both my mount_rclone and local folders. Then you can check again whether the permissions of these folders are properly set. If that is the case, you can run the mount script. And then check again.
After I did this once, I never had the issue again.
When I did this I still had issues, so I did what you guys suggested: created a script that sets the mount permissions at array startup, and that seems to have fixed it.
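For anyone wanting to try the same, here is a minimal sketch of what such an array-start script could look like. The uid/gid (99:100, Unraid's nobody:users) and group-writable permissions follow Kaizac's --uid 99 / --gid 100 / --umask 002 flags above; the folder paths are assumptions based on this guide's defaults, so adjust them to your own setup:

```shell
#!/bin/bash
# Hypothetical array-start user script: re-apply the nobody:users
# owner (99:100) and group-writable permissions (matching umask 002)
# to the local and rclone mount folders.
fix_perms() {
    [ -d "$1" ] || return 0                    # skip folders that aren't mounted yet
    chown -R 99:100 "$1" 2>/dev/null || true   # needs root; ignore failures
    chmod -R u=rwX,g=rwX,o=rX "$1"             # dirs become 775, files 664
}

fix_perms /mnt/user/local
fix_perms /mnt/user/mount_rclone
```

If you use the User Scripts plugin, the "At Startup of Array" schedule should make it run before the mount script does.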
-
I've survived the last couple of days without a crash in mounts, so something must have changed.
Thanks for the help.
-
24 minutes ago, robinh said:
Sorry, I didn't document all my steps, but according to my history file it should be something like this:
# Pull the docker image
docker pull trapexit/mergerfs-static-build:latest
# List docker images - check for the image ID of mergerfs-static-build
sudo docker images
# Go into the mergerfs-static-build container
sudo docker run -it <your image ID> /bin/sh
# Edit the build-mergerfs file
edit the build-mergerfs
# Add the checkout to use a certain version from github; add after "cd mergerfs"
git checkout d1762b2bac67fbd076d4cca0ffb2b81f91933f63
# Save the file, then exit the container using the exit command
# Save your settings
sudo docker commit [CONTAINER_ID] [new_image_name]
# e.g. docker commit 1015996dd4ee test-mergerfsstat
# Start a new container
docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm test-mergerfsstat /tmp/build-mergerfs
Hopefully this makes it a bit clearer.
Thanks for the explanation.
My mergerfs is not in a docker; I just followed the guide here, so I'm guessing mergerfs is being built in the script.
So I don't know how to fix this.
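In case it helps others in the same situation: since the error output above shows the guide's script building mergerfs in /tmp via a build-mergerfs script, one way to apply robinh's pin without docker might be to insert the checkout into that script, right after its "cd mergerfs" line. The path and the sed approach are my own assumption, untested:

```shell
# Hypothetical: pin mergerfs to the commit robinh reported as working,
# by inserting a "git checkout" into the build script right after the
# line that changes into the cloned repo.
BUILD_SCRIPT=/tmp/build-mergerfs   # path assumed from the error output above
PIN=d1762b2bac67fbd076d4cca0ffb2b81f91933f63
if [ -f "$BUILD_SCRIPT" ]; then
    sed -i "/^cd mergerfs/a git checkout $PIN" "$BUILD_SCRIPT"
fi
```

After that, re-running the install script should build the pinned 2.33.5 version instead of latest master.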
-
On 11/13/2022 at 4:16 PM, robinh said:
Be aware that there seems to be a bug in the latest version of mergerfs on the github page.
I noticed it after rebooting my Unraid machine; afterwards mergerfs was crashing every time. The crashes occurred after write events.
In my dmesg logging:
[ 467.808897] mergerfs[7466]: segfault at 0 ip 0000000000000000 sp 0000147fb0e0e1a8 error 14
[ 467.808921] Code: Unable to access opcode bytes at RIP 0xffffffffffffffd6.
The only way to get the mounts working again was using the unmount and mount scripts, but as soon as there was a write event the issue occurred again directly (0 kB written files).
I've temporarily solved it by editing the mergerfs-static-build image so it wouldn't pull the latest version of mergerfs from github.
Instead I'm now using the 'd1762b2bac67fbd076d4cca0ffb2b81f91933f63' version from 7 Aug. And that seems to be working again after copying the mergerfs to /bin 🙂
Not working mergerfs version is:
mergerfs version: 2.33.5-22-g629806e
Working version is:
mergerfs version: 2.33.5
Where do you do this? Because I seem to have the same problem.
-
On 11/6/2022 at 11:48 PM, francrouge said:
Did you find a solution? I'm getting the same thing.
Nope.
Hope someone can help figure out what causes this...
-
-
On 11/3/2022 at 1:43 PM, animeking said:
Why is mount-mergerfs changing my permissions and locking everything? I can't write to anything. When I change it, it automatically changes back to read-only.
I have the same problem, even after manually running the permission script from bolognaise. Any thoughts?
-
Unraid not discoverable on LAN
in General Support
Posted · Edited by Bjur
Just tried a standard reboot, then all my dockers disappeared.
I then tried to shut down the system and start in GUI mode; it gives a Kexec error.
I just changed the USB key and transferred the license.
Don't get it.
Hope you can help.