ffhelllskjdje
Posts posted by ffhelllskjdje
-
I use an app called PhotoSync; it syncs via SMB and other protocols, and you can have it run automatically.
I use SFTP to my unraid, but I believe it can do everything you ask.
-
On 3/21/2023 at 12:30 PM, Tolete said:
after upgrading to NC 26.0.0
Administration settings > Overview shows the error:
The "X-Robots-Tag" HTTP header is not set to "noindex, nofollow". This is a potential security or privacy risk, as it is recommended to adjust this setting accordingly.
The Fix:
update [line 54] in your default.conf file (appdata > nextcloud > nginx > site-confs > default.conf)
from
add_header X-Robots-Tag "none" always;
to
add_header X-Robots-Tag "noindex, nofollow" always;
Restart container.
Still getting this warning on 25.0.5, even though I updated default.conf and .htaccess and restarted the container.
Here's default.conf
# These settings allow you to optimize the HTTP2 bandwidth.
# See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
# for tuning hints
client_body_buffer_size 512k;

# HTTP response headers borrowed from Nextcloud `.htaccess`
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "noindex, nofollow" always;
add_header X-XSS-Protection "1; mode=block" always;

# Remove X-Powered-By, which is an information leak
fastcgi_hide_header X-Powered-By;
here's .htaccess
<IfModule mod_env.c>
  # Add security and privacy related headers
  # Avoid doubled headers by unsetting headers in "onsuccess" table,
  # then add headers to "always" table: https://github.com/nextcloud/server/pull/19002
  Header onsuccess unset Referrer-Policy
  Header always set Referrer-Policy "no-referrer"
  Header onsuccess unset X-Content-Type-Options
  Header always set X-Content-Type-Options "nosniff"
  Header onsuccess unset X-Frame-Options
  Header always set X-Frame-Options "SAMEORIGIN"
  Header onsuccess unset X-Permitted-Cross-Domain-Policies
  Header always set X-Permitted-Cross-Domain-Policies "none"
  Header onsuccess unset X-Robots-Tag
  Header always set X-Robots-Tag "noindex, nofollow"
  Header onsuccess unset X-XSS-Protection
  Header always set X-XSS-Protection "1; mode=block"
  SetEnv modHeadersAvailable true
</IfModule>
-
On 6.11.5, my parity disk appears to be dead. I have plenty of available space on the array, so instead of getting a new parity disk I'd like to reuse an existing array disk. I would need to remove the disk from the array and then make it the parity. Any advice on how to do that?
-
I use an app called iMazing; it runs on Windows or Mac and can do remote backups, which I then back up to unraid.
-
2 hours ago, viktortras said:
The "corrupted PAT error" often happens due to VDISK2 or a corrupted .pat file. You have to use the version that matches your unraid processor, not whichever one you want.
Did you change controller="0" to "1" in XML mode before starting the VM for the first time? If you don't set this option when you run ./rploader.sh satamap now, the command will not recognize the second device (VDISK2) and will not write the build correctly, so it will not work.
The result of ./rploader.sh satamap now is this:
Also make sure you change the MAC address in the VM template (always in XML advanced mode; I repeat, if you save the MAC address in normal mode, the previous change to controller="1" will be removed) after creating the build and stopping the machine.
Which *.pat are you using?
OK, I have just discovered that the *.pat downloaded from /home/tc/redpill-load/cache/ is not working. Download the official release. I used this:
https://mega.nz/file/5YI2yCYL#oNR6Cq5FmIdySL1cdUm_8vm2A-BSuj2NqbP_7ywTe_A
Your PAT worked! Thanks, I'm up and running now.
-
28 minutes ago, viktortras said:
Hi,
You have to edit the VM template in XML advanced mode to enter the MAC generated when you ran ./rploader.sh serialgen DS918+ in STEP 2.
If you edit the MAC in normal mode, the modifications made in XML will disappear (in this case the only modification we made to the template is changing controller="0" to "1" for VDISK2, so if you save the MAC in normal mode the controller modification will disappear and you will have to edit it again in XML advanced mode).
After that, save the template.
When you start the VM and select the USB option, nothing appears except this.
This is the expected behaviour.
You have to find the IP via Synology Assistant or find.synology.com and then install the *.pat.
Ah, thanks, that worked! But now I'm getting a corrupted PAT error... hmmm.
Does ./rploader.sh build apollolake-7.1.0-42661
refer to the processor I want DSM to have, or to what's installed in my unraid? My unraid has a 10th-gen processor.
-
Can you give more details on STEP 5 (edit VM settings in advanced XML mode and install the *.pat)? I entered the MAC, but it just hangs when I try to boot from the USB option.
-
My time to win is over 30 years on a 10th-gen i9 with 64GB RAM. What am I doing wrong? I'm using mostly default settings,
plotting on an SSD and using RAID storage for completed plots.
-
Getting this error every time I run it:
Traceback (most recent call last):
  File "/usr/bin/shreddit", line 11, in <module>
    load_entry_point('shreddit==6.0.7', 'console_scripts', 'shreddit')()
  File "/usr/lib/python3.6/site-packages/shreddit/app.py", line 45, in main
    shredder.shred()
  File "/usr/lib/python3.6/site-packages/shreddit/shredder.py", line 68, in shred
    deleted = self._remove_things(self._build_iterator())
  File "/usr/lib/python3.6/site-packages/shreddit/shredder.py", line 166, in _remove_things
    self._remove(item)
  File "/usr/lib/python3.6/site-packages/shreddit/shredder.py", line 137, in _remove
    self._remove_comment(item)
  File "/usr/lib/python3.6/site-packages/shreddit/shredder.py", line 124, in _remove_comment
    comment.edit(replacement_text)
  File "/usr/lib/python3.6/site-packages/praw/models/reddit/mixins/editable.py", line 20, in edit
    updated = self._reddit.post(API_PATH['edit'], data=data)[0]
  File "/usr/lib/python3.6/site-packages/praw/reddit.py", line 432, in post
    return self._objector.objectify(data)
  File "/usr/lib/python3.6/site-packages/praw/objector.py", line 122, in objectify
    raise APIException(*errors[0])
praw.exceptions.APIException: RATELIMIT: 'Looks like you've been doing that a lot. Take a break for 3 seconds before trying again.' on field 'ratelimit'
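That traceback is reddit's API rate limiter rejecting the edit. A generic workaround is to catch the rate-limit error, parse the suggested wait time out of the message, sleep, and retry. This is not shreddit's actual code, just a minimal sketch of the pattern; the `APIException` class below is a simplified stand-in for praw's exception so the sketch is self-contained:

```python
import re
import time


class APIException(Exception):
    """Simplified stand-in for praw.exceptions.APIException."""

    def __init__(self, error_type, message):
        super().__init__(f"{error_type}: {message}")
        self.error_type = error_type
        self.message = message


def call_with_backoff(fn, max_retries=5, fallback_delay=10):
    """Call fn(), sleeping and retrying when the API reports RATELIMIT."""
    for attempt in range(max_retries):
        try:
            return fn()
        except APIException as exc:
            # Re-raise anything that isn't a rate limit, or if retries are exhausted
            if exc.error_type != "RATELIMIT" or attempt == max_retries - 1:
                raise
            # The message usually suggests a wait, e.g. "Take a break for 3 seconds"
            m = re.search(r"(\d+) (second|minute)", exc.message)
            if m:
                delay = int(m.group(1)) * (60 if m.group(2) == "minute" else 1)
            else:
                delay = fallback_delay
            time.sleep(delay + 1)  # small safety margin
```

With praw installed, the same wrapper would go around the failing `comment.edit(...)` call, catching `praw.exceptions.APIException` instead of the stand-in class.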
-
Getting an error on Firefox Sync on a fresh install.
Switching to PGID 100...
Switching to PUID 99...
Setting timezone to America/Los_Angeles...
Checking prerequisites...
Generating configuration...
Fixing perms...
[2021-08-20 03:35:18 +0000] [1] [INFO] Starting gunicorn 19.6.0
[2021-08-20 03:35:18 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
[2021-08-20 03:35:18 +0000] [1] [INFO] Using worker: sync
[2021-08-20 03:35:18 +0000] [19] [INFO] Booting worker with pid: 19
[2021-08-20 03:35:18 +0000] [19] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 126, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 63, in load
    return self.load_pasteapp()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 59, in load_pasteapp
    return load_pasteapp(self.cfgurl, self.relpath, global_conf=None)
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/pasterapp.py", line 69, in load_pasteapp
    global_conf=global_conf)
  File "/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 253, in loadapp
    return loadobj(APP, uri, name=name, **kw)
  File "/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 278, in loadobj
    return context.create()
  File "/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 715, in create
    return self.object_type.invoke(self)
  File "/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 152, in invoke
    return fix_call(context.object, context.global_conf, **context.local_conf)
  File "/usr/local/lib/python2.7/site-packages/paste/deploy/util.py", line 55, in fix_call
    val = callable(*args, **kw)
  File "/app/syncserver/__init__.py", line 265, in main
    config = get_configurator(global_config, **settings)
  File "/app/syncserver/__init__.py", line 254, in get_configurator
    config = mozsvc.config.get_configurator(global_config, **settings)
  File "/usr/local/lib/python2.7/site-packages/mozsvc/config.py", line 65, in get_configurator
    load_into_settings(config_file, settings)
  File "/usr/local/lib/python2.7/site-packages/mozsvc/config.py", line 43, in load_into_settings
    for name, value in config.get_map(section).iteritems():
  File "/usr/local/lib/python2.7/site-packages/konfig/__init__.py", line 118, in get_map
    return dict(self.items(section))
  File "/usr/local/lib/python2.7/site-packages/backports/configparser/__init__.py", line 878, in items
    return [(option, value_getter(option)) for option in d.keys()]
  File "/usr/local/lib/python2.7/site-packages/backports/configparser/__init__.py", line 875, in <lambda>
    section, option, d[option], d)
  File "/usr/local/lib/python2.7/site-packages/konfig/__init__.py", line 35, in before_get
    defaults)
  File "/usr/local/lib/python2.7/site-packages/backports/configparser/__init__.py", line 445, in before_get
    self._interpolate_some(parser, option, L, value, section, defaults, 1)
  File "/usr/local/lib/python2.7/site-packages/backports/configparser/__init__.py", line 508, in _interpolate_some
    "found: %r" % (rest,))
InterpolationSyntaxError: '$' must be followed by '$' or '{', found: u'$rmUhvLWU9A4EL*vM!s'
[2021-08-20 03:35:18 +0000] [19] [INFO] Worker exiting (pid: 19)
[2021-08-20 03:35:18 +0000] [1] [INFO] Shutting down: Master
[2021-08-20 03:35:18 +0000] [1] [INFO] Reason: Worker failed to boot.
-
I keep getting an error complaining about file storage when doing snapshots. I'm not sure what it's referencing; I have set the backup location to /mnt/user/backups/vmbackup/ and this location works when stopping and backing up.
vm_state is running. vdisk_type is raw
2021-08-14 04:43:14 information: qemu agent found. enabling quiesce on snapshot.
error: internal error: missing storage backend for 'file' storage
-
I get an error using snapshots, any ideas? On V6.10.0-rc1
2021-08-12 05:15:45 information: qemu agent found. enabling quiesce on snapshot.
error: internal error: missing storage backend for 'file' storage
-
I have an Nvidia 1050 (with ROM BIOS) and am successfully running an Arch Linux VM. Everything works like it should; however, if I reboot or shut down the Arch OS, I cannot bring it back up without rebooting unraid.
E.g., I will shut down Arch, go to the VM tab in unraid and restart it. It will appear to boot up, but my display will not turn on.
EDIT: OK, it looks like the keyboard isn't turning on either. I am using a USB controller which may not be getting reset (is there a way to reset it without a reboot?), but the lack of display is also concerning; I would think those would fire up, but they are not.
Logs don't show anything.
-
So I deleted this file and restarted the docker, but I am still getting the "Your web server is not properly set up to resolve "/.well-known/webfinger"" error. Any ideas?
25.02.21: - Nginx default site config updated for v21 (existing users should delete /config/nginx/site-confs/default and restart the container).
I've also tried adding this
location = /.well-known/webfinger {
    return 301 /index.php$uri;
}
and still no luck
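For comparison, the nginx example in the Nextcloud admin manual handles all the .well-known endpoints in a single block and redirects webfinger (and anything else unmatched) through index.php with $request_uri rather than $uri. A sketch along those lines, to be adapted to this container's default.conf (paths are the stock ones from the upstream example):

```nginx
location ^~ /.well-known {
    # CalDAV/CardDAV service discovery
    location = /.well-known/carddav { return 301 /remote.php/dav/; }
    location = /.well-known/caldav  { return 301 /remote.php/dav/; }

    # Serve these from disk if present
    location /.well-known/acme-challenge { try_files $uri $uri/ =404; }
    location /.well-known/pki-validation { try_files $uri $uri/ =404; }

    # webfinger, nodeinfo and everything else go through Nextcloud
    return 301 /index.php$request_uri;
}
```

If a proxy sits in front of the container, the check can also fail because the redirect never reaches Nextcloud, so it is worth testing the /.well-known/webfinger URL directly against the container.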
-
1 hour ago, jang430 said:
I checked out Resilio. The price seems reasonable. Though are there quirks on iOS, such as deleting on iOS not deleting in the other sync folders?
There is also something about the media folder not being accessible?
https://help.resilio.com/hc/en-us/articles/205506539-Sync-for-iOS-Peculiarities
Can you confirm?
Sent from my iPhone using Tapatalk

Not sure on the media; there are definitely limitations due to iOS. It's not a true sync, it's more of a download/upload type of thing. I use it to have certain files available: I download to iOS, maybe edit, then re-upload. It's not ideal, but unless Apple changes their tune there is not much Resilio or Syncthing can do.
On Android, there are true sync solutions (background syncs), which was great and is the one thing I miss from that platform.
-
21 hours ago, jang430 said:
Resilio for iOS or Android? Does Syncthing have an iOS app? I use Syncthing to sync folders between 2 NAS; it would be good if it had an iOS app. I will research Resilio for iOS. It's paid, right? What is iMazing?
Sent from my iPhone using Tapatalk

I use Resilio (iOS) and the unraid docker. I did indeed pay for Resilio, but it's so much better IMO than Syncthing. As for an iOS app, Syncthing doesn't have one, but there is an app called Mobius Sync, I believe, that works just fine.
iMazing lets you interact with your iPhone/iPad. I use it to put files on the devices and also to back up (wired or wirelessly) to my unraid server. I'm on the beta (iOS), so having a good backup is a must.
-
2 hours ago, jang430 said:
Hi. Anyone know of a method to do this?
I use iMazing to back up to an SMB share. For individual files, Resilio Sync or Syncthing.
-
On 4/6/2021 at 5:29 PM, ffhelllskjdje said:
I've tried both ways, by using a mapped folder and recently by just bashing into the docker container and creating the key under the docker filesystem.
I see in the template that the appdata /mnt/cache/appdata/crushftp is mapped to /var/opt/CrushFTP10 in the docker, should I place it there?
It seems it only works with non-admin users, so I just created another user and it seems to work now.
-
39 minutes ago, MarkusMcNugen said:
Are you mounting that folder to the docker host? If not then it wouldn't have persistence.
There is already a volume for persistence that you can use for the container. Check out the Docker Hub or the container's pages. You should have /config mounted to the docker host, and your SSH key for the SFTP server should go in /config/sshd/keys. You'll probably need to be logged in as root on your unraid server to get into that folder once you have /config mounted; it's locked down pretty hard for security.
I've tried both ways, by using a mapped folder and recently by just bashing into the docker container and creating the key under the docker filesystem.
I see in the template that the appdata /mnt/cache/appdata/crushftp is mapped to /var/opt/CrushFTP10 in the docker, should I place it there?
-
Anyone else getting constantly disappearing SSH keys? I set my key for SFTP, and it usually works for a day or so, but eventually it always removes my key. I have to go back into user preferences and re-add the key. I put my key in /usr/local/lib, if it matters.
-
10 minutes ago, Hoopster said:
There are other options for the functionality lost when those docker containers were pulled. They are fully supported and provide the same functionality.
Of course, the choice is yours which way you go, but this ls.io container (unifi controller) and their many others for unRAID are fully supported by the ls.io team.
Perhaps, but it looks like neither @chbmb nor @linuxserver.io have been active since the blowup, according to the last-visited dates on their profiles.
I know they are still updating the containers (I had a few updates today). No judgement either way from me... just interesting all around.
-
19 minutes ago, wayner said:
Maybe I read too much into this, but that occurred last November when unRAID incorporated Nvidia drivers into the core OS. Here is a quote from CHBMB (in this thread: https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta35-available-r1125/?tab=comments#comment-11340&searchlight=1):
And also:
Wow, I didn't know all that. It's definitely concerning to me. I'm rebuilding my system next week and will have to take this into consideration, as I'm also looking at a Proxmox/TrueNAS combo.
-
25 minutes ago, wayner said:
I apologize if this has been dealt with elsewhere, but I read that linuxserver.io will no longer be building dockers for unRAID. Does that apply to this docker, and if it does, what does that mean for its future?
where did you read that?
-
I am using a trial of Unraid on an older server and it is constantly rebooting (seemingly at random). Diagnostics attached; any insight would be appreciated. This computer previously ran Windows and I never had issues with it, so it might be something Linux-specific, I don't know.
[Plugin] Appdata.Backup
in Plugin Support
Posted
Wow, just looked and I'm having the same problem.