ffhelllskjdje

Everything posted by ffhelllskjdje

  1. Wow, just looked and I'm having the same problem.
  2. I use an app called PhotoSync; it will sync via SMB and other protocols, and you can have it do it automatically. I use SFTP to my Unraid, but I believe it can do all you ask.
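     For anyone wondering what the automatic SFTP leg of that looks like, here is a minimal sketch of a folder upload over SFTP with paramiko; the host, user, local folder and share path are placeholders, not anything from PhotoSync itself:

        # Hypothetical SFTP upload of a local folder to an Unraid share.
        import os
        import paramiko

        def upload_photos(local_dir, host, user, remote_dir):
            ssh = paramiko.SSHClient()
            ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # acceptable for a LAN sketch
            ssh.connect(host, username=user)  # key-based auth picked up from ~/.ssh
            sftp = ssh.open_sftp()
            try:
                for name in os.listdir(local_dir):
                    path = os.path.join(local_dir, name)
                    if os.path.isfile(path):
                        sftp.put(path, f"{remote_dir}/{name}")  # upload each file
            finally:
                sftp.close()
                ssh.close()

        upload_photos("./camera-roll", "unraid.local", "backup", "/mnt/user/photos")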
  3. Still getting this warning even though I updated default.conf and .htaccess and restarted the container on 25.0.5.

     Here's default.conf:

        # These settings allow you to optimize the HTTP2 bandwidth.
        # See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
        # for tuning hints
        client_body_buffer_size 512k;

        # HTTP response headers borrowed from Nextcloud `.htaccess`
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "noindex, nofollow" always;
        add_header X-XSS-Protection "1; mode=block" always;

        # Remove X-Powered-By, which is an information leak
        fastcgi_hide_header X-Powered-By;

     Here's .htaccess:

        <IfModule mod_env.c>
          # Add security and privacy related headers
          # Avoid doubled headers by unsetting headers in "onsuccess" table,
          # then add headers to "always" table: https://github.com/nextcloud/server/pull/19002
          Header onsuccess unset Referrer-Policy
          Header always set Referrer-Policy "no-referrer"
          Header onsuccess unset X-Content-Type-Options
          Header always set X-Content-Type-Options "nosniff"
          Header onsuccess unset X-Frame-Options
          Header always set X-Frame-Options "SAMEORIGIN"
          Header onsuccess unset X-Permitted-Cross-Domain-Policies
          Header always set X-Permitted-Cross-Domain-Policies "none"
          Header onsuccess unset X-Robots-Tag
          Header always set X-Robots-Tag "noindex, nofollow"
          Header onsuccess unset X-XSS-Protection
          Header always set X-XSS-Protection "1; mode=block"
          SetEnv modHeadersAvailable true
        </IfModule>
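     A quick way to see which of those headers actually reach the client (and whether something in front of Nextcloud is stripping or doubling them) is to inspect a live response. A rough check, assuming the instance is reachable at https://cloud.example.com (placeholder URL):

        # Compare the headers the server actually sends with the values
        # Nextcloud expects; the URL below is a placeholder.
        import requests

        EXPECTED = {
            "Referrer-Policy": "no-referrer",
            "X-Content-Type-Options": "nosniff",
            "X-Download-Options": "noopen",
            "X-Frame-Options": "SAMEORIGIN",
            "X-Permitted-Cross-Domain-Policies": "none",
            "X-Robots-Tag": "noindex, nofollow",
            "X-XSS-Protection": "1; mode=block",
        }

        resp = requests.get("https://cloud.example.com/login", timeout=10)
        for name, want in EXPECTED.items():
            got = resp.headers.get(name)  # duplicate headers are joined as "a, b"
            status = "ok" if got == want else f"unexpected: {got!r}"
            print(name, "->", status)

     A value like "no-referrer, no-referrer" means the header is being set twice (once by the proxy config, once by .htaccess), which is one common way to keep the warning even though each file looks correct on its own.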
  4. On 6.11.5, my parity disk appears to be dead. I have plenty of available space on the array, so instead of getting a new parity disk I'd like to reuse an existing array disk. I would need to remove that disk from the array and then make it the parity disk. Any advice on how to do that?
  5. I use an app called iMazing; it runs on Windows or Mac and can do remote backups, which I then back up to Unraid.
  6. Your PAT worked! Thanks, I'm up and running now.
  7. Ah, thanks, that worked! But now I'm getting a corrupted PAT error... hmmm. Is ./rploader.sh build apollolake-7.1.0-42661 using the processor that I want DSM to have, or what's installed on my Unraid box? My Unraid box has a 10th gen processor.
  8. Can you give more details on STEP 5 (edit VM settings in advanced XML mode and install the *.pat)? I entered the MAC, but it just hangs when I try to boot from the USB option.
  9. My time to win is over 30 years on a 10th gen i9 with 64 GB of RAM. What am I doing wrong? I'm using mostly default settings, plotting on an SSD and using RAID storage for completed plots.
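     For context: as I understand it, the farmer's "estimated time to win" is roughly (netspace / your farm size) multiplied by the ~18.75 s average block interval, so it mostly reflects how many plots are finished rather than plotting speed. A back-of-the-envelope sketch, using an assumed k32 plot size and an illustrative mid-2021 netspace figure (neither number comes from the post):

        # Rough "expected time to win" estimate for a Chia farm.
        GIB = 2 ** 30
        EIB = 2 ** 60

        def expected_time_to_win_days(num_plots, netspace_bytes,
                                      plot_size_bytes=101.4 * GIB,  # assumed k32 plot size
                                      block_interval_s=18.75):      # ~4608 blocks per day
            farm_bytes = num_plots * plot_size_bytes
            proportion = farm_bytes / netspace_bytes  # your share of total netspace
            return block_interval_s / proportion / 86400

        # e.g. 6 finished plots against an illustrative 30 EiB network
        days = expected_time_to_win_days(6, 30 * EIB)
        print(f"~{days:,.0f} days (~{days / 365:.0f} years)")  # roughly 30+ years

     In other words, a 30+ year estimate usually just means only a handful of plots exist so far; it doesn't by itself indicate that the plotter settings are wrong.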
  10. Getting this error every time I run it:

        Traceback (most recent call last):
          File "/usr/bin/shreddit", line 11, in <module>
            load_entry_point('shreddit==6.0.7', 'console_scripts', 'shreddit')()
          File "/usr/lib/python3.6/site-packages/shreddit/app.py", line 45, in main
            shredder.shred()
          File "/usr/lib/python3.6/site-packages/shreddit/shredder.py", line 68, in shred
            deleted = self._remove_things(self._build_iterator())
          File "/usr/lib/python3.6/site-packages/shreddit/shredder.py", line 166, in _remove_things
            self._remove(item)
          File "/usr/lib/python3.6/site-packages/shreddit/shredder.py", line 137, in _remove
            self._remove_comment(item)
          File "/usr/lib/python3.6/site-packages/shreddit/shredder.py", line 124, in _remove_comment
            comment.edit(replacement_text)
          File "/usr/lib/python3.6/site-packages/praw/models/reddit/mixins/editable.py", line 20, in edit
            updated = self._reddit.post(API_PATH['edit'], data=data)[0]
          File "/usr/lib/python3.6/site-packages/praw/reddit.py", line 432, in post
            return self._objector.objectify(data)
          File "/usr/lib/python3.6/site-packages/praw/objector.py", line 122, in objectify
            raise APIException(*errors[0])
        praw.exceptions.APIException: RATELIMIT: 'Looks like you've been doing that a lot. Take a break for 3 seconds before trying again.' on field 'ratelimit'
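      The failure is Reddit's API rate limit surfacing through praw as an APIException, rather than anything specific to shreddit's config. A minimal sketch of the kind of retry guard that would survive it, written against the praw 6-era exception shown in the traceback (retry count and delay are arbitrary):

        # Retry a comment edit when Reddit answers with a RATELIMIT error.
        import time
        import praw.exceptions

        def edit_with_backoff(comment, text, retries=5, wait_s=10):
            for attempt in range(retries):
                try:
                    return comment.edit(text)
                except praw.exceptions.APIException as exc:
                    if exc.error_type != "RATELIMIT":
                        raise  # some other API error: don't swallow it
                    time.sleep(wait_s * (attempt + 1))  # back off and try again
            raise RuntimeError("still rate limited after retries")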
  11. Getting an error on Firefox Sync on a fresh install:

        Switching to PGID 100...
        Switching to PUID 99...
        Setting timezone to America/Los_Angeles...
        Checking prerequisites...
        Generating configuration...
        Fixing perms...
        [2021-08-20 03:35:18 +0000] [1] [INFO] Starting gunicorn 19.6.0
        [2021-08-20 03:35:18 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
        [2021-08-20 03:35:18 +0000] [1] [INFO] Using worker: sync
        [2021-08-20 03:35:18 +0000] [19] [INFO] Booting worker with pid: 19
        [2021-08-20 03:35:18 +0000] [19] [ERROR] Exception in worker process
        Traceback (most recent call last):
          File "/usr/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
            worker.init_process()
          File "/usr/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 126, in init_process
            self.load_wsgi()
          File "/usr/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
            self.wsgi = self.app.wsgi()
          File "/usr/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
            self.callable = self.load()
          File "/usr/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 63, in load
            return self.load_pasteapp()
          File "/usr/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 59, in load_pasteapp
            return load_pasteapp(self.cfgurl, self.relpath, global_conf=None)
          File "/usr/local/lib/python2.7/site-packages/gunicorn/app/pasterapp.py", line 69, in load_pasteapp
            global_conf=global_conf)
          File "/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 253, in loadapp
            return loadobj(APP, uri, name=name, **kw)
          File "/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 278, in loadobj
            return context.create()
          File "/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 715, in create
            return self.object_type.invoke(self)
          File "/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 152, in invoke
            return fix_call(context.object, context.global_conf, **context.local_conf)
          File "/usr/local/lib/python2.7/site-packages/paste/deploy/util.py", line 55, in fix_call
            val = callable(*args, **kw)
          File "/app/syncserver/__init__.py", line 265, in main
            config = get_configurator(global_config, **settings)
          File "/app/syncserver/__init__.py", line 254, in get_configurator
            config = mozsvc.config.get_configurator(global_config, **settings)
          File "/usr/local/lib/python2.7/site-packages/mozsvc/config.py", line 65, in get_configurator
            load_into_settings(config_file, settings)
          File "/usr/local/lib/python2.7/site-packages/mozsvc/config.py", line 43, in load_into_settings
            for name, value in config.get_map(section).iteritems():
          File "/usr/local/lib/python2.7/site-packages/konfig/__init__.py", line 118, in get_map
            return dict(self.items(section))
          File "/usr/local/lib/python2.7/site-packages/backports/configparser/__init__.py", line 878, in items
            return [(option, value_getter(option)) for option in d.keys()]
          File "/usr/local/lib/python2.7/site-packages/backports/configparser/__init__.py", line 875, in <lambda>
            section, option, d[option], d)
          File "/usr/local/lib/python2.7/site-packages/konfig/__init__.py", line 35, in before_get
            defaults)
          File "/usr/local/lib/python2.7/site-packages/backports/configparser/__init__.py", line 445, in before_get
            self._interpolate_some(parser, option, L, value, section, defaults, 1)
          File "/usr/local/lib/python2.7/site-packages/backports/configparser/__init__.py", line 508, in _interpolate_some
            "found: %r" % (rest,))
        InterpolationSyntaxError: '$' must be followed by '$' or '{', found: u'$rmUhvLWU9A4EL*vM!s'
        [2021-08-20 03:35:18 +0000] [19] [INFO] Worker exiting (pid: 19)
        [2021-08-20 03:35:18 +0000] [1] [INFO] Shutting down: Master
        [2021-08-20 03:35:18 +0000] [1] [INFO] Reason: Worker failed to boot.
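      The crash comes from configparser interpolation: a literal "$" in a config value has to be escaped as "$$" (or interpolation disabled), otherwise before_get() raises exactly this InterpolationSyntaxError. A small reproduction, assuming a [syncserver] section with a secret option along the lines of the container's syncserver.ini (the value below is made up):

        # Reproduce and fix the "'$' must be followed by '$' or '{'" failure.
        import configparser

        SECRET = "$notarealsecret"  # made-up value containing a literal "$"

        broken = configparser.ConfigParser()  # BasicInterpolation by default
        broken.read_string(f"[syncserver]\nsecret = {SECRET}\n")
        try:
            broken.get("syncserver", "secret")
        except configparser.InterpolationSyntaxError as exc:
            print("same failure as the log:", exc)

        fixed = configparser.ConfigParser()
        fixed.read_string("[syncserver]\nsecret = " + SECRET.replace("$", "$$") + "\n")
        print("escaped value reads back as:", fixed.get("syncserver", "secret"))

      So escaping the "$" in the secret (or regenerating the secret without one) should let the worker boot.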
  12. I keep getting an error complaining about file storage when doing snapshots. I'm not sure what it's referencing; I have set the backup location to /mnt/user/backups/vmbackup/, and this location works when stopping and backing up. vm_state is running, vdisk_type is raw.

        2021-08-14 04:43:14 information: qemu agent found. enabling quiesce on snapshot.
        error: internal error: missing storage backend for 'file' storage
  13. I get an error using snapshots, any ideas? On v6.10.0-rc1:

        2021-08-12 05:15:45 information: qemu agent found. enabling quiesce on snapshot.
        error: internal error: missing storage backend for 'file' storage
  14. I have an Nvidia 1050 (with a ROM BIOS) and am successfully running an Arch Linux VM. Everything works as it should; however, if I have to reboot or shut down the Arch OS, I cannot bring it back up without rebooting Unraid. E.g., I will shut down Arch, go to the VM tab in Unraid, and restart it. It will appear to boot up, but my display will not turn on. EDIT: OK, it looks like the keyboard isn't even turning on either. I am using a USB controller which may not be getting reset (is there a way to reset it without a reboot?), but the lack of display is also concerning; I would think those would fire up, but they are not. Logs don't show anything.
  15. So I deleted this file and restarted the Docker container, but I am still getting the "Your web server is not properly set up to resolve '/.well-known/webfinger'" error. Any ideas? The changelog entry I followed was:

        25.02.21: - Nginx default site config updated for v21 (existing users should delete /config/nginx/site-confs/default and restart the container).

      I've also tried adding this, and still no luck:

        location = /.well-known/webfinger {
            return 301 /index.php$uri;
        }
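      One thing worth checking is what that path actually returns from outside the container, since the admin check only looks at the response it gets back. A quick probe, assuming the location block above is in the active config and using a placeholder URL:

        # See whether /.well-known/webfinger answers with the expected redirect.
        import requests

        resp = requests.get("https://cloud.example.com/.well-known/webfinger",
                            allow_redirects=False, timeout=10)
        print(resp.status_code, resp.headers.get("Location"))
        # With the location block above, this should be a 301 pointing at
        # /index.php/.well-known/webfinger rather than a 404.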
  16. Not sure about the media; there are definitely limitations due to iOS. It's not a true sync, it's more of a download/upload type of thing. I use it to have certain files available: I download to iOS, maybe edit, then re-upload. It's not ideal, but unless Apple changes their tune there is not much Resilio or Syncthing can do. On Android there are true sync solutions (background syncs), which was great and the one thing I miss from that platform.
  17. I use Resilio (iOS) and the Unraid Docker container. I did indeed pay for Resilio, but it's so much better IMO than Syncthing. As far as an iOS app goes, Syncthing doesn't have one, but there is an app called Mobius Sync, I believe, that works just fine. iMazing lets you interact with your iPhone/iPad; I use it to put files on the devices and also to back up (wired or wirelessly) to my Unraid server. I'm on the iOS beta, so having a good backup is a must.
  18. I use iMazing to back up to an SMB share. For individual files, Resilio Sync or Syncthing.
  19. It seems it only works with non-admin users, so I just created another user and it seems to work now.
  20. I've tried both ways: by using a mapped folder, and recently by just bashing into the Docker container and creating the key under the container filesystem. I see in the template that the appdata path /mnt/cache/appdata/crushftp is mapped to /var/opt/CrushFTP10 in the container; should I place it there?
  21. Anyone else getting constantly disappearing SSH keys? I set my key for SFTP, and it usually works for a day or so, but eventually it always removes my key. I have to go back into user preferences and re-add the key. I put my key in /usr/local/lib, if it matters.
  22. Perhaps, but it looks like neither @chbmb nor @linuxserver.io have been active since the blowup, according to the last-visited dates on their profiles. I know they are still updating the containers (I had a few updates today). No judgement either way from me... just interesting all around.
  23. Wow, I didn't know all that. Definitely concerning to me. I'm rebuilding my system next week and will have to take this into consideration, as I'm also looking at a Proxmox/TrueNAS combo.
  24. I am using a trial of Unraid on an older server, and it is constantly rebooting (seemingly at random). Diagnostics attached; any insight would be appreciated. This computer previously ran Windows and I never had issues with it, so it might be something Linux-specific, I don't know. apollo-diagnostics-20210215-1753.zip