Leaderboard

Popular Content

Showing content with the highest reputation on 11/07/19 in all areas

  1. 2 points
2. PLEASE - PLEASE - PLEASE: EVERYONE POSTING IN THIS THREAD, IF YOU POST YOUR XML FOR THE VM HERE, PLEASE REMOVE/OBSCURE THE OSK KEY AT THE BOTTOM. IT IS AGAINST THE RULES OF THE FORUM FOR THE OSK KEY TO BE POSTED. THANK YOU.
   The first macinabox has now been replaced with a newer version, as below.
   Original Macinabox October 2019 -- no longer supported
   New Macinabox added to CA on December 09 2020
   Please watch this video for how to use the container; it is not obvious from just installing the container. It is really important to delete the old macinabox, especially its template, else the old and new templates combine. While this won't break macinabox, you will have old variables in the template that are not used anymore. I recommend removing the old macinabox appdata as well, as sketched below.
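   A minimal sketch of that cleanup from the console, assuming default locations (both paths are assumptions - verify them against your own setup before deleting anything):

      rm -r /mnt/user/appdata/macinabox                                  # old macinabox appdata (assumed path)
      rm /boot/config/plugins/dockerMan/templates-user/my-macinabox.xml  # old user template; exact file name may differ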
    1 point
3. Oh, you should certainly see a difference. With respect to streaming, about 2000 passmarks per 10Mbit 1080p stream is the recommendation if transcoding is needed. With direct play of media locally, of course, that is not an issue. You are not likely to see 3x more CPU performance in everything you do on the server, but you might be able to do 3x as much simultaneously without the CPUs being taxed to their limit. You'll have a lot more CPU overhead with the 5680s, and the 5680s run at a faster clock speed as well. It is also recommended to keep about a 2000-passmark overhead for normal NAS operations with unRAID; if you are running a lot of active docker containers, it may be a bit more. Going from 4139 to 13340 will certainly give you a lot more headroom for simultaneous operations, and you should see an improvement. Streaming (especially if transcoding), running a lot of dockers, and doing normal NAS functions simultaneously would have had the 5603s sweating. The only real "concern" is that the recommended single-thread passmark rating for 1080p transcoding is between 1700-2000 depending on various factors; the 5680 single-thread rating is 1485. If you direct play and don't transcode, it's not a real big concern.
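   As a rough worked example using those rules of thumb: reserving ~2000 passmarks for NAS overhead leaves (13340 - 2000) / 2000 ≈ 5 simultaneous 10Mbit 1080p transcodes on the 5680s, versus (4139 - 2000) / 2000 ≈ 1 on the 5603s.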
    1 point
4. Check the file system on disk3: https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui Remove the -n flag or nothing will actually be repaired.
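   For reference, this is roughly what the fix looks like from the console, assuming disk3 is XFS and the array is started in maintenance mode (in the webGui you would instead just clear -n from the options box):

      xfs_repair -n /dev/md3   # dry run: -n reports problems but changes nothing
      xfs_repair /dev/md3      # run again without -n so repairs are actually written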
    1 point
5. Well, since the FileZilla website states the FileZilla server is for Windows only, you probably won't find the server in a Docker container 😁 Of course a FileZilla client is available in a docker container, but that is not what you are seeking.
    1 point
6. Within the settings of the PMS, there is a Scheduled Task, "Backup database every three days". By default this just makes a copy of the database and adds a date to the end of the name. So if you need to restore the database due to corruption, you delete the broken one and rename the backup (or copy it to the correct name).
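   A minimal sketch of the restore from the console, assuming a typical appdata layout and Plex's dated backup naming (the paths and the date suffix are illustrative, not exact):

      # stop the Plex container first, then swap the broken database for a dated backup
      cd "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
      mv com.plexapp.plugins.library.db com.plexapp.plugins.library.db.broken
      cp com.plexapp.plugins.library.db-2019-11-04 com.plexapp.plugins.library.db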
    1 point
7. The custom Qemu arguments are still working. Starting with Qemu 4.0 they changed the naming to x-speed and x-width. From the official Qemu 4.0 changelog:
   PCI/PCIe: Generic PCIe root port link speed and width enhancements: Starting with the Q35 QEMU 4.0 machine type, generic pcie-root-port will default to the maximum PCIe link speed (16GT/s) and width (x32) provided by the PCIe 4.0 specification. Experimental options x-speed= and x-width= are provided for custom tuning, but it is expected that the default over-provisioning of bandwidth is optimal for the vast majority of use cases. Previous machine versions and ioh3420 root ports will continue to default to 2.5GT/s x1 links.
   This is how it looks in 6.8 RC5 now:
   <qemu:commandline>
     <qemu:arg value='-global'/>
     <qemu:arg value='pcie-root-port.x-speed=8'/>
     <qemu:arg value='-global'/>
     <qemu:arg value='pcie-root-port.x-width=16'/>
   </qemu:commandline>
   Nvidia System Info is reporting the correct link speeds again.
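   One way to double-check the negotiated link from a console is lspci (the device address below is just an example; use the address of your GPU):

      lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'   # compare link capability vs. negotiated speed/width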
    1 point
8. I'm wondering the same thing. I'm using /mnt/user0 and /mnt/cache for ad hoc moving of data between the cache and user shares without messing with the shares' Use cache configuration. Maybe it could be preserved for experienced users via a system tunable?
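   For context, the pattern being described is something like this (share and file names are just examples):

      # /mnt/user0 is the user share view that excludes the cache, so this moves
      # a file off the cache onto the array without touching any share settings
      mv /mnt/cache/Media/movie.mkv /mnt/user0/Media/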
    1 point
  9. I'm curious why /mnt/user0 is being removed? I use it directly a lot when organizing things. Will it be possible to add it back with a plugin?
    1 point
10. This post talks about another XML correction: copying the <os></os> section of your original xml back in after making edits. It saved me from the exact same graphical issues you are having when changing CPU threads.
    1 point
11. I found that if you do something strange in the setup and hit apply, you will lose access to the server... you will not be able to ping it or load the interface. To fix it without rebooting, after deleting the autostart entry from /etc/wireguard, just get to the command line locally and type:
    ifconfig wg0 down
    The server immediately becomes available, and then you can go back to WireGuard, turn it off, correct the setting, and enable it again.
    1 point
12. The corruption occurred as a result of failing a read-ahead I/O operation with "BLK_STS_IOERR" status. In the Linux block layer each READ or WRITE can have various modifier bits set. In the case of a read-ahead you get READ|REQ_RAHEAD, which tells the I/O driver this is a read-ahead. In this case, if there are insufficient resources at the time the request is received, the driver is permitted to terminate the operation with BLK_STS_IOERR status. There is an example of this in the Linux md/raid5 driver. In the case of Unraid it can definitely happen under heavy load that a read-ahead comes along and there are no 'stripe buffers' immediately available; in that case, instead of making the calling process wait, it terminates the I/O. It has worked this way for years.
    When this problem first happened there were conflicting reports of the config in which it happened. My first thought was an issue in the user share file system. Eventually I ruled that out, and the next thought was cache vs. array. Some reports seemed to indicate it happened with all databases on cache, but I think those reports were mistaken for various reasons. Ultimately I decided the issue had to be with the md/unraid driver. Our big problem was that we could not reproduce the issue, but others seemed to be able to reproduce it with ease. Honestly, thinking failing read-aheads could be the issue was a "hunch" - it was either that or some logic in the scheduler that merged I/O's incorrectly (there were kernel bugs related to this with some pretty extensive patches, and I thought maybe the developer missed a corner case - this is why I added a config setting for which scheduler to use). This resulted in a release with those 'md_restrict' flags to determine if one of those was the culprit, and what-do-you-know, not failing read-aheads makes the issue go away.
    What I suspect is that this is a bug in SQLite - I think SQLite is using direct I/O (bypassing the page cache) and issuing its own read-aheads, and their logic to handle a failing read-ahead is broken. But I did not follow that rabbit hole - too many other problems to work on.
    1 point
13. It's hidden away in their forum somewhere as a workaround... I'll report back if I find it again! EDIT: https://forums.plex.tv/t/hardware-transcoding-broken-when-burning-subtitles-apollolake-based-synology-nases/482428/33
    1 point
14. I have tested the updated Dynamix System Temp plugin and confirmed it working on my test server, which has an AMD Ryzen 3400G processor and an ASRock X570M Pro4 motherboard.
    1 point
15. For people who booted up fine the first time around, decided to give the VM more cores or RAM, and went back to find the Apple logo stuck in the upper left and weird graphical anomalies: try restoring the <os> section of the xml with what was originally there when you first installed. You're welcome.
    <os>
      <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
      <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
      <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>
    </os>
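    If you would rather make the edit outside the webGui, one option is virsh (the VM name below is just the Macinabox default):

       virsh edit MacinaboxCatalina   # opens the libvirt XML in an editor; restore the original <os> block and save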
    1 point
16. @CorneliousJD My config looked exactly the same as yours. I did what @local.bin and @Smooth Beaver suggested.
    1. Backup of my current running Nextcloud 17 install.
    2. Grabbed the config template for nginx from https://docs.nextcloud.com/server/17/admin_manual/installation/nginx.html
    3. Adjusted a couple of things so it matches the old config, like uncommented IPv6 access and the server_name _ without a domain name:
       server {
           listen 80;
           # listen [::]:80;
           # server_name cloud.example.com;
           server_name _;
       Adjusted the cert paths:
           ssl_certificate /config/keys/cert.crt;
           ssl_certificate_key /config/keys/cert.key;
       Changed the path for Nextcloud:
           # Path to the root of your installation
           root /config/www/nextcloud;
       Changed the max upload size to my old settings:
           # set max upload size
           client_max_body_size 10G;
           fastcgi_buffers 64 4K;
       Included the full path for fastcgi_params "/etc/nginx/fastcgi_params":
           location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
               fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
               set $path_info $fastcgi_path_info;
               try_files $fastcgi_script_name =404;
               include /etc/nginx/fastcgi_params;
       And finally checked that all the settings for "Strict-Transport-Security" and "X-Frame-Options" are the same as before.
    4. Restarted the Nextcloud docker.
    5. Restarted the letsencrypt docker.
    6. Logged into Nextcloud, checked the logs, and disabled the "Nextcloud announcements" app because it spammed the logs with:
       Symfony\Component\Routing\Exception\RouteNotFoundException: Unable to generate a URL for the named route "ocs.provisioning_api.AppsController.disable" as such route does not exist.
       Looks like this is a known issue and will be addressed in a later update. https://github.com/nextcloud/nextcloud_announcements/issues/54
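    After edits like these it is worth validating the config before restarting; a quick check, assuming the container is named letsencrypt:

       docker exec letsencrypt nginx -t   # test the nginx config from inside the container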
    1 point
17. 09 Dec 2020 - Basic usage instructions.
    Macinabox needs the following other apps to be installed:
    CA User Scripts (macinabox will inject a user script; this is what fixes the xml after edits are made in the Unraid VM manager)
    Custom VM icons (install this if you want the custom icons for macOS in your VM)
    Install the new macinabox.
    1. In the template, select the OS which you want to install.
    2. Choose auto (default) or manual install. (Manual install will just put the install media and opencore into your iso share.)
    3. Choose a vdisk size for the VM.
    4. In VM Images: here you must put the VM image location (the vdisk for the VM will be put in this path).
    5. In VM Images Again: re-enter the same location as above. Here it is stored as a variable, which will be used when macinabox generates the xml template.
    6. In Isos Share Location: here you must put the location of your iso share. Macinabox will put the named install media and opencore here.
    7. In Isos Share Location Again: again, this must be the same as above. Here it is stored as a variable; macinabox will use this when it generates the template.
    8. Download method: leave as default unless for some reason method 1 doesn't work.
    9. Run mode: choose between macinabox_with_virtmanager or virtmanager only. (When I started rewriting macinabox I was going to use only virtmanager to make changes to the xml. However, I thought it much easier and better to be able to use the Unraid VM manager to add a GPU, cores, RAM, etc., and then have macinabox fix the xml afterwards. I decided to leave virtmanager in anyway, in case it is needed. For example, there is a bug in Unraid 6.9-beta (including beta 35): when you have a VM that uses VNC graphics and you change that to a passed-through GPU, it adds the GPU as a second GPU, leaving the VNC in place. This was also a major reason I left virtmanager in macinabox. For situations like this it is nice to have another tool. I show all of this in the video guide.)
    After the container starts it will download the install media and put it in the iso share. Big Sur seems to take a lot longer than the other macOS versions, so to know when it has finished, go to User Scripts and run the macinabox notify script (in the background); a message will pop up on the Unraid webUI when it has finished.
    At this point you can run the macinabox helper script. It will check to see if there is a new autoinstall ready, then it will install the custom xml template into the VM tab. Go to the VM tab now and run the VM. This will boot into the Opencore bootloader and then the install media. Install macOS as normal.
    After install you can change the VM in the Unraid VM Manager: add cores, RAM, GPU, etc. if you want. Then go back to the macinabox helper script, put the name of the VM at the top of the script, and run the script. It will add back all the custom xml to the VM, and it is ready to run. Hope you guys like the new macinabox!
    1 point
18. The sequence that will work is:
    1. Stop the array.
    2. Unassign parity2.
    3. Start the array. This is required to get Unraid to commit the fact that parity2 has been removed.
    4. Stop the array.
    5. Assign the old parity2 drive as an additional data drive.
    6. Start the array.
    There is no need to preclear the old parity2 drive, as you should not need to stress test it. Simply let Unraid clear it (which is faster than using preclear). You can use the array while the clear is in progress. When the clear operation finishes, Unraid will offer the option to format the disk (which only takes a couple of minutes) to get the disk ready to start being used.
    1 point