Jessie

Everything posted by Jessie

  1. It is important that you route everything through proxynet, as per Spaceinvader One. If that fails, you could set up a pfSense router in an Unraid VM: bridge the modem and pfSense does the rest. You will need an extra Ethernet port in the Unraid server for the WAN side of the pfSense VM, and you have to set up static addresses on Unraid so it is available before pfSense kicks in.

     If you have a VOIP phone line on the router, it will stop working when you bridge the modem. One solution is to bridge a second modem and hang the VOIP modem off the LAN to act as a phone only. If your modem is acting as a wireless router (i.e. Ethernet to the internet rather than DSL), you can plug the WAN cable straight into the pfSense router and use the modem as the phone if required, so you don't need a second bridged modem. That will fix hairpinning.

     The other possibility is a port clash if you are using port 443 on swag. I usually leave swag on 443, then go to Unraid "Settings/Management Access" and set "Use SSL/TLS" to No. This blocks access to Unraid from outside the network, which suits me; I don't want Unraid available externally. You can still reach it through a VPN tunnel if necessary.
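On the port-clash point, a quick way to see whether something is already bound to 443 on the host. This is a generic Linux one-liner, not Unraid-specific; `ss` ships with iproute2 and is present on current Unraid releases.

```shell
# List listening TCP sockets and keep only those bound to port 443.
# Empty output means nothing on the host is holding 443.
ss -tln | awk 'NR > 1 && $4 ~ /:443$/'
```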
  2. I'm a bit lazy; I always go to the GUI first, then resort to the console when/if it falls apart. The other gotcha is that on some updates you might have to go to the mariadb console to fix the database, but that's a different topic.
  3. More on upgrading via the GUI. Start the upgrade. For me, it usually times out during the backup step. Use the browser's back button to return to the settings screen; the backup process is still running in the background. Wait for a bit, then press update again. When ready, it should resume at the download step. The same may happen during the download, so go back and wait again. It will most likely get to almost the last step now. Don't go back after it enters maintenance mode; if you do, you will be going to the console to recover it. If it times out on the last step, just wait a bit, then retry. When it is ready you should be able to proceed.
  4. I presume you are talking about updating Nextcloud itself vs updating the docker in Unraid. I'm also going to guess you are going to v21 from v20. In Unraid, open a console from the Nextcloud icon, then run these commands one at a time:

     sudo -u abc php /config/www/nextcloud/occ db:add-missing-indices
     sudo -u abc php /config/www/nextcloud/occ db:convert-filecache-bigint
     sudo -u abc php /config/www/nextcloud/occ db:add-missing-columns
     sudo -u abc php /config/www/nextcloud/occ db:add-missing-primary-keys

     Then:

     sudo -u abc php /config/www/nextcloud/occ maintenance:repair

     then:

     sudo -u abc php /config/www/nextcloud/occ upgrade

     then, again for good measure:

     sudo -u abc php /config/www/nextcloud/occ maintenance:repair

     Finally, turn maintenance mode off:

     sudo -u abc php /config/www/nextcloud/occ maintenance:mode --off

     Then see if you can get back to the logon screen.
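If you end up running this sequence on every upgrade, it can be wrapped in a small helper. This is only a sketch: the "abc" user and occ path match the linuxserver Nextcloud container, so adjust `OCC` for a different install.

```shell
#!/bin/sh
# Sketch of the occ upgrade sequence as a reusable helper.
# Assumption: the "abc" user and the occ path below match the
# linuxserver nextcloud container; override OCC for other installs.
OCC="${OCC:-sudo -u abc php /config/www/nextcloud/occ}"

occ_upgrade_sequence() {
    for cmd in \
        "db:add-missing-indices" \
        "db:convert-filecache-bigint" \
        "db:add-missing-columns" \
        "db:add-missing-primary-keys" \
        "maintenance:repair" \
        "upgrade" \
        "maintenance:repair" \
        "maintenance:mode --off"
    do
        echo "occ $cmd"
        # Stop at the first failing step so you can pick up where it broke.
        $OCC $cmd || { echo "occ $cmd failed" >&2; return 1; }
    done
}

# occ_upgrade_sequence   # run from the nextcloud container console
```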
  5. Nextcloud icon when you create a share in the Windows client. When you create a share with the client, it changes the folder icon to the Nextcloud logo. Is there a way to leave the default icon unchanged, e.g. so the folder keeps the standard folder icon?
  6. Just in case anyone else has a similar problem with this board: I just built a system using the above motherboard, a Ryzen 7 3700X processor and 32GB RAM (processor and RAM most likely not relevant, but noted anyway). The problem was that when creating a Windows VM, it declared that the virtio drivers for the serial device were not found. I built a similar system a few weeks ago with no problems. The previous system used the F30 BIOS; this one had F31. Apparently there is a problem with the F31 BIOS. Solution: I loaded the new BIOS, F33g. Problem solved. Presumably you could also go back to the F30 BIOS. Now you know.
  7. It appears the issue is with the mail app. It is possible to log into a terminal and uninstall it, then run a repair, and the instance will probably start. Once you get it started you can deal with the mail app; that problem will probably be in the database engine. I'll offer to help if you want to persist with it.
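For the terminal part, disabling (and if necessary removing) the app is usually done with occ. A hedged sketch: the "abc" user and the occ path are the linuxserver container defaults, so adjust for your install; `app:disable`, `app:remove` and `maintenance:repair` are standard occ subcommands.

```shell
#!/bin/sh
# Assumption: "abc" and the occ path match the linuxserver nextcloud
# container; override OCC for a different install.
OCC="${OCC:-sudo -u abc php /config/www/nextcloud/occ}"

disable_mail_app() {
    $OCC app:disable mail || return 1   # stop the app from loading
    $OCC maintenance:repair             # then repair and try starting up
}

# disable_mail_app       # run from the nextcloud container console
# $OCC app:remove mail   # only if disabling alone isn't enough
```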
  8. I'm getting an error message on Nextcloud instances using the mysql docker about support for version 21 of Nextcloud. Does anyone know how to deal with this? I presume migrating to MariaDB would be the answer, but I don't know how to do that. Any hints? Better still, a procedure to take away the guesswork.
  9. Me too. It coincided with the installation of my 9th array disk, the installation of 2 extra secondary cache drives and the upgrade to 6.9, so I'm not sure what triggered it. I'm thinking it just has more to do during the timeout period of the shutdown. I just set the timeout to 100; will see what happens.
  10. Is there a procedure to migrate Nextcloud from a MySQL database to MariaDB?
  11. I upgraded my main Unraid machine to 6.9 rc2 and installed a secondary cache drive, for Unraid caching only. I'm not sure of the exact number, but when I transfer files I get cache-full errors at around the 100GB mark, and the secondary cache is 500GB. Is there a setting to tell Unraid the size of the cache, or is this a bug?
  12. I think the C-state issue was relevant to first-gen Ryzen systems; I've built plenty of systems on Ryzen 2 and 3 platforms with no mods needed. Was it doing it before you adjusted the C-states? Maybe reset the board back to factory and start again. For reference, I have built on Asus and Gigabyte x170 platforms and on X570 Gaming X and B550 Gaming X boards, all of which ran fine with an R7 3700X. If you are passing through PCI devices, maybe isolate them and see if that is causing it. And as the previous person said, introduce your dockers one at a time to see if they are causing it.
  13. Do you have an array (mechanical hard drive)? I assume you do. Try setting appdata, domains and system to disk 1, then set cache to "Prefer". This will move them to the cache if there is enough room. Then try a move. You will probably find they stay on the array because they are active, so go to Settings, turn the docker engine off and move again. The directories should move to the cache. Then go back and turn docker on again.

     (Edit: I've added to this — start again here.)

     Looking at your shares above, on a normal Unraid setup you would have the media and software shares on the mechanical array, i.e. your big hard drives; speed is not critical for these files. I usually put isos on the array as well, because those files are normally used once to install a VM etc.

     So, to unscramble the egg with no duplication: set up Unraid so you have a disk 1 mechanical drive formatted as xfs, and leave your 2 SSD devices as they are. I assume your dockers and VMs are on the Samsung. Set all cache settings to "Yes", then do a move. This should move all files to the array (mechanical disk 1); if it doesn't, go to Settings, disable dockers and VMs, and do the move again.

     Once all the shares are on the mechanical disk, change the cache preference for appdata, domains and system to prefer the apps pool, and do a move again. That should move those shares to the apps cache drive.

     At this point I'd reformat the Samsung drive to btrfs (or, if you are only experimenting, don't bother). You can do this by stopping the array, changing xfs to btrfs, then restarting the array. Finally, go back to Settings and turn docker and VMs back on.

     Your appdata should then be working on the SSDs, and your slow, large data will remain on the array. When you add data to the media and software directories, it will save to the SSDs because cache is turned on; then at 3am the mover will run and put it on the array. Alternatively, you can click the move button and do it immediately.
  14. I've built quite a few Unraid systems over the past few years, and I recently went to Ryzen.

     The first Ryzen 3000 build was on an X570 platform (Gigabyte X570 Gaming X). Brilliant: you can change a setting in CMOS and it optimises the IOMMU allocations for Unraid passthrough. The next one I built was on a B550 platform (Gigabyte B550 Gaming X). I went that way because the requirements of this machine didn't warrant an X570, and the B550s cost less. I had trouble passing through a single Nvidia GT 1030. I had previously followed Spaceinvader One's method of modifying a downloaded BIOS image, but this didn't work. In the end I resorted to the old method, which was to install a cheap graphics card to boot Unraid and then pass the VM through to the second GPU. The problem with that was that I lost a full-sized PCIe slot that could have been used for something else.

     Then I built another machine on an X570 and had the same problem, even though it had the same hardware specs as the first one, which worked straight off. I then decided to extract the BIOS from the actual card, and it worked. I suspect there might have been a firmware change in the graphics card which made the downloaded BIOS image incompatible. With this new-found knowledge I returned to the B550 machine, extracted the BIOS, and it now works perfectly on a single card. The BIOS extraction was done using the GPU as a secondary card and command lines. Spaceinvader One has now released a script which will do all the hard work for you; you don't even require a second GPU for the extraction. He is a very clever man. (If you don't know who he is, go to YouTube and discover his brilliant tutorials on Unraid, pfSense and others.)

     So far I have been using Ryzen 7 3700X and, previously, 2700 processors. I chose the 3700X over the 3800X because it draws less power and isn't much slower. In the next couple of weeks I will build one on a B550 with a Ryzen 5 3600?? The R5 has 12 threads vs 16; 12 will be enough for the anticipated needs of its user. For the near future I will build on either of these platforms depending on the anticipated workload.

     To date I had been holding off going to 6.9 until it went mainstream. I changed my mind last week when chasing a problem on my current machine: 6.9 rc2 seems very stable and the features are too good to go back on.

     The B550 Gaming X has 2 x m.2 slots and 4 SATA ports, so using 4TB drives you can have 12TB with parity plus 2 m.2 cache drives. The X570 Gaming X has 2 x m.2 slots and 6 SATA ports, so using 4TB drives you can have up to 20TB with parity plus 2 x m.2 cache drives, or with v6.9, 12TB and 4 cache drives.

     I used to use Cooler Master cases (mainly Silencio 550s), but I feel the new generation doesn't suit Unraid, so I have opted for Phanteks P400S cases. There is a gaming version of the P400, but I mainly build these machines for business, so "light shows" are not really appropriate. I like the Phanteks case design: it provides for 2 SATA SSD drives and 2 x 3.5" bays, and you can install up to 4 extra 3.5" bays to provision the maximum number of drives for the Unraid setup. There is also plenty of room behind the motherboard to hide the cables.

     A typical spec might use one of the above motherboards with a Ryzen 7 3700X, 16 or 32GB of DDR4-3200 RAM, a Gigabyte or Asus GT 1030 graphics card (no fans), possibly an add-on PCIe USB card for extra passed-through USB slots, and a PCIe LAN adapter if the machine is going to run a pfSense firewall VM. It will then typically run at least one operating system (usually Windows 10), and in the background a Nextcloud / Collabora instance, maybe a pfSense VM, maybe an OpenVPN docker so the owner can securely access the system remotely, and then the sky seems to be the limit for other docker applications.
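For reference, the manual extraction boils down to reading the card's ROM through sysfs. This is only a rough sketch: the PCI address and output path in the example are made up, and Spaceinvader One's script handles corner cases (such as trimming the NVIDIA ROM header) that this does not.

```shell
#!/bin/sh
# Sketch of a vBIOS dump via sysfs. Assumptions: run as root on the
# unraid host with the VM stopped; find your card's PCI address with
# lspci. Unlike Spaceinvader One's script, this does not trim the
# NVIDIA ROM header from the dump.
dump_vbios() {
    # $1: sysfs device directory, e.g. /sys/bus/pci/devices/0000:0b:00.0
    # $2: output file for the ROM image
    echo 1 > "$1/rom" || return 1      # make the ROM readable
    cat "$1/rom" > "$2" || return 1    # copy it out
    echo 0 > "$1/rom"                  # hide it again
}

# Example (address and path are placeholders):
# dump_vbios /sys/bus/pci/devices/0000:0b:00.0 /mnt/user/isos/gt1030.rom
```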
  15. Which Ryzen version? I think the first-gen chips had issues like this, but there were workarounds. If it is a first gen, Spaceinvader One has some tutorials on YouTube to resolve the problem. Years ago I had lockup problems on an x170 Xeon platform and a firmware upgrade fixed it.
  16. Yeah, you can run everything as a single device, but the whole point of Unraid is redundancy; at least it is for me. In Australia the cost of running 2 protected 500GB SSD pools is the same as running a single protected 1TB pool. So if you were originally planning a single pool on v6.8.3, halve the size of each SSD, buy twice as many, and go to 6.9 for 2 pools. My thinking is that if you are going to use Unraid but then break out into unprotected single devices again, you might as well just build the system on a vanilla Windows platform.
  17. As an alternative, have you considered going to v6.9 and running multiple caches? Then you can run your system as a standard setup, including the shares: run the appdata/VMs on one cache and everything else on another. Then all of your data is protected.
  18. My daily user machine had been running fine for a few years until recently. It's a Xeon E3-1231 with 32GB ECC on a Supermicro X10SL7 motherboard: 9 drives plus a parity for the array and 4 cache drives. It has 2 Windows 10 VMs installed, although I normally only run 1; 2 instances of Nextcloud with their associated Collabora and swag dockers; and a few occasionally used dockers such as Krusader. There are also a number of plugins, primarily to keep the system healthy.

     Anyway, all of this cruised along at an average CPU usage of 25% until recently. Then something happened: CPU usage started averaging 80 - 100%, with jerky sound and video, and the system overheating. Basically it became unusable.

     I did some isolation tests. I turned off the VMs and the dockers and it was still using 60 - 70% whilst doing nothing. If I stopped the array, it dropped to 1 - 2%. I upgraded the OS from 6.8.3 to 6.9 rc2 and it pretty well flatlined at 100%. I then started to look at the plugins. After a bit of searching through the forums I singled out a couple, turned them off and removed them. Still no good. I then went the whole hog and removed all of the plugins bar Community Apps. That fixed it. I then reinstalled the plugins that I really needed, which was most of them, and the system is still cruising at 22%. Not a very technical way to solve a problem, but it worked. So I'm not really sure what the cause was. The system has had continual upgrades since early version 6, so maybe some old-version plugins were not playing nice with their later counterparts.

     I have to say I'm really impressed with version 6.9. Until now I was going to hold off installing until the official release; it is too good to go back. Passthrough is brilliant since incorporating the VFIO plugin, and I have been longing to separate the file cache from the appdata/VMs for quite a while. I created a second cache whilst trying to solve this problem after going to 6.9. Brilliant.
  19. Looks like this is the culprit: WARNING: [pool www] server reached pm.max_children setting (5), consider raising it. Any ideas where this setting resides?
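For anyone who lands here: pm.max_children lives in the php-fpm pool configuration. The exact file varies by image; stock php-fpm keeps it in a www.conf under the php-fpm.d directory, and in the linuxserver containers it is usually somewhere under /config/php/. The numbers below are examples only, not recommendations; size them to your RAM, since each child is a full PHP worker process.

```ini
; php-fpm pool settings (file location depends on your container).
; Example values only - tune to available memory.
pm = dynamic
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
```

Restart the php-fpm service (or the container) after changing it.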
  20. When you do the updates you have to go through them sequentially. So if you are on 18.0.2, the updater will probably want you to go to 18.0.9 first. It will then offer 19.0.6, which was the last version of v19 at the time, and then it will proceed to 20.0.4. I found you have to do some database fixes along the way to get the green check tick.
  21. I just upgraded my main server to version 20.0.4. Now I get an error when I select "All files": "You do not have permission to upload or create files here". If I select "Shares" or "Recents", it displays files. Other users are working fine. I feel it might be a quota issue: my user has 550GB of files in it, and the other users, who have less, are working. If I revert to version 19.0.6, everything works again. Any ideas??
  22. I'll answer my own question in case someone else encounters the same problem. Recently I built a machine on a Gigabyte X570 Gaming X platform with a Ryzen 7 3700X. On this board you can change a setting in the BIOS and the IOMMU groups pretty well work in Unraid for USB/GPU passthrough without the need to set the ACS override. It worked flawlessly. Then I tried a Gigabyte B550 Gaming X motherboard with the same processor and tried to pass through a Gigabyte GT 1030 GPU (no fans) using a GPU BIOS dump from TechPowerUp. Black screen. I've built a few machines using this GPU and BIOS dump without problems on series 2 Ryzen and Intel boards, so I blamed the B550 board. Then I built a machine on an X570 board. Black screen again. So I dumped the BIOS from the actual card as per Spaceinvader One's tutorial, and it worked. I then added another GT 1030 card and was able to create a second Windows VM complete with keyboards/mice and USB passthrough. I am assuming the B550 failed for the same reason; I will do the experiment in the near future. I'm not really sure why the downloaded dump didn't work on the series 3 hardware but did on the second-gen systems. Maybe the firmware changed in the GT 1030 card.