Everything posted by aglyons

  1. Nope, changing that in the Management section of the settings page didn't even change the TLD that UnRaid responds to.

     <-----UNRAID-API-REPORT----->
     SERVER_NAME: KNOXX.lyons
     ENVIRONMENT: production
     UNRAID_VERSION: 6.9.2
     UNRAID_API_VERSION: 2.46.3 (running)
     NODE_VERSION: v14.15.3
     API_KEY: valid
     MY_SERVERS: authenticated
     MY_SERVERS_USERNAME: aglyons
     RELAY: connected
     MOTHERSHIP: ok
     ONLINE_SERVERS: KNOXX[owner="aglyons"]
     OFFLINE_SERVERS:
     ALLOWED_ORIGINS: http://localhost, http://IPV4ADDRESS, https://IPV4ADDRESS, http://knoxx, https://knoxx, http://knoxx.local, https://knoxx.local
     HAS_CRASH_LOGS: no
     </----UNRAID-API-REPORT----->
  2. Ah ha, I think I figured it out. I am using my own TLD '.lyons' with my PiHole, because Unifi does not like .local for some reason; it causes a bunch of problems. I failed to update the TLD in UnRaid. It still says .local.
  3. Nope, no RP yet, that is. I do plan on going there once I get the hang of it. The plugin is up to date, and the report contents looked OK to me too.

     <-----UNRAID-API-REPORT----->
     SERVER_NAME: KNOXX
     ENVIRONMENT: production
     UNRAID_VERSION: 6.9.2
     UNRAID_API_VERSION: 2.46.3 (running)
     NODE_VERSION: v14.15.3
     API_KEY: valid
     MY_SERVERS: authenticated
     MY_SERVERS_USERNAME: aglyons
     RELAY: connected
     MOTHERSHIP: ok
     ONLINE_SERVERS: KNOXX[owner="aglyons"]
     OFFLINE_SERVERS:
     ALLOWED_ORIGINS: http://localhost, http://192.168.XXX.XXX, https://192.168.XXX.XXX, http://knoxx, https://knoxx, http://knoxx.local, https://knoxx.local
     HAS_CRASH_LOGS: no
     </----UNRAID-API-REPORT----->
  4. Came across this recently and it won't go away. I searched around the forum and found nothing. knoxx-diagnostics-20220519-1233.zip
  5. How could I get duplicate lines in the go file? I've never edited it myself. They could only have come from plugins being installed.
  6. Diagnostics attached: knoxx-diagnostics-20220518-0949.zip. Some lines in the syslog jump out at me: 172, 285, 392, 406, 629, 927-931, 967.
  7. I managed to get the system started again. I took a look at the ident.cfg file, specifically lines 35 and 36. I deleted both lines and added a LF so that there was an empty line 35; I decided on that because the default ident.cfg file is formatted that way. I'm not sure what line 35 (the stray board") was supposed to be, or whether it was added by a plugin update. Somehow this file was changed during the backup process that hung the system. I'm not sure if I should go forward with an update to 6.10 at this point, or if there are any other diagnostics I can run to make sure the system is stable enough to upgrade.
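A quick sanity check for the kind of truncation that broke ident.cfg is to flag any line with an odd number of double quotes. This is a minimal sketch: the sample file and its contents are hypothetical, and on a live server you would point it at /boot/config/ident.cfg instead.

```shell
# Write a hypothetical sample config with one truncated line (the stray board").
cfg=/tmp/ident-sample.cfg
printf 'NAME="KNOXX"\nboard"\nCOMMENT="media server"\n' > "$cfg"

# awk splits each line on double quotes: an odd quote count yields an
# even field count (NF), so flag those lines with their line numbers.
awk -F'"' 'NF % 2 == 0 { printf "line %d: %s\n", NR, $0 }' "$cfg"
# prints: line 2: board"
```

Any line it prints is a candidate for the unmatched quote that triggers the "unexpected EOF while looking for matching" error at boot.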
  8. No go, the system is screwed. How can I get it back up with the current config working?
  9. chkdsk reported no errors. I only have three USB ports on the Dell R510. The external USB ports are home to a UPS and keyboard. The internal USB was my boot connection. It's been stable ever since I first installed everything. I've made a BIN backup image of the thumbdrive. Will try to start it up again.
  10. Thanks, I'll try that. I did try safe mode with GUI; the built-in Firefox failed to load any page at localhost. That would tell me that unRaid was not running. Something is borked.
  11. This is a brand new Samsung flash drive, and I've had the system for about two weeks. 6.10 was released and it was advised to take a backup of the original flash, so I did. The file would start to download but stop after a minute or so, and the webUI became non-responsive, so I had to restart via SSH (powerdown -r). I tried the backup again and the same thing happened: the download stalled and the webUI went non-responsive again. I restarted once more via SSH and now the system will not come back up. It's sitting at:

      /var/tmp/indent.cfg: line 35: unexpected EOF while looking for matching `"'
      /var/tmp/ident.cfg: line 36: syntax error: unexpected end of file
      Welcome to Linux 5.10.28-Unraid x86_64 (tty1)
      (none) login:

      I tried a full powerdown of the system and it still won't come back up.
  12. OK, so the definitive answer: https://wiki.unraid.net/Cache_disk#Amount_of_data

      "Amount of data: The final consideration in choosing a cache drive is to think about the amount of data you expect to pass through it. If you write ~10 GBs per day, then any drive 10 GB or larger will do (a 30 GB SSD may be a good fit in this case). If you write 100 GB in one day every few weeks, then you will want a cache drive that is larger than 100 GB. If you attempt a data transfer that is larger than the size of your cache drive, the transfer will fail."
  13. It's interesting that mapped SMB connections in Windows do show the correct array capacity.
  14. OK, that's good to know. BUT, the same thing still happens. Unraid reports the capacity of the cache. I just changed one share to use cache and ensured that it showed the full array capacity in the shares page. I tried the copy/paste again. Same error as before, approx 800GB more space needed to copy the pasted files.
  15. OK, I think I figured out what is going on here. There STILL is an issue with Unraid, in a way¹.

      I use Teracopy as my file copy handler. One option it has is to check free space before a copy/move procedure, and I think the built-in Windows copy/paste does the same thing: it checks for free space prior to starting the operation. Unraid reports back the capacity of the cache, not the capacity of the share being transferred to. Copying a test folder of 1348GB (1.22TB on-disk) returned an error needing an extra 800GB of space: 500GB (cache size) + 800GB (additional space needed per the error) = 1300GB. IMO, Unraid should be returning the free space available on the share, not the cache size.

      I'm curious how the cache system works. I understand the mover and that it runs on a schedule. But is the mover/cache system smart enough to monitor the current capacity of the cache and start moving files if capacity is getting too high, before the schedule kicks in?

      ¹ UPDATE: As I write this and poke around the UI while testing, I noticed an interesting quirk which probably answers my own question. If a share is set to use cache (not 'cache only', but any cache setting), the available space shown in the share panel is the available capacity of the cache, not the array. Returning the cache capacity rather than the array capacity means you hit a hard limit on the size of any file transfer you attempt, which is exactly the situation I am in now.

      As I understand it, the purpose of the cache is to help speed up transfers, like a fast holding space, but this seems to work differently. The problem is that if you turn cache off, transfer rates are terrible. I upgraded to 10GbE, and with no cache, transfers start in the 500MB/s range but very quickly plummet to under 90MB/s even with Turbo Write enabled. On a pipe that can handle 1250MB/s and SATA3 drives that can do 750MB/s (both in theory, yes), 90MB/s is a far, far cry from potential. I would be happy with a sustained 400-500MB/s.
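The arithmetic in the post can be sketched as the pre-flight check a copy tool performs. This is a rough model, not Teracopy's actual logic: it compares the transfer size against the free space the server reports (here, the cache size), using the numbers from the thread.

```shell
# All values in GB, taken from the post above.
reported_free=500    # what Unraid reports for a cache-enabled share: the cache pool size
transfer=1348        # size of the folder being copied (1.22TB on-disk)

# A free-space-checking copy tool refuses the job up front if the
# transfer exceeds the reported free space, and reports the shortfall.
if [ "$transfer" -gt "$reported_free" ]; then
  shortfall=$((transfer - reported_free))
  echo "copy refused: ${shortfall}GB more space needed"
fi
# prints: copy refused: 848GB more space needed
```

The ~848GB shortfall matches the "approx 800GB more space needed" error in the post, which is why reporting the cache size instead of the share's free space caps the transfer.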
  16. So I do have a min free space set for the drives. In this scenario, I was copying a folder with multiple files totaling 1.2TB. Windows saw only 500GB of space on the target share because it was the cache that it was looking at.
  17. I thought about how to go about searching for this in the forum and couldn't come up with anything. My apologies if this is a rehash. I have two 250GB SSD cache drives in a btrfs pool. Total 500GB. If you try to transfer in 1.2TB of data, Windows complains that there isn't enough space. Regardless of the fact that the array is 60TB in size. Is this as expected or is this something that is being looked into?
  18. Curious: why not use tray override? If we shouldn't use it, why is it there? What was its intended use case? That setting was actually perfect for me. I have 12 external trays on the R510 chassis plus two internal 2.5" bays, and creating a whole other layout for 2 drives seems overkill. I didn't think this issue would really affect anything; I saw it as more of a cosmetic concern.
  19. I suspect you could create two layouts. One horizontal and one vertical and assign the drives appropriately.
  20. @olehj Seems to be a little bug in the layout page. The drive numbers are duplicating drive 12 when they should continue at 13 & 14.
  21. So the situation here is that there are a number of different sources all over the Synology that are in the backup set. To "pull" the data to Unraid, as opposed to "pushing" it, would mean a significant amount of scripting and management: each source would need its own script to trigger the sync. While many might not be concerned about a 'little' scripting, I'm not a deep Linux guy and it would take me some time to figure it all out.
  22. So it's definitely something buggy with Synology then. Good to know; I can bring this up in the ticket I have with them. It's frustrating that rsync is the ONLY way to automate a file backup to a server that is not another Synology NAS. The local backup option in Hyper Backup won't let you choose a mapped network location!
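For reference, a push-style job from the Synology side could be one scheduled rsync per source. This is a hypothetical crontab/Task Scheduler fragment, not configuration from the thread: the user, host, and paths are placeholders.

```
# Hypothetical scheduled entry on the Synology: push one source folder
# to the Unraid box nightly at 02:30 over SSH.
30 2 * * * rsync -a --delete -e ssh /volume1/photos/ backup@knoxx:/mnt/user/backups/photos/
```

Each backup source would get its own line like this, which is the per-source scripting overhead the post above describes.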