Leaderboard

Popular Content

Showing content with the highest reputation on 12/27/20 in all areas

  1. Redis now starts with the recommended parameter, so hopefully that fixes the error. Please update the container (force-update on the Docker page with advanced view turned on) and report back. No, the developers haven't released a Linux version of the dedicated server yet. I know that it is possible to run it through WINE, but that's not my preferred way of doing it, since it can add a lot of overhead and also lead to other problems... I recommend posting on the official Empyrion forums to ask if there is any progress... Have you installed a Cache drive in your server...
    2 points
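     A rough CLI equivalent of that force-update, in case the web UI isn't handy; the image and container names here are placeholders, not the actual template values:

        # Pull the newest image, then remove the old container so it gets
        # recreated from the current image (normally done from the Docker page)
        docker pull redis:latest
        docker stop my-redis && docker rm my-redis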
  2. Introduction: unBALANCE is a plugin to transfer files/folders between disks in your array. Support Fund: if you wish to do so, learn how to support the developer. Videos: thanks to gridrunner for his coverage of unBALANCE in one of his awesome videos, "Must Have unRAID Plugins - Part 3 Disk Utility Plugins" (the discussion specific to unBALANCE starts here). Description: unBALANCE helps you manage space in your unRAID array via two different operating modes. Scatter: transfer data out of a disk, into one or...
    1 point
  3. If you do a docker hub search within Apps (enable it in options), there are far more up-to-date versions of CUPS than what was available within Apps.
    1 point
  4. Yes, depending upon your allocation method, split level, and include/exclude settings.
    1 point
  5. Unraid never moves files from old disks to new ones. To do that, you'd want something like unbalance.
    1 point
  6. Same here. Broke all connections to my USB devices. You could try doing chmod 777 /dev/ttyACM0 in the server terminal and restart Domoticz. If it works, add the command to your go file.
    1 point
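     If the chmod fix works, persisting it in the go file would look something like this; a minimal sketch, assuming the device node from the post above:

        #!/bin/bash
        # /boot/config/go - runs once at every boot on Unraid
        # Relax permissions on the USB serial device so Domoticz can open it
        chmod 777 /dev/ttyACM0
        # Start the Management Utility (standard last line of the stock go file)
        /usr/local/sbin/emhttp &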
  7. I actually just got it working again; I just changed the container path to /Movies and re-added it to the root folders in Radarr. For some reason it wasn't working with /Media, which is what I use for Sonarr. I'm not completely sure if that was the root cause, but at least it's fixed... Thanks for the help and the step in the right direction!
    1 point
  8. Even though I scratched mine out, it looks like the 9500K limit and trickling all day might be the best way for you.
    1 point
  9. Right. Specifically it's the smartctl calls in telegraf (part of GUS). Alternatively you can comment out the [[inputs.smart]] block in Grafana-Unraid-Stack/telegraf/telegraf.conf to disable them (sketched below). It seems that, for whatever reason (new kernel, new smartctl), these calls are recorded as disk activity, which causes the disks to fail Unraid's "If inactive for X minutes, spin down" test.
    1 point
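     For reference, the block to disable would look roughly like this in Grafana-Unraid-Stack/telegraf/telegraf.conf; the exact contents vary by GUS version, so treat this as a sketch:

        # Comment out the whole [[inputs.smart]] section to stop the periodic
        # smartctl calls that keep registering as disk activity:
        #[[inputs.smart]]
        #  use_sudo = false
        #  attributes = true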
  10. I now also downloaded and installed the container and it starts just fine, and I can query it from the Steam Server Browser; I attached the log here: garry.log. Can you try the following (the commands are also collected below):
      1. Delete the container entirely.
      2. Open up a command prompt from Unraid itself and type in: 'rm -rf /mnt/user/GModTest/' (without quotes), and also 'rm -rf /mnt/user/Steamcmd/' (without quotes) - I recommend leaving SteamCMD in the appdata directory...
      3. Download it again from the CA App.
      What filesystem is your Array formatted with? Can you eventually try to set up an Unassigned Device...
    1 point
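     The cleanup steps above, collected as shell commands; double-check the paths before running them, since rm -rf is unforgiving:

        # Remove the old server files and the stray SteamCMD copy
        rm -rf /mnt/user/GModTest/
        rm -rf /mnt/user/Steamcmd/
        # ...then re-install the container from the CA App,
        # keeping SteamCMD in the appdata directory as recommended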
  11. If you are using openhab/openhab:latest-debian as the openHAB Repository, you are getting that update automatically.
    1 point
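     That is, tracking the rolling latest-debian tag means the next pull picks the update up; the CLI equivalent of Unraid's container update is simply:

        docker pull openhab/openhab:latest-debian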
  12. Hmm, I see. In my case it was the formatting of my Plex cache SSD in Unraid; I formatted it to NTFS first, but that gave loads of errors while transcoding, so I had to switch to XFS and everything started working flawlessly... but in this case I don't think I can help, sorry, this is all I know ^^
    1 point
  13. It's an unassigned slot in the array. I've made an update that will ignore a non-existent disk.
    1 point
  14. That would probably be it. `docker ps` will show you what’s running and `docker kill <id>` will take it down. You should be able to use the name (label) or the hex id, I think.
    1 point
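     A quick sketch of that workflow; the container name and ID are hypothetical:

        docker ps                  # list running containers with IDs and names
        docker kill my-container   # stop one by name (label)...
        docker kill 3f2a9c81d7e4   # ...or by its hex ID
        # docker stop <id> is the graceful variant, if the container should
        # get a chance to shut down cleanly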
  15. @nikaiwolf You will need to add a volume mapping, which they can all read/write data to, to all of the servers that you want clustered. Add this mapping to all of your containers (the template fields are sketched below): /serverdata/serverfiles/clusterfiles <> /mnt/cache/appdata/ark-se/clusterfiles. When you look in /mnt/cache/appdata/ark-se/clusterfiles, you should see a folder with the name of your cluster.
    1 point
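     On the Unraid template that is just an extra Path entry, repeated on every clustered container; a sketch of the two fields, using the paths from the post:

        Container Path: /serverdata/serverfiles/clusterfiles
        Host Path:      /mnt/cache/appdata/ark-se/clusterfiles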
  16. When you PM'd me, I assumed that you had read through my earlier posts about ARK... The auto mod manager that Wildcard crammed in has all of its paths hardcoded into the game engine. Your choices are to install "YACOsteamCMD" (Yet Another Copy Of SteamCMD), to create some additional volume mappings for your container so that the game engine can find the SteamCMD folder and the location of the workshop files, or to copy the workshop files from your client PC game folder to the server game folder. Adding these volume mappings should fix up the auto mod manager: /serverdata/serve
    1 point
  17. The dynamic config is for a small number of options that can be changed without restarting the server. ?customdynamicconfigurl="<link>" takes a direct link to a config file, e.g. http://arkdedicated.com/dynamicconfig.ini (a sample file is sketched below); currently only the following options are supported to be adjusted dynamically: TamingSpeedMultiplier, HarvestAmountMultiplier, XPMultiplier, MatingIntervalMultiplier, BabyMatureSpeedMultiplier, EggHatchSpeedMultiplier, BabyFoodConsumptionSpeedMultiplier, CropGrowthSpeedMultiplier, MatingSpeedMultiplier, BabyCuddleIntervalMultiplier, BabyImprintAmountMultiplier, Cu...
    1 point
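     A file served at that URL would just be bare key=value lines for the supported options; a sketch with purely illustrative values:

        ; dynamicconfig.ini - only the options listed above are honored
        TamingSpeedMultiplier=2.0
        HarvestAmountMultiplier=1.5
        XPMultiplier=2.0
        BabyMatureSpeedMultiplier=3.0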
  18. Hi, so I have an update. I found many posts with similar errors, but they were largely left without a solution, so hopefully this helps someone else! I tried moving my script's cronjobs around and noticed the 'unlink_node: Assertion `node->nlookup > 1' failed' error appeared at roughly the same time, but when I moved it a few hours ahead of schedule and had nothing around the 3am mark, my shares stopped vanishing. I seem to have narrowed it down to the 'Recycle Bin' plugin. The Recycle Bin plugin was set to run its cleanup and dump its folder around 3am to...
    1 point
  19. I think you can re-install the old plug-in from May by going to the plug-in's GitHub history and pasting this URL (the May 11th version of the plug-in script) into the URL field for installing a plug-in. However, I have not tried this. https://raw.githubusercontent.com/hugenbd/ca.mover.tuning/6c146ad3ad63d162488d7e965c011d48d3e47462/plugins/ca.mover.tuning.plg
    1 point
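     If pasting into the web UI is awkward, Unraid's plugin command should do the same from a terminal; equally untested here:

        plugin install https://raw.githubusercontent.com/hugenbd/ca.mover.tuning/6c146ad3ad63d162488d7e965c011d48d3e47462/plugins/ca.mover.tuning.plg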
  20. Hi, well, the backup process has been running continuously for four and a half days, so I can step back a bit more now. The data to be backed up is 952 GB locally; 891 GB have been completed, and the remaining 61 GB should be uploaded in 11 hours from now. So the overall "performance" should be 952 GB in 127 hours, an average of 180 GB per day. Roughly, that is 2.1 MB/s or 17 Mbps, which is consistent with the obvious throttling I can see in Grafana for the CrashPlan container's upload speed. Data is compressed before being uploaded, so translating the size of the data to bac...
    1 point
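     Those figures are internally consistent; a quick sanity check, using decimal units as in the post:

        # 952 GB over 127 hours -> GB/day, MB/s, Mbps
        awk 'BEGIN { gb=952; h=127;
          printf "%.0f GB/day, %.1f MB/s, %.0f Mbps\n",
            gb/h*24, gb*1000/(h*3600), gb*8000/(h*3600) }'
        # -> 180 GB/day, 2.1 MB/s, 17 Mbps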
  21. I only saw the connection issues on one disk in the syslog, so those CRC errors on the other disk could be old. The counts don't reset, but you can acknowledge them as I mentioned. Since other disks don't seem affected, I would be more inclined to suspect cables or just a loose connection. Make sure there isn't any tension on the cables that might disturb the connection, and don't bundle SATA cables.
    1 point
  22. @limetech Here's what I did:
      1. Deleted the two files from /boot/config: rsyslog.cfg and rsyslog.conf (shell equivalent sketched below)
      2. Rebooted
      3. Started the array
      4. Reconfigured the Syslog settings
      5. Hit Apply
      6. Checked the syslog and saw that rsyslogd started
      7. Verified that there was a file in my share
      8. Verified that data was in the file
    1 point
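     Step 1 from a terminal would be, using the filenames from the post:

        # Remove the stale syslog configuration from the flash drive
        rm /boot/config/rsyslog.cfg /boot/config/rsyslog.conf
        # ...then reboot, start the array, and reconfigure Settings > Syslog Server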
  23. SOLVED! This issue has been fixed by changing my NIC device to built-in. Here is a link to the guide so it's easy to find. I have Big Sur up and running with a passthrough GPU and NVMe; everything is running fine apart from being able to sign into iCloud. I can sign into the App Store fine, just not iCloud; I get the following error: "could not communicate with server". I have tried all the different virtual NIC types and have even passed through an Intel NIC that works. Just no luck. I have also tried different smbios settings. Any help would be great as this is t...
    1 point
  24. Yes you can, you just need to make sure that those dockers/VMs are not filling up the target disks; otherwise the transfer may run out of space to put files.
    1 point
  25. If Music is a top-level folder (share), then by design the app doesn't delete it.
    1 point
  26. I would use these sshd configurations, plus setting the users to disabled. I don't think having users with empty passwords is a good idea. Thanks for the fast reply!
    1 point
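     The configurations referred to aren't quoted in the snippet; the usual combination would be rejecting empty passwords in sshd_config and locking the accounts, something like this sketch with a placeholder username:

        # /etc/ssh/sshd_config: never accept empty passwords
        PermitEmptyPasswords no

        # ...and from a shell, lock the user's password entirely:
        #   passwd -l someuser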
  27. If a reallocation event happens once, and never again, it's not a concern. If it repeats again, it's time to pay attention. 3rd event, plan a replacement. If you get several reallocation events back to back, prepare to lose the drive, probably sooner rather than later. The number of sectors is a factor, but the rate of increase is what is most concerning. Some drives reallocate a dozen or so sectors, then stay quiet for years.
    1 point
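     To track that rate of increase, checking the raw count periodically is enough; the device path here is a placeholder:

        # SMART attribute 5, Reallocated_Sector_Ct - compare RAW_VALUE over time
        smartctl -A /dev/sdb | grep -i reallocated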
  28. You certainly can use XFS for a single-member cache. It's only when you have multiple cache device slots defined that you are forced to use BTRFS, because at the moment XFS doesn't support RAID volumes. The major downside to BTRFS is that it seems to be more brittle or fragile than XFS. What I mean by that is a lack of tolerance for hardware or software errors, and the recovery options for broken BTRFS volumes aren't as robust as the tools available for XFS, so having a comprehensive backup strategy is, as always, a high priority. So, if you are running server-grade har...
    1 point
  29. If, as you try to access an unsecured unRAID server, you see this panel, enter a backslash (\) for the user ID and click OK; you're in.
    1 point
  30. Here's what I do when I replace a data drive:
      1. Run a parity check first, before doing anything else.
      2. Set the mover to not run by changing it to a date well into the future. This will need to be undone after the array has been recovered.
      3. Take a screenshot of the state of the array so that you have a record of the disk assignments.
      4. Ensure that any dockers which write directly to the array are NOT set to auto-start.
      5. Set the array to not autostart.
      6. Stop all dockers.
      7. Stop the array.
      8. Unassign the OLD drive (i.e. the one being replaced).
      9. Po...
    1 point
  31. Here's a sample that will only allow the 192.168.2.* machines write access to the NFS share; everybody else gets read-only. Additionally, the options allow root user access from the 192.168.2.* machines; everybody else gets mapped to uid=99:
      192.168.2.0/24(sec=sys,rw,no_root_squash,insecure) *(sec=sys,ro,insecure,anongid=100,anonuid=99,all_squash)
      So, if the client IP fails to match whatever you have listed, they don't get access.
    1 point
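     In Unraid those rules go into the share's NFS export settings; the resulting /etc/exports entry would end up looking something like this, with the share path and fsid as placeholders:

        "/mnt/user/Backups" -async,no_subtree_check,fsid=101 192.168.2.0/24(sec=sys,rw,no_root_squash,insecure) *(sec=sys,ro,insecure,anongid=100,anonuid=99,all_squash)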