Leaderboard

Popular Content

Showing content with the highest reputation on 10/17/20 in all areas

  1. So then would you recommend removing the established connection, then entering a secure share using my unraid account credentials? After that, could I enter all secure shares (that that unraid account has access to) and public shares without issue?
    2 points
  2. Turbo Write technically known as "reconstruct write" - a new method for updating parity

JonP gave a short description of what "reconstruct write" is, but I thought I would give a little more detail: what it is, how it compares with the traditional method, and the ramifications of using it.

First, where is the setting? Go to Settings -> Disk Settings, and look for Tunable (md_write_method). The 3 options are read/modify/write (the way we've always done it), reconstruct write (Turbo write, the new way), and Auto, which is something for the future but is currently the same as the old way. To change it, click on the option you want, then the Apply button. The effect should be immediate.

Traditionally, unRAID has used the "read/modify/write" method to update parity, to keep parity correct for all data drives. Say you have a block of data to write to a drive in your array, and naturally you want parity to be updated too. In order to know how to update parity for that block, you have to know what is the difference between this new block of data and the existing block of data currently on the drive. So you start by reading in the existing block, and comparing it with the new block. That allows you to figure out what is different, so now you know what changes you need to make to the parity block, but first you need to read in the existing parity block. So you apply the changes you figured out to the parity block, resulting in a new parity block to be written out. Now you want to write out the new data block, and the parity block, but the drive head is just past the end of the blocks because you just read them. So you have to wait a long time (in computer time) for the disk platters to rotate all the way back around, until they are positioned to write to that same block. That platter rotation time is the part that makes this method take so long. It's the main reason why parity writes are so much slower than regular writes.

To summarize, for the "read/modify/write" method, you need to:
* read in the parity block and read in the existing data block (can be done simultaneously)
* compare the data blocks, then use the difference to change the parity block to produce a new parity block (very short)
* wait for platter rotation (very long!)
* write out the parity block and write out the data block (can be done simultaneously)
That's 2 reads, a calc, a long wait, and 2 writes.

Turbo write is the new method, often called "reconstruct write". We start with that same block of new data to be saved, but this time we don't care about the existing data or the existing parity block. So we can immediately write out the data block, but how do we know what the parity block should be? We issue a read of the same block on all of the *other* data drives, and once we have them, we combine all of them plus our new data block to give us the new parity block, which we then write out! Done!

To summarize, for the "reconstruct write" method, you need to:
* write out the data block while simultaneously reading in the data blocks of all other data drives
* calculate the new parity block from all of the data blocks, including the new one (very short)
* write out the parity block
That's a write and a bunch of simultaneous reads, a calc, and a write, but no platter rotation wait! Now you can see why it can be so much faster!

The upside is it can be much faster. The downside is that ALL of the array drives must be spinning, because they ALL are involved in EVERY write.
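If you're curious about the arithmetic underneath both methods, it's plain XOR. Here's a toy illustration with single bytes standing in for whole blocks (just a sketch to show the idea, the values are arbitrary and nothing here is unRAID-specific):

#!/bin/bash
# Toy illustration of both parity-update strategies, with single bytes standing
# in for whole disk blocks.
old_data=0x3A; new_data=0x5C                    # block being overwritten, and its replacement
other1=0x9B; other2=0x76                        # the same block on the other data drives
old_parity=$(( old_data ^ other1 ^ other2 ))    # parity is the XOR of all data blocks

# read/modify/write: new parity = old parity XOR old data XOR new data
rmw_parity=$(( old_parity ^ old_data ^ new_data ))

# reconstruct write: new parity = XOR of the new block and all *other* data blocks
recon_parity=$(( new_data ^ other1 ^ other2 ))

printf 'read/modify/write parity: 0x%02X\n' "$rmw_parity"
printf 'reconstruct write parity: 0x%02X\n' "$recon_parity"   # identical result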
So what are the ramifications of this?
* For some operations, like parity checks, parity builds, and drive rebuilds, it doesn't matter, because all of the drives are spinning anyway.
* For large write operations, like large transfers to the array, it can make a big difference in speed!
* For a small write, especially at an odd time when the drives are normally sleeping, all of the drives have to be spun up before the small write can proceed.
* And what about those little writes that go on in the background, like file system housekeeping operations? EVERY write at any time forces EVERY array drive to spin up. So you are likely to be surprised at odd times when checking on your array, expecting all of your drives to be spun down, and finding every one of them spun up, for no discernible reason.
* So one of the questions to be faced is: how do you want your various write operations to be handled? Take a small scheduled backup of your phone at 4 in the morning. The backup tool determines there's a new picture to back up, so it tries to write it to your unRAID server. If you are using the old method, the data drive and the parity drive have to spin up, then this small amount of data is written, possibly taking a couple more seconds than Turbo write would take. It's 4am, do you care? If you were using Turbo write, then all of the drives will spin up, which probably takes somewhat longer than any time saved by using Turbo write to save that picture (but a couple of seconds faster in the save). Plus, all of the drives are now spinning, uselessly.
* Another possible problem: if you were in Turbo mode and you are watching a movie streaming to your player, then a write kicks in on the server and starts spinning up ALL of the drives, causing that well-known pause and stuttering in your movie. Who wants to deal with the whining that starts then?

Currently, you only have the option to use the old method or the new (currently the Auto option means the old method). But the plan is to add the true Auto option that will use the old method by default, *unless* all of the drives are currently spinning. If the drives are all spinning, then it slips into Turbo. This should be enough for many users. It would normally use the old method, but if you planned a large transfer or a bunch of writes, then you would spin up all of the drives - and enjoy faster writing.

Tom talked about that Auto mode quite awhile ago, but I'm rather sure he backed off at that time, once he faced the problems of knowing when a drive is spinning, and being able to detect it without noticeably affecting write performance, ruining the very benefits we were trying to achieve. If on every write you have to query each drive for its status, then you will noticeably impact I/O performance. So to maintain good performance, you need another function working in the background keeping near-instantaneous track of spin status, and providing a single flag for the writer to check, whether they are all spun up or not, to know which method to use.

So that provides 3 options, but many of us are going to want tighter and smarter control of when it is in either mode. Quite awhile ago, WeeboTech developed his own scheme of scheduling. If I remember right (and I could have it backwards), he was going to use cron to toggle it twice a day, so that it used one method during the day and the other method at night. I think many users may find that scheduling it satisfies their needs: Turbo when there's lots of writing, old style overnight and when they are streaming movies.
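If you want to experiment with that scheduling idea yourself, it can be approximated today with a pair of cron entries that flip the tunable directly. This is only a rough sketch: it assumes your unRAID version lets you set md_write_method through mdcmd, and that 0 and 1 map to read/modify/write and reconstruct write respectively, so verify that on your own system before relying on it.

# Turbo write during the day (assumed value 1 = reconstruct write), from 08:00:
0 8 * * * /usr/local/sbin/mdcmd set md_write_method 1
# Back to the traditional method (assumed value 0 = read/modify/write) at 23:00:
0 23 * * * /usr/local/sbin/mdcmd set md_write_method 0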
For awhile, I did think that other users, including myself, would be happiest with a Turbo button on the Main screen (and Dashboard). Then I realized that that's exactly what our Spin up button would be, if we used the new Auto mode. The server would normally be in the old mode (except for times when all drives were spinning). If we had a big update session, backing up or downloading lots of stuff, we would click the Turbo / Spin up button and would have Turbo write, which would then automatically time out when the drives started spinning down, after the backup session or transfers are complete.

Edit: added what the setting is and where it's located (completely forgot this!)
    1 point
  3. @Valerio found this out first, but never received an answer. Today I found it out, too. It has been present since 2019 (or even longer). I would say it's a bug, as:
* it prevents HDD/SSD spindown/sleep (depending on the location of docker.img)
* it wears out the SSD in the long run (if docker.img is located there) - see this bug, too
* it prevents the CPU from reaching its deep sleep states

What happens:
* /var/lib/docker/containers/*/hostconfig.json is updated every 5 seconds with the same content
* /var/lib/docker/containers/*/config.v2.json is updated every 5 seconds with the same content, except for some timestamps (which shouldn't be part of a config file, I think)

Which docker containers: verified are Plex (Original) and PiHole, but maybe this is a general behaviour.

As an example, here is the source of hostconfig.json, which was updated 17,280 times yesterday with the same content:

find /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44 -ls -name hostconfig.json -exec cat {} \;
2678289 4 -rw-r--r-- 1 root root 1725 Oct 8 13:46 /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/hostconfig.json

{ "Binds":[ "/mnt/user/tv:/tv:ro", "/mnt/cache/appdata/Plex-Media-Server:/config:rw", "/mnt/cache/appdata/Plex-Transcode:/transcode:rw", "/mnt/user/movie:/movie:ro" ], "ContainerIDFile":"", "LogConfig":{ "Type":"json-file", "Config":{ "max-file":"1", "max-size":"50m" } }, "NetworkMode":"host", "PortBindings":{ }, "RestartPolicy":{ "Name":"no", "MaximumRetryCount":0 }, "AutoRemove":false, "VolumeDriver":"", "VolumesFrom":null, "CapAdd":null, "CapDrop":null, "Capabilities":null, "Dns":[ ], "DnsOptions":[ ], "DnsSearch":[ ], "ExtraHosts":null, "GroupAdd":null, "IpcMode":"private", "Cgroup":"", "Links":null, "OomScoreAdj":0, "PidMode":"", "Privileged":false, "PublishAllPorts":false, "ReadonlyRootfs":false, "SecurityOpt":null, "UTSMode":"", "UsernsMode":"", "ShmSize":67108864, "Runtime":"runc", "ConsoleSize":[ 0, 0 ], "Isolation":"", "CpuShares":0, "Memory":0, "NanoCpus":0, "CgroupParent":"", "BlkioWeight":0, "BlkioWeightDevice":[ ], "BlkioDeviceReadBps":null, "BlkioDeviceWriteBps":null, "BlkioDeviceReadIOps":null, "BlkioDeviceWriteIOps":null, "CpuPeriod":0, "CpuQuota":0, "CpuRealtimePeriod":0, "CpuRealtimeRuntime":0, "CpusetCpus":"", "CpusetMems":"", "Devices":[ { "PathOnHost":"/dev/dri", "PathInContainer":"/dev/dri", "CgroupPermissions":"rwm" } ], "DeviceCgroupRules":null, "DeviceRequests":null, "KernelMemory":0, "KernelMemoryTCP":0, "MemoryReservation":0, "MemorySwap":0, "MemorySwappiness":null, "OomKillDisable":false, "PidsLimit":null, "Ulimits":null, "CpuCount":0, "CpuPercent":0, "IOMaximumIOps":0, "IOMaximumBandwidth":0, "MaskedPaths":[ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware" ], "ReadonlyPaths":[ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ] }
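If you want to confirm the 5-second rewrites on your own system, something like the following will show them as they happen. This is only a sketch: inotifywait is not part of stock unRAID, it comes with the inotify-tools package (available via NerdPack or similar), so install that first.

# Watch docker's per-container config files for writes (requires inotify-tools).
inotifywait -m -r /var/lib/docker/containers \
  -e modify -e close_write \
  --format '%T %w%f %e' --timefmt '%H:%M:%S' \
  | grep -E 'hostconfig\.json|config\.v2\.json'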
    1 point
  4. This script automatically spins up all defined disks to remove spin-up latency on playback. It should be executed through CA User Scripts on array startup. This script is inspired by @MJFOx's version, but instead of checking the Plex log file (which requires enabling debug logging), this script monitors the CPU load of the Plex container (which rises once a Plex client has been started):

#!/bin/bash
# make script race condition safe
if [[ -d "/tmp/${0///}" ]] || ! mkdir "/tmp/${0///}"; then exit 1; fi
trap 'rmdir "/tmp/${0///}"' EXIT

# ######### Settings ##################
spinup_disks='1,2,3,4,5,6,7' # Note: Usually parity disks aren't needed for Plex
cpu_threshold=1              # Disks spin up if Plex container's CPU load exceeds this value
# #####################################
#
# ######### Script ####################
while true; do
    plex_cpu_load=$(docker stats --no-stream | grep -i plex | awk '{sub(/%/, "");print $3}')
    if awk 'BEGIN {exit !('$plex_cpu_load' > '$cpu_threshold')}'; then
        echo "Container's CPU load exceeded threshold"
        for i in ${spinup_disks//,/ }; do
            disk_status=$(mdcmd status | grep "rdevLastIO.${i}=" | cut -d '=' -f 2 | tr -d '\n')
            if [[ $disk_status == "0" ]]; then
                echo "Spin up disk ${i}"
                mdcmd spinup "$i"
            fi
        done
    fi
done

Explanation
- it requests the container's CPU load every ~2 seconds (the answer time of "docker stats")
- if the load is higher than "cpu_threshold" (default is 1%), it checks the disks' spinning status
- all sleeping "spinup_disks" will be spun up

Downside
- as long as a movie is running, all (unused) disks won't reach their sleep state (they spin down, but will be directly spun up again)

Monitoring
If you would like to monitor the CPU load while (not) using Plex to find an optimal threshold value (or just for fun), open the WebTerminal and execute this (replace "1" with a threshold of your choice):

while true; do
    plex_cpu_load=$(docker stats --no-stream | grep -i plex | awk '{sub(/%/, "");print $3}')
    echo $plex_cpu_load
    if awk 'BEGIN {exit !('$plex_cpu_load' > 1)}'; then
        echo "Container's CPU load exceeded threshold"
    fi
done

On my machine Plex idles between 0.1 and 0.5% CPU load, which is why I chose 1% as the default threshold.
    1 point
  5. This video is about Docker in unRAID. The video discusses the following:
* What is Docker
* How to enable Docker
* The docker image file
* What is appdata and how it relates to Dockers
* How to install a Docker container
* Docker volume and port mapping principles
* Keeping Dockers consistent
* Common problems

Hope you find it interesting!! PS Sorry Squid, I couldn't find a way to fit in dancing girls, but lots of arrows and flashing things!!

All about Docker in unRAID. Docker Principles and Setup
    1 point
  6. My understanding, and it's a primitive understanding, so please correct me if I am wrong... the only reason SSDs are not recommended for the array is that TRIM can get a little messy with parity. But... in my scenario, with the hardware I have to work with, I won't be doing parity. Fantastic thought with the USB stick as the array. Didn't think of that; mad genius. Really looking forward to making these ambitions a reality. Thanks for your help so far!
    1 point
  7. No objections. That board should work fine. In fact, before I bought the ASRock server board, I had an ASUS Z97 board in the case with my i5 4590 CPU. The only "limitation" is that board has only 4 SATA ports so if you need more drives for storage/parity/cache, you'll need to add an HBA to the x16 slot. That should be fine since your i5 4670K has an iGPU and the slot would not be needed for a graphics card. Your iGPU will handle any unRAID video needs and even Plex (or other media platform) streaming and basic transcoding needs. I know since my i5 4590 is capable of this. If you want the flexibility of adding a GPU to the expansion slot for use in a VM (although your processor really does not have much overhead for VM use), you might want to look for a socket 1150 Mini-ITX board with 6 SATA ports.
    1 point
  8. I honestly thought that $SERVER_DIR was a placeholder for whatever my dir was. My apologies, I misunderstood you. I tried the command that you had suggested. This was the output:

Steam>root@Tower:~# docker exec -it Left4Dead2 sh
# ${STEAMCMD_DIR}/steamcmd.sh +login anonymous +force_install_dir ${SERVER_DIR} +workshop_download_item 222860 2032670332 +quit
Redirecting stderr to '/root/Steam/logs/stderr.txt'
/tmp/dumps is not owned by us - delete and recreate
[  0%] Checking for available updates...
[----] Verifying installation...
Steam Console Client (c) Valve Corporation
-- type 'quit' to exit --
Loading Steam API...
/data/src/clientdll/applicationmanager.cpp (4149) : Assertion Failed: CApplicationManager::GetMountVolume: invalid index
/data/src/clientdll/applicationmanager.cpp (4149) : Assertion Failed: CApplicationManager::GetMountVolume: invalid index
/data/src/clientdll/applicationmanager.cpp (4312) : Assertion Failed: m_vecInstallBaseFolders.Count() > 0
OK.
Connecting anonymously to Steam Public...Logged in OK
Waiting for user info...OK
/data/src/clientdll/applicationmanager.cpp (4312) : Assertion Failed: m_vecInstallBaseFolders.Count() > 0
Downloading item 2032670332 ...
ERROR! Download item 2032670332 failed (No match).
#

Is it not finding the workshop item? I'm certain that I have the correct ID number for the workshop item. For context, here is the workshop item I'm trying to install: https://steamcommunity.com/sharedfiles/filedetails/?id=2032670332&searchtext=helms
    1 point
  9. I'm not sure what "get quotations" means as far as price and availability but here are four listings for the board on Alibaba. The prices are reasonable if they actually have the board at those prices:
    1 point
  10. LuckyBackup - the problem is actually partially solved. It was my mistake that I chose Sync instead of Backup under (Type), but I still have the problem that files which exist in the destination but not in the source are not deleted. What I need is a one-way sync from source to destination, so that any change in the source (add/delete) is reflected in the destination. Thanks for the fast reply.
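Since luckyBackup is a front end for rsync, the behaviour described above corresponds to rsync's --delete option. A sketch with placeholder paths (luckyBackup builds an equivalent command from its task settings, so this is just to illustrate the intended result):

# One-way mirror: make the destination match the source, including deletions.
rsync -av --delete /mnt/user/source/ /mnt/disks/destination/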
    1 point
  11. I had created the passwd before; the lines for 'Debian.sh' were what I was looking for. Now it works, thank you very much.
    1 point
  12. Many thanks for the comprehensive info... I probably should have given a bit more information. I already had both plugins installed from the start, since I had done quite a bit of research on this, and your name also comes up very often with the TGF on YouTube and Twitch 😉, including regarding your Docker containers. I have also seen that Alex has a lot of UDs in there, and he has touched on it briefly, but he is so fast... I can't hit stop quickly enough 😃 No, I will read through what you wrote again and also search the help; it's not yet entirely clear to me. I was able to delete the assigned name, and then it immediately shows me the Format button. Then I'll have to see how to proceed when I put several directories on it. I had already formatted it once as XFS, since the other disks were/are formatted that way too; only the cache drive is "btrfs". But OK, I'll have to practise mounting again now that I've installed the disk today, since up to now I had always attached it to the server via a USB adapter. I ordered the SATA card that the GeekFreaks link below their YouTube videos, and I'm installing it today. Then I'll continue. There's no data on it yet, so I can safely fiddle around with it 😉 I've taken another screenshot of how it looks on my end. I have the SATA controller installed and have added three more disks. Two of them were already formatted, and the third (Fotos) is the one I had attached via USB; I formatted it and now I can mount them all...! The upper two, sdf and sdg, already had their formatting, and I formatted sdh. That looks pretty good so far. Now I just have to figure out how to access them; I'll read up on that later.
    1 point
  13. I am using the ASRock Rack E3C226D2I in my backup unRAID server in the Node 304 case. See my signature for full system details. I am not sure what availability might be in your area. It's not easy to find any more. It's a great server board and even has IPMI.
    1 point
  14. I had no issues building my kernel while I was running the Nvidia Unraid from LSIO.
    1 point
  15. Can you show us the details of the settings you used? Providing your diagnostics (Tools/Diagnostics) in your next post would allow people to look at your syslog and other info and see if something seems odd.
    1 point
  16. Are you running it with a certificate like Let's Encrypt? With HTTPS the connection is secured. Sent from my Redmi Note 8 Pro using Tapatalk
    1 point
  17. So are mine, and 90% or more of them are from one Windows Server 2012 R2 VM. I also have Windows 8.1 and Windows 10 VMs, and those don't write much, but total writes were already reduced 15x since -beta25. As mentioned, btrfs will also have some write amplification (there's a study about that). I can live with 200GB per day; I just couldn't with 3TB per day.
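For anyone wanting to measure their own daily write volume the same way, the SSD's SMART counters are the easiest yardstick. A sketch only: /dev/sdX is a placeholder, and the attribute name and unit size vary by vendor, so check your own drive's output first.

# Snapshot the SSD's lifetime write counter; the attribute name differs by vendor
# (Total_LBAs_Written, Host_Writes_32MiB, "Data Units Written" on NVMe, ...).
smartctl -A /dev/sdX | grep -iE 'lbas_written|host_writes|data units written'

# Take two readings 24 hours apart and multiply the difference by the unit size
# (512-byte sectors for most LBA counters) to get bytes written per day.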
    1 point
  18. My question and yours are answered here: you can grab the drivers yourself and copy them over to the flash drive, or upgrade to the beta.
    1 point
  19. This will hopefully shed some light on the under-the-hood working bits of building a cluster of servers for ARK: Survival Evolved. With inspiration (read: thievery) from @growlith, I present the A3C (ARK Cluster Companion Container). It and the accompanying XML files should allow for a fairly simple stand-up of a new ARK server cluster. https://github.com/CydFSA/A3C

Go to the GitHub, fetch the XMLs for however many servers you want to use in your new cluster, and salt them to taste with your information (server names, cluster name, passwords, admin passwords, etc.). Good luck and happy hunting!

Historical information left below (really, go get the newer stuff from GitHub)
------------------------------

We are going to start with getting -UseDynamicConfig working and talking to our config file rather than the official one out at http://arkdedicated.com/dynamicconfig.ini. I know it feels like that should come last, but bear with me. First we have a simple HTTP server running by itself, serving up the "dynamicconfig.ini" on port 80 on its container; this is mapped out to the host on 8080, which is not really needed but makes it easy to take a quick glance at tower:8080 to see what the settings are. I ran this container first so that it would receive a predictable IP address (172.17.0.2:80), which is then used in the configurations of the remaining containers to add an entry to the hosts file (--add-host=arkdedicated.com:172.17.0.2), so that requests to http://arkdedicated.com/dynamicconfig.ini in the game servers are pointed to the container running at 172.17.0.2 (a stand-in sketch of this little config server is included at the end of this post). If you don't want or care to use the dynamic configs, omit the ARK0 container and remove -UseDynamicConfig from the "Extra Game Parameters" of all subsequent ARKs you deploy.

Next I deployed 10 ARK server instances. Why 10 when there are only 9 maps? Well, I assume that Wildcard will have another map for Genesis Part 2 coming in the spring, so I added a container to house it; currently it is configured as a second Genesis1 map with all the correct ports and paths. If they do release a new map, it will only require changing the map name in the config and starting the container. The ports are mapped sequentially, so you will only need to insert three port-forward blocks into your gateway router (UDP 7777-7796, UDP 27015-27024, TCP 27025-27034). You do not need anything forwarded to ARK0, as it is only there to talk to ARKs 1-10.

ARK0-dynamicconfig      TCP 80:8080
ARK1-TheIsland          UDP1 7777  UDP2 7778  UDPSteam 27015  TCPRCON 27025
ARK2-ScorchedEarth_P    UDP1 7779  UDP2 7780  UDPSteam 27016  TCPRCON 27026
ARK3-Aberration_P       UDP1 7781  UDP2 7782  UDPSteam 27017  TCPRCON 27027
ARK4-TheCenter          UDP1 7783  UDP2 7784  UDPSteam 27018  TCPRCON 27028
ARK5-Ragnarok           UDP1 7785  UDP2 7786  UDPSteam 27019  TCPRCON 27029
ARK6-Valguero_P         UDP1 7787  UDP2 7788  UDPSteam 27020  TCPRCON 27030
ARK7-CrystalIsles       UDP1 7789  UDP2 7790  UDPSteam 27021  TCPRCON 27031
ARK8-Extinction         UDP1 7791  UDP2 7792  UDPSteam 27022  TCPRCON 27032
ARK9-Genesis            UDP1 7793  UDP2 7794  UDPSteam 27023  TCPRCON 27033
ARK10-Genesis2          UDP1 7795  UDP2 7796  UDPSteam 27024  TCPRCON 27034

Path mappings are slightly more complex. SteamCMD is in its original location per @ich777's standard, and the binary data for ARK is also in the default location. Having the server binaries shared also means that when there is an update to ARK (and/or SteamCMD), it only has to be downloaded one time rather than 10.
The update procedure is to bring all of the ARKs down, then start ARK1, let it update and initialize, then bring the others back up en masse. As a precaution I have the wait timer on ARKs 2-10 set to 600 seconds, so that if the Tower host gets rebooted, ARK1 has time to pull any updates and initialize. The ARK savegame data and server config files are mapped into the proper location on a per-container basis. This prevents each server instance from mucking up the servers' config .inis (which they liked to do); it also means that you can use different options on each ARK, and it makes managing the SavedArks less hair-pully-outy. The clustering function is done with a shared resource directory and a ClusterID (-clusterid=arkStar under "Extra Game Parameters").

Dynamicconfig data: /dynamicconfig <> /mnt/cache/appdata/ark-se/dynamicconfig
SteamCMD: /serverdata/steamcmd <> /mnt/user/appdata/steamcmd
ARK data: /serverdata/serverfiles <> /mnt/cache/appdata/ark-se
Cross-ARK cluster data: /serverdata/serverfiles/clusterfiles <> /mnt/cache/appdata/ark-se/cluster

ARK configs and save data:
/serverdata/serverfiles/ShooterGame/Saved <> /mnt/cache/appdata/ark-se/ARK1-TheIsland
/serverdata/serverfiles/ShooterGame/Saved <> /mnt/cache/appdata/ark-se/ARK2-ScorchedEarth_P
/serverdata/serverfiles/ShooterGame/Saved <> /mnt/cache/appdata/ark-se/ARK3-Aberration
/serverdata/serverfiles/ShooterGame/Saved <> /mnt/cache/appdata/ark-se/ARK4-TheCenter
/serverdata/serverfiles/ShooterGame/Saved <> /mnt/cache/appdata/ark-se/ARK5-Ragnarok
/serverdata/serverfiles/ShooterGame/Saved <> /mnt/cache/appdata/ark-se/ARK6-Valguero_P
/serverdata/serverfiles/ShooterGame/Saved <> /mnt/cache/appdata/ark-se/ARK7-CrystalIsles
/serverdata/serverfiles/ShooterGame/Saved <> /mnt/cache/appdata/ark-se/ARK8-Extinction
/serverdata/serverfiles/ShooterGame/Saved <> /mnt/cache/appdata/ark-se/ARK9-Genesis
/serverdata/serverfiles/ShooterGame/Saved <> /mnt/cache/appdata/ark-se/ARK10-Genesis2

The XML files are attached below. If you choose to use them, please make sure to edit them to your taste: server names, passwords, cluster IDs, etc. They can be used by placing them in /boot/config/plugins/dockerMan/templates-user, then going to "Docker / Add Container" and choosing from the "Template:" drop-down. Or maybe @ich777 will do us all a favor and decide to add them to his already impressive XML collection
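As promised above, here is a minimal stand-in for the ARK0 idea, in case you want to test the dynamicconfig redirect in isolation. This is only a sketch: the image and mount are placeholders, not the actual A3C container from the GitHub repo.

# Sketch only: any tiny web server answering on port 80 can play the ARK0 role.
# The real container comes from https://github.com/CydFSA/A3C; this just shows
# the mechanism using a stock nginx image and the dynamicconfig path from above.
docker run -d --name ARK0-dynamicconfig -p 8080:80 \
  -v /mnt/cache/appdata/ark-se/dynamicconfig:/usr/share/nginx/html:ro \
  nginx:alpine

# The game-server containers then get this extra parameter so that requests to
# arkdedicated.com land on ARK0's bridge IP instead of the official server:
#   --add-host=arkdedicated.com:172.17.0.2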
    1 point
  20. I am using 6.9.0-beta29 (will go to 30 ASAP) and I see those writes too. I noticed that only 2 of my containers end up writing to these 2 files, and they are the only containers that seem to have a docker health check available. Here is the reason why you see a write every 5 seconds for your Plex container: https://github.com/plexinc/pms-docker/blob/master/Dockerfile#L62

HEALTHCHECK --interval=5s --timeout=2s --retries=20 CMD /healthcheck.sh || exit 1
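If those health-check writes bother you, docker itself can override what the image declares. A sketch only: on unRAID these flags would go into the container's "Extra Parameters" field, and the "..." stands for whatever other options the container normally uses.

# Disable the image's built-in HEALTHCHECK entirely ("..." = your usual options):
docker run --no-healthcheck ... plexinc/pms-docker

# ...or keep it, but poll far less often than every 5 seconds:
docker run --health-interval=5m ... plexinc/pms-docker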
    1 point
  21. Found the source of the writes (they still exist, even if SSD has been reformatted to XFS):
    1 point
  22. To get the interface in German, add the following to papermerge.conf.py in the appdata folder:

LANGUAGE_CODE = "de-DE"

To get German OCR, go to the docker shell and type:

apt-get install tesseract-ocr-deu

and change the following in papermerge.conf.py like this:

OCR_DEFAULT_LANGUAGE = "deu"
OCR_LANGUAGES = {
    "deu": "Deutsch",
}

And voilà! German interface and OCR.
    1 point
  23. Google ... windows10 + unraid + smb1 , lots of info there.
    1 point
  24. Yes! Reached out to tech support; they said to download the latest release, as it will have the drivers. Updated, and now all is working! Love 10GbE! Hitting 300-900 MB/s transfer rates on all workstations on the network is scary. Thanks all!
    1 point
  25. If the key file is properly copied intact, the new stick should attempt to automatically migrate the license with the help of a wizard. This is allowed once a year; if it's been less than that since your last transfer, then yes, contact limetech and tell them your situation. Also, after copying the files you will need to do the "make bootable" procedure just like you did with the original stick.
    1 point
  26. So I'm only testing unraid out. Using it with a single drive and an ssd for cache. I'm setting up all of my dockers and user shares that I'm going to want for when I finally build out my actual server. I've only been using it for a couple of days now, but when I navigate to files in my user shares from a windows machine using Explorer, it won't let me delete files. Says I need permission from TOWER/Nobody. Who or what is Tower/Nobody as it's not a user. Is there a way to fix this? Am I missing something?
    1 point
  27. You probably first accessed a public user share. Windows / SMB negotiated a connection using your Windows account, and since it was a public share, it was allowed. Then you tried to access a share that was not public, but the established connection did not have access. SMB only allows one connection to a server, the already established one. And even though Windows prompts you to log in, it has no effect. This is just the way Windows / SMB works, whether you are trying to access unRAID or some other system. You can go to Windows Control Panel - Credential Manager and delete the established connection so it can be renegotiated. Here is a sticky with other ideas that may help with Windows problems: https://lime-technology.com/forums/topic/53172-windows-issues-with-unraid/
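For anyone who prefers the command line over Credential Manager, the same cleanup can be done from a Windows command prompt. A sketch: "TOWER" is just the server name used in this thread, and the exact credential target name may differ, so check the list output first.

rem List current SMB connections, then drop the one to the server:
net use
net use \\TOWER /delete
rem List stored Windows credentials, then remove the entry for the server:
cmdkey /list
cmdkey /delete:TOWER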
    1 point
  28. Even on the secured shares. I have it set up so that user "mike" has read/write access. Then when I map the drive on my Win 10 PC, I use mike as the credentials, and it still says I need permission from TOWER/Nobody.
    1 point
  29. No... is there a way to do that? I'm on a Windows 10 PC using my Microsoft account (i.e., it's my MSN email and password to log onto my PC).
    1 point
  30. Is the unRAID user account the same as your windows login?
    1 point
  31. Minus the default root account, I created one user account. This is happening on shares that are both secure and public.
    1 point
  32. Do you have any users setup on unRAID? Are any of your shares not public?
    1 point
  33. Alright I changed the code to this and it removed both the .DS_Store & ._.DS_Store Just a thought, it may be a good idea to change this script to just find those files and display them first and then give an option to review the files before they get deleted. Or you could just run two scripts:

Find_DS_Stores.sh

#!/bin/bash
# Version .1 - find .DS_Store files and ._* files
# ========================================================================== #
#   Program Body
# ========================================================================== #
echo "Finding .DS_Store and ._.DS_Store files"
find /mnt/ -name .DS_Store -print
find /mnt/ -name "._*" -print

Then once you agree with those findings you could run this.

Remove_DS_Stores.sh

#!/bin/bash
# Version .1 - remove .DS_Store files and ._* files
# ========================================================================== #
#   Program Body
# ========================================================================== #
echo "Removing .DS_Store and ._.DS_Store files"
find /mnt/ -name .DS_Store -print -delete
find /mnt/ -name "._*" -print -delete
    1 point
  34. tail -n 42 -f /var/log/syslog

then hit control-c to stop watching the syslog. The '-n' is the number of past lines to display. The '-f' means append output as the file grows. It's also similar to the [Log] button on the Web UI:

http://tower/logging.htm?title=Log Information&cmd=/usr/bin/tail -n 42 -f /var/log/syslog
    1 point