Everything posted by mgutt

  1. I have a similar issue. I have a share and created a subfolder in it, through its user, with an Android app. The app is still able to create files inside the subfolder. But when I open the folder with the same user through Windows, I get a network error ("Windows cannot access ... You do not have permission ..."). Really strange 🤔 The user rights look fine, too. And it gets even stranger: I opened the share through its IP address instead, entered the credentials, and now I am able to open the subfolder without problems 🤪 Seems to be Windows-related, or what do you think? EDIT: Ok, a Windows reboot solved the issue. Thank you, Windows 🙄
  2. Is CA User Scripts race-condition safe, or do I need to add my own check to make sure that only one instance of my script runs at a time? (A sketch of what I mean is below.)
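     What I mean is a guard at the top of the script; a minimal sketch using flock (the lock file path is an arbitrary choice for this sketch, not something the plugin defines):

         #!/bin/bash
         # Abort if another instance of this script already holds the lock.
         # /var/run/myscript.lock is a placeholder path for this sketch.
         exec 200>/var/run/myscript.lock
         flock -n 200 || { echo "Already running, exiting."; exit 1; }
         # ... actual script work follows here ...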
  3. Additional question: should I back up the domains and/or system shares to the array if they are set to cache-only/prefer?
  4. When I stop or start the array, all disks are spun up at once. As this puts a high load on the power supply, I would like Unraid to spin up each drive with a small delay. Of course my power supply is big enough, but I think those load peaks have more potential to kill a power supply than any other operation. Yes, restarting the server still produces this load, but that is a hardware-based problem and cannot be avoided by Unraid.
  5. Yes, with such a setting Unraid would have to ignore the content of the array completely, otherwise it would not work.
  6. I have a 1TB SSD installed, so the cache is big enough for my needs; if not, I will install a bigger one. Why I need it:
     1.) I want to keep my music collection on the SSD, as it is more energy-efficient than the HDD and the response time of a sleeping SSD is much better than that of a sleeping HDD (pressing play on a Sonos speaker starts playback almost instantly when I use the SSD).
     2.) I want to keep my video transcoding folder on the cache. This is the data I work on at the weekends; when my work is finished, I move the videos to another share (= long-term archive). At the moment I can only take advantage of the SSD as long as the mover has not yet moved the videos to the array. Of course a defective SSD would kill many hours of work, but after the mover has done its job the data would be safe.
     At the moment I am using rsync to copy both data sets to a backup folder on a daily basis (sketched below). This solves my problem perfectly, but maybe it would be interesting to have it as a built-in feature of Unraid.
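     The daily backup itself is just two rsync calls. A minimal sketch of what such a script can look like (the share and backup paths are simplified placeholders, not my real setup):

         #!/bin/bash
         # Mirror the two cache-only folders to a backup folder on the array.
         # --delete makes the backup an exact mirror of the cache copy.
         rsync -a --delete /mnt/cache/Music/     /mnt/disk1/Backup/Music/
         rsync -a --delete /mnt/cache/Transcode/ /mnt/disk1/Backup/Transcode/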
  7. I would like to use only my SSD cache for some shares, but then I would have to create a backup routine manually. Instead, I would like to choose an option called "Both", where the Mover syncs the files between the cache and the array instead of moving them. A sketch of the idea is below.
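     A minimal sketch of what a "Both" setting would effectively do (the paths are made up; this only illustrates the requested behaviour, not how the real Mover works):

         #!/bin/bash
         # Copy new and changed files from the cache to the array instead of
         # moving them: reads keep hitting the SSD while the array holds a
         # synced copy. "Music" and disk1 are placeholders for this sketch.
         rsync -a /mnt/cache/Music/ /mnt/disk1/Music/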
  8. Nice changes. I love that the icons are now forced into one row. But there is a small bug: if you edit the name of a script, the gear symbol's HTML code ends up in the name; if you edit it again, it displays both names without the gear; and if you delete the name, it is not possible to edit the name anymore.
  9. Feature requests:
     1.) Jump to the top if "Edit Script" is selected, or display the code under the script's row (maybe better, as it prevents scrolling).
     2.) *deleted*
     3.) "Show log" uses the -0700 timezone. Instead it should respect the Unraid timezone setting.
     4.) Changing a script's name should rename the script file, too. Or the User Scripts overview should be sorted by the script name and not by the script filename.
     5.) Maybe it would be nice to see directly whether a script returned an error, e.g. by displaying the row with a red background color?!
     6.) Optional: send the script output to an email address.
     7.) If you add a cogwheel symbol after the script name, you do not need to explain to the user how to edit a script ("For more editing options, click on any script's name"), but I'm not sure if this would look good.
     8.) "Add new script" should contain the code textarea as well (no need for the extra step of editing it afterwards).
     Thank you for reading and for building this plugin!
  10. That would of course be the best option, if possible. Thank you for the feedback.
  11. I changed one SMB share's visibility from "yes (hidden)" to "yes". I think this caused a restart of the SMB service, because all my active connections were killed and a running upload was interrupted. I think a dialog should warn before such a setting is saved and the SMB service is restarted. It could say something like: "John is writing with X MB/s to disk Y. Are you sure you want to restart the SMB service? Data loss is possible ..."
  12. I have the same problem. Did you finally solve it?
  13. +1 I think very basic file functions should be added to the "view" function of shares and disks in the web UI. For example:
      Properties = shows the number of files and directories plus the total size
      Create Folder = opens a dialog to create a folder with name XYZ
      Delete '/Movies' = opens a dialog with a warning "This deletes '/Movies' with all its contents"
      Copy '/Movies' to... = opens a dialog with two dropdowns; the first contains the list of shares and disks, and the second updates to the folder structure of the selected share/disk. The dialog should also contain a checkbox to allow "move" instead of "copy".
      Create ZIP = creates a ZIP of the directory (not visible in the example)
      Upload = uploads a file through the browser (not visible in the example)
      I think this should be sufficient for most users.
  14. Yes, it was caused by wget. I tried the following smbclient command (source) and it hits the maximum write speed of the disk:
      smbclient //10.1.2.3/video <smb_password> -U <smb_user> -c 'prompt ON; recurse ON; cd \Filme\KL\; lcd /mnt/user/Movies/KL/; mget *'
      I started two processes and it's really fast now (see the sketch below). Funnily, the load is not really different. Sadly, the source NAS seems to hit its limit now. At least I think so, because I was not able to raise the network traffic by starting three parallel smbclient processes, so I think the "volume utilization" is the bottleneck now. I never really found out what Synology means by that. P.S. I hope someone can answer this question, because I have not found out how to skip existing/older files with the smbclient command.
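      "Two processes" just means backgrounding two smbclient calls, one per folder; a sketch (the second folder \Filme\MN\ is made up for this sketch, and the credentials are placeholders as above):

          #!/bin/bash
          # Two parallel smbclient transfers, each fetching a different folder.
          smbclient //10.1.2.3/video <smb_password> -U <smb_user> \
            -c 'prompt ON; recurse ON; cd \Filme\KL\; lcd /mnt/user/Movies/KL/; mget *' &
          smbclient //10.1.2.3/video <smb_password> -U <smb_user> \
            -c 'prompt ON; recurse ON; cd \Filme\MN\; lcd /mnt/user/Movies/MN/; mget *' &
          wait    # return only after both transfers have finished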
  15. I did not have a good start with Unraid. My first try ended, maybe because of a defective USB stick or because I used a USB 3.0 port. So I started again from scratch: new stick, USB 2.0 port, moved licence... everything new. And again I killed Unraid ^^ What I did: I started a wget command to fetch my files from the old server, as follows:
      wget --background --quiet --recursive --level 0 --no-host-directories --cut-dirs=2 --no-verbose --timestamping --backups=0 --user=<ftp_username> --password=<ftp_password> "ftp://10.1.2.3/video/Filme/GH/" --directory-prefix=/mnt/user/Movies
      As the performance was bad, I killed the wget processes through the terminal with "pkill wget". Then I searched for an alternative method and found the smbclient command. My first try ended in an error message (directory not found), but this one worked:
      smbclient //10.1.2.3/video <smb_password> -U <smb_user> -c 'prompt ON; recurse ON; cd \Filme\KL\; lcd /mnt/user/Movies/KL/; mget *'
      The speed on the source NAS was good, so I reloaded the Unraid WebGUI and bamm... everything was messed up, on the Main tab as well. So I checked the network monitor of Google Chrome and found this URL to the API: http://10.1.2.3/webGui/include/DeviceList.php. I reloaded it multiple times without passing parameters, and it randomly returned the syntax error shown above. Of course I stopped the smbclient process, too, this time by pressing CTRL+C in the terminal. Then I checked through the terminal what could have happened, and I found it quickly (the commands are sketched below): my smbclient process had written its file directly onto the stick instead of the target disk ^^ (the file had a size of 16G = the size of the stick). So Unraid was not able to write any more config data to the stick, and this caused the error. I deleted the file, and after that the errors were gone. I'm not sure whether my first smbclient command was wrong or the second, and I still need to find out why "prompt ON" and "lcd" did not work, but that should not be part of this topic. This is just feedback for you on what can cause such errors. Maybe Unraid should reserve a partition on the stick to avoid such user-produced errors?!
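      The check itself was simple; a sketch of the two commands (on Unraid the stick is mounted at /boot):

          df -h /boot                       # showed 0 bytes free on the flash drive
          du -ah /boot | sort -rh | head    # the stray 16G file appeared at the top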
  16. Nice to know. It jumps like crazy between 0 and 80 MB/s per disk. Looks like wget is working in chunks.
  17. It's insecure (plain) FTP, so it should be fast. P.S. It transfers mainly huge files (~25 GB per file).
  18. I started multiple wget processes (each fetches files through FTP from the source NAS), and each process writes its files to a different disk. The Unraid server has 10G networking, and so does the source. The load on the Unraid server looks good, and on the source, too. Finally I started a fourth wget process that targets disk 4. That was the first time the transfer exceeded 1G. Any idea why it could be so slow? I thought that without parity it would reach the maximum write speed of each disk (up to 260 MB/s)?!
  19. Ok, I'll do both (a new stick and a USB 2.0 port this time). The stick was really old, so that was probably the reason. P.S. After restarting the server it seemed to work again, but I was not able to start the mover; it did nothing. So finally I did a fresh re-install. I hope this does not happen again.
  20. Finally I created the folders manually through the CLI. The steps I did (maybe helpful for someone else):
      1.) I set the share to "Manual: do not automatically split directories" (correct?).
      2.) Then I opened the terminal through the Unraid WebGUI.
      3.) Created the share folder "Movies" and the subdirectory "KL".
      4.) Opened the Unraid Main tab and viewed the content of disk 3. Everything looked good.
      5.) Installed the CA User Scripts plugin and added a new script with the following content (direct execution through the terminal would be possible, too) to fetch the files from the source NAS through FTP (the script is helpful for backing up websites as well):
      wget --background --quiet --recursive --level 0 --no-host-directories --cut-dirs=2 --no-verbose --timestamping --backups=0 --user=<ftp_username> --password=<ftp_password> "ftp://10.1.2.3/video/Filme/GH/" --directory-prefix=/mnt/user/Movies
      6.) Added this script multiple times for multiple copy processes on different disks with different folder names (in my case the letters; a loop that does the same is sketched after this post). Then I clicked on "Run in Background", and now I need to wait... In the Main tab the "Writes" column rises on the corresponding disks, so everything looks good.
      7.) Finally I opened the terminal again and typed "top" to check whether the wget processes were still active.
      So it works now, but I had hoped it would copy faster. The maximum is 970 Mbps, although a 10G network is used on both sides and no parity disk is set up. The load looks good as well, on the source, too.
      EDIT: I started a fourth wget process (now copying to four disks at the same time) and now I reach 1.2 Gbps. That is around 30 MB/s per disk, much lower than the disks' maximum write speed?!
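      For step 6.), instead of duplicating the script by hand, a small loop could start one wget per letter folder; a sketch (the letter list and the FTP placeholders are examples only):

          #!/bin/bash
          # Start one background wget per letter folder on the source NAS.
          # Because the letter directories were created on specific disks
          # beforehand (and the share does not split directories), each
          # transfer ends up writing to a different disk.
          for letter in GH IJ KL MN; do
              wget --background --quiet --recursive --level 0 \
                   --no-host-directories --cut-dirs=2 --no-verbose \
                   --timestamping --backups=0 \
                   --user=<ftp_username> --password=<ftp_password> \
                   "ftp://10.1.2.3/video/Filme/${letter}/" \
                   --directory-prefix=/mnt/user/Movies
          done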
  21. This happened to me, too. How I produced this error: a nearly fresh Unraid server with 2 parity disks, 6 data disks and 1 NVMe SSD cache. While using wget through the User Scripts plugin to get the files from my old NAS, I disabled the SSD cache, as it would produce unnecessary TBW on the SSD. But as the write speed was really low, I decided the next day to remove the parity drives as well. So I tried to stop the array. After several minutes it had still not stopped (it was trying to unmount the user shares). I thought this was because of the still-running wget script, so I removed the ethernet cable, waited several minutes and plugged it in again. Now the traffic monitor did not show anything, so I was able to stop the array. Then I unselected the parity disks and started the array again. At first it looked promising, but on the next click the top left corner was missing the server name and description, the disk overview was completely empty, and the web client's JS API seemed to be dead. Finally I logged out and in again, and the known error message appeared:
      Warning: session_write_close(): write failed: No space left on device (28) in /usr/local/emhttp/login.php on line 33
      Warning: session_write_close(): Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php) in /usr/local/emhttp/login.php on line 33
      Warning: Cannot modify header information - headers already sent by (output started at /usr/local/emhttp/login.php:33) in /usr/local/emhttp/login.php on line 35
      "/usr/local/" is on the USB flash drive, correct? Does this mean that the drive is defective, or is there a connection to the removed parity drives / a bug in Unraid? What should be my next step?
  22. Ah ok. Nice to know. For creating the letter-dirs this is absolutely sufficient. Thanks.
  23. What do you mean by that? Starting the CLI or something else?
  24. Any suggestions? I found:
      Filebrowser
      Krusader
      unBALANCE
      unBALANCE looks promising, but I'm not sure whether it would fit my use case.