Everything posted by itimpi

  1. As far as I know you cannot - you have to do it at the command line level. However it only takes seconds to create the export file that can be viewed in any text editor.
  2. Yes, you do not delete the appdata share after doing the move. What you do delete is the appdata folder on each array disk.
  3. It is not that easy to view the attributes directly from Windows. However the plugin includes an export feature to write the checksums out to a text file (in the same format as used by some Windows utilities).
  4. Looking at the times shown for the individual phases, the 195 total must be for more than one cycle. The individual phase times are about what I see on my system with the 8TB drives, so they seem perfectly normal. With larger drives the elapsed time gain from the 'faster' preclear is quite significant.
  5. You cannot run preclear against any drive that is assigned to the unRAID array. I assume that you currently have the drive in question assigned as parity?
  6. I have found that if you do anything that will cause a Linux 'sync' command to be issued while preclearing (e.g. stopping the array, adding new disks to the array) then the 'sync' effectively lasts forever (I presume because the preclear is continually filling the Linux disk buffers so they never completely get flushed to disk). When the sync finishes (e.g. by stopping the preclears) then everything comes back to life. I am not sure there is a workaround for this - it is just a low-level interaction between the Linux sync command and the disk activity currently taking place.
  7. I believe it should if you have "Save new hashing results to flash" enabled. I don't. My reading of that option was that it wrote a new file (including the date in the name) containing the changes, not that it appended to the existing file. Could be wrong about that though.
  8. Glad to hear that fixed it! Like many problems the fix is easy once the real issue is identified.
  9. Are you certain that the USB stick was formatted as FAT32? If not then you can get these symptoms as the USB stick does not get correctly mounted, and therefore the command to start the GUI does not get executed. A quick check is to go into a console/telnet/ssh session and run the command 'ls /boot'. This should show the files/folders that were extracted from the ZIP download and placed on the USB stick. If it does not then the USB stick is not prepared correctly.
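     As a rough illustration of what that check looks like on a typically prepared stick (exact file names vary between releases):

         ls /boot
         # expect to see the contents of the ZIP download, e.g. bzimage, bzroot,
         # plus the config/ and syslinux/ folders
         # an empty listing (or an error) means the stick did not mount at /boot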
  10. Yes, but is local ssh access to the unRAID server still working?
  11. I have not tried it with the plugin, but I know using the preclear script you can clear USB drives. There are a couple of caveats: if it is USB 2 then it is very slow (USB 3 is similar in speed to SATA-connected drives), and many USB drives/enclosures do not report their SMART attributes. In the latter case, although the preclear will operate, you get limited useful information from the preclear reports as it cannot tell you about changes to reallocated and pending sectors. Guess I need to dig out a USB drive and try it with the plugin to see if it is OK as well. I think I can find USB enclosures that report SMART attributes and also one that does not, to test both cases.
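     If you want to check up front whether a particular enclosure passes SMART through, a quick sketch (the device name is only an example, and not every bridge needs the pass-through forced):

         # replace /dev/sdX with the device the USB drive actually appears as
         smartctl -a /dev/sdX           # may report nothing useful on bridges that block SMART
         smartctl -d sat -a /dev/sdX    # many USB-SATA bridges need the SAT pass-through forced
         # if neither shows the attribute table, the preclear report cannot track
         # reallocated/pending sector changes for that drive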
  12. You can do it locally on the unRAID server using something like the 'mc' (Midnight Commander) tool from a console/ssh session at the Linux command line level. There are also dockers that can be installed to provide a graphical method of file management local to the unRAID server, but I do not think most people bother with this extra complexity. Alternatively you can do it over the network using your normal desktop file manager on your PC/Mac, as the Unassigned Devices plugin will expose the drive as a new share. However in this case you could just as easily have plugged the drive directly into the PC/Mac and copied it there, so I am not sure which you will find more convenient.
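     A minimal sketch of the local route (the paths are placeholders; Unassigned Devices normally mounts drives under /mnt/disks, but check the mount point it actually shows you):

         # two-pane file manager at the console
         mc
         # or a straight copy that preserves attributes and can be re-run if interrupted
         rsync -av /mnt/disks/MyUSBDrive/ /mnt/user/MyShare/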
  13. Yes - you never want anything in the root of the cache except folders that correspond to shares. The recommendation would be to have a folder under /mnt/cache/appdata corresponding to whatever application you are trying to install.
  14. No it is not. /mnt/cache refers directly to the cache drive - not to a folder on it. This is the internal Linux reference to the cache drive and is independent of sharing. If you used the suggestion of a cache-only share called appdata, and then decided to hold Plex-specific settings in a plex folder within that, then the mapping for /config would be to /mnt/cache/appdata/plex.
  15. I do not use Plex but the mapping for the /config folder inside the container looks wrong. It is being mapped directly to /mnt/cache whereas I would expect it to be something more like /mnt/cache/appdata/plex. The normal approach is to have an 'appdata' share that is set to be cache only, and then within this have a folder to hold app-specific settings.
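     A sketch of the layout being described (the plex folder name is just an example; in the unRAID GUI the mapping is entered in the container template rather than as a raw docker option):

         /mnt/cache                   # the cache drive itself
         /mnt/cache/appdata           # folder corresponding to the cache-only 'appdata' share
         /mnt/cache/appdata/plex      # app-specific settings folder
         # container template: host path /mnt/cache/appdata/plex  ->  container path /config
         # (equivalent to a docker -v /mnt/cache/appdata/plex:/config mapping)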
  16. Not quite sure what you are asking? If you were using screen then rsync would have continued running and you can reconnect to its screen session. If you are asking whether rsync carries on from where it was after an abort if you use the same rsync command, then my experience is that it does, restarting with the file that was being copied at the time of the abort. However the temporary file that was being created at the time of the abort is left behind.
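     A small sketch of that in practice (the paths are placeholders):

         # re-running the same command skips files already copied in full and redoes
         # the one that was interrupted at the time of the abort
         rsync -av /mnt/disk1/files/ /mnt/disk2/files/
         # the leftover from the abort is a hidden, dot-prefixed partial file in the
         # destination folder, which can simply be deleted once the copy has completed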
  17. Thanks for confirming that works. I will do a bit of testing at my end and assuming nothing shows up I will upload an updated version of the script to the first post in this thread.
  18. Possible. The size is being obtained using a command of the form du -s filename, so you could see if they are reported the same on your system? If that is the case then an alternative command could be used. Try du -sb filename. If you change the script on your system (it should be easy enough to find the du -s command in the script) does it fix the issue on your system? As I currently do not have an example of it going wrong it is a bit harder to test here. Actually, looking at the du options I am not sure why I used the -s option - it looks as if simply using the -b option instead would work? I find it intriguing that the script has been available for over a year and this is the first time this issue has come up. It just shows how difficult it can be to allow for all the edge cases.
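     For anyone wanting to see the difference the two forms can produce on the same file (the path is just an example):

         du -s  /mnt/disk1/files/example.mkv    # allocated size in 1K blocks; this can differ
                                                # between filesystems even for identical contents
         du -sb /mnt/disk1/files/example.mkv    # -b gives the apparent size in bytes (what ls -l
                                                # shows), so it is independent of the filesystem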
  19. Possible. The size is being obtained using a command of the form du -s filename, so you could see if they are reported the same on your system? If that is the case then an alternative command could be used.
  20. (quoting the earlier report)

        It did happen a lot... I just showed the last few entries.

            root@Tower:/boot# cat duplicates.txt | grep WARNING | wc -l
            1665

        I've manually looked at a few of the files and the size matches what the script is printing. They are the same size.

            root@Tower:/boot# tail duplicates.txt
            WARNING: File sizes are different
            -rw-r--r-- 1 nobody users 1183365120 Oct 25 2011 /mnt/disk1/files/Wow World of Wonder/BDMV/STREAM/00619.m2ts
            -rw-r--r-- 1 nobody users 1183365120 Oct 25 2011 /mnt/disk3/files/Wow World of Wonder/BDMV/STREAM/00619.m2ts
            WARNING: File sizes are different
            root@Tower:/boot# ls -l "/mnt/disk1/files/Wow World of Wonder/BDMV/STREAM/00619.m2ts"
            -rw-r--r-- 1 nobody users 1183365120 Oct 25 2011 /mnt/disk1/files/Wow\ World\ of\ Wonder/BDMV/STREAM/00619.m2ts
            root@Tower:/boot# ls -l "/mnt/disk3/files/Wow World of Wonder/BDMV/STREAM/00619.m2ts"
            -rw-r--r-- 1 nobody users 1183365120 Oct 25 2011 /mnt/disk3/files/Wow\ World\ of\ Wonder/BDMV/STREAM/00619.m2ts

        Spaces in names an issue? Any other thoughts?

      Not off-hand. The script should have all relevant file names quoted so that spaces get handled, and I have lots of spaces in my names (both folders and files) and have not noticed any issues. I notice that the version of the script on my home system reports itself as 1.4 while the one available for download is 1.3. Not sure if the difference is significant; the note against the 1.4 change suggests it is not. However it is quite a while since I looked at the script, so trying to remember how it works is taxing my mind a bit; I know I had to jump through some hoops to get the state handling correct in a shell script. If you look at the script, the point at which the message is output has the lines

            if [ $previous_file_size -ne $ff_size ] ; then
                to_both "    WARNING: File sizes are different"

      Changing it to read

            if [ $previous_file_size -ne $ff_size ] ; then
                to_both "    WARNING: File sizes are different" $previous_file_size $ff_size

      will add the sizes that are mismatching to the message, which might help track down the cause. Interesting comment around the file systems being different on the disks. I would not have thought it should be relevant as nothing is done that is file system aware (at least that I know of). However all my disks are XFS so it is possible that the difference triggers something.
  21. I agree the report looks strange! Have you actually directly checked the size on disk of the two files mentioned? It seems more likely that they are actually different and the details in the warning message are wrong, or this would be reported far more frequently. If it is the report that is wrong, can you let me know if it is the first or second line, as that would help pin down the fault.
  22. (quoting) If that behavior is still persistent, then something is wrong. Can you "stop" and "start" once the server is up? If you remove the auto-start setting again, is the "start" working or not? I will cross-check this with my server.

      I deliberately have the autostart disabled on my system. The only time I expect to be rebooting my server is when there has either been an issue or when I have made a significant change. In such cases I want the ability to bring the server up with minimal automatic processes starting so that I can check things out and make a considered choice to start the array (which also starts Dockers and VMs that are marked to autostart).
  23. Nothing obvious in the syslog - it all looks as one would expect. I cannot see any sign there, however, of an attempt to start the array. I notice that the contents of the disk.cfg file in the config folder look a bit strange. Did you copy this from the previous install? The start of mine (before the individual disk details) is:

        Generated settings:
        startArray="no"
        spindownDelay="3"
        queueDepth="1"
        spinupGroups="yes"
        defaultFormat="2"
        defaultFsType="xfs"
        poll_attributes="1800"
        md_num_stripes="1280"
        md_write_limit="768"
        md_sync_window="384"

      which has a few significant differences to yours. Not sure if they matter or not.
  24. No idea from your description. Have you made sure that you have put your license key file in the config folder on the USB drive? I would suggest that providing the diagnostics file (Tools->Diagnostics) will help with providing the information needed to get help. A screen shot of the Main tab showing the Array Devices and Array Operations sections might also help.
  25. In a clean v5 install this is also the case. If yours is not, then you have probably broken one of the assumptions that the plugin upgrade makes. The plugin was an attempt to make this easier. Since it seems to fail with some frequency (particularly for those who upgraded to v5 from earlier unRAID releases) I feel it might have been simpler not to produce the plugin in the first place. Once you are on v6, upgrading to new releases via the GUI (from the plugins page) seems to be very reliable.