rajk

Members
  • Posts: 9
  • Joined
  • Last visited
rajk's Achievements: Noob (1/14), Reputation: 0
  1. So I ran the preclear plugin with the Joe L. 1.20 script, and I did not do the pre-read, only the zeroing and the post-read. When it completed and I tried to assign the disk to the array, Unraid started clearing the disk. This never happened when I used the other preclear plugin script, although on those occasions I did do the pre-read, zeroing and post-read. Does anyone know why Unraid is clearing the disk on assignment when this has not happened with the other disks I have precleared before? (A short read-only command sketch for sanity-checking a precleared disk is included after this post list.)
  2. Thanks, I just tried the other script that is in the preclear plugin (Joe L. - 1.20) and that seems to be working so far; it is currently doing the pre-read. I take it this script does the same thing as the other?
  3. Unraid version: 6.9.2
Hi all, I have an Unraid server that consists of the following:
Motherboard: Asus P5Q Premium
CPU: Intel Core 2 Duo E7400
Memory: 2GB
Parity Disk: Seagate Ironwolf Pro 4TB
Disk 1: Seagate Ironwolf Pro 4TB
Cache Drive: Crucial MX500 1TB
I just purchased an additional WD Red Pro 4TB drive. I installed it into a spare slot in my rack, started the preclear plugin and selected the gfjardim 1.0.22 script. I went with all the other default options and started the script. I get the error 'error encountered, please verify log'. In the preclear log (see below), it says something about low memory. I know I only have 2GB, but is this really a problem for preclear to work? If it is, is there any way to reduce the memory load? (A short sketch of commands for checking memory and rootfs usage is included after this post list.) Here is the preclear log; any guidance would be appreciated!
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdf
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Preclear Disk Version: 1.0.22
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T. info type: default
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T. attrs type: default
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Disk size: 4000787030016
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Disk blocks: 976754646
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Blocks (512 bytes): 7814037168
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Block size: 4096
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Start sector: 0
Dec 06 13:23:41 preclear_disk_VBH0L3ZF_10889: Pre-read: pre-read verification started (1/5)....
Dec 06 13:23:41 preclear_disk_VBH0L3ZF_10889: Pre-Read: dd if=/dev/sdf of=/dev/null bs=2097152 skip=0 count=4000787030016 conv=noerror iflag=nocache,count_bytes,skip_bytes
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: size: 946212, available: 93896, free: 9%
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: Filesystem 1K-blocks Used Available Use% Mounted on
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: rootfs 946212 852316 93896 91% /
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: devtmpfs 946220 0 946220 0% /dev
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: tmpfs 1020916 0 1020916 0% /dev/shm
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: cgroup_root 8192 0 8192 0% /sys/fs/cgroup
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: tmpfs 131072 304 130768 1% /var/log
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: /dev/sda1 15000232 294664 14705568 2% /boot
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: overlay 946212 852316 93896 91% /lib/modules
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: overlay 946212 852316 93896 91% /lib/firmware
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: tmpfs 1024 0 1024 0% /mnt/disks
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: tmpfs 1024 0 1024 0% /mnt/remotes
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: Low memory detected, aborting...
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: Pre-read: pre-read verification failed!
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Error:
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.:
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: ATTRIBUTE INITIAL NOW STATUS
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Reallocated_Sector_Ct 0 0 -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Power_On_Hours 0 0 -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Temperature_Celsius 33 33 -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Reallocated_Event_Count 0 0 -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Current_Pending_Sector 0 0 -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Offline_Uncorrectable 0 0 -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: UDMA_CRC_Error_Count 0 0 -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: SMART overall-health self-assessment test result: PASSED
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: error encountered, exiting...
  4. Thanks, it did the trick and everything else looks okay. I need to get familiar with these command-line instructions.
  5. ls -lah /mnt/cache/appdata/binhex-krusader
total 240K
drwxrwxr-x 1 nobody users 56 Nov 19 13:39 ./
drwxrwxrwx 1 nobody users 30 Nov 19 13:36 ../
drwxrwxrwx 1 nobody users 52 Nov 19 13:39 home/
-rw-r--r-- 1 root root 162 Nov 19 13:39 perms.txt
-rwxrwxr-x 1 nobody users 235K Nov 21 17:01 supervisord.log*
ls -lah /mnt/disk1/appdata/binhex-krusader
total 0
drwxrwxr-x 3 nobody users 26 Nov 19 13:39 ./
drwxrwxrwx 3 nobody users 29 Nov 19 13:36 ../
drwxrwxrwx 3 nobody users 20 Nov 19 13:39 home/
  6. Thanks, this is what I get:
ls -lah /mnt/cache/appdata
total 16K
drwxrwxrwx 1 nobody users 30 Nov 19 13:36 ./
drwxrwxrwx 1 nobody users 40 Nov 21 14:49 ../
drwxrwxr-x 1 nobody users 56 Nov 19 13:39 binhex-krusader/
ls -lah /mnt/disk1/appdata
total 0
drwxrwxrwx 3 nobody users 29 Nov 19 13:36 ./
drwxrwxrwx 5 nobody users 47 Nov 19 13:04 ../
drwxrwxr-x 3 nobody users 26 Nov 19 13:39 binhex-krusader/
  7. Thanks. When I installed the new SSD cache drive it didn't have anything on it, so there were no duplicates, only the files that mover had previously moved to the array. In terms of just deleting it from disk 1 (my only data disk), won't this cause a problem with the labels that appdata has? For example, if I look at the appdata share and click on the little folder icon, it says it is on disk1 and cache. If I simply delete it from disk 1, won't it get confused? Also, how do I delete it from disk 1? If I look in Krusader there is no path into disk 1. I am able to access shares, unassigned drives and remotes (I followed the Spaceinvader One process for Krusader), but I can't see an entry for /mnt/disk1/. I can access it from a command-line window, but I'm not comfortable just deleting the whole appdata folder as I think it might cause a problem since Unraid thinks it is across my cache and disk1! Sorry, I'm just a newb lol. (A clean-up sketch for this is included after this post list.)
  8. Hi all, I am a new user to Unraid, currently running version 6.9.2. My setup consists of 2 x 4TB Seagate Ironwolf Pro disks (1 x data and 1 x parity) and a cache drive (originally an HDD but now an SSD, please see below). When I first set it up I did not have an SSD for cache, so I decided to just use a spare 1TB HDD. I bought an SSD today and my intention was to replace the cache HDD with the SSD. I watched a few videos on the internet and looked at the Wiki page on how to upgrade your cache drive. It seemed easy enough, but I have a minor problem, which I have highlighted with an asterisk in the procedure I followed below. I wonder if anyone can help.
I followed this process:
Stopped the dockers I had running (only one, Krusader the file manager)
Stopped the Docker and VM services from the settings tab (my CPU does not support virtualisation so I did not have any VMs)
Changed the mover schedule to monthly (it was on daily)
Changed the appdata share's 'use cache pool' setting from prefer to yes
Changed the domains share's 'use cache pool' setting from prefer to yes
Changed the system share's 'use cache pool' setting from prefer to yes
Started the mover process.
* The mover process seemed to take a long time for what seemed a small amount of data on the original cache drive (about 22GB). Anyway, after it had finished I looked on the original cache drive and there were still some appdata folders. I ran mover again but some files remained. I looked into the folder and the Unraid interface said 0 files, too long to list. Anyway, I proceeded with the next steps as below:
Stopped the array, unassigned the existing Samsung cache HDD and assigned the new Crucial SSD
Formatted the drive
Set all shares back to their previous 'prefer' cache setting (those that had it before the upgrade)
Started mover again
Enabled the Docker service
Changed mover back to the daily schedule.
* The problem I have is that my only disk in the array still has appdata on it, just like the original cache drive did when I wanted the files moved to the array before the disk swap. What is causing this? Why isn't mover moving all the appdata files back to the cache even though I have disabled the Docker service? When I look at the files in the appdata folder on the array disk, they relate to Krusader, which I have shut down, and the Docker service is disabled. Can anyone shed any light on this, please? (A short sketch of commands for checking what is left on the disk versus the cache is included after this post list.) Much appreciated, Raji
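
Re post 1 (Unraid clearing the disk on assignment): a minimal read-only sketch for sanity-checking a drive after a preclear, before assigning it. The device name /dev/sdX is a placeholder, not taken from the post; confirm the actual device with lsblk first. If the preclear signature in the first sector is missing or the drive is not actually zeroed, Unraid treats the disk as unprepared and runs its own clear when it is added to a parity-protected array.

  lsblk -o NAME,SIZE,MODEL                                                  # confirm which device letter is the new drive
  dd if=/dev/sdX bs=512 count=1 2>/dev/null | hexdump -C                    # dump the first sector, where the preclear signature lives
  dd if=/dev/sdX bs=1M skip=1024 count=1 2>/dev/null | hexdump -C | head    # spot-check 1 MiB of the data area (1 GiB in); it should be all zeros

These commands only read from the disk, but double-check the device letter anyway; nothing here should ever use of=/dev/sdX.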
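
Re post 3 (the 'low memory' abort): the df output in the log shows rootfs 91% full, and on Unraid the root filesystem lives in RAM, so a nearly full rootfs and only 2GB of RAM are really the same problem. A small sketch of standard, read-only commands for seeing where the memory is going before retrying the preclear (nothing here is specific to the preclear plugin):

  free -m                           # overall RAM usage
  df -h /                           # rootfs usage; on Unraid this filesystem is held in RAM
  du -sh /var/log /tmp 2>/dev/null  # common places where space builds up between reboots
  ps aux | sort -nrk 4 | head       # processes sorted by %MEM (column 4)

If rootfs is close to full, removing unused plugins or simply rebooting before the preclear may free enough headroom; adding RAM is the more reliable fix for a 2GB box.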
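
Re post 7 (how to delete the leftover appdata from disk 1): a minimal sketch, assuming the Docker service is stopped and the copy on the cache is the one to keep. The paths come from the ls output earlier in this list; verify them on your own system before running anything, since the rm step is destructive.

  diff -rq /mnt/disk1/appdata/binhex-krusader /mnt/cache/appdata/binhex-krusader   # lists anything that exists only on one side
  rm -r /mnt/disk1/appdata/binhex-krusader                                         # only if nothing unique lives on disk1

Once the disk1 copy is gone, the appdata share's folder view should show it on the cache only; the user share is just the union of the appdata folders on each disk and pool, so removing one copy does not confuse anything.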
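
Re post 8 (mover leaving appdata on the array disk): as far as I know, mover skips files that are open and will not move a file when one with the same name already exists at the destination, so leftovers usually point to duplicates or in-use files rather than a fault. A short sketch for checking what is actually left, using the paths from this thread:

  du -sh /mnt/disk1/appdata /mnt/cache/appdata   # how much data is on each side
  find /mnt/disk1/appdata -type f | head         # sample of the files still on the array disk
  mover                                          # with the Docker/VM services stopped, the mover script can be run from the console

The find output should make it obvious whether the stragglers are Krusader files that already exist on the cache (safe to delete from disk1, as in the previous sketch) or something that is genuinely still held open.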