Everything posted by jbartlett

  1. It was supposed to loop from 0 to 9 instead of 1 to 9. I made that change. I'll take a look at the block size computation. Can you email me a small debug file, created via "Create Debug File" in the DiskSpeed app, to [email protected]?
     I'm currently adding proper SSD benchmarking, so it's kinda moot in 99% of the cases. It will write ten 2GB files and then read the ten files back, taking the min/max/avg times and bypassing the system cache. It requires a mounted partition with at least 25GB of free space. That logic is done; now I'm working on figuring out how to render it on a chart with a stock ticker reflecting the three values.
     Thank you, I'll investigate.
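     A rough sketch of that write/read pass, assuming a placeholder mount point of /mnt/ssd and using dd with direct I/O to bypass the cache. The shipping code differs; this just shows the shape of the test:

        # Sketch of the planned SSD test: write ten 2GB files, read them back.
        # /mnt/ssd is a placeholder for any mounted partition with 25GB+ free.
        MNT=/mnt/ssd
        for i in $(seq 0 9); do
          dd if=/dev/zero of="$MNT/speedtest.$i" bs=1M count=2048 oflag=direct 2>&1 | tail -1
        done
        for i in $(seq 0 9); do
          dd if="$MNT/speedtest.$i" of=/dev/null bs=1M iflag=direct 2>&1 | tail -1
        done
        rm -f "$MNT"/speedtest.*   # min/max/avg are then taken over the ten reported rates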
  2. Version 2.9.7 deployed.
     Change Log
     - Fix issue of drive images not being preserved when installing a new version of this application
     - Verify that the DiskSpeed application can write to the externally mapped volume
     - Guard against some potential race conditions when formatting drive images on a full system scan
     - On fresh startup, fetch each device's block size from "/sys/class/block/DeviceID/queue/logical_block_size" for use in forcing a drive's spin-up
     Note: While your drive images will persist over this update, if you have not previously submitted the drive image to the HDDB the style formatting will not persist, due to a bug where a second scan of your controllers sets the saved configuration to match one given drive. I was able to add code to persist the drive image itself.
     Ways to resolve this:
     - If you don't care about your benchmarks, click on the button "Purge everything and start over". Note: After at least one benchmark is done on a drive, you have the ability to recover previous benchmarks that were uploaded to the HDDB by viewing the drive in question and clicking on the button "Manage Benchmarks".
     - Edit a given drive and correct the text overlay. If you have multiple drives of the same model, you'll have an additional checkbox labeled "Apply changes to all drives of the same model" - check it.
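     For reference, the spin-up lookup amounts to something like the following sketch (the device ID is a placeholder; the real code substitutes each drive's ID and falls back to 512 bytes when the sysfs entry is missing):

        DEV=sda   # placeholder device ID
        BS=$(cat "/sys/class/block/$DEV/queue/logical_block_size" 2>/dev/null || echo 512)
        # one small direct read is enough to force the drive to spin up
        dd if="/dev/$DEV" of=/dev/null bs="$BS" count=1 iflag=direct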
  3. Ah, I follow now. That spinup code is ancient, from around when I first wrote the Docker version. If the drives haven't been scanned yet (or are being rescanned), there's no information on them, so it defaults to a 512 byte block size. I also recall seeing that the spinup sometimes worked and sometimes didn't in the past; this might be the reason why.
     After hunting around, it looks like I can gather the block size from /sys/class/block/nvme0n1/queue. What do you get from the following commands?
        cat /sys/class/block/nvme0n1/queue/hw_sector_size
        cat /sys/class/block/nvme0n1/queue/logical_block_size
        cat /sys/class/block/nvme0n1/queue/physical_block_size
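     If it's easier, here's a quick loop (my own snippet, not part of DiskSpeed) that dumps all three values for every block device at once:

        for q in /sys/class/block/*/queue; do
          dev=$(basename "$(dirname "$q")")
          printf '%-10s hw=%s logical=%s physical=%s\n' "$dev" \
            "$(cat "$q/hw_sector_size")" \
            "$(cat "$q/logical_block_size")" \
            "$(cat "$q/physical_block_size")"
        done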
  4. How did you switch your nvme to 4K mode? DiskSpeed doesn't use a 512 byte block size. The block size that DiskSpeed uses is the value returned for MaxSectorsPerRequest multiplied by LogicalSectorSize. From my investigations, this is the same block size that the OS determines to use in accessing the drive.
     Max Sectors Per Request is determined via:
        blockdev --getmaxsect /dev/nvme0n1
     For my drive, I get back "2560".
     Block sizes for nvme drives are determined using the command "nvme list" and getting the value from the Format column. You'll need to be inside the DiskSpeed Docker to run this command. Enter via "docker exec -it DiskSpeed bash". My system returned:
        Node             SN                   Model                      Namespace  Usage                  Format       FW Rev
        ---------------- -------------------- -------------------------- ---------  ---------------------- ------------ --------
        /dev/nvme0n1     S3X4NB0K309824V      Samsung SSD 960 EVO 500GB  1          191.01 GB / 500.11 GB  512 B + 0 B  3B7QCXE7
        /dev/nvme1n1     175014425233         WDC WDS256G1X0C-00ENX0     1          256.06 GB / 256.06 GB  512 B + 0 B  B35900WD
     So the block size is 2560 x 512, or 1310720 bytes. AKA bs=1310720 for the dd command.
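     Putting that together, the same math can be scripted; this is just an illustrative sketch (it reads the logical sector size from sysfs instead of parsing "nvme list"):

        DEV=nvme0n1
        MAXSECT=$(blockdev --getmaxsect "/dev/$DEV")                  # e.g. 2560
        LSS=$(cat "/sys/class/block/$DEV/queue/logical_block_size")   # e.g. 512
        echo "bs=$((MAXSECT * LSS))"                                  # 2560 * 512 = 1310720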
  5. @Snubbers - There are no plans to add benchmarking of USB sticks because there are too many factors that can influence the results. However, you can perform the same test manually with the following command:
        dd if=/dev/sdx of=/dev/null bs=1310720 skip=0 iflag=direct status=progress conv=noerror
     Replace "sdx" with your drive's device ID (such as "sda" for your UNRAID boot stick). Press CTRL-C to stop the scan when the reported speeds stabilize.
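     If you'd rather spot-check a few points on the stick instead of one long run, a small wrapper like this (my own sketch, not a DiskSpeed feature) samples roughly 500MB at the start, middle, and near the end:

        DEV=sdx                                   # replace with your device ID
        SIZE=$(blockdev --getsize64 "/dev/$DEV")  # drive size in bytes
        BS=1310720
        for pct in 0 50 90; do
          SKIP=$(( SIZE * pct / 100 / BS ))
          echo "--- ${pct}% ---"
          dd if="/dev/$DEV" of=/dev/null bs=$BS skip=$SKIP count=400 iflag=direct conv=noerror 2>&1 | tail -1
        done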
  6. http://strangejourney.net/hddb/index.cfm?View=Drives&Vendor=Western Digital&Model=WD120EFBX The tail end of the drive is kinda slow IMHO, but the overall average is around 170 MB/sec. Compare your graph to the ones in the link; the data sample is still small, but out of 7 unique drives uploaded, the spread is pretty tight.
  7. Looks to be harmless, related to Java 11. https://dev.lucee.org/t/illegal-reflective-access/7052 https://issues.apache.org/jira/browse/FELIX-5765
  8. This is something I've been wanting to investigate too. But after you edit a drive image, click the "Submit Drive" button that shows up after you save the update (or when viewing the drive after updating). Click the button again to confirm. You only need to do this step once per drive model. On the next update, it should restore your drive image & overlay from what you uploaded. This is a per-machine setting.
  9. Remove the DiskSpeed directory, edit the Docker image, and save it. Check to see if the DiskSpeed directory has been recreated.
  10. What kind of upload? Drive image upload or Drive & Benchmark data? And please try again, I tested both and got back successful messages for both. If it does fail, note the time & time zone that you tried and I'll check the logs.
  11. It looks like the permissions on the new directory aren't correct and the application can't write to it. While the Docker config is set to R/W, that doesn't mean squat if the directory itself is not writeable by Docker. Open a shell prompt to the unraid server itself (not the Docker) and enter the following lines. This is the same code that runs when you use the Unraid tool to apply new permissions.
        chmod -R u-x,go-rwx,go+u,ugo+X '/mnt/arraycache/appdata/DiskSpeed'
        chown -R nobody:users '/mnt/arraycache/appdata/DiskSpeed'
        sync
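     To confirm it took, you can try a write as the container's user; this assumes the container runs as nobody, which matches the chown above:

        su -s /bin/bash nobody -c 'touch /mnt/arraycache/appdata/DiskSpeed/.writetest && rm /mnt/arraycache/appdata/DiskSpeed/.writetest' \
          && echo "writeable" || echo "still not writeable"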
  12. In version 2.9.5, I had taken out the "apt update" command in favor of keeping the docker size smaller, since this Docker relies on another Docker (Lucee) and they had the same command. I put it back in to ensure that when *I* build, my DiskSpeed docker is current. Any other subsequent updates will rely on the other teams implementing them, such as Lucee and then Apache Tomcat, and I think Tomcat is built off of Debian.
  13. Version 3 (in dev) will have write testing for solid state media because it's required there: on some drives you have to read an existing file vs reading a raw location on the drive. I'm contemplating adding write testing for spinners, but if I do, it'll likely be ONLY on a drive with no partitions. I was able to duplicate this and implemented a fix, including for some other oddities I found with the newer version of Lucee and the base OS. Version 2.9.6 pushed.
  14. Pushed version 2.9.5 to the Docker hub. @Howboys - let me know if it resolves your issues with the EOL notice. It's using the latest build of the Lucee app server. I'm starting to use tagged versions. 2.9.5/latest resolves issues with funky partition output from the "parted" utility. Well, hopefully resolves it, as I couldn't duplicate it. If you have issues with version 2.9.5, change the repository to "jbartlett777/diskspeed:2.9.4" to roll back to the previous version.
  15. I plan on pushing an update soon™ with the latest version of Lucee which will resolve this.
  16. Renaming the "Instances" directory under "/appdata/DiskSpeed" will do the same without needing to reinstall. If you do get this issue again and it's resolved by renaming the directory, let me know so I can get a copy of your "bad" data so I can duplicate and fix the issue.
  17. I've verified that it's not a problem with having more than 26 drives. I added two 10 port hubs and filled them with USB drives to push my sdx counts over sdaa, then benchmarked sdab and sdac: no issues, and it benchmarked the correct drive.
     Does it report something like "SpeedGap detected"? If so, you'll need to disable that when starting a benchmark from the main page, not from the drive itself.
     Also, if you select 2 or more drives to benchmark, you'll see the text "Click on a drive label to hide or show it." - the period on the end is a hidden hyperlink. Clicking on it will reveal the hidden iframes that perform the actual work, so you can see what's happening and whether there's an issue.
  18. The error happens on sdc; its partition output isn't standard, with extra blank lines, some padded with spaces. I've added code to handle the extra lines and code to catch other gotchas so it'll continue.
  19. I've changed my default file system to "btrfs" under "Settings > Disk Settings" on my backup/development NAS system. If I run the new config option via "Tools > New Config", it does more than just reset the drive slots; it also resets at least the default file system in Disk Settings, which gets changed back to "xfs". I discovered this issue after taking a drive out of the Drive 2 slot and resetting the config so I could wipe the partitions and re-add it as btrfs, and it was added & formatted with xfs. I was able to repeat the steps to duplicate:
     1. Set default file system to btrfs
     2. Run "New Config", preserve all drive settings.
     3. View Disk Settings; default file system is now xfs
     nasbackup-diagnostics-20220821-1857.zip
  20. It looks like you have more than 26 drives attached. Can you submit a debug file using the "Create Debug File" at the bottom of the page? You do not need the Controller Info item.
  21. When you first pull up the app or click on the "DiskSpeed" label at the top of any page, it should display a "Benchmark Drives" button. That in turn will display a Benchmark page where you can optionally select which drives you want to test. By default, it starts with all drives but if you uncheck the "Check all drives" checkbox, all your drives will be listed for individual selection.
  22. I get this in my syslog every week.
        Jul 24 04:40:01 NAS apcupsd[118732]: apcupsd exiting, signal 15
        Jul 24 04:40:01 NAS apcupsd[118732]: apcupsd shutdown succeeded
        Jul 24 04:40:03 NAS apcupsd[71875]: apcupsd 3.14.14 (31 May 2016) slackware startup succeeded
        Jul 24 04:40:03 NAS apcupsd[71875]: NIS server startup succeeded
     Looking at /var/log, I see that apcupsd.events rolled over at 4:40 am at the same time.
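     If you want to confirm the restart is tied to log rotation, something like this shows which cron or logrotate entry touches apcupsd (standard paths assumed; adjust for your setup):

        grep -r apcupsd /etc/cron* /etc/logrotate* 2>/dev/null
        ls -l /var/log/apcupsd.events*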
  23. In short, yes, but you'll still have to jump through a few hoops. I spent some time trying to create a UniFi Protect VM to do the conversion, but it didn't pan out. I ended up using my Dream Machine SE as a conversion host after finding out Ubiquiti added an MP4 export tool to their OS. After using the Ubuntu VM to copy the UBV files off to an unraid share, I used Putty to SSH to the DM, and from there I SFTP'ed the files to the DM, converted them, and SFTP'ed them back to unraid.
     More detailed instructions:
     1. Enable SSH on UniFi OS: System > SSH > Enable
     2. SSH into the UniFi Protect server. Local video is hosted under "/srv/unifi-protect/video" by year, month, day. July 20, 2022 will be located in "/srv/unifi-protect/video/2022/07/20". If you don't have much space available, you'll need to edit the UniFi Protect settings to only archive x number of days (take how much you have and subtract a day or two) so it can free up space.
     3. Create a directory for hosting conversions: /srv/video
     4. CD into /srv/video
     5. SFTP to the remote server to download the ubv files. The video files start with the Mac ID of the camera, e.g.
           D021F991680A_0_rotating_1658291482261.ubv
             Mac ID    |Q|  Type  |    Epoch
        Q is the video quality. "0" is full; each number higher is lower in quality. Transfer all the files for the day & camera in question from your remote host to your UniFi system: "mget D021F9924ABB_0_rotating_*". Quit SFTP.
     6. Create a single MP4 file. Use a different -d option for each file you're converting ("vid0", "vid1" for example). Use multiple -s options to specify the files in sequence. Using Putty makes it easy, as you can double-click on a file name and then right-click to paste it.
           /usr/share/unifi-protect/app/node_modules/.bin/ubnt_ubvexport -d ./vid0 -s f1.ubv -s f2.ubv -s f3.ubv
        It'll also create one zero-byte MP4; you can delete it.
     7. SFTP back to the remote server and upload the mp4 files. The files will be owned by root on the remote server, so you will need to "chmod 666 *.mp4" the files to be able to view them over the network.
     8. Don't forget to delete your working files in /srv/video.
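     Condensed, the on-DM part of the workflow looks roughly like this (the host name, share path, and ubv file names are placeholders from the example above; this is a sketch of the session, not an official Ubiquiti procedure):

        # On the Dream Machine, after SSHing in
        mkdir -p /srv/video && cd /srv/video

        # Pull the day's clips for one camera from the remote share (placeholder host/path)
        sftp user@unraid-host
        #   sftp> cd /path/to/ubv-share
        #   sftp> mget D021F9924ABB_0_rotating_*
        #   sftp> quit

        # Stitch them into one MP4: -d names the output, one -s per input in sequence
        /usr/share/unifi-protect/app/node_modules/.bin/ubnt_ubvexport -d ./vid0 \
          -s f1.ubv -s f2.ubv -s f3.ubv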
  24. I had trouble duplicating these mounting steps today, but I passed through the /dev/sdx ID as a drive to a Ubuntu VM and then used the "Disks" app to mount the "RAID-1 Array" under /dev/md. These steps assume you will not be mounting the drive back into your UniFi Protect system without formatting it.
     1. Launch the "Disks" app and locate the UniFi OS video partition. For my 8TB drive, it was displayed as "8 TB RAID 1 Array" under /dev/md/3 (your last digit may differ).
     2. Select the drive "8 TB RAID 1 Array", click on the Gear icon under the "Filesystem" bar, and select "Edit mount options".
     3. Disable "User Session Defaults" and change the mount point to where you want it. For me, it's "/home/john/UniFi". Ensure "Mount at system startup" and "Show in user interface" are checked, and click "OK".
     4. Click on the "Play" icon to mount (or reboot).
     5. Open a Terminal window. Change to the "~" directory if not already there: "cd ~"
     6. Enter "ls -l" to see what user/group you're logged in as. For me, it was "john:john".
     7. CD to the UniFi drive mount location. Since I mounted it in my user directory, I just need to type "cd UniFi".
     8. You will need to update the owners of the UniFi Protect files. Enter "sudo chown -R john:john .srv" - replace with the user/group you're logged in as.
     Now if you use the File Manager to navigate to the UniFi directory with hidden directories enabled, you can navigate the directory tree and copy the files to another system.
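     Once the ownership is fixed, copying everything off to another machine can be done in one line; the mount point and destination here are placeholders matching the example above:

        # Copy the recovered UniFi Protect tree to another host (example paths)
        rsync -avh --progress ~/UniFi/ user@nas:/mnt/user/unifi-recovered/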