
jbartlett

Community Developer
  • Posts: 1,896
  • Days Won: 8

Everything posted by jbartlett

  1. Open the file in Notepad, then delete the file. Create the file using VI over Telnet and copy & paste the contents in.
  2. I wouldn't discount them out of hand; they would work fine for infrequently accessed files, music, or DVD-quality media which doesn't require fast transfer rates, and even then, those drives should still be able to saturate your network. That said, they do increase parity check duration and would make good candidates for replacement when upgrading drives.
  3. It's an old article, posted Feb 2013
  4. I have my Movies share set to a split level of 1 to keep all of each movie's files together, but a split level of 2 for the TV share. A split level of 2 keeps all the files for each season together but spreads the seasons across different drives. The only delay most people are concerned about is going from one episode to the next, not from one season to the next.
  5. cache_dirs is the cause of the "retry" message when unmounting disks. It spawns a few "find" background sessions which access the drives on a constant basis, and it takes a few seconds for the script to realize that the server is going offline and to self-terminate. (See the process-check sketch after this list.)
  6. Thanks for the PM, I didn't know that this was forked out into its own support thread. Unfortunately, I can't reproduce it at the moment as it now works perfectly. Murphy's Law and all that. On the subject of the machine name, I've been connecting to my UNRAID server via its IP address for the past year because my PC won't find it when I refer to it as "NAS". Which also suddenly works...
  7. I have two Mac servers. I just created a 2nd TM share and am backing the 2nd server up to it now. Will post results on the new thread once it's created.
  8. I did a full system backup of my Mac Server without issue.
  9. Same with secure shares. I can access them, but when I try to create a directory, Windows tells me that I need permission to perform this action even though I've been granted R/W.
  10. As with beta10, I'm not able to access Private disk shares with beta10a. I created a new private share and granted myself R/W access, no cache. Navigating to it via Win8.1, Windows said that access was denied and prompted me for my credentials. I added them and was still denied.
  11. Works for me. If I change the security on the SMB share from "private" to "secure" or "public", then I can access the disk share, but I receive an access denied error when trying to create a file. ETA: After running the new permissions script, I can create directories & files if the share is public, but still can't if it isn't, even when my account has access. Also tried resetting the password on my "john" account, no go.
  12. If you don't have a cache drive, check the shares and change "Use cache drive" to No. The disk1, disk2, etc. shares do not utilize a cache drive. Changing it on the one regular share that wouldn't let me in made no difference.
  13. I don't see any mention of it, but with beta10 I can't access my disk shares via SMB, or any other share where the SMB security is set to Private and my Windows user has R/W access. No issue with access on beta9.
  14. Inline update + boot worked like a charm. However, I clicked "Check for updates" for the plugins and it reported that there's an update for the docker plugin. Clicking the update link doesn't upgrade; there looks to be an error in the status saying that a file already exists... Edit: Tried installing the update manually and the log window displayed the following:

      /usr/local/sbin/plugin install https://raw.githubusercontent.com/limetech/dockerMan/master/dockerMan.plg 2>&1
      plugin: installing: https://raw.githubusercontent.com/limetech/dockerMan/master/dockerMan.plg
      plugin: downloading https://raw.githubusercontent.com/limetech/dockerMan/master/dockerMan.plg
      Warning: mkdir(): File exists in /usr/local/emhttp/plugins/plgMan/plugin on line 146
      plugin: creating: /boot/config/plugins/dockerMan/dockerMan-2014.09.27.tar.gz - downloading from URL "https://github.com/limetech/dockerMan/archive/2014.09.27.tar.gz"
      Warning: mkdir(): File exists in /usr/local/emhttp/plugins/plgMan/plugin on line 146
      plugin: wget: "https://github.com/limetech/dockerMan/archive/2014.09.27.tar.gz" retval: 8
  15. It uses the mask in the find command. I'll look into whether find supports more than one mask, or else repeat the find command multiple times. (See the find sketch after this list.)
  16. I have UNRAID running under VirtualBox - putting a bunch of files under /mnt/disk1, building the hashes, and then deleting them would set things up for a reiser rebuild-tree check... I simulated a "lost+found" (though with no keys preserved) to test the recovery aspect by moving a bunch of files to a single directory and running a recover against that directory.
  17. Question: What do you think about storing the file's location in the extended attributes as well? I nixed the idea at first because there is a limited amount of space in the attributes and I didn't want to hog it - but it seems like there are very few scripts that store info in the attribute space. Assuming attributes aren't lost when files show up in "lost+found" (something I haven't tested and hope I never have to), it would make putting a file back into its original location faster, as the hash value wouldn't need to be computed first and then compared to an external file. If the attributes are lost, then the hash value compared to the exported file list would recover it. If added, I would need to add an option to refresh just the path in the attributes, and it would automatically be refreshed during adding/verifying. (See the xattr sketch after this list.)
  18. With my inventory script saving the hashes to a SQL DB, the main issue I had was that sqlite would introduce long delays into the process, considerably so if running on a spinner. Running the insert/update query after every file was not feasible, so I would append the SQL statements to a file and run them in a batch every minute - that seemed to keep the delays down to around 10 seconds or so. Storing the hashes in the extended attributes takes zero time in comparison. (See the batching sketch after this list.)
  19. CPU: Intel i7-4771 @ 3.50GHz. I ran 7 verifies on 7 drives against large media files, with 1 window running htop. CPU1 stayed around 50-60%, the rest at 80-99%. sha256deep had two threads open on each file. Aborting each task had a noticeable effect on CPU utilization. With only one session running, a single CPU hovered at 35-40%. I stopped cache_dirs for this test.
  20. Here are the times I got hashing a 4.69 GB file from RAMFS vs disk. Times are in seconds (RAMFS / disk):
      md5sum: 9 / 29
      sha1deep: 17 / 29
      sha256deep: 25 / 28
      tigerdeep: 14 / 29
      whirlpooldeep: 57 / 58
      When I was validating my hashes after upgrading to beta9, I had 8 telnet sessions open (the max) and had a validate running on 8 drives, one per session. I couldn't perceive any slowdown, but then again, I wasn't looking for one. (See the timing sketch after this list.)
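
A minimal sketch related to the cache_dirs note in post 5: before stopping the array, standard tools can show the background "find" sessions (and anything else) still holding a disk busy. The /mnt/disk1 path is just an example mount point.

      # list any cache_dirs "find" scans still walking the array
      ps aux | grep '[f]ind'
      # show which processes currently have files open on a given disk
      fuser -vm /mnt/disk1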
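
On the multiple-mask question in post 15, GNU find can combine several -name masks with -o in a single pass, so the command wouldn't need to be repeated per mask. The path and extensions below are only examples.

      # one pass over the disk, matching any of several masks
      find /mnt/disk1 -type f \( -name '*.mkv' -o -name '*.avi' -o -name '*.mp4' \) -print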
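
A rough sketch of the idea in post 17, storing the original path alongside the hash in user extended attributes. The attribute names and file path are hypothetical, and this assumes the filesystem is mounted with user xattrs enabled.

      f=/mnt/disk1/Movies/example.mkv   # hypothetical file
      # store the hash and the original location as user xattrs
      setfattr -n user.hash.sha256 -v "$(sha256sum "$f" | cut -d' ' -f1)" "$f"
      setfattr -n user.hash.path -v "$f" "$f"
      # read them back later, e.g. from a file that landed in lost+found
      getfattr -n user.hash.path --only-values "$f"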
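
A sketch of the batching approach described in post 18; the database path, table, and column names are made up for illustration. Queuing INSERT statements in a text file and flushing them once a minute inside a single transaction keeps sqlite from committing after every file.

      # $f and $hash come from the surrounding hashing loop (not shown)
      # per file: queue the statement instead of touching the DB
      echo "INSERT OR REPLACE INTO hashes(path, sha256) VALUES('$f', '$hash');" >> /tmp/hash_batch.sql
      # once a minute: flush the queue in one transaction, then empty it
      { echo "BEGIN;"; cat /tmp/hash_batch.sql; echo "COMMIT;"; } | sqlite3 /boot/config/hashes.db
      : > /tmp/hash_batch.sql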
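
A sketch of the RAM-vs-disk timing comparison from post 20, using a RAM-backed (tmpfs) copy under /dev/shm; the sample file name is hypothetical.

      cp /mnt/disk1/sample.mkv /dev/shm/sample.mkv
      time md5sum /dev/shm/sample.mkv     # RAM-backed copy
      time md5sum /mnt/disk1/sample.mkv   # on-disk copy
      rm /dev/shm/sample.mkv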