Posts posted by snowmirage

  1. I'm attempting to set up this container, and when I log in to the /super page I get the error about the database not being set up, as mentioned on the first page of this thread.

    However, I can't execute the database setup commands it lists when I'm in the console of the container:

    /opt/shinobi # mysql
    ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
    /opt/shinobi # 

    Has anyone found a way around this?

    From what I've read, the DB is configured by default to disallow passwordless root login. Do we know what database user and password this container is configured with?
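
    In the meantime, here's how I've been hunting for the credentials (a sketch; the paths and variable names are assumptions, so check the container's docs for the real ones):

    # Credentials are often injected as environment variables.
    env | grep -iE 'mysql|pass'
    # Shinobi keeps its DB settings in conf.json.
    grep -A 6 '"db"' /opt/shinobi/conf.json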

  2. Is anyone able to point me down a possible troubleshooting path here?

    I'm trying to open a Blu-ray disc in this Docker container on Unraid, and it fails to load the disc.

    [screenshot: MakeMKV in the Docker container failing to load the disc]

    But on my Windows machine, using the same version of MakeMKV and the same USB drive, it reads fine.

    [screenshots: MakeMKV on Windows reading the same disc, with LibreDrive info shown]

    I can see the LibreDrive info in both Windows (pics below) and the Linux/Unraid Docker container (above), and both report that it's enabled.

    But in the output in those screenshots, the Windows box has:

     

    MakeMKV v1.16.5 win(x64-release) started
    Using LibreDrive mode (v06.2 id=866A98CB9C4E)
    Using direct disc access mode


    but in the Unraid Docker container it doesn't seem to use LibreDrive mode:

     

    MakeMKV v1.16.5 linux(x64-release) started
    Using direct disc access mode
    Loaded content hash table, will verify integrity of M2TS files.
    Error 'Scsi error - ILLEGAL REQUEST: COPY PROTECTION KEY EXCHANGE FAILURE
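
    One thing I've read can cause this is the container seeing only the optical drive's sr block device and not its SCSI generic (sg) node, so I may try passing both through (a sketch; the device names and image name are assumptions, check lsscsi -g on the host):

    # Pass both the block device and the SCSI generic device for the drive
    # into the container (sr0/sg2 are assumptions; verify with lsscsi -g).
    docker run -d --device=/dev/sr0 --device=/dev/sg2 <makemkv-image>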

     

  3. I have 24 disks in Unraid (and another 6 cache drives) and decided I wanted to encrypt them, following this guide from our beloved neighborhood SpaceInvader One (thank you, you absolute legend!).
     


    I hit some errors/warnings saying I should run Docker Safe New Perms.

    I started it, got busy with other stuff, and came back the next day to realize it had still not finished processing the first disk...

    A few hours into the day, around noon, so going on over 24 hours, it moved on to disk 2 of 24 (well, I guess of 21: one "hot spare" and 2 parity drives).

    I have ~53 TB of data and about 18 TB of free space all considered.

    Am I really looking at ~21 days of letting this run, and needing to make sure I don't close the browser window with the open window the tool mentions?
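
    For reference, my understanding is that the tool does a recursive ownership and permission reset per share, roughly like the sketch below (not the plugin's exact script), which would explain why the runtime scales with file count rather than terabytes:

    # Roughly what New Permissions does for each share (a sketch).
    chown -R nobody:users /mnt/disk1/ShareName
    chmod -R u-x,go-rwx,go+u,ugo+X /mnt/disk1/ShareName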

     

  4. On 11/17/2020 at 7:29 PM, scd780 said:

    I have 2 Windows 10 clients hardwired on local gigabit LAN. Logs on one of them are showing an average speed of about 170 MBits/s during full image backups. This works out to about an hour and 15 min for 100 GBs of data. I'm writing straight to the array as well and I think it usually goes to my 5400 rpm drives in the array since they're the least full. Grafana shows my array disks getting hit with about 15MB/s from urbackup during backups. I also haven't had turbo write mode turned on for Unraid until yesterday so I might start to see higher speeds. Oh and I use the compressed format offered by urbackup which probably makes a difference! Seems this is only a single threaded workflow so speeds may vary depending on your CPU capabilities as well (I'm running a Ryzen 7 1700 Stock).

     

    Here are my docker settings in unraid, I had a couple other things I wanted to keep in my Backups share so that's why /media is directed to a subfolder:

     

    [screenshot: scd780's urbackup Docker settings in Unraid]

     

    Just a couple thoughts you may want to check: 

    • I have my minimum free space set at 300GB for my Backups share, what is yours at? This might have something to do with it but I doubt it since you said it's stuck on indexing...
    • Have you gone through the Client settings in the urbackup UI? 
      • This one was big for me, I could not get it to back anything up for a while because of default directories, excluded files, some Windows file paths it didn't like that were hidden deep in AppData, among other things I think... 
      • Would recommend using the documentation (link in WebUI) to populate all of your settings appropriately from the server side and then uncheck the option allowing clients to change settings. When I allowed clients to change settings and adjusted them client-side, it never seemed to want to sync up!

    I'd probably recommend removing the docker image for urbackup completely, cleaning out its spot on your array, uninstalling the client-side software and starting over with a clean slate! Had to do this when I added my second client since I screwed it up somehow. I can share pictures of my server side settings if you'd like as well. 

     

    *Disclaimer: I am definitely not an expert on this, just happen to have it running smoothly on my setup so results may vary lol


    It took me a while to get time to jump back to this, but over the last few days I ran some tests on all the drives in my Unraid server. I was able to write to all of them at the expected rates (a minimum of 50 MB/s; the rare slower results looked like something other than the test hitting the drive).
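
    For anyone curious, a quick way to sanity-check raw write speed outside of any container is something like this (a sketch; adjust the disk path and clean up the test file afterward):

    # Simple sequential write test directly against one array disk.
    dd if=/dev/zero of=/mnt/disk1/speedtest.tmp bs=1M count=2048 oflag=direct status=progress
    rm /mnt/disk1/speedtest.tmp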

    I removed the urbackup Docker container and its appdata folder, uninstalled the app from my desktop, then reinstalled the Docker container.

    I removed my previous "backup" share, or whatever I had called it, made a new share called "backup" with its minimum free space set to 300GB, and configured the urbackup container to point there when I installed it. I can't imagine this has anything to do with it, but I'll mention it just in case: my appdata dir is on an unassigned device (a 2TB NVMe SSD), so I have the container's config path set to RW/Slave instead of just Read/Write, as Fix Common Problems suggested to me (all my Dockers are like that). But the "/mnt/user/backup/" path mounted as /media in the urbackup container is just set up as "Read/Write".

    I reinstalled the urbackup client on my desktop, and I'm seeing the same issue again: sub-1MB/s writes.

    I'm a bit stumped here. I'm stepping away from the problem for a bit; if I think of anything else I'll report back. Hopefully someone might have some ideas. I have to guess there is something wrong with my Unraid array or config, but I'm not even sure where to begin.

  5. 3 hours ago, jbartlett said:

    Nothing looks wrong with the drive from these reports. A future update will add heat map support where the entire drive is read and the read speeds are represented in a color range. This might help see hidden trouble spots. That update is probably a couple months away though. Right now, I'm working on adding drive allocation & fragmentation maps which the heat maps will use to locate which files are contained in found bad spots.

     

    One thing you can try is to run a benchmark as given above but test the drive every 1%.

    That would be pretty slick; I look forward to testing it out.

    Since your advice yesterday I've been going back through and trying to test all the drives.

    I've found several that are getting that "speed gap" notice.

    I've tried a few times now to test them with "Disable Speed Gap detection" checked, and I still see reads start at 90-100+ MB/s, drop to 60-70, and then hit repeated "retries", even with that option checked.

  6. 11 hours ago, jbartlett said:

    Click on the Benchmark button on the main page and uncheck "Check all drives", set the checkbox for Disk 3, and set the checkbox for "Disable Speed Gap detection".

     

    If you frequently get a "retrying" on a spot, it's because the min/max speeds at that spot were over a certain threshold and it is retrying that spot again after increasing that threshold by 5 MB. Setting the Disable Speed Gap button disables this retry feature. This gap between the min/max speeds is typically caused by some other process accessing the drive at the same time, but it can also be caused by a bad spot. I plan on adding a cap to the number of retries.

     

    The newest version of this app also displays the SMART information for the drive which by default displays only catastrophic drive health values. Take a screen shot of that and post it. If it's not displaying it, then there's something wrong with the smart report as it gets it from UNRAID and UNRAID isn't saving it. View the drive info on the unraid Main tab to get the smart values.

    That seems to have done it.  Test for that drive finished following your suggestions.

    [screenshots: completed benchmark and SMART report for the drive]

    To be honest, I haven't looked at a SMART report closely in years and need to refresh my memory on what "bad" really looks like.
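
    As a refresher for myself, these are the attributes I plan to look at first (a sketch; replace sdX with the actual device):

    # The SMART attributes that most often indicate real trouble.
    smartctl -A /dev/sdX | grep -E 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error'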

  7. I suspect this is an issue with my system, and not with this container, but I'm hoping someone might be able to point me in the right direction here.

    I've had issues with very slow writes to my array (sub 1MB/s). At first I thought it was just the type of writes: lots of very small files from another CA Docker container (urbackup). But after nuking all its shares and existing backup data and starting over, thinking there was a config issue with that backup app, I decided to run this speed benchmark to check all the drives.

    I have 24x 3.5" drives in the 2TB and 6TB range in the array. When I start the benchmark, it seems to keep getting "stuck" testing Disk 3 (sdp).

     

    Most tests took on the order of a few minutes (maybe 5?), but this drive has been stuck at 36% for over half an hour. I don't see any reads being reported for that drive on the Main tab. If I reopen the web UI, it goes back to the main page where I can start another benchmark; doing so hangs at the same spot.
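
    To double-check from the console whether the drive is really idle during the hang, I've been watching its kernel I/O counters (a sketch):

    # The counters after the device name are cumulative reads/writes;
    # if they don't change between refreshes, nothing is touching sdp.
    watch -n 2 'grep " sdp " /proc/diskstats'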

    I've attached the diag from the main page below.  

    diskspeed_20201118_182405.tar.gz

    I'm afraid I have a bad drive or something, and worse, if that's the case, that Unraid hasn't given me any warning about it.

    *edit* To rule out as much background processing and disk I/O as possible, I stopped all of my other Docker containers and VMs (well, 2 of the 4 VMs I actually paused, but I don't think that should make a difference in this case).

  8. 19 hours ago, scd780 said:

    I have 2 Windows 10 clients hardwired on local gigabit LAN. Logs on one of them are showing an average speed of about 170 MBits/s during full image backups. This works out to about an hour and 15 min for 100 GBs of data. I'm writing straight to the array as well and I think it usually goes to my 5400 rpm drives in the array since they're the least full. Grafana shows my array disks getting hit with about 15MB/s from urbackup during backups. I also haven't had turbo write mode turned on for Unraid until yesterday so I might start to see higher speeds. Oh and I use the compressed format offered by urbackup which probably makes a difference! Seems this is only a single threaded workflow so speeds may vary depending on your CPU capabilities as well (I'm running a Ryzen 7 1700 Stock).

     

    Here are my docker settings in unraid, I had a couple other things I wanted to keep in my Backups share so that's why /media is directed to a subfolder:

     

    [screenshot: scd780's urbackup Docker settings in Unraid]

     

    Just a couple thoughts you may want to check: 

    • I have my minimum free space set at 300GB for my Backups share, what is yours at? This might have something to do with it but I doubt it since you said it's stuck on indexing...
    • Have you gone through the Client settings in the urbackup UI? 
      • This one was big for me, I could not get it to back anything up for a while because of default directories, excluded files, some Windows file paths it didn't like that were hidden deep in AppData, among other things I think... 
      • Would recommend using the documentation (link in WebUI) to populate all of your settings appropriately from the server side and then uncheck the option allowing clients to change settings. When I allowed clients to change settings and adjusted them client-side, it never seemed to want to sync up!

    I'd probably recommend removing the docker image for urbackup completely, cleaning out its spot on your array, uninstalling the client-side software and starting over with a clean slate! Had to do this when I added my second client since I screwed it up somehow. I can share pictures of my server side settings if you'd like as well. 

     

    *Disclaimer: I am definitely not an expert on this, just happen to have it running smoothly on my setup so results may vary lol

    Thanks for the advice.  I'll try starting again from scratch.  I know around the time I was first trying this I was also setting up Time Machine and some other stuff, and ended up filling my cache drive.  That was where I had appdata and all my VMs... they were not happy about that...  I've since moved all my appdata and VMs to a 2TB NVMe SSD, but it's possible some leftover config, either server- or client-side, is still messing stuff up.  So I'll nuke it all and start from scratch.

    I also found a Docker container in CA that will test each drive's reads and writes.  I'll turn off everything else and give that a try, as well as just try to move some files to the array via a Windows share.  That should give me some comparison between urbackup's speeds and what I can verify the system can do.  It may end up pointing out some bigger system problem... though I can't imagine what atm.

  9. For those who have completed backups with this from a Windows client to Unraid:

    How long did it take?

    What was the Docker app pointing at to store the backups: an unassigned device, the Unraid cache, or the Unraid array?

    I have this set up and it appears to be working, but my 500GB SSD on my Windows host has been running for nearly 24 hours now and is still listed in the web UI as "indexing".
     

    [screenshot: urbackup web UI showing the client stuck at "indexing"]

     

    I'm seeing writes to the array in the <1MB/s range

    [screenshot: Unraid Main tab showing sub-1MB/s writes]

    I have it set to a share that doesn't use the cache.

  10. I'm up to 70TB

    Building a custom case has been a bit of a nightmare... 100% due to my crap design, lol. Oh well, at least it looks nice and works well when it doesn't need maintenance. 

    Images from last weekend's replacement of a bad SATA cable...


    More pics of the initial build are up here:
    https://linustechtips.com/topic/353971-an-introduction-to-project-egor-the-never-ending-story/

    I've since replaced the motherboard.  The old SR-2 died folding for a cure for COVID :( 

  11. 25 minutes ago, Squid said:

    Switch to advanced and show more settings. There's going to be another path.

    Think I found it!

    My brain was thinking only the paths referencing the unassigned device needed to be changed.

    You were right. It was another path in each Docker, even ones not assigned to the unassigned device mount point, that needed to be changed. For some reason that never occurred to me; thanks for pointing me in the right direction!

     

  12. 19 minutes ago, Squid said:

    Switch to advanced and show more settings. There's going to be another path.

    Thanks Squid, I have been checking there.

    Here's what I'm seeing with advanced view switched on and after clicking the "more settings" dropdown:

    [screenshot: container template in advanced view]

    And when I check the AppData config path setting, RW/Slave is already set:

    [screenshot: edit dialog showing the AppData path set to RW/Slave]

  13. I'm not sure if I'm misunderstanding the errors the Fix Common Problems plugin is reporting here, or if it's possibly giving false reports.

    I moved my appdata, Docker image, and VMs to a new NVMe Unassigned Devices drive yesterday.

    I followed this guide here
    https://forums.serverbuilds.net/t/guide-move-your-docker-image-and-appdata-to-an-unassigned-drive/1478

    All seemed to work great.

    This morning Fix Common Problems is reporting errors for just a few of my docker containers, not all of them.

    [screenshot: Fix Common Problems errors for several containers]

    I had no idea what this slave option was, so I did some searching and found this:

     


    I then went through each of the Docker containers it was erroring about and checked the mount points that were using the unassigned device.

    But every one that I checked (all six Docker containers above) seems to already have the RW/Slave option set?

    For example, here's what I see configured for my Sonarr Docker:

    [screenshot: Sonarr container path settings showing RW/Slave]

     

    Might anyone have an idea what I'm missing here?
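
    For context, from what I've read, the RW/Slave option corresponds to Docker's slave mount propagation on the bind mount, along these lines (a sketch; the host path and image name are made up):

    # Slave propagation means mounts that appear on the host after the
    # container starts (e.g. an Unassigned Devices disk) still show up
    # inside the container.
    docker run -v /mnt/disks/nvme/appdata/sonarr:/config:rw,slave <sonarr-image>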
     

  14. Could someone explain a good way to make sure a given bay is connected correctly?  If it isn't, as in this case, is there a particular line I could look for in the syslog on boot?

    For example, if I reseat all the SATA cables for the attached drives, and I don't see "SError" in the syslog and the drives show up in the Unraid UI, is that an indication all is good?  Short of the drive itself being good, that is; I suppose it could still have SMART errors, etc.
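
    Something like this is what I had in mind for scanning the log after a reseat (a sketch; a clean boot should return nothing for the reseated port):

    # Scan the syslog for the SATA link errors seen earlier in this thread.
    grep -E 'SError|hard resetting link|failed command' /var/log/syslog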

    Because it's such a nightmare to take this thing apart, I want to take a known-good drive and check all the SATA connections.

  15. Well, I did a clean shutdown and reseated that drive, but I still see that error in the syslog.  I guess the other thing I can check is the SATA cables.

    Unfortunately, some bloody idiot (me.......) designed this custom case to be the most pain-in-the-ass thing to get access to that you can possibly imagine.

    [photos: the custom case interior]

    See https://linustechtips.com/topic/353971-an-introduction-to-project-egor-the-never-ending-story/

    I've since put in a new motherboard and CPU; that old SR-2 gave its last breath folding for a cure for COVID :(


    Ever since I put it back together, some things in the "basement" of that case have been... "wonky". I guess I'm going to have to finally take the time to tear it all apart to try to find a bad SATA cable. Wish me luck!

  16. 6 minutes ago, trurl said:

    Looks like connection problems with this cache disk:

    
    Nov  6 07:29:24 phoenix kernel: ata3.00: ATA-10: SPCC Solid State Disk, P1601544000000009646, V2.7, max UDMA/133
    ...
    Nov  6 07:30:29 phoenix kernel: ata3.00: exception Emask 0x10 SAct 0x4000000 SErr 0x400001 action 0x6 frozen
    Nov  6 07:30:29 phoenix kernel: ata3.00: irq_stat 0x08000000, interface fatal error
    Nov  6 07:30:29 phoenix kernel: ata3: SError: { RecovData Handshk }
    Nov  6 07:30:29 phoenix kernel: ata3.00: failed command: WRITE FPDMA QUEUED
    Nov  6 07:30:29 phoenix kernel: ata3.00: cmd 61/08:d0:40:00:02/00:00:00:00:00/40 tag 26 ncq dma 4096 out
    Nov  6 07:30:29 phoenix kernel:         res 40/00:d4:40:00:02/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
    Nov  6 07:30:29 phoenix kernel: ata3.00: status: { DRDY }
    Nov  6 07:30:29 phoenix kernel: ata3: hard resetting link
    
    

    Not directly related except for the amount of cache space you are wasting.

     

    Why do you have 100G docker.img? 20G is usually more than enough unless you have some app misconfigured so it is writing into docker.img instead of to mapped storage. Have you had problems filling docker.img? I have 17 dockers and they are using less than half of 20G docker.img

    Hmm, I have no idea why that would be 100G.  I do have a very large Plex library (running Plex in Docker), and it's not impossible that I misconfigured something along the way.

    Is there a good way for me to reset that?
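
    In the meantime, I'll check what's actually being used inside the image with something like this (a sketch; run from the Unraid console):

    # Summarize the space used by images, containers, and volumes.
    docker system df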

    It looks like that is the SSD I recently replaced.  I'll try to stop the array, shut down, and check its connection. 

    You mentioned cache space I'm wasting... even before I replaced that drive, the cache on the Main tab of Unraid has always listed 743 GB total space, and double-checking with a btrfs calculator, that seems to be correct.  Are you saying that even though it's reporting as "green" in the Unraid UI, with 743GB total space, no data can be written to that drive?  Also, where did you find that in the syslog, so I can check after I reseat the disk?

    Thanks for the assistance; it's greatly appreciated.


     

  17. This morning I found my cache had filled overnight; NZBGet was downloading a bunch of new files.  The mover was running, but had been running all night long.  I checked the syslog and saw errors about btrfs.  I didn't save the post, but searching pointed me to doing a btrfs rebalance after clearing some space on the drive.  I did that, then did a clean shutdown and reboot.  I started the mover and was getting up to 100MB/s.
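
    For reference, the rebalance I ran was along these lines (a sketch from memory; the usage filter keeps the balance from rewriting every chunk):

    # Rebalance btrfs data chunks on the cache pool after freeing space.
    btrfs balance start -dusage=75 /mnt/cache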

    Hours later, it slowed back to a crawl, between 300 and 800 KB/s.  I enabled mover logging, and it's been moving the same 3.3GB file for over an hour, so the reported write speeds seem to be accurate.

    I had to start work shortly after that, and needed to spin up an Ubuntu VM in Unraid to do some testing.  The install of that VM is taking ages; it took 20+ minutes for the Ubuntu installer to start booting.

    My appdata, VMs, and Docker image are on my btrfs cache pool... or at least as far as I know they are.  A week or so ago I replaced an SSD in that pool; to do so, I changed all my shares to not use the cache, let the mover move everything off, then changed the appdata, domain, and system shares back to "cache prefer" and ran the mover again to move it back.  It seemed to be running OK for a while.

    phoenix-diagnostics-20201106-1216.zip

    I'm hoping some kind soul with more experience than me could take a look at my attached diag files and help point me toward a possible solution.

    To prevent the cache filling from downloads/data ingest and breaking Docker/VMs, I just ordered a 2TB NVMe SSD that I'm planning to install and use as an unassigned device, just to hold the VM and Docker data.  (If you happen to know a guide on how to do that correctly, I'd appreciate a link; the few references I've found so far looked like they could be from the early days of v6 and may be out of date.)

  18. Messing around with it, I flipped the disks in the external USB desktop dock.  The identification strings for both drives in Unassigned Devices now appear to be the S/N of the 6TB drive, and the issues above are now the same, but for the 6TB drive instead of the 2TB drive.

    Maybe my USB enclosure is just kinda garbage and doing some wacky stuff with drive identification?

  19. I think I'm having an odd problem with the plugin, or I'm doing something that completely isn't supported and don't realize it...

    I have a 2-bay external USB 3 SATA desktop dock, something like this but a different brand:

    [photo: two-bay USB hard drive dock]


    I have several drives I wanted to test and preclear.  I put 2 in that unit, one 2TB drive and one 6TB drive.

    They were detected in Unassigned Devices, though... the ends of the names seem strangely familiar.

     

    [screenshot: Unassigned Devices showing the two docked drives with matching ID suffixes]

    (The third disk there, the one already preclearing, is a 6TB connected internally; it was connected and preclearing before I attached this USB dock.)

    I went into Tools > Preclear and started to preclear the 2TB drive, but when I did, the link on the right side to preclear the 6TB drive was no longer there.  I stopped the preclear on the 2TB drive.  Now when I click the preclear link on the lower 6TB drive (the one not already running, in the USB enclosure), it reports as the 2TB drive.

    [screenshot: Preclear dialog reporting the wrong drive]

    Any ideas what's going on here? I'm more than a bit confused.
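
    One thing I still want to check is what serial number the enclosure presents for each bay, since a USB bridge that reports the same serial for both slots would explain this (a sketch; sdX/sdY are placeholders for the two docked drives):

    # Compare the serials reported for each bay.
    udevadm info --query=property --name=/dev/sdX | grep -i serial
    udevadm info --query=property --name=/dev/sdY | grep -i serial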

  20. On 10/10/2020 at 3:02 PM, falconexe said:

    Well it’s most likely not Grafana because if it was a single internal setting, they should all log out around the same timeout. Very odd. I have my Grafana dashboards up 24/7 and the only time I ever have to log in again, is if I clear browser history (cookies). 🤷🏻‍♂️
     

    Try another computer.


    I'm seeing the same behavior on 2 other computers.  I've started trying to look at the Grafana logs but haven't found them yet.

    Opening the console for the Docker container, I don't see a grafana.log file inside /var/log/grafana; in fact, I don't see any files in there.  Maybe they aren't visible to the grafana user?

    /usr/share/grafana $ cd /var/log/grafana/
    /var/log/grafana $ ls -lh
    total 0
    /var/log/grafana $ 
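
    From what I've read, Grafana under Docker usually logs to stdout rather than to /var/log/grafana, so the container log may be the place to look (a sketch; the container name is an assumption):

    # Tail the container's stdout/stderr, where Grafana logs by default in Docker.
    docker logs --tail 100 grafana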


    I found a few topics that pointed me to a still-active bug in Grafana that may be related:

    https://github.com/grafana/grafana/issues/16638

    but that appears to be from way back in version 6.1.4; we look to be on 7.2.1.

  21. I tried another test: I had the same dashboard up in a normal Chrome window for right around 20 minutes and was kicked to the Grafana login screen.  This time I was running a Wireshark capture on the network traffic, thinking maybe I'd see the TCP/HTTP session get reset or something, but no such luck.

    I'll have to give this some more thought; I still can't figure out what could be causing this behavior.  I'll try on another machine shortly; maybe that will give some clue.
