Upgrade to current 2.0.6-dev head failed - SQLSTATE[28000] [1045] Access denied for user 'fast1'@'localhost' (using password: YES)

Note: I use "fast1" instead of "o1"

I was on a stable BOA 2.0.5, having problems cloning and migrating to a new platform. Those tasks put some of my sites into an "error" mode that I had never seen before (like a maintenance-mode page, but with the word "error" instead of a maintenance-mode message). In addition, neither my master barracuda control panel nor my "o1" octopus control panel would resolve (this happened suddenly this morning), so I:

1.) ran the barracuda upgrade "up-stable" (which appeared to work and gave me my master barracuda instance on Hostmaster 006)

2.) ran the octopus upgrade "up-stable all both", which failed - both "Upgrade A" and "Upgrade B" aborted.

I was, however, able to log in to my octopus control panel after this, but could not run any task (verify, flush caches, etc.) - I had to manually delete those tasks, which never ran.

With 5 of 16 sites down at this point, including 3 IDNs which didn't resolve on the www. subdomain, I ran:

3.) barracuda up-head

and

4.) octopus up-head all both, and had the same result -

the barracuda update appeared to work, giving me my master barracuda control panel on Hostmaster 007,

and the octopus up-head failed with the SQLSTATE error message:
SQLSTATE[28000] [1045] access denied for user 'fast1'@'localhost' using password: YES

When I logged into the barracuda master control panel, I saw that I could not run a backup of the master instance; a verify task on it did not run either, and I had to delete the task manually.

When I logged into the octopus control panel, I saw that a Verify: localhost task had failed with the same SQLSTATE error message given above (below is the relevant part of the task view; it contains the only error in the task):

"
Provision client home path /data/disk/fast1/clients exists.
Provision client home ownership of /data/disk/fast1/clients has been changed to fast1.
Provision client home permissions of /data/disk/fast1/clients have been changed to 750.
Provision client home path /data/disk/fast1/clients is writable.
SQLSTATE[28000] [1045] Access denied for user 'fast1'@'localhost' (using password: YES)
Unable to connect to database server.
cdn has no server config file
Command dispatch complete
"

Still unable to run any tasks (I tried deleting caches and running cron in both the barracuda master and the octopus control panels, and nothing helped), I noticed that in the barracuda master control panel I had the master.domain.com site up with Drupal 6.28 on Hostmaster 007, but in the octopus control panel I had the fast1.server1.domain.com site on Drupal 6.27 on Hostmaster 005; and while there is a Hostmaster 006 entry, it is not complete, has no Drupal version listed next to it, and has not been verified.

So I ran just the "octopus up-head all both" command again - and got the exact same error. The state is unchanged: in the barracuda master control panel, the master.domain.com site is up with Drupal 6.28 on Hostmaster 007; in the octopus control panel, the fast1.server1.domain.com site is on Drupal 6.27 on Hostmaster 005, and the incomplete Hostmaster 006 entry still has no Drupal version listed and has not been verified.

I would appreciate any advice on how to get the barracuda and the octopus "instances" on the same hostmaster number and the same drupal 6.28 version.

My /root/.USER.octopus.cnf is given below.

###
### Configuration created on 121116-1354 with
### Octopus version BOA-2.0.4
###
### NOTE: the group of settings displayed below
### will *override* all listed settings in the Octopus script.
###
_USER="fast1"
_MY_EMAIL="info@domain.com"
_PLATFORMS_LIST="ALL"
_ALLOW_UNSUPPORTED=YES
_AUTOPILOT=NO
_HM_ONLY=NO
_O_CONTRIB_UP=NO
_DEBUG_MODE=NO
_MY_OWNIP=correct-ip-address
_FORCE_GIT_MIRROR=""
_THIS_DB_HOST=localhost
_DNS_SETUP_TEST=NO
_HOT_SAUCE=NO
_USE_CURRENT=YES
_REMOTE_CACHE_IP=127.0.0.1
_LOCAL_NETWORK_IP=
_PHP_FPM_VERSION=5.3
_PHP_CLI_VERSION=5.3
_USE_STOCK=NO
###
### NOTE: the group of settings displayed below will be *overridden*
### by config files stored in the /data/disk/fast1/log/ directory,
### but only on upgrade.
###
_DOMAIN="fast1.server1.domain.com"
_CLIENT_EMAIL="info@domain.com"
_CLIENT_OPTION="SSD"
_CLIENT_SUBSCR="Y"
_CLIENT_CORES="8"
###
### Configuration created on 121116-1354 with
### Octopus version BOA-2.0.4
###

Attached files:
#35 latest-octopus-upgrade-terminal-copy.txt (16.01 KB) - Anonymous (not verified)
#8 barracuda_log.txt (1.85 KB) - Anonymous (not verified)
#8 octopus_log.txt (555 bytes) - Anonymous (not verified)
octopus_log.txt (555 bytes) - Anonymous (not verified)
barracuda_log.txt (1.64 KB) - Anonymous (not verified)

Comments

Anonymous’s picture

Issue summary: View changes

additional info

omega8cc’s picture

Please let us know the output of commands shown below - don't worry, all passwords are re-generated on every upgrade automatically:

cat /var/aegir/backups/system/.aegir_root.pass.txt
su -s /bin/bash - aegir -c "drush @hostmaster sqlq \"SELECT * FROM hosting_db_server\""
grep master_db /var/aegir/.drush/server_localhost.alias.drushrc.php
cat /data/disk/fast1/.fast1.pass.txt
su -s /bin/bash - fast1 -c "drush @hostmaster sqlq \"SELECT * FROM hosting_db_server\""
grep master_db /data/disk/fast1/.drush/server_localhost.alias.drushrc.php
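For context: these commands compare three copies of the same credential - the password file on disk, the row in the hostmaster database, and the password embedded (URL-encoded) in the drush alias - which must all agree for provision to connect. A rough consistency check along the same lines (an illustrative sketch, not a BOA tool):

stored="$(cat /data/disk/fast1/.fast1.pass.txt)"
in_alias="$(grep -o 'mysql://fast1:[^@]*' /data/disk/fast1/.drush/server_localhost.alias.drushrc.php)"
echo "file : ${stored}"
echo "alias: ${in_alias#mysql://fast1:} (URL-encoded)"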
Anonymous’s picture

The output is (thanks for your response):

server1:~# cat /var/aegir/backups/system/.aegir_root.pass.txt
?x
server1:~# su -s /bin/bash - aegir -c "drush @hostmaster sqlq \"SELECT * FROM hosting_db_server\""
vid nid db_user db_passwd
4 4 aegir_root ?x
server1:~# grep master_db /var/aegir/.drush/server_localhost.alias.drushrc.php
'master_db' => 'mysql://aegir_root:%3Fx@localhost',
server1:~# cat /data/disk/fast1/.fast1.pass.txt
S_¥±_¶¢
server1:~# su -s /bin/bash - fast1 -c "drush @hostmaster sqlq \"SELECT * FROM hosting_db_server\""
vid nid db_user db_passwd
4 4 fast1 S
server1:~# grep master_db /data/disk/fast1/.drush/server_localhost.alias.drushrc.php
'master_db' => 'mysql://fast1:S@localhost',
server1:~#
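Side note: the %3Fx in the alias is just the URL-encoded form of the ?x password - drush stores master_db as a URL, so reserved characters are percent-encoded. A quick way to confirm this, assuming the PHP CLI is installed:

php -r "echo rawurlencode('?x'), PHP_EOL;"   # prints %3Fx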

omega8cc’s picture

Component: Aegir Provision » Code
Priority: Normal » Critical

Thanks, that looks pretty bad. Let's start with some testing. Please try to run on the command line, *a few times*:

randpass 32 esc

Then a few times:

randpass 32 alnum

And post the output, so we can get a better idea of what is happening there.

Anonymous’s picture

server1:~# randpass 32 esc
b?v«PUúdVªE;dU4
server1:~# randpass 32 esc
5IZ»wh¦%4Vöd
server1:~# randpass 32 esc

server1:~# randpass 32 alnum
_
server1:~# randpass 32 alnum
LdR
server1:~# randpass 32 alnum

server1:~#

Anonymous’s picture

Just to clarify: on the third run of each command, the return was just a blank space with no text in it.

omega8cc’s picture

Thanks, so it looks like /dev/urandom is totally unreliable on your system for some reason. We need a better check to avoid this. I will post a repair how-to shortly. Stay tuned.
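A randpass-style generator typically just filters /dev/urandom through tr, as in this minimal sketch of the general technique (illustrative only - the actual BOA script lives in the project source and may differ, including the exact character set):

#!/bin/bash
# Sketch of a randpass-style generator: keep only wanted characters
# from /dev/urandom until the requested length is reached.
# LC_ALL=C makes tr treat the input as raw bytes, which avoids the
# locale/UTF-8 surprises suspected later in this thread.
len="${1:-32}"
mode="${2:-alnum}"
case "$mode" in
  alnum) allowed='a-zA-Z0-9' ;;
  esc)   allowed='a-zA-Z0-9.,:;%&()*+/<=>?_{|}~^!-' ;;  # guessed charset
  *) echo "usage: $0 [length] [alnum|esc]" >&2; exit 1 ;;
esac
LC_ALL=C tr -dc "$allowed" < /dev/urandom | head -c "$len"
echo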

omega8cc’s picture

Status: Active » Needs review

This should do the trick:

cd
rm -f BOA.sh.txt
wget -q -U iCab http://files.aegir.cc/BOA.sh.txt
bash BOA.sh.txt
syncpass fix aegir
syncpass fix fast1
barracuda up-head
octopus up-head fast1 both
Anonymous’s picture

Attached: octopus_log.txt (555 bytes), barracuda_log.txt (1.85 KB)

Hello (had to be away for a few hours).

I ran those exact commands and:

1) the "syncpass fix aegir" returned ?x (like before, in #2, above), and the "syncpass fix fast1 returned the same strange password as before, in #2, above)

2) The exact same thing happened: the barracuda upgrade seemed fine (although I noticed the barracuda.cnf file had MyISAM for the DB engine, which surprised me - I don't know if that is how it is supposed to work or not), and the octopus upgrade had the exact same SQLSTATE error. I list the relevant portions of the terminal output, with the only error at the end (I'm also uploading the current log files):

"
Drush bootstrap phase : _drush_bootstrap_drupal_root() [32.24 sec, 12.61 MB] [bootstrap]
Initialized Drupal 6.28 root directory at /data/disk/fast1/aegir/distro/006 [32.24 sec, 12.62 MB] [notice]
Drupal sites directory /data/disk/fast1/aegir/distro/006/sites is writable by the provisioning script [32.24 sec, 12.62 MB] [message]
Undefined variable: sites provision_drupal.drush.inc:432 [32.24 sec, 12.62 MB] [notice]
This platform is running drupal 6.28 [32.24 sec, 12.62 MB] [notice]
Found 36 modules in base [32.24 sec, 12.62 MB] [notice]
Found 2 themes in base [32.24 sec, 12.62 MB] [notice]
Found installation profile hostmaster [32.24 sec, 12.62 MB] [notice]
Found installation profile default [32.24 sec, 12.62 MB] [notice]
Found 50 modules in profiles/hostmaster [32.24 sec, 12.62 MB] [notice]
Found 1 themes in profiles/hostmaster [32.24 sec, 12.62 MB] [notice]
nginx has no platform config file [32.24 sec, 12.63 MB] [notice]
nginx on server1.domain.com has been restarted [32.24 sec, 12.63 MB] [notice]
cdn has no platform config file [32.24 sec, 12.63 MB] [notice]
Template loaded: /data/disk/fast1/.drush/provision/Provision/Config/provision_drushrc.tpl.php [32.24 sec, 12.63 MB] [notice]
Generated config Platform Drush configuration file [32.24 sec, 12.63 MB] [message]
Changed permissions of /data/disk/fast1/aegir/distro/006/drushrc.php to 444 [32.24 sec, 12.63 MB] [message]
Platforms path /data/disk/fast1/platforms exists. [32.24 sec, 12.63 MB] [message]
Platforms ownership of /data/disk/fast1/platforms has been changed to fast1. [32.24 sec, 12.63 MB] [message]
Platforms permissions of /data/disk/fast1/platforms have been changed to 711. [32.24 sec, 12.63 MB] [message]
Platforms path /data/disk/fast1/platforms is writable. [32.24 sec, 12.63 MB] [message]
Command dispatch complete [32.24 sec, 12.64 MB] [notice]
Peak memory usage was 11.82 MB [32.24 sec, 12.64 MB] [memory]
no crontab for fast1
Running: /data/disk/fast1/tools/drush/drush.php @hostmaster sqlq 'UPDATE {system} SET weight = 0 WHERE type='\''module'\'' AND name='\''hosting'\'';' --backend 2>&1 [32.24 sec, 12.06 MB] [command]
Bootstrap to phase 0. [32.35 sec, 13.7 MB] [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drush() [32.35 sec, 13.7 MB] [bootstrap]
Load alias @hostmaster [32.35 sec, 13.7 MB] [notice]
Loading drushrc "/data/disk/fast1/aegir/distro/005/sites/fast1.server1.domain.com/drushrc.php" into "site" scope. [32.35 sec, 13.7 MB] [bootstrap]
Bootstrap to phase 0. [32.35 sec, 13.7 MB] [bootstrap]
Found command: sql-query (commandfile=sql) [32.35 sec, 13.7 MB] [bootstrap]
Initializing drush commandfile: db [32.35 sec, 13.71 MB] [bootstrap]
Initializing drush commandfile: dns [32.35 sec, 13.71 MB] [bootstrap]
Initializing drush commandfile: drush_make [32.35 sec, 13.71 MB] [bootstrap]
Initializing drush commandfile: drush_make_d_o [32.35 sec, 13.71 MB] [bootstrap]
Initializing drush commandfile: example [32.35 sec, 13.71 MB] [bootstrap]
Initializing drush commandfile: http [32.35 sec, 13.71 MB] [bootstrap]
Initializing drush commandfile: provision [32.35 sec, 13.71 MB] [bootstrap]
Load alias @server_localhost [32.35 sec, 13.71 MB] [notice]
Load alias @server_master [32.35 sec, 13.71 MB] [notice]
Loading nginx driver for the http service [32.35 sec, 13.71 MB] [notice]
Loading nginx driver for the cdn service [32.35 sec, 13.72 MB] [notice]
Loading mysql driver for the db service [32.35 sec, 13.72 MB] [notice]
Loading nginx driver for the cdn service [32.35 sec, 13.72 MB] [notice]
Load alias @platform_005 [32.35 sec, 13.72 MB] [notice]
Initializing drush commandfile: provision_cdn [32.35 sec, 13.72 MB] [bootstrap]
Initializing drush commandfile: provision_civicrm [32.35 sec, 13.72 MB] [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drupal_root() [32.35 sec, 13.72 MB] [bootstrap]
Loading drushrc "/data/disk/fast1/aegir/distro/005/drushrc.php" into "drupal" scope. [32.35 sec, 13.72 MB] [bootstrap]
Initialized Drupal 6.27 root directory at /data/disk/fast1/aegir/distro/005 [32.35 sec, 13.72 MB] [notice]
Drush bootstrap phase : _drush_bootstrap_drupal_site() [32.35 sec, 13.72 MB] [bootstrap]
Initialized Drupal site fast1.server1.domain.com at sites/fast1.server1.domain.com [32.35 sec, 13.73 MB] [notice]
Loading drushrc "/data/disk/fast1/aegir/distro/005/sites/fast1.server1.domain.com/drushrc.php" into "site" scope. [32.35 sec, 13.73 MB] [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drupal_configuration() [32.35 sec, 13.73 MB] [bootstrap]
Command dispatch complete [32.35 sec, 13.73 MB] [notice]
Peak memory usage was 12.6 MB [32.35 sec, 13.73 MB] [memory]
Running: /data/disk/fast1/tools/drush/drush.php @hostmaster provision-migrate '@platform_006' --backend 2>&1 [32.35 sec, 12.11 MB] [command]
Bootstrap to phase 0. [32.44 sec, 13.75 MB] [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drush() [32.44 sec, 13.75 MB] [bootstrap]
Load alias @hostmaster [32.44 sec, 13.75 MB] [notice]
Loading drushrc "/data/disk/fast1/aegir/distro/005/sites/fast1.server1.domain.com/drushrc.php" into "site" scope. [32.44 sec, 13.76 MB] [bootstrap]
Bootstrap to phase 1. [32.44 sec, 13.76 MB] [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drupal_root() [32.44 sec, 13.76 MB] [bootstrap]
Loading drushrc "/data/disk/fast1/aegir/distro/005/drushrc.php" into "drupal" scope. [32.44 sec, 13.76 MB] [bootstrap]
Initialized Drupal 6.27 root directory at /data/disk/fast1/aegir/distro/005 [32.44 sec, 13.76 MB] [notice]
Found command: provision-migrate (commandfile=provision) [32.44 sec, 13.76 MB] [bootstrap]
Initializing drush commandfile: db [32.44 sec, 13.76 MB] [bootstrap]
Initializing drush commandfile: dns [32.44 sec, 13.76 MB] [bootstrap]
Initializing drush commandfile: drush_make [32.44 sec, 13.76 MB] [bootstrap]
Initializing drush commandfile: drush_make_d_o [32.44 sec, 13.76 MB] [bootstrap]
Initializing drush commandfile: example [32.44 sec, 13.77 MB] [bootstrap]
Initializing drush commandfile: http [32.44 sec, 13.77 MB] [bootstrap]
Initializing drush commandfile: provision [32.44 sec, 13.77 MB] [bootstrap]
Load alias @server_localhost [32.44 sec, 13.77 MB] [notice]
Load alias @server_master [32.45 sec, 13.77 MB] [notice]
Loading nginx driver for the http service [32.45 sec, 13.77 MB] [notice]
Loading nginx driver for the cdn service [32.45 sec, 13.77 MB] [notice]
Loading mysql driver for the db service [32.45 sec, 13.77 MB] [notice]
Loading nginx driver for the cdn service [32.45 sec, 13.77 MB] [notice]
Load alias @platform_005 [32.45 sec, 13.77 MB] [notice]
Initializing drush commandfile: provision_cdn [32.45 sec, 13.78 MB] [bootstrap]
Initializing drush commandfile: provision_civicrm [32.45 sec, 13.78 MB] [bootstrap]
Including /data/disk/fast1/.drush/provision_civicrm/migrate.provision.inc [32.45 sec, 13.78 MB] [bootstrap]
Including /data/disk/fast1/.drush/provision/db/migrate.provision.inc [32.45 sec, 13.78 MB] [bootstrap]
Including /data/disk/fast1/.drush/provision/http/migrate.provision.inc [32.45 sec, 13.78 MB] [bootstrap]
Including /data/disk/fast1/.drush/provision/platform/migrate.provision.inc [32.45 sec, 13.78 MB] [bootstrap]
SQLSTATE[28000] [1045] Access denied for user 'fast1'@'localhost' (using password: YES) [32.45 sec, 13.78 MB] [error]
Command dispatch complete [32.45 sec, 13.78 MB] [notice]
Peak memory usage was 11.89 MB [32.45 sec, 13.78 MB] [memory]
Command dispatch complete [32.45 sec, 12.1 MB] [notice]
Peak memory usage was 14.06 MB [32.45 sec, 12.1 MB] [memory]
Octopus [Mon Mar 25 22:40:40 CET 2013] ==> UPGRADE B: Hostmaster STATUS: upgrade completed
Octopus [Mon Mar 25 22:40:40 CET 2013] ==> UPGRADE B: Simple check if Aegir upgrade is successful
Octopus [Mon Mar 25 22:40:42 CET 2013] ==> UPGRADE B: FATAL ERROR: Required file /data/disk/fast1/aegir/distro/006/sites/fast1.server1.domain.com/settings.php does not exist
Octopus [Mon Mar 25 22:40:42 CET 2013] ==> UPGRADE B: FATAL ERROR: Aborting AegirSetupB installer NOW!
Octopus [Mon Mar 25 22:40:42 CET 2013] ==> UPGRADE A: FATAL ERROR: AegirSetupB installer failed
Octopus [Mon Mar 25 22:40:42 CET 2013] ==> UPGRADE A: FATAL ERROR: Aborting AegirSetupA installer NOW!
Octopus [Mon Mar 25 22:40:42 CET 2013] ==> FATAL ERROR: AegirSetupA installer failed
Octopus [Mon Mar 25 22:40:42 CET 2013] ==> FATAL ERROR: Aborting Octopus installer NOW!
Done for /data/disk/fast1
"

omega8cc’s picture

Status: Needs review » Needs work

One last attempt before I consider your server broken at some really low level, because I can't reproduce this anywhere.

Please post the result of a few attempts to run this command:

pwgen -v -s -1

Anonymous’s picture

Status: Needs work » Needs review

Additional info: I can log in to my fast1 octopus, and the verify localhost task still failed.

Here are the relevant entries from the platforms listing (in my Barracuda I am at Aegir Hostmaster 008 now):

Aegir Hostmaster 005 drupal 6.27 server1.domain.com 1 week ago 1
Aegir Hostmaster 006 None server1.domain.com Never 0

Do you want me to try it again, and this time manually remove the BOA.sh.txt file from the server to make sure I am using a fresh copy?

*** When I run the commands you gave in #1 above, the return is now:

server1:~# cat /var/aegir/backups/system/.aegir_root.pass.txt

server1:~# su -s /bin/bash - aegir -c "drush @hostmaster sqlq \"SELECT * FROM hosting_db_server\""
vid nid db_user db_passwd
4 4 aegir_root
server1:~# grep master_db /var/aegir/.drush/server_localhost.alias.drushrc.php
'master_db' => 'mysql://aegir_root:@localhost',
server1:~# cat /data/disk/fast1/.fast1.pass.txt
h&?]Hll_?*pGA_??
server1:~# su -s /bin/bash - fast1 -c "drush @hostmaster sqlq \"SELECT * FROM hosting_db_server\""
vid nid db_user db_passwd
4 4 fast1 h&?]Hll
server1:~# grep master_db /data/disk/fast1/.drush/server_localhost.alias.drushrc.php
'master_db' => 'mysql://fast1:h%26%CD%AC%5DHll@localhost',
server1:~#

Aren't characters that your change from earlier today (revising what you did on Feb 25) was supposed to filter out (e.g. the ] ) still being allowed in?

Lastly, running the commands you listed in #3 above now gives:
server1:~# randpass 32 esc

server1:~# randpass 32 esc
oK
server1:~# randpass 32 esc

server1:~# randpass 32 esc

server1:~# randpass 32 esc

server1:~# randpass 32 alnum
znuk
server1:~# randpass 32 alnum

server1:~# randpass 32 alnum
vWARglnS
server1:~# randpass 32 alnum
P72
server1:~# randpass 32 alnum
ÿ
server1:~#

The blank lines are just that in the terminal - blank lines.

Anonymous’s picture

As for my server being broken at some really low level - the only thing I've ever done with it is BOA: just the BOA install and the barracuda and octopus upgrades. Isn't it a problem that I now have the master barracuda on Aegir Hostmaster 008 and the fast1.server1.domain.com octopus on Aegir Hostmaster 005, and that there is this partially installed Aegir Hostmaster 006? (Perhaps I should simply erase it from my server - would that be a good idea?)

Here's the result of your #9:

server1:~# pwgen -v -s -1
5nQEr8gF
server1:~# pwgen -v -s -1
3tFHKiQh
server1:~# pwgen -v -s -1
XvOPQ06m
server1:~# pwgen -v -s -1
05QlcyPF
server1:~# pwgen -v -s -1
z3iSlHBc
server1:~#

Thanks for your help.

omega8cc’s picture

Status: Needs review » Needs work

OK, so there is hope, because at least pwgen -v -s -1 works fine. Now we need to figure out why the built-in check doesn't use it as a fallback (as designed) when both randpass 32 esc and randpass 32 alnum fail horribly. OK, back to the drawing board. This is effectively a release blocker (ugh).
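The fallback described here could be as simple as length-checking the result - a sketch under the assumption that randpass prints the password on stdout (the real built-in check may differ):

# If randpass output comes back shorter than requested (i.e. most
# bytes were filtered out as garbage), fall back to pwgen.
pass="$(randpass 32 alnum)"
if [ "${#pass}" -lt 32 ]; then
  pass="$(pwgen -v -s -1 32 1)"   # one secure 32-char password
fi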

omega8cc’s picture

By the way, what is your parent system exactly? Xen? And provider name? Maybe we could try to reproduce it there, since we can't reproduce this anywhere.

Anonymous’s picture

This server has never had anything but Debian 64-bit on it, and then BOA - it started with 2.0.3, I believe in October or November 2012. After a minimal Debian 64-bit install, I have never used apt or aptitude on it at all - never done any upgrading.

Anonymous’s picture

There is no Xen or anything like that - just a simple Debian 64-bit server (I think it was 6.0.2 that I installed 5 months ago).

It is Debian x64 Squeeze 6.0.7 now.

omega8cc’s picture

We are using the same version for live servers and for testing, and we didn't experience any such issues. That is why I'm trying to get more information from you.

ocean:~# uname -m
x86_64
ocean:~# lsb_release -a
No LSB modules are available.
Distributor ID:	Debian
Description:	Debian GNU/Linux 6.0.7 (squeeze)
Release:	6.0.7
Codename:	squeeze
ocean:~#
omega8cc’s picture

Could you also check whether you have a normal-looking password in /root/.my.cnf? And whether you can log in as root just by typing mysql on the command line.

omega8cc’s picture

Status: Needs work » Needs review

So instead of further guessing we have made the strong random passwords optional and no longer default:

http://drupalcode.org/project/octopus.git/commit/b268a78
http://drupalcode.org/project/octopus.git/commit/de429c5

First make sure that your mysql root access still works - don't run anything below until you are 500% sure that there are no issues when you run:

mysql -u root -e "FLUSH PRIVILEGES;"

or just

mysql

If there are no problems with mysql root access, please run *all* commands in the given order:

cd
rm -f BOA.sh.txt
wget -q -U iCab http://files.aegir.cc/BOA.sh.txt
bash BOA.sh.txt
echo "_STRONG_PASSWORDS=NO" >> /root/.barracuda.cnf
echo "_STRONG_PASSWORDS=NO" >> /root/.fast1.octopus.cnf
syncpass fix aegir
syncpass fix fast1
barracuda up-head
octopus up-head fast1 both

Let us know if that worked.

Anonymous’s picture

Hello. The return for your # 16 is:

server1:~# uname -m
x86_64
server1:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 6.0.7 (squeeze)
Release: 6.0.7
Codename: squeeze
server1:~#

Anonymous’s picture

The answer to your # 17 is:

I get a very bizarre-looking password with strange accented characters and mathematical superscripts:
ÖKÃxë6v°î÷ç¼6»ÏªÄxgä-öz¦Ûö½¹xxxx (last 3 or 4 characters changed for security).

When logged in as root, if I type mysql, I'm in - it does not ask me for a password at all. I can run MySQL commands from the command line without entering a MySQL password.

Anonymous’s picture

As for your # 17 and # 18, here is the server terminal return:

server1:~# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2049
Server version: 5.5.30-30.1 Percona Server (GPL), Release 30.1

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

It appears to be very insecure, with no password required. Is this a big security issue?

Anonymous’s picture

Furthermore as to your # 17 and # 18:

I cannot log in to chive with either the old 8-digit simple letter/number password or the very strange-looking new 32-character password.

Where is this ridiculous-looking 32-character password with all the strange characters coming from? I have never seen anything like it in 10 years of working with MySQL.

I did a google.com search for randpass (randpass 32 errors) and there are many of them on github, including one by "mehak" (think "me hack"), who states on his github site:

"
randpass is a simple psuedo-random password generator.

It is my first public program, so be easy on me.
"
see:
https://github.com/mehak/randpass

There's another one with the same name at:
https://github.com/amigorich/randpass

Are you sure your scripts aren't calling some kind of hack masquerading itself as "randpass"?

Anonymous’s picture

I wonder if this problem wouldn't be fixed if I added all locales to the server? Perhaps BOA uses hGetChar to read from /dev/urandom, and perhaps hGetChar tries to read a character according to the current locale, which can be UTF-8, and a random byte is not always a valid UTF-8 character...

Does the BOA install automatically install all locales?

I have never done anything to this server except follow the BOA instructions, so I did NOT do what I usually would: install all locales first, before adding other software (in this case BOA). Would adding all locales have any possible negative effect on this BOA server, which I need to keep running? Would you suggest that I try this? It doesn't seem to me that it could hurt anything - don't you agree?
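Checking and regenerating locales on Debian is quick and harmless, for what it's worth (standard Debian commands, not BOA-specific):

locale          # show the active locale settings
locale -a       # list every locale currently generated
dpkg-reconfigure locales   # interactively (re)generate locales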

Thanks for your help.

In your # 18, above, you state "... if there are no issues with MySQL root access ..." - well, I don't know whether I have an issue with MySQL root access or not. I can't log in to chive at all, but I seem to have MySQL root access without entering any password; for example, a show databases; command does list all the databases on the server.

Would you consider this OK (as regards MySQL root access) to proceed with your instructions in # 18, above?

Anonymous’s picture

You might consider making _STRONG_PASSWORDS=NO the default, because chive and sqlbuddy can't understand many of those strange characters, like the degree sign and mathematical superscripts. People won't be able to log in to their chive ... and who knows what corruption in the db tables might be occurring.

Anonymous’s picture

Well, I ran the commands you listed in # 18 and received this result (the barracuda upgrade seemed to go OK; however, after logging in to the master barracuda control panel I see there is no Aegir Hostmaster 009, as I would have expected. There is no failed verify: localhost task like before, but no tasks will run - I cannot verify the master.domain.com barracuda instance. As for the fast1.server1.domain.com octopus, I can log in and there is no failed verify: localhost task, but I cannot run any tasks and cannot verify the fast1.server1.domain.com "site", which is listed as not verified):

Octopus [Tue Mar 26 10:29:56 CET 2013] ==> UPGRADE B: Drush seems to be functioning properly
Octopus [Tue Mar 26 10:29:56 CET 2013] ==> UPGRADE B: Installing provision backend in /data/disk/fast1/.drush
Octopus [Tue Mar 26 10:30:02 CET 2013] ==> UPGRADE B: Downloading Drush and Provision extensions, please wait...
Octopus [Tue Mar 26 10:30:22 CET 2013] ==> UPGRADE B: Testing previous install...
Octopus [Tue Mar 26 10:30:22 CET 2013] ==> UPGRADE B: Hostmaster STATUS: upgrade start
Octopus [Tue Mar 26 10:30:24 CET 2013] ==> UPGRADE B: Running hostmaster-migrate, please wait...
Octopus [Tue Mar 26 10:31:12 CET 2013] ==> UPGRADE B: Hostmaster STATUS: upgrade completed
Octopus [Tue Mar 26 10:31:12 CET 2013] ==> UPGRADE B: Simple check if Aegir upgrade is successful
Octopus [Tue Mar 26 10:31:14 CET 2013] ==> UPGRADE B: FATAL ERROR: Required file /data/disk/fast1/aegir/distro/006/sites/fast1.server1.domain.com/settings.php does not exist
Octopus [Tue Mar 26 10:31:14 CET 2013] ==> UPGRADE B: FATAL ERROR: Aborting AegirSetupB installer NOW!
Octopus [Tue Mar 26 10:31:14 CET 2013] ==> UPGRADE A: FATAL ERROR: AegirSetupB installer failed
Octopus [Tue Mar 26 10:31:14 CET 2013] ==> UPGRADE A: FATAL ERROR: Aborting AegirSetupA installer NOW!
Octopus [Tue Mar 26 10:31:14 CET 2013] ==> FATAL ERROR: AegirSetupA installer failed
Octopus [Tue Mar 26 10:31:14 CET 2013] ==> FATAL ERROR: Aborting Octopus installer NOW!
Done for /data/disk/fast1

OCTOPUS upgrade completed
Bye
server1:~#

Anonymous’s picture

Although there is no Aegir Hostmaster 009 in the master barracuda control panel, it does exist in /var/aegir/host_master - but it does not exist in the MySQL db.

I AM able to login to chive using the newly generated password.

Anonymous’s picture

Lastly, two things I think might be important, that I ask for your comment on, please:

1) Is it a problem that my fast1.server1.domain.com octopus site (which is listed under Aegir Hostmaster 005) is "locked" and cannot be unlocked via the control panel? Would this prevent Aegir Hostmaster 006 from being properly installed? (Octopus has now failed 4 or 5 times to install Aegir Hostmaster 006, and it leaves empty directories named 006 on the server, and even one 006 directory that seems to contain all the files and folders of a Drupal install.) So I wonder if I need to delete those and "unlock" the octopus instance before I can successfully upgrade octopus? I looked manually in chive and could find no mention of an 006 in the octopus db at all.

2) I notice in the db (looking at things with chive) something I also notice on my server (looking via ftp): there is a hostmaster-BOA-2.0.4, for example in /var/aegir, but nowhere is there a hostmaster-BOA-2.0.5 - neither in files nor in the db - and I had been running BOA 2.0.5 fine until yesterday morning, it seems.

My question, and I request an answer, please, is: do you believe that neither of these 2 "conditions" of my BOA 2.0.5 server has anything to do with my inability to upgrade octopus to head successfully? I just ask you to confirm that neither of these facts (states or conditions of my BOA 2.0.5 server as it exists right now) could be preventing the successful octopus upgrade.

Thank you.

Anonymous’s picture

The problem may not be solved. I ran your instructions again and the barracuda upgrade failed with the same SQLSTATE error I started with:

Initialized Drupal 6.28 root directory at /var/aegir/host_master/008 [33.6 sec, 11.91 MB] [notice]
Drush bootstrap phase : _drush_bootstrap_drupal_site() [33.6 sec, 11.91 MB] [bootstrap]
Initialized Drupal site master.domain.com at sites/master.domain.com [33.6 sec, 11.91 MB] [notice]
Loading drushrc "/var/aegir/host_master/008/sites/master.domain.com/drushrc.php" into "site" scope. [33.6 sec, 11.91 MB] [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drupal_configuration() [33.6 sec, 11.91 MB] [bootstrap]
Command dispatch complete [33.6 sec, 11.91 MB] [notice]
Peak memory usage was 11.13 MB [33.6 sec, 11.92 MB] [memory]
Running: /var/aegir/drush/drush.php @hostmaster provision-migrate '@platform_009' --backend 2>&1 [33.6 sec, 10.83 MB] [command]
Bootstrap to phase 0. [33.69 sec, 11.93 MB] [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drush() [33.69 sec, 11.94 MB] [bootstrap]
Load alias @hostmaster [33.69 sec, 11.94 MB] [notice]
Loading drushrc "/var/aegir/host_master/008/sites/master.domain.com/drushrc.php" into "site" scope. [33.69 sec, 11.94 MB] [bootstrap]
Bootstrap to phase 1. [33.69 sec, 11.94 MB] [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drupal_root() [33.69 sec, 11.94 MB] [bootstrap]
Loading drushrc "/var/aegir/host_master/008/drushrc.php" into "drupal" scope. [33.69 sec, 11.94 MB] [bootstrap]
Initialized Drupal 6.28 root directory at /var/aegir/host_master/008 [33.69 sec, 11.94 MB] [notice]
Found command: provision-migrate (commandfile=provision) [33.69 sec, 11.94 MB] [bootstrap]
Initializing drush commandfile: db [33.69 sec, 11.94 MB] [bootstrap]
Initializing drush commandfile: dns [33.69 sec, 11.94 MB] [bootstrap]
Initializing drush commandfile: drush_make [33.69 sec, 11.95 MB] [bootstrap]
Initializing drush commandfile: drush_make_d_o [33.69 sec, 11.95 MB] [bootstrap]
Initializing drush commandfile: example [33.69 sec, 11.95 MB] [bootstrap]
Initializing drush commandfile: http [33.69 sec, 11.95 MB] [bootstrap]
Initializing drush commandfile: provision [33.69 sec, 11.95 MB] [bootstrap]
Load alias @server_localhost [33.69 sec, 11.95 MB] [notice]
Load alias @server_master [33.69 sec, 11.95 MB] [notice]
Loading nginx driver for the http service [33.69 sec, 11.95 MB] [notice]
Loading nginx driver for the cdn service [33.69 sec, 11.95 MB] [notice]
Loading mysql driver for the db service [33.69 sec, 11.95 MB] [notice]
Loading nginx driver for the cdn service [33.69 sec, 11.96 MB] [notice]
Load alias @platform_008 [33.69 sec, 11.96 MB] [notice]
Initializing drush commandfile: provision_cdn [33.69 sec, 11.96 MB] [bootstrap]
Including /var/aegir/.drush/provision/db/migrate.provision.inc [33.69 sec, 11.96 MB] [bootstrap]
Including /var/aegir/.drush/provision/http/migrate.provision.inc [33.69 sec, 11.96 MB] [bootstrap]
Including /var/aegir/.drush/provision/platform/migrate.provision.inc [33.69 sec, 11.96 MB] [bootstrap]
SQLSTATE[28000] [1045] Access denied for user 'aegir_root'@'localhost' (using password: YES) [33.69 sec, 11.96 MB] [error]
Command dispatch complete [33.69 sec, 11.96 MB] [notice]
Peak memory usage was 10.59 MB [33.69 sec, 11.96 MB] [memory]
Command dispatch complete [33.69 sec, 10.83 MB] [notice]
Peak memory usage was 12.14 MB [33.69 sec, 10.83 MB] [memory]
Barracuda [Tue Mar 26 11:57:13 CET 2013] ==> INFO: Running hosting-dispatch (1/3), please wait...
Barracuda [Tue Mar 26 11:57:25 CET 2013] ==> INFO: Running hosting-dispatch (2/3), please wait...
Barracuda [Tue Mar 26 11:57:31 CET 2013] ==> INFO: Running hosting-dispatch (3/3), please wait...
Barracuda [Tue Mar 26 11:57:34 CET 2013] ==> INFO: Aegir Master Instance upgrade completed
Barracuda [Tue Mar 26 11:57:35 CET 2013] ==> INFO: New secure random password for Percona generated and stored in /root/.my.pass.txt
Barracuda [Tue Mar 26 11:57:37 CET 2013] ==> INFO: New entry added to /var/log/barracuda_log.txt

Barracuda [Tue Mar 26 11:57:40 CET 2013] ==> CARD: Now charging your credit card for this automated upgrade service...
Barracuda [Tue Mar 26 11:57:46 CET 2013] ==> JOKE: Just kidding! Enjoy your Aegir Hosting System :)

Barracuda [Tue Mar 26 11:57:50 CET 2013] ==> Final post-upgrade cleaning, please wait a moment...
Barracuda [Tue Mar 26 11:57:53 CET 2013] ==> BYE!

BARRACUDA upgrade completed
Bye
server1:~#

Anonymous’s picture

So I ran the octopus upgrade per your instructions, and it failed again with the exact same error I listed in my post # 25.

I'm at a loss now. The only other thing I found in my research is that both /dev/random (especially and principally /dev/random) and /dev/urandom depend on a sufficient amount of server-generated entropy (randomness generated from server internal activity) in order to work properly. When they are used, the pool of server-generated entropy is drained as a security measure, so it can take a certain amount of time for the server to regenerate enough entropy for /dev/random and /dev/urandom to work properly; otherwise they fail. That is probably why, when I ran the randpass 32 tests for you yesterday, they got worse and worse the more I ran them. Perhaps just allowing alnum and punct with a length of 20 characters would work for everyone as a pretty strong password?
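The kernel's entropy estimate can be inspected directly, which would help test this theory - though note that, unlike /dev/random, /dev/urandom never blocks, so an empty pool alone should not make it return nothing:

cat /proc/sys/kernel/random/entropy_avail   # current estimate in bits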

Anonymous’s picture

So I waited a few hours, ran all the scripts in the BOA crontab, and ran barracuda up-head again. This time it gave no errors, Aegir Hostmaster 009 is showing in the master barracuda control panel, and it contains the master.domain.com site. I ran a verify on the master.domain.com site and it verified with no errors!

Anonymous’s picture

Then I waited 5 minutes and ran octopus up-head fast1 both, and it failed with the same message about the /data/disk/fast1/aegir/distro/006/sites/fast1.server1.domain.com/settings.php file not being found. This time there was an additional error message in the terminal about the Pressflow site being off-line due to technical problems - complete with the site's source code in the terminal window; the text suggests checking whether the db server is running or not.

MySQL is running. After restarting MySQL I got the octopus control panel back (before the restart it showed the same Pressflow error message as the terminal window). The octopus control panel shows no stuck tasks, with the Aegir Hostmaster 005 (no 006), which is not verified.

I ran a verify task on the site fast1.server1.domain.com, and it gets stuck without processing. Same for the Aegir Hostmaster 005 platform - still on Drupal 6.27.

What to do?

omega8cc’s picture

Delete all "waiting" tasks from the queue, and then run:

su -s /bin/bash - fast1 -c "drush @hostmaster hosting-task @server_master verify --force -d"
su -s /bin/bash - fast1 -c "drush @hostmaster hosting-task @server_localhost verify --force -d"

If this doesn't help to get things running, try another repair/upgrade, as shown below:

Please enable debug mode and attach the full output of octopus up-head fast1 both, but first run the password sync again - it should restore the simple passwords, which are the default now:

cd
rm -f BOA.sh.txt
wget -q -U iCab http://files.aegir.cc/BOA.sh.txt
bash BOA.sh.txt
syncpass fix fast1
octopus up-head fast1 both

By the way, both randpass and syncpass are *our own* simple bash scripts, which you can easily see in the project source.

Also, it is correct that you don't need to enter the mysql root password on the command line, because it is supplied for you via the standard /root/.my.cnf file.
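For reference, a typical /root/.my.cnf looks like this (the values here are placeholders):

[client]
user=root
password="your-mysql-root-password-here"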

omega8cc’s picture

Note that BOA/octopus can fix the failed upgrade only if there are at most 2 (two) extra/zombie platforms added while the site sits in some previous directory.

The octopus up-head fast1 both command will be able to fix situation like this:

/data/disk/fast1/aegir/distro/006/sites/ (empty)
/data/disk/fast1/aegir/distro/005/sites/ (empty)
/data/disk/fast1/aegir/distro/004/sites/fast1.server1.domain.com (live site is here)

The octopus up-head fast1 both command will FAIL to fix situation like this:

/data/disk/fast1/aegir/distro/007/sites/ (empty)
/data/disk/fast1/aegir/distro/006/sites/ (empty)
/data/disk/fast1/aegir/distro/005/sites/ (empty)
/data/disk/fast1/aegir/distro/004/sites/fast1.server1.domain.com (live site is here)

This means it is better to simply delete any zombie platforms with a serial number higher than the one where the site really is now.
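A quick way to see which platform directory actually holds the live site, and which are zombie candidates (an illustrative loop, assuming the standard layout shown above):

# Print the contents of each platform's sites/ directory; a platform
# whose sites/ holds no domain directory is a zombie candidate.
for d in /data/disk/fast1/aegir/distro/[0-9]*/sites; do
  echo "== $d"
  ls "$d" | grep -v -e '^all$' -e '^default$' -e '\.php$'
done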

Anonymous’s picture

Response to your # 32:

I ran:

su -s /bin/bash - fast1 -c "drush @hostmaster hosting-task @server_master verify --force -d"
su -s /bin/bash - fast1 -c "drush @hostmaster hosting-task @server_localhost verify --force -d"

and the master barracuda instance, which was running tasks OK, now will not run any task.

When I logged in to the fast1.server1.domain.com octopus instance, it had a task to verify every platform, and none of those tasks are working.

So I will delete all waiting tasks from both barracuda and octopus and try your next bit of advice.

Anonymous’s picture

OK, so the second set of instructions in your # 32 gave the exact same error. (By the way, in /data/disk/fast1/aegir/distro/ I have only 001, 005 {where the site really is now}, and a newly created 006, which does not have a settings.php file but does have a default.settings.php file.)

I attach the part of the terminal display that I was able to copy and paste. If you tell me where I can find the entire output, I will upload it.

Anonymous’s picture

In /data/disk/fast1/distro/ I have 10 folders: 001, 002, all the way up to 010, and only 001 has anything in it - all the platforms that BOA includes, including the unsupported ones. Is there anything wrong with this (I ask because of your # 33)?

omega8cc’s picture

Component: Code » Miscellaneous
Category: bug » support
Priority: Critical » Normal
Status: Needs review » Postponed (maintainer needs more info)

I must admit that I'm a bit lost already, because these issues are unique to your system and we couldn't reproduce them on any other server we manage - and we manage a lot of them, with different versions of Ubuntu and Debian, on physical machines, on different VPS systems based on different virtualization technologies, at different providers, etc. It works just fine *everywhere* we have tried it, and good, working, strong passwords are always generated properly, as shown below:

ocean:~# randpass 32 esc
Hrh5l1&tWr6>wxAivR=qn!/7xV
ocean:~# randpass 32 esc
X8%6<814uUb*9kyE.b6Q}!0&2RejBd
ocean:~# randpass 32 esc
u<A^^bJsRJQgfSY;e}|R[gaU44>H%f
ocean:~# randpass 32 esc
l7PD%W]e%RqAX4fSq_XScN7/Hrps
ocean:~# randpass 32 esc
iDflA-UP[3SsA:i1Y-jdUbYX4&pBk~4
ocean:~# randpass 32 esc
Rrt2WAMe4EGt:4wfg<ud?G*+<qaf~3m
ocean:~# randpass 32 esc
n4F)Xo0rbbOPWmkhzbG)D<QVh_WPoHjc
ocean:~# randpass 32 esc
As/8q<3sKwqGKS+tHr9BT<S=/xaia6
ocean:~# randpass 32 esc
d-r-!&aGLJ4B4R;2<^1l/3bLu:g:S
ocean:~# randpass 32 esc
tP,D}+DAUOrl?kQ6;KI<~uF2xxPq
ocean:~# randpass 32 esc
Q7ZhH;A&o:&ow}S&!_N[p1j8F;Lo+i
ocean:~# randpass 32 esc
t;MFL5qE!Lo|vxdBh%ok6eI[!+0v}c
ocean:~# randpass 32 esc
YcG>}?f*/Z<,-5DgfpESUyf&d,1Y
ocean:~# randpass 32 esc
R30sp*Zcp.8iTsVn-6c;zGno.]L~m
ocean:~#
ocean:~#
ocean:~# randpass 32 alnum
Ly6ut2Og9hXoa7uxXkHX4qd35JdTeLNW
ocean:~# randpass 32 alnum
IjhB1oIOpI39JWcbO0aIgyeACUJwYLbU
ocean:~# randpass 32 alnum
ecylyhcaZEqTqLJUWlNx2JQJwE6RtXqw
ocean:~# randpass 32 alnum
muCPpzf7eZMYBdzu5coJe4vpmsMPDQwh
ocean:~# randpass 32 alnum
bKHN290WxOLmFjzsdpcbStsISIJd2zAD
ocean:~# randpass 32 alnum
Ce5jDUhcNypMujcHsueBvMQBeVn8T3tk
ocean:~# randpass 32 alnum
YXjgvKyKklW4DTAGA5HfiQfqOrFZPVDi
ocean:~# randpass 32 alnum
mwgElIf9pT6hqzjw7H6YdLLNvB0lOYAP
ocean:~# randpass 32 alnum
o6w5la8Q5PK7hADzv9imfrqSjfQQ94vn
ocean:~# randpass 32 alnum
XuLg1bGQiZdHRWMeMl1OolHIqUTfQzaJ
ocean:~# randpass 32 alnum
aJv6uWX0zXsNFFb3TAZL5NUReky5r9s7
ocean:~#

We have already provided ready-to-use scripts to repair/sync the passwords damaged by your attempt to upgrade to head; if used properly, they should get you back into simple-passwords mode without further issues.

If this doesn't help, we have reached our limits. Feel free to contact us for further paid assistance.

As a last attempt, we could just take a look at your system if you add our keys and confirm the IP to access the server:

mkdir -p /root/.ssh
cd /root/.ssh
wget -q -U iCab http://omega8.cc/dev/keys/authorized_keys.txt
cat authorized_keys.txt >> authorized_keys
omega8cc’s picture

Removing the tag as it is not a release blocker at this stage.

Anonymous’s picture

Hello,

1. Now that you have repaired the faulty way of generating a randpass (which I was unlucky enough to have to discover), yes, it does work better:

server1:~# randpass 32 esc
&le}RHK<_wz-r},TM2=&C|K_6BM;l
server1:~# randpass 32 esc
VcIxa6/&IQe43aT|X?XJZ+&R1HL%%G&
server1:~# randpass 32 esc
!wO3C535T/VeAD_Y=
server1:~# randpass 32 esc
HdQJAi)nx9wu2p6=_]t_|RKz]?T.>Mpf
server1:~# randpass 32 esc
2&jXLwHlv9,c87.1;2g=kOk5L}?odU
server1:~# randpass 32 esc
ijLjP,)4cS/mEW;g|r:-|py?5tol0
server1:~# randpass 32 esc
2nR[3*-&DT:ns:XE?b=l^]QS9W3Z>z
server1:~# randpass 32 esc
q7x70=Vsc3xiMcb>D4n,2/uGqPe*
server1:~# randpass 32 esc
)Q[&I_0gg[TjD:mG,6n[DwI?
server1:~# randpass 32 esc
m:c[H1g,K4
server1:~# randpass 32 esc
zWd3?OTMS%I9eS?P5e|I2M1S2V/+08Tn
server1:~# randpass 32 alnum
t9Ahs6P3AAtGWMPXG7N4JgaYKR5FX0Hm
server1:~# randpass 32 alnum
TXE6YlYfOlVqMjkdBHaEgz563scgC0O7
server1:~# randpass 32 alnum
0lucf3nzDRrH87OKrn1wq0A7IvyCJZF1
server1:~# randpass 32 alnum
f3N8oI2mOB1Fu5aqplBMsJmuWokqbnWu
server1:~# randpass 32 alnum
U21iiBzRPGktMNbYvrWZq4IfmJZ1xItU
server1:~# randpass 32 alnum
2Dot8smWwK8FeAWPyv7I4ZVzg9dpWPNv
server1:~# randpass 32 alnum
ONrIeY7MKrIRdvoMbWKKisqNRQsjSX4h
server1:~# randpass 32 alnum
2PhhpdfErbujfnDGxOnHsIKJtMwDOmYX
server1:~# randpass 32 alnum
CDlmmJOG7jaw5R9W9NYTfZAYD6WJfeBn
server1:~# randpass 32 alnum
Q0RqZBbI5FPVcClhbIY71Lewm7icFxFN
server1:~#

See, now that it's fixed, it works on our server too - which is a private corporate server that I cannot give you access to.

Any problems we're having now are a result of your faulty script. So I'll keep trying to get barracuda and octopus to run tasks (including cron), I'll keep trying to upgrade octopus to 6.28 so it matches the 6.28 in barracuda, and I'll still wonder whether the conditions I mentioned in # 27, which you pointedly did not answer, are relevant or not.

Glad to have been of help to the community.

omega8cc’s picture

Status: Postponed (maintainer needs more info) » Closed (cannot reproduce)

Wait, what do you mean by

"have repaired a faulty way of generating a randpass"

?

We didn't modify our randpass tool at all, so if it started to work as expected, something must have changed on your system, I guess.

We appreciate your feedback, and it certainly helped us add another layer of protection against unexpected behaviour; but since we couldn't reproduce this issue anywhere, I have changed it from a bug report to a support request, since it looks like something unique to your system, or something else we can't verify without access to your system.

To answer your other questions and some false assumptions:

Are you sure your scripts aren't calling some kind of hack masquerading itself as "randpass"?

Yes, we know what our scripts are doing. It was just a naming coincidence.

1) Is it a problem that my fast1.server1.domain.com octopus site (which is listed under Aegir Hostmaster 005) is "locked" and cannot be unlocked via the control panel? Would this prevent Aegir Hostmaster 006 from being properly installed? (Octopus has now failed 4 or 5 times to install Aegir Hostmaster 006, and it leaves empty directories named 006 on the server, and even one 006 directory that seems to contain all the files and folders of a Drupal install.) So I wonder if I need to delete those and "unlock" the octopus instance before I can successfully upgrade octopus? I looked manually in chive and could find no mention of an 006 in the octopus db at all.

No, this is unrelated and correct behaviour - all hostmaster platforms are locked to prevent using them for non-hostmaster sites, by design.

2) I notice in the db (looking at things with chive) something I also notice on my server (looking via ftp): there is a hostmaster-BOA-2.0.4, for example in /var/aegir, but nowhere is there a hostmaster-BOA-2.0.5 - neither in files nor in the db - and I had been running BOA 2.0.5 fine until yesterday morning, it seems.

I don't understand this question. Please explain what you mean here.

My question, and I request an answer, please, is: do you believe that neither of these 2 "conditions" of my BOA 2.0.5 server has anything to do with my inability to upgrade octopus to head successfully? I just ask you to confirm that neither of these facts (states or conditions of my BOA 2.0.5 server as it exists right now) could be preventing the successful octopus upgrade.

No, it is totally unrelated.

Your only problem, which caused all that mess, is that for some reason our randpass script was not able to generate proper passwords and produced binary garbage instead - and then it auto-magically started working properly, while we didn't change that script at all.

If we can't access this server, we can't verify anything or offer further assistance, but we are glad you reported it anyway, since we have added better fallback logic for such an edge case and also provided a handy new script to sync passwords.

Thank you for your cooperation.

omega8cc’s picture

Issue summary: View changes

spelling corrections