Same database user for core and visibility?

I noticed that the config_template.yaml used to dockerize the container sets the same env vars for both the core and the visibility database under the postgres config. I tried modifying that template to use a different user for the visibility database, but startup fails. Does the code re-use the username/password from the core DB settings to connect to visibility?

The error I get when starting frontend:

    {"level":"info","ts":"2021-03-19T17:06:40.418Z","msg":"Starting server for services","value":"[frontend]","logging-call-at":"server.go:110"}
    Unable to start server: sql schema version compatibility check failed: unable to read DB schema version keyspace/database: connect_automation_visibility error: pq: relation "schema_version" does not exist.

The schema_version table does indeed exist, and is owned by the user I specified in the config.

    connect_automation_visibility=> \d
                                        List of relations
                Schema             |          Name          | Type  |            Owner
    -------------------------------+------------------------+-------+------------------------------
     connect_automation_visibility | executions_visibility  | table | connectautomationvisibility
     connect_automation_visibility | flyway_schema_history  | table | connectautomationvisibility
     connect_automation_visibility | schema_update_history  | table | connectautomationvisibility
     connect_automation_visibility | schema_version         | table | connectautomationvisibility
     public                        | pg_buffercache         | view  | postgres
     public                        | pg_stat_statements     | view  | postgres
    (6 rows)
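For completeness, the table can also be queried directly as the visibility user. This is just a sketch of that check; the host, port, and credentials are placeholders for my setup:

    # Rough sketch: verify the visibility user can actually read schema_version.
    # Host, port, user, and database name are placeholders for my setup.
    psql "host=postgres.internal port=5432 user=connectautomationvisibility dbname=connect_automation_visibility" \
         -c "SELECT * FROM schema_version;"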

This is the template section I modified, adding .Env.POSTGRES_VIS_USER as the source for visibility.sql.user, with a similar change for the password:

    {{- else if eq $db "postgresql" }}
    default:
        sql:
            pluginName: "postgres"
            databaseName: "{{ default .Env.DBNAME "temporal" }}"
            connectAddr: "{{ default .Env.POSTGRES_SEEDS "" }}:{{ default .Env.DB_PORT "5432" }}"
            connectProtocol: "tcp"
            user: "{{ default .Env.POSTGRES_USER "" }}"
            password: "{{ default .Env.POSTGRES_PWD "" }}"
            maxConns: {{ default .Env.SQL_MAX_CONNS "20" }}
            maxIdleConns: {{ default .Env.SQL_MAX_IDLE_CONNS "20" }}
            maxConnLifetime: {{ default .Env.SQL_MAX_CONN_TIME "1h" }}
    visibility:
        sql:
            pluginName: "postgres"
            databaseName: "{{ default .Env.VISIBILITY_DBNAME "temporal_visibility" }}"
            connectAddr: "{{ default .Env.POSTGRES_SEEDS "" }}:{{ default .Env.DB_PORT "5432" }}"
            connectProtocol: "tcp"
            user: "{{ default .Env.POSTGRES_VIS_USER "" }}"
            password: "{{ default .Env.POSTGRES_VIS_PWD "" }}"
            maxConns: {{ default .Env.SQL_VIS_MAX_CONNS "10" }}
            maxIdleConns: {{ default .Env.SQL_VIS_MAX_IDLE_CONNS "10" }}
            maxConnLifetime: {{ default .Env.SQL_VIS_MAX_CONN_TIME "1h" }}
    {{- end }}
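For context, this is roughly how the variables get passed into the container. POSTGRES_VIS_USER / POSTGRES_VIS_PWD are the custom names I added above, the rest come from the stock template, and all of the values here are placeholders:

    # Sketch of the container invocation (image name and values are placeholders)
    docker run --rm \
        -e DBNAME=connect_automation \
        -e VISIBILITY_DBNAME=connect_automation_visibility \
        -e POSTGRES_SEEDS=postgres.internal \
        -e DB_PORT=5432 \
        -e POSTGRES_USER=connectautomation \
        -e POSTGRES_PWD=******** \
        -e POSTGRES_VIS_USER=connectautomationvisibility \
        -e POSTGRES_VIS_PWD=******** \
        my-temporal-server-image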

It seems like the code doing the version check might be assuming the same user as core, but I thought I'd check before diving into a bunch of work combining our migration images so that both databases are created with the same user.

make sure DBNAME and VISIBILITY_DBNAME are different
make sure both databases have the schema installed

the server should be using the correct user / pass for schema version validation:
ref: https://github.com/temporalio/temporal/blob/v1.7.0/common/persistence/sql/version_checker.go#L66

schema setup ref: https://github.com/temporalio/temporal/blob/v1.7.0/Makefile#L335
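if the visibility database is missing the schema, something along these lines should install it. This is only a rough sketch based on the Makefile target above; double-check the flags against your temporal-sql-tool version, and the host, credentials, and schema paths are placeholders:

    # Core database schema (core credentials)
    ./temporal-sql-tool --plugin postgres --ep $POSTGRES_SEEDS -p 5432 \
        -u $POSTGRES_USER --pw $POSTGRES_PWD --db $DBNAME setup-schema -v 0.0
    ./temporal-sql-tool --plugin postgres --ep $POSTGRES_SEEDS -p 5432 \
        -u $POSTGRES_USER --pw $POSTGRES_PWD --db $DBNAME \
        update-schema -d ./schema/postgresql/v96/temporal/versioned

    # Visibility database schema (separate visibility credentials)
    ./temporal-sql-tool --plugin postgres --ep $POSTGRES_SEEDS -p 5432 \
        -u $POSTGRES_VIS_USER --pw $POSTGRES_VIS_PWD --db $VISIBILITY_DBNAME setup-schema -v 0.0
    ./temporal-sql-tool --plugin postgres --ep $POSTGRES_SEEDS -p 5432 \
        -u $POSTGRES_VIS_USER --pw $POSTGRES_VIS_PWD --db $VISIBILITY_DBNAME \
        update-schema -d ./schema/postgresql/v96/visibility/versioned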

one more thing to check: can you dump this file?
/etc/temporal/config/docker.yaml
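e.g. something along these lines (the container name is a placeholder):

    docker exec <your-temporal-container> cat /etc/temporal/config/docker.yaml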


@Wenquan_Xing thank you for that suggestion to check docker.yaml!

The Docker image I was producing had been copying the custom config_template.yaml to the wrong location, so it wasn't picked up by the dockerize command and docker.yaml never got updated.

After fixing that, it's working as expected. The services are able to connect to both the core and visibility databases in Postgres, with different credentials for each.
