When querying by a custom search attribute with tctl or from the Web UI, we get INVALID_ARGUMENT: invalid query: unable to convert filter expression: unable to convert left part of comparison expression: filter by 'BusinessKey1' not supported for standard visibility in one environment, but with the same configuration the other environment works. Any idea how to debug this? Both environments use Temporal Java SDK 1.18.1, Temporal Server 1.18.0, and Temporal UI 2.6.1.
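For reference, this is the kind of query that fails (a sketch; the attribute value is a placeholder):

tctl workflow list --query "BusinessKey1='some-value'"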
Can you compare (and share if possible) your static configs (you can bash into one of your service pods and look at /etc/temporal/config/docker.yaml) please?
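If the services run on Kubernetes, something like this pulls the file out of each environment (a sketch; the pod name is a placeholder):

kubectl exec <frontend-pod> -- cat /etc/temporal/config/docker.yaml

Diff the two outputs, paying particular attention to the persistence and visibility sections.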
Thanks for the information. It worked.
I’m getting the same error. Here is the failure from the Python client:
temporalio.service.RPCError: invalid query: unable to convert filter expression: unable to convert left side of "AccountId = 'xxx'": filter by 'AccountId' not supported for standard visibility
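A minimal reproduction with the Python SDK looks roughly like this (a sketch; the target address, namespace, and attribute value are placeholders):

import asyncio
from temporalio.client import Client

async def main() -> None:
    # Connect to the affected cluster (placeholder address and namespace).
    client = await Client.connect("temporal-frontend:7233", namespace="default")
    # On standard visibility this raises the RPCError above, because
    # filtering on a custom search attribute requires advanced visibility.
    async for workflow in client.list_workflows("AccountId = 'xxx'"):
        print(workflow.id)

asyncio.run(main())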
I have added the search attribute (by shelling into the Temporal frontend container):
temporal-frontend-769db9864b-jf8r5:/etc/temporal$ temporal operator search-attribute create --name AccountId --type Text
Search attributes already exist
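You can confirm what the cluster actually has registered (name and type of each attribute) with:

temporal operator search-attribute list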
And this works on my local machine using the Temporal dev server. Here is the /etc/temporal/config/docker.yaml:
persistence:
  defaultStore: default
  visibilityStore: visibility
  numHistoryShards: 512
  datastores:
    default:
      sql:
        pluginName: "postgres"
        driverName: "postgres"
        databaseName: "temporal"
        connectAddr: "XXX:5432"
        connectProtocol: "tcp"
        user: XXX
        password: "XXX"
        maxConnLifetime: 1h
        maxConns: 20
        secretName: ""
    visibility:
      sql:
        pluginName: "postgres"
        driverName: "postgres"
        databaseName: "temporal_visibility"
        connectAddr: "XXX:5432"
        connectProtocol: "tcp"
        user: "XXX"
        password: "XXX"
        maxConnLifetime: 1h
        maxConns: 20
        secretName: ""
global:
  membership:
    name: temporal
    maxJoinDuration: 30s
    broadcastAddress: 10.1.0.165
  pprof:
    port: 7936
  metrics:
    tags:
      type: frontend
    prometheus:
      timerType: histogram
      listenAddress: "0.0.0.0:9090"
services:
  frontend:
    rpc:
      grpcPort: 7233
      membershipPort: 6933
      bindOnIP: "0.0.0.0"
  history:
    rpc:
      grpcPort: 7234
      membershipPort: 6934
      bindOnIP: "0.0.0.0"
  matching:
    rpc:
      grpcPort: 7235
      membershipPort: 6935
      bindOnIP: "0.0.0.0"
  worker:
    rpc:
      grpcPort: 7239
      membershipPort: 6939
      bindOnIP: "0.0.0.0"
clusterMetadata:
  enableGlobalDomain: false
  failoverVersionIncrement: 10
  masterClusterName: "active"
  currentClusterName: "active"
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "XXX"
      rpcAddress: "127.0.0.1:7933"
dcRedirectionPolicy:
  policy: "noop"
  toDC: ""
archival:
  status: "disabled"
publicClient:
  hostPort: "temporal-frontend:7233"
dynamicConfigClient:
  filepath: "/etc/temporal/dynamic_config/dynamic_config.yaml"
I’ve made sure the schemas are up to date, Postgres is version 14, and the following is the output of inspecting the cluster with the Temporal CLI:
jack@terminal temporal % temporal operator cluster describe
ClusterName  PersistenceStore  VisibilityStore
active       postgres          postgres
jack@terminal temporal % temporal operator cluster system
ServerVersion  SupportsSchedules  UpsertMemo
1.22.2         true               true
jack@terminal temporal %
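To see which schema a visibility database is actually on, you can also query the schema_version table that the Temporal SQL tooling maintains (assuming the standard schema layout; host and user are placeholders):

psql -h <host> -U <user> temporal_visibility -c "SELECT db_name, curr_version FROM schema_version;"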
As I was writing this, I figured it out. My database was using the v96 schema instead of v12, and advanced visibility is only available with the postgres12 plugin. The numbering really confused me (intuitively, 96 > 12?), but the versions are not sequential: they refer to the minimum PostgreSQL version each schema targets (9.6 vs 12). I updated the Temporal visibility database to the v12 schema (version 1.3), changed the plugin name from pluginName: "postgres" to pluginName: "postgres12", and all is well.
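For completeness, the upgrade amounts to roughly the following (a sketch; endpoint, credentials, and the schema directory shipped with your server release are placeholders, and the v12 layout is set up fresh rather than migrated in place from v96):

temporal-sql-tool --plugin postgres12 --ep <host> -p 5432 -u <user> --pw <password> --db temporal_visibility setup-schema -v 0.0
temporal-sql-tool --plugin postgres12 --ep <host> -p 5432 -u <user> --pw <password> --db temporal_visibility update-schema -d ./schema/postgresql/v12/visibility/versioned

After that, point persistence.datastores.visibility.sql.pluginName at "postgres12" and restart the services.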