Hi. I'm losing the microseconds on DateTime64(6, ...) columns when a row is written through clickhouse_fdw.
Example
CREATE TABLE signals.test ( `timestamp` DateTime64(6, 'Asia/Istanbul'), `from` String ) ENGINE = TinyLog;
I wrote two rows there: one directly from ClickHouse, the second from PostgreSQL over clickhouse_fdw.
This query was executed in clickhouse-client:
INSERT INTO signals.test VALUES ('2019-01-01 00:00:00.123456', 'written_by_clickhouse');
This one in postgresql:
INSERT INTO signals.test VALUES ('2019-01-01 00:00:00.123456', 'written_by_postgres');
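For context, the report omits the PostgreSQL-side setup. A typical clickhouse_fdw mapping might look roughly like the sketch below; the server name, options, and column types are illustrative assumptions, not taken from the report:

```sql
-- Hypothetical setup sketch; adjust server options to your environment.
CREATE SERVER clickhouse_svr FOREIGN DATA WRAPPER clickhouse_fdw
    OPTIONS (dbname 'signals');

CREATE USER MAPPING FOR CURRENT_USER SERVER clickhouse_svr;

-- timestamp(6) on the PostgreSQL side keeps microsecond precision,
-- matching DateTime64(6, ...) on the ClickHouse side.
CREATE FOREIGN TABLE signals.test (
    "timestamp" timestamp(6),
    "from" text
) SERVER clickhouse_svr;
```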
Result of SELECT * FROM signals.test:

┌──────────────────timestamp─┬─from──────────────────┐
│ 2019-01-01 00:00:00.123456 │ written_by_clickhouse │
│ 2019-01-01 00:00:00.000000 │ written_by_postgres   │
└────────────────────────────┴───────────────────────┘
So the microsecond part of the timestamp is lost for the row written over clickhouse_fdw.
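The truncation to .000000 is consistent with the timestamp being serialized at whole-second precision somewhere on the FDW write path. As an illustration of that failure mode only (this is not the actual clickhouse_fdw code), in Python terms:

```python
from datetime import datetime

ts = datetime(2019, 1, 1, 0, 0, 0, 123456)

# A seconds-only format pattern silently drops the fractional part ...
seconds_only = ts.strftime("%Y-%m-%d %H:%M:%S")
# ... while including %f preserves the full microsecond precision.
full = ts.strftime("%Y-%m-%d %H:%M:%S.%f")

print(seconds_only)  # 2019-01-01 00:00:00
print(full)          # 2019-01-01 00:00:00.123456
```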