| Name | Type | Default | Description |
| -- | -- | -- | -- |
| `GUC_CAGG_HIGH_WORK_MEM_NAME` | `INTEGER` | `GUC_CAGG_HIGH_WORK_MEM_VALUE` | The high working memory limit for the continuous aggregate invalidation processing.<br />min: `64`, max: `MAX_KILOBYTES` |
| `GUC_CAGG_LOW_WORK_MEM_NAME` | `INTEGER` | `GUC_CAGG_LOW_WORK_MEM_VALUE` | The low working memory limit for the continuous aggregate invalidation processing.<br />min: `64`, max: `MAX_KILOBYTES` |
| `auto_sparse_indexes` | `BOOLEAN` | `true` | Hypertable columns that are used as index keys get suitable sparse indexes when compressed. Must be set at the moment of chunk compression, for example when `compress_chunk()` is called. |
| `bgw_log_level` | `ENUM` | `WARNING` | Log level for the scheduler and workers of the background worker subsystem. Requires configuration reload to change. |
| `cagg_processing_wal_batch_size` | `INTEGER` | `10000` | Number of WAL entries processed per batch. Larger values use more memory but may be more efficient.<br />min: `1000`, max: `10000000` |
| `compress_truncate_behaviour` | `ENUM` | `COMPRESS_TRUNCATE_ONLY` | Defines how truncate behaves at the end of compression. `truncate_only` forces truncation. `truncate_disabled` deletes rows instead of truncating. `truncate_or_delete` allows falling back to deletion. |
| `compression_batch_size_limit` | `INTEGER` | `1000` | Setting this option to a number between 1 and 999 forces compression to limit compressed batches to that many uncompressed tuples. Setting it to 0 uses the maximum batch size of 1000.<br />min: `1`, max: `1000` |
| `compression_orderby_default_function` | `STRING` | `"_timescaledb_functions.get_orderby_defaults"` | Function to use for calculating default order_by setting for compression |
| `compression_segmentby_default_function` | `STRING` | `"_timescaledb_functions.get_segmentby_defaults"` | Function to use for calculating default segment_by setting for compression |
| `current_timestamp_mock` | `STRING` | `NULL` | For debugging purposes only |
| `debug_allow_cagg_with_deprecated_funcs` | `BOOLEAN` | `false` | For debugging and testing purposes only |
| `debug_bgw_scheduler_exit_status` | `INTEGER` | `0` | For debugging purposes only<br />min: `0`, max: `255` |
| `debug_compression_path_info` | `BOOLEAN` | `false` | For debugging and informational purposes only |
| `debug_have_int128` | `BOOLEAN` | `true` if built with `HAVE_INT128` | For debugging purposes only |
| `enable_bulk_decompression` | `BOOLEAN` | `true` | Increases throughput of decompression, but might increase query memory usage |
| `enable_cagg_reorder_groupby` | `BOOLEAN` | `true` | Enable group by clause reordering for continuous aggregates |
| `enable_cagg_sort_pushdown` | `BOOLEAN` | `true` | Enable pushdown of ORDER BY clause for continuous aggregates |
| `enable_cagg_wal_based_invalidation` | `BOOLEAN` | `false` | Use WAL to track changes to hypertables for continuous aggregates. This is not meant for production use. |
| `enable_cagg_watermark_constify` | `BOOLEAN` | `true` | Enable constifying cagg watermark for real-time caggs |
| `enable_cagg_window_functions` | `BOOLEAN` | `false` | Allow window functions in continuous aggregate views |
| `enable_chunk_append` | `BOOLEAN` | `true` | Enable using chunk append node |
| `enable_chunk_auto_publication` | `BOOLEAN` | `false` | Enable automatically adding newly created chunks to the publication of their hypertable |
| `enable_chunk_skipping` | `BOOLEAN` | `false` | Enable using chunk column stats to filter chunks based on column filters |
| `enable_chunkwise_aggregation` | `BOOLEAN` | `true` | Enable the pushdown of aggregations to the chunk level |
| `enable_columnarindexscan` | `BOOLEAN` | `false` | Enable experimental support for returning results directly from compression metadata without decompression |
| `enable_columnarscan` | `BOOLEAN` | `true` | Transparently decompress columnar data using ColumnarScan custom node. Disabling columnar scan will ignore data stored in columnar format in queries. |
| `enable_compressed_direct_batch_delete` | `BOOLEAN` | `true` | Enable direct batch deletion in compressed chunks |
| `enable_compressed_skipscan` | `BOOLEAN` | `true` | Enable SkipScan for distinct inputs over compressed chunks |
| `enable_compression_indexscan` | `BOOLEAN` | `false` | Enable indexscan during compression, if matching index is found |
| `enable_compressor_batch_limit` | `BOOLEAN` | `false` | Enable compressor batch limit for compressors which can go over the allocation limit (1 GB). This feature will limit those compressors by reducing the size of the batch and thus avoid hitting the limit. |
| `enable_constraint_aware_append` | `BOOLEAN` | `true` | Enable constraint exclusion at execution time |
| `enable_constraint_exclusion` | `BOOLEAN` | `true` | Enable planner constraint exclusion |
| `enable_custom_hashagg` | `BOOLEAN` | `false` | Enable creating custom hash aggregation plans |
| `enable_decompression_sorted_merge` | `BOOLEAN` | `true` | Enable the merge of compressed batches to preserve the compression order by |
| `enable_delete_after_compression` | `BOOLEAN` | `false` | Delete all rows after compression instead of truncate |
| `enable_deprecation_warnings` | `BOOLEAN` | `true` | Enable warnings when using deprecated functionality |
| `enable_direct_compress_insert` | `BOOLEAN` | `false` | Enable experimental support for direct compression during INSERT |
| `enable_direct_compress_insert_client_sorted` | `BOOLEAN` | `false` | The user is responsible for correctly sorting the data when enabling this option. |
| `enable_direct_compress_insert_sort_batches` | `BOOLEAN` | `true` | Enable batch sorting during direct compress INSERT |
| `enable_direct_compress_on_cagg_refresh` | `BOOLEAN` | `false` | Enable experimental support for direct compression during Continuous Aggregate refresh |
| `enable_dml_decompression` | `BOOLEAN` | `true` | Enable DML decompression when modifying compressed hypertable |
| `enable_dml_decompression_tuple_filtering` | `BOOLEAN` | `true` | Recheck tuples during DML decompression to only decompress batches with matching tuples |
| `enable_event_triggers` | `BOOLEAN` | `false` | Enable event triggers for chunk creation |
| `enable_ordered_append` | `BOOLEAN` | `true` | Enable ordered append optimization for queries that are ordered by the time dimension |
| `enable_parallel_chunk_append` | `BOOLEAN` | `true` | Enable using parallel aware chunk append node |
| `enable_partitioned_hypertables` | `BOOLEAN` | `false` | Enable experimental support for creating hypertables using PostgreSQL's native declarative partitioning |
| `enable_qual_filtering` | `BOOLEAN` | `true` | Filter qualifiers on chunks when complete chunk would be included by filter |
| `enable_qual_propagation` | `BOOLEAN` | `true` | Enable propagation of qualifiers in JOINs |
| `enable_rowlevel_compression_locking` | `BOOLEAN` | `false` | Use only if you know what you are doing |
| `enable_runtime_exclusion` | `BOOLEAN` | `true` | Enable runtime chunk exclusion in ChunkAppend node |
| `enable_skipscan_for_distinct_aggregates` | `BOOLEAN` | `true` | Enable SkipScan for DISTINCT aggregates |
| `enable_sparse_index_bloom` | `BOOLEAN` | `true` | This sparse index speeds up equality queries on compressed columns and can be disabled when not desired. |
| `enable_tiered_reads` | `BOOLEAN` | `true` | Enable reading of tiered data by including a foreign table representing the data in the object storage into the query plan |
| `enable_transparent_decompression` | `BOOLEAN` | `true` | Enable transparent decompression when querying hypertable |
| `enable_tss_callbacks` | `BOOLEAN` | `true` | Enable ts_stat_statements callbacks |
| `enable_uuid_compression` | `BOOLEAN` | `true` | Enable UUID compression |
| `enable_vectorized_aggregation` | `BOOLEAN` | `true` | Enable vectorized aggregation for compressed data |
| `last_tuned` | `STRING` | `NULL` | Records the last time timescaledb-tune ran |
| `last_tuned_version` | `STRING` | `NULL` | Version of timescaledb-tune used to tune |
| `license` | `STRING` | `TS_LICENSE_DEFAULT` | Determines which features are enabled |
| `max_cached_chunks_per_hypertable` | `INTEGER` | `1024` | Maximum number of chunks stored in the cache<br />min: `0`, max: `65536` |
| `max_open_chunks_per_insert` | `INTEGER` | `1024` | Maximum number of open chunk tables per insert<br />min: `0`, max: `PG_INT16_MAX` |
| `skip_scan_run_cost_multiplier` | `REAL` | `1.0` | Multiplier for the estimated SkipScan run cost. `1.0` uses the regularly estimated cost; `0.0` gives SkipScan a run cost of 0.<br />min: `0.0`, max: `1.0` |
| `telemetry_level` | `ENUM` | `TELEMETRY_DEFAULT` | Level used to determine which telemetry to send |
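
These options are ordinary PostgreSQL GUCs exposed under the `timescaledb.` prefix, so they can be changed per session with `SET` or persisted cluster-wide with `ALTER SYSTEM`. A minimal sketch, using GUC names from the table above (behavior and availability may vary by TimescaleDB version):

```sql
-- Session-level: disable the chunk append node for this session only.
SET timescaledb.enable_chunk_append = off;

-- Inspect the current value.
SHOW timescaledb.enable_chunk_append;

-- Cluster-wide: persist a setting (requires superuser), then reload.
ALTER SYSTEM SET timescaledb.max_cached_chunks_per_hypertable = 2048;
SELECT pg_reload_conf();
```

Note that some settings, such as `bgw_log_level`, only take effect after a configuration reload, as described in the table.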

Version: [2.25.0][2250]

[2250]: https://github.com/timescale/timescaledb/releases/tag/2.25.0