32 changes: 32 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,35 @@
# dbt_netsuite v1.4.0-a2

## Schema/Data Change
**1 total change • 1 possible breaking change**

| Data Model(s) | Change type | Old | New | Notes |
| ------------- | ----------- | --- | --- | ----- |
| [int_netsuite2__tran_with_converted_amounts](https://fivetran.github.io/dbt_netsuite/#!/model/model.netsuite.int_netsuite2__tran_with_converted_amounts) | Materialization | Table (all warehouses except BigQuery)<br>Ephemeral (BigQuery) | Ephemeral (all warehouses) | **Breaking change**: Reverts the materialization change from v1.4.0-a1. The model now uses ephemeral materialization for all warehouse platforms, simplifying configuration and reducing storage overhead. Removes warehouse-specific materialization logic and partitioning configuration. |
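
If you relied on the previous table materialization of this intermediate model, you can still override it in your own project. A minimal sketch, mirroring the folder-level override pattern from the DECISIONLOG excerpt later in this PR (note this applies to all netsuite2 intermediate models, so narrow the path if you only want to change this one):

```yml
models:
  netsuite:
    netsuite2:
      intermediate:
        +materialized: table # package default is now ephemeral
```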

# dbt_netsuite v1.4.0-a1

[PR #189](https://github.com/fivetran/dbt_netsuite/pull/189) includes the following updates:

## Schema/Data Change
**3 total changes • 2 possible breaking changes**

| Data Model(s) | Change type | Old | New | Notes |
| ------------- | ----------- | --- | --- | ----- |
| [netsuite2__balance_sheet](https://fivetran.github.io/dbt_netsuite/#!/model/model.netsuite.netsuite2__balance_sheet)<br>[netsuite2__income_statement](https://fivetran.github.io/dbt_netsuite/#!/model/model.netsuite.netsuite2__income_statement)<br>[netsuite2__transaction_details](https://fivetran.github.io/dbt_netsuite/#!/model/model.netsuite.netsuite2__transaction_details) | Materialization | Incremental (PostgreSQL, Redshift, Snowflake)<br>Table (Bigquery, Databricks, Spark) | Table (all warehouses by default) | **Breaking change**: PostgreSQL, Redshift, and Snowflake users will see the materialization change from incremental to table by default. To restore incremental materialization, set `netsuite2__using_incremental: true`. This change provides consistent default behavior across all warehouse platforms and gives users explicit control over incremental materialization. |
| [netsuite2__balance_sheet](https://fivetran.github.io/dbt_netsuite/#!/model/model.netsuite.netsuite2__balance_sheet)<br>[netsuite2__income_statement](https://fivetran.github.io/dbt_netsuite/#!/model/model.netsuite.netsuite2__income_statement)<br>[netsuite2__transaction_details](https://fivetran.github.io/dbt_netsuite/#!/model/model.netsuite.netsuite2__transaction_details) | Configuration | `cluster_by: ['transaction_id']` | No clustering | **Breaking change**: Removes `cluster_by` configuration to improve model build performance. Clustering overhead on high-cardinality keys was causing performance degradation. Partitioning by `_fivetran_synced_date` provides sufficient query optimization. |
| [int_netsuite2__tran_with_converted_amounts](https://fivetran.github.io/dbt_netsuite/#!/model/model.netsuite.int_netsuite2__tran_with_converted_amounts) | Materialization | Ephemeral (all warehouses) | Table (all warehouses except BigQuery) | Materializes as a table for PostgreSQL, Redshift, Snowflake, Databricks, and Spark with partitioning by `_fivetran_synced_date`. BigQuery remains ephemeral. This update is intended to improve overall build performance. |

## Feature Update
- Introduces the `netsuite2__using_incremental` variable to provide simplified control over incremental materialization for the Netsuite2 end models. Users can now enable incremental materialization with a single variable instead of configuring each model individually. When enabled, the models use the `merge` strategy for Bigquery, Databricks, and Spark and the `delete+insert` strategy for PostgreSQL, Redshift, and Snowflake, as shown in the sketch below. See the [README](https://github.com/fivetran/dbt_netsuite?tab=readme-ov-file#enabling-incremental-materialization-netsuite2-only) for configuration details.
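
A minimal `dbt_project.yml` sketch enabling the variable (matching the configuration documented in the README changes in this PR):

```yml
vars:
  netsuite:
    netsuite2__using_incremental: true # false by default; materializes the end models incrementally
```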

## Under the Hood
- Removes redundant join condition `and transactions_with_converted_amounts.source_relation = transactions_with_converted_amounts.source_relation` in `netsuite2__transaction_details` for improved code clarity.
- Updates integration test seed data with current date values and additional test records.

## Documentation
- Updates README and DECISIONLOG to document the new `netsuite2__using_incremental` variable and incremental materialization strategies for different warehouse platforms.

# dbt_netsuite v1.3.0

[PR #187](https://github.com/fivetran/dbt_netsuite/pull/187) includes the following updates:
7 changes: 3 additions & 4 deletions DECISIONLOG.md
@@ -24,11 +24,10 @@ models:
netsuite2:
intermediate:
+materialized: [table or view]
```

## Incremental Strategy Selection

For incremental models, we have chosen the `delete+insert` strategy for PostgreSQL, Redshift, and Snowflake destinations.
For the Netsuite2 end models (`netsuite2__balance_sheet`, `netsuite2__income_statement`, `netsuite2__transaction_details`), incremental materialization is disabled by default to avoid unexpected warehouse costs and build times. Users can opt in to incremental materialization by setting `netsuite2__using_incremental: true`. For instructions on how to enable incremental materialization, see the [README](https://github.com/fivetran/dbt_netsuite?tab=readme-ov-file#enabling-incremental-materialization-netsuite2-only).

For Bigquery and Databricks, we have turned off incremental strategy by default since we did not want to cause unexpected warehouse costs for users. If you choose to enable the incremental materialization for these destinations, we have set it up to use the `merge` strategy. For instructions on how to enable the incremental strategy, see the [README](https://github.com/fivetran/dbt_netsuite?tab=readme-ov-file#adding-incremental-materialization-for-bigquery-and-databricks).

These strategies were selected since transaction records can be updated retroactively, and `merge` and `delete+insert` work well since they rely on a unique id to identify records to update or replace.
When enabled, we use the `merge` strategy for Bigquery, Databricks, and Spark destinations, and the `delete+insert` strategy for PostgreSQL, Redshift, and Snowflake destinations.
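
For reference, a simplified sketch of the resulting config in the end models (condensed from the model diffs further down in this PR):

```sql
{# Condensed from the netsuite2 end-model configs; each model also sets its own unique_key #}
{{
    config(
        materialized = 'incremental' if var('netsuite2__using_incremental', false) else 'table',
        incremental_strategy = 'merge' if target.type in ('bigquery', 'databricks', 'spark') else 'delete+insert'
    )
}}
```
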
31 changes: 15 additions & 16 deletions README.md
@@ -120,7 +120,7 @@ Include the following netsuite package version in your `packages.yml` file:
```yaml
packages:
- package: fivetran/netsuite
version: [">=1.3.0", "<1.4.0"]
version: "1.4.0-a2"
```
### Step 3: Define Netsuite.com or Netsuite2 Source
As of April 2022, Fivetran released a new Netsuite connector version which leverages the Netsuite2 endpoint as opposed to the original Netsuite.com endpoint. This package is designed to run on one or the other, not both. By default, the `netsuite_data_model` variable for this package is set to the original `netsuite` value, which runs the Netsuite.com version of the package. If you would like to run the package on Netsuite2 data, you may adjust the `netsuite_data_model` variable to run the `netsuite2` version of the package, as in the sketch below.
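
A minimal sketch, assuming a global `vars` entry in your `dbt_project.yml`:

```yml
vars:
  netsuite_data_model: netsuite2 # default is netsuite, which runs the Netsuite.com models
```
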
@@ -340,31 +340,30 @@ vars:
#### Override the data models variable
This package is designed to run **either** the Netsuite.com or Netsuite2 data models. However, for documentation purposes, an additional variable, `netsuite_data_model_override`, was created to allow both data model types to be run at the same time by setting the variable value to `netsuite`. This is only to ensure the [dbt docs](https://fivetran.github.io/dbt_netsuite/) (which are hosted on this repository) are generated for both model types. While this variable is provided, we recommend you do not adjust it and instead change the `netsuite_data_model` variable to fit your configuration needs.
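
For context, the Netsuite2 models gate themselves on these two variables; the condition below is copied from the model configs shown later in this diff, so a Netsuite2 model builds only when `netsuite_data_model` matches `netsuite_data_model_override`:

```sql
{# Enablement condition from the netsuite2 model configs in this PR #}
{{
    config(
        enabled = var('netsuite_data_model', 'netsuite') == var('netsuite_data_model_override', 'netsuite2')
    )
}}
```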

#### Lookback Window
Records from the source can sometimes arrive late. Since several of the models in this package are incremental, by default we look back 3 days from the `_fivetran_synced_date` of transaction records to ensure late arrivals are captured and avoiding the need for frequent full refreshes. While the frequency can be reduced, we still recommend running `dbt --full-refresh` periodically to maintain data quality of the models.
#### Enabling incremental materialization (Netsuite2 only)
Since pricing and runtime priorities vary by customer, by default we materialize the below models as tables. For more information on this decision, see the [Incremental Strategy section](https://github.com/fivetran/dbt_netsuite/blob/main/DECISIONLOG.md#incremental-strategy-selection) of the DECISIONLOG.

To change the default lookback window, add the following variable to your `dbt_project.yml` file:
If you wish to enable incremental materializations for the following models, you can set the `netsuite2__using_incremental` variable to `true` in your `dbt_project.yml` file:
- `netsuite2__balance_sheet`
- `netsuite2__income_statement`
- `netsuite2__transaction_details`

When enabled, the models use the `merge` strategy for Bigquery, Databricks, and Spark, and the `delete+insert` strategy for PostgreSQL, Redshift, and Snowflake.

```yml
vars:
netsuite:
lookback_window: number_of_days # default is 3
netsuite2__using_incremental: true # False by default. Materializes the above models as incremental instead of table.
```
#### Lookback Window
Records from the source can sometimes arrive late. If leveraging the incremental logic for the end models (disabled by default), we look back 3 days from the `_fivetran_synced_date` of transaction records to ensure late arrivals are captured and to avoid the need for frequent full refreshes. While the full-refresh frequency can be reduced, we still recommend running `dbt --full-refresh` periodically when using the incremental strategy to maintain the data quality of the models.

#### Adding incremental materialization for Bigquery and Databricks
Since pricing and runtime priorities vary by customer, by default we chose to materialize the below models as tables instead of an incremental materialization for Bigquery and Databricks. For more information on this decision, see the [Incremental Strategy section](https://github.com/fivetran/dbt_netsuite/blob/main/DECISIONLOG.md#incremental-strategy) of the DECISIONLOG.
To change the default lookback window, add the following variable to your `dbt_project.yml` file:

If you wish to enable incremental materializations leveraging the `merge` strategy, you can add the below materialization settings to your `dbt_project.yml` file. You only need to add lines for the specific model materializations you wish to change.
```yml
models:
vars:
netsuite:
netsuite2:
netsuite2__income_statement:
+materialized: incremental # default is table for Bigquery and Databricks
netsuite2__transaction_details:
+materialized: incremental # default is table for Bigquery and Databricks
netsuite2__balance_sheet:
+materialized: incremental # default is table for Bigquery and Databricks
lookback_window: number_of_days # default is 3
```
</details>

2 changes: 1 addition & 1 deletion dbt_project.yml
@@ -1,6 +1,6 @@
config-version: 2
name: 'netsuite'
version: '1.3.0'
version: '1.4.0'
require-dbt-version: [">=1.3.0", "<3.0.0"]

models:
2 changes: 1 addition & 1 deletion docs/catalog.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/manifest.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion integration_tests/dbt_project.yml
@@ -1,5 +1,5 @@
name: 'netsuite_integration_tests'
version: '1.3.0'
version: '1.4.0'
profile: 'integration_tests'
config-version: 2

6 changes: 3 additions & 3 deletions models/netsuite2/netsuite2__balance_sheet.sql
@@ -1,16 +1,16 @@
{%- set multibook_accounting_enabled = var('netsuite2__multibook_accounting_enabled', false) -%}
{%- set using_to_subsidiary_and_exchange_rate = (var('netsuite2__using_to_subsidiary', false) and var('netsuite2__using_exchange_rate', true)) -%}
{%- set using_incremental = var('netsuite2__using_incremental', false) -%}
{%- set balance_sheet_transaction_detail_columns = var('balance_sheet_transaction_detail_columns', []) -%}
{%- set accounts_pass_through_columns = var('accounts_pass_through_columns', []) -%}
{%- set lookback_window = var('lookback_window', 3) -%}

{{
config(
enabled=var('netsuite_data_model', 'netsuite') == var('netsuite_data_model_override','netsuite2'),
materialized='table' if target.type in ('bigquery', 'databricks', 'spark') else 'incremental',
materialized='incremental' if using_incremental else 'table',
partition_by = {'field': '_fivetran_synced_date', 'data_type': 'date', 'granularity': 'month'}
if target.type not in ['spark', 'databricks'] else ['_fivetran_synced_date'],
cluster_by = ['transaction_id'],
if target.type not in ['spark', 'databricks'] else ['_fivetran_synced_date'],
unique_key='balance_sheet_id',
incremental_strategy = 'merge' if target.type in ('bigquery', 'databricks', 'spark') else 'delete+insert',
file_format='delta'
4 changes: 2 additions & 2 deletions models/netsuite2/netsuite2__income_statement.sql
@@ -1,6 +1,7 @@
{%- set multibook_accounting_enabled = var('netsuite2__multibook_accounting_enabled', false) -%}
{%- set using_to_subsidiary = var('netsuite2__using_to_subsidiary', false) -%}
{%- set using_exchange_rate = var('netsuite2__using_exchange_rate', true) -%}
{%- set using_incremental = var('netsuite2__using_incremental', false) -%}
{%- set income_statement_transaction_detail_columns = var('income_statement_transaction_detail_columns', []) -%}
{%- set accounts_pass_through_columns = var('accounts_pass_through_columns', []) -%}
{%- set classes_pass_through_columns = var('classes_pass_through_columns', []) -%}
@@ -10,10 +11,9 @@
{{
config(
enabled=var('netsuite_data_model', 'netsuite') == var('netsuite_data_model_override','netsuite2'),
materialized='table' if target.type in ('bigquery', 'databricks', 'spark') else 'incremental',
materialized='incremental' if using_incremental else 'table',
partition_by = {'field': '_fivetran_synced_date', 'data_type': 'date', 'granularity': 'month'}
if target.type not in ['spark', 'databricks'] else ['_fivetran_synced_date'],
cluster_by = ['transaction_id'],
unique_key='income_statement_id',
incremental_strategy = 'merge' if target.type in ('bigquery', 'databricks', 'spark') else 'delete+insert',
file_format='delta'
6 changes: 2 additions & 4 deletions models/netsuite2/netsuite2__transaction_details.sql
@@ -2,6 +2,7 @@
{%- set using_to_subsidiary = var('netsuite2__using_to_subsidiary', false) -%}
{%- set using_exchange_rate = var('netsuite2__using_exchange_rate', true) -%}
{%- set using_vendor_categories = var('netsuite2__using_vendor_categories', true) -%}
{%- set using_incremental = var('netsuite2__using_incremental', false) -%}
{%- set accounts_pass_through_columns = var('accounts_pass_through_columns', []) -%}
{%- set departments_pass_through_columns = var('departments_pass_through_columns', []) -%}
{%- set locations_pass_through_columns = var('locations_pass_through_columns', []) -%}
@@ -13,10 +14,9 @@
{{
config(
enabled=var('netsuite_data_model', 'netsuite') == var('netsuite_data_model_override','netsuite2'),
materialized='table' if target.type in ('bigquery', 'databricks', 'spark') else 'incremental',
materialized='incremental' if using_incremental else 'table',
partition_by = {'field': 'transaction_line_fivetran_synced_date', 'data_type': 'date', 'granularity': 'month'}
if target.type not in ['spark', 'databricks'] else ['transaction_line_fivetran_synced_date'],
cluster_by = ['transaction_id'],
unique_key='transaction_details_id',
incremental_strategy = 'merge' if target.type in ('bigquery', 'databricks', 'spark') else 'delete+insert',
file_format='delta'
@@ -314,9 +314,7 @@ transaction_details as (
on transactions_with_converted_amounts.transaction_line_id = transaction_lines.transaction_line_id
and transactions_with_converted_amounts.transaction_id = transaction_lines.transaction_id
and transactions_with_converted_amounts.source_relation = transaction_lines.source_relation

and transactions_with_converted_amounts.transaction_accounting_period_id = transactions_with_converted_amounts.reporting_accounting_period_id
and transactions_with_converted_amounts.source_relation = transactions_with_converted_amounts.source_relation

{% if multibook_accounting_enabled %}
and transactions_with_converted_amounts.accounting_book_id = transaction_lines.accounting_book_id