
Conversation

@veblush
Collaborator

@veblush veblush commented Jan 13, 2026

Brainstorming.

BUG=N/a

@veblush
Collaborator Author

veblush commented Jan 13, 2026

cc @rameshkunasi

Comment on lines 320 to 365
TfLiteStatus MicroInterpreterGraph::ResetVariableTensor(int tensor_index,
                                                        int subgraph_index) {
  const SubGraph* subgraph = (*subgraphs_)[subgraph_index];
  auto* tensor = subgraph->tensors()->Get(tensor_index);
  if (tensor->is_variable()) {
    size_t buffer_size;
    TF_LITE_ENSURE_STATUS(TfLiteEvalTensorByteLength(
        &subgraph_allocations_[subgraph_index].tensors[tensor_index],
        &buffer_size));

    int value = 0;
    if (tensor->type() == tflite::TensorType_INT8) {
      value = tensor->quantization()->zero_point()->Get(0);
    }
    memset(subgraph_allocations_[subgraph_index].tensors[tensor_index].data.raw,
           value, buffer_size);
  }
  return kTfLiteOk;
}

Member

Since this only works for TensorFlow 1.0 variable type tensors, the default return value should be failure. Also TensorType_INT8 should not be used for quantization checks. Instead the existence of the quantization field in the schema should be used. Multi-channel quantization checks also need to be here, or alternatively fail this method.
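A minimal sketch of what this suggestion could look like, based only on the diff quoted above. The early failure return and the quantization() null checks are illustrative, not the implementation in this PR:

TfLiteStatus MicroInterpreterGraph::ResetVariableTensor(int tensor_index,
                                                        int subgraph_index) {
  const SubGraph* subgraph = (*subgraphs_)[subgraph_index];
  auto* tensor = subgraph->tensors()->Get(tensor_index);
  // Only TF 1.x style variable tensors are handled here, so fail by default.
  if (!tensor->is_variable()) {
    return kTfLiteError;
  }

  size_t buffer_size;
  TF_LITE_ENSURE_STATUS(TfLiteEvalTensorByteLength(
      &subgraph_allocations_[subgraph_index].tensors[tensor_index],
      &buffer_size));

  // Key off the presence of quantization data in the schema rather than
  // TensorType_INT8, and fail on multi-channel (per-axis) quantization.
  int value = 0;
  if (tensor->quantization() != nullptr &&
      tensor->quantization()->zero_point() != nullptr &&
      tensor->quantization()->zero_point()->size() > 0) {
    if (tensor->quantization()->zero_point()->size() > 1) {
      return kTfLiteError;  // multi-channel quantization not supported
    }
    value = tensor->quantization()->zero_point()->Get(0);
  }
  memset(subgraph_allocations_[subgraph_index].tensors[tensor_index].data.raw,
         value, buffer_size);
  return kTfLiteOk;
}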

Collaborator Author

Since this only works for TensorFlow 1.0 variable type tensors, the default return value should be failure.

Done.

Also TensorType_INT8 should not be used for quantization checks. Instead the existence of the quantization field in the schema should be used. Multi-channel quantization checks also need to be here, or alternatively fail this method.

This reset logic is identical to the existing ResetVariableTensors. We can improve it in a separate PR if this is needed.

@ddavis-2015
Member

ddavis-2015 commented Jan 14, 2026

@veblush This will not work for RESOURCE variable tensors (TensorFlow 2.0)

@veblush
Collaborator Author

veblush commented Jan 16, 2026

RESOURCE

Noted. If we need a way to reset RESOURCE variables, we can add a new API for that.

@veblush veblush requested a review from ddavis-2015 January 16, 2026 22:29
Member

Only the use of data.data is allowed from the TfLitePtrUnion struct. All other members have been deprecated by the LiteRT team (formerly the TfLite folks).
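A minimal sketch of that change, assuming it targets the memset in the diff above (only the union member access changes):

    // data.raw is deprecated; TfLitePtrUnion should only be accessed via data.data.
    memset(subgraph_allocations_[subgraph_index].tensors[tensor_index].data.data,
           value, buffer_size);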

Member

Please be advised that TF_LITE_MICRO_EXPECT_ macros do not end the test. This could cause the test to crash on null pointer checks, and the logs for the test will be lost.

Perhaps we should add a new macro:

#define TF_LITE_MICRO_CHECK_FAIL_AND_STOP if (micro_test::did_test_fail) continue
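For illustration, a hypothetical use of the proposed macro in a tflite-micro test. The test name and GetTestEvalTensor() helper are invented for this sketch, and it assumes the TF_LITE_MICRO_TEST body expands to a loop body so that continue ends the test, as the proposal implies:

TF_LITE_MICRO_TEST(ResetVariableTensorTest) {
  // GetTestEvalTensor() is a hypothetical helper for this illustration.
  TfLiteEvalTensor* eval_tensor = GetTestEvalTensor();
  TF_LITE_MICRO_EXPECT(eval_tensor != nullptr);
  // Without this, a failed check above would fall through and dereference a
  // null pointer, crashing the test binary and losing the logs.
  TF_LITE_MICRO_CHECK_FAIL_AND_STOP;
  size_t buffer_size = 0;
  TF_LITE_MICRO_EXPECT_EQ(
      kTfLiteOk, TfLiteEvalTensorByteLength(eval_tensor, &buffer_size));
}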

  TF_LITE_ENSURE_STATUS(TfLiteEvalTensorByteLength(eval_tensor, &buffer_size));

  int value = 0;
  if (tensor->type() == tflite::TensorType_INT8) {

Member

You're right, a multi-channel tensor variable probably doesn't exist. But this line would be better as:

if (tensor->type() == tflite::TensorType_INT8 && tensor->quantization() &&
    tensor->quantization()->zero_point() &&
    tensor->quantization()->zero_point()->size() > 0) {
