Description
When using the standalone Java streaming ingest example (SnowflakeStreamingIngestExample.java), we observe that calling channel.close().get() sometimes hangs indefinitely after ingestion, even though all data appears to have reached the target table. The issue is more likely when ingesting a large volume of data.
Enabling detailed logging (e.g., TRACE) is one way we found to slow down ingestion and reproduce the issue with a smaller data set (e.g., totalRowsInTable = 1000000), but we have also observed that ingesting massive volumes at high speed (even without logging) can result in the same hang.
Steps to reproduce:
1. Use the sample program from Snowflake's GitHub.
2. Set totalRowsInTable = 1000000 (or a similarly large value).
3. Enable detailed logging (e.g., TRACE) using a log4j.properties file to slow ingestion, for example:
log4j.rootLogger=TRACE,FILE
log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File=D://freshtestagent//file.log
log4j.appender.FILE.MaxFileSize=100MB
log4j.appender.FILE.MaxBackupIndex=4
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d %p %t %c: %m%n
4. Run the program and ingest all rows.
5. After ingestion, call channel.close().get() (see the condensed sketch after this list).
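
For reference, the ingestion flow is essentially the published example, condensed into the sketch below. The connection properties file (profile.properties), the column name C1, and the client name are placeholders (the real example loads profile.json); the database, schema, table, and channel names are the ones that appear in the error message further down.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import net.snowflake.ingest.streaming.InsertValidationResponse;
import net.snowflake.ingest.streaming.OpenChannelRequest;
import net.snowflake.ingest.streaming.SnowflakeStreamingIngestChannel;
import net.snowflake.ingest.streaming.SnowflakeStreamingIngestClient;
import net.snowflake.ingest.streaming.SnowflakeStreamingIngestClientFactory;

public class SnowflakeStreamingIngestRepro {
  public static void main(String[] args) throws Exception {
    // Placeholder: connection properties (url, user, private key, role, ...),
    // loaded the same way the published example loads its profile.json.
    Properties props = new Properties();
    try (InputStream in = Files.newInputStream(Paths.get("profile.properties"))) {
      props.load(in);
    }

    int totalRowsInTable = 1_000_000;

    try (SnowflakeStreamingIngestClient client =
        SnowflakeStreamingIngestClientFactory.builder("MY_CLIENT").setProperties(props).build()) {
      OpenChannelRequest request =
          OpenChannelRequest.builder("MY_CHANNEL_1")
              .setDBName("DBMI_DB1")
              .setSchemaName("NBA")
              .setTableName("TEST_CHECK")
              .setOnErrorOption(OpenChannelRequest.OnErrorOption.CONTINUE)
              .build();
      SnowflakeStreamingIngestChannel channel = client.openChannel(request);

      for (int i = 0; i < totalRowsInTable; i++) {
        Map<String, Object> row = new HashMap<>();
        row.put("C1", i); // placeholder column
        InsertValidationResponse response = channel.insertRow(row, String.valueOf(i));
        if (response.hasErrors()) {
          throw response.getInsertErrors().get(0).getException();
        }
      }

      // This call sometimes hangs indefinitely (see "Observed Behavior" below).
      channel.close().get();
    }
  }
}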
Observed Behavior:
channel.close().get() sometimes hangs indefinitely, even though all data appears to be present in the target table.
This is more likely when there is a backlog of uncommitted data at the time of channel closure, which can be triggered by slower ingestion (due to logging) or by ingesting a massive volume of data at high speed.
In the logs we can also see an exception being reported, but the future never completes even though the error appears in the log:
Oct 25, 2025 1:19:14 AM net.snowflake.ingest.internal.net.snowflake.client.jdbc.cloud.storage.SnowflakeS3Client upload
INFO: Starting upload from stream (byte stream) to S3 location: ytzy-s-euss0037/streaming_ingest/2025/10/25/0/17/t4nvhq_AAAxAFV6mDVh5InkWyyqI0n98aE63TbkQ8wj7hd16mscCC_1014_32_0.bdec
Exception in thread "main" java.util.concurrent.ExecutionException: net.snowflake.ingest.utils.SFException: One or more channels [DBMI_DB1.NBA.TEST_CHECK.MY_CHANNEL_1] might contain uncommitted rows due to server side errors, please consider reopening the channels to replay the data loading by using the latest persistent offset token.
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073)
at com.infa.snowflake.SnowflakeStreamingIngestApp.main(SnowflakeStreamingIngestApp.java:148)
Caused by: net.snowflake.ingest.utils.SFException: One or more channels [DBMI_DB1.NBA.TEST_CHECK.MY_CHANNEL_1] might contain uncommitted rows due to server side errors, please consider reopening the channels to replay the data loading by using the latest persistent offset token.
at net.snowflake.ingest.streaming.internal.SnowflakeStreamingIngestChannelInternal.lambda$close$0(SnowflakeStreamingIngestChannelInternal.java:298)
at java.base/java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:787)
at java.base/java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1193)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1666)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1633)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
Oct 25, 2025 1:19:16 AM net.snowflake.ingest.internal.net.snowflake.client.jdbc.cloud.storage.SnowflakeS3Client upload
INFO: Uploaded data from input stream to S3 location: ytzy-s-euss0037/streaming_ingest/2025/10/25/0/17/t4nvhq_AAAxAFV6mDVh5InkWyyqI0n98aE63TbkQ8wj7hd16mscCC_1014_32_0.bdec. It took 1,510 ms with 0 retries
Expectation:
channel.close().get() should eventually return after ensuring all pending data is committed, or throw an exception if it cannot complete.
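
Until that is the case, the only caller-side mitigation we see is to bound the wait ourselves. A minimal sketch of such a defensive pattern, using a plain CompletableFuture timeout (the 5-minute value is arbitrary; this is not an SDK-provided mechanism):

import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import net.snowflake.ingest.streaming.SnowflakeStreamingIngestChannel;

// Hypothetical helper, not part of the SDK: bounds the wait on close() so the
// application can detect the hang instead of blocking the main thread forever.
static void closeChannelWithTimeout(SnowflakeStreamingIngestChannel channel)
    throws InterruptedException {
  try {
    channel.close().get(5, TimeUnit.MINUTES); // arbitrary timeout
  } catch (TimeoutException e) {
    // The future never completed: the hang described under "Observed Behavior".
    System.err.println("channel.close() did not complete within 5 minutes");
  } catch (ExecutionException e) {
    // close() completed exceptionally, e.g. the SFException about uncommitted rows.
    e.getCause().printStackTrace();
  }
}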
Request:
Please investigate why channel.close().get() hangs in this scenario and provide guidance or a fix.
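
For completeness: the error text in the log suggests reopening the channel and replaying from the latest persisted offset token. Our understanding of that recovery path is sketched below, reusing the client and request objects from the repro sketch above; replayRowsAfter is a hypothetical application-side method, not an SDK call.

// Reopen the same channel; the server side invalidates the previous instance.
SnowflakeStreamingIngestChannel reopened = client.openChannel(request);

// Ask which offset token was last committed, then replay everything after it
// from the application's own source or buffer.
String lastCommitted = reopened.getLatestCommittedOffsetToken();
replayRowsAfter(lastCommitted, reopened); // hypothetical application method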
Note: SDK version 4.1.0