Replies: 10 comments 1 reply
-
|
I'm not opposed to some kind of configuration option on the TCP transport for this, but I'm also not confident there's any way to actually guarantee it. Flushing and TCP_NODELAY will probably decrease the likelihood of multiple Modbus frames being sent in a single TCP frame, but I think ultimately you would need full control over the TCP implementation to guarantee that. TCP is fundamentally a streaming protocol, not a packet-based protocol, and the implementation and buffering behavior are ultimately handled by whatever OS you are running on.
Even with some changes, this may be the only reliable option.
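To make the streaming point concrete: a receiver that frames by the MBAP length field works no matter how the OS segmented the bytes, while one that assumes one read() per ADU breaks as soon as frames coalesce. A minimal sketch with plain java.io (illustrative only, not the library's code):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

final class MbapFramer {
    /** Reads exactly one Modbus/TCP ADU, regardless of TCP segment boundaries. */
    static byte[] readOneAdu(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        byte[] mbap = new byte[7];
        din.readFully(mbap);                                     // txn id, protocol id, length, unit id
        int length = ((mbap[4] & 0xFF) << 8) | (mbap[5] & 0xFF); // counts unit id + PDU
        byte[] adu = new byte[6 + length];
        System.arraycopy(mbap, 0, adu, 0, 7);
        din.readFully(adu, 7, length - 1);                       // remainder of the PDU
        return adu;
    }
}
```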
-
And each frame is already flushed as it's written:
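(A sketch of that pattern in plain Netty terms; illustrative names, not the library's actual source:)

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;

// Each encoded ADU is written and flushed in one call, so nothing lingers
// in Netty's outbound buffer waiting for a later flush().
ChannelFuture sendFrame(Channel channel, ByteBuf encodedAdu) {
    return channel.writeAndFlush(encodedAdu);
}
```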
-
I feared you would say something like this. Is there any callback I could register, or otherwise get hold of, that fires when a packet has been sent (not necessarily answered yet)? Otherwise my system's throughput would ultimately be limited by the response time of the controller; if it takes 5 ms to respond, I could do at best 200 dedicated requests per second. Given that I set the channel option as described and the library already flushes the channel after each frame, does the batching still occurring mean the TCP stack is the guilty party?
-
This sounds very much like the real world. I've worked with many, many, many Modbus devices. You're lucky that you have one that even accepts concurrent requests at all. It is unusual that it can't handle multiple requests in a frame, though, especially if it's "modern" enough to handle 200 or more requests/s. What OS are you testing on? I'm still trying to even get multiple requests into a single frame so that I can play around with possible options.
This is always going to be the case. Even if you were using a Java Socket directly and flushing its output stream after writing, the docs on OutputStream.flush() make it clear:
> If the intended destination of this stream is an abstraction provided by the underlying operating system, for example a file, then flushing the stream guarantees only that bytes previously written to the stream are passed to the operating system for writing; it does not guarantee that they are actually written to a physical device such as a disk drive.

(This applies equally to a socket / network connection as it does to a file.)
-
Ah, and I should mention, TCP_NODELAY is already configured by default:
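(For reference, enabling it by hand on a Netty client bootstrap looks like this sketch; the library does the equivalent internally:)

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelOption;

// Disables Nagle's algorithm so small writes go to the network immediately
// instead of being held back while earlier segments await ACKs.
void enableNoDelay(Bootstrap bootstrap) {
    bootstrap.option(ChannelOption.TCP_NODELAY, true);
}
```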
-
Okay, that's really interesting. We have multiple customer sites where we crawl their controllers, and apparently we never had a problem with concurrent requests or batching. This is the first site that makes trouble.
```mermaid
graph TD
    A["Java Application with modbus Lib <br/> (Packaged as Docker Image)"] --> B("Running on AKS EE <br/> Single Machine Cluster")
    B --> C{Hyper-V Linux VM}
    C --> D[Windows Server 2022]
```
Yeah, it's a bit complicated, I know :D
Thanks for making that clear!
Okay, so basically everything should already be configured as intended... Perfect 😞 ... haha
-
Yeah, sorry, I really don't think there's anything that can be done on the library side here. I suspect your deployment environment has something to do with it, because I have to issue hundreds or thousands of async requests in a hard loop, with no waiting, before I begin to see multiple requests in a single TCP segment/frame in Wireshark.
-
Okay, I see... So I think the only solution would be to reduce the number of required requests to get a bit more throughput. The weird behaviour is basically as follows (I have many days of debugging and reading Wireshark logs behind me 😅):
I know it might be a bit out of scope, but do you see anything else we could do here to somehow increase throughput if we have to stick to sequential requests? At the moment we're using a single client instance for all requests.
-
Each ModbusTcpClient + transport would have its own TCP connection. I've seen this used to increase concurrency with devices that don't support concurrent requests within a TCP connection. I don't think you would have success with thousands of connections, though... you probably just want to try 2-3. I don't know what kind of PLC you're dealing with, but it would be very unusual if it could handle thousands of connections. How sure are you that the multiple requests per frame are even the real issue? Real-life Modbus devices fail in all kinds of interesting ways and have all kinds of interesting bugs... it's a simple enough protocol that most vendors implement it themselves, often get it wrong, and introduce their own unique bugs by doing so.
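A minimal sketch of that idea, handing out a few independently connected clients round-robin. ClientPool is a hypothetical helper, generic over the client type because construction details vary by library version:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical helper, not part of the library: rotates requests across a
// small set of clients, each owning its own TCP connection to the device.
final class ClientPool<C> {
    private final List<C> clients;
    private final AtomicInteger counter = new AtomicInteger();

    ClientPool(List<C> connectedClients) {
        this.clients = List.copyOf(connectedClients); // e.g. 2-3 ModbusTcpClient instances
    }

    /** Round-robin selection; the caller issues its request on the returned client. */
    C next() {
        int i = Math.floorMod(counter.getAndIncrement(), clients.size());
        return clients.get(i);
    }
}
```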
-
Okay, so I would have to create a pool of clients within my app to make sure I don't use an excessive number of client instances. It seems a bit complicated though, since from what you said I'd also have to check how many parallel TCP connections the controller can even support.
I mean, what's ever certain? I'm pretty confident, though: the moment the batching appears, you can see the weird behaviour, and running sequentially the error does not show up at all. It's as if the controller, when it sees such a request, freezes for a few seconds and then starts responding again. Since the controller is unfortunately out of reach for me, I can't examine it in any way except by sending requests and observing how it reacts. Thanks for all the help @kevinherron! I really appreciate it. I guess the issue can be closed then.
-
Hello,
First, thank you for creating and maintaining this excellent and high-performance Modbus library.
The Issue
I am using the library to communicate with an industrial controller (PLC) that has a very strict implementation of the Modbus/TCP protocol.
When I send multiple requests asynchronously in rapid succession, the library's underlying Netty channel batches them into the payload of a single TCP segment. My Wireshark captures clearly show multiple Modbus ADUs being sent in one TCP segment.
This behavior violates the official Modbus specification, which states, "A TCP frame must transport only one MODBUS ADU." As a result, the controller considers this a framing error and discards the packet, so I never receive a response.
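For illustration, this is the shape of the offending payload: two complete ADUs back-to-back in one TCP segment (made-up values, not taken from my captures):

```java
// MBAP header = transaction id (2 bytes), protocol id (2, always 0),
// length (2, counting unit id + PDU), unit id (1); then the PDU itself.
byte[] coalescedPayload = {
    // ADU 1: txn 0x0001, len 6, unit 1, Read Holding Registers (0x03), addr 0, qty 2
    0x00, 0x01, 0x00, 0x00, 0x00, 0x06, 0x01, 0x03, 0x00, 0x00, 0x00, 0x02,
    // ADU 2: txn 0x0002, len 6, unit 1, Read Holding Registers (0x03), addr 10, qty 2
    0x00, 0x02, 0x00, 0x00, 0x00, 0x06, 0x01, 0x03, 0x00, 0x0A, 0x00, 0x02,
};
```

A strict slave that parses only the first ADU per segment treats the trailing bytes as a framing error.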
What I've Tried
- TCP_NODELAY: I have confirmed that setting `ChannelOption.TCP_NODELAY` to `true` does not solve the problem. The batching appears to happen at the application/Netty level before the data is handed to the OS network stack, so Nagle's algorithm is not the root cause. Or the channel option is not applied properly; I can't really tell.
- Manual synchronization: what I'm basically doing is wrapping the `ModbusTcpClient` instance with a semaphore to force sequential request/response behaviour.
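Roughly, that wrapper looks like the following sketch. SequentialGate is my own illustrative helper, usable with any request method that returns a CompletableFuture:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Illustrative helper, not a library class: permits only one request on the
// wire at a time by gating submissions behind a single-permit semaphore.
final class SequentialGate {
    private final Semaphore permit = new Semaphore(1);

    <T> CompletableFuture<T> submit(Supplier<CompletableFuture<T>> request) {
        return CompletableFuture
            .runAsync(permit::acquireUninterruptibly)      // wait for the previous exchange
            .thenCompose(unused -> request.get())          // send exactly one ADU
            .whenComplete((result, error) -> permit.release());
    }
}
```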
The Problem with the Workaround
While manual synchronization works for ensuring compliance, it forces a sequential execution model. This makes the application code more complex and negates the throughput benefits of sending multiple independent requests concurrently in an asynchronous manner.
Unfortunately, the controller does not otherwise respond in a predictable way... sometimes it simply does not answer at all.
Are you aware of any similar problems? Would it be possible to add a configuration option to the client to handle this common industrial requirement more directly?
A configuration flag, perhaps something like `.strictComplianceMode(true)` or `.flushAfterRequest(true)`, would be incredibly helpful. When enabled, this mode would ensure that the library flushes the channel after each request is written, guaranteeing one ADU per TCP packet without forcing the user to implement complex sequential logic. Or is there anything I'm missing here?
Thank you for your time and consideration.
Spec: [screenshot of the specification excerpt; image not available]
TCP_NODELAY=false: [Wireshark captures; images not available]
TCP_NODELAY=true: [Wireshark capture; image not available]
FYI: I'm using the newest version of the client, v2.1.1.