TCP/IP, how to verify delivery?

When I successfully write data to a TCP/IP socket, as I understand it, I am only guaranteed that the data has made it into the TCP/IP stack's send buffer. A successful write doesn't guarantee that the data actually reaches the recipient. Since data can linger in the stack's buffer indefinitely, it is quite possible for the socket to be closed before the buffered data is ever delivered.

Of course, since the recipient's side is also buffered, I realize the sender's TCP/IP stack can successfully flush its buffer, presumably meaning the data has entered the recipient's buffer, but the recipient never actually has to read it. The recipient's failure to read the data will, over time, cause its buffer to fill. As that buffer fills, the packets are acked in a way that shrinks the sender's TCP window until, ultimately, the recipient's buffer fills entirely and the sending stack can no longer transmit. At that point, since the sending stack can't flush, the sender's buffer fills up too, which causes subsequent writes on a blocking socket to block, or on a non-blocking socket to return -1 with errno set to EAGAIN.
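
For concreteness, here's roughly how I'd expect that to look on a non-blocking socket (a sketch with made-up names, just to illustrate the EAGAIN case):

#include <errno.h>
#include <poll.h>
#include <sys/types.h>
#include <unistd.h>

/* Write all of buf[0..len) to a non-blocking socket fd.
 * Returns 0 on success, -1 on a real error.              */
static int write_all_nonblocking(int fd, const char *buf, size_t len)
{
    size_t off = 0;

    while (off < len) {
        ssize_t n = write(fd, buf + off, len - off);

        if (n > 0) {
            off += (size_t)n;          /* data accepted into the local TCP send buffer */
        } else if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* Local send buffer is full: the receiver's window (and buffer)
             * is presumably full too.  Wait until the socket is writable.  */
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            if (poll(&pfd, 1, -1) == -1 && errno != EINTR)
                return -1;
        } else if (n == -1 && errno == EINTR) {
            continue;                  /* interrupted, just retry */
        } else {
            return -1;                 /* real error (EPIPE, ECONNRESET, ...) */
        }
    }
    return 0;
}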

So... I suppose that's a long way of saying that the buffering in the two TCP stacks makes it hard to figure out, from a successful call to write alone, whether the data actually reached the recipient. Further, without an acknowledgment from the recipient, knowing whether the data was actually read is impossible. This leads me to my question.

Assuming I can't have the recipient acknowledge the response (protocol limitation), the best I could presume to do is verify the data got to the recipient's TCP/IP buffer. Since the two TCP/IP stacks ack the packets under the hood, checking how much data remains unacknowledged in the sender's TCP buffer should be sufficient to tell me how much has not yet been delivered. If this is true, is there any good way to accomplish it? I ask because, given I can't modify the protocol to contain an acknowledgement, knowing the data got to the recipient's buffer is superior to "well, I stuck it in the TCP/IP buffer (write succeeded), so it must have got there."

The O/S is AIX 5.3 if it matters. Also, feel free to correct any misunderstandings I may have about the process.

Thanks!

You can use the SIOCOUTQ ioctl to measure how much outgoing data is still queued on a socket, which is roughly proportional to how much the other side has yet to receive. You can combine this with setting TCP_NODELAY via setsockopt so that data leaves immediately instead of waiting to be coalesced into larger segments. Together these give you a more robustly blocking send(): send the data, then wait for the queue to drain.
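
A rough sketch of that idea, assuming a platform that actually provides SIOCOUTQ (on Linux it comes from <linux/sockios.h>; availability elsewhere varies):

#include <errno.h>
#include <netinet/in.h>
#include <netinet/tcp.h>     /* TCP_NODELAY */
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/sockios.h>   /* SIOCOUTQ -- Linux-specific, not portable */

/* Send len bytes on a blocking socket, then wait until the local
 * send queue has drained, i.e. everything has left the local stack. */
static int send_and_drain(int fd, const void *buf, size_t len)
{
    int one = 1;
    /* Push data out immediately instead of letting Nagle coalesce it. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) == -1)
        return -1;

    /* Simplified: assumes the blocking send() accepts everything in one call. */
    if (send(fd, buf, len, 0) != (ssize_t)len)
        return -1;

    for (;;) {
        int pending = 0;
        if (ioctl(fd, SIOCOUTQ, &pending) == -1)
            return -1;
        if (pending == 0)
            break;             /* send queue empty */
        usleep(10000);         /* crude 10 ms back-off before re-checking */
    }
    return 0;
}

When the pending count reaches zero, everything you wrote has left the local stack (and, on Linux at least, been acknowledged by the peer's TCP), which is about as close to "it reached their buffer" as you can get without an application-level ack.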

You could also simply make your send buffer really small for a similar effect.
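
If you want to try that, the per-socket knob would be SO_SNDBUF via setsockopt, roughly as sketched below; note the stack may clamp or round the value, and I haven't verified how AIX treats a very small request.

#include <sys/socket.h>

/* Ask for a tiny send buffer on socket fd.  Sketch only: the stack may
 * round the value up to an internal minimum, and it generally needs to
 * be set before data starts flowing.                                   */
static int shrink_send_buffer(int fd, int bytes)
{
    return setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes));
}

/* e.g. shrink_send_buffer(fd, 2048); */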

This can substantially cut your throughput, since you'll spend more time waiting for transmission and less time working, and you'll be sending less data in more packets. The same would be true of a manual acknowledgement, though.

You can't guarantee the other end has used it without some sort of acknowledgment, but then, you can't guarantee the program hasn't read it and thrown it away either. Guaranteeing the correctness of the other end's software is a little beyond the scope of a network communication protocol.

Most apps that are worried about this use some sort of short packet-header ack-back system. For example:

One system I just coded used this packet header format:
nnnnnnnnXXmmmmmmmmmm

where nnnnnnnn = zero-filled length of the whole packet
XX = type of packet: DT = data, AK = acknowledge
mmmmmmmmmm = unique packet identifier, rolling

An AK packet, e.g. 00000020AK0000001234 (only 20 bytes), is sent back to acknowledge a data packet named 00000576DT0000001234.

The system expects an AK packet back within 1 minute; AK packets are stored in a queue with a timestamp, so I can check for receipt success or resend status. There are also keepalive packets in the same short format, sent every 30 seconds for low-traffic times.
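
A minimal sketch of building and parsing that 20-byte header (field widths as described above; the function names are just illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HDR_LEN 20   /* 8-digit length + 2-char type + 10-digit packet id */

/* Format a header such as "00000020AK0000001234" into out[21]. */
static void make_header(char out[HDR_LEN + 1], size_t total_len,
                        const char type[2], unsigned long packet_id)
{
    snprintf(out, HDR_LEN + 1, "%08zu%.2s%010lu", total_len, type, packet_id);
}

/* Parse a received 20-byte header; returns 0 on success. */
static int parse_header(const char hdr[HDR_LEN], size_t *total_len,
                        char type[3], unsigned long *packet_id)
{
    char lenbuf[9] = {0}, idbuf[11] = {0};

    memcpy(lenbuf, hdr, 8);            /* zero-filled total length  */
    memcpy(type, hdr + 8, 2);          /* "DT" or "AK"              */
    type[2] = '\0';
    memcpy(idbuf, hdr + 10, 10);       /* rolling packet identifier */

    *total_len = (size_t)strtoul(lenbuf, NULL, 10);
    *packet_id = strtoul(idbuf, NULL, 10);
    return 0;
}

For instance, make_header(hdr, 20, "AK", 1234) produces "00000020AK0000001234".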

As others have said, if you want to "guarantee" that your "message" is delivered to the application at the other end, you need to move beyond relying on the network protocol. There are a number of middleware messaging systems which can provide this guarantee. AMQP is one; others include JMS, IBM's MQSeries and Tibco's Rendezvous.

Thanks all. As I mentioned, I can't modify the protocol to use any sort of ack because the protocol is already set.

I'll try what Corona suggested and see if the SIOCOUTQ ioctl gives me anything useful. I suppose I would have to send the data and, once it is all buffered up, loop until it drains.

All the other points, about knowing whether the recipient has read/consumed the data, I understand. I just have to work within the confines of the protocol that exists... so I'm trying to do the best I can to make sure the data has at least reached the recipient host.

edit: Ugh, I just checked and SIOCOUTQ isn't available on AIX...and if it is, it isn't listed in any of the headers under /usr/include :-(. Is there an AIX alternative?

Like I said, a similar effect could be achieved by making your transmit buffer really small, though I don't know how reliably that can be done per-connection.

I'm not sure it can be done per-connection; I think those TCP send/receive space settings apply across the board.

I believe I can set the high/low water marks per connection, and I believe these control when a socket becomes writable (as far as poll/select is concerned). But I'm not sure that's a "good" idea. It seems like I'd have to write the whole "message", reduce the water mark to 0, call poll/select until it reports the socket as writable, and then reset the water mark to its original value for the next message. But that seems hokey... and I'm not sure it would be reliable. Ideas?
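
Something like this is what I had in mind (a sketch only; and thinking about it more, since a socket counts as writable when the free space in its send buffer is at least the low-water mark, I'd presumably need to raise the mark toward the buffer size rather than drop it to 0, and some stacks won't let you set SO_SNDLOWAT at all):

#include <errno.h>
#include <poll.h>
#include <sys/socket.h>

/* After writing a whole message on socket fd, wait until the send buffer
 * has (mostly) drained by raising SO_SNDLOWAT to the buffer size and
 * polling for writability.  Sketch only: many stacks refuse to set
 * SO_SNDLOWAT (Linux returns ENOPROTOOPT; AIX untested).              */
static int wait_for_drain_lowat(int fd)
{
    int sndbuf = 0, old_lowat = 0;
    socklen_t len = sizeof(int);

    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == -1)
        return -1;
    len = sizeof(int);
    if (getsockopt(fd, SOL_SOCKET, SO_SNDLOWAT, &old_lowat, &len) == -1)
        return -1;

    /* Writable means free space >= low-water mark, so set the mark to
     * (almost) the whole buffer to detect "nearly empty".             */
    if (setsockopt(fd, SOL_SOCKET, SO_SNDLOWAT, &sndbuf, sizeof(sndbuf)) == -1)
        return -1;

    struct pollfd pfd = { .fd = fd, .events = POLLOUT };
    while (poll(&pfd, 1, -1) == -1) {
        if (errno != EINTR)
            return -1;
    }

    /* Restore the original mark for normal operation. */
    return setsockopt(fd, SOL_SOCKET, SO_SNDLOWAT, &old_lowat, sizeof(old_lowat));
}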

You're trying to make a best-effort asynchronous socket act like a guaranteed-delivery synchronous one, so none of this is ideal. If it works, it works; if it doesn't, it doesn't.