It is always exciting to optimize whatever environment we are building. NetScaler provides us with the possibility of creating our own customized TCP profiles, but you really need to understand the core optimization parameters while configuring them, otherwise you could end up messing things up instead.

To create a TCP profile, log on to the NetScaler web GUI and:

  1. Navigate to System > Profiles.
  2. In the details pane, click on the TCP Profiles tab and then click Add.
  3. In the Create TCP Profiles dialog box, configure the parameters for the TCP profile.
  4. Click Create.

[Screenshot: Creating a TCP profile]

So first of all you need to give the TCP Profile a name.
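
If you prefer the command line over the GUI, the profile can be created there as well. Here is a minimal sketch, assuming a hypothetical profile name of tcp_custom (verify the exact syntax against the CLI reference for your NetScaler version):

  > add ns tcpProfile tcp_custom
  > show ns tcpProfile tcp_custom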

Window Scaling

TCP window scaling allows the TCP receive window size to grow beyond 65,535 bytes. It improves overall TCP performance, especially on high-bandwidth, long-delay networks, by reducing latency and improving response times over TCP.
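
From the CLI, window scaling is toggled with the -WS flag and the scaling factor with -WSVal, here on our hypothetical tcp_custom profile. A factor of 8, for example, multiplies the advertised window by 2^8 = 256:

  > set ns tcpProfile tcp_custom -WS ENABLED -WSVal 8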

Maximum Burst Limit

This setting controls the burst of packets put on the wire from NetScaler in a single attempt. A higher maximum burst limit means faster delivery of data on a congestion-free network, while limiting the burst of packets helps avoid congestion at the link level and at the intermediary nodes.
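
On the CLI this is the -maxBurst parameter; as a sketch, allowing a burst of 10 packets on our example profile:

  > set ns tcpProfile tcp_custom -maxBurst 10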

Initial Congestion Window size

The TCP initial congestion window size determines the number of bytes that can be outstanding at the beginning of a transaction. It enables NetScaler to send that many bytes without worrying about congestion on the wire. The default is 4, which means 4*MSS.
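
This maps to the -initialCwnd parameter. As a sketch, raising it to 10 (i.e. 10*MSS, in line with the IW10 proposal of RFC 6928) on our example profile:

  > set ns tcpProfile tcp_custom -initialCwnd 10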

TCP Delayed ACK timeout

To avoid sending too many ACK-only packets on the wire, NetScaler implements a delayed ACK mechanism with a default timeout of 200 ms. It accumulates data packets and sends an ACK to the sending party only when it receives two data packets in a row or the timer expires.
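
The timeout is set in milliseconds via the -delayedAck parameter; halving it to 100 ms on our example profile would look like this:

  > set ns tcpProfile tcp_custom -delayedAck 100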

Maximum OOO packet queue size

TCP maintains an out-of-order (OOO) queue to hold the OOO packets of a TCP connection. This setting impacts system memory when the queue is long, as the packets have to be kept in runtime memory, so it needs to be tuned to the characteristics of your network and application. You can raise this parameter up to 65,535, but that is unrealistic.
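
The queue length is exposed as the -oooQSize parameter; for example, allowing 300 queued out-of-order packets on our example profile:

  > set ns tcpProfile tcp_custom -oooQSize 300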

MSS and Max Packet per MSS

With the MSS setting you specify the TCP maximum segment size to be used for transactions. By default NetScaler maintains 8 different MSS sizes for TCP transactions: 1460, 1440, 1360, 1212, 956, 536, 384 and 128. Based on what you set here, the same or the next lower value is picked for the communication. Maximum packets per MSS is used as a factor for packet-based congestion control.
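
Both knobs are available on the CLI as -mss and -maxPktPerMss; as a sketch, pinning the MSS to 1460 on our example profile:

  > set ns tcpProfile tcp_custom -mss 1460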

Max packets per Retransmission

This setting controls how many packets NetScaler retransmits in one attempt. It comes into play when NetScaler receives a partial ACK and has to retransmit; it does not affect RTO-based retransmissions.
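
On the CLI this is -pktPerRetx; for example, allowing up to 10 packets per retransmission attempt on our example profile:

  > set ns tcpProfile tcp_custom -pktPerRetx 10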

Minimum RTO

The TCP retransmission timeout is recalculated on each received ACK based on internal implementation logic. The first retransmission fires at the default of 1 second, and that floor can be tweaked with this setting. For each further retransmission of the same packets the RTO doubles (N*2, then N*4, N*8, and so on) until the last retransmission attempt.
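
The floor is set in milliseconds via the -minRTO parameter. As a sketch on our example profile, with a value of 600 the first retry could fire as early as 600 ms (assuming the computed RTO hits the floor), doubling from there to 1.2 s, 2.4 s and so on:

  > set ns tcpProfile tcp_custom -minRTO 600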

Slow Start Increment

TCP slow start controls congestion by starting the transmission at a low rate and increasing it exponentially until the first packet loss is noticed. This setting governs the growth of the congestion window: the default of 2 grows the window by 2*MSS on every received ACK, which enables a faster data transmission rate. You can increase this parameter if your network is capable of handling the higher packet rate.
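
This is the -slowStartIncr parameter; bumping it from the default of 2 to 4 on our example profile would grow the window by 4*MSS per ACK:

  > set ns tcpProfile tcp_custom -slowStartIncr 4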

TCP Buffer size

TCP buffer size is the receive buffer size on NetScaler. This buffer size is advertised to clients and servers and controls their ability to send data to NetScaler. The default is 8 KB, and in most cases it is safe to increase it when talking to internal server farms. The effective buffer size is also influenced by the application layer on NetScaler: for SSL endpoint cases it is set to 40 KB, and for compression to 96 KB.
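
The buffer size is set in bytes via -bufferSize; for example, raising it to 64 KB on our example profile for an internal server farm:

  > set ns tcpProfile tcp_custom -bufferSize 65536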

SACK (Selective Acknowledgement)

TCP SACK addresses the problem of multiple packet losses, which reduce the overall throughput capacity. With selective acknowledgement the receiver can inform the sender about all the segments that were received successfully, so the sender only retransmits the segments that were lost. This technique helps NetScaler improve overall throughput and reduce connection latency.
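
Enabling it on our example profile from the CLI:

  > set ns tcpProfile tcp_custom -SACK ENABLED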

Nagle’s Algorithm

Nagle’s algorithm fights the problem of small packets in TCP transmission. Applications like Telnet and other real-time engines that need every keystroke passed to the other side often create very small packets. With Nagle’s algorithm NetScaler can buffer such small packets and send them together, increasing on-the-wire efficiency. The algorithm has to work alongside the other TCP optimization techniques in NetScaler.
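
The corresponding CLI flag is -nagle, again shown on our example profile:

  > set ns tcpProfile tcp_custom -nagle ENABLED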

Immediate ACK on PUSH

By default NetScaler uses the delayed ACK technique to ensure there are not too many ACK-only packets on the wire. But at times applications expect NetScaler to acknowledge immediately, without any delay, and such packets are often marked with the PUSH flag. With this setting enabled, NetScaler responds with an ACK to the sender as soon as it receives a packet with PUSH set.
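
On the CLI this corresponds to the -ackOnPush flag on our example profile:

  > set ns tcpProfile tcp_custom -ackOnPush ENABLED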

There are lots of interesting optimization techniques exposed through the TCP profiles. You need to understand them and their impact on your network/application deployment. One important thing to note is that these optimization parameters work together and thus have some amount of cross impact as well. A proper profile can change the way your application behaves and create a much better experience on the client side.

You can bind TCP profiles to AG vServers, LB service groups, LB services…
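
From the CLI the binding is done with the -tcpProfileName parameter of the respective entity. A sketch with hypothetical entity names (vs_web, sg_web, svc_web):

  > set lb vserver vs_web -tcpProfileName tcp_custom
  > set serviceGroup sg_web -tcpProfileName tcp_custom
  > set service svc_web -tcpProfileName tcp_custom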