I am facing a problem where running the same application on different servers produces unpredictable throughput results. For example, running the application on a fast server (faster CPU, more memory) with no other load on it yields lower throughput than running it on less powerful servers on the same network.
I suspect the OS or the TCP stack is causing the slowness on the fast server. Is there any tool that can inspect the OS and TCP configuration and pinpoint the reason for the sluggishness?
All servers are running Red Hat Linux.
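One way to narrow this down is to dump the OS-level TCP configuration from /proc/sys on both servers and diff the output. Below is a minimal C sketch of that kind of check; the particular tunables listed are an illustrative assumption, not an exhaustive set:

    /* Dump a few kernel TCP tunables from /proc so the two servers'
       configurations can be compared side by side. Paths are standard
       on Linux; the key list here is illustrative only. */
    #include <stdio.h>

    int main(void) {
        const char *keys[] = {
            "/proc/sys/net/ipv4/tcp_rmem",              /* receive buffer min/default/max */
            "/proc/sys/net/ipv4/tcp_wmem",              /* send buffer min/default/max */
            "/proc/sys/net/ipv4/tcp_window_scaling",
            "/proc/sys/net/ipv4/tcp_congestion_control",
            "/proc/sys/net/core/rmem_max",
            "/proc/sys/net/core/wmem_max",
        };
        char line[256];
        for (size_t i = 0; i < sizeof(keys) / sizeof(keys[0]); i++) {
            FILE *f = fopen(keys[i], "r");
            if (!f) { perror(keys[i]); continue; }
            if (fgets(line, sizeof(line), f))
                printf("%-48s %s", keys[i], line);      /* value includes its newline */
            fclose(f);
        }
        return 0;
    }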
UPDATE: This is a socket-based client/server application. It uses a single connection, i.e. a single client connected to the server, trying to send messages as fast as possible. There is no forking or multi-threading.
On further thought, are there key TCP options that can affect either latency or throughput?
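For a single-connection, send-as-fast-as-possible workload, the options most often implicated are Nagle's algorithm (TCP_NODELAY, which affects the latency of small messages) and the socket buffer sizes (SO_SNDBUF / SO_RCVBUF, which can cap throughput). Here is a minimal C sketch that disables Nagle on a fresh socket and prints the buffer sizes the kernel actually granted; whether these settings explain the difference in this particular case is an open question:

    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Disable Nagle's algorithm: small writes go out immediately
           instead of being coalesced, lowering per-message latency. */
        int one = 1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
            perror("setsockopt TCP_NODELAY");

        /* Inspect the send/receive buffer sizes the kernel granted;
           undersized buffers limit achievable throughput. */
        int bufsz; socklen_t len = sizeof(bufsz);
        if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufsz, &len) == 0)
            printf("SO_SNDBUF = %d bytes\n", bufsz);
        len = sizeof(bufsz);
        if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsz, &len) == 0)
            printf("SO_RCVBUF = %d bytes\n", bufsz);

        close(fd);
        return 0;
    }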
The key to debugging issues like this is to remove the physical network from the test case; in other words, run the TCP application from localhost to localhost and compare that against the transfer rate over the network.
In most cases NIC configuration, bonding configuration, differing TCP RTTs, packet reordering, packet loss rates, or other external factors are the culprits in these kinds of differences. To determine whether it is the network or the server, test from localhost to localhost.
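As a concrete starting point, here is a minimal localhost-to-localhost throughput probe in C: it forks a child that streams data over 127.0.0.1 while the parent measures the receive rate. The port (5001) and transfer size (512 MB) are arbitrary choices for this sketch:

    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define PORT     5001
    #define CHUNK    65536
    #define TOTAL_MB 512

    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(PORT);
        addr.sin_addr.s_addr = inet_addr("127.0.0.1");
        if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
        listen(lfd, 1);

        if (fork() == 0) {                      /* child: the sender */
            int cfd = socket(AF_INET, SOCK_STREAM, 0);
            if (connect(cfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("connect"); _exit(1); }
            char buf[CHUNK] = {0};
            long long remaining = (long long)TOTAL_MB * 1024 * 1024;
            while (remaining > 0) {
                ssize_t n = write(cfd, buf, CHUNK);
                if (n <= 0) break;
                remaining -= n;
            }
            close(cfd);
            _exit(0);
        }

        int sfd = accept(lfd, NULL, NULL);      /* parent: the receiver */
        char buf[CHUNK];
        long long total = 0;
        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        for (;;) {
            ssize_t n = read(sfd, buf, CHUNK);
            if (n <= 0) break;
            total += n;
        }
        gettimeofday(&t1, NULL);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("received %.1f MB in %.2f s -> %.1f MB/s\n",
               total / 1048576.0, secs, total / 1048576.0 / secs);
        close(sfd); close(lfd);
        wait(NULL);
        return 0;
    }

Run it on both the fast and the slow server: if the loopback rates are comparable, the bottleneck is on the network side; if the fast server is still slower over loopback, look at the host itself.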