Asynchronous Http and WebSocket Client library for Java

Overview


Follow @AsyncHttpClient on Twitter.

The AsyncHttpClient (AHC) library allows Java applications to easily execute HTTP requests and asynchronously process HTTP responses. The library also supports the WebSocket Protocol.

It's built on top of Netty. It's currently compiled on Java 8 but runs on Java 9 too.

New Roadmap RFCs!

Well, not really RFCs, but as I am ramping up to release a new version, I would appreciate comments from the community. Please open an issue, label it RFC, and I'll take a look!

This Repository is Actively Maintained

@TomGranot is the current maintainer of this repository. You should feel free to reach out to him in an issue here or on Twitter for anything regarding this repository.

Installation

Binaries are deployed on Maven Central.

Import the AsyncHttpClient Bill of Materials (BOM) to add dependency management for AsyncHttpClient artifacts to your project:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.asynchttpclient</groupId>
            <artifactId>async-http-client-bom</artifactId>
            <version>LATEST_VERSION</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Add a dependency on the main AsyncHttpClient artifact:

<dependencies>
    <dependency>
    	<groupId>org.asynchttpclient</groupId>
    	<artifactId>async-http-client</artifactId>
    </dependency>
</dependencies>

The async-http-client-extras-* and other modules can also be added without having to specify the version for each dependency, because they are all managed via the BOM.

Version

AHC doesn't use semantic versioning (SemVer), and won't.

  • MAJOR = huge refactoring
  • MINOR = new features and minor API changes; upgrading should require at most an hour of work to adapt sources
  • FIX = bug fixes only, no API changes; only FIX releases are source and binary compatible within the same minor version

Check CHANGES.md for migration path between versions.

Basics

Feel free to check the Javadoc or the code for more information.

Dsl

Import the Dsl helpers to use convenient methods to bootstrap components:

import static org.asynchttpclient.Dsl.*;

Client

import static org.asynchttpclient.Dsl.*;

AsyncHttpClient asyncHttpClient = asyncHttpClient();

AsyncHttpClient instances must be closed (call the close method) once you're done with them, typically when shutting down your application. If you don't, you'll experience threads hanging and resource leaks.
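Since AsyncHttpClient implements java.io.Closeable, a short-lived client (in a test, say) can be closed automatically with try-with-resources. A minimal sketch, with a placeholder URL:

```java
import static org.asynchttpclient.Dsl.*;

import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Response;

// try-with-resources calls close() for you when the block exits
try (AsyncHttpClient client = asyncHttpClient()) {
    Response response = client.prepareGet("http://www.example.com/").execute().get();
    System.out.println(response.getStatusCode());
}
// at this point the client's threads and connection pools have been released
```

For a long-lived application, though, prefer a single global client closed once at shutdown, as explained below.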

AsyncHttpClient instances are intended to be global resources that share the lifecycle of the application. AHC will underperform if you create a new client for each request, as each instance creates its own threads and connection pools. It's possible to create shared resources (EventLoop and Timer) beforehand and pass them to multiple client instances in the config. You'll then be responsible for closing those shared resources.

Configuration

Finally, you can also configure the AsyncHttpClient instance via its AsyncHttpClientConfig object:

import static org.asynchttpclient.Dsl.*;

AsyncHttpClient c = asyncHttpClient(config().setProxyServer(proxyServer("127.0.0.1", 38080)));
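The config() builder exposes many more knobs than the proxy. A hedged sketch, assuming the AHC 2.x DefaultAsyncHttpClientConfig.Builder, where timeouts are ints in milliseconds (names and types may differ in other versions):

```java
import static org.asynchttpclient.Dsl.*;

import org.asynchttpclient.AsyncHttpClient;

// assumption: AHC 2.x builder API (int timeouts in milliseconds)
AsyncHttpClient client = asyncHttpClient(config()
        .setConnectTimeout(5_000)       // give up quickly on unreachable hosts
        .setRequestTimeout(60_000)      // overall deadline for a single request
        .setMaxConnectionsPerHost(10)   // cap the pool per remote host
        .setFollowRedirect(true));
```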

HTTP

Sending Requests

Basics

AHC provides two APIs for defining requests: bound and unbound. AsyncHttpClient and Dsl provide methods for standard HTTP methods (GET, POST, PUT, etc.), but you can also pass a custom one.

import org.asynchttpclient.*;

// bound
Future<Response> whenBoundResponse = asyncHttpClient.prepareGet("http://www.example.com/").execute();

// unbound
Request request = get("http://www.example.com/").build();
Future<Response> whenUnboundResponse = asyncHttpClient.executeRequest(request);

Setting Request Body

Use the setBody method to add a body to the request.

This body can be of type:

  • java.io.File
  • byte[]
  • List<byte[]>
  • String
  • java.nio.ByteBuffer
  • java.io.InputStream
  • Publisher<io.netty.buffer.ByteBuf>
  • org.asynchttpclient.request.body.generator.BodyGenerator

BodyGenerator is a generic abstraction that lets you create request bodies on the fly. Have a look at FeedableBodyGenerator if you're looking for a way to pass request chunks on the fly.
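As a simple illustration, a POST with a String body (the URL and payload are placeholders):

```java
import static org.asynchttpclient.Dsl.*;

import java.util.concurrent.Future;
import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Response;

AsyncHttpClient client = asyncHttpClient();
// setBody accepts a String directly; set the Content-Type header yourself
Future<Response> whenResponse = client.preparePost("http://www.example.com/")
        .setHeader("Content-Type", "application/json")
        .setBody("{\"hello\":\"world\"}")
        .execute();
```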

Multipart

Use the addBodyPart method to add a multipart part to the request.

This part can be of type:

  • ByteArrayPart
  • FilePart
  • InputStreamPart
  • StringPart
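A minimal multipart sketch combining a StringPart and a FilePart (the part names and file path are illustrative):

```java
import static org.asynchttpclient.Dsl.*;

import java.io.File;
import java.util.concurrent.Future;
import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Response;
import org.asynchttpclient.request.body.multipart.FilePart;
import org.asynchttpclient.request.body.multipart.StringPart;

AsyncHttpClient client = asyncHttpClient();
Future<Response> whenResponse = client.preparePost("http://www.example.com/upload")
        .addBodyPart(new StringPart("description", "a text field"))
        .addBodyPart(new FilePart("file", new File("/tmp/report.pdf")))
        .execute();
```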

Dealing with Responses

Blocking on the Future

execute methods return a java.util.concurrent.Future. You can simply block the calling thread to get the response.

Future<Response> whenResponse = asyncHttpClient.prepareGet("http://www.example.com/").execute();
Response response = whenResponse.get();

This is useful for debugging, but you'll most likely hurt performance or create bugs when running such code in production. The point of using a non-blocking client is to NOT BLOCK the calling thread!

Setting callbacks on the ListenableFuture

execute methods actually return a org.asynchttpclient.ListenableFuture similar to Guava's. You can configure listeners to be notified of the Future's completion.

ListenableFuture<Response> whenResponse = ???;
Runnable callback = () -> {
	try {
		Response response = whenResponse.get();
		System.out.println(response);
	} catch (InterruptedException | ExecutionException e) {
		e.printStackTrace();
	}
};
java.util.concurrent.Executor executor = ???;
whenResponse.addListener(callback, executor);

If the executor parameter is null, the callback will be executed in the IO thread. You MUST NEVER PERFORM BLOCKING operations in there, such as sending another request and blocking on its future.

Using custom AsyncHandlers

execute methods can take an org.asynchttpclient.AsyncHandler to be notified of the different events, such as receiving the status, the headers, and body chunks. When you don't specify one, AHC will use an org.asynchttpclient.AsyncCompletionHandler.

AsyncHandler methods can let you abort processing early (return AsyncHandler.State.ABORT) and can let you return a computation result from onCompleted that will be used as the Future's result. See AsyncCompletionHandler implementation as an example.

The sample below just captures the response status and skips processing the response body chunks.

Note that returning ABORT closes the underlying connection.

import static org.asynchttpclient.Dsl.*;
import org.asynchttpclient.*;
import io.netty.handler.codec.http.HttpHeaders;

Future<Integer> whenStatusCode = asyncHttpClient.prepareGet("http://www.example.com/")
.execute(new AsyncHandler<Integer>() {
	private Integer status;
	@Override
	public State onStatusReceived(HttpResponseStatus responseStatus) throws Exception {
		status = responseStatus.getStatusCode();
		return State.ABORT;
	}
	@Override
	public State onHeadersReceived(HttpHeaders headers) throws Exception {
		return State.ABORT;
	}
	@Override
	public State onBodyPartReceived(HttpResponseBodyPart bodyPart) throws Exception {
		return State.ABORT;
	}
	@Override
	public Integer onCompleted() throws Exception {
		return status;
	}
	@Override
	public void onThrowable(Throwable t) {
	}
});

Integer statusCode = whenStatusCode.get();

Using Continuations

ListenableFuture has a toCompletableFuture method that returns a CompletableFuture. Beware that canceling this CompletableFuture won't properly cancel the ongoing request. There's a very good chance we'll return a CompletionStage instead in the next release.

CompletableFuture<Response> whenResponse = asyncHttpClient
            .prepareGet("http://www.example.com/")
            .execute()
            .toCompletableFuture()
            .exceptionally(t -> { /* Something wrong happened... */ return null; })
            .thenApply(response -> { /* Do something with the Response */ return response; });
whenResponse.join(); // wait for completion

You can find the complete Maven project for this simple demo in org.asynchttpclient.example.

WebSocket

Async Http Client also supports WebSocket. You need to pass a WebSocketUpgradeHandler where you would register a WebSocketListener.

WebSocket websocket = c.prepareGet("ws://demos.kaazing.com/echo")
      .execute(new WebSocketUpgradeHandler.Builder().addWebSocketListener(
          new WebSocketListener() {

              @Override
              public void onOpen(WebSocket websocket) {
                  websocket.sendTextFrame("...").sendTextFrame("...");
              }

              @Override
              public void onClose(WebSocket websocket) {
              }

              @Override
              public void onTextFrame(String payload, boolean finalFragment, int rsv) {
                  System.out.println(payload);
              }

              @Override
              public void onError(Throwable t) {
              }
          }).build()).get();

Reactive Streams

AsyncHttpClient has built-in support for reactive streams.

You can pass a request body as a Publisher<ByteBuf> or a ReactiveStreamsBodyGenerator.

You can also pass a StreamedAsyncHandler<T> whose onStream method will be notified with a Publisher<HttpResponseBodyPart>.

See tests in package org.asynchttpclient.reactivestreams for examples.

WebDAV

AsyncHttpClient has built-in support for the WebDAV protocol. The API can be used the same way normal HTTP requests are made:

Request mkcolRequest = new RequestBuilder("MKCOL").setUrl("http://host:port/folder1").build();
Response response = c.executeRequest(mkcolRequest).get();

or

Request propFindRequest = new RequestBuilder("PROPFIND").setUrl("http://host:port").build();
Response response = c.executeRequest(propFindRequest, new AsyncHandler() {
  // ...
}).get();

More

You can find more information on Jean-François Arcand's blog. Jean-François is the original author of this library. The code there is sometimes out of date, but it gives a good overview of advanced features.

User Group

Keep up to date on the library development by joining the Asynchronous HTTP Client discussion group:

Google Group

Contributing

Of course, Pull Requests are welcome.

Here are the few rules we'd like you to respect if you do so:

  • Only edit the code related to the suggested change, so DON'T automatically format the classes you've edited.
  • Use IntelliJ default formatting rules.
  • Regarding licensing:
    • You must be the original author of the code you suggest.
    • You must give the copyright to "the AsyncHttpClient Project".
Issues
  • Grizzly provider TimeoutException making async requests

    Grizzly provider TimeoutException making async requests

    When making async requests using the Grizzly provider (from AHC 2.0.0-SNAPSHOT), I get some TimeoutExceptions that should not occur. The server is serving these requests very rapidly, and the JVM isn't GCing very much. The requests serve in a fraction of a second, but the Grizzly provider says they timed out after 9 seconds. If I set the Grizzly provider's timeout to a higher number of seconds then it times out after that many seconds instead..

    Some stack trace examples:

    java.util.concurrent.TimeoutException: Timeout exceeded at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider.timeout(GrizzlyAsyncHttpProvider.java:485) at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider$3.onTimeout(GrizzlyAsyncHttpProvider.java:276) at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:382) at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:362) at org.glassfish.grizzly.utils.DelayedExecutor$DelayedRunnable.run(DelayedExecutor.java:158) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722)


    another stack trace:

    java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timeout exceeded at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252) at java.util.concurrent.FutureTask.get(FutureTask.java:111) at org.asynchttpclient.providers.grizzly.GrizzlyResponseFuture.get(GrizzlyResponseFuture.java:165) at org.ebaysf.webclient.benchmark.NingAhcGrizzlyBenchmarkTest.asyncWarmup(NingAhcGrizzlyBenchmarkTest.java:105) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.ebaysf.webclient.benchmark.AbstractBenchmarkTest.doBenchmark(AbstractBenchmarkTest.java:168) at org.ebaysf.webclient.benchmark.NingAhcGrizzlyBenchmarkTest.testAsyncLargeResponses(NingAhcGrizzlyBenchmarkTest.java:84) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:45) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:119) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:101) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103) at com.sun.proxy.$Proxy0.invoke(Unknown Source) at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150) at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69) Caused by: java.util.concurrent.TimeoutException: Timeout exceeded at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider.timeout(GrizzlyAsyncHttpProvider.java:485) at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider$3.onTimeout(GrizzlyAsyncHttpProvider.java:276) at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:382) at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:362) at org.glassfish.grizzly.utils.DelayedExecutor$DelayedRunnable.run(DelayedExecutor.java:158) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722)

    Here's what my asyncWarmup() method looks like:

    public void asyncWarmup(final String testUrl) {
        List<Future<Response>> futures = new ArrayList<Future<Response>>(warmupRequests);
        for (int i = 0; i < warmupRequests; i++) {
            try {
                futures.add(this.client.prepareGet(testUrl).execute());
            } catch (IOException e) {
                System.err.println("Failed to execute get at iteration #" + i);
            }
        }
    
        for (Future<Response> future : futures) {
            try {
                future.get();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
    }
    

    And here's how the client is initialized:

    @Override
    protected void setup() {
        super.setup();
    
        GrizzlyAsyncHttpProviderConfig providerConfig = new GrizzlyAsyncHttpProviderConfig();
        AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
                .setAsyncHttpClientProviderConfig(providerConfig)
                .setMaximumConnectionsTotal(-1)
                .setMaximumConnectionsPerHost(4500)
                .setCompressionEnabled(false)
                .setAllowPoolingConnection(true /* keep-alive connection */)
                // .setAllowPoolingConnection(false /* no keep-alive connection */)
                .setConnectionTimeoutInMs(9000).setRequestTimeoutInMs(9000)
                .setIdleConnectionInPoolTimeoutInMs(3000).build();
    
        this.client = new AsyncHttpClient(new GrizzlyAsyncHttpProvider(config), config);
    
    }
    
    Grizzly 
    opened by jbrittain 55
  • Allow DefaultSslEngineFactory subclass customization of the SslContext

    Allow DefaultSslEngineFactory subclass customization of the SslContext

    See #1170 for context.

    If you have ideas for how to usefully test this, I'm happy to write them up, but it wasn't obvious to me how to usefully test this change.

    Enhancement 
    opened by marshallpierce 38
  • AsyncHttpClient does not close sockets under heavy load (1.9 only)

    AsyncHttpClient does not close sockets under heavy load (1.9 only)

    If you create 1000 requests in a very short time frame and use connection pool with AsyncHttpClient 1.9.21 and Netty 3.10.1, then some sockets will leak and stay open even past the idle socket reaper. This was initially filed as https://github.com/playframework/playframework/issues/5215, but can be replicated without Play WS.

    Created a reproducing test case here: https://github.com/wsargent/asynchttpclient-socket-leak

    if you have 50 requests, then they'll all be closed immediately. if you have 1000 requests, they'll stay open for a while. After roughly two minutes, AHC will close off all idle sockets, but up to 30 will never die and will always be established.

    To see the dangling sockets, run the id of the java process:

    sudo lsof -i | grep 31602
    

    You'll see

    java      31602       wsargent   89u  IPv6 0xe1b25a8062380645      0t0  TCP 192.168.1.106:58646->ec2-54-173-126-144.compute-1.amazonaws.com:https (ESTABLISHED)
    

    The client port number is your key into the application: if you search for "58646" in application.log, then you'll see that there's a connection associated with it:

    2015-11-02 20:41:38,496 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #1 - [id: 0x5650b318, /192.168.1.106:58646 => playframework.com/54.173.126.144:443] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=2357, cap=2357)
    

    You can see the lifecycle of a handle by using grep:

    grep "0x5650b318" application.log
    

    and what's interesting is that while most ids will have a CLOSE / CLOSED lifecycle associated with them:

    2015-11-02 20:41:45,878 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in Hashed wheel timer #1 - [id: 0x34804fcc, /192.168.1.106:59122 => playframework.com/54.173.126.144:443] WRITE: BigEndianHeapChannelBuffer(ridx=0, widx=69, cap=69)
    2015-11-02 20:41:46,427 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 => playframework.com/54.173.126.144:443] CLOSE
    2015-11-02 20:41:46,427 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 :> playframework.com/54.173.126.144:443] DISCONNECTED
    2015-11-02 20:41:46,434 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 :> playframework.com/54.173.126.144:443] UNBOUND
    2015-11-02 20:41:46,434 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 :> playframework.com/54.173.126.144:443] CLOSED
    

    In the case of "0x5650b318", there's no CLOSE event happening here. In addition, there's a couple of lines that say it's a cached channel:

    2015-11-02 20:41:33,340 [DEBUG] from com.ning.http.client.providers.netty.request.NettyRequestSender in default-akka.actor.default-dispatcher-4 - Using cached Channel [id: 0x5650b318, /192.168.1.106:58646 => playframework.com/54.173.126.144:443]
    2015-11-02 20:41:33,340 [DEBUG] from com.ning.http.client.providers.netty.request.NettyRequestSender in default-akka.actor.default-dispatcher-4 - Using cached Channel [id: 0x5650b318, /192.168.1.106:58646 => playframework.com/54.173.126.144:443]
    

    So I think Netty is not closing cached channels even if they are idle, in some circumstances.

    Contributions Welcome! Defect Netty 
    opened by wsargent 37
  • FeedableBodyGenerator - LEAK: ByteBuf.release() was not called before it's garbage-collected.

    FeedableBodyGenerator - LEAK: ByteBuf.release() was not called before it's garbage-collected.

    When I use custom FeedableBodyGenerator or SimpleFeedableBodyGenerator I see the next error message:

    [error] i.n.u.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected. Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak reporting, specify the JVM option '-Dio.netty.leakDetection.level=advanced' or call ResourceLeakDetector.setLevel() See http://netty.io/wiki/reference-counted-objects.html for more information.

    I guess the main problem is here: https://github.com/netty/netty/blob/4.0/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java#L990-L992 they just assign null to message, but should also call 'release' (ReferenceCountUtil.release(msg);) Am I right?

    Defect 
    opened by mielientiev 37
  • [release] 2.0.0-beta1

    [release] 2.0.0-beta1

    @rlubke @slandelle Any objection? What is missing is a small documentation update to describe how to hook up the new artifact.

    opened by jfarcand 30
  • IOException: Too many connections per host <#>

    IOException: Too many connections per host <#>

    After we set .setMaxConnectionsPerHost(64), our server seemed to happily work. We can pound it with traffic and see very few issues with connections since its so efficient at pooling connections that are in good condition.

    Lib: async-http-client 1.9.15 java version "1.7.0_67"

    After a while, however (about 24 hours), we start getting the above exception coming from the ChannelManager.

    Looking at the NettyResponseListener code, I noticed something odd.

    https://github.com/AsyncHttpClient/async-http-client/blob/b85d5b3505d9f6e80d278fef88876f6546e73079/providers/netty4/src/main/java/org/asynchttpclient/providers/netty4/request/NettyRequestSender.java

    In NettyRequestSender.sendRequestWithNewChannel(), there's this bit:

            boolean channelPreempted = false;
            String partition = null;
    
            try {            // Do not throw an exception when we need an extra connection for a
                // redirect.
                if (!reclaimCache) {
    
                    // only compute when maxConnectionPerHost is enabled
                    // FIXME clean up
                    if (config.getMaxConnectionsPerHost() > 0)
                        partition = future.getPartitionId();
    
                    channelManager.preemptChannel(partition);
                }
    
                if (asyncHandler instanceof AsyncHandlerExtensions)
                    AsyncHandlerExtensions.class.cast(asyncHandler).onOpenConnection();
    
                ChannelFuture channelFuture = connect(request, uri, proxy, useProxy, bootstrap, asyncHandler);
                channelFuture.addListener(new NettyConnectListener<T>(future, this, channelManager, channelPreempted, partition));
    
            } catch (Throwable t) {
                if (channelPreempted)
                    channelManager.abortChannelPreemption(partition);
    
                abort(null, future, t.getCause() == null ? t : t.getCause());
            }
    

    If you notice, channelPreempted never gets written to. Isn't channelPreempted = true missing from the block where the channel is preempted?

    Shouldn't it be:

            boolean channelPreempted = false;
            String partition = null;
    
            try {            // Do not throw an exception when we need an extra connection for a
                // redirect.
                if (!reclaimCache) {
    
                    // only compute when maxConnectionPerHost is enabled
                    // FIXME clean up
                    if (config.getMaxConnectionsPerHost() > 0)
                        partition = future.getPartitionId();
    
                    channelManager.preemptChannel(partition);
                    channelPreempted = true;
                }
    
                if (asyncHandler instanceof AsyncHandlerExtensions)
                    AsyncHandlerExtensions.class.cast(asyncHandler).onOpenConnection();
    
                ChannelFuture channelFuture = connect(request, uri, proxy, useProxy, bootstrap, asyncHandler);
                channelFuture.addListener(new NettyConnectListener<T>(future, this, channelManager, channelPreempted, partition));
    
            } catch (Throwable t) {
                if (channelPreempted)
                    channelManager.abortChannelPreemption(partition);
    
                abort(null, future, t.getCause() == null ? t : t.getCause());
            }
    

    The same class for netty3 has the correct code:

    https://github.com/AsyncHttpClient/async-http-client/blob/b85d5b3505d9f6e80d278fef88876f6546e73079/providers/netty3/src/main/java/org/asynchttpclient/providers/netty3/request/NettyRequestSender.java

    Waiting for user 
    opened by yoeduardoj 27
  • Add host and port to SSLEngineFactory

    Add host and port to SSLEngineFactory

    Fixes https://github.com/AsyncHttpClient/async-http-client/issues/513

    opened by wsargent 26
  • Grizzly provider fails to handle HEAD with Content-Length header

    Grizzly provider fails to handle HEAD with Content-Length header

    I am trying to use Grizzly provider (v1.7.6), and noticed a timeout for simple HEAD request. Since this is local test, with 15 second timeout, it looks like this is due to blocking. Same does not happen with Netty provider.

    My best guess to underlying problem is that Grizzly provider expects there to be content to read since Content-Length is returned. This would be incorrect assumption, since HTTP specification explicitly states that HEAD requests may contain length indicator, but there is never payload entity to return.

    Looking at Netty provider code, I can see explicit handling for this use case, where connection is closed and any content flushes (in case server did send something). I did not see similar handling in Grizzly provider, but since implementation code structure is very different it may reside somewhere else.

    opened by cowtowncoder 24
  • Spawning AHC 2.0 w/ Netty 4 instances very fast leads to fd/thread/memory starvation

    Spawning AHC 2.0 w/ Netty 4 instances very fast leads to fd/thread/memory starvation

    After updating the Play codebase to AHC 2.0-alpha9, I've started to experience issues with non terminating tests because of OutOfMemoryException. Until today, I wrongly assumed this was caused by the changes I made in AHC to support reactive streams, but the assumption turned out to be wrong. In fact, I can reproduce even by using AHC 2.0-alpha8, which doesn't include the reactive stream support.

    Here is a link to the truly long thread dump demonstrating that AHC 2.0-alpha8 is leaking threads https://gist.github.com/dotta/6e388962cf0d904e8170

    This issue is currently preventing https://github.com/playframework/playframework/pull/5082 to successfully build.

    Defect Netty 
    opened by dotta 23
  • Backpressure in AsyncHandler

    Backpressure in AsyncHandler

    AsyncHandler provides no mechanism to send back pressure on receiving the body parts.

    Imagine you have a server that stores large files on Amazon S3, and streams them out to clients, using async http client to connect to S3. Now imagine you have a very slow client, that connects and downloads a file. The slow client pushes back on the server via TCP. However, async http client will keep on calling onBodyPartReceived as fast as S3 provides it with data. The AsyncHandler implementation will have three choices:

    1. Block. Then it's blocking a worker thread, preventing other concurrent operations from happening. This is not an option.
    2. Buffer. Eventually this will cause an OutOfMemoryError. This is not an option.
    3. Drop. Then the client gets a corrupted file. This is not an option.

    AsyncHandler therefore needs a mechanism to propagate back pressure when handling body parts. One possibility here is to provide a method to say whether you are interested in receiving more data or not. This would correspond to a Channel.setReadable(true/false) in the netty provider, which will push back via TCP flow control. This could either be provided by injecting some sort of "channel" object into the AsyncHandler, or, since HttpResponseBodyPart already provides mechanisms for talking back to the channel (eg closeUnderlyingConnection()), it could be provided there.

    Contributions Welcome! Enhancement 
    opened by jroper 23
  • Is there any utility class to create an immediate Future for tests?

    Is there any utility class to create an immediate Future for tests?

    Maybe something similar to Guava's Future

    opened by liriarte 0
  • Testing Ci with Azure Pipelines

    Testing Ci with Azure Pipelines

    This is the default Azure Pipelines YAML file, adapted for our test goal. Let's see what gives.

    opened by TomGranot 1
  • Always remove subscriber from pipeline when completing a reactive stream

    Always remove subscriber from pipeline when completing a reactive stream

    • Only send LastHttpContent if channel is active
    • Avoid async scheduling if possible
    opened by hamnis 2
  • Fix tests

    Fix tests

    • Using apache.org for options does not seem to work.
    • Disabled an assertion which is system specific in SpnegoEngineTest
    • Add extra allowed dns message when on Java 11 in TextMessageTest
    opened by hamnis 0
  • Thread pool using memory

    Thread pool using memory

    Hi,

    We are using this http client at the Selenium project, we switched to Netty and we found this client's implementation suitable for what we want to do.

    When doing requests, we block on the future because that is the nature of WebDriver commands, they basically follow a request-response pattern before the client sending requests can move to the next step.

    We saw that memory usage went up and we identified several AsyncHttpClient threads using memory.

    After checking the README and some messages in the Google group, we saw that it is better to use a single client instance and reuse it, so we did that. This improved things a little.

    However, the thread pool grows to # of processors * 2, and still the memory is never released. To be more precise, memory usage climbs until the thread pool is full; after that it stays more or less stable.

    Nevertheless, when the application using the HTTP client is idle, one would expect the memory to be reclaimed by the garbage collector, and this does not happen.

    Do you have any advice on reclaiming the memory used by the thread pool? Or is this a matter of finding the right pool size through setIoThreadsCount()?

    Thanks in advance for your help,

    Diego
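    For reference, the pattern the README recommends is exactly what was described: one shared client, sized explicitly. A hedged sketch of that configuration follows (this assumes the AHC 2.x `Dsl` builder API; verify the setter names against the version actually in use):

```java
import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Dsl;

public class SharedClient {
    // One client for the whole application: its Netty event-loop group and
    // pooled buffers are then allocated once, not per burst of requests.
    private static final AsyncHttpClient CLIENT = Dsl.asyncHttpClient(
            Dsl.config()
               .setIoThreadsCount(2)                   // default is cores * 2; lower it to cap idle footprint
               .setPooledConnectionIdleTimeout(60_000) // ms; let idle pooled connections be released
               .build());

    public static AsyncHttpClient client() {
        return CLIENT;
    }
}
```

    Note that much of the retained memory in such heap dumps is Netty's pooled buffer arenas, which stay allocated for the lifetime of the event-loop threads by design; that is expected behavior of the pooled allocator rather than a leak, and a smaller I/O thread count bounds it.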

    opened by diemol 0
  • Insecure cipher and hash function usage

    Insecure cipher and hash function usage

    Hi there, we found the following places using insecure ciphers and hash functions:

    /home/xwt/IdeaProjects/async-http-client-latest/client/src/main/java/org/asynchttpclient/util/MessageDigestUtils.java:23: error: [algorithm.not.allowed] Algorithm: MD5 is not allowed by the current rules
          return MessageDigest.getInstance("MD5");
                                           ^
    /home/xwt/IdeaProjects/async-http-client-latest/client/src/main/java/org/asynchttpclient/util/MessageDigestUtils.java:31: error: [algorithm.not.allowed] Algorithm: SHA1 is not allowed by the current rules
          return MessageDigest.getInstance("SHA1");
                                           ^
    /home/xwt/IdeaProjects/async-http-client-latest/client/src/main/java/org/asynchttpclient/ntlm/NtlmEngine.java:505: error: [algorithm.not.allowed] Algorithm: MD5 is not allowed by the current rules
                final MessageDigest md5 = MessageDigest.getInstance("MD5");
                                                                    ^
    /home/xwt/IdeaProjects/async-http-client-latest/client/src/main/java/org/asynchttpclient/ntlm/NtlmEngine.java:1464: error: [algorithm.not.allowed] Algorithm: MD5 is not allowed by the current rules
                    md5 = MessageDigest.getInstance("MD5");
                                                    ^
    /home/xwt/IdeaProjects/async-http-client-latest/client/src/main/java/org/asynchttpclient/ntlm/NtlmEngine.java:446: error: [algorithm.not.allowed] Algorithm: DES/ECB/NOPADDING is not allowed by the current rules
                        Cipher des = Cipher.getInstance("DES/ECB/NoPadding");
                                                        ^
    /home/xwt/IdeaProjects/async-http-client-latest/client/src/main/java/org/asynchttpclient/ntlm/NtlmEngine.java:449: error: [algorithm.not.allowed] Algorithm: DES/ECB/NOPADDING is not allowed by the current rules
                        des = Cipher.getInstance("DES/ECB/NoPadding");
                                                 ^
    /home/xwt/IdeaProjects/async-http-client-latest/client/src/main/java/org/asynchttpclient/ntlm/NtlmEngine.java:473: error: [algorithm.not.allowed] Algorithm: RC4 is not allowed by the current rules
                final Cipher rc4 = Cipher.getInstance("RC4");
                                                      ^
    /home/xwt/IdeaProjects/async-http-client-latest/client/src/main/java/org/asynchttpclient/ntlm/NtlmEngine.java:538: error: [algorithm.not.allowed] Algorithm: DES/ECB/NOPADDING is not allowed by the current rules
                final Cipher des = Cipher.getInstance("DES/ECB/NoPadding");
                                                      ^
    /home/xwt/IdeaProjects/async-http-client-latest/client/src/main/java/org/asynchttpclient/ntlm/NtlmEngine.java:626: error: [algorithm.not.allowed] Algorithm: DES/ECB/NOPADDING is not allowed by the current rules
                final Cipher des = Cipher.getInstance("DES/ECB/NoPadding");
                                                      ^
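    It is worth noting that most of these algorithms are dictated by the protocols being implemented: NTLM is specified in terms of MD5, DES, and RC4, and HTTP Digest auth historically uses MD5, so they cannot simply be swapped out without breaking interoperability. Where a checker flags code that is free to choose its hash, the usual fix is SHA-256; a minimal stdlib example:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestDemo {

    // SHA-256 instead of MD5/SHA-1, wherever the protocol does not pin the algorithm.
    static String sha256Hex(String input) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(input.getBytes(StandardCharsets.UTF_8));
            // Left-pad to 64 hex chars (256 bits).
            return String.format("%064x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA-256 is mandatory in every JRE", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(sha256Hex("abc"));
        // Well-known test vector:
        // ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
    }
}
```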
    
    opened by xingweitian 0
  • Transfer-Encoding: chunked in response header and channel reuse

    Transfer-Encoding: chunked in response header and channel reuse

    Hello,

    we face the following issue with AsyncHttpClient. When the response is chunked, reusing the open channel from the previous request does not work and runs into a timeout, delaying the HTTP request.

    Situation:

    Application sends request A to a remote destination

    2021 03 18 09:35:42#+00#DEBUG#org.asynchttpclient.netty.channel.NettyConnectListener##xxx#AsyncHttpClient-113-4###ed86a5e0a#na#na#na#na#Using new Channel '[id: 0x6b4eed08, L:/SOURCE_IP:39394 - R:DEST_HOST/IP:443]' for 'GET' to '/url'|

    Here request A is sent and response received (quickly, 200 ms) with Transfer-Encoding: chunked!!!

    2021 03 18 09:35:42#+00#TRACE#io.netty.handler.logging.LoggingHandler##xxx#AsyncHttpClient-113-4###ed86a5e0a#na#na#na#na#[id: 0x6b4eed08, L:/SOURCE_IP:39394 - R:DEST_HOST/IP:443] READ: 3875B

    Netty is actually still waiting for the next chunk of HTTP response, but the channel is immediately added to the pool

    2021 03 18 09:35:43#+00#DEBUG#org.asynchttpclient.netty.channel.ChannelManager##xxx#AsyncHttpClient-113-4###ed86a5e0a#na#na#na#na#Adding key: https://DEST_HOST:443 for channel [id: 0x6b4eed08, L:/SOURCE_IP:39394 - R:eu.DEST_HOST/IP:443]| 2021 03 18 09:35:43#+00#DEBUG#org.asynchttpclient.netty.channel.DefaultChannelPool##anonymous#AsyncHttpClient-timer-112-1####na#na#na#na#Entry count for : https://DEST_HOST:443 : 1|

    Application decides to send the next request B to the same destination. The pooled channel is used, although Netty is still waiting for the next chunk of request A!

    2021 03 18 09:35:44#+00#DEBUG#org.asynchttpclient.netty.request.NettyRequestSender##xxx#default-workqueue-6###ed86a5e0a#na#na#na#na#Using pooled Channel '[id: 0x6b4eed08, L:/SOURCE_IP:39394 - R:DEST_HOST/IP:443]' for 'GET' to 'https://DEST_HOST/url'| 2021 03 18 09:35:44#+00#DEBUG#org.asynchttpclient.netty.request.NettyRequestSender##xxx#default-workqueue-6###ed86a5e0a#na#na#na#na#Using open Channel [id: 0x6b4eed08, L:/SOURCE_IP:39394 - R:DEST_HOST/IP:443] for GET '/url'|

    Request B is not actually sent out, because Netty is still busy waiting for the next chunk from the previous request.

    After around 3 minutes of waiting, Netty figures out that no further chunks will come and the READ operation finally completes for request A:

    2021 03 18 09:38:43#+00#TRACE#io.netty.handler.logging.LoggingHandler##xxx#AsyncHttpClient-113-4###ed86a5e0a#na#na#na#na#[id: 0x6b4eed08, L:/10.215.181.236:39394 - R:DEST_HOST/IP:443] READ COMPLETE|

    The channel is closed immediately after that

    2021 03 18 09:38:43#+00#DEBUG#org.asynchttpclient.netty.handler.HttpHandler##xxx#AsyncHttpClient-113-4###ed86a5e0a#na#na#na#na#Channel Closed: [id: 0x6b4eed08, L:/SOURCE_IP:39394 ! R:DEST_HOST/IP:443] with attribute NettyResponseFuture{currentRetry=0, isDone=0, isCancelled=0, asyncHandler=AhcAsyncHandler for exchangeId: ID-xxxx-1615703537064-62-74 -> https://DEST_HOST/url, [email protected]d1df, [email protected][Not completed], uri=https://DEST_HOST/url, keepAlive=true, redirectCount=0, [email protected]685873b5, inAuth=0, touch=1616060323740}|

    Recovering request B works, but nearly 3 minutes were wasted:

    2021 03 18 09:38:43#+00#DEBUG#org.asynchttpclient.netty.request.NettyRequestSender##xxx#AsyncHttpClient-113-4###ed86a5e0a#na#na#na#na#Trying to recover request DefaultFullHttpRequest(decodeResult: success, version: HTTP/1.1, content: EmptyByteBufBE)

    From my point of view there seems to be a miscommunication between the channel pool and Netty: a channel that is still busy reading should not be added to the pool and reused by other requests.

    Thanks, Yuri
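    Until the pooling behavior itself is fixed, one blunt workaround is to keep channels off the pool entirely by disabling keep-alive, so every request gets a fresh channel. This trades connection reuse for correctness; the sketch below assumes the AHC 2.x `Dsl` builder API:

```java
import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Dsl;

public class NoReuseClient {
    // With keep-alive off, channels are closed after each response instead of
    // being returned to the pool, so a channel still mid-way through a chunked
    // read can never be handed to the next request.
    static AsyncHttpClient build() {
        return Dsl.asyncHttpClient(Dsl.config()
                .setKeepAlive(false)
                .build());
    }
}
```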

    opened by YurZir 1
  • Too many threads waiting, low send speed, and OOM. Need help, thanks!

    Too many threads waiting, low send speed, and OOM. Need help, thanks!

    Problem Suspect 1

    188 instances of "io.netty.channel.nio.NioEventLoopGroup", loaded by "java.net.URLClassLoader @ 0x9707a360" occupy 931.71 MB (58.73%) bytes.

    Keywords io.netty.channel.nio.NioEventLoopGroup java.net.URLClassLoader @ 0x9707a360

    Details »

    Problem Suspect 2

    20 instances of "io.netty.buffer.PoolArena$HeapArena", loaded by "java.net.URLClassLoader @ 0x9707a360" occupy 304.36 MB (19.19%) bytes.

    Biggest instances: nineteen io.netty.buffer.PoolArena$HeapArena instances of 16.02 MB (1.01%) each, at addresses 0x847a2d88 through 0x97ec1f38, all referenced from one instance of "io.netty.buffer.PoolArena[]", loaded by "java.net.URLClassLoader @ 0x9707a360"

    The accompanying thread dump lists several hundred AsyncHttpClient-* worker threads, almost all of them WAITING and only a handful RUNNABLE, e.g.:

    "AsyncHttpClient-5073-62" prio=5 tid=195418 WAITING "AsyncHttpClient-4989-18" prio=5 tid=192121 WAITING "AsyncHttpClient-5043-31" prio=5 tid=194190 WAITING "AsyncHttpClient-5357-9" prio=5 tid=206213 RUNNABLE [... several hundred further entries, nearly all WAITING ...]

    opened by Christian-health 0
  • Closing channels on request streaming failures

    Closing channels on request streaming failures

    I'm using the reactive streams mode of the AsyncHttpClient library and have uncovered some behavior I feel may be incorrect.

    Take the following example:

    1. User creates a request and specifies the body as a Publisher<ByteBuf> (code)
    2. Request is executed, body begins streaming
    3. During the streaming of the body, an exception occurs. The body publisher's subscriber's onError method (code) is called to signal this error

    When this occurs, I am observing that the connection to the external service is left open and no error is signaled (an error can probably only be signaled by closing the connection). This means that the external service sits there waiting indefinitely (or, if it configures some timeout, until that timeout). The reason this happens is that NettyReactiveStreamsBody overrides the error behavior and simply aborts the future, thus leaving the channel intact (override code) (abort code)

    Why do I think this is incorrect behavior? Well, I am probably wrong, but to me it seems that once there is an error, there is no good way to recover. Assuming the streaming behavior is not replayable (it usually isn't, because you are loading the bytes from elsewhere and typically overwriting buffers to save space, hence "streaming"), then this request cannot be saved. If the request cannot be saved, then the "right" thing to do is to signal that to the downstream service being called. This can be done by closing the channel (and subsequently the connection). This may not be ideal in HTTP/2 if multiple requests are using the same channel, but in HTTP/1.1, closing the channel seems appropriate. (Last I checked, this library didn't support H2?)

    I went back and looked at the commit that added the code and its pull request. It actually seems like the decision to call abort instead of cancel was not necessarily an intentional one. Here is the pull request. There is a comment stating "Shouldn't this method be capturing the NettyResponseFuture, and at very least invoking future.abort in the error callback?" which leads me to believe the decision to call abort was not well thought out.

    What am I asking? If I am correct, and cancelling the connection is the correct thing to do, then I think this line should be changed to

    future.cancel(<force>)
    

    Perhaps, if this is an issue when using HTTP/2, a check could be added to only do this when the protocol in use is HTTP/1.1.

    I can attempt to make the change if it is deemed the correct way to handle this case.

    Thanks

    opened by sanjams2 1
  • Websocket connection is lost without notifying client

    Websocket connection is lost without notifying client

    I am using the async HTTP WebSocket client to send requests, and I have observed many times that the connection is lost without the onClose callback being called. The WebSocket stays open (websocket.isOpen is still true) but no requests are sent to the server. A connection interrupt doesn't close the Netty WebSocket and doesn't trigger the disconnected event. It is difficult to catch the disconnection, so I am not able to retry the connection. I have retry logic in the onClose callback, but for this issue onClose is never even called. I see there is no support for onConnectionLost as in https://github.com/TooTallNate/Java-WebSocket/issues/473, and implementing custom heartbeats through ping/pong would be tricky. Please help.
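    AHC's `WebSocket` does expose ping/pong frames (`sendPingFrame()` and the corresponding listener callbacks), so an application-level heartbeat is feasible even without a built-in onConnectionLost. The transport wiring is deliberately left out below; this is only a stdlib-only sketch of the watchdog logic (all names are illustrative): a scheduled task checks whether the last pong is overdue before sending the next ping, and treats an overdue pong as a lost connection.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical heartbeat watchdog: the transport-specific parts (sending the
// ping frame, closing the socket, re-connecting) are left to the caller.
public class HeartbeatWatchdog {
    private final long timeoutNanos;
    private final AtomicLong lastPongNanos;

    HeartbeatWatchdog(long timeoutNanos, long nowNanos) {
        this.timeoutNanos = timeoutNanos;
        this.lastPongNanos = new AtomicLong(nowNanos);
    }

    // Call from the listener's pong callback.
    void onPong(long nowNanos) {
        lastPongNanos.set(nowNanos);
    }

    // Call from a scheduled task right before sending the next ping:
    // if the previous pong never arrived in time, treat the connection as lost.
    boolean isAlive(long nowNanos) {
        return nowNanos - lastPongNanos.get() <= timeoutNanos;
    }

    public static void main(String[] args) {
        HeartbeatWatchdog w = new HeartbeatWatchdog(5_000_000_000L, 0L);
        w.onPong(1_000_000_000L);
        System.out.println(w.isAlive(3_000_000_000L)); // pong recent enough
        System.out.println(w.isAlive(9_000_000_000L)); // pong overdue: reconnect
    }
}
```

    Passing timestamps in explicitly (instead of calling `System.nanoTime()` inside) keeps the logic deterministic and easy to unit-test.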

    opened by rgupt102 0