Netty project - an event-driven asynchronous network application framework

Overview


Netty Project

Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients.
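
As a rough illustration of what that looks like in practice, here is a minimal sketch of a TCP echo server built with the standard Netty 4.x bootstrap API (the class name and port are chosen for the example):

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public final class EchoServer {
        public static void main(String[] args) throws Exception {
            EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts new connections
            EventLoopGroup workers = new NioEventLoopGroup(); // handles the accepted channels
            try {
                ServerBootstrap bootstrap = new ServerBootstrap()
                        .group(boss, workers)
                        .channel(NioServerSocketChannel.class)
                        .childHandler(new ChannelInitializer<SocketChannel>() {
                            @Override
                            protected void initChannel(SocketChannel ch) {
                                ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                                    @Override
                                    public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                        ctx.writeAndFlush(msg); // echo the received ByteBuf back
                                    }
                                });
                            }
                        });
                bootstrap.bind(8080).sync().channel().closeFuture().sync();
            } finally {
                boss.shutdownGracefully();
                workers.shutdownGracefully();
            }
        }
    }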

Links

  • Web site: https://netty.io/
  • Documentation: https://netty.io/wiki/

How to build

For detailed information about building and developing Netty, please visit the developer guide. This page only gives very basic information.

You require a suitable JDK and Apache Maven to build Netty (the tree also ships the Maven Wrapper).

Note that this is a build-time requirement. JDK 5 (for 3.x) or 6 (for 4.0+ / 4.1+) is enough to run your Netty-based application.

Branches to look

Development of all versions takes place in each branch whose name is identical to <majorVersion>.<minorVersion>. For example, the development of 3.9 and 4.1 resides in the branch '3.9' and the branch '4.1' respectively.

Usage with JDK 9+

Netty can be used in modular JDK9+ applications as a collection of automatic modules. The module names follow the reverse-DNS style, and are derived from subproject names rather than root packages due to historical reasons. They are listed below:

  • io.netty.all
  • io.netty.buffer
  • io.netty.codec
  • io.netty.codec.dns
  • io.netty.codec.haproxy
  • io.netty.codec.http
  • io.netty.codec.http2
  • io.netty.codec.memcache
  • io.netty.codec.mqtt
  • io.netty.codec.redis
  • io.netty.codec.smtp
  • io.netty.codec.socks
  • io.netty.codec.stomp
  • io.netty.codec.xml
  • io.netty.common
  • io.netty.handler
  • io.netty.handler.proxy
  • io.netty.resolver
  • io.netty.resolver.dns
  • io.netty.transport
  • io.netty.transport.epoll (native omitted - reserved keyword in Java)
  • io.netty.transport.kqueue (native omitted - reserved keyword in Java)
  • io.netty.transport.unix.common (native omitted - reserved keyword in Java)
  • io.netty.transport.rxtx
  • io.netty.transport.sctp
  • io.netty.transport.udt

Automatic modules do not provide any means to declare dependencies, so you need to list each used module separately in your module-info file.
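
For example, an application module that uses the HTTP codec over the NIO transport might declare something like the following sketch (the application module name is made up; the Netty module names are taken from the list above):

    // module-info.java of a hypothetical application
    module com.example.httpapp {
        requires io.netty.buffer;
        requires io.netty.common;
        requires io.netty.transport;
        requires io.netty.handler;
        requires io.netty.codec;
        requires io.netty.codec.http;
    }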

Issues
  • Memory leak in latest netty version.

    After a recent update to 4.1.7.Final (from 4.1.4.Final) my servers started dying with OOM within a few hours. Before that, they had been running for weeks with no issues.

    Error:

    io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 64 byte(s) of direct memory (used: 468189141, max: 468189184)
            at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:614) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:568) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.allocateDirect(UnpooledUnsafeNoCleanerDirectByteBuf.java:30) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.UnpooledUnsafeDirectByteBuf.<init>(UnpooledUnsafeDirectByteBuf.java:68) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.<init>(UnpooledUnsafeNoCleanerDirectByteBuf.java:25) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.UnsafeByteBufUtil.newUnsafeDirectByteBuf(UnsafeByteBufUtil.java:625) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.UnpooledByteBufAllocator.newDirectBuffer(UnpooledByteBufAllocator.java:65) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:170) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:131) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.channel.RecvByteBufAllocator$DelegatingHandle.allocate(RecvByteBufAllocator.java:124) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:956) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe$1.run(AbstractEpollChannel.java:359) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.util.concurrent.SingleThreadEventExecutor.safeExecute(SingleThreadEventExecutor.java:451) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:418) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:306) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) ~[server-0.22.0-SNAPSHOT.jar:?]
            at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
    

    Or :

    08:28:00.752 WARN  - Failed to mark a promise as failure because it has succeeded already: …32d(success)
    io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 18713 byte(s) of direct memory (used: 468184872, max: 468189184)
    	at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:631) ~[server-0.21.7-2.jar:?]        
    	at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:585) ~[server-0.21.7-2.jar:?]        
    	at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.allocateDirect(UnpooledUnsafeNoCleanerDirectByteBuf.java:30) ~[server-0.21.7-2.jar:?]        
    	at io.netty.buffer.UnpooledUnsafeDirectByteBuf.<init>(UnpooledUnsafeDirectByteBuf.java:68) ~[server-0.21.7-2.jar:?]
            at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.<init>(UnpooledUnsafeNoCleanerDirectByteBuf.java:25) ~[server-0.21.7-2.jar:?]
            at io.netty.buffer.UnsafeByteBufUtil.newUnsafeDirectByteBuf(UnsafeByteBufUtil.java:624) ~[server-0.21.7-2.jar:?]
            at io.netty.buffer.UnpooledByteBufAllocator.newDirectBuffer(UnpooledByteBufAllocator.java:65) ~[server-0.21.7-2.jar:?]
            at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179) ~[server-0.21.7-2.jar:?]
            at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:170) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:1533) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.allocateOutNetBuf(SslHandler.java:1544) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:575) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:550) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:531) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:1324) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.closeOutboundAndChannel(SslHandler.java:1307) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.close(SslHandler.java:498) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:625) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:609) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.close(CombinedChannelDuplexHandler.java:504) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelOutboundHandlerAdapter.close(ChannelOutboundHandlerAdapter.java:71) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.CombinedChannelDuplexHandler.close(CombinedChannelDuplexHandler.java:315) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:625) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:609) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelDuplexHandler.close(ChannelDuplexHandler.java:73) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:625) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:609) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:466) ~[server-0.21.7-2.jar:?]
            at cc.blynk.server.core.protocol.handlers.DefaultExceptionHandler.handleUnexpectedException(DefaultExceptionHandler.java:59) ~[server-0.21.7-2.jar:?]
            at cc.blynk.server.core.protocol.handlers.DefaultExceptionHandler.handleGeneralException(DefaultExceptionHandler.java:43) ~[server-0.21.7-2.jar:?]
            at cc.blynk.core.http.handlers.StaticFileHandler.exceptionCaught(StaticFileHandler.java:277) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireExceptionCaught(CombinedC:
    

    I restarted and took heap dumps before the abnormal memory consumption and after the first error messages shown above:

    [screenshot: memory usage]

    This screenshot shows the difference between the heap right after server start (17% of the instance's RAM) and at the first OOM in the logs (31% of the instance's RAM). Instance RAM is 2 GB, so it looks like all of the direct memory (468 MB) was consumed, while the heap itself takes less than the direct buffers. Load on the server is pretty low: 900 req/sec with ~600 active connections. CPU consumption is only ~15%.

    I tried to analyze the heap dump, but I don't know Netty well enough to draw any conclusions.

    java version "1.8.0_111"
    Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
    Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
    
            <netty.version>4.1.7.Final</netty.version>
            <netty.tcnative.version>1.1.33.Fork25</netty.tcnative.version>
    
            <dependency>
                <groupId>io.netty</groupId>
                <artifactId>netty-transport-native-epoll</artifactId>
                <version>${netty.version}</version>
                <classifier>${epoll.os}</classifier>
            </dependency>
            <dependency>
                <groupId>io.netty</groupId>
                <artifactId>netty-tcnative</artifactId>
                <version>${netty.tcnative.version}</version>
                <classifier>${epoll.os}</classifier>
            </dependency>
    

    Right now I'm playing with

    -Dio.netty.leakDetectionLevel=advanced 
    -Dio.netty.noPreferDirect=true 
    -Dio.netty.allocator.type=unpooled 
    -Dio.netty.maxDirectMemory=0
    

    to find working settings. I'll update the ticket with additional info, if any.
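
    For what it's worth, the leak detection level can also be set programmatically, which is sometimes easier to wire into server startup code than a JVM flag; a minimal sketch, equivalent to the -Dio.netty.leakDetectionLevel=advanced option above (the holder class name is made up):

    import io.netty.util.ResourceLeakDetector;

    public final class LeakDetectionConfig {
        /** Call this early in startup, before any ByteBuf is allocated. */
        public static void enableAdvancedLeakDetection() {
            // Same effect as -Dio.netty.leakDetectionLevel=advanced
            ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.ADVANCED);
        }
    }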

    Unfortunately, I wasn't able to reproduce this issue in our QA environment. Please let me know if you need more info.

    defect 
    opened by doom369 123
  • DNS Codec

    This is the codec for a DNS resolver. I also wrote a basic test program, "DNSTest", that will use the codec to resolve an address. This was part of a GSoC proposal for an asynchronous DNS resolver (I threw in an application).
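
    For context, here is a minimal sketch of how the DNS codec that eventually shipped in io.netty.handler.codec.dns can be wired into a UDP pipeline to send a single query (class names are those of current Netty 4.1 and differ from this original PR; 8.8.8.8 is just an example resolver):

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.Channel;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.DatagramChannel;
    import io.netty.channel.socket.nio.NioDatagramChannel;
    import io.netty.handler.codec.dns.DatagramDnsQuery;
    import io.netty.handler.codec.dns.DatagramDnsQueryEncoder;
    import io.netty.handler.codec.dns.DatagramDnsResponseDecoder;
    import io.netty.handler.codec.dns.DefaultDnsQuestion;
    import io.netty.handler.codec.dns.DnsRecordType;
    import io.netty.handler.codec.dns.DnsSection;

    import java.net.InetSocketAddress;

    public final class DnsQueryExample {
        public static void main(String[] args) throws Exception {
            NioEventLoopGroup group = new NioEventLoopGroup(1);
            try {
                Bootstrap bootstrap = new Bootstrap()
                        .group(group)
                        .channel(NioDatagramChannel.class)
                        .handler(new ChannelInitializer<DatagramChannel>() {
                            @Override
                            protected void initChannel(DatagramChannel ch) {
                                ch.pipeline().addLast(
                                        new DatagramDnsQueryEncoder(),
                                        new DatagramDnsResponseDecoder());
                                // A real client would also add a handler here that consumes
                                // the decoded DatagramDnsResponse messages.
                            }
                        });
                Channel ch = bootstrap.bind(0).sync().channel();

                InetSocketAddress dnsServer = new InetSocketAddress("8.8.8.8", 53);
                DatagramDnsQuery query = new DatagramDnsQuery(null, dnsServer, 1);
                query.setRecord(DnsSection.QUESTION,
                        new DefaultDnsQuestion("netty.io", DnsRecordType.A));
                ch.writeAndFlush(query).sync();
            } finally {
                group.shutdownGracefully();
            }
        }
    }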

    opened by mbakkar 107
  • ForkJoinPool-based EventLoopGroup and Channel Deregistration

    Hey,

    this is a first version of Netty running on a ForkJoinPool, with deregister (hopefully) working correctly. The main idea behind the changes to deregister is that deregistration is always executed as a task and never invoked directly. We also do not allow any new task submissions after deregister has been called, making the deregistration task the last task of a particular Channel in the task queue, so we do not have to worry about moving tasks between EventLoops. We achieve this mainly by wrapping all calls to .eventLoop() and .executor(). There is some special treatment required for scheduled tasks, but that is best explained in the code.

    Let me know what you guys think.

    @normanmaurer @trustin

    feature 
    opened by buchgr 96
  • DNS resolver failing to find valid DNS record

    Expected behavior

    The DNS resolver should find valid DNS records.

    Actual behavior

    Exception thrown:

    Caused by: io.netty.resolver.dns.DnsNameResolverContext$SearchDomainUnknownHostException: Search domain query failed. Original hostname: 'host.toplevel' failed to resolve 'host.toplevel.search.domain' after 7 queries 
    	at io.netty.resolver.dns.DnsNameResolverContext.finishResolve(DnsNameResolverContext.java:721)
    	at io.netty.resolver.dns.DnsNameResolverContext.tryToFinishResolve(DnsNameResolverContext.java:663)
    	at io.netty.resolver.dns.DnsNameResolverContext.query(DnsNameResolverContext.java:306)
    	at io.netty.resolver.dns.DnsNameResolverContext.query(DnsNameResolverContext.java:295)
    	at io.netty.resolver.dns.DnsNameResolverContext.tryToFinishResolve(DnsNameResolverContext.java:636)
    	at io.netty.resolver.dns.DnsNameResolverContext$3.operationComplete(DnsNameResolverContext.java:342)
    	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
    	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
    	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
    	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
    	at io.netty.resolver.dns.DnsQueryContext.setSuccess(DnsQueryContext.java:197)
    	at io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:180)
    	at io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:969)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1412)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:943)
    	at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:93)
    	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
    	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    	at java.lang.Thread.run(Thread.java:748)
    

    Steps to reproduce

    1. Configure a top level domain someDomain on a DNS server you own
    2. Configure a host under the new top level domain someHost.someDomain
    3. Configure multiple resolvers on the DNS client machine that will run the Netty code, e.g. 8.8.8.8, 192.168.1.1, and 10.0.0.1 (I have 3 resolvers configured, each pointing to a different DNS master: global DNS, local personal private network, company private network over a VPN)
    4. Configure the search domain on the DNS client machine that will run the Netty code so that it does not match the top level domain, e.g. search.otherDomain
    5. Ask Netty to resolve someHost.someDomain
    6. Observe the failure.

    Minimal yet complete reproducer code (or URL to code)

    I'm not using Netty directly so I'm not sure what to put here. Do you want my Redisson code?
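
    That said, a minimal sketch that exercises the resolver directly and mirrors the failing setup might look like this (an illustrative sketch only, not the actual reproducer; the host and search-domain names are the placeholders from the steps above):

    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.nio.NioDatagramChannel;
    import io.netty.resolver.dns.DnsNameResolver;
    import io.netty.resolver.dns.DnsNameResolverBuilder;

    import java.net.InetAddress;
    import java.util.Collections;

    public final class ResolverRepro {
        public static void main(String[] args) throws Exception {
            NioEventLoopGroup group = new NioEventLoopGroup(1);
            try {
                DnsNameResolver resolver = new DnsNameResolverBuilder(group.next())
                        .channelType(NioDatagramChannel.class)
                        // Search domain deliberately does not match the queried TLD,
                        // mirroring step 4 above.
                        .searchDomains(Collections.singletonList("search.otherDomain"))
                        .build();
                InetAddress address = resolver.resolve("someHost.someDomain").sync().getNow();
                System.out.println(address);
            } finally {
                group.shutdownGracefully();
            }
        }
    }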

    Netty version

    It breaks when I upgrade to Redisson 3.6+, which pulls in Netty 4.1.20+. When forcing a downgrade to Netty 4.1.13 the problem still shows, but with a slightly different stack trace.

    JVM version (e.g. java -version)

    java version "1.8.0_162" Java(TM) SE Runtime Environment (build 1.8.0_162-b12) Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)

    OS version (e.g. uname -a)

    Windows 10, Centos 7, Ubuntu 16.04

    defect 
    opened by johnjaylward 93
  • ALPN / NPN need to handle no compatible protocols found

    Motivation: If there are no common protocols in the ALPN protocol exchange we still complete the handshake successfully. According to http://tools.ietf.org/html/rfc7301#section-3.2, this handshake should fail with a status of no_application_protocol.

    Modifications:

    • The upstream project used for ALPN (alpn-boot) does not support this, so a PR https://github.com/jetty-project/jetty-alpn/pull/3 was submitted.
    • The Netty code using alpn-boot should support the new interface (return null on the existing method).
    • The version number of alpn-boot must be updated in the pom.xml files.

    Result: Netty fails the SSL handshake if ALPN is used and there are no common protocols.
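
    For reference, with today's SslContextBuilder this behaviour is expressed through ApplicationProtocolConfig; a hedged sketch of a server context that asks for the RFC 7301 fatal alert on mismatch (whether FATAL_ALERT is honoured depends on the SSL provider in use):

    import io.netty.handler.ssl.ApplicationProtocolConfig;
    import io.netty.handler.ssl.ApplicationProtocolConfig.Protocol;
    import io.netty.handler.ssl.ApplicationProtocolConfig.SelectedListenerFailureBehavior;
    import io.netty.handler.ssl.ApplicationProtocolConfig.SelectorFailureBehavior;
    import io.netty.handler.ssl.ApplicationProtocolNames;
    import io.netty.handler.ssl.SslContext;
    import io.netty.handler.ssl.SslContextBuilder;

    import java.io.File;

    public final class AlpnServerContext {
        /** Builds a server SslContext that fails the handshake when no ALPN protocol matches. */
        static SslContext build(File certChain, File privateKey) throws Exception {
            return SslContextBuilder.forServer(certChain, privateKey)
                    .applicationProtocolConfig(new ApplicationProtocolConfig(
                            Protocol.ALPN,
                            // Fail instead of silently completing the handshake
                            // when no common protocol is found.
                            SelectorFailureBehavior.FATAL_ALERT,
                            SelectedListenerFailureBehavior.FATAL_ALERT,
                            ApplicationProtocolNames.HTTP_2,
                            ApplicationProtocolNames.HTTP_1_1))
                    .build();
        }
    }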

    defect 
    opened by Scottmitch 85
  • SockJS Support for Netty4

    Please see sockjs/README.md for instructions on running the tests and general information regarding the SockJS support.

    feature 
    opened by danbev 81
  • EPollArrayWrapper.epollWait 100% CPU Usage

    Hi,

    I believe I have an issue similar to #302, but on Linux (Ubuntu 10.04) with JDK 1.6.0u30 and JDK 1.7.0u4, using Netty 4.0.0 (Revision: 52a7d28cb59e3806fda322aecf7a85a6adaeb305)

    The app is proxying connections to backend systems. The proxy has a pool of channels that it can use to send requests to the backend systems. If the pool is low on channels, new channels are spawned and put into the pool so that requests sent to the proxy can be serviced. The pools get populated on app startup, so that is why it doesn't take long at all for the CPU to spike through the roof (22 seconds into the app lifecycle).

    The test box has two CPUs, the output from 'top' is below:

    PID  USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    8220 root   20   0 2281m 741m  10m R 50.2 18.7  0:22.57 java                                                                             
    8218 root   20   0 2281m 741m  10m R 49.9 18.7  0:22.65 java                                                                             
    8219 root   20   0 2281m 741m  10m R 49.2 18.7  0:22.86 java                                                                             
    8221 root   20   0 2281m 741m  10m R 49.2 18.7  0:22.20 java 
    

    Thread Dump for the four NioClient based Worker Threads that are chewing up all the CPU.

    "backend-worker-pool-7-thread-1" prio=10 tid=0x00007f5918015800 nid=0x201a runnable [0x00007f5924ba3000]
       java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        - locked <0x000000008be93580> (a sun.nio.ch.Util$2)
        - locked <0x000000008be93570> (a java.util.Collections$UnmodifiableSet)
        - locked <0x000000008be92548> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at io.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:55)
        at io.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:261)
        at io.netty.channel.socket.nio.NioWorker.run(NioWorker.java:37)
        at io.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:43)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

       Locked ownable synchronizers:
        - <0x000000008be00748> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
    
    "backend-worker-pool-7-thread-2" prio=10 tid=0x00007f5918012000 nid=0x201b runnable [0x00007f5924b82000]
       java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        - locked <0x000000008be94a28> (a sun.nio.ch.Util$2)
        - locked <0x000000008be94a18> (a java.util.Collections$UnmodifiableSet)
        - locked <0x000000008be90648> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at io.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:55)
        at io.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:261)
        at io.netty.channel.socket.nio.NioWorker.run(NioWorker.java:37)
        at io.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:43)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

       Locked ownable synchronizers:
        - <0x000000008be904c8> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
    
    "backend-worker-pool-7-thread-3" prio=10 tid=0x00007f5918007800 nid=0x201c runnable [0x00007f5924b61000]
       java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        - locked <0x000000008be952e0> (a sun.nio.ch.Util$2)
        - locked <0x000000008be952d0> (a java.util.Collections$UnmodifiableSet)
        - locked <0x000000008be8f858> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at io.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:55)
        at io.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:261)
        at io.netty.channel.socket.nio.NioWorker.run(NioWorker.java:37)
        at io.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:43)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

       Locked ownable synchronizers:
        - <0x000000008be8f618> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
    
    "backend-worker-pool-7-thread-4" prio=10 tid=0x00007f5918019000 nid=0x201d runnable [0x00007f5924b40000]
       java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        - locked <0x000000008be003f8> (a sun.nio.ch.Util$2)
        - locked <0x000000008be003e8> (a java.util.Collections$UnmodifiableSet)
        - locked <0x000000008be00408> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at io.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:55)
        at io.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:261)
        at io.netty.channel.socket.nio.NioWorker.run(NioWorker.java:37)
        at io.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:43)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

       Locked ownable synchronizers:
        - <0x000000008be004e0> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
    
    defect 
    opened by blucas 73
  • (POC) Refactor FileRegion to implement new ReadableObject API

    Motivation:

    Based on #3965. As a first step of integrating the unified reading API, the low-hanging fruit is to refactor FileRegion to use it.

    Modifications:

    Refactored FileRegion to extend ReadableObject and implemented the new interface in DefaultFileRegion.

    Result:

    FileRegion implements the new ReadableObject interface.

    opened by nmittler 72
  • Draft - io_uring - GSoC 2020

    This draft can be built on Linux kernel 5.9-rc1 and Linux kernel 5.8.3. I came up with some ideas on how to implement it, but I'm not sure what the right way is, so feel free to comment.

    I created an event HashMap to keep track of what kind of events come off the completion queue; that means you have to save the eventId into the submission queue's user_data to identify events.

    Write

    • io_uring events are not completed in order by default, but write events in Netty should be in order; there is a flag for that, IOSQE_IO_LINK (related to one channel)
    • The function doWrite(ChannelOutboundBuffer in) writes until readableBytes is 0, which means you have to store the ByteBuf somewhere

    Accept

    • You need the address of the peer socket to create a new child Channel. One solution would be to save the file descriptor in the event ServerChannel, because the acceptedAddress argument is saved in AbstractIOUringServerChannel to call serverChannel.createNewChildChannel
    • My idea is that whenever an accept event is executed, a new accept event is started in the EventLoop

    Read

    • I'm wondering how to get the pipeline instance to fireChannelRead(ByteBuf) in the EventLoop; do we have to save the ByteBuf (as mentioned above), or is it possible to get the same ByteBuf reference from the ByteAllocatorHandle?
    • As discussed above, save the file descriptor in the Event and then invoke pipeline.fireChannelRead, WDYT?
    • How often is Channel.read or doBeginRead called?

    What's the difference between ByteBufUtil.threadLocalDirectBuffer and isDirectBufferPooled?

    What about naming: IOUringSocketChannel, IO_UringSocketChannel or UringSocketChannel? WDYT?

    #10142

    opened by 1Jo1 70
  • Direct memory exhausted after cant recycle any FastThreadLocalThreads

    Version: netty-all-4.0.25.Final. Using: PooledByteBufAllocator. I have encountered a problem where direct memory keeps rising by 16 MB at a time when the load average goes up, until the 2 GB direct memory limit is exhausted and the server crashes. I find there are masses of FastThreadLocalThreads and io.netty.util.Recycler$DefaultHandle instances. My code is configured as follows:

    [screenshots: code configuration and memory usage charts]

    What's wrong with my use of Netty? Can anyone help me?
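
    One knob that is sometimes suggested when masses of Recycler$DefaultHandle instances show up is capping or disabling the recycler; a hedged sketch, assuming a Netty version that reads these properties (they must be set before the first Netty class that touches the Recycler is loaded):

    public final class NettyRecyclerTuning {
        public static void main(String[] args) {
            // Older 4.0.x releases read io.netty.recycler.maxCapacity, newer ones read
            // io.netty.recycler.maxCapacityPerThread; setting both is harmless.
            // A value of 0 disables object recycling entirely.
            System.setProperty("io.netty.recycler.maxCapacity", "0");
            System.setProperty("io.netty.recycler.maxCapacityPerThread", "0");
            // ... bootstrap the Netty server afterwards ...
        }
    }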

    needs info 
    opened by jiangguilong2000 70
  • Use the correct readerIndex() when handle BLOCK_TYPE_NON_COMPRESSED

    Motivation:

    We need to use the readerIndex() as the offset when handling BLOCK_TYPE_NON_COMPRESSED, as it might not be 0.

    Modifications:

    Correctly use readerIndex()

    Result:

    Correctly handle BLOCK_TYPE_NON_COMPRESSED when the readerIndex() != 0

    opened by normanmaurer 1
  • stop using the ASF 'backup' servers to download Maven

    The tree includes the 'Maven Wrapper', with settings such that it downloads Maven 3.8.1 from the ASF 'backup' distribution servers rather than the main distribution mirrors or, more typically in the wrapper's case, Maven Central (which it seems the settings did use prior to the last config update). As the CI jobs grab Maven several times for every commit, plus any related use from people's Netty forks/downloads etc., this adds up. It would be good to direct them elsewhere.

    This changes the wrapper settings so it grabs from the Google mirror of Maven Central.

    opened by gemmellr 3
  • Disable Flaky `ThreadPerChannelEventLoopGroupTest` test

    Motivation: There are multiple instances where the build stalled at the io.netty.channel.ThreadPerChannelEventLoopGroupTest test and kept running until it was interrupted by GitHub Actions for consuming 6 hours of runtime.

    Modification: Disabled the test until it's fixed.

    Result: No more build failure

    opened by hyperxpro 1
  • `SingleThreadEventExecutor.shutdownGracefully` starts thread to shut it down?

    Expected behavior

    If shutdownGracefully is called on a SingleThreadEventExecutor that has not created a thread yet, the executor should shut down without creating a thread.

    Actual behavior

    If shutdownGracefully is called on SingleThreadEventExecutor, ensureThreadStarted will be called at line #660, which will start the thread, only to then, what, shut it down? I do not see the logic in returning the terminationFuture in that case.

    Steps to reproduce

    Create any kind of EventLoopGroup, without putting work on it, shut it down gracefully and threads will be created.

    Minimal yet complete reproducer code (or URL to code)

    public static void main(String[] args) {
        EventLoopGroup group = new NioEventLoopGroup(1, runnable -> {
            return new Thread(runnable) {
                {
                    new IllegalStateException("I was created, because Netty wants to shut me down :(")
                            .printStackTrace();
                }
            };
        });
        group.shutdownGracefully();
    }
    

    Will print:

    java.lang.IllegalStateException: I was created, because Netty wants to shut me down :(
            ...
            at io.netty.util.concurrent.ThreadPerTaskExecutor.execute(ThreadPerTaskExecutor.java:32)
            at io.netty.util.internal.ThreadExecutorMap$1.execute(ThreadExecutorMap.java:57)
            at io.netty.util.concurrent.SingleThreadEventExecutor.doStartThread(SingleThreadEventExecutor.java:978)
            at io.netty.util.concurrent.SingleThreadEventExecutor.ensureThreadStarted(SingleThreadEventExecutor.java:961)
            at io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:663)
            at io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:163)
            at io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:70)
            ...
    

    Netty version

    4.1

    JVM version (e.g. java -version)

    openjdk 11.0.12 2021-07-20 LTS OpenJDK Runtime Environment SapMachine (build 11.0.12+7-LTS-sapmachine) OpenJDK 64-Bit Server VM SapMachine (build 11.0.12+7-LTS-sapmachine, mixed mode)

    OS version (e.g. uname -a)

    Windows 10

    opened by kristian 4
  • Add new Compression / Decompression API which does not depend on the Channel API

    Motivation:

    At the moment all our compression implementations are written by implementing the ChannelHandler interface. Because of this, re-using them in other codecs (for example HTTP/1 and HTTP/2) makes things very heavyweight. It would be much better to implement them so they only depend on ByteBuf and some API contract. This way it will be easier to re-use things and easier to optimize in the future.

    Modifications:

    • Add Compressor and Decompressor interfaces
    • Use these interfaces for all our compression implementations
    • Add CompressionHandler which uses a Compressor to do the compression
    • Add DecompressionHandler which uses a Decompressor to do the decompression
    • Adjust tests

    Result:

    More fine grained API.
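
    As a purely hypothetical illustration of the shape such a contract could take (not the actual interfaces added by this PR), a ByteBuf-only compressor might look like:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufAllocator;

    // Hypothetical sketch only; the real interfaces live in this PR and may differ.
    public interface Compressor extends AutoCloseable {
        /** Compresses the readable bytes of {@code input} into a buffer obtained from {@code alloc}. */
        ByteBuf compress(ByteBuf input, ByteBufAllocator alloc);

        /** Emits any trailing bytes (e.g. a stream footer) once no more input will follow. */
        ByteBuf finish(ByteBufAllocator alloc);

        /** Releases any native or pooled resources held by this compressor. */
        @Override
        void close();
    }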

    opened by normanmaurer 0
  • Add more utility methods to Buffer

    Motivation

    During the process of migrating codecs to use Buffer, I stumbled upon a few utility methods that may be useful additions to Buffer.

    Modification

    • Added [read, write]Split() methods to split at an offset from the current reader/writer offset.
    • Added accumulate[Reader,Writer]Offset() methods to increment/decrement reader/writer offsets.
    • Added [read, write]CharSequence() methods, which are used extensively in HTTP.

    Result

    More utility methods to be used by codecs.

    netty5 
    opened by NiteshKant 0
  • ByteBuf very poor sequential read/write performance compared with ByteBuffer or byte[]

    Expected behavior

    ByteBuf better performance

    Actual behavior

    ByteBuf seq R/w is the worst

    Steps to reproduce

    $java SeqRwPerf
    Netty ByteBuf seq R/W time: 5668ms
    Java ByteBuffer seq R/W time: 4241ms
    Java byte[] seq R/W time: 1528ms
    

    Minimal yet complete reproducer code (or URL to code)

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufAllocator;
    
    import java.nio.ByteBuffer;
    
    public class SeqRwPerf {
    
        static final ByteBufAllocator ALLOC = ByteBufAllocator.DEFAULT;
        static final int N = 4 << 10, M = 100 * N;
    
        public static void main(String[] args) {
            long ta = System.currentTimeMillis();
            testNettyRw();
            long tb = System.currentTimeMillis();
            System.out.printf("Netty ByteBuf seq R/W time: %sms%n", tb - ta);
    
            ta = System.currentTimeMillis();
            testNioRw();
            tb = System.currentTimeMillis();
            System.out.printf("Java ByteBuffer seq R/W time: %sms%n", tb - ta);
    
            ta = System.currentTimeMillis();
            testByteaRw();
            tb = System.currentTimeMillis();
            System.out.printf("Java byte[] seq R/W time: %sms%n", tb - ta);
        }
    
        public static void testNettyRw() {
            for (int i = 0; i < M; ++i) {
                ByteBuf buf = ALLOC.buffer(N);
    
                for (int j = 0; j < N; ++j) {
                    buf.writeByte(j);
                }
                for (int j = 0; j < N; ++j) {
                    byte b = buf.readByte();
                    assertEquals(b, (byte) j);
                }
    
                buf.release();
            }
        }
    
        public static void testNioRw() {
            for (int i = 0; i < M; ++i) {
                ByteBuffer buf = ByteBuffer.allocate(N);
    
                for (int j = 0; j < N; ++j) {
                    buf.put((byte)j);
                }
                buf.flip();
                for (int j = 0; j < N; ++j) {
                    byte b = buf.get();
                    assertEquals(b, (byte) j);
                }
            }
        }
    
        public static void testByteaRw() {
            for (int i = 0; i < M; ++i) {
                byte[] buf = new byte[N];
    
                for (int j = 0; j < N; ++j) {
                    buf[j] = (byte)j;
                }
                for (int j = 0; j < N; ++j) {
                    byte b = buf[j];
                    assertEquals(b, (byte) j);
                }
            }
        }
    
        static void assertEquals(int a, int b) {
            if (a != b) throw new AssertionError(a + " != " + b);
        }
    
    }
    

    Netty version

    netty-4.1.67.Final

    JVM version (e.g. java -version)

    java version "1.8.0_231" Java(TM) SE Runtime Environment (build 1.8.0_231-b11) Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)

    OS version (e.g. uname -a)

    Linux 5.4.0-84-generic #94-Ubuntu SMP Thu Aug 26 20:27:37 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
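
    As a hedged aside rather than a conclusion from this report: the single-byte ByteBuf accessors perform bounds and accessibility checks on every call, so bulk operations usually narrow the gap considerably. A sketch of a bulk-style variant of the Netty test above:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufAllocator;

    public final class SeqRwBulk {

        static final ByteBufAllocator ALLOC = ByteBufAllocator.DEFAULT;
        static final int N = 4 << 10, M = 100 * N;

        // Same workload as testNettyRw(), but moving the data with one bulk call
        // per direction instead of one checked call per byte.
        public static void testNettyBulkRw() {
            byte[] src = new byte[N];
            for (int j = 0; j < N; ++j) {
                src[j] = (byte) j;
            }
            byte[] dst = new byte[N];
            for (int i = 0; i < M; ++i) {
                ByteBuf buf = ALLOC.buffer(N);
                buf.writeBytes(src);  // one bounds check for N bytes
                buf.readBytes(dst);   // likewise on the read side
                buf.release();
            }
        }
    }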

    opened by forchid 2
  • Fix ByteBufUtil indexOf ClassCastException

    Motivation:

    In this issue (https://github.com/netty/netty/issues/11678), it is reported that ByteBufUtil.indexOf() may throw a ClassCastException, so a type check on the haystack is required.

    Modification:

    Use ByteBuf.indexOf() instead of firstIndexOf(), and add a test case.

    Result:

    Fixes https://github.com/netty/netty/issues/11678

    opened by skyguard1 3
  • ByteBufUtil ClassCastException in 4.1.66+

    ByteBufUtil.indexOf was rewritten recently in https://github.com/netty/netty/pull/11367.

    That change introduced a cast of the passed parameter to AbstractByteBuf https://github.com/netty/netty/blob/43f3956030dbb0e6e6cca9b6ceee6812f11c5a4b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L250

    The consequence of this is that a ClassCastException is now thrown whenever other ByteBuf implementations are used.

    The method doc did not and does not indicate that the parameter is required to be an AbstractByteBuf, so this appears to be a bug. https://github.com/netty/netty/blob/43f3956030dbb0e6e6cca9b6ceee6812f11c5a4b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L227-L232
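
    A minimal way to provoke it might look like the following hedged sketch (any ByteBuf implementation that does not extend AbstractByteBuf should do; here a wrapped, unreleasable buffer is used):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufUtil;
    import io.netty.buffer.Unpooled;
    import io.netty.util.CharsetUtil;

    public final class IndexOfCce {
        public static void main(String[] args) {
            ByteBuf needle = Unpooled.copiedBuffer("dle", CharsetUtil.US_ASCII);
            // Wrapping yields a ByteBuf that is not an AbstractByteBuf subclass,
            // the kind of implementation the report says now hits the cast.
            ByteBuf haystack = Unpooled.unreleasableBuffer(
                    Unpooled.copiedBuffer("needle in a haystack", CharsetUtil.US_ASCII));
            // On 4.1.66+ this reportedly throws ClassCastException; on earlier
            // versions it returns the index of the match.
            System.out.println(ByteBufUtil.indexOf(needle, haystack));
        }
    }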

    regression 
    opened by rs017991 2
  • 4.1.68 security advisories use floating links for code references

    The security reports for 4.1.68, GHSA-9vjp-v76f-g363 and GHSA-grg4-wf29-r9vv, contain a few reference links to their related code areas. However, they use links to the 4.1 branch, which may float over time, so while they may be correct now they will likely become incorrect as the various files change. The references should use fixed links, such as via the release tag, to ensure they remain appropriate.

    opened by gemmellr 0
Owner
The Netty Project
Opening the future of network programming since 2001