Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 6.0.0RC0, 6.0.0rc1, 6.0.0rc2
    • Fix Version/s: 6.0.0
    • Component/s: NIO
    • Labels:
      None
    • Environment:
      FreeBSD 6.0-RELEASE
      Java HotSpot(TM) Server VM (build diablo-1.5.0_07-b00, mixed mode)
    • Number of attachments:
      0

      Description

      OutOfMemoryError occurs every once in a while, followed by a denial of service (that is, the Jetty server stops responding until it is restarted).

      :WARN: handle failed
      java.lang.OutOfMemoryError
      at sun.misc.Unsafe.allocateMemory(Native Method)
      at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:99)
      at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
      at sun.nio.ch.IOUtil.write(IOUtil.java:134)
      at sun.nio.ch.SocketChannelImpl.write0(SocketChannelImpl.java:331)
      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:354)
      at java.nio.channels.SocketChannel.write(SocketChannel.java:360)
      at org.mortbay.io.nio.ChannelEndPoint.flush(ChannelEndPoint.java:238)
      at org.mortbay.jetty.nio.HttpChannelEndPoint.flush(HttpChannelEndPoint.java:141)
      at org.mortbay.jetty.HttpGenerator.flushBuffers(HttpGenerator.java:754)
      at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:321)
      at org.mortbay.jetty.nio.HttpChannelEndPoint.run(HttpChannelEndPoint.java:270)
      at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:475)
      java.lang.OutOfMemoryError
      at sun.misc.Unsafe.allocateMemory(Native Method)
      at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:99)
      at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
      at sun.nio.ch.IOUtil.write(IOUtil.java:134)
      at sun.nio.ch.SocketChannelImpl.write0(SocketChannelImpl.java:331)
      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:354)
      at java.nio.channels.SocketChannel.write(SocketChannel.java:360)
      at org.mortbay.io.nio.ChannelEndPoint.flush(ChannelEndPoint.java:238)
      at org.mortbay.jetty.nio.HttpChannelEndPoint.flush(HttpChannelEndPoint.java:141)
      at org.mortbay.jetty.HttpGenerator.flushBuffers(HttpGenerator.java:754)
      at org.mortbay.jetty.HttpConnection.flushResponse(HttpConnection.java:480)
      at org.mortbay.jetty.HttpConnection$Output.close(HttpConnection.java:711)

      I suspect this is the issue described in SDN Bug 4797189:
      http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4797189

      If so, a solution would be to provide an option not to use direct buffers (they cause more trouble than they are worth, with little to no performance improvement; Berkeley DB-JE, for example, made them an option, which is turned off by default).
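A minimal sketch of the proposed option, assuming a factory that chooses between heap and direct buffers. The class name `BufferFactory` and the `useDirectBuffers` flag are invented for illustration; they are not Jetty's actual configuration API.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: fall back to heap buffers when direct buffers are
// disabled. Heap buffers are reclaimed by ordinary garbage collection;
// direct buffers free their native memory only when their small wrapper
// objects are collected, which can lag far behind native-memory pressure
// (the failure mode in Sun bug 4797189).
public class BufferFactory {
    private final boolean useDirectBuffers;

    public BufferFactory(boolean useDirectBuffers) {
        this.useDirectBuffers = useDirectBuffers;
    }

    public ByteBuffer allocate(int capacity) {
        return useDirectBuffers
                ? ByteBuffer.allocateDirect(capacity)
                : ByteBuffer.allocate(capacity);
    }
}
```

With the option off, every allocation stays on the Java heap and is subject to normal collection, trading a possible copy on socket writes for predictable memory behavior.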

        Activity

        Greg Wilkins added a comment -

        OK, so this brings us back to two questions:
        1) why are the direct buffers not being pooled
        2) why does this cause grief on some JVM/machines but not on others?

        Could you try putting a println into AbstractNIOBuffer to get a feel for how often your
        system is creating new buffers?
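The instrumentation Greg suggests could look like the following sketch, which counts allocations rather than only printing them. The class `CountingBufferFactory` is invented for illustration; the actual change discussed was a println in the NIOBuffer constructor.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical instrumentation: count and report each direct-buffer
// allocation, to get a feel for how often new buffers are created
// (i.e., how often pooling is being bypassed).
public class CountingBufferFactory {
    private static final AtomicLong allocations = new AtomicLong();

    public static ByteBuffer newDirectBuffer(int capacity) {
        long n = allocations.incrementAndGet();
        System.err.println("NIO buffer allocation #" + n);
        return ByteBuffer.allocateDirect(capacity);
    }

    public static long allocationCount() {
        return allocations.get();
    }
}
```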

        Artem Kozarezov added a comment -

        I've added a println to the constructor in jetty\src\main\java\org\mortbay\io\nio\NIOBuffer.java:
        1712 allocations in twenty minutes.

        > 2) why does this cause grief on some JVM/machines but not on others?

        A combination of limited memory (either "-Xmx" or, if that is high, "ulimit -d") with a lack of full collections (caused by the parallel collector's default "-XX:GCTimeRatio", which is too low, or by a high "-Xms").
        In my case, "-Xmx" was high (in recent versions of the JVM, the default value of "-XX:MaxDirectMemorySize" is derived from "-Xmx"; see SDN Bug ID 4879883), but "ulimit -d" was hardcoded by the operating system to 500m. After I increased "ulimit -d" to 1000m, the ratio between "leaked" native memory and "-Xms" became bearable: the leaked native memory is swapped out (about 500m of swap is created and lies there undisturbed), and the process survives until -Xms is reached and a full collection occurs. So either decreasing -Xms (or increasing -XX:GCTimeRatio), or increasing the memory limits (-Xmx and ulimit) is a workaround, although you cannot increase the memory limits much unless you have a 64-bit address space.

        Greg Wilkins added a comment -

        OK - I have found that I am not recycling one of the response buffers!
        So I am working on a patch that will improve this.... stay tuned.

        Greg Wilkins added a comment -

        in svn now... RC2 coming out tonight

        Artem Kozarezov added a comment -

        The fix is confirmed. Three days without a fault.


          People

          • Assignee:
            Greg Wilkins
            Reporter:
            Artem Kozarezov
          • Votes:
            0
            Watchers:
            0

            Dates

            • Created:
              Updated:
              Resolved: