BTM-68

BitronixTransactionManager.resume() gradually slower when called very often

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.3.3
    • Fix Version/s: 2.0.0
    • Labels:
      None
    • Environment:
      Hibernate (3.5beta4) with Infinispan (4.0.0.CR3) as the second-level cache implementation, using Bitronix TM 1.3.3 for JTA integration
    • Number of attachments:
      0

      Description

      Sometimes it is necessary to suspend and resume a Transaction very often.
      If at least one XAResource is already enlisted,
      resuming gets gradually slower, because the collection passed to CollectionUtils#containsByIdentity()
      grows with each resume call and is scanned sequentially (poor scalability):

      bitronix.tm.utils.CollectionUtils.java:

      public static boolean containsByIdentity(Collection collection, Object toBeFound) {
          Iterator it = collection.iterator();
          while (it.hasNext()) {
              Object o = it.next();
              if (o == toBeFound)
                  return true;
          }
          return false;
      }

      Here is a small test case with its output:

      // begin a transaction and enlist at least one XAResource
      for (int i = 0; i < 10; i++) {
          long start = System.currentTimeMillis();
          for (int j = 0; j < 3000; j++) {
              _suspendedTransaction = transactionManager().suspend();
              transactionManager().resume(_suspendedTransaction);
          }
          long end = System.currentTimeMillis();
          System.out.println(i + ".round: took " + (end - start) + " ms for 3000 suspend + resume actions");
      }

      0.round: took 1227 ms for 3000 suspend + resume actions
      1.round: took 2884 ms for 3000 suspend + resume actions
      2.round: took 4707 ms for 3000 suspend + resume actions
      3.round: took 7407 ms for 3000 suspend + resume actions
      4.round: took 9291 ms for 3000 suspend + resume actions
      5.round: took 10972 ms for 3000 suspend + resume actions
      6.round: took 12832 ms for 3000 suspend + resume actions
      7.round: took 15243 ms for 3000 suspend + resume actions
      8.round: took 17569 ms for 3000 suspend + resume actions
      9.round: took 17778 ms for 3000 suspend + resume actions

      As you can see, the suspend/resume actions gradually get slower.

      I suggest changing the collection implementation so that a contains call
      does not scan the entries sequentially; a sketch of one possible alternative follows.
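
      For illustration only, here is a minimal sketch (not actual BTM code) of such an alternative:
      an identity-based Set built on java.util.IdentityHashMap, which keeps the same reference-equality
      semantics as containsByIdentity() but answers contains() in roughly constant time instead of
      scanning every entry:

      import java.util.Collections;
      import java.util.IdentityHashMap;
      import java.util.Set;

      public class IdentitySetSketch {
          public static void main(String[] args) {
              // Membership is decided by ==, not equals(), matching containsByIdentity().
              Set<Object> resources = Collections.newSetFromMap(new IdentityHashMap<Object, Boolean>());

              Object resource = new Object();
              resources.add(resource);

              System.out.println(resources.contains(resource));     // true: same reference
              System.out.println(resources.contains(new Object())); // false: different reference
          }
      }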

      How I discovered this:
      Due to a bug in Infinispan 4.0.0.CR3 (http://opensource.atlassian.com/projects/hibernate/browse/HHH-4836),
      every Infinispan cache put implicitly suspended and resumed the current Transaction,
      so I reached about 10,000 suspend/resume actions within a single Transaction,
      with very bad performance. Using jstack I tracked the bottleneck down to the containsByIdentity method.

        Issue Links

          Activity

          Ludovic Orban added a comment -

          This is once again a consequence of the internal design mistake I described in BTM-67.

          I believe fixing BTM-67 will also solve this issue, but I'll keep both open for now as you came up with a very good use case which should be tested as well.

          Thanks for the report!

          Ludovic Orban added a comment -

          Fix for BTM-67 also solves this problem.


            People

            • Assignee:
              Ludovic Orban
            • Reporter:
              Guenther Demetz
            • Votes:
              0
            • Watchers:
              0

              Dates

              • Created:
                Updated:
                Resolved: