Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: 3.2
    • Component/s: MMTk
    • Labels: None
    • Number of attachments: 3

      Description

      If we allow promotion into the LOS, then large objects can be allocated into the regular nursery (rather than the PLOS), and subsequently copied into the LOS at collection time. This ameliorates the problem of RVM-619 because we remove the PLOS (though we still have a nursery with a prescribed size, so the problem has not completely gone away).

      Note that this only affects non-reference arrays, since reference arrays are already pretenured directly into the LOS.
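      As a rough illustration of the allocation-site decision this implies (the names below are hypothetical, not the actual MMTk identifiers): without a PLOS, large non-reference arrays take the ordinary nursery bump-pointer path, while large reference arrays are still pretenured directly into the LOS.

          // Hypothetical sketch of allocator selection without a PLOS.
          // Space, LARGE_THRESHOLD and chooseSpace are illustrative names,
          // not the real MMTk API.
          final class AllocSiteSketch {
            enum Space { NURSERY, LOS }

            static final int LARGE_THRESHOLD = 8 * 1024; // assumed size cutoff, in bytes

            /** Choose where to allocate an array of the given size. */
            static Space chooseSpace(int bytes, boolean isReferenceArray) {
              if (isReferenceArray && bytes > LARGE_THRESHOLD) {
                return Space.LOS; // reference arrays remain pretenured into the LOS
              }
              // Everything else, however large, takes the regular nursery bump
              // pointer and is promoted into the LOS at collection time.
              return Space.NURSERY;
            }
          }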

      Attachments

      1. loscopy.patch (3 kB, Steve Blackburn)
      2. noplos.patch (16 kB, Steve Blackburn)
      3. removeplos.patch (39 kB, Steve Blackburn)

          Activity

          Steve Blackburn added a comment -

          The attached patch is work in progress on this, applied only to the SemiSpace collector. It allows all large objects to be allocated with the semi-space bump pointer; they are then copied into the LOS at collection time. Optionally (via a static final boolean), they can instead be copied into the other semi-space (so the large objects remain in the semi-space forever, and are copied at every GC while they are live).
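          A minimal sketch of the copy-time choice described above, with illustrative names only (the real change is in loscopy.patch): when a large surviving object is traced, it is promoted into the LOS, or, if the static final boolean is set, simply copied into the other semi-space.

              // Hypothetical sketch of the copy-time decision; identifiers
              // are illustrative, not the actual loscopy.patch code.
              final class CopySketch {
                /** If true, keep large objects in the semi-spaces forever,
                    copying them at every GC while they are live. */
                static final boolean COPY_LARGE_WITHIN_SEMISPACE = false;

                static final int LARGE_THRESHOLD = 8 * 1024; // assumed cutoff, in bytes

                enum Space { TO_SPACE, LOS }

                /** Decide where a surviving object is copied during collection. */
                static Space copyTarget(int bytes) {
                  if (bytes > LARGE_THRESHOLD && !COPY_LARGE_WITHIN_SEMISPACE) {
                    return Space.LOS;    // promote: copied once, then never again
                  }
                  return Space.TO_SPACE; // regular semi-space copy
                }
              }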

          Steve Blackburn added a comment -

          This patch against the head avoids using the PLOS in all GCs. For copying collectors, the objects are allocated using the bump allocator and then copied to the LOS at GC time.

          Steve Blackburn added a comment -

          This patch removes the PLOS completely.

          Steve Blackburn added a comment -

          I have committed a working version in r14970.

          This is performance neutral across dacapo, jvm98 and jbb2000 in moderate heaps (worst 3% degradation, best 11% win, average insignificant win). It looks to be a slight winner in tight heaps.

          Results below show production (left) and production with no PLOS (right). The worst result for no PLOS is eclipse (-3%); the best is pmd (+11%).

          time (lower is better)

          benchmark        divisor    prod | prodNP
          _201_compress       3581   1.000 | 1.018
          _202_jess           1100   1.000 | 1.001
          _205_raytrace        950   1.001 | 1.002
          _209_db             6400   1.000 | 0.983
          _213_javac          2993   1.000 | 0.999
          _222_mpegaudio      2482   1.000 | 1.001
          _227_mtrt            781   1.001 | 1.020
          _228_jack           2059   1.000 | 0.999
          antlr               1724   1.000 | 1.010
          bloat               6187   1.000 | 0.995
          chart               6702   1.000 | 1.009
          eclipse            28176   1.000 | 1.031
          fop                 1835   1.000 | 1.007
          hsqldb              1812   1.000 | 1.017
          jython              5885   1.000 | 0.988
          luindex             8104   1.000 | 0.984
          lusearch               0         | 3837.35
          pjbb2000           15169   1.000 | 1.005
          pmd                 4756   1.000 | 0.887
          xalan               4864   1.000 | 0.995
          min                        1.000 | 0.887
          max                        1.001 | 1.031
          mean                       1.000 | 0.997
          geomean                    1.000 | 0.997

          Steve Blackburn added a comment -

          We can probably optimize this further.

          For example, we could use mmap to avoid copying large primitive arrays.
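          A hypothetical sketch of that idea, assuming a VM-provided remapPages hook (no such call exists in the attached patches; on Linux it might be backed by mremap): for a page-aligned large primitive array, retag the pages at the new LOS address instead of copying the payload.

              // Hypothetical sketch of remap-instead-of-copy for large
              // primitive arrays. VMHooks, remapPages and copyBytes are
              // assumed interfaces, not anything in the attached patches.
              final class RemapSketch {
                static final int PAGE_SIZE = 4096;

                interface VMHooks {
                  /** Move whole pages to another virtual address without
                      copying their contents (e.g. mremap on Linux). */
                  void remapPages(long fromAddr, long toAddr, int bytes);
                  void copyBytes(long fromAddr, long toAddr, int bytes);
                }

                /** Promote a large primitive array into the LOS. */
                static void promote(VMHooks vm, long objAddr, long losAddr, int bytes) {
                  if (bytes >= PAGE_SIZE
                      && objAddr % PAGE_SIZE == 0
                      && losAddr % PAGE_SIZE == 0) {
                    // Page-aligned payload: remap the pages instead of copying.
                    vm.remapPages(objAddr, losAddr, bytes);
                  } else {
                    vm.copyBytes(objAddr, losAddr, bytes); // fall back to a real copy
                  }
                }
              }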

          Steve Blackburn added a comment -

          I had to back out r14970 because (a) it was broken, and (b) it exposed another bug (RVM-605).

          r14985, r14986 and r14987 have fixed the problems and again removed the PLOS.


            People

            • Assignee: Unassigned
            • Reporter: Steve Blackburn
            • Votes: 0
            • Watchers: 0
