  1. Sep 17, 2017
  2. Sep 07, 2017
  3. Aug 03, 2017
  4. Jul 04, 2017
  5. Jun 19, 2017
    • tcg: allocate TB structs before the corresponding translated code · 6e3b2bfd
      Emilio G. Cota authored
      Allocating an arbitrarily sized array of TBs results in either
      (a) a lot of wasted memory or (b) unnecessary flushes of the code
      cache when we run out of TB structs in the array.
      
      An obvious solution would be to just malloc a TB struct when needed,
      and keep the TB array as an array of pointers (recall that tb_find_pc()
      needs the TB array to run in O(log n)).
      
      Perhaps a better solution, which is implemented in this patch, is to
      allocate TBs right before the translated code they describe. This
      results in some memory waste due to the padding needed to keep code
      and TBs in separate cache lines -- for instance, I measured 4.7% of
      padding in the used portion of code_gen_buffer when booting aarch64
      Linux on a host with 64-byte cache lines. However, it allows for
      optimizations on some host architectures, since TCG backends can
      safely assume that the TB and the corresponding translated code are
      very close to each other in memory. See this message by rth for a
      detailed explanation:
      
        https://lists.gnu.org/archive/html/qemu-devel/2017-03/msg05172.html
      
      
        Subject: Re: GSoC 2017 Proposal: TCG performance enhancements
        Message-ID: <1e67644b-4b30-887e-d329-1848e94c9484@twiddle.net>
      
      Suggested-by: Richard Henderson <rth@twiddle.net>
      Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Message-Id: <1496790745-314-3-git-send-email-cota@braap.org>
      [rth: Simplify the arithmetic in tcg_tb_alloc]
      Signed-off-by: Richard Henderson <rth@twiddle.net>
      6e3b2bfd
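
      A minimal sketch of the allocation scheme described in this commit: carve
      the next TB header out of the code buffer itself, so the struct sits right
      before the code it describes, padded out to the next cache line. All names
      here (tb_alloc_sketch, code_ptr, code_end, CACHE_LINE_SIZE, the trimmed
      TranslationBlock fields) are illustrative assumptions, not the actual qemu
      code, which does this arithmetic in tcg_tb_alloc.

        #include <stdint.h>
        #include <stddef.h>

        #define CACHE_LINE_SIZE 64   /* assumed host cache-line size */

        /* Trimmed-down TB header, for illustration only. */
        typedef struct TranslationBlock {
            uintptr_t pc;            /* guest PC this TB translates */
            void *tc_ptr;            /* start of the translated host code */
        } TranslationBlock;

        static uintptr_t align_up(uintptr_t p, uintptr_t a)
        {
            return (p + a - 1) & ~(a - 1);
        }

        /* Carve a TB header out of the code buffer and point *code_ptr at the
         * cache line right after it, where the translated code will be emitted.
         * Returns NULL when the buffer is exhausted, i.e. the code cache must
         * be flushed. */
        static TranslationBlock *tb_alloc_sketch(uint8_t **code_ptr, uint8_t *code_end)
        {
            uintptr_t tb_addr = align_up((uintptr_t)*code_ptr, CACHE_LINE_SIZE);
            /* Code starts on the next cache line after the header; the gap is
             * the padding overhead quoted in the commit message (~4.7%). */
            uintptr_t code_addr = align_up(tb_addr + sizeof(TranslationBlock),
                                           CACHE_LINE_SIZE);

            if (code_addr >= (uintptr_t)code_end) {
                return NULL;
            }

            TranslationBlock *tb = (TranslationBlock *)tb_addr;
            tb->tc_ptr = (void *)code_addr;
            *code_ptr = (uint8_t *)code_addr;   /* emission continues from here */
            return tb;
        }

      After translation the caller would advance code_ptr past the emitted code,
      so successive TB headers and their code land at increasing addresses in
      the buffer, which keeps the structure that tb_find_pc() binary-searches
      naturally sorted.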
  6. Jun 05, 2017
    • tcg: Introduce goto_ptr opcode and tcg_gen_lookup_and_goto_ptr · cedbcb01
      Emilio G. Cota authored
      
      Instead of exporting goto_ptr directly to TCG frontends, export
      tcg_gen_lookup_and_goto_ptr(), which calls goto_ptr with the pointer
      returned by the lookup_tb_ptr() helper. This is the only use case
      we have for goto_ptr and lookup_tb_ptr, so having this function is
      very convenient. Furthermore, it trivially allows us to avoid calling
      the lookup helper if goto_ptr is not implemented by the backend.
      
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Message-Id: <1493263764-18657-2-git-send-email-cota@braap.org>
      Message-Id: <1493263764-18657-3-git-send-email-cota@braap.org>
      Message-Id: <1493263764-18657-4-git-send-email-cota@braap.org>
      Message-Id: <1493263764-18657-5-git-send-email-cota@braap.org>
      [rth: Squashed 4 related commits.]
      Signed-off-by: Richard Henderson <rth@twiddle.net>
      cedbcb01
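
      The control flow the commit describes boils down to the small decision
      sketched below. The emit_* helpers and the value of TCG_TARGET_HAS_goto_ptr
      are stand-ins invented for illustration; the real implementation emits the
      helper call and the goto_ptr opcode through the TCG op-generation API.

        #include <stdint.h>

        /* Hypothetical stand-ins -- not the real TCG API. */
        #define TCG_TARGET_HAS_goto_ptr 1          /* 0 on backends without goto_ptr */
        extern void emit_call_lookup_tb_ptr(void); /* call the lookup_tb_ptr() helper */
        extern void emit_goto_ptr(void);           /* emit the goto_ptr opcode */
        extern void emit_exit_tb(uintptr_t val);   /* emit a plain exit_tb */

        /* Shape of tcg_gen_lookup_and_goto_ptr(): frontends call this single
         * function instead of emitting goto_ptr themselves, and the lookup
         * helper call is only generated when the backend implements goto_ptr. */
        static void gen_lookup_and_goto_ptr_sketch(void)
        {
            if (TCG_TARGET_HAS_goto_ptr) {
                emit_call_lookup_tb_ptr();   /* returns the next TB's code pointer */
                emit_goto_ptr();             /* jump straight to that code */
            } else {
                emit_exit_tb(0);             /* no goto_ptr: return to the main loop */
            }
        }

      The else branch is what makes it trivial to skip the lookup helper call
      entirely on backends that do not implement goto_ptr.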
  7. Feb 24, 2017
  8. Feb 22, 2017
  9. Jan 10, 2017
  10. Nov 01, 2016
  11. Oct 31, 2016
  12. Oct 26, 2016
  13. Sep 16, 2016
  14. Sep 15, 2016
  15. Aug 05, 2016
  16. Jul 17, 2016
  17. Jul 06, 2016
    • tcg: Improve the alignment check infrastructure · 1f00b27f
      Sergey Sorokin authored
      
      Some architectures (e.g. ARMv8) require an address to be aligned to a
      size larger than that of the memory access itself. The current
      zero-cost alignment check implementation in QEMU is enough to support
      such a check, but we need a way to specify the alignment size.
      
      Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
      Message-Id: <1466705806-679898-1-git-send-email-afarallax@yandex.ru>
      Signed-off-by: Richard Henderson <rth@twiddle.net>
      [rth: Assert in tcg_canonicalize_memop.  Leave get_alignment_bits
      available for, though unused by, user-mode.  Retain logging difference
      based on ALIGNED_ONLY.]
      1f00b27f
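
      One way to picture the change: the memory-access operand gains a separate
      alignment field, so the required alignment can exceed the access size while
      the check itself stays a single mask of the address. The bit layout and
      names below are invented for illustration and do not match qemu's actual
      TCGMemOp encoding or its get_alignment_bits() helper.

        /* Illustrative memop layout: low bits = log2 of the access size,
         * a separate field = log2 of the required alignment (+1, so that
         * a stored 0 means "no alignment check"). */
        enum {
            SIZE_MASK   = 0x3,                  /* log2 size: 1, 2, 4 or 8 bytes */
            ALIGN_SHIFT = 4,
            ALIGN_MASK  = 0x7 << ALIGN_SHIFT,
        };

        /* Build a memop whose alignment may exceed its size, e.g. an 8-byte
         * access that must be 16-byte aligned: make_memop(3, 4). */
        static inline unsigned make_memop(unsigned size_log2, unsigned align_log2)
        {
            return (size_log2 & SIZE_MASK) | ((align_log2 + 1) << ALIGN_SHIFT);
        }

        /* Extract the alignment requirement as a log2 bit count. */
        static inline unsigned get_alignment_bits_sketch(unsigned memop)
        {
            unsigned a = (memop & ALIGN_MASK) >> ALIGN_SHIFT;
            return a ? a - 1 : 0;               /* 0: no check needed */
        }

        /* The check stays as cheap as before: one mask of the low address bits. */
        static inline int is_misaligned(unsigned long addr, unsigned memop)
        {
            unsigned bits = get_alignment_bits_sketch(memop);
            return (addr & ((1ul << bits) - 1)) != 0;
        }

      Natural alignment (alignment equal to the access size) falls out of the
      same encoding by passing the size as the alignment.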
  18. Jun 20, 2016
  19. May 19, 2016
  20. May 13, 2016
    • tcg: Clean up from 'next_tb' · 819af24b
      Sergey Fedorov authored
      
      The value returned from tcg_qemu_tb_exec() is the value passed to the
      corresponding tcg_gen_exit_tb() at translation time of the last TB
      attempted to execute. It is a little confusing to store it in a variable
      named 'next_tb'. In fact, it is a combination of a 4-byte-aligned pointer
      and additional information in its two least significant bits. Break it
      down right away into two variables named 'last_tb' and 'tb_exit', which
      hold a pointer to the last TB attempted to execute and the TB exit
      reason, respectively. This simplifies the code and improves its
      readability.
      
      Correct a misleading documentation comment for tcg_qemu_tb_exec() and
      fix logging in cpu_tb_exec(). Also rename a misleading 'next_tb' in
      another couple of places.
      
      Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
      Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
      Signed-off-by: Richard Henderson <rth@twiddle.net>
      819af24b
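
      A sketch of the split, under the interpretation given in the message
      (a 4-byte-aligned TB pointer with the exit reason in bits 0-1). The mask
      name and the reduced TranslationBlock are illustrative; the real code uses
      qemu's own TB_EXIT_* constants.

        #include <stdint.h>

        typedef struct TranslationBlock TranslationBlock;  /* opaque here */

        #define TB_EXIT_MASK_SKETCH 0x3u   /* low two bits carry the exit reason */

        /* tcg_qemu_tb_exec() returns the value passed to tcg_gen_exit_tb() by
         * the last TB attempted: a 4-byte-aligned TB pointer with the exit
         * reason packed into its two least significant bits.  Break it apart
         * immediately instead of carrying it around as 'next_tb'. */
        static inline void decode_tb_exec_ret(uintptr_t ret,
                                              TranslationBlock **last_tb,
                                              unsigned *tb_exit)
        {
            *last_tb = (TranslationBlock *)(ret & ~(uintptr_t)TB_EXIT_MASK_SKETCH);
            *tb_exit = (unsigned)(ret & TB_EXIT_MASK_SKETCH);
        }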