From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Tim Deegan, Jan Beulich
Cc: George Dunlap, "andrew.cooper3@citrix.com", "xen-devel@lists.xen.org",
 "'Keir Fraser (keir.xen@gmail.com)'", "Nakajima, Jun", "Zhang, Xiantao"
Date: Mon, 19 May 2014 07:48:01 +0000
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
In-Reply-To: <20140213162022.GE82703@deinos.phlegethon.org>
Tim Deegan wrote on 2014-02-14:
> At 15:55 +0000 on 13 Feb (1392303343), Jan Beulich wrote:
>>>>> On 13.02.14 at 16:46, George Dunlap wrote:
>>> On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
>>>> George Dunlap wrote on 2014-02-11:
>>>>> I think I got a bit distracted with the "A isn't really so bad"
>>>>> thing. Actually, if the overhead of not sharing tables isn't very
>>>>> high, then B isn't such a bad option. In fact, B is what I
>>>>> expected Yang to submit when he originally described the problem.
>>>> Actually, the first solution that came to my mind was B. Then I
>>>> realized that even if we chose B, we still could not track memory
>>>> updates coming from DMA (even with the A/D bits it is still a
>>>> problem). Also, considering the current use of log-dirty mode in
>>>> Xen (only vram tracking has the problem), I thought A was better:
>>>> the hypervisor only needs to track changes to the vram. If a
>>>> malicious guest tries to DMA into the vram range, it only crashes
>>>> itself (which should be reasonable).
>>>>
>>>>> I was going to say, from a release perspective, B is probably the
>>>>> safest option for now. But on the other hand, if we've been
>>>>> testing sharing all this time, maybe switching back over to
>>>>> non-sharing whole-hog has the higher risk?
>>>> Another problem with B is that the current VT-d large-page support
>>>> relies on sharing the EPT and VT-d page tables. This means that if
>>>> we choose B, we then need to re-enable VT-d large pages; this
>>>> would be a huge performance impact for Xen 4.4 when using VT-d.
>>>
>>> OK -- if that's the case, then it definitely tips the balance back
>>> to A. Unless Tim or Jan disagrees, can one of you two check it in?
>>>
>>> Don't rush your judgement; but it would be nice to have this in
>>> before RC4, which would mean checking it in today preferably, or
>>> early tomorrow at the latest.
>>
>> That would be Tim then, as he would have to approve of it anyway.
>
> Done.
>
>> I should also say that while I certainly understand the
>> argumentation above, I would still want to go this route only with
>> the promise that B is going to be worked on reasonably soon after
>> the release, ideally with the goal of backporting the changes for
>> 4.4.1.
>
> Agreed.
>
> Tim.

Hi all,

Sorry to dig up this old thread. I just noticed that someone is asking
when Intel will implement the separate VT-d page table; actually, I was
completely unaware of that request. The original issue this patch tries
to fix is VRAM tracking, which was using the global log-dirty mode, and
I thought the best place to fix it was the VRAM side, not the VT-d side:
even with a separate VT-d page table we still cannot track memory
updates coming from DMA. Worse, I think two page tables introduce
redundant code and extra maintenance effort. So I wonder: is it really
necessary to implement the separate VT-d page table with large-page
support?

Best regards,
Yang

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
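
The range-limited tracking that Yang describes as option A can be
illustrated with a minimal, self-contained sketch. All names below
(vram_tracker, vram_note_write, vram_collect_dirty) are hypothetical and
purely illustrative; Xen's real implementation goes through the
HVMOP_track_dirty_vram hypercall and the hap/shadow log-dirty code, not
anything shown here. The sketch also makes the thread's limitation
explicit: only faulting CPU writes can be recorded, so DMA writes are
never seen regardless of which page-table layout is used.

    /*
     * Hypothetical sketch: instead of enabling log-dirty mode for all
     * guest memory, only the VRAM GFN range is tracked.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct vram_tracker {
        uint64_t start_gfn;   /* first guest frame of the VRAM range */
        uint64_t nr_frames;   /* number of frames being tracked */
        uint8_t *dirty;       /* one bit per tracked frame */
    };

    static int vram_track_init(struct vram_tracker *t, uint64_t start_gfn,
                               uint64_t nr_frames)
    {
        t->start_gfn = start_gfn;
        t->nr_frames = nr_frames;
        t->dirty = calloc((nr_frames + 7) / 8, 1);
        return t->dirty ? 0 : -1;
    }

    /* Called from the write-fault path: only VRAM writes are recorded,
     * so CPU writes outside the range (and DMA writes, which never
     * fault) are simply not seen -- the limitation discussed above. */
    static void vram_note_write(struct vram_tracker *t, uint64_t gfn)
    {
        if (gfn < t->start_gfn || gfn >= t->start_gfn + t->nr_frames)
            return;                        /* outside VRAM: not tracked */
        uint64_t i = gfn - t->start_gfn;
        t->dirty[i / 8] |= 1u << (i % 8);
    }

    /* Copy out and clear the dirty bitmap, as a toolstack would do each
     * time it refreshes the emulated display. */
    static void vram_collect_dirty(struct vram_tracker *t, uint8_t *out)
    {
        size_t bytes = (t->nr_frames + 7) / 8;
        memcpy(out, t->dirty, bytes);
        memset(t->dirty, 0, bytes);
    }

    int main(void)
    {
        struct vram_tracker t;
        if (vram_track_init(&t, 0xf0000, 16)) /* 16-frame VRAM at 0xf0000 */
            return 1;

        vram_note_write(&t, 0xf0003);   /* VRAM write: recorded */
        vram_note_write(&t, 0x12345);   /* ordinary RAM write: ignored */

        uint8_t bitmap[2];
        vram_collect_dirty(&t, bitmap);
        printf("dirty bitmap: %02x %02x\n", bitmap[0], bitmap[1]);
        free(t.dirty);
        return 0;
    }

Because the tracker only write-protects and scans the VRAM range, the
cost of dirty logging stays proportional to the framebuffer size rather
than to all of guest memory, which is the trade-off the thread settles on.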