Date: Mon, 19 May 2014 14:27:59 +0100
From: George Dunlap
To: "Zhang, Yang Z"
Cc: Jan Beulich, "andrew.cooper3@citrix.com", Tim Deegan,
    "xen-devel@lists.xen.org", "Keir Fraser (keir.xen@gmail.com)",
    "Nakajima, Jun", "Zhang, Xiantao"
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log dirty to track vram

On Mon, May 19, 2014 at 8:48 AM, Zhang, Yang Z wrote:
> Tim Deegan wrote on 2014-02-14:
>> At 15:55 +0000 on 13 Feb (1392303343), Jan Beulich wrote:
>>> On 13.02.14 at 16:46, George Dunlap wrote:
>>>> On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
>>>>> George Dunlap wrote on 2014-02-11:
>>>>>> I think I got a bit distracted with the "A isn't really so bad" thing.
>>>>>> Actually, if the overhead of not sharing tables isn't very high,
>>>>>> then B isn't such a bad option.
>>>>>> In fact, B is what I expected Yang to submit when he originally
>>>>>> described the problem.
>>>>> Actually, the first solution that came to my mind was B. Then I
>>>>> realized that even if we chose B, we still could not track memory
>>>>> updates from DMA (even with the A/D bits, it is still a problem).
>>>>> Also, considering the current use case of log dirty in Xen (only
>>>>> vram tracking has a problem), I thought A was better: the
>>>>> hypervisor only needs to track vram changes. If a malicious guest
>>>>> tries to DMA into the vram range, it only crashes itself (this
>>>>> should be reasonable).
>>>>>
>>>>>> I was going to say, from a release perspective, B is probably
>>>>>> the safest option for now. But on the other hand, if we've been
>>>>>> testing sharing all this time, maybe switching back over to
>>>>>> non-sharing whole-hog has the higher risk?
>>>>> Another problem with B is that the current VT-d large-page
>>>>> support relies on sharing the EPT and VT-d page tables. This
>>>>> means that if we choose B, we would need to re-enable VT-d large
>>>>> pages. That would be a huge performance impact for Xen 4.4 when
>>>>> using the VT-d solution.
>>>>
>>>> OK -- if that's the case, then it definitely tips the balance back
>>>> to A. Unless Tim or Jan disagrees, can one of you two check it in?
>>>>
>>>> Don't rush your judgement; but it would be nice to have this in
>>>> before RC4, which would mean checking it in today preferably, or
>>>> early tomorrow at the latest.
>>>
>>> That would be Tim then, as he would have to approve of it anyway.
>>
>> Done.
>>
>>> I should also say that while I certainly understand the
>>> argumentation above, I would still want to go this route only with
>>> the promise that B is going to be worked on reasonably soon after
>>> the release, ideally with the goal of backporting the changes for
>>> 4.4.1.
>>
>> Agreed.
>>
>> Tim.
>
> Hi all,
>
> Sorry to dig up this old thread.
> I just noticed that someone is asking when Intel will implement the
> VT-d page table separately. Actually, I was totally unaware of it.
> The original issue that this patch tries to fix is vram tracking,
> which uses the global log-dirty mode, and I thought the best place to
> fix it was on the vram side, not the VT-d side: even with a separate
> VT-d page table, we still cannot track memory updates from DMA. Even
> worse, I think two page tables introduce redundant code and
> maintenance effort. So I wonder: is it really necessary to implement
> the separate VT-d page table?

Yes, it does introduce redundant code. But unfortunately, IOMMU faults
at the moment have to be considered rather risky; having one happen
risks (in order of decreasing probability / increasing damage):

 * the device stops working for that VM until an FLR (losing a lot of
   its state)
 * the VM has to be killed
 * the device stops working until a host reboot
 * the host crashes

Avoiding these by "hoping" that the guest OS doesn't DMA into a video
buffer isn't really robust enough. I think that was Tim and Jan's
primary reason for wanting the ability to have separate tables for HAP
and the IOMMU.

Is that about right, Jan / Tim?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
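[Editor's sketch] The trade-off debated in this thread can be captured in a small toy model. This is not Xen code: every identifier below is invented for illustration, and the "page tables" are just boolean arrays. It models the two designs: option A (tables stay shared between EPT and VT-d, so write-protecting a page for dirty tracking also blocks DMA, which surfaces as an IOMMU fault with the consequences George lists), and option B (the IOMMU gets its own table that stays writable, so DMA never faults, but, as Yang notes, DMA writes then never reach the dirty log).

```c
/* Toy model of shared vs. separate CPU/IOMMU page tables under
 * log-dirty tracking.  Not Xen code; all names are made up. */
#include <assert.h>
#include <stdbool.h>

#define NPAGES      16
#define VRAM_START   4          /* assumed VRAM range [4, 8) */
#define VRAM_END     8

static bool ept_writable[NPAGES];   /* CPU-side (EPT) permission     */
static bool iommu_writable[NPAGES]; /* device-side (VT-d) permission */
static bool dirty[NPAGES];          /* log-dirty bitmap              */
static bool shared_tables;          /* true: IOMMU walks the EPT     */

/* "Option A": write-protect only the VRAM range; tables stay shared. */
static void enable_vram_log_dirty(void)
{
    shared_tables = true;
    for (int i = 0; i < NPAGES; i++)
        ept_writable[i] = !(i >= VRAM_START && i < VRAM_END);
}

/* "Option B": write-protect everything, but only in the CPU's view. */
static void enable_global_log_dirty_separate(void)
{
    shared_tables = false;
    for (int i = 0; i < NPAGES; i++) {
        ept_writable[i] = false;   /* CPU writes get trapped          */
        iommu_writable[i] = true;  /* DMA keeps working, never faults */
    }
}

/* CPU write: an EPT violation is recoverable -- log, re-allow, retry. */
static void cpu_write(int page)
{
    if (!ept_writable[page]) {
        dirty[page] = true;
        ept_writable[page] = true;
    }
}

/* DMA write: returns false on an (unrecoverable) IOMMU fault.  Note
 * that it never sets dirty[] -- hardware of that era gave the
 * hypervisor no way to log a DMA write. */
static bool dma_write(int page)
{
    return shared_tables ? ept_writable[page] : iommu_writable[page];
}
```

Under option A, `dma_write(VRAM_START)` returns false, modelling the "guest DMAs into its own video buffer and takes an IOMMU fault" hazard, while `cpu_write()` on the same page is simply logged. Under option B, a DMA write anywhere succeeds but leaves `dirty[]` clear, i.e. a migration pass would miss that page.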