From xen-devel-bounces@lists.xen.org Mon May 19 15:26:24 2014
Message-Id: <537A2F040200007800013B6B@mail.emea.novell.com>
Date: Mon, 19 May 2014 15:19:16 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap", "Yang Z Zhang"
References: <20140210080314.GA758@deinos.phlegethon.org>
 <20140211090202.GC92054@deinos.phlegethon.org>
 <20140211115553.GB97288@deinos.phlegethon.org>
 <52FA2C63020000780011B201@nat28.tlf.novell.com>
 <52FA480D.9040707@eu.citrix.com>
 <52FCE8BE.8050105@eu.citrix.com>
 <52FCF90F020000780011C29A@nat28.tlf.novell.com>
 <20140213162022.GE82703@deinos.phlegethon.org>
 <537A284F0200007800013ADC@mail.emea.novell.com>
 <537A0E55.502@eu.citrix.com>
In-Reply-To: <537A0E55.502@eu.citrix.com>
Cc: "andrew.cooper3@citrix.com", Tim Deegan, "xen-devel@lists.xen.org",
 "Keir Fraser (keir.xen@gmail.com)", Jun Nakajima, Xiantao Zhang
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log dirty to track vram

>>> On 19.05.14 at 15:59, wrote:
> On 05/19/2014 02:50 PM, Jan Beulich wrote:
>>>>> On 19.05.14 at 15:27, wrote:
>>> On Mon, May 19, 2014 at 8:48 AM, Zhang, Yang Z wrote:
>>>> Because I just noticed that someone is asking when Intel will
>>>> implement the VT-d page table separately. Actually, I was totally
>>>> unaware of it. The original issue that this patch tries to fix is
>>>> VRAM tracking, which uses the global log-dirty mode. And I thought
>>>> the best place to fix it is on the VRAM side, not the VT-d side,
>>>> because even with a separate VT-d page table we still cannot track
>>>> memory updates coming from DMA. Even worse, I think two page tables
>>>> introduce redundant code and maintenance effort. So I wonder: is it
>>>> really necessary to implement the separate VT-d large page?
>>>
>>> Yes, it does introduce redundant code.
>>> But unfortunately, IOMMU faults at the moment have to be considered
>>> rather risky; having one happen risks (in order of decreasing
>>> probability / increasing damage):
>>> * The device stops working for that VM until an FLR (losing a lot of its state)
>>> * The VM has to be killed
>>> * The device stops working until a host reboot
>>> * The host crashes
>>>
>>> Avoiding these by "hoping" that the guest OS doesn't DMA into a video
>>> buffer isn't really robust enough. I think that was Tim and Jan's
>>> primary reason for wanting the ability to have separate tables for
>>> HAP and the IOMMU.
>>>
>>> Is that about right, Jan / Tim?
>>
>> Yes, and not just "about" (perhaps with the exception that I think/
>> hope we don't have any lurking host crashes here).
>
> I think the fear was that buggy hardware might cause a host crash / hang.

That's a valid concern indeed.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel