From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: George Dunlap, Jan Beulich, Tim Deegan
Cc: "andrew.cooper3@citrix.com", "Zhang, Xiantao", "xen-devel@lists.xen.org"
Date: Wed, 12 Feb 2014 08:53:35 +0800
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log dirty to track vram

George Dunlap wrote on 2014-02-11:
> On 02/11/2014 12:57 PM, Jan Beulich wrote:
>>>>> On 11.02.14 at 12:55, Tim Deegan wrote:
>>> At 10:59 +0000 on 11 Feb (1392112778), George Dunlap wrote:
>>>> What I'm missing here is what you think a proper solution is.
>>>
>>> A _proper_ solution would be for the IOMMU h/w to allow restartable
>>> faults, so that we can do all the usual fault-driven virtual memory
>>> operations with DMA. :)  In the meantime...
>>
>> Or maintaining the A/D bits for IOMMU side accesses too.
>>
>>>> It seems we have:
>>>> A.
>>>> Share EPT/IOMMU tables, only do log-dirty tracking on the
>>>> buffer being tracked, and hope the guest doesn't DMA into video
>>>> ram; DMA causes an IOMMU fault.  (This really shouldn't crash the
>>>> host under normal circumstances; if it does, it's a hardware bug.)
>>>
>>> Note "hope" and "shouldn't" there. :)
>>>
>>>> B. Never share EPT/IOMMU tables, and hope the guest doesn't DMA
>>>> into video ram.  DMA causes a missed update to the dirty bitmap,
>>>> which will hopefully just cause screen corruption.
>>>
>>> Yep.  At a cost of about 0.2% in space and some extra bookkeeping
>>> (for VMs that actually have devices passed through to them).
>>> The extra bookkeeping could be expensive in some cases, but
>>> basically all of those cases are already incompatible with IOMMU.
>>>
>>>> C. Do buffer scanning rather than dirty vram tracking (SLOW)
>>>>
>>>> D. Don't allow both a virtual video card and pass-through
>>>
>>> E. Share EPT and IOMMU tables until someone turns on log-dirty
>>> mode, and then split them out.  That one
>>
>> Wouldn't that be problematic in terms of memory being available,
>> namely when using ballooning in Dom0?
>>
>>>> Given that most operating systems will probably *not* DMA into
>>>> video ram, and that an IOMMU fault isn't *supposed* to be able to
>>>> crash the host, 'A' seems like the most reasonable option to me.
>>>
>>> Meh, OK.  I prefer 'B', but 'A' is better than nothing, I guess,
>>> and it seems to have the most support from other people.  On that
>>> basis this patch can have my Ack.
>>
>> I too would consider B better than A.
>
> I think I got a bit distracted with the "A isn't really so bad"
> thing.  Actually, if the overhead of not sharing tables isn't very
> high, then B isn't such a bad option.  In fact, B is what I expected
> Yang to submit when he originally described the problem.

Actually, the first solution that came to my mind was B.
Then I realized that even if we chose B, we still could not track
memory updates from DMA (even with A/D bits, it is still a problem).
Also, considering the current use of log-dirty mode in Xen (only vram
tracking has the problem), I thought A was better: the hypervisor only
needs to track vram changes, and if a malicious guest tries to DMA
into the vram range, it only crashes itself (which seems reasonable).

> I was going to say, from a release perspective, B is probably the
> safest option for now.  But on the other hand, if we've been testing
> sharing all this time, maybe switching back over to non-sharing
> whole-hog has the higher risk?

Another problem with B is that the current VT-d large-page support
relies on sharing the EPT and VT-d page tables.  This means that if
we choose B, we would need to re-enable VT-d large-page support
separately.  That would be a big performance impact for Xen 4.4 when
using VT-d.

> Anyway, both are probably at least equal risk-wise.  How easy is it
> to implement?
>
>  -George

Best regards,
Yang

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel