Date: Mon, 3 Jun 2013 16:14:03 +0100
From: Stefano Stabellini
To: Konrad Rzeszutek Wilk
Cc: Hanweidong, George Dunlap, xudong.hao@intel.com, Yanqiangjun, Luonengjun, Wangzhenguo, Yangxiaowei, Gonglei (Arei), xiantao.zhang@intel.com, Anthony Perard, xen-devel@lists.xen.org, Stefano Stabellini
Subject: Re: [Xen-devel] GPU passthrough issue when VM is configured with 4G memory
In-Reply-To: <20130603131115.GJ6893@phenom.dumpdata.com>
References: <513739DD.8050507@eu.citrix.com> <513868F0.6020104@eu.citrix.com> <20130603131115.GJ6893@phenom.dumpdata.com>

On Mon, 3 Jun 2013, Konrad Rzeszutek Wilk wrote:
> On Wed, May 29, 2013 at 05:18:24PM +0100, Stefano Stabellini wrote:
> > On Thu, 25 Apr 2013, Hanweidong wrote:
> > > > -----Original Message-----
> > > > From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Hanweidong
> > > > Sent: 26 March 2013 17:38
> > > > To: Stefano Stabellini
> > > > Cc: George Dunlap; xudong.hao@intel.com; Yanqiangjun; Luonengjun; Wangzhenguo; Yangxiaowei; Gonglei (Arei); Anthony Perard; xen-devel@lists.xen.org; xiantao.zhang@intel.com
> > > > Subject: Re: [Xen-devel] GPU passthrough issue when VM is configured with 4G memory
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> > > > > Sent: 18 March 2013 20:02
> > > > > To: Hanweidong
> > > > > Cc: George Dunlap; Stefano Stabellini; Yanqiangjun; Luonengjun; Wangzhenguo; Yangxiaowei; Gonglei (Arei); Anthony Perard; xen-devel@lists.xen.org; xudong.hao@intel.com; xiantao.zhang@intel.com
> > > > > Subject: RE: [Xen-devel] GPU passthrough issue when VM is configured with 4G memory
> > > > >
> > > > > On Wed, 13 Mar 2013, Hanweidong wrote:
> > > > > > The MMIO hole was adjusted to e0000000 - fc000000, but QEMU uses the code below to initialize RAM in xen_ram_init:
> > > > > >
> > > > > >     ...
> > > > > >     block_len = ram_size;
> > > > > >     if (ram_size >= HVM_BELOW_4G_RAM_END) {
> > > > > >         /* Xen does not allocate the memory continuously, and keep a hole at
> > > > > >          * HVM_BELOW_4G_MMIO_START of HVM_BELOW_4G_MMIO_LENGTH
> > > > > >          */
> > > > > >         block_len += HVM_BELOW_4G_MMIO_LENGTH;
> > > > > >     }
> > > > > >     memory_region_init_ram(&ram_memory, "xen.ram", block_len);
> > > > > >     vmstate_register_ram_global(&ram_memory);
> > > > > >
> > > > > >     if (ram_size >= HVM_BELOW_4G_RAM_END) {
> > > > > >         above_4g_mem_size = ram_size - HVM_BELOW_4G_RAM_END;
> > > > > >         below_4g_mem_size = HVM_BELOW_4G_RAM_END;
> > > > > >     } else {
> > > > > >         below_4g_mem_size = ram_size;
> > > > > >     }
> > > > > >     ...
> > > > > >
> > > > > > HVM_BELOW_4G_RAM_END is f0000000. If we change HVM_BELOW_4G_RAM_END to e0000000, which is consistent with hvmloader when assigning a GPU, then the guest worked for us. So we are wondering whether xen_ram_init in QEMU should be made consistent with hvmloader.
> > > > > >
> > > > > > In addition, we found that QEMU hardcodes 0xe0000000 in pc_init1() as below. Should these places be kept consistent about the MMIO hole or not?
> > > > > >
> > > > > >     if (ram_size >= 0xe0000000) {
> > > > > >         above_4g_mem_size = ram_size - 0xe0000000;
> > > > > >         below_4g_mem_size = 0xe0000000;
> > > > > >     } else {
> > > > > >         above_4g_mem_size = 0;
> > > > > >         below_4g_mem_size = ram_size;
> > > > > >     }
> > > > >
> > > > > The guys at Intel sent a couple of patches recently to fix this issue:
> > > > >
> > > > > http://marc.info/?l=xen-devel&m=136150317011027
> > > > > http://marc.info/?l=qemu-devel&m=136177475215360&w=2
> > > > >
> > > > > Do they solve your problem?
> > > >
> > > > These two patches didn't solve our problem.
> > > >
> > >
> > > I debugged this issue with the above two patches. I want to share some information and discuss a solution here. The issue is actually caused by the VM having a large PCI hole (MMIO size), which results in QEMU setting up its memory regions inconsistently with hvmloader (QEMU uses the hardcoded 0xe0000000 in pc_init1 and xen_ram_init). I created a virtual device with a 1GB MMIO size to debug this issue.
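To make the mismatch concrete, here is a minimal standalone sketch (not the actual QEMU or hvmloader code; the constants simply mirror the values discussed in this thread - an 8G guest, xen_ram_init's 0xf0000000 below-4G RAM end, the 0x80000000 hole start hvmloader ends up with once the extra 1GB BAR has to fit below 4G, and the fc000000 hole end quoted above):

    /* layout_mismatch.c - illustrative only, not QEMU or hvmloader code.
     * Computes the RAM split QEMU sets up in advance and compares it with
     * the PCI hole hvmloader later programs via the TOM register. */
    #include <stdio.h>
    #include <stdint.h>

    #define HVM_BELOW_4G_RAM_END    0xf0000000ULL  /* xen_ram_init's fixed boundary */
    #define HVMLOADER_PCI_MEM_START 0x80000000ULL  /* hole start with the 1GB BAR   */
    #define PCI_HOLE_END            0xfc000000ULL  /* hole end quoted in the thread */

    int main(void)
    {
        uint64_t ram_size = (uint64_t)8 << 30;     /* 8G guest from the report */
        uint64_t below_4g, above_4g;

        /* The split QEMU computes before hvmloader has placed any BARs. */
        if (ram_size >= HVM_BELOW_4G_RAM_END) {
            below_4g = HVM_BELOW_4G_RAM_END;
            above_4g = ram_size - HVM_BELOW_4G_RAM_END;
        } else {
            below_4g = ram_size;
            above_4g = 0;
        }

        printf("QEMU: xen.ram below 4G ends at 0x%llx (0x%llx bytes above 4G)\n",
               (unsigned long long)below_4g, (unsigned long long)above_4g);
        printf("hvmloader: PCI hole 0x%llx - 0x%llx\n",
               (unsigned long long)HVMLOADER_PCI_MEM_START,
               (unsigned long long)PCI_HOLE_END);

        if (below_4g > HVMLOADER_PCI_MEM_START)
            printf("overlap: 0x%llx bytes of RAM fall inside the PCI hole\n",
                   (unsigned long long)(below_4g - HVMLOADER_PCI_MEM_START));

        return 0;
    }

For these values it reports a 0x70000000-byte overlap (the 0x80000000-0xf0000000 range is claimed both by xen.ram and by the hole), which matches the failure described next.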
> > > Firstly, QEMU set up the memory regions, except for the PCI hole region, in pc_init1() and xen_ram_init(); then hvmloader calculated pci_mem_start as 0x80000000 and wrote it to the TOM register, which triggered QEMU to update the PCI hole region to start at 0x80000000 via i440fx_update_pci_mem_hole(). Finally, the Windows 7 VM (configured with 8G) crashed with BSOD code 0x00000024. If I hardcode QEMU's pc_init1 and xen_ram_init to match hvmloader's value, the problem goes away.
> > >
> > > Although the above two patches do pass the actual PCI hole start address to QEMU, it arrives too late: QEMU's pc_init1() and xen_ram_init() have already set up the other memory regions, and obviously the PCI hole might overlap with the RAM regions in this case. So I think hvmloader should set up the PCI devices and calculate the PCI hole first; then QEMU can map the memory regions correctly from the beginning.
> > >
> >
> > Thank you very much for your detailed analysis of the problem.
> >
> > After reading this, I wonder how it is possible that qemu-xen-traditional does not have this issue, considering that AFAIK there is no way for hvmloader to tell qemu-xen-traditional where the PCI hole starts.
> >
> > The only difference between upstream QEMU and qemu-xen-traditional is that the former would start the PCI hole at 0xf0000000 while the latter would start it at 0xe0000000.
> >
> > So I would expect that your test, where hvmloader updates the PCI hole region to start at 0x80000000, would fail on qemu-xen-traditional too.
> >
> > Of course having the PCI hole start unconditionally at 0xf0000000 makes it much easier to run into problems than starting it at 0xe0000000.
> >
> >
> > Assuming that everything above is correct, this is what I would do:
> >
> > 1) modify upstream QEMU to start the PCI hole at 0xe0000000, to match qemu-xen-unstable in terms of configuration and not introduce any regressions. Do this for the Xen 4.3 release.
> >
> > 2) for Xen 4.4, rework the two patches above and improve i440fx_update_pci_mem_hole: resizing the pci_hole subregion is not enough; it also needs to be able to resize the system memory region (xen.ram) to make room for the bigger pci_hole.
>
>
> Would that make migration more difficult - meaning, if you now have two different QEMU versions where the PCI hole is different between them? Or is that not an issue, and QEMU handles setting up the layout nicely? Or is 0xe0000000 the norm in Xen 4.1 and Xen 4.2?
>
> I am assuming you unplug the PCI device before you migrate, of course.

The change in configuration is only for qemu-xen (upstream QEMU), and Xen 4.3 is the first release that defaults to it, so I don't think we need to maintain save/restore compatibility yet. But from the next release it is going to be unavoidable.
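As a footnote to point (2) above, the following standalone sketch (again illustrative only, not QEMU code; it assumes total guest RAM stays constant and the displaced below-4G RAM is relocated above 4G) works out how much of the below-4G xen.ram region has to be given up when the hole start moves from 0xe0000000 down to the 0x80000000 that hvmloader programs in the GPU passthrough case, i.e. why resizing only the pci_hole subregion is not enough:

    /* hole_resize_math.c - illustrative only, not QEMU code.
     * Prints the guest RAM layout before and after the hole start moves,
     * assuming total RAM stays the same and the displaced below-4G RAM
     * is relocated above 4G. */
    #include <stdio.h>
    #include <stdint.h>

    static void show(const char *tag, uint64_t ram_size, uint64_t hole_start)
    {
        uint64_t below_4g = ram_size < hole_start ? ram_size : hole_start;
        uint64_t above_4g = ram_size - below_4g;

        printf("%-30s hole at 0x%llx: below 4G 0x%llx, above 4G 0x%llx\n",
               tag, (unsigned long long)hole_start,
               (unsigned long long)below_4g, (unsigned long long)above_4g);
    }

    int main(void)
    {
        uint64_t ram_size  = (uint64_t)8 << 30;  /* 8G guest from the report    */
        uint64_t old_start = 0xe0000000ULL;      /* default hole start          */
        uint64_t new_start = 0x80000000ULL;      /* hole start after TOM write  */

        show("initial (QEMU default):", ram_size, old_start);
        show("after hvmloader's TOM write:", ram_size, new_start);

        /* This is the amount of RAM that i440fx_update_pci_mem_hole would
         * have to remove from the below-4G region, not merely cover with
         * the pci_hole subregion. */
        printf("below-4G RAM to relocate: 0x%llx bytes\n",
               (unsigned long long)(old_start - new_start));

        return 0;
    }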