From xen-devel-bounces@lists.xen.org Mon Jan 20 18:21:46 2014
Message-ID: <1390241799.23576.42.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Justin Weaver
Date: Mon, 20 Jan 2014 19:16:39 +0100
In-Reply-To: <1386984785.3980.96.camel@Solace>
References: <1386984785.3980.96.camel@Solace>
Cc: George Dunlap, xen-devel
Subject: Re: [Xen-devel] multiple runqueues in credit2
List-Id: Xen developer discussion

create ^
title it credit2 only uses one runqueue instead of one runq per socket
thanks

On Sat, 2013-12-14 at 02:33 +0100, Dario Faggioli wrote:
> Hi George,
> BTW, creating a tracking bug entry for this issue.
> Now the question is, for fixing this, would it be preferable to do
> something along this line (i.e., removing the right side of the || and,
> in general, making csched_alloc_pdata() a pCPU 0 only thing)? Or,
> perhaps, should I look into a way to properly initialize the cpu_data
> array, so that cpu_to_socket() actually returns something '< 0' for
> pCPUs not yet onlined and identified?

I prepared the attached patch and gave it a quick try... only to figure
out that it won't work. Well, it does work for certain configurations
(so, Justin, if that happens to be your case, you may at least be able
to do some development on top of it), but it's not the correct
approach... or at least it's not enough.

In fact, what the patch does is initialize the pCPU info field used by
cpu_to_socket() to -1, which means that all pCPUs --apart from pCPU 0--
are now associated with the proper runqueue.
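For reference, the effect of that initialization can be sketched in plain C. This is a simplified stand-alone model, not the real Xen definitions: NR_CPUS, struct cpuinfo, cpu_to_socket() and identify_cpu() here are illustrative stand-ins.

```c
#include <assert.h>

#define NR_CPUS 16

/* Simplified stand-in for Xen's per-CPU topology info: with the patch,
 * every slot starts out with phys_proc_id = -1, i.e. "socket unknown".
 * The [0 ... N] range initializer is the same GNU C extension the
 * attached patch uses. */
struct cpuinfo { int phys_proc_id; };

static struct cpuinfo cpu_data[NR_CPUS] =
    { [0 ... NR_CPUS - 1] = { .phys_proc_id = -1 } };

/* cpu_to_socket() just reads that field, so it now really does return
 * something '< 0' for any pCPU not yet onlined and identified. */
static int cpu_to_socket(int cpu)
{
    return cpu_data[cpu].phys_proc_id;
}

/* Hypothetical hook for when a pCPU comes up and its topology is read. */
static void identify_cpu(int cpu, int socket)
{
    cpu_data[cpu].phys_proc_id = socket;
}
```

With this in place, scheduler code could defer the runqueue choice for a pCPU until cpu_to_socket() stops returning a negative value.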
pCPU 0, OTOH, is always associated with runqueue 0. That is necessary
and intended: pCPU 0 does not get the notifier call, and hence it has
to be initialized while the correct cpu_to_socket() information is
still not available. And that's where the problem is. This is fine if
pCPU 0 is actually on socket 0, but what if it is, say, on socket 1?
:-O

That happens to be the case on one of my test boxes, and here's what I
get on it:

root@Zhaman:~# xl dmesg |grep runqueue
(XEN) Adding cpu 0 to runqueue 0
(XEN)  First cpu on runqueue, activating
(XEN) Adding cpu 1 to runqueue 1
(XEN)  First cpu on runqueue, activating
(XEN) Adding cpu 2 to runqueue 1
(XEN) Adding cpu 3 to runqueue 1
(XEN) Adding cpu 4 to runqueue 1
(XEN) Adding cpu 5 to runqueue 1
(XEN) Adding cpu 6 to runqueue 1
(XEN) Adding cpu 7 to runqueue 1
(XEN) Adding cpu 8 to runqueue 0
(XEN) Adding cpu 9 to runqueue 0
(XEN) Adding cpu 10 to runqueue 0
(XEN) Adding cpu 11 to runqueue 0
(XEN) Adding cpu 12 to runqueue 0
(XEN) Adding cpu 13 to runqueue 0
(XEN) Adding cpu 14 to runqueue 0
(XEN) Adding cpu 15 to runqueue 0

root@Zhaman:~# xl dmesg |grep 'runqueue 0'|cat -n
     1	(XEN) Adding cpu 0 to runqueue 0
     2	(XEN) Adding cpu 8 to runqueue 0
     3	(XEN) Adding cpu 9 to runqueue 0
     4	(XEN) Adding cpu 10 to runqueue 0
     5	(XEN) Adding cpu 11 to runqueue 0
     6	(XEN) Adding cpu 12 to runqueue 0
     7	(XEN) Adding cpu 13 to runqueue 0
     8	(XEN) Adding cpu 14 to runqueue 0
     9	(XEN) Adding cpu 15 to runqueue 0

root@Zhaman:~# xl dmesg |grep 'runqueue 1'|cat -n
     1	(XEN) Adding cpu 1 to runqueue 1
     2	(XEN) Adding cpu 2 to runqueue 1
     3	(XEN) Adding cpu 3 to runqueue 1
     4	(XEN) Adding cpu 4 to runqueue 1
     5	(XEN) Adding cpu 5 to runqueue 1
     6	(XEN) Adding cpu 6 to runqueue 1
     7	(XEN) Adding cpu 7 to runqueue 1

:-(

I'll keep looking into this, although I can't promise it will be my top
priority for the coming weeks. :-/ If, in the meantime, someone
(George?) has an idea on how to solve this, I gladly accept
suggestions.
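A tiny stand-alone model of the behaviour above shows why the static "pCPU 0 -> runqueue 0" choice goes wrong on such a box. The names and the runqueue-picking logic here are simplified assumptions for illustration, not credit2's actual code; the socket map mimics the Zhaman box (pCPUs 0-7 on socket 1, 8-15 on socket 0).

```c
#include <assert.h>

#define NR_CPUS 16

/* Hypothetical per-CPU socket map matching the box above:
 * pCPUs 0-7 sit on socket 1, pCPUs 8-15 on socket 0. */
static const int socket_of[NR_CPUS] =
    { 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 };

static int cpu_to_socket(int cpu)
{
    return socket_of[cpu];
}

/* Sketch of the runqueue choice described above: pCPU 0 never gets the
 * notifier call, so it is wired to runqueue 0 before topology info is
 * available; every other pCPU lands on its socket's runqueue. */
static int pick_runqueue(int cpu)
{
    if (cpu == 0)
        return 0;                /* hard-wired early assignment */
    return cpu_to_socket(cpu);   /* proper per-socket runqueue */
}
```

Running the model reproduces the mismatch in the dmesg output: pCPU 0 ends up alone on runqueue 0 with the socket 0 CPUs, while its actual socket-mates (pCPUs 1-7) go to runqueue 1.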
:-)

Regards,
Dario

--
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

[attachment: phys_proc_id-init.patch]

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 42b8a59..1588d71 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -59,7 +59,8 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
 cpumask_t cpu_online_map __read_mostly;
 EXPORT_SYMBOL(cpu_online_map);
 
-struct cpuinfo_x86 cpu_data[NR_CPUS];
+struct cpuinfo_x86 cpu_data[NR_CPUS] =
+    { [0 ... NR_CPUS-1] = { .phys_proc_id=-1 } };
 
 u32 x86_cpu_to_apicid[NR_CPUS] __read_mostly =
     { [0 ... NR_CPUS-1] = BAD_APICID };

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel