An EEVDF CPU scheduler for Linux

Posted Mar 10, 2023 14:46 UTC (Fri) by HenrikH (subscriber, #31152)
Parent article: An EEVDF CPU scheduler for Linux

"It should, for example, maximize the benefit of the system's memory caches, which requires minimizing the movement of processes between CPUs"

Which is a topic I feel CFS fails at quite a bit: on the old O(1) scheduler you could launch a CPU-heavy thread and it would more or less remain running on the same core, while under CFS that same thread moves around a lot. (Not as bad as the Windows scheduler, though, which seems to move such a thread around constantly, so that a thread taking 100% CPU on e.g. a 4-core system shows all 4 cores at a constant 25%.)
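A rough way to watch this for yourself, using nothing scheduler-specific: burn CPU and sample the "processor" field (field 39) of /proc/self/stat, which records the CPU the task last ran on. This is only an illustrative sketch; the loop sizes and duration are arbitrary:

import time

def current_cpu():
    with open("/proc/self/stat") as f:
        stat = f.read()
    # comm (field 2) may contain spaces, so split after the closing paren;
    # the remaining fields start at field 3, so field 39 lands at index 36
    return int(stat.rsplit(")", 1)[1].split()[36])

seen = set()
deadline = time.monotonic() + 5.0
while time.monotonic() < deadline:
    for _ in range(100_000):  # CPU-bound busy work
        pass
    seen.add(current_cpu())
print("ran on CPUs:", sorted(seen))

Under a scheduler that keeps the thread put, the set stays small; under one that bounces it around, it grows.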

So I hope that EEVDF will somehow be better in this regard. And then we have the added complexity of the E-core/P-core mix from Intel, CCDs with and without 3D V-Cache from AMD, and big.LITTLE from ARM.



An EEVDF CPU scheduler for Linux

Posted Mar 15, 2023 0:01 UTC (Wed) by intgr (subscriber, #39733) [Link] (4 responses)

> on the old O(1) scheduler you could launch a CPU-heavy thread and it would more or less remain running on the same core

Note that this is no longer optimal for modern CPUs. There are lots of thermal sensors, one or more per core, and boost clocks for the entire CPU are usually limited by the hottest reading. So keeping a heavy task on the same core leads to it developing a hotspot and limiting boost -- whereas boost is most useful when most cores are idle.

So some bouncing can actually lead to better single-thread performance. I'm not sure what the ideal bounce interval is; I doubt CFS is particularly tuned for it.

Also, it's common for some cores to be better cooled than others, due to their position on the die, uneven cooler contact, etc. It would be even better if the scheduler had that thermal information and preferred the coolest cores.
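Most of the pieces for that already exist in sysfs. Here is a minimal sketch built on Intel's coretemp hwmon driver, which labels per-core sensors "Core N" (AMD's k10temp does not expose per-core readings, and the assumption that label "Core N" maps to logical CPU N is naive: a real version would consult /sys/devices/system/cpu/cpuN/topology/core_id):

import glob, os

def read(path):
    with open(path) as f:
        return f.read().strip()

best_cpu, best_temp = None, None
for label_path in glob.glob("/sys/class/hwmon/hwmon*/temp*_label"):
    label = read(label_path)
    if not label.startswith("Core "):
        continue  # skip package and other non-core sensors
    temp_mc = int(read(label_path.replace("_label", "_input")))  # millidegrees C
    if best_temp is None or temp_mc < best_temp:
        best_cpu, best_temp = int(label.split()[1]), temp_mc

if best_cpu is not None:
    print(f"coolest core: {best_cpu} at {best_temp / 1000:.1f} C")
    os.sched_setaffinity(0, {best_cpu})  # naive core->CPU mapping, see above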

An EEVDF CPU scheduler for Linux

Posted Mar 15, 2023 14:52 UTC (Wed) by farnz (subscriber, #17727) [Link]

In theory, at least, AMD offers the information you want via amd-pstate. It tells you what the highest performance is of a given CPU thread if all other threads are idle and cooling is adequate, what performance you can get if the cooling subsystem meets AMD specs for that CPU, and where you transition from "maximum perf/joule" to "fewer joules but lower perf".
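On kernels with ACPI CPPC support, those per-CPU performance levels are visible from userspace under /sys/devices/system/cpu/cpuN/acpi_cppc/; whether that directory exists depends on the platform and kernel config, so treat this sketch as best-effort:

import glob, os.path

def read(path):
    with open(path) as f:
        return f.read().strip()

for cpu in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*"),
                  key=lambda p: int(p.rsplit("cpu", 1)[1])):
    cppc = os.path.join(cpu, "acpi_cppc")
    if not os.path.isdir(cppc):
        continue  # no CPPC data exposed for this CPU
    print(os.path.basename(cpu),
          "highest:", read(os.path.join(cppc, "highest_perf")),    # best case, others idle
          "nominal:", read(os.path.join(cppc, "nominal_perf")),    # sustainable with spec cooling
          "knee:", read(os.path.join(cppc, "lowest_nonlinear_perf")))  # perf/W knee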

An EEVDF CPU scheduler for Linux

Posted Mar 24, 2023 12:59 UTC (Fri) by nim (guest, #102653) [Link] (2 responses)

I really doubt that a single active core can heat up to throttling levels given adequate cooling (i.e. one that can maintain the nominal load on all cores). Plus, AFAIK, modern CPUs from both Intel and AMD expose the concept of best core, which is meant to be used exclusively when you need the highest clock. Doesn't that contradict your claim?

OTOH, cold caches are a real phenomenon. Sadly, I cannot point to a figure for how much that affects execution speed, especially if the thread can pull its data from the shared L3 cache instead of going to RAM, but it seems an easier nut to crack. It would be nice to at least have the option to say "don't move threads to other cores unless their current one is needed for something else."
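A crude way to put a number on it is to time the same memory-heavy loop pinned versus deliberately bounced between cores. This is only an illustrative sketch -- Python's object layout and interpreter overhead make it far from a rigorous cache benchmark, and the size and iteration constants are arbitrary:

import os, time

data = list(range(1 << 21))  # big enough to stress the cache hierarchy

def walk():
    s = 0
    for x in data:
        s += x
    return s

def run(bounce):
    cpus = sorted(os.sched_getaffinity(0))
    os.sched_setaffinity(0, {cpus[0]})
    t0 = time.perf_counter()
    for i in range(20):
        if bounce:
            # migrate before every pass, so each pass starts cache-cold
            os.sched_setaffinity(0, {cpus[i % len(cpus)]})
        walk()
    return time.perf_counter() - t0

print("pinned:   %.3f s" % run(False))
print("bouncing: %.3f s" % run(True))

The gap between the two is a (noisy) upper bound on the migration penalty, since a real scheduler migrates far less often than once per pass.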

An EEVDF CPU scheduler for Linux

Posted Mar 24, 2023 15:19 UTC (Fri) by intgr (subscriber, #39733) [Link] (1 response)

> I really doubt that a single active core can heat up to throttling levels given adequate cooling (i.e. one that can maintain the nominal load on all cores).

This is two sides of the same coin -- modern CPUs dynamically adjust their clocks to always operate at thermal limits (or sometimes power limits or other constraints) -- but CPU manufacturers call it "boost" rather than "throttling".

For each CPU, AMD individually lists "Base Clock" and "Max. Boost Clock"; the 5950X, for example, ranges from a 3.4 GHz base to a 4.9 GHz maximum boost, the latter being the clock under ideal conditions for a single-threaded workload. But that's usually not achievable for an extended, steady workload. This article has some graphs showing clock behavior during extended single-thread workloads: https://www.tomshardware.com/news/amd-ryzen-3000-boost-cl...
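You can watch that sag happen with nothing more than cpufreq's scaling_cur_freq (a kHz estimate). In this sketch the CPU number is an arbitrary example, and the script pins itself there so the load and the reading refer to the same core:

import os, time

CPU = 2  # arbitrary example core
os.sched_setaffinity(0, {CPU})
freq_path = f"/sys/devices/system/cpu/cpu{CPU}/cpufreq/scaling_cur_freq"

for second in range(1, 31):
    t_end = time.monotonic() + 1.0
    while time.monotonic() < t_end:  # one second of single-threaded busy work
        pass
    with open(freq_path) as f:
        print(f"{second:3d}s {int(f.read()) / 1000:7.0f} MHz")

On a thermally limited part, the reported clock typically starts near the boost figure and drifts down as the hotspot develops.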

> modern CPUs from both Intel and AMD expose the concept of best core, which is meant to be used exclusively when you need the highest clock. Doesn't that contradict your claim?

This is true as well; there may be only one or a few cores on a package capable of reaching the max boost clock.

But as stated above, under longer workloads it's common for modern CPUs to be thermally limited. If, due to thermals, the "best" core can no longer boost higher than the 2nd-best core, you don't lose anything by running on the 2nd-best core.

> OTOH, cold caches are a real phenomenon.

Agreed. It all depends on how often this bouncing happens. If it's just once every few seconds, it's likely that the performance penalty due to cold caches is small enough to be unmeasurable.

But I admit my analysis is hypothetical at this point: I have no information whether possible performance gains from core bouncing are any higher than the losses.

An EEVDF CPU scheduler for Linux

Posted Mar 24, 2023 15:28 UTC (Fri) by Wol (subscriber, #4433) [Link]

> I really doubt that a single active core can heat up to throttling levels given adequate cooling (i.e. one that can maintain the nominal load on all cores). Plus, AFAIK, modern CPUs from both Intel and AMD expose the concept of best core, which is meant to be used exclusively when you need the highest clock. Doesn't that contradict your claim?

"Given adequate cooling" - THIS IS THE PROBLEM. In order for cores to run faster, they need to get smaller. If they get smaller, cooling gets harder, and "adequate cooling" becomes impossible.

So as your core heats up, you have a choice: reduce the VA going in (and hence the heat that needs to be dissipated), or move the task to a different core to give the first one a chance to cool down. Do neither, and you'll kill the core. And if you could easily kill older, larger chips (I had problems with a 686 ages ago with inadequate cooling), it's going to be a LOT easier to kill today's faster, more powerful cores.

Cheers,
Wol

