
The Many-Core Answer - Relativistic Computation


This picture illustrates the architecture of a system I have thought up and plan to build over the next little while, in my spare time during university.

As a first pass at an explanation: the design is a three-layered system that uses an OS kernel interface (API) to initiate and receive tasks, which acts as a traditional stdin and stdout of sorts for the system.

Once inside the input scheduler, the execution controller does NOT work at the thread or task level, but at the level of a bit-stream of opcodes and data, to achieve the FINEST-GRAIN concurrent execution strategy possible with current hardware and techniques. The input scheduler (opcode chunker) takes control of the input streams, reading 32- or 64-bit lengths as one instruction each in succession, so that it has as many assembly opcodes as required to saturate each individual CPU core's pipeline.
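As a rough sketch of the chunker (the names are my own, assuming each instruction word arrives as one fixed-width 32-bit unit), the round-robin hand-out of a flat stream to n cores might look like:

```haskell
import Data.Word (Word32)

-- Hypothetical fixed-width opcode+data unit.
type InstructionWord = Word32

-- Split a flat stream of instruction words into per-core batches,
-- round-robin, so each of n cores receives every n-th word.
chunkToCores :: Int -> [InstructionWord] -> [[InstructionWord]]
chunkToCores n stream =
  [ [ w | (i, w) <- zip [0 :: Int ..] stream, i `mod` n == c ]
  | c <- [0 .. n - 1] ]

main :: IO ()
main = print (chunkToCores 4 [1 .. 8])
-- prints [[1,5],[2,6],[3,7],[4,8]]
```

Note that this is exactly the distribution the "Relativistic Execution" example below relies on: consecutive words land on consecutive cores, and the stream wraps back to core 1.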

Those instructions are then passed from within the "virtual machine" down into the system's core as true heavyweight OS threads, each bound to one hardware core (1, 2, 4, 8, 16, etc.), using a Linux VM (KVM/QEMU), the Windows User-Mode Scheduler (UMS), or a multi-platform VM (VMware/VirtualBox) to map threads 1:1 through the kernel onto hardware.
From there, the scheduler has one purpose in life: making sure each core's execution pipeline stays balanced and saturated across cores, leaving as little idle time as possible until all work is completed.
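A minimal sketch of that 1:1 binding, using GHC's runtime in place of the proposed VM: `forkOn` pins a thread to one capability (roughly one hardware core when the program is run with `+RTS -N`), approximating the one-thread-per-core layout described above.

```haskell
import Control.Concurrent (forkOn, getNumCapabilities)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  n <- getNumCapabilities          -- how many cores the runtime was given
  boxes <- mapM
    (\core -> do
        box <- newEmptyMVar
        _ <- forkOn core (putMVar box core)  -- worker reports its core
        pure box)
    [0 .. n - 1]
  results <- mapM takeMVar boxes   -- wait for every pinned worker
  print results                    -- one entry per core, in core order
```

This only demonstrates the binding, not the balancing; keeping every pipeline saturated is the scheduler's job on top of this.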

The scheduler's actual execution model is the most important concept to understand, which I am calling "Relativistic Execution": one execution is in succession, and relative to another, only after one full iteration through the currently available resources. This is hopefully best explained with an increment function example, starting from a variable declared 0:

- Core 1 receives a 1,
- Core 2 (the NEXT core included by the scheduler for execution) receives a 2,
- Core 3 receives a 3, and finally
- Core 4 receives a 4.

The loop back around to the first core is the most important detail creating the concept:

- Core 1 receives a 5 
- Core 2 receives a 6 
- Core 3 receives a 7 
- Core 4 receives an 8 

This second iteration across the core array, as scheduled by the controller, now gives four pillars of numbers: 1:5, 2:6, 3:7, and 4:8, turning each core's local function into +4 instead of the original +1, with each core acting as a separate entity!
This sets up the actual computation: the output scheduler instructs each core to compute at that offset from its original value for the rest of the function's execution, all the way up to whatever the program code decides; let's say 40.

Step #: 01 - 02 - 03 - 04 - 05 - 06 - 07 - 08 - 09 - 10
Core 1: 01 - 05 - 09 - 13 - 17 - 21 - 25 - 29 - 33 - 37
Core 2: 02 - 06 - 10 - 14 - 18 - 22 - 26 - 30 - 34 - 38
Core 3: 03 - 07 - 11 - 15 - 19 - 23 - 27 - 31 - 35 - 39
Core 4: 04 - 08 - 12 - 16 - 20 - 24 - 28 - 32 - 36 - 40
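The table above can be expressed as a small stride formula: with nCores cores, the value core c (1-based) computes at step s is c + (s − 1) × nCores, i.e. each core's local increment function becomes +nCores instead of +1. A sketch:

```haskell
-- Value computed by a given core at a given step under the
-- "Relativistic Execution" stride model described above.
coreValue :: Int -> Int -> Int -> Int
coreValue nCores core step = core + (step - 1) * nCores

-- Core 1's row for 10 steps on 4 cores.
core1Row :: [Int]
core1Row = [ coreValue 4 1 s | s <- [1 .. 10] ]

main :: IO ()
main = print core1Row
-- prints [1,5,9,13,17,21,25,29,33,37]
```

The last cell of the table checks out the same way: `coreValue 4 4 10` gives 40, the target the program code decided on.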

Each core therefore does exactly 1/4 of the work, with perfect scheduling and minimal system-scheduling overhead under this model.

As I see it, the input and output schedulers form a minimal virtual-machine abstraction with its own protected memory, tracking input processes as stream IDs. These provide a way to follow work through the system, from input to the VM's output, and to pair results with the original process (PID) that requested the work.
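A hypothetical data sketch of that bookkeeping (these names are mine, not part of the original design): work entering the VM is tagged with a stream ID and the requesting PID, so the output scheduler can route results back to the process that asked for them.

```haskell
import Data.Word (Word32)

newtype StreamId = StreamId Int deriving (Eq, Show)
newtype Pid      = Pid Int      deriving (Eq, Show)

data WorkItem = WorkItem
  { streamId :: StreamId   -- identifies the input stream inside the VM
  , origin   :: Pid        -- the OS process that requested the work
  , payload  :: [Word32]   -- instruction words to be scheduled
  } deriving Show

-- Pair a completed result with the PID that requested it.
deliver :: WorkItem -> r -> (Pid, r)
deliver item result = (origin item, result)
```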

Oddly enough, this method of work distribution sits so far below the threading abstraction that sequential code could easily be fed into the system and scheduled in a concurrent fashion internally, with methods for fetching data into the scheduler for data-dependent computations. The "silver bullet" everyone thought did not exist. So I give you this early idea of mine, which, as school permits, I will slowly code, most likely in Haskell; I'm a Haskell developer to the end ;).

I would really enjoy feedback on this idea, and would love to hear constructive suggestions as well as criticism from whoever may be reading!

I am HeavensRevenge, and various means of contacting me may be found here:
