Thread: SBCs: an evaluation

  1. #1
    Join Date
    Sep 2010
    Location
    Leiden, the Netherlands
    Posts
    3,956

    SBCs: an evaluation

    SBCs are a great way of earning credit while keeping the electricity bill as low as possible, but they do have their limitations.

    The first limitation is that they mostly run either Android/ARM or Linux/ARM, and not many projects offer applications for those OSes. This could be solved by porting as many applications as possible to Android/ARM and/or Linux/ARM.

    The second limitation is more serious: the hardware is not fully used. Let's take the good old BeagleBone Black as an example. BOINC sees the BeagleBone Black as having a single-core ARM Cortex-A8 CPU @ 1000 MHz, and that's all. Engineers at Texas Instruments can tell you that that is a very narrow way of looking at their product. Internally the BeagleBone Black also runs an ARM Cortex-M3, used as a microcontroller for power management, and it has two PRU 32-bit real-time microcontrollers, plus a PowerVR SGX530 GPU capable of 1.6 GFLOPS. None of these can be used by BOINC -though I read somewhere that the NEON floating-point accelerator is actually handled by the ARM Cortex-M3. The GPU and PRUs run @ 200 MHz, and the Cortex-M3 might run at that same -slow- speed, which might explain the somewhat lackluster performance of this board.
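    Whether the NEON unit is present can at least be checked from Linux: on ARM the kernel lists CPU capabilities in the "Features" line of /proc/cpuinfo. A minimal sketch (the fallback line is a made-up sample so the script also runs on non-ARM machines, where no "Features" line exists):

    ```shell
    #!/bin/sh
    # Read the ARM feature line from the kernel; fall back to a SAMPLE line
    # (hypothetical, for illustration only) when we are not on ARM Linux.
    FEATURES=$(grep -m1 '^Features' /proc/cpuinfo 2>/dev/null \
        || echo "Features : half thumb fastmult vfp edsp neon vfpv3")

    case "$FEATURES" in
      *neon*) echo "NEON SIMD available" ;;
      *)      echo "No NEON reported" ;;
    esac
    ```

    On a BeagleBone Black the real "Features" line should contain "neon"; a BOINC build that doesn't use it is leaving floating-point throughput on the table.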

    Another example is the Raspberry Pi. The Broadcom BCM2835, as used in the original Models A and B, the A+ and B+, the Zero and the Compute Module 1, contains an ARM1176JZF-S CPU with floating point, running at a stock speed of 700 MHz (which can be overclocked to 1000 MHz when cooled properly -or stock in the case of the Zero). But the BCM2835 also contains a VideoCore 4 GPU -a far more capable GPU than that of the BeagleBone Black: it is capable of Blu-ray-quality playback, can decode H.264 at 40 Mbit/s, has fast 3D core access via the supplied OpenGL ES 2.0 and OpenVG libraries, handles 1080p30 H.264 high-profile decode, delivers 1 Gpixel/s, 1.5 Gtexel/s or 24 GFLOPS of general-purpose compute, and features a bunch of texture-filtering and DMA infrastructure. The graphics capabilities are roughly equivalent to the original Xbox's level of performance.

    That same VideoCore 4 GPU also features in the Raspberry Pi 2 Model B, the Raspberry Pi 3 Models B, B+ and A+, and in the Compute Modules 3 and 3+. Unlocking the full capabilities of the VideoCore 4 would be a big help for BOINC, or for other computing tasks.
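    On a Raspberry Pi you can at least peek at the VideoCore side of the SoC with the `vcgencmd` firmware tool that ships with Raspberry Pi OS. A small sketch, assuming `vcgencmd` is on the PATH on a Pi; on other machines it just prints a note:

    ```shell
    #!/bin/sh
    # Query the VideoCore firmware for the GPU's memory split and core clock.
    # vcgencmd ships with Raspberry Pi OS; elsewhere we fall back to a message.
    if command -v vcgencmd >/dev/null 2>&1; then
      vcgencmd get_mem gpu         # memory reserved for the GPU, e.g. gpu=76M
      vcgencmd measure_clock core  # VideoCore clock in Hz
    else
      echo "vcgencmd not found: not a Raspberry Pi (or firmware tools missing)"
    fi
    ```

    None of this makes the GPU available to BOINC, of course, but it shows how much silicon sits idle while only the ARM cores crunch.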

    And those are just two 'old' (2011-2012) boards. Your own cell phone/mobile/handy has a much more advanced SoC, with a far superior GPU and who-knows-what co-processor(s). When you use it for BOINC, however, only the CPU will be used. With a bit of luck the GPU will be recognized, but that's all: no project has seen the need to make an ARM GPU application, not even in the wake of the introduction of the NVIDIA Jetson, which combines ARM cores with CUDA-capable graphics.

    So it is up to us, BOINC users/volunteers: get hold of the code, compile it to the best of your own hardware's capabilities, and when you get great results, share them with the rest of the world.
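    As a starting point, simply telling GCC about the exact core and FPU already helps when building for the boards above. A minimal, hypothetical sketch (the Cortex-A8/NEON flags are standard GCC options, but the right set for your board and distro may differ; check your toolchain's docs):

    ```shell
    #!/bin/sh
    # Pick CPU-specific compiler flags before building a BOINC app from source.
    # Flag choices below are illustrative, not a definitive build recipe.
    ARCH=$(uname -m)
    case "$ARCH" in
      armv7l)  CFLAGS="-O3 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=hard" ;;  # e.g. BeagleBone Black
      aarch64) CFLAGS="-O3 -mcpu=native" ;;                                 # 64-bit ARM boards
      *)       CFLAGS="-O3 -march=native" ;;                                # x86 and everything else
    esac

    echo "Building with: $CFLAGS"
    # A real build would then do something like:
    #   ./configure CFLAGS="$CFLAGS" CXXFLAGS="$CFLAGS" && make
    ```

    The point is that generic distro builds rarely enable NEON or tune for your exact core; a native rebuild with these flags is often the cheapest speed-up an SBC cruncher can get.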

    Merry Christmas and a happy New Year,
    Dirk
    Last edited by Dirk Broer; 01-23-2021 at 12:12 AM.
