Thank you for your comment.

I understand that you personally are not wasting your computer's power, but billions of computers around the world (roughly 2 billion in 2010, according to Forrester Research) could be.

To give an example, take diabetic retinopathy, a disease that affects people with diabetes and can ultimately cause blindness. It affects nearly 100 million people worldwide.

For the sake of understanding what it would take to offer a screening solution through AI, let us assume the following.
- explore 100 ideas
- run 50 experiments per idea
- run each of them for a week of computation, on 50 machines

That's 100 ideas × 50 experiments × 50 machines × 168 hours, for a total of 42 million compute hours.

That would cost around $10 million on the public cloud (e.g., on AWS's c4.2xlarge instances), or tens of millions of dollars in upfront investment for private infrastructure.
Alternatively, it could be provided by roughly 15,000 contributors, each donating 8 hours of compute a day for a year on recent computers.
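If you want to check those numbers yourself, here is a minimal sketch of the back-of-envelope calculation. The hourly rate is an assumption (in the ballpark of c4.2xlarge pricing at the time), so plug in your own figure:

```python
# Back-of-envelope check of the numbers above.
# The hourly rate is an assumption, not a quoted AWS price.

ideas = 100
experiments_per_idea = 50
machines_per_experiment = 50
hours_per_experiment = 7 * 24          # one week of computation

total_hours = (ideas * experiments_per_idea
               * machines_per_experiment * hours_per_experiment)
print(f"Total compute: {total_hours / 1e6:.0f}M hours")          # ~42M

hourly_rate_usd = 0.25                 # assumed $/machine-hour on the cloud
print(f"Cloud cost:   ~${total_hours * hourly_rate_usd / 1e6:.1f}M")

contributor_hours = 8 * 365            # 8 hours/day for a year
print(f"Contributors: ~{total_hours / contributor_hours:,.0f}")
```

Running it gives ~42M hours, ~$10.5M of cloud spend, and ~14,400 contributors, which is where the figures above come from.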

We see it as a tremendous opportunity to contribute your GPU to scientific research: you won't be playing games 24/7, so its idle time can be put to use.

I've recently written an article explaining why we created Cluster One; you can read it on Medium: https://medium.com/@mhejrati/announc...r-3abff76a0bb2