
Thread: Brand "X"

  1. #1
    Join Date
    Apr 2005
    Location
    US
    Posts
    2,229

    Brand "X"

    In the continuing "curiosity killed the cat" series... Once upon a time I used to crunch with some CUDA crunchers, which put out enough heat to melt the Arctic and sucked dam generators like Bonneville dry. So what is the state of CUDA cards now? What is the best card NVIDIA puts out today when judged on the least heat and power used, yet is still a respectable cruncher by NVIDIA standards (not ranked against AMD/ATI)?

  2. #2
    Join Date
    Sep 2010
    Location
    Leiden, the Netherlands
    Posts
    4,428
    The best CUDA card with the least heat that is still a respectable cruncher might be the GeForce GTX 950.

    With a TDP of 'only' 90 W it gives 1572 GFLOPS single precision and 49.1 GFLOPS double precision, i.e. about 1.5x the SP of an HD 4850 and 0.25x the DP of that card, for a slightly lower TDP.
    As a bonus you get a much more recent design.
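    As a quick back-of-the-envelope check of those ratios (plain host-side C++, compiles with g++ or nvcc, no GPU needed; the HD 4850 figures of roughly 1000 SP / 200 DP GFLOPS at a 110 W TDP are my assumption, consistent with the ratios quoted above):

        #include <cstdio>

        // GFLOPS-per-watt from the figures quoted in this post.
        struct Card { const char *name; double sp, dp, tdp; };

        int main() {
            Card cards[] = {
                {"GeForce GTX 950", 1572.0,  49.1,  90.0},
                {"Radeon HD 4850",  1000.0, 200.0, 110.0},  // assumed specs
            };
            for (const Card &c : cards)
                printf("%-16s %5.1f SP GFLOPS/W  %4.2f DP GFLOPS/W\n",
                       c.name, c.sp / c.tdp, c.dp / c.tdp);
            return 0;
        }

    The newer card wins handily on SP per watt; the old Radeon still wins on DP per watt, which is the 0.25x DP caveat in a nutshell.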

    If you can spare $5,000, buy a Tesla K80.


  3. #3
    Join Date
    Apr 2005
    Location
    US
    Posts
    2,229
    wow..............

  4. #4
    Join Date
    Sep 2010
    Location
    Leiden, the Netherlands
    Posts
    4,428
    Quote Originally Posted by Brucifer View Post
    wow..............
    It's not all that bad; take a GeForce GTX 750 Ti:
    1306 GFLOPS SP, 40.8 GFLOPS DP, and a 60 W TDP


  5. #5
    Join Date
    Nov 2005
    Location
    Central Pennsylvania
    Posts
    4,333
    Explain the 'Titan Z', Dirk...





    Challenge me, or correct me, but don't ask me to die quietly.

    …Pursuit is always hard, capturing is really not the focus, it’s the hunt ...

  6. #6
    Join Date
    Sep 2010
    Location
    Leiden, the Netherlands
    Posts
    4,428

    The Titan Z is the utter king of consumer CUDA, like the Tesla K80 is for the professional market.
    It has compute capability 3.5 (Kepler), which for double-precision work beats the later compute capability 5.0 and 5.2 (Maxwell) of e.g. the Titan X.
    It does 8122 GFLOPS SP and 2707 GFLOPS DP, but burns your wallet down with a TDP of 375 W.

    The Titan X does 6144 GFLOPS SP and a mere 192 GFLOPS DP (!), burning at a 250 W TDP.
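    The DP-to-SP ratio is the real story here. A quick check of the figures above (plain C++, compiles with g++ or nvcc):

        #include <cstdio>

        // Kepler consumer flagships kept 1/3-rate DP; Maxwell cut it to 1/32.
        int main() {
            printf("Titan Z (Kepler):  DP/SP = %.4f (~1/3)\n", 2707.0 / 8122.0);
            printf("Titan X (Maxwell): DP/SP = %.4f (= 1/32)\n", 192.0 / 6144.0);
            return 0;
        }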
    Last edited by Dirk Broer; 10-18-2015 at 02:34 PM.


  7. #7
    Join Date
    Apr 2005
    Location
    US
    Posts
    2,229
    At 375 W 24x7 that's going to put a noticeable dent in one's monthly power bill, not to mention the amount of heat that thing has to throw off. Where would that card excel in the DC crunching world? And at what point in the NVIDIA DP game does it make more sense to go to the professional cards?
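    (For a rough idea of that dent -- the $0.12/kWh rate below is just an assumed placeholder, plug in your own utility's rate:)

        #include <cstdio>

        // Rough monthly running cost of a 375 W card crunching 24x7.
        int main() {
            const double watts = 375.0, hours = 24.0 * 30.0, rate = 0.12;  // rate assumed
            double kwh = watts * hours / 1000.0;  // 270 kWh per 30-day month
            printf("%.0f kWh/month -> $%.2f/month at $%.2f/kWh\n",
                   kwh, kwh * rate, rate);
            return 0;
        }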

  8. #8
    Join Date
    Apr 2005
    Location
    US
    Posts
    2,229
    Quote Originally Posted by Dirk Broer View Post
    It's not all that bad; take a GeForce GTX 750 Ti:
    1306 GFLOPS SP, 40.8 GFLOPS DP, and a 60 W TDP
    Definitely interesting from the power-consumption view. So we have NVIDIA and AMD/ATI putting out cards. As I understand it, from a programming viewpoint the NVIDIA CUDA and AMD approaches are different? I guess what I'm wondering is: what are the strengths/weaknesses of each, and why would a project select one versus the other, i.e., what are the considerations involved? Any idea?

    If you were going to tackle learning to program GPUs, which would you tackle and why?

    And another related question regarding both brands: do the manufacturers use the same methods for all their cards, or are there variations within each manufacturer's card designs that use different toolkits, etc.?
    Last edited by Brucifer; 10-18-2015 at 07:09 PM.

  9. #9
    Join Date
    Sep 2010
    Location
    Leiden, the Netherlands
    Posts
    4,428
    Quote Originally Posted by Brucifer View Post
    Where would that card excel in the DC crunching world? And at what point in the NVIDIA DP game does it make more sense to go to the professional cards?
    It would excel in those (sub)projects that have DP as a prerequisite (MilkyWay@Home, for example). For us consumers there is no need to go for the professional (Quadro, Tesla) cards unless we have money to burn.


  10. #10
    Join Date
    Sep 2010
    Location
    Leiden, the Netherlands
    Posts
    4,428
    Quote Originally Posted by Brucifer View Post
    I guess what I'm wondering is: what are the strengths/weaknesses of each, and why would a project select one versus the other, i.e., what are the considerations involved? Any idea?
    Which API a project is developed in comes down to what the project's programmer prefers or has already learned.

    Quote Originally Posted by Brucifer View Post
    If you were going to tackle learning to program GPUs, which would you tackle and why?
    I would try to make the OpenCL API better than both CUDA and CAL++/Brook, perhaps by also integrating the API with OpenGL, if possible.

    Quote Originally Posted by Brucifer View Post
    do the manufacturers use the same methods for all their cards, or are there variations within each manufacturer's card designs that use different toolkits, etc.?
    NVIDIA used to have only CUDA, but its last few generations also support OpenCL. ATI used CAL++/Brook and, shortly after (or perhaps just before) AMD acquired them, also started to support OpenCL.
    NVIDIA still continues CUDA; AMD has dropped CAL++/Brook.
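    For a taste of what the CUDA side looks like, here is a minimal, hypothetical 'hello world' of GPU computing -- a vector add -- sketched under the assumption of a CUDA 6+ toolkit (unified memory keeps it short; older setups need cudaMalloc/cudaMemcpy instead). The OpenCL version expresses the exact same idea, just through a different API:

        #include <cstdio>
        #include <cuda_runtime.h>

        // Each GPU thread adds one pair of elements.
        __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) c[i] = a[i] + b[i];
        }

        int main() {
            const int n = 1 << 20;
            float *a, *b, *c;
            cudaMallocManaged(&a, n * sizeof(float));
            cudaMallocManaged(&b, n * sizeof(float));
            cudaMallocManaged(&c, n * sizeof(float));
            for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

            // Enough 256-thread blocks to cover all n elements.
            vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
            cudaDeviceSynchronize();

            printf("c[0] = %.1f (expect 3.0)\n", c[0]);
            cudaFree(a); cudaFree(b); cudaFree(c);
            return 0;
        }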

