
Thread: Benchmarks

  1. #1
    Join Date
    Feb 2006
    Location
    UK
    Posts
    1,098

    Benchmarks

Thought you might be interested in seeing these with regards to next month's Opteron 256 single-core 3.0GHz

    XEON 3.8GHz 800MHz 1MB/SLC - 2-way
    SPECint_base2000 = 1796
    SPECint_rate_base2000 = 42.5
    SPECfp_base2000 = 1933
    SPECfp_rate_base2000 = 33.6


Opteron 256 3.0GHz 1MB/SLC - 2-way
    SPECint_base2000 = 1887
    SPECint_rate_base2000 = 43.2
    SPECfp_base2000 = 2179
    SPECfp_rate_base2000 = 49.5

    btw, the Opteron was running in a blade whilst the Xeon had the advantage of running in a rack server! :shock:


  2. #2
NeoGen - AMD Users Alchemist Moderator
Site Admin
    Join Date
    Oct 2003
    Location
    North Little Rock, AR (USA)
    Posts
    8,451
    It's always nice to see that AMD keeps whacking Intel silly.

But tell me something, as I don't know how those SPEC benchmarks work: are those values much greater than similar desktop solutions?

    And by the way... you have been watching out for Bozo on BBC Climate Prediction, haven't you? He's getting real close to your spot. :P

  3. #3
    Join Date
    Feb 2006
    Location
    UK
    Posts
    1,098
Desktop and server benchmarking are typically based on totally different tests, so you can't really compare them. With servers, integer operations are more in the foreground and so carry more weight than on a desktop.

Manufacturers measure their servers and high-end workstations with the www.spec.org benchmark code, but obviously these are only indicative figures, not real-world application tests in a production environment.

So basically it consists of measuring how much time is required to execute a certain task (speed) and how often a task can be executed within a certain period of time (rate).
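A rough sketch of those two metrics in Python - all the numbers here (reference time, run times, the 100x scaling) are my own illustration, not real spec.org figures:

```python
# Toy illustration of SPEC-style "speed" vs "rate" metrics.
# All numbers are invented; real reference times come from spec.org.

def speed_ratio(ref_seconds, run_seconds):
    # speed: how much faster than the reference machine a single
    # copy of the task finishes (higher is better)
    return ref_seconds / run_seconds * 100

def rate_ratio(copies, ref_seconds, run_seconds):
    # rate: throughput - scales with how many concurrent copies
    # complete within the measured window (higher is better)
    return copies * ref_seconds / run_seconds

print(speed_ratio(1400, 70))     # one copy, 20x faster than the reference
print(rate_ratio(2, 1400, 75))   # two concurrent copies
```

So a box can post a modest speed figure but a strong rate figure if it handles many concurrent copies well, which is exactly the 2-way server case above.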

SPEC aren't too bad compared with others, as they simply measure system performance on integer operations and floating-point operations and give a fairly level playing field. The source code does have to be compiled for each platform, though, so certain tests can be optimized by the compiler. SPEC feel they have got round this by measuring with conservative (base) optimization, where strict guidelines exist - essentially the use of the same optimization flags in identical sequence for all individual benchmarks.

They can also measure with aggressive (peak) optimization, where the guidelines are more generous, so each individual benchmark can be optimized separately.

The first measurement (base) is mandatory, the second (peak) is optional, so I did not show them - they are not a fair comparison and are usually only there for manufacturers to brag about how much faster their systems are than competitors'.
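To make the base vs peak rules concrete - hypothetical flag sets (164.gzip and 181.mcf are real CPU2000 benchmarks, but the flags are just examples I picked, not anyone's actual submission):

```python
# Hypothetical flag sets, purely to illustrate the base vs peak rules.

# base: one flag list, applied in the same order to every benchmark
base_flags = ["-O2", "-march=k8"]

# peak: each benchmark may be tuned individually
peak_flags = {
    "164.gzip": ["-O3", "-funroll-loops"],
    "181.mcf":  ["-O2", "-fprefetch-loop-arrays"],
}

for bench in ("164.gzip", "181.mcf"):
    print(bench, "base:", base_flags, "peak:", peak_flags[bench])
```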


Now back to the important stuff! BBC Climate Change - yep, I noticed Bozo storming in there! :shock: We need more like him to push us back up again, as the team has dropped to #6.

