Primergy BX600 - Just imagine the possibilities



Nflight
04-03-2006, 09:09 PM
Not everyone has several thousand dollars to spend on such equipment, but all of us have dreamed of someday making a good enough living that we could afford such an item, or two, or three. What if someone could come up with a business plan that would let people like you and me use this equipment to help the world cure diseases faster? I thought I was dreaming when I first started down this path, but the more I keep thinking about it, the more I believe I can make it work. I currently have a relationship with a local medical college, and a few of the professors are my drinking buddies down at the local pub. Our conversations keep coming back to the same need: a way to find cures faster and to process more clinical lab trials faster and more efficiently.

Faster more powerful kinda sounds like Mitro's little one on the way! j/k

Speaking seriously now: I write grants for a living on the side. It does not make me a lot of money, but it puts me in the right position with politicians and with those who are looking for cures to diseases like the emerging bird flu virus. By relying on computers and their ability to find answers faster, instead of test tubes and accidental vial contamination, finding solutions through computer analysis rather than bench work becomes feasible. One problem can be attacked by 100 computers doing 10,000 simulated test-tube runs per minute, handled by just one computer operator, instead of lab technicians who perform tests day after day to complete maybe 8 trials a day. On top of that, each lab technician gets paid something like $45,000 plus benefits. Now multiply that by the roughly 1,000,000 tests that need to be completed before an FDA ruling can be obtained, and you get time and expense. This is why drugs cost so much money.
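To put those back-of-the-envelope figures side by side (all of the numbers below are the rough guesses from this post, not real benchmarks):

```python
# Rough throughput comparison using the guesses above:
# 100 computers, each running 10,000 simulated tests per minute,
# versus one lab technician completing 8 trials per day.

computers = 100
tests_per_computer_per_min = 10_000
tech_trials_per_day = 8
tech_salary = 45_000          # USD per year, before benefits
tests_needed = 1_000_000      # guessed trial count before an FDA ruling

grid_tests_per_min = computers * tests_per_computer_per_min
minutes_on_grid = tests_needed / grid_tests_per_min
tech_days = tests_needed / tech_trials_per_day
tech_years = tech_days / 365

print(f"Grid: {minutes_on_grid:.0f} minute(s) of simulated runs")
print(f"One technician: {tech_days:,.0f} days (~{tech_years:,.0f} years, "
      f"~${tech_salary * tech_years:,.0f} in salary)")
```

Even if the per-test figures are wildly optimistic, the gap is so many orders of magnitude that the basic argument holds.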

Take a business plan, dressed up to look good with detail in every facet, offer it to the medical community, politicians, and grant funders, and retrieve funds to operate a data mining and research operation from your own small not-for-profit business or foundation. You get to sit in front of your own system like we do every day, pushing our machines to the maximum, watching the output, and adjusting as needed. Most of us would sit here 14 hours a day if you let us.

Since I brought up the BX600, and I know I can't afford it, I wondered if I could encourage some thought from others on this idea. Grid computing, whereby you never actually own the systems you have crunching data, can be arranged with Sun Microsystems and others. I would sit here and send my directives to a grid of computers working on a certain project, all for the good of mankind. I would not want to own the computers, as they would be outdated as soon as they arrived in crates to be assembled. But if I leased them under a grid contract, the systems would increase in efficiency as newer technology came out. Instead of dishing out $100,000 for a few systems, I could actually get 36 months of grid computing contracts and 36 months of living expenses. With this concept I would never have to rewire my house or add additional cooling, because the computers are not in my house; they are at whichever company holds the grid computing contract.

I ramble when I dream up such harebrained ideas, but now some of you can see where I have been thinking. And yes, if I have been quiet lately, now you know where my thoughts have been.

As I have been told, most grid computing contractors will not contract out to an individual, so you must become an incorporated entity. That is of course the cheapest way to go; if anyone knows a better way, please advise.

http://www.fujitsu-siemens.com/products/standard_servers/blade/primergy_bx_600.html#

I suggest clicking on the related links section and scrolling down to the 3D animation; it's an awesome presentation even if you can't afford the real thing.

NeoGen
04-03-2006, 09:58 PM
Say... instead of paying for the use of those machines, why not start a public distributed computing project? :)

If I'm not mistaken, the HashClash project is running off a single 400MHz machine. The crunching power is kindly donated by all of us DC fanatics. :)

Lagu
04-03-2006, 10:10 PM
Hi

I use my Intel PII 350 MHz on HashClash, and a WU takes from 14 to 54 minutes according to BOINC, but it is often shorter.

Lagu :)

Nflight
04-04-2006, 01:44 AM
Gentlemen, you're not seeing it my way. The whole idea is to provide an income for myself so I may sit online like I am now and tinker to my heart's content, all while having someone else pay for it.

If you want a real machine to act as a DC Project site mainframe see this machine: http://apstc.sun.com.sg/popup.php?l1=research&l2=projects&l3=biobox&f=overview

NeoGen
04-04-2006, 02:15 AM
So, you mean something like... for example, renting a cluster's computing power and using it for bio-medical applications, provided that you can get a grant or sponsorship to get the plan going (and balance the costs)?

AMD-USR_JL
04-04-2006, 02:34 AM
Or maybe a bunch of people could chip in to rent it. On that first machine it said you could have 10 blades, and in each blade up to 16 processors. So that's 160 cores, I'm guessing? I'm sure the price of that beastly machine would be a lot less if it were divided by 160 people.

It could be like one of those mutual fund things, but everyone would get tax deductions, if Uncle Sam :hathat1: would give them to us.

I'd like to see how much heat that would create; you could probably heat my whole neighborhood with something like that. :D

Strongbow
04-04-2006, 05:51 AM
Nflight,

You've mixed two subjects here - blade servers and utility grids!

One is the BX600 blade portfolio - from my company, FSC. This is a standard 7U rack-mounted blade chassis with 10 slots at the front. Each slot can take 1 blade, which can contain up to 2 dual-core Opterons with up to 32GB RAM. The nice feature of this design is that you can directly connect up to 4 blades together, making an 8-socket, 16-core (AMD's current maximum), 128GB blade. So you could have 2 x 16-core 128GB and 1 x 8-core 64GB (or 2 x 4-core 32GB) blades in this 7U monster. Regarding power and heat, it churns out way less than all of the competition: IBM, HP, Dell!

The other part is utility grids - Sun have finally got their utility grid sort of operational in the US. They state that it is charged at $1 per CPU per hour! You can read more here: http://www.sun.com/2006-0320/feature/index.jsp

What you're not told is how fast these CPUs are, or whether you have a choice of CPU type (crucial for scientific experiments, as an Opteron and a SPARC, for example, will give different floating point results in high precision computing).

EDIT ADDITION - I've just found out that the servers within Sun's compute grid are typically AMD Opterons with 8GB RAM running Solaris 10 x86.
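At that advertised rate the economics are easy to sketch. A quick back-of-the-envelope (the CPU count and usage pattern below are made-up assumptions, not anything Sun quotes):

```python
# Cost of a hypothetical run on a $1 per CPU-hour utility grid.
rate = 1.00                 # USD per CPU per hour (Sun's advertised price)
cpus = 100                  # assumed size of the rented slice
hours_per_month = 24 * 30   # assumed round-the-clock usage
months = 36                 # the 36-month contract idea from earlier

monthly_cost = rate * cpus * hours_per_month
total_cost = monthly_cost * months
print(f"${monthly_cost:,.0f}/month, ${total_cost:,.0f} over {months} months")
```

Which is a good illustration of why this plan needs grant funding rather than pocket money: even a modest 100-CPU slice running flat out costs tens of thousands of dollars a month at that rate.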

If you are looking for a system that can dynamically allocate processor resources to environments for temporary use (pooled architecture) then there are products out there that can do this. My company's strategy in this area is called "the Dynamic Data Center" and we have several products which cater here.

One software product which allows you to use existing kit is called ASCC (Adaptive Services Control Center) http://www.fujitsu-siemens.com/le/it_trends/triole/ascc.html

another product, which is a complete solution, has been OEM'd from Egenera called BladeFrame. This system has decoupled points of state and identity away from the processor and memory so that you can dynamically allocate server configurations along with connectivity and builds all within software. http://www.fujitsu-siemens.com/le/products/standard_servers/bframe/index.html

BladeFrame brings some great new possibilities into a datacenter: you no longer have silo'd, preconfigured servers, because the hardware configuration and builds, along with the network design, are all logical, so you can change and redesign at will with just a few point-and-clicks.

If you have any questions on GRID, HPC or Pooled architectures then let me know as these are some of the areas I specialise in!

Hope this helps!

Strongbow
04-04-2006, 07:14 AM
just carrying on from where I left off (I'm starting to write more than even Nflight usually does! ;) ...

If you are after an environment that crunches fast and reports back to the host, then you could just have a load of loosely coupled (on a LAN or the Internet) fast servers and a job scheduler that drops the tasks onto them and then waits for the results. This is typically called High Throughput Computing (sort of similar to Distributed Computing - like BOINC then!). You would use it for workloads made of many independent, coarse-grained tasks distributed serially. The design benefits from bandwidth, as the direct effect of high throughput (large tasks) and mass (lots of servers), but latency is not that important, since each task is independent.
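As a toy sketch of that high-throughput pattern (the "tasks" and worker count here are invented for illustration; a real scheduler such as Condor or BOINC does far more):

```python
# Minimal high-throughput computing pattern: a scheduler hands
# independent coarse-grained tasks to a pool of workers and
# collects results; no worker ever talks to another worker.
from multiprocessing import Pool

def run_task(task_id: int) -> tuple[int, int]:
    # Stand-in for one independent "test tube" computation.
    result = sum(i * i for i in range(task_id * 1000))
    return task_id, result

if __name__ == "__main__":
    tasks = range(1, 9)                # 8 independent jobs
    with Pool(processes=4) as pool:    # 4 loosely coupled "servers"
        for task_id, result in pool.imap_unordered(run_task, tasks):
            print(f"task {task_id} done: {result}")
```

Because the tasks never synchronise with each other, adding workers scales throughput almost linearly, which is exactly why latency matters so little in this style of computing.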

If you really want more of a high-computation environment, then parallelised computing grids or HPC (high performance computing) would be the way forward (start saving, as this is not cheap). These are designed to handle fine-grained tasks and need rapid synchronisation of the pieces (tasks) because they run in parallel. They are tightly coupled environments and would not suit a BOINC setting unless the developer designed the task specifically for it. Latency is the killer here: a task is spread in pieces across so many processors that synchronisation of results is paramount, so the bottleneck that typically dictates overall performance is the connectivity, not the compute power. Infiniband is popular in these environments, being fairly low latency with decent bandwidth (although there are a few new technologies coming out in 2007 and 2008 which would stomp all over Infiniband). I wouldn't even use 10G Ethernet: although you would get fairly decent bandwidth, you would be hindered by the packet verification and addressing latency of Ethernet.
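By contrast with the loosely coupled style, here is a toy sketch of the fine-grained parallel pattern (workers and step counts are invented; on a real cluster each barrier wait crosses the interconnect, which is where latency dominates):

```python
# Fine-grained parallel pattern: ONE task split across workers that
# must synchronise at every step. The barrier.wait() is the point
# where interconnect latency would throttle a real HPC job.
from threading import Barrier, Thread

STEPS, WORKERS = 3, 4
barrier = Barrier(WORKERS)
partial = [0] * WORKERS

def worker(rank: int) -> None:
    for step in range(STEPS):
        partial[rank] += rank + step    # local piece of the computation
        barrier.wait()                  # all ranks sync before next step

threads = [Thread(target=worker, args=(r,)) for r in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("partial sums:", partial)
```

Every worker stalls at the barrier until the slowest one arrives, so unlike the HTC case, adding more workers here only helps if the interconnect can keep the synchronisation cost down.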


So, what would a scientific client want? It depends on how they constructed their application: either fine-grained (typically parallel) or coarse-grained (typically serial) tasks!

Strongbow
04-04-2006, 01:37 PM
I think I have missed your point entirely! :scratch:

...but in an attempt to salvage my posts above: if you don't want, or can't afford, to build an HPC or HTC system, then you could set up a BOINC-type service (possibly even using BOINC itself), but you would need to do something a bit more advanced than what everyone else is doing. Maybe guarantee compute power, or maybe just organise the promotion to the DC communities - it's a million dollar question that I wouldn't be typing in public if I knew the answer. Also, to build a large dedicated community of systems, you would probably have to pay the compute clients (us, not them!) based on their task progress and their availability.

Nflight
04-04-2006, 02:37 PM
In response to Blackheath's comments: you are actually farther along on my idea than I was ready to explain.

1). Devising a plan to allow myself, or any other member of the AMDusers team, an income whereby we can work from home on the BOINC DC projects, paid a minimal salary to live on while we run the project. This is the first objective in my thinking: obtaining an income from NIH, CDC, or HHS grants, or from a private foundation.

2). Acquiring the direction to follow to get the best output for the price, for either fine-grained (typically parallel) or coarse-grained (typically serial) layouts. I would ask for the best information possible from my experienced AMDusers teammates, bringing together their experience with each project that has been deemed a potential source of grant funds. What system configuration works best: processor speed, FSB speed, amount of RAM, bandwidth required, etc.?

3). Finding the best company to deal with in the grid computing area, whether it be Sun Microsystems, Fujitsu-Siemens, IBM, HP, even someone named Axceleon, or some other small name I have never heard of that may need to make a name for themselves. Then looking at all the loose ends and finding solutions to the problems before we push for an actual operation and draw up a business plan / grant request.

P.S. Yes, I know this is a public forum, so if we want this project to be for the AMDusers team only, maybe we should take the conversation out of public view. Or maybe you want everyone to know that we are working on making this a reality faster - the questions are open for discussion.

Strongbow
04-04-2006, 08:30 PM
The thought of that 1024-core processor got me thinking about the Cell processors, which will be in the PlayStation 3 and a range of future IBM servers, probably in 2007. These are the ones you should be looking at closely, as the architecture brings loads of new possibilities with impressive performance.

If you're interested in reading more then you should spend some time digesting this : http://www.blachford.info/computer/Cell/Cell0_v2.html

AMD-USR_JL
04-04-2006, 09:28 PM
Don't get me started about the ps3 man. I had a long fight on einstein about it.

It boiled down to this.
The PS3's Cell will have a PPC core plus 7 usable SPEs, all running at 3.2 GHz. In 32-bit mode it has an incredible speed of somewhere around 200+ GFLOPS. But in 32-bit mode it cannot be used for DC'ing, because it is not IEEE 754 compliant; it rounds off numbers. It does have a 64-bit mode, but that is only about 25 GFLOPS. There still are no projects with a 64-bit app, so until there are, anyone who has one won't be able to use it. A few weeks ago they said they are "planning" for it to ship with an HDD (I think it is 60GB), not as an upgrade/add-on. All this can be bought for a mere $400-$600 US (you can never be sure until it comes out this Christmas). It will either have Linux or Mac OS on it, I think. Can you run Windows on a PPC core?
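To put those two throughput figures side by side (the workload size below is an arbitrary assumption; the GFLOPS numbers are the rough ones quoted above):

```python
# How long a fixed floating-point workload would take at the two
# quoted Cell throughput figures (32-bit vs 64-bit mode).
workload = 1e15              # assumed: 10^15 floating-point operations
single_gflops = 200          # rough 32-bit figure quoted above
double_gflops = 25           # rough 64-bit figure quoted above

t_single = workload / (single_gflops * 1e9)
t_double = workload / (double_gflops * 1e9)
print(f"32-bit: {t_single:,.0f} s, 64-bit: {t_double:,.0f} s "
      f"({t_double / t_single:.0f}x slower)")
```

So even taking the quoted numbers at face value, the 64-bit mode a DC project would actually need is roughly 8x slower than the headline figure.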

Another thing I have been wondering about is the GPU. The PS3's GPU has a floating point performance of 1.8 TeraFLOPS. With everyone talking about using GPUs to crunch, if we could use that one, then a PS3 would be a real workhorse.


Oh, I forgot to say that it comes with a built-in wireless card, so it will hook up to your wireless network straight out of the box.

Empty_5oul
04-04-2006, 11:14 PM
I haven't read much about the PS3 lately; they keep changing it!

What is the difference between the modes? Would the 64-bit mode ever be used by a game itself? And why does the mode that allows larger/faster data handling end up slower performance-wise?

I believe DNET (RC5-72 and OGR-25) has true 64-bit clients, but it is down at the moment due to a RAID crash. It should be back soon though!

AMD-USR_JL
04-04-2006, 11:54 PM
The 32-bit mode on the PS3, I think, is a lot faster because it doesn't do the proper rounding on those numbers. You don't have to round them in games, so I guess they cut that out to make it a lot faster. They will probably use the 32-bit mode for games, since the PS3 won't have more than 4GB of RAM; I think it is going to have 512MB.

DNET has a true x86_64 client? Sounds sweet! I think I might upgrade my OS, stop all this BOINCing, and crunch for that in 64-bit SwEEtNeSS. I think I might wait for Vista to come out though, then I'll switch.

Empty_5oul
04-05-2006, 09:28 AM
DNET didn't release one for Windows yet; possibly they are waiting for Vista until they do.

This is their current list: http://www.distributed.net/download/clients.php (the Linux client at number 3 is 64-bit)

Nflight
05-19-2006, 01:17 PM
OK, so I like to resolve issues that linger in my head for weeks or months on end. I was looking into Gamer007's overheating problem when I found this link of AMD's, all tucked in nice and neat.

http://www.amd.com/us-en/assets/content_type/DownloadableAssets/PID31342D_4P_server_competitive_comp.pdf

I love pondering "what if?" What if I could afford such an item? What if I could rearrange my expenses so I could afford these and all the other goodies I come across?