Hypothetically, assume an AMD offering that's Rs. 2000 cheaper than a comparable Intel part (with more cores) draws 40 W more, constantly, for the whole year. (This won't be true ALL the time; again, it's a hypothetical scenario.)
0.04 kW * 24 * 365 = 350.4 units per server. Now take a cluster of 100: that's 35040 units more than Intel. Commercial electricity is roughly Rs. 1.50 per unit. 35040 * 1.5 = Rs. 52560 spent more per year with AMD than with Intel.
But then the AMD offerings were Rs. 2000 cheaper. Multiply by 100: Rs. 2 lakh saved upfront.
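A quick sketch of that arithmetic in Python, using only the hypothetical figures above (40 W extra draw, Rs. 1.50 per unit, 100 servers):

    # all figures are the hypothetical ones from the scenario above
    extra_power_kw = 0.04        # 40 W extra draw per AMD server
    hours_per_year = 24 * 365
    tariff = 1.50                # Rs. per unit (kWh), commercial rate
    servers = 100

    extra_units = extra_power_kw * hours_per_year * servers   # 35040 units per year
    extra_cost_per_year = extra_units * tariff                # Rs. 52560 per year
    upfront_saving = 2000 * servers                           # Rs. 2 lakh saved on purchase
    print(extra_units, extra_cost_per_year, upfront_saving)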
Do your math now.
Electricity isn't a huge deciding factor for data centers.
And datacenters won't mind buying 100 Intel offerings over AMD offerings either, even though they'll have to pay more upfront.
The most important factor for them is... the throughput they can get from the limited amount of space available to them. Whether it's Intel or AMD, they don't care.
Heck, the total running cost over 4 years turns out to be roughly the same whether you choose AMD or Intel.
If you go with AMD, you pay 2 lakhs less for the servers but ~50k more for electricity per year. If you go Intel, you pay 2 lakhs more for the servers but ~50k less for electricity per year.
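The break-even works out like this (same hypothetical numbers as before, nothing extra assumed):

    upfront_saving = 2000 * 100            # Rs. 2 lakh saved buying 100 AMD servers
    extra_electricity_per_year = 52560     # from the calculation above
    breakeven_years = upfront_saving / extra_electricity_per_year
    print(round(breakeven_years, 1))       # ~3.8 years, so a 4-year TCO is a wash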
This again vindicates my point about maximum performance from limited floorspace. That's all that matters for datacenters. With reliability ofc.
Moreover, they deal in crores, not lakhs. This discussion is as silly as tkin's post was.