Agree with Apple's deal, but they play a huge role in designing the chips, like all their components. (E.g., Apple-designed Samsung screens have a reject rate of about 50%, and it was higher when the X came out.) And their Rosetta translation layer is incredibly efficient. It took Microsoft years to develop a worse version.
I learned that Rosetta is efficient because it's backed by hardware on the M1. I saw that if you use Rosetta on Linux, for example, the Qualcomm emulator competes with it.
Can you share more about the hardware support? What I heard from marcan, who was driving the effort to port Linux to the M1, is that the instruction set is the same as non-Apple ARM. Is it memory architecture? Register set? Co-processor acceleration?
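One commonly cited answer (an assumption here, not something confirmed in this thread) is memory ordering: x86 guarantees total store ordering (TSO), while ARM is weakly ordered, so a binary translator normally has to emit explicit ordering instructions around memory operations, and the M1 reportedly has a hardware TSO mode that lets Rosetta skip that overhead. A minimal C++ sketch of what a translator has to do in software on a weakly ordered CPU (the function names are hypothetical):

```cpp
#include <atomic>

// On x86 (TSO), every plain store already behaves like a release and every
// plain load like an acquire. On weakly ordered ARM, a binary translator
// must emit explicit ordering (stlr/ldar, or dmb barriers) to preserve the
// semantics the original x86 program relied on.
std::atomic<int> shared{0};

void translated_x86_store(int v) {
    // x86 `mov [shared], v` -> ARM store-release to keep TSO semantics
    shared.store(v, std::memory_order_release);
}

int translated_x86_load() {
    // x86 `mov eax, [shared]` -> ARM load-acquire
    return shared.load(std::memory_order_acquire);
}
```

With a per-thread hardware TSO bit, ordinary ARM loads and stores already carry this ordering, so the translator can emit plain loads/stores instead of acquire/release pairs; that is the kind of efficiency win being attributed to the M1 here.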
The software aspect I won't argue with, but I will push back on the chip design. In 2024, most parts of most chips are built from library prefabs (standard-cell libraries), and beyond that, most of the efficiency gains come from taking advantage of what the chip fab is offering.
That's why these made-up nm numbers are so important. They are effectively marketing and don't have much basis in physical reality (the EUV wavelength is ~13.5 nm, yet nodes are marketed as 6 nm now); what they do indicate is improvements in other aspects of lithography.
Apple aren't the geniuses here, which is why their M chips were bested by Intel's EUV chips as soon as Intel upgraded its fabs to be more advanced than TSMC's for six months. It's all about whose fab is running the bleeding edge.
source? last i heard intel received an euv tool from asml but certainly hasn't produced anything with it - that's slated for next year at the earliest, and until it hits mass production all the numbers are just marketing
intel and apple aren't aiming for the same things - apple's chip designs aren't generic. they target building an apple device… which means they run apple software incredibly efficiently - gpu vs gpu the m-series chips are fine, cpu vs cpu they're among the top of the range, and everything they do, they do incredibly efficiently (because apple devices are about being small, cool, and battery-saving)… and they certainly don't optimise for cost
what they do better than anyone else is produce an ultralight device made for running macos, or a phone made for running ios - hence the coprocessors etc. they put onto their SoCs to offload work from the general-purpose cores
you wouldn't say that honeywell is "bested" by intel because intel cpus are faster… raw speed isn't the goal of things like radiation-hardened cpus
Intel is really good at making 300+ watt monster CPUs. Intel really fucking sucks at making a good laptop CPU. Apple is really good at making an incredible laptop CPU, but sucks at making a Mac Pro CPU.
Process node differences definitely play a part, but it’s almost like comparing apples to oranges.
> Intel really fucking sucks at making a good laptop CPU
Which is funny, because it was the power efficiency of the P6 core (Pentium Pro/Pentium III) versus the NetBurst Pentium 4 that led Intel to drop NetBurst and base the Core series on an evolution of the P6; the only reason they had kept the P6 around at all was that NetBurst was a nightmare in laptops.
Here is a great explanation on the matter.
https://x.com/ErrataRob/status/1331735383193903104?s=20
:( it’s only available after sign up