Maybe Windows is not used in supercomputers often because Unix and Linux are more flexible for the CPUs they use (POWER9, SPARC, etc.)
That’s certainly a big part of it. When one needs to buy a metric crap load of CPUs, one tends to shop outside the popular defaults.
Another big reason, historically, is that supercomputers didn’t typically have any kind of non-command-line way to interact with them, and Windows needed one.
Until PowerShell and Windows 8, there were still substantial configuration options in Windows that were 100% managed by graphical tools. They could be changed by direct file edits and registry editing, but that added a lot of risk: all of the “did I make a mistake” tools were graphical, and so unavailable from the command line.
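For the curious, command-line registry access did exist back then via the built-in reg tool; what was missing was the safety net. A harmless read-only sketch (the key is real, chosen just for illustration):

rem Read a value straight from the registry, no GUI involved
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v ProductName
rem Writes use the same tool (reg add / reg delete), with no undo if you typo a key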
So any version of Windows stripped down enough to run on a supercomputer cluster was going to be missing a lot of features, until around 2006.
Since Linux and Unix started as command-line operating systems, both already had plenty of fully featured options for supercomputing.
More importantly, they can’t adapt Windows to their needs.
Yep, the other reason.
Plus Linux doesn’t limit you in the number of drives, whereas Windows limits you to letters A through Z. I read it here.
You can mount drives against folders in Windows. So while D: is one drive, D:\Logs or D:\Cake can each be a different disk.
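If you’ve never run into it, the built-in mountvol tool is one way to do this. A minimal sketch, assuming an empty NTFS folder as the mount point; the volume GUID below is a placeholder (run mountvol with no arguments to list the real ones on your machine):

rem List volumes and where they're currently mounted
mountvol

rem Attach an extra volume at an empty NTFS folder instead of a drive letter
mkdir D:\Logs
mountvol D:\Logs \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\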
What in the world? I don’t think I’ve ever seen that in the wild
It’s common in the server world. KB article on it is here.
For people who haven’t installed Windows before, the default boot drive is G, and the default file system is C
So you only have 25 to work with (everything but G)
Almost: the default boot drive is C:, and everything gets mapped after that. So if you have a second HDD at D: and a disc drive at E:, any USB drives you plug in would go to F:.
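You can check the current mapping yourself from an elevated command prompt (the output line is just an illustration):

rem List every drive letter currently in use
fsutil fsinfo drives
rem Typical output: Drives: C:\ D:\ E:\ F:\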
Why do you copy the boot files from C and put them in G during install then?
I don’t think anybody does that, honestly.
You can have a helper script do it for you (the GUI), but it’s still happening in the background.
The boot files go into C:, not G:.
Windows couldn’t operate if you did that; it doesn’t let you.
From the guide: “Copy Boot Files to EFI: copy the boot files to complete the EFI partition to boot into our Windows.”
bcdboot c:\Windows /s G: /f ALL
Source: https://christitus.com/install-windows-the-arch-linux-way/
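For context, the G: in that guide isn’t where Windows lives; it’s a letter temporarily assigned to the EFI system partition during a manual install, so bcdboot has a target to write to. Roughly the sequence, with the disk and partition numbers being assumptions for illustration:

rem In diskpart, temporarily give the EFI system partition a letter
rem (disk 0 / partition 1 are assumptions; check with "list disk" / "list partition")
diskpart
select disk 0
select partition 1
assign letter=G
exit

rem Then copy the boot files from the installed Windows onto that partition
bcdboot C:\Windows /s G: /f ALL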
G can be mapped after boot (usually to removable drives)
Ok that would make sense tbh