tl;dr: he says “x86 took over the server market” because it was the same architecture developers already had on their machines, which made it very easy to develop applications locally and then ship them to the servers.

Now this, among other points he made, is a very good argument for how and why it is hard for ARM to go mainstream in the datacenter; however, I also feel like he kind of lost touch with reality on this one…

He’s comparing two very different situations, or more precisely, two different eras. Developers aren’t tied to the underlying hardware like they used to be. The software development market evolved from C to very high-level languages such as JavaScript/TypeScript, and the majority of software is or will be developed in those languages, so the CPU architecture becomes irrelevant.

Obviously very big companies such as Google, Microsoft and Amazon are more than happy to pay the small “tax” of ensuring JavaScript runs fine on ARM rather than the big bucks they currently pay for x86…

What are your thoughts?

  • jet@hackertalks.com · 1 year ago

    He has a strong opinion, but he hasn’t lost the plot. It’s very reasonable to say you need to develop on the architecture you want to deploy to if you want to be efficient, so most companies are going to deploy to the architecture they have locally.

    But you’re quoting comments from 2019. Nowadays lots of Mac developers develop directly on ARM. So by his own argument, those Mac developers would be more comfortable deploying to an ARM-based architecture because they’re already developing on one.

    So broadly I agree with him, or at least with his past comments from 2019: you’re going to need local developer environments before you’re going to get efficient server software.

    • Avid Amoeba@lemmy.ca · 1 year ago

      ARM on a Mac isn’t nearly as helpful for workloads on an ARM server as an x86 PC is for an x86 server. The differences in hardware behavior between two x86 parts are small because the platforms are standardized way beyond the instruction set. An ARM server, on the other hand, has nothing in common with the Mac beyond the instruction set. Something runs great on your Mac because of the ridiculously fast on-SoC RAM; you throw it on an ARM server with completely different ARM CPUs and slotted RAM, and a bottleneck shows up.
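
      If you want to see this for yourself, a memory-streaming microbenchmark is enough. Here’s a minimal sketch in plain Node/TypeScript (sizes are arbitrary, nothing vendor-specific assumed); run it on an M-series Mac and on a slotted-RAM ARM server and compare the numbers:

      ```ts
      // Minimal sketch: stream 1 GiB of data and report effective read bandwidth.
      const N = 1 << 28;                         // 2^28 Float32s = 1 GiB
      const buf = new Float32Array(N);
      const t0 = process.hrtime.bigint();
      let sum = 0;
      for (let i = 0; i < N; i++) sum += buf[i]; // forces a full streaming read
      const secs = Number(process.hrtime.bigint() - t0) / 1e9;
      console.log(`${(1 / secs).toFixed(2)} GiB/s (checksum ${sum})`);
      ```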

  • pastermil@sh.itjust.works · 1 year ago

    As someone dealing with enterprise software for a living, I can say what he’s saying absolutely makes sense, and that’s even though I deal mostly in web applications (where I never really have to worry about the low-level stuff).

    Just because the top layer seems to be the same doesn’t mean the underlying ones are. There’s a reason why perfect bug compatibility is a thing (or maybe was, in the RHEL ecosystem?).

    Things that look like slam dunks in theory are never such in practice. Weird bugs pop up from time to time; and believe me, they will!

    It might be rare; you may only see it once or twice in a project. But when it happens, you’re gonna want to be ready, or people will question your ability to do your job.

    • thelastknowngod@lemm.ee · 1 year ago

      The cross-compiling point makes sense, but since this is a 4.5-year-old message, the state of ARM in the cloud has changed. Developers actually do have ARM-based machines now because of Apple. AWS has Graviton2 instances, and they are a lot cheaper than similarly specced x86_64 instances. ARM is now a viable option.

      • pastermil@sh.itjust.works · 1 year ago

        While it’s true that an ARM ecosystem is more feasible now, there aren’t many companies willing to equip their whole team with a very specific model of laptop, with almost no serviceable parts, for no perceivable benefit. No, Pinebooks as well as Raspberry Pi laptops and cyberdecks are not feasible for industry.

        Most companies are not looking for gimmicks for work, even when they make them for a living; so no, looking cool is not a benefit that defeats all that cost.

        Meanwhile, most people in the industry, such as myself, my current bosses & colleagues, my previous bosses & colleagues, and probably all my future bosses & colleagues, are fine running x86 for production servers. It’s got everything we need, including upgradable RAM and decades’ worth of collective experience, which I cannot say ARM has.

        At the same time, I have some hope for RISC-V. It won’t take over the industry anytime soon, but it’s been showing some promise for the long term.

        • TCB13@lemmy.worldOP · 1 year ago

          No, Pinebooks as well as Raspberry Pi laptops and cyberdecks are not feasible for industry.

          You know that’s all just a software thing. If Microsoft decided to open up Windows for ARM, that would no longer apply.

          are fine running x86 for production servers. It’s got everything we need, including upgradable RAM and decades’ worth of collective experience, which I cannot say ARM has.

          Yes, but people nowadays mostly go for the cloud. Cloud providers will scale ARM and sell it cheaper, and you won’t be replacing RAM on those for sure… at some point your management will simply crush your budget and you’ll be forced onto ARM.

            • TCB13@lemmy.worldOP · 1 year ago

              Sure, if we exclude all the cloud providers who sell ARM, like Google, Amazon and Oracle… Facebook actively uses ARM at scale, and I’ve personally seen medium-size companies (~200-500 employees) using it simply because their backends run fine on it and it’s cheaper.

            • thelastknowngod@lemm.ee · 1 year ago

              Not who you were responding to, but my company does this in AWS. To be fair, the entire platform is running in EKS, so it’s not much more difficult than updating the CI build pipelines to build multi-arch containers, adding additional node pools, and scaling down the amd64 ones. This was tedious but not difficult to do. I keep a small set of amd64 nodes for off-the-shelf software that doesn’t support ARM… I think the only thing left on those now is the New Relic agents. Once we move off of them, the x86_64 nodes can be killed entirely.

              This ended up saving us tens of thousands of dollars per month. The next step is to move the bulk of workloads to spot instances. I’ll be preferring ARM, but if there is only capacity for x86_64, I’ll have that option because of the multi-arch containers. This will save even more money and force developers to build applications more tolerant of node failure in the process.
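
              For anyone curious, the shape of it looks roughly like this in AWS CDK (a hypothetical TypeScript sketch, not our actual config; the cluster, instance sizes and counts are made up):

              ```ts
              import { App, Stack } from 'aws-cdk-lib';
              import * as ec2 from 'aws-cdk-lib/aws-ec2';
              import * as eks from 'aws-cdk-lib/aws-eks';

              const app = new App();
              const stack = new Stack(app, 'MultiArchStack');

              // Hypothetical cluster; in reality this already exists.
              const cluster = new eks.Cluster(stack, 'Platform', {
                version: eks.KubernetesVersion.V1_27,
                defaultCapacity: 0, // node pools are added explicitly below
              });

              // Shrinking amd64 pool, kept only for off-the-shelf images without ARM builds.
              cluster.addNodegroupCapacity('Amd64Pool', {
                instanceTypes: [new ec2.InstanceType('m6i.large')],
                minSize: 1,
                maxSize: 2,
              });

              // Graviton (ARM) pool on spot; multi-arch images schedule here unchanged.
              cluster.addNodegroupCapacity('ArmSpotPool', {
                instanceTypes: [new ec2.InstanceType('m6g.large')],
                minSize: 3,
                maxSize: 10,
                capacityType: eks.CapacityType.SPOT,
              });
              ```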

        • thelastknowngod@lemm.ee · 1 year ago

          Fair. For what it’s worth though, MacBooks have been the default laptop at every startup I’ve worked at over the last ~8 years… The first M1 MacBook Pro was released in 2020, and most of those companies had a policy of replacing machines after 2-3 years too. It’s getting to the point where entire companies can be (and are) running on ARM.

          Might be more specific to particular industries or company maturity levels, but this has been my personal experience.

    • TCB13@lemmy.worldOP · 1 year ago

      Things that look like slam dunks in theory are never such in practice. Weird bugs pop up from time to time; and believe me, they will!

      It might be rare; you may only see it once or twice in a project. But when it happens, you’re gonna want to be ready, or people will question your ability to do your job.

      Yes, however price is more important than all of that. If your management knows it can save 20% on its cloud spending by running ARM, they’ll run ARM and have you deal with those rare bugs.

      • mea_rah@lemmy.world · 1 year ago

        “If” being the key word here. There are nuances to be considered. One DB might run really well on ARM, another not so much.

        I’m saying this as a huge fan of ARM servers. They are amazing and often save a lot of money essentially for free (practically only a few characters change in terraform). In AWS with the hosted services (OpenSearch and such) there’s usually no good reason to pay extra for x86 hardware, especially since most of the intricacies are handled by AWS.
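
        To show just how few characters: here’s the same one-letter switch sketched in CDK/TypeScript (hypothetical, but the terraform diff is the same idea, m6i to m6g):

        ```ts
        import * as ec2 from 'aws-cdk-lib/aws-ec2';

        // The whole x86 -> Graviton change is one letter in the instance class.
        const before = ec2.InstanceType.of(ec2.InstanceClass.M6I, ec2.InstanceSize.LARGE);
        const after = ec2.InstanceType.of(ec2.InstanceClass.M6G, ec2.InstanceSize.LARGE);
        console.log(`${before} -> ${after}`); // "m6i.large -> m6g.large"
        ```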

        But there are workloads that just do not run on ARM all that well, and you would end up paying more for the hardware to get to the performance levels you had with x86.

        And that’s besides all those little pain points mentioned above that you’re “left to deal with”, which isn’t cheap either (but that doesn’t show up on the AWS bill, so management is happy to report cost savings).

        • TCB13@lemmy.worldOP · 1 year ago

          there’s usually no good reason to pay extra for x86 hardware, especially since most of the intricacies are handled by AWS. (…) all those little pain points mentioned above that you’re “left to deal with”, which isn’t cheap either (but that doesn’t show up on the AWS bill, so management is happy to report cost savings)

          Exactly my point above, when people start shouting about upgradability, compatibility and whatnot.

          • mea_rah@lemmy.world · 1 year ago

            Yeah, I was saying “no reason” in the context of SaaS. Once the management burden falls on the end user, it’s a different beast altogether.

            I think we’re trying to say the same in a different way actually. 😅

    • TCB13@lemmy.worldOP · 1 year ago

      Have you used ARM servers? They’re a massive pain to work with because they just need that one little extra step every time.

      Yes, I’ve had that experience, and a similar one when the first ARM SBCs came to the market circa 2009 with the SheevaPlug. At that time I was trying to get stuff to work on those, and I know how things go.

      when you actually need performance, Javascript needs to go. Java and dotnet have the same cross platform advantages with much higher speeds.

      After this point you’re essentially saying the same thing I was, BUT replacing the word JavaScript with Java/dotnet. Once those virtual machines run well on ARM (as they mostly do), developers won’t care about the architecture anymore. I only picked JavaScript/TypeScript as an example because it will most likely take over everything in a few years.
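
      “Won’t care” in a very literal sense: a trivial Node/TypeScript sketch like the one below runs unchanged on both architectures, and the only way to tell where it landed is to ask the runtime:

      ```ts
      import * as os from 'node:os';

      // Identical source on both machines; only the runtime binary differs.
      console.log(`running on ${process.arch} (${os.platform()})`);
      // prints "running on arm64" on a Graviton box, "running on x64" on amd64
      ```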

      That’s part of the reason why companies like Oracle are handing out free ARM VPS products with tons of free RAM, to convince people to try their ARM product for real.

      And why are they trying to push developers onto ARM? It’s a medium-term strategic investment: they’re waiting for, and pushing, ARM manufacturers such as Ampere Computing to develop “bigger and better” CPUs that can take on Intel. Once those are truly competitive in performance, they’ll simply start replacing Intel with ARM, and nobody will complain, because by then the 90% of developers using Java/dotnet/JavaScript (things that run on VMs) won’t even notice the difference between running on amd64 or ARM.

      There’s no benefit to running ARM servers. Running slow software like PHP and Javascript becomes especially problematic on slower hardware, so for those cross platform runtimes, you’re still better off running on amd64

      It seems that Facebook, the holy grail of running PHP, doesn’t agree with you. They’ve been pushing ARM in their datacenters for years now.

    • socsa@lemmy.ml · 1 year ago

      I’ve been using Linux4Tegra since before M1 silicon, and it’s really not that bad if you’re at all used to build-chain management. Granted, Nvidia does a lot of the initial heavy lifting here, but to spin up a custom environment you really only need to get the builds done right the first time, and then it’s pretty smooth sailing.

  • umami_wasabi@lemmy.ml · 1 year ago

    He was sort of right, back in 2019. Even then, though, IBM PowerPC mainframes were still thriving.

    Now, new languages with mature cross-compilation are here. Major cloud providers have ARM-based machines ready, even designing their own to suit their needs.

    ARM is in the datacenter market and has become a trend.

    The only thing I worry about is that the ARM ecosystem is too fragmented. AWS Graviton might behave differently from Ampere Altra, despite both implementing the ARM ISA.

  • kornel@programming.dev · 1 year ago

    I’ve got an ARM Mac. I’ve got ARM VPSes from Hetzner, and I’m compiling native code for the server.

    It’s definitely easier to develop, build, and test on the same architecture, than to deal with cross-compilation and emulation.

    So I think Linus is right.

  • phx@lemmy.ca · 1 year ago

    x86- and AMD64-based stuff is fairly standard in terms of a motherboard with a BIOS/UEFI and peripheral buses. ARM has for a long time been kind of a mess in this regard, and there are several varieties of ARM architecture that don’t play nicely with code compiled for others.

    Don’t get me wrong. ARM can be great for certain types of workloads. It’s typically more efficient than x86 at lower power, and better at various types of math. That’s why we DO see ARM available for certain stuff like Lambda functions, but you probably won’t be running full VM environments on it.

    Lastly: notice how it’s been hard to find certain varieties of Pi and various other ARM devices? There are shortages all over the place, but in general Intel and AMD have been able to meet demand for their CPUs.

    Yes, devs aren’t tied to hardware, but there are efficiencies of scale to consider.

    • TCB13@lemmy.worldOP · 1 year ago

      That’s why we DO see ARM available for certain stuff like Lambda functions, but you probably won’t be running full VM environments on it.

      We do see Amazon, Oracle and other providers offering full ARM-based VMs, and they work fine for the price… Even Facebook has been investing in ARM for its datacenters.

      (…) but there are efficiencies of scale to consider

      Yes there are; ARM will always be cheaper than Intel and is reaching competitive/comparable levels of performance.

      • qaz@lemmy.world · 1 year ago

        Yes there are; ARM will always be cheaper than Intel and is reaching competitive/comparable levels of performance.

        Compute time is significantly cheaper than dev time. 76% of the web is powered by PHP, and entire services are developed in JS. The average cost of a software developer in the US is $140k a year, while you can rent a server with 24 cores, 64 GiB of RAM and a 4 TiB SSD that can run plenty of badly optimized Node.js Docker containers for 90 bucks a month.
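
        Worked out with those numbers: $140,000 / 12 ≈ $11,700 per month for the developer, versus $90 per month for the server, a ratio of roughly 130:1. The hardware is a rounding error next to a single salary.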

  • Wooki@lemmy.world · 1 year ago

    JavaScript and TS are scripting languages that have little to nothing to do with threading.

  • Windex007@lemmy.world · 1 year ago

    The luxury you have of not knowing a thing about enterprise-grade servers, because your world is JavaScript, was made possible, and continues to be made possible, by people working on layers that do require familiarity with the underlying hardware.

    • jcg@halubilo.social · 1 year ago

      Right, whenever someone like Linus talks about developers he’s probably not referring to your run-of-the-mill code monkey making simple web apps.

      • TCB13@lemmy.worldOP · 1 year ago

        Okay… that’s fair, but “your run-of-the-mill code monkey” who writes JS makes up the majority of the market nowadays, and that share will only grow.

        • Windex007@lemmy.world · 1 year ago

          Are they going to be writing the firmware for enterprise-grade servers? If not, they’re irrelevant to what Linus is talking about here.

      • Windex007@lemmy.world · 1 year ago

        I think people underestimate the challenges involved in building software systems tightly coupled to the underlying hardware (like if you’re a team tasked with building a next-gen server).

        Successful companies in the space don’t underestimate it though, the engineers who do the work don’t underestimate it, and Linus doesn’t underestimate it either.

        The domain knowledge in your org required to mitigate the business risk isn’t trivial. The value proposition always needs to be pretty juicy to overcome the inertia caused by institutional familiarity. Like, can we save a few million on silicon? Sure. Do we think we understand the challenges well enough to keep our hardware release schedules without taking shortcuts that will result in reputational impact? Do we think we have the right people in place to oversee the switch?

        Over and over again, it comes back to “is it worth it”, and that’s a much more complex question to answer than just picking the cheaper chips.

        I imagine at this point there is probably a metric fuckton of enterprise software that strictly dictates it must be run on x86, even if it doesn’t have to be. If you stray from the vendor’s hardware requirements, bullshit or not, you’ll lose your support. There is likely friction in some consumer segments on the uptake as well.

    • Avid Amoeba@lemmy.ca · 1 year ago

      And that underlying stuff doesn’t run the same on x86 and dog knows whose ARM implementation.

    • TCB13@lemmy.worldOP · 1 year ago

      The luxury you have of not knowing a thing about enterprise-grade servers, because your world is JavaScript, was made possible, (…) by people working on layers that do require familiarity with the underlying hardware.

      That’s kind of my point… Since everyone is or will be coding in JavaScript (or other languages that run on virtual machines / “layers”), general developers won’t have a problem running on ARM datacenters anymore. Big cloud providers will take the opportunity to move to ARM, as it is cheaper for them.

      And btw, the people making JS fast and stable on ARM are, most likely, not that familiar with server-grade hardware. They’re optimizing for phones and the other places where ARM was born.

      • Windex007@lemmy.world · 1 year ago

        Big cloud providers will take the opportunity to move to ARM, as it is cheaper for them.

        The cloud isn’t a literal ephemeral cloud. It’s still a physical thing, with physical devices physically linked. Physical RAM in physical slots, with physical buses and physical chips (not just CPUs; many other ICs are in those machines too). The complexity of the arrangement and linkage of all that physical hardware is incredible.

        Nobody is out there writing enterprise server firmware in Java. How can you have a Java VM when the underlying components of the physical device don’t have the necessary code to offer the services the VM requires to run?

        To be incredibly blunt, and I don’t say this to be rude, your questions and assertions are incredibly ignorant. So much so that it’s essentially nonsense. It’s like asking “why do we still even have water when we have monster energy drink?” It demonstrates such a fundamental misunderstanding of the premise that it’s honestly difficult to know where to begin explaining how faulty the line of thinking is.

        Linus isn’t talking about JS developers at all. Even a little bit. I promise you, you would not enjoy hearing his unfiltered thoughts on JS developers.

        He’s talking about the professional engineers who design, build, and write firmware for enterprise-grade servers. There’s no overlap between JS coders and these engineers.

        • TCB13@lemmy.worldOP · 1 year ago

          To be incredibly blunt, and I don’t say this to be rude, your questions and assertions are incredibly ignorant. Linus isn’t talking about JS developers at all. Even a little bit. I promise you, you would not enjoy hearing his unfiltered thoughts on JS developers.

          Are you drunk? The guy literally speaks about cross-platform and higher-level stuff; let me quote him for you:

          This is true even if what you mostly do is something ostensibly cross-platform like just run perl scripts or whatever. Simply because you’ll want to have as similar an environment as possible,

          lol

          • Windex007@lemmy.world · 1 year ago

            In this case the reason that you see the rest of the pack in your rear view mirror isn’t because you’re in the lead: it’s because you’re getting lapped.

            I strongly encourage you to reach out to Linus directly to inform him of your insights. Please post back with the results.

  • bobtreehugger@awful.systems · 1 year ago

    It’s tough to debug issues when you can’t run on the same hardware directly.

    There’s a reason that arm support in open source software has exploded in the past few years, and it’s because of apple silicon.

    I’ll agree that it’s easier now, with most developers using higher level runtimes, but someone’s got to get those runtimes working, and it’s much easier to develop if you have a laptop running that hardware.

  • Gecko@lemmy.world · 1 year ago

    The linked message is from 2019, i.e. pre-M1 Apple laptops, and from a time when ARM in the datacenter was just starting out.

    Tbh, I feel like it’s kinda pointless to discuss a comment someone made over 4 years ago. Both the environment and the person themselves can change a lot in that time.

  • Kethal@lemmy.world · 1 year ago

    “The software development market evolved from C to very high-level languages such as JavaScript/TypeScript, and the majority of software is or will be developed in those languages, so the CPU architecture becomes irrelevant.”

    I saw someone else make a similar comment about C. People track these things, and C has been in the top 2 most widely used languages for more than 2 decades. Not knowing this should probably make you wonder why your background has resulted in such a narrow experience.

    https://en.m.wikipedia.org/wiki/TIOBE_index#

    • TCB13@lemmy.worldOP · 1 year ago

      Look, I’m not saying C isn’t important or that people aren’t using it, but… let me ask you one thing: if you look at the majority of the web (not specific cases), you’ll find that 76% of it is PHP. Furthermore, if you consider that everyone is moving to mobile apps, you’ll get a mix of Java/Kotlin, Swift, and a very strong move towards cross-platform stuff that is, in most cases, based on JavaScript. To make things worse, bootcamps for wannabe devs have been teaching Node as a valid backend solution for quite a while now. We see startups going that route and things going perfectly well.

      Since we have that huge market of higher-level languages that run perfectly well on ARM, do you really think that stuff made in C dictates the future of the market? The “issue” I see with the link you’ve provided is simple: nobody is developing “run of the mill” solutions in C anymore like we used to, and those are the solutions that have the numbers to move the market. Nowadays C is operating systems, libraries for higher-level languages, engines such as V8 for JS, a ton of IoT devices (which ironically are ARM), low-level electronics, industrial automation, and financial use cases where performance is really important.

      C is going to stay in specific places, but nobody develops websites or desktop and mobile applications with it anymore, hence my simplistic “the software development market evolved from C to very high-level languages such as JavaScript/TypeScript” conclusion.

      The market is moved by the masses, and the masses use technologies that are no longer bound to architectures the way older ones used to be.

      • Kethal@lemmy.world · 1 year ago

        It’s odd that you’re saying you shouldn’t consider the specific cases where C excels and then narrowing things down to the web, where languages like PHP excel. So now you probably have some idea why your experience is so narrow. There’s a lot more to programming than the web, and there always will be.

  • INeedMana@lemmy.world · 1 year ago

    From what I learned at university:
    The CISC instruction set (x86) was developed to address the technical reality of its time: costly CPU operations and fast reads from storage. Not long after, the situation changed: storage reads became slow compared to compute time (putting it simply, it’s faster to read an archive and unpack it than to read the unpacked thing). But in the meantime the PC boom happened. In a way, backward compatibility and market inertia locked us into an instruction set that is not the best optimized for our tech, despite the fact that RISC (for example ARM) was conceived earlier.

    In a way, software (compilers and interpreters too) is like a muscle: the more widely it’s used, the better it becomes. You can be writing in Python, but if your interpreter has missed optimization opportunities, your code will run faster on an architecture with a better-optimized interpreter available.

    From personal observations:
    The biggest cost of software is not writing something super efficient. It’s maintainability (readability and debugging), ease of use (onboarding/training time) and versatility (“let’s add the missing feature to what we have, instead of reinventing the wheel and maintaining two toolsets”).

    New languages are not created because they can do something faster than assembler (they can’t, btw). If assembly code is written as optimally as possible, high-level languages can at best be as fast. Writing such assembly is a problem behind the keyboard, not a technical limitation. The only thing high-level languages do better is the amount of time it takes a human to work with them.
    I would not be surprised to learn that the bigger part of the big bucks you mention goes not into optimization but rather into “how can we work around that difference so the high-level interface stays the same as on the more widely used x86?”

    In the end it all boils down to machine code; it’s the only thing that really exists when it comes to executing code. If your “human to bits” translator produces unoptimized binaries, it doesn’t matter how high-level the language you wrote in was.
    And somewhere along the way we’ve arrived at a level where a few behemoths like Google or Microsoft throwing money into research (not that I believe they are doing so when it comes to optimization) is enough.
    It’s field use that from time to time provides a use case that helps find an edge case where an optimization can be made.
    To find those on purpose? Dumping your datacenter in liquid nitrogen might be cheaper and probably more predictable.

    So yeah, I mostly agree with him.
    Maybe the times have changed a little; the things that gave RISC its biggest kick were smartphones and then single-board computers, so not long ago. The improvements are always bigger at the beginning.
    But the fact that some companies are trying to get RISC back into userland means, in my opinion, that the computing world has only started to heal after the effects of the PC boom. There’s a roughly 20-year stretch where x86 was the main thing and RISC was a niche.