Omae wa mou shindeiru (“You are already dead”)
AlmightySnoo 🐢🇮🇱🇺🇦
Yoko, Shinobu ni, eto… 🤔
עַם יִשְׂרָאֵל חַי (“The people of Israel live”) · Slava Ukraini (“Glory to Ukraine”) 🇺🇦 ❤️ 🇮🇱
- 62 Posts
- 231 Comments
AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world to Programmer Humor@programming.dev • modern operating system running on a Reagan era computer • 42 · 1 year ago
Yeah it’s not Linux. It’s forked off MenuetOS (https://menuetos.net/), which is a hobby OS written entirely in assembly (FASM flavor, https://flatassembler.net/).
It’s actually a good thing that visual learners get a chance to learn useful stuff by watching videos. Not everyone has the attention span required to read through a Wikipedia page.
For anyone wondering what Proton GE is, it’s Proton on steroids: https://github.com/GloriousEggroll/proton-ge-custom
For instance, even if you have an old Intel integrated GPU, chances are you can still benefit from AMD’s FSR just by passing a few flags to Proton GE, even if the game doesn’t officially support it, and you’ll literally get a free FPS boost (tested it for fun and can confirm on an Intel UHD Graphics 620).
AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world to Linux@lemmy.ml • Reddit API blew up and now I run Linux? • 181 · 1 year ago
Congrats! Your laptop will be even happier with a lighter but still nice-looking desktop environment like Xfce, and you even have an Ubuntu flavor around it: Xubuntu.
reminds me of

instead of `#if !defined(...)`
AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world to Linux@lemmy.ml • Fedora 40 Looks To Ship AMD ROCm 6 For End-To-End Open-Source GPU Acceleration • 3 · 1 year ago
Hard to tell, as it really depends on your use case. I’m mostly writing my own kernels (so, as if you’re doing CUDA basically) and doing “scientific ML” (SciML) stuff that doesn’t need anything beyond backprop on matrix multiplications, elementwise nonlinearities and some convolutions, and so far everything works. If you want some specific simple examples from computer vision: ResNet18 and VGG19 work fine.
AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world to Linux@lemmy.ml • Fedora 40 Looks To Ship AMD ROCm 6 For End-To-End Open-Source GPU Acceleration • 6 · 1 year ago
Works out of the box on my laptop (the `export` below is there to force ROCm to accept my APU since it’s not officially supported yet, but the 7900XTX should have official support). Last year only compiling and running your own kernels with `hipcc` worked on this same laptop; the AMD devs are really doing god’s work here.
AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world to Linux@lemmy.ml • Fedora 40 Looks To Ship AMD ROCm 6 For End-To-End Open-Source GPU Acceleration • 7 · 1 year ago
Yup, it’s definitely about the “open-source” part. That’s in contrast with Nvidia’s ecosystem: CUDA and the drivers are proprietary, and the drivers’ EULA prohibits you from using your gaming GPU for datacenter uses.
AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world to Linux@lemmy.ml • Fedora 40 Looks To Ship AMD ROCm 6 For End-To-End Open-Source GPU Acceleration • 191 · 1 year ago

> ROCm is that its very unstable
That’s true, but ROCm does get better very quickly. Before last summer it was impossible for me to compile and run HIP code on my laptop, and then after one magic update everything worked. I can’t speak for rendering as that’s not my field, but I’ve done plenty of computational code with HIP and the performance was really good.
But my point was more about coding in HIP, not really about using stuff other people made with HIP. If you write your code with HIP in mind from the start, the results are usually good and you get good intuition about the hardware differences (warps for instance are of size 32 on NVidia but can be 32 or 64 on AMD and that makes a difference if your code makes use of warp intrinsics). If however you just use AMD’s CUDA-to-HIP porting tool, then yeah chances are things won’t work on the first run and you need to refine by hand, starting with all the implicit assumptions you made about how the NVidia hardware works.
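To make the warp-size point concrete, here’s a minimal HIP sketch (the kernel name, sizes and launch configuration are made up for illustration): a warp-level sum that uses the built-in `warpSize` instead of hard-coding 32, so the same source stays correct on 32-wide NVidia warps and 32- or 64-wide AMD wavefronts. `__shfl_down` here is HIP’s warp shuffle; on a recent CUDA toolchain the equivalent is the `__shfl_down_sync` family.

```cpp
// Minimal HIP sketch: a warp-level sum that respects warpSize instead of
// assuming warps are always 32 threads wide.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void warp_sum(const float* in, float* out, int n) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    float v = (tid < n) ? in[tid] : 0.0f;

    // Tree reduction within one warp/wavefront: starting at warpSize/2
    // (not a hard-coded 16) keeps this correct on both 32- and 64-wide hardware.
    for (int offset = warpSize / 2; offset > 0; offset /= 2)
        v += __shfl_down(v, offset);

    // Lane 0 of each warp/wavefront accumulates its partial sum.
    if ((threadIdx.x % warpSize) == 0)
        atomicAdd(out, v);
}

int main() {
    const int n = 1 << 10;
    std::vector<float> h(n, 1.0f);

    float *d_in, *d_out;
    hipMalloc(&d_in, n * sizeof(float));
    hipMalloc(&d_out, sizeof(float));
    hipMemcpy(d_in, h.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemset(d_out, 0, sizeof(float));

    // Block size 256 is a multiple of both 32 and 64, so warps are full.
    warp_sum<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    hipDeviceSynchronize();

    float sum = 0.0f;
    hipMemcpy(&sum, d_out, sizeof(float), hipMemcpyDeviceToHost);
    printf("sum = %g (expected %d)\n", sum, n);

    hipFree(d_in);
    hipFree(d_out);
    return 0;
}
```

It should build with `hipcc` against either backend, which is exactly the kind of intuition you pick up when you write HIP-first instead of porting CUDA after the fact.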
AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world to Linux@lemmy.ml • Fedora 40 Looks To Ship AMD ROCm 6 For End-To-End Open-Source GPU Acceleration • 592 · 1 year ago
HIP is amazing. For everyone saying “nah it can’t be the same, CUDA rulez”, just try it, it works on NVidia GPUs too (there are basically macros and stuff that remap everything to CUDA API calls), so if you code for HIP you’re basically targeting at least two GPU vendors. ROCm is the only framework that allows me to do GPGPU programming in CUDA style on a thin laptop sporting an AMD APU while still enjoying 6 to 8 hours of battery life when I don’t do GPU stuff. With CUDA, in terms of mobility, the only choices you get are a beefy and expensive gaming laptop with a pathetic battery life and heating issues, or a light laptop plus SSHing into a server with an NVidia GPU.
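To give an idea of how thin that remapping layer is, here’s a toy sketch of the principle only (not the actual ROCm headers, which use inline wrapper functions and also translate error codes): on the NVidia backend the `hip*` entry points essentially just forward to the corresponding `cuda*` calls.

```cpp
// Toy illustration of the idea behind HIP's NVidia backend, NOT the real
// headers: conceptually, each hipFoo(...) forwards to cudaFoo(...), so HIP
// code built for the NVidia platform ends up as plain CUDA API calls.
#include <cuda_runtime.h>

#define hipMalloc              cudaMalloc
#define hipMemcpy              cudaMemcpy
#define hipMemcpyHostToDevice  cudaMemcpyHostToDevice
#define hipFree                cudaFree
```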
It depends. I’m working in the quant department of a bank and we work on pricing libraries that the traders then use. Since traders often use Excel and expect add-ins, we have a mostly Windows environment. Our head of CI, a huge Windows and PowerShell fan, at some point decided to add a few servers with Linux (RHEL) on them to run automated Valgrind checks and gcc/clang builds, to continuously test our builds for warnings, undefined behavior (gcc with -O3 does catch a few of them) and the like.
I thought cool, at least Linux is making it into this department. Then I logged into one of those servers.
The fucker didn’t like the default file system hierarchy, so he created stuff like `/Applications` and `/Temp` and installs programs by manually downloading binaries and extracting them there.
That’s a good way of maximizing technical debt.
AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world to Linux@lemmy.ml • OpenDX: An Open Source DirectX implementation for Linux, providing native support for DirectX-based applications and games! • 48 · 2 years ago
That repo is just pure trolling; read the “Improved performance” section and open some source files and you’ll understand why.
FreeBSD is now obsolete
AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world to Linux@lemmy.ml • GNOME's Dynamic Triple Buffering "Ready To Merge" • 54 · 2 years ago
Biased opinion here, as I haven’t used GNOME since they made the switch to version 3 and I dislike it a lot: the animations are so slow that they demand a good GPU with fast VRAM to hide it, and thus they need to borrow techniques from game/GPU programming to make GNOME feel fluid for users with less beefy cards.
AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world to Linux@lemmy.ml • GNOME's Dynamic Triple Buffering "Ready To Merge" • 961 · 2 years ago
Double and triple buffering are techniques used in GPU rendering (and in GPU computing too, though only up to double buffering there, since triple buffering is pointless when running headless).
Without them, if you want to do some number crunching on your GPU and have your data on the host (“CPU”) memory, then you’d basically transfer a chunk of that data from the host to a buffer on the device (GPU) memory and then run your GPU algorithm on it. There’s one big issue here: during the memory transfer, your GPU is idle because you’re waiting for the copy to finish, so you’re wasting precious GPU compute.
So GPU programmers came up with a trick to try to reduce or even hide that latency: double buffering. As the name suggests, the idea is to have not just one but two buffers of the same size allocated on your GPU. Let’s call them `buffer_0` and `buffer_1`. The idea is that if your algorithm is iterative, and you have a bunch of chunks in your host memory on which you want to apply that same GPU code, then at the first iteration you could, for example, take a chunk from host memory and send it to `buffer_0`, then run your GPU code asynchronously on that buffer. While it’s running, your CPU has control back and can do something else, so you immediately prepare for the next iteration: you pick another chunk and send it asynchronously to `buffer_1`. When the previous asynchronous kernel run is finished, you rerun the same kernel, this time on `buffer_1`, again asynchronously. Then you copy, asynchronously again, another chunk from the host, to `buffer_0` this time, and you keep swapping the buffers like this for the rest of your loop.

Now some GPU programmers don’t want to just compute stuff, they also might want to render stuff on the screen. So what happens when they try to copy from one of those buffers to the screen? It depends: if they copy synchronously, we get the initial latency problem back. If they copy asynchronously, the host→GPU copy and/or the GPU kernel will keep overwriting buffers before they finish rendering on the screen, which will cause tearing.
So those programmers pushed the double buffering idea a bit further: just add an additional buffer to hide the latency from sending stuff to the screen, and that gives us triple buffering. You can guess how this one will work because it’s exactly the same principle.
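If it helps to see the compute-side version in code, here’s a minimal HIP-flavored sketch of the double-buffering loop described above (the `scale` kernel, the chunk size and the chunk count are all made up for illustration; the point is the ping-pong between the two buffers and their streams):

```cpp
// Compute-side double buffering: two device buffers, two streams, and a
// loop that alternates between them so copies and kernels can overlap.
#include <hip/hip_runtime.h>
#include <vector>

__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int chunk = 1 << 20;   // elements per chunk
    const int num_chunks = 8;
    std::vector<float> host((size_t)chunk * num_chunks, 1.0f);

    float* buf[2];
    hipStream_t stream[2];
    for (int b = 0; b < 2; ++b) {
        hipMalloc(&buf[b], chunk * sizeof(float));
        hipStreamCreate(&stream[b]);
    }

    for (int c = 0; c < num_chunks; ++c) {
        const int b = c % 2;  // swap buffers/streams every iteration

        // Asynchronous host->device copy of chunk c into buffer b: the CPU
        // gets control back immediately and can queue work for the *other*
        // buffer, which is what hides the transfer latency.
        hipMemcpyAsync(buf[b], host.data() + (size_t)c * chunk,
                       chunk * sizeof(float), hipMemcpyHostToDevice, stream[b]);

        // Kernel on the same stream: it starts only after "its" copy is done,
        // but it can overlap with the copy queued on the other stream.
        scale<<<(chunk + 255) / 256, 256, 0, stream[b]>>>(buf[b], chunk, 2.0f);

        // Asynchronous device->host copy of the result back into place.
        hipMemcpyAsync(host.data() + (size_t)c * chunk, buf[b],
                       chunk * sizeof(float), hipMemcpyDeviceToHost, stream[b]);
    }

    hipDeviceSynchronize();
    for (int b = 0; b < 2; ++b) {
        hipStreamDestroy(stream[b]);
        hipFree(buf[b]);
    }
    return 0;
}
```

Buffer reuse is safe here because work queued on the same stream executes in order; in practice you’d also allocate the host chunks as pinned memory (`hipHostMalloc`) so the asynchronous copies can truly overlap with the kernels.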
for the math homies, you could say that NaN is an absorbing element
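In other words, for the basic IEEE-754 arithmetic operations NaN propagates through everything, which is exactly the absorbing-element law. A quick sanity check (noting that some library functions are deliberate exceptions, e.g. `pow(NaN, 0) == 1`):

```cpp
#include <cassert>
#include <cmath>

int main() {
    const double nan = std::nan("");

    // NaN absorbs through the basic arithmetic operations...
    assert(std::isnan(nan + 42.0));
    assert(std::isnan(42.0 - nan));
    assert(std::isnan(nan * 0.0));
    assert(std::isnan(1.0 / nan));

    // ...but a few library functions are specified as exceptions.
    assert(std::pow(nan, 0.0) == 1.0);
    return 0;
}
```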