• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: July 5th, 2023




  • That was my first take as well, coming back to C++ in recent years after a long hiatus. But once I really got into it I realized that those pointer types still exist (conceptually) in C, but they’re undeclared and mostly unmanaged by the compiler. The little bit of automagic management that does happen is hidden from the programmer.

    I feel like most of the complex overhead in modern C++ is really just spelling out, in extra detail, what you think is happening. Where a C compiler would make your code work in any way possible, which may or may not be what you intended, a C++ compiler will kick out errors and let you know where you got it wrong. I think it may be a bit like JavaScript vs TypeScript: the issues were always there, we just introduced mechanisms to point them out.

    You’re also mostly free to use those C-style pointers in C++. It’s just generally considered bad practice. (Rough sketch of the difference below.)
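    Roughly what I mean, as a minimal sketch (all names here are made up for illustration): in the C-style version, ownership of the buffer lives only in the programmer’s head, while the C++ version writes that same ownership into the type so the compiler can check it.

        #include <cstdio>
        #include <cstdlib>
        #include <memory>

        // C-style: this pointer "owns" the buffer, but nothing in the type says so.
        // Forgetting free(), or calling it twice, compiles fine and fails at runtime.
        void c_style() {
            int* data = (int*)std::malloc(100 * sizeof(int));
            if (!data) return;
            data[0] = 42;
            std::printf("%d\n", data[0]);
            std::free(data);           // ownership exists only by convention
        }

        // C++-style: std::unique_ptr declares the same ownership, and the compiler
        // rejects accidental copies that would create a second "owner".
        void cpp_style() {
            auto data = std::make_unique<int[]>(100);
            data[0] = 42;
            std::printf("%d\n", data[0]);
            // auto copy = data;       // error: unique_ptr is not copyable
        }                              // buffer freed here automatically

        int main() {
            c_style();
            cpp_style();
        }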



  • As someone who has often been asked for help or advice by other programmers, I know with 100% certainty that I went to university and worked professionally with people who did this, for real.

    “Hey, can you take a look at my code and help me find this bug?”
    (Finding a chunk of code that has a sudden style-shift) “What is this section doing?”
    “Oh that’s doing XYZ.”
    “How does it work?”
    “It calculates XYZ and (does whatever with the result).”
    (Continuing to read and seeing that it actually doesn’t appear to do that) “Yes, but how is it calculating XYZ?”
    “I’m not 100% sure. I found it in the textbook/this ‘teach yourself’ book/on the PQR website.”


  • Most people use the term “Hungarian Notation” to mean only adding an indicator of type to a variable or function name. While this is one of the ways in which it has been used (and actually made sense in certain old environments, although those days are long, long behind us now), it’s not the only way that it can be used.

    We can use the same concept (prepending or appending an indicator from a standard selection) to denote other, more useful categories that the environment won’t keep straight for us, or won’t warn us about in easy-to-understand ways. In my own projects I usually append a single letter to the ends of my variable names to indicate scope (roughly like the sketch below), which helps me keep my code modular and also lets me choose sensible variable names without fear of clashing with something else I’ve forgotten about.
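    To make that concrete, here’s a rough sketch of the idea (these particular suffix letters are just for illustration): G for file-scope globals, M for class members, P for parameters, and plain locals left bare.

        #include <string>
        #include <utility>

        // Example suffix convention: G = file-scope global, M = member, P = parameter.
        static int retryCountG = 3;            // file-scope "global"

        class Downloader {
            std::string hostM;                 // member
            int timeoutM = 30;                 // member

        public:
            explicit Downloader(std::string hostP) : hostM(std::move(hostP)) {}

            int timeoutFor(int attemptP) const {
                int timeout = timeoutM;        // plain local, no suffix needed
                if (attemptP > retryCountG)    // suffixes keep the scopes visually distinct
                    timeout *= 2;
                return timeout;
            }
        };

        int main() {
            Downloader d("example.org");
            return d.timeoutFor(1) > 0 ? 0 : 1;
        }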






  • Re: the Acceptance stage.

    Years ago I worked at a family-run business with a good working environment. The staff were once told a story of how, earlier in the company’s history, a manager made a mistake that caused the company a substantial monetary loss.

    The manager immediately offered their resignation, but the owner said to them, “Why would I let you go now? I’ve just spent all this money so you could learn a valuable lesson!”

    So yeah, generally, a manager’s knee-jerk reaction to a developer accidentally deleting vital data from production is going to be to fire them in “retaliation”, but if you think about it, the best response is to keep that developer: your data isn’t coming back either way, and this developer has just learned to be a lot more careful in the future. Why would you send them to a potential competitor?



  • Oh hell, you gave me a PTSD flashback!

    It’s the late 90s. My mother suddenly discovers File Explorer on her refurbished commodity Wintel box and decides that all this messy clutter has to go. Never mind that the drive was 80% empty when delivered and I didn’t expect her to come close to filling it before it was replaced. Fortunately I had already backed up everything that looked important or interesting.

    One day she calls from the office, “I don’t need this ‘Windows’ any more, do I?”

    “What? Wait! Don’t do anything!” I walk in and she’s got C:\Windows highlighted and the cursor is hovering over “Delete”.

    “I already have Windows installed on this computer, so I don’t need this any more, do I?” Spoken more as a statement than a question. It took several minutes of forced calm explanation to get her to accept that this “Windows” directory WAS the Windows that’s installed on the machine. She still wasn’t happy that she could see it in File Explorer, though. So untidy!




  • I taught myself programming as a kid in the 80s and 90s, and just got used to diagnostic print statements because it was the first thing that occurred to me and I had no (advanced) books, mentors, teachers, or Internet to tell me any different.

    Then in university one of my lecturers insisted that diagnostic prints are completely unreliable and that we must always use a debugger. He may have overstated the case, but I saw that he had a point when I started working on the university’s time-sharing mainframe systems and found my work constantly being preempted and moved around in memory in the middle of critical sections. Diagnostic prints would disappear, or worse, appear where, in theory, they shouldn’t have been able to, and they would come and go like a restless summer breeze (a rough modern sketch of that kind of problem is at the end of this comment). But for all that the lecturer banged on about debuggers, he hardly taught us anything about how to use them, and they confused the hell out of me, so I made it through the rest of my degree without using debuggers except for one part of one subject (the “learn about debuggers” part).

    Over 20 years later, after a little professional work and a lot of personal projects and making things for other non-coding jobs I’ve had, I still haven’t really used debuggers much. But lately I’ve been forcing myself to use them sometimes, partly to help me pick apart quirks in external libraries that I’m linking, and partly because I’d like to start using superscalar instructions and threading in my programs, and I remember how that sort of thing screwed up my diagnostic prints in university.
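    For what it’s worth, here’s a tiny modern sketch of why print-based tracing gets shaky once more than one thing runs at a time (nothing like the original mainframe setup, just two threads on a desktop OS): the per-thread lines come out in a different order on every run, so the trace can suggest a sequence of events that never actually happened, and anything still sitting in the stdout buffer is simply lost if the process dies first.

        #include <cstdio>
        #include <thread>
        #include <vector>

        // Each thread "traces" its own progress with printf. Run this a few times:
        // the interleaving of thread 0 and thread 1 changes from run to run, so the
        // combined output is not a reliable record of what order things happened in.
        void worker(int id) {
            for (int i = 0; i < 5; ++i)
                std::printf("thread %d: step %d\n", id, i);
        }

        int main() {
            std::vector<std::thread> threads;
            for (int id = 0; id < 2; ++id)
                threads.emplace_back(worker, id);
            for (auto& t : threads)
                t.join();
            // Unbuffered or explicitly flushed logging helps with lost lines, but the
            // ordering problem is inherent to concurrent execution.
        }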