  • c-suite

    CEO, CTO, CFO, etc. In a '90s Internet startup like the company I worked for, the “C” really stood for “clueless”.

    giant printouts of insanely over-normalized databases

    Over-normalization is a database thing. A simple example of normalization: instead of a “People” table whose “Salutation” field just contains text like Mr., Mrs., etc., you have a separate “Salutations” table with all the possibilities listed and keyed with an ID (usually just a sequential number), and the “People” table stores a Salutation ID for each entry instead of the actual text. It’s a valid and standard part of database design, but it can be taken to extremes where absolutely every trivial thing that can be normalized is, producing an overcomplicated mess that is extremely difficult to work with programmatically.
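
    For the curious, here's a rough sketch of what that looks like in practice - just Python's built-in sqlite3, using the table and column names from my example above, not anybody's production schema:

        import sqlite3

        # In-memory database, purely for illustration.
        con = sqlite3.connect(":memory:")
        cur = con.cursor()

        # Normalized: the salutation text lives in its own table, keyed by ID.
        cur.execute("CREATE TABLE Salutations (id INTEGER PRIMARY KEY, text TEXT)")
        cur.executemany("INSERT INTO Salutations VALUES (?, ?)",
                        [(1, "Mr."), (2, "Mrs."), (3, "Dr.")])

        # People stores only a Salutation ID, not the text itself.
        cur.execute("""CREATE TABLE People (
                           id INTEGER PRIMARY KEY,
                           name TEXT,
                           salutation_id INTEGER REFERENCES Salutations(id))""")
        cur.execute("INSERT INTO People VALUES (1, 'Smith', 2)")

        # Getting the text back out now takes a join.
        for row in cur.execute("""SELECT s.text, p.name FROM People p
                                  JOIN Salutations s ON s.id = p.salutation_id"""):
            print(row)  # ('Mrs.', 'Smith')

    Now imagine every last trivial column handled that way and you can see how the joins pile up.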

    Printing out this over-normalized mess of a database on multiple sheets of paper which are then taped to the wall is utterly useless.

    How is a database a trick?

    The printout is the trick - it fools the bosses into thinking you’re doing something amazing and productive when you’re really just fucking around. It only works on the technically incompetent, of which there was no shortage in '90s Internet startups (or today).


  • My main experience using C++ was because I got stuck modifying an app built with Qt, an utterly insane cross-platform framework (Qt Creator is its IDE) that used (still uses? I dunno, only people in Finland ever used it in the first place) C++ for the under-the-hood processing and QML, a JavaScript-flavored language, for the UI. For good measure, the application developers had modified all the C++ stuff with macros to the point where it was barely even recognizable as C++. Fortunately, it mattered not at all, because the app’s customers were ISPs who just wanted a Skype clone so they could say they had one, even though none of their customers ever used the damn thing.


  • The one thing that stands out to me the most is that programmatic “neurons” are basically passive units that weigh their inputs and decide whether or not to fire. The whole net is exposed to the input, the firing decisions are worked through the net, and whatever comes out the other end is the output. In biological neural nets, most neurons are always firing at some rate, and input from pre-synaptic neurons raises or lowers that rate, so in a sense the passed information is coded as a change in rate rather than as the all-or-nothing fire/don’t-fire decision of (most) programmatic neurons. Implementing something like this in code would be more complicated (there’s a rough sketch of the contrast below), but it could produce something much more like a living organism, which is always doing something rather than passively waiting for an input to produce some output.

    And TBF there probably are a lot of people doing this kind of thing, but if so they don’t get much press.
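
    To make the contrast concrete, here's a toy sketch - my own made-up functions, not any real library's API:

        import math

        def threshold_neuron(inputs, weights, bias=0.0):
            # Standard artificial neuron: weigh the inputs, fire or don't.
            total = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1.0 if total > 0 else 0.0  # all-or-nothing output

        def rate_neuron(inputs, weights, base_rate=10.0, gain=5.0):
            # Rate-coded neuron: always firing; inputs just shift the rate.
            # Returns a firing rate in spikes/second, not a 0/1 decision.
            drive = sum(x * w for x, w in zip(inputs, weights))
            return base_rate + gain * math.tanh(drive)  # positive and bounded

        inputs, weights = [0.2, -0.7, 0.5], [1.0, 0.8, -0.3]
        print(threshold_neuron(inputs, weights))  # 0.0 - silent
        print(rate_neuron(inputs, weights))       # ~7.65 - still firing, just slower

    In a real rate-coded net you'd run this in a loop over time so the rates keep shifting, which is exactly the “always doing something” part.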




  • Neural networks are based on an oversimplified model of neuron cells.

    As a programmer who has studied neuroanatomy and the structure/function of neurons themselves, I remain astonished at how unlike real biological nervous systems computer neural networks still are. It’s as if the whole field were based on one person’s poor understanding of the state of biological knowledge in the late 1970s. That doesn’t mean it isn’t effective in some ways as it is, but you’d think there’d be more experimentation with neural networks based on current biological knowledge.
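
    For a taste of the more biologically grounded end of things, here's a minimal leaky integrate-and-fire neuron - a standard spiking-neuron model from computational neuroscience, sketched in plain Python with made-up parameter values:

        def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-65e-3,
                         v_thresh=-50e-3, v_reset=-70e-3, resistance=1e7):
            # Return the spike times (seconds) for one second of constant input.
            v, spikes = v_rest, []
            for step in range(int(1.0 / dt)):
                # Voltage leaks back toward rest while the input pushes it up.
                v += (-(v - v_rest) + resistance * input_current) * (dt / tau)
                if v >= v_thresh:           # threshold crossed: spike...
                    spikes.append(step * dt)
                    v = v_reset             # ...then reset and keep integrating
            return spikes

        print(len(simulate_lif(2e-9)), "spikes in one second")

    Models like this do get studied under the name “spiking neural networks”, but they're nowhere near the mainstream.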



  • Years ago I got a copy of MSDN which had apparently been put together by developers who all had giant monitors. On a normal laptop screen none of the text wrapped properly, so every article had a horizontal scrollbar that you had to drag left and right to read every fucking line. I eventually had to start copying the contents into a Notepad instance just to be able to read the damn things normally.

    This is why I think developers should always have to work on 10-year-old laptops with 800x600 screens.