I think the problem is simply that for a large part of the playerbase, increasing your APM is directly correlated with increasing your win rate/ranking.
And frankly, that's not fun for a lot of people.
I don't want to win by clicking and mashing hotkeys like a schizophrenic on speed.
I don't think this is true. Granted, last time I tried to get good at an RTS was toward the end of the Brood War era but the established wisdom at that time was very clear that hour-for-hour, time spent practicing resource management was much more effective than time spent practicing clicking quickly.
Yes, really good players click fast, but they also have impeccable resource management. The group I played with did run the obvious experiment: the best one of us was forced to play against the rest (one at a time) with an artificial click frequency limit. He felt like his abilities were greatly reduced, but he still beat everyone else quite easily.
Yeah, I played a lot of StarCraft 2: by myself, 2v2 with a really talented friend, and 3v3 with two other friends who were total beginners, whom I could beat 1v2.
At the bottom to upper mid level all you need to win is to figure out the macro game of building construction while also getting enough workers and units. With enough of that no micro is needed, just attack-moving into the enemy is more than enough.
Then at the upper mid level you're going to run into people who often don't build as effectively but they'll micro every unit or they'll be constantly doing raids when you don't expect it, scouting better than you and/or just understanding which units are better vs which so as to counter you.
From that point on it becomes much more of an effort to play the game, because then you need to become better in all of those fields while also becoming faster. But to be honest, that point is probably two-thirds of the way up the tree of all the people playing.
When people complain about APM in an RTS like StarCraft, they’re really not complaining about the spam clicking done by players at the pro level. They’re talking about multitasking which is an essential skill at all levels of the game.
Not even at the lowest rankings are you permitted to ignore what your opponent is doing and focus on building workers and base facilities. StarCraft is infamous for the ability of anyone to sacrifice their economy to perform an early rush attack (most infamously with a ton of early zerglings).
To combat early rush attacks you need to be able to multitask: send out early scouts to see what your opponent is doing, if they have any hidden building on the map, how many workers they have, etc. You need to be able to do this while building your own workers, base facilities, and units for defence. This is the multitasking that so many struggle with and it’s required to be able to play at the most basic level!
Optimally queueing SCVs and marines and supply depots requires an APM of 11 or so in the early stages of a Brood War game on the fastest setting. Add a couple more APM for scouting, and we're still not talking crazy levels of multitasking.
Dealing with your opponent is a fact of every strategy game!
And yet if you watch low level players they’ll be fine with that until a bunch of zerglings show up at their base and then they panic trying to micro marines and repair bunkers while their minerals shoot up to 1000 and then they have no units and lose.
Keeping a scouting SCV alive in your opponent’s base while building more SCVs at home, building more barracks, building supply depots, killing the enemy scouting worker, and actually reading and correctly interpreting what your opponent is doing is non-trivial.
>This turns navigation into muscle memory. Cmd-2 is not "Switch to Terminal"; Cmd-2 is just the physical reflex of "I want to code." I don't look. I just hit the key combination, and the active workspace changes.
What happens when some app (like, say, the browser) binds Cmd+<number>? If I hit Cmd+2 right now it'd switch me to the second tab in firefox. Seems like a pain to have to rebind everything.
It is a kids' show. The main character's outfit is modelled after Japan's iconic recovery workers (stark orange and blue), a tribute to their heroics echoed in fiction.
This character can clone himself hundreds of times to help others, with art often mirroring the thousands of recovery workers seen in actual event footage.
My comment was intended to link the image of childhood heroes back to these corporeal, selfless adults.
It's a technique to temporarily make one or more duplicates of your body which can move independently and have your memories/abilities. A strong enough hit will dispel them, or the user can do it manually, after which the memories of what the clones did return to the user.
The usage here by GP might just be because everyone looks/is-dressed the same and is working in unison, and since they're Japanese, anime comes to mind. In the show, Naruto often uses shadow clones to pull off more complex techniques: throwing himself, having them take turns punching/kicking, or, in the case of the rasengan, dividing the work of controlling the ball of chakra since he struggled to do it successfully by himself.
The reference is that the anime character "Naruto"[0] wears the same colors and roughly the same uniform as a Japanese recovery worker[1].
During disaster work, you see swarms of recovery workers and the joke/reference being made is that this looks like Naruto doing a "shadow clone" technique.
I use Halloy, one of the applications featured in the readme, on the regular. It's great and the UI is very pleasant. I don't enjoy writing rust and very much wish someone would port iced to other languages.
Depends on the company in my experience. I've seen some suppliers that basically just wire up the diagram in Matlab/simulink and hit Autocode. No humans actually touch the C that comes out.
Honestly I think that's probably the correct way to write high reliability code.
You’re joking, right? That autogenerated code is generally garbage and spaghetti code. It was probably the reason for Toyota's unintended acceleration glitch.
In the case of the Toyota/Denso mess, the code in question had both auto-generated and hand-written elements, including places where the autogenerated code had been modified by hand later. That is the worst place to be, where you no longer have whatever structure and/or guarantees the code gen might provide, but you also don't have the structure and choices that a good SWE team would have to develop that level of complexity by hand.
The Toyota code was a case of truly abysmal software development methodology. The resultant code they released was so bad that neither NASA, nor Barr, nor Koopman could successfully decipher it. (Although Barr posited that the issue was VERY LIKELY in one of a few places with complex multithreaded interactions.)
Therein lies the clue. They wrote software that was simply unmaintainable. Autogenerated code isn't any better.
This isn't necessarily a problem if you don't consider the output to be "source" code. Assembly is also garbage spaghetti code but that doesn't stop you from using a compiler does it?
For control systems like avionics it either passes the suite of tests for certification, or it doesn't. Whether a human could write code that uses less memory is simply not important. In the event the autocode isn't performant enough to run on the box you just spec a faster chip or more memory.
I’m sorry, but I disagree. Building these real-time safety-critical systems is what I do for a living. Once the system is designed and hardware is selected, I agree that if the required tasks fit in the hardware, it’s good to go — there’s no bonus points for leaving memory empty. But the sizing of the system, and even the decomposition of the system to multiple ECUs and the level of integration, depends on how efficient the code is. And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”), so the system design needed to deal with lower-ASIL capable hardware and achieve reliability, at the cost of system complexity, at a higher level. Today doing that in a safety processor is possible for hand-written code, but still marginal for autogen code, meaning that if you want to allow for the bloat of code gen you’ll pay for it at the system level.
>And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”)
The idea that processors from the last decade were slower than those available today isn't a novel or interesting revelation.
All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.
50+ years of off by ones and use after frees should have disabused us of the hubristic notion that humans can write safe code. We demonstrably can't.
In any other problem domain, if our bodies can't do something we use a tool. This is why we invented axes, screwdrivers, and forklifts.
But for some reason in software there are people who, despite all evidence to the contrary, cling to the absurd notion that people can write safe code.
> All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.
No. It means more than that. There's a cross-product here. On one axis, you have "resources needed", higher for code gen. On another axis, you have "available hardware safety features." If the higher resources needed for code gen pushes you to fewer hardware safety features available at that performance bucket, then you're stuck with a more complex safety concept, pushing the overall system complexity up.

The choice isn't "code gen, with corresponding hopefully better tool safety, and more hardware cost" vs. "hand written code, with human-written bugs that need to be mitigated by test processes, and less hardware cost." It's "code gen, better tool safety, more system complexity, much much larger test matrix for fault injection" vs. "human-written code, human-written bugs, but an overall much simpler system."

And while it is possible to discuss systems that are so simple that safety processors can be used either way, or systems so complex that non-safety processors must be used either way... in my experience, there are real, interesting, and relevant systems over the past decade that are right on the edge.
It's also worth saying that for high-criticality avionics built to DAL B or DAL A via DO-178, the incidence of bugs found in the wild is very, very low. That's accomplished by spending outrageous time (money) on testing, but it's achievable -- defects in real-world avionics systems overwhelmingly are defects in the requirement specifications, not in the implementation, hand-written or not.
Codegen from Matlab/Simulink/whatever is good for proof of concept design. It largely helps engineers who are not very good with coding to hypothesize about different algorithmic approaches. Engineers who actually implement that algorithm in a system that will be deployed are coming from a different group with different domain expertise.
No I'm not joking at all. The Autocode feature generates code that has high fidelity to the model in simulink, and is immensely more reliable than a human.
It is impossible for a simulink model to accidentally type `i > 0` when they meant `i >= 0`, for example. Any human who tells you they have not made this mistake is a liar.
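The slip in question is easy to see in miniature. A minimal C sketch (the function names are mine, not from any real codebase) of the `i > 0` vs `i >= 0` off-by-one in a backwards loop:

```c
#include <assert.h>

/* Classic off-by-one: summing a 4-element array "down to zero".
   With `i > 0` the element at index 0 is silently skipped. */
int sum_wrong(const int *a, int n) {
    int s = 0;
    for (int i = n - 1; i > 0; i--)   /* bug: loop exits before a[0] */
        s += a[i];
    return s;
}

int sum_right(const int *a, int n) {
    int s = 0;
    for (int i = n - 1; i >= 0; i--)  /* visits every element */
        s += a[i];
    return s;
}
```

For `{1, 2, 3, 4}` the buggy version returns 9 instead of 10 — the kind of one-character error a model-to-code pipeline cannot typo into existence, though the model itself can still encode the wrong boundary.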
Unless there was a second uncommanded acceleration problem with Toyotas, my understanding is that it was caused by poor mechanical design of the accelerator pedal that caused it to get stuck on floor mats.
In any case, when we're talking about safety critical control systems like avionics, it's better to abstract away the actual act of typing code into an editor, because it eliminates a potential source of errors. You verify the model at a higher level, and the code is produced in a deterministic manner.
> It is impossible for a simulink model to accidentally type `i > 0` when they meant `i >= 0`
The Simulink Coder tool is a piece of software. It is designed and implemented by humans. It will have bugs.
Autogenerated code is different from human written code. It hits soft spots in the C/C++ compilers.
For example, autogenerated code can have really huge switch statements. You know, larger than the 15-bit branch offset the compiler implementer thought was big enough to handle any switch statement any sane human would ever write? So now the switch jumps backwards instead when trying to get to the correct case statement.
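To make the shape concrete, here is a hedged C sketch (function name and case count are hypothetical, not from any real code generator) that emits the kind of enormous flat dispatch switch a model-to-code tool can produce — the sort of output that can stress fixed-width branch offsets in a compiler backend:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Emit a dispatch switch with `n` cases into `buf` as C source text,
   mimicking the flat state-dispatch structure of generated code.
   Assumes `cap` is large enough for the whole output. */
size_t emit_switch(char *buf, size_t cap, int n) {
    size_t off = 0;
    off += snprintf(buf + off, cap - off, "switch (state) {\n");
    for (int i = 0; i < n; i++)
        off += snprintf(buf + off, cap - off,
                        "case %d: return %d;\n", i, i);
    off += snprintf(buf + off, cap - off, "default: return -1;\n}\n");
    return off;
}
```

A human would never hand-write a 20,000-case switch, so a toolchain tested only against human-scale code may never have exercised that path.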
I'm not saying that Simulink Coder + a C/C++ compiler is bad. It might be better than the "manual coding" options available. But it's not 100% bug free either.
> It is impossible for a simulink model to accidentally type `i > 0` when they meant `i >= 0`
That's a classic bias: comparing A and B, and showing that B doesn't have some of A's flaws. If they are different systems, of course that's true. But it's also true that A doesn't have some of B's flaws. That is, what flaws does Autocode have that humans don't?
The fantasy that machines are infallible - another (implicit) argument in this thread - is just ignorance for any professional in technology.
What's the difference between autogenerated C code and compiling to assembly or machine code? Seems academic to me.
The main flaw of autocode is that a human can't easily read and validate it, so you can't really use it as source code. In my experience, this is one of the biggest flaws of these types of systems. You have to version control the file for whatever proprietary graphical programming software generated the code in the first place, and as much as we like to complain about git, it looks like a miracle by comparison.
> What's the difference between autogenerated C code and compiling to assembly or machine code? Seems academic to me.
It's an interesting question and point, but those are two different things and there is no reason to think you'll get the same results. Why not compile from natural language, if that theory is true?
The C specification is orders of magnitude more complex and is much less defined than assembly. Arguably, the same could be said comparing natural language with C.
I admit that's mostly philosophical. But I think saying 'C can autogenerate reliable assembly, therefore a specification can autogenerate reliable C' is also about two different problems.
That's a nonsensical connection. "Spaghetti code" is a very general term, that's nowhere near specific enough for the two to be related.
"I know for a fact that Italian cooks generate spaghetti, and the deceased's last meal contained spaghetti, therefore an Italian chef must have poisoned him"
SRS is a for-profit corporation whose income comes from lawsuits, so their reports/investigations are tainted by their financial incentive to overstate the significance of their findings.