
You don't need many books on it. There's Petzold.


Programming Windows, 5th Edition by Charles Petzold still commands a price on eBay for its Win32-centric approach to program development. IIRC the 6th edition used C# and .NET, and the 7th and 8th went all in on UWP.


Just noticed that the 5th Edition is still available on Kindle (at least on Amazon Australia) and is a lot cheaper than I expected it to be. The hardcover edition is still very expensive there though.


Petzold is focused on GUI stuff, neglecting other aspects of the API.


Somehow, the examples all sound more like a quality and/or testing issue. The workflow seems prone to people rebasing onto a buggy state, and at that point, in a non-trivial system, all bets are basically off. I need to be able to have a "JTB" (justified true belief) that a pull request has undergone enough review and testing before being merged to master that it doesn't introduce glaring regressions like the ones cited in the examples. If that cannot be ensured, I'm setting myself up for one wild goose chase after another...


Feb. 2016?


> The interesting thing here is that the common response here is: "Hey that's a great theory, but in real life, I just paid off my mortgage with my proceeds from bitcoin."

Mortgages have been paid off with lottery wins. That doesn't make lottery tickets a good investment.


Yeah, that's basically the point: it's a terrible argument to say that it worked.


Maybe it depends how many mortgages it pays off?


Unfortunately, it cannot power the dock. That was a bit of a letdown for me.


Passing 0 to f returns infinity (IEEE 754).

Not arguing the point here. It's just a terrible example.


You are both correct and wrong: I want X/0 to immediately crash (fatal error) before the bad data that caused me to try to divide by zero gets propagated any further. Now that I think about it, I really want X/Y where Y is "close" to 0 to crash too.

Of course I made the above up on the spot, but it is a reasonable thing to do in some situations. Those who know floating point math are well aware that dividing by something close to zero tends to result in very large errors relative to the true answer (particularly if the division is part of a larger calculation).
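To make that concrete, here is a tiny illustration (my own sketch, not from the comment above; the numbers are arbitrary): a small absolute error in a near-zero divisor turns into a huge absolute error in the quotient.

    public class NearZeroDivision {
        public static void main(String[] args) {
            double x = 1.0;
            double yTrue = 1e-8;          // the "true" divisor, very close to zero
            double yNoisy = 1.1e-8;       // the same divisor with a tiny absolute error of 1e-9

            double exact = x / yTrue;     // 1.0e8
            double computed = x / yNoisy; // roughly 9.09e7

            // An absolute error of just 1e-9 in the divisor shifts the quotient
            // by about nine million.
            System.out.println("exact    = " + exact);
            System.out.println("computed = " + computed);
            System.out.println("abs diff = " + Math.abs(exact - computed));
        }
    }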


I thought you'd get NaN, not Inf. Of course I'm probably misremembering.


I think what the above returns depends on the language. In Java I think there is an actual runtime exception for this. The point of the original comment is to show how 100% coverage is easy to achieve but often meaningless.
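For what it's worth, here is a quick Java sketch of the behaviour being discussed (my own example; the f from the earlier comment isn't shown here): floating-point division follows IEEE 754 and yields Infinity or NaN without throwing, while integer division by zero is what actually raises a runtime exception.

    public class DivideByZero {
        public static void main(String[] args) {
            // Floating point follows IEEE 754: no exception, special values instead.
            System.out.println(1.0 / 0.0);   // Infinity
            System.out.println(-1.0 / 0.0);  // -Infinity
            System.out.println(0.0 / 0.0);   // NaN

            // Integer division is where Java actually throws at run time.
            int zero = 0;
            try {
                int x = 1 / zero;
                System.out.println(x);
            } catch (ArithmeticException e) {
                System.out.println("caught: " + e.getMessage()); // "/ by zero"
            }
        }
    }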

There is some other more abstract concept related to the classes of input that can possibly be passed to a method. IMHO 100% coverage and "test first" have done more harm than good to the cause of automated testing.


Just out of curiosity... What is the point in testing all 2^24 possible color values?


The point is being certain I haven't missed an edge case somewhere.

The hard part is the percentage rgb() values, of which there are technically an uncountably infinite number (since any real number in the range 0-100 is a legal percentage value). For those I generate all 16,777,216 integer values, and verify that converting to percentage and back yields the original value.
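As a concrete sketch of that exhaustive round trip (my own illustration; toPercent/fromPercent are hypothetical stand-ins for whatever conversions the real code uses):

    public class RgbRoundTripTest {
        // Hypothetical conversions: an integer channel value in 0-255 maps to a
        // percentage in 0-100 and back.
        static double toPercent(int channel) {
            return channel * 100.0 / 255.0;
        }

        static int fromPercent(double percent) {
            return (int) Math.round(percent * 255.0 / 100.0);
        }

        public static void main(String[] args) {
            // Exhaustively walk all 16,777,216 24-bit colours and check the round trip.
            for (int rgb = 0; rgb < (1 << 24); rgb++) {
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                if (fromPercent(toPercent(r)) != r
                        || fromPercent(toPercent(g)) != g
                        || fromPercent(toPercent(b)) != b) {
                    throw new AssertionError("round trip failed for #" + Integer.toHexString(rgb));
                }
            }
            System.out.println("all 2^24 colours round-tripped");
        }
    }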


Because freelancers get paid sick days?


Not sure how this is designed in this game, but if you want the game to only show a certain portion of the map to clients at a given time, you need to make sure that things the player should not see are never sent to the client. Otherwise, you are always opening the system up to this sort of "cheating". With the game being open source it is easier to do, but this is also what things like "wallhacks" etc. in closed-source games are based on: making information that is available to the client visible to the user.
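In server-authoritative terms the idea is roughly this (purely my own illustration, not how this game or any particular engine actually implements it): the server computes visibility per player and only ever serializes the units that pass that check.

    import java.util.ArrayList;
    import java.util.List;

    // Purely illustrative: the server decides per recipient which units are
    // visible, and only those units are ever sent to that client.
    class Unit {
        final int id, owner, x, y;
        Unit(int id, int owner, int x, int y) {
            this.id = id; this.owner = owner; this.x = x; this.y = y;
        }
    }

    class FogOfWarFilter {
        static final int VISION_RADIUS = 8;

        // A unit is sent to a player only if the player owns it or one of the
        // player's own units can see it. A real engine would use a precomputed
        // visibility map rather than this O(n^2) scan.
        static List<Unit> visibleTo(int playerId, List<Unit> allUnits) {
            List<Unit> visible = new ArrayList<>();
            for (Unit candidate : allUnits) {
                if (candidate.owner == playerId || seenByPlayer(playerId, candidate, allUnits)) {
                    visible.add(candidate);
                }
            }
            return visible;
        }

        private static boolean seenByPlayer(int playerId, Unit candidate, List<Unit> allUnits) {
            for (Unit own : allUnits) {
                if (own.owner != playerId) continue;
                int dx = own.x - candidate.x, dy = own.y - candidate.y;
                if (dx * dx + dy * dy <= VISION_RADIUS * VISION_RADIUS) return true;
            }
            return false;
        }
    }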


I don't know how you're supposed to do that and maintain acceptable performance for scrolling the screen around the battlefield. It's one thing for a game like DotA 2 to only send position updates for units that aren't obscured by fog of war; it's quite another to constantly track each player's scrolling viewport and only send them the objects in that rectangle.


This is unlikely to be based on a camera rectangle. Fog of war, sure, but in most RTS games the camera can jump immediately from a minimap click. Players would complain about popping, and it's doubtful a developer would value cheat prevention over the standard experience.


Warzone has a fog of war; I think people are confusing zooming out with that?

If you zoom out you will still only be able to see the map area you have revealed, and within that area you would only see enemy units if your own units can observe their location.


No. I'm talking about the unit position data which is sent from server to client. If the designers of the game intend for you to have a restricted view of the battlefield that you must scroll around actively and a mini map showing unit positions within the visible range of your army, they are consciously making a choice to restrict the way in which your game client views the data sent to it. By modifying the client to allow you to zoom out, you're bypassing this restriction and gaining a wider view of the same data.


OK, I see what you mean. I wouldn't have considered zooming out like that cheating.


SC2 does it, no problem.


Not true. Map hacks have existed for SC2 since beta. They would not be possible if this technology was used.


What do the map hacks accomplish? Can you see full unit information across the map? I doubt this is the case, except for the game host. For clients, I'd be surprised/disappointed if this were true as it is not costly to filter the information from clients if FoW visibility has already been calculated.


My guess is that the main technical problem preventing this in SC2 is related to the game's replay-saving system. If you have only partial data, your client can't include the data from the opponent's perspective, which makes the replays suck. Theoretically you could generate the replay server-side but it would be resource-intensive without a costly technical overhaul.


Yeah, I'm surprised they didn't design for this. Typically replays (in any game) are a log of initial state and then player input commands, rather than complicated state streams. Input commands are simply replayed in the simulation. It seems possible to keep the master log of input commands and then simply replay the simulation on a client during replay. With this design you could filter inappropriate information from each client during gameplay but have full replay information.
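As a rough sketch of that pattern (my own illustration of the general idea, not SC2's actual replay format): record the initial state, or a seed sufficient to rebuild it, plus every player command with its tick, and reproduce the game by re-running the same deterministic simulation.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: a replay is the initial state plus an ordered command log,
    // reproduced by re-running the same deterministic simulation.
    class Command {
        final int tick, playerId;
        final String action; // e.g. "move unit 12 to (7, 3)"
        Command(int tick, int playerId, String action) {
            this.tick = tick; this.playerId = playerId; this.action = action;
        }
    }

    interface Simulation {
        void reset(long seed);   // rebuild the initial game state
        void apply(Command c);   // feed one player command into the simulation
        void step();             // advance one deterministic tick
        int currentTick();
        boolean finished();
    }

    class Replay {
        final long initialSeed;  // enough to reconstruct the starting state
        final List<Command> commands = new ArrayList<>();

        Replay(long initialSeed) { this.initialSeed = initialSeed; }

        void record(Command c) { commands.add(c); }

        // Because the simulation is deterministic, replaying the same commands at
        // the same ticks reproduces every game state with no per-unit snapshots.
        void play(Simulation sim) {
            sim.reset(initialSeed);
            int next = 0;
            while (!sim.finished()) {
                while (next < commands.size() && commands.get(next).tick == sim.currentTick()) {
                    sim.apply(commands.get(next++));
                }
                sim.step();
            }
        }
    }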

I'm sure there are good reasons why the design in SC2 is as it is, but this was surprising! Thanks for the info.


> Can you see full unit information across the map?

Yes. The engine works by syncing the entire game state between all players (and observers, which in tournaments can become an annoying issue) and all clients have the same information.


Interesting. I'm surprised they took this approach.



Wow, that is crazy! Thanks for the info.


While modems may be dead and buried, Carmack's 1999 response regarding wallhacks, written after reading ESR's essay "The Case of the Quake Cheats", still applies:

>With a sub-100 msec ping and extremely steady latency, it would be possible to force a synchronous update with no extra information at all, but in the world of 200-400 msec latency [and] low bandwidth modems, it just plain wouldn't work.


Most RTS games are simulations, so the client has to have the enemy unit positions in order to conduct its local simulation. In this case the best defense against map hacks (assuming the game is closed source) is frequently updating the game and messing around with the underlying unit data structures or systems each time there's an update. A map hack isn't very effective if it breaks every week or two, requiring more reverse-engineering effort each time.

One notable exception to the local simulation model is Planetary Annihilation, which uses a traditional client/server model as far as I'm aware.


Changing the game doesn't really help unless you also change the network protocol, since a lot of hacks just inspect the network data streams. That makes them harder to check for at runtime, and it is harder to change the network protocol because changes break everybody on the old version.

I think Quake cheats got to the point where a cheat server would MITM the game and shoot automatically and/or auto-aim shots for the user.


One doesn't become a leader just by being called the lead. Some people have personal traits that make them assume a leader role more often than others. In my experience, a team that is constantly looking to one person to call all the shots isn't very empowered. If everyone can take responsibility and own the calls that need to be made, you end up with a far more engaged team. It requires a certain maturity, but a motivated team should be able to grow into such a mode over a couple of weeks or months. Experts will emerge on certain topics, and the team will often look to them to weigh in on those topics. That will all come pretty naturally once a team has passed its initial formation phase.


"In the cases of a disagreement deadlock you need someone to break the deadlock." != "a team that is constantly looking for one person to call all the shots".


A good tech lead actively works to minimize occasions to exercise his role as deadlock breaker. But you can't eliminate that need. If you do, then the first time you have a deadlock will be a disaster, and you may find that a well-functioning team has turned into a massively dysfunctional one.


"In the cases of a disagreement deadlock you need someone to break the deadlock." != "You can eliminate the need for disagreement deadlocking."


"In the cases of a disagreement deadlock you need someone to break the deadlock."

97% of the time, consensus-driven decision making means no deadlock, and the other 3% of the time you can put decisions to a vote.

I've seen more value destroyed by a team lead inappropriately setting the wrong agenda than I have by team members not knowing when to shut up.

If anything I've found that there's often too much consensus (people who just go with the flow rather than voicing an opinion).


> 97% of the time consensus driven decision making means no deadlock and the other 3% of the time you can put decisions to a vote.

That's not how deadlocks work. You can't just vote your way out of them; if you could, you wouldn't be in a deadlock to begin with. You'd just be at the point where a decision needs to be made.

> If anything I've found that there's often too much consensus (people who just go with the flow rather than voicing an opinion).

That's a separate problem. A healthy and functional team needs to be able to trust one another's opinions and provide an environment where everyone feels comfortable speaking their minds. If you don't have those, the value of intra-team communication is comically low.


>That's not how deadlocks work. You can't just vote your way out of them; if you could, you wouldn't be in a deadlock to begin with. You'd just be at the point where a decision needs to be made.

I've seen a number of different scenarios:

1) Consensus after a short discussion (vast majority of cases).

2) Consensus after a drawn out discussion (occasional, usually the discussion is valuable even if it takes a while).

3) Consensus after a drawn out discussion with one or two holdouts who agree to go with the majority opinion under protest (not common).

4) A drawn out discussion where it becomes clear that further discussion is fruitless and a (close) vote makes the decision (very, very rare but it has happened).

I'd say that most of the time the decisions made in one of these 4 scenarios are better than the decisions made unilaterally by a team lead.

Whatever you're referring to as 'deadlock', I'm not sure I've ever seen it, as a team lead or otherwise. What is it?

