From the original interview:
"The only advantage you had over a newcomer was that you were prepared to read the manual."
Still so true today. I am always astonished by how much time some programmers waste by not reading manuals, and how much of a competitive advantage even basic reading comprehension and the willingness to apply it is.
Variants of large-scale passive radar (http://en.wikipedia.org/wiki/Passive_radar) were supposed to beat first-generation stealth.
I suppose it's a closely guarded secret if it still does for the more modern systems.
This also disrupts the standard tactic of taking out the radars first so that non-stealthy planes can follow.
There is also some research on high-frequency radar and its ability to overcome stealth.
The Soviet SA-3/S-125 Neva surface-to-air missile system is rumored to have something that resembles such a radar. The Yugoslav army used one to shoot down an American first-generation stealth fighter in 1999 [1]. It may have been the radar, but there are also reports that the bomb-bay doors were open, raising the plane's radar signature.
High-frequency radar is also very vulnerable to clutter. The versions known to exist can only be used if the plane is flying above the missile battery: if the radar illuminates the plane in such a way that there is a mountain or other terrain behind it, there is too much noise to make out the plane. But breakthroughs in digital, computer-based systems are expected to change this in the future.
They optimize away unnecessary monitoring inside the traces once the types have been propagated there, so only slow-path code does the monitoring. All the active traces should reach stable types, without extra monitoring, at some point.
Essentially it's moving from doing type propagation in "batch" to a "dynamic trigger" system. How effective that is really depends on the program (or the programming style) and how often types actually change.
I hope it's not just optimized for the usual benchmarks, but for some real code.
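To illustrate the general idea, here is a hypothetical C sketch (not the engine's actual code): the compiled trace guards on the expected type once at entry and contains no monitoring, while the slow path keeps recording types for the compiler.

    typedef enum { T_INT, T_DOUBLE, T_OBJECT } Tag;

    typedef struct {
        Tag tag;
        union { long i; double d; void *obj; } u;
    } Value;

    /* Slow path: full dynamic dispatch, recording the observed type so
     * the compiler can later specialize the trace. */
    static long slow_path_add(Value a, Value b, Tag *site_profile) {
        *site_profile = a.tag;        /* monitoring happens only here */
        /* ... dispatch over all type combinations elided ... */
        return a.u.i + b.u.i;         /* simplified */
    }

    /* Hot trace, specialized after the profile stabilized on T_INT:
     * one guard at entry, no monitoring inside the trace body. */
    static long trace_add(Value a, Value b, Tag *site_profile) {
        if (a.tag != T_INT || b.tag != T_INT)          /* type guard */
            return slow_path_add(a, b, site_profile);  /* side exit */
        return a.u.i + b.u.i;                          /* fast path */
    }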
Apparently that money gets spent on all kinds of things, just not the actual education the students need. At >$30k a year you could really expect them to hire enough real teachers.
It seems strange to me that schools can have folks with titles like "Dean of Community Outreach" but they feel compelled to wring cutting-edge research and excellent teaching out of the same people. I'd like to see the proof that world-class teachers and world-class researchers are the exact same set.
Fewer jobs for lawyers generally seems like a good thing to me. Like investment bankers, lawyers do not create much (any?) value, so this is a good sign for society. There may be some hope left.
Now, it's not good for those who wasted a lot of money on law school. They made a bad decision and will have to adjust.
And yet there are more and more law schools gleefully suckering new students into the major and getting rich off of it, while not being able to provide a return on the investment. That's a very bad thing.
Lawyers as a whole are not a symbol of malaise. It really depends on what kind of lawyers are around. If you increase regulation in an industry, companies in that industry are going to need to hire lawyers to advise them on how to comply. You're also going to need people familiar enough with the law to write good regulatory legislation.
You want startups to make deals with each other and to receive investments? You're going to want lawyers who can sort out the terms and put them in writing in a way that reflects the intent of all stakeholders. So much misery and so many broken friendships between co-founders could easily have been avoided by using contracts instead of handshakes.
If you're investing in a company, it's likely you'll want a lawyer to perform due diligence on the company's current contractual obligations.
Really good point. That's what always annoys me about most formulas: I have to dig back several paragraphs (with no scoping/indexing tool) to figure out what the individual variables mean.
And a lot of papers don't even bother to explain some variables, just assuming the reader already knows them.
One of the earlier commenters made this mistake very prominently while enthusing over Maxwell's equations.
Yes, of course the formula looks better when it's shorter, if you already know what it means. But that completely misses the point: it was written down to explain something to someone who doesn't already know what it means. If you already do, you're the wrong target audience.
Longer variable names would help. For online papers, a tooltip that opens the paragraph explaining what a variable means when you hover over it would also help a lot.
Or maybe some color coding to make variables easy to find (there was a recent link here explaining the FFT which used this trick very successfully).
But I guess most mathematicians don't bother because they only write for a small circle of colleagues anyway.
Maxwell's equations were not "written down to explain it to someone who doesn't already know what it means". They were, quite simply, written down to express the relationships between the motion of charge and the resulting behavior of electric and magnetic fields.
You have to understand the physics behind the equations (what's charge? what's an electric field?) as well as the mathematical structure (what's a vector space? what's a line integral?) in order to make use of them. They are equations used to answer questions like "What happens to the strength of this magnetic field if I increase the velocity of the charged particle generating it?" They aren't meant to be explanatory (but have the beautiful side effect of explaining how light propagates in a vacuum).
I agree with you. A book that aims to TEACH should provide the most common form of the equation and provide detailed annotations of what everything is. (Wikipedia should be like this as well since I know no one is using it as a reference).
For a REFERENCE, you can just list the equations in their most commonly used form.
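As a sketch of what an annotated, teaching-oriented presentation could look like (Gauss's law picked arbitrarily as the example), in LaTeX:

    % Gauss's law, with every symbol spelled out for a first-time reader.
    \[
      \oint_{\partial V} \mathbf{E} \cdot d\mathbf{A} \;=\; \frac{Q}{\varepsilon_0}
    \]
    where
    \[
      \begin{aligned}
        \mathbf{E}    &: \text{electric field on the surface} \\
        d\mathbf{A}   &: \text{outward-pointing surface area element} \\
        \partial V    &: \text{closed surface bounding the volume } V \\
        Q             &: \text{total charge enclosed in } V \\
        \varepsilon_0 &: \text{vacuum permittivity}
      \end{aligned}
    \]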
Don't use an off-the-shelf database. It's overkill for this.
This only needs a single-table key->value store where every transaction is a single access. I would just use a hash table mapped directly onto the raw disk. It doesn't need transactions with a log if done right.
The hash spreads out the allocation, so the allocation pattern leaks no information. If you don't have a log, there are no timestamps.
Modern disks are big enough that you don't need to worry about resizing the hash table. And the random salt makes collisions unlikely enough.
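A minimal sketch of what I mean, assuming fixed-size records on a raw block device (a real version would also store the key in each slot to detect the unlikely collisions):

    #include <stdint.h>
    #include <unistd.h>

    #define SLOT_SIZE 4096              /* one record per disk block */
    #define NUM_SLOTS (1ULL << 24)      /* fixed; modern disks are big enough */

    static uint64_t salt;               /* random, chosen once at format time */

    /* Salted FNV-1a: slot positions leak nothing about the keys. */
    static uint64_t slot_of(const void *key, size_t len) {
        const unsigned char *p = key;
        uint64_t h = 14695981039346656037ULL ^ salt;
        while (len--) { h ^= *p++; h *= 1099511628211ULL; }
        return h % NUM_SLOTS;
    }

    /* Every "transaction" is a single disk access: one pread or pwrite. */
    static int kv_get(int fd, const void *key, size_t klen, void *rec) {
        off_t off = (off_t)slot_of(key, klen) * SLOT_SIZE;
        return pread(fd, rec, SLOT_SIZE, off) == SLOT_SIZE ? 0 : -1;
    }

    static int kv_put(int fd, const void *key, size_t klen, const void *rec) {
        off_t off = (off_t)slot_of(key, klen) * SLOT_SIZE;
        return pwrite(fd, rec, SLOT_SIZE, off) == SLOT_SIZE ? 0 : -1;
    }

Open the raw device (e.g. a dedicated partition) with open(2) and pass the fd in; since there is no log, a crash can at worst tear the one record being written.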
Don't do your own locking. Let the caller pass in state. Push locking to the caller. Don't have your own global state that would need hidden locks, but instead let the caller handle it with arguments.
That is similar to how the STL does it, but not like stdio.
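A toy sketch of the pattern (names made up): the library keeps all its state in a struct the caller owns, so the caller decides the locking policy.

    #include <pthread.h>

    /* The library: no globals, no hidden locks; all state is passed in. */
    typedef struct { long count; } counter_t;

    static void counter_add(counter_t *c, long n) { c->count += n; }

    /* The caller owns the state, so it also owns the locking policy.
     * It could equally keep one counter per thread and use no lock. */
    typedef struct {
        counter_t       ctr;
        pthread_mutex_t lock;
    } shared_counter_t;

    static void shared_add(shared_counter_t *s, long n) {
        pthread_mutex_lock(&s->lock);
        counter_add(&s->ctr, n);
        pthread_mutex_unlock(&s->lock);
    }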
I'm not sure I fully agree.
- For simple libraries it's likely good advice.
- But it encourages big locks and poor scaling. It may be right for desktop apps, but not necessarily for server code that needs to scale. For some things that's fine, but you don't want it for the big tree or hash table that your multi-threaded server is built around.
- It doesn't really avoid the problem of locks being non-composable, i.e. that the caller may need to know the order in which locks must be taken to avoid deadlock; it just pushes that problem to someone else (see the sketch below). However, if you make sure the library is always a leaf and never calls back, the library's locks will generally sit at the bottom of the lock hierarchy.
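Whoever ends up owning that problem still needs some fixed convention; one common sketch is to order lock acquisition by address:

    #include <pthread.h>
    #include <stdint.h>

    /* Take two mutexes in a fixed global order (here: by address) so
     * that two threads locking the same pair can never deadlock. */
    static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
        if ((uintptr_t)a > (uintptr_t)b) {
            pthread_mutex_t *t = a; a = b; b = t;
        }
        pthread_mutex_lock(a);
        pthread_mutex_lock(b);
    }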
If this advice is taken the wrong way, then it "just pushes [the locking problem] to someone else", but often locking is a crutch. Sure, there are some programs that have a natural need for a lot of globally mutable state, but not many.
Let's be honest: most multithreaded programs evolve from programs that are more-or-less single threaded. Then threads are added in an attempt to improve performance, and high-contention locks are broken into finer-grained locks when profiling shows lock contention in the critical path. I would argue it's better to design for minimal mutable global state from the start. Failing that, it's often better to re-factor the code when you start scaling up the number of threads, before you invest a lot of time in locking and in breaking your big locks into finer and finer grained locks.
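For example (a sketch), a shared counter behind a lock can often be refactored into per-thread state with a single merge, removing the contention entirely:

    #include <pthread.h>
    #include <stddef.h>

    #define NTHREADS 8
    #define N        1000000L

    /* Per-thread slot, padded to a cache line to avoid false sharing. */
    struct slot { long sum; char pad[64 - sizeof(long)]; };
    static struct slot slots[NTHREADS];

    static void *worker(void *arg) {
        struct slot *me = arg;
        for (long i = 0; i < N; i++)
            me->sum += i;           /* private state: no lock needed */
        return NULL;
    }

    static long total(void) {
        pthread_t tid[NTHREADS];
        long t = 0;
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, &slots[i]);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        for (int i = 0; i < NTHREADS; i++)
            t += slots[i].sum;      /* single merge at the end */
        return t;
    }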
I'm sure you're not one of those programmers who often leans on mutexes/semaphores/etc. as a crutch to prop up poor design, but there are a lot of programmers who do.