Unfortunately this is in the Opinion section, which is known to be a reliably frothing-at-the-mouth, anti-government, Ayn Randian propaganda machine.
Is there any particular reason people don't use plain old xterm? I know that it does not provide tabs, but other than that, what problems do people face when using xterm? I have had UTF-8 support, zoom, and copy-paste working on it forever.
I ignored xterm because the defaults were ugly. Then I figured out how to use a nice font at a reasonable size and added tmux, and I haven't looked back.
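In case it helps anyone else, the font fix amounts to a couple of X resources; a minimal sketch (the font name and size here are just my own preferences, not a recommendation):

```
! ~/.Xresources -- render xterm with a scalable TrueType font
xterm*faceName: DejaVu Sans Mono
xterm*faceSize: 11
! make sure UTF-8 handling is on
xterm*utf8: 1
```

Apply it with `xrdb -merge ~/.Xresources` and start a new xterm.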
Maybe it's just me, but why not wipe your screen with a flannel cloth or even your t-shirt every so often? That would take care of most of the more egregiously straightforward attacks, wouldn't it? This of course assumes that you have a fairly secure number-based key to begin with.
That is primarily because the prevailing wage is calculated from surveys by agencies like erieri and the Cal EDD (in the case of California), which are usually nine months to a year old, so YMMV. If the survey was taken during a boom year and is used in a bust year, the prevailing wages are usually higher than market; the reverse is also true. There is also a selection bias in the data, as these agencies only have data from organizations willing to share it; there is no mandated requirement to do so.
Another thing to keep in mind is that (in theory) prevailing wage data reflects market wages (i.e. those of all types of engineers), not just those of people on a visa. Whether or not this is actually what happens is something I have no idea about.
So in theory the LCA data should be a good starting point for your salary negotiations. However, it should be tempered by current market conditions, location, and most importantly your own position in the negotiations. It's always easier to negotiate from a position of strength. When you are looking to provide for your family (insurance, rent/mortgage, etc.) and are unemployed, negotiating may or may not work depending on how desperate you are.
I have realized that all the kool-aid is quite useless in most situations, and have thus reduced my testing to two simple rules:
- write unit tests for units that are actually complex and do need them
- focus on getting as much coverage as possible from functional and system tests
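As a sketch of what the first rule looks like in practice (Python, pytest-style; `parse_range` is a made-up example of a unit with enough edge cases to deserve its own tests):

```python
def parse_range(spec):
    """Parse a page-range spec like "1-3,7" into a sorted list of ints.

    Enough edge cases (ranges, singletons, empty input) to make
    dedicated unit tests worth writing.
    """
    pages = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-", 1)
            pages.update(range(int(lo), int(hi) + 1))
        elif part:
            pages.add(int(part))
    return sorted(pages)


# pytest-style unit tests: plain functions with asserts, run via `pytest`
def test_mixed():
    assert parse_range("1-3,7") == [1, 2, 3, 7]


def test_empty():
    assert parse_range("") == []
```

Anything simpler than this gets exercised indirectly by the functional and system tests.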
Can we please, please, please stop trying to redesign everything under the sun, especially in the name of eco-friendliness or usability or some other such thing?
That is not to say that designs, decisions, and concepts should not be questioned, but that they should not be questioned for questioning's sake. A lot of thought, time, and effort goes into the design of recognizable brands and widely used things. Just because they are designed by behemoths does not necessarily mean that they don't know what they are doing. On the contrary, I would think they have put a lot of thought into it, and the designs work quite well given the constraints.
So if you are going to challenge it, please have something substantial that operates within the constraints that apply to the product, not just something that looks pretty.
I say the opposite: let's redesign everything under the sun. If it works, we'll use it. If it doesn't, we won't.
Why do people expect simple projects presented on a blog to have all the answers? Is it not possible to just take it at face value and entertain the idea?
A few years ago I came across a vending machine selling bottles of something called "Pocari Sweat". I stared at it: what the hell was a Pocari, and why would I want to drink its sweat? I cocked my head and stared at it some more. Then I fished in my pocket for some coins to buy a bottle. It was alright; kind of fruity.
Novelty can be fun, and it can attract customers. Why not try redesigning things for the fun of it? You don't have to make a New Coke every five months, but there should be a good amount of churn for its own sake.
Neat use of development tools to illustrate a point. But, to be quite blunt, it is very naive in the best case and quite stupid in the worst. We can "ship early, iterate often" in software because quite often it doesn't cost much (certainly a naive way of putting it) to do so. However, that paradigm breaks down even in software when dealing with things like enterprise software or, say, medical-device or other FDA-regulated software. Guess what would happen if the large-scale infrastructure construction industry went with this model?
Think about what it costs an organization to adapt to a new set of regulations. A boatload of new paperwork needs to be set up, ordered, printed, and distributed, after a thorough analysis of the regulations and a long process to determine documentary requirements. If software or electronic records are used, those need to be updated. The new rules need to be communicated to a large number of participating health care providers, with appropriate document and/or software updates. Each of those things needs to be audited and certified. Personnel have to be trained to understand and comply with the new regulations, and potentially certified. Then there would be a period of enforcement where nobody quite understands the limits; those limits would be tested, and the legal system would then clarify the regulations.
And that is just the tip of the iceberg. In such cases the massive one-shot refactor gives you way more bang for the buck than not.
They do... what you see there in the "4,000 page" bill is legislative language - pretty clear code for a lawyer or legislative aide. On the other hand, the Congressional Research Service puts out some pretty good nonpartisan "plain-language" bill summaries (and for anyone who asks "well, why don't they just use those as the laws themselves?", ask yourself "why don't you just write code in English?" - both have the potential for ambiguity).
Incidentally, I put "4,000 page" in quotes because it's a pet peeve... if you actually look at the text, it's set in 24pt font with giant margins - which makes it much easier to read and mark up around a table, but in no way approximates the amount of text people envision when you say "X-thousand pages".
Yeah, but if you compiled it, wouldn't it be faster than being interpreted? And the interpreter here is a human brain, which is much more error-prone than a computer.
http://www.ancient.eu/Ganges/