
This was one of those big eye-opening moments for me. Consultants are hired mercenaries in corporate warfare; they don't care about you, your company, or the rivalries and squabbling. You pay them a bunch of money to come run roughshod over your enemies by producing reams of analysis and PowerPoints, to fling the arrows of jargon, and to lay siege to your enemies' employees by endlessly trapping them in meetings, and then they depart.

Consultants are brought in to secure your flank, to provide air cover and to act as disposable pawns in interoffice combat.

They are not brought in to solve problems, to find solutions, or because of their incredible acumen. It's because they have no loyalty or love except for money.


Hmm, re-reading your post and thinking more about the specifics, I think the next steps vary a lot depending on your personal experience and resources.

You can basically break a "software engineering" degree down to three components - writing code, "core theory", and "specialist knowledge".

You may already have "writing code" down well enough for entry level gigs; if not, side projects are a fine way to get you there. Look over job postings that interest you, see what languages they ask for, try writing some code in that language.

For "core theory", there are three basic classes you'd ordinarily take: algorithms/data structures, an introduction to compilers, and an introduction to operating systems.

If you've taken these courses already while in your current degree program, great, you're done. Don't worry about all of the extra specialist knowledge, you can pick it up later if you need or want it. You'll know as much as any other entry level generic software engineer.

Algorithms/data structures tells you why some code is fast and some is slow, and the theory behind fast code. Compilers teaches you how programs work under the hood, how the computer actually interprets and executes your programs. Operating systems teaches you how the computer as a whole works: what happens when you write to a file or a socket or a screen.
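
To make the algorithms/data structures point concrete, here's a toy sketch of my own (not from any particular course): the same one-line membership test is O(n) on a Python list but O(1) on average on a set, and you can feel the difference immediately.

    import time

    n = 1_000_000
    as_list = list(range(n))
    as_set = set(as_list)

    # Same-looking operation, very different cost: a list is scanned element
    # by element, while a set hashes straight to the right bucket.
    t0 = time.perf_counter()
    _ = all(x in as_list for x in range(0, n, n // 100))   # 100 list lookups
    t1 = time.perf_counter()
    _ = all(x in as_set for x in range(0, n, n // 100))    # 100 set lookups
    t2 = time.perf_counter()

    print(f"list: {t1 - t0:.3f}s   set: {t2 - t1:.6f}s")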

You don't need to be able to write a compiler or an operating system from scratch to be a good generic software engineer, but having a rough overview of how the entire system works is one of the things that elevates you past being "just" a self-taught coder. It helps you understand what your code is doing as well as how to write it.

If you haven't been exposed to the "core theory", you're probably best off just trying to pick up books or (free) online courses on the subject. Unless you have fantastic financial resources, you shouldn't try to change your degree or get extra formal schooling in the subject. As long as you can write code, there will be people happy to hire you as a SWE, so you don't even have to wait to master the core theory to start applying to jobs.

Start with algorithms/data structures. It's the most useful, and it's a good opportunity to get more practice writing code, especially in new languages.


Yes, arXiv generally doesn't accept PDFs, preferring instead the TeX source (which is amusing when people don't realize their comments show up). It uses something called AutoTeX, which has a few quirks (e.g. all images have to be in the same dir, etc.).

Here is the Makefile I use; it also generates a .tar.gz for the arXiv (obviously that won't help with Overleaf without cloning the project first, but anyway):

    # Figures and TeX sources the paper depends on
    FIGURES=$(wildcard figs/*)
    TEXFILES=main.tex included.tex included2.tex ...

    # Normal local build
    main.pdf: $(TEXFILES) $(FIGURES) main.bib
        latexmk -pdf -g $<

    .PHONY: clean show

    clean:
        latexmk -C
        rm -rf forarxiv*

    # Tarball for upload; the .bib is excluded since arXiv compiles from the .bbl
    forarxiv.tar.gz: forarxiv/main.tex forarxiv.pdf
        rm -f forarxiv.tar.gz
        cd forarxiv && tar --exclude=*.bib -cvzf ../forarxiv.tar.gz *

    forarxiv:
        mkdir -p $@

    # Flatten all \input'd files into a single .tex and strip the figs/ prefix,
    # since AutoTeX wants every image in the same dir as the source
    forarxiv/main.tex: main.tex main.bib | forarxiv
        latexpand --empty-comments $< | sed -e 's#figs/##g' > $@

    forarxiv/main.bib: main.bib
        cp $< $@

    # Test-build the flattened copy before uploading it
    forarxiv.pdf: forarxiv/main.tex $(FIGURES) forarxiv/main.bib
        ln -f $(FIGURES) forarxiv/
        ln -f foo.sty forarxiv/
        ln -f foo.bst forarxiv/
        latexmk -cd -pdf forarxiv/main.tex
        latexmk -cd -c forarxiv/main.tex
        mv forarxiv/main.pdf $@

    show: main.pdf
        xdg-open main.pdf
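
In case the flow isn't obvious: plain make builds main.pdf as usual, while make forarxiv.tar.gz flattens everything with latexpand, test-builds the flattened copy, and leaves a tarball ready to upload. The hard links (ln -f) keep the figures and the .sty/.bst in the forarxiv/ dir in sync without duplicating files.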

Printers. Yeah.

Herewith, my standard advice.

Buy a Brother monochrome laser with duplex, an ethernet port, and BR-Script 3 (their PostScript clone). Even if you're sure you will never need one or more of those features, get them all. Wifi and Bluetooth and NFC are strictly optional, and probably not worthwhile.

If you need color printing, send it to a printing company. There might even be a local one. It will be done at a higher quality, with better ink and good paper, than you can do at your office or house -- unless you are big enough to utilize a whole flock of printers, or you are a professional. If you are a pro, you don't need this advice.


The popular factoid is correct, but the confusion here is that these are different measurements. Human and chimp genomes are similar in the sense that if you align all the bases [A, C, T, G] that can be unambiguously aligned between the two genomes, 98.8% of those bases are identical. For modern humans versus Neanderthals, that number is 99.7%, and between two random modern humans it would be ~99.9% on average.
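
To make that first kind of measurement concrete, here's a toy sketch (my own illustration with made-up sequences, not real data): "percent identity" just means lining the two sequences up and counting matching bases at the positions that align unambiguously.

    # Made-up 20-base "aligned" sequences; '-' marks a position that doesn't align
    human = "ACTGACTGACTGACTGACTG"
    chimp = "ACTGACTGACTGTCTGAC-G"

    # Keep only columns where both sequences have an unambiguous base
    pairs = [(h, c) for h, c in zip(human, chimp) if h != "-" and c != "-"]
    identical = sum(h == c for h, c in pairs)
    print(f"{identical / len(pairs):.1%} identical at aligned bases")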

This paper is asking a subtly different question - how much of the modern human genome is strictly human, not by simply lining up bases and running a diff, but by looking at the inheritance of chunks of DNA ("haplotype blocks", size determined by processes of recombination, etc.) to try to understand how much and which regions of the modern human genome came from interbreeding with Neanderthals or Denisovans. There was variation in the pre-human population before the human/Neanderthal split, which means that if you compare just a single human to a single Neanderthal, you'll find variants unique to each. However, most of those variants will have existed in both the human and Neanderthal populations, so they should count as neither uniquely human nor uniquely Neanderthal (known as Incomplete Lineage Sorting, or ILS).

The chunks in modern humans that derive from Neanderthals or Denisovans are different in different people and vary broadly across population groups (e.g. highest percent introgressed in Melanesians, lowest in Africans). But across all the modern humans in the study, there are regions where Neanderthal/Denisovan inheritance or shared variation (ILS) was never seen - that's 7% of the genome (the "deserts"). And just 1.5% of the genome was in chunks where modern humans commonly have a unique mutation compared to Denisovans/Neanderthals.


> that makes a strong implication towards a many world interpretation

You say that like it's a shortcoming. :)

There are many who take the (very reasonable) position that the many worlds interpretation is the most epistemologically parsimonious one. Contrary to some misunderstandings of it, it doesn't "add" extra worlds; it removes the concept of "wave function collapse", and leaves all the other known laws of quantum mechanics completely unchanged. The "worlds" arise naturally as more and more particles in the environment become entangled with the measured system, and "wave function collapse" turns out to be the predicted observation of an observer who is themselves made out of quantum states.

The only difference between many worlds and the "standard" Copenhagen interpretation is that Copenhagen adds that, at some point, the entanglement process stops, and a bunch of states in the wave function disappear. And it doesn't specify how, or why, or how to calculate when it will happen. Those that advocate for many worlds would point out that this extra epistemological burden is questionable, given that the correct prediction is made without it.


Thank you for pointing this out. I live in the area and showed up for the pickets when they fired all of their warehouse employees for complaining about unsafe working conditions.

If possible, use Adorama or some competitor.


These days I avoid Amazon where possible since so often it's just cheap Chinese knockoffs or obviously returned products sold as new.

B&H has been a godsend for tech, especially since there's no tax with their card. Crutchfield/Headphones.com has been great for speaker and audio gear. West Elm has consistently premium quality for kitchen, home, and furniture items (though furniture is a story of its own, with even better vendors). Walmart/Target/BestBuy have been good for everything else.

If you're too lazy to figure out for yourself which products are quality, Wirecutter, NYMag, and Consumer Reports all perform unbiased testing of multiple products in almost every product segment I can think of.

And for simply next-level quality, nothing beats DIY. Personalize the final product exactly to your specifications, choosing the highest quality or even custom-machined parts with zero cost cutting. Requires time and passion, however.

