
I call these features "dead birds" because they remind me of gifts that an outdoor cat will leave on your doorstep. They took quite the effort and were made with good intentions, but ultimately I don't want them.

Some other Internet 'lore' rabbitholes:

* SiIvaGunner (also: Unregistered HyperCam 2): https://siivagunner.fandom.com/wiki/SiIvaGunner_Wiki

* Pronunciation Book / horse_ebooks: https://77days.fandom.com/wiki/Pronunciation_Book_Conspiracy...

* Cicada 3301: http://www.cicada3301.org

I'm cataloguing some of these here: https://href.cool/Stories/Folkmeme

Interested to hear about other favorites of yours!


A few of my favorite print styles, from my personal site:

   h2,h3,h4,h5,h6 {break-after: avoid-page;}
   img, svg, table, canvas {break-inside: avoid;}
   a::after {content: " (" attr(href) ")";}

Explanation:

- Avoid printing section headers at the bottom of one page with the section content left headerless at the top of the next page.

- Prefer printing graphics and figures on whole pages instead of split across pages.

- Print out the URL of every hyperlink instead of having links only as useless underlined text.


Two talks given by Ben Collins-Sussman absolutely changed my career path from being a hot-headed programmer to thinking like a professional engineer.

The Myth of the Genius Programmer: https://www.youtube.com/watch?v=0SARbwvhupQ

The Art of Organizational Manipulation: https://www.youtube.com/watch?v=OTCuYzAw31Y

I rewatch these every few years, or before an interview. Puts me back in the right headspace.

If you're reading this Ben, thank you.


Simulating a game you hate in a language you also hate, does that cancel out or double up?

Back in the '90s I consulted at HBO, and they were migrating from MS Mail on Mac servers to MS Exchange on PCs. Problem was that MS Mail on the Mac had no address book export function, and execs often have thousands or even tens of thousands of contacts. The default solution was for personal assistants to copy out the contacts one by one.

So I experimented with screen hotkey tools. I knew about QuicKeys, but its logic and flow control at the time were somewhat limited. Enter <some program I can't remember the name of>, which had a full programming language.

I wrote and debugged a tool that:

   1. Listened to its own email box: cole.exporter@hbo.com
   2. You emailed it your password (security? what security?)
   3. Seeing such an email, it logged out of its own email and logged in to yours.
   4. Then it opened your address book and copied out entries one by one. 
   5. It couldn't tell by any other method that it had reached the end of your address book, so it stopped once it saw the same contact several times in a row.
   6. Then it formatted your address book into a CSV for importing to Exchange, and emailed it back to you.
   7. It logged out of your account, and back into its own, and resumed waiting for an incoming email.

This had to work for several thousand employees over a few weeks. I had 4 headless pizza-box Macs in my office running this code. Several things could go wrong, since all the code was just assuming that the UI would be the same every time. So while in the "waiting" state I had the Macs "beep" once per minute, and each had a custom beep sound, which was just me saying "one", "two", "three", and "four". So my office had my voice going off on average once every fifteen seconds for several weeks.

Not to be confused with “Jesus Code”. Jesus Code is any code written in a way that only the author and God (Jesus) would know how it works. That is, until the author inevitably forgets, leaving Jesus as the only source of reference.

It is believed that Jesus Code is the reason the second coming of Christ has not yet occurred, as Jesus would be flooded with support requests, Jira tickets, and Zoom meetings. (Some people believe the modern SDLC to be the devil’s way of keeping Jesus from coming.)


> Creating a theme for tabletop role-playing games would take some elbow grease

The article links to several, including a convincing reproduction of basic Wizards of the Coast house style for D&D 5E:

https://github.com/rpgtex/DND-5e-LaTeX-Template

Here's another 5E one with additional sidebar styles:

https://github.com/anoderay/DND-5e-LaTeX-Template/

Also 5E-inspired, with a template for card accessories:

https://github.com/Krozark/RPG-LaTeX-Template

A 5E-compatible character sheet:

https://github.com/matsavage/DND-5e-LaTeX-Character-Sheet-Te...

CTAN also has packages for Basic D&D-inspired typesetting (rpg-module, also linked from the article), GURPS (gurps), generic hex boards (hexboard), and wargame hex boards with counters (wargame).

There are also indie TTRPGs that've shipped using custom LaTeX templates; this one has CC-BY licensed source: https://github.com/ludus-leonis/nipajin

And the blog author's own, with a more restrictive CC BY-NC-SA license: https://github.com/Vladar4/itdr

From personal experience, the biggest struggle is getting text to wrap non-rectangularly around images.


Very recently, Los Alamos National Lab published a report, "An evaluation of risks associated with relying on Fortran for mission critical codes for the next 15 years" [1]. In their summary, they write:

<quote> Our assessment for seven distinct risks associated with continued use of Fortran are that in the next fifteen years:

1. It is very likely that we will be unable to staff Fortran projects with top-rate computer scientists and computer engineers.

2. There is an even chance that we will be unable to staff Fortran projects with top-rate computational scientists and physicists.

3. There is an even chance continued maintenance of Fortran codes will lead to expensive human or financial maintenance costs.

4. It is very unlikely that codes that rely on Fortran will have poor performance on future CPU technologies.

5. It is likely that codes that rely on Fortran will have poor performance for GPU technologies.

6. It is very likely that Fortran will preclude effective use of important advances in computing technology.

7. There is an even chance that Fortran will inhibit introduction of new features or physics that can be introduced with other languages. </quote>

In my view, a language is destined to become a "maintenance language" if all of these are simultaneously true:

1. There is a dearth of people who know the language well.

2. Few people are opting to learn it in their free time, and/or seek it out for a job.

3. Companies do not seriously invest in training people to learn the language.

4. Companies don't pay enough to convince an engineer to use it, when that engineer otherwise loves using other languages and has better prospects with them.

I've experienced unique challenges in hiring Lisp programmers, but the fact that it remains sufficiently interesting to enough software engineers (who are usually good programmers) has been a boon, and likewise providing space to learn it helps even more.

Fortran, though, is coasting on its historical significance and prevalence in HPC. Aside from the plethora of existing and typically inscrutable scientific code, I'm not sure what the big iron imperative language offers over the dozens of other superficially similar choices these days, except beaten-path HPC integration. Scientists are more and more writing their code in C++ and Python, which are certainly more complicated than Fortran but still regarded as premier languages for efficiency and flexibility respectively. Julia has had a few moments, but (anecdotally) it doesn't seem to have taken off with a quorum of hardcore scientist-programmers.

[1] https://permalink.lanl.gov/object/tr?what=info:lanl-repo/lar...


Here is the problem.

Human memory is limited. When you are first excited about something you get lots and lots of fleeting ideas. By the time you are working on the ideas, you have most likely forgotten most of them, and then you are stuck. You no longer know what the next incremental step is, even though you feel it should be obvious.

My most successful project was organized via GitHub issues with a title and no content. To get into the mindset of shipping, all you have to do is gather a bunch of these issues and assign them to a milestone; that is your product, your initial release. It doesn't matter if it is only 1% of the final vision, because you will have many more milestones after this one. After your first milestone, what you should do is get users. If you can't, then you should stop unless you need the software yourself. If a milestone is too big and you can't finish it by the deadline you gave yourself but you finished a significant number of issues, just take some of the issues out and shift them to the next milestone. Shipping will become a regular habit instead of some mysterious event in the far-away future.


It will break some people's hearts to hear this perhaps, but way back in the day, around 1976, when I was at Bell Labs, we came up with a solid-state replacement for the 300Bs and other tubes used in repeaters (the word they used for an amplifier back in those days because it replaced a person who would repeat, a whole other fun story). This solid-state replacement was called a Fetron; it used FETs and was plug-compatible with the tubes. We had armies of people with asbestos gloves pulling out the hot tubes and throwing them in huge trash bins and plugging in the replacement Fetrons. We smashed 300Bs by the thousands back then. "Bunch of old junk!" was the cry.

BTW, the repeater used to be a person's job, mostly women. They would sit in a small booth, and a call from, say, New York would come in; it could not reach Chicago over a simple copper loop, so their job was to listen to what the person in New York said and repeat it into the next loop, and so on until it got to Chicago. The longest loop would be maybe 100 miles. Originally the booth was 2 feet wide, just wide enough for a woman to fit into. It was a repeater bay, with little side walls to isolate the sound somewhat between repeaters. This was before electronic amplifiers.

One day, along comes an efficiency expert from HQ who figures out they can add a couple more repeater bays into the lineup if you just shrunk each one down by an inch to 23 inches. Then later, when electronic amplifiers came along, tube type of course, they replaced the jobs of all these women who were repeaters. Put them out of work. Later came the Fetron. But that is why bays are 23 inches wide to this day. Sort of like the old lore about railroad widths being the same as Roman carriages; this was the lore passed down to me from what were the old timers then. Guess that's me now :)


A commercial NAS (QNAP, this time) vs just a simple Linux box with hard drives. The simple Linux box was the better option. We even wrote a blog post about it: https://www.factorio.com/blog/post/fff-330 (scroll down, it's topic #3. Also, I don't work there anymore)

I built a distraction-free writing app in 2012 to help me write, and it has been the single greatest distraction of all. It was successful, becoming the number one writing app on the Windows store for years. I now have thousands of users, and it makes a tiny bit of money each month, and supporting it and fixing bugs takes up most of my spare time. I have to date not managed to finish more than a sketchy first draft of a novel using the software I wrote to help me write.

Not really a script, but a `.ssh/config` that automatically deploys parts of my local CLI environment to every server I connect to (if the username and IP/hostname match my rules).

On first connect to a server, this syncs all the dotfiles I want to the remote host, and on subsequent connects it updates them.

Idk if this is "special", but I haven't seen anyone else do this really, and it beats, for example, Ansible playbooks by being dead simple.

   Match Host 192.168.123.*,another-example.org,*.example.com User myusername,myotherusername
      ForwardAgent yes
      PermitLocalCommand yes
      LocalCommand rsync -L --exclude .netrwhist --exclude .git --exclude .config/iterm2/AppSupport/ --exclude .vim/bundle/youcompleteme/ -vRrlptze "ssh -o PermitLocalCommand=no" %d/./.screenrc %d/./.gitignore %d/./.bash_profile %d/./.ssh/git_ed25519.pub %d/./.ssh/authorized_keys %d/./.vimrc %d/./.zshrc %d/./.config/iterm2/ %d/./.vim/ %d/./bin/ %d/./.bash/ %r@%n:/home/%r

We went with this approach. Pandas hit GIL limits which made it too slow. Then we moved to Dask and hit GIL limits on the scheduler process. Then we moved to Spark and hit JVM GC slowdowns on the amount of allocated memory. Then we burned it all down and became hermits in the woods.

Telling computers what to do and how to do it is an art, a craft, a practice, a discipline, a medium, a profession, and a science (and probably a few more categories besides). It just isn't (usually) all of those things at once. Most of the difficulties we have in discussions about the subject have to do with category errors. Literate Programming has stylistic, technical, toolchain, and disciplinary aspects, and Knuth's exemplar demonstrated these. It was then critiqued on pragmatic grounds. I'm not sure if this counts as a bait and switch, rope-a-dope, or strawman.

I mean, if I was participating in a computer programming class and given the same problem as an assignment, I would write a program to satisfy the requirements. If then told that I should have written a few lines of shell script instead and given a poor grade, I would be livid at the instructor.

As an aside, I was interested to know if there are LP tools for shell scripting. A cursory search turned this up:

https://github.com/bashup/mdsh


I’m with you generally, but having written some code targeting these instructions from a disinterested third-party perspective, I can say there are big enough differences in the performance or even behavior of some instructions that you can sincerely be driven to inspect the particular CPU model and not just the cpuid bits offered.

Off the top of my head, SSSE3 has a very flexible instruction to permute the 16 bytes of one xmm register at byte granularity using each byte of another xmm register to control the permutation. On many chips this is extremely cheap (eg 1 cycle) and its flexibility suggests certain algorithms that completely tank performance on other machines, eg old mobile x86 chips where it runs in microcode and takes dozens or maybe even hundreds of cycles to retire. There the best solution is to use a sequence of instructions instead of that single permute instruction, often only two or three depending on what you’re up to. And you could certainly just use that replacement sequence everywhere, but if you want the best performance _everywhere_, you need to not only look for that SSSE3 bit but also somehow decide if that permute is fast so you can use it when it is.
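
To make that concrete, here is a hypothetical sketch (a toy example, not from any particular codebase) of the kind of replacement I mean: swapping the two bytes of every 16-bit lane, either with one SSSE3 PSHUFB or with a short SSE2 sequence that is never microcoded.

   #include <emmintrin.h>  /* SSE2 */
   #include <tmmintrin.h>  /* SSSE3 */

   /* SSSE3: a single PSHUFB. Very cheap on most chips, but microcoded
      and painfully slow on some old mobile cores. */
   static __m128i bswap16_ssse3(__m128i x) {
       const __m128i perm = _mm_set_epi8(14, 15, 12, 13, 10, 11, 8, 9,
                                         6, 7, 4, 5, 2, 3, 0, 1);
       return _mm_shuffle_epi8(x, perm);
   }

   /* SSE2 fallback: two shifts and an OR, fast everywhere. */
   static __m128i bswap16_sse2(__m128i x) {
       return _mm_or_si128(_mm_slli_epi16(x, 8), _mm_srli_epi16(x, 8));
   }

On a chip with a fast shuffle unit the first version wins; on the microcoded ones the second is much faster, which is exactly why the feature bit alone doesn't settle the question.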

Much more seriously, Intel and AMD’s instructions sometimes behave differently, within specification. The approximate reciprocal and reciprocal square root instructions are specified loosely enough that they can deliver significantly different results, to the point where an algorithm tuned on Intel to function perfectly might have some intermediate value from one of these approximate instructions end up with a slightly different value on AMD, and before you know it you end up with a number slightly less than zero where you expect zero, a NaN, square root of a negative number, etc. And this sort of slight variation can easily lead to a user-visible bug, a crash, or even an exploitable bug, like a buffer under/overflow. Even exhaustively tested code can fail if it runs on a chip that’s not what you exhaustively tested on. Again, you might just decide to not use these loosely-specified instructions (which I entirely support) but if you’re shooting for the absolute maximum performance, you’ll find yourself tuning the constants of your algorithms up or down a few ulps depending on the particular CPU manufacturer or model.

I’ve even discovered problems when using the high-level C intrinsics that correspond to these instructions across CPUs from the same manufacturer (Intel). AVX-512 provided new versions of these approximations with increased precision, the instruction variants with a “14” in their mnemonic. If using intrinsics, instruction selection is up to your compiler, and you might find that compiling a piece of code targeting AVX2 picks the old low-precision version, while the compiler helpfully picks the new increased-precision instructions when targeting AVX-512. This leads to the same sorts of problems described in the previous paragraph.
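
The usual mitigation, sketched below assuming you can afford a few extra instructions, is to take whatever estimate the hardware gives you and add an explicit Newton-Raphson step. It shrinks the vendor-to-vendor (and RCPPS-versus-RCP14) spread, though it still doesn't guarantee bit-identical results across chips.

   #include <xmmintrin.h>  /* SSE */

   /* Refine the ~12-bit, vendor-dependent RCPPS estimate to ~23 bits
      with one Newton-Raphson iteration: r' = r * (2 - x * r). */
   static __m128 rcp_refined(__m128 x) {
       __m128 r = _mm_rcp_ps(x);
       return _mm_mul_ps(r, _mm_sub_ps(_mm_set1_ps(2.0f),
                                       _mm_mul_ps(x, r)));
   }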

I really wish you could just read cpuid, and for the most part you’re right that it’s the best practice, but for absolutely maximum performance from this sort of code, sometimes you need more information, both for speed and safety. I know this was long-winded, and again, I entirely understand your argument and almost totally agree, but it’s not 100%, more like 100-epsilon%, where that epsilon itself is sadly manufacturer-dependent.
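
For what it’s worth, the extra sniffing looks roughly like this with GCC/Clang’s <cpuid.h> (the family/model decoding below is the simplified common form, so treat it as a sketch rather than a reference):

   #include <cpuid.h>
   #include <stdio.h>
   #include <string.h>

   int main(void) {
       unsigned eax, ebx, ecx, edx;
       char vendor[13] = {0};

       /* Leaf 0: the vendor string lives in EBX, EDX, ECX, in that order. */
       __get_cpuid(0, &eax, &ebx, &ecx, &edx);
       memcpy(vendor + 0, &ebx, 4);
       memcpy(vendor + 4, &edx, 4);
       memcpy(vendor + 8, &ecx, 4);

       /* Leaf 1: feature bits plus family/model -- the extra information
          you need when a feature bit alone can't tell you whether an
          instruction is fast on this particular chip. */
       __get_cpuid(1, &eax, &ebx, &ecx, &edx);
       int ssse3 = (ecx >> 9) & 1;
       unsigned family = ((eax >> 8) & 0xF) + ((eax >> 20) & 0xFF);
       unsigned model = ((eax >> 4) & 0xF) | (((eax >> 16) & 0xF) << 4);
       printf("%s family 0x%x model 0x%x ssse3=%d\n",
              vendor, family, model, ssse3);
       return 0;
   }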

(I have never worked for Intel or AMD. I have been both delighted and disappointed by chips from both of them.)



Ask yourself, "what would Batman want to learn," and learn that.

That's how I ended up with Practical Homicide Investigation: Tactics, Procedures, and Forensic Techniques. It's amazing.


Qt's corporate owners have been openly hostile to the community for a while, and have been trying to get away with shady stuff. The license terms for commercial use are terrible: you need an active license to distribute existing software, not only to develop new software; the prices are excessive and keep increasing; and once you get a commercial use license you have to keep it forever or you won't be able to distribute your existing work. Commercial license users are prohibited from contributing to the community version. They also made registration mandatory for all downloads, even of the GPL/LGPL licensed stuff, and they use that information for Oracle-style license shakedowns. They keep delaying GPL/LGPL releases and have stated an intention to delay them by the maximum permitted amount (one year), meaning all community contributions are effectively blocked because contributors don't have access to the current codebase. While the contract with the Qt community does prevent the Qt Company from going completely closed, they're certainly not someone I'm comfortable with.

How to Talk So Kids Will Listen & Listen So Kids Will Talk, by Adele Faber and Elaine Mazlish.

Stack Overflow co-founder Jeff Atwood recommends this book on his Coding Horror blog[0]. I don't have children, but his statement that a book on talking with children "improved [his] interactions with all human beings from age 2 to 99" intrigued me enough to get the book.

I feel that out of any book I've read, this is the one that's affected my actual behavior the most. The book features a lot of examples that help the reader to internalize the lessons. Also, the lessons are so broadly applicable that you can likely apply them immediately in your life.

It was honestly really amazing to read some pages of the book one night, find an opportunity to apply a lesson from it in a situation the next day, and then see immediate results as the conversation went in a positive direction that I might otherwise have screwed up.

[0]: https://blog.codinghorror.com/how-to-talk-to-human-beings/


The publisher surely has the original tex (or whatever format the author/editor used) file(s) that can easily be rendered to epub/mobi, right?!

Wrong.

Prior to 2000, the MS may never have been submitted electronically, or it might be in an obsolete format (WordPerfect, anyone?). Then, until circa 2008, the book was almost certainly copy-edited on paper -- that is, the editor printed out a paper copy and the copy-editor corrected typos by hand, which were then manually transcribed on a DTP system to produce the final output. The author therefore doesn't have an as-published electronic copy.

To make matters worse, all the big publishers outsourced copy-editing, typesetting, and printing many years ago. The typesetting was probably executed using Quark Publishing System on MacOS by an external agency, who then burned any backups on floppy disk or CDROM. The publishers do not own these, and in fact cannot acquire them without paying the typesetting bureau a three (or four) digit copying fee. So neither the authors nor the publishers own an as-published copy.

These days stuff is typeset in Adobe InDesign, which is a whole lot more ePub-friendly, and which can import Quark files ... except that Quark's format is notoriously idiosyncratic and import ops commonly lose some formatting info (such as italics, font changes, etc.).

TL;DR is that any book published before 2007 or thereabouts may be impossible to republish without either OCR or re-typesetting from scratch.

(TeX is pretty much unheard of among authors outside the science fields, and thanks to M$ churning the MS Word file format repeatedly between 1990 and 2008 it may be difficult to do anything with the original manuscript.)

Going forward the picture is brighter: I have as-published ebook editions of all my books and know how to crack the DRM on them to pull that text out in a legible form, with formatting intact. (Yes, I appreciate the irony of this. Why, just last week I emailed three DRM-cracked novels I downloaded off BitTorrent to an editor. I wrote 'em, the editor has a license to publish them in the UK, so it's legal, but ... the irony! It burns!)



“I remember working on Excel 5. Our original feature list was huge and would have gone way over schedule. Oh my! we thought. Those are all super important features! How can we live without a macro editing wizard?

As it turns out, we had no choice, and we cut what we thought was “to the bone” to make the schedule. Everybody felt unhappy about the cuts. To assuage our feelings, we simply told ourselves that we weren’t cutting the features, we were simply deferring them to Excel 6, since they were less important.

As Excel 5 was nearing completion, I started working on the Excel 6 spec with a colleague, Eric Michelman. We sat down to go through the list of “Excel 6” features that had been cut from the Excel 5 schedule. We were absolutely shocked to see that the list of cut features was the shoddiest list of features you could imagine. Not one of those features was worth doing. I don’t think a single one of them was ever done, even in the next three releases. The process of culling features to fit a schedule was the best thing we could have done.”

— Joel Spolsky, Painless Software Schedules, https://www.joelonsoftware.com/2000/03/29/painless-software-...


December 2015: "We're going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years."

January 2016: "In ~2 years, summon should work anywhere connected by land & not blocked by borders, eg you're in LA and the car is in NY"

June 2016: "I really consider autonomous driving a solved problem, I think we are less than two years away from complete autonomy, safer than humans, but regulations should take at least another year," Musk said.

March 2017: "I think that [you will be able to fall asleep in a tesla] is about two years"

March 2018: "I think probably by end of next year [end of 2019] self-driving will encompass essentially all modes of driving and be at least 100% to 200% safer than a person."

Nov 15, 2018: "Probably technically be able to [self deliver Teslas to customers doors] in about a year then its up to the regulators"

Feb 19 2019: "We will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I'm certain of that. That is not a question mark. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year"

April 12th 2019: "I'd be shocked if not next year, at the latest that having the person, having human intervene will decrease safety. DECREASE! (in response to human supervision and adding driver monitoring system)"

April 22nd 2019: "We expect to be feature complete in self driving this year, and we expect to be confident enough from our standpoint to say that we think people do not need to touch the wheel and can look out the window sometime probably around the second quarter of next year."

April 22nd 2019: “We will have more than one million robotaxis on the road,” Musk said. “A year from now, we’ll have over a million cars with full self-driving, software... everything."

May 9th 2019: "We could have gamed an LA/NY Autopilot journey last year, but when we do it this year, everyone with Tesla Full Self-Driving will be able to do it too"

Dec 1, 2020: “I am extremely confident of achieving full autonomy and releasing it to the Tesla customer base next year. But I think at least some jurisdictions are going to allow full self-driving next year.”

-

Elon’s just been repeating the same promise for over half a decade now. Oldest trick in the book.

Disclaimer: I drive a Model 3


> This journey began some 27 years ago. Amazon was only an idea, and it had no name.

It had a name, and that name was "Cadabra".

It didn't become Amazon until Jeff watched a documentary about the Amazon River. His lawyer had already turned up his nose at "Cadabra", and Jeff was looking for something else.

It's also worth noting that the idea didn't grow over time - Jeff always intended to build something like "Sears for the 21st century". The bookstore was just the way in, not the long term plan.

ps. amazon employee #2


Many years ago (2006 and 2009, respectively) I had to choose European and Asian locations for rsync.net storage arrays.

My primary, overriding criterion in choosing locations was what would be the coolest, most interesting place to visit for installs and maintenance.

I chose Zurich and Hong Kong.

Measured results have exceeded my initial models.


Maybe I can help fill the gaps.

I ran my own computer store with a small IT consultancy attached to it for a few years. Then I chose to pivot and get a "real job". Things change once you're married with a child on the way.

Like many, I started out doing 3-month contract-to-perm jobs. The first contract was as a Linux system administrator at Google in Atlanta, automating the huge fleet of servers there. I learned enough shell scripting to be dangerous, but it was mostly racking and stacking servers and provisioning top-of-rack switches -- hello minicom.

3 months later I was working in tech support, for more money, at a company called Vocalocity, which was early in the VoIP game. That's where I learned how to PXE boot and flash Cisco IP phones to work with our custom Asterisk-based backends. I was there almost a year and then it was time to move on.

This would continue every three months or so. I held jobs at places like Cox Communications working in the NOC during the night shift so I could be home with my daughter. Three to six months later I quit.

I know what you're thinking, this guy jumped around a lot. I had to, money was tight, and it was the fastest way to get a raise, and it also accelerated my learning. Coming from being your own boss it's really hard to get excited about an entry level job and look forward to working your way up the corporate ladder.

My skills really leveled up when I landed a full time job at Peer 1 Web Hosting, where I started in Tech Support working tickets and taking calls helping people with Linux servers, Plesk, and MySQL. It's true, it's always a DNS problem.

Peer 1 is where I really learned how to write code, it started with bash, and eventually Python. I automated the SSL certificate provisioning system, and wrote some scripts that allowed me to close tickets faster than anyone else.

About 6 months later I was promoted to the engineering team and worked on our automated provisioning system for Server Beach, acquired from Rackspace, which was the part of Peer 1 that hosted YouTube before YouTube was bought by Google. Server Beach ran those "Latency Kills" ads to help sell dedicated gaming servers.

That provisioning system was responsible for allowing people to order a server back in the early 2000s from a web form and have it provisioned in less than an hour. We PXE booted servers, configured RAID controllers, and bootstrapped the OS, including Windows, and handed back an IP address and login creds to the larger system.

I was there for over a year before landing a job that would double my salary around 2008, 2009.

I joined the company mentioned in the article, TSYS, where I brought in a lot of automation (thanks, Puppet) and learned enough Java to earn the respect of the broader organization and really help transform the place.

I was a Red Hat Certified Engineer (RHCE) from my days at Peer 1 and I leveraged that set of skills to package all the production applications into fat RPMs (Java, JBoss, and all the war files required to make it work) in the same way we use containers today. I also revamped the CI/CD system leveraging Bamboo with tight Jira integration. I also helped the company move on from CVS to SVN. Don't ask.

We had automated deployments and tight integration with our apps over the course of the 3 years I was leading the team. We automated everything from Oracle running on AIX, to provisioning SSH keys and access to production servers based on Jira tickets and Puppet.

On the software development side I learned enough COBOL to port some of our mainframe jobs to Python. I wrote packed-decimal libraries and EBCDIC encoders so we could use Python going forward to process batch jobs. A big deal in the payments industry.

During my time at TSYS I really got exposed to open source and made some major contributions to Puppet and Cobbler -- I added a feature to Cobbler that enabled us to configure servers while leveraging Cobbler metadata and tools like Puppet.

I also started contributing to distutils and pip back in the day. I did some of the work that made pip and virtualenv play nice together. I also started public speaking at a local meetup, PyATL, in Atlanta, and found my voice in the Python community.

It's my PuppetConf 2012 talk that landed me a job at Puppet Labs; the rest is history.


Whenever I see headlines like this, I'm reminded that physics has a communication problem where the public thinks that high energy physics is the only type (or the only worthy type) of physics out there. There's a lot more to the field than the next theory of everything, and those other fields aren't stuck.

For instance, in just this decade humankind turned on the first X-ray free-electron laser and increased the state of the art in the brightness of X-ray sources by 10 ORDERS OF MAGNITUDE!!! I'm not sure I can fully explain how transformational that level of improvement was because I work on the accelerator side, not the user side. But at least understand that there are entire subfields that now exist which didn't before. Not just in physics, but in biology, chemistry, materials science, and numerous other fields. It's an instrument so powerful that every other scientifically advanced nation is now building their own. That instrument was developed by physicists, and not high energy physicists.

I feel that this also gets at a sort of toxic notion that I've noticed some high energy physicists in my own department have (not all, but you probably know the people that I'm talking about). The notion is that if you aren't working on 12 dimensional quantum theories of gravity or the holographic principle and black hole physics, then you aren't a real physicist. Somehow they think that if your work has a real world impact, then it's tainted in some way. I'm not sure how it's gotten this way, but I've actually been told as much by another grad student that my field should be moved to the engineering department because we aren't doing real physics.


> there were no humans on board

Back when I worked at a supercomputing center, we had "operators" on duty, who were supposed to visit the machine room every 2-3h or so and check several things.

It turned out that they were the major cause of hangs and reboots of our SunSITE server (a large FTP archive) — walking on the raised datacenter floor caused vibrations that were enough to disturb the (terrible) external SCSI connectors on multiple drive arrays.

So, I can certainly believe that statement.

