Hacker News

Maintenance is a challenge anywhere software is developed in-house without a dedicated development team. Development is often led by one person and becomes very difficult when they depart. All the regular maintenance challenges are present in these situations, just exacerbated. I'm not sure what organizations that aren't software-focused can do to improve their situation in this regard.



One thing we can do as users is champion the idea that open-source authors don't owe us anything. Having support or getting help with problems is great, but the author's already done us a huge favor by writing the software we needed in the first place, and they aren't required to go beyond that or do anything specifically for an individual.


Yeah, this is what I expected the article to be about. What drives me insane as an open-source developer is that a paying customer who has an outage at the worst possible time will be far more polite and grateful for the help I'm contractually obligated to give them than so many random people on forums are about the product not having a feature they want.

It makes me want to give my customers the source, tell them they can do whatever they want, and then ignore the rest of the community except for high-quality pull requests.


Mostly agree; however, at some point the authors DO NEED to do something beyond creating the thing, or else face the extinction of that piece of software.

I think most people who have created something will generously bend over backwards to help individuals in the early stages of its lifecycle. You can see that all the time on GitHub.

The problems come when the project takes off to the point where there isn't enough support for the number of people using it, BUT the software isn't mature/popular/fit enough to be "under the wing" of a larger organization that can afford to pay for its maintenance and evolution.

Is there a way to bridge the gap between author-generosity support and corporate/organizational stewardship? We do have the social networks in place to allow that; they're just focused on different objectives.


Wasn't there a thing recently where an author just gave away one of his Node.js libraries, and then it was used maliciously by the requester to attempt to hijack Bitcoin wallets?

Found it: https://arstechnica.com/information-technology/2018/11/hacke...

I don't blame anyone in this scenario, because the culture of open-source projects and their interplay with enterprise encourages it.


Exactly why the archive button exists :)


I think the answer to your final comment is quite simple: invest in teaching proper software engineering practices, especially if software isn't your focus. Get one or two people (potentially from outside the group/collaboration) who are experts and have them teach the group.

I can say that in high energy physics the environment is very much moving in the right direction. My collaboration has a dedicated tutorial three times a year that covers tools such as git and CMake (the concepts of version control and build systems are introduced as well, along with the definition of what a software release is). Just a few years ago this didn't exist; if you wanted to be proficient with these tools or understand the lingo you had to be self-taught, and a lot of people don't have the time for that, so when they had to do it, it was like pulling teeth. Spending 2 to 3 days, 3 times a year, is not a super serious commitment, and the material from the previous tutorial is always available for the next (always with some minor improvements/fixes). It gets people to a productive state a lot faster than just handing them a problem involving our massive software stack and saying "oh, and if you don't know git, google it."

I strongly believe all research groups that use software need well-defined teaching material on how to be a productive user of, and contributor to, the local software stack. It's not a hard problem to solve, and it helps eliminate what are actually fake hard problems.


As an academic, I think that's a great first step. But there are many structural issues beyond it: advisors and students both under pressure to do whatever it takes to get the paper out and move on to the next project, students needing/wanting to just graduate and move on, the usual yardsticks of academic achievement lagging behind in recognizing software as a legitimate product of research, etc. (*) On top of that, lots of different kinds of software are developed in research institutions; most are just rapid prototypes, but some do get used by many people every day, and it's not clear that one process fits all.

If anyone out there has suggestions of useful resources, I'm all ears!

(*) Those of us who care about software do try to work on all these issues, but progress is slow.


I'm not sure how well this approach works on average, but I haven't had a great experience. I'm a software engineer supporting a research group that's mostly CS PhDs. I take any chance I get to teach good software engineering practices, but they mostly just don't care.


Yeah, I hear you and empathize with this. That is especially difficult to deal with, but I think the model I describe, with a dedicated event three times a year, helps: people can really direct their focus for those few days, rather than picking up random factoids as they come up.


Agreed. I've done some automation work at my job for processes that have exploded in volume over the last few years and weren't feasible to keep doing by copy/pasting through Excel anymore.

It's in Python. I've avoided any external dependencies, kept inputs to CSV files that can be made from existing Excel sheets, and kept the code fairly well commented.

But there used to be two people here who'd written at least a line of Python in their lives. Now it's just me, and if I leave, I have no illusions that it'll be maintained.

The best thing to do is write instructions for whoever will need to run it, and hope that they never need it to do anything new.
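The stdlib-only approach described above (CSV in, no external dependencies) can be sketched roughly like this; the file layout and column names here are invented for illustration, not taken from the actual script:

```python
import csv


def load_rows(path):
    """Read a CSV exported from Excel into a list of dicts, stdlib only.

    utf-8-sig tolerates the BOM that Excel often prepends on export.
    """
    with open(path, newline="", encoding="utf-8-sig") as f:
        return list(csv.DictReader(f))


def total_by_key(rows, key_col, value_col):
    """Sum a numeric column grouped by another column, e.g. cost per dept."""
    totals = {}
    for row in rows:
        key = row[key_col]
        totals[key] = totals.get(key, 0.0) + float(row[value_col])
    return totals
```

Because everything is standard library, whoever inherits the script only needs a working Python install and the written instructions, with nothing to `pip install`.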


I work at a place almost like that, and I know the solution, but nobody wants to pay for it.

We develop software in-house. I am the single dev on staff; we have a few contractors that we use for legacy ERP system programming/maintenance/modifications. I work with people who have CS degrees; however, it is still hard for them to understand why I'm spending time on a layered architecture instead of using some basic OOP. Thankfully, they trust me and allow me to do what is required.

The BEST solution I see is having more than one full-stack dev, but then you are paying 2x more. Also, have some kind of standard and review any outsourced work. I picked up a legacy app after an outside contractor, and it has been a disaster to work with.

The source code provided was out of date; he could not produce the source code that was running in production. No naming conventions were used: literally everything had generic names like command1, textbox1, and so on. This could have been easily caught if any competent junior had looked at the code. Some methods had tens of if statements, some were 1k+ lines long, and there was almost zero OOP.
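To illustrate the kind of cleanup this implies (the names and the billing domain here are hypothetical, not from the actual app), compare the generic-name style with a reviewable equivalent:

```python
# Before: the style described above -- generic names reveal nothing
# about what the function does or what its inputs mean.
def command1(textbox1, textbox2):
    if textbox1 != "":
        if textbox2 != "":
            return float(textbox1) * float(textbox2)
    return 0.0


# After: descriptive names and an early return make the intent
# obvious to any reviewer, with identical behavior.
def line_item_total(quantity_text: str, unit_price_text: str) -> float:
    """Parse two form fields; return their product, or 0.0 if either is blank."""
    if not quantity_text or not unit_price_text:
        return 0.0
    return float(quantity_text) * float(unit_price_text)
```

The behavior is unchanged, which is exactly why a naming standard plus a quick junior-level review would have caught this cheaply before the code shipped.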

If a company has a single dev and cannot afford another, they really need to stress maintenance and verify in some way that the dev is capable of producing a maintainable project. They maybe should hire someone to help with the hiring process, but hiring SEs is difficult even when experienced people are doing the hiring.


In academia, a lot of this software is built by doctoral students or postdocs with limited contracts; they are gone after a short while. In many other places, the people writing software outside of IT departments typically stay with the company longer.

The question now is: which is better? In one case there is a single "god"; in the other, the software is passed down for generations while everybody mostly cares about their research and not long-term maintainability.



