Without going into details of my case, or even what type of law it was related to, here are my eight years wasted forever and $240,000 dumped on expensive lawyers that got nowhere: never find a lawyer in the yellow pages or by referral from friends, because every case is different. Always look for lawyers that used to have a government job, were prosecutors or even judges (it happens). Like OP said, it's a game of the legal system, not a system of justice - you would be surprised how far "buddies that used to work in the same building" can go before anyone can call it a bribe. In my case four lawyers burnt time and money never getting anywhere. Then I got a very inexpensive lawyer who turned out to have worked for the government before. By the end of my free consultation he had a response, via personal email, from the government employee that was holding up my case (indefinitely). By the end of the week it was positively resolved, at a retainer of $2,400.
I do consulting, the kind I do is irrelevant because there are others with high bill rates too (I know of some iOS devs that make $2,500/day). I have friends who are Java devs making $250/hr. They're not dummies but they aren't exactly Linus Torvalds or Chris Lattner either.
Since you asked, I advise companies on how to implement certain development practices: things like DevOps, TDD, etc. Before you totally write this off, hold your horses. I'm not saying this is THE path for you. It's one of many. I just happen to care a lot about how we work together to make software, and I always found myself drifting into those discussions on the team.
I'm based in Houston, TX. I charge $3k - $5k per day. I usually pay my own travel expenses. I haven't been charging that for long and the first time I submitted a proposal at that rate I almost threw up I was so nervous. I work usually two weeks per month. I usually travel to my clients. I also sell support contracts where I offer them unlimited Slack and 1-2 calls per week. I offer them coaching, feedback, guidance. Sometimes it's pairing on building out a Jenkins pipeline, sometimes it's just explaining why "change fail percentage" is a good metric to track backed by industry studies.
I have a friend who couldn't join us for BBQ this weekend because he's traveling to SF just to visit with a former client who paid him something like $100k for a handful of Java classes he taught. I'd have to look at the invoices to know the exact amount and I have a call coming up. He went through my company. The nice thing is once you set up a company properly it becomes a vehicle for all sorts of financial endeavors. People think creating a company is scary and complicated and can really fuck you up. That's wrong. It's trivial to create a functioning company and the upkeep just to keep it compliant is absolutely minimal if it's incorporated in a state like Texas.
If you've read this far, here's the golden nugget. I wish someone had told me what I'm about to tell you.
1. Follow the things that really interest you, not in your head, the things that make your heart pound. Maybe that's picking up certain types of stories in the sprint, or helping a coworker with a certain kind of bug. While your energy will come and go, that thing that makes your heart pound will always be there. It's connected to your calling, which you might not understand until you're in your 40's (like me).
2. Always ask for more money. Ask nicely, and after you deliver something of value. The company ALWAYS can afford to give you more. If giving you $20k more will bankrupt the company then your company is dying and you're going to be out of a job anyway. People who aren't business owners (myself included at one time) do not comprehend the decision flow business owners take. Ask for the money until they don't have any more. It is more ethical than letting them waste it on another kegerator for the office. Programmers in particular have a very skewed sense of the value they provide. Even a mediocre programmer is worth 10x his salary. You have no idea how valuable you are to a smart business person. Instagram had 30 employees when it was sold for $1B. Think about that.
3. Be polite and talk about the things that interest you with others. Share what excites you. It will be genuine and people will like that. It will link you up with the kind of people you should be linked up with.
4. Only invest time in the people with the most potential. Don't waste your time having coffee with people who aren't passionate, smart, hard working, or creative. This means avoid shit-magnets/pin-cushions. Within 10 years these high potential people will pay off for you in multiples. For me they've become great friends and have fed me most of my business. Back then they were just "I like this person".
5. Show your work. Impostor syndrome is the lies you tell yourself in the absence of valuable feedback. Make things, no matter how flimsy or unfinished, and show them to people. A Russian guy once told me "never show unfinished work to an idiot". So show your work, just don't show it to idiots. Every single time I've had the courage to show smart people things I was tinkering with, it has led to an opportunity.
6. Be courageous. Learn to do things you know to be wise, even when they're scary.
7. Be patient. It's the journey not the destination.
I know this all sounds like horse shit but how many times have you heard these exact things from "successful" people before? Did you ever stop to ask yourself why? Maybe they're good principles. Maybe they actually work. If I told you the "path" to where I'm at and you tried to follow it you'd most certainly fail because the world is so complex that the correct answer (in discrete steps) is only knowable after the fact. The only thing you can do is be guided by principles that bear good fruit. Follow your heart, ask for more money, be polite, invest in potential, show your work, be courageous, be patient.
From the bottom of my heart I wish you the best. If you ever would like to chat I'll happily hop on a zoom and share as much as I can with you. I want you to find as much joy and financial reward in your endeavors as I have found in mine. God bless.
MIT recorded a set of Calculus video courses back in the 1970s that it has since made publicly available. It is taught by a lecturer named Herbert Gross. His lecturing style is clear: he states why things are defined the way they are and derives everything from first principles. There is an unusual mix of rigor and focus on building understanding - where everything comes from. It also taught me that math is about reasoning logically and rigorously, and that we shouldn't always rely on intuition (at least while doing math). Deriving almost all the basic calculus results that were drilled into me from the basic concept of a limit, deltas and epsilons, was really refreshing.
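For readers who haven't met it, the standard epsilon-delta definition of a limit that the course builds everything on (my notation, not a transcript of the lectures) is:

```latex
% f has limit L at a:
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\, \exists \delta > 0 :\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon

% from which, e.g., the derivative is itself defined as a limit:
f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}
```

Everything else - continuity, derivatives, integrals - gets built on top of that one definition, which is exactly what makes the approach feel so coherent.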
Compared to more recent OCW calculus videos, I found this to be better in terms of respecting the learner's intellect, presenting the whole proof rigorously and teaching the student to think a certain way.
I felt obliged to comment because I feel I know what you are talking about and I also worry that much of the advice posted so far is wrong at best, dangerous at worst.
I am a 42-year-old, very successful programmer who has been through a lot of situations in my career so far, many of them highly demotivating. And the best advice I have for you is to get out of what you are doing. Really. Even though you state that you are not in a position to do that, you really are. It is okay. You are free. Okay, you are helping your boyfriend's startup, but what is the appropriate cost for this? Would he have you do it if he knew it was crushing your soul?
I don't use the phrase "crushing your soul" lightly. When it happens slowly, as it does in these cases, it is hard to see the scale of what is happening. But this is a very serious situation and if left unchecked it may damage the potential for you to do good work for the rest of your life. Reasons:
* The commenters who are warning about burnout are right. Burnout is a very serious situation. If you burn yourself out hard, it will be difficult to be effective at any future job you go to, even if it is ostensibly a wonderful job. Treat burnout like a physical injury. I burned myself out once and it took at least 12 years to regain full productivity. Don't do it.
* More broadly, the best and most creative work comes from a root of joy and excitement. If you lose your ability to feel joy and excitement about programming-related things, you'll be unable to do the best work. Note that this issue is separate from, and parallel to, burnout! If you are burned out, you might still be able to feel the joy and excitement briefly at the start of a project/idea, but they will fade quickly as the reality of day-to-day work sets in. Alternatively, if you are not burned out but also do not have a sense of wonder, it is likely you will never get yourself started on the good work.
* The earlier in your career it is now, the more important this time is for your development. Programmers learn by doing. If you put yourself into an environment where you are constantly challenged and are working at the top threshold of your ability, then after a few years have gone by, your skills will have increased tremendously. It is like going to intensively learn kung fu for a few years, or going into Navy SEAL training or something. But this isn't just a one-time constant increase. The faster you get things done, and the more thorough and error-free they are, the more ideas you can execute on, which means you will learn faster in the future too. Over the long term, programming skill is like compound interest. More now means a LOT more later. Less now means a LOT less later.
So if you are putting yourself into a position that is not really challenging, that is a bummer day in and day out, and you get things done slowly, you aren't just having a slow time now. You are bringing down that compound interest curve for the rest of your career. It is a serious problem.
If I could go back to my early career I would mercilessly cut out all the shitty jobs I did (and there were many of them).
One more thing, about personal identity. Early on as a programmer, I was often in situations like you describe. I didn't like what I was doing, I thought the management was dumb, I just didn't think my work was very important. I would be very depressed on projects, make slow progress, at times get into a mode where I was much of the time pretending progress simply because I could not bring myself to do the work. I just didn't have the spirit to do it. (I know many people here know what I am talking about.) Over time I got depressed about this: Do I have a terrible work ethic? Am I really just a bad programmer? A bad person? But these questions were not so verbalized or intellectualized, they were just more like an ambient malaise and a disappointment in where life was going.
What I learned, later on, is that I do not at all have a bad work ethic and I am not a bad person. In fact I am quite fierce and get huge amounts of good work done, when I believe that what I am doing is important. It turns out that, for me, to capture this feeling of importance, I had to work on my own projects (and even then it took a long time to find the ideas that really moved me). But once I found this, it basically turned me into a different person. If this is how it works for you, the difference between these two modes of life is HUGE.
Okay, this has been long and rambling. I'll cut it off here. Good luck.
I'm unlikely to write a book, but here are a few more tidbits that come to mind.
Re the above -- I don't mean to imply that any of this is malicious or even conscious on anyone's behalf. I suspect it is for a few people, but I bet most people could pass a lie detector test that they care about their OKRs and the OKRs of their reports. They really, really believe it. But they don't act it. Our brains are really good at fooling us! I used to think that corporate politics is a consequence of malevolent actors. That might be true to some degree, but mostly politics just arises. People overtly profess whatever they need to overtly profess, and then go on to covertly follow emergent incentives. Lots of misunderstandings happen that way -- if you confront them about a violation of an agreement (say, during performance reviews), they'll be genuinely surprised and will invent really good reasons for everything (other than the obvious one, of course). It's basically watching Elephant In The Brain[1] play out right in front of your eyes.
Every manager wants to grow their team so they can split it into multiple teams so they can say they ran a group.
When there is a lot of money involved, people self-select into your company who view their job as basically to extract as much money as possible. This is especially true at the higher rungs. VP of marketing? Nope, professional money extractor. VP of engineering? Nope, professional money extractor too. You might think -- don't hire them. You can't! It doesn't matter how good the founders are: these people have spent their entire lifetimes perfecting their veneer, and at that level they're the best in the world at it. Some of them will always slip past the founders' psychology. You might think -- fire them. Not so easy! They're good at embedding themselves into the org, they're good at slipping past the founders' radar, and they're high up, so half their job is recruiting. They'll have dozens of cronies running around your company within a month or two.
From the founders' perspective the org is basically an overactive genie. It will do what you say, but not what you mean. Want to increase sales in two quarters? No problem, sales increased. Oh, and we also subtly destroyed our customers' trust. Once the stakes are high, founders basically have to treat their org as an adversarial agent. You might think -- but a good founder will notice! Doesn't matter how good you are -- you've selected world-class politicians who are good at getting past your exact psychological makeup. Anthropic principle!
There's lots of stuff like this that you'd never think of in a million years, but is super-obvious once you've experienced it. And amazingly, in spite of all of this (or maybe because of it?) everything still works!
That's because most of those tutorials have not been written by somebody actually putting something in production.
I've been using asyncio for a while now, and you can't get away with a short introduction since:
- it's very low level
- it's full of design flaws and already has accumulated technical debt
- it requires very specific best practices to be usable
I'm not going to write a tutorial here, it would take me a few days to make a proper one, but a few pointers nobody tells you:
- asyncio solves one problem, and one problem only: when the bottleneck of your program is network IO. It's a very small domain. Most programs don't need asyncio at all. Actually many programs with a lot of network IO don't have performance problems, and hence don't need asyncio. Don't use asyncio if you don't need it: it adds complexity that is worth it only if it solves your problem.
- asyncio is mostly very low level. Unless you code your own lib or framework with it, you probably don't want to use it directly. E.g.: if you want to make HTTP requests, use aiohttp.
- use loop.run_until_complete(), not loop.run_forever(). The former will crash on any unhandled exception, making debugging easy. The latter will just display the stack trace in the console.
- talking about easy debugging, activate the various debug features when not in prod (https://docs.python.org/3/library/asyncio-dev.html#debug-mod...). Too many people code with asyncio in the dark and don't know there is plenty of debug info available.
- await is just a way to inline a callback. When you do "await", you say 'do the stuff', and any lines of code after the "await" run when it's done. You can run asynchronous things without "await"; it's only useful if you want 2 asynchronous things to happen one __after__ another. Hence, don't use it if you want 2 asynchronous things to progress in parallel.
- if you want to run one asynchronous thing, but not "await" it, call "asyncio.ensure_future()".
- errors in "await" can be just caught with try/except. If you used ensure_future() and no "await", you'll have to attach a callback with "add_done_callback()" and check manually if the future has an exception. Yes, it sucks.
- if you want to run one blocking thing, call "loop.run_in_executor()". Careful, the signature is weird.
- CPU-intensive code blocks the event loop. loop.run_in_executor() uses threads by default, hence it doesn't protect you from that. If you have CPU-intensive code, like zipping a lot of files or calculating your own precious Fibonacci, create a "ProcessPoolExecutor" and use run_in_executor() with it.
- don't use asyncio before Python 3.5.3. There is an incredibly major bug with "asyncio.get_event_loop()" that makes it unusable for anything that involves mixing threads and loops. Yep. Not a joke.
- but really, use 3.6. TCP_NODELAY is on by default and you have f-strings anyway.
- don't pass the loop around. Use asyncio.get_event_loop(). This way your code will be independent of the loop creation process.
- you do pretty much nothing yourself in asyncio. Any async magic is deep, deep down the lib. What you do is define coroutines calling the magic things with ensure_future() and await. Pretty much nothing in your own code is doing IO, it's just asking the asyncio code to do IO in a certain order.
- you see people in tutorials simulate IO by doing "asyncio.sleep()". It's because it's the easiest way to make the event loop switch context without using the network. It doesn't mean anything, it just pauses and switches, but if you see that in a tutorial, you can mentally replace it with, say, an http call, to get a more realistic picture.
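To make several of these points concrete, here is a minimal, self-contained sketch (fetch() and blocking_work() are made-up stand-ins, and asyncio.sleep() plays the role of a network call, as described above):

```python
import asyncio
import time

async def fetch(delay):
    # asyncio.sleep() stands in for a network call, as in the tutorials
    await asyncio.sleep(delay)
    return delay

def blocking_work(x):
    time.sleep(0.05)  # blocking call: must go through an executor
    return x * 2

async def boom():
    raise ValueError("kaboom")

results = {}

async def main():
    loop = asyncio.get_event_loop()

    # awaiting runs things one AFTER another (~0.2s total)
    start = time.monotonic()
    await fetch(0.1)
    await fetch(0.1)
    results["sequential"] = time.monotonic() - start

    # ensure_future() schedules without waiting, so both progress
    # in parallel; gather() waits for both (~0.1s total)
    start = time.monotonic()
    t1 = asyncio.ensure_future(fetch(0.1))
    t2 = asyncio.ensure_future(fetch(0.1))
    await asyncio.gather(t1, t2)
    results["parallel"] = time.monotonic() - start

    # run_in_executor() for blocking code (None = default thread pool;
    # use a ProcessPoolExecutor for CPU-bound work)
    results["executor"] = await loop.run_in_executor(None, blocking_work, 21)

    # errors in awaited coroutines are caught with a plain try/except
    try:
        await boom()
    except ValueError as e:
        results["error"] = str(e)

loop = asyncio.new_event_loop()
loop.set_debug(True)  # the debug mode mentioned above
loop.run_until_complete(main())
loop.close()
```

The parallel pair finishes in roughly half the time of the sequential pair, which is the whole point of scheduling with ensure_future() before you gather().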
- asyncio comes with a lot of concepts, let's take a time to define them:
* Future: an object with a thing to execute, with potentially some callbacks to be called after it's executed.
* Task: a subclass of Future. The thing to execute is a coroutine, and the coroutine is immediately scheduled in the event loop when the task is instantiated. When you do ensure_future(coroutine), it returns a Task.
* coroutine: a generator with some syntactic sugar. Honestly, that's pretty much it. They don't do much by themselves, except you can use await in them, which is handy. You get one by calling a coroutine function.
* coroutine function: a function declared with "async def". When you call it, it doesn't run the code of the function. Instead, it returns a coroutine.
* awaitable: any object with an __await__ method. This method is what the event loop uses to execute the code asynchronously. Coroutines, tasks and futures are awaitables. Now the dirty secret is this: you can write an __await__ method, but in it, you will mostly call the __await__ of some magical object from deep inside asyncio. Unless you write a framework, don't think too much about it: awaitable = stuff you can pass to ensure_future() to tell the event loop to run it. Also, you can "await" any awaitable.
* event loop: the magic "while True" loop that takes awaitables and executes them. When the code hits "await", the event loop switches from one awaitable to another, then goes back to it later.
* executor: an object that takes code, executes it in a __different__ context, and returns a future you can await in your __current__ context. You will use them to run stuff in threads or separate processes, but magically await the result in your current code like it's regular asyncio. It's very handy for naturally integrating blocking code into your workflow.
* event loop policy: the stuff that creates the loop. You can override that if you are writing a framework and want to get fancy with the loop. Don't do it. I've done it. Don't.
* task factory: the stuff that creates the tasks. You can override that if you are writing a framework and want to get fancy with the tasks. Don't do it either.
* protocols: an abstract class you can implement to tell asyncio __what__ to do when it establishes/loses a connection or sends/receives a packet. asyncio instantiates one protocol for each connection. Problem is: you can't use "await" in protocols, only old-fashioned callbacks.
* transports: an abstract class you can implement to tell asyncio __how__ to establish/lose a connection or send/receive a packet.
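A tiny sketch of how those concepts relate in practice (add() is a made-up coroutine function):

```python
import asyncio

async def add(a, b):          # coroutine FUNCTION ("async def")
    return a + b

checks = {}

async def main():
    coro = add(1, 2)          # calling it runs nothing yet: you just
                              # get a coroutine object
    checks["is_coroutine"] = asyncio.iscoroutine(coro)

    # ensure_future() wraps the coroutine in a Task and schedules it
    task = asyncio.ensure_future(coro)
    checks["is_task"] = isinstance(task, asyncio.Task)
    checks["is_future"] = isinstance(task, asyncio.Future)  # Task subclasses Future

    checks["result"] = await task   # tasks are awaitables too

loop = asyncio.new_event_loop()
loop.run_until_complete(main())
loop.close()
```

Note how the body of add() only actually runs once the task is scheduled and the loop gets a chance to execute it, never at call time.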
Now, I'm putting the last point separately because if there is one thing you need to remember, it's this. It's the most underrated secret rule of asyncio. The stuff that is literally written nowhere, not in the doc, not in any tuto, etc.
asyncio.gather() is the most important function in asyncio
===========================================================
You have no freaking idea of when the code will start or end execution.
To stay sane, you should never, ever have a dangling awaitable anywhere. Always keep a reference to all your awaitables. Decide where in the code you think their life should end.
And at this very point, call asyncio.gather(). It will block until all awaitables are done.
    foo = asyncio.ensure_future(bar())
    fooz = asyncio.get_event_loop().run_in_executor(None, barz)
    await asyncio.sleep(10)
    await asyncio.gather(foo, fooz)  # this is The Only True Way
Your code should be a meticulous tree of hierarchical calls to asyncio.gather() that delimits where things are supposed to stop. And if you think it's annoying, wait until you debug something whose life cycle you don't have control over.
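A toy illustration of such a tree of gather() calls, with made-up leaf/branch names:

```python
import asyncio

ran = []

async def leaf(name):
    await asyncio.sleep(0.01)  # stands in for real IO
    ran.append(name)

async def branch(prefix):
    # each branch gathers its own children...
    await asyncio.gather(leaf(prefix + "1"), leaf(prefix + "2"))

async def root():
    # ...and the root gathers the branches: a tree of gather() calls,
    # so no awaitable can outlive the scope that created it
    await asyncio.gather(branch("a"), branch("b"))

loop = asyncio.new_event_loop()
loop.run_until_complete(root())
loop.close()
```

When root() returns, you know every leaf has finished: nothing is left dangling, which is exactly the sanity guarantee the gather tree buys you.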
Of course it's getting old pretty fast, so you may want to write some abstraction layer such as https://github.com/Tygs/ayo. But I wouldn't use this one in production just yet.
It is also based on Gallup data. They determined that employee happiness was not correlated with company success. They did find that the following questions, in order, were highly correlated with company success.
1. Do I know what is expected of me at work?
2. Do I have the materials and equipment I need to do my work right?
3. At work, do I have the opportunity to do what I do best every day?
4. In the last seven days, have I received recognition or praise for doing good work?
5. Does my supervisor, or someone at work, seem to care about me as a person?
6. Is there someone at work who encourages my development?
7. At work, do my opinions seem to count?
8. Does the mission/purpose of my company make me feel my job is important?
9. Are my co-workers committed to doing quality work?
10. Do I have a best friend at work?
11. In the last six months, has someone at work talked to me about my progress?
12. This last year, have I had opportunities at work to learn and grow?
Here's the recommendation I give to students when they ask me this question (it's a common one!):
You come up with a brilliant idea, you obsess over it, you Google some info, and on your screen lies your idea, being done by someone else, for the last two years. You’re all too familiar with that sinking feeling in your stomach that follows. You abandon the idea almost immediately after all that excitement and ideation.
First (as already mentioned), existing solutions prove your idea — their existence proves that you’re trying to solve a real problem that people might pay to have solved. And it proves that you’re heading in a direction that makes sense to others, too.
Second, and this is the biggie: The moment you see someone else’s solution, you mar and limit your ideas. It suddenly becomes a lot more difficult to think outside the box because before, you were exploring totally new territory. Your mind was pioneering in a frontier that had no paths. But now, you’ve seen someone else’s path. It becomes much harder to see any other potential paths. It becomes much harder to be freely creative.
Next time you come up with that great idea, don’t Google it for a week. Let the idea ferment in your mind, allow it to grow like many branches from a trunk. Jot down all of the tangentially related but equally exciting ideas that inevitably follow. Allow your mind to take the idea far into new places. No, you won’t build 90% of them, but give yourself the time to enjoy exploring the idea totally.
When I do this, once I do Google for existing solutions, I usually find that all the other things I came up with in the ensuing week are far better than what’s already out there. I have more innovative ideas for where it could go next; I have a unique value proposition that the other folks haven’t figured out yet. But had I searched for them first, I never would have come up with those better ideas at all.
Finally, I’ll say this: if you see your idea has already been done and you no longer care about it, then it probably wasn’t something you were passionate enough about in the first place; it was just a neat idea to you.
* Basic Monitoring, instrumentation, health checks
* Distributed logging, tracing
* Ready to isolate not just code, but whole build+test+package+promote for every service
* Can define upstream/downstream/compile-time/runtime dependencies clearly for each service
* Know how to build, expose and maintain good APIs and contracts
* Ready to honor backward and forward compatibility, even if you're the same person consuming this service on the other side
* Good unit testing skills and readiness to do more (as you add more microservices it gets harder to bring everything up, hence development becomes more unit/contract/API-test driven and less e2e-driven)
* Aware of [micro] service vs modules vs libraries, distributed monolith, coordinated releases, database-driven integration, etc
* Know infrastructure automation (you'll need more of it)
* Have working CI/CD infrastructure
* Have or ready to invest in development tooling, shared libraries, internal artifact registries, etc
* Have engineering methodologies and process-tools to split down features and develop/track/release them across multiple services (xp, pivotal, scrum, etc)
* A lot more that doesn't come to mind immediately
Thing is - these are all generally good engineering practices.
But with monoliths, you can get away without doing them. There is the "log in to a server, clone, run some commands, start a stupid nohup daemon and run ps/top/tail to monitor" way. But with microservices, your average engineering standards have to be really high. It's not enough to have good developers. You need great engineers.
Salaries never stay secrets forever. Hiding them only delays the inevitable.
Last year we were having a discussion at lunch. Coworker was building a new house, and when it came to the numbers it was let loose that it was going to cost about $700K. This didn't seem like much, except to a young guy that joined the previous year and had done nothing but kick ass and take names. The new guy was arguably the most talented guy in the company by a considerable margin, so he thought someone building a $700K home might've been overextending themselves. The person buying the home retorted that it was reasonable and asked the new guy why he wouldn't buy the Porsche Boxster he considered his dream car. The new guy responded that would never be prudent. That didn't seem right, as several of us at the table could've nearly swung a Boxster with just our bonus.
The conversation ended up in numbers. The coworker building the house pulled about $140K base (median for a programmer was probably $125K), and his bonus nearly matched the new guy's salary, which was an insulting $60K -- and he got cut out of the January bonus and raise for not being there a full year, only 11 months.
Turns out he was a doormat in negotiating, though his salary history was cringeworthy. It pained everyone to hear it, considering how nice of a guy he was. In all honesty, $60K was a big step up for him. Worst of all, this wasn't a cheap market (Boston). The guy probably shortchanged himself well over half a million dollars in the past decade. This was someone who voluntarily put in long hours, went out of his way to teach others, and did everything he could to help other departments, like operations, and other teams. On top of that, he was beyond frugal. Supposedly he saved something around 40% of his take-home pay, despite living alone in Boston. He grew up in a trailer park.
He spent the next day in non-stop meetings with HR, his manager and the CTO. That Friday he simply handed in his badge without a word, walked out and never came back.
Until 3 months later. As a consultant. At $175/hour.
This is a great post and so spot on. At some point in my career my 'review prep' (which was the time I spent working on my own evaluation of my year at a company) became answering the question, "Do I still want to work here?" I categorize my 'review' into four sections, each rated at one of five levels: needs improvement, sometimes meets expectations, meets expectations, sometimes exceeds expectations, or consistently exceeds expectations.
I start by reviewing how I'm being managed. I expect someone managing me to be clear in their expectations of my work product, to provide resources when I have identified what I need to complete a job, to clearly articulate the problem I am expected to be solving, and to clearly articulate the criteria by which the solution will be evaluated.
Second, I review my co-workers, using a three-axis evaluation: can I trust what they say to be accurate/honest, can I count on them to meet their commitments, and are they willing to teach me when I don't understand something and, conversely, to learn when there is something they do not know.
Third, I review what level of support I get to do my job. Am I provided with a workspace where I can get work done? Do I have the equipment I need to do what is being asked? Is my commute conducive to the hours required? And finally, and most important, does this job allow me to balance work obligations and non-work obligations?
Fourth, I review whether or not the company's mission, ethics, and culture are still ones I wish to be a part of. Am I proud of the company's mission? Do I believe that the leadership will make ethical calls even if doing so would mean less profit margin? Can I relate to, and am I compatible with, the values that my co-workers espouse and the actions they take? (This is the "company culture" theme: is it still a company that fits me culturally?)
A company that receives lower than a 3.0 rating I put on a 90 day "company improvement plan" (CIP). I bring issues to the leadership who are in a position to address the situations that I've found wanting and try to secure their commitment to change. If after 90 days they haven't been able to (if they choose not to they're done right away), then I "fire" the company and work to process my exit as expeditiously as possible.
To be honest, this isn't the best list; it's a bit too blog-heavy. I've started reading up on ML only recently but here are my recommendations. Note that I haven't gone through all of them in their entirety, but they all seem useful. Also note that a lot of them overlap to a large degree and that this list is more of a "choose your own adventure" than "you have to read all of these".
Reqs:
* Metacademy (http://metacademy.org) If you just want to check out what ML is about this is the best site.
Hasura, by far. It lets you point-and-click build your database and table relationships with a web dashboard, and it autogenerates a full GraphQL CRUD API with configurable permissions and JWT/webhook auth baked in.
I've been able to build in a weekend no-code what would've taken my team weeks or months to build by hand, even with something as productive as Rails. It automates the boring stuff and you just have to write single endpoints for custom business logic, like "send a welcome email on sign-up" or "process a payment".
It has a database viewer, but it's not the core of the product, so I use Forest Admin to autogenerate an admin dashboard that non-technical team members can use.
For interacting with Hasura from a client, you can autogenerate fully-typed and documented query components in your framework of choice using GraphQL Code Generator.
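For the code-generation step, a minimal GraphQL Code Generator config might look like this; the endpoint URL, secret variable, and output paths are placeholders, not from any real project:

```yaml
# codegen.yml -- endpoint, secret, and paths are illustrative placeholders
schema:
  - https://my-app.hasura.app/v1/graphql:
      headers:
        x-hasura-admin-secret: ${HASURA_ADMIN_SECRET}
documents: "src/**/*.graphql"
generates:
  src/generated/graphql.tsx:
    plugins:
      - typescript
      - typescript-operations
      - typescript-react-apollo
```

Running `graphql-codegen` against a config like this emits typed hooks for each query and mutation you've written.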
Then I usually throw Metabase in there as a self-hosted Business Intelligence platform for non-technical people to use as well, and PostHog for analytics.
Hmm... that aligns somewhat with my own thoughts on the actual cause of depression. I've spent a lot of time thinking about it, since I spent a significant portion of my life depressed, and I find the current approach to it in health care unsettling.
Allow me, if you will, to engage in some inexpert speculation. If you read the following, please keep in mind that I am just some idiot on the internet and not in any way qualified to give advice.
It seems to me that depression is not a disorder, disease, or abnormality, but a necessary and purposeful reaction of the mind and brain to certain stimuli. Of course this is not always the case, and the same symptoms can be triggered by other factors that affect our neurochemistry or mental function, but in a normally functioning mind and brain I think this is true. When examined in this context, what do we find?
Depression makes us apathetic, reluctant to act, and unconfident. A while back there was an article on HN spitballing that depression and mania were related to our mind's assessment of its own ability to predict outcomes. Overconfidence in its own predictive ability manifests as mania, and low confidence manifests as depression. This makes some sense. If you are confident in your predictions you are more likely to act on them, and if you are not you are less likely to. Given this, I submit that it's possible that what depression really is, much of the time, is a philosophical problem.
Philosophy is our model of reality, and we use that model to make predictions and decide how to act in the world to effect change. When that model is known to be broken, we lower our confidence in it and act less. Over time, as more and more of our model is revealed as flawed and our confidence in it continues to plummet, we enter a state of learned helplessness. Finding ourselves unable to predict the results of our actions, we are unable to determine how to effect the changes we desire in our lives, leading to interesting contradictions like being bored and at the same time unmotivated to do things we used to enjoy. We don't want to be in this state, but we lack the ability to see a path out of it, so we become frustrated, angry, and/or sad. It can eventually reach a point where the only path out of the suffering that we're confident in is death.
In fact, this model-breaking occurs many times in our minds' development. As we grow up we form several different models of reality, all of which are inevitably revealed to be flawed. This is the reason you find children who believe they are hidden just because they can't see you (their model of reality doesn't include the concept of different perspectives), and why the terrible twos are so terrible (the young mind is dealing with its model of reality failing), for instance. With children, however, there are plenty of people around them operating with better models of reality to help them work out a new one. Societies can also be modeled this way, and if we look at the past we find that human cultures go through a similar pattern of forming a stable model of reality, eventually finding it flawed, suffering through the process of dealing with that, and ultimately resolving the crisis. I say resolving because, in actuality, there are two solutions to the problem of realizing your model is broken: forming a new, more accurate one; or ignoring the information that contradicts it.
This is the important point, I think: when an individual's model of reality is broken, and society cannot guide them towards a more accurate one because society itself is still operating on the model that individual has determined to be flawed, then chronic depression is a likely result. Our current societal philosophy, the one our health care system is also based on, sees this individual's suffering not as a transition period in which they form a new model, but as a severe disorder. To them, the rejection of the model is a form of insanity and unclear thinking. This is why you sometimes see people tell a depressed person an obvious platitude in an attempt to cheer them up, only for it to further frustrate the depressed individual: they are aware that the platitude is part of a flawed model.
Further, the health care system is, like most of current western society, firmly implanted in empiricism. Science and measurement are the hammer, and everything else is a nail. Society as a whole forms its model of depression on measurements and manipulation of the neurochemical and behavioral aspects of depression, the social side effects, etc, but without regard for its greater reason for being. They are witchdoctors, sacrificing chickens to drive out the demons and bloodletting to balance the humors. Sometimes it works, because even a broken clock is right twice a day, but a lot of times it doesn't.
If one were to assume that this assessment is accurate, then the reason we get depressed is so that our mind is motivated to take a step back and build a more accurate model of reality. The thing to do, then, is to help the sufferer realize why they are suffering. There's nothing wrong with them; they don't have a chemical imbalance of the humors; they aren't bad people for feeling the way they do or for not having faith in what society tells them is true. They have in fact taken a step toward growth, and nearly all growth comes at the cost of suffering. They need to look hard at where reality has shone the light on their flawed conception of it, reason through the problems, and build a more accurate replacement, and we may not be equipped to help them.
Some kids grow up on football. I grew up on public speaking (as behavioral therapy for a speech impediment, actually). If you want to get radically better in a hurry:
1) If you ever find yourself buffering on output, rather than making hesitation noises, just pause. People will read that as considered deliberation and intelligence. It's outrageously more effective than the equivalent amount of emm, aww, like, etc. Practice saying nothing. Nothing is often the best possible thing to say. (A great time to say nothing: during applause or laughter.)
2) People remember voice a heck of a lot more than they remember content. Not vocal voice, but your authorial voice, the sort of thing English teachers teach you to detect in written documents. After you have found a voice which works for you and your typical audiences, you can exploit it to the hilt.
I have basically one way to start speeches: with a self-deprecating joke. It almost always gets a laugh out of the crowd, and I can't be nervous when people are laughing with me, so that helps break the ice and warm us into the main topic.
3) Posture hacks: if you're addressing any group of people larger than a dinner table, pick three people in the left, middle, and right of the crowd. Those three people are your new best friends, who have come to hear you talk but for some strange reason are surrounded by great masses of mammals who are uninvolved in the speech. Funny that. Rotate eye contact over your three best friends as you talk, at whatever a natural pace would be for you. (If you don't know what a natural pace is, two sentences or so works for me to a first approximation.)
Everyone in the audience -- both your friends and the uninvolved mammals -- will perceive that you are looking directly at them for enough of the speech to feel flattered but not quite enough to feel creepy.
4) Podiums were invented by some sadist who hates introverts. Don't give him the satisfaction. Speak from a vantage point where the crowd can see your entire body.
5) Hands: pockets, no, pens, no, fidgeting, no. Gestures, yes. If you don't have enough gross motor control to talk and gesture at the same time (no joke, this was once a problem for me) then having them in a neutral position in front of your body works well.
6) Many people have different thoughts on the level of preparation or memorization which is required. In general, having strong control of the narrative structure of your speech without being wedded to the exact ordering of sentences is a good balance for most people. (The fact that you're coming to the conclusion shouldn't surprise you.)
7) If you remember nothing else on microtactical phrasing when you're up there, remember that most people do not naturally include enough transition words when speaking informally, which tends to make speeches lose narrative cohesion. Throw in a few more than you would ordinarily think to do. ("Another example of this...", "This is why...", "Furthermore...", etc etc.)
This is a long read, but it's worth it. The metric can be calculated in FRED[2], and as a predictor of future returns, it outperforms all of the most common stock market valuation metrics, including cyclically-adjusted price-earnings (CAPE) ratio[3]. (Basically, the average investor portfolio allocation to equities versus bonds and cash is inversely correlated with future returns over the long-term. This works better than pure valuation models because it accounts for supply and demand dynamics.)
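As a rough sketch of what the metric and the comparison look like (the ratio is the aggregate allocation to equities versus bonds and cash described above; the numbers below are invented toy data, not FRED values):

```python
import math

# Toy sketch of the metric: aggregate investor allocation to equities,
# and its (inverse) correlation with subsequent long-horizon returns.
# All numbers here are invented for illustration; real inputs would come
# from FRED flow-of-funds series.

def investor_equity_allocation(equities, bonds, cash):
    """Average investor portfolio allocation to equities."""
    return equities / (equities + bonds + cash)

def pearson(xs, ys):
    """Pearson correlation, used to check the inverse relationship
    between allocation and forward returns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy quarterly data: high aggregate allocation paired with low
# (invented) forward annualized returns, and vice versa.
allocation = [0.30, 0.35, 0.45, 0.50, 0.40, 0.28]
fwd_return = [0.09, 0.07, 0.04, 0.02, 0.05, 0.11]

print(round(pearson(allocation, fwd_return), 2))  # strongly negative on this toy data
```

The claim in the article is exactly this shape of relationship, just measured on decades of real data rather than a toy series.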
Lots of people make the mistake of thinking there are only two directions you can go to improve performance: high or wide.
High - throw hardware at the problem, on a single machine
Wide - Add more machines
There's a third direction you can go, I call it "going deep". Today's programs run on software stacks so high and so abstract that we're just now getting around to redeveloping (again for like the 3rd or 4th time) software that performs about as well as software we had around in the 1990s and early 2000s.
Going deep means stripping away this nonsense and getting down closer to the metal: using smart algorithms, planning and working through a problem, and seeing if you can size the solution to run on one machine as-is. Modern CPUs, memory, and disks (especially SSDs) are unbelievably fast compared to what we had at the turn of the millennium, yet we treat them like spare capacity to soak up even lazier abstractions. We keep thinking that completing the task means successfully scaling out a complex network of compute nodes, but completing the task actually means processing the data and getting meaningful results in a reasonable amount of time.
This isn't really hard to do (though it can be tedious), and it doesn't mean writing system-level C or ASM code. Just see what you can do on a single medium-specced consumer machine first, then scale up or out if you really need to. It turns out a great many problems really don't need scalable compute clusters. In fact, the time you'd spend setting one up and building the coordinating code (which introduces yet more layers that soak up performance) would probably be better spent solving the problem on a single machine.
Bonus: if your problem gets too big for a single machine (it happens), there might be trivial parallelism in the problem you can exploit, and now going wide means you'll probably outperform your original design anyway, with coordination code that's likely to be much simpler and less performance-degrading. Or you can go high and toss a bigger machine at it, getting more gains with zero planning or effort beyond copying your code and the data to the new machine and plugging it in.
Oh yeah, many of us, especially experienced people or those with lots of school time, are taught to overgeneralize our approaches. It turns out many big compute problems are just big one-off problems and don't need a generalized approach. Survey your data, plan around it, and then write your solution as a specialized approach just for the problem you have. It'll likely run much faster this way.
Some anecdotes:
- I wrote an NLP tool that, on a single spare desktop with no exotic hardware, was 30x faster than a distributed system of six high-end compute nodes doing a comparable task. That group eventually used my solution with a go-high approach and runs it on a big multi-core system with the fastest memory and SSDs they could procure; it's about 5 times faster than my original code. My code was in Perl; the distributed system it competed against was C++. The difference was the algorithm I was using, and not overgeneralizing the problem. Because my code could complete their task in 12 hours instead of 2 weeks, they could iterate every day. A 14:1 iteration advantage made a huge difference in their workflow, and within weeks they were further ahead than they had been after 2 years of sustained work. Later they ported my code to C++ and realized even further gains. They've never had to even think about distributed systems. As hardware gets faster, they simply copy the code and data over, realize the gains, and it performs faster than they can analyze the results.
Every vendor that's come in after that has been forced to demonstrate that their distributed solution is faster than the one they already have running in house. Nobody's been able to demonstrate a faster system to-date. It has saved them literally tens of millions of dollars in hardware, facility and staffing costs over the last half-decade.
- Another group had a large graph they needed to conduct a specific kind of analysis on. They had a massive distributed system that handled the graph, it was about 4 petabytes in size. The analysis they wanted to do was an O(N^2) analysis, each node needed to be compared potentially against each other node. So they naively set up some code to do the task and had all kinds of exotic data stores and specialized indexes they were using against the code. Huge amounts of data was flying around their network trying to run this task but it was slower than expected.
An analysis of the problem showed that if you segmented the data in some fairly simple ways, you could skip all the drama and do each slice of the task without much fuss on a single desktop. O(n^2) isn't terrible if your data is small. O(k+n^2) isn't much worse if you can find parallelism in your task and spread it out easily.
I had a 4-year-old Dell consumer-level desktop to use, so I wrote the code and ran the task. Using not much more than Perl and SQLite, I was able to compute a large-ish slice of a few GB in a couple hours. Some analysis of my code showed I could actually perform the analysis on insert into the DB, and that the slice was small enough to fit into memory, so I set SQLite to :memory: and finished in 30 minutes or so. That problem solved, the rest was pretty embarrassingly parallel, and in short order we had a dozen of these spare desktops running the same code on different data slices, finishing the task 2 orders of magnitude faster than their previous approach. Some more coordinating code and the system was fully automated. A single budget machine was now theoretically capable of doing the entire task in 2 months of sustained compute time; a dozen budget machines finished it all in a week and a half. Their original estimate for the old distributed approach was 6-8 months with a warehouse full of machines, most of which would have been computing things that amounted to nothing.
To my knowledge they still use a version of the original Perl code with SQLite running in memory, without complaint. They could speed things up with a better in-memory system and a quick code port, but why bother? It completes the task faster than they can feed it data, as the data set is only growing a few GB a day. Easily enough for a single machine to handle.
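A minimal sketch of that pattern, assuming Python's sqlite3 in place of the original Perl, with a made-up numeric comparison standing in for the real analysis:

```python
import sqlite3

# Load one slice of the data into an in-memory SQLite database and do the
# O(n^2) pairwise comparison with a self-join. The "analysis" (absolute
# difference under a threshold) is a placeholder for the real, domain-
# specific comparison; the values are synthetic.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO nodes (id, value) VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(100)])

# Compare every node in this slice against every other (a.id < b.id
# avoids comparing each pair twice).
pairs = conn.execute("""
    SELECT a.id, b.id, ABS(a.value - b.value) AS score
    FROM nodes a JOIN nodes b ON a.id < b.id
    WHERE ABS(a.value - b.value) < 3.0
""").fetchall()
print(len(pairs))  # with these synthetic values, only adjacent ids match
```

Each slice is independent, which is what made the full job embarrassingly parallel: run the same script on a different slice per machine.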
- Another group was struggling with handling a large semantic graph and performing a specific kind of query on the graph while walking it. It was ~100 million entities, but they needed interactive-speed query returns. They had built some kind of distributed Titan cluster (obviously a premature optimization).
Solution: convert the graph to an adjacency matrix, stuff it in a PostgreSQL table, build some indexes, and rework the problem as a clever dynamically generated SQL query (again, Perl), and now they were seeing 0.01-second returns, fast enough for interactivity. Bonus: the dataset at 100M rows was tiny, only about 5GB; with a maximum table size of 32TB and disk space cheap, they were set for the conceivable future. Now administration was easy, performance could be trivially improved with an SSD and some RAM, and they could trivially scale to a point where dealing with Titan was far in their future.
Plus, there's a chance for PostgreSQL to start supporting proper scalability soon putting that day even further off.
- Finally, an e-commerce company I worked with was building a dashboard reporting system that ran every night, taking all of their sales data and generating various kinds of reports: by SKU, by a certain number of trailing days, etc. It was taking 10 hours to run on a 4-machine cluster.
A dive in the code showed that they were storing the data in a deeply nested data structure for computation and building and destroying that structure as the computation progressed was taking all the time. Furthermore, some metrics on the reports showed that the most expensive to compute reports were simply not being used, or were being viewed only once a quarter or once a year around the fiscal year. And cheap to compute reports, where there were millions of reports being pre-computed, only had a small percentage actually being viewed.
The data structure was built on dictionaries pointing to other dictionaries and so on. A quick swap to arrays pointing to arrays (plus some dictionary<->index conversion functions so we didn't blow up the internal logic) transformed the entire thing. Instead of 10 hours, it ran in about 30 minutes, on a single machine. Where memory had been running out and crashing the system, it now never went above 20% utilization. It turns out allocating and deallocating RAM actually takes time, and switching to a smaller, simpler data structure makes things faster.
We changed some of the cheap to compute reports from being pre-computed to being compute-on-demand, which further removed stuff that needed to run at night. And then the infrequent reports were put on a quarterly and yearly schedule so they only ran right before they were needed instead of every night. This improved performance even further and as far as I know, 10 years later, even with huge increases in data volume, they never even had to touch the code or change the ancient hardware it was running on.
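A toy illustration of that swap, with an invented SKU-by-day schema: translate string keys to integer indexes once, then accumulate into preallocated nested lists instead of building and tearing down nested dicts.

```python
# Toy version of the dict-of-dicts -> arrays swap. The SKU/day schema and
# sales rows are invented for illustration.
skus = ["sku-a", "sku-b", "sku-c"]
days = ["mon", "tue", "wed"]

# One-time translation tables (the dictionary<->index conversion functions
# that keep the surrounding logic unchanged).
sku_idx = {s: i for i, s in enumerate(skus)}
day_idx = {d: i for i, d in enumerate(days)}

# Before: totals = {"sku-a": {"mon": 0.0, ...}, ...}, rebuilt every run.
# After: one flat, preallocated list-of-lists.
totals = [[0.0] * len(days) for _ in skus]

sales = [("sku-a", "mon", 10.0), ("sku-b", "mon", 5.0), ("sku-a", "tue", 7.5)]
for sku, day, amount in sales:
    totals[sku_idx[sku]][day_idx[day]] += amount

print(totals[sku_idx["sku-a"]][day_idx["mon"]])  # same lookup the dict version did
```

The lists are allocated once up front, so the per-run churn of creating and destroying millions of small dict objects disappears, which is where the 10-hours-to-30-minutes win came from.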
Seeing these problems in retrospect, it seems insane that racks in a data center, or entire data centers, were ever seriously considered necessary to solve them. A single machine's worth of today's hardware is almost embarrassingly powerful. Here's a machine that can break 11 TFLOPS for $1k [1]. That's insane.
It also turns out that most of our problems aren't compute speed; throwing more CPUs at a problem doesn't really improve things, but disk and memory are a problem. Why anybody would think shuttling data over a network to other nodes, where we then exacerbate every I/O problem, would improve things is beyond me. Getting data across a network and into a CPU that's sitting idle 99% of the time is not going to improve your performance.
Analyze your problem, walk through it, figure out where the bottlenecks are and fix those. It's likely you won't have to scale to many machines for most problems.
I'm almost tempted to coin a statement, Bane's rule: you don't understand a distributed computing problem until you can get it to fit on a single machine first.
(3) As you work for clients, keep a sharp eye for opportunities to build "specialty practices". If you get to work on a project involving Mongodb, spend some extra time and effort to get Mongodb under your belt. If you get a project for a law firm, spend some extra time thinking about how to develop applications that deal with contracts or boilerplates or PDF generation or document management.
(4) Raise your rates.
(5) Start refusing hourly-rate projects. Your new minimum billable increment is a day.
(6) Take end-to-end responsibility for the business objectives of whatever you build. This sounds fuzzy, like, "be able to talk in a board room", but it isn't! It's mechanically simple and you can do it immediately: Stop counting hours and days. Stop pushing back when your client changes scope. Your remedy for clients who abuse your flexibility with regards to scope is "stop working with that client". Some of your best clients will be abusive and you won't have that remedy. Oh well! Note: you are now a consultant.
(7) Hire one person at a reasonable salary. You are now responsible for their payroll and benefits. If you don't book enough work to pay both your take-home and their salary, you don't eat. In return: they don't get an automatic percentage of all the revenue of the company, nor does their salary automatically scale with your bill rate.
(8) You are now "senior" or "principal". Raise your rates.
(9) Generalize out from your specialties: Mongodb -> NoSQL -> highly scalable backends. Document management -> secure contract management.
(10) Raise your rates.
(11) You are now a top-tier consulting group compared to most of the market. Market yourself as such. Also: your rates are too low by probably about 40-60%.
Try to get it through your head: people who can simultaneously (a) crank out code (or arrange to have code cranked out) and (b) take responsibility for the business outcome of the problems that code is supposed to solve --- people who can speak both tech and biz --- are exceptionally rare. They shouldn't be; the language of business is mostly just elementary customer service, of the kind taught to entry level clerks at Nordstrom's. But they are, so if you can do that, raise your rates.