> ...a PhD-candidate is usually a first year PhD-student...
This is actually not true. PhD students have to advance to candidacy, which requires completing a set amount of coursework and passing a qualifying exam; this usually takes 1-2 years. Therefore, most first-year PhD students are not yet PhD candidates.
> PhD students have to advance to candidacy, which requires completing a set amount of coursework and passing a qualifying exam
This varies from university to university. Some don't have qualifying exams, and you are working toward your PhD from day one, though there is usually still some coursework involved in the overall requirements for graduating.
Everywhere I've been, "candidate" is a technical term and first-years are definitely NOT. But even if there are universities where you become a candidate upon admission, presumably you remain a candidate all the way to the end, so "...a PhD-candidate is usually a first year PhD-student..."
is STILL quite wrong.
Fair enough. None of the programs I was accepted to granted candidacy from the start, though; you first had to complete your courses and pass one or more exams.
I went to CMU LTI and they don't have a candidacy exam. As per this link: http://www.cs.uccs.edu/~gsc/phdProgramComparison.htm UTA and UMD don't have one. I checked Univ of Edinburgh and INRIA across the pond and they also don't seem to have one. All this is for the computer science PhD programs only. Things may be different for pure sciences.
A few British universities did have candidacy. There was a financial penalty from the research council if a certain percentage of your students didn't finish in 3 or 4 years - so the university would make everyone a candidate and only count them as a PhD student for the stats once it looked like they would do OK. Eventually this evolved to being a candidate right up to when you submitted so there was a 100% completion rate but the research councils wised up to the scam.
I would say the biggest takeaway from this article is that being an entrepreneur is a state of mind: a willingness to create from nothing in the face of extreme adversity.
I agree with this. I came across the following definition of an entrepreneur, which I also really liked: "[the] definition of an entrepreneur is someone who makes things happen despite not controlling the resources necessary to achieve them."
90% of BlackBerry users will never change the battery on their device. 100% of BlackBerries have a stupid door on the back which allows you to remove the battery and eventually becomes very loose and slides off.
True, but it was sometimes the only way you could do a hard reboot on those things!
I thought there was an OS feature that did that. I remember scheduling a reboot nightly at 3am. I loved that phone at the time, but hate it in hindsight.
So pulling the battery becomes a crutch for the RIM software team. No need to do a more elegant watchdog timer or properly trap Java errors. Just pull the battery!
To be fair, that strategy worked pretty well for Unix:
>We went to lunch afterward, and I remarked to Dennis that easily half the code I was writing in Multics was error recovery code. He said, "We left all that stuff out. If there's an error, we have this routine called panic, and when it is called, the machine crashes, and you holler down the hall, 'Hey, reboot it.'"[0]
Marc Andreessen, paraphrased: "Salespeople can be very good at optimizing a company over a 2-4 year period. The [Andreessen Horowitz] fake hedge fund trade is: when a sales guy replaces a product guy as CEO, go long 2 years, then short."
Just for fun I went through a small sample of US companies that changed from technologist-led to MBA-led; this trade is garbage. Sometimes the facts are inconvenient for clever quips.
Same here, though I am more interested in how one could quickly collect a list of companies that have undergone that specific leadership change. Proprietary or public database? Google? Old-fashioned human memory?
Bloomberg: 8-Ks for the date of the change in executive management, plus bios to flag the presence of "MBA" after an exec replacing one without (who had at least a B.S.; finance, accounting, marketing, etc. degrees excluded). Then Jensen's alpha over IWM and the sector SPDR over [0,2) versus [2,5) years, piped into R. Universe: Russell 2000 members, 2006-2012. Bias may be introduced from these all being large, public companies - I don't know if VC-backed companies led by engineers vs. ops guys have a different story. I didn't dig very deep to try to see if I could remove noise, but TL;DR: too much noise and drawdown at the outset.
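For anyone curious about the alpha part, here's a minimal sketch, assuming you already have daily return series for the stock and the benchmark (the variable names and window lengths are my own placeholders, not from the run described above):

    # Jensen's alpha: intercept of an OLS regression of excess stock
    # returns on excess benchmark returns (e.g. IWM or a sector SPDR).
    import numpy as np

    def jensens_alpha(stock_ret, bench_ret, rf=0.0):
        x = np.asarray(bench_ret) - rf     # excess benchmark return
        y = np.asarray(stock_ret) - rf     # excess stock return
        beta, alpha = np.polyfit(x, y, 1)  # slope = beta, intercept = alpha
        return alpha

    # Compare the [0,2) window with the [2,5) window after the CEO change
    # (~252 trading days/year, so 504 and 1260 days; data is hypothetical):
    # alpha_early = jensens_alpha(ret[0:504],    iwm[0:504])
    # alpha_late  = jensens_alpha(ret[504:1260], iwm[504:1260])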
Disclaimer: I'm not an MBA and don't intend on getting one. That being said, I've noticed Silicon Valley discounts solid operations knowledge as much as Wall Street tends to discount the difference between a hacker and an okay coder. Andreessen is a smart man and was probably making a rallying cry/culture call rather than an analytic one with this line.
From Apple's website: "Before being named CEO in August 2011, Tim was Apple's Chief Operating Officer and was responsible for all of the company’s worldwide sales and operations..."
How does being responsible for all of the company's worldwide sales make Tim Cook not a sales guy?
"Chief Operating Officer" goes far beyond sales. Creation of the Apple supply-management strategies is one example of a domain that was far beyond sales.
Scientists and engineers expand the size of the economic pie; lawyers mostly work to divide it up differently. Whenever possible, work to be a person who creates things, instead of a person who tries to take stuff created by someone else. There is an infinite amount of work in science because the universe is big and we don’t really understand it and we probably never will. New answers to questions in science yield more questions.
Greenspun's discussion of what grad school in the sciences is like is worth attending to, especially this, his main point: “Adjusted for IQ, quantitative skills, and working hours, jobs in science are the lowest paid in the United States.”
In other words, science is good for society but bad for the individual, from a purely economic standpoint.
My friends and I built a website for scientists to discover & share journal articles that are worth reading, to discuss scientific ideas that are worth spreading, and to connect with people who share similar interests: http://www.pubup.org/ . Please let me know what you think of the idea and the website. Thanks!
The idea is great, but just so you know... my experience in academia (anecdotal) suggests that Mendeley already has a pretty substantial lead: http://www.mendeley.com/
My friend and I use Facebook, Twitter, and (mostly) in-person meetings to share interesting papers to read. Discussing papers offline is the reason we meet. Haha..
I'm completely open to an alternative service (or even a complementary one). So... I guess what I'm trying to say: Give me, one of your intended customers, a compelling use scenario or value-add proposition that might compel me to switch.
> There is an infinite amount of work in science because the universe is big and we don’t really understand it and we probably never will.
There may be an infinite amount of work in science, but there is a finite (and very unevenly distributed) number of grants. Your conclusion rings true to me.
I think it means that if you created a model that used IQ, quantitative skills, and working hours as inputs and tried to predict pay across all jobs in the US, it would predict that science jobs should pay higher than they actually do. Which leaves open the question of whether those variables are actually the most predictive of pay levels in non-science jobs; I suspect not.
It means normalizing a data set for the IQ of the subjects the data points were taken from. In other words, adjusting one property of something by an amount relative to some other property of that same thing. This is done in cases where it is suspected that the one property influences the other. In this case, the assumption (quite reasonable, imo) is that (in the aggregate) people who are smarter make more money.
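If it helps, here's a toy sketch of that adjustment with invented numbers (nothing here is real data; it just shows the mechanics of regressing pay on IQ and then looking at the residual for one group):

    import numpy as np

    iq     = np.array([100, 110, 120, 130, 125, 135])  # hypothetical workers
    pay    = np.array([ 50,  65,  80,  95,  60,  70])  # $k/yr, invented
    is_sci = np.array([  0,   0,   0,   0,   1,   1], dtype=bool)

    # Fit pay ~ IQ across everyone, then look at what IQ doesn't explain.
    slope, intercept = np.polyfit(iq, pay, 1)
    residual = pay - (slope * iq + intercept)

    print(residual[is_sci].mean())   # negative: science pays below its IQ-adjusted expectation
    print(residual[~is_sci].mean())  # positive: everyone else sits above the fitted line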
Haven't we reached the point yet where the whole idea of IQ testing is pretty discredited? And adjusting for IQ, meaning that someone is smart therefore they are expected to earn more money? It sounds like "taking into account these variables, and introducing this ridiculous one to allow me to adjust the outcomes as needed."
Well, if you don't agree that somebody with an IQ of 120 can reasonably be expected to be better at making money than somebody with an IQ of 80 (and that, in the aggregate, people with that IQ actually do make more), then you have such a radically different view from the mainstream that any discussion is pointless.
FYI, adjusting for certain variables is a fundamental aspect of statistics. All studies in social sciences and economics of real-world data do so, because no real-world effect can be isolated to the point that it can be measured independently.
"Haven't we reached the point yet where the whole idea of IQ testing is pretty discredited?"
What? No, of course not. Are you saying that there are no people who are smarter than others?
IQ is not synonymous with intelligence; it is married to 20th-century philosophy of psychology and the history of testing methods.
For IQ to simply mean the intelligence scale normalized so that average intelligence = 100 would take a big marketing effort amongst the education and psychology communities.
Even if it were divorced from the twists and turns of its historical development, it seems pretty clear that at its best IQ can aggregate the values of creativity, lateral thinking, calculation, memory retrieval, memory storage, memory organization, and (even, despite testers' best efforts) domain knowledge, and replace them with one number.
I think that some of the "everybody learns differently!" stuff has jumped the fence and become an old wives' tale, but there has to be a happy medium between assigning someone a 40 column printout to summarize their intelligence and slapping one number on it.
[This is not to mention all of the shift in emphasis away from intelligence towards results and output based partially on Outliers, and partially on the idea that if you praise kids for an inherent trait that they have no control over that they will stop playing to win and start playing not-to-lose.]
I'm not sure if you're agreeing or disagreeing with me? Yes of course there are various definitions and measurement methods of 'IQ' and 'intelligence' and one can define all of them in various ways. Exact definitions aren't interesting for the current purposes. What I said was, some people are smarter than others, even when considering orthogonal traits. If we hypothetically consider 'intelligence' as a combination of trait A, B and C, and we choose A, B and C carefully enough so that we can score or normalize each of them to a scale of 0-10, then Alice with a score of 8 on each of them is more intelligent than Bob who scores 4 on each of them. Now in the margin you can argue who is more intelligent when the scores are 5-8-4 and 5-4-8 but that doesn't take away from the point.
It's not like we're talking about one specific methodology for measuring IQ. The whole argument is in the context of the OP arguing that correcting for intelligence is necessary for making a meaningful comparison between wages earned (basically, it's discounting for opportunity cost). Which is totally reasonable and obvious.
I don't think I made the argument that being functionally disabled is meaningless, but straw men live in all discussions at some point.
My argument is that above-average and even extraordinary intelligence have no correlation to money making. Smart people live in poverty all the time. To say that expectations of earned income should be adjusted for IQ is meaningless in that context because the adjustment would be zero. Making money doesn't derive from general intelligence, but from how it is applied and luck. There are many smart technical people on this message board who are clearly lucky that the world is in the middle of a massive expansion of technically complicated economic areas like apps and programming in general. It allows them to achieve wealth that otherwise would not be a given.
When I refer to IQ testing I refer to the concept of a universalized IQ test that can, without cultural bias, give an objective measure of intelligence.
"My argument is that above average and even extraordinary intelligence have no correlation to money making."
Well, you're objectively wrong. Go to your national data office and look at income vs. education (using education as a proxy for intelligence is not perfect, but it works well enough for this purpose). You will see strong correlations between the two (not perfectly linear, and not perfectly correlated, but strong enough not to be random). Look at any data set from reasonably stable and free countries that have these two data points, and you will find the same.
> Haven't we reached the point yet where the whole idea of IQ testing is pretty discredited?
No, the opposite! Throw together any list of questions requiring intellectual ability, on any subjects you choose (making sure to have a wide range of difficulty). Use the list to test a few thousand people selected at random from the same society. Perform principal components analysis (a type of statistical cross correlation) on the answers. The answers will turn out to correlate with a single characteristic of the individual test takers. This common factor is labeled g, general intelligence. There is virtually no sign of multiple intelligences or other factors, just one honking big signal for the g factor.
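Here's roughly what that procedure looks like in code, with simulated answers standing in for real test data. Note the single-factor structure is baked into the simulation, so this only demonstrates the mechanics of the analysis, not the empirical result:

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_questions = 2000, 40
    g = rng.normal(size=n_people)                       # latent general ability
    loadings = rng.uniform(0.5, 1.0, size=n_questions)  # each question taps g
    scores = np.outer(g, loadings) + rng.normal(scale=0.5, size=(n_people, n_questions))

    # PCA via SVD of the mean-centered score matrix
    centered = scores - scores.mean(axis=0)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    print(explained[:3])  # first component dominates: one big common factor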
> And adjusting for IQ, meaning that someone is smart therefore they are expected to earn more money?
There have been large studies of people from the general population, with the scientists measuring every data point they can lay their hands on. It turns out the only factor that significantly affects adult income is IQ. Earned income is almost totally uncorrelated with race, skin color, culture, family wealth, family social rank, location of residence, school system, characteristics of siblings, and so forth. IQ also strongly predicts criminality.
Studies of twins separated at birth show huge IQ correlations between identical twins, but IQ correlations between fraternal twins are no greater than for sibling pairs from different pregnancies. So IQ is mostly inherited, and mostly fixed by the time of conception. In other words, genetic.
Also, pretty much all the low-hanging fruit in science has been picked. Even if we allow science to be infinite, each new brick in the wall of science requires an ever-increasing amount of resources to gain, and usually those bricks are also much smaller.
My old company made what was essentially datalog viewing software. Version 1 of the software was basically just squiggly lines going across the screen. People loved it, because previously they only had squiggly lines going across paper, and a 20-minute recording was the size of a phone book. But version 2 of the software required far more developer work, and had much more polish and detailed algorithms underlying the analyses... but people bridled at 'having to pay again' when 'we don't really get that much for it'. More work, smaller bricks, less appreciation of what it takes to get there...
I don't have the requisite knowledge to comment on fields other than the life sciences, and I certainly agree with your point that every new scientific and technological advance necessarily has to be more sophisticated than the one(s) that it's built upon. But the notion that the past was chock-full of low-hanging fruit just waiting to be picked seems exaggerated and doesn't give enough credit to the people who did all that work that, in hindsight, looks easy, trivial, and obvious, but at the time certainly required significant insights, advances, and effort.
Good question. I'm an author on both of the new CD47 studies, so I'll try to answer this for you:
1. CD47 appears to be expressed at higher levels on tumor cells than on normal cells. In general, the higher the expression of a marker on a diseased cell versus a healthy cell, the larger the therapeutic window/index [1] available for a treatment to specifically target diseased cells.
2. Inhibiting the anti-phagocytosis (i.e., "don't eat me") signal is only half of the equation. There are also pro-phagocytosis (i.e., "eat me") signal(s) that also are present [2] and that play a role in whether, and how well, macrophages, and other phagocytes, can phagocytose a cell. "Eat me" signal(s) also differ between normal and cancer cells, adding more nuance to the situation.
It does sound like a very promising avenue, but I imagine this route could still be a problem for people already predisposed to autoimmune diseases. Still, most of those aren't as bad as the alternative here. A drug based on this doesn't need to be free of side effects; it just has to be better than normal chemotherapy.
You raise another good point here, that the allowable side effect profiles for cancer drugs are much less stringent than for drugs indicated for less life-threatening diseases.