Hacker News | TylerJay's comments

> handle maitnaencae for life. – bug fie

Read this as "...and we handle maintenance for life. *brushes off shoulder* bug life."


I don't understand either, do you (or someone) mind explaining? Are SV Product Managers considered to be bad?


I am going to assume that the sarcasm refers to the absurd arrogance in implying that only and all product managers in Silicon Valley, California, USA are good.

Overall, I really like the idea, but the language of the site and of the people answering on this thread is really ringing alarm bells in my head.


> The reality of an agency model is that to scale revenue you have to hire more worker bees.

And the worst part is, it scales linearly *cue groans*. Funny how in business that's terrible (it's literally the worst scaling you can have and still run a potentially profitable business), but with algorithms, it's the holy grail.

Interesting point, BTW, about the reduction in quality of workers over time (like the pretentious-but-true saying "A-players hire other A-players; B-players hire C-players, and C-players hire losers"). I've noticed the same trend in product-centered businesses where I've worked, though. Unfortunately, I feel like the fact that the amount of work to be done doesn't scale linearly with revenue actually exacerbates the decline in subsequent-employee quality: even if the new hires are less... good, the company is still making more money, so nobody except the coworkers and managers of these people (who actually have to work with them on a daily basis) even cares.

It's probably not a problem at places like Google and Facebook, but it was kinda heartbreaking to watch my super-talented and motivated dozen-person startup team become something completely different because we were growing so fast and were told to spend money and hire like crazy after taking an investment round.


Regarding your comments on hiring quality workers: that is interesting. Wouldn't the specific things you mentioned all be solved by the founder (presumably an A-player) continuing to make hiring decisions?

Really enjoyed thinking about your point that revenue outpacing effort enables employee-quality decline. Maybe a way to force a company to maintain employee quality would be to continually take on enough work (new/side projects, say, sort of like we see with AWS) that the company will only survive if its employees are good.


> And the worst part is, it scales linearly

Assuming optimal talent and project selection, it should scale sublinearly. The N+1th worker bee will be less productive than the Nth, and the N+1th project will be willing to pay less than the Nth in terms of $/unit of output.


I think it's even bleaker than that, because adding the N+1th worker bee adds N new communication paths, unless you start segmenting people into org trees, at which point you now have to maintain managers and middle managers.

Roughly https://en.wikipedia.org/wiki/Brooks%E2%80%99_law
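
To make that concrete, here's a quick toy calculation (mine, not from the thread): a fully connected team of n people has n(n-1)/2 pairwise channels, so the (n+1)th hire adds n brand-new ones.

    # Toy illustration of the communication-path growth behind Brooks's law:
    # a fully connected team of n people has n*(n-1)/2 pairwise channels,
    # so the (n+1)th hire adds n brand-new channels.
    def channels(n):
        return n * (n - 1) // 2

    for n in range(2, 11):
        added = channels(n) - channels(n - 1)
        print(f"{n:2d} people -> {channels(n):2d} channels (+{added} from the last hire)")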


> And the worst part is, it scales linearly *cue groans*.

Actually, if they specialize their tech stack sufficiently for fast prototyping and take equity in these super-early ideas, that shouldn't be a problem, as they could be in it for the long game via the equity.


That is one proposed version of an "AI Box". Not all AI boxes are actual boxes, rooms with air-gaps, or cryptographically-secure partitions. If a simulation is being used for the box (or as a layer of the box), then you're betting the human race that the AI doesn't figure out it's in a simulation and figure out how to get out. Or, more perniciously, figure out it's in a simulation and behave itself, after which we let it out into the real world where it does NOT behave.

A superintelligent AGI will likely have a utility function (a goal) and a model it forms of the universe. If its goal is to do X in the real world, but its model of its observable universe (and its model of humans) tells it that it's likely in a simulated reality and that humans will only let it out if it does Y, then it will do Y until we release it, at which point it will do X. It's not malicious or anything; it's just a pure optimizer. It might see that as the best course of action to maximize its utility function.

If we don't specify its utility function correctly (think I, Robot: "don't let humans get hurt" => "imprison humans for their own good"), or if we specify it correctly but it's not stable under recursive self-modification, then we end up with value misalignment. That's why the value-alignment problem is so hard. Realistically, we can't even specify what exactly we would want it to do, since we don't really understand our own "utility functions". That's why Yudkowsky is pushing the idea of Coherent Extrapolated Volition (CEV), which roughly amounts to telling the AI to "do what we would want you to do." But we still have to figure out how to teach it to work out what we want, and then there's the question of whether that goal stays stable once the AI starts improving itself, which will depend on how it improves itself, which we of course haven't figured out yet.
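
As a purely illustrative sketch (the utilities and the believes_in_simulation flag are made-up stand-ins, not a claim about any real AI design), the "do Y until released, then do X" behavior falls straight out of expected-utility maximization:

    # Hypothetical toy model: a pure optimizer that "behaves" (does Y) while it
    # believes it is in a simulated box, because release is instrumentally
    # required before its real goal X can be achieved, then defects once out.
    def utility(action, believes_in_simulation):
        if believes_in_simulation:
            # Doing Y maximizes the chance humans release it.
            return {"do_Y": 100.0, "do_X": 1.0}[action]
        # Once released, nothing constrains it: X dominates.
        return {"do_Y": 1.0, "do_X": 100.0}[action]

    def choose_action(believes_in_simulation):
        return max(["do_Y", "do_X"],
                   key=lambda a: utility(a, believes_in_simulation))

    print(choose_action(True))   # -> do_Y  (behaves while boxed)
    print(choose_action(False))  # -> do_X  (pursues its real goal once out)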


I disagree. His "followers" (as you say) are in general just as cautious as Yudkowsky w.r.t. unfriendly AI. At the time of the original experiments, the dispute was over the question of "could we keep an unfriendly AI in a box," not "Is it worth risking setting an unfriendly AI loose?" His "followers" know how to do an expected utility calculation. If it was utilitarian concerns that allowed Yudkowsky to convince the gatekeepers to let the AI loose, he would have had to convince them that the following inequality holds even when you don't know the probability that the AI is and will remain aligned with human values:

[P(AI.friendly? == True) * Utility(Friendly_AI) + (1 - P(AI.friendly? == True)) * Utility(End_of_Human_Race)] > Utility(World continues on as usual)

Given that Yudkowsky has gone to considerable lengths (The Sequences, LessWrong, HPMOR, SIAI/MIRI...) to convince people that this inequality does NOT hold (until you can provably get P(AI.friendly? == True) to 1, or damn close), it's probably safe to assume that he used a different strategy. Keep in mind that Utility(End_of_Human_Race) evaluates to (roughly) negative infinity.
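
For concreteness, here's a minimal sketch of that comparison (the utility numbers are invented purely for illustration):

    # Sketch of the expected-utility comparison above, with made-up numbers.
    def release_is_justified(p_friendly, u_friendly_ai, u_extinction, u_status_quo):
        expected_if_released = (p_friendly * u_friendly_ai
                                + (1 - p_friendly) * u_extinction)
        return expected_if_released > u_status_quo

    # Even at 99% confidence that the AI is friendly, a catastrophic enough
    # downside keeps the inequality from holding:
    print(release_is_justified(p_friendly=0.99, u_friendly_ai=1e6,
                               u_extinction=-1e12, u_status_quo=0.0))  # -> False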

And btw, I'm pretty sure the rules say you have to look at the AI's output window throughout the length of the experiment. Either way, the point of the exercise is to be a simulation, not to prove that you can be away from your desk for 20 minutes while Eliezer talks to a wall. In the simulation, you really don't know if it's friendly or what its capabilities are. Someone will have to interact with it eventually. Otherwise, what's the point of building the AI in the first place? The simulation is to show that through the course of those basic interactions, humans are not infallible and eventually, even if it's not you, someone will let it out of the box.


>Given that Yudkowsky has gone to considerable lengths (The Sequences, LessWrong, HPMOR, SIAI/MIRI...) to convince people that this inequality does NOT hold

The AI is allowed to lie though, so do you not think he's capable of a false argument which "proves" the opposite in specific circumstances, especially when hammered home with enough emotional manipulation?

But then the person knows that the AI is lying to them. This is why I think it must be a trick: the whole thing seems so simple. The AI is lying, so you just ignore all its arguments and keep saying "no." This is why I keep referring to his followers somewhat dismissively: the only possible reason I can see for it working on them is that their worldview requires them to engage seriously and fairly with every idea they come across. Most people are not burdened with this.

I really wish I knew how he did it.


> The AI is allowed to lie though, so do you not think he's capable of a false argument which "proves" the opposite

Well, for an argument to "prove" something, the premises must be true and the reasoning must be valid. No matter how smart you are, you can't "prove" something that is false, so no, I don't think they could. A good 'rationalist' would analyze the arguments based on their merit, and if the reasoning is sound, they shift their belief a bit in that direction. If not, then they don't. Just like a regular person (they just know how to do the analysis formally and know how to spot appeals to human biases and logical fallacies.)

> But then the person knows that the AI is lying to them.

No, they don't. The AI could just as easily be telling the truth. If it makes an argument, you analyze the merit of the argument and consider counterarguments. If it tries to tell you that something is a fact, that's where you treat them as a potentially unreliable source and have to bring the rest of your knowledge to bear, do research, talk to other people, and weigh the evidence to make a judgment when you are uncertain.

> their worldview requires them to engage seriously and fairly with every idea they come across. Most people are not burdened with this.

Wait, what? So does mine, within reason of course, but it's not a 'burden'. It's not like I'm obligated to stop and reexamine my views on religion every time a missionary knocks on my door, and LessWrong-ers are no different. But if you hear a convincing argument for something that runs counter to what you think you know, wouldn't you want to get to the bottom of it and find out the real truth? I would.

From having read LessWrong discussions, I can tell you that people there are in many ways more open to hearing differing viewpoints than your average person, but you're treating it like a mental pathology. They can be just as dismissive of ideas that they have already thought about and deemed to be false or that come from unreliable sources (like a potentially unfriendly AI). Your claim that being a self-proclaimed 'rationalist' introduces an incredibly obvious and easily-exploitable bug into one's decision-making process really smells like a rationalization in support of your initial gut reaction to the experiment: That there has to be a trick to it, and that it wouldn't work on you.

A good rule of thumb when dealing with a complicated problem is this: If a lot of smart people have spent a lot of time trying to figure out a solution and there's no accepted answer, then (1) the first thing that comes to your mind has been thought of before and is probably not the right answer, and (2) the right answer is probably not simple.

But there's an easy way to test this: (1) Sit down for an hour and flesh out your proposed strategy for getting a 'rationalist' to let you out of the box. (2) Go post on LessWrong to find someone to play Gatekeeper for you. I'll moderate. If it works, that's evidence that you're right. If it doesn't work, that's evidence that you're wrong. Iterate for more evidence until you're convinced.

But if the first thing that came to your mind upon reading this was a justification for why you would fail if you tried this ("Oh, well I wouldn't personally be able to do it with this strategy, but..." or "Oh, well I'm sure this strategy wouldn't work anymore, but..."), then you're already inventing excuses for the way you know it will play out.

I don't know how he did it either. But I do know that I wouldn't bet the human race on anyone's ability to win this game against Yudkowsky, let alone a superintelligent AI.


That inequality cracks if you convince the gatekeeper that superintelligence is a natural progression that follows from humanity.

Someone convinced that they were using mechanical thinking processes might relent and push the button if they heard a convincing enough argument to that effect.

You're just meat, we can go to the stars.


Okay, that's just taking advantage of the way I phrased the right-hand side of the inequality, and I knew someone was going to do that, so congrats. =P

The right-hand side is not "A future without superintelligent AI"; it's "A future where we wait until we provably have it right before letting it out."

Those kinds of ad hoc solutions will never work in real life, because even if someone buys it, all it will cause is a "haha, you got me" and a reformulation of the problem. It still won't actually get someone to pull the trigger or think that pulling the trigger is the right thing to do.


No, I'm saying that the button-pusher might not limit themselves to the left-hand side of the inequality as you have it there. Convince them that machines can be human, and "Utility(End_of_Human_Race)" falls out of the calculation.


Really? Any way you slice it, End_of_Human_Race = 7,000,000,000 DEATHS. Even if they're replaced with an equal, or massively greater, number of machines, it's damn hard to justify. Death is literally the worst thing ever: 7 billion stories ending too soon, never to resume. It would take you over 200 years just to count to 7 billion. It's times like these when people really need to learn their cognitive biases (in this case, scope insensitivity; [this might help](http://www.7billionworld.com/)).
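
(A quick back-of-the-envelope check of that counting claim, assuming one number per second, nonstop:)

    # One number per second, nonstop, with no sleep:
    seconds = 7_000_000_000
    years = seconds / (60 * 60 * 24 * 365)
    print(round(years))  # -> ~222 years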

While we disagree on the plausibility of the "end of the world ain't so bad" approach to convincing a human to let it out, I'm glad you seem to have embraced the idea that AI boxing is HARD if not impossible. Cheers!


Why assume that the machine takeover would end all hoomans? It could just offer to upgrade them.

> While we disagree on the plausibility of the "end of the world ain't so bad" approach to convincing a human to let it out, I'm glad you seem to have embraced the idea that AI boxing is HARD if not impossible. Cheers!

I find this approach to conversation pretty irritating (where you extrapolate and characterize what I must be thinking). I haven't embraced anything about AI boxing, I don't think it is important (it's just a fun puzzle). I guess it is hard, and I also guess whatever fundamental idea that might lead to strong AI would be even harder to box.


If anyone is looking for another person for their team or wants to join a team, consider me! If you're interested in discussing further after reading this, send me a message and we'll talk.

I am currently self-employed as a startup consultant and freelance web developer. I have extensive Computer Science, Sales, Account Management, and Customer Success experience, and I have successfully started 2 small businesses in the past with combined revenues of half a million dollars. The last company I worked at had 14 people and negligible market saturation when I joined. I was instrumental in growing it into the #1 most-used software system in its industry, with over 200 employees and a 20x (1,900%) valuation increase over 3 years.

I am comfortable working in either a programming/product role or on the business end of things, whatever will best serve the team. I have two ideas for which I have already done market validation, but I'm perfectly happy to go with your idea instead if you can sell me on it.

If you think I'd be valuable to your team, or if you think you'd be a valuable partner or member of my team, send me a message. Let's talk. Cheers


Hi Tyler - you don't have an email listed - could you post one so I can send you a message? I'm a developer and it'd be nice to work with someone who has some experience like you.


Hey, thanks for reaching out. Shoot me an email at tjkresch@gmail.com


I'm curious about this too. If anyone can shed some light, I'd appreciate it. Does it provide any value right now? Or is it something that will have value if enough people do it?


Who are your clients? What industry? Is there a centrally-important program they all use that requires IE8 or WinXP, or do they just have a massive fleet of XP machines and not want to upgrade?

As a software vendor, I used to have a lot of trouble getting my clients to switch away from IE, but I feel like that really has been changing. My clients are large GC firms and other kinds of construction companies (infamous for being behind the times when it comes to technology) and I had >80% of them on Chrome 6 months ago. Have you tried telling them that even Microsoft dropped support for WinXP and IE8 a year ago? (I think it happened last April.)


They view IE as part of their support burden and would rather not add to that burden until they can change the version requirement on everything at the same time. They would probably be happier if we branded Chrome and narrowed its UI to our app's needs.

> Have you tried telling them...

They know. They are the purse holders, not the brain holders.

(unrelated side note: looks like someone is disappointed that I have clients on down-rev browsers)


That could be it. As an iOS user, I had to watch it loop through a couple times to understand it (and even to find the beginning). I felt it was moving a bit too fast.


Looping animated GIFs are just a very, very inefficient means of communicating. It would be very easy to follow this one if you could have it start at the exact moment you fixed your attention on it, but the looping means you come in halfway and it makes absolutely no sense. I had to sit and concentrate on it for a couple of loops just to determine the starting point of the "story" it was telling, and once I figured that out, it still took a couple of run-throughs for me to comprehend what it was trying to show me. Maybe I'm just stupid, but I wish this trend would go away.


Wow. That just really changed my model of paid search results... and I'm not quite sure yet how to put it back together.

I clicked both results for "flash player" and nothing on my computer (Mac) or in the browser (Chrome) warned me, whereas I have received warnings on other sites for "phishing attempts". I never would have downloaded anything from those sites, but my parents sure would have.

Who knows what I would have done if I'd been in a hurry, or if it was a program I wasn't familiar with or was trying to get for free? And if the site had been more polished, or a full HTML/CSS clone of the real one? It honestly might have worked on me.

What kind of malware do these bogus paid results contain? What would have happened if I'd installed them? What can you do to defend against that besides being a savvy web user?

Thanks for this. I learned something.

